https://arxiv.org/abs/2103.08058
On Planar Visibility Counting Problem
For a set $S$ of $n$ disjoint line segments in $\mathbb{R}^{2}$, the visibility counting problem is to preprocess $S$ so that the number of segments in $S$ visible from any query point $p$ can be computed quickly. There have been approximation algorithms for this problem with a trade-off between space and query time. We propose a new randomized algorithm that computes the exact answer of the problem. For any $0<\alpha<1$, the space, preprocessing time and query time are $O_{\epsilon}(n^{4-4\alpha})$, $O_{\epsilon}(n^{4-2\alpha})$ and $O_{\epsilon}(n^{2\alpha})$, respectively, where $O_{\epsilon}(f(n)) = O(f(n)n^{\epsilon})$ and $\epsilon>0$ is an arbitrary constant.
\section{Introduction} Let $S=\{s_1, s_2,\ldots, s_n\}$ be a set of $n$ disjoint closed line segments in the plane contained in a bounding box. Two points $p,q$ in the bounding box are visible to each other with respect to $S$ if the line segment $\overline{pq}$ does not intersect any segment of $S$. A segment $s_i\in S$ is said to be visible with respect to $S$ from a point $p$ if there exists a point $q\in s_i$ such that $q$ is visible to $p$. \textit{The Visibility Counting Problem (VCP)} is to find $m_p$, the number of segments of $S$ visible from a query point $p$. \textit{The visibility region of a point $p\in \mathbb{R}^{2}$} is defined as \textit{VP}$_{S}(p) = \{ q\in \mathbb{R}^{2}: p$ and $q$ are visible with respect to $S\}$, and \textit{the visibility region of a segment $s_i$} is defined as \textit{VP}$_{S}(s_i) = \{p\in \mathbb{R}^{2}: s_i$ and $p$ are visible with respect to $S\}$. Note that the size of \textit{VP}$_{S}(p)$ is $O(n)$, but there are examples in which the size of \textit{VP}$_{S}(s_i)$ is $\Theta(n^4)$. Consider the $2n$ endpoints of the segments of $S$ as vertices of a geometric graph. Add a straight-line edge between each pair of visible vertices. The result is \textit{the visibility graph of $S$}, or \textit{VG(S)}. We can extend each edge of \textit{VG(S)} in both directions until it hits a segment of $S$ (or the bounding box). This generates at most two new vertices and two new edges per edge. Adding all these vertices and edges to \textit{VG(S)} results in a new geometric graph, called \textit{the extended visibility graph of $S$}, or \textit{EVG(S)}. \textit{EVG(S)} can be used to compute the visibility region of any segment $s_i\in S$~\cite{gud}. \subsection{Related Works} There is an $O(n\log n)$ time algorithm that computes \textit{VP}$_S(p)$ using $O(n)$ space~\cite{asa,sur}. 
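As a point of reference for these definitions, the point-to-point visibility predicate can be tested by brute force in $O(n)$ time by checking $\overline{pq}$ against every obstacle segment. The following Python sketch is illustrative only (the names are not from the paper, and the obstacles are assumed disjoint and in general position):

```python
# Point-to-point visibility among disjoint obstacle segments,
# by brute force: O(n) orientation tests per query pair.

def orient(a, b, c):
    """Sign of the cross product (b-a) x (c-a): >0 left turn, <0 right, 0 collinear."""
    v = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (v > 0) - (v < 0)

def segments_cross(p, q, a, b):
    """True if segment pq properly crosses segment ab."""
    d1, d2 = orient(a, b, p), orient(a, b, q)
    d3, d4 = orient(p, q, a), orient(p, q, b)
    return d1 * d2 < 0 and d3 * d4 < 0

def visible(p, q, obstacles):
    """p and q see each other iff pq crosses no obstacle segment."""
    return not any(segments_cross(p, q, a, b) for a, b in obstacles)
```

A segment $s_i$ is then visible from $p$ exactly when some point $q\in s_i$ passes this test; the data structures surveyed below exist to avoid paying this linear cost at query time.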
Vegter proposes an output-sensitive algorithm that reports \textit{VP}$_S(p)$ in $O\left(|\textit{VP}_S(p)|\log\left(\frac{n}{|\textit{VP}_S(p)|}\right)\right)$ time, after preprocessing the segments in $O(m\log n)$ time using $O(m)$ space, where $m=O(n^{2})$ is the number of edges of \textit{VG(S)} and $|\textit{VP}_S(p)|$ is the number of vertices of $\textit{VP}_S(p)$~\cite{vet}. We can also solve VCP using \textit{EVG(S)}. Consider the planar arrangement of the edges of \textit{EVG(S)} as a planar graph. All points in a face of this arrangement see the same number of segments, and this number can be computed for each face in the preprocessing step~\cite{gud}. Since there are $O(n^{4})$ faces in the planar arrangement of \textit{EVG(S)}, a point location structure of size $O(n^{4})$ can answer each query in $O(\log n)$ time. However, $O(n^4)$ preprocessing time and space are too high. On the other hand, by computing $\textit{VP}_S(p)$ directly, any query can be answered in $O(n\log n)$ time with no preprocessing. This has led to several results with a trade-off between the preprocessing cost and the query time~\cite{aro,bos,poc,zar} (refer to~\cite{gho2} for a complete survey). Suri and O'Rourke~\cite{sur} introduce the first 3-approximation algorithm for VCP. Their algorithm is based on representing a region by the union of a set of convex (triangular) regions. Gudmundsson and Morin~\cite{gud} improve this result to a 2-approximation algorithm using an improved covering scheme. For any $0<\alpha\leq1$, their method builds a data structure of size $O_{\epsilon}(m^{1+\alpha})=O_{\epsilon}(n^{2(1+\alpha)})$ using $O_{\epsilon}(m^{1+\alpha})=O_{\epsilon}(n^{2(1+\alpha)})$ preprocessing time, from which each query is answered in $O_{\epsilon}(m^{(1-\alpha)/2})=O_{\epsilon}(n^{1-\alpha})$ time, where $O_{\epsilon}(f(n)) = O(f(n)n^{\epsilon})$ and $\epsilon>0$ is an arbitrary constant. This algorithm returns $m'_{p}$ such that $m_{p}\leq m'_{p}\leq 2m_{p}$. 
The same result can be achieved by the algorithms described in~\cite{ali} and~\cite{nor}. In \cite{ali}, considering the endpoints of the segments, it is proven that the number of visible endpoints, denoted by $ve_p$, is a 2-approximation of $m_p$, meaning $m_p\leq ve_p\leq 2m_p$. Another notable approximation algorithm for VCP is described by Fischer \textit{et al.}~\cite{fis1,fis2} to estimate the savings of applying a visibility algorithm in computer graphics applications. Alipour~\textit{et al.}~\cite{alil} propose two randomized approximation algorithms for VCP. The first algorithm depends on two constants $0\leq \beta\leq \frac{2}{3}$ and $0<\delta\le 1$; its expected preprocessing time, expected space, and expected query time are $O(m^{2-3\beta/2}\log m)$, $O(m^{2-3\beta/2})$, and $O(\frac{1}{\delta^2}m^{\beta/2}\log m)$, respectively. In the preprocessing phase, the algorithm selects a sequence of random samples whose size and number depend on the time/space trade-off parameters. When a query point $p$ is given by an adversary unaware of the random sample of the algorithm, it computes the exact number of visible segments from $p$, denoted by $m_p$, if $m_p\leq \frac{3}{\delta^2}m^{\beta/2}\log(2m)$. Otherwise, it computes an approximate value $m'_p$ such that, with probability at least $1-\frac{1}{m}$, we have $(1-\delta)m_p\leq m'_p\leq (2+2\delta)m_p$. The preprocessing time and space of the second algorithm are $O(n^2\log n)$ and $O(n^2)$, respectively. This algorithm computes the exact value of $m_p$ if $m_p\leq \frac{1}{\delta^2}\sqrt{n}\log n$. Otherwise, it returns an approximate value $m''_p$ in expected $O(\frac{1}{\delta^2}\sqrt{n}\log n)$ time such that, with probability at least $1-\frac{1}{\log n}$, we have $(1-3\delta)m_p\leq m''_p\leq (1.5+3\delta)m_p$. Alipour~\textit{et al.}~\cite{alip} study the problem over terrains. 
Given a $2.5D$ terrain and a query point $p$ on or above it, they propose an approximation algorithm to find the triangles of the terrain that are visible from $p$. They implement and test their algorithm on real data sets. \subsection{Our Result} We combine the algorithms described in~\cite{ali} and~\cite{gud} to propose a randomized algorithm that finds the exact answer of VCP. The expected space and preprocessing time of our algorithm are $O_{\epsilon}(m^{2-2\alpha})=O_{\epsilon}(n^{4-4\alpha})$ and $O_{\epsilon}(m^{2-\alpha})=O_{\epsilon}(n^{4-2\alpha})$, respectively, and the expected query time is $O_{\epsilon}(m^{\alpha})=O_{\epsilon}(n^{2\alpha})$, where $0< \alpha <1$ and $m=O(n^2)$ is the number of edges of \textit{EVG(S)}. Our algorithm returns the exact value of VCP, as opposed to the previous approximation algorithms. \section{Preliminaries} In this section, we present theorems and results that we utilize in our algorithm. \begin{theorem} \textnormal{(\cite{alin})} \label{vt} For a set of $n$ disjoint line segments, we can preprocess them in $O_{\epsilon}(n^2)$ time using $O_{\epsilon}(n^2)$ space such that, in expected $O(n^{\epsilon})$ query time, we can check whether a given query point $p$ and a given segment $s_i\in S$ see each other. \end{theorem} According to \cite{gud}, it is possible to cover the visibility region of each segment $s_i\in S$ with a set $\textit{VT}(s_i)$ of triangles such that $|\textit{VT}(s_i)|=O(m_{s_i})$, where $m_{s_i}$ is the number of edges of \textit{EVG(S)} incident to $s_i$. Note that the visibility triangles of $s_i$ may overlap. Taking the visibility triangles of all segments together yields a set $\textit{VT}_S=\{\Delta_1,\Delta_2,\ldots\}$ of $|\textit{VT}_S|=O(m)$ triangles, where $m=O(n^2)$ is the number of edges of \textit{EVG(S)} \cite{gud}. We say $\Delta_i$ is related to $s_j$ if and only if $\Delta_i\in \textit{VT}(s_j)$. 
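To make the coloring idea concrete, here is a brute-force Python sketch of the quantity the paper ultimately computes: the number of distinct colors (i.e., related segments) among the triangles of $\textit{VT}_S$ that contain a query point. The names and the point-in-triangle test are illustrative; an actual implementation would use the range searching structures discussed next.

```python
# Brute-force count of distinct "colors" (source segments) among the
# triangles containing a query point p.  Each triangle carries the
# index of its related segment as its color.

def in_triangle(p, tri):
    """True if p lies in the closed triangle tri = (a, b, c)."""
    a, b, c = tri
    def side(u, v):
        # Cross product (v-u) x (p-u): sign tells which side of uv p is on.
        return (v[0] - u[0]) * (p[1] - u[1]) - (v[1] - u[1]) * (p[0] - u[0])
    s1, s2, s3 = side(a, b), side(b, c), side(c, a)
    return (s1 >= 0 and s2 >= 0 and s3 >= 0) or (s1 <= 0 and s2 <= 0 and s3 <= 0)

def count_distinct_colors(p, colored_triangles):
    """colored_triangles: list of (triangle, segment_index) pairs."""
    return len({color for tri, color in colored_triangles if in_triangle(p, tri)})
```

Counting all containing triangles (with multiplicity) gives the 2-approximation of \cite{gud}; counting distinct colors, as above, gives the exact $m_p$, which is what the algorithm of this paper achieves without an output-sensitive query time.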
Gudmundsson and Morin proved that for a given query point $p$, the number of triangles in $\textit{VT}_S$ containing $p$ is between $m_p$ and $2m_p$~\cite{gud}. So, for any query point~$p$, they compute the number of triangles containing $p$ to give a 2-approximation for VCP. Let $T=\{\Delta_1,\Delta_2, \ldots, \Delta_n\}$ be a set of $n$ triangles. For any $n'$ with $n\leq n'\leq n^2$, a \textit{multi-level partition tree} of size $O_{\epsilon}(n')$ can be constructed in $O_{\epsilon}(n')$ time. Using the multi-level partition tree, the number of triangles containing a query point $p$ can be counted in $O_{\epsilon}(n/\sqrt{n'})$ time, and the triangles can be reported with $O(k)$ extra query time, where $k$ is the number of reported triangles (see Theorem~1 in \cite{gud}). Refer to \cite{aga,mat} for the details of the partition tree data structure and its efficient implementation. In the algorithm proposed by Gudmundsson and Morin~\cite{gud}, there are $O(m)$ visibility triangles. With $O_{\epsilon}(m^{1+\alpha})=O_{\epsilon}(n^{2(1+\alpha)})$ preprocessing time and space, each query is answered in $O_{\epsilon}(m^{(1-\alpha)/2})=O_{\epsilon}(n^{1-\alpha})$ time, where $0<\alpha\leq1$. Obviously, if we color the visibility triangles of each segment $s_i$ with color $c_i$, then the number of distinct colors among the triangles containing $p$ is exactly the number of segments visible from $p$. Their algorithm could be used to find this exact number of distinct colors, but the query time would then depend on the value of $m_p$, which is not efficient for query points with many visible segments. \section{The Algorithm} Before describing our algorithm, we explain a concept commonly used in divide-and-conquer algorithms. Let $L$ be a set of $n$ lines in the plane and $r\leq n$ be a given parameter. 
A $(1/r)$-cutting for $L$ is a collection of (possibly unbounded) triangles with disjoint interiors that together cover the plane, such that the interior of each triangle intersects at most $n/r$ lines of~$L$. The first (though not optimal) algorithm for constructing a cutting was given by Clarkson~\cite{cla}. He proved that a random sample of size $r$ of the lines of $L$ can be used to produce an $O(\log r/r)$-cutting of size $O(r^2)$. Efficient constructions of $(1/r)$-cuttings of optimal size $O(r^2)$ can be found in \cite{deb1} and \cite{cha}. The set of triangles in the cutting, together with the collection of lines intersecting each triangle, can be found in $O(nr)$ time. We use the algorithm described in \cite{cla} to generate a $(1/r)$-cutting. Let $E=\{e_1,e_2,\ldots,e_{m'}\}$, $m'=O(m)$, be the set of edges of all the visibility triangles. We can thus partition the plane into $r^2$ triangular regions such that each region is intersected by at most $m'\log r/r$ edges of $E$. Let $r=m^{1-\alpha}$, and let $R=\{R_1,R_2,\ldots,R_{m''}\}$, $m''=O(m^{{2-2\alpha}})$, be the set of triangular regions in our cutting. Then each triangular region is crossed by at most $m^{\alpha} \log r$ edges of $E$. Now we describe our algorithm based on the $(1/r)$-cutting approach. In the preprocessing stage, we choose a random point $p_i$ from each triangular region $R_i$. We then compute the number of triangles in $T$ with distinct colors containing $p_i$, denoted by $m_{p_i}$. This is the same as the number of segments that are visible from $p_i$. Recall that $T$ is the given set of $O(m)$ colored triangles. For each $R_i$, we record $m_{p_i}$ and the location of $p_i$. Thus, the plane is partitioned into $m''$ regions, and for each region we record a point $p_i$ and the number of distinct colors among the triangles of $T$ containing $p_i$. This data structure needs $O(m^{{2-2\alpha}})$ space. We have the following theorem for computing the segments that a given query segment intersects. 
\begin{theorem} \emph{\cite{deb}} \label{cs} Let $S$ be a set of $n$ segments in the plane and $n\leq n'\leq n^2$. We can preprocess the segments in $O_{\epsilon}(n')$ time such that for a given query segment $s$, the segments crossed by $s$ are reported in $O_{\epsilon}(n/\sqrt{n'} +k)$ time, where $k$ is the number of segments crossed by $s$. \end{theorem} In the preprocessing stage, we construct the data structure described in Theorem~\ref{cs} on the edges of $T$. The set $T$ has $O(m)$ edges. Let $n'=m^{2-2\alpha}$. Then the preprocessing time and space for this data structure are $O_{\epsilon}(m^{2-2\alpha})$, and for a given segment $s$, we can report the segments of $T$ crossing $s$ in $O_{\epsilon}(m^{\alpha})$ time. At query time, for a given query point $p$, we first find the region $R_i$ containing $p$ in $O(\log m)$ time. Then, we find the edges of the triangles that cross $\overline{pp_i}$. The number of edges crossing $R_i$ is $O(m^{\alpha})$, so the number of edges that cross $\overline{pp_i}$ is $O(m^{\alpha})$. Thus, we can report these edges in $O_{\epsilon}(m^{\alpha})$ time using $O_{\epsilon}(m^{2-2\alpha})$ preprocessing time and space (see~Theorem~\ref{cs}). For each $p_i$, we know the number of distinct colors among the triangles containing $p_i$, which is the number of segments visible from $p_i$. Moreover, the difference between the numbers of segments visible from $p$ and from $p_i$ can be computed by considering the edges of $T$ that cross $\overline{pp_i}$. So, to compute the number of segments visible from $p$, we collect the distinct colors of the edges that cross $\overline{pp_i}$. The expected number of these colors is $O(m^{\alpha})$, which means they are related to $O(m^{\alpha})$ segments of $S$. For each segment $s_i$ such that at least one edge of the visibility triangles of $s_i$ crosses $\overline{pp_i}$, we use Theorem~\ref{vt} to test whether $s_i$ is visible from $p_i$ and from $p$. 
If $s_i$ is not visible from $p_i$ but is visible from $p$, then we add $1$ to the value of $m_{p_{i}}$, where $m_{p_{i}}$ is the number of segments visible from $p_i$. If $s_i$ is visible from $p_i$ but not visible from $p$, then we subtract $1$ from $m_{p_{i}}$. At the end, we return the new value of $m_{p_i}$, which is the number of segments visible from $p$. Note that we test at most $O(m^{\alpha})$ segments. For each such segment $s_j$, we use Theorem~\ref{vt} to test the visibility of ($s_j$, $p$) and ($s_j$, $p_i$). So, the query time is $O_{\epsilon}(m^{\alpha})$. See Figure~\ref{color} for a sample visibility scenario. We conclude the following theorem. \begin{figure}[htpb] \centering \begin{tikzpicture}[scale=0.6] \draw(0,0)--(1,3)[red]; \draw(1,3)--(3,1)[red]; \draw(3,1)--(0,0)[red]; \draw(-2,4)--(0,-1)[red]; \draw(0,-1)--(3,0)[red]; \draw(3,0)--(-2,4)[red]; \draw(-2,0)--(2,1)[red]; \draw(2,1)--(2,-1)[red]; \draw(2,-1)--(-2,0)[red]; \draw(3,3)--(1,-3)[green]; \draw(1,-3)--(-4,1)[green]; \draw(-4,1)--(3,3)[green]; \draw(1,0)--(5,4)[blue]; \draw(5,4)--(6,-1)[blue]; \draw(6,-1)--(1,0)[blue]; \draw(-1.2,-0.2)--(-1.3,4)[blue]; \draw(-1.3,4)--(5,1)[blue]; \draw(5,1)--(-1.2,-0.2)[blue]; \draw(0,7)--(7,0)[dashed]; \draw(0,7)--(-7,-6)[dashed]; \draw(-7,-6)--(7,7)[dashed]; \draw(7,0)--(-7,-6)[dashed]; \draw(7,0)--(4,-7)[dashed]; \draw(4,-7)--(3.27,3.78)[dashed]; \draw(4,-7)--(-7,-6)[dashed]; \filldraw(-1,0) circle(2pt); \draw (-1,0) node[above] {$p_i$}; \filldraw(-1,3) circle(2pt); \draw (-1,3) node[above] {$p$}; \filldraw(-2.5,0) circle(2pt); \draw (-2.5,0) node[above] {$R_i$}; \end{tikzpicture} \caption{A sample visibility scenario. Here, $p_i$ is selected for the region $R_i$. The number of distinct colors among the triangles containing $p_i$ is 3. In the preprocessing stage, this value is computed for each $p_i$. For the query point $p$, we consider the set of segments that intersect $\overline{pp_i}$. The initial answer for $p$ is $m_{p_{i}}$. 
Because both $p_i$ and $p$ are inside a red triangle, we do not change the value of $m_{p_{i}}$ for red triangles. Because $p_i$ is inside a green triangle and $p$ is not inside the same green triangle, the value of $m_{p_i}$ is reduced by 1 and the final answer is~$2$. } \label{color} \end{figure} \begin{theorem} \label{l1} VCP can be answered in expected $O_{\epsilon}(m^{\alpha})$ query time using expected $O_{\epsilon}(m^{2-2\alpha})$ memory space. \end{theorem} \subsection*{Preprocessing time} The preprocessing time of this algorithm is dominated by computing the values $m_{p_{i}}$. We start with $R_1$. First, we compute $m_{p_{1}}$ in $O(n\log n)$ time and then move to one of the neighboring regions of $R_1$, say $R_2$. We can compute $m_{p_{2}}$ in $O(m^{\alpha})$ time. Since there are $O(m^{2-2\alpha})$ regions, the preprocessing time is $O_{\epsilon}(m^{\alpha}\cdot m^{2-2\alpha})=O_{\epsilon}(m^{2-\alpha})$. \section{Conclusion} We proposed an exact algorithm for the visibility counting problem. By exploiting the results described in~\cite{gud}, we transformed the problem into counting the triangles with distinct colors containing a query point, which is a general range searching problem. The algorithms for general range searching are output sensitive~\cite{aga1,pro}. However, the nature of the visibility counting problem enabled us to solve the problem in a query time independent of the size of the output. In our range searching problem, for a given query point $p$, the difference between $p$ and $p_i$ depends on the edges of the triangles that cross $\overline{pp_i}$. Each such edge $e$ is related to a visibility triangle of a segment $s$, so $e$ can be processed by testing the visibility of $(p,s)$ and $(p_i,s)$ in expected $O(\log n)$ time. As future work, suppose that we are given a set of triangles in $\mathbb{R}^3$ and the goal is to compute the number of triangles visible from a query point $p$. Our approach can be extended to solve the problem in $\mathbb{R}^3$. 
Gudmundsson and Morin~\cite{gud} proposed a $2$-approx\-imation algorithm for VCP. We produce the exact answer of VCP with the same storage and query time as their algorithm. However, our preprocessing time is higher than theirs, so an interesting open question is whether the problem can be answered with the same preprocessing time as their algorithm. \small \bibliographystyle{abbrv}
https://arxiv.org/abs/1808.10621
A concavity condition for existence of a negative Neumann-Poincaré eigenvalue in three dimensions
It is proved that if a bounded domain in three dimensions satisfies a certain concavity condition, then the Neumann-Poincaré operator on the boundary of the domain or its inversion in a sphere has at least one negative eigenvalue. The concavity condition is quite simple, and is satisfied if there is a point on the boundary at which the Gaussian curvature is negative.
\section{Introduction} The Neumann-Poincar\'e (abbreviated by NP) operator is a boundary integral operator which appears naturally when solving classical boundary value problems using layer potentials. Its study goes back to C. Neumann \cite{Neumann-87} and Poincar\'e \cite{Poincare-AM-87}, as the name of the operator suggests. Lately, interest in the spectral properties of the NP operator has been growing rapidly due to their relation to plasmonics \cite{ACKLM, AMRZ, MFZ}, and significant progress has been made, among which are work on the continuous spectrum in two dimensions \cite{BZ, HKL, KLY, PP2} and the Weyl asymptotics of eigenvalues in three dimensions \cite{Miya}, to name only a few. However, there are still several puzzling questions about the NP spectrum (the spectrum of the NP operator). The question of existence of negative NP eigenvalues in three dimensions is one of them. Unlike the two-dimensional NP spectrum, which is symmetric with respect to $0$ so that there are as many negative eigenvalues as positive ones (see, for example, \cite{HKL, KPS}), not many surfaces (boundaries of three-dimensional domains) are known to have negative NP eigenvalues. In fact, the NP eigenvalues on spheres are all positive, and it was only in \cite{Ahner}, published in 1994, that the NP operator on a very thin oblate spheroid was shown to have a negative eigenvalue. We emphasize that the NP eigenvalues on ellipsoids can be found explicitly using Lam\'e functions, for which we also refer to \cite{AA, FK, Mart, Ritt}. As far as we are aware, no surface other than ellipsoids is known to have a negative NP eigenvalue. It is the purpose of this paper to present a simple geometric condition which guarantees the existence of a negative NP eigenvalue. To present the condition, let $\Omega$ be a bounded domain with $C^{1,\alpha}$ boundary for some $\alpha>0$. 
We say $\partial\Omega$ is concave with respect to $p \in \Omega$ if there is a point $x \in \partial\Omega$ such that \begin{equation}\label{concave} (x-p) \cdot \nu_x <0, \end{equation} where $\nu_x$ denotes the outward unit normal vector to $\partial\Omega$. We emphasize that if $\partial\Omega$ is $C^2$, then this condition is fulfilled if there is a point on $\partial\Omega$ where the Gaussian curvature is negative. In fact, if $(x-p) \cdot \nu_x \ge 0$ for all $p \in \Omega$ and $x \in \partial\Omega$, then $\Omega$ is convex and hence the Gaussian curvature on $\partial\Omega$ is non-negative. We prove in this paper that if the concavity condition \eqnref{concave} holds for some $p \in \Omega$, then the NP operator defined either on $\partial\Omega$ or on its inversion with respect to $p$ has at least one negative eigenvalue (see Theorem \ref{mainthm1} and Corollary \ref{mainthm2}). We emphasize that \eqnref{concave} is not a necessary condition for the existence of a negative NP eigenvalue; oblate spheroids have negative NP eigenvalues, as mentioned before. This paper is organized as follows. In section \ref{sec:main} we review the definition of the NP operator and state the main results of this paper. Section \ref{sec:proof} is devoted to proving the transformation formula for the NP operator under inversion in a sphere. We use this formula to prove the main results. \section{The NP operator and statements of main results}\label{sec:main} Let $\Gamma(x)$ be the fundamental solution to the Laplacian, {\it i.e.}, \begin{equation}\label{gammacond} \Gamma (x) = \begin{cases} \displaystyle \frac{1}{2\pi} \ln |x|, \quad & d=2, \\ \noalign{\smallskip} \displaystyle -\frac{1}{4\pi |x|}, \quad & d = 3. \end{cases} \end{equation} As before, let $\Omega$ be a bounded domain with $C^{1,\alpha}$ boundary for some $\alpha>0$. 
The single layer potential $\mathcal{S}_{\partial \Omega} [\varphi]$ of a density function $\varphi$ is defined by \begin{equation} \mathcal{S}_{\partial \Omega} [\varphi] (x):= \int_{\partial \Omega} \Gamma (x-y) \varphi (y) \, d\sigma(y), \quad x \in \mathbb{R}^d. \end{equation} It is well known (see, for example, \cite{AmKa07Book2, Fo95}) that $\mathcal{S}_{\partial \Omega} [\varphi]$ satisfies the jump relation \begin{equation}\label{singlejump} \frac{\partial}{\partial\nu} \mathcal{S}_{\partial \Omega} [\varphi] \Big|_\pm (x) = \biggl( \pm \frac{1}{2} I + \mathcal{K}_{\partial \Omega}^* \biggr) [\varphi] (x), \quad x \in \partial \Omega, \end{equation} where $\frac{\partial}{\partial\nu}$ denotes the outward normal derivative, the subscripts $\pm$ indicate the limit from outside and inside of $\Omega$, respectively, and the operator $\mathcal{K}_{\partial \Omega}^*$ is defined by \begin{equation}\label{introkd} \mathcal{K}_{\partial \Omega}^* [\varphi] (x):= \int_{\partial \Omega} \nu_x \cdot \nabla_x \Gamma(x-y) \varphi(y)\,d\sigma(y), \quad x \in \partial \Omega. \end{equation} The operator $\mathcal{K}_{\partial \Omega}^*$ is called the NP operator associated with the domain $\Omega$ (or its boundary $\partial\Omega$). The operator $\mathcal{S}_{\partial\Omega}$, as an operator on $\partial\Omega$, maps $H^{-1/2}(\partial\Omega)$ into $H^{1/2}(\partial\Omega)$ continuously, and is invertible if $d=3$. If $d=2$, then there are domains where $\mathcal{S}_{\partial\Omega}$ has one-dimensional kernel, but by dilating the domain, it can be made to be invertible (see \cite{Verch-JFA-84}). 
So, the bilinear form $\langle \cdot,\cdot \rangle_{\partial\Omega}$, defined by \begin{equation}\label{innerp} \langle \varphi,\psi \rangle_{\partial\Omega}:= -\langle \varphi, \mathcal{S}_{\partial \Omega}[\psi] \rangle, \end{equation} for $\varphi, \psi \in H^{-1/2}(\partial \Omega)$, is actually an inner product on $H^{-1/2}(\partial \Omega)$, and it yields a norm equivalent to the usual $H^{-1/2}$-norm (see, for example, \cite{KKLSY}). Here, $H^s$ denotes the usual $L^2$-Sobolev space of order $s$ and $\langle \cdot, \cdot \rangle$ is the $H^{-1/2}$-$H^{1/2}$ duality pairing. We denote the space $H^{-1/2}(\partial \Omega)$ equipped with the inner product $\langle \cdot,\cdot \rangle_{\partial\Omega}$ by $\mathcal{H}^*$. It is proved in \cite{KPS} that the NP operator $\mathcal{K}_{\partial \Omega}^*$ is self-adjoint with respect to the inner product $\langle \cdot,\cdot \rangle_{\partial\Omega}$. In fact, this is an immediate consequence of Plemelj's symmetrization principle \begin{equation}\label{Plemelj} \mathcal{S}_{\partial \Omega} \mathcal{K}^*_{\partial \Omega} = \mathcal{K}_{\partial \Omega} \mathcal{S}_{\partial \Omega}. \end{equation} Here $\mathcal{K}_{\partial \Omega}$ is the adjoint of $\mathcal{K}^*_{\partial \Omega}$ with respect to the usual $L^2$-inner product. Since $\mathcal{K}_{\partial \Omega}^*$ is compact on $\mathcal{H}^*$ when $\partial\Omega$ is $C^{1,\alpha}$, it has eigenvalues converging to $0$. For a fixed $r>0$ and $p \in \mathbb{R}^d$, let $T_p : \mathbb{R}^d \setminus \{p\} \rightarrow \mathbb{R}^d \setminus \{p\}$, $d =2,3$, be the inversion in a sphere, namely, \begin{equation} T_p x := \frac{r^2}{|x-p|^2} (x-p) + p. \end{equation} For a given bounded domain $\Omega$ in $\mathbb{R}^d$, let $\partial \Omega_p^*$ be the inversion of $\partial \Omega$, {\it i.e.}, $\partial \Omega_p^*= T_p(\partial \Omega)$. If we invert $\Omega$ in spheres of two different radii, then the resulting domains are dilations of each other. 
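As a quick numerical aside (not part of the paper's argument), the inversion $T_p$ is an involution: applying it twice returns the original point, since $|T_p x - p| = r^2/|x-p|$. A minimal Python sketch with illustrative names:

```python
# Inversion in the sphere of radius r centered at p:
#   T_p(x) = p + r^2 (x - p) / |x - p|^2.
# Applying T_p twice gives x back, so T_p is an involution.

def invert(x, p, r):
    d2 = sum((xi - pi) ** 2 for xi, pi in zip(x, p))  # |x - p|^2
    return tuple(pi + r * r * (xi - pi) / d2 for xi, pi in zip(x, p))
```

In particular, $T_p$ maps the exterior of the inversion sphere to its interior and vice versa, which is why the position of the center $0$ relative to $\Omega$ matters in the formulas below.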
Since the NP spectrum is invariant under dilation, as one can easily see by a change of variables, we may fix the radius of the inversion sphere once and for all. The following is the main result of this paper. \begin{theorem}\label{mainthm1} Let $\Omega$ be a bounded domain whose boundary is $C^{1,\alpha}$ smooth for some $\alpha>0$. If the concavity condition \eqnref{concave} holds for some $p \in \Omega$, then either $\mathcal{K}_{\partial\Omega}^*$ or $\mathcal{K}_{\partial\Omega_p^*}^*$ has a negative eigenvalue. \end{theorem} We have the following corollary as an immediate consequence of Theorem \ref{mainthm1}. \begin{cor}\label{mainthm2} Suppose $\partial\Omega$ is $C^{2}$ smooth. If there is a point on $\partial\Omega$ where the Gaussian curvature is negative, then either $\mathcal{K}_{\partial\Omega}^*$ or $\mathcal{K}_{\partial\Omega_p^*}^*$ for some $p \in \Omega$ has a negative eigenvalue. \end{cor} \section{Inversion in a sphere}\label{sec:proof} For simplicity we now assume that the center $p$ of the inversion sphere is $0$, and denote $\partial\Omega_p^*$ and $T_p$ by $\partial\Omega^*$ and $T$, respectively. Let $x^*:= Tx$. Since $$ \frac{\partial x_i}{\partial x^*_j} = \frac{r^2 \delta_{ij}}{|x^*|^2} - 2r^2 \frac{x^*_i x^*_j}{|x^*|^4}, $$ the Jacobian matrix of $T^{-1}$ is given by \begin{equation}\label{Jacobian} J_{T^{-1}} = \frac{r^2}{|x^*|^2} \Big(I - 2\frac{x^*}{|x^*|}\frac{(x^*)^t}{|x^*|} \Big) = \frac{|x|^2}{r^2} \Big(I - 2\frac{x}{|x|}\frac{x^t}{|x|} \Big). \end{equation} Here $t$ denotes the transpose. So, $T$ is conformal. The change of variable formulas for line and surface elements are respectively given as follows: \begin{align} & ds(x^*) = \frac{r^2}{|x|^2} ds(x), \label{line}\\ & dS(x^*) = \frac{r^4}{|x|^4} dS(x). \label{surface} \end{align} It is known (see \cite{MacMillan-book}) that \begin{align} |x^*-y^*| = \frac{r^2}{|x||y|} |x-y|. 
\end{align} So we have \begin{equation}\label{tildefund} \Gamma(x^*-y^*) = \begin{cases} \Gamma(x-y) -\Gamma(x) -\Gamma(y) + \Gamma(r^2) &\text{if } d = 2, \\ \noalign{\smallskip} \displaystyle \frac{|x||y|}{r^2} \Gamma(x-y) &\text{if } d = 3. \end{cases} \end{equation} For a function $\varphi$ defined on $\partial\Omega$, define $\varphi^*$ on $\partial\Omega^*$ by \begin{equation}\label{inversionfunc} \varphi^*(y^*) := \varphi (y) \frac{|y|^d}{r^d}. \end{equation} Then, using \eqref{tildefund}, we can easily see that the following relation between the single layer potentials on $\partial \Omega$ and $\partial\Omega^*$ holds (see also \cite{MacMillan-book}): \begin{equation}\label{singleform} \mathcal{S}_{\partial\Omega^*} [ \varphi^* ](x^*) = \begin{cases} \mathcal{S}_{\partial \Omega} [\varphi](x) - \mathcal{S}_{\partial \Omega} [\varphi](0) + \Big( \displaystyle \int_{\partial\Omega} \varphi ds \Big) \big(\Gamma(r^2) - \Gamma(x)\big) &\text{if } d = 2, \\ \displaystyle \frac{|x|}{r} \mathcal{S}_{{\partial \Omega}} [ \varphi ](x) &\text{if } d = 3 . \end{cases} \end{equation} We note that, in three dimensions, the map $\varphi \mapsto \varphi^*$ preserves the inner product from $\mathcal{H}^*(\partial \Omega)$ to $\mathcal{H}^*(\partial \Omega^*)$, that is, \begin{equation} \langle \varphi^*, \psi^* \rangle_{\partial\Omega^*} = \langle \varphi, \psi \rangle_{\partial\Omega}. \end{equation} This relation is also true in two dimensions if $\varphi$ and $\psi$ have mean zero. The relationship between the outward unit normal vectors $\nu_{x^*}$ on $\partial\Omega^*$ and $\nu_x$ on ${\partial \Omega}$ is given as follows: \begin{equation}\label{normalform} \nu_{x^*} = (-1)^m \Big( I - 2\frac{x}{|x|} \frac{x^t}{|x|} \Big) \nu_x, \end{equation} where $m = 1$ if $0$ is an exterior point of $\Omega$ and $m= 0$ if $0$ is an interior point of $\Omega$. We emphasize that $0$ is the inversion center. 
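The distance identity $|x^*-y^*| = \frac{r^2}{|x||y|}|x-y|$ underlying \eqnref{tildefund} is easy to verify numerically. The following Python sketch (inversion centered at $0$, $d=3$, with arbitrary sample points and illustrative names) checks it on one pair:

```python
# Numerical check of |x* - y*| = r^2 |x - y| / (|x| |y|) for the
# inversion x* = r^2 x / |x|^2 centered at 0 (d = 3).

import math

def inv0(x, r):
    n2 = sum(c * c for c in x)  # |x|^2
    return tuple(r * r * c / n2 for c in x)

def norm(x):
    return math.sqrt(sum(c * c for c in x))

def dist(x, y):
    return norm(tuple(a - b for a, b in zip(x, y)))

r = 2.0
x, y = (1.0, 2.0, 2.0), (3.0, 0.0, 4.0)   # |x| = 3, |y| = 5
lhs = dist(inv0(x, r), inv0(y, r))
rhs = r * r * dist(x, y) / (norm(x) * norm(y))
assert abs(lhs - rhs) < 1e-9
```

This identity is exactly what produces the factor $\frac{|x||y|}{r^2}$ in the three-dimensional case of \eqnref{tildefund}, since $\Gamma(x) = -\frac{1}{4\pi|x|}$ there.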
The NP operators $ \mathcal{K}^{*}_{\partial \Omega}$ and $\mathcal{K}^{*}_{\partial\Omega^*}$ are related in the following way: \begin{lemma} \label{tildNP} Suppose that $0$ is the center of the inversion sphere. \begin{itemize} \item[(i)] If $0 \in \Omega$, then \begin{equation} \label{tildNPinterior} \mathcal{K}^{*}_{\partial\Omega^*} [ \varphi^* ](x^*) = \begin{cases} \displaystyle -\frac{|x|^2}{r^2} \mathcal{K}^{*}_{\partial \Omega} [ \varphi ](x) + \Big( \int_{\partial \Omega} \varphi ds \Big) \frac{ x \cdot \nu_x }{2\pi r^2} \quad &\text{if } d = 2, \\ \noalign{\smallskip} \displaystyle -\frac{|x|^3}{r^3} \mathcal{K}^{*}_{\partial \Omega} [ \varphi ](x) - \frac{|x|(x \cdot \nu_x)}{r^3} \mathcal{S}_{{\partial \Omega}} [ \varphi ](x) \quad &\text{if } d = 3. \end{cases} \end{equation} \item[(ii)] If $0 \in \overline{\Omega}^c$, then \begin{equation} \label{tildNPexterior} \mathcal{K}^{*}_{\partial\Omega^*} [ \varphi^* ](x^*) = \begin{cases} \displaystyle \frac{|x|^2}{r^2} \mathcal{K}^{*}_{\partial \Omega} [ \varphi ](x) - \Big( \int_{\partial \Omega} \varphi ds \Big) \frac{x \cdot \nu_x}{2\pi r^2} \quad &\text{if } d = 2, \\ \noalign{\smallskip} \displaystyle \frac{|x|^3}{r^3} \mathcal{K}^{*}_{\partial \Omega} [ \varphi ](x) + \frac{|x|(x \cdot \nu_x)}{r^3} \mathcal{S}_{{\partial \Omega}} [ \varphi ](x) \quad &\text{if } d = 3. \end{cases} \end{equation} \end{itemize} \end{lemma} \begin{proof} Since the difference of proofs for \eqnref{tildNPinterior} and \eqnref{tildNPexterior} is just the sign of the normal vector, we only prove the first one. 
If $d=2$, we use \eqnref{Jacobian}, \eqnref{line}, \eqnref{tildefund}, \eqnref{inversionfunc}, and \eqnref{normalform} to have \begin{align*} \mathcal{K}^{*}_{\partial\Omega^*} [ \varphi^* ](x^*) &= \int_{\partial\Omega^*} \nu_{x^*} \cdot \nabla_{x^*} \Gamma(x^* - y^*) \varphi^*(y^*) \; ds(y^*) \\ &= \int_{{\partial \Omega}} - \Big( I - 2\frac{x}{|x|} \frac{x^t}{|x|} \Big) \nu_x \cdot J_{T^{-1}}^t \nabla_{x} \Big( \Gamma(x-y) -\Gamma(x) \Big) \varphi (y) \; ds(y) \\ &= \int_{{\partial \Omega}} - \nu_x \cdot \Big( I - 2\frac{x}{|x|} \frac{x^t}{|x|} \Big)^t J_{T^{-1}}^t \nabla_{x} \Big( \Gamma(x-y) -\Gamma(x) \Big) \varphi (y) \;ds(y) \\ &= - \frac{|x|^2}{r^2} \int_{{\partial \Omega}} \nu_x \cdot \Big( \nabla_x \Gamma(x-y) - \nabla_x \Gamma(x) \Big) \varphi (y) \; ds(y) \\ &= -\frac{|x|^2}{r^2} \mathcal{K}^{*}_{\partial \Omega} [ \varphi ](x) + \Big( \int_{\partial \Omega} \varphi ds \Big) \frac{x \cdot \nu_x}{2\pi r^2}. \end{align*} The three-dimensional case can be proved similarly. In fact, we have \begin{align*} \mathcal{K}^{*}_{\partial\Omega^*} [ \varphi^* ](x^*) &= \int_{\partial\Omega^*} \nu_{x^*} \cdot \nabla_{x^*} \Gamma(x^* - y^*) \; \varphi^*(y^*) \; dS(y^*) \\ &= \int_{{\partial \Omega}} - \Big( I - 2\frac{x}{|x|} \frac{x^t}{|x|} \Big) \nu_x \cdot J_{T^{-1}}^t \nabla_{x} \Big( \frac{|x||y|}{r^2} \Gamma(x-y) \Big) \varphi (y) \; \frac{r}{|y|}dS(y) \\ &= \int_{{\partial \Omega}} - \nu_x \cdot \Big( I - 2\frac{x}{|x|} \frac{x^t}{|x|} \Big)^t J_{T^{-1}}^t \nabla_{x} \Big( \frac{|x||y|}{r^2} \Gamma(x-y) \Big) \varphi (y) \; \frac{r}{|y|}dS(y) \\ &= - \frac{|x|^2}{r} \int_{{\partial \Omega}} \nu_x \cdot \Big( \frac{|x|}{r^2} \nabla_x \Gamma(x-y) + \frac{x}{r^2|x|}\Gamma(x-y) \Big) \varphi (y) \; dS(y) \\ &= \displaystyle -\frac{|x|^3}{r^3} \mathcal{K}^{*}_{\partial \Omega} [ \varphi ](x) - \frac{|x|(x \cdot \nu_x)}{r^3} \mathcal{S}_{{\partial \Omega}} [ \varphi ](x). \end{align*} This completes the proof. 
\end{proof} If $\varphi$ is an eigenvector of $\mathcal{K}^{*}_{\partial \Omega}$ with corresponding eigenvalue $\lambda \neq 1/2$, then $\int_{\partial \Omega} \varphi ds=0$. So \eqnref{inversionfunc} and Lemma \ref{tildNP} for $d=2$ show that $\varphi^*$ is an eigenvector of $\mathcal{K}^{*}_{\partial\Omega^*}$ and the corresponding eigenvalue is $-\lambda$ if $0\in \Omega$ and $\lambda$ if $0 \in \overline{\Omega}^c$. Since the spectrum $\sigma(\mathcal{K}^{*}_{\partial \Omega})$ of the NP operator in two dimensions is symmetric with respect to $0$, we infer that $\sigma(\mathcal{K}^{*}_{\partial\Omega^*}) = \sigma(\mathcal{K}^{*}_{\partial \Omega})$, namely, the spectrum is invariant under inversion. This fact is known \cite{Schi}, but the above yields an alternative proof. In three dimensions, we obtain the following identities. \begin{prop}\label{tildeNPinnerprod} Suppose $d=3$ and $0$ is the center of the inversion sphere. \begin{itemize} \item[(i)] If $0 \in \Omega$, then \begin{equation}\label{inverform1} \langle \mathcal{K}^{*}_{\partial\Omega^*} [ \varphi^* ], \varphi^* \rangle_{\partial\Omega^*} + \langle \mathcal{K}^{*}_{\partial \Omega} [ \varphi ], \varphi \rangle_{\partial\Omega} = \int_{\partial \Omega} \frac{x \cdot \nu_x}{|x|^2} | \mathcal{S}_{{\partial \Omega}} [ \varphi ](x)|^2 \; dS. \end{equation} \item[(ii)] If $0 \in \overline{\Omega}^c$, then \begin{equation}\label{inverform2} \langle \mathcal{K}^{*}_{\partial\Omega^*} [ \varphi^* ], \varphi^* \rangle_{\partial\Omega^*} - \langle \mathcal{K}^{*}_{\partial \Omega} [ \varphi ], \varphi \rangle_{\partial\Omega} = -\int_{\partial \Omega} \frac{x \cdot \nu_x}{|x|^2} | \mathcal{S}_{{\partial \Omega}} [ \varphi ](x)|^2 \; dS. \end{equation} \end{itemize} \end{prop} \begin{proof} The identity follows from \eqnref{singleform} and \eqnref{tildNPinterior} immediately. 
In fact, we have \begin{align*} \langle \mathcal{K}^{*}_{\partial\Omega^*} [ \varphi^* ], \varphi^* \rangle_{\partial\Omega^*} &= -\int_{\partial\Omega^*} \mathcal{K}^{*}_{\partial\Omega^*} [ \varphi^* ](x^*) \; \overline{\mathcal{S}_{\partial\Omega^*} [ \varphi^* ](x^*)} dS(x^*) \\ &= \int_{\partial \Omega} \Big( \frac{|x|^3}{r^3} \mathcal{K}^{*}_{\partial \Omega} [ \varphi ](x) + \frac{|x|(x \cdot \nu_x)}{r^3} \mathcal{S}_{{\partial \Omega}} [ \varphi ](x) \Big) \overline{\mathcal{S}_{{\partial \Omega}} [ \varphi ](x)} \frac{r^3}{|x|^3} dS(x) \\ &= -\langle \mathcal{K}^{*}_{\partial \Omega} [ \varphi ], \varphi \rangle_{\partial\Omega} + \int_{\partial \Omega} \frac{x \cdot \nu_x}{|x|^2} \; | \mathcal{S}_{{\partial \Omega}} [ \varphi ](x) |^2 dS(x), \end{align*} which proves \eqnref{inverform1}. \eqnref{inverform2} can be proved similarly. \end{proof} \medskip We are now ready to prove Theorem \ref{mainthm1}. \noindent{\sl Proof of Theorem \ref{mainthm1}}. Suppose $\Omega$ satisfies \eqnref{concave} at $p \in \Omega$. Without loss of generality we assume that $p=0$. Then there is $x_0 \in \partial\Omega$ such that $x_0 \cdot \nu_{x_0} <0$. Choose an open neighborhood $U$ of $x_0$ in $\partial \Omega$ so that $x \cdot \nu_x <0$ for all $x \in U$. Since $\mathcal{S}_{\partial\Omega}: H^{-1/2}(\partial\Omega) \to H^{1/2}(\partial\Omega)$ is invertible in three dimensions (see \cite{Verch-JFA-84}), we may choose $\varphi \neq 0$ so that $\mathcal{S}_{\partial\Omega}[\varphi]$ is supported in $U$. We then infer from \eqnref{inverform1} that $$ \langle \mathcal{K}^{*}_{\partial\Omega^*} [ \varphi^* ], \varphi^* \rangle_{\partial\Omega^*} + \langle \mathcal{K}^{*}_{\partial \Omega} [ \varphi ], \varphi \rangle_{\partial\Omega} <0. $$ Therefore, the numerical range of either $\mathcal{K}^{*}_{\partial\Omega^*}$ or $\mathcal{K}^{*}_{\partial \Omega}$ has a negative element. 
This implies that either $\mathcal{K}^{*}_{\partial\Omega^*}$ or $\mathcal{K}^{*}_{\partial \Omega}$ has at least one negative eigenvalue. This completes the proof. $\Box$ \section*{Acknowledgement} We thank Yoshihisa Miyanishi for many inspiring discussions.
https://arxiv.org/abs/1005.3003
Infinite Hilbert Class Field Towers from Galois Representations
We investigate class field towers of number fields obtained as fixed fields of modular representations of the absolute Galois group of the rational numbers. First, for each $k\in\{12,16,18,20,22,26\}$, we give explicit rational primes $\ell$ such that the fixed field of the mod-$\ell$ representation attached to the unique normalized cusp eigenform of weight $k$ on $\Sl_2(\Z)$ has an infinite class field tower. Under a conjecture of Hardy and Littlewood, we further prove that there exist infinitely many such primes for each $k$ (in the above list). Second, given a non-CM curve $E/\Q$, we show that there exists an integer $M_E$ such that the fixed field of the representation attached to the $n$-division points of $E$ has an infinite class field tower for a set of integers $n$ of density one among integers coprime to $M_E$.
\section{Introduction} Most current examples of number fields known to have an infinite (Hilbert) class field tower are constructed ``from the bottom up,'' e.g., by beginning with a fixed number field and constructing extensions of that number field in which a large number of primes ramify. If sufficiently many primes ramify, invoking results from genus theory and one of many variants of the Golod-Shafarevich inequality is enough to prove the class field tower infinite. We investigate instead class field towers of number fields obtained ``from the top down,'' i.e., as fixed fields of representations of the absolute Galois group ${\rm Gal}(\Qb/{\mathbb Q})$, and prove that many naturally occurring fields of arithmetic interest have infinite Hilbert class field towers. In the process, we also prove, assuming a conjecture of Hardy and Littlewood, that there are infinitely many cyclotomic fields of prime conductor with infinite Hilbert class field towers. We consider two primary sources for these representations. First, we consider Galois representations arising from modular forms. For any $k\in\{12,16,18,20,22,26\}$, it is well-known that there exists a unique normalized cuspidal Hecke eigenform $\Delta_k$ on $\Sl_2({\mathbb Z})$ of weight $k$ with integer Fourier coefficients. For each such form and rational prime $\l$, we consider the fixed field of the associated mod-$\l$ representation $\rho_{\Delta_k,\l}$. We find explicit examples of primes $\l$ such that each of these fixed fields has an infinite class field tower. The key new idea is to use certain auxiliary cubic fields introduced by Daniel Shanks (see \cite{shanks74}). Using these auxiliary fields and a refined Golod-Shafarevich type inequality of R.~Schoof (see \cite{schoof86}), we show that, for suitable primes $\ell$, the fixed fields of the mod-$\ell$ representations alluded to earlier have an infinite Hilbert class field tower. 
Moreover, assuming a well-known conjecture of Hardy and Littlewood on prime values of quadratic polynomials, we prove the existence of infinitely many such $\ell$ (see Theorem~\ref{cyc} and Corollary~\ref{Cor}). Further, the fields arising from these mod-$\l$ representations have Galois groups containing $\Sl_2({\mathbb Z}/\l)$ and are ramified at a single finite prime (in contrast to the number fields shown to have an infinite class field tower via genus theory). As a consequence, we also show that the conjecture of Hardy and Littlewood implies the existence of infinitely many primes $\ell$ such that the cyclotomic fields ${\mathbb Q}(\zeta_\ell)$ have infinite Hilbert class field towers. For the second construction, let $E/{\mathbb Q}$ be an elliptic curve without complex multiplication. For any $n\geq 1$, the absolute Galois group ${\rm Gal}(\Qb/{\mathbb Q})$ acts on the $n$-torsion points $E[n](\Qb)\cong({\mathbb Z}/n{\mathbb Z})^{2}$ of $E$. The fixed field $K_n$ of the kernel of the associated representation $\rho_n:{\rm Gal}(\Qb/{\mathbb Q})\rightarrow{\rm GL}_2({\mathbb Z}/n)$ is of significant arithmetic interest. In Theorem~\ref{SmallThm}, we give an elementary proof that for almost all integers $n$ coprime to a fixed integer $M_E$ (almost all here means outside a set of density zero), the field $K_n$ has an infinite class field tower. Further, known information about these fixed fields can be translated into information about the fields arising in these towers. Finally, we note that the construction can be made fairly explicit: in Remark \ref{FurutaNote}, we give a specific example of an elliptic curve and an integer $n$ provided by the result. We would like to thank William McCallum and Dinesh Thakur for comments. 
\section{Infinite Class Field Towers from Hecke Eigenforms} For each even integer $k$ with $12\leq k\leq 26$ and $k\neq 14,24$, there exists a unique normalized cuspidal Hecke eigenform on ${\rm SL}_2({\mathbb Z})$ of weight $k$, which we denote here by $\Delta_k$. The theorem of Deligne-Serre provides for each $\Delta_k$ and prime $\ell$ a Galois representation $\rho_{\Delta_k,\ell}:{\rm Gal}(\Qb/{\mathbb Q})\to {\rm GL}_2(\mathbb{F}_{\ell})$ which is unramified outside $\ell$. This representation is the reduction modulo $\ell$ of the $\ell$-adic representation associated to $\Delta_k$. Let $K_{\Delta_k,\ell}$ be the fixed field of the representation. By a well-known result of Serre and Swinnerton-Dyer (see \cite{serre73}), for any $\Delta_k$ and sufficiently large prime $\ell$, the Galois group ${\rm Gal}(K_{{\Delta_k},\ell}/{\mathbb Q})$ is the subgroup $$G_{\ell}=\left\{g\in {\rm GL}_2(\mathbb{F}_{\ell}):\det(g)\in (\mathbb{F}_{\ell}^*)^{k-1}\right\}$$ of ${\rm GL}_2(\mathbb{F}_{\ell})$. Note that if $(k-1,\ell-1)=1$, we obtain $G_{\ell}={\rm GL}_2(\mathbb{F}_{\ell})$. The theorem is effective, and one has a complete list of the exceptional primes: \begin{center} \begin{tabular}{|c|c|}\hline $k$&exceptional $\ell$ for $\Delta_k$\\\hline $12$&$2,3,5,7,23,691$\\\hline $16$&$2,3,5,7,11,31,59,3617$\\\hline $18$&$2,3,5,7,11,13,43867$\\\hline $20$&$2,3,5,7,11,13,283,617$\\\hline $22$&$2,3,5,7,13,17,131,593$\\\hline $26$&$2,3,5,7,11,17,19,657931$\\\hline \end{tabular} \end{center} The field $K_{\Delta_k,\ell}=\Qb^{\rho_{\Delta_k,\ell}}$ certainly contains the fixed field $\Qb^{\det(\rho_{\Delta_k,\ell})}$ of the determinant representation of $\rho_{\Delta_k,\ell}$. This is a field which is unramified outside $\ell$ and has $(\mathbb{F}_{\ell}^*)^{k-1}$ as its Galois group over ${\mathbb Q}$. If $(k-1,\ell-1)=1$, then this Galois group has order $\ell-1$. 
By class field theory, such an extension corresponds to a suitable quotient of ${\mathbb Z}_\ell^*$. The only quotient of this group of order $\ell-1$ is the quotient by the subgroup of ${\mathbb Z}_\ell^*$ consisting of $\ell$-adic units congruent to $1$ mod $\ell$, and thus the determinantal fixed field is ${\mathbb Q}(\zeta_\ell)$. Since any extension of a field with an infinite class field tower itself has an infinite class field tower (a variant of this easy lemma is proved in the middle of the proof of Theorem \ref{SmallThm}), we have proven the following: \begin{theorem}\label{BigThm} Let $k\in\{12,16,18,20,22,26\}$, and let $\ell$ be a prime such that: \begin{itemize} \item $\l$ is not exceptional for $\Delta_k;$ \item $(k-1,\l-1)=1;$ \item ${\mathbb Q}(\zeta_\l)$ has an infinite class field tower. \end{itemize} Then ${\rm Gal}(K_{\Delta_k,\ell}/{\mathbb Q})={\rm GL}_2(\mathbb{F}_{\ell})$, and $K_{\Delta_k,\l}$ has an infinite class field tower. \end{theorem} We now turn to searching for primes which satisfy the hypotheses of the theorem. As a first example, we note that the prime $\ell=877$ satisfies the conditions of the theorem for the Ramanujan form $\Delta_{12}$. Namely, we have that $\operatorname{gcd}(12-1,877-1)=1$ and that $\ell=877$ is not an exceptional prime for $\Delta_{12}$, so $\rho_{\Delta_{12},877}$ is surjective. Moreover, it is shown in \cite{schoof86} that ${\mathbb Q}(\zeta_{877})$ has an infinite Hilbert class field tower. Theorem~\ref{BigThm} then gives that $K_{\Delta_{12},877}$ has an infinite class field tower. What principally remains is to generalize Schoof's argument to produce a large class of primes $\l$ such that ${\mathbb Q}(\zeta_\l)$ has an infinite class field tower. We do this below, and show that if we assume the Hardy-Littlewood conjecture stated below, the set of such primes is in fact infinite. 
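The mechanically verifiable hypotheses for the pair $(\Delta_{12},\ell=877)$ can be checked by a short computation; a minimal sketch (ours, plain Python, not from the paper):

```python
from math import gcd

k, ell = 12, 877
exceptional_12 = {2, 3, 5, 7, 23, 691}  # exceptional primes for Delta_12, from the table above

assert gcd(k - 1, ell - 1) == 1   # (k-1, ell-1) = 1
assert ell not in exceptional_12  # 877 is not exceptional for Delta_12

# Sanity check: gcd(k-1, ell-1) = 1 is equivalent to the (k-1)-th power map
# being a bijection of F_ell^*, i.e. (F_ell^*)^{k-1} = F_ell^*, which is why
# G_ell = GL_2(F_ell) in this case.
powers = {pow(a, k - 1, ell) for a in range(1, ell)}
assert len(powers) == ell - 1
```

The remaining hypothesis, that ${\mathbb Q}(\zeta_{877})$ has an infinite class field tower, is Schoof's result cited in the text and is not a finite computation of this kind.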
The conjecture arose from their famous ``circle method,'' and should be viewed as the quadratic analogue of Dirichlet's Theorem on primes in arithmetic (i.e., ``linear'') progressions. \begin{conj}[Hardy-Littlewood, \cite{HL23}]\label{HL} If $h(x):=ax^2+bx+c\in{\mathbb Z}[x]$ satisfies: \begin{itemize} \item the quantities $a+b$ and $c$ are not both even; \item the discriminant $D(h):=b^2-4ac$ is not a square; \end{itemize} then $h(x)$ represents infinitely many prime values. \end{conj} We now show that the conjecture implies the existence of infinitely many cyclotomic fields \emph{of prime conductor} with an infinite class field tower. In contrast, most techniques used to construct infinite class field towers (note in particular Remark \ref{FurutaNote}) provide fields with highly composite conductors. \begin{theorem}\label{cyc} Under the assumption of Conjecture \ref{HL}, there exist infinitely many primes $\l$ such that ${\mathbb Q}(\zeta_\l)$ has an infinite class field tower. \end{theorem} \begin{proof} Let $k$ be a positive integer and let $m=12k+2$. Consider the cubic polynomial $$f_m(x)=x^3-mx^2-(m+3)x-1,$$ with discriminant $(m^2+3m+9)^2$. With notation as in the Hardy-Littlewood conjecture, the quadratic polynomial $m^2+3m+9=144k^2+84k+19$ has (viewed as a polynomial in $k$) odd $c$ and the non-square discriminant $$D(144k^2+84k+19)=-3888.$$ Thus, assuming the conjecture, there are infinitely many values of $k$ for which $m^2+3m+9=:\l$ is prime. For the remainder of the proof, we restrict to such $k$, $m$, and $\l$. In this case, the splitting field $F_m$ of $f_m(x)$ is one of Shanks' ``simplest cubic fields,'' a totally real cyclic cubic field of prime conductor $\l$ (see \cite{shanks74}). Thus $F_m$ is the unique cubic subfield of ${\mathbb Q}(\zeta_\l)$. Now note that since $m\equiv 2$ mod $12$, we have $\l\equiv 7$ mod $12$. Since $\l\equiv 1$ mod $6$, there is a unique sextic subfield of ${\mathbb Q}(\zeta_\l)$, which we denote by $L_m$. 
Further, $6\nmid \frac{\l-1}{2}$, so $L_m\not\subset{\mathbb Q}(\zeta_\l)^+$ is totally imaginary. Let $h$ be the class number of $F_m$ and $L_m'=L_mF_m^{(1)}$, giving the following diagram of fields: \begin{equation*} \xymatrix@R=1pc@C=3pc{ {\mathbb Q}(\zeta_\l)\ar@{-}[dd]_{2}&&L_m'\ar@{-}[dd]^2\\ &L_m\ar@{-}[ul]_(.5){\frac{\l-1}{6}}\ar@{-}[dd]_2\ar@{-}[ur]^(.5){h}\\ {\mathbb Q}(\zeta_\l)^+\ar@{-}[dd]_{\frac{\l-1}{2}}&&F_m^{(1)}\\ &F_m\ar@{-}[ul]\ar@{-}[dl]^3\ar@{-}[ur]_{h}\\ {\mathbb Q} &} \end{equation*} Denote by $d_2E_K$ the 2-rank of the unit group of a number field $K$. Applying \cite[Proposition 3.3]{schoof86} to the extension $L_m'/F_m^{(1)}$, a sufficient condition for $L_m'$ to have an infinite class field tower is that the number $\rho$ of ramified primes in this extension satisfies $$\rho\geq 3+d_2E_{F_m^{(1)}}+2\sqrt{d_2E_{L_m'}+1}.$$ Since $L_m'$ is totally imaginary and $F_m^{(1)}$ is totally real, Dirichlet's Unit Theorem shows that the right-hand side of this inequality equals $3+3h+2\sqrt{3h+1}$. Now we count ramified primes: First, all $3h$ infinite places of $F_m^{(1)}$ ramify in $L_m'$. Second, since $L_m/F_m$ is totally ramified at the prime $\lambda$ of $F_m$ above $\l$, and $\lambda$ (being principal) splits completely in $F_m^{(1)}$, the extension $L_m'/F_m^{(1)}$ is ramified also at the $h$ primes of $F_m^{(1)}$ above $\l$. In sum, this gives $\rho=4h$, and we find that the inequality is satisfied whenever $h\geq 18$. Using that $h\rightarrow\infty$ as $m\rightarrow \infty$ (see \cite{shanks74}), we now get infinitely many $m$ such that $L_m'$ has an infinite class field tower. Note that $F_m^{(1)}$ and $L_m$ are linearly disjoint over $F_m$ since the former is unramified, and the latter is totally ramified at primes above $\l$. By disjointness, $L_m'/L_m$ is abelian and unramified, and so $L_m$, and consequently ${\mathbb Q}(\zeta_\l)$, also have infinite class field towers. 
\end{proof} \begin{corollary}\label{Cor} Assume Conjecture $\ref{HL}$. Then for each $k\in\{12,16,18,20,22,26\}$, there are infinitely many primes $\l$ such that $K_{\Delta_k,\l}$ has an infinite class field tower. \end{corollary} \begin{proof} It is easy to verify that primes $\l$ of the form $m^2+3m+9$ are never congruent to 1 mod $(k-1)$ for each $k$ in the list, so $(\l-1,k-1)=1$ for any $\l$ constructed by the theorem. Avoiding the finitely many primes in the table given before Theorem~\ref{BigThm} for which the representation is not surjective, the remaining primes constructed in Theorem~\ref{cyc} satisfy all of the hypotheses of Theorem~\ref{BigThm}. \end{proof} \begin{remark} Hardy and Littlewood also provide an asymptotic version of their conjecture (see again \cite{HL23}). Applied to the polynomial $h(k)$ used in the proof, the number $P_h(x)$ of primes less than $x$ which are represented by $h(k)=144k^2+84k+19$ satisfies $$ P_h(x)\sim \frac{1}{4}\prod_{p=5}^\infty\left(1-\frac{\left(\frac{-3888}{p}\right)}{p-1}\right)\frac{\sqrt{x}}{\log x}\approx \frac{.28\sqrt{x}}{\log x}, $$ where we have used SAGE to approximate the constant by including the terms in the product for all primes $p\leq 10^7$. This also provides an asymptotic lower bound for the number of primes $\l\leq x$ such that ${\mathbb Q}(\zeta_\l)$ has an infinite class field tower. \end{remark} Finally, we remark that setting $m=12k+2$ in the proof of Theorem \ref{cyc} was overly restrictive on our choice of $m$, designed only to ensure that $\l\equiv 7\mod 12$. This can be equally well achieved by insisting that $m\equiv 2,7,10,\text{ or }11\mod 12$. 
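Two of the finite computational claims made above are easy to confirm by brute force: the congruence claim in the proof of Corollary~\ref{Cor}, and the threshold $h\geq 18$ in the proof of Theorem~\ref{cyc}. A quick check (our sketch, plain Python, not from the paper):

```python
import math

# Claim from the proof of the corollary: ell = m^2 + 3m + 9 is never
# congruent to 1 mod (k-1) for the listed weights k.  It suffices to
# check every residue of m modulo k-1.
for k in (12, 16, 18, 20, 22, 26):
    q = k - 1
    assert all((m * m + 3 * m + 9) % q != 1 for m in range(q)), k

# Threshold from the proof of the theorem: rho = 4h satisfies the
# Golod-Shafarevich-type inequality 4h >= 3 + 3h + 2*sqrt(3h + 1)
# exactly when h >= 18.
satisfied = [h for h in range(1, 200)
             if 4 * h >= 3 + 3 * h + 2 * math.sqrt(3 * h + 1)]
assert satisfied == list(range(18, 200))
```

The second check also confirms that $h=17$ narrowly fails, so $h\geq 18$ is the exact cutoff for this argument.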
Searching Shanks' Table 1 for such values of $m$ giving $h\geq 18$ yields the first few examples of ${\mathbb Q}(\zeta_\l)$ produced by the proof: $\l\in\{2659,3547,5119,8563,\ldots\}.$ Note that for each of these specific values of $\l$, the hypotheses of Theorem \ref{BigThm} are verified unconditionally; the Hardy-Littlewood conjecture is used only to guarantee that there are infinitely many primes in this list. \section{Infinite Class Field Towers from non-CM Elliptic Curves} A second class of fields of arithmetic interest we discuss are the fixed fields of representations attached to elliptic curves. Let $E$ be an elliptic curve without complex multiplication. Let $\rho_n:\gal(\bQ/\Q)\to{\rm GL}_2({\mathbb Z}/n)$ be the Galois representation associated to the $n$-torsion points of $E$. Let $A_E$ be the product of all the exceptional primes of $E$, the finite set of primes $\l$ such that $\rho_\l$ is not surjective. Then for all $n$ relatively prime to $M_E:=30A_E$, the representation $\rho_n$ is surjective, and hence ${\rm Gal}(K_n/{\mathbb Q})\cong{\rm GL}_2({\mathbb Z}/n)$ (see \cite{serre72,serre-abelian,kani05}). We note that this set of primes, and hence the constants $A_E$ and $M_E$, have been studied extensively. In particular, we note that explicit bounds on these constants are known (see \cite[Theorem A.1 and Theorem 2]{Co05}), and $M_E=30$ for almost all elliptic curves (see Remark~\ref{DukeNote}). For a number field $F$, let $F^{(m)}$ denote the $m$-th step in the Hilbert class field tower over $F$. \begin{theorem}\label{SmallThm} Let $E/{\mathbb Q}$ be an elliptic curve without complex multiplication, and let $\rho_n$, $K_n$, $A_E$, and $M_E$ be as in the preceding paragraph. Let $S$ be the set of integers prime to $M_E$. Then for all $n\in S$ outside a subset of density zero, the field $K_n$ has an infinite Hilbert class field tower. 
Furthermore, for such $n$, there is a natural surjection of class groups $\operatorname{Cl}(K_n^{(m)})\rightarrow \operatorname{Cl}({\mathbb Q}(\zeta_n)^{(m)})$ for each $m\geq 1$. \end{theorem} \begin{proof} For $n$ prime to $M_E$, the representation $\rho_n$ is surjective, and so $\text{Image}(\rho_n)\cong{\rm Gal}(K_{n}/{\mathbb Q})\cong{\rm GL}_2({\mathbb Z}/n).$ The fixed field of the kernel of the composite representation $\gal(\bQ/\Q)\to({\mathbb Z}/n)^*$ arising from the exact sequence $$ 1\to{\rm SL}_2({\mathbb Z}/n)\to{\rm GL}_2({\mathbb Z}/n)\to({\mathbb Z}/n)^*\to 1 $$ is the $n$-th cyclotomic field ${\mathbb Q}(\zeta_n)$, and so we have an inclusion of fields ${\mathbb Q}\subset {\mathbb Q}(\zeta_n)\subset K_n$. By a result of Shparlinski (see \cite{shparlinski08}), the set of $n$ coprime to $M_E$ for which ${\mathbb Q}(\zeta_n)$ has an infinite class field tower has density one in the set of integers coprime to $M_E$. For the remainder of the proof, $n$ will denote such an integer. We claim that the fields ${\mathbb Q}(\zeta_n)^{(m)}$ and $K_{n}$ are linearly disjoint extensions of ${\mathbb Q}(\zeta_n)$ for all $m\geq 1$. Let $H_{n,m}$ denote their intersection. Consider the lattice diagram of fields: \begin{equation*} \xymatrix@R=1pc@C=1pc{ \text{ }{\mathbb Q}(\zeta_n)^{(m)}\ar@{-}[ddr] \ar@{-}[rd]_{} & & K_n\ar@{-}[ld]\ar@{-}[ldd] \\ & H_{n,m}\ar@{-}[d] &\\ & {\mathbb Q}(\zeta_n) &} \end{equation*} Since ${\mathbb Q}(\zeta_n)^{(m)}/{\mathbb Q}(\zeta_n)$ is a Galois extension with solvable Galois group (being constructed via a series of abelian extensions), so is $H_{n,m}/{\mathbb Q}(\zeta_n)$. But ${\rm Gal}(K_n/{\mathbb Q}(\zeta_n))\cong {\rm SL}_2({\mathbb Z}/n)$ is perfect for $(n,30)=1$ (see \cite{serre-abelian}), and so has no non-trivial solvable quotients. Thus ${\rm Gal}(H_{n,m}/{\mathbb Q}(\zeta_n))$ is trivial and the two fields are linearly disjoint. 
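The perfectness of ${\rm SL}_2({\mathbb Z}/n)$ invoked in this step can be confirmed by direct computation for small $n$ coprime to $30$. The following sketch (ours, plain Python, not from the paper; $n=7$ keeps the search small) checks that the commutators generate the whole group:

```python
from itertools import product

n = 7  # smallest modulus > 1 coprime to 30

def mul(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return (((a * e + b * g) % n, (a * f + b * h) % n),
            ((c * e + d * g) % n, (c * f + d * h) % n))

def inv2(A):
    # inverse in SL_2: the determinant is 1, so the adjugate is the inverse
    (a, b), (c, d) = A
    return ((d % n, (-b) % n), ((-c) % n, a % n))

SL2 = {((a, b), (c, d)) for a, b, c, d in product(range(n), repeat=4)
       if (a * d - b * c) % n == 1}

# all commutators [A, B] = A B A^{-1} B^{-1}
derived = {mul(mul(A, B), mul(inv2(A), inv2(B))) for A in SL2 for B in SL2}

# close the commutator set under multiplication to get the derived subgroup
changed = True
while changed:
    changed = False
    for A in list(derived):
        for B in list(derived):
            C = mul(A, B)
            if C not in derived:
                derived.add(C)
                changed = True

assert derived == SL2  # SL_2(Z/7) equals its own derived subgroup: it is perfect
```

The same brute-force check works for any small $n$ coprime to $30$, though the group order $n^3\prod_{p\mid n}(1-p^{-2})$ grows quickly.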
Let $L_{n,m}$ be the compositum $K_n{\mathbb Q}(\zeta_n)^{(m)}$, giving the following diagram for each $m\geq 1$. \begin{equation*} \xymatrix@R=.5pc{ L_{n,m}\ar@{-}[dd]\ar@{-}[dr]&\\ &K_n\ar@{-}[dd]\\ {\mathbb Q}(\zeta_n)^{(m)}\ar@{-}[dr]& \\ &{\mathbb Q}(\zeta_n)} \end{equation*} By linear disjointness, we have ${\rm Gal}(L_{n,m}/{\mathbb Q}(\zeta_n)^{(m)})\cong {\rm Gal}(K_n/{\mathbb Q}(\zeta_n))\cong {\rm SL}_2({\mathbb Z}/n)$ for all $m\geq 0$. Further, each extension $L_{n,m}/L_{n,m-1}$ is unramified abelian. Thus for each $m\geq 0$, $L_{n,m}$ is an unramified solvable extension of $L_{n,0}=K_n$, and hence is contained in the Hilbert class field tower over $K_n$. Since the degrees of $L_{n,m}$ go to infinity with $m$, we see that $K_n$ has an infinite class field tower, as desired. Finally, $L_{n,m}$ is an unramified extension of $K_n$ with ${\rm Gal}(L_{n,m}/K_n)$ solvable of derived length $\leq m$, so is contained in the maximal such extension $K_n^{(m)}$. The restriction map on Galois groups \begin{equation*} \xymatrix@R=.5pc{ {\rm Gal}(K_n^{(m+1)}/K_n^{(m)})\ar@{->>}[r]&{\rm Gal}(K_n^{(m)}{\mathbb Q}(\zeta_n)^{(m+1)}/K_n^{(m)})\cong {\rm Gal}({\mathbb Q}(\zeta_n)^{(m+1)}/{\mathbb Q}(\zeta_n)^{(m)}) } \end{equation*} corresponds by class field theory to the desired surjection $\operatorname{Cl}(K_n^{(m)})\rightarrow \operatorname{Cl}({\mathbb Q}(\zeta_n)^{(m)})$ of class groups. \end{proof} \begin{remark}\label{FurutaNote} The density result of Shparlinski used in the proof is based on an explicit construction due to Furuta (see \cite{furuta72}, Theorem 4). Namely, we can find explicit examples of the $n$ described in the theorem by choosing a rational prime $\l$ and taking $n$ to be a product of nine or more rational primes congruent to 1 mod $\l$ and prime to $M_E$. The field $K_n$ then has an infinite $\l$-class field tower. For example, consider the elliptic curve $$E:y^2+y=x^3-x$$ of conductor $37$. 
By \cite[5.5.6, Page 310]{serre72} we find that $A_E=1$ and so $M_E=30$. We take $n$ to be the product of the first nine primes which are congruent to $1$ mod $5$: $$n=11\cdot31\cdot41\cdot61\cdot71\cdot101\cdot131\cdot151\cdot181.$$ Then $K_{n}$ has an infinite Hilbert $5$-class field tower and is unramified outside primes dividing $37n$. \end{remark} \begin{remark}\label{DukeNote} The constant $A_E$ in Serre's Theorem has been studied extensively. In \cite{duke97} it was shown that almost all elliptic curves over ${\mathbb Q}$ have Serre constant $A_E=1$ (and thus $M_E=30$). Thus for almost all elliptic curves over ${\mathbb Q}$ and almost all integers $n$ with $(n,30)=1$, the field $K_n$ has an infinite class field tower. \end{remark} Finally, we note that variants of these techniques apply to other arithmetically significant fields. For example, let $E$ be a semistable elliptic curve over ${\mathbb Q}$ of prime conductor $\l\geq11$, and suppose that ${\mathbb Q}(\zeta_\l)$ has an infinite Hilbert class field tower. By the above techniques, $K_{E,\l}$ has an infinite Hilbert class field tower. Further, $K_{E,\l}$ is unramified outside $\l$, and by a well-known theorem of Mazur (see \cite{mazur78}), we have ${\rm Gal}(K_{E,\l}/{\mathbb Q})=\Gl_2({\mathbb Z}/\l)$. Unfortunately, it is unknown if there exists an elliptic curve of conductor $\l$ for infinitely many primes $\l$ (though this is widely believed). Regardless, an examination of Cremona's tables of elliptic curves (see \cite{cremona-tables}) shows that there are no elliptic curves with conductor $877$ but that there exist elliptic curves with conductor $3547$, the second prime provided in the discussion after Corollary \ref{Cor}. Thus we can find examples of elliptic curves to which this variant approach applies. \bibliographystyle{amsplain}
https://arxiv.org/abs/1801.05355
A Local-global principle for isogenies of composite degree
Let $E$ be an elliptic curve over a number field $K$. If for almost all primes of $K$, the reduction of $E$ modulo that prime has a rational cyclic isogeny of fixed degree, we can ask whether this forces $E$ to have a cyclic isogeny of that degree over $K$. Building upon the work of Sutherland, Anni, and Banwait-Cremona in the case of prime degree, we consider this question for cyclic isogenies of arbitrary degree.
\section{Introduction} Fix a number field $K$. Let $E$ be an elliptic curve defined over $K$. For all primes $\mathfrak{p}$ of $K$ where $E$ has good reduction, let $E_\mathfrak{p}$ denote the reduction of $E$ modulo $\mathfrak{p}$. If $E$ has some level structure over $K$, such as a rational torsion point or a rational isogeny, then for almost all $\mathfrak{p}$, the reduction $E_\mathfrak{p}$ does as well. Say that $E$ has level structure \defi{locally at $\mathfrak{p}$} if $E_\mathfrak{p}$ has such structure. One can then ask about a converse: if $E$ has some structure locally for almost all $\mathfrak{p}$, does $E$ necessarily have such structure over $K$? Katz originally asked this question for the property that $m \mid \#E(K)_\text{tors}$. (Note that this in particular asks about rational $\ell$-torsion points, where $\ell$ is prime.) He showed that it is not true in general; however, $E$ is always isogenous (over $K$) to a curve $E'$ such that $m \mid \#E'(K)_\text{tors}$ \cite[Theorem 2]{katz}. Sutherland asked the analogous question for the property of having an isogeny of degree $\ell$, for a fixed prime $\ell$. In \cite{sutherland}, he showed that this question again has a negative answer in general. This local-global question cannot be salvaged by considering isogenous curves since the property of having an isogeny of prime degree $\ell$ is itself an isogeny invariant. However, Sutherland gives a classification of the exceptions that implies, in particular, that if $\ell \equiv 1 \pmod{4}$ and $\sqrt{\ell} \not\in K$, then $E$ \emph{does} have an isogeny of degree $\ell$ over $K$. Anni in \cite{anni} then proved that for any fixed number field $K$, there are only finitely many primes $\ell$ such that there exists an elliptic curve $E/K$ with an isogeny of degree $\ell$ locally almost everywhere, but not an isogeny of degree $\ell$ over $K$. Moreover, if $\ell \neq 5,7$, there are only finitely many $j$-invariants of curves over $K$ which give exceptions. 
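Sutherland's negative answer is group-theoretic at its core: there exist subgroups $G\subseteq {\rm GL}_2(\mathbb{F}_\ell)$ in which every single element fixes some line of $\mathbb{F}_\ell^2$, yet no line is fixed by all of $G$ simultaneously. The following sketch (ours, plain Python; $\ell=13$ and the generators are chosen purely for illustration and are not claimed to arise as an actual Galois image) exhibits such a subgroup:

```python
ell = 13  # illustrative prime

def mul(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return (((a * e + b * g) % ell, (a * f + b * h) % ell),
            ((c * e + d * g) % ell, (c * f + d * h) % ell))

# subgroup generated by diag(1, 4) and the swap matrix
gens = [((1, 0), (0, 4)), ((0, 1), (1, 0))]
G = set(gens)
frontier = list(gens)
while frontier:  # close the generating set under multiplication
    A = frontier.pop()
    for B in gens:
        C = mul(A, B)
        if C not in G:
            G.add(C)
            frontier.append(C)

# the ell + 1 lines of P^1(F_ell), one representative vector each
lines = [(1, t) for t in range(ell)] + [(0, 1)]

def fixes(g, v):
    w = (g[0][0] * v[0] + g[0][1] * v[1], g[1][0] * v[0] + g[1][1] * v[1])
    return (v[0] * w[1] - v[1] * w[0]) % ell == 0  # g*v is parallel to v

every_element_fixes_a_line = all(any(fixes(g, v) for v in lines) for g in G)
some_common_line = any(all(fixes(g, v) for g in G) for v in lines)
assert every_element_fixes_a_line and not some_common_line
```

A mod-$\ell$ Galois image of this shape would, by the Chebotarev argument recalled below, give an $\ell$-isogeny locally almost everywhere without a global one; whether such an image actually occurs for a given $K$ is exactly what the classifications of Sutherland and Anni settle.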
The key to the proof of these theorems is to translate the problem into a purely group theoretic statement about the image of the mod $\ell$ Galois representation, \[ \rho_{E,\ell} \colon G_K \rightarrow \operatorname{Aut}(E[\ell]) \simeq \text{GL}_2(\mathbb{Z}/\ell\mathbb{Z}), \] where $G_K$ denotes the absolute Galois group $\text{Gal}(\bar{K}/K)$. There exists a $K$-rational isogeny of degree $\ell$ if and only if $G = \Im(\rho_{E,\ell})$ preserves a $1$-dimensional subspace; the Chebotarev density theorem shows that $E_\mathfrak{p}$ admits a $k_\mathfrak{p}$-rational isogeny of degree $\ell$ for almost all $\mathfrak{p}$ if and only if \emph{every element} of $G$ preserves a $1$-dimensional subspace. In this paper we extend these results to cyclic isogenies of composite degree $N$, which we will refer to as \defi{$N$-isogenies} for the remainder of the paper. While the property of having an $\ell$-isogeny is an isogeny invariant, this is not true for isogenies of composite degree. For this reason, following Katz, we will focus on ($K$-rational) isogeny classes of elliptic curves. For any field $k$ and elliptic curve $E/k$, we will denote by $\mathscr{C}(E) = \mathscr{C}(E/k)$ the $k$-rational isogeny class of $E$. We say that $\mathscr{C}(E)$ \defi{has an $N$-isogeny} if there exist $E_1, E_2 \in \mathscr{C}(E)$ and an isogeny $E_1 \to E_2$ of degree $N$. We then ask: if for almost all $\mathfrak{p}$, $\mathscr{C}(E_\mathfrak{p}/k_\mathfrak{p})$ has an $N$-isogeny (i.e.\ $\mathscr{C}(E)$ has an $N$-isogeny locally almost everywhere), must $\mathscr{C}(E)$ have one as well? When this is true, we say that $\mathscr{C}(E)$ satisfies the local-global principle for $N$-isogenies. For any particular $E$ defined over $K$, if $\mathscr{C}(E)$ satisfies the local-global principle, we say that $E$ \defi{satisfies the local-global principle up to isogeny}. 
If not, we say that $\mathscr{C}(E)$ is an \defi{exceptional isogeny class} and that $E$ is \defi{exceptional} (for $N$-isogenies) over $K$. One checks that for $j \neq 0,1728$, this depends only upon the $j$-invariant of $E$; it is therefore convenient to say that $(N, j)$ is exceptional for $K$ if there exists a curve $E/K$ with $j(E) = j$ that is exceptional for $N$-isogenies over $K$. Since this specializes to Sutherland's question when $N$ is prime, there are necessarily exceptions. Our first theorem is the following finiteness statement for counterexamples: \begin{ithm}\label{mainthm} For any number field $K$, \begin{enumerate} \item There exist only finitely many $j$-invariants $j \in K$ such that $(N,j)$ is exceptional for some $N$ with $\gcd(N,70)=1$. \item For a fixed integer $N$ not in the set \[\big\{5, 7, 8, 10, 16, 24, 25, 32, 40, 49, 50, 72\big\},\] there are only finitely many $j$-invariants $j$ such that $(N, j)$ is an exceptional pair. \end{enumerate} \end{ithm} If $N = \prod_i \ell_i^{n_i}$, then $\mathscr{C}(E)$ has an $N$-isogeny locally almost everywhere if and only if for all $i$, $\mathscr{C}(E)$ has an $\ell_i^{n_i}$-isogeny locally almost everywhere. And $\mathscr{C}(E)$ has a global $N$-isogeny if and only if $\mathscr{C}(E)$ has a global $\ell_i^{n_i}$-isogeny for all $i$. For this reason, we begin by classifying exceptions when $N = \ell^n$ is a prime power. Given the results of \cite{sutherland, anni, bc} in the case $n=1$, a key component of the proof of Theorem \ref{mainthm} is understanding when a global $\ell$-isogeny can be ``lifted'' to a global $\ell^n$-isogeny using the local data of an $\ell^n$-isogeny locally almost everywhere. We make the following definition: \begin{defin} Assume that $\mathscr{C}(E)$ globally has an $\ell$-isogeny, and locally almost everywhere has an $\ell^n$-isogeny. 
If $\mathscr{C}(E)$ does not globally have an $\ell^n$-isogeny, then we say that $\ell$ is a \defi{lift-exceptional prime} for $E/K$ and call $(\ell^n, j(E))$ a \defi{lift-exceptional pair} for $K$. If there exists an $n$ and a prime $\ell$ such that $(\ell^n, j(E))$ is a lift-exceptional pair, then we say that $j(E)$ is a \defi{lift-exceptional $j$-invariant}. \end{defin} Using this terminology, we prove \begin{ithm}\label{ellnthm} Let $E$ be an elliptic curve over a number field $K$, and $\ell$ an odd prime such that \textbf{$E$ has a global $\ell$-isogeny} and $\mathscr{C}(E)$ locally almost everywhere has an $\ell^n$-isogeny. \begin{enumerate}[(i)] \item If $\ell^k$ is the minimal lift-exceptional power of $\ell$, then $k$ is odd. Writing $k=2m+1$, for some $E'$ in the isogeny class of $E$, the image of $\rho_{E', \ell^{k}}$ must, up to conjugacy, be contained in the group \[ R(\ell^{2m+1}) = \left\{ \begin{pmatrix} r & s \\ \ell^{2m} (\epsilon s) & \epsilon t \end{pmatrix} \ : \ r\equiv t \pmod{\ell^{m+1}}, \epsilon = \pm 1 \right\}. \] Furthermore, $\mathscr{C}(E_F)$ has an $\ell^k$-isogeny for some extension $F$ with $[F:K] = 2$. \item If there exists $j \in K$ such that for some $n\geq 2$, $(\ell^n, j)$ is lift-exceptional, then $\ell \leq 6[K:\mathbb{Q}] +1$. \item There are finitely many lift-exceptional $j$-invariants $j(E)$ over $K$. \end{enumerate} \end{ithm} \begin{rem} Note, in particular, that there are no elliptic curves with an $\ell$-isogeny for which the local-global principle fails for $\ell^2$-isogenies (up to taking isogenous curves). \end{rem} \begin{rem} This result bounds exceptional primes by a constant depending only upon $K$, not $E$. It is not possible to strengthen this to an absolute constant. For any $\ell$, there exists a number field $K$ and an elliptic curve $E$ defined over $K$ such that $\ell$ is a lift-exceptional prime (see Corollary \ref{counter} below). \end{rem} Theorem \ref{ellnthm} applies only to odd primes $\ell$. 
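As a quick computational sanity check (illustrative only, and not needed for any of the proofs), one can verify numerically that the set $R(\ell^{2m+1})$ appearing in Theorem \ref{ellnthm} is closed under matrix multiplication. A minimal Python sketch for $\ell = 3$, $m = 1$, working modulo $27$ with randomly chosen elements:

```python
import random

L, M = 3, 1                 # l = 3, m = 1, so we work modulo l^(2m+1) = 27
MOD = L ** (2 * M + 1)      # 27
P1 = L ** (M + 1)           # l^(m+1) = 9, modulus for the congruence r = t
P2 = L ** (2 * M)           # l^(2m) = 9, factor in the lower-left entry

def element(r, s, t, eps):
    """Matrix [[r, s], [l^(2m)*eps*s, eps*t]] in R(l^(2m+1)), entries mod 27."""
    assert (r - t) % P1 == 0 and eps in (1, -1)
    return ((r % MOD, s % MOD), ((P2 * eps * s) % MOD, (eps * t) % MOD))

def mul(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return (((a * e + b * g) % MOD, (a * f + b * h) % MOD),
            ((c * e + d * g) % MOD, (c * f + d * h) % MOD))

def in_R(Mx):
    """Membership test: some sign eps puts Mx in the required shape."""
    (a, b), (c, d) = Mx
    for eps in (1, -1):
        if c % MOD == (P2 * eps * b) % MOD and (a - eps * d) % P1 == 0:
            return True
    return False

random.seed(0)
for _ in range(2000):
    r1, r2 = random.randrange(MOD), random.randrange(MOD)
    A = element(r1, random.randrange(MOD), r1 + P1 * random.randrange(L ** M),
                random.choice((1, -1)))
    B = element(r2, random.randrange(MOD), r2 + P1 * random.randrange(L ** M),
                random.choice((1, -1)))
    assert in_R(mul(A, B))  # R(27) is closed under multiplication
```

The sign of the product is the product of the signs, matching the intuition that $R(\ell^{2m+1})$ maps onto $\{\pm 1\}$ with kernel the matrices with $\epsilon = 1$.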
Similar to part (i), in Proposition \ref{2exceptions} we classify all exceptional Galois representations for $2^n$-torsion for $n \leq 6$ (in particular minimal ones). This covers all counterexamples to the local-global principle for $2^n$-isogenies that could occur infinitely often over a number field. When $E/\bar{K}$ has the extra structure of complex multiplication by an order $\O$ in an imaginary quadratic field $F$, we can classify exceptional pairs up to a factor that is polynomially bounded in the degree of $K$ over $\mathbb{Q}$. We denote the Hilbert class field of $F$ by $H_F$. \begin{ithm}\label{main_cm} Let $E/K$ be an elliptic curve with geometric complex multiplication by an order $\O \subset \O_F$. If $C$ is any integer of the form $C = \prod_i \ell_i^{n_i}$ where for all $i$, $\ell_i$ splits in $F$ and one of the following holds: \begin{itemize} \item $\ell_i \equiv 1 \pmod{4}$ and $K \supset \mathbb{Q}(\sqrt{\ell_i})$, or \item $\ell_i \equiv 3 \pmod{4}$ and $KF = K(\sqrt{-\ell_i})$, or \item $\ell_i = 2$, $n_i \geq 3$ and $K \supset \mathbb{Q}(\sqrt{2})$ and $KF = K(\sqrt{-2})$, \end{itemize} then $E$ has a $C$-isogeny locally almost everywhere up to isogeny. Conversely, if $E$ has an $N$-isogeny locally almost everywhere up to isogeny, then there exist relatively prime numbers $A, B, C$ such that $N = ABC$ and \[A \leq (\#\O_F [KF : H_F])^4 \leq (6[K:\mathbb{Q}])^4, \] $E$ has a $B$-isogeny up to isogeny, $E$ fails to have a $C$-isogeny up to isogeny, and $C$ is of the form above; if $F \subset K$ then $C=1$. \end{ithm} When $N = \ell$ is prime, the quartic polynomial bound for $A$ can be improved to linear. More precisely, we show in Section \ref{cm} that for $\ell > 6[K:\mathbb{Q}]+1$, the pair $(\ell, j(E))$ is exceptional if and only if $F \not \subset K$ and $\ell$ is of the form of the integer $C$ in the statement of Theorem \ref{main_cm}. 
This shows that if $K$ does not contain the CM field $F$, then exceptional primes can be arbitrarily large compared to the degree $[K:\mathbb{Q}]$, correcting \cite[Lemma 6.1]{anni}. In addition to classifying exceptions for general number fields $K$, Sutherland proves that there is exactly one counterexample to the local-global principle for $\ell$-isogenies when $K=\mathbb{Q}$, namely $(\ell, j(E)) = (7, 2268945/128)$. We extend this to prime power degrees and prove \begin{ithm}\label{rationalpoints} Let $\mathscr{C}(E/\mathbb{Q})$ have an $\ell^n$-isogeny locally almost everywhere. If $E$ is not $\mathbb{Q}$-isogenous to a curve with an $\ell^n$-isogeny over $\mathbb{Q}$, then $\ell^n$ and $j(E)$ are in the following list: \begin{enumerate} \item $\ell=7$, $n=1$ or $n=2$, and $j(E) = 2268945/128$. \item $\ell=2$, $n=4$, and $j(E) = j(t_0)$ for \[j(t) = \frac{-4t^8 + 110640t^6 - 221336t^4 + 110640t^2 - 4}{64t^6 - 128t^4 + 64t^2},\] and $t_0 \in \mathbb{Q}$. \end{enumerate} Conversely, $j = 2268945/128$ is exceptional for $7$- and $7^2$-isogenies, and for every $t_0 \in \mathbb{Q}$ an elliptic curve with $j$-invariant $j(t_0)$ has a $2^4$-isogeny locally almost everywhere up to isogeny. For $t_0$ outside of a thin set, $(2^4,j(t_0))$ is exceptional. \end{ithm} Whereas in the case of prime degree isogenies there were only finitely many counterexamples over $\mathbb{Q}$, part (2) of Theorem \ref{rationalpoints} gives an infinite family of exceptions for isogenies of degree a power of $2$. As for isogenies of prime degree, these theorems are proved by analyzing the mod $N$ Galois representation attached to $E$ and modular curves $X_H$ for exceptional subgroups $H \subseteq \text{GL}_2(\mathbb{Z}/N\mathbb{Z})$. The paper is organized as follows: results on prime power isogenies and lift-exceptional pairs are covered in Sections \ref{lemup} -- \ref{jinvs} of the paper. 
In Section \ref{prelim} we cover the basic preliminaries on Galois representations and modular curves, to set the stage for the remainder of the paper. We also lay out a framework for the group-theoretic analysis to follow. The group theory necessary to classify lift-exceptional subgroups of $\text{GL}_2(\mathbb{Z}/\ell^n \mathbb{Z})$ is contained in Sections \ref{lemup} and \ref{exceptional}. At this point we will be able to deduce part (i) of Theorem \ref{ellnthm}. The results of previous sections do not apply to the prime $2$, so in Section \ref{two} we summarize the techniques used to computationally investigate this problem. In Section \ref{boundell} we prove part (ii) of Theorem \ref{ellnthm}, which bounds lift-exceptional $\ell$. We prove part (iii) of Theorem \ref{ellnthm} and the finiteness result of Theorem \ref{mainthm} in Section \ref{jinvs} by examining the appropriate modular curves. Section \ref{cm} contains more precise characterizations of exceptional primes for curves with complex multiplication and a proof of Theorem \ref{main_cm}. Finally, in Section \ref{qpoints}, we turn to the problem of finding the rational points on the relevant modular curves to prove Theorem \ref{rationalpoints}. This relies heavily on the recent work of Rouse--Zureick-Brown and Sutherland--Zywina. \subsection*{Acknowledgements} I would like to thank Andrew Sutherland for useful conversations, suggestions, and data on subgroups of $\text{GL}_2(\mathbb{Z}/\ell^n\mathbb{Z})$ for small $\ell$ and $n$. I would also like to thank David Zureick-Brown and Jeremy Rouse for help with finding explicit equations for relevant modular curves. Finally, I would like to thank Alina Cojocaru, Noam Elkies, Dick Gross, Nathan Jones, Eric Larson, Bjorn Poonen, Barry Mazur, and Dmitry Vaintrob for comments and discussions. This research was supported in part by the National Science Foundation Graduate Research Fellowship Program under Grant No. 1122374 as well as Grant DMS-1601946. 
\section{Preliminaries}\label{prelim} \subsection{Galois Representations} Let $E$ be an elliptic curve over $K$ and $N$ a natural number. Fix an algebraic closure $\bar{K}$ of $K$. We will denote by $E[N]$ the group of $N$-torsion points of $E$ over $\bar{K}$, which is endowed with a linear action of $\text{Gal}(\bar{K}/K) \equalscolon G_K$. We therefore have a map $G_K \to \operatorname{Aut}(E[N])$. Choosing an isomorphism $(\mathbb{Z}/N\mathbb{Z})^2 \simeq E[N]$ gives an identification of $\operatorname{Aut}(E[N])$ with $\text{GL}_2(\mathbb{Z}/N\mathbb{Z})$ by action on the left (via column vectors). This defines a representation \[\rho_{E,N} \colon G_K \to \text{GL}_2(\mathbb{Z}/N\mathbb{Z}), \] which we refer to as the \defi{mod $N$ Galois representation} attached to $E/K$. Choosing compatible bases for all $E[N]$, we can define the \defi{adelic Galois representation} \[\rho_{E, \infty} \colon G_K \to \operatorname{Aut}\left( \lim_{N} E[N]\right) \simeq \text{GL}_2(\hat{\mathbb{Z}}). \] Composition with reduction modulo $N$ recovers the mod $N$ Galois representation $\rho_{E, N}$. Similarly composition with the surjection $\hat{\mathbb{Z}} \to \mathbb{Z}_\ell$ defines the \defi{$\ell$-adic Galois representation} \[ \rho_{E,\ell^\infty} \colon G_K \rightarrow \text{GL}_2(\mathbb{Z}_\ell), \] which is the inverse limit of the mod $\ell^n$ Galois representations. Let us fix the notation $G \colonequals G_E \subset \text{GL}_2(\mathbb{Z}_\ell)$ for the image of the $\ell$-adic Galois representation of an elliptic curve $E/K$. Let $G(\ell^n)$ denote the image of $G$ under reduction $\text{GL}_2(\mathbb{Z}_\ell) \to \text{GL}_2(\mathbb{Z}/\ell^n \mathbb{Z})$, i.e.\ the image of the mod $\ell^n$-representation. 
\subsection{Subgroups of $\text{GL}_2(\mathbb{Z}/\ell^n \mathbb{Z})$} Recall that a \defi{Borel subgroup} of $\text{GL}_2(\mathbb{Z}/\ell^n \mathbb{Z})$ is the subgroup of automorphisms of $(\mathbb{Z}/\ell^n \mathbb{Z})^2$ that preserve a specified submodule $L \simeq \mathbb{Z}/\ell^n \mathbb{Z} \subset (\mathbb{Z}/\ell^n \mathbb{Z})^2$ (which we will refer to as a line). Choosing a basis compatible with the flag $L \subset \left( \mathbb{Z}/\ell^n\mathbb{Z}\right)^2$, the Borel subgroup associated to $L$ can be identified with matrices of the form $\sm{\Asterisk}{\Asterisk}{0}{\Asterisk}$. Call two lines $L$ and $L'$ in $\left( \mathbb{Z}/\ell^n \mathbb{Z}\right)^2$ \defi{independent} if they are independent modulo $\ell$. Note that two such lines give a direct sum decomposition $L \oplus L' \simeq \left( \mathbb{Z}/\ell^n \mathbb{Z}\right)^2$. Associated to two such lines is a \defi{split Cartan subgroup} of linear automorphisms of $ \left( \mathbb{Z}/\ell^n \mathbb{Z}\right)^2$ preserving both $L$ and $L'$. Again in an appropriate basis, a split Cartan subgroup can be identified with matrices of the form $\sm{\Asterisk}{0}{0}{\Asterisk}$. We make one new definition which will be useful in what follows: \begin{defin} A \defi{radical subgroup} of $\text{GL}_2(\mathbb{Z}/\ell^n\mathbb{Z})$ is one that fixes a line $L \subset (\mathbb{Z}/\ell^n\mathbb{Z})^2$ and an isomorphism between $L$ and the quotient $(\mathbb{Z}/\ell^n\mathbb{Z})^2/L$ up to a sign. Hence there exists a basis in which a radical subgroup acts as \[\begin{pmatrix} \chi_1 & \Asterisk \\ 0 & \pm \chi_1 \end{pmatrix}.\] \end{defin} Radical subgroups occur ``in nature'' as the image of $\rho_{E, \ell}$ for $E$ with CM by an order in a field $F$ in which $\ell$ ramifies \cite[Thm 13.1.2]{gross}. 
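That matrices of this shape multiply to matrices of the same shape (so that the definition indeed yields a subgroup) is a one-line computation:
\[
\begin{pmatrix} \chi_1 & a \\ 0 & \epsilon \chi_1 \end{pmatrix}
\begin{pmatrix} \chi_1' & a' \\ 0 & \epsilon' \chi_1' \end{pmatrix}
=
\begin{pmatrix} \chi_1 \chi_1' & \chi_1 a' + \epsilon' a \chi_1' \\ 0 & \epsilon\epsilon' \, \chi_1 \chi_1' \end{pmatrix},
\]
so the product again fixes $L$ and identifies $L$ with the quotient up to the sign $\epsilon\epsilon'$.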
Finally, recall that a \defi{nonsplit Cartan subgroup} of $\text{GL}_2(\mathbb{Z}/\ell\mathbb{Z})$ is a cyclic subgroup isomorphic to $\mathbb{F}_{\ell^2}^\times$ acting on the $\mathbb{F}_\ell$-vector space $\mathbb{F}_{\ell^2} \simeq \mathbb{F}_{\ell}^2$. Let $\epsilon$ be a quadratic nonresidue mod $\ell$. Then in an appropriate basis, a nonsplit Cartan subgroup can be identified with matrices of the form $\sm{x}{\epsilon y}{y}{x}$ for $x,y \in \mathbb{F}_\ell$ not both $0$. There is an involution on the set of subgroups of $\text{GL}_2(\mathbb{Z}/N\mathbb{Z})$ sending a group to its transpose \[G^T \colonequals \{g^T : g \in G\}.\] Many relevant subgroups of $\text{GL}_2(\mathbb{Z}/N\mathbb{Z})$ are conjugate to their transpose; for example, Borel and Cartan subgroups. \subsection{Modular Curves} We begin by recalling the definition of the modular curve $X(N)$. This is the coarse space of the smooth compactification of the stack whose $S$-points parameterize (up to isomorphism) pairs $(E, \iota)$, where $E/S$ is an elliptic curve and \[\iota \colon \left(\mathbb{Z}/N\mathbb{Z}\right)^2_S \to E[N] \] is a choice of isomorphism. Note that this is the ``big'' modular curve at level $N$: it is geometrically disconnected, with components over $\mathbb{Q}(\mu_N)$ in bijection with the primitive elements of $\mu_N$. Precomposition of $\iota$ with $g^{-1} \in \text{GL}_2(\mathbb{Z}/N\mathbb{Z})$ \[ g(\iota) = \iota \circ g^{-1} \] defines a \emph{left} action of $\text{GL}_2(\mathbb{Z}/N\mathbb{Z})$ on $X(N)$. The map from $X(N)$ to the $j$-line $X(1)$ forgetting the level structure at $N$ is Galois with group $\text{GL}_2(\mathbb{Z}/N\mathbb{Z})/\{\pm 1\}$. Note that the Galois group $G_\mathbb{Q}$ also acts on $X(N)$ by postcomposition with $\iota$ on the left. Let $G$ be a subgroup of $\text{GL}_2(\mathbb{Z}/N\mathbb{Z})$ containing $-I \colonequals \sm{-1}{0}{0}{-1}$. 
We can then define the modular curve $X_G$ as the (coarse space of the) quotient of $X(N)$ by the action of $G$. The $K$-points of $X_G$ parameterize pairs $(E, \mathcal{C})$, where $E$ is an elliptic curve over $K$ and $\mathcal{C}$ is an equivalence class of isomorphisms $\iota \colon (\mathbb{Z}/N\mathbb{Z})^2 \to E[N]$, where $\iota \sim \iota'$ if there exists $g \in G$ such that $\iota' = \iota \circ g^{-1}$. \begin{lem}[{\cite[Lemma 2.1]{rzb}}] There exists a choice of $\iota$ such that $(E, \iota)$ gives rise to a $K$-point of $X_G$ if and only if $\Im(\rho_{E, N})$ is contained in a subgroup of $\text{GL}_2(\mathbb{Z}/N\mathbb{Z})$ conjugate to $G$. \end{lem} Note that in \cite{rzb} the Galois representation is defined in terms of row vectors, and so one must take the transpose $G \mapsto G^T$ to match our modular curves. If $G$ does not have surjective determinant, then $X_G$ is also geometrically disconnected. It is classical that a connected component $X(N)^\circ$ of $X(N)$ can be described geometrically by \[X(N)^\circ(\mathbb{C}) \simeq \Gamma(N) \backslash \mathbb{H}^*, \] where $\mathbb{H}^*$ is the extended upper half plane $\mathbb{H} \cup \mathbb{P}^1(\mathbb{Q})$ and $\Gamma(N)$ is the principal congruence subgroup of level $N$. There is a similar description in the case of $X_G$. Given a subgroup $S \subset \SL_2(\mathbb{Z}/N\mathbb{Z})$ define the congruence subgroup \[\Gamma_S \colonequals \{s\in \SL_2(\mathbb{Z}) : (s \mod N) \in S \}. \] For convenience let $\bar{G} \colonequals G\cap \SL_2(\mathbb{Z}/N\mathbb{Z})$. \begin{lem} Let $G \subset \text{GL}_2(\mathbb{Z}/N\mathbb{Z})$ contain $-I$. Then \[ X_G^\circ(\mathbb{C}) \simeq \Gamma_{\bar{G}} \backslash\mathbb{H}^*. \] \end{lem} \begin{proof} This follows from the classical fact for $X(N)$. Indeed, by the description of the components of $X(N)(\mathbb{C})$, the action of $\bar{G}$ preserves the components (and is in fact the stabilizer of any one component). 
This action restricts on $\Gamma(N) \backslash \mathbb{H}^*$ to the action of $\Gamma_{\bar{G}} / \Gamma(N) \simeq \bar{G}$. \begin{center} \begin{tikzcd} & \Gamma(N) \backslash\mathbb{H}^* \arrow{ld}{\bar{G}/\{\pm 1\}} \arrow{rd}{\text{conn. comp.}} \\ \Gamma_{\bar{G}} \backslash \mathbb{H}^*\arrow{rd} & & X(N)(\mathbb{C}) \arrow{ld}{\bar{G}/\{\pm 1\}} \\ & X_{\bar{G}}(\mathbb{C}) \arrow{d} \\ & X_G(\mathbb{C}) \end{tikzcd} \end{center} The result now follows from the following elementary fact about group actions: \begin{lem} If a group $G$ acts on a set $X$ and $X = \bigcup_i X_i$ so that for all $g \in G$, $g(X_i) = X_{f(g, i)}$ for some function $f$, then the image of $X_i$ in $X/G$ is isomorphic to $X_i/\operatorname{Stab}_G(X_i)$. \qedhere \end{lem} \end{proof} In particular this implies that we may compute the genus of $X_G$ using the description $\Gamma_{\bar{G}} \backslash\mathbb{H}^*$. \subsection{Group-Theoretic Rephrasing} Because our argument will focus on particular curves in an isogeny class, we make the following definitions. Say that $E/k$ \defi{has an $N$-isogeny up to isogeny} if $\mathscr{C}(E)$ has an $N$-isogeny. Similarly, say that $E$ \defi{has an $N$-isogeny locally almost everywhere up to isogeny} if for almost all $\mathfrak{p}$, $\mathscr{C}(E_\mathfrak{p})$ has an $N$-isogeny. We now translate the conditions of $E$ having an $\ell^n$-isogeny (up to isogeny) and locally almost everywhere having an $\ell^n$-isogeny (up to isogeny) into the language of Galois representations. Having a cyclic $\ell^n$-isogeny over $K$ is equivalent to the condition that $G(\ell^n)$ be contained in a Borel subgroup of $\text{GL}_2(\mathbb{Z}/\ell^n\mathbb{Z})$. Indeed, the kernel of the isogeny is a line $L \subset E[\ell^n]$ which is Galois-stable, and hence preserved by every element of $G(\ell^n)$. 
If some member of the $K$-isogeny class of $E$ has an $\ell^n$-isogeny over $K$, then $E$ lies somewhere in a chain of $\ell$-isogenous curves of length $n$ in its $K$-isogeny graph. Therefore this condition is equivalent to the existence of an integer $k$ such that $G(\ell^n)$ is Cartan mod $\ell^k$ and Borel mod $\ell^{n-k}$. The Chebotarev Density Theorem allows us to translate these conditions at almost all primes into a purely group-theoretic condition on the image of the Galois representation. For any (conjugacy class of an) element $g \in G(\ell^n)$, there exists some prime $\mathfrak{p}$ of $K$ such that $g = \varphi_{\mathfrak{p}, \ell^n}$, the image of the $|k_\mathfrak{p}|$-power Frobenius mod $\ell^n$. As $\text{Gal}(\bar{k_\mathfrak{p}}/k_\mathfrak{p})$ is generated by $\varphi_{\mathfrak{p}}$, conditions imposed upon every element of $G(\ell^n)$ translate into conditions on the image of the mod $\ell^n$ Galois representation of $E_\mathfrak{p}$ \[\rho_{E_\mathfrak{p}, \ell^n} \colon \text{Gal}(\bar{k_\mathfrak{p}}/k_\mathfrak{p}) \to \text{GL}_2(\mathbb{Z}/\ell^n\mathbb{Z}) .\] If $E_\mathfrak{p}$ has an $\ell^n$-isogeny over $k_\mathfrak{p}$, then $\rho_{E_\mathfrak{p}, \ell^n}(\varphi_{\mathfrak{p}, \ell^n})$ fixes a line, and so is contained in some Borel subgroup of $\text{GL}_2(\mathbb{Z}/\ell^n\mathbb{Z})$. We will also refer to this condition as every element having a rational eigenvector, which of course implies that the characteristic polynomial of $\rho_{E_\mathfrak{p}, \ell^n}(\varphi_{\mathfrak{p}, \ell^n})$ has a solution mod $\ell^n$. Imposing this condition modulo almost all $\mathfrak{p}$ therefore translates into the condition that every element of $G(\ell^n)$ has a rational eigenvector and so is contained in some Borel subgroup. 
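The gap between these two conditions — every element of $G(\ell^n)$ individually fixing a line, versus the whole group fixing a common line — is exactly where counterexamples live. A minimal computational illustration (a hypothetical subgroup of $\mathrm{GL}_2(\mathbb{F}_5)$ chosen purely for illustration, not claimed to arise as a Galois image): the group generated by $\operatorname{diag}(2,3)$ and the coordinate swap has every element fixing some line, while no single line is fixed by the whole group.

```python
l = 5  # work in GL_2(F_5)

def mat_mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % l
                       for j in range(2)) for i in range(2))

def apply(M, v):
    return tuple(sum(M[i][k] * v[k] for k in range(2)) % l for i in range(2))

# The l + 1 lines of F_l^2, each given by a representative vector.
lines = [(1, y) for y in range(l)] + [(0, 1)]

def fixes(M, v):
    """Does M map the line spanned by v to itself, i.e. is Mv parallel to v?"""
    w = apply(M, v)
    return (v[0] * w[1] - v[1] * w[0]) % l == 0  # 2x2 determinant vanishes

# Close the generators under multiplication (finite group, so this terminates).
gens = [((2, 0), (0, 3)), ((0, 1), (1, 0))]
G = set(gens)
frontier = list(G)
while frontier:
    g = frontier.pop()
    for h in gens:
        p = mat_mul(g, h)
        if p not in G:
            G.add(p)
            frontier.append(p)

every_element_fixes_a_line = all(any(fixes(M, v) for v in lines) for M in G)
common_fixed_line = [v for v in lines if all(fixes(M, v) for M in G)]

print(len(G))                      # 8: a dihedral group of order 8
print(every_element_fixes_a_line)  # True: each element lies in some Borel
print(common_fixed_line)           # []:   but the group lies in no Borel
```

Here the diagonal elements fix the two coordinate axes, the antidiagonal elements have characteristic polynomial $x^2 - 1$ and so fix a line, but the swap exchanges the only two lines fixed by $\operatorname{diag}(2,3)$.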
If instead we only know that $E_\mathfrak{p}$ is $k_\mathfrak{p}$-isogenous to a curve with an $\ell^n$-isogeny, then as the trace of Frobenius $\text{tr}(\rho_{E_\mathfrak{p}, \ell^n}(\varphi_{\mathfrak{p}, \ell^n}))$ is a $k_\mathfrak{p}$-isogeny invariant, the characteristic polynomial \[ x^2 - \text{tr}(\rho_{E_\mathfrak{p}, \ell^n}(\varphi_{\mathfrak{p}, \ell^n})) x + \operatorname{Nm}(\mathfrak{p})\] still has a rational solution mod $\ell^n$. Therefore the discriminant \[\Delta(\varphi) \colonequals \text{tr}(\varphi)^2 - 4\det(\varphi) = \text{tr}(\varphi)^2 - 4\operatorname{Nm}(\mathfrak{p}) \] is a square modulo $\ell^n$. Thus, for $\ell \neq 2$, the weaker assumption that $E$ locally almost everywhere has an $\ell^n$-isogeny \emph{up to isogeny} implies the condition that every element $\phi$ of $G(\ell^{n})$ has discriminant a square mod $\ell^n$. A converse is the content of Proposition \ref{equiv_p}. \subsection{Overview of Group-Theoretic Techniques} We will prove Theorem \ref{ellnthm} by proving that, for $\ell$ large enough, we may perform a sequence of lifts of our global $\ell$-isogeny to an $\ell^2$-isogeny up to isogeny, then an $\ell^3$-isogeny up to isogeny, and so forth to successively higher powers of $\ell$. The technique splits the problem naturally into two cases: (1) we are lifting from an odd power of $\ell$ to an even power, and (2) we are lifting from an even power to an odd power of $\ell$. We will show that lift-exceptional subgroups only arise when lifting from an even to an odd power of $\ell$. Key to this argument is the fact that everything here is only ``up to isogeny''. In the course of our inductive argument, we are naturally led to assume that $E/K$ is isogenous to an $E'/K$ which has a global $\ell^{n-1}$-isogeny. This does not imply that $G_{E}(\ell^{n-1})$ is contained in a Borel, but $G_{E'}(\ell^{n-1})$ \emph{is} contained in a Borel. 
Furthermore, $E$ has a global $\ell^n$-isogeny up to isogeny if and only if $E'$ does, as $E$ and $E'$ are in the same isogeny class. Thus, without loss of generality, in our inductive argument we will assume that $E$ itself has an $\ell^{n-1}$-isogeny globally by replacing $E$ by an appropriate curve in its isogeny class. Sections \ref{lemup} and \ref{exceptional} contain the group-theoretic results necessary to prove the following theorem: \begin{thm}\label{upthm} Let $\ell$ be an odd prime and let $E/K$ have a rational $\ell$-isogeny and locally almost everywhere have a rational $\ell^n$-isogeny up to isogeny. Then: \begin{itemize} \item If $n=2$, $E$ has a rational $\ell^2$-isogeny up to isogeny, \item If $n>2$, $E$ has a rational $\ell^n$-isogeny up to isogeny or $G_{E'}(\ell^{2m+1})$ is contained in \[R(\ell^{2m+1}) = \left\{ \begin{pmatrix} r & s \\ \ell^{2m} (\epsilon s) & \epsilon t \end{pmatrix} \ : \ r\equiv t \pmod{\ell^{m+1}}, \epsilon = \pm 1 \right\}, \] for some $0<m\leq (n-1)/2$ and some $E' \sim_K E$. In that case, $E$ has a rational $\ell^{2m}$-isogeny up to isogeny. \end{itemize} \end{thm} \begin{rem} This theorem shows that exceptions occur only when lifting from an even power of $\ell$ to an odd power. And in these cases the exceptional images of Galois are Borel mod $\ell^{2m}$ and radical mod $\ell^{m+1}$. \end{rem} The proof of this theorem relies in a crucial way upon the map \[\phi_B = \phi \colon G(\ell^{n-1}) \subseteq B \to \left( \mathbb{Z}/\ell^{n-1} \mathbb{Z}\right)^\times \simeq \mathbb{Z}/\ell^{n-2}(\ell - 1) \mathbb{Z},\] defined as the ratio of the diagonal characters: \[\phi \begin{pmatrix} \Asterisk_1 & \Asterisk \\ 0 & \Asterisk_2 \end{pmatrix} = \frac{\Asterisk_1}{\Asterisk_2}, \] where $B$ is a Borel subgroup of $\text{GL}_2(\mathbb{Z}/\ell^{n-1}\mathbb{Z})$ that we assume to be upper-triangular in the chosen basis. (Note the map depends on the choice of $B$.) 
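Since the diagonal entries of upper-triangular matrices multiply entrywise, $\phi$ is a homomorphism on any subgroup of $B$. A quick computational sanity check (illustrative only), with $\ell = 5$ and $n - 1 = 3$, i.e.\ inside the Borel of $\mathrm{GL}_2(\mathbb{Z}/125\mathbb{Z})$:

```python
import random

l, k = 5, 3
m = l ** k  # work in the upper-triangular Borel of GL_2(Z/125Z)

def rand_borel():
    """Random upper-triangular matrix with unit diagonal entries mod m,
    stored as (top-left, top-right, bottom-right)."""
    units = [u for u in range(m) if u % l != 0]
    return (random.choice(units), random.randrange(m), random.choice(units))

def mul(A, B):
    a1, b1, d1 = A
    a2, b2, d2 = B
    return ((a1 * a2) % m, (a1 * b2 + b1 * d2) % m, (d1 * d2) % m)

def phi(A):
    """Ratio of the diagonal characters, an element of (Z/125Z)^x."""
    a, _, d = A
    return (a * pow(d, -1, m)) % m  # modular inverse (Python 3.8+)

random.seed(1)
for _ in range(1000):
    A, B = rand_borel(), rand_borel()
    assert phi(mul(A, B)) == (phi(A) * phi(B)) % m  # phi is multiplicative
```

In particular the kernel of $\phi$ is a normal subgroup of $G(\ell^{n-1})$, which is what makes the decomposition into $K(\ell^n)$ and a single preimage $X$ below possible.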
Under our inductive hypothesis that $G(\ell^{n-1})$ is contained in a Borel subgroup $B$, we consider the composite map \[ \phi' \colon G(\ell^n) \to G(\ell^{n-1}) \xrightarrow{\phi} \left(\mathbb{Z}/\ell^{n-1}\mathbb{Z}\right)^\times, \] where the first map is reduction mod $\ell^{n-1}$. As $(\mathbb{Z}/\ell^{n-1}\mathbb{Z})^\times$ is cyclic, the group $G(\ell^n)$ is generated by a preimage of a generator of the image of $\phi'$, and the kernel of $\phi'$. To show that $G(\ell^n)$ is contained in a Borel, we use the local data that every element of $G(\ell^n)$ has square discriminant to show that each of these pieces is contained in \emph{the same} Borel lifting the Borel $B$ containing $G(\ell^{n-1})$. This analysis is carried out in Section \ref{lemup}, resulting in a necessary but not sufficient condition on exceptional subgroups. These ``potentially exceptional'' subgroups are then analyzed more carefully in Section \ref{exceptional}, leading to a proof of Theorem \ref{upthm}. In these sections we will always assume that $\ell$ is an odd prime. At this point we will have shown (i) of Theorem \ref{ellnthm}. \section{Group Theoretic Analysis of $\text{GL}_2(\mathbb{Z}/\ell^n\mathbb{Z})$}\label{lemup} Let $G$ denote a closed subgroup of $\text{GL}_2(\mathbb{Z}_\ell)$. We will eventually take $G$ to be the image of the $\ell$-adic Galois representation of $E$. Denote by $G(\ell^n)$ the reduction of $G$ mod $\ell^n$. When we say that $H \subset \text{GL}_2(\mathbb{Z}/\ell^n\mathbb{Z})$ \emph{lifts} $H' \subset \text{GL}_2(\mathbb{Z}/\ell^{n-1}\mathbb{Z})$, we mean that $(H \mod\ell^{n-1}) = H'$. \begin{lem}\label{hensel} Let $\bar{\gamma} \in G(\ell)$ have two distinct eigenvalues. If $\gamma \in G(\ell^n)$ reduces mod $\ell$ to $\bar{\gamma}$, then $\gamma$ has two distinct eigenvalues mod $\ell^n$ with corresponding eigenvectors. 
\end{lem} \begin{proof} As the eigenvalues of $\bar{\gamma}$ are distinct, the derivative of the characteristic polynomial of $\bar{\gamma}$ evaluated at any eigenvalue is nonzero. Hence Hensel's Lemma guarantees the existence of distinct eigenvalues of $\gamma$ mod $\ell^n$. Similarly, the derivative of the projectivized eigenvector equation evaluated at the projectivized eigenvector is nonzero, as $\text{tr}^2(\bar{\gamma}) - 4\det(\bar{\gamma}) \neq 0$. \end{proof} \begin{lem}\label{dis} Assume that $G(\ell^{n-1})$ is contained in a fixed Borel subgroup $B \subseteq \text{GL}_2(\mathbb{Z}/\ell^{n-1} \mathbb{Z})$. Let $\gamma \in G(\ell^n)$, with reduction $\bar{\gamma} \in G(\ell)$ having distinct eigenvalues. Then there exists some Borel subgroup $B' \subseteq \text{GL}_2(\mathbb{Z}/\ell^n \mathbb{Z})$ lifting $B$ which contains $\gamma$. \end{lem} \begin{proof} Let $\gamma'$ denote the reduction of $\gamma$ mod $\ell^{n-1}$. By Lemma \ref{hensel}, $\gamma$ and $\gamma'$ each have two distinct eigenspaces. The group $B$ necessarily fixes one of the eigenspaces $L'$ of $\gamma'$, which is the reduction of an eigenspace $L$ of $\gamma$. The Borel associated to $L$ lifts $B$ and contains $\gamma$. \end{proof} Recall that for $G(\ell^{n-1}) \subseteq B$, the map $\phi_B = \phi \colon G(\ell^{n-1}) \to \left(\mathbb{Z}/\ell^{n-1}\mathbb{Z}\right)^\times$ is defined as \[\phi \begin{pmatrix} \Asterisk_1 & \Asterisk \\ 0 & \Asterisk_2 \end{pmatrix} = \frac{\Asterisk_1}{\Asterisk_2}. \] Define the subgroup $I(\ell^{n-1}) \subseteq G(\ell^{n-1})$ to be the kernel of the map $\phi$. 
Let $K(\ell^n) \subseteq G(\ell^n)$ be the kernel of the composite map \[\phi' \colon G(\ell^n) \to G(\ell^{n-1}) \to \left(\mathbb{Z}/\ell^{n-1}\mathbb{Z}\right)^\times, \] which is precisely the preimage of $I(\ell^{n-1})$ in $G(\ell^n)$: \begin{center} \begin{tikzcd} K(\ell^n) \arrow[r, hook] & G(\ell^n) \arrow[d] \arrow[dr, dashed, "\phi' "] \\ I(\ell^{n-1}) \arrow[r, hook] & G(\ell^{n-1}) \arrow[r, "\phi"] & \left( \mathbb{Z}/ \ell^{n-1} \mathbb{Z} \right)^\times \end{tikzcd} \end{center} The group $G(\ell^n)$ is generated by $K(\ell^n)$ and a preimage $X\in G(\ell^n)$ of a generator of the image $\phi'(G(\ell^n))$. \begin{lem}\label{liftX} Assume that every element of $G(\ell^n)$ has square discriminant and that $G(\ell^{n-1})$ is contained in a Borel subgroup, but $G(\ell)$ is not contained in a split Cartan subgroup. \textbf{Assuming that the image of $\phi$ is nontrivial, there exists} a preimage $X$ of a generator of the image of $\phi' \colon G(\ell^n) \to (\mathbb{Z}/\ell^{n-1} \mathbb{Z})^\times$ that is contained in \textbf{some} Borel subgroup lifting the Borel mod $\ell^{n-1}$. \end{lem} \begin{proof} We work in a basis in which $G(\ell^{n-1})$ is upper-triangular. Let $X$ be some choice of preimage of a generator of the image of $\phi'$; by assumption $X$ has distinct diagonal entries mod $\ell^{n-1}$. If $X$ has distinct diagonal entries mod $\ell$, the lemma follows from Lemma \ref{dis}. Otherwise $n \geq 3$ and it has equal diagonal entries mod $\ell^j$ for some $0 < j \leq \lfloor \frac{n-1}{2} \rfloor$. Note that this implies that all of $G(\ell^{n-1})$ has equal diagonal entries modulo $\ell^j$, and hence every element of $G(\ell^{n-1})$ is of the form \[\begin{pmatrix} a + \ell^j x & b \\ 0 & a + \ell^j w \end{pmatrix}. 
\] Because we have \begin{align*} \begin{pmatrix} a + \ell^j x & b \\ 0 & a + \ell^j w \end{pmatrix} \vect{1}{k \ell^{n-j-1}} &= \vect{a + \ell^j x + bk \ell^{n-j-1}}{(a+ \ell^j w)k \ell^{n-j-1}} \\ &\equiv_{\ell^{n-1}} (a+ \ell^j(x + bk \ell^{n-2j-1}))\vect{1}{k \ell^{n-j-1}}, \end{align*} $G(\ell^{n-1})$ is contained in the intersection of all of the Borels fixing the lines spanned by $\vect{1}{k \ell^{n-j-1}}$ for any $k$. The element $X \in G(\ell^n)$ is necessarily of the form \[\begin{pmatrix} a + \ell^jx & b \\ \ell^{n-1} \zeta & a + \ell^j w \end{pmatrix}, \] and $\ell \nmid b$ since $X$ is not contained in a Cartan mod $\ell$. The discriminant of $X$ \[\Delta(X) = \ell^{2j} \left( (x-w)^2 + 4b\zeta \ell^{n-2j -1} \right), \] is a square mod $\ell^{n}$ by assumption. For $\vect{1}{k \ell^{n-j-1}}$ to be an eigenvector of $X$ for some $k$, we must have \begin{align*} k \ell^{n-j-1} ( a + \ell^j x + bk \ell^{n-j-1}) & \equiv_{\ell^n} \ell^{n-1} \zeta + (a+ \ell^j w) k \ell^{n-j-1}, \\ k\ell^j (x-w) + bk^2 \ell^{n-j-1} & \equiv_{\ell^{j+1}} \ell^j \zeta, \\ k (x-w) + bk^2 \ell^{n-2j-1} & \equiv_{\ell} \zeta . \end{align*} \begin{itemize} \item If $2j = n-1$, then this equation becomes \[ b k^2 + (x-w)k -\zeta \equiv 0 \pmod{\ell}, \] which has a solution mod $\ell$ because $\ell \nmid b$ and the discriminant $(x-w)^2 + 4b \zeta$ is assumed to be a square. \item If $2j < n-1 $, then this equation becomes \[k(x-w) \equiv_\ell \zeta. \] This has a solution if $\ell \nmid (x-w)$ or $\ell \mid \zeta$. Hence if $j$ is strictly less than $\lfloor \frac{n-1}{2} \rfloor$ then either there is a solution or $\ell \mid (x-w)$ and hence the diagonal characters are congruent modulo an even higher power of $\ell$. \end{itemize} It suffices to show that the induction terminates. If $n$ is odd, then it terminates at or before $j = \lfloor \frac{n-1}{2} \rfloor$, which was already dealt with above. 
Otherwise $n = 2m$, so that $2j < n-1$, and either the induction terminates or, at $j = m-1 = \lfloor \frac{2m-1}{2} \rfloor$, we have $\ell \mid (x-w)$ from above. We compute \[ \Delta(X) = \ell^{2m-2} \left( (x-w)^2 + 4b\zeta \ell \right) \pmod{\ell^{2m}}. \] This is a square modulo $\ell^{2m}$ if and only if \[(x-w)^2 + 4b\zeta \ell \] is a square modulo $\ell^2$. But if $\ell \mid (x-w)$, then $4b \zeta \ell$ must be a square modulo $\ell^2$, and hence $\ell \mid b \zeta$, forcing $\ell \mid \zeta$, and $G(\ell^n)$ is contained in a Borel lifting the one corresponding to $k=0$. Thus the induction terminates at $j = \lfloor \frac{n-1}{2} \rfloor$ or before. \end{proof} Lemma \ref{liftX} guarantees the ability to lift the Borel structure mod $\ell^{n-1}$ to a Borel containing an appropriate choice of $X$. We must now try to lift the Borel structure to one containing $K$. To do this, it is necessary to consider the cases of lifting from mod $\ell^{2m-1}$ to mod $\ell^{2m}$ and lifting from mod $\ell^{2m}$ to mod $\ell^{2m+1}$ separately. \begin{defi} Let $\mathbb{K}(\ell^{2m+1})$ for $m>0$ denote the subgroup of $\text{GL}_2(\mathbb{Z}/\ell^{2m+1} \mathbb{Z})$ given in an appropriate basis by \[\mathbb{K} = \left \{ \begin{pmatrix} r & s \\ \ell^{2m}s & t \end{pmatrix} \ : \ r \equiv t \pmod{\ell^{2m}} \right\}. \] \end{defi} \begin{lem}\label{nondis} Assume that $G(\ell)$ is not contained in a split Cartan, $G(\ell^{n-1})$ is contained in a Borel $B$, and that every element of $G(\ell^{n})$ has square discriminant. Let $I(\ell^{n-1})$ and $K(\ell^n)$ be as above. \begin{enumerate}[(i)] \item If $n = 2m$ is even, then $K(\ell^{2m})$ is contained in the intersection of all Borel subgroups of $\text{GL}_2(\mathbb{Z}/\ell^{2m} \mathbb{Z})$ lifting $B$.
\item If $n=2m+1$ is odd, then either $K(\ell^{2m+1})$ is contained in the intersection of all Borel subgroups of $\text{GL}_2(\mathbb{Z}/\ell^{2m+1} \mathbb{Z})$ lifting $B$, or $G(\ell)$ is contained in a radical subgroup and \[K(\ell^{2m+1}) \subseteq \mathbb{K}(\ell^{2m+1}), \] in an appropriate basis. \end{enumerate} \end{lem} \begin{proof} Choose a basis for $\left(\mathbb{Z}/\ell^{n}\mathbb{Z}\right)^2$ such that some Borel $B' \subset \text{GL}_2(\mathbb{Z}/\ell^n\mathbb{Z})$ lifting $B$ is upper-triangular. Any lift of a generator of $K(\ell^{n})$ is of the form \[A = \begin{pmatrix} a + \ell^{n-1}x & b + \ell^{n-1}y \\ \ell^{n-1}z & a + \ell^{n-1}w \end{pmatrix}, \] where $a,b \in \mathbb{Z}/\ell^{n-1} \mathbb{Z}$ and $b \not\equiv 0 \pmod \ell$. The matrix $A$ has discriminant mod $\ell^{n}$ \[\Delta(A) = 4\ell^{n-1}b z. \] If $n=2m$ is even, then $\ell^{2m-1}$ is not a square mod $\ell^{2m}$. Hence, as $\ell \nmid b$, we must have that $z \equiv 0 \pmod \ell$ in order for $\Delta(A)$ to be a square mod $\ell^{2m}$. As such, all generators of $K(\ell^{n})$ are necessarily upper-triangular in this basis and hence contained in $B'$. Thus $K(\ell^n)$ is contained in the intersection of all such Borels lifting $B$. If $n=2m+1$ is odd, then $\ell^{2m}$ is a square mod $\ell^{2m+1}$ and we must have $z b$ a square mod $\ell$ for $\Delta(A)$ to be a square mod $\ell^{2m+1}$. We will show that if $G(\ell)$ is not contained in a radical subgroup, then the Borel structure mod $\ell^{n-1}$ must lift to mod $\ell^n$ in order for $K(\ell^n)$ to be a normal subgroup of $G(\ell^n)$. For the discriminant $\Delta(A)$ to be a square, we are equivalently concerned with whether $\beta\cdot\zeta$ is a quadratic residue mod $\ell$, where \[\beta = \frac{b}{a} \mod \ell, \qquad \zeta = \frac{z}{a} \mod \ell.
\] The operation of matrix multiplication on elements of $K(\ell^{2m+1})$, \[ \begin{pmatrix} a + \ell^{n-1}x & b + \ell^{n-1}y \\ \ell^{n-1}z & a + \ell^{n-1}w \end{pmatrix} \cdot \begin{pmatrix} a' + \ell^{n-1}x' & b' + \ell^{n-1}y' \\ \ell^{n-1}z' & a' + \ell^{n-1}w' \end{pmatrix},\] acts on the pairs $(\beta, \zeta)$ as \[ \left( \frac{b}{a}, \frac{z}{a} \right) \cdot \left( \frac{b'}{a'}, \frac{z'}{a'} \right) = \left( \frac{b}{a} + \frac{b'}{a'}, \frac{z}{a} + \frac{z'}{a'} \right).\] As such the set of possible $(\beta, \zeta)$ is a nontrivial additive subgroup of $\mathbb{F}_\ell^2$, in which $\beta\zeta$ is necessarily a quadratic residue. Hence it is necessarily a line $\{(\delta x, \gamma x) : x \in \mathbb{F}_\ell\}$, for which \[\delta \gamma = \frac{\beta \zeta}{x^2} \] is a quadratic residue. If $G(\ell)$ is not contained in a radical subgroup then it contains some element $G = \begin{pmatrix} g & \\ & f \end{pmatrix}$, where $f^2 \neq g^2$. The action of conjugation by (a lift to $G(\ell^{2m+1})$ of) $G$ defines an operator on the group of $(\beta, \zeta)$ whose action is \[(\beta, \zeta) \mapsto \left( \frac{f}{g} \beta, \frac{g}{f} \zeta \right). \] In order for this to stabilize the line (necessary if $K(\ell^n)$ is a normal subgroup), either $\zeta = 0$ (as desired) or $f^2 = g^2$, which is a contradiction. Otherwise we have that $G(\ell)$ is contained in a radical subgroup. Further, for all elements of $K(\ell^n)$, $bz$ is a square mod $\ell$ and $b/z$ is a fixed element of $\mathbb{F}_\ell^\times$ (determining this line in $\mathbb{F}_\ell^2$). Up to conjugation (e.g.\ in an appropriate basis), we may assume that this ratio is $1$. Hence $K(\ell^{2m+1})$ is contained in $\mathbb{K}(\ell^{2m+1})$. \end{proof} \begin{prop}\label{potexcep} Let $E/K$ have an $\ell$-isogeny and an $\ell^n$-isogeny locally almost everywhere up to isogeny.
Then $E$ has an $\ell^n$-isogeny globally up to isogeny or \[K_{E'}(\ell^{2m+1}) \subseteq \mathbb{K}(\ell^{2m+1}), \qquad \text{and} \qquad G_{E'}(\ell) \subseteq \text{ a radical subgroup},\] for some $0<m \leq (n-1)/2$ and $E' \sim_K E$. \end{prop} \begin{proof} If $E$ has 2 distinct $\ell$-isogenies, i.e. $G(\ell)$ is contained in a split Cartan subgroup of $\text{GL}_2(\mathbb{Z}/\ell \mathbb{Z})$, then $E$ is either isogenous to a curve which does not have 2 distinct $\ell$-isogenies, or by composing the chain of distinct $\ell$-isogenies, $E$ has an $\ell^n$-isogeny. So we may assume that $G(\ell)$ is contained in a Borel subgroup but not contained in a split Cartan subgroup. Our inductive hypothesis is that $G(\ell^{j-1})$ is contained in a Borel subgroup (up to replacing $E$ with another elliptic curve in its isogeny class). Recall that $K(\ell^j)$ is the kernel of the map $\phi' \colon G(\ell^j) \to (\mathbb{Z}/\ell^{j-1}\mathbb{Z})^\times$, and $X$ is a preimage of a generator of the image. Together they generate $G(\ell^j)$. By Lemma \ref{liftX}, we can assume $X$ is contained in some Borel $B'$ lifting $B$, or $G(\ell^j) \subset K(\ell^j)$. But further by Lemma \ref{nondis} either $K(\ell^j)$ is contained in the intersection of all Borel subgroups of $\text{GL}_2(\mathbb{Z}/\ell^j\mathbb{Z})$ lifting $B$, or $j=2m+1$ is odd. In the first case, the generators of $G(\ell^j)$ are contained in the Borel subgroup $B'$, implying the same of $G(\ell^j)$. In the second case, $j=2m+1$ is odd, and Lemma \ref{nondis} gives that $G(\ell)$ is contained in a radical subgroup and $K(\ell^{2m+1}) \subseteq \mathbb{K}(\ell^{2m+1})$. \end{proof} In the next section, we consider exactly when groups with $G(\ell)$ radical and $K(\ell^{2m+1}) \subseteq \mathbb{K}(\ell^{2m+1})$ give rise to lift-exceptional subgroups. This will lead to the classification cited in the introduction and to the proof of Theorem \ref{upthm}. 
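As a quick numerical sanity check of the additive law on the pairs $(\beta, \zeta)$ used in the proof of Lemma \ref{nondis}, one can multiply two matrices of the kernel shape and confirm that the ratios add coordinatewise. The following Python snippet does exactly this; it is purely illustrative (it is not part of this paper's accompanying Sage/Magma code), and the parameters $\ell = 5$, $n = 3$ and the matrix entries are arbitrary small choices.

```python
# Illustrative sanity check (not from this paper's accompanying code) of the
# additive law on the pairs (beta, zeta) from the proof of Lemma nondis.
# Parameters l = 5, n = 3 and all matrix entries are arbitrary small choices.
l, n = 5, 3
mod = l ** n
h = l ** (n - 1)

def kernel_matrix(a, b, x, y, z, w):
    """A matrix of the shape [[a + h*x, b + h*y], [h*z, a + h*w]] mod l^n."""
    return [[(a + h * x) % mod, (b + h * y) % mod],
            [(h * z) % mod, (a + h * w) % mod]]

def mat_mul(A, B):
    """2x2 matrix product mod l^n."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % mod
             for j in range(2)] for i in range(2)]

def invariants(M):
    """The pair (b/a, z/a) mod l attached to a kernel-shaped matrix M."""
    inv_a = pow(M[0][0] % l, -1, l)
    beta = M[0][1] * inv_a % l
    zeta = (M[1][0] // h) * inv_a % l
    return beta, zeta

A = kernel_matrix(2, 3, 1, 4, 2, 0)
B = kernel_matrix(3, 1, 0, 2, 4, 1)
bA, zA = invariants(A)
bB, zB = invariants(B)
bAB, zAB = invariants(mat_mul(A, B))
# Matrix multiplication adds the invariants coordinatewise mod l.
assert (bAB, zAB) == ((bA + bB) % l, (zA + zB) % l)
```

The final assertion is the content of the check: the pair attached to a product is the coordinatewise sum of the pairs attached to the factors.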
\section{Lift-Exceptional Subgroups}\label{exceptional} \begin{defin} We say that $H \subseteq \text{GL}_2(\mathbb{Z}/\ell^{2m+1}\mathbb{Z})$, $m>0$, is \defi{potentially lift-exceptional} if every element of $H$ has square discriminant and \begin{itemize} \item $H \mod \ell$ is contained in a radical subgroup but not a Cartan, \item $H$ is contained in a Borel subgroup mod $\ell^{2m}$, \item $\ker\big(\phi' \colon H \to (\mathbb{Z}/\ell^{2m} \mathbb{Z})^\times\big)$ is contained in $\mathbb{K}(\ell^{2m+1})$. \end{itemize} \end{defin} \begin{defin} We will say that $H \subseteq G(\ell^{2m+1})$ is a \defi{lift-exceptional subgroup} for $\ell^{2m+1}$-isogenies if it is potentially lift-exceptional, and $H$ is not contained in a Borel mod $\ell^{2m+1}$. \end{defin} Note that if $E/K$ has an $\ell$-isogeny but violates the local-global principle for $\ell^n$-isogenies, then Proposition \ref{potexcep} implies that $G_{E'}(\ell^{2m+1})$ is lift-exceptional for some $0<m \leq (n-1)/2$ and some $E'$ isogenous to $E$. The goal of this section is to determine which potentially lift-exceptional subgroups are actually lift-exceptional. As indicated in Section \ref{prelim}, the condition that $E/K$ have an $\ell^n$-isogeny locally almost everywhere up to isogeny implies that every element of the image of the mod-$\ell^n$ Galois representation $\rho_{E, \ell^n}(G_K)$ has square discriminant. Here we prove the reverse implication, which justifies the above definition of lift-exceptional. \begin{lem}\label{jordan} Let $\gamma \in G(\ell^n)$, $n>2$, be a lift of $\begin{pmatrix} 1 & r \\ 0 & 1 \end{pmatrix}$ mod $\ell$, with $\ell \nmid r$, and assume that $\Delta(\gamma)$ is a square mod $\ell^n$. Then $\gamma$ has an eigenvector mod $\ell^n$. \end{lem} \begin{proof} First we show that $\gamma$ has an eigenvector mod $\ell^2$.
This is essentially the same as the argument that there are no counter-examples for lifting from mod $\ell$ to mod $\ell^2$: the condition of square discriminant forces $\ell \mid z$. We then have that $\gamma$ is of the form \[\gamma = \begin{pmatrix} 1 + \ell x & r + \ell y \\ \ell^2 z & 1 + \ell w \end{pmatrix}, \] with $\Delta(\gamma) = \ell^2((x-w)^2 + 4z(r+\ell y))$ a square mod $\ell^n$. The quadratic equation for $\begin{pmatrix} 1 \\ k\ell \end{pmatrix}$ to be an eigenvector of $\gamma$ is \[k^2 + \frac{x-w}{r+\ell y} k - \frac{z}{r+\ell y} \equiv 0 \pmod{\ell^{n-2}}. \] This monic polynomial has discriminant $\frac{(x-w)^2+4z(r+\ell y)}{(r+\ell y)^2}$, which is a square by hypothesis. Hence it has a solution. \end{proof} \begin{prop}\label{equiv_p} If every element of the image of $\rho_{E, \ell^n}$ has square discriminant, then for almost all $\frak{p}$, $\tilde{E}_\mathfrak{p}$ is isogenous to an elliptic curve with an $\ell^n$-isogeny. \end{prop} \begin{proof} If $\gamma \in G(\ell^n)$ has nonequal eigenvalues mod $\ell$, then it has an eigenvector mod $\ell^n$ by Lemma \ref{dis}. If $\gamma \mod \ell$ is not diagonalizable, then it is conjugate mod $\ell$ to something of the form $\begin{pmatrix} 1 & r \\ 0 & 1 \end{pmatrix}$. Hence, by Lemma \ref{jordan}, it has an eigenvector mod $\ell^n$. This only leaves the case that $\gamma \mod \ell$ is scalar. In this case, $\tilde{E}_\mathfrak{p}$ (for the primes whose Frobenius element corresponds to $\gamma$) has two distinct $\ell$-isogenies. This curve is either isogenous to one without two distinct $\ell$-isogenies (landing in one of the above cases) or there exists a chain of $n$ $\ell$-isogenies giving the desired $\ell^n$-isogeny. \end{proof} Now we take up the question of when potentially lift-exceptional subgroups are actually exceptional.
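As a quick aside, Lemma \ref{jordan} can be checked exhaustively for small moduli. The following Python snippet is purely illustrative (it is not part of this paper's accompanying Sage/Magma code); it treats the case $\ell = 3$, $n = 3$, $r = 1$ and verifies by brute force that every lift mod $27$ of $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$ whose discriminant is a square mod $27$ fixes a line of $(\mathbb{Z}/27\mathbb{Z})^2$.

```python
# Illustrative brute-force check (not from this paper's accompanying code) of
# Lemma jordan for l = 3, n = 3, r = 1: every lift mod 27 of [[1, 1], [0, 1]]
# whose discriminant is a square mod 27 fixes a line of (Z/27)^2.
mod = 27
squares = {x * x % mod for x in range(mod)}

# Representatives of all lines in (Z/27)^2: (1, t) and (3s, 1).
lines = [(1, t) for t in range(mod)] + [(3 * s, 1) for s in range(mod // 3)]

def has_eigenvector(M):
    """True if M stabilizes some line of (Z/27)^2."""
    (p, q), (r, s) = M
    for v0, v1 in lines:
        u0 = (p * v0 + q * v1) % mod
        u1 = (r * v0 + s * v1) % mod
        if (v0 * u1 - v1 * u0) % mod == 0:
            return True
    return False

checked = 0
for a in range(9):
    for b in range(9):
        for c in range(9):
            for d in range(9):
                M = ((1 + 3 * a, 1 + 3 * b), (3 * c, 1 + 3 * d))
                tr = M[0][0] + M[1][1]
                det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
                if (tr * tr - 4 * det) % mod in squares:
                    assert has_eigenvector(M)  # the content of the lemma
                    checked += 1
```

The loop ranges over all $9^4$ lifts; lifts whose discriminant is not a square mod $27$ are skipped, exactly as in the hypothesis of the lemma.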
For ease of notation we will drop the degree from $\mathbb{K}(\ell^{2m+1})$ and refer to it simply as $\mathbb{K}$ -- the fact that it is a subgroup of $\text{GL}_2(\mathbb{Z}/\ell^{2m+1}\mathbb{Z})$ will be implied. \begin{lem}\label{kevec} A vector $v \in (\mathbb{Z}/\ell^{2m+1} \mathbb{Z})^2$ is a simultaneous eigenvector for all elements of $\mathbb{K}$ precisely when \[ v \ \propto \ \begin{pmatrix} 1 \\ k \ell \end{pmatrix}, \qquad k \equiv \pm \ell^{m-1} \pmod{\ell^{2m-1}}. \] \end{lem} \begin{proof} Up to multiplication by an invertible scalar, the vector $v$ can be normalized to be of the form $\begin{pmatrix} 1 \\ k \ell \end{pmatrix}$, as $\mathbb{K} \mod \ell$ contains the matrix $\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$. The condition that such a vector is an eigenvector for every matrix in $\mathbb{K}$ reduces to \[ k^2s \equiv \ell^{2m-2} s \pmod{\ell^{2m-1}}. \qedhere \] \end{proof} This suggests a method of understanding exceptional subgroups that arise from the inability to ``obviously'' lift the Borel structure on the matrices with equal diagonal entries mod $\ell^{2m}$. If the matrix $X$, which is a preimage under $\phi'$ of a generator of the image of $\phi'$, also has one of these vectors as an eigenvector, then the group generated by $X$ and $\mathbb{K}$ -- what we will denote $X \cdot \mathbb{K}$ \footnote{In our context $\mathbb{K}$ is a normal subgroup of this composite, so every element of the group can be represented as a power of $X$ times an element of $\mathbb{K}$. We will always work under this assumption on $X$.} -- is contained in the corresponding Borel subgroup, and hence is not exceptional. Otherwise, if every element of $X \cdot \mathbb{K}$ has square discriminant, but $X$ shares no eigenvectors with all elements of $\mathbb{K}$, then $X \cdot \mathbb{K}$ is lift-exceptional. (Note: a subgroup of $X \cdot \mathbb{K}$ may fail to be lift-exceptional even when $X \cdot \mathbb{K}$ is.)
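The description of the simultaneous eigenvectors in Lemma \ref{kevec} can likewise be confirmed by exhaustive computation for small parameters. The following Python snippet is illustrative only (not part of this paper's accompanying Sage/Magma code); it takes $\ell = 3$, $m = 1$, so that $\mathbb{K} \subseteq \text{GL}_2(\mathbb{Z}/27\mathbb{Z})$, enumerates $\mathbb{K}$, and computes its simultaneous eigenvectors.

```python
# Illustrative exhaustive check (not from this paper's accompanying code) of
# Lemma kevec for l = 3, m = 1: the simultaneous eigenvectors of K mod 27 are
# exactly the lines spanned by (1, k*l) with k = +- l^(m-1) mod l^(2m-1).
l, m = 3, 1
mod = l ** (2 * m + 1)      # 27
h = l ** (2 * m)            # 9

# All elements of K = {[[r, s], [h*s, t]] : r = t mod h, r a unit}.
K = [((r, s), (h * s % mod, t))
     for r in range(mod) if r % l != 0
     for s in range(mod)
     for t in range(r % h, mod, h)]

# Representatives of all lines in (Z/27)^2: (1, t) and (l*s, 1).
lines = [(1, t) for t in range(mod)] + [(l * s, 1) for s in range(mod // l)]

def is_eigenline(M, v):
    """True if M stabilizes the line spanned by v."""
    (p, q), (r, s) = M
    v0, v1 = v
    return (v0 * (r * v0 + s * v1) - v1 * (p * v0 + q * v1)) % mod == 0

simultaneous = sorted(v for v in lines if all(is_eigenline(M, v) for M in K))

# The lemma predicts the lines (1, l*k) with k = +- l^(m-1) mod l^(2m-1).
lm = l ** (2 * m - 1)
expected = sorted((1, l * k) for k in range(mod // l)
                  if k % lm in (l ** (m - 1) % lm, (lm - l ** (m - 1)) % lm))
assert simultaneous == expected
```

For these parameters the predicted congruence is $k \equiv \pm 1 \pmod{3}$, giving six lines, and the exhaustive search recovers exactly those.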
\begin{lem}\label{borelcomp} Let $X \in \text{GL}_2(\mathbb{Z}/\ell^{2m+1}\mathbb{Z})$ be contained in a Borel subgroup mod $\ell^{2m}$. Then $X \cdot \mathbb{K}$ is contained in a Borel subgroup mod $\ell^{2m+1}$ if and only if $X$ has equal diagonal characters mod $\ell^{m}$ and, writing \[X = \begin{pmatrix} a + \ell^{m} x & b \\ \ell^{2m}z & a + \ell^{m} w \end{pmatrix}, \] we have \[\pm (x-w) \equiv (z-b) \pmod{\ell}.\] \end{lem} \begin{proof} It suffices by Lemma \ref{kevec} to consider when $X = \begin{pmatrix} x_1 & b \\ \ell^{2m}z & x_2 \end{pmatrix}$ stabilizes the line spanned by $\vect{1}{k \ell}$ for $k \equiv \pm \ell^{m-1} \pmod{\ell^{2m-1}}$. The eigenvector equation reduces to \[\ell^{2m-1}z + k x_2 \equiv_{\ell^{2m}} k x_1 + k^2 \ell b. \] A consideration of valuations of the monomials gives that $\ell^m \mid (x_1-x_2)$. Now letting $x_1 = a + \ell^m x$ and $x_2 = a+ \ell^m w$, the above simplifies to $(x-w) \equiv_\ell \pm (z-b)$ as desired. \end{proof} \begin{lem}\label{inductcong} Let $Y \in \text{GL}_2(\mathbb{Z}/\ell^{2m+1} \mathbb{Z})$ be contained in a Borel subgroup mod $\ell^{2m}$ with equal diagonal entries modulo $\ell^j$ for $0 < j < m$. If every element of $Y \cdot \mathbb{K}$ has square discriminant then $Y$ has equal diagonal entries modulo $\ell^{j+1}$. \end{lem} \begin{proof} By hypothesis, every element of $Y \cdot \mathbb{K}$ is a power of $Y$ times an element of $\mathbb{K}$. Write \[Y = \begin{pmatrix} a + \ell^j x & b \\ \ell^{2m}z & a + \ell^j w \end{pmatrix}, \text{ and}\qquad R = \begin{pmatrix} r & s \\ \ell^{2m}s & t \end{pmatrix}\] for an element of $\mathbb{K}$ (recall that $r \equiv t \pmod{\ell^{2m}}$).
It is easy to prove by induction that \[ Y^k = \begin{pmatrix} (a+\ell^jx)^k + {k \choose 2} \ell^{2m}a^{k-2}zb & k a^{k-1}b + \ell(\Asterisk) \\ k \ell^{2m}za^{k-1} & (a+\ell^j w)^k + {k \choose 2} \ell^{2m}a^{k-2} zb \end{pmatrix}.\] Now consider the product \[Y^{\ell^{m-j}} \cdot R = \begin{pmatrix} (a+\ell^jx)^{\ell^{m-j}} &\ell(\Asterisk) \\ 0 & (a+\ell^jw)^{\ell^{m-j}} \end{pmatrix} \begin{pmatrix} r & s \\ \ell^{2m}s & t \end{pmatrix} = \begin{pmatrix} r(a+\ell^jx)^{\ell^{m-j}} & a^{\ell^{m-j}}s + \ell(\Asterisk) \\ \ell^{2m}a^{\ell^{m-j}}s & t(a+\ell^jw)^{\ell^{m-j}} \end{pmatrix} . \] This element has discriminant \begin{align*} \Delta(Y^{\ell^{m-j}} \cdot R) &= \left( r(a+\ell^j x)^{\ell^{m-j}} - t(a+\ell^jw)^{\ell^{m-j}} \right)^2 + 4\ell^{2m}a^{2\ell^{m-j}}s^2, \\ &= \left( \sum_{k=0}^{\ell^{m-j}} {\ell^{m-j} \choose k} \ell^{jk} a^{\ell^{m-j}-k} (rx^k-tw^k) \right)^2 +4\ell^{2m}a^{2\ell^{m-j}}s^2, \\ &\equiv_{\ell^{2m+1}} \left( r\ell^ma^{\ell^{m-j}-1}(x-w) + \sum_{k=2}^{\ell^{m-j}} {\ell^{m-j} \choose k} \ell^{jk} ra^{\ell^{m-j}-k} (x^k-w^k) \right)^2 +4\ell^{2m}a^{2\ell^{m-j}}s^2,\\ \end{align*} since every term in the sum includes some power of $\ell$, and $r\equiv_{\ell^{2m}} t$. By Hensel's Lemma (and our assumption that $\ell \neq 2$) the discriminant $\Delta(Y^{\ell^{m-j}} \cdot R)$ is a square modulo $\ell^{2m+1}$ if and only if it is zero or an even power of $\ell$ times a (nonzero) quadratic residue mod $\ell$. Hence the above discriminant is a square if and only if \[r^2a^{2\ell^{m-j}-2}(x-w)^2 + 4a^{2\ell^{m-j}}s^2 = a^{2\ell^{m-j}-2}(r^2(x-w)^2 + 4a^2s^2), \] is a square mod $\ell$. In order for \emph{every} element of $Y \cdot \mathbb{K}$ to have square discriminant, the quadratic form \[Q(r,s) = (x-w)^2r^2 + 4a^2s^2 \] must be zero or a quadratic residue modulo $\ell$ \textbf{for all} $r,s$ mod $\ell$. Hence the discriminant of this form, $-16(x-w)^2a^2$, must be zero mod $\ell$ (any form with nonzero discriminant represents a quadratic non-residue).
As $\ell \nmid a$, we have $\ell \mid (x-w)$. So in fact the diagonal characters are equal modulo $\ell^{j+1}$. \end{proof} \begin{lem}\label{lastcong} Let $Y = \begin{pmatrix} a + \ell^mx & b \\ \ell^{2m}z & a + \ell^m w \end{pmatrix}$ have equal diagonal entries modulo $\ell^m$. If every element of $Y \cdot \mathbb{K}$ has square discriminant then \[ (x-w) \equiv \pm(z-b) \pmod{\ell}. \] \end{lem} \begin{proof} As above, let $R = \begin{pmatrix} r & s \\ \ell^{2m}s & t \end{pmatrix}$ be an element of $\mathbb{K}$. Then we have \begin{align*} \Delta(Y) &= \ell^{2m}\left((x-w)^2 + 4 bz\right), \\ \Delta(Y \cdot R) &= \ell^{2m}r^2(x-w)^2 + 4 \ell^{2m}(zr + as)(as+bt) \\ &= r^2 \Delta(Y) + 4\ell^{2m}(a^2s^2 + asr(z+b)). \end{align*} If $\Delta(Y)$ is a square, then it is $\ell^{2m} \delta^2$, for some $\delta \in \mathbb{Z}/\ell\mathbb{Z}$. In order for $\Delta(Y \cdot R) = \ell^{2m}(\delta^2r^2 + 4a(z+b) rs + 4a^2 s^2)$ to be a square, we must have that the quadratic form \[Q(r,s) = \delta^2r^2 + 4a(z+b) rs + 4a^2 s^2 \] does not represent any quadratic nonresidue of $\mathbb{Z}/\ell\mathbb{Z}$. Hence the discriminant of $Q$ must be $0$ mod $\ell$: \[(z+b)^2 \equiv (x-w)^2 + 4zb \pmod{\ell},\] which implies \[ (x-w) \equiv \pm (z-b) \pmod{\ell}, \] as desired. \end{proof} \begin{prop}\label{excepX} Let $X$ be radical mod $\ell$ and Borel mod $\ell^{2m}$. Then $X \cdot \mathbb{K}$ is lift-exceptional if and only if the diagonal characters of $X$ are opposite modulo $\ell^{m+1}$. \end{prop} \begin{proof} In order to be lift-exceptional, $X \cdot \mathbb{K}$ must not be contained in a Borel subgroup and every element of $X \cdot \mathbb{K}$ must have square discriminant.
We consider two cases separately: \textbf{$X$ has equal diagonal characters mod $\ell$:} in this case, write \[ X = \begin{pmatrix} a + \ell^j x & b \\ \ell^{2m}z & a + \ell^j w \end{pmatrix}, \text{ and}\qquad R = \begin{pmatrix} r & s \\ \ell^{2m}s & t \end{pmatrix}.\] Lemma \ref{borelcomp} shows that $X \cdot \mathbb{K}$ is contained in a Borel subgroup if and only if \begin{equation}\label{fundcong} (x-w) \equiv \pm (z-b) \pmod{\ell}. \end{equation} But Lemmas \ref{inductcong} and \ref{lastcong} with $Y = X$ show that the congruence \eqref{fundcong} must hold if every element of $X \cdot \mathbb{K}$ has square discriminant. So $X \cdot \mathbb{K}$ is never lift-exceptional. \textbf{$X$ has opposite diagonal characters mod $\ell$:} in this case, Lemma \ref{borelcomp} shows that $X \cdot \mathbb{K}$ is never contained in a Borel subgroup; so it suffices to show that every element of $X \cdot \mathbb{K}$ has square discriminant precisely when $X$ has opposite diagonal characters modulo $\ell^{m+1}$. If $X$ has opposite diagonal characters modulo $\ell^j$, then $X^2$ has equal diagonal characters modulo $\ell^j$. Hence Lemma \ref{inductcong} applied to $Y = X^2$ shows that this assumption forces the diagonal characters to be opposite modulo $\ell^m$ (since the sum and difference of the diagonal characters of $X$ cannot both be divisible by $\ell$ while $X$ is contained in a Borel and invertible mod $\ell$). Similarly, Lemma \ref{lastcong} applied to $Y = X^2$ shows that the diagonal characters are opposite modulo $\ell^{m+1}$, since the ``$z$'' and ``$b$'' of $Y = X^2$ are both $0$ mod $\ell$. So this congruence condition is necessary for $X \cdot \mathbb{K}$ to be lift-exceptional. It is an easy calculation that all elements of $X \cdot \mathbb{K}$ have square discriminant if this congruence condition is met.
\end{proof} \subsection{Proof of Theorem \ref{upthm}} To prove Theorem \ref{upthm}, it suffices to prove that for any choice of $X$ subject to the constraints of Proposition \ref{excepX}, the exceptional subgroup generated is contained in the group \[R(\ell^{2m+1}) = \left\{\begin{pmatrix} r & s \\ \ell^{2m}(\epsilon s) & \epsilon t \end{pmatrix} \ : \ {\epsilon = \pm 1 \atop r \equiv t \pmod{\ell^{m+1}} } \right\}, \] in an appropriate basis. \begin{prop} For any $X$ of the form \[X = \begin{pmatrix} a + \ell^{m+1}x & b \\ \ell^{2m}z & -a + \ell^{m+1}w \end{pmatrix}, \] $X \cdot \mathbb{K}$ is contained in $R(\ell^{2m+1})$ up to conjugation. \end{prop} \begin{proof} If $X$ has the above form with $z \equiv -b \pmod{\ell}$, then so do all of its powers. Hence it suffices to check that something of this form times an arbitrary element of $\mathbb{K}$ is contained in $R(\ell^{2m+1})$: \[\begin{pmatrix} a + \ell^{m+1}x & b \\ -\ell^{2m}b & -a + \ell^{m+1}w \end{pmatrix} \cdot \begin{pmatrix} r & s \\ \ell^{2m}s & t \end{pmatrix} = \begin{pmatrix} r(a+\ell^{m+1}x) + \ell^{2m}bs & s(a + \ell^{m+1}x) + bt \\ \ell^{2m}(-sa-br) & t(-a + \ell^{m+1}w)-\ell^{2m}sb \end{pmatrix}, \] and the right-hand side lies in $R(\ell^{2m+1})$ (with $\epsilon = -1$), using $r \equiv t \pmod{\ell^{2m}}$. Now it remains only to show that we may conjugate $X$ and $\mathbb{K}$ by an appropriate matrix $M$ so as to bring $\mathbb{K}$ into itself and bring $X' = M^{-1}XM$ into a matrix of the same form whose off-diagonal entries satisfy $z' \equiv -b' \pmod{\ell}$. This is achieved by a matrix of the form $M = \begin{pmatrix} 1 & \mu \\ 0 & 1 \end{pmatrix}$ as we have that: \[M^{-1} \cdot X \cdot M = \begin{pmatrix} a + \ell^{m+1}x - \ell^{2m}z\mu & 2\mu a + b + \ell^{m+1}\mu(x-w) - \ell^{2m}\mu^2z \\ \ell^{2m}z & -a + \ell^{m+1} w + \ell^{2m}z\mu \end{pmatrix}. \] As $a$ is a unit, we may choose $\mu \equiv -\frac{z+b}{2a} \pmod{\ell}$.
Finally, on some element $R$ of $\mathbb{K}$ this acts as \[ M^{-1} \cdot R \cdot M = \begin{pmatrix} r - \ell^{2m}s\mu & s + \mu(r-t) - \ell^{2m}s\mu^2 \\ \ell^{2m}s & t + \ell^{2m}s \mu \end{pmatrix}, \] which is again an element of $\mathbb{K}$ as $\ell^{2m} \mid (r-t)$. \end{proof} \begin{rem} $R(\ell^{2m+1})$ is itself exceptional and corresponds to $X \cdot \mathbb{K}(\ell^{2m+1})$ for \[X = \begin{pmatrix} 1 & 0 \\ 0 & -1 + \ell^{m+1} \end{pmatrix}. \] \end{rem} We now prove the last part of Theorem \ref{ellnthm}~(i). \begin{cor}\label{quadextn} If $E/K$ has an $\ell^{2m}$-isogeny up to isogeny but fails to have a global $\ell^{2m+1}$-isogeny up to isogeny, then there exists some quadratic extension $F/K$ such that $E$ is $F$-isogenous to a curve $E''/F$ having an $\ell^{2m+1}$-isogeny over $F$. \end{cor} \begin{proof} The index $2$ subgroup of $R(\ell^{2m+1})$ where $\epsilon = +1$ is contained in the Borel subgroup corresponding to the vector $\vect{1}{\ell^m}$. \end{proof} \begin{cor}\label{counter} For every $\ell$, there exists a number field $K$ and an elliptic curve $E$ over $K$ having a rational $\ell$-isogeny, and locally almost everywhere having a rational $\ell^{2m+1}$-isogeny up to isogeny, but which is not isogenous to an elliptic curve with a global rational $\ell^{2m+1}$-isogeny. \end{cor} \begin{proof} It suffices to demonstrate a subgroup $H$ of $\text{GL}_2(\mathbb{Z}/\ell^{2m+1} \mathbb{Z})$ giving rise to an exceptional $\ell$ for any choice of $\ell$. By Galois theory, there then exists a $K$ and an $E$ given as the base-change of an elliptic curve $E'/K'$ with surjective $\ell$-adic image of Galois. As noted in the above Remark, we can simply take $H = R(\ell^{2m+1})$. \end{proof} \begin{rem} If instead of asking for a local-global principle for the isogeny class of $E$ (e.g.
our up to isogeny results) we asked for a strict local-to-global principle, Proposition \ref{potexcep} would still give a necessary condition on potentially exceptional subgroups if $G(\ell)$ is \textbf{not} contained in a split Cartan subgroup. The following example shows that the set of exceptional subgroups is still non-empty even when one adds the extra structure of $E$ itself having an $\ell^n$-isogeny locally almost everywhere: Let $H \subseteq \text{GL}_2(\mathbb{Z}/\ell^3 \mathbb{Z})$ be the subgroup generated by \[M = \begin{pmatrix} 1 & 1 \\ \ell^2 & 1 \end{pmatrix}, \qquad X = \begin{pmatrix} 1 & 0\\0 & -1 \end{pmatrix}. \] $H$ is not contained in a Borel mod $\ell^3$ as the only two Borels containing $X$ are the upper and lower-triangular matrices. $M$ itself is contained in a Borel, as \[ \begin{pmatrix} 1 & 1 \\ \ell^2 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ \ell \end{pmatrix} = (1+ \ell)\begin{pmatrix} 1 \\ \ell \end{pmatrix}. \] Hence $H$ cannot be the image of the mod $\ell^3$ Galois representation of an elliptic curve $E$ satisfying the strict conclusions of the local-to-global principle (it also fails ``up to isogeny''). All that remains is to show that it satisfies the hypotheses -- that is, that every element is contained in a Borel. To show this, we may replace $H$ with the quotient by all scalar matrices, $\mathbb{P} H$. In this case we have the relations $M^{\ell^3} = 1, X^2 = 1, XMX = M^{-1}$. This implies that we have an exact sequence \[0 \to \langle M \rangle \to H \to \mathbb{Z}/2\mathbb{Z} \to 0, \] where the last map counts the number of copies of $X$ in a word modulo $2$. If the image in $\mathbb{Z}/2 \mathbb{Z}$ is $1$, then mod $\ell$, the matrix is of the form $\begin{pmatrix} 1 & \Asterisk \\ 0 & -1 \end{pmatrix}$, and hence has distinct eigenvalues mod $\ell$. Lemma \ref{hensel} guarantees that the same is true mod $\ell^3$, and it is necessarily diagonalizable, hence contained in a Borel.
If a matrix is in the kernel of this map, then it is in the group generated by $M$, and hence contained in a Borel. \end{rem} \section{Analysis at $\ell =2$}\label{two} Although the above analysis only holds for odd primes, we may computationally explore the picture at $2$. In this case, the local condition must be modified slightly: we first prove that $E/K$ with a $2^{n-1}$-isogeny has a $2^n$-isogeny locally almost everywhere if and only if the characteristic polynomial of every element of $G(2^{n})$ has a root. This, of course, implies the local condition for other primes $\ell$, which is stated more clearly in terms of discriminants. \begin{prop}\label{2jord2} Let $\gamma \in \text{GL}_2(\mathbb{Z}/2^n\mathbb{Z})$ be of the form $\gamma = \begin{pmatrix} 1 + 2x & 1+ 2y \\ 2^{n-1} z & 1 + 2w \end{pmatrix}$ with a rational eigenvalue mod $2^n$. Then $\gamma$ has a rational eigenvector. \end{prop} \begin{proof} The characteristic polynomial of $\gamma$ is $X^2 - 2(1+x+w)X + (1+2x)(1+2w)-2^{n-1}z$ mod $2^n$. By assumption this has a root, and by Hensel's Lemma such a root must be $1 \mod 2$. Hence substituting $X = -1 + 2Y$ we have that there exists a root of $4Y^2 - 4(2+x+w)Y + 4(1+ x + w + xw - 2^{n-3}z)$ mod $2^n$. Equivalently, there exists a root of \begin{equation}\label{char}Y^2 - (2+x+w)Y + 1+ x + w +xw - 2^{n-3}z \pmod{2^{n-2}}. \end{equation} The quadratic equation in $X$ for $\begin{pmatrix} 1 \\ 2X \end{pmatrix}$ to be an eigenvector of $\gamma$ is \begin{equation}\label{eigen} X^2 + \frac{x-w}{1+2y} X - \frac{2^{n-3}z}{1+2y} \equiv 0 \pmod{2^{n-2}}. \end{equation} We would like to show that equation \eqref{char} having a root implies that equation \eqref{eigen} also has a root. Both equations have the same parity of the linear term, hence we split into two cases depending on the parity of $x$ and $w$. \textit{Case 1: $x$ and $w$ have opposite parity}. In this case, both equations \eqref{char} and \eqref{eigen} are of the form $X^2 + aX +b$ where $a$ is odd.
As there exists a root of \eqref{char} mod $2^{n-2}$, it must be that $2^{n-3}z$ is even, and hence for both \eqref{char} and \eqref{eigen} the constant term $b$ is even. Thus by Hensel's Lemma there exists a root of equation \eqref{eigen} which is $1 \pmod{2}$. \textit{Case 2: $x$ and $w$ have the same parity}. In this case we let $x =w +2r$ and complete the square on both equations to get \begin{equation}\label{char2} \left( Y - (1+w+r) \right)^2 - (2^{n-3}z + r^2), \end{equation} and \begin{equation}\label{eigen2} \left( X + \frac{r}{1+2y} \right)^2 - \left( \frac{2^{n-3} z(1+2y) + r^2}{(1+2y)^2} \right) .\end{equation} Equation \eqref{char2} has a solution mod $2^{n-2}$ if and only if $2^{n-3}z + r^2$ is a square mod $2^{n-2}$. And equation \eqref{eigen2} has a solution mod $2^{n-2}$ if and only if \[2^{n-3}z(1+2y) + r^2 \equiv 2^{n-3}z + r^2 \pmod{2^{n-2}}\] is a square mod $2^{n-2}$. Hence a solution to equation \eqref{char2} implies one for \eqref{eigen2}. \end{proof} \begin{prop} An elliptic curve $E/K$ having a $2^{n-1}$-isogeny has a $2^{n}$-isogeny locally almost everywhere if and only if for all $g \in G_E(2^n)$, the characteristic equation of $g$ has a root. \end{prop} \begin{proof} Given the result of Prop \ref{2jord2}, this follows from the argument in Prop \ref{equiv_p}. \end{proof} Inspired by this criterion, we now make the following definition: \begin{defin} A subgroup $H \subset \text{GL}_2(\mathbb{Z}/2^n\mathbb{Z})$ is called \defi{exceptional} if \begin{itemize} \item There do not exist integers $j, k$ such that $j+k = n$ and $H \mod 2^{j}$ is contained in a Borel subgroup of $\text{GL}_2(\mathbb{Z}/2^j\mathbb{Z})$ and $H \mod 2^k$ is contained in a Cartan subgroup of $\text{GL}_2(\mathbb{Z}/2^k\mathbb{Z})$, \item For every element $h \in H$, the characteristic polynomial of $h$ has a rational root.
\end{itemize} \end{defin} Exceptional subgroups of $\text{GL}_2(\mathbb{Z}/2^n\mathbb{Z})$ give rise to exceptions to the local-global principle mod $2^n$, and every exception mod $2^n$ must have image of Galois equal to an exceptional group. The \textsc{Sage} and \textsc{Magma} code necessary to find all exceptional subgroups of $\text{GL}_2(\mathbb{Z}/2^n\mathbb{Z})$ for $n \leq 6$ can be found at \texttt{stuff.mit.edu/\~{}ivogt/isogeny}. This classification differs from the case of odd primes $\ell$ in two ways. There we only classified \emph{minimally} exceptional images of Galois, which were assumed to be Borel modulo one lower power of $\ell$. This ignores (1) exceptional groups which are images of $\ell^r$-isogenous curves and (2) exceptional groups mod higher powers of $\ell$ whose reduction is contained in a minimal one. However, if there is an exception, then there is a minimal exception, so this simplification turned out to be harmless since there are finitely many lift-exceptional subgroups modulo every odd prime power. The presence of genus 0 and 1 modular curves for 2-power levels necessitated the finer analysis. We summarize the results here. \begin{prop}\label{2exceptions} Let $H \subset \text{GL}_2(\mathbb{Z}/2^n\mathbb{Z})$ be exceptional. Then \begin{enumerate} \item $n \geq 3$. \item If $n=3, 4, 5$ or $6$, then $H$ is a subgroup of one of the maximal exceptional subgroups listed in Table 1. \item If $n\geq 6$, then the corresponding modular curve $X_H$ has genus at least $2$.
\end{enumerate} \end{prop} {\footnotesize \begin{table}\label{2data} \begin{tabular}{c c c c c c c} label & $n$& $\text{GL}_2$-level & genus & generators &SZ label & RZB label \\ \hline 2147& 3 & 4 & 0 & $\sm{1}{1}{0}{1},\sm{3}{0}{0}{3},\sm{7}{4}{4}{3},\sm{5}{4}{4}{5}$ & \multicolumn{2}{c}{(det not surjective)} \\[5pt] 2177& 3 & 4 & 0 & $\sm{1}{2}{0}{1},\sm{7}{4}{4}{3},\sm{3}{4}{4}{3},\sm{7}{6}{2}{3}$ & \multicolumn{2}{c}{(det not surjective)} \\[5pt] 27445& 4 & 8 & 0 & $\sm{1}{2}{0}{1},\sm{3}{1}{0}{1},\sm{3}{0}{0}{3},\sm{1}{8}{8}{9},\sm{13}{8}{8}{5},\sm{3}{9}{0}{5}$ & 8G0-8f & 92\\[5pt] 189551& 5 & 16 & 0 & $\sm{1}{1}{0}{1},\sm{3}{0}{0}{1},\sm{3}{0}{0}{3},\sm{1}{16}{16}{17},\sm{29}{16}{16}{13}$ & \multicolumn{2}{c}{(det not surjective)} \\[5pt] 189605& 5 & 16 & 0 & $\sm{1}{1}{0}{1},\sm{9}{0}{0}{1},\sm{3}{0}{0}{3},\sm{1}{16}{16}{17},\sm{29}{16}{16}{13},\sm{3}{9}{16}{21}$ & \multicolumn{2}{c}{(det not surjective)} \\[5pt] 189621& 5 & 32 & 1 & $\sm{1}{1}{0}{1},\sm{3}{0}{0}{3},\sm{21}{0}{0}{21},\sm{3}{0}{8}{9},\sm{15}{27}{8}{9}$ & 32A1-32b & 353\\[5pt] 189785& 5 & 8 & 0 & $\sm{1}{2}{0}{1},\sm{3}{0}{0}{1},\sm{3}{0}{0}{3},\sm{1}{24}{8}{25},\sm{13}{24}{24}{21}$ & \multicolumn{2}{c}{(det not surjective)} \\[5pt] 189892& 5 & 8 & 0 & $\sm{1}{2}{0}{1},\sm{3}{1}{0}{1},\sm{3}{0}{0}{3},\sm{1}{24}{8}{25},\sm{13}{24}{24}{21}$ & \multicolumn{2}{c}{(det not surjective)} \\[5pt] 189900& 5 & 16 & 1 & $\sm{1}{2}{0}{1},\sm{3}{1}{0}{1},\sm{3}{0}{0}{3},\sm{1}{16}{16}{17},\sm{29}{16}{16}{13},\sm{3}{9}{16}{21}$ & 16E1-16h & 305\\[5pt] 189979& 5 & 8 & 0 & $\sm{1}{2}{0}{1},\sm{9}{0}{0}{1},\sm{3}{0}{0}{3},\sm{1}{24}{8}{25},\sm{13}{24}{24}{21},\sm{13}{30}{4}{3}$ & \multicolumn{2}{c}{(det not surjective)} \\[5pt] 189981& 5 & 8 & 0 & $\sm{1}{2}{0}{1},\sm{9}{0}{0}{1},\sm{3}{0}{0}{3},\sm{1}{24}{8}{25},\sm{13}{24}{24}{21},\sm{3}{9}{16}{21}$ & \multicolumn{2}{c}{(det not surjective)} \\[5pt] 189995& 5 & 16 & 1 & $\sm{1}{2}{0}{1},\sm{3}{0}{0}{3},\sm{29}{16}{16}{13},\sm{3}{0}{4}{9},\sm{7}{4}{4}{9}$ & 
16E1-16b & 314\\[5pt] 190318& 5 & 8 & 0 & $\sm{1}{4}{0}{1},\sm{3}{0}{0}{1},\sm{3}{0}{0}{3},\sm{29}{28}{20}{21},\sm{19}{12}{4}{3}$ & \multicolumn{2}{c}{(det not surjective)} \\[5pt] 190435& 5 & 8 & 1 & $\sm{3}{1}{0}{1},\sm{3}{0}{0}{3},\sm{1}{24}{8}{25},\sm{13}{24}{24}{21},\sm{3}{9}{16}{21}$ & 8F1-8k & 278\\[5pt] 190487& 5 & 8 & 0 & $\sm{1}{4}{0}{1},\sm{3}{0}{0}{3},\sm{1}{24}{8}{25},\sm{29}{28}{20}{21},\sm{19}{12}{4}{3},\sm{13}{30}{4}{3}$ & \multicolumn{2}{c}{(det not surjective)} \\[5pt] 190525& 5 & 8 & 1 & $\sm{1}{4}{0}{1},\sm{3}{0}{0}{3},\sm{13}{24}{24}{21},\sm{9}{0}{2}{3},\sm{13}{12}{10}{3}$ & 8F1-8j & 255\\[5pt] 876594& 6 & 64 & 3 & $\sm{1}{1}{0}{1},\sm{9}{0}{0}{1},\sm{3}{0}{0}{3},\sm{5}{0}{0}{5},\sm{15}{27}{8}{9}$ & \multicolumn{2}{c}{(det not surjective)} \\[5pt] 878116& 6 & 32 & 3 & $\sm{1}{2}{0}{1},\sm{9}{0}{0}{1},\sm{3}{0}{0}{3},\sm{21}{32}{32}{53},\sm{13}{30}{4}{3}$ & \multicolumn{2}{c}{(det not surjective)} \\[5pt] 881772& 6 & 16 & 3 & $\sm{1}{4}{0}{1},\sm{9}{0}{0}{1},\sm{3}{0}{0}{3},\sm{29}{16}{48}{45},\sm{13}{12}{10}{3}$ & \multicolumn{2}{c}{(det not surjective)} \\[5pt] 885865& 6 & 8 & 3 & $\sm{1}{8}{0}{1},\sm{3}{0}{0}{3},\sm{33}{56}{40}{57},\sm{0}{9}{1}{0},\sm{45}{24}{24}{21}$ & \multicolumn{2}{c}{(det not surjective)} \\[5pt] 890995& 6 & 64 & 3 & $\sm{1}{1}{0}{1},\sm{3}{0}{0}{1},\sm{3}{0}{0}{3},\sm{5}{0}{0}{5},\sm{3}{9}{32}{15}$ & & 667\\[5pt] 891525& 6 & 32 & 3 & $\sm{1}{2}{0}{1},\sm{3}{0}{0}{1},\sm{3}{0}{0}{3},\sm{21}{32}{32}{53},\sm{19}{30}{16}{21}$ & & 627\\[5pt] 891526& 6 & 32 & 3 & $\sm{3}{0}{0}{1},\sm{3}{0}{0}{3},\sm{1}{32}{32}{33},\sm{21}{32}{32}{53},\sm{3}{9}{32}{15}$ & & 617\\[5pt] 891735& 6 & 32 & 3 & $\sm{1}{2}{0}{1},\sm{3}{1}{0}{1},\sm{3}{0}{0}{3},\sm{21}{32}{32}{53},\sm{3}{9}{16}{21}$ & & 636\\[5pt] 891737& 6 & 32 & 3 & $\sm{3}{1}{0}{1},\sm{3}{0}{0}{3},\sm{1}{32}{32}{33},\sm{21}{32}{32}{53},\sm{3}{9}{32}{15}$ & & 621\\[5pt] 891738& 6 & 32 & 3 & 
$\sm{1}{2}{0}{1},\sm{3}{1}{0}{1},\sm{3}{0}{0}{3},\sm{1}{32}{32}{33},\sm{21}{32}{32}{53},\sm{35}{24}{32}{15}$ & & 638\\[5pt] 893009& 6 & 16 & 3 & $\sm{3}{0}{0}{1},\sm{3}{0}{0}{3},\sm{1}{48}{16}{49},\sm{29}{16}{48}{45},\sm{19}{30}{16}{21}$ & & 612\\[5pt] 893011& 6 & 16 & 3 & $\sm{1}{4}{0}{1},\sm{3}{0}{0}{1},\sm{3}{0}{0}{3},\sm{29}{16}{48}{45},\sm{23}{36}{8}{9}$ & & 614\\[5pt] 893326& 6 & 16 & 3 & $\sm{3}{1}{0}{1},\sm{3}{0}{0}{3},\sm{1}{48}{16}{49},\sm{29}{16}{48}{45},\sm{3}{9}{16}{21}$ & & 603\\[5pt] 893327& 6 & 16 & 3 & $\sm{9}{0}{0}{1},\sm{3}{1}{0}{1},\sm{3}{0}{0}{3},\sm{1}{48}{16}{49},\sm{29}{16}{48}{45},\sm{35}{51}{16}{21}$ & & 544\\[5pt] 894711& 6 & 8 & 3 & $\sm{3}{0}{0}{1},\sm{3}{0}{0}{3},\sm{33}{56}{40}{57},\sm{45}{24}{24}{21},\sm{23}{24}{4}{3}$ & & 541\\[5pt] \end{tabular} \caption{Maximal exceptional subgroups of $\text{GL}_2(\mathbb{Z}/2^n\mathbb{Z})$ for $n \leq 6$. The SZ label corresponds to the paper \cite{sz} and the RZB label corresponds to the transpose group in the paper \cite{rzb}.} \end{table} } \section{Boundedness of Lift-Exceptional Primes}\label{boundell} Recall that a prime $\ell$ is called lift-exceptional if there exists an elliptic curve $E/K$ with an $\ell$-isogeny and an $\ell^n$-isogeny locally almost everywhere up to isogeny, but which is not isogenous to a curve with an $\ell^n$-isogeny. The classification of lift-exceptional subgroups (subgroups which occur as images of $\rho_{E', \ell^n}$ for some $E'$ isogenous to the $E$ as above) in Theorem \ref{upthm} shows that they are necessarily Borel modulo $\ell^{2m}$ and radical modulo $\ell^{m+1}$ for some $m \geq 1$. Using this, we bound lift-exceptional primes $\ell$, depending on the number field alone. We begin with a bound analogous to the bound in \cite[Theorem 1]{sutherland} of Sutherland. Being radical mod $\ell$ alone puts a condition on the number field $K$ when $\ell \equiv 1 \pmod{4}$. 
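Concretely, the constraint comes from fourth powers in $\mathbb{F}_\ell^\times$: when $\ell \equiv 1 \pmod{4}$ the fourth powers form a proper subgroup of the squares, so the square of a nonsquare is never a fourth power. A brute-force check of this fact (an illustrative aside, not part of the paper's computations):

```python
# Illustrative check: for a prime p ≡ 1 (mod 4), the square of a
# nonsquare in F_p^* is never a fourth power, since x^2 = z^4 forces
# x = ±z^2 and -1 is itself a square.
def sq_of_nonsquare_is_never_fourth_power(p):
    units = range(1, p)
    squares = {x * x % p for x in units}
    fourth_powers = {pow(x, 4, p) for x in units}
    # run over the nonsquares and test their squares
    return all(x * x % p not in fourth_powers
               for x in units if x not in squares)

assert all(sq_of_nonsquare_is_never_fourth_power(p) for p in (5, 13, 17, 29, 37))
assert not sq_of_nonsquare_is_never_fourth_power(7)  # p ≡ 3 (mod 4): fails
```

For $\ell \equiv 3 \pmod{4}$ the check fails, since there the fourth powers already fill out all of the squares.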
\begin{prop} For $\ell \equiv 1 \pmod{4}$, if $K \not\supseteq \mathbb{Q}(\sqrt{\ell})$, then $G(\ell)$ is not contained in a radical subgroup. \end{prop} \begin{proof} The assumption that $G(\ell)$ is contained in a radical subgroup $\begin{pmatrix} \chi_1 & \Asterisk \\ 0 & \pm \chi_1 \end{pmatrix}$, and the fact that the determinant character is the cyclotomic character, imply that \begin{equation}\label{4power} \chi_1^4 = \operatorname{cyc}^2. \end{equation} If $K \not\supseteq \mathbb{Q}(\sqrt{\ell})$, then the image of the determinant contains nonsquares in $\mathbb{F}_\ell^\times$. But as $\ell \equiv 1 \pmod{4}$, $(\mathbb{F}_\ell^\times)^4 \subsetneq (\mathbb{F}_\ell^\times)^2$, and indeed the square of any nonsquare is not a fourth power. So we have a contradiction. \end{proof} \begin{cor}\label{sqrt} If $\ell \equiv 1 \pmod{4}$ and $\ell^k$ is a lift-exceptional prime power, then \[\ell \leq \Delta_K.\] \end{cor} \begin{rem} In fact, if $\ell \equiv 1 \pmod{4}$, then $\ell$ is lift-exceptional only if $\sqrt{\ell} \in K$. \end{rem} We can obtain a better bound depending only on the degree of the number field using the following Lemma and an easy extension of a result of Serre. \begin{lem}\label{iso_inv} The property of being contained in a radical subgroup mod $\ell$ is an isogeny invariant. \end{lem} \begin{proof} If $E$ and $E'$ are isogenous by a prime-to-$\ell$ degree isogeny, then $E[\ell] \simeq E'[\ell]$ as Galois modules. Therefore we can assume that $E$ and $E'$ are $\ell$-isogenous. Assume that $G_E(\ell)$ is contained in a radical subgroup $\begin{pmatrix} \chi_1 & \Asterisk \\ 0 & \pm \chi_1 \end{pmatrix}$ of $\text{GL}_2(\mathbb{Z}/\ell\mathbb{Z})$. Let $\chi$ be the isogeny character for $E \to E'$ (i.e. the character giving the Galois action on the kernel of $E \to E'$). We claim that it suffices to show that $\chi^2 = \operatorname{cyc}$. Indeed, recall that the determinant is the cyclotomic character. 
Therefore the character $\chi'$ of the dual isogeny $E' \to E$ also satisfies $\chi'^2 = \operatorname{cyc}$, and so $G_{E'}(\ell)$ is radical as well. To show that $\chi^2 = \operatorname{cyc}$, note that there is a subgroup $H \subset G_E(\ell)$, of index at most $2$, which is the kernel of the map $\phi$ taking the ratio of the diagonal characters. \[ H \hookrightarrow G_E(\ell) \xrightarrow{\phi} \{\pm 1\} \] Since $H$ has only one eigenspace, which corresponds to the character $\chi_1$, the isogeny character $\chi$ can differ from $\chi_1$ by at most a quadratic character. Therefore \[\chi_1^2 = \operatorname{cyc} = \chi^2, \] as desired. \end{proof} \begin{lem}[cf.\ Lemma 18' of \cite{serrechebotarev} in the case $K = \mathbb{Q}$]\label{largeorder} For every prime $\lambda$ of $\O_K$ above $\ell \geq 5$, the image in $\mathbb{P} G(\ell)$ of the inertia subgroup at $\lambda$ contains an element of order at least $\frac{\ell -1}{3d_K}$. \end{lem} \begin{rem} Serre's argument naturally gives $\frac{\ell -1}{4d_K}$, but this can be strengthened slightly. In fact, in the case of potential good reduction at $\ell$, if $E$ attains good reduction over an extension of ramification index $4$, then over a ramified quadratic extension a quadratic twist of $E$ has good reduction. Therefore in that argument we may assume that $e \leq 3$. \end{rem} \begin{cor}\label{radCart_bound} If $G_{E}(\ell)$ is contained in a radical Cartan subgroup, then \[\ell \leq 6d_K +1. \] \end{cor} \begin{proof} Under these hypotheses, $\mathbb{P} G(\ell)$ would have order at most $2$. Hence, by the previous Lemma, we have that \[2 \geq \frac{\ell-1}{3d_K}, \qquad \Rightarrow \qquad \ell \leq 6d_K + 1. \qedhere \] \end{proof} \begin{cor}\label{finite_primes} If $(\ell^k, j(E))$ is lift-exceptional, then $\ell \leq 6 d_K + 1$. \end{cor} \begin{proof} From the classification of lift-exceptional subgroups, (a member of the isogeny class of) $E$ is radical mod $\ell$ and Borel mod $\ell^2$. 
Hence $E$ is isogenous to a curve which has two independent $\ell$-isogenies (factoring the $\ell^2$-isogeny). But by Lemma \ref{iso_inv}, such a curve is also radical mod $\ell$. The result now follows from Corollary \ref{radCart_bound}. \end{proof} \section{Finiteness of exceptional $j$-invariants}\label{jinvs} In this section, we prove the finiteness results in Theorem \ref{mainthm}. Corollary \ref{finite_primes} guarantees that for any number field $K$, there are only finitely many primes $\ell$ for which some power $\ell^n$ could be lift-exceptional. \begin{prop}\label{finitej} For a fixed odd prime $\ell$, as $k$ ranges over all natural numbers, there are finitely many lift-exceptional $j$-invariants $j(E)$ for $\ell^k$-isogenies over $K$. \end{prop} \begin{proof} If $(\ell^k, j(E))$ is a lift-exceptional pair, then for some $E'/K$ isogenous to $E$, $G_{E'}(\ell^3) \subseteq R(\ell^3)$ or $E$ has an $\ell^4$-isogeny up to isogeny. In both cases, the modular curves $X_{R(\ell^3)}$ and $X_0(\ell^4)$ have genus $\geq 2$ for $\ell \geq 3$, and hence only finitely many $K$-points by Faltings' theorem. \end{proof} Having addressed the problem of lift-exceptional pairs, we now apply this to say something about exceptional pairs, where we make no assumption that $E$ has an $\ell$-isogeny. If $E$ locally almost everywhere has an $\ell^n$-isogeny up to isogeny, then it may fail to have an $\ell^n$-isogeny globally up to isogeny because it fails to have an $\ell$-isogeny. This could be the case whenever $\ell$ is an exceptional prime (in the sense of \cite{sutherland}) for the number field $K$. The following theorem of Anni, building on work of Sutherland, bounds such examples: \begin{thm}[Thm 4.3, Cor 4.5, Thm 5.3 of \cite{anni}]\label{anni_bound} Let $E/K$ be an elliptic curve with an $\ell$-isogeny locally almost everywhere. Then $E$ has an $\ell$-isogeny globally or $\ell \leq \max(\Delta_K, 6d_K+1)$. Further, for fixed $\ell$, there are finitely many exceptional pairs $(\ell, j(E))$ unless $\ell=5,7$. 
\end{thm} Using this, we can prove part (1) of Theorem \ref{mainthm}. \begin{proof}[Proof of Theorem \ref{mainthm} (1)] Fix a number field $K$. We want to show that there are only finitely many exceptional $j$-invariants as $N$ ranges over all integers coprime to $70$. Let $S_K \colonequals \{3, 11, \ldots, \ell_m\}$ be the list of primes at most $\max(\Delta_K, 6[K:\mathbb{Q}]+1)$ coprime to $2,5$, and $7$. Any exceptions must ``come from these primes'', i.e. for any exceptional pair $(N, j(E))$, there exists a prime $\ell \in S_K$ and $\ell^n \mid\mid N$ such that $(\ell^n, j(E))$ is exceptional. We may therefore assume that the prime factors of $N$ are drawn from $S_K$. For any $\ell \neq 5,7$, let $\operatorname{Exc}_{\ell, K}$ denote the finite set of exceptional $j$-invariants over $K$ for $\ell$-isogenies. By Proposition \ref{finitej}, the set of lift-exceptional $j$-invariants for powers of the prime $\ell$ is also finite. Denote this set $\operatorname{Exc}_{\ell^n, K}$. Then we can define the set \[ \operatorname{Exc}_K \colonequals \bigcup_{i=1}^m \left( \operatorname{Exc}_{\ell_i, K} \cup \operatorname{Exc}_{\ell_i^n, K} \right) .\] Note that this set is \textit{finite}. If $j(E)$ is an exceptional $j$-invariant, then the $j$-invariant $j(E')$ of some $K$-isogenous curve $E'$ must lie in $\operatorname{Exc}_K$. As $K$-isogeny classes are finite, this proves the desired result. \end{proof} To prove part (2) of Theorem \ref{mainthm} we need to more deeply analyze counter-examples arising from the failure of the local-global principle mod $\ell = 5, 7$, since there exist number fields where these occur infinitely often. Recall the following result of Anni and Banwait--Cremona. \begin{prop}[Prop 1.3 of \cite{bc} and Prop 3.8 of \cite{anni}] \hfill \begin{enumerate} \item If $(5, j(E))$ is an exceptional pair for $K$ then $\sqrt{5} \in K$ and \begin{itemize} \item $\mathbb{P} G(5) \simeq D_{4}$, the Klein four-group, and $G(5)$ is contained in the normalizer of a split Cartan. 
\end{itemize} \item If $(7, j(E))$ is an exceptional pair for $K$ then $\sqrt{-7} \not\in K$ and \begin{itemize} \item $\mathbb{P} G(7) \simeq D_6$, the dihedral group of order $6$, and $G(7)$ is contained in the normalizer of a split Cartan, \item $E$ admits a $7$-isogeny over $K(\sqrt{-7})$. \end{itemize} \end{enumerate} \end{prop} We can explicitly describe the (maximal) exceptional group in these cases. Let $\alpha$ be a generator of $\mathbb{F}_\ell^\times$ for $\ell = 5,7$. Then \[ G(\ell) \subseteq H_{\ell, \operatorname{exc}} \colonequals \left\{ \begin{pmatrix} \alpha^i & 0 \\ 0 & \alpha^j \end{pmatrix}, \begin{pmatrix} 0 & \alpha^i \\ \alpha^j & 0 \end{pmatrix} \right\}_{i \equiv j \pmod{2}}. \] Define $H_{\ell^n, \operatorname{exc}}$ to be the full preimage of $H_{\ell, \operatorname{exc}}$ in $\text{GL}_2(\mathbb{Z}/\ell^n\mathbb{Z})$. Then we have \begin{prop}\label{vert57} Let $\ell = 5, 7$. \begin{enumerate} \item Every element of $H_{\ell^2, \operatorname{exc}}$ has square discriminant. \item If $G \subseteq H_{\ell^3, \operatorname{exc}}$ satisfies the property that $G(\ell) = H_{\ell, \operatorname{exc}}$ and every element has square discriminant, then $\operatorname{genus}(X_G) \geq 2$. \end{enumerate} \end{prop} \begin{proof} Part (1) is an easy calculation. For part (2), if $\ell = 7$ this is also easy. As $E$ always has an $\ell$-isogeny over $K(\sqrt{-7})$, $(7^3, j(E))$ either gives rise to a $K(\sqrt{-7})$-point on $X_0(7^3)$ or on $X_{R(7^3)}$. Both of these modular curves have genus $\geq 2$. If $\ell = 5$, we use the following elementary observations from representation theory. We represent $D_4 = \{e, x, y, xy\}$ with $x^2 = y^2 = e$. 
The Klein four group $D_4$ has character table: \begin{center} \begin{tabular}{c | c | c | c | c} & $e$ & $x$ & $y$ & $xy$ \\ \hline $A$ & 1 & 1 & 1 &1 \\ \hline $B$ & 1 & -1 & -1 &1 \\ \hline $C$ & 1 & 1 & -1 &-1 \\ \hline $D$ & 1 & -1 & 1 &-1 \end{tabular} \end{center} Let $V \colonequals M_2(\mathbb{F}_5)$ be the $4$-dimensional vector space of $2 \times 2$ matrices over $\mathbb{F}_5$. We have an action of $D_4 = \mathbb{P} H_{5, \operatorname{exc}}$ on $V$ by conjugation. Using character theory, it is easy to show that \[V = A \oplus B \oplus C \oplus D, \] as a representation of $D_4$. We explicitly write this as \[A = \begin{pmatrix} a & 0 \\0 & a \end{pmatrix}, \qquad B = \begin{pmatrix} b & 0 \\0 & -b \end{pmatrix}, \qquad C = \begin{pmatrix} 0 & c \\ c & 0 \end{pmatrix}, \qquad D = \begin{pmatrix} 0 & d \\-d & 0 \end{pmatrix}. \] Note that we have a lift of $H_{5, \operatorname{exc}}$ to $\text{GL}_2(\mathbb{Z}/5^n\mathbb{Z})$ for any $n$ by replacing $\alpha$ with $\widetilde{\alpha} \in (\mathbb{Z}/5^n\mathbb{Z})^\times$ of order $4$ lifting $\alpha$. We will call this lift $\widetilde{H}_{5, \operatorname{exc}}$. Using this, we claim that it suffices to show that if $G \subset \text{GL}_2(\mathbb{Z}/5^3\mathbb{Z})$ is such that $G(5)$ equals $H_{5, \operatorname{exc}}$ and every element of $G$ has square discriminant, then \[G(25) = I \cdot \widetilde{H}_{5, \operatorname{exc}}, \] where $I \subset \text{GL}_2(\mathbb{Z}/25\mathbb{Z})$ is a subgroup of $1 + A \oplus B$, or $1 + A \oplus C$, or $1 + A \oplus D$. Indeed, in these cases an easy Magma computation shows that these groups define modular curves with genus at least 2. To prove the claim, first notice that every element of $G$ can be written as either \[ g_1 = \begin{pmatrix} \alpha^i + 5x & 5y \\ 5z & \alpha^j + 5w \end{pmatrix}, \qquad \text{or} \qquad g_2 = \begin{pmatrix} 5x & \alpha^i + 5y \\ \alpha^j + 5z & 5w \end{pmatrix}. 
\] We can calculate \[\Delta(g_1) = (\alpha^i - \alpha^j) \Big( (\alpha^i-\alpha^j) + 2 \cdot 5 (x-w) \Big) + 5^2((x-w)^2 + 4 yz),\] \[\Delta(g_2) \equiv 4 \alpha^i \alpha^j \pmod{5}.\] So by Hensel's Lemma, $\Delta(g_2)$ is always a square: its reduction $4\alpha^{i+j}$ is a nonzero square mod $5$, since $i \equiv j \pmod{2}$. And $\Delta(g_1)$ is a square if $i \neq j$. If $i = j$, then $\Delta(g_1)$ is a square if and only if $\sm{x}{y}{z}{w}$ has square discriminant modulo $5$. We see that this condition depends only upon $g_1$ mod $25$. So it suffices to determine which subgroups of $H_{5^2, \operatorname{exc}} \subset \text{GL}_2(\mathbb{Z}/25\mathbb{Z})$ satisfy this property. Since all such groups $G(25)$ are presented as \[1 \to I \to G(25) \to H_{5, \operatorname{exc}} \to 1, \] it suffices to classify $I$. Further, $I \simeq 1 + 5 \cdot J$ for some \emph{additive} subgroup $J$ of $M_2(\mathbb{F}_5)$. The matrix $\sm{x}{y}{z}{w}$ above is the corresponding element of $J$ in this decomposition. The group $J$ has an action of $H_{5, \operatorname{exc}}$ by conjugation, making it a representation of $D_4 = \mathbb{P}(H_{5, \operatorname{exc}})$. Our claim is that the maximal representations giving rise to groups with the desired properties are $A \oplus B$, $A \oplus C$, and $A \oplus D$. We have \[\Delta \begin{pmatrix} a + b & c+d \\ c-d & a-b \end{pmatrix} = 4(b^2 + c^2 - d^2). \] If $b, c, d$ are drawn from additive subgroups $V_B, V_C, V_D$ of $\mathbb{F}_5$, at least two of which are nontrivial, then the above discriminant represents nonsquares. Hence we have the claim, which completes the proof.\end{proof} \begin{proof}[Proof of Theorem \ref{mainthm} (2)] To prove part (2) of Theorem \ref{mainthm} we must combine information at different primes. Fix a number field $K$ and a positive integer $N$. Write \[N = \ell_1^{n_1} \cdot \ell_2^{n_2} \cdots \ell_k^{n_k}. 
\] If there exist infinitely many exceptional pairs $(N, j(E))$ for $K$, then for each $\ell_i$, either \begin{enumerate}[(a)] \item $\operatorname{genus}(X_0(\ell_i^{n_i})) \leq 1$, or \item there are infinitely many counter-examples for $\ell_i^{n_i}$-isogenies over $K$, \end{enumerate} and for at least one prime $\ell_i \mid N$, we must be in case (b). Case (a) includes prime powers $2,3, 4, 5,7, 8, 9, 11, 13, 16, 17, 19, 25, 27, 32, 49 $. We may further characterize the counter-examples in case (b): by Proposition \ref{finitej}, we are in one of the following two cases: \begin{itemize} \item $\ell = 2$ and $E/K$ has $G(2^n)$ contained in one of the genus 0 or 1 groups in Table 1; in this case $n=3,4$ or $5$. \item $\ell = 5, 7$ and $(\ell, j(E))$ is exceptional. In this case, Proposition \ref{vert57} gives that $n = 1$ or $n=2$. \end{itemize} However, there is a further condition: the modular curve parameterizing the level structure mod $N$ (which includes the exceptional level structure in case (b) and the Borel level structure in case (a)) must also have small genus. This is a much stronger condition, which says that the fiber product \[ X_{\ell_1^{n_1}} \times_{X(1)} \cdots \times_{X(1)} X_{\ell_k^{n_k}}, \] where $X_{\ell_i^{n_i}}$ is short-hand for the modular curve parameterizing the given level structure mod $\ell_i^{n_i}$, has small genus. To find the finite list in Theorem \ref{mainthm}, we compute the genera of the fiber products \[X_G \times_{X(1)} X_H \times_{X(1)} X_0(N), \] where $G$ is one of the groups with $n \leq 5$ in Table 1, and $H$ is one of $H_{5, \operatorname{exc}}$ or $H_{7, \operatorname{exc}}$, and $\operatorname{genus}(X_0(N)) = 0$ or $1$, and the levels of all factors are coprime. This is achieved by functions in the file \texttt{ram.py} available at \texttt{stuff.mit.edu/\~{}ivogt/isogeny}. 
The results are as follows: \begin{itemize} \item $X_G \times_{X(1)} X_H$ has genus $>1$ for all $G$ and $H$, \item $X_G \times_{X(1)} X_0(N)$ has genus $=0,1$ if and only if $G = G_{2147}$ or $G_{2177}$ and $N = 3, 5, 9$, \item $X_H \times_{X(1)} X_0(N)$ has genus $=0,1$ if and only if $H = H_{5, \operatorname{exc}}$ and $N=2$. \end{itemize} From this information one can assemble the list \begin{align*} L &=\{5, 5^2, 7, 7^2\} \cup \{2^3, 2^4, 2^5\} \cup \{3\cdot 2^3, 5\cdot 2^3, 9\cdot 2^3\} \cup \{2\cdot 5, 2\cdot 25\} \\ &= \{5, 7, 8, 10, 16, 24, 25, 32, 40, 49, 50, 72\}. \qedhere \end{align*} \end{proof} \section{Exceptional Curves with Complex Multiplication}\label{cm} In this section we use the extra $\O$-module structure on $E[N]$ when $E$ has complex multiplication by an order $\O$ in an imaginary quadratic field $F$ to more precisely classify exceptional $j$-invariants corresponding to CM curves. Let $H_F$ denote the Hilbert class field of $F$, which is the field of moduli of elliptic curves with CM by the maximal order $\O_F$. \begin{thm}\label{cm_nicebound} Let $E/K$ have complex multiplication by $\O$ with $\operatorname{Frac}(\O) = F$. Let $\O_F$ denote the maximal order in $F$. 
If $(N, j(E))$ is exceptional, then there exist relatively prime numbers $A, B$ with $N = A\cdot B$ such that \[A \leq (\#\O_F^\times \cdot [KF:H_F])^4 \leq (6[K:\mathbb{Q}])^4, \] and if $F \subset K$, $E$ has a $B$-isogeny, or if $F \not\subset K$, $B$ factors as $B = \prod_i \ell_i^{n_i}$ such that for all $i$ one of the following holds \begin{itemize} \item $n_i=1$ and $\ell_i | d_F$, or \item $\ell_i$ splits in $F$, $\ell_i \equiv 1 \pmod{4}$ and $K \supset \mathbb{Q}(\sqrt{\ell_i})$, or \item $\ell_i$ splits in $F$, $\ell_i \equiv 3 \pmod{4}$ and $KF= K(\sqrt{-\ell_i})$, or \item $\ell_i = 2$ splits in $F$ and \begin{itemize} \item $n_i = 1$ or $2$, or \item $n_i \geq 3$ and $K \supset \mathbb{Q}(\sqrt{2})$ and $KF = K(\sqrt{-2})$, \end{itemize} \end{itemize} and $E$ does not have an $\ell_i^{n_i}$-isogeny up to isogeny over $K$ for $\ell_i^{n_i} \mid\mid B$ unless $n_i=1$ and $\ell_i \mid d_F$, or $2$ splits in $F$ and $\ell_i^{n_i} = 2,4$. \end{thm} The first step in proving this Theorem is reducing to the case of curves with CM by the maximal order $\O_F$. That is achieved by the following Lemma. \begin{lem}[{\cite[Cor 3]{ehom}}] Let $E/K$ have CM by $\O \subsetneq \O_F$. Then there exists an elliptic curve $E'/K$ with CM by $\O_F$ and an isogeny $E \to E'$ defined over $K$. \end{lem} From this point forward we will assume that $E$ over $K$ has CM by the maximal order $\O \colonequals \O_F$ in the imaginary quadratic field $F$. 
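The outer inequality $(\#\O_F^\times \cdot [KF:H_F])^4 \leq (6[K:\mathbb{Q}])^4$ in the bound on $A$ above is elementary (an expository aside):
\[ \#\O_F^\times \leq 6 \qquad \text{and} \qquad [KF : H_F] \leq [KF : F] \leq [K : \mathbb{Q}], \]
the first because $\O_F^\times = \{\pm 1\}$ except for $F = \mathbb{Q}(i)$ and $F = \mathbb{Q}(\sqrt{-3})$, where it has order $4$ and $6$ respectively, and the second because $F \subseteq H_F \subseteq KF$ and $KF$ is a compositum of $K$ and $F$.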
We have the following tower of field extensions: \begin{center} \begin{tikzpicture}[scale=.8] \draw (0,0) node{$\mathbb{Q}$}; \draw (-1,1) node{$F$}; \draw (-2,2) node{$H_F$}; \draw (1.5,1.5) node{$\mathbb{Q}(j(E))$}; \draw (2.5,2.5) node{$K$}; \draw (0.5,4.5) node{$KF$}; \draw (-.2,.2) -- (-.8,.8); \draw (-1.2, 1.2) -- (-1.8,1.8); \draw (.2,.2)--(1.2,1.2); \draw (1.8,1.8) -- (2.3,2.3); \draw (2.2,2.8) -- (.8,4.2); \draw (-1.8,2.2) --(.2,4.2); \draw (-2+1.25, 2+1.25) node[above]{$d$}; \end{tikzpicture} \end{center} The field $\mathbb{Q}(j(E))$ is the \defi{field of moduli of $E$} and $H_F$ denotes the Hilbert class field of $F$. Recall that $H \colonequals H_F = F(j(E))$. We let $d = [KF :H]$ for simplicity. As a consequence of the main theorem of complex multiplication we have that $E[N]\simeq \O/N \O$ as $\O$-modules and $\rho_{E, N}(G_{H})$ acts on $E[N]$ through $C_{N}(\O) \colonequals \left(\O/N \O\right)^\times$. By the Chinese remainder theorem we have a canonical isomorphism \[C_N(\O) \simeq \prod_{\ell^{n} \mid\mid N} C_{\ell^{n}}(\O). \] It is easy to show \cite[Lemma 2.2]{bourdon} that \[\#C_{\ell^n}(\O) = \ell^{2n-2}(\ell-1) \left( \ell - \legendre{d_F}{\ell} \right). \] As in \cite[Theorem 1.1]{bourdon}, for any $E$ defined over $H = F(j(E))$ with CM by $\O$, the reduced Galois representation \[ \widebar{\rho} \colon G_H \to \frac{\left(\O/N \O\right)^\times}{\O^\times} \] is surjective. As a consequence, if $K$ is arbitrary, \begin{equation}\label{index_bound} \prod_{\ell^n \mid\mid N} [C_{\ell^n}(\O): \rho_{\ell^n}(G_{KF})] \leq [C_{N}(\O) : \rho_{N}(G_{KF}) ] \leq \#\O^\times [KF : H] \leq 6 d. 
\end{equation} As a $\mathbb{Z}$-module, we have $\O = \left[ 1, \left( \frac{ d_F + \sqrt{d_F}}{2} \right) \right].$ Therefore, in this basis, the image of the mod $N$ Galois representation of $E_{KF}$ is contained in the set of matrices \[ \left\{ g_{a,b} \colonequals \begin{pmatrix} a & \frac{bd_F(1-d_F)}{4} \\ b & a+bd_F \end{pmatrix} \right\}, \] for $a, b \in \mathbb{Z}/N \mathbb{Z}$. If $E$ is defined over the field of moduli $\mathbb{Q}(j(E))$, then the full group $\rho_N(G_{\mathbb{Q}(j(E))})$ also contains an element corresponding to complex conjugation of $F /\mathbb{Q}$ acting on $\O_F$. In terms of our chosen basis above, this is of the form \[c \colonequals \begin{pmatrix} 1 & d_F \\ 0 & -1 \end{pmatrix}. \] In general, if $F \not\subset K$, then the group $\rho_N(G_{KF}) \subset \rho_N(G_K)$ is an index 2 normal subgroup. So $\rho_{N}(G_K)$ is a subgroup of index at most $6d$ of the group generated by $C_{N}(\O)$ and $c$, which need not contain $c$. All elements of $\rho_N(G_K)$ not in $\rho_N(G_{KF})$ are of the form \[h_{a,b} \colonequals g_{a,b}\, c = \begin{pmatrix} a & ad_F- \frac{bd_F(1-d_F)}{4} \\ b & -a \end{pmatrix}, \] for some $a,b$. The main tool in proving Theorem \ref{cm_nicebound} is the following result for prime powers: \begin{prop}\label{cmprimepower} Let $E/K$ have CM by $\O_F$, the full ring of integers of $F$, and let $\ell$ be a prime. Assume that $E/K$ has an $\ell^n$-isogeny locally almost everywhere up to isogeny. 
Then either \begin{enumerate} \item $n=1$ and $\ell \mid d_F$ and $E$ has an $\ell^n$-isogeny over $K$, or \item $\ell$ splits in $F$, and \begin{itemize} \item[$\bullet$] ($F \subset K$): $E$ has an $\ell^n$-isogeny over $K$, \item[$\bullet$] ($F \not\subset K$): $\ell \equiv 1 \pmod{4}$, $K \supset \mathbb{Q}(\sqrt{\ell})$, and $E$ does not have an $\ell^n$-isogeny up to isogeny over $K$, \item[$\bullet$] ($F \not\subset K$): $\ell \equiv 3 \pmod{4}$, $KF = K(\sqrt{-\ell})$, and $E$ does not have an $\ell^n$-isogeny up to isogeny over $K$, \item[$\bullet$] ($F \not\subset K$): $\ell^n = 2$ or $4$, and $E$ has an $\ell^n$-isogeny up to isogeny over $K$, \item[$\bullet$] ($F \not\subset K$): $\ell = 2$, $n\geq 3$, $K \supset \mathbb{Q}(\sqrt{2})$ and $KF = K (\sqrt{-2})$, and $E$ does not have an $\ell^n$-isogeny up to isogeny over $K$. \end{itemize} \item The index \begin{align*} [C_{\ell^n}(\O) : \rho_{\ell^n}(G_{KF})] &\geq \begin{cases} \ell^{n/2-1}(\ell-1) &: \ n \geq 4 \text{ even or } \ell \text{ odd and } n=2, \\ \ell^{(n-1)/2} &: \ n\geq 3 \text{ odd} \\ \ell & : \ n=1 \text{ or }\ell=2,n=2. \end{cases} \\ &\geq \ell^{ n/4 }. \end{align*} \end{enumerate} \end{prop} \begin{cor} Let $E/K$ have CM by $\O$ with $\operatorname{Frac}\O = F$ and let $\ell$ be a prime. If \[ \left. \begin{cases}\ell &: \ n=1 \\ \ell^{n/4} &: \ n>1 \end{cases} \right\} > \# \O_F^\times d, \] then $(\ell^n, j(E))$ is exceptional if and only if $\ell$ splits in $F$, $F \not \subset K$ and \begin{itemize} \item[$\bullet$] $\ell \equiv 1 \pmod{4}$, $K \supset \mathbb{Q}(\sqrt{\ell})$, or \item[$\bullet$] $\ell \equiv 3 \pmod{4}$, $KF\subset K(\zeta_\ell)$, or \item[$\bullet$] $\ell = 2$, $n \geq 3$ and both $K \supset \mathbb{Q}(\sqrt{2})$ and $KF = K(\sqrt{-2})$. \end{itemize} \end{cor} \begin{rem} In the case $K = \mathbb{Q}$, this gives an alternate proof that for $\ell > 7$, there are no exceptional CM $j$-invariants. 
\end{rem} In the course of the proof of this Proposition, we will need the following strengthening of Hensel's Lemma, which implies that a unit mod $2^n$ for $n\geq 3$ is a square if and only if it is a square mod $2^3$. \begin{lem}\label{better_hensel} Let $f \in \mathbb{Z}_p[x]$ be a polynomial and $\alpha \in \mathbb{Z}_p$ such that \[ v_p(f'(\alpha)) \leq n, \qquad \text{and } \ v_p(f(\alpha)) \geq 2n +1. \] Then there exists a unique $\beta \in \mathbb{Z}_p$ such that $f(\beta)=0$ and $v_p(\beta-\alpha) > n$. \end{lem} \begin{proof}[Proof of Proposition \ref{cmprimepower}] In order for $E$ to have an $\ell^n$-isogeny locally almost everywhere up to isogeny, every element of $\rho_{\ell^n}(G_K)$ must have a root of its characteristic polynomial (equivalently, square discriminant when $\ell$ is odd). We begin by considering the condition this imposes on elements of the index $\leq 2$ subgroup $\rho_{\ell^n}(G_{KF})$. Any element has the form \[ g_{a,b} = \begin{pmatrix} a & \frac{bd_F(1-d_F)}{4} \\ b & a+bd_F \end{pmatrix},\] with characteristic polynomial and discriminant \[\chi_{a,b}(x) = x^2 + (2a + bd_F)x + a(a+bd_F) - b^2d_F(1-d_F)/4, \qquad \Delta(g_{a,b}) = b^2d_F.\] Let us split into the following cases: \textbf{$\ell$ is inert in $F$.} If $\ell$ is odd, then $d_F$ is a nonzero nonsquare mod $\ell$, and so in order for all $g_{a,b} \in \rho(G_{KF})$ to have square discriminant, we need $v_\ell(b) \geq \lceil n/2 \rceil$. Therefore the image of $\rho_{\ell^{\lceil \frac{n}{2} \rceil} }(G_{KF})$ is contained in the scalar matrices in $\text{GL}_2(\mathbb{Z}/\ell^{\lceil \frac{n}{2} \rceil} \mathbb{Z})$. And so \begin{align*} [C_{\ell^n}(\O) : \rho_{\ell^n}(G_{KF})] \geq \frac{\#C_{\ell^{\lceil \frac{n}{2} \rceil} }(\O)}{\#\rho_{\ell^{\lceil n/2\rceil}}(G_{KF})} &\geq \frac{\ell^{2\lceil \frac{n}{2} \rceil -2}(\ell-1)(\ell+1) }{\ell^{\lceil \frac{n}{2}\rceil -1}(\ell-1)} \\ &= \ell^{\lceil \frac{n}{2}\rceil -1}(\ell+1). 
\end{align*} Therefore this falls into case (3). If $\ell=2$, then $d_F \equiv 5 \pmod{8}$. Assume that $2 \nmid b$, then \[\chi_{a,b}(x) \equiv x^2 + x + 1 \pmod{2}. \] Therefore there are no solutions mod $2$, and hence mod any power of $2$. If $2 \mid b$, then write $b = 2b'$ for simplicity. In this case we may change variables to complete the square, \begin{equation}\label{compsquare} \chi_{a,b}(x) = (x + (a+b'd_F))^2-(b')^2d_F \end{equation} to reduce to the question of whether $(b')^2d_F$ is a square mod $2^n$. This is impossible if $v_2(b') < \lceil n/2 \rceil -1$, since $d_F$ is not a square mod $8$. The case $v_2(b') \geq\lceil n/2 \rceil -1$ implies $v_2(b) \geq \lceil n/2 \rceil$ which in turn implies \[ [C_{2^n}(\O) : \rho_{2^n}(G_{KF})] \geq \frac{2^{2\left\lceil n/2 \right\rceil-2}\cdot 3}{2^{\left\lceil n/2 \right\rceil-1}} = 3 \cdot 2^{\left\lceil n/2 \right\rceil-1} \geq 2^{\left\lceil n/2 \right\rceil}, \] and so also falls into case (3). \textbf{$\ell$ ramifies in $F$.} This is the case when $\ell \mid d_F$. If $\ell$ is odd, then $v_\ell(d_F) = 1$ and so we need $v_\ell(b) \geq \lfloor n/2 \rfloor$ for all $g_{a,b} \in \rho(G_{KF})$. As above this implies that \[[C_{\ell^n}(\O) : \rho_{\ell^n}(G_{KF})] \geq [C_{\ell^{\lfloor n/2 \rfloor}}(\O) : \rho_{\ell^{\lfloor n/2 \rfloor}}(G_{KF})] \geq \ell^{\lfloor n/2 \rfloor}, \] and so falls into case (3) if $n \geq 2$. If $n=1$ then as $d_F$ is zero mod $\ell$, $\Delta(g_{a,b})$ is trivially always a square. In addition $\sv{0}{1}$ is a simultaneous eigenvector of all $g_{a,b} \in \rho_{\ell}(G_{KF})$ as well as $c$; hence it is necessarily a simultaneous eigenvector of all elements of $\rho_\ell(G_K)$. Therefore $E/K$ always has an $\ell$-isogeny. For a discussion of this isogeny, see \cite[Sections 12 and 13]{gross}. If $\ell=2$, then $v_2(d_F) = 2$ or $3$, corresponding to whether $d_F/4$ is $3$ or $2$ mod $4$, respectively. 
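The two ramified possibilities at $2$ are realized by the two smallest even discriminants (an illustrative example):
\[ F = \mathbb{Q}(i):\ d_F = -4,\ \ d_F/4 \equiv 3 \pmod{4},\ \ v_2(d_F) = 2; \qquad F = \mathbb{Q}(\sqrt{-2}):\ d_F = -8,\ \ d_F/4 \equiv 2 \pmod{4},\ \ v_2(d_F) = 3. \]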
We change variables to complete the square and the characteristic polynomial of $g_{a,b}$ simplifies to \[(x-(a+bd_F/2))^2 - b^2 d_F/4. \] This has a solution if and only if $b^2 d_F/4$ is a square mod $2^n$. If $d_F/4 \equiv 2 \pmod 4$, then it has odd $2$-adic valuation and hence $b^2d_F/4$ does as well. So it is a square if and only if $v_2(b) \geq \left\lfloor n/2 \right\rfloor$. Exactly as in the odd case, this implies that \[ [C_{2^n}(\O) : \rho_{2^n}(G_{KF})] \geq 2^{\lfloor n/2 \rfloor}, \] and so falls into case (3) unless $n=1$. In that case $\sv{0}{1}$ is again a simultaneous eigenvector of all of $\rho_2(G_K)$ and so $E$ has a $2$-isogeny over $K$. If $d_F/4 \equiv 3 \pmod 4$ and $2 \nmid b$, then $b^2d_F/4$ is a unit, which is not a square mod 4. If $v_2(b) < \lfloor n/2 \rfloor$, then $b^2d_F/4$ is not a square mod $2^n$. If $v_2(b) \geq \lfloor n/2 \rfloor$, then we are in case (3) unless $n=1$, in which case we are in case (1). One of $a$ or $b$ is always $0$ mod $2$. Therefore $\sv{1}{1}$ is a simultaneous eigenvector of all of $\rho_2(G_{KF})$ and also $c$, and therefore all of $\rho_2(G_K)$. \textbf{$\ell$ splits in $F$.} If $\ell$ is odd, then $d_F$ is a nonzero square mod $\ell$ and so we have $D \in (\mathbb{Z}/\ell^n \mathbb{Z})^\times$ such that $D^2 \equiv d_F \pmod{\ell^n}$. If $\ell =2$, and $2 \nmid b$, then the characteristic polynomial of $g_{a,b}$ reduces to $x^2 +x$ mod $2$. By Hensel's Lemma this has distinct roots mod all powers of $2$. If $2 \mid b$, we again let $b= 2b'$ and complete the square as in \eqref{compsquare} to get that $(b')^2d_F$ must be a square mod $2^n$. But this is always a square mod all powers of $2$ as $d_F$ is a unit square mod $8$. Hence for all primes $\ell$ splitting in $F$, all elements of $\rho_{\ell^n}(G_{KF})$ have a rational root of their characteristic polynomial. Let us now determine when the same is true for $\rho_{\ell^n}(G_K)$ in the case that $F \not\subset K$. 
Every $h_{a,b} \in \rho(G_K) \smallsetminus \rho(G_{KF})$ is of the form \[h_{a,b} = cg_{a,b} = \begin{pmatrix} a & ad_F- \frac{bd_F(1-d_F)}{4} \\ b & -a \end{pmatrix}.\] Since this has trace $0$, the characteristic polynomial has a root if and only if $-\det = a^2 + abd_F - b^2d_F(1-d_F)/4$ is a square. We split into the following cases: If $\ell \equiv 1 \pmod{4}$, then using multiplicativity of determinants, the discriminant of $g \in \rho(G_K)$ is always a square if and only if the determinant is always a square. Using the Weil pairing this occurs if and only if $K \supset \mathbb{Q}(\sqrt{\ell})$. If $\ell \equiv 3 \pmod{4}$, then the discriminant of $g \in \rho(G_K)$ is always a square if and only if for all $g\in \rho(G_{KF})$ we have that $\det(g)$ is a square and for all $h \in \rho(G_K) \smallsetminus \rho(G_{KF})$ we have that $\det(h)$ is not a square. Hence the normalizer character for $(KF) / K$ factors through the determinant character, which is the cyclotomic character. In fact it must factor through the quadratic character of the unique quadratic subfield of $K(\zeta_\ell)$, so equivalently, \[ KF = K(\sqrt{-\ell}). \] In particular, note that $\ell$ ramifies in $K$, since it splits in $F$. If $\ell = 2$ and $n\geq 3$, then by Lemma \ref{better_hensel} the negative of the determinant of $h_{a,b}$ is a square if and only if it is a square mod $8$. Therefore every element of $\rho_{2^n}(G_K)$ has square discriminant if and only if the determinant restricts to $+1 \bmod 8$ on $\rho_{2^n}(G_{KF})$ and $-1 \bmod 8$ on $\rho_{2^n}(G_K) \smallsetminus \rho_{2^n}(G_{KF})$. This implies first that $K$ contains the quadratic subfield of $\mathbb{Q}(\zeta_8)$ determined by $\{\pm1\} \subset (\mathbb{Z}/8\mathbb{Z})^\times$, namely $\mathbb{Q}(\sqrt{2})$. Furthermore, the normalizer character for $KF /K$ must factor through the cyclotomic character. Therefore \[KF = K(\zeta_8) = K(\sqrt{-2}), \] as $K$ already contains $\sqrt{2}$.
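The reduction to squares mod $8$ invoked above (via Lemma \ref{better_hensel}) rests on a standard fact: for $n \geq 3$, an odd integer is a square modulo $2^n$ if and only if it is congruent to $1$ mod $8$. A brute-force check of this fact (our illustration, not part of the proof):

```python
def is_square_mod(u, mod):
    """Brute-force: is u a square modulo mod?"""
    return any((x * x - u) % mod == 0 for x in range(mod))

# For n >= 3, an odd u is a square mod 2^n exactly when u = 1 (mod 8).
for n in range(3, 10):
    m = 2 ** n
    for u in range(1, m, 2):
        assert is_square_mod(u, m) == (u % 8 == 1)
```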
This completes the forward implication of Proposition \ref{cmprimepower}. All that remains is to show that for split prime powers large enough to be excluded from case (3), $\ell^n$ is exceptional if and only if $F \not \subset K$ and $\ell^n \neq 2, 2^2$. The elements $(\pm D + \sqrt{d_F})/2 \in \O_F$ represented by the vectors \[v_{\pm} \colonequals \begin{pmatrix} (-d_F\pm D)/2 \\ 1 \end{pmatrix}, \] are simultaneous eigenvectors of all of $\rho_{\ell^n}(G_{KF})$. Therefore $E_{KF}$ has an $\ell^n$-isogeny. So we may assume that $F \not\subset K$. Changing into the $[v_+, v_-]$ basis, everything is of the form \[\tilde{g}_{a,b} \colonequals \begin{pmatrix} a + b \left( \frac{d_F +D}{2}\right) & 0 \\ 0 &a + b \left( \frac{d_F -D}{2}\right) \end{pmatrix} ,\] and complex conjugation simply swaps $v_+$ and $v_-$. Since this is scalar mod $2$, $2^n$ is not exceptional for $n\leq 2$. Define \[r \colonequals \min_{g_{a,b} \in \rho(G_{KF})}(v_\ell(b)).\] The only simultaneous eigenvectors of all $g_{a,b}$ mod $\ell^k$ are congruent to $v_{\pm}$ mod $\ell^{k-r}$, and hence in the $v_{\pm}$ basis, are given by $\sv{\ell^{k-r}x}{1}$, $\sv{1}{\ell^{k-r}x}$ for any choice of $x$. In order for a vector congruent to $v_{\pm}$ mod $\ell^{k-r}$ to be an eigenvector of a $\tilde{h}_{a,b} \colonequals \tilde{c} \tilde{g}_{a,b} = \sm{0}{a+b(d_F +D)/2}{a+b(d_F-D)/2}{0}$ mod $\ell^k$ we must have \[ \ell^{2(k-r)}x^2(a \mp bD) \equiv a\pm bD \pmod{\ell^k}. \] But by assumption $a\pm bD$ is a unit in $\O/\ell^n \O$, and hence we must have $k=r$. Therefore there are no common eigenvectors for the entire normalizer (i.e., for the $\tilde{g}_{a,b}$ together with the $\tilde{h}_{a', b'}$) modulo any larger power of $\ell$ than $\ell^r$. If we assume that $r < \lfloor n/2 \rfloor$, there do not exist $j, k$ with $j+k=n$ such that $\rho_{\ell^j}(G_K)$ is contained in a Cartan mod $\ell^j$ and $\rho_{\ell^k}(G_K)$ is contained in a Borel mod $\ell^k$.
Therefore $E_K$ does not have an $\ell^n$-isogeny up to isogeny over $K$. As above, modulo $\ell^r$, $\rho(G_{KF})$ is contained in the scalar matrices, so \[[C_{\ell^n}(\O) : \rho_{\ell^n}(G_{KF})] \geq \frac{\ell^{2r-2} (\ell-1)^2}{\ell^{r-1}(\ell-1)} = \ell^{r-1}(\ell-1). \] So $r \geq \lfloor n/2 \rfloor$ puts us in case (3). Finally, $\ell^{n/4}$ is a lower bound for the more refined bounds in part (3).\end{proof} \begin{proof}[Proof of Theorem \ref{cm_nicebound}] Factor $N$ as $\prod_{i \in S} \ell_i^{n_i}$. Since $E$ has an $N$-isogeny locally almost everywhere up to isogeny, $E$ has an $\ell_i^{n_i}$-isogeny locally almost everywhere up to isogeny for each $i$. We will define $B$ as a subproduct of those prime powers such that $\ell_i^{n_i} > [C_{\ell_i^{n_i}}(\O):\rho_{\ell_i^{n_i}}(G_{KF})]^4$ and \begin{enumerate} \item ($F \subset K$) \begin{itemize} \item $n_i=1$ and $\ell_i | d_F$, \item $\ell_i$ splits in $F$. \end{itemize} \item ($F \not\subset K$) \begin{itemize} \item $n_i=1$ and $\ell_i | d_F$, \item $\ell_i$ splits in $F$, $\ell_i \equiv 1 \pmod{4}$ and $K \supset \mathbb{Q}(\sqrt{\ell_i})$, \item $\ell_i$ splits in $F$, $\ell_i \equiv 3 \pmod{4}$ and $KF= K(\sqrt{-\ell_i})$, \item $\ell_i = 2$ splits in $F$ and either $n_i=1,2$ or $K \supset \mathbb{Q}(\sqrt{2})$ and $KF = K(\sqrt{-2})$. \end{itemize} \end{enumerate} Proposition \ref{cmprimepower} implies that $E$ has an $\ell_i^{n_i}$-isogeny for each $\ell_i^{n_i} \mid\mid B$ if and only if $F \subset K$ or $n_i=1$ and $\ell_i \mid d_F$, or $\ell_i=2$ and $n_i \leq 2$. What remains is to show that the quotient $A \colonequals N/B$ is small. Let $A = \prod_j p_j^{m_j}$.
By \eqref{index_bound} and Proposition \ref{cmprimepower} \begin{align*} 6d &\geq [C_A(\O):\rho_A(G_{KF})] \\ & \geq \prod_j [C_{p_j^{m_j}}(\O) : \rho_{p_j^{m_j}}(G_{KF})], \intertext{and each prime $p_j$ not in $B$ must fall into case (3) of Proposition \ref{cmprimepower}, so} &\geq \prod_j p_j^{m_j/4} = A^{1/4}, \end{align*} and the result follows. \end{proof} \section{Exceptional Primes for $K = \mathbb{Q}$}\label{qpoints} From \cite[Thm 2]{sutherland}, the only exceptional pair for prime $N$ over $\mathbb{Q}$ is $(7, 2268945/128)$. Hence for all other primes $\ell$, if $E$ locally almost everywhere has an $\ell$-isogeny, then it has an $\ell$-isogeny over $\mathbb{Q}$. Since our goal here is to find all prime power exceptions over $\mathbb{Q}$, we can use Theorem \ref{upthm} for odd prime powers and Proposition \ref{2exceptions} for powers of $2$. By considering $X_0(\ell^n)(\mathbb{Q})$, it has been shown that there exist $\ell^n$-isogenies over $\mathbb{Q}$ if and only if $\ell^n = 2,3,4,5,7,8,9,13,16,25,27,37,43,67,163$ (see table in \cite{kenku}). In particular, there are no $\ell^2$-isogenies over $\mathbb{Q}$ for $\ell \geq 7$. As $5 \equiv 1 \pmod{4}$, Corollary \ref{sqrt} guarantees that the only exceptional pairs over $\mathbb{Q}$ could come from $\ell = 2,3,7$. \subsection{Exceptional Subgroups at $\ell = 3$} $X_0(3^r)(\mathbb{Q})$ has only cuspidal rational points when $r \geq 4$. As any exceptional subgroup modulo $3^r$ must be lift-exceptional, all exceptional $j$-invariants must give rise to rational points on the modular curve $X_{R(27)}$, as we now show. Indeed, by Theorem \ref{upthm}, if $(3^r, j(E))$ is lift-exceptional, then either $j(E') \in X_{R(27)}(\mathbb{Q})$ or $j(E') \in X_0(3^4)(\mathbb{Q})$, for $E'$ $\mathbb{Q}$-isogenous to $E$. But $X_0(3^4)$ has no noncuspidal rational points. The group $R(27)$ corresponds to the congruence subgroup 27B${}^4$ in the Cummins--Pauli database, and as such $X_{R(27)}$ has genus 4.
The group $R(27)$ is not ``arithmetically maximal'' (as in Definition 3.1 of \cite{rzb}). It is conjugate to a subgroup of the genus $2$ group $G_{32} \subset \text{GL}_2(\mathbb{Z}/3^3\mathbb{Z})$ with generators \[ \begin{pmatrix}26 & 0 \\ 0 & 26 \end{pmatrix}, \begin{pmatrix}11 & 21 \\ 0 & 5\end{pmatrix}, \begin{pmatrix}19 & 9 \\ 0 &10 \end{pmatrix}, \begin{pmatrix}16 & 3 \\ 0 & 1\end{pmatrix}, \begin{pmatrix}8 & 18 \\ 0 & 26 \end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & 26 \end{pmatrix}, \begin{pmatrix}1 & 6 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix}10 & 2 \\ 18 & 1\end{pmatrix}, \begin{pmatrix}1 & 9 \\ 0 & 1\end{pmatrix}. \] Using the publicly available Magma code from \cite{rzb} in the case $\ell=3$, Jeremy Rouse and David Zureick-Brown computed that the corresponding modular curve $X_{32}$ has equation \[X_{32} : y^2 + (x^3+z^3)y = -5x^3z^3 - 7z^6, \] in the weighted projective space $\mathbb{P}(1,3,1)$. They also computed that the map $X_{32} \to X(1)$ is given by {\footnotesize \[j = \frac{ -(-3x^9 + 99x^6y - 189x^6z^3 + 63x^3y^2 + 126x^3yz^3 - 441x^3z^6 + 25y^3 - 21y^2z^3 + 147yz^6 - 343z^{12})^3(-3x^3 + y - 7z^3)^3}{(4y - 7z^3)(x^3 + y)^9(27x^6 + 18x^3y + 63x^3z^3 + 7y^2 + 7yz^3 + 49z^6)}.\] } The Jacobian of $X_{32}$ has rank $0$, so the implementation of Chabauty's method in Magma gives that \[ X_{32}(\mathbb{Q}) = \{ [1:-1:0], [1:0:0]\}. \] Using the $j$-map above, both rational points of $X_{32}$ are cuspidal, and so the same must be true of any rational points on $X_{R(27)}$. \subsection{Exceptional Subgroups at $\ell = 7$} Since there are no elliptic curves over $\mathbb{Q}$ with $7^2$-isogenies, all exceptions $7$-adically over $\mathbb{Q}$ come from exceptions mod $7$. Thus $j = 2268945/128$ is the only $j$-invariant giving $7$-adic exceptions over $\mathbb{Q}$. From Proposition \ref{vert57} we have that this $j$-invariant is also exceptional mod $49$.
However, it is not exceptional mod $7^3$; by sampling Frobenius elements, one quickly sees that not every element of $\rho_{7^3}(G_\mathbb{Q})$ has square discriminant. For example $\#E(\mathbb{F}_{53}) = 58$, so $a_{53} = -4$ and \[\Delta(\rho(\text{Frob}_{53})) \equiv -2^2 \cdot 7^2 \pmod{7^3}, \] which is not a square as $7 \equiv 3 \pmod{4}$. \subsection{Exceptional Subgroups at $\ell = 2$} \begin{table}[ht] \begin{tabular}{c | c c c c} label & $n$ & RZB label & RZB cover & $\mathbb{Q}$-points on RZB cover \\ \hline 27445 & 4 & 92 & 92 & family $j_1$ \\ 189621 & 5 & 353 & 353 & cuspidal \\ 189900 & 5 & 305 & 168 & cuspidal \\ 189995 & 5 & 314 & 158 & cuspidal + (j=287496 (CM by -16)) \\ 190435 & 5 & 278 & 54 & cuspidal + (j=1728)\\ 190525 & 5 & 255 & 54 & cuspidal + (j=1728)\\ 890995& 6 & 667 & 354 & cuspidal\\ 891525& 6 & 627 & 168 & cuspidal\\ 891526& 6 & 617 & 159 & cuspidal\\ 891735& 6 & 636 & 168 & cuspidal\\ 891737& 6 & 621 & 159 & cuspidal \\ 891738& 6 & 638 & 168 & cuspidal\\ 893009& 6 & 612 & 52 & cuspidal \\ 893011& 6 & 614 & 51 & cuspidal + (j=8000 (CM by -8))\\ 893326& 6 & 603 & 54 & cuspidal + (j=1728)\\ 893327& 6 & 544 & 53 & cuspidal + (j=1728)\\ 894711& 6 & 541 & 51 & cuspidal + (j=8000 (CM by -8))\\ \end{tabular} \caption{Rational points on 2-adic exceptional modular curves.} \label{2data} \end{table} Proposition \ref{2exceptions} gives all maximal exceptional subgroups of $\text{GL}_2(\mathbb{Z}/2^n\mathbb{Z})$ for $n \leq 6$ (which in particular covers all exceptions over $\mathbb{Q}$ since $X_0(32)$ contains no noncuspidal rational points). In order for $G\subset \text{GL}_2(\mathbb{Z}/2^n\mathbb{Z})$ to occur over $\mathbb{Q}$, the determinant map $\det \colon G \to (\mathbb{Z}/2^n \mathbb{Z})^\times$ must be surjective. Table \ref{2data} gives labels from the recent work of Sutherland--Zywina \cite{sz} and Rouse--Zureick-Brown \cite{rzb} for those groups which can occur over $\mathbb{Q}$, together with the relevant information from \cite{rzb}.
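Such determinant-surjectivity conditions are mechanical to verify. As an illustration (ours, not taken from \cite{rzb}), here is the analogous check for the mod-$27$ generators of $G_{32}$ listed in the previous subsection: their determinants generate all of $(\mathbb{Z}/27\mathbb{Z})^\times$.

```python
# Sketch of a determinant-surjectivity check, run on the mod-27
# generators of G_32 from the l = 3 subsection (our own computation).
gens = [
    (26, 0, 0, 26), (11, 21, 0, 5), (19, 9, 0, 10),
    (16, 3, 0, 1), (8, 18, 0, 26), (1, 0, 0, 26),
    (1, 6, 0, 1), (10, 2, 18, 1), (1, 9, 0, 1),
]
mod = 27
dets = {(a * d - b * c) % mod for a, b, c, d in gens}

# Close the set of determinants under multiplication mod 27.
group = dets | {1}
while True:
    bigger = group | {(g * h) % mod for g in group for h in group}
    if bigger == group:
        break
    group = bigger

units = {u for u in range(mod) if u % 3 != 0}
assert group == units  # determinants generate all of (Z/27Z)^*
```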
The curve $X_{27445}$ has genus $0$ with infinitely many $\mathbb{Q}$-points, while the remainder have no noncuspidal non-CM rational points. The modular curve $X_{27445}$ has $j$-parameterization: \[j_1 = \frac{-4t^8 + 110640t^6 - 221336t^4 + 110640t^2 - 4}{64t^6 - 128t^4 + 64t^2}.\] Outside of a thin subset, points of $X_{27445}(\mathbb{Q})$ correspond to elliptic curves $E/\mathbb{Q}$ with $\rho_{E, 16} (G_\mathbb{Q})$ conjugate to the group $27445$, and hence are exceptional; see Remark 6.2 of \cite{rzb}. As above, we rule out the CM $j$-invariants other than $1728$ by sampling discriminants of Frobenius elements. For any $E/\mathbb{Q}$ with $j(E) = 1728$, and any rational prime $p$ which is inert in $\mathbb{Q}(i)$, the Frobenius element $\text{Frob}_{p}$ is contained in the complement of the Cartan subgroup $\rho_{E, 2^5}(G_{\mathbb{Q}(i)})$. As in Section $8$, such matrices have trace $0$, and hence the discriminant depends only upon the determinant. Hence $\Delta(\rho_{E, 2^5}(\text{Frob}_p))$ is a square modulo $2^5$ if and only if $-4\cdot p$ is a square modulo $2^5$, which is visibly independent of twists. As $-11 \equiv 5 \pmod{8}$ is not a square, $\text{Frob}_{11}$ witnesses the fact that $j=1728$ is not exceptional. \bibliographystyle{plain}
https://arxiv.org/abs/1603.03717
The Asymptotics of Quantum Max-Flow Min-Cut
The quantum max-flow min-cut conjecture relates the rank of a tensor network to the minimum cut in the case that all tensors in the network are identical\cite{mfmc1}. This conjecture was shown to be false in Ref. \onlinecite{mfmc2} by an explicit counter-example. Here, we show that the conjecture is almost true, in that the ratio of the quantum max-flow to the quantum min-cut converges to $1$ as the dimension $N$ of the degrees of freedom on the edges of the network tends to infinity. The proof is based on estimating moments of the singular values of the network. We introduce a generalization of "rainbow diagrams"\cite{rainbow} to tensor networks to estimate the dominant diagrams. A direct comparison of second and fourth moments lower bounds the ratio of the quantum max-flow to the quantum min-cut by a constant. To show the tighter bound that the ratio tends to $1$, we consider higher moments. In addition, we show that the limiting moments as $N \rightarrow \infty$ agree with that in a different ensemble where tensors in the network are chosen independently, this is used to show that the distributions of singular values in the two different ensembles weakly converge to the same limiting distribution. We present also a numerical study of one particular tensor network, which shows a surprising dependence of the rank deficit on $N \mod 4$ and suggests further conjecture on the limiting behavior of the rank.
\section{Introduction} The quantum max-flow min-cut conjecture was introduced in Ref.~\onlinecite{mfmc1}. This conjecture relates the rank of a tensor network for a generic choice of tensor to a maximal classical flow (or minimal cut) on the graph corresponding to the tensor network. In Ref.~\onlinecite{mfmc2}, various forms of the quantum max-flow min-cut conjecture were considered, and the conjectures were in fact shown to be false. Here, we consider a particular version of the conjecture, called version 2 in that paper. Even though the conjecture is not true, we show that the ratio of the actual rank to the rank predicted by the conjecture converges to $1$ in the limit of large dimension $N$ of the degrees of freedom on the edges of the network. The proof is statistical in nature, relying on a random choice of tensor in the network in a particular Gaussian ensemble. We begin by reviewing that particular form of the conjecture (in fact, we consider a special case of the conjecture in that paper, in which all the vertices in the graph have the same degree and all edges have the same capacity; however, our results can be fairly straightforwardly extended to the general case of the conjecture). We slightly modify the notation in that paper. Consider a tensor network. The tensor network is defined by several pieces of data. First, there is a graph $G$, with some open edges (i.e., edges that attach to only one vertex inside the graph) and some closed edges (edges which attach to two vertices). Second, for each edge, there is an integer, called the capacity in Ref.~\onlinecite{mfmc2}. Finally, for each vertex, there is a tensor; the number of indices of the tensor is equal to the degree of that vertex and each index of the tensor corresponds to a distinct edge attached to that vertex; each index ranges over a number of possible values equal to the capacity of that edge.
The entries of the tensor are complex numbers (one could also consider the case that they are real numbers; this would require some modifications of the techniques here). By contracting the tensor network, the tensor network assigns a complex number for each choice of indices on the open edges. We partition the open edges into two sets, called the input set and output set. Define $D_S$ to equal the product of the capacities of all input edges and define $D_T$ to equal the product of the capacities of all output edges. This contraction of the tensor network defines a linear operator $L$ from a complex vector space of dimension $D_S$ to one of dimension $D_T$. For a given tensor network, the rank of this linear operator is termed the quantum flow of the network. We define two sets, $S$ and $T$; these sets are the open ends of the input and output open edges, respectively. We let $V$ be the set of vertices in the graph, not including $S$ and $T$. We let $\overline V=S \cup T \cup V$; below when we refer to vertices, we only mean vertices in $V$. A cut is a partition of $\overline V$ into $\overline S \cup \overline T$, where $S \subset \overline S$ and $T \subset \overline T$. The cut set of the cut is the set of edges $(v,w)$ with $v \in \overline S$ and $w \in \overline T$. For a given graph, and a given cut set separating $S$ from $T$, define $D_C$ to equal the product of the capacities of the edges in the cut set. Then, define the quantum min-cut of the network to be the minimum of $D_C$ over all cuts. For a given graph and given capacities, for any choice of tensors, the quantum flow is bounded by the quantum min-cut. We briefly sketch the proof\cite{mfmc2}. Given a cut set $C$, cutting the edges in the cut set separates the graph into two graphs, and similarly the tensor network can be split into two tensor networks, one with input $S$ and output $C$ and one with input $C$ and output $T$. 
Letting $L_1$ and $L_2$ be the linear operators defined by these networks, the linear operator $L$ can be written as a product $L=L_2 L_1$. Clearly, since $L_1$ is a map to a space of dimension $D_C$, it has rank at most $D_C$ and hence so does $L$. This leaves the question: under what circumstances will the quantum flow equal the quantum min-cut? Suppose that all edges have the same capacity. We denote this capacity by $N$ ($c$ was used in Ref.~\onlinecite{mfmc2}; here is one place where we change notation to be more suggestive of ``large $N$'' limits in physics). Then, suppose that all vertices have the same degree $d$. Choose a single tensor ${\cal T}$ which has $d$ indices, each ranging from $1$ to $N$. We then use the {\it same} tensor ${\cal T}$ on all vertices of the graph. For each vertex, choose some ordering assigning indices of the tensor to edges attached to that vertex. Then, for a given $d$, $N$, graph, and choice of ordering, define the ``quantum max-flow'' to be the maximum of the rank over all tensors ${\cal T}$. Then, the second version of the quantum max-flow min-cut conjecture is that the quantum max-flow is equal to the quantum min-cut. This conjecture was shown to be false. In fact, the conjecture considered in Ref.~\onlinecite{mfmc2} is slightly more general, as edges are allowed to have different capacities. Then any two vertices which have the same degree and which have the same sequence of capacities of edges attached to the vertex (with the sequence of edges ordered by the assignment of edges to indices of the tensor) are said to have the same ``valence type'' and any two vertices with the same valence type have the same tensor. Our results can be extended to this case. Our main result is that the conjecture is ``asymptotically true'', in that as $N$ becomes large, the ratio of the quantum max-flow to the quantum min-cut converges to $1$. Write $QMC(G,N)$ to denote the quantum min-cut for a given graph $G$ and given capacity $N$.
Write $QMF(G,N,O)$ to denote the quantum max-flow for a given graph $G$ and ordering $O$ (the symbol $L$ was used for the ordering in Ref.~\onlinecite{mfmc2} but we use $O$ instead to avoid confusion with $L$ for the linear operator). For a graph $G$, let $MC(G)$ denote the minimum cut of $G$, where each edge is assigned capacity $1$ to determine the min cut. Then, \begin{equation} \label{qmcmc} QMC(G,N)=N^{MC(G)}. \end{equation} We show that \begin{theorem} \label{strongth} For all $G,O$, \begin{equation} QMF(G,N,O)=QMC(G,N) \cdot (1-o(1)). \end{equation} \end{theorem} Here, we use a big-O notation where we consider asymptotic behavior as a function of $N$. The constant factors hidden by the big-O notation may depend on $G,O$. Of course, the fact that the flow is bounded by the quantum min-cut implies that $QMF(G,N,O)\leq QMC(G,N)$. The new result will be to lower bound $QMF(G,N,O)$. Before proving theorem \ref{strongth}, we will prove a weaker theorem: \begin{theorem} \label{mainth} For all $G,O$, \begin{equation} QMF(G,N,O)=\Theta(QMC(G,N)). \end{equation} \end{theorem} The proof of this will rely on estimating the expectation value of ${\rm tr}(L^\dagger L)$ and ${\rm tr}\Bigl( (L^\dagger L)^2 \Bigr)$, and using a relation between moments of an operator and its rank. Most of the work of the paper will be developing tools to estimate moments. Before stating the next theorem about moments, we make some definitions: \begin{definition} Given two tensor networks, ${\cal N}_1,{\cal N}_2$ with corresponding graphs $G_1,G_2$, we define their product to be a tensor network with graph $G$ which is the union of $G_1,G_2$. We write the product as a network ${\cal N}_1 \cdot {\cal N}_2$. The capacity of an edge in $G$ is given by the capacity of the corresponding edge in $G_1$ or $G_2$ and the ordering of indices of a vertex in $G$ is given by the ordering of the indices of the corresponding vertex in $G_1$ or $G_2$. 
If ${\cal N}_1,{\cal N}_2$ correspond to linear operators $L_1,L_2$ respectively, then their product corresponds to linear operator $L_1 \otimes L_2$; if ${\cal N}_1,{\cal N}_2$ both have no open edges, so that the corresponding linear operators are scalars, then the linear operator corresponding to the product is simply the product of these scalars. \end{definition} \begin{definition} \label{defconn} We say that a tensor network ${\cal N}$ with open edges is ``connected'' if every vertex is connected to some open edge (either input or output) by a path in the graph corresponding to that network. \end{definition} Note that a tensor network may be connected and yet the graph corresponding to that network may be disconnected. See Fig.~\ref{figconn}. Suppose a tensor network ${\cal N}$ is not connected, with ${\cal N}$ corresponding to graph $G$ and linear operator $L$. Then, let $W$ be the set of vertices which are not connected to an input or output edge. Then, we can write ${\cal N}$ as the product of two networks, ${\cal N}_1,{\cal N}_2$, with linear operators $L_1,L_2$ and graphs $G_1,G_2$, where the vertices in ${\cal N}_2$ correspond to the vertices in $W$ and where ${\cal N}_1$ is connected. Then, $L_2$ is a scalar which is nonzero for generic choice of ${\cal T}$, so that ${\rm rank}(L)={\rm rank}(L_1)$. Further, $MC(G)=MC(G_1)$. So, to prove results about the rank of $L$ in terms of $MC(G)$, we can assume, without loss of generality, that ${\cal N}$ is connected. So, unless explicitly stated otherwise, all tensor networks with open edges that we consider will be connected and all linear operators $L$ will correspond to connected tensor networks. \begin{figure} \includegraphics[width=0.5in]{fignetconn.pdf} \caption{Example of a tensor network that is connected in the sense of definition \ref{defconn}. This network has $|V|=2$ vertices, labelled by solid circles, and $|E|=6$ edges, all of which are open.
There are $|S|=3$ input edges, shown at the left of the figure and $|T|=3$ output edges, shown at the right of the figure.} \label{figconn} \end{figure} In general, we use $|E|$ to denote the number of edges of a tensor network (including open edges) and use $|V|$ to denote the number of vertices. For notational simplicity, throughout the paper we label closed edges by the pair of vertices $(v,w)$ to which they are attached; the results also apply to the case in which there are multiple edges connecting a pair of vertices, in which case an additional label must be given to specify which edge is meant; for notational simplicity, we do not write this extra label. We will later define a Gaussian ensemble to choose the tensors. For most of the paper, we consider the case that all the tensors in the network are identical, given by some fixed tensor ${\cal T}$ drawn from this ensemble. However, we sometimes also consider the case that the tensors in the network are not identical, and instead are chosen independently from the Gaussian ensemble. If we need to distinguish these two cases, we refer to them as the {\it identical ensemble} and {\it independent ensemble}. If not otherwise stated, we are referring to the identical ensemble. We use $E[\ldots]$ to denote an expectation value in the identical ensemble and $E_{\rm ind}[\ldots]$ to denote an expectation value in the independent ensemble. We can now state the following theorem proven later. \begin{theorem} \label{mainth2} Consider a tensor network ${\cal N}$ (assumed connected as explained above) with corresponding graph $G$.
Then for the linear operator $L$ corresponding to the tensor network, for any $k \geq 1$, we have \begin{equation} \label{main2} E[{\rm tr}\Bigl((L^\dagger L)^k\Bigr)]=c(G,k) \cdot N^{k |E| - (k-1) MC(G)}+{\cal O}(N^{k |E| - (k-1) MC(G)-1}), \end{equation} where $c(G,k)$ denotes a positive integer that depends upon the graph $G$ and on $k$ (the constant $c(G,k)$ does not depend on the ordering $O$). Further, \begin{equation} \label{main2ind} E_{\rm ind}[{\rm tr}\Bigl((L^\dagger L)^k\Bigr)]=c(G,k) \cdot N^{k |E| - (k-1) MC(G)}+{\cal O}(N^{k |E| - (k-1) MC(G)-1}), \end{equation} with the same constant $c(G,k)$ in Eq.~(\ref{main2}) as in Eq.~(\ref{main2ind}). Further, for any graph $G$, the constant $c(G,k)$ is bounded by an exponential in $k$, i.e., $c(G,k)\leq c_1 \cdot c_2^k$, where the constants $c_1,c_2$ depend upon $G$. \end{theorem} This theorem requires that the network be connected. As a trivial example to show how this is necessary, consider a network which computes a scalar. Suppose that the tensors in the network are also scalars; i.e., the vertices have degree $0$. In this trivial example, suppose that $|V|=2$. This network is {\it not} connected. If the two ``tensors'' at the two different vertices are the scalars $x,y$, then the operator $L$ is equal to the scalar $xy$. If one picks $x,y$ independent complex Gaussians with probability distribution function $\frac{1}{\pi}\exp(-|z|^2)$, then one can readily compute $E_{\rm ind}[{\rm tr}(L^\dagger L)]=E_{\rm ind}[|x|^2 |y|^2]=1$. However, if we choose $x$ Gaussian and set $y=x$, then $E[|x|^2 |y|^2]=E[|x|^4]=2$ and so the expectation values would be different in the independent and identical ensembles.
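The two expectation values in this toy example are easy to check numerically. The following Monte Carlo sketch (ours; the sample size and seed are arbitrary) draws from the density $\frac{1}{\pi}\exp(-|z|^2)$ and estimates both:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
# Complex Gaussian with density exp(-|z|^2)/pi: real and imaginary
# parts are independent N(0, 1/2), so E[|z|^2] = 1.
x = rng.normal(0.0, np.sqrt(0.5), n) + 1j * rng.normal(0.0, np.sqrt(0.5), n)
y = rng.normal(0.0, np.sqrt(0.5), n) + 1j * rng.normal(0.0, np.sqrt(0.5), n)

independent = np.mean(np.abs(x) ** 2 * np.abs(y) ** 2)  # approximately 1
identical = np.mean(np.abs(x) ** 4)                     # approximately 2
```

The value $2$ reflects the fact that for this density $|z|^2$ is exponentially distributed with mean $1$, so $E[|z|^{2k}]=k!$.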
This example might seem strange (having degree $0$), so the reader can also consider, for example, a tensor network with $2$ vertices and $d$ edges, with no open edges so that all edges connect one vertex to the other; using the techniques later to evaluate expectation values, the reader can check that the expectation values will differ. Thus, if we define $$K=\Bigl( N^{|E|-MC(G)} \Bigr)^{-1/2} L$$ and define $${\rm av}(\ldots)=\frac{1}{N^{MC(G)}} {\rm tr}(\ldots),$$ we have \begin{equation} E[ {\rm av}\Bigl( (K^\dagger K)^k \Bigr)]= c(G,k)+{\cal O}(1/N), \end{equation} and \begin{equation} E_{\rm ind}[ {\rm av}\Bigl( (K^\dagger K)^k \Bigr)] = c(G,k)+{\cal O}(1/N). \end{equation} The notation ${\rm av}(\ldots)$ is intended to be suggestive as follows: we know\cite{mfmc2} that if the tensors are chosen independently, then for generic choice of tensors $L$ (and hence $K$) has rank $N^{MC(G)}$. Hence, for given $K$, the expectation value of the $2k$-th moment of a randomly chosen non-zero singular value of a random tensor in the independent ensemble is $E_{\rm ind}[{\rm av}\Bigl( (K^\dagger K)^k \Bigr)]$. Let $\mu^{\rm ind}_N$ be the distribution function of a randomly chosen singular value for a randomly chosen tensor. By the fact that the $c(G,k)$ are bounded by an exponential in $k$, the distributions $\mu^{\rm ind}_N$ converge weakly to a limit\cite{carleman}. Suppose instead we choose the tensors all identically from the Gaussian ensemble and then randomly choose one of the largest $N^{MC(G)}$ singular values (i.e., if there are ${\rm rank}(L) \leq N^{MC(G)}$ non-zero singular values, then with probability ${\rm rank}(L)/N^{MC(G)}$ we choose one of the non-zero singular values and otherwise we choose a zero singular value). Let this distribution be $\mu_N$.
Then, the moments of $\mu_N$ are the same as the moments of $\mu^{\rm ind}_N$ up to $O(1/N)$ corrections and so we have the further corollary that \begin{corollary} \label{weakconv} $\mu_N$ converges weakly to the same limit as $\mu^{\rm ind}_N$. The limiting distribution has compact support. \end{corollary} Also, \begin{corollary} \label{cor2} For all $\epsilon>0$, for all $n>0$, there is a constant $c$ such that for all sufficiently large $N$ the probability that the largest singular value of $L$ is greater than or equal to $x\sqrt{N^{|E|-MC(G)+\epsilon}}$ is bounded by $c x^{-n}$. \begin{proof} Let $\lambda$ be the largest singular value of $L$. $E[\lambda^{2k}]\leq c(G,k)(1+{\cal O}(1/N)) N^{k |E| - (k-1) MC(G)}$. So, the probability that $\lambda\geq x \sqrt{N^{|E|-MC(G)+\epsilon}}$ is bounded by \begin{equation} \frac{E[\lambda^{2k}]}{x^{2k} N^{k|E|-kMC(G)+k\epsilon}}= c(G,k)x^{-2k} (1+{\cal O}(1/N)) N^{MC(G)-k\epsilon}, \end{equation} and choosing $k>MC(G)/\epsilon$ and $k\geq n/2$, this is bounded by a constant times $x^{-n}$ for all sufficiently large $N$. \end{proof} \end{corollary} Since if the tensors are chosen independently one has $QMC(G,N)=QMF(G,N)$ generically, one might naively guess that corollary \ref{weakconv} implies theorem \ref{strongth} as follows: for independent choice of tensors, the linear operator $L$ will generically have $QMC(G,N)$ nonzero singular values and so one might expect that when the tensors are chosen identically the linear operator $L$ will have nearly $QMC(G,N)$ nonzero singular values. The trouble with this naive argument is that it is conceivable that in the independent ensemble the linear operator $K$ will have $QMC(G,N)$ nonzero singular values but that with high probability a constant fraction of them will have magnitude which is $o(1)$, so that $\mu^{\rm ind}_N$ will converge to, for example, a sum of a smooth function plus a $\delta$-function at the origin.
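The failure mode just described can be made concrete. In the following sketch (our illustration, with an arbitrarily chosen spectrum), half of the singular values of a hypothetical $K$ are $\Theta(1)$ and half are tiny; a second-moment-versus-fourth-moment rank estimate of the kind developed in the next section then certifies only about half of the true rank:

```python
import numpy as np

# Hypothetical spectrum: half the singular values Theta(1), half tiny.
eps = 1e-3
sv = np.array([1.0] * 50 + [eps] * 50)
m = sv ** 2                                   # eigenvalues of K^dagger K

true_rank = np.count_nonzero(sv)              # 100: all values are nonzero
# Cauchy-Schwarz rank lower bound tr(M)^2 / tr(M^2); here it is near 50.
moment_bound = m.sum() ** 2 / (m ** 2).sum()
```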
So, to prove theorem \ref{strongth} we instead give a more detailed analysis of higher moments. We begin in section \ref{msec} by lower bounding the rank in terms of traces of moments of the linear operator $L$. We also define the appropriate Gaussian ensemble for ${\cal T}$ in this section, and give a combinatorial method for computing these traces for this ensemble. Then, in section \ref{fsec}, we use these methods to estimate the expectation value of ${\rm tr}(L^\dagger L)$. In sections \ref{mlbsec},\ref{mubsec}, we show how to estimate expectation values of traces of higher moments of $L^\dagger L$, as well as expectation values of products of such traces; this is done by combining a lower bound in section \ref{mlbsec} for these expectation values with an upper bound in section \ref{mubsec}. The techniques for computing the expectation value show that the results are indeed the same, up to ${\cal O}(1/N)$, for the two ensembles. In section \ref{proofstrongth} we complete the proof of theorem \ref{strongth}. In section \ref{sectionvar} we collect some results on variance that are not needed elsewhere but may be of independent interest. In section \ref{sectionnumerics} we present some numerical simulations. \section{Moments Bound and Definition of Ensemble} \label{msec} Let ${\rm rank}(L)$ denote the rank of a linear operator $L$. We have the following bound: \begin{lemma} \label{momentslemma} For any linear operator $L$, and any integer $k>1$, \begin{equation} \label{moments} {\rm rank}(L)^{k-1} \geq \frac{{\rm tr}(L^\dagger L)^k}{{\rm tr}\Bigl((L^\dagger L)^k\Bigr)}, \end{equation} assuming that the denominator of the right-hand side is nonzero. In the special case $k=2$ used later we have \begin{equation} {\rm rank}(L)\geq \frac{{\rm tr}(L^\dagger L)^2}{{\rm tr}\Bigl((L^\dagger L)^2\Bigr)}. \end{equation} \begin{proof} Let the non-zero singular values of $L$ be $\lambda_1,...,\lambda_{{\rm rank}(L)}$. 
Let the vector $v$ be $(\lambda_1^2,...,\lambda_{{\rm rank}(L)}^2)$. Let the vector $w$ be $(1,1,...,1)$. Then, by H\"{o}lder's inequality applied to the vectors $v,w$, \begin{equation} \sum_{i=1}^{{\rm rank}(L)} \lambda_i^2 \leq \Bigl( \sum_{i=1}^{{\rm rank}(L)} \lambda_i^{2p} \Bigr)^{1/p} {\rm rank}(L)^{1/q} \end{equation} for any $p,q$ with $1/p+1/q=1$. Choosing $p=k$, $q=k/(k-1)$, and raising the above equation to the $k$-th power, we get \begin{equation} {\rm tr}(L^\dagger L)^k\leq {\rm tr} \Bigl((L^\dagger L)^k\Bigr) {\rm rank}(L)^{k-1}, \end{equation} as claimed (in the special case of $k=2$, we can use Cauchy-Schwarz instead of H\"{o}lder). \end{proof} \end{lemma} We will prove theorem \ref{mainth} by estimating the expected value of the numerator and denominator of the right-hand side of Eq.~(\ref{moments}) for a particular random ensemble of tensors. The ensemble that we choose is that the entries of the tensor ${\cal T}$ will be chosen independently and identically distributed, using a Gaussian distribution with probability density $$\frac{1}{\pi}\exp(-|z|^2),$$ so that $E[|z|^2]=1$. We estimate the expectation values of the numerator and denominator of Eq.~(\ref{moments}) separately, rather than estimating the expectation value of the ratio, and use the following lemma: \begin{lemma} \label{exists} Let $E[{\rm tr}(L^\dagger L)^k]$ and $E[{\rm tr}\Bigl((L^\dagger L)^k\Bigr)]$ be given and nonzero for $k>1$. Then there must exist some tensor ${\cal T}_0$ for which the corresponding linear operator $L_0$ obeys \begin{equation} {\rm rank}(L_0)^{k-1} \geq \frac{E[{\rm tr}(L^\dagger L)^k]}{E[{\rm tr}\Bigl((L^\dagger L)^k\Bigr)]}. \end{equation} Further, since $E[{\rm tr}(L^\dagger L)^k] \geq E[{\rm tr}(L^\dagger L)]^k$, \begin{equation} {\rm rank}(L_0)^{k-1} \geq \frac{E[{\rm tr}(L^\dagger L)]^k}{E[{\rm tr}\Bigl((L^\dagger L)^k\Bigr)]}.
\end{equation} \begin{proof} Given $E[{\rm tr}(L^\dagger L)^k]$ and $E[{\rm tr}\Bigl((L^\dagger L)^k\Bigr)]$, there must exist some ${\cal T}_0$ for which \begin{equation} \frac{{\rm tr}(L_0^\dagger L_0)^k}{{\rm tr}\Bigl((L_0^\dagger L_0)^k\Bigr)} \geq \frac{E[{\rm tr}(L^\dagger L)^k]}{E[{\rm tr}\Bigl((L^\dagger L)^k\Bigr)]}, \end{equation} since otherwise ${\rm tr}(L^\dagger L)^k < {\rm tr}\Bigl((L^\dagger L)^k\Bigr) \cdot E[{\rm tr}(L^\dagger L)^k]/E[{\rm tr}\Bigl((L^\dagger L)^k\Bigr)]$ would hold for every ${\cal T}$, and taking expectations would give a contradiction. The first result then follows from Eq.~(\ref{moments}), and the second follows since $E[{\rm tr}(L^\dagger L)^k] \geq E[{\rm tr}(L^\dagger L)]^k$ by Jensen's inequality. \end{proof} \end{lemma} For a given tensor ${\cal T}$, traces such as ${\rm tr}(L^\dagger L)$ and ${\rm tr}(L^\dagger L L^\dagger L)$ are also given by tensor networks; they are tensor networks with no open edges, so that their contraction yields a scalar, and this scalar is equal to the desired trace. \begin{definition} We refer to such networks with no open edges as {\it closed tensor networks}. \end{definition} See Figs.~\ref{figLdaggerL},\ref{figLdaggerL2}. The notation in Fig.~\ref{figLdaggerL}, with closed and open circles to denote ${\cal T}$ and $\overline {\cal T}$ and dashed lines to denote edges that should be joined to compute a trace, will be used in figures from here on. We use ${\cal N}$ to denote the network with open edges that is used to define $L$ and we use ${\cal N}_c$ to denote various different tensor networks with no open edges; the networks ${\cal N}_c$ that we consider will correspond to traces such as ${\rm tr}(L^\dagger L)$, ${\rm tr}(L^\dagger L L^\dagger L)$, and so on; so, ${\cal N}_c$ is derived from ${\cal N}$ and from the choice of the particular trace. \begin{figure} \includegraphics[width=1.0in]{figLdL.pdf} \caption{Closed tensor network for computing ${\rm tr}(L^\dagger L)$ for the linear operator corresponding to the tensor network shown in Fig.~\ref{figconn}. Closed circles represent tensors ${\cal T}$, while open circles represent tensors $\overline {\cal T}$.
The dashed lines on the $4$ open edges are used to indicate that the edges on the right-hand side of the figure should be joined to the edges on the left-hand side of the figure so that the tensor network is closed, computing the trace. Without these dashed lines, the tensor network would have $4$ output edges and would compute the linear operator $L^\dagger L$, with the four output edges on the right and the four input edges on the left (while a mathematical expression such as $L^\dagger L$ has its ``input on the right'' and its ``output on the left'', tensor networks are conventionally read from left to right instead).} \label{figLdaggerL} \end{figure} \begin{figure} \includegraphics[width=2.0in]{figLdL2.pdf} \caption{Closed tensor network for computing ${\rm tr}\Bigl((L^\dagger L)^2\Bigr)$ for the linear operator corresponding to the tensor network shown in Fig.~\ref{figconn}.} \label{figLdaggerL2} \end{figure} \begin{definition} We introduce notation: ${\cal N}_c({\rm tr}(L^\dagger L))$ indicates the closed tensor network corresponding to ${\rm tr}(L^\dagger L)$, while ${\cal N}_c({\rm tr}(L^\dagger L L^\dagger L))$ indicates the tensor network corresponding to ${\rm tr}(L^\dagger L L^\dagger L)$, and so on. Additionally, we consider closed tensor networks which correspond to products of traces, so that ${\cal N}_c({\rm tr}(L^\dagger L) {\rm tr}(L^\dagger L L^\dagger L))$ indicates the tensor network corresponding to ${\rm tr}(L^\dagger L) {\rm tr}(L^\dagger L L^\dagger L)$. Such a tensor network ${\cal N}_c({\rm tr}(L^\dagger L) {\rm tr}(L^\dagger L L^\dagger L))$ is the product of the tensor networks ${\cal N}_c({\rm tr}(L^\dagger L))$ and ${\cal N}_c({\rm tr}(L^\dagger L L^\dagger L))$.
\end{definition} We now explain how to compute the expectation value of the contraction of a closed tensor network ${\cal N}_c$; for brevity, we will simply refer to this as ``the expectation value of the tensor network'', rather than ``the expectation value of the contraction of the tensor network''. The tensor network ${\cal N}_c$ is a polynomial in the entries of ${\cal T}$ and $\overline {\cal T}$, where the overline denotes the complex conjugate. Hence, we can use Wick's theorem to compute the expectation value. Suppose that the tensor network has $M$ different tensors ${\cal T}$ and $M'$ different tensors $\overline {\cal T}$. The expectation value is nonzero only if $M=M'$. Wick's theorem computes the expectation value by summing over all possible pairings of a tensor ${\cal T}$ with a tensor $\overline {\cal T}$, so that every ${\cal T}$ is paired with a unique $\overline {\cal T}$; that is, there are $M!$ different pairings to sum over. Given a tensor ${\cal T}$ with $d$ indices, with entries of the tensor written ${\cal T}_{i_1,...,i_d}$, we have \begin{equation} E[{\cal T}_{i_1,...,i_d} \overline{\cal T}_{j_1,...,j_d}]=\delta_{i_1,j_1} ... \delta_{i_d,j_d}. \end{equation} These $\delta$-functions can be represented graphically as follows. For each pairing, we define a new graph by removing every vertex (leaving all edges with both ends open) and then, for each pair of vertices $v,w$ which are paired with each other and each $a \in \{1,...,d\}$, we attach the end of the edge which corresponded to the $a$-th index of the tensor at vertex $v$ to the end of the edge which corresponded to the $a$-th index of the tensor at vertex $w$. Then, the tensor network breaks into a set of disconnected closed loops. See Fig.~\ref{figloops}. \begin{definition} \label{cldef} A closed loop created by a pairing consists of a sequence of edges $(v_1,w_1),(v_2,w_2),...,(v_l,w_l)$; no repetition of edges is allowed in a closed loop, but vertices may be repeated.
The closed loops created by the pairing are such that $w_i$ is paired with $v_{i+1}$ (identifying $v_{l+1}$ with $v_1$) with the local ordering assigning the same index of the tensor to the edge $(v_i,w_i)$ at $w_i$ as it assigns to edge $(v_{i+1},w_{i+1})$ at $v_{i+1}$. The choice of starting edge in the sequence is irrelevant, as is the direction in which the edges are traversed; i.e., the sequences $(v_1,w_1),(v_2,w_2),...,(v_l,w_l)$ and $(v_2,w_2),\ldots,(v_l,w_l),(v_1,w_1)$ and $(w_l,v_l),\ldots,(w_2,v_2),(w_1,v_1)$ all denote the same closed loop. \end{definition} Suppose for a given pairing $\pi$ that the number of closed loops is equal to $C(\pi)$. Then, the sum over indices on the edges for the given pairing is $N^{C(\pi)}$. Thus, for a tensor network ${\cal N}_c$ with $M=M'$, the expectation value is equal to \begin{equation} \label{value} E[{{\cal N}_c}]=\sum_{\pi \, {\rm pairing} \, {\cal T} \, {\rm with} \, \overline {\cal T}} N^{C(\pi)}. \end{equation} \begin{figure} \includegraphics[width=1.0in]{figloops.pdf} \caption{A possible pairing of the closed tensor network in Fig.~\ref{figLdaggerL}. Dotted lines are used to connect edges which end at vertices which are paired; each vertex in a pair has $d=3$ different edges: we connect the edges which have the same index for the local ordering. This pairing has $6$ different closed loops. There is one other possible pairing for this network; that pairing would have only $3$ closed loops.} \label{figloops} \end{figure} Note that every term in Eq.~(\ref{value}) is positive. So, once we have found that there exists some pairing $\pi_0$ with some given $C(\pi_0)$, we have established that $E[{{\cal N}_c}] \geq N^{C(\pi_0)}$. Further, the pairing or pairings with the largest $C(\pi_0)$ give the dominant contribution in the large $N$ limit.
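As an illustration of Eq.~(\ref{value}) (this example is ours, not the paper's): for a single tensor with $d=2$, one input edge and one output edge, the pairings can be enumerated by hand. For ${\rm tr}(L^\dagger L)$ there is one pairing, with $C(\pi)=2$, and for ${\rm tr}\Bigl((L^\dagger L)^2\Bigr)$ there are $2!=2$ pairings, each with $C(\pi)=3$, so Wick's theorem predicts expectation values $N^2$ and $2N^3$. The sketch below compares these predictions with a Monte Carlo average; the values of $N$ and the sample count are arbitrary.

```python
import numpy as np

# Monte Carlo check of Eq. (value) in the simplest case (an illustration,
# not part of the paper): a single tensor with d = 2, so L is an N x N
# matrix with i.i.d. entries drawn from the density exp(-|z|^2)/pi.
# Hand-enumerated pairings predict:
#   E[tr(L^dag L)]     = N^2     (one pairing, C = 2)
#   E[tr((L^dag L)^2)] = 2 N^3   (2! = 2 pairings, each with C = 3)
rng = np.random.default_rng(1)
N, samples = 30, 300

t1 = t2 = 0.0
for _ in range(samples):
    L = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    W = L.conj().T @ L
    t1 += np.trace(W).real
    t2 += np.trace(W @ W).real
t1 /= samples
t2 /= samples
print(t1 / N**2, t2 / (2 * N**3))  # both ratios close to 1
```
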
We define \begin{equation} C_{\rm max}={\rm max}_{\pi} C(\pi), \end{equation} and define $n_{\rm max}$ to be the number of distinct pairings $\pi$ with $C(\pi)=C_{\rm max}$; then \begin{equation} \label{valueasympt} E[{{\cal N}_c}]= \Theta(N^{C_{\rm max}}), \end{equation} and \begin{equation} \label{valueasympt2} E[{{\cal N}_c}]=\Bigl(1+{\cal O}(1/N)\Bigr) \cdot n_{\rm max} N^{C_{\rm max}}. \end{equation} \section{Estimating First Moment} \label{fsec} We now estimate $E[{\rm tr}(L^\dagger L)]$. \begin{lemma} \label{sqlemma} Given a tensor network ${\cal N}$ obtained from a graph $G$ with $|V|$ vertices and $|E|$ edges, with corresponding linear operator $L$, for ${\cal N}_c({\rm tr}(L^\dagger L))$ we have \begin{equation} C_{\rm max}=|E|, \end{equation} and \begin{equation} n_{\rm max}=1. \end{equation} \begin{proof} First, we explicitly give a pairing $\pi$ for which $C(\pi)=|E|$. Note that the number of vertices in the graph $G'$ for tensor network ${\cal N}_c({\rm tr}(L^\dagger L))$ equals $2|V|$. There are $|V|$ vertices with tensor ${\cal T}$ and $|V|$ vertices with tensor $\overline {\cal T}$. There is an obvious pairing $\pi$ of the vertices in $G'$, exemplified in Fig.~\ref{figloops} for one particular network. For this pairing $\pi$, we have $C(\pi)=|E|$. We can define this pairing formally for arbitrary $G$ as follows. Suppose $G$ has vertex set ${\cal V}$, and edge set ${\cal E}$. Then, $G'$ has vertices labelled by a pair $(v;\sigma)$ where $v\in {\cal V}$ and $\sigma\in \{1,2\}$. If $\sigma=1$ then the vertex has tensor ${\cal T}$, and if $\sigma=2$ then the vertex has tensor $\overline {\cal T}$. If $(v,w)$ is an edge in $G$, then $((v;\sigma),(w;\sigma))$ is an edge of $G'$ for $\sigma\in \{1,2\}$. Additionally, for every open edge in $G$ there is an edge in $G'$; for each open edge attached to a vertex $v$, there is an edge $((v;1),(v;2))$ in $G'$. There are no other edges in $G'$ other than those given by these rules.
The ordering of edges attached to vertices in $G'$ is defined in the obvious way: if an edge $e$ in $G$ attached to a vertex $v$ corresponds to the $j$-th index of the tensor, then the edge in $G'$ obtained from $e$ in the above rules attached to $(v;\sigma)$ also corresponds to the $j$-th index of the tensor. This pairing $\pi$ is then the pairing which pairs each vertex $(v;1)$ with $(v;2)$. This pairing gives one closed loop for every edge of $G$ so that $C(\pi)=|E|$. We next show that there is no pairing $\pi$ for which $C(\pi)>|E|$. A closed loop corresponds to a sequence of edges in $G'$ which we write as $(v_1,w_1),(v_2,w_2),...,(v_l,w_l)$ for some $l$; we say that such a loop ``has $l$ edges''. The pairing is such that $w_i$ is paired with $v_{i+1}$ for $i<l$ and $w_l$ is paired with $v_1$. If $l=1$, then $v_1$ has a tensor ${\cal T}$ and $w_1$ has a tensor $\overline {\cal T}$ and so this edge in $G'$ is obtained from an open edge in $G$. If $l>1$, then vertices $v_i$ and $w_i$ for odd $i$ have tensors ${\cal T}$ while for even $i$ they have tensors $\overline {\cal T}$. So, if $l>1$, then $l$ is even. Let $n_1$ be the number of closed loops in the pairing with $l=1$; $n_1$ is bounded by the number of open edges in $G$, which is equal to $|S|+|T|$. The number of closed loops with $l>1$ is then bounded by $(1/2)(|E'|-n_1)$, where $|E'|$ is the number of edges in $G'$, since every closed loop with $l>1$ must have at least two edges. Note that $|E'|=2|E|-|S|-|T|$. So, the number of closed loops is bounded by \begin{equation} \label{maxC} C(\pi) \leq n_1+(1/2)(|E'|-n_1), \end{equation} which is maximal when $n_1$ is as large as possible, i.e., $n_1=|S|+|T|$. In this case, the maximum number of closed loops equals $|E|$. So, $C_{\rm max}=|E|$. We now show that $n_{\rm max}=1$. Consider a pairing $\pi$ with $C(\pi)=|E|$ so that $n_1=|S|+|T|$.
Thus, for every open edge in $G$, there must be a closed loop containing just the edge in $G'$ corresponding to that open edge in $G$. Further, to have $C(\pi)=|E|$, we must have that no loops have more than two edges, as otherwise $C(\pi)<n_1+(1/2)(|E'|-n_1)$. A loop with two edges corresponds to a sequence $(v_1,w_1),(v_2,w_2)$ with $w_1,v_2$ paired and $v_1,w_2$ paired. This then allows us to show that $n_{\rm max}=1$ as follows. Since for every open edge in $G$, there is a closed loop of length $l=1$ containing just the corresponding edge in $G'$, any vertex $(v;1)$ which is attached to an open edge must be paired with $(v;2)$. That is, for all vertices $v \in G$ attached to an open edge, we pair $(v;1)$ and $(v;2)$. Now consider a vertex $w\in G$ which neighbors some vertex $v\in G$ such that we pair $(v;1)$ and $(v;2)$. Then, there is some edge $((w;1),(v;1))$ and this edge must be in a closed loop with two edges. Since we pair $(v;1)$ with $(v;2)$, there must be a loop containing edges $((w;1),(v;1))$ and $((v;2),(w;2))$. Then, since this loop must have length $2$, we must pair $(w;1)$ with $(w;2)$. Let $P$ be the set of vertices $v\in G$ such that we pair $(v;1)$ with $(v;2)$; we have shown that $P$ contains all vertices attached to an open edge and $P$ contains all vertices connected to a vertex in $P$ by an edge. So, since the network is connected, $P$ must contain all vertices and so there is indeed only one such pairing. \end{proof} \end{lemma} \section{Lower Bound For Higher Moments} \label{mlbsec} We now lower bound $E[{\rm tr}(L^\dagger L L^\dagger L)]$ and other higher moments. First let $G'$ denote the graph for tensor network ${\cal N}_c({\rm tr}(L^\dagger L L^\dagger L))$. Let us formally define $G'$, in a way similar to that in which a different graph (also called $G'$) was defined in lemma \ref{sqlemma}. Suppose $G$ has vertex set ${\cal V}$, and edge set ${\cal E}$.
Then, $G'$ has vertices labelled by a pair $(v;\sigma)$ where $v\in {\cal V}$ and $\sigma\in \{1,2,3,4\}$. If $\sigma\in \{1,3\}$ then the vertex has tensor ${\cal T}$, and if $\sigma\in \{2,4\}$ then the vertex has tensor $\overline {\cal T}$. If $(v,w)$ is an edge in $G$, then $((v;\sigma),(w;\sigma))$ is an edge of $G'$ for $\sigma\in \{1,2,3,4\}$. Additionally, for every open edge in $G$ there are two edges in $G'$; for each open edge attached to a vertex $v$, if the edge is an input edge then there are edges $((v;1),(v;2))$ and $((v;3),(v;4))$ in $G'$, while if the edge is an output edge then there are edges $((v;2),(v;3))$ and $((v;4),(v;1))$. There are no other edges in $G'$ other than those given by these rules. The ordering of edges attached to vertices in $G'$ is defined in the obvious way: if an edge $e$ in $G$ attached to a vertex $v$ corresponds to the $j$-th index of the tensor, then the edge in $G'$ obtained from $e$ in the above rules attached to $(v;\sigma)$ also corresponds to the $j$-th index of the tensor. If we instead consider a higher moment $E[{\rm tr}\Bigl((L^\dagger L)^k\Bigr)]$, we define a graph $G'$ for tensor network ${\cal N}_c({\rm tr}\Bigl((L^\dagger L)^k\Bigr))$ similarly. Now $G'$ has vertices labelled by a pair $(v;\sigma)$ where $v\in {\cal V}$ and $\sigma\in \{1,2,\ldots,2k\}$. If $\sigma$ is odd then the vertex has tensor ${\cal T}$, and if $\sigma$ is even then the vertex has tensor $\overline {\cal T}$. If $(v,w)$ is an edge in $G$, then $((v;\sigma),(w;\sigma))$ is an edge of $G'$ for all $\sigma$. Additionally, for every open edge in $G$ there are $k$ edges in $G'$; for each open edge attached to a vertex $v$, if the edge is an input edge then there are edges $((v;\sigma),(v;\sigma+1))$ for $\sigma$ odd, while if the edge is an output edge then there are edges $((v;\sigma),(v;\sigma+1))$ for $\sigma$ even. We regard $\sigma$ as periodic mod $2k$, so that $\sigma=2k+1$ is the same as $\sigma=1$.
The ordering of edges attached to vertices in $G'$ is defined in the obvious way: if an edge $e$ in $G$ attached to a vertex $v$ corresponds to the $j$-th index of the tensor, then the edge in $G'$ obtained from $e$ in the above rules attached to $(v;\sigma)$ also corresponds to the $j$-th index of the tensor. Thus, for example, for the closed tensor network in Fig.~\ref{figLdaggerL2}, the four vertices in the top row correspond to $\sigma=1,2,3,4$ from {\it left} to {\it right} (recall that the input of the tensor network is on the left), as do the four vertices in the bottom row. We now show the lower bound \begin{lemma} \label{lblemma} Let linear operator $L$ correspond to a tensor network ${\cal N}$ obtained from a graph $G$ with $|V|$ vertices and $|E|$ edges. Then, for ${\cal N}_c({\rm tr}\Bigl((L^\dagger L)^k\Bigr))$ for $k\geq 1$ we have \begin{equation} C_{\rm max}\geq k|E|-(k-1)MC(G). \end{equation} \begin{proof} The case $k=1$ is already given above. So, assume $k>1$. Suppose the lemma does not hold, so that $C_{\rm max}<k|E|-(k-1)MC(G)$ and hence $C_{\rm max}\leq k|E|-(k-1)MC(G)-1$. Then from lemma \ref{sqlemma}, for sufficiently large $N$ we would have \begin{equation} \frac{E[{\rm tr}(L^\dagger L)]^k}{E[{\rm tr}\Bigl((L^\dagger L)^k\Bigr)]} \geq c \cdot N^{(k-1)MC(G)+1}, \end{equation} for some positive constant $c$ (the ratio of expectation values would asymptotically tend to $1/n_{\rm max}$, where here $n_{\rm max}$ refers to the number of pairings with maximal $C(\pi)$ for the tensor network ${\cal N}_c({\rm tr}\Bigl((L^\dagger L)^k\Bigr))$). Then, from lemma \ref{exists}, for some choice of tensor ${\cal T}_0$ the corresponding linear operator $L_0$ obeys ${\rm rank}(L_0) \geq c' \cdot N^{MC(G)+1/(k-1)}$ for some positive constant $c'$, which is asymptotically larger than $QMC(G,N)$, contradicting the fact that ${\rm rank}(L) \leq QMC(G,N)$. In addition to this proof, we give an alternative proof by explicitly giving a pairing $\pi$ for which $C(\pi)=k|E|-(k-1)MC(G)$.
We exemplify this pairing for a particular network in Fig.~\ref{figcutpair}. Consider a minimum cut, with corresponding sets $\overline S,\overline T$. For $v \in \overline T$, we pair $(v;\sigma)$ with $(v;\sigma+1)$ for odd $\sigma$, while for $v \in \overline S$ we pair $(v;\sigma)$ with $(v;\sigma+1)$ for even $\sigma$. Then, for every edge $(v,w)$ for vertices $v,w \in \overline T\setminus T$ there are $k$ closed loops, corresponding to edges $((v;\sigma),(w;\sigma))$ and $((w;\sigma+1),(v;\sigma+1))$ for odd $\sigma$; similarly, for every edge $(v,w)$ for vertices $v,w \in \overline S\setminus S$ there are $k$ closed loops, corresponding to edges $((v;\sigma),(w;\sigma))$ and $((w;\sigma+1),(v;\sigma+1))$ for even $\sigma$. These edges $(v,w)$ for $v,w \in \overline T\setminus T$ or $v,w \in \overline S\setminus S$ are not in the cut set. Similarly, for every edge in the output set or in the input set (assuming that the edge is not in the cut set) there are $k$ closed loops. However, for each edge $(v,w)$ in the cut set, there is only one closed loop, corresponding to edges $((v;1),(w;1)),((w;2),(v;2)), \ldots, ((w;2k),(v;2k))$. Thus, the number of closed loops is equal to $k$ times the number of edges not in the cut set, plus the number of edges in the cut set, which equals $k|E|$ minus $k-1$ times the number of edges in the cut set. \end{proof} \end{lemma} \begin{figure} \includegraphics[width=2.0in]{figcutpair.pdf} \caption{Example of the pairing defined in lemma \ref{lblemma}. The network is the same as in Fig.~\ref{figLdaggerL2}. The thin curving vertical lines represent minimum cuts of the network. There are four such lines, each cutting one of the cases $\sigma=1,2,3,4$. The lines for odd $\sigma$ are reflected in the horizontal direction compared to those for even $\sigma$. 
The vertical dashed lines represent reflection planes: reflecting the region between any two neighboring vertical curved lines about one of these vertical dashed lines leaves the region unchanged except for interchanging tensors ${\cal T}$ and $\overline {\cal T}$ (we have drawn one of these lines at the right-hand side of the figure, indicating a reflection relating vertices at the right-hand side to those at the left-hand side). The pairing in lemma \ref{lblemma} pairs vertices related by such a reflection.} \label{figcutpair} \end{figure} \section{Upper Bound For Higher Moments and Its Realization By ``Direct Pairings''} \label{mubsec} The pairing in lemma \ref{lblemma} has a certain structure, which we now define (that pairing is not the only pairing consistent with this structure). \begin{definition} \label{dsnDef} Consider an arbitrary closed tensor network, ${\cal N}_c$, with some vertices having tensor ${\cal T}$ and some having tensor $\overline {\cal T}$. Suppose that there are two vertices, $v,w$, with $v$ having tensor ${\cal T}$ and $w$ having tensor $\overline {\cal T}$. Further, suppose that there is at least one edge from $v$ to $w$ which has the property that the local ordering of indices makes that edge correspond to the same index for tensor ${\cal T}$ as it does for tensor $\overline {\cal T}$. Then, define a new closed tensor network, ${\cal N}_c'$, by removing the vertices $v,w$ from ${\cal N}_c$; every edge from $v$ to $w$ is removed, while for every pair of edges $(u,v)$ and $(w,x)$ for which the local ordering gives the same tensor index at $v$ as at $w$, we replace that pair with a single edge $(u,x)$, defining the local ordering in the obvious way, so that the edge $(u,x)$ corresponds to the index of the tensor at $u$ that $(u,v)$ did and corresponds to the index of the tensor at $x$ that $(w,x)$ did.
Then, we say that a network ${\cal N}_c'$ constructed in this fashion is a ``one-step direct subnetwork'' of ${\cal N}_c$; more specifically, we call it a ``one-step direct subnetwork made by pairing $v,w$ in ${\cal N}_c$'' to indicate how it is constructed. Any network ${\cal N}_c'$ constructed by zero or more steps of the above procedure is termed a ``direct subnetwork'' of ${\cal N}_c$. Given a sequence of direct subnetworks, ${\cal N}_c(1)$ pairing $(v(1),w(1))$ in ${\cal N}_c$ and ${\cal N}_c(2)$ pairing $(v(2),w(2))$ in ${\cal N}_c(1)$, and so on, we define a partial pairing of the original network ${\cal N}_c$. This partial pairing pairs $v(1)$ with $w(1)$, pairs $v(2)$ with $w(2)$, and so on. This is a partial pairing as it may pair only a subset of the vertices. Such a partial pairing is termed a direct partial pairing. If all vertices are paired (so that the last direct subnetwork in the sequence has no vertices), then this is termed a direct pairing. For each direct partial pairing there are several possible sequences of direct subnetworks ${\cal N}_c(1),\ldots$, but the last network in the sequence is unique, and we say that this is the direct subnetwork determined by that direct partial pairing. \end{definition} One way to understand direct pairings is to consider the case that the graph has degree $d=2$ and has one input and one output edge. Then, if the graph has a single vertex, the problem of the singular values of $L$ reduces to a well-studied problem in random matrix theory, studying the singular values of a random square matrix with independent entries. This is the so-called chiral Gaussian Unitary Ensemble\cite{chgue1,chgue2}. In this case, it is well known that the dominant diagrams in the large $N$ limit for any moment are the so-called ``rainbow diagrams'' (these are also called ``planar diagrams'')\cite{rainbow}.
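As a numerical aside (ours, not the paper's): in this single-vertex, $d=2$ case the number of rainbow (planar) pairings contributing to the $k$-th moment is the Catalan number $C_k=\binom{2k}{k}/(k+1)$, so the normalized moments $N^{-(k+1)}E[{\rm tr}\Bigl((L^\dagger L)^k\Bigr)]$ tend to $1,2,5,\ldots$. The sketch below checks this; the values of $N$ and the sample count are arbitrary choices.

```python
import numpy as np
from math import comb

# Numerical aside (not from the paper): for a single degree-2 tensor,
# L is an N x N matrix with i.i.d. complex Gaussian entries, and the
# dominant pairings for the k-th moment are the rainbow (planar)
# diagrams, counted by the Catalan numbers C_k = binom(2k, k)/(k+1).
# Hence N^{-(k+1)} E[tr((L^dag L)^k)] -> C_k as N grows.
rng = np.random.default_rng(2)
N, samples = 60, 100

moments = np.zeros(3)
for _ in range(samples):
    L = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    W = L.conj().T @ L
    P = np.eye(N, dtype=complex)
    for k in range(3):
        P = P @ W  # P = W^{k+1}
        moments[k] += np.trace(P).real / N ** (k + 2)
moments /= samples
catalan = [comb(2 * k, k) // (k + 1) for k in range(1, 4)]
print(moments, catalan)  # moments approach [1, 2, 5]
```
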
Even if there is one input and one output edge, but more than one vertex (so that we now study the singular values of a power of a random matrix), the dominant diagrams are still rainbow diagrams. These rainbow diagrams are precisely the direct pairings in this case. If we still stick to the case $d=2$, with $|S|=|T|=MC(G)$, but allow $MC(G)>1$, the dominant diagrams can still be understood as rainbow diagrams: for each of the $MC(G)$ distinct paths from input to output, we draw a rainbow diagram, pairing only vertices which are both in the same path (i.e., given a power $k\geq 1$ so that we have $k$ copies of a given path with tensors ${\cal T}$ and $k$ copies with tensors $\overline{\cal T}$, we pair vertices between copies of that path, but only within a given path, not between paths). Again, these are the direct pairings. Later, we will use this understanding in terms of rainbow diagrams to better understand the case $d>2$. Suppose $d>2$ and we have a direct pairing. We can construct $MC(G)$ edge-disjoint paths from input to output. We will show that the direct pairing has the property that if we consider only the vertices and edges in one of these paths, the result is a rainbow diagram. Since the edge-disjoint paths might share vertices (if $d\geq 4$), this can impose some relationship between the different rainbow diagrams for the different paths; i.e., it is not the case that we can choose a distinct rainbow diagram for each path independently. Further, while every pairing that gives rainbow diagrams when restricted to each of these paths will be a direct pairing, not every such direct pairing will have maximal $C(\pi)$; see Fig.~\ref{figbadcutpair}. \begin{figure} \includegraphics[width=2.0in]{figbadcutpair.pdf} \caption{Example of a direct pairing that does not have maximal $C(\pi)$.
The notation is the same as in Fig.~\ref{figcutpair}, except that the thin curving vertical lines are cuts which are {\it not} min cuts.} \label{figbadcutpair} \end{figure} \begin{definition} Consider an arbitrary closed tensor network ${\cal N}_c$, and a one-step direct subnetwork ${\cal N}_c'$ made by pairing vertices $v,w$ in ${\cal N}_c$. Let $\pi'$ be a partial pairing of ${\cal N}_c'$. Then, we say that $\pi'$ induces a partial pairing of ${\cal N}_c$ which pairs $v$ with $w$ and pairs all other vertices as they are paired in $\pi'$ (every vertex in ${\cal N}_c$ other than $v,w$ corresponds to a vertex in ${\cal N}_c'$). Inductively, for any direct partial pairing $\theta$ defining a direct subnetwork ${\cal N}_c'$ and any pairing $\pi'$ of ${\cal N}_c'$ we define a partial pairing on ${\cal N}_c$ induced by $\pi'$. We write $\Pi(\pi',\theta)$ to denote this partial pairing. \end{definition} \begin{definition} Given a partial pairing $\pi$ of a network, we define $C(\pi)$, the number of closed loops created by the partial pairing, in the obvious way: it is the number of distinct closed loops, where as in definition \ref{cldef} each closed loop contains edges $(v_1,w_1),\ldots,(v_l,w_l)$ such that $w_i$ is paired with $v_{i+1}$ (identifying $v_{l+1}$ with $v_1$) with the local ordering assigning the same index of the tensor to the edge $(v_i,w_i)$ at $w_i$ as it assigns to edge $(v_{i+1},w_{i+1})$ at $v_{i+1}$. Note that for a partial pairing, some edges might not be in a closed loop. \end{definition} \begin{lemma} \label{create} Consider direct partial pairing $\theta$ determining direct subnetwork ${\cal N}_c'$ and let $\pi'$ be a pairing of ${\cal N}_c'$. Then \begin{equation} C(\Pi(\pi',\theta))=C(\pi')+C(\theta).
\end{equation} In the special case that ${\cal N}_c'$ is a one-step direct subnetwork, $C(\theta) \leq N_E(v,w)$, where $N_E(v,w)$ is the number of edges from $v$ to $w$, with $v,w$ as in definition \ref{dsnDef}; this is an equality if the ordering is such that all of these edges correspond to the same index at $v$ as they do at $w$. \end{lemma} When applying lemma \ref{create}, we will say below that the pairing of vertices $v,w$ ``creates $N_E(v,w)$ closed loops'' or that the pairing $\theta$ ``creates $C(\theta)$ closed loops''. \begin{lemma} \label{ndsp} Let $\pi$ be a pairing of a closed tensor network ${\cal N}_c$. Assume that there is no one-step direct subnetwork ${\cal N}_c'$ with pairing $\pi'$ such that $\pi$ is induced by $\pi'$. Then, \begin{equation} C(\pi)\leq N_E({\cal N}_c)/2, \end{equation} where $N_E({\cal N}_c)$ is the number of edges in tensor network ${\cal N}_c$. \begin{proof} Every closed loop for pairing $\pi$ must be composed of at least two edges (if it is composed of one edge, then pairing those two vertices defines a one-step direct subnetwork). \end{proof} \end{lemma} \begin{lemma} \label{continue} Let ${\cal N}_c={\cal N}_c({\rm tr}\Bigl((L^\dagger L)^k\Bigr))$. Let $G$ be the graph corresponding to this tensor network. Let ${\cal N}_c'$ be a direct subnetwork of ${\cal N}_c$. Let $\pi'$ be the partial pairing defined by ${\cal N}_c'$ (if ${\cal N}_c'$ consists only of closed loops, then $\pi'$ is a pairing). Then, labelling the vertices of ${\cal N}_c$ by pairs $(v;\sigma)$ with $\sigma\in \{1,...,2k\}$ as above, the partial pairing $\pi'$ only pairs $(v;\sigma)$ with $(w;\tau)$ for $v=w$ and $\sigma$ odd and $\tau$ even.
Further, (*) if a pair of vertices $(x;\mu)$ and $(y;\nu)$ in ${\cal N}_c'$ are connected by an edge in ${\cal N}_c'$, then either $x=y$ and $\mu \neq \nu \mod 2$ and the local ordering is such that the edge is assigned the same index of the tensor at $(x;\mu)$ as it is at $(y;\nu)$, or $\mu=\nu \mod 2$ and there is an edge $(x,y)$ in $G$ and the local orderings agree: if the ordering assigns the $j$-th index of the tensor to edge $(x,y)$ at $x$, it also assigns the $j$-th index of the tensor to the edge $((x;\mu),(y;\nu))$ at $(x;\mu)$, and similarly for $y$ and $(y;\nu)$. \begin{proof} The fact that $\sigma$ and $\tau$ have different parity mod $2$ follows because the odd and even vertices carry tensors ${\cal T}$ and $\overline {\cal T}$, respectively. The statement (*) in the last paragraph of the claim of the lemma can be established inductively. The base case is that ${\cal N}_c'={\cal N}_c$, where (*) follows trivially. If ${\cal N}_c'$ is a direct subnetwork obeying (*) and ${\cal N}_c''$ is a one-step direct subnetwork, one can check case-by-case that ${\cal N}_c''$ obeys (*). Once (*) is established, the fact that the pairing only pairs $(v;\sigma)$ with $(w;\tau)$ for $v=w$ and $\sigma$ odd and $\tau$ even follows inductively, since one only pairs vertices connected by an edge. \end{proof} \end{lemma} \begin{figure} \includegraphics[width=2.0in]{figQi.pdf} \caption{Example of paths $Q(i)$ as constructed in lemma \ref{ublemma}. The two thickened solid lines each represent such a path. For this network, the choice of paths $P(i)$ is non-unique.} \label{figQi} \end{figure} \begin{lemma} \label{ublemma} Let ${\cal N}_c={\cal N}_c({\rm tr}\Bigl((L^\dagger L)^{k}\Bigr) )$. Then, for any direct pairing $\pi$, \begin{equation} C(\pi)\leq k |E|-(k-1)MC(G). \end{equation} For any pairing $\pi$ that is not a direct pairing, \begin{equation} C(\pi)\leq k |E|-(k-1)MC(G) -1.
\end{equation} Finally, for any graph $G$, $n_{\rm max}$ is bounded by an exponentially growing function of $k$. \begin{proof} First consider the case that $\pi$ is a direct pairing. Consider a maximal flow on the graph $G$, giving each edge of the graph capacity $1$. By the max-flow/min-cut theorem, the flow is equal to the min cut\cite{ct1,ct2}. Further, the flow on each edge in a maximal flow can be chosen to be an integer, and hence equals $0$ or $1$ on every edge. Consider the set of edges on which the flow equals $1$. This set defines $MC(G)$ edge-disjoint paths from input to output. Call these paths $P(1),P(2),\ldots,P(MC(G))$. Let $v_{i,1},v_{i,2},\ldots,v_{i,l(i)}$ denote the sequence of vertices in $P(i)$, as the path is traversed from input to output, where $l(i)$ is the total number of vertices in the path $P(i)$. Each path in the graph defines a closed path $Q(i)$ in the network: the path is traversed backwards for each network corresponding to $L^\dagger$ and forward for each network corresponding to $L$, so that it forms a closed path traversing $2kl(i)$ vertices; i.e., the path $Q(i)$ traverses vertices $(v_{i,1};1), (v_{i,2};1), \ldots, (v_{i,l(i)};1), (v_{i,l(i)};2), \ldots, (v_{i,2};2), (v_{i,1};2), \ldots$. See Fig.~\ref{figQi} for an example. If $P(i)$ has $e_i$ edges in the graph, including $1$ input edge and $1$ output edge, then the number of edges $E_i$ in $Q(i)$ is equal to $2k (e_i-1)$. By lemma \ref{continue}, for a direct pairing, for every such path, vertex $(v_{i,b};\sigma)$ is paired with $(v_{i,b};\sigma')$ for some $\sigma'$. So, $\pi$ pairs vertices in $Q(i)$ with other vertices in $Q(i)$. As noted above, in the case of a graph with degree $d=2$, a direct pairing gives a rainbow diagram. Here, if we consider the subgraph containing only vertices in $Q(i)$, we have a graph with degree $2$ and again pairing $\pi$ defines a ``rainbow diagram''.
Hence, the number of closed loops formed by edges on the path is equal to $$\frac{1}{2}E_i+1=k e_i-(k-1).$$ (One can derive this number of closed loops using the known result for rainbow diagrams, or one can also derive it inductively: given a network with degree $2$ containing $1$ closed loop with $E_i>2$ edges, pairing two vertices $v,w$ gives a one-step direct subnetwork with $2$ fewer edges and creates $N_E(v,w)=1$ closed loops, while for $E_i=2$, the pairing gives two closed loops.) Hence, the total number of closed loops formed in all such paths in the network is equal to $$k\sum_{i=1}^{MC(G)} e_i -(k-1) MC(G).$$ Let $Q_E$ be the set of edges in ${\cal N}_c$ which are in some path $Q(i)$ and $Q_V$ be the set of vertices in ${\cal N}_c$ which are in some path $Q(i)$. We now count the number of closed loops which do not contain any edges in $Q_E$; note that for a direct pairing, every closed loop either contains only edges from $Q_E$ or contains no edge in $Q_E$. Let $P^\perp$ be the set of edges in $G$ which are not in a path $P(i)$, so $|P^\perp|=|E|-\sum_i e_i$. Note that $|S|-MC(G)$ of the edges in $P^\perp$ are input edges and $|T|-MC(G)$ of such edges are output edges so that there are $|P^\perp|-(|S|-MC(G))-(|T|-MC(G))$ edges in $G$ which are not in a path $P(i)$ and which are closed edges. Every edge in $P^\perp$ which is closed corresponds to $2k$ edges in the network, while every edge in $P^\perp$ which is open corresponds to only $k$ edges in the network.
The number of closed loops which can be formed by these edges is at most $1/2$ the number of edges which connect ${\cal T}$ to ${\cal T}$ (i.e., corresponding to the closed edges in $G$) or $\overline {\cal T}$ to $\overline {\cal T}$ (i.e., also corresponding to closed edges), plus the number of edges connecting ${\cal T}$ to $\overline {\cal T}$ (i.e., corresponding to the open edges in $G$) so that the total number of such closed loops is at most $(1/2) (2k) (|P^\perp|-(|S|-MC(G))-(|T|-MC(G)))+k (|S|-MC(G))+k(|T|-MC(G))=k |P^\perp|$. Hence, the total number of closed loops including edges in $Q_E$ and edges not in $Q_E$ is at most \begin{equation} k\sum_i e_i-(k-1) MC(G)+k|P^\perp|=k|E|-(k-1) MC(G). \end{equation} Now, suppose that $\pi$ is not a direct pairing. We show that $C(\pi)\leq k |E|-(k-1)MC(G) -1$. Before giving the proof, we give some motivation. The basic idea is that if we consider the edges not in $Q_E$ then no pairing can do better than the direct pairing: the direct pairing has one loop of length $1$ for each edge not in $Q_E$ corresponding to an open edge in $G$ and has one loop of length $2$ for each edge not in $Q_E$ not corresponding to an open edge in $G$, and that is the shortest such loops can be. On the other hand, when we consider the edges in $Q_E$, we know that a rainbow diagram gives the optimal pairing for a network with degree $d=2$ and so any other pairing must be worse. To do this in detail, we will use lemma \ref{ndsp}. Let $\theta$ be a direct partial pairing of ${\cal N}_c$ determining a direct subnetwork ${\cal N}_c'$ such that $\pi$ is induced by a pairing $\pi'$ of ${\cal N}_c'$ (i.e., $\pi=\Pi(\pi',\theta)$) and such that there is no one-step direct subnetwork ${\cal N}_c''$ of ${\cal N}_c'$ with pairing $\pi''$ such that $\pi'$ is induced by $\pi''$. By lemma \ref{ndsp}, $C(\pi')\leq N_E({\cal N}_c')/2$, so by lemma \ref{create}, $C(\pi)\leq N_E({\cal N}_c')/2+C(\theta)$.
We wish to estimate $C(\theta)$ and to estimate $N_E({\cal N}_c)-N_E({\cal N}_c')$. Recall that when we define a one-step direct subnetwork, the number of edges changes for two reasons: we remove $N_E(v,w)$ edges (and create $N_E(v,w)$ loops) but we also combine other pairs of edges into a single edge. Let $C$ be the set of these closed loops created by $\theta$. Let $C_E$ be the set of edges in a loop in $C$. Each closed loop in $C$ contains either only edges in $Q_E$ or contains no edges in $Q_E$. Consider the loops in $C$ containing edges not in $Q_E$. Let $R_o$ be the set of edges in these loops which correspond to open edges in $G$ and let $R_i$ be the set of edges in these loops which do not correspond to open edges in $G$, so that the number of loops in $C$ containing edges not in $Q_E$ is bounded by $|R_o|+(1/2) |R_i|$. Consider the loops in $C$ containing edges in $Q_E$. Each such loop contains edges from at most one path $Q(i)$. In such a path, there are $2k (e_i-1)$ total edges in ${\cal N}_c$. For each path $Q(i)$ we define a path in ${\cal N}_c'$ in the obvious way, so that the path in ${\cal N}_c'$ consists of the vertices in ${\cal N}_c'$ which are in the path in ${\cal N}_c$. If all edges in $Q(i)$ are in a loop in $C$ then there are $k(e_i-1)+1$ closed loops in $C$ containing edges in $Q(i)$. On the other hand, if not all edges in $Q(i)$ are in such a loop, then there are fewer than $k(e_i-1)+1$ closed loops and the number of closed loops is equal to $(1/2)(E_i-E'_i)$, where $E_i$ is the number of edges in $Q(i)$ and $E'_i$ is the number of edges in the corresponding path in ${\cal N}_c'$. So, \begin{equation} N_E({\cal N}_c)-N_E({\cal N}_c')\geq |R_o|+|R_i|+\sum_i (E_i-E'_i). \end{equation} Let $q$ be the number of paths $Q(i)$ such that $\pi'$ pairs all vertices in $Q(i)$. So, \begin{eqnarray} C(\theta) &=& |R_o|+(1/2) |R_i|+(1/2)\sum_i (E_i-E'_i)+q \\ \nonumber &\leq & \frac{N_E({\cal N}_c)-N_E({\cal N}_c')}{2}+|R_o|/2+q.
\end{eqnarray} So, \begin{equation} N_E({\cal N}_c')/2+C(\theta) \leq N_E({\cal N}_c)/2 + |R_o|/2 + q. \end{equation} The number of open edges in $G$ is $|S|+|T|$, of which $|S|+|T|-2MC(G)$ edges are not in a path $Q(i)$, so $|R_o|\leq k(|S|+|T|-2MC(G))$. So, \begin{eqnarray} N_E({\cal N}_c')/2+C(\theta) &\leq& N_E({\cal N}_c)/2 +k(|S|+|T|)/2-kMC(G) + q \\ \nonumber &=& k|E|-kMC(G)+q. \end{eqnarray} So, unless $q=MC(G)$, we have established the desired bound on $C(\pi)$. So, suppose that $q=MC(G)$ and suppose that $C(\pi)=C_{\rm max}$. We will show that $\pi$ is a direct pairing. We will use the assumption that the network is connected. Since $C(\pi)=C_{\rm max}$, every edge in ${\cal N}_c$ which is not in a loop created by $\theta$ must either be in a loop of length $1$ (if it corresponds to an open edge in $G$) or in a loop of length $2$ (otherwise). Consider a vertex $(v;\sigma)$ which is not paired by $\theta$ and which is attached to an open edge. Since this edge must be in a closed loop of length $1$, $(v;\sigma)$ must be paired with $(v;\sigma \pm 1)$, depending on $\sigma \,{\rm mod} \, 2$ and on whether it is an input edge or an output edge. Thus, we can pair $(v;\sigma)$ with $(v;\sigma \pm 1)$ in ${\cal N}_c'$ giving a further one-step direct subnetwork ${\cal N}_c''$ such that a pairing $\pi''$ on ${\cal N}_c''$ induces $\pi'$. So, we can assume that no such vertices exist. Consider instead a vertex $(v;\sigma)$ which is not paired by $\theta$ and which neighbors a vertex $(w;\sigma)$ which is paired by $\theta$. The pairing $\theta$ is a direct partial pairing so it pairs $(w;\sigma)$ with $(w;\tau)$ for some $\tau$. Then, since the edge $((v;\sigma),(w;\sigma))$ must be in a loop of length $2$ in $\pi$, the pairing $\pi$ must pair $(v;\sigma)$ with $(v;\tau)$. So, pairing these vertices would define a one-step direct subnetwork ${\cal N}_c''$.
So, there can be no vertices not paired by $\theta$ which are attached to open edges or which neighbor a vertex paired by $\theta$; so, since the network is connected, all vertices are paired by $\theta$ and $\pi$ is a direct pairing. We now bound $n_{\rm max}$ to bound $c(G,k)$. Let $\pi$ be a direct pairing with $C(\pi)=C_{\rm max}$. In each path $Q(i)$, the direct pairing must define a rainbow diagram. Further, since every edge not in $Q_E$ is either in a loop of length $1$ (if it is an open edge) or a loop of length $2$ (otherwise), the pairing of the vertices in $Q_V$ fully determines $\pi$. So, we can bound $n_{\rm max}$ by bounding the number of possible pairings of the vertices in $Q_V$. For each path $Q(i)$, the pairings of the vertices in that path define some rainbow diagram. If $P(i)$ has $l(i)$ vertices, then $Q(i)$ has $2k l(i)$ vertices. There is some restriction on the pairing of these vertices, as one can only pair vertices corresponding to ${\cal T}$ to those corresponding to $\overline {\cal T}$. Ignoring this restriction to obtain an upper bound, we ask for the number of possible rainbow diagrams pairing $2kl(i)$ vertices (this is a problem that arises in estimating the $2kl(i)$-th moment in the Gaussian Orthogonal Ensemble, where one considers random real symmetric matrices). The number of such rainbow diagrams is at most exponential\cite{rainbowbound} in $2kl(i)$ and so the product over paths $Q(i)$ of the number of such rainbow diagrams for each path is at most exponential in $2k\sum_i l(i)\leq 2k|V|$, showing the desired result. The number $n_{\rm max}$ may be less than the product of these for two reasons: first, not all direct pairings have $C(\pi)=C_{\rm max}$ and second, if two paths share a vertex then this imposes some relation between the pairings on those two paths.
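The integral max-flow step used at the start of this proof (choose a maximal flow with unit capacities, then read off $MC(G)$ edge-disjoint input-to-output paths from the edges carrying flow $1$) can be sketched in code. This is only an illustration on a hypothetical toy graph; the vertex labels and graph are not from the paper.

```python
from collections import deque

def max_flow_unit(edges, source, sink):
    """Ford-Fulkerson with unit capacities on a simple directed graph:
    returns the flow value and the edges carrying flow 1 (an integral
    maximal flow, as guaranteed by the max-flow/min-cut theorem)."""
    cap, adj = {}, {}
    for u, v in edges:
        cap[(u, v)] = 1
        cap.setdefault((v, u), 0)
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in adj.get(u, []):
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            break
        v = sink
        while parent[v] is not None:  # augment by one unit
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1
    return flow, [(u, v) for (u, v) in edges if cap[(u, v)] == 0]

def flow_paths(used, source, sink):
    """Decompose the unit-flow edges into edge-disjoint paths,
    using flow conservation at every intermediate vertex."""
    out = {}
    for u, v in used:
        out.setdefault(u, []).append(v)
    paths = []
    while out.get(source):
        path = [source]
        while path[-1] != sink:
            path.append(out[path[-1]].pop())
        paths.append(path)
    return paths

# toy graph: two edge-disjoint input-to-output paths plus one unused edge
edges = [(0, 1), (0, 2), (1, 3), (2, 4), (3, 5), (4, 5), (1, 4)]
value, used = max_flow_unit(edges, 0, 5)
paths = flow_paths(used, 0, 5)
```

Here `value` plays the role of $MC(G)$ and `paths` the role of the paths $P(i)$; the decomposition is not unique, just as noted in the caption of Fig.~\ref{figQi}.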
\end{proof} \end{lemma} {\it Proof of Theorem \ref{mainth}} By lemmas \ref{lblemma},\ref{ublemma}, for ${\cal N}_c={\cal N}_c({\rm tr}\Bigl((L^\dagger L)^{k}\Bigr) )$, we have $C_{\max}= k |E|-(k-1)MC(G)$. This implies Eq.~(\ref{main2}) in theorem \ref{mainth2} with $c(G,k)=n_{\rm max}$ (we show below that $c(G,k)$ does not depend on $O$; we will not need that to prove theorem \ref{mainth}). Using Eq.~(\ref{main2}) for $k=2$ to estimate $E[{\rm tr}\Bigl((L^\dagger L)^{2}\Bigr)]$ and using lemma \ref{sqlemma} to estimate $E[{\rm tr}(L^\dagger L)]$, we find that \begin{equation} \frac{E[{\rm tr}(L^\dagger L)]^2}{E[{\rm tr}\Bigl((L^\dagger L)^2\Bigr)]}\geq \frac{1}{c(G,2)} N^{MC(G)}-{\cal O}(1/N). \end{equation} So, by lemma \ref{exists}, theorem \ref{mainth} follows. {\it Proof of Theorem \ref{mainth2}} We have shown Eq.~(\ref{main2}) in theorem \ref{mainth2}. To show that Eq.~(\ref{main2ind}) holds for the independent ensemble, note that in that ensemble, the only pairings allowed are those in which we pair $(v;\sigma)$ with $(w;\tau)$ for $v=w$. However, by lemma \ref{continue} all direct pairings have that property and by lemma \ref{ublemma} the only pairings $\pi$ with $C(\pi)=C_{\rm max}$ are direct pairings. To show that $c(G,k)$ indeed does not depend on $O$, one can either note that the possible direct pairings do not depend on $O$ or one can note that in the ensemble in which tensors are chosen independently, one can freely change the ordering at any vertex without altering the expectation values. The bound on $c(G,k)$ follows from the bound on $n_{\rm max}$ in lemma \ref{ublemma}. \section{Proof of Theorem \ref{strongth}} \label{proofstrongth} We now prove theorem \ref{strongth}. Let $f(x)$ be some smooth function defined for $0 \leq x < \infty$ with $0 \leq f(x) \leq 1$, $f(x)=0$ for $x\geq 2$, and $f(x)=1$ for $0 \leq x\leq 1$ and let $f(x)$ decrease monotonically with increasing $x$.
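One explicit choice of such a function $f$ is the standard smooth transition built from $g(t)=e^{-1/t}$; this is only an illustration (the proof uses nothing beyond the stated properties of $f$):

```python
import math

def g(t):
    # C-infinity function: identically 0 for t <= 0, positive for t > 0
    return math.exp(-1.0 / t) if t > 0 else 0.0

def f(x):
    # smooth cutoff: f(x) = 1 for 0 <= x <= 1, f(x) = 0 for x >= 2,
    # and f decreases monotonically on [1, 2]
    return g(2.0 - x) / (g(2.0 - x) + g(x - 1.0))

# check the required properties on a grid
assert f(0.0) == 1.0 and f(1.0) == 1.0
assert f(2.0) == 0.0 and f(3.0) == 0.0
xs = [i / 100.0 for i in range(301)]
assert all(f(xs[i]) + 1e-12 >= f(xs[i + 1]) for i in range(len(xs) - 1))
```

The denominator never vanishes, since $g(2-x)$ and $g(x-1)$ cannot both be zero.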
Let $P_N(\epsilon)=\int f(x/\epsilon) {\rm d}\mu_N$, with $\epsilon$ chosen later. We choose $f(x)$ to be smooth so that as $N\rightarrow \infty$, $P_N(\epsilon)$ converges to some limit $Pr(\epsilon,G)=\int f(x/\epsilon) {\rm d}\mu$, where we explicitly put the dependence on $G$ in parentheses as we will deal with this probability for several different graphs below. We have \begin{equation} \int f(x/\epsilon) {\rm d}\mu_N\geq 1-\frac{E[{\rm rank}(L)]}{QMC(G,N)}, \end{equation} for all $\epsilon>0$. So, $Pr(\epsilon,G) \geq \liminf_{N \rightarrow \infty} 1-\frac{E[{\rm rank}(L)]}{QMC(G,N)}$ for all $\epsilon > 0$. So, if we can show that $Pr(\epsilon,G)\rightarrow 0$ as $\epsilon\rightarrow 0^+$, then this establishes theorem \ref{strongth}. For a given graph $G$, let $Pr(G)$ denote the limit of $Pr(\epsilon,G)$ as $\epsilon\rightarrow 0^+$. We now prove that $Pr(G)=0$ using induction on the number of vertices in the graph. The base case, a graph with no vertices, obviously has $Pr(G)=0$ since all edges must be identity edges. Otherwise, given a general graph, we will either find a min cut which cuts it into two graphs with fewer vertices and apply the inductive assumption, or, if no such cut exists, we will prove that $Pr(G)=0$ by estimating moments. We need the following: \begin{lemma} \label{prodlemma} Let $A$ be an $N_1$-by-$N_2$ matrix and $B$ be an $N_2$-by-$N_3$ matrix. Assume that $A$ has at least $r_A$ singular values which are greater than or equal to $\epsilon_A$ for some $\epsilon_A$. Assume that $B$ has at least $r_B$ singular values which are greater than or equal to $\epsilon_B$ for some $\epsilon_B$. Then, $AB$ has at least $r_A+r_B-N_2$ singular values which are greater than or equal to $\epsilon_A \epsilon_B$. \begin{proof} Let $P_B$ project onto the eigenspace of $B^\dagger B$ with eigenvalue greater than or equal to $\epsilon_B^2$. By assumption, $P_B$ has rank at least $r_B$. 
Since the nonzero eigenvalues of $B^\dagger B$ and $B B^\dagger$ agree, the projector $\tilde P_B$ onto the eigenspace of $B B^\dagger$ with eigenvalue greater than or equal to $\epsilon_B^2$ also has rank at least $r_B$. We have $$A B B^\dagger A^\dagger \geq \epsilon_B^2 A \tilde P_B A^\dagger.$$ The non-zero eigenvalues of $A \tilde P_B A^\dagger$ are the same as the nonzero eigenvalues of $\tilde P_B A^\dagger A \tilde P_B$. Let $P_A$ project onto the eigenspace of $A^\dagger A$ with eigenvalue greater than or equal to $\epsilon_A^2$ so that $$\tilde P_B A^\dagger A \tilde P_B \geq \epsilon_A^2 \tilde P_B P_A \tilde P_B.$$ By assumption, $P_A$ has rank at least $r_A$, and both $P_A$ and $\tilde P_B$ act on an $N_2$-dimensional space. The operator $\tilde P_B P_A \tilde P_B$ must have at least $r_A+r_B-N_2$ eigenvalues equal to $1$ (this can be shown by using Jordan's lemma to bring both projectors $P_A,\tilde P_B$ to a block $2$-by-$2$ form). So, $\tilde P_B A^\dagger A \tilde P_B$ has at least $r_A+r_B-N_2$ eigenvalues greater than or equal to $\epsilon_A^2$ and so $(AB)(AB)^\dagger=A B B^\dagger A^\dagger$ has at least $r_A+r_B-N_2$ eigenvalues greater than or equal to $\epsilon_A^2 \epsilon_B^2$. \end{proof} \end{lemma} Consider a tensor network ${\cal N}$ and corresponding graph $G$ and linear operator $L$. Let $C$ be a min cut. This cut splits the tensor network into two tensor networks ${\cal N}_1,{\cal N}_2$ with corresponding linear operators $L_1,L_2$ so that $L=L_2 L_1$ and splits $G$ into two graphs, $G_1,G_2$. Recall that $$K=\Bigl( N^{|E|-MC(G)} \Bigr)^{-1/2} L$$ and similarly $$K_1=\Bigl( N^{|E_1|-MC(G_1)}\Bigr)^{-1/2} L_1, \, K_2=\Bigl( N^{|E_2|-MC(G_2)}\Bigr)^{-1/2} L_2.$$ Since $MC(G)=MC(G_1)=MC(G_2)$ and $|E_1|+|E_2|=|E|+MC(G)$ (this holds because the $MC(G)$ edges in the cut set become both input edges of $G_2$ and output edges of $G_1$), $K=K_2 K_1$. If both $G_1,G_2$ have at least $1$ vertex, then $G_1,G_2$ both have fewer vertices than $G$ and so by the inductive assumption, $Pr(G_1)=Pr(G_2)=0$. \begin{lemma} \label{l2lemma} In this case (i.e., where $Pr(G_1)=Pr(G_2)=0$ and where the cut splitting $G$ into $G_1,G_2$ is a min cut), $Pr(G)=0$. \begin{proof} Since $Pr(G_1)=0$, for any $\delta>0$, there is an $\epsilon_1>0$ such that $Pr(\epsilon_1,G_1)\leq \delta/2$.
Since $\int f(x/\epsilon_1) {\rm d}\mu_N$ (with $\mu_N$ here the singular value distribution for $G_1$) converges to $Pr(\epsilon_1,G_1)$, there is an $N_0$ such that for all $N \geq N_0$, the difference $|\int f(x/\epsilon_1) {\rm d}\mu_N-Pr(\epsilon_1,G_1)|$ is bounded by $\delta/2$. So, there is an $N_0$ such that for all $N\geq N_0$, the probability that a randomly chosen singular value of $L_1$ for a random choice of tensors will be smaller than $\epsilon_1$ is bounded by $\delta$. So, the probability that, for a random choice of tensors in ${\cal N}_1$, the operator $L_1$ will have more than $\sqrt{\delta} QMC(G_1,N)$ singular values smaller than $\epsilon_1$ is bounded by $\sqrt{\delta}$ for all sufficiently large $N$. Since $Pr(G_2)=0$ also, there is an $\epsilon_2>0$ such that the probability that, for a random choice of tensors in ${\cal N}_2$, the operator $L_2$ will have more than $\sqrt{\delta} QMC(G,N)$ singular values smaller than $\epsilon_2$ is bounded by $\sqrt{\delta}$ for all sufficiently large $N$. So, with probability at least $1-2\sqrt{\delta}$, $L_1$ has at least $(1-\sqrt{\delta}) QMC(G,N)$ singular values larger than $\epsilon_1$ and $L_2$ has at least $(1-\sqrt{\delta}) QMC(G,N)$ singular values larger than $\epsilon_2$. So by lemma \ref{prodlemma}, with probability at least $1-2\sqrt{\delta}$, $L$ has at least $(1-2\sqrt{\delta}) QMC(G,N)$ singular values larger than $\epsilon_1 \epsilon_2$. Since for any $\delta>0$ such $\epsilon_1,\epsilon_2>0$ exist, the result follows. \end{proof} \end{lemma} Note that when we cut a graph, the new graph might have some open edges which connect to {\it no} vertices inside the graph. Such edges are open edges with one open end in $S$ and one in $T$. For example, the graph in Fig.~\ref{figconn} can be cut into two graphs, as shown in Fig.~\ref{figconnsplit}. We call these edges ``identity edges", because the linear operator for such a graph is the identity operator (on the degree of freedom on that edge) tensored with the linear operator for the rest of the graph.
Such identity edges count as a single edge when determining $|E|$, and Eq.~(\ref{main2}) still holds for a network with identity edges; one can verify this by noting that adding an identity edge multiplies the trace of any moment of $L^\dagger L$ by $N$ and it increases both $MC(G)$ and $|E|$ by $1$, leaving $|E|-MC(G)$ unchanged. \begin{figure} \includegraphics[width=0.5in]{fignetconnsplit.pdf} \caption{A min cut of the network shown in Fig.~\ref{figconn}. Thin curving vertical line shows the cut.} \label{figconnsplit} \end{figure} Suppose instead that $G$ has no min cuts other than possibly the cuts $\overline S=S, \overline T=\overline V \setminus S$ or $\overline S=\overline V \setminus T,\overline T=T$; that is, the only min cuts will cut $G$ into two graphs, one of which has no vertices. We consider three cases: (i) $|S|=MC(G)<|T|$; (ii) $|T|=MC(G)<|S|$; (iii) $|S|=|T|=MC(G)$. Cases (i) and (ii) can be related by interchanging $L$ and $L^\dagger$ so we only do cases (i,iii) and prove in both cases that $Pr(G)=0$. For both cases, we will use a method of defining a new network by removing vertices from a network. \begin{definition} Consider a tensor network ${\cal N}$. We define a new network by ``removing a vertex $v$ as input from the network" as follows. The vertex $v$ is removed. The input edges previously attached to $v$ are also removed. The edges attached to $v$ which were not input edges are not removed; if they were not open edges, then they become input edges and the end attached to $v$ becomes the open end of the edge. If they were output edges, then they become identity edges. Similarly, we define ``removing a vertex $v$ as output from the network"; this definition is the same as above except ``input" and ``output" are interchanged everywhere. \end{definition} In case (i), we now show that the constant $c(G,k)$ in Eq.~(\ref{main2}) is equal to $1$. Thus, in this case, the limiting distribution $\mu$ is a $\delta$-function at $1$ and so $Pr(G)=0$.
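The claim above that an identity edge simply multiplies every moment by $N$ can be checked directly on a small example: if $L'=L\otimes I_N$, then ${\rm tr}\bigl((L'^\dagger L')^k\bigr)=N\,{\rm tr}\bigl((L^\dagger L)^k\bigr)$. Below is a minimal sketch; the matrix entries are arbitrary, and real entries are used so the adjoint is just the transpose.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

def kron(A, B):
    # Kronecker product A ⊗ B
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)] for i in range(len(A) * p)]

def trace_power(L, k):
    # tr((L^T L)^k) for a real matrix L
    M = matmul(transpose(L), L)
    P = M
    for _ in range(k - 1):
        P = matmul(P, M)
    return sum(P[i][i] for i in range(len(P)))

L = [[1.0, 2.0], [3.0, 4.0]]   # arbitrary small example
N = 3
I_N = [[1.0 if i == j else 0.0 for j in range(N)] for i in range(N)]
Lp = kron(L, I_N)              # L with an identity edge adjoined

for k in (1, 2, 3):
    assert abs(trace_power(Lp, k) - N * trace_power(L, k)) < 1e-6
```

Since $|E|$ and $MC(G)$ both increase by $1$, the normalization $N^{|E|-MC(G)}$ in Eq.~(\ref{main2}) is unchanged, consistent with the extra factor of $N$ in each trace.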
We first need the following lemma which we will also use in case (iii). Remark: even though we are proving a combinatoric result (the value of a certain constant counting pairings), we do this in a linear algebraic fashion, by estimating the trace of a certain product of operators. \begin{lemma} \label{lalemma} Let ${\cal N}(1),{\cal N}(2),\ldots,{\cal N}(l)$ be tensor networks with open edges, with ${\cal N}(i)$ having the same number of output edges as ${\cal N}(i+1)$ has input edges (identifying ${\cal N}(l+1)$ with ${\cal N}(1)$). Let $L_1,L_2,\ldots,L_l$ be the corresponding linear operators and let $G(1),\ldots$ be the corresponding graphs, with edge sets $E(1),\ldots$. Consider the tensor network ${\cal N}_c$ which computes the trace ${\rm tr}(L_l L_{l-1} \ldots L_1)$. This tensor network has \begin{equation} C_{\rm max} \leq \frac{1}{2}\sum_{i=1}^l \Bigl( |E(i)|-MC(G(i))\Bigr)+{\rm min}_i MC(G(i)). \end{equation} \begin{proof} By corollary \ref{cor2}, for all $n,\epsilon_i>0$, for all sufficiently large $N$, there is a $c_i$ such that the probability that the largest singular value of $L_i$ is $\geq xN^{(|E(i)|-MC(G(i)))/2+\epsilon_i}$ is bounded by $c_i x^{-n}$. The largest singular value of $L_l L_{l-1} \ldots L_1$ is bounded by the product over $i$ of the largest singular values of $L_i$, so for any $m$ there is a constant $c$ such that the probability that the largest singular value of $L_l L_{l-1} \ldots L_1$ is $\geq z N^{\sum_i ((|E(i)|-MC(G(i)))/2+\epsilon_i)}$ is bounded by $c z^{-m}$. (To show this bound, one must pick each $n$ sufficiently large above.) Choosing $\epsilon_i=\epsilon/l$, it follows that for all $\epsilon>0$, for all $m$, there is a constant $c$ such that the probability the largest singular value of $L_l L_{l-1} \ldots L_1$ is $\geq z N^{(1/2)\sum_i (|E(i)|-MC(G(i)))+\epsilon}$ is bounded by $c z^{-m}$.
The operator $L_l L_{l-1} \ldots L_1$ has rank at most $N^{{\rm min}_i MC(G(i))}$ so the probability that the trace ${\rm tr}(L_l L_{l-1} \ldots L_1)$ is $\geq z N^{(1/2)\sum_i (|E(i)|-MC(G(i)))+{\rm min}_i MC(G(i))+\epsilon}$ is bounded by $c z^{-m}$. Choosing $m$ sufficiently large, this implies that $E[{\rm tr}(L_l L_{l-1} \ldots L_1)]$ is ${\cal O}(N^{(1/2)\sum_i (|E(i)|-MC(G(i)))+{\rm min}_i MC(G(i))+\epsilon})$ for all $\epsilon>0$. Choosing $\epsilon<1$, this implies the result. \end{proof} \end{lemma} \begin{lemma} \label{aBlemma} For a graph $G$ in case (i), $c(G,k)=1$. \begin{proof} Let ${\cal N}_c={\cal N}_c({\rm tr}\Bigl((L^\dagger L)^k\Bigr))$. We show that $n_{\rm max}=1$ for all $k$. We only need to consider the case $k>1$ (for $k=1$, lemma \ref{sqlemma} shows $n_{\rm max}=1$). Lemma \ref{lblemma} constructs a direct pairing $\pi$ with $C(\pi)=C_{\rm max}=k|E|-(k-1)MC(G)$. There is only one min cut of the graph, so this lemma constructs only one such direct pairing. Let $\pi'$ be a direct pairing different from that $\pi$. Since $\pi' \neq \pi$, there must be some vertex $(v;\sigma)$ for odd $\sigma$ which is attached to an input edge such that $\pi'$ pairs $(v;\sigma)$ with $(v;\sigma-1)$. For example, see Fig.~\ref{figSlessT}. Consider the sum over all pairings $\pi''$ which pair $(v;\sigma)$ with $(v;\sigma-1)$, weighted by $N^{C(\pi'')}$. This is the expectation value of the one-step direct subnetwork made by pairing $(v;\sigma)$ with $(v;\sigma-1)$. Let $L$ be the linear operator defined above, and let $M$ be the linear operator corresponding to the network with vertex $v$ removed as input. Then, the contraction of this one-step direct subnetwork is equal to ${\rm tr}\Bigl(( L^\dagger L )^{k-1} M^\dagger M\Bigr)$. The graph $G_M$ corresponding to $M$ has $MC(G_M) \geq MC(G)+1$ (if not, one has a min cut of $G$ with partition $\overline V = \overline S \cup \overline T$ with $\overline S=S \cup \{v\}$).
Let there be $N_E$ edges connecting $(v;\sigma)$ with $(v;\sigma-1)$ so that $G_M$ has $|E|-N_E$ edges. Hence by lemma \ref{lalemma}, the network ${\cal N}_c'$ has $C_{\rm max}\leq k|E|-(k-1) MC(G)-N_E-1$ and so, all pairings $\pi'$ of ${\cal N}_c$ which pair $(v;\sigma)$ with $(v;\sigma-1)$ have $C(\pi') \leq k|E|-(k-1)MC(G)-1$. \end{proof} \end{lemma} \begin{figure} \includegraphics[width=3in]{figgSlessT.pdf} \caption{An example network for ${\rm tr}(\Bigl(L^\dagger L\Bigr)^2)$, for a graph with $|S|=MC(G)=1$ and $|T|=4$. For a direct pairing for which one does not pair $(v;\sigma)$ with $(v;\sigma-1)$ for any odd $\sigma$, necessarily the pairing is the same as that constructed in lemma \ref{lblemma}.} \label{figSlessT} \end{figure} In case (iii), we show that the constant $c(G,k)$ in Eq.~(\ref{main2}) is independent of the particular graph chosen so long as the graph has at least $1$ vertex. In particular, the constant is the same as that if we consider a graph with $|S|=|T|=MC(G)=1$, with one vertex of degree $2$ and $|E|=2$, with both edges open. This case is the well-known case of random matrix theory studying the eigenvalues of $L^\dagger L$ where $L$ is a random $N$-by-$N$ matrix with independent Gaussian entries; the matrix $L$ is drawn from the chiral Gaussian unitary ensemble. In this case, the limiting distribution is known\cite{chgue1} and is smooth near $0$ so again $Pr(G)=0$. \begin{lemma} For a graph $G$ in case (iii), $c(G,k)$ is independent of $G$ so long as $G$ has at least $1$ vertex. \begin{proof} Let ${\cal N}_c={\cal N}_c({\rm tr}\Bigl((L^\dagger L)^k\Bigr))$. Consider the sum of all pairings $\pi$ in which for some odd $\sigma$, for some vertex $v$ attached to an input edge, $(v;\sigma)$ is paired with $(v;\sigma-1)$ and for some vertex $w$ attached to an output edge $(w;\sigma)$ is paired with $(w;\sigma+1)$. 
This is the expectation value of the direct subnetwork ${\cal N}_c'$ made by pairing $(v;\sigma)$ with $(v;\sigma-1)$ and $(w;\sigma)$ with $(w;\sigma+1)$. Let $L$ be the linear operator defined above, and let $M_1$ be the linear operator corresponding to the network with vertex $v$ removed as output and let $M_2$ be the linear operator corresponding to the network with vertices $v$ removed as input and $w$ removed as output and let $M_3$ be the linear operator corresponding to the network with vertex $w$ removed as input. Then, the contraction of ${\cal N}_c'$ equals ${\rm tr}\Bigl(( L^\dagger L )^{k-2} M_1^{\dagger} M_2 M_3^\dagger L \Bigr)$. Let $G_{M_1},G_{M_2},G_{M_3}$ be the graphs corresponding to $M_1,M_2,M_3$, respectively. We have $MC(G_{M_2}) \geq MC(G)+1$ (if not, then there is a set of $MC(G)$ edges that one can remove from $G_{M_2}$ to disconnect the graph; removing this same set of edges from $G$ will disconnect $G$ giving a min cut which cuts $G$ into two graphs, each of which has at least one vertex). Let there be $N_E(v)$ edges connecting $(v;\sigma)$ with $(v;\sigma-1)$ and $N_E(w)$ edges connecting $(w;\sigma)$ with $(w;\sigma+1)$, so $G_{M_1}$ has $|E|-N_E(v)$ edges, $G_{M_2}$ has $|E|-N_E(v)-N_E(w)$ edges, and $G_{M_3}$ has $|E|-N_E(w)$ edges. Hence, by lemma \ref{lalemma}, ${\cal N}_c'$ has $C_{\rm max}\leq k|E|-(k-1) MC(G)-N_E(v)-N_E(w)-1$, and so all such pairings $\pi$ of ${\cal N}_c$ have $C(\pi)\leq k|E|-(k-1)MC(G)-1$. The above proof was for $\sigma$ odd, but it works for $\sigma$ even also, if one takes the adjoint of all linear operators $L,M_1,M_2,M_3$. So, for a maximal pairing, there is no $\sigma$ such that for some vertex $v$ attached to an input edge, $(v;\sigma)$ is paired with $(v;\sigma-1)$ and for some vertex $w$ attached to an output edge $(w;\sigma)$ is paired with $(w;\sigma+1)$.
For a direct pairing, there must be some $\sigma$ such that for some vertex $v$ attached to an input edge, $(v;\sigma)$ is paired with $(v;\sigma-1)$ or for some vertex $w$ attached to an output edge $(w;\sigma)$ is paired with $(w;\sigma+1)$. So, for that $\sigma$, either for {\it all} $v$, $(v;\sigma)$ is paired with $(v;\sigma-1)$ or for {\it all} $v$, $(v;\sigma)$ is paired with $(v;\sigma+1)$. Suppose, without loss of generality, the first case holds, so that for all $v$, $(v;\sigma)$ is paired with $(v;\sigma-1)$ for that $\sigma$. Define a direct subnetwork of ${\cal N}_c$ by pairing such vertices $v$ in that way; this direct subnetwork is precisely the network ${\cal N}_c({\rm tr}\Bigl((L^\dagger L)^{k-1}\Bigr))$, so one can apply the result above: for some $\sigma$, either for {\it all} $v$, $(v;\sigma)$ is paired with $(v;\sigma-1)$ or for {\it all} $v$, $(v;\sigma)$ is paired with $(v;\sigma+1)$. Iterating this $k$ times, one is left with a network with no vertices. The possible choices (which $\sigma$ and whether one pairs with $\sigma-1$ or $\sigma+1$) are in one-to-one correspondence with rainbow diagrams and do not depend upon $G$. \end{proof} \end{lemma} To better understand this case $|S|=|T|=MC(G)$, consider the network shown in Fig.~\ref{fignocut} as an example of such a network. Imagine generalizing the networks that we consider, so that the $4$ open edges have capacity $N$ but the vertical edge in the figure has some capacity $N'\leq N$. In the case $N'=1$, the tensor network factors into a product of two tensor networks; the linear operator $L$ factors as $L=L_1 \otimes L_2$, where $L_1$ maps the upper input edge to the upper output edge and $L_2$ maps the lower input edge to the lower output edge (upper and lower refer to the position of the edge in the figure).
Then, the singular value spectrum of $L$ is the product of the singular value spectra of $L_1,L_2$ and the limiting distribution of the singular value spectrum of $L$ is {\it not} the same as in the chiral Gaussian unitary ensemble. (The case with $N'=1$ is equivalent to a case with degree $d=2$ but where there is a min cut which splits the graph into two networks each with only $1$ vertex; i.e., simply redraw the graph by removing the vertical edge with $N'=1$.) For general $N'$, the entries of $L$ can be obtained by summing $N'$ independent matrices of the form $L_1 \otimes L_2$. As $N'$ increases, the correlations between entries of $L$ are reduced until eventually when $N'$ is large enough, the singular value spectrum of $L$ is the same as in the chiral Gaussian unitary ensemble. \begin{figure} \includegraphics[width=0.5in]{fignocut.pdf} \caption{A network with $|S|=|T|=MC(G)=2$ and with no min cuts other than those which divide $G$ into two graphs, one of which has no vertices. Input edges are on left and output edges are on right.} \label{fignocut} \end{figure} \section{Variance} \label{sectionvar} Here we collect some results on variance of moments. The first result bounds the variance: \begin{lemma} \label{varlemma} For any linear operator $L$ obtained from a tensor network ${\cal N}_c$ with graph $G$ with $|E|$ edges, for any positive integer $c$, \begin{equation} E[{\rm tr}\Bigl((L^\dagger L)^{k}\Bigr)^c] -E[{\rm tr}\Bigl((L^\dagger L)^{k}\Bigr)]^c={\cal O}\Bigl( N^{c(k |E|-(k-1)MC(G))-1}\Bigr), \end{equation} and \begin{equation} E_{\rm ind}[{\rm tr}\Bigl((L^\dagger L)^{k}\Bigr)^c] -E_{\rm ind}[{\rm tr}\Bigl((L^\dagger L)^{k}\Bigr)]^c={\cal O}\Bigl( N^{c(k |E|-(k-1)MC(G))-1}\Bigr). \end{equation} \begin{proof} Let $M=L\otimes L \ldots$, with a total of $c$ copies of $L$ in the tensor product so that ${\rm tr}\Bigl((L^\dagger L)^{k}\Bigr)^c={\rm tr}\Bigl((M^\dagger M)^k\Bigr)$.
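The identity ${\rm tr}\bigl((L^\dagger L)^{k}\bigr)^c={\rm tr}\bigl((M^\dagger M)^k\bigr)$ can be checked concretely in the diagonal case, where ${\rm tr}\bigl((L^\dagger L)^k\bigr)=\sum_i s_i^{2k}$ and the singular values of $M=L^{\otimes c}$ are the $c$-fold products of those of $L$ (the values below are arbitrary):

```python
s = [1.0, 2.0, 3.0]   # singular values of a diagonal L (arbitrary)
c, k = 2, 3

# singular values of M = L ⊗ L: all pairwise products s_a * s_b
m = [a * b for a in s for b in s]

lhs = sum(x ** (2 * k) for x in s) ** c   # tr((L†L)^k)^c
rhs = sum(x ** (2 * k) for x in m)        # tr((M†M)^k)
assert abs(lhs - rhs) < 1e-6
```

The agreement is just the factorization $\sum_{a,b}(s_a s_b)^{2k}=\bigl(\sum_a s_a^{2k}\bigr)^2$, which is the $c=2$ instance of the identity used in the proof.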
Then, $M$ is the linear operator associated to the tensor network ${\cal N}_c'$ which is the product ${\cal N}_c \cdot {\cal N}_c \cdot \ldots$ and has graph $G'$ with min cut $MC(G')=cMC(G)$. We can label vertices in $G'$ by a pair $\{v;d\}$ for $d=1,2,\ldots,c$, where $d$ labels the different tensor factors in $M$ (we use the different notation $\{v;d\}$ to distinguish this from the notation $(v;\sigma)$ used above). Then, vertices in ${\cal N}_c'$ are labelled by triples $(\{v;d\},\sigma)$. We can use lemma \ref{ublemma} to show that the pairings of ${\cal N}_c'$ with maximal number of closed loops are direct pairings. Every direct pairing of ${\cal N}_c'$ only pairs $(\{v;d\},\sigma)$ with $(\{v;d\},\tau)$, so that the direct pairings of ${\cal N}_c'$ are in one-to-one correspondence with the product of the direct pairings of ${\cal N}_c$ with itself. \end{proof} \end{lemma} This has the corollary using the case $c=2$: \begin{corollary} With high probability, for either choice of ensemble, ${\rm av}\Bigl( (K^\dagger K)^k \Bigr)$ is within $o(1)$ of its expectation value. \end{corollary} To study expectation values of other products, we define a ``product of pairings". \begin{definition} Let ${\cal N}_c(1),{\cal N}_c(2),\ldots,{\cal N}_c(a)$ be a sequence of closed tensor networks, $a \geq 1$. Let ${\cal N}_c={\cal N}_c(1) \cdot \ldots \cdot {\cal N}_c(a)$ be the product of these networks and label each vertex in the product network by a pair $[v;i]$, for $1 \leq i \leq a$, where $v$ labels a vertex in tensor network ${\cal N}_c(i)$. Let $\pi(1),\pi(2),\ldots,\pi(a)$ be pairings of these networks. Then, define the pairing $\pi$ which is a product of these pairings to be the pairing which pairs $[v;i]$ with $[w;i]$ if $\pi(i)$ pairs $v$ with $w$. Note that $C(\pi)=\sum_i C(\pi(i))$.
\end{definition} Then, once we lower bound a trace, we have an immediate lower bound for products of traces: \begin{lemma} \label{thisone} Let ${\cal N}_c(1),{\cal N}_c(2),\ldots,{\cal N}_c(a)$ be a sequence of closed tensor networks, $a \geq 1$. Let $C_{\rm max}(i)$ be the maximum number of loops in a pairing of ${\cal N}_c(i)$ and let $n_{\rm max}(i)$ be the number of distinct pairings $\pi$ of ${\cal N}_c(i)$ with $C(\pi)=C_{\rm max}(i)$. Then, for the network ${\cal N}_c$ defined to be the product network ${\cal N}_c(1) \cdot {\cal N}_c(2) \cdot \ldots \cdot {\cal N}_c(a)$ we have \begin{equation} \label{one} C_{\rm max} \geq \sum_i C_{\rm max}(i), \end{equation} and if $C_{\rm max}=\sum_i C_{\rm max}(i)$ then \begin{equation} \label{two} n_{\rm max} \geq \prod_i n_{\rm max}(i). \end{equation} Further, \begin{equation} \label{bprod} E[{\cal N}_c]\geq E[{\cal N}_c(1)] \cdot E[{\cal N}_c(2)] \cdot \ldots \cdot E[{\cal N}_c(a)], \end{equation} where $E[{\cal N}_c]$ denotes the expectation value of the contraction of the network. \begin{proof} Let $\pi_i(j)$, for $1 \leq j \leq n_{\rm max}(i)$, label the pairings of ${\cal N}_c(i)$ with $C(\pi)=C_{\rm max}(i)$. Then, for each sequence $j_1,\ldots, j_a$, consider the product pairing of the $\pi_i(j_i)$; each such product pairing has $\sum_i C_{\rm max}(i)$ closed loops, and distinct sequences give distinct product pairings. This shows Eqs.~(\ref{one},\ref{two}). To show Eq.~(\ref{bprod}), note that each expectation value $E[{\cal N}_c(i)]$ is a sum over pairings of that network, weighted by $N$ raised to the number of closed loops; the sum over product pairings of ${\cal N}_c$, weighted in the same way, is equal to $E[{\cal N}_c(1)] \cdot \ldots \cdot E[{\cal N}_c(a)]$, and since all weights are positive and the product pairings are a subset of all pairings of ${\cal N}_c$, Eq.~(\ref{bprod}) follows.
\end{proof} \end{lemma} We now show: \begin{lemma} The expectation value of any product of traces is close to the product of the averages: \begin{equation} \label{prodmany} E[{\rm tr}\Bigl( (L^\dagger L)^{k_1} \Bigr) {\rm tr}\Bigl( (L^\dagger L)^{k_2} \Bigr) \ldots]= (1+{\cal O}(1/N)) \cdot E[{\rm tr}\Bigl( (L^\dagger L)^{k_1} \Bigr)] E[ {\rm tr}\Bigl( (L^\dagger L)^{k_2} \Bigr)] \ldots \end{equation} \begin{proof} By lemma \ref{thisone}, $E[{\rm tr}\Bigl( (L^\dagger L)^{k_1} \Bigr) {\rm tr}\Bigl( (L^\dagger L)^{k_2} \Bigr)] \ldots \geq (1-{\cal O}(1/N)) \cdot E[{\rm tr}\Bigl( (L^\dagger L)^{k_1} \Bigr)] E[ {\rm tr}\Bigl( (L^\dagger L)^{k_2} \Bigr)] \ldots$. So, we just need to upper bound the left-hand side of Eq.~(\ref{prodmany}). Consider the case of a product of only two traces. Then, by Cauchy-Schwarz, $E[{\rm tr}\Bigl( (L^\dagger L)^{k_1} \Bigr) {\rm tr}\Bigl( (L^\dagger L)^{k_2} \Bigr)] \leq \sqrt{E[{\rm tr}\Bigl( (L^\dagger L)^{k_1} \Bigr)^2] E[ {\rm tr}\Bigl( (L^\dagger L)^{k_2} \Bigr)^2]}$. By lemma \ref{varlemma}, and Eq.~(\ref{main2}), \begin{equation} \sqrt{E[{\rm tr}\Bigl( (L^\dagger L)^{k} \Bigr)^2]} \leq (1+{\cal O}(1/N)) \cdot E[{\rm tr}\Bigl( (L^\dagger L)^{k} \Bigr)]. \end{equation} So, \begin{equation} E[{\rm tr}\Bigl( (L^\dagger L)^{k_1} \Bigr) {\rm tr}\Bigl( (L^\dagger L)^{k_2} \Bigr)] \leq (1+{\cal O}(1/N)) \cdot E[{\rm tr}\Bigl( (L^\dagger L)^{k_1} \Bigr)] E[ {\rm tr}\Bigl( (L^\dagger L)^{k_2} \Bigr)]. \end{equation} So, the claim follows for the case of two traces. Now, consider a case with $a$ traces for some $a>2$. By Cauchy-Schwarz, \begin{equation} E[{\rm tr}\Bigl( (L^\dagger L)^{k_1} \Bigr) {\rm tr}\Bigl( (L^\dagger L)^{k_2} \Bigr) \ldots {\rm tr}\Bigl( (L^\dagger L)^{k_a} \Bigr)] \leq \sqrt{E[{\rm tr}\Bigl((L^\dagger L)^{k_1}\Bigr)^2]} \sqrt{E[{\rm tr}\Bigl( (L^\dagger L)^{k_2} \Bigr)^2 \ldots {\rm tr}\Bigl( (L^\dagger L)^{k_{a}} \Bigr)^2]}. 
\end{equation} Applying Cauchy-Schwarz again to the second term in this product on the right-hand side, and continuing in this fashion, we find that \begin{equation} E[{\rm tr}\Bigl( (L^\dagger L)^{k_1} \Bigr) {\rm tr}\Bigl( (L^\dagger L)^{k_2} \Bigr) \ldots {\rm tr}\Bigl( (L^\dagger L)^{k_a} \Bigr)] \leq E[{\rm tr}\Bigl((L^\dagger L)^{k_1}\Bigr)^2]^{1/2} E[{\rm tr}\Bigl((L^\dagger L)^{k_2}\Bigr)^4]^{1/4} E[{\rm tr}\Bigl((L^\dagger L)^{k_3}\Bigr)^8]^{1/8} \ldots \end{equation} By lemma \ref{varlemma} and Eq.~(\ref{main2}), \begin{equation} E[{\rm tr}\Bigl( (L^\dagger L)^{k_1} \Bigr) {\rm tr}\Bigl( (L^\dagger L)^{k_2} \Bigr) \ldots] \leq (1+{\cal O}(1/N)) \cdot E[{\rm tr}\Bigl( (L^\dagger L)^{k_1} \Bigr)] E[ {\rm tr}\Bigl( (L^\dagger L)^{k_2} \Bigr)] \ldots \end{equation} \end{proof} \end{lemma} \section{Numerics} \label{sectionnumerics} We performed a numerical study of the network shown in Fig.~\ref{fignum}. This network was used in Ref.~\onlinecite{mfmc2} as an example of a network for which $QMF<QMC$ for $N=2$. Since this network has $|S|=|T|=MC(G)=2$ and there is no min cut that cuts the graph into two graphs which each have at least one vertex, the results above predict that for large $N$, the limiting distribution of singular values is, up to dividing by $\sqrt{N^{|E|-MC}}=N^3$, the same as that for random matrix theory in the chiral Gaussian unitary ensemble. In Fig.~\ref{fig20} we show the singular values both for a randomly chosen example of this ensemble for $N=20$ and for a randomly chosen chiral Gaussian unitary matrix of size $N^2$-by-$N^2$. In Fig.~\ref{fig40}, we show the same for $N=40$. One can see the convergence. We have also considered the smallest singular values to obtain numerical evidence for the rank of $L$. Studying a few random samples for each $N$ with $2 \leq N \leq 40$, we found that there was exactly one very small singular value (smaller than $10^{-14}$) for $N=2,3 \mod \, 4$ and no very small singular values otherwise.
In both cases, all the other singular values were larger than $6\times 10^{-4}$. While this is only numerical evidence, it supports a conjecture that for this network, $QMF(G,N,O)=QMC(G,N)-1$ if $N=2,3 \mod \, 4$ and $QMF(G,N,O)=QMC(G,N)$ otherwise. The quantum max-flow/min-cut conjecture of Ref.~\onlinecite{mfmc1} was shown to be false in Ref.~\onlinecite{mfmc2}. If the conjecture above for the network shown in Fig.~\ref{fignum} is true, then it is not even the case that ``for all $G,O$, for all sufficiently large $N$, $QMF(G,N,O)=QMC(G,N)$". We may still hope for the weaker conjecture that ``for all $G,O$, for infinitely many $N$, $QMF(G,N,O)=QMC(G,N)$". Indeed, to show this weaker conjecture, it suffices to show that ``for all $G,O$, for some $N_0>1$, $QMF(G,N_0,O)=QMC(G,N_0)$" as then $QMF(G,N_0^k,O)=QMC(G,N_0^k)$ for all integer $k\geq 0$ due to the following: \begin{lemma} For all $G$, $QMF(G,N_1 N_2,O) \geq QMF(G,N_1,O) QMF(G,N_2,O)$. \begin{proof} Let ${\cal T}_1,{\cal T}_2$ be tensors whose indices range from $1$ to $N_1$ or $1$ to $N_2$ respectively, such that the network with tensor ${\cal T}_1$ gives a linear operator $L_1$ with rank $QMF(G,N_1,O)$ and with tensor ${\cal T}_2$ gives a linear operator $L_2$ with rank $QMF(G,N_2,O)$. Then, ${\cal T}={\cal T}_1 \otimes {\cal T}_2$ is a tensor whose indices range from $1$ to $N_1 N_2$ and gives a linear operator $L=L_1 \otimes L_2$ with rank $QMF(G,N_1,O) QMF(G,N_2,O)$. Here, the product ${\cal T}_1 \otimes {\cal T}_2$ is defined as follows: label each index of ${\cal T}$ by a pair $(i,j)$. Write the entries of ${\cal T}$ as ${\cal T}_{(i_1,j_1),(i_2,j_2),\ldots,(i_d,j_d)}$. Set ${\cal T}_{(i_1,j_1),(i_2,j_2),\ldots,(i_d,j_d)}=({\cal T}_1)_{i_1,i_2,\ldots,i_d} ({\cal T}_2)_{j_1,j_2,\ldots,j_d}$. \end{proof} \end{lemma} \begin{figure} \includegraphics[width=2in]{fignum.pdf} \caption{A tensor network from Ref.~\onlinecite{mfmc2} that we studied numerically.
Numbers indicate local ordering.} \label{fignum} \end{figure} \begin{figure} \includegraphics[width=5in]{figure_2.pdf} \caption{Plot of the value of the singular values (divided by $N^3$) for one sample of the tensor network shown in Fig.~\ref{fignum} for $N=20$ shown in blue and for a chiral Gaussian unitary matrix of size $N^2$-by-$N^2$ shown in green. The largest singular value shown on the blue curve is larger than the largest on the green curve.} \label{fig20} \end{figure} \begin{figure} \includegraphics[width=5in]{figure_1.pdf} \caption{Plot of the value of the singular values (divided by $N^3$) for one sample of the tensor network shown in Fig.~\ref{fignum} for $N=40$ shown in blue and for a chiral Gaussian unitary matrix of size $N^2$-by-$N^2$ shown in green. The largest singular value shown on the blue curve is slightly larger than the largest on the green curve but the discrepancy is smaller than in the case $N=20$.} \label{fig40} \end{figure}
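The tensor-product lemma above is easy to check numerically. The following sketch uses generic low-rank matrices as stand-ins for the operators $L_1$ and $L_2$ (illustrative sizes and ranks, not the specific network of Fig.~\ref{fignum}), and verifies that the rank of the Kronecker product factorizes:

```python
import numpy as np

rng = np.random.default_rng(0)
# Generic factors: rank(L1) = 2 and rank(L2) = 3 (almost surely for
# products of Gaussian matrices of these shapes).
L1 = rng.normal(size=(4, 2)) @ rng.normal(size=(2, 4))
L2 = rng.normal(size=(5, 3)) @ rng.normal(size=(3, 5))
# The operator of the product tensor T1 (x) T2 is the Kronecker product.
L = np.kron(L1, L2)
# rank(L1 (x) L2) = rank(L1) * rank(L2), mirroring
# QMF(G, N1*N2, O) >= QMF(G, N1, O) * QMF(G, N2, O).
r1, r2, r = [np.linalg.matrix_rank(M) for M in (L1, L2, L)]
```

Here the inequality in the lemma is met with equality because we are free to pick the factor tensors; a generic tensor of dimension $N_1 N_2$ could have even larger rank.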
https://arxiv.org/abs/1710.04325
Improved Coresets for Kernel Density Estimates
We study the construction of coresets for kernel density estimates. That is, we show how to approximate the kernel density estimate described by a large point set with another kernel density estimate with a much smaller point set. For characteristic kernels (including Gaussian and Laplace kernels), our approximation preserves the $L_\infty$ error between kernel density estimates within error $\epsilon$, with coreset size $2/\epsilon^2$, but depends on no other aspects of the data, including the dimension, the diameter of the point set, or the bandwidth of the kernel, which are common to other approximations. When the dimension is unrestricted, we show this bound is tight for these kernels as well as a much broader set. This work provides a careful analysis of the iterative Frank-Wolfe algorithm adapted to this context, an algorithm called \emph{kernel herding}. This analysis unites a broad line of work that spans statistics, machine learning, and geometry. When the dimension $d$ is constant, we demonstrate much tighter bounds on the size of the coreset specifically for Gaussian kernels, showing that it is bounded by the size of the coreset for axis-aligned rectangles. Currently the best known constructive bound is $O(\frac{1}{\epsilon} \log^d \frac{1}{\epsilon})$, and non-constructively, this can be improved by $\sqrt{\log \frac{1}{\epsilon}}$. This improves the best constant dimension bounds polynomially for $d \geq 3$.
\section{Introduction} A kernel density estimate~\cite{Par62} of a point set $P \subset \mathbb{R}^d$ smooths out the point set to create a continuous function $\kde_P : \mathbb{R}^d \to \mathbb{R}$. This object has a rich history and many applications in statistical data analysis~\cite{Sil86,DG84,Sco92}, with many results around the question: if $P$ is drawn i.i.d.\ from an unknown distribution $\psi$, how well can $\kde_P$ converge to $\psi$ as a function of $|P|$ (mainly in the $L_2$~\cite{Sil86,Sco92} and $L_1$~\cite{DG84} sense)? Then kernel techniques in machine learning~\cite{SS02} developed the connection of kernel density estimates to reproducing kernel Hilbert spaces (RKHS), which are infinite-dimensional function spaces (each $\kde_P$ is a point in such a space). From these techniques grew much of non-linear data analysis (e.g., kernel PCA, kernel SVM). In particular, an object in the RKHS called the \emph{kernel mean} is another representation of $\kde_P$, and its sparse approximation plays a critical role in distribution hypothesis testing~\cite{GBRSS12,HBCM13}, Markov random fields~\cite{CWS10}, and even political data analysis~\cite{FWS15}. Through a simple argument (described below), the standard approximation of the kernel mean in the RKHS implies an $L_\infty$ approximation bound of the kernel density estimate in $\mathbb{R}^d$~\cite{CWS10,SZSGS08} (which is stronger than the $L_1$ and $L_2$ variants~\cite{ZP15}). More recently, the sparse approximation of a kernel density estimate has gained interest from the computational geometry community for its connections to topological data analysis~\cite{PWZ15,FLRWBS14}, coresets~\cite{Phi13}, and discrepancy theory~\cite{harvey2014near}. In this paper, we provide strong connections between all of these storylines, and in particular provide a simpler analysis of the common sparse kernel mean approximation techniques with application to the strong $L_\infty$-error coresets of kernel density estimates.
With unrestricted dimensions, we show our bounds for KDEs are tight, and in constant dimensions of at least $3$, we polynomially improve the best known bounds so they are now tight up to poly-log factors. \paragraph{Formal definitions.} For a point set $P \subset \mathbb{R}^d$ of size $n$ and a kernel $K : \mathbb{R}^d \times \mathbb{R}^d \to \mathbb{R}$, a \emph{kernel density estimate} $\kde_P$ at $x \in \mathbb{R}^d$ is defined $ \kde_P(x) = \frac{1}{|P|}\sum_{p \in P} K(x,p). $ Our goal is to construct a subset $Q \subset P$, and bound its size, so that its $\kde$ has $\varepsilon$-bounded $L_\infty$ error: \[ \| \kde_P - \kde_Q \|_\infty = \max_{x \in \mathbb{R}^d} \left| \kde_P(x) - \kde_Q(x) \right| \leq \varepsilon. \] We call such a subset $Q$ an \emph{$\varepsilon$-coreset of a kernel range space $(P,\Eu{K})$} (or just an \emph{$\varepsilon$-kernel coreset} for short), where $\Eu{K}$ is the set of all functions $K(x,\cdot)$ represented by a fixed kernel $K$ and an arbitrary center point $x \in \mathbb{R}^d$. While there is not one standard definition of a kernel, many of these kernels have properties that unite them. Common examples are the Gaussian kernel $K(x,p) = \exp(-\|x-p\|^2/\sigma^2)$, the Laplace kernel $K(x,p) = \exp(-\|x-p\|/\sigma)$, the ball kernel $K(x,p) = \{1$ if $\|x-p\|/\sigma \leq 1$; and $0$ otherwise\}, and the triangle kernel $K(x,p) = \max \{0, 1 - \|x-p\|/\sigma\}$. The parameter $\sigma$ is often called the \emph{bandwidth} and controls the level of smoothing. All of these kernels (and indeed most) are \emph{shift invariant}, and thus can be written as a function of the single input $z = \|x-p\|$, i.e., $f(z) = K(x,p)$. We have chosen to normalize all kernels so $f(0) = K(x,x) = 1$.
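To make these definitions concrete, the following sketch evaluates $\kde_P$ for the four example kernels in their shift-invariant form $f(z)$ with $z = \|x-p\|/\sigma$ (the toy point set and bandwidth are arbitrary illustrative choices):

```python
import numpy as np

# The four example kernels in shift-invariant form f(z), z = ||x - p|| / sigma,
# normalized so that f(0) = K(x, x) = 1.
def gaussian(z): return np.exp(-z**2)
def laplace(z):  return np.exp(-z)
def ball(z):     return np.where(z <= 1, 1.0, 0.0)
def triangle(z): return np.maximum(0.0, 1.0 - z)

def kde(P, x, f, sigma=1.0):
    # kde_P(x) = (1/|P|) * sum_{p in P} K(x, p)
    z = np.linalg.norm(P - x, axis=1) / sigma
    return f(z).mean()

P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])  # a toy point set
x = np.zeros(2)
# e.g. kde(P, x, ball) = 2/3: only the two points within distance sigma count.
```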
The \emph{kernel distance}~\cite{HB05,glaunesthesis, JoshiKommarajuPhillips2011,PhillipsVenkatasubramanian2011} (also called \emph{current distance} or \emph{maximum mean discrepancy}) is a metric~\cite{muller1997integral,SGFSL10} between two point sets $P$, $Q$ (as long as the kernel used is characteristic~\cite{SGFSL10}, a slight restriction of being positive definite~\cite{aronszajn1950theory,Wah99}, this includes the Gaussian and Laplace kernels). Define the similarity between the two point sets as $ \kappa(P,Q) = \frac{1}{|P|}\frac{1}{|Q|} \sum_{p \in P} \sum_{q \in Q} K(p,q), $ and the kernel distance as $ D_K(P,Q) = \sqrt{\kappa(P,P) + \kappa(Q,Q) - 2 \kappa(P,Q)}. $ When $Q$ is a single point $x$, then $\kappa(P,x) = \kde_P(x)$. If $K$ is positive definite, it is said to have the reproducing property~\cite{aronszajn1950theory,Wah99}. This implies that $K(p,x)$ is an inner product in a reproducing kernel Hilbert space (RKHS) $\H$. Specifically, there exists a lifting map $\phi : \mathbb{R}^d \to \H$ so $K(p,x) = \langle \phi(p), \phi(x) \rangle_{\H}$, and moreover the entire set $P$ can be represented as $\Phi(P) = \sum_{p \in P} \phi(p)$, which is a single element of $\H$ and has norm $\|\Phi(P)\|_{\H} = \sqrt{\kappa(P,P)}$. A single point $x \in \mathbb{R}^d$ also has a norm $\|\phi(x)\|_{\H} = \sqrt{K(x,x)} = 1$ in this space. A \emph{kernel mean} of a point set $P$ and a reproducing kernel $K$ is defined \[ \ensuremath{\hat{\mu}}_P = \frac{1}{|P|} \sum_{p \in P} \phi(p) = \Phi(P)/|P| \in \H. \] Note that $\|\ensuremath{\hat{\mu}}_P\|_{\H} \leq K(x,x)$ so in our setting $\|\ensuremath{\hat{\mu}}_P\|_{\H} \leq 1$. Also $D_K(P,Q) = \|\ensuremath{\hat{\mu}}_P - \ensuremath{\hat{\mu}}_Q\|_\H$. \vspace{-.1in} \paragraph{Relationship between kernel mean and $\varepsilon$-kernel coresets.} It is possible to convert between bounds on the subset size required for the kernel mean and an $\varepsilon$-kernel coreset of an associated kernel range space. 
But they are not symmetric. The Koksma-Hlawka inequality (in the context of reproducing kernels~\cite{CWS10,SZSGS08} when $K(x,x) = 1$) states that \[ \|\kde_P - \kde_Q\|_\infty \leq \|\ensuremath{\hat{\mu}}_P - \ensuremath{\hat{\mu}}_Q\|_{\Eu{H}_K}. \] Since $\kde_P(x) = \kappa(P, x) = \langle \ensuremath{\hat{\mu}}_P, \phi(x) \rangle_\H$ and via Cauchy-Schwarz, for any $x \in \mathbb{R}^d$ \[ | \kde_P(x) - \kde_Q(x) | = | \langle \ensuremath{\hat{\mu}}_P , \phi(x) \rangle_\H - \langle \ensuremath{\hat{\mu}}_Q, \phi(x) \rangle_\H | = | \langle \ensuremath{\hat{\mu}}_P - \ensuremath{\hat{\mu}}_Q, \phi(x) \rangle_\H | \leq \| \ensuremath{\hat{\mu}}_P - \ensuremath{\hat{\mu}}_Q \|_\H. \] Thus to bound $\max_{x \in \mathbb{R}^d} |\kde_P(x) - \kde_Q(x)| \leq \varepsilon$ it is sufficient to bound $\|\ensuremath{\hat{\mu}}_P - \ensuremath{\hat{\mu}}_Q\|_\H \leq \varepsilon$. On the other hand, if we have a bound $\max_{x \in \mathbb{R}^d} |\kde_P(x) - \kde_Q(x)| \leq \varepsilon$, then we can only argue that $\|\ensuremath{\hat{\mu}}_P - \ensuremath{\hat{\mu}}_Q\|_\H \leq \sqrt{2 \varepsilon}$. We observe that \begin{align*} \|\ensuremath{\hat{\mu}}_P - \ensuremath{\hat{\mu}}_Q\|_\H^2 &= D_K(P,Q)^2 = \kappa(P,P) + \kappa(Q,Q) - 2 \kappa(P,Q) \\ & = \frac{1}{|P|} \sum_{p \in P} \kde_P(p) + \frac{1}{|Q|} \sum_{q \in Q} \kde_Q(q) - \frac{1}{|P|} \sum_{p \in P} \kde_Q(p) - \frac{1}{|Q|} \sum_{q \in Q} \kde_P(q) \\ &= \frac{1}{|P|} \sum_{p \in P} (\kde_P(p) - \kde_Q(p)) + \frac{1}{|Q|} \sum_{q \in Q} (\kde_Q(q) - \kde_P(q)) \\ & \leq \frac{1}{|P|} \sum_{p \in P} (\varepsilon) + \frac{1}{|Q|} \sum_{q \in Q} (\varepsilon) = 2\varepsilon. \end{align*} Reversing the single inequality in this chain gives the corresponding lower bound. Taking the square root of both sides then yields the implication.
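These identities are easy to check numerically. The following sketch (Gaussian kernel and random point sets are arbitrary illustrative choices) computes $D_K(P,Q) = \|\ensuremath{\hat{\mu}}_P - \ensuremath{\hat{\mu}}_Q\|_\H$ from the three $\kappa$ terms and verifies the Cauchy-Schwarz consequence that it upper bounds $|\kde_P(x) - \kde_Q(x)|$ at every query point:

```python
import numpy as np

def K(X, Y, sigma=1.0):
    # pairwise Gaussian kernel matrix: K[i, j] = exp(-||X_i - Y_j||^2 / sigma^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma**2)

def kappa(P, Q):
    # kappa(P, Q) = (1/(|P| |Q|)) * sum_{p, q} K(p, q)
    return K(P, Q).mean()

rng = np.random.default_rng(1)
P = rng.normal(size=(30, 2))
Q = rng.normal(size=(10, 2))
# Kernel distance D_K(P, Q) = ||mu_P - mu_Q||_H via the three-kappa expansion.
D = np.sqrt(kappa(P, P) + kappa(Q, Q) - 2 * kappa(P, Q))
# Cauchy-Schwarz: |kde_P(x) - kde_Q(x)| <= D for every x (checked at 100 x's).
X = rng.normal(size=(100, 2))
gap = np.abs(K(X, P).mean(axis=1) - K(X, Q).mean(axis=1)).max()
```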
Unfortunately, the second reduction does not map the other way; a bound of $\sqrt{2\varepsilon}$ on $\|\ensuremath{\hat{\mu}}_P - \ensuremath{\hat{\mu}}_Q\|_\H$ only ensures an average-case ($L_2$) error bound for $\kde_P$ at scale $\varepsilon$, not the desired stronger $L_\infty$ error. \subsection{Known Results on KDE Coresets} In this section we survey known bounds on the size $|Q|$ required for $Q$ to be an $\varepsilon$-kernel coreset of the kernel range space $(P,\Eu{K})$. We assume $P \subset \mathbb{R}^d$ is of size $n$ and has diameter $\Delta = (1/\sigma) \max_{p, p' \in P} \|p-p'\|$, where $\sigma$ is the bandwidth parameter of the kernel. We sometimes allow a $\delta$ probability that the algorithm does not succeed. Results are summarized in Table \ref{tbl:compare}. \begin{table}[t] \begin{tabular}{|r|c|c|l|} \hline Paper & Coreset Size & Runtime & Restrictions \\ \hline Joshi \etal~\cite{JoshiKommarajuPhillips2011} & $(1/\varepsilon^2)(d + \log(1/\delta))$ & $|Q|$ samples & centrally symmetric, positive \\ Fasy \etal~\cite{FLRWBS14} & $(d/\varepsilon^2) \log(d \Delta / \varepsilon \delta)$ & $|Q|$ samples & ..
\\ Gretton \etal~\cite{GBRSS12} & $(1/\varepsilon^4) \log(1/\delta)$ & $|Q|$ samples & characteristic kernels \\ \hline Phillips~\cite{Phi13} & $(1/\varepsilon\sigma)^{2d/(d+2)} \log^{d/(d+2)} (1/\varepsilon\sigma)$ & $n/\varepsilon^2$ & $(1/\sigma)$-Lipschitz, \; $d$ is constant \\ Phillips~\cite{Phi13} & $1/\varepsilon$ & $n \log n$ & $d=1$ \\ \hline Chen \etal~\cite{CWS10} & $1/(\varepsilon r_P)$ & $n/(\varepsilon r_P)$ & characteristic kernels \\ Bach \etal~\cite{BLO12} & $(1/r_P^2)\log (1/\varepsilon)$ & $n\log(1/\varepsilon)/r_P^2$ & characteristic kernels \\ Bach \etal~\cite{BLO12} & $1/\varepsilon^2$ & $n/\varepsilon^2$ & characteristic kernels, weighted \\ Harvey and Samadi~\cite{harvey2014near} & $(1/\varepsilon)\sqrt{n}\log^{2.5}(n)$ & $\mathsf{poly}(n,1/\varepsilon,d)$ & characteristic kernels \\ Cortes and Scott~\cite{CS15} & $k_0$ \;\; ($\leq (\Delta/ \varepsilon)^d$) & $nk_0$ & $(1/\sigma)$-Lipschitz; \; $d$ is constant \\ \hline \textbf{New Result} & $1/\varepsilon^2$ & $n/\varepsilon^2$ & characteristic kernels, \emph{unweighted} \\ \textbf{New Result} & $(1/\varepsilon) \log^d \frac{1}{\varepsilon}$ & $n + \poly(1/\varepsilon)$ & Gaussian, $d$ is constant \\ \textbf{New Lower Bound} & $\Omega(1/\varepsilon^2)$ & $-$ & \textsc{siss} (e.g., Gaussian); $d = \Omega(1/\varepsilon^2)$ \\\hline \end{tabular} \vspace{-.1in} \caption{Asymptotic $\varepsilon$-kernel coreset sizes and runtimes in terms of $\varepsilon$, $n$, $d$, $r_P$, $\sigma$, $\Delta$. \textsc{siss} = Shift-invariant, somewhere-steep (see Section \ref{sec:LB}). \label{tbl:compare}} \vspace{-.1in} \end{table} \paragraph{Halving approaches.} Phillips~\cite{Phi13} showed that kernels with a bounded Lipschitz factor (so $|K(x,p) - K(x,q)| \leq C \|p-q\|$ for some constant $C$; this includes the Gaussian, Laplace, and triangle kernels, which have $C = O(1/\sigma)$) admit coresets of size $O((1/\varepsilon\sigma) \sqrt{\log(1/\varepsilon\sigma)})$ in $\mathbb{R}^2$.
For points in $\mathbb{R}^d$ (for $d>1$) this generalizes to a bound of $O((1/\varepsilon\sigma)^{2d/(d+2)} \log^{d/(d+2)} (1/\varepsilon\sigma))$. That paper also observed that for $d=1$, selecting evenly spaced points in the sorted order achieves a coreset of size $O(1/\varepsilon)$. \paragraph{Sampling bounds.} Joshi \etal~\cite{JoshiKommarajuPhillips2011} showed that a random sample of size $O((1/\varepsilon^2)(d + \log(1/\delta)))$ results in an $\varepsilon$-kernel coreset for any centrally symmetric, non-increasing kernel. This works by reducing to a VC-dimension~\cite{LLS01} argument with ranges defined by balls. Fasy~\etal~\cite{FLRWBS14} provide an alternative bound on how random sampling preserves the $L_\infty$ error in the context of statistical topological data analysis. Their bound can be converted to require size $O((d/\varepsilon^2) \log(d \Delta/\varepsilon \delta))$, which can improve upon the bound of Joshi \etal~\cite{JoshiKommarajuPhillips2011} if $K(x,x) > 1$ (otherwise, herein we only consider the case $K(x,x) =1$). Examining characteristic kernels, which induce an RKHS, leads in that function space to a simpler bound of $O((1/\varepsilon^4) \log(1/\delta))$, as observed by Gretton \etal~\cite{GBRSS12}. Their result shows that after $O((1/\varepsilon')^2 \log(1/\delta))$ samples $Q$, we have $\|\ensuremath{\hat{\mu}}_P - \ensuremath{\hat{\mu}}_Q\|_\H^2 \leq \varepsilon'$. To bound the non-squared distance by $\varepsilon$ we set $\varepsilon' = \varepsilon^2$, and obtain the bound of $O((1/\varepsilon^4) \log(1/\delta))$. \paragraph{Iterative approaches.} Motivated by the task of constructing samples from Markov random fields, Chen \etal~\cite{CWS10} introduced a technique called \emph{kernel herding} suitable for characteristic kernels.
They showed that iteratively and greedily choosing the point $p \in P$ which, when added to $Q$, most decreases the quantity $\|\ensuremath{\hat{\mu}}_P - \ensuremath{\hat{\mu}}_Q\|_{\Eu{H}_K}$ will decrease that term at rate $O(1/(r_P \, t))$ for $t = |Q|$. Here $r_P$ is the largest radius of a ball centered at $\ensuremath{\hat{\mu}}_P \in \H$ which is completely contained in the convex hull of the set $\{\phi(p) \mid p \in P\}$. They did not specify the quantity $r_P$ but argued that it is a constant greater than $0$. Bach \etal~\cite{BLO12} showed that this algorithm can be interpreted under the Frank-Wolfe framework~\cite{Cla10}. Moreover, they argue that $r_P$ is not always a constant; in particular when $P$ is infinite (e.g., it represents a continuous distribution) then $r_P$ can be arbitrarily small. However, when $P$ is finite, they prove that $r_P$ is strictly positive without giving an explicit bound. They also make explicit that after $t$ steps, they achieve $\|\ensuremath{\hat{\mu}}_P - \ensuremath{\hat{\mu}}_{Q,w}\|_{\Eu{H}_K} \leq 4 / (r_P \cdot t)$. They also describe a method which includes ``line search'' to create a weighted coreset $(Q,w)$, so each point $q \in Q$ is associated with a weight $w(q) \in [0,1]$ so $\sum_{q \in Q} w(q) = 1$; then $\ensuremath{\hat{\mu}}_{Q,w} = \sum_{q \in Q} w(q) \phi(q)$. For this method they achieve $ \|\ensuremath{\hat{\mu}}_P - \ensuremath{\hat{\mu}}_{Q,w}\|_{\Eu{H}_K} \leq \sqrt{\exp(-r_P^2 t)}. $ Bach \etal~\cite{BLO12} also mention a bound $\|\ensuremath{\hat{\mu}}_P - \ensuremath{\hat{\mu}}_{Q,w}\|_{\Eu{H}_K} \leq \sqrt{8 / t}$, which is independent of $r_P$. It relies on a very general bound of Dunn~\cite{dunn1980convergence}, which uses line search, or one of Jaggi~\cite{Jag13}, which uses a fixed but non-uniform set of weights.
These show a convergence rate of $O(1/t)$ for any smooth function, including $\|\ensuremath{\hat{\mu}}_P - \ensuremath{\hat{\mu}}_{Q,w}\|^2_{\Eu{H}_K}$; taking the square root provides a bound of $\|\ensuremath{\hat{\mu}}_P - \ensuremath{\hat{\mu}}_{Q,w}\|_{\Eu{H}_K} \leq \varepsilon$ after $t = O(1/\varepsilon^2)$ steps. However, the result is a weighted coreset $(Q,w)$, which may be less intuitive or harder to work with in some situations. Harvey and Samadi~\cite{harvey2014near} further revisited kernel herding in the context of a general mean approximation problem in $\mathbb{R}^{d'}$. That is, given a set $P'$ of $n$ points in $\mathbb{R}^{d'}$, find a subset $Q' \subset P'$ so that $\|\bar P' - \bar Q'\| \leq \varepsilon$, where $\bar P'$ and $\bar Q'$ are the Euclidean averages of $P'$ and $Q'$, respectively. This maps to the kernel mean problem with $P' = \{\phi(p) \mid p \in P\}$, where the only bound on $d'$ is $n$. They show that the $r_P$ term can be manipulated by affine scaling, but that in the worst case (after such transformations via John's theorem) it is $O(\sqrt{d'} \log^{2.5} (n))$, and hence show one can always set $\varepsilon = O(\sqrt{d'} \log^{2.5} (n) / t) = O((1/t) \sqrt{n} \log^{2.5}(n))$. We will show that we can always compress $P'$ to another set $P''$ of size $n = O(1/\varepsilon^2)$ (or for instance use the random sampling bound of Joshi \etal~\cite{JoshiKommarajuPhillips2011}, ignoring other factors); then solving for $t$ yields $t = O((1/\varepsilon^2) \log^{2.5}(1/\varepsilon))$. Harvey and Samadi also provide a lower bound to show that after $t$ steps, the kernel mean error may be as large as $\Omega(\sqrt{d'}/t)$ when $t = \Theta(n)$. This seems to imply (using $d' = \Omega(n)$ and a $P'$ of size $\Theta(1/\varepsilon^2)$) that we need $t = \Omega(1/\varepsilon^2)$ steps to achieve $\varepsilon$-error for kernel density estimates.
But this would contradict the bound of Phillips~\cite{Phi13}, which for instance shows a coreset of size $O((1/\varepsilon) \sqrt{\log (1/\varepsilon)})$ in $\mathbb{R}^2$. More specifically, it uses $t = \Theta(d')$ steps to achieve this case, so if $d' = n = \Theta(1/\varepsilon^2)$ then this requires asymptotically as many steps as there are points. Moreover, a careful analysis of their construction shows that the corresponding points in $\mathbb{R}^d$ (using an inverse projection $\phi^{-1} : \Eu{H}_K \to \mathbb{R}^d$ to a set $P \subset \mathbb{R}^d$) would have them so spread out that $\kde_P(x) < c/\sqrt{n}$ (for constant $c$, so $= O(\varepsilon)$ for $n = 1/\varepsilon^2$) for all $x \in \mathbb{R}^d$; hence it is easy to construct an $\varepsilon$-kernel coreset of size $2/\varepsilon$ for this point set. \paragraph{Discretization bounds.} Another series of bounds comes from the Lipschitz factor of the kernels: $C = \max_{x,y,z \in \mathbb{R}^d} \frac{K(z,x) - K(z,y)}{\|x-y\|}$. For most kernels $C$ is a small constant. This implies that $\max_{x,y \in \mathbb{R}^d} \frac{\kde_P(x) - \kde_P(y)}{\|x-y\|} \leq C$ for any $P$. Thus we can, for instance, lay down an infinite grid $G_{\varepsilon} \subset \mathbb{R}^d$ of points so that for all $x \in \mathbb{R}^d$ there exists some $g \in G_\varepsilon$ such that $\|g-x\| \leq \varepsilon$, and no two $g, g' \in G_\varepsilon$ are within distance $2\varepsilon/\sqrt{d}$ of each other. Then we can map each $p \in P$ to $p_g$, the closest point $g \in G_\varepsilon$ (with multiplicity), resulting in $P_G$. By the additive property of $\kde$ and the Lipschitz bound, since each point moves by at most $\varepsilon$, we know that $\|\kde_P - \kde_{P_G}\|_\infty \leq C\varepsilon$. Cortes and Scott~\cite{CS15} provide another approach to the sparse kernel mean problem.
They run Gonzalez's algorithm~\cite{Gon85} for $k$-center on the points $P \subset \mathbb{R}^d$ (iteratively adding points to $Q$, always choosing the point furthest from any point in $Q$) and terminate when the furthest distance to the nearest point in $Q$ is $\Theta(\varepsilon)$. Then they assign weights to $Q$ based on how many points are nearby, similar to the grid argument above. They make an ``incoherence''-based argument, specifically showing that $\|\ensuremath{\hat{\mu}}_P - \ensuremath{\hat{\mu}}_Q\| \leq \sqrt{1-v_Q}$ where $v_Q = \min_{p \in P} \max_{q \in Q} K(p,q)$. This does not translate directly into any of the parameters we study. However, we can use the above discretization bound to argue that if $\Delta$ is bounded, then this algorithm must terminate in $O((\Delta/\sigma \varepsilon)^d)$ steps. \paragraph{Lower bounds.} Finally, there is a simple lower bound of size $\lceil 1/\varepsilon \rceil - 1$ for an $\varepsilon$-coreset $Q$ for kernel density estimates~\cite{Phi13}. Consider a point set $P$ of size $1/\varepsilon-1$ where each point is very far from every other point; then we cannot remove any point, as otherwise it would create too much error at that location. \subsection{Our Results} We have three main results. First, in Section \ref{sec:FW}, we study the kernel herding algorithm for characteristic kernels, and show that after $2/\varepsilon^2$ steps (with no other parameters) it creates a subset $Q \subset P$ so that $\|\ensuremath{\hat{\mu}}_P - \ensuremath{\hat{\mu}}_Q\|_\H \leq \varepsilon$, and hence $\|\kde_P - \kde_Q\|_\infty \leq \varepsilon$. Our result is simple and from first principles, and does not require a weighted coreset, unlike Bach \etal~\cite{BLO12}. Second, in Section \ref{sec:Gauss-rect}, we prove a new discrepancy reduction, which shows that a form of kernel discrepancy for Gaussian kernels is implied by the same coloring with respect to the very commonly studied axis-aligned rectangle range space.
As a result, this allows us to apply a halving approach to create $\varepsilon$-kernel coresets of size $O((1/\varepsilon) \log^d \frac{1}{\varepsilon})$ for constant $d$, polynomially improving upon all previous bounds for $d \geq 3$. Third, in Section \ref{sec:LB}, we show a lower bound: there exist point sets $P$ in dimension $\Omega(1/\varepsilon^2)$ such that any $\varepsilon$-kernel coreset requires $\Omega(1/\varepsilon^2)$ points. Because this construction uses $\Omega(1/\varepsilon^2)$ dimensions, it does not contradict the halving-based results. This applies to every shift-invariant kernel we considered, with a slightly weakened condition for the ball kernel. \section{Analysis of Frank-Wolfe Algorithm for $\varepsilon$-Kernel Coresets} \label{sec:FW} Our first contribution analyzes the Frank-Wolfe algorithm~\cite{FW56} in the context of estimating the kernel mean $\ensuremath{\hat{\mu}}_P$. To simplify notation, let $\mu = \ensuremath{\hat{\mu}}_P$, and for each original point $p_i \in P$ we denote $\phi_i = \phi(p_i) \in \H$ as its representation in $\H$. Our estimate $\ensuremath{\hat{\mu}}_Q$ for $\ensuremath{\hat{\mu}}_P$ will change each step; on the $t$th step it will be labeled $x_t \in \H$. The algorithm is then summarized in Algorithm \ref{alg:fw}. \begin{algorithm} \caption{\label{alg:fw} Frank-Wolfe Algorithm} \begin{algorithmic} \STATE $x_1\leftarrow \text{any of }\phi_i$ \FOR {$t=1,2,\ldots,T$} \STATE $i_t\leftarrow\arg\min_{i\in\{1,\dots,n\}} \langle x_t-\mu, \phi_i-\mu\rangle_\H$ \STATE $x_{t+1}\leftarrow \frac{1}{t} \phi_{i_t}+\frac{t-1}{t} x_t$ \ENDFOR \STATE \textbf{return} $x_{T+1} = \ensuremath{\hat{\mu}}_Q$. \end{algorithmic} \end{algorithm} Before we begin our own analysis specific to approximating the kernel mean, we note that there exists a rich body of analysis of different variants of the Frank-Wolfe algorithm.
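Algorithm \ref{alg:fw} never needs explicit coordinates in $\H$: since $x_t$ is the uniform average of the chosen $\phi_{i_s}$, the selection $\arg\min_i \langle x_t-\mu, \phi_i-\mu\rangle_\H$ reduces, after dropping terms independent of $i$, to $\arg\max_i [\kde_P(p_i) - \kde_Q(p_i)]$, where $Q$ is the multiset of points chosen so far. A minimal sketch using only kernel evaluations (the Gaussian kernel, point counts, and bandwidth here are illustrative choices, not fixed by the text):

```python
import numpy as np

def gaussian_K(X, Y, sigma=1.0):
    # pairwise kernel matrix K[i, j] = exp(-||X_i - Y_j||^2 / sigma^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma**2)

def kernel_herding(P, T, sigma=1.0):
    """Greedy selection of Algorithm 1 via the kernel trick.

    <x_t - mu, phi_i - mu> equals kde_Q(p_i) - kde_P(p_i) plus terms
    independent of i, so argmin becomes argmax_i [kde_P(p_i) - kde_Q(p_i)].
    Returns the (multiset of) indices of the T chosen points."""
    Kmat = gaussian_K(P, P, sigma)
    kde_P = Kmat.mean(axis=1)        # kde_P(p_i) for every i
    col_sum = np.zeros(len(P))       # running sum of K(p_i, q) over chosen q
    chosen = []
    for _ in range(T):
        kde_Q = col_sum / max(len(chosen), 1)   # kde_Q(p_i); 0 when Q empty
        i_t = int(np.argmax(kde_P - kde_Q))
        chosen.append(i_t)
        col_sum += Kmat[:, i_t]
    return chosen

rng = np.random.default_rng(0)
P = rng.normal(size=(200, 2))
Q_idx = kernel_herding(P, T=50)
# Spot-check the L_infty guarantee at some query points: error <= sqrt(2/T).
X = rng.normal(size=(50, 2))
err = np.abs(gaussian_K(X, P).mean(axis=1)
             - gaussian_K(X, P[Q_idx]).mean(axis=1)).max()
```

Note that `chosen` may contain repeated indices; this is consistent with $Q$ being a multiset drawn from $P$.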
In many cases, with careful analysis of the eccentricity~\cite{GJ09}, convexity~\cite{BLO12}, or dual structure~\cite{Cla10} of the problem, one can attain convergence at a rate of $O(1/t)$ (hiding structural terms in the $O(\cdot)$), where random sampling typically achieves a slower rate of $O(1/\sqrt{t})$. Recent progress has focused mainly on approximate settings~\cite{FG16} or situations where one can achieve a ``linear'' rate of roughly $O(c^{-t})$~\cite{JL15}. Such faster linear convergence would, unless some specific properties of the data hold, violate our lower bound, and thus is not possible in general. \begin{theorem}\label{avg} After $T \geq 2/\varepsilon^2$ steps, Algorithm \ref{alg:fw} produces $\ensuremath{\hat{\mu}}_Q$ with $T$ points so $\normr{\ensuremath{\hat{\mu}}_Q - \ensuremath{\hat{\mu}}_P} \leq \varepsilon$. \end{theorem} \begin{proof} We first obtain the following recursive equation. \[ x_{t+1}-\mu = \frac{1}{t}\phi_{i_t}+\frac{t-1}{t}x_{t}-\mu = \frac{1}{t}(\phi_{i_t}-\mu)+\frac{t-1}{t}(x_{t}-\mu) \] Then, by multiplying both sides by $t$ and taking the squared norm, we have \begin{align*} \normr{t (x_{t+1}-\mu)}^2 & = \normr{(\phi_{i_t}-\mu)+(t-1)(x_{t}-\mu)}^2 \\ & = \normr{\phi_{i_t}-\mu}^2+2(t-1)\langle\phi_{i_t}-\mu, x_{t}-\mu\rangle_\H + (t-1)^2\normr{x_{t}-\mu}^2. \end{align*} Next we use two observations to simplify this. First, since $\mu$ is the mean of $\{\phi_1, \ldots, \phi_n\}$ then $0=\sum_{i=1}^n\langle x_t-\mu, \phi_i-\mu\rangle_\H$. Thus by the optimal selection of $i_t$, we have $\langle x_t-\mu, \phi_{i_t}-\mu\rangle_\H \leq 0$. Second, $\norm{\mu}_\H \leq 1$ since it is a convex combination of $\{\phi_1, \ldots, \phi_n\}$, where each $\norm{\phi_i}_\H = 1$; and since the kernel is non-negative, $\langle \phi_{i_t}, \mu\rangle_\H = \kde_P(p_{i_t}) \geq 0$. This implies $\normr{\phi_{i_t}-\mu}^2 = \norm{\phi_{i_t}}_\H^2 - 2\langle \phi_{i_t}, \mu\rangle_\H + \norm{\mu}_\H^2 \leq 2$. Applying the above observations, \[ t^2\normr{x_{t+1}-\mu}^2 \leq (t-1)^2\normr{x_{t}-\mu}^2+2.
\] Now, unrolling this recursion from $t=1$ through $t=T$, we conclude that $T^2 \normr{\ensuremath{\hat{\mu}}_Q-\mu}^2 \leq 2T$, where $\ensuremath{\hat{\mu}}_Q$ is the returned average of the $T$ selected points, and hence $\normr{\ensuremath{\hat{\mu}}_Q - \mu}^2 \leq 2/T$. Thus for $T \geq 2/\varepsilon^2$, we have $\normr{\ensuremath{\hat{\mu}}_Q-\mu} \leq \varepsilon$. \end{proof} \section{Gaussian Kernel Coresets Bounded using Rectangle Discrepancy} \label{sec:Gauss-rect} We now present a completely different way to improve the size of kernel coresets, via discrepancy. In particular, we reduce the kernel coreset problem to a rectangle discrepancy problem. Our result is specific to the Gaussian kernel. Let $(P,\Eu{R}_d)$ be the range space with ground set $P \subset \ensuremath{\mathbb{R}}^d$ and $\Eu{R}_d$ the family of subsets of $P$ defined by inclusion in axis-aligned rectangles. In particular, for any point $x \in \ensuremath{\mathbb{R}}^d$, let $x_i$ represent its $i$th coordinate. We define a combinatorial rectangle $R \subset P$ as $R = \{x \in P \mid m_i \leq x_i \leq M_i \text{ for all } i\}$, specified by $2d$ values: left endpoints $m_1, m_2, \ldots, m_d$ and right endpoints $M_1, M_2, \ldots, M_d$. Let $\ensuremath{{\mathbbm{1}}_R} : \ensuremath{\mathbb{R}}^d \to \{0,1\}$ be the \emph{characteristic function of the rectangle} $R$; for a point $x \in \ensuremath{\mathbb{R}}^d$ it returns $1$ if $x \in R$ and $0$ otherwise. Let $\chi : P \to \{-1, +1\}$ be a coloring on $P$. Then the discrepancy of $(P,\Eu{R}_d)$ with respect to a coloring $\chi$ is defined as $ \mathsf{disc}(P,\chi,\Eu{R}_d) = \max_{R \in \Eu{R}_d} | \sum_{p \in R} \chi(p) |. $ Following Joshi \etal~\cite{JKPV11}, we can define a similar concept for kernels. Let $\Eu{K}_d$ be the family of functions $\Eu{K}_d = \{K(x,\cdot) \mid x \in \ensuremath{\mathbb{R}}^d\}$, for a specific kernel $K$, which in this case will be Gaussian. Now define the kernel discrepancy of $(P, \Eu{K}_d)$ with respect to a coloring $\chi$ as $ \mathsf{disc}(P, \chi, \Eu{K}_d) = \max_{x \in \ensuremath{\mathbb{R}}^d} | \sum_{p \in P} \chi(p) K(x,p)|.
$ \begin{lemma} \label{lem:KtoR} For any point set $P \subset \ensuremath{\mathbb{R}}^d$ and coloring $\chi$, we have $\mathsf{disc}(P,\chi, \Eu{K}_d)\leq \mathsf{disc}(P, \chi, \Eu{R}_d)$. \end{lemma} The proof of the above lemma hinges on two key observations. The first is that the Gaussian kernel shares an important property with the characteristic function of an axis-aligned rectangle: multiplicative separability. Namely, both can be expressed as a product of factors, one per dimension. The second is that the one-dimensional Gaussian function can be expressed as a weighted average of characteristic functions of a family of intervals. Combining these two facts, the signed discrepancy for the Gaussian kernel is a weighted average of signed discrepancies for a family of axis-parallel rectangles, and the triangle inequality then completes the proof. \begin{proof} Define $\mathbbm{1}_{[-r,r]}(x) = 1$ if $\abs{x} < r$, and $\mathbbm{1}_{[-r,r]}(x) = 0$ otherwise. For any $x\in \mathbb{R}$, consider the term $\int_0^\infty 2r\exp(-r^2)\mathbbm{1}_{[-r,r]}(x) \dir{r}$, which can be expanded as follows. \[ \int_0^\infty 2r\exp(-r^2)\mathbbm{1}_{[-r,r]}(x) \dir r = \int_{\abs{x}}^\infty 2r\exp(-r^2) \dir r = \left. -\exp(-r^2)\right\vert_{\abs{x}}^\infty = \exp(-x^2) \numberthis\label{intform} \] Next we show that we can decompose the Gaussian kernel $K(c,\cdot) \in \Eu{K}_d$, as follows for any $x \in \ensuremath{\mathbb{R}}^d$.
\begin{align*} K(c,x) & = \exp(-\norm{x-c}^2) = \exp(- (\sum_{i=1}^d (x_i - c_i)^2)) = \prod_{i=1}^d \exp(-(x_i-c_i)^2) \\ & = \prod_{i=1}^d \left( \int_0^\infty 2r_i\exp(-r_i^2) \mathbbm{1}_{[-r_i, r_i]}(x_i-c_i) \dir r_i \right) \qquad\text{from ($\ref{intform}$)}\\ & = \prod_{i=1}^d \left( \int_0^\infty 2r_i\exp(-r_i^2) \mathbbm{1}_{[c_i-r_i, c_i+r_i]}(x_i) \dir r_i \right) \\ & = \int_0^\infty\dots\int_0^\infty \left(\prod_{i=1}^d 2r_i\exp(-r_i^2) \mathbbm{1}_{[c_i-r_i, c_i+r_i]}(x_i) \right) \dir r_1\dots \dir r_d \\ & = \int_0^\infty\dots\int_0^\infty \prod_{i=1}^d\left( 2r_i\exp(-r_i^2)\right) \prod_{i=1}^d \left(\mathbbm{1}_{[c_i-r_i, c_i+r_i]}(x_i) \right) \dir r_1\dots \dir r_d \\ & = \int_0^\infty\dots\int_0^\infty \prod_{i=1}^d\left( 2r_i\exp(-r_i^2)\right) \mathbbm{1}_{\prod_{i=1}^d[c_i-r_i,c_i+r_i]}(x) \dir r_1\dots \dir r_d \end{align*} The third line is from \[ \mathbbm{1}_{[-r_i, r_i]}(x_i-c_i)=1 \;\; \Leftrightarrow \;\; -r_i \leq x_i-c_i \leq r_i \;\; \Leftrightarrow \;\; c_i-r_i \leq x_i \leq c_i+r_i \;\; \Leftrightarrow \;\; \mathbbm{1}_{[c_i-r_i, c_i+r_i]}(x_i)=1. \] The fourth line is because for any $f,g$, $\left(\int_0^\infty f(x) \dir x\right)\left(\int_0^\infty g(y) \dir y \right) = \int_0^\infty \int_0^\infty f(x)g(y) \dir x \dir y$. And the last line applies the definition $\prod_{i=1}^d \left(\mathbbm{1}_{[c_i-r_i, c_i+r_i]}(x_i)\right) = \mathbbm{1}_{\prod_{i=1}^d[c_i-r_i,c_i+r_i]}(x)$. Then we can apply this rewriting to the definition of kernel discrepancy for any $\chi$, and then factor out all of the positive terms. What remains in the absolute value is exactly $\mathsf{disc}(P,\chi, R)$ for a rectangle $R$ with characteristic function $\mathbbm{1}_R(x) = \mathbbm{1}_{\prod_{i=1}^d[c_i-r_i,c_i+r_i]}(x)$, and hence is bounded by $\mathsf{disc}(P,\chi, \Eu{R}_d)$. 
\begin{align*} \left| \sum_{p\in P} \chi(p) K(c,p) \right| & = \left| \sum_{p\in P} \chi(p) \int_0^\infty\dots\int_0^\infty \prod_{i=1}^d\left( 2r_i\exp(-r_i^2)\right) \mathbbm{1}_{\prod_{i=1}^d[c_i-r_i,c_i+r_i]}(p) \dir r_1\dots \dir r_d \right| \\ & = \left| \int_0^\infty\dots\int_0^\infty \prod_{i=1}^d\left( 2r_i\exp(-r_i^2)\right) \left(\sum_{p\in P} \chi(p) \mathbbm{1}_{\prod_{i=1}^d[c_i-r_i,c_i+r_i]}(p) \right) \dir r_1\dots \dir r_d \right| \\ & \leq \int_0^\infty\dots\int_0^\infty \prod_{i=1}^d\left( 2r_i\exp(-r_i^2)\right) \left| \sum_{p\in P} \chi(p) \mathbbm{1}_{\prod_{i=1}^d[c_i-r_i,c_i+r_i]}(p)\right| \dir r_1\dots \dir r_d \\ & \leq \int_0^\infty\dots\int_0^\infty \prod_{i=1}^d\left( 2r_i\exp(-r_i^2)\right) \mathsf{disc} (P,\chi,\mathcal{R}_d) \dir r_1\dots \dir r_d \\ & = \mathsf{disc}(P,\chi,\mathcal{R}_d)\prod_{i=1}^d\left(\int_0^\infty 2r_i\exp(-r_i^2) \dir r_i\right) \\ & \leq \mathsf{disc}(P,\chi,\mathcal{R}_d) \end{align*} The last line follows by $\int_0^\infty 2r\exp(-r^2) \dir r = \exp(0) = 1$. \end{proof} Furthermore, we define \[ \mathsf{disc}(n,\mathcal{R}_d) = \max_{\abs{P}=n}\min_{\chi} \mathsf{disc}(P,\chi,\mathcal{R}_d) \qquad \textrm{ and } \qquad \mathsf{disc}(n,\mathcal{K}_d) = \max_{\abs{P}=n}\min_{\chi} \mathsf{disc}(P,\chi,\mathcal{K}_d). \] Bansal and Garg~\cite{BG17} showed $\mathsf{disc}(n, \mathcal{R}_d) = O(\log^dn)$, and their proof provides a polynomial time algorithm. Nikolov~\cite{Nik17} soon after showed that $\mathsf{disc}(n,\mathcal{R}_d) = O(\log^{d-\frac{1}{2}}n)$, although this result does not describe how to efficiently construct the coloring. With these results we obtain the following. \begin{corollary} $\mathsf{disc} (n,\Eu{K}_d)=O(\log^{d-\frac{1}{2}}n)$, and for any point set $P$ of size $n$, one can construct a coloring $\chi$ so that $\mathsf{disc}(P, \chi, \Eu{K}_d) = O(\log^d n)$. \end{corollary} The following corollary is a direct implication of the above, for instance following~\cite{Phi13}.
\begin{corollary} For any point set $P \subset \ensuremath{\mathbb{R}}^d$, there exists an $\varepsilon$-kernel coreset of size $O(\frac{1}{\varepsilon} \log^{d-\frac{1}{2}} \frac{1}{\varepsilon})$. Moreover, an $\varepsilon$-kernel coreset of size $O(\frac{1}{\varepsilon} \log^{d} \frac{1}{\varepsilon})$ can be constructed in $O(n + \poly(1/\varepsilon))$ time with high probability. \end{corollary} \section{Lower Bound for Kernel Coresets} \label{sec:LB} In this section, we provide a lower bound matching or nearly matching our algorithms in Section \ref{sec:FW}. To do so, we need to specify some properties of the kernels we consider. We only consider shift-invariant kernels, with a univariate function $f$ satisfying $f(\|x-y\|) = K(x,y)$. Next we consider a class of functions we call \emph{somewhere-steep}. For these kernels, there exists a region of $f$ where it is ``steep;'' its value consistently decreases quickly. Specifically: \begin{itemize} \item There exist a constant $C_f>0$ and values $z_f>r_f>0$ such that $f(z_1)-f(z_2)>C_f\cdot(z_2-z_1)$ for all $z_1\in (z_f-r_f,z_f)$ and $z_2\in (z_f,z_f+r_f)$. \end{itemize} Almost all kernels we have observed in the literature (Gaussian, Laplace, Triangle, Epanechnikov, Sinc, etc.) are steep for all values $z_1, z_2 > 0$, and thus satisfy this property. The exception is the ball kernel. For this we define another class of kernels we call \emph{drop kernels}, where $f$ has a discontinuity at which it drops by more than a constant. Specifically: \begin{itemize} \item There exist a constant $C_f>0$ and values $z_f>r_f>0$ such that $f(z_1)-f(z_2)>C_f$ for all $z_1\in (z_f-r_f,z_f)$ and $z_2\in (z_f,z_f+r_f)$. \end{itemize} \paragraph{Construction.} We now describe the construction used in the lower bounds; it is illustrated in Figure \ref{fig:LB}. Let $z_f$ be a scalar value that will depend on the specific kernel's univariate function $f$. Now consider the point set $P=\{p_i = z_f e_i/\sqrt{2}\mid i=1,\dots,n\} \subset \mathbb{R}^n$, i.e.
the scaled canonical basis, where $e_i = (0,0,\ldots,0,1,0,\ldots,0)$ with the $1$ in the $i$th coordinate. Because of the symmetry in $P$, we can select $Q=\{p_i\mid i=1,\dots,k\}$ without loss of generality. Denote $\kde_Q=\sum_{i=1}^k\beta_i\phi_i$ for some weights $\beta_1, \ldots, \beta_k$ with $\sum_{i=1}^k\beta_i=1$. Let $\bar{p}=\frac{1}{n}\sum_{i=1}^n p_i$ be the mean of the points in $P$, and $\bar{p}_k=\frac{1}{k}\sum_{i=1}^k p_i$ be the mean of the points in $Q$. Let $\bar{p}_{-k}=\frac{1}{n-k}\sum_{i=k+1}^n p_i$ be the mean of the points in $P\setminus Q$, and let $p=\bar{p}_k+\frac{z_f}{\sqrt{2}}\frac{\bar{p}_k-\bar{p}}{\norm{\bar{p}_k-\bar{p}}}$ be the point lying on the line $\bar{p}\bar{p}_k$ such that $p$ and $\bar{p}$ are on opposite sides of $\bar{p}_k$ and $\norm{p-\bar{p}_k}=\frac{z_f}{\sqrt{2}}$. Note that the distances $\norm{p-p_i}$ coincide for all $i=1,\dots,k$; denote this common value by $l_1$. Likewise, the distances $\norm{p-p_i}$ coincide for all $i=k+1,\dots,n$; denote this value by $l_2$. \begin{figure} \begin{center} \includegraphics[width=0.8\textwidth]{LB.pdf} \end{center} \vspace{-.25in} \caption{Illustration of the lower bound construction.\label{fig:LB}} \end{figure} Now we evaluate $\kde_Q-\kde_P$ at $p$, resulting in \begin{align*} (\kde_Q-\kde_P)(p) & = \sum_{i=1}^k(\beta_i-\frac{1}{n})f(\norm{p-p_i})+\sum_{i=k+1}^n(-\frac{1}{n})f(\norm{p-p_i}) \\& = \sum_{i=1}^k(\beta_i-\frac{1}{n})f(l_1)+\sum_{i=k+1}^n(-\frac{1}{n})f(l_2) \\& = (\sum_{i=1}^k\beta_i-\frac{k}{n})f(l_1)+(n-k)(-\frac{1}{n})f(l_2) \\& = (1-\frac{k}{n})(f(l_1)-f(l_2)). \numberthis \label{kdeerr} \end{align*} \begin{lemma} \label{lem:l1l2} $l_1^2 =z_f^2 - z_f^2 / (2k)$ and \[ l_2^2 = \frac{z_f^2}{2} \left(1 + \sqrt{\frac{1}{k} - \frac{1}{n}} + \sqrt{\frac{1}{n-k} - \frac{1}{n}}\right)^2 + \frac{z_f^2}{2} \left(1-\frac{1}{n-k}\right). \] \end{lemma} \begin{proof} Since $\bar{p}=(\frac{k}{n})\bar{p}_k+(\frac{n-k}{n})\bar{p}_{-k}$, the point $\bar{p}_{-k}$ also lies on the line $\bar{p}\bar{p}_k$.
Moreover, $\langle\bar{p}_k-\bar{p},\bar{p}_{-k}-\bar{p}\rangle=-\frac{k}{n-k}\norm{\bar{p}-\bar{p}_k}^2\leq 0$, which shows that $\bar{p}_k$ and $\bar{p}_{-k}$ lie on different sides of $\bar{p}$. For any $i=1,\dots,k$, $\langle \bar{p}-\bar{p}_k, \bar{p}_k-p_i\rangle = 0$, which means that the line $\bar{p}\bar{p}_k$ is perpendicular to the affine span of $\{p_1,\dots,p_k\}$. A similar argument can be applied for $\bar{p}_{-k}$. Also note that, for all $i=1, \dots,k$, the distances $\norm{\bar{p}_k-p_i}$ are the same and equal to $\frac{z_f}{\sqrt{2}}\sqrt{1-\frac{1}{k}}$, and for all $i=k+1,\dots,n$, the distances $\norm{\bar{p}_{-k}-p_i}$ are the same and equal to $\frac{z_f}{\sqrt{2}}\sqrt{1-\frac{1}{n-k}}$. Moreover, it is easy to compute $\norm{\bar{p}-\bar{p}_k}=\frac{z_f}{\sqrt{2}}\sqrt{\frac{1}{k}-\frac{1}{n}}$ and $\norm{\bar{p}-\bar{p}_{-k}}=\frac{z_f}{\sqrt{2}}\sqrt{\frac{1}{n-k}-\frac{1}{n}}$. For all $i=1,\dots,k$, \[ l_1^2 = \norm{p-\bar{p}_k}^2+\norm{\bar{p}_k-p_i}^2= \frac{z_f^2}{2}+\frac{z_f^2}{2}(1-\frac{1}{k})=z_f^2-\frac{z_f^2}{2k}. \] For all $i=k+1,\dots,n$, \[ l_2^2 = \norm{p-\bar{p}_{-k}}^2+\norm{\bar{p}_{-k}-p_i}^2 = \frac{z_f^2}{2} \left( 1+\sqrt{\frac{1}{k}-\frac{1}{n}}+\sqrt{\frac{1}{n-k}-\frac{1}{n}}\right)^2+\frac{z_f^2}{2}\left(1-\frac{1}{n-k}\right). \qedhere \] \end{proof} \paragraph{Analysis.} Since the choice of $Q$ is arbitrary due to the symmetry of $P$, if we can show that (\ref{kdeerr}) is sufficiently large as a function of $k$, then we can prove a lower bound. Given a careful choice of $z_f$ that depends on the kernel, we now evaluate (\ref{kdeerr}) with respect to $k$ using the definitions of $l_1$ and $l_2$ specified in Lemma \ref{lem:l1l2}. But first we observe that, with a very minor\footnote{ The bounds will contain some complicated-appearing terms depending on $z_f$ and $r_f$.
Setting $r_f = z_f/2$, we observe these are small constants: $ (\frac{4 z_f^2}{2 z_f r_f + r_f^2})^2 = (\frac{4 z_f^2}{z_f^2 + 0.25z_f^2})^2 = (\frac{4}{1.25})^2 =10.24, $ and $ \frac{z_f^2}{2(2 z_f r_f - r_f^2)} = \frac{z_f^2}{2z_f^2 - 0.5z_f^2} \approx 0.666. $ } restriction on $k$, we can ensure that $l_1$ and $l_2$ lie in the proper intervals with respect to $z_f$ and $r_f$. \begin{lemma} \label{lem:interval} If $\min\{n-1, n-(\frac{4z_f^2}{2z_fr_f+r_f^2})^2\} \geq k\geq \max\{\frac{z_f^2}{2(2z_fr_f-r_f^2)}, (\frac{4z_f^2}{2z_fr_f+r_f^2})^2, 1\}$, then $l_1\in (z_f-r_f,z_f)$ and $l_2\in(z_f,z_f+r_f)$. \end{lemma} \begin{proof} Clearly from Lemma \ref{lem:l1l2}, $l_1^2$ is less than $z_f^2$. Also, \[ \frac{z_f^2}{2k}\leq \frac{z_f^2}{2}/\left(\frac{z_f^2}{2(2z_fr_f-r_f^2)}\right)=2z_fr_f-r_f^2, \] which means $l_1^2>z_f^2-(2z_fr_f-r_f^2)=(z_f-r_f)^2$. Again using Lemma \ref{lem:l1l2} we see \begin{align*} l_2^2& \geq \frac{z_f^2}{2}(1+\sqrt{\frac{1}{n-k}-\frac{1}{n}})^2+\frac{z_f^2}{2}(1-\frac{1}{n-k})\\ & = \frac{z_f^2}{2}(1+2\sqrt{\frac{1}{n-k}-\frac{1}{n}}+\frac{1}{n-k}-\frac{1}{n}+1-\frac{1}{n-k}) \\ & = \frac{z_f^2}{2}(2+2\sqrt{\frac{1}{n-k}-\frac{1}{n}}-\frac{1}{n}) \geq z_f^2. \end{align*} Moreover from Lemma \ref{lem:l1l2}, \begin{align*} l_2^2 & \leq \frac{z_f^2}{2}(1+2\max\{\sqrt{\frac{1}{k}},\sqrt{\frac{1}{n-k}}\})^2+\frac{z_f^2}{2} \\ & = z_f^2+2z_f^2\max\{\sqrt{\frac{1}{k}},\sqrt{\frac{1}{n-k}}\}+2z_f^2\max\{\sqrt{\frac{1}{k}},\sqrt{\frac{1}{n-k}}\}^2 \\ & \leq z_f^2+4z_f^2\max\{\sqrt{\frac{1}{k}},\sqrt{\frac{1}{n-k}}\} \\ & \leq z_f^2+4z_f^2/\left(\frac{4z_f^2}{2z_fr_f+r_f^2}\right) = (z_f+r_f)^2. \end{align*} That is, $l_1\in (z_f-r_f,z_f)$ and $l_2\in(z_f,z_f+r_f)$. \end{proof} \begin{lemma} \label{lb1} Consider a shift-invariant drop kernel $K$.
There exists a set $P$ of size $n$ such that, for any subset $Q \subset P$ of size $k$ with $\|\kde_P - \kde_Q\|_\infty \leq \varepsilon$, we must have $k \geq n - O(\varepsilon n)$, assuming $\min\{n-1, n-(\frac{4z_f^2}{2z_fr_f+r_f^2})^2\} \geq k\geq \max\{\frac{z_f^2}{2(2z_fr_f-r_f^2)}, (\frac{4z_f^2}{2z_fr_f+r_f^2})^2, 1\}$. \end{lemma} \begin{proof} From Lemma \ref{lem:interval} we have $l_1\in (z_f-r_f,z_f)$ and $l_2\in(z_f,z_f+r_f)$; thus by the definition of a drop kernel and (\ref{kdeerr}) we observe \[ (\kde_Q-\kde_P)(p) = (1-\frac{k}{n})(f(l_1)-f(l_2)) \geq (1-\frac{k}{n})C_f. \] Thus if $\norm{\kde_Q-\kde_P}_\infty\leq \varepsilon$, then $k\geq n-O(\varepsilon n)$. \end{proof} \begin{lemma} Consider a shift-invariant somewhere-steep kernel $K$. There exists a set $P$ of size $n$ such that, for any subset $Q \subset P$ of size $k$ with $\|\kde_P - \kde_Q\|_\infty \leq \varepsilon$, we must have $k = \Omega(1/\varepsilon^2)$, assuming $n/2 \geq k\geq \max\{\frac{z_f^2}{2(2z_fr_f-r_f^2)}, (\frac{4z_f^2}{2z_fr_f+r_f^2})^2, 1\}$. \end{lemma} \begin{proof} We can lower-bound $l_2^2$ in terms of $l_1^2$. \begin{align*} l_2^2 & = \norm{p-\bar{p}_{-k}}^2+\norm{\bar{p}_{-k}-p_i}^2 \;\;\; \geq \;\;\; \norm{p-\bar{p}}^2+\norm{\bar{p}_{-k}-p_i}^2\\ & = \frac{z_f^2}{2}\left(1+\sqrt{\frac{1}{k}-\frac{1}{n}}\right)^2+\frac{z_f^2}{2}(1-\frac{1}{n-k}) \\ & \geq \frac{z_f^2}{2}\left(1+\sqrt{\frac{1}{k}-\frac{1}{n}}\right)^2+\frac{z_f^2}{2}(1-\frac{1}{k}) \;\;=\;\; \frac{z_f^2}{2}\left(1+2\sqrt{\frac{1}{k}-\frac{1}{n}}+(\frac{1}{k}-\frac{1}{n})+(1-\frac{1}{k})\right)\\ & \geq \frac{z_f^2}{2}\left(1+2\sqrt{\frac{1}{k}-\frac{1}{n}}+(1-\frac{1}{k})\right)\\ & = l_1^2+z_f^2\sqrt{(1-\frac{k}{n})}\sqrt{\frac{1}{k}} \;\;\; \geq \;\;\; l_1^2+z_f^2\sqrt{\frac{1}{2k}}. \end{align*} From Lemma \ref{lem:interval} we have $l_1\in (z_f-r_f,z_f)$ and $l_2\in(z_f,z_f+r_f)$.
So from the definition of a somewhere-steep kernel, we can conclude from (\ref{kdeerr}) and $n/2 \geq k$ that \begin{align*} (\kde_Q-\kde_P)(p) & = (1-\frac{k}{n})(f(l_1)-f(l_2)) \geq \frac{1}{2}(f(l_1)-f(l_2)) \geq \frac{C_f}{2}(l_2-l_1)\\ & \geq \frac{C_f}{2}\left(\sqrt{l_1^2+z_f^2\sqrt{\frac{1}{2k}}}-l_1\right) = \frac{C_f}{2}\left(\sqrt{l_1^2+z_f^2\sqrt{\frac{1}{2k}}}+l_1\right)^{-1}z_f^2\sqrt{\frac{1}{2k}} \\ & \geq \frac{C_fz_f}{6}\sqrt{\frac{1}{2k}}. \end{align*} If $\norm{\kde_Q-\kde_P}_\infty\leq \varepsilon$, then $k=\Omega(1/\varepsilon^2)$. \end{proof} \begin{theorem} For the Gaussian or Laplace kernel, there is a set $P$ so that for any subset $Q \subset P$ with $\|\kde_P - \kde_Q\|_\infty \leq \varepsilon$, we have $|Q|=\Omega(1/\varepsilon^2)$. \end{theorem} \bibliographystyle{plain}
https://arxiv.org/abs/1506.02605
Probability inequalities and tail estimates for metric semigroups
We study probability inequalities leading to tail estimates in a general semigroup $\mathscr{G}$ with a translation-invariant metric $d_{\mathscr{G}}$. (An important and central example of this in the functional analysis literature is that of $\mathscr{G}$ a Banach space.) Using our prior work [Ann. Prob. 2017] that extends the Hoffmann-Jorgensen inequality to all metric semigroups, we obtain tail estimates and approximate bounds for sums of independent semigroup-valued random variables, their moments, and decreasing rearrangements. In particular, we obtain the "correct" universal constants in several cases, extending results in the Banach space literature by Johnson-Schechtman-Zinn [Ann. Prob. 1985], Hitczenko [Ann. Prob. 1994], and Hitczenko and Montgomery-Smith [Ann. Prob. 2001]. Our results also hold more generally, in a very primitive mathematical framework required to state them: metric semigroups $\mathscr{G}$. This includes all compact, discrete, or (connected) abelian Lie groups.
\section{Introduction and main results} The goal of this paper is to extend the study of probability theory beyond the Banach space setting. In the present work, we estimate sums of independent random variables in several ways, under the most primitive mathematical assumptions required to state them. The setting is as follows. \begin{definition}\label{Dsemi} A \textit{metric semigroup} is defined to be a semigroup $(\ensuremath{\mathscr G}, \cdot)$ equipped with a metric $d_\ensuremath{\mathscr G} : \ensuremath{\mathscr G} \times \ensuremath{\mathscr G} \to [0,\infty)$ that is translation-invariant. In other words, \[ d_\ensuremath{\mathscr G}(ac,bc) = d_\ensuremath{\mathscr G}(a,b) = d_\ensuremath{\mathscr G}(ca,cb)\ \forall a,b,c \in \ensuremath{\mathscr G}. \] \noindent (Equivalently, $(\ensuremath{\mathscr G},d_\ensuremath{\mathscr G})$ is a metric space equipped with an associative binary operation such that $d_\ensuremath{\mathscr G}$ is translation-invariant.) Similarly, one defines a \textit{metric monoid} and a \textit{metric group}. \end{definition} Metric groups are ubiquitous in probability theory, and subsume all compact and abelian Lie groups as well as normed linear spaces as special cases. More modern examples of recent interest are mentioned presently. Now suppose $(\Omega, \mathscr{A}, \mu)$ is a probability space and $X_1, \dots, X_n \in L^0(\Omega,\ensuremath{\mathscr G})$ are $\ensuremath{\mathscr G}$-valued random variables. Fix $z_0, z_1 \in \ensuremath{\mathscr G}$ and define for $1 \leqslant j \leqslant n$: \begin{equation}\label{Esemigroup} S_j := X_1 X_2 \cdots X_j, \quad U_j := \max_{1 \leqslant i \leqslant j} d_\ensuremath{\mathscr G}(z_1, z_0 S_i), \quad Y_j := d_\ensuremath{\mathscr G}(z_0, z_0 X_j), \quad M_j := \max_{1 \leqslant i \leqslant j} Y_i.
\end{equation} In this paper we discuss bounds that govern the behavior of $U_n$ -- and consequently, of sums $S_n$ of independent $\ensuremath{\mathscr G}$-valued random variables $X_j$ -- in terms of the variables $X_j$, and even $Y_j$ or $M_j$. We are interested in a variety of bounds: (a)~one-sided geometric tail estimates; (b)~approximate two-sided bounds for tail probabilities; (c)~approximate two-sided bounds for moments; and (d)~comparison of moments. For instance, is it possible to obtain bounds for $\ensuremath{\mathbb E}_\mu[U_n^p]^{1/p}$ in terms of the tail distribution for $U_n$, or in terms of $\ensuremath{\mathbb E}_\mu[U_n^q]^{1/q}$ for $p,q > 0$? The latter question has been well-studied in the literature for Banach spaces, and universal bounds that grow at the ``correct'' rate have been obtained for all $q \gg 0$. We explore the question of obtaining correctly growing universal constants for metric semigroups, which include not only normed linear spaces and inner product spaces, but also all abelian and compact Lie groups. Our results show that the universal constants in such inequalities do not depend on the semigroup in question. \subsection{Motivations} Our motivations in developing probability theory in such general settings are both modern and classical. An increasing number of modern-day theoretical and applied settings require mathematical frameworks that go beyond Banach spaces. For instance, data and random variables may take values in manifolds such as (real or complex) Lie groups. Compact or abelian Lie groups also commonly feature in the literature, including permutation groups and other finite groups, lattices, orthogonal groups, and tori. In fact every abelian, Hausdorff, metrizable, topologically complete group $G$ admits a translation-invariant metric \cite{Kl}, though this fails to hold for cancellative semigroups \cite{KG}. Certain classes of amenable groups are also metric groups (see \cite{KR4} for more details). 
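As a toy illustration of Definition \ref{Dsemi} beyond normed spaces, consider the circle group $\mathbb{R}/\mathbb{Z}$ (a compact abelian Lie group) with its arc-length metric. The short Python sketch below is our own, not from the paper; it numerically checks the defining translation-invariance property.

```python
import math
import random

def circle_add(a, b):
    """Group operation on R/Z: addition modulo 1."""
    return (a + b) % 1.0

def circle_dist(a, b):
    """Arc-length metric on R/Z: the shorter way around the circle."""
    d = abs(a - b) % 1.0
    return min(d, 1.0 - d)

# Translation invariance d(ac, bc) = d(a, b) -- the defining property of a
# metric (semi)group -- checked on random triples (up to float rounding).
random.seed(0)
for _ in range(1000):
    a, b, c = (random.random() for _ in range(3))
    lhs = circle_dist(circle_add(a, c), circle_add(b, c))
    assert math.isclose(lhs, circle_dist(a, b), abs_tol=1e-9)
```

The same pattern of checks applies verbatim to the $2$-torsion group of labelled graphs under symmetric difference of edge sets, where the metric counts the edges in the symmetric difference.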
Other modern examples arise in the study of large networks and include the space of graphons with the cut norm, which arises naturally out of combinatorics and is related to many applications \cite{Lo}. In a parallel vein, the space of labelled graphs $\ensuremath{\mathscr G}(V)$ on a fixed vertex set $V$ is a $2$-torsion metric group (see \cite{KR1,KR2}), hence does not embed into a normed linear space. With these modern examples in mind, in this paper we develop novel techniques for proving maximal inequalities -- as well as comparison results between tail distributions and various moments -- for sums of independent random variables taking values in the aforementioned groups, which are not Banach spaces. At the same time, we also have theoretical motivations in mind when developing probability theory on non-linear spaces such as $\ensuremath{\mathscr G}(V)$ and beyond. Throughout the past century, the emphasis in probability has shifted somewhat from proving results on stochastic convergence, to obtaining sharper and stronger bounds on random sums, in increasingly weaker settings. A celebrated achievement of probability theory has been to develop a rigorous and systematic framework for studying the behavior of sums of (independent) random variables; see e.g.~\cite{LT}. In this vein, we provide \textit{unifications} of our results on graph space with those in the Banach space literature, by proving them \textit{in the most primitive mathematical framework possible}. In particular, our results apply to compact/abelian/discrete Lie groups, as well as normed linear spaces. For example, maximal inequalities by Hoffmann-J{\o}rgensen, L\'evy, Ottaviani--Skorohod, and Mogul'skii require merely the notions of a metric and a binary associative operation to state them. Thus one only needs a separable metric semigroup $\ensuremath{\mathscr G}$ rather than a Banach space to state these inequalities. However, note that working in a metric semigroup raises technical questions. 
For instance, the lack of an identity element means one has to specify how to compute magnitudes of $\ensuremath{\mathscr G}$-valued random variables (before trying to bound or estimate them); also, it is not apparent how to define truncations of random variables. The lack of inverses, norms, or commutativity implies in particular that one cannot rescale or subtract random variables. Thus new methods need to be developed when the minimal mathematical structure of $\ensuremath{\mathscr G}$ makes it impossible to adopt and extend the existing proofs of the aforementioned results. In the present work, we hope to show that the approach of working with arbitrary metric semigroups turns out to be richly rewarding in (i)~obtaining the above (and other) results for non-Banach settings; (ii)~unifying these results with the existing Banach space results in order to hold in the greatest possible generality; and (iii)~further strengthening these unified versions where possible. \subsection{Organization and results} We now describe the organization and contributions of the present paper. In Section~\ref{Slevy} we prove the Mogul'skii--Ottaviani--Skorohod inequalities for all metric semigroups $\ensuremath{\mathscr G}$. As an application, we show L\'evy's equivalence for stochastic convergence in metric semigroups. In Section~\ref{Stails}, we come to our main goal in this paper, of estimating and comparing moments and tail probabilities for sums of independent $\ensuremath{\mathscr G}$-valued random variables. Our main tool is a variant of Hoffmann-J{\o}rgensen's inequality for metric semigroups, which is shown in recent work \cite{KR3}. The relevant part for our purposes is now stated. \begin{theorem}[Khare and Rajaratnam, \cite{KR3}]\label{Thj} Notation as in Definition~\ref{Dsemi} and Equation~\eqref{Esemigroup}. Suppose $X_1, \dots$, $X_n \in L^0(\Omega,\ensuremath{\mathscr G})$ are independent. 
Fix integers $n_1, \dots, n_k \in \ensuremath{\mathbb N}$ and numbers $t_1, \dots, t_k, s \in [0,\infty)$, and define $I_0 := \{ 1 \leqslant i \leqslant k : \bp{U_n \leqslant t_i}^{n_i - \delta_{i1}} \leqslant 1/n_i! \}$, where $\delta_{i1}$ denotes the Kronecker delta. Now if $\sum_{i=1}^k n_i \leqslant n+1$, then: \begin{align*} &\ \bp{U_n > (2 n_1 - 1) t_1 + 2 \sum_{i=2}^k n_i t_i + \left( \sum_{i=1}^k n_i - 1 \right) s}\\ \leqslant &\ \bp{M_n > s} + \bp{U_n \leqslant t_1}^{{\bf 1}_{1 \notin I_0}} \prod_{i \in I_0} \bp{U_n > t_i}^{n_i} \prod_{i \notin I_0} \frac{1}{n_i!} \left( \frac{\bp{U_n > t_i}}{\bp{U_n \leqslant t_i}} \right)^{n_i}. \end{align*} \end{theorem} Remark that Theorem~\ref{Thj} generalizes the original Hoffmann-J{\o}rgensen inequality in three ways: (i)~mathematically it strengthens the state-of-the-art even for real variables; (ii)~it unifies previous results by Johnson and Schechtman [{\it Ann.~Prob.}~17], Klass and Nowicki [{\it Ann.~Prob.} 28], and Hitczenko and Montgomery-Smith [{\it Ann.~Prob.}~29] in the Banach space literature; and (iii) the result holds in the most primitive setting needed to state it, thereby being applicable also to e.g.~Lie groups. We now discuss several ways in which to estimate the size of sums of independent $\ensuremath{\mathscr G}$-valued random variables, for metric semigroups $\ensuremath{\mathscr G}$. We present two results in this section, corresponding to two of the estimation techniques discussed in the introduction. (For a third result, see Theorem~\ref{Tbounds}.) The \textbf{first} approach, informally speaking, uses the Hoffmann-J{\o}rgensen inequality to generalize an upper bound for $\ensuremath{\mathbb E}_\mu[\|S_n\|^p]$ in terms of the quantiles of $\|S_n\|$ as well as $\ensuremath{\mathbb E}_\mu[M_n^p]$ -- but now in the ``minimal'' framework of metric semigroups. More precisely, we show that controlling the behavior of $X_n$ is equivalent to controlling $S_n$ or $U_n$, for all metric semigroups. 
\begin{utheorem}\label{Thj2} Suppose $A \subset \ensuremath{\mathbb N}$ is either $\ensuremath{\mathbb N}$ or $\{ 1, \dots, N \}$ for some $N \in \ensuremath{\mathbb N}$. Suppose $(\ensuremath{\mathscr G}, d_\ensuremath{\mathscr G})$ is a separable metric semigroup, $z_0, z_1 \in \ensuremath{\mathscr G}$, and $X_n \in L^0(\Omega,\ensuremath{\mathscr G})$ are independent for all $n \in A$. If $\sup_{n \in A} d_\ensuremath{\mathscr G}(z_1, z_0 S_n) < \infty$ almost surely, then for all $p \in (0,\infty)$, \[ \ensuremath{\mathbb E}_\mu \left[ \sup_{n \in A} d_\ensuremath{\mathscr G}(z_0, z_0 X_n)^p \right] < \infty \quad \iff \quad \ensuremath{\mathbb E}_\mu \left[ \sup_{n \in A} d_\ensuremath{\mathscr G}(z_1, z_0 S_n)^p \right] < \infty. \] \end{utheorem} This result extends \cite[Theorem 3.1]{HJ} by Hoffmann-J{\o}rgensen to the ``minimal'' framework of metric semigroups. The proofs of Theorem~\ref{Thj2} and the next result use the notion of the quantile functions, or decreasing rearrangements, of $\ensuremath{\mathscr G}$-valued random variables: \begin{definition}\label{Ddecrear} Suppose $(\ensuremath{\mathscr G}, d_\ensuremath{\mathscr G})$ is a metric semigroup, and $X : (\Omega, \mathscr{A}, \mu) \to (\ensuremath{\mathscr G},\mathscr{B}_\ensuremath{\mathscr G})$. We define the \textit{decreasing} (or \textit{non-increasing}) \textit{rearrangement} of $X$ to be the right-continuous inverse $X^*$ of the function $t \mapsto \bp{d_\ensuremath{\mathscr G}(z_0, z_0 X) > t}$, for any $z_0 \in \ensuremath{\mathscr G}$. In other words, $X^*$ is the real-valued random variable defined on $[0,1]$ with the Lebesgue measure, as follows: \[ X^*(t) := \sup \{ y \in [0,\infty) : \bp{d_\ensuremath{\mathscr G}(z_0, z_0 X) > y} > t \}. \] \end{definition} Note that $X^*$ has exactly the same law as $d_\ensuremath{\mathscr G}(z_0, z_0 X)$. 
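For intuition, the decreasing rearrangement has a simple empirical analogue: given a sample of $n$ observed magnitudes $d_\ensuremath{\mathscr G}(z_0, z_0 X)$, the value $X^*(t)$ is approximated by the $(\lfloor tn \rfloor)$-th largest sample. The Python sketch below is our own illustration (the helper name is hypothetical).

```python
import numpy as np

def decreasing_rearrangement(magnitudes):
    """Empirical version of X*: the right-continuous inverse of
    t -> P(magnitude > t), built from samples of d(z0, z0 X).

    X*(t) = sup{ y >= 0 : P(magnitude > y) > t }; for an empirical
    sample of size n this is the floor(t*n)-th largest value."""
    srt = np.sort(np.asarray(magnitudes, dtype=float))[::-1]  # largest first
    n = len(srt)

    def X_star(t):
        k = int(np.floor(t * n))
        return float(srt[min(k, n - 1)])

    return X_star
```

For the sample $\{3,1,4,1,5\}$ this gives $X^*(0)=5$ (the maximum) and $X^*(0.5)=3$, and $X^*$ is non-increasing in $t$, matching the definition above.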
Moreover, if $(\ensuremath{\mathscr G}, \| \cdot \|)$ is a normed linear space, then $d_\ensuremath{\mathscr G}(z_0, z_0 X)$ can be replaced by $\|X\|$, and often papers in the literature refer to $X^*$ as the decreasing rearrangement of $\|X\|$ instead of $X$ itself. The convention that we adopt above is slightly weaker. The \textbf{second} approach provides another estimate on the size of $S_n$ through its moments, by comparing $\| S_n \|_q$ to $\| S_n \|_p$ -- or more precisely, $\ensuremath{\mathbb E}_\mu[U_n^q]^{1/q}$ to $\ensuremath{\mathbb E}_\mu[U_n^p]^{1/p}$ -- for $0 < p \leqslant q$. Moreover, the constants of comparison are universal, valid for all abelian semigroups and all finite sequences of independent random variables, and depend only on a threshold: \begin{utheorem}\label{Tupq} Given $p_0 > 0$, there exist universal constants $c = c(p_0), c' = c'(p_0) > 0$ depending only on $p_0$, such that for all choices of (a) separable abelian metric semigroups $(\ensuremath{\mathscr G}, d_\ensuremath{\mathscr G})$, (b) finite sequences of independent $\ensuremath{\mathscr G}$-valued random variables $X_1, \dots, X_n$, (c) $q \geqslant p \geqslant p_0$, and (d) $\epsilon \in (-q,\log(16)]$, we have \begin{align*} \ensuremath{\mathbb E}_\mu[U_n^q]^{1/q} \leqslant &\ c \frac{q}{\max(p, \log(\epsilon+q))} ( \ensuremath{\mathbb E}_\mu[U_n^p]^{1/p} + M_n^*(e^{-q}/8)) + c \ensuremath{\mathbb E}_\mu[M_n^q]^{1/q}\\ \leqslant &\ c' \frac{q}{\max(p, \log(\epsilon+q))} (\ensuremath{\mathbb E}_\mu[U_n^p]^{1/p} + \ensuremath{\mathbb E}_\mu[M_n^q]^{1/q}) \quad \text{ if } \epsilon \geqslant \min(1, e-p_0). \end{align*} \noindent Moreover, we may choose \[ c'(p_0) = c(p_0) \cdot \left( 8^{1/p_0}e + \max(1, \frac{\log(\epsilon+p_0)}{p_0}) \right). 
\] \end{utheorem} Theorem~\ref{Tupq} extends a host of results in the Banach space literature, including results by Johnson--Schechtman--Zinn [{\it Ann.~Prob.}~13], Hitczenko [{\it Ann.~Prob.}~22], and Hitczenko and Montgomery-Smith [{\it Ann.~Prob.}~29]. (See also \cite[Theorem 6.20]{LT} and \cite[Proposition 1.4.2]{KW}.) Theorem~\ref{Tupq} also yields the correct order of the constants as $q \to \infty$, as discussed by Johnson \textit{et al.}~in \textit{loc.~cit.}, where they extend previous work on Khinchin's inequality by Rosenthal \cite{Ro}. Moreover, all of these results are shown for Banach spaces. Theorem~\ref{Tupq} holds additionally for all compact Lie groups, finite abelian groups and lattices, and spaces of labelled and unlabelled graphs. \section{L\'evy's equivalence in metric semigroups}\label{Slevy} In this section we prove: \begin{theorem}[L\'evy's Equivalence]\label{Tlevy} Suppose $(\ensuremath{\mathscr G}, d_\ensuremath{\mathscr G})$ is a complete separable metric semigroup, $X_n : (\Omega, \mathscr{A}, \mu) \to (\ensuremath{\mathscr G}, \mathscr{B}_\ensuremath{\mathscr G})$ are independent, $X \in L^0(\Omega, \ensuremath{\mathscr G})$, and $S_n$ is defined as in \eqref{Esemigroup}. Then \[ S_n \longrightarrow X\ a.s.~P_\mu \quad \iff \quad S_n \conv{P} X. \] \noindent Moreover, if these conditions fail, then $S_n$ diverges almost surely. \end{theorem} Special cases of this result have been shown in the literature. For instance, \cite[\S 9.7]{Du} considers $\ensuremath{\mathscr G} = \mathbb{R}^n$. The more general case of a separable Banach space $\ensuremath{\mathbb B}$ was shown by It\^{o}--Nisio \cite[Theorem 3.1]{IN}, as well as by Hoffmann-J{\o}rgensen and Pisier \cite[Lemma 1.2]{HJP}. The most general version in the literature to date is by Tortrat, who proved the result for a complete separable metric group in \cite{To2}.
Thus Theorem~\ref{Tlevy} is the closest to assuming only the minimal structure necessary to state the result (as well as to prove it). In order to prove Theorem~\ref{Tlevy}, we first study basic properties of metric semigroups. Note that for a metric group, the following is standard; see \cite{Kl}, for instance. \begin{lemma}\label{Linverse} If $(\ensuremath{\mathscr G}, d_\ensuremath{\mathscr G})$ is a metric (semi)group, then the translation-invariance of $d_\ensuremath{\mathscr G}$ implies the ``triangle inequality'': \begin{equation}\label{Etriangle} d_\ensuremath{\mathscr G}(y_1 y_2, z_1 z_2) \leqslant d_\ensuremath{\mathscr G}(y_1, z_1) + d_\ensuremath{\mathscr G}(y_2, z_2)\ \forall y_i, z_i \in \ensuremath{\mathscr G}, \end{equation} \noindent and in turn, this implies that each (semi)group operation is continuous. If instead $\ensuremath{\mathscr G}$ is a group equipped with a metric $d_\ensuremath{\mathscr G}$, then except for the last two statements, any two of the following assertions imply the other two: \begin{enumerate} \item $d_\ensuremath{\mathscr G}$ is left-translation invariant: $d_\ensuremath{\mathscr G}(ca, cb) = d_\ensuremath{\mathscr G}(a,b)$ for all $a,b,c \in \ensuremath{\mathscr G}$. In other words, left-multiplication by any $c \in \ensuremath{\mathscr G}$ is an isometry. \item $d_\ensuremath{\mathscr G}$ is right-translation invariant. \item The inverse map $: \ensuremath{\mathscr G} \to \ensuremath{\mathscr G}$ is an isometry. Equivalently, the triangle inequality~\eqref{Etriangle} holds. \item $d_\ensuremath{\mathscr G}$ is invariant under all inner/conjugation automorphisms. \end{enumerate} \end{lemma} In order to show Theorem~\ref{Tlevy} for metric semigroups, we collect in the following proposition a few preliminary results from \cite{KR4}, and will use these below without further reference. 
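Before that, and since~\eqref{Etriangle} is invoked constantly in the sequel, we record its one-line derivation from the two-sided translation-invariance of $d_\ensuremath{\mathscr G}$ (a routine check, included for the reader's convenience):

```latex
% Ordinary triangle inequality for the metric, then right-invariance
% for the first term and left-invariance for the second:
d_\ensuremath{\mathscr G}(y_1 y_2, z_1 z_2)
  \leqslant d_\ensuremath{\mathscr G}(y_1 y_2, z_1 y_2) + d_\ensuremath{\mathscr G}(z_1 y_2, z_1 z_2)
  = d_\ensuremath{\mathscr G}(y_1, z_1) + d_\ensuremath{\mathscr G}(y_2, z_2).
```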
\begin{prop}[\cite{KR4}]\label{Cstrict} Suppose $(\ensuremath{\mathscr G}, d_\ensuremath{\mathscr G})$ is a metric semigroup, and $a,b \in \ensuremath{\mathscr G}$. Then \begin{equation}\label{Estrict} d_\ensuremath{\mathscr G}(a,ba) = d_\ensuremath{\mathscr G}(b,b^2) = d_\ensuremath{\mathscr G}(a,ab) \end{equation} \noindent is independent of $a \in \ensuremath{\mathscr G}$. Moreover, a set $\ensuremath{\mathscr G}$ is a metric semigroup only if $\ensuremath{\mathscr G}$ is a metric monoid, or the set of non-identity elements in a metric monoid $\ensuremath{\mathscr G}'$; these two cases occur precisely when the number of idempotents in $\ensuremath{\mathscr G}$ is one or zero, respectively. Furthermore, the metric monoid $\ensuremath{\mathscr G}'$ is (up to a monoid isomorphism) the unique smallest element in the class of metric monoids containing $\ensuremath{\mathscr G}$ as a sub-semigroup. \end{prop} \begin{remark}\label{Rstrict} If needed below, we will denote the unique metric monoid containing a given metric semigroup $\ensuremath{\mathscr G}$ by $\ensuremath{\mathscr G}' := \ensuremath{\mathscr G} \cup \{ 1' \}$. Note that the idempotent $1'$ may already be in $\ensuremath{\mathscr G}$, in which case $\ensuremath{\mathscr G} = \ensuremath{\mathscr G}'$. One consequence of Proposition~\ref{Cstrict} is that instead of working with metric semigroups, one can work with the associated monoid $\ensuremath{\mathscr G}'$. (In other words, the (non)existence of the identity is not an issue in many such cases.) This helps simplify other calculations.
For instance, what would be a lengthy, inductive (yet straightforward) computation now becomes much simpler: for nonnegative integers $k,l$, and $z_0, z_1, \dots, z_{k+l} \in \ensuremath{\mathscr G}$, the triangle inequality~\eqref{Etriangle} implies: \[ d_\ensuremath{\mathscr G}(z_0 \cdots z_k, z_0 \cdots z_{k+l}) = d_{\ensuremath{\mathscr G}'}(1', \prod_{i=1}^l z_{k+i}) \leqslant \sum_{i=1}^l d_{\ensuremath{\mathscr G}'}(1', z_{k+i}) = \sum_{i=1}^l d_\ensuremath{\mathscr G}(z_0, z_0 z_{k+i}). \] \end{remark} \subsection{The Mogul'skii inequalities and proof of L\'evy's equivalence} Like L\'evy's Equivalence (Theorem~\ref{Tlevy}) and the Hoffmann-J{\o}rgensen inequality (Theorem~\ref{Thj}), many other maximal and minimal inequalities can be formulated using only the notions of a distance function and of a semigroup operation. We now extend to metric semigroups two inequalities by Mogul'skii, which were used in \cite{Mog} to prove a law of the iterated logarithm in normed linear spaces. The following result will be useful in proving Theorem~\ref{Tlevy}. \begin{prop}[Mogul'skii--Ottaviani--Skorohod inequalities]\label{Pmog} Suppose $(\ensuremath{\mathscr G}, d_\ensuremath{\mathscr G})$ is a separable metric semigroup, $z_0, z_1 \in \ensuremath{\mathscr G}$, $a,b \in [0,\infty)$, and $X_1, \dots, X_n \in L^0(\Omega,\ensuremath{\mathscr G})$ are independent. Then for all integers $1 \leqslant m \leqslant n$, \begin{align*} & \bp{\min_{m \leqslant k \leqslant n} d_\ensuremath{\mathscr G}(z_1, z_0 S_k) \leqslant a} \cdot \min_{m \leqslant k \leqslant n} \bp{d_\ensuremath{\mathscr G}(S_k, S_n) \leqslant b} \leqslant \bp{d_\ensuremath{\mathscr G}(z_1, z_0 S_n) \leqslant a + b},\\ & \bp{\max_{m \leqslant k \leqslant n} d_\ensuremath{\mathscr G}(z_1, z_0 S_k) \geqslant a} \cdot \min_{m \leqslant k \leqslant n} \bp{d_\ensuremath{\mathscr G}(S_k, S_n) \leqslant b} \leqslant \bp{d_\ensuremath{\mathscr G}(z_1, z_0 S_n) \geqslant a - b}. 
\end{align*} \end{prop} These inequalities strengthen \cite[Lemma 1]{Mog} from normed linear spaces to arbitrary metric semigroups. Also note that the second inequality generalizes the \textit{Ottaviani--Skorohod inequality} to all metric semigroups. Indeed, sources such as \cite[\S 9.7.2]{Du} prove this result in the special case $\ensuremath{\mathscr G} = (\mathbb{R}^n, +), z_0 = z_1 = 0, m=1, a = \alpha + \beta, b = \beta$, with $\alpha, \beta > 0$. We omit the proof of Proposition~\ref{Pmog} for brevity, as it involves standard arguments. Using this result, one can now prove Theorem~\ref{Tlevy}. The idea is to use the approach in \cite{Du}; however, it needs to be suitably modified in order to work at the current level of generality. \begin{proof}[Proof of Theorem~\ref{Tlevy}] The forward implication is true in much greater generality. Conversely, we claim that the sequence $(S_n)$ is almost surely Cauchy if it converges in probability to $X$. Given $\epsilon,\eta > 0$, the assumption and definitions imply that there exists $n_0 \in \ensuremath{\mathbb N}$ such that \[ \bp{d_\ensuremath{\mathscr G}(S_m,X) \geqslant \epsilon/8} < \frac{\eta}{2(1 + \eta)}, \quad \forall m \geqslant n_0. \] \noindent This implies: $\displaystyle \bp{d_\ensuremath{\mathscr G}(S_m,S_n) \geqslant \epsilon / 4} < \frac{\eta}{1 + \eta}\ \forall n \geqslant m \geqslant n_0$. Now define $S'_i := \prod_{j=1}^i X_{n_0+j}$.
Fix $n > n_0$ and apply Proposition~\ref{Pmog} to $\{ X_{n_0+i} : i \in \ensuremath{\mathbb N} \}$ with $m=1, a = \epsilon/2, b = \epsilon/4$, and $z_0 = z_1$: \begin{align*} & \bp{\max_{n_0 + 1 \leqslant m \leqslant n} d_\ensuremath{\mathscr G}(S_{n_0}, S_m) \geqslant \epsilon/2} = \bp{\max_{1 \leqslant i \leqslant n - n_0} d_{\ensuremath{\mathscr G}'}(z_0, z_0 S'_i) \geqslant \epsilon/2}\\ \leqslant &\ \frac{\bp{d_{\ensuremath{\mathscr G}'}(z_0, z_0 S'_{n-n_0}) \geqslant \epsilon/4}}{1 - \max_{1 \leqslant i \leqslant n-n_0} \bp{d_{\ensuremath{\mathscr G}'}(S'_i,S'_{n-n_0}) \geqslant \epsilon/4}} < \frac{\eta/ (1 + \eta)}{1 - \eta/(1+\eta)} = \eta. \end{align*} Now define $Q_{n_0} := \sup_{n > n_0} d_\ensuremath{\mathscr G}(S_{n_0}, S_n)$ and $\delta_{n_0} := \sup_{n > m > n_0} d_\ensuremath{\mathscr G}(S_m,S_n)$. Then $\delta_{n_0} \leqslant 2 Q_{n_0}$; moreover, taking the limit of the above inequality as $n \to \infty$ yields: \[ \bp{Q_{n_0} \geqslant \epsilon / 2} \leqslant \eta \quad \implies \quad \bp{\delta_{n_0} \geqslant \epsilon} \leqslant \eta. \] \noindent But then $\bp{\sup_{n > m} d_\ensuremath{\mathscr G}(S_m,S_n) \geqslant \epsilon} \leqslant \eta$ for all $m > n_0$. Thus, $S_n$ is Cauchy almost everywhere. Since $\ensuremath{\mathscr G}$ is complete, the result now follows from \cite[Lemma 9.2.4]{Du}; that the almost sure limit is $X$ is because $S_n \conv{P} X$. Finally, $S_n$ either converges almost surely or diverges almost surely by the Kolmogorov 0-1 law, which concludes the proof. \end{proof} We remark for completeness that the other L\'evy equivalence has been addressed in \cite{Csi,Ga,To2} for various classes of topological groups. See also \cite{MS} for a variant in discrete completely simple semigroups, \cite{Du,IN} for Banach space versions, and \cite{KR4} for a version over any normed abelian metric group. 
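As a concrete check on Proposition~\ref{Pmog} (this computation is ours), specializing its second inequality to $\ensuremath{\mathscr G} = (\mathbb{R}^n,+)$, $z_0 = z_1 = 0$, $m = 1$, $a = \alpha+\beta$, $b = \beta$ recovers the classical Ottaviani--Skorohod inequality as in \cite[\S 9.7.2]{Du}:

```latex
% Here d_{\mathscr G}(S_k, S_n) = |S_n - S_k| and a - b = \alpha:
\bp{\max_{1 \leqslant k \leqslant n} |S_k| \geqslant \alpha + \beta}
  \cdot \min_{1 \leqslant k \leqslant n} \bp{|S_n - S_k| \leqslant \beta}
  \leqslant \bp{|S_n| \geqslant \alpha}.
```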
\section{Measuring the magnitude of sums of independent random variables}\label{Stails} We now prove Theorems~\ref{Thj2} and~\ref{Tupq} using the Hoffmann-J{\o}rgensen inequality in Theorem~\ref{Thj}. Recall that the Banach space version of this inequality is extremely important in the literature and is widely used in bounding sums of independent Banach space-valued random variables. Having proved Theorem~\ref{Thj}, an immediate application of our main result is in obtaining the first such bounds for metric semigroups $\ensuremath{\mathscr G}$. We also provide uniformly good $L^p$-bounds and tail probability bounds on sums $S_n$ of independent $\ensuremath{\mathscr G}$-valued random variables. \subsection{An upper bound by Hoffmann-J{\o}rgensen} In this subsection we prove Theorem~\ref{Thj2}. The proof uses basic properties of decreasing rearrangements (see Definition~\ref{Ddecrear}), which we record here and use below, possibly without reference. \begin{prop}\label{Pdecrear} Suppose $X, Y : (\Omega, \mathscr{A}, \mu) \to [0,\infty)$ are random variables, and \[ x,\alpha,\beta,\gamma > 0, \quad t \in [0,1]. \] \begin{enumerate} \item $X^*(t) \leqslant x$ if and only if $\bp{X > x} \leqslant t$. \item $X^*(t)$ is decreasing in $t \in [0,1]$ and increasing in $X \geqslant 0$. \item $(X/x)^*(t) = X^*(t)/x$. \item Suppose $\bp{X > x} \leqslant \beta \bp{Y > \gamma x}$ for all $x>0$. Then for all $p \in (0,\infty)$ and $t \in (0,1)$, \[ \ensuremath{\mathbb E}_\mu[Y^p] \geqslant \beta^{-1} \gamma^p \ensuremath{\mathbb E}_\mu[X^p], \qquad \ensuremath{\mathbb E}_\mu[X^p] \geqslant t X^*(t)^p. \] \item Fix finitely many tuples of positive constants $(\alpha_i, \beta_i, \gamma_i, \delta_i)_{i=1}^N$, and real-valued nondecreasing functions $f_i$ such that for all $x>0$ there exists at least one $i$ such that \begin{equation}\label{Edecrear0} f_i(\bp{X > \alpha_i x}) \leqslant \beta_i \bp{Y > \gamma_i x}^{\delta_i}. 
\end{equation} \noindent Then \begin{equation}\label{Edecrear} X^*(t) \leqslant \max_{1 \leqslant i \leqslant N} \frac{\alpha_i}{\gamma_i} Y^*((f_i(t)/\beta_i)^{1/\delta_i}). \end{equation} \noindent If on the other hand~\eqref{Edecrear0} holds for all $i$, then $\displaystyle X^*(t) \leqslant \min_{1 \leqslant i \leqslant N} \frac{\alpha_i}{\gamma_i} Y^*((f_i(t)/\beta_i)^{1/\delta_i})$. \end{enumerate} \end{prop} Using these properties, we now sketch the proof of one of the main results in this paper. \begin{proof}[Proof of Theorem~\ref{Thj2}] The backward implication is easy. Conversely, we first claim that controlling sums of $\ensuremath{\mathscr G}$-valued $L^p$ random variables in probability (i.e., in $L^0$) allows us to control these sums in $L^p$ as well, for $p>0$. Namely, we make the following claim:\medskip \textit{Suppose $(\ensuremath{\mathscr G}, d_\ensuremath{\mathscr G})$ is a separable metric semigroup, $p \in (0,\infty)$, and $X_1, \dots, X_n \in L^p(\Omega,\ensuremath{\mathscr G})$ are independent. Now fix $z_0, z_1 \in \ensuremath{\mathscr G}$ and let $S_k, U_n, M_n$ be as in Definition~\ref{Dsemi} and Equation~\eqref{Esemigroup}. Then,} \[ \ensuremath{\mathbb E}_\mu[U_n^p] \leqslant 2^{1 + 2p} (\ensuremath{\mathbb E}_\mu[M_n^p] + U_n^*(2^{-1-2p})^p). \] Note that the claim is akin to the upper bound by Hoffmann-J{\o}rgensen that bounds $\ensuremath{\mathbb E}_\mu[\| S_n \|^p]$ in terms of $\ensuremath{\mathbb E}_\mu[M_n^p]$ and the quantiles of $\| S_n \|$ for Banach space-valued random variables. (See \cite[proof of Theorem 3.1]{HJ} and \cite[Lemma 3.1]{GZ}.) We omit its proof for brevity, as a similar statement is asserted in \cite[Proposition 6.8]{LT}.
Now given the claim, define: \begin{align}\label{Edefma} \begin{aligned} t_n := & U_n^*(2^{-1-2p}) \quad (n \in A),\qquad U_A := \sup_{n \in A} d_\ensuremath{\mathscr G}(z_1, z_0 S_n), \\ M_A := &\ \sup_{n \in A} d_\ensuremath{\mathscr G}(z_0, z_0 X_n), \qquad \quad t_A := U_A^*(2^{-1-2p}), \end{aligned} \end{align} \noindent as above, where we also use the assumption that $U_A < \infty$ almost surely. Now for all $n \in A$, compute using the above claim and elementary properties of decreasing rearrangements: \[ \ensuremath{\mathbb E}_\mu[U_n^p] \leqslant 2^{1+2p} \ensuremath{\mathbb E}_\mu[M_n^p] + 2 (4 t_n)^p \leqslant 2^{1+2p} \ensuremath{\mathbb E}_\mu[M_A^p] + 2 (4 t_A)^p. \] This concludes the proof if $A$ is finite; for $A = \ensuremath{\mathbb N}$, use the Monotone Convergence Theorem for the increasing sequence $0 \leqslant U_n^p \to U_A^p$. \end{proof} \subsection{Two-sided bounds and $L^p$ norms} We now formulate and prove additional results that control tail behavior for metric semigroups and monoids -- specifically, $M_A, U_n, U_n^*$. This includes proving our other main result, Theorem~\ref{Tupq}. We begin by setting notation. \begin{definition}\label{Dtrunc} Suppose $\ensuremath{\mathscr G}$ is a metric semigroup. \begin{enumerate} \item Given $X_n \in L^0(\Omega,\ensuremath{\mathscr G})$ as above, for all $n$ in a finite or countable set $A$, define the random variable $\ell_X = \ell_{(X_n)} : [0,1] \to [0,\infty]$ via: \[ \ell_X(t) := \inf \{ y > 0\ : \ \sum_{n \in A} \bp{d_\ensuremath{\mathscr G}(z_0, z_0 X_n) > y} \leqslant t \}. \] As indicated in \cite[\S 2]{HM}, one then has: $\bp{\ell_X > x} = \sum_{n \in A} \bp{d_\ensuremath{\mathscr G}(z_0, z_0 X_n) > x}$. \item Two families of variables $P(t)$ and $Q(t)$ are said to be \textit{comparable}, denoted by $P(t) \approx Q(t)$, if there exist constants $c_1, c_2 > 0$ such that $c_1^{-1} P(t) \leqslant Q(t) \leqslant c_2 P(t)$. The $c_i$ are called the ``constants of approximation''. 
\item (For the remaining definitions, assume $(\ensuremath{\mathscr G}, 1_\ensuremath{\mathscr G}, d_\ensuremath{\mathscr G})$ is a separable metric monoid.) Given $t \geqslant 0$ and a random variable $X \in L^0(\Omega, \ensuremath{\mathscr G})$, define its \textit{truncation} to be: \[ X(t) := \begin{cases} 1_\ensuremath{\mathscr G}, \qquad & \mbox{ if } d_\ensuremath{\mathscr G}(1_\ensuremath{\mathscr G}, X) > t,\\ X, & \mbox{ otherwise.} \end{cases} \] \item Given variables $X_1, \dots, X_n : \Omega \to \ensuremath{\mathscr G}$, and $r \in (0,1)$, define: \[ U'_n(r) := \max_{1 \leqslant k \leqslant n} d_\ensuremath{\mathscr G}(1_\ensuremath{\mathscr G}, \prod_{i=1}^k X_i(\ell_X(r))). \] \end{enumerate} \end{definition} The following estimate on tail behavior compares $U_n$ with its decreasing rearrangement. \begin{theorem}\label{Tbounds} Given $p_0 > 0$, there exist universal constants of approximation (depending only on $p_0$), such that for all $p \geqslant p_0$, separable abelian metric monoids $(\ensuremath{\mathscr G}, 1_\ensuremath{\mathscr G}, d_\ensuremath{\mathscr G})$, and finite sequences $X_1, \dots, X_n$ of independent $\ensuremath{\mathscr G}$-valued random variables (for any $n \in \ensuremath{\mathbb N}$), \[ \ensuremath{\mathbb E}_\mu[U_n^p]^{1/p} \approx U_n^*(e^{-p}/4) + \ensuremath{\mathbb E}_\mu[\ell_X^p]^{1/p} \approx (U'_n(e^{-p}/8))^* (e^{-p}/4) + \ensuremath{\mathbb E}_\mu[\ell_X^p]^{1/p}, \] \noindent where $U_n, U'_n$ were defined in Equation \eqref{Esemigroup} and Definition \ref{Dtrunc} respectively. \end{theorem} \noindent For real-valued $X$, the expression $\ensuremath{\mathbb E}[|X|^p]^{1/p}$ is also denoted by $\| X \|_p$ in the literature. To show Theorem~\ref{Tbounds}, we require some preliminary results which provide additional estimates to govern tail behavior, and which we now collect before proving the theorem. 
As these preliminaries are often extensions to metric semigroups of results in the Banach space literature, we sketch or omit their proofs when convenient. The first result obtains two-sided bounds to control the behavior of the ``maximum magnitude'' $M_A$ (cf.~Equation~\eqref{Edefma}). \begin{prop}\label{Pbounds} Suppose $\{ X_n : n \in A \}$ is a (finite or countably infinite) sequence of independent random variables with values in a separable metric semigroup $(\ensuremath{\mathscr G}, d_\ensuremath{\mathscr G})$. \begin{enumerate} \item For all $t \in (0,1)$, $\ell_X(2t) \leqslant \ell_X(t/(1-t)) \leqslant M_A^*(t) \leqslant \ell_X(t)$. \item Suppose $X_n \in L^p(\Omega,\ensuremath{\mathscr G})$ for some $p>0$ (and for all $n \in A$). For all $t > 0$, define: \[ \Psi_X(t) := p \sum_{n \in A} \int_{\ell_X(t)}^\infty u^{p-1} \bp{d_\ensuremath{\mathscr G}(z_0, z_0 X_n) > u}\ du. \] \noindent Then, $\displaystyle \frac{t \ell_X(t)^p + \Psi_X(t)}{1+t} \leqslant \ensuremath{\mathbb E}_\mu[M_A^p] \leqslant \ell_X(t)^p + \Psi_X(t)$. \end{enumerate} \end{prop} \begin{proof} The first part follows \cite[Proposition 1]{HM} (using a special case of Equation~\eqref{Edecrear}). For the second, follow \cite[Lemma 3.2]{GZ}; see also \cite[Lemma 6.9]{LT}. \end{proof} We next discuss a consequence of Hoffmann-J{\o}rgensen's inequality for metric semigroups, Theorem~\ref{Thj}, which can be used to bound the $L^p$-norms of the variables $U_n$ -- or more precisely, to relate these $L^p$-norms to the tail distributions of $U_n$ via $U_n^*$. \begin{lemma}\label{Lbounds} (Notation as in Definition~\ref{Dsemi} and Equation~\eqref{Esemigroup}.) 
There exists a universal positive constant $c_1$ such that for any $0 \leqslant t \leqslant s \leqslant 1/2$, any separable metric semigroup $(\ensuremath{\mathscr G}, d_\ensuremath{\mathscr G})$ with elements $z_0, z_1$, and any sequence of independent $\ensuremath{\mathscr G}$-valued random variables $X_1, \dots, X_n$, \[ U_n^*(t) \leqslant c_1 \frac{\log(1/t)}{\max \{ \log(1/s), \log \log(4/t) \}} ( U_n^*(s) + M_n^*(t/2)). \] \end{lemma} \begin{proof} We begin by writing down a consequence of Theorem~\ref{Thj}: \begin{equation}\label{Eunr} \bp{U_n > (3K-1)t} \leqslant \frac{1}{K!} \left( \frac{\bp{U_n > t}}{\bp{U_n \leqslant t}} \right)^K + \bp{M_n > t}, \quad \forall t > 0,\ \forall K,n \in \ensuremath{\mathbb N}. \end{equation} \noindent If $\bp{U_n > t} \leqslant 1/2$, then this quantity is further dominated by \[ 2 \max \left\{ \bp{M_n > t}, \frac{1}{K!} (2 \bp{U_n > t})^K \right\}. \] \noindent Now carry out the steps mentioned in the proof of \cite[Corollary 1]{HM}. \end{proof} The final preliminary result is proved by adapting the proofs of \cite[Lemma 3 and Corollary 2]{HM} to metric monoids. \begin{prop}\label{Pbounds2} Suppose $(\ensuremath{\mathscr G}, 1_\ensuremath{\mathscr G}, d_\ensuremath{\mathscr G})$ is a separable metric monoid and $X_1, \dots, X_n: \Omega \to \ensuremath{\mathscr G}$ is a finite sequence of independent $\ensuremath{\mathscr G}$-valued random variables. For $r \in (0,1)$, define: \[ U''_n(r) := \max_{1 \leqslant k \leqslant n} d_\ensuremath{\mathscr G}(1_\ensuremath{\mathscr G}, \prod_{i=1}^k X_i'(\ell_X(r))), \] \noindent where $X'_i(t)$ equals $1_\ensuremath{\mathscr G}$ if $d_\ensuremath{\mathscr G}(1_\ensuremath{\mathscr G}, X_i) \leqslant t$, and $X_i$ otherwise. \begin{enumerate} \item Then $U''_n(r)$ may be expressed as the sum of ``disjoint'' random variables $V_k$ for $k \in \ensuremath{\mathbb N}$. 
In other words, $\Omega$ can be partitioned into measurable subsets $E_k$ such that $V_k = U''_n(r)$ on $E_k$ and $1_\ensuremath{\mathscr G}$ otherwise. Moreover, the $V_k$ may be chosen such that $V_k^*(t) \leqslant k \cdot \ell_X(t (k-1)! / r^{k-1})$. \item Given the assumptions, for all $p \in (0,\infty)$, \[ \ensuremath{\mathbb E}_\mu[U''_n(r)^p]^{1/p} \leqslant 2 e^{2^p r/p} \ensuremath{\mathbb E}_\mu[\ell_X^p]^{1/p}. \] \end{enumerate} \end{prop} With the above results in hand, we can now prove Theorem~\ref{Tbounds}. \begin{proof}[Proof of Theorem~\ref{Tbounds}] Compute using the triangle inequality~\eqref{Etriangle} and Remark~\ref{Rstrict}: \[ d_\ensuremath{\mathscr G}(1_\ensuremath{\mathscr G}, X_k) \leqslant d_\ensuremath{\mathscr G}(1_\ensuremath{\mathscr G}, S_{k-1}) + d_\ensuremath{\mathscr G}(1_\ensuremath{\mathscr G}, S_k) \leqslant 2 U_n. \] \noindent Hence $M_n \leqslant 2 U_n$. Now compute for $p \geqslant p_0$, using Propositions~\ref{Pdecrear} and \ref{Pbounds}: \begin{align*} \ensuremath{\mathbb E}_\mu[U_n^p]^{1/p} \geqslant &\ \frac{1}{2} \ensuremath{\mathbb E}_\mu[M_n^p]^{1/p} \geqslant 2^{-1-p_0^{-1}} \ensuremath{\mathbb E}_\mu[\ell_X^p]^{1/p},\\ \ensuremath{\mathbb E}_\mu[U_n^p]^{1/p} \geqslant &\ (e^{-p}/8)^{1/p} U_n^*(e^{-p}/8) \geqslant 8^{-p_0^{-1}} e^{-1} U_n^*(e^{-p}/4). \end{align*} \noindent Hence there exists a constant $0 < c_1 = c_1(p_0)$ such that: \[ \ensuremath{\mathbb E}_\mu[U_n^p]^{1/p} \geqslant c_1^{-1} (U_n^*(e^{-p}/4) + \ensuremath{\mathbb E}_\mu[\ell_X^p]^{1/p}). \] This yields one inequality; another one is obtained using Proposition~\ref{Pbounds} as follows: \[ \bp{U_n \neq U'_n(e^{-p}/8)} \leqslant \bp{M_n > \ell_X(e^{-p}/8)} \leqslant \bp{M_n > M_n^*(e^{-p}/8)} \leqslant e^{-p}/8.
\] \noindent Now if $\bp{U'_n(e^{-p}/8) > y} > \eta$ for some $\eta \in [\frac{e^{-p}}{8},1]$, then by the reverse triangle inequality, \begin{align*} \bp{U_n > y} \geqslant &\ \bp{U_n > y,\ U_n = U'_n(e^{-p}/8)}\\ \geqslant &\ \bp{U'_n(e^{-p}/8) > y} - \bp{U_n \neq U'_n(e^{-p}/8)} > \eta - \frac{e^{-p}}{8}. \end{align*} \noindent Hence by definition and the above calculations, \begin{equation}\label{Ehm} U'_n(e^{-p}/8)^*(\eta) \leqslant U_n^*(\eta - e^{-p}/8). \end{equation} \noindent Applying this with $\eta = e^{-p}/4$, \[ U'_n(e^{-p}/8)^*(e^{-p}/4) \leqslant U_n^*(e^{-p}/8) \leqslant e 8^{1/p} \ensuremath{\mathbb E}_\mu[U_n^p]^{1/p} \leqslant e 8^{1/p_0} \ensuremath{\mathbb E}_\mu[U_n^p]^{1/p}. \] \noindent Hence as above, there exists a constant $0 < c_2 = c_2(p_0)$ such that: \[ \ensuremath{\mathbb E}_\mu[U_n^p]^{1/p} \geqslant c_2^{-1} ( U'_n(e^{-p}/8)^*(e^{-p}/4) + \ensuremath{\mathbb E}_\mu[\ell_X^p]^{1/p}). \] \noindent This proves the second of the four claimed inequalities. The remaining arguments can now be shown by suitably adapting the proof of \cite[Theorem 3]{HM}. \end{proof} Finally, we use Theorem~\ref{Tbounds} to prove our remaining main result. \begin{proof}[Proof of Theorem~\ref{Tupq}] Using Proposition~\ref{Cstrict}, let $\ensuremath{\mathscr G}'$ denote the smallest metric monoid containing $\ensuremath{\mathscr G}$. Thus the $X_k$ are a sequence of independent $\ensuremath{\mathscr G}'$-valued random variables, and we may assume henceforth that $\ensuremath{\mathscr G} = \ensuremath{\mathscr G}'$. Compute using Proposition~\ref{Pbounds}, and the fact that $X^*$ and $X$ have the same law for the real-valued random variable $X = M_n$: \begin{align*} \ensuremath{\mathbb E}_\mu[\ell_X^q] = &\ \int_0^{1/2} \ell_X(2t)^q \cdot 2 dt \leqslant 2 \int_0^{1/2} M^*_n(t)^q\ dt \leqslant 2 \int_0^1 M^*_n(t)^q\ dt = 2 \ensuremath{\mathbb E}_\mu[(M^*_n)^q]\\ = &\ 2 \ensuremath{\mathbb E}_\mu[M_n^q]. 
\end{align*} Using this computation, as well as Lemma~\ref{Lbounds} and Theorem~\ref{Tbounds} for $\ensuremath{\mathscr G}'$, we compute: \begin{align*} &\ \ensuremath{\mathbb E}_\mu[U_n^q]^{1/q}\\ \leqslant &\ c'_1 (\ensuremath{\mathbb E}_\mu[\ell_X^q]^{1/q} + U_n^*(e^{-q}/4))\\ \leqslant &\ c'_1 \cdot 2^{1/q} \ensuremath{\mathbb E}_\mu[M_n^q]^{1/q} + c'_1 c_1 \frac{\log(4e^q)}{\max(\log(4e^p), \log \log (16e^q))} (U_n^*(e^{-p}/4) + M_n^*(e^{-q}/8))\\ \leqslant &\ c'_1 \cdot 2^{1/q} \ensuremath{\mathbb E}_\mu[M_n^q]^{1/q} + c'_1 c_1 \frac{\log(4e^q)}{\max(\log(4e^p), \log(\epsilon + q))} (c_2 \ensuremath{\mathbb E}_\mu[U_n^p]^{1/p} + M^*_n(e^{-q}/8)) \end{align*} \noindent since $\epsilon \in (-q, \log(16)]$. There are now two cases: first if $e^p \geqslant \epsilon+q$, then \[ \frac{\log(4e^q)}{\max(\log(4e^p), \log(\epsilon + q))} \leqslant \frac{q + \log(4)}{p + \log(4)} \leqslant \frac{q}{p} = \frac{q}{\max(p, \log(\epsilon + q))}. \] \noindent On the other hand, if $e^p < \epsilon + q$ then set $C := 1 + \frac{\log(4)}{p_0}$ and note that $Cq \geqslant q + \log(4)$. Therefore, \[ \frac{\log(4e^q)}{\max(\log(4e^p), \log(\epsilon + q))} \leqslant \frac{q + \log(4)}{\log(\epsilon + q)} \leqslant \frac{C q}{\log(\epsilon + q)} = C \frac{q}{\max(p, \log(\epsilon + q))}. \] Using the above analysis now yields: \begin{align*} &\ \ensuremath{\mathbb E}_\mu[U_n^q]^{1/q}\\ \leqslant &\ c'_1 \cdot 2^{1/q} \ensuremath{\mathbb E}_\mu[M_n^q]^{1/q} + c'_1 c_1 \left( 1 + \frac{\log(4)}{p_0} \right) \frac{q}{\max(p, \log(\epsilon+q))} (c_2 \ensuremath{\mathbb E}_\mu[U_n^p]^{1/p} + M^*_n(e^{-q}/8)). \end{align*} \noindent Setting $c := c'_1 \max(2^{1/q}, c_1(1 + \log(4)/p_0), c_1 c_2 (1 + \log(4)/p_0))$, we obtain the first inequality claimed in the statement of the theorem. To show the second inequality, we first verify that if $\epsilon \geqslant \min(1, e - p_0)$, then the function $f(x) := x/\log(\epsilon+x)$ is strictly increasing on $(p_0,\infty)$. 
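One way to carry out this verification (a routine calculus check, recorded here for completeness):

```latex
% f(x) = x / \log(\epsilon + x) has derivative
f'(x) = \frac{\log(\epsilon + x) - \dfrac{x}{\epsilon + x}}
             {\log(\epsilon + x)^2}.
% If \epsilon \geqslant e - p_0: then \epsilon + x > e for x > p_0, so
%   \log(\epsilon + x) > 1 > x/(\epsilon + x).
% If \epsilon \geqslant 1: then for all x > 0,
%   \log(\epsilon + x) \geqslant \log(1 + x) > x/(1 + x) \geqslant x/(\epsilon + x).
% In either case f'(x) > 0 on (p_0, \infty), so f is strictly increasing there.
```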
Now compute: \begin{align*} \frac{q}{\max(p, \log(\epsilon+q))} = &\ \min \left( \frac{q}{p}, \frac{q}{\log(\epsilon+q)} \right) \geqslant \min \left( 1, \frac{q}{\log(\epsilon+q)} \right)\\ \geqslant &\ \min \left( 1, \frac{p_0}{\log(\epsilon+p_0)} \right). \end{align*} Next, use Proposition~\ref{Pdecrear} to show: $M^*_n(e^{-q}/8) \leqslant \ensuremath{\mathbb E}_\mu[M_n^q]^{1/q} (8e^q)^{1/q} \leqslant 8^{1/p_0} e \ensuremath{\mathbb E}_\mu[M_n^q]^{1/q}$. Using the previous two facts, we now complete the proof of the second inequality by beginning with the first inequality: \begin{align*} &\ \ensuremath{\mathbb E}_\mu[U_n^q]^{1/q}\\ \leqslant &\ c \frac{q}{\max(p, \log(\epsilon+q))} ( \ensuremath{\mathbb E}_\mu[U_n^p]^{1/p} + M_n^*(e^{-q}/8)) + c \ensuremath{\mathbb E}_\mu[M_n^q]^{1/q}\\ \leqslant &\ c \frac{q}{\max(p, \log(\epsilon+q))} (\ensuremath{\mathbb E}_\mu[U_n^p]^{1/p} + 8^{1/p_0} e \ensuremath{\mathbb E}_\mu[M_n^q]^{1/q}) + c \cdot 1 \cdot \ensuremath{\mathbb E}_\mu[M_n^q]^{1/q}\\ \leqslant &\ c \frac{q}{\max(p, \log(\epsilon+q))} \left( \ensuremath{\mathbb E}_\mu[U_n^p]^{1/p} + 8^{1/p_0} e \ensuremath{\mathbb E}_\mu[M_n^q]^{1/q} + \max(1, \frac{\log(\epsilon+p_0)}{p_0}) \ensuremath{\mathbb E}_\mu[M_n^q]^{1/q} \right). \end{align*} \noindent The second inequality in the theorem now follows. \end{proof} \subsection*{Acknowledgments} We thank David Montague and Doug Sparks for providing detailed feedback on an early draft of the paper, which improved the exposition.
https://arxiv.org/abs/quant-ph/0703236
Parameters of Integral Circulant Graphs and Periodic Quantum Dynamics
The intention of the paper is to move a step towards a classification of network topologies that exhibit periodic quantum dynamics. We show that the evolution of a quantum system, whose hamiltonian is identical to the adjacency matrix of a circulant graph, is periodic if and only if all eigenvalues of the graph are integers (that is, the graph is integral). Motivated by this observation, we focus on relevant properties of integral circulant graphs. Specifically, we bound the number of vertices of integral circulant graphs in terms of their degree, characterize bipartiteness and give exact bounds for their diameter. Additionally, we prove that circulant graphs with odd order do not allow perfect state transfer.
\section{Introduction} Circulant graphs have a vast number of uses and applications to telecommunication networks, VLSI design, and parallel and distributed computing (see~\cite{Hw} and the references therein). A graph is integral if all the eigenvalues of its adjacency matrix are integers (see~\cite{BCRASS} for a survey on integral graphs). Here, we first show that the evolution of a quantum system, whose hamiltonian is identical to the adjacency matrix of a circulant graph, is periodic if and only if the graph is integral. Then, motivated by this observation, we focus on relevant properties of integral circulant graphs. The intention of the paper is to move a step towards a classification of network topologies that exhibit periodic quantum dynamics. For certain quantum spin systems with fixed nearest-neighbour couplings, periodicity is a necessary condition for \emph{perfect state transfer}, that is, for transferring a quantum state between sites of the system, with the use of a \emph{free} evolution and without dissipating the information content of the state (see~\cite{CDEL} for more information on this topic). The main mathematical results of the paper are the following: \begin{itemize} \item we bound the order of connected integral circulant graphs as a function of the degree (Section~\ref{sec:deg_ord}, Theorem~\ref{thm:Nk}); \item we characterize bipartite integral circulant graphs (Section~\ref{sec:bipart}, Theorem~\ref{thm:Nkb}); \item we prove tight lower and upper bounds on the diameter of integral circulant graphs (Section~\ref{sec:diam}, Theorem \ref{thm-diam-bounds} and Theorem \ref{thm-diam-bounds-tight}). \end{itemize} Given the properties of circulant graphs, the proofs are based on elementary number theory. In the last section we show that circulant graphs with odd order do not allow perfect state transfer. However, we do not have a characterization of integral circulant graphs allowing perfect state transfer. This is left as an open problem. 
\section{Background on Circulant Graphs} \label{sec:background} A \emph{graph} $\mathcal G=(V(\mathcal G),E(\mathcal G))$ is a pair whose elements are two sets, $V(\mathcal G)=\{1,2,\ldots,n\}$ and $E(\mathcal G)\subset V(\mathcal G)\times V(\mathcal G)$. The elements of $V(\mathcal G)$ and $E(\mathcal G)$ are called \emph{vertices} and \emph{edges}, respectively. We assume that $\{i,i\}\notin E(\mathcal G)$ for all $i\in V(\mathcal G)$. Two vertices $i,j$ of a graph are said to be \emph{adjacent} if $\{i,j\}$ is an edge; the edge $\{i,j\}$ is then \emph{incident} with the vertices $i,j$. The \emph{adjacency matrix} of a graph $\mathcal G$ is the matrix $A(\mathcal G)$ such that $A(\mathcal G)_{i,j}=1$ if $\{i,j\}\in E(\mathcal G)$ and $A(\mathcal G)_{i,j}=0$ if $\{i,j\}\notin E(\mathcal G)$. The \emph{spectrum} of a graph $\mathcal G$ is the collection of eigenvalues of $A(\mathcal G)$, or equivalently, the collection of zeros of the characteristic polynomial of $A(\mathcal G)$, see~\cite{CDS}. We denote by sp$(\mathcal G)=(\lambda_{0}(\mathcal G),\ldots ,\lambda_{n-1}(\mathcal G))$ the spectrum of a graph $\mathcal G$ in non-increasing order of modulus. We simply write $\lambda_{0},\ldots,\lambda_{n-1}$ when $\mathcal G$ is clear from the context. Let $S=\{s_{1},s_{2},\ldots,s_{k}\}$ be a set of $k$ integers in the range $$ 1\leq s_{1},s_{2},\ldots,s_{k}<n. $$ Since we consider only \emph{undirected graphs}, we assume that $s\in S$ if and only if $n-s\in S$. A \emph{circulant graph} $\mathcal G =G(n;S)$ is a graph on the set of $n$ vertices $V(\mathcal G)=\{v_{1},\ldots,v_{n}\}$ with an edge incident with $v_{i}$ and $v_{j}$ whenever $|i-j|\in S$, see~\cite{Hw}. The set $S$ is said to be the \emph{symbol} of $\mathcal G$. In particular, $k=\#S$ is the \emph{degree} of a circulant graph $G(n;S)$. Let ${\mathbb Z}_n$ denote the \emph{residue ring} modulo $n$ and let ${\mathbb Z}_n^\ast$ be the multiplicative group of ${\mathbb Z}_n$. 
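For concreteness, the adjacency matrix of $G(n;S)$ can be built directly from this definition. The following Python sketch is our own illustration (the helper name \texttt{circulant\_adjacency} is not from the paper); it assumes a symmetric, loopless symbol $S$:

```python
def circulant_adjacency(n, S):
    """Adjacency matrix of the circulant graph G(n; S).

    The symbol S must satisfy: s in S iff (n - s) in S, with
    1 <= s <= n - 1, so the resulting graph is undirected and loopless.
    """
    assert all(1 <= s < n and (n - s) in S for s in S)
    # Vertices i and j are adjacent exactly when (i - j) mod n lies in S.
    return [[1 if (i - j) % n in S else 0 for j in range(n)]
            for i in range(n)]

A = circulant_adjacency(6, {1, 5})          # the 6-cycle C_6
assert all(A[i][j] == A[j][i] for i in range(6) for j in range(6))  # symmetric
assert all(A[i][i] == 0 for i in range(6))                          # hollow
assert all(sum(row) == 2 for row in A)      # regular of degree k = #S = 2
```

Note that the symmetry of $S$ is exactly what makes the matrix symmetric, i.e.\ the graph undirected.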
Notice that a circulant graph $G(n;S)$ is a {\it Cayley graph\/} of the additive group of ${\mathbb Z}_n$ with respect to the {\it Cayley set\/} $S$. We recall that a Cayley graph with respect to a finite group ${\mathfrak G}$ and a set $S \subseteq {\mathfrak G}$, such that it contains $-w$ for every $w \in S$, is a graph on $n = \# {\mathfrak G}$ vertices, labeled by elements of ${\mathfrak G}$ where the vertices $u$ and $v$ are connected if and only if $u-v \in S$ (or equivalently, $v - u \in S$). A \emph{path} in a graph is a finite sequence of vertices in which consecutive vertices are connected by an edge. A \emph{connected graph} is a graph such that there is a path between all pairs of vertices. It is easy to show that a circulant graph $G(n;S)$ with symbol $S=\{s_{1},s_{2},\ldots,s_{k}\}$ is connected if and only if $\gcd(n,s_{1},s_{2},\ldots,s_{k})=1$. The adjacency matrix of a circulant graph is diagonalized by the Fourier matrix formed from the irreducible characters of the additive group ${\mathbb Z}_n$ (a Vandermonde matrix). Lemma~\ref{lem:Eigen} is based on this observation. Let $\omega_{n}=\exp\( 2\pi\iota/n\)$ where $\iota=\sqrt{-1}$. \begin{lemma} \label{lem:Eigen}The spectrum of a circulant graph $\mathcal G=G(n;S)$ on $n $ vertices with symbol $S$ is \begin{equation}\label{eq-lambda-in-terms-of-omega} \lambda_{j}=\sum_{s\in S}\omega_{n}^{js}, \end{equation} where $0\leq j\leq n-1$. \end{lemma} By Lemma~\ref{lem:Eigen}, the eigenvalues of a circulant graph are just the sums over $S$ of the irreducible characters of ${\mathbb Z}_n$. The eigenvectors are also easily available. In fact, it is straightforward to see that the eigenvector corresponding to the eigenvalue $\lambda_{j}$ has the form $v_{j}=[1,\omega^{j},\ldots,\omega^{j(n-1)}]^{T}$. 
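Lemma~\ref{lem:Eigen} is easy to verify numerically by comparing the character sums $\sum_{s\in S}\omega_n^{js}$ with a direct eigendecomposition of the adjacency matrix. A Python/NumPy sketch (our own illustration, not part of the paper); since the symbol is symmetric, the character sums are real:

```python
import cmath
import numpy as np

n, S = 12, {1, 11, 3, 9}   # symmetric symbol, so all eigenvalues are real
A = np.array([[1 if (i - j) % n in S else 0 for j in range(n)]
              for i in range(n)])

omega = cmath.exp(2j * cmath.pi / n)
# Eigenvalues from the formula lambda_j = sum_{s in S} omega_n^{j s}
formula = sorted((sum(omega ** (j * s) for s in S)).real for j in range(n))
# Eigenvalues from a direct symmetric eigendecomposition (ascending order)
numeric = sorted(np.linalg.eigvalsh(A))

assert np.allclose(formula, numeric)
```

The largest eigenvalue equals the degree $k=\#S=4$ here, in accordance with $\lambda_0=\sum_{s\in S}\omega_n^{0}=k$.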
\section{Integral Circulant Graphs and Periodic Quantum Dynamics} \label{sec:PerQuantChan} A {\em quantum spin system} associated to a graph $\mathcal G$ can be defined by attaching a spin-$\frac{1}{2}$ particle to each of the $n$ vertices of $\mathcal G$. The Hilbert space assigned to the system is then $\mathcal{H}\cong({\mathbb C}^{2})^{\otimes n}$. This system can be interpreted as a noiseless quantum channel, whose Hamiltonian is identical to the adjacency matrix of $\mathcal G$ itself. From another perspective, its evolution can be seen as a continuous-time quantum walk on $\mathcal G$. Some properties of such dynamics on circulant graphs have been studied in~\cite{am}. As observed in~\cite{CDEL}, the dynamics of the system is {\em periodic} if for every state $|\psi\rangle\in\mathcal{H}$, there exists $p\in{\mathbb R}$, $0<p<\infty$, for which $|\langle\psi|e^{-\iota A(\mathcal G)p}|\psi\rangle|=1$. The number $p$ is the {\em period} of the system. In general, assuming that the initial state was $|\psi(0)\rangle=\sum_j\alpha_j|\lambda_{j}\rangle$, we can express as follows the state of the system at generic time $t$: \[ |\psi(t)\rangle=e^{-\iota A(\mathcal G) t}|\psi(0)\rangle=\sum_{j}\alpha_{j}e^{-\iota t\lambda_{j}}|\lambda_{j} \rangle \] where $|\lambda_{j}\rangle$ is an eigenvector of $A(\mathcal G)$ with eigenvalue $\lambda_{j}$ and $\alpha_{j}\in{\mathbb C}$. Thus, the periodicity condition $|\psi(t)\rangle=e^{-\iota\phi}|\psi(0)\rangle$ ($\phi$ is a phase) gives us that for every $\lambda_j\in\text{sp}(\mathcal G)$ we have: \[ \lambda_{j}t-\phi=2\pi r_{j}, \text{~~ for some }r_{j}\in\mathbb{Z}. \] Therefore, for every quadruple $\lambda_{i},\lambda_{j},\lambda_{r},\lambda_{s}\in$ sp$(\mathcal G)$ (with $\lambda_{r}\neq\lambda_{s}$), it follows that \begin{equation} \label{eq:lambdas} \frac{\lambda_{i}-\lambda_{j}}{\lambda_{r}-\lambda_{s}}\in{\mathbb Q}. \end{equation} We now show that~\eqref{eq:lambdas} implies the integrality of the underlying graph. 
\begin{theorem} \label{thm:periodic} Let $\mathcal G=G(n;S)$ be a circulant graph on $n\geqslant 4$ vertices with symbol $S$. If $\mathcal G$ has at least four distinct eigenvalues and all of them satisfy the condition~\eqref{eq:lambdas} then $\mathcal G$ is integral. \end{theorem} \begin{proof} Let $k = \# S$ be the degree of $\mathcal G$. By Lemma~\ref{lem:Eigen}, $\lambda _0=k$. It is clear then that $\lambda_1,\ldots,\lambda_{n-1}$ are all different from $\lambda_0$. If sp$(\mathcal G)$ satisfies~\eqref{eq:lambdas}, then for all $i\in\{1,\ldots,n-1\}$, we have $$ \frac{\lambda_i-k}{\lambda_1-k}\in{\mathbb Q}. $$ Therefore, $\lambda_i=a_i\lambda_1+b_i$ for some $a_i, b_i\in{\mathbb Q}$. We now show that $\lambda_1\in{\mathbb Q}$. For this we consider three cases: {\bf Case 1:} Suppose $n = p$, a prime. Then the minimal polynomial of $\omega_n$ over ${\mathbb Q}$ is $1+X+\ldots+X^{n-1}$. Since $\mathcal G$ has at least four distinct eigenvalues we can find $2\leqslant j<h\leqslant(n-1)$ such that $\lambda_0$, $\lambda_1$, $\lambda_{j}$ and $\lambda_{h}$ are all distinct. Suppose that $\lambda_1\not\in{\mathbb Q}$. {From}~\eqref{eq:lambdas} we have that $\lambda_{j}=a_{j}\lambda_1+b_{j}$ for some $a_{j}, b_{j}\in{\mathbb Q}$. Applying~\eqref{eq-lambda-in-terms-of-omega} we get $$ \sum_{s\in S}\omega_n^{js}=a_{j}\sum_{s\in S}\omega_n^s+b_{j}. $$ In the last identity we can replace each exponent $js$ with its smallest positive residue $r_{j,s}$ modulo $n$, which in turn means the following divisibility of polynomials $$ 1+X+\ldots+X^{n-1}~\Big|~\sum_{s\in S}X^{r_{j,s}}-a_{j}\sum_{s\in S}X^s-b_{j}. $$ Since the nonzero polynomial on the right hand side is of degree at most $n-1$ and since $\lambda_1\not=\lambda_{j}$, we obtain $$ 1+X+\ldots+X^{n-1} = \sum_{s\in S}X^{r_{j,s}}-a_{j}\sum_{s\in S}X^s-b_{j} , $$ which implies $-a_{j}=-b_{j}=1$. Thus, $\lambda_{j}=-\lambda_1-1$. 
Applying the same argument to $\lambda_{h}$ we get $\lambda_{h}=-\lambda_1-1$, thus implying $\lambda_{h}=\lambda_{j}$. This contradiction shows that $\lambda_1\in{\mathbb Q}$ in this case. {\bf Case 2:} Suppose $n=p^r$, a power of a prime $p$, where $r\geqslant 2$. Now focus on the set of eigenvalues: $$\left\{\lambda_{p^{r-1}},\lambda_{2p^{r-1}},\ldots,\lambda_{(p-1)p^{r-1}}\right\}$$ Suppose that $\lambda_1\not\in{\mathbb Q}$. Clearly, $\lambda_{p^{r-1}}$ cannot be rational (otherwise $\lambda_1\in{\mathbb Q}$). The above eigenvalues can be described as: $$ \lambda_{ip^{r-1}}= \sum_{s\in S}\omega_{n}^{ip^{r-1}s}=\sum_{s\in S}\omega_p^{is}. $$ Thus, this case now reduces to the prime case above and shows that $\lambda_1$ is rational. {\bf Case 3:} Suppose $n$ has two distinct prime factors $p, q$. We have that for all $i\in\{1,\ldots,n-1\},\ \lambda_i=a_i\lambda_1+b_i$, for some $a_i, b_i\in{\mathbb Q}$. Thus, \begin{equation}\label{eq-equal-fields} {\mathbb Q}(\lambda_1)=\ldots={\mathbb Q}(\lambda_{n-1}). \end{equation} Observe that $\lambda_{n/p}\in{\mathbb Q}(\omega_p)$ and $\lambda_{n/q}\in{\mathbb Q}(\omega_q)$. But the equation~\eqref{eq-equal-fields} implies that $\lambda_{n/p}\in{\mathbb Q}(\lambda_{n/q})$. Thus, $\lambda_{n/p}\in{\mathbb Q}(\omega_p)\cap{\mathbb Q}(\omega_q)$. It can be shown that ${\mathbb Q}(\omega_p)\cap{\mathbb Q}(\omega_q)={\mathbb Q}$ since $p, q$ are coprime. Thus, $\lambda_{n/p}\in{\mathbb Q}$ and then the equation~\eqref{eq-equal-fields} forces $\lambda_1\in{\mathbb Q}$. Thus, in all the cases $\lambda_1\in{\mathbb Q}$ and hence all the $n$ eigenvalues are rational. Since they are also algebraic integers, this further implies the desired result. \end{proof} It is plausible that the method of proof of Theorem~\ref{thm:periodic} can be extended to other classes of Cayley graphs. In the light of Theorem~\ref{thm:periodic}, in the next sections we consider parameters of integral circulant graphs. 
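As a quick numerical sanity check of this phenomenon (our own illustration, not part of the paper): for an integral circulant graph, all eigenvalues are integers, so $e^{-\iota A(\mathcal G)t}$ returns to the identity at $t=2\pi$; for the non-integral $5$-cycle, whose eigenvalues $2\cos(2\pi j/5)$ are irrational, it does not. A Python/NumPy sketch with hypothetical helper names:

```python
import numpy as np

def evolution(A, t):
    # exp(-i A t) via the spectral decomposition of the symmetric matrix A
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

def circulant(n, S):
    return np.array([[1 if (i - j) % n in S else 0 for j in range(n)]
                     for i in range(n)])

C6 = circulant(6, {1, 5})   # integral: eigenvalues 2, 1, -1, -2, -1, 1
C5 = circulant(5, {1, 4})   # not integral: eigenvalues 2 cos(2 pi j / 5)

assert np.allclose(evolution(C6, 2 * np.pi), np.eye(6))      # periodic at 2*pi
assert not np.allclose(evolution(C5, 2 * np.pi), np.eye(5))  # no revival at 2*pi
```

This only illustrates the easy direction (integer spectrum implies period $2\pi$); the theorem above supplies the converse.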
Before doing that, we now give a characterization of these graphs, which is due to So~\cite{So} (and which is naturally based on Lemma~\ref{lem:Eigen}). This is our main technical tool. Let \[ G_{n}(d)=\{k~\mid~1\leq k\leq n-1,\gcd(k,n)=d\}, \] be the set of all integers less than $n$ whose greatest common divisor with $n$ is $d$. In particular $\#G_{n}(d)=\varphi(n/d)$, where, as usual, \[ \varphi(m)=\#\{1\leq s\leq m~\mid~\gcd(s,m)=1\} \] denotes the Euler totient function of a positive integer $m$ (see, for example,~\cite{HardyWright}). Notice that the collection $\{G_{n}(d)~\mid~d\mid n\}$ is a partition of the set $\{1,2,\ldots,n-1\}$. Notice that $k\in G_{n}(d)$ if and only if $n-k\in G_{n}(d)$, since $\gcd(k,n)=\gcd(n-k,n)$. Let $D_{n}$ be the set of all $\tau(n)-1$ divisors $d\mid n$ with $d\leq n/2$, where, as usual, $\tau(n)$ is the number of positive integer divisors of $n$. \begin{lemma} \label{lem:Char} A circulant graph $\mathcal G=G(n;S)$ on $n$ vertices with symbol $S$ is integral if and only if \begin{equation} S=\bigcup_{d\in D}G_{n}(d) \label{eq:union} \end{equation} for some set of divisors $D\subseteq D_{n}$. \end{lemma} Throughout the paper, the implied constants in the symbols `$O$', `$\ll$' and `$\gg$' are absolute. We recall that $A\ll B$ and $B \gg A$ are both equivalent to the statement that $A=O(B)$ for positive functions $A$ and $B$. \section{Degree and Order} \label{sec:deg_ord} In this section we prove an upper bound on the number of vertices of an integral circulant graph in terms of its degree. \begin{theorem} \label{thm:Nk} There is an absolute constant $c>0$ such that for any $k\geq2 $, the largest number $N(k)$ of vertices of an integral connected circulant graph $\mathcal G=G(n;S)$ having degree $k$ is bounded by \[ N(k)\leq \exp\(c\sqrt{k\log\log(k+2)}\log k\). \] \end{theorem} \begin{proof} By Lemma~\ref{lem:Char}, we see that $S=\bigcup_{d\in D}G_{n}(d)$, for some set of divisors $D\subseteq D_{n}$. 
Therefore \begin{equation} \label{eq:k and F} k=\#S=\sum_{f\in F}\varphi(f), \end{equation} where $F=\{n/d~\mid~d\in D\}$ (recall that $\#G_{n}(d)=\varphi(n/d)$). Given that $\mathcal G$ is connected, we have $\gcd(\{d~\mid~d\in D\},n)=1$. Noting that for any two divisors $f,g\mid n$ we have \[ \gcd(n/f,g)\geq\gcd(g/f,g)\geq g/f, \] it is easy to prove by induction on $m$ that for any sequence $f_{1} ,\ldots,f_{m}$ of divisors of $n$ we have \[ \gcd(n/f_{1},\ldots,n/f_{m},n)\geq\frac{n}{f_{1}\ldots f_{m}}. \] Therefore $$ 1 =\gcd(\{d~\mid~d\in D\},n)=\gcd\(\{n/f~\mid~f\in F\},n\) \geq n\prod_{f\in F}f^{-1} $$ which leads us to the bound \begin{equation} n\leq\prod_{f\in F}f. \label{eq:prod div} \end{equation} We now recall the well-known bound \begin{equation} \varphi(f)\gg\frac{f}{\log\log(f+2)} \label{eq:phi}, \end{equation} see~\cite[Theorem~328]{HardyWright}. Thus we see from~\eqref{eq:k and F} that \[ \frac{f}{\log\log(f+2)}\ll k \] for every $f\in F$, which obviously implies that \[ f\ll k\log\log(k+2). \] Now, using this bound together with~\eqref{eq:phi} and~\eqref{eq:k and F} again, we derive $$ k =\sum_{f\in F}\varphi(f)\gg\sum_{f\in F}\frac{f}{\log\log(f+2)} \gg\sum_{f\in F}\frac{f}{\log\log(k+2)}. $$ Thus if we denote $$ \sigma=\sum_{f\in F}f $$ then we have \begin{equation} \sigma\ll k\log\log(k+2). \label{eq:bound sigma} \end{equation} Let $s=\#F$. Then we deduce from~\eqref{eq:prod div} that \begin{equation} n\leq\( \sigma/s\) ^{s}. \label{eq:bound n} \end{equation} Since \[ \sigma=\sum_{f\in F}f\geq\sum_{j=1}^{s}j=\frac{s(s+1)}{2} , \] we see that \begin{equation} s\ll\sqrt{\sigma}. \label{eq:bound s} \end{equation} Since the function $\( \sigma/x\) ^{x}$ monotonically increases for $1\leq x\leq\sigma/e$, we obtain from~\eqref{eq:bound n} and~\eqref{eq:bound s} that \[ n\leq\exp\( O\( \sqrt{\sigma}\log\sigma\) \) , \] and recalling~\eqref{eq:bound sigma}, we conclude the proof. 
\end{proof} On the basis of the arguments used in the proof of Theorem~\ref{thm:Nk}, we can construct the following table, in which we list the maximum order of an integral circulant graph of fixed degree $k=2,\ldots,11$ (this is the sequence A126857 in~\cite{oeis}). \[ \begin{tabular} [c]{c|c} Degree $k$ & Maximum order $N(k)$\\\hline $2,3$ & $6$\\ $4,5$ & $12$\\ $6,7$ & $30$\\ $8,9$ & $42$\\ $10,11$ & $120$ \end{tabular} \] \section{Bipartiteness} \label{sec:bipart} In this section we characterize bipartite integral circulant graphs. Let us denote by $\mu(m)$ the M\"{o}bius function of a positive integer $m$: \[ \mu(m)=\left\{ \begin{tabular} [c]{ll} $0$, & if $m$ has repeated prime factors;\\ $1$, & if $m=1$;\\ $\( -1\) ^{r}$, & if $m$ is a product of $r$ distinct primes. \end{tabular} \ \right. \] For a fixed $k$, there exists a set $F\subset\mathbb{N}$ such that we have~\eqref{eq:k and F}. Writing \[ n=\mathrm{lcm}\left\{f~\mid~f\in F\right\} \] and \begin{equation} S=\bigcup_{f\in F} G_n\(\frac{n}{f}\) \label{ess} \end{equation} it is not hard to see that the above defines an integral circulant graph $\mathcal G=G\(n;S\) $. As discussed in \cite{So}, the eigenvalues of $\mathcal G=G\(n;S\) $ are then: for $0\leq j\leq n-1$, \begin{equation} \lambda_{j}=\sum_{f\in F}\varphi(f)\cdot \frac{\mu\(f/\gcd(f,j)\)}{\varphi\(f/\gcd\( f,j\)\)}. \label{eigv} \end{equation} By~\eqref{eigv}, we can determine which integral circulant graphs are bipartite. \begin{theorem}\label{thm:Nkb} An integral circulant graph $\mathcal G=G(n;S)$ on $n$ vertices with symbol $S$ is bipartite if and only if $n$ is even and $S=\cup_{f\in F} G_n\(\frac{n}{f}\)$, where for some number $\ell_0$, the set $\left\{ 2\ell_0/f~\mid~f\in F\right\}$ contains only odd integers. \end{theorem} \begin{proof} Since $\mathcal G$ has degree $k$, it is bipartite if and only if it has an eigenvalue $\lambda_{\ell}=-k$; see~\cite{CDS}. Suppose $\mathcal G$ is bipartite. 
On the basis of~\eqref{ess} and~\eqref{eigv}, \[ \lambda_{\ell}=-k=\sum_{f\in F}\varphi\( f\)\cdot \frac{\mu\( f/\gcd(f,\ell)\)}{\varphi\(f/\gcd\(f,\ell\)\)}. \] In view of~\eqref{eq:k and F}, the above equation can hold only if for every $f\in F$: \[ \frac{\mu\( f/\gcd(f,\ell)\)}{\varphi\(f/\gcd\(f,\ell\)\)} =-1. \] This implies that \begin{equation} \label{emm} \mu\( \frac{f}{\gcd(f,\ell)}\) =-1 \qquad \mbox{and} \qquad \varphi\(\frac{f}{\gcd\( f,\ell\) }\) =1. \end{equation} Whence, \begin{equation} \label{eff} \frac{f}{\gcd\( f,\ell\) }\in\{1,2\}. \end{equation} So, the equation~\eqref{emm} together with~\eqref{eff} gives: \[ \frac{f}{\gcd\( f,\ell\) }=2. \] Hence, for every $f\in F$ the ratio $ 2\ell/f$ is an odd integer. It also follows that $n$ is even, as $n=\mathrm{lcm}\left\{f~\mid~f\in F\right\}$. Thus, the theorem is true in one direction. Conversely, suppose that $n$ is even and $2\ell_0/f$ is odd for every $f\in F$. Consequently, the $\ell_0$-th eigenvalue is: \begin{align*} \lambda_{\ell_0}&= \sum_{f\in F}\varphi\( f\)\cdot \frac{\mu\(f/\gcd(f,\ell_0)\)}{\varphi\(f/\gcd\(f,\ell_0\)\)}\\ & = \sum_{f\in F}\varphi\( f\)\cdot\frac{\mu(2)}{\varphi(2)} = \sum_{f\in F}\varphi\( f\)\cdot(-1)= -k. \end{align*} Thus, $\mathcal G$ is bipartite and the theorem is proved. \end{proof} \section{Diameter} \label{sec:diam} In this section we prove tight lower and upper bounds on the diameter of integral circulant graphs. The \emph{diameter} of a graph $\mathcal G$, denoted by $\mathrm{diam}\, \mathcal G$, is the maximum, taken over all pairs of vertices, of the length of a shortest path between them. If $\mathcal G$ is a circulant graph on $n$ vertices then it is clear that $1\leq \mathrm{diam}\, \mathcal G \leq n/2$. For a given degree $k$, the number of vertices of an integral circulant graph $\mathcal G$ can be $n=\mathrm{lcm}\{f~\mid~f\in F\}$, where $F$ is as in equation~\eqref{eq:k and F}. 
Assuming that the columns (and rows) of the adjacency matrix $A_{\mathcal G}$ of $\mathcal G$ are labelled $1,\ldots,n$, the first row of $A_{\mathcal G}$ is the characteristic vector of the set \[ S=\bigcup_{f\in F}G_{n}\( \frac{n}{f}\) =\bigcup_{f\in F}\left\{ i\ \lvert\ 1\leq i\leq n,\ \gcd(i,n)=\frac{n}{f}\right\} \] A cyclic right shift of this row gives the subsequent rows of $A_{\mathcal G}$. Let $X\subseteq{\mathbb Z}_n$; then, for a positive integer $i$, we define $$iX=\underset{i\text{ times}}{\underbrace{X+\ldots+X}} = \{x_1 +\ldots + x_i~\mid~x_1 ,\ldots , x_i \in X\} $$ (where the elements are added modulo $n$). Note that the vertices in $\mathcal G$ reachable from the vertex $0$ in $1$ step are exactly the vertices of $S$; the vertices reachable from the vertex $0$ in $2$ steps are those of $2S$, and so on. Similarly, if we define $T=S\cup\left\{ 0\right\}$ then the vertices reachable from the vertex $0$ in at most $i$ steps are those of $iT$. Thus, we have: \begin{lemma} \label{lem-diam-T-i-times} The diameter of the circulant graph $\mathcal G=G(n;S)$ is the least index $i$ such that $ iT={\mathbb Z}_n$. \end{lemma} \begin{theorem} \label{thm-diam-bounds} Let $D$ be a set of divisors of $n$ such that $\gcd(D,n)=1$ and let $t$ be the size of the smallest set of additive generators of ${\mathbb Z}_n$ contained in $D$. Then, for the circulant graph $\mathcal G=G(n;S)$, where $S=\cup_{d\in D}G_{n}(d)$, we have \[ t\leq\mathrm{diam}\, \mathcal G\leq 2t+1. \] \end{theorem} \begin{proof} It is very simple to show the lower bound. By the hypothesis, it is easy to see that $t$ is the size of the smallest set of generators of ${\mathbb Z}_n$ contained in $T$. Thus, by Lemma~\ref{lem-diam-T-i-times}, we deduce that $\mathrm{diam}\, \mathcal G\geq t$. We now turn to the upper bound. Let $d_{1},\ldots,d_{t}\in D$ be the additive generators of ${\mathbb Z}_n$. Without loss of generality, we can assume that $d_{1}$ is odd. Clearly, $\gcd(d_{1},\ldots,d_{t},n)=1$. 
We intend to show that given any $\ell\in{\mathbb Z}_n$ there exist $x_{0},x_{1},\ldots,x_{2t} \in({\mathbb Z}_n)^{\ast}$ such that either $$ d_{1}x_{0}+d_{1}(x_{1}+x_{t+1})+\ldots+d_{t}(x_{t}+x_{2t}) \equiv \ell \pmod n $$ or \begin{equation} \label{eqn-coprime-solns-2} d_{1}(x_{1}+x_{t+1})+\ldots+d_{t}(x_{t}+x_{2t}) \equiv \ell \pmod n . \end{equation} Note that this would mean that $(2t+1)T={\mathbb Z}_n$. We now solve one of the above congruences modulo prime factors of $n$ and then ``lift'' that solution modulo $n$. If $2|n$ then we can put \[ x_{0} \equiv x_{1} \equiv \ldots \equiv x_{2t} \equiv 1 \pmod 2 \] and then, depending on the parity of $\ell$ one of the above equations, say~\eqref{eqn-coprime-solns-2}, holds modulo $2$. Suppose $\alpha_{2}$ is the largest index of $2$ dividing $n$. Then this solution can be Hensel lifted~\cite{LN} to a solution $(x_{0},\ldots,x_{2t})$ modulo $2^{\alpha_{2}}$. Next, let $p$ be an odd prime dividing $n$. Since $\gcd(d_{1},\ldots ,d_{t},n)=1$, without loss of generality, we can assume that $p\nmid d_{1}$. Now we substitute \[ x_{2} \equiv \ldots \equiv x_{t} \equiv 1 \equiv -x_{2+t} \equiv \ldots \equiv -x_{2t} \pmod p \] and then~\eqref{eqn-coprime-solns-2} simply becomes \[d_{1}(x_{1} +x_{t+1}) \equiv \ell \pmod p\] or \[ x_{1}+x_{t+1} \equiv \ell\cdot d_{1}^{-1} \pmod p \] and we can easily find nonzero values of $x_{1}$ and $x_{t+1}$ modulo $p$. So we have a solution of the equation~\eqref{eqn-coprime-solns-2} modulo $p$ and it can be Hensel lifted to a solution modulo $p^{\alpha_{p}}$, where $\alpha_{p}$ is the largest index of $p$ dividing $n$. Finally, the solutions of~\eqref{eqn-coprime-solns-2} modulo $q^{\alpha_{q} }$ for every prime $q|n$ can be combined using Chinese Remaindering to get a solution modulo $n$. \end{proof} It is natural to try to obtain bounds on the diameter of $\mathcal G=G(n;S)$ in terms of $\# D$. 
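Lemma~\ref{lem-diam-T-i-times} also gives a direct way to compute the diameter, which can be cross-checked against ordinary breadth-first search. The following Python sketch is our own illustration (the helper names are not from the paper); it verifies both computations agree on the $6$-cycle:

```python
from collections import deque

def diameter_by_sumsets(n, S):
    """Least i with i*T = Z_n, where T = S + {0}  (the sumset lemma)."""
    T = set(S) | {0}
    reach, i = set(T), 1
    while len(reach) < n:
        # (i+1)T = iT + T, computed modulo n
        reach = {(x + t) % n for x in reach for t in T}
        i += 1
    return i

def diameter_by_bfs(n, S):
    # Ordinary graph diameter of the circulant graph G(n; S), for comparison
    def eccentricity(v):
        dist = {v: 0}
        q = deque([v])
        while q:
            u = q.popleft()
            for s in S:
                w = (u + s) % n
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return max(dist.values())
    return max(eccentricity(v) for v in range(n))

n, S = 6, {1, 5}                       # the 6-cycle C_6
assert diameter_by_sumsets(n, S) == diameter_by_bfs(n, S) == 3
```

Circulant graphs are vertex-transitive, so the eccentricity is the same at every vertex; the BFS over all vertices is kept only as an independent check.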
Certainly we have trivial bounds $$ 2 \leqslant \mathrm{diam}\, \mathcal G \leqslant 2 \# D + 1. $$ The following result shows that in general no better bounds are possible. \begin{theorem} \label{thm-diam-bounds-tight}The following statements are true for integral circulant graphs: \begin{enumerate} \item[i] For $r\geq3$, let $n$ be the product of distinct odd primes $p_{1},\ldots,p_{r}$ and let $D=\{p_{1},\ldots,p_{r}\}$. The graph corresponding to these parameters has diameter $2$. \item[ii] Let $m$ be the product of distinct odd primes $p_{1},\ldots,p_{r}$. Let $n=2m^{2}$ and \[ D=\left\{(m/p_1)^2 , \ldots, (m/p_r)^2\right\}. \] The graph corresponding to these parameters has diameter $(2r+1)$. \end{enumerate} \end{theorem} \begin{proof} \emph{Part~i}. By the hypothesis $n=p_{1}\ldots p_{r}$, $D=\{p_{1},\ldots ,p_{r}\}$. Recall that $T=\{0\}\cup_{d\in D}G_{n}(d)$. Let $\mathcal G$ be the corresponding graph. We show that given any $\ell\in{\mathbb Z}_n$, we have $\ell\in T+T$. Suppose $\ell$ is coprime to $n$. Then using the methods of Theorem~\ref{thm-diam-bounds}, we can find a solution $x_{1}, x_{2}\in {\mathbb Z}_n^{\ast}$, such that $p_{1}x_{1}+p_{2}x_{2} \equiv \ell \pmod n$. Thus, $\ell\in T+T$. If $\ell$ is not coprime to $n$ then, without loss of generality, we can assume that $p_{1}\mid\ell$. Again, using the methods of Theorem~\ref{thm-diam-bounds} we can find a solution $x_{1},\ x_{2}\in{\mathbb Z}_n^{\ast}$ such that $p_{1}x_{1}+p_{1}x_{2} \equiv \ell \pmod n$. Thus, $\ell\in T+T$. Therefore, $T+T={\mathbb Z}_n$. As the smallest additive generator set contained in $D$ is of size $2$, we deduce from Theorem~\ref{thm-diam-bounds} that $\mathrm{diam}\, \mathcal G=2$. \emph{Part~ii}. We recall that $T = \{0\}\cup_{d\in D}G_{n}(d)$. Let $\mathcal G$ be the corresponding graph. We show that $m\not \in 2rT$. Suppose that $m\in 2rT$. This means that there are $d_{1},\ldots,d_{2r} \in D$ such that $m\in G_{n}(d_{1})+\ldots+G_{n}(d_{2r})$. 
Since $p_{j}^{2}\nmid m$, we deduce that $(m/p_j)^2\in\{d_{1},\ldots,d_{2r}\}$, $j=1, \ldots, r$. Without loss of generality, we can assume that $d_{1}=(m/p_1)^2,\ldots, d_{r}=(m/p_r)^2$. In other words, there are $x_{1},\ldots,x_{2r}\in {\mathbb Z}_n^{\ast}$ such that \begin{equation} \frac{m^{2}}{p_{1}^{2}}x_{1}+\ldots+\frac{m^{2}}{p_{r}^{2}}x_{r} +d_{r+1}x_{r+1}+\ldots+d_{2r}x_{2r} \equiv m \pmod n. \label{eqn-diam-bounds-tight-1} \end{equation} Taking the above congruence modulo $p_{1}$ we deduce that \[ \frac{m^{2}}{p_{1}^{2}}x_{1}+d_{r+1}x_{r+1}+\ldots+d_{2r}x_{2r} \equiv 0 \pmod {p_1}. \] As $\gcd(x_{1}, p_{1}) =1$, the above congruence implies $(m/p_1)^2 \in \{d_{r+1},\ldots,d_{2r}\}$. Similarly, taking~\eqref{eqn-diam-bounds-tight-1} modulo primes $p_{2},\ldots,p_{r}$ and repeating the argument we deduce $$(m/p_1)^2,\ldots,(m/p_r)^2\in\{d_{r+1},\ldots,d_{2r}\}.$$ Without loss of generality, we can assume that $$ d_{r+1} =\frac{m^{2}}{p_{1}^{2}},\ldots, d_{2r}=\frac{m^{2}}{p_{r}^{2}}. $$ Thus, the congruence~\eqref{eqn-diam-bounds-tight-1} becomes $$ \frac{m^{2}}{p_{1}^{2}}(x_{1}+x_{r+1})+\ldots+\frac{m^{2}}{p_{r}^{2}} (x_{r}+x_{2r}) \equiv m \pmod n.$$ Recall that $x_{1},\ldots,x_{2r}$ are coprime to $n$. So, looking at the above equation modulo $2$, we deduce $m \equiv 0 \pmod 2$, which is a contradiction as $m$ is odd. This shows that $m\not \in 2rT$ and hence $\mathrm{diam}\, \mathcal G>2r$. Since the smallest additive generator set of ${\mathbb Z}_n$ in $D$ is of size $r$, by Theorem~\ref{thm-diam-bounds}, we have that $\mathrm{diam}\, \mathcal G=2r+1$. \end{proof} \section{Conclusion} \label{sec:concl} We have proved that a quantum system whose hamiltonian is identical to the adjacency matrix of a circulant graph is periodic if and only if the graph is integral. We have bounded the number of vertices of integral circulant graphs in terms of their degree, characterized bipartiteness and given exact bounds for the diameter. 
It is a natural problem to extend Theorems~\ref{thm:periodic}, \ref{thm:Nkb} and~\ref{thm-diam-bounds} to other classes of Cayley graphs, for example, to Cayley graphs of abelian groups. We conclude with a partial result about perfect state transfer. We say that there is \emph{perfect state transfer} (see~\cite{CDEL}) in a graph $\mathcal G$ between the vertex $a$ and the vertex $b$ if there is $0<t<\infty$, such that \[ |\langle a|e^{-\iota A(\mathcal G)t}|b\rangle|=1. \] For an integral circulant graph $\mathcal G =G(n;S)$, we have the following setting: for all $0\leq j\leq n-1$, $v_{j}=[1,\omega^{j},\ldots,\omega^{j(n-1)}]^{T}$ is an eigenvector of $A(\mathcal G)$ corresponding to the eigenvalue $\lambda_j$ given by~\eqref{eq-lambda-in-terms-of-omega}. Thus $$A(\mathcal G)=\frac{1}{n}\sum_{j=0}^{n-1}\lambda_{j}v_{j}v_{j}^{\dagger}. $$ This gives $$e^{-\iota A(\mathcal G)t}=\frac{1}{n}\sum_{j=0}^{n-1}e^{-\iota\lambda_{j}t}v_{j} v_{j}^{\dagger}$$ and $$|\langle a|e^{-\iota A(\mathcal G)t}|b\rangle|=\frac{1}{n}\left|\sum_{j=0}^{n-1}e^{-\iota\lambda_{j}t}\omega^{j(a-b)}\right|. $$ We have then the following question: are there $0\leq a,b\leq(n-1)$ and $t\in{\mathbb R}$ such that $|\langle a|e^{-\iota A(\mathcal G)t}|b\rangle|=1$? \begin{proposition} If $n$ is odd then there do not exist $0\leq a<b\leq(n-1)$ and $t\in{\mathbb R}^{>0}$ such that $\vert \left<a\vert e^{\iota A t}\vert b\right> \vert=1$. In other words, an integral circulant graph having an odd number of vertices cannot have perfect state transfer. \end{proposition} \begin{proof} Recall $$e^{\iota At}=\frac{1}{n}\sum_{\ell=0}^{n-1}e^{\iota\lambda_\ell t} v_\ell v_\ell^\dagger.$$ Therefore, \begin{align*} \left<a|e^{\iota At}|b\right> =& \frac{1}{n}\sum_{\ell=0}^{n-1}e^{\iota\lambda_\ell t} \omega^{\ell a}\omega^{-\ell b}\\ =&\frac{1}{n}\sum_{\ell=0}^{n-1}e^{\iota\lambda_\ell t}\omega^{\ell (a- b)}. \end{align*} Now the magnitude of the above expression is clearly $\leqslant 1$. 
The equality holds if and only if each term is $1$ implying that $e^{\iota\lambda_\ell t}=\pm 1$ and $\omega^{\ell(a- b)}=\pm 1$ for all $\ell$. Now if $n$ is odd then $\omega^{\ell(a- b)}=\pm 1$ happens only when $a \equiv b\pmod n$. Thus, there is no perfect state transfer when $n$ is odd. \end{proof} When $n$ is even there is perfect state transfer (between vertices $a$ and $a+\frac{n}{2}$) if there exists a $t\in{\mathbb R}^{>0}$ such that $e^{\iota\lambda_\ell t}=(-1)^{\ell}$ for all $\ell\in\{0,\ldots,n-1\}$. For example, this happens in the case of $n=4$ and $S=\{1,3\}$. However, we do not know whether there are other such instances. \section*{Acknowledgments} Part of this work has been carried out while the second author was visiting CWI. This was possible thanks to the financial support of CWI and the kind hospitality of Harry Buhrman.
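The $n=4$, $S=\{1,3\}$ instance mentioned above is easy to confirm numerically. The following NumPy sketch (our own illustration, not part of the paper) checks perfect state transfer between the antipodal vertices $0$ and $2$ at $t=\pi/2$, one admissible time for this graph:

```python
import numpy as np

n, S = 4, {1, 3}           # the 4-cycle; eigenvalues 2, 0, -2, 0
A = np.array([[1 if (i - j) % n in S else 0 for j in range(n)]
              for i in range(n)])

# exp(-i A t) via the spectral decomposition of the symmetric matrix A
w, V = np.linalg.eigh(A)
t = np.pi / 2
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

# Perfect state transfer between antipodal vertices 0 and 2 at t = pi/2:
# the transition amplitude has modulus one.
assert np.isclose(abs(U[0, 2]), 1.0)
```

Here $e^{\iota\lambda_\ell t}=e^{\iota\lambda_\ell\pi/2}=(-1)^{\ell}$ for the spectrum $(2,0,-2,0)$ listed in index order $\ell=0,1,2,3$, matching the sign condition stated above.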
https://arxiv.org/abs/1808.07801
On a 'Two Truths' Phenomenon in Spectral Graph Clustering
Clustering is concerned with coherently grouping observations without any explicit concept of true groupings. Spectral graph clustering - clustering the vertices of a graph based on their spectral embedding - is commonly approached via K-means (or, more generally, Gaussian mixture model) clustering composed with either Laplacian or Adjacency spectral embedding (LSE or ASE). Recent theoretical results provide new understanding of the problem and solutions, and lead us to a 'Two Truths' LSE vs. ASE spectral graph clustering phenomenon convincingly illustrated here via a diffusion MRI connectome data set: the different embedding methods yield different clustering results, with LSE capturing left hemisphere/right hemisphere affinity structure and ASE capturing gray matter/white matter core-periphery structure.
\section*{Spectral Graph Clustering} Given a simple graph $G=(V,E)$ on $n$ vertices, consider the associated $n \times n$ adjacency matrix $A$ in which $A_{ij}$ encodes whether vertices $i$ and $j$ in $V$ share an edge $(i,j)$ in $E$. For our simple undirected, unweighted, loopless case, $A$ is binary with $A_{ij} \in \{0,1\}$, symmetric with $A=A^\top$, and hollow with $diag(A)=\vec{0}$. The first step of spectral graph clustering \cite{vonLuxburg2007,rohe2011} involves embedding the graph into Euclidean space via an eigendecomposition. We consider two options: Laplacian Spectral Embedding (LSE) wherein we decompose the normalized Laplacian of the adjacency matrix, and Adjacency Spectral Embedding (ASE) given by a decomposition of the adjacency matrix itself. With target dimension $d$, either spectral embedding method produces $n$ points in $\Re^d$, denoted by the $n \times d$ matrix $X$. ASE employs the eigendecomposition to represent the adjacency matrix via $A = USU^{\top}$ and chooses the top $d$ eigenvalues by magnitude and their associated vectors to embed the graph via the scaled eigenvectors $U_d |S_d|^{1/2}$. Similarly, LSE embeds the graph via the top scaled eigenvectors of the normalized Laplacian $\mathcal{L}(A) = D^{-1/2} A D^{-1/2}$, where $D$ is the diagonal matrix of vertex degrees. In either case, each vertex is mapped to the corresponding row of $X = U_d |S_d|^{1/2}$. Spectral graph clustering concludes via classical Euclidean clustering of $X$. As described below, Central Limit Theorems for spectral embedding of the (sufficiently dense) Stochastic Block Model via either LSE or ASE suggest Gaussian Mixture Modeling (GMM) for this clustering step. Thus we consider spectral graph clustering to be GMM composed with LSE or ASE: $$ \mbox{GMM} ~ \circ ~ \{\mbox{LSE},\mbox{ASE}\}. 
$$ \section*{Stochastic Block Model} The random graph model we use to illustrate our phenomenon is the Stochastic Block Model (SBM), introduced in \cite{holland1983stochastic}. This model is parameterized by ({\it i}) a block membership probability vector $\vec{\pi} = [\pi_1,\dots,\pi_K]^\top$ in the unit simplex and ({\it ii}) a symmetric $K \times K$ block connectivity probability matrix $B$ with entries in $[0,1]$ governing the probability of an edge between vertices given their block memberships. Use of the SBM is ubiquitous in theoretical, methodological, and practical graph investigations, and SBMs have been shown to be universal approximators for exchangeable random graphs \cite{Olhede14722}. For sufficiently dense graphs, both LSE and ASE have a Central Limit Theorem \cite{athreya2013limit,tang_lse,PRD-GRDPG} demonstrating that, for large $n$, embedding via the top $d$ eigenvectors from a rank $d$ $K$-block SBM ($d \equiv rank(B) \leq K$) yields $n$ points in $\Re^d$ behaving approximately as a random sample from a mixture of $K$ Gaussians. That is, given that the $i$th vertex belongs to block $k$, the $i$th row of $X = U_d S_d^{1/2}$ will be approximately distributed as a multivariate normal with parameters specific to block $k$, $X_i \sim \mathcal{MVN}(\mu_k,\Sigma_k)$. The structure of the covariance matrices suggests that the GMM is called for as an appropriate generalization of $K$-means clustering. Therefore, GMM$(X)$ via Maximum Likelihood will produce mixture parameter estimates and associated asymptotically perfect clustering, using either LSE or ASE. For finite $n$, however, LSE and ASE yield different clustering performance, and neither dominates the other. We will make significant conceptual use of the positive definite 2-block SBM ($K=2$), with $$ B = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \\ \end{bmatrix} = \begin{bmatrix} a & b \\ b & c \\ \end{bmatrix} $$ which henceforth we shall abbreviate as $B = [a,b;b,c]$. 
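To fix ideas, a minimal numerical sketch (our own illustration, not the authors' code) of the pipeline so far: sample a 2-block SBM and compute ASE and LSE embeddings as defined above. The helper names and the particular $B$ are assumptions for the example.

```python
# Illustrative sketch: SBM sampling plus ASE/LSE spectral embeddings.
import numpy as np

rng = np.random.default_rng(0)

def sample_sbm(n, pi, B, rng):
    """Sample a symmetric, hollow (loopless) adjacency matrix from an SBM."""
    z = rng.choice(len(pi), size=n, p=pi)        # block memberships
    P = B[np.ix_(z, z)]                          # per-pair edge probabilities
    U = rng.random((n, n))
    A = np.triu((U < P).astype(float), k=1)      # upper triangle, no loops
    return A + A.T, z

def ase(A, d):
    """Adjacency spectral embedding: top-d scaled eigenvectors of A."""
    vals, vecs = np.linalg.eigh(A)
    idx = np.argsort(-np.abs(vals))[:d]          # top d eigenvalues by magnitude
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

def lse(A, d):
    """Laplacian spectral embedding of L(A) = D^{-1/2} A D^{-1/2}."""
    deg = A.sum(axis=1)
    dinv = 1.0 / np.sqrt(np.maximum(deg, 1e-12))  # guard isolated vertices
    L = dinv[:, None] * A * dinv[None, :]
    return ase(L, d)                              # same decomposition recipe

B = np.array([[0.5, 0.1], [0.1, 0.4]])            # an affinity-structured B
A, z = sample_sbm(400, [0.5, 0.5], B, rng)
X_ase, X_lse = ase(A, 2), lse(A, 2)
```

Clustering the rows of `X_ase` or `X_lse` with a GMM then completes $\mbox{GMM} \circ \{\mbox{LSE},\mbox{ASE}\}$.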
In this simple setting, two general/generic cases present themselves: affinity and core-periphery. Affinity: $a,c \gg b$. An SBM with $B = [a,b;b,c]$ is said to exhibit affinity structure if each of the two blocks have a relatively high within-block connectivity probability compared to the between-block connectivity probability. Core-periphery: $a \gg b,c$. An SBM with $B = [a,b;b,c]$ is said to exhibit core-periphery structure if one of the two blocks has a relatively high within-block connectivity probability compared to both the other block's within-block connectivity probability and the between-block connectivity probability. The relative performance of LSE and ASE for these two cases provides the foundation for our analyses. Informally: LSE outperforms ASE for affinity, and ASE is the better choice for core-periphery. We make this clustering performance assessment analytically precise via Chernoff Information, and we demonstrate this in practice via Adjusted Rand Index. \section*{Clustering Performance Assessment} We consider two approaches to assessing the performance of a given clustering, defined to be a partition of $[n] \equiv \{1,\dots,n\}$ into a disjoint union of $K$ partition cells or clusters. For our purposes -- demonstrating a `Two Truths' phenomenon in LSE vs.\ ASE spectral graph clustering -- we will consider the case in which there is a `true' or meaningful clustering of the vertices against which we can assess performance, but we emphasize that in practice such a truth is neither known nor necessarily unique. \subsection*{Chernoff Information} Comparing and contrasting the relative performance of LSE vs.\ ASE via the concept of Chernoff information \cite{chernoff_1952,chernoff_1956}, in the context of their respective CLTs, provides a limit theorem notion of superiority. Thus, in the SBM, we allude to the GMM provided by the CLT for either LSE or ASE. 
The Chernoff information between two distributions is the exponential rate at which the decision-theoretic Bayes error decreases as a function of sample size. In the 2-block SBM, with the true clustering of the vertices given by the block memberships, we are interested in the large-sample optimal error rate for recovering the underlying block memberships after the spectral embedding step has been carried out. Thus we require the Chernoff information $C(F_1,F_2)$ when $F_1 = \mathcal{MVN}(\mu_1, \Sigma_1)$ and $F_2 = \mathcal{MVN}(\mu_2, \Sigma_2)$ are multivariate normals. Letting $\Sigma_t = t \Sigma_1 + (1 - t) \Sigma_2$ and \begin{align*} h(t;F_1,F_2) = \ & \frac{t(1 - t)}{2} (\mu_1 - \mu_2)^{\top}\Sigma_t^{-1}(\mu_1 - \mu_2) \\ & + \frac{1}{2} \log \frac{|\Sigma_t|}{|\Sigma_1|^{t} |\Sigma_2|^{1 - t}} \end{align*} we have $$ \rho_{F_1, F_2} = \sup_{t \in (0,1)} h(t;F_1,F_2). $$ This provides both $\rho_L$ and $\rho_A$ when using the large-sample GMM parameters for $F_1, F_2$ obtained from the LSE and ASE embeddings, respectively, for a particular 2-block SBM distribution (defined by its block membership probability vector $\vec{\pi}$ and block connectivity probability matrix $B$). We will make use of the Chernoff ratio $\rho = \rho_A/\rho_L$; $\rho > 1$ implies ASE is preferred while $\rho < 1$ implies LSE is preferred. (Recall that as the Chernoff information increases, the large-sample optimal error rate decreases.) Chernoff analysis in the 2-block SBM demonstrates that, in general, LSE is preferred for affinity while ASE is preferred for core-periphery \cite{tang_lse,JCapeCC}. \subsection*{Adjusted Rand Index} In practice, we wish to empirically assess the performance of a particular clustering algorithm on a given graph. There are numerous cluster assessment criteria available in the literature: Rand Index (RI) \cite{hubert85}, Normalized Mutual Information (NMI) \citep{dadiduar05}, Variation of Information (VI) \citep{me07}, Jaccard \citep{ja1912}, etc. 
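Returning to the Chernoff computation above: the $h(t;F_1,F_2)$ expression is straightforward to evaluate numerically. A minimal sketch (our own; the grid search over $t\in(0,1)$ is an assumption, standing in for a proper one-dimensional optimizer):

```python
# Illustrative sketch: Chernoff information between two multivariate
# normals, maximizing h(t; F1, F2) over a grid of t in the open (0,1).
import numpy as np

def chernoff_mvn(mu1, S1, mu2, S2, grid=999):
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    S1, S2 = np.asarray(S1, float), np.asarray(S2, float)
    dmu = mu1 - mu2
    ts = np.linspace(0.0, 1.0, grid + 2)[1:-1]   # open interval (0,1)
    best = -np.inf
    for t in ts:
        St = t * S1 + (1 - t) * S2
        quad = 0.5 * t * (1 - t) * dmu @ np.linalg.solve(St, dmu)
        logdet = 0.5 * (np.linalg.slogdet(St)[1]
                        - t * np.linalg.slogdet(S1)[1]
                        - (1 - t) * np.linalg.slogdet(S2)[1])
        best = max(best, quad + logdet)
    return best
```

As a sanity check, for equal covariances $\Sigma_1=\Sigma_2=\Sigma$ the log-determinant term vanishes and the supremum is attained at $t=1/2$, giving $\tfrac{1}{8}(\mu_1-\mu_2)^\top\Sigma^{-1}(\mu_1-\mu_2)$.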
These are typically employed to compare either an empirical clustering against a `truth', or two separate empirical clusterings. For concreteness, we consider the well-known Adjusted Rand Index (ARI), popular in machine learning, which normalizes RI so that expected chance performance is zero: ARI is the adjusted-for-chance probability that two partitions of a collection of data points will agree for a randomly chosen pair of data points, putting the pair into the same partition cell in both clusterings, or splitting the pair into different cells in both clusterings. (Our empirical connectome results are essentially unchanged when using other cluster assessment criteria.) In the context of spectral clustering via $\mbox{GMM} ~ \circ ~ \{\mbox{LSE},\mbox{ASE}\}$, we consider $\mathcal{C}_{LSE}$ and $\mathcal{C}_{ASE}$ to be the two clusterings of the vertices of a given graph. Then ARI($\mathcal{C}_{LSE}$,$\mathcal{C}_{ASE}$) assesses their agreement: ARI($\mathcal{C}_{LSE}$,$\mathcal{C}_{ASE}$) $= 1$ implies that the two clusterings are identical; ARI($\mathcal{C}_{LSE}$,$\mathcal{C}_{ASE}$) $\approx 0$ implies that the two spectral embedding methods are ``operationally orthogonal.'' (Significance is assessed via permutation testing.) In the context of `Two Truths', we consider $\mathcal{C}_1$ and $\mathcal{C}_2$ to be two known `true' or meaningful clusterings of the vertices. Then, with $\mathcal{C}_{SE}$ being either $\mathcal{C}_{LSE}$ or $\mathcal{C}_{ASE}$, ARI($\mathcal{C}_{SE}$,$\mathcal{C}_1$) $\gg$ ARI($\mathcal{C}_{SE}$,$\mathcal{C}_2$) implies that the spectral embedding method under consideration is more adept at discovering truth $\mathcal{C}_1$ than truth $\mathcal{C}_2$. Analogous to the theoretical Chernoff analysis, ARI simulation studies in the 2-block SBM demonstrate that, in general, LSE is preferred for affinity while ASE is preferred for core-periphery. 
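The ARI computation itself is simple enough to sketch from the contingency table of the two clusterings; the following pure-Python illustration is ours, not the authors' implementation:

```python
# Illustrative sketch: Adjusted Rand Index between two label sequences.
from collections import Counter
from math import comb

def ari(labels_a, labels_b):
    """ARI = (index - expected index) / (max index - expected index)."""
    n = len(labels_a)
    pairs = Counter(zip(labels_a, labels_b))     # contingency table counts
    sum_nij = sum(comb(c, 2) for c in pairs.values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_a).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_b).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (sum_nij - expected) / (max_index - expected)
```

Note that ARI is invariant to relabeling of the clusters, as a partition-comparison criterion should be.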
\section*{Model Selection $\times$ 2} In order to perform the spectral graph clustering $ \mbox{GMM} ~ \circ ~ \{\mbox{LSE},\mbox{ASE}\}$ in practice, we must address two inherent model selection problems: we must choose the embedding dimension ($\widehat{d}$) and the number of clusters ($\widehat{K}$). \subsection*{SBM vs.\ network histogram} If the SBM model were actually true, then as $n \to \infty$ any reasonable procedure for estimating the SVD rank would yield a consistent estimator $\widehat{d} \to d$ and any reasonable procedure for estimating the number of clusters would yield a consistent estimator $\widehat{K} \to K$. Critically, the universal approximation result of \cite{Olhede14722} shows that SBMs provide a principled `network histogram' model even without the assumption that the SBM model with some fixed $(d,K)$ actually holds. Thus, practical model selection for spectral graph clustering is concerned with choosing ($\widehat{d},\widehat{K}$) so as to provide a useful approximation. The bias-variance tradeoff demonstrates that any quest for a universally optimal methodology for choosing the ``best'' dimension and number of clusters, in general, for finite $n$, is a losing proposition. Even for a low-rank model, subsequent inference may be optimized by choosing a dimension {\em smaller than} the true signal dimension, and even for a mixture of $K$ Gaussians, inference performance may be optimized by choosing a number of clusters {\em smaller than} the true cluster complexity. In the case of semiparametric SBM fitting, wherein low-rank and finite mixture are employed as a practical modeling convenience as opposed to a believed true model, and one presumes that both $\widehat{d}$ and $\widehat{K}$ will tend to infinity as $n \to \infty$, these bias-variance tradeoff considerations are exacerbated. 
For $\widehat{d}$ and $\widehat{K}$ below, we make principled methodological choices for simplicity and concreteness, but make no claim that these are best in general or even for the connectome data considered herein. Nevertheless, one must choose an embedding dimension and a mixture complexity, and thus we proceed. \subsection*{Choosing the embedding dimension $\widehat{d}$} A ubiquitous and principled general methodology for choosing the number of dimensions in eigendecompositions and SVDs (e.g., principal components analysis, factor analysis, spectral embedding, etc.)\ is to examine the so-called scree plot and look for ``elbows'' defining the cut-off between the top (signal) dimensions and the noise dimensions. There are a plethora of variations for automating this singular value thresholding (SVT); Section 2.8 of \cite{Jackson} provides a comprehensive discussion in the context of principal components, and \cite{chatterjee2015} provides a theoretically-justified (but perhaps practically suspect, for small $n$) universal SVT. We consider the profile likelihood SVT method of \cite{Zhu:2006fv}. Given $A = USU^{\top}$ (for either LSE or ASE) the singular values $S$ are used to choose the embedding dimension $\widehat{d}$ via $$\widehat{d} = \arg\max_{d} ProfileLikelihood_{S}(d)$$ where $ProfileLikelihood_{S}(d)$ provides a definition for the magnitude of the ``gap'' after the first $d$ singular values. \subsection*{Choosing the number of clusters $\widehat{K}$} Choosing the number of clusters in Gaussian mixture models is most often addressed by maximizing a fitness criterion penalized by model complexity. Common approaches include Akaike Information Criterion (AIC) \citep{akaike1974new}, Bayesian Information Criterion (BIC) \citep{BIC}, Minimum Description Length (MDL) \citep{MDL}, etc. We consider penalized likelihood via BIC \citep{mclust2012}. 
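A minimal sketch of a profile-likelihood SVT in the spirit of \cite{Zhu:2006fv} (our own illustration: for each candidate $d$ the singular values are split into `signal' and `noise' groups modeled as two normals with a common variance, and $\widehat{d}$ maximizes the resulting profile log-likelihood; the sample values are hypothetical):

```python
# Illustrative sketch: profile-likelihood choice of the embedding
# dimension dhat from a sorted list of singular values.
from math import log, pi

def profile_loglik(svals, d):
    """Two-normal, common-variance profile log-likelihood of a split at d."""
    head, tail = svals[:d], svals[d:]
    mu1 = sum(head) / len(head)
    mu2 = sum(tail) / len(tail)
    # pooled (MLE) variance under the common-variance assumption
    ss = sum((s - mu1) ** 2 for s in head) + sum((s - mu2) ** 2 for s in tail)
    var = max(ss / len(svals), 1e-12)
    return -0.5 * len(svals) * (log(2 * pi * var) + 1)

def choose_d(svals):
    svals = sorted(svals, reverse=True)
    return max(range(1, len(svals)), key=lambda d: profile_loglik(svals, d))

svals = [9.0, 8.5, 8.0, 1.2, 1.1, 1.0, 0.9, 0.8]   # clear gap after the top 3
dhat = choose_d(svals)
```

A clear gap after the top three values yields $\widehat{d}=3$ in this toy example.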
Given $n$ points in $\Re^d$ represented by $X = U_d S_d^{1/2}$ (obtained via either LSE or ASE) and letting $\theta_K$ represent the GMM parameter vector whose dimension $dim(\theta_K)$ is a function of the data dimension $d$, the mixture complexity $\widehat{K}$ is chosen via $$\widehat{K} = \arg\max_K PenalizedLikelihood_X(\widehat{\theta}_K)$$ where $PenalizedLikelihood_X(\widehat{\theta}_K)$ is twice the log-likelihood of the data $X$ evaluated at the GMM with mixture parameter estimate $\widehat{\theta}_K$ penalized by $dim(\theta_K) \cdot \ln n$. For spectral clustering, we employ BIC for $\widehat{K}$ after spectral embedding, so $X \in \Re^{\widehat{d}}$ with $\widehat{d}$ chosen as above. \section*{Connectome Data} We consider for illustration a diffusion MRI data set consisting of $114$ connectomes (57 subjects, 2 scans each) with 72,783 vertices each and both Left/Right/other hemispheric and Gray/White/other tissue attributes for each vertex. Graphs were estimated using the NDMG pipeline \cite{Kiar188706}, with vertices representing sub-regions defined via spatial proximity and edges by tensor-based fiber streamlines connecting these regions. See Figure \ref{fig:DataGen}. The actual graphs we consider are the largest connected component (LCC) of the induced subgraph on the vertices labeled as both Left or Right and Gray or White. This yields $m=114$ connected graphs on $n \approx 40,000$ vertices. Additionally, for each graph every vertex has a $\{\mbox{Left,Right}\}$ label and a $\{\mbox{Gray,White}\}$ label, which we sometimes find convenient to consider as a single label in $\{\mbox{LG,LW,RG,RW}\}$. \subsection*{Sparsity} The only notions of sparsity relevant here are linear algebraic: whether there are enough edges in the graph to support spectral embedding, and whether there are few enough to allow for sparse matrix computations. 
We have a collection of observed connectomes and we want to cluster the vertices in these graphs, as opposed to in an unobserved sequence with the number of vertices tending to infinity. Our connectomes have, on average, $n \approx 40,000$ vertices and $e \approx 2,000,000$ edges, for an average degree $2e/n \approx 100$ and a graph density $e/\binom{n}{2} \approx 0.0025$. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{TTericnew} \caption{Connectome data generation. The output is diffusion MRI graphs on $\approx$1M vertices. Spatial vertex contraction yields graphs on $\approx$70K vertices from which we extract largest connected components of $\approx$40K vertices with $\{\mbox{Left,Right}\}$ and $\{\mbox{Gray,White}\}$ labels for each vertex. Figure \ref{fig:killer} depicts (a subsample from) one such graph. } \label{fig:DataGen} \end{figure} \section*{Synthetic Analysis} We consider a synthetic data analysis via a priori projections onto the SBM -- block model estimates based on known or assumed block memberships. Averaging the collection of $m=114$ connectomes yields the composite (weighted) graph adjacency matrix $\bar{A}$. The $\{\mbox{LG,LW,RG,RW}\}$ projection of the binarized $\bar{A}$ onto the 4-block SBM yields the block connectivity probability matrix $B$ presented in Figure \ref{fig:LRGW} and the block membership probability vector $\vec{\pi} = [0.28, 0.22, 0.28, 0.22]^\top$. Limit theory demonstrates that spectral graph clustering using $d=K=4$ will, for large $n$, correctly identify block memberships for this 4-block case when using either LSE or ASE. Our interest is to compare and contrast the two spectral embedding methods for clustering into 2 clusters. We demonstrate that this synthetic case exhibits the `Two Truths' phenomenon both theoretically and in simulation -- the $\{\mbox{LG,LW,RG,RW}\}$ a priori projection of our composite connectome yields a 4-block `Two Truths' SBM. 
\begin{figure}[tbhp] \centering \includegraphics[width=1.0\linewidth]{LRGW-binary} \caption{Block connectivity probability matrix for the $\{\mbox{LG,LW,RG,RW}\}$ a priori projection of the composite connectome onto the 4-block SBM. The two two-block projections ($\{\mbox{Left},\mbox{Right}\}$ \& $\{\mbox{Gray},\mbox{White}\}$) are shown in Figure \ref{fig:LRandGW}. This synthetic SBM exhibits the `Two Truths' phenomenon both theoretically (via Chernoff analysis) and in simulation (via Monte Carlo). } \label{fig:LRGW} \end{figure} \begin{figure}[tbhp] \centering \includegraphics[width=.45\linewidth]{LR-binary} \includegraphics[width=.45\linewidth]{GW-binary} \caption{Block connectivity probability matrices for the a priori projection of the composite connectome onto the 2-block SBM for (left panel) $\{\mbox{Left},\mbox{Right}\}$ \& (right panel) $\{\mbox{Gray},\mbox{White}\}$. $\{\mbox{Left},\mbox{Right}\}$ exhibits affinity structure, with Chernoff ratio $\rho < 1$; $\{\mbox{Gray},\mbox{White}\}$ exhibits core-periphery structure, with Chernoff ratio $\rho > 1$. } \label{fig:LRandGW} \end{figure} \subsection*{2-Block Projections} A priori projections onto the 2-block SBM for $\{\mbox{Left,Right}\}$ and $\{\mbox{Gray,White}\}$ yield the two block connectivity probability matrices shown in Figure \ref{fig:LRandGW}. It is apparent that the $\{\mbox{Left,Right}\}$ a priori block connectivity probability matrix $B=[a,b;b,c]$ represents an affinity SBM with $a \approx c \gg b$ and the $\{\mbox{Gray,White}\}$ a priori projection yields a core-periphery SBM with $c \gg a \approx b$. 
It remains to investigate the extent to which the Chernoff analysis from the 2-block setting (LSE is preferred for affinity while ASE is preferred for core-periphery) extends to such a 4-block `Two Truths' case; we do so theoretically and in simulation using this synthetic model derived from the $\{\mbox{LG,LW,RG,RW}\}$ a priori projection of our composite connectome in the next two subsections, and then empirically on the original connectomes in the following section. \subsection*{Theoretical Results} Analysis using the large-sample Gaussian mixture model approximations from the LSE and ASE CLTs shows that the 2-dimensional embedding of the 4-block model, when clustered into 2 clusters, will yield \{ \{LG,LW\} , \{RG,RW\} \} (i.e., \{Left,Right\}) when embedding via LSE and \{ \{LG,RG\} , \{LW,RW\} \} (i.e., \{Gray,White\}) when using ASE. That is, using numerical integration for the $d=K=2$ $\mbox{GMM} ~ \circ ~ \mbox{LSE}$, the largest Kullback-Leibler divergence (as a surrogate for Chernoff information) among the 10 possible ways of grouping the 4 Gaussians into two clusters is for the \{~\{LG,LW\}~,~\{RG,RW\}~\} grouping, and the largest of these values for the $\mbox{GMM} ~ \circ ~ \mbox{ASE}$ is for the \{~\{LG,RG\}~,~\{LW,RW\}~\} grouping. \subsection*{Simulation Results} We augment the Chernoff limit theory via Monte Carlo simulation, sampling graphs from the 4-block model and running the $\mbox{GMM} ~ \circ ~ \{\mbox{LSE},\mbox{ASE}\}$ algorithm specifying $\widehat{d}=\widehat{K}=2$. This results in LSE finding $\{\mbox{Left},\mbox{Right}\}$ (ARI $> 0.95$) with probability $> 0.95$ and ASE finding $\{\mbox{Gray},\mbox{White}\}$ (ARI $> 0.95$) with probability $> 0.95$. \section*{Connectome Results} Figures \ref{fig:BNU1AffCP}, \ref{fig:xxx} and \ref{fig:BNU1DeltaARI} present empirical results for the connectome data set, $m=114$ graphs each on $n \approx 40,000$ vertices. 
We note that these connectomes are most assuredly \emph{not} 4-block `Two Truths' SBMs of the kind presented in Figures \ref{fig:LRGW} and \ref{fig:LRandGW}, but they do have `Two Truths' (\{Left,Right\} \& \{Gray,White\}) and, as we shall see, they do exhibit a real-data version of the synthetic results presented above, in the spirit of semiparametric SBM fitting. First, in Figure \ref{fig:BNU1AffCP}, we consider a priori projections of the individual connectomes, analogous to the Figure \ref{fig:LRandGW} projections of the composite connectome. Letting $B=[a,b;b,c]$ be the observed block connectivity probability matrix for the a priori 2-block SBM projection (\{Left,Right\} or \{Gray,White\}) of a given individual connectome, the coordinates in Figure \ref{fig:BNU1AffCP} are given by $x=\min(a,c)/\max(a,c)$ and $y=b/\max(a,c)$. Each graph yields two points, one for each of \{Left,Right\} and \{Gray,White\}. We see that the $\{\mbox{Left},\mbox{Right}\}$ projections are in the affinity region (large $x$ and small $y$ implies $a \approx c \gg b$, where Chernoff ratio $\rho < 1$ and LSE is preferred) while the $\{\mbox{Gray},\mbox{White}\}$ projections are in the core-periphery region (small $x$ and small $y$ implies $\max(a,c) \gg b \approx \min(a,c)$ where $\rho > 1$ and ASE is preferred). This exploratory data analysis finding indicates complex `Two Truths' structure in our connectome data set. (Of independent interest: we propose Figure \ref{fig:BNU1AffCP} as the representative for a novel and illustrative `Two Truths' exploratory data analysis (EDA) plot for a data set of $m$ graphs with multiple categorical vertex labels.) In Figures \ref{fig:xxx} and \ref{fig:BNU1DeltaARI} we present the results of $m=114$ runs of the spectral clustering algorithm $\mbox{GMM} ~ \circ ~ \{\mbox{LSE},\mbox{ASE}\}$. We consider each of LSE and ASE, choosing $\widehat{d}$ and $\widehat{K}$ as described above. 
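For concreteness, the EDA-plot coordinates just defined amount to the following trivial sketch (our own illustration):

```python
# Illustrative sketch: per-connectome EDA-plot coordinates for a
# 2-block projection B = [a,b;b,c].
def eda_coords(a, b, c):
    """Return (x, y) = (min(a,c)/max(a,c), b/max(a,c))."""
    hi = max(a, c)
    return min(a, c) / hi, b / hi
```

Affinity projections ($a \approx c \gg b$) land at large $x$ and small $y$; core-periphery projections land at small $x$.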
The resulting empirical clusterings are evaluated via ARI against each of the \{Left,Right\} and \{Gray,White\} truths. In Figure \ref{fig:xxx} we present the results of the ($\widehat{d},\widehat{K}$) model selection, and we observe that ASE is choosing $\widehat{d} \in \{2,\dots,20\}$ and LSE is choosing $\widehat{d} \in \{30,\dots,60\}$, while ASE is choosing $\widehat{K} \in \{10,\dots,50\}$ and LSE is choosing $\widehat{K} \in \{2,\dots,20\}$. In Figure \ref{fig:BNU1DeltaARI}, each graph is represented by a single point, plotting $x$ = ARI(LSE,LR) $-$ ARI(LSE,GW) vs.\ $y$ = ARI(ASE,LR) $-$ ARI(ASE,GW), where ``LSE'' (resp.\ ``ASE'') represents the empirical clustering $\mathcal{C}_{LSE}$ (resp.\ $\mathcal{C}_{ASE}$) and ``LR'' (resp.\ ``GW'') represents the true clustering $\mathcal{C}_{\mbox{\small{\{Left,Right\}}}}$ (resp.\ $\mathcal{C}_{\mbox{\small{\{Gray,White\}}}}$). We see that almost all of the points lie in the $(+,-)$ quadrant, indicating ARI(LSE,LR) $>$ ARI(LSE,GW) and ARI(ASE,LR) $<$ ARI(ASE,GW). That is, LSE finds the affinity \{Left,Right\} structure and ASE finds the core-periphery \{Gray,White\} structure. The `Two Truths' structure in our connectome data set illustrated in Figure \ref{fig:BNU1AffCP} leads to fundamentally different but equally meaningful LSE vs.\ ASE spectral clustering performance. This is our `Two Truths' phenomenon in spectral graph clustering. \begin{figure} \centering \includegraphics[width=\linewidth]{newFig25b-a0_5-weightedAbar3} \caption{For each of our 114 connectomes, we plot the a priori 2-block SBM projections for $\{\mbox{Left},\mbox{Right}\}$ in red and $\{\mbox{Gray},\mbox{White}\}$ in blue. The coordinates are given by $x=\min(a,c)/\max(a,c)$ and $y=b/\max(a,c)$, where $B=[a,b;b,c]$ is the observed block connectivity probability matrix. The thin black curve $y=\sqrt{x}$ represents the rank 1 submodel separating positive definite (lower right) from indefinite (upper left). 
The background color shading is Chernoff ratio $\rho$, and the thick black curves are $\rho=1$ separating the region where ASE is preferred (between the curves) from LSE preferred. The point $(1,1)$ represents Erd\H{o}s-R\'enyi ($a=b=c$). The large stars are from the a priori composite connectome projections (Figure \ref{fig:LRandGW}). We see that the red $\{\mbox{Left},\mbox{Right}\}$ projections are in the affinity region where $\rho < 1$ and LSE is preferred while the blue $\{\mbox{Gray},\mbox{White}\}$ projections are in the core-periphery region where $\rho > 1$ and ASE is preferred. This analytical finding based on projections onto the SBM carries over to empirical spectral clustering results on the individual connectomes (Figure \ref{fig:BNU1DeltaARI}). } \label{fig:BNU1AffCP} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{dhatKhat-ARI-LRvsGW-LSE-VVVprior-th0_05} \caption{ Results of the ($\widehat{d},\widehat{K}$) model selection for spectral graph clustering for each of our 114 connectomes. For LSE we see $\widehat{d} \in \{30,\dots,60\}$ and $\widehat{K} \in \{2,\dots,20\}$; for ASE we see $\widehat{d} \in \{2,\dots,20\}$ and $\widehat{K} \in \{10,\dots,50\}$. The color-coding represents clustering performance in terms of ARI for each of LSE and ASE against each of the two truths \{Left,Right\} and \{Gray,White\}, and shows that LSE clustering identifies $\{\mbox{Left},\mbox{Right}\}$ better than $\{\mbox{Gray},\mbox{White}\}$ and ASE identifies $\{\mbox{Gray},\mbox{White}\}$ better than $\{\mbox{Left},\mbox{Right}\}$. Our `Two Truths' phenomenon is conclusively demonstrated: LSE finds $\{\mbox{Left},\mbox{Right}\}$ (affinity) while ASE finds $\{\mbox{Gray},\mbox{White}\}$ (core-periphery). } \label{fig:xxx} \end{figure} \begin{figure} \centering \includegraphics[width=1.0\linewidth]{Fig25A-LSE-VVVprior} \caption{Spectral graph clustering assessment via ARI. 
For each of our 114 connectomes, we plot the difference in ARI for the $\{\mbox{Left},\mbox{Right}\}$ truth against the difference in ARI for the $\{\mbox{Gray},\mbox{White}\}$ truth for the clusterings produced by each of LSE and ASE: $x$ = ARI(LSE,LR) $-$ ARI(LSE,GW) vs.\ $y$ = ARI(ASE,LR) $-$ ARI(ASE,GW). A point in the $(+,-)$ quadrant indicates that for that connectome the LSE clustering identified $\{\mbox{Left},\mbox{Right}\}$ better than $\{\mbox{Gray},\mbox{White}\}$ and ASE identified $\{\mbox{Gray},\mbox{White}\}$ better than $\{\mbox{Left},\mbox{Right}\}$. Marginal histograms are provided. Our `Two Truths' phenomenon is conclusively demonstrated: LSE identifies $\{\mbox{Left},\mbox{Right}\}$ (affinity) while ASE identifies $\{\mbox{Gray},\mbox{White}\}$ (core-periphery).} \label{fig:BNU1DeltaARI} \end{figure} \newpage \section*{Conclusion} The results presented herein demonstrate that practical spectral graph clustering exhibits a `Two Truths' phenomenon with respect to Laplacian vs.\ Adjacency spectral embedding. This phenomenon can be understood theoretically from the perspective of affinity vs.\ core-periphery Stochastic Block Models, and via consideration of the two a priori projections of a 4-block `Two-Truths' SBM onto the 2-block SBM. For connectomics, this phenomenon manifests itself via LSE better capturing the left hemisphere/right hemisphere affinity structure and ASE better capturing the gray matter/white matter core-periphery structure, and suggests that a connectivity-based parcellation based on spectral clustering should consider both LSE and ASE, as the two spectral embedding approaches facilitate the identification of different and complementary connectivity-based clustering truths. \acknow{ This work is partially supported by DARPA (XDATA, GRAPHS, SIMPLEX, D3M), JHU HLTCOE, and the Acheson J.\ Duncan Fund for the Advancement of Research in Statistics. 
The authors thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, UK, for support and hospitality during the programme Theoretical Foundations for Statistical Network Analysis (EPSRC grant no.\ EP/K032208/1) where a portion of the work on this paper was undertaken, and the University of Haifa, where these ideas were conceived in June 2014. } \showacknow{}
https://arxiv.org/abs/2203.09253
Visualizing Riemannian data with Rie-SNE
Faithful visualizations of data residing on manifolds must take the underlying geometry into account when producing a flat planar view of the data. In this paper, we extend the classic stochastic neighbor embedding (SNE) algorithm to data on general Riemannian manifolds. We replace standard Gaussian assumptions with Riemannian diffusion counterparts and propose an efficient approximation that only requires access to calculations of Riemannian distances and volumes. We demonstrate that the approach also allows for mapping data from one manifold to another, e.g. from a high-dimensional sphere to a low-dimensional one.
\section{Introduction}\label{sec:intro} Visualizations are crucial to investigators trying to make sense of high-dimensional data. The most common output of a visualization is a two-dimensional plot (e.g.\@ on a piece of paper or a computer screen), so we often call on a form of dimensionality reduction when working with high-dimensional data. The vast majority of dimensionality reduction techniques assume that data resides on a Euclidean domain (Sec.~\ref{sec:euc_viz}), which presents a problem when data is not quite that simple. Data residing on Riemannian manifolds, such as the sphere, appears in many domains where either known constraints or other modeling assumptions impose a Riemannian structure (Sec.~\ref{sec:riem_data}). In such settings, how should one visualize data? There are many concerns and questions when visualizing Riemannian data. The first is generic: all dimensionality reduction tools amplify parts of the signal, while reducing the remainder. This is an inherent limitation, which should always be kept in mind when interpreting data visualizations. Since some loss of information is inevitable, should we then loosen our grip on the data or its underlying Riemannian structure when such is present? Gauss's \emph{Theorema Egregium} \cite{gauss2005general} informs us that if the final plot is to be presented on a \emph{flat} screen or piece of paper, then a distortion of the Riemannian structure is inevitable. In practice, even if one accepts the limitations of a visualization, actual algorithms for visualizing Riemannian data are missing. In this paper, we develop an extension of the \emph{Stochastic Neighbor Embedding} \cite{sne} method to Riemannian data and thereby provide one such tool. We call this \emph{Riemannian Stochastic Neighbor Embedding}, or \emph{Rie-SNE} for short. Our approach is quite general as it allows data observed on one Riemannian manifold to be embedded on another. 
This allows for mapping data from a Riemannian space to a two-dimensional Euclidean plane (for plotting), but also mapping to a two-dimensional sphere, or similar, when the Euclidean topology is inappropriate. Rie-SNE does not claim to overcome the above-mentioned limitations of visualization, but it does provide a working tool, which we demonstrate to have practical merit. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{figures/teaser.pdf} \caption{Rie-SNE visualizes high-dimensional Riemannian data by mapping to a low-dimensional Riemannian (or Euclidean) manifold, which can then be shown. The figure shows high-dimensional spherical data mapped to either a low-dimensional sphere or a low-dimensional plane. The method also supports other manifolds, both as input and output.} \label{fig:teaser} \end{figure} \section{Background and related work}\label{sec:background} Before describing Rie-SNE, we provide the relevant context on visualization and geometry. \subsection{Euclidean visualization}\label{sec:euc_viz} General data visualization is a vast topic, and a complete review is beyond our scope \cite{vanderplas2016python}. We here focus on the setting where data is represented as vectors (points) in a Euclidean space of high dimension. When data is two- or three-dimensional, a scatter plot can directly reveal its structure, and we focus on the more difficult setting where data dimensions vastly exceed the easily plottable. Here one may explore the data through multiple projections, such as pairwise scatter plots, which is usually manageable for data of up to around 10 dimensions. Eventually the approach tends to become unwieldy and the greater picture is lost. Alternatives include continuously interpolating between dimensions to provide an (interactive) animation \cite{asimov1985grand}.
The approach we here explore is to find a non-linear mapping from the high-dimensional observation space into a two- or three-dimensional space, which is suitable for plotting. Many variants of this approach exist, and we only touch upon a few. \emph{Principal component analysis (PCA)} \cite{jolliffe2002principal} is perhaps the most commonly used approach to dimensionality reduction. This seeks a low-dimensional representation of data that preserves as much variance as possible. The restriction to spanning a linear subspace of the observation space, however, often implies that the low-dimensional view reveals little structure. The \emph{Gaussian process latent variable model (GP-LVM)} \cite{gplvm} provides a nonlinear probabilistic extension of PCA that places a Gaussian process prior on the unknown mapping that reduces dimensionality, and marginalizes it accordingly. The approach carries intrinsic elegance, but its optimization can be brittle \cite{feldager:icann:2021}. Classic `manifold learning' techniques avoid optimization issues by phrasing optimization tasks whose optimum is available through spectral decompositions \cite{isomap, lle, laplacian}. These rely on constructions of neighborhood graphs, which, however, can be brittle, so in practice the resulting visualizations are sensitive to parameter choices. The \emph{stochastic neighbor embedding (SNE)} \cite{sne} replaces the `hard' graph construction with a softer construction. A modern variant of this method \cite{vanDerMaaten2008} is one of the currently most popular algorithms, and is the one we here extend. We cover this in Secs.~\ref{sec:sne} and \ref{sec:tsne}. \subsection{Riemannian data}\label{sec:riem_data} Data is often equipped with additional knowledge such as constraints or given smooth structures. In many scenarios this additional knowledge gives the data a Riemannian structure.
For example, knowing that all observations have unit norm places them on the unit sphere, which has a well-studied Riemannian structure. The a priori available knowledge giving rise to a Riemannian data interpretation differs between domains, and we here only name a few. The most prominent example is that of \emph{directional statistics} \cite{mardia2000directional, kurz2013recursive, hauberg:SN:2018} where data resides on the unit sphere. Other examples include \emph{shape data} \cite{Kendall:BLMS:1984, Freifeld:ECCV:2012, Srivastava:PAMI:2005, Younes:ICV:2012, kurtek:pami:2012}, \emph{DTI images} \cite{lenglet:eccv:2004, Pennec:IJCV:2006,wang2014tracking}, \emph{image features} \cite{Tuzel:ECCV:2006, Porikli:CVPR:2006, freifeld:cvpr:2014}, \emph{motion models}~\cite{turaga,Cetingul:cvpr:2009}, \emph{human poses} \cite{said:eusipco:2007, Hauberg:IMAVIS:2011, spatial_priors:hauberg_et_al10, hauberg:mukf}, \emph{robotics} \cite{gilitschenski2015unscented, jaquier2022geometry}, \emph{social networks} \cite{mathieu2019continuous} and more. Even if Riemannian data is becoming increasingly common, tools for visualization have not followed. The most common approach for visualizing Riemannian data is to locate a point on the manifold, e.g.\@ the intrinsic mean \cite{pennec}, and map the data to the tangent space of this point. That gives a Euclidean view of the data, which can then be visualized with one of the many methods for this domain (Sec.~\ref{sec:euc_viz}). In particular, applying PCA tangentially is the gold standard \cite{fletcher2004principal}. Unless data is concentrated around the point of tangency, this approach is bound to give a highly distorted view of the data, and in practice most knowledge reflected in the geometry will be lost by the linearization \cite{sommer:eccv10}. Several extensions of PCA to Riemannian manifolds exist, e.g.\@ \cite{huckemann, panaretos:jasa:2014, jung:biometrika:2012}.
These focus on generalizing the classic linear models, and less work has been done on extending nonlinear methods. Two notable extensions are a `wrapped' extension of the GP-LVM \cite{mallasto:2018}, and an extension of the classic principal curves model \cite{hauberg:tpami:princurve}. \subsection{Measuring on Riemannian manifolds}\label{sec:geom} In order to engage with data residing on a manifold, we need a collection of operators that can be applied to data. Here we cover only the most basic ones, as these are all that Rie-SNE requires. A detailed exposition can be found elsewhere \cite{pennec, doCarmoRiemannian}. A Riemannian manifold is a space which is locally Euclidean. This implies that in a neighborhood around a point $\bm{\mu}$ on the manifold, we can get a Euclidean view of the manifold in the form of a tangent space, which is equipped with an inner product, \begin{align} \langle \x_i, \x_j \rangle_{\bm{\mu}} = \x_i^\top G_{\bm{\mu}} \x_j, \end{align} where $G_{\bm{\mu}}$ is a symmetric positive definite matrix that reflects the inner product at $\bm{\mu}$. This inner product is allowed to change smoothly between tangent spaces, in order to compensate for the approximation error induced by the linear view of the manifold. This distortion can be characterized by the change-in-volume between the manifold and its tangent, which follows $\sqrt{\det G_{\bm{\mu}}}$ \cite{pennec}. The inner product allows us to define local distances, which can be integrated to provide a notion of \emph{curve length}. That is, given a curve $\cc$ on a manifold, we may compute its length as \begin{align} \mathrm{Length}[\cc] &= \int_0^1 \sqrt{\langle \dot{\cc}_t, \dot{\cc}_t \rangle_{\cc_t}} \, \mathrm{d}t, \label{curvelength} \end{align} where we assume the curve to be parametrized by $t \in [0, 1]$, and use $\cc_t$ and $\dot{\cc}_t$ to denote the position and velocity of the curve, respectively.
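As a sanity check, the curve-length integral in Eq.~\ref{curvelength} can be approximated numerically by discretizing $t$. The following Python sketch (the function names and the spherical-coordinate example are ours, not the paper's) recovers the length $2\pi$ of the equator of the unit sphere under the metric $G = \mathrm{diag}(1, \sin^2\theta)$:

```python
import numpy as np

def curve_length(curve, metric, n=1000):
    """Approximate Length[c] = integral of sqrt(<c'_t, c'_t>_{c_t}) dt
    by finite differences on a uniform grid of t in [0, 1]."""
    t = np.linspace(0.0, 1.0, n + 1)
    pts = np.array([curve(ti) for ti in t])      # positions c_t
    vel = np.diff(pts, axis=0) / (t[1] - t[0])   # finite-difference velocities
    mid = 0.5 * (pts[:-1] + pts[1:])             # midpoints for the metric
    speeds = np.array([np.sqrt(v @ metric(p) @ v) for p, v in zip(mid, vel)])
    return np.sum(speeds) * (t[1] - t[0])

def sphere_metric(p):
    """Metric of the unit sphere in spherical coordinates (theta, phi)."""
    theta, _ = p
    return np.diag([1.0, np.sin(theta) ** 2])

def equator(t):
    """The equator: theta = pi/2 fixed, phi running from 0 to 2*pi."""
    return np.array([np.pi / 2, 2 * np.pi * t])

length = curve_length(equator, sphere_metric)
# A great circle on the unit sphere has length 2*pi.
```

Since the equator is traversed at constant speed $2\pi$, the discretized sum is exact up to floating-point error.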
From the notion of curve length, it is natural to define the distance between two points $\x_i$ and $\x_j$ as the length of the shortest connecting curve, \begin{align} \mathrm{dist}(\x_i, \x_j) &= \min_{\cc} \mathrm{Length}[\cc]. \label{geodesic} \end{align} The shortest curve commonly goes under the name \emph{geodesic}. The distance function can be differentiated using the relation \begin{align} \partial_{\x_i} \mathrm{dist}^2(\x_i, \x_j) &= -2 \mathrm{Log}_{\x_i}(\x_j), \end{align} where $\mathrm{Log}_{\x_i}(\x_j)$ is the Riemannian \emph{logarithm map} \cite{pennec}. If $\cc$ is a constant-speed geodesic connecting $\x_i$ and $\x_j$, then the logarithm map is merely the initial velocity $\dot{\cc}_0$ of said curve. \subsection{Stochastic neighbor embedding}\label{sec:sne} \emph{Stochastic neighbor embedding (SNE)} \cite{sne} is a dimensionality reduction tool, which aims to preserve similarity between neighboring points when mapped to a low-dimensional representation. Assume for now that we have access to functions $s_{\text{high}}$ and $s_{\text{low}}$, which measure the similarity between observation pairs in the high-dimensional observation space and the low-dimensional representation space, respectively. % Now define the conditional probability $p_{j|i}$ that $\x_i$ would pick $\x_j$ as its neighbor \cite{vanDerMaaten2008} \begin{align} p_{j|i} = \frac{s_{\text{high}}(\x_j | \x_i)}{\sum_{k \ne i} s_{\text{high}}(\x_k | \x_i)}. \label{hdimp2} \end{align} Common convention is to define $p_{i|i} = 0$. Further note that $\sum_j p_{j|i} = 1$. We can renormalize this to form a distribution over all observations as \begin{gather} p_{ij} = \frac{p_{j|i} + p_{i|j}}{2n}, \label{hdimp1} \end{gather} where $n$ is the number of observations. To learn a low-dimensional representation, the key idea is to repeat the above over the low-dimensional space to form \begin{align} q_{j|i} &= \frac{s_{\text{low}}(\y_j | \y_i)}{\sum_{k \ne i} s_{\text{low}}(\y_k | \y_i)}, \\ q_{ij} &= \frac{q_{j|i} + q_{i|j}}{2n}.
\end{align} We can now compare the similarity of our data and the representation by computing the Kullback-Leibler divergence between $p_{ij}$ and $q_{ij}$, \begin{align} C = \mathrm{KL}\left( P || Q \right) = \sum_{i=1}^n \sum_{j=1}^n p_{ij} \log \frac{p_{ij}}{q_{ij}}. \end{align} This can then be minimized using gradient descent with respect to the low-dimensional representation $\{ \y_i \}_{i=1}^n$. In its classic form, SNE picks the measures of similarity as Gaussian functions \begin{align} \begin{split} s_{\text{high}}(\x_j | \x_i) &= \left(2\pi\sigma_i^2\right)^{-\sfrac{D}{2}} \exp\left( -\frac{\| \x_j - \x_i \|^2}{2\sigma_i^2} \right), \\ s_{\text{low}}(\y_j | \y_i) &= \left(2\pi\right)^{-\sfrac{d}{2}} \exp\left( -\frac{\| \y_j - \y_i \|^2}{2} \right). \end{split}\label{eq:sne_s} \end{align} With this choice, the normalization constants of Eq.~\ref{eq:sne_s} cancel out when computing $p_{j|i}$ and $q_{j|i}$. Note that this approach gives a per-observation variance $\sigma_i^2$, such that different points can effectively have neighborhoods of different sizes. To determine the $\sigma_i^2$ parameters, the user specifies a \emph{perplexity} parameter, which can be thought of as a measure of the effective number of neighbors \cite{vanDerMaaten2008}. This is defined as \begin{align} \text{perplexity} = 2^{H\left(P_i\right)}, \label{eq:perplexity} \end{align} where $H\left(P_i\right)$ is the Shannon entropy \cite{journals/bstj/Shannon48} of $P_i$ in bits: \begin{align} H\left(P_i\right) = - \sum_{j} p_{j|i} \log_2 p_{j|i}. \end{align} For a specific user-provided value of the perplexity parameter, we can perform a binary search over $\sigma_i^2$ such that Eq.~\ref{eq:perplexity} holds. In practice, the user experiments with different choices of perplexity to see which choice reveals a pattern. \subsection{The t-distributed stochastic neighbor embedding}\label{sec:tsne} The most popular variant of SNE is the \emph{t-distributed SNE} \cite{vanDerMaaten2008}.
This is motivated by the so-called `crowding problem' often observed in SNE, where the low-dimensional representations significantly overlap without revealing much underlying structure. The idea is to use a similarity in representation space with heavier tails than the Gaussian. Specifically, $s_{\text{low}}$ is chosen as a $t$-distribution with one degree of freedom centered around one representation, i.e. \begin{align} s_{\text{low}}(\y_j | \y_i) &= \pi^{-1} \left( 1 + \|\y_j - \y_i\|^2 \right)^{-1}. \end{align} As is evident, t-SNE needs to compute all pairwise distances between data points and therefore has quadratic complexity. Using approximation techniques, such as vantage-point trees or the \textit{Barnes-Hut approximation}, the running time can be lowered to $\mathcal{O}(n \log n)$ \cite{tsneOptimization}. \begin{figure*} \centering \includegraphics[width=\textwidth]{figures/results/mnist6_plane.pdf} \caption{Two-dimensional Euclidean embeddings of spherical MNIST. The left panel shows the embedding obtained by Rie-SNE while the right panel shows the gold standard of tangent space PCA. Note how Rie-SNE discovers structure, which is lost on tangent space PCA.} \label{fig:mnist6_plane} \end{figure*} \section{Method} The inner workings of Rie-SNE, or \emph{Riemannian Stochastic Neighbor Embedding}, are now examined. \subsection{Brownian motion on a Riemannian manifold} The key building block for generalizing SNE to Riemannian manifolds is a suitable generalization of the Gaussian distribution. Here we consider a density derived from a Brownian motion on a Riemannian manifold for high-dimensional probability computations. Given a Brownian motion in Euclidean space, the probability that the random walk will end in point $\x$ can be computed by using the Gaussian density.
If the increments of the Brownian motion are sufficiently small, each increment can be projected onto the tangent space of the corresponding point on a Riemannian manifold without error. Then, a Brownian motion starting at point $\bm{\lambda}$ and running for some time $t$ can be projected onto a $D$-dimensional Riemannian manifold, where it gives rise to a random variable whose density can be interpreted as the probability that a Brownian motion starting at $\bm{\lambda}$ will end in point $\x$ on the manifold. Since the Brownian motion now lives on a Riemannian manifold, its density differs from the Euclidean one, and it can be approximated with a parametrix expansion \cite{hsu2002stochastic,kalatzis2020variational}: % \begin{align} \mathcal{BM}(\x|\bm{\lambda}, t) &\approx \left( 2 \pi t \right)^{-\frac{D}{2}}H_{0} \exp \left( - \frac{\mathrm{dist}^2 (\x, \bm{\lambda})}{2t}\right) \label{eq:riemannianbrownianmotion} \end{align} % where % \begin{itemize} \item [-] $t \in \mathbb{R}_+$ is the duration of the Brownian motion and corresponds to the variance in Euclidean space. \item [-] $\bm{\lambda}$ is the starting point of the Brownian motion. \item [-] $H_0$ is the ratio of Riemannian volume measures evaluated at points $\x$ and $\bm{\lambda}$ respectively, i.e.: \begin{align} H_0 = \left( \frac{\det G_{\x}}{\det G_{\bm{\lambda}}}\right)^{\frac{1}{2}} \end{align} with again $G_{\mathbf{p}}$ being the metric evaluated at $\mathbf{p}$. \end{itemize} Superficially, Eq.~\ref{eq:riemannianbrownianmotion} looks like the density of the normal distribution, with the Euclidean distance replaced by its Riemannian counterpart. Note, however, that the normalization factor $H_0$ has no counterpart in the usual Euclidean density.
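To make Eq.~\ref{eq:riemannianbrownianmotion} concrete, the following Python sketch evaluates the approximate density on the unit sphere, where homogeneity gives $H_0 = 1$ and geodesic distances are arc lengths. The function names and the binary-search calibration of $t$ to a target perplexity (mirroring SNE's treatment of $\sigma_i^2$) are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def bm_similarity(x, lam, t):
    """Approximate Brownian motion density BM(x | lam, t) on the unit
    sphere embedded in R^{D+1}: by homogeneity the volume ratio H_0 is 1,
    and the geodesic distance is the arc length arccos(<x, lam>)."""
    D = x.shape[-1] - 1                        # intrinsic dimension
    d = np.arccos(np.clip(x @ lam, -1.0, 1.0))
    return (2 * np.pi * t) ** (-D / 2) * np.exp(-d ** 2 / (2 * t))

def conditional_p(X, i, t):
    """p_{j|i} built from BM similarities; constant factors cancel."""
    s = np.array([bm_similarity(X[j], X[i], t) for j in range(len(X))])
    s[i] = 0.0                                 # convention: p_{i|i} = 0
    return s / s.sum()

def calibrate_t(X, i, perplexity, iters=60):
    """Binary search for t_i such that 2^H(P_i) matches the perplexity."""
    lo, hi = 1e-8, 1e4
    for _ in range(iters):
        t = 0.5 * (lo + hi)
        p = conditional_p(X, i, t)
        H = -np.sum(p[p > 0] * np.log2(p[p > 0]))
        if 2.0 ** H > perplexity:
            hi = t                             # too diffuse: shrink t
        else:
            lo = t
    return 0.5 * (lo + hi)
```

On less symmetric manifolds one would additionally pre-compute the volume ratios $H_0$ and the pairwise geodesic distances, which is exactly the added cost discussed below.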
\subsection{Rie-SNE} \emph{Rie-SNE} works in a similar manner to SNE and t-SNE, i.e.\@ it also produces two probability distributions $P$ and $Q$ from the data and aims to make them as similar as possible, in the process capturing some underlying structure in the produced low-dimensional embedding. However, computing the high-dimensional probability distribution $P$ comes with an added cost. To preserve the Riemannian nature of the data, a different density is used when computing high-dimensional probabilities belonging to $P$, namely the approximate density induced by the heat kernel of a Brownian motion on a Riemannian manifold given in Eq.~\ref{eq:riemannianbrownianmotion}, i.e.\@ we pick \begin{align} s_{\text{high}}(\x_j|\x_i) &= \mathcal{BM}(\x_j | \x_i, t_i). \end{align} The added computational cost is that the evaluation of $\mathcal{BM}(\cdot|\cdot)$ is more demanding than the conventional Gaussian similarity \eqref{eq:sne_s}. Specifically, the normalization $H_0$ and the geodesic distance may be demanding, depending on the manifold on which the data resides. With the Brownian motion model we get \begin{align} p_{j|i} \approx \frac{ H_{0}[i,j] \cdot \exp \left( - \frac{\mathrm{dist}^2[i,j]}{2t_i}\right) }{\sum\limits_{k \ne i} H_{0}[i,k] \cdot \exp \left( - \frac{\mathrm{dist}^2[i,k]}{2t_i}\right)}, \end{align} where we use the notations $\mathrm{dist}^2[i,j] = \mathrm{dist}^2(\x_i,\x_j)$ and $H_{0}[i,j] = \sqrt{ \sfrac{\det G_{\x_i}}{\det G_{\x_j}}}$ to emphasize that these quantities can be pre-computed. As with SNE, we can optimize $t_i$ to match a pre-specified perplexity using a binary search. \begin{figure*} \includegraphics[width=0.95\textwidth]{figures/results/mnist6.pdf} \vspace{-3mm} \caption{Four different views of embeddings of spherical MNIST. The top row shows spherical embeddings obtained using Rie-SNE, while the bottom row shows three-dimensional embeddings using the gold standard tangent space PCA.
Note that the clustering structure is significantly more evident in Rie-SNE.} \label{fig:mnist6} \end{figure*} \subsection{Choice of representation} As mentioned in Sec.~\ref{sec:intro}, Gauss's \emph{Theorema Egregium} \cite{gauss2005general} informs us that we cannot isometrically embed data from a curved space into a space of different curvature without introducing distortion. Specifically, if we embed data from a nonlinear manifold onto a \emph{flat} two-dimensional representation (for plotting), then the curvature mismatch between spaces induces a distortion. This is a fundamental limitation that any visualization of Riemannian data will face, but we may nonetheless try to limit its impact. One approach is to embed the data onto a manifold of similar curvature as that of the manifold on which the data resides. For example, if the data resides on a high-dimensional sphere, it is perhaps more prudent to embed onto a two-dimensional sphere for plotting, rather than a Euclidean space. With this in mind, we choose different distributions over the low-dimensional representation, depending on user preference. \begin{itemize} \item \textbf{Euclidean.} If the user prefers a Euclidean low-dimensional representation, we opt to use a student-t as in regular t-SNE, \begin{align} s_{\text{low}}(\y_j | \y_i) &= \pi^{-1} \left( 1 + \|\y_j - \y_i\|^2 \right)^{-1}. \end{align} % \item \textbf{Spherical.} If the data manifold has positive curvature it may be beneficial to embed on a sphere, in which case we opt to use a von Mises-Fisher distribution \cite{mardia2000directional}, \begin{align} s_{\text{low}}(\y_j | \y_i) &= \left( \sqrt{2\pi} I_{\sfrac{d}{2}-1}(1) \right)^{-1} \exp\left( \y_j^{\top} \y_i \right), \end{align} where $I_{v}$ is a modified Bessel function of the first kind of order $v$. In practice the normalization constant cancels out and can be ignored.
% \item \textbf{Other.} The user may have other prior knowledge about the manifold on which the data resides, which may suggest embedding on some other low-dimensional manifold. In this case, we suggest also using the Riemannian Brownian motion over the low-dimensional representation, i.e., \begin{align} s_{\text{low}}(\y_j | \y_i) &= \mathcal{BM}(\y_j | \y_i, 1). \end{align} \end{itemize} Once we have defined both $s_{\text{high}}$ and $s_{\text{low}}$, we can estimate the representations using gradient descent just as in regular SNE. After $T$ gradient descent iterations (with a sufficiently large value of $T$), the KL-divergence between the two probability distributions $P$ and $Q$ is approximately minimized, yielding near-optimal positions of the points in the low-dimensional embedding. The key elements of the resulting computations are provided in Alg.~\ref{alg:main} on page~\pageref{alg:main}. \subsection{Implementation details} Our implementation of Rie-SNE relies on two approximation techniques that are traditionally also used when performing regular t-SNE. First, we compute the high-dimensional probabilities in $P$ by using a sparse nearest neighbor-based approximation technique \cite{tsneOptimization}. This means we compute a sparse approximate $P$ distribution, where far-away points are given a probability of zero. This can be realized with a nearest neighbor search. Empirical results from van der Maaten \cite{tsneOptimization} suggest that $\tau = \lfloor 3 \cdot \text{perplexity} \rfloor$ nearest neighbors give sufficiently good approximations of $P$, which we also use here. Finding the $\tau$ nearest neighbors of each point can be done by constructing a vantage-point tree \cite{vptrees} over the data and performing a nearest neighbors search on the resulting tree. Constructing the tree, performing the nearest neighbor search and computing the relevant values of $P$ has time complexity $\mathcal{O}(\tau n \log n)$.
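The sparse construction of $P$ described above can be sketched in a few lines. Here a brute-force search over a full distance matrix stands in for the vantage-point tree (so this sketch costs $\mathcal{O}(n^2)$ rather than $\mathcal{O}(\tau n \log n)$), and the function name is our own:

```python
import numpy as np

def sparse_neighbor_sets(dist, perplexity):
    """Return, per point, the indices of its tau = floor(3 * perplexity)
    nearest neighbors; all other entries of the sparse P are then fixed
    to zero.  `dist` is a full matrix of pairwise (geodesic) distances;
    a vantage-point tree would avoid materializing this matrix."""
    tau = int(np.floor(3 * perplexity))
    d = dist.copy()
    np.fill_diagonal(d, np.inf)                # a point is not its own neighbor
    return np.argsort(d, axis=1)[:, :tau]      # tau closest indices per row

# Example: 5 points on a line with perplexity 1, hence tau = 3 neighbors each.
dist = np.abs(np.subtract.outer(np.arange(5.0), np.arange(5.0)))
nbrs = sparse_neighbor_sets(dist, 1)
```

Only the similarities between a point and its $\tau$ selected neighbors then need to be evaluated and normalized.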
Second, we use the \emph{Barnes-Hut approximation} \cite{barnes86a} whenever we opt to use a student's t-distribution over the low-dimensional representation. Minimizing the KL-divergence between the two probability distributions $P$ and $Q$ requires gradient descent, and in each step of the gradient descent we need to compute all pairwise $q_{ij}$ of $Q$, which has quadratic complexity. The Barnes-Hut approximation of the gradient instead has complexity $\mathcal{O}(n \log n)$. In short, this approximation splits the low-dimensional representation into quadrants (via tree structures named quadtrees/octrees), such that points in far-away and small enough quadrants can be approximated as the same point appearing multiple times. For each low-dimensional point, a depth-first search with complexity $\mathcal{O}(\log n)$ is done to mark quadrants as approximate quadrants or not. A total of $n$ depth-first searches are carried out, yielding the $\mathcal{O}(n \log n)$ time complexity. \section{Results}\label{sec:results} The performance of Rie-SNE is shown by comparing it to the gold standard for visualizing non-Euclidean data. This amounts to first computing the intrinsic mean, mapping all data to the tangent space at this point, and performing PCA over the tangential data. \subsection{Spherical MNIST} We start with the classic MNIST dataset \cite{lecun-mnisthandwrittendigit-2010} consisting of $24 \times 24$ dimensional gray-scale images. We consider digits 0--5 to reduce clutter. To induce a non-Euclidean data geometry, we project the data onto the unit sphere of $\mathbb{R}^{24 \times 24}$ and denote the resulting data \emph{spherical MNIST}. We visualize the resulting data using both Rie-SNE and tangent space PCA. First, we embed the data onto the plane, $\mathbb{R}^2$, and show the resulting plots in Fig.~\ref{fig:mnist6_plane}.
Here it can be seen that Rie-SNE captures the underlying relationship between the data points well (same digits are grouped together), while tangent space PCA produces a cluttered view which does not reveal the underlying structure. Since the data has a spherical geometry, it may be beneficial to embed onto a low-dimensional sphere to better preserve topology and curvature. Figure~\ref{fig:mnist6} shows the Rie-SNE embedding onto $\mathcal{S}^2$, where the clustering is again evident. As a baseline, the figure also shows a tangent space PCA embedding in $\mathbb{R}^3$. Although some structure can be captured with tangent space PCA here, Rie-SNE still gives better separation of the digit classes. \subsection{Crypto-tensors} Following Mallasto et al.\@ \cite{mallasto:2018} we consider the price of 10 popular crypto-currencies\footnote{Bitcoin, Dash, Digibyte, Dogecoin, Litecoin, Vertcoin, Stellar, Monero, Ripple, and Verge.} over the time period \textit{2.12.2014 --- 15.5.2018}. As is common in economics \cite{wilson2010generalised}, the relationship between prices is captured by a $10 \times 10$ covariance matrix constructed from the past 20 days. This gives rise to a time series of covariance matrices, each of which resides on the cone of symmetric positive definite matrices. We provide visualizations of the data in Fig.~\ref{fig:crypto}. Rie-SNE is used to produce visualizations on both the plane $\mathbb{R}^2$ and the sphere $\mathcal{S}^2$, showing a one-dimensional structure capturing the time-evolution behind the data. In contrast, tangent space PCA produces $\mathbb{R}^2$ and $\mathbb{R}^3$ visualizations showing little to no structure in the embeddings. \begin{figure} \centering \includegraphics[width=\columnwidth]{figures/results/crypto.pdf} \caption{Embeddings of symmetric positive definite matrices using Rie-SNE (left) and tangent space PCA (right).
The top row shows two-dimensional Euclidean embeddings, while the bottom row shows spherical and $\mathbb{R}^3$ embeddings, respectively. In both cases Rie-SNE recovers a one-dimensional signal matching the underlying time series, while tangent space PCA does not.} \label{fig:crypto} \end{figure} \section{Conclusions} In this paper, we presented a new type of visualization technique, \emph{Rie-SNE}, that is aimed at data residing on Riemannian manifolds, such as spheres. It is an SNE-based technique that can additionally produce different kinds of low-dimensional embeddings depending on user preference and the curvature of the original data manifold. We compare Rie-SNE to a standard technique for visualizing non-Euclidean data, which is to perform PCA on the data mapped to a tangent space. The results are promising, and we believe this technique could have some merit. For future work we would like to see this taken further: it would be interesting to see more visualizations of data mapped to well-known manifolds, other than those that were used in this paper, and it would especially be interesting to try this out on some non-standard manifolds. In the case of non-standard manifolds, computing geodesics is likely non-trivial, such that some approximation techniques might need to be developed. We hope that the present work also paves the way for other visualization tools for Riemannian data in order to support investigators relying on geometric models. \section*{Acknowledgements} SH was supported by research grants (15334, 42062) from VILLUM FONDEN. This project has also received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement n\textsuperscript{o} 757360). \bibliographystyle{unsrt}
https://arxiv.org/abs/1802.00733
A Mathematical Framework for Resilience: Dynamics, Uncertainties, Strategies and Recovery Regimes
Resilience is a rehashed concept in natural hazard management - resilience of cities to earthquakes, to floods, to fire, etc. In a word, a system is said to be resilient if there exists a strategy that can drive the system state back to "normal" after any perturbation. What formal flesh can we put on such a malleable notion? We propose to frame the concept of resilience in the mathematical garbs of control theory under uncertainty. Our setting covers dynamical systems both in discrete or continuous time, deterministic or subject to uncertainties. We will say that a system state is resilient if there exists an adaptive strategy such that the generated state and control paths, contingent on uncertainties, lie within an acceptable domain of random processes, called recovery regimes. We point out how such recovery regimes can be delineated thanks to so-called risk measures, making the connection with resilience indicators. Our definition of resilience extends others, be they "à la Holling" or rooted in viability theory. Indeed, our definition of resilience is a form of controllability for whole random processes (regimes), whereas others require that the state values must belong to an acceptable subset of the state set.
\section{Introduction} \label{Introduction} Consider a system whose state evolves with time, being subject to a dynamics driven both by controls and by external perturbations. The system is said to be resilient if there exists a strategy that can drive the system state towards a normal regime, whatever the perturbations. Basic references are \cite{Holling:73,Martin:2004,Martin-Deffuant-Calabrese:2011,Rouge-Mathias-Deffuant:2013,Arnoldi-Loreau-Haegeman:2016}. In the case of fisheries, the state can be a vector of abundances at ages of one or several species; the control can be fishing efforts; the external perturbations can affect mortality rates or birth functions appearing in the dynamics (an extreme perturbation could be an El Ni\~no event, affecting the populations' renewal). In the case of a city exposed to earthquakes, floods or other climatic events, the state can be a vector of capital stocks (energy reserves, energy production units, water treatment plants, health units, etc.); the controls would be the different investments in capital as well as current operations (flows in and out of capital stocks); the dynamics would express the changes in the stocks due to investment and to day-to-day operations; external perturbations (rain, wind, climatic events, etc.) would affect the stocks by reducing them, possibly down to zero. In Sect.~\ref{Ingredients_for_an_abstract_control_system_with_uncertainties}, we introduce basic ingredients from the mathematical framework of control theory under uncertainty. Thus equipped, we frame the concept of resilience in mathematical garbs in Sect.~\ref{Resilience:_A_Mathematical_Framework}. Then, in Sect.~\ref{Illustrations}, we provide illustrations of the abstract general framework and compare our approach with others, ``\`a la Holling'' or the stochastic viability theory approach to resilience.
In Sect.~\ref{Resilience_and_risk_control/minimization}, we sketch how concepts from risk measures (introduced initially in mathematical finance) can be imported to tackle resilience issues. Finally, we discuss pros and cons of our approach to resilience in Sect.~\ref{sec:conclusions}. \section{Ingredients for an abstract control system with uncertainties} \label{Ingredients_for_an_abstract_control_system_with_uncertainties} We outline the mathematical formulations of time, controls, states, Nature (uncertainties), flow (dynamics) and strategies. As the reference \cite{Rouge-Mathias-Deffuant:2013} is the most mathematically driven paper on resilience, we will systematically emphasize how our approach differs from that of Roug\'e, Mathias and Deffuant. \subsection{Time, states, controls, Nature and flow} We lay out the basic ingredients of control theory: time, states, controls, Nature (uncertainties) and flow (dynamics). \subsubsection{Time} The \emph{time set}~$\TT$ is a (nonempty) subset of the real line~$\RR$. The set~$\TT$ contains a minimal element~$t_0 \in \TT$ and an upper bound~$T$, which is either a maximal element when $T <+\infty$ (that is, $T \in \TT$) or not when $T =+\infty$ (that is, $+\infty \not\in \TT$). For any couple $\np{s,t} \in \bp{\RR \cup \{ +\infty \} }^2$, we use the notation \begin{equation} \segment{s}{t}= \defset{ r \in \TT }{ s \leq r \leq t } \end{equation} for the \emph{segment} that joins~$s$ to~$t$ (when $s > t$, $ \segment{s}{t}= \emptyset $). \paragraph{Special cases of discrete and continuous time.} This setting includes the discrete time case when $\TT$ is a discrete set, be it infinite like $\TT=\{t_0+k\Delta\;\!\!t, \quad k \in \NN \}$ (with $\Delta\;\!\!t >0$), or finite like $\TT=\{t_0+k\Delta\;\!\!t, \quad k =0,1,\ldots, K \}$ (with $K \in \NN$). Of course, in the continuous time case, $\TT$ is an interval of~$\RR$, like $[t_0,T]$ when $T <+\infty$ or $[t_0,+\infty[$ when $T =+\infty$.
But the setting also makes it possible to consider intervals of continuous time separated by discrete instants corresponding to jumps. For these reasons, our setting is more general than the one in~\cite{Rouge-Mathias-Deffuant:2013}, which considers discrete time systems. \paragraph{Environmental illustration.} In fisheries, investment decisions (boats, equipment) are made at large time scale (years), regulation quotas are generally annual, boat operations are daily. By contrast, populations and external perturbations evolve in continuous time. Depending on the issues at hand, the modeler will choose the proper time scales, symbolized by the set~$\TT$. In an energy system, like a micro-grid with battery and solar panels, investment decisions in equipment (buying or renewal) occur at large time scale, whereas flows inside the system have to be decided at short time scales (minutes). \subsubsection{States, controls, Nature} At each time~$t \in \TT$, \begin{itemize} \item the system under consideration can be described by an element~$x_{t}$ of the \emph{state set}~${\mathbb X}_{t}$, \item the decision-maker (DM) makes a decision~$u_{t}$, taken within a \emph{control set}~$\CONTROL_{t}$. \end{itemize} A \emph{state of Nature}~$\omega$ affects the system, drawn within a \emph{sample set}~$\Omega$, also called \emph{Nature}. No probabilistic structure is imposed on the set~$\Omega$. \paragraph{Environmental illustration.} In the case of dengue epidemics control at daily time steps, the state can be a vector of abundances of healthy and diseased individuals (possibly at ages), together with the same description for the mosquito vector; the control can be the daily fumigation effort, mosquito larva removal, quarantine measures, or the opening and closing of sanitary facilities; Nature represents unknown factors that affect the dengue dynamics, like rains, humidity, mosquito biting rates, individual susceptibilities, etc. 
Some of these factors (like rain) can be progressively unfolded as time passes. \paragraph{Special case where the sample set is a set of scenarios.} In many cases, at each time~$t \in \TT$, an uncertainty~$w_{t}$ affects the system, drawn within an \emph{uncertainty set}~$\UNCERTAIN_{t}$. Hence, a state of Nature has the form $\omega= \sequence{w_{t}}{t \in \TT}$ --- and is called a \emph{scenario} --- drawn within a product {sample set} $\Omega= \prod_{t \in \TT}\UNCERTAIN_{t}$. \paragraph{Environmental illustration.} The above definition of scenarios is in phase with the vocable of scenarios in climate change mitigation; it represents sequences of uncertainties that affect the climate evolution. In our framework, a scenario is not in the hands of the decision-maker; for instance, a scenario does not include investment decisions. \paragraph{Relevance for resilience.} In the case of scenarios, as the {uncertainty sets}~$\UNCERTAIN_{t}$ depend on~$t$, we cover the case where \begin{itemize} \item an uncertainty~$w_{t} \in \UNCERTAIN_{t}$ affects the system at each time~$t$, possibly progressively revealed to the DM, hence available when he makes decisions; \item other uncertainties are present from the start (like parameters), hence are part of the set~$\UNCERTAIN_{t_0}$; such uncertainties are not necessarily revealed to the DM as time passes, and may remain unknown. \end{itemize} Our setting is more general than the one in~\cite{Rouge-Mathias-Deffuant:2013}. First, we do not restrict the {sample set} to be made of scenarios as Roug\'e, Mathias and Deffuant do. Second, even in the case of scenarios, no probabilistic structure is imposed on the set~\( \prod_{t \in \TT}\UNCERTAIN_{t} \) whereas Roug\'e, Mathias and Deffuant require that it be equipped with a probability distribution having a density (with respect to an unspecified measure, likely the Lebesgue measure on a Euclidean space). 
\subsubsection{State and control paths} \label{State_and_control_paths,_scenarios_of_uncertainties,_states_of_Nature} With the basic set~$\TT$ and the basic families of sets~$\sequence{{\mathbb X}_{t}}{t \in \TT}$ and $\sequence{\CONTROL_{t}}{t \in \TT}$, we define \begin{itemize} \item the \emph{set~$\prod_{t \in \TT}{\mathbb X}_{t}$ of state paths}, made of sequences $\sequence{x_{t}}{t \in \TT}$ where \( x_{t} \in {\mathbb X}_{t} \) for all $t \in \TT$; \emph{tail state paths} $\sequence{x_{r}}{r \in \segment{s}{t}}$ (starting at time~$s<t$) are elements of \( \prod_{r=s}^{t}{\mathbb X}_{r} \); \item the \emph{set~$\prod_{t \in \TT}\CONTROL_{t}$ of control paths}, made of sequences $\sequence{u_{t}}{t \in \TT}$ where \( u_{t} \in \CONTROL_{t} \) for all $t \in \TT$; \emph{tail control paths} $\sequence{u_{r}}{r \in \segment{s}{t}}$ (starting at time~$s<t$) are elements of \( \prod_{r=s}^{t}\CONTROL_{r} \). \end{itemize} \paragraph{Relevance for resilience.} We introduce paths because, as stated in the abstract, our (forthcoming) definition of resilience requires that, after any perturbation, the system returns to an acceptable ``regime'', that is, that the state-control path as a whole must return to a set of acceptable paths (and not only the state values must belong to an acceptable subset of the state set). We introduce tail paths because resilience encapsulates the idea that recovery is possible after some time, and that the system remains ``normal'' after that time. \subsubsection{Dynamics/flow} \label{Flow} We now introduce a dynamics under the form of a \emph{flow} \( \sequence{\phi_{\segment{s}{t}}}{\np{s,t} \in \TT^2} \), that is, a family of mappings \begin{equation} \phi_{\segment{s}{t}}: {\mathbb X}_{s} \times \prod_{r=s}^{t}\CONTROL_{r} \times \Omega \to \prod_{r=s}^{t}{\mathbb X}_{r} \eqfinp \label{eq:flow} \end{equation} When $s>t$, all these expressions are void because $ \segment{s}{t}= \emptyset $. 
The flow~$\phi_{\segment{s}{t}}$ maps an initial state~$\barx_{s} \in {\mathbb X}_{s}$ at time~$s$, a {tail control path} \( \sequence{{u}_{r}}{r\in\segment{s}{t}} \) and a state of Nature~$\omega$ to a tail state path \begin{equation} \sequence{{x}_{r}}{r\in \segment{s}{t}} = \phi_{\segment{s}{t}} \bp{\barx_{s}, \sequence{{u}_{r}}{r\in \segment{s}{t}}, \omega } \eqfinv \label{eq:flow_path} \end{equation} with the property that \( {x}_{s} = \barx_{s} \). \paragraph{Relevance for resilience.} Our setting is more general than the one in~\cite{Rouge-Mathias-Deffuant:2013}: as illustrated below, we cover differential and stochastic differential systems, in addition to iterated dynamics in discrete time (which is the scope of Roug\'e, Mathias and Deffuant). Our approach thus allows for a general treatment of resilience. \paragraph{Cemetery point to take into account either analytical properties or bounds on the controls.} In general, a state path cannot be determined by~\eqref{eq:flow_path} for \emph{any} state of Nature or for \emph{any} control path, for analytical reasons (measurability, continuity) or because of bounds on the controls. To circumvent this difficulty, one can use a mathematical trick and add to any state set~${\mathbb X}_{t}$ a cemetery point~$\partial$. Any time a state cannot be properly defined by the flow~\eqref{eq:flow_path}, we attribute the value~$\partial$. The vocable ``cemetery'' expresses the property that, once in the state~$\partial$, the future state values, yielded by the flow, will all be~$\partial$. Therefore, the stationary state path with value~$\partial$ will be the image of those scenarios and control paths for which no state path can be determined by~\eqref{eq:flow_path}. 
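To fix ideas, here is a minimal computational sketch (ours, not part of the framework above) of a discrete-time flow with a cemetery point: once a control violates its constraint set, all remaining states of the path are the cemetery point. The dynamics \texttt{F} and the constraint mapping \texttt{admissible} are hypothetical placeholders.

```python
PARTIAL = "cemetery"  # the cemetery point, playing the role of the symbol "partial"

def flow(x_s, controls, scenario, F, admissible):
    """Iterate x_{r+1} = F(r, x_r, u_r, w_r); emit PARTIAL forever after
    the first constraint violation u_r not in admissible(r, x_r)."""
    path = [x_s]
    x, dead = x_s, False
    for r, (u, w) in enumerate(zip(controls, scenario)):
        if dead or u not in admissible(r, x):
            dead = True
            path.append(PARTIAL)
        else:
            x = F(r, x, u, w)
            path.append(x)
    return path
```

For instance, with a hypothetical stock dynamics $x_{r+1} = x_r - u_r + w_r$ and the constraint $u_r \in \{0,\ldots,x_r\}$ (withdrawals bounded by the current stock), an excessive withdrawal sends the rest of the path to the cemetery point.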
\paragraph{Special case of an iterated dynamics in discrete time.} In discrete time, when $\TT=\NN$, the flow is generally produced by the iterations of a dynamics \begin{equation} {x}_{t}=x \eqsepv {x}_{s+1} = F_{s}\np{{x}_{s},{u}_{s},{w}_{s}} \eqsepv s \geq t \eqfinp \label{eq:iterated_dynamics_in__discrete_time} \end{equation} How do we include control constraints in this setting? Suppose given a family of nonempty set-valued mappings \( \mathcal{U}_{s} : {\mathbb X}_{s} \rightrightarrows \CONTROL_{s} \), $s \in \TT$. We want to express that only controls~${u}_{s}$ that belong to \( \mathcal{U}_{s}\np{x_{s}} \) are relevant. For this purpose, we add to all the state sets~${\mathbb X}_{s}$ a cemetery point~$\partial$. Then, when \( {u}_{r} \not\in \mathcal{U}_{r}\np{x_{r}} \) in~\eqref{eq:iterated_dynamics_in__discrete_time} for at least one~\( r \in \segment{s}{t} \), we set \( \phi_{\segment{s}{t}} \bp{\barx_{s}, \sequence{{u}_{r}}{r\in \segment{s}{t}}, \omega } = \sequence{\partial}{r\in \segment{s}{t}} \) in~\eqref{eq:flow_path}. \paragraph{Environmental illustration.} In natural resource management, many population models (animal, plant) are given by discrete time abundance-at-age dynamical equations. Outside population models, many stock problems are also based upon discrete time dynamical equations. This is the case of dam management, where water stock balance equations are written at a daily scale (possibly finer, like every eight hours, or coarser, like monthly for long term planning); control constraints represent the properties that turbined water must be less than the current water stock and bounded by turbine capacity. \paragraph{Special case of differential systems.} In continuous time, the mapping $ \phi_{\segment{s}{t}}$ in~\eqref{eq:flow} generally cannot be defined over the whole set \( \prod_{r=s}^{t}\CONTROL_{r} \times \Omega \). 
Tail control paths and states of Nature need to be restricted to subsets of $\prod_{r=s}^{t}\CONTROL_{r}$ and $\Omega$, such as continuous paths, for example, when dealing with Euclidean spaces. For instance, when $\TT=\RR_+$ and the flow is produced by a smooth dynamical system on a Euclidean space~${\mathbb X}=\RR^n$ \begin{equation} {x}_{t}=x \eqsepv \dot{{x}}_{s} =f_{s}\np{{x}_{s},{u}_{s}} \eqsepv s \geq t \eqfinv \end{equation} control paths \( \sequence{{u}_{s}}{s\in\segment{t}{T}} \) are generally restricted to piecewise continuous ones for a solution to exist. \paragraph{Special case of stochastic differential equations.} Under certain technical assumptions, a stochastic differential equation \begin{equation} d{\va{X}}_{s} =f_{s} \np{\va{X}_{s},\va{U}_{s},\va{W}_{s}}ds + g\np{\va{X}_{s},\va{U}_{s},\va{W}_{s}} d\va{W}_{s} \eqfinv \end{equation} where \( \sequence{\va{W}_{s}}{s\in \RR_+} \) is a Brownian motion, gives rise to solutions in the strong sense. In that case, a flow can be defined (but not over the whole set \( \prod_{r=s}^{t}\CONTROL_{r} \times \Omega \)). \paragraph{The case of the history flow.} Any possible state derives from the so-called \emph{history} \begin{equation} h_t=\bp{\sequence{{u}_{r}}{r\in \segment{t_0}{t}}, \omega } \eqfinp \end{equation} In that case, the flow~\eqref{eq:flow_path} is trivially given by \( \sequence{{h}_{r}}{r\in \segment{s}{t}} = \bp{h_{s}, \sequence{{u}_{r}}{r\in \segment{s}{t}} } \). We will use the notion of history when we compare our approach with the viability approach to resilience. \subsection{Adapted and admissible strategies} A control~${u}_{t}$ is an element of the {control set}~$\CONTROL_{t}$. A \emph{policy} (at time~$t$) is a mapping \begin{equation} \policy_{t} : {\mathbb X}_{t} \times \Omega \to \CONTROL_{t} \label{eq:policy} \end{equation} with image in the {control set}~$\CONTROL_{t}$. A \emph{strategy} is a sequence \( \sequence{\policy_{t}}{t\in \TT} \) of policies. 
\paragraph{Environmental illustration.} In climate change mitigation, a strategy can be an investment policy in renewable energies as a function of the past observed temperatures. In epidemics control, a strategy can be quarantine measures or vector control as a function of observed infected individuals. \subsubsection{Admissible strategies} Suppose given a family of nonempty set-valued mappings \( \mathcal{U}_{t} : {\mathbb X}_{t} \times \Omega \rightrightarrows \CONTROL_{t} \), $t \in \TT$. An \emph{admissible strategy} is a strategy \( \sequence{\policy_{t}}{t\in \segment{t_0}{T}} \) such that control constraints are satisfied in the sense that, for all $t \in \TT$, \begin{equation} \policy_{t}\np{x_{t}, \omega } \in \mathcal{U}_{t}\np{x_{t},\omega } \eqsepv \forall \np{ x_{t}, \omega } \in {\mathbb X}_{t} \times \Omega \eqfinp \end{equation} \subsubsection{Adapted strategies} Suppose that the sample set~$\Omega$ is equipped with a \emph{filtration} \( \sequence{\tribu{F}_{t}}{t\in \TT} \). Hence each $\tribu{F}_{t}$ is a $\sigma$-field and the sequence \( t \mapsto \tribu{F}_{t} \) is increasing (for the inclusion order). Suppose that each \emph{state set}~${\mathbb X}_{t}$ is equipped with a $\sigma$-field~$\tribu{X}_{t}$. An \emph{adapted policy} is a mapping~\eqref{eq:policy} which is measurable with respect to the product $\sigma$-field~$\tribu{X}_{t} \otimes \tribu{F}_{t} $. An \emph{adapted strategy} is a family \( \sequence{\policy_{t}}{t\in \segment{t_0}{T}} \) of adapted policies. \paragraph{Special case where the sample set is a set of scenarios.} Consider the case where $\Omega= \prod_{t \in \TT}\UNCERTAIN_{t}$ and where each set~$\UNCERTAIN_{t}$ is equipped with a $\sigma$-field $\tribu{W}_{t}$ (supposed to contain the singletons). 
The natural {filtration} \( \sequence{\tribu{F}_{t}}{t\in \TT} \) is given by \begin{equation} \tribu{F}_{t} = \bigotimes_{r \leq t} \tribu{W}_{r} \otimes \bigotimes_{s > t} \{ \emptyset, \UNCERTAIN_{s} \} \eqfinp \end{equation} Then, in an {adapted strategy} \( \sequence{\policy_{t}}{t\in \segment{t_0}{T}} \), each policy can be identified with a mapping of the form \cite{Carpentier-Chancelier-Cohen-DeLara:2015} \begin{equation} \policy_{t} : {\mathbb X}_{t} \times \prod_{r=t_0}^{t}\UNCERTAIN_{r} \to \CONTROL_{t} \eqfinp \end{equation} In that case, our definition of adapted strategy means that the DM can, at time~$t$, use no more than time~$t$, current state value~$x_{t}$ and past scenario $\sequence{{w}_{s}}{s\in \segment{t_0}{t} }$ to make his decision \( u_{t} = \policy_{t}\np{ x_{t}, \sequence{ {w}_{s}}{s\in \segment{t_0}{t} } } \). \paragraph{Relevance for resilience.} Though this is not the most general framework to handle information (see \cite{Carpentier-Chancelier-Cohen-DeLara:2015} for a more general treatment of information), we hope it can enlighten the notion of \emph{adaptive response} often found in the resilience literature. Our setting is more general than the one in~\cite{Rouge-Mathias-Deffuant:2013}: indeed, Roug\'e, Mathias and Deffuant only consider state feedbacks, that is, Markovian strategies as defined below. By contrast, our setting includes the case of corrupted and partially observed state feedback strategies, that is, the case where strategies have as input a partial observation of the state that is corrupted by noise. 
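As an informal illustration (ours, with hypothetical decision rules), the difference between an adapted policy and a Markovian (state feedback) one can be made concrete: the former may consult the current state \emph{and} the past scenario, the latter only the current state.

```python
# Two hypothetical decision rules, for illustration only.

def adapted_policy(t, x_t, past_w):
    # Adapted policy: uses time t, current state x_t, and the past scenario
    # (w_{t0}, ..., w_t); here, release more when recent perturbations were large.
    return min(x_t, 1 + sum(past_w[-3:]))

def markovian_policy(t, x_t):
    # Markovian / state feedback policy: uses time t and current state only;
    # here, release a fixed fraction of the current stock.
    return x_t // 2
```

Both are adapted in the sense above: neither looks at future values of the scenario, so no clairvoyance of the DM is assumed.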
\paragraph{Special case of Markovian or state feedback strategies.} Markovian or state feedback policies are of the form \begin{equation} \policy_{t} : {\mathbb X}_{t} \to \CONTROL_{t} \eqfinp \end{equation} With this definition, we express that, at time~$t$, the DM can only use time~$t$ and current state value~$x_{t}$ --- but not the state of Nature~$\omega$ --- to make his decision \( u_{t} = \policy_{t}\np{ x_{t} } \). In some cases (when dynamic programming applies for instance), it is enough to restrict to Markovian strategies, which are much more economical than general strategies. \subsection{Closed loop flow} From now on, when we say ``strategy'', we mean ``adapted and admissible strategy''. \bigskip Given an initial state and a state of Nature, a strategy will induce a state path thanks to the flow: this gives the closed loop flow as follows. Let $s \in \TT$ and $t \in \TT$, with $s < t$. Let \( \sequence{\policy_{r}}{r\in \TT} \) be a strategy. We suppose that, for any initial state~$\barx_{s} \in {\mathbb X}_{s}$ and any state of Nature~$\omega$, the following system of (closed loop) equations \begin{subequations} \begin{align} \sequence{{x}_{r}}{r\in \segment{s}{t}} &= \phi_{\segment{s}{t}} \bp{\barx_{s}, \sequence{{u}_{r}}{r\in \segment{s}{t}}, \omega } \\ {u}_{r} &= \policy_{r}\np{ {x}_{r} , \omega } \eqsepv \forall r \in \segment{s}{t} \end{align} \end{subequations} has a \emph{unique} solution \( \bp{ \sequence{{x}_{r}}{r \in \segment{s}{t}} , \sequence{{u}_{r}}{r \in \segment{s}{t}} } \). 
Quite naturally, we define the \emph{closed loop flow}~\( \phi_{\segment{s}{t}}^{\policy} \) by \begin{equation} \phi_{\segment{s}{t}}^{\policy} \bp{\barx_{s},\omega } = \bp{ \sequence{{x}_{r}}{r\in \segment{s}{t}} , \sequence{{u}_{r}}{r\in \segment{s}{t}} } \eqfinp \label{eq:closed_loop_flow} \end{equation} \section{Resilience: a mathematical framework} \label{Resilience:_A_Mathematical_Framework} Equipped with the material in~Sect.~\ref{Ingredients_for_an_abstract_control_system_with_uncertainties}, we now frame the concept of resilience in mathematical garb. For this purpose, we introduce the notion of \emph{recovery regime}. Compared to other definitions of resilience \cite{Holling:73,Martin:2004,Martin-Deffuant-Calabrese:2011,Rouge-Mathias-Deffuant:2013}, our definition requires that, after any perturbation, the state-control path as a whole can be driven, by a proper strategy, to a set of acceptable paths (and not only the state values must belong to an acceptable subset of the state set, asymptotically or not). In addition, as state and control paths are contingent on uncertainties, we require that they lie within an acceptable domain of random processes, called recovery regimes. Once again, as the reference \cite{Rouge-Mathias-Deffuant:2013} is the most mathematically driven paper on resilience, we will systematically emphasize how our approach differs from that of Roug\'e, Mathias and Deffuant. \subsection{Robustness, resilience and random processes} The notion of robustness captures a form of stability to perturbations; it is a static notion, as no explicit reference to time is required. By contrast, the concept of resilience makes reference to time (dynamics), strategies and perturbations. This is why, to speak of resilience --- a notion that mixes time and randomness --- we find it convenient to use the framework of random processes, although this does not mean that we require any probability. 
From now on, we consider that the {sample space}~$\Omega$ is a measurable set equipped with a $\sigma$-field~$\tribu{F}$ (but not necessarily equipped with a probability). When we consider a deterministic setting, $\Omega$ is reduced to a singleton (and ignored). The set of \emph{measurable mappings} from~$\Omega$ to any measurable set~$\YY$ will be denoted by \( \Adapted{\Omega}{\YY} \). Elements of \( \Adapted{\Omega}{\YY} \) are called \emph{random variables} or \emph{random processes}, although this does not imply the existence of an underlying probability. Random variables are designated with bold capital letters like~$\va{Z}$. From now on, every {state set}~${\mathbb X}_{t}$ is a measurable set equipped with a $\sigma$-field~$\tribu{X}_{t}$, every {control set}~$\CONTROL_{t}$ with a $\sigma$-field~$\tribu{U}_{t}$, and, when needed, every {uncertainty set}~$\UNCERTAIN_{t}$ with a $\sigma$-field $\tribu{W}_{t}$. $\sigma$-fields are introduced when probabilities are needed. When they are not, as in the robust setting, it suffices to equip every set with the $\sigma$-field made of all its subsets. Then, {measurable mappings} \( \Adapted{\Omega}{\YY} \) from~$\Omega$ to any set~$\YY$ are all mappings. \subsection{Recovery regimes} \emph{Recovery regimes}, starting from $t \in \TT$, are subsets of random processes of the form \begin{equation} {\mathcal A}^{t} \subset \Adapted{\Omega}{\prod_{s=t}^{T}{\mathbb X}_{s} \times \prod_{s=t}^{T}\CONTROL_{s} } \eqfinp \end{equation} When there are no uncertainties, $\Omega$ is reduced to a singleton, so that \( {\mathcal A}^{t} \subset \prod_{s=t}^{T}{\mathbb X}_{s} \times \prod_{s=t}^{T}\CONTROL_{s} \), as in the first two examples below. \paragraph{Example of recovery regimes converging to an equilibrium.} Let $\TT=\RR_+$, ${\mathbb X}_{t}=\RR^n$ and $\CONTROL_{t}=\RR^m$, for all $t \in \TT$. 
Let \( \barx \) be an equilibrium point of the dynamical system \( \dot{x}_{s} =f\np{x_{s},\baru} \) when the control is stationary equal to~$\baru$, that is, \( 0 =f\np{\barx,\baru} \). The recovery regimes, starting from $t \in \TT$, converging to the equilibrium~$\barx$ form the set \begin{equation} {\mathcal A}^{t} = \defset{ \bp{ \sequence{x_{s}}{s \geq t}, \sequence{u_{s}}{s \geq t} } \in {\mathbb X}^{[t,+\infty[} \times \CONTROL^{[t,+\infty[} }% {x_{s} \to_{s\to +\infty} \barx } \eqfinp \end{equation} In general, the equilibrium~$\barx$ is supposed to be asymptotically stable, locally or globally. A more general definition would be \begin{equation} {\mathcal A}^{t} = \defset{ \bp{ \sequence{x_{s}}{s \geq t}, \sequence{u_{s}}{s \geq t} } }% {\lim_{s\to +\infty}x_{s} \textrm{ exists} } \eqfinv \end{equation} and, to account for constraints on the values taken by the controls, we can consider \begin{equation} {\mathcal A}^{t} = \defset{ \bp{ \sequence{x_{s}}{s \geq t}, \sequence{u_{s}}{s \geq t} } }% {\lim_{s\to +\infty}x_{s} \textrm{ exists and } u_{s} \in \mathcal{U}_{s}\np{x_{s}} \eqsepv \forall s \geq t } \eqfinv \end{equation} where \( \mathcal{U}_{s} : {\mathbb X}_{s} \rightrightarrows \CONTROL_{s} \), for all $s \in \TT$. \paragraph{Example of bounded recovery regimes.} Let $\TT=\RR_+$, ${\mathbb X}_{t}=\RR^n$ and $\CONTROL_{t}=\RR^m$, for all $t \in \TT$. If~$B$ is a bounded region of ${\mathbb X}=\RR^n$, we consider \begin{equation} {\mathcal A}^{t} = \defset{ \bp{ \sequence{x_{s}}{s \geq t}, \sequence{u_{s}}{s \geq t} } }% {x_{s} \in B \eqsepv \forall s \geq t } \eqfinp \end{equation} When $B$ is a ball of small radius~$\rho>0$ around the equilibrium~$\barx$, we obtain state paths that remain close to~$\barx$. \paragraph{Example of random recovery regimes.} Suppose that the measurable {sample space}~$(\Omega,\tribu{F})$ is equipped with a probability~$\PP$. Let ${\mathbb X}_{t}=\RR^n$ and $\CONTROL_{t}=\RR^m$, for all $t \in \TT$. 
Letting~$B$ be a bounded region of ${\mathbb X}=\RR^n$ and $\beta \in ]0,1[ $, the set \begin{align} {\mathcal A}^{t} = & \{ \bp{ \sequence{\va{X}_{s}}{s \geq t }, \sequence{\va{U}_{s}}{s \geq t} } \in \Adapted{\Omega}{\prod_{s=t}^{T}{\mathbb X}_{s} \times \prod_{s=t}^{T}\CONTROL_{s} } \mid \nonumber \\ & \PP\bc{ \exists s \geq t \mid \va{X}_{s} \not \in B } \leq \beta \} \end{align} represents state paths that leave the bounded region~$B$ at least once with probability at most~$\beta$. If $\TT$ is discrete, the set \begin{align} {\mathcal A}^{t} = & \{ \bp{ \sequence{\va{X}_{s}}{s \geq t }, \sequence{\va{U}_{s}}{s \geq t} } \in \Adapted{\Omega}{\prod_{s=t}^{T}{\mathbb X}_{s} \times \prod_{s=t}^{T}\CONTROL_{s} } \mid \nonumber \\ & \PP\bc{ \exists \textrm{ pairwise distinct } s_1, s_2, s_3 \geq t \mid \va{X}_{s_1} \not \in B \eqsepv \va{X}_{s_2} \not \in B \eqsepv \va{X}_{s_3} \not \in B } = 0 \} \end{align} represents state paths that, almost surely, leave the bounded region~$B$ no more than twice. \subsection{Resilient strategies and resilient states} \label{Resilient_strategies_and_resilient_states} Consider a starting time~$t\in\TT$ and an initial state~$\barx_{t}\in{\mathbb X}_{t}$. 
We say that the strategy~\( \policy \) is a \emph{resilient strategy} starting from time~$t$ in state~$\barx_{t}$ if the random process \( \Bp{ \sequence{\va{X}_{s}}{s\in \segment{t}{T}} , \sequence{\va{U}_{s}}{s\in \segment{t}{T}} } \) given by \begin{subequations} \begin{align} \sequence{\va{X}_{s}\np{\omega}}{s\in \segment{t}{T}} &= \phi_{\segment{t}{T}}^{\policy}\bp{\barx_{t},\omega } \\ \va{U}_{s}\np{\omega} &= \policy_{s}\np{ \va{X}_{s}\np{\omega} , \omega } \eqsepv \forall s\in \segment{t}{T} \eqfinv \end{align} \label{eq:output} \end{subequations} where the closed loop flow~$\phi_{\segment{t}{T}}^{\policy}$ is given in~\eqref{eq:closed_loop_flow}, is such that \begin{equation} \Bp{ \sequence{\va{X}_{s}}{s\in \segment{t}{T}} , \sequence{\va{U}_{s}}{s\in \segment{t}{T}} } \in {\mathcal A}^{t} \eqfinp \end{equation} Notice that we do not use the part \( \sequence{\policy_{r}}{r < t} \) of the strategy~\( \policy = \sequence{\policy_{r}}{r\in \segment{t_0}{T} } \). With this definition, a resilient strategy is able to drive the state-control random process into an acceptable regime. As a resilient strategy is adapted, it can ``adapt'' to the past values of the randomness but not to its future values (hence, our notion of resilience does not require clairvoyance of the DM). Our definition of resilience is a form of controllability for whole random processes (regimes): a resilient strategy has the property to shape the closed loop flow~$\phi_{\segment{t}{T}}^{\policy}$ so that it belongs to a given subset of random processes. We denote by~\( {\STRATEGY}^{R}_{t}\np{\barx_{t}} \) the set of \emph{resilient strategies} at time~$t$, starting from state~$\barx_{t}$. 
The set of \emph{resilient states} at time~$t$ is \begin{equation} {\STATE}^{R}_{t} = \defset{ \barx_{t} \in {\mathbb X}_{t} }% {{\STRATEGY}^{R}_{t}\np{\barx_{t}} \not = \emptyset } \eqfinp \end{equation} \section{Illustrations} \label{Illustrations} In~Sect.~\ref{Resilience:_A_Mathematical_Framework}, we have provided some illustrations in the course of the exposition. Now, we make the connection between the previous setting and two other settings, the resilience ``\`a la Holling'' \cite{Holling:73} in~\S\ref{Holling} and the resilience-viability framework \cite{Martin:2004,Martin-Deffuant-Calabrese:2011,Rouge-Mathias-Deffuant:2013} in~\S\ref{resilience-viability}. \subsection{Deterministic control dynamical system with attractor} \label{Holling} As the paper~\cite{Holling:73} does not contain a single equation, it is a bit risky to force Holling's seminal contribution into our setting. However, it is likely that it corresponds to $\TT=\RR_+$ and to recovery regimes of the form \begin{equation} {\mathcal A}^{t} = \defset{ \bp{ \sequence{x_{s}}{s \geq t}, \sequence{u_{s}}{s \geq t} } }% {x_{s} \textrm{ converges towards an attractor} } \eqfinp \end{equation} Note that, as often in the ecological literature on resilience \cite{Arnoldi-Loreau-Haegeman:2016}, the underlying dynamical system is not controlled. \subsection{Resilience and viability} \label{resilience-viability} Some authors \cite{Martin:2004,Martin-Deffuant-Calabrese:2011,Rouge-Mathias-Deffuant:2013} propose to frame resilience within the mathematical theory of viability \cite{Aubin:1991}. Let ${\mathbb X}_{t}={\mathbb X}$ and $\CONTROL_{t}=\CONTROL$, for all $t \in \TT$. Let $\sdo \subset {\mathbb X}$ denote a set made of ``acceptable states''. Let \( \mathcal{U}_{s} : {\mathbb X} \rightrightarrows \CONTROL \), $s \in \TT$ be a family of set-valued mappings that represent control constraints. 
\subsubsection{Deterministic viability} Consider a starting time~$t\in\TT$ and the recovery regimes \begin{align} {\mathcal A}^{t} = \{ \bp{ \sequence{x_{s}}{s \geq t}, \sequence{u_{s}}{s \geq t} } \mid x_{s} \in \sdo \eqsepv u_{s} \in \mathcal{U}_{s}\np{x_{s}} \eqsepv \forall s \geq t \} \eqfinp \end{align} Then, a resilient strategy is one that is able to drive the state towards the set~$\sdo$ of acceptable states. \subsubsection{Robust viability} \label{Deterministic_and_robust_viability} When there are no uncertainties, we just established a connection between recovery regimes and viability. But, with uncertainties, as resilience requires a form of stability ``whatever the perturbations'', we are in the realm of \emph{robust viability} \cite{DeLara-Doyen:2008}, as follows. Let \( \overline\Omega \subset \Omega \), corresponding to the (nonempty) subset of states of Nature with respect to which the DM expects the system to be robust. Consider a starting time~$t\in\TT$ and the recovery regimes \begin{align} {\mathcal A}^{t} = \{ & \bp{ \sequence{\va{X}_{s}}{s\in \segment{t}{T}}, \sequence{\va{U}_{s}}{s\in \segment{t}{T}} } \in \Adapted{\Omega}{\prod_{s=t}^{T}{\mathbb X}_{s} \times \prod_{s=t}^{T}\CONTROL_{s} } \mid \nonumber \\ & \exists \va{\tau} \in \Adapted{\Omega}{\TT} \eqsepv \forall \omega \in \overline\Omega \eqsepv \nonumber \\ & \va{\tau}\np{\omega} \geq t \eqsepv \va{X}_{s}\np{\omega} \in \sdo \eqsepv \va{U}_{s}\np{\omega} \in \mathcal{U}_{s}\np{\va{X}_{s}\np{\omega}} \eqsepv \forall s \geq \va{\tau}\np{\omega} \} \eqfinp \label{eq:recovery_regimes_robust_viability} \end{align} Then, a resilient strategy is one that is able to drive the state towards the set~$\sdo$ of acceptable states, after a random time~$\va{\tau}$, whatever the perturbations in~$\overline\Omega$. 
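When the set~$\overline\Omega$ of shocks is finite and time is discrete, the robust-viability notion of resilience can be checked by brute-force simulation of the closed loop. The sketch below is our own illustration; the dynamics \texttt{F}, the state feedback \texttt{policy} and the \texttt{acceptable} set are hypothetical stand-ins, and we use the convention that the infimum over an empty set is $+\infty$.

```python
def recovery_time(x0, scenario, policy, F, acceptable):
    """First time r such that the closed-loop state path stays in the
    acceptable set from r on (with inf over an empty set = +infinity)."""
    path = [x0]
    x = x0
    for s, w in enumerate(scenario):
        x = F(s, x, policy(x), w)
        path.append(x)
    # scan backwards for the earliest index after which all states are acceptable
    r = len(path)
    while r > 0 and acceptable(path[r - 1]):
        r -= 1
    return r if r < len(path) else float("inf")

def robust_resilient(x0, scenarios, policy, F, acceptable):
    """True if, for every scenario in the finite set, a finite recovery time exists."""
    return all(recovery_time(x0, w, policy, F, acceptable) < float("inf")
               for w in scenarios)
```

For instance, with the hypothetical dynamics $x_{s+1} = x_s + u_s + w_s$, a feedback that invests ($u_s=1$) while the stock is below the acceptable threshold, and $\sdo = \{x \geq 5\}$, one can test each shock sequence in~$\overline\Omega$ in turn.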
\subsubsection{Robust viability and recovery time attached to a resilient strategy } \label{recovery_time} Let \( \overline\Omega \subset \Omega \), whose elements can be interpreted as shocks. Consider a starting time~$t\in\TT$ and an initial state~$\barx_{t}\in{\mathbb X}_{t}$. If \( \policy=\sequence{\policy_{s}}{s\in \segment{t}{T}} \) is a resilient strategy for the recovery regimes~\eqref{eq:recovery_regimes_robust_viability}, the \emph{recovery time} is the random time defined by \begin{align} \va{\tau}\np{\omega} =\inf\{ r \in \segment{t}{T} \mid & \sequence{\va{X}_{s}\np{\omega}}{s\in \segment{t}{T}} = \phi_{\segment{t}{T}}^{\policy} \bp{\barx_{t}, \omega} \nonumber \\ & \va{U}_{s}\np{\omega} = \policy_{s}\np{ \va{X}_{s}\np{\omega} , \omega } \eqsepv \forall s\in \segment{t}{T} \nonumber \\ & \va{X}_{s}\np{\omega} \in \sdo \eqsepv \va{U}_{s}\np{\omega} \in \mathcal{U}_{s}\np{ \va{X}_{s}\np{\omega} } \eqsepv \forall s \geq r \} \eqfinv \end{align} for all \( \omega \in \Omega \), with the convention that $\inf \emptyset = +\infty$. Thus, the resilient strategy drives the state towards the set~$\sdo$ of acceptable states, after the random time~$\va{\tau}$, whatever the perturbations (shocks) in~$\overline\Omega$. By contrast, the so-called \emph{time of crisis} occurs before~$\va{\tau}$ \cite{Doyen-Saint-Pierre:1997}. \subsubsection{Stochastic viability} \label{Stochastic_viability} Suppose that the measurable {sample space}~$(\Omega,\tribu{F})$ is equipped with a probability~$\PP$ and let $\beta \in [0,1]$ represent a probability level. 
Consider a starting time~$t\in\TT$ and the recovery regimes \begin{align} {\mathcal A}^{t} = \{ & \bp{ \sequence{\va{X}_{s}}{s\in \segment{t}{T}}, \sequence{\va{U}_{s}}{s\in \segment{t}{T}} } \in \Adapted{\Omega}{\prod_{s=t}^{T}{\mathbb X}_{s} \times \prod_{s=t}^{T}\CONTROL_{s} } \mid \nonumber \\ & \PP\bc{ \va{X}_{s} \in \sdo \eqsepv \va{U}_{s} \in \mathcal{U}_{s}\np{ \va{X}_{s} } \eqsepv \forall s \geq t } \geq \beta \} \eqfinp \end{align} With these recovery regimes, we express that the probability to satisfy state and control constraints after time~$t$ is at least~$\beta$ \cite{Doyen-DeLara:2010}. \subsubsection{Discussion and comparison with the viability theory approach for resilience.} Our setting is more general than the viability theory approach for resilience as introduced in \cite{Martin:2004,Martin-Deffuant-Calabrese:2011,Rouge-Mathias-Deffuant:2013}. Indeed, the viability approach to resilience deals with constraints time by time; our approach is not restricted to such constraints. To illustrate our point, consider the deterministic case with discrete and finite time, and scalar controls, to make things easy. It is clear that the recovery regimes given by \begin{subequations} \begin{align} {\mathcal A}^{t} &= \{ \bp{ \sequence{x_{s}}{s \geq t}, \sequence{u_{s}}{s \geq t} } \mid \min_{s \geq t} u_{s} \leq 0 \} \\ &= \{ \bp{ \sequence{x_{s}}{s \geq t}, \sequence{u_{s}}{s \geq t} } \mid \exists s \geq t \eqfinv u_{s} \leq 0 \} \end{align} \end{subequations} cannot be expressed as time by time constraints on the controls. Of course, the viability approach \emph{could} handle such a case, but at the price of extending the state and the dynamics, to turn an intertemporal constraint into a time by time constraint. For instance, with the history state introduced at the end of~\S\ref{Flow}, we can always express any recovery regimes set as viability constraints. 
In the example above, we do not need the whole history to turn the set~${\mathcal A}^{t}$ into one described by time by time constraints: it suffices to introduce an additional state component such as \( \sum_{s \geq t} {\mathbf 1}_{ \{ u_{s} \leq 0 \} } \) in discrete time, and to impose the final constraint that this new part of the extended state be nonzero. To sum up, our approach to resilience covers more recovery regimes, described with the original states and controls, than those captured by the time by time constraints that make the specificity of the viability approach to resilience. \section{Resilience and risk} \label{Resilience_and_risk_control/minimization} We now sketch how concepts from risk measures (introduced initially in mathematical finance \cite{Follmer-Schied:2002}) can be imported to tackle resilience issues. This again is a novelty with respect to the stochastic viability theory approach for resilience as in~\cite{Rouge-Mathias-Deffuant:2013}. Risk measures are potential candidates as \emph{indicators of resilience}. \subsection{Recovery regimes given by risk measures} We start with a definition of recovery regimes given by risk measures, and then provide examples. \subsubsection{Definition of recovery regimes given by extended risk measures} Suppose that \( \TT \subset \RR \) is equipped with the trace~$\tribu{T}$ of the Borel field of~$\RR$. Then, \( \TT \times \Omega \) is a measurable space when equipped with the product $\sigma$-field $\tribu{T} \otimes \tribu{F}$. Any random process in \( \Adapted{\Omega}{\prod_{s=t}^{T}{\mathbb X}_{s} \times \prod_{s=t}^{T}\CONTROL_{s} } \) can then be identified with a random variable in \( \Adapted{\segment{t}{T} \times \Omega}{\bigcup_{s=t}^{T}{\mathbb X}_{s} \cup \bigcup_{s=t}^{T}\CONTROL_{s} } \). 
We call \emph{extended risk measure} any mapping~$\GG_t$ that maps random variables in \( \Adapted{\segment{t}{T} \times \Omega}{\bigcup_{s=t}^{T}{\mathbb X}_{s} \cup \bigcup_{s=t}^{T}\CONTROL_{s} } \) towards the real numbers \cite{Follmer-Schied:2002}. The lower the risk measure~$\GG_t$, the better. The basic example of a risk measure is the mathematical expectation under a given probability distribution. A celebrated risk measure in mathematical finance is the \emph{tail/average/conditional value-at-risk}. With $\GG_t$ an extended risk measure and $\alpha \in \RR$ a given \emph{risk level}, we define recovery regimes by \begin{align} {\mathcal A}^{t} = \{ & \bp{ \sequence{\va{X}_{s}}{s\in \segment{t}{T}}, \sequence{\va{U}_{s}}{s\in \segment{t}{T}} } \in \Adapted{\Omega}{\prod_{s=t}^{T}{\mathbb X}_{s} \times \prod_{s=t}^{T}\CONTROL_{s} } \mid \nonumber \\ & \GG_{t} \bc{ \sequence{\va{X}_{s}}{s\in \segment{t}{T}}, \sequence{\va{U}_{s}}{s\in \segment{t}{T}} } \leq \alpha \} \eqfinp \label{eq:alpha} \end{align} The quantity \( \GG_{t} \bc{ \sequence{\va{X}_{s}}{s\in \segment{t}{T}}, \sequence{\va{U}_{s}}{s\in \segment{t}{T}} } \) measures the ``risk'' borne by the random process \( \bp{ \sequence{\va{X}_{s}}{s\in \segment{t}{T}}, \sequence{\va{U}_{s}}{s\in \segment{t}{T}} } \). Therefore, recovery regimes like in~\eqref{eq:alpha} represent a form of ``risk containment'' under the level~$\alpha$. \subsubsection{Robust viability and the worst case risk measure} The robust viability inspired definition of resilience in~\S\ref{Deterministic_and_robust_viability} corresponds to~\eqref{eq:alpha} with \( \alpha < 1 \) and the \emph{worst case risk measure} \begin{equation} \GG_{t}\bp{ \sequence{\va{X}_{s}}{s\in \segment{t}{T}}, \sequence{\va{U}_{s}}{s\in \segment{t}{T}} } = \sup_{s\in \segment{t}{T}} \sup_{\omega \in \overline\Omega} {\mathbf 1}_{\sdo^c}\bp{\va{X}_{s}\np{\omega}} \eqfinv \end{equation} where \( \overline\Omega \subset \Omega \). 
Indeed, \( \GG_{t} \bc{ \sequence{\va{X}_{s}}{s\in \segment{t}{T}}, \sequence{\va{U}_{s}}{s\in \segment{t}{T}} } \leq \alpha < 1 \) means that \( {\mathbf 1}_{\sdo^c}\bp{\va{X}_{s}\np{\omega}} \equiv 0 \), that is, the state \( \va{X}_{s}\np{\omega} \) always belongs to~$\sdo$ (as $\sdo^c$ is the complementary set of~$\sdo$ in~${\mathbb X}$) for all $\omega \in \overline\Omega$. Here, the {worst case risk measure} captures that the state \( \va{X}_{s}\np{\omega} \) belongs to~$\sdo$ both for all times --- the core of viability, here handled by the term $\sup_{s\in \segment{t}{T}}$ --- and for all states of Nature in~$\overline\Omega$ --- the core of robustness, here handled by the term $\sup_{\omega \in \overline\Omega}$. \subsubsection{Stochastic viability and beyond: ambiguity} The stochastic viability inspired definition of resilience in~\S\ref{Stochastic_viability} corresponds to~\eqref{eq:alpha} with \( \alpha=1-\beta \) and the risk measure \begin{equation} \GG_{t}\bp{ \sequence{\va{X}_{s}}{s\in \segment{t}{T}}, \sequence{\va{U}_{s}}{s\in \segment{t}{T}} } = \PP\bc{ \exists s \geq t \mid \va{X}_{s} \not\in \sdo \mtext{ or } \va{U}_{s} \not\in \mathcal{U}_{s}\np{ \va{X}_{s} } } \eqfinp \end{equation} Now, suppose that different risk-holders do not share the same beliefs and let $\mathcal{P}$ denote a set of probabilities on $\np{\Omega, \tribu{F}}$. 
We can arrive at an \emph{ambiguity viability} inspired definition of resilience using the risk measure \begin{equation} \GG_{t}\bp{ \sequence{\va{X}_{s}}{s\in \segment{t}{T}}, \sequence{\va{U}_{s}}{s\in \segment{t}{T}} } = \sup_{\PP\in \mathcal{P}} \PP\bc{ \exists s \geq t \mid \va{X}_{s} \not\in \sdo \mtext{ or } \va{U}_{s} \not\in \mathcal{U}_{s}\np{ \va{X}_{s} } } \eqfinp \end{equation} Here, \( \GG_{t} \bc{ \sequence{\va{X}_{s}}{s\in \segment{t}{T}}, \sequence{\va{U}_{s}}{s\in \segment{t}{T}} } \leq \alpha=1-\beta \) means that \( \PP\bc{ \va{X}_{s} \in \sdo \eqsepv \va{U}_{s} \in \mathcal{U}_{s}\np{ \va{X}_{s} } \eqsepv \forall s \geq t } \geq \beta \), for all $\PP\in\mathcal{P}$. \subsubsection{Random exit time and viability} Let $\mu$ be a measure on the time set~$\TT$ like, for instance, the counting measure when $\TT=\NN$ or the Lebesgue measure when $\TT=\RR_+$. Then, the random quantity \begin{equation} \mu \{ s \geq t \eqsepv \va{X}_{s} \not\in \sdo \mtext{ or } \va{U}_{s} \not\in \mathcal{U}_{s}\np{ \va{X}_{s} } \} \end{equation} measures the time during which the state-control path \( \bp{ \sequence{\va{X}_{s}}{s\in \segment{t}{T}}, \sequence{\va{U}_{s}}{s\in \segment{t}{T}} } \) \emph{exits} from the viability constraints --- a number of time steps for the counting measure, a duration for the Lebesgue measure. Using risk measures --- like the \emph{tail/average/conditional value-at-risk} \cite{Follmer-Schied:2002} --- or stochastic orders \cite{Muller-Stoyan:2002,Shaked-Shanthikumar:2007}, we have different ways to express that this random quantity remains ``small''. \subsubsection{The general umbrella of cost functions} \label{cost_functions} All the examples above, and many more \cite{Rouge-Mathias-Deffuant:2015}, fall under the general umbrella of cost functions as follows. 
Consider a starting time~$t\in\TT$ and a measurable function \begin{equation} \Psi_{t} : \prod_{s=t}^{T}{\mathbb X}_{s} \times \prod_{s=t}^{T}\CONTROL_{s} \times \Omega \to \RR \eqsepv \label{eq:utility} \end{equation} that attaches a \emph{disutility} or \emph{cost} --- the opposite of \emph{value}, \emph{utility}, \emph{payoff} --- to any tail state and control path, starting from time~$t$, and to any state of Nature. Let $\FF$ be a risk measure that maps random variables on~$\Omega$ towards the real numbers. Then, an extended risk measure is given by \begin{equation} \GG_{t} \bc{ \sequence{\va{X}_{s}}{s\in \segment{t}{T}}, \sequence{\va{U}_{s}}{s\in \segment{t}{T}} } = \FF \bc{ \Psi_{t} \bp{ \sequence{\va{X}_{s}\np{\cdot}}{s\in \segment{t}{T}} , \sequence{\va{U}_{s}\np{\cdot}}{s\in \segment{t}{T}}, \cdot } } \eqfinp \end{equation} \subsection{Resilience and risk minimization} When the set~\( {\STRATEGY}^{R}_{t}\np{\barx_{t}} \) of {resilient strategies} at time~$t$ in~\S\ref{Resilient_strategies_and_resilient_states} is not empty, how can we select one among the many? Here is a possible way that makes use of risk measures for risk minimization purposes. \subsubsection{Indicators of resilience} Let $\GG_{t}$ be an extended risk measure. We can look for resilient strategies that minimize risk, that is, solutions of \begin{equation} \min_{ \policy \in {\STRATEGY}^{R}_{t}\np{\barx_{t}} } \GG_{t} \bc{ \bp{ \phi_{\segment{t}{T}}^{\policy} \np{\barx_{t}, \cdot} , \cdot } } \eqfinp \end{equation} The minimum of the risk measure is a potential candidate as an \emph{indicator of resilience}. Indeed, it is the best achievable measure of residual risk under a resilient strategy. 
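To make the risk-minimization recipe concrete, here is a minimal Python sketch. All data are hypothetical toy values (the acceptable set, the strategies and their closed-loop paths are our own illustration, not taken from the text): it keeps the strategies whose worst-case indicator risk measure is below a level $\alpha<1$ (the robust-viability recovery regime), then selects among them the one minimizing an expectation risk measure of a path-wise cost.

```python
# A minimal numerical sketch (hypothetical toy data, not from the paper):
# evaluating extended risk measures on finitely many scenarios, then
# selecting a risk-minimizing resilient strategy.

ACCEPTABLE = {0, 1, 2}                 # acceptable state set A (hypothetical)

def worst_case_risk(paths):
    """Worst-case risk measure: sup over scenarios and times of 1_{A^c}(X_s).

    It equals 0 iff every path stays in ACCEPTABLE at every time
    (the robust-viability recovery regime G_t <= alpha with alpha < 1)."""
    return max(0 if x in ACCEPTABLE else 1 for path in paths for x in path)

def expected_cost_risk(paths, cost):
    """Expectation risk measure of a path-wise cost Psi_t (uniform weights)."""
    return sum(cost(path) for path in paths) / len(paths)

# Closed-loop state paths generated by two hypothetical strategies over the
# same finite set of scenarios (shocks).
paths_by_strategy = {
    "cautious":   [[2, 1, 0, 0], [2, 2, 1, 0]],
    "aggressive": [[2, 0, 0, 0], [2, 4, 1, 0]],   # leaves A in one scenario
}

def time_outside(path):
    # Cost Psi_t: number of time steps spent outside the acceptable set.
    return sum(1 for x in path if x not in ACCEPTABLE)

# Recovery regime: keep the strategies with worst-case risk below alpha < 1 ...
resilient = [s for s, p in paths_by_strategy.items() if worst_case_risk(p) < 1]

# ... and, among them, minimize the expected time spent outside A
# (the minimum being an indicator of resilience, in the sense above).
best = min(resilient, key=lambda s: expected_cost_risk(paths_by_strategy[s], time_outside))
```

In this toy instance only the ``cautious'' strategy satisfies the worst-case containment, so it is selected; with several resilient strategies, the expectation (or any other risk measure $\FF$) acts as the tie-breaker.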
\subsubsection{Examples} Using cost functions as in~\S\ref{cost_functions}, we can look for resilient strategies that minimize \emph{expected costs} \begin{equation} \min_{ \policy \in {\STRATEGY}^{R}_{t}\np{\barx_{t}} } \EE \bc{ \Psi_{t} \bp{ \phi_{\segment{t}{T}}^{\policy} \np{\barx_{t}, \cdot} , \cdot } } \eqsepv \end{equation} or that minimize \emph{worst case costs}, where \( \overline\Omega \subset \Omega \), \begin{equation} \min_{ \policy \in {\STRATEGY}^{R}_{t}\np{\barx_{t}} } \sup_{ \omega \in \overline\Omega } \Psi_{t} \bp{ \phi_{\segment{t}{T}}^{\policy} \np{\barx_{t}, \omega } , \omega } \eqsepv \end{equation} or, more generally, that minimize \begin{equation} \min_{ \policy \in {\STRATEGY}^{R}_{t}\np{\barx_{t}} } \FF \bc{ \Psi_{t} \bp{ \phi_{\segment{t}{T}}^{\policy} \np{\barx_{t}, \cdot} , \cdot } } \eqsepv \end{equation} where $\FF$ is a risk measure that maps random variables on~$\Omega$ towards the real numbers \cite{Follmer-Schied:2002}. For instance, in the robust viability setting of~\S\ref{recovery_time}, an {indicator of resilience} could be the minimum (over all resilient strategies) of the maximal (over all states of Nature in~$\overline\Omega$) recovery time. \section{Conclusion} \label{sec:conclusions} Resilience is a rehashed concept in natural hazard management. Most of the formalizations of the concept require that, after any perturbation, the state of a system returns to an acceptable subset of the state set. Equipped with tools from control theory under uncertainty, we have proposed that resilience is the ability for the state-control random process as a whole to be driven to an acceptable ``recovery regime'' by a proper (adaptive) resilient strategy. Our definition of resilience is a form of controllability: a resilient strategy has the property to shape the closed loop flow so that the resulting state and control random process belongs to a given subset of random processes, the acceptable recovery regimes. 
We have proposed to handle risk thanks to risk measures\footnote{We also hinted at the possibility to use so-called stochastic orders.}, by defining recovery regimes that represent a form of ``risk containment''. In addition, risk measures are potential candidates as {indicators of resilience} as they measure the residual risk under a resilient strategy. Our contribution is formal, with its pros and cons: by its generality, our approach covers a large scope of notions of resilience; however, such generality makes it difficult to propose resolution methods. For instance, the possibility to use dynamic programming in stochastic viability relies upon a white noise assumption that we have not made. Much remains to be done regarding applications and numerical implementation. \bigskip \paragraph{Acknowledgement.} The author is indebted to the editor-in-chief, the advisory editor and two reviewers. They supplied detailed critique, comments and inputs which, ultimately, contributed to an improved version of the manuscript.
https://arxiv.org/abs/2008.08950
A Special Conic Associated with the Reuleaux Negative Pedal Curve
The Negative Pedal Curve of the Reuleaux Triangle w.r. to a point $M$ on its boundary consists of two elliptic arcs and a point $P_0$. Interestingly, the conic passing through the four arc endpoints and through $P_0$ has a remarkable property: one of its foci is $M$. We provide a synthetic proof based on Poncelet's polar duality and inversive techniques. Additional intriguing properties of the Reuleaux negative pedal curve are proved using straightforward techniques.
\section{Introduction} \label{sec:intro} \input{010_introduction} \section{Main Result: The Endpoint Conic} \label{sec:endpoint-conic} \input{020_endpoint_conic} \section{Some Elementary Properties} \label{sec:elementary} \input{030_elementary} \section{Conclusion} \label{sec:conclusion} \input{060_conclusion} \subsection*{Main Result} Our main result (Theorem~\ref{thm:endpoint}, Section~\ref{sec:endpoint-conic}) is an intriguing property of the conic $\mathcal{C}^*$ -- called here the {\em endpoint conic} -- that passes through the endpoints $A_1$, $A_2$, $B_1$, $B_2$ of the negative pedal curve $\mathcal{N}$, and through $P_0$: one of its foci is precisely the pedal point $M$; see Figure~\ref{fig:endpoint-conic}. We also give a full geometric description of its axes, directrix and vertices, and a criterion for identifying its type, according to the location of the pedal point $M$. In Section~\ref{sec:elementary} we prove other properties of the Reuleaux triangle and its negative pedal curve, involving tangencies, collinearities and homotheties. The proofs combine elementary techniques with inversive arguments and polar reciprocity. A review of polar reciprocity and other concepts, including the description of the negative pedal curve as a locus of points, as well as an alternative description of it as an envelope of lines, is postponed to the Appendix. \begin{figure}[H] \centering \includegraphics[trim=250 20 400 40,clip,width=.7\textwidth]{pics/0031_cusp_ellipse.eps} \caption{The sides of the Reuleaux triangle $\mathcal{R}$ are three circular arcs centered at the vertices $V_1,V_2,V_3$ of an equilateral triangle. Its negative pedal curve $\mathcal{N}$ w.r. to a pedal point $M$ on its boundary consists of a point $P_0$ (the antipode of $M$ through $V_3$) and of two elliptic arcs ${A_1}{A_2}$ and ${B_1}{B_2}$ (green and blue). The endpoint conic $\mathcal{C}^*$ (purple) passes through $P_0$ and the four endpoints of the two elliptic arcs of $\mathcal{N}$. 
It has a focus at $M$ and its focal axis passes through the center of the Reuleaux triangle. } \label{fig:endpoint-conic} \end{figure} \subsection{Collinearity and Tangencies} \label{sec:coll} Referring to Figure~\ref{fig:m-foci-circ-tri}: \begin{proposition} The negative pedal curve $\mathcal{N}$ of the Reuleaux triangle consists of two elliptic arcs $\mathcal{E}_A$ and $\mathcal{E}_B$ and a point $P_0$, the antipode of $M$ w.r. to the center of the circle where $M$ is located. The two ellipses $\mathcal{E}_A,\mathcal{E}_B$ are centered on the vertices of the Reuleaux triangle, $V_1$ and $V_2$, have one common focus at $M$, and their semi-major axes are of length $r$. \label{prop:elipse-reciproca} \end{proposition} \begin{proof} By hypothesis, $M$ belongs to arc ${V_1}{V_2}$ of the circle centered at $V_3$ that passes through $V_1$ and $V_2$. Hence, if $P$ is any point on this arc and we draw the perpendicular $p$ to $PM$ through $P$, all these lines will pass through a fixed point $P_0$, the antipode of $M$ w.r. to the center $V_3$. The second part follows directly from the general construction of the negative pedal curve of a circle. See Proposition~\ref{prop:negative-pedal-point-polar} in the Appendix. \end{proof} \begin{proposition} \label{prop:coll-first} The minor axes of ${\mathcal{E_A}}$ and ${\mathcal{E_B}}$ pass through $P_0$. \end{proposition} \begin{proof} By the definition of the negative pedal curve, if we regard $V_1$ as a point on arc ${V_1}{V_2}$ of the circle centered at $V_3$ on which the pedal point $M$ lies, then $P_0V_1$ will be perpendicular to $MV_1$. Since $V_1$ is the center of $\mathcal{E_A}$ and since line $MV_1$ contains its major axis, its minor axis will be along $P_0V_1$. Similarly, the minor axis of $\mathcal{E_B}$ is along $P_0V_2$. \end{proof} \begin{proposition} Points $A_2$, $B_2$ and $V_3$ are collinear and line $A_2B_2$ is a common tangent to $\mathcal{E}_A$ and $\mathcal{E}_B$. 
\label{prop:hom-first} \end{proposition} \begin{proof} By construction, the negative pedal curve of arc $V_2V_3$ is the elliptic arc of $\mathcal{E_A}$ delimited by $A_1$ and $A_2$. Also by construction, the perpendicular to ${M}{V_3}$ at $V_3$ is tangent to $\mathcal{N}$ at $A_2$ (resp. $B_2$) when $V_3$ is regarded as a point of arc $V_2V_3$ (resp. $V_1V_3$). This implies that $M V_3$ and $A_2V_3$ are perpendicular, as well as $M V_3$ and $B_2V_3$. Hence the points $A_2$, $V_3$ and $B_2$ are collinear ($\angle{A_2V_3B_2}=180^{\circ}$) and $A_2B_2$ is the common tangent to $\mathcal{E_A}$ and $\mathcal{E_B}$, at $A_2$ and $B_2$, respectively. \end{proof} \begin{proposition} Point $A_1$ is on $P_0V_2$ and $B_1$ is on $P_0V_1$. \end{proposition} \begin{proof} If we regard $V_1$ as a point on arc ${V_1}{V_2}$ of the circle centered at $V_3$, whose negative pedal curve is the point $P_0$, then, necessarily, $V_1P_0\perp M V_1$. Similarly, if we regard $V_1$ as a point on arc $V_1V_3$ of the circle centered at $V_2$, whose negative pedal curve is $\mathcal{E_B}$, then by $\mathcal{N}$'s construction $B_1V_1\perp M V_1$. Since this perpendicular must be unique, $P_0$, $B_1$, and $V_1$ are collinear, as are $P_0$, $A_1$, and $V_2$ by the same argument applied at $V_2$. \end{proof} \begin{proposition} The line joining the intersection points of $\mathcal{E}_A$ and $\mathcal{E}_B$ is the perpendicular bisector of segment $[f_A f_B]$, where $f_A$ and $f_B$ denote the second foci of $\mathcal{E}_A$ and $\mathcal{E}_B$ (the common first focus being $M$), and it also passes through $P_0$. \label{prop:coll-last} \end{proposition} \begin{proof} Let $U_1,U_2$ denote the points where $\mathcal{E_A}$ and $\mathcal{E_B}$ intersect. In order to prove that $P_0$, $U_1$, and $U_2$ are collinear, we show each one lies on the perpendicular bisector of $[f_A f_B]$. Since $U_1$ and $U_2$ lie on both $\mathcal{E_A}$ and $\mathcal{E_B}$, whose foci are $M,f_A$ and $M,f_B$, respectively, and whose major axes have length $2r$, we have, for $i=1,2$, \[ U_if_A+U_iM=2r;\;\;\;\;\;\; U_if_B+U_iM=2r. 
\] This implies that $U_1f_A= U_1f_B$ and $U_2f_A= U_2f_B$, hence both $U_1$ and $U_2$ belong to the perpendicular bisector of $[{f_A}{f_B}]$. Since we have already shown that ${{P_0}{V_1}}\perp{{M}{V_1}}$, and since $V_1$ is the midpoint of $[Mf_A]$, the line $P_0V_1$ is the perpendicular bisector of $[Mf_A]$, which implies that $P_0f_A=P_0M$. Similarly, $P_0f_B=P_0M$, and hence $P_0f_A= P_0f_B$. Therefore $P_0$ is also on the perpendicular bisector of $[f_A f_B]$, ending the proof. \end{proof} \begin{figure} \centering \includegraphics[trim=200 10 280 10,clip,width=1.0\textwidth]{pics/0061_branch_ellips_tri.eps} \caption{The negative pedal curve $\mathcal{N}$ w.r. to pedal point $M$ consists of two arcs of ellipses $\mathcal{E}_A$ and $\mathcal{E}_B$ (green and blue), centered on Reuleaux vertices $V_1,V_2$, respectively. They have a common focus at $M$, and the other foci are $f_A,f_B$. Their major axes have length $2r$, equal to the diameters of the three Reuleaux circles (dashed). Points $P_0,A_1,V_2$ and $P_0,B_1,V_1$ are collinear, along the minor axes. The lines $P_0A_1$ and $P_0B_1$ are tangent to $\mathcal{E}_A$ and $\mathcal{E}_B$, respectively. $A_2B_2$ is tangent to both ellipses and $A_2,B_2,V_3$ are collinear. The circle (black) passing through $M$, $f_A$ and $f_B$ is centered on $P_0$ (the antipode of $M$ w.r. to $V_3$). The distance between the foci $f_A$ and $f_B$ is constant. Triangle $\mathcal{T}=\triangle{f_Af_BP_0}$ is equilateral and its sides pass through (i) $A_2$, (ii) $B_2$, (iii) $A_1,B_1$, respectively. Both intersections $U_1,U_2$ of $\mathcal{E}_A$ with $\mathcal{E}_B$ lie on the perpendicular bisector of $[{f_A}{f_B}]$, hence are collinear with $P_0$. 
$\mathcal{T}$ and $\triangle{V_1}{V_2}{V_3}$ are homothetic (homothety center $M$, homothety ratio $2$).} \label{fig:m-foci-circ-tri} \end{figure} \subsection{Triangles and Homotheties} \label{sec:hom} \noindent Referring to Figure~\ref{fig:m-foci-circ-tri}: \begin{proposition} The two sides of triangle $\triangle f_AP_0f_B$, incident on $P_0$, contain points $A_2$ and $B_2$. The other side contains points $A_1$ and $B_1$. \end{proposition} \begin{proof} The construction of the negative pedal curve of arc $V_2V_3$ implies $A_1V_2\perp MV_2$. Since $V_2$ is the center of $\mathcal{E}_B$, hence the midpoint of $[Mf_B]$, $A_1V_2$ is the perpendicular bisector of $[Mf_B]$, so $A_1f_B=A_1M$. Since $A_1$ lies on $\mathcal{E}_A$, $MA_1+f_A A_1=2r$, hence $f_B A_1+f_A A_1=2r=f_A f_B$, where the equality $f_Af_B=2r$ is proved in Proposition~\ref{prop:hom-last}, whose proof is independent of the present one. Therefore, the triangle inequality implies that $f_B$, $A_1$, and $f_A$ must be collinear. A similar proof applies to $B_1$. In order to prove that $P_0$, $A_2$, and $f_A$ are collinear, we simply show that $P_0f_A=P_0A_2+A_2f_A$. As noted above, $A_2V_3$ is perpendicular to $P_0M$ and $V_3$ is the midpoint of $[P_0M]$. Hence $A_2V_3$ is the perpendicular bisector of $[P_0M]$; so $P_0A_2=MA_2$. Since $A_2$ lies on ${\mathcal E_A}$, we have: \[ P_0A_2+A_2f_A=MA_2+A_2f_A=2r. \] Since $P_0f_A=P_0M=2r$ (by the proof of Proposition~\ref{prop:coll-last} and the fact that $P_0$ is the antipode of $M$), the claim follows. The proof for $B_2$ is similar. \end{proof} \begin{proposition} Triangles $\triangle f_A f_B P_0$ and $\triangle V_1V_2V_3$ are homothetic at ratio 2, with $M$ the homothety center. Hence, $\triangle f_A f_B P_0$ is equilateral and the distance between $f_A$ and $f_B$ equals $2r$. Furthermore, their barycenters $X_2$ and $X_2'$ are collinear with $M$. \label{prop:hom-last} \end{proposition} \begin{proof} Points $V_1,V_2,V_3$ are the midpoints of $Mf_A$, $Mf_B$, and $P_0M$, respectively. Thus, ${V_1}{V_2}$ is a mid-base of $\triangle f_AMf_B,$ $V_2V_3$ is a mid-base of $\triangle{Mf_B P_0}$ and $V_3V_1$ is a mid-base of $\triangle{P_0Mf_A}$. 
Hence $\triangle f_A f_B P_0$ and $\triangle {V_1}{V_2}{V_3}$ are homothetic with ratio 2 and homothety center $M$. Therefore $\triangle f_A f_B P_0$ is equilateral, with sides twice those of the original triangle ($f_A f_B=2\,{V_1}{V_2}$), and the distance between $f_A$ and $f_B$ equals the diameter $2r$ of the circles that form the Reuleaux triangle. This shows that the distance between the second foci of $\mathcal{E_A}$ and $\mathcal{E_B}$ is constant and equal to the length of their major axes. Note that lines $V_1f_A,V_2f_B,P_0V_3$ intersect at $M$, hence the two triangles are perspective at $M$. Due to the parallelism of their sides, their medians are respectively parallel; let $X_2$ and $X_2'$ denote the barycenters of triangles $\triangle{V_1V_2V_3}$ and $\triangle{f_Af_BP_0}$, respectively. The barycenter divides the medians in equal proportions, which guarantees $\triangle M{X_2}V_2\sim \triangle MX_2'f_B$. Since $M,V_2,f_B$ are collinear, so are $M,X_2,X_2'$. \end{proof}
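The synthetic results above lend themselves to a direct numerical check. The following Python sketch (the coordinates, the side length $r=1$, and the sample angles are our own choices for one configuration, not part of the paper) verifies that $\triangle f_Af_BP_0$ is equilateral with side $2r$ and homothetic to $\triangle V_1V_2V_3$ with center $M$ and ratio $2$, and that a tangency point of the negative pedal of arc $V_2V_3$ satisfies the focal property $QM+Qf_A=2r$ of $\mathcal{E}_A$.

```python
import cmath, math

r = 1.0
V1, V2 = 0.0 + 0.0j, r + 0.0j                     # equilateral triangle of side r
V3 = r / 2 + (r * math.sqrt(3) / 2) * 1j

# Pedal point M on arc V1V2 (circle centered at V3, radius r); the angle
# 250 degrees is an arbitrary sample in that arc's range [240, 300].
M = V3 + r * cmath.exp(1j * math.radians(250))

# Reflections of M in the vertices: the second foci f_A, f_B and the point P_0.
fA, fB, P0 = 2 * V1 - M, 2 * V2 - M, 2 * V3 - M

# Triangle f_A f_B P_0 is equilateral with side 2r ...
for a, b in [(fA, fB), (fB, P0), (P0, fA)]:
    assert abs(abs(a - b) - 2 * r) < 1e-12
# ... homothetic to V1 V2 V3 with center M and ratio 2 ...
assert abs((fA - M) - 2 * (V1 - M)) < 1e-12
# ... and P_0 is equidistant from M, f_A (the black circle of the figure).
assert abs(abs(P0 - fA) - abs(P0 - M)) < 1e-12

def dot(a, b):
    """Euclidean inner product of complex numbers viewed as plane vectors."""
    return (a * b.conjugate()).real

# Tangency point of the negative pedal on arc V2V3: for P on the circle
# centered at V1, the line through P perpendicular to MP is the perpendicular
# bisector of [M, X], X = 2P - M; it touches the ellipse with foci M, f_A at
# the point Q of segment [f_A, X] with QM = QX.
P = V1 + r * cmath.exp(1j * math.radians(20))     # sample point of arc V2V3
X = 2 * P - M
d = X - fA
t = (abs(d) ** 2 - abs(fA - M) ** 2) / (2 * (dot(fA - M, d) + abs(d) ** 2))
Q = fA + t * d
assert 0 < t < 1
assert abs(abs(Q - M) - abs(Q - X)) < 1e-9        # Q on the bisector of [M, X]
assert abs(abs(Q - M) + abs(Q - fA) - 2 * r) < 1e-9   # focal sum = major axis 2r
```

The last assertion is exactly the defining property of $\mathcal{E}_A$ in Proposition~\ref{prop:elipse-reciproca}: focal radii from $M$ and $f_A$ summing to the major axis $2r$.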
https://arxiv.org/abs/1107.5209
"Spectral implies Tiling" for Three Intervals Revisited
In \cite{BCKM} it was shown that "Tiling implies Spectral" holds for a union of three intervals and the reverse implication was studied under certain restrictive hypotheses on the associated spectrum. In this paper, we reinvestigate the "Spectral implies Tiling" part of Fuglede's conjecture for the three interval case. We first show that the "Spectral implies Tiling" for two intervals follows from the simple fact that two distinct circles have at most two points of intersections. We then attempt this for the case of three intervals and except for one situation are able to prove "Spectral implies Tiling". Finally, for the exceptional case, we show a connection to a problem of generalized Vandermonde varieties.
\section{\bf{Introduction}} We begin with the standard definitions and the statement of Fuglede's conjecture. Let $\Omega$ and $T$ be Lebesgue measurable subsets of ${\mathbb R}^d$ with finite positive measure. For $\lambda \in {\mathbb R}^d$, let $$e_{\lambda}(x):=|\Omega|^{-1/2} e^{2 \pi i \lambda \cdot x}{\chi}_{\Omega}(x),\,\,\, x\in {\mathbb R}^d .$$ \begin{definition} $\Omega$ is said to be a {\bf $spectral$ $set$} if there exists a subset $\Lambda \subset {\mathbb R}^d$ such that the set of exponential functions $E_{\Lambda}:=\{e_\lambda:\lambda \in \Lambda\}$ is an orthonormal basis for the Hilbert space $L^2(\Omega)$. The set $\Lambda$ is said to be a {\bf $spectrum$} for $\Omega$ and the pair $(\Omega,\Lambda)$ is called a {\bf $spectral$ $pair$}. \end{definition} \begin{definition} $T$ is said to be a {\bf $prototile$} if $T$ tiles ${\mathbb R}^d$ by translations; i.e., if there exists a subset $\mathcal T \subset {\mathbb R}^d$ such that $\{T+t: t \in \mathcal T\}$ forms a partition a.e. of ${\mathbb R}^d$, where $T+t=\{x+t : x \in T\}$. The set $\mathcal T$ is said to be a {\bf $tiling$ $set$} for $T$ and the pair $(T,\mathcal T)$ is called a {\bf $tiling$ $pair$}. \end{definition} The study of relationships between spectral and tiling properties of sets began with the work of B. Fuglede \cite{Fug}, who proved the following result: \smallskip \begin{theorem}(Fuglede~\cite{Fug})\label{fuglede} Let ${\mathcal {L}}$ be a full rank lattice in ${\mathbb R}^d$ and let ${\mathcal {L}}^*$ be the dual lattice. Then $(\Omega,{\mathcal {L}})$ is a tiling pair if and only if $(\Omega, {\mathcal {L}}^*)$ is a spectral pair. \end{theorem} \smallskip In the same paper, Fuglede made the following conjecture, which is also known as the Spectral Set conjecture. 
\smallskip \begin{conjecture}(Fuglede's conjecture) {\it A set $\Omega \subset {\mathbb R}^d$ is a spectral set if and only if $\Omega$ tiles ${\mathbb R}^d$ by translation.} \end{conjecture} This led to the study of spectral and tiling properties of sets. We refer the reader to \cite{BM} for a survey and the present status of this problem. \medskip In one dimension, for the simplest case when $\Omega$ is a finite union of intervals, the problem is open in both directions and only the $2$-interval case has been completely resolved by Laba in \cite{L1}, where she proved that the conjecture holds true. \medskip In \cite{BCKM} the case of three intervals was explored. It was shown there that the ``Tiling implies Spectral'' part of Fuglede's conjecture is true in this case, and the reverse implication was proved under some restrictive hypotheses on the associated spectrum. \medskip Recently, in \cite{BM}, the authors have shown that any spectrum associated with a spectral set which is a finite union of intervals is periodic (see \cite{Kol} for a simplification of the proof). One of the key ingredients in both proofs is an embedding of the spectrum in a suitable vector space, equipped with an indefinite conjugate linear form. In this note we develop these ideas to give another proof of the ``Spectral implies Tiling'' part of Fuglede's conjecture for two intervals and then we attempt this for the case of three intervals. With the exception of one case, we are able to conclude that ``Spectral implies Tiling'' indeed holds. In the last section, we show a connection of the exceptional case to a question on the intersections of generalized Vandermonde varieties restricted to the $3$-torus. \section{Embedding $\Lambda$ in a vector space} In this section we recall the embedding of the spectrum in a vector space \cite{BM}. \medskip Consider the $2n$-dimensional vector space ${\mathbb C}^n\times{\mathbb C}^n$. 
We write its elements as $\underbar{v}=\left(v_1,v_2\right)$ with $v_1,v_2\in {\mathbb C}^n$. We define a conjugate linear form $\odot$ on ${\mathbb C}^n\times{\mathbb C}^n$ as follows: for $\underbar{v},\underbar{w}\in{\mathbb C}^n\times{\mathbb C}^n$, let $$\underbar{v}\odot\underbar{w}:= \langle v_1,w_1\rangle -\langle v_2, w_2\rangle, $$ where $\langle\cdot,\cdot\rangle$ denotes the usual inner product on ${\mathbb C}^n$. Note that this conjugate linear form is degenerate, i.e., there exists $\underbar{v} \in {\mathbb C}^n\times{\mathbb C}^n$, $\underbar{v} \neq 0$, such that $\underbar{v}\odot\underbar{v}=0$. We call such a vector a {\it null-vector}. For example, every element of $\mathbb T^n \times \mathbb T^n$ is a null-vector. \medskip A subset $S\subseteq{\mathbb C}^n\times{\mathbb C}^n$ is called a set of {\it mutually null-vectors} if $\forall \,\,\underbar{v},\underbar{w}\in S$, we have $\underbar{v}\odot\underbar{w}=0$. \medskip It is clear from the definition that elements of a set of mutually null-vectors are themselves necessarily null-vectors. Any linear subspace $V$ spanned by a set of mutually null-vectors is itself a set of mutually null-vectors and $dim(V)\leq n$. \medskip Now, suppose $\Omega=\cup_{j=1}^n \left[a_j, a_j+r_j\right)$ is a union of $n$ disjoint intervals with $a_1=0$ and $ |\Omega| = \sum_1^n r_j=1$. 
We define a map $\varphi_{\Omega}$ from ${\mathbb R}$ to $\mathbb T^n \times \mathbb T^n \subseteq {\mathbb C}^n \times {\mathbb C}^n$ by $$ x\rightarrow \varphi_{\Omega}(x)=\left(\varphi_1(x); \varphi_2(x)\right),$$ where $$ \varphi_1(x)=\left(e^{2\pi i (a_1+r_1) x}, e^{2\pi i (a_2+r_2) x}, \dots, e^{2\pi i (a_n+r_n) x}\right)$$ $$ \varphi_2(x)=\left(1, e^{2\pi i a_2 x}, \dots, e^{2\pi i a_n x}\right).$$ For a set ${\Lambda} \subset {\mathbb R}$, the mutual orthogonality of the set of exponentials $E_{\Lambda} = \{e_\lambda: \lambda \in {\Lambda}\}$ is equivalent to saying that the set $\varphi_\Omega({\Lambda}) = \{\varphi_\Omega(\lambda) ; \lambda \in \Lambda \}$ is a set of mutually null vectors, and so the vector space $V_\Omega({\Lambda})$ spanned by $\varphi_\Omega({\Lambda})$ has dimension at most $n$. Therefore if $(\Omega , {\Lambda})$ is a spectral pair, we can say that ${\Lambda}$ has a ``local finiteness property'', in the sense that there exists a finite subset $\mathcal{B}=\left\{y_1,\dots,y_m\right\} \subseteq\Lambda$, $m\leq n$ which determines $\Lambda$ uniquely. More precisely we have, \begin{lemma}\label{local finite} Let $(\Omega,\Lambda)$ be a spectral pair and let $\mathcal{B}\subseteq\Lambda$ be such that $\varphi_{\Omega}(\mathcal B):=\left\{\varphi_{\Omega}(y):y\in \mathcal{B} \right\}$ forms a basis of $V_{\Omega}(\Lambda)$. Then $x\in\Lambda\,$ if and only if $\,\varphi_{\Omega}(x)\odot\varphi_{\Omega}(y)=0,\,\, \forall \, y\in\mathcal{B}$. \end{lemma} Next, we give a criterion for the periodicity of the spectrum. \begin{lemma}\label{repeated} Let $(\Omega,\Lambda)$ be a spectral pair. If $\exists\ \lambda_1, \lambda_2 \in \Lambda$ such that $\varphi_{\Omega}(\lambda_1)=\varphi_{\Omega}(\lambda_2)$, then $d=|\lambda_1-\lambda_2|\in{\mathbb N}$ and $\Lambda$ is $d$-periodic, i.e., $\Lambda= \{\lambda_1,\dots,\lambda_d\} +d {\mathbb Z}$. 
\end{lemma} \begin{proof} Since $\varphi_{\Omega}(\lambda_1)=\varphi_{\Omega}(\lambda_2)$, we have $\varphi_{\Omega}(d)=(1,\dots,1;1,\dots,1)$ and hence $\varphi_{\Omega}(x+d)=\varphi_{\Omega}(x), \forall \, x\in{\mathbb R}$. Let $\mathcal{B}\subseteq\Lambda$ be such that $\varphi_{\Omega} (\mathcal B)$ is a basis of $V_{\Omega}(\Lambda)$. Then, whenever $\lambda\in\Lambda$, we have $\varphi_{\Omega}(\lambda+nd)\odot\varphi_{\Omega}(y)=\varphi_{\Omega}(\lambda)\odot\varphi_{\Omega}(y)=0,\forall \,n \in {\mathbb Z} \ \mbox{and} \ \forall \, y\in\mathcal{B}$. Thus $\lambda+d{\mathbb Z}\subseteq \Lambda$ and so $\Lambda$ is $d$-periodic. By a simple application of the Poisson summation formula we see that $d\in{\mathbb N}$. But $\Lambda$ must have density $1$ (by Landau's density theorem \cite{Landau}), so we conclude that $\Lambda= \{\lambda_1,\dots,\lambda_d\} +d {\mathbb Z}$. \end{proof} \section {Spectral Implies Tiling for 2 intervals}\label{2int vs} We will now use the ideas developed in the previous section to give a simple proof of the ``Spectral implies Tiling'' part of Fuglede's conjecture for a set which is a union of two intervals. See \cite{L1} for the original proof. \medskip Let $\Omega=[0,r] \cup [a,a+1-r]$, where $0<r<1$, $r<a$, and let $(\Omega,\Lambda)$ be a spectral pair. Without loss of generality, we may assume that $0 \in \Lambda$. \medskip Consider the map $\varphi_{\Omega}:\Lambda\rightarrow{\mathbb C}^2\times{\mathbb C}^2$ given by $$\varphi_{\Omega}(\lambda):=(e^{2 \pi i\lambda r}, e^{2 \pi i \lambda (a+1-r)}; 1, e^{2 \pi i\lambda a})$$ and let $V_\Omega(\Lambda)$ be the subspace spanned by $\varphi_{\Omega}(\Lambda)=\{\varphi_{\Omega}(\lambda):\lambda\in\Lambda\}$. Then we have $dim(V_\Omega(\Lambda)) \leq 2$. We will now show that in fact $dim(V_\Omega(\Lambda)) = 2$, unless $\Omega$ is degenerate, i.e., $\Omega$ consists of a single interval of length $1$.
\medskip First, observe that for $\lambda, \lambda' \in {\mathbb R}$, the vectors $\varphi_{\Omega}(\lambda)$ and $\varphi_{\Omega}(\lambda')$ are linearly dependent if and only if $\varphi_{\Omega}(\lambda) = \varphi_{\Omega}(\lambda')$ (since the third coordinate in $\varphi_{\Omega}(x)$ is $1 \, \forall \,\, x \in {\mathbb R}$). Now if $dim(V_\Omega(\Lambda))=1$, then $\forall \, \lambda \in \Lambda$, $\varphi_{\Omega}(\lambda)=\varphi_{\Omega}(0)=(1,1; 1,1)$ and so $\widehat{\chi_\Omega}(n\lambda) =0 \,\forall\, n \in {\mathbb Z}\setminus\{0\}$, and thus Poisson summation implies that $\lambda \in {\mathbb Z}$. Further, observe that $\Lambda$ is actually a subgroup of ${\mathbb Z}$, therefore using Landau's density criterion we get $\Lambda={\mathbb Z}$. In particular, $1 \in \Lambda$, and so $e^{2 \pi i r}=1$, which implies that $r=0$ or $1$, and thus this is a degenerate case. \medskip Now let $\lambda_1=0, \lambda_2, \lambda_3$ be the first three elements of $\Lambda \cap [0,\infty)$. We claim that $\varphi_{\Omega}(\lambda_2) \neq \varphi_{\Omega}(0)$. For if $\varphi_{\Omega}(\lambda_2) = \varphi_{\Omega}(0)$ then by Lemma \ref{repeated}, $\Lambda$ is $\lambda_2$-periodic, and since by our assumption $\lambda_2$ is the smallest positive element of $\Lambda$, we get $\Lambda= \lambda_2 {\mathbb Z} $ and $dim(V_{\Omega}(\Lambda))=1$, a contradiction. A similar argument shows that $\varphi_{\Omega}(\lambda_2) \neq \varphi_{\Omega}(\lambda_3)$. Hence, we have two possible cases to consider: \begin{enumerate} \item $\varphi_{\Omega}(0)=\varphi_{\Omega}(\lambda_3)$, \item $\varphi_{\Omega}(0),\varphi_{\Omega}(\lambda_2),\varphi_{\Omega}(\lambda_3)$ are all distinct. \end{enumerate} \medskip {\bf Case(1).} By Lemma \ref{repeated}, $\lambda_3=d \in {\mathbb N} $ and $\Lambda = d{\mathbb Z} \cup (\lambda_2 + d {\mathbb Z})$. But $\Lambda$ must have density $1$, so $d=2$.
Next, $\varphi_{\Omega}(2)=\varphi_{\Omega}(0)$ implies that $e^{2\pi i 2 a}= e^{2\pi i 2 r}= e^{2\pi i 2(a+1-r)} =1 $, and so $a \in {\mathbb Z}/2$ and $r= 1/2$. That such an $\Omega$ tiles ${\mathbb R}$ is now easy to see. \medskip {\bf Case(2).} Suppose that $\varphi_{\Omega}(0)$, $\varphi_{\Omega}(\lambda_2)$, $\varphi_{\Omega}(\lambda_3)$ are all distinct. Then any two of these are linearly independent and form a basis of $V_\Omega(\Lambda)$. \medskip Let \begin{equation} A=\left(\begin{array}{cccc} 1 & 1 & 1 & 1 \\ 1 & e^{2 \pi i \lambda_2 a} & e^{ 2 \pi i \lambda_2 r} & e^{2 \pi i \lambda_2(a+1-r)} \\ 1 & e^{2 \pi i \lambda_3 a} & e^{ 2 \pi i \lambda_3 r} & e^{2 \pi i \lambda_3(a+1-r)} \\ \end{array}\right) \end{equation} \medskip Then we have $Rank(A)=2$, and in particular \begin{equation} \left|\begin{array}{ccc} 1 & 1 & 1 \\ 1 & e^{2 \pi i \lambda_2 a} & e^{ 2 \pi i \lambda_2 r} \\ 1 & e^{2 \pi i \lambda_3 a} & e^{ 2 \pi i \lambda_3 r} \\ \end{array}\right| = 0 \end{equation} \medskip Therefore, \begin{equation} \left|\begin{array}{cc} e^{2 \pi i \lambda_2 a}-1 & e^{2 \pi i \lambda_2 r}-1 \\ e^{2 \pi i \lambda_3 a}-1 & e^{2 \pi i \lambda_3 r}-1 \\ \end{array}\right| = 0 \end{equation} \medskip So finally we get \begin{equation}\label{4} (e^{2 \pi i \lambda_2 a}-1)( e^{2 \pi i \lambda_3 r}-1 )=(e^{2 \pi i \lambda_2 r}-1)(e^{2 \pi i \lambda_3 a}-1). \end{equation} \medskip Put $e^{2 \pi i \lambda_2 a}-1 =\alpha$ and $e^{2 \pi i \lambda_2 r}-1 =\beta$ in equation (\ref{4}). \medskip If $\alpha=0$, then $\varphi_{\Omega}(\lambda_2)\odot\varphi_{\Omega}(0)=0$ gives $e^{2 \pi i \lambda_2 r}+e^{2 \pi i \lambda_2(a+1-r)}=2$, and two unimodular numbers can sum to $2$ only if both equal $1$; hence $e^{2 \pi i \lambda_2 r}=e^{2 \pi i \lambda_2(a+1-r)}=1$, and so $\varphi_{\Omega}(\lambda_2)=\varphi_{\Omega}(0)$, which is a contradiction. If $\beta=0$, we have $e^{2 \pi i \lambda_2 a} = e^{2 \pi i \lambda_2(a+1-r)}$ and thus $\varphi_{\Omega}(\lambda_2)$ is of the form $(1, c; 1, c)$. Since the set $\{\varphi_{\Omega}(0),\varphi_{\Omega}(\lambda_2)\}$ generates $V_{\Omega}(\Lambda)$, all elements of $V_{\Omega}(\Lambda)$ are of this form.
It follows that $\Lambda\subseteq {\mathbb Z}$, and in fact $\Lambda$ is a subgroup of ${\mathbb Z}$. Thus $\Lambda={\mathbb Z}$, and $\Omega$ tiles ${\mathbb R}$ by ${\mathbb Z}$. \medskip So we may now assume that $\alpha, \beta \neq 0$, so that we have $$ \alpha( e^{2 \pi i \lambda_3 r}-1 )=\beta(e^{2 \pi i \lambda_3 a}-1).$$ Consider now two circles given by $C_1(t)=\alpha (e^{2 \pi i t}-1)$ and $C_2(s)=\beta (e^{2 \pi i s}-1)$, $t,\, s \in [0,1]$. Both circles pass through $0$. Further note that \begin{eqnarray}\label{points} C_1(\lambda_3r) = C_2(\lambda_3a),\\ C_1(\lambda_2r) = C_2(\lambda_2a).\label{points2} \end{eqnarray} We consider the various possibilities. \medskip First, if the two circles coincide, then they have the same center and radius, i.e., $\alpha = \beta$ and so $e^{2 \pi i \lambda_2 a}= e^{2 \pi i \lambda_2 r}$. Thus $ \varphi_{\Omega}(\lambda_2)$ is of the form $(c, 1; 1, c),$ and we conclude that $\Lambda={\mathbb Z}$ as before. \medskip Next, we consider the case when the two circles $C_1$ and $C_2$ are distinct. As mentioned above, both the circles pass through $0$, and equations (\ref{points}) and (\ref{points2}) give two further common points. But two distinct circles can have at most two distinct points of intersection. If $C_1(\lambda_2r) = C_2(\lambda_2a) = 0$ then $\varphi_{\Omega}(0)=\varphi_{\Omega}(\lambda_2)$, and similarly if $C_1(\lambda_3r) = C_2(\lambda_3a) = 0$, then $\varphi_{\Omega}(0)=\varphi_{\Omega}(\lambda_3)$. By our assumption these cases are not possible. Thus the only possibility is that $C_1(\lambda_2 r) = C_2(\lambda_2a) = C_1(\lambda_3r) = C_2(\lambda_3 a)$, all equal to the unique second point of intersection of the two circles. Then $ e^{2 \pi i \lambda_2 a} = e^{2 \pi i \lambda_3 a}$ and $e^{2 \pi i \lambda_2r} = e^{ 2 \pi i \lambda_3r}$ (and the remaining coordinate then agrees as well, by orthogonality with $\varphi_{\Omega}(0)$), i.e., $\varphi_{\Omega}(\lambda_2)=\varphi_{\Omega}(\lambda_3)$, which is again not possible. This completes the proof.
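The key geometric fact used above, that two distinct circles through the origin meet in at most one further point, can be made completely explicit. Writing the circles as $|z+\alpha|=|\alpha|$ and $|z+\beta|=|\beta|$ and eliminating $|z|^2$ shows that every common point lies on the line $\{t\,i(\alpha-\beta) : t\in{\mathbb R}\}$, with $t=0$ or $t=2\,\mathrm{Im}(\alpha\bar\beta)/|\alpha-\beta|^2$. A small numerical sketch of this (the derivation and the names below are our own, not from the text):

```python
def second_intersection(alpha, beta):
    """The common point other than 0 of the circles |z+alpha| = |alpha|
    and |z+beta| = |beta| (the images of C_1 and C_2; both pass through 0).
    Assumes alpha != beta; if Im(alpha*conj(beta)) = 0 the circles are
    tangent at 0 and the 'second' point degenerates to 0."""
    d = alpha - beta
    t = 2 * (alpha * beta.conjugate()).imag / abs(d) ** 2
    return t * 1j * d

def on_circle(z, gamma, tol=1e-9):
    """Does z lie on the circle with center -gamma and radius |gamma|?"""
    return abs(abs(z + gamma) - abs(gamma)) < tol
```

In Case(2), $C_1(\lambda_2 r)=C_2(\lambda_2 a)$ and $C_1(\lambda_3 r)=C_2(\lambda_3 a)$ are both common points of the two circles, so once $0$ is excluded they must coincide with this single second point.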
\section {On 3 intervals}\label{3int vs} In this section, we investigate the ``Spectral implies Tiling'' part of Fuglede's conjecture for three intervals, in the same spirit as in the previous section. \medskip Let $\Omega= [0,r]\cup[a,a+s]\cup[b,b+1-r-s]$ where $0<r,s,r+s<1$ and let $(\Omega,\Lambda)$ be a spectral pair. We will assume here that $0 \in \Lambda$. Again we define the map $\varphi_{\Omega}:\Lambda\rightarrow {\mathbb C}^3 \times{\mathbb C}^3$ by $$\varphi_{\Omega}(\lambda):=(e^{2 \pi i \lambda r}, e^{2 \pi i \lambda (a+s)}, e^{2 \pi i \lambda (b+1-r-s)}; 1, e^{2 \pi i \lambda a}, e^{2 \pi i \lambda b})$$ and let $V_\Omega(\Lambda)$ be the subspace spanned by $\varphi_{\Omega}(\Lambda)=\{\varphi_{\Omega}(\lambda):\lambda\in\Lambda \}$. We know that $dim(V_{\Omega}(\Lambda))\leq 3$. As in the $2$-interval case we will first show that if $\Omega$ is non-degenerate, in the sense that the three intervals are disjoint and have nonzero length, then $dim(V_{\Omega}(\Lambda))=3$. \begin{proposition} Let $\Omega$, as above, be a spectral set and let $\Lambda$ be an associated spectrum. Then $dim(V_{\Omega}(\Lambda))=3$. \end{proposition} \begin{proof} If $dim(V_{\Omega}(\Lambda))= 1$, we again get $\Lambda={\mathbb Z}$ and $r, s, 1-r-s \in {\mathbb Z}$, i.e., $\Omega$ consists of a single interval of length $1$, and this is a degenerate case. \medskip So let, if possible, $dim(V_{\Omega}(\Lambda))=2$. Let $\lambda_1=0 < \lambda_2 < \lambda_3$ be the first three elements of $\Lambda \cap [0,\infty)$.
In the two cases $\varphi_{\Omega}(0) = \varphi_{\Omega}(\lambda_2)$ or $\varphi_{\Omega}(\lambda_2) = \varphi_{\Omega}(\lambda_3)$ we conclude that $dim(V_{\Omega}(\Lambda))=1$, and if $\varphi_{\Omega}(0)=\varphi_{\Omega}(\lambda_3)$ we see easily that $\Lambda=2{\mathbb Z} \cup (2{\mathbb Z}+\lambda_2)$ and $r,s\in {\mathbb Z}/2$, i.e., $\Omega$ is a union of $2$ intervals of length $1/2$ or is a single interval of length $1$, and this case too is degenerate. \medskip Finally, suppose that $\varphi_{\Omega}(0),\varphi_{\Omega}(\lambda_2),\varphi_{\Omega} (\lambda_3)$ are all distinct. Define \begin{equation} A=\left(\begin{array}{cccccc} 1 & 1 & 1 & 1 & 1 & 1 \\ e^{2 \pi i \lambda_2 r} & e^{2 \pi i \lambda_2 (a+s)} & e^{2 \pi i \lambda_2 (b+1-r-s)} & 1 & e^{2 \pi i \lambda_2 a} & e^{2 \pi i \lambda_2 b}\\ e^{2 \pi i \lambda_3 r} & e^{2 \pi i \lambda_3 (a+s)} & e^{2 \pi i \lambda_3 (b+1-r-s)} & 1 & e^{2 \pi i \lambda_3 a} & e^{2 \pi i \lambda_3 b}\\ \end{array}\right) \end{equation} Then, by our assumption, $Rank(A)=2$, and the rows of $A$ are distinct. Since $\varphi_{\Omega}(0)\neq\varphi_{\Omega} (\lambda_2)$ and $\varphi_{\Omega}(0)\odot\varphi_{\Omega}(\lambda_2)=0$, at least two entries in the second row of $A$ are different from $1$, i.e., $\exists \,\, i_1, i_2$ such that $A(2,i_1),A(2,i_2)\neq 1$. Consider the $3 \times 3$ matrix formed by the $4$th column of $A$ (the column of $1$'s) together with the $i_1$th and $i_2$th columns; since $Rank(A)=2$, it is singular. Hence we have \begin{equation} \left|\begin{array}{ccc} 1 & 1 & 1 \\ 1 & A(2,i_1) & A(2,i_2) \\ 1 & A(3,i_1) & A(3,i_2) \\ \end{array}\right| = 0 \end{equation} \medskip Using the fact that $Rank(A)=2$, and $A(2,i_1),A(2,i_2)\neq 1$, we argue as in the two-interval case to conclude that the circles $C_1(t)= (A(2,i_1)-1) (e^{2\pi i t} -1)$ and $C_2(t)= (A(2,i_2)-1) (e^{2\pi i t} -1)$ coincide. Therefore, $A(2,i_2)= A(2,i_1)= \alpha$, say.
By choosing other columns of $A$, we see that the coordinates of $\varphi_{\Omega}(\lambda_2)$ are either $1$ or $\alpha$. But since $\varphi_{\Omega}(0)\odot\varphi_{\Omega}(\lambda_2)=0$, $\varphi_{\Omega}(\lambda_2)$ is either of the form $(1,1,\alpha; 1,1,\alpha)$ or $(1,\alpha,\alpha; 1,\alpha,\alpha)$ (up to suitable permutations). Now $\{\varphi_{\Omega}(0),\varphi_{\Omega}(\lambda_2) \} $ forms a basis of $V_{\Omega}(\Lambda)$, so as before, we see that $\Lambda={\mathbb Z}$. But then one of the intervals has length $0$ or $1$, and this is a degenerate case. \end{proof} So $Rank(A)=3$. Let $\lambda_1=0,\lambda_2,\lambda_3,\lambda_4$ be the first 4 elements of $\Lambda \cap [0,\infty)$. Each of the cases $\varphi_{\Omega}(\lambda_i)=\varphi_\Omega(\lambda_{i+1})$ or $\varphi_{\Omega}(\lambda_i)=\varphi_\Omega(\lambda_{i+2})$ will imply that $\Omega$ is degenerate, i.e., one of the intervals has length $0$. Now if $\varphi_{\Omega}(0)=\varphi_{\Omega}(\lambda_4)$, then $\lambda_4=d$ and the spectrum is $d$-periodic, and by a density argument we conclude $d=3$. It follows then that this is the case of three equal intervals, i.e., $r=s=1/3$, and Spectral implies Tiling follows by the result of \cite{Newman} (see \cite{BCKM} for a proof). So, now it remains to consider the case that $\varphi_{\Omega}(0),\varphi_{\Omega}(\lambda_2),\varphi_{\Omega}(\lambda_3),\varphi_{\Omega}(\lambda_4)$ are all distinct and $Rank(A)=3$. \medskip To proceed further, we will use the result that the spectrum is periodic \cite{BM}. Let $d$ be the smallest positive integer such that $d{\mathbb Z} \subseteq \Lambda$. Let $V_{\Omega}(d{\mathbb Z})$ denote the linear space spanned by the image of the arithmetic progression $d{\mathbb Z}$ under the map $\varphi_\Omega$. Now if $dim(V_{\Omega}(d{\mathbb Z}))=3$ or $2$, then by the results of \cite{BCKM}, Section 5, we get Spectral implies Tiling.
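The condition $dim(V_{\Omega}(d{\mathbb Z}))=1$ has a concrete meaning: if every endpoint of $\Omega$ is a multiple of $1/d$, then $\varphi_{\Omega}(nd)$ is the all-ones vector for every $n\in{\mathbb Z}$, so $\varphi_{\Omega}(d{\mathbb Z})$ spans a one-dimensional space. A minimal numerical check of this (the set $\Omega$ below, with $d=6$, is our own example, not from the text):

```python
import cmath

def e(t):
    """e^{2 pi i t}"""
    return cmath.exp(2j * cmath.pi * t)

def phi(x, intervals):
    """phi_Omega(x) for Omega a union of intervals [a, a+r),
    returned as one tuple (phi_1 followed by phi_2)."""
    left = tuple(e((a + r) * x) for a, r in intervals)
    right = tuple(e(a * x) for a, r in intervals)
    return left + right

def is_all_ones(v, tol=1e-9):
    return all(abs(z - 1) < tol for z in v)

# Omega = [0,1/3) u [1/2,5/6) u [7/6,3/2): all endpoints in (1/6)Z, |Omega| = 1
OMEGA = [(0.0, 1 / 3), (0.5, 1 / 3), (7 / 6, 1 / 3)]
```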
\medskip So without loss of generality, we may assume that $dim(V_{\Omega}(d{\mathbb Z}))=1$, and that $\Lambda$ is $d$-periodic. There are now two possible cases to consider: \begin{enumerate} \item $dim(V_{\Omega}(\Lambda \setminus d{\mathbb Z}))= 2$ \item $dim(V_{\Omega}(\Lambda \setminus d{\mathbb Z})) = 3$. \end{enumerate} In the first case we are able to show that $d=3$, thus $\Omega$ is a union of three equal intervals and hence Spectral implies Tiling as before. It is the second case that remains inconclusive. \medskip {\bf Case(1).} We show that in this case $d = 3$. Suppose not, so that $d > 3$. Let $\Lambda \cap (0,d)=\{\lambda_2,\lambda_3,\dots,\lambda_d\}$. Since $d$ is the minimal period, $\varphi_{\Omega}(\lambda_2),\varphi_{\Omega}(\lambda_3),\varphi_{\Omega}(\lambda_4)$ are all distinct, and since $dim(V_{\Omega}(\Lambda \setminus d{\mathbb Z}))=2$, $\{\varphi_{\Omega}(\lambda_2),\varphi_{\Omega}(\lambda_3), \varphi_{\Omega}(\lambda_4)\}$ is a linearly dependent set. Let \begin{eqnarray*} \varphi_{\Omega}(\lambda_2) & = &(\xi_1,\xi_2,\xi_3; 1,\xi_5,\xi_6)\\ \varphi_{\Omega}(\lambda_3) & = &(\rho_1,\rho_2,\rho_3; 1,\rho_5,\rho_6)\\ \varphi_{\Omega}(\lambda_4) & = &(\eta_1,\eta_2,\eta_3; 1,\eta_5,\eta_6) \end{eqnarray*} Now there exist $i,j$ such that $\xi_i\neq\rho_i$ and $\xi_j\neq\rho_j$.
By our assumption \begin{equation}\left|\begin{array}{ccc} 1 & \xi_i & \xi_j \\ 1 & \rho_i & \rho_j \\ 1 & \eta_i & \eta_j \\ \end{array}\right| = 0 \end{equation} and so, \begin{equation} \left|\begin{array}{ccc} 1 & \xi_i & \xi_j \\ 0 & \rho_i-\xi_i & \rho_j-\xi_j \\ 0 & \eta_i-\xi_i & \eta_j-\xi_j \\ \end{array}\right| = 0 \end{equation} Thus we obtain \begin{equation} (\rho_i-\xi_i)(\eta_j-\xi_j)=(\rho_j-\xi_j)(\eta_i-\xi_i) \end{equation} which, dividing by $\xi_i\xi_j$ and using that $\bar\xi=1/\xi$ on the unit circle, we rewrite as \begin{equation} (\rho_i \bar{\xi_i}-1)(\eta_j \bar{\xi_j}-1)=(\rho_j\bar{\xi_j}-1)(\eta_i\bar{\xi_i}-1) \end{equation} \medskip Since $\xi_i\neq\rho_i$ and $\xi_j\neq\rho_j$ and $dim(V_{\Omega}(\Lambda \setminus d{\mathbb Z}))=2$, we can exclude the two possibilities that $\{\eta_i=\rho_i,\eta_j=\rho_j\}$ or that $\{\eta_i=\xi_i,\eta_j=\xi_j\}$. Then, by the same argument with two circles as at the end of Section~\ref{2int vs}, we see that $\rho_i \overline{\xi_i} =\rho_j \overline{\xi_j}= \alpha$. In particular this would hold for any other index $j'$ such that $\xi_{j'} \neq \rho_{j'}$. This implies that $\varphi_{\Omega}(\lambda_3-\lambda_2)=(1,1,\alpha;1,1,\alpha)$ or $(1,\alpha,\alpha;1,\alpha,\alpha)$ (or some suitable permutation). But since $\varphi_{\Omega}(\lambda):=(e^{2 \pi i \lambda r}, e^{2 \pi i \lambda (a+s)}, e^{2 \pi i \lambda (b+1-r-s)}; 1, e^{2 \pi i \lambda a}, e^{2 \pi i \lambda b})$, we see that $\lambda_3-\lambda_2 \in {\mathbb Z}$. We write $\lambda_3-\lambda_2 =k$, and show that $k{\mathbb Z} \subset \Lambda$. \medskip Consider the first situation, namely $\varphi_\Omega(k) = (1,1,\alpha;1,1,\alpha)$. Now $\varphi_{\Omega}(0),\varphi_{\Omega}(\lambda_2),\varphi_{\Omega} (\lambda_3)$ form a basis of $V_{\Omega}(\Lambda)$, and we are in the case where $\varphi_{\Omega} (\lambda_3) = (\xi_1,\xi_2,\alpha\xi_3; 1,\xi_5,\alpha\xi_6)$. But both $\varphi_{\Omega} (\lambda_2)$ and $\varphi_{\Omega} (\lambda_3)$ are orthogonal to $\varphi_{\Omega}(0)$; subtracting the two resulting relations gives $(\alpha-1)\xi_3=(\alpha-1)\xi_6$, and since $\alpha\neq 1$, we get $\xi_3 = \xi_6$.
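Both algebraic steps above, row-reducing the $3\times 3$ determinant with constant first column, and passing to the conjugate form via $\bar\xi=1/\xi$ on the unit circle, are easy to confirm numerically (all names and test values below are ours):

```python
import cmath

def u(t):
    """A point e^{2 pi i t} on the unit circle."""
    return cmath.exp(2j * cmath.pi * t)

def det3(m):
    """3x3 determinant, cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# arbitrary unimodular test values
xi_i, xi_j = u(0.13), u(0.41)
rho_i, rho_j = u(0.57), u(0.29)
eta_i, eta_j = u(0.83), u(0.07)

# row reduction: subtract the first row from the other two
full = det3([[1, xi_i, xi_j], [1, rho_i, rho_j], [1, eta_i, eta_j]])
reduced = (rho_i - xi_i) * (eta_j - xi_j) - (rho_j - xi_j) * (eta_i - xi_i)

# conjugate form: divide by xi_i * xi_j and use conj(xi) = 1/xi
conj_form = (rho_i * xi_i.conjugate() - 1) * (eta_j * xi_j.conjugate() - 1) \
    - (rho_j * xi_j.conjugate() - 1) * (eta_i * xi_i.conjugate() - 1)
```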
Then, $\varphi_{\Omega}(kn)\odot \varphi_{\Omega}(\lambda_i)=0,\,\,i=1,2,3$ for every $n \in {\mathbb Z}$. Thus by Lemma \ref{local finite}, $k{\mathbb Z} \subseteq \Lambda$, and $k < d$. But $d$ is the smallest positive integer with this property, which is a contradiction. A similar argument works for all other cases. Thus $d=3$, and $\Omega$ is a union of $3$ equal intervals, and Spectral implies Tiling follows. \section{Generalized Vandermonde Matrix} It remains now to consider the case when $dim(V_\Omega(d {\mathbb Z}))=1$ and $dim(V_\Omega(\Lambda \setminus d{\mathbb Z}))=3$, where $d$ is the smallest positive integer such that $d {\mathbb Z} \subseteq \Lambda$. Note that $dim(V_\Omega(d {\mathbb Z}))=1$ implies that $\Omega$ can be written as $$\Omega=[0,k_1/d] \cup [l_2/d,(l_2+k_2)/d] \cup [l_3/d,(l_3+k_3)/d],$$ where $l_i,k_i \in {\mathbb N}$ and $k_1+k_2+k_3=d$. \medskip If $d=3$ then it is the case of three equal intervals, in which case we know that Fuglede's conjecture holds. By known results it is possible to rule out the cases $d=4$ and $d=5$ as well. Hence the problem will be resolved if we can show that $d<6$. In any case, finding a bound on $d$ is desirable. \medskip Now if $d>3$, let $0=\lambda_1,\lambda_2,\lambda_3 < d $ be three elements of $\Lambda$ such that $\{\varphi_{\Omega}(0),\varphi_{\Omega}(\lambda_2), \varphi_{\Omega}(\lambda_3)\}$ forms a basis of $V_{\Omega}(\Lambda)$. By our assumption there exists $\lambda_4<d$ in $\Lambda$ such that $\varphi_{\Omega}(\lambda_2), \varphi_{\Omega}(\lambda_3),\varphi_{\Omega}(\lambda_4)$ are linearly independent.
We construct the matrix \begin{equation} A=\left(\begin{array}{cccccc} 1 & 1 & 1 & 1 & 1 & 1 \\ e^{2 \pi i \frac{\lambda_2 k_1}{d}} & e^{2 \pi i \frac{\lambda_2 (l_2+k_2)}{d}} & e^{2 \pi i \frac{\lambda_2 (l_3+k_3)}{d}} & 1 & e^{2 \pi i \frac{\lambda_2 l_2}{d}} & e^{2 \pi i \frac{\lambda_2 l_3}{d}}\\ e^{2 \pi i \frac{\lambda_3 k_1}{d}} & e^{2 \pi i \frac{\lambda_3 (l_2+k_2)}{d}} & e^{2 \pi i \frac{\lambda_3 (l_3+k_3)}{d}} & 1 & e^{2 \pi i \frac{\lambda_3 l_2}{d}} & e^{2 \pi i \frac{\lambda_3 l_3}{d}}\\ e^{2 \pi i \frac{\lambda_4 k_1}{d}} & e^{2 \pi i \frac{\lambda_4 (l_2+k_2)}{d}} & e^{2 \pi i \frac{\lambda_4 (l_3+k_3)}{d}} & 1 & e^{2 \pi i \frac{\lambda_4 l_2}{d}} & e^{2 \pi i \frac{\lambda_4 l_3}{d}}\\ \end{array}\right) \end{equation} The rank of this matrix is $3$, i.e., its four rows are linearly dependent, and hence each of its $4 \times 4$ minors is zero. \medskip Observe that the $4\times 4$ minors are of the form \begin{equation} \left|\begin{array}{cccc} 1 & 1 & 1 & 1 \\ X_1^{i} & X_1^{j} & X_1^{k} & X_1^{l}\\ X_2^{i} & X_2^{j} & X_2^{k} & X_2^{l}\\ X_3^{i} & X_3^{j} & X_3^{k} & X_3^{l}\\ \end{array}\right| \end{equation} Thus, after reductions, we get equations of the form \begin{equation}\label{15} \left|\begin{array}{cccc} 1 & 1 & 1 & 1 \\ 1 & X_1^{j} & X_1^{k} & X_1^{l}\\ 1 & X_2^{j} & X_2^{k} & X_2^{l}\\ 1 & X_3^{j} & X_3^{k} & X_3^{l}\\ \end{array}\right|=0 \end{equation} These are the determinants of generalized Vandermonde matrices in the variables $(X_1,X_2,X_3)$ and exponents $(j,k,l)$. We write (\ref{15}) as $$R_{(j,k,l)}(X_1,X_2,X_3)=0.$$ We are interested in the common zero set of these Vandermonde varieties intersected with the set $\{1\} \times \mathbb T^3$.
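The minors $R_{(j,k,l)}$ can be explored directly. The brute-force sketch below (helper names are ours) expands the $4\times 4$ determinant of (\ref{15}); it also illustrates why the ordinary Vandermonde factor divides $R_{(j,k,l)}$: the determinant vanishes as soon as two of the variables coincide.

```python
from itertools import permutations

def det(m):
    """Determinant by permutation expansion (fine for a 4x4 matrix)."""
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        sign = 1  # (-1)^(number of inversions of perm)
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= m[i][perm[i]]
        total += sign * prod
    return total

def R(jkl, xs):
    """The determinant of (15): a generalized Vandermonde determinant
    with exponents (0, j, k, l) in the four 'variables' (1, X1, X2, X3)."""
    exps = (0,) + tuple(jkl)
    rows = (1,) + tuple(xs)
    return det([[x ** e for e in exps] for x in rows])
```

For exponents $(1,2,3)$ this reduces to the classical Vandermonde determinant of the four values $(1,X_1,X_2,X_3)$, whose value is the product of all pairwise differences.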
\medskip In particular, let us consider the generalized Vandermonde matrices which we get by taking those minors where the first three columns correspond to the left end-points of the set $\Omega$ and the $4$th column corresponds to one of the right end-points, i.e., a minor obtained by choosing the $4$th, $5$th, and $6$th columns of the matrix $A$ and one of the first three columns. Thus we consider $R_{(i_5,i_6,i_1)}(X_1,X_2,X_3)$, $R_{(i_5,i_6,i_2)}(X_1,X_2,X_3),$ and $R_{(i_5,i_6,i_3)}(X_1,X_2,X_3).$ \medskip In \cite{DZ} (Theorem 3.1) it is proved that the polynomials $$ T_{(j,k,l)}(X_1,X_2,X_3) =\frac{R_{(j,k,l)}(X_1,X_2,X_3)}{V(X_1^g,X_2^g,X_3^g)}$$ are either irreducible or constant. Here $g = gcd(j,k,l)$ and $V$ denotes the standard Vandermonde determinant, thus $V(X_1^g,X_2^g,X_3^g)= R_{(1,2,3)}(X_1^g,X_2^g,X_3^g)$. \medskip Consider next the Schur polynomials given by $$S_{(j,k,l)}(X_1,X_2,X_3)=\frac{R_{(j,k,l)}(X_1,X_2,X_3)}{V(X_1,X_2,X_3)}.$$ Let $g_1=gcd(i_5,i_6,i_1)$, $g_2=gcd(i_5,i_6,i_2)$ and $g_3=gcd(i_5,i_6,i_3)$. We know that $gcd(g_1,g_2,g_3) =1$ by our choice of $d$. Theorem 4.1 in \cite{DZ} regarding the intersection of Fermat hypersurfaces seems to suggest that there cannot be many solutions. \medskip In the particular case when $gcd(g_1,g_2)=1$, the analysis in \cite{CL} tells us that $S_{(i_5,i_6,i_1)}$ and $S_{(i_5,i_6,i_2)}$ are coprime, and the hypersurfaces defined by $S_{(i_5,i_6,i_1)}=0$ and $S_{(i_5,i_6,i_2)}=0$ in $\mathbb C^3$ have distinct reduced irreducible components of dimension 2. Then their intersection $W$ has dimension 1. (Note that with respect to the setting of \cite{CL}, we have fixed the first coordinate, hence we get one dimension less.) \medskip In our case we need only those solutions such that $|X_j| = 1, \, \forall j $. In other words we need the set $W\cap \mathbb T^3$. This condition in itself is very restrictive.
In the previous sections, where we used the two-circles argument along with mutual orthogonality, we saw that this set can be finite. If an analysis as in \cite{CL} can be carried through to show that $W\cap \mathbb T^3$ is indeed finite, we immediately get a bound on the period $d$. Then, along with orthogonality, one may be able to resolve the remaining case of three intervals.
https://arxiv.org/abs/math/0603665
The minimum degree threshold for perfect graph packings
Let H be any graph. We determine (up to an additive constant) the minimum degree of a graph G which ensures that G has a perfect H-packing (also called an H-factor). More precisely, let delta(H,n) denote the smallest integer t such that every graph G whose order n is divisible by |H| and with delta(G) > t contains a perfect H-packing. We show that delta(H,n) = (1-1/\chi*(H))n+O(1). The value of chi*(H) depends on the relative sizes of the colour classes in the optimal colourings of H and satisfies k-1 < chi*(H) \le k, where k is the chromatic number of H.
\section{Introduction}\label{intro} \subsection{Background} Given two graphs $H$ and $G$, an \emph{$H$-packing in $G$} is a collection of vertex-disjoint copies of $H$ in $G$. $H$-packings are natural generalizations of graph matchings (which correspond to the case when $H$ consists of a single edge). An $H$-packing in $G$ is called \emph{perfect} if it covers all vertices of $G$. In this case, we also say that $G$ contains an \emph{$H$-factor} or a \emph{perfect $H$-matching}. If $H$ has a component which contains at least~3 vertices then the question whether $G$ has a perfect $H$-packing is difficult from both a structural and algorithmic point of view: Tutte's theorem characterizes those graphs which have a perfect $H$-packing if $H$ is an edge but for other graphs~$H$ no such characterization exists. Moreover, Hell and Kirkpatrick~\cite{HKsiam} showed that the decision problem whether a graph $G$ has a perfect $H$-packing is NP-complete if and only if $H$ has a component which contains at least~3 vertices. They were motivated by questions arising in timetabling (see~\cite{HK78}). This leads to the search for simple sufficient conditions which ensure the existence of a perfect $H$-packing. A fundamental result of this kind is the theorem of Hajnal and Szemer\'edi~\cite{HSz} which states that every graph~$G$ whose order~$n$ is divisible by~$r$ and whose minimum degree is at least $(1-1/r)n$ contains a perfect $K_r$-packing. The minimum degree condition is easily seen to be best possible. (The case when $r=3$ was proved earlier by Corr\'adi and Hajnal~\cite{CH}.) The following result is a generalization of this to arbitrary graphs $H$. \begin{thm}\label{KSS}{\bf [Koml\'os, S\'ark\"ozy and Szemer\'edi~\cite{KSSz01}]} For every graph $H$ there exists a constant $C=C(H)$ such that every graph $G$ whose order $n$ is divisible by $|H|$ and whose minimum degree is at least $(1-1/\chi(H))n+C$ contains a perfect $H$-packing. 
\end{thm} This confirmed a conjecture of Alon and Yuster~\cite{AY96}, who had obtained the above result with an additional error term of~${\varepsilon} n$ in the minimum degree condition. As observed in~\cite{AY96}, there are graphs $H$ for which the above constant~$C$ cannot be omitted completely. Thus one might think that this settles the question of which minimum degree guarantees a perfect $H$-packing. However, there are graphs $H$ for which the bound on the minimum degree can be improved significantly: Kawarabayashi~\cite{KK} conjectured that if $H=K_\ell^-$ (i.e.~a complete graph with one edge removed) and $\ell \ge 4$ then one can replace the chromatic number with the critical chromatic number in Theorem~\ref{KSS} and take~$C=0$. He~\cite{KK} proved the case $\ell=4$ and together with Cooley, we proved the general case for all graphs whose order $n$ is sufficiently large~\cite{KOKlminus}. Here the \emph{critical chromatic number} $\chi_{cr}(H)$ of a graph $H$ is defined as $(\chi(H)-1)|H|/(|H|-\sigma(H))$, where $\sigma(H)$ denotes the minimum size of the smallest colour class in a colouring of $H$ with $\chi(H)$ colours. Note that $\chi_{cr}(H)$ always satisfies $\chi(H)-1 < \chi_{cr}(H) \le \chi(H)$ and equals $\chi(H)$ if and only if for every colouring of $H$ with $\chi(H)$ colours all the colour classes have equal size. The critical chromatic number was introduced by Koml\'os~\cite{JKtiling}. He (and independently Alon and Fischer~\cite{AF99}) observed that for \emph{any} graph~$H$ it gives a lower bound on the minimum degree that guarantees a perfect $H$-packing.% \COMMENT{This is true for \emph{every} integer $n$ that is divisible by $|H|$. Indeed, let $\ell:=\chi(H)$ and $\sigma=\sigma(H)$. Let $k\in\mathbb{N}$ and let $G$ be the complete $\ell$-partite graph of order $k|H|$ whose smallest vertex class has size $\frac{\sigma}{|H|}k|H|-1=\sigma k-1$ and whose other vertex class sizes are as equal as possible. 
Then $G$ doesn't contain a perfect $H$-packing and $$\delta(G)=k|H|-\text{largest vx class}= k|H|-\lceil \frac{k|H|-\sigma k+1}{\ell-1}\rceil.$$ But $|H|-\sigma=|H|(1-\xi/(\ell-1+\xi))$ since $(\ell-1)\sigma=z_1=\xi z=\xi(|H|-\sigma)$. So \begin{align*} \delta(G) =k|H|-\left\lceil \frac{k|H|\left(\frac{\ell-1}{\ell-1+\xi}\right)+1}{\ell-1}\right\rceil \ge k|H|-\frac{k|H|\left(\frac{\ell-1}{\ell-1+\xi}\right)+1}{\ell-1}-\frac{\ell-2}{\ell-1} =k|H|\left(1-\frac{1}{\ell-1+\xi}\right)-1. \end{align*} (To see the inequality use that $k|H|\frac{\ell-1}{\ell-1+\xi}$ is an integer.)} \begin{prop}\label{propKomlos} For every graph $H$ and every integer $n$ that is divisible by $|H|$ there exists a graph $G$ of order $n$ and minimum degree $\lceil(1-1/\chi_{cr}(H))n\rceil-1$ which does not contain a perfect $H$-packing. \end{prop} Koml\'os also showed that the critical chromatic number is the parameter which governs the existence of \emph{almost} perfect packings in graphs of large minimum degree. \begin{thm}\label{thmKomlos}{\bf [Koml\'os~\cite{JKtiling}]} For every graph $H$ and every $\gamma>0$ there exists an integer $n_1=n_1(\gamma,H)$ such that every graph $G$ of order $n\ge n_1$ and minimum degree at least $(1-1/\chi_{cr}(H))n$ contains an $H$-packing which covers all but at most $\gamma n$ vertices of~$G$. \end{thm} Confirming a conjecture of Koml\'os~\cite{JKtiling}, Shokoufandeh and Zhao~\cite{SZ} proved that the number of uncovered vertices can be reduced to a constant depending only on~$H$. \subsection{Main result} Our main result is that for any graph~$H$, either its critical chromatic number or its chromatic number is the relevant parameter which governs the existence of perfect packings in graphs of large minimum degree. The exact classification depends on a parameter which we call the highest common factor of $H$ and which is defined as follows. We say that a colouring of $H$ is \emph{optimal} if it uses exactly $\chi(H)=:\ell$ colours. 
Given an optimal colouring $c$, let $x_1\le x_2\le \dots\le x_{\ell}$ denote the sizes of the colour classes of~$c$. Put $\mathcal{D}(c):= \{x_{i+1}-x_i\,|\, i=1,\dots,\ell-1\}.$ Let $\mathcal{D}(H)$ denote the union of all the sets $\mathcal{D}(c)$ taken over all optimal colouring $c$. We denote by ${\rm hcf}_\chi(H)$ the highest common factor of all integers in $\mathcal{D}(H)$. (If $\mathcal{D}(H)=\{0\}$ we set ${\rm hcf}_\chi(H):=\infty$.) We write ${\rm hcf}_c(H)$ for the highest common factor of all the orders of components of~$H$. If $\chi(H)\neq 2$ we say that ${\rm hcf}(H)=1$ if ${\rm hcf}_\chi(H)=1$. If $\chi(H)=2$ then we say that ${\rm hcf}(H)=1$ if both ${\rm hcf}_c(H)=1$ and ${\rm hcf}_\chi(H)\le 2$. The following table gives some examples: $$ \begin{array}{l|l|l|l|l|l} H & \chi(H) & \chi_{cr}(H) & {\rm hcf}_\chi(H) & {\rm hcf}_c(H) & {\rm hcf}(H) \\ \hline C_{2k+1} \ (k\ge 2) & 3 & 2+1/k & 1 & - & 1 \\ C_{2k} & 2 & 2 & \infty & 2k & \neq 1 \\ K_{1,2}\cup C_6 & 2 & 9/5 & 1 & 3 & \neq 1 \\ K_{1,4}\cup C_4 & 2 & 3/2 & 3 & 1 & \neq 1 \\ K_{1,2}\cup K_{1,4} & 2 & 4/3 & 2 & 1 & 1 \end{array} $$ Note that if all the optimal colourings of $H$ have the property that all colour classes have equal size, then $\mathcal{D}(H)=\{0\}$ and so ${\rm hcf}(H) \neq 1$ in this case. So if $\chi_{cr}(H)=\chi(H)$, then ${\rm hcf}(H) \neq 1$. Moreover, it is easy to see that there are graphs $H$ with ${\rm hcf}_\chi(H)=1$ but such that for all optimal colourings $c$ of $H$ the highest common factor of all integers in $\mathcal{D}(c)$ is strictly bigger than one (for example, take~$H$ to be the graph obtained from~$K_{1,4,6}$ by adding a new vertex and joining it to all the vertices in the vertex class of size~4).% \COMMENT{Indeed, take for $H$ the graph obtained from $K_{1,4,6}$ by adding one vertex joined to the vertex class of size~4. Then the optimal colourings of $H$ have vertex classes of size $2,4,6$ and $1,4,7$ respectively. 
The hcf of the 1st colouring is 2 and the hcf of the 2nd one is 3.} Thus for such graphs~$H$ we do need to consider all optimal colourings of $H$. As indicated above, our main result is that in Theorem~\ref{KSS} one can replace the chromatic number by the critical chromatic number if ${\rm hcf}(H)=1$. \begin{thm}\label{thmmain} Suppose that $H$ is a graph with ${\rm hcf}(H)=1$. Then there exists a constant $C=C(H)$ such that every graph $G$ whose order $n$ is divisible by~$|H|$ and whose minimum degree is at least $(1-1/\chi_{cr}(H))n+C$ contains a perfect $H$-packing. \end{thm} Note that Proposition~\ref{propKomlos} shows the result is best possible up to the value of the constant~$C$. A simple modification of the examples in~\cite{AF99,JKtiling} shows that there are graphs~$H$ for which the constant $C$ cannot be omitted entirely.% \COMMENT{Indeed, let $H=K_{s,s,s-1}$ where $s\ge 6$. So ${\rm hcf}(H)=1$. Let $G$ be complete tripartite graph with smallest vertex class size $\sigma k-1$ and other vertex class sizes as equal as possible (i.e. the graph from the first comment). Also add an $(s-1)$-factor of large girth into each of the vertex classes. We claim that $G$ doesn't contain a perfect $H$-packing. Indeed, suppose that $G$ has a perfect $H$-packing. Let $A$ be one of the large vertex classes of~$G$. Then (wlog) some copy of $H$ in the $H$-packing must have at least $s+1$ vertices in~$A$. Let $c_i$ denote the size of the intersection of the $i$th colour class of $H$ with~$A$. Wlog $c_1\le c_2\le c_3$. Note that $c_3\le s-1$ as $G[A]$ is $(s-1)$-regular. Thus either $c_1,c_2\ge 1$ or $c_2\ge 2$. In both cases we must have a 4-cycle in $G[A]$, a contradiction.} Moreover, it turns out that Theorem~\ref{KSS} is already best possible up to the value of the constant~$C$ if ${\rm hcf}(H)\neq 1$ (see Propositions~\ref{bestposs1} and~\ref{bestposs2} for the details). 
In~\cite{KOSODA} we sketched a simpler argument that yields a weaker result than Theorem~\ref{thmmain}: there $H$ had to be either connected or non-bipartite and we needed an additional error term of~${\varepsilon} n$ in the minimum degree condition. If we combine Theorems~\ref{KSS} and~\ref{thmmain} together with Propositions~\ref{propKomlos},~\ref{bestposs1} and~\ref{bestposs2} we obtain the statement indicated in the abstract. Let $$\chi^*(H):= \begin{cases} \chi_{cr}(H) &\text{ if ${\rm hcf} (H)=1$};\\ \chi(H) &\text{ otherwise}. \end{cases} $$ Also let $\delta(H,n)$ denote the smallest integer $k$ such that every graph $G$ whose order $n$ is divisible by~$|H|$ and with $\delta(G)\ge k$ contains a perfect $H$-packing. \begin{thm}\label{thmmaingeneral} For every graph $H$ there exists a constant $C=C(H)$ such that $$\left( 1-\frac{1}{\chi^*(H)} \right)n-1\le \delta(H,n) \le \left(1-\frac{1}{\chi^*(H)} \right)n+C.$$ \end{thm} \noindent (The $-1$ on the left hand side can be omitted if ${\rm hcf}(H)= 1$ or if $\chi(H)\ge 3$.) Thus for perfect packings in graphs of large minimum degree, the parameter $\chi^*(H)$ is the relevant parameter, whereas for almost perfect packings it is $\chi_{cr}(H)$. Note that while the definition of the parameter $\chi^*$ is somewhat complicated, the form of Theorem~\ref{thmmaingeneral} is exactly analogous to that of the Erd\H{o}s-Stone theorem (see e.g.~\cite[Thm~7.1.2]{Diestel} or~\cite[Ch.~IV, Thm.~20]{BGraphTh}), which implies that \begin{equation}\label{eqErdosStone} ex(H,n)=\left(1-\frac{1}{\chi(H)-1} +o(1) \right)n, \end{equation} where $ex(H,n)$ denotes the smallest number~$k$ such that every graph $G$ of order $n$ and average degree~$>k$ contains a copy of~$H$. \subsection{Open problems} Our constant~$C$ appearing in Theorems~\ref{thmmain} and~\ref{thmmaingeneral} is rather large since it is related to the number of partition classes (clusters) obtained by the Regularity lemma. 
It would be interesting to know whether one can take e.g.~$C= |H|$. Another open problem is to characterize all those graphs~$H$ for which $\delta(H,n)=\lceil(1-1/\chi^*(H))n\rceil$. This is known to be the case e.g.~for complete graphs~\cite{HSz} and, if $n$ is large, for cycles~\cite{Abbasi} and for the case when $H=K_\ell^-$~\cite{KOKlminus}. Further observations on this problem can be found in~\cite{KOKlminus} and~\cite{MPhil}. \subsection{Algorithmic aspects} Kann~\cite{Kann94} showed that the optimization problem of finding a maximum $H$-packing is APX-complete if $H$ is connected and $|H| \ge 3$ (i.e.~it is not possible to approximate the optimum solution within an arbitrary factor unless P=NP). For such $H$ and any $\gamma>0$, we gave a polynomial time algorithm in~\cite{KOSODA} which finds a perfect $H$-packing if $\delta(G) \ge (1-1/\chi^*(H) +\gamma )n$. Also note that Theorem~\ref{thmmain} immediately implies that the decision problem whether a graph $G$ has a perfect $H$-packing is trivially solvable in polynomial time in this case. On the other hand, in~\cite{KOSODA} we showed that for many graphs $H$, the problem becomes NP-complete when the input graphs are all those graphs $G$ with minimum degree at least $(1-1/\chi^*(H)-\gamma)|G|$, where $\gamma>0$ is arbitrary. We were able to show this if $H$ is complete or $H$ is a complete $\ell$-partite graph where all colour classes contain at least two vertices. It would certainly be interesting to know whether this extends to all graphs $H$ which have a component with at least three vertices. \subsection{Organization of the paper} In the next section we introduce some basic definitions and then describe the extremal examples which show that our main result is best possible. In Section~\ref{sec:RLandBL} we then state the Regularity lemma of Szemer\'edi and the Blow-up lemma of Koml\'os, S\'ark\"ozy and Szemer\'edi. 
In Section~\ref{sec:overview} we give a rough outline of the structure of the proof and state some of the main lemmas. In Section~\ref{sec:nonextremal} we consider the case where $G$ is not similar to an extremal graph. In Section~\ref{sec:complete} we investigate perfect $H$-packings in complete $\ell$-partite graphs (these results are needed when we apply the Blow-up lemma in Sections~\ref{sec:nonextremal} and~\ref{sec:extremal}). In Section~\ref{sec:extremal} we consider the case where $G$ is similar to an extremal graph. Finally, we combine the results of the previous sections in Section~\ref{sec:thmproof} to prove Theorem~\ref{thmmain}. Our proof of Theorem~\ref{thmmain} uses ideas from~\cite{KSSz01}. \section{Notation and extremal examples}\label{sec:tools} Throughout this paper we omit floors and ceilings whenever this does not affect the argument. We write $e(G)$ for the number of edges of a graph $G$, $|G|$ for its order, $\delta(G)$ for its minimum degree, $\Delta(G)$ for its maximum degree, $\chi(G)$ for its chromatic number and $\chi_{cr}(G)$ for its critical chromatic number as defined in Section~\ref{intro}. We denote the degree of a vertex $x\in G$ by $d_G(x)$ and its neighbourhood by $N_G(x)$. Given a set $A\subseteq V(G)$, we write $e(A)$ for the number of edges in~$A$ and define the density of~$A$ by $d(A):=e(A)/\binom{|A|}{2}$. Given disjoint $A,B\subseteq V(G)$, an \emph{$A$--$B$ edge} is an edge of $G$ with one endvertex in $A$ and the other in $B$; the number of these edges is denoted by $e_G(A,B)$ or $e(A,B)$ if this is unambiguous. We write $(A,B)_G$ for the bipartite subgraph of $G$ whose vertex classes are $A$ and $B$ and whose edges are all $A$--$B$ edges in~$G$. More generally, we write $(A,B)$ for a bipartite graph with vertex classes $A$ and~$B$. The following two propositions together show that if ${\rm hcf}(H)\neq 1$ then Theorem~\ref{KSS} is best possible up to the value of the constant~$C$. 
Thus in this case the chromatic number of~$H$ is the relevant parameter which governs the existence of perfect $H$-packings in graphs of large minimum degree. The first proposition deals with the case when $\chi(H)\ge 3$ as well as the case when $\chi(H)=2$ and ${\rm hcf}_\chi(H)\ge 3$. \begin{prop}\label{bestposs1} Let $H$ be a graph with $2\le\chi(H)=:\ell$ and let $k\in\mathbb{N}$. Let $G_1$ be the complete $\ell$-partite graph of order $k|H|$ whose vertex classes $U_1,\dots,U_\ell$ satisfy $|U_1|=\lfloor k|H|/\ell\rfloor+1$, $|U_2|= \lceil k|H|/\ell\rceil-1$ and $\lfloor k|H|/\ell\rfloor\le |U_i|\le \lceil k|H|/\ell\rceil$ for all $i\ge 3$. (So $\delta(G_1)=\lceil(1-1/\chi(H))|G_1|\rceil-1$.) If $\ell\ge 3$ and ${\rm hcf}_\chi(H)\neq 1$ or if $\ell=2$ and ${\rm hcf}_\chi(H)\ge 3$ then $G_1$ does not contain a perfect $H$-packing. \end{prop} \removelastskip\penalty55\medskip\noindent{\bf Proof. } Suppose first that $\ell\ge 3$ and that ${\rm hcf}_\chi(H)$ is finite. In this case there are vertex classes $U_{i_1}$ and $U_{i_2}$ such that $|U_{i_1}|-|U_{i_2}|=1$ (note that these are not necessarily $U_1$ and $U_2$). Consider any $H$-packing in $G_1$ consisting of $H_1,\dots,H_r$ say. By induction one can show that $|U_{i_1}\setminus(H_1\cup \dots\cup H_r)|- |U_{i_2}\setminus(H_1\cup \dots\cup H_r)|\equiv 1\mod {\rm hcf}_\chi(H)$. (Indeed, the partition of each $H_s$ induced by the vertex classes of $G_1$ is an optimal colouring of $H_s$, so $|U_{i_1}\cap V(H_s)|-|U_{i_2}\cap V(H_s)|$ is a difference of two colour class sizes in an optimal colouring of $H$ and thus divisible by ${\rm hcf}_\chi(H)$.) But as ${\rm hcf}_\chi(H)\neq 1$ this implies that at least one of $U_{i_1}\setminus(H_1\cup \dots\cup H_r)$ and $U_{i_2}\setminus(H_1\cup \dots\cup H_r)$ has to be non-empty. Thus the $H$-packing $H_1,\dots,H_r$ cannot be perfect. If $\ell\ge 3$ but ${\rm hcf}_\chi(H)=\infty$ then the colour classes of~$H$ have the same size and thus~$G_1$ also works. The case when $\ell=2$ is similar except that we have to work with $U_1$ and $U_2$ and can only assume that $|U_1|-|U_2|\in\{1,2\}$. \noproof\bigskip In the next proposition we consider the case when $\chi(H)=2$ and ${\rm hcf}_c(H)\neq 1$. 
We omit its proof as it is similar to the proof of Proposition~\ref{bestposs1}.% \COMMENT{Indeed, let $A$ and $B$ denote the vertex classes of~$G_2$ such that $|A|\ge |B|$. Suppose first that ${\rm hcf}_c(H)=2$ and $k|H|$ is not divisible by~$4$. Note that $|H|$ is even since ${\rm hcf}_c(H)=2$. Thus $k|H|$ is divisible by $2$ (so the def of $G_2$ makes sense) and $k|H|/2$ is an odd number. Now consider any $H$-packing in $G_2$ consisting of $H_1,\dots,H_r$ say. By induction one can show that $|A\setminus(H_1\cup \dots\cup H_r)|\equiv 1\mod {\rm hcf}_c(H)$. Thus $A\setminus(H_1\cup \dots\cup H_r)$ is non-empty and so $H_1,\dots,H_r$ cannot be perfect. Next consider the case when $k|H|$ is not divisible by~$2$. Then $|A|=|B|+1$. So at least one of $|A|,|B|$ is not divisible by~${\rm hcf}_c(H)$. Suppose that this is true for~$A$. Then the above argument works. Finally, consider the case when $k|H|$ is divisible by~$2$. Suppose first that ${\rm hcf}_c(H)\ge 3$. Then we can argue as before since at least one of $|A|, |B|$ is not divisible by~${\rm hcf}_c(H)$. So suppose that ${\rm hcf}_c(H)=2$ and (as we are not in the first case) $k|H|$ is divisible by~$4$. Then $|A|$ is odd and our argument works again.} \begin{prop}\label{bestposs2} Let $H$ be a bipartite graph with ${\rm hcf}_c(H)\neq 1$ and let $k\in\mathbb{N}$. If ${\rm hcf}_c(H)=2$ and $k|H|$ is not divisible by~$4$ let $G_2$ be the disjoint union of two cliques of order~$k|H|/2$. Otherwise let $G_2$ be the disjoint union of two cliques of orders~$\lfloor k|H|/2\rfloor+1$ and~$\lceil k|H|/2\rceil-1$. (So $\delta(G_2)\ge (1-1/\chi(H))|G_2|-2$.) $G_2$ does not contain a perfect $H$-packing.% \noproof \end{prop} The following corollary gives a characterization of those graphs with ${\rm hcf}(H)=1$. It follows immediately from Propositions~\ref{bestposs1} and~\ref{bestposs2} as well as Lemmas~\ref{completemove}--\ref{completebipmove} in Section~\ref{sec:complete}. 
We will not need it in the proof of Theorem~\ref{thmmain} but state it as the characterization may be of independent interest. \begin{cor} \label{equivalence} Let $H$ be a graph with $2\le \chi(H)=:\ell$. Let $k'\gg |H|$ be an integer and let $G_1$ and $G_2$ be the graphs defined in Propositions~\ref{bestposs1} and~\ref{bestposs2} for $k:=\ell k'$. If $\chi(H)\ge 3$ then $G_1$ contains a perfect $H$-packing if and only if ${\rm hcf}(H)=1$. Similarly, if $\chi(H)=2$ then both $G_1$ and $G_2$ contain a perfect $H$-packing if and only if ${\rm hcf}(H)=1$.% \noproof \end{cor} \section{The Regularity lemma and the Blow-up lemma}\label{sec:RLandBL} The purpose of this section is to collect all the information we need about the Regularity lemma and the Blow-up lemma. See~\cite{KSi} and~\cite{JKblowup} for surveys about these. Let us start with some more notation. The \emph{density} of a bipartite graph $G=(A,B)$ is defined to be $$d_G(A,B):=\frac{e_G(A,B)}{|A||B|}.$$ We also write $d(A,B)$ if this is unambiguous. Given ${\varepsilon}>0$, we say that $G$ is \emph{${\varepsilon}$-regular} if for all sets $X\subseteq A$ and $Y\subseteq B$ with $|X|\ge {\varepsilon} |A|$ and $|Y|\ge {\varepsilon} |B|$ we have $|d(A,B)-d(X,Y)|<{\varepsilon}$. Given $d\in[0,1]$, we say that $G$ is \emph{$({\varepsilon},d)$-superregular} if all sets $X\subseteq A$ and $Y\subseteq B$ with $|X|\ge {\varepsilon} |A|$ and $|Y|\ge {\varepsilon} |B|$ satisfy $d(X,Y)>d$ and, furthermore, if $d_G(a)>d|B|$ for all $a\in A$ and $d_G(b)> d|A|$ for all $b\in B$. We will use the following degree form of Szemer\'edi's Regularity lemma which can be easily derived from the classical version. Proofs of the latter are for example included in~\cite{BGraphTh} and~\cite{Diestel}. 
\begin{lemma}[Regularity lemma]\label{deg-reglemma} For all ${\varepsilon}>0$ and all integers $k_0$ there is an $N=N({\varepsilon},k_0)$ such that for every number $d\in [0,1]$ and for every graph $G$ on at least $N$ vertices there exist a partition of $V(G)$ into $V_0,V_1,\dots,V_k$ and a spanning subgraph $G'$ of $G$ such that the following holds: \begin{itemize} \item $k_0\le k\le N$, \item $|V_0|\le {\varepsilon} |G|$, \item $|V_1|=\dots=|V_k|=:L$, \item $d_{G'}(x)>d_G(x)-(d+{\varepsilon})|G|$ for all vertices $x\in G$, \item for all $i\ge 1$ the graph $G'[V_i]$ is empty, \item for all $1\le i<j\le k$ the graph $(V_i,V_j)_{G'}$ is ${\varepsilon}$-regular and has density either $0$ or $>d$. \end{itemize} \end{lemma} The sets $V_i$ ($i\ge 1$) are called \emph{clusters}, $V_0$ is called the \emph{exceptional set}. Given clusters and $G'$ as in Lemma~\ref{deg-reglemma}, the \emph{reduced graph} $R$ is the graph whose vertices are $V_1,\dots,V_k$ and in which $V_i$ is joined to $V_j$ whenever $(V_i,V_j)_{G'}$ is ${\varepsilon}$-regular and has density $>d$. Thus $V_iV_j$ is an edge of $R$ if and only if $G'$ has an edge between $V_i$ and $V_j$. Given a set $A\subseteq V(R)$, we call the set of all those vertices of $G$ which are contained in clusters belonging to $A$ the \emph{blow-up of $A$}. Similarly, if $R'$ is a subgraph of $R$, then the \emph{blow-up of $R'$} is the subgraph of $G'$ induced by the blow-up of~$V(R')$. We will also use the Blow-up lemma of Koml\'os, S\'ark\"ozy and Szemer\'edi~\cite{KSSblowup}. It implies that dense regular pairs behave like complete bipartite graphs with respect to containing bounded degree graphs as subgraphs. \begin{lemma}[Blow-up lemma]\label{blowup} Given a graph $F$ on $\{1,\dots,f\}$ and positive numbers $d,\Delta$, there is a positive number ${\varepsilon}_0={\varepsilon}_0(d,\Delta,f)$ such that the following holds. 
Given $L_1,\dots,L_f\in \mathbb{N}$ and ${\varepsilon}\le {\varepsilon}_0$, let $F^*$ be the graph obtained from $F$ by replacing each vertex $i\in F$ with a set $V_i$ of $L_i$ new vertices and joining all vertices in $V_i$ to all vertices in $V_j$ whenever $ij$ is an edge of $F$. Let $G$ be a spanning subgraph of $F^*$ such that for every edge $ij\in F$ the graph $(V_i,V_j)_G$ is $({\varepsilon},d)$-superregular. Then $G$ contains a copy of every subgraph $H$ of $F^*$ with $\Delta(H)\le \Delta$. \end{lemma} \section{Preliminaries and overview of the proof}\label{sec:overview} Let $H$ be a graph of chromatic number $\ell\ge 2$. Put \begin{equation}\label{eqdefxi} z_1:=(\ell-1)\sigma(H),\ \ z:=|H|-\sigma(H),\ \ \xi:=\frac{z_1}{z}=\frac{(\ell-1)\sigma(H)}{|H|-\sigma(H)}. \end{equation} (Recall that $\sigma(H)$ is the minimum size of a colour class taken over all $\ell$-colourings of $H$.) Note that $\xi<1$ if $\chi_{cr}(H)<\chi(H)$ and so in particular if ${\rm hcf}(H)=1$. Let $B^*$ denote the complete $\ell$-partite graph with one vertex class of size $z_1$ and $\ell-1$ vertex classes of size $z$. Note that $B^*$ has a perfect $H$-packing consisting of $\ell-1$ copies of $H$. Moreover, it is easy to check that \begin{equation}\label{eqchicr} \chi_{cr}(H)=\chi_{cr}(B^*)=\ell-1+\xi. \end{equation} Call $B^*$ the \emph{bottlegraph assigned to~$H$}. We now give an overview of the proof of Theorem~\ref{thmmain}. In Section~\ref{sec:applyRG} we first apply the Regularity lemma to $G$ in order to obtain a set $V_0$ of exceptional vertices and a reduced graph~$R$. It will turn out that the minimum degree of $R$ is almost $(1-1/\chi_{cr}(B^*))|R|$. So $R$ has an almost perfect $B^*$-packing $\mathcal{B}'$ by Theorem~\ref{thmKomlos}. Let $B_1,\dots,B_{k'}$ denote the copies of $B^*$ in~$\mathcal{B}'$. 
Our aim in Sections~\ref{sec:adjust} and~\ref{sec:div} is to show that one can take out a small number of suitably chosen copies of $B^*$ from $G$ to achieve that the following conditions hold: \begin{itemize} \item[($\alpha$)] Each vertex in $V_0$ lies in one of these copies of $B^*$ taken out from~$G$. Moreover, each vertex that does not belong to a blow-up of some $B_t\in \mathcal{B}'$ also lies in one of these copies of~$B^*$. \item[($\beta$)] The (modified) blow-up of each $B_t\in \mathcal{B}'$ has a perfect $H$-packing. \end{itemize} Note that if we say that we take out a copy of $B^*$ (or $H$) from $G$ then we mean that we delete all its vertices from $G$ and thus also from the clusters they belong to. So in particular the (modified) blow-up of $B_t$ no longer contains these vertices. For all $t\le k'$ and all $j\le \ell$ let $X_j(t)$ denote the (modified) blow-up of the $j$th vertex class of $B_t\in \mathcal{B}'$ (where the $\ell$th vertex class of $B_t$ is the small one). It will turn out that~$(\beta)$ holds if the $X_j(t)$ satisfy the conditions in the following definition. (The graph $G'\subseteq G$ obtained from the Regularity lemma will play the role of $G^*$ in Definition~\ref{defblownupcover}. So it will be easy to satisfy condition~(a) of Definition~\ref{defblownupcover}.) \begin{defin}\label{defblownupcover} {\rm Suppose that $G$ is a graph whose order~$n$ is divisible by~$|B^*|$. Let $k\ge 1$ be an integer and let ${\varepsilon}\ll d\ll\beta\ll 1$ be positive constants. We say that \emph{$G$ has a blown-up $B^*$-cover for parameters ${\varepsilon},d,\beta,k$} if there exists a spanning subgraph $G^*$ of $G$ and a partition $X_1(1),\dots,X_\ell(1),\dots,X_1(k),\dots,X_\ell(k)$ of the vertex set of $G$ such that the following holds: \begin{itemize} \item[(a)] All the bipartite subgraphs $(X_j(t),X_{j'}(t))_{G^*}$ of $G^*$ between $X_j(t)$ and $X_{j'}(t)$ are $({\varepsilon},d)$-superregular whenever $j\neq j'$. 
\item[(b)] $|X_1(t)\cup\dots\cup X_\ell(t)|$ is divisible by~$|B^*|$ for all $t\le k$. \item[(c)] $(1-\beta^{1/10})|X_\ell(t)|\le \xi |X_j(t)|\le (1-\beta)|X_\ell(t)|$ for all $j<\ell$ and all $t\le k$ and $|\, |X_j(t)|-|X_{j'}(t)|\,|\le d|X_1(t)\cup\dots\cup X_\ell(t)|$ for all $1\le j<j'<\ell$ and all $t\le k$. \end{itemize}} \end{defin} For all $t \le k$, we call the $\ell$-partite subgraph of $G^*$ whose vertex classes are the sets $X_1(t),\dots,X_\ell(t)$ the \emph{$t$'th element} of the blown-up $B^*$-cover. The \emph{complete $\ell$-partite graph corresponding to the $t$'th element} is the one whose vertex classes have sizes $|X_1(t)|,\dots,|X_\ell(t)|$. Note that condition~(c) implies that for all $j<\ell$ the ratio of $|X_j(t)|$ to the total size of the $t$th element is a little smaller than $z/|B^*|$ (recall that $z$ is the size of the large vertex classes of~$B^*$). The following lemma implies that the complete $\ell$-partite graph corresponding to some element of a blown-up $B^*$-cover contains a perfect $H$-packing. Combined with the Blow-up lemma, this will imply that each element of the blown-up $B^*$-cover has a perfect $H$-packing. (Thus~$(\beta)$ will be satisfied if the $X_j(t)$ are as in Definition~\ref{defblownupcover}.) \begin{lemma}\label{corcomplete1} Let $H$ be a graph with $\ell:=\chi(H)\ge 2$ and ${\rm hcf}(H)=1$. Let $\xi$ be as defined in~$(\ref{eqdefxi})$. Let $0<d\ll\beta\ll \xi,1-\xi,1/|H|$ be positive constants. Suppose that $F$ is a complete $\ell$-partite graph with vertex classes $U_1,\dots,U_\ell$ such that $|F|\gg |H|$ is divisible by $|H|$, $(1-\beta^{1/10})|U_\ell|\le \xi |U_i|\le (1-\beta)|U_\ell|$ for all $i<\ell$ and such that $|\, |U_i|-|U_j|\,|\le d|F|$ whenever $1\le i<j<\ell$. Then $F$ contains a perfect $H$-packing. 
\end{lemma} Lemma~\ref{corcomplete1} will be proved at the end of Section~\ref{sec:complete}, where we will deduce it from Lemmas~\ref{completeconst} and~\ref{completeapprox}, which are also proved in that section. Lemma~\ref{corcomplete1} is one of the points where the condition that ${\rm hcf}(H)=1$ is necessary. The following lemma shows that we can find a blown-up $B^*$-cover as long as $G$ satisfies certain properties. Roughly speaking these properties~(i) and~(ii) say that $G$ is not too close to being one of the extremal graphs having minimum degree almost $(1-1/\chi_{cr}(H))n$ but not containing a perfect $H$-packing. We will refer to this as the non-extremal case. \begin{lemma}\label{nonextremal} Let $H$ be a graph of chromatic number $\ell\ge 2$ such that $\chi_{cr}(H)<\ell$. Let $B^*$ denote the bottlegraph assigned to~$H$ and let $z$ and $\xi$ be as defined in~$(\ref{eqdefxi})$. Let $$ {\varepsilon}'\ll d'\ll\theta\ll \tau\ll\xi,1-\xi,1/|B^*| $$ be positive constants. There exist integers $n_0$ and $k_1=k_1({\varepsilon}',\theta,B^*)$ such that the following holds. Suppose that $G$ is a graph whose order $n\ge n_0$ is divisible by $|B^*|$ and whose minimum degree satisfies $\delta(G)\ge (1-\frac{1}{\chi_{cr}(H)}-\theta)n$. Furthermore, suppose that $G$ satisfies the following further properties: \begin{itemize} \item[{\rm (i)}] $G$ does not contain a vertex set $A$ of size $z n/|B^*|$ such that $d(A)\le \tau$. \item[{\rm (ii)}] Additionally, if $\ell=2$ then $G$ does not contain a vertex set $A$ such that $d(A,V(G)\setminus A)\le \tau$. \end{itemize} Then there exists a family $\mathcal{B}^*$ of at most $\theta^{1/3}n$ disjoint copies of $B^*$ in $G$ such that the graph $G-\bigcup \mathcal{B}^*$ (which is obtained from $G$ by taking out all the copies of $B^*$ in $\mathcal{B}^*$) has a blown-up $B^*$-cover with parameters~$2{\varepsilon}',d'/2,2\theta,k_1$. \end{lemma} Lemma~\ref{nonextremal} will be proved in Section~\ref{sec:nonextremal}. 
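To illustrate the quantities involved, suppose that $H=C_5$. Then $\ell=3$ and $\sigma(H)=1$, so $z_1=2$, $z=4$ and $\xi=1/2$ by~$(\ref{eqdefxi})$, and the bottlegraph is $B^*=K_{2,4,4}$. By~$(\ref{eqchicr})$ we have $\chi_{cr}(H)=\ell-1+\xi=5/2$, so the minimum degree condition in Lemma~\ref{nonextremal} becomes $\delta(G)\ge (3/5-\theta)n$. Condition~(i) then requires that $G$ contains no vertex set $A$ of size $zn/|B^*|=2n/5$ with $d(A)\le \tau$, and condition~(ii) is vacuous as $\ell\neq 2$. (Note also that $K_{2,4,4}$ indeed has a perfect $C_5$-packing consisting of $\ell-1=2$ copies of $C_5$, each meeting the vertex classes in $1,2,2$ vertices respectively.)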
Lemmas~\ref{corcomplete1} and~\ref{nonextremal} together with the Blow-up lemma imply that in the non-extremal case we can satisfy conditions~($\alpha$) and~($\beta$), i.e.~we have a perfect $H$-packing in this case. This is formalized in the following corollary. \begin{cor}\label{cornonextremal} Let $H$ be a graph of chromatic number $\ell\ge 2$ such that ${\rm hcf}(H)=1$. Let $B^*$ denote the bottlegraph assigned to~$H$ and let $z$ and $\xi$ be as defined in~$(\ref{eqdefxi})$. Let $\theta\ll\tau\ll\xi,1-\xi,1/|B^*|$ be positive constants. There exists an integer $n_0$ such that the following holds. Suppose that $G$ is a graph whose order $n\ge n_0$ is divisible by~$|B^*|$ and whose minimum degree satisfies $\delta(G)\ge (1-\frac{1}{\chi_{cr}(H)}-\theta)n$. Furthermore, suppose that $G$ satisfies the following further properties: \begin{itemize} \item[{\rm (i)}] $G$ does not contain a vertex set $A$ of size $z n/|B^*|$ such that $d(A)\le \tau$. \item[{\rm (ii)}] Additionally, if $\ell=2$ then $G$ does not contain a vertex set $A$ such that $d(A,G-A)\le \tau$. \end{itemize} Then $G$ has a perfect $H$-packing. \end{cor} \removelastskip\penalty55\medskip\noindent{\bf Proof of Corollary~\ref{cornonextremal}. } Fix positive constants ${\varepsilon}'$, $d'$ such that $$ {\varepsilon}'\ll d'\ll\theta\ll \tau\ll \xi,1-\xi,1/|B^*|. $$ An application of Lemma~\ref{nonextremal} shows that by taking out a small number of disjoint copies of $B^*$ from $G$ we obtain a subgraph which has a blown-up $B^*$-cover with parameters $2{\varepsilon}',d'/2,2\theta,k_1$. Conditions~(b) and (c) in Definition~\ref{defblownupcover} imply that the complete $\ell$-partite graphs corresponding to the $k_1$ elements of this blown-up $B^*$-cover satisfy the assumptions of Lemma~\ref{corcomplete1} with $d:=d'/2$ and where $2\theta$ plays the role of~$\beta$. Thus each of these complete $\ell$-partite graphs contains a perfect $H$-packing. 
Condition~(a) in Definition~\ref{defblownupcover} ensures that we can now apply the Blow-up lemma (Lemma~\ref{blowup}) to each of the $k_1$ elements in the blown-up $B^*$-cover to obtain a perfect $H$-packing of this element. All these $H$-packings together with the copies of $B^*$ taken out earlier in order to obtain the blown-up $B^*$-cover yield a perfect $H$-packing of~$G$. \noproof\bigskip The extremal cases (i.e.~where $G$ satisfies either~(i) or~(ii)) will be dealt with in Section~\ref{sec:extremal}. These cases also rely on Lemma~\ref{nonextremal}. For example if $G$ satisfies (i) but $G-A$ does not satisfy (i) or (ii), then very roughly the strategy is to apply Lemma~\ref{nonextremal} to find a perfect $B_1^*$-packing of $G-A$, where $B_1^*$ is obtained from $B^*$ by removing one of the large colour classes. The minimum degree of $G$ will ensure that the bipartite subgraph spanned by $A$ and $V(G)-A$ is almost complete. This will be used to extend the $B_1^*$-packing of $G-A$ to a perfect $B^*$-packing of~$G$. The reason we considered sets $A$ of size $zn/|B^*|$ in~(i) is that this is precisely the number of vertices needed to extend each copy of $B^*_1$ to a copy of~$B^*$. (Recall that $z$ was the size of the large vertex classes of~$B^*$.) However, for this strategy to work, we first need to modify the set $A$ slightly. We will also take out some carefully chosen copies of $H$ from $G$. One matter which complicates the argument is that $B_1^*$ does not necessarily satisfy ${\rm hcf}(B_1^*)=1$. This means that we cannot find a perfect $B_1^*$-packing of $G-A$ by a direct application of Lemma~\ref{nonextremal}, as the blown-up $B_1^*$-cover produced by that lemma does not necessarily yield a perfect $B_1^*$-packing of $G-A$. To overcome this difficulty, we will work directly with the blown-up $B_1^*$-cover. So the use of Lemma~\ref{nonextremal} in Section~\ref{sec:extremal} is the reason why we do not assume ${\rm hcf}(H)=1$ in Lemma~\ref{nonextremal}. 
It is also the reason why we allow for an error term $\theta n$ in the minimum degree condition on $G$. \section{The non-extremal case: proof of Lemma~\ref{nonextremal}}\label{sec:nonextremal} The purpose of this section is to prove Lemma~\ref{nonextremal}. \subsection{Applying the Regularity lemma and choosing a packing of the reduced graph}\label{sec:applyRG} We will fix further constants satisfying the following hierarchy \begin{equation}\label{eqconst} 0< {\varepsilon}\ll {\varepsilon}'\ll d'\ll d\ll \theta\ll\tau\ll\xi, 1-\xi,1/|B^*|. \end{equation} Moreover, we choose an integer $k_0$ such that \begin{equation}\label{eqk0} k_0\ge n_1(\theta,B^*), \end{equation} where $n_1$ is as defined in Theorem~\ref{thmKomlos}. We put \begin{equation}\label{eqdefk1} k_1:=\lfloor N({\varepsilon},k_0)/|B^*|\rfloor, \end{equation} where $N({\varepsilon},k_0)$ is as defined in the Regularity lemma (Lemma~\ref{deg-reglemma}). In what follows, we assume that the order $n$ of our given graph $G$ is sufficiently large for our estimates to hold. We now apply the Regularity lemma with parameters ${\varepsilon}$, $d$ and $k_0$ to $G$ to obtain clusters, an exceptional set $V_0$, a spanning subgraph $G'\subseteq G$ and a reduced graph $R$. (\ref{eqconst}) together with the well-known fact that the minimum degree of~$G$ is almost inherited by its reduced graph (see e.g.~\cite[Prop.~9]{KOTplanar} for an explicit proof) implies that \begin{align}\label{eqminR1} \delta(R) \ge \left(1-\frac{1}{\chi_{cr}(H)}-2\theta\right)|R| \stackrel{(\ref{eqchicr})}{=}\left(1-\frac{1}{\ell-1+\xi}-2\theta\right)|R|. \end{align} Since $|R|\ge k_0\ge n_1(\theta,B^*)$ by~(\ref{eqk0}), we may apply Theorem~\ref{thmKomlos} to $R$ to find a $B^*$-packing $\mathcal{B}'$ which covers all but at most $\sqrt{\theta}|R|$ vertices of~$R$. 
(More precisely, we apply Theorem~\ref{thmKomlos} to a graph $R'$ which is obtained from $R$ by adding at most $\theta^{3/4}|R|$ new vertices and connecting them to all other vertices. By~(\ref{eqchicr}) and~(\ref{eqminR1}), we have $\delta(R')\ge (1-1/\chi_{cr}(B^*))|R'|$, as required in Theorem~\ref{thmKomlos}.% \COMMENT{Note that it does not suffice to add $2\theta |R|$ vs. Indeed, let $y$ denote the number of vertices which we have to add. Then $y$ has to satisfy $$(1-\frac{1}{\ell-1+\xi}-2\theta)|R|+y\ge (1-\frac{1}{\ell-1+\xi})(|R|+y)$$ and thus $y\ge 2\theta|R|(\ell-1+\xi)$.} Removing the new vertices results in a $B^*$-packing of $R$ which has the desired size.) We delete all the clusters not contained in some copy of $B^*$ in $\mathcal{B}'$ from $R$ and add all the vertices lying in these clusters to the exceptional set~$V_0$. Thus $|V_0|\le {\varepsilon} n+\sqrt{\theta} n\le 2\sqrt{\theta} n$. From now on, we denote by $R$ the subgraph of the reduced graph induced by all the remaining clusters. Thus $\mathcal{B}'$ now is a perfect $B^*$-packing of~$R$ and we still have that \begin{align}\label{eqmindegreduced} \delta(R) \ge \left(1-\frac{1}{\ell-1+\xi}-2\sqrt{\theta}\right)|R|. \end{align} It is easy to check that for all $B\in\mathcal{B}'$ we can replace each cluster $V_a$ in $B$ by a subcluster of size $L':=(1-{\varepsilon}|B^*|)L$ such that for each edge $V_aV_b$ of $B$ the bipartite subgraph of $G'$ between the chosen subclusters of $V_a$ and $V_b$ is $(2{\varepsilon},d/2)$-superregular (see e.g.~\cite[Prop.~8]{KOTplanar}). Add all the vertices of~$G$ which do not lie in one of the chosen subclusters to the exceptional set $V_0$. Then \begin{equation*} |V_0|\le 3\sqrt{\theta} n. \end{equation*} By adjusting $L'$ if necessary and adding a bounded number of further vertices to~$V_0$ we may assume that $L'$ is divisible by $z_1z$. (Recall that $z_1$ and $z$ were defined in~(\ref{eqdefxi}).) 
From now on, we refer to the chosen subclusters as the clusters of~$R$. Next we partition each of these clusters $V_a$ into a red part $V_a^{red}$ and a blue part $V_a^{blue}$ such that $|V_a^{red}|=\theta^2 |V_a|$ and such that $|\, |N_G(x)\cap V_a^{red}|-\theta^2|N_G(x)\cap V_a|\, |\le {\varepsilon} L'$ for every vertex $x\in G$. (Consider a random partition to see that there are $V_a^{red}$ and $V_a^{blue}$ with these properties.) Together all these partitions of the clusters of $R$ yield a partition of the vertices of $G-V_0$ into red and blue vertices. We will use these partitions to ensure that even after some modifications which we have to carry out during the proof, the edges of the $B\in\mathcal{B}'$ will still correspond to superregular subgraphs of~$G'$. More precisely, during the proof we will take out certain copies of $B^*$ from $G$, but each copy will avoid all the red vertices. All the vertices contained in these copies of $B^*$ will be removed from the clusters they belong to. However, if we look at the (modified) bipartite subgraph of $G'$ which corresponds to some edge $V_aV_b$ of $B\in \mathcal{B}'$, then this subgraph of $G'$ will still be $({\varepsilon}', d')$-superregular since it still contains all vertices in $V_a^{red}$ and $V_b^{red}$. The blown-up $B^*$-cover required in Lemma~\ref{nonextremal} will be obtained from the cover corresponding to $\mathcal{B}'$ by taking out a small number of copies of~$B^*$ from~$G$. This will be done in two steps. Firstly, we will take out copies of $B^*$ to ensure that for every $B\in \mathcal{B}'$ the size of the blow-up of each of its $\ell-1$ large vertex classes is significantly smaller than $(z/|B^*|)$ times the size of the blow-up of the entire~$B$. This will ensure that the cover corresponding to $\mathcal{B}'$ satisfies condition~(c) in the definition of blown-up $B^*$-cover (Definition~\ref{defblownupcover}). Moreover, each exceptional vertex will be contained in one of the copies taken out. 
So after this step we also have incorporated all the exceptional vertices. All the copies of $B^*$ deleted in this process will avoid the red vertices of~$G$. In the second step we will then take out a bounded number of further copies of $B^*$ in order to achieve that the order of the blow-up of each $B\in\mathcal{B}'$ is divisible by~$|B^*|$ (as required in condition~(b) in Definition~\ref{defblownupcover}). As mentioned in the previous paragraph, the blown-up $B^*$-cover thus obtained from $\mathcal{B}'$ will also satisfy condition~(a) in Definition~\ref{defblownupcover} since in this last step we remove only a bounded number of further vertices from the clusters, which does not affect the superregularity significantly. Finally, since the blown-up $B^*$-cover obtained in this way has $|\mathcal{B}'|\le k_1$ elements, we may have to split some of the blown-up copies of $B^*$ to obtain a blown-up $B^*$-cover with exactly $k_1$ elements. \subsection{Adjusting the sizes of the vertex classes in the blow-ups of the $B\in \mathcal{B}'$}\label{sec:adjust} Let $B_1,\dots,B_{k'}$ denote the copies of $B^*$ in~$\mathcal{B}'$. As described at the end of the previous section, our next aim is to take out a small number of copies of $B^*$ from $G$ to achieve that, for all $t\le k'$, the blow-up of each large vertex class of $B_t$ is significantly smaller than $(z/|B^*|)$ times the size of the blow-up of~$B_t$ itself. It turns out that this becomes simpler if we first split the blow-up of each $B_t$ into $z_1z$ `smaller blow-ups'. Then we take out copies of $B^*$ from $G$ in order to modify the sizes of these smaller blow-ups. This will imply that the sizes of the blown-up vertex classes of the original $B_t$'s are as desired. We will not remove red vertices in this process. Thus consider any $B_t$. We will think of the $\ell$th vertex class of~$B_t$ as the one having size~$z_1$. 
For all $j<\ell$, split each of the $z$ clusters belonging to the $j$th vertex class of $B_t$ into $z_1$ subclusters of equal size. Let $Z'_j((t-1)z_1z+1),\dots, Z'_j(tz_1z)$ denote the subclusters thus obtained. Similarly, split each cluster belonging to the $\ell$th vertex class of $B_t$ into $z$ subclusters of equal size. Let $Z_\ell((t-1)z_1z+1),\dots, Z_\ell(tz_1z)$ denote the subclusters thus obtained. Put $$k'':=z_1zk'. $$ Given $i\le k''$, we think of the $\ell$-partite subgraph of $G'$ with vertex classes $Z'_1(i),\dots,Z'_{\ell-1}(i),Z_\ell(i)$ as a blown-up copy of $B^*$. (Indeed, note that $\xi |Z'_j(i)|=|Z_\ell(i)|$ for all $j<\ell$.) We may assume that about $\theta^2|Z'_j(i)|$ vertices in $Z'_j(i)$ are red and that for every vertex $x\in G$ about a $\theta^2$-fraction of its neighbours in each~$Z'_j(i)$ are red and that the analogue holds for~$Z_\ell(i)$. (Indeed, consider random partitions again to show that this can be guaranteed.) In order to achieve that the size of each large vertex class is significantly smaller than the size of the entire blown-up copy, we will remove a $\theta$-fraction of vertices from each of $Z'_1(i),\dots,Z'_{\ell-1}(i)$. We will add all these vertices to~$V_0$. The aim then is to incorporate all the vertices in~$V_0$ by taking out copies of $B^*$ from~$G$. Of course, this has to be done in such a way that we do not destroy the properties of the vertex classes again. So put $$L'':=(1-\theta)L'/z_1. $$ For all $i\le k''$ and all $j< \ell$ remove $\theta L'/z_1$ blue vertices from $Z'_j(i)$ and add all these vertices to~$V_0$. Denote the subset of $Z'_j(i)$ thus obtained by~$Z_j(i)$. So \begin{equation}\label{sizeZji} |Z_\ell(i)|=L'/z=\xi L''/(1-\theta),\ \ \ |Z_j(i)|=L''=(1-\theta)|Z_\ell(i)|/\xi \end{equation} whenever $j<\ell$. Also, we now have that \begin{equation}\label{eqV0} |V_0|\le 4\sqrt{\theta}n. 
\end{equation} Note that for all $i\le k''$ and all $0\le j<j'\le \ell$ the graph $(Z_j(i),Z_{j'}(i))_{G'}$ is ${\varepsilon}'$-regular and has density at least~$d'$. Denote by $Z^{red}_j(i)$ the set of red vertices in~$Z_j(i)$. Let us now prove the following claim. Roughly speaking, it states that by taking out a small number of copies of $B^*$ from $G$ we can incorporate all the vertices in~$V_0$ and that this can be done without destroying the properties of the vertex classes of the blown-up copies of~$B^*$. \medskip \noindent \textbf{Claim.} {\it We can take out at most $3|V_0|\le 12\sqrt{\theta}n$ disjoint copies of $B^*$ from $G$ which cover all the vertices in~$V_0$ and have the property that the leftover sets $Y_j(i)\subseteq Z_j(i)$ thus obtained satisfy \begin{itemize} \item[{\rm (a)}] $Z^{red}_j(i)\subseteq Y_j(i)$, \item[{\rm (b)}] $L''-|Y_1(i)|\le \theta^{1/7}L''$, \item[{\rm (c)}] $|Y_1(i)|=\dots=|Y_{\ell-1}(i)|\le (1-\theta)|Y_\ell(i)|/\xi$. \end{itemize}} \medskip \noindent To prove this claim, we show that for every vertex $x\in V_0$ in turn we can take out either one, two or three disjoint copies of $B^*$ which satisfy the following three properties.% \textno Firstly, $x$ lies in one of the copies. Secondly, these copies avoid all the red vertices. Thirdly, when removing these copies from $G$ then, for every $i\le k''$, we either delete no vertex at all in $Z_1(i)\cup\dots\cup Z_\ell(i)$ or else we delete precisely $z$ vertices in each of $Z_1(i),\dots,Z_{\ell-1}(i)$ and delete either $z_1$ or $z_1-1$ vertices in~$Z_\ell(i)$. &(*)% \noindent Together with~(\ref{sizeZji}) this implies that after each step the subsets obtained from the~$Z_j(i)$ will satisfy conditions~(a) and (c).% \COMMENT{Indeed, for (c) we need that $\xi(|Z_1(i)|-z)\le (1-\theta)(|Z_\ell(i)|-z_1)$. But (\ref{sizeZji}) together with the fact that $-z_1=-\xi z\le -(1-\theta)z_1$ show that this holds.} We will discuss later how~(b) can be satisfied too. 
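To spell out why conditions~(a) and~(c) are preserved in each step, note first that (a) holds simply because all the copies of $B^*$ which we take out avoid the red vertices. For~(c), observe that in each step we remove the same number $z$ of vertices from each of $Z_1(i),\dots,Z_{\ell-1}(i)$, so these sets keep equal sizes. Moreover, if $s$ and $t$ denote the current sizes of $Z_1(i)$ and $Z_\ell(i)$ and if $\xi s\le (1-\theta)t$ before the step (which holds initially by~(\ref{sizeZji})), then after the step
$$\xi(s-z)=\xi s-z_1\le (1-\theta)t-z_1\le (1-\theta)(t-z_1), $$
where we use that $\xi z=z_1$ and $z_1\ge (1-\theta)z_1$. The case when only $z_1-1$ vertices are removed from $Z_\ell(i)$ is easier still.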
Thus consider the first vertex~$x\in V_0$. To find the copies of $B^*$ satisfying~$(*)$ we will distinguish several cases. Suppose first that there exists an index $i=i(x)$ such that $x$ has at least $\theta L''$ neighbours in $Z_j(i)$ for all $j<\ell$.% \footnote{In later steps we will ask whether a vertex $x'\in V_0$ has at least $\theta L''$ neighbours in the \emph{current} set $Z_j(i)$ for all $j<\ell$.} Take out a copy of $B^*$ from $G$ which contains~$x$, which meets each of $Z_1(i),\dots,Z_{\ell-1}(i)$ in $z$ vertices and $Z_\ell(i)$ in $z_1-1$ vertices and which avoids the red vertices of~$G$. (The existence of such copies of $B^*$ in $G$ easily follows from a `greedy' argument based on the ${\varepsilon}'$-regularity of the bipartite subgraphs $(Z_j(i),Z_{j'}(i))_{G'}$ of $G'$, see e.g.~Lemma~7.5.2 in~\cite{Diestel} or Theorem~2.1 in~\cite{KSi}. We will often use this and similar facts below. We can avoid the red vertices since $|Z_j^{red}(i)|\ll \theta L''$ and so most of the neighbours of $x$ in $Z_j(i)$ will be blue.)% \COMMENT{Indeed, $|Z_j^{red}(i)|\approx \theta^2L'/z_1=\theta^2(1-\theta)L'' \ll \theta L''$.} Next suppose that we cannot find an index $i$ as above. By relabelling if necessary, we may assume that $x$ has at most $\theta L''$ neighbours in $Z_1(i)$ for all $i\le k''$. Let $I$ denote the set of all those indices $i$ for which $x$ has at least $\theta L''$ neighbours in $Z_j(i)$ for all $j=2,\dots,\ell$. To obtain a lower bound on the size of~$I$, we now consider~$d_{G'}(x)$. This shows that \begin{align*} (k''-|I|) L''(\ell-2+2\theta)+|I|L''(\ell-2+\xi/(1-\theta)+\theta) & \stackrel{(\ref{sizeZji})}{\ge} d_{G'}(x)-|V_0|\\ & \stackrel{(\ref{eqV0})}{\ge} (1-1/\chi_{cr}(H)-\theta^{1/3})n. \end{align*} The term $\theta^{1/3}n$ in the second line is a bound on $|V_0|$ (with room to spare -- this will be useful later on). 
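To spell out how this inequality will be used: since $1-1/\chi_{cr}(H)=1-1/(\ell-1+\xi)$ and, by~(\ref{sizeZji}), $n\ge n-|V_0|=(\ell-1+\xi/(1-\theta))L''k''\ge (\ell-1+\xi)L''k''$, it rearranges (after absorbing the remaining $\theta$-terms on the left hand side into the error term) to
\begin{align*}
\frac{\xi|I| L''}{1-\theta} & \ge \left(1-\frac{1}{\ell-1+\xi}-2\theta^{1/3}\right)n -k''L''(\ell-2)\\
& \ge \frac{\ell-2+\xi}{\ell-1+\xi}(\ell-1+\xi)L''k'' -k''L''(\ell-2)-2\theta^{1/3}\ell k''L''
= \xi L''k''-2\theta^{1/3}\ell k''L'',
\end{align*}
and thus $|I|\ge (1-\theta)k''-2\theta^{1/3}\ell k''$.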
The above equation implies that% \COMMENT{Note that the $\theta^{1/3}n$ also works in later steps since we claim that we remove at most $12\sqrt{\theta}n$ copies of $B^*$ in the entire process and since there are $\ll \theta^{1/3}k''$ indices $i$ which we exclude from consideration because the size of some $Z_j(i)$ is critical. Thus the modified inequality looks like \begin{align*} \text{old LHS} \ge & \text{degree of } x\text{ in current subgraph of } G'\\ & -|V_0| -\text{ no. vs already removed } -\bigcup_{\text{excluded }i}\bigcup_{j=1}^\ell Z_j(i) \ge \text{old RHS}. \end{align*} Moreover, we have $\xi/(1-\theta)$ instead of $\xi$ since the size of the $Z_\ell(i)$ is $\xi L''/(1-\theta)$. From the above inequality, we get \begin{align*} \frac{\xi|I| L''}{1-\theta} & \ge \left(1-\frac{1}{\ell-1+\xi}-2\theta^{1/3}\right)n -k''L''(\ell-2)\\ & \ge \frac{\ell-2+\xi}{\ell-1+\xi}(\ell-1+\xi)L''k'' -k''L''(\ell-2)-2\theta^{1/3}\ell k''L''\\ & = \xi L''k''-2\theta^{1/3}\ell k''L''. \end{align*} This shows that $|I|\ge (1-\theta)k''-2\theta^{1/3}\ell k''\ge (1-\theta^{1/4})k''$.} \begin{equation}\label{eqsizeI} |I|\ge (1-\theta^{1/4})k''. \end{equation} Now suppose that there are two indices $i_1,i_2\in I$ such that the density $d_{G'}(Z_{1}(i_1),Z_j(i_2))$ is nonzero for all $j=1,\dots,\ell-1$. (Note that the last condition in Lemma~\ref{deg-reglemma} implies that then each of the bipartite subgraphs $(Z_{1}(i_1),Z_j(i_2))_{G'}$ of $G'$ is ${\varepsilon}'$-regular and has density at least~$d'$.) In this case we take out two disjoint copies of $B^*$. The first contains~$x$ and has $z$ vertices in each of $Z_2(i_1),\dots,Z_{\ell-1}(i_1)$, $z-1$ vertices in $Z_{1}(i_1)$ and $z_1$ vertices in $Z_\ell(i_1)$. Such a copy of $B^*$ exists since $i_1\in I$. The second copy will have one vertex in $Z_1(i_1)$, $z$ vertices in each of $Z_1(i_2),\dots,Z_{\ell-1}(i_2)$ and $z_1-1$ vertices in $Z_\ell(i_2)$. 
Again, these copies of $B^*$ are chosen such that they avoid the red vertices of~$G$. So we may assume that there are no indices $i_1,i_2$ as above. Suppose next that there are indices $i_3,i_4\in I$ and $j^*=j^*(i_3,i_4)$ with $2\le j^*\le \ell-1$ and such that $d_{G'}(Z_{j^*}(i_3),Z_j(i_4))>0$ for all $j=2,\dots,\ell$ and $d_{G'}(Z_{j}(i_3),Z_1(i_4))>0$ for all $j^*\neq j\le \ell$. In this case we take out two disjoint copies of $B^*$ again. The first one contains~$x$, has $z$ vertices in $Z_{j^*}(i_3)$ and in each of $Z_2(i_4),\dots,Z_{\ell-1}(i_4)$ and $z_1-1$ vertices in $Z_\ell(i_4)$. The second copy will have $z_1$ vertices in $Z_\ell(i_3)$ and $z$ vertices in $Z_1(i_4)$ as well as $z$ vertices in each $Z_j(i_3)$ with $j^*\neq j<\ell$. Again, all these copies of $B^*$ are chosen such that they avoid all the red vertices. So we may assume that there are no such indices~$i_3,i_4,j^*$. Suppose next that there are indices $i_5,i_6, i_7\in I$ and $j^\diamond=j^\diamond(i_5,i_6,i_7)$ with $2\le j^\diamond\le \ell-1$ and such that $d_{G'}(Z_{j^\diamond}(i_5),Z_j(i_7))>0$ for all $j=1,\dots,\ell-1$ and $d_{G'}(Z_{j}(i_5),Z_1(i_6))>0$ for all $j^\diamond\neq j\le \ell$. In this case we take out three disjoint copies of $B^*$. The first copy contains~$x$, has $z-1$ vertices in $Z_{1}(i_6)$, $z$ vertices in each of $Z_2(i_6),\dots,Z_{\ell-1}(i_6)$ and $z_1$ vertices in~$Z_\ell(i_6)$. The second copy has one vertex in $Z_1(i_6)$, $z-1$ vertices in $Z_{j^\diamond}(i_5)$, $z$ vertices in each $Z_j(i_5)$ with $j^\diamond\neq j<\ell$ and $z_1$ vertices in~$Z_\ell(i_5)$. The third copy has one vertex in $Z_{j^\diamond}(i_5)$, $z$ vertices in each of $Z_1(i_7),\dots,Z_{\ell-1}(i_7)$ and $z_1-1$ vertices in~$Z_\ell(i_7)$. Again, all these copies of $B^*$ are chosen such that they avoid all the red vertices. So we may assume that there are no such indices~$i_5,i_6,i_7,j^\diamond$. 
We will show that together with our previous three assumptions this leads to a contradiction to our assumption on the minimum degree of~$G$. For this, first note that there are at least $\tau |I|^2/4$ ordered pairs of indices $i,i'\in I$ for which $d_{G'}(Z_1(i),Z_1(i'))>0$. Indeed, otherwise the union $U$ of all the $Z_1(i)$ with $i\in I$ would have density at most $\tau/2$ in $G'$ and thus density at most $3\tau/4$ in $G$. But $zn/|B^*|-\theta^{1/10} n\le |U|\le zn/|B^*|$ by~(\ref{eqsizeI}).% \COMMENT{Condition~(b) implies that this also holds in later steps.} Thus by adding at most $\theta^{1/10}n\ll \tau |U|$ vertices to $U$ if necessary we would obtain a set $A$ that contradicts condition~(i) of Lemma~\ref{nonextremal}. Given $i'\in I$, we call an index $i\in I$ \emph{useful for $i'$} if $d_{G'}(Z_1(i),Z_1(i'))>0$. Let $I'\subseteq I$ be the set of all those indices $i'\in I$ for which at least $\tau |I|/8$ other indices $i\in I$ are useful. Thus \begin{equation}\label{eqsizeI'} |I'|\ge \tau|I|/8. \end{equation} Note that for every pair $i,i'\in I$ there exists an index $j'=j'(i,i')$ with $1\le j'<\ell$ and such that $d_{G'}(Z_{j'}(i),Z_1(i'))=0$. (Otherwise we could take $i',i$ for~$i_1,i_2$.) So in the graph $G'$ every vertex in $Z_1(i')$ has at most $(\ell-2+\xi/(1-\theta))L''$ neighbours in $Z_1(i)\cup\dots\cup Z_\ell(i)$. Clearly, $2\le j'<\ell$ if $i$ is useful for~$i'$. Given $i'\in I$, call another index $i\in I$ \emph{typical for $i'$} if $d_{G'}(Z_j(i),Z_1(i'))>0$ for all $j\le \ell$ with $j\neq j'$. Thus if $i$ is not typical for $i'$ then in the graph $G'$ every vertex in $Z_1(i')$ has at most $(\ell-2)L''$ neighbours in $Z_1(i)\cup\dots\cup Z_\ell(i)$. Given $i'\in I'$, we will now show that at least half of the $\ge \tau |I|/8$ indices $i$ which are useful for $i'$ are also typical. Indeed, suppose not. Consider any vertex $v\in Z_1(i')$ and look at its degree in~$G'$. 
We have that \begin{align*} d_{G'}(v) & \le |I|L''(1-\tau/16)(\ell-2+\xi/(1-\theta))+\frac{\tau}{16}|I|L''(\ell-2) +\theta^{1/5}n\\ & \stackrel{(\ref{eqsizeI})}{\le} (\ell-2+\xi)k''L''-\tau \xi |I|L''/16+2\theta^{1/5}n \le \delta(G)-\tau^2n<\delta(G'), \end{align*} a contradiction. (Indeed, to see the first inequality use that the error bound $\theta^{1/5}n$ on the right hand side is a bound on the number of all those neighbours of~$v$ which lie in $V_0$ or in sets $Z_j(i)$ with $i\notin I$ (c.f.~(\ref{eqV0}) and~(\ref{eqsizeI})). Again, we have room to spare here. To check the third inequality use that~(\ref{sizeZji}) implies $n-|V_0|=(\ell-1+\xi/(1-\theta))L''k''\ge (\ell-1+\xi)L''k''$. For the last inequality use that the Regularity lemma (Lemma~\ref{deg-reglemma}) implies $\delta(G')\ge \delta(G)-2dn$.) This shows that for every $i'\in I'$ at least $\tau |I|/16$ indices $i\in I$ are both useful and typical for~$i'$. Consider all the triples $i,i',j'$ such that $i\in I$, $i'\in I'$ and such that $i$ is both useful and typical for~$i'$ and where $j'=j'(i,i')$ is as defined after~(\ref{eqsizeI'}). It is easy to see that the number of such triples is at least $\tau |I||I'|/16$. Thus there must be one pair $i,j'$ which occurs for at least $\tau |I'|/(16\ell)$ indices $i'\in I'$. Let $I''$ denote the set of all these indices~$i'$. So crudely \begin{equation} \label{Iprimebound} |I''| \stackrel{(\ref{eqsizeI'})}{\ge} \tau^3|I| \stackrel{(\ref{eqsizeI})}{\ge} \tau^4 k''. \end{equation} Note that for each $i'\in I''$ there exists a $j''$ such that $2\le j''\le\ell$ and $d_{G'}(Z_{j'}(i),Z_{j''}(i'))=0$. (Otherwise we could take $i,i',j'$ for~$i_3,i_4,j^*$ since $i$ is both useful and typical for~$i'$.) So in the graph~$G'$ every vertex in $Z_{j'}(i)$ has at most $(\ell-2)L''$ neighbours in $Z_1(i')\cup\dots\cup Z_\ell(i')$. Furthermore, for each $i''\in I\setminus (I''\cup \{i\})$ there exists a $j'''$ such that $1\le j'''<\ell$ and $d_{G'}(Z_{j'}(i),Z_{j'''}(i''))=0$. 
(Otherwise we could take $i,i',i'',j'$ for $i_5,i_6,i_7,j^\diamond$.) Thus for each $i'' \in I\setminus (I''\cup \{i\})$ we can still say that in~$G'$ a vertex $v \in Z_{j'}(i)$ has at most $(\ell-2+\xi/(1-\theta))L''$ neighbours in $Z_1(i'')\cup\dots\cup Z_\ell(i'')$. Note that $v$ sends at most $\theta^{1/5}n$ edges to $V_0$ and to sets $Z_j(i^*)$ with $i^* \notin I$ by~(\ref{eqV0}) and~(\ref{eqsizeI}). Together the above observations show that \begin{align*} d_{G'}(v) & \le (|I|-|I''|)(\ell-2+\xi/(1-\theta))L''+|I''|(\ell-2)L''+\theta^{1/5}n\\ & \le k''(\ell-2+\xi)L''-\xi|I''|L''/(1-\theta)+2\theta^{1/5}n \stackrel{(\ref{Iprimebound})}{\le} \delta(G)-\tau^5 n<\delta(G'), \end{align*} a contradiction. Thus we have shown that we can incorporate the first exceptional vertex $x$ by removing at most three copies of $B^*$ which are as in~$(*)$. Recall that this ensures that the subsets thus obtained from the $Z_j(i)$ satisfy conditions~(a) and~(c). Next we proceed similarly with all other vertices in~$V_0$. However, in order to ensure that in the end condition~(b) is satisfied too, we need to be careful that we do not remove too many vertices from a single set~$Z_j(i)$. So if the size of some set $Z_j(i)$ becomes critical after we have dealt with some vertex in~$V_0$, then we exclude \emph{all} the sets $Z_1(i),\dots,Z_\ell(i)$ from consideration when dealing with the remaining vertices in~$V_0$. The definition of the critical threshold in~(b) implies that we exclude at most $$z|V_0|/(\theta^{1/7} L'')\stackrel{(\ref{eqV0})}{\le} 4z\sqrt{\theta}n/(\theta^{1/7}L'')\ll \theta^{1/3} k'' $$ indices $i$ in this way. It is easy to check that this will not affect any of the above calculations significantly. This completes the proof of the claim. \medskip \noindent Recall that the sets $Z_j(i)$ were obtained by splitting the clusters belonging to the copies $B_1,\dots, B_{k'}$ of $B^*$ in~$\mathcal{B}'$. 
By taking out the copies of $B^*$ chosen in the above process we modified these clusters. For all $t\le k'$ and all $j\le \ell$ let $X_j(t)$ denote the union of all the modified clusters belonging to the $j$th vertex class of~$B_t$. Thus $X_j(t)=Y_j((t-1)z_1z+1)\cup\dots\cup Y_j(tz_1z)$. Moreover,~(\ref{sizeZji}) and (a)--(c) imply that% \COMMENT{Indeed, (a) implies that all the bipartite graphs between all the modified clusters $V,V'$ with $V\subseteq X_j(t)$ and $V'\subseteq X_{j'}(t)$ are superregular. But this in turn implies~(a$'$).} \begin{itemize} \item[(a$'$)] all the bipartite graphs $(X_j(t),X_{j'}(t))_{G'}$ are $({\varepsilon}',d')$-superregular whenever $j\neq j'$, \item[(b$'$)] $(1-\theta^{1/8})zL'\le |X_j(t)|\le (1-\theta)zL'$ for all $j<\ell$ and $(1-\theta^{1/8})z_1L'\le |X_\ell(t)|\le z_1L'$, \item[(c$'$)] $|X_1(t)|=\dots=|X_{\ell-1}(t)|\le (1-\theta)|X_\ell(t)|/\xi$. \end{itemize} \subsection{Making the blow-ups of the $B\in \mathcal{B}'$ divisible by~$|B^*|$}\label{sec:div} Given a subgraph $S\subseteq R$, we denote by $V_G(S)\subseteq V(G)$ the blow-up of~$V(S)$. Thus $V_G(S)$ is the union of all the clusters which are vertices of~$S$. In particular, $V_G(B_i)=X_1(i)\cup\dots\cup X_\ell(i)$. If $|V_G(B_i)|$ was divisible by~$|B^*|$ for each $B_i\in\mathcal{B}'$, then $\mathcal{B}'$ would correspond to a blown-up $B^*$ cover as required in the lemma. As already described at the end of Section~\ref{sec:applyRG}, we will achieve this by taking out a bounded number of further copies of $B^*$ from $G$. For this, we define an auxiliary graph $F$ whose vertices are the elements of $\mathcal{B}'$ and in which $B_i,B_j\in\mathcal{B}'$ are adjacent if the reduced graph $R$ contains a copy of $K_\ell$ with one vertex in $B_i$ and $\ell-1$ vertices in $B_j$ or vice versa. To motivate the definition of $F$, let us first consider the case when $F$ is connected. 
If $B_i,B_j\in\mathcal{B}'$ are adjacent in $F$ then $G$ contains a copy of $B^*$ with one vertex in $V_G(B_i)$ and all the other vertices in $V_G(B_j)$ or vice versa. In fact, we can even find $|B^*|-1$ disjoint such copies of $B^*$ in~$G$. Taking out a suitable number of such copies (at most $|B^*|-1$), we can achieve that the size of the subset of $V_G(B_i)$ obtained in this way is divisible by $|B^*|$. Thus we can `shift the remainders mod~$|B^*|$' along a spanning tree of~$F$ to achieve that $|V_G(B)|$ is divisible by $|B^*|$ for each $B\in\mathcal{B}'$. (To see this, use that $\sum_{B\in\mathcal{B}'} |V_G(B)|$ is divisible by $|B^*|$ since $|G|$ is divisible by $|B^*|$.) Let us next show that in the case when $\ell=2$ the graph $F$ is always connected. If $\ell=2$, then $B_i,B_j\in \mathcal{B}'$ are joined in $F$ if and only if $R$ contains an edge between $B_i$ and~$B_j$. Now suppose that $F$ is not connected and let $C$ be any component of~$F$. Let $A\subseteq V(G)$ denote the union of all those clusters which belong to some $B_i\in C$. Then in the current subgraph of $G'$ there are no edges emanating from~$A$. As we have taken out at most $3|B^*||V_0|\le 12|B^*|\sqrt{\theta}n$ vertices in Section~\ref{sec:adjust} this implies that $d_{G'}(A,V(G')\setminus A)\le \theta^{1/3}$. Since $d_G(x)\le d_{G'}(x)+2dn$ for any $x\in G$ we have $d_G(A,V(G)\setminus A)\le \theta^{1/3}+4d\ll \tau$, a contradiction to condition~(ii) of Lemma~\ref{nonextremal}. Thus in what follows we may assume that $\ell\ge 3$ and that $F$ is not connected. Let $\mathcal{C}$ denote the set of all components of~$F$. Given a component $C$ of $F$, we denote by $V_R(C)\subseteq V(R)$ the set of all those clusters which belong to some $B\in \mathcal{B}'$ with $B\in C$. Let $V_G(C)\subseteq V(G)$ denote the union of all clusters in $V_R(C)$. We first show that we can take out a bounded number of copies of $B^*$ from $G$ in order to make $|V_G(C)|$ divisible by $|B^*|$ for each $C\in\mathcal{C}$. 
After that, we can `shift the remainders mod~$|B^*|$' within each component $C\in\mathcal{C}$ along a spanning tree as indicated above to make $|V_G(B)|$ divisible by~$|B^*|$ for each $B\in \mathcal{B}'$. For our argument, we will need the following claim. \medskip \noindent \textbf{Claim~1.} \emph{Let $C_1,C_2\in \mathcal{C}$ be distinct and let $a\in V_R(C_2)$. Then $$|N_R(a)\cap V_R(C_1)|< \frac{\ell-2}{\ell-1}|V_R(C_1)|.$$} \smallskip \noindent Suppose not. Then there is some $B\in \mathcal{B}'$ such that $B\in C_1$ and such that $$ |N_R(a)\cap B|\ge \frac{\ell-2}{\ell-1} |B|> \frac{\ell-2}{\ell-1+\xi} |B|=(\ell-2)z $$ (recall that $|B|=|B^*|=(\ell-1)z+z_1=(\ell-1+\xi)z$). This implies that $a$ has a neighbour in at least $\ell-1$ vertex classes of~$B$. Thus $R$ contains a copy of $K_\ell$ which consists of $a$ together with $\ell-1$ of its neighbours in~$B$. But by definition of the auxiliary graph $F$, this means that $B$ is adjacent in $F$ to the copy $B_i\in\mathcal{B}'$ that contains~$a$, i.e.~$B$ and $B_i$ lie in the same component of $F$, a contradiction. This completes the proof of Claim~1. \medskip \noindent \textbf{Claim~2.} \emph{There exist a component $C'\in \mathcal{C}$, a copy $K$ of $K_\ell$ in $R$ and a vertex $a_0\in V(R)\setminus (V(K)\cup V_R(C'))$ such that $K$ meets $V_R(C')$ in exactly one vertex and such that $a_0$ is joined to all the remaining vertices in $K$. } \smallskip \noindent As $\delta(R)> |R|/2$,% \COMMENT{since we are in the case when $\ell\ge 3$} there exists an edge $a_1a_2\in R$ which joins the vertex sets corresponding to two different components of $F$, i.e.~there are distinct $C_1,C_2\in\mathcal{C}$ such that $a_1\in V_R(C_1)$ and $a_2\in V_R(C_2)$. Note that (\ref{eqmindegreduced}) implies that \begin{equation}\label{eqweakdeltaR} \delta(R)> \frac{\ell-2}{\ell-1}|R|. \end{equation} Thus the number of common neighbours of $a_1$ and $a_2$ in~$R$ is greater than $\frac{\ell-3}{\ell-1}|R|$. To prove the claim, we will now distinguish two cases. 
\medskip \noindent \textbf{Case~1.} \emph{More than $\frac{\ell-3}{\ell-1}|V(R)\setminus V_R(C_1)| $ common neighbours of $a_1$ and $a_2$ lie outside~$V_R(C_1)$.} \smallskip \noindent Let $a_3$ be a common neighbour of $a_1$ and $a_2$ outside $V_R(C_1)$. Claim~1 and (\ref{eqweakdeltaR}) together imply that the number of common neighbours of $a_1$, $a_2$ and $a_3$ outside $V_R(C_1)$ is more than $$\frac{\ell-4}{\ell-1}|V(R)\setminus V_R(C_1)|. $$ Choose such a common neighbour~$a_4$. Continuing in this way, we can obtain distinct vertices $a_2,\dots,a_{\ell}$ outside $V_R(C_1)$ which together with $a_1$ form a copy $K$ of $K_\ell$ in~$R$. As before, Claim~1 and (\ref{eqweakdeltaR}) together imply that the number of common neighbours of $a_2,\dots,a_{\ell}$ outside $V_R(C_1)$ is nonzero. Let $a_0$ be such a common neighbour. Then Claim~2 holds with $C':=C_1$, $K$ and $a_0$. Thus we may now consider \medskip \noindent \textbf{Case~2.} \emph{More than $\frac{\ell-3}{\ell-1}|V_R(C_1)| $ common neighbours of $a_1$ and $a_2$ lie in~$V_R(C_1)$.} \smallskip \noindent In this case we proceed similarly as in Case~1. However, this time we choose $a_0,a_3,\dots,a_{\ell}$ inside $V_R(C_1)$. Indeed, this can be done since Claim~1 and (\ref{eqweakdeltaR}) together imply that each vertex in $V_R(C_1)$ has more than $\frac{\ell-2}{\ell-1}|V_R(C_1)| $ neighbours in~$V_R(C_1)$. Then Claim~2 holds with $C':=C_2$. \medskip \noindent \textbf{Claim~3.} \emph{We can make $|V_G(B)|$ divisible by $|B^*|$ for all $B\in\mathcal{B}'$ by taking out at most $|\mathcal{B}'||B^*|$ disjoint copies of $B^*$ from~$G$.} \smallskip \noindent We first take out some copies of $B^*$ from $G$ to achieve that $|V_G(C)|$ is divisible by $|B^*|$ for each $C\in\mathcal{C}$. To do this we proceed as follows. 
We apply Claim~2 to find a component $C_1\in\mathcal{C}$, a copy $K$ of $K_\ell$ in $R$ and a vertex $a_0\in V(R)\setminus (V(K)\cup V_R(C_1))$ such that $K$ meets $V_R(C_1)$ in exactly one vertex, $a_1$ say, and such that $a_0$ is joined to all vertices in $K-a_1$. Thus $G$ contains a copy $B'$ of $B^*$ which has exactly one vertex $x\in V_G(C_1)$ and whose other vertices lie in clusters belonging to $V(K-a_1)\cup \{a_0\}$. (Indeed, we can choose the vertices of $B'$ lying in the same vertex class as $x$ in the cluster $a_0$ and the vertices lying in other vertex classes in the clusters belonging to $K-a_1$.) In fact, $G$ contains $|B^*|-1$ (say) disjoint such copies of~$B^*$. Now suppose that $|V_G(C_1)|\equiv j\mod |B^*|$. Then we take out $j$ disjoint such copies of $B^*$ from $G$ to achieve that $|V_G(C_1)|$ is divisible by $|B^*|$. Next we consider the graphs $F_1:=F-V(C_1)$ and $R_1:=R-V_R(C_1)$ instead of $F$ and $R$. Claim~1 and (\ref{eqweakdeltaR}) together imply that $\delta(R_1)>\frac{\ell-2}{\ell-1}|R_1|. $ Now suppose that $|\mathcal{C}|\ge 3$. Then similarly as in the proof of Claim~2 one can find a component $C_2\in\mathcal{C}\setminus\{C_1\}$, a copy $K'$ of $K_\ell$ in $R_1$ and a vertex $a'_0\in V(R_1)\setminus (V(K')\cup V_R(C_2))$ such that $K'$ meets $V_R(C_2)$ in exactly one vertex, $a_2$ say, and such that $a'_0$ is joined to all vertices in $K'-a_2$. As before, we take out at most $|B^*|-1$ copies of $B^*$ from $G$ to achieve that $|V_G(C_2)|$ is divisible by~$|B^*|$. As $|G|$ was divisible by $|B^*|$, we can continue in this fashion to achieve that $|V_G(C)|$ is divisible by $|B^*|$ for all components $C\in\mathcal{C}$. In this process, we have to take out at most $(|\mathcal{C}|-1)(|B^*|-1)$ copies of $B^*$ from $G$. Now we consider each component $C\in \mathcal{C}$ separately. 
By proceeding as in the connected case for each $C$ and taking out at most $(|C|-1)(|B^*|-1)$ further copies of $B^*$ from $G$ in each case, we can make $|V_G(B)|$ divisible by $|B^*|$ for each $B\in \mathcal{B}'$. Hence, in total, we have taken out at most $(|\mathcal{C}|-1)(|B^*|-1)+(|\mathcal{B}'|-|\mathcal{C}|)(|B^*|-1)\le |\mathcal{B}'||B^*|$ copies of $B^*$ from~$G$. $\mathcal{B}'$ now corresponds to a blown-up $B^*$-cover as desired in the lemma, except that it has $k'\le k_1$ elements. (Recall that $k_1$ was defined in~(\ref{eqdefk1}).) But by considering random partitions, it is easy to see that one can split these elements to obtain a blown-up $B^*$-cover as required. The $B^*$-packing $\mathcal{B}^*$ in Lemma~\ref{nonextremal} consists of all the copies of $B^*$ taken out during the proof. Thus $|\mathcal{B}^*|\le 3|V_0|+|\mathcal{B}'||B^*|\le 12\sqrt{\theta}n+|\mathcal{B}'||B^*|\le \theta^{1/3}n$, as desired. \section{Packings in complete $\ell$-partite graphs}\label{sec:complete} In this section, we prove several results which together imply Lemma~\ref{corcomplete1}. However, almost all of the results of this section are also used directly in Section~\ref{sec:extremal}. Clearly, a complete $\ell$-partite graph has a perfect $H$-packing if all its vertex classes have equal size which is divisible by~$|H|$. Together the following two lemmas show that if ${\rm hcf}(H)=1$ then we still have a perfect $H$-packing if the sizes of the vertex classes are permitted to deviate slightly. (By Proposition~\ref{bestposs1} this is false if $\chi(H)\ge 3$ and ${\rm hcf}(H)\neq 1$ or if $\chi(H)=2$ and ${\rm hcf}_\chi(H)>2$.) In Lemma~\ref{completemove} we first consider the case when ${\rm hcf}_\chi(H)=1$. In Lemma~\ref{completebipmove2} we then deal with the remaining case (i.e.~when $H$ is bipartite, ${\rm hcf}(H)=1$ but ${\rm hcf}_\chi(H)=2$). 
\begin{lemma}\label{completemove} Suppose that $H$ is a graph of chromatic number $\ell\ge 2$ such that ${\rm hcf}_\chi(H)=1$. Let $B^*$ be the bottlegraph assigned to~$H$. Let $D'\gg |H|$ be an integer divisible by~$|H|$. Let $a$ be an integer such that $|a|\le |B^*|$. Given $1\le i_1<i_2\le \ell$, let $G$ be a complete $\ell$-partite graph with vertex classes $U_1,\dots,U_\ell$ such that $|U_{i_1}|=D'+a$, $|U_{i_2}|=D'-a$ and $|U_r|=D'$ for all $r\neq i_1,i_2$. Then $G$ contains a perfect $H$-packing. \end{lemma} \removelastskip\penalty55\medskip\noindent{\bf Proof. } Let $q$ denote the number of optimal colourings of~$H$. First note that by taking out at most $q\ell!$ disjoint copies of $H$ from $G$ we may assume that $D'$ is divisible by~$q(\ell-1)!|H|$.% \COMMENT{Indeed, let $0\le t<q(\ell-1)!$ be an integer such that $D'=|H|(q(\ell-1)!k+t)$ where $k\in\mathbb{N}$. Take out copies of $H$ in $t$ steps. In each step we take out $\ell$ copies such that from each $U_i$ we remove $|H|$ vertices (in each step).} Consider the complete $\ell$-partite graph $G'$ whose vertex classes $U'_1,\dots,U'_\ell$ all have size~$D'$. Thus $|G|=|G'|$. We will think of $G$ and $G'$ as two graphs on the same vertex set whose vertex classes are roughly identical. Recall that $G'$ has a perfect $H$-packing. Our aim is to choose a suitable such $H$-packing $\mathcal{H}'$ in $G'$ and to show that it can be modified into a perfect $H$-packing of $G$. To choose $\mathcal{H}'$, we will consider all optimal colourings of~$H$. Let $c^1,\dots,c^q$ be all these colourings. Let $x^j_1\le x^j_2\le \dots\le x^j_{\ell}$ denote the sizes of the colour classes of~$c^j$. Put $$ k:=\frac{D'}{q(\ell-1)!|H|}. $$ Let $S_\ell$ denote the set of all permutations of $\{1,\dots,\ell\}$. 
Given $j\le q$, let $\mathcal{H}'_j$ be an $H$-packing in $G'$ which, for all $s\in S_\ell$, contains precisely $k$ copies of $H$ which in the colouring $c^j$ have their $s(i)$th colour class in $U'_i$ (for all $i=1,\dots,\ell$). Thus $\mathcal{H}'_j$ consists of $\ell!k$ copies of $H$ and covers precisely $k(\ell-1)!|H|$ vertices in each vertex class $U'_i$ of~$G'$. Moreover, we choose all the $\mathcal{H}'_j$ to be disjoint from each other. Thus the union $\mathcal{H}'$ of $\mathcal{H}'_1,\dots,\mathcal{H}'_q$ covers precisely $qk(\ell-1)!|H|=D'$ vertices in each vertex class of $G'$ and so is a perfect $H$-packing in~$G'$. We will now show that $\mathcal{H}'$ can be modified into a perfect $H$-packing of~$G$. Roughly, the reason why this can be done is the following. Clearly, we may assume that $a>0$. So $\mathcal{H}'$ has fewer than $|U_{i_1}|$ vertices in its $i_1$th vertex class and more than $|U_{i_2}|$ vertices in its $i_2$th vertex class. We will modify $\mathcal{H}'$ slightly by interchanging some vertex classes in some copies of $H$ in $\mathcal{H}'$. As ${\rm hcf}_\chi(H)=1$ this can be done in such a way that the $H$-packing obtained from $\mathcal{H}'$ covers one vertex more in its $i_1$th vertex class than $\mathcal{H}'$ and one vertex fewer in its $i_2$th vertex class. Continuing in this fashion we obtain an $H$-packing which covers the correct number of vertices in each vertex class. For all $j\le q$ and $r<\ell$ put $d^j_r:=x^j_{r+1}-x^j_r$. Since ${\rm hcf}_\chi(H)=1$, we can find $b^j_r\in\mathbb{Z}$ such that $$ 1=\sum_{j=1}^q\sum_{r=1}^{\ell-1} b^j_rd^j_r. $$ (Here we take $b^j_r:=0$ if $d^j_r=0$.) In order to modify the $H$-packing $\mathcal{H}'$ we proceed as follows. For all $j\le q$ and $r<\ell$ we consider $b^j_r$. If $b^j_r\ge 0$ we choose $b^j_r$ of the copies of $H$ in $\mathcal{H}'_j\subseteq \mathcal{H}'$ which in the colouring $c^j$ have their $r$th colour class in $U'_{i_1}$ and their $(r+1)$th colour class in $U'_{i_2}$. 
We change each of these copies of $H$ such that they now have their $r$th colour class in $U'_{i_2}$ and their $(r+1)$th colour class in $U'_{i_1}$. All the other vertices remain unchanged. Note that the number of vertices in the $i_1$th vertex class covered by this new $H$-packing increases by $b^j_rd^j_r$ whereas the number of covered vertices in the $i_2$th vertex class decreases by~$b^j_rd^j_r$. If $b^j_r< 0$ we choose $|b^j_r|$ of the copies of $H$ in $\mathcal{H}'_j$ which in the colouring $c^j$ have their $r$th colour class in $U'_{i_2}$ and their $(r+1)$th colour class in $U'_{i_1}$. This time, we change each of these copies of~$H$ such that they now have their $r$th colour class in $U'_{i_1}$ and their $(r+1)$th colour class in~$U'_{i_2}$. Note that all these copies of $H$ will automatically be distinct for different pairs $j,r$. Let $\mathcal{H}^*$ denote the modified $H$-packing obtained by proceeding as described above $a$ times (where all the copies of $H$ which we change are chosen to be distinct).% \COMMENT{Indeed, this can be done since $D'\gg |H|$, $|a|\le |B^*|$ and thus $k\gg |b^j_r| a$ for all $j,r$.} We have to check that $\mathcal{H}^*$ is a perfect $H$-packing of~$G$. For all $i\le \ell$ let $n_i$ denote the number of vertices in the $i$th vertex class covered by~$\mathcal{H}^*$. Thus $n_i=D'$ whenever $i\neq i_1,i_2$. We have to check that $n_{i_1}=D'+a$ and $n_{i_2}=D'-a$. But $$ n_{i_1}=D'+a\sum_{j=1}^q\sum_{r=1}^{\ell-1} b^j_rd^j_r=D'+a $$ and $$ n_{i_2}=D'-a\sum_{j=1}^q\sum_{r=1}^{\ell-1} b^j_rd^j_r=D'-a, $$ as required. \noproof\bigskip \begin{lemma}\label{completebipmove2} Suppose that $H$ is a bipartite graph such that ${\rm hcf}_c(H)=1$ and ${\rm hcf}_\chi(H)=2$. Let $B^*$ be the bottlegraph assigned to~$H$. Let $D'\gg |H|$ be an integer divisible by~$|H|$. Let $a$ be an integer such that $|a|\le |B^*|$. Let $G$ be a complete bipartite graph with vertex classes $U_1$ and $U_2$ such that $|U_1|=D'+a$ and $|U_2|=D'-a$. 
Then $G$ contains a perfect $H$-packing.
\end{lemma}

\removelastskip\penalty55\medskip\noindent{\bf Proof. }
Clearly, we may assume that $a>0$. Our first aim is to take out a small number of disjoint copies of $H$ from $G$ to obtain sets $U'_i\subseteq U_i$ with $|U'_1|=|U'_2|$. To do this, we will use the fact that ${\rm hcf}_\chi(H)=2$. So let $c^1,\dots,c^q$ be all the optimal colourings of~$H$. Let $x^j_1\le x^j_2$ denote the sizes of the colour classes of~$c^j$. Since ${\rm hcf}_\chi(H)=2$, we can find $b^j\in\mathbb{Z}$ such that $2=\sum_{j=1}^q b^j (x^j_2-x^j_1)$. (Here we take $b^j:=0$ if $x^j_2=x^j_1$.) For each $j=1,\dots,q$ in turn we take out $a|b^j|$ copies of $H$ from~$G$. If $b^j\ge 0$ each of these $a|b^j|$ copies will meet $U_1$ in $x^j_2$ vertices and $U_2$ in $x^j_1$ vertices. If $b^j<0$ then each of these copies will have $x^j_1$ vertices in $U_1$ and $x^j_2$ vertices in~$U_2$. We choose all these copies of $H$ to be disjoint. It is easy to check that the subsets $U'_1$ and $U'_2$ obtained from $U_1$ and $U_2$ in this way have the same size, $u'$ say.%
\COMMENT{Indeed, $|U'_1|-|U'_2|=2a-a\sum_j b^j(x^j_2-x^j_1)=0$.}
Note that $|H|$ divides $2u'$ since $|H|$ divides $|U_1|+|U_2|$. Also observe that if in fact $|H|$ divides $u'$, then $G[U'_1\cup U'_2]$ (and thus also~$G$ itself) has a perfect $H$-packing. So we may assume that $|H|$ does not divide~$u'$ but that it does divide $2u'$. Hence $|H|$ is even and $u'=|H|k/2$ where $k$ is an odd integer. We will now use the fact that ${\rm hcf}_c(H)=1$ to show that we can take out further copies of $H$ from $G$ to achieve that the subsets $U''_1$ and $U''_2$ obtained in this way have the same size and that this size is divisible by~$|H|$. As ${\rm hcf}_c(H)=1$ there exists a component $C$ of $H$ such that $|C|$ is odd.
Using the fact that $|H|$ is even and thus $|H-C|$ is odd it is easy to see that there exists a $2$-colouring of $H$ whose colour classes both have odd size and another $2$-colouring whose colour classes both have even size.% \COMMENT{Indeed, let $a_1$ and $a_2$ denote the sizes of the colour classes of $C$ (in some 2-colouring). We may assume that $a_1$ is odd and $a_2$ is even. Since $|H|$ is even it follows that $|H-C|$ is odd. Let $a_3$ and $a_4$ denote the sizes of the colour classes of $H-C$ (in some $2$-colouring) such that $a_3$ is odd and $a_4$ is even. Thus there exists a 2-colouring of $H$ whose colour classes have size $a_1+a_3$ (even) and $a_2+a_4$ (even). Another colouring has colour classes of size $a_1+a_4$ (odd) and $a_2+a_3$ (odd).} We may assume that $c^1$ and $c^2$ are such colourings, i.e. that both $x^1_1$ and $x^1_2$ are odd and both $x^2_1$ and $x^2_2$ are even. Let $k_1:=|H|/2-x^2_1$ and $k_2:=|H|/2-x^1_1$. Take out $k_1$ copies of $H$ with $x^1_1$ vertices in $U'_1$ and $x^1_2$ vertices in~$U'_2$. Then take out $k_2$ copies of $H$ with $x^2_2$ vertices in $U'_1$ and $x^2_1$ vertices in~$U'_2$. Let $U''_1$ and $U''_2$ denote the subsets obtained from $U'_1$ and $U'_2$ in this way. It is easy to check that% \COMMENT{Indeed, \begin{align*} |U''_1|&= u'-(\frac{|H|}{2}-x^2_1)x^1_1-(\frac{|H|}{2}-x^1_1)x^2_2= u'-(\frac{|H|}{2}-x^2_1)x^1_1-(\frac{|H|}{2}-x^1_1)(|H|-x^2_1)\\ & =u'-\frac{|H|}{2}(x^1_1-x^2_1+|H|-2x^1_1) \end{align*} and \begin{align*} |U''_2|&= u'-(\frac{|H|}{2}-x^2_1)x^1_2-(\frac{|H|}{2}-x^1_1)x^2_1= u'-(\frac{|H|}{2}-x^2_1)(|H|-x^1_1)-(\frac{|H|}{2}-x^1_1)x^2_1\\ & =u'-\frac{|H|}{2}(-x^1_1-2x^2_1+|H|+x^2_1). \end{align*}} $$|U''_1|=|U''_2|=u'-\frac{|H|}{2}(|H|-x^1_1-x^2_1)=\frac{|H|}{2}(k-|H|+x^1_1+x^2_1).$$ But $k-|H|+x^1_1+x^2_1$ is even and so $|U''_1|$ is divisible by $|H|$, as desired. 
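As an illustration of this parity argument, consider the hypothetical example $H:=K_{1,3}\cup 2K_1$ (it is not used elsewhere). Here $|H|=6$, ${\rm hcf}_c(H)=1$ (the components have orders $4,1,1$) and ${\rm hcf}_\chi(H)=2$ (the optimal colourings have colour classes of sizes $(1,5)$, $(2,4)$ and $(3,3)$). We may take $c^1$ with $x^1_1=1$ and $x^1_2=5$ (both odd) and $c^2$ with $x^2_1=2$ and $x^2_2=4$ (both even). Then $k_1=|H|/2-x^2_1=1$ and $k_2=|H|/2-x^1_1=2$, and
$$|U''_1|=u'-1\cdot 1-2\cdot 4=u'-9=u'-1\cdot 5-2\cdot 2=|U''_2|,$$
which for $u'=|H|k/2=3k$ with $k$ odd equals $3(k-3)$ and so is divisible by $|H|=6$.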
\noproof\bigskip

The next lemma is an analogue of Lemma~\ref{completebipmove2} for perfect $H$-packings in graphs which are the disjoint union of two cliques. It will be needed in the proof of Lemma~\ref{extremal2}.

\begin{lemma}\label{completebipmove}
Suppose that $H$ is a bipartite graph such that ${\rm hcf}_c(H)=1$. Let $B^*$ be the bottlegraph assigned to~$H$. Let $D'\gg |H|$ be an integer divisible by~$|H|$. Let $a$ be an integer such that $|a|\le |B^*|$. Let $G$ be the disjoint union of two cliques of order $D'+a$ and $D'-a$ respectively. Then $G$ contains a perfect $H$-packing.
\end{lemma}

\removelastskip\penalty55\medskip\noindent{\bf Proof. }
Clearly, we may assume that $a>0$. Let $G_1$ be the clique of order $D'+a$ and let $G_2$ be the clique of order $D'-a$. Our aim is to take out disjoint copies of $H$ from $G$ in order to obtain subcliques $G'_i\subseteq G_i$ such that $|G'_i|$ is divisible by $|H|$ for both $i=1,2$. (Then each $G'_i$ (and thus also~$G$ itself) has a perfect $H$-packing.) Let $C_1,\dots,C_s$ denote the components of $H$. Since ${\rm hcf}_c(H)=1$, we can find $b^j\in\mathbb{Z}$ such that $ 1=\sum_{j=1}^s b^j |C_j|. $ For each $j=1,\dots,s$ in turn we take out $a|b^j|$ copies of $H$ from~$G$. If $b^j\ge 0$ each of these $a|b^j|$ copies will meet $G_1$ in $C_j$ and $G_2$ in~$H-C_j$. If $b^j<0$ then each of these copies will meet $G_1$ in $H-C_j$ and $G_2$ in~$C_j$. We choose all these copies of $H$ to be disjoint. Let $G'_1$ and $G'_2$ denote the subcliques obtained from $G_1$ and $G_2$ in this way. Then
\begin{align*}
|G'_1|=D'+a-a\sum_{j:\, b^j\ge 0} b^j|C_j|+a \sum_{j:\, b^j< 0} b^j(|H|-|C_j|) = D'+a \sum_{j:\, b^j< 0} b^j|H|.
\end{align*}
Thus $|H|$ divides $|G'_1|$. Similarly one can show that $|H|$ divides $|G'_2|$.
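To illustrate this with a concrete hypothetical example (not used elsewhere), take $H:=P_3\cup K_2$, the disjoint union of a path on three vertices and an edge. Then $H$ is bipartite, $|H|=5$, and ${\rm hcf}_c(H)=1$ since the components have orders $3$ and $2$, with $1=1\cdot 3-1\cdot 2$. So we take out $a$ copies of $H$ meeting $G_1$ in the component $P_3$ (and $G_2$ in $K_2$) and a further $a$ copies meeting $G_1$ in $H-K_2=P_3$ (and $G_2$ in $K_2$), giving
$$|G'_1|=(D'+a)-3a-3a=D'-5a \quad \mbox{ and } \quad |G'_2|=(D'-a)-2a-2a=D'-5a,$$
both of which are divisible by $|H|=5$ since $|H|$ divides $D'$.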
\noproof\bigskip

The following lemma states that if $G$ is a complete $\ell$-partite graph which is very close to being bottle-shaped then $G$ contains a perfect $H$-packing as long as the ratio of the smallest to the largest vertex class is a bit larger than in the bottlegraph $B^*$ of $H$. (The terms involving $D'$ in (i) and (ii) ensure that the latter condition holds.)

\begin{lemma}\label{completeconst}
Suppose that $H$ is a graph of chromatic number $\ell\ge 2$ such that ${\rm hcf}(H)=1$. Let $B^*$ be the bottlegraph assigned to~$H$. Let $z$ and $z_1$ be as defined in~$(\ref{eqdefxi})$. Let $D'\gg |H|$ be an integer divisible by~$|B^*|$. Let $G$ be a complete $\ell$-partite graph with vertex classes $U_1,\dots,U_\ell$ whose order $n\gg D'$ is divisible by~$|B^*|$. Let $u_i:=|U_i|$ for all~$i$. Suppose that
\begin{itemize}
\item[(i)] $|\, (u_i-D')-z(n-\ell D')/|B^*|\,|\le |B^*|$ for all $i<\ell$ and
\item[(ii)] $|\, (u_\ell-D')-z_1(n-\ell D')/|B^*|\,|\le |B^*|$.
\end{itemize}
Then one can take out $\ell D'/|H|$ disjoint copies of $H$ from $G$ to obtain a subgraph $G^*\subseteq G$ such that, writing $n^*:=|G^*|=n-\ell D'$ and $u^*_i:=|U_i\cap V(G^*)|$, we have $u^*_i=zn^*/|B^*|$ for all $i<\ell$ and $u^*_\ell=z_1n^*/|B^*|$. So in particular, $G^*$ contains a perfect $B^*$-packing and thus $G$ contains a perfect $H$-packing.
\end{lemma}

\removelastskip\penalty55\medskip\noindent{\bf Proof. }
Let us first consider the case when ${\rm hcf}_\chi(H)=1$. Put $a_i:=(u_i-D')-z(n-\ell D')/|B^*|$ for all $i<\ell$ and $a_\ell:=(u_\ell-D')-z_1(n-\ell D')/|B^*|$. Thus $\sum_{i=1}^\ell a_i=0$ and $|a_i|\le |B^*|$ for all~$i\le \ell$. Consider the complete $\ell$-partite graph $G'$ whose $i$th vertex class has size $D'+a_i$. By repeated applications of Lemma~\ref{completemove} one can show that this graph has a perfect $H$-packing~$\mathcal{H}$. View $G'$ as a subgraph of $G$ such that the $i$th vertex class of $G'$ lies in~$U_i$.
Then the subgraph $G^*$ obtained from $G$ by removing all the copies of $H$ in $\mathcal{H}$ (and thus deleting precisely the vertices in $V(G')\subseteq V(G)$) is as required in the lemma.%
\COMMENT{Note that $n^*=n-\ell D'$ is divisible by $|B^*|$ since both $n$ and $D'$ are divisible by~$|B^*|$. Thus $G^*$ is a `multiple' of $B^*$ and hence contains a perfect $B^*$-packing.}
In the remaining case when ${\rm hcf}_\chi(H)\neq 1$ (and thus $\chi(H)=2$, ${\rm hcf}_c(H)=1$ and ${\rm hcf}_\chi(H)=2$) we proceed similarly except that we now apply Lemma~\ref{completebipmove2} instead of Lemma~\ref{completemove}.
\noproof\bigskip

The next lemma shows that we can achieve the conditions in the setup of Lemma~\ref{completeconst} when larger deviations from the bottle-shape are allowed.

\begin{lemma}\label{completeapprox}
Let $H$, $B^*$, $z$, $z_1$, $G$, $U_i$, $u_i$ and $D'$ be defined as in the previous lemma, except that we now no longer assume that ${\rm hcf}(H)=1$ and that $G$ satisfies~(i) and~(ii) and we only require $D'\ge 0$ to be any integer divisible by~$|B^*|$. Suppose that $\chi_{cr}(H)<\ell$ and $a_i:=z(n-\ell D')/|B^*|-(u_i-D')\ge 0$ where $a_i\le n/(\ell^3|B^*|^2)$ for all $i<\ell$. Then one can take out at most $\ell^2 \sum_{i=1}^{\ell-1} a_i$ disjoint copies of $B^*$ from $G$ to obtain a subgraph $G^*\subseteq G$ such that, writing $n^*:=|G^*|$ and $u^*_i:=|U_i\cap V(G^*)|$, conditions (i) and (ii) of Lemma~\ref{completeconst} hold with $n$ replaced by $n^*$ and with $u_i$ replaced by~$u^*_i$.
\end{lemma}

\removelastskip\penalty55\medskip\noindent{\bf Proof. }
First note that we only need to consider the case when $D'=0$. Indeed, to reduce the general case to this one, suppose that Lemma~\ref{completeapprox} holds if $D'=0$. Now instead of~$G$ we consider the graph $G'$ obtained from $G$ by removing $D'$ vertices from each vertex class. Apply Lemma~\ref{completeapprox} to $G'$ to obtain a graph $G^*\subseteq G'$.
Let $U^*_1,\dots, U^*_\ell$ denote the vertex classes of~$G^*$. Then the vertex classes obtained from the $U^*_i$ by adding the $D'$ vertices span a subgraph of $G$ as desired in the lemma. We may also assume that $u_1\le \dots\le u_{\ell-1}$. Let $k:=n/|B^*|$ and let $\xi$ be as defined in~(\ref{eqdefxi}). Note that \begin{equation}\label{eqsizeuell} \quad u_i=kz-a_i \mbox{ for } i<\ell \quad \mbox{ and } \quad u_\ell=\xi zk+\sum_{i=1}^{\ell-1} a_i . \end{equation} We now take out disjoint copies of $B^*$ from $G$ in order to achieve that the subsets of $U_1,\dots,U_{\ell-1}$ thus obtained have almost the same size. More precisely, we proceed as follows. For every $i=1,\dots,\ell-2$ let $r_i:=\lfloor(u_{\ell-1}-u_i)/(z-z_1)\rfloor$. Put \begin{align*} r:=\sum_{i=1}^{\ell-2} r_i \le \frac{(\ell-2)(kz-a_{\ell-1})-\sum_{i=1}^{\ell-2} (kz-a_i)}{z-z_1} =\frac{\sum_{i=1}^{\ell-2} a_i-(\ell-2)a_{\ell-1}}{z-z_1}. \end{align*} For every $i=1,\dots,\ell-2$ in turn remove $r_i$ copies of $B^*$ from $G$, each having $z_1$ vertices in $U_i$ and $z$ vertices in every other set~$U_j$. Then the subsets $U'_i$ obtained from the $U_i$ in this way satisfy $0\le |U'_{\ell-1}|-|U'_i|<z-z_1$ for all $i=1,\dots,\ell-2$ and $$ |U'_\ell|-\xi |U'_{\ell-1}|=u_\ell-\xi u_{\ell-1}-r(z-z_1) \stackrel{(\ref{eqsizeuell})}{\ge}(\ell-1+\xi)a_{\ell-1}\ge 0. $$ Next we will take out further copies of $B^*$ from $G$ in order to achieve that the size of the $\ell$th vertex class is about $\xi$-times as large as the size of any other vertex class. In each step we remove $\ell-1$ copies of $B^*$, for every $i=1,\dots,\ell-1$ one copy having $z_1$ vertices in the $i$th vertex class and $z$ vertices in each other class. 
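To motivate the number of steps in the next display, note that in each step the $\ell$th vertex class loses $(\ell-1)z$ vertices while every other vertex class loses $(\ell-2)z+z_1$ vertices. So, recalling that $z_1=\xi z$ by~(\ref{eqdefxi}), each step decreases the quantity $|U'_\ell|-\xi|U'_{\ell-1}|$ by exactly
$$(\ell-1)z-\xi\left((\ell-2)z+z_1\right)=(\ell-1)z-(\ell-2)z_1-\xi z_1.$$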
A straightforward calculation shows that after $$ \left\lfloor \frac{|U'_\ell|-\xi |U'_{\ell-1}|}{(\ell-1)z-(\ell-2)z_1-\xi z_1} \right\rfloor $$ steps the subsets $U^*_i$ obtained from the $U'_i$ in this way span a subgraph as required in the lemma.% \COMMENT{First note that all the $U^*_i$ are non-empty. Indeed, in total we took out at most \begin{align*} r+(\ell-1)\frac{|U'_\ell|-\xi |U'_{\ell-1}|}{(\ell-1)z-(\ell-2)z_1-\xi z_1} \le \sum_{i=1}^{\ell-2} a_i+\ell^2 a_{\ell-1}\le \ell^2\sum_{i=1}^{\ell-1} a_i \le \frac{n}{|B^*|^2} \end{align*} copies of $H$. Here we also used that the lower bound on $|U'_\ell|-\xi |U'_{\ell-1}|$ is almost an equality. Thus from each $U_i$ we deleted at most $zn/|B^*|^2\le zn/2|B^*|\le 2u_i/3$ vertices. To check that the size of $U^*_\ell$ is ok let $x:=\lfloor (|U'_\ell|-\xi |U'_{\ell-1}|)/((\ell-1)z-(\ell-2)z_1-\xi z_1)\rfloor$ and note that \begin{align*} u^*_\ell-\xi u^*_{\ell-1} & =u'_\ell-x(\ell-1)z-\xi (u'_{\ell-1}-x((\ell-2)z+z_1))\\ & = u'_\ell-\xi u'_{\ell-1}-x[(\ell-1)z-(\ell-2)z_1-\xi z_1]. \end{align*} Thus $0\le u^*_\ell-\xi u^*_{\ell-1}\le (\ell-1)z$ and $0\le u^*_{\ell-1}-u^*_i\le z$ for all $i<\ell-1$. Let us now show that this implies that the $u^*_i$ are as required in the lemma. 
Indeed, we have that $(\ell-2)z\ge (\ell-2)u^*_{\ell-1}-\sum_{i=1}^{\ell-2}u^*_i\ge 0$ and thus $$(\ell-2)z\ge (\ell-2+\xi)u^*_{\ell-1}-\sum_{i=1}^{\ell-2}u^*_i-u^*_\ell \ge-(\ell-1)z.$$ Note that the term in the middle equals $(\ell-1+\xi)u^*_{\ell-1}-n^*.$ Thus $-|B^*|\le -\frac{\ell-2}{(\ell-1+\xi)}z\le \frac{zn^*}{|B^*|}-u^*_{\ell-1}= \frac{n^*}{\ell-1+\xi}-u^*_{\ell-1}\le \frac{\ell-1}{\ell-1+\xi}z\le |B^*|.$ Similarly, we get that \begin{align*} u^*_\ell & \le \xi u^*_{\ell-1}+(\ell-1)z \le \xi\frac{zn^*}{|B^*|}+\frac{(\ell-2)\xi z}{\ell-1+\xi}+(\ell-1)z \le\frac{\xi n^*}{\ell-1+\xi}+|B^*| \end{align*} and so $$u^*_\ell-\frac{\xi n^*}{\ell-1+\xi}\le |B^*|.$$ Finally, we have that $u^*_\ell\ge \xi u^*_{\ell-1}\ge \frac{\xi n^*}{\ell-1+\xi}-\xi|B^*|$ and so $u^*_\ell-\frac{\xi n^*}{\ell-1+\xi}\ge -|B^*|$. To get the bounds for the other $u_i^*$, note the above calculations have some room to spare} \noproof\bigskip \removelastskip\penalty55\medskip\noindent{\bf Proof of Lemma~\ref{corcomplete1}. } Let $D'$ be an integer as in Lemma~\ref{completeconst}. Consider the graph $F$ given in Lemma~\ref{corcomplete1}. By taking out at most $\ell-2$ disjoint copies of $H$ from $F$ if necessary, we may assume that $|F|$ is divisible by~$|B^*|$. It is easy to check that% \COMMENT{Indeed, we may assume that the vertex classes in Lemma~\ref{corcomplete1} satisfy $(1-d)u_{\ell-1}\le u_i\le u_{\ell-1}$ for all $i<\ell-1$. Together with the fact that $(1-\beta^{1/10})u_\ell\le \xi u_i\le (1-\beta)u_\ell$ this implies that $$ \frac{zn}{|B^*|}-u_{\ell-1} = \frac{1}{\ell-1+\xi}(\sum_{i=1}^{\ell} u_i)-u_{\ell-1} \le \frac{\ell-1}{\ell-1+\xi} u_{\ell-1} + \frac{\xi}{\ell-1+\xi} \frac{u_{\ell-1}}{1-\beta^{1/10}} -u_{\ell-1} \le \beta^{1/20} u_{ \ell-1} \le \beta^{1/20}n $$ The lower bound follows in the same way, just using the other side of the previous inequalities. Including the $D'$ in the inequalities doesn't change things significantly if $n\gg D'$. 
} $F$ satisfies the conditions in Lemma~\ref{completeapprox} if $|F|\gg D'$. Thus Lemmas~\ref{completeapprox} and~\ref{completeconst} together imply Lemma~\ref{corcomplete1}. \noproof\bigskip \section{Proof of the extremal cases}\label{sec:extremal} In most of the extremal cases, we know that $G$ contains several large almost independent sets $A_1,\dots,A_q$ where $1\le q <\ell$. In the preliminary Lemma~\ref{exceptionalvs} we show that we can modify the $A_i$ slightly to obtain sets $A_1^*,\dots,A_{q}^*$ which together with $V(G)\setminus \bigcup_{i=1}^q A_i^*$ induce an almost complete $(q+1)$-partite graph. In the proof of Lemma~\ref{exceptionalvs} below we need the following observation. \begin{lemma}\label{disjointstars} Let $i$ be a positive integer and let $G$ be a graph of order $n$ whose average degree satisfies $d:=d(G)\ge 2i$. Then $G$ contains at least $$\frac{dn}{4(i+1)\Delta(G)}$$ disjoint $i$-stars. \end{lemma} \removelastskip\penalty55\medskip\noindent{\bf Proof. } Let $k:=\lceil dn/(4(i+1)\Delta(G))\rceil$. We take out the disjoint $i$-stars greedily. So in each step we delete $i+1$ vertices and thus at most $(i+1)\Delta(G)$ edges. So after $<k$ steps the remaining subgraph $G'$ of $G$ has at least $e(G)-k(i+1)\Delta(G)\ge dn/4$ edges and thus $d(G')\ge d/2\ge i$. So $G'$ contains an $i$-star. This shows that the number of disjoint $i$-stars we can find greedily is at least~$k$. \noproof\bigskip \begin{lemma}\label{exceptionalvs} Suppose that $H$ is a graph of chromatic number $\ell\ge 2$ such that $\chi_{cr}(H)<\ell$. Let $B^*$ denote the bottlegraph assigned to~$H$. Let $\xi$, $z$ and $z_1$ be as defined in~$(\ref{eqdefxi})$ and let $0<\tau \ll \xi, 1-\xi, 1/|B^*|$. Let $|B^*|\ll D'\ll C$ be integers such that $D'$ is divisible by $|B^*|$. Let $G$ be a graph whose order $n\gg C,1/\tau$ is divisible by $|B^*|$ and whose minimum degree satisfies $\delta(G)\ge (1-\frac{1}{\chi_{cr}(H)})n+C$. 
Furthermore, suppose that for some $1\le q< \ell$ there are disjoint sets $A_1,\dots,A_q\subseteq V(G)$ which satisfy $|A_i|=(n-2\ell D')z/|B^*|+2D'$ and $d(A_i)\le \tau$. Let $A_{q+1}:=V(G)\setminus \bigcup_{i=1}^q A_i$. Then there are sets $A^*_1,\dots,A^*_{q+1}$ which satisfy the following properties:
\begin{itemize}
\item[{\rm (i)}] Let $A^*$ denote the union of $A^*_1,\dots,A^*_{q+1}$ and put $n^*:=|A^*|$. Then $G-A^*$ has a perfect $H$-packing. Moreover $n-n^*\le \tau^{3/5} n$.
\item[{\rm (ii)}] $|A^*_i|=(n^*-\ell D')z/|B^*|+D'$ and $d(A_i^*)\le \tau^{2/5}$ for all $i\le q$.
\item[{\rm (iii)}] For all $i,j\le q+1$ with $j\neq i$, each vertex in $A^*_i$ has at least $(1-\tau^{1/5})|A^*_j|$ neighbours in $A^*_j$.
\end{itemize}
\end{lemma}

\removelastskip\penalty55\medskip\noindent{\bf Proof. }
First note that the minimum degree condition on $G$ and~(\ref{eqchicr}) imply that each vertex $x\in G$ has fewer than $|A_1|=\dots=|A_q|$ non-neighbours in $G$, but possibly only slightly fewer. Given an index $i\le q+1$, we call a vertex $x\in A_i$ \emph{$i$-bad} if $x$ has at least $\tau^{1/3}|A_i|$ neighbours in $A_i$. Since $d(A_i)\le \tau$ for each $i\le q$, for all such $i$ the number of $i$-bad vertices is at most $\tau^{2/3}|A_i|$. Call a vertex $x\in A_i$ \emph{$i$-useless} if $x$ has at most $(1-\tau^{1/4})|A_j|$ neighbours in~$A_j$ for some $j\neq i$. Thus if $i\le q$, every $i$-useless vertex is also $i$-bad.%
\COMMENT{Indeed, to see this (and also for later arguments) it is helpful to note that
$$|A_1|=\frac{(n-2\ell D')z}{(\ell-1+\xi)z}+2D'= \frac{n}{\ell-1+\xi}-\frac{2D'(1-\xi)}{\ell-1+\xi}, $$
i.e.\ $A_1$ contains $\frac{2D'(1-\xi)}{\ell-1+\xi}$ vertices less than the `correct' number.}
In particular, for each $i\le q$ there are at most $\tau^{2/3}|A_i|$ vertices which are $i$-useless. To estimate the number $u_{q+1}$ of $(q+1)$-useless vertices we count the number $e(A_{q+1},V(G)\setminus A_{q+1})$ of edges emanating from~$A_{q+1}$.
We have
\begin{align*}
q|A_1|\delta(G) & -2\sum_{i=1}^q e(A_i)-2\binom{q}{2}|A_1|^2 \le e(A_{q+1},V(G)\setminus A_{q+1})\\
& \le u_{q+1}[(q-1)|A_1|+(1-\tau^{1/4})|A_1|]+(|A_{q+1}|-u_{q+1})q|A_1|
\end{align*}
which implies%
\COMMENT{As $n\ge (\ell-1+\xi)|A_1|$ we get
$$ q|A_1|\left(1-\frac{1}{\ell-1+\xi}\right)(\ell-1+\xi)|A_1|-2\tau q|A_1|^2-q(q-1)|A_1|^2 \le -u_{q+1}\tau^{1/4}|A_1|+q|A_{q+1}||A_1|$$
and thus
\begin{align*}
u_{q+1}\tau^{1/4} & \le q|A_{q+1}|-(\ell-2+\xi)q|A_1|+2\tau q|A_1|+q(q-1)|A_1|\\
& \le q|A_1|+2\tau q |A_1|-q|A_1|+\tau^2 n\le 2\tau q|A_{q+1}|/\xi +\tau^2 n.
\end{align*}
(To get the 2nd line note that $|A_{q+1}|=n-q|A_1|= (\ell-1+\xi)|A_1|-q|A_1|+\text{some large constant depending on }D'$.) Thus $u_{q+1}\le \tau^{2/3}|A_{q+1}|$.}
that the number $u_{q+1}$ of $(q+1)$-useless vertices is at most $\tau^{2/3}|A_{q+1}|$. So in total, at most $\tau^{2/3} n$ vertices of $G$ are $i$-useless for some~$i\le q+1$. Given $j\neq i$, we call a vertex $x\in A_i$ \emph{$j$-exceptional} if $x$ has at most $\tau^{1/3}|A_j|$ neighbours in $A_j$. Thus every such $x$ is both $i$-useless and $i$-bad.%
\COMMENT{This holds regardless of whether $j=q+1$ or $j<q+1$.}
It will be important later that the number of vertices which are $i$-useless for some~$i$ is much smaller than the number of neighbours in~$A_j$ of a non-$j$-exceptional vertex. By interchanging $i$-bad vertices with $i$-exceptional vertices if necessary, we may assume that for each $i$ for which there exist $i$-exceptional vertices, there are no $i$-bad vertices. Note that after we have interchanged vertices, every non-$j$-exceptional vertex still has at least $\tau^{1/3}|A_j|/2$ neighbours in~$A_j$. Similarly, every non-$i$-bad vertex still has at most $2\tau^{1/3}|A_i|$ neighbours in~$A_i$ and every non-$i$-useless vertex still has at least $(1-2\tau^{1/4})|A_j|$ neighbours in~$A_j$ for every~$j\neq i$.
For each index $i\le q$ in turn we now proceed as follows in order to take care of the $i$-exceptional vertices.% \COMMENT{The bound on $e(A_i)$ below only works if $A_i$ is not the small class, ie if $i<\ell$. Moreover, if $q\le \ell-2$ then no vertex will be $(q+1)$-exceptional.} Let $S_i\subseteq V(G)\setminus A_i$ denote the set of $i$-exceptional vertices and assume that $s_i:=|S_i|>0$. We will choose a set $\mathcal{S}_i$ of $s_i$ disjoint $z$-stars in $G[A_i]$ and interchange the star centres with the $i$-exceptional vertices.% \COMMENT{Note that when doing this for some $j>i$ we will not produce new $i$-exceptional vertices. Indeed, the only way this could happen is if we move some vertex $x\in A_i$ to $A_j$ which was a $j$-exceptional vertex and now becomes a (new) $i$-exceptional vertex. But this is not possible since $x$ has lots of neighbours in $A_i$ as it is $j$-exceptional.} To show the existence of such stars, note that $\Delta(A_i)\le 2\tau^{1/3}|A_i|$ since by our assumption no vertex in $A_i$ is $i$-bad. Moreover, we can bound the number of edges in $G[A_i]$ by \begin{align*} e(A_i)& \ge \frac{\delta(G)|A_i|-|G-(A_i\cup S_i)||A_i|-e(A_i,S_i)}{2}\\ & \ge \frac{1}{2}|A_i|\left[\left(1-\frac{1}{\ell-1+\xi}\right)n- \left(n-\frac{n}{\ell-1+\xi}-s_i\right)-2s_i\tau^{1/3}+\frac{C}{2}\right]\\ & \ge \frac{1}{2}|A_i|\left(C/2+s_i/2 \right). \end{align*} We only have $C/2$ instead of $C$ in the second line since we have to compensate for the fact that the size of the $A_i$'s is not exactly $n/(\ell-1+\xi)$. Thus $G[A_i]$ has average degree at least $C/2+s_i/2\ge 2z$. Lemma~\ref{disjointstars} now implies that $G[A_i]$ contains at least $$\frac{(C/2+s_i/2)|A_i|}{8(z+1)\tau^{1/3}|A_i|}\ge s_i $$ disjoint $z$-stars, as required. We still denote the modified sets by $A_i$ and let $\mathcal{S}:=\bigcup_{i=1}^{q} \mathcal{S}_i$. 
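To justify the final inequality in the last display (the count of disjoint $z$-stars), observe that since $\tau\ll 1/|B^*|$ and $z+1\le |B^*|$ we may assume that $8(z+1)\tau^{1/3}\le 1/2$, and clearly $C/2+s_i/2\ge s_i/2$, so
$$\frac{(C/2+s_i/2)|A_i|}{8(z+1)\tau^{1/3}|A_i|}\ge \frac{s_i/2}{1/2}=s_i.$$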
We now choose a set $\mathcal{B}$ of $|\mathcal{S}|$ disjoint copies of $B^*$ in~$G$, each containing precisely one of the stars in~$\mathcal{S}$. Moreover, each such copy will have precisely $z$ vertices in every $A_i$ with $i\le q$. To see that such copies exist, we will first show that $G[A_{q+1}]$ contains many disjoint copies of $B_1^*$, where $B_1^*$ denotes the subgraph of $B^*$ obtained by removing $q$ of the large vertex classes. For this, let $n_{q+1}:=|A_{q+1}|$. Then \begin{align}\label{mindegGq1} \frac{\delta(G[A_{q+1}])}{n_{q+1}} & \ge \frac{\delta(G)-|A_1\cup \dots \cup A_q|}{n} \cdot\frac{n}{n_{q+1}}\nonumber\\ & \ge \left(1-\frac{1}{\ell-1+\xi}-\frac{q}{\ell-1+\xi}\right) \frac{\ell-1+\xi}{\ell-q-1+\xi}\nonumber\\ & = 1-\frac{1}{\ell-q-1+\xi}\nonumber\\ & =1-\frac{1}{\chi_{cr}(B^*_1)}. \end{align} (The fact that $C\gg D'$ enables us to ignore the terms involving the constant $D'$ when estimating~$n/n_{q+1}$.)% \COMMENT{Indeed, we use that $n/n^*_1\ge n/n_1$. We have $$(\ell-1+\xi)|A_1|= (\ell-1+\xi)\left(\frac{n-\ell D}{\ell-1+\xi}+D\right)= n-D(1-\xi)$$ and so $n_1=n-q|A_1|=(\ell-1-q+\xi)|A_1|+D(1-\xi)$. Hence $$n/n_1\ge \frac{\ell-1+\xi}{(\ell-1-q+\xi)+D/|A_1|}\ge \frac{\ell-1+\xi}{\ell-1-q+\xi}- \frac{\ell-1+\xi}{\ell-1-q+\xi}\frac{D}{|A_1|}. $$ Also, the intermediate 3rd line in above is: $\left(\frac{\ell-2+\xi-q}{\ell-1+\xi}- 2|B^*|\tau^{2/3}\right)\frac{\ell-1+\xi}{\ell-q-1+\xi}$} Since $\tau^{1/5}\ll \chi_{cr}(B^*_1)-(\chi(B^*_1)-1)$, by repeated applications of the Erd\H{o}s-Stone theorem (see~(\ref{eqErdosStone})) we can find $\tau^{1/5}n_{q+1}$ disjoint copies of $B_1^*$ in $G[A_{q+1}]$. Since at most $\tau^{2/3}n_{q+1}$ vertices in $A_{q+1}$ are $(q+1)$-useless, we may assume that all of these copies of $B^*_1$ avoid the $(q+1)$-useless vertices. 
Moreover, all the stars $S\in \mathcal{S}$ are disjoint and it is easy to see that none of the vertices of such a star~$S$ can be $i$-useless, where $A_i$ is the set which originally contained~$S$.%
\COMMENT{Otherwise it would also be $i$-bad (since $i\le q$) and we could have interchanged this $i$-bad vertex with some $i$-exceptional vertex.}
The latter implies that each vertex of~$S$ is joined to at least $(1-2\tau^{1/4})|A_j|$ vertices in $A_j$ for every $j\neq i$. Thus in particular, each vertex of $S$ is joined to all vertices in almost all of the copies of $B_1^*$ selected above. Altogether, the above shows that we can greedily choose the set $\mathcal{B}$ of $|\mathcal{S}|$ disjoint copies of $B^*$ as follows: for each copy, first choose a star $S \in \mathcal{S}$, then choose a copy of $B_1^*$ selected above, all of whose vertices are joined to all vertices in~$S$. If the centre of~$S$ was moved into~$A_{q+1}$, we interchange it with some vertex in the copy of~$B^*_1$. Finally we choose the remaining vertices of $B^*$. Let $A'_i$ be the subset of $A_i$ which contains all those vertices that do not lie in a copy of $B^*$ in~$\mathcal{B}$. Note that
\begin{equation}\label{eqsizeAi'}
|A_i\setminus A_i'|\le |B^*||\mathcal{S}|\le |B^*|\tau^{2/3}n.
\end{equation}
After this process we have removed all the $i$-exceptional vertices for all $i\le q$. The next step is to deal with the useless vertices (and thus also with the $(q+1)$-exceptional vertices). For each such vertex~$x$ we will move~$x$ into another vertex class and/or we will remove a copy of $B^*$ which contains~$x$. (We do the former if $x$ lies in the set $U$ defined below.) Let $U$ denote the set of all vertices in $A'_1\cup\dots\cup A'_q$ which had at most $(1-\tau^{1/4})|A_{q+1}|$ neighbours in $A_{q+1}$. So in particular, each $u\in U$ is $i$-useless where $i\le q$ is the index such that $u\in A'_i$. Thus $|U|\le \tau^{2/3}n$.
(Moreover, if $q=\ell-1$ then $U$ contains all the $(q+1)$-exceptional vertices. If $q<\ell-1$ then there are no $(q+1)$-exceptional vertices.) Note that each $u\in U\cap A'_i$ must still have at least $\tau^{1/3}|A'_i|$ neighbours in its own class $A'_i$. Moreover, as in~(\ref{mindegGq1}) one can show that each $u\in U$ satisfies \begin{equation}\label{eqNu} |N(u)\cap A'_{q+1}|\ge \delta(G)-|A_1\cup\dots\cup A_q|-|A_{q+1}\setminus A'_{q+1}| \stackrel{(\ref{eqsizeAi'})}{\ge} \left(1-\frac{1}{\chi_{cr}(B^*_1)}-\tau^{3/5}\right)|A'_{q+1}|. \end{equation} Let $A''_1,\dots,A''_{q+1}$ denote the sets obtained from the $A'_i$ by moving all the vertices in $U$ to $A'_{q+1}$. Then~(\ref{mindegGq1}) and~(\ref{eqNu}) together with the fact that $\tau^{1/5}\ll \chi_{cr}(B^*_1)-(\chi(B^*_1)-1)$ imply that \begin{equation}\label{eqmindegA''q+1} \delta(G[A''_{q+1}])\ge \left(1-\frac{1}{\chi_{cr}(B^*_1)}-\tau^{1/2}\right)|A''_{q+1}| \ge \left(1-\frac{1}{\ell-q-1}+\tau^{1/5}\right)|A''_{q+1}|. \end{equation} (If $q=\ell-1$ then we will only use the first inequality in~(\ref{eqmindegA''q+1}).) Consider the graph $K$ obtained from the complete $(q+1)$-partite graph with vertex classes $A''_1,\dots,A''_{q+1}$ by making $A''_{q+1}$ into a clique. Let $K''$ denote the subgraph of $K$ obtained by deleting $D'$ vertices from each of the first $q$ classes and $(\ell-q)D'$ vertices from~$A''_{q+1}$. An application of Lemmas~\ref{completeapprox} and ~\ref{completeconst} shows that by taking out at most $\ell^3|U|+D'\le 2\ell^3\tau^{2/3} n$ disjoint copies% \COMMENT{We take out at most $\ell^3|U|$ disjoint copies of $B^*$ in Lemma~\ref{completeapprox} and thus at most $\ell^2|U|$ disjoint copies of $H$.} of $H$ from $K''$ one can obtain a subgraph $K'''$ whose vertex classes $A'''_1,\dots,A'''_{q+1}$ satisfy $|A'''_i|=z|K'''|/|B^*|$ for all $i\le q$. Moreover, each of these copies of $H$ meets~$A''_{q+1}$ in an $(\ell-q)$-partite graph. 
Together with~(\ref{eqmindegA''q+1}) and the Erd\H{o}s-Stone theorem this shows that for each of these copies of $H$ in $K''$ we can take out a copy of $H$ from $G$ which intersects the $q+1$ vertex classes in exactly the same way and avoids all the useless vertices. We add all these copies of $H$ in $G$ to the set~$\mathcal{B}$. Adding the $\ell D'$ vertices set aside (when going from $K$ to $K''$) to the vertex classes again we thus obtain vertex sets $A^\diamond_1,\dots,A^\diamond_{q+1}$ such that $|A^\diamond_i|=(n^\diamond-\ell D')z/|B^*|+D'$ for all $i\le q$, where $n^\diamond:=|A^\diamond_1\cup\dots\cup A^\diamond_{q+1}|$. By the bound in~(\ref{eqsizeAi'}) and the previous paragraph we have removed at most $3\ell^3|B^*|\tau^{2/3}n\ll \tau^{1/3}n$ vertices so far. Thus for all $i\le q+1$ every vertex in $A^\diamond_i$ still has at least $\tau^{1/3}|A^\diamond_j|/3$ neighbours in each other~$A^\diamond_j$ with $j\le q$ (since it is non-$j$-exceptional). Moreover, since we have moved the vertices in $U$, for all $i\le q$ every vertex in $A^\diamond_i$ still has at least $(1-3\tau^{1/4})|A^\diamond_{q+1}|$ neighbours in~$A^\diamond_{q+1}$. Also, $d(A^\diamond_i)\le \tau^{2/5}/2$ for all $i\le q$ (note that exchanging $i$-exceptional vertices with $i$-bad vertices does not affect the density too much).% \COMMENT{Here we have to be careful since, apart from deleting a small number of vertices from $A_i$ we also added some new vertices when we interchanged $i$-bad vertices with $i$-exceptional vertices or $j$-exceptional vs (in $A_i$) with $j$-bad vertices. In the latter step we might have added up to $\tau^{2/3}|A_i|$ vertices to $A_i$ which see everything in $A_i$. Hence the increase in the density.} For all $i\le q$ in turn, we now add further copies of~$B^*$ to $\mathcal{B}$ in order to cover all those $i$-useless vertices which still lie in $A^\diamond_i$. Let $U'$ denote the set of all these vertices. 
Again, each such copy of $B^*$ will meet every $A^\diamond_i$ with $i\le q$ in precisely $z$ vertices. It is easy to see that these copies of $B^*$ can be found greedily.% \COMMENT{Indeed, at least $\tau^{1/3}|A_j|/4-\tau^{2/3}n\gg \tau^{1/3}|A\diamond_j|/8$ of the neighbours of an $i$-useless vertex in some $A\diamond_j$ are typical, ie not $j$-useless. So the number of these neighbours is much larger than the number of vertices which we have already removed from $A_j$ in order to deal with other $j'$-useless vertices, so we can proceed greedily.} This follows similarly as before since $|U'|$ is much smaller than the number of neighbours in any~$A^\diamond_j$ of such a (non-$j$-exceptional) vertex $u\in U'$ and since each $u\in U'$ is joined to almost all vertices in $A^\diamond_{q+1}$. More precisely, given $u\in U'\cap A^\diamond_k$, let $i\le q$ with $i\neq k$ be such that $|N(u)\cap A^\diamond_i|$ is minimal. Note that this implies that $|N(u)\cap A^\diamond_j|\ge |A^\diamond_j|/3$ for all $j\le q$ with $j\neq i,k$. We choose the copy of~$B^*$ containing~$u$ by first picking $z$ neighbours of $u$ in $A^\diamond_i$ which are not $i$-useless (this can be done since $u$ has at least $\tau^{1/3}|A^\diamond_i|/3$ neighbours in~$A^\diamond_i$). Then we pick a copy of~$B^*_1$ in~$A^\diamond_{q+1}$ which is joined to all the $z+1$ vertices chosen before and which also avoids all the useless vertices. Finally, we pick the remaining vertices. Call a vertex $u\in A^\diamond_{q+1}$ \emph{worthless} if $u$ has at most $(1-3\tau^{1/4})|A^\diamond_i|$ neighbours in~$A^\diamond_i$ for some $i\le q$. Thus every worthless vertex is either $(q+1)$-useless or lies in~$U$. In particular, at most $\tau^{2/3}n$ vertices are worthless. For each worthless vertex~$u$ we will remove a copy of $B^*$ containing~$u$. Since $u$ has at least $\tau^{1/3}|A^\diamond_i|/3$ neighbours in~$A^\diamond_i$ for each $i\le q$, it is easy to see that this can be done if $q=\ell-1$. 
So suppose that $q<\ell-1$. We now consider all the worthless vertices~$u$ in turn. Again, we let $i\le q$ be such that $|N(u)\cap A^\diamond_i|$ is minimal. Thus $|N(u)\cap A^\diamond_j|\ge |A^\diamond_j|/3$ for all $j\le q$ with $j\neq i$. Choose a set~$T_u$ of~$z$ neighbours of~$u$ in~$A^\diamond_i$. Let $N_u$ denote the set of all those common neighbours of the vertices from~$T_u$ in the set~$A^\diamond_{q+1}$ which are not worthless. Thus \begin{equation} \label{Nusize} |N_u|\ge (1-\tau^{1/5})|A^\diamond_{q+1}|. \end{equation} We will show that there are many disjoint copies of~$B^*_1$ in~$G[N_u]$ such that all but one vertex class in each of these copies lie in the neighbourhood of~$u$. We will call such a copy of~$B^*_1$ \emph{good for~$u$}. Let $t:=\lceil 3/\xi\rceil$. Let~$K^*$ denote the complete $(\ell-q)$-partite graph with $\ell-q-1$ vertex classes of size~$zt$ and one vertex class of size~$z_1t$. Note that $\chi_{cr}(K^*)=\chi_{cr}(B^*_1)$. Thus Theorem~\ref{thmKomlos} together with~(\ref{Nusize}) and the first inequality in~(\ref{eqmindegA''q+1}) imply that $G[N_u]$ contains a $K^*$-packing which covers all but at most $\tau^{1/6}|N_u|$ vertices. On the other hand, similarly as in~(\ref{eqNu}) we have \begin{align*} |N(u)\cap N_u| & \stackrel{(\ref{Nusize})}{\ge} \left(1-\frac{1}{\chi_{cr}(B^*_1)}-\tau^{1/6}\right)|N_u| \stackrel{(\ref{eqchicr})}{=} \left(\frac{(\ell-q-2+\xi)zt}{(\ell-q-1+\xi)zt}-\tau^{1/6}\right)|N_u|\\ & \ge \left(\frac{(\ell-q-2)zt+2z}{|K^*|}+\tau^{1/7}\right)|N_u|, \end{align*} where the last inequality holds since $\xi zt\ge 2z+2|K^*|\tau^{1/7}$. Thus there are many copies of $K^*$ in the $K^*$-packing such that $u$ is joined to at least~$z$ vertices in all but at most one class. Each such copy of~$K^*$ gives a copy of~$B^*_1$ which is good for~$u$. 
Take such a copy of~$B^*_1$, exchange~$u$ with an appropriate vertex, extend the new copy of~$B^*_1$ to a copy of~$B^*$ (which will meet~$A^\diamond_i$ precisely in~$T_u$) and remove it. Since there is room to spare in the calculations above, we can do this for every worthless vertex~$u$ in turn. Let $A^*_i$ denote the subset of all those vertices in $A^\diamond_i$ which are not covered by some copy of $B^*$ or~$H$ in~$\mathcal{B}$. Then the sets $A^*_1,\dots,A^*_{q+1}$ are as required in the lemma.% \COMMENT{To check that the size of $A^*_1$ is as required in (ii) let $r$ denote the number of copies of $B^*$ taken out. Thus $|A^*_1|=|A_1|-rz$ and $n^*=n-rz(\ell-1+\xi)$ and so \begin{align*} |A^*_1| & =\frac{z(n-\ell D')}{|B^*|}+D'-rz= \frac{z(n^*+rz(\ell-1+\xi)-\ell D')}{|B^*|}+D'-rz\\ & =\frac{z(n^*-\ell D')}{|B^*|}+D'+\frac{z^2r(\ell-1+\xi)}{z(\ell-1+\xi)}-zr =\frac{z(n^*-\ell D')}{|B^*|}+D'. \end{align*}} \noproof\bigskip We first deal with the case where $G$ looks very much like the complete $\ell$-partite graph whose vertex class sizes are a multiple of those of the bottlegraph $B^*$ of $H$. \begin{lemma}\label{extremal1} Suppose that $H$ is a graph of chromatic number $\ell\ge 2$ such that ${\rm hcf}(H)=1$. Let $B^*$ denote the bottlegraph assigned to~$H$. Let $\xi$, $z$ and $z_1$ be as defined in~$(\ref{eqdefxi})$ and let $0<\tau \ll \xi, 1-\xi, 1/|B^*|$. Let $|B^*|\ll D\ll C$ be integers such that $D$ is divisible by $2|B^*|$. Let $G$ be a graph whose order $n\gg C,1/\tau$ is divisible by $|B^*|$ and which satisfies the following two properties: \begin{itemize} \item[{\rm (i)}] $\delta(G)\ge (1-\frac{1}{\chi_{cr}(H)})n+C$. \item[{\rm (ii)}] The vertex set of $G$ can be partitioned into $A_1,\dots,A_\ell$ such that, for all $i<\ell$, we have $|A_i|=(n-\ell D)z/|B^*|+D$ and $d(A_i)\le \tau$. \end{itemize} Then $G$ has a perfect $H$-packing. \end{lemma} \removelastskip\penalty55\medskip\noindent{\bf Proof. 
} Our aim is to find a subgraph of $G$ for which it is clear that we can apply the Blow-up lemma to find a perfect $H$-packing. We first apply Lemma~\ref{exceptionalvs} with $q:=\ell-1$ and $D':=D/2$ to obtain sets $A^*_1,\dots,A^*_\ell$. Let $G^*$ denote the subgraph of $G$ induced by the union of all these $A^*_i$. So $G-G^*$ has a perfect $H$-packing. It is easy to see (and follows from Lemma~\ref{completeconst} applied with $D'=D/2$) that the complete $\ell$-partite graph with vertex classes $A^*_1,\dots,A^*_\ell$ has a perfect $H$-packing. Since in $G^*$ each vertex in $A^*_i$ has at least $(1-\tau^{1/5})|A^*_j|$ neighbours in each other~$A^*_j$ the bipartite subgraph of $G^*$ between every pair $A^*_i$, $A^*_j$ of sets is $(2\tau^{1/5},1/2)$-superregular. Hence the Blow-up lemma implies that $G^*$ has a perfect $H$-packing. Together with all the copies of $H$ chosen so far this yields a perfect $H$-packing in~$G$. \noproof\bigskip Another family of graphs having large minimum degree but not containing a perfect $H$-packing can be obtained from a complete $\ell$-partite graph whose vertex classes are multiples of those of the bottlegraph $B^*$ as follows: remove all edges between the smallest vertex class ($A_\ell$ say) and one of the others ($A_{\ell-1}$ say), remove one vertex $x$ from $A_\ell$, delete all the edges between $x$ and $A_1$ and add $x$ to~$A_1$, add all edges within the remainder of $A_\ell$ and add a sufficient number of edges within $A_{\ell-1}$. The next lemma deals with the case where $G$ is similar to this family of graphs, although slightly more dense. \begin{lemma}\label{extremal2} Suppose that $H$ is a graph of chromatic number $\ell\ge 2$ such that ${\rm hcf}(H)=1$. Let $B^*$ denote the bottlegraph assigned to~$H$. Let $\xi$, $z$ and $z_1$ be as defined in~$(\ref{eqdefxi})$ and let $0<\tau \ll \xi, 1-\xi, 1/|B^*|$. Then there exists an integer $s_0=s_0(\tau,H)$ such that the following holds. 
Let $|B^*|\ll D\ll C$ be integers such that $D$ is divisible by~$s_0$. Let $G$ be a graph whose order $n\gg C,1/\tau$ is divisible by $|B^*|$ and which satisfies the following properties: \begin{itemize} \item[{\rm (i)}] $\delta(G)\ge (1-\frac{1}{\chi_{cr}(H)})n+C$. \item[{\rm (ii)}] There are disjoint vertex sets $A_1,\dots,A_{\ell-2}$ in $G$ such that $|A_i|=(n-\ell D)z/|B^*|+D$ and $d(A_i)\le \tau$ for all $i\le \ell-2$. \item[{\rm (iii)}] The graph $G_1:=G-\bigcup_{i=1}^{\ell-2} A_i$ contains a vertex set $A$ such that $d(A,V(G_1)\setminus A)\le \tau$. \end{itemize} Then $G$ has a perfect $H$-packing. \end{lemma} \removelastskip\penalty55\medskip\noindent{\bf Proof. } Put $$p:=\lfloor 4/\xi\rfloor.$$ Fix further constants ${\varepsilon}',d',\theta,\tau_2,\dots,\tau_p$ such that $$ 0<{\varepsilon}'\ll d'\ll\theta\ll\tau\ll\tau_2\ll\tau_3\ll\dots\ll \tau_p\ll \xi,1-\xi,1/|B^*|. $$ Let $B_1^*$ be the complete bipartite graph with vertex classes of size $z_1$ and $z$ (in other words, it is the subgraph of $B^*$ induced by its two smallest vertex classes). Let $k_1({\varepsilon}',\theta,B^*_1)$ be as defined in Lemma~\ref{nonextremal}. Put $$s_0:=4k_1(p!)|B^*_1||B^*|. $$ Let $q:=\ell-2$ and $A_{q+1}:=V(G_1)$. If $\ell\ge 3$ we first apply Lemma~\ref{exceptionalvs} with $D'=D/2$ to obtain sets $A_1^*,\dots,A_{q+1}^*$. Let $G^*$ denote the subgraph of $G$ induced by all the~$A^*_i$ and put $n^*:=|G^*|$. Thus $G^*$ was obtained from $G$ by taking out a small number of disjoint copies of~$H$ and $n-n^*\le \tau^{3/5}n$. We have to show that $G^*$ has a perfect $H$-packing. Put $G^*_1:=G[A^*_{q+1}]$ and $n^*_1:=|G^*_1|$. Note that $n^*_1$ is divisible by~$|B^*_1|$. (In the case when $\ell=2$ we put $G^*=G^*_1=G$.) Also, it will be crucial later on that \begin{equation}\label{mindegG*1} \frac{\delta(G^*_1)}{n^*_1} \ge 1-\frac{1}{\chi_{cr}(B^*_1)}-\tau^{1/2}=\frac{\xi}{1+\xi}-\tau^{1/2}. 
\end{equation} This can be proved in the same way as~(\ref{mindegGq1}); the only difference is that we have to account for the fact that $V(G)$ and $V(G^*)$ are not quite the same. (But since $n-n^* \le \tau^{3/5}n$, we can compensate for this by including the error term $\tau^{1/2}$ in the above.) Ideally, we would like to choose a perfect $B_1^*$-packing in $G_1^*$ using Lemma~\ref{nonextremal}. If $\ell\ge 3$ we would like to extend each copy of $B_1^*$ to a copy of $B^*$ by adding suitable vertices in $A^*_1\cup \dots\cup A^*_q$. Inequality~(\ref{mindegG*1}) implies that $G_1^*$ has sufficiently large minimum degree for this. However, we cannot apply Lemma~\ref{nonextremal} directly to $G_1^*$ since condition~(ii) is not satisfied. So we will consider the `almost components' of $G_1^*$ instead. Note that if $C'\subseteq V(G^*_1)$ is such that $d(C',V(G^*_1)\setminus C')\le \tau_p$ then $|C'|\ge \delta(G^*_1)-\tau_p n^*_1 \ge \xi n^*_1/3$. Let $r\le p$ be maximal such that there is a partition $C_1,\dots,C_r$ of $V(G^*_1)$ with $d(C_j,V(G^*_1)\setminus C_j)\le \tau_r$ for all $j\le r$. We have just seen that $r\le 3/\xi<p$ and \begin{equation}\label{eqsizeCi} \left(\frac{\xi}{1+\xi}-\tau_r^{1/2}\right)n^*_1\le |C_j| \le \left(\frac{1}{1+\xi}+\tau_r^{1/2}\right)n^*_1. \end{equation} Moreover, $r\ge 2$ since $d(A\cap V(G^*_1),V(G^*_1)\setminus A)\le \tau_2$. Recall that the aim is to choose a perfect $B^*_1$-packing in $G^*_1[C_j]$ (for all $j\le r$) and to extend each copy of $B^*_1$ to a copy of $B^*$ by adding suitable vertices in $A^*_1\cup \dots\cup A^*_q$. Our choice of $r$ ensures that no $G^*_1[C_j]$ is close to an extremal graph and thus to find a perfect $B^*_1$-packing we can argue similarly as in the non-extremal case (cf.~Lemma~\ref{nonextremal} and Corollary~\ref{cornonextremal}). More precisely, we proceed as follows. The first step is to tidy up the sets $C_j$ to ensure that \emph{every} vertex in $C_j$ has only a few neighbours in~$G^*_1-C_j$.
(The latter implies that the minimum degree of each graph $G^*_1[C_j]$ is about~$\delta(G^*_1)$.) Given a set $C'\subseteq V(G^*_1)$, put $\overline{C}':=V(G^*_1)\setminus C'$. We say that a vertex $x\in C_j$ is \emph{$j$-useless} if it has less than $\xi|C_j|/3$ neighbours in~$C_j$. By~(\ref{mindegG*1}) each $j$-useless vertex $x$ has at least $\xi|\overline{C_j}|/3$ neighbours in $\overline{C_j}$. Since we assumed that $d(C_j,\overline{C_j})\le \tau_r$, this implies that the number of $j$-useless vertices is at most $\tau^{3/4}_r|C_j|$. For all $j\le r$ we remove every $j$-useless vertex $x\in C_j$ and add $x$ to some $C_i$ which contains at least $\xi|C_i|/3$ neighbours of~$x$. We denote by $C'_j$ the sets thus obtained from the~$C_j$. So every vertex in $C'_j$ has at least $\xi|C'_j|/4$ neighbours in~$C'_j$. Moreover, $d(C'_j,\overline{C'_j})\le \tau^{2/3}_r$. We say that a vertex $x\in C'_j$ is \emph{$j$-bad} if $x$ has at least $\tau^{1/6}_r |\overline{C'_j}|$ neighbours in $\overline{C'_j}$. Thus there exist at most $\tau^{1/2}_r|C'_j|$ such vertices. For each such vertex $x$ in turn we take out a copy of $B^*_1=K_{z,z_1}$ from $G^*_1[C'_j]$ which contains~$x$. All these copies of $B^*_1$ can be found greedily since each vertex in $C'_j$ has at least $\xi|C'_j|/4$ neighbours in~$C'_j$ (and thus we can apply for instance the Erd\H{o}s-Stone theorem).% \COMMENT{Indeed, let $N$ be a set of $\xi|C'_j|/4$ neighbours of $x$ in $C'_j$. We distinguish two cases. Firstly, suppose that at least half of the vertices in $N$ have at least half of their neighbours in $N$. Then $N$ is dense and thus contains a copy of $K_{z-1,z_1}$ which together with $x$ forms a copy of $B^*_1$. If at least half of the vertices in $N$ have at least half of their neighbours in $C'_j\setminus N$ then the bipartite graph between $N$ and $C'_j\setminus (N\cup\{x\})$ is dense and thus contains a copy of $K_{z-1,z_1}$ with $z-1$ vertices in $C'_j\setminus (N\cup\{x\})$.
Again, this gives us a copy of $B^*_1$ containing~$x$. Since the number of $j$-bad vertices is much smaller than $\xi|C'_j|/4$, we can argue greedily.} We denote by $\mathcal{B}'_j$ the set of all copies of $B^*_1$ chosen for the $j$-bad vertices. Let $C''_j$ be the subset obtained from $C'_j$ in this way. Put $n'':=|C''_1\cup \dots\cup C''_r|$. Our next aim is to take out a \emph{bounded} number of copies of $H$ to ensure that for each $j\le r$ the size of the subset thus obtained from~$C''_j$ is divisible by~$|B^*_1|$. Let $D':=D/(4r)$.% \COMMENT{Recall that $D$ is divisible by $4r|B^*|$.} Choose integers $t_j>0$ and $a_j$ such that $|C_j''|=|B^*_1|t_j+a_j+4D'$ where $\sum_{j=1}^r a_j=0$ and $|a_j|<|B^*_1|$.% \COMMENT{Note that we can do this: Since $|B_1^*|$ divides $D'$, we can find $a'_j$ and $t'_j> 0$ with $0 \le a'_j < |B_1^*|$ so that $|C_j''|=|B^*_1|t'_j+a'_j+4D'$ and $\sum a'_j= s|B_1^*|$, where $s\le r$. Note that the number of nonzero $a_j'$ is at least $s$. Now let $a_j=a'_j-|B_1^*|$ for the first $s$ of the $a'_j$ which are nonzero (and set $t_j=t'_j+1$ for these $j$) and $a_j=a'_j$ and $t_j=t'_j$ for the others.} Note that this implies $|B^*_1|\sum t_j= n''-D$. We now need to distinguish the cases when $\ell\ge 3$ and $\ell=2$. Let us first consider the case when~$\ell\ge 3$. Let $G'$ be the graph obtained from the complete $(\ell-2)$-partite graph with vertex classes of size $D/4$ by adding $r$ complete bipartite graphs $K_{D'+a_j,D'}$ (with $1 \le j \le r$) and joining all the vertices of these bipartite graphs to all the vertices of the complete $(\ell-2)$-partite graph. So $|G'|=\ell D/4$.
An $r$-fold application of Lemma~\ref{completemove} shows that $G'$ contains a perfect $H$-packing~$\mathcal{H}'$.% \COMMENT{We can apply this lemma since $|B^*|$ (and thus $|H|$) divides~$D'$.} This in turn implies that we can greedily take out $|\mathcal{H}'|= \ell D/(4|H|)$ disjoint copies of $H$ from $G$ to% \COMMENT{Indeed, such copies of $H$ can be found greedily since each vertex in $A^*_i$ is joined to almost all vertices in each other set $A^*_{i'}$ as well as to almost all vertices in each $C''_j$. Similarly each vertex in $C''_j$ is joined to almost all vertices in each set~$A^*_i$ and to a constant fraction of the vertices in~$C''_j$. Thus to choose a copy of $H$ which intersects the sets $A^*_i$ as well as the set $C''_j$ (say) as desired we first choose a suitable bipartite subgraph in $C''_j$. Then the common neighbourhood in $A^*_i$ of the vertices in this bipartite subgraph is almost all of $A^*_i$ and thus we can extend the bipartite subgraph to the desired copy of~$H$.} achieve that the subsets $A^\diamond_i$ and $C^\diamond_j$ thus obtained from the sets $A^*_i$ and $C''_j$ have the following sizes (where $n^\diamond=n^*-\ell D/4$ denotes the remaining number of vertices in the graph) \COMMENT{$|A^\diamond_1|=|A^*_1|-D/4=z(n^*-\ell D/2)/|B^*|+D/2-D/4= z(n^\diamond-\ell D/4)/|B^*|+D/4$} \begin{equation}\label{Aidiamond} |A^\diamond_i|=|A^*_i|-D/4 \quad \mbox{ and thus } \quad |A^\diamond_i|=z(n^\diamond-\ell D/4)/|B^*|+D/4 \end{equation} and \begin{equation}\label{Cdiamond} |C^\diamond_j|=|C''_j|-2D'-a_j=|B^*_1|t_j+D/(2r). \end{equation} Note that every vertex in $C^\diamond_j$ still has at most $2\tau_r^{1/6}|\overline{C^\diamond_j}|$ neighbours in~$\overline{C^\diamond_j}$.
Thus \begin{equation}\label{eqmindegCdiamond} \delta(G^*_1[C^\diamond_j])\stackrel{(\ref{mindegG*1})}{\ge} \left( \frac{\xi}{1+\xi}-\tau^{1/2} \right)n^*_1 -2\tau^{1/6}_r n^*_1 \stackrel{(\ref{eqsizeCi})}{\ge} (\xi -\tau^{1/7}_r)|C^\diamond_j| \stackrel{(\ref{eqchicr})}{>} \left(1-\frac{1}{\chi_{cr}(B^*_1)}\right)|C^\diamond_j| \end{equation} and \begin{equation}\label{eqdensityCdiamond} d(C^\diamond_j,\overline{C}^\diamond_j)\le \tau_r^{1/7}. \end{equation} The arguments in the case when $\ell=2$ are similar except that now we consider the graph $G'$ consisting of $r$ complete subgraphs of sizes $2D'+a_j$ (where $j=1,\dots,r$). An $(r-1)$-fold application of Lemma~\ref{completebipmove} shows that $G'$ has a perfect $H$-packing. So we can proceed similarly as before to obtain sets $C^\diamond_j$ which satisfy~(\ref{Cdiamond})--(\ref{eqdensityCdiamond}). In order to show that $G^*_1[C^\diamond_j]$ has a perfect $B^*_1$-packing we wish to apply Lemma~\ref{nonextremal} with $\tau_{r+1}/2$ playing the role of $\tau$ to find a blown-up $B^*_1$-cover. (We will then use the fact that ${\rm hcf}(H)=1$ and Lemmas~\ref{completeconst} and~\ref{completeapprox} to find a suitable perfect $B^*_1$-packing of $G_1^*[C_j^\diamond]$ which is extendable to a perfect $H$-packing of the whole graph.) Inequality~(\ref{eqmindegCdiamond}) shows that $G^*_1[C^\diamond_j]$ satisfies the requirement on the minimum degree in Lemma~\ref{nonextremal}. We will now check that $G^*_1[C^\diamond_j]$ also satisfies the conditions (i) and (ii) there. So suppose that $G^*_1[C^\diamond_j]$ does not satisfy~(ii) and let $C'\subseteq C^\diamond_j$ be such that $d(C',C^\diamond_j\setminus C')\le \tau_{r+1}/2$. 
Using that $|C_i\setminus C^\diamond_i|\ll \tau_{r+1}|C_i|$ for all $i\le r$ it is easy to see that the $r+1$ sets $C'\cap C_j$, $C_j\setminus C'$, $C_i$ ($i\neq j$) would then contradict the choice of~$r$.% \COMMENT{We need $C'\cap C_j$ instead of $C'$ here since $C'\subseteq C^\diamond_j \subseteq C'_j$ but we might have $C'\not\subseteq C_j$.} Thus $G^*_1[C^\diamond_j]$ satisfies~(ii). Suppose next that $G^*_1[C^\diamond_j]$ does not satisfy~(i). Let $C'\subseteq C^\diamond_j$ be such that $|C'|=z|C^\diamond_j|/|B^*_1|$ and $d(C')\le \tau_{r+1}/2$. Let $x\in C'$ be any vertex which has at most $\tau_{r+1} |C'|$ neighbours in~$C'$. Then \begin{align*} d_{G^*_1[C^\diamond_j]}(x) \le & \tau_{r+1} |C'|+|C^\diamond_j\setminus C'| \le \left( \tau_{r+1}+1-\frac{z}{|B_1^*|} \right)|C^\diamond_j| \stackrel{(\ref{eqmindegCdiamond})}{<} \delta(G^*_1[C^\diamond_j]), \end{align*} a contradiction. (In the final inequality, we also used the fact that $1-z/|B^*_1|=\xi/(1+\xi)$, which holds since $|B^*_1|=z+z_1=(1+\xi)z$.)% \COMMENT{Actually, we don't use the final inequality of~(\ref{eqmindegCdiamond}) but the one before} Thus we can apply Lemma~\ref{nonextremal} to $G^*_1[C^\diamond_j]$ to obtain a $B^*_1$-packing $\mathcal{B}^*_j$ in $G^*_1[C^\diamond_j]$ such that the graph $G^\diamond_j:=G^*_1[C^\diamond_j]-\bigcup \mathcal{B}^*_j$ (which is obtained by removing all those vertices which lie in a copy of $B^*_1$ in~$\mathcal{B}^*_j$) has a blown-up $B^*_1$-cover with parameters $2{\varepsilon}',d'/2,2\theta,k:=k_1({\varepsilon}',\theta,B^*_1)$. Note that this blown-up $B_1^*$-cover does not necessarily yield a perfect $B_1^*$-packing of $G^\diamond_j$ as we need not have ${\rm hcf}(B_1^*)=1$. This will cause difficulties later on. Recall that, when removing the $j$-bad vertices from $C'_j$, we have already set aside a $B^*_1$-packing~$\mathcal{B}'_j$.
For each $B\in \mathcal{B}^*_j\cup \mathcal{B}'_j$ we now greedily choose $z$ vertices in each of $A^\diamond_1,\dots,A^\diamond_q$ such that all these vertices form a copy of $B^*$ together with~$B$.% \COMMENT{Indeed, this can be done greedily since every vertex in $G^*_1$ sees almost all vertices in each $A^\diamond_i$ and all these vertices in turn see almost all vertices in each other $A^\diamond_j$ as well as in $G^*_1$ and so on. Thus the common neighbourhood of the vertices from $B$ in some $A^\diamond_i$ is almost all of $A^\diamond_i$.} We remove these copies of $B^*$ and still denote the subsets of the $A^\diamond_i$ obtained in this way by~$A^\diamond_i$. Also, we still denote the remaining number of vertices in $G$ by $n^\diamond$. Then it is easy to see that the second equation in~(\ref{Aidiamond}) still holds for all $i$. Consider the blown-up $B^*_1$-cover of~$G^\diamond_j$. Let $\{ X_i^j(t) \mid 1\le t \le k, \, i=1,2\}$ be a partition of $V(G^\diamond_j)$ as in the definition of a blown-up $B^*_1$-cover (Definition~\ref{defblownupcover}). For all $j\le r$ and all $t\le k$ we would like to apply the Blow-up lemma in order to find a $B^*_1$-packing which covers precisely the vertices in $X_1^j(t)\cup X_2^j(t)$. To be able to do this, we need that the complete bipartite graph with vertex classes $X_1^j(t)$ and $X_2^j(t)$ contains a perfect $B^*_1$-packing. Clearly, the latter is the case if $|X_1^j(t)|=z|X_1^j(t)\cup X_2^j(t)|/|B^*_1|$ and $|X_2^j(t)|= z_1|X_1^j(t)\cup X_2^j(t)|/|B^*_1|$. We will now show that this can be achieved by taking out a small number of further copies of~$H$ from~$G$. (Note that this would be much simpler to achieve if we could assume that ${\rm hcf}(B_1^*)=1$.) Put $D'':=D/(4rk)$ and% \COMMENT{Here we need that $D$ is divisible by~$4rk|B^*_1|$.} $x^{j}(t):=(|X_1^j(t)\cup X_2^j(t)|-2D'')/|B^*_1|$.% \COMMENT{The definition of a blown-up $B^*_1$-cover implies that each $x^j(t)$ is an integer.
($x^j(t)$ counts the number of copies of $B^*_1$ in the $t$th blown-up copy of $B^*_1$ which (in each class) avoid the `additional set' of~$2D''$ vertices.)} If $\ell\ge 3$ consider a random partition of $A^\diamond_i$ into $kr$ sets $A^{\diamond j}_i(t)$ ($j\le r$, $t\le k$) such that $|A^{\diamond j}_i(t)|=z x^j(t)+D''$. (It is straightforward to check that these numbers sum up to exactly $|A_i^\diamond|$.)% \COMMENT{Using $z/|B^*_1|=1/(1+\xi)$, $z/|B^*|=1/(\ell-1+\xi)$, $D''rk=D/4$ and~(\ref{Aidiamond}) we get \begin{align*} \sum_j^r \sum_t^k |A_i^{\diamond j}(t)|= & \sum_j^r \sum_t^k (z x^j(t) +D'') = \frac{z}{|B_1^*|} \left( \sum_j |G_j^\diamond | -2D''rk \right) +D/4 \\ =& \frac{1}{1+\xi} \left( n^\diamond -\Big(\sum_i |A_i^\diamond|\Big) -D/2 \right) +D/4 \\ =& \frac{1}{1+\xi} \left( n^\diamond -\ell D/4+\ell D/4- \left(\frac{z}{|B^*|}[n^\diamond -\ell D/4] +D/4 \right) (\ell-2) -D/2 \right) +D/4 \\ =& \frac{1}{1+\xi} \left( [n^\diamond -\ell D/4]\frac{(\ell-1+\xi )-(\ell-2 )}{\ell-1+\xi} +\ell D/4 -(\ell-2)D/4 -D/2 \right) +D/4 \\ =& \frac{n^\diamond-\ell D/4}{\ell-1+\xi} +D/4, \end{align*} as required} Consider the complete $\ell$-partite graph $G_j(t)$ with vertex classes $A^{\diamond j}_1(t),\dots,A^{\diamond j}_{\ell-2}(t),X_1^j(t),X_2^j(t)$. As is easily seen, the sizes of these classes satisfy the conditions of Lemma~\ref{completeapprox} with $D''$ playing the role of $D'$ in that lemma.% \COMMENT{Indeed, $$|G_j(t)|=(\ell-2)zx^j(t)+(\ell-2)D''+|B^*_1|x^j(t)+2D''= (\ell-1+\xi)zx^j(t)+\ell D''$$ and thus $z(|G_j(t)|-\ell D'')/|B^*|=zx^j(t)$, i.e.\ $|A^{\diamond j}_i(t)|-D''=z(|G_j(t)|-\ell D'')/|B^*|$. The sizes of the sets $X^j_1(t)$ and $X^j_2(t)$ are as required by the definition of a blown-up $B^*_1$-cover.} Thus we can apply Lemma~\ref{completeapprox} and then Lemma~\ref{completeconst} to obtain a subgraph $\tilde{G}_j(t)$ of $G_j(t)$ which is obtained by removing a few copies of~$H$ as described there.
So this gives us a collection $\tilde{\mathcal{H}}_j(t)$ of at most $\theta^{1/20} |G_j(t)|$ disjoint copies of $H$ in $G_j(t)$ such that the subsets $\tilde{A}^j_1(t),\dots,\tilde{A}^j_{\ell-2}(t),\tilde{X}^j_1(t),\tilde{X}^j_2(t)$ obtained from $A^{\diamond j}_1(t),\dots,A^{\diamond j}_{\ell-2}(t),X^j_1(t),X^j_2(t)$ by deleting all those vertices which lie in some copy of $H$ in $\tilde{\mathcal{H}}_j(t)$ satisfy $$ |\tilde{A}^j_i(t)|=z\tilde{n}_j/|B^*|=|\tilde{X}^j_1(t)| $$ for all $i\le \ell-2$ and $$ |\tilde{X}^j_2(t)|=\xi z \tilde{n}_j/|B^*|, $$ where $\tilde{n}_j:=|\tilde{G}_j(t)|$. (We get the bound of $\theta^{1/20}|G_j(t)|$ copies by observing that we can apply Lemma~\ref{completeapprox} with $a_{\ell-1} \le \theta^{1/15} |G_j(t)|$ from Definition~\ref{defblownupcover} and $a_i=0$ for $i\le \ell-2$.) For each copy $H'\in\tilde{\mathcal{H}}_j(t)$ of $H$ in turn we greedily remove a copy of $H$ in $G^\diamond_j\subseteq G$ which intersects the sets $A^{\diamond j}_1(t),\dots,A^{\diamond j}_{\ell-2}(t),X_1^j(t),X_2^j(t)$ in the same way as~$H'$. (Note that we are able to do this as Lemma~\ref{exceptionalvs}(iii) and the fact we considered a random partition of the $A^\diamond_i$ imply that all vertices in $G^\diamond_j$ are adjacent to almost all vertices in each $A^{\diamond j}_i(t)$. Similarly, all vertices in $A^{\diamond j}_i(t)$ are adjacent to almost all vertices in each other $A^{\diamond j}_{i'}(t)$. Moreover, the pair $(X_1^j(t),X_2^j(t))$ is $(2{\varepsilon}',d'/2)$-superregular. This enables us to construct the required number of copies of $H$ in $G^\diamond_j$ if we begin the construction of each copy with the vertices that lie in $X_1^j(t)\cup X_2^j(t)$. Since $\theta \gg d'$ we have to be careful that we do not destroy the superregularity of the leftover subsets of the sets $X_i^j(t)$ in this process. 
We can get around this difficulty by considering a random red-blue partition of the vertices as described in Section~\ref{sec:applyRG} again and removing only copies of $H$ whose vertices are all blue.) We think of $\tilde{A}^j_1(t),\dots,\tilde{A}^j_{\ell-2}(t),\tilde{X}^j_1(t),\tilde{X}^j_2(t)$ as the subsets obtained from $A^{\diamond j}_1(t),\dots,A^{\diamond j}_{\ell-2}(t),X_1^j(t),X_2^j(t)$ in this way. We can now apply the Blow-up lemma to find a $B^*_1$-packing which covers precisely the vertices in $\tilde{X}^j_1(t) \cup \tilde{X}^j_2(t)$. The union of all these $B^*_1$-packings over all $t\le k$ and all $j\le r$ forms a $B^*_1$-packing $\mathcal{B}_1$ which covers precisely the leftover vertices of the graphs $G_j^\diamond$ with $j \le r$. If $\ell=2$ then $\mathcal{B}_1$ together with all the copies of $B^*_1=B^*$ and $H$ chosen earlier yields a perfect $H$-packing of~$G$. If $\ell\ge 3$ then our aim is to extend $\mathcal{B}_1$ to a $B^*$-packing by adding all the remaining vertices in the sets~$A^\diamond_i$. So for all $i\le \ell-2$ let $A'_i$ be the subset of $A^\diamond_i$ which is left over after removing the copies of $H$ in the union (over all $j$ and $t$) of the sets $\tilde{\mathcal{H}}_j(t)$ described above. In order to extend $\mathcal{B}_1$ to a $B^*$-packing which also covers the vertices in the sets $A'_i$, we consider the following $(\ell-1)$-partite auxiliary graph~$J$. The vertex classes of $J$ are $A'_1,\dots,A'_{\ell-2},\mathcal{B}_1$. The subgraph of $J$ induced by $A'_1,\dots,A'_{\ell-2}$ is simply the $(\ell-2)$-partite subgraph of $G$ induced by these sets. $J$ contains an edge between $x\in A'_j$ and $B_1^*\in \mathcal{B}_1$ if $x$ is joined (in $G$) to all vertices of~$B_1^*$.
Using Lemma~\ref{exceptionalvs}(iii) and the fact that we deleted only comparatively few vertices so far, it is easy to check that in each of the $\binom{\ell-1}{2}$ bipartite subgraphs forming $J$, every vertex is adjacent to all but a $\tau_{r+1}$-fraction of the vertices in the other class (with room to spare). Thus the bipartite subgraphs are all $(2\tau_{r+1},1/2)$-superregular. Let $B^*_2$ be the complete $(\ell-1)$-partite graph with $\ell-2$ vertex classes of size $z$ and one vertex class of size~1. The Blow-up lemma implies that $J$ has a perfect $B^*_2$-packing. This corresponds to a $B^*$-packing (and thus also an $H$-packing) in~$G$: each copy of $B^*_2$ consists of one element $B^*_1\in\mathcal{B}_1$ together with $z$ vertices in each $A'_i$, and by the definition of~$J$ these $(\ell-2)z$ vertices together with the vertices of this copy of~$B^*_1$ span a copy of~$B^*$ in~$G$. Together with all the copies of $H$ chosen earlier this yields a perfect $H$-packing in~$G$. \noproof\bigskip The final lemma in this section deals with the remaining `extremal' possibilities: $G$ contains at least one large almost independent set $A$ but does not satisfy the conditions of either of the two previous lemmas. \begin{lemma}\label{extremal3} Suppose that $H$ is a graph of chromatic number $\ell\ge 3$ such that ${\rm hcf}(H)=1$. Let $B^*$ denote the bottlegraph assigned to~$H$. Let $\xi$, $z$ and $z_1$ be as defined in~$(\ref{eqdefxi})$ and let $0<\tau \ll \tau'\ll \xi, 1-\xi, 1/|B^*|$. Then there exists an integer $s_1=s_1(\tau,\tau',H)$ such that the following holds. Let $|B^*|\ll D\ll C$ and $1\le q\le \ell-2$ be integers such that $D$ is divisible by~$s_1$. Let $G$ be a graph whose order $n\gg C, 1/\tau$ is divisible by $|B^*|$ and which satisfies the following properties: \begin{itemize} \item[{\rm (i)}] $\delta(G)\ge (1-\frac{1}{\chi_{cr}(H)})n+C$. \item[{\rm (ii)}] There are disjoint vertex sets $A_1,\dots,A_q$ in $G$ such that $|A_i|=(n-\ell D)z/|B^*|+D$ and $d(A_i)\le \tau$ for all $i\le q$. \item[{\rm (iii)}] $G$ does not contain disjoint vertex sets $A'_1,\dots,A'_{q+1}$ such that $|A'_i|=(n-\ell D)z/|B^*|+D$ and $d(A'_i)\le \tau'$ for all $i\le q+1$.
\item[{\rm (iv)}] If $q=\ell-2$, then the graph $G_1:=G-\bigcup_{i=1}^{\ell-2} A_i$ contains no vertex set $A$ so that $d(A,V(G_1) \setminus A) \le \tau'$. \end{itemize} Then $G$ has a perfect $H$-packing. \end{lemma} \removelastskip\penalty55\medskip\noindent{\bf Proof. } Let $\theta:=\tau^{1/2}$. Fix further constants ${\varepsilon}',d'$ such that $$ 0<{\varepsilon}'\ll d'\ll\tau\ll\tau'\ll \xi,1-\xi,1/|B^*|. $$ Let $B_1^*$ denote the complete $(\ell-q)$-partite graph with $\ell-q-1$ vertex classes of size $z$ and one vertex class of size~$z_1$. Let $k_1({\varepsilon}',\theta,B^*_1)$ be as defined in Lemma~\ref{nonextremal}. Put $$s_1:=2k_1|B^*_1||B^*|. $$ Let $A_{q+1}:=V(G)\setminus \bigcup_{i=1}^q A_i$. As in the proof of Lemma~\ref{extremal2}, we first apply Lemma~\ref{exceptionalvs} with $D'=D/2$ to obtain sets $A_1^*,\dots,A_{q+1}^*$. Let $G^*$ denote the subgraph of $G$ induced by all the~$A^*_i$ and put $n^*:=|G^*|$. Thus $G^*$ was obtained from $G$ by taking out a small number of disjoint copies of $H$ and $n-n^*\le \tau^{3/5}n$. Put $G^*_1:=G[A^*_{q+1}]$ and $n^*_1:=|G^*_1|$. Note that $n^*_1$ is divisible by~$|B^*_1|$. Moreover, as in (\ref{mindegGq1}) or (\ref{mindegG*1}) one can show that \begin{equation}\label{mindegG*11} \delta(G^*_1)\ge \left(1-\frac{1}{\chi_{cr}(B_1^*)}-\tau^{1/2}\right)n^*_1. \end{equation} Similarly as in Lemma~\ref{extremal2}, to find a perfect $H$-packing in $G^*$ our aim is to choose a perfect $B^*_1$-packing in $G^*_1$ and extend each copy of $B^*_1$ to a copy of $B^*$ by adding suitable vertices in $A^*_1\cup \dots\cup A^*_q$. Again, we wish to apply Lemma~\ref{nonextremal} with $\tau'/2$ playing the role of~$\tau$ to $G^*_1$ in order to do this. Thus we have to check that $G^*_1$ satisfies the conditions of Lemma~\ref{nonextremal}. Inequality~(\ref{mindegG*11}) implies that $G^*_1$ satisfies the condition on the minimum degree. Suppose that $G^*_1$ does not satisfy condition~(i) of Lemma~\ref{nonextremal}. 
So there exists a set $A'\subseteq V(G^*_1)$ such that $|A'|=zn^*_1/|B^*_1|$ and $d(A')\le \tau'/2$. It is easy to check that $(1-\tau^{1/2})|A_1|\le |A'|\le (1+\tau^{1/2})|A_1|$.% \COMMENT{Indeed, $$\frac{n^*_1}{|B^*_1|}=\frac{n^*-q|A^*_1|}{|B^*_1|}\approx \frac{n^*-qzn^*/|B^*|}{|B^*_1|}= \frac{n^*}{|B^*|}\frac{(|B^*|-qz)}{|B^*_1|} =\frac{n^*}{|B^*|}, $$ where $\approx$ means equal up to a constant depending on~$D$. Thus we have $|A'|\approx |A^*_1|$. But $|A_1|-|A^*_1|\ll \tau^{1/2}|A_1|$.} Thus by changing a small number of vertices we obtain a set $A''\subseteq V(G)\setminus (A_1\cup\dots\cup A_q)$ such that $d(A'')\le 2\tau^{1/2}+\tau'/2\le \tau'$. But then the sets $A_1,\dots,A_q,A''$ contradict condition~(iii) in Lemma~\ref{extremal3}. So $G^*_1$ satisfies condition~(i) of Lemma~\ref{nonextremal}. Next suppose that $q=\ell-2$ and $G^*_1$ does not satisfy condition~(ii) of Lemma~\ref{nonextremal}. So $G_1^*$ contains a set $A$ with $d(A,V(G_1^*) \setminus A) \le \tau'/2$. Since $V(G_1)$ and $V(G_1^*)$ are almost the same, this in turn implies that the graph $G_1$ defined in (iv) contains a set $A$ with $d(A,V(G_1) \setminus A) \le 2\tau'/3$, a contradiction to the assumption in (iv). Thus we may assume that $G^*_1$ also satisfies condition~(ii) of Lemma~\ref{nonextremal}. So we can apply Lemma~\ref{nonextremal} to~$G^*_1$. This shows that we can take out a small number of copies of $B^*_1$ to obtain a subgraph of $G^*_1$ which has a blown-up $B^*_1$-cover. We can then proceed similarly as in the final part of the proof of Lemma~\ref{extremal2}. \noproof\bigskip \section{Proof of Theorem~\ref{thmmain}}\label{sec:thmproof} We will now combine the results of Sections~\ref{sec:nonextremal} and~\ref{sec:extremal} to prove Theorem~\ref{thmmain}. Fix constants $$0<{\varepsilon}'\ll d'\ll \theta\ll \tau_1\ll\dots\ll \tau_{\ell-1}\ll \xi,1-\xi,1/|B^*|. $$ Let $D\gg |B^*|$ be an integer satisfying the conditions in Lemmas~\ref{extremal1}--\ref{extremal3}. 
Let $C\gg D,1/\tau_{\ell-1}$. By taking out at most $\ell-2$ disjoint copies of $H$ from $G$ if necessary, we may assume that the order $n$ of our given graph $G$ is divisible by~$|B^*|$. (The existence of such copies follows from the Erd\H{o}s-Stone theorem.) We are done if $G$ satisfies conditions~(i) and~(ii) of Corollary~\ref{cornonextremal} with $\tau_1$ playing the role of~$\tau$. So suppose first that~$G$ violates~(i). Thus there is some set $A\subseteq V(G)$ of size $zn/|B^*|$ such that $d(A)\le \tau_1$. Choose $q\le \ell-1$ maximal such that there are disjoint sets $A'_1,\dots,A'_q\subseteq V(G)$ with $|A'_i|=zn/|B^*|$ and $d(A'_i)\le \tau_q$ for all $i\le q$. So $q\ge 1$ by our assumption. By removing a constant number of vertices from each $A'_i$ we obtain subsets $A_i$ with $|A_i|=z(n-\ell D)/|B^*|+D$ and $d(A_i)\le 2\tau_q$. If $q=\ell-1$ then Lemma~\ref{extremal1} shows that $G$ has a perfect $H$-packing. Thus we may assume that $q\le \ell-2$. But then we can apply either Lemma~\ref{extremal2} with $\tau_{q+1}$ playing the role of $\tau$ or Lemma~\ref{extremal3} with $2\tau_q$ playing the role of $\tau$ and $\tau_{q+1}$ playing the role of~$\tau'$. If $G$ satisfies condition~(i) in Corollary~\ref{cornonextremal} but violates~(ii) then we are done by~Lemma~\ref{extremal2} applied with $\tau:=\tau_1$. This completes the proof of Theorem~\ref{thmmain}. \noproof\bigskip \section{Acknowledgement} We would like to thank Oliver Cooley for his comments on an earlier version of this manuscript.
{ "timestamp": "2008-02-01T18:43:11", "yymm": "0603", "arxiv_id": "math/0603665", "language": "en", "url": "https://arxiv.org/abs/math/0603665", "abstract": "Let H be any graph. We determine (up to an additive constant) the minimum degree of a graph G which ensures that G has a perfect H-packing (also called an H-factor). More precisely, let delta(H,n) denote the smallest integer t such that every graph G whose order n is divisible by |H| and with delta(G) > t contains a perfect H-packing. We show that delta(H,n) = (1-1/\\chi*(H))n+O(1). The value of chi*(H) depends on the relative sizes of the colour classes in the optimal colourings of H and satisfies k-1 < chi*(H) \\le k, where k is the chromatic number of H.", "subjects": "Combinatorics (math.CO)", "title": "The minimum degree threshold for perfect graph packings" }
https://arxiv.org/abs/1011.1773
A discrete dynamical system for the greedy strategy at collective Parrondo games
We consider a collective version of Parrondo's games with probabilities parametrized by rho in (0,1) in which a fraction phi in (0,1] of an infinite number of players collectively choose and individually play at each turn the game that yields the maximum average profit at that turn. Dinis and Parrondo (2003) and Van den Broeck and Cleuren (2004) studied the asymptotic behavior of this greedy strategy, which corresponds to a piecewise-linear discrete dynamical system in a subset of the plane, for rho=1/3 and three choices of phi. We study its asymptotic behavior for all (rho,phi) in (0,1)x(0,1], finding that there is a globally asymptotically stable equilibrium if phi<=2/3 and, typically, a unique (asymptotically stable) limit cycle if phi>2/3 ("typically" because there are rare cases with two limit cycles). Asymptotic stability results for phi>2/3 are partly conjectural.
\section{Introduction} \label{intro} The Parrondo effect, in which there is a reversal in direction in some system parameter when two similar dynamics are combined, is the result of an underlying nonlinearity. It was first described by J. M. R. Parrondo in 1996 in the context of games of chance: He showed that it is possible to combine two losing games to produce a winning one. His idea has inspired research in such diverse areas as chemistry \cite{O}, evolutionary biology \cite{XPYX}, population genetics \cite{R}, finance \cite{S}, reliability theory \cite{D}, chaos \cite{APR}, fractals \cite{APA}, epistemology \cite{St}, quantum mechanics \cite{FA}, and probability theory \cite{Py}. In the present paper, we analyze a discrete dynamical system, introduced by Din\'is and Parrondo \cite{DP}, that models the short-range optimization, or greedy, strategy at collective Parrondo games. Our analysis gives conditions under which the Parrondo effect is present. Let us describe Parrondo's original example. The so-called capital-dependent Parrondo games consist of two games, $A$ and $B$. In game $A$, the player wins one unit with probability $1/2-\eps$, where $\eps\ge0$ is a small bias parameter, and loses one unit otherwise. Game $B$ is played with two coins, the one tossed depending on the current capital of the player: If the player's current capital is divisible by 3, he tosses a ``bad'' coin with probability of heads $p_0 := 1/10 -\eps$, otherwise he tosses a ``good'' coin with probability of heads $p_1:=3/4-\eps$; he wins one unit with heads and loses one unit with tails. It can be shown that when $\eps=0$, both games $A$ and $B$ are fair, hence losing when $\eps>0$. However, the random mixture $\gamma A+(1-\gamma)B$, in which game $A$ is played with probability $\gamma\in(0,1)$ and game $B$ is played with probability $1-\gamma$, is winning for $\eps\ge0$ sufficiently small. 
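These claims about Parrondo's original example are easy to probe numerically. The following sketch is ours, not from any of the cited papers (the helper name `avg_profit` is hypothetical): it estimates the long-run average profit per play by power iteration on the capital-mod-3 chain. With $\eps=0$, games $A$ and $B$ come out fair while the mixture ${1\over2}A+{1\over2}B$ is winning; because game $A$'s coin does not depend on the capital, the mixture amounts to playing with the averaged coins.

```python
def avg_profit(q0, q1, n_iter=2000):
    """Long-run average profit per play when the win probability is q0
    if capital is divisible by 3 and q1 otherwise (win: +1, lose: -1)."""
    P = [[0.0, q0, 1 - q0],        # capital mod 3 moves up on a win,
         [1 - q1, 0.0, q1],        # down on a loss
         [q1, 1 - q1, 0.0]]
    x = [1.0, 0.0, 0.0]
    for _ in range(n_iter):        # power iteration -> stationary distribution
        x = [sum(x[i] * P[i][j] for i in range(3)) for j in range(3)]
    return x[0] * (2 * q0 - 1) + (1 - x[0]) * (2 * q1 - 1)

game_A = avg_profit(0.5, 0.5)                         # fair coin everywhere
game_B = avg_profit(0.1, 0.75)                        # bad coin / good coin
mixture = avg_profit((0.5 + 0.1) / 2, (0.5 + 0.75) / 2)
print(game_A, game_B, mixture)                        # ~0, ~0, ~0.025
```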
Furthermore, the non-random pattern $[r,s]$, denoting $r$ plays of game $A$ followed by $s$ plays of game $B$ (repeated ad infinitum), is also winning for $\eps\ge0$ sufficiently small, except when $r=s=1$. In summary, two losing (or fair) games can be combined, by random mixture or nonrandom alternation, to create a winning game. This was the original form of \textit{Parrondo's paradox}. See \cite{HA, PD, E, A} for survey articles. Din\'is and Parrondo \cite{DP} formulated a modification of the capital-dependent Parrondo games in which a fraction of an infinite number of players collectively choose and individually play the same game at each turn. They found that, in certain cases, by choosing the game that yields the maximum average profit at each turn, the surprising result is systematic losses (if $\eps>0$), whereas a random or nonrandom sequence of choices yields a steady increase in average profit. Van den Broeck and Cleuren \cite{VC} considered also the case of a finite number of players. They evaluated the expected profit for this greedy strategy as a function of the number of players and proved that the strategy is optimal when the number of players is one or two but suboptimal when it is three or infinite. In this paper we consider only the case of infinitely many players, and we adopt the one-parameter family of capital-dependent Parrondo games of Ethier and Lee \cite{EL} given by $p_0:=\rho^2/(1+\rho^2)-\eps$ and $p_1:=1/(1+\rho)-\eps$ with $\rho>0$; the original Parrondo game $B$, assumed by Din\'is and Parrondo \cite{DP} and Van den Broeck and Cleuren \cite{VC}, corresponds to $\rho=1/3$. In order to focus on the case in which two fair games produce a winning game, we assume that $\rho\in(0,1)$ for game $B$ and $\eps=0$ for both games. The fraction of players who play at each turn, denoted here by $\phi$ (as in \cite{E}), was assumed to be 1/2 or 27/40 by Din\'is and Parrondo \cite{DP} and to be 1 by Van den Broeck and Cleuren \cite{VC}. 
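As a small exact check (ours), the Ethier--Lee parametrization with $\eps=0$ recovers the original game $B$ at $\rho=1/3$ and, for every sampled $\rho\in(0,1)$, pairs a bad coin ($p_0<1/2$) with a good coin ($p_1>1/2$):

```python
from fractions import Fraction as Fr

def coins(rho):
    # Ethier-Lee parametrization of game B with eps = 0
    return rho**2 / (1 + rho**2), 1 / (1 + rho)

assert coins(Fr(1, 3)) == (Fr(1, 10), Fr(3, 4))   # original Parrondo game B
for k in range(1, 10):                            # rho = 0.1, 0.2, ..., 0.9
    p0, p1 = coins(Fr(k, 10))
    assert p0 < Fr(1, 2) < p1                     # bad coin / good coin
print("parametrization checks pass")
```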
Here we let $\phi$ range over the interval $(0,1]$. Behrends \cite{B} introduced a stochastic model that includes our (deterministic) model as a special case, and he proved that the sequence of expected (or average) profits is eventually quasiperiodic under certain assumptions. This paper develops, in the context of the Din\'is--Parrondo model, techniques for analyzing the asymptotic behavior of a piecewise-linear discrete dynamical system, and may therefore be of interest even to readers unfamiliar with Parrondo's paradox. In Section \ref{prelim} we formulate a piecewise-linear discrete dynamical system for the capital-dependent Parrondo games played collectively according to the greedy strategy; it is parametrized by $(\rho,\phi)\in(0,1)\times(0,1]$. In Section \ref{Bforever} we show that, in one region of the parameter space (namely, $\phi\le2/3$), game $B$ is eventually played forever, resulting in an asymptotically fair game and, in terms of the discrete dynamical system, a globally asymptotically stable equilibrium. In Section \ref{periodic} we show that, in the remainder of the parameter space (namely, $\phi>2/3$), there is an initial state that yields a periodic pattern of games, resulting in an asymptotically winning game and, in terms of the discrete dynamical system, a limit cycle. In fact, in a very small region of the parameter space, there are two limit cycles. Section \ref{stability} attempts to show that (again assuming $\phi>2/3$), where there is a unique limit cycle it is asymptotically stable (in fact, it is globally asymptotically stable unless there is an unstable equilibrium). The proofs of the assertions in Section \ref{stability} are incomplete, so some of our findings are stated as conjectures. In Section \ref{profit} we confirm the assertions just made concerning the asymptotic profitability of the greedy strategy. 
For example, if $(\rho,\phi)=(1/3,1/2)$ (and $\eps=0$), our results show that there is a globally asymptotically stable equilibrium with game $B$ eventually played forever. This is contrary to a computational result of Din\'is and Parrondo \cite{DP}, who found the pattern $[1,40]$ in this case (i.e., $AB^{40}AB^{40}AB^{40}\cdots$). The anomaly is likely attributable to roundoff error; 64-bit arithmetic (C++) is insufficient here. Let us introduce some additional notation. As defined in the second paragraph above, the game pattern $[1,2]$ stands for $ABBABBABB\cdots$. It will be useful to have a concise notation for game sequences that are eventually periodic. We write $ABABBABBABB\cdots$, for example, as $AB\overline{ABB}$, just as one would write the binary expansion of the fraction 5/14 as $0.01\overline{011}$. With this notation, we can describe the limit cycles that appear in terms of the game patterns. They are of two types, either one of $\overline{AB^2}$, $\overline{AB^4}$, $\overline{AB^6}$, \dots, or one of $\overline{AB^4AB^2}$, $\overline{AB^6AB^4}$, $\overline{AB^8AB^6}$, \dots. For further simplicity, we will also denote these patterns by $[1,2]$, $[1,4]$, $[1,6]$, \dots, and by $[1,4,1,2]$, $[1,6,1,4]$, $[1,8,1,6]$, \dots. \section{Preliminaries} \label{prelim} It is well known that a Markov chain $\{X_n\}_{n\ge0}$ with state space $\{0,1,2\}$ underlies the capital-dependent Parrondo games; here $X_n$ represents the player's capital modulo 3 after $n$ games. When playing game $B$ it evolves according to the one-step transition matrix \begin{equation} \label{PBcirc} \setlength{\arraycolsep}{1.5mm} {\bm P}_B^\circ:=\left(\begin{array}{ccc} 0&p_0&1-p_0\\ 1-p_1&0&p_1\\ p_1&1-p_1&0 \end{array}\right), \end{equation} where $p_0:=\rho^2/(1+\rho^2)$ and $p_1:=1/(1+\rho)$ with $0<\rho<1$. 
When playing game $A$ it evolves according to the one-step transition matrix $\bm P_A^\circ$ defined by the matrix in (\ref{PBcirc}) with $\rho=1$ (i.e., with $p_0=p_1:=1/2$). The unique stationary distribution $\bm\pi:=(\pi_0,\pi_1,\pi_2)$ of $\bm P_B^\circ$ is given by $$ \pi_0 = {1+\rho^2\over2(1+\rho+\rho^2)}, \quad \pi_1 = {\rho(1+\rho)\over2(1+\rho+\rho^2)}, \quad \pi_2 ={1+\rho\over2(1+\rho+\rho^2)}, $$ while that of $\bm P_A^\circ$ is $(1/3,1/3,1/3)$. Given $0<\phi\le1$, consider a large number $N$ of players, of whom $\phi N$ are selected at random. Everyone in the sample of size $\phi N$ independently plays game $A$ or everyone independently plays game $B$, the choice determined by the strategy. When playing game $B$, each player uses his own capital to determine which coin to toss. Let $x_0(n)$ be the fraction of the players whose capital is divisible by 3 after $n$ turns. If the players in the sample collectively choose and individually play game $B$, then the expected average profit, conditioned on $x_0(n)$, is equal to \begin{eqnarray*} x_0(n) (2 p_0-1) + [1-x_0(n)](2 p_1 -1) = - x_0(n) {1-\rho^2\over1+\rho^2} + [1-x_0(n)]{1-\rho\over1+\rho}, \end{eqnarray*} which is nonpositive if and only if $x_0(n)\ge\pi_0$. Game $A$ always has expected average profit equal to 0. So the strategy of maximizing the expected average profit at each turn can be summarized by the rules ``play game $A$ if $x_0(n) \geq \pi_0$'' and ``play game $B$ if $x_0(n)<\pi_0$.'' In particular, if both games have expected average profit equal to 0, then game $A$ is played. We investigate the mean-field limit as $N\to\infty$, in which case the model is deterministic. (We will not try to justify this; the preceding paragraph was included mainly for motivation.) Let $x_i$ represent the fraction of players whose capital is congruent to $i$ (mod 3) for $i=0,1,2$. Then $x_0+x_1+x_2=1$. 
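The sign analysis behind the greedy rule can be confirmed in exact rational arithmetic. In the sketch below (the helper `profit_B` and the sampled values of $\rho$ are ours), the expected average profit of game $B$ vanishes exactly at $x_0=\pi_0$ and changes sign there:

```python
from fractions import Fraction as Fr

def profit_B(x0, rho):
    # expected average profit of one collective play of game B (eps = 0)
    return -x0 * (1 - rho**2) / (1 + rho**2) + (1 - x0) * (1 - rho) / (1 + rho)

for rho in (Fr(1, 4), Fr(1, 3), Fr(1, 2), Fr(3, 4)):
    pi0 = (1 + rho**2) / (2 * (1 + rho + rho**2))   # stationary mass at 0 mod 3
    assert profit_B(pi0, rho) == 0                   # game B fair exactly at pi0
    assert profit_B(pi0 - Fr(1, 100), rho) > 0       # winning below pi0: play B
    assert profit_B(pi0 + Fr(1, 100), rho) < 0       # losing above pi0: play A
print("greedy threshold verified at x0 = pi0")
```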
Thus, in the state space defined by $$ \Delta := \{ (x_0, x_1, x_2) : x_0 \geq 0,\, x_1 \geq 0,\, x_2 \geq 0,\, x_0+x_1+x_2=1 \} $$ we have a discrete dynamical system given by $$ (x_0(n+1),x_1(n+1),x_2(n+1))=F(x_0(n),x_1(n),x_2(n)),\qquad n\ge0, $$ where $$ F(x_0,x_1,x_2):=\begin{cases}(x_0,x_1,x_2)\bm P_A&\text{if $x_0\ge\pi_0$,}\\ (x_0,x_1,x_2)\bm P_B&\text{if $x_0<\pi_0$,}\end{cases} $$ and ${\bm P}_A:=(1-\phi)\bm I+\phi\bm P_A^\circ$ and ${\bm P}_B:=(1-\phi)\bm I+\phi\bm P_B^\circ$. Clearly, the function $F$ is piecewise linear but discontinuous. Let us define the projection $p$ of $\Delta$ onto $\{(x_0,x_1): x_0\ge0,\,x_1\ge0,\,x_0+x_1\le1\}$ by $p(x_0,x_1,x_2):=(x_0,x_1)$. It is a one-to-one transformation. While the trajectory belongs to $\Delta$, it will often be convenient to regard it as belonging to $p(\Delta)$, a subset of the plane. In fact, we could redefine $F$ in terms of $2\times2$ matrices, but this does not appear to simplify matters. To study the asymptotic behavior of the system, we will need the spectral representation for matrices $\bm P_A$ and $\bm P_B$. We note that $\bm \pi$ is also the unique stationary distribution of $\bm P_B$. The nonunit eigenvalues of ${\bm P}_B$ are given by $e_1:= 1- \phi + \phi e_1^\circ$ and $e_2:= 1- \phi + \phi e_2^\circ$, where, with $S:=\sqrt{(1+\rho^2)(1+4\rho+\rho^2)}$, $$ e_1^\circ :=-{1\over2}+{(1-\rho)S\over2(1+\rho)(1+\rho^2)}\quad \mbox{and} \quad e_2^\circ:=-{1\over2}-{(1-\rho)S\over2(1+\rho)(1+\rho^2)} $$ are the nonunit eigenvalues of $\bm P_B^\circ$. Observe that $e_1=0$ if $\phi=\phi_2$, and $e_2=0$ if $\phi=\phi_1$, where \begin{eqnarray} \nonumber \phi_1:={1\over1-e_2^\circ}&=& {2(1+\rho)(1+\rho^2)\over3(1+\rho)(1+\rho^2) + (1-\rho)S}, \\ \label{phi2} \phi_2:={1\over1-e_1^\circ}&=& {2(1+\rho)(1+\rho^2)\over3(1+\rho)(1+\rho^2)- (1-\rho)S}. \end{eqnarray} Since $0>e_1^\circ>-1/2>e_2^\circ>-1$ for $0<\rho<1$, we have $1/2<\phi_1<2/3<\phi_2<1$ for $0<\rho<1$. 
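For concreteness, the map $F$ can be implemented directly. The sketch below (function names ours) uses exact rational arithmetic because, as noted in the introduction, 64-bit floating point is unreliable near the equilibrium at $(\rho,\phi)=(1/3,1/2)$. Started from $(1,0,0)$ at that parameter pair, the greedy trajectory plays $A$ twice, then plays $B$ forever and approaches $\bm\pi=(5/13,2/13,6/13)$, in line with Section \ref{Bforever}.

```python
from fractions import Fraction as Fr

def make_F(rho, phi):
    p0, p1 = rho**2 / (1 + rho**2), 1 / (1 + rho)
    pi0 = (1 + rho**2) / (2 * (1 + rho + rho**2))
    def lazy(q0, q1):                    # P = (1 - phi) I + phi P^circ
        Pc = [[0, q0, 1 - q0], [1 - q1, 0, q1], [q1, 1 - q1, 0]]
        return [[(1 - phi) * (i == j) + phi * Pc[i][j] for j in range(3)]
                for i in range(3)]
    PA, PB = lazy(Fr(1, 2), Fr(1, 2)), lazy(p0, p1)
    def F(x):                            # greedy rule: play A iff x0 >= pi0
        P, game = (PA, 'A') if x[0] >= pi0 else (PB, 'B')
        return tuple(sum(x[i] * P[i][j] for i in range(3))
                     for j in range(3)), game
    return F

F = make_F(Fr(1, 3), Fr(1, 2))
x, games = (Fr(1), Fr(0), Fr(0)), []
for _ in range(200):
    x, g = F(x)
    games.append(g)
print(games[:4], [float(c) for c in x])  # ['A', 'A', 'B', 'B'], ~(5/13, 2/13, 6/13)
```

The `Fraction` denominators grow geometrically along the trajectory, but over a few hundred steps they remain only a few hundred digits long, so exact iteration is cheap here.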
Because the nonunit eigenvalues $e_1$ and $e_2$ will play an important role in what follows, we indicate their dependence on $\phi$ in Table \ref{eigenvalues}. \begin{table} \begin{center} \caption{\label{eigenvalues}Dependence on $\phi$ of the nonunit eigenvalues $e_1$ and $e_2$ {of $\bm P_B$.}}\medskip {\small \begin{tabular}{ccccc}\hline $0<\phi<\phi_1$ & $e_1>0$ & $e_2>0$ & $|e_1|>|e_2|$ \\ $\phi=\phi_1$ & $e_1>0$ & $e_2=0$ & $|e_1|>|e_2|$ \\ $\phi_1<\phi<2/3$ & $e_1>0$ & $e_2<0$ & $|e_1|>|e_2|$ \\ $\phi=2/3$ & $e_1>0$ & $e_2<0$ & $|e_1|=|e_2|$ \\ $2/3<\phi<\phi_2$ & $e_1>0$ & $e_2<0$ & $|e_1|<|e_2|$ \\ $\phi=\phi_2$ & $e_1=0$ & $e_2<0$ & $|e_1|<|e_2|$ \\ $\phi_2<\phi\le1$ & $e_1<0$ & $e_2<0$ & $|e_1|<|e_2|$ \\ \hline \end{tabular}} \end{center} \end{table} We define the diagonal matrix $\bm D:={\rm diag}(1,e_1,e_2)$. Corresponding right eigenvectors (both for $\bm P_B$ and $\bm P_B^\circ$) are $$ {\bm r}_0:=\left(\begin{array}{c} 1\\ 1\\ 1 \end{array}\right),\quad {\bm r}_1:=\left(\begin{array}{c} (1+\rho)(1-\rho^2-S)\\ 2+\rho+2\rho^2+\rho^3+\rho S\\ -(1+2\rho+\rho^2+2\rho^3-S) \end{array}\right), $$ $$ {\bm r}_2:=\left(\begin{array}{c} (1+\rho)(1-\rho^2+S)\\ 2+\rho+2\rho^2+\rho^3-\rho S\\ -(1+2\rho+\rho^2+2\rho^3+S) \end{array}\right). $$ They are linearly independent, so we define ${\bm R}:=({\bm r}_0,{\bm r}_1,{\bm r}_2)$ and ${\bm L}:={\bm R}^{-1}$. The rows of $\bm L$ are left eigenvectors, and the spectral representation gives \begin{equation}\label{spectral} {\bm P}_B ^n={\bm R}{\bm D^n}{\bm L},\qquad n\ge0. \end{equation} Of course, ${\bm P}_A$ is the special case $\rho=1$ of ${\bm P}_B$, so it follows from (\ref{spectral}) (or a simple induction argument) that \begin{eqnarray*} \setlength{\arraycolsep}{1.5mm} {\bm P}_A^n=\left(\begin{array}{ccc} 1-2d_n&d_n&d_n\\ d_n&1-2d_n&d_n\\ d_n&d_n&1-2d_n \end{array}\right),\qquad n\ge0, \end{eqnarray*} where $d_n:=[1-(1- 3\phi /2)^n]/3$. 
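Both displayed identities are easy to test (a sketch, with the parameter choices $\phi=2/5$ and $\rho=1/2$ ours): the closed form for $\bm P_A^n$ can be checked exactly, and $\bm r_1,\bm r_2$ can be checked numerically to be right eigenvectors of $\bm P_B^\circ$ with eigenvalues $e_1^\circ,e_2^\circ$.

```python
from fractions import Fraction as Fr
from math import sqrt

# --- closed form for P_A^n, exact, at phi = 2/5, n = 6 ---
phi, n = Fr(2, 5), 6
PA = [[1 - phi if i == j else phi / 2 for j in range(3)] for i in range(3)]
P = [[Fr(i == j) for j in range(3)] for i in range(3)]   # identity
for _ in range(n):
    P = [[sum(P[i][k] * PA[k][j] for k in range(3)) for j in range(3)]
         for i in range(3)]
d = (1 - (1 - 3 * phi / 2)**n) / 3                       # d_n
assert all(P[i][j] == (1 - 2 * d if i == j else d)
           for i in range(3) for j in range(3))

# --- eigenvectors of P_B^circ, numerically, at rho = 1/2 ---
rho = 0.5
p0, p1 = rho**2 / (1 + rho**2), 1 / (1 + rho)
PBc = [[0, p0, 1 - p0], [1 - p1, 0, p1], [p1, 1 - p1, 0]]
S = sqrt((1 + rho**2) * (1 + 4 * rho + rho**2))
for sgn in (+1, -1):                                     # sgn=+1: (e1,r1); -1: (e2,r2)
    e = -0.5 + sgn * (1 - rho) * S / (2 * (1 + rho) * (1 + rho**2))
    r = [(1 + rho) * (1 - rho**2 - sgn * S),
         2 + rho + 2 * rho**2 + rho**3 + sgn * rho * S,
         -(1 + 2 * rho + rho**2 + 2 * rho**3 - sgn * S)]
    Pr = [sum(PBc[i][j] * r[j] for j in range(3)) for i in range(3)]
    assert all(abs(Pr[i] - e * r[i]) < 1e-9 for i in range(3))
print("spectral identities verified")
```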
\section{A globally asymptotically stable equilibrium for $\phi\le2/3$} \label{Bforever} In this section we show that, corresponding to playing game $B$ forever under the greedy strategy, there is a globally asymptotically stable equilibrium when $\phi\le2/3$ and an unstable equilibrium when $2/3<\phi<\phi_2$. Let $$ \Delta_A := \{ (x_0, x_1, x_2)\in \Delta : x_0 \geq \pi_0 \}\quad({\rm resp.,\ } \Delta_B:=\Delta-\Delta_A) $$ be the set of states at which game $A$ (resp., game $B$) is chosen under the greedy strategy. In fact, it will be useful to extend this notation considerably. For example, $\Delta_{ABBA}$ is the subset of $\Delta_A$ such that the first four games played are $ABBA$ (in that order), and $\Delta_{BBA\overline{B}}$ is the subset of $\Delta_B$ such that the complete game sequence is $BBA\overline{B}$. \begin{proposition} \label{prop1} Under the greedy strategy, game $A$ is chosen for only finitely many consecutive turns, given an initial state in $\Delta_A$. If $2/3 \leq \phi \leq1$, then, after only one play of game $A$, game $B$ is played. \end{proposition} \begin{proof} After $n$ plays of game $A$, the initial state, say $(x_0, x_1, x_2) \in \Delta_A$, moves to \begin{equation}\label{effectofP_A^n} (x_0, x_1, x_2){\bm P}_A^n =\bigg({1 \over 3},{1 \over 3},{1 \over 3} \bigg) + \bigg(1- {3 \over 2} \phi \bigg)^n \bigg[(x_0,x_1,x_2)-\bigg({1 \over 3},{1 \over 3},{1 \over 3} \bigg) \bigg]. \end{equation} If $0<\phi <2/3$, the trajectory converges to the limit $(1/3,1/3,1/3)$ along the line segment that connects the initial state $(x_0, x_1, x_2)$ with $(1/3,1/3,1/3)$ as $n\to\infty$. Since $1/3 < \pi_0$ for $0<\rho<1$, we see that, after a finite number of consecutive plays of game $A$, game $B$ is played. 
If $2/3 \leq \phi \leq1$, then we have $$ (x_0, x_1, x_2){\bm P}_A(1,0,0)^\T={1 \over 3} + \bigg(1- {3 \over 2} \phi \bigg) \bigg(x_0 - {1 \over 3} \bigg) \leq {1\over3}<\pi_0, $$ which means that, after only one play of game $A$ (requiring $x_0\ge\pi_0>1/3$), game $B$ is played. \end{proof} \begin{theorem} \label{thm2} If $0< \phi \leq 2/3$, wherever the initial state is located, the greedy strategy chooses game $B$ forever except for an initial finite number of turns. In particular, the discrete dynamical system has a globally asymptotically stable equilibrium, namely $\bm \pi$, the stationary distribution of $\bm P_B$. If $2/3 < \phi < 1$, game $B$ is chosen forever only when $\phi<\phi_2$, where $\phi_2$ is defined by (\ref{phi2}), and only when the initial state belongs to one of at most four one-dimensional regions, which will be specified below in terms of $\rho$ and $\phi$. \end{theorem} \begin{proof} Recall that $\Delta_{\overline{B}}$ denotes the set of initial states from which game $B$ is played forever. Once we know that $\Delta_{\overline{B}}$ is eventually reached from any initial state, we have a \textit{linear} discrete dynamical system $$ (x_0(n+1),x_1(n+1),x_2(n+1))=(x_0(n),x_1(n),x_2(n))\bm P_B,\qquad n\ge0, $$ or $(x_0(n),x_1(n),x_2(n))=(x_0,x_1,x_2)\bm P_B^n$ for each $n\ge1$, with $(x_0,x_1,x_2)\in\Delta_{\overline{B}}$. But since $\bm P_B$ is irreducible and aperiodic, $\bm P_B^n\to\bm\Pi$, where $\bm\Pi$ is the $3\times 3$ matrix with each row equal to $\bm\pi$. This implies that $(x_0(n),x_1(n),x_2(n))\to\bm\pi$ for all $(x_0,x_1,x_2)\in\Delta_{\overline{B}}$, and this leads to the global asymptotic stability. Since the trajectory enters $\Delta_B$ eventually by Proposition \ref{prop1} it is enough to consider a trajectory that starts from $\Delta_B$. Let $(x_0, x_1,x_2) \in \Delta_B$ and let $\pi_0 (n)$ be the fraction of the players whose capital is divisible by 3 after $n$ plays of game $B$. Then $\pi_0(0)= x_0 <\pi_0$. 
From the spectral representation (\ref{spectral}), we have \begin{eqnarray*} \pi_0 (n) &=& (x_0, x_1, x_2){\bm P}_B^n(1,0,0)^\T=(x_0, x_1, 1-x_0 -x_1){\bm R}{\bm D}^n{\bm L}(1,0,0)^\T \\ &=&{1 \over 4 (1 + \rho + \rho^2)S} \{ 2 (1+ \rho^2) S \\ && \quad{} - e_1^n [2 x_0 (1+ \rho + \rho^2)(1+ \rho^2 - S) + 4 x_1 (1+\rho+\rho^2)(1+\rho^2) \\ & & \qquad{} \qquad{} -(1+2\rho+3\rho^2-S)(1+\rho^2)] \\ && \quad{} + e_2^n [2 x_0 (1+ \rho + \rho^2)(1+ \rho^2 + S) + 4 x_1 (1+\rho+\rho^2)(1+\rho^2) \\ & & \qquad{} \qquad{} -(1+2\rho+3\rho^2+S)(1+\rho^2)]\}, \end{eqnarray*} from which it follows that \begin{equation}\label{pi0-pi0(n)} \pi_0 - \pi_0(n) = c_1 e_1^n - c_2 e_2^n, \end{equation} where \begin{eqnarray*} c_1&:=& {1+\rho^2\over2S}\bigg[\bigg(1-{S\over1+\rho^2}\bigg)(x_0-\pi_0)+2(x_1-\pi_1)\bigg],\\ c_2&:=& {1+\rho^2\over2S}\bigg[\bigg(1+{S\over1+\rho^2}\bigg)(x_0-\pi_0)+2(x_1-\pi_1)\bigg]. \end{eqnarray*} Notice that $c_1 > c_2$ for $x_0 < \pi_0$ and $0<\rho <1$. Also, $S/(1+\rho^2)>1$ for $0<\rho <1$. The region $\Delta_{\overline{B}}$ depends on the nonunit eigenvalues of ${\bm P}_B$, so we derive it separately in the seven cases of Table \ref{eigenvalues} as follows. Case 1. $0<\phi< \phi_1$. Since $e_1>e_2>0$, if $c_1 \geq 0$ we have $\pi_0(n) < \pi_0$ for all $n\geq 1$. On the other hand, if $c_1<0$, we can find a positive integer $n$ such that $(e_2/e_1)^n \leq c_1/c_2 < (e_2/e_1)^{n-1}$, and we conclude that $\pi_0(n)\ge\pi_0$. Thus, it follows that $$ \Delta_{\overline{B}} =\bigg\{ (x_0, x_1, x_2) \in \Delta_B : \bigg(1- {S \over 1+ \rho^2}\bigg)(x_0-\pi_0)+2(x_1-\pi_1)\ge0\bigg\}. $$ For $(x_0, x_1, x_2)\in\Delta_B-\Delta_{\overline{B}}$, we have $x_0 < \pi_0$, hence $x_1 <\pi_1$. 
Therefore, $(x_0, x_1, x_2)$ moves to state $(y_0, y_1, y_2):=(x_0, x_1, x_2){\bm P}_B$, which satisfies $$ y_0 - x_0={\phi[1-(2+\rho)x_0-(1-\rho)x_1] \over 1+\rho} >{\phi[1-(2+\rho)\pi_0-(1-\rho)\pi_1] \over 1+\rho} =0 $$ and \begin{eqnarray*} y_1 - x_1&=&{\phi[\rho(1+\rho^2)-\rho(1-\rho)x_0-(1+2\rho)(1+\rho^2)x_1 ] \over (1+\rho)(1+\rho^2)} \\ &>&{\phi[\rho(1+\rho^2)-\rho(1-\rho)\pi_0-(1+2\rho)(1+\rho^2)\pi_1 ] \over (1+\rho)(1+\rho^2)}=0. \end{eqnarray*} Therefore, in the region $\Delta_B-\Delta_{\overline{B}}$, as long as game $B$ is played, the trajectory continues to move in the positive $x_0$ and $x_1$ directions and finally reaches the region $\Delta_A$. (It cannot reach $\Delta_{\overline{B}}$ first, and it cannot remain in $\Delta_B-\Delta_{\overline{B}}$ forever, for in either case it would have started in $\Delta_{\overline{B}}$.) Once it arrives at a certain state $(y_0,y_1,y_2)\in\Delta_A$, it moves along the line segment between $(y_0,y_1,y_2)$ and $(1/3,1/3,1/3)\in\Delta_{\overline{B}}$, proceeding $3\phi/2$ of the way (see (\ref{effectofP_A^n})). After one or more such jumps from $\Delta_A$, the trajectory will either reach $\Delta_{\overline{B}}$ or return to $\Delta_B-\Delta_{\overline{B}}$. Even if it returns to $\Delta_B-\Delta_{\overline{B}}$, it eventually enters $\Delta_{\overline{B}}$ because it keeps moving in the positive $x_1$ direction while it visits the two regions $\Delta_B-\Delta_{\overline{B}}$ and $\Delta_A$, and from $\Delta_A$ it moves in the positive $x_1$ direction $3\phi/2$ of the way toward 1/3, ensuring that this alternation between $\Delta_B-\Delta_{\overline{B}}$ and $\Delta_A$ cannot go on forever. See Figure \ref{thm2-fig}. \begin{figure} \centering \includegraphics[width = 340bp]{fig1.pdf} \caption{\label{thm2-fig}Four of the seven cases of Theorem \ref{thm2} illustrated with $\rho=1/3$.} \end{figure} Case 2. $\phi=\phi_1$. Here $e_1>0=e_2$. 
Consequently, $\pi_0(n) < \pi_0$ for all $n\geq 1$ if and only if $c_1 >0$, so that \begin{eqnarray*} \Delta_{\overline{B}} = \bigg\{ (x_0, x_1, x_2) \in \Delta_B : \bigg(1- {S \over 1+ \rho^2}\bigg)(x_0-\pi_0)+2(x_1-\pi_1)>0\bigg\}. \end{eqnarray*} For $(x_0,x_1,x_2)\in \Delta_B-\Delta_{\overline{B}}$, since $c_1 \leq 0$ we have $\pi_0(1)\ge\pi_0$. So game $A$ is played after one play of game $B$. Therefore, when the trajectory starts in the region $\Delta_B - \Delta_{\overline{B}}$, after a few alternations of game $A$ and game $B$, it enters the region $\Delta_{\overline{B}}$ and stays there forever, just as in Case 1. Case 3. $\phi_1 < \phi < 2/3$. In this case, $e_1>0>e_2$ and $|e_1|>|e_2|$. We claim that a necessary and sufficient condition for $\pi_0(n) < \pi_0$ for every $n\geq 1$ is that $c_1 e_1 - c_2 e_2>0$. It is clearly necessary (take $n=1$ in (\ref{pi0-pi0(n)})). For sufficiency, note that $c_1 e_1 - c_2 e_2>0$ implies $c_1>0$. Thus, we have $c_1 e_1^n-c_2 e_2^n>0$ for all $n\ge1$ if in addition $c_1>c_2\ge0$. If $c_1>0>c_2$, then for even $n\ge2$, $c_1 e_1^n-c_2 e_2^n>0$, and for odd $n\ge1$, \begin{equation*} c_1 e_1^n-c_2 e_2^n=c_1 e_1^n \bigg[1-{c_2\over c_1}\bigg({e_2\over e_1}\bigg)^n\bigg]\ge c_1 e_1^n \bigg( 1- {c_2 e_2 \over c_1 e_1} \bigg)>0, \end{equation*} proving the claim. It follows that \begin{eqnarray}\label{Delta(B)-case3} \Delta_{\overline{B}}&=&\bigg\{ (x_0, x_1, x_2) \in \Delta_B : \nonumber\\ &&\qquad\bigg(1+{(3\phi-2)(1+\rho) \over \phi(1-\rho)} \bigg)(x_0 - \pi_0) + 2 (x_1 -\pi_1)>0 \bigg\}. \end{eqnarray} Let $(x_0,x_1,x_2) \in \Delta_B -\Delta_{\overline{B}}$. Since $c_1 e_1 - c_2 e_2 \leq 0$, game $A$ is played after one play of game $B$. Especially if $x_1<\pi_1$, the trajectory enters $\Delta_{\overline{B}}$ after a few alternations of game $B$ and game $A$ as in Case 1. 
If $x_1 \geq \pi_1$, it moves to state $(y_0, y_1, y_2):=(x_0, x_1, x_2){\bm P}_B$ in $\Delta_A$, which satisfies \begin{eqnarray*} y_1-\pi_1 &=& [ 2(1+\rho)(1+\rho^2)(1+\rho+\rho^2)]^{-1} \{[2\phi(1+\rho+\rho^2)-(1+\rho)^2]\rho(1+\rho^2)\\ &&\quad{} - 2\phi \rho(1-\rho^3) x_0 +2[1+\rho-\phi(1+2\rho)](1+\rho^2)(1+\rho+\rho^2)x_1 \} \\ &>& [ 2(1+\rho)(1+\rho^2)(1+\rho+\rho^2)]^{-1} \{[2\phi(1+\rho+\rho^2)-(1+\rho)^2]\rho(1+\rho^2)\\ &&\quad{} - 2\phi \rho(1-\rho^3) \pi_0 +2[1+\rho-\phi(1+2\rho)](1+\rho^2)(1+\rho+\rho^2)\pi_1 \}\\ &=&0, \end{eqnarray*} where the inequality uses $\phi <2/3 <(1+\rho)/(1+2 \rho)$ for $0<\rho<1$. From Proposition \ref{prop1} it follows that the trajectory converges to the limit $(1/3,1/3,1/3)\in \Delta_{\overline{B}}$ along the line segment between $(y_0, y_1, y_2) \in \Delta_A$ with $y_1 > \pi_1$ and $(1/3,1/3,\break1/3)$. Since this line is above (in $p(\Delta)$) the critical line segment for $\Delta_{\overline{B}}$ in (\ref{Delta(B)-case3}), after a finite number of plays of game $A$ the trajectory directly enters the region $\Delta_{\overline{B}}$. See Figure \ref{thm2-fig}. Case 4. $\phi=2/3$. Since $e_1>0$ and $e_2=-e_1$, for even $n \geq 2$, $\pi_0 - \pi_0(n) = e_1^n (\pi_0-x_0) >0$, and for odd $n \geq 1$, \begin{eqnarray}\label{case4} \pi_0-\pi_0 (n)=(1+\rho^2)e_1^n[x_0-\pi_0+2(x_1-\pi_1)]/S. \end{eqnarray} So we have \begin{eqnarray*} \Delta_{\overline{B}} =\{ (x_0, x_1, x_2) \in \Delta_B : x_0-\pi_0+2(x_1-\pi_1)>0\}. \end{eqnarray*} If $(x_0, x_1, x_2) \in \Delta_B-\Delta_{\overline{B}}$, since $\pi_0(1)\ge\pi_0$ by (\ref{case4}), game $A$ is played after one play of game $B$. But after playing game $A$ the trajectory directly moves to $(1/3,1/3,1/3) \in \Delta_{\overline{B}}$ because $\bm P_A$ is the $3\times3$ matrix each of whose entries is 1/3. Therefore, there are only three possible sequences of games, namely $\overline{B}$, $A\overline{B}$, and $BA\overline{B}$. See Figure \ref{thm2-fig}. Case 5. $2/3 < \phi < \phi_2 $. 
Here $e_1 >0>e_2$ and $|e_1|<|e_2|$. If $c_2=0$, then because $c_1>c_2=0$ we have $\pi_0(n) < \pi_0$ for all $n\geq 1$. On the other hand, if $c_2\ne0$, then $\pi_0(n)\ge\pi_0$ for some $n\ge1$. This gives \begin{equation}\label{Delta(B)} \Delta_{\overline{B}} = \bigg\{ (x_0, x_1, x_2) \in \Delta_B : \bigg(1+{S \over 1+ \rho^2} \bigg)(x_0 - \pi_0) +2(x_1 - \pi_1)=0 \bigg\}. \end{equation} We recall from Proposition \ref{prop1} that there is only one play of game $A$ when the trajectory enters $\Delta_A$. Since $(w_0, w_1, w_2) \in \Delta_A$ moves to $$ (x_0, x_1, x_2) := (w_0, w_1, w_2){\bm P}_A ={3\over 2}\phi\bigg({1\over3},{1\over3},{1\over3}\bigg)+\bigg(1-{3 \over 2} \phi\bigg)(w_0,w_1,w_2), $$ any state $(w_0, w_1, w_2) \in \Delta_A$ satisfying \begin{eqnarray*} \bigg(1+{S \over 1+ \rho^2} \bigg)\bigg[{1\over2}\phi + \bigg(1-{3 \over 2} \phi\bigg)w_0 - \pi_0 \bigg] +2\bigg[{1\over2}\phi + \bigg(1-{3 \over 2} \phi\bigg)w_1 - \pi_1\bigg]=0 \end{eqnarray*} moves to $\Delta_{\overline{B}}$ after one play of game $A$. Defining \begin{eqnarray*} f(x)&:=& - {1 \over 2} \bigg(1+ {S \over 1+\rho^2} \bigg) x + { 2( \phi -2 \pi_1 )(1+\rho^2) + (\phi - 2 \pi_0)(1+ \rho^2 + S) \over 2(3 \phi -2)(1+\rho^2) } \end{eqnarray*} for $\pi_0 \leq x \leq 1$, we have \begin{equation}\label{DeltaA(B)} \Delta_{A\overline{B}} = \{(w_0, w_1, w_2) \in \Delta_A : w_1 = f(w_0)\}. \end{equation} Since the slope $m$ of $y=f(x)$ satisfies $m <-1$, we need both $f(\pi_0)\geq 0$ and $f(1) \leq 0$ to ensure that $\Delta_{A \overline{B}}$ is nonempty. For $0<\rho<1$ and $2/3 < \phi < \phi_2$ it is easy to check that $f(\pi_0)>0$. Next we have \begin{eqnarray*} f(1) ={ (1+\rho)[(1-\rho)(1+\rho^2) + (1+\rho)S] - 2\phi (1+\rho+\rho^2) S \over 2(3\phi-2)(1+\rho^2)(1+\rho+\rho^2)}. 
\end{eqnarray*} So it follows that if $2/3 < \phi < \phi_3$ with $$ \phi_3 :={(1+\rho)[(1-\rho)(1+\rho^2) + (1+\rho)S]\over2 (1+\rho+\rho^2) S}, $$ then $f(1) > 0$ and $\Delta_{A \overline{B}}=\varnothing$, whereas if $\phi_3 \leq \phi < \phi_2$, then $\Delta_{A \overline{B}}\ne\varnothing$. The latter occurs only in regions 5--8 of Figure \ref{region-fig} in Section \ref{stability}. See Figure \ref{thm2-fig}. Similar arguments show that \begin{equation}\label{DeltaBA(B)} \Delta_{BA\overline{B}}=\{(w_0,w_1,w_2)\in\Delta_B: w_1=g(w_0)\} \end{equation} and \begin{equation}\label{DeltaBBA(B)} \Delta_{BBA\overline{B}}=\{(w_0,w_1,w_2)\in\Delta_B: w_1=h(w_0)\}, \end{equation} where \begin{eqnarray*} g(x) &:=& - {1 \over 2} \bigg(1+ {S \over 1+\rho^2} \bigg) x + {g_1 (\rho, \phi)\over (3\phi-2)g_2(\rho, \phi)}, \end{eqnarray*} $g_1(\rho, \phi) := \phi (3 \phi - 2) [(1+2\rho)(1+\rho^2) + S] -(1 + \rho) [ 2( \phi -2 \pi_1 )(1+\rho^2) + (\phi - 2 \pi_0)(1+ \rho^2 + S)]$, $g_2 (\rho, \phi) := (3\phi-2)(1+ \rho)(1+\rho^2) + \phi(1-\rho) S$, \begin{eqnarray*} h(x) &:=& - {1 \over 2} \bigg(1+ {S \over 1+\rho^2} \bigg) x + {h_1 (\rho, \phi) \over (3\phi-2)h_2 (\rho, \phi)}, \end{eqnarray*} $h_1(\rho, \phi) := \phi (3 \phi - 2) \{2 (1 + \rho) [(1+2\rho)(1+\rho^2) + S] - \phi [2 + 6 \rho + 3 \rho^2 + 4 \rho^3 + 3 \rho^4 + (2 + 2 \rho - \rho^2) S]\}- (1 + \rho)^2[2 (\phi - 2 \pi_1) (1 + \rho^2) + (\phi - 2 \pi_0) (1 + \rho^2 + S)]$, and $h_2(\rho, \phi) := -2 (1 + \rho)^2 (1 + \rho^2) + 6 \phi (1 + \rho)^2 (1 + \rho^2) - \phi^2 (5 + 10 \rho + 6 \rho^2 + 10 \rho^3 + 5 \rho^4) - \phi (3 \phi - 2) (1 - \rho^2) S$. Further, (\ref{DeltaBA(B)}) is nonempty in a region slightly smaller than the union of regions 7 and 8 of Figure \ref{region-fig} in Section \ref{stability}, while (\ref{DeltaBBA(B)}) is nonempty only in a very small subset of region 8 of that figure. 
Finally, it can be shown that $\Delta_{ABA\overline{B}}=\varnothing$, $\Delta_{ABBA\overline{B}}=\varnothing$, and $\Delta_{BBBA\overline{B}}=\varnothing$, implying that there are no other ways in which game $B$ is played forever. In summary, in the case of $2/3 <\phi<\phi_2$, only when the initial state belongs to (\ref{Delta(B)}), (\ref{DeltaA(B)}), (\ref{DeltaBA(B)}), or (\ref{DeltaBBA(B)}), some of which may be empty, is game $B$ played forever. Cases 6 and 7. $\phi_2\le \phi\le1$. Here $e_2<e_1\le0$. Clearly, $\Delta_{\overline{B}}=\varnothing$ in these cases. \end{proof} \section{A limit cycle for $\phi>2/3$} \label{periodic} In this section we show that, whenever $\phi>2/3$, there is at least one limit cycle, at least when the discrete dynamical system starts from a certain initial state, which will be specified. The game patterns that occur include $[1,n]$, denoting 1 play of game $A$ followed by $n$ plays of game $B$, for even $n\ge2$, and $[1,n,1,n-2]$, denoting 1 play of game $A$, $n$ plays of game $B$, 1 play of game $A$, and $n-2$ plays of game $B$, for even $n\ge4$. The value of $n$ depends on $\rho$ and $\phi$. We will need three lemmas to prepare for the next theorem. The proofs are trivial and therefore omitted. \begin{lemma}\label{Lemma1} If $1<a<b$ and $c>0$, then $(a^n+c)/(b^n+c)$ is decreasing in $n\ge1$. \end{lemma} \begin{lemma}\label{Lemma2} If $0<a<b<1$, $a+b\le1$, and $c>1$, then $(c-a^n)/(c-b^n)$ is decreasing in $n\ge1$. \end{lemma} \begin{lemma}\label{Lemma3} If $0<c\le{1\over2}$, the functions $$ f(x):={x-cx^n\over1-cx^n}\quad{\rm and}\quad g(x):={x-cx^n\over1-cx^{n+1}} $$ are increasing on $(0,1)$ for each $n\ge1$. 
\end{lemma} We will also need the functions \begin{eqnarray}\label{E_n} E_n &:=&\phi (1- \rho)\big\{ e_2^n[2+ e_1^n (3 \phi -2) ][3(1+\rho)(1+\rho^2)-(1-\rho)S] \nonumber\\ && \qquad\qquad\;\;{} - e_1^n [2+ e_2^n (3\phi -2)][3(1+\rho)(1+\rho^2) +(1-\rho)S ] \big\}/2,\\ E_{n,m} &:=&\phi (1- \rho)\big\{ e_2^m[2+ e_1^n (3 \phi -2) ][3(1+\rho)(1+\rho^2)-(1-\rho)S] \nonumber\\ \label{E_{n,m}} && \qquad\qquad\;\;{} - e_1^m [2+ e_2^n (3\phi -2)][3(1+\rho)(1+\rho^2) +(1-\rho)S ] \big\}/2,\\ G_{n,m}&:=& \phi (1- \rho)\big\{ e_2^m [4 -e_1^{2 (n-1)}(3 \phi-2)^2][ 2 - e_2^{n-2} (3 \phi -2) ][3(1+\rho)(1+\rho^2)\nonumber\\ && \qquad\qquad\;\; {}-(1-\rho)S ] - e_1^m [4 -e_2^{2 (n-1)}(3 \phi-2)^2] [ 2 - e_1^{n-2} (3\phi -2) ]\nonumber\\ \label{G_{n,m}} &&\qquad\qquad\qquad\qquad{}\cdot[3(1+\rho)(1+\rho^2)+(1-\rho)S ] \big\},\\ H_{n,m}&:=& \phi (1- \rho)\big\{ e_2^m [4-e_1^{2 (n-1)}(3 \phi-2)^2][2 - e_2^n (3 \phi -2)] [3(1+\rho)(1+\rho^2)\nonumber\\ && \qquad\qquad\;\; {}-(1-\rho)S ] - e_1^m [4 -e_2^{2 (n-1)}(3 \phi-2)^2] [ 2- e_1^n (3 \phi -2) ]\nonumber\\ \label{H_{n,m}} &&\qquad\qquad\qquad\qquad{}\cdot[3(1+\rho)(1+\rho^2) +(1-\rho)S ] \big\}. \end{eqnarray} The significance of these functions is explained in the following theorem. \begin{theorem}\label{thm-limitcycles} For even $n\ge4$, the implicitly defined curves $G_{n,n-2}=0$, $E_{n-2}=0$, $E_{n,n-2}=0$, $H_{n,n-2}=0$, and $G_{n+2,n}=0$ are monotonically ordered, from highest to lowest, in $\{(\rho,\phi)\in(0,1)\times (2/3,3/4): \phi<\phi_2\}$. More precisely, \begin{eqnarray} G_{n,n-2}=0&{\rm\ is\ above\ }&E_{n-2}=0,\label{ineq1}\\ E_{n-2}=0&{\rm\ is\ above\ }&E_{n,n-2}=0,\label{ineq2}\\ E_{n,n-2}=0&{\rm\ is\ above\ }&H_{n,n-2}=0,\label{ineq3}\\ H_{n,n-2}=0&{\rm\ is\ above\ }&G_{n+2,n}=0,\label{ineq4} \end{eqnarray} for $n=4,6,8,\ldots$. Furthermore, the functions defining the curves are positive above, and negative below, the curves. 
For even $n\ge2$ and $(\rho,\phi)\in(0,1)\times(2/3,1]$, the greedy strategy leads to the periodic pattern $[1,n]$ starting from the corresponding stationary distribution as the initial state if and only if $E_{n,n-2}<0$ and $E_n\ge0$. ($E_{2,0}<0$ is automatically satisfied.) For even $n\ge4$ and $(\rho,\phi)\in(0,1)\times(2/3,1]$, the greedy strategy leads to the periodic pattern $[1,n,1,n-2]$ starting from the corresponding stationary distribution as the initial state if and only if $G_{n,n-2}<0$ and $H_{n,n-2}\ge0$. \end{theorem} \begin{remark} In particular, if $\phi>2/3$, there is at least one limit cycle. If $(\rho,\phi)$ belongs to the region between the curves in (\ref{ineq1}), there are two limit cycles, of the forms $[1,n,1,n-2]$ and $[1,n-2]$. If $(\rho,\phi)$ belongs to the region between the curves in (\ref{ineq2}), there is one limit cycle, of the form $[1,n,1,n-2]$. If $(\rho,\phi)$ belongs to the region between the curves in (\ref{ineq3}), there are two limit cycles, of the forms $[1,n,1,n-2]$ and $[1,n]$. If $(\rho,\phi)$ belongs to the region between the curves in (\ref{ineq4}), there is one limit cycle, of the form $[1,n]$. The regions with two limit cycles are very small. See Table \ref{regions-rho=1/3} for the case $\rho=1/3$. \end{remark} \begin{table} \caption{\label{regions-rho=1/3}Critical $\phi$-values separating regions at $\rho=1/3$. 
Numbers are truncated (not rounded) at 18 decimal places.\medskip} \begin{center} {\small \begin{tabular}{ccc}\hline form of & lower-boundary & $\phi$-value \\ limit cycles & equation & at $\rho=1/3$ \\ \hline $[1,2]$ & $G_{4,2}=0$ & $0.688\,066\,413\,565\,052\,628\cdots$ \\ $[1,4,1,2]$, $[1,2]$ & $E_2=0$ & $0.688\,066\,239\,503\,137\,641\cdots$ \\ $[1,4,1,2]$ & $E_{4,2}=0$ & $0.688\,026\,898\,650\,299\,426\cdots$ \\ $[1,4,1,2]$, $[1,4]$ & $H_{4,2}=0$ & $0.688\,026\,881\,018\,074\,821\cdots$ \\ $[1,4]$ & $G_{6,4}=0$ & $0.677\,218\,563\,694\,275\,305\cdots$ \\ $[1,6,1,4]$, $[1,4]$ & $E_4=0$ & $0.677\,218\,563\,614\,298\,209\cdots$ \\ $[1,6,1,4]$ & $E_{6,4}=0$ & $0.677\,217\,953\,395\,292\,912\cdots$ \\ $[1,6,1,4]$, $[1,6]$ & $H_{6,4}=0$ & $0.677\,217\,953\,388\,847\,194\cdots$ \\ $[1,6]$ & $G_{8,6}=0$ & $0.673\,669\,128\,225\,600\,196\cdots$ \\ $\vdots$ & $\vdots$ & $\vdots$ \\ \hline \end{tabular}} \end{center} \end{table} \begin{proof} Let $\bm \pi_{[1,n]}$ be the stationary distribution of $ {\bm P}_A {\bm P}_B^n$. Then we have \begin{eqnarray}\label{E_n/D_n} (\bm \pi_{[1,n]} - \bm \pi)(1,0,0)^\T = E_n / D_n , \end{eqnarray} where $E_n$ is as in (\ref{E_n}) and \begin{equation}\label{Dn} D_n := 2[2+ e_1^n (3 \phi-2)][2+ e_2^n (3 \phi -2)](1+\rho+\rho^2)S. \end{equation} Now $D_n$ is positive for all positive integers $n$ because $0<3\phi-2\le1$ and $|e_1|<|e_2|<1$. 
Noting that $e_1+e_2=-(3\phi-2)$, $e_2-e_1=-\phi(1-\rho)S/[(1+\rho)(1+\rho^2)]$, and $e_1e_2= 1- 3 \phi + 2 \phi^2 (1+\rho+\rho^2)^2/[(1+\rho)^2(1+\rho^2)]$, we have $$ E_2 = { \phi^2 (1-\rho)^2 g(\rho, \phi) S \over (1+\rho)^4 (1+ \rho^2)^2}, $$ where $g(\rho, \phi) := g_1(\phi)(1 + 4 \rho +4 \rho^7 + \rho^8)+ g_2(\phi) \rho^2 (1 + \rho^4)+ g_3(\phi)\rho^3 (1 + \rho^2)+ g_4(\phi) \rho^4$ with $g_1(\phi):=-15 + 48 \phi - 63 \phi^2 + 44 \phi^3 - 12 \phi^4$, $g_2(\phi):=-120 + 396 \phi - 540 \phi^2 + 404 \phi^3 -120 \phi^4$, $g_3(\phi):=-180 + 600 \phi - 828 \phi^2 +632 \phi^3 - 192 \phi^4$, and $g_4(\phi):=-210 + 696 \phi -954 \phi^2 + 728 \phi^3 - 228 \phi^4$. Since the functions $g_1$, $g_2$, $g_3$, and $g_4$ are increasing on $(0,1]$, we have $g(\rho,\phi)\geq g(\rho,\phi_2)$ for $\phi_2 \leq \phi \leq 1$ and \begin{eqnarray*} g(\rho,\phi_2) ={16 (1-\rho) (1+\rho)^3(1+\rho^2)^3 g_5(\rho)S \over [3(1+\rho)(1+\rho^2)-(1-\rho)S]^4}, \end{eqnarray*} where $g_5(\rho):=17 + 68 \rho + 80 \rho^2 + 92 \rho^3 + 134 \rho^4 + 92 \rho^5 + 80 \rho^6+ 68 \rho^7 + 17 \rho^8 -3(1-\rho^2) (5 + 10 \rho +6 \rho^2 + 10 \rho^3 + 5 \rho^4)S$. Using \begin{eqnarray*} &&[17 + 68 \rho + 80 \rho^2 + 92 \rho^3 + 134 \rho^4 + 92 \rho^5 + 80 \rho^6+ 68 \rho^7 + 17 \rho^8]^2\\ &&\quad{}\quad{} -[3(1-\rho^2)(5 + 10 \rho +6 \rho^2 + 10 \rho^3 + 5 \rho^4)S]^2=64(1+\rho+\rho^2)^8>0, \end{eqnarray*} we have $g_5(\rho)>0$ and hence $g(\rho,\phi_2)>0$ for $0<\rho<1$. It follows that if $\phi_2 \leq \phi \leq 1$, then $E_2 >0$, which implies that $\bm \pi_{[1,2]} \in \Delta_A$. More generally, a similar argument shows that \begin{eqnarray} \label{EnmDn} (\bm \pi_{[1,n]}{\bm P}_A {\bm P}_B^m - \bm \pi)(1,0,0)^\T = E_{n,m} / D_n , \end{eqnarray} where $E_{n,m}$ and $D_n$ are as in (\ref{E_{n,m}}) and (\ref{Dn}). Notice that $E_{n,n}=E_n$. First, $E_{2,0}<0$ by Proposition \ref{prop1}. 
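The critical $\phi$-values in Table \ref{regions-rho=1/3} can be recovered numerically from (\ref{E_n}) and (\ref{G_{n,m}}). In the sketch below (Python, not part of the paper), $e_1$, $e_2$, and $S$ are reconstructed from the three symmetric-function identities just stated, so the snippet assumes only those identities:

```python
import math

def params(rho, phi):
    # From e1*e2 = 1 - 3*phi + 2*phi^2*(1+rho+rho^2)^2/[(1+rho)^2*(1+rho^2)]
    # and e_{1,2} = 1 - (3/2)*phi +/- phi*e0, one solves for e0 and then S.
    e0 = math.sqrt(2.25 - 2 * (1 + rho + rho**2)**2
                   / ((1 + rho)**2 * (1 + rho**2)))
    S = 2 * (1 + rho) * (1 + rho**2) * e0 / (1 - rho)
    return 1 - 1.5 * phi + phi * e0, 1 - 1.5 * phi - phi * e0, S

def E(n, rho, phi):  # the function E_n
    e1, e2, S = params(rho, phi)
    Km = 3 * (1 + rho) * (1 + rho**2) - (1 - rho) * S
    Kp = 3 * (1 + rho) * (1 + rho**2) + (1 - rho) * S
    d = 3 * phi - 2
    return phi * (1 - rho) * (e2**n * (2 + e1**n * d) * Km
                              - e1**n * (2 + e2**n * d) * Kp) / 2

def G(n, m, rho, phi):  # the function G_{n,m}
    e1, e2, S = params(rho, phi)
    Km = 3 * (1 + rho) * (1 + rho**2) - (1 - rho) * S
    Kp = 3 * (1 + rho) * (1 + rho**2) + (1 - rho) * S
    d = 3 * phi - 2
    return phi * (1 - rho) * (
        e2**m * (4 - e1**(2*(n-1)) * d**2) * (2 - e2**(n-2) * d) * Km
        - e1**m * (4 - e2**(2*(n-1)) * d**2) * (2 - e1**(n-2) * d) * Kp)

def bisect(f, lo, hi):
    # single sign change: the function is negative below its curve, positive above
    while hi - lo > 1e-14:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

print(bisect(lambda p: G(4, 2, 1/3, p), 0.68, 0.70))  # ~0.688066413565...
print(bisect(lambda p: E(2, 1/3, p), 0.68, 0.70))     # ~0.688066239503...
```

The two roots reproduce the first two $\phi$-values of Table \ref{regions-rho=1/3}.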
Next, \begin{eqnarray*} E_{2,1} = {\phi^2 (1-\rho)^2 S\over(1+\rho)^2(1+\rho^2)} [h(\phi)(1+2\rho+2\rho^3+\rho^4)-6(1-\phi)(5-9\phi +9 \phi^2)\rho^2], \end{eqnarray*} where $h(\phi):=-15+40\phi-45\phi^2+18\phi^3$. Since $h$ is increasing, its maximum on $(0,1]$ is $h(1)=-2$, so we have $E_{2,1}<0$, which proves that $\bm \pi_{[1,2]}{\bm P}_A {\bm P}_B \in \Delta_B$. We need one more play of game $B$ to return to $\bm\pi_{[1,2]}{\bm P}_A {\bm P}^2_{B}=\bm\pi_{[1,2]}\in\Delta_A$. Hence if $\phi_2\le\phi\le1$, the trajectory that starts from initial state $\bm \pi_{[1,2]} \in \Delta_A$ follows the periodic pattern $[1,2]$ under the greedy strategy. Next consider the case of $ 2/3 < \phi < \phi_2 $. We can rewrite $E_n$ in (\ref{E_n}) as \begin{eqnarray*} E_n &=& {\phi (1- \rho)\over2}[3(1+\rho)(1+\rho^2) +(1-\rho)S]e_2^n [2+ e_1^n (3\phi -2)] \bigg( {\phi_1 \over\phi_2 }- F_n\bigg), \end{eqnarray*} where $F_n :=(e_1/e_2)^n [2+e_2^n(3\phi-2)]/[2+e_1^n(3\phi-2)]$. In this case, since $e_2<0<e_1$ with $|e_2|>|e_1|$, we find, for all odd $n\ge1$, that $F_n<0$ and therefore $E_n <0$. Moreover, $F_n>0$ for all even $n\ge2$, and the sequence $\{F_n: n=2,4,6,\ldots\}$ is decreasing to 0 since $F_n=[(1/e_2)^n+(3\phi-2)/2]/[(1/e_1)^n+(3\phi-2)/2]$ and therefore Lemma \ref{Lemma1} applies. Thus, we let $s$ denote the smallest even $n\ge2$ such that $0< F_n \le \phi_1/\phi_2$. Equivalently, $s:=\min\{n\in\{2,4,6,\ldots\}:E_n\ge0\}$, hence $E_s\ge0$ and (if $s\ge4$) $E_{s-2}<0$. By (\ref{E_n/D_n}), $\bm \pi_{[1,s]} \in \Delta_A$. For odd $m<s$, we have $E_{s,m}<0$. If $E_{s,s-2}<0$, then $E_{s,m}<0$ for all even $m<s-2$ and since $\bm \pi_{[1,s]} {\bm P}_A{\bm P}_B^s = \bm \pi_{[1,s]}$, we can conclude that after $s$ plays of game $B$, the trajectory returns to the initial state $\bm \pi_{[1,s]} \in \Delta_A$. 
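The selection of $s$ is easy to evaluate numerically on either side of the curve $E_2=0$ at $\rho=1/3$ (cf.\ Table \ref{regions-rho=1/3}); a sketch in Python, with $e_1$, $e_2$, $S$ reconstructed from the identities stated earlier in the proof:

```python
import math

def Enm(n, m, rho, phi):
    # E_{n,m}; the diagonal case m = n is E_n
    e0 = math.sqrt(2.25 - 2 * (1 + rho + rho**2)**2
                   / ((1 + rho)**2 * (1 + rho**2)))
    S = 2 * (1 + rho) * (1 + rho**2) * e0 / (1 - rho)
    e1, e2 = 1 - 1.5 * phi + phi * e0, 1 - 1.5 * phi - phi * e0
    d = 3 * phi - 2
    Km = 3 * (1 + rho) * (1 + rho**2) - (1 - rho) * S
    Kp = 3 * (1 + rho) * (1 + rho**2) + (1 - rho) * S
    return phi * (1 - rho) * (e2**m * (2 + e1**n * d) * Km
                              - e1**m * (2 + e2**n * d) * Kp) / 2

def s_of(rho, phi):
    # smallest even n >= 2 with E_n >= 0
    n = 2
    while Enm(n, n, rho, phi) < 0:
        n += 2
    return n

print(s_of(1/3, 0.689))           # 2 (just above the curve E_2 = 0)
print(s_of(1/3, 0.680))           # 4 (below E_2 = 0, above E_4 = 0)
print(Enm(4, 2, 1/3, 0.680) < 0)  # True, so the pattern [1,4] applies here
```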
Hence for $(\rho,\phi)$ satisfying $E_{s,s-2}<0$ (with $s$ defined as above), the trajectory that starts from the initial state $\bm \pi_{[1,s]}$ follows the periodic pattern $[1,s]$ under the greedy strategy. (Notice that if $s=2$, then $E_{s,s-2}=E_{2,0}<0$ automatically.) Moreover, the conclusion fails if $E_n<0$ (implying $\bm\pi_{[1,n]}\in\Delta_B$) or if $E_{n,n-2}\ge0$ (implying $\bm\pi_{[1,n]}\bm P_A\bm P_B^{n-2}\in\Delta_A$). This proves the assertions in the second paragraph of the theorem. We next claim that for $(\rho,\phi)$ satisfying $E_{s,s-2} \geq 0$ with $s\geq4$, the greedy strategy leads to periodic pattern $[1,s,1,s-2]$ if we start from the initial state $\bm \pi_{[1,s,1,s-2]} \in \Delta_A$. Calculations similar to (\ref{EnmDn}) give \begin{eqnarray*} (\bm \pi_{[1,n,1,n-2]}{\bm P}_A {\bm P}_B^m - \bm \pi)(1,0,0)^\T = G_{n,m}/I_n,\\ (\bm \pi_{[1,n,1,n-2]}{\bm P}_A {\bm P}_B^n {\bm P}_A {\bm P}_B^m - \bm \pi)(1,0,0)^\T = H_{n,m}/I_n, \end{eqnarray*} where $G_{n,m}$ and $H_{n,m}$ are as in (\ref{G_{n,m}}) and (\ref{H_{n,m}}) and \begin{eqnarray*} I_n &:=& 2 [2-e_1^{n-1}(3\phi-2)][2-e_2^{n-1}(3\phi-2)] D_{n-1}. \end{eqnarray*} Since $0<3 \phi-2\le1$ and $|e_1|<|e_2|<1$, we have $I_n >0$. To prove the claim we need to show that $G_{s,m}<0$ for $0 \leq m \leq s-1$ and $G_{s,s}\ge0$, as well as $H_{s,m}<0$ for $0 \leq m \leq s-3$ and $H_{s,s-2}\ge0$. Since $e_2<0<e_1$ with $|e_2|>|e_1|$, it is sufficient to prove that $G_{s,s-2} <0$, $G_{s,s} \geq 0$, $H_{s,s-4} <0$, and $H_{s,s-2} \geq 0$. From the fact that $E_{s-2}<0$ we have \begin{eqnarray}\nonumber \bigg({e_1 \over e_2} \bigg)^{s-2} &>& \bigg({ 2+ e_1^{s-2} (3\phi -2) \over 2+ e_2^{s-2} ( 3 \phi -2)} \bigg) \bigg({\phi_1\over\phi_2 } \bigg) \\ \label{gs-2} &=& \bigg({ 4- e_1^{2(s-2)} (3\phi -2)^2 \over 4- e_2^{2(s-2)} ( 3 \phi -2)^2} \bigg) \bigg( { 2- e_2^{s-2} (3\phi -2) \over 2- e_1^{s-2}( 3 \phi -2)} \bigg) \bigg({\phi_1 \over\phi_2 } \bigg). 
\end{eqnarray} Here we can show that the sequence $(4- e_1^{2 m}(3\phi-2)^2)/(4-e_2^{2 m}(3 \phi-2)^2)$ is decreasing in $m$ if we apply Lemma \ref{Lemma2} with $a=e_1^2$, $b=e_2^2$, and $c=4/(3\phi-2)^2$, where we recall that $2/3<\phi<\phi_2<1$, so $0<a<b<1$ (see Table \ref{eigenvalues}) and $c>4$. It remains to show that $a+b\le1$. Let us write $e_1=1-\phi+\phi e_1^\circ=1-(3/2)\phi+\phi e^\circ$ and $e_2=1-\phi+\phi e_2^\circ=1-(3/2)\phi-\phi e^\circ$, where $e^\circ:=(1-\rho)S/[2(1+\rho)(1+\rho^2)]\in(0,1/2)$. Then \begin{eqnarray*} a+b&=&e_1^2+e_2^2=2\bigg(1-{3\over2}\phi\bigg)^2+2\phi^2(e^\circ)^2<2\bigg(1-{3\over2}\phi\bigg)^2+{\phi^2\over2}\\ &=&2-6\phi+5\phi^2=1+(1-\phi)(1-5\phi)<1 \end{eqnarray*} since $2/3<\phi<\phi_2<1$. Hence from (\ref{gs-2}) we have \begin{eqnarray*} \bigg({e_1 \over e_2} \bigg)^{s-2}>\bigg({ 4- e_1^{2(s-1)} (3\phi -2)^2 \over 4- e_2^{2(s-1)} ( 3 \phi -2)^2} \bigg) \bigg( { 2- e_2^{s-2} (3\phi -2) \over 2- e_1^{s-2}( 3 \phi -2)} \bigg) \bigg({\phi_1 \over\phi_2 } \bigg), \end{eqnarray*} which is equivalent to $G_{s,s-2}<0$. By the same reasoning, for even $n\ge4$, $E_{n-2}\le0$ implies $G_{n,n-2}<0$, which yields (\ref{ineq1}). We also have \begin{eqnarray}\label{hs-4} \bigg( {e_1 \over e_2} \bigg) ^{s-4} &=& \bigg( {e_1 \over e_2} \bigg) ^{s-2} \bigg( {e_2 \over e_1} \bigg) ^2 \nonumber\\ &>& \bigg({ 4- e_1^{2(s-1)} (3\phi -2)^2 \over 4- e_2^{2(s-1)} ( 3 \phi -2)^2} \bigg) \bigg( { 2- e_2^{s-2} (3\phi -2) \over 2- e_1^{s-2}( 3 \phi -2)} \bigg) \bigg({\phi_1 \over\phi_2 } \bigg)\bigg( {e_2 \over e_1} \bigg) ^2 \nonumber\\ &>& \bigg({ 4- e_1^{2(s-1)} (3\phi -2)^2 \over 4- e_2^{2(s-1)} ( 3 \phi -2)^2} \bigg) \bigg( { 2- e_2^s (3\phi -2) \over 2- e_1^s( 3 \phi -2)} \bigg) \bigg({\phi_1 \over\phi_2 } \bigg), \end{eqnarray} which is equivalent to $H_{s,s-4}<0$. 
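The hypotheses of Lemma \ref{Lemma2} used here ($0<a<b<1$, $a+b\le1$, $c>1$) and the resulting monotonicity can also be confirmed numerically across the parameter range; a sketch in Python, with $e_1$, $e_2$ reconstructed from the identities above and $\phi_2$ taken to be the $\phi$ at which $e_1$ vanishes (an assumption consistent with Cases 6 and 7):

```python
import math

def eigs(rho, phi):
    e0 = math.sqrt(2.25 - 2 * (1 + rho + rho**2)**2
                   / ((1 + rho)**2 * (1 + rho**2)))
    return 1 - 1.5 * phi + phi * e0, 1 - 1.5 * phi - phi * e0, e0

for rho in [0.1, 1/3, 0.5, 0.7, 0.9]:
    e0 = eigs(rho, 0.7)[2]
    phi2 = 1 / (1.5 - e0)  # e1 = 0 at phi = phi2
    for phi in [2/3 + 0.01, (2/3 + phi2) / 2, phi2 - 0.01]:
        e1, e2, _ = eigs(rho, phi)
        a, b, c = e1**2, e2**2, 4 / (3 * phi - 2)**2
        assert 0 < a < b < 1 and a + b < 1 and c > 1
        # Lemma 2: the ratio is nonincreasing in m (to floating-point resolution)
        r = [(4 - e1**(2*m) * (3*phi - 2)**2) / (4 - e2**(2*m) * (3*phi - 2)**2)
             for m in range(1, 12)]
        assert all(r1 >= r2 for r1, r2 in zip(r, r[1:]))
print("hypotheses and monotonicity confirmed on the grid")
```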
Notice that the last inequality in (\ref{hs-4}) uses \begin{eqnarray}\label{inequality} { 2 e_2^2- e_2^s (3\phi -2) \over 2 e_1^2- e_1^s ( 3 \phi -2)} > { 2- e_2^s (3\phi -2) \over 2- e_1^s( 3 \phi -2)}, \end{eqnarray} which is equivalent to $$ {2 e_2^2- e_2^s (3\phi -2) \over2- e_2^s (3\phi -2) } > {2 e_1^2- e_1^s ( 3 \phi -2) \over 2- e_1^s( 3 \phi -2)} $$ for even $s\ge4$. To confirm the latter inequality, divide both numerators and denominators by 2 and apply Lemma \ref{Lemma3}. On the other hand, from $E_{s,s-2}\geq 0$ and the argument below (\ref{gs-2}), it follows that \begin{eqnarray}\label{H-ineq} \bigg( {e_1 \over e_2} \bigg)^{s-2} &\leq& \bigg({2+e_1^s (3\phi-2)\over 2+ e_2^s (3\phi-2)} \bigg) \bigg({\phi_1\over\phi_2} \bigg) \nonumber\\ &=& \bigg({ 4- e_1^{2s} (3\phi -2)^2 \over 4-e_2^{2s} (3\phi-2)^2} \bigg) \bigg( {2-e_2^s (3\phi-2)\over 2-e_1^s(3\phi-2)} \bigg) \bigg({\phi_1 \over\phi_2} \bigg) \nonumber\\ &<& \bigg({4-e_1^{2(s-1)} (3\phi-2)^2 \over 4-e_2^{2(s-1)} (3\phi-2)^2} \bigg) \bigg({2-e_2^s (3\phi-2) \over 2-e_1^s(3\phi -2)} \bigg) \bigg({\phi_1 \over\phi_2 } \bigg),\qquad \end{eqnarray} which is equivalent to $H_{s,s-2}>0$. By the same reasoning, for even $n\ge4$, $E_{n,n-2}\ge0$ implies $H_{n,n-2}>0$, which yields (\ref{ineq3}). Similarly, we have \begin{eqnarray}\label{G-ineq} \bigg( {e_1 \over e_2} \bigg) ^s &=& \bigg( {e_1\over e_2} \bigg) ^{s-2} \bigg( {e_1\over e_2} \bigg) ^2 \nonumber\\ &<&\bigg({4-e_1^{2(s-1)} (3\phi-2)^2 \over 4-e_2^{2(s-1)} (3\phi-2)^2} \bigg) \bigg({ 2-e_2^s (3\phi -2) \over 2-e_1^s (3\phi-2)} \bigg) \bigg({\phi_1 \over\phi_2} \bigg)\bigg( {e_1 \over e_2} \bigg) ^2 \nonumber\\ &<& \bigg({ 4-e_1^{2(s-1)} (3\phi-2)^2 \over 4-e_2^{2(s-1)} (3\phi-2)^2} \bigg) \bigg({2-e_2^{s-2} (3\phi-2) \over 2-e_1^{s-2} (3\phi-2)} \bigg) \bigg({\phi_1 \over\phi_2 } \bigg), \end{eqnarray} where the last inequality follows from (\ref{inequality}), and this is equivalent to $G_{s,s}>0$. 
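Both applications of Lemma \ref{Lemma3} here amount to taking $x=e_i^2$, $c=(3\phi-2)/2\le1/2$, and exponent $s/2$; the lemma and one instance of (\ref{inequality}) are easy to spot-check numerically (Python; $e_1$, $e_2$ reconstructed as before):

```python
import math

def f(x, c, n): return (x - c * x**n) / (1 - c * x**n)
def g(x, c, n): return (x - c * x**n) / (1 - c * x**(n + 1))

# Lemma 3: f and g are increasing on (0,1) whenever 0 < c <= 1/2
xs = [0.01 * k for k in range(1, 100)]
for c in [0.1, 0.3, 0.5]:
    for n in range(1, 8):
        for F in (f, g):
            vals = [F(x, c, n) for x in xs]
            assert all(v1 < v2 for v1, v2 in zip(vals, vals[1:]))

# one instance of the displayed inequality, at (rho, phi) = (1/3, 0.68), s = 4
rho, phi, s = 1/3, 0.68, 4
e0 = math.sqrt(2.25 - 2 * (1 + rho + rho**2)**2 / ((1 + rho)**2 * (1 + rho**2)))
e1, e2 = 1 - 1.5 * phi + phi * e0, 1 - 1.5 * phi - phi * e0
d = 3 * phi - 2
lhs = (2 * e2**2 - e2**s * d) / (2 - e2**s * d)
rhs = (2 * e1**2 - e1**s * d) / (2 - e1**s * d)
assert lhs > rhs  # i.e. f(e2^2) > f(e1^2) with c = d/2 and exponent s/2
print("Lemma 3 and the inequality check out")
```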
This almost proves the assertions in the third paragraph of the theorem, except that we have implicitly assumed that $E_{n,n-2}\ge0$ and $E_{n-2}<0$, both of which are stronger than necessary. We can weaken the former to $H_{n,n-2}\ge0$, in which case (\ref{H-ineq}) is no longer necessary and (\ref{G-ineq}) follows as before. We can weaken the latter to $G_{n,n-2}<0,$ in which case (\ref{gs-2}) is no longer necessary and (\ref{hs-4}) follows as before. (When $E_{n-2}\ge0$ and $G_{n,n-2}<0$, we have $s=n-2$, so we apply the inequalities involving $s$ with $s$ replaced by $n$.) Finally, the necessity of the inequalities is clear: If $G_{n,n-2}\ge0$, then $\bm\pi_{[1,n,1,n-2]}\bm P_A\bm P_B^{n-2}\in\Delta_A$, while if $H_{n,n-2}<0$, then $\bm\pi_{[1,n,1,n-2]}=\bm\pi_{[1,n,1,n-2]}\bm P_A\bm P_B^n\bm P_A\bm P_B^{n-2}\in\Delta_B$. For (\ref{ineq2}), it is enough to show that, for even $n\ge4$, $E_{n,n-2}-E_{n-2}>0$ for all $(\rho,\phi)\in(0,1)\times(2/3,3/4)$. This will then imply that, for even $n\ge4$, when $E_{n-2}=0$ we have $E_{n,n-2}>0$, which yields (\ref{ineq2}). Now \begin{eqnarray*} E_{n,n-2}-E_{n-2}&=&e_1^{n-2} e_2^{n-2} \phi^2 (3 \phi-2) (1-\rho)^2 S [6(1+\rho)^2 (1+\rho^2) \\ &&\quad{}- \phi (7 + 14 \rho + 12 \rho^2 + 14 \rho^3 + 7 \rho^4)] /[(1 + \rho)^2 (1 + \rho^2)], \end{eqnarray*} which has the sign of $6(1+\rho)^2 (1+\rho^2) - \phi (7 + 14 \rho + 12 \rho^2 + 14 \rho^3 + 7 \rho^4)$. The latter is decreasing in $\phi$ and, at $\phi=3/4$, equals $(3/4) (1 + 2 \rho + 4 \rho^2 + 2 \rho^3 + \rho^4)>0$, hence it is positive in $(0,1)\times(2/3,3/4)$. For (\ref{ineq4}), it is enough to show that, for even $n\ge4$, if $H_{n,n-2}=0$, then $G_{n+2,n}>0$. 
Equivalently, it suffices to show that, if $$ \bigg({e_2 \over e_1} \bigg)^{n-2} \bigg({ 4- e_1^{2n-2} (3\phi -2)^2 \over 4-e_2^{2n-2} ( 3 \phi -2)^2} \bigg) \bigg({ 2- e_2^n (3\phi -2) \over 2-e_1^n ( 3 \phi -2)} \bigg) \bigg({\phi_1\over\phi_2 } \bigg)=1, $$ then $$ \bigg({e_2 \over e_1} \bigg)^n \bigg({ 4- e_1^{2n+2} (3\phi -2)^2 \over 4-e_2^{2n+2} ( 3 \phi -2)^2} \bigg) \bigg({ 2- e_2^n (3\phi -2) \over 2-e_1^n ( 3 \phi -2)} \bigg) \bigg({\phi_1\over\phi_2 } \bigg)>1. $$ For this we need only show that, for the last two products of fractions, the ratio of the second to the first is greater than 1. It is in fact equal to $$ \bigg({ 4- e_1^{2n+2} (3\phi -2)^2 \over 4-e_2^{2n+2} ( 3 \phi -2)^2} \bigg) \bigg({ 4e_2^2 - e_2^{2n} (3\phi -2)^2 \over 4e_1^2 -e_1^{2n} ( 3 \phi -2)^2} \bigg). $$ This is greater than 1 if and only if $$ { 4e_1^2 -e_1^{2n} ( 3 \phi -2)^2 \over 4- e_1^{2n+2} (3\phi -2)^2 } < { 4e_2^2 - e_2^{2n} (3\phi -2)^2 \over 4-e_2^{2n+2} ( 3 \phi -2)^2}. $$ Since $2/3<\phi<\phi_2$, we have $e_1^2<e_2^2$ by Table 1. We divide both numerators and denominators by 4 and apply Lemma \ref{Lemma3}. Finally, it remains to show that the functions defining the curves in (\ref{ineq1})--(\ref{ineq4}) are positive above, and negative below, the curves. Consider $G_{n,n-2}$ for even $n\ge4$. Notice that $G_{n,n-2}$ is positive, 0, or negative according to whether \begin{equation}\label{product} \bigg({e_2\over e_1}\bigg)^{n-2}\bigg({4-e_1^{2 (n-1)}(3 \phi-2)^2\over4 -e_2^{2 (n-1)}(3 \phi-2)^2}\bigg) \bigg({ 2 - e_2^{n-2} (3 \phi -2)\over 2 - e_1^{n-2} (3\phi -2)}\bigg) \bigg({\phi_1\over\phi_2 } \bigg) \end{equation} is $>1$, $=1$, or $<1$. It is therefore enough to show that (\ref{product}) is increasing in $\phi\in(2/3,\phi_2)$ for each $\rho\in(0,1)$. This follows by showing that the product of the first and third factors is increasing and the second factor alone is increasing (the fourth factor is constant). 
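This monotonicity can be confirmed numerically. In the sketch below (Python), $e_1$, $e_2$, $S$ are reconstructed from the identities in the proof, and $\phi_1/\phi_2$ is read off as $[3(1+\rho)(1+\rho^2)-(1-\rho)S]/[3(1+\rho)(1+\rho^2)+(1-\rho)S]$ by comparing the two displays for $E_n$:

```python
import math

def product(n, rho, phi):
    # the four-factor product displayed above, for even n >= 4
    e0 = math.sqrt(2.25 - 2 * (1 + rho + rho**2)**2
                   / ((1 + rho)**2 * (1 + rho**2)))
    S = 2 * (1 + rho) * (1 + rho**2) * e0 / (1 - rho)
    e1, e2 = 1 - 1.5 * phi + phi * e0, 1 - 1.5 * phi - phi * e0
    d = 3 * phi - 2
    Km = 3 * (1 + rho) * (1 + rho**2) - (1 - rho) * S
    Kp = 3 * (1 + rho) * (1 + rho**2) + (1 - rho) * S
    return ((e2 / e1)**(n - 2)
            * (4 - e1**(2*(n-1)) * d**2) / (4 - e2**(2*(n-1)) * d**2)
            * (2 - e2**(n-2) * d) / (2 - e1**(n-2) * d)
            * Km / Kp)

rho = 1/3
vals = [product(4, rho, 2/3 + 0.005 * k) for k in range(1, 40)]
assert all(v1 < v2 for v1, v2 in zip(vals, vals[1:]))     # increasing in phi
# consistency with the table: G_{4,2} changes sign near phi = 0.688066...
assert product(4, rho, 0.68) < 1 < product(4, rho, 0.70)
print("product is increasing in phi at rho = 1/3")
```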
The other functions, $E_{n-2}$, $E_{n,n-2}$, and $H_{n,n-2}$, are treated similarly. \end{proof} We will later need the following consequence of the proof. \begin{proposition}\label{propTh7} Using the notation (\ref{E_n})--(\ref{H_{n,m}}), if $E_{n,n-2}<0$ and $E_n\ge0$ for some even $n\ge4$, then $E_{n,m}<0$ for $m=0,1,\ldots,n-1$. If $G_{n,n-2}<0$ and $H_{n,n-2}\ge0$ for some even $n\ge4$, then $G_{n,m}<0$ for $m=0,1,\ldots,n-1$ and $H_{n,m}<0$ for $m=0,1,\ldots,n-3$. \end{proposition} \section{Asymptotic stability of limit cycles} \label{stability} In this section, we consider the case in which the discrete dynamical system starts from an arbitrary initial state. We investigate its asymptotic behavior for parameters $(\rho,\phi)$ belonging to $(0,1)\times(2/3,1]$. Let $(x_0,x_1,x_2)\in\Delta_A$ be the initial state. (If the initial state is in $\Delta_B$, the trajectory will enter $\Delta_A$ eventually by Theorem \ref{thm2}, with an exception as noted in that theorem.) Define $(y_0,y_1,y_2):=(x_0,x_1,x_2)\bm P_A$, $(z_0,z_1,z_2):=(y_0,y_1,y_2)\bm P_B$, and $(w_0,w_1,w_2):=(z_0,z_1,z_2)\bm P_B$. We know by Proposition \ref{prop1} that $(y_0,y_1,y_2)\in\Delta_B$. In order that $(z_0,z_1,z_2)\in\Delta_B$, we need $z_0<\pi_0$. Now $z_0-\pi_0=\alpha_1 x_0+\beta_1 x_1-\gamma_1$, where \begin{eqnarray*} \alpha_1&:=&(3 \phi-2) [-(1 + \rho) + \phi (2 + \rho)]/[2 (1 + \rho)]>0,\\ \beta_1&:=&\phi (3 \phi-2) (1 - \rho)/[2 (1 + \rho)]>0,\\ \gamma_1&:=&{(1 + \rho) (1 + \rho^2) - \phi (3 + \rho) (1 + \rho + \rho^2) + 3 \phi^2 (1 + \rho + \rho^2)\over 2 (1 + \rho) (1 + \rho + \rho^2)}>0, \end{eqnarray*} and so $(z_0,z_1,z_2)\in\Delta_B$ is equivalent to $\alpha_1 x_0+\beta_1 x_1<\gamma_1$. Since $$ {\gamma_1\over\alpha_1}-\pi_0={\phi [1 - 3(1-\phi)\rho] (1-\rho^2)\over 2 (3 \phi-2) (1 + \rho + \rho^2) [-(1 + \rho) + \phi (2 + \rho)]}>0, $$ the region $\{(x_0,x_1,x_2)\in\Delta_A: \alpha_1 x_0+\beta_1 x_1<\gamma_1\}$ is nonempty. 
Next, in order that $(w_0,w_1,w_2)\in\Delta_A$ as well, we need $w_0\ge \pi_0$. Now $w_0-\pi_0=-\alpha_2 x_0-\beta_2 x_1+\gamma_2$, where \begin{eqnarray*} \alpha_2&:=&(3\phi-2)[(1+\rho)^2(1+\rho^2)-2\phi(1+\rho)(2+\rho)(1+\rho^2)\\ &&\quad{}+\phi^2 (4+5\rho+3\rho^2+5\rho^3+\rho^4)]/[2(1+\rho)^2(1+\rho^2)]>0,\\ \beta_2&:=&(3 \phi-2)^2 \phi (1 - \rho)/[2 (1 + \rho)]>0,\\ \gamma_2&:=&[-(1 + \rho)^2(1 + \rho^2)^2 + \phi(1 + \rho) (5 + \rho) (1 + \rho^2) (1 + \rho + \rho^2)\\ &&\quad{}- 2 \phi^2 (1 + \rho^2) (5 + 5 \rho - \rho^2) (1 + \rho + \rho^2)\\ &&\quad{}+\phi^3 (1 + \rho + \rho^2) (7 +5 \rho + 3 \rho^2 + 5 \rho^3 - 2 \rho^4)] \\ &&\;/[2 (1 + \rho)^2 (1 + \rho^2) (1 + \rho + \rho^2)], \end{eqnarray*} and so $(w_0,w_1,w_2)\in\Delta_A$ is equivalent to $\alpha_2 x_0+\beta_2 x_1\le\gamma_2$. However, $\gamma_2$ is not necessarily positive. More importantly, the set of $(\rho,\phi)\in(0,1)\times(2/3,1]$ for which \begin{eqnarray*} {\gamma_2\over\alpha_2}-\pi_0&=&(1 - \rho) \phi [-(1 + \rho)^2 (1 - 5 \rho) (1 + \rho^2)-12\phi\rho (1 + \rho)^2 (1 + \rho^2)\\ \noalign{\vglue-2mm} &&\qquad\qquad\quad{} + \phi^2(2 + 11 \rho + 20 \rho^2 + 16 \rho^3 + 16 \rho^4 + 7 \rho^5) ]\\ &&\quad/\{2(3\phi-2)(1+\rho+\rho^2)[(1+\rho)^2(1+\rho^2)\\ &&\qquad{}-2\phi(1+\rho)(2+\rho)(1+\rho^2)+\phi^2(4+5\rho+3\rho^2+5\rho^3+\rho^4)]\}\\ &\ge&0, \end{eqnarray*} is the region for which $\{(x_0,x_1,x_2)\in\Delta_A:\alpha_2 x_0+\beta_2 x_1\le\gamma_2\}$ is necessarily nonempty, which includes (but is not equal to) the union of regions 1--12 in Figure \ref{region-fig} below. Let us assume that $(\rho,\phi)$ belongs to this region. Then \begin{equation*} \Delta_{ABBA}=\{(x_0,x_1,x_2)\in\Delta_A: \alpha_1 x_0+\beta_1 x_1<\gamma_1,\,\alpha_2 x_0+\beta_2 x_1\le\gamma_2\} \end{equation*} is nonempty, that is, there exists $(x_0,x_1,x_2)\in\Delta_A$ such that the greedy strategy begins with the game sequence $ABBA$. 
If we solve the system of equations \begin{equation}\label{lines} \alpha_1 x_0+\beta_1 x_1=\gamma_1\quad{\rm and}\quad \alpha_2 x_0+\beta_2 x_1=\gamma_2, \end{equation} simultaneously, we find that $x_0-\pi_0=-\phi (1 - \rho)^2/[2 (3 \phi-2) (1 + \rho + \rho^2)]<0$, hence the two lines intersect only outside $p(\Delta_A)$ and only one of the inequalities $\alpha_1 x_0+\beta_1 x_1<\gamma_1$ and $\alpha_2 x_0+\beta_2 x_1\le\gamma_2$ is needed to define $p(\Delta_{ABBA})$. This tells us that $p(\Delta_{ABBA})$ is the intersection of the triangular region $\{(x_0,x_1): x_0\ge\pi_0,\,x_1\ge0,\,x_0+x_1\le1\}$ with exactly one of the two triangular regions \begin{eqnarray*} &&\{(x_0,x_1): x_0\ge\pi_0,\,x_1\ge0,\,\alpha_1 x_0+\beta_1 x_1<\gamma_1\},\\ &&\{(x_0,x_1): x_0\ge\pi_0,\,x_1\ge0,\,\alpha_2 x_0+\beta_2 x_1\le\gamma_2\}, \end{eqnarray*} hence its closure is convex with at most four extreme points. Now the lines (\ref{lines}) intersect the vertical line $x_0=\pi_0$ at $x_1=a_1:=(\gamma_1-\alpha_1\pi_0)/\beta_1>0$ and $x_1=a_2:=(\gamma_2-\alpha_2\pi_0)/\beta_2\ge0$, and they intersect the horizontal axis $x_1=0$ at $x_0=b_1:=\gamma_1/\alpha_1>\pi_0$ and $x_0=b_2:=\gamma_2/\alpha_2\ge\pi_0$. The quantities $a_1,a_2,b_1,b_2$, which are functions of $\rho$ and $\phi$, play an important role in what follows. In particular, they allow us to partition the set of all $(\rho,\phi)\in(0,1)\times(2/3,1]$ such that $G_{4,2}>0$ into 12 regions, as indicated in Figure \ref{region-fig}. 
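The quantities $b_1$ and $b_2$ are computable directly from the explicit formulas for $(\alpha_1,\beta_1,\gamma_1)$ and $(\alpha_2,\beta_2,\gamma_2)$ above, and the equivalence $b_2\ge b_1\Leftrightarrow\phi\ge\phi_2$ (used later for regions 9--11) can be spot-checked numerically. In the Python sketch below, $\phi_2$ is taken to be the $\phi$ at which $e_1$ vanishes, an assumption consistent with Cases 6 and 7:

```python
import math

def b1_b2(rho, phi):
    a1 = (3*phi - 2) * (-(1 + rho) + phi * (2 + rho)) / (2 * (1 + rho))
    g1 = ((1 + rho) * (1 + rho**2) - phi * (3 + rho) * (1 + rho + rho**2)
          + 3 * phi**2 * (1 + rho + rho**2)) / (2 * (1 + rho) * (1 + rho + rho**2))
    a2 = (3*phi - 2) * ((1 + rho)**2 * (1 + rho**2)
          - 2 * phi * (1 + rho) * (2 + rho) * (1 + rho**2)
          + phi**2 * (4 + 5*rho + 3*rho**2 + 5*rho**3 + rho**4)) \
         / (2 * (1 + rho)**2 * (1 + rho**2))
    g2 = (-(1 + rho)**2 * (1 + rho**2)**2
          + phi * (1 + rho) * (5 + rho) * (1 + rho**2) * (1 + rho + rho**2)
          - 2 * phi**2 * (1 + rho**2) * (5 + 5*rho - rho**2) * (1 + rho + rho**2)
          + phi**3 * (1 + rho + rho**2) * (7 + 5*rho + 3*rho**2 + 5*rho**3 - 2*rho**4)) \
         / (2 * (1 + rho)**2 * (1 + rho**2) * (1 + rho + rho**2))
    return g1 / a1, g2 / a2

rho = 1/3
e0 = math.sqrt(2.25 - 2 * (1 + rho + rho**2)**2 / ((1 + rho)**2 * (1 + rho**2)))
phi2 = 1 / (1.5 - e0)          # e1 = 0 at phi = phi2 (~0.8856 at rho = 1/3)
lo, hi = 0.87, 0.90            # b2 - b1 changes sign in this bracket
while hi - lo > 1e-12:
    mid = (lo + hi) / 2
    b1, b2 = b1_b2(rho, mid)
    lo, hi = (mid, hi) if b2 < b1 else (lo, mid)
print(lo, phi2)                # the crossing of b2 = b1 occurs at phi2
```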
To be more precise, region 1 is $G_{4,2}\ge0$, $a_2<1-\pi_0$, and $\phi<(3/4)(1-\rho)+(2/3)\rho$; region 2 is $a_2\ge1-\pi_0$, $b_2<1$, and $\phi<(3/4)(1-\rho)+(2/3)\rho$; region 3 is $b_2\ge1$; region 4 is $b_2<1$, $\phi<\phi_3$, and $\phi>(3/4)(1-\rho)+(2/3)\rho$; region 5 is $\phi\ge\phi_3$ and $b_1>1$; region 6 is $b_1\le1$, $a_2\ge1-\pi_0$, and $b_2<b_1$; region 7 is $a_2<1-\pi_0$ and $\phi<1-\rho/3$; region 8 is $\phi\ge1-\rho/3$ and $b_2<b_1$; region 9 is $b_2\ge b_1$ and $\phi<1-\rho/3$; region 10 is $\phi\ge1-\rho/3$ and $a_2\ge1-\pi_0$; region 11 is $b_2\ge b_1$, $a_2<1-\pi_0$, and $\alpha_1 f_0+\beta_1 f_1<\gamma_1$, where $(f_0,f_1,f_2):=(1,0,0)\bm P_A\bm P_B$; and region 12 is $\alpha_1 f_0+\beta_1 f_1\ge\gamma_1$. \begin{figure} \centering \includegraphics[width = 340bp]{fig2.pdf} \caption{\label{region-fig}Partition into 12 regions of the set where $[1,2]$ is (conjectured to be) the form of the unique (asymptotically stable) limit cycle. If the curves are labeled 1--9 according to their $\phi$-values at $\rho=0.2$ from smallest to largest, curve 1 is $G_{4,2}=0$ ($>$ above), curves 2 and 7 are $a_2=1-\pi_0$ ($>$ between), curves 3 and 4 are $b_2=1$ ($>$ between), curve 5 is $\phi=\phi_3$, curve 6 is $b_1=1$ ($>$ below), curve 8 is $a_1=1-\pi_0$ ($>$ below) or $\phi=1-\rho/3$, and curve 9 is $b_2=b_1$ ($>$ above) or $\phi=\phi_2$. The curve separating regions 11 and 12 is $\alpha_1 f_0+\beta_1 f_1=\gamma_1$ ($>$ above), where $(f_0,f_1,f_2):=(1,0,0)\bm P_A\bm P_B$. The regions are defined more precisely in the text.} \end{figure} We believe that, whenever $G_{4,2}>0$, there is a unique (asymptotically stable) limit cycle of the form $[1,2]$. However, we can prove this only in regions 3, 9, 10, and 11. \begin{theorem}\label{thm-regions3,9--11} For $(\rho,\phi)$ belonging to region 3 of Figure \ref{region-fig}, there is a unique limit cycle, which is asymptotically stable and of the form $[1,2]$, as well as an unstable equilibrium. 
Indeed, \begin{equation}\label{region3} \Delta_A=\Delta_{\overline{ABB}}, \end{equation} and for initial states in $\Delta_B-\Delta_{\overline{B}}$ (see (\ref{Delta(B)})), the trajectory eventually enters $\Delta_A$. For $(\rho,\phi)$ belonging to region 9, 10, or 11 of Figure \ref{region-fig}, there is a unique limit cycle, which is globally asymptotically stable and of the form $[1,2]$. Indeed, \begin{equation}\label{regions9--11} \Delta_{ABBA}=\Delta_{\overline{ABB}},\qquad \Delta_A-\Delta_{ABBA}=\Delta_{AB\overline{ABB}}, \end{equation} and for initial states in $\Delta_B$, the trajectory eventually enters $\Delta_A$. \end{theorem} \begin{remark} The theorem includes the case $(\rho,\phi)=(1/3,1)$ studied by Van den Broeck and Cleuren \cite{VC}. Results for the regions not covered by the theorem will be stated later in the form of a conjecture. \end{remark} \begin{proof} Suppose we can show (with some exceptions in the case of region 3) that the trajectory eventually reaches $\Delta_{\overline{ABB}}$. Then we have three \textit{linear} discrete dynamical systems \begin{eqnarray*} &&(x_0(3m+3),x_1(3m+3),x_2(3m+3))=(x_0(3m),x_1(3m),x_2(3m))\bm P_A\bm P_B\bm P_B,\\ &&(x_0(3m+4),x_1(3m+4),x_2(3m+4))\\ &&\qquad\qquad\qquad\qquad\qquad\quad{}=(x_0(3m+1),x_1(3m+1),x_2(3m+1))\bm P_B\bm P_B\bm P_A,\\ &&(x_0(3m+5),x_1(3m+5),x_2(3m+5))\\ &&\qquad\qquad\qquad\qquad\qquad\quad{}=(x_0(3m+2),x_1(3m+2),x_2(3m+2))\bm P_B\bm P_A\bm P_B, \end{eqnarray*} or \begin{eqnarray*} (x_0(3m),x_1(3m),x_2(3m))&=&(x_0,x_1,x_2)(\bm P_A\bm P_B\bm P_B)^m,\\ (x_0(3m+1),x_1(3m+1),x_2(3m+1))&=&(x_0,x_1,x_2)(\bm P_A\bm P_B\bm P_B)^m\bm P_A,\\ (x_0(3m+2),x_1(3m+2),x_2(3m+2))&=&(x_0,x_1,x_2)(\bm P_A\bm P_B\bm P_B)^m\bm P_A\bm P_B, \end{eqnarray*} with $(x_0,x_1,x_2)\in\Delta_{\overline{ABB}}$. 
Now $(\bm P_A\bm P_B\bm P_B)^m\to\bm\Pi_{[1,2]}$, where $\bm\Pi_{[1,2]}$ is the $3\times3$ matrix with each row equal to $\bm\pi_{[1,2]}$, so the limits in the last group of equations are $\bm\pi_{[1,2]}$, $\bm\pi_{[1,2]}\bm P_A$, and $\bm\pi_{[1,2]}\bm P_A\bm P_B$, regardless of the initial state in $\Delta_{\overline{ABB}}$. This leads to the asymptotic stability. Recall that $\Delta_{ABBA}$ is the intersection of three triangular regions. This intersection is itself triangular if $\min\{a_1,a_2\}\le1-\pi_0$ and $\min\{b_1,b_2\}\le1$ or if $\min\{a_1,a_2\}\ge1-\pi_0$ and $\min\{b_1,b_2\}\ge1$. And the intersection is four-sided if $\min\{a_1,a_2\}>1-\pi_0$ and $\min\{b_1,b_2\}<1$ ($\min\{a_1,a_2\}<1-\pi_0$ and $\min\{b_1,b_2\}>1$ is impossible; see Figure \ref{region-fig}). If for some $(\rho,\phi)$, $\bm P_A\bm P_B^2$ maps the closure of $\Delta_{ABBA}$ into $\Delta_{ABBA}$, then, for this $(\rho,\phi)$, $\Delta_{ABBA}=\Delta_{\overline{ABB}}$. Now $\bm P_A\bm P_B^2$ is linear, so $\bm P_A\bm P_B^2$ maps the closure of $\Delta_{ABBA}$ into $\Delta_{ABBA}$ if and only if it maps the three or four vertices of the closure of $\Delta_{ABBA}$ into $\Delta_{ABBA}$. In one important case, this completes the proof. If $\Delta_{ABBA}=\Delta_A$, then we conclude that (\ref{region3}) holds. This happens when $a_1>1-\pi_0$, $a_2\ge1-\pi_0$, $b_1>1$, and $b_2\ge1$. Actually, the last inequality implies the first three and holds precisely in region 3 of Figure \ref{region-fig}. Recalling the result of Theorem \ref{thm2}, we conclude that the assertions about region 3 are established. See Figure \ref{regions3,11-fig}. \begin{figure} \centering \includegraphics[width = 170bp]{fig3-1.pdf} \includegraphics[width = 170bp]{fig3-2.pdf} \caption{\label{regions3,11-fig}Two of the four cases of Theorem \ref{thm-regions3,9--11} illustrated with $\rho=1/3$.} \end{figure} Next, we generalize lines (\ref{lines}). 
Given an initial state $(x_0,x_1,x_2)\in\Delta_A$, let us define $z_0(n):=(x_0,x_1,1-x_0-x_1)\bm P_A\bm P_B^{n}(1,0,0)^\T$ for each $n\ge1$. Then, for $n$ odd, $z_0(n)<\pi_0$ if and only if $\alpha_n x_0+\beta_n x_1<\gamma_n$, and, for $n$ even, $z_0(n)\ge\pi_0$ if and only if $\alpha_n x_0+\beta_n x_1\le\gamma_n$, where the coefficients can be expressed in terms of the spectral representation of the matrix $\bm P_B$: \begin{eqnarray*} \alpha_n&:=&(-1)^{n-1}(1,0,-1)\bm P_A\bm P_B^{n}(1,0,0)^\T\\ &\;=&(-1)^{n}(3\phi-2)[e_2^{n}(1+\rho^2+S)-e_1^{n}(1+\rho^2-S)]/(4S),\\ \beta_n&:=&(-1)^{n-1}(0,1,-1)\bm P_A\bm P_B^{n}(1,0,0)^\T=(-1)^{n} (3\phi-2)(e_2^{n}-e_1^{n}) (1+\rho^2)/(2S),\\ \gamma_n&:=&(-1)^{n-1}[\bm\pi-(0,0,1)\bm P_A\bm P_B^{n}](1,0,0)^\T\\ &\;=&(-1)^{n}\{e_2^{n} [\phi (1+\rho+\rho^2) (3+3\rho^2+S)-(1+\rho^2) (1+2\rho+3\rho^2+S)]\\ &&\qquad\quad\;{}-e_1^{n} [\phi (1+\rho+\rho^2) (3+3\rho^2-S)-(1+\rho^2) (1+2\rho+3\rho^2-S)]\}\\ &&\qquad/[4 (1+\rho+\rho^2) S]. \end{eqnarray*} Notice that these definitions are consistent with $(\alpha_1,\beta_1,\gamma_1)$ and $(\alpha_2,\beta_2,\gamma_2)$ defined earlier. The line $\alpha_n x_0+\beta_n x_1=\gamma_n$ in the plane intersects the line $x_0=\pi_0$ at $x_1=a_n:=(\gamma_n-\alpha_n\pi_0)/\beta_n$; it intersects the line $x_1=0$ at $x_0=b_n:=\gamma_n/\alpha_n$; and it intersects the line $x_0+x_1=1$ at $x_0=c_n:=(\gamma_n-\beta_n)/(\alpha_n-\beta_n)$. Each of the lines $\alpha_n x_0+\beta_n x_1=\gamma_n$ $(n\ge1)$ passes through the point $((\phi-2\pi_0)/(3\phi-2),(\phi-2\pi_1)/(3\phi-2))$, which lies to the left of the line $x_0=\pi_0$ because $\pi_0>1/3$. We turn next to region 9 of Figure \ref{region-fig}. This region is determined by $b_2\ge b_1$ (equivalent to $\phi\ge\phi_2$) and $a_1>1-\pi_0$, from which it follows that $b_1<1$ (see Figure \ref{region-fig}). Consequently, $\Delta_A-\Delta_{ABBA}$ is the triangular region with vertices $(1,0,0)$, $(b_1,0,1-b_1)$, and $(c_1,1-c_1,0)$. 
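These spectral expressions can be checked against the explicit formulas for $(\alpha_1,\beta_1,\gamma_1)$ given at the beginning of this section; a numerical sketch in Python, with $e_1$, $e_2$, $S$ reconstructed from the identities in Section \ref{periodic}:

```python
import math

rho, phi = 1/3, 0.8
e0 = math.sqrt(2.25 - 2 * (1 + rho + rho**2)**2 / ((1 + rho)**2 * (1 + rho**2)))
S = 2 * (1 + rho) * (1 + rho**2) * e0 / (1 - rho)
e1, e2 = 1 - 1.5 * phi + phi * e0, 1 - 1.5 * phi - phi * e0
d = 3 * phi - 2

# spectral forms, specialized to n = 1
a1 = -d * (e2 * (1 + rho**2 + S) - e1 * (1 + rho**2 - S)) / (4 * S)
b1 = -d * (e2 - e1) * (1 + rho**2) / (2 * S)
g1 = -(e2 * (phi * (1 + rho + rho**2) * (3 + 3*rho**2 + S)
             - (1 + rho**2) * (1 + 2*rho + 3*rho**2 + S))
       - e1 * (phi * (1 + rho + rho**2) * (3 + 3*rho**2 - S)
               - (1 + rho**2) * (1 + 2*rho + 3*rho**2 - S))) \
     / (4 * (1 + rho + rho**2) * S)

# explicit forms from the start of this section
a1x = d * (-(1 + rho) + phi * (2 + rho)) / (2 * (1 + rho))
b1x = phi * d * (1 - rho) / (2 * (1 + rho))
g1x = ((1 + rho) * (1 + rho**2) - phi * (3 + rho) * (1 + rho + rho**2)
       + 3 * phi**2 * (1 + rho + rho**2)) / (2 * (1 + rho) * (1 + rho + rho**2))

print(a1, b1, g1)  # ~ (0.08, 0.08, 0.104615...)
assert max(abs(a1 - a1x), abs(b1 - b1x), abs(g1 - g1x)) < 1e-12
```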
We claim that (a) $\bm P_A\bm P_B$ maps these three points, and hence the triangular region they determine, into $\Delta_{ABBA}$. Further, we claim that (b) $\bm P_A\bm P_B^2$ maps the four corners of the closure of $\Delta_{ABBA}$, namely $(\pi_0,0,1-\pi_0)$, $(b_1,0,1-b_1)$, $(c_1,1-c_1,0)$, and $(\pi_0,1-\pi_0,0)$, into $\Delta_{ABBA}$, hence maps the closure of $\Delta_{ABBA}$ into $\Delta_{ABBA}$. This is enough to show that, starting from $\Delta_A$, pattern $ABB$ repeats forever, possibly after an initial $AB$. To verify (a) and (b), we let \begin{eqnarray*} (f_0,f_1,f_2)&:=&(1,0,0)\bm P_A\bm P_B,\\ (g_0,g_1,g_2)&:=&(b_1,0,1-b_1)\bm P_A\bm P_B,\\ (h_0,h_1,h_2)&:=&(c_1,1-c_1,0)\bm P_A\bm P_B,\\ \noalign{\smallskip} (r_0,r_1,r_2)&:=&(\pi_0,0,1-\pi_0)\bm P_A\bm P_B^2,\\ (s_0,s_1,s_2)&:=&(b_1,0,1-b_1)\bm P_A\bm P_B^2,\\ (t_0,t_1,t_2)&:=&(c_1,1-c_1,0)\bm P_A\bm P_B^2,\\ (u_0,u_1,u_2)&:=&(\pi_0,1-\pi_0,0)\bm P_A\bm P_B^2. \end{eqnarray*} First, since $b_2\ge b_1$ in region 9, it follows that if $(x_0,x_1,x_2)\in\Delta$ satisfies $x_0\ge\pi_0$ and $\alpha_1 x_0+\beta_1 x_1<\gamma_1$, then $(x_0,x_1,x_2)\in\Delta_{ABBA}$. Thus, it suffices to check these two inequalities for each of the seven points listed. We have done this algebraically but omit the details, as it is reasonably straightforward. We finally consider regions 10 and 11 of Figure \ref{region-fig}, which can be treated simultaneously. The regions are determined by $a_1\le1-\pi_0$ (equivalently, $\phi\ge1-\rho/3$), $b_2\ge b_1$ (equivalently, $\phi\ge\phi_2$), and $\alpha_1 f_0+\beta_1 f_1<\gamma_1$, where $(f_0,f_1,f_2):=(1,0,0)\bm P_A\bm P_B$, from which it follows that $a_1\le1-\pi_0$ and $b_1\le1$. Consequently, $\Delta_A-\Delta_{ABBA}$ is the four-sided region with vertices $(1,0,0)$, $(b_1,0,1-b_1)$, $(\pi_0,a_1,1-\pi_0-a_1)$, and $(\pi_0,1-\pi_0,0)$, and the closure of $\Delta_{ABBA}$ is the triangular region with vertices $(b_1,0,1-b_1)$, $(\pi_0,a_1,1-\pi_0-a_1)$, and $(\pi_0,0,1-\pi_0)$. 
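The omitted algebra, and the convergence asserted for these regions, can be spot-checked by direct iteration. The sketch below is a reconstruction, not taken from the paper: it assumes capital-mod-3 games with win probabilities $1/2$ for game $A$ and $(\rho^2/(1+\rho^2),\,1/(1+\rho),\,1/(1+\rho))$ for game $B$, a holding probability $1-\phi$, and $\pi_0=(1+\rho^2)/[2(1+\rho+\rho^2)]$; these choices are consistent with the explicit formulas for $(\alpha_1,\beta_1,\gamma_1)$ above but are not stated in this section. At $(\rho,\phi)=(1/3,1)$ (the Van den Broeck--Cleuren case), the greedy trajectory from $(1,0,0)$ settles into the $[1,2]$ cycle:

```python
rho, phi = 1/3, 1.0   # the Van den Broeck--Cleuren case
# Reconstruction (assumption): capital mod 3; game A wins with probability
# 1/2; game B wins with probability rho^2/(1+rho^2) in state 0 and 1/(1+rho)
# otherwise; a play holds the state with probability 1 - phi.
pA = [0.5, 0.5, 0.5]
pB = [rho**2 / (1 + rho**2), 1 / (1 + rho), 1 / (1 + rho)]
pi0 = (1 + rho**2) / (2 * (1 + rho + rho**2))  # boundary of Delta_A (assumption)

def step(x, p):
    y = [0.0, 0.0, 0.0]
    for i in range(3):
        y[i] += (1 - phi) * x[i]                    # hold
        y[(i + 1) % 3] += phi * p[i] * x[i]         # win
        y[(i - 1) % 3] += phi * (1 - p[i]) * x[i]   # lose
    return y

x, plays = [1.0, 0.0, 0.0], []
for _ in range(300):
    if x[0] >= pi0:                # Delta_A: greedy plays game A
        plays.append('A'); x = step(x, pA)
    else:                          # Delta_B: greedy plays game B
        plays.append('B'); x = step(x, pB)

print(''.join(plays[-30:]))        # ...ABBABBABB...: the [1,2] limit cycle
```

Other initial states in $\Delta_A$ reach the same repeating block $ABB$, in line with the theorem.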
As with region 9, it suffices to show that each of the following seven points $(x_0,x_1,x_2)$ satisfies the inequalities $x_0\ge\pi_0$ and $\alpha_1 x_0+\beta_1 x_1<\gamma_1$ determining $\Delta_{ABBA}$: \begin{eqnarray*} (f_0,f_1,f_2)&:=&(1,0,0)\bm P_A\bm P_B,\\ (g_0,g_1,g_2)&:=&(b_1,0,1-b_1)\bm P_A\bm P_B,\\ (h_0,h_1,h_2)&:=&(\pi_0,a_1,1-\pi_0-a_1)\bm P_A\bm P_B,\\ (i_0,i_1,i_2)&:=&(\pi_0,1-\pi_0,0)\bm P_A\bm P_B,\\ \noalign{\smallskip} (s_0,s_1,s_2)&:=&(b_1,0,1-b_1)\bm P_A\bm P_B^2,\\ (t_0,t_1,t_2)&:=&(\pi_0,a_1,1-\pi_0-a_1)\bm P_A\bm P_B^2,\\ (u_0,u_1,u_2)&:=&(\pi_0,0,1-\pi_0)\bm P_A\bm P_B^2. \end{eqnarray*} Again, we have done this algebraically but omit the details. See Figure \ref{regions3,11-fig}. \end{proof} Let us describe what appears to happen in each of the eight regions not covered by Theorem \ref{thm-regions3,9--11}. \begin{conjecture}\label{conj1} If $(\rho,\phi)$ belongs to region 1 of Figure \ref{region-fig}, then there is a unique limit cycle, which is asymptotically stable and of the form $[1,2]$, as well as an unstable equilibrium. Indeed, \begin{eqnarray*} \Delta_A=\bigcup_{k=0}^\infty\Delta_{(ABBBBABB)^k\overline{ABB}}\cup\bigcup_{k=1}^\infty\Delta_{ABB(ABBBBABB)^k\overline{ABB}}, \end{eqnarray*} and if the initial state is in $\Delta_B-\Delta_{\overline{B}}$ (see (\ref{Delta(B)})), the trajectory eventually enters $\Delta_A$. For $(\rho,\phi)$ belonging to region 2 of Figure \ref{region-fig}, there is a unique limit cycle, which is asymptotically stable and of the form $[1,2]$, as well as an unstable equilibrium. Indeed, $\Delta_A=\Delta_{\overline{ABB}}\cup\Delta_{ABBBB\overline{ABB}}$, and for initial states in $\Delta_B-\Delta_{\overline{B}}$, the trajectory eventually enters $\Delta_A$. If $(\rho,\phi)$ belongs to region 4, 5, 6, 7, or 8 of Figure \ref{region-fig}, then there is a unique limit cycle, which is asymptotically stable and of the form $[1,2]$, as well as an unstable equilibrium. 
Indeed, $$ \Delta_A=\Delta_{A\overline{B}}\cup\bigcup_{k=1}^\infty\Delta_{AB^k\overline{ABB}}. $$ (Note that $\Delta_{AB^2\overline{ABB}}=\Delta_{\overline{ABB}}$.) If the initial state is in $\Delta_B-\Delta_{\overline{B}}-\Delta_{BA\overline{B}}-\Delta_{BBA\overline{B}}$, the trajectory eventually enters $\Delta_A-\Delta_{A\overline{B}}$. If $(\rho,\phi)$ belongs to region 12 of Figure \ref{region-fig}, then there is a unique limit cycle, which is globally asymptotically stable and of the form $[1,2]$. Indeed, \begin{eqnarray*} \Delta_A=\bigcup_{k,l=0}^\infty\Delta_{(AB)^k ABB(AB)^l\overline{ABB}}, \end{eqnarray*} and only finitely many of the sets comprising the union are nonempty. If the initial state is in $\Delta_B$, the trajectory eventually enters $\Delta_A$. \end{conjecture} The lower boundary of region 1 is the curve $G_{4,2}=0$. In the unshaded portion of Figure \ref{region-fig} (where $G_{4,2}<0$ and $\phi>2/3$), we have at least one limit cycle, as shown by Theorem \ref{thm-limitcycles}. See the remark following the statement of the theorem. Of course, the theorem does not imply that the limit cycles identified there are the only ones, but we conjecture that this is in fact true. \begin{conjecture}\label{conj2} Let $n\ge4$ be even. The curve $b_{n-2}-\pi_0=0$ lies below $H_{n,n-2}=0$ and above $G_{n+2,n}=0$, and the function $b_{n-2}-\pi_0$ is positive above, and negative below, the curve. If $(\rho,\phi)$ satisfies $G_{n,n-2}<0$ and $E_{n-2}\ge0$, then there are precisely two limit cycles, which are of the forms $[1,n,1,n-2]$ and $[1,n-2]$, as well as an unstable equilibrium. 
Indeed, \begin{eqnarray*} \Delta_A&=&\Delta_{\overline{AB^nAB^{n-2}}}\cup\bigcup_{k=0}^\infty\Delta_{(AB^nAB^{n-2})^k\overline{AB^{n-2}}}\\ &&\quad{}\cup\Delta_{AB^{n-2}\overline{AB^nAB^{n-2}}}\cup\bigcup_{k=1}^\infty\Delta_{AB^{n-2}(AB^nAB^{n-2})^k\overline{AB^{n-2}}}, \end{eqnarray*} and if the initial state is in $\Delta_B-\Delta_{\overline{B}}$, the trajectory eventually enters $\Delta_A$. If $(\rho,\phi)$ satisfies $E_{n-2}<0$ and $E_{n,n-2}\ge0$, then there is a unique limit cycle, which is asymptotically stable and of the form $[1,n,1,n-2]$, as well as an unstable equilibrium. Indeed, $\Delta_A=\Delta_{\overline{AB^nAB^{n-2}}}\cup\Delta_{AB^{n-2}\overline{AB^nAB^{n-2}}}\cup\Delta_{AB^n\overline{AB^nAB^{n-2}}}$, and if the initial state is in $\Delta_B-\Delta_{\overline{B}}$, the trajectory eventually enters $\Delta_A$. If $(\rho,\phi)$ satisfies $E_{n,n-2}<0$ and $H_{n,n-2}\ge0$, then there are precisely two limit cycles, which are of the forms $[1,n,1,n-2]$ and $[1,n]$, as well as an unstable equilibrium. Indeed, \begin{eqnarray*} \Delta_A&=&\Delta_{\overline{AB^nAB^{n-2}}}\cup\bigcup_{k=0}^\infty\Delta_{(AB^nAB^{n-2})^k\overline{AB^n}}\cup\Delta_{AB^{n-2}\overline{AB^nAB^{n-2}}} \\&&\quad{}\cup\bigcup_{k=0}^\infty\Delta_{AB^{n-2}(AB^nAB^{n-2})^k\overline{AB^n}}\cup\Delta_{AB^n\overline{AB^nAB^{n-2}}}, \end{eqnarray*} and if the initial state is in $\Delta_B-\Delta_{\overline{B}}$, the trajectory eventually enters $\Delta_A$. If $(\rho,\phi)$ satisfies $H_{n,n-2}<0$ and $b_{n-2}-\pi_0\ge0$, then there is a unique limit cycle, which is asymptotically stable and of the form $[1,n]$, as well as an unstable equilibrium. Indeed, \begin{eqnarray*} \Delta_A=\bigcup_{k=0}^\infty\Delta_{(AB^nAB^{n-2})^k\overline{AB^n}} \cup\bigcup_{k=0}^\infty\Delta_{AB^{n-2}(AB^nAB^{n-2})^k\overline{AB^n}}, \end{eqnarray*} and if the initial state is in $\Delta_B-\Delta_{\overline{B}}$, the trajectory eventually enters $\Delta_A$. 
If $(\rho,\phi)$ satisfies $b_{n-2}-\pi_0<0$ and $G_{n+2,n}\ge0$, then there is a unique limit cycle, which is asymptotically stable and of the form $[1,n]$, as well as an unstable equilibrium. Indeed, \begin{eqnarray*} \Delta_A=\bigcup_{k=0}^\infty\Delta_{(AB^{n+2}AB^n)^k\overline{AB^n}} \cup\bigcup_{k=1}^\infty\Delta_{AB^n(AB^{n+2}AB^n)^k\overline{AB^n}}, \end{eqnarray*} and if the initial state is in $\Delta_B-\Delta_{\overline{B}}$, the trajectory eventually enters $\Delta_A$. \end{conjecture} \section{Asymptotic cumulative average profit}\label{profit} In Section \ref{intro} we stated that, if game $B$ is eventually played forever, we have an asymptotically fair game, whereas if the pattern of games is eventually periodic, we have an asymptotically winning game. Here we try to justify these assertions. In Section 4 we found that the periodic patterns $[1,n]$ for even $n\ge2$ and $[1,n,1,n-2]$ for even $n\ge4$ can occur. If our Conjectures \ref{conj1} and \ref{conj2} are correct, then these are the only periodic patterns that can occur. \begin{theorem} We denote the asymptotic cumulative average profit per game played by $\mu_{\overline{B}}$ in the situation where game $B$ is eventually played forever, by $\mu_{[1,n]}$ in the case of a limit cycle of the form $[1,n]$ for even $n\ge2$, and by $\mu_{[1,n,1,n-2]}$ in the case of a limit cycle of the form $[1,n,1,n-2]$ for even $n\ge4$. Then all (Ces\'aro) limits exist, and $\mu_{\overline{B}}=0$, $\mu_{[1,n]}>0$, and $\mu_{[1,n,1,n-2]}>0$. \end{theorem} \begin{remark} This shows that the greedy strategy exhibits the Parrondo effect when $\phi>2/3$ (with some exceptions) but not when $\phi\le2/3$. \end{remark} \begin{proof} In the situation where game $B$ is eventually played forever, $\mu_{\overline{B}}=\phi\bm\pi\bm\zeta$, where $\bm\pi$ is the stationary distribution of $\bm P_B$ and $\bm\zeta:=(2p_0-1,2p_1-1,2p_1-1)^\T$ with $p_0=\rho^2/(1+\rho^2)$ and $p_1=1/(1+\rho)$. 
We note that, for given $(z_0,z_1,z_2)\in\Delta$, $(z_0,z_1,z_2)\bm\zeta=z_0(2p_0-1)+(1-z_0)(2p_1-1)=z_0(2p_0-2p_1)+2p_1-1$. Hence $\bm\pi\bm\zeta=\pi_0(2p_0-2p_1)+2p_1-1=0$, where the second (algebraic) step will be used again below. Thus, $\mu_{\overline{B}}=0$. In the case of a limit cycle of the form $[1,n]$ for even $n\ge2$, \begin{eqnarray}\label{mu1n} \mu_{[1,n]}&=&{\phi\over n+1}\bm\pi_{[1,n]}\bm P_A(\bm I+\bm P_B+\cdots+\bm P_B^{n-1})\bm\zeta. \end{eqnarray} Now \begin{eqnarray*} \bm\pi_{[1,n]}\bm P_A\bm P_B^m\bm\zeta &=&\bm\pi_{[1,n]}\bm P_A\bm P_B^m(1,0,0)^\T(2p_0-2p_1)+2p_1-1\\ &=&(\pi_0+E_{n,m}/D_n)(2p_0-2p_1)+2p_1-1\\ &=&(E_{n,m}/D_n)(2p_0-2p_1), \end{eqnarray*} so (\ref{mu1n}) can be written as \begin{eqnarray*} \mu_{[1,n]}={2 \phi\over n+1}(p_0-p_1)\sum_{m=0}^{n-1}{E_{n,m}\over D_n}. \end{eqnarray*} By Proposition \ref{propTh7}, $E_{n,m}<0$ for $m=0,1,\ldots,n-1$. Since $p_0 < p_1$, it follows that $\mu_{[1,n]}>0$. In the case of a limit cycle of the form $[1,n,1,n-2]$ for $n\ge4$ even, \begin{eqnarray}\label{mu1n1n-2} \mu_{[1,n,1,n-2]}&=&{\phi\over2n}\bm\pi_{[1,n,1,n-2]}[\bm P_A(\bm I+\bm P_B+\cdots+\bm P_B^{n-1})\nonumber\\ &&\qquad\qquad\qquad\quad{}+\bm P_A\bm P_B^n\bm P_A(\bm I+\bm P_B+\cdots+\bm P_B^{n-3})]\bm\zeta.\qquad \end{eqnarray} Now \begin{eqnarray*} \bm\pi_{[1,n,1,n-2]}\bm P_A\bm P_B^m\bm\zeta &=&\bm\pi_{[1,n,1,n-2]}\bm P_A\bm P_B^m(1,0,0)^\T(2p_0-2p_1)+2p_1-1\\ &=&(\pi_0+G_{n,m}/I_n)(2p_0-2p_1)+2p_1-1 \\ &=&(G_{n,m}/I_n)(2p_0-2p_1) \end{eqnarray*} and \begin{eqnarray*} &&\bm\pi_{[1,n,1,n-2]}\bm P_A\bm P_B^n\bm P_A\bm P_B^m\bm\zeta\\ &&\quad{}=\bm\pi_{[1,n,1,n-2]}\bm P_A\bm P_B^n\bm P_A\bm P_B^m(1,0,0)^\T(2p_0-2p_1)+2p_1-1\\ &&\quad{}=(\pi_0+H_{n,m}/I_n)(2p_0-2p_1)+2p_1-1\\ &&\quad{}=(H_{n,m}/I_n)(2p_0-2p_1), \end{eqnarray*} so (\ref{mu1n1n-2}) can be written as $$ \mu_{[1,n,1,n-2]}={\phi\over n}(p_0-p_1)\sum_{m=0}^{n-1}{G_{n,m}\over I_n}+{\phi\over n}(p_0-p_1)\sum_{m=0}^{n-3}{H_{n,m}\over I_n}. 
$$ By Proposition \ref{propTh7}, $G_{n,m}<0$ for $m=0,1,\ldots,n-1$ and $H_{n,m}<0$ for $m=0,1,\ldots,n-3$. Since $p_0 < p_1$, it follows that $\mu_{[1,n,1,n-2]}>0$. \end{proof} \section*{Acknowledgment} We thank Derek Abbott for valuable advice. J. Lee was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2010-0005364).
https://arxiv.org/abs/0905.4378
The Cramer-Rao Bound for Sparse Estimation
The goal of this paper is to characterize the best achievable performance for the problem of estimating an unknown parameter having a sparse representation. Specifically, we consider the setting in which a sparsely representable deterministic parameter vector is to be estimated from measurements corrupted by Gaussian noise, and derive a lower bound on the mean-squared error (MSE) achievable in this setting. To this end, an appropriate definition of bias in the sparse setting is developed, and the constrained Cramer-Rao bound (CRB) is obtained. This bound is shown to equal the CRB of an estimator with knowledge of the support set, for almost all feasible parameter values. Consequently, in the unbiased case, our bound is identical to the MSE of the oracle estimator. Combined with the fact that the CRB is achieved at high signal-to-noise ratios by the maximum likelihood technique, our result provides a new interpretation for the common practice of using the oracle estimator as a gold standard against which practical approaches are compared.
\section{Introduction} The problem of estimating a sparse unknown parameter vector from noisy measurements has been analyzed intensively in the past few years \cite{tropp06, donoho06, candes06, candes07}, and has already given rise to numerous successful signal processing algorithms \cite{elad06, dabov07, dabov08, protter09, elad05}. In this paper, we consider the setting in which noisy measurements of a deterministic vector $\mb{x}_0$ are available. It is assumed that $\mb{x}_0$ has a sparse representation $\mb{x}_0 = \mb{D}{\boldsymbol \alpha}_0$, where $\mb{D}$ is a given dictionary and most of the entries of ${\boldsymbol \alpha}_0$ equal zero. Thus, only a small number of ``atoms,'' or columns of $\mb{D}$, are required to represent $\mb{x}_0$. The challenges confronting an estimation technique are to recover either $\mb{x}_0$ itself or its sparse representation ${\boldsymbol \alpha}_0$. Several practical approaches turn out to be surprisingly successful in this task. Such approaches include the Dantzig selector (DS) \cite{candes07} and basis pursuit denoising (BPDN), which is also referred to as the Lasso \cite{chen98, tropp06, donoho06}. A standard measure of estimator performance is the mean-squared error (MSE). Several recent papers analyzed the MSE obtained by methods such as the DS and BPDN \cite{candes07, ben-haim09c}. To determine the quality of estimation approaches, it is of interest to compare their achievements with theoretical performance limits: if existing methods approach the performance bound, then they are nearly optimal and further improvements in the current setting are impossible. This motivates the development of lower bounds on the MSE of estimators in the sparse setting. Since the parameter to be estimated is deterministic, the MSE is in general a function of the parameter value. 
While there are lower bounds on the worst-case achievable MSE among all possible parameter values \cite[\S7.4]{candes06b}, the actual performance for a specific value, or even for most values, might be substantially lower. Our goal is therefore to characterize the minimum MSE obtainable for each particular parameter vector. A standard method of achieving this objective is the Cram\'er--Rao bound (CRB) \cite{kay93, shao03}. The fact that $\mb{x}_0$ has a sparse representation is of central importance for estimator design. Indeed, many sparse estimation settings are underdetermined, meaning that without the assumption of sparsity, it is impossible to identify the correct parameter from its measurements, even without noise. In this paper, we treat the sparsity assumption as a deterministic prior constraint on the parameter. Specifically, we assume that $\mb{x}_0 \in \SS$, where $\SS$ is the set of all parameter vectors which can be represented by no more than $s$ atoms, for a given integer $s$. Our results are inspired by the well-studied theory of the constrained CRB \cite{gorman90, marzetta93, StoicaNg98, ben-haim09}. This theory is based on the assumption that the constraint set can be defined using the system of equations $\mb{f}(\mb{x})=\mb{0}$, $\mb{g}(\mb{x})\le\mb{0}$, where $\mb{f}$ and $\mb{g}$ are continuously differentiable functions. The resulting bound depends on the derivatives of the function $\mb{f}$. However, sparsity constraints cannot be written in this form. This necessitates the development of a bound suitable for non-smooth constraint sets \cite{ben-haim09d}. In obtaining this modified bound, we also provide new insight into the meaning of the general constrained CRB\@. In particular, we show that the fact that the constrained CRB is lower than the unconstrained bound results from an expansion of the class of estimators under consideration. 
With the aforementioned theoretical tools at hand, we obtain lower bounds on the MSE in a variety of sparse estimation problems. Our bound limits the MSE achievable by any estimator having a pre-specified bias function, for each parameter value. Particular emphasis is given to the unbiased case; the reason for this preference is twofold: First, when the signal-to-noise ratio (SNR) is high, biased estimation is suboptimal. Second, for high SNR values, the unbiased CRB is achieved by the maximum likelihood (ML) estimator. While the obtained bounds differ depending on the exact problem definition, in general terms and for unbiased estimation the bounds can be described as follows. For parameters having maximal support, i.e., parameters whose representation requires the maximum allowed number $s$ of atoms, the lower bound equals the MSE of the ``oracle estimator'' which knows the locations (but not the values) of the nonzero representation elements. On the other hand, for parameters which do not have maximal support (a set which has Lebesgue measure zero in $\SS$), our lower bound is identical to the CRB for an unconstrained problem, which is substantially higher than the oracle MSE\@. The correspondence between the CRB and the MSE of the oracle estimator (for all but a zero-measure subset of the feasible parameter set $\SS$) is of practical interest since, unlike the oracle estimator, the CRB is achieved by the ML estimator at high SNR\@. Our bound can thus be viewed as an alternative justification for the common use of the oracle estimator as a baseline against which practical algorithms are compared. This gives further merit to recent results, which demonstrate that BPDN and the DS both achieve near-oracle performance \cite{candes07, ben-haim09c}. However, the existence of parameters for which the bound is much higher indicates that oracular performance cannot be attained for \emph{all} parameter values, at least using unbiased techniques. 
Indeed, as we will show, in many sparse estimation scenarios, one cannot construct \emph{any} estimator which is unbiased for all sparsely representable parameters. Our contribution is related to, but distinct from, the work of Babadi et al.\ \cite{babadi09}, in which the CRB of the oracle estimator was derived (and shown to equal the aforementioned oracle MSE). Our goal in this work is to obtain a lower bound on the performance of estimators which are not endowed with oracular knowledge; consequently, as explained above, for some parameter values the obtained CRB will be higher than the oracle MSE\@. It was further shown in \cite{babadi09} that when the measurements consist of Gaussian random mixtures of the parameter vector, there exists an estimator which achieves the oracle CRB at high SNR; this is shown to hold on average over realizations of the measurement mixtures. The present contribution strengthens this result by showing that for any given (deterministic) well-behaved measurement setup, there exists a technique (namely, the ML estimator) achieving the CRB at high SNR\@. Thus, convergence to the CRB is guaranteed for all measurement settings, and not merely when averaging over an ensemble of such settings. The rest of this paper is organized as follows. In Section~\ref{se:sparse backgnd}, we review the sparse setting as a constrained estimation problem. Section~\ref{se:crb} defines a generalization of sparsity constraints, which we refer to as locally balanced constraint sets; the CRB is then derived in this general setting. In Section~\ref{se:sparse bounds}, our general results are applied back to some specific sparse estimation problems. In Section~\ref{se:numer}, the CRB is compared to the empirical performance of estimators of sparse vectors. Our conclusions are summarized in Section~\ref{se:discuss}. Throughout the paper, boldface lowercase letters $\v$ denote vectors while boldface uppercase letters $\mb{M}$ denote matrices. 
Given a vector function $\mb{f}: {\mathbb R}^n \rightarrow {\mathbb R}^k$, we denote by $\partial \mb{f} / \partial \mb{x}$ the $k \times n$ matrix whose $ij$th element is $\partial f_i / \partial x_j$. The support of a vector, denoted $\supp(\v)$, is the set of indices of the nonzero entries in $\v$. The Euclidean norm of a vector $\v$ is denoted $\|\v\|_2$, and the number of nonzero entries in $\v$ is $\|\v\|_0$. Finally, the symbols $\Ra{\mb{M}}$, $\Nu{\mb{M}}$, and $\mb{M}^\dagger$ refer, respectively, to the column space, null space, and Moore--Penrose pseudoinverse of the matrix $\mb{M}$. \section{Sparse Estimation Problems} \label{se:sparse backgnd} In this section, we describe several estimation problems whose common theme is that the unknown parameter has a sparse representation with respect to a known dictionary. We then review some standard techniques used to recover the unknown parameter in these problems. In Section~\ref{se:numer} we will compare these methods with the performance bounds we develop. \subsection{The Sparse Setting} \label{ss:sparse setting} Suppose we observe a measurement vector $\mb{y} \in {\mathbb R}^m$, given by \begin{equation} \label{eq:y=Ax+w} \mb{y} = \mb{A}\mb{x}_0 + \mb{w} \end{equation} where $\mb{x}_0 \in {\mathbb R}^n$ is an unknown deterministic signal, $\mb{w}$ is independent, identically distributed (IID) Gaussian noise with zero mean and variance $\sigma^2$, and $\mb{A}$ is a known $m \times n$ matrix. We assume the prior knowledge that there exists a sparse representation of $\mb{x}_0$, or, more precisely, that \begin{equation} \label{eq:def S} \mb{x}_0 \in \SS \triangleq \left\{ \mb{x} \in {\mathbb R}^n : \mb{x} = \mb{D} {\boldsymbol \alpha}, \|{\boldsymbol \alpha}\|_0 \le s \right\}. \end{equation} In other words, the set $\SS$ describes signals $\mb{x}$ which can be formed from a linear combination of no more than $s$ columns, or atoms, from $\mb{D}$. 
The dictionary $\mb{D}$ is an $n \times p$ matrix with $n \le p$, and we assume that $s < p$, so that only a subset of the atoms in $\mb{D}$ can be used to represent any signal in $\SS$. We further assume that $\mb{D}$ and $s$ are known. Quite a few important signal recovery applications can be formulated using the setting described above. For example, if $\mb{A}=\mb{I}$, then $\mb{y}$ consists of noisy observations of $\mb{x}_0$, and recovering $\mb{x}_0$ is a denoising problem \cite{elad06, dabov07}. If $\mb{A}$ corresponds to a blurring kernel, we obtain a deblurring problem \cite{dabov08}. In both cases, the matrix $\mb{A}$ is square and invertible. Interpolation and inpainting can likewise be formulated as \eqref{eq:y=Ax+w}, but in those cases $\mb{A}$ is an underdetermined matrix, i.e., we have $m<n$ \cite{elad05}. For all of these estimation scenarios, our goal is to obtain an estimate ${\widehat{\x}}$ whose MSE is as low as possible, where the MSE is defined as \begin{equation} \label{eq:def MSE} {\mathrm{MSE}} \triangleq \E{ \| {\widehat{\x}} - \mb{x}_0 \|_2^2 }. \end{equation} Note that $\mb{x}_0$ is deterministic, so that the expectation in \eqref{eq:def MSE} (and throughout the paper) is taken over the noise $\mb{w}$ but not over $\mb{x}_0$. Thus, the MSE is in general a function of $\mb{x}_0$. In the above settings, the goal is to estimate the unknown signal $\mb{x}_0$. However, it may also be of interest to recover the coefficient vector ${\boldsymbol \alpha}_0$ for which $\mb{x}_0 = \mb{D}{\boldsymbol \alpha}_0$, e.g., for the purpose of model selection \cite{tropp06, candes07}. In this case, the goal is to construct an estimator ${\widehat{\alf}}$ whose MSE $E\{ \|{\widehat{\alf}}-{\boldsymbol \alpha}_0\|_2^2 \}$ is as low as possible. Unless $\mb{D}$ is unitary, estimating ${\boldsymbol \alpha}_0$ is not equivalent to estimating $\mb{x}_0$. 
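As a concrete illustration of the denoising case ($A = I$), the sketch below draws a random dictionary, an $s$-sparse coefficient vector, and a noisy observation $y = D\alpha_0 + w$. The distributional choices (Gaussian dictionary and coefficients) and helper names are ours, for illustration only; the Monte Carlo loop checks the elementary fact that the naive estimator $\hat{x} = y$ has MSE $E\|w\|_2^2 = n\sigma^2$.

```python
import random

def sparse_instance(n, p, s, sigma, rng):
    """Draw one instance of the denoising model y = D alpha0 + w with
    ||alpha0||_0 <= s (all distributions are illustrative assumptions)."""
    D = [[rng.gauss(0.0, 1.0) for _ in range(p)] for _ in range(n)]
    support = rng.sample(range(p), s)
    alpha0 = [0.0] * p
    for j in support:
        alpha0[j] = rng.gauss(0.0, 1.0)
    x0 = [sum(D[i][j] * alpha0[j] for j in support) for i in range(n)]
    y = [x0[i] + rng.gauss(0.0, sigma) for i in range(n)]
    return D, alpha0, x0, y

def mc_mse(estimator, n, p, s, sigma, trials=5000, seed=0):
    """Monte Carlo estimate of the MSE E||x_hat - x0||_2^2, averaging
    over the noise (and here also over random instances)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        D, alpha0, x0, y = sparse_instance(n, p, s, sigma, rng)
        xh = estimator(D, y)
        total += sum((xh[i] - x0[i]) ** 2 for i in range(n))
    return total / trials
```

For the identity estimator the Monte Carlo value concentrates around $n\sigma^2$, matching the definition of the MSE as an expectation over $w$ for a fixed (here, per-trial) parameter.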
Note, however, that when estimating ${\boldsymbol \alpha}_0$, the matrices $\mb{A}$ and $\mb{D}$ can be combined to obtain the equivalent problem \begin{equation} \label{eq:y=H alf + w} \mb{y} = \H {\boldsymbol \alpha}_0 + \mb{w} \end{equation} where $\H \triangleq \mb{A}\mb{D}$ is an $m \times p$ matrix and \begin{equation} \label{eq:def T} {\boldsymbol \alpha}_0 \in {\mathcal T} = \{ {\boldsymbol \alpha} \in {\mathbb R}^p : \|{\boldsymbol \alpha}\|_0 \le s \} . \end{equation} Therefore, this problem can also be seen as a special case of \eqref{eq:y=Ax+w} and \eqref{eq:def S}. Nevertheless, it will occasionally be convenient to refer specifically to the problem of estimating ${\boldsymbol \alpha}_0$ from \eqref{eq:y=H alf + w}. Signal estimation problems differ in the properties of the dictionary $\mb{D}$ and measurement matrix $\mb{A}$. In particular, problems of a very different nature arise depending on whether the dictionary is a basis or an overcomplete frame. For example, many approaches to denoising yield simple shrinkage techniques when $\mb{D}$ is a basis, but deteriorate to NP-hard optimization problems when $\mb{D}$ is overcomplete \cite{natarajan95}. A final technical comment is in order. If the matrix $\H$ in \eqref{eq:y=H alf + w} does not have full column rank, then there may exist different feasible parameters ${\boldsymbol \alpha}_1$ and ${\boldsymbol \alpha}_2$ such that $\H{\boldsymbol \alpha}_1 = \H{\boldsymbol \alpha}_2$. In this case, the probability distribution of $\mb{y}$ will be identical for these two parameter vectors, and the estimation problem is said to be unidentifiable \cite[\S1.5.2]{lehmann98}. A necessary and sufficient condition for identifiability is \begin{equation} \label{eq:spark req} \spark(\H) > 2s \end{equation} where $\spark(\H)$ is defined as the smallest integer $k$ such that there exist $k$ linearly dependent columns in $\H$ \cite{donoho03}. We will adopt the assumption \eqref{eq:spark req} throughout the paper. 
Similarly, in the problem \eqref{eq:y=Ax+w} we will assume that \begin{equation} \label{eq:spark req D} \spark(\mb{D}) > 2s. \end{equation} \subsection{Estimation Techniques} \label{ss:est techniques} We now review some standard estimators for the sparse problems described above. These techniques are usually viewed as methods for obtaining an estimate ${\widehat{\alf}}$ of the vector ${\boldsymbol \alpha}_0$ in \eqref{eq:y=H alf + w}, and we will adopt this perspective in the current section. One way to estimate $\mb{x}_0$ in the more general problem \eqref{eq:y=Ax+w} is to first estimate ${\boldsymbol \alpha}_0$ with the methods described below and then use the formula ${\widehat{\x}} = \mb{D}{\widehat{\alf}}$. A widely-used estimation technique is the ML approach, which provides an estimate of ${\boldsymbol \alpha}_0$ by solving \begin{equation} \label{eq:ml} \min_{\boldsymbol \alpha} \|\mb{y} - \H{\boldsymbol \alpha}\|_2^2 \quad \text{s.t. } \|{\boldsymbol \alpha}\|_0 \le s. \end{equation} Unfortunately, \eqref{eq:ml} is a nonconvex optimization problem and solving it is NP-hard \cite{natarajan95}, meaning that an efficient algorithm providing the ML estimator is unlikely to exist. In fact, to the best of our knowledge, the most efficient method for solving \eqref{eq:ml} for general $\H$ is to enumerate the $\binom{p}{s}$ possible $s$-element support sets of ${\boldsymbol \alpha}$ and choose the one for which $\|\mb{y} - \H{\boldsymbol \alpha}\|_2^2$ is minimal. This is clearly an impractical strategy for reasonable values of $p$ and $s$. Consequently, several efficient alternatives have been proposed for estimating ${\boldsymbol \alpha}_0$. 
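The enumeration strategy just described — solve a least-squares problem on each of the $\binom{p}{s}$ candidate supports and keep the support with the smallest residual — can be sketched as follows. This is a sketch under the assumption that every size-$s$ column subset of $\H$ is linearly independent (guaranteed when $\spark(\H)>s$); the helper names are ours.

```python
import itertools

def lstsq(cols, y):
    """Least squares for a small set of columns via the normal
    equations, solved by Gaussian elimination with partial pivoting
    (adequate for tiny, well-conditioned examples)."""
    k = len(cols)
    G = [[sum(a * b for a, b in zip(cols[i], cols[j])) for j in range(k)]
         for i in range(k)]
    rhs = [sum(a * b for a, b in zip(cols[i], y)) for i in range(k)]
    for c in range(k):
        piv = max(range(c, k), key=lambda r: abs(G[r][c]))
        G[c], G[piv] = G[piv], G[c]
        rhs[c], rhs[piv] = rhs[piv], rhs[c]
        for r in range(c + 1, k):
            f = G[r][c] / G[c][c]
            for j in range(c, k):
                G[r][j] -= f * G[c][j]
            rhs[r] -= f * rhs[c]
    beta = [0.0] * k
    for c in reversed(range(k)):
        beta[c] = (rhs[c] - sum(G[c][j] * beta[j]
                                for j in range(c + 1, k))) / G[c][c]
    return beta

def ml_sparse(H, y, s):
    """Exhaustive ML estimate: minimize ||y - H alpha||_2^2 over all
    size-s supports (smaller supports are covered, being contained in
    some size-s support).  Exponential in p, per the discussion above."""
    m, p = len(H), len(H[0])
    best, best_alpha = float('inf'), [0.0] * p
    for S in itertools.combinations(range(p), s):
        cols = [[H[i][j] for i in range(m)] for j in S]
        beta = lstsq(cols, y)
        resid = sum((y[i] - sum(cols[t][i] * beta[t] for t in range(s))) ** 2
                    for i in range(m))
        if resid < best:
            best = resid
            best_alpha = [0.0] * p
            for t, j in enumerate(S):
                best_alpha[j] = beta[t]
    return best_alpha
```

On noiseless data generated from a 2-sparse coefficient vector, the enumeration recovers that vector exactly, since only the true support attains zero residual.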
One of these is the $\ell_1$-penalty version of BPDN \cite{tropp06}, which is defined as a solution $\half_{\mathrm{BP}}$ to the quadratic program \begin{equation} \label{eq:bpdn} \min_{\boldsymbol \alpha} \tfrac{1}{2} \|\mb{y} - \H{\boldsymbol \alpha}\|_2^2 + \gamma \|{\boldsymbol \alpha}\|_1 \end{equation} with some regularization parameter $\gamma$. More recently, the DS was proposed \cite{candes07}; this approach estimates ${\boldsymbol \alpha}_0$ as a solution $\half_{\mathrm{DS}}$ to \begin{equation} \label{eq:ds} \min_{\boldsymbol \alpha} \|{\boldsymbol \alpha}\|_1 \quad \text{s.t. } \|\H^T (\mb{y} - \H{\boldsymbol \alpha}) \|_\infty \le \tau \end{equation} where $\tau$ is again a user-selected parameter. A modification of the DS, known as the Gauss--Dantzig selector (GDS) \cite{candes07}, is to use $\half_{\mathrm{DS}}$ only to estimate the support of ${\boldsymbol \alpha}_0$. In this approach, one solves \eqref{eq:ds} and determines the support set of $\half_{\mathrm{DS}}$. The GDS estimate is then obtained as \begin{equation} \label{eq:gds} \half_{\mathrm{GDS}} = \begin{cases} \H_{\half_{\mathrm{DS}}}^\dagger \mb{y} & \text{on the support set of $\half_{\mathrm{DS}}$} \cr \mb{0} & \text{elsewhere} \end{cases} \end{equation} where $\H_{\half_{\mathrm{DS}}}$ consists of the columns of $\H$ corresponding to the support of $\half_{\mathrm{DS}}$. Previous research on the performance of these estimators has primarily examined their worst-case MSE among all possible values of ${\boldsymbol \alpha}_0 \in {\mathcal T}$. Specifically, it has been shown \cite{candes07} that, under suitable conditions on $\H$, $s$, and $\tau$, the DS of \eqref{eq:ds} satisfies \begin{equation} \label{eq:ds wc bound} \|{\boldsymbol \alpha}_0 - \half_{\mathrm{DS}}\|_2^2 \le C s \sigma^2 \log p \quad \text{with high probability} \end{equation} for some constant $C$. 
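BPDN \eqref{eq:bpdn} is a convex quadratic program. One simple (if slow) way to approximate its solution — a generic solver, not the algorithm of the cited works — is iterative soft thresholding (ISTA) with step size $1/L$, where $L \ge \lambda_{\max}(\H^T\H)$; the Frobenius norm bound used below is a crude but valid choice.

```python
def soft(x, t):
    """Soft thresholding: the proximal operator of t * |.|."""
    return max(x - t, 0.0) if x > 0 else min(x + t, 0.0)

def bpdn_ista(H, y, gamma, iters=5000):
    """Approximate the BPDN/Lasso minimizer of
        0.5 * ||y - H a||_2^2 + gamma * ||a||_1
    by iterative soft thresholding.  Step 1/L with
    L = ||H||_F^2 >= lambda_max(H^T H) guarantees convergence."""
    m, p = len(H), len(H[0])
    L = sum(H[i][j] ** 2 for i in range(m) for j in range(p)) or 1.0
    a = [0.0] * p
    for _ in range(iters):
        r = [sum(H[i][j] * a[j] for j in range(p)) - y[i] for i in range(m)]
        g = [sum(H[i][j] * r[i] for i in range(m)) for j in range(p)]
        a = [soft(a[j] - g[j] / L, gamma / L) for j in range(p)]
    return a
```

For an orthonormal $\H$ the BPDN minimizer is the componentwise soft thresholding of $\H^T\mb{y}$ at level $\gamma$, which gives an easy correctness check; with $\gamma=0$ the iteration reduces to plain gradient descent on the least-squares objective.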
It follows that the MSE of the DS is also no greater than a constant times $s \sigma^2 \log p$ for all ${\boldsymbol \alpha}_0 \in {\mathcal T}$ \cite{candes06b}. An identical property was also demonstrated for BPDN \eqref{eq:bpdn} with an appropriate choice of $\gamma$ \cite{ben-haim09c}. Conversely, it is known that the worst-case error of \emph{any} estimator is at least a constant times $s \sigma^2 \log p$ \cite[\S7.4]{candes06b}. Thus, both BPDN and the DS are optimal, up to a constant, in terms of worst-case error. Nevertheless, the MSE of these approaches for specific values of ${\boldsymbol \alpha}_0$, even for a vast majority of such values, might be much lower. Our goal differs from this line of work in that we characterize the \emph{pointwise} performance of an estimator, i.e., the MSE for specific values of ${\boldsymbol \alpha}_0$. Another baseline with which practical techniques are often compared is the oracle estimator, given by \begin{equation}\label{eq:def xo} \half_{\mathrm{oracle}} = \begin{cases} \H_{{\boldsymbol \alpha}_0}^\dagger \mb{y} & \text{on the set $\supp({\boldsymbol \alpha}_0)$} \\ \mb{0} & \text{elsewhere} \end{cases} \end{equation} where $\H_{{\boldsymbol \alpha}_0}$ is the submatrix constructed from the columns of $\H$ corresponding to the nonzero entries of ${\boldsymbol \alpha}_0$. In other words, $\half_{\mathrm{oracle}}$ is the least-squares (LS) solution among vectors whose support coincides with $\supp({\boldsymbol \alpha}_0)$, which is assumed to have been provided by an ``oracle.'' Of course, in practice the support of ${\boldsymbol \alpha}_0$ is unknown, so that $\half_{\mathrm{oracle}}$ cannot actually be implemented. Nevertheless, one often compares the performance of true estimators with $\half_{\mathrm{oracle}}$, whose MSE is given by \cite{candes07} \begin{equation} \label{eq:oracle mse} \sigma^2 \Tr((\H_{{\boldsymbol \alpha}_0}^T \H_{{\boldsymbol \alpha}_0})^{-1}). 
\end{equation} Is \eqref{eq:oracle mse} a bound on estimation MSE\@? While $\half_{\mathrm{oracle}}$ is a reasonable technique to adopt if $\supp({\boldsymbol \alpha}_0)$ is known, this does not imply that \eqref{eq:oracle mse} is a lower bound on the performance of practical estimators. Indeed, as will be demonstrated in Section~\ref{se:numer}, when the SNR is low, both BPDN and the DS outperform $\half_{\mathrm{oracle}}$, thanks to the use of shrinkage in these estimators. Furthermore, if $\supp({\boldsymbol \alpha}_0)$ is known, then there exist biased techniques which are better than $\half_{\mathrm{oracle}}$ for \emph{all} values of ${\boldsymbol \alpha}_0$ \cite{ben-haim06}. Thus, $\half_{\mathrm{oracle}}$ is neither achievable in practice, nor optimal in terms of MSE\@. As we will see, one can indeed interpret \eqref{eq:oracle mse} as a lower bound on the achievable MSE, but such a result requires a certain restriction of the class of estimators under consideration. \section{The Constrained Cram\'er--Rao Bound} \label{se:crb} A common technique for determining the achievable performance in a given estimation problem is to calculate the CRB, which is a lower bound on the MSE of estimators having a given bias \cite{kay93}. In this paper, we are interested in calculating the CRB when it is known that the parameter $\mb{x}$ satisfies sparsity constraints such as those of the sets $\SS$ of \eqref{eq:def S} and ${\mathcal T}$ of \eqref{eq:def T}. The CRB for constrained parameter sets has been studied extensively in the past \cite{gorman90, marzetta93, StoicaNg98, ben-haim09}. However, in prior work derivation of the CRB assumed that the constraint set is given by \begin{equation} \label{eq:constr} {\mathcal X} = \{ \mb{x} \in {\mathbb R}^n: \mb{f}(\mb{x})=\mb{0}, \ \mb{g}(\mb{x})\le\mb{0} \} \end{equation} where $\mb{f}(\mb{x})$ and $\mb{g}(\mb{x})$ are continuously differentiable functions. 
We will refer to such ${\mathcal X}$ as continuously differentiable sets. As shown in prior work \cite{gorman90}, the resulting bound depends on the derivatives of the function $\mb{f}$. Yet in some cases, including the sparse estimation scenarios discussed in Section~\ref{se:sparse backgnd}, the constraint set cannot be written in the form \eqref{eq:constr}, and the aforementioned results are therefore inapplicable. Our goal in the current section is to close this gap by extending the constrained CRB to constraint sets ${\mathcal X}$ encompassing the sparse estimation scenario. We begin this section with a general discussion of the CRB and the class of estimators to which it applies. This will lead us to interpret the constrained CRB as a bound on estimators having an incompletely specified bias gradient. This interpretation will facilitate the application of the existing constrained CRB to the present context. \begin{figure*} \centerline{\includegraphics{locally_balanced.eps}} \caption{In a locally balanced set such as a union of subspaces (a) and an open ball (b), each point is locally defined by a set of feasible directions along which an infinitesimal movement does not violate the constraints. The curve (c) is not characterized in this way and thus is not locally balanced.} \label{fi:loc bal} \end{figure*} \subsection{Bias Requirements in the Constrained CRB} \label{ss:bias req} In previous settings for which the constrained CRB was derived, it was noted that the resulting bound is typically lower than the unconstrained version \cite[Remark~4]{gorman90}. At first glance, one would attribute the reduction in the value of the CRB to the fact that the constraints add information about the unknown parameter, which can then improve estimation performance. On the other hand, the CRB separately characterizes the achievable performance for each value of the unknown parameter $\mb{x}_0$. 
Thus, the CRB at $\mb{x}_0$ applies even to estimators designed specifically to perform well at $\mb{x}_0$. Such estimators surely cannot achieve further gain in performance if it is known that $\mb{x}_0 \in {\mathcal X}$. Why, then, is the constrained CRB lower than the unconstrained bound? The answer to this apparent paradox involves a careful definition of the class of estimators to which the bound applies. To obtain a meaningful bound, one must exclude some estimators from consideration. Unless this is done, the bound will be tarnished by estimators of the type ${\widehat{\x}} = \x_\mathrm{u}$, for some constant $\x_\mathrm{u}$, which achieve an MSE of $0$ at the specific point $\mb{x} = \x_\mathrm{u}$. It is standard practice to circumvent this difficulty by restricting attention to estimators having a particular bias $\b(\mb{x}) \triangleq \E{{\widehat{\x}}} - \mb{x}$. In particular, it is common to examine unbiased estimators, for which $\b(\mb{x}) = \mb{0}$. However, in some settings, it is impossible to construct estimators which are unbiased for all $\mb{x} \in {\mathbb R}^n$. For example, suppose we are to estimate the coefficients ${\boldsymbol \alpha}_0$ of an overcomplete dictionary based on the measurements given by \eqref{eq:y=H alf + w}. Since the dictionary is overcomplete, its nullspace is nontrivial; furthermore, each coefficient vector in the nullspace yields an identical distribution of the measurements, so that an estimator can be unbiased for one of these vectors at most. The question is whether it is possible to construct estimators which are unbiased for some, but not all, values of $\mb{x}$. One possible approach is to seek estimators which are unbiased for all $\mb{x} \in {\mathcal X}$. However, as we will see later in this section, even this requirement can be too strict: in some cases it is impossible to construct estimators which are unbiased for all $\mb{x} \in {\mathcal X}$. 
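The nullspace ambiguity described above is easy to verify numerically. The following is a minimal sketch, with a small randomly drawn dictionary (`H`, `alpha`, and the dimensions are our own illustrative choices, not taken from the paper): shifting a coefficient vector by a nullspace element leaves the noiseless measurements, and hence the measurement distribution, unchanged, so no estimator can distinguish the two parameter values.

```python
import numpy as np

# Hypothetical 3x5 overcomplete dictionary H (n < p), illustrating why an
# estimator cannot be unbiased for every coefficient vector: any v in the
# nullspace of H leaves the distribution of y = H*alpha + w unchanged.
rng = np.random.default_rng(0)
H = rng.standard_normal((3, 5))

# A nullspace direction of H via the SVD: the trailing rows of Vt span null(H).
_, _, Vt = np.linalg.svd(H)
v = Vt[-1]                      # one of the p - n = 2 nullspace directions

alpha = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
same_mean = np.allclose(H @ alpha, H @ (alpha + v))
print(same_mean)                # alpha and alpha + v are indistinguishable
```

Since `alpha` and `alpha + v` yield identical measurement statistics, an estimator can be unbiased for at most one of them.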
More generally, the CRB is a \emph{local} bound, meaning that it determines the achievable performance at a particular value of $\mb{x}$ based on the statistics at $\mb{x}$ and at nearby values. Thus, it is irrelevant to introduce requirements on estimation performance for parameters which are distant from the value $\mb{x}$ of interest. Since we seek a locally unbiased estimator, one possibility is to require unbiasedness at a single point, say $\x_\mathrm{u}$. As it turns out, it is always possible to construct such a technique: this is again ${\widehat{\x}} = \x_\mathrm{u}$, which is unbiased at $\x_\mathrm{u}$ but nowhere else. To avoid this loophole, one can require an estimator to be unbiased in the neighborhood \begin{equation} {{\mathcal B}_\eps(\x_0)} = \left\{ \mb{x} \in {\mathbb R}^n : \|\mb{x}-\mb{x}_0\|_2 < \varepsilon \right\} \end{equation} of $\mb{x}_0$, for some small $\varepsilon$. It follows that both the bias $\b(\mb{x})$ and the bias gradient \begin{equation} \label{eq:def B} \mb{B}(\mb{x}) \triangleq \pd{\b}{\mb{x}} \end{equation} vanish at $\mb{x} = \mb{x}_0$. This formulation is the basis of the unconstrained unbiased CRB, a lower bound on the covariance at $\mb{x}_0$ which applies to all estimators whose bias gradient is zero at $\mb{x}_0$. It turns out that even this requirement is too stringent in constrained settings. As we will see in Section~\ref{ss:sparse}, estimators of the coefficients of an overcomplete dictionary must have a nonzero bias gradient matrix. The reason is related to the fact that unbiasedness is required over the set ${{\mathcal B}_\eps(\x_0)}$, which, in the overcomplete setting, has a higher dimension than the number of measurements. However, it can be argued that one is not truly interested in the bias at all points in ${{\mathcal B}_\eps(\x_0)}$, since many of these points violate the constraint set ${\mathcal X}$.
A reasonable compromise is to require unbiasedness over ${{\mathcal B}_\eps(\x_0)} \cap {\mathcal X}$, i.e., over the neighborhood of $\mb{x}_0$ restricted to the constraint set ${\mathcal X}$. This leads to a weaker requirement on the bias gradient $\mb{B}$ at $\mb{x}_0$. Specifically, the derivatives of the bias need only be specified in directions which do not violate the constraints. The exact formulation of this requirement depends on the nature of the set ${\mathcal X}$. In the following subsections, we will investigate various constraint sets and derive the corresponding requirements on the bias function. It is worth emphasizing that the dependence of the CRB on the constraints is manifested through the class of estimators being considered, or more specifically, through the allowed estimators' bias gradient matrices. By contrast, the unconstrained CRB applies to estimators having a fully specified bias gradient matrix. Consequently, the constrained bound applies to a wider class of estimators, and is thus usually lower than the unconstrained version of the CRB\@. In other words, estimators which are unbiased in the constrained setting, and to which the unbiased constrained CRB therefore applies, are likely to be biased in the unconstrained context. Since a wider class of estimators is considered by the constrained CRB, the resulting bound is lower, thus explaining the puzzling phenomenon described in the beginning of this subsection. \subsection{Locally Balanced Constraints} \label{ss:loc bal} We now consider a class of constraint sets, called locally balanced sets, which encompass the sparsity constraints of Section~\ref{se:sparse backgnd}. Roughly speaking, a locally balanced set is one which is locally defined at each point by the directions along which one can move without leaving the set.
Formally, a set ${\mathcal X}$, which is a subset of a vector space, is said to be locally balanced if, for all $\mb{x} \in {\mathcal X}$, there exists an open set ${\mathcal C} \subset {\mathcal X}$ such that $\mb{x} \in {\mathcal C}$ and such that, for all $\mb{x}' \in {\mathcal C}$ and for all $|\lambda| \le 1$, we have \begin{equation} \label{eq:def loc bal} \mb{x} + \lambda (\mb{x}' - \mb{x}) \in {\mathcal C}. \end{equation} As we will see, locally balanced sets are useful in the context of the constrained CRB, as they allow us to identify the feasible directions along which the bias gradient must be specified. An example of a locally balanced set is given in Fig.~\ref{fi:loc bal}(a), which represents a union of two subspaces. In Fig.~\ref{fi:loc bal}(a), for any point $\mb{x} \in {\mathcal X}$, and for any point $\mb{x}' \in {\mathcal X}$ sufficiently close to $\mb{x}$, the entire line segment between $\mb{x}$ and $\mb{x}'$, as well as the line segment in the opposite direction, are also in ${\mathcal X}$. This illustrates the fact that any union of subspaces is locally balanced, and, in particular, so are the sparse estimation settings of Section~\ref{se:sparse backgnd} \cite{eldar09, eldar09b, gedalyahu09}. As another example, consider any open set, such as the open ball in Fig.~\ref{fi:loc bal}(b). For such a set, any point $\mb{x}$ has a sufficiently small neighborhood ${\mathcal C}$ such that, for any $\mb{x}' \in {\mathcal C}$, the line segment connecting $\mb{x}$ to $\mb{x}'$ is contained in ${\mathcal X}$. On the other hand, the curve in Fig.~\ref{fi:loc bal}(c) is not locally balanced, since the line connecting $\mb{x}$ to any other point on the set does not lie within the set.\footnote{We note in passing that since the curve in Fig.~\ref{fi:loc bal}(c) is continuously differentiable, it can be locally approximated by a locally balanced set.
Our derivation of the CRB can be extended to such approximately locally balanced sets in a manner similar to that of \cite{gorman90}, but such an extension is not necessary for the purposes of this paper.} Observe that the neighborhood of a point $\mb{x}$ in a locally balanced set ${\mathcal X}$ is entirely determined by the set of feasible directions $\v$ along which infinitesimal changes of $\mb{x}$ do not violate the constraints. These are the directions $\v = \mb{x}' - \mb{x}$ for all points $\mb{x}' \ne \mb{x}$ in the set ${\mathcal C}$ of \eqref{eq:def loc bal}. Recall that we seek a lower bound on the performance of estimators whose bias gradient is defined over the neighborhood of $\mb{x}_0$ restricted to the constraint set ${\mathcal X}$. Suppose for concreteness that we are interested in unbiased estimators. For a locally balanced constraint set ${\mathcal X}$, this implies that \begin{equation} \label{eq:pre T-unbias} \mb{B} \v = \mb{0} \end{equation} for any feasible direction $\v$. In other words, all feasible directions must be in the nullspace of $\mb{B}$. This is a weaker condition than requiring the bias gradient to equal zero, and is thus more useful for constrained estimation problems. If an estimator ${\widehat{\x}}$ satisfies \eqref{eq:pre T-unbias} for all feasible directions $\v$ at a certain point $\mb{x}_0$, we say that ${\widehat{\x}}$ is ${\mathcal X}$-unbiased at $\mb{x}_0$. This terminology emphasizes the fact that ${\mathcal X}$-unbiasedness depends both on the point $\mb{x}_0$ and on the constraint set ${\mathcal X}$. Consider the subspace $\mathcal F$ spanned by the feasible directions at a certain point $\mb{x} \in {\mathcal X}$. We refer to $\mathcal F$ as the feasible subspace at $\mb{x}$. Note that $\mathcal F$ may include infeasible directions, if these are linear combinations of feasible directions. 
Nevertheless, because of the linearity of \eqref{eq:pre T-unbias}, any vector $\u \in \mathcal F$ satisfies $\mb{B}\u = \mb{0}$, even if $\u$ is infeasible. Thus, ${\mathcal X}$-unbiasedness is actually a property of the feasible subspace $\mathcal F$, rather than the set of feasible directions. Since ${\mathcal X}$ is a subset of a finite-dimensional Euclidean space, $\mathcal F$ is also finite-dimensional, although different points in ${\mathcal X}$ may yield subspaces having differing dimensions. Let $\u_1, \ldots, \u_l$ denote an orthonormal basis for $\mathcal F$, and define the matrix \begin{equation} \label{eq:def U} \mb{U} = [ \u_1, \ldots, \u_l ]. \end{equation} Note that $\u_i$ and $\mb{U}$ are functions of $\mb{x}$. For a given $\mb{x}$, different orthonormal bases can be chosen, but the choice of a basis is arbitrary and will not affect our results. As we have seen, ${\mathcal X}$-unbiasedness at $\mb{x}_0$ can alternatively be written as $\mb{B}\u = \mb{0}$ for all $\u \in \mathcal F$, or, equivalently \begin{equation} \label{eq:T-unbias} \mb{B}\mb{U} = \mb{0}. \end{equation} The constrained CRB can now be derived as a lower bound on all ${\mathcal X}$-unbiased estimators, which is a weaker requirement than ``ordinary'' unbiasedness. Just as ${\mathcal X}$-unbiasedness was defined by requiring the bias gradient matrix to vanish when multiplied by any feasible direction vector, we can define ${\mathcal X}$-biased estimators by requiring a specific value (not necessarily zero) for the bias gradient matrix when multiplied by a feasible direction vector. In an analogy to \eqref{eq:T-unbias}, this implies that one must define a value for the matrix $\mb{B}\mb{U}$. Our goal is thus to construct a lower bound on the covariance at a given $\mb{x}$ achievable by any estimator whose bias gradient $\mb{B}$ at $\mb{x}$ satisfies $\mb{B}\mb{U} = \P$, for a given matrix $\P$.
This is referred to as specifying the ${\mathcal X}$-bias of the estimator at $\mb{x}$. \subsection{The CRB for Locally Balanced Constraints} It is helpful at this point to compare our derivation with prior work on the constrained CRB, which considered continuously differentiable constraint sets of the form \eqref{eq:constr}. It has been previously shown \cite{gorman90} that inequality constraints of the type $\mb{g}(\mb{x}) \le \mb{0}$ have no effect on the CRB\@. Consequently, we will consider constraints of the form \begin{equation} \label{eq:eq constr} {\mathcal X} = \{ \mb{x} \in {\mathbb R}^n: \mb{f}(\mb{x})=\mb{0} \}. \end{equation} Define the $k \times n$ matrix $\mb{F}(\mb{x}) = \partial \mb{f} / \partial \mb{x}$. For simplicity of notation, we will omit the dependence of $\mb{F}$ on $\mb{x}$. Assuming that the constraints are non-redundant, $\mb{F}$ is a full-rank matrix, and thus one can define an $n \times (n-k)$ matrix $\mb{W}$ (also dependent on $\mb{x}$) such that \begin{equation} \mb{F}\mb{W} = \mb{0}, \quad \mb{W}^T\mb{W} = \mb{I}. \end{equation} The matrix $\mb{W}$ is closely related to the matrix $\mb{U}$ spanning the feasible direction subspace of locally balanced sets. Indeed, the column space $\Ra{\mb{W}}$ of $\mb{W}$ is the tangent space of ${\mathcal X}$, i.e., the subspace of ${\mathbb R}^n$ containing all vectors which are tangent to ${\mathcal X}$ at the point $\mb{x}$. Thus, the vectors in $\Ra{\mb{W}}$ are precisely those directions along which infinitesimal motion from $\mb{x}$ does not violate the constraints, up to a first-order approximation. It follows that if a particular set ${\mathcal X}$ is both locally balanced and continuously differentiable, its matrices $\mb{U}$ and $\mb{W}$ coincide. Note, however, that there exist sets which are locally balanced but not continuously differentiable (and vice versa). 
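For continuously differentiable constraints, the matrix $\mb{W}$ can be obtained numerically as an orthonormal basis for the nullspace of $\mb{F}$. The sketch below uses the illustrative unit-sphere constraint $f(\mb{x}) = \|\mb{x}\|_2^2 - 1$ in ${\mathbb R}^3$ (our own toy example, not from the paper) and extracts $\mb{W}$ from the SVD of $\mb{F}$:

```python
import numpy as np

# Sketch: for a constraint f(x) = 0 with F = df/dx full rank, construct W
# satisfying F W = 0 and W^T W = I.  Here f(x) = ||x||^2 - 1 (unit sphere),
# so F = 2 x^T, a 1 x 3 matrix, and W is 3 x (3 - 1) = 3 x 2.
x = np.array([1.0, 0.0, 0.0])        # a feasible point: f(x) = 0
F = 2.0 * x.reshape(1, -1)           # F = df/dx evaluated at x

# Orthonormal nullspace basis from the SVD: rows of Vt beyond rank(F).
_, _, Vt = np.linalg.svd(F)
W = Vt[1:].T

ok_null = np.allclose(F @ W, 0.0)            # F W = 0
ok_orth = np.allclose(W.T @ W, np.eye(2))    # W^T W = I
print(ok_null, ok_orth)
```

Here the columns of `W` span the tangent plane of the sphere at `x`, matching the interpretation of $\Ra{\mb{W}}$ as the tangent space of ${\mathcal X}$.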
With the above formulation, the CRB for continuously differentiable constraints can be stated as a function of the matrix $\mb{W}$ and the bias gradient $\mb{B}$ \cite{ben-haim09}. In fact, the resulting bound depends on $\mb{B}$ only through $\mb{B}\mb{W}$. This is to be expected in light of the discussion of Section~\ref{ss:bias req}: The bias should be specified only for those directions which do not violate the constraint set. Furthermore, the proof of the CRB in \cite[Theorem~1]{ben-haim09} depends not on the formulation \eqref{eq:eq constr} of the constraint set, but merely on the class of bias functions under consideration. Consequently, one can state the bound without any reference to the underlying constraint set. To do so, let $\mb{y}$ be a measurement vector with pdf $p(\mb{y};\mb{x})$, which is assumed to be differentiable with respect to $\mb{x}$. The Fisher information matrix (FIM) $\mb{J}(\mb{x})$ is defined as \begin{equation} \label{eq:def J} \mb{J}(\mb{x}) = \E{{\boldsymbol \Delta} {\boldsymbol \Delta}^T} \end{equation} where \begin{equation} \label{eq:def bD} {\boldsymbol \Delta} = \pd{\log p(\mb{y};\mb{x})}{\mb{x}}. \end{equation} We assume that the FIM is well-defined and finite. We further assume that integration with respect to $\mb{y}$ and differentiation with respect to $\mb{x}$ can be interchanged, a standard requirement for the CRB\@. We then have the following result. \begin{theorem} \label{th:crb} Let ${\widehat{\x}}$ be an estimator and let $\mb{B} = \partial \b / \partial \mb{x}$ denote the bias gradient matrix of ${\widehat{\x}}$ at a given point $\mb{x}_0$. Let $\mb{U}$ be an orthonormal matrix, and suppose that $\mb{B}\mb{U}$ is known, but that $\mb{B}$ is otherwise arbitrary.
If \begin{equation} \label{eq:UUM in UUJUU} \Ra{\mb{U}(\mb{U}+\mb{B}\mb{U})^T} \subseteq \Ra{{\U\U^T\J\U\U^T}} \end{equation} then the covariance of ${\widehat{\x}}$ at $\mb{x}_0$ satisfies \begin{equation} \label{eq:th:crb} \Cov({\widehat{\x}}) \succeq (\mb{U}+\mb{B}\mb{U}) \left(\mb{U}^T\mb{J}\mb{U} \right)^\dagger (\mb{U}+\mb{B}\mb{U})^T. \end{equation} Equality is achieved in \eqref{eq:th:crb} if and only if \begin{equation} \label{eq:th:crb eq cond} {\widehat{\x}} = \mb{x}_0 + \b(\mb{x}_0) + (\mb{U}+\mb{B}\mb{U}) \left( \mb{U}^T\mb{J}\mb{U} \right)^\dagger \mb{U}^T {\boldsymbol \Delta} \end{equation} in the mean square sense, where ${\boldsymbol \Delta}$ is defined by \eqref{eq:def bD}. Conversely, if \eqref{eq:UUM in UUJUU} does not hold, then there exists no finite-variance estimator with the required bias gradient. \end{theorem} As required, no mention of constrained estimation is made in Theorem~\ref{th:crb}; instead, partial information about the bias gradient is assumed. Apart from this restatement, the theorem is identical to \cite[Theorem~1]{ben-haim09}, and its proof is unchanged. However, the above formulation is more general in that it can be applied to any constrained setting, once the constraints have been translated to bias gradient requirements. In particular, Theorem~\ref{th:crb} provides a CRB for locally balanced sets if the matrix $\mb{U}$ is chosen as a basis for the feasible direction subspace of Section~\ref{ss:loc bal}. \section{Bounds on Sparse Estimation} \label{se:sparse bounds} In this section, we apply the CRB of Theorem~\ref{th:crb} to several sparse estimation scenarios. We begin with an analysis of the problem of estimating a sparse parameter vector. \subsection{Estimating a Sparse Vector} \label{ss:sparse} Suppose we would like to estimate a parameter vector ${\boldsymbol \alpha}_0$, known to belong to the set ${\mathcal T}$ of \eqref{eq:def T}, from measurements $\mb{y}$ given by \eqref{eq:y=H alf + w}.
To determine the CRB in this setting, we begin by identifying the feasible subspaces $\mathcal F$ corresponding to each of the elements in ${\mathcal T}$. To this end, consider first vectors ${\boldsymbol \alpha} \in {\mathcal T}$ for which $\|{\boldsymbol \alpha}\|_0 = s$, i.e., vectors having maximal support. Denote by $\{ i_1, \ldots, i_s \}$ the support set of ${\boldsymbol \alpha}$. Then, for all $\delta$, we have \begin{equation} \|{\boldsymbol \alpha} + \delta \mb{e}_{i_k}\|_0 = \|{\boldsymbol \alpha}\|_0 = s, \quad k=1,\ldots,s \end{equation} where $\mb{e}_j$ is the $j$th column of the identity matrix. Thus ${\boldsymbol \alpha} + \delta \mb{e}_{i_k} \in {\mathcal T}$, and consequently, the vectors $\{ \mb{e}_{i_1}, \ldots, \mb{e}_{i_s} \}$ are all feasible directions, as is any linear combination of these vectors. On the other hand, for any $j \notin \supp({\boldsymbol \alpha})$ and for any nonzero $\delta$, we have $\|{\boldsymbol \alpha} + \delta \mb{e}_j\|_0 = s+1$, and thus $\mb{e}_j$ is not a feasible direction; neither is any other vector which is not in $\spn\{\mb{e}_{i_1}, \ldots, \mb{e}_{i_s}\}$. It follows that the feasible subspace $\mathcal F$ for points having maximal support is given by $\spn\{\mb{e}_{i_1}, \ldots, \mb{e}_{i_s}\}$, and a possible choice for the matrix $\mb{U}$ of \eqref{eq:def U} is \begin{equation} \label{eq:U when =s} \mb{U} = [ \mb{e}_{i_1}, \ldots, \mb{e}_{i_s} ] \quad \text{for } \|{\boldsymbol \alpha}\|_0 = s. \end{equation} The situation is different for points ${\boldsymbol \alpha}$ having $\|{\boldsymbol \alpha}\|_0 < s$. In this case, vectors $\mb{e}_i$ corresponding to \emph{any} direction $i$ are feasible directions, since \begin{equation} \|{\boldsymbol \alpha} + \delta \mb{e}_i\|_0 \le \|{\boldsymbol \alpha}\|_0 + 1 \le s. 
\end{equation} Because the feasible subspace is defined as the span of all feasible directions, we have \begin{equation} \mathcal F \supseteq \spn\{ \mb{e}_1, \ldots, \mb{e}_p \} = {\mathbb R}^p. \end{equation} It follows that $\mathcal F = {\mathbb R}^p$ and thus a convenient choice for the matrix $\mb{U}$ is \begin{equation} \label{eq:U when <s} \mb{U} = \mb{I} \quad \text{for } \|{\boldsymbol \alpha}\|_0 < s. \end{equation} Consequently, whenever $\|{\boldsymbol \alpha}\|_0 < s$, a specification of the ${\mathcal T}$-bias amounts to completely specifying the usual estimation bias $\b(\mb{x})$. To invoke Theorem~\ref{th:crb}, we must also determine the FIM $\mb{J}({\boldsymbol \alpha})$. Under our assumption of white Gaussian noise, $\mb{J}({\boldsymbol \alpha})$ is given by \cite[p.~85]{kay93} \begin{equation} \label{eq:half J} \mb{J}({\boldsymbol \alpha}) = \frac{1}{\sigma^2} \H^T\H. \end{equation} Using \eqref{eq:U when =s}, \eqref{eq:U when <s}, and \eqref{eq:half J}, it is readily shown that \begin{equation} \label{eq:half UJU} \mb{U}^T\mb{J}\mb{U} = \begin{cases} \frac{1}{\sigma^2} \H_{\boldsymbol \alpha}^T \H_{\boldsymbol \alpha} & \text{when } \|{\boldsymbol \alpha}\|_0 = s \\ \frac{1}{\sigma^2} \H^T \H & \text{when } \|{\boldsymbol \alpha}\|_0 < s \end{cases} \end{equation} where $\H_{\boldsymbol \alpha}$ is the $p \times s$ matrix consisting of the columns of $\H$ indexed by $\supp({\boldsymbol \alpha})$. We now wish to determine under what conditions \eqref{eq:UUM in UUJUU} holds. Consider first points ${\boldsymbol \alpha}_0$ for which $\|{\boldsymbol \alpha}_0\|_0 = s$. Since, by \eqref{eq:spark req}, we have $\spark(\H)>s$, it follows that in this case $\mb{U}^T\mb{J}\mb{U}$ is invertible. Therefore \begin{equation} \Ra{{\U\U^T\J\U\U^T}} = \Ra{\mb{U}\U^T}. 
\end{equation} Since \begin{equation} \Ra{\mb{U}\U^T(\mb{I}+\mb{B}^T)} \subseteq \Ra{\mb{U}\U^T} \end{equation} we have that condition \eqref{eq:UUM in UUJUU} holds when $\|{\boldsymbol \alpha}_0\|_0=s$. The condition \eqref{eq:UUM in UUJUU} is no longer guaranteed when $\|{\boldsymbol \alpha}_0\|_0 < s$. In this case, $\mb{U}=\mb{I}$, so that \eqref{eq:UUM in UUJUU} is equivalent to \begin{equation} \label{eq:I+B in HH} \Ra{\mb{I}+\mb{B}^T} \subseteq \Ra{\H^T\H}. \end{equation} Using the fact that $\Ra{\H^T\H} = \Ra{\H^T}$ and that, for any matrix $\mb{Q}$, $\Ra{\mb{Q}^T} = \Nu{\mb{Q}}^\perp$, we find that \eqref{eq:I+B in HH} is equivalent to \begin{equation} \label{eq:N(H) in N(I+B)} \Nu{\H} \subseteq \Nu{\mb{I}+\mb{B}}. \end{equation} Combining these conclusions with Theorem~\ref{th:crb} yields the following CRB for the problem of estimating a sparse vector. \begin{theorem} \label{th:alf} Consider the estimation problem \eqref{eq:y=H alf + w} with ${\boldsymbol \alpha}_0$ given by \eqref{eq:def T}, and assume that \eqref{eq:spark req} holds. For a finite-variance estimator ${\widehat{\alf}}$ of ${\boldsymbol \alpha}_0$ to exist, its bias gradient matrix $\mb{B}$ must satisfy \eqref{eq:N(H) in N(I+B)} whenever $\|{\boldsymbol \alpha}_0\|_0 < s$. Furthermore, the covariance of any estimator whose ${\mathcal T}$-bias gradient matrix is $\mb{B}\mb{U}$ satisfies \begin{align} \label{eq:th:alf} \Cov({\widehat{\alf}}) &\succeq \sigma^2 (\mb{I}+\mb{B}) (\H^T\H)^\dagger (\mb{I}+\mb{B}^T) \notag\\ &\hspace{11em} \text{ when } \|{\boldsymbol \alpha}_0\|_0 < s, \notag\\ \Cov({\widehat{\alf}}) &\succeq \sigma^2 (\mb{U}+\mb{B}\mb{U}) (\H_{{\boldsymbol \alpha}_0}^T \H_{{\boldsymbol \alpha}_0})^{-1} (\mb{U}+\mb{B}\mb{U})^T \notag\\ &\hspace{11em} \text{ when } \|{\boldsymbol \alpha}_0\|_0 = s. \end{align} Here, $\H_{{\boldsymbol \alpha}_0}$ is the matrix containing the columns of $\H$ corresponding to $\supp({\boldsymbol \alpha}_0)$. 
\end{theorem} Let us examine Theorem~\ref{th:alf} separately in the underdetermined and well-determined cases. In the well-determined case, in which $\H$ has full column rank, the nullspace of $\H$ is trivial, so that \eqref{eq:N(H) in N(I+B)} always holds. It follows that the CRB is always finite, in the sense that we cannot rule out the existence of an estimator having any given bias function. Some insight can be obtained in this case by examining the ${\mathcal T}$-unbiased case. Noting also that $\H^T\H$ is invertible in the well-determined case, the bound for ${\mathcal T}$-unbiased estimators is given by \begin{align} \label{eq:alf well-det unbiased} \Cov({\widehat{\alf}}) &\succeq \sigma^2 (\H^T\H)^{-1} &\text{ when } \|{\boldsymbol \alpha}_0\|_0 &< s, \notag\\ \Cov({\widehat{\alf}}) &\succeq \sigma^2 \mb{U} (\H_{{\boldsymbol \alpha}_0}^T \H_{{\boldsymbol \alpha}_0})^{-1} \mb{U}^T &\text{ when } \|{\boldsymbol \alpha}_0\|_0 &= s. \end{align} From this formulation, the behavior of the CRB can be described as follows. When ${\boldsymbol \alpha}_0$ has non-maximal support ($\|{\boldsymbol \alpha}_0\|_0 < s$), the CRB is identical to the bound which would have been obtained had there been no constraints in the problem. This is because $\mb{U}=\mb{I}$ in this case, so that ${\mathcal T}$-unbiasedness and ordinary unbiasedness are equivalent. As we have seen in Section~\ref{ss:bias req}, the CRB is a function of the class of estimators under consideration, so the unconstrained and constrained bounds are equivalent in this situation. The bound $\sigma^2 (\H^T\H)^{-1}$ is achieved by the unconstrained LS estimator \begin{equation} {\widehat{\alf}} = (\H^T\H)^{-1}\H^T\mb{y} \end{equation} which is the minimum variance unbiased estimator in the unconstrained case.
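The claim that the LS estimator attains the bound $\sigma^2(\H^T\H)^{-1}$ can be checked by direct matrix algebra, without Monte Carlo simulation. In the sketch below, $\H$ is an arbitrary full-column-rank matrix chosen purely for illustration:

```python
import numpy as np

# Sketch: in the well-determined case, the LS estimator
# alpha_hat = (H^T H)^{-1} H^T y is unbiased, and its covariance equals
# the unbiased CRB sigma^2 (H^T H)^{-1}.  H is an illustrative choice.
rng = np.random.default_rng(1)
n, p, sigma = 8, 4, 0.5
H = rng.standard_normal((n, p))       # full column rank (generic)

P = np.linalg.solve(H.T @ H, H.T)     # the LS operator (H^T H)^{-1} H^T
crb = sigma**2 * np.linalg.inv(H.T @ H)

# E[alpha_hat] = P H alpha = alpha for every alpha, i.e. unbiasedness.
unbiased = np.allclose(P @ H, np.eye(p))

# Cov(P y) = P (sigma^2 I) P^T, which collapses to the CRB.
cov_ls = sigma**2 * P @ P.T
print(unbiased, np.allclose(cov_ls, crb))
```

The identity $\mb{P}\mb{P}^T = (\H^T\H)^{-1}$ is exactly the algebra behind \eqref{eq:alf well-det unbiased} in the non-maximal-support branch.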
Thus, we learn from Theorem~\ref{th:alf} that for values of ${\boldsymbol \alpha}_0$ having non-maximal support, no ${\mathcal T}$-unbiased technique can outperform the standard LS estimator, which does not assume any knowledge about the constraint set ${\mathcal T}$. On the other hand, consider the case in which ${\boldsymbol \alpha}_0$ has maximal support, i.e., $\|{\boldsymbol \alpha}_0\|_0 = s$. Suppose first that $\supp({\boldsymbol \alpha}_0)$ is known, so that one must estimate only the nonzero values of ${\boldsymbol \alpha}_0$. In this case, a reasonable approach is to use the oracle estimator \eqref{eq:def xo}, whose covariance matrix is given by $\sigma^2 \mb{U} (\H_{{\boldsymbol \alpha}_0}^T \H_{{\boldsymbol \alpha}_0})^{-1} \mb{U}^T$ \cite{candes07}. Thus, when ${\boldsymbol \alpha}_0$ has maximal support, Theorem~\ref{th:alf} states that ${\mathcal T}$-unbiased estimators can perform, at best, as well as the oracle estimator, which is equivalent to the LS approach when the support of ${\boldsymbol \alpha}_0$ is known. The situation is similar, but somewhat more involved, in the underdetermined case. Here, the condition \eqref{eq:N(H) in N(I+B)} for the existence of an estimator having a given bias gradient matrix no longer automatically holds. To interpret this condition, it is helpful to introduce the mean gradient matrix $\mb{M}({\boldsymbol \alpha})$, defined as \begin{equation} \mb{M}({\boldsymbol \alpha}) = \pd{\E{{\widehat{\alf}}}}{{\boldsymbol \alpha}} = \mb{I} + \mb{B}. \end{equation} The matrix $\mb{M}({\boldsymbol \alpha})$ is a measure of the sensitivity of an estimator to changes in the parameter vector. For example, a ${\mathcal T}$-unbiased estimator is sensitive to any \emph{feasible} change in ${\boldsymbol \alpha}$. Thus, $\Nu{\mb{M}}$ denotes the subspace of directions to which ${\widehat{\alf}}$ is insensitive. 
Likewise, $\Nu{\H}$ is the subspace of directions for which a change in ${\boldsymbol \alpha}$ does not modify $\H{\boldsymbol \alpha}$. The condition \eqref{eq:N(H) in N(I+B)} therefore states that for an estimator to exist, it must be insensitive to changes in ${\boldsymbol \alpha}$ which are unobservable through $\H{\boldsymbol \alpha}$, at least when $\|{\boldsymbol \alpha}\|_0 < s$. No such requirement is imposed in the case $\|{\boldsymbol \alpha}\|_0 = s$, since in this case there are far fewer feasible directions. The lower bound \eqref{eq:th:alf} is similarly a consequence of the wide range of feasible directions obtained when $\|{\boldsymbol \alpha}\|_0 < s$, as opposed to the tight constraints when $\|{\boldsymbol \alpha}\|_0 = s$. Specifically, when $\|{\boldsymbol \alpha}\|_0 < s$, a change to any component of ${\boldsymbol \alpha}$ is feasible and hence the lower bound equals that of an unconstrained estimation problem, with the FIM given by $\sigma^{-2} \H^T \H$. On the other hand, when $\|{\boldsymbol \alpha}\|_0 = s$, the bound is effectively that of an estimator with knowledge of the particular subspace to which ${\boldsymbol \alpha}$ belongs; for this subspace the FIM is the submatrix $\mb{U}^T\mb{J}\mb{U}$ given in \eqref{eq:half UJU}. This phenomenon is discussed further in Section~\ref{se:discuss}. Another difference between the well-determined and underdetermined cases is that when $\H$ is underdetermined, an estimator cannot be ${\mathcal T}$-unbiased for all ${\boldsymbol \alpha}$. To see this, recall from \eqref{eq:T-unbias} that ${\mathcal T}$-unbiased estimators are defined by the fact that $\mb{B}\mb{U}=\mb{0}$. When $\|{\boldsymbol \alpha}\|_0 < s$, we have $\mb{U}=\mb{I}$ and thus ${\mathcal T}$-unbiasedness implies $\mb{B}=\mb{0}$, so that $\Nu{\mb{I}+\mb{B}} = \{ \mb{0} \}$. But since $\H$ is underdetermined, $\Nu{\H}$ is nontrivial. 
Consequently, \eqref{eq:N(H) in N(I+B)} cannot hold for ${\mathcal T}$-unbiased estimators when $\|{\boldsymbol \alpha}\|_0 < s$. The lack of ${\mathcal T}$-unbiased estimators when $\|{\boldsymbol \alpha}_0\|_0 < s$ is a direct consequence of the fact that the feasible direction set at such ${\boldsymbol \alpha}_0$ contains all of the directions $\mb{e}_1, \ldots, \mb{e}_p$. The conclusion from Theorem~\ref{th:alf} is then that no estimator can be expected to be unbiased in such a high-dimensional neighborhood, just as unbiased estimation is impossible in the $p$-dimensional neighborhood ${{\mathcal B}_\eps({\boldsymbol \alpha}_0)}$, as explained in Section~\ref{ss:bias req}. However, it is still possible to obtain a finite CRB in this setting by further restricting the constraint set: if it is known that $\|{\boldsymbol \alpha}_0\|_0 = \tilde{s} < s$, then one can redefine ${\mathcal T}$ in \eqref{eq:def T} by replacing $s$ with $\tilde{s}$. This will enlarge the class of estimators considered ${\mathcal T}$-unbiased, and Theorem~\ref{th:alf} would then provide a finite lower bound on those estimators. Such estimators will not, however, be unbiased in the sense implied by the original constraint set. While an estimator cannot be unbiased for \emph{all} ${\boldsymbol \alpha} \in {\mathcal T}$, unbiasedness is possible at points ${\boldsymbol \alpha}$ for which $\|{\boldsymbol \alpha}\|_0 = s$. In this case, Theorem~\ref{th:alf} produces a bound on the MSE of a ${\mathcal T}$-unbiased estimator, obtained by calculating the trace of \eqref{eq:th:alf} in the case $\mb{B}\mb{U}=\mb{0}$. This bound is given by \begin{equation} \label{eq:crb T-unbiased} \E{\|{\widehat{\alf}} - {\boldsymbol \alpha}_0\|_2^2} \ge \sigma^2 \Tr((\H_{{\boldsymbol \alpha}_0}^T \H_{{\boldsymbol \alpha}_0})^{-1}), \quad \|{\boldsymbol \alpha}_0\|_0 = s. \end{equation} The most striking feature of \eqref{eq:crb T-unbiased} is that it is identical to the oracle MSE \eqref{eq:oracle mse}. 
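The coincidence of the trace of the bound with the oracle MSE follows from $\mb{U}^T\mb{U} = \mb{I}$ and can be confirmed numerically. In the following sketch the dictionary and the support set are illustrative choices of our own:

```python
import numpy as np

# Sketch: evaluate the T-unbiased CRB  sigma^2 * trace((H_a^T H_a)^{-1})
# at a maximal-support point and compare it with the oracle MSE
# trace(sigma^2 * U (H_a^T H_a)^{-1} U^T).  H and the support are illustrative.
rng = np.random.default_rng(2)
n, p, s, sigma = 10, 20, 3, 1.0
H = rng.standard_normal((n, p))
H /= np.linalg.norm(H, axis=0)        # normalize the columns of H

support = [2, 7, 11]                  # supp(alpha_0), |support| = s
Hs = H[:, support]
inv_gram = np.linalg.inv(Hs.T @ Hs)
crb = sigma**2 * np.trace(inv_gram)

# Oracle covariance: sigma^2 * U (H_a^T H_a)^{-1} U^T with U = [e_i1 ... e_is].
U = np.eye(p)[:, support]
oracle_mse = np.trace(sigma**2 * U @ inv_gram @ U.T)
print(np.isclose(crb, oracle_mse))
```

Since $\Tr(\mb{U}\mb{M}\mb{U}^T) = \Tr(\mb{M})$ for any orthonormal $\mb{U}$, the two quantities agree exactly.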
However, the CRB is of additional importance because of the fact that the ML estimator achieves the CRB in the limit when a large number of independent measurements are available, a situation which is equivalent in our setting to the limit $\sigma \rightarrow 0$. In other words, an MSE of \eqref{eq:crb T-unbiased} is achieved at high SNR by the ML approach \eqref{eq:ml}, as we will illustrate numerically in Section~\ref{se:numer}. While the ML approach is computationally intractable in the sparse estimation setting, it is still implementable in principle, as opposed to $\half_{\mathrm{oracle}}$, which relies on unavailable information (namely, the support set of ${\boldsymbol \alpha}_0$). Thus, Theorem~\ref{th:alf} gives an alternative interpretation to comparisons of estimator performance with the oracle. Observe that the bound \eqref{eq:crb T-unbiased} depends on the value of ${\boldsymbol \alpha}_0$ (through its support set, which defines $\H_{{\boldsymbol \alpha}_0}$). This implies that some values of ${\boldsymbol \alpha}_0$ are more difficult to estimate than others. For example, suppose the $\ell_2$ norms of some of the columns of $\H$ are significantly larger than those of the remaining columns. Measurements of a parameter ${\boldsymbol \alpha}_0$ whose support corresponds to the large-norm columns of $\H$ will then have a much higher SNR than measurements of a parameter corresponding to small-norm columns, and this will clearly affect the accuracy with which ${\boldsymbol \alpha}_0$ can be estimated. To analyze the behavior beyond this effect, it is common to consider the situation in which the columns $\mb{h}_i$ of $\H$ are normalized so that $\|\mb{h}_i\|_2 = 1$. In this case, for sufficiently incoherent dictionaries, $\Tr((\H_{{\boldsymbol \alpha}_0}^T \H_{{\boldsymbol \alpha}_0})^{-1})$ is bounded above and below by a small constant times $s$, so that the CRB is similar for all values of ${\boldsymbol \alpha}_0$.
To see this, let $\mu$ be the coherence of $\H$ \cite{tropp06}, defined (for $\H$ having normalized columns) as \begin{equation} \mu \triangleq \max_{i \ne j} \left| \mb{h}_i^T \mb{h}_j \right| . \end{equation} By the Gershgorin disc theorem, the eigenvalues of $\H_{{\boldsymbol \alpha}_0}^T \H_{{\boldsymbol \alpha}_0}$ are in the range $[1 - s\mu, 1 + s\mu]$. It follows that the unbiased CRB \eqref{eq:crb T-unbiased} is bounded above and below by \begin{equation} \frac{s\sigma^2}{1+s\mu} \le \sigma^2 \Tr((\H_{{\boldsymbol \alpha}_0}^T \H_{{\boldsymbol \alpha}_0})^{-1}) \le \frac{s\sigma^2}{1-s\mu}. \end{equation} Thus, when $s$ is somewhat smaller than $1/\mu$, the CRB is roughly equal to $s \sigma^2$ for all values of ${\boldsymbol \alpha}_0$. As we have seen in Section~\ref{ss:est techniques}, for sufficiently small $s$, the worst-case MSE of practical estimators, such as BPDN and the DS, is $O(s \sigma^2 \log p)$. Thus, practical estimators come almost within a constant of the unbiased CRB, implying that they are close to optimal for all values of ${\boldsymbol \alpha}_0$, at least when compared with unbiased techniques. \subsection{Denoising and Deblurring} \label{ss:deblur} We next consider the problem \eqref{eq:y=Ax+w}, in which it is required to estimate not the sparse vector ${\boldsymbol \alpha}_0$ itself, but rather the vector $\mb{x}_0 = \mb{D} {\boldsymbol \alpha}_0$, where $\mb{D}$ is a known dictionary matrix. Thus, $\mb{x}_0$ belongs to the set $\SS$ of \eqref{eq:def S}. We assume for concreteness that $\mb{D}$ has full row rank and that $\mb{A}$ has full column rank. This setting encompasses the denoising and deblurring problems described in Section~\ref{ss:sparse setting}, with the former arising when $\mb{A}=\mb{I}$ and the latter obtained when $\mb{A}$ represents a blurring kernel. Similar calculations can be carried out when $\mb{A}$ is rank-deficient, a situation which occurs, for example, in some interpolation problems. 
Recall from Section~\ref{ss:sparse setting} the assumption that every $\mb{x} \in \SS$ has a \emph{unique} representation $\mb{x} = \mb{D}{\boldsymbol \alpha}$ for which ${\boldsymbol \alpha}$ is in the set ${\mathcal T}$ of \eqref{eq:def T}. We denote by $\r(\cdot)$ the mapping from $\SS$ to ${\mathcal T}$ which returns this representation. In other words, $\r(\mb{x})$ is the unique vector in ${\mathcal T}$ for which \begin{equation} \mb{x} = \mb{D} \r(\mb{x}) \quad \text{and} \quad \|\r(\mb{x})\|_0 \le s. \end{equation} Note that while the mapping $\r$ is well-defined, actually calculating the value of $\r(\mb{x})$ for a given vector $\mb{x}$ is, in general, NP-hard. In the current setting, unlike the scenario of Section~\ref{ss:sparse}, it is always possible to construct an unbiased estimator. Indeed, even without imposing the constraint \eqref{eq:def S}, there exists an unbiased estimator. This is the LS or maximum likelihood estimator, given by \begin{equation} {\widehat{\x}} = (\mb{A}^T\mb{A})^{-1} \mb{A}^T \mb{y}. \end{equation} A standard calculation demonstrates that the covariance of ${\widehat{\x}}$ is \begin{equation} \label{eq:LS cov} \sigma^2 (\mb{A}^T\mb{A})^{-1}. \end{equation} On the other hand, the FIM for the setting \eqref{eq:y=Ax+w} is given by \begin{equation} \label{eq:deb J} \mb{J} = \frac{1}{\sigma^2} \mb{A}^T\mb{A}. \end{equation} Since $\mb{A}$ has full column rank, the FIM is invertible. Consequently, it is seen from \eqref{eq:LS cov} and \eqref{eq:deb J} that the LS approach achieves the CRB $\mb{J}^{-1}$ for unbiased estimators. This well-known property demonstrates that in the unconstrained setting, the LS technique is optimal among all unbiased estimators. The LS estimator, like any unbiased approach, is also $\SS$-unbiased. However, with the addition of the constraint $\mb{x}_0 \in \SS$, one would expect to obtain improved performance. It is therefore of interest to obtain the CRB for the constrained setting.
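As a sanity check on \eqref{eq:LS cov}, the covariance of the LS estimator can be verified by simulation. The following is a minimal NumPy sketch (the dimensions, seed, and noise level are arbitrary illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, sigma = 5, 20, 0.1              # illustrative sizes and noise level
A = rng.standard_normal((m, n))       # full column rank with probability 1
x0 = rng.standard_normal(n)

# LS / ML estimates over many independent noise realizations
trials = 20000
Y = A @ x0[:, None] + sigma * rng.standard_normal((m, trials))
pinvA = np.linalg.solve(A.T @ A, A.T)     # (A^T A)^{-1} A^T
est = (pinvA @ Y).T                       # one estimate per row

emp_cov = np.cov(est, rowvar=False)       # empirical covariance of the estimate
crb = sigma**2 * np.linalg.inv(A.T @ A)   # sigma^2 (A^T A)^{-1} = J^{-1}
gap = np.max(np.abs(emp_cov - crb))       # small relative to the CRB entries
```

Up to Monte Carlo error, the empirical covariance coincides with $\sigma^2(\mb{A}^T\mb{A})^{-1}$, illustrating that the LS estimator attains the unconstrained CRB.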
To this end, we first note that since $\mb{J}$ is invertible, we have $\Ra{{\U\U^T\J\U\U^T}} = \Ra{\mb{U}\U^T}$ for any $\mb{U}$, and consequently \eqref{eq:UUM in UUJUU} holds for any matrix $\mb{B}$. The bound \eqref{eq:th:crb} of Theorem~\ref{th:crb} thus applies regardless of the bias gradient matrix. For simplicity, in the following we derive the CRB for $\SS$-unbiased estimators. A calculation for arbitrary $\SS$-bias functions can be performed along similar lines. Consider first values $\mb{x} \in \SS$ such that $\|\r(\mb{x})\|_0 < s$. Then, $\|\r(\mb{x}) + \delta \mb{e}_i\|_0 \le s$ for any $\delta$ and for any $\mb{e}_i$. Therefore, \begin{equation} \mb{x} + \delta \mb{D} \mb{e}_i \in \SS \end{equation} for any $\delta$ and $\mb{e}_i$. In other words, the feasible directions include all columns of $\mb{D}$. Since it is assumed that $\mb{D}$ has full row rank, this implies that the feasible subspace $\mathcal F$ equals ${\mathbb R}^n$, and the matrix $\mb{U}$ of \eqref{eq:def U} can be chosen as $\mb{U}=\mb{I}$. \begin{figure*} \centerline{% \subfigure[]{% \includegraphics{plot_ssp_snr.eps} \label{fi:snr}} \hfil % \subfigure[]{% \includegraphics{plot_ssp.eps} \label{fi:spar}} } \caption{MSE of various estimators compared with the unbiased CRB \eqref{eq:crb T-unbiased}, for (a) varying SNR and (b) varying sparsity levels.} \label{fi:sim} \end{figure*} Next, consider values $\mb{x} \in \SS$ for which $\|\r(\mb{x})\|_0 = s$. Then, for sufficiently small $\delta>0$, we have $\|\r(\mb{x}) + \delta \v\|_0 \le s$ if and only if $\v = \mb{e}_i$ for some $i \in \supp(\r(\mb{x}))$. Equivalently, \begin{equation} \mb{x} + \delta \v \in \SS \text{ if and only if } \v = \mb{D}\mb{e}_i \text{ and } i \in \supp(\r(\mb{x})). \end{equation} Consequently, the feasible direction subspace in this case corresponds to the column space of the matrix $\mb{D}_\mb{x}$ containing the $s$ columns of $\mb{D}$ indexed by $\supp(\r(\mb{x}))$. 
From \eqref{eq:spark req D} we have $\spark(\mb{D})>s$, and therefore the columns of $\mb{D}_\mb{x}$ are linearly independent. Thus the orthogonal projector onto $\mathcal F$ is given by \begin{equation} \label{eq:def P} \P \triangleq \mb{U}\U^T = \mb{D}_\mb{x} (\mb{D}_\mb{x}^T \mb{D}_\mb{x})^{-1} \mb{D}_\mb{x}^T. \end{equation} Combining these calculations with Theorem~\ref{th:crb} yields the following result. \begin{theorem} \label{th:deblur} Consider the estimation setting \eqref{eq:y=Ax+w} with the constraint \eqref{eq:def S}, and suppose $\spark(\mb{D}) > 2s$. Let ${\widehat{\x}}$ be a finite-variance $\SS$-unbiased estimator. Then, \begin{align} \Cov({\widehat{\x}}) &\succeq \sigma^2 (\mb{A}^T \mb{A})^{-1} &\text{when } \|\r(\mb{x})\|_0 < s, \notag\\ \Cov({\widehat{\x}}) &\succeq \sigma^2 \left( \P \mb{A}^T\mb{A} \P \right)^\dagger &\text{when } \|\r(\mb{x})\|_0 = s. \label{eq:th:deblur} \end{align} Here, $\P$ is given by \eqref{eq:def P}, in which $\mb{D}_\mb{x}$ is the $n \times s$ matrix consisting of the columns of $\mb{D}$ participating in the (unique) $s$-element representation $\mb{D}{\boldsymbol \alpha}$ of $\mb{x}$. \end{theorem} As in Theorem~\ref{th:alf}, the bound exhibits a dichotomy between points having maximal and non-maximal support. In the former case, the CRB is equivalent to the bound obtained when the support set is known, whereas in the latter the bound is equivalent to an unconstrained CRB. This point is discussed further in Section~\ref{se:discuss}. \section{Numerical Results} \label{se:numer} In this section, we demonstrate the use of the CRB for measuring the achievable MSE in the sparse estimation problem \eqref{eq:y=H alf + w}. To this end, a series of simulations was performed. In each simulation, a random $100 \times 200$ dictionary $\H$ was constructed from a zero-mean Gaussian IID distribution, whose columns $\mb{h}_i$ were normalized so that $\|\mb{h}_i\|_2=1$. 
A parameter ${\boldsymbol \alpha}_0$ was then selected by choosing a support uniformly at random and selecting the nonzero elements as Gaussian IID variables with mean $0$ and variance $1$. Noisy measurements $\mb{y}$ were obtained from \eqref{eq:y=H alf + w}, and ${\boldsymbol \alpha}_0$ was then estimated using BPDN \eqref{eq:bpdn}, the DS \eqref{eq:ds}, and the GDS \eqref{eq:gds}. The regularization parameters were chosen as $\tau = 2\sigma \sqrt{\log p}$ and $\gamma = 4\sigma \sqrt{\log(p-s)}$, rules of thumb which are motivated by a theoretical analysis \cite{ben-haim09c}. The MSE of each estimate was then calculated by repeating this process with different realizations of the random variables. The unbiased CRB was calculated using \eqref{eq:crb T-unbiased}. In this case, the unbiased CRB equals the MSE of the oracle estimator \eqref{eq:def xo}, but as we will see below, interpreting \eqref{eq:crb T-unbiased} as a bound on unbiased estimators provides further insight into the estimation problem. A first set of experiments was conducted to examine the CRB at various SNR levels. In this simulation, the ML estimator \eqref{eq:ml} was also computed, in order to verify its convergence to the CRB at high SNR\@. Since the ML approach is computationally prohibitive when $p$ and $s$ are large, this necessitated the selection of the rather low support size $s=3$. The MSE and CRB were calculated for 15 SNR values by changing the noise standard deviation $\sigma$ between $1$ and $10^{-3}$. The MSE of the ML approach, as well as the other estimators of Section~\ref{ss:est techniques}, is compared with the CRB in Fig.~\ref{fi:snr}. The convergence of the ML estimator to the CRB is clearly visible in this figure. The performance of the GDS is also impressive, being as good as or better than the ML approach. Apparently, at high SNR, the DS tends to correctly recover the true support set, in which case GDS \eqref{eq:gds} equals the oracle \eqref{eq:def xo}.
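The core of the experimental setup described above, namely the dictionary generation, the sparse parameter, and the unbiased CRB \eqref{eq:crb T-unbiased}, can be sketched as follows. This is a minimal NumPy sketch: the BPDN, DS, and GDS estimators are omitted, and the oracle estimator is simulated instead to illustrate that its MSE matches the bound; the seed and trial count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
m, p, s, sigma = 100, 200, 3, 0.01    # sizes taken from the text

# Random Gaussian dictionary with unit-norm columns, as in the experiments
H = rng.standard_normal((m, p))
H /= np.linalg.norm(H, axis=0)

# Sparse parameter: uniformly random support, N(0, 1) nonzero entries
support = np.sort(rng.choice(p, size=s, replace=False))
Hs = H[:, support]
a0 = rng.standard_normal(s)

# Unbiased CRB: sigma^2 * tr((H_S^T H_S)^{-1}), with H_S the support columns
crb = sigma**2 * np.trace(np.linalg.inv(Hs.T @ Hs))

# Oracle estimator: LS restricted to the (known) true support
trials = 5000
Y = Hs @ a0[:, None] + sigma * rng.standard_normal((m, trials))
est = np.linalg.solve(Hs.T @ Hs, Hs.T @ Y)            # s x trials
mse = np.mean(np.sum((est - a0[:, None])**2, axis=0))  # agrees with crb
```

Up to Monte Carlo error, the oracle MSE equals the unbiased CRB, consistent with the interpretation given above.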
Perhaps surprisingly, applying an LS estimate on the support set obtained by BPDN (which could be called a ``Gauss--BPDN'' strategy) does not work well at all, and in fact results in higher MSE than a direct application of BPDN. (The results for the Gauss--BPDN method are not plotted in Fig.~\ref{fi:sim}.) Note that some estimation techniques outperform the oracle MSE (or CRB) at low SNR\@. It may appear surprising that a practical technique such as the DS outperforms the oracle. The explanation is that the CRB \eqref{eq:crb T-unbiased} is a lower bound on the MSE of \emph{unbiased} estimators. The bias of most estimators tends to be negligible in low-noise settings, but often increases with the noise variance $\sigma^2$. Indeed, when $\sigma^2$ is as large as $\|{\boldsymbol \alpha}_0\|_2^2$, the measurements carry very little useful information about ${\boldsymbol \alpha}_0$, and an estimator can improve performance by shrinkage. Such a strategy, while clearly biased, yields lower MSE than a naive reliance on the noisy measurements. This is indeed the behavior of the DS and BPDN, since for large $\sigma^2$, the $\ell_1$ regularization becomes the dominant term, resulting in heavy shrinkage. Consequently, it is to be expected that such techniques will outperform even the best unbiased estimator at low SNR, as indeed occurs in Fig.~\ref{fi:snr}. The performance of the estimators of Section~\ref{ss:est techniques}, excluding the ML method, was also compared for varying sparsity levels. To this end, the simulation was repeated for 15 support sizes in the range $1 \le s \le 30$, with a constant noise standard deviation of $\sigma = 0.01$. The results are plotted in Fig.~\ref{fi:spar}. While a substantial gap exists between the CRB and the MSE of the practical estimators in this case, both exhibit a similar rate of increase as $s$ grows.
Interestingly, a drawback of the GDS approach is visible in this setting: as $s$ increases, correct support recovery becomes more difficult, and shrinkage becomes a valuable asset for reducing the sensitivity of the estimate to random measurement fluctuations. The LS approach employed by the GDS, which does not perform shrinkage, leads to a gradual performance deterioration. Results similar to Fig.~\ref{fi:sim} were obtained for a variety of related estimation scenarios, including several deterministic, rather than random, dictionaries $\H$. \section{Discussion} \label{se:discuss} In this paper, we extended the CRB to constraint sets satisfying the local balance condition (Theorem~\ref{th:crb}). This enabled us to derive lower bounds on the achievable performance in various estimation problems (Theorems \ref{th:alf} and~\ref{th:deblur}). In simple terms, Theorems \ref{th:alf} and~\ref{th:deblur} can be summarized as follows. The behavior of the CRB differs depending on whether or not the parameter has maximal support (i.e., $\|{\boldsymbol \alpha}\|_0 = s$). In the case of maximal support, the bound equals that which would be obtained if the sparsity pattern were known; this can be considered an ``oracle bound''. On the other hand, when $\|{\boldsymbol \alpha}\|_0 < s$, performance is identical to the unconstrained case, and the bound is substantially higher. We now discuss some practical implications of these conclusions. To simplify the discussion, we consider the case of unbiased estimators, though analogous conclusions can be drawn for any bias function. When $\|{\boldsymbol \alpha}\|_0 = s$ and all nonzero elements of ${\boldsymbol \alpha}$ are considerably larger than the standard deviation of the noise, the support set can be recovered correctly with high probability (at least if computational considerations are ignored). Thus, in this case an estimator can mimic the behavior of the oracle, and the CRB is expected to be tight.
Indeed, in the high SNR limit, the ML estimator achieves the unbiased CRB\@. On the other hand, when the support of ${\boldsymbol \alpha}$ is not maximal, the unbiasedness requirement demands sensitivity to changes in all components of ${\boldsymbol \alpha}$, and consequently the bound coincides with the unconstrained CRB\@. Thus, as claimed in Section~\ref{se:crb}, in underdetermined cases no estimator is unbiased for all ${\boldsymbol \alpha} \in \SS$. An interesting observation can also be made concerning maximal-support points ${\boldsymbol \alpha}$ for which some of the nonzero elements are close to zero. The CRB in this ``low-SNR'' case corresponds to the oracle MSE, but as we will see, the bound is loose for such values of ${\boldsymbol \alpha}$. Intuitively, at low-SNR points, any attempt to recover the sparsity pattern will occasionally fail. Consequently, despite the optimistic CRB, it is unlikely that the oracle MSE can be achieved. Indeed, the covariance matrix of any finite-variance estimator is a continuous function of ${\boldsymbol \alpha}$ \cite{lehmann98}, and the fact that performance is bounded by the (much higher) unconstrained bound when $\|{\boldsymbol \alpha}\|_0 < s$ implies that performance must be similarly poor for low SNR\@. This excessive optimism is a result of the local nature of the CRB\@: The bound is a function of the estimation setting only in an $\varepsilon$-neighborhood of the parameter itself. Indeed, the CRB depends on the constraint set only through the feasible directions, which were defined in Section~\ref{ss:loc bal} as those directions which do not violate the constraints for \emph{sufficiently small} deviations. Thus, for the CRB, it is entirely irrelevant if some of the components of ${\boldsymbol \alpha}$ are close to zero, as long as $\supp({\boldsymbol \alpha})$ is held constant. 
A tighter bound for sparse estimation problems may be obtained using the Hammersley--Chapman--Robbins (HCR) approach \cite{Hammersley50, ChapmanRobbins51, gorman90}, which depends on the constraints at points beyond the local neighborhood of $\mb{x}$. Such a bound is likely to yield tighter results for low SNR values, and to create a smooth transition between the regions of maximal and non-maximal support. However, the bound will depend on more complex properties of the estimation setting, such as the distance between $\mb{D}{\boldsymbol \alpha}$ and feasible points with differing supports. The derivation of such a bound is a subject for further research. \section*{Acknowledgement} The authors would like to thank Yaniv Plan for helpful discussions. The authors are also grateful to the anonymous reviewers for their comments, which considerably improved the presentation of the paper. \bibliographystyle{IEEEtran}
https://arxiv.org/abs/0905.4378
The Cramer-Rao Bound for Sparse Estimation
The goal of this paper is to characterize the best achievable performance for the problem of estimating an unknown parameter having a sparse representation. Specifically, we consider the setting in which a sparsely representable deterministic parameter vector is to be estimated from measurements corrupted by Gaussian noise, and derive a lower bound on the mean-squared error (MSE) achievable in this setting. To this end, an appropriate definition of bias in the sparse setting is developed, and the constrained Cramer-Rao bound (CRB) is obtained. This bound is shown to equal the CRB of an estimator with knowledge of the support set, for almost all feasible parameter values. Consequently, in the unbiased case, our bound is identical to the MSE of the oracle estimator. Combined with the fact that the CRB is achieved at high signal-to-noise ratios by the maximum likelihood technique, our result provides a new interpretation for the common practice of using the oracle estimator as a gold standard against which practical approaches are compared.
https://arxiv.org/abs/1301.1287
Geometric Error of Finite Volume Schemes for Conservation Laws on Evolving Surfaces
This paper studies finite volume schemes for scalar hyperbolic conservation laws on evolving hypersurfaces of $\mathbb{R}^3$. We compare theoretical schemes assuming knowledge of all geometric quantities to (practical) schemes defined on moving polyhedra approximating the surface. For the former schemes error estimates have already been proven, but the implementation of such schemes is not feasible for complex geometries. The latter schemes, in contrast, only require (easily) computable geometric quantities and are thus more useful for actual computations. We prove that the difference between approximate solutions defined by the respective families of schemes is of the order of the mesh width. In particular, the practical scheme converges to the entropy solution with the same rate as the theoretical one. Numerical experiments show that the proven order of convergence is optimal.
\section{Introduction} Hyperbolic conservation laws serve as models for a wide variety of applications in continuum dynamics. In many applications the physical domains of these problems are stationary or moving hypersurfaces. Examples of the former include, in particular, geophysical problems \cite{WDHJS92} and magnetohydrodynamics in the tachocline of the sun \cite{Gil00,SBG01}. Examples of the latter include transport processes on cell surfaces \cite{RS05}, surfactant flow on interfaces in multiphase flow \cite{BPS05} and petrol flow on a time dependent water surface. There are several recent approaches to the numerical computation of such equations. Numerical schemes for the shallow water equations on a rotating sphere can be found in \cite{CHL06,Gir06,Ros06}. For the simulation of surfactant flow on interfaces we refer to \cite{AB09,BS10,JL04}. As we are interested in numerical analysis, we focus on nonlinear scalar conservation laws as a model for these systems. The intensive study of conservation laws posed on fixed Riemannian manifolds began only in recent years. There are results on the well-posedness \cite{BL06,DKM13,LM13} of the differential equations and on the convergence of appropriate finite volume schemes \cite{ABL05,Gie09,GW12,LON09}. For recent developments on finite volume schemes for parabolic equations we refer to \cite{LNR11}. In the previous error analysis for finite volume schemes approximating nonlinear conservation laws on manifolds, the schemes were defined on curved elements lying on the curved surface and it was assumed that geometric quantities like lengths, areas and conormals are known exactly. While this is a reasonable assumption for schemes defined on general Riemannian manifolds or even more general structures \cite{LO08.2} with no ambient space, most engineering applications involve equations on hypersurfaces of $\mathbb R^3$ and one aims at computing the geometry with as little effort as possible.
This is particularly important for moving surfaces, where the geometric quantities have to be computed in each time step. The question thus arises to what extent an approximation of the geometry influences the order of convergence of the scheme. We consider the following initial value problem, posed on a family of closed, smooth hypersurfaces $\Gamma=\Gamma(t) \subset \mathbb R^3$. For a derivation, cf. \cite{DE07.2,Sto90}. For some $T>0$, find $u: G_T := \bigcup_{t \in [0,T]}\Gamma(t) \times \{t\} \rightarrow\mathbb R$ with \begin{align} \dot{u} + u \nabla_\Gamma \cdot v+ \nabla_\Gamma \cdot \mathbf{f}(u,\cdot,\cdot) &= 0 & &\text{in } G_T, \label{eq:ConLaw}\\ u(\cdot,0)&=u_0 & &\text{on } \Gamma(0),\label{eq:IVConLaw} \end{align} where $v$ is the velocity of the material points of the surface and $u_0:\Gamma(0)\rightarrow\mathbb R$ are initial data. For every $\bar u\in \mathbb R,\ t\in[0,T]$ the flux $\mathbf{f}(\bar u,\cdot,t)$ is a smooth vector field tangential to $\Gamma(t),$ which depends Lipschitz continuously on $\bar u$ and smoothly on $t$. Moreover, we impose the following growth condition \begin{equation}\label{eq:growth} |\nabla_{\Gamma} \cdot \mathbf{f} (\bar u,x,t) | \leq c + c |\bar u| \quad \forall \, \bar u \in \mathbb R , (x,t) \in G_T\end{equation} for some constant $c>0$. By $\dot{u}$ we denote the material derivative of $u$, which is given by \[ \dot{u}(\Phi_t(x),t) := \frac{d}{dt} u(\Phi_t(x),t),\] where $\Phi_t : \Gamma(0) \rightarrow \Gamma(t)$ is a family of diffeomorphisms depending smoothly on $t$, such that $\Phi_0$ is the identity on $\Gamma(0).$ Obviously, this excludes changes of the topology of $\Gamma.$ We will assume that the movement of the surface and also the family $\Phi_t$ are prescribed. A main result of this paper is a bound for the difference between two approximations of $u$. In particular, we will give an estimate for the difference between the flat approximate and the curved approximate solution.
By curved approximate solution we refer to a numerical solution given by a finite volume scheme defined on the curved surface, cf. Section \ref{subsec:fv_curved}, and by flat approximate solution we refer to a numerical solution given by a finite volume scheme defined on a polyhedron approximating the surface, cf. Section \ref{subsec:fv_flat}. We will see that the geometry errors that arise can be neglected compared to the error between the curved approximate solution and the exact solution, i.e., both approximate solutions converge to the entropy solution with the same convergence rate. We will present numerical examples showing that the proven convergence rate is optimal under the assumptions for the numerical analysis. However, for most numerical experiments we observe higher orders of convergence. Our analysis also indicates that the geometry error poses an obstacle to the construction of higher order schemes. To this end, we perform numerical experiments illustrating how the order of convergence of a higher order scheme is restricted by the approximation of the geometry. This shows that, in order to obtain higher order convergence, the geometry of the manifold also has to be approximated more accurately; cf. \cite{Dem09} for analogous observations in a finite element context. The outline of this paper is as follows. In Section \ref{sec:fvs} we review the definition of finite volume schemes on moving curved surfaces and define finite volume schemes on moving polyhedra approximating the surfaces. The approximation errors for geometric quantities are established in Section \ref{sec:geom}. Section \ref{sec:mr} is devoted to estimating the difference between the curved and the flat approximate solution. Finally, numerical experiments are given in Section \ref{sec:numerics}.
\section{The Finite Volume Schemes}\label{sec:fvs} This section is devoted to the construction of a family of triangulations $\mathcal T _h(t)$ of the surfaces suitably linked to polyhedral approximations $\Gamma_h(t)$ of the surfaces. Afterwards, we will recall the definition of a finite volume scheme on $\mathcal T _h(t)$ which was considered in the error analysis carried out so far, and define a finite volume scheme on $\Gamma_h(t)$, an algorithm relying only on easily computable quantities. We mention that our triangulation as well as the definition of the finite volume scheme on $\Gamma_h$ is in the same spirit as the one Lenz et al. \cite{LNR11} used for the diffusion equation on evolving surfaces. \subsection{Triangulation} We start by mentioning that there are neighbourhoods $\mathcal N(t)\subset \mathbb R^3$ of $\Gamma(t)$ such that for every $x\in \mathcal N(t)$ there is a unique point $a(x,t)\in \Gamma(t)$ with \begin{equation} x = a(x,t) + d(x,t) \nu_{\Gamma(t)}(a(x,t)), \label{eq:projection} \end{equation} where $d(\cdot,t)$ denotes the signed distance function to $\Gamma(t)$ and $\nu_{\Gamma(t)}(a(x,t))$ the unit normal vector to $\Gamma(t)$ pointing towards the non-compact component of $\mathbb R^3\setminus \Gamma(t)$. See \cite{DE07} for example. Let us choose a polyhedral surface $\Gamma_h(0) \subset \mathcal N(0)$ which consists of flat triangles such that the vertices of $\Gamma_h(0)$ lie on $\Gamma(0),$ and $h$ is the length of the longest edge of $\Gamma_h(0).$ In addition, we impose that the restriction $a|_{\Gamma_h(0)}: \Gamma_h(0) \rightarrow \Gamma(0)$ is one-to-one. We define $\Gamma_h(t)$ as the polyhedral surface that is constructed by moving the vertices of $\Gamma_h(0)$ via the diffeomorphism $\Phi_t$ and connecting them with straight lines such that all triangulations share the same grid topology. A triangulation $\bar {\mathcal{ T}} _h(t)$ of $\Gamma_h(t)$ is automatically given by the decomposition into faces.
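As a concrete illustration of the projection \eqref{eq:projection}, consider the case where $\Gamma$ is the unit sphere, for which the signed distance function and the closest-point map are available in closed form: $d(x)=\|x\|-1$, $a(x)=x/\|x\|$, and $\nu_\Gamma(a)=a$. The following minimal sketch (assuming NumPy; not part of the scheme itself) verifies the decomposition $x = a(x) + d(x)\,\nu_{\Gamma}(a(x))$ numerically:

```python
import numpy as np

# For Gamma = unit sphere: d(x) = |x| - 1, nu(a) = a, a(x) = x/|x|
def signed_distance(x):
    return np.linalg.norm(x, axis=-1) - 1.0

def closest_point(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

rng = np.random.default_rng(2)
# Sample points in a tubular neighbourhood of the sphere (radii in [0.7, 1.3])
dirs = rng.standard_normal((1000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
x = rng.uniform(0.7, 1.3, size=(1000, 1)) * dirs

a = closest_point(x)
d = signed_distance(x)
recon = a + d[:, None] * a          # a(x) + d(x) * nu(a(x)), with nu(a) = a
err = np.max(np.abs(recon - x))     # zero up to rounding error
```

For general surfaces the closest-point map is of course not available in closed form, which is precisely why the practical scheme below avoids it.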
We define the triangulation $\mathcal T _h(t)$ on $\Gamma(t)$ as the image of $\bar {\mathcal{ T}} _h(t)$ under $a(\cdot,t)|_{\Gamma_h(t)}.$ We will denote the curved cells by $K(t)$ and the curved faces by $e(t)$. A flat quantity corresponding to some curved quantity is denoted by the same letter and a bar, e.g.\ if $e(t) \subset \Gamma(t)$ is a curved face, then $\bar e(t) = (a(\cdot,t)|_{\Gamma_h(t)})^{-1}(e(t)).$ In order to reflect the fact that all triangulations share the same grid topology, we introduce the following abuse of notation. We denote by $K$ the family of all curved triangles relating to the same triangle $\bar K(0)$ on $\Gamma_h(0).$ We do the same for $e, \bar K, \bar e.$ Analogously by $\mathcal T _h$ we denote the family of such families of triangles $K.$ For later use we state the following lemma summarizing geometric properties, whose derivation can be found in \cite{DE07}. \begin{lemma} Let $\Gamma_h(t)$ be a polyhedral approximation of $\Gamma(t)$ as described above. Then there exists $C=C(T)$ such that for all $t \in [0,T]$ \begin{enumerate} \item $\nu_{\Gamma(t)} = \nabla d(\cdot, t),$ \item $\| d(\cdot,t)|_{\Gamma_h(t)} \|_{L^\infty(\Gamma_h(t))} \leq C h^2$. \end{enumerate} \label{lemma:dziuk-elliott} \end{lemma} We will use the following notation. By $h_{K(t)} := \operatorname{diam}(K(t))$ we denote the diameter of each cell; furthermore, $h:= \max_{t \in [0,T]} \max_{K(t)}h_{K(t)}$ and $\abs{K(t)},$ $\abs{\partial K(t)}$ are the Hausdorff measures of $K(t)$ and the boundary of $K(t)$, respectively. When we write $e(t)\subset \partial K(t)$ we mean that $e(t)$ is a face of $K(t)$. We need to impose the following assumption uniformly on all triangulations $\bar {\mathcal{ T}} _h(t).$ There is a constant $\alpha >0$ such that for each flat cell $\bar K(t) \in \bar {\mathcal{ T}} _h(t)$ we have \begin{align} \begin{split} \alpha h_{\bar K(t)}^2 \leq \abs{\bar K(t)}, \\ \alpha \abs{\partial \bar K(t)} \leq h_{\bar K(t)}.
\end{split} \label{ineq:angle-assumption} \end{align} Later on, we will see that \eqref{ineq:angle-assumption} implies the respective estimate for the curved triangulation, cf. Remark \ref{rem:angles}. A consequence of \eqref{ineq:angle-assumption} is that $2 \alpha^2 h_{\bar K(t)}$ is a lower bound for the radius of the inscribed circle of $\bar K(t)$, which implies that the sizes of the angles in $\bar K(t)$ are bounded from below. Furthermore, we denote by $\kappa(x,t)$ the spectral norm of $\nabla \nu_{\Gamma(t)}(x).$ By straightforward continuity and compactness arguments, $\kappa$ is uniformly bounded in space and time. \subsection{The Finite Volume Scheme on Curved Elements}\label{subsec:fv_curved} In this section we will briefly review the notion of finite volume schemes on moving curved surfaces. We consider a sequence of times $0=t_0< t_1 < t_2 < \dots$ and set $I_n:= [t_n,t_{n+1}].$ Moreover, we assign to each $n \in \mathbb N$ and $K \in \mathcal T _h$ the value $u^n_K$ approximating the mean value of $u$ on $\bigcup_{t \in I_n} K(t)\times \{t\}$ and to each $K \in \mathcal T _h$ and face $e \subset \partial K$ a numerical flux function $f^n_{K,e} : \mathbb R^2 \rightarrow \mathbb R$, which should approximate the time--space average \begin{equation}\label{curv_num_flux_approximates} \frac{1}{|I_n|} \int_{I_n} \frac{1}{|e(t)|} \int_{e(t)} \langle f(u(x,t),x,t), \mu_{K,e}(x,t)\rangle \, de(t) \, dt, \end{equation} where $de(t)$ is the line element, $\mu_{K,e}(x,t)$ is the unit conormal to $e(t)$ pointing outwards from $K(t)$ and $\langle \cdot, \cdot \rangle $ is the standard Euclidean inner product.
Note that $\mu_{K,e}(t)$ is tangential to $\Gamma(t).$ Then the finite volume scheme is given by \begin{align}\begin{split}\label{eq:FVcurv} u^0_K &:= \frac{1}{|K(0)|}\int_{K(0)} u_0(x)\, d\Gamma(0),\\ u^{n+1}_K&:= \frac{|K(t_n)|}{|K(t_{n+1})|} u^n_K - \frac{\abs{I_n}}{|K(t_{n+1})|} \sum_{e \subset \partial K} |e(t_n)| f^n_{K,e}(u_K^n,u_{K_e}^n),\\ u^h(x,t)&:= u^n_K \quad \text{ for } t \in [t_n,t_{n+1}), x \in K(t),\end{split} \end{align} where $K_e$ denotes the cell sharing face $e$ with $K$ and $d\Gamma(0)$ is the surface element. For the convergence analysis it was usually assumed \cite{Gie09,LON09} that the numerical fluxes used are uniformly Lipschitz continuous, consistent, conservative and monotone. Additionally, the CFL condition \begin{equation}\label{cfl}t_{n+1}-t_n \leq \frac{\alpha^2 h}{8L}\end{equation} has to be imposed to ensure stability, where $L$ is the Lipschitz constant of the numerical fluxes. Lax-Friedrichs fluxes satisfying this condition are usually defined by \begin{multline} \label{curv_num_flux} f^n_{ K,e} (u,v) := \frac{1}{|I_n|}\int_{I_n} \frac{1}{2|e(t)|} \int_{e(t)} \langle f(u,\cdot,t)+ f(v,\cdot,t), \mu_{ K, e}(t) \rangle\, de(t)\, dt + \lambda (u-v), \end{multline} where $\lambda= \frac{1}{2}\|\partial_u f \|_\infty$ is an artificial viscosity coefficient ensuring the monotonicity of $f^n_{K,e}$ and stabilizing the scheme. \subsection{The Finite Volume Scheme on Flat Elements} \label{subsec:fv_flat} In this section we define a finite volume scheme on $\bar {\mathcal{ T}} _h$ which is in the same spirit as \eqref{eq:FVcurv} but only relies on easily accessible geometrical information. We assume that $f$ is smoothly extended from $G_T$ to the whole of $\bigcup_{t\in [0,T]} \mathcal{N}(t) \times \{t\}.$ We want to point out that the calculation of areas and lengths is straightforward for flat elements.
Likewise, integrals can be approximated by quadrature formulas after mapping cells and edges to a standard triangle and the unit interval, respectively, via affine linear maps. In this fashion, we obtain for every time $t \in [0,T]$ quadrature operators $Q_{\bar K(t)}: C^0(\bar K(t)) \rightarrow \mathbb R,$ and $ Q_{\bar e(t)}: C^0(\bar e(t)) \rightarrow \mathbb R$ of order $p_1,p_2 \in \mathbb N,$ respectively. In addition, for any compact interval $I \subset [0,T]$, the term $Q_I: C^0(I) \rightarrow \mathbb R$ denotes a quadrature operator of order $p_3 \in \mathbb N.$ Before we can use the quadrature operators to define numerical fluxes, we need to determine the ``discrete'' conormals. To each flat triangle $\bar K(t)$ we fix a unit normal $\bar \nu_{\bar K(t)}$ by imposing \begin{equation}\label{**} \langle\bar \nu_{\bar K(t)}, \nu_{\Gamma(t)}(y)\rangle >0, \end{equation} where $y$ is the barycentre of $K(t).$ We will see in Lemma \ref{lem:angles} that $\bar \nu_{\bar K(t)}$ converges to $\nu_{\Gamma(t)}(y)$ as $h \rightarrow 0.$ To each face $\bar e(t)$ and adjacent cell $\bar K(t)$ there is a unique unit tangent vector $\bar {\bf t}_{\bar K(t),\bar e(t)}$ such that $\bar \nu_{\bar K(t)} \times \bar {\bf t}_{\bar K(t), \bar e(t)} $ is a conormal to $\bar e(t)$ pointing outward from $\bar K(t).$ Hence this vector product is one candidate for $\bar \mu_{\bar K(t),\bar e(t)}.$ However, in general \begin{equation}\label{lack_of_conserv} \bar \nu_{\bar K(t)} \times \bar {\bf t}_{\bar K(t),\bar e(t)} \not= \pm (\bar \nu_{{\bar K}_{\bar e}(t)} \times \bar {\bf t}_{\bar K_{\bar e}(t),\bar e(t)}) \end{equation} so that a choice like \[ \bar \mu_{\bar K(t),\bar e(t)} =\bar \nu_{\bar K(t)} \times \bar {\bf t}_{\bar K(t),\bar e(t)}\] would lead to a loss of conservativity of the resulting numerical fluxes.
Therefore, we choose \[ \bar \mu_{\bar K(t), \bar e(t)}:= \frac{1}{2} \left( \bar \nu_{\bar K(t)} \times \bar {\bf t}_{\bar K(t),\bar e(t)} + \bar \nu_{{\bar K}_{\bar e}(t)} \times \bar {\bf t}_{\bar K(t),\bar e(t)}\right).\] We define a numerical Lax-Friedrichs flux and a finite volume scheme: \begin{align}\begin{split} \label{eq:FVflat} \bar f^n_{\bar K,\bar e} (u,v) &:= \frac{1}{|I_n|} Q_{I_n}\left[\frac{1}{2\abs{\bar e(\cdot)}} Q_{\bar e(\cdot)}\left( \langle f(u,\cdot,\cdot)+ f(v,\cdot,\cdot), \bar \mu_{\bar K(\cdot), \bar e(\cdot)}\rangle \right)\right] \\ &\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad + \lambda (u-v),\\ \bar u^0_{\bar K} &:= \frac{1}{|\bar K(0)|} Q_{\bar K(0)} (u_0),\\ \bar u^{n+1}_{\bar K}&:=\frac{|\bar K(t_n)|}{|\bar K(t_{n+1})|} \bar u^n_{\bar K} - \frac{|I_n|}{|\bar K(t_{n+1})|}\sum_{\bar e \subset\partial \bar K} |\bar e(t_n)| \bar f^n_{\bar K,\bar e}(\bar u_{\bar K}^n,\bar u_{\bar K_{\bar e}}^n),\\ \bar u^h(x,t)&:= \bar u^n_{\bar K}, \quad \text{ for } t \in [t_n,t_{n+1}),\ x \in K(t), \end{split}\end{align} for some sufficiently large $\lambda \geq 0.$ Note that by \eqref{eq:FVflat}$_4$ the function $\bar u^h$ is defined on $G_T.$ \section{Geometrical Estimates}\label{sec:geom} In this section we derive estimates for the approximation errors of the geometric quantities. Throughout this section we suppress the time dependence of all quantities. All the estimates can be derived uniformly in time. To obtain the geometrical estimates, we introduce the following lift operator. \begin{definition} Let $\bar U \subset \Gamma_h$ and let $\bar g$ be a function on $\bar U$. Then we define a function $\bar g^l$ on $a|_{\Gamma_h}(\bar U)$ by \[ \bar g^l= \bar g \circ a|_{\Gamma_h}^{-1}.\] Similarly, we define the inverse of this lift operator by \[ g^{-l}= g \circ a|_{\Gamma_h}\] for a function $g$ defined on some $ U \subset \Gamma$.
\end{definition} We begin our investigation with the differences between the normal vectors of the flat and the curved elements. \begin{lemma}\label{lem:angles} There is a constant $C$ such that for all flat cells $\bar K$ and every $y \in \bar K$ we have \begin{align} \label{norm_est} \left\| \nu_\Gamma^{-l}(y) - \bar \nu_{\bar K} \right\| &\leq Ch. \end{align} The constant $C$ depends on derivatives of $d,$ in particular on $\kappa.$ \end{lemma} \textit{Proof.} WLOG we can assume that $\bar K $ is a subset of $\{ (x,y,0) \in \mathbb R^3 \, | \, y<0\}$ such that $\bar e= \{ (s,0,0) \in \mathbb R^3 \, |\, s \in [0,h_{\bar e}]\}$ is one of its faces and $(\nu_\Gamma)^{-l}_3(y)>0$ for some $y \in \bar K$. We start by showing that there exists some constant $C>0$ such that \begin{equation}\label{nuk_est} |(\nu_\Gamma)_i|\leq Ch, \quad \text{for } i=1,2.\end{equation} We recall that $\nu_\Gamma=\nabla d,$ where $d$ is the signed distance function to $\Gamma.$ As the vertices of $\Gamma_h$ lie on $\Gamma$ we know that there exists $(x,y,0) \in \bar K$ such that \[ d(0,0,0)=0,\ d(h_{\bar e},0,0)=0, \ d(x,y,0)=0.\] Hence, by Rolle's theorem the directional derivatives of $d$ with respect to $(x,y,0)$ and $(1,0,0)$ need to vanish somewhere in $\bar K.$ Thus, as the second derivatives of $d$ are bounded, their absolute values are of order $\mathcal O(h)$ on $\bar K.$ Due to the angle condition \eqref{ineq:angle-assumption} an analogous inequality also holds for the directional derivative of $d$ with respect to $(0,1,0)$. As the directional derivatives of $d$ with respect to $(1,0,0)$ and $(0,1,0)$ coincide with $(\nu_\Gamma)_1$ and $(\nu_\Gamma)_2,$ respectively, this proves \eqref{nuk_est}. This immediately implies $(\nu_\Gamma)_3 =\pm \sqrt{1- \mathcal O(h^2)} = \pm 1 + \mathcal O(h^2).$ By assumption $(\nu_\Gamma)_3 = 1 + \mathcal O(h^2)$ everywhere and by \eqref{**} we have $\bar \nu_{\bar K}=(0,0,1),$ which proves \eqref{norm_est}.
\qed \begin{lemma} \label{lem:eK} For the difference between the length of a curved edge $e$ and the corresponding flat edge $\bar e$ we have \begin{equation} \abs{\frac{\abs{e}}{\abs{\bar e }} - 1 } \leq C h^2, \label{eq:error-e} \end{equation} and for the difference between the area of a curved cell $K$ and the corresponding flat cell $\bar K$ we have \begin{equation} \abs{\frac{\abs{K}}{\abs{\bar K }} - 1 } \leq C h^2, \label{eq:error-K} \end{equation} where $C$ does not depend on $h$ but on $\kappa.$ Furthermore, let $c_e$ be the parametrization of $e$ over $\bar e$ given by $a|_{\bar e}$; then we have \begin{equation}\label{le:normc} \abs{ \norm{c_e^\prime(s)}- 1}\leq C h^2. \end{equation} \end{lemma} \textit{Proof.} We assume without loss of generality that $\bar K \subset \mathbb R^2 \times \{0\}$. For small enough $h$ we can parametrize the curved cell $K$ according to \eqref{eq:projection} by a parametrization $c = a|_{\bar K} : \bar K \rightarrow K \subset \mathbb R^3$ with \begin{equation*} c(x_1,x_2) = (x_1,x_2,0) - d(x_1,x_2,0) \nu_\Gamma(c(x_1,x_2)), \label{eq:c-K} \end{equation*} where we suppressed the third coordinate in $\bar K$. The ratio of the volume elements of $K$ and $\bar K$ with respect to the parametrization $c$ is given by \begin{equation*} \sqrt{\abs {g}}:=\sqrt{\det(g)}, \end{equation*} where the matrix $g$ is defined by \begin{equation*} g = \left(g_{ij}\right)_{1\leq i,j \leq 2} : = \left(\left\langle \partial_i c, \partial_j c\right\rangle\right)_{1\leq i,j \leq 2}. \end{equation*} For the parametrization $c$ of $K$ we have \begin{equation*} \partial_i c = e_i - \left\langle \nabla d , e_i \right\rangle \nu_\Gamma\circ c - d \; \partial_i c\; \left( \nabla \nu_\Gamma\right)^T \circ c \quad\text{for } i=1,2, \end{equation*} where $e_i$ denotes the $i$-th standard unit vector.
Due to the bounded curvature of $\Gamma$ and Lemma \ref{lemma:dziuk-elliott} we can show that \begin{equation}\label{eq:partialc} \partial_i c = e_i - ((\nu_\Gamma)_i \nu_\Gamma)\circ c + \mathcal O ( h^2) \quad\text{for } i=1,2. \end{equation} Applying \eqref{norm_est} we see that \begin{equation*} \nu_\Gamma= \pm(0,0,1) + \mathcal O( h) \text{ and } \left\langle e_i,\nu_\Gamma\right\rangle =(\nu_\Gamma)_i= \mathcal O (h)\quad\text{for } i=1,2. \end{equation*} Thus, for the matrix $g$ we have \begin{equation*} g = \begin{pmatrix} 1+ \mathcal O ( h^2) & \mathcal O ( h^2)\\ \mathcal O ( h^2) & 1+ \mathcal O ( h^2) \end{pmatrix} \end{equation*} which implies for the volume element \begin{equation} dK= \sqrt{\abs {g}}d\bar K = \sqrt{1 + \mathcal O (h^2)}d\bar K = d \bar K + \mathcal O (h^2)d\bar K. \end{equation} Therefore, we arrive at \begin{align*} \abs{\abs{K} - \abs{\bar K } } = \abs{\int_{\bar K} \sqrt{\abs {g}} - 1d\bar K } \leq C \abs{\bar K } h^2 \end{align*} for the error of the cell area which proves \eqref{eq:error-K}. To prove \eqref{eq:error-e} and \eqref{le:normc} we consider WLOG an edge $\bar e = \set{(s,0,0)| 0\leq s \leq h_{\bar e}} \subset \partial\bar K$, where $h_{\bar e}$ denotes the length of $\bar e $. The corresponding curved edge $e$ is parametrized by \begin{equation} c_{ e}(s) = c(s,0) = (s,0,0) - d(s,0,0) \nu_\Gamma(c_{ e}(s)). \label{eq:c-e} \end{equation} Due to the bounded curvature of $\Gamma$ we get for the derivative \begin{equation} c_{ e}^\prime(s) = (1,0,0) - \nu_\Gamma(c_{e}(s)) (\nu_\Gamma)_1(c_{e}(s)) + \mathcal O( h^2). \label{eq:dc-e} \end{equation} Applying \eqref{norm_est} we get \begin{align} \label{normc} \norm{c_e^\prime(s)} = 1 + \mathcal O(h^2) \end{align} and therefore \begin{align*} \abs{\abs{e} - \abs{\bar e } } = \abs{\int_0^{h_{\bar e}} \norm{ c_e^\prime(s) } - 1 \; ds } \leq C \abs{\bar e } h^2. 
\end{align*} \qed \begin{remark} Let us note that an estimate for curved elements analogous to \eqref{ineq:angle-assumption} is an easy consequence of \eqref{ineq:angle-assumption}, \eqref{eq:error-e}, \eqref{eq:error-K} and the fact that $|h_{\bar K}-h_K | \leq C h^2,$ which is a consequence of Lemma \ref{lem:eK}. \label{rem:angles} \end{remark} \begin{lemma}\label{lem:angles1} There is a constant $C$ (depending on $\kappa$) such that for all flat cells $\bar K,$ all flat edges $\bar e \subset \partial \bar K$ and every $x \in \bar e$ we have \begin{align} \label{conorm_est1} \left|\langle \bar \mu_{\bar K,\bar e} , {\bf t}^{-l}(x) \rangle \right| &\leq C h^2,\\ \label{conorm_est2} \left|\langle \bar \mu_{\bar K,\bar e} , \nu_\Gamma^{-l}(x) \rangle \right| &\leq C h,\\ \label{conorm_est3} \left|\langle \bar \mu_{\bar K,\bar e} , \mu_{K,e}^{-l}(x) \rangle - 1\right| &\leq C h^2, \end{align} where ${\bf t}$ denotes a unit tangent vector to $e$. We point out that these estimates are independent of the sign of ${\bf t}.$ \end{lemma} \textit{Proof.} It is sufficient to show versions of \eqref{conorm_est1}--\eqref{conorm_est3} in which $\bar \mu_{\bar K,\bar e}$ is replaced by $ \bar \nu_{\bar K} \times \bar {\bf t}_{\bar K,\bar e}.$ Then analogous results for $ \bar \nu_{\bar K_{\bar e}} \times \bar {\bf t}_{\bar K,\bar e}$ are immediate. Indeed, estimates \eqref{conorm_est1}--\eqref{conorm_est3} follow because $\bar \mu_{\bar K,\bar e}$ is the mean of the vectors $ \bar \nu_{\bar K_{\bar e}} \times \bar {\bf t}_{\bar K,\bar e}$ and $ \bar \nu_{\bar K} \times \bar {\bf t}_{\bar K,\bar e}$. Firstly, we address the proof of \eqref{conorm_est1}. Let the same assumptions as in the proof of Lemma \ref{lem:angles} hold and, in addition, let $\bar e$ be given by $ \{ (x,0,0) \in \mathbb R^3 \, |\, x \in [0,h_{\bar e}]\}$.
We obviously have \begin{equation}\label{bar_mu} \bar \nu_{\bar K} \times \bar {\bf t}_{\bar K,\bar e}= (0,1,0).\end{equation} Note that the assumptions of the proof of Lemma \ref{lem:eK} are satisfied. Hence we can use \eqref{eq:dc-e}, i.e., the parametrization of $e$ given by $c$ satisfies \begin{equation} c'(s)= \left( 1 , 0 , 0 \right) - \nu_\Gamma(c(s)) (\nu_\Gamma)_1(c(s))+ \mathcal O(h^2) , \end{equation} and coincides with ${\bf t}(c(s))$ up to normalization. Hence, in view of \eqref{le:normc} we obtain \begin{equation}\label{t} {\bf t}^{-l}(x) = ( 1 ,0,0) - \nu_\Gamma(c(s)) (\nu_\Gamma)_1(c(s)) + \mathcal O(h^2) \end{equation} for some $s\in [0,h_{\bar e}]$. Combining \eqref{bar_mu} and \eqref{t} we find, using \eqref{nuk_est}, \[\left|\langle \bar \nu_{\bar K} \times \bar {\bf t}_{\bar K,\bar e}, {\bf t}^{-l}(x) \rangle \right| = \left| (\nu_\Gamma)_2(c(s)) (\nu_\Gamma)_1(c(s)) \right| + \mathcal O(h^2) \leq C h^2,\] which is \eqref{conorm_est1}. Concerning \eqref{conorm_est2}, \[ \left|\langle \bar \nu_{\bar K} \times \bar {\bf t}_{\bar K,\bar e} , \nu_\Gamma^{-l}(x) \rangle \right| \leq \left| (\nu_\Gamma^{-l})_2(x) \right| \leq Ch\] holds because of \eqref{bar_mu} and \eqref{nuk_est}. Thus, it remains to show \eqref{conorm_est3}. By definition ${\bf t}^{-l}(x), \nu_\Gamma^{-l}(x), \mu_{K,e}^{-l}(x)$ form an orthonormal basis of $\mathbb R^3$ and the vector $\bar \nu_{\bar K} \times \bar {\bf t}_{\bar K,\bar e}$ is of unit length. This means that for every $\bar x$ in $\bar e$ there exist $b_1(\bar x),b_2(\bar x),b_3(\bar x) \in \mathbb R$ satisfying $b_1^2(\bar x) + b_2^2(\bar x) + b_3^2(\bar x)=1$ such that \begin{equation}\label{eq:bs} \bar \nu_{\bar K} \times \bar {\bf t}_{\bar K,\bar e} = b_1(\bar x) {\bf t}^{-l}(\bar x) + b_2(\bar x) \nu_\Gamma^{-l}(\bar x) + b_3(\bar x) \mu_{K,e}^{-l}(\bar x).
\end{equation} We know from \eqref{conorm_est1} and \eqref{conorm_est2} that $|b_1(\bar x)|,|b_2(\bar x)| \leq C h$ for some $C>0,$ which implies, using a Taylor expansion, \begin{equation}\label{eq:b3} b_3(\bar x) = \pm \sqrt{ 1 + \mathcal O(h^2)} = \pm 1 + \mathcal O(h^2).\end{equation} Note that it only remains to show that in \eqref{eq:b3} the $+$ holds. As $b_3$ depends continuously on $\bar x$ it is sufficient to find one $(\bar x_1,0,0)\in\bar e$ such that $ b_3(\bar x_1) = 1 + \mathcal O(h^2).$ To that end we consider some $\bar x_1,\bar y_1>0$ such that \[ \gamma : (-\bar y_1,0] \longrightarrow \bar K, \quad s \mapsto (\bar x_1,s,0)\] is a curve leaving $\bar K$ through $\bar e.$ By definition the curve $\tilde \gamma$ given by \[ \tilde \gamma(s) := \gamma(s) - \nu_\Gamma^{-l} ( \gamma(s)) d(\gamma(s))\] is a curve in $K$ leaving through $e.$ This means we have \begin{equation}\label{eq:angle_sign} 0 < \langle \tilde \gamma'(0) , \mu_{K,e}(\tilde \gamma(0)) \rangle . \end{equation} Due to \eqref{eq:bs}, \eqref{eq:b3} and the fact that $\mu_{K,e}$ is of unit length we already know that \begin{equation}\label{eq:muke} \mu_{K,e} \equiv \pm ( 0 , 1, 0 ) + \mathcal O(h). \end{equation} We compute \begin{multline}\label{outw} \tilde \gamma '(s) = (0,1,0) - (0,1,0) ( \nabla \nu_\Gamma^{-l}(\gamma(s)))^T d(\gamma(s)) - \nu_\Gamma (\tilde \gamma(s)) \left\langle \nu_\Gamma(\tilde \gamma(s)) , (0,1,0)\right\rangle \\ = (0,1,0) + \mathcal O(h), \end{multline} where we have used the boundedness of $\nabla \nu_\Gamma^{-l}$, Lemma \ref{lemma:dziuk-elliott} and \eqref{nuk_est}. Inserting \eqref{eq:muke} and \eqref{outw} in \eqref{eq:angle_sign} we find \begin{equation}\label{eq:sign_final} 0 < \pm 1 + \mathcal O(h), \end{equation} where $\pm$ is the sign from \eqref{eq:b3}. Obviously, for $h$ sufficiently small \eqref{eq:sign_final} only holds for ``+'', which finishes the proof.
\qed\\ \section{Estimating the Difference Between Both Schemes } \label{sec:mr} This section is devoted to establishing a bound for the difference between the curved and flat approximate solutions. To start with, we investigate the difference between the numerical fluxes defined on the flat and the curved triangulation, respectively. \begin{lemma}\label{lem:flux_est} Let $\mathcal{K}$ be some compact subset of $\mathbb R^2$. Provided the quadrature operators $Q_{\bar e(t)}$ and $Q_{I_n}$ are of order at least $1,$ there is a constant $C$ depending only on $G_T$ and $\mathcal{K}$ such that for the Lax-Friedrichs fluxes \eqref{curv_num_flux} and \eqref{eq:FVflat}$_{1}$ with the same diffusion rate $\lambda$ the following inequality holds \[\left| f^n_{K,e} (u,v) - \bar f^n_{\bar K, \bar e} (u,v)\right|\leq Ch^2 \qquad \forall \ (u,v) \in \mathcal{K} ,\ K\in\mathcal T_h,\ e\subset \partial K.\] \end{lemma} \textit{Proof.} We start by observing that the diffusive terms drop out, so that \begin{align} \begin{split} 2 \left| f^n_{K,e} (u,v) - \bar f^n_{\bar K, \bar e} (u,v)\right| =& \left| \int_{I_n} \int_{e(t)} \langle f(u,x,t), \mu_{K(t),e(t)}(x)\rangle de(t)\, dt\right.\\ &-\frac{1}{|I_n|} Q_{I_n}\left[ \frac{1}{|\bar e(\cdot)|} Q_{\bar e(\cdot)} [\langle f(u,\cdot,\cdot), \bar \mu_{\bar K(\cdot),\bar e(\cdot)}\rangle ]\right]\\ &+ \int_{I_n} \int_{e(t)} \langle f(v,x,t), \mu_{K(t),e(t)}(x)\rangle de(t)\, dt\\ &\left.-\frac{1}{|I_n|} Q_{I_n}\left[ \frac{1}{|\bar e(\cdot)|} Q_{\bar e(\cdot)} [\langle f(v,\cdot,\cdot), \bar \mu_{\bar K(\cdot),\bar e(\cdot)}\rangle ]\right]\right|. \end{split}\label{eq:j1} \end{align} As $u$ and $v$ appear symmetrically, we will omit all terms containing the latter in our subsequent analysis.
We now add zero several times in \eqref{eq:j1} and get \begin{align}\label{eq:flux_est} 2 \left| f^n_{K,e} (u,v) - \bar f^n_{\bar K, \bar e} (u,v)\right| = \left|\int_{I_n} T_1 + T_2 + T_3 +T_4 +T_5 \; dt\right| \end{align} with \begin{align*}\begin{split} &T_1(t) := \int_{e(t)} \langle f(u,x,t), \mu_{K(t),e(t)}(x)\rangle de(t) - \int_{\bar e(t)}\langle f^{-l}(u,x,t), \mu_{K(t),e(t)}^{-l}(x)\rangle d \bar e(t), \\ &T_2(t):= \int_{\bar e(t)}\langle f^{-l}(u,x,t) ,\mu_{K(t),e(t)}^{-l}(x)\rangle d\bar e (t) - \int_{\bar e(t)} \langle f^{-l}(u,x,t), \bar \mu_{\bar K(t),\bar e(t)}\rangle d\bar e(t)\\ &T_3(t):= \int_{\bar e(t)} \langle f^{-l}(u,x,t), \bar \mu_{\bar K(t),\bar e(t)}\rangle d\bar e(t) - \int_{\bar e(t)}\langle f(u,x,t), \bar \mu_{\bar K(t),\bar e(t)}\rangle d \bar e(t) \\ &T_4(t):= \int_{\bar e(t)} \langle f(u,x,t) ,\bar \mu_{\bar K(t),\bar e(t)} \rangle d\bar e(t) -\frac{1}{|I_n|}Q_{I_n}\left[ \int_{\bar e(\cdot)} \langle f(u,x,\cdot) ,\bar \mu_{\bar K(\cdot),\bar e(\cdot)} \rangle d\bar e(\cdot) \right]\\ &T_5:=\frac{1}{|I_n|} Q_{I_n}\!\left[ \int_{\bar e(\cdot)} \langle f(u,x,\cdot) ,\bar \mu_{\bar K(\cdot),\bar e(\cdot)} \rangle d\bar e(\cdot) - \frac{1}{|\bar e(\cdot)|} Q_{\bar e(\cdot)}\left[ \left\langle f(u,\cdot ,\cdot), \bar \mu_{\bar K(\cdot),\bar e(\cdot)} \right\rangle\right] \right]\!. \end{split} \end{align*} In the following we will estimate the summands one by one. First, by properties of the quadrature operators $Q_{I_n}$, $Q_{\bar e(t)}$ and the CFL condition \eqref{cfl} \begin{equation} \label{t4} \left|\int_{I_n}T_4(t)\; dt\right| \leq C h^{p_3+1}, \qquad |T_5| \leq C h^{p_2+1}, \end{equation} as the integrands are sufficiently smooth. In particular, we use the fact that the surface evolves smoothly. Addressing the estimates for $T_1,T_2,T_3$ we will omit the time dependency as all three estimates are uniform in time. 
To establish an estimate for $T_1$ we recall that we can parametrize $e$ over $\bar e$ such that for the parametrization $c_e$ inequality \eqref{le:normc} holds. We have \begin{equation}\label{t1} |T_1| =\Bigg| \int_{\bar e}\langle f^{-l}(u,x), \mu^{-l}_{K,e}(x) \rangle \left( \|c_e'(s)\| - 1\right) d\bar e\Bigg| \leq\|f\|_\infty C h^2, \end{equation} where $\|f\|_\infty$ denotes the supremum of $\norm{f(u,x,t)}$ for $(x,t) \in G_T$ and $u \in \mathcal{K}.$ Next we turn to $T_3.$ Its estimate is based on the assumption that we have extended $f(u,\cdot)$ to $\mathcal N$ smoothly and on the second statement of Lemma \ref{lemma:dziuk-elliott}. This leads to \begin{equation}\label{t3} |T_3| \leq \int_{\bar e} \norm{f^{-l}(u,x) - f(u,x)} \norm{\bar \mu_{\bar K,\bar e}}\, d\bar e \leq Ch^2. \end{equation} This leaves $T_2.$ It is clear that \begin{equation}\label{t21} |T_2| \leq \max_{x \in \bar e} \left|\langle f^{-l}(u,x), \mu_{K,e}^{-l} (x) - \bar \mu_{\bar K,\bar e}\rangle\right|.\end{equation} Furthermore we find, as $f$ is tangential to $\Gamma,$ \[ f^{-l}(u,x)=f_1(u,x) {\bf t}^{-l}(x) + f_2(u,x) \mu_{K,e}^{-l}(x),\] where ${\bf t}$ is a unit tangent vector to $e$ and $f_1(u,x),f_2(u,x) \in \mathbb R.$ Due to Lemma \ref{lem:angles1} we have \begin{align} \label{t22}\langle f^{-l}(u,x), \mu_{K,e}^{-l} (x)\rangle &= f_2(u,x), \\ \label{t23}\langle f^{-l}(u,x), \bar \mu_{\bar K,\bar e}\rangle &=f_1(u,x) \mathcal O(h^2) + f_2(u,x) + f_2(u,x) \mathcal O(h^2). \end{align} Obviously, $|f_1(u,x)|,|f_2(u,x)| \leq \|f\|_\infty$, so that inserting \eqref{t22} and \eqref{t23} into \eqref{t21} gives \begin{equation} \label{t2} |T_2| \leq Ch^2. \end{equation} Now the statement of the lemma follows from \eqref{eq:flux_est} together with \eqref{t4}, \eqref{t1}, \eqref{t3} and \eqref{t2}. \qed\\ Our next step is to establish stability estimates for the curved and the flat approximate solutions.
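The contraction and stability arguments below hinge on the numerical flux being monotone, i.e. nondecreasing in its first and nonincreasing in its second argument, which for a Lax-Friedrichs flux holds once $\lambda \geq \frac{1}{2}\|\partial_u f\|_\infty$ on the relevant range of values. A minimal scalar sketch (Python; the Burgers-type flux $g(u)=u^2/2$ and the value range are our own illustrative choices, not data from the paper):

```python
# Scalar stand-in for the Lax-Friedrichs flux f^n_{K,e}:
#   fhat(u, v) = (g(u) + g(v)) / 2 + lam * (u - v).
# Monotone (nondecreasing in u, nonincreasing in v) once
# lam >= max |g'| / 2 on the range of values involved.

def lf_flux(g, lam):
    return lambda u, v: 0.5 * (g(u) + g(v)) + lam * (u - v)

g = lambda u: 0.5 * u * u       # Burgers-type flux, g'(u) = u
lam = 0.5 * 2.1                 # half of max |g'| on [-2, 2.1]
fhat = lf_flux(g, lam)

grid = [i / 10.0 - 2.0 for i in range(41)]   # values in [-2, 2]
for u in grid:
    for v in grid:
        assert fhat(u + 0.1, v) >= fhat(u, v)   # nondecreasing in u
        assert fhat(u, v + 0.1) <= fhat(u, v)   # nonincreasing in v
        assert fhat(u, u) == g(u)               # consistency
```

Dropping $\lambda$ below half the maximal characteristic speed makes the second assertion fail near the edge of the range, which is exactly the regime excluded by the choice of $\lambda$ in the schemes.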
Due to the geometry change of the surface $\Gamma$, which might act as a source term, we need the following lemma. \begin{lemma} For every finite sequence of positive numbers $\{b_n\}_{n=1,\dots,N}$ we have \begin{equation} \label{eq:expest} \prod_{n=1}^N (1 + b_n) \leq \left( 1 + \sum_{n=1}^N \frac{b_n}{N} \right)^N \leq \exp{\left(\sum_{n=1}^N b_n\right)}. \end{equation} \end{lemma} \textit{Proof.} From Jensen's inequality we know \begin{equation}\label{dec1} \sum_{n=1}^N \ln(1+ b_n) \leq N \ln \left(\sum_{n=1}^N \frac{1+b_n}{N} \right). \end{equation} Applying the exponential function to \eqref{dec1} gives the first inequality in \eqref{eq:expest}. The second inequality in \eqref{eq:expest} follows from the fact that \[ \left( 1 + \frac{c}{N} \right)^N \leq \exp(c) \quad \forall N \in \mathbb N ,\ c \geq 0.\] \qed Now we can show a stability estimate for the curved scheme, the proof of which is mostly standard. \begin{lemma}\label{lem:curvedstability} Provided the initial data satisfy $u_0 \in L^\infty(\Gamma(0))$, the solution of the curved scheme fulfils \begin{equation}\label{dec2} | u^{n+1}_K | \leq (1 + c |I_n|) \max\{ |u_K^n| , \max_{e \subset \partial K} \{|u_{K_e}^n|\}\} + c |I_n| \quad \forall \, K \in \mathcal T _h, \end{equation} for some constant $c$ and therefore \begin{equation} \| u^h(t) \|_{L^\infty} \leq ( \|u_0\|_{L^\infty} + cT) \exp(cT) \quad \forall \, 0 \leq t \leq T.\end{equation} \end{lemma} \textit{Proof.} Invoking the consistency of the numerical flux functions we can rewrite \eqref{eq:FVcurv} as \begin{align*} u_K^{n+1} = \frac{|K(t_n)|}{|K(t_{n+1})|} & \left( (1 - \sum_{e \subset \partial K} c_{K,e}) u_K^n + \sum_{e \subset \partial K} c_{K,e} u_{K_e}^n \right.\\ &\quad\left.- \frac{1}{|K(t_n)|} \int_{I_n}\int_{K(t)} \nabla_{\Gamma} \cdot f(u_K^n,x,t) \, d\Gamma(t)\,dt \right) \end{align*} with \[ c_{K,e} = \frac{|I_n|\ |e(t_n)|}{|K(t_n)|}\frac{f^n_{K,e}(u^n_K,u^n_{K_e}) - f^n_{K,e} (u^n_K,u^n_K) }{u^n_{K} -u^n_{K_e}}.\] Due to the
monotonicity of the numerical fluxes and the CFL condition \eqref{cfl} we have \[ c_{K,e} \geq 0, \sum_{e \subset \partial K} c_{K,e} \leq 1.\] Combining the growth condition \eqref{eq:growth} and the fact that $|K(t_n)|/|K(t_{n+1})| \leq 1+ c |I_n|$ we get \eqref{dec2} for a new, possibly larger constant $c$. Iteration of \eqref{dec2} implies \begin{equation}\label{dec4} \max_{K \in \mathcal T _h} |u_K^n| \leq \prod_{k=0}^{n-1} (1+ c |I_k|) \max_{K \in \mathcal T _h} |u_K^0| + \sum_{k=0}^{n-1} c|I_k|\prod_{j=k+1}^{n-1} (1 + c |I_j|) . \end{equation} Invoking \eqref{eq:expest} we obtain from \eqref{dec4} \begin{equation*} \max_{K \in \mathcal T _h} |u_K^n| \leq \exp(cT) \|u_0\|_{L^\infty} + \sum_{k=0}^{n-1} c|I_k| \exp(cT)\leq( \|u_0\|_{L^\infty} + cT) \exp(cT).\\ \end{equation*} \qed As a technical ingredient for the stability estimate of the flat scheme and the error estimate we need the following lemma whose proof is given in the appendix. \begin{lemma}\label{lem:quotients} For times $t_{n},t_{n+1}$ and corresponding cells $K(t_n),$ $K(t_{n+1}),$ $\bar K(t_n),$ $\bar K(t_{n+1})$ the following estimates hold \begin{align} \label{lem:quotients1} \abs{ \frac{|K(t_n)|}{|\bar K(t_{n})|} - \frac{|K(t_{n+1})|}{|\bar K(t_{n+1})|} }& \leq C h |t_{n+1} - t_n|,\\ \label{lem:quotients2} \abs{ \frac{|\bar K(t_n)|}{|\bar K(t_{n+1})|} \frac{|K(t_{n+1})|}{|K(t_{n})|}-1 } &\leq C h |t_{n+1} - t_n|. \end{align} \end{lemma} The stability estimate for the flat scheme is a combination of the stability estimate of the curved scheme and the estimate for the difference of the fluxes. 
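Both stability lemmas iterate a one-step growth bound and then control the accumulated product through \eqref{eq:expest}. A quick numerical sanity check of that Gronwall-type chain (Python, with randomly generated positive $b_n$; the ranges are arbitrary illustrative choices):

```python
import math
import random

# Check prod(1 + b_n) <= (1 + (sum b_n)/N)^N <= exp(sum b_n)
# for positive b_n, as in the product estimate of the stability proofs.
random.seed(0)
for _ in range(100):
    N = random.randint(1, 50)
    b = [random.uniform(0.0, 0.5) for _ in range(N)]
    prod = math.prod(1.0 + bn for bn in b)
    s = sum(b)
    middle = (1.0 + s / N) ** N
    assert prod <= middle * (1.0 + 1e-12)         # AM-GM / Jensen step
    assert middle <= math.exp(s) * (1.0 + 1e-12)  # (1 + s/N)^N <= e^s
```

The tiny relative tolerances only guard against floating-point rounding; in exact arithmetic both inequalities hold with equality at most for $N=1$.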
\begin{lemma}\label{lem:flatstability} Provided the initial data satisfy $u_0 \in L^\infty(\Gamma(0))$, the solution of the flat scheme fulfils \begin{equation}\label{dec5} | \bar u^{n+1}_{\bar K} | \leq (1 + 2(c+1) |I_n|) \max\{ |\bar u_{\bar K}^n| , \max_{\bar e \subset \partial \bar K} \{|\bar u_{\bar K_{\bar e}}^n|\}\} + 2(c+1) |I_n| + d |I_n| h\end{equation} for all $\bar K \in \mathcal T _h$ and $0 \leq t_{n+1} \leq T$. Here $c$ can be chosen as the same constant as in Lemma \ref{lem:curvedstability} and $d>0$ is another constant. Therefore, for $h$ sufficiently small, \begin{equation} \| \bar u^h(t) \|_{L^\infty} \leq (\|u_0\|_{L^\infty} + bT) \exp(b T) + 1 \quad \forall \, 0 \leq t \leq T,\end{equation} where $b := 2(c+1).$ \end{lemma} \textit{Proof.} We have \begin{equation}\begin{split}\label{flatstep} \bar u_{\bar K}^{n+1} =& \frac{|\bar K(t_n)|}{|\bar K(t_{n+1})|} \left( \bar u^n_{\bar K} - \frac{\abs{I_n}}{|\bar K(t_{n})|} \sum_{\bar e \subset \partial \bar K} |\bar e(t_n)| \bar f^n_{\bar K,\bar e}(\bar u_{\bar K}^n,\bar u_{\bar K_{\bar e}}^n) \right)\\ =& \frac{|\bar K(t_n)|}{|\bar K(t_{n+1})|} \left( \bar u^n_{\bar K} - \frac{\abs{I_n}}{| K(t_{n})|} \sum_{e \subset \partial K} |e(t_n)| f^n_{K,e}(\bar u_{\bar K}^n,\bar u_{\bar K_{\bar e}}^n) \right. \\ &\left. + \abs{I_n} \sum_{e \subset \partial K} \left( \frac{\abs{e(t_n)}}{\abs{K(t_n)}} f^n_{K,e}(\bar u_{\bar K}^n,\bar u_{\bar K_{\bar e}}^n) - \frac{\abs{\bar e(t_n)}}{\abs{\bar K(t_n)}} \bar f^n_{\bar K,\bar e}(\bar u_{\bar K}^n,\bar u_{\bar K_{\bar e}}^n)\right) \right).
\end{split}\end{equation} We observe that because of \eqref{lem:quotients2} \begin{multline} \label{dec8} \frac{|\bar K(t_n)|}{|\bar K(t_{n+1})|} = \frac{| K(t_n)|}{| K(t_{n+1})|} \frac{|\bar K(t_n)|}{|\bar K(t_{n+1})|} \frac{| K(t_{n+1})|}{| K(t_{n})|}\\ \leq ( 1 + c \abs{I_n} ) \cdot ( 1 + C \abs{I_n} h) \leq 1 + (c+1) \abs{I_n} , \end{multline} where $c$ is the same constant as in Lemma \ref{lem:curvedstability}, for $h$ small enough. Moreover, provided $\max_{K \in \mathcal T _h} |\bar u_{\bar K}^n| \leq A+1:=(\|u_0\|_{L^\infty} + bT) \exp(b T) + 1$ we have \begin{equation} \frac{|\bar K(t_n)|}{|\bar K(t_{n+1})|} \left( \frac{\abs{e(t_n)}}{\abs{K(t_n)}} f^n_{K,e}(\bar u_{\bar K}^n,\bar u_{\bar K_{\bar e}}^n) - \frac{\abs{\bar e(t_n)}}{\abs{\bar K(t_n)}} \bar f^n_{\bar K,\bar e}(\bar u_{\bar K}^n,\bar u_{\bar K_{\bar e}}^n)\right) \leq Ch \end{equation} because of \eqref{eq:error-e}, \eqref{eq:error-K}, and Lemma \ref{lem:flux_est}. Here we have used that for $|u| ,|v| \leq A+1,$ the numerical fluxes $f^n_{K,e}(u,v)$, $\bar f^n_{\bar K,\bar e}(u,v)$ are uniformly bounded. Provided $\max_{K \in \mathcal T _h}|\bar u_{\bar K}^n| \leq A+1$ and $h, |I_n|$ sufficiently small, we obtain \eqref{dec5} by the same argumentation as in the proof of Lemma \ref{lem:curvedstability} for some $d>0$. As obviously $\| \bar u^h(0) \|_{L^\infty} \leq A + 1$ we have by induction \begin{multline}\label{dec6} \max_{K \in \mathcal T _h} |\bar u_K^n| \leq \prod_{k=0}^{n-1} (1+ b |I_k|) \max_{K \in \mathcal T _h} |\bar u_K^0| + \sum_{k=0}^{n-1} (b|I_k| + d h |I_k| )\prod_{j=k+1}^{n-1} (1 + b |I_j|) \\ \leq \underbrace{( \| u_0\|_{L^\infty} + bT) \exp(bT)}_{\leq A} + d T \exp(bT) h, \end{multline} where $b=2(c+1)$. 
Equation \eqref{dec6} shows that our induction hypothesis, $\max_{K \in \mathcal T _h}|\bar u_{\bar K}^n| \leq A+1$, also holds for the next time step provided $ h < \frac{1}{ \exp(bT)d T}$ and $t_n\leq T.$ This implies that \eqref{dec5} and \eqref{dec6} in fact hold for all $t_n \leq T$. Thus, provided $h$ is small enough, the assertion of the lemma follows by induction. \qed In addition we need the fact that the curved scheme satisfies a discrete $L^1$-contraction property. \begin{lemma}\label{lem:contraction} For given data $u_K^n$ and $v_K^n,$ let $u_K^{n+1}$ and $v_K^{n+1}$ be defined according to the curved finite volume scheme \eqref{eq:FVcurv}. Then \[ \sum_K |K(t_{n+1})| | u_K^{n+1} - v_K^{n+1} | \leq \sum_K |K(t_{n})| | u_K^{n} - v_K^{n} |.\] \end{lemma} The proof is analogous to the proof of the discrete $L^1$-contraction property for finite volume schemes in Euclidean space, cf. \cite{CCL94}. We recall that the Lax-Friedrichs flux \eqref{curv_num_flux} satisfies the classical conservation and monotonicity conditions. For the difference between the curved and flat approximate solutions we obtain the following estimate. \begin{theorem} \label{thm:main} Suppose the quadrature operators $Q_{\bar e(t)}$ and $Q_{I_n}$ are of order at least $1$ for all $t,n$ and the quadrature operators $Q_{\bar K(0)}$ and the initial data $u_0$ are such that \begin{align}\label{eq:mainthmass} \|u^h(0)-\bar u^h(0)\|_{L^1(\Gamma(0))} \leq C \,h \end{align} for some constant $C$. Then, for fixed $T>0$, the error between the solution $u^h$ of the curved finite volume scheme \eqref{eq:FVcurv} and the solution $\bar u^h$ of the flat finite volume scheme \eqref{eq:FVflat} satisfies \begin{equation*} \norm{ u^h(T) - \bar{u}^h(T) }_{L^1(\Gamma(T))} \leq C \, h \end{equation*} for some constant $C$ depending on $T,G_T,f,u_0$, provided the same diffusion rate $\lambda$ is used in both schemes.
\end{theorem} \begin{remark} The curved approximate solution converges to the entropy solution of \eqref{eq:ConLaw}-\eqref{eq:IVConLaw} with a convergence rate of $\mathcal O(h^{1/4})$, cf. \cite{GW12}. Hence, invoking Theorem \ref{thm:main}, the same kind of error bound holds for the flat approximate solution. \end{remark} \textit{Proof of Theorem \ref{thm:main}.} Let $n\in\mathbb N$ be such that $T \in [t_n, t_{n+1})$; then we have \begin{align*} \| u^h(T) - &\bar{u}^h(T) \|_{L^1(\Gamma)} = \sum_K\abs{K(t_{n+1})} \abs{ u_{K}^{n+1} - \bar u _{\bar K}^{n+1} } \\ = \sum_K \Bigg|& |K(t_n)| u_{K}^{n} - |I_n| \sum_{e\subset \partial K} \abs{e(t_n)} f^n_{K,e} ( u_{K}^{n} ,u_{K_e}^{n}) \\ &- \frac{|K(t_{n+1})|\, | \bar K(t_n)|}{|\bar K(t_{n+1})|}\bar u _{\bar K}^{n} + |I_n| \frac{|K(t_{n+1})|}{|\bar K(t_{n+1})|}\sum_{e \subset \partial K} \abs{\bar e(t_n)} \bar f^n_{\bar K,\bar e} ( \bar u_{\bar K}^{n} ,\bar u_{\bar K_{\bar e}}^{n}) \Bigg| \\ \leq R_1 &+ R_2 + R_3 + R_4 + R_5, \end{align*} where \begin{align*} R_1:= & \sum_K \Big| |K(t_n)| u_{K}^{n} - |I_n| \sum_{e\subset \partial K} \abs{e(t_n)} f^n_{K,e} ( u_{K}^{n} ,u_{K_e}^{n})\\ & \qquad-|K(t_n)| \bar u_{\bar K}^{n} + |I_n| \sum_{e\subset \partial K} \abs{e(t_n)} f^n_{K,e} ( \bar u_{\bar K}^{n} ,\bar u_{\bar K_{\bar e}}^{n}) \Big|,\\ R_2:=&\sum_K \Big||K(t_n)| \bar u_{\bar K}^n - \frac{|K(t_{n+1})|\, | \bar K(t_n)|}{|\bar K(t_{n+1})|}\bar u_{\bar K}^n \Big|, \\ R_3:=&\sum_K |I_n| \sum_{e\subset \partial K} \abs{ \abs{e(t_n)} - \abs{\bar e(t_n)} } \abs{f^n_{K,e} (\bar u_{\bar K}^{n} ,\bar u_{{\bar K}_{\bar e}}^{n})},\\ R_4:=&\sum_K\left| \left( 1 - \frac{|K(t_{n+1})|}{|\bar K(t_{n+1})|} \right) |I_n| \sum_{e\subset \partial K} \abs{\bar e(t_n)} \abs{f^n_{K,e} (\bar u_{\bar K}^{n} ,\bar u_{{\bar K}_{\bar e}}^{n})} \right|,\\ R_5:=&\sum_K |I_n| \frac{|K(t_{n+1})|}{|\bar K(t_{n+1})|} \sum_{e\subset \partial K} \abs{\bar e(t_n)} \abs{ f^n_{ K, e} (\bar u_{\bar K}^{n} ,\bar u_{{\bar K}_{\bar e}}^{n}) - \bar f^n_{\bar K,\bar
e} (\bar u_{\bar K}^{n} ,\bar u_{{\bar K}_{\bar e}}^{n}) }. \end{align*} Because the Lax-Friedrichs flux is conservative and monotone and due to the CFL condition \eqref{cfl}, the term $R_1$ can be estimated via the discrete $L^1$-contraction property (Lemma \ref{lem:contraction}): \begin{align*} R_1 \leq \sum_{K} \abs{K(t_n)} \abs{ u_{K}^{n} - \bar u _{\bar K}^{n} }. \end{align*} The term $R_2$ can be estimated using \eqref{lem:quotients1}; we get \begin{equation} R_2 \leq \sum_K \abs{ \bar u_{\bar K}^n} |\bar K(t_n)| \underbrace{ \left| \frac{|K(t_n)|}{|\bar K(t_n)|}- \frac{|K(t_{n+1})|}{|\bar K(t_{n+1})|}\right|}_{\leq C |I_n| h} \leq C |I_n| h. \end{equation} Applying Lemma \ref{lem:eK} and assumption \eqref{ineq:angle-assumption} together with Remark \ref{rem:angles} we get \begin{equation} R_3,R_4 \leq \sum_K |I_n| \sum_{e \subset \partial K} C h^3 \abs{f^n_{K,e} (\bar u_{\bar K}^{n} ,\bar u_{{\bar K}_{\bar e}}^{n})} \leq C |I_n| h. \end{equation} Based on Lemma \ref{lem:eK}, assumption \eqref{ineq:angle-assumption}, Remark \ref{rem:angles} and Lemma \ref{lem:flux_est} we have \begin{equation} R_5 \leq C \sum_K |I_n| \sum_{e \subset \partial K} h^3 \leq C |I_n| h. \end{equation} Combining these estimates we thus obtain by iteration \begin{align*} \norm{ u^h(T) - \bar{u}^h(T) }_{L^1(\Gamma)} &= \sum_K\abs{K(t_{n+1})} \abs{ u_{K}^{n+1} - \bar u _{\bar K}^{n+1} } \\ &\leq \sum_K\abs{K(t_n)} \abs{ u_{K}^{n} - \bar u _{\bar K}^{n} } + C |I_n|\; h\\ &\leq \sum_K\abs{K(0)} \abs{ u_{K}^{0} - \bar u _{\bar K}^{0} } + C T\; h\\ &\leq C (T+1) h, \end{align*} where the last step follows from \eqref{eq:mainthmass}. \qed \section{Numerical Experiments} \label{sec:numerics} Numerical investigations based on the finite volume schemes defined in Section \ref{sec:fvs} are presented in this section. The upshot of our experiments is threefold. Firstly, under the present assumptions the order of convergence stated in Theorem \ref{thm:main} is optimal.
This is demonstrated by Test Problem \ref{tp:independent}. Secondly, all of our experiments which include a sufficiently large numerical viscosity, i.e. $\lambda\in\Theta (1)$ in \eqref{curv_num_flux}, lead to a considerably higher experimental order of convergence (EOC) between 1 and 2 for the $L^1$-difference between the flat and the curved approximate solution. Thirdly, the application of a finite volume scheme of second order to Test Problem \ref{tp:independent} demonstrates that orders of convergence higher than 1 are not to be expected in general if the geometry is not approximated sufficiently well; see Test Problem \ref{tp:hofvtp1}. In the following we will present several test cases. Thereafter, we will mention some implementation aspects. \subsection{Test Problems} All test cases except Test Problem \ref{tp:movingsurface} use the geometrical setting $G_T=\mathbb S^2\times [0,1]$, i.e. $\Gamma(t)=\mathbb S^2$ for all $t\in [0,T]$, and $T=1$. This is due to the fact that we are able to compute the exact curved quantities only in this or similarly simple settings. In addition, let us fix the vector fields $V(x)=\frac{2\pi}{\|x\|}(x_2,-x_1,0)^T$ and $W(x)=\frac{2\pi}{\|x\|}(-x_3,0,x_1)^T$ for $x=(x_1,x_2,x_3)\in \mathbb S^2$. \begin{testproblem}[$u$-independent flux function] We choose $f=V$ as the flux function. Since $f$ neither depends on $t$ nor on $u$ and is divergence-free on $\mathbb S^2$, any initial datum $u_0: \mathbb S^2\rightarrow \mathbb R$ is a stationary solution of the corresponding initial value problem \eqref{eq:ConLaw}-\eqref{eq:IVConLaw}. For initial values identically equal to zero the curved scheme conserves this stationary solution. Thus, the error between the curved and the flat approximate solution is equal to the error between the flat approximate solution and the exact solution. The results for this test case for $\lambda=0$ are plotted in Table \ref{tab:1}.
Note that due to $\partial_u f=0$ the numerical flux functions are monotone. This experiment shows that, under the assumptions of our convergence analysis, $\mathcal O(h)$ is indeed the optimal order of convergence. However, if we modify the numerical diffusion by setting $\lambda=\pi$ in the numerical flux functions, we achieve EOCs between $1$ and $2$, as can be seen in Table \ref{tab:1} as well. \label{tp:independent} \end{testproblem} \begin{table} \centering\scalebox{0.95}{ \begin{tabular}{|c|@{}p{0.2em}@{}|c|c|@{}p{0.2em}@{}|c|c|} \cline{3-4}\cline{5-7} \multicolumn{1}{c}{} & & \multicolumn{2}{|c|}{Test Problem \ref{tp:independent}, $\lambda = 0$} & & \multicolumn{2}{|c|}{Test Problem \ref{tp:independent}, $\lambda = 1$}\\ \cline{1-2}\cline{3-4}\cline{5-7} \multicolumn{1}{|c|}{level} & & \multicolumn{1}{|c|}{$L^1$-difference} & \multicolumn{1}{|c|}{EOC} & & \multicolumn{1}{|c|}{$L^1$-difference} & \multicolumn{1}{|c|}{EOC}\\ \cline{1-2}\cline{3-4}\cline{5-7} $0$ & & $0.758314$ & ---& & $0.0119577$ & --- \\ $1$ & & $0.437173$ & $0.794$ & & $0.0050082$ & $1.256$\\ $2$ & & $0.231999$ & $0.914$ & & $0.0020877$ & $1.262$\\ $3$ & & $0.119190$ & $0.960$ & & $0.0008286$ & $1.333$ \\ $4$ & & $0.060372$ & $0.981$ & & $0.0003137$ & $1.401$\\ $5$ & & $0.030378$ & $0.990$ & & $0.0001165$ & $1.429$\\ $6$ & & $0.015237$ & $0.995$ & & $0.0000439$ & $1.409$\\ \cline{1-2}\cline{3-4}\cline{5-7} \end{tabular}} \caption{$L^1$-difference and EOCs between curved approximate solution $u^h(T)$ and flat approximate solution $\bar u^h(T)$ from Test Problem \ref{tp:independent} for different values $\lambda$ of numerical diffusion.} \label{tab:1} \end{table} \begin{testproblem}[Advection across the poles] \label{tp:linear} Let the flux function $f$ be defined by $f(u,x)= u W(x)$ for $x\in \mathbb S^2$. Initial values are given by $u_0(x)= \mathbbmss 1_{\{x_1>0.15\}}(x)$. In order to get monotone numerical flux functions we set $\lambda=\frac{1}{2}\|\partial_u f \|_\infty=\pi$.
For this test case we obtain EOCs of almost $2$, cf. Table \ref{tab:2}. \end{testproblem} \begin{testproblem}[Burgers along the latitudes] \label{tp:Burgers} We choose a flux function of Burgers-type $f=f(u,x)=\frac{1}{2} u^2 V(x)$ for $x\in \mathbb S^2$ and initial values $u_0(x)= \mathbbmss 1_{\{x_1>0.15\}}(x)$. In order to get monotone numerical flux functions we set $\lambda=\frac{1}{2}\|\partial_u f \|_\infty=\pi$ and obtain EOCs of almost $2$, cf. Table \ref{tab:2}. \end{testproblem} \begin{testproblem}[Fully two-dimensional problem] \label{tp:twodim} In this test problem we consider a flux function $f$ such that the corresponding initial value problem is not equivalent to a family of one-dimensional problems. Note that the flux functions from the previous test problems have been of one-dimensional nature. To this end we define $f(x,u)=u V(x) + \frac{1}{2}u^2 W(x)$ for $x\in\mathbb S^2$ with initial values $u_0(x)= \mathbbmss 1_{\{x_1>0.15\}}(x)$ and observe EOCs of almost $2$, cf. Table \ref{tab:2}. 
\end{testproblem} \begin{table} \centering\scalebox{0.95}{ \begin{tabular}{|c|@{}p{0.2em}@{}|c|c|@{}p{0.2em}@{}|c|c|@{}p{0.2em}@{}|c|c|} \cline{3-4}\cline{6-7}\cline{9-10} \multicolumn{1}{c}{} & & \multicolumn{2}{|c|}{Test Problem \ref{tp:linear}} & & \multicolumn{2}{|c|}{Test Problem \ref{tp:Burgers}}& & \multicolumn{2}{|c|}{Test Problem \ref{tp:twodim}}\\ \cline{1-2}\cline{3-4}\cline{5-7}\cline{8-10} \multicolumn{1}{|c|}{level} & & \multicolumn{1}{|c|}{$L^1$-difference} & \multicolumn{1}{|c|}{EOC} & & \multicolumn{1}{|c|}{$L^1$-difference} & \multicolumn{1}{|c|}{EOC}& & \multicolumn{1}{|c|}{$L^1$-difference} & \multicolumn{1}{|c|}{EOC}\\ \cline{1-2}\cline{3-4}\cline{5-7}\cline{8-10} $0$ & & $0.112518$ & --- & &$0.0370831$ & --- & & $ 0.115867 $ & --- \\ $1$ & & $0.039167$ & $1.523$ & &$0.0133379$ & $1.475$ & & $ 0.035202 $ & $1.719$ \\ $2$ & & $0.011223$ & $1.803$ & &$0.0040350$ & $1.725$ & & $ 0.009566 $ & $1.880$ \\ $3$ & & $0.002984$ & $1.911$ & &$0.0011216$ & $1.847$ & & $ 0.002475 $ & $1.950$ \\ $4$ & & $0.000772$ & $1.951$ & &$0.0002992$ & $1.906$ & & $ 0.000630 $ & $1.975$ \\ $5$ & & $0.000197$ & $1.967$ & &$0.0000778$ & $1.944$ & & $ 0.000159 $ & $1.985$ \\ $6$ & & $0.000053$ & $1.891$ & &$0.0000199$ & $1.966$ & & $ 0.000040 $ & $1.989$ \\ \cline{1-2}\cline{3-4}\cline{5-7}\cline{8-10} \end{tabular}} \caption{$L^1$-difference and EOCs between curved approximate solution $u^h(T)$ and flat approximate solution $\bar u^h(T)$ from Test Problems \ref{tp:linear}, \ref{tp:Burgers} and \ref{tp:twodim}.} \label{tab:2} \end{table} \begin{testproblem}[2nd order scheme applied to Test Problem \ref{tp:independent}] \label{tp:hofvtp1} The motivation of this test problem is to show that, in general, even higher order schemes which are based on the flat finite volume schemes are not able to achieve higher order convergence rates for smooth data. 
To this end, we apply a second order finite volume scheme (which is validated in Test Problem \ref{tp:hofvtpval}) to Test Problem \ref{tp:independent}. This scheme is based on the flat finite volume scheme of first order (cf. Subsection \ref{subsec:fv_flat}) with $\lambda=0$, enhanced with a linear reconstruction and a second order Runge-Kutta method for time evolution. In Table \ref{tab:3} we observe EOCs of almost $1$. Indeed, the application of a second order finite volume scheme to Test Problem \ref{tp:independent} gives exactly the same convergence rates as a first order scheme since the linear reconstruction on each cell does not affect the numerical flux functions as $f$ is independent of $u$. We would like to point out that we do not have to compute the curved approximate solution as it coincides with the (constant) exact solution. \end{testproblem} \begin{testproblem}[Validation of the 2nd order scheme] \label{tp:hofvtpval} This test problem serves as validation of the second order finite volume scheme. We consider smooth initial values \begin{align*} u_0(x):=\frac{1}{10} \ \mathbbmss 1_{\{r(x)<1\}}(x)\ \exp \left( \frac{-2 \left( 1+r^2(x)\right)}{\left(1-r^2(x)\right)^2} \right) \end{align*} with $r(x):=\frac{|x_0-x|}{0.74}$ and $x_0:=(1,0,0)^T$ and a flux function $f(x,u):= u V(x)$, which transports the initial values around the sphere. For the error between the flat second order finite volume scheme (see Test Problem \ref{tp:hofvtp1}) and the exact solution, EOCs significantly higher than $1$ are shown in Table \ref{tab:3}. 
\end{testproblem} \begin{table} \centering\scalebox{0.95}{ \begin{tabular}{|c|@{}p{0.2em}@{}|c|c|@{}p{0.2em}@{}|c|c|} \cline{3-4}\cline{6-7} \multicolumn{1}{c}{} & & \multicolumn{2}{|c|}{Test Problem \ref{tp:hofvtp1}} & & \multicolumn{2}{|c|}{Test Problem \ref{tp:hofvtpval}}\\ \cline{1-2}\cline{3-4}\cline{5-7} \multicolumn{1}{|c|}{level} & & \multicolumn{1}{|c|}{$L^1$-difference} & \multicolumn{1}{|c|}{EOC} & & \multicolumn{1}{|c|}{$L^1$-error} & \multicolumn{1}{|c|}{EOC}\\ \cline{1-2}\cline{3-4}\cline{5-7} $0$ & & $0.777427$ & --- & &$0.00362492$ & --- \\ $1$ & & $0.444068$ & $0.808$ & &$0.00243102$ & $0.576$\\ $2$ & & $0.233521$ & $0.927$ & &$0.00112593$ & $1.115$\\ $3$ & & $0.119553$ & $0.966$ & &$0.00034768$ & $1.700$ \\ $4$ & & $0.060461$ & $0.983$ & &$0.00012006$ & $1.534$\\ $5$ & & $0.030400$ & $0.992$ & &$0.00003713$ & $1.693$\\ $6$ & & $0.015242$ & $0.996$ & &$0.00001116$ & $1.734$\\ \cline{1-2}\cline{3-4}\cline{5-7} \end{tabular}} \caption{$L^1$-difference and EOCs between second order curved approximate solution (which equals the exact solution in this case) and a second order flat approximate solution from Test Problem \ref{tp:hofvtp1} and $L^1$-error between the exact solution from Test Problem \ref{tp:hofvtpval} and its approximation by the second order finite volume scheme.} \label{tab:3} \end{table} \begin{testproblem}[Deforming Torus] \label{tp:movingsurface} We consider a deforming torus as computational domain $\Gamma$ and $T=4$ as final time. Within the time interval $[0,2]$ the right half of the torus undergoes compression whereas the left half is stretched, while $\Gamma(t)$ remains constant for $t\in[2,4]$. We choose a Burgers-type flux function $f=f(u,x)=\frac{1}{2} u^2 (x_2,-x_1,0)^T$ and constant initial values $u_0\equiv 1$. The time step size is chosen dynamically for each time step such that stability is guaranteed. In Figure \ref{fig:movingsurface} the numerical solution is shown at four different times. 
Note that in spite of the constant initial values, a shock wave is induced due to the change of geometry (compression and rarefaction) and the nonlinearity of the flux function. \end{testproblem} \begin{figure}[htbp] \centering \subfigure[$t=0$.]{ \includegraphics[width=0.45\textwidth]{image000.png} } \subfigure[$t=1.07$.]{ \includegraphics[width=0.45\textwidth]{image080.png} } \subfigure[$t=2.36$.]{ \includegraphics[width=0.45\textwidth]{image177.png} } \subfigure[$t=4$.]{ \includegraphics[width=0.45\textwidth]{image300.png} } \caption{Flat approximate solution for Test Problem \ref{tp:movingsurface} for four different times. The computation was performed on a deforming polyhedron consisting of about 3 million triangles.} \label{fig:movingsurface} \end{figure} \subsection{Implementation Aspects} \subsubsection{Software} All simulations have been performed within the DUNE-FEM module which is based on the Distributed and Unified Numerics Environment (DUNE) \cite{DKNO10}. As coarsest grid approximating the sphere we use an unstructured grid consisting of $632$ triangles, see Figure \ref{fig:sphere0}. For finer computations we refine the coarse macro grid (level $0$) and obtain up to $2.5$ million triangles for the finest grid (level $6$) whose vertices are projected onto the sphere, cf. Table \ref{tab:sphere}. 
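The refinement law behind Table \ref{tab:sphere} can be checked with a small script: uniform refinement splits every triangle into four via edge midpoints, and projecting the new vertices onto the sphere keeps the grid conforming. The following Python sketch is our own illustration, not the DUNE code; it uses an octahedron as a stand-in macro grid instead of the $632$-triangle one, and only demonstrates the quadrupling law that produces the cell counts of Table \ref{tab:sphere}.

```python
# Sketch (not the DUNE implementation): uniform refinement of a closed
# triangulation quadruples the number of triangles, so starting from the
# 632-triangle macro grid the level-l grid has 632 * 4**l cells.
import math

def refine(triangles):
    """Split each triangle into 4 via edge midpoints projected to the sphere."""
    def mid(p, q):
        m = [(a + b) / 2.0 for a, b in zip(p, q)]
        n = math.sqrt(sum(c * c for c in m))
        return tuple(c / n for c in m)          # projection x -> x/|x|
    out = []
    for a, b, c in triangles:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return out

# an octahedron gives 8 spherical triangles as a simple macro grid
v = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
tris = [(v[i], v[j], v[k]) for i in (0, 1) for j in (2, 3) for k in (4, 5)]
for level in range(4):
    assert len(tris) == 8 * 4**level
    tris = refine(tris)

# the sizes in the sphere-grid table follow the same law with 632 macro cells
assert 632 * 4**6 == 2588672
```

The mesh width $h$ halves with each level, consistent with the tabulated values.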
\begin{figure}[htbp] \begin{minipage}[b]{0.47\textwidth} \centering\scalebox{0.95}{ \begin{tabular}{|c|r|r|} \cline{1-3} \multicolumn{1}{|c|}{level} & \multicolumn{1}{|c|}{h} & \multicolumn{1}{|c|}{size} \\ \cline{1-3} $0$ & $0.033664$ & $632$ \\ $1$ & $0.016833$ & $2528$ \\ $2$ & $0.008416$ & $10112$ \\ $3$ & $0.004208$ & $40448$ \\ $4$ & $0.002104$ & $161792$ \\ $5$ & $0.001052$ & $647168$ \\ $6$ & $0.000526$ & $2588672$ \\ \cline{1-3} \end{tabular}} \captionof{table}{Different refinement levels of the sphere grid.} \label{tab:sphere} \end{minipage} \begin{minipage}[b]{0.47\textwidth} \centering \includegraphics[width=0.95\textwidth]{sphere0.png} \caption{The sphere grid of level $0$.} \label{fig:sphere0} \end{minipage} \end{figure} \subsubsection{Exact Computation of Spherical Volume} For the curved finite volume scheme on the sphere the exact outer conormals, exact lengths of boundary segments and exact volumes of spherical triangles need to be computed. While the computation of the former two quantities is an easy geometric exercise, we use the formula from \cite{OS83} for the computation of the latter. \subsubsection{Exact Computation of Numerical Flux Functions} For the exact evaluation of the numerical flux function corresponding to an edge $e$ of a grid cell $K$, quantities of the form $\int_{e} \langle V, \mu_{ K, e} \rangle\, de$ have to be computed. Note that $V$ can be written as $V = \nu \times \nabla h_V$ with $h_V(x)=2\pi x_3$, where $\nu(x):=x$ denotes the outer unit normal to $\mathbb S^2$. As a result, similar to \cite{DKM13}, we deduce \begin{align*} \int_{e} \langle V, \mu_{ K, e} \rangle\, de = \int_{e} \langle \mu_{ K, e} \times \nu, \nabla h_V \rangle\, de. \end{align*} As $\mu_{ K, e} \times \nu$ is a unit tangent vector to $e$, the integrand is a directional derivative along $e$ and thus the integral can be computed by the evaluation at the endpoints of $e$. 
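The endpoint formula can be checked numerically. The following Python sketch (our own illustration under an assumed constant-speed parametrization of the edge, not the paper's implementation) integrates $\langle V, \mu_{K,e}\rangle$ along a great-circle edge by the midpoint rule and compares the result with the difference of $h_V$ at the endpoints; the sign depends on the chosen orientation of the conormal.

```python
# Numerical check (a sketch) of the endpoint formula: for V = nu x grad(h_V)
# with h_V(x) = 2*pi*x_3, the edge integral int_e <V, mu> de equals
# +-(h_V(b) - h_V(a)) for a great-circle edge from a to b on the unit sphere.
import math

def h_V(x):
    return 2.0 * math.pi * x[2]

def V(x):
    return (2.0 * math.pi * x[1], -2.0 * math.pi * x[0], 0.0)

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def edge_flux_quadrature(a, b, n=20000):
    """Integrate <V, mu> along the great-circle arc from a to b (midpoint rule).
    mu = t x nu is a unit conormal of the edge; nu(p) = p on the unit sphere."""
    theta = math.acos(max(-1.0, min(1.0, sum(x * y for x, y in zip(a, b)))))
    total = 0.0
    for i in range(n):
        s = (i + 0.5) / n
        w1 = math.sin((1.0 - s) * theta) / math.sin(theta)
        w2 = math.sin(s * theta) / math.sin(theta)
        p = tuple(w1 * x + w2 * y for x, y in zip(a, b))      # point on the arc
        dp = tuple((-theta * math.cos((1 - s) * theta) * x
                    + theta * math.cos(s * theta) * y) / math.sin(theta)
                   for x, y in zip(a, b))                     # dp/ds, |dp/ds| = theta
        t = tuple(c / theta for c in dp)                      # unit tangent
        mu = cross(t, p)                                      # a unit conormal
        total += sum(v * m for v, m in zip(V(p), mu)) * theta / n
    return total

a, b = (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)   # quarter great circle, h_V jumps by 2*pi
q = edge_flux_quadrature(a, b)
assert abs(abs(q) - abs(h_V(b) - h_V(a))) < 1e-6
```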
Obviously, the same applies to $W$ with $h_W(x)=2\pi x_2$ and $W = \nu \times \nabla h_W$. \subsubsection{Computation of $L^1$-Norms} We remark that the $L^1$-differences between the flat and the curved approximate solutions are computed on the triangulation $\Gamma_h$. This does not have any influence on the convergence rates.\\ \\ \emph{Acknowledgements.} We gratefully acknowledge that the work of Thomas M\"uller was supported by the German Research Foundation (DFG) via SFB TR 71 `Geometric Partial Differential Equations'. Jan Giesselmann would like to thank the German Research Foundation (DFG) for financial support of the project `Modeling and sharp interface limits of local and non-local generalized Navier--Stokes--Korteweg Systems'. \section*{Appendix} Here we give the proof of Lemma \ref{lem:quotients}. \textit{Proof.} As we know that \[ \frac{| \bar K(t_n)|}{|\bar K(t_{n+1})|},\frac{| K(t_n)|}{| K(t_{n+1})|},\frac{| K(t_n)|}{|\bar K(t_{n+1})|} = 1 + \mathcal O(h), \ \frac{|K(t_n)|}{|\bar K(t_n)|} + \frac{|K(t_{n+1})| }{|\bar K(t_{n+1})|} = 2 + \mathcal O(h^2), \] it is sufficient to prove \begin{equation}\label{eq:quotients} \abs{\frac{|K(t_{n+1})|^2}{|K(t_n)|^2} - \frac{|\bar K(t_{n+1})|^2}{|\bar K(t_n)|^2} } \leq C |I_n| h, \end{equation} to obtain both assertions of the Lemma. Without loss of generality we assume the following situation: the triangle $K(t_n)$ is the convex hull of $(0,0,0), (h,0,0), (x,y,0).$ We define \[ \Phi^n(\cdot, t) : \Gamma(t_n) \rightarrow \Gamma(t), \quad \Phi^n(\cdot,t):= \Phi(\cdot,t) \circ \Phi(\cdot,t_n)^{-1}, \] such that $\Phi^n(\cdot,t_n)$ is the identity mapping. In addition, we define for vectors ${\bf a},{\bf b} \in \mathbb R^3$ \[ {\bf a} \odot {\bf b} \in \mathbb R^{2 \times 2}, \quad {\bf a} \odot {\bf b} := \left( \begin{array}{cc} \langle {\bf a},{\bf a} \rangle & \langle {\bf a},{\bf b} \rangle\\ \langle {\bf a},{\bf b} \rangle & \langle {\bf b},{\bf b} \rangle \end{array}\right). 
\] and observe the identities \begin{align}\label{eq:oprod} \norm{ {\bf a} \times {\bf b}}^2 = \det( {\bf a} \odot {\bf b}) \quad \text{and} \quad \det( {\bf a} \odot {\bf b}) = \det(( {\bf a} + \lambda {\bf b}) \odot {\bf b})\quad \forall \lambda \in \mathbb R. \end{align} We denote the canonical projection $\bar K(t_n) \to K(t_n)$ by $c$ and abbreviate $\Phi^n \circ c$ by $\Phi^n_c$. Obviously we have \begin{equation}\label{eq:kqn} |\bar K(t_n)| = \frac{1}{2} h y \end{equation} and \begin{equation*} \aligned 4|\bar K(t_{n+1})|^2 =& \|\left( \Phi^n((x,y,0),t_{n+1}) - \Phi^n((0,0,0),t_{n+1}) \right)\\ & \times \left( \Phi^n((h,0,0),t_{n+1}) - \Phi^n((0,0,0),t_{n+1})\right) \|^2\\ =&\det \Big( \left( \Phi^n(c(x,y,0),t_{n+1}) - \Phi^n(c(0,0,0),t_{n+1}) \right)\\ & \odot \left( \Phi^n(c(h,0,0),t_{n+1}) - \Phi^n(c(0,0,0),t_{n+1}) \right) \Big) \\ \endaligned \end{equation*} as the vertices of $\bar K(t_n)$ and $K(t_n)$ coincide. Continuing the computation using \eqref{eq:oprod} we get \begin{equation}\label{eq:kqnp} \aligned &4|\bar K(t_{n+1})|^2\\ =& \det \Bigg( \left((x,y,0) + \int_{I_n} \left( \partial_\tau\Phi^n(c(x,y,0),\tau) - \partial_\tau \Phi^n(c(0,0,0),\tau) \right) \, d\tau\right)\\ & \odot \left( (h,0,0) + \int_{I_n} \left( \partial_\tau \Phi^n(c(h,0,0),\tau) - \partial_\tau\Phi^n(c(0,0,0),\tau) \right)d\tau \right) \Bigg) \\ =& \det \Bigg( \left((x,y,0) + \int_{I_n} \partial_y\partial_\tau\Phi^n_c((x_1,y_1,0),\tau) y + \partial_x\partial_\tau\Phi^n_c((x_2,y_2,0),\tau)x \, d\tau\right)\\ & \odot \left( (h,0,0) + \int_{I_n} \left( \partial_x\partial_\tau\Phi^n_c((x_3,0,0),\tau) h \right)d\tau \right) \Bigg) \\ =& \det \Bigg( \Big((0,y,0) + \int_{I_n} \partial_y\partial_\tau\Phi^n_c((x_1,y_1,0),\tau) y d\tau\\ & + \int_{I_n}\left( \partial_x\partial_\tau\Phi^n_c((x_2,y_2,0),\tau) - \partial_x\partial_\tau\Phi^n_c((x_3,0,0),\tau)\right)x \, d\tau\Big)\\ & \odot \left( (h,0,0) + \int_{I_n} \left( \partial_x\partial_\tau\Phi^n_c((x_3,0,0),\tau) h \right)d\tau 
\right) \Bigg) . \endaligned \end{equation} Combining \eqref{eq:kqn} and \eqref{eq:kqnp} we get \begin{equation}\label{eq:kq_quotient} \aligned &\frac{|\bar K(t_{n+1})|^2}{|\bar K(t_n)|^2}\\ =& \det \Big( \Big((0,1,0) + \int_{I_n} \partial_y\partial_\tau\Phi^n_c((x_1,y_1,0),\tau) d\tau\\ & + \int_{I_n}\left( \partial_x\partial_\tau\Phi^n_c((x_2,y_2,0),\tau) - \partial_x\partial_\tau\Phi^n_c((x_3,0,0),\tau)\right)\frac{x}{y} \, d\tau\Big)\\ & \odot \left( (1,0,0) + \int_{I_n} \left( \partial_x\partial_\tau\Phi^n_c((x_3,0,0),\tau) \right)d\tau \right) \Big) \\ =& 1 + \left\langle (0,1,0), \int_{I_n} \partial_y\partial_\tau\Phi^n_c((x_1,y_1,0),\tau) d\tau \right\rangle\\ &+ \left\langle(0,1,0), \int_{I_n}\left( \partial_x\partial_\tau\Phi^n_c((x_2,y_2,0),\tau) - \partial_x\partial_\tau\Phi^n_c((x_3,0,0),\tau)\right)\frac{x}{y} \, d\tau \right\rangle \\ &+ \left\langle (1,0,0),\int_{I_n} \left( \partial_x\partial_\tau\Phi^n_c((x_3,0,0),\tau) \right)d\tau \right\rangle + \mathcal O (|I_n|^2). \endaligned \end{equation} Here we used that \begin{equation}\label{eq:matrix} \det (A+ B)= \det A + \operatorname{tr}_A(B) +\mathcal O( \|B\|^2), \ \forall A,B \in \mathbb R^{2 \times 2}. \end{equation} Now we turn to the quotient of the areas of the curved faces and remark that \begin{equation} | K(t_{n+1})| = |\Phi^n(K(t_n),t_{n+1})| + \mathcal O(|I_n|\, h \, |\bar K(t_{n+1})|), \end{equation} cf. \cite{LNR11} for a proof. \begin{equation} \aligned |\Phi^n(K(t_n),t_{n+1})| &= \int_{\bar K(t_{n})} \sqrt{\det \left( \partial_x \Phi^n_c \odot \partial_y \Phi^n_c\right)}\\ &=\int_{\bar K(t_{n})} \sqrt{\frac{\det \left( \partial_x \Phi^n_c \odot \partial_y \Phi^n_c\right)}{\det( \partial_x c \odot \partial_y c)}} \sqrt{\det( \partial_x c \odot \partial_y c)}\\ &= \sqrt{\frac{\det \left( \partial_x \Phi^n_c \odot \partial_y \Phi^n_c\right)}{\det( \partial_x c \odot \partial_y c)}}((x_4,y_4,0),t_{n+1}) | K(t_{n})|. 
\endaligned \end{equation} Using \eqref{eq:partialc} we get for the quotient \begin{equation}\label{eq:k_quotient} \aligned &\frac{| K(t_{n+1})|^2}{| K(t_{n})|^2} = \frac{\det \left( \partial_x \Phi^n_c \odot \partial_y \Phi^n_c\right)}{\det( \partial_x c \odot \partial_y c)}((x_4,y_4,0),t_{n+1}) + \mathcal O(|I_n|h)\\ &=\frac{\left\|\! \left( \partial_x c +\! \int_{I_n}\!\! \partial_\tau \partial_x \Phi^n_c ((x_4,y_4,0),\tau) d\tau \right) \!\! \times\!\! \left( \partial_y c +\! \int_{I_n}\!\! \partial_\tau \partial_y \Phi^n_c ((x_4,y_4,0),\tau) d\tau \right)\! \right\|^2}{\det\left( \partial_x c \odot \partial_y c \right)}\\ &\stackrel{\eqref{eq:matrix}}{=} 1 + \left\langle (1,0,0) , \int_{I_n} \partial_\tau \partial_x \Phi^n_c ((x_4,y_4,0),\tau) d\tau \right\rangle\\ &\quad + \left\langle (0,1,0) , \int_{I_n} \partial_\tau \partial_y \Phi^n_c ((x_4,y_4,0),\tau) d\tau \right\rangle + \mathcal O (|I_n| h ). \endaligned \end{equation} The statement of the Lemma follows by considering the difference of \eqref{eq:k_quotient} and \eqref{eq:kq_quotient} and the fact that the smoothness of $\Phi^n\circ c $ only depends on the smoothness of $\Phi$. \qed \bibliographystyle{plain}
https://arxiv.org/abs/1703.06205
Dwell time for switched systems with multiple equilibria on a finite time-interval
We describe the behavior of solutions of switched systems with multiple globally exponentially stable equilibria. We introduce an ideal attractor and show that the solutions of the switched system stay in any given $\varepsilon$-inflation of the ideal attractor if the frequency of switchings is slower than a suitable dwell time $T$. In addition, we give conditions to ensure that the $\varepsilon$-inflation is a global attractor. Finally, we investigate the effect of the increase of the number of switchings on the total time that the solutions need to go from one region to another.
\section{Introduction} Dwell time is the lower bound on the time between successive switchings of the switched system \begin{equation}\label{ss} \dot x=f_{u(t)}(x),\qquad u(t)\mbox{ is a piecewise constant function, } x\in\mathbb{R}^n, \end{equation} which ensures a required dynamic behavior under the assumption that each of the subsystems \begin{equation}\label{ssi} \dot x=f_{u}(x),\qquad u\in\mathbb{R},\ x\in\mathbb{R}^n, \end{equation} possesses a globally stable equilibrium $x_u.$ When all the equilibria $\{x_{u(t)}\}_{t\ge t_0}$ coincide, the dwell time $T>0$ which gives global exponential stability of the common equilibrium $x_0$ is computed e.g. in Liberzon \cite[\S3.2.1]{liberzon}. Specifically, the result of \cite[\S3.2.1]{liberzon} gives a formula for $T$ which makes $x_0$ globally exponentially stable for any piecewise constant function $u(t)$ whose discontinuities $t_1,t_2,\ldots$ verify \begin{equation}\label{p} |t_i-t_{i-1}|\ge T. \end{equation} The case where the equilibria are distinct is covered in Alpcan-Basar \cite{alpcan}, who offered a dwell time $T$ that ensures global exponential stability of a suitable set $A\supset \{x_{u(t)}\}_{t\ge t_0}$ for any $u(t)$ whose discontinuities verify (\ref{p}). The problem of stability of switched systems with multiple equilibria appears e.g. in differential games, load balancing, agreement and robotic navigation (see \cite{alpcan,spong} and references therein). \vskip0.2cm \noindent A deeper analysis of the dynamics of switched systems with multiple equilibria was recently carried out in Xu et al \cite{xu}, who gave a sharp formula for the attractor $A$ in the case of quasi-linear switched systems (\ref{ss}). Assuming that $u(t)$ is periodic and denoting by $t\mapsto X_u(t,x)$ the solution of (\ref{ssi}) with the initial condition $X_u(0,x)=x$, the paper \cite{xu} investigated the asymptotic attractivity of $$ A=\bigcup_{t\ge t_0,\ \tau\ge t_0}\{X_{u(\tau)}(t,x_{u(\tau)})\}. 
$$ \noindent The motivation for our paper comes from the problem of planning the motion of a 3-D walking robot, where ``turn left'', ``walk straight'' and ``turn right'' correspond to $u(t)=-1$, $u(t)=0$ and $u(t)=1$ respectively, see Gregg et al \cite{gregg}. It is not the asymptotic attractivity of $A$ which is of importance for the robot turning maneuver but rather an appropriate attractivity of $A$ during the time of the maneuver. The goal of this paper is to provide a dwell time which can ensure the required attractivity. \vskip0.2cm \noindent The paper is organized as follows. In the next section, we prove our main result (Theorem~\ref{thm1}). Given $\varepsilon>0$, Theorem~\ref{thm1} provides a dwell time $T>0$ such that the solutions of (\ref{ss}) with the initial conditions in the $\varepsilon$-neighborhood $B_\varepsilon(A)$ of $A$ never leave $B_\varepsilon(A)$ in the future. Theorem~\ref{thm1} can be viewed as a version of \cite[Theorem~1]{xu} for fully nonlinear systems. In section 3, we compute (Theorem~\ref{thm2}) a dwell time to ensure that the attractor $B_\varepsilon(A)$ is reached asymptotically from any initial condition. The proof of Theorem~\ref{thm2} follows the ideas of Alpcan-Basar \cite{alpcan}. However, we offer weaker conditions where the Lyapunov functions of subsystems (\ref{ssi}) are not required to satisfy any uniform estimates. A particular case study where the Lyapunov functions of subsystems (\ref{ssi}) are shifts of one another is addressed in section~4. In this section, we consider a switched system which switches between two subsystems $u=u_1$ and $u=u_2$ and analyze the solutions of the switched system with the initial conditions in $B_\varepsilon(A)$. Let $x_1$ and $x_2$ be the equilibria of subsystems $u=u_1$ and $u=u_2$ respectively. 
The result of section~4 (Theorem~\ref{thm41}) clarifies whether or not the solutions from the neighborhood of $x_1$ reach the neighborhood of $x_2$ faster if the switching signal is amended in such a way that an additional switching occurs between $u=u_1$ and $u=u_2.$ In other words, section~4 investigates whether adding more discrete events alone can make the dynamics inside $B_\varepsilon(A)$ faster. Examples~\ref{ex1} and \ref{ex2} illustrate the conclusions of Theorems~\ref{thm1} and \ref{thm41}. \section{The local trapping region} Let $x_u$ be the unique equilibrium of (\ref{ssi}). We assume that for any $u$, system (\ref{ssi}) admits a global Lyapunov function $V_u$ such that \begin{eqnarray} && \alpha_u(\|x-x_u\|)\le V_u(x)\le \beta_u(\|x-x_u\|),\qquad x\in\mathbb{R}^n,\label{ab}\\ && (V_u)'(x)f_u(x)\le -k_u V_u(x),\qquad x\in\mathbb{R}^n,\label{k} \end{eqnarray} where $\alpha_u$, $\beta_u$ are strictly increasing functions with $\alpha_u(0)=\beta_u(0)=0$, and $k_u>0.$ Introduce the following trapping regions \begin{equation}\label{Ni} \renewcommand*{\arraystretch}{1.5} \begin{array}{rcl} N_u^\varepsilon&=& \left\{x:V_u(x)\le \varepsilon \right\},\\ L_{u_1,u_2}^\varepsilon (t)& = & \bigcup\limits_{x\in N_{u_1}^\varepsilon}\left\{X_{u_2}(t,x)\right\} \end{array} \end{equation} and define the dwell time that the solutions need to go from $N_{u_1}^\varepsilon$ to $N_{u_2}^\varepsilon$ as \begin{equation}\label{Ti} T_{u_1,u_2}^\varepsilon=-\dfrac{1}{k_{u_2}}\ln\dfrac{\varepsilon}{\beta_{u_2}\left(\|x_{u_2}-x_{{u_1}}\|+\alpha_{{u_1}}^{-1}(\varepsilon)\right)}. 
\end{equation} \begin{theorem}\label{thm1} Assume that \begin{itemize} \item[(A1)] $f_u\in C^1(\mathbb{R}^n,\mathbb{R}^n)$ for any $u\in\mathbb{R}$, \item[(A2)] for any $u\in\mathbb{R}$, system (\ref{ssi}) admits an equilibrium $x_u$ whose Lyapunov function $V_u$ satisfies (\ref{ab})-(\ref{k}), where $\alpha_u,\beta_u\in C^0(\mathbb{R},\mathbb{R})$ are strictly increasing functions, $\alpha_u(0)=\beta_u(0)=0,$ $k_u>0,$ \item[(A3)] $u:[t_0,\infty)\to \mathbb{R}$ is a piecewise constant function. \end{itemize} Let $\{t_1,t_2,\ldots\}=\{t_i\}_{i\in I}$ be a finite or infinite increasing sequence of points of discontinuity of $u$ and $$ u_i=u(t_i+0). $$ If $$ t_i-t_{i-1}\ge T_{u_{i-1},u_i}^\varepsilon, $$ then for any solution $x$ of (\ref{ss}) with $$ x(t_{i-1})\in N_{u_{i-1}}^\varepsilon,\quad i\in I, $$ one has \begin{eqnarray} & x(t)\in L_{u_{i-1},u_i}^\varepsilon(t), &\ \ t_{i-1}\le t\le t_i,\ \ \ i\in I,\label{prove1} \\ &x(t_i)\in N_{u_i}^\varepsilon, & \ \ i\in I. \label{prove2} \end{eqnarray} \end{theorem} \noindent {\bf Proof.} We only have to prove (\ref{prove2}) because the validity of (\ref{prove1}) follows directly from the definition of $L_{u_{i-1},u_i}^\varepsilon$. Let $x(t_{i-1})\in N_{u_{i-1}}^\varepsilon.$ Our goal is to show that $x(t_i)\in N_{u_i}^\varepsilon.$ Given $\varepsilon>0$, define $\delta>0$ as $$ \delta=\beta_{u_i}\left(\|x_{u_i}-x_{u_{i-1}}\|+\alpha_{u_{i-1}}^{-1}(\varepsilon)\right). $$ \begin{figure}[h]\center \includegraphics[scale=0.7]{fig.pdf}\ \ \ \ \vskip-0.2cm \caption{\footnotesize Illustration of the proof of Theorem~\ref{thm1}. The two gray rings are the estimates for the sets $\{x:V_{u_{i-1}}(x)=\varepsilon\}$ and $\{x:V_{u_{i}}(x)=\delta\}$ given by condition (\ref{ab}).} \label{fig11} \end{figure} \noindent By construction (see Fig.~\ref{fig11}), $N_{u_i}^\delta\supset N_{u_{i-1}}^\varepsilon$, and so $x(t_{i-1})\in N_{u_i}^\delta$. Introduce $$ v(t)=V_{u_i}(x(t)). 
$$ By (\ref{k}) we have \begin{eqnarray*} \dot v(t)&\le &-k_{u_i} v(t),\quad t_{i-1}\le t\le t_i,\\ v(t_{i-1})&\le &\delta. \end{eqnarray*} By the comparison lemma (see e.g. \cite[Lemma~16.4]{amann}), it holds that $$ v(t)\le p(t), $$ where $p(t)$ is the solution of \begin{eqnarray*} \dot p(t)&= &-k_{u_i} p(t),\quad t_{i-1}\le t\le t_i,\\ p(t_{i-1})&= &\delta. \end{eqnarray*} At the same time, $$ p(t_i)=e^{-{k_{u_i}}(t_i-t_{i-1})}\delta \le e^{-k_{u_i} T_{u_{i-1},u_i}^\varepsilon}\delta=\dfrac{\varepsilon}{\beta_{u_i}\left(\|x_{u_i}-x_{u_{i-1}}\|+\alpha_{u_{i-1}}^{-1}(\varepsilon)\right)}\delta=\varepsilon. $$ Therefore, $V_{u_i}(x(t_i))\le\varepsilon$, which completes the proof.\hfill $\square$ \vskip0.2cm \noindent Theorem~\ref{thm1} suggests the following definition of the $\varepsilon$-inflation $A_\varepsilon$ of the ideal attractor $A$ of (\ref{ss}). Given a function $u:[t_0,\infty)\to \mathbb{R}$ and the respective increasing sequence $(t_1,t_2,\ldots)=\{t_i\}_{i\in I}$, let $u_i=u(t_i+0)$ and $$ \renewcommand*{\arraystretch}{1.5} A_\varepsilon(t)=\left\{\begin{array}{lll} L_{u_{i-1},u_i}^\varepsilon(t), & t_{i-1}\le t<t_i, & i\in I,\\ L_{u_{\max (I)-1},u_{\max (I)}}^\varepsilon(t), & t\ge t_{\max (I)}, & {\rm if}\ I\ {\rm is\ finite.}\end{array}\right. $$ \begin{cor}\label{Tloc} Let the assumptions (A1)-(A3) of Theorem~\ref{thm1} hold. If $$ t_i-t_{i-1}\ge \sup\limits_{i\in I} T_{u_{i-1},u_i}^\varepsilon=:T_{loc}^\varepsilon, \quad i\in I, $$ then, for any solution $x$ of (\ref{ss}) with the initial condition $$ x(t_{0})\in A_\varepsilon (t_{0}), $$ one has $$ x(t)\in A_\varepsilon(t),\quad t\ge t_{0}. 
$$ \end{cor} \noindent Note that $\sup\limits_{i\in I} T_{u_{i-1},u_i}^\varepsilon$ is finite when $t\mapsto u(t)$ takes a finite number of values on $[t_0,\infty).$ \begin{ex}\label{ex1} To illustrate Theorem~\ref{thm1}, we consider the following switched system (slightly modified from Example~2 in \cite{alpcan}) \begin{equation}\label{ss_ex1} \dot x=\left(\begin{array}{cc} -1 & -1 \\ 1 & -1 \end{array}\right)x+\left(\begin{array}{c} u \\ 1\end{array}\right), \end{equation} whose unique equilibrium is given by $$ x_u=\dfrac{1}{2}\left(\begin{array}{c} u-1\\ u+1 \end{array} \right). $$ Introduce the three discrete states $u_1,$ $u_2,$ and $u_3$ as $$ u_1=1,\qquad u_2=0,\qquad u_3=-1, $$ and consider $$ \varepsilon=0.05. $$ If the Lyapunov function $V_u(x)$ is selected as $$ V_u(x)=\|x-x_u\|^2, $$ then formulas (\ref{Ni}) and (\ref{Ti}) yield $$ N_u^\varepsilon=\left\{x:\|x-x_u\|\le \sqrt{\varepsilon}\right\},\quad T_{loc}^\varepsilon\approx 1.426. $$ Therefore, for the control input \begin{equation}\label{ci} u(t)=\left\{\begin{array}{l}u_1,\quad t\in[0,T),\\ u_2,\quad t\in[T,2T),\\ u_3,\quad t\ge 2T,\end{array}\right.\qquad T=1.43, \end{equation} and for any solution $x$ of (\ref{ss_ex1}), Theorem~\ref{thm1} ensures the following: $$ {\rm if} \ x(0)\in N^\varepsilon_{u_1}, \ {\rm then} \ x(T)\in N^\varepsilon_{u_2} \ {\rm and} \ x(2T)\in N^\varepsilon_{u_3}. $$ \end{ex} \noindent Figure~\ref{fig1}(left) documents the sharpness of the dwell time $T$. Indeed, the figure shows that if the initial condition $x(0)$ lies just slightly outside $N^\varepsilon_{u_1}$, then the dwell time $T$ is no longer sufficient to get $x(T)\in N^\varepsilon_{u_2}$ (though we still have $x(2T)\in N^\varepsilon_{u_3}$ for this solution). 
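The numbers in Example~\ref{ex1} can be reproduced with a short script. The sketch below is our own check, not part of the original experiments: it evaluates the dwell time formula (\ref{Ti}) for $V_u(x)=\|x-x_u\|^2$, i.e. $\alpha_u(r)=\beta_u(r)=r^2$ and $k_u=2$ (the symmetric part of the system matrix is $-I$), and verifies the trapping property using the closed-form solution $e^{At}=e^{-t}R(t)$, where $R(t)$ is the rotation by angle $t$.

```python
# A sketch verifying Example 1 numerically (assuming V_u(x) = ||x - x_u||^2,
# so alpha(r) = beta(r) = r^2 and k = 2).  Each linear subsystem is solved in
# closed form: exp(At) = exp(-t) * R(t) with R(t) a rotation by angle t.
import math

def x_eq(u):                      # equilibrium of subsystem u
    return (0.5 * (u - 1.0), 0.5 * (u + 1.0))

def flow(x0, xu, t):              # x(t) = x_u + e^{-t} R(t) (x0 - x_u)
    dx, dy = x0[0] - xu[0], x0[1] - xu[1]
    c, s = math.cos(t), math.sin(t)
    return (xu[0] + math.exp(-t) * (c * dx - s * dy),
            xu[1] + math.exp(-t) * (s * dx + c * dy))

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

eps, k = 0.05, 2.0
def dwell(u_prev, u_next):        # dwell time T^eps with alpha = beta = r^2
    d = dist(x_eq(u_prev), x_eq(u_next))
    return -math.log(eps / (d + math.sqrt(eps)) ** 2) / k

T_loc = max(dwell(1, 0), dwell(0, -1))
assert abs(T_loc - 1.426) < 1e-3

# trapping: start on the boundary of N_{u1}^eps and switch every T = 1.43
T = 1.43
x = (x_eq(1)[0] + math.sqrt(eps), x_eq(1)[1])   # boundary point of N_{u1}^eps
x = flow(x, x_eq(0), T)
assert dist(x, x_eq(0)) <= math.sqrt(eps)        # x(T)  in N_{u2}^eps
x = flow(x, x_eq(-1), T)
assert dist(x, x_eq(-1)) <= math.sqrt(eps)       # x(2T) in N_{u3}^eps
```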
\begin{figure}[h]\center \includegraphics[scale=0.44]{pic1.pdf}\ \ \ \ \includegraphics[scale=0.44]{pic2rev.pdf} \vskip-0.4cm \caption{\footnotesize Left: Solutions of switched system (\ref{ss_ex1}) with initial conditions (blue dots) inside $N^{0.05}_{u_1}$, on the boundary of $N^{0.05}_{u_1}$, and outside $N^{0.05}_{u_1}$, and for the control input $u(t)$ given by (\ref{ci}). Right: The solution of switched system (\ref{ss_ex1}) with the initial condition $(-0.5,0.5)^T$ for the $4T$-periodic control input (\ref{4Tci}).} \label{fig1} \end{figure} \vskip0.2cm \noindent To demonstrate that trapping regions $N^\varepsilon_{u_1}$, $N^\varepsilon_{u_2}$, $N^\varepsilon_{u_3}$ (and thus the $\varepsilon$-inflated attractor $A_\varepsilon$, see Corollary~\ref{Tloc}) provide a rather sharp estimate for the location of the attractor of (\ref{ss_ex1}), we extend the input $u(t)$ to $[0,4T]$ as \begin{equation}\label{4Tci} u(t)=\left\{\begin{array}{l}u_1,\quad t\in[0,T),\\ u_2,\quad t\in[T,2T),\\ u_3,\quad t\in[2T,3T),\\ u_2,\quad t\in[3T,4T), \end{array}\right. \end{equation} and then continue it to the entire $[0,\infty)$ by $4T$-periodicity. The respective solution $x$ of (\ref{ss_ex1}) with the initial condition $x(0)=(-0.5,0.5)^T$ is plotted in Fig.~\ref{fig1} (right). The figure shows that the switching points of the solution $x$ are very close to the boundaries of the trapping regions $N^\varepsilon_{u_1}$, $N^\varepsilon_{u_2}$, $N^\varepsilon_{u_3}$, i.e. there is little room left to shrink those regions. \section{Global attractivity of the local trapping region} \begin{theorem} \label{thm2} Let the assumptions (A1)-(A3) of Theorem~\ref{thm1} hold and $I$ be infinite. Fix $\varepsilon>0$ and suppose that there exist constants $\mu_i(\varepsilon)$ such that \begin{equation*} \frac{V_{u_{i+1}}(x)}{V_{u_i}(x)}\leq \mu_i(\varepsilon), \quad x\in\mathbb{R}^n\setminus N_{u_i}^{\varepsilon}, \quad i\in\mathbb{N}\cup\{0\}. 
\end{equation*} Finally, assume that \begin{equation*} \mu_0(\varepsilon)\cdot\dotsc\cdot\mu_i(\varepsilon)e^{\displaystyle -\int_{t_0}^{t_{i+1}}k_{u(s)}ds}\to 0, \quad \text{as } i\to\infty. \end{equation*} Then, $x(\hat{T})\in N_{u_i}^{\varepsilon}$ for some $\hat{T}>0$ and some $i\in\mathbb{N}$. \end{theorem} \noindent{\bf Proof.} Let $W(t)=e^{k_{u(t)}t}V_{u(t)}(x(t))$, where $t\in[t_0,\infty)$. Then, for $t\in[t_i,t_{i+1})$, \begin{equation*} W'(t)=k_{u_i}W(t)+e^{k_{u_i}t}\frac{d}{dt}V_{u_i}(x(t))\leq k_{u_i}W(t)-k_{u_i}e^{k_{u_i}t}V_{u_i}(x(t))=0, \end{equation*} which means that $W$ is nonincreasing on $[t_i,t_{i+1})$. In particular, \begin{equation*} W(t_i^+)\geq W(t_{i+1}^-). \end{equation*} On the other hand, \begin{equation*} \frac{W(t_{i+1}^+)} {W(t_{i+1}^-)} = \frac{e^{k_{u_{i+1}}t_{i+1}}} {e^{k_{u_i}t_{i+1}}} \cdot \frac{V_{u_{i+1}}(x(t_{i+1}))} {V_{u_i}(x(t_{i+1}))} \leq \frac{e^{k_{u_{i+1}}t_{i+1}}} {e^{k_{u_i}t_{i+1}}} \cdot \mu_i(\varepsilon)=:\tilde{\mu}_i. \end{equation*} Therefore, \begin{equation}\label{eq:tildeInequality1} \tilde{\mu}_iW(t_i^+) \geq \tilde{\mu}_iW(t_{i+1}^-) \geq W(t_{i+1}^+). \end{equation} Replacing $i$ by $i-1$ and combining with (\ref{eq:tildeInequality1}), one gets $\tilde{\mu}_{i-1}\tilde{\mu}_i W(t_{i-1}^+)\geq W(t_{i+1}^+)$. Continuing this process for $i-2,i-3,$ etc. we obtain \begin{equation*} \tilde{\mu}_0\cdot\dotsc\cdot\tilde{\mu}_iW(t_0^+) \geq W(t_{i+1}^+). \end{equation*} Multiplying by $e^{-k_{u_{i+1}}t_{i+1}}$ yields \begin{equation*} \tilde{\mu}_0\cdot\dotsc\cdot \tilde{\mu}_i e^{-k_{u_{i+1}}t_{i+1}}W(t_0^+) \geq V_{u_{i+1}}(x(t_{{i+1}})), \end{equation*} or, equivalently, \begin{equation*} V_{u_0}(x(t_0))e^{-\sum_{j=0}^{i}k_{u_j}(t_{j+1}-t_j)} \mu_0(\varepsilon)\cdot\dotsc\cdot \mu_i(\varepsilon) \geq V_{u_{i+1}}(x(t_{i+1})). \end{equation*} The left-hand side approaches 0 as $i\to\infty$ by the assumption of the theorem. Therefore, $V_{u_{i+1}}(x(t_{i+1}))\to0$ as $i\to\infty$. The proof is complete. 
\hfill $\square$ \begin{cor} {\bf (Alpcan-Basar \cite{alpcan})} \label{alpcan} Let the assumptions (A1)-(A3) of Theorem~\ref{thm1} hold and let $I$ be infinite. Fix $\varepsilon>0$ and suppose that there exists a constant $\mu(\varepsilon)>1$ such that $$ \dfrac{V_{u(t)}(x)}{V_{u(\tau)}(x)}\le\mu(\varepsilon),\quad x\in\mathbb{R}^n\backslash N^\varepsilon_{u(\tau)},\quad \tau,t\ge t_0. $$ Finally, assume that $k=\inf\limits_{t\ge t_0}k_{u(t)}>0$ and consider $T^\varepsilon_{glob}$ satisfying $$ T^\varepsilon_{glob}>\dfrac{\ln(\mu(\varepsilon))}{k}. $$ If $$ t_i-t_{i-1}\ge T^\varepsilon_{glob},\quad i\in\mathbb{N}, $$ then $x(\hat T)\in N_{u_i}^\varepsilon$ for some $\hat T>0$ and some $i\in \mathbb{N}.$ \end{cor} \noindent {\bf Proof.} Let $\gamma>0$ be such that $T^\varepsilon_{glob}=\dfrac{\ln(\mu(\varepsilon)+\gamma)}{k}$. Let $\mu_0(\varepsilon),\ldots,\mu_i(\varepsilon)$ be as in Theorem~\ref{thm2}; by the assumption of the corollary, one can take $\mu_j(\varepsilon)\le\mu(\varepsilon)$ for all $j$. Then \begin{eqnarray*} && e^{-\sum_{j=0}^i k_{u_{j}}(t_{j+1}-t_j)}\mu_0(\varepsilon)\cdot\ldots\cdot\mu_i(\varepsilon)\le e^{-k(i+1)T^\varepsilon_{glob}}\mu(\varepsilon)^{i+1}=\\ && =\left(\mu(\varepsilon)+\gamma\right)^{-(i+1)}\mu(\varepsilon)^{i+1}=\left(\dfrac{\mu(\varepsilon)}{\mu(\varepsilon)+\gamma}\right)^{i+1}\to 0\ \ {\rm as}\ \ i\to\infty. \end{eqnarray*} \hfill$\square$ \begin{cor} Let the conditions of Corollary~\ref{alpcan} hold. Let $T^\varepsilon_{loc}$ and $T^\varepsilon_{glob}$ be those given by Corollaries \ref{Tloc} and \ref{alpcan}. If $$ t_i-t_{i-1}\ge \max\left\{T^\varepsilon_{loc},T^\varepsilon_{glob}\right\},\quad i\in \mathbb{N}, $$ then, for any solution $x$ of (\ref{ss}), there exists $\hat T>t_0$ such that $$ x(t)\in A_\varepsilon(t),\quad t\ge \hat T. $$ \end{cor} \section{Dependence of the dwell time on the number of discrete states} \noindent Suppose that $u(t)$ switches from $u_0$ to $u_1$ at $t=t_0$.
According to Theorem~\ref{thm1}, it takes at most time $T_{u_0,u_1}^\varepsilon$ (see formula (\ref{Ti})) for a trajectory $x$ of (\ref{ss}) to go from $N_{u_0}^\varepsilon$ to $N_{u_1}^\varepsilon.$ The next theorem shows that adding more discrete states between $u_0$ and $u_1$ makes the travel time from $N_{u_0}^\varepsilon$ to $N_{u_1}^\varepsilon$ longer. \begin{theorem} \label{thm41} Let the assumptions (A1)-(A2) of Theorem~\ref{thm1} hold and suppose that $\alpha_u=:\alpha$, $\beta_u=:\beta$, $k_u=:k$ do not depend on $u$. Fix $d>0$ and $r>0$. Then there exists $\varepsilon_0>0$ such that $$ T_{u_0,u_1}^\varepsilon<{T}_{u_0,v}^\varepsilon+{T}_{v,u_1}^\varepsilon, $$ for any \begin{equation}\label{ver} \varepsilon\in(0,\varepsilon_0),\ \|x_{u_0}\|\le d,\ \|x_{u_1}\|\le d, \ \|x_{u_0}-x_{v}\|\ge r, \ \|x_{v}-x_{u_1}\|\ge r. \end{equation} \end{theorem} \vskip0.2cm \noindent {\bf Proof.} By formula (\ref{Ti}) one has \begin{eqnarray} T_{u_0,u_1}^\varepsilon- T_{u_0,v}^\varepsilon-T_{v,u_1}^\varepsilon &=& -\dfrac{1}{k}\ln\dfrac{\varepsilon}{\beta\left(\|x_{u_0}-x_{u_1}\|+\alpha^{-1}(\varepsilon)\right)}+\nonumber\\ && +\dfrac{1}{k}\ln\dfrac{\varepsilon}{\beta\left(\|x_{u_0}-x_{v}\|+\alpha^{-1}(\varepsilon)\right)}+\nonumber\\ &&+\dfrac{1}{k}\ln\dfrac{\varepsilon}{\beta\left(\|x_{v}-x_{u_1}\|+\alpha^{-1}(\varepsilon)\right)}=\nonumber\\ &=& -\ln \dfrac{K}{\varepsilon^{1/k}},\label{f4} \end{eqnarray} where $$ K={\left(\dfrac{\beta\left(\|x_{ v}-x_{ u_1}\|+\alpha^{-1}(\varepsilon)\right)}{\beta\left(\|x_{ u_0}-x_{ u_1}\|+\alpha^{-1}(\varepsilon)\right)}\right)^{{1}/{k}}} {\left({\beta\left(\|x_{ u_0}-x_{ v}\|+\alpha^{-1}(\varepsilon)\right)}\right)^{{1}/{k}}}. $$ Observe that there exists $K_0>0$ such that $K\ge K_0$ for any discrete states $u_0,u_1,v$ that verify (\ref{ver}), as long as $d>0$ and $r>0$ stay fixed.
Therefore, it is possible to choose $\varepsilon_0>0$ (which depends only on $d>0$ and $r>0$) so that ${K}/{\varepsilon^{1/k}}>1$ for all $\varepsilon\in(0,\varepsilon_0)$. The proof is complete.\hfill $\square$ \begin{figure}[h]\center \includegraphics[scale=0.44]{pic3.pdf}\ \ \ \ \includegraphics[scale=0.44]{pic4.pdf} \vskip-0.4cm \caption{\footnotesize Solutions of switched system (\ref{ss}) with the initial condition in $N^{0.05}_{u_1}$ for the control inputs $\widetilde u(t)$ (Left) and $u(t)$ (Right) of Example~\ref{ex2}.} \label{fig2} \end{figure} \begin{ex}\label{ex2} In order to illustrate Theorem~\ref{thm41}, we refer to Example~\ref{ex1} again. Figure~\ref{fig2} shows the graphs of the solutions $x$ of (\ref{ss_ex1}) for the two control inputs $$ \widetilde u(t)=\left\{\begin{array}{l}u_1,\quad t\in[0,T^\varepsilon_{u_1,u_3}),\\ u_3,\quad t\ge T^\varepsilon_{u_1,u_3},\end{array}\right.\ \ u(t)=\left\{\begin{array}{l}u_1,\quad t\in[0,T_{u_1,u_2}^\varepsilon),\\ u_2,\quad t\in[T_{u_1,u_2}^\varepsilon,T_{u_1,u_2}^\varepsilon+T_{u_2,u_3}^\varepsilon),\\ u_3,\quad t\ge T_{u_1,u_2}^\varepsilon+T_{u_2,u_3}^\varepsilon,\end{array}\right. $$ over the time interval $[0,T^\varepsilon_{u_1,u_3}]$. The plots confirm that $T_{u_1,u_2}^\varepsilon+T_{u_2,u_3}^\varepsilon$ is indeed longer than $T^\varepsilon_{u_1,u_3}.$ \end{ex} \section{Conclusion} In this paper we considered a switched system of differential equations under the assumption that the time between two successive switchings is greater than a certain number $T$, called the dwell time. We proved (Theorem~\ref{thm1}) that a suitable choice of the dwell time makes the solution stay within a required neighborhood $A_\varepsilon$ of a so-called ideal attractor.
We further proved that the solutions reach $A_\varepsilon$ asymptotically if the initial conditions do not belong to $A_\varepsilon.$ In doing so, we obtained a new integral condition (Theorem~\ref{thm2}) for global stability which does not seem to have appeared in the literature before. Finally, we addressed a case study where the Lyapunov functions of different subsystems are just shifts of one another. Here we used the dwell time formulas from Theorem~\ref{thm1} to estimate the time that the trajectories need to go from a neighborhood of an equilibrium of one subsystem to a neighborhood of an equilibrium of another subsystem (i.e., we considered a switched system with two discrete states). We proved (Theorem~\ref{thm41}) that adding more discrete states makes this travel time longer. Examples~\ref{ex1} and \ref{ex2} show that our theoretical conclusions agree with numeric simulations. \section{Acknowledgements} \noindent The first author is partially supported by NSF Grant CMMI-1436856.
https://arxiv.org/abs/1703.05510
Morsifications of real plane curve singularities
A real morsification of a real plane curve singularity is a real deformation given by a family of real analytic functions having only real Morse critical points with all saddles on the zero level. We prove the existence of real morsifications for real plane curve singularities having arbitrary real local branches and pairs of complex conjugate branches satisfying some conditions. This was known before only in the case of all local branches being real (A'Campo, Gusein-Zade). We also discuss a relation between real morsifications and the topology of singularities, extending to arbitrary real morsifications the Balke-Kaenders theorem, which states that the A'Campo--Gusein-Zade diagram associated to a morsification uniquely determines the topological type of a singularity.
\section*{Introduction} By a {\bf singularity} we always mean a germ $(C,z)\subset{\mathbb{C}}^2$ of a reduced plane analytic curve at its singular point $z$. Irreducible components of the germ $(C,z)$ are called {\bf branches of $(C,z)$}. Let $f(x,y)=0$ be an (analytic) equation of $(C,z)$, where $f$ is defined in the closed ball $B(z,{\varepsilon})\subset{\mathbb{C}}^2$ of radius ${\varepsilon}>0$ centered at $z$. The ball $B(z,{\varepsilon})$ is called the {\bf Milnor ball} of $(C,z)$ (and is denoted in the sequel $B_{C,z}$) if $z$ is the only singular point of $C$ in $B(z,{\varepsilon})$, and $\partial B(z,\eta)$ intersects $C$ transversally for all $0<\eta\le{\varepsilon}$. A {\bf nodal deformation} of a singularity $(C,z)$ is a family of analytic curves $C_t=\{f_t(x,y)=0\}$, where $f_t(x,y)$ is analytic in $x,y,t$ for $(x,y)\in B_{C,z}$ and $t$ varying in an open disc ${\mathbb{D}}_\zeta\subset{\mathbb{C}}$ of some radius $\zeta>0$ centered at zero, and where $C_0=C$; each $C_t$ is smooth along $\partial B_{C,z}$ and intersects $\partial B_{C,z}$ transversally for all $t\in{\mathbb{D}}_\zeta$; for any $t\ne0$, the curve $C_t$ has only ordinary nodes in $B_{C,z}$; and the number of nodes does not depend on $t$. The maximal number of nodes in a nodal deformation of $(C,z)$ in $B_{C,z}$ equals $\delta(C,z)$, the $\delta$-invariant (see, for instance, \cite[\S10]{M}). Let $(C,z)$ be a real singularity, i.e., invariant with respect to complex conjugation, with $z\in C$ its real singular point. Denote by $\ReBr(C,z)$ and ${\operatorname{ImBr}}(C,z)$ the numbers of real branches and of pairs of complex conjugate branches centered at $z$, respectively. Let $C_t=\{f_t(x,y)=0\}$, $t\in{\mathbb{D}}_\zeta$, be an equivariant\footnote{Here and further on, {\it equivariant} means commuting with the complex conjugation.} nodal deformation of a real singularity $(C,z)$.
Its restriction to $t\in[0,\zeta)$ is called a {\bf real nodal deformation}. A real nodal deformation is called a {\bf real morsification} of $(C,z)$ if each function $f_t$, $0<t<\zeta$, has only real critical points in $B_{C,z}$, all critical points are Morse, and all the saddle points have the zero critical level. Clearly, then all maxima have positive critical values, and all minima negative ones. N. A'Campo \cite{AC,AC1,AC2} and S. Gusein-Zade \cite{GZ,GZ1} performed foundational research on this subject. In particular, they showed that real morsifications carry a lot of information on singularities and allow one to compute such invariants as the monodromy and the intersection form in vanishing homology in a simple and efficient way. However, some questions have remained open, in particular: \smallskip {\bf Question:} {\it Does any real plane curve singularity admit a real morsification?} \smallskip Our main result is a partial answer to this question. Before giving the precise formulation, we mention that an affirmative answer was given before in the case of all branches of $(C,z)$ being real (below referred to as a {\bf totally real} singularity), see \cite[Theorem 1]{AC}\footnote{As pointed out to us by S. Gusein-Zade, there is a gap in the proof of \cite[Theorem 1]{AC}: namely, the function in \cite[Formula (1) on page 12]{AC} does not possess the claimed properties.} and \cite[Theorem 4]{GZ2} (see also \cite[Section 4.3]{AGV}). Notice that any topological type of a curve singularity is represented by a totally real singularity, see \cite[Theorem 3]{GZ2}. Now we give the necessary definitions.
A singularity is called {\bf Newton non-degenerate} if, in some local coordinates, it is {\bf strictly Newton non-degenerate}, that is, given by an equation $f(x,y)=0$ with a convenient Newton diagram at $z=(0,0)$ and such that the truncation of $f(x,y)$ to any edge of the Newton diagram is a quasihomogeneous polynomial without critical points in $({\mathbb{C}}^*)^2$ (i.e., it has no multiple binomial factors). We say that a singularity $(C,z)$ is {\bf admissible along its tangent line} $L$ if the singularity $(C_L,z)$ formed by the union of the branches of $(C,z)$ tangent to $L$ is as follows: $(C_L,z)$ is the union of a Newton non-degenerate singularity with a singularity all of whose branches are smooth. \begin{theorem}\label{t1} Let $(C,z)$ be a real singularity, ${\mathcal T}(C,z)=\{z_0=z,z_1,...\}$ the vertices of its minimal resolution tree. For any $z_i\in{\mathcal T}(C,z)$ denote by $(C_i,z_i)$ the germ at $z_i$ of the corresponding strict transform of $(C,z)$. If, for any real point $z_i\in{\mathcal T}(C,z)$, the singularity $(C_i,z_i)$ is admissible along each of its non-real tangent lines, then the real singularity $(C,z)$ admits a real morsification. \end{theorem} Note that the case of totally real singularities is included, since then the restrictions asserted in the theorem are empty. We illustrate the range of singularities covered by Theorem \ref{t1} with a few examples. \begin{example}\label{exmor1} (1) Any quasihomogeneous (in real coordinates) singularity satisfies the hypotheses of Theorem \ref{t1}, and its morsifications can be constructed in the same manner as for the totally real singularities even if the singularity contains complex conjugate branches, see Section \ref{sqh}.
(2) The simplest singularity satisfying the hypotheses of Theorem \ref{t1} and whose morsification is constructed by the new method suggested in the present paper is a pair of transversal complex conjugate ordinary cuspidal branches, given, for instance, by the equation $(x^2+y^2)^2+x^5=0$. The real part of its morsification looks as shown in Figure \ref{fmor1}. One can show that all possible morsifications are isotopic to this one. \begin{figure} \setlength{\unitlength}{0.6cm} \begin{picture}(7,5)(-5,0) \epsfxsize 60mm \epsfbox{mors3.eps} \end{picture} \caption{Morsification of a pair of complex conjugate cuspidal branches} \label{fmor1} \end{figure} (3) The simplest singularity beyond the range of Theorem \ref{t1} is a pair of transversal complex conjugate branches of order $4$ with two Puiseux pairs $(2,3)$ and $(2,7)$ (equivalently, with the Puiseux characteristic exponents $(4,6,7)$), given, for instance, by the equation $$((w_+^2-x^3)^2-x^5w_+)((w_-^2-x^3)^2-x^5w_-)=0,\quad w_{\pm}=y\pm x\sqrt{-1}\ .$$ On the other hand, a singularity consisting of a pair of complex conjugate branches with the same Puiseux pairs $(2,3)$, $(2,7)$ as above, but having a common real tangent, does satisfy the hypotheses of Theorem \ref{t1}, since after one blow-up it turns into a singularity with two complex conjugate branches having only one Puiseux pair. \end{example} We believe that the following holds: \begin{conjecture}\label{con-new} Any real plane curve singularity possesses a real morsification. \end{conjecture} In the proof of Theorem \ref{t1} presented in Section \ref{sec-exi}, we combine a relatively elementary inductive blow-up construction in the spirit of \cite{AC} with the patchworking construction as it appears in \cite{Sh1,ST} and with some explicit formulas for real morsifications of pairs of complex conjugate smooth branches and of pairs of branches of topological type $x^p+y^q=0$, $(p,q)=1$.
We expect that suitable formulas for real morsifications of pairs of complex conjugate branches with several Puiseux pairs would lead to a complete solution of the existence problem for real morsifications. A real morsification of a totally real singularity yields a so-called A'Campo--Gusein-Zade diagram, which uniquely determines the topological type of the singular point, as shown by L. Balke and R. Kaenders \cite[Theorem 2.5 and Corollary 2.6]{BK}. In Section \ref{sec3}, we extend this result to morsifications of arbitrary real singularities. \section{Elementary geometry of real morsifications}\label{sec1} For the reader's convenience, we present here a few simple and essentially known claims on morsifications. In what follows we consider only real singularities. Recall that a real node of a real curve can be either hyperbolic or elliptic, that is, analytically equivalent over ${\mathbb{R}}$ either to $x^2-y^2=0$ or to $x^2+y^2=0$, respectively. For a real nodal deformation $C_t=\{f_t(x,y)=0\}$, $0\le t<\zeta$, the saddle critical points of $f_t$ on the zero level correspond to the real hyperbolic nodes of $C_t$ and vice versa. \begin{lemma}\label{l1} The number of hyperbolic nodes in any real nodal deformation $C_t$, $0\le t<\zeta$, of $(C,z)$ does not exceed $\delta(C,z)-{\operatorname{ImBr}}(C,z)$. \end{lemma} {\bf Proof.} As we noticed in the Introduction, the maximal number of nodes in a nodal deformation of a singularity $(C,z)$ is the $\delta$-invariant $\delta(C,z)$.
In a real nodal deformation, a pair $Q,\overline Q$ of complex conjugate branches either glues up into one surface immersed into $B_{C,z}$, thus reducing the total number of nodes by at least one; or $Q$ and $\overline Q$ do not glue up to each other or to other branches, and then their intersection points are either complex conjugate nodes or real elliptic nodes; or, finally, $Q$ and $\overline Q$ do not glue up to each other, but glue up to some other branches of $(C,z)$, and we lose at least two nodes. So, the bound follows. \hfill$\blacksquare$\bigskip The following lemma is a version of \cite[Lemma 4 and Theorem 3]{AC}. Let $C_t$, $0\le t<\zeta$, be a real morsification of a real singularity $(C,z)$. The sets ${\mathbb{R}} C_t$, $0<t<\zeta$, are isotopic in the disc ${\mathbb{R}} B_{C,z}$. Each of them is called a {\bf divide} of the given morsification (for more information on divides, see Section \ref{sec-ag}). Given a divide $D\subset{\mathbb{R}} B_{C,z}$ of a real morsification of the real singularity $(C,z)$, the connected components of ${\mathbb{R}} B_{C,z}\setminus D$ disjoint from $\partial{\mathbb{R}} B_{C,z}$ are called inner components. Denote by $I(D)$ the union of the closures of the inner components of ${\mathbb{R}} B_{C,z}\setminus D$ (called the {\bf body of the divide} in \cite{AC4}). \begin{lemma}\label{l3} Let $D={\mathbb{R}} C_t$ be a divide of a real morsification of a real singularity $(C,z)$.
Then \begin{enumerate} \item[(i)] if $(C,z)$ is not a hyperbolic node, then $I(D)$ is non-empty, connected, and simply connected; \item[(ii)] $D$ has $\delta(C,z)-{\operatorname{ImBr}}(C,z)$ singularities, which are hyperbolic nodes of $C_t$; \item[(iii)] each inner component of ${\mathbb{R}} B_{C,z}\setminus D$ is homeomorphic to an open disc; \item[(iv)] the number $h(C,z)$ of the inner components of ${\mathbb{R}} B_{C,z}\setminus D$ does not depend on the morsification and satisfies the relation $$h(C,z)+\delta(C,z)-{\operatorname{ImBr}}(C,z)=\mu(C,z)\ ,$$ $\mu(C,z)$ being the Milnor number. \end{enumerate} \end{lemma} {\bf Proof.} For claim (i), suppose that $I(D)$ is not connected. Then the associated Coxeter-Dynkin diagram of the singularity $(C,z)$ constructed in \cite{GZ} (see also \cite[\S3]{GZ1}) would be disconnected, contrary to the fact that it is always connected \cite{Gab,GZ2}. Furthermore, $I(D)$ is simply connected since it has no holes by construction. Statements (ii)-(iv) follow from claim (i), from the bound $$\#{\operatorname{Sing}}(D)\le\delta(C,z)-{\operatorname{ImBr}}(C,z)$$ of Lemma \ref{l1}, from the Milnor formula \cite[Theorem 10.5]{M} $$\mu(C,z)=2\delta(C,z)-\ReBr(C,z)-2{\operatorname{ImBr}}(C,z)+1\ ,$$ from the fact that each inner component of ${\mathbb{R}} B_{C,z}\setminus D$ contains a critical point of the function $f_t(x,y)$, whence $$h(C,z)+\delta(C,z)-{\operatorname{ImBr}}(C,z)\le\mu(C,z)\ ,$$ and from the calculation of the Euler characteristic of $I(D)$: $$h(C,z)-\left(2\cdot\#{\operatorname{Sing}}(D)-\ReBr(C,z)\right)+\#{\operatorname{Sing}}(D)\ge1\ .$$ \hfill$\blacksquare$\bigskip \begin{remark} In fact, one could equivalently define real morsifications as real nodal deformations having precisely $\delta(C,z)-{\operatorname{ImBr}}(C,z)$ hyperbolic nodes as their only singularities.
\end{remark} \begin{lemma}\label{l2} Given a real morsification $C_t$, $0\le t<\zeta$, of a real singularity $(C,z)$, \begin{itemize}\item any real branch $P$ of $(C,z)$ does not glue up with other branches and deforms into a family of immersed discs $P_t$, $t>0$, whose real point sets ${\mathbb{R}} P_t\subset{\mathbb{R}} B_{C,z}$ are immersed segments with $\delta(P)$ self-intersections and endpoints on $\partial{\mathbb{R}} B_{C,z}$; \item any pair of complex conjugate branches $Q,\overline Q$ of $(C,z)$ does not glue up to other branches, but $Q$ and $\overline Q$ glue up to each other, so that they deform into a family of immersed cylinders $Q_t$, $t>0$, with the real point set ${\mathbb{R}} Q_t\subset {\mathbb{R}} B_{C,z}$ being an immersed circle disjoint from $\partial B_{C,z}$ and having $\delta(Q\cup\overline Q)-1=2\delta(Q)+(Q\cdot\overline Q)-1$ self-intersections (here $(Q\cdot\overline Q)$ denotes the intersection number); \item for any two real branches $P',P''$, the intersection ${\mathbb{R}} P'_t\cap{\mathbb{R}} P''_t$, $t>0$, consists of $(P'\cdot P'')$ points; \item for any real branch $P$ and any pair of complex conjugate branches $Q,\overline Q$, the intersection ${\mathbb{R}} P_t\cap{\mathbb{R}} Q_t$, $t>0$, consists of $2(P\cdot Q)$ points; \item for any two pairs of complex conjugate branches $Q',\overline Q'$ and $Q'',\overline Q''$, the intersection ${\mathbb{R}} Q'_t\cap{\mathbb{R}} Q''_t$, $t>0$, consists of $2(Q'\cdot Q'')+2(Q'\cdot\overline Q'')$ points. \end{itemize} \end{lemma} {\bf Proof.} Straightforward from Lemmas \ref{l1} and \ref{l3}.
\hfill$\blacksquare$\bigskip \begin{lemma}\label{l4} Let $(C_1,z)$, $(C_2,z)$ be two real singularities without branches in common. If the real singularity $(C_1\cup C_2,z)$ possesses a real morsification, then each of the real singularities $(C_1,z)$, $(C_2,z)$ possesses a real morsification too. \end{lemma} {\bf Proof.} Straightforward from Lemma \ref{l2}. \hfill$\blacksquare$\bigskip Given a divide $D$ of a real morsification of a real singularity $(C,z)$, it follows from Lemma \ref{l3} that $I(D)$ possesses a cellular decomposition with the points of ${\operatorname{Sing}}(D)$ as the vertices, the components of $D \setminus{\operatorname{Sing}}(D)$ disjoint from $\partial{\mathbb{R}} B_{C,z}$ as the $1$-cells, and the inner components of ${\mathbb{R}} B_{C,z}\setminus D$ as the $2$-cells. Following \cite[\S1]{AC}, we say that the given real morsification defines a {\bf partition} if, in the above cellular decomposition of $I(D)$, the intersection of the closures of any two $2$-cells is either empty, or a vertex, or the closure of a $1$-cell. This property was assumed in the Balke-Kaenders theorem \cite[Theorem 2.5 and Corollary 2.6]{BK} about the recovery of the topological type of a singularity from the A'Campo--Gusein-Zade diagram. In fact, this assumption is not needed (see Section \ref{sec3}). Here we just notice the following: \begin{lemma}\label{ap1} There are real morsifications that do not define a partition. \end{lemma} {\bf Proof.} For the proof, we present two simple examples: Figure \ref{fig1}(a) shows a real morsification of the singularity $(y^2+x^3)(y^2+2x^3)=0$ (two cooriented real cuspidal branches with a common tangent), while Figure \ref{fig1}(b) shows a real morsification of the real singularity $(y^2-x^4)(y^2-2x^4)=0$ (four real smooth branches quadratically tangent to each other). The constructions are elementary.
For example, the morsification shown in Figure \ref{fig1}(a) can be defined by $$(y^2+x^2(x-{\varepsilon}_1(t)))(y^2+2(x-{\varepsilon}_2(t))^2(x-{\varepsilon}_3(t)))=0\ ,$$ where $0<{\varepsilon}_2(t)<{\varepsilon}_3(t)\ll{\varepsilon}_1(t)\ll1$. \hfill$\blacksquare$\bigskip \begin{figure} \setlength{\unitlength}{0.8cm} \begin{picture}(14.5,10)(0,-0.7) \epsfxsize 135mm \epsfbox{mors1.eps} \put(-14,-0.7){(a)}\put(-4,-0.7){(b)} \end{picture} \caption{Non-partitions} \label{fig1} \end{figure} \section{Existence of real morsifications}\label{sec-exi} \subsection{Blow-up construction}\label{sec2} Let us recall that the multiplicity of a singularity $(C,z)$, resp. of a branch $P$, is the intersection number ${\operatorname{mt}}(C,z)=(C\cdot L)_z$, resp. $(P\cdot L)_z$, with a generic line $L$ through $z$. Recall that the proper transform of $(C,z)$ under the blowing up of $z$ consists of several germs $(C^*_i,z_i)$, with $z_i$ being distinct points on the exceptional divisor $E$ associated with the distinct tangents to $(C,z)$. It is known that (see, for instance, \cite[Page 185 and Proposition 3.34]{GLS}) \begin{equation}\delta(C,z)=\sum_i\delta(C^*_i,z_i)+\frac{{\operatorname{mt}}(C,z)({\operatorname{mt}}(C,z)-1)}{2},\quad {\operatorname{mt}}(C,z)=\sum_i(C^*_i\cdot E)_{z_i}\ . \label{e2}\end{equation} \subsubsection{The totally real singularities}\label{total} The existence of real morsifications for totally real singularities was proved in \cite[Theorem 1]{AC}. We present here a proof (similar to A'Campo's) in order to be self-contained and to use elements of that proof in the general case. \smallskip {\bf (1)} Consider, first, the case of a totally real singularity $(C,z)$ all of whose branches are smooth.
We proceed by induction on the maximal $\delta$-invariant $\Delta_1(C,z)$ of the union of any subset of branches tangent to each other. The base of induction, $\Delta_1(C,z)=0$, corresponds to the union of $d\ge2$ smooth branches with distinct tangents. Here $\delta(C,z)=d(d-1)/2$, and we construct a real morsification by shifting the branches to a general position. Assuming that $\Delta_1(C,z)>0$ in the induction step, we blow up the point $z$ into an exceptional divisor $E$. The strict transform of $(C,z)$ splits into components $(C^*_i,z_i)$, $z_i\in {\mathbb{R}} E$, corresponding to the different tangents of $(C,z)$. Notice that $E$ is transversal to all branches of $(C^*_i,z_i)$, and hence $\Delta_1(C^*_i\cup E,z_i)<\Delta_1(C,z)$ for all $i$ (cf. (\ref{e2})). Then we construct real morsifications of each real singularity $(C^*_i\cup E,z_i)$ in which the germs $(E,z_i)$ stay fixed (in view of Lemma \ref{l2}, these germs do not glue up with other branches, and hence can be kept fixed by suitable local equivariant diffeomorphisms). Thus, we get the union of real curves $(C^*_i)^+$ in neighborhoods of the $z_i$, having $$\sum_i\delta(C^*_i,z_i)=\delta(C,z)-\frac{{\operatorname{mt}}(C,z)\cdot({\operatorname{mt}}(C,z)-1)}{2}$$ real hyperbolic nodes and ${\operatorname{mt}}(C,z)$ real intersection points with $E$. Then we blow down $E$ and obtain a deformation whose elements have $\delta(C,z)-\frac{{\operatorname{mt}}(C,z)\cdot({\operatorname{mt}}(C,z)-1)}{2}$ real hyperbolic nodes and a point of transversal intersection of ${\operatorname{mt}}(C,z)$ smooth branches. Deforming the latter real singularity, we complete the construction of a real morsification.
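The bookkeeping in formula (\ref{e2}) can be cross-checked on a branch of topological type $x^p+y^q=0$ with coprime $p,q$: the multiplicities of the infinitely near points follow the Euclidean algorithm applied to $(p,q)$, and summing the contributions ${\operatorname{mt}}({\operatorname{mt}}-1)/2$ recovers the classical value $\delta=(p-1)(q-1)/2$. A minimal Python sketch of this computation (an illustration only, not part of the argument):

```python
# delta-invariant of a branch of topological type x^p + y^q = 0, gcd(p,q)=1,
# accumulated through the blow-up formula: each infinitely near point of
# multiplicity m contributes m*(m-1)/2, and the multiplicity sequence
# follows the Euclidean algorithm on (p, q).
def delta_blowup(p, q):
    delta = 0
    a, b = min(p, q), max(p, q)
    while a > 0:
        delta += a * (a - 1) // 2             # contribution mt*(mt-1)/2
        a, b = min(a, b - a), max(a, b - a)   # multiplicities after blow-up
    return delta

# closed form for one characteristic pair: delta = (p-1)(q-1)/2
checks = [(2, 3), (2, 5), (3, 4), (3, 5), (4, 7), (5, 9)]
results = {(p, q): delta_blowup(p, q) for p, q in checks}
```

For the ordinary cusp $(p,q)=(2,3)$ this gives $\delta=1$: one blow-up with multiplicity $2$ contributes $1$, and the strict transform is smooth.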
\smallskip {\bf(2)} Now we prove the existence of real morsifications for arbitrary totally real singularities, using induction on $\Delta_2(C,z)$, the $\delta$-invariant of the union of all singular branches of $(C,z)$. The preceding consideration serves as the base of induction. The induction step is very similar: we blow up the point $z$ and notice that $\sum_i\Delta_2(C^*_i\cup E,z_i)<\Delta_2(C,z)$; then we proceed as in the preceding paragraph. \subsubsection{Semiquasihomogeneous singularities}\label{sqh} The same blow-up construction of real morsifications works well in the important particular case of semiquasihomogeneous singularities. Let $F(x,y)=\sum_{pi+qj=pq}a_{ij}x^iy^j$ be a real square-free quasihomogeneous polynomial, where $1\le p\le q$. Then $(C,z)=\{F(x,y)+\sum_{pi+qj>pq}a_{ij}x^iy^j=0\}$ is called a real semiquasihomogeneous singularity of type $(p,q)$. This real singularity has $d=\gcd(p,q)$ branches, among which we allow complex conjugate pairs. \smallskip {\bf (1)} A semiquasihomogeneous singularity of type $(p,p)$ is just the union of smooth transversal branches. If they are all real, the existence of a real morsification is proved in Section \ref{total}. Thus, suppose that $F(x,y)$ splits into the product $F_1(x,y)$ of real linear forms and the product $F_2(x,y)$ of positive definite quadratic forms $q_i(x,y)$, $1\le i\le k$, $k\ge1$. The forms $q_i$ are not proportional to each other, and there are $b_i>0$, $i=1,...,k$, such that any two quadrics $q_i-b_i=0$ and $q_j-b_j=0$, $1\le i<j\le k$, intersect in four real points, and all these intersection points are distinct. So, we obtain a real morsification by deforming $(C,z)$ in the family $$F(x,y,t)=F_1(x,y)\prod_{i=1}^k(q_i(x,y)-b_it),\quad 0\le t\ll1\ ,$$ and then by shifting each of the lines defined by $F_1=0$ to a general position. \smallskip {\bf (2)} Let $(C,z)$ be a real semiquasihomogeneous singularity of type $(p,q)$, $2\le p<q$.
We simultaneously prove the existence of real morsifications of $(C,z)$ and of the following additional singularities: \begin{enumerate}\item[(f1)] $(C\cup L,z)$, where $L$ is a real line intersecting $(C,z)$ at $z$ with multiplicity $p$ (i.e., transversally) or $q$ (tangentially); \item[(f2)] $(C\cup L_1\cup L_2,z)$, where a real line $L_1$ intersects $(C,z)$ with multiplicity $p$ and a real line $L_2\ne L_1$ intersects $(C,z)$ at $z$ with multiplicity $p$ or $q$. \end{enumerate} We proceed by induction on $\delta(C,z)$. The base of induction, $\delta(C,z)=1$, corresponds to $p=2$, $q=3$, that is, an ordinary cusp. Here $(C,z)$, $(C\cup L,z)$, and $(C\cup L_1\cup L_2,z)$ are totally real, hence possess real morsifications. Suppose that $\delta(C,z)>1$, blow up the point $z$, and consider the union of the strict transform of the studied singularity with the exceptional divisor $E$. Notice that the strict transform of a real semiquasihomogeneous singularity of type $(p,q)$ is also a real semiquasihomogeneous singularity, either of type $(p,q-p)$ if $2p\le q$, or of type $(q-p,p)$ if $2p>q$, and in both cases it intersects $E$ with multiplicity $p$. It is easy to see that the strict transform of a singularity of the form (f1) or (f2) with $E$ added is again a real singularity of one of these forms with parameters $(p,q-p)$ or $(q-p,p)$ and, possibly, an extra real node. We then complete the proof as in Section \ref{total}. \subsection{Singularities without real tangent}\label{sec4} The constructions of morsifications presented in this section are the main novelty of the present paper. In the case of singularities with only smooth branches, Lemma \ref{lsmooth} presents a rather simple direct formula for the morsification. In the case of non-smooth branches with one Puiseux pair (Lemma \ref{lNND} below), we apply an ad hoc deformation argument (a kind of patchworking construction). The geometric background for this argument is as follows.
We extend the pair $({\mathbb{C}}^2,(C,z))$ to a trivial family $({\mathbb{C}}^2,(C,z))\times({\mathbb{C}},0)$, then blow up the point $z\in{\mathbb{C}}^2\times\{0\}$. The central fiber of the new family is the union of the blown-up plane ${\mathbb{C}}^2_1$ and the exceptional divisor $E\simeq{\mathbb{P}}^2$. The germ $(C,z)$ yields in ${\mathbb{P}}^2$ a real conic $C_2$ with multiplicity $p\ge2$ that intersects the line ${\mathbb{C}}^2_1\cap E$ in two imaginary points. Our deformation gives an inscribed equivariant family of curve germs, whose real part appears to be a deformation of the above $p$-multiple conic $C_2$. \subsubsection{The case of one pair of complex conjugate tangents}\label{sec-smooth} Let a real singularity $(C,z)$ have exactly two tangent lines, and they are complex conjugate. In suitable local equivariant coordinates $x,y$ in $B_{C,z}$, we have $z=(0,0)$, and the tangent lines are $$L=\{x+(\alpha+\beta\sqrt{-1})y=0\},\ \overline L=\{x+(\alpha-\beta\sqrt{-1})y=0\}\ ,$$ where $\alpha,\beta\in{\mathbb{R}}$, $\beta\ne0$. Denote by $(C_i,z)$, $i=1,...,s$, the branches of $(C,z)$ tangent to $L$; respectively $(\overline C_i,z)$, $i=1,...,s$, are the branches of $(C,z)$ tangent to $\overline L$. Introduce the new coordinates $$w=x+(\alpha+\beta\sqrt{-1})y,\quad\widehat w=x+(\alpha-\beta\sqrt{-1})y\ .$$ Notice that $\widehat w=\overline w$ if $x,y\in{\mathbb{R}}$.
We will also use for ${\mathbb{R}}^2\setminus\{0\}$ the coordinates $\rho>0$, $\theta\in{\mathbb{R}}/2\pi{\mathbb Z}$ such that \begin{equation}x+\alpha y=\rho\cos\theta,\quad \beta y=\rho\sin\theta,\quad \rho=\sqrt{w\widehat w}\ .\label{e8b}\end{equation} \begin{lemma}\label{lsmooth} Let $(C,z)$ have only smooth branches. Then $(C,z)$ possesses a real morsification. \end{lemma} {\bf Proof.} A branch $(C_i,z)$, $1\le i\le s$, has an analytic equation $$\widehat w=\sum_{n\in I_i}a_{in}w^n,\quad I_i\subset\{n\in{\mathbb Z}\ :\ n>1\},\ a_{in}\in{\mathbb{C}}^*\ \text{as}\ n\in I_i\ .$$ Correspondingly, $(\overline C_i,z)$ is given by $w=\sum_{n\in I_i}\overline a_{in}\widehat w^n$. We claim that the equation \begin{equation}F_t(w,\widehat w):=\prod_{i=1}^s(\Phi_i(w,\widehat w)-t^2)=0,\quad 0\le t<\zeta\ , \label{ep1}\end{equation} defines a real morsification of $(C,z)$, where $$\Phi_i(w,\widehat w)=\left(\widehat w-\sum_{n\in I_i}a_{in}w^n\right)\left( w-\sum_{n\in I_i}\overline a_{in}\widehat w^n\right)$$ and $\zeta>0$ is sufficiently small. First, $F_t(w,\widehat w)$ (the left-hand side of (\ref{ep1})) is an analytic function in $w,\widehat w$ and $t$.
A separate factor in $F_t(w,\widehat w)$ is $$\Phi_i(w,\widehat w)-t^2=w\widehat w-t^2+\sum_{n\in I_i}|a_{in}|^2(w\widehat w)^n -\sum_{n\in I_i}(\overline a_{in}w^{n+1}+a_{in}\widehat w^{n+1})$$ $$+2\sum_{\renewcommand{\arraystretch}{0.6} \begin{array}{c} \scriptstyle{n_1<n_2}\\ \scriptstyle{n_1,n_2\in I_i}\end{array}}(w\widehat w)^{n_1}(a_{in_1}\overline a_{in_2}w^{n_2-n_1}+ \overline a_{in_1}a_{in_2}\widehat w^{n_2-n_1})\ .$$ Restricting the equation $\Phi_i(w,\widehat w)-t^2=0$ to ${\mathbb{R}} B_{C,z}$ (in coordinates $x,y$), passing in ${\mathbb{R}}^2\setminus\{0\}$ to coordinates $\rho>0$, $\theta\in{\mathbb{R}}/2\pi{\mathbb Z}$ defined via (\ref{e8b}), and rescaling by substitution of $t\rho$ for $\rho$, we obtain a family of curves depending on the parameter $0\le t<\zeta$ $$\Psi_{i,t}:=\rho^2-1+\sum_{n\in I_i}t^{2n-2}|a_{in}|^2\rho^{2n}-2\sum_{n\in I_i}t^{n-1}|a_{in}|\rho^{n+1} \cos((n+1)\theta-\theta_{in})$$ $$+2\sum_{\renewcommand{\arraystretch}{0.6} \begin{array}{c} \scriptstyle{n_1<n_2}\\ \scriptstyle{n_1,n_2\in I_i}\end{array}}t^{n_1+n_2-2}|a_{in_1}a_{in_2}|\rho^{n_1+n_2}\cos((n_2-n_1)\theta+ \theta_{in_1}-\theta_{in_2})=0\ ,$$ where $a_{in}=|a_{in}|\exp(\sqrt{-1}\theta_{in})$, $n\in I_i$. It is easy to see that each of them is a circle embedded into an annulus $\{|\rho-1|<Kt\}\subset{\mathbb{R}}^2$ with $K>0$ a constant determined by the given singularity $(C,z)$, and, furthermore, the normal projection of each curve to the circle $\rho=1$ is a diffeomorphism. Let $1\le i<j\le s$. Set $$n_{ij}=\min\{n\in I_i\cup I_j\ :\ a_{in}\ne a_{jn}\}\ ,$$ where we put $a_{in}=0$ as $n\notin I_i$. Note that $n_{ij}=(C_i\cdot C_j)$, the intersection number of the branches $C_i,C_j$.
On the other hand, $$\Psi_{i,t}(\rho,\theta)-\Psi_{j,t}(\rho,\theta)=2t^{n_{ij}-1}|a_{in_{ij}}-a_{jn_{ij}}|\rho^{n_{ij}+1} \cos((n_{ij}+1)\theta-\theta_{ij,n_{ij}})+O(t^{n_{ij}})\ ,$$ where $\theta_{ij,n_{ij}}\in{\mathbb{R}}/2\pi{\mathbb Z}$, and hence, for a sufficiently small $t>0$, the curves $\Psi_{i,t}=0$ and $\Psi_{j,t}=0$ intersect transversally in $2n_{ij}+2$ points. In total, we obtain $$2\sum_{1\le i< j\le s}(n_{ij}+1)=2\sum_{1\le i<j\le s}(C_i\cdot C_j)+s^2-s =\delta(C,z)-{\operatorname{ImBr}}(C,z)$$ hyperbolic nodes as required for a real morsification. \hfill$\blacksquare$\bigskip \begin{lemma}\label{lNND} Let the singularity $(C,z)$ be formed by a pair of branches of topological type $x^p+y^q=0$, $2\le p<q$, $(p,q)=1$, that are tangent to $L$ and $\overline L$, respectively. Then $(C,z)$ possesses a real morsification. \end{lemma} {\bf Proof.} {\bf (1)} We start with the very special case of $(C,z)$ given by \begin{equation}F(w,\widehat w)=w^p\widehat w^p-a\widehat w^{p+q}-\overline a w^{p+q}=0, \quad a\in{\mathbb{C}}^*\ .\label{ecrit3}\end{equation} Denote by $P(\lambda)=\lambda^p+b^{(0)}_{p-2}\lambda^{p-2}+...+b^{(0)}_0 \in{\mathbb{R}}[\lambda]$ the monic polynomial of degree $p$ having $\left[\frac{p}{2}\right]$ critical points on the level $-2|a|$ and $\left[\frac{p-1}{2}\right]$ critical points on the level $2|a|$, whose roots sum up to zero (a kind of $p$-th Chebyshev polynomial). We claim that there exist real functions $b_0(t),...,b_{p-2}(t)$, analytic in $t^{\frac{1}{p}}$, such that $b_i(0)=b^{(0)}_i$, $0\le i\le p-2$, and the family \begin{equation} F_t(w,\widehat w)=(w\widehat w-t^2)^p+\sum_{i=p-2}^0t^{\frac{(p-i)(p+q)}{p}}b_i(t)(w\widehat w-t^2)^i-a\widehat w^{p+q}-\overline a w^{p+q}=0\ \label{ecrit2}\end{equation} $$0\le t<\zeta\ ,$$ is a real morsification of $(C,z)$.
To prove this, we rescale the latter equation by substituting $(tw,t\widehat w)$ for $(w,\widehat w)$ and restrict our attention to ${\mathbb{R}} B_{C,z}$ passing to the coordinates $\rho,\theta$ in (\ref{e8b}): $$(\rho^2-1)^p+\sum_{i=p-2}^0t^{\frac{(p-i)(q-p)}{p}}b_i(t)(\rho^2-1)^i-2|a|t^{q-p}\rho^{p+q}\cos((p+q)\theta- \theta_a)=0\ ,$$ where $a=|a|\exp(\sqrt{-1}\theta_a)$. Next, we substitute $\rho^2=1+t^{\frac{q-p}{p}}\sigma$ and come to \begin{equation} (1+t^{\frac{q-p}{p}}\sigma)^{-(p+q)/2}\left(\sigma^p+\sum_{i=p-2}^0b_i(t)\sigma^i\right) =2|a|\cos((p+q)\theta-\theta_a)\ . \label{ecrit1}\end{equation} Finally, we recover the unknown functions $b_{p-2}(t),...,b_0(t)$ from the following conditions. Let $\lambda_0>0$ be such that $|P(\lambda)|>3|a|$ as $|\lambda|>\lambda_0$. Suppose that $|\sigma|\le\lambda_0$ and that $t$ is so small that the function of $\sigma$ $$P_t(\sigma):=(1+t^{\frac{q-p}{p}}\sigma)^{-(p+q)/2}\left(\sigma^p+\sum_{i=p-2}^0b_i(t)\sigma^i\right)$$ has simple critical points $\mu_1(t),...,\mu_{p-1}(t)$ arranged in increasing order and respectively close to the critical points $\mu^{(0)}_1,...,\mu^{(0)}_{p-1}$ of $P(\lambda)$. So, we require \begin{equation}P_t(\mu_i(t))=(-1)^i\cdot2|a|,\quad i=1,...,p-1\ .\label{ecrit}\end{equation} These conditions hold true for $t=0$ by construction, and we only need to verify that the Jacobian with respect to $\mu_1,...,\mu_{p-1}$ does not vanish. To this end, we observe that there exists a diffeomorphism of a neighborhood of the point $(\mu_1^{(0)},...,\mu^{(0)}_{p-1})\in{\mathbb{R}}^{p-1}$ onto a neighborhood of the point $(b^{(0)}_{p-2},...,b^{(0)}_0)\in{\mathbb{R}}^{p-1}$ sending the critical points of a polynomial $\lambda^p+\widetilde b_{p-2}\lambda^{p-2}+...+\widetilde b_0$ to its coefficients.
Then the Jacobian of the left-hand side of the system (\ref{ecrit}) with respect to $\mu_1,...,\mu_{p-1}$ at $t=0$ turns out to be $$\det\left((\mu_i^{(0)})^j\frac{\partial b_j}{\partial \mu_i}\Big|_{t=0}\right)_{i=1,...,p-1}^{j=0,...,p-2}= \pm\prod_{1\le i<j\le p-1}(\mu^{(0)}_i-\mu^{(0)}_j)\cdot\det\frac{D(\widetilde b_{p-2},...,\widetilde b_0)} {D(\mu_1,...,\mu_{p-1})}\big|_{t=0}\ne0\ .$$ It follows from (\ref{ecrit}) that, for any $\theta\in{\mathbb{R}}/2\pi{\mathbb Z}$, the equation (\ref{ecrit1}) on $\sigma$ has $p$ real solutions (counting multiplicities) in the interval $|\sigma|<\lambda_0$, and we have exactly $(p-1)(p+q)=\delta(C,z)-{\operatorname{ImBr}}(C,z)$ double roots, namely $$\sigma=\mu_{2i-1}(t),\quad \cos((p+q)\theta-\theta_a)=-1,\quad 1\le i\le\frac{p}{2}\ ,$$ or $$\sigma=\mu_{2i}(t),\quad \cos((p+q)\theta-\theta_a)=1,\quad 1\le i\le\frac{p-1}{2}\ .$$ That is, family (\ref{ecrit2}) indeed describes a real morsification of $(C,z)$. Note that the real curve $\{F_t=0\}\subset{\mathbb{R}} B_{C,z}$ is an immersed circle lying in the $\lambda_0t^{\frac{p+q}{p}}$-neighborhood of the ellipse $\rho=t$ and intersecting each real line through the origin in $2p$ points (counting multiplicities).
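\smallskip To illustrate step {\bf(1)}, consider the smallest case $p=2$, $q=3$ (a sketch added for illustration only; it is not used in the sequel). Then $\left[\frac{p}{2}\right]=1$ and $\left[\frac{p-1}{2}\right]=0$, so the Chebyshev-like polynomial is $P(\lambda)=\lambda^2-2|a|$: its unique critical point $\lambda=0$ lies on the level $-2|a|$, whence $b_0^{(0)}=-2|a|$. The family (\ref{ecrit2}) specializes to $$F_t(w,\widehat w)=(w\widehat w-t^2)^2+t^5b_0(t)-a\widehat w^5-\overline aw^5=0,\quad b_0(0)=-2|a|\ ,$$ and the above count yields $(p-1)(p+q)=5$ hyperbolic nodes.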
\smallskip \begin{figure} \setlength{\unitlength}{0.8cm} \begin{picture}(16,20)(-1,0) \thinlines \put(1,1){\vector(0,1){11.5}}\put(1,6.5){\vector(1,0){6.5}} \put(9.5,1){\vector(0,1){11.5}}\put(9.5,6.5){\vector(1,0){6.5}} \put(1,14.5){\vector(0,1){6}}\put(1,14.5){\vector(1,0){6.5}} \put(9.5,14.5){\vector(0,1){6}}\put(9.5,14.5){\vector(1,0){6.5}} \dashline{0.2}(1,11.5)(6,11.5)\dashline{0.2}(6,6.5)(6,11.5) \dashline{0.2}(14.5,6.5)(14.5,11.5)\dashline{0.2}(3,14.5)(3,16.5) \dashline{0.2}(1,16.5)(3,16.5) \thicklines \put(1,1.5){\line(0,1){5}}\put(1,1.5){\line(2,5){2}} \put(1,6.5){\line(1,0){2}}\put(1,6.5){\line(1,1){5}} \put(3,6.5){\line(3,5){3}} \put(9.5,1.5){\line(0,1){10}}\put(9.5,1.5){\line(2,5){2}} \put(11.5,6.5){\line(-2,5){2}}\put(9.5,11.5){\line(1,0){5}} \put(11.5,6.5){\line(3,5){3}} \put(3,16.5){\line(-2,3){2}}\put(3,16.5){\line(3,-2){3}} \put(11.5,16.5){\line(-2,3){2}}\put(11.5,16.5){\line(3,-2){3}} \put(9.5,14.5){\line(1,1){2}}\put(9.5,14.5){\line(1,0){5}} \put(9.5,14.5){\line(0,1){5}} \put(-0.4,19.4){$p+q$}\put(-0.4,11.4){$p+q$} \put(8.1,19.4){$p+q$}\put(8.1,11.4){$p+q$} \put(-0.7,1.5){$-p-q$}\put(7.8,1.5){$-p-q$} \put(5.5,14){$p+q$}\put(14,14){$p+q$} \put(5.5,6){$p+q$}\put(14,6){$p+q$} \put(0.5,16.4){$p$}\put(2.9,14){$p$} \put(11.5,6){$p$}\put(3,6){$p$} \put(11.5,15){$T_1$}\put(10,16.5){$T_2$} \put(1.5,5.2){$T'_1$}\put(2.3,7){$T'_2$}\put(10,7){$T$} \put(7,14.7){$w$}\put(15.5,14.7){$w$} \put(7,6.7){$u$}\put(1,20.7){$\widehat w$}\put(9.5,20.7){$\widehat w$} \put(1,20.7){$\widehat w$} \put(4,0.5){\rm (c)}\put(12.5,0.5){\rm (d)} \put(4,13.3){\rm (a)}\put(12.5,13.3){\rm (b)} \end{picture} \caption{Patchworking a real morsification}\label{fcrit} \end{figure} {\bf(2)} Consider the general case. 
By a coordinate change $$(w,\widehat w)\mapsto \left(w+\sum_{i\ge2}\alpha_i\widehat w^i,\ \widehat w+\sum_{i\ge2}\overline\alpha_i w^i\right)$$ one can bring $(C,z)$ to a strictly Newton non-degenerate form with the Newton diagram $\Gamma(F)=[(p+q,0),(p,p)]\cup[(p,p),(0,p+q)]$ in the coordinates $w,\widehat w$ (see Figure \ref{fcrit}(a)), and with an equation $$F(w,\widehat w)=(w\widehat w)^p-a\widehat w^{p+q}-\overline aw^{p+q}+\sum_{\renewcommand{\arraystretch}{0.6} \begin{array}{c} \scriptstyle{pi+qj>p(p+q)}\\ \scriptstyle{qi+pj>p(p+q)}\end{array}}a_{ij}w^i\widehat w^j=0\ ,$$ where $a\in{\mathbb{C}}^*$ and $a_{ij}=\overline a_{ji}$ for all $i,j$ (cf. (\ref{ecrit2})). We construct a real morsification of $(C,z)$ combining the result of the preceding step with the patchworking construction as developed in \cite[Section 2]{ST}. Denote by $\Delta(F)$ the Newton polygon of $F(w,\widehat w)$ and divide the domain under $\Gamma(F)$ by the segment $[(0,0),(p,p)]$ into two triangles $T_1,T_2$ (see Figure \ref{fcrit}(b)). So, $\Delta(F)$, $T_1$, and $T_2$ form a convex subdivision of the convex polygon $\widetilde\Delta(F)={\operatorname{Conv}}(\Delta(F)\cup\{(0,0)\})$, i.e., there exists a convex piecewise linear function $\nu:\widetilde\Delta(F)\to{\mathbb{R}}$ taking integral values at integral points and whose linearity domains are $\Delta(F)$, $T_1$, and $T_2$. The overgraph ${\operatorname{Graph}}^+(\nu)$ of $\nu$ is a three-dimensional convex lattice polytope, and we have a natural morphism ${\operatorname{Tor}}({\operatorname{Graph}}^+(\nu))\to{\mathbb{C}}$ whose fibers for $t\in{\mathbb{C}}^*$ are isomorphic to ${\operatorname{Tor}}(\widetilde\Delta(F))$, and the central fiber is the union ${\operatorname{Tor}}(\Delta(F))\cup{\operatorname{Tor}}(T_1)\cup{\operatorname{Tor}}(T_2)$.
In the toric surface ${\operatorname{Tor}}(\Delta(F))$, we have a curve $C=\{F(w,\widehat w)=0\}$, in the toric surfaces ${\operatorname{Tor}}(T_1)$ and ${\operatorname{Tor}}(T_2)$, we define curves $$R_1=\{(w\widehat w-1)^p-\overline aw^{p+q}=0\}\quad\text{and}\quad R_2=\{(w\widehat w-1)^p-a\widehat w^{p+q}=0\}\ ,$$ respectively. The complex conjugation interchanges the pairs $({\operatorname{Tor}}(T_1),R_1)$ and $({\operatorname{Tor}}(T_2),R_2)$. Note that $R_1,R_2$ transversally intersect the toric divisors ${\operatorname{Tor}}([(p,p),(p+q,0)])$ and ${\operatorname{Tor}}([(p,p),(0,p+q)])$ in the same points as $C$. Furthermore, $R_1$, $R_2$ are rational curves intersecting the toric divisor ${\operatorname{Tor}}([(0,0),(p,p)])={\operatorname{Tor}}(T_1)\cap{\operatorname{Tor}}(T_2)$ in the same point $z_1$, where each of them has a singular point of topological type $x^p+y^{p+q}=0$. To apply the patchworking statement of \cite[Theorem 2.8]{ST}, we perform the weighted blow up ${\mathfrak X}\to{\operatorname{Tor}}({\operatorname{Graph}}^+(\nu))$ of the point $z_1$ with the exceptional divisor $E={\operatorname{Tor}}(T)$, $T={\operatorname{Conv}}\{(p,0),(0,p+q),(0,-p-q)\}$ (see \cite[Figure 1]{ST}) being a part of the central fiber of ${\mathfrak X}\to{\mathbb{C}}$. One can view this blow up via the refinement procedure developed in \cite[Section 3.5]{Sh1}. Namely, we perform the toric coordinate change $u=w\widehat w$, $v=w^{-1}$, transforming the triangles $T_1,T_2$ to $T'_1,T'_2$ as shown in Figure \ref{fcrit}(c), and respectively transforming the curves $R_1,R_2$ and the function $\nu$. Note that this coordinate change defines an automorphism of the punctured real plane ${\mathbb{R}}^2\setminus\{0\}$.
Next we perform another coordinate change $u=u_1+1$, $v=v_1$, bringing the singular points of $R_1,R_2$ to the origin and transforming their Newton triangles $T'_1,T'_2$ into the edge $T''_1=[(p,0),(0,-p-q)]$ and the triangle $T''_2={\operatorname{Conv}}\{(0,p+q),(p,0),(p+q,p+q)\}$, respectively (see Figure \ref{fcrit}(d)). The triangle $T={\operatorname{Conv}}\{(0,-p-q),(0,p+q),(p,0)\}$ corresponds to the exceptional surface, in which we have to define a real curve by an equation with Newton triangle $T$, having the coefficients at the vertices determined by the equations of $R_1$ and $R_2$ and having $(p-1)(p+q)=\delta(C,z)-{\operatorname{ImBr}}(C,z)$ real hyperbolic nodes. We just borrow the required curve from the special example studied in the first step. Namely, we do the above transformations with the data given by (\ref{ecrit3}), and arrive at the curve given by a polynomial having coefficient $a$ at $(0,p+q)$, coefficient $\overline a$ at $(0,-p-q)$, coefficient $1$ at $(p,0)$, and coefficients $b_i^{(0)}$ at $(i,0)$, $i=0,...,p-2$. To apply \cite[Theorem 2.8]{ST}, we have to verify the following transversality conditions: \begin{itemize}\item for $i=1,2$, the germ at $R_i$ of the family of curves on the surface ${\operatorname{Tor}}(T_i)$ in the tautological linear system that have a singularity of the topological type $x^p+y^{p+q}=0$ in a fixed position, is smooth of expected dimension; \item the germ at $R$ of the family of curves on the surface ${\operatorname{Tor}}(T)$ in the tautological linear system that intersect the toric divisors ${\operatorname{Tor}}([(0,-p-q),(p,0)])$ and ${\operatorname{Tor}}([(p,0),(0,p+q)])$ in fixed points and have $(p-1)(p+q)$ nodes, is smooth of expected dimension. \end{itemize} Both conditions are particular cases of the $S$-transversality property, and they follow from the criterion in \cite[Theorem 4.1(1)]{ShT}.
In the former case, one needs the inequality $-R_iK_i>b$, where $K_i$ is the canonical divisor of the surface ${\operatorname{Tor}}(T_i)$, and $b$ a topological invariant of the singularity defined by $$b(x^p+y^{p+q}=0)=\begin{cases}p+(p+q)-1,\quad & \text{if}\ q\not\equiv 1\mod p,\\ p+(p+q)-2,\quad &\text{if}\ q\equiv1\mod p\end{cases}$$ and the inequality holds, since $-R_iK_i=p+(p+q)+1$. In the latter case, one needs the inequality $$R\cdot{\operatorname{Tor}}([(0,p+q),(0,-p-q)])>0$$ (nodes do not count in the criterion), which evidently holds. Thus, \cite[Theorem 2.8]{ST} yields the existence of an analytic equivariant deformation of $F(w,\widehat w)$ defining in ${\mathbb{R}} B_{C,z}$ curves with $(p-1)(p+q)=\delta(C,z)-{\operatorname{ImBr}}(C,z)$ hyperbolic nodes. \hfill$\blacksquare$\bigskip \begin{lemma}\label{lNDD1} Let a real singularity $(C,z)$ with exactly two tangent lines $L,\overline L$ be admissible along its tangent lines. Then $(C,z)$ possesses a real morsification. \end{lemma} {\bf Proof.} We apply the construction presented in the proofs of Lemmas \ref{lsmooth} and \ref{lNND} to the bunch of smooth branches and to the pairs of singular complex conjugate branches separately, and we shall show that, for any two pairs $(C_1,\overline C_1)$, $(C_2,\overline C_2)$ of complex conjugate branches of $(C,z)$, their divides intersect in $2(C_1\cdot C_2)+2{\operatorname{mt}} C_1\cdot{\operatorname{mt}} C_2$ (real) points. For $C_1,C_2$ smooth this follows from Lemma \ref{lsmooth}. In other situations, we can assume that $C_1\cup C_2$ (and $\overline C_1\cup\overline C_2$) form a strictly Newton non-degenerate singularity so that $C_1$ is of topological type $x^p+y^q=0$ with $2\le p<q$, $(p,q)=1$, and $C_2$ is of topological type $x^{p'}+y^{q'}=0$ with $1\le p'<q'$, $(p',q')=1$.
If $q/p=q'/p'$, then $p=p'$, $q=q'$, and hence $C_1\cup\overline C_1$ and $C_2\cup\overline C_2$ are given by $$F(w,\widehat w)=(w\widehat w)^p-a\widehat w^{p+q}-\overline aw^{p+q}+\sum_{\renewcommand{\arraystretch}{0.6} \begin{array}{c} \scriptstyle{pi+qj>p(p+q)}\\ \scriptstyle{qi+pj>p(p+q)}\end{array}}a_{ij}w^i\widehat w^j=0\ ,$$ and $$F'(w,\widehat w)=(w\widehat w)^p-a'\widehat w^{p+q}-\overline a'w^{p+q}+\sum_{\renewcommand{\arraystretch}{0.6} \begin{array}{c} \scriptstyle{pi+qj>p(p+q)}\\ \scriptstyle{qi+pj>p(p+q)}\end{array}}a'_{ij}w^i\widehat w^j=0\ ,$$ respectively, where $a,a',a-a'\in{\mathbb{C}}^*$. The patchworking construction in the second step of the proof of Lemma \ref{lNND} can be applied to both the pairs of the branches simultaneously, and the considered question on the intersection of the divides reduces then to the intersection of the curves $R,R'$ in the toric surface ${\operatorname{Tor}}(T)$, $T={\operatorname{Conv}}\{(0,-p-q),(p,0),(0,p+q)\}$. The real parts ${\mathbb{R}} R,{\mathbb{R}} R'$ of these curves, in suitable coordinates $\sigma>0$, $\theta\in{\mathbb{R}}/2\pi{\mathbb Z}$ are given by $$\sigma^p+\sum_{i=p-2}^0b_i^{(0)}\sigma^i=2|a|\cos((p+q)\theta-\theta_a),\quad \sigma^p+\sum_{i=p-2}^0b_i^{(0)}\sigma^i=2|a'|\cos((p+q)\theta-\theta_{a'})\ ,$$ respectively. The number of their (real) intersection points is $p$ times the number of solutions of the equation $$|a|\cos((p+q)\theta-\theta_a)=|a'|\cos((p+q)\theta-\theta_{a'}),\quad \theta \in{\mathbb{R}}/2\pi{\mathbb Z}\ .$$ The latter number is $2(p+q)$: indeed, the difference of the two sides equals $|a-a'|\cos((p+q)\theta-\theta_{a-a'})$ with $\theta_{a-a'}=\arg(a-a')$, and, since $a-a'\ne0$, this function has exactly $2(p+q)$ zeros. Hence the total number of intersection points is $$2p(p+q)=2pq+2p^2=2(C_1\cdot C_2)+2{\operatorname{mt}} C_1\cdot{\operatorname{mt}} C_2$$ as required. Suppose that $\tau=\frac{q'}{p'}-\frac{q}{p}>0$.
Then $C_1\cup\overline C_1$ and $C_2\cup\overline C_2$ are given by $$F(w,\widehat w)=(w\widehat w)^p-a\widehat w^{p+q}-\overline aw^{p+q}+\sum_{\renewcommand{\arraystretch}{0.6} \begin{array}{c} \scriptstyle{pi+qj>p(p+q)}\\ \scriptstyle{qi+pj>p(p+q)}\end{array}}a_{ij}w^i\widehat w^j=0\ ,$$ and $$F'(w,\widehat w)=(w\widehat w)^{p'}-a'\widehat w^{p'+q'}-\overline a'w^{p'+q'}+\sum_{\renewcommand{\arraystretch}{0.6} \begin{array}{c} \scriptstyle{p'i+q'j>p'(p'+q')}\\ \scriptstyle{q'i+p'j>p'(p'+q')}\end{array}}a'_{ij}w^i\widehat w^j=0\ ,$$ respectively. Following the construction of Lemmas \ref{lsmooth} and \ref{lNND}, we substitute in the above equations $(w\widehat w-t^2)^p$ for $(w\widehat w)^p$ and $(w\widehat w-t^2)^{p'}$ for $(w\widehat w)^{p'}$, respectively, then make the same rescaling $(w,\widehat w)\mapsto(tw,t\widehat w)$. Next, we pass to the real coordinates $\sigma,\theta$ via $$\rho^2=w\widehat w=1+t^{\frac{q-p}{p}}\sigma, \quad w=\rho\exp(\sqrt{-1}\theta),\ \widehat w=\rho\exp(-\sqrt{-1}\theta)\ ,$$ (adapted to the pair $p,q$, not to $p',q'$!). Then the real morsification of $C_1\cup\overline C_1$ is given by $$\sigma^p+\sum_{i=p-2}^0b_i^{(0)}\sigma^i=2|a|\cos((p+q)\theta-\theta_a)+O(t^{\frac{1}{p}})\ ,$$ while the real morsification of $C_2\cup\overline C_2$ is given by $\sigma^{p'}=O(t^{\tau})$. The divide of the real morsification of $C_2\cup\overline C_2$ is a circle immersed into the $O(t^{\frac{1}{p'}})$-neighborhood of the level line $\sigma=0$ in the annulus $\{(\sigma,\theta)\in(-\lambda_0,\lambda_0)\times({\mathbb{R}}/2\pi{\mathbb Z})\}$ so that the normal projection onto the circle $\sigma=0$ is a $p'$-fold covering. Hence, this divide intersects the divide of the real morsification of $C_1\cup\overline C_1$ in $2p'(p+q)=2p'q+2p'p=2(C_1\cdot C_2)+2{\operatorname{mt}} C_1\cdot{\operatorname{mt}} C_2$ real points.
The case of $\tau=\frac{q'}{p'}-\frac{q}{p}<0$ can be considered in the same manner. \hfill$\blacksquare$\bigskip \subsubsection{The case of several pairs of complex conjugate tangents}\label{sec-sev} Suppose now that $(C,z)$ has $r\ge2$ pairs of complex conjugate tangent lines $$L_i=\{x+(\alpha_i+\beta_i\sqrt{-1})y=0\}, \quad\overline L_i=\{x+(\alpha_i-\beta_i\sqrt{-1})y=0\},\quad i=1,...,r\ ,$$ where $\alpha_i,\beta_i\in{\mathbb{R}}$, $\beta_i\ne0$ for all $i=1,...,r$. Set $$w_i=x+(\alpha_i+\beta_i\sqrt{-1})y,\quad\widehat w_i=x+(\alpha_i-\beta_i\sqrt{-1})y, \quad i=1,...,r\ .$$ Equations $\rho_i^2:=w_i\widehat w_i={\operatorname{const}}>0$, $i=1,...,r$, define distinct ellipses in ${\mathbb{R}}^2$, and there are $\gamma_1,...,\gamma_r>0$ such that each two ellipses $\rho_i^2=\gamma_i$, $\rho_j^2=\gamma_j$, $1\le i<j\le r$, intersect in four (real) points, and all $2r(r-1)$ intersection points are distinct. For any $i=1,...,r$, we introduce a real singularity $(C^{(i)},z)$ formed by the union of all the branches of $(C,z)$ tangent either to $L_i$, or to $\overline L_i$, and then construct a real morsification of $(C^{(i)},z)$ following the procedure of Section \ref{sec-smooth}, in which $t$ should be replaced by $t\sqrt{\gamma_i}$. For a given $t>0$, the divide of this morsification lies in an $O(t^{>2})$-neighborhood of the ellipse $\rho_i^2=\gamma_it^2$, and it is the union of several immersed circles so that the normal projection onto the ellipse is a covering of multiplicity $\frac{1}{2}{\operatorname{mt}}(C^{(i)},z)$. Hence, the divides of the morsifications of $(C^{(i)},z)$ and $(C^{(j)},z)$, $1\le i<j\le r$, intersect in ${\operatorname{mt}} C^{(i)}\cdot{\operatorname{mt}} C^{(j)}$ real points.
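\smallskip For a concrete instance of the above configuration of ellipses (an illustration only, with hypothetical numerical data), take $r=2$ and $(\alpha_1,\beta_1)=(0,1)$, $(\alpha_2,\beta_2)=(0,2)$, so that $\rho_1^2=x^2+y^2$ and $\rho_2^2=x^2+4y^2$. Choosing $\gamma_1=2$, $\gamma_2=5$, the ellipses $$x^2+y^2=2\quad\text{and}\quad x^2+4y^2=5$$ intersect in the four real points $(\pm1,\pm1)$, as required.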
So, in total the union of all $r$ divides contains $$\sum_{i=1}^r\left(\delta(C^{(i)},z)-{\operatorname{ImBr}}(C^{(i)},z)\right)+\sum_{1\le i<j\le r} (C^{(i)}\cdot C^{(j)})_z=\delta(C,z)-{\operatorname{ImBr}}(C,z)$$ real hyperbolic nodes. \subsection{Proof of Theorem \ref{t1}: general case}\label{sec-theorem1} Suppose now that $(C,z)$ is a real singularity satisfying the hypotheses of Theorem \ref{t1}. Denote by $(C^{re},z)$, resp. $(C^{im},z)$, the union of the branches of $(C,z)$ that have real, resp. complex conjugate tangents. If $C^{re}=\emptyset$, the existence of a real morsification follows from the results of Sections \ref{sec-smooth} and \ref{sec-sev}. Assume that $C^{re}\ne\emptyset$ and that it contains only smooth branches. We settle this case by induction on $\Delta_3(C,z)$, the maximal $\delta$-invariant of a subgerm of $(C^{re},z)$ having a unique tangent line. If $\Delta_3(C,z)=0$, then all branches of $(C^{re},z)$ are smooth, real, and transversal to each other. Then we first construct a real morsification of $(C^{im},z)$ as in Sections \ref{sec-smooth} and \ref{sec-sev} with $t>0$ chosen so small that each branch of $(C^{re},z)$ intersects the divide of the morsification of $(C^{im},z)$ in ${\operatorname{mt}}(C^{im},z)$ real points. Then we slightly shift the branches of $(C^{re},z)$ into general position, keeping the above real intersection points and obtaining additional $\delta(C^{re},z)$ hyperbolic nodes as required. In the case of $\Delta_3(C,z)>0$, we blow up the point $z$ and consider the strict transform of $(C^{re},z)$, which consists of germs $(C_i,z_i)$ with real centers $z_i$ on the exceptional divisor $E$. Clearly, for each germ $(C_i\cup E,z_i)$, its branches with real tangents are smooth and transversal to $E$, and, furthermore, $\Delta_3(C_i\cup E,z_i)<\Delta_3(C,z)$ for all $i$. Hence, there are real morsifications of the germs $(C_i\cup E,z_i)$, in which we can assume the germs $(E,z_i)$ to be fixed.
Then we blow down $E$ and obtain a deformation of $(C^{re},z)$ with ${\operatorname{mt}}(C^{re},z)$ real smooth transversal branches at $z$ and additional $\delta(C^{re},z)-{\operatorname{ImBr}}(C^{re},z)-\frac{1}{2}{\operatorname{mt}}(C^{re},z)({\operatorname{mt}}(C^{re},z)-1)$ real hyperbolic nodes (cf. the computations in Section \ref{total}(1)). Returning the subgerm $(C^{im},z)$, we obtain a real singularity at $z$ with $\Delta_3=0$, and thus complete the construction of a real morsification of $(C,z)$ as in the beginning of this paragraph. Now we get rid of all extra restrictions on $(C^{re},z)$ and prove the existence of a real morsification of $(C,z)$ by induction on $\Delta_4(C,z)$, which is the $\delta$-invariant of the union of the singular branches of $(C^{re},z)$. The preceding consideration serves as the base of induction. The induction step is precisely the same, and we only notice that (in the above notations) $\max\Delta_4(C_i\cup E,z_i)<\Delta_4(C,z)$. The proof of Theorem \ref{t1} is completed. \section{Real morsifications and Milnor fibers}\label{sec-reg} \subsection{A'Campo surface and Milnor fiber}\label{sec-mil} In \cite[Section 3]{AC1}, A'Campo constructs the link of a divide of a real morsification of a singularity (which we call the {\bf A'Campo link}). This link is embedded into the $3$-sphere, the boundary of the Milnor ball, and the fundamental result by A'Campo \cite[Theorem 2]{AC1} states that it is isotopic to the link of the given singularity in the $3$-sphere. In this section, we discuss a somewhat stronger isotopy. Namely, in \cite[Section 3]{AC1}, A'Campo associates with a real morsification a surface (which we call the {\bf A'Campo surface}), whose boundary is the A'Campo link. It is natural to ask whether the pair (A'Campo surface, A'Campo link) is isotopic to the pair (Milnor fiber, its boundary).
In \cite[Page 22]{AC1}, A'Campo conjectures a certain transversality condition for the known morsifications that would ensure the discussed isotopy. Here we prove this transversality condition for all morsifications constructed in Section \ref{sec-exi}. We also show that this transversality condition may fail even for morsifications of simple singularities. Hence, the question on the isotopy between the A'Campo surface and the Milnor fiber remains open in the general case. Let $(C,0)\subset{\mathbb{C}}^2$ be a real singularity given by an equivariant analytic equation $f(x,y)=0$. Following \cite[Section 3]{AC1}, we replace the standard Milnor ball $B(C,0)$ by the bi-disc $B(0,\rho_0):=\{u+v\sqrt{-1}\in{\mathbb{C}}^2\ :\ u,v\in D(0,\rho_0)\subset{\mathbb{R}}^2\}$, where $\rho_0>0$ and ${\mathbb{C}}^2={\mathbb{R}}^2\oplus{\mathbb{R}}^2\sqrt{-1}$. It is easy to verify that $\partial B(0,\rho)$ transversally intersects $C$ for each $0<\rho\le\rho_0$ if $\rho_0$ is small enough, and we assume this further on. For $\xi\in{\mathbb{C}}$ with $0<|\xi|\ll1$ all curves $M_\xi=\{f(x,y)=\xi\}\subset B(0,\rho_0)$ are smooth and transversally intersect $\partial B(0,\rho_0)$. They are called {\bf Milnor fibers} of the given singularity $(C,0)$. Respectively, the links $LM_\xi=M_\xi\cap\partial B(0,\rho_0)$ are isotopic in the sphere $\partial B(0,\rho_0)$ to the link $L(C,z)=C\cap\partial B(0,\rho_0)$ of the singularity $(C,z)$, and the pairs $(M_\xi,LM_\xi)$, $0<|\xi|\ll1$, are isotopic in $(B(0,\rho_0),\partial B(0,\rho_0))$. Introduce the family of bi-discs $$B'_\rho(0,\rho_0)=\{u+v\sqrt{-1}\in{\mathbb{C}}^2\ :\ u\in D(0,\rho_0),\ v\in D(0,\rho)\},\quad 0<\rho\le\rho_0\ .$$ By definition, $B'_{\rho_0}(0,\rho_0)=B(0,\rho_0)$.
Let $C_t=\{f_t(x,y)=0\}$, $0\le t\le t_0$, $f_0=f$, be a real morsification of $(C,0)$ defined in $B(0,\rho_0)$. Without loss of generality, we can assume that $C_t$ intersects $\partial B(0,\rho_0)$ transversally for all $0\le t\le t_0$. We have two families of singular surfaces in $B(0,\rho_0)$: \begin{itemize}\item $F(\rho)=C_{t_0}\cap B'_\rho(0,\rho_0)$, $0\le\rho\le\rho_0$, \item $R(\rho)=\{u+v\sqrt{-1}\in B'_\rho(0,\rho_0)\ :\ u\in{\mathbb{R}} C_{t_0},\ v\in T_u{\mathbb{R}} C_{t_0},\ v\in D(0,\rho)\}$, $0\le\rho\le\rho_0$ (here ${\mathbb{R}} C_{t_0}\subset D(0,\rho_0)$ is an immersed real analytic curve with nodes, and at each node $u\in{\mathbb{R}} C_{t_0}$ we understand $T_u{\mathbb{R}} C_{t_0}$ as the union of the tangent lines to the branches centered at $u$). \end{itemize} Denote $LF(\rho)=F(\rho)\cap\partial B'_\rho(0,\rho_0)$ and $LR(\rho)=R(\rho)\cap\partial B'_\rho(0,\rho_0)$ for all $0<\rho\le\rho_0$. \begin{lemma}[cf. \cite{AC1}, Theorem 2]\label{l15} (1) The set $LR(\rho)$ is a link in the sphere $\partial B'_\rho(0,\rho_0)$ for any $0<\rho\le\rho_0$. The set $LF(\rho)$ is a link in the sphere $\partial B'_\rho(0,\rho_0)$ for all but finitely many values $\rho\in(0,\rho_0]$. Furthermore, $LF(\rho_0)$ is a link equivariantly isotopic in $\partial B(0,\rho_0)$ to the singularity link $L(C,z)$. (2) There exists $\rho'=\rho'(t_0)$ such that the links $LF(\rho')$ and $LR(\rho')$ are equivariantly isotopic in $\partial B'_{\rho'}(0,\rho_0)$, and the pairs $(F(\rho'),LF(\rho'))$ and $(R(\rho'),LR(\rho'))$ are equivariantly isotopic in $(B'_{\rho'}(0,\rho_0),\partial B'_{\rho'}(0,\rho_0))$. \end{lemma} {\bf Proof.} The first statement is straightforward.
The second one immediately follows from the fact that $F(\rho)$ and $R(\rho)$ are immersed surfaces having the same real point set with the same tangent planes along it. \hfill$\blacksquare$\bigskip For $\eta>0$ small enough, the algebraic curves $$F^{sm}(\rho)=\{f_{t_0}(x,y)=\eta\}\cap B'_\rho(0,\rho_0)$$ are smooth for all $\rho'(t_0)\le\rho\le\rho_0$, and each of them is obtained from $F(\rho)$ by a small deformation in a neighborhood $U_u$ of each node $u\in {\mathbb{R}} C_{t_0}$ that replaces two transversally intersecting discs with a cylinder. Respectively, for all $\rho'(t_0)\le\rho\le\rho_0$, we define $C^\infty$-smooth equivariant {\bf A'Campo surfaces} $R^{sm}(\rho)\subset B'_\rho(0,\rho_0)$, obtained from $R(\rho)$ by replacing $R(\rho)\cap U_u$ with the cylinder $F^{sm}(\rho)\cap U_u$ smoothly attached to $R(\rho)\setminus U_u$ for each node $u\in{\mathbb{R}} C_{t_0}$. If $\xi\in{\mathbb{C}}\setminus\{0\}$ with $|\xi|$ small enough, then the intersections $M_\xi\cap\partial B'_\rho(0,\rho_0)$ are transversal for all $\rho'(t_0)\le\rho\le\rho_0$. We would like to address \medskip {\bf Question.} {\it Is the pair $(R^{sm}(\rho_0),LR(\rho_0))$ isotopic to $(M_\xi,LM_\xi)$ in $(B(0,\rho_0),\partial B(0,\rho_0))$, or, equivalently, is the pair $(R^{sm}(\rho'(t_0)),LR(\rho'(t_0)))$ isotopic to $(M_\xi\cap B'_{\rho'(t_0)}(0,\rho_0),M_\xi\cap\partial B'_{\rho'(t_0)}(0,\rho_0))$ in $(B'_{\rho'(t_0)}(0,\rho_0),\partial B'_{\rho'(t_0)}(0,\rho_0))$?} \medskip This seems to be stronger than Lemma \ref{l15}. We would like to comment on this question more. 
Since $(F^{sm}(\rho_0),F^{sm}(\rho_0)\cap\partial B(0,\rho_0))$ is isotopic to $(M_\xi,LM_\xi)$ in $(B(0,\rho_0),\partial B(0,\rho_0))$, and, by Lemma \ref{l15}, $(F^{sm}(\rho'(t_0)),F^{sm}(\rho'(t_0))\cap\partial B'_{\rho'(t_0)}(0,\rho_0))$ is (equivariantly) isotopic to $(R^{sm}(\rho'(t_0)),LR(\rho'(t_0)))$ in $(B'_{\rho'(t_0)}(0,\rho_0),\partial B'_{\rho'(t_0)}(0,\rho_0))$, the answer to the above Question would be {\bf yes}, if we could prove one of the following claims. Observe that the closure of $R^{sm}(\rho_0)\setminus R^{sm}(\rho'(t_0))$ as well as the closure of $F^{sm}(\rho_0)\setminus F^{sm}(\rho'(t_0))$ is the disjoint union of pairs of discs (corresponding to real branches of $(C,z)$) and cylinders (corresponding to pairs of complex conjugate branches of $(C,z)$), and the former surface defines a cobordism of $LR(\rho_0)$ and $LR(\rho'(t_0))$ trivially fibred over $[\rho'(t_0),\rho_0]$. So the requested claims are \begin{enumerate}\item[(A)] The surface $Closure(F^{sm}(\rho_0)\setminus F^{sm}(\rho'(t_0)))$ defines a trivial cobordism of $F^{sm}(\rho_0)\cap\partial B(0,\rho_0)$ and $F^{sm}(\rho'(t_0))\cap\partial B'_{\rho'(t_0)}(0,\rho_0)$. \item[(B)] The intersections $C_t\cap\partial B'_{\rho'(t_0)}(0,\rho_0)$ are transversal for all $0\le t\le t_0$. \end{enumerate} Claim (A) seems to be open in general so far, and it is proved in \cite{P} for morsifications of totally real singularities obtained by the blowing up construction as in \cite{AC} (see also \cite[Theorem 5.2]{CP}). Claim (B) is formulated in \cite[Page 22]{AC1} as a conjecture again for the morsifications of totally real singularities constructed in \cite{AC}. 
However, in general, it does not hold: \begin{proposition}\label{l16} The totally real singularity $(C,z)$ given by $y^2-x^{2n}=0$, $n\ge4$, possesses a real morsification $C_t$, $0\le t\le t_0$, such that for arbitrary $0<\rho<\rho_0$ and $0<t<t_0$, there exist $0<\rho'<\rho$ and $0<t'<t$ for which the intersection of $C_{t'}$ and $\partial B'_{\rho'}(0,\rho_0)$ is not transversal. \end{proposition} {\bf Proof.} We have $\partial B'_\rho(0,\rho_0)=\left(\partial D(0,\rho_0)\times D(0,\rho)\right)\cup \left(D(0,\rho_0)\times\partial D(0,\rho)\right)$. The intersection of $C_t$ with $\partial D(0,\rho_0)\times D(0,\rho)$ is transversal for any real morsification of $(C,z)$. On the other hand, the intersection of $C_t$ with $D(0,\rho_0)\times\partial D(0,\rho)$ is not transversal at some point $p=u+v\sqrt{-1}\in D(0,\rho_0)\times\partial D(0,\rho)$ if and only if the tangent line to $C_t$ at this point has a real slope. Indeed, if $C_t$ is given in a neighborhood of $p$ by $y=\varphi(x)$, then the lack of transversality of the intersection of $C_t$ and $D(0,\rho_0)\times\partial D(0,\rho)$ at $p$ can be expressed as $${\operatorname{Im}}\frac{d\varphi}{dx}\big|_p\cdot v_2=v_1-{\operatorname{Re}}\frac{d\varphi}{dx}\big|_p\cdot v_2=0,\quad\text{where}\ v=(v_1,v_2)\ne0\ ,$$ and hence ${\operatorname{Im}}\frac{d\varphi}{dx}\big|_p=0$. In other words, the lack of transversality means the existence of a tangent line with a real slope to $C_t$ at a non-real point. Now we define $$C_t=\left\{(y-tx^2)^2-\prod_{k=1}^n(x-kt)^2=0\right\},\quad 0\le t\le t_0,\quad 0<t_0\ll1\ .$$ The real point set of $C_t$ consists of two branches $y=tx^2\pm\prod_{k=1}^n(x-kt)$ transversally intersecting in $n$ points, and hence it is a real morsification. 
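The tangency points of this morsification can also be located numerically. The following sketch (our addition, not part of the original argument; it assumes Python with numpy, and the sample values $n=4$, $t=10^{-4}$ are ours) finds the zeros of the derivative of the branch $y=tx^2+\prod_{k=1}^n(x-kt)$ and compares the moduli of the non-real ones with the leading-order prediction $|x|\approx(2t/n)^{1/(n-2)}$ coming from the balance $2tx+nx^{n-1}\approx0$, i.e. $x^{n-2}\approx-2t/n$:

```python
import numpy as np

# Branch y(x) = t*x^2 + prod_{k=1}^{n} (x - k*t) of the morsification C_t;
# n = 4 and t = 1e-4 are sample choices (not fixed by the text).
n, t = 4, 1e-4

p = np.poly([k * t for k in range(1, n + 1)])   # monic prod_{k} (x - k*t)
dy = np.polyadd(np.polyder(p), [2 * t, 0.0])    # y'(x) = p'(x) + 2*t*x

roots = np.roots(dy)
nonreal = [z for z in roots if abs(z.imag) > 1e-9]

# leading-order prediction for the non-real zero-slope tangency points:
# x^{n-2} ~ -2t/n, hence |x| ~ (2t/n)^{1/(n-2)}
predicted = (2 * t / n) ** (1.0 / (n - 2))
print(len(nonreal), [abs(z) for z in nonreal], predicted)
```

For $n=4$ the two zero-slope tangency points form a complex conjugate pair, purely imaginary at leading order, in agreement with the asymptotics in the proof.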
It is easy to compute that the branch $y=tx^2+\prod_{k=1}^n(x-kt)$ has $n-2$ tangent lines with the zero slope at the points $$x_i(t)=\lambda_i\left(\frac{2}{n}\right)^{1/(n-2)}t^{1/(n-2)}(1+O(t^{>0})),\quad i=0,...,n-3\ ,$$ where $\lambda_0,...,\lambda_{n-3}$ are the roots of $\lambda^{n-2}=-1$. At most one of these roots is real; thus, we obtain at least $n-3$ zero slope tangents at non-real points. Since $x_i(t)\to0$ as $t\to0$, the statement of the Proposition follows. \hfill$\blacksquare$\bigskip \subsection{Real Milnor morsifications} We say that a real morsification of a real singularity $(C,z)$ is a {\bf real Milnor morsification} if in the notation of Section \ref{sec-mil}, the pair $(R^{sm}(\rho_0),LR(\rho_0))$ is isotopic to $(M_\xi,LM_\xi)$ in $(B'_\rho(z,\rho_0),\partial B'_\rho(z,\rho_0))$ for some $0<\rho\le\rho_0$. \begin{theorem}\label{ap2} Any isolated real plane curve singularity satisfying the hypotheses of Theorem \ref{t1} admits a real Milnor morsification. \end{theorem} {\bf Proof.} We prove the theorem by establishing Claim (B) formulated in the preceding section. Let $(C,z)$ be a real singularity as in Theorem \ref{t1}. Applying a suitable local diffeomorphism, we can assume that $(C,z)$ does not contain (segments of) straight lines, and hence $(L\cdot C)_z<\infty$ for any line $L$ through $z$. Denote by $\Lambda$ the union of all real tangent lines to $(C,z)$ at $z$. Under the assumption made, we apply the construction used in the proof of Theorem \ref{t1} and obtain a real morsification of $(C\cup\Lambda,z)$, in which $\Lambda$ remains fixed. Then we get rid of $\Lambda$ and obtain a real morsification $C_t$, $0\le t\le t_0$, of $(C,z)$. We shall show that it is a real Milnor morsification (possibly replacing $t_0$ with a smaller positive number). As noticed in the proof of Proposition \ref{l16}, the required property is equivalent to the absence of non-real lines with real slopes tangent to $C_t$, $0\le t\le t_0$. 
Our first observation is \begin{lemma}\label{l7} Let $(C,z)$ be a real singularity, $L$ a real line passing through $z$ and intersecting $(C,z)$ only at $z$ (in the Milnor ball), with a finite multiplicity $(L\cdot C)_z$. Denote by ${\mathcal P}_L$ the germ of the pencil of the lines parallel to $L$ and by ${\mathbb{R}}{\mathcal P}_L$ its real point set. Let $C_t$, $0\le t<{\varepsilon}$, be a real morsification of $(C,z)$ as above, and let $C_t$ and $L$ intersect in $(L\cdot C)_z$ real points for any $t\in(0,{\varepsilon})$. Then each line $L'\in{\mathcal P}_L\setminus{\mathbb{R}}{\mathcal P}_L$ intersects each element $C_t$, $0< t<{\varepsilon}$, transversally. \end{lemma} {\bf Proof.} Let $C'$ be a Milnor fiber. Then the lines of ${\mathcal P}_L$ in total are tangent to $C'$ in $\kappa(C,z)+(L\cdot C)_z-{\operatorname{mt}}(C,z)$ points, where $\kappa(C,z)$ is the class of the singularity $(C,z)$ (see, for example, \cite[Section I.3.4]{GLS} for details). Since, for a node, $\kappa=2$, and in general $\kappa(C,z)=2\delta(C,z)+{\operatorname{mt}}(C,z)-{\operatorname{Br}}(C,z)$, we get that the lines of ${\mathcal P}_L$ in total are tangent to $C_t$ in $$\kappa(C,z)+(L\cdot C)_z-{\operatorname{mt}}(C,z)-2(\delta(C,z)-{\operatorname{ImBr}}(C,z))= (L\cdot C)_z-{\operatorname{ReBr}}(C,z)$$ points. 
It follows that \begin{itemize}\item $L$ intersects the morsification $C_{i,t}$ of any real branch $(C_i,z)$ of $(C,z)$ in $(L\cdot C_i)_z$ real points, while the real point set ${\mathbb{R}} C_{i,t}$ of $C_{i,t}$ is an immersed segment; that is, $L$ cuts ${\mathbb{R}} C_{i,t}$ into $(L\cdot C_i)_z+1$ immersed segments, all but two of which have both endpoints on ${\mathbb{R}} L$; hence, varying $L$ in ${\mathbb{R}}{\mathcal P}_L$, we encounter at least $(L\cdot C_i)_z-1$ real tangency points; \item $L$ intersects the morsification $C_{j,t}$ of a pair of complex conjugate branches $(C_j,z)$, $(\overline C_j,z)$ of $(C,z)$ in $2(L\cdot C_j)_z$ real points, and hence it cuts ${\mathbb{R}} C_{j,t}$ (which is an immersed circle) into $2(L\cdot C_j)_z$ immersed segments, all of whose endpoints lie on ${\mathbb{R}} L$, and hence, varying $L$ in ${\mathbb{R}}{\mathcal P}_L$, we encounter at least $2(L\cdot C_j)_z$ real tangency points. \end{itemize} The claim of the Lemma follows. \hfill$\blacksquare$\bigskip Remark that, under the conditions of Lemma \ref{l7}, there is an open neighborhood $U_L$ of $L$ in the dual plane ${\mathbb{P}}^{2,\vee}$ such that all non-real lines in $U_L$ with real slopes intersect each curve $C_t$, $0<t<{\varepsilon}$, transversally. Thus, Theorem \ref{ap2} follows from \begin{lemma}\label{al7} For any real line $L$ through $z$, there exists $0<\rho\le\rho_0$ satisfying the following conditions \begin{itemize}\item $L\cap C\cap B'_\rho(z,\rho_0)=\{z\}$; \item for some ${\varepsilon}>0$, $L$ intersects any curve $C_t$, $0<t<{\varepsilon}$, in $(L\cdot C)_z$ real points (counting multiplicities). \end{itemize} \end{lemma} {\bf Proof.} Let $L_1,...,L_k$ be all real tangent lines to $(C,z)$ at $z$. 
Write $(C,z)=\bigcup_i(C_i,z)$, where $(C_i,z)$ either has a unique (real) tangent line, or a pair of complex conjugate tangent lines, and $(C_i,z)$, $(C_j,z)$ have no tangent in common for $i\ne j$. We can consider morsifications of $(C_i,z)$ separately. Suppose that $(C_i,z)$ has a pair of complex conjugate tangent lines. The morsification of $(C_i,z)$ constructed in Section \ref{sec-smooth} is such that the real point set of $C_t$, $0<t<{\varepsilon}$, consists of one or several immersed circles going in total $\frac{1}{2}{\operatorname{mt}}(C_i,z)$ times around $z$, and hence $L$ (which is transversal to $(C_i,z)$, i.e. $(L\cdot C_i)_z={\operatorname{mt}}(C_i,z)$) intersects any curve $C_t$ in ${\operatorname{mt}}(C_i,z)$ real points (counting multiplicities). Suppose that $(C_i,z)$ has a unique (real) tangent line $L_z$, and $L\ne L_z$. Then $(L\cdot C_i)_z={\operatorname{mt}}(C_i,z)$. The smooth real branches of $(C_i,z)$ are deformed in any morsification so that they remain transversal to $L$ and intersect $L$ at one real point. For $(C'_i,z)$, the union of the other branches of $(C_i,z)$, the construction of a morsification presented in Section \ref{sec-theorem1} goes inductively. Namely, we blow up $z$, construct a morsification of the strict transform of $(C'_i,z)$ united with the exceptional divisor and then blow down the exceptional divisor. Elements of this intermediate deformation have ${\operatorname{mt}}(C'_i,z)$ smooth real branches centered at $z$, all transversal to $L$, and in any further deformation they intersect with $L$ in ${\operatorname{mt}}(C'_i,z)$ real points. If $(C_i,z)$ has a unique (real) tangent line $L_z$, and $L=L_z$, the statement follows from the construction. 
\hfill$\blacksquare$\bigskip \section{A'Campo-Gusein-Zade diagrams and topology of singularities}\label{sec3} \subsection{${\mathrm A}\Gamma$-diagrams of real morsifications}\label{sec-ag} L. Balke and R. Kaenders proved \cite[Theorem 2.5 and Corollary 2.6]{BK} that the A'Campo-Gusein-Zade diagram (briefly, ${\mathrm A}\Gamma$-diagram) associated with a morsification of a totally real singularity determines the complex topological type of the given singularity. Here we extend this result to real morsifications of arbitrary real singularities. We get rid of the requirement for morsifications to define a partition (see Section \ref{sec1} and \cite[Definition 1.2]{BK}) and prove that an ${\mathrm A}\Gamma$-diagram determines the topological type of the singularity as well as some additional information on its real structure. Let us recall definitions from \cite{AC3} and \cite{BK}. A subset $D$ of a closed disc $\boldsymbol{D}\subset \mathbb{R}^2$ is called a {\bf connected divide} if it is the image of an immersion of a disjoint union $\Sigma\ne\emptyset$ of a finite number of segments $I=[0,1]$ and circles $S^1$ satisfying the following conditions: \begin{itemize} \item the set of the endpoints of all the segments in $\Sigma$ is injectively mapped to $\partial\boldsymbol{D}$, whereas the other points of $\Sigma$ are mapped to the interior of $\boldsymbol{D}$; \item each point of $D\setminus{\operatorname{Sing}}(D)$, the complement of a finite set ${\operatorname{Sing}}(D)$, has a unique preimage in $\Sigma$, and each point of ${\operatorname{Sing}}(D)$ is a transversal intersection of two smooth local branches; \item the images of any two connected components of $\Sigma$ intersect each other. \end{itemize} Note that $\Sigma$ is uniquely determined by $D$. 
The image of any connected component of $\Sigma$ is a divide, which is called a {\bf branch} of the divide $D$. The divide of a real morsification of a real singularity placed in the real Milnor disc (see Section \ref{sec1}) is a connected divide in the above sense. Connected components of $\boldsymbol{D}\setminus D$ and of $D\setminus{\operatorname{Sing}}(D)$, disjoint from $\partial\boldsymbol{D}$, are called inner components. Clearly, each inner component of $\boldsymbol{D}\setminus D$ is homeomorphic to an open disc, and each inner component of $D\setminus{\operatorname{Sing}}(D)$ is homeomorphic either to an open interval, or to $S^1$ if $D\simeq S^1$. It is straightforward that the set $\pi_0(\boldsymbol{D}\setminus D)$ of the connected components of $\boldsymbol{D}\setminus D$ can be $2$-colored, i.e., there exists a function $\pi_0(\boldsymbol{D}\setminus D)\to\{\pm1\}$ such that the components whose boundaries intersect along one-dimensional pieces of $D$ have different signs, and there are precisely two functions like that (cf. \cite[Proposition 1.4]{BK}). Fix a $2$-coloring $s:\pi_0(\boldsymbol{D}\setminus D)\to\{\pm1\}$. 
The {\bf A'Campo-Gusein-Zade diagram} (${\mathrm A}\Gamma${\bf -diagram}) of a connected divide $D$ is a $3$-colored graph ${\mathrm A}\Gamma(D)=(V,E,c)$ such that \begin{itemize}\item the set $V$ of its vertices is in one-to-one correspondence with the disjoint union of ${\operatorname{Sing}}(D)$ (the set of $\bullet$-vertices in the notation of \cite{BK}) and the set $\pi^{inn}_0(\boldsymbol{D}\setminus D)$ of the inner components of $\boldsymbol{D}\setminus D$ (the $\oplus$-vertices and $\ominus$-vertices in the notation of \cite{BK} in accordance with the chosen coloring); \item two distinct vertices $K_1,K_2\in\pi^{inn}_0(\boldsymbol{D} \setminus D)$ such that $\partial K_1\cap\partial K_2\setminus{\operatorname{Sing}}(D)\ne\emptyset$ are joined by $k$ edges, where $k$ is the number of inner components of $D\setminus{\operatorname{Sing}}(D)$ inside $\partial K_1\cap\partial K_2$; \item two vertices $K\in\pi^{inn}_0(\boldsymbol{D}\setminus D)$ and $p\in{\operatorname{Sing}}(D)$ such that $p\in\partial K$ are joined by $k$ edges, where $k$ is the number of components of the intersection of $K$ with a small disc centered at $p$ (clearly, here $k=1$ or $2$); \item the $3$-coloring $c:V\to\{\pm1,0\}$ is defined by $c(K)=s(K)$, $K\in\pi^{inn}_0(\boldsymbol{D}\setminus D)$, and $c(p)=0$, $p\in{\operatorname{Sing}}(D)$.\end{itemize} Comparing with \cite[Definition 1.5]{BK}, we admit multi-graphs, i.e., vertices can be joined by several edges, while this is excluded in \cite[Definition 1.5]{BK} by the partition requirement. On the other hand, there are no loops. By construction, the ${\mathrm A}\Gamma$-diagram can be embedded into $\boldsymbol{D}$ (cf. \cite[Remark on page 43]{BK}). 
The ${\mathrm A}\Gamma$-diagram associated with the divide of a real morsification of a real singularity is simply called an ${\mathrm A}\Gamma$-diagram of that singularity. \subsection{${\mathrm A}\Gamma$-diagram determines the weak real topological type of a singularity} The topological type of a real singularity $(C,z)$ is its equivalence class up to a homeomorphism of the Milnor ball, and it is known \cite{Bra,Z} (see also \cite[Section 8.4]{BrK}) that the topological type of a given singularity is determined by the collections of Puiseux pairs of its branches and by pairwise intersection numbers of the branches. We introduce the {\bf weak real topological type} of $(C,z)$ to be the topological type enriched with the following information: \begin{itemize}\item indication of real branches and pairs of complex conjugate branches; \item the cyclic order of real branches, that is, if $(C,z)$ has $k\ge1$ real branches, we number them somehow and introduce the cyclic order on the multiset $\{1,1,2,2,...,k,k\}$ induced by the position of the $2k$ intersection points of the real branches with the circle $\partial{\mathbb{R}} B_{C,z}$, defined up to reversing the orientation of $\partial{\mathbb{R}} B_{C,z}$ and renumbering the branches; \item the topological types of the real branches, their mutual intersection multiplicities and their intersection multiplicities with non-real branches.\end{itemize} \begin{theorem}\label{t3} An ${\mathrm A}\Gamma$-diagram of an arbitrary real singularity determines its weak real topological type. 
\end{theorem} {\bf Proof.} Balke and Kaenders \cite{BK} proved that the ${\mathrm A}\Gamma$-diagram determines the topological type of a totally real singularity, and we closely follow the lines of their proof referring for details to \cite[Section 2]{BK} and presenting necessary modifications for the general case. First, we remark that the partition requirement (see Section \ref{sec1}) was not, in fact, used in \cite{BK}. In particular, it is not needed in the construction of the Coxeter-Dynkin diagram from the given divide as presented in \cite{GZ}. \smallskip {\bf(1)} The main step in the proof of \cite[Theorem 2.5 and Corollary 2.6]{BK} is to show that an ${\mathrm A}\Gamma$-diagram of a totally real singularity determines the branch structure of the divide, pairwise intersection numbers of the branches, and an ${\mathrm A}\Gamma$-diagram of each branch. Their argument literally applies in the general case. We notice in addition that one can easily distinguish between ${\mathrm A}\Gamma$-diagrams of non-closed and closed branches of the divide, i.e., between an ${\mathrm A}\Gamma$-diagram of a real branch of $(C,z)$ and an ${\mathrm A}\Gamma$-diagram of a pair of complex conjugate branches. Namely, in the former case, the ${\mathrm A}\Gamma$-diagram contains either a univalent $\bullet$-vertex, or a bivalent $\bullet$-vertex joined with a $\oplus$-vertex and a $\ominus$-vertex, while in the latter case, the ${\mathrm A}\Gamma$-diagram has no such $\bullet$-vertices. We only comment on the persistence of the cyclic order of real branches of the singularity (aka, non-closed branches of the divide). 
An embedding of the ${\mathrm A}\Gamma$-diagram into ${\mathbb{R}} B_{C,z}$ defines the divide up to isotopy (see \cite[Page 46]{BK}). The ambiguity in the construction of an embedding is related to the existence of the so-called {\bf chains} in the ${\mathrm A}\Gamma$-diagram, i.e., connected subgraphs consisting of bivalent or univalent $\bullet$-vertices and bivalent $\oplus$-vertices (or bivalent $\ominus$-vertices) joined by arcs as shown in Figure \ref{fig2}(a) (cf. \cite[Figure 6]{BK}). Figure \ref{fig2}(b) shows the corresponding fragment of the divide (cf. \cite[Figure 7]{BK}). By \cite[Lemma 2.8]{BK}, the given ${\mathrm A}\Gamma$-diagram can be transformed by inserting new chains and extending the existing ones in a controlled way into a {\bf chain separating} ${\mathrm A}\Gamma$-diagram, whose maximal (with respect to inclusion) chains have pairwise distinct lengths, and no new chain can be added. Each chain of a divide shares the boundary with two non-inner components of the complement to the divide, and the disc ${\mathbb{R}} B_{C,z}$ can be cut into three parts as shown in Figure \ref{fig2}(b) by dashed lines (cf. \cite[Figure 7]{BK}), and similarly one can cut ${\mathbb{R}} B_{C,z}$ with respect to the embedded chain of the ${\mathrm A}\Gamma$-diagram, Figure \ref{fig2}(a). Then a given embedding of a chain separating ${\mathrm A}\Gamma$-diagram can be changed in part $A$ or in part $B$ by a reflection with respect to the axis of the chain (and so for any other maximal chain). 
Note that the branches of the divide, which are disjoint from the chain of the divide, must all lie either in part $A$, or in part $B$, since any two of them must intersect each other. In the presence of such branches, located, say, in part $A$, and under the assumption that the chain is formed by two branches of the divide, all possible self-intersections of the latter branches must lie in part $A$ too due to Lemma \ref{l3}(i) applied to the divide with one of these two branches removed. All these observations yield that the cyclic order of non-closed branches of the divide is preserved under the changes of the embedding of the chain separating ${\mathrm A}\Gamma$-diagram described above. Finally, we note that the same cyclic order of the divide is induced by the corresponding embedding of the original ${\mathrm A}\Gamma$-diagram. \begin{figure} \setlength{\unitlength}{0.8cm} \begin{picture}(14.5,7)(-1,-0.7) \epsfxsize 135mm \epsfbox{mors2.eps} \put(-13.2,-0.7){(a)}\put(-4,-0.7){(b)}\put(-7.2,3.8){$A$}\put(-0.8,3.8){$B$} \put(-16.5,3.8){$A$}\put(-10,3.8){$B$} \end{picture} \caption{Chains of an ${\mathrm A}\Gamma$-diagram and of a divide} \label{fig2} \end{figure} \smallskip {\bf(2)} The topological type of a real branch of the given singularity can be recovered from its ${\mathrm A}\Gamma$-diagram, see \cite[Theorem 1.9]{BK}. In a similar way, we show that an ${\mathrm A}\Gamma$-diagram of a closed branch of the divide determines the topological type of a real singularity formed by a pair of complex conjugate branches. 
Namely, an ${\mathrm A}\Gamma$-diagram defines the monodromy operator of such a singularity, see \cite{AC2} and \cite[Page 39]{GZ1}, and hence its characteristic polynomial, which is the reduced Alexander polynomial of the link of the singularity \cite[\S8]{M} (see also \cite[Theorem 3.3]{SW}). Thus, we have to prove \begin{lemma}\label{l1203} The reduced Alexander polynomial of a singularity formed by two topologically equivalent branches determines the topological type of the branches and their intersection multiplicity.\end{lemma} {\bf Proof.} This statement is, in fact, a particular case of \cite[Proposition 3.2]{CDGZ}. For the reader's convenience, we provide here a proof based on a simple direct computation. Such a singularity is topologically equivalent to a singularity $(C,z)$ with $z=0\in{\mathbb{C}}^2$ and two branches having the following Puiseux-type expansions: $$y=x^{\frac{m_1}{n_1}}+...+x^{\frac{m_i}{n_1...n_i}}+\sqrt{-1}\left(x^{\frac{m_{i+1}}{n_1...n_{i+1}}}+ ...+x^{\frac{m_s}{n_1...n_s}}\right)\ ,$$ $$y=x^{\frac{m_1}{n_1}}+...+x^{\frac{m_i}{n_1...n_i}}-\sqrt{-1}\left(x^{\frac{m_{i+1}}{n_1...n_{i+1}}}+ ...+x^{\frac{m_s}{n_1...n_s}}\right)\ ,$$ where the parameters are positive integers satisfying $$s\ge1,\quad 0\le i<s,\quad \gcd(m_j,n_j)=1\ \text{for all}\ j=1,...,s,\quad n:=n_1...n_s={\operatorname{mt}}(C,z)\ , $$ \begin{equation}n_j>1\ \text{for all}\ j=1,...,s,\ j\ne i+1,\quad\text{and}\ 1\le\frac{m_1}{n_1}<...<\frac{m_s}{n_1...n_s}\ .\label{e1203} \end{equation} Note that here all the pairs $(m_j,n_j)$, $j\ne i+1$, are characteristic Puiseux pairs; for $j=i+1$, the pair $(m_{i+1},n_{i+1})$ may be a characteristic Puiseux pair as well, in which case $n_{i+1}>1$, or may not, in which case $n_{i+1}=1$. This dichotomy reflects the position of the last common infinitely near point of the two branches of the given singularity. 
To recover the topological type of the branches and their intersection number, we need to know the parameters \begin{equation}n_j,\ m_j,\ j=1,...,s,\quad \text{and}\quad i\ .\label{e1603}\end{equation} The link $L:=C\cap\partial B_{C,z}$ consists of two algebraic knots in $\partial B_{C,z}\simeq S^3$ and it has a topological invariant $\Delta^2_L(t_1,t_2)\in{\mathbb Z}[t_1,t_2]$ called the {\bf Alexander polynomial of the link} (see \cite{SW} for precise definitions and detailed treatment). According to \cite[\S8, page 95]{M} (see also \cite[Theorem 3.3]{SW}), the {\bf reduced Alexander polynomial} $\Delta_L(t):=(t-1)\Delta^2_L(t,t)$ is the characteristic polynomial of the monodromy of $(C,z)$. In our setting, the formula in \cite[Theorem 7.6]{SW} says that \begin{equation}\Delta_L(t)= \frac{t-1}{t^{2n}-1}\cdot\left(\prod_{j=1}^i\frac{t^{2w_jb_{j,s}}-1}{t^{2w_jb_{j+1,s}}-1}\right) \cdot\frac{(t^{2w_{i+1}b_{i+1,s}}-1)^2}{t^{2w_{i+1}b_{i+2,s}}-1} \cdot\left(\prod_{j=i+2}^s\frac{t^{n_je_j}-1}{t^{e_j}-1}\right)^2\ ,\label{e1803}\end{equation} where $$w_1=m_1,\quad w_j=m_j-m_{j-1}n_j+w_{j-1}n_{j-1}n_j,\ 2\le j\le s\ ,$$ $$b_{j_1,j_2}=\prod_{j_1\le j\le j_2}n_j,\quad e_j= w_{i+1}b_{i+1,s}b_{i+2,j-1}+w_jb_{j+1,s},\ i+2\le j\le s\ . $$ The polynomial $\Delta_L(t)$ splits into the product of cyclotomic polynomials $\Phi_d(t)\in{\mathbb Z}[t]$ that are distinct for distinct $d\ge1$, are irreducible in ${\mathbb{Q}}[t]$ and are such that $\Phi_d(t)^2$ does not divide $t^d-1$. For a rational function $f(t)\in{\mathbb{Q}}(t)$ that is a product of powers of pairwise distinct cyclotomic polynomials, set ${\operatorname{MCTI}}(f)=d$ and ${\operatorname{MCTE}}(f)=k$, where $d$ is maximal with the property that $\Phi_d(t)$ enters the aforementioned expression for $f$, and $k$ is the exponent of $\Phi_d(t)$ in $f(t)$. 
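The extraction of the pairs $(d,k)$ from such a product of cyclotomic polynomials, together with the repeated division by powers of $t^d-1$, can be sketched in code. The following illustration (our addition, assuming Python with sympy; the input rational function is a toy product of cyclotomic polynomials, not an actual $\Delta_L$) computes the exponent of each $\Phi_d$ and then greedily strips the factor $(t^{d}-1)^{{\varepsilon}}$ for the largest remaining $d$:

```python
import sympy as sp
from sympy import divisors

t = sp.symbols('t')

def cyclotomic_exponents(f, dmax):
    """Exponents {d: k} of Phi_d in f, for f a product/quotient of
    cyclotomic polynomials with d <= dmax."""
    num, den = sp.fraction(sp.together(f))
    expo = {}
    for part, sign in ((num, 1), (den, -1)):
        for fac, mult in sp.factor_list(part)[1]:
            for d in range(1, dmax + 1):
                if sp.expand(fac - sp.cyclotomic_poly(d, t)) == 0:
                    expo[d] = expo.get(d, 0) + sign * mult
                    break
    return expo

def mct_sequence(expo):
    """Greedy sequence of pairs (d, eps): repeatedly take the largest d
    with a nonzero exponent and divide by (t^d - 1)^eps, i.e. subtract
    eps from the exponent of Phi_e for every divisor e of d."""
    expo, seq = dict(expo), []
    while any(v != 0 for v in expo.values()):
        d = max(e for e, v in expo.items() if v != 0)
        k = expo[d]
        seq.append((d, k))
        for e in divisors(d):
            expo[e] = expo.get(e, 0) - k
    return seq

# toy example: f = (t^6 - 1) * (t^4 - 1)^2 / (t^2 - 1)^3
f = (t**6 - 1) * (t**4 - 1)**2 / (t**2 - 1)**3
seq = mct_sequence(cyclotomic_exponents(f, 12))
print(seq)
```

The recovered sequence $[(6,1),(4,2),(2,-3)]$ reproduces exactly the exponents with which the factors $t^6-1$, $t^4-1$, $t^2-1$ enter the toy input.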
Now, we construct a sequence of functions and integers as follows: Set $f_1(t)=\Delta_L(t)$, and for any $k\ge1$, inductively define \begin{equation}d_k={\operatorname{MCTI}}(f_k),\quad {\varepsilon}_k={\operatorname{MCTE}}(f_k),\quad f_{k+1}(t)= f_k(t)(t^{d_k}-1)^{-{\varepsilon}_k}\ ,\label{e1703}\end{equation} ending with $f_{k+1}=1$. We can suppose that $(C,z)$ is not a node, since the node is easily recognized by the condition $\deg\Delta_L(t)=\mu(C,z)=1$, and hence either $s>1$, or $m_1>1$. It follows from relations (\ref{e1203}) that \begin{itemize}\item if $i\ge1$, then \begin{equation}n<w_1b_{2,s}\quad \text{and}\quad w_jb_{j+1,s}<w_jb_{j,s}<w_{j+1}b_{j+2,s}\ \text{for all}\ 1\le j\le i,\label{e1303}\end{equation} \item if $i+1<s$, then \begin{equation}2w_{i+1}b_{i+1,s}<e_{i+2}\quad\text{and}\quad \begin{cases}e_j<n_je_j,\ &\text{for all}\ i+2\le j\le s,\\ n_je_j<e_{j+1}\ &\text{for all}\ i+2\le j<s,\end{cases}\label{e1403}\end{equation} \item $w_{i+1}b_{i+2,s}\begin{cases}=w_{i+1}b_{i+1,s},\quad&\text{if}\ n_{i+1}=1,\\ <w_{i+1}b_{i+1,s},\quad&\text{if}\ n_{i+1}>1.\end{cases}$ \end{itemize} These inequalities yield that the sequence (\ref{e1703}) is finite. Denote by $r$ the number of triples $(f_k,d_k,{\varepsilon}_k)$ in this sequence. Observe that, in the beginning, it has an even or odd number $l$ of even values of ${\varepsilon}_k$ according as $n_{i+1}=1$ or $n_{i+1}>1$. It follows that $s=[(r-1)/2]$ and $i+1=s-[l/2]$. Moreover, the sequence $d_k$, $k\ge1$, in (\ref{e1703}) provides values for all the exponents of $t$ in the formula (\ref{e1803}). Considering this as a system of equations for $m_j,n_j$, $j=1,...,s$, we can easily solve it and hence restore the topological type of the considered singularity $(C,z)$. \hfill$\blacksquare$\bigskip \smallskip {\bf(3)} To complete the recovery of the topological type of the given singularity $(C,z)$, we have to find pairwise intersection multiplicities of the branches of $(C,z)$. 
By \cite[Lemma 2.2]{BK}, the intersection number of two non-closed branches of the divide equals the intersection multiplicity of the corresponding real branches of $(C,z)$. Similarly, the intersection number of a non-closed and a closed branch of the divide equals twice the intersection multiplicity of the corresponding real branch of $(C,z)$ with each of the two complex conjugate branches of $(C,z)$ corresponding to the closed branch of the divide. Finally, consider the intersection of two closed branches of the divide and suppose without loss of generality that these are the only branches of the divide. From Lemma \ref{l1203} we know the topological type and the intersection multiplicity of the complex conjugate branches of $(C,z)$ associated with each of the branches of the divide. We claim that this information together with the intersection number of the branches of the divide determines the pairwise intersection multiplicities of all four branches of $(C,z)$. Indeed, this can easily be proved by induction on the number of real infinitely near points in the resolution tree of $(C,z)$. \smallskip {\small \section*{Acknowledgements} The authors were supported by the grant no. 1174-197.6/2011 from the German-Israeli Foundations, by the grant no. 176/15 from the Israeli Science Foundation, and by a grant from the Hermann-Minkowski-Minerva Center for Geometry at Tel Aviv University. The authors are indebted to Sergey Fomin for prompting this research and for many stimulating discussions and important suggestions. Special thanks are due to Ilya Tyomkin, who pointed out a gap in the first version of the paper. We are also very thankful to the anonymous referee for valuable remarks and corrections.}
https://arxiv.org/abs/1807.09861
On distinct finite covers of 3-manifolds
Every closed orientable surface S has the following property: any two connected covers of S of the same degree are homeomorphic (as spaces). In this paper, we give a complete classification of compact 3-manifolds with empty or toroidal boundary which have the above property. We also discuss related group-theoretic questions.
\subsection{Surfaces} As a warm-up we state the following fairly elementary lemma, which doubles as a great exam problem in a first course on topology. \begin{lemma}\label{lem:exceptional-surfaces} The only exceptional compact 2-dimensional manifolds are the disk $D^2$, the annulus, the M\"obius band, the real projective plane $\RR \textup{P}^2$, and all closed orientable surfaces. \end{lemma} \begin{proof} It is clear that the surfaces listed are exceptional. (For closed orientable surfaces this is an immediate consequence of the classification of closed orientable surfaces in terms of their Euler characteristic and the multiplicativity of Euler characteristic under finite covers.) Next assume that $M$ is an orientable surface with at least one boundary component and that $M$ is neither a disk nor an annulus. After possibly going to a finite cover we can assume that $M$ has $k\geq 3$ boundary components. By giving the boundary components the orientation coming from~$M$, the boundary of $M$ induces a summand of $H_1(M)$ that is naturally isomorphic to \[\left(\bigoplus_{i=1}^k \ZZ\; a_i \right) \bigg/ \ZZ\; (a_1+\cdots +a_k).\] Choose an epimorphism $\varphi\colon \pi_1(M)\to \mathbb{Z}/k$ such that $\varphi(a_1) = 1$, $\varphi(a_2)=-1$, and $\varphi(a_i)=0$ for $i\neq 1,2$. On the other hand, we can also find an epimorphism $\psi \colon \pi_1(M)\to \mathbb{Z}/k$ such that $\psi(a_i)\ne 0$ for all $i$. But then the covers corresponding to $\ker(\varphi)$ and $\ker(\psi)$ have different numbers of boundary components, so they are not homeomorphic. Now suppose that $M$ is a non-orientable surface that is not homeomorphic to either $\RR\textup{P}^2$ or the M\"obius band. There exists precisely one $2$-fold cover of $M$ that is orientable. On the other hand, we have $H^1(M;\mathbb{Z}/2)\cong (\mathbb{Z}/2)^k$ for some $k\geq 2$. Since $k\geq 2$, there exists at least one other $2$-fold cover. This shows that $M$ is not exceptional.
\end{proof} \subsection{Fuchsian groups} If $\Gamma$ is a finitely generated Fuchsian group of finite covolume, then the quotient $O = \Gamma \backslash \HH^2$ is an orbifold of finite type, that is, a surface of genus $g$ with $k$ punctures and with conical points of angles $2\pi/m_i$, $1 \le i \le s$, for some $g,s,k$. We will call the tuple $(g, k, m_1, \ldots, m_s)$ the {\em signature} of $\Gamma$. The orbifold Euler characteristic of $O$ is defined by \[ -\chi(O) := 2g - 2 + k + \sum_{i=1}^s \left( 1 - \frac 1 {m_i} \right). \] It is multiplicative in covers, that is, if $\Gamma'$ is a finite index subgroup of $\Gamma$ and $O' = \Gamma' \backslash \HH^2$ then $\chi(O') = [\Gamma : \Gamma'] \cdot \chi(O)$. \begin{proposition} \label{fuchsian_distinct} Let $\Gamma \subset \mathrm{PSL}_2(\mathbb R)$ be a Fuchsian group of finite covolume with signature $(g, k, m_1, \ldots, m_s)$. Let $d$ be the number of distinct divisors of the $m_i$ (that is, of positive integers dividing at least one $m_i$). Then \begin{equation} \label{bds_fuchsian} n^{d-1} \ll e_n'(\Gamma) \ll n^{d-1}. \end{equation} \end{proposition} In specific cases one can say more by studying the proof of the upper and lower bounds more carefully; for example $e_n(\PSL_2(\mathbb{Z})) = n^2/6 + O(n)$ as we stated in the introduction: to obtain this we note that $e_n(\PSL_2(\mathbb{Z}))$ is equal to the number of solutions to the equation \eqref{euler_char_OK} below by \cite[Theorem B]{MP2}. Computing the covolume of the lattice appearing in the proof of the upper bound below (an exercise in Euclidean geometry) we get the result. We could have used M\"uller and Schlage-Puchta's result to get a more precise asymptotic in more cases (though the computation of constants would require further analysis), but as it does not have the generality we require (in particular, it is not clear whether an analogous result holds for cocompact Fuchsian groups) we elected to give a more direct and elementary proof which works for all Fuchsian groups.
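For concreteness, the multiplicativity of the orbifold Euler characteristic can be checked numerically on a classical example (not part of the proof): the modular orbifold $\mathrm{PSL}_2(\ZZ)\backslash \HH^2$ has signature $(0,1;2,3)$, and the image of the principal congruence subgroup $\Gamma(2)$ in $\mathrm{PSL}_2(\ZZ)$ has index $6$, with quotient the thrice-punctured sphere, of signature $(0,3)$. A minimal sketch (the function name is ours):

```python
from fractions import Fraction

def neg_chi(g, k, cone_orders=()):
    """-chi(O) = 2g - 2 + k + sum(1 - 1/m_i) for an orbifold of
    genus g with k punctures and cone points of orders m_i."""
    return 2 * g - 2 + k + sum(1 - Fraction(1, m) for m in cone_orders)

# Modular orbifold PSL_2(Z)\H^2: signature (0, 1; 2, 3).
base = neg_chi(0, 1, (2, 3))

# Its degree-6 cover, the thrice-punctured sphere: signature (0, 3).
cover = neg_chi(0, 3)

assert base == Fraction(1, 6)
assert cover == 6 * base        # multiplicativity in covers
```

Exact rational arithmetic (via `Fraction`) is used so that the multiplicativity check is an identity rather than a floating-point comparison.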
\begin{proof} For the upper bound in \eqref{bds_fuchsian} we note that two Fuchsian groups with respective signatures $(g, k, m_1, \ldots, m_s)$ and $(h, l, n_1, \ldots, n_r)$ are isomorphic if and only if $(m_1, \ldots, m_s) = (n_1, \ldots, n_r)$ (up to reordering) and either $k = 0 = l$ and $g = h$ or $k, l > 0$ and $2g+k = 2h+l$. In addition, the condition that $k = 0$ is invariant under taking finite index subgroups. Thus bounding the number of possible tuples $(g+k, m_1, \ldots, m_s)$ for a degree $n$ cover of $O:=\Gamma \backslash \HH^2$ with signature $(g, k, m_1, \ldots, m_s)$ gives an upper bound for $e_n(\Gamma)$. Let $\Omega$ be the set of divisors of elements of the set $\{m_i\}$. Let $d$ denote the cardinality $d := |\Omega|$. If $\Gamma'$ is a finite index subgroup of $\Gamma$ with signature $(g', k', m_1', \ldots, m_t')$ we further set $r := 2g' - 2 + k'$, and \[ k_m = |\{ 1\le i \le t : m_i' = m\}| \] for $m \in \Omega$, so that (as each $m_i'$ must divide some $m_i$) we have $\sum_{i=1}^t \left( 1 - \frac 1 {m_i'} \right) = \sum_{m\in\Omega,\, m\not=1} \left( 1 - \frac 1 {m} \right)k_m$. We see by multiplicativity of Euler characteristic that, letting $n = [\Gamma : \Gamma']$ be the index, we have: \begin{equation} \label{euler_char_OK} r + \sum_{\substack{m \in \Omega \\ m \not= 1}} \left(1 - \frac 1 m \right)k_m = n\chi(O), \end{equation} and hence the number of non-isomorphic subgroups of $\Gamma$ is at most the number of points $(r/n, k_m/n : m\in \Omega, m\not=1)$ of the lattice $\frac 1 n \mathbb{Z}^d$ which belong to the $(d-1)$-simplex given by the intersection of the hyperplane $x_0 + \sum_{m \in \Omega,\, m\not=1} (1 - 1/m)x_m = \chi(O)$ with the positive quadrant.
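To make the lattice-point count concrete, here is a rough numerical sketch for the modular-group signature $(0,1;2,3)$, where $\Omega=\{1,2,3\}$ and $d=3$. Clearing denominators in \eqref{euler_char_OK} gives $6r+3k_2+4k_3=n$; the lower bound $r\ge -1$ and the exact bookkeeping of which solutions are realised are simplifying assumptions of ours (the precise count is handled by \cite[Theorem B]{MP2}) — the point here is only the growth of order $n^{d-1}=n^2$. The function name is ours:

```python
def count_solutions(n, r_min=-1):
    """Count integer solutions (r, k2, k3) of 6r + 3*k2 + 4*k3 = n
    with r >= r_min and k2, k3 >= 0 -- the cleared-denominator form
    of the Euler-characteristic equation for signature (0, 1; 2, 3)."""
    total = 0
    for k3 in range((n + 6) // 4 + 1):
        for k2 in range((n + 6) // 3 + 1):
            rem = n - 4 * k3 - 3 * k2      # this must equal 6r
            if rem % 6 == 0 and rem >= 6 * r_min:
                total += 1
    return total

# Quadratic growth, matching the exponent d - 1 = 2 of the proposition:
ratio = count_solutions(1200) / count_solutions(600)
assert 3.5 < ratio < 4.5
```

Doubling $n$ roughly quadruples the number of solutions, as expected for lattice points in a dilated $2$-simplex.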
Now if $P$ is a polytope in an affine space $E$ and $L$ a lattice in $E$ we have $|L \cap nP| \ll n^{\dim E}$, and the upper bound follows\footnote{More precisely $|L \cap nP| \sim Cn^{\dim P}$ where $C$ is the quotient of the volume of $P$ by the covolume of $L$, so computing these volumes gives an explicit asymptotic upper bound. }. \medskip To prove the lower bound in \eqref{bds_fuchsian} we construct explicit subgroups which contain distinct numbers of conjugacy classes of elements of order $m_i$. To do this we use the following presentation for the Fuchsian group $\Gamma$ of signature $(g, k, m_1, \ldots, m_s)$: \begin{equation} \label{pres_fuchsian} \Gamma = \left\langle x_1, \ldots, x_s, p_1, \ldots, p_k, a_1, b_1, \ldots, a_g, b_g \,\middle|\, x_i^{m_i},\, \prod_i x_i \prod_j p_j \prod_l [a_l, b_l] \right\rangle. \end{equation} Our first step is to reformulate our problem in terms of morphisms $\rho \colon \Gamma \to \sym_n$, where $\sym_n$ denotes the symmetric group on $n$ letters. This will use the well-known correspondence between subgroups of index $n$ and transitive permutation representations $\rho \colon \Gamma \to \sym_n$ (see for instance \cite{Lubotzky_Segal}). We start with the following lemma. Given a group $G$, $Z(x)$ will denote the centraliser of $x \in G$ and $x^G$ will denote the conjugacy class of $x$ in $G$. \begin{lemma} \label{fixed_points_sing} Let $G$ be a group and $\rho \colon G \to \sym_n$ a homomorphism with transitive image; let \[ H =\mathrm{Stab}_\rho \{1\} < G \] be the stabiliser of $1$ in $G$ via $\rho$. Then the number of conjugacy classes in $H$ containing an element of $x^G$ is equal to the cardinality of the quotient set \[ \rho(Z(x)) \big\backslash \{\, a\in \{1,\ldots,n\} : \rho(x)\cdot a = a \,\}. \] \end{lemma} \begin{proof} Write $Z=Z(x)$. The orbits of the right-conjugation action of $H$ on $x^G$ are in bijection with $Z \backslash G / H$.
The orbit map $G/H \to \{1, \ldots, n\}$ is bijective (using transitivity) and $Z$-equivariant so $Z \backslash G / H \leftrightarrow \rho(Z) \backslash \{1, \ldots, n\}$. Finally, the last map (bijectively) sends conjugacy classes of conjugates of $x$ lying in $H$ to $\rho(Z)$-orbits of fixed points of $\rho(x)$. \end{proof} We will distinguish between non-isomorphic finite index subgroups of the same index by comparing the numbers of conjugacy classes of maximal finite cyclic subgroups (on the orbifold side this corresponds to the cone points together with their orders). We will first deal with the cases where the algebraic structure of $\Gamma$ is simpler, that is, when we are in one of the following cases: $k \ge 2$ ($O$ has at least 2 cusps), $k \ge 1, g \ge 1$ (at least 1 cusp and genus at least 1) or $g \ge 2$. We will deal with the remaining cases with a bootstrap argument. Let $m \in \Omega$, $m \not= 1$. Let $\rho$ be a morphism $\Gamma \to \sym_n$ and $H = \mathrm{Stab}_\rho(1)$ as in the statement of the lemma above. Then if $x \in H$ is a primitive element\footnote{Recall that a group element is primitive if it generates the maximal cyclic subgroups it lies in.} of order $m$, we have that $m$ is a divisor of some $m_i$ and $x=gx_i^{pm_i/m}g^{-1} \in H$ for some $p \in \ZZ$ coprime to $m$, but $gx_i^{m_i/q}g^{-1} \not\in H$ for any $m\mid q \mid m_i$, $m < q$. If we fix such an $i$, the corresponding conjugacy classes of primitive order $m$ elements correspond, via the lemma, to $\rho(Z(x_i))$-orbits on the set of $(m_i/m)$-cycles of $\rho(x_i)$ times the set of invertible elements of $\ZZ/m\ZZ$. When we consider only the maximal cyclic subgroups they generate we kill the $(\ZZ/m\ZZ)^\times$-factor; in conclusion, the number of conjugacy classes of maximal cyclic subgroups of $H$ of order $m$ is equal to the number of $\rho(Z(x_i))$-orbits on the set of $(m_i/m)$-cycles of $\rho(x_i)$.
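As a sanity check, Lemma \ref{fixed_points_sing} can be verified by brute force in a small symmetric group. The sketch below is our own code ($0$-indexed), with $G=\sym_4$, $\rho$ the identity inclusion, and $H$ the stabiliser of the point $0$; it compares both sides of the lemma:

```python
from itertools import permutations

def compose(p, q):              # (p o q)(i) = p[q[i]]
    return tuple(p[j] for j in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def conj(g, x):                 # g x g^{-1}
    return compose(compose(g, x), inverse(g))

def lemma_check(x, n=4):
    G = list(permutations(range(n)))
    H = {g for g in G if g[0] == 0}            # stabiliser of 0
    xG = {conj(g, x) for g in G}               # conjugacy class of x in G
    # LHS: H-conjugacy classes among the elements of x^G lying in H.
    lhs = len({frozenset(conj(h, y) for h in H) for y in xG if y in H})
    # RHS: orbits of Z(x) on the fixed points of x.
    Z = [g for g in G if compose(g, x) == compose(x, g)]
    rhs = len({frozenset(z[i] for z in Z) for i in range(n) if x[i] == i})
    return lhs == rhs

# transposition (0 1), 3-cycle (0 1 2), double transposition (0 1)(2 3):
assert all(lemma_check(x) for x in [(1, 0, 2, 3), (1, 2, 0, 3), (1, 0, 3, 2)])
```

For the double transposition both counts are $0$ (no fixed points, no conjugates in $H$), which is the degenerate case of the bijection.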
Now in a Fuchsian group we have $Z(x) = \langle x \rangle$ for any nontrivial element. In particular, $\rho(Z(x))$ acts trivially on the cycles of $\rho(x)$. So the number $N$ of conjugacy classes of maximal cyclic subgroups of order $m$ in $H$ is given by the following formula: \begin{equation} \label{formula_nb_cc} N = \sum_{i:\; m|m_i} \#\{ (m_i/m) \text{-cycles in the image of } \rho(x_i) \}. \end{equation} Let $\Omega_i$ be the set of divisors of $m_i$, and choose subsets $1 \in \Omega_i' \subset \Omega_i$ so that $\Omega \setminus \{1\}$ is the disjoint union of all $\Omega_i' \setminus \{1\}$; in particular we have \begin{equation} \label{cardinalities} d - 1 = \sum_{i=1}^s \left(|\Omega_i'| - 1\right). \end{equation} Let $n \ge 2$. Consider collections $(\pi_1, \ldots, \pi_s)$, where for all $i$, $\pi_i$ is a partition of $n$ whose parts lie in $\{m_i/m : m \in \Omega_i'\}$. We claim (and will prove in the next paragraph) that for a positive proportion of such partitions it is possible to construct a representation of $\Gamma$ in the symmetric group $\sym_n$ such that the cycle decomposition of $\rho(x_i)$ corresponds to $\pi_i$. Moreover, for different partition-tuples the resulting subgroups are not isomorphic to each other: if $m \in \Omega, m\not= 1$ then maximal cyclic subgroups of order $m$ can only come from lifts of the unique $x_i$ such that $m \in \Omega_i'$. It follows from \eqref{formula_nb_cc} that the number of such subgroups is equal to the number of parts of $\pi_i$ equal to $m_i/m$, and these numbers determine the $\pi_i$. In this paragraph we prove the claim (under the hypotheses on $g, k$ stated above). Assume first that $k\ge 2$ or $k \ge 1$ and $g \ge 1$. Then we see from \eqref{pres_fuchsian} that $\Gamma$ decomposes as the free product \[ \Gamma = \langle x_1 \rangle \ast \cdots \ast \langle x_s \rangle \ast F \] where $F$ is a free group.
Then any collection is realisable, as we can pick any image we want for each $x_i$ and then complete it by a transitive representation of $F$. In the remaining case where $k = 0$ and $g \ge 2$ we can do the same thing, choosing $\langle \rho(a_1), \rho(b_1) \rangle$ to have transitive image and so that the product $\sigma = \prod_i \rho(x_i) [\rho(a_1), \rho(b_1)]$ is an even permutation, and finally $\rho(a_2), \rho(b_2)$ so that the relation in \eqref{pres_fuchsian} holds (this is possible because every even permutation can be written as a commutator). For a finite subset $A \subset \N$ denote by $p_A(n)$ the number of partitions of $n$ into elements of $A$. The representations constructed above give rise to $\prod p_{\Omega_i'}(n)$ isomorphism classes of index $n$ subgroups of $\Gamma$. Thus we have proven that for any $n$ we have: \[ e_n(\Gamma) \ge \prod_{i=1}^s p_{\Omega_i'}(n). \] It is an easily observed fact that $p_A(n) \gg n^{|A|-1}$ (see also \cite[pp. 458--464]{Nathanson} for a more precise asymptotic), so that we get from the previous inequality that \[ e_n(\Gamma) \gg n^{\sum_{i=1}^s (|\Omega_i'| - 1)} \] and using \eqref{cardinalities} we can conclude that $e_n'(\Gamma) \gg n^{d-1}$. \medskip To finish the proof in the remaining cases (that is, those that do not satisfy our assumptions on $k$ and $g$), we pass to a finite cover $O'$ which satisfies the hypotheses we used above and contains torsion elements of (at least) the same orders as $O$ (see next lemma). Let $f$ be the degree of this cover; then we get from what we previously proved that $e_{fn}(\Gamma) \gg n^{d-1}$ for all $n$, which immediately implies that $e_n'(\Gamma) \gg n^{d-1}$. \end{proof} The following fact that we used at the end of the proof might be standard (and using results of Liebeck--Shalev our proof is almost immediate) but we did not find a reference for it. \begin{lemma} Let $\Gamma$ be a Fuchsian group.
Then it has a finite index subgroup $H$ whose signature $(g, k, \ldots)$ satisfies either $k \ge 2$ or $g \ge 2$ and such that for any $m$ if $\Gamma$ contains an element of order $m$ the same goes for $H$. \end{lemma} \begin{proof} Recall that the group $\Gamma$ admits a presentation of the form \eqref{pres_fuchsian}. In \cite{Liebeck_Shalev} Liebeck and Shalev prove the following: let $C_i$ be the conjugacy class of an element of $\sym_n$ which has only $m_i$-cycles and $f_i$ fixed points, with $f_i$ bounded independently of $n$. Let $\mathbf C = (C_1, \ldots, C_s)$ and $\Hom_{\mathbf C}(\Gamma, \sym_n)$ the set of homomorphisms $\rho : \Gamma \to \sym_n$ such that $\rho(x_i) \in C_i$ (recall from \eqref{pres_fuchsian} that $x_i$ is an element of order $m_i$ in $\Gamma$). Then \[ \left| \Hom_{\mathbf C}(\Gamma, \sym_n) \right| = (n!)^{1-\chi(\Gamma)} \cdot n^{O(1)} \] (loc. cit., Theorem 3.5) and an element in $\Hom_{\mathbf C}(\Gamma, \sym_n)$ is transitive with probability tending to 1 as $n \to +\infty$ (loc. cit., Theorem 4.4). We choose all $f_i$ such that $1 \le f_i \le m_i$ (there is a unique possible choice for each $n$ and $i$). Let $\rho \in \Hom_{\mathbf C}(\Gamma, \sym_n)$ be transitive (such $\rho$ exists by the results presented above) and let $H$ be the subgroup $\mathrm{Stab}_\rho\{1\}$ and $O'$ the corresponding orbifold cover of $O$, with genus $g'$ and $k'$ cusps. Then we have that \[ 2g'+ k' = n \chi(O) - \sum_{i=1}^s \left( 1 - \frac 1{m_i} \right) f_i + 2 = n \chi(O) - O(1) \] and the right-hand side goes to infinity, so taking $n$ large enough we can ensure $2g' + k' \ge 4$, which implies either $k' \ge 2$ or $g' \ge 2$. Moreover, since $f_i \ge 1$ it follows from Lemma \ref{fixed_points_sing} that the group $H$ contains an element of order $m_i$ for each $i$. So this $H$ satisfies the conclusion of the lemma.
\end{proof} \section{Introduction} Given a finitely generated group $G$ and $n\in \N$ we denote by $s_n(G)$ the number of subgroups of index $n$ and we denote by $e_n(G)$ the number of \emph{isomorphism types} of subgroups of index $n$. The study of $s_n(G)$ has attracted a lot of interest over the years. For example, the well-known asymptotic for the number of index \( n \) subgroups in the free group $F_2$ of rank 2 is \[s_n(F_2) \sim n \cdot (n!) \quad \text{ as }\quad n\to\infty, \] (see for instance \cite{Dixon} and \cite[Chapter 2]{Lubotzky_Segal}). Much more recently, similar asymptotics were determined for surface groups by M\"uller and Schlage-Puchta \cite{MP1}, Fuchsian groups by Liebeck and Shalev \cite{Liebeck_Shalev} and non-uniform lattices in higher rank simple Lie groups by Lubotzky and Nikolov \cite{Lubotzky_Nikolov}. Finer questions ask for the number of subgroups of a given type. For instance, in \cite{MP2}, M\"uller and Schlage-Puchta determine the statistics of given isomorphism types in free products and in \cite{Lubotzky,Goldfeld_Lubotzky_Pyber} Lubotzky and Goldfeld--Lubotzky--Pyber count the number of congruence subgroups in arithmetic groups. For an introduction to the subject of subgroup growth, we refer to the monograph by Lubotzky and Segal \cite{Lubotzky_Segal}. In this paper we are interested in the growth of the sequence $\{e_n(G)\}_{n\in \N}$. It is apparent that its behavior can be strikingly different from $\{s_n(G)\}_{n\in \N}$. For example, we have just seen above that the sequence $\{s_n(F_2)\}_{n\in \N}$ grows super-exponentially whereas it is clear that the sequence $\{e_n(F_2)\}_{n\in \N}$ is constantly equal to one. Similarly it is evident that for any surface group~$G$ the sequence $\{e_n(G)\}_{n\in \N}$ is constantly equal to one. These and free abelian groups are the only examples we know of (infinite, residually finite) groups with $e_n \equiv 1$.
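The super-exponential growth of $s_n(F_2)$ is easy to observe numerically via M.~Hall's classical recursion $s_n(F_r) = n(n!)^{r-1} - \sum_{k=1}^{n-1}\bigl((n-k)!\bigr)^{r-1} s_k(F_r)$ (a standard formula, quoted here only for illustration):

```python
from math import factorial

def hall_counts(r, N):
    """s_n(F_r) for n = 1..N via M. Hall's recursion:
    s_n = n*(n!)^(r-1) - sum_{k=1}^{n-1} ((n-k)!)^(r-1) * s_k."""
    s = []
    for n in range(1, N + 1):
        val = n * factorial(n) ** (r - 1)
        val -= sum(factorial(n - k) ** (r - 1) * s[k - 1] for k in range(1, n))
        s.append(val)
    return s

counts = hall_counts(2, 10)
assert counts[:6] == [1, 3, 13, 71, 461, 3447]
# s_n(F_2) / (n * n!) -> 1, the asymptotic stated above:
assert 0.8 < counts[9] / (10 * factorial(10)) < 1
```

Already at $n=10$ the ratio $s_n(F_2)/(n\cdot n!)$ is close to $1$, in line with the displayed asymptotic.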
To give an example where $e_n$ exhibits nontrivial growth let us describe its behaviour for Fuchsian groups (namely, discrete subgroups of $\PSL_2(\RR)$) containing torsion elements: we observe in Proposition \ref{fuchsian_distinct} below that for a Fuchsian group $G \subset \mathrm{PSL}_2(\mathbb R)$ of finite covolume, $e_n(G)$ grows polynomially. For example, we obtain (using M\"uller and Schlage--Puchta's asymptotic mentioned above) that \[ e_n(\mathrm{PSL}_2(\mathbb Z)) \sim \frac {n^2} 6 \quad \text{ as }\quad n\to\infty. \] We note that this asymptotic contains geometric information about the orbifold $\mathrm{PSL}_2(\ZZ) \backslash \mathbb H^2$: in general, the exponent ($2$ in this case) equals the number of nontrivial divisors of the orders of its cone points. Moreover, in the case of $\mathrm{PSL}_2(\mathbb Z)$, the leading coefficient ($1/6$) appears as the inverse of the product of these divisors. To give our topic a more topological flavour, given a compact, connected manifold $M$ let us denote by $e_n(M)$ the number of homeomorphism types of connected degree $n$ covering spaces of $M$. All the examples above are essentially 1- or 2-dimensional. For a topologist, after the case of surfaces has been settled it is natural to next consider the case of 3-manifolds. A more pragmatic reason for this is that if $M$ is a closed, orientable, aspherical $3$-manifold, then we have $e_n(M)=e_n(\pi_1(M))$. This follows from the geometrization theorem, the rigidity theorem~\cite{Mos}, and work of Waldhausen~\cite{Waldhausen} and Scott~\cite{Sco2} (see e.g.~\cite[Theorem~2.1.2]{AFW}). Thus in this case the group-theoretical and topological problems are equivalent. Another reason is that the current understanding of 3--manifold groups (as opposed to higher-dimensional aspherical manifolds) is good enough to let us actually prove some nontrivial results.
For technical reasons it is often more convenient to study the following closely related sequence \[ e_n'(M)\,:=\, \sup \{ e_k(M)\,|\, k\leq n\}. \] In Section~\ref{quant} we point out that it follows from \cite{Ago}, \cite{Cooper_Long_Reid_VH}, \cite{BGLM} and \cite[Section 5.2]{BGLS} that $e_n'(M)$ grows as a power of a factorial for hyperbolic 3-manifolds. On the other hand it is clear that $e_n'(M)$ is a constant sequence for some choices of $M$, e.g.\ for the 3-torus and for 3-manifolds with cyclic fundamental group. As hyperbolic manifolds are in many ways generic among 3--manifolds this leads us to the following definition. \begin{definition} A manifold $M$ is called \emph{exceptional} if $e_n'(M)=1$ for all $n\in \N$. In other words, $M$ is exceptional if, for all $n\in \N$, any two connected degree $n$ covers of $M$ are homeomorphic. \end{definition} Note that there is an analogous notion of exceptional groups, that is, a group is said to be \emph{exceptional} if, for every $n$, any two index $n$ subgroups are isomorphic. As a warm-up, in Lemma~\ref{lem:exceptional-surfaces} we determine which compact 2-dimensional manifolds are exceptional (this makes for a surprisingly amusing exercise). In our main theorem we determine all exceptional compact $3$-manifolds with empty or toroidal boundary. \begin{maintheorem}\emph{ Let $M$ be a compact $3$-manifold with empty or toroidal boundary. Then $M$ is exceptional if and only if it is homeomorphic to one of the following manifolds: \begin{enumerate}[font=\normalfont] \item $k \cdot S^1 \times S^2$ for $k\geq1$, \item $S^1\widetilde{\times} S^2$, \item $S^1\times D^2$, \item $T^2\times I$, \item $T^3$, \item all spherical manifolds except those with fundamental group $P_{48}\times \ZZ/p$ with $\gcd(p,3)=1$ and $p$ odd, or $Q_{8n}\times \ZZ/q$ with $\gcd(q,n)=1$, $q$ odd, and $n\geq 2$.
\label{mainthm_spherical} \end{enumerate}} \end{maintheorem} In the theorem above and henceforth, $S^n$ denotes the $n$-sphere, $D^n$ denotes the $n$-dimensional disk, $T^n$ denotes the $n$-torus, $I$ denotes the unit interval $[0,1]$, and $k\cdot M$ denotes the $k$-fold connected sum of the manifold $M$. The groups mentioned in item (\ref{mainthm_spherical}) are defined in Section \ref{sec_spherical}. Some of the techniques we use in the proof of the main theorem apply in a larger setting. For instance, as noted above, it follows from work of Agol et al.\ that hyperbolic 3-manifolds are not exceptional. We also give a different proof of this fact that generalizes to lattices in most semisimple Lie groups as follows. (Recall that two Lie groups are called \emph{locally isomorphic} if they have isomorphic Lie algebras.) \begin{prprep}{\ref{lattices}} Let $\Gamma$ be an irreducible lattice in a semisimple linear Lie group not locally isomorphic to $\mathrm{PGL}_2(\RR)$. Then $\Gamma$ is not exceptional. \end{prprep} Regarding $\mathrm{PGL}_2(\RR)$ we noted (see above and Proposition \ref{fuchsian_distinct}) that Fuchsian groups with torsion exhibit nontrivial growth for $e_n$ and in particular are not exceptional. The latter is also true of Euclidean 2-orbifolds, and to prove this together with the fact that $T^3$ is the only exceptional Euclidean $3$-manifold we use arguments which lead to the following generalization. \begin{prprep}{\ref{proposition_euccase}} Let \( E \) be a Euclidean space and \( \Gamma \) a lattice in \( \mathrm{Isom}(E) \). Then \( \Gamma \) is exceptional if and only if it is free abelian. \end{prprep} Moreover for hyperbolic $3$-manifolds we can produce \textit{regular} non-homeomorphic covering spaces of equal degree: \begin{prprep}{\ref{main_normal}} Let $\Gamma$ be the fundamental group of a complete hyperbolic $3$-manifold of finite volume.
Then there exist sequences $c_n, d_n \to +\infty$ such that for each $n$ we can find at least $c_n$ normal subgroups of index $d_n$ in $\Gamma$, which are pairwise non-isomorphic. \end{prprep} \subsection{Questions} There are many more questions about $e_n(M)$ that arise naturally. \subsubsection{Quantitative questions} The only question we deal with in general is whether or not $e'_n(M)=1$ for all $n\in \NN$. Once it is known that this is \emph{not} the case, it would of course be interesting to know how $e_n(M)$ and $e_n'(M)$ behave as functions of $n$. For any hyperbolic $3$-manifold $M$, as we already noted above, there exists a constant $\alpha>0$ so that \[e_{n}'(M) \geq (n!)^{\alpha},\] for all large enough $n\in \NN$. However, as yet, there is no control on $\alpha$ in terms of, for instance, the hyperbolic geometry of $M$. We note that an upper bound on the smallest index of a subgroup of $\pi_1(M)$ that surjects onto $F_2$ would give a lower bound on $\alpha$. On the other hand, our constructions of non-homeomorphic covers for Seifert fibered manifolds produce (in general) much sparser sequences of covers and we do not have an expectation for the answer to the quantitative question above. The following is a sample of concrete questions we would like to know the answers to. \begin{enumerate} \item Which growth types (polynomial, intermediate, exponential, factorial) appear for $e_n'(M)$? \item What are the 3-manifolds for which $e_n'(M)$ is polynomial, intermediate, exponential, factorial? \item Are there hyperbolic 3-manifolds for which $e_n(M)$ is not monotone even for large $n$? \item Given a hyperbolic 3-manifold do $e_n(M)$ and $e_n'(M)$ have the same growth type? \item If $M$ is a hyperbolic manifold, what information on the geometry of $M$ is encoded in the sequence $\{e_n(M)\}_{n\in\NN}$? \end{enumerate} We note that for general $3$-manifolds, many of the questions above are also still open for the sequence $\{s_n(\pi_1(M))\}_{n\in\NN}$.
For $2$-dimensional manifolds we know more. For instance, it is known that for a hyperbolic $2$-orbifold $M$, the sequence $\{s_n(\pi_1(M))\}_{n\in\NN}$ does encode some geometric information: the factorial growth rate of $s_n(\pi_1(M))$ is linear in the area of the orbifold \cite{MP1,Liebeck_Shalev}. On the other hand, for hyperbolic $3$-manifolds such a simple relation could never hold: there are hyperbolic $3$-manifolds $M$ of arbitrarily large volume but with $\mathrm{rank}(\pi_1(M))\leq 5$. The latter implies that the factorial growth rate of the sequence $\{s_n(\pi_1(M))\}_{n\in\NN}$ cannot be more than $4$. \subsubsection{Stronger versions of non-exceptionality} In most cases where we establish that a group $G$ is not exceptional, we do so by providing, for infinitely many distinct values of $d$, pairs $(G_1, G_2)$ of non-isomorphic subgroups of $G$ with index $d$ in $G$. We are not aware of an example of an infinite residually finite group which is not exceptional, but for which this stronger property fails. \subsubsection{Non-exceptionality for other classes of groups} One may inquire about the exceptionality, or lack thereof, of other interesting classes of groups. Here are some examples: \begin{enumerate} \item The only exceptional right-angled Artin groups are the free groups and the free abelian groups\footnote{This can be proved using the fact that all other RAAGs surject onto $\ZZ^2\ast \ZZ$.}. What about more general Artin groups? \item Which Coxeter groups are exceptional? It seems reasonable to expect that only a few finite ones are. \item Are non-abelian polycyclic groups always non-exceptional? \item Are there other examples of non-elementary word-hyperbolic groups, besides surface groups and free groups, that are exceptional? \end{enumerate} Note that all the groups in the first three items are linear and finitely generated and hence, by Malcev's theorem \cite{Malcev}, residually finite.
As such, they at least have infinitely many different finite index subgroups (given that they are infinite themselves). \subsection{Structure of the proof} Our proof consists of a case-by-case analysis. That is, we use the prime decomposition theorem and Perelman's work to divide up $3$-manifolds into various geometric classes that we then address separately. For most of the paper, we consider prime, orientable $3$-manifolds. Other than certain simple cases (Proposition~\ref{prop:specialones}), we show that prime, compact, orientable $3$-manifolds with non-empty toroidal boundary are not exceptional in Proposition~\ref{prp_JSJ}. Among the closed, prime, orientable $3$-manifolds, we see that $S^1\times S^2$ is clearly exceptional (Proposition~\ref{prop:specialones}), so it only remains to consider closed, irreducible, orientable $3$-manifolds. Those with a trivial JSJ decomposition are divided into the following classes: \begin{itemize} \item hyperbolic $3$-manifolds (Proposition \ref{proposition_hypcase}), \item Euclidean $3$-manifolds (Proposition \ref{prop:euclidean}), \item spherical $3$-manifolds (Proposition \ref{prp_spherical}), \item and the remaining Seifert fibered $3$-manifolds (Proposition \ref{prp_seif}). \end{itemize} \begin{figure}[h] \input{diagram} \caption{Outline of the proof} \label{fig_diagram} \end{figure} Then we treat closed, irreducible, orientable $3$-manifolds with a non-trivial JSJ decomposition, where we need two separate arguments: one for Sol manifolds (Proposition \ref{prp_sol}) and one for all others (Proposition \ref{prp_JSJ}). The classification of prime non-orientable exceptional $3$-manifolds and non-prime exceptional $3$-manifolds is an almost direct consequence of our work with prime and orientable $3$-manifolds and is given in Section~\ref{sec:nonprime}. The diagram in Figure \ref{fig_diagram} shows the structure of the proof. \subsection{Conventions} All manifolds are assumed to be compact and connected.
We will call a compact manifold with non-empty toroidal boundary \emph{hyperbolic} if its interior admits a complete hyperbolic metric of finite volume. Usually we do not distinguish between a manifold and its homeomorphism type. \subsection{Acknowledgements} SF is grateful for the support provided by the SFB 1085 ``higher invariants'' funded by the DFG. JR was supported by ANR grant ANR-16-CE40-0022-01-AGIRA. Most of the work on the paper was done while various subsets of the authors met at ICMAT Madrid, the Max Planck Institute for Mathematics in Bonn, the University of Bonn, and the University of Regensburg. We are very grateful for the hospitality of these institutions. Lastly, we thank the anonymous referee for the careful reading of the paper and for the useful comments and suggestions. \section{Surfaces and two-dimensional orbifolds} \input{2dim} \section{Preliminaries} Let us start our discussion of $3$-manifolds with some preliminary observations. Recall that a group $G$ is called \emph{residually finite} if \[\bigcap_{\substack{H \triangleleft G \\ [G:H] < \infty}} H = \{e\},\] where $e\in G$ denotes the unit element. It follows from work of Hempel~\cite{Hem}, together with the proof of the geometrization theorem, that the fundamental group of a $3$-manifold has this property: \begin{theorem}\label{theorem_resfin} \cite{Hem} Let $M$ be a compact $3$-manifold. Then $\pi_1(M)$ is residually finite. \end{theorem} \noindent Next, we make some elementary observations about certain simple $3$-manifolds. \begin{proposition}\label{prop:specialones} The $3$-manifolds $S^3$, $T^3$, $T^2\times I$, $S^1\times D^2$, $S^1\times S^2$, and $S^1\widetilde{\times} S^2$ are exceptional. The twisted $I$-bundle over the Klein bottle is not exceptional. \end{proposition} \begin{proof} It is an elementary exercise to verify that the manifolds mentioned in the first sentence are exceptional. For example, note that any manifold with cyclic fundamental group is exceptional.
Finally note that the twisted $I$-bundle over the Klein bottle has two $2$-fold covers, one of which is again homeomorphic to the twisted $I$-bundle over the Klein bottle and the other is homeomorphic to $T^2\times I$. Thus it is not exceptional. \end{proof} We conclude the section with the following elementary observation, which uses the fact that our covers need not be regular. \begin{lemma}\label{lem:nonexceptional-cover} If a manifold $M$ has a finite-sheeted cover $p\colon \widehat{M}\rightarrow M$ such that $\widehat{M}$ is not exceptional, then $M$ is not exceptional. \end{lemma} \section{The hyperbolic case} \input{hyperbolic} \section{Euclidean manifolds} \input{euclidean} \section{Spherical manifolds}\label{sec_spherical} Seifert fibered manifolds finitely covered by the $3$-sphere, namely the \emph{spherical manifolds}, also require a different proof than the general case. We will soon completely classify the exceptional spherical $3$-manifolds (Proposition~\ref{prp_spherical}). However, we will first need some notation. It is known that spherical $3$-manifolds are exactly the quotients of $\Sphere^3$ by finite groups that act by isometries~\cite{Sco}. These quotients of $\Sphere^3$ have been classified by Hopf \cite[Section 2]{Hopf} (see also \cite[p.\ 12]{AFW} and~\cite[Theorem~2]{Milnor:1957-1}) as follows (note that the group $Q_{4n}$ (in the notation of \cite[p.\ 12]{AFW}) is isomorphic to $D_{2n}$ when $n$ is odd). 
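The non-exceptionality of the twisted $I$-bundle over the Klein bottle from Proposition \ref{prop:specialones} can also be seen at the level of the Klein bottle group $\langle a,b \mid bab^{-1}=a^{-1}\rangle$, realised by the affine maps $a(x,y)=(x+1,y)$ and $b(x,y)=(-x,y+1)$. A small sketch (our own encoding of such maps as triples, not taken from the paper):

```python
# An affine map (x, y) |-> (e*x + tx, y + ty) is encoded as (e, tx, ty), e = +-1.
def mul(f, g):                      # composition f o g
    e1, tx1, ty1 = f
    e2, tx2, ty2 = g
    return (e1 * e2, e1 * tx2 + tx1, ty1 + ty2)

def inv(f):
    e, tx, ty = f
    return (e, -e * tx, -ty)

a = (1, 1, 0)                       # (x, y) |-> (x + 1, y)
b = (-1, 0, 1)                      # (x, y) |-> (-x, y + 1)

# Klein bottle relation: b a b^{-1} = a^{-1}.
assert mul(mul(b, a), inv(b)) == inv(a)

# Index-2 subgroup <a, b^2>: both generators are translations, so they
# commute and the subgroup is Z^2 -- the corresponding cover is T^2 x I.
b2 = mul(b, b)
assert b2 == (1, 0, 2) and mul(a, b2) == mul(b2, a)

# Index-2 subgroup <a^2, b>: the Klein bottle relation persists
# (b a^2 b^{-1} = a^{-2}), so this cover is the twisted I-bundle again.
a2 = mul(a, a)
assert mul(mul(b, a2), inv(b)) == inv(a2)
```

The two subgroups have non-isomorphic abelianisations ($\ZZ^2$ versus $\ZZ\oplus\ZZ/2$), which is the algebraic shadow of the two non-homeomorphic $2$-fold covers.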
\begin{theorem}\label{thm_sphericalclassification} The fundamental group of a spherical $3$-manifold is of exactly one of the following forms: \begin{itemize} \item The trivial group, \item $Q_{8n} := \langle x,y|\; x^2=(xy)^2 = y^{2n} \rangle$, for $n\geq 1$, \item the binary octahedral group: $P_{48} := \langle x,y|\; x^2 = (xy)^3 = y^4, \; x^4 =1 \rangle $, \item the binary icosahedral group: $P_{120} := \langle x,y|\; x^2=(xy)^3=y^5,\;x^4=1\rangle $, \item $D_{2^m(2n+1)} := \langle x,y |\; x^{2^m}=1,\;y^{2n+1} =1,\; xyx^{-1} = y^{-1} \rangle$, for $m\geq 2$ and $n\geq 1$, \item $P_{8\cdot 3^m}' := \langle x,y,z|\;x^2=(xy)^2=y^2,\; zxz^{-1}=y,\;zyz^{-1} = xy,\; z^{3^m}=1 \rangle$, for $m\geq 1$, \item the direct product of any of the above groups with a cyclic group of relatively prime order. \end{itemize} \end{theorem} The subscripts in the notation for the groups above always denote their order. We are now ready to state the main result of this section. \begin{proposition}\label{prp_spherical} A spherical manifold is exceptional if and only if its fundamental group is of one of the following forms: \begin{itemize} \item The trivial group, \item $Q_8$, \item $P_{120}$, \item $D_{2^m(2n+1)}$ for $m\geq 2$ and $n\geq 1$, \item $P_{8\cdot 3^m}'$ for $m\geq 1$, \item the direct product of any of the above groups with a cyclic group of relatively prime order. \end{itemize} \end{proposition} Before giving the proof, we gather a few relevant facts. Recall that the fundamental group determines a spherical manifold unless it is cyclic and non-trivial (see \cite[p.\ 133]{Orlik:1972-1}). This fact, for the larger class of closed, irreducible $3$-manifolds, yields the following lemma (see~\cite[Theorem~2.1.2]{AFW}). \begin{lemma}\label{lemma:fundgp} Let $M$ be a closed, orientable, irreducible $3$-manifold. Let $G$ and $H$ be finite index subgroups of $\pi_1(M)$ and let $\widehat{M_G}$ and $\widehat{M_H}$ be covers of $M$ that correspond to $G$ and $H$, respectively. 
Suppose that $G$ is not a finite cyclic group. Then $G$ and $H$ are isomorphic if and only if $\widehat{M_G}$ and $\widehat{M_H}$ are homeomorphic. \end{lemma} The case of spherical manifolds with cyclic fundamental groups, namely, lens spaces, is more subtle. We recall the following well-known result of Reidemeister \cite{Reidemeister:1935-1}. \begin{theorem}\label{theorem:Reidemeister} Let $L(p, q)$ and $L(p, q')$ be two lens spaces. Then $L(p, q)$ and $L(p, q')$ are homeomorphic if and only if $q' \equiv \pm q^{ \pm1} \mod p$. \end{theorem} \noindent We are now ready to prove Proposition~\ref{prp_spherical}. \begin{proof}[Proof of Proposition~\ref{prp_spherical}] For each of the manifolds listed in Theorem~\ref{thm_sphericalclassification}, we will follow one of the following two strategies. \begin{itemize} \item In order to show that a given manifold is not exceptional, we will show that its fundamental group has two non-isomorphic subgroups with the same index. \item In contrast, in order to show that a given manifold is exceptional, we will first show that its fundamental group has a unique isomorphism type of subgroup with any fixed index. By Lemma~\ref{lemma:fundgp}, this implies that it only remains to consider the case when the subgroups of a given index are all isomorphic to a fixed finite cyclic group. In this case, we will show that the corresponding covers are homeomorphic, either by using Theorem~\ref{theorem:Reidemeister} or by showing that these subgroups are conjugate to each other. \end{itemize} We divide our proof into multiple lemmata. In what follows we will often tacitly identify a manifold with its fundamental group. \noindent First we note: \begin{lemma} Any spherical $3$-manifold with cyclic fundamental group (namely, a lens space) is exceptional. \end{lemma} \begin{proof} This follows since a cyclic group contains at most one subgroup of a given index.
\end{proof} \begin{lemma} The spherical $3$-manifold with fundamental group $P_{48}$ is not exceptional, while that with fundamental group $P_{120}$ is exceptional. \end{lemma} \begin{proof} The proper subgroups of $P_{48}$ and $P_{120}$ are well known (see e.g.~\cite[Appendix]{Lima-Guaschi:2013-1}). The group $P_{48}$ has the non-isomorphic groups $\mathbb{Z}/8$ and $Q_8$ as proper subgroups of the same order, and hence of the same index; thus the spherical manifold with fundamental group $P_{48}$ is not exceptional. The group $P_{120}$ has at most one isomorphism type of subgroup of any given index, and those of order $2,3,4,5,6$, and $10$ are isomorphic to finite cyclic groups. Note that by Theorem~\ref{theorem:Reidemeister} there is a unique $3$-manifold with cyclic fundamental group of order $2$, $3$, $4$, or $6$. Any two proper subgroups of $P_{120}$ of order $5$ are Sylow $5$-subgroups and hence conjugate to each other. Also, it is known that any element of order two in $P_{120}$ is contained in the center. Since the order $5$ subgroups are conjugate, this implies that the order $10$ subgroups are also conjugate to one another. Thus, we see that the spherical manifold with fundamental group $P_{120}$ (namely the Poincar\'{e} homology sphere) is exceptional. \end{proof} All that remains are the three infinite sequences and products with cyclic groups of coprime order. We start with the groups $Q_{8n}$: \begin{lemma} The spherical $3$-manifold with fundamental group $Q_{8n}$ is exceptional if and only if $n=1$. \end{lemma} \begin{proof} The group $Q_8$ is the quaternion group, whose proper non-trivial subgroups are all cyclic of order $2$ or $4$. By an argument similar to the one above, one can show that the spherical manifold with fundamental group $Q_8$ is exceptional. Next, we prove that for $n>1$, the group $Q_{8n}$ is not exceptional. In particular, we will show that these groups have two non-isomorphic subgroups of index $2$.
Let $N_1 = \langle\langle y\rangle\rangle$ and $N_2=\langle\langle x\rangle\rangle$ be the subgroups normally generated by $y$ and $x$, respectively. It is easy to verify that both $N_1$ and $N_2$ are subgroups of index $2$. Note that $N_1$ is cyclic since $xyx^{-1}=y^{-1}$ in $Q_{8n}$. Since $N_1$ has index $2$, the order of $N_1$ is $4n$ and in particular, $y$ has order $4n$ in $Q_{8n}$. On the other hand, we will show that $N_2$ is non-abelian. Using the $x=yxy$ relation, it is easy to see that $yxy^{-1}\cdot x^{-1} = y^{2} \in N_2$. Now suppose, for the sake of contradiction, that $N_2$ is abelian; then in particular $y^2xy^{-2}=x$, since $x, y^2\in N_2$. Since $x^2=(xy)^2$, and hence $x=yxy$, we see that $y^4x=x$, and thus, that the order of $y$ is at most $4$, which contradicts our previous observation, since $n>1$. Thus, the groups $Q_{8n}$ for $n>1$ are not exceptional. \end{proof} \noindent For the dihedral groups we have: \begin{lemma} The spherical $3$-manifolds with fundamental group $D_{2^m(2n+1)}$ are exceptional for all $m\geq 2$ and $n\geq 1$. \end{lemma} \begin{proof} We will invoke some Sylow theory. First, note that the subgroup generated by $x\in D_{2^m(2n+1)}$ is a Sylow $2$-subgroup of $D_{2^m(2n+1)}$ and is isomorphic to $\mathbb{Z}/2^m$. It is also easy to show that the abelianization of $D_{2^m(2n+1)}$ is isomorphic to $\mathbb{Z}/2^m$, and is generated by the image of $x$. Let $N_1=\langle\langle y\rangle\rangle$ be the subgroup of $D_{2^m(2n+1)}$ normally generated by $y$. Note that $N_1$ is cyclic, since $xyx^{-1} = y^{-1}$. Thus, we have the following exact sequence of groups \[1\to N_1\cong \mathbb{Z}/(2n+1) \to D_{2^m(2n+1)} \to \mathbb{Z}/2^m \cong \langle x\rangle \to 1 \] corresponding to the abelianization map. Further, let $N_2 = \langle y, x^2 \rangle$. Since $x^2$ commutes with $y$, we have the following exact sequence \[1\to N_2 \cong \mathbb{Z}/(2n+1) \times \mathbb{Z}/2^{m-1} \to D_{2^m(2n+1)} \to \mathbb{Z}/2 \to 1,\] where the quotient $\mathbb{Z}/2$ is generated by the image of $x$. Clearly, $N_2$ is cyclic of order $2^{m-1}(2n+1)$ and is generated by $yx^2$.
Let $\Gamma\leq D_{2^m(2n+1)}$ be a subgroup and $\phi$ be the restriction to $\Gamma$ of the homomorphism $D_{2^m(2n+1)} \to \mathbb{Z}/2$ in the last sequence. We first consider the case where $\phi$ is a surjection. Then $\Gamma$ must also surject onto $\mathbb{Z}/2^m$ in the abelianization map, since every map from $D_{2^m(2n+1)}$ to an abelian group factors through the abelianization. There are two options -- either $\card{\Gamma} = 2^m$ or $\card{\Gamma}>2^m$. In the first case, $\Gamma$ is cyclic and conjugate to $\langle x \rangle$ by Sylow's theorem. If $\card{\Gamma}>2^m$, we now show that $\Gamma$ must be isomorphic to $D_{2^m(2n'+1)}$ for some $n'\leq n$. First, note that due to the relation $xyx^{-1}=y^{-1}$, any element of $D_{2^m(2n+1)}$ can be written as $y^ix^j$ for some $0\leq i\leq 2n$ and $0\leq j\leq 2^m-1$. Recall that $N_2=\langle yx^2\rangle$. Then $\ker \phi = \Gamma\cap N_2=\Gamma \cap \langle yx^2\rangle$ and thus, $\ker \phi=\langle (yx^2)^d\rangle=\langle y^dx^{2d}\rangle$ for some $d\geq 1$. Since $[\Gamma : \langle y^dx^{2d} \rangle ] = 2$ and $\lvert \langle y^dx^{2d} \rangle \rvert = \frac{2^{m-1}(2n+1)}{d}$, we have $\card{\Gamma} = \frac{2n+1}{d}\cdot2^m$. Since $\card{\Gamma}>2^m$, we see that $d<2n+1$. Since $\Gamma$ surjects onto $\mathbb{Z}/2^m$, $\card{\Gamma}$ is divisible by $2^m$. As a result, $d$ divides $2n+1$. This implies that $d$ is odd and $\frac{2n+1}{d}$ is an integer. Note that $(y^dx^{2d})^\frac{2n+1}{d}=x^{2(2n+1)}$. Since $(2n+1, 2^{m-1})=1$, $x^{2(2n+1)}$ generates $\langle x^2\rangle$. As a result, $x^2\in \langle y^dx^{2d}\rangle$, and thus, $y^d\in\langle y^dx^{2d}\rangle$. Next, let $y^ix^j$ be an element of $\Gamma$ such that $y^ix^j\notin \langle y^dx^{2d}\rangle$. If no such element exists, then $\Gamma \leq N_2$ which is a contradiction. We show that $j$ is odd, as follows. Suppose for the sake of contradiction that $j$ is even. 
Since $x^2\in \langle y^dx^{2d}\rangle$ and $y^ix^j\notin \langle y^dx^{2d}\rangle$, $y^i\notin \langle y^dx^{2d}\rangle$. Since $y^d\in \langle y^dx^{2d}\rangle$, this implies that $i$ is not a multiple of $d$. In particular, $\gcd(i,d)<d$. Moreover, since $y^i\in\Gamma$ and $y^d\in\Gamma$, $y^{\gcd(i,d)}\in \Gamma$. However, note that the order of $y^{\gcd(i,d)}$ is $\frac{2n+1}{\gcd(i,d)}$ since the order of $y$ in $D_{2^m\cdot (2n+1)}$ is $2n+1$. Then, $\frac{2n+1}{\gcd(i,d)}$ divides $\card{\Gamma}=\frac{2n+1}{d}\cdot2^m$, which implies that $d$ divides $2^m\cdot \gcd(i,d)$. Since $d$ is odd, it must divide $\gcd(i,d)<d$, which is a contradiction. Thus, $j$ must be odd; denote $j$ by $2k+1$ for some $k$. We will now complete the proof by showing that the subgroup generated by $y^d$ and $y^ix^{2k+1}$ is equal to $\Gamma$ and isomorphic to $D_{2^m(2n'+1)}$ where $2n'+1=\frac{2n+1}{d}$. We have the following identities \begin{align*} (y^ix^{2k+1})^{2^m}&=(y^ix^{2k+1}y^ix^{2k+1})\cdots (y^ix^{2k+1}y^ix^{2k+1})\\ &=x^{2k+1}\cdots x^{2k+1}\\ &=(x^{2k+1})^{2^m}\\ &=1,\\ (y^d)^\frac{2n+1}{d}&=y^{2n+1}=1,\\ (y^ix^{2k+1})y^d(y^ix^{2k+1})^{-1}&=y^ix^{2k+1}y^dx^{-2k-1}y^{-i}\\ &=y^ixy^dx^{-1}y^{-i}\\ &=y^iy^{-d}y^{-i}\\ &=y^{-d}, \end{align*} where we have used the facts that $x^2y^dx^{-2}=y^d$, $xyx^{-1}=y^{-1}$, and $yxy=x$. It is easy to check that any element of $\langle y^d, y^ix^{2k+1}\rangle$ can be uniquely expressed in the form $(y^d)^{i'}(y^ix^{2k+1})^{j'}$, where $0\leq i'\leq \frac{2n+1}{d}-1$ and $0\leq j'\leq 2^m-1$ and thus, $\lvert \langle y^d, y^ix^{2k+1}\rangle\rvert =\card{\Gamma}=\frac{2n+1}{d}\cdot2^m$, which completes the argument. When $\phi$ is not surjective, it must be the trivial map and thus, $\Gamma$ is a subgroup of the cyclic subgroup $N_2 \cong \mathbb{Z}/((2n+1)2^{m-1})$, which implies that it is cyclic. Returning to the group $D_{2^m(2n+1)}$, we have now shown that subgroups of a given fixed index are isomorphic.
More precisely, subgroups are of the form $D_{2^m(2n'+1)}$ for $n'\leq n$, or cyclic groups of order either $2^m$ or a factor of $(2n+1)2^{m-1}$. The latter arise as subgroups of the cyclic group $N_2$ and thus occur exactly once. We saw earlier that any subgroup of order $2^m$ is a Sylow $2$-subgroup, and thus such subgroups are conjugate to one another. We conclude that the spherical manifolds with fundamental group $D_{2^m(2n+1)}$ are exceptional for $m\geq 2$ and $n\geq 1$. \end{proof} \noindent For the sequence $P_{8\cdot 3^m}'$ we have: \begin{lemma} The spherical $3$-manifold with fundamental group $P_{8\cdot 3^m}'$ is exceptional for all $m\geq 1$. \end{lemma} \begin{proof} First note that $(P_{8\cdot 3^m}')^{ab}\cong \mathbb{Z}/3^m$. Since $Q_8\cong \langle x, y\rangle < P_{8\cdot 3^m}'$ lies in the kernel of the abelianization map and $\card{Q_8}=8$, it must actually coincide with this kernel. Thus this copy of $Q_8$ is a normal subgroup and as such is the unique Sylow-2 subgroup of $P'_{8\cdot 3^m}$. We obtain the following exact sequence \[1\to Q_8 \to P_{8\cdot 3^m}'\to \mathbb{Z}/3^m \to 1\] corresponding to the abelianization map. Now let $\Gamma \leq P_{8\cdot 3^m}'$. If the image of $\Gamma$ under the abelianization map is trivial, it must lie in $Q_8$, hence it is trivial, $\mathbb{Z}/2$, $\mathbb{Z}/4$, or $Q_8$. For the case when the image of $\Gamma$ is non-trivial, we first show that $\Gamma$ is either a finite cyclic group or $Q_8\times \mathbb{Z}/3^j$ for some $0<j\leq m$. Since the order of $\Gamma$ divides the order of $P_{8\cdot 3^m}'$, we see that $\card{\Gamma}=2^i\cdot3^{j}$ for some $0\leq i \leq 3$ and $0\leq {j} \leq m$. Since $\Gamma$ has non-trivial image in $(P_{8\cdot 3^m}')^{ab}$, we see that ${j}\neq0$.
With these restrictions, we see from the list in Theorem~\ref{thm_sphericalclassification} that the only possible groups are $D_{4\cdot3^j}$, $D_{8\cdot 3^j}$, $P'_{8\cdot 3^j}$, $Q_{8\cdot 3^j}$, $Q_8\times \mathbb{Z}/3^j$, $\mathbb{Z}/3^j$, $\mathbb{Z}/{2\cdot 3^j}$, $\mathbb{Z}/{4\cdot 3^j}$, and $\mathbb{Z}/{8\cdot 3^j}$, where $0<j\leq m$. It is straightforward to see that the abelianization of $D_{2^m\cdot (2n+1)}$ is $\mathbb{Z}/2^m$, and the abelianization of $Q_{8\cdot 3^j}$ is $\mathbb{Z}/2\times \mathbb{Z}/2$. Neither of these can surject onto $\mathbb{Z}/3^j$. Next, suppose that $8$ divides the order of $\Gamma$. Then $\Gamma$ has a unique Sylow-2 subgroup, namely $Q_8$, where the uniqueness follows from normality. Since the unique Sylow-2 subgroup of $\mathbb{Z}/8\cdot 3^j$ is cyclic, we see that $\Gamma\ncong \mathbb{Z}/8\cdot 3^j$. So, if $8$ divides the order of $\Gamma$, we see that $\Gamma$ is either isomorphic to $P'_{8\cdot 3^j}$ or $Q_8\times \mathbb{Z}/3^j$. In these cases $\Gamma$ has a Sylow-3 subgroup, denoted by $\Gamma_3$. Recall that there is a Sylow-3 subgroup of $P'_{8\cdot 3^m}$ which is a copy of $\mathbb{Z}/3^m$ generated by $z$. Since any Sylow-3 subgroup of $\Gamma$ must be contained in some Sylow-3 subgroup of $P'_{8\cdot 3^m}$ and Sylow-3 subgroups of a given group are conjugate, there is some $g\in P'_{8\cdot 3^m}$ such that $g^{-1}z^{3^{m-j}}g$ generates $\Gamma_3 \leq \Gamma$, and thus, $z^{3^{m-j}}\in g\Gamma g^{-1}$ generates $g\Gamma_3 g^{-1}$. Note that the subgroup $g\Gamma g^{-1}\leq P'_{8\cdot 3^m}$ has order divisible by $8$, so as before, its Sylow-2 subgroup coincides with the Sylow-2 subgroup of $P'_{8\cdot 3^m}$, namely the copy of $Q_8$ generated by $x$ and $y$. Since $z^3$ commutes with $\langle x,y\rangle$, we see that $\langle x,y,z^{3^{m-j}}\rangle \cong Q_8\times \mathbb{Z}/3^j \leq g\Gamma g^{-1}$ for $j\neq m$ and in fact, $\Gamma \cong Q_8\times \mathbb{Z}/3^j$ due to cardinality.
We have thus reduced the possibilities for $\Gamma$ to $\mathbb{Z}/3^j$, $\mathbb{Z}/{2\cdot 3^j}$, $\mathbb{Z}/{4\cdot 3^j}$, and $Q_8\times \mathbb{Z}/3^j$, where $0<j\leq m$ when the image of $\Gamma$ is non-trivial. Thus, the possible subgroups of $P'_{8\cdot 3^m}$ are of the form $\ZZ/2, \ZZ/4, Q_8, \ZZ/3^j, \ZZ/2\cdot3^j, \mathbb{Z}/4\cdot3^{j}, Q_8\times \mathbb{Z}/3^j$, where $0<j\leq m$. Again, no two distinct isomorphism types of subgroups have the same order; therefore we only need to consider the case of cyclic subgroups. By Theorem~\ref{theorem:Reidemeister} there is a unique $3$-manifold with fundamental group $\ZZ/2$ or with fundamental group $\ZZ/4$. For $\ZZ/3^j$, we know that this group is contained in some Sylow-$3$ subgroup of $P_{8\cdot 3^m}'$, which is cyclic. Hence any two subgroups of order $3^j$ are conjugate to each other, and thus, correspond to isomorphic covering spaces. Suppose $\Gamma \cong \ZZ/2\cdot3^j\cong \ZZ/2\times \ZZ/3^j$. Note that there is a Sylow-$3$ subgroup of $\Gamma$ which is contained in some Sylow-$3$ subgroup of $P_{8\cdot 3^m}'$ which is cyclic and conjugate to $\langle z \rangle$, as we saw earlier. Hence there is some $g \in P_{8\cdot 3^m}'$ such that $g^{-1}z^{3^{m-j}}g$ is contained in $\Gamma$. Similarly, any Sylow-2 subgroup of $\Gamma$ corresponds to a subgroup of order two within the copy of $Q_8$ in $P'_{8\cdot 3^m}$. There is a unique such subgroup, generated by $x^2$. Thus, since $\Gamma$ is cyclic, we see that $\Gamma = \langle x^2, g^{-1}z^{3^{m-j}}g \rangle$. Note that any two such subgroups are conjugate to each other since $x^2$ is central. Lastly, suppose $\Gamma \cong \mathbb{Z}/4\cdot3^{j}$. Let $\Gamma_2$ be a Sylow-$2$ subgroup of $\Gamma$. Then it is contained in $Q_8$, namely the Sylow-2 subgroup of $P'_{8\cdot 3^m}$, and consequently, it is either $\langle x \rangle, \langle y \rangle$, or $\langle xy \rangle$. Further, let $\Gamma_3$ be a Sylow-$3$ subgroup of $\Gamma$.
Then again $g^{-1}z^{3^{m-j}}g$ is contained in $\Gamma$ for some $g \in P_{8\cdot 3^m}'$. We have now seen that $g\Gamma g^{-1}$ is either $\langle x, z^{3^{m-j}} \rangle, \langle y, z^{3^{m-j}} \rangle,$ or $\langle xy, z^{3^{m-j}}\rangle$ and any two such subgroups are conjugate to each other since $z^{3^{m-j}}$ is central, and $x$, $y$, and $xy$ are conjugate to each other. This completes the proof. \end{proof} \noindent Finally, we need to consider direct products with cyclic groups: \begin{lemma} Let $G$ be a group from the statement of Theorem~\ref{thm_sphericalclassification} and $C$ a finite cyclic group so that $\gcd(\card{G},\card{C})=1$. Then $G\times C$ is exceptional if and only if $G$ is exceptional. \end{lemma} \begin{proof} Subgroups of direct products of finite groups of relatively prime orders are direct products of subgroups of the factors. Moreover, since the orders of the two factors are relatively prime, any given index factors uniquely as the product of an index in each factor, and the cyclic factor has a unique subgroup of each index. Hence taking a direct product with a cyclic group of coprime order preserves being exceptional or not. \end{proof} \noindent We have now addressed each case in Theorem~\ref{thm_sphericalclassification} and thus, the proof of Proposition~\ref{prp_spherical} is complete. \end{proof} \section{The general Seifert fibered case}\label{sect:SF} \noindent In this section, we prove the following result. \begin{proposition}\label{prop:seifert-fibered-for-leitfaden} Closed orientable Seifert fibered $3$-manifolds, other than those finitely covered by $T^3$, $S^1\times S^2$ or $S^3$, are not exceptional. \end{proposition} Note that along with Propositions~\ref{prop:euclidean} and~\ref{prp_spherical}, this determines exactly which closed Seifert fibered $3$-manifolds are exceptional. Our proof is based on the following proposition, which can for instance be found in \cite[p.\ 52 (C.10)]{AFW}: \begin{proposition} Let $M$ be a closed Seifert fibered $3$-manifold.
There exists a finite cover $\widehat{M}\to M$ so that $\widehat{M}$ is an $\Sphere^1$-bundle over a closed orientable surface. \end{proposition} In order to find distinct finite covers of a Seifert fibered manifold, it thus suffices to find distinct finite covers of $\Sphere^1$-bundles over closed orientable surfaces. Intuitively, the way we produce these is to take covers both in the $\Sphere^1$-direction and the surface direction. To make this idea precise, we will use the Euler number $e(\pi)$ of our circle bundles to distinguish covers (see \cite[p. 427, 436]{Sco} for a definition). In particular, we will use the following property of Euler numbers (see for instance \cite[Lemma 3.5]{Sco}). \begin{lemma}\label{lem_eulernumber} Let $d\in \NN$ and \[\Sphere^1\to M \stackrel{\pi}{\to} \Sigma\] be an $\Sphere^1$-bundle over a closed oriented surface $\Sigma$, such that $M$ is orientable. Moreover, let $\widehat{M}$ be a degree $d$ finite cover of $M$, so that $\widehat{M}$ is the total space of an $\Sphere^1$-bundle \[\Sphere^1\to \widehat{M} \stackrel{\widehat{\pi}}{\to} \widehat{\Sigma}.\] Suppose the induced circle and surface covers $\Sphere^1\to\Sphere^1$ and $\widehat{\Sigma}\rightarrow \Sigma$ have degrees $m$ and $\ell$ respectively. Then $\ell m=d$ and \[e(\widehat{\pi}) = \frac{\ell}{m} e(\pi). \] \end{lemma} \noindent We now prove the following proposition. This, together with the fact that an orientable $S^1$-bundle over $S^2$ is either $S^1\times S^2$ or has finite cyclic fundamental group, and is hence finitely covered by $S^3$, will complete the proof of Proposition~\ref{prop:seifert-fibered-for-leitfaden}. \begin{proposition}\label{prp_seif}Let $\Sigma$ be a closed orientable surface that is not a sphere and let $M$ be an $\Sphere^1$-bundle over $\Sigma$. Then $M$ is exceptional if and only if $M$ is the trivial $\Sphere^1$-bundle over the $2$-torus. \end{proposition} \begin{proof} Our first claim is that the map $\pi_1(\Sphere^1) \to \pi_1(M)$ is injective.
This follows from the long exact sequence in homotopy of the fibration \[\ldots \to \pi_2(\Sigma) \to \pi_1(\Sphere^1) \to \pi_1(M) \to \pi_1(\Sigma)\to \ldots.\] Our assumption on the genus of $\Sigma$ implies that $\pi_2(\Sigma) = \{e\}$ and hence that the map $\pi_1(\Sphere^1) \to \pi_1(M)$ is injective. First we assume the bundle is non-trivial. Let $t\in \pi_1(\Sphere^1)$ denote a generator. Residual finiteness of $3$-manifold groups (see Theorem~\ref{theorem_resfin}) implies that we can find a finite group $G$ and a surjection $\varphi:\pi_1(M) \to G$ so that \[ \varphi(t) \neq e.\] Let us denote the induced degree $d=\card{G}$ cover by $\widehat{M}\to M$. Since $t$ is mapped to a non-trivial element, the induced circle cover is non-trivial. Then Lemma \ref{lem_eulernumber} tells us that the induced $\Sphere^1$-bundle $\Sphere^1\to \widehat{M} \stackrel{\widehat{\pi}}{\to} \widehat{\Sigma}$ satisfies \[ \abs{e(\widehat{\pi})} < d\cdot \abs{e(\pi)}. \] To build the second cover, take any degree $d$ surface cover $\widetilde{\Sigma}\to\Sigma$ (these exist for any $d$) and pull back the $\Sphere^1$-bundle. This gives rise to a degree $d$ cover $\widetilde{M}\to M$, which has the structure of a $\Sphere^1$-bundle $\Sphere^1\to \widetilde{M} \stackrel{\widetilde{\pi}}{\to} \widetilde{\Sigma}$. Applying Lemma \ref{lem_eulernumber} again, we obtain \[ \abs{e(\widetilde{\pi})} = d\cdot \abs{e(\pi)}. \] Since our bundle is non-trivial, we have $e(\pi)\neq 0$. For instance, it can be extracted from the Gysin sequence that if $\Sphere^1\to N\to \Sigma$ is a circle bundle with Euler number $e\neq 0$, then \[H_1(N;\ZZ) \cong \ZZ^{2g} \oplus \ZZ/e, \] where $g$ denotes the genus of $\Sigma$. In particular this implies that the absolute value of $e$ is an invariant of the total space and not just the circle bundle.
Finally, we have to deal with the trivial bundle, that is, $M \cong \Sigma\times \Sphere^1$. In this case $e(\pi)=0$ and \[H_1(M;\ZZ) \cong \ZZ^{2g+1}. \] Now surface and circle covers \[\widehat{\Sigma}\to \Sigma \;\;\text{and}\;\;\Sphere^1\to\Sphere^1\] of the same degree induce two covers \[\widehat{M} \cong \widehat{\Sigma}\times \Sphere^1\to M \;\;\text{and}\;\;\widetilde{M} \cong \Sigma\times \Sphere^1\to M\] of the same degree. If $g >1$, then $\widehat{\Sigma}$ has strictly greater genus than $\Sigma$, so $\widehat{M}$ is not homeomorphic to $\widetilde{M}$. If $g=1$, then $M\cong T^3$, which is exceptional by Proposition~\ref{prop:specialones}. \end{proof} \section{Sol manifolds} \noindent In this section, we prove the following proposition. \begin{proposition}\label{prp_sol} Sol $3$-manifolds are not exceptional. \end{proposition} \begin{proof} Every orientable Sol manifold $M$ is finitely covered by a $2$-torus bundle over $\Sphere^1$ with Anosov monodromy $\varphi\in \mathrm{MCG}(\Torus^2) \cong \mathrm{SL}_2(\ZZ)$ (see for instance \cite[Theorem 1.8.2]{AFW}). Recall that the monodromy is called Anosov if the top eigenvalue $\lambda_\varphi$ of $\varphi$ as an $\mathrm{SL}_2(\ZZ)$-matrix satisfies $\abs{\lambda_\varphi} > 1$. By Lemma \ref{lem:nonexceptional-cover}, we may assume $M$ is a $2$-torus bundle over $S^1$ with Anosov monodromy $\varphi$. Let $\lambda_\varphi$ be the leading eigenvalue. First we remind the reader of the well-known fact that, as opposed to the case of hyperbolic mapping tori, the modulus of the eigenvalue $\lambda_\varphi$ is a topological invariant. Indeed, we have $b_1(M) = 1$. Each fibration $\pi:M\to S^1$ with connected fibers and monodromy $\psi$ induces a primitive non-torsion cohomology class $[\psi]\in H^1(M;\ZZ)$ and conversely each such cohomology class determines the fibration up to isotopy. The latter fact implies that the top eigenvalue $\lambda_\psi$ of the monodromy $\psi$ depends only on $[\psi]$. The former observation implies that the only fibered classes are $[\varphi]$ and $-[\varphi]$.
Since \[\abs{\lambda_{[-\varphi]}} = \abs{\lambda_{[\varphi]}},\] $\abs{\lambda_\varphi}$ is indeed a topological invariant. In order to build two non-homeomorphic covers, we proceed as follows. First let \[\Torus^2\to\Torus^2\] be a finite non-trivial characteristic cover. This means that $\varphi$ lifts to a map \[\widehat{\varphi}: \Torus^2\to \Torus^2\] such that $\lambda_{\varphi} = \lambda_{\widehat{\varphi}}$. We obtain a cover \[\widehat{M} \to M,\] where \[\widehat{M} \cong \Torus^2 \times [0,1] / (x,0)\sim (\widehat{\varphi}(x),1).\] Since $b_1(M) =1$, we can also take a finite cyclic cover \[ \widetilde{M}\to M\] of the same degree, say $d\neq 1$. The monodromy $\widetilde{\varphi}$ of this cover satisfies \[\abs{\lambda_{\widetilde{\varphi}}} = \abs{\lambda_\varphi}^d,\] thus, as $\abs{\lambda_\varphi}>1$ and $d\neq 1$, $\widehat{M}$ and $\widetilde{M}$ are not homeomorphic. \end{proof} \section{Manifolds with non-trivial JSJ decompositions and non-trivial boundary} \noindent In this section, we prove the following proposition. \begin{proposition}\label{prp_JSJ} Let $M$ be an orientable, irreducible $3$-manifold with empty or toroidal boundary such that either $M$ has a non-trivial JSJ decomposition or $\partial M$ is non-empty. Assume that $M$ is not homeomorphic to $S^1\times D^2$, $T^2\times I$ or the twisted $I$-bundle over the Klein bottle, and that $M$ is not a Sol manifold. Then $M$ is not exceptional. \end{proposition} \noindent We start out with the following useful lemma. \begin{lemma}\label{lem:extend-homomorphisms} Let $M$ be an orientable, irreducible $3$-manifold with empty or toroidal boundary and let $N$ be a JSJ component of $M$. Then for any finite group $G$ and any surjective homomorphism $f\colon\pi_1(N)\twoheadrightarrow G$, there exist finite groups $K$ and $H$ and homomorphisms $g, g_1, g_2, g_3$ $($of the type shown in the diagram$)$, such that the following diagram commutes.
\[ \xymatrix@C1.4cm@R1.3cm{\pi_1(N)\ar@{->>}[d]^f\ar[r]^{i_*}\ar@{->>}[dr]^{g_3} &\pi_1(M)\ar@{->>}[dr]^g\\ G &K\ar@{->>}[l]_-{g_1} \ar@{^(->}[r]^{g_2}& H.} \] \end{lemma} Note in particular that the cover of $N$ induced by the map $g_3$ is a cover of the one induced by $f$. \begin{proof} For closed manifolds this lemma is an immediate consequence of \cite[Theorem~A]{WZ10}. As is explained in \cite[(C.35)]{AFW}, the statement also holds in the case that $M$ has non-empty toroidal boundary. \end{proof} \begin{proof}[Proof of Proposition~\ref{prp_JSJ}] In this proof we use the following terminology. Given a 3-manifold $W$ with empty or toroidal boundary we refer to the union of the JSJ tori and the boundary tori of $W$ as the set of \emph{characteristic tori} of $W$. At the end of the upcoming proof we will have constructed two index $d$ covering spaces $\widehat{M}$ and $\widetilde{M}$ of $M$ that we will distinguish by showing that they have unequal numbers of characteristic tori. We say that a $3$-manifold is \emph{tiny} if it is homeomorphic to $S^1\times D^2$, $T^2\times I$, or to the twisted $I$-bundle over the Klein bottle. Throughout this proof we will use on several occasions the following preliminary remark: If $W$ is an orientable $3$-manifold that is not tiny, then it follows from the classification of $3$-manifolds with virtually solvable fundamental group, see~\cite[Theorem~1.11.1]{AFW}, that no finite cover of $W$ is tiny. By~\cite{Hem} (see also~\cite[C.10]{AFW}), we know that $M$ has a finite-sheeted cover $M'\rightarrow M$ such that each Seifert fibered JSJ component of $M'$ is an $S^1$-bundle over a compact orientable surface. Since $M$ is not a Sol manifold, neither is $M'$. By our preliminary remark, since $M$ is not tiny, neither is $M'$. 
These last three facts, along with~\cite[Propositions 1.9.2 and 1.9.3]{AFW}, imply that this manifold $M'$ has the following useful property $(\dagger)$: For any finite cover $\widehat{M'}\rightarrow M'$, the preimage of the JSJ decomposition of $M'$ is exactly the JSJ decomposition of $\widehat{M'}$. Let $N'$ be a JSJ component of $M'$, where possibly $N'=M'$. By hypothesis, $\partial N'$ is non-empty. There exists a finite-sheeted cover $\overline{N'}\rightarrow N'$ such that the rank of the cokernel of the map $H_1(\partial \overline{N'})\rightarrow H_1(\overline{N'})$ is at least one, by~\cite[C.15, C.17]{AFW}. (Here we used that $N'$ is not homeomorphic to $T^2\times I$; this follows from the fact that $M'$ is not tiny and from our hypothesis that $M$ is not a Sol manifold and from \cite[Proposition~1.6.2(3), 1.8.1, 1.10.1]{AFW}.) The finite-sheeted cover $\overline{N'}\rightarrow N'$ corresponds to a finite index subgroup of $\pi_1(N')$. Recall that any finite index subgroup of a group contains a finite index normal subgroup, called its normal core; let the (finite index, regular) cover corresponding to the latter normal subgroup of $\pi_1(N')$ be denoted $\widehat{N'}\rightarrow N'$. By construction, $\widehat{N'}\rightarrow N'$ corresponds to the kernel of a surjective map $\pi_1(N')\twoheadrightarrow G$ for some finite group $G$, and by Lemma~\ref{lem:extend-homomorphisms}, we obtain the following commutative diagram: \[ \xymatrix@C1.4cm@R1.3cm{\pi_1(N')\ar@{->>}[d]^f\ar[r]^{i_*}\ar@{->>}[dr]^{g_3} &\pi_1(M')\ar@{->>}[dr]^g\\ G \cong \pi_1(N')/\pi_1(\widehat{N'}) &K\ar@{->>}[l]_-{g_1} \ar@{^(->}[r]^{g_2}& H.} \] Let $M^*$ be the cover $M^*\rightarrow M'$ corresponding to the kernel of $g$. From Lemma~\ref{lem:extend-homomorphisms}, it follows that the induced cover of $N'$ corresponding to $g_3$ is a finite-sheeted cover of $\widehat{N'}$; call it $N^*$.
Since $M^*$ is a finite-sheeted cover of $M'$, it follows from $(\dagger)$ that $N^*$ is a JSJ component of $M^*$. Since the cover $N^*\rightarrow \overline{N'}$ is finite-sheeted, it follows from an elementary argument, see~\cite[A.12]{AFW}, that the rank of the cokernel of the map $H_1(\partial N^*)\rightarrow H_1(N^*)$ is also at least one. Since $M^*$ is a finite-sheeted cover of $M$, it follows from Lemma~\ref{lem:nonexceptional-cover} that it suffices to show that $M^*$ is not exceptional. We have the following commutative diagram, where the horizontal rows come from the long exact sequences in singular homology of the pairs $(N^*, \partial N^*)$ and $(M^*, M^*\setminus \Int(N^*))$ and the vertical arrows are induced by inclusion. \begin{equation*} \begin{tikzcd} H_1(\partial N^*) \arrow{r}{i_*}\arrow{d} &H_1(N^*) \arrow{r}\arrow{d}{k_*} &H_1(N^*, \partial N^*)\arrow{d} &\\ H_1(M^*\setminus \Int(N^*)) \arrow{r}{j_*} &H_1(M^*) \arrow{r} &H_1(M^*, M^*\setminus \Int(N^*)). \end{tikzcd} \end{equation*} We obtain an induced commutative diagram \begin{equation*} \begin{tikzcd} \coker(i_*) \arrow[hookrightarrow]{r}\arrow{d}{\overline{k_*}} &H_1(N^*, \partial N^*)\arrow{d}\\ \coker(j_*) \arrow[hookrightarrow]{r} &H_1(M^*, M^*\setminus \Int(N^*)). \end{tikzcd} \end{equation*} By a standard excision argument, we see that $H_1(M^*, M^*\setminus \Int(N^*))\cong H_1(N^*, \partial N^*)$; in other words, the rightmost vertical map above is an isomorphism. We see that $\overline{k_*}$ is injective, and thus, the rank of $\coker(j_*)$ is bounded below by the rank of $\coker(i_*)$, which, as established above, is at least one. Thus, there is an epimorphism $\coker(j_*)\twoheadrightarrow \mathbb{Z}$. We can then define, for any $m>1$, \[ \begin{tikzcd} f_m\colon\pi_1(M^*) \arrow[two heads]{r}{\operatorname{Ab}} &H_1(M^*)\arrow[two heads]{r} & \coker(j_*)\arrow[two heads]{r} &\ZZ \arrow[two heads]{r} &\mathbb{Z}/m, \end{tikzcd} \] where the second and last maps are the canonical projections.
Note that each characteristic torus of $M^*$ is contained in $M^*\setminus \Int(N^*)$, and thus, by our construction, the image under inclusion of the fundamental group of any characteristic torus of $M^*$ lies in the kernel of $f_m$ for all $m$. We are finally ready to construct two non-homeomorphic covers of $M^*$ with the same index. As mentioned in the beginning of the proof, we will do so by constructing two finite covers of $M^*$ with the same degree, but different numbers of characteristic tori. At this point we would like to recall that it follows from $(\dagger)$ that, for any finite cover $W$ of $M^*$, the characteristic tori of $W$ are the preimages of the characteristic tori of $M^*$. Now let $n$ be the total number of characteristic tori of $M^*$. Let $T$ be a characteristic torus of $M^*$. Since $M^*$ is not tiny (and in particular, $M^*$ is not homeomorphic to $S^1\times D^2$, and thus, the boundary tori of $M^*$ are $\pi_1$-injective) we can view $\pi_1(T)$ as a subgroup of $\pi_1(M^*)$. Since $\pi_1(M^*)$ is residually finite (Theorem~\ref{theorem_resfin}), there is some finite index normal subgroup $J\lhd \pi_1(M^*)$ such that $\pi_1(T)$ is not contained in $J$. Let $d>1$ be the index of $J$ in $\pi_1(M^*)$, and let $\widehat{p}\colon \widehat{M}\rightarrow M^*$ be the $d$-sheeted cover of $M^*$ corresponding to $J$. Since $\pi_1(T)$ is not contained in $J$, we see that the preimage of $T$ has strictly fewer than $d$ components. Since the preimage under $\widehat{p}$ of the characteristic tori for $M^*$ gives the characteristic tori for $\widehat{M}$, we see that the number of characteristic tori in $\widehat{M}$ is strictly less than $d\cdot n$. In order to build a second cover, let $\widetilde{p}\colon \widetilde{M}\rightarrow M^*$ denote the cover corresponding to the kernel of $f_d\colon \pi_1(M^*)\rightarrow \mathbb{Z}/d$ constructed above.
By construction, the index of $\widetilde{p}$ is $d$ and the number of characteristic tori in $\widetilde{M}$ is $d\cdot n$, since the image under inclusion of the fundamental group of any characteristic torus of $M^*$ lies in the kernel of $f_d$. We see that $\widehat{M}$ and $\widetilde{M}$ are index $d$ covers of $M^*$ but have an unequal number of characteristic tori, and thus are non-homeomorphic. \end{proof} \section{Non-prime and non-orientable $3$-manifolds}\label{sec:nonprime} \subsection{Prime non-orientable $3$-manifolds} Little further work is needed to completely characterize exceptional prime non-orientable $3$-manifolds with empty or toroidal boundary. Such a manifold is either the twisted $S^2$-bundle over $S^1$, or irreducible. We already saw that the former is exceptional (Proposition~\ref{prop:specialones}). For the latter case, note that if the orientable double cover of a non-orientable $3$-manifold $M$ is not exceptional, then $M$ is not exceptional. By our previous work, we only need to consider the irreducible non-orientable $3$-manifolds whose orientable double covers are $S^1\times S^2$, the $3$-torus $T^3$, $S^1\times D^2$, or $S^1\times S^1\times [0,1]$. Here we have used that closed non-orientable $3$-manifolds have positive first Betti number. In particular, their fundamental groups are infinite~\cite[(E.3)]{AFW}. Note that if a boundary torus for a manifold $M$ is compressible, $M$ has an $S^1 \times D^2$ summand. If $M$ is prime, $M$ must be homeomorphic to $S^1\times D^2$, which is of course orientable. Thus, a prime non-orientable $3$-manifold with toroidal boundary must have incompressible boundary. By~\cite[Lemma~2.1]{Swarup1973}, the fundamental group of the orientable double cover of an irreducible non-orientable $3$-manifold $M$ with incompressible boundary is free (which is the case when the double cover is $S^1\times S^2$ or $S^1\times D^2$) if and only if $M$ is a homotopy $\RR\textup{P}^2\times S^1$.
Such a manifold has first homology group $\mathbb{Z}\oplus \mathbb{Z}/2$. Thus, it has more than one connected double cover, only one of which is orientable. As a result, it is not exceptional. It remains to consider the non-orientable $3$-manifolds with empty or toroidal boundary, whose orientable double cover is $T^3$ or $T^2\times I$. First suppose that $M$ is a $3$-manifold that is covered by $T^3$. It follows from~\cite[Theorem~2.1]{Meeks-Scott} that $M$ is Euclidean. This case is then addressed by Proposition~\ref{proposition_euccase}. Secondly, if $M$ is covered by $T^2\times I$, then a doubling argument, using the above, shows that $M$ is either homeomorphic to the Klein bottle times an interval or to the M\"obius band times $S^1$. In either case $M$ admits more than one $2$-fold covering, only one of which is orientable. Thus $M$ is not exceptional. We have thus established the following proposition. \begin{proposition}\label{prop:non-orientable-prime} Let $M$ be a prime, non-orientable $3$-manifold with empty or toroidal boundary. Then $M$ is exceptional if and only if it is homeomorphic to $S^1\widetilde{\times} S^2$. \end{proposition} \subsection{Non-prime $3$-manifolds} Finally, we consider non-prime $3$-manifolds. Recall that a prime decomposition for a $3$-manifold $M$ is said to be \emph{normal} if there is no $S^1\times S^2$ factor when $M$ is non-orientable. It is well-known that every $3$-manifold has a unique normal prime decomposition (see, for instance,~\cite[Theorems~3.15 and~3.21]{Hempel-book}). Note that we do not assume that the manifold is closed or orientable. \noindent Below we describe the general structure of covering spaces of non-prime $3$-manifolds. \begin{proposition}\label{prop:cover-of-connected-sum} Let $M\cong M_1\#\cdots \#M_k$ be a normal prime decomposition for a $3$-manifold $M$.
If $\widehat{M}$ is a cover of $M$ with index $d$, then \[ \widehat{M}\cong \left(\widehat{M_{11}}\# \cdots \#\widehat{M_{1i_1}}\right)\, \# \cdots \#\, \left(\widehat{M_{k1}}\# \cdots\#\widehat{M_{ki_k}}\right)\#\, \ell S, \] where $S$ denotes $S^1\times S^2$ or $S^1\widetilde{\times} S^2$ according as $\widehat{M}$ is orientable or non-orientable, and $\widehat{M_j}:= \left(\widehat{M_{j1}}\sqcup \cdots \sqcup\widehat{M_{ji_j}}\right)$ is a cover of $M_j$ with index $d$. Moreover, \[ (k-1)\cdot d = \sum_{j=1}^k i_j -1 +\ell. \] In addition, any $\widehat{M}$ of this form is a cover of $M$ with index $d$. \end{proposition} Since any cover of a prime $3$-manifold is prime, the above gives a prime decomposition of $\widehat{M}$ when we ignore $S^3$ summands. \begin{proof}Let $p\colon \widehat{M}\to M$ be the cover. Write $M$ as $M_1\setminus B_1 \cup \cdots \cup M_k\setminus B_k$ where each $B_j\subset M_j$ is an open ball. Then each restricted map \[ p\vert_{p^{-1}(M_j\setminus B_j)}\colon p^{-1}(M_j\setminus B_j)\rightarrow M_j\setminus B_j \] is a covering map. Gluing balls along the lifts of the connected sum spheres gives rise to the covers $\widehat{M_j}$. It is clear that the cover $\widehat{M}$ is built from the collection $\bigsqcup \widehat{M_j}$ by identifying spheres in pairs, arising as lifts of the connected sum spheres. When the sphere pairs are in distinct connected components, we obtain a connected sum. When they lie in the same connected component, we obtain an $S$ summand. Note that $S$ can be chosen to be $S^1\widetilde{\times} S^2$ whenever $\widehat{M}$ is non-orientable since $N\# S^1\times S^2 \cong N\# S^1\widetilde{\times} S^2$ whenever $N$ is a non-orientable $3$-manifold. The relationship between $k$, $d$, $\ell$, and $i_j$ follows from the construction. When $M$ is oriented, each $\widehat{M_j}$ inherits an orientation and it is easy to see that $\widehat{M}$ is a connected sum of oriented manifolds.
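As a concrete illustration of the count (an example added here for the reader; it is not needed for the proof), take $M = \RR\textup{P}^3\#\RR\textup{P}^3$ and let $\widehat{M}$ be the connected double cover corresponding to the kernel of the map $\pi_1(M)\cong \ZZ/2 * \ZZ/2 \rightarrow \ZZ/2$ sending both generators to the generator. The preimage of each $\RR\textup{P}^3$ summand is its connected double cover $S^3$, so $k=2$, $d=2$ and $i_1=i_2=1$, whence $\ell = (k-1)\cdot d - \left(\sum_{j=1}^k i_j - 1\right) = 1$, and \[ \widehat{M}\cong S^3\# S^3\# (S^1\times S^2)\cong S^1\times S^2, \] recovering the familiar double cover $S^1\times S^2\rightarrow \RR\textup{P}^3\#\RR\textup{P}^3$.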
For $\widehat{M}$ of the given form, an index $d$ covering map $\widehat{M}\to M$ can be constructed by gluing together the individual covering maps. \end{proof} \noindent For non-prime $3$-manifolds, we have the following result. \begin{proposition}\label{prop:nonprime} Let $M$ be a non-prime $3$-manifold with empty or toroidal boundary. Then $M$ is exceptional if and only if it is homeomorphic to $k\cdot S^1\times S^2$, for some $k\geq 2$. \end{proposition} \begin{proof} First, we show that $k\cdot S^1\times S^2$ is exceptional. Note that every connected finite cover of $S^1\times S^2$ is again $S^1\times S^2$. Then, we see immediately from Proposition~\ref{prop:cover-of-connected-sum} that any degree $d$ cover of $k\cdot S^1\times S^2$ is homeomorphic to $((k-1)d +1)\cdot S^1\times S^2$. Next, we show that any manifold which is not of the form $k\cdot S^1\times S^2$ is not exceptional. Let $M\cong M_1\# \cdots \# M_k$ be a normal prime decomposition for $M$. By hypothesis, $k\geq 2$. As a preliminary step, we observe that if $M$ has a prime summand which is not exceptional, then $M$ is itself not exceptional. Since both $S^2$-bundles over $S^1$ are exceptional, such a prime summand must be irreducible. Without loss of generality, assume that $M_1$ is irreducible and not exceptional. Then, there exist non-homeomorphic covers $\widehat{M_1}$ and $\widetilde{M_1}$ of $M_1$, both with index $d$, for some $1<d<\infty$. Construct the covers $\widehat{M}\cong \widehat{M_1}\# d(M_2\# \cdots \#M_k)$ and $\widetilde{M}\cong \widetilde{M_1}\# d(M_2\# \cdots \#M_k)$ of $M$. Note that both have index $d$. Since $\widehat{M_1}$ and $\widetilde{M_1}$ are both irreducible, they appear in the prime decompositions of $\widehat{M}$ and $\widetilde{M}$, respectively. By the uniqueness of normal prime decompositions, $\widehat{M}$ and $\widetilde{M}$ are not homeomorphic. Thus, we only need to consider the case where each $M_i$ is itself exceptional. First we consider the case where $M$ is orientable.
Since $M$ is not of the form $k\cdot S^1\times S^2$, we can assume, without loss of generality, that $M_1$ is an exceptional manifold other than $S^1\times S^2$, and that there exist a cover $\widehat{M_1}\to M_1$ of index $d_1 > 1$ and a cover $\widehat{M_k}\to M_k$ of index $d_k$, such that $d_1\leq d_k$. We now build two covers of $M$ with index $d_k$ as follows. Let \[ \widehat{M}= d_k(M_1\# \cdots \# M_{k-1}) \# \widehat{M_k}, \] and let \[ \widetilde{M}= \left(\widehat{M_1}\# (d_k-d_1)M_1\right) \# (d_1-1)\,S^1\times S^2 \# d_k(M_2\# \cdots \# M_{k-1})\# \widehat{M_k}. \] Suppose that $\widehat{M}\cong \widetilde{M}$. Then, by the uniqueness of normal prime decompositions, we see that $M_1\cong S^1\times S^2$, which is a contradiction. Next, consider the case when $M$ is non-orientable. Suppose first that there is at least one irreducible prime summand in the given normal prime decomposition of $M$. Without loss of generality, we can assume that this is $M_1$. Since no non-orientable irreducible $3$-manifold is exceptional, we see that $M_1$ is orientable. Let $N$ denote the manifold $M_2\# \cdots \# M_k$. Since $M\cong M_1\# N$ is non-orientable, we see that $N$ is non-orientable. Let $\widehat{N}$ be the orientable double cover of $N$ and construct the orientable double cover $2M_1\#\widehat{N}$ of $M$. Since $M_1$ is irreducible, it is in particular not $S^1\times S^2$. By our argument in the previous paragraph, $2M_1\#\widehat{N}$ has two non-homeomorphic covers of the same index, showing that $M$ is not exceptional. It only remains to consider the case where $M$ is non-orientable but has no irreducible prime summands. Since we have a normal prime decomposition, this implies that $M$ is of the form $k\cdot S^1\widetilde{\times} S^2$ where $k\geq 2$. Note that in this case $H_1(M)\cong \mathbb{Z}^k$ where $k\geq 2$. Then, we see that there are $2^k-1$ connected double covers, only one of which is orientable; in particular, $M$ has two non-homeomorphic covers of the same index, which completes the proof. \end{proof} \noindent We observe that we have established the following proposition.
\begin{proposition}\label{prop:nonorientable} The only non-orientable exceptional $3$-manifold with empty or toroidal boundary is the twisted $S^2$-bundle over $S^1$. \end{proposition} \bibliographystyle{alpha} \subsection{Non-exceptionality of lattices in Lie groups} Let $G$ be a semisimple Lie group and let $X$ be the symmetric space associated to $G$ (for example, $G = \mathrm{PGL}_2(\CC)$ and $X = \HH^3$). Then for any discrete subgroup $\Gamma \le G$, the quotient $\Gamma \backslash X$ is a complete Riemannian orbifold locally isometric to quotients of $X$ by finite subgroups of $G$ (in particular, if $\Gamma$ is torsion-free then $\Gamma \backslash X$ is a manifold). We will call such orbifolds {\em $X$-orbifolds}. The Mostow-Prasad rigidity theorem \cite{Mos, Mos2, Prasad_rigidity,Margulis_book} states that if $G$ is not locally isomorphic to $\mathrm{PGL}_2(\RR)$, then two irreducible lattices $\Gamma_1$ and $\Gamma_2$ in $G$ are isomorphic as abstract groups if and only if the orbifolds $\Gamma_i \backslash X$ are isometric to each other. In particular the metric invariants of $\Gamma \backslash X$ are an isomorphism invariant of $\Gamma$. We will be using the {\em systole} to distinguish between subgroups: given an $X$-orbifold $M$ this is defined as the infimum of lengths of closed geodesics on $M$, and we will denote it by $\mathrm{sys}(M)$. Note that it follows from the Margulis lemma that $\sys(M)$ is positive if $M$ has finite volume. The systole of $\Gamma \backslash X$ can be computed from the action of $\Gamma$ on $X$. If $g \in G$ is an element whose semisimple part does not belong to a compact subgroup of $G$, then the minimal translation \[ \ell(g) := \inf_{x \in X} d_X(x, gx) \] is positive. Then, denoting by $\Gamma_{ah}$ the set of such elements in $\Gamma$, we have: \begin{equation} \label{systole_tranlength} \sys(M) = \min_{\gamma\in\Gamma_{ah}} \ell(\gamma). 
\end{equation} Note that if $\Gamma$ is cocompact, then $\Gamma_{ah}$ is the set of semisimple elements of infinite order in $\Gamma$. \medskip \noindent We will now prove the following result, of which Proposition \ref{proposition_hypcase} is a special case. \begin{proposition} \label{lattices} Let $\Gamma$ be an irreducible lattice in a semisimple linear Lie group not locally isomorphic to $\mathrm{PGL}_2(\RR)$. Then $\Gamma$ is not exceptional. \end{proposition} \begin{proof} Let \( G \) be a semisimple linear Lie group as in the statement, so that we may assume $G < \mathrm{GL}_d(\mathbb R)$ for some $d$, and let \( \Gamma \) be a lattice in \( G \). It is a standard consequence of local rigidity of \( \Gamma \), which holds under the condition that \( G \) not be locally isomorphic to \( \mathrm{PGL}_2(\RR) \), that we may conjugate $G$ so that there exists a number field $F$ such that $\Gamma < \mathrm{GL}_d(F)$ (the proof given for \cite[Theorem 3.1.2]{MR_book} in the cocompact case adapts immediately to all other groups). Let $\mathbf H$ be the Zariski closure of $\Gamma$ in the $\QQ$-algebraic group obtained by Weil restriction of the linear $F$-algebraic group \( \mathrm{GL}_d(F) \) to \( \QQ \). By passing to a finite index subgroup if necessary, we may assume that every finite index subgroup of $\Gamma$ has Zariski closure equal to $\mathbf H$. Indeed, the Zariski closures of the subgroups in a chain of finite index subgroups $\ldots < \Gamma_{i+1} < \Gamma_i < \ldots < \Gamma$ form a descending chain of algebraic subgroups of $\mathbf H$, and any strictly descending chain of algebraic groups is necessarily finite. So, a chain of finite index subgroups whose Zariski closures are strictly contained in each other necessarily terminates after a finite number of steps, and we may take the last term. By finite generation of $\Gamma$ there exists a finite set $S$ of rational primes such that $\Gamma \subset \mathbf H\left(\ZZ[p^{-1}, p \in S]\right)$. For the rest of the proof we will work with a rational prime $q \not\in S$, to be chosen below. Thus we can define the group of $\ZZ_q$-points, $H_q = \mathbf H(\ZZ_q)$.
Nori-Weisfeiler strong approximation \cite{Weisfeiler} implies that we can choose $q$ so that the closure of $\Gamma$ in $H_q$ is of finite index. Since $H_q$ is $q$-adic analytic we may assume that it is a uniform pro-$q$ group (cf. \cite[Theorem 8.1]{DdSMS}), replacing $\Gamma$ by a finite index subgroup if necessary. \medskip \noindent Now we prove the following lemma. \begin{lemma} \label{kill_element} Let $p$ be a prime, $H$ a uniform pro-$p$ group, and $\gamma \in H$. There exists a sequence $(H_1(k), H_2(k))$ of pairs of open subgroups of $H$ such that $|H/H_1(k)| = |H/H_2(k)|$, $H_i(k+1) \subset H_i(k)$ and \[ \bigcap_{k \ge 1} H_1(k) = \{e\}, \: \bigcap_{k \ge 1} H_2(k) = \overline{\langle\gamma\rangle}. \] \end{lemma} \begin{proof} Let $P_k(H)$ be the lower $p$-series of $H$ (see \cite[Definition 1.15]{DdSMS}). Replacing $H$ by some $P_k(H)$ we may assume that $\gamma \in H \setminus P_2(H)$. Uniformity of $H$ implies that, independently of $k \ge 1$, the group $P_k(H)/P_{k+1}(H)$ is an $\mathbb F_p$-vector space of fixed dimension $c$ so that \[ |H / P_{k+1}(H)| = p^{ck}. \] On the other hand, we have $\gamma^{p^k} \in P_{k+1}(H) \setminus P_{k+2}(H)$, so we get that \[ \left| H / \left(\langle\gamma\rangle P_{k+1}(H) \right)\right| = p^{(c-1)k}. \] We define \[ H_2(k) = \langle \gamma \rangle P_{ck+1}(H), \] which satisfies $H_2(k+1) \subset H_2(k)$ and $\bigcap_{k \ge 1} H_2(k) = \overline{\langle\gamma\rangle}$. Let also \[ H_1(k) = P_{(c-1)k+1}(H). \] Then we have that \[ |H/H_1(k)| = p^{c(c-1)k} = |H/H_2(k)|. \] On the other hand, $H_1(k) \subset P_{k+1}(H)$, so that $\bigcap_{k \ge 1} H_1(k) = \{e\}$. \end{proof} Now let $\gamma \in \Gamma$ be a semisimple element of infinite order. Applying the lemma to $H_q$ and~$\gamma$ we get two sequences of subgroups \[ \Gamma_1(q,k) = \Gamma \cap H_1(k+1), \: \Gamma_2(q,k) = \Gamma \cap H_2(k+1) \] which satisfy the same properties as the $H_i(k)$.
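The index computations in the proof of Lemma~\ref{kill_element} can be checked numerically in a model case; this is an illustrative aside, not part of the argument. For the uniform pro-$p$ group $H = \ZZ_p^c$ one has $P_{j+1}(H) = p^{j}H$, so everything can be verified in the finite quotient $H/P_{m+1}(H)\cong (\ZZ/p^m)^c$ with $\gamma = (1,0,\dots,0)$:

```python
def subgroup_size(p, m, c, gens):
    # Brute-force closure under addition: the subgroup of (Z/p^m)^c
    # generated by `gens` (the group is finite abelian, so repeatedly
    # adding generators reaches the whole subgroup).
    q = p ** m
    zero = (0,) * c
    elems, frontier = {zero}, [zero]
    while frontier:
        x = frontier.pop()
        for g in gens:
            y = tuple((a + b) % q for a, b in zip(x, g))
            if y not in elems:
                elems.add(y)
                frontier.append(y)
    return len(elems)

def index_in_model(p, m, c, j, with_gamma):
    # Index in (Z/p^m)^c of the image of P_{j+1}(H) = p^j H,
    # optionally with gamma = (1, 0, ..., 0) adjoined.
    gens = [tuple(p ** j if i == t else 0 for i in range(c)) for t in range(c)]
    if with_gamma:
        gens.append((1,) + (0,) * (c - 1))
    return p ** (c * m) // subgroup_size(p, m, c, gens)

p, c, m = 2, 2, 6
for k in (1, 2):
    # |H / <gamma> P_{ck+1}(H)| = p^{(c-1)ck} ...
    assert index_in_model(p, m, c, c * k, True) == p ** ((c - 1) * c * k)
    # ... agrees with |H / P_{(c-1)k+1}(H)| = p^{c(c-1)k}
    assert index_in_model(p, m, c, (c - 1) * k, False) == p ** (c * (c - 1) * k)
```

The two asserted equalities are exactly the statement $|H/H_1(k)| = p^{c(c-1)k} = |H/H_2(k)|$ from the proof, specialised to the abelian model.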
In particular, for any finite set $\Sigma \subset \Gamma \setminus \{1\}$, we have $\Sigma \cap \Gamma_1(q,k) = \emptyset$ for large enough $k$. Applying this to the finite sets \[ \Sigma_R = \{ \gamma \in \Gamma_{ah}:\: \ell(\gamma) \le R\} \] with $R$ going to infinity we see, using the formula \eqref{systole_tranlength}, that: \[ \lim_{k\to+\infty}\sys(\Gamma_1(q,k) \backslash X) = +\infty. \] On the other hand, we have $\gamma \in \Gamma_2(q,k)$ for all $k > 0$ and it follows that \[ \forall k > 0 : \: \sys(\Gamma_2(q,k) \backslash X) \le \ell(\gamma) \] and in particular, for any large enough $k$, the systoles of the $X$-orbifolds $\Gamma_1(q,k) \backslash X$ and $\Gamma_2(q,k) \backslash X$ are different. It finally follows from Mostow's rigidity theorem, as observed before the proposition, that the subgroups $\Gamma_1(q,k)$ and $\Gamma_2(q,k)$, which have the same index, cannot be isomorphic to each other for large enough $k$. \end{proof} \subsection{Agol-Wise's theorem and a quantitative result}\label{quant} It was proved by Agol, building on the work of Kahn-Markovic, Wise, and many others, that hyperbolic $3$-manifolds have finite degree covers with positive first Betti number. In fact more is true; we have the following properties (see \cite[Theorem~9.2]{Ago} for the closed case and \cite[Theorem~1.3]{Cooper_Long_Reid_VH} for the case with toroidal boundary), in order of strength. \begin{theorem}\label{theorem_virtbetti} Let $M$ be a hyperbolic $3$-manifold with finite volume. Then there exists: \begin{enumerate}[font=\normalfont] \item \label{pos_virtbetti} a finite cover $\widehat{M}\to M$ so that $b_1\left(\widehat{M}\right) > 0$; \item \label{infinite_virtbetti} for any $r \ge 1$, a finite cover $\widehat{M}\to M$ so that $b_1\left(\widehat{M}\right) \ge r$; \item \label{large} a finite cover $\widehat{M}\to M$ so that $\pi_1\left(\widehat{M}\right)$ surjects onto a non-abelian free group.
\end{enumerate} \end{theorem} We note that (\ref{pos_virtbetti}) can be used to give a proof of Proposition~\ref{proposition_hypcase} which is similar to, but simpler than, that of Proposition \ref{lattices}: if $H^1(\Gamma)$ contains a class $\phi$ of infinite order, then the systole of the index $n$ subgroup $\Gamma_n:=\phi^{-1}(n\ZZ)$ stays bounded as $n \to +\infty$. As $\Gamma$ is residually finite, there exists a sequence of finite index subgroups $\Gamma_m'$ whose systoles tend to infinity. For $m$ large enough it thus follows from Mostow rigidity that the subgroups $\Gamma_{[\Gamma : \Gamma_m']}$ and $\Gamma_m'$ are both of index $[\Gamma:\Gamma_m']$ and not isomorphic to each other. \medskip In fact a much stronger quantitative result holds. The strongest result (\ref{large}), together with an argument due to Lubotzky and Belolipetsky-Gelander-Lubotzky-Shalev (\cite{BGLM}, \cite[Section 5.2]{BGLS}), shows that the number $e_d(M)$ of pairwise non-isometric degree $d$ covers of a hyperbolic manifold $M$ of finite volume satisfies \[ \limsup_{d \to +\infty} \frac{\log e_d(M)}{d\log(d)} > 0. \] \subsection{Regular covers of hyperbolic manifolds} In this subsection we use (\ref{infinite_virtbetti}), with $r = 2$, to prove the following result about regular covers. \begin{proposition} \label{main_normal} Let $\Gamma$ be the fundamental group of a hyperbolic $3$-manifold (complete, of finite volume). Then there exist sequences $c_n, d_n \to +\infty$ such that for each $n$ we can find at least $c_n$ normal subgroups of index $d_n$ in $\Gamma$, which are pairwise non-isomorphic. \end{proposition} \noindent An important step in the proof is the following special case. \begin{proposition} \label{special_norm} Let $\Gamma$ be a lattice in a simple Lie group $G$, not isogenous to $\mathrm{PGL}_2(\RR)$. Assume that $b_1(\Gamma) \ge 2$ and, for all $n \ge 1$, let $c_n$ be the maximal number of pairwise non-isomorphic normal subgroups $\Gamma' \lhd \Gamma$ with $\Gamma / \Gamma' \cong \ZZ / n$.
Then \[ \liminf_{n \to +\infty} \frac{c_n} n > 0. \] \end{proposition} We note that the hypothesis on \( \Gamma \) implies that \( G \) is of real rank 1, and in fact isogenous to one of \( \mathrm{SO}(n, 1) \) or \( \mathrm{SU}(n, 1) \), as lattices in higher-rank simple Lie groups have property (T) and hence finite abelianisation, as do those in \( \mathrm{Sp}(n, 1) \) and the exceptional rank 1 group \( F_4^{-20} \). \subsubsection{Remarks} \begin{itemize} \item Proposition \ref{special_norm} shows that when $b_1(\Gamma) \ge 2$, for any large enough $n$ there exists a pair of non-isomorphic normal subgroups of index $n$ within $\Gamma$. The conclusion of Proposition~\ref{main_normal} is much weaker, and we do not know whether in general there are non-homeomorphic normal covers for every degree in a subset of \( \NN \) of natural density one. Note that in general this cannot be true of every degree---for example if \( M \) is a homology sphere then it cannot have regular covers of any prime degree. \item We still have some control over the density of the sequence $d_n$ in Proposition \ref{main_normal}: it follows from the proof that we have $d_n \ll n^M$ where $M = r!$, with $r$ the smallest index of a normal subgroup with $b_1 \ge 2$. \item Moreover, the proof of Proposition \ref{main_normal} shows that we can take $c_n \gg d_n^{1/e - \varepsilon}$ for all $\varepsilon > 0$. \item The only ingredient specific to dimension 3 in the proof of Proposition \ref{main_normal} is property~(\ref{infinite_virtbetti}). We note that this property holds for many lattices in higher dimensions as well (in particular all known lattices in $\mathrm{SO}(n,1)$ in even dimensions), and for some complex hyperbolic lattices (see for example \cite{Marshall_unitary}). \item In \cite{Zimmermann}, Zimmermann produces a similar set of subgroups in $\Gamma$. In that construction, the quotients are isomorphic to $\ZZ/p^{n-i}\ZZ \oplus \ZZ/p^i\ZZ$ where $p$ is some large prime. 
Moreover, the number of subgroups in Zimmermann's construction is sublinear as a function of their index. \end{itemize} \subsubsection{Proof of Proposition \ref{special_norm}} Let $\varphi_2(n)$ be the number of surjective homomorphisms from $(\ZZ/n)^2$ to $\ZZ/n$ and let $\varphi(n)$ be Euler's totient. Observe that $(1,0)\mapsto a, (0,1)\mapsto *$ defines $\varphi(n)\times n$ surjective homomorphisms from $(\ZZ/n)^2$ to $\ZZ/n$, where $(a,n)=1$ and $*$ is an arbitrary element of $\mathbb{Z}/n$. Similarly, there are $n\times \varphi(n)$ surjective homomorphisms defined by $(1,0)\mapsto *, (0,1)\mapsto b$, where $(b,n)=1$ and $*$ is an arbitrary element of $\mathbb{Z}/n$. Counted without repetition, we have produced \[ \varphi(n)\times n + n \times \varphi(n) - \varphi(n) \times\varphi(n) = (2n - \varphi(n))\varphi(n) \] surjective homomorphisms from $(\ZZ/n)^2$ to $\ZZ/n$. Thus, $\varphi_2(n) \geq (2n - \varphi(n))\varphi(n)$. Let $h_n$ be the number of surjective morphisms from $\Gamma$ to $\ZZ/n$. Since $\Gamma$ surjects onto $(\ZZ/n)^2$ (as $b_1(\Gamma)\ge 2$), we have that $h_n \ge \varphi_2(n)$. Two surjective morphisms $\pi_1, \pi_2 : \Gamma \to Q$ have the same kernel if and only if there exists an automorphism $\psi$ of $Q$ such that $\pi_2 = \psi \circ \pi_1$, and the number of automorphisms of $\ZZ/n$ equals $\varphi(n)$; hence \[ \frac{h_n} {\card{\mathrm{Aut}(\ZZ/n)}} \ge \frac{\varphi_2(n)}{\varphi(n)} \ge 2n - \varphi(n) \ge n, \] and we get that there are pairwise distinct normal subgroups $A_1, \ldots, A_n \le \Gamma$ such that $\Gamma / A_j \cong \ZZ / n$ for $1 \le j \le n$. By Mostow rigidity, we have that $c_n$ is at least the maximal number of $A_j$ which are pairwise non-conjugate in $G$. We want to prove that there exists $b> 0$ depending only on $\Gamma$ such that for every $n$ at most $b$ among the $A_j$s can be conjugate to one another, which implies that $c_n \ge n/b$ and finally the conclusion of Proposition~\ref{special_norm}.
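Before continuing, we note that the homomorphism count above is easy to verify by brute force for small $n$ (an illustrative aside, not part of the proof): a homomorphism $(\ZZ/n)^2\to\ZZ/n$ is determined by the images $(a,b)$ of the two generators, and it is onto exactly when $\gcd(a,b,n)=1$.

```python
from math import gcd

def phi(n):
    # Euler's totient, by direct count
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

def phi_2(n):
    # Number of surjective homomorphisms (Z/n)^2 -> Z/n: determined by
    # the images (a, b) of the generators, onto iff gcd(a, b, n) = 1.
    return sum(1 for a in range(n) for b in range(n)
               if gcd(gcd(a, b), n) == 1)

for n in range(2, 60):
    # the lower bound (2n - phi(n)) * phi(n) derived in the text
    assert phi_2(n) >= (2 * n - phi(n)) * phi(n)
    # consequently phi_2(n) / phi(n) >= 2n - phi(n) >= n
    assert phi_2(n) >= n * phi(n)
```

In fact $\varphi_2(n)$ equals the Jordan totient $n^2\prod_{p\mid n}(1-p^{-2})$, but only the stated lower bound is used in the argument.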
For this we use a refinement of the arguments of \cite{BGLM} and \cite{BGLS} that we mentioned in the previous subsection. \medskip First we deal with the case where $\Gamma$ is non-arithmetic: then an immediate and well-known consequence of Margulis' commensurator criterion for arithmeticity\footnote{The criterion \cite[Theorem (B) in Chapter IX]{Margulis_book} states that $\Gamma$ has finite index in its commensurator $\Omega$; since any lattice commensurable to $\Gamma$ commensurates $\Gamma$, it has to be contained in $\Omega$ and the claim follows.} is that there is a unique maximal lattice $\Omega \subset G$ in the commensurability class of $\Gamma$, which is equal to the commensurator of $\Gamma$. Thus any $g \in G$ which conjugates two $A_j$s must belong to $\Omega$, and since the $A_j$s are normal in $\Gamma$, each has at most $b = |\Omega / \Gamma|$ conjugates among them. \medskip Now assume $\Gamma$ is arithmetic. By definition of arithmeticity there exists a semisimple algebraic group $\mathbf G$ defined over $\ZZ$ such that $\Gamma \subset \mathbf G(\ZZ)$ with finite index. For $p$ a rational prime let $\ZZ_p$ denote the $p$-adic integers. Then a {\em congruence subgroup} of $\Gamma$ is a subgroup of the form $\Gamma \cap U$ where $U$ is a finite index (equivalently, open) subgroup in $\prod_p \mathbf G(\ZZ_p)$. If $\Lambda$ is a finite index subgroup of $\Gamma$, we denote by $\ccl{\Lambda}$ its {\em congruence closure}: this is the smallest congruence subgroup of $\mathbf G(\QQ)$ containing $\Lambda$; explicitly, the congruence closure of $\Lambda$ is equal to $\Gamma \cap V$, where $V$ is the closure of $\Lambda$ in $\prod_p \mathbf G(\ZZ_p)$. \begin{lemma} \label{abelian_cong} Let $\Gamma$ be an arithmetic group.
There exist finitely many congruence subgroups $\Gamma_1, \ldots, \Gamma_m$ with the following property: if $\Lambda$ is a finite index normal subgroup of $\Gamma$ such that $\Gamma / \Lambda$ is abelian, then $\ccl{\Lambda}$ is equal to one of the $\Gamma_i$. \end{lemma} \begin{proof} If $\Gamma / \Lambda$ is abelian then so is $\Gamma / \ccl{\Lambda}$, so that it suffices to show that there are only finitely many congruence subgroups $\Delta \le \Gamma$ such that $\Gamma / \Delta$ is abelian. Let $\Gamma'$ be the derived subgroup of $\Gamma$, which is Zariski-dense in $\mathbf G$ (since $\mathbf G$ does not have abelian quotients). By Nori-Weisfeiler strong approximation \cite{Nori,Weisfeiler} it follows that the closure of $\Gamma'$ in $\prod_p \mathbf G(\ZZ_p)$ has finite index in that of $\Gamma$. This means that at most finitely many congruence subgroups of $\Gamma$ contain $\Gamma'$, which is the statement we wanted to prove. \end{proof} The commensurator of $\Gamma$ is equal to (the image in $G$ of) $\mathbf G(\QQ)$ and we have that $\ccl{g\Lambda g^{-1}} = g\ccl{\Lambda} g^{-1}$ for all $g \in \mathbf G(\QQ)$. It follows that if two subgroups of $\Gamma$ are conjugate to each other, an element conjugating them must belong to $\mathbf G(\QQ)$ and conjugate their congruence closures to each other as well: in particular, if the latter are equal, then the element must belong to the normaliser of this common congruence closure. Let \( \Gamma_1, \ldots, \Gamma_m \) be given by the lemma and let \( i \) be an index such that \( \Gamma_i \) is the congruence closure of \( k \ge n/m \) of the \( A_j \)s, and assume for notational ease that these are \( A_1, \ldots, A_k \). It follows from the above that for any $n$, any element conjugating two of the $A_1, \ldots, A_k$ must belong to the normaliser \( \Omega_i \) of $\Gamma_i$. Thus, as the \( A_j \) are normal in \( \Gamma \), the maximal number of conjugates among \( A_1, \ldots, A_k \) is at most $c = |\Omega_i/(\Gamma_i \cap \Gamma)|$.
In conclusion, we have shown that we can find at least \( k/c \ge n/(cm) \) among the \( A_j \) that are pairwise not conjugate in \( G \), hence our claim follows (with \( b = cm \)). \begin{proof}[Proof of Proposition \ref{main_normal}] Let $\Delta$ be a lattice in $\mathrm{PGL}_2(\CC)$. By Theorem~\ref{theorem_virtbetti}(\ref{infinite_virtbetti}) there exists a finite index normal subgroup $\Gamma \triangleleft \Delta$ with $b_1(\Gamma) \ge 2$, so that we may apply Proposition \ref{special_norm} to $\Gamma$. Let $a_1, \ldots, a_r$ be representatives for the left cosets of $\Gamma$ in $\Delta$. Let $n \ge 1$ and let $B_1, \ldots, B_{c_n}$ be the subgroups obtained in Proposition \ref{special_norm}. Then since $B_j \triangleleft \Gamma$ we get, for all $1 \le j \le c_n$, that: \[ C_j = \bigcap_{i=1}^r a_i B_j a_i^{-1} \] is normal in $\Delta$. We recall that if $A$ is a permutation group of degree $r$ (that is, a subgroup of the symmetric group $\sym_r$) and $B$ any group, the \emph{wreath product} $A \wr B$ is the semidirect product $B^r \rtimes A$, where $A$ acts on $B^r$ by permuting indices. \begin{lemma} \label{bigger_quotient} There exist $r,l\in\NN$, depending only on $\Gamma$ and $\Delta$, such that for each $1\le j\le c_n$, there exists a finite abelian group $Q_j$ so that \[(\ZZ/n)^l \twoheadrightarrow Q_j \] and \[\Delta/C_j \hookrightarrow \sym_r \wr Q_j.\] \end{lemma} \begin{proof} Let $\rho$ be the morphism $\Delta \to \sym(\Delta /C_j)$ associated to the left-translation action. It respects the decomposition into left $\Gamma$-cosets and hence it has image inside $\sym(\Delta/\Gamma) \wr \sym(\Gamma/C_j)$. Moreover the stabiliser of a block has its image in a conjugate of the image of the action of $\Gamma$ on $\Gamma/C_j$. It remains to see that \( \Gamma/C_j \) is abelian of exponent \( n \) (we can then take \( l \) to be the minimal number of generators of \( H_1(\Gamma) \), so that \( (\ZZ/n)^l \) surjects onto \( H_1(\Gamma) \otimes \ZZ/n \)).
To do so we need only remark that the subgroup \([\Gamma,\Gamma]\cdot\Gamma^n < \Gamma \) generated by commutators and \( n \)th powers is characteristic in \( \Gamma \) and contained in \( B_j \), hence it is also contained in \( C_j \). Since \[[\Gamma,\Gamma]\cdot\Gamma^n = \ker\left( \Gamma \to H_1(\Gamma) \otimes \ZZ/n \right)\] this implies that \[H_1(\Gamma)\otimes \ZZ/n \twoheadrightarrow \Gamma/C_j\] and hence that the image of $\Delta/C_j$ in the second factor in the wreath product is a quotient of $H_1(\Gamma)\otimes \ZZ/n$. \end{proof} By the same argument as in the proof of the previous proposition we can eliminate some of the $C_j$s so that at least $b_n = c_n/a$ (where $a$ depends only on $\Gamma$) are pairwise non-conjugate, and hence, by Mostow rigidity, pairwise non-isomorphic. Indeed, if $\Delta$ is non-arithmetic then the same argument applies verbatim, while if $\Delta$ is arithmetic we have to show that for $C_j \triangleleft \Delta$ with $\Delta/C_j \hookrightarrow \sym_r \wr Q_j$ there are only finitely many possibilities for the congruence closures of $C_j$ in $\Delta$. To do this, we only need to note that this is true of the congruence closures of the $C_j$ in \[\Delta_1:= \ker\left(\Delta \to \sym_r\right).\] Indeed, since all the $\Delta_1/C_j$ are abelian, this follows from Lemma \ref{abelian_cong}. Moreover, since these closures contain those in $\Delta$, it is also true of the latter. So we may assume that $C_1, \ldots, C_{b_n}$ are pairwise non-conjugate, with $b_n \ge c_n/a \ge n/a'$ for some $a' \ge 1$ independent of $n$. A priori the $C_j$ may have different indices in $\Delta$. But by Lemma \ref{bigger_quotient} the orders $|\Delta/C_j|$ all divide $r! \cdot n^{lr}$. Let $\delta(N)$ denote the number of divisors of a positive integer $N$. Then, using the classical estimate that for all $\varepsilon > 0$ there exists a constant $K_\varepsilon$ so that \[ \delta(N) \leq K_\varepsilon\cdot N^\varepsilon \] for all $N\in\NN$ (see e.g.
\cite[Proposition 7.12]{dKL}) and the fact that $b_n \gg (r! \cdot n^{rl})^{1/(rl)}$ we see that \[ b_n / \delta(r! \cdot n^{rl}) \xrightarrow[n \to+\infty]{} +\infty. \] Hence by the pigeonhole principle we see that as $n \to +\infty$ an unbounded number of the $C_j$ have the same index in $\Delta$. This finishes the proof of Proposition~\ref{main_normal}. \end{proof}
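The divisor-counting step above can be checked numerically. The following sketch (our own illustration; the function names and the sample values of $r$ and $l$ are not from the paper) computes $\delta(r!\cdot n^{lr})$ from prime factorisations and shows it growing much more slowly than $n$, which is the content of the pigeonhole step.

```python
from collections import Counter
from math import factorial

def prime_factors(n):
    """Prime factorisation of n >= 1 as a Counter {prime: exponent}."""
    f, d = Counter(), 2
    while d * d <= n:
        while n % d == 0:
            f[d] += 1
            n //= d
        d += 1
    if n > 1:
        f[n] += 1
    return f

def num_divisors_of(f):
    """delta(N) computed from the factorisation f of N."""
    out = 1
    for e in f.values():
        out *= e + 1
    return out

# Sample parameters (ours, purely illustrative): delta(r! * n^(lr))
# grows slower than any fixed power of n, while b_n >= n/a' is linear in n.
r, l = 3, 2
for n in (10, 100, 1000):
    f = prime_factors(factorial(r))
    for p, e in prime_factors(n).items():
        f[p] += l * r * e
    print(n, num_divisors_of(f))
```

The printed ratios of divisor count to $n$ shrink as $n$ grows, consistently with $\delta(N) \le K_\varepsilon N^\varepsilon$.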
https://arxiv.org/abs/1807.09861
On distinct finite covers of 3-manifolds
Every closed orientable surface $S$ has the following property: any two connected covers of $S$ of the same degree are homeomorphic (as spaces). In this paper we give a complete classification of compact 3-manifolds with empty or toroidal boundary which have the above property. We also discuss related group-theoretic questions.
https://arxiv.org/abs/1803.07853
On automorphisms of finite $p$-groups
It is proved in [J. Group Theory, {\bf 10} (2007), 859-866] that if $G$ is a finite $p$-group such that $(G,Z(G))$ is a Camina pair, then $|G|$ divides $|\Aut(G)|$. We give a very short and elementary proof of this result.
\section{Introduction} Let $G$ be a finite non-abelian $p$-group. The question ``Does the order of a finite non-cyclic $p$-group of order greater than $p^2$ divide the order of its automorphism group?'' is a well-known problem \cite[Problem 12.77]{maz} in finite group theory. Gasch\"{u}tz \cite{gas} proved that any finite $p$-group of order at least $p^2$ admits a non-inner automorphism of order a power of $p$. It follows that the problem has an affirmative answer for finite $p$-groups with center of order $p$. This immediately answers the problem positively for finite $p$-groups of maximal class. Otto \cite{ott} also gave an independent proof of this result. Fouladi et al. \cite{fou} gave an affirmative answer to the problem for finite $p$-groups of co-class 2. For more details on this problem, one can see the introduction in the paper of Yadav \cite{yad1}. In \cite[Theorem A]{yad1}, Yadav proved that if $G$ is a finite $p$-group such that $(G,Z(G))$ is a Camina pair, then $|G|$ divides $|\Aut(G)|$. He also proved the important result \cite[Corollary 4.4]{yad1} that the group of all class-preserving outer automorphisms is non-trivial for finite $p$-groups $G$ with $(G,Z(G))$ a Camina pair. In this paper, we give different and very short proofs of these results of Yadav using elementary arguments. Let $G$ be a finite $p$-group. Then $(G,Z(G))$ is called a Camina pair if $xZ(G) \subseteq x^G$ for all $x\in G-Z(G)$, where $x^G$ denotes the conjugacy class of $x$ in $G$. In particular, if $(G,G')$ is a Camina pair, then $G$ is called a Camina $p$-group. \section{Proofs} We shall need the following lemma, which is a simple modification of a lemma of Alperin \cite[Lemma 3]{alp}. \begin{lm} Let $G$ be any group and $B$ be a central subgroup of $G$ contained in a normal subgroup $A$ of $G$. Then the group $\Aut_{A}^{B}(G)$ of all automorphisms of $G$ that induce the identity on both $A$ and $G/B$ is isomorphic to $\mathrm{Hom}(G/A,B)$. 
\end{lm} \begin{thm} Let $G$ be a finite $p$-group such that $(G,Z(G))$ is a Camina pair. Then $|G|$ divides $|\Aut(G)|.$ \end{thm} \begin{proof} Observe that $Z(G)\le G'\le \Phi(G)$ and, therefore, $Z(G)\le Z(M)$ for every maximal subgroup $M$ of $G$. Suppose that $Z(G)<Z(M_1)$ for some maximal subgroup $M_1$ of $G$. Let $G=M_1\langle g_1\rangle$, where $g_1\in G-M_1$ and $g_{1}^{p}\in M_1$. Let $g\in Z(M_1)-Z(G)$. Then \[|Z(G)|\le |[g, G]|= |[g, M_1\langle g_1\rangle]|=|[g,\langle g_1\rangle]|\le p\] implies that $|Z(G)|=p.$ The result therefore follows by Gasch\"{u}tz \cite{gas}. We therefore suppose that $Z(G)=Z(M)$ for every maximal subgroup $M$ of $G$. We prove that $C_G(M)\le M$. Assume that there exists $g_0\in C_G(M_0)-M_0$ for some maximal subgroup $M_0$ of $G$. Then $G=M_0\langle g_0\rangle$ and thus $g_0\in Z(G)$, because $g_0$ commutes with $M_0$. This is a contradiction because $Z(G)\le \Phi(G)$. Therefore $C_G(M)\le M$ for every maximal subgroup $M$ of $G$. Consider the group $\Aut_{M}^{Z(G)}(G)$, which is isomorphic to $\Hom(G/M,Z(G))$ by Lemma 2.1. It follows that $\Aut_{M}^{Z(G)}(G)$ is non-trivial. Let $\alpha\in \Aut_{M}^{Z(G)}(G)\cap (\inn(G))$. Then $\alpha$ is an inner automorphism induced by some $g\in C_G(M)=Z(M)$. Since $Z(G)=Z(M)$, $\alpha$ is trivial. It follows that $$|(\Aut_{M}^{Z(G)}(G))(\inn(G))|=|(\Aut_{M}^{Z(G)}(G))||(\inn(G))|=|Z(G)||G/Z(G)|=|G|,$$ because $Z(G)$ is elementary abelian by Theorem 2.2 of \cite{mac}. This completes the proof. \end{proof} \begin{cor} Let $G$ be a finite Camina $p$-group. Then $|G|$ divides $|\Aut(G)|$. \end{cor} \begin{proof} It is a well-known result \cite{dar} that the nilpotence class of $G$ is at most 3. Also, it follows from \cite[Lemma 2.1, Theorem 5.2, Corollary 5.3]{mac} that $(G,Z(G))$ is a Camina pair. The result therefore follows from Theorem 2.2. \end{proof} An automorphism $\alpha$ of $G$ is called a class-preserving automorphism of $G$ if $\alpha(x)\in x^G$ for each $x\in G$. 
The group of all class-preserving automorphisms of $G$ is denoted by $\Aut_c(G)$. An automorphism $\beta$ of $G$ is called a central automorphism if $x^{-1}\beta(x)\in Z(G)$ for each $x\in G$. It is easy to see that if $(G,Z(G))$ is a Camina pair, then the group of all central automorphisms fixing $Z(G)$ element-wise is contained in $\Aut_c(G)$. \begin{rem} {\em It follows from the proof of Theorem 2.2 that if $G$ is a finite $p$-group such that $(G,Z(G))$ is a Camina pair and $|Z(G)|\ge p^2$, then} $$|\Aut_c(G)|\ge|(\Aut_{M}^{Z(G)}(G))(\inn(G))|=|G|.$$ \end{rem} Thus, in particular, we obtain the following result of Yadav \cite{yad1}. \begin{cor}[{\cite[Corollary 4.4]{yad1}}] Let $G$ be a finite $p$-group such that $(G, Z(G))$ is a Camina pair and $|Z(G)|\ge p^2$. Then $\Aut_c(G)/\inn(G)$ is non-trivial. \end{cor} The following example shows that Remark 2.4 is not true if $|Z(G)|=p$. \begin{expl} {\em Consider a finite $p$-group $G$ of nilpotence class 2 such that $(G,Z(G))$ is a Camina pair and $|Z(G)|=p$. Since $\cl(G)=2$, $\exp(G/Z(G))=\exp(G')$ and hence $G'=Z(G)=\Phi(G)$. Let $|G|=p^n$, where $n\ge 3$, and let $\lbrace x_1, x_2, \ldots, x_{n-1}\rbrace$ be a minimal generating set of $G$. Then } $$|\Aut_c(G)|\le \prod_{i=1}^{n-1} |x_i^G|=p^{n-1}=|G/Z(G)|.$$ \end{expl} \noindent {\bf Acknowledgment}: Research of the first author is supported by Thapar Institute of Engineering and Technology and also by SERB, DST grant no. MTR/2017/000581. Research of the second author is supported by SERB, DST grant no. EMR/2016/000019.
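Theorem 2.2 can be sanity-checked on the smallest Camina pair, the quaternion group $Q_8$ with $Z(Q_8)=\{\pm 1\}$, where $|\Aut(Q_8)|=24$ is divisible by $|Q_8|=8$. The brute-force computation below is our own illustration, not part of the paper; the element encoding is ours.

```python
from itertools import permutations

# Q8 encoded as 0..7 = +1, -1, +i, -i, +j, -j, +k, -k.
def decode(x): return (1 if x % 2 == 0 else -1, x // 2)
def encode(s, u): return 2 * u + (0 if s == 1 else 1)

# Products of the basic units 1, i, j, k (indices 0..3), as (sign, unit).
UNIT = {(0, u): (1, u) for u in range(4)}
UNIT.update({(u, 0): (1, u) for u in range(4)})
UNIT.update({(1, 1): (-1, 0), (2, 2): (-1, 0), (3, 3): (-1, 0),
             (1, 2): (1, 3), (2, 1): (-1, 3), (2, 3): (1, 1),
             (3, 2): (-1, 1), (3, 1): (1, 2), (1, 3): (-1, 2)})

def mul(x, y):
    (sx, ux), (sy, uy) = decode(x), decode(y)
    s, u = UNIT[(ux, uy)]
    return encode(sx * sy * s, u)

inv = [next(y for y in range(8) if mul(x, y) == 0) for x in range(8)]

# (Q8, Z(Q8)) is a Camina pair: x Z(G) lies inside x^G for x outside Z.
for x in range(2, 8):
    conj_class = {mul(mul(g, x), inv[g]) for g in range(8)}
    assert {mul(x, z) for z in (0, 1)} <= conj_class

# Brute-force |Aut(Q8)|: bijections fixing 1 that respect multiplication.
count = 0
for p in permutations(range(1, 8)):
    f = (0,) + p
    if all(f[mul(x, y)] == mul(f[x], f[y]) for x in range(8) for y in range(8)):
        count += 1
print(count, count % 8 == 0)  # 24 True: |Q8| divides |Aut(Q8)|
```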
https://arxiv.org/abs/1405.7582
Tame and wild refinement monoids
The class of refinement monoids (abelian monoids satisfying the Riesz refinement property) is subdivided into those which are tame, defined as being an inductive limit of finitely generated refinement monoids, and those which are wild, i.e., not tame. It is shown that tame refinement monoids enjoy many positive properties, including separative cancellation ($2x=2y=x+y \implies x=y$) and multiplicative cancellation with respect to the algebraic ordering ($mx\le my \implies x\le y$). In contrast, examples are constructed to exhibit refinement monoids which enjoy all the mentioned good properties but are nonetheless wild.
\section*{Introduction} The class of \emph{refinement monoids} -- commutative monoids satisfying the Riesz refinement property -- has been extensively studied over the past few decades, in connection with the classification of countable Boolean algebras (e.g., \cite{Dob82, Ket, Pierce}) and the non-stable K-theory of rings and C*-algebras (e.g., \cite{Areal, AGOP, AMP, GPW, PW06}), as well as for its own sake (e.g., \cite{Brook01, Dob, Dobb84, Gril76, W98}). Ketonen proved in \cite{Ket} that the set $BA$ of isomorphism classes of countable Boolean algebras, with the operation induced from direct products, is a refinement monoid, and that $BA$ contains all countable commutative monoid phenomena in that every countable commutative monoid embeds into $BA$. An important invariant in non-stable K-theory is the commutative monoid $V(R)$ associated to any ring $R$, consisting of the isomorphism classes of finitely generated projective (left, say) $R$-modules, with the operation induced from direct sum. If $R$ is a (von Neumann) regular ring or a C*-algebra with real rank zero (more generally, an exchange ring), then $V(R)$ is a refinement monoid (e.g., \cite[Corollary 1.3, Theorem 7.3]{AGOP}). The \emph{realization problem} asks which refinement monoids appear as a $V(R)$ for $R$ in one of the above-mentioned classes. Wehrung \cite{W98IsraelJ} constructed a conical refinement monoid of cardinality $\aleph_2$ which is not isomorphic to $V(R)$ for any regular ring $R$, but it is an open problem whether every countable conical refinement monoid can be realized as $V(R)$ for some regular $R$. One observes that large classes of realizable refinement monoids satisfy many desirable properties that do not hold in general, in marked contrast to the ``universally bad'' refinement monoid $BA$, which exhibits any property that can be found in a commutative monoid, such as elements $a$ and $b$ satisfying $2a=2b$ while $a\ne b$, or satisfying $a=a+2b$ while $a\ne a+b$. 
Moreover, the largest classes of realizable refinement monoids consist of inductive limits of simple ingredients, such as finite direct sums of copies of ${\mathbb{Z}^+}$ or $\{0,\infty\}$. These monoids are more universally realizable in the sense that they can be realized as $V(R)$ for regular algebras $R$ over any prescribed field. By contrast, examples are known of countable refinement monoids which are realizable only for regular algebras over some countable fields. These examples are modelled on the celebrated construction of Chuang and Lee \cite{CL} (see \cite[Section 4]{Areal}). These considerations lead us to separate the class of refinement monoids into subclasses of \emph{tame} and \emph{wild} refinement monoids, where the tame ones are the inductive limits of finitely generated refinement monoids and the rest are wild. Existing inductive limit theorems allow us to identify several large classes of tame refinement monoids, such as unperforated cancellative refinement monoids; refinement monoids in which $2x=x$ for all $x$; and the \emph{graph monoids} introduced in \cite{AMP, AG}. We prove that tame refinement monoids satisfy a number of desirable properties, such as separative cancellation and lack of perforation (see \S\ref{background} for unexplained terms). Tame refinement monoids need not satisfy full cancellation, as $\{0,\infty\}$ already witnesses, but we show that among tame refinement monoids, stable finiteness ($x+y=x \implies y=0$) implies cancellativity. The collection of good properties enjoyed by tame refinement monoids known so far does not, as yet, characterize tameness. We construct two wild refinement monoids (one a quotient of the other) which are conical, stably finite, antisymmetric, separative, and unperforated; moreover, one of them is also archimedean. These monoids will feature in \cite{AGreal}, where an investigation of the subtleties of the realization problem will be carried out. 
In particular, we will show that one of the two monoids (the quotient monoid) is realizable by a von Neumann regular algebra over any field, but the other is realizable only by von Neumann regular algebras over {\it countable\/} fields. We will also develop in \cite{AGreal} a connection with the Atiyah problem for the lamplighter group. \section{Refinement monoids} \subsection{Background and notation} \label{background} All monoids in this paper will be commutative, written additively, and homomorphisms between them will be assumed to be monoid homomorphisms. Categorical notions, such as inductive limits, will refer to the category of commutative monoids. The \emph{kernel} of a monoid homomorphism $\phi: A\rightarrow B$ is the congruence $$\ker(\phi) := \{ (a,a') \in A^2 \mid \phi(a) = \phi(a') \} .$$ We write ${\mathbb{Z}^+}$ for the additive monoid of nonnegative integers, and ${\mathbb{N}}$ for the additive semigroup of positive integers. The symbol $\sqcup $ stands for the disjoint union of sets. A monoid $M$ is \emph{conical} if $0$ is the only invertible element of $M$, that is, elements $x,y \in M$ can satisfy $x+y=0$ only if $x=y=0$. Several levels of cancellation properties will be considered, as follows. First, $M$ is \emph{stably finite} if $x+y=x$ always implies $y=0$, for any $x,y \in M$. Second, $M$ is \emph{separative} provided $2x=2y=x+y$ always implies $x=y$, for any $x,y \in M$. There are a number of equivalent formulations of this property, as, for instance, in \cite[Lemma 2.1]{AGOP}. Further, $M$ is \emph{strongly separative} if $2x=x+y$ always implies $x=y$. Finally, $M$ is \emph{cancellative} if it satisfies full cancellation: $x+y=x+z$ always implies $y=z$, for any $x,y,z\in M$. The \emph{algebraic ordering} (or \emph{minimal ordering}) in $M$ is the translation-invariant pre-order given by the existence of subtraction: elements $x,y \in M$ satisfy $x \le y$ if and only if there is some $z \in M$ such that $x+z=y$. 
If $M$ is conical and stably finite, this relation is a partial order on $M$. An \emph{order-unit} in $M$ is any element $u \in M$ such that all elements of $M$ are bounded above by multiples of $u$, that is, for any $x\in M$, there exists $m \in {\mathbb{Z}^+}$ such that $x\le mu$. The monoid $M$ is called \emph{unperforated} if $mx \le my$ always implies $x \le y$, for any $m \in {\mathbb{N}}$ and $x,y \in M$. The monoid $M$ is said to be \emph{archimedean} if elements $x,y\in M$ satisfy $nx \le y$ for all $n\in{\mathbb{N}}$ only when $x$ is invertible. When $M$ is conical, this condition implies stable finiteness. An \emph{o-ideal} of $M$ is any submonoid $J$ of $M$ which is hereditary with respect to the algebraic ordering, that is, whenever $x \in M$ and $y \in J$ with $x\le y$, it follows that $x\in J$. (Note that an o-ideal is not an ideal in the sense of semigroup theory.) The hereditary condition is equivalent to requiring that $x+z \in J$ always implies $x,z\in J$, for any $x,z\in M$. Given an o-ideal $J$ in $M$, we define the \emph{quotient monoid} $M/J$ to be the monoid $M/{\equiv}_J$, where ${\equiv}_J$ is the congruence on $M$ defined as follows: $x \equiv_J y$ if and only if there exist $a,b\in J$ such that $x+a = y+b$. Quotient monoids $M/J$ are always conical. Separativity and unperforation pass from a monoid $M$ to any quotient $M/J$ \cite[Lemma 4.3]{AGOP}, but stable finiteness and the archimedean property do not, even in refinement monoids (Remark \ref{MMbarexamples}). The monoid $M$ is called a \emph{refinement monoid} provided it satisfies the \emph{Riesz refinement property}: whenever $x_1,x_2,y_1,y_2 \in M$ with $x_1+x_2 = y_1+y_2$, there exist $z_{ij} \in M$ for $i,j=1,2$ with $x_i = z_{i1}+z_{i2}$ for $i=1,2$ and $y_j = z_{1j}+z_{2j}$ for $j = 1,2$. 
It can be convenient to record the last four equations in the format of a \emph{refinement matrix} $$\bordermatrix{ &y_1&y_2\cr x_1&z_{11}&z_{12}\cr x_2&z_{21}&z_{22}\cr},$$ where the notation indicates that the sum of each row of the matrix equals the element labelling that row, and similarly for column sums. By induction, analogous refinements hold for equalities between sums with more than two terms. A consequence of refinement is the \emph{Riesz decomposition property}: whenever $x,y_1,\dots,y_n \in M$ with $x \le y_1+ \cdots+ y_n$, there exist $x_1,\dots,x_n \in M$ such that $x = x_1+ \cdots+ x_n$ and $x_i \le y_i$ for all $i$. The quotient of any refinement monoid by an o-ideal is a conical refinement monoid (e.g., \cite[p.~476]{Dob}, \cite[Proposition 7.8]{Brook97}). The \emph{Riesz interpolation property} in $M$ is the following condition: Whenever $x_1,x_2,y_1,y_2 \in M$ with $x_i\le y_j$ for $i,j=1,2$, there exists $z\in M$ such that $x_i\le z\le y_j$ for $i,j=1,2$. If $M$ is cancellative and conical, so that it is the positive cone of a partially ordered abelian group, the Riesz refinement, decomposition, and interpolation properties are all equivalent (e.g., \cite[Proposition 2.1]{poagi}). In general, however, the only relation is that refinement implies decomposition. For a refinement monoid, unperforation implies separativity. This follows immediately from \cite[Theorem 1]{Chen09}, and it was noted independently in \cite[Corollary 2.4]{Wunpub}. An element $p \in M$ is \emph{prime} if $p\le x+y$ always implies $p\le x$ or $p\le y$, for any $x,y\in M$. This follows the definition in \cite{Brook97, Brook01} as opposed to the one in \cite{APW}, which requires prime elements to be additionally non-invertible. The monoid $M$ is called \emph{primely generated} if every element of $M$ is a sum of prime elements. In case $M$ is conical, this is equivalent to the definition used in \cite{APW}. 
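For small monoids, the Riesz refinement property and the cancellation properties of \S\ref{background} can all be verified by brute force from the addition table. The following sketch is our own illustration (the function `check` and the encoding are not from the paper); it is applied to the two-element monoid $\{0,\infty\}$, with $1$ playing the role of $\infty$.

```python
from itertools import product

def check(add):
    """Brute-force Riesz refinement, separativity and cancellativity
    for a finite commutative monoid given by its addition table."""
    n = len(add)
    elems = range(n)

    def refines(x1, x2, y1, y2):
        # Search for a 2x2 refinement matrix (z11, z12, z21, z22)
        # with row sums x1, x2 and column sums y1, y2.
        return any(add[z11][z12] == x1 and add[z21][z22] == x2 and
                   add[z11][z21] == y1 and add[z12][z22] == y2
                   for z11, z12, z21, z22 in product(elems, repeat=4))

    refinement = all(refines(x1, x2, y1, y2)
                     for x1, x2, y1, y2 in product(elems, repeat=4)
                     if add[x1][x2] == add[y1][y2])
    # 2x = 2y = x + y  ==>  x = y
    separative = all(x == y for x in elems for y in elems
                     if add[x][x] == add[y][y] == add[x][y])
    # x + y = x + z  ==>  y = z
    cancellative = all(y == z for x, y, z in product(elems, repeat=3)
                       if add[x][y] == add[x][z])
    return refinement, separative, cancellative

# The monoid {0, oo}: 0 + 0 = 0 and every other sum is oo.
table = [[0, 1],
         [1, 1]]
print(check(table))  # (True, True, False)
```

As expected, $\{0,\infty\}$ has refinement and is separative but not cancellative; running the same check on the table of $\ZZ/2$ as a group returns all three properties.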
An element $x \in M$ is \emph{irreducible} if $x$ is not invertible and $x=a+b$ only when $a$ or $b$ is invertible, for any $a,b\in M$. The following facts are likely known, but we did not locate any reference. \begin{lemma} \label{cancelirredelement} In a conical refinement monoid $M$, all irreducible elements cancel from sums. \end{lemma} \begin{proof} Let $a,b,c\in M$ with $a+b=a+c$ and $a$ irreducible. There is a refinement $$\bordermatrix{ &a&b\cr a&a'&b'\cr c&c'&d'}.$$ Since $a=a'+b'$, either $a'=0$ or $b'=0$. Likewise, either $a'=0$ or $c'=0$. If $a'=0$, we get $a=b'=c'$, and thus $b=a+d'=c$. If $a'\ne 0$, then $b'=c'=0$, and thus $b=d'=c$. \end{proof} \begin{proposition} \label{irredelementgenideal} Let $(a_i)_{i\in I}$ be a family of distinct irreducible elements in a conical refinement monoid $M$. {\rm (a)} The submonoid $J := \sum_{i\in I} {\mathbb{Z}^+} a_i$ is an o-ideal of $M$. {\rm (b)} The map $f:\bigoplus_{i\in I} {\mathbb{Z}^+} \rightarrow J$ sending $(m_i)_{i\in I} \mapsto \sum_{i\in I} m_ia_i$ is an isomorphism. \end{proposition} \begin{proof} (a) Let $b\in M$ and $c\in J$ with $b\le c$. If $c=0$, then $b=0$ because $M$ is conical, whence $b\in J$. Assume that $c \ne 0$, and write $c= \sum_{l=1}^n a_{i_l}$ for some $i_l\in I$. By Riesz decomposition, $b= \sum_{l=1}^n b_l$ for some $b_l \in M$ with $b_l\le a_{i_l}$. By the irreducibility of the $a_{i_l}$, each $b_l= \varepsilon_l a_{i_l}$ for some $\varepsilon_l \in \{0,1\}$. Thus, $b= \sum_{l=1}^n \varepsilon_l a_{i_l} \in J$. (b) By definition, $f$ is surjective. To see that $f$ is injective, it suffices to prove the following: \begin{enumerate} \item[$(*)$] If $a_1,\dots,a_n$ are distinct irreducible elements in $M$ and $\sum_{i=1}^n m_ia_i= \sum_{i=1}^n m'_ia_i$ for some $m_i,m'_i\in {\mathbb{Z}^+}$, then $m_i=m'_i$ for all $i$. \end{enumerate} We proceed by induction on $t := \sum_{i=1}^n (m_i+m'_i)$. If $t=0$, then $m_i=0=m'_i$ for all $i$. Now let $t>0$. 
Without loss of generality, $m'_1>0$. Hence, $a_1\le \sum_{i=1}^n m_ia_i$, so $a_1= \sum_{i=1}^n \sum_{l=1}^{m_i} b_{il}$ with each $b_{il} \le a_i$. Also, each $b_{il} \le a_1$. Some $b_{il} \ne 0$, whence $a_i=b_{il}= a_1$, yielding $i=1$ and $m_1>0$. Cancel $a_1$ from $\sum_{i=1}^n m_ia_i= \sum_{i=1}^n m'_ia_i$, leaving $$(m_1-1)a_1+ \sum_{i=2}^n m_ia_i= (m'_1-1)a_1+ \sum_{i=2}^n m'_ia_i.$$ By induction, $m_1-1= m'_1-1$ and $m_i=m'_i$ for all $i\ge2$, yielding $(*)$. \end{proof} \begin{definition} \label{def:socle} Let $M$ be a conical refinement monoid. Then the \emph{pedestal} of $M$, denoted by $\ped (M)$, is the submonoid of $M$ generated by all the irreducible elements of $M$. By Proposition \ref{irredelementgenideal}, $\ped (M)$ is an o-ideal of $M$. With an eye on non-stable K-theory, it would seem reasonable to call the submonoid defined above the socle of $M$. This is due to the fact that if $R$ is a regular ring (or just a semiprime exchange ring), then $V(\soc (R_R))\cong \ped (V(R))$, where $\soc (R_R)$ is the socle of the right $R$-module $R_R$ in the sense of module theory. However, the concept of the socle of a semigroup (e.g., \cite[Section 6.4]{CP}) is entirely different from our concept of a pedestal. The latter concept is designed for and works well in conical refinement monoids, but it may need modification for use with non-refinement monoids. \end{definition} \section{Tame refinement monoids} \label{tamerefmonoids} \subsection{Tameness and wildness} \label{tamewild} \begin{definition} \label{def:wild-tamemonoids} Let $M$ be a refinement monoid. We say that $M$ is \emph{tame} in case $M$ is an inductive limit for some inductive system of finitely generated refinement monoids, and that $M$ is \emph{wild} otherwise. \end{definition} \begin{examples} \label{examples:tame} Beyond finitely generated refinement monoids themselves, several classes of tame refinement monoids can be identified. 
{\bf 1.} Every unperforated cancellative refinement monoid is tame. This follows from theorems of Grillet \cite{Gril} and Effros-Handelman-Shen \cite{EHS}. See Theorem \ref{tamecriterion:stablyfinite} below for details. {\bf 2.} Any refinement monoid $M$ such that $2x=x$ for all $x\in M$ is tame. Recall that (upper) semilattices correspond exactly to semigroups satisfying the identity $2x=x$ for all elements $x$, where $\vee = +$. A semilattice is called \emph{distributive} if it satisfies Riesz decomposition \cite[p.~117]{Gr71}, and it is well known that this is equivalent to the semilattice satisfying Riesz refinement. Pudl\'ak proved in \cite[Fact 4, p.~100]{Pud} that every distributive semilattice equals the directed union of its finite distributive subsemilattices. In monoid terms, any refinement monoid $M$ satisfying $2x=x$ for all $x\in M$ is the directed union of those of its finite submonoids which satisfy refinement. Thus, such an $M$ is tame. In fact, any refinement monoid of this type is an inductive limit of finite boolean monoids (meaning semilattices of the form ${\mathbf 2}^n$ for $n\in{\mathbb{N}}$) \cite[Theorem 6.6, Corollary 6.7]{GW01}. {\bf 3.} Write $A\sqcup\{0\}$ for the monoid obtained from an abelian group $A$ by adjoining a new zero element. Any such monoid has refinement (e.g., \cite[Corollary 5]{Dobb84}). Inductive limits of monoids of the form $\bigoplus_{i=1}^k (A_i\sqcup\{0\})$ with each $A_i$ finite cyclic were characterized in \cite[Theorem 6.4]{GPW}, and those for which the $A_i$ can be arbitrary cyclic groups were characterized in \cite[Theorem 6.6]{PW06}. (We do not list the conditions here.) All these monoids are tame refinement monoids. {\bf 4.} A commutative monoid $M$ is said to be {\it regular} in case $2x\le x$ for all $x\in M$. It has been proved by Pardo and Wehrung \cite[Theorem 4.4]{PWunp} that every regular conical refinement monoid is a direct limit of finitely generated regular conical refinement monoids. 
In particular, these monoids are tame. {\bf 5.} A monoid $M(E)$ associated with any directed graph $E$ was defined for so-called row-finite graphs in \cite[p.~163]{AMP}, and then in general in \cite[p.~196]{AG}. These monoids have refinement by \cite[Proposition 4.4]{AMP} and \cite[Corollary 5.16]{AG}, and we prove below that they are tame (Theorem \ref{MEgeneral}). {\bf 6.} In \cite[Theorem 0.1]{AP}, Pardo and the first-named author proved that every primely generated conical refinement monoid is tame, thus resolving an open problem from the initial version of the present paper. \end{examples} The existence of wild refinement monoids follows, for instance, from the next theorem. Other examples will appear below. The following theorem and many other consequences of tameness rely on Brookfield's result that every finitely generated refinement monoid is primely generated \cite[Corollary 6.8]{Brook01}, together with properties of primely generated refinement monoids established in \cite{Brook97, Brook01}. \begin{theorem} \label{tame:sepunperf} Every tame refinement monoid is separative, unperforated, and satisfies the Riesz interpolation property. \end{theorem} \begin{proof} These properties obviously pass to inductive limits. For finitely generated refinement monoids, the first two properties were established in \cite[Theorem 4.5, Corollaries 5.11, 6.8]{Brook01}, and the third follows from \cite[Propositions 9.15 and 11.8]{Brook97}. \end{proof} By \cite[Theorem 1]{Gril} or \cite[Theorem 5.1]{Dob}, any monoid can be embedded in a refinement monoid. More strongly, for any monoid $M'$, there exists an embedding $\phi: M'\rightarrow M$ into a refinement monoid $M$ such that $\phi$ is also an embedding for the algebraic ordering \cite[p.~112]{W98}. If $M'$ is either perforated or not separative, then $M$ has the same property, and so by Theorem \ref{tame:sepunperf}, $M$ cannot be tame. 
Explicit examples of perforated refinement monoids (even cancellative ones) were constructed in \cite[Examples 11.17, 15.9]{Brook97}; these provide further wild monoids. Wild refinement monoids which are unperforated and separative also exist. One example is the monoid to which \cite{MDS} is devoted; others will be constructed below (see Theorems \ref{Mwild} and \ref{Mbarwild}). \begin{remark} \label{conicaltame} A conical tame refinement monoid $M$ must be an inductive limit of conical finitely generated refinement monoids, as follows. Write $M=\varinjlim _{i\in I} M_i$ for an inductive system of finitely generated refinement monoids $M_i$, with transition maps $\psi_{ij}: M_i \rightarrow M_j$ for $i\le j$ and limit maps $\psi^i: M_i \rightarrow M$. For each $i\in I$, the group of units $U(M_i)$ is an o-ideal of $M_i$, and the quotient $\overline{M}_i := M_i/U(M_i)$ is a conical finitely generated refinement monoid. The maps $\psi_{ij}$ and $\psi^i$ induce homomorphisms $\ol{\psi}_{ij} : \overline{M}_i \rightarrow \overline{M}_j$ and $\ol{\psi}^{\,i} : \overline{M}_i \rightarrow M$, and the monoids $\overline{M}_i$ together with the transition maps $\ol{\psi}_{ij}$ form an inductive system. It is routine to check that $M$ together with the maps $\ol{\psi}^{\,i}$ is an inductive limit for this system. \end{remark} \begin{proposition} \label{prop:tame-closed-under-quotients} Let $M$ be a tame refinement monoid and let $J$ be an o-ideal of $M$. Then both $J$ and $M/J$ are tame refinement monoids. \end{proposition} \begin{proof} Write $M=\varinjlim _{i\in I} M_i$ for an inductive system of finitely generated refinement monoids $M_i$, with transition maps $\psi_{ij}: M_i \rightarrow M_j$ for $i\le j$ and limit maps $\psi^i: M_i \rightarrow M$. For each $i\in I$, the set $J_i := (\psi^i)^{-1}(J)$ is an o-ideal of $M_i$, and the quotient $\overline{M}_i := M_i/J_i$ is a (conical) finitely generated refinement monoid. 
Observe that the maps $\psi_{ij}$ and $\psi^i$ induce homomorphisms $\ol{\psi}_{ij} : \overline{M}_i \rightarrow \overline{M}_j$ and $\ol{\psi}^{\,i} : \overline{M}_i \rightarrow M/J$, and the monoids $\overline{M}_i$ together with the transition maps $\ol{\psi}_{ij}$ form an inductive system. A routine check verifies that $M/J$ together with the maps $\ol{\psi}^{\,i}$ is an inductive limit for this system. Therefore $M/J$ is tame. As is also routine, $J$ together with the restricted maps $\psi^i|_{J_i}$ is an inductive limit for the inductive system consisting of the $J_i$ and the maps $\psi_{ij}|_{J_i} : J_i \rightarrow J_j$. Each $J_i$ is a refinement monoid, and once we check that it is finitely generated, we will have shown that $J$ is tame. Thus, it just remains to verify the following fact: \begin{enumerate} \item[$\bullet$] If $N$ is a finitely generated monoid and $K$ an o-ideal of $N$, then $K$ is a finitely generated monoid. \end{enumerate} We may assume that $K$ is nonzero. Let $\{x_1,\dots,x_n\}$ be a finite set of generators for $N$. After permuting the indices, we may assume that the $x_i$ which lie in $K$ are exactly $x_1,\dots,x_m$, for some $m\le n$. Given $x \in K$, write $x = a_1x_1+ \cdots+ a_nx_n$ for some $a_i \in {\mathbb{Z}^+}$. Whenever $a_i>0$, we have $x_i \le x$ and so $x_i \in K$, whence $i \le m$. Consequently, $x = a_1x_1+ \cdots+ a_mx_m$, proving that $K$ is generated by $x_1,\dots,x_m$. \end{proof} \begin{theorem} \label{tamecriterion} A commutative monoid $M$ is a tame refinement monoid if and only if \begin{enumerate} \item[$(\dagger)$] For each finitely generated submonoid $M' \subseteq M$, the inclusion map $M' \rightarrow M$ factors through a finitely generated refinement monoid. 
\end{enumerate} \end{theorem} \begin{proof} By \cite[Lemma 4.1, Remark 4.3]{GPW}, $M$ is a direct limit of finitely generated refinement monoids (and thus a tame refinement monoid) if and only if the following conditions hold: \begin{enumerate} \item For each $x\in M$, there exist a finitely generated refinement monoid $N$ and a homomorphism $\phi: N\rightarrow M$ such that $x\in \phi(N)$. \item For each finitely generated refinement monoid $N$, any homomorphism $\phi:N \rightarrow M$ equals the composition of homomorphisms $\psi:N \rightarrow N'$ and $\phi': N'\rightarrow M$ such that $N'$ is a finitely generated refinement monoid and $\ker\phi= \ker\psi$. \end{enumerate} Condition (1) is always satisfied, since for each $x\in M$, there is a homomorphism $\phi: {\mathbb{Z}^+} \rightarrow M$ such that $\phi(1)= x$. Assume first that $(\dagger)$ holds, and let $\phi:N \rightarrow M$ be a homomorphism with $N$ a finitely generated refinement monoid. By $(\dagger)$, the inclusion map $\phi(N) \rightarrow M$ is a composition of homomorphisms $\theta: \phi(N) \rightarrow N'$ and $\phi': N'\rightarrow M$ with $N'$ a finitely generated refinement monoid. Then $\phi= \phi' \theta \phi_0$, where $\phi_0 : N \rightarrow \phi(N)$ is $\phi$ with codomain restricted to $\phi(N)$. Since $\theta$ is injective, it is clear that $\ker\phi= \ker\phi_0= \ker \theta\phi_0$. Conversely, assume that (2) holds, and let $M'$ be a finitely generated submonoid of $M$. Choose elements $x_1,\dots,x_n$ that generate $M'$, set $N := ({\mathbb{Z}^+})^n$, and define $\phi: N\rightarrow M$ by the rule $\phi(m_1,\dots,m_n) = \sum_i m_ix_i$. This provides us with a finitely generated refinement monoid $N$ and a homomorphism $\phi: N\rightarrow M$ such that $M' = \phi(N)$. Write $\phi= \iota\phi_0$ where $\phi_0: N\rightarrow M'$ is $\phi$ with codomain restricted to $M'$ and $\iota: M' \rightarrow M$ is the inclusion map. Now let $\psi$, $\phi'$, and $N'$ be as in (2). 
Since $\ker\phi_0= \ker\phi= \ker\psi$, there is a unique homomorphism $\psi': M' \rightarrow N'$ such that $\psi'\phi_0 = \psi$. Then $\phi'\psi'\phi_0= \phi'\psi= \phi= \iota\phi_0$ and so $\phi'\psi'= \iota$. Thus $\iota$ factors through $N'$, as required. \end{proof} \begin{proposition} \label{classoftame} The class of tame refinement monoids is closed under direct sums, inductive limits, and retracts. \end{proposition} \begin{proof} Closure under inductive limits follows from \cite[Corollary 4.2, Remark 4.3]{GPW}, and then closure under retracts follows as in \cite[Lemma 4.4]{GPW}. Closure under finite direct sums is clear, and closure under arbitrary direct sums follows because such direct sums are inductive limits of finite direct sums, or by an application of Theorem \ref{tamecriterion}. \end{proof} \begin{definitions} \label{primeandprimitive} A monoid $M$ is \emph{antisymmetric} if the algebraic ordering on $M$ is antisymmetric (and thus is a partial order). In particular, antisymmetric monoids are conical, and any stably finite conical monoid is antisymmetric. If $M$ is an antisymmetric refinement monoid, its prime elements coincide with the \emph{pseudo-indecomposable} elements of \cite[p.~845]{Pierce}. A \emph{primitive monoid} \cite[Definition 3.4.1]{Pierce} is any antisymmetric, primely generated refinement monoid. Pierce characterized these monoids as follows. \end{definitions} \begin{proposition} \label{primitivepresentation} {\rm\cite[Proposition 3.5.2]{Pierce}} The primitive monoids are exactly the commutative monoids with presentations of the form \begin{equation} \label{Dtriangle} \langle D \mid e+f=f \;\, \text{for all} \;\, e,f\in D \;\, \text{with} \;\, e\vartriangleleft f \rangle, \end{equation} where $D$ is a set and $\vartriangleleft$ is a transitive, antisymmetric relation on $D$. 
If a monoid $M$ has such a presentation, then $D$ equals the set of nonzero prime elements of $M$, and elements $e,f\in D$ satisfy $e\vartriangleleft f$ if and only if $e+f=f$. \end{proposition} Given any set $D$ equipped with a transitive, antisymmetric relation $\vartriangleleft$, let us write $M(D,\vartriangleleft)$ for the monoid with presentation \eqref{Dtriangle}. Since antisymmetric monoids are conical, primitive monoids are tame by \cite[Theorem 0.1]{AP}. However, the result for primitive monoids is immediate from Pierce's results, as follows. \begin{theorem} \label{primitive=>tame} Every primitive monoid is a tame refinement monoid. \end{theorem} \begin{proof} Let $M$ be a primitive monoid, with a presentation as in Proposition \ref{primitivepresentation}. If $D$ is finite, then $M$ is finitely generated and we are done, so assume that $D$ is infinite. Let $\mathcal{D}$ be the collection of nonempty finite subsets of $D$, partially ordered by inclusion. For $X\in \mathcal{D}$, let $\vartriangleleft_X$ denote the restriction of $\vartriangleleft$ to $X$, and set $M_X := M(X,\vartriangleleft_X)$. Since $\vartriangleleft_X$ is transitive and antisymmetric, $M_X$ is primitive. In particular, $M_X$ is a refinement monoid, and it is finitely generated by construction. For any $X,Y \in \mathcal{D}$ with $X\subseteq Y$, the inclusion map $X\rightarrow Y$ extends uniquely to a homomorphism $\psi_{X,Y}: M_X \rightarrow M_Y$. The collection of monoids $M_X$ and transition maps $\psi_{X,Y}$ forms an inductive system, and $M$ is an inductive limit for this system. Therefore $M$ is tame. \end{proof} \subsection{Further consequences of tameness} \label{tameimplies} Any monoid $M$ has a maximal antisymmetric quotient, namely $M/{\equiv}$ where $\equiv$ is the congruence defined as follows: $x \equiv y$ if and only if $x\le y\le x$. 
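As a computational aside (an illustration only, not part of the development), Pierce's presentation \eqref{Dtriangle} makes calculation in a primitive monoid $M(D,\vartriangleleft)$ mechanical: in a formal sum of primes, every prime absorbed by another prime present in the support can be deleted, and any prime $d$ with $d\vartriangleleft d$ can be capped at multiplicity one. A sketch, with the relation supplied as a set of pairs:

```python
from collections import Counter

def normalize(elem, rel):
    """Reduce a formal sum of primes (a Counter) in M(D, <|),
    using the defining relations e + f = f whenever (e, f) is in rel.
    `rel` encodes a transitive, antisymmetric relation on D."""
    c = Counter(elem)
    changed = True
    while changed:
        changed = False
        for (e, f) in rel:
            if e != f and c[e] > 0 and c[f] > 0:
                # e is absorbed by f: n*e + f = f for all n
                c[e] = 0
                changed = True
            if e == f and c[e] > 1:
                # d <| d means d is idempotent: d + d = d
                c[e] = 1
                changed = True
    return +c  # drop zero entries

# D = {e, f, g} with e <| f and g <| g (g idempotent):
rel = {("e", "f"), ("g", "g")}
assert normalize(Counter({"e": 3, "f": 1}), rel) == Counter({"f": 1})
assert normalize(Counter({"g": 5}), rel) == Counter({"g": 1})
assert normalize(Counter({"e": 2}), rel) == Counter({"e": 2})
```

The reduction only applies the defining relations, so equal reduced forms certify equality in $M(D,\vartriangleleft)$; that the reduced forms are moreover canonical is part of Pierce's analysis.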
\begin{theorem} \label{tame:antisymmetrizn} If $M$ is a tame refinement monoid, then its maximal antisymmetric quotient $M/{\equiv}$ is a conical tame refinement monoid. \end{theorem} \begin{proof} Conicality for $M/{\equiv}$ follows immediately from antisymmetry. Assume that $M$ is an inductive limit of an inductive system of finitely generated refinement monoids $M_i$. Then $M/{\equiv}$ is an inductive limit of the corresponding inductive system of finitely generated monoids $M_i/{\equiv}$. The latter monoids have refinement by \cite[Theorem 5.2 and Corollary 6.8]{Brook01}. Therefore $M/{\equiv}$ has refinement and is tame. \end{proof} Moreira Dos Santos constructed an example in \cite{MDS} of a conical, unperforated, strongly separative refinement monoid whose maximal antisymmetric quotient does not have refinement. This monoid is wild, by Theorem \ref{tame:antisymmetrizn}. \begin{theorem} \label{tame:stablyfinite} Let $M$ be a tame refinement monoid. If $M$ is stably finite, then $M$ is cancellative. \end{theorem} \begin{proof} Suppose $a,b,c\in M$ with $a+c= b+c$, and let $M'$ be the submonoid of $M$ generated by $a$, $b$, $c$. By Theorem \ref{tamecriterion}, the inclusion map $M' \rightarrow M$ can be factored as the composition of homomorphisms $f: M'\rightarrow M''$ and $g: M''\rightarrow M$ where $M''$ is a finitely generated refinement monoid. Apply \cite[Corollary 6.8]{Brook01} and \cite[Lemma 2.1]{APW} to the equation $f(a)+f(c) = f(b)+f(c)$ in $M''$. This yields elements $x,y\in M''$ such that $f(a)+x = f(b)+y$ and $f(c)+x= f(c)+y= f(c)$. Then $c+g(x)= c+g(y) =c$, and so $g(x)=g(y)=0$ by the stable finiteness of $M$. Therefore $a= a+g(x)= b+g(y) = b$, as desired. \end{proof} We shall construct examples of stably finite, noncancellative refinement monoids below. These will be wild by Theorem \ref{tame:stablyfinite}. 
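The simplest concrete instance of stable finiteness without cancellation in this paper is the monoid $\mathcal{M}_0 = \langle x, y, z \mid x+y = x+z \rangle$ of Section \ref{MMbar} (it lacks refinement, so it does not contradict Theorem \ref{tame:stablyfinite}). The following sketch (an illustration only) implements its equality test via the canonical form of Lemma \ref{equalMn}: once $x$ occurs with positive multiplicity, only the total multiplicity of $y$ and $z$ matters.

```python
def canon(m, i, j):
    """Canonical form of m*x + i*y + j*z in M0 = <x,y,z | x+y = x+z>:
    when x is present, y and z can be traded, so only i+j matters."""
    return (m, i + j, 0) if m > 0 else (m, i, j)

def add(a, b):
    """Add two elements of M0 given in canonical form."""
    return canon(*(s + t for s, t in zip(a, b)))

x, y, z = (1, 0, 0), (0, 1, 0), (0, 0, 1)

# x + y = x + z, yet y != z: M0 is stably finite but not cancellative.
assert add(x, y) == add(x, z)
assert canon(*y) != canon(*z)
```

That this canonical form is well defined and faithful is exactly the $n=0$ case of Lemma \ref{equalMn}(b).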
\begin{corollary} \label{tamecancellativequo} Let $M$ be a tame refinement monoid, and set $$J := \{a \in M \mid \text{there exists} \;\, b \in M \; \text{with} \;\, a+b\le b \}.$$ Then $J$ is an o-ideal of $M$, and $M/J$ is a cancellative tame refinement monoid. \end{corollary} \begin{proof} It is clear that $J$ is an o-ideal. The quotient $M/J$ thus exists, and it is a tame refinement monoid by Proposition \ref{prop:tame-closed-under-quotients}. By Theorem \ref{tame:stablyfinite}, it only remains to show that $M/J$ is stably finite. Suppose $x$ and $a$ are elements of $M$ whose images $\ol{x},\ol{a}\in M/J$ satisfy $\ol{x}+\ol{a}= \ol{x}$. Then there exist $u,v\in J$ such that $x+a+u = x+v$. By definition of $J$, there is some $b\in M$ such that $v+b\le b$. Hence, $$a+(x+b) \le x+a+u+b = x+v+b \le x+b.$$ Therefore $a\in J$ and $\ol{a}=0$, as required. \end{proof} Among stably finite conical refinement monoids, tameness can be characterized by combining Theorem \ref{tame:stablyfinite} with criteria of Grillet \cite{Gril76} and Effros-Handelman-Shen \cite{EHS}, as follows. \begin{theorem} \label{tamecriterion:stablyfinite} Let $M$ be a stably finite conical refinement monoid. Then $M$ is tame if and only if it is unperforated and cancellative. \end{theorem} \begin{proof} Necessity is given by Theorems \ref{tame:sepunperf} and \ref{tame:stablyfinite}. Conversely, assume that $M$ is unperforated and cancellative. Since $M$ is also conical, its algebraic order is antisymmetric. We now apply one of two major theorems. The first approach stays within commutative monoids. From the above properties of $M$, we see that the following conditions hold for any $a,c,d\in M$ and $n\in{\mathbb{N}}$: (1) If $na=nc$, then $a=c$, and (2) if $na= nc+d$, then the element $d$ is divisible by $n$ in $M$. 
By \cite[Proposition 2.7]{Gril76}, $M$ satisfies what is there called the \emph{strong RIP}: Whenever $a,b,c,d\in M$ and $n\in {\mathbb{N}}$ with $na+b= nc+d$, there exist $u,v,w,z\in M$ such that $$a=u+v,\qquad b=nw+z,\qquad c=u+w,\qquad d=nv+z.$$ The main theorem of \cite{Gril76}, Theorem 2.1, now implies that $M$ is an inductive limit of an inductive system of free commutative monoids, i.e., of direct sums of copies of ${\mathbb{Z}^+}$. Clearly free commutative monoids are tame, and therefore $M$ is tame by Proposition \ref{classoftame}. For the second approach, observe that $M$ is the positive cone of a directed, partially ordered abelian group $G$ (because $M$ is conical and cancellative). The assumptions on $M$ imply that $G$ is an unperforated interpolation group (with respect to its given partial order), and thus $G$ is a dimension group. The theorem of Effros-Handelman-Shen \cite[Theorem 2.2]{EHS} now implies that $G$ is an inductive limit of an inductive system of partially ordered abelian groups ${\mathbb{Z}}^{n_i}$. Consequently, $M$ is an inductive limit of an inductive system of monoids $({\mathbb{Z}^+})^{n_i}$, and therefore $M$ is tame. \end{proof} Tame refinement monoids satisfy a number of other properties, of which we mention a few samples. \begin{remark} \label{tame:further} Let $M$ be a tame refinement monoid and $a,b,c,d_1,d_2 \in M$. Then: \begin{enumerate} \item If $a+c \le b+c$, there exists $a_1\in M$ such that $a_1+c=c$ and $a\le b+a_1$. \item If $a\le c+d_i$ for $i=1,2$, there exists $d\in M$ such that $a\le c+d$ and $d\le d_i$ for $i=1,2$. \end{enumerate} Both (1) and (2) hold in finitely generated refinement monoids, by \cite[Corollaries 4.2, 5.17, 6.8]{Brook01}, and they pass to inductive limits. \end{remark} We close this section with a result due to the referee. We thank him/her for allowing us to include it here. 
\begin{proposition} \label{prop:Riesz-prop} Let $M$ be a tame refinement monoid, and let $\sim $ be the congruence on $M$ given by $x\sim y $ if and only if there exists $z \in M$ such that $x+z=y+z$. Then $M/{\sim} $ {\rm(}the maximal cancellative quotient of $M$\/{\rm)} is a Riesz monoid. \end{proposition} \begin{proof} Denote by $[x]$ the $\sim$-equivalence class of $x\in M$. Suppose that $[a]\le [b_1]+[b_2]$ in $M/{\sim} $. There is $c\in M$ such that $a+c\le b_1+b_2+c$, and so, by Remark \ref{tame:further}(1), there is $d\in M$ such that $d+c= c$ and $a\le b_1+b_2+d$. By Riesz decomposition in $M$, we have $a=a_1+a_2+e$, where $a_i\le b_i$, $i=1,2$, and $e\le d$. Now, $a_1\le b_1$ and $e+c\le c$, whence $a_1+e+c\le b_1+c$, and thus $[a_1+e]\le [b_1]$. Since $[a_2]\le [b_2]$ and $[a]=[a_1+e]+[a_2]$, we have thus verified that $M/{\sim} $ has Riesz decomposition. \end{proof} We believe that the first example of a regular ring $R$ such that $V(R)$ is a wild refinement monoid is Bergman's example \cite[Example 5.10]{vnrr}. This ring is stably finite but not unit-regular, so $V(R)$ is wild by Theorem \ref{tame:stablyfinite}. As noted in \cite[Section 3]{AGreal}, the regular rings constructed by Bergman in \cite[Example 5.10]{vnrr} and by Menal and Moncasi in \cite[Example 2]{MM} realize the monoid $\ol{\calM}$ constructed in Section \ref{MMbar}, so that $\ol{\calM}$ could be considered as the most elementary example of a wild refinement monoid. Moncasi constructed in \cite{moncasi} an example of a regular ring $R$ such that $K_0(R)$ is not a Riesz group. Hence, $K_0(R)^+$ is not a Riesz monoid. But $K_0(R)^+ \cong V(R)/{\sim}$. Thus, by Proposition \ref{prop:Riesz-prop}, $V(R)$ is a wild refinement monoid. 
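As a computational footnote to Theorem \ref{tamecriterion:stablyfinite} (an illustration only): both Grillet's theorem and the Effros-Handelman-Shen theorem reduce to the building blocks $({\mathbb{Z}^+})^n$, where a refinement matrix can be written down explicitly, coordinatewise via minima.

```python
def refine_scalar(a1, a2, b1, b2):
    """Refinement matrix for a1 + a2 = b1 + b2 in Z^+:
    returns ((c11, c12), (c21, c22)) with row sums a1, a2
    and column sums b1, b2, all entries nonnegative."""
    assert a1 + a2 == b1 + b2
    c11 = min(a1, b1)
    return ((c11, a1 - c11), (b1 - c11, a2 - (b1 - c11)))

def refine(a1, a2, b1, b2):
    """Coordinatewise refinement in (Z^+)^n for tuples."""
    cells = [refine_scalar(*coords) for coords in zip(a1, a2, b1, b2)]
    return [[tuple(c[r][s] for c in cells) for s in (0, 1)] for r in (0, 1)]

R = refine((5, 0), (1, 3), (2, 2), (4, 1))
# Row 0 sums to a1 = (5, 0); column 0 sums to b1 = (2, 2).
assert tuple(u + v for u, v in zip(*R[0])) == (5, 0)
assert tuple(u + v for u, v in zip(R[0][0], R[1][0])) == (2, 2)
```

In particular, free commutative monoids are refinement monoids, which is the base case behind both inductive limit arguments above.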
\section{Graph monoids} \label{sec:graphmon} We express directed graphs in the form $E := (E^0, E^1, r, s)$ where $E^0$ and $E^1$ are the sets of vertices and arrows of $E$, respectively, while $r = r_E$ and $s = s_E$ are the respective range and source maps $E^1 \rightarrow E^0$. The graph $E$ is said to be \emph{row-finite} if its incidence matrix is row-finite, i.e., for each vertex $v \in E^0$, there are at most finitely many arrows in $E^1$ with source $v$. There is a natural \emph{category of directed graphs}, call it $\mathcal{D}$, whose objects are all directed graphs and in which a morphism from an object $E$ to an object $F$ is any pair of maps $(g_0,g_1)$ where $g_i: E^i \rightarrow F^i$ for $i=0,1$, while $r_F g_1 = g_0 r_E$ and $s_F g_1 = g_0 s_E$. Any inductive system in $\mathcal{D}$ has an inductive limit in $\mathcal{D}$. \subsection{Monoids associated to (unseparated) directed graphs} \label{M(E)} A \emph{graph monoid} $M(E)$ associated to a directed graph $E$ was first introduced in \cite[p.~163]{AMP} in the case that $E$ is row-finite. In that setting, $M(E)$ is defined to be the commutative monoid presented by the set of generators $E^0$ and the relations \begin{enumerate} \item $v = \sum \{ r(e) \mid e \in s^{-1}(v) \}$ for all non-sinks $v \in E^0$. \end{enumerate} A definition for $M(E)$ in the general case was given in \cite[p.~196]{AG}. Then, the generators $v\in E^0$ are supplemented by generators $q_Z$ as $Z$ runs through all nonempty finite subsets of $s^{-1}(v)$ for \emph{infinite emitters} $v$, that is, vertices $v\in E^0$ such that $s^{-1}(v)$ is infinite. The relations consist of \begin{enumerate} \item $v = \sum \{ r(e) \mid e \in s^{-1}(v) \}$ for all $v \in E^0$ such that $s^{-1}(v)$ is nonempty and finite. \item $v = \sum \{r(e)\mid e\in Z\} +q_Z$ for all infinite emitters $v \in E^0$ and all nonempty finite subsets $Z\subset s^{-1}(v)$. 
\item $q_Z = \sum \{r(e)\mid e\in W \setminus Z\} +q_W$ for all nonempty finite sets $Z \subseteq W \subset s^{-1}(v)$, where $v\in E^0$ is an infinite emitter. \end{enumerate} We give two proofs that the graph monoids $M(E)$ are tame refinement monoids. The first (Theorem \ref{MEgeneral}) involves changing the graph $E$ to a new graph $\widetilde{E}$ whose monoid is isomorphic to $M(E)$, while the second (see \S\ref{sepgraphmonoids}) takes advantage of inductive limit results for monoids associated to separated graphs (Definition \ref{defsepgraph}). Observe that both proofs give that $M(E)$ is in fact an inductive limit of {\it graph monoids} $M(F)$ associated to {\it finite graphs} $F$, thus showing an {\it a priori} stronger statement. \begin{theorem} \label{MEgeneral} For any directed graph $E$, the graph monoid $M(E)$ is a tame conical refinement monoid. In particular, it is unperforated and separative. \end{theorem} \begin{proof} It is known that $M(E)$ is always a conical refinement monoid. In the row-finite case, conicality is easily seen from the definition of $M(E)$, or one can obtain it from \cite[Theorem 3.5]{AMP}, and refinement was proved in \cite[Proposition 4.4]{AMP}. These two properties of $M(E)$ were proved in general in \cite[Corollary 5.16]{AG}. Tameness in the row-finite case follows from \cite[Lemma 3.4]{AMP}, in which it was proved that $M(E)$ is an inductive limit of graph monoids $M(E_i)$, for certain finite subgraphs $E_i$ of $E$. Since each $M(E_i)$ is a finitely generated refinement monoid, tameness follows. We will show that tameness in general can be reduced to the ``row-countable'' case, and that the latter case follows from the row-finite case. Let $\mathcal{A}$ denote the collection of those subsets $A\subseteq E^1$ such that \begin{enumerate} \item $s^{-1}(v)\subseteq A$ for all $v\in E^0$ such that $s^{-1}(v)$ is finite. \item $s^{-1}(v)\cap A$ is countably infinite for all infinite emitters $v\in E^0$. 
\end{enumerate} For each $A\in \mathcal{A}$, let $E_A$ denote the subgraph $(E^0,A,r|_A,s|_A)$ of $E$. Since $\mathcal{A}$ is closed under finite unions, the graphs $E_A$ together with the inclusion maps $E_A \rightarrow E_B$ for $A\subseteq B$ in $\mathcal{A}$ form an inductive system, and $E$ is an inductive limit of this system in the category $\mathcal{D}$. While the functor $M(-)$ does not preserve all inductive limits, it does preserve ones of the form just described, as is easily verified. Consequently, to prove that $M(E)$ is tame, it suffices to prove that each $M(E_A)$ is tame, by Proposition \ref{classoftame}. It remains to deal with the case in which $s^{-1}(v)$ is countable for all $v \in E^0$. Let $E^0_\infty$ denote the set of all infinite emitters in $E^0$, and for each $v \in E^0_\infty$, list the arrows emitted by $v$ as a sequence without repetitions, say $$s^{-1}(v) = \{ e_{v,1}, e_{v,2}, \dots\}.$$ Then set $q_{v,n} := q_{\{e_{v,1}, \dots, e_{v,n}\}}$ for $v \in E^0_\infty$ and $n\in {\mathbb{N}}$, and observe that $M(E)$ has a presentation with generating set $$E^0 \sqcup \{ q_{v,n} \mid v \in E^0_\infty, \; n \in {\mathbb{N}}\}$$ and relations \begin{align*} v &= \sum \{ r(e) \mid e \in s^{-1}(v) \} \,, &&\quad (v\in E^0,\; 0< |s^{-1}(v)|< \infty) \\ v &= r(e_{v,1}) +q_{v,1} \,, &&\quad (v \in E^0_\infty) \\ q_{v,n} &= r(e_{v,n+1})+ q_{v,n+1} \,, &&\quad (v \in E^0_\infty, \; n \in {\mathbb{N}}). \end{align*} Define a new directed graph $\widetilde{E}$ with vertex set $$\widetilde{E}^0 := E^0 \sqcup \{ w_{v,n} \mid v \in E^0_\infty, \; n \in {\mathbb{N}} \}$$ and $\widetilde{E}^1$ consisting of the following arrows: \begin{enumerate} \item $e \in E^1$, for those $e$ such that $s(e) \notin E^0_\infty$; \item $e_{v,1}$ and an arrow $v \rightarrow w_{v,1}$, for each $v \in E^0_\infty$; \item Arrows $w_{v,n} \rightarrow w_{v,n+1}$ and $w_{v,n} \rightarrow r(e_{v,n+1})$, for each $v \in E^0_\infty$ and $n\in {\mathbb{N}}$. 
\end{enumerate} This graph is row-finite, and so $M(\widetilde{E})$ is tame. A comparison of presentations shows that there is an isomorphism $M(E) \rightarrow M(\widetilde{E})$, sending $v\mapsto v$ for all $v\in E^0$ and $q_{v,n} \mapsto w_{v,n}$ for all $v \in E^0_\infty$ and $n\in {\mathbb{N}}$. Therefore $M(E)$ is tame, as desired. \end{proof} Tame conical refinement monoids, even finitely generated antisymmetric ones, do not all arise as graph monoids $M(E)$, as shown in \cite[Lemma 4.1]{APW}. \subsection{Separated graphs and their monoids} \label{sepgraphmonoids} \begin{definition} \label{defsepgraph} A \emph{separated graph}, as defined in \cite[Definition 2.1]{AG}, is a pair $(E,C)$ where $E$ is a directed graph, $C=\bigsqcup _{v\in E^0} C_v$, and $C_v$ is a partition of $s^{-1}(v)$ (into pairwise disjoint nonempty subsets) for every vertex $v$. (In case $v$ is a sink, we take $C_v$ to be the empty family of subsets of $s^{-1}(v)$.) The pair $(E,C)$ is called \emph{finitely separated} provided all the members of $C$ are finite sets. We first give the definition of the \emph{graph monoid} $M(E,C)$ associated to a finitely separated graph $(E,C)$. This is the commutative monoid presented by the set of generators $E^0$ and the relations $$v = \sum_{e \in X} r(e) \qquad \text{for all} \; v \in E^0 \; \text{and} \; X \in C_v \,.$$ \end{definition} Lemma 4.2 of \cite{AG} shows that $M(E,C)$ is conical, and that it is nonzero as long as $E^0$ is nonempty. Otherwise, $M(E,C)$ has no special properties, in contrast to Theorem \ref{MEgeneral}: \begin{theorem} \label{MECarb} {\rm\cite[Proposition 4.4]{AG}} Any conical commutative monoid is isomorphic to $M(E,C)$ for a suitable finitely separated graph $(E,C)$. 
\end{theorem} \begin{definition} \label{defMEC} The general definition of graph monoids in \cite[Definition 4.1]{AG} applies to triples $(E,C,S)$, where $(E,C)$ is a separated graph and $S$ is any subset of the collection $$C_{\fin} := \{ X\in C \mid X \;\, \text{is finite} \}.$$ These triples are the objects of a category $\mathbf{SSGr}$ \cite[Definition 3.1]{AG} in which the morphisms $\phi: (E,C,S) \rightarrow (F,D,T)$ are those graph morphisms $\phi= (\phi^0,\phi^1): E\rightarrow F$ such that \begin{enumerate} \item[$\bullet$] $\phi^0$ is injective. \item[$\bullet$] For each $X\in C$, the restriction of $\phi^1$ to $X$ is an injection of $X$ into some member of $D$. \item[$\bullet$] For each $X\in S$, the restriction $\phi^1|_X$ is a bijection of $X$ onto a member of $T$. \end{enumerate} For any object $(E,C,S)$ of $\mathbf{SSGr}$, the \emph{graph monoid} $M(E,C,S)$ is presented by the generating set $$E^0 \sqcup \{ q'_Z \mid Z \subseteq X \in C, \;\, 0 < |Z|< \infty\}$$ and the relations \begin{enumerate} \item[$\bullet$] $v= q'_Z + \sum_{e\in Z} r(e)$ for all $v\in E^0$ and nonempty finite subsets $Z$ of members of $C_v$. \item[$\bullet$] $q'_{Z_1} = q'_{Z_2}+ \sum_{e\in Z_2\setminus Z_1} r(e)$ for all nonempty finite subsets $Z_1\subseteq Z_2$ of members of $C$. \item[$\bullet$] $q'_X=0$ for all $X\in S$. \end{enumerate} In case $S = C_{\fin}$, we set $M(E,C) := M(E,C,C_{\fin})$. When $(E,C)$ is finitely separated, the generators $q'_Z$ are redundant, and $M(E,C)$ has the presentation described in Definition \ref{defsepgraph}. Finally, we define $M(E) := M(E,C)$ where $C$ is the ``unseparation" of $E$, namely, the collection of all sets $s^{-1}(v)$ for non-sinks $v\in E^0$. \end{definition} We note that whenever $(E,C)$ is a separated graph and there is a (directed) path from a vertex $v$ to a vertex $w$ in $E$, we have $w \le v$ with respect to the algebraic ordering in any $M(E,C,S)$. 
It is enough to verify this statement when there is an arrow $e : v \rightarrow w$. In that case, $w\le v$ follows from the relation $v= q'_{\{e\}}+ w$. \medskip \noindent\emph{Alternative proof of Theorem {\rm\ref{MEgeneral}}}. As before, $M(E)$ is a conical refinement monoid by the results of \cite{AMP, AG}. As above, write $M(E)= M(E,C,S)$ where $C := \{ s^{-1}(v) \mid v\in E^0, \;\, |s^{-1}(v)| > 0\}$ and $S := C_{\fin}$. By \cite[Proposition 3.5]{AG}, $(E,C,S)$ is an inductive limit in $\mathbf{SSGr}$ of its finite complete subobjects $(F,D,T)$, where finiteness means that $F^0$ and $F^1$ are finite sets while completeness in this case just means that \begin{enumerate} \item[$\bullet$] $D = \{ s_F^{-1}(v) \mid v\in F^0, \;\, |s_F^{-1}(v)| > 0\}$, and \item[$\bullet$] $T = \{ s_E^{-1}(v) \mid v\in F^0, \;\, 0< |s_E^{-1}(v)| <\infty\}$. \end{enumerate} As noted in \cite[p.~186]{AG}, the functor $M(-)$ from $\mathbf{SSGr}$ to commutative monoids preserves inductive limits, so $M(E)$ is an inductive limit of the monoids $M(F,D,T)$. These monoids are finitely generated because the graphs $F$ are finite, so we just need them to have refinement. Each $(F,D)$ is a finitely separated graph, but $M(F,D,T)$ may not be equal to $M(F)$, because $T$ is in general a proper subset of $D_{\fin}$. We apply \cite[Construction 5.3]{AG} to $(F,D,T)$, to obtain a finitely separated graph $(\widetilde{F},\widetilde{D})$ such that $M(F,D,T) \cong M(\widetilde{F},\widetilde{D})$. It is clear from the construction that $\widetilde{D}_v = \{ s^{-1}_{\widetilde{F}}(v) \}$ for all non-sinks $v\in \widetilde{F}^0$, and so $M(\widetilde{F},\widetilde{D})= M(\widetilde{F})$. Since $M(\widetilde{F})$ is a refinement monoid \cite[Proposition 4.4]{AMP}, so is $M(F,D,T)$, as required. \qed \section{Two wild examples} \label{MMbar} We present explicit constructions of two refinement monoids which are wild, but otherwise possess most of the properties of tame refinement monoids established above. 
In particular, they are stably finite, separative, unperforated, and conical, and one of them is archimedean. Both are constructed as graph monoids of separated graphs. \subsection{Three separated graphs} \label{3sepgraphs} We concentrate on three particular separated graphs, denoted $(E_0,C^0)$, $(E,C)$, and $(\ol{E},\ol{C})$, which are drawn in Figures \ref{E0C0} and \ref{EC} below. Both of the latter two graphs contain $E_0$ as a subgraph. We indicate the sets in the families $C^0$, $C$, and $\ol{C}$ by connecting their members with dotted lines. Thus, $C^0_u= \bigl\{ \{e_1,e_2\}, \{f_1,f_2\} \bigr\}$, while $C^0_{x_0}$, $C^0_{y_0}$, $C^0_{z_0}$ are empty. \ignore{ \begin{figure}[htp] $$ \xymatrixrowsep{4.5pc}\xymatrixcolsep{5pc}\def\displaystyle{\displaystyle} \xymatrix{ &u \ar@/_4ex/[dl]_{e_1} \ar@/_4ex/[dl]|(0.65){\circ}="e1" \ar@/_1pc/[d]_(0.6){e_2} \ar@/_1pc/[d]|(0.45){\circ}="e2" \ar@/^1pc/[d]^(0.6){f_2} \ar@/^1pc/[d]|(0.45){\circ}="f2" \ar@/^4ex/[dr]^{f_1} \ar@/^4ex/[dr]|(0.65){\circ}="f1" \ar@{.}"e1";"e2" \ar@{.}"f1";"f2" \\ y_0 &x_0 &z_0 }$$ \caption{The separated graph $(E_0,C^0)$} \label{E0C0} \end{figure} } As we shall see below, the graph monoid of $(E_0,C^0)$ does not have refinement (Remark \ref{M0notref}). In \cite[Construction 8.8]{AG}, a process of \emph{complete resolutions} was developed, by which any finitely separated graph can be enlarged to one whose graph monoid has refinement \cite[Theorem 8.9]{AG}. One application of this process leads to $(E,C)$ (we leave the details to the reader). Since $|C_v|\le 2$ for all $v\in E^0$, $(E,C)$ is also a \emph{complete multiresolution} of $(E_0,C^0)$ in the sense of \cite[Section 3]{AE}. Finally, $(\ol{E},\ol{C})$ is obtained by removing the vertices $a_1,a_2,\dots$ from $E^0$ and shrinking the sets in $C$ as indicated in the diagram. 
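To make the data of Figure \ref{E0C0} concrete, here is a small sketch (an illustration only, assuming the arrow-labelling of the figure: $e_1, f_1$ end at $y_0, z_0$ and $e_2, f_2$ at $x_0$) that derives the defining relations of $M(E_0,C^0)$ from the presentation in Definition \ref{defsepgraph}:

```python
from collections import Counter

# Separated graph (E0, C0): four arrows out of u, partitioned by C0_u.
r = {"e1": "y0", "e2": "x0", "f1": "z0", "f2": "x0"}  # range map (assumed labelling)
C = {"u": [{"e1", "e2"}, {"f1", "f2"}]}               # C0_u; the other C0_v are empty

def relations(C, r):
    """One relation v = sum of r(e), e in X, for each vertex v and each X in C_v."""
    return [(v, Counter(r[e] for e in X)) for v, Xs in C.items() for X in Xs]

rels = relations(C, r)
# M(E0, C0) is presented by u = x0 + y0 and u = x0 + z0,
# which together force x0 + y0 = x0 + z0, the defining relation of M0.
assert ("u", Counter({"x0": 1, "y0": 1})) in rels
assert ("u", Counter({"x0": 1, "z0": 1})) in rels
```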
\ignore{ \begin{figure}[htp] $$\xymatrixrowsep{1.33pc}\xymatrixcolsep{4pc}\def\displaystyle{\displaystyle} \xymatrix{ &u \ar@/_3ex/[dddl] \ar@{}@/_3ex/[dddl]|(0.65){\circ}="e1" \ar@/_2ex/[ddd] \ar@{}@/_2ex/[ddd]|{\circ}="e2" \ar@/^2ex/[ddd] \ar@{}@/^2ex/[ddd]|{\circ}="f2" \ar@/^3ex/[dddr] \ar@{}@/^3ex/[dddr]|(0.65){\circ}="f1" \ar@{.}"e1";"e2" \ar@{.}"f1";"f2" &&&u \ar@/_3ex/[dddl] \ar@{}@/_3ex/[dddl]|(0.65){\circ}="ee1" \ar@/_2ex/[ddd] \ar@{}@/_2ex/[ddd]|{\circ}="ee2" \ar@/^2ex/[ddd] \ar@{}@/^2ex/[ddd]|{\circ}="ff2" \ar@/^3ex/[dddr] \ar@{}@/^3ex/[dddr]|(0.65){\circ}="ff1" \ar@{.}"ee1";"ee2" \ar@{.}"ff1";"ff2" \\ \\ \\ y_0 \ar[ddd] \ar[ddd]|(0.5){\circ}="g12" \ar@/^2ex/[ddr] \ar@/^2ex/[ddr]|(0.4){\circ}="g11" &x_0 \ar[dddl] \ar[dddl]|(0.65){\circ}="h12" \ar@/_4ex/[ddd] \ar@/_4ex/[ddd]|(0.65){\circ}="h22" \ar@/^4ex/[ddd] \ar@/^4ex/[ddd]|(0.65){\circ}="g22" \ar[dddr] \ar[dddr]|(0.65){\circ}="g21" &z_0 \ar@/_2ex/[ddl] \ar@/_2ex/[ddl]|(0.4){\circ}="h11" \ar[ddd] \ar[ddd]|(0.5){\circ}="h21" \ar@{.}"g12";"g11" \ar@{.}"g21";"g22" \ar@{.}"h11";"h21" \ar@{.}"h12";"h22" &y_0 \ar[ddd] \ar[ddd]|(0.5){\circ} &x_0 \ar[dddl] \ar[dddl]|(0.65){\circ}="hh12" \ar@/_4ex/[ddd] \ar@/_4ex/[ddd]|(0.65){\circ}="hh22" \ar@/^4ex/[ddd] \ar@/^4ex/[ddd]|(0.65){\circ}="gg22" \ar[dddr] \ar[dddr]|(0.65){\circ}="gg21" &z_0 \ar[ddd] \ar[ddd]|(0.5){\circ} \ar@{.}"gg21";"gg22" \ar@{.}"hh12";"hh22" \\ \\ &a_1 \\ y_1 \ar@{}[dd]|{\vdots} &x_1 \ar@{}[dd]|{\vdots} &z_1 \ar@{}[dd]|{\vdots} &y_1 \ar@{}[dd]|{\vdots} &x_1 \ar@{}[dd]|{\vdots} &z_1 \ar@{}[dd]|{\vdots} \\ \\ y_{n-1} \ar[ddd] \ar[ddd]|(0.5){\circ}="g12" \ar@/^2ex/[ddr] \ar@/^2ex/[ddr]|(0.4){\circ}="g11" &x_{n-1} \ar[dddl] \ar[dddl]|(0.65){\circ}="h12" \ar@/_4ex/[ddd] \ar@/_4ex/[ddd]|(0.65){\circ}="h22" \ar@/^4ex/[ddd] \ar@/^4ex/[ddd]|(0.65){\circ}="g22" \ar[dddr] \ar[dddr]|(0.65){\circ}="g21" &z_{n-1} \ar@/_2ex/[ddl] \ar@/_2ex/[ddl]|(0.4){\circ}="h11" \ar[ddd] \ar[ddd]|(0.5){\circ}="h21" \ar@{.}"g12";"g11" \ar@{.}"g21";"g22" \ar@{.}"h11";"h21" 
\ar@{.}"h12";"h22" &y_{n-1} \ar[ddd] \ar[ddd]|(0.5){\circ} &x_{n-1} \ar[dddl] \ar[dddl]|(0.65){\circ}="hh12" \ar@/_4ex/[ddd] \ar@/_4ex/[ddd]|(0.65){\circ}="hh22" \ar@/^4ex/[ddd] \ar@/^4ex/[ddd]|(0.65){\circ}="gg22" \ar[dddr] \ar[dddr]|(0.65){\circ}="gg21" &z_{n-1} \ar[ddd] \ar[ddd]|(0.5){\circ} \ar@{.}"gg21";"gg22" \ar@{.}"hh12";"hh22" \\ \\ &a_n \\ y_n \ar@{}[dd]|{\vdots} &x_n \ar@{}[dd]|{\vdots} &z_n \ar@{}[dd]|{\vdots} &y_n \ar@{}[dd]|{\vdots} &x_n \ar@{}[dd]|{\vdots} &z_n \ar@{}[dd]|{\vdots} \\ \\ &{\save+<0ex,-2ex> \drop{(E,C)} \restore} & &&{\save+<0ex,-2ex> \drop{(\ol{E},\ol{C})} \restore} & }$$ \caption{} \label{EC} \end{figure} } \subsection{Three graph monoids} \label{3graphmons} We label the monoids of the three separated graphs introduced in \S\ref{3sepgraphs} as follows: $$\mathcal{M}_0:= M(E_0,C^0), \qquad\quad \mathcal{M} := M(E,C), \qquad\quad \ol{\calM} := M(\ol{E},\ol{C}).$$ These are conical commutative monoids, as noted above. Note that $u$ is an order-unit in each of these monoids, because there are paths from $u$ to each vertex of any of the three graphs. In all three monoids, we have $u= x_0+y_0$, which means that $u$ can be omitted from the set of generators. In particular, $\mathcal{M}_0$ has the monoid presentation $$\mathcal{M}_0 = \langle x_0, \, y_0, \, z_0 \mid x_0+y_0 = x_0+z_0 \rangle.$$ We can present $\mathcal{M}$ by the generators $$x_0,y_0,z_0,a_1,x_1,y_1,z_1,a_2,x_2,y_2,z_2,\dots$$ and the relations \begin{equation} \begin{gathered} x_0+y_0 = x_0+z_0\,,\qquad\qquad y_l = y_{l+1}+a_{l+1}\,,\qquad\qquad z_l = z_{l+1}+a_{l+1}\,, \\ x_l = x_{l+1}+y_{l+1} =x_{l+1}+z_{l+1}\,. \end{gathered} \label{MECrelns} \end{equation} By \cite[Proposition 5.9 or Theorem 8.9]{AG}, $\mathcal{M}$ is a refinement monoid. We give a direct proof of this in Proposition \ref{Mref}. 
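The relations \eqref{MECrelns} allow any element of $\mathcal{M}$ to be rewritten in terms of the level-$k$ generators $x_k, y_k, z_k$ together with $a_1,\dots,a_k$. A computational sketch of this rewriting (an illustration only; it uses the $y$-branch of the relation $x_l = x_{l+1}+y_{l+1} = x_{l+1}+z_{l+1}$):

```python
from collections import Counter

def push_down(elem, k):
    """Rewrite an element of M (a Counter over generators ('x', l),
    ('y', l), ('z', l), ('a', l)) so that every x, y, z generator
    appearing has level k, using the relations
    y_l = y_{l+1} + a_{l+1},  z_l = z_{l+1} + a_{l+1},
    x_l = x_{l+1} + y_{l+1}."""
    c = Counter(elem)
    for l in range(k):
        for kind in ("x", "y", "z"):
            n = c.pop((kind, l), 0)
            if n == 0:
                continue
            c[(kind, l + 1)] += n
            if kind == "x":
                c[("y", l + 1)] += n
            else:
                c[("a", l + 1)] += n
    return +c  # drop zero entries

# For instance, x_0 = x_3 + 3*y_3 + a_2 + 2*a_3 in M.
assert push_down(Counter({("x", 0): 1}), 3) == Counter(
    {("x", 3): 1, ("y", 3): 3, ("a", 2): 1, ("a", 3): 2})
```

Since only defining relations are applied, equal push-downs certify equality in $\mathcal{M}$; unequal push-downs need not refute it, as the top-level relation $x_k+y_k = x_k+z_k$ is never invoked.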
The monoid $\ol{\calM}$ can be presented by the generators $$x_0,y_0,z_0,x_1,y_1,z_1,x_2,y_2,z_2,\dots$$ and the relations \begin{equation} \begin{gathered} x_0+y_0 = x_0+z_0\,,\qquad\qquad y_l = y_{l+1}\,,\qquad\qquad z_l = z_{l+1}\,, \\ x_l = x_{l+1}+y_{l+1} =x_{l+1}+z_{l+1}\,. \end{gathered} \notag \end{equation} The generators $y_n$ and $z_n$ for $n>0$ are redundant, and we write the remaining generators with overbars to avoid confusion between $\ol{\calM}$ and $\mathcal{M}$. Thus, $\ol{\calM}$ is presented by the generators $$\ol{x}_0,\ol{y}_0,\ol{z}_0,\ol{x}_1,\ol{x}_2,\dots$$ and the relations \begin{equation} \label{relnsMbar} \ol{x}_0+\ol{y}_0= \ol{x}_0+\ol{z}_0\,, \qquad\qquad \ol{x}_l= \ol{x}_{l+1}+\ol{y}_0= \ol{x}_{l+1}+\ol{z}_0 \,. \end{equation} We shall see below that $\ol{\calM}$ is a quotient of $\mathcal{M}$ modulo an o-ideal (Lemma \ref{Mbarisom}). This corresponds to the fact that $(\ol{E},\ol{C})$ is a quotient of $(E,C)$ in the sense of \cite[Construction 6.8]{AG}. \subsection{Structure of $\mathcal{M}$} \label{structureM} It is convenient to identify the following submonoids $\mathcal{A}_n$ and $\mathcal{M}_n$ of $\mathcal{M}$: $$\mathcal{A}_n := \sum_{i=1}^n {\mathbb{Z}^+} a_i \qquad\text{and}\qquad \mathcal{M}_n := \sum_{i=0}^n \bigl( {\mathbb{Z}^+} x_i+ {\mathbb{Z}^+} y_i+ {\mathbb{Z}^+} z_i \bigr) +\mathcal{A}_n \,,$$ for $n\in {\mathbb{N}}$. We shall see shortly that the submonoid ${\mathbb{Z}^+} x_0+ {\mathbb{Z}^+} y_0+ {\mathbb{Z}^+} z_0$ of $\mathcal{M}$ can be identified with the monoid $\mathcal{M}_0$ defined in \S\ref{3graphmons}. \begin{lemma} \label{equalMn} {\rm(a)} For each $n\in{\mathbb{N}}$, the monoid $\mathcal{M}_n$ is generated by $a_1,\dots,a_n$, $x_n$, $y_n$, $z_n$. {\rm(b)} Let $n\in {\mathbb{Z}^+}$ and $m,m',i,i',j,j',k_1,k'_1,\dots,k_n,k'_n \in {\mathbb{Z}^+}$. 
Then \begin{equation} \label{Mneqn} mx_n+ iy_n+ jz_n+ \sum_{l=1}^n k_la_l= m'x_n+ i'y_n+ j'z_n+ \sum_{l=1}^n k'_la_l \end{equation} in $\mathcal{M}$ if and only if \begin{equation} \label{Mnconds} \begin{aligned} &(m=m'=0, \;\, i=i', \;\, j=j', \;\, k_l=k'_l \;\, \text{for all} \;\, l) \;\, \text{or} \\ &(m=m'>0, \;\, i+j=i'+j', \;\, k_l=k'_l \;\, \text{for all} \;\, l) \end{aligned} \end{equation} {\rm(c)} The natural map $\mathcal{M}_0\rightarrow \mathcal{M}$ gives an isomorphism of $\mathcal{M}_0$ onto ${\mathbb{Z}^+} x_0+ {\mathbb{Z}^+} y_0+ {\mathbb{Z}^+} z_0$. \end{lemma} \begin{proof} (a) This is clear from the fact that \begin{align*} y_l &= y_{l+1}+a_{l+1} &z_l &= z_{l+1}+a_{l+1} &x_l &= x_{l+1}+y_{l+1} \end{align*} for all $l=0,\dots,n-1$. (b) Since $x_n+y_n = x_n+z_n$, we have $$x_n+ iy_n+ jz_n= x_n+ (i+j)y_n \qquad \text{and} \qquad x_n+ i'y_n+ j'z_n= x_n+ (i'+j')y_n \,. $$ This is all that is needed for the implication \eqref{Mnconds}$\Longrightarrow$\eqref{Mneqn}. Conversely, assume that \eqref{Mneqn} holds. In view of the presentation of $\mathcal{M}$ given in \S\ref{3graphmons}, there exists a homomorphism $f: \mathcal{M} \rightarrow {\mathbb{Z}^+}$ such that $$f(x_l)=1 \qquad\quad \text{and} \qquad\quad f(y_l)= f(z_l)= f(a_l) =0$$ for all $l$. Applying $f$ to \eqref{Mneqn} yields $m=m'$. Assume first that $m=0$, let $A$ be a free abelian group with a basis $\{\beta,\gamma\} \sqcup \{\alpha_l \mid l\in{\mathbb{N}} \}$, and enlarge $A$ to a monoid $A\sqcup \{\infty\}$ by adjoining an infinity element to $A$. There exists a homomorphism $g: \mathcal{M} \rightarrow A\sqcup \{\infty\}$ such that \begin{align*} g(x_l) &= \infty, &g(y_l) &= \beta+ \alpha_1+ \cdots+ \alpha_l \,, \\ g(a_l) &= -\alpha_l \,, &g(z_l) &= \gamma+ \alpha_1+ \cdots+ \alpha_l \end{align*} for all $l$. 
Applying $g$ to \eqref{Mneqn} yields $$i\beta+ j\gamma+ \sum_{l=1}^n (i+j-k_l)\alpha_l = i'\beta+ j'\gamma+ \sum_{l=1}^n (i'+j'-k'_l)\alpha_l \,,$$ from which the first alternative of \eqref{Mnconds} follows. Now suppose that $m>0$. There exists a homomorphism $h: \mathcal{M} \rightarrow A$ such that \begin{align*} h(a_l) &= -\alpha_l \,, &h(y_l) &= h(z_l) = \beta+ \alpha_1+ \cdots+ \alpha_l \,, &h(x_l) &= -l\beta+ \sum_{k=1}^l (k-l-1)\alpha_k \end{align*} for all $l$. Applying $h$ to \eqref{Mneqn} yields \begin{align*} (i+j-mn)\beta &+ \sum_{l=1}^n ( m(l-n-1) +i+j-k_l)\alpha_l = \\ & (i'+j'-mn)\beta+ \sum_{l=1}^n ( m(l-n-1) +i'+j'-k'_l)\alpha_l \,, \end{align*} from which the second alternative of \eqref{Mnconds} follows. (c) The homomorphism in question, call it $\eta$, sends the generators $x_0$, $y_0$, $z_0$ of $\mathcal{M}_0$ to the elements of $\mathcal{M}$ denoted by the same symbols. Part (b) implies that $\eta$ is injective, and the result follows. \end{proof} \begin{remark} \label{M0notref} We now identify $\mathcal{M}_0$ with the submonoid ${\mathbb{Z}^+} x_0+ {\mathbb{Z}^+} y_0+ {\mathbb{Z}^+} z_0$ of $\mathcal{M}$, and we observe that $\mathcal{M}_0$ is not a refinement monoid. Namely, it follows from Lemma \ref{equalMn} that $x_0$, $y_0$, and $z_0$ are distinct irreducible elements in $\mathcal{M}_0$, and hence the equation $x_0+y_0 = x_0+z_0$ has no refinement in $\mathcal{M}_0$. \end{remark} \begin{corollary} \label{Mnisom} For $n\in{\mathbb{N}}$, there are isomorphisms $({\mathbb{Z}^+})^n \oplus\mathcal{M}_0 \rightarrow \mathcal{M}_n$ and $({\mathbb{Z}^+})^n \rightarrow \mathcal{A}_n$ given by the rules \begin{align*} \bigl( (k_1,\dots,k_n),\, mx_0+iy_0+jz_0 \bigr) &\longmapsto mx_n+iy_n+jz_n+ k_1a_1+ \cdots+ k_na_n \\ (k_1,\dots,k_n) & \longmapsto k_1a_1+ \cdots+ k_na_n \,. 
\end{align*} \end{corollary} \begin{proof} By Lemma \ref{equalMn}, the displayed rules give well defined injective maps from $({\mathbb{Z}^+})^n \oplus\mathcal{M}_0$ to $\mathcal{M}_n$ and $({\mathbb{Z}^+})^n$ to $\mathcal{A}_n$, respectively. It is clear that they are surjective homomorphisms. \end{proof} The above results allow us to give a direct proof that $\mathcal{M}$ has refinement, as follows. We will need the equations \begin{align} \label{xyzntok} y_n &= y_k+ \sum_{l=n+1}^k a_l \,, &z_n &= z_k+ \sum_{l=n+1}^k a_l \,, &x_n &= x_k+ (k-n)y_k+ \sum_{l=n+1}^k (l-n-1)a_l \,, \end{align} for $k>n\ge0$, which follow directly from the relations \eqref{MECrelns}. \begin{proposition} \label{Mref} $\mathcal{M}$ is a refinement monoid. \end{proposition} \begin{proof} Let $b_1,b_2,c_1,c_2 \in \mathcal{M}$ such that $b_1+b_2= c_1+c_2$, and choose $n\in{\mathbb{N}}$ such that these elements all lie in $\mathcal{M}_n$. Write each $$b_i= \beta_{i0}x_n+ \beta_{i1}y_n+ \beta_{i2}z_n+ b'_i \qquad\text{and}\qquad c_j= \gamma_{j0}x_n+ \gamma_{j1}y_n+ \gamma_{j2}z_n+ c'_j$$ with coefficients $\beta_{is}, \gamma_{js} \in {\mathbb{Z}^+}$ and elements $b'_i,c'_j \in \mathcal{A}_n$. Substituting these expressions into the equation $b_1+b_2= c_1+c_2$ and applying Lemma \ref{equalMn}, we find that $b'_1+b'_2= c'_1+c'_2$ and \begin{equation} \label{cut} \begin{aligned} (\beta_{10}x_n+ \beta_{11}y_n+ \beta_{12}z_n) &+ (\beta_{20}x_n+ \beta_{21}y_n+ \beta_{22}z_n) \\ &= (\gamma_{10}x_n+ \gamma_{11}y_n+ \gamma_{12}z_n) + (\gamma_{20}x_n+ \gamma_{21}y_n+ \gamma_{22}z_n) . \end{aligned} \end{equation} The equation $b'_1+b'_2= c'_1+c'_2$ has refinements in $\mathcal{A}_n$, because that monoid is isomorphic to $({\mathbb{Z}^+})^n$. Any such refinement, added to a refinement for \eqref{cut}, will yield a refinement for $b_1+b_2= c_1+c_2$. 
Thus, \begin{equation} \label{reducerefinement} \text{If} \;\, \eqref{cut} \;\, \text{has a refinement, then} \;\, b_1+b_2= c_1+c_2 \;\, \text{has a refinement}. \end{equation} This allows us to assume that $b'_i= c'_j =0$ for all $i,j=1,2$. In view of Lemma \ref{equalMn}, the submonoids ${\mathbb{Z}^+} x_n+ {\mathbb{Z}^+} y_n$ and ${\mathbb{Z}^+} y_n+ {\mathbb{Z}^+} z_n$ are both isomorphic to $({\mathbb{Z}^+})^2$. Hence, the desired refinement exists in case the elements $b_1$, $b_2$, $c_1$, $c_2$ either all lie in ${\mathbb{Z}^+} x_n+ {\mathbb{Z}^+} y_n$ or all lie in ${\mathbb{Z}^+} y_n+ {\mathbb{Z}^+} z_n$. Now assume that at least one of $b_1$, $b_2$, $c_1$, $c_2$ is not in ${\mathbb{Z}^+} x_n+ {\mathbb{Z}^+} y_n$ and at least one of these elements is not in ${\mathbb{Z}^+} y_n+ {\mathbb{Z}^+} z_n$. If $b_1$ and $b_2$ are both in ${\mathbb{Z}^+} y_n+ {\mathbb{Z}^+} z_n$, so is $c_1+c_2$, from which we see that $c_1,c_2 \in {\mathbb{Z}^+} y_n+ {\mathbb{Z}^+} z_n$, contradicting our choices. Thus, we may assume that $b_1\notin {\mathbb{Z}^+} y_n+ {\mathbb{Z}^+} z_n$, and similarly that $c_1\notin {\mathbb{Z}^+} y_n+ {\mathbb{Z}^+} z_n$. It follows that $b_1,c_1 \in {\mathbb{N}} x_n+ {\mathbb{Z}^+} y_n$. Now $b_2$ and $c_2$ cannot both be in ${\mathbb{Z}^+} x_n+ {\mathbb{Z}^+} y_n$; without loss of generality, $b_2 \notin {\mathbb{Z}^+} x_n+ {\mathbb{Z}^+} y_n$. It follows that $b_2\in {\mathbb{Z}^+} y_n+ {\mathbb{N}} z_n$. Thus, we may assume that the coefficients $\beta_{is}$, $\gamma_{js}$ have been chosen so that $$\beta_{10}, \gamma_{10}, \beta_{22} > 0, \qquad\quad \beta_{12}, \gamma_{12}, \beta_{20} = 0, \qquad\quad \gamma_{22}=0 \;\, \text{if} \;\, \gamma_{20}>0.$$ There are now two cases to consider, depending on whether $\gamma_{20}$ is zero or not. First, suppose that $\gamma_{20}=0$. 
Then $$\beta_{10}x_n+ (\beta_{11}+ \beta_{21}+ \beta_{22})y_n = \gamma_{10}x_n+ (\gamma_{11}+ \gamma_{21}+ \gamma_{22})y_n \,,$$ and so Lemma \ref{equalMn} implies that $\beta_{10}= \gamma_{10}$ and $\beta_{11}+ \beta_{21}+ \beta_{22} = \gamma_{11}+ \gamma_{21}+ \gamma_{22}$. For any $k>n$, \eqref{xyzntok} shows that \begin{align*} b_1 &= \beta_{10}x_k+ (\beta_{10}(k-n)+ \beta_{11})y_k+ \sum_{l=n+1}^k (\beta_{10}(l-n-1)+ \beta_{11})a_l \\ b_2 &= \beta_{21}y_k+ \beta_{22}z_k+ \sum_{l=n+1}^k (\beta_{21}+\beta_{22})a_l \\ c_1 &= \gamma_{10}x_k+ (\gamma_{10}(k-n)+ \gamma_{11})y_k+ \sum_{l=n+1}^k (\gamma_{10}(l-n-1)+ \gamma_{11})a_l \\ c_2 &= \gamma_{21}y_k+ \gamma_{22}z_k+ \sum_{l=n+1}^k (\gamma_{21}+\gamma_{22})a_l \,. \end{align*} Substitute these expressions in the equation $b_1+b_2= c_1+c_2$, and apply \eqref{reducerefinement}. This shows that it suffices to find a refinement for \begin{align*} \bigl[ \beta_{10}x_k+ (\beta_{10}(k-n)+ \beta_{11})y_k \bigr] &+ \bigl[ \beta_{21}y_k+ \beta_{22}z_k \bigr] = \\ &\bigl[ \gamma_{10}x_k+ (\gamma_{10}(k-n)+ \gamma_{11})y_k \bigr] + \bigl[ \gamma_{21}y_k+ \gamma_{22}z_k \bigr]. \end{align*} Consequently, we may replace $\beta_{11}$ and $\gamma_{11}$ by $\beta_{10}(k-n)+ \beta_{11}$ and $\gamma_{10}(k-n)+ \gamma_{11}$, for any $k>n$. Therefore, after choosing a sufficiently large $k$ and making the above replacement, we may now assume that $$\beta_{11} \ge \gamma_{21}+ \gamma_{22} \,.$$ Recall that $\beta_{11}+ \beta_{21}+ \beta_{22} = \gamma_{11}+ \gamma_{21}+ \gamma_{22}$, so that $\beta_{11}- \gamma_{21}- \gamma_{22}= \gamma_{11}- \beta_{21}- \beta_{22}$. Since also $\beta_{10}= \gamma_{10}$, we now have a refinement $$\bordermatrix{ &c_1&c_2 \cr b_1 & \beta_{10}x_n+ (\beta_{11}-\gamma_{21}-\gamma_{22})y_n &\gamma_{21}y_n+ \gamma_{22}z_n \cr b_2 &\beta_{21}y_n+ \beta_{22}z_n &0 \cr }.$$ Finally, suppose that $\gamma_{20} >0$. 
Then $$\beta_{10}x_n+ (\beta_{11}+ \beta_{21}+ \beta_{22})y_n = (\gamma_{10}+ \gamma_{20})x_n + (\gamma_{11}+\gamma_{21})y_n \,,$$ whence $\beta_{10}= \gamma_{10}+ \gamma_{20}$ and $\beta_{11}+ \beta_{21}+ \beta_{22}= \gamma_{11}+\gamma_{21}$. As in the previous case, we may assume that $\beta_{11} \ge \gamma_{21}$. This allows the refinement $$\bordermatrix{ &c_1&c_2 \cr b_1 & \gamma_{10}x_n+ (\beta_{11}-\gamma_{21})y_n &\gamma_{20}x_n+ \gamma_{21}y_n \cr b_2 &\beta_{21}y_n+ \beta_{22}z_n &0 \cr },$$ completing the proof. \end{proof} \begin{lemma} \label{stateM} $\mathcal{M}$ is conical, stably finite, archimedean, and antisymmetric. \end{lemma} \begin{proof} Conicality was noted in \S\ref{3graphmons}, but will also be an immediate consequence of the present proof. Antisymmetry will follow once we have shown that $\mathcal{M}$ is conical and stably finite. There is a homomorphism $s: \mathcal{M}\rightarrow {\mathbb{Q}}^+$ such that $s(x_n)= s(y_n)= s(z_n)= s(a_n)= 1/2^n$ for all $n$. Since the generators of $\mathcal{M}$ are all mapped to nonzero elements of ${\mathbb{Q}}^+$, we have $s^{-1}(0)= \{0\}$. Conicality, stable finiteness, and the archimedean property follow immediately. \end{proof} \begin{theorem} \label{Mwild} The monoid $\mathcal{M}$ is a wild refinement monoid. \end{theorem} \begin{proof} Refinement holds by Proposition \ref{Mref}, and $\mathcal{M}$ is stably finite by Lemma \ref{stateM}. However, $\mathcal{M}$ is not cancellative, since $x_0+y_0= x_0+z_0$, whereas $y_0\ne z_0$ by Lemma \ref{equalMn}. Therefore Theorem \ref{tame:stablyfinite} shows that $\mathcal{M}$ is wild. \end{proof} \begin{lemma} \label{Mseparative} $\mathcal{M}$ is separative and unperforated. \end{lemma} \begin{proof} Since $\mathcal{M}$ is antisymmetric (Lemma \ref{stateM}), it will follow from lack of perforation that $\mathcal{M}$ is torsionfree, i.e., $(ma=mb \implies a=b)$ for any $m\in {\mathbb{N}}$ and $a,b\in \mathcal{M}$. Torsionfreeness implies separativity. 
Thus, we just need to prove that $\mathcal{M}$ is unperforated. It suffices to prove that each $\mathcal{M}_n$ is unperforated. Since $({\mathbb{Z}^+})^n$ is unperforated, Corollary \ref{Mnisom} shows that it is enough to prove that $\mathcal{M}_0$ is unperforated. Suppose that $m\in{\mathbb{N}}$ and $a,b\in \mathcal{M}_0$ with $ma\le mb$ in $\mathcal{M}_0$, that is, $ma+c =mb$ for some $c\in \mathcal{M}_0$. Write \begin{align*} a &= \alpha_0x_0+ \alpha_1y_0+ \alpha_2z_0 \,, &b &= \beta_0x_0+ \beta_1y_0+ \beta_2z_0 \,, &c &= \gamma_0x_0+ \gamma_1y_0+ \gamma_2z_0 \,, \end{align*} with coefficients $\alpha_i,\beta_i,\gamma_i \in{\mathbb{Z}^+}$. Then $$(m\alpha_0+\gamma_0)x_0+ (m\alpha_1+\gamma_1)y_0+ (m\alpha_2+\gamma_2)z_0 = m\beta_0x_0+ m\beta_1y_0+ m\beta_2z_0 \,,$$ and so Lemma \ref{equalMn} implies that $m\alpha_0+\gamma_0 = m\beta_0$. Thus, $\alpha_0\le \beta_0$. Assume first that $\beta_0=0$, which forces $\alpha_0= \gamma_0 =0$. By Lemma \ref{equalMn}, $m\alpha_i+ \gamma_i= m\beta_i$ for $i=1,2$, and so each $\alpha_i \le \beta_i$. Consequently, $$a+ (\beta_1-\alpha_1)y_0+ (\beta_2-\alpha_2)z_0= \beta_1y_0+ \beta_2z_0= b,$$ proving that $a\le b$ in $\mathcal{M}_0$. Now assume that $\beta_0>0$. In this case, Lemma \ref{equalMn} implies that $$m\alpha_1+\gamma_1+ m\alpha_2+\gamma_2 = m\beta_1+ m\beta_2 \,,$$ and so $\alpha_1+\alpha_2 \le \beta_1+ \beta_2$. Consequently, \begin{align*} a+ (\beta_0-\alpha_0)x_0+ (\beta_1+ \beta_2- \alpha_1- \alpha_2)z_0 &= \beta_0x_0+ \alpha_1y_0+ (\beta_1+ \beta_2- \alpha_1)z_0 \\ &= \beta_0x_0+ (\beta_1+ \beta_2)y_0 =b, \end{align*} and again $a\le b$ in $\mathcal{M}_0$. \end{proof} We conclude the subsection with the following information about the structure of $\mathcal{M}$. More about the ideals of $\mathcal{M}$ will appear in the following subsection. Recall that $u= x_0+y_0$. \begin{lemma} \label{Mirredelts} {\rm (a)} The elements $a_1,a_2,\dots$ are distinct irreducible elements in $\mathcal{M}$. 
{\rm (b)} The submonoid $J_1 := \sum_{n=1}^\infty {\mathbb{Z}^+} a_n$ is an o-ideal of $\mathcal{M}$, and $J_1 \cong ({\mathbb{Z}^+})^{({\mathbb{N}})}$. {\rm (c)} Every nonzero element of $\mathcal{M}$ dominates at least one $a_n$. {\rm (d)} $J_1= \ped (\mathcal{M} )$. {\rm (e)} For $n\in{\mathbb{N}}$, we have $na_n\le u$ but $(n+1)a_n \nleq u$. \end{lemma} \begin{proof} (a) The $a_n$ are distinct by Lemma \ref{equalMn}. Let $A$ be a free abelian group with a basis $\{\alpha_n \mid n\in{\mathbb{N}}\}$. There is a homomorphism $g: \mathcal{M} \rightarrow A\sqcup \{\infty\}$ such that $$g(x_n)= g(y_n)= g(z_n)= \infty \qquad\quad \text{and} \qquad\quad g(a_n)= \alpha_n$$ for all $n$, and since $g$ maps the generators of $\mathcal{M}$ to nonzero elements of the conical monoid $A\sqcup \{\infty\}$, we see that $g^{-1}(0)= \{0\}$. If $v,w\in \mathcal{M}$ with $v+w=a_n$ for some $n$, then $g(v)+g(w)= \alpha_n$. Since $\alpha_n$ is irreducible in $A\sqcup \{\infty\}$, either $g(v)=0$ or $g(w)=0$, and thus $v=0$ or $w=0$. This shows that $a_n$ is irreducible in $\mathcal{M}$. (b) These properties follow from Proposition \ref{irredelementgenideal}. (c) Let $b$ be a nonzero element of $\mathcal{M}$. Then $b\in \mathcal{M}_n$ for some $n$, and so $$b= mx_n+ iy_n+ jz_n+ \sum_{l=1}^n k_la_l$$ with coefficients in ${\mathbb{Z}^+}$, not all zero. If some $k_l>0$, then $b\ge a_l$. Otherwise, \begin{align*} b &= mx_n+ iy_n+ jz_n = m(x_{n+1}+y_{n+1}) + i(y_{n+1}+a_{n+1})+ j(z_{n+1}+a_{n+1}) \\ &= mx_{n+2}+ (2m+i)y_{n+2}+ jz_{n+2}+ (i+j)a_{n+1}+ (m+i+j)a_{n+2} \,, \end{align*} from which it follows that $b\ge a_{n+2}$. (d) Clearly $J_1\subseteq \ped (\mathcal{M})$. If $a$ is an irreducible element in $\mathcal{M}$, then $a_n\le a$ for some $n$ by (c), so that $a=a_n$. This shows that $\ped (\mathcal{M} ) \subseteq J_1$. (e) It follows from \eqref{xyzntok} that $u= x_n+ (n+1)y_n+ \sum_{l=1}^n la_l$ for $n\in {\mathbb{N}}$, whence $u\ge na_n$. Suppose $(n+1)a_n \le u$. 
Then $(n+1)a_n+b =u$ for some $b\in \mathcal{M}$, say $b\in \mathcal{M}_k$ for some $k\ge n$. Write $b= mx_k+ iy_k+ jz_k+ \sum_{l=1}^k k_la_l$ with coefficients in ${\mathbb{Z}^+}$. Since $u= x_k+ (k+1)y_k+ \sum_{l=1}^k la_l$, Lemma \ref{equalMn} implies that $n+1+k_n= n$, which is impossible. Therefore $(n+1)a_n \nleq u$. \end{proof} \subsection{Structure of $\ol{\calM}$} \begin{lemma} \label{Mbarisom} There is a surjective homomorphism $q: \mathcal{M} \rightarrow \ol{\calM}$ such that \begin{align*} q(a_n) &= 0, &q(x_n) &= \ol{x}_n \,, &q(y_n) &= \ol{y}_0 \,, &q(z_n) &= \ol{z}_0 \end{align*} for all $n$, and $q$ induces an isomorphism of $\mathcal{M}/J_1$ onto $\ol{\calM}$. \end{lemma} \begin{proof} Since $\ol{y}_n= \ol{y}_0$ and $\ol{z}_n= \ol{z}_0$ for all $n$, the existence of $q$ is clear. This homomorphism sends all elements of $J_1$ to $0$, and so it induces a homomorphism $\ol{q}: \mathcal{M}/J_1\rightarrow \ol{\calM}$. Since $y_n \equiv_{J_1} y_{n+1}$ and $z_n \equiv_{J_1} z_{n+1}$ for all $n$, there is a homomorphism $q': \ol{\calM} \rightarrow \mathcal{M}/J_1$ such that \begin{align*} q'(\ol{y}_0) &= (y_0/{\equiv}_{J_1}), &q'(\ol{z}_0) &= (z_0/{\equiv}_{J_1}), &q'(\ol{x}_n) &= (x_n/{\equiv}_{J_1}) \end{align*} for all $n$. Clearly, $q'$ and $\ol{q}$ are mutual inverses. \end{proof} \begin{corollary} \label{Mbarunperf} $\ol{\calM}$ is separative and unperforated. \end{corollary} \begin{proof} Separativity and unperforation, which hold in $\mathcal{M}$ by Lemma \ref{Mseparative}, pass to $\mathcal{M}/J_1 \cong \ol{\calM}$. \end{proof} \begin{lemma} \label{Mbarstabfin} $\ol{\calM}$ is conical, stably finite, and antisymmetric. \end{lemma} \begin{proof} Set $B := ({\mathbb{Z}^+}\times \{0\})+ ({\mathbb{Z}}\times{\mathbb{N}})$, which is a conical submonoid of the group ${\mathbb{Z}}^2$. There is a homomorphism $t: \ol{\calM}\rightarrow B$ such that $$t(\ol{y}_0)= t(\ol{z}_0)= (1,0) \qquad\quad \text{and} \qquad\quad t(\ol{x}_n)= (1-n,1)$$ for all $n$. 
Since $t$ sends the generators of $\ol{\calM}$ to nonzero elements of $B$, we have $t^{-1}(0)= \{0\}$. Conicality and stable finiteness follow, and these two properties imply antisymmetry. \end{proof} \begin{lemma} \label{equalMbar} Let $n \in {\mathbb{Z}^+}$. {\rm(a)} ${\mathbb{Z}^+} \ol{y}_0+ {\mathbb{Z}^+}\ol{z}_0+ \sum_{l=0}^n {\mathbb{Z}^+}\ol{x}_l = {\mathbb{Z}^+} \ol{y}_0+ {\mathbb{Z}^+}\ol{z}_0+ {\mathbb{Z}^+}\ol{x}_n$. {\rm(b)} Let $i,i',j,j',k,k' \in {\mathbb{Z}^+}$. Then \begin{equation} \label{Mbareqn} i\ol{y}_0+ j\ol{z}_0+ k\ol{x}_n = i'\ol{y}_0+ j'\ol{z}_0+ k'\ol{x}_n \end{equation} in $\ol{\calM}$ if and only if \begin{equation} \label{Mbarconds} (k=k'=0, \;\, i=i', \;\, j=j') \quad \text{or} \quad (k=k'>0, \;\, i+j= i'+j'). \end{equation} \end{lemma} \begin{proof} (a) This is clear from the fact that $\ol{x}_l= \ol{x}_{l+1}+ \ol{y}_0$ for all $l=0,\dots,n-1$. (b) Since $i\ol{y}_0+ j\ol{z}_0+ \ol{x}_n= (i+j)\ol{y}_0+ \ol{x}_n$ and $i'\ol{y}_0+ j'\ol{z}_0+ \ol{x}_n= (i'+j')\ol{y}_0+ \ol{x}_n$, the implication \eqref{Mbarconds}$\Longrightarrow$\eqref{Mbareqn} is clear. Conversely, assume that \eqref{Mbareqn} holds. There is a homomorphism $f: \ol{\calM}\rightarrow {\mathbb{Z}^+}$ such that $$f(\ol{y}_0)= f(\ol{z}_0) = 0 \qquad\quad \text{and} \qquad\quad f(\ol{x}_l) =1$$ for all $l$. Applying $f$ to \eqref{Mbareqn} yields $k=k'$. Assume first that $k=0$. There is a homomorphism $g: \ol{\calM} \rightarrow ({\mathbb{Z}^+})^2\sqcup\{\infty\}$ such that \begin{align*} g(\ol{y}_0) &= (1,0), &g(\ol{z}_0) &= (0,1), &g(\ol{x}_l) &= \infty \end{align*} for all $l$. Applying $g$ to \eqref{Mbareqn} yields $(i,j)= (i',j')$, so $i=i'$ and $j=j'$. Now suppose that $k>0$, and apply the homomorphism $t$ from the proof of Lemma \ref{Mbarstabfin} to \eqref{Mbareqn}, to get $$(i+j+k(1-n),\, k)= (i'+j'+k(1-n),\, k).$$ Therefore $i+j= i'+j'$ in this case. \end{proof} \begin{theorem} \label{Mbarwild} The monoid $\ol{\calM}$ is a wild refinement monoid. 
\end{theorem} \begin{proof} Since $\ol{\calM} \cong \mathcal{M}/J_1$, refinement passes from $\mathcal{M}$ to $\ol{\calM}$. Now $\ol{\calM}$ is stably finite by Lemma \ref{Mbarstabfin}, but it is not cancellative, because $\ol{x}_0+\ol{y}_0= \ol{x}_0+\ol{z}_0$ while $\ol{y}_0 \ne \ol{z}_0$ by Lemma \ref{equalMbar}. Therefore Theorem \ref{tame:stablyfinite} shows that $\ol{\calM}$ is wild. \end{proof} \begin{lemma} \label{Mbarirredelts} {\rm(a)} The elements $\ol{y}_0$ and $\ol{z}_0$ are distinct irreducible elements in $\ol{\calM}$. {\rm(b)} The submonoid $\ol{J}_2 := {\mathbb{Z}^+}\ol{y}_0+ {\mathbb{Z}^+}\ol{z}_0$ is an o-ideal of $\ol{\calM}$, and $\ol{J}_2 \cong ({\mathbb{Z}^+})^2$. {\rm(c)} $\ol{\calM}/\ol{J}_2 \cong {\mathbb{Z}^+}$. {\rm(d)} Every nonzero element of $\ol{\calM}$ dominates $\ol{y}_0$ or $\ol{z}_0$. {\rm(e)} $\ol{J}_2= \ped (\ol{\calM} )$. {\rm(f)} $n\ol{y}_0+ n\ol{z}_0 \le \ol{x}_0$ for all $n\in{\mathbb{N}}$. \end{lemma} \begin{proof} (a) As already noted, Lemma \ref{equalMbar} implies that $\ol{y}_0 \ne \ol{z}_0$. Recall the homomorphism $g: \ol{\calM}\rightarrow ({\mathbb{Z}^+})^2\sqcup\{\infty\}$ in the proof of that lemma. Since $g(\ol{y}_0)= (1,0)$ and $g(\ol{z}_0)= (0,1)$ are irreducible elements of $({\mathbb{Z}^+})^2\sqcup\{\infty\}$, it follows that $\ol{y}_0$ and $\ol{z}_0$ are irreducible elements in $\ol{\calM}$. (b) This follows from Proposition \ref{irredelementgenideal}. (c) Recall the homomorphism $f: \ol{\calM}\rightarrow {\mathbb{Z}^+}$ from the proof of Lemma \ref{equalMbar}. Since $f$ sends all elements of $\ol{J}_2$ to $0$, it induces a homomorphism $\ol{f}: \ol{\calM}/\ol{J}_2 \rightarrow {\mathbb{Z}^+}$. The homomorphism ${\mathbb{Z}^+}\rightarrow \ol{\calM}/\ol{J}_2$ that sends $1$ to $\ol{x}_0/{\equiv}_{\ol{J}_2}$ is an inverse for $\ol{f}$. (d) Any nonzero element $b\in \ol{\calM}$ can be written as $i\ol{y}_0+ j\ol{z}_0+ k\ol{x}_n$ for some $n\in {\mathbb{N}}$ and $i,j,k\in {\mathbb{Z}^+}$, not all zero. 
If $i>0$ or $j>0$, then $b\ge \ol{y}_0$ or $b\ge \ol{z}_0$. Otherwise, $b\ge \ol{x}_n = \ol{x}_{n+1}+ \ol{y}_0 \ge \ol{y}_0$. (e) Same as in Lemma \ref{Mirredelts}(d). (f) It follows from \eqref{relnsMbar} that $\ol{x}_0= \ol{x}_{2n}+ n\ol{y}_0+ n\ol{z}_0$ for all $n$. \end{proof} \subsection{Summary} \label{summary} We summarize the main properties of $\mathcal{M}$ and $\ol{\calM}$. \begin{theorem} \label{MMbarsummary} The monoids $\mathcal{M}$ and $\ol{\calM}$ are wild refinement monoids. They are conical, stably finite, antisymmetric, separative, and unperforated. The monoid $\mathcal{M}$ is archimedean, but $\ol{\calM}$ is not. There are o-ideals $J_1\subset J_2$ in $\mathcal{M}$ such that \begin{align*} J_1 &\cong ({\mathbb{Z}^+})^{({\mathbb{N}})}, &J_2/J_1 &\cong ({\mathbb{Z}^+})^2, &\mathcal{M}/J_2 &\cong {\mathbb{Z}^+}. \end{align*} Moreover, $\ol{\calM}\cong \mathcal{M}/J_1$, and $\ol{\calM}$ has an o-ideal $\ol{J}_2$ such that $\ol{J}_2 \cong ({\mathbb{Z}^+})^2$ and $\ol{\calM}/\ol{J}_2 \cong {\mathbb{Z}^+}$. \end{theorem} \begin{proof} The properties stated in the first two sentences are proved in Theorems \ref{Mwild}, \ref{Mbarwild}, Lemmas \ref{stateM}, \ref{Mseparative}, \ref{Mbarstabfin}, and Corollary \ref{Mbarunperf}. Further, Lemma \ref{stateM} shows that $\mathcal{M}$ is archimedean. By Lemma \ref{Mbarirredelts}, $\ol{y}_0\ne 0$ and $n\ol{y}_0 \le \ol{x}_0$ for all $n\in{\mathbb{N}}$, so $\ol{\calM}$ is not archimedean. The statements about o-ideals and quotients follow from Lemmas \ref{Mirredelts}, \ref{Mbarisom}, \ref{Mbarirredelts}. \end{proof} \begin{remark} \label{MMbarexamples} The monoids $\mathcal{M}$ and $\ol{\calM}$ illustrate other aspects of refinement monoid behavior as well. For instance, the archimedean refinement monoid $\mathcal{M}$ has a non-archimedean quotient $\mathcal{M}/J_1$. 
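The small monoid arithmetic underlying these examples can even be machine-checked. The following Python sketch (our own illustration, not part of the argument) models $\mathcal{M}_0 = \langle x,y,z \mid x+y=x+z\rangle$ via the normal form implied by Lemma \ref{equalMn}: an element $mx_0+iy_0+jz_0$ is determined by $(m,i,j)$ when $m=0$, and by $(m,\,i+j)$ when $m>0$.

```python
# Model of M_0 = < x, y, z | x + y = x + z >, using the normal form from
# Lemma "equalMn": m*x + i*y + j*z is determined by (m, i, j) when m = 0,
# and by (m, i + j) when m > 0.

def normal(m, i, j):
    """Normal form of the element m*x + i*y + j*z of M_0."""
    return (m, i, j) if m == 0 else (m, i + j, 0)

def add(u, v):
    """Addition in M_0 is coordinatewise, followed by normalization."""
    return normal(u[0] + v[0], u[1] + v[1], u[2] + v[2])

x0, y0, z0 = normal(1, 0, 0), normal(0, 1, 0), normal(0, 0, 1)

# Non-cancellativity: x0 + y0 = x0 + z0 although y0 != z0.
assert add(x0, y0) == add(x0, z0) and y0 != z0

# Torsionfreeness on a small range: 2a = 2b forces a = b.
elems = [normal(m, i, j) for m in range(4) for i in range(4) for j in range(4)]
for a in elems:
    for b in elems:
        if add(a, a) == add(b, b):
            assert a == b
```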
We also note that ${\mathbb{Z}^+}\, \ol{z}_0$ is an o-ideal of $\ol{\calM}$ and that $\ol{\calM}/{\mathbb{Z}^+}\, \ol{z}_0$ has a presentation $\langle x,y \mid x+y=x\rangle$. This gives an example of a non-stably-finite quotient of a stably finite refinement monoid. \end{remark} \section{Open Problems} \subsection{} Find axioms that characterize tameness for refinement monoids. A theorem on the model of Theorem \ref{tamecriterion:stablyfinite} would be the ideal result, where the characterizing axioms might include unperforation and separativity (recall Theorem \ref{tame:sepunperf}). \subsection{} Find conditions (C), applying to refinement monoids $M$ with o-ideals $J$, such that tameness of $J$ and $M/J$, together with (C), implies that $M$ is tame. I.e., find conditions under which the converse of Proposition \ref{prop:tame-closed-under-quotients} holds. This converse fails in general, as Theorem \ref{MMbarsummary} shows. \section*{Acknowledgments} We thank the referee for his/her thorough reading of the manuscript and for useful suggestions, particularly Proposition \ref{prop:Riesz-prop}, and we thank E. Pardo and F. Wehrung for helpful correspondence and references.
https://arxiv.org/abs/1510.00511
Simple polytopes without small separators
We show that by cutting off the vertices and then the edges of neighborly cubical polytopes, one obtains simple 4-dimensional polytopes with n vertices such that all separators of the graph have size at least $\Omega(n/\log^{3/2}n)$. This disproves a conjecture by Kalai from 1991/2004.
\section{Introduction} The Lipton--Tarjan planar separator theorem from 1979 \cite{LiTa} states that for any \emph{separation constant} $c$, with $0<c<\nicefrac12$, the vertex set of any planar graph on $n$ vertices can be partitioned into three sets $A,B,C$ with $cn\le|A|\le|B|\le(1-c)n$ and $|C|=O(\sqrt n)$, such that $C$ separates $A$ from $B$, that is, there are no edges between $A$ and~$B$. Traditionally $c=\nicefrac13$ is used. Miller, Thurston et al.\ \cite{MillerThurston-separators} in 1990/1991 provided a geometric proof for the planar separator theorem, combining the fact that every $3$-polytope has an edge tangent representation by the Koebe--Andreev--Thurston circle packing theorem with the center point theorem. Miller, Teng, Thurston \& Vavasis \cite{MillerTengThurstonVavasis} generalized the planar separator theorem to $d$ dimensions, that is, to the intersection graphs of suitable ball packings in~$\R^d$. In view of this, Kalai noted that there is no separator theorem for general $d$-polytopes, due to the existence of the cyclic polytopes, whose graph is complete for $d\ge4$ and thus has no separators. However, he conjectured that the graphs of \emph{simple} $d$-polytopes cannot be good expanders, that is, they all should have small separators. Specifically, in his 1991 paper on diameters and $f$-vector theory \cite[Conj.~12.1]{Ka2} (repeated in the 1997 first edition of~\cite{kalai04:_polyt}) he postulated that for every $d\ge3$ any simple $d$-polytope on $n$ vertices should have a separator of size \[ O\big( n^{1-\frac1{\floor{d/2}}}\big), \] which fails for $d=3$, while for $d=4$ it postulates separators of size $O(\sqrt n)$. In the 2004 second edition of the \emph{Handbook} \cite[Conj.~20.2.12]{kalai04:_polyt} he corrected this to ask for separators of size \[ O\big( n^{1-\frac1{d-1}}\big), \] which for $d=3$ yields the planar separator theorem, and for $d=4$ conjectures the existence of separators of size $O(n^{2/3})$. 
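The separator condition in the Lipton--Tarjan theorem is easy to state operationally. The following Python sketch (our own illustration; \texttt{adj} is an adjacency dictionary, and $A$ and $B$ may be swapped) checks the balance and separation requirements for a candidate partition $(A,B,C)$:

```python
def is_separator(adj, A, B, C, c=1/3):
    """Check the Lipton--Tarjan separator condition: A, B, C partition the
    vertex set, c*n <= |A|, |B| <= (1-c)*n, and no edge joins A to B."""
    A, B, C = set(A), set(B), set(C)
    n = len(adj)
    if A | B | C != set(adj) or A & B or A & C or B & C:
        return False
    if min(len(A), len(B)) < c * n or max(len(A), len(B)) > (1 - c) * n:
        return False
    return not any(v in B for u in A for v in adj[u])

# Path 0 - 1 - 2: the middle vertex separates the two endpoints.
path = {0: [1], 1: [0, 2], 2: [1]}
assert is_separator(path, {0}, {2}, {1})
assert not is_separator(path, {0, 1}, {2}, set())
```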
At that time Kalai also referred to \cite{MillerTengThurstonVavasis} for the claim that there are triangulations of~$S^3$ on $n$ tetrahedra that cannot even be separated by $O(n/\log n)$ vertices. This is not stated in the paper \cite{MillerTengThurstonVavasis}, but it refers to a construction of Thurston who had described to his coauthors an embedding of the cube-connected cycle graph in $\R^3$ as the dual graph of a configuration of tetrahedra. Details about this construction seem to be lost (Gary Miller, personal communication 2015). In this note, we disprove Kalai's conjectures, and come close to confirming Thurston's claim. Our construction uses the existence of \emph{neighborly cubical $4$-polytopes} $\NC_4(m)$, first proved by Joswig \& Ziegler \cite{Z62}: For each $m\ge4$ there is a $4$-dimensional polytope $\NC_4(m)$ whose graph is isomorphic to the graph $C_m$ of the $m$-cube and whose facets are combinatorial $3$-cubes. \begin{theorem}\label{thm:main} For any $m\ge4$, ``cutting off the vertices, and then the original edges'' from a neighborly cubical $4$-polytope $\NC_4(m)$ results in a simple $4$-dimensional polytope $\NC_4(m)''$ with $n:=(6m-12)2^m$ vertices whose graph has no separator of size less than \[\Omega\Big(\frac n{\log^{3/2}n}\Big),\] while separators of size \[O\Big(\frac n{\log n}\Big)\] exist. \end{theorem} To prove this, we only use the \emph{existence} of neighborly cubical $4$-polytopes, and the fact that the $f$-vector of any such polytope is \[ f(\NC_4(m)) = (f_0,f_1,f_2,f_3) = 2^{m-2} (4, 2m, 3m-6, m-2), \] but not a complete combinatorial description, as given in~\cite[Thm.~18]{Z62}. Indeed, it was later established by Sanyal \& Ziegler \cite{Z102} that there are \emph{many} different combinatorial types, and Theorem~\ref{thm:main} and its proof are valid for all of them. It may still be that for some specific neighborly cubical $4$-polytopes all separators in the resulting simple polytopes have size at least $\Omega(n/\log n)$.
This would strongly confirm Thurston's claim. On the other hand, all simple $4$-polytopes that are constructed according to Theorem \ref{thm:main} have separators of size $O(n/\log n)$. However, we do not know whether such separators exist for arbitrary simple $4$-polytopes on $n$ vertices. This paper is structured as follows. In Section~\ref{sec:construct} we describe the construction of $\NC_4(m)''$, compute its $f$-vector, establish that it is simple, and give a “coarse” description of the graph $C''_m:=G(\NC_4(m)'')$. In Section~\ref{sec:separator} we show that the graph $C''_m$ has no small separators. This follows from elementary and well-known expansion properties of the cube graph $C_m$. In Section~\ref{sec:upper_bound} we exhibit separators of size $O(n/\log n)$ in the graphs $C_m''$ derived from \emph{arbitrary} neighborly cubical $4$-polytopes. Finally, in Section~\ref{sec:comments} we extend all this to simple $d$-dimensional polytopes for $d\ge4$. \section{Doubly truncated neighborly cubical polytopes}\label{sec:construct} A \emph{neighborly cubical $d$-polytope} $\NC_d(m)$ is a $d$-dimensional convex polytope whose $k$-skeleton for $2k+2 \leq d$ is isomorphic to that of the $m$-cube. It is required to be \emph{cubical}, which means that all of its faces are combinatorial cubes. The existence of such polytopes was established by Joswig \& Ziegler \cite{Z62}. For $4$-dimensional polytopes, the complete flag vector (that is, the extended $f$-vector of Bayer \& Billera \cite{BaBi}) is determined by the $f$-vector together with the number $f_{03}$ of vertex-facet incidences. Let $m\ge4$. We start our constructions with a neighborly cubical $4$-polytope $\NC_4(m)$ with the graph ($1$-skeleton) of the $m$-cube, so $f_0=2^m$ and $f_1=m2^{m-1}$. The rest of the flag vector is now obtained from the Euler equation together with the fact that $\NC_4(m)$ is cubical: Each facet has $6$ $2$-faces and $8$ vertices, which yields $6f_3=2f_{2}$ and $8f_3=f_{03}$. 
Thus we obtain \begin{eqnarray*} \textrm{flag} (\NC_4(m))&:=&(f_0, f_1, f_2, f_3; f_{03}) \\ &=& (2^m,m2^{m-1},3(m-2)2^{m-2},(m-2)2^{m-2}; 8(m-2)2^{m-2} )\\ &=& (4, 2m, 3m-6, m-2; 8m-16)\cdot2^{m-2}. \end{eqnarray*} We generate the polytope $\NC_4(m)'$ from $\NC_4(m)$ by cutting off all of its vertices. The resulting polytope thus has \begin{compactitem}[ $\bullet$ ] \item $(m-2)2^{m-2}$ facets that are $3$-cubes whose vertices have been cut off, with $f$-vector $(24,36,14)$, \item $2^m$ facets that are simplicial $3$-polytopes, each with $f$-vector $(m,3m-6,2m-4)$. \end{compactitem} The latter facets are the vertex figures of $\NC_4(m)$, which are simplicial since the facets of $\NC_4(m)$, which are $3$-cubes, are simple. The resulting $4$-polytope has the following flag vector: \begin{eqnarray*} \textrm{flag} (\NC_4(m)') &=& (4m, 14m-24, 11m-22, m+2; 28m-48)\cdot2^{m-2}. \end{eqnarray*} We now generate $\NC_4(m)''$ from $\NC_4(m)'$ by cutting off the edges which come from edges in the original polytope $\NC_4(m)$ (but have been shortened in the transition to $\NC_4(m)'$). The resulting polytope has three types of facets: \begin{compactitem}[ $\bullet$ ] \item $(m-2)2^{m-2}$ facets that are $3$-cubes whose vertices and subsequently the original cube edges have been cut off, with $f$-vector $(48,72,26)$, \item $2^m$ simple $3$-polytopes with $f$-vector $(6m-12,9m-18,3m-4)$, which arise by cutting off the vertices of a simplicial $3$-polytope with $m$ vertices, and \item $m2^{m-1}$ prisms over polygons that may range between triangles and $(m-1)$-gons. \end{compactitem} Again from the available information one can easily work out the flag vector, \begin{eqnarray*} \textrm{flag} (\NC_4(m)'') &=& (24m-48, 48m-96, 27m-46, 3m+2; 96m-192)\cdot2^{m-2}. \end{eqnarray*} In particular, we can see from the $f$-vector that $f_1=2f_0$, so $\NC_4(m)''$ is simple.
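These flag-vector computations are routine but error-prone, and the $f$-vector parts can be sanity-checked mechanically. A small Python sketch (our own check, stated per $2^{m-2}$ units as above):

```python
# f-vectors per 2^(m-2) units, as computed above.
def f_vectors(m):
    nc  = (4, 2 * m, 3 * m - 6, m - 2)                        # NC_4(m)
    nc1 = (4 * m, 14 * m - 24, 11 * m - 22, m + 2)            # NC_4(m)'
    nc2 = (24 * m - 48, 48 * m - 96, 27 * m - 46, 3 * m + 2)  # NC_4(m)''
    return nc, nc1, nc2

for m in range(4, 100):
    for f0, f1, f2, f3 in f_vectors(m):
        assert f0 - f1 + f2 - f3 == 0        # Euler's relation
    f0, f1, f2, f3 = f_vectors(m)[2]
    assert f1 == 2 * f0                      # NC_4(m)'' is simple
    assert f3 == (m - 2) + 4 + 2 * m         # its three facet classes
```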
Indeed, from \emph{any} $4$-polytope one gets a simple polytope by first cutting off the vertices, then the original edges. This may also be visualized in the dual picture: \emph{Any} $4$-polytope may be made simplicial by first stacking onto the facets, and then onto the ridges of the original polytope. After the first step, the facets are pyramids over the original ridges. The second step corresponds to subdividing each pyramid at a point in its base, which splits the pyramid into tetrahedra. More generally, for $d$-polytopes we observe the following. \begin{proposition}[see Ewald \& Shephard \cite{EwSh}]\label{prop:simplify} For $d\ge2$ and $0< k< d$ let $P$ be a $d$-polytope. Denote by $P^{(k)}$ the result of truncating the vertices, edges, etc.\ up to the $(k-1)$-faces of $P$, in this order. Then the polytopes $P^{(d-2)}$ and $P^{(d-1)}$ are simple. \end{proposition} Indeed, in the dual picture stacking onto facets etc.\ down to edges, which yields $P^{(d-1)}$, corresponds to the barycentric subdivision of the boundary complex of the polytope. Subdividing the edges is unnecessary for our purpose, since these are already simplices. \section{No small separators} \label{sec:separator} Let $C_m$ be the graph of the $m$-cube, whose vertex set we identify with $\{0,1\}^m$. It has $2^m$ vertices and $m2^{m-1}$ edges. For any subset $S\subseteq V$ of the vertex set, its \emph{neighborhood} is defined as $N(S) := \{v \in V\setminus S : \{u,v\} \text{ is an edge for some }u\in S\}$. Harper solved the ``discrete isoperimetric problem'' in the $m$-cube in the sixties \cite{Harp}: For given cardinality $|S|$, the cardinality of its neighborhood $|N(S)|$ is minimized by taking $S = \{v \in V : \sum_{i=1}^m v_i \leq d \}$ for some $d$ and taking the rest of the vertices with coordinate sum $d+1$ in lexicographic order. See Bollobás \cite{Bollobas-Combinatorics} or Harper \cite{Harper2004} for expositions.
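Harper's theorem is easy to confirm by brute force on very small cubes. The following Python sketch (illustration only) checks on $C_3$ that initial segments of the order ``by coordinate sum, then lexicographically'' minimize the vertex neighborhood over all sets of each cardinality:

```python
from itertools import combinations

# Brute-force check of Harper's vertex-isoperimetric theorem on C_3.
m = 3
V = range(2 ** m)

def nbhd(S):
    """Vertex neighborhood N(S) in the cube graph C_m."""
    S = set(S)
    return {v ^ (1 << i) for v in S for i in range(m)} - S

# Harper order: by coordinate sum, ties broken lexicographically.
harper = sorted(V, key=lambda v: (bin(v).count("1"), v))

for s in range(2 ** m + 1):
    best = min(len(nbhd(S)) for S in combinations(V, s))
    assert len(nbhd(harper[:s])) == best
```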
Thus optimal separators in the cube graph $C_m$ are obtained by taking level sets, of size $\binom{m}k$. Here the usual asymptotics for binomial coefficients (as given by the central limit theorem, see e.g.\ \cite[Sect.~6.4]{Spencer:Asymptopia}) tell us that all the separators in the cube graph~$C_m$ have cardinality at least $\Omega(2^m/\sqrt{m})$, where the implied constant depends on the separation constant~$c$. The graph $C'_m$ is obtained from the cube graph $C_m$ by replacing each node by a maximal planar graph on $m$ vertices and $3m-6$ edges. (Note that this description does not specify the graph $C'_m$ completely.) In the transition from $C'_m$ to $C''_m$, the $2^m$ planar graphs grow into cubic ($3$-regular) graphs on $2(3m-6)=6m-12$ vertices each, which we call the \emph{clusters} of~$C_m''$. Each of the $m2^{m-1}$ edges between two vertices of~$C_m$ resp.\ between the maximal planar graphs in $C'_m$ gives rise to a number of edges (at least~$3$, at most $m-1$) between the corresponding two clusters in $C_m''$. While the cube graph~$C_m$ has $m2^{m-1}$ edges, the modified graph $C_m''$ has $(6m-12)2^{m-1}$ edges between clusters. Thus in the transition from $C_m$ to $C_m''$, the cube graph edges are replaced by less than $6$ edges on average. $C_m''$ is a $4$-regular graph on $n=(6m-12)2^m$ vertices. Consider an arbitrary separator of $C_m''$, consisting of two disjoint sets of vertices $A$ and $B$ with $cn \le |A|\le |B|\le(1-c)n$ and a set~$C$ that contains the remaining vertices. From this we can generate a separator for the cube graph $C_m$ by labeling its vertices with $a$ or $b$ if the corresponding cluster in $C_m''$ has vertices only in $A$ or only in $B$, respectively. The remaining vertices will be labeled with $c$. There cannot be any neighboring vertices in $C_m$ labeled by $a$ and $b$, since this would imply neighboring $A$ and $B$ clusters in $C_m''$. 
The set of vertices of $C_m$ labeled $a$ has size at most $(1-c)2^m$, since the corresponding clusters lie entirely in $A$ and $|A|\le(1-c)n$; the same is true for the set of vertices labeled $b$. Thus, for any fixed $c'<c$, unless the set of vertices labeled $c$ has size linear in $2^m$ (and thus we are done, as each such vertex corresponds to a cluster that contains a vertex of $C$), both the sets of vertices with labels $a$ and $b$ have size at least $c'2^m$, and thus we have constructed a separator for $C_m$. By the isoperimetric inequality for vertex neighborhoods, there must be $\Omega(2^m/\sqrt{m})$ vertices labeled with $c$ and hence at least as many vertices in the separator for $C_m''$. Thus all separators for the graph $C''_m$ of $\NC_4(m)''$ have size at least \[ \Omega\Big(\frac{2^m}{\sqrt m}\Big) \ =\ \Omega\Big(\frac{n}{\log^{3/2} n}\Big). \] \section{Small separators} \label{sec:upper_bound} Here we argue that for \emph{any} neighborly cubical $4$-polytope $\NC_4(m)$, the derived simple $4$-polytope $\NC_4(m)''$ on $n=(6m-12)2^m$ vertices has a separator of size \[ O(2^m) \ =\ O\Big(\frac{n}{\log n}\Big). \] Indeed, with respect to the identification of the vertex set of $C_m$ with $\{0,1\}^m$, choose a random coordinate (``edge direction''), and divide the vertices of $C_m$ into two sets by whether the corresponding vertex label is $0$ or $1$. This corresponds to cutting the $m$-cube into two $(m-1)$-cubes, with $2^{m-1}$ edges between them. This cutting also divides the vertex set of $C_m''$ into two equal halves, containing $n/2=(3m-6)2^m$ vertices each. In $C_m''$, there is an average of less than 6 edges between adjacent clusters. For a random coordinate direction, the expected number of edges between the two equal halves of $C_m''$ is less than $6\cdot 2^{m-1}=3\cdot 2^m$. Thus by choosing a suitable coordinate, and removing one end vertex of each edge of $C_m''$ in the corresponding direction, we obtain a separator of size less than $3\cdot 2^m$.
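The coordinate-cut count used above is easy to verify on the cube graph itself. A minimal Python sketch (our illustration; it models $C_m$, not the polytope graph $C_m''$, which depends on the particular neighborly cubical polytope):

```python
def cube_edges(m):
    """Edges of the cube graph C_m on vertex set {0, ..., 2^m - 1}."""
    return [(v, v ^ (1 << i)) for v in range(2 ** m)
            for i in range(m) if v < v ^ (1 << i)]

def coordinate_cut(m, i):
    """Edges of C_m whose endpoints differ in coordinate i."""
    return [(u, v) for (u, v) in cube_edges(m)
            if (u >> i) & 1 != (v >> i) & 1]

# C_m has m * 2^(m-1) edges; each coordinate direction cuts exactly 2^(m-1).
for m in range(2, 11):
    assert len(cube_edges(m)) == m * 2 ** (m - 1)
    for i in range(m):
        assert len(coordinate_cut(m, i)) == 2 ** (m - 1)
```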
\section{More generally} \label{sec:comments} We can extend the result of Theorem~\ref{thm:main} to dimensions $d>4$ by taking the product of $\NC_4(m)''$ and the standard $(d-4)$-cube. For a fixed dimension $d$ this gives a sequence of polytopes with $2^{d-4}$ times as many vertices. We can find a separator in this graph by taking a product of a separator in $\NC_4(m)''$ and the standard $(d-4)$-cube, so these polytopes are at least as easy to separate as $\NC_4(m)''$. On the other hand, the graph of this polytope again has a cube-like structure, with $2^m$ clusters that are products of a cubic planar graph on $6m-12$ vertices with the fixed graph $C_{d-4}$. Again we need to remove at least \[ \Omega\Big(\frac{2^m}{\sqrt m}\Big) \ =\ \Omega\Big(\frac{n}{\log^{3/2} n}\Big) \] vertices to separate it. \begin{corollary} For each $d\ge4$ there is a sequence of simple $d$-polytopes whose graphs (on $n$ vertices) have no separators that are smaller than \[ \Omega\Big(\frac{n}{\log^{3/2} n}\Big), \] but which have separators of size \[ O\Big(\frac{n}{\log n}\Big). \] \end{corollary} Alternatively, one could try to start with neighborly cubical $d$-polytopes $\NC_d(m)$ and to “simplify” them by Proposition~\ref{prop:simplify}. The resulting simple $d$-polytopes have graphs that are again similar to those of $m$-cubes, where however the clusters have a size of the order of $\Theta(m^{\lfloor d/2\rfloor-1})$ for fixed $d$, and thus we get $n=\Theta(m^{\lfloor d/2\rfloor-1}2^m)$ vertices in total, and thus separators of size \[ O(2^m) \ =\ O\Big(\frac{n}{\log^{\lfloor d/2\rfloor-1}n}\Big). \] So the product construction sketched above is better for $d>5$. \subsubsection*{Acknowledgements.} We are grateful to Hao Chen and Arnau Padrol for useful discussions. \bibliographystyle{siam}
https://arxiv.org/abs/0902.3075
On Partitions of Finite Vector Spaces
In this note, we give a new necessary condition for the existence of non-trivial partitions of a finite vector space. Precisely, we prove that, if V is a finite vector space over a field of order q, then the number of subspaces of minimum dimension t in a non-trivial partition of V is greater than or equal to q+t. Moreover, we give some extensions of a well-known result of Beutelspacher and Heden on the existence of T-partitions.
\section{Introduction} \label{S:intro} \bigskip \noindent A partition ${\bf P}$ of the $n$-dimensional vector space $V_{n}{(q)}$ over the finite field with $q$ elements is a set of non-zero subspaces (components) of $V_{n}{(q)}$ such that each non-zero element of $V_{n}{(q)}$ is contained in exactly one element of ${\bf P}$. Interest in the problems of existence, enumeration, and classification of partitions arises in connection with the construction of codes and combinatorial designs. In fact, if ${\bf P}=\{V_{1},V_{2},.....,V_{r} \}$ is a non-trivial partition of $V_{n}{(q)}$, then the subspace $W$ of the vector space $V_{1}\times V_{2} \times ..... \times V_{r}$, which is defined by $$W:=\{ (y_{1},y_{2},.....,y_{r}) \in V_{1}\times V_{2} \times ..... \times V_{r} \hspace{0.05in} | \hspace{0.05in} \sum_{i=1}^{r}y_{i}= 0 \},$$ is a perfect mixed linear code (see [HS] and [Li]). Moreover, if $\mathbb{B}$ is the set of subspaces in ${\bf P}$ together with all their cosets, then $V_{n}{(q)}$ and $\mathbb{B}$ are, respectively, the point set and the block set of a uniformly resolvable design which admits a translation group isomorphic to $V_{n}{(q)}$ (see [DR] and [ESSSV2]). Furthermore, combinatorial designs can be associated with certain more general ``partitions'' (see for instance [Sc1], [Sc2] and [Sp]). If a partition consists of $x_{i}$ components of dimension $n_{i}$ for each $i=1,2,...,k$, then the non-negative integers $x_{1},x_{2},.....,x_{k}$ are a solution of the Diophantine equation $ \sum_{i=1}^{k}(q^{n_{i}}-1)X_{i}= q^{n}-1$. A fundamental problem about partitions of $V_{n}{(q)}$ is to give necessary and sufficient conditions on non-negative solutions of the above equation, in order that they correspond to partitions of $V_{n}{(q)}$. \noindent In [ESSSV2] the authors gave such conditions in the case where $q=2$, $k=2$, $n_{1}=2$ and $n_{2}=3$ (see section 2).
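As an illustration of this correspondence, the non-negative solutions of the above Diophantine equation can be enumerated by brute force. The sketch below is our own (the helper name \texttt{solutions} and the chosen parameters $q=2$, $n=5$, $T=\{2,3\}$ are not from the paper); note that not every solution found this way corresponds to a partition:

```python
from itertools import product

def solutions(q, n, dims):
    """Non-negative solutions (x_1,...,x_k) of sum_i (q^n_i - 1) x_i = q^n - 1."""
    target = q ** n - 1
    bounds = [target // (q ** d - 1) for d in dims]
    return [x for x in product(*(range(b + 1) for b in bounds))
            if sum((q ** d - 1) * xi for d, xi in zip(dims, x)) == target]

# q = 2, n = 5, T = {2, 3}: the equation reads 3*x1 + 7*x2 = 31
print(solutions(2, 5, [2, 3]))   # [(1, 4), (8, 1)]
```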
To that end, they proved the following two necessary conditions for the existence of partitions: \noindent a) In a non-trivial partition of $V_{n}{(q)}$ the number of components of minimum dimension must be greater than or equal to $2$. \noindent b) If the components of minimum dimension of a partition of $V_{n}{(2)}$ have dimension 1, then their number is greater than or equal to $3$ (see [ESSSV1]). Another interesting problem is related to existence results on $T$-partitions, where, if $T$ is a set of positive integers, a $T$-partition is a partition ${\bf P}$ of $V_{n}{(q)}$ such that $\{ dim V' \hspace{0.05in} | \hspace{0.05in} V' \in {\bf P} \} = T$. Clearly the existence of a $T$-partition of $V_{n}{(q)}$ implies the existence of a positive solution of the equation \break $ \sum_{i=1}^{k}(q^{n_{i}}-1)X_{i}= q^{n}-1$ in the case where $T=\{n_{1},n_{2},......,n_{k}\}$. A. Beutelspacher and O. Heden proved (see [Be] and [He]) the existence of a $T$-partition in the case where $\min T \geq 2$ and $\max T = \frac{n}{2}$. In this paper, in section 2 we recall some definitions and some known results about the existence of partitions of a finite vector space. In section 3, we provide a more general necessary condition on the minimum dimension components of a partition. Precisely, we show that the number of components of minimum dimension $t$ of any non-trivial partition of $V_{n}{(q)}$ is greater than or equal to $\alpha q+t$, where $\alpha$ is a positive integer. Finally, in section 4, we extend the aforementioned Beutelspacher--Heden existence result for $T$-partitions of $V_{n}{(q)}$ to the case where the minimum dimension of the components is $1$, and to some other cases where the maximum dimension of the components is compatible with $n$.
\vspace{1in} \section{Definitions and first results} \label{S:P*} \bigskip In this section we recall some basic properties of partitions of finite vector spaces and some results about the existence of certain classes of partitions. Let $n$ be a positive integer $(n>1)$, $q$ be a prime power, $\mathbb{F}_{q}$ be the finite field of order $q$ and $V_{n}{(q)}$ be the $n$-dimensional vector space over $\mathbb{F}_{q}$. A set ${\bf P}=\{V_{1},V_{2},......,V_{r}\} $ of non-zero subspaces of $V_{n}{(q)}$ is a partition of $V_{n}{(q)}$ if and only if $ \cup_{i=1}^{r}V_{i}=V_{n}{(q)}$ and $V_{i} \cap V_{j}=\{ 0 \}$ for every $i,j \in \{ 1,2,...,r \}$ and $i \neq j$. We call ${\bf P}$ non-trivial in the case where $r \geq 2$. The elements of ${\bf P}$ are said to be the components of the partition. Let $T$ be a set of positive integers and ${\bf P}$ a set of disjoint non-trivial subspaces (which is not necessarily a partition). ${\bf P}$ is said to be a $T$-set of subspaces (a $T$-partition if ${\bf P}$ is a partition) if the map $dim:{\bf P}\rightarrow T$ is surjective, where $dim(V_{i})$ is the dimension of the subspace $V_{i}$ for each $V_{i}\in{\bf P} $. Of course if $T=\{n_{1},n_{2},......,n_{k}\}$, then $1\leq n_{i}\leq n$ for every $i=1,2,......,k$. \noindent A. Beutelspacher and O. Heden proved (see [Be] and [He]) the following well known existence result. \bigskip \noindent{\bf 2.1 Theorem}. {\it Let $T=\{n_{1},n_{2},......,n_{k}\}$ be a set of positive integers with $ n_{1}< n_{2}<......< n_{k}$. If $n_{1}\geq 2$, then there exists a $T$-partition of $V_{2n_{k}}{(q)}$.} \bigskip \noindent The above theorem was proved by Beutelspacher [Be] in the case where $n_{1}=2$ and by Heden [He] for $n_{1}>2$. Now, let $k$ be a positive integer and $x_{1},x_{2},......,x_{k}$ be non-negative integers. 
If ${\bf P}$ is a partition of $V_{n}{(q)}$ which contains $x_{i}$ components of dimension $n_{i}$ for each $i=1,2,...,k$, then we say that ${\bf P}$ is of type $$[(x_{1},n_{1}),(x_{2},n_{2}),......,(x_{k},n_{k})]$$ \noindent or that ${\bf P}$ is an $[(x_{1},n_{1}),(x_{2},n_{2}),......,(x_{k},n_{k})]$-partition of $V_{n}{(q)}$ (see [ESSSV2]). Note that it is possible to have $ x_{i}= 0 $ for some $1 \leq i \leq k$. Clearly, in such a case, there are no components of dimension $n_{i}$. However, as we will see, such notation will be useful when we associate partitions to non-negative solutions of some Diophantine equation. Later on, for a partition of type $[(x_{1},n_{1}),(x_{2},n_{2}),......,(x_{k},n_{k})]$ we will always suppose $ 1\leq n_{1}< n_{2}<......< n_{k}\leq n$. Let ${\bf P}$ be an $[(x_{1},n_{1}),(x_{2},n_{2}),......,(x_{k},n_{k})]$-partition of $V_{n}{(q)}$. Then it is easy to show the following necessary condition. \bigskip 1) \hspace{0.05in} $(x_{1},x_{2},......,x_{k})$ is a non-negative solution of the Diophantine equation $$ \sum_{i=1}^{k}(q^{n_{i}}-1)X_{i}= q^{n}-1. \eqno(1)$$ \noindent Moreover, if $V_{i}$ and $V_{j}$ are two distinct components of ${\bf P}$, then $dim(V_{i}+V_{j})=dimV_{i}+dimV_{j}$ since $V_{i}\cap V_{j}=0$. Hence, the following necessary conditions are obtained: \bigskip 2)\hspace{0.05in} If $ i \neq j$ and $x_{i} \neq 0 \neq x_{j}$, then $ n_{i}+n_{j} \leq n$. \hspace{0.06in} \hspace{0.06in} If $2n_{i}> n$, then $x_{i} \leq 1$. \bigskip Note that, by Theorem 2.1, the equation $(1)$ always admits a non-negative solution when $n=2n_{k}$ and $n_{1} \geq 2$. 
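Condition 2) can be used as a computational filter on the solutions of equation (1). The following sketch is our own (the helper name and the parameters $q=2$, $n=5$, $T=\{2,3\}$ are illustrative assumptions, not from the paper):

```python
def satisfies_condition_2(q, n, dims, x):
    """Check necessary condition 2) for a candidate partition type.

    dims = (n_1,...,n_k), x = (x_1,...,x_k) a solution of equation (1).
    """
    support = [d for d, xi in zip(dims, x) if xi > 0]
    # if x_i != 0 != x_j with i != j, then n_i + n_j <= n must hold
    if any(di + dj > n for di in support for dj in support if di != dj):
        return False
    # if 2*n_i > n, then x_i <= 1 must hold
    return all(xi <= 1 for d, xi in zip(dims, x) if 2 * d > n)

# q = 2, n = 5, T = {2, 3}: 3*x1 + 7*x2 = 31 has solutions (1,4) and (8,1);
# (1, 4) fails condition 2) because 2*3 > 5 forces x2 <= 1.
print(satisfies_condition_2(2, 5, [2, 3], (1, 4)))  # False
print(satisfies_condition_2(2, 5, [2, 3], (8, 1)))  # True
```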
Furthermore, if $W$ is a subspace of $V_{n}{(q)}$ of dimension $s$, then ${\bf P'}=\{V_{i}\cap W \hspace{0.05in} | \hspace{0.05in} V_{i} \in {\bf P} \hspace{0.05in} and \hspace{0.05in} V_{i}\cap W \neq 0 \}$ is a partition of $W$ and so, if ${\bf P'}$ is of type $[(x'_{1},n'_{1}),(x'_{2},n'_{2}),......,(x'_{k'},n'_{k'})]$, we also have \bigskip 3) \hspace{0.05in} the equation $\sum_{i=1}^{k'}(q^{n'_{i}}-1)X_{i}= q^{s}-1$ admits the non-negative \hspace{0.05in} \hspace{0.05in} \hspace{0.05in} solution $ (x'_{1}, x'_{2},.....,x'_{k'}) $. \bigskip \noindent In [Bu] it was shown that for $s=n-1$ the property 3) is a necessary condition which does not follow from conditions 1) and 2). Now we recall some existence results. The following two theorems can be found in [Bu], but Theorem 2.2 was known before. In fact, part i) of it is a well-known result on $d$-spreads and part ii) had also been proved previously by Beutelspacher in [Be]. \bigskip \noindent{\bf 2.2 Theorem}. {\it Let $d$ and $n$ be positive integers. \noindent i) If $d$ divides $n$, then there exists a partition of $V_{n}{(q)}$ of type $[(\frac{q^{n}-1}{q^{d}-1},d)]$. \noindent ii) If $d< \frac{n}{2}$, then there exists a partition of $V_{n}{(q)}$ of type $[(q^{n-d},d),(1,n-d)]$.} \bigskip \noindent{\bf 2.3 Theorem}. {\it Let $n$, $k$ and $d$ be positive integers with $d>1$. \break If $n=kd-1$, then there exists a partition of $V_{n}{(q)}$ of type \break $[(q^{(k-1)d},d-1),(\frac{q^{(k-1)d}-1}{q^{d}-1},d)]$.} \bigskip For partitions of finite vector spaces, we have the following fundamental problem. \bigskip \noindent{\bf 2.4}. {\it Give necessary and sufficient conditions on non-negative solutions of the Diophantine equation (1) such that they correspond to $[(x_{1},n_{1}),(x_{2},n_{2}), \break ......,(x_{k},n_{k})]$-partitions of $V_{n}{(q)}$}. \bigskip For small values of $k$ some results are available. \bigskip \noindent{\bf 2.5 Proposition}.
{\it The properties 1) and 2) are necessary and sufficient conditions for the existence of an $[(x_{1},n_{1}),(x_{2},n_{2}),......,(x_{k},n_{k})]$-partition of $V_{n}{(q)}$ when $k=1$ or $k=2$ and $ n_{1}+n_{2}=n $}. \bigskip {\bf Proof.} \noindent In the case where $k=1$, to every solution of (1) corresponds an $[(x_{1},n_{1})]$-partition. In fact, the equation $(q^{n_{1}}-1)x_{1}=q^{n}-1$ admits a solution if and only if $n_{1}$ divides $n$ and so, by i) of Theorem 2.2, there exists an $[(x_{1},n_{1})]$-partition. \noindent Now consider the case where $k=2$ and $n_{1}+n_{2}=n$. Suppose that $(x_{1},x_{2})$ is a non-negative solution of (1) which also verifies the necessary condition 2). Since $n_{2}=n-n_{1}>n_{1}$, we have $n_{1} < \frac{n}{2}$ and so $n_{2} > \frac{n}{2}$. It follows by 2) that $x_{2} \leq 1$. If $x_{2}=0$, then from the equation (1) we obtain that $n_{1}$ divides $n$ and so, again by i) of Theorem 2.2, we have a $[(\frac{q^{n}-1}{q^{n_{1}}-1},n_{1}),(0,n_{2})]$-partition. In case $x_{2}=1$, from the equation (1) we get $x_{1}=q^{n-n_{1}}$. Therefore, $(x_{1},x_{2})=(q^{n-n_{1}},1)$ and so, by ii) of Theorem 2.2, there exists a $[(q^{n-n_{1}},n_{1}),(1,n_{2})]$-partition. So the proposition is proved. \bigskip For $k=2$ and for any $n_{1}$ and $n_{2}$ the question is still an open problem. Recently, in [ESSSV1] and [ESSSV2] the authors resolved it in the case where $n_{1}=2$, $n_{2}=3$ and $q=2$. \bigskip \noindent{\bf 2.6 Theorem}. {\it There exists a partition of $V_{n}{(2)}$, $n \geq 3$, of type $[(x_{1},2),(x_{2},3)]$ if and only if $(x_{1},x_{2})$ is a solution of the Diophantine equation $$ 3x_{1}+7x_{2}=2^{n}-1 \eqno(2) $$ with $x_{1}$ and $x_{2}$ non-negative integers and $x_{1} \neq 1$.} \bigskip In order to show Theorem 2.6, they gave the following theorems. \bigskip \noindent{\bf 2.7 Theorem}. 
{\it Let $V$ and $V'$ be $\mathbb{F}_{q}$-vector spaces of finite dimension and $T=\{n_{1},n_{2},......,n_{k}\}$ a set of positive integers with $ n_{1} < n_{2} < ...... < n_{k}$. If ${\bf P}$ is a $T$-partition of $V$ with $ n_{k} \leq dim V'$, then there exists a $T$-set of subspaces ${\bf \overline{P}}$ of $V \oplus V'$ such that $|{\bf \overline{P}}| = (q^{dim V'}-1)|{\bf P}|$ and $\{ V,V' \} \cup {\bf \overline{P}} $ is a partition of $V \oplus V'$.} \bigskip \noindent{\bf 2.8 Theorem}. {\it Let ${\bf P}$ be a non-trivial partition of $V_{n}{(q)}$ of type \break $[(x_{1},n_{1}),(x_{2},n_{2}),......,(x_{k},n_{k})]$. i) If $x_{1} \neq 0$, then $x_{1}\geq 2$. ii) If $n_{1}=1$ and $q=2$, then $x_{1}\geq 3$.} \bigskip \noindent{\bf 2.9 Remark}. Note that Theorem 2.7 is a generalization of Theorem 2.2, part ii). In fact, if $d = dim V $ and $n-d = dim V'$ with $d < n-d $ and ${\bf P} = \{ V \} $, then there exists a partition of $ V_{n}{(q)} = V \oplus V'$ of type $[(q^{n-d},d),(1,n-d)]$. Moreover, note that Theorem 2.8 gives information about the number of minimum dimension components of a partition. \bigskip \noindent{\bf 2.10 Remark}. Observe that, if ${\bf P}$ is a non-trivial $[(x_{1},2),(x_{2},3)]$-partition of $V_{n}{(2)}$ with $x_{1}$ and $x_{2}$ positive integers, then $x_{1} \geq 3$. In fact, if $x_{1}=2$, from equation (2) we have $x_{2}=\frac{2^{n}-1-6}{7}$, and so $7$ divides $2^{n}$, which is a contradiction. Of course, if either $x_{1}$ or $x_{2}$ is equal to zero, we get $x_{1}+x_{2} \geq 3$ since a group cannot be the union of two proper subgroups. Therefore, if ${\bf P}$ is a non-trivial $[(x_{1},2),(x_{2},3)]$-partition of $V_{n}{(2)}$, then the number of its minimum dimension subspaces is greater than or equal to $3$.
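The divisibility argument of Remark 2.10 can be verified directly. This is our own quick check (the range of $n$ is arbitrary): the powers of $2$ modulo $7$ cycle through $2,4,1$ and never hit $0$, so $x_{1}=2$ never yields an integer $x_{2}$.

```python
# Remark 2.10: if x1 = 2, then equation (2) gives 7*x2 = 2^n - 7,
# so 7 would have to divide 2^n. This never happens.
for n in range(3, 30):
    assert pow(2, n, 7) != 0          # 2^n mod 7 is 1, 2 or 4
    assert (2 ** n - 7) % 7 != 0      # so 7*x2 = 2^n - 7 has no solution
print("x1 = 2 is impossible for every n checked")
```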
\vspace{1in} \section{A new necessary condition} \label{S:nec} \bigskip \noindent In this section we will prove that, for every prime power $q$ and for every non-trivial partition ${\bf P}$ of $V_{n}{(q)}$, the number of minimum dimension subspaces in ${\bf P}$ is always greater than or equal to $q+t$, where $t$ is their common dimension. First we need the following lemmas. \bigskip \noindent{\bf 3.1 Lemma}. {\it Let ${\bf P}=\{V_{1},V_{2},......,V_{r}\} $ be a non-trivial partition of $V_{n}{(q)}$ and $t$ and $s$ be positive integers with $t<n$ and $s<r$. If $dim(V_{i})=t$ for every $i=1,...,s$ and $dim(V_{j}) \geq t+1$ for every $j=s+1,...,r$, then $s \geq \alpha q $ for some positive integer $ \alpha $.} \bigskip {\bf Proof.} We can suppose that $V_{n}{(q)}=\mathbb{F}_{q}^ {n}$ and $V_{1}=\mathbb{F}_{q} ^{t} \times \{ 0 \}^{n-t}$ after choosing an ordered basis of $V_{n}{(q)}$ which extends a fixed ordered basis of $V_{1}$. Let $W=V_{1}^ {\perp}$ be the dual subspace of $V_{1}$ with respect to the canonical inner product on $\mathbb{F}_{q}^ {n}$. Of course we have $V_{1} \cap W=\{ 0 \}$ and $V_{n}{(q)}=V_{1} \oplus W$. Moreover, for every $j=s+1,...,r$, we have that $dim(V_{j}+W)=dim(V_{j})+dim(W)-dim(V_{j} \cap W)=n-t+dim(V_{j})-dim(V_{j} \cap W)$. Since $dim(V_{j})>dim(V_{1})=t$, we have $z:=dim(V_{j})-t \geq 1$. Thus $dim(V_{j}+W)=n+z-dim(V_{j} \cap W)\leq n$. It follows that $dim(V_{j} \cap W) \geq z \geq 1$ and so we get $V_{j} \cap W \neq \{0\}$. \noindent Of course it is possible that there are some other subspaces of dimension $t$, different from $V_{1}$, which are not disjoint from $W$. So, after reindexing, we can suppose that there exists an integer $s'$ with $1 \leq s' \leq s$ such that $V_{i} \cap W = \{0\}$ for every $i=1,...,s'$, whereas $V_{i} \cap W \neq \{0\}$ for every $s' < i \leq s$.
Consider the partition ${\bf P'}$ induced by ${\bf P}$ on $W$, that is, ${\bf P'}=\{ V_{i} \cap W \hspace{0.05in} | \hspace{0.05in} V_{i} \in {\bf P} \hspace{0.05in} and \hspace{0.05in} V_{i}\cap W \neq \{0\} \}$. Clearly we have $$ \sum_{j=s'+1}^{r}(q^{m_{j}}-1)= q^{n-t}-1, \eqno (3)$$ where $m_{j}=dim(V_{j} \cap W) \geq 1$ for every $j=s'+1,...,r$. Further, considering the partition ${\bf P}$, we obtain $ \sum_{j=s'+1}^{r}(q^{n_{j}}-1)= q^{n}-1- \sum_{j=1}^{s'}(q^{n_{j}}-1)=q^{n}-1-s'(q^{t}-1)$, from which $$ \sum_{j=s'+1}^{r}(q^{n_{j}}-1)= q^{n}-1-s'q^{t}+s'. \eqno (4) $$ \noindent Now, subtracting (3) from (4), we get $$ \sum_{j=s'+1}^{r}(q^{n_{j}}-q^{m_{j}})= q^{n}-s'q^{t}-q^{n-t}+s'. $$ \noindent Every term on the left-hand side, as well as $q^{n}$, $s'q^{t}$ and $q^{n-t}$, is divisible by $q$; hence $q$ divides $s'$. But $s' \neq 0$ since $V_{1} \cap W=\{ 0 \}$. Therefore, for some positive integer $\alpha$, we obtain $s \geq s' = \alpha q$ and the proof is complete. \bigskip \noindent{\bf 3.2 Lemma}. {\it Let ${\bf P}$ be a non-trivial partition of $V_{n}{(q)}$ of type \break $[(x_{1},n_{1}),(x_{2},n_{2}),......,(x_{k},n_{k})]$. If $i_{0}$ is a positive integer such that $ x_{i_{0}} \neq 0 $ and $ x_{i}=0 $ for each $ i=1,2,...,i_{0}-1 $, then $ x_{i_{0}} \geq \alpha q+1 $ where $\alpha$ is a positive integer.} \bigskip {\bf Proof.} We use the same notations as in the previous lemma. If $i_{0}=k$, then we have that $ x_{i_{0}}=x_{k}= \frac {q^{n}-1}{q^{n_{k}}-1}$ because of condition 1). So $n_{k}$ divides $n$. Moreover, since ${\bf P}$ is a non-trivial partition, we have $ n_{k} < n $. It follows that $x_{i_{0}}=q^{n-n_{k}}+q^{n-2n_{k}}+.....+q^{n-(\frac {n}{n_{k}}-1)n_{k}}+1= \alpha q^{n_{k}}+1 \geq \alpha q+1$ where $\alpha =q^{n-2n_{k}}+q^{n-3n_{k}}+.....+q^{n_{k}}+1$. \noindent Now suppose $i_{0} < k$ and set $t=n_{i_{0}}$ and $s=x_{i_{0}}$ as in the previous lemma.
If ${\bf P}=\{V_{1},V_{2},......,V_{r}\}$, and $V_{1},V_{2},......,V_{s}$ are the $s$ components of dimension $t$, then there exist at least two distinct components which have the same minimum dimension $t$ since, by Lemma 3.1, $s \geq q \geq 2$. In particular, $V_{1}$ is distinct from $V_{s}$. So $dim(V_{1}+V_{s})=2t \leq n$, because $V_{1}$ and $V_{s}$ belong to ${\bf P}$ and $dim(V_{1})=dim(V_{s})=t$. Let $\{ v_{1},v_{2},.....,v_{t} \}$ be an ordered basis of $V_{1}$ and $\{ v'_{1},v'_{2},.....,v'_{t} \}$ be an ordered basis of $V_{s}$. Then the vectors $\{ v_{1},v_{2},.....,v_{t},v'_{1},v'_{2},.....,v'_{t} \}$ form a basis of $V_{1}+V_{s}$. So they are linearly independent and we can consider a basis of $V_{n}{(q)}$ which contains them. With respect to this new basis we can identify $V_{n}{(q)}$ with $\mathbb{F}_{q}^{n}$, $V_{1}$ with $\mathbb{F}_{q}^{t} \times \{0\}^{n-t}$ and $V_{s}$ with $\{0\}^{t} \times \mathbb{F}_{q}^{t} \times \{0\}^{n-2t}$. Let $W$ be the dual space of $V_{1}$, that is, $W=\{0\}^{t} \times \mathbb{F}_{q}^{n-t}$. It follows that $V_{s} \subseteq W$ and so $V_{s} \cap W \neq \{0\}$. Therefore, the number $s'$ of the $t$-dimensional components of ${\bf P}$ which are disjoint from $W$ is smaller than $s$. But, as in the proof of Lemma 3.1, we have $s' = \alpha q$ for some positive integer $\alpha$. So $s > s' = \alpha q$, that is to say $x_{i_{0}}=s \geq \alpha q+1$, and the lemma is shown. \bigskip \noindent{\bf 3.3 Theorem}. {\it In any non-trivial partition of $V_{n}{(q)}$, the number of subspaces of minimum dimension $t$ is greater than or equal to $ \alpha q+t $ for some positive integer $\alpha$.} \bigskip {\bf Proof.} We proceed by induction on $t$. For $t = 1$ the theorem is true by the above Lemma 3.2. Now let $t \geq 2$, let ${\bf P}$ be a partition of $V_{n}{(q)}$ and let $S$ be the subset of ${\bf P}$ of all the components of minimum dimension $t$.
Consider a hyperplane $W$ of $V_{n}{(q)}$ which contains at least one component of $S$ but does not contain all the components of $S$. Such a hyperplane exists since $s = |S| \geq \alpha q+1 \geq q+1 \geq 3$ by the above lemma. Since $t > 1$, the partition ${\bf P}_{W}$ induced by ${\bf P}$ on $W$ has components of minimum dimension $t-1$. Let $S'$ be the set of such components of ${\bf P}_{W}$. So, by induction, their number satisfies $s' = |S'| \geq \alpha q+t-1$ for some positive integer $\alpha$. But if $V'_{i} \in S'$, then $V'_{i} = V_{i} \cap W$ where $V_{i} \in S$. Therefore, $s \geq s' \geq \alpha q+t-1$. Moreover, by construction, $W$ contains at least one component of $S$, and such a component does not contribute to $S'$. Thus we get that $s \geq s'+1 \geq (\alpha q+t-1)+1$. So $s \geq \alpha q+t$ and the proof is complete. \bigskip Finally we can state the next corollary, which clearly follows from the above theorem. \bigskip \noindent{\bf 3.4 Corollary}. {\it Let $V_{n}{(q)}$ be a vector space which admits a non-trivial partition ${\bf P}$. Then the number of components of ${\bf P}$ of minimum dimension $t$ is greater than or equal to $ q+t $.} \bigskip Now we observe that, if ${\bf P}=\{V_{1},V_{2},......,V_{r}\} $ is a non-trivial partition of $V_{n}{(q)}$ whose components are all of the same dimension $t$, then $r$ is equal to the number $s$ of minimum dimension components of ${\bf P}$ and $t$ divides $n$ by Proposition 2.5. So we have that $r = s = \frac{q^{n}-1}{q^{t}-1} \geq q^{t}+1$. (Note that $s$ may be much greater than $q+t$ if $t \neq 1$.) More generally, we have the following proposition. \bigskip \noindent{\bf 3.5 Proposition}. {\it Let ${\bf P}$ be a non-trivial partition of $V_{n}{(q)}$ which has $r$ components. If $t$ is the minimum dimension of the components of ${\bf P}$, then $q^{t}+1 \leq r \leq \lfloor \frac{q^{n}-1}{q^{t}-1} \rfloor $. \noindent (Here $ \lfloor x \rfloor $ denotes the integer part of the real number $x$.)} \bigskip {\bf Proof.} Suppose ${\bf P}=\{V_{1},V_{2},......,V_{r}\}$ is a partition of $V_{n}{(q)}$ and let $n_{i}$ denote the dimension of $V_{i}$ for every $1 \leq i \leq r$, so that $ \sum_{i=1}^{r}(q^{n_{i}}-1)= q^{n}-1$. Hence $r-1 = \sum_{i=1}^{r} q^{n_{i}}-q^{n}= q^{t}(\sum_{i=1}^{r} q^{n_{i}-t}-q^{n-t})$ and we obtain that $r = \alpha q^{t}+1$ with $\alpha \geq 1$, since ${\bf P}$ is a non-trivial partition. It follows that $r \geq q^{t}+1$. Now, let $ \{ V_{1},V_{2},......,V_{s} \}$ be the components of ${\bf P}$ of minimum dimension $t$ and suppose $s < r$. We have that $(V \backslash \{ 0 \}) \backslash ( \bigcup_{i=1}^{s}(V_{i} \backslash \{ 0 \} ))= \bigcup_{i=s+1}^{r}(V_{i} \backslash \{ 0 \} )$. So we obtain $| \bigcup_{i=s+1}^{r}(V_{i} \backslash \{ 0 \} ) | = | V \backslash \{ 0 \} | - | \bigcup_{i=1}^{s}(V_{i} \backslash \{ 0 \} ) |$. But $(r-s)(q^{t}-1) \ < | \bigcup_{i=s+1}^{r}(V_{i} \backslash \{ 0 \} ) |$ since $| V_{i} | > q^{t}$ for every $s+1 \leq i \leq r$. It follows that $(r-s)(q^{t}-1) < | V \backslash \{ 0 \} | - | \bigcup_{i=1}^{s}(V_{i} \backslash \{ 0 \} ) | = (q^{n}-1)-s(q^{t}-1)$ and so $(r-s)(q^{t}-1) < q^{n}-1-sq^{t}+s$, from which we get $r < \frac{q^{n}-1}{q^{t}-1}$. If $s = r$, then the components of ${\bf P}$ all have the same dimension $t$ and, as observed before, $r = \frac{q^{n}-1}{q^{t}-1}$. Therefore, in any case, $r \leq \frac{q^{n}-1}{q^{t}-1}$ and the proposition is shown. \bigskip \noindent{\bf 3.6 Remark}. Proposition 3.5 and the examples of partitions which are known to us lead us to think that Corollary 3.4 can be substantially improved. In fact, we conjecture that the number of components of minimum dimension $t$ of a non-trivial partition of $V_{n}{(q)}$ is greater than or equal to $q^{t}+1$. \vspace{1in} \section{Existence results on $T$-partitions} \label{S:Tpart} \bigskip In this section we give some extensions of Theorem 2.1.
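Before constructing $T$-partitions, the bounds of Proposition 3.5 can be sanity-checked on a concrete instance. This is our own quick check, using the $[(8,2),(1,3)]$-partition of $V_{5}{(2)}$ supplied by Theorem 2.2, part ii):

```python
# Proposition 3.5 on the [(8,2),(1,3)]-partition of V_5(2):
# r = 8 + 1 = 9 components, minimum dimension t = 2, q = 2, n = 5.
q, n, t, r = 2, 5, 2, 9
assert q ** t + 1 <= r <= (q ** n - 1) // (q ** t - 1)   # 5 <= 9 <= 10
# The proof also gives r = alpha*q^t + 1, i.e. r == 1 (mod q^t):
assert r % q ** t == 1                                    # 9 mod 4 == 1
print("bounds and congruence hold")
```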
To begin, we can drop the hypothesis ``$ n_{1} \geq 2 $'' in Theorem 2.1. In fact, we have the following proposition. \bigskip \noindent{\bf 4.1 Proposition}. {\it Let $T = \{ n_{1},n_{2},......,n_{k} \}$ be a set of positive integers such that $ n_{1}< n_{2}<......< n_{k}$. Then there exists a T-partition of $V_{2n_{k}}{(q)}$.} \bigskip {\bf Proof.} By Theorem 2.1 we can suppose that $ n_{1} = 1 $. If $k = 1$ the proposition follows by i) of Theorem 2.2. So let $k \geq 2$ and consider the subset $T'= \{ n_{2},n_{3}......,n_{k} \} $ of $T$. Again by Theorem 2.1, there exists a $T'$-partition ${\bf P'}$ of $V_{2n_{k}}{(q)}$ because $n_{2} > n_{1} = 1$. But, by Corollary 3.4, there exist at least $q+n_{2} \geq 4$ components of ${\bf P'}$ of minimum dimension $n_{2}$. Let $V'$ be such a component of dimension $n_{2}$ and consider the partition ${\bf P''}$ of $V'$ whose components are all its subspaces of dimension 1. Now, ${\bf P} = ({\bf P'} \setminus \{ V' \}) \cup {\bf P''}$ is a $T$-partition of $V_{2n_{k}}{(q)}$, since $|{\bf P''}| \geq q+1 \geq 3$ and there remain other components (at least $3$) of dimension $n_{2}$ in ${\bf P'} \setminus \{ V' \} \subset {\bf P}$. This completes the proof. \bigskip \noindent{\bf 4.2 Theorem}. {\it Let $T = \{ n_{1},n_{2},....,n_{k} \}$ be a set of positive integers such that $ n_{1} < n_{2} < .... < n_{k} $ and consider the vector space $ V_{n}{(q)}$ with $n \geq 2n_{k}$. If $gcd(n,2n_{k})$ has a divisor in $T$, then there exists a $T$-partition of $ V_{n}{(q)}$.} \bigskip {\bf Proof.} By Proposition 4.1 we can suppose that $n > 2n_{k}$. Consider a subspace $V$ of $V_{n}{(q)}$ of dimension $2n_{k}$ such that $V \cap V^{\perp} = \{ 0 \}$, where $V^{\perp}$ is the dual space of $V$. If $n_{i_{0}} \in T$ is a divisor of $gcd(n,2n_{k})$, then $n_{i_{0}}$ is a divisor of $n - 2n_{k} = dim V^{\perp} $. So, by Theorem 2.2, there exists a partition ${\bf P'}$ of $V^{\perp}$ whose components all have the same dimension $n_{i_{0}}$.
Since $n_{i_{0}}$ divides $2n_{k}$, for the same reason there exists a partition ${\bf P}$ of $V$ whose components all have the same dimension $n_{i_{0}}$; that is, ${\bf P}$ is a $\bar{T}$-partition of $V$ where $\bar{T} = \{ n_{i_{0}} \}$, and $n_{i_{0}} \leq dim V^{\perp} = n-2n_{k} $ since $n_{i_{0}}$ divides $n-2n_{k}$. It follows, by Theorem 2.7, that there exists a $\bar{T}$-set of $n_{i_{0}}$-dimensional subspaces ${\bf P''}$ of $V \oplus V^{\perp} = V_{n}{(q)}$ such that $ \{ V,V^{\perp} \} \cup {\bf P''}$ is a partition of $ V_{n}{(q)}$. Now, by the above proposition, let $\bar{{\bf P}}$ be a $T$-partition of $V$. Then $\bar{{\bf P}} \cup {\bf P'} \cup {\bf P''}$ is a $T$-partition of $V_{n}{(q)}$ and the proof is complete. \bigskip \noindent Note that the above theorem reduces to Proposition 4.1 for $n=2n_{k}$. So it is a generalization of the Beutelspacher--Heden Theorem 2.1. \bigskip \noindent{\bf 4.3 Lemma}. {\it Let $T$ be as in Theorem 4.2 and $ V_{n}{(q)}$ be a vector space over $\mathbb{F}_{q}$ of dimension $n \geq 3n_{k}$. If there exists a subset $T'$ of $T$ such that $V_{n-2n_{k} }{(q)}$ has a $T'$-partition, then there exists a $T$-partition of $ V_{n}{(q)}$}. \bigskip {\bf Proof.} Let $V$ be a subspace of $ V_{n}{(q)}$ of dimension $2n_{k}$ such that $V \cap V^{\perp} = \{ 0 \}$. Since $dim V^{\perp} = n-2n_{k} $, the subspace $V^{\perp}$ is (isomorphic to) $V_{n-2n_{k} }{(q)}$ and so $V^{\perp}$ admits a $T'$-partition ${\bf P'}$. By Proposition 4.1, we can consider a $T$-partition ${\bf P}$ of $V$. Moreover, by hypothesis, $n-2n_{k} \geq n_{k}$ and so $dim V^{\perp} \geq n_{k}$. Therefore, by Theorem 2.7, there exists a $T$-set of subspaces ${\bf P''}$ of $V \oplus V^{\perp}$ such that $ \{ V,V^{\perp} \} \cup {\bf P''}$ is a partition of $V \oplus V^{\perp}$. Now we get that $ {\bf P} \cup {\bf P'} \cup {\bf P''}$ is a $T$-partition of $V \oplus V^{\perp} = V_{n}{(q)}$ and the lemma is shown.
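The cardinality in Theorem 2.7, which drives the gluing step above, is consistent with an exact element count. The sketch below is our own; it assumes each component of ${\bf \overline{P}}$ has the same dimension as the component of ${\bf P}$ it comes from, which is not stated explicitly in Theorem 2.7 but agrees with the counting:

```python
# If P partitions V (dim v) with x_i components of dimension n_i, and
# P-bar is a set in V + V' (dim v') with (q^{v'}-1)*x_i components of
# dimension n_i, then {V, V'} together with P-bar covers the q^{v+v'}-1
# non-zero vectors of V + V' exactly once.
def covered(q, type_):
    """Non-zero vectors covered by a type [(x_1,n_1),...,(x_k,n_k)]."""
    return sum(x * (q ** d - 1) for x, d in type_)

q, v, vp = 2, 4, 2
spread = [(5, 2)]                    # a 2-spread of V_4(2): covers 15 vectors
assert covered(q, spread) == q ** v - 1
pbar = [((q ** vp - 1) * 5, 2)]      # Theorem 2.7: |P-bar| = (q^{v'}-1)|P|
total = (q ** v - 1) + (q ** vp - 1) + covered(q, pbar)
assert total == q ** (v + vp) - 1    # 15 + 3 + 45 = 63
print("counts match")
```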
\bigskip For $n$ greater than or equal to $3n_{k}$ or for $n$ smaller than $2n_{k}$, we have the next theorem. \bigskip \noindent{\bf 4.4 Theorem}. {\it Let $n$ be a positive integer and $T = \{ n_{1},n_{2},....,n_{k} \}$ be a set of positive integers such that $ n_{1} < n_{2} < .... < n_{k} < n $. Then $ V_{n}{(q)}$ admits a $T$-partition if one of the following hypotheses is satisfied: a) $ 3n_{k} \leq n$ and $n-2n_{k}$ has a divisor in $T$; b) $2n_{k} > n = n_{k}+n_{k-1}$ and $n_{1}=1$; c) $2n_{k} > n \geq n_{k}+2n_{k-1}$ and $gcd(n,2n_{k-1})$ has a divisor in $T$.} \bigskip {\bf Proof.} a) By the above lemma it is enough to note that there exists $T' \subseteq T$ such that $V_{n-2n_{k} }{(q)}$ has a $T'$-partition. In fact, if $ n_{i_{0}} \in T$ is a divisor of $n-2n_{k}$, then $V_{n-2n_{k} }{(q)}$ admits a partition whose components all have the same dimension $n_{i_{0}}$; that is to say, $V_{n-2n_{k} }{(q)}$ has a $T'$-partition if we set $T' = \{ n_{i_{0}} \}$. b) Note that, since $n_{k} > \frac{n}{2}$, if $ V_{n}{(q)}$ admits a $T$-partition then there exists exactly one component of dimension $n_{k}$, because of the necessary condition 2). Note also that if $d$ is a positive integer smaller than the dimension of a vector space $U$, then there always exists a $\bar{T}$-partition of $U$ where $\bar{T} = \{ 1, d \}$; in fact, the components of such a $\bar{T}$-partition may be taken to be a fixed $d$-dimensional subspace $W$ of $U$ and the $1$-dimensional subspaces which are not in $W$. \noindent Now we can prove b). By ii) of Theorem 2.2, let ${\bf P'}$ be a $T'$-partition of $ V_{n}{(q)}$ of type $[(q^{n_{k}},n-n_{k}),(1,n_{k})]$. Of course here $T' = \{ n-n_{k},n_{k} \}$. Since $q^{n_{k}} \geq n_{k} \geq k > k-2$, we can choose $k-2$ distinct subspaces $V_{1},V_{2},...,V_{k-2}$ of dimension $n-n_{k} = n_{k-1}$ of the $T'$-partition ${\bf P'}$.
For every $1 \leq i \leq k-2$, since $n_{i} < n_{k-1} = n-n_{k} = dim V_{i}$, it is possible to consider a $T'_{i}$-partition ${\bf P'_{i}}$ of $V_{i}$ where $T'_{i} = \{ 1, n_{i} \}$. Let ${\bf P''} = \{ V_{i} \in {\bf P'} \hspace{0.05in} | \hspace{0.05in} k-1 \leq i \leq q^{n_{k}} \hspace{0.05in} and \hspace{0.05in} dim V_{i} = n_{k-1} \}$. Since $q^{n_{k}} > k-2$, we get that ${\bf P''} \neq \emptyset$. Now, if $V$ is the component of dimension $n_{k}$ in ${\bf P'}$, then $\{ V \} \cup (\cup_{i=1}^{k-2}{\bf P'_{i}}) \cup {\bf P''}$ is a $T$-partition of $ V_{n}{(q)}$. c) Let $V$ be a subspace of $ V_{n}{(q)}$ of dimension $n-n_{k}$ such that $V \cap V^{\perp} = \{ 0 \}$. The hypothesis $n > n_{k} > \frac{n}{2} $ implies that $n_{k}$ is not a divisor of $gcd(n,2n_{k-1})$, and so $gcd(n,2n_{k-1})$ admits a divisor in $T' = \{ n_{1},n_{2},....,n_{k-1} \}$. Now, since $n-n_{k} \geq 2n_{k-1}$, by Theorem 4.2 there exists a $T'$-partition ${\bf P'}$ of $V$. The subspace $V^{\perp}$ has dimension $n_{k}$ and so $n_{k-1} < dim V^{\perp}$. Therefore, by Theorem 2.7, we obtain that there exists a $T'$-set of subspaces ${\bf P''}$ such that $\{ V, V^{\perp} \} \cup {\bf P''} $ is a partition of $V_{n}{(q)}$. Now we get that ${\bf P'} \cup {\bf P''} \cup \{ V^{\perp} \} $ is a $T$-partition of $V_{n}{(q)}$. This completes the proof of the theorem. \bigskip \noindent{\bf 4.5 Remark}. Of course the above results do not give a complete answer to the problem of giving sufficient conditions for the existence of a $T$-partition of $ V_{n}{(q)}$. For example, it is not known if there are $T$-partitions of $ V_{n}{(q)}$ when $ 3n_{k} > n > 2n_{k}$ and no integer belonging to $T$ divides $gcd(n,2n_{k})$. More generally, the problem of giving necessary and sufficient conditions on a set of positive integers $T$ for the existence of a $T$-partition of $ V_{n}{(q)}$ is an open problem.
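The construction in case b) can be checked by counting on a concrete instance. The following is our own example (q=2, $T=\{1,2,3\}$, $n=n_{k}+n_{k-1}=5$); the resulting type $[(3,1),(7,2),(1,3)]$ is our reading of the proof, obtained by replacing one plane of the $[(8,2),(1,3)]$-partition by its three lines:

```python
# Theorem 4.4 b) for q = 2, T = {1, 2, 3}, n = 3 + 2 = 5:
# start from the [(8,2),(1,3)]-partition P' (Theorem 2.2, part ii) and
# split one 2-dimensional component into its three 1-dimensional subspaces.
def covered(q, type_):
    """Non-zero vectors covered by a type [(x_1,n_1),...,(x_k,n_k)]."""
    return sum(x * (q ** d - 1) for x, d in type_)

q, n = 2, 5
before = [(8, 2), (1, 3)]
after = [(3, 1), (7, 2), (1, 3)]       # a T-partition with T = {1, 2, 3}
assert covered(q, before) == q ** n - 1   # 24 + 7 = 31
assert covered(q, after) == q ** n - 1    # 3 + 21 + 7 = 31
print("both types cover the non-zero vectors of V_5(2) exactly")
```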
Of course, it is related to the analogous problem on \break $[(x_{1},n_{1}),(x_{2},n_{2}),......,(x_{k},n_{k})]$-partitions. But now the problem is to give conditions on the elements of $T$ such that there exists some \break $[(x_{1},n_{1}),(x_{2},n_{2}),......,(x_{k},n_{k})]$-partition, where $x_{1},x_{2},...,x_{k}$ are positive integers. Note that, as follows from the results of this section, it is not necessary to know the positive integers $x_{1},x_{2},...,x_{k}$ in order to establish the existence of a $T$-partition. \end{document}
https://arxiv.org/abs/1912.09438
Hairy graphs to ribbon graphs via a fixed source graph complex
We show that the hairy graph complex $(HGC_{n,n},d)$ appears as an associated graded complex of the oriented graph complex $(OGC_{n+1},d)$, subject to the filtration on the number of targets, or equivalently sources, called the fixed source graph complex. The fixed source graph complex $(OGC_1,d_0)$ maps into the ribbon graph complex $RGC$, which models the moduli space of Riemann surfaces with marked points. The full differential $d$ on the oriented graph complex $OGC_{n+1}$ corresponds to the deformed differential $d+h$ on the hairy graph complex $HGC_{n,n}$, where $h$ adds a hair. This deformed complex $(HGC_{n,n},d+h)$ is already known to be quasi-isomorphic to standard Kontsevich's graph complex $GC^2_n$. This gives a new connection between the standard and the oriented version of Kontsevich's graph complex.
\section{Introduction} The main motivation for the present work is the explicit morphism of complexes $$F:(\mathrm{O} \mathrm{GC}_1, \delta) \to (\mathrm{RGC}[1], \delta+\Delta_1)$$ constructed by S.\ Merkulov and T.\ Willwacher in \cite{MW}. Here $(\mathrm{O} \mathrm{GC}_1, \delta)$ is the oriented version of Kontsevich's graph complex, and $ (\mathrm{RGC}, \delta+\Delta_1)$ is a complex of ribbon graphs. Ribbon graphs (sometimes called fat graphs) model Riemann surfaces with marked points \cite{Penner}. For our ribbon graph complex $\mathrm{RGC}$, we get \begin{equation} H^k(\mathrm{RGC},\delta) \cong \prod_{g,n} \left(H_c^{k-n}(\mathcal{M}_{g,n},\mathbb{Q}) \otimes\sgn_n\right)^{{\mathbb{S}}_n} \oplus \begin{cases} \mathbb{Q} & \text{for } k=1,5,9,\ldots\\ 0 &\text{otherwise,} \end{cases} \end{equation} where $H_c(\mathcal{M}_{g,n},\mathbb{Q})$ is the compactly supported cohomology of the moduli space of Riemann surfaces of genus $g$ with $n$ marked points; see \cite{Kont3} for more details. In this context, the differential $\delta+\Delta_1$ constructed in \cite{MW} is a deformation of the classical differential $\delta$ on $\mathrm{RGC}$. A simple observation shows that the same explicit formula $F$ also gives a map of complexes \begin{equation}\label{eq:Fintro} F:(\mathrm{O} \mathrm{GC}_1, \delta_0) \to (\mathrm{RGC}[1], \delta), \end{equation} where it is instead the differential on $\mathrm{O}\mathrm{GC}_1$ that is not standard. The differential $\delta_0$ splits vertices of graphs in $\mathrm{O} \mathrm{GC}$ in a way that preserves the number of target vertices. To the authors' best knowledge, the \emph{(oriented) fixed target graph complex} $(\mathrm{O} \mathrm{GC}_n, \delta_0)$ has not been studied before. In this paper, we show that there is a quasi-isomorphism from the oriented graph complex with this new differential to the better-known hairy graph complex $\mathrm{HGC}_{n,n}$, studied in e.g.\ \cite{AT}, \cite{DGC2}.
\begin{thm}\label{thm:main} There is a map of graded vector spaces $$G:\mathrm{O} \mathrm{GC}_{n+1} \to \mathrm{HGC}_{n,n}$$ such that the associated morphisms of complexes $$G:(\mathrm{O} \mathrm{GC}_{n+1}, \delta_0) \to (\mathrm{HGC}_{n,n},\delta)$$ and $$G:(\mathrm{O} \mathrm{GC}_{n+1},\delta) \to (\mathrm{HGC}_{n,n},\delta+\chi)$$ are quasi-isomorphisms. Here $\chi$ is the extra differential on $\mathrm{HGC}_{n,n}$ that adds a hair, considered in \cite{DGC2}. \end{thm} As a corollary, we get a relationship between the ribbon graph complex and the hairy graph complex. \begin{cor} \label{cor:Mgn} We have an explicit zig-zag of morphisms $$(\mathrm{HGC}_{0}, \delta)\gets (\mathrm{O} \mathrm{GC}_1, \delta_0)\to (\mathrm{RGC}[1],\delta),$$ where the left map is a quasi-isomorphism. \end{cor} A recent result by M. Chan, S. Galatius and S. Payne \cite{CGP2} states that there exists an embedding $$H^k(\mathrm{GC}^{marked}_0, \delta)\to \prod_{g,n} H^{k-n+1}_c(\mathcal{M}_{g,n},\mathbb{Q}).$$ Here, $\mathrm{GC}^{marked}$ is a complex of hairy graphs where each hair is labeled by an integer. However, no explicit map is given. After symmetrizing both sides and using Theorem \ref{thm:main}, we get that there exists an embedding $$H^k(\mathrm{O} \mathrm{GC}_1,\delta_0) \to \prod_{g,n} \left( H^{k-n+1}_c(\mathcal{M}_{g,n},\mathbb{Q}) \otimes \sgn_n\right)^{{\mathbb{S}}_n}\oplus \begin{cases} \mathbb{Q} & \text{for } k=1,5,9,\ldots\\ 0 &\text{otherwise.}\end{cases}$$ We conjecture that this embedding is given explicitly by the map $H(F)$. This conjecture is also stated in \cite{MW}, and to support it, it is shown there that $H(F)$ is nontrivial on all loop classes of $\mathrm{O} \mathrm{GC}_1.$ Let $(\mathrm{S} \mathrm{G}_n,d)$ be the graph complex consisting of directed graphs that contain at least one source vertex, with the edge-contracting differential $d$.
In \cite{MultiSourced}, the second author showed that the projection $$ \left(\mathrm{S} \mathrm{G}_n,d \right)\to \left(\mathrm{O} \mathrm{G}_n,d\right) $$ is a quasi-isomorphism. \begin{rem} The graph complex $(\mathrm{S} \mathrm{GC}_n,\delta)$ with the vertex-splitting differential $\delta$ is the dual of the complex $(\mathrm{S} \mathrm{G}_n,d)$ with the edge-contraction differential $d$. The space $\mathrm{S} \mathrm{G}_n$ is spanned by formal linear combinations of graphs, while its dual space $\mathrm{S} \mathrm{GC}_n$ is spanned by formal series of the same graphs. Each result regarding a graph complex $(\mathrm{G},d)$ transfers to a dual result regarding $(\mathrm{GC},\delta)$; e.g.\ the map $G:(\mathrm{O} \mathrm{GC}_{n+1}, \delta_0) \to (\mathrm{HGC}_{n,n},\delta)$ has a dual map $\Phi:(\mathrm{HG}_{n,n},d) \to (\mathrm{OG}_{n+1},{d_0})$, which is also a quasi-isomorphism. See Subsection \ref{ss:dual} for more details. \end{rem} In this paper, we show that this relation between oriented graphs and sourced graphs also holds true when we consider the \emph{fixed source graph complexes}, with differentials $d_0$ that preserve the number of source vertices. \begin{prop} \label{prop:sourced} The projection $$ (\mathrm{S} \mathrm{GC}_n,\delta_0)\to(\mathrm{O} \mathrm{GC}_n,\delta_0) $$ is a quasi-isomorphism. \end{prop} \begin{rem} In \eqref{eq:Fintro}, we consider the differential $\delta_0$ that preserves the number of target vertices as opposed to source vertices. However, as there is an isomorphism $Inv:\mathrm{O}\mathrm{GC}\to \mathrm{O}\mathrm{GC}$ that inverts all the edges, thus mapping source vertices to target vertices and vice versa, we may shift freely between considering source vertices and target vertices. To keep notation consistent with what was previously established in \cite{MultiSourced}, we shall mainly consider the differential that preserves the number of source vertices, which we will also denote by $\delta_0$.
\end{rem} Let $\mathrm{GC}_n^2$ (resp.\ $\mathrm{HGC}_{n,n}^2$) denote the version of Kontsevich's standard (resp.\ hairy) graph complex that includes graphs with $2$-valent vertices. The following result connects Kontsevich's graph complex and the hairy graph complex. \begin{thm}[\cite{TW1},\cite{TW2},\cite{Will},\cite{DGC2}] There is a quasi-isomorphism $$ \left(\mathrm{GC}_n^2,\delta\right)\rightarrow\left(\mathrm{HGC}^2_{n,n},\delta+\chi\right), $$ given by summing over all ways to attach a hair to each graph. \end{thm} Together with Theorem \ref{thm:main}, we arrive at the following corollary. \begin{cor} \label{cor:zigzag} There is a zig-zag of explicit quasi-isomorphisms $$ \left(\mathrm{GC}_n^2,\delta\right) \rightarrow\left(\mathrm{HGC}^2_{n,n},\delta+\chi\right) \hookleftarrow\left(\mathrm{HGC}_{n,n},\delta+\chi\right) \leftarrow\left(\mathrm{OGC}_{n+1},\delta\right) \hookrightarrow(\mathrm{SGC}_{n+1},\delta). $$ \end{cor} Together with \cite{MW}, \cite{oriented} and \cite{Multi}, the previous corollary gives a fourth proof that the cohomology of Kontsevich's graph complex is equal to the cohomology of the oriented graph complex. The proof from \cite{Multi} gave an explicit map $$ \left(\mathrm{G}_n,d \right) \rightarrow\left(\mathrm{OG}_{n+1}^\varnothing, d\right), $$ where the superscript $\varnothing$ means that the loop graphs are omitted. This proof gives explicit maps too, and this time they are also naturally defined for loops. \subsection*{Structure of the paper} In Section \ref{s:def} we recall the required definitions and some results. Section \ref{s:map} introduces the map between the oriented and the hairy graph complexes, showing that it is indeed a map of complexes. Section \ref{s:qi} contains our main results about quasi-isomorphisms. In Section \ref{s:rib} we recall the definition of ribbon graphs and the map $F$ of Merkulov and Willwacher from \cite{MW}, explaining our motivation.
\subsection*{Acknowledgements} We are grateful to Sergei Merkulov and Alexey Kalugin for many useful discussions and comments. \section{Required definitions and results} \label{s:def} In this section we recall basic notation and several results shown in the literature that will be used in the paper. \subsection{General notation} We work over a field ${\mathbb{K}}$ of characteristic zero. All vector spaces and differential graded vector spaces are assumed to be ${\mathbb{K}}$-vector spaces. Graph complexes as vector spaces are generally defined by the graphs that span them. When we say \emph{a single term graph} in a graph complex, we mean the base graph, while any linear combination (or a series) of graphs will be called \emph{an element} of the graph complex. \subsection{Directed, oriented and sourced graph complexes} The directed, oriented and sourced graph complexes $\mathrm{D}\mathrm{GC}_{n}$, $\mathrm{O}\mathrm{GC}_{n}$ and $\mathrm{S}\mathrm{GC}_n$ are defined in \cite{MultiSourced}. In this paper, we will only consider single directional complexes. Accordingly, let us recall a simplified definition. Consider the set of directed connected graphs $\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{grac}^{2}$ with $v>0$ distinguishable vertices and $e\geq 0$ distinguishable directed edges, all vertices being at least $2$-valent, and without tadpoles (edges that start and end at the same vertex). We also ban passing vertices, i.e.\ 2-valent vertices with one incoming and one outgoing edge. This condition will not change the homology, as shown in \cite[Subsection 3.2]{MultiSourced}. Let $\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{O}\mathrm{grac}^{2}\subset\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{grac}^{2}$ be the subset of all graphs without closed paths along the directed edges. 
For $s\geq 0$, let $\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{S}_s\mathrm{grac}^{2}\subset\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{grac}^{2}$ be the subset of graphs that have exactly $s$ sources, i.e.\ vertices without incoming edges. For $n\in{\mathbb{Z}}$, let the degree of an element of $\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{grac}^{2}$ be $d=n-vn-(1-n)e$. Let \begin{equation} \bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{D}\mathrm{G}_n:=\langle\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{grac}^{2}\rangle[n-vn-(1-n)e], \end{equation} \begin{equation} \bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{O}\mathrm{G}_n:=\langle\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{O}\mathrm{grac}^{2}\rangle[n-vn-(1-n)e], \end{equation} \begin{equation} \bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{S}_s\mathrm{G}_n:=\langle\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{S}_s\mathrm{grac}^{2}\rangle[n-vn-(1-n)e], \end{equation} be the vector spaces of formal linear combinations of elements of the respective sets with coefficients in ${\mathbb{K}}$. They are graded vector spaces with non-zero terms only in degree $d=n-vn-(1-n)e$. There are natural right actions of the group ${\mathbb{S}}_v\times{\mathbb{S}}_e$ on $\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{grac}^{2}$, $\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{O}\mathrm{grac}^{2}$ and $\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{S}_s\mathrm{grac}^{2}$, where ${\mathbb{S}}_v$ permutes vertices and ${\mathbb{S}}_e$ permutes edges. Let $\sgn_v$ and $\sgn_e$ be the one-dimensional representations of ${\mathbb{S}}_v$ and ${\mathbb{S}}_e$, respectively, on which an odd permutation reverses the sign. They can be considered as representations of the product ${\mathbb{S}}_v\times{\mathbb{S}}_e$.
Let us consider the spaces of invariants: \begin{equation} \mathrm{V}_v\mathrm{E}_e\mathrm{D}\mathrm{G}_{n}:=\left\{ \begin{array}{ll} \left(\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{D}\mathrm{G}_n\otimes\sgn_e\right)^{{\mathbb{S}}_v\times{\mathbb{S}}_e} \qquad&\text{for $n$ even,}\\ \left(\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{D}\mathrm{G}_n\otimes\sgn_v\right)^{{\mathbb{S}}_v\times{\mathbb{S}}_e} \qquad&\text{for $n$ odd,} \end{array} \right. \end{equation} \begin{equation} \mathrm{V}_v\mathrm{E}_e\mathrm{O}\mathrm{G}_{n}:=\left\{ \begin{array}{ll} \left(\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{O}\mathrm{G}_n\otimes\sgn_e\right)^{{\mathbb{S}}_v\times{\mathbb{S}}_e} \qquad&\text{for $n$ even,}\\ \left(\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{O}\mathrm{G}_n\otimes\sgn_v\right)^{{\mathbb{S}}_v\times{\mathbb{S}}_e} \qquad&\text{for $n$ odd,} \end{array} \right. \end{equation} \begin{equation} \mathrm{V}_v\mathrm{E}_e\mathrm{S}_s\mathrm{G}_{n}:=\left\{ \begin{array}{ll} \left(\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{S}_s\mathrm{G}_n\otimes\sgn_e\right)^{{\mathbb{S}}_v\times{\mathbb{S}}_e} \qquad&\text{for $n$ even,}\\ \left(\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{S}_s\mathrm{G}_n\otimes\sgn_v\right)^{{\mathbb{S}}_v\times{\mathbb{S}}_e} \qquad&\text{for $n$ odd.} \end{array} \right. \end{equation} As the group is finite, the space of invariants may be replaced by the space of coinvariants. The underlying vector space of the \emph{directed graph complex} is given by \begin{equation} \mathrm{D}\mathrm{G}_{n}:=\bigoplus_{v\geq 1,e\geq 0}\mathrm{V}_v\mathrm{E}_e\mathrm{D}\mathrm{G}_{n}. \end{equation} The underlying vector space of the \emph{oriented graph complex} is given by \begin{equation} \mathrm{O}\mathrm{G}_{n}:=\bigoplus_{v\geq 1,e\geq 0}\mathrm{V}_v\mathrm{E}_e\mathrm{O}\mathrm{G}_{n}. 
\end{equation} The underlying vector space of the \emph{sourced graph complex} with $s$ sources is given by \begin{equation} \mathrm{S}_s\mathrm{G}_{n}:=\bigoplus_{v\geq 1,e\geq 0}\mathrm{V}_v\mathrm{E}_e\mathrm{S}_s\mathrm{G}_{n}, \end{equation} and the full sourced graph complex is given by \begin{equation} \mathrm{S}\mathrm{G}_n:=\bigoplus_{s\geq 1}\mathrm{S}_s\mathrm{G}_{n}. \end{equation} The dual spaces of those defined above are spanned by the same graphs, but infinite sums are allowed. Since the spaces $\mathrm{V}_v\mathrm{E}_e\mathrm{D}\mathrm{G}_{n}$ are finite-dimensional, this makes no difference for them. The duals of the total complexes, however, are defined as: \begin{equation} \mathrm{D}\mathrm{GC}_{n}:=\prod_{v\geq 1,e\geq 0}\mathrm{V}_v\mathrm{E}_e\mathrm{D}\mathrm{G}_{n}, \end{equation} \begin{equation} \mathrm{O}\mathrm{GC}_{n}:=\prod_{v\geq 1,e\geq 0}\mathrm{V}_v\mathrm{E}_e\mathrm{O}\mathrm{G}_{n}, \end{equation} \begin{equation} \mathrm{S}_s\mathrm{GC}_{n}:=\prod_{v\geq 1,e\geq 0}\mathrm{V}_v\mathrm{E}_e\mathrm{S}_s\mathrm{G}_{n}, \end{equation} \begin{equation} \mathrm{S}\mathrm{GC}_n:=\prod_{s\geq 1}\mathrm{S}_s\mathrm{GC}_{n}. \end{equation} Note that every oriented graph is sourced and both of them are directed, so there are inclusions and projections \begin{equation} \mathrm{O}\mathrm{GC}_{n}\hookrightarrow\mathrm{S}\mathrm{GC}_{n}\hookrightarrow\mathrm{D}\mathrm{GC}_{n}, \quad \mathrm{D}\mathrm{G}_{n}\twoheadrightarrow\mathrm{S}\mathrm{G}_{n}\twoheadrightarrow\mathrm{O}\mathrm{G}_{n}. \end{equation} \subsection{Hairy graph complex} The hairy graph complex $\mathrm{HGC}_{m,n}$ is defined in general in e.g.\ \cite{DGC2}. In this paper we are interested in the case when $m=n$, that is, $\mathrm{HGC}_{n,n}$. For simplicity, we will use the shorter notation $\mathrm{HGC}_n:=\mathrm{HGC}_{n,n}$. Let us quickly recall the definition, similar to that of the oriented and sourced graph complexes.
Consider the set of directed connected graphs $\bar\mathrm{V}_v\bar\mathrm{E}_e\bar\mathrm{H}_s\mathrm{grac}^{3}$ with $v>0$ distinguishable vertices, $e\geq 0$ distinguishable directed edges and $s\geq 0$ distinguishable hairs attached to some vertices, all vertices being at least $3$-valent, and without tadpoles (edges that start and end at the same vertex). In hairy graphs, the valence also includes the attached hairs, i.e.\ the valence of a vertex is the number of edges and hairs attached to it. For $n\in{\mathbb{Z}}$, let the degree of an element of $\bar\mathrm{V}_v\bar\mathrm{E}_e\bar\mathrm{H}_s\mathrm{grac}^{3}$ be $d=n-vn-(1-n)e-s$. Let \begin{equation} \bar\mathrm{V}_v\bar\mathrm{E}_e\bar\mathrm{H}_s\mathrm{G}_n:=\langle\bar\mathrm{V}_v\bar\mathrm{E}_e\bar\mathrm{H}_s\mathrm{grac}^{3}\rangle[n-vn-(1-n)e-s] \end{equation} be the vector space of formal linear combinations of elements of $\bar\mathrm{V}_v\bar\mathrm{E}_e\bar\mathrm{H}_s\mathrm{grac}^{3}$ with coefficients in ${\mathbb{K}}$. It is a graded vector space with a non-zero term only in degree $d=n-vn-(1-n)e-s$. There is a natural right action of the group ${\mathbb{S}}_v\times {\mathbb{S}}_s\times \left({\mathbb{S}}_e\ltimes {\mathbb{S}}_2^{\times e}\right)$ on $\bar\mathrm{V}_v\bar\mathrm{E}_e\bar\mathrm{H}_s\mathrm{grac}^{3}$, where ${\mathbb{S}}_v$ permutes vertices, ${\mathbb{S}}_s$ permutes hairs, ${\mathbb{S}}_e$ permutes edges and ${\mathbb{S}}_2^{\times e}$ changes the direction of edges. Let $\sgn_v$, $\sgn_s$, $\sgn_e$ and $\sgn_2$ be the one-dimensional representations of ${\mathbb{S}}_v$, ${\mathbb{S}}_s$, ${\mathbb{S}}_e$ and ${\mathbb{S}}_2$, respectively, on which an odd permutation reverses the sign. They can be considered as representations of the whole product ${\mathbb{S}}_v\times {\mathbb{S}}_s\times \left({\mathbb{S}}_e\ltimes {\mathbb{S}}_2^{\times e}\right)$.
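Both degree conventions, for the complexes $\mathrm{D}\mathrm{G}_n$, $\mathrm{O}\mathrm{G}_n$, $\mathrm{S}\mathrm{G}_n$ and for the hairy spaces above, are simple bookkeeping. A small helper (the function and its name are ours, not from the paper) also makes explicit that contracting an edge, i.e.\ $(v,e)\mapsto(v-1,e-1)$, changes the degree by $n+(1-n)=+1$, as does deleting a hair:

```python
def degree(n, v, e, s=0):
    """Degree of a graph with v vertices, e directed edges and s hairs:
    d = n - v*n - (1-n)*e - s.
    With s = 0 this is the degree in DG_n, OG_n and SG_n."""
    return n - v * n - (1 - n) * e - s
```

For example, contracting one edge of a graph with 4 vertices and 5 edges in degree convention $n=3$ raises the degree from $1$ to $2$.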
Let us consider the space of invariants: \begin{equation} \mathrm{V}_v\mathrm{E}_e\mathrm{H}_s\mathrm{G}_{n}:=\left\{ \begin{array}{ll} \left(\bar\mathrm{V}_v\bar\mathrm{E}_e\bar\mathrm{H}_s\mathrm{G}_n\otimes\sgn_e\otimes\sgn_s\right)^{{\mathbb{S}}_v\times {\mathbb{S}}_s\times \left({\mathbb{S}}_e\ltimes {\mathbb{S}}_2^{\times e}\right)} \qquad&\text{for $n$ even,}\\ \left(\bar\mathrm{V}_v\bar\mathrm{E}_e\bar\mathrm{H}_s\mathrm{G}_n\otimes\sgn_v\otimes\sgn_s\otimes\sgn_2^{\otimes e}\right)^{{\mathbb{S}}_v\times {\mathbb{S}}_s\times \left({\mathbb{S}}_e\ltimes {\mathbb{S}}_2^{\times e}\right)} \qquad&\text{for $n$ odd.} \end{array} \right. \end{equation} Again, because the group is finite, the space of invariants may be replaced by the space of coinvariants. The \emph{hairy graph complex} with $s$ hairs is \begin{equation} \mathrm{H}_s\mathrm{G}_{n}:=\bigoplus_{v\geq 1,e\geq 0}\mathrm{V}_v\mathrm{E}_e\mathrm{H}_s\mathrm{G}_{n}, \end{equation} and the general one is \begin{equation} \mathrm{H}\mathrm{G}_n:=\bigoplus_{s\geq 1}\mathrm{H}_s\mathrm{G}_{n}. \end{equation} The duals are: \begin{equation} \mathrm{H}_s\mathrm{GC}_{n}:=\prod_{v\geq 1,e\geq 0}\mathrm{V}_v\mathrm{E}_e\mathrm{H}_s\mathrm{G}_{n}, \end{equation} \begin{equation} \mathrm{H}\mathrm{GC}_n:=\prod_{s\geq 1}\mathrm{H}_s\mathrm{GC}_{n}. \end{equation} \subsection{The differential} The standard differential acts by edge contraction: \begin{equation} d(\Gamma)=\sum_{a\in E(\Gamma)}\Gamma/a \end{equation} where $\Gamma$ is a graph from $\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{grac}^{2}$ or $\bar\mathrm{V}_v\bar\mathrm{E}_e\bar\mathrm{H}_s\mathrm{grac}^{3}$, $E(\Gamma)$ is its set of edges and $\Gamma/a$ is the graph from $\bar\mathrm{V}_{v-1}\bar\mathrm{E}_{e-1}\mathrm{grac}^{2}$ or $\bar\mathrm{V}_{v-1}\bar\mathrm{E}_{e-1}\bar\mathrm{H}_s\mathrm{grac}^{3}$ respectively, produced from $\Gamma$ by contracting edge $a$ and merging its end vertices.
If a tadpole or a passing vertex is produced, we consider the result to be zero. Also, for the oriented and sourced versions, if a closed path is formed or the last source is removed, respectively, we consider the result to be zero. The precise signs, and the verification that the map can be extended to the spaces of invariants and to the graph complexes $\mathrm{O}\mathrm{G}_{n}$ and $\mathrm{S}\mathrm{G}_{n}$, can be found in \cite[Subsection 2.9]{MultiSourced}. In \cite[Subsection 2.10]{MultiSourced} it is shown that $\delta$ is a differential. Exactly the same arguments hold for the hairy graph complex $\mathrm{H}\mathrm{G}_{n}$. The differential cannot produce a closed path, so the projection \begin{equation} p:\left(\mathrm{S}\mathrm{G}_n,d\right)\twoheadrightarrow\left(\mathrm{O}\mathrm{G}_n,d\right) \end{equation} is well defined. On $\mathrm{H}\mathrm{G}_{n}$ we define another differential $h$, which deletes a hair, summed over all hairs: \begin{equation} h(\Gamma)=\sum_{i\in H(\Gamma)}\Gamma/i \end{equation} where $\Gamma$ is a graph from $\bar\mathrm{V}_v\bar\mathrm{E}_e\bar\mathrm{H}_s\mathrm{grac}^{3}$, $H(\Gamma)$ is its set of hairs and $\Gamma/i$ is the graph from $\bar\mathrm{V}_{v}\bar\mathrm{E}_{e}\bar\mathrm{H}_{s-1}\mathrm{grac}^{3}$, produced from $\Gamma$ by deleting hair $i$. If it is the last hair or if a 2-valent vertex is formed, we consider the result to be zero. We can easily extend $h$ to $\mathrm{H}\mathrm{G}_{n}$ and see that $h^2=0$, so it is a differential. Also, it easily follows that $dh+hd=0$, so $d+h$ is also a differential. \subsection{Loop order and the dual differentials} \label{ss:dual} So far, we have defined complexes $\left(\mathrm{O}\mathrm{G}_{n},d\right)$, $\left(\mathrm{S}\mathrm{G}_{n},d\right)$, $\left(\mathrm{H}\mathrm{G}_{n},d\right)$, $\left(\mathrm{H}\mathrm{G}_{n},h\right)$ and $\left(\mathrm{H}\mathrm{G}_{n},d+h\right)$. For a (hairy) graph $\Gamma$, with $e$ edges and $v$ vertices, let the \emph{loop order} of $\Gamma$ be given by $e-v$.
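Contracting an edge removes one vertex and one edge at a time, so the loop order $e-v$ is unchanged. The contraction, together with the convention that terms producing a tadpole vanish, can be made concrete in a minimal sketch (edge-list representation and all names are ours, for illustration only):

```python
def contract_edge(vertices, edges, a):
    """Contract the directed edge edges[a] = (u, w), merging w into u.
    Returns the new (vertices, edges), or None when a tadpole would be
    created (such terms are zero by convention)."""
    u, w = edges[a]
    new_edges = []
    for i, (x, y) in enumerate(edges):
        if i == a:
            continue
        x, y = (u if x == w else x), (u if y == w else y)
        if x == y:  # tadpole produced: the term vanishes
            return None
        new_edges.append((x, y))
    return [z for z in vertices if z != w], new_edges
```

For instance, contracting the edge $(0,1)$ in the acyclic graph with edges $(0,1)$, $(0,2)$, $(1,2)$ yields a double edge from $0$ to $2$, with both the vertex and the edge count dropping by one.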
The differentials $d$ and $h$ preserve the loop order, hence all complexes above admit sub-complexes $$\mathrm{B}_b\mathrm{O} \mathrm{G}_n:= \bigoplus_{e-v= b} \mathrm{V}_v\mathrm{E}_e \mathrm{O} \mathrm{G}_n,$$ $$\mathrm{B}_b\mathrm{S} \mathrm{G}_n:= \bigoplus_{e-v= b} \mathrm{V}_v\mathrm{E}_e \mathrm{S} \mathrm{G}_n,$$ $$\mathrm{B}_b\mathrm{HG}_n:= \bigoplus_{e-v= b} \bigoplus_{s\ge 1} \mathrm{V}_v\mathrm{E}_e \mathrm{H}_s \mathrm{G}_n,$$ for each $b\in \mathbb{Z}.$ It is clear that $$ \mathrm{O} \mathrm{G}_{n} = \bigoplus_{b\ge 0} \mathrm{B}_b\mathrm{O} \mathrm{G}_{n}, \quad \mathrm{S} \mathrm{G}_{n} = \bigoplus_{b\ge 0} \mathrm{B}_b\mathrm{S} \mathrm{G}_{n}, \quad \mathrm{H} \mathrm{G}_n = \bigoplus_{b\ge 0} \mathrm{B}_b\mathrm{H} \mathrm{G}_n. $$ Furthermore, the complexes $\mathrm{B}_b\mathrm{O} \mathrm{G}_n, \mathrm{B}_b\mathrm{S} \mathrm{G}_n$, $ \mathrm{B}_b\mathrm{HG}_n$ are finite-dimensional in each homological degree $k$. Hence there are canonical isomorphisms of vector spaces to their duals $$ \mathrm{B}_b\mathrm{O} \mathrm{G}_n \to \hom(\mathrm{B}_b\mathrm{O} \mathrm{G}_n,{\mathbb{K}})=:\mathrm{B}_b\mathrm{O} \mathrm{GC}_n, $$ $$ \mathrm{B}_b \mathrm{S} \mathrm{G}_n \to \hom(\mathrm{B}_b\mathrm{S} \mathrm{G}_n,{\mathbb{K}})=:\mathrm{B}_b \mathrm{S} \mathrm{GC}_n, $$ $$ \mathrm{B}_b\mathrm{HG}_n \to \hom(\mathrm{B}_b\mathrm{HG}_n,{\mathbb{K}})=:\mathrm{B}_b\mathrm{HGC}_n, $$ identifying a single term graph $\Gamma$ with the linear map that maps $\Gamma$ to $1$ and all other graphs to $0$.
Any differential $\Delta$ on $\mathrm{G}= \mathrm{O} \mathrm{G}_n, \mathrm{S}\mathrm{G}_n$, or $\mathrm{HG}_n$, that preserves the loop order, is paired with a dual differential $\nabla\leftrightarrow \Delta$ on the dual space $\mathrm{GC}= \mathrm{O} \mathrm{GC}_n, \mathrm{S}\mathrm{GC}_n$, or $\mathrm{HGC}_n$ such that $$(\mathrm{GC},\nabla) \cong \prod_{b} \left( \hom(\mathrm{B}_b \mathrm{G},{\mathbb{K}}),\Delta^* \right).$$ We denote the dual differentials of $d$ and $h$ by \begin{equation} \delta \leftrightarrow d, \quad \chi \leftrightarrow h. \end{equation} The differential $\delta$ splits a vertex in all possible ways, while $\chi$ adds a hair in all possible ways. The vertex-splitting differential (and the hair-adding differential) are often taken as the standard differentials. For hairy graph complexes, both $\delta$ and $\chi$ are defined in \cite{DGC2}. The dual of the projection $p:\left(\mathrm{S}\mathrm{G}_n,d\right)\twoheadrightarrow\left(\mathrm{O}\mathrm{G}_n,d\right)$ is the inclusion \begin{equation} \iota:\left(\mathrm{O}\mathrm{GC}_n,\delta\right)\hookrightarrow\left(\mathrm{S}\mathrm{GC}_n,\delta\right). \end{equation} It is a well-known result from homological algebra that the cohomology of a dual complex is dual to the homology of the complex. For this reason, we are free to study either of them in order to obtain results regarding the (co)homology. \subsection{Skeleton version of directed graph complex} Instead of the directed, oriented and sourced graph complexes $\left(\mathrm{D}\mathrm{G}_{n},d\right)$, $\left(\mathrm{O}\mathrm{G}_{n},d\right)$ and $\left(\mathrm{S}\mathrm{G}_{n},d\right)$, it may sometimes be useful to consider their isomorphic skeleton versions: $\left(\mathrm{D}^{sk}\mathrm{G}_{n},d\right)$, $\left(\mathrm{O}^{sk}\mathrm{G}_{n},d\right)$ and $\left(\mathrm{S}^{sk}\mathrm{G}_{n},d\right)$.
They are defined using theory from \cite[Section 2]{MultiSourced} as follows: Consider the set of directed connected graphs $\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{grac}^{3}$ with $v>0$ distinguishable vertices and $e\geq 0$ distinguishable directed edges, all vertices being at least $3$-valent, without tadpoles. In this context, those graphs are called \emph{core graphs} because we are going to attach edge types to them. Let $\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{Grac}^{3}:=\langle\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{grac}^{3}\rangle$ be the generated vector space. To each edge of a core graph $\Gamma\in\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{grac}^{3}$, we attach an element of the graded $\langle {\mathbb{S}}_2\rangle$ module $\Sigma$ spanned by $\{\Ed[n-1],\dE[n-1],\Ess[n-2]\}$ with ${\mathbb{S}}_2$ action \begin{equation} \Ed \leftrightarrow\dE,\quad\Ess\mapsto -(-1)^{n}\Ess \end{equation} to get $\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{Grac}^{3}\otimes\Sigma^{\otimes e}$. We say that the type of an edge is its attached element of $\Sigma$. Let \begin{equation} \bar\mathrm{V}_v\bar\mathrm{E}_e^{\Sigma}\mathrm{Grac}^{3}:= \bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{Grac}^{3}\otimes_{S_2^{\times e}}\Sigma^{\otimes e}= \left(\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{Grac}^{3}\otimes\Sigma^{\otimes e}\right)_{S_2^{\times e}} \end{equation} where $S_2^{\times e}$ acts by reversing edges in $\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{grac}^{3}$, as well as on the element from $\Sigma$ attached to the corresponding edge. As before, there are natural right actions of the groups $S_v$ and $S_e$ on $\bar\mathrm{V}_v\bar\mathrm{E}_e^{\Sigma}\mathrm{Grac}^{3}$, where $S_v$ permutes vertices and $S_e$ permutes edges. Let $\sgn_v$ be the one-dimensional representation of $S_v$ on which an odd permutation reverses the sign.
The sign of the action of $S_e$ depends on the types of the permuted edges, such that switching edges with odd degree types changes the sign, cf.\ \cite[Subsection 2.7]{MultiSourced}. Then let \begin{equation} \mathrm{V}_v\mathrm{E}_e\mathrm{D}^{sk}\mathrm{G}_{n}:=\left\{ \begin{array}{ll} \left(\bar\mathrm{V}_v\bar\mathrm{E}_e^{\Sigma}\mathrm{Grac}^{3}\right)^{{\mathbb{S}}_v\times{\mathbb{S}}_e}[n-vn] \qquad&\text{for $n$ even,}\\ \left(\bar\mathrm{V}_v\bar\mathrm{E}_e^{\Sigma}\mathrm{Grac}^{3}\otimes\sgn_v\right)^{{\mathbb{S}}_v\times{\mathbb{S}}_e}[n-vn] \qquad&\text{for $n$ odd.} \end{array} \right. \end{equation} This means that for even $n$ switching edges of type $\Ed$ or $\dE$ changes the sign, while for odd $n$ switching vertices and edges of type $\Ess$ changes the sign. Note the degree shift $[n-vn]$ that comes from the degrees of vertices, while the degree of edges of a particular type is already included in $\Sigma$. The \emph{skeleton version of directed graph complex} is \begin{equation} \mathrm{D}^{sk}\mathrm{G}_{n}:=\bigoplus_{v\geq 1,e\geq 0}\mathrm{V}_v\mathrm{E}_e\mathrm{D}^{sk}\mathrm{G}_{n}. \end{equation} On the graded $\langle{\mathbb{S}}_2\rangle$ module $\Sigma$ there is also a differential, defined by \begin{equation} \Ess\mapsto\Ed-(-1)^n\dE \end{equation} which induces the \emph{edge differential} $d_E$ on $\mathrm{D}^{sk}\mathrm{G}_{n}$, cf.\ \cite[Subsection 2.6]{MultiSourced}. The core differential $d_C$ comes from contracting edges of type $\Ed$ and $\dE$, cf.\ \cite[Subsection 2.10]{MultiSourced}. This enables us to define the \emph{combined differential} \begin{equation} d:=d_C+(-1)^{n\deg}d_E \end{equation} on $\mathrm{D}^{sk}\mathrm{G}_{n}$ as in \cite[Subsection 2.11]{MultiSourced}. This skeleton version $\mathrm{D}^{sk}\mathrm{G}_{n}$ is defined to be isomorphic to the original version $\mathrm{D}\mathrm{G}_{n}$.
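The relations defining $\Sigma$ can be sanity-checked mechanically: the differential on $\Sigma$ commutes with the ${\mathbb{S}}_2$ (edge-reversal) action and squares to zero. A minimal sketch, encoding the three edge types as the strings '->', '<-', '<->' and a term as a (coefficient, type) pair (this encoding is ours):

```python
# Edge types: '->' and '<-' have degree n-1, '<->' has degree n-2.
def s2_action(term, n):
    """The S_2 (edge reversal) action on Sigma:
    '->' <-> '<-', and '<->' maps to -(-1)^n '<->'."""
    c, t = term
    if t == '->':
        return (c, '<-')
    if t == '<-':
        return (c, '->')
    return (-(-1) ** n * c, '<->')

def d_sigma(term, n):
    """The differential on Sigma: '<->' maps to '->' - (-1)^n '<-',
    and the other two types are closed."""
    c, t = term
    if t == '<->':
        return [(c, '->'), (-(-1) ** n * c, '<-')]
    return []
```

Since the image of $d_\Sigma$ consists of closed types, $d_\Sigma^2=0$ holds automatically, and a short computation confirms equivariance for both parities of $n$.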
In short, 3-valent vertices and 2-valent sources in a single term graph in $\mathrm{D}\mathrm{G}_{n}$ are called \emph{skeleton vertices}. Strings of edges and vertices between two skeleton vertices, which have to be in the set $\{\Ed,\dE,\EdE\}$, are called \emph{skeleton edges}. A corresponding graph in $\mathrm{D}^{sk}\mathrm{G}_{n}$ is the one with skeleton vertices as vertices and skeleton edges as edges, where $\EdE$ is mapped to $\Ess$. One can check that the degrees and parities are correctly defined, and obtain the following result, cf.\ \cite[Subsection 3.4]{MultiSourced}. But note that we have more skeleton vertices here, because 2-valent sources are also considered skeleton vertices, and therefore there are fewer skeleton edges. \begin{prop} There is an isomorphism of complexes \begin{equation} \kappa:\left(\mathrm{D}\mathrm{G}_n,d\right)\rightarrow\left(\mathrm{D}^{sk}\mathrm{G}_n,d\right). \end{equation} \end{prop} Since the oriented and sourced graph complexes $\left(\mathrm{O}\mathrm{G}_n,d\right)$ and $\left(\mathrm{S}\mathrm{G}_n,d\right)$ are both quotients of the directed graph complex $\left(\mathrm{D}\mathrm{G}_n,d\right)$, we can induce skeleton versions of the oriented and sourced graph complexes as follows. \begin{defprop} \label{defprop:sk} Skeleton versions of the oriented and sourced graph complexes are \begin{equation} \mathrm{O}^{sk}\mathrm{G}_n:=\kappa\left(\mathrm{O}\mathrm{G}_n\right), \end{equation} \begin{equation} \mathrm{S}^{sk}\mathrm{G}_n:=\kappa\left(\mathrm{S}\mathrm{G}_n\right), \end{equation} with the differential $d$ as on $\mathrm{D}^{sk}\mathrm{G}_n$, where forbidden graphs are considered zero. The quotient maps, by abuse of notation again denoted by $\kappa$, are isomorphisms of complexes: \begin{equation} \kappa:\left(\mathrm{O}\mathrm{G}_n,d\right)\rightarrow\left(\mathrm{O}^{sk}\mathrm{G}_n,d\right), \end{equation} \begin{equation} \kappa:\left(\mathrm{S}\mathrm{G}_n,d\right)\rightarrow\left(\mathrm{S}^{sk}\mathrm{G}_n,d\right).
\end{equation}
\end{defprop}
\section{The map $\Phi$}
\label{s:map}
In this section we are going to define the main map $\Phi:\left(\mathrm{HG}_{n},d+h\right)\rightarrow\left(\mathrm{O}\mathrm{GC}_{n+1},\delta\right)$. Thanks to Definition/Proposition \ref{defprop:sk}, it suffices to give
\begin{equation}
\Phi:\left(\mathrm{HG}_{n},d+h\right)\rightarrow\left(\mathrm{O}^{sk}\mathrm{G}_{n+1},d\right).
\end{equation}
The map
\begin{equation*}
G: \mathrm{O} \mathrm{GC}_{n+1} \to \mathrm{HGC}_{n},
\end{equation*}
from Theorem \ref{thm:main} is the dual of the map $\Phi$.
\subsection{Forests}
Let us fix the number of vertices $v$, the number of edges $e$ and the number of hairs $s$. Let $\Gamma\in\bar\mathrm{V}_v\bar\mathrm{E}_e\bar\mathrm{H}_s\mathrm{G}_{n}$ be a single term graph. A \emph{forest} is any sub-graph of $\Gamma$ that contains all its hairs (and thus all vertices with hairs), that does not contain cycles (of any orientation), and whose every connected component has exactly one hair. A \emph{spanning forest} is a forest that contains all vertices. Let $F(\Gamma)$ be the set of all spanning forests of $\Gamma$. An example of a spanning forest is given in Figure \ref{fig:span}.
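Although no computation is needed for the argument, the combinatorics of spanning forests is easy to explore by brute force on small graphs. The following Python sketch (our own illustration, not part of the construction; helper names are hypothetical, and hairs are modelled simply by the vertices that carry them) enumerates all spanning forests in the above sense.

```python
from itertools import combinations

def spanning_forests(v, edges, hair_vertices):
    """Enumerate spanning forests of a hairy graph: acyclic edge subsets
    containing all v vertices such that every connected component carries
    exactly one hair.  Hairs are given by the vertices they are attached to."""
    forests = []
    for k in range(len(edges) + 1):
        for sub in combinations(edges, k):
            parent = list(range(v))          # union-find over the vertices

            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x

            acyclic = True
            for a, b in sub:
                ra, rb = find(a), find(b)
                if ra == rb:                 # this edge would close a cycle
                    acyclic = False
                    break
                parent[ra] = rb
            if not acyclic:
                continue
            hairs_per_component = {}
            for x in range(v):
                hairs_per_component.setdefault(find(x), 0)
            for h in hair_vertices:
                hairs_per_component[find(h)] += 1
            if all(c == 1 for c in hairs_per_component.values()):
                forests.append(sub)
    return forests

# A 4-cycle with hairs on two opposite vertices (v = 4, s = 2):
square = [(0, 1), (1, 2), (2, 3), (3, 0)]
forests = spanning_forests(4, square, [0, 2])
# every spanning forest has v - s = 2 edges
assert all(len(f) == 2 for f in forests)
```

For the square one finds exactly four spanning forests, in agreement with the fact (used below) that a spanning forest always has $v-s$ edges.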
\begin{figure}[H]
$$
\begin{tikzpicture}
\node[int] (a) at (0,0) {};
\node[int] (b) at (1,0) {};
\node[int] (c) at (1,1) {};
\node[int] (d) at (0,1) {};
\node[int] (a1) at (-.5,-.5) {};
\node[int] (b1) at (1.5,-.5) {};
\node[int] (c1) at (1.5,1.5) {};
\node[int] (d1) at (-.5,1.5) {};
\node[int] (x) at (-1,.5) {};
\node[int] (y) at (1.5,.5) {};
\node[int] (z) at (2.2,.5) {};
\draw (a) edge[red] (b);
\draw (b) edge[dotted] (c);
\draw (c) edge[dotted] (d);
\draw (d) edge[dotted] (a);
\draw (a1) edge[dotted] (b1);
\draw (b1) edge[red] (y);
\draw (y) edge[red] (c1);
\draw (b1) edge[dotted] (z);
\draw (z) edge[dotted] (c1);
\draw (y) edge[red] (z);
\draw (c1) edge[dotted] (d1);
\draw (d1) edge[dotted] (a1);
\draw (a) edge[red] (a1);
\draw (b) edge[dotted] (b1);
\draw (c) edge[red] (c1);
\draw (d) edge[red] (d1);
\draw (a1) edge[dotted] (x);
\draw (x) edge[red] (d1);
\draw (x) edge (-1.3,.5);
\draw (a1) edge (-.7,-.7);
\draw (b1) edge (1.7,-.7);
\end{tikzpicture}
$$
\caption{\label{fig:span} An example of a hairy graph and a spanning forest. Edges of the forest are red, while other edges are dotted.}
\end{figure}
\subsection{Model pairs}
For a chosen spanning forest $\tau\in F(\Gamma)$, our first goal is to define $\Phi_{\tau}(\Gamma)\in\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{O}^{sk}\mathrm{G}_{n+1}$. As our final goal is to define a map
$$
\Phi: \mathrm{V}_v\mathrm{E}_e\mathrm{H}_s\mathrm{G}_{n} \to \mathrm{V}_v \mathrm{E}_e\mathrm{O}^{sk}\mathrm{G}_{n+1},
$$
the definition of $\Phi_{\tau}(\Gamma)$ must be invariant under the action of ${\mathbb{S}}_v\times {\mathbb{S}}_s\times \left({\mathbb{S}}_e\ltimes {\mathbb{S}}_2^{\times e}\right)$. We will only define $\Phi_{\tau}(\Gamma)$ on some pairs $(\Gamma,\tau)$ that are called \emph{models}, and we extend the definition to all pairs $(\Gamma,\tau)$ by requiring that the map is invariant under the symmetry action.
In order to do that correctly, we need to check two conditions: \begin{enumerate} \item[(i)] if there is an element of ${\mathbb{S}}_v\times {\mathbb{S}}_s\times \left({\mathbb{S}}_e\ltimes {\mathbb{S}}_2^{\times e}\right)$ that sends one model to another model, the definition is invariant under its action, and \item[(ii)] for every pair $(\Gamma,\tau)$, there is an element of ${\mathbb{S}}_v\times {\mathbb{S}}_s\times \left({\mathbb{S}}_e\ltimes {\mathbb{S}}_2^{\times e}\right)$ that sends it to a model. \end{enumerate} Note that the spanning forest $\tau$ has $v-s$ edges. For a pair $(\Gamma,\tau)$ to be a model we require that: \begin{itemize} \item edges of a connected component in $\tau$ are directed away from the hair of that connected component; \item an edge in $\tau$ has the same label as the vertex it is heading to, labels being in the set $\{1,\dots,v-s\}$; \item a hair labelled by $x$ stands on the hairy vertex labelled by $x+v-s$. \end{itemize} An example of a model is given in Figure \ref{fig:model}. 
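The first model condition can be produced algorithmically: traversing each component of $\tau$ from its hair-carrying vertex directs every forest edge away from the hair. A minimal Python sketch of this convention (our own illustration, with hypothetical names):

```python
def orient_away_from_hairs(v, forest_edges, hair_vertices):
    """Direct every forest edge away from the unique hair of its connected
    component (the first model condition).  Returns directed pairs (tail, head)."""
    adj = {x: [] for x in range(v)}
    for a, b in forest_edges:
        adj[a].append(b)
        adj[b].append(a)
    seen = set(hair_vertices)
    stack = list(hair_vertices)          # traversal roots: the hairy vertices
    directed = []
    while stack:
        x = stack.pop()
        for y in adj[x]:
            if y not in seen:
                seen.add(y)
                directed.append((x, y))  # this edge points away from the hair
                stack.append(y)
    return directed

# the path 0 - 1 - 2 with the hair on vertex 0:
assert orient_away_from_hairs(3, [(0, 1), (1, 2)], [0]) == [(0, 1), (1, 2)]
```

Each component is oriented independently, so a forest with several hairs is handled by starting the traversal from all hairy vertices at once.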
\begin{figure}[H]
$$
\begin{tikzpicture}[scale=1.6]
\node[int] (a) at (0,0) {};
\node[below right] at (a) {\bf 1};
\node[int] (b) at (1,0) {};
\node[below left] at (b) {\bf 2};
\node[int] (c) at (1,1) {};
\node[above left] at (c) {\bf 3};
\node[int] (d) at (0,1) {};
\node[above right] at (d) {\bf 4};
\node[int] (a1) at (-.5,-.5) {};
\node[below right] at (a1) {\bf 10};
\node[int] (b1) at (1.5,-.5) {};
\node[below left] at (b1) {\bf 11};
\node[int] (c1) at (1.5,1.5) {};
\node[above left] at (c1) {\bf 5};
\node[int] (d1) at (-.5,1.5) {};
\node[above right] at (d1) {\bf 6};
\node[int] (x) at (-1,.5) {};
\node[above left] at (x) {\bf 9};
\node[int] (y) at (1.5,.5) {};
\node[left] at (y) {\bf 7};
\node[int] (z) at (2.2,.5) {};
\node[right] at (z) {\bf 8};
\draw (a) edge[red,->] node[above] {\color{black} 2} (b);
\draw (b) edge[dotted,->] node[left] {9} (c);
\draw (c) edge[dotted,->] node[below] {10} (d);
\draw (d) edge[dotted,->] node[right] {11} (a);
\draw (a1) edge[dotted,->] node[below] {14} (b1);
\draw (b1) edge[red,->] node[right] {\color{black} 7} (y);
\draw (y) edge[red,->] node[right] {\color{black} 5} (c1);
\draw (b1) edge[dotted,->] node[right] {16} (z);
\draw (z) edge[dotted,->] node[right] {17} (c1);
\draw (y) edge[red,->] node[above] {\color{black} 8} (z);
\draw (c1) edge[dotted,->] node[above] {12} (d1);
\draw (d1) edge[dotted,->] node[right] {13} (a1);
\draw (a) edge[red,<-] node[above] {\color{black} 1} (a1);
\draw (b) edge[dotted,->] node[above] {15} (b1);
\draw (c) edge[red,<-] node[below] {\color{black} 3} (c1);
\draw (d) edge[red,<-] node[below] {\color{black} 4} (d1);
\draw (a1) edge (-.7,-.7);
\node[left] at (-.7,-.7) {\bf 2};
\draw (a1) edge[dotted,->] node[left] {18} (x);
\draw (x) edge[red,->] node[left] {\color{black} 6} (d1);
\draw (x) edge (-1.3,.5);
\node[left] at (-1.3,.5) {\bf 1};
\draw (b1) edge (1.7,-.7);
\node[right] at (1.7,-.7) {\bf 3};
\end{tikzpicture}
$$
\caption{\label{fig:model} An example of a model with the spanning forest from
Figure \ref{fig:span}. Edges of the forest are red, while other edges are dotted. Labels of vertices and hairs are thick.}
\end{figure}
It is clear that every pair $(\Gamma,\tau)$ can be mapped to a model using an element of ${\mathbb{S}}_v\times {\mathbb{S}}_s\times \left({\mathbb{S}}_e\ltimes {\mathbb{S}}_2^{\times e}\right)$, so the condition (ii) is fulfilled.
\subsection{Defining the map for a model}
Let us now pick a model $(\Gamma,\tau)$, consisting of a single term graph $\Gamma\in\bar\mathrm{V}_v\bar\mathrm{E}_e\bar\mathrm{H}_s\mathrm{G}_{n}$ and $\tau\in F(\Gamma)$. The graph $\phi(\Gamma)$ obtained from $\Gamma$ by deleting all hairs and ignoring the degree is a core graph in $\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{Grac}^3$. To its edges that belong to $E(\tau)$ we attach the edge type $\Ed$, and to those that do not belong to $E(\tau)$ we attach the edge type $\Ess$ to get an element of $\bar\mathrm{V}_v\bar\mathrm{E}_e^{{\mathbb{S}}}\mathrm{Grac}^{3}$, and then, after taking coinvariants and adding the degrees, an element
\begin{equation}
\Phi_{\tau}(\Gamma)\in\mathrm{V}_v\mathrm{E}_e\mathrm{D}^{sk}\mathrm{G}_{n+1}.
\end{equation}
It is straightforward to check the following:
\begin{itemize}
\item the map is well defined in the sense that if there is an element of ${\mathbb{S}}_v\times {\mathbb{S}}_s\times \left({\mathbb{S}}_e\ltimes {\mathbb{S}}_2^{\times e}\right)$ that sends one model to another model, the same result in $\mathrm{V}_v\mathrm{E}_e\mathrm{D}^{sk}\mathrm{G}_{n+1}$ is obtained, i.e.\ condition (i) from above is fulfilled;
\item the degree of $\Phi_{\tau}(\Gamma)$ is greater by 1 than the degree of $\Gamma$;
\item the result $\Phi_{\tau}(\Gamma)$ is oriented, i.e.\ it is an element of $\mathrm{V}_v\mathrm{E}_e\mathrm{O}^{sk}\mathrm{G}_{n+1}$.
\end{itemize}
An example of $\Phi_{\tau}(\Gamma)$ is given in Figure \ref{fig:Phi}.
\begin{figure}[H] $$ \begin{tikzpicture} \node[int] (a) at (0,0) {}; \node[int] (b) at (1,0) {}; \node[int] (c) at (1,1) {}; \node[int] (d) at (0,1) {}; \node[int] (a1) at (-.5,-.5) {}; \node[int] (b1) at (1.5,-.5) {}; \node[int] (c1) at (1.5,1.5) {}; \node[int] (d1) at (-.5,1.5) {}; \node[int] (x) at (-1,.5) {}; \node[int] (y) at (1.5,.5) {}; \node[int] (z) at (2.2,.5) {}; \draw (a) edge[-latex] (b); \draw (b) edge[crossed,->] (c); \draw (c) edge[crossed,->] (d); \draw (d) edge[crossed,->] (a); \draw (a1) edge[crossed,->] (b1); \draw (b1) edge[-latex] (y); \draw (y) edge[-latex] (c1); \draw (b1) edge[crossed,->] (z); \draw (z) edge[crossed,->] (c1); \draw (y) edge[-latex] (z); \draw (c1) edge[crossed,->] (d1); \draw (d1) edge[crossed,->] (a1); \draw (a) edge[latex-] (a1); \draw (b) edge[crossed,->] (b1); \draw (c) edge[latex-] (c1); \draw (d) edge[latex-] (d1); \draw (a1) edge[crossed,->] (x); \draw (x) edge[-latex] (d1); \end{tikzpicture} $$ \caption{\label{fig:Phi} Oriented graph $\Phi_\tau(\Gamma)$ for the graph $\Gamma$ and spanning forest $\tau$ from Figure \ref{fig:span}.} \end{figure} \subsection{The final map} The map is now extended to all pairs $(\Gamma,\tau)$ by invariance under the action of ${\mathbb{S}}_v\times {\mathbb{S}}_s\times \left({\mathbb{S}}_e\ltimes {\mathbb{S}}_2^{\times e}\right)$. Then let us define \begin{equation} \Phi:\bar\mathrm{V}_v\bar\mathrm{E}_e\bar\mathrm{H}_s\mathrm{G}_{n}\rightarrow\mathrm{V}_v\mathrm{E}_e\mathrm{O}^{sk}\mathrm{G}_{n+1}, \quad \Gamma\mapsto\sum_{\tau\in F(\Gamma)}\Phi_{\tau}(\Gamma). \end{equation} The invariance under all actions implies that the induced map $\Phi:\mathrm{V}_v\mathrm{E}_e\mathrm{H}_s\mathrm{G}_{n}\rightarrow\mathrm{V}_v\mathrm{E}_e\mathrm{O}^{sk}\mathrm{G}_{n+1}$ is well defined. It is then extended to the whole \begin{equation} \Phi:\mathrm{H}\mathrm{G}_{n}\rightarrow\mathrm{O}^{sk}\mathrm{G}_{n+1}. 
\end{equation}
\begin{prop}
\label{prop:map}
The map $\Phi:\left(\mathrm{HG}_{n},d+h\right)\rightarrow\left(\mathrm{O}^{sk}\mathrm{G}_{n+1},d\right)$ is a map of complexes of degree 1, i.e.\
$$
\Phi((d+h)\Gamma)=d \Phi(\Gamma)
$$
for every $\Gamma\in\mathrm{HG}_{n}$.
\end{prop}
\begin{proof}
We have already checked that the degree of $\Phi$ is 1. For the other claim let us pick a single term graph $\Gamma\in\bar\mathrm{V}_v\bar\mathrm{E}_e\bar\mathrm{H}_s\mathrm{G}_{n}$. It holds that
$$
\Phi(d\Gamma)=
\Phi\left(\sum_{a\in E(\Gamma)}\Gamma/a\right)=
\sum_{a\in E(\Gamma)}\Phi\left(\Gamma/a\right)=
\sum_{a\in E(\Gamma)}\sum_{\tau\in F(\Gamma/a)}\Phi_{\tau}(\Gamma/a)
$$
where $\Gamma/a$ denotes the graph obtained by contracting the edge $a$ in $\Gamma$. Spanning forests of $\Gamma/a$ are in natural bijection with spanning forests of $\Gamma$ that contain $a$, $\tau/a\leftrightarrow\tau$, so we can write
$$
\Phi(d\Gamma)=
\sum_{a\in E(\Gamma)}
\sum_{\substack{\tau\in F(\Gamma)\\ a\in E(\tau)}}\Phi_{\tau/a}(\Gamma/a)=
\sum_{\tau\in F(\Gamma)}\sum_{a\in E(\tau)}\Phi_{\tau/a}(\Gamma/a).
$$
\begin{lemma}
Let $\Gamma\in\bar\mathrm{V}_v\bar\mathrm{E}_e\bar\mathrm{H}_s\mathrm{G}_{n}$, $\tau\in F(\Gamma)$ and $a\in E(\tau)$. Then
\begin{equation}
\Phi_{\tau/a}(\Gamma/a)\sim \Phi_{\tau}(\Gamma)/a,
\end{equation}
where $\sim$ means that they are in the same class of coinvariants under the action of ${\mathbb{S}}_v\times{\mathbb{S}}_e$.
\end{lemma}
\begin{proof}
It is clear that one side is $\pm$ the other side. Careful calculation of the sign is left to the reader.
\end{proof}
The lemma implies that
\begin{equation}
\Phi(d\Gamma)\sim\sum_{\tau\in F(\Gamma)}\sum_{a\in E(\tau)}\Phi_{\tau}(\Gamma)/a.
\end{equation}
Still on the left-hand side of the claimed relation we have
$$
\Phi(h(\Gamma))=\Phi\left(\sum_{i\in H(\Gamma)}\Gamma/i\right)
=\sum_{i\in H(\Gamma)}\Phi(\Gamma/i)
=\sum_{i\in H(\Gamma)}\sum_{\tau\in F(\Gamma/i)}\Phi_{\tau}(\Gamma/i),
$$
where $\Gamma/i$ denotes the graph obtained by deleting the hair $i$.
Spanning forests in $F(\Gamma/i)$ correspond to certain forests of $\Gamma$ in which one connected component has two hairs. Therefore, let $FD(\Gamma)$ (\emph{double-hair forests}) be the set of all sub-graphs $\lambda$ of $\Gamma$ that contain all vertices and hairs, have no cycles, in which one connected component has exactly two hairs and every other connected component has exactly one hair. Let those two hairs be $j(\lambda)$ and $k(\lambda)$. The sets $\{(i,\tau)|i\in H(\Gamma),\tau\in F(\Gamma/i)\}$ and $\{(\lambda,i)|\lambda\in FD(\Gamma),i\in\{j(\lambda),k(\lambda)\}\}$ are clearly in bijection. So we have
\begin{equation}
\label{eq:map2}
\Phi(h(\Gamma))=
\sum_{\lambda\in FD(\Gamma)}\left(\Phi_{\lambda}(\Gamma/j(\lambda))+\Phi_{\lambda}(\Gamma/k(\lambda))\right).
\end{equation}
On the other side, the differential is $d=d_C\pm d_E$. It acts on edges of $\Phi(\Gamma)$. They come from edges in $\Gamma$ and can be split into three sets:
\begin{itemize}
\item edges that are in $\tau$ form the set $E(\tau)$;
\item edges that connect two connected components of $\tau$ form the set $ED(\tau)$;
\item edges that make a cycle in a connected component of $\tau$ form the set $EC(\tau)$.
\end{itemize}
Only edges of type $\Ed$ and $\dE$ can be contracted by $d_C$. They come from the edges in $E(\tau)$, so
\begin{equation}
\label{eq:map1}
d_C \Phi(\Gamma)=
d_C\left(\sum_{\tau\in F(\Gamma)}\Phi_{\tau}(\Gamma)\right)=
\sum_{\tau\in F(\Gamma)}d_C\left(\Phi_{\tau}(\Gamma)\right)=
\sum_{\tau\in F(\Gamma)}\sum_{a\in E(\tau)}\Phi_{\tau}(\Gamma)/a\sim
\Phi(d\Gamma).
\end{equation}
The edge differential $d_E$ acts on edges of type $\Ess$, which are those in the sets $ED(\tau)$ and $EC(\tau)$.
We then split
\begin{equation}
d_E(\Phi_\tau(\Gamma))=d_{ED}(\Phi_\tau(\Gamma))+d_{EC}(\Phi_\tau(\Gamma)),
\end{equation}
where
\begin{equation}
d_{ED}(\Phi_\tau(\Gamma))=\sum_{a\in ED(\tau)}d_E^{(a)}(\Phi_\tau(\Gamma)),\quad
d_{EC}(\Phi_\tau(\Gamma))=\sum_{a\in EC(\tau)}d_E^{(a)}(\Phi_\tau(\Gamma)),
\end{equation}
and $d_E^{(a)}$ maps the edge $a$ as $\Ess\mapsto\Ed-(-1)^{n+1}\dE$.
\begin{lemma}
\label{lem:map3}
Let $\Gamma\in\bar\mathrm{V}_v\bar\mathrm{E}_e\bar\mathrm{H}_s\mathrm{G}_{n}$ be a single term graph. Then
$$
\sum_{\tau\in F(\Gamma)}d_{EC}\left(\Phi_{\tau}(\Gamma)\right)\sim 0.
$$
\end{lemma}
\begin{proof}
Let
$$
N(\Gamma):=
\sum_{\tau\in F(\Gamma)}d_{EC}\left(\Phi_{\tau}(\Gamma)\right)=
\sum_{\tau\in F(\Gamma)}\sum_{a\in EC(\tau)}d_E^{(a)}\left(\Phi_{\tau}(\Gamma)\right).
$$
The terms in the above relation can be summed in another order. Let $FC(\Gamma)$ (\emph{cycled forests}) be the set of all sub-graphs $\rho$ of $\Gamma$ that contain all vertices and hairs, have $v-s+1$ edges, and whose every connected component has exactly one hair. Those graphs are similar to spanning forests, but have one cycle. Let $C(\rho)$ be the set of edges in the cycle of $\rho$. Clearly, $\rho\setminus \{a\}$ for $a\in C(\rho)$ is a spanning forest of $\Gamma$, and the sets $\{(\tau,a)|\tau\in F(\Gamma),a\in EC(\tau)\}$ and $\{(\rho,a)|\rho\in FC(\Gamma),a\in C(\rho)\}$ are in bijection, so
$$
N(\Gamma)=\sum_{\rho\in FC(\Gamma)}\sum_{a\in C(\rho)}d_E^{(a)}\left(\Phi_{\rho\setminus \{a\}}(\Gamma)\right).
$$
It is now enough to show that
$$
\sum_{a\in C(\rho)}d_E^{(a)}\left(\Phi_{\rho\setminus \{a\}}(\Gamma)\right)\sim 0
$$
for every $\rho\in FC(\Gamma)$. Let $y\in V(\Gamma)$ be the vertex in the cycle of $\rho$ closest to the hair of its connected component (along $\rho$).
After choosing $a\in C(\rho)$, the cycle in $\Phi_{\rho\setminus \{a\}}(\Gamma)$ has the edge $a$ of type $\Ess$, and other edges of type $\Ed$ or $\dE$ directed from $y$ towards the edge $a$, as in the following diagram.
$$
\begin{tikzpicture}[baseline=0ex,scale=.7]
\node[int] (a) at (0,1.1) {};
\node[int] (b) at (1,.5) {};
\node[int] (c) at (1,-.5) {};
\node[int] (d) at (0,-1.1) {};
\node[int] (e) at (-1,-.5) {};
\node[int] (f) at (-1,.5) {};
\node[above] at (a) {$\scriptstyle y$};
\draw (a) edge[-latex] (b);
\draw (b) edge[-latex] (c);
\draw (a) edge[-latex] (f);
\draw (f) edge[-latex] (e);
\draw (e) edge[-latex] (d);
\draw (c) edge[->,crossed] (d);
\end{tikzpicture}
$$
After acting by $d_E^{(a)}$ this $\Ess$ is replaced by $\Ed+(-1)^n\dE$, as in the following diagram.
$$
\begin{tikzpicture}[baseline=-.6ex,scale=.7]
\node[int] (a) at (0,1.1) {};
\node[int] (b) at (1,.5) {};
\node[int] (c) at (1,-.5) {};
\node[int] (d) at (0,-1.1) {};
\node[int] (e) at (-1,-.5) {};
\node[int] (f) at (-1,.5) {};
\node[above] at (a) {$\scriptstyle y$};
\draw (a) edge[-latex] (b);
\draw (b) edge[-latex] (c);
\draw (a) edge[-latex] (f);
\draw (f) edge[-latex] (e);
\draw (e) edge[-latex] (d);
\draw (c) edge[-latex] (d);
\end{tikzpicture}
\quad+(-1)^n\quad
\begin{tikzpicture}[baseline=-.6ex,scale=.7]
\node[int] (a) at (0,1.1) {};
\node[int] (b) at (1,.5) {};
\node[int] (c) at (1,-.5) {};
\node[int] (d) at (0,-1.1) {};
\node[int] (e) at (-1,-.5) {};
\node[int] (f) at (-1,.5) {};
\node[above] at (a) {$\scriptstyle y$};
\draw (a) edge[-latex] (b);
\draw (b) edge[-latex] (c);
\draw (a) edge[-latex] (f);
\draw (f) edge[-latex] (e);
\draw (e) edge[-latex] (d);
\draw (c) edge[latex-] (d);
\end{tikzpicture}
$$
A careful calculation of the sign shows that these two terms cancel against the terms obtained by choosing the neighbouring edges in $C(\rho)$, while the two extreme terms, which have no corresponding neighbour, are indeed $0$ as they contain a cycle along arrows.
This concludes the proof that $N(\Gamma)\sim 0$.
\end{proof}
A similar study of the action on edges from $ED(\tau)$ leads to the following lemma.
\begin{lemma}
\label{lem:map4}
Let $\Gamma\in\bar\mathrm{V}_v\bar\mathrm{E}_e\bar\mathrm{H}_s\mathrm{G}_{n}$ be a single term graph. Then
$$
\sum_{\tau\in F(\Gamma)}d_{ED}\left(\Phi_{\tau}(\Gamma)\right)\sim
\sum_{\lambda\in FD(\Gamma)}\left(\Phi_{\lambda}\left(\Gamma/j(\lambda)\right)+\Phi_{\lambda}\left(\Gamma/k(\lambda)\right)\right).
$$
\end{lemma}
\begin{proof}
It holds that
$$
\sum_{\tau\in F(\Gamma)}d_{ED}\left(\Phi_{\tau}(\Gamma)\right)=
\sum_{\tau\in F(\Gamma)}\sum_{a\in ED(\tau)}d_E^{(a)}(\Phi_\tau(\Gamma)).
$$
For $\lambda\in FD(\Gamma)$ let $P(\lambda)$ be the set of edges in the path from $j(\lambda)$ to $k(\lambda)$. Clearly, $\lambda\setminus \{a\}$ for $a\in P(\lambda)$ is a spanning forest of $\Gamma$, and $a$ lies in $ED(\lambda\setminus \{a\})$ for that spanning forest. One can easily see that the sets $\{(\tau,a)|\tau\in F(\Gamma),a\in ED(\tau)\}$ and $\{(\lambda,a)|\lambda\in FD(\Gamma),a\in P(\lambda)\}$ are in bijection, so
$$
\sum_{\tau\in F(\Gamma)}d_{ED}\left(\Phi_{\tau}(\Gamma)\right)=
\sum_{\lambda\in FD(\Gamma)}\sum_{a\in P(\lambda)}d_E^{(a)}\left(\Phi_{\lambda\setminus\{a\}}(\Gamma)\right).
$$
To finish the proof it is enough to show that
$$
\sum_{a\in P(\lambda)}d_E^{(a)}\left(\Phi_{\lambda\setminus\{a\}}(\Gamma)\right)\sim
\Phi_{\lambda}\left(\Gamma/j(\lambda)\right)+\Phi_{\lambda}\left(\Gamma/k(\lambda)\right)
$$
for every $\lambda\in FD(\Gamma)$. After choosing $a\in P(\lambda)$, the path from $j(\lambda)$ to $k(\lambda)$ along $\lambda$ in $\Phi_{\lambda\setminus\{a\}}(\Gamma)$ has the edge $a$ of type $\Ess$, and the other edges of type $\Ed$ or $\dE$ directed from $j(\lambda)$ or $k(\lambda)$ towards the edge $a$, as in the following diagram.
$$
\begin{tikzpicture}[baseline=0ex,scale=.7]
\node[int] (b) at (1,.5) {};
\node[int] (c) at (1,-.5) {};
\node[int] (d) at (0,-1.1) {};
\node[int] (e) at (-1,-.5) {};
\node[int] (f) at (-1,.5) {};
\node[above] at (b) {$\scriptstyle k(\lambda)$};
\node[above] at (f) {$\scriptstyle j(\lambda)$};
\draw (b) edge[-latex] (c);
\draw (f) edge[-latex] (e);
\draw (e) edge[-latex] (d);
\draw (c) edge[->,crossed] (d);
\end{tikzpicture}
$$
After acting by $d_E^{(a)}$ this $\Ess$ is replaced by $\Ed+(-1)^n\dE$, as in the following diagram.
$$
\begin{tikzpicture}[baseline=-.6ex,scale=.7]
\node[int] (b) at (1,.5) {};
\node[int] (c) at (1,-.5) {};
\node[int] (d) at (0,-1.1) {};
\node[int] (e) at (-1,-.5) {};
\node[int] (f) at (-1,.5) {};
\node[above] at (b) {$\scriptstyle k(\lambda)$};
\node[above] at (f) {$\scriptstyle j(\lambda)$};
\draw (b) edge[-latex] (c);
\draw (f) edge[-latex] (e);
\draw (e) edge[-latex] (d);
\draw (c) edge[-latex] (d);
\end{tikzpicture}
\quad+(-1)^n\quad
\begin{tikzpicture}[baseline=-.6ex,scale=.7]
\node[int] (b) at (1,.5) {};
\node[int] (c) at (1,-.5) {};
\node[int] (d) at (0,-1.1) {};
\node[int] (e) at (-1,-.5) {};
\node[int] (f) at (-1,.5) {};
\node[above] at (b) {$\scriptstyle k(\lambda)$};
\node[above] at (f) {$\scriptstyle j(\lambda)$};
\draw (b) edge[-latex] (c);
\draw (f) edge[-latex] (e);
\draw (e) edge[-latex] (d);
\draw (c) edge[latex-] (d);
\end{tikzpicture}
$$
A careful calculation of the sign shows that these two terms cancel against the terms obtained by choosing the neighbouring edges in $P(\lambda)$. The two extreme terms, which do not have a corresponding neighbour, are exactly $\Phi_{\lambda}\left(\Gamma/j(\lambda)\right)$ and $\Phi_{\lambda}\left(\Gamma/k(\lambda)\right)$, which was to be demonstrated.
\end{proof}
Equations \eqref{eq:map1} and \eqref{eq:map2}, and Lemmas \ref{lem:map3} and \ref{lem:map4} imply that
\begin{multline*}
\Phi(d(\Gamma))+\Phi(h(\Gamma))\sim
d_C(\Phi(\Gamma))+\sum_{\lambda\in FD(\Gamma)}\left(\Phi_{\lambda}(h_{j(\lambda)}(\Gamma))+\Phi_{\lambda}(h_{k(\lambda)}(\Gamma))\right)\sim\\
\sim d_C(\Phi(\Gamma))+
\sum_{\tau\in F(\Gamma)}d_{EC}\left(\Phi_{\tau}(\Gamma)\right)+
\sum_{\tau\in F(\Gamma)}d_{ED}\left(\Phi_{\tau}(\Gamma)\right)=
d_C(\Phi(\Gamma))+d_E(\Phi(\Gamma))=d(\Phi(\Gamma)).
\end{multline*}
After taking coinvariants this implies that
$$
\Phi(d(\Gamma))+\Phi(h(\Gamma))=d(\Phi(\Gamma))
$$
for each $\Gamma\in\mathrm{HG}_{n}$. Hence, $\Phi:(\mathrm{HG}_{n},d+h)\to \left(\mathrm{O}^{sk}\mathrm{G}_{n+1},d\right)$ is a map of complexes.
\end{proof}
\subsection{Differential grading on the number of hairs/sources}
Note that the part of $\mathrm{HG}_{n}$ with $s$ hairs is mapped to the part of $\mathrm{O}\mathrm{G}_{n+1}$ with $s$ sources, so the map $\Phi:\mathrm{H}\mathrm{G}_{n}\rightarrow\mathrm{O}\mathrm{G}_{n+1}$ can be split as
\begin{equation}
\Phi:\mathrm{H}_s\mathrm{G}_{n}\rightarrow\mathrm{S}_s\mathrm{O}\mathrm{G}_{n+1},
\end{equation}
where $\mathrm{S}_s\mathrm{O}\mathrm{G}_{n+1}$ is the part of $\mathrm{O}\mathrm{G}_{n+1}$ spanned by oriented graphs with exactly $s$ sources. There are filtrations on $\left(\mathrm{H}\mathrm{G}_{n},d+h\right)$ and $\left(\mathrm{O}\mathrm{G}_{n+1},d\right)$ by the number of hairs and sources, respectively. Their differential gradings are $\left(\mathrm{H}\mathrm{G}_{n},d\right)$ and $\left(\mathrm{O}\mathrm{G}_{n+1},d_0\right)$, where $d_0$ is the part of $d$ that does not destroy a source. We will refer to graph complexes with such a differential $d_0$ as \emph{fixed source graph complexes}.
As the map $\Phi$ maps a graph with $s$ hairs to a sum of graphs all containing $s$ sources, the same map of vector spaces, $\Phi$, is also a map of complexes
\begin{equation}
\Phi:\left(\mathrm{H}\mathrm{G}_{n},d\right)\rightarrow\left(\mathrm{O}\mathrm{G}_{n+1},d_0\right).
\end{equation}
\subsection{The dual map}
As the map $\Phi$ preserves the loop order, we obtain a dual map $\Phi\leftrightarrow G$
\begin{equation}
G:\left(\mathrm{O}\mathrm{GC}_{n+1},\delta\right)\rightarrow\left(\mathrm{H}\mathrm{GC}_{n},\delta+\chi\right).
\end{equation}
Similarly, the same map is a map of complexes
\begin{equation}
G:\left(\mathrm{O}\mathrm{GC}_{n+1},\delta_0\right)\rightarrow\left(\mathrm{H}\mathrm{GC}_{n},\delta\right),
\end{equation}
where $\delta_0\leftrightarrow d_0$ is the part of $\delta$ that does not produce a source by splitting a non-source vertex. Let $\Gamma\in \mathrm{O} \mathrm{GC}_{n+1}$ be a single term graph. We define the \emph{hairy skeleton} $hs(\Gamma)\in \mathrm{H}\mathrm{GC}_{n}$ to be the hairy graph obtained from $\Gamma$ by:
\begin{itemize}
\item putting a hair on each source vertex;
\item contracting each occurrence of a bi-valent target vertex $\EdE$ into a single non-oriented edge.
\end{itemize}
It is evident that
\begin{equation}\label{eq:G}
G(\Gamma) =
\begin{cases}
hs(\Gamma) & \text{if $\Gamma=\Phi_\tau(hs(\Gamma))$ for some spanning forest $\tau$} \\
0& \text{otherwise.}
\end{cases}
\end{equation}
\begin{prop}
Let $\Gamma\in \mathrm{O} \mathrm{GC}_{n+1}$ be a single term graph whose hairy skeleton $hs(\Gamma)$ contains $v$ vertices, $e$ edges, and $s$ sources. We have that $\Gamma=\Phi_\tau(hs(\Gamma))$ for some spanning forest $\tau$ if and only if $\Gamma$ contains $e-v+s$ bi-valent target vertices $\EdE$.
\end{prop}
\begin{proof}
If $\Gamma=\Phi_\tau(hs(\Gamma))$, it is clear that $\Gamma$ must contain $e-v+s$ bi-valent target vertices $\EdE$. Now, let $\Gamma$ be an oriented graph that contains $e-v+s$ bi-valent target vertices $\EdE$.
Let $\gamma$ be the subgraph of $\Gamma$ obtained by removing all bi-valent targets $\EdE$ from $\Gamma$. Note that $\gamma$ is a graph with $v$ vertices and $v-s$ edges. Such a graph must contain at least $s$ different connected components, with equality if and only if $\gamma$ is a forest. Next, note that $\gamma$ is oriented, hence each component must contain a source vertex, and $\gamma$ contains precisely $s$ source vertices. We conclude that $\gamma$ must be a forest with $s$ components, where each component contains precisely one source vertex. Hence, we get that
$$\Gamma= \Phi_{hs(\gamma)}(hs(\Gamma)).$$
\end{proof}
This proposition gives us an alternative description of the map $G$:
\begin{equation}\label{eq:Ge}
G(\Gamma) =
\begin{cases}
hs(\Gamma) & \text{if $\Gamma$ contains $e-v+s$ bi-valent target vertices $\EdE$} \\
0& \text{otherwise.}
\end{cases}
\end{equation}
\section{The map is a quasi-isomorphism}
\label{s:qi}
\begin{thm}
\label{thm:main1}
The map
$$
\Phi: \left(\mathrm{HG}_n,d\right)\to\left( \mathrm{O} \mathrm{G}_{n+1},d_0\right)
$$
and the projection
$$
p:\left(\mathrm{S}\mathrm{G}_n,d_0\right)\twoheadrightarrow\left(\mathrm{O}\mathrm{G}_n,d_0\right)
$$
are quasi-isomorphisms.
\end{thm}
\begin{proof}
We prove both claims simultaneously. After splitting by the number of hairs/sources, we need to prove that
$$
\Phi: \left(\mathrm{H}_s\mathrm{G}_n,d\right)\to\left( \mathrm{S}_s\mathrm{O} \mathrm{G}_{n+1},d_0\right)
\twoheadleftarrow\left( \mathrm{S}_s\mathrm{G}_{n+1},d_0\right)
$$
are quasi-isomorphisms.
Using Definition/Proposition \ref{defprop:sk} it is enough to prove the claim for
$$
\Phi: \left(\mathrm{H}_s\mathrm{G}_n,d\right)\to\left( \mathrm{S}_s\mathrm{O}^{sk} \mathrm{G}_{n+1},d_0\right)
\twoheadleftarrow\left( \mathrm{S}_s^{sk}\mathrm{G}_{n+1},d_0\right)
$$
where
\begin{equation}
\mathrm{S}_s\mathrm{O}^{sk}\mathrm{G}_n:=\kappa\left(\mathrm{S}_s\mathrm{O}\mathrm{G}_n\right),
\end{equation}
\begin{equation}
\mathrm{S}_s^{sk}\mathrm{G}_n:=\kappa\left(\mathrm{S}_s\mathrm{G}_n\right).
\end{equation}
On their mapping cones we set up the spectral sequence filtered by the number of vertices. The standard splitting of the complexes as a product of complexes with fixed loop number implies the correct convergence. It is therefore enough to prove the claim for the first differential of the spectral sequence (the differential grading). The edge differential does not change the number of vertices, while the core differential does. Therefore, on the first page of the spectral sequences there are mapping cones of the maps
$$
\Phi: \left(\mathrm{H}_s\mathrm{G}_n,0\right)\to\left( \mathrm{S}_s\mathrm{O}^{sk} \mathrm{G}_{n+1},d_{E0}\right)
\twoheadleftarrow\left( \mathrm{S}_s^{sk}\mathrm{G}_{n+1},d_{E0}\right),
$$
where $d_{E0}$ is the part of the edge differential $d_E$ that does not destroy a source. The complexes are direct sums of subcomplexes with a fixed number of vertices and edges, so it is enough to show the claim for
$$
\Phi: \left(\mathrm{V}_v\mathrm{E}_e\mathrm{H}_s\mathrm{G}_n,0\right)\to
\left(\mathrm{V}_v\mathrm{E}_e\mathrm{S}_s\mathrm{O}^{sk} \mathrm{G}_{n+1},d_{E0}\right)
\twoheadleftarrow\left( \mathrm{V}_v\mathrm{E}_e\mathrm{S}_s^{sk}\mathrm{G}_{n+1},d_{E0}\right),
$$
where $\mathrm{V}_v\mathrm{E}_e\mathrm{S}_s\mathrm{O}^{sk} \mathrm{G}_{n+1}$ is the part of $\mathrm{V}_v\mathrm{E}_e\mathrm{O}^{sk} \mathrm{G}_{n+1}$ with $s$ sources.
Recall that the skeleton versions of oriented and sourced graph complexes $\mathrm{V}_v\mathrm{E}_e\mathrm{S}_s\mathrm{O}^{sk} \mathrm{G}_{n+1}$ and $\mathrm{V}_v\mathrm{E}_e\mathrm{S}_s^{sk} \mathrm{G}_{n+1}$ are spaces of invariants of the action of ${\mathbb{S}}_v\times{\mathbb{S}}_e$, while the hairy graph complex $\mathrm{V}_v\mathrm{E}_e\mathrm{H}_s\mathrm{G}_n$ is the space of invariants of the action of ${\mathbb{S}}_v\times {\mathbb{S}}_s\times \left({\mathbb{S}}_e\ltimes {\mathbb{S}}_2^{\times e}\right)$. In order to have the same group acting in the latter case, let us redefine the hairy graph complex in two steps. Let
\begin{equation}
\bar\mathrm{V}_v\dot\mathrm{E}_e\mathrm{H}_s\mathrm{G}_{n}:=\left\{
\begin{array}{ll}
\left(\bar\mathrm{V}_v\bar\mathrm{E}_e\bar\mathrm{H}_s\mathrm{G}_n\otimes\sgn_s\right)^{{\mathbb{S}}_s\times{\mathbb{S}}_2^{\times e}}
\qquad&\text{for $n$ even,}\\
\left(\bar\mathrm{V}_v\bar\mathrm{E}_e\bar\mathrm{H}_s\mathrm{G}_n\otimes\sgn_s\otimes\sgn_2^{\otimes e}\right)^{{\mathbb{S}}_s\times{\mathbb{S}}_2^{\times e}}
\qquad&\text{for $n$ odd,}
\end{array}
\right.
\end{equation}
and then
\begin{equation}
\mathrm{V}_v\mathrm{E}_e\mathrm{H}_s\mathrm{G}_{n}=\left\{
\begin{array}{ll}
\left(\bar\mathrm{V}_v\dot\mathrm{E}_e\mathrm{H}_s\mathrm{G}_{n}\otimes\sgn_e\right)^{{\mathbb{S}}_v\times{\mathbb{S}}_e}
\qquad&\text{for $n$ even,}\\
\left(\bar\mathrm{V}_v\dot\mathrm{E}_e\mathrm{H}_s\mathrm{G}_{n}\otimes\sgn_v\right)^{{\mathbb{S}}_v\times{\mathbb{S}}_e}
\qquad&\text{for $n$ odd.}
\end{array}
\right.
\end{equation}
The action of ${\mathbb{S}}_v\times{\mathbb{S}}_e$ clearly commutes with the map $\Phi$. Since the edge differential does not change the number of vertices and edges, taking homology commutes with taking coinvariants of that action.
Therefore, it is now enough to show the claim for
$$
\Phi: \left(\bar\mathrm{V}_v\dot\mathrm{E}_e\mathrm{H}_s\mathrm{G}_{n},0\right)\to
\left(\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{S}_s\mathrm{O}^{sk} \mathrm{G}_{n+1},d_{E0}\right)
\twoheadleftarrow\left(\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{S}_s^{sk}\mathrm{G}_{n+1},d_{E0}\right).
$$
Let us pick a particular single term graph $\Gamma\in\bar\mathrm{V}_v\dot\mathrm{E}_e\mathrm{H}_s\mathrm{G}_{n}$. Note that in $\Gamma$ edges are (up to the sign) undirected. Let $\langle\mathrm{O}\Gamma\rangle$ be the subspace of $\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{S}_s\mathrm{O}^{sk} \mathrm{G}_{n+1}$ spanned by oriented graphs of the shape $\Gamma$ without hairs, but with sources exactly where the hairs were. Similarly, let $\langle\mathrm{S}\Gamma\rangle$ be the subspace of $\bar\mathrm{V}_v\bar\mathrm{E}_e\mathrm{S}_s^{sk} \mathrm{G}_{n+1}$ spanned by sourced graphs of the shape $\Gamma$ without hairs, but with sources exactly where the hairs were. The map $\Phi$ is defined such that $\Phi(\Gamma)\in\langle\mathrm{O}\Gamma\rangle$. Also, the differential $d_{E0}$ acts within the particular subspaces $\langle\mathrm{O}\Gamma\rangle$ and $\langle\mathrm{S}\Gamma\rangle$. Therefore, we can split the map as a direct sum and it is enough to prove the claim for
\begin{equation}
\label{eq:hreduced}
\Phi: \left(\langle\Gamma\rangle,0\right)\to
\left(\langle\mathrm{O}\Gamma\rangle,d_{E0}\right)
\twoheadleftarrow\left(\langle\mathrm{S}\Gamma\rangle,d_{E0}\right),
\end{equation}
for every $\Gamma\in\bar\mathrm{V}_v\dot\mathrm{E}_e\mathrm{H}_s\mathrm{G}_{n}$. In order to prove that, let us choose $v-s$ edges in $\Gamma$, say $a_1,\dots,a_{v-s}$. Let $F(a_1,\dots,a_i)$ be the sub-graph of $\Gamma$ that includes those edges, all hairs and all necessary vertices. We require that for every $i=1,\dots,v-s$ the sub-graph $F(a_1,\dots,a_i)$ is a forest. Recall that in a forest, every connected component has exactly one hair.
Clearly, $F(a_1,\dots, a_{v-s})$ is a spanning forest. For every $i=0,\dots,v-s$, we form two graph complexes $\langle\mathrm{O}\Gamma^i\rangle$ and $\langle\mathrm{S}\Gamma^i\rangle$ as follows. All of them are spanned by graphs whose core graph is the un-haired $\Gamma$, with attached edge types from the dg $\langle {\mathbb{S}}_2\rangle$ module $\bar\Sigma$ spanned by $\{\Ed[n],\dE[n],\Ess[n-1],\ET[n]\}$ with the ${\mathbb{S}}_2$ action
\begin{equation}
\Ed \leftrightarrow\dE,\quad
\Ess\mapsto (-1)^{n}\Ess,\quad
\ET\mapsto (-1)^{n+1}\ET,
\end{equation}
and the differential
\begin{equation}
\Ess\mapsto\Ed-(-1)^n\dE.
\end{equation}
The complex $\langle\mathrm{S}\Gamma^i\rangle$ is spanned by graphs whose attached edge types fulfill the following conditions:
\begin{enumerate}
\item Edges $a_1,\dots, a_i$ have type $\ET$, and other edges have other types.
\item No vertex in the forest $F(a_1,\dots,a_i)$ has a neighbouring edge of type $\Ed$ or $\dE$ heading towards it.
\item Every vertex outside the forest $F(a_1,\dots,a_i)$ has a neighbouring edge of type $\Ed$ or $\dE$ heading towards it.
\end{enumerate}
The complex $\langle\mathrm{O}\Gamma^i\rangle$ is spanned by graphs whose attached edge types fulfill the conditions above together with:
\begin{enumerate}
\item[(4)] There are no cycles along arrows on edges of type $\Ed$ and $\dE$.
\end{enumerate}
Examples of graphs in $\langle\mathrm{S}\Gamma^i\rangle$ and $\langle\mathrm{O}\Gamma^i\rangle$ are shown in Figure \ref{fig:SGamma}.
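Conditions (2) and (3) are purely local and can be checked mechanically. The following Python sketch (our own illustration; names are hypothetical, and we read condition (3) as requiring at least one incoming directed edge) verifies them for a given forest and assignment of edge types:

```python
def satisfies_conditions_2_3(vertices, forest_vertices, typed_arcs):
    """Check conditions (2) and (3) above: no vertex of the forest
    F(a_1, ..., a_i) has an incoming edge of directed type, while every
    vertex outside the forest has at least one.  A typed arc is a triple
    (tail, head, kind) with kind 'dir' for the directed types; edges of
    the other types are irrelevant here and may be omitted."""
    incoming = {x: 0 for x in vertices}
    for tail, head, kind in typed_arcs:
        if kind == 'dir':
            incoming[head] += 1
    return all(
        incoming[x] == 0 if x in forest_vertices else incoming[x] > 0
        for x in vertices
    )

# forest {0, 1}; vertex 2 outside the forest is hit by a directed edge:
assert satisfies_conditions_2_3({0, 1, 2}, {0, 1}, [(0, 2, 'dir')])
# a directed edge into a forest vertex violates condition (2):
assert not satisfies_conditions_2_3({0, 1, 2}, {0, 1},
                                    [(2, 0, 'dir'), (1, 2, 'dir')])
```

Condition (4), the absence of cycles along arrows, is a global condition and would additionally require a cycle search on the directed-type edges.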
\begin{figure}[H] $$ \begin{tikzpicture} \node[red,int] (a) at (0,0) {}; \node[red,int] (b) at (1,0) {}; \node[int] (c) at (1,1) {}; \node[int] (d) at (0,1) {}; \node[red,int] (a1) at (-.5,-.5) {}; \node[red,int] (b1) at (1.5,-.5) {}; \node[int] (c1) at (1.5,1.5) {}; \node[red,int] (d1) at (-.5,1.5) {}; \node[red,int] (x) at (-1,.5) {}; \node[int] (y) at (1.5,.5) {}; \node[int] (z) at (2.2,.5) {}; \draw (a) edge[red,very thick,->] (b); \draw (b) edge[crossed,->] (c); \draw (c) edge[crossed,->] (d); \draw (d) edge[latex-] (a); \draw (a1) edge[crossed,->] (b1); \draw (b1) edge[-latex] (y); \draw (y) edge[latex-] (c1); \draw (b1) edge[crossed,->] (z); \draw (z) edge[-latex] (c1); \draw (y) edge[-latex] (z); \draw (c1) edge[crossed,->] (d1); \draw (d1) edge[crossed,->] (a1); \draw (a) edge[red,very thick,<-] (a1); \draw (b) edge[crossed,->] (b1); \draw (c) edge[latex-] (c1); \draw (d) edge[latex-] (d1); \draw (a1) edge[crossed,->] (x); \draw (x) edge[red,very thick,->] (d1); \end{tikzpicture} \quad\quad\quad \begin{tikzpicture} \node[red,int] (a) at (0,0) {}; \node[red,int] (b) at (1,0) {}; \node[int] (c) at (1,1) {}; \node[int] (d) at (0,1) {}; \node[red,int] (a1) at (-.5,-.5) {}; \node[red,int] (b1) at (1.5,-.5) {}; \node[int] (c1) at (1.5,1.5) {}; \node[red,int] (d1) at (-.5,1.5) {}; \node[red,int] (x) at (-1,.5) {}; \node[int] (y) at (1.5,.5) {}; \node[int] (z) at (2.2,.5) {}; \draw (a) edge[red,very thick,->] (b); \draw (b) edge[crossed,->] (c); \draw (c) edge[crossed,->] (d); \draw (d) edge[latex-] (a); \draw (a1) edge[crossed,->] (b1); \draw (b1) edge[-latex] (y); \draw (y) edge[-latex] (c1); \draw (b1) edge[crossed,->] (z); \draw (z) edge[-latex] (c1); \draw (y) edge[-latex] (z); \draw (c1) edge[crossed,->] (d1); \draw (d1) edge[crossed,->] (a1); \draw (a) edge[red,very thick,<-] (a1); \draw (b) edge[crossed,->] (b1); \draw (c) edge[latex-] (c1); \draw (d) edge[latex-] (d1); \draw (a1) edge[crossed,->] (x); \draw (x) edge[red,very thick,->] (d1); 
\end{tikzpicture} $$ \caption{\label{fig:SGamma} Two examples of graphs in $\langle\mathrm{S}\Gamma^3\rangle$, with $\Gamma$ as in Figure \ref{fig:span}. The forest $F(a_1,a_2,a_3)$ is depicted in red. The right example is also in $\langle\mathrm{O}\Gamma^3\rangle$, while the left one is not.} \end{figure} The differential on $\bar\Sigma$ induces the differential $d_{E0}$ on $\langle\mathrm{S}\Gamma^i\rangle$ and $\langle\mathrm{O}\Gamma^i\rangle$, where the forbidden graphs are considered zero. It is straightforward to check that \begin{equation} \left(\langle\mathrm{S}\Gamma^0\rangle,d_{E0}\right)= \left(\langle\mathrm{S}\Gamma\rangle,d_{E0}\right), \qquad \left(\langle\mathrm{O}\Gamma^0\rangle,d_{E0}\right)= \left(\langle\mathrm{O}\Gamma\rangle,d_{E0}\right). \end{equation} Also, it holds that \begin{equation} \langle\mathrm{S}\Gamma^{v-s}\rangle=\langle\mathrm{O}\Gamma^{v-s}\rangle \end{equation} is one-dimensional, spanned by the graph with edges $a_1,\dots,a_{v-s}$ of type $\ET$ and the other edges of type $\Ess$. There are natural projections of complexes $\pi_i:\langle\mathrm{S}\Gamma^i\rangle\twoheadrightarrow\langle\mathrm{O}\Gamma^i\rangle$ for every $i=0,\dots,v-s$. Also, for every $i=1,\dots,v-s$, there are natural maps \begin{equation} f^i:\langle\mathrm{O}\Gamma^{i-1}\rangle\rightarrow\langle\mathrm{O}\Gamma^i\rangle,\quad g^i:\langle\mathrm{S}\Gamma^{i-1}\rangle\rightarrow\langle\mathrm{S}\Gamma^i\rangle, \end{equation} that only change the type of the edge $a_i$ as \begin{equation} \Ess\mapsto 0,\quad \Ed\mapsto\ET,\quad \dE\mapsto(-1)^{n+1}\ET, \end{equation} where forbidden graphs are considered zero. The following diagram clearly commutes.
$$ \begin{tikzcd} \langle\mathrm{O}\Gamma^{i-1}\rangle \arrow[twoheadleftarrow]{r}{\pi_{i-1}} \arrow{d}{f^i} & \langle\mathrm{S}\Gamma^{i-1}\rangle \arrow{d}{g^i} \\ \langle\mathrm{O}\Gamma^{i}\rangle \arrow[twoheadleftarrow]{r}{\pi_{i}} & \langle\mathrm{S}\Gamma^{i}\rangle \end{tikzcd} $$ \begin{lemma} For every $i=1,\dots,v-s$, the maps $f^i:\langle\mathrm{O}\Gamma^{i-1}\rangle\rightarrow\langle\mathrm{O}\Gamma^i\rangle$ and $g^i:\langle\mathrm{S}\Gamma^{i-1}\rangle\rightarrow\langle\mathrm{S}\Gamma^i\rangle$ are quasi-isomorphisms. \end{lemma} \begin{proof} Let us prove the claim for $g^i$. The essential difference between $\langle\mathrm{S}\Gamma^{i-1}\rangle$ and $\langle\mathrm{S}\Gamma^{i}\rangle$ lies in the edge $a_i$: it has to be of type $\ET$ in $\langle\mathrm{S}\Gamma^{i}\rangle$, while it is of another type in $\langle\mathrm{S}\Gamma^{i-1}\rangle$. Since $g^i$ does not change the types of other edges, it splits as a direct sum of maps between complexes with fixed types of the other edges $$ g^i_{fix}:\langle\mathrm{S}\Gamma^{i-1}_{fix}\rangle\rightarrow\langle\mathrm{S}\Gamma^i_{fix}\rangle $$ where $\langle\mathrm{S}\Gamma^{i-1}_{fix}\rangle$ and $\langle\mathrm{S}\Gamma^{i}_{fix}\rangle$ are sub-complexes spanned by graphs with fixed types of all edges other than $a_i$. It is enough to show that each $g^i_{fix}$ is a quasi-isomorphism. Here, depending on the choice of fixed edge types, the condition of being sourced can disallow some possibilities for the edge $a_i$ in both $\langle\mathrm{S}\Gamma^{i-1}_{fix}\rangle$ and $\langle\mathrm{S}\Gamma^{i}_{fix}\rangle$. We list all cases, showing that the map is a quasi-isomorphism in each of them. Let $x_i$ be the vertex that is in the forest $F(a_1,\dots,a_{i})$ but not in the forest $F(a_1,\dots,a_{i-1})$.
\begin{enumerate} \item If there is a vertex in the forest $F(a_1,\dots,a_{i-1})$ that has a neighbouring edge of type $\Ed$ or $\dE$ heading towards it, or there is a vertex outside the forest $F(a_1,\dots,a_{i})$ that does not have a neighbouring edge of type $\Ed$ or $\dE$ heading towards it, both $\langle\mathrm{S}\Gamma^{i-1}_{fix}\rangle$ and $\langle\mathrm{S}\Gamma^{i}_{fix}\rangle$ are zero complexes and the map is clearly a quasi-isomorphism. \item If case (1) does not apply and $x_i$ has a neighbouring edge of type $\Ed$ or $\dE$ heading towards it, the edge $a_i$ (which goes from a vertex in the forest $F(a_1,\dots,a_{i-1})$ towards $x_i$) can have type $\Ed$ or $\Ess$ in $\langle\mathrm{S}\Gamma^{i-1}_{fix}\rangle$, making that complex acyclic. In $\langle\mathrm{S}\Gamma^{i}_{fix}\rangle$, no type is allowed, so it is again the zero complex. Therefore, the map is again a quasi-isomorphism. \item If case (1) does not apply and $x_i$ does not have a neighbouring edge of type $\Ed$ or $\dE$ heading towards it, the edge $a_i$ must have type $\Ed$ in $\langle\mathrm{S}\Gamma^{i-1}_{fix}\rangle$. In $\langle\mathrm{S}\Gamma^{i}_{fix}\rangle$ that edge must have type $\ET$, making the map an isomorphism. Thus, it is also a quasi-isomorphism. \end{enumerate} For the map $f^i$ between complexes of oriented graphs, one easily verifies that the extra condition does not change the argument.
\end{proof} The lemma implies that \begin{equation} f:=f^{v-s}\circ\dots\circ f^1: \langle\mathrm{O}\Gamma\rangle\rightarrow\langle\mathrm{O}\Gamma^{v-s}\rangle, \end{equation} and \begin{equation} g:=g^{v-s}\circ\dots\circ g^1: \langle\mathrm{S}\Gamma\rangle\rightarrow\langle\mathrm{S}\Gamma^{v-s}\rangle \end{equation} are quasi-isomorphisms and the following diagram commutes: $$ \begin{tikzcd} \langle\mathrm{O}\Gamma\rangle \arrow[twoheadleftarrow]{r}{\pi} \arrow{d}{f} & \langle\mathrm{S}\Gamma\rangle \arrow{d}{g} \\ \langle\mathrm{O}\Gamma^{v-s}\rangle \arrow[leftrightarrow]{r}{\id} & \langle\mathrm{S}\Gamma^{v-s}\rangle. \end{tikzcd} $$ \begin{lemma} \label{lem:QIfh} The map $f\circ \Phi:\langle\Gamma\rangle\rightarrow\langle\mathrm{O}\Gamma^{v-s}\rangle$ is a quasi-isomorphism. \end{lemma} \begin{proof} Both complexes are one-dimensional, so we only need to prove that $f\circ \Phi\neq 0$. The left-hand side complex has a generator $\Gamma$. It holds that $$ f\circ \Phi(\Gamma)= f\left(\sum_{\tau\in F(\Gamma)}\Phi_{\tau}(\Gamma)\right). $$ The map $\Phi_{\tau}$ gives the edges in $E(\tau)$ type $\Ed$ or $\dE$, and type $\Ess$ to the other edges. After that, the map $f=f^{v-s}\circ\dots\circ f^1$ kills all graphs in which any of the edges $a_1,\dots,a_{v-s}$ is of type $\Ess$. Therefore, $f\circ \Phi_{\tau}$ is non-zero only if the forest $\tau$ consists exactly of the edges $a_1,\dots,a_{v-s}$. Let us call this forest $T$. So \begin{equation} f\circ \Phi(\Gamma)= f\left(\Phi_{T}(\Gamma)\right). \end{equation} This is clearly the generator of $\langle\mathrm{O}\Gamma^{v-s}\rangle$, and therefore non-zero. \end{proof} With this lemma, we have proven that the diagonal map in the following commutative diagram is also a quasi-isomorphism.
$$ \begin{tikzcd} \langle\Gamma\rangle \arrow{r}{\Phi} \arrow[swap]{dr}{f\circ \Phi} & \langle\mathrm{O}\Gamma\rangle \arrow[twoheadleftarrow]{r}{\pi} \arrow{d}{f} & \langle\mathrm{S}\Gamma\rangle \arrow{d}{g} \\ & \langle\mathrm{O}\Gamma^{v-s}\rangle \arrow[leftrightarrow]{r}{\id} & \langle\mathrm{S}\Gamma^{v-s}\rangle \end{tikzcd} $$ Together with the previous result, we conclude that all the maps in the diagram are quasi-isomorphisms, which concludes the proof. \end{proof} \begin{cor} The map $$ \Phi: \left(\mathrm{HG}_n,d+h\right)\to\left( \mathrm{O} \mathrm{G}_{n+1},d\right) $$ and the projection $$ p:\left(\mathrm{S}\mathrm{G}_n,d\right)\twoheadrightarrow\left(\mathrm{O}\mathrm{G}_n,d\right) $$ are quasi-isomorphisms. \end{cor} \begin{proof} On the mapping cone of $\Phi$, we set up a spectral sequence on the number $s$, which is the number of hairs in $\mathrm{HG}_n$ and the number of sources in $\mathrm{O} \mathrm{G}_{n+1}$. The standard splitting of the complexes as products of complexes with fixed loop number implies the correct convergence. On the first page of the spectral sequence we have exactly the mapping cone of $$ \Phi: \left(\mathrm{HG}_n,d\right)\to\left( \mathrm{O} \mathrm{G}_{n+1},d_0\right) $$ which is acyclic according to Theorem \ref{thm:main1}. By the spectral sequence argument, the mapping cone of $$ \Phi: \left(\mathrm{HG}_n,d+h\right)\to\left( \mathrm{O} \mathrm{G}_{n+1},d\right) $$ is also acyclic, hence $\Phi$ is a quasi-isomorphism here. The same argument works for the projection $p$; this was, however, already proven in \cite[Theorem 1.1]{MultiSourced}. \end{proof} \begin{cor}[Theorem \ref{thm:main}] The dual map $G$ of $\Phi$, given explicitly in \eqref{eq:G}, is a quasi-isomorphism of complexes $$ G: \left( \mathrm{O} \mathrm{GC}_{n+1},\delta \right)\to \left(\mathrm{HGC}_n,\delta+\chi\right) $$ and of complexes $$ G: \left( \mathrm{O} \mathrm{GC}_{n+1},\delta_0\right)\to \left(\mathrm{HGC}_n,\delta\right).
$$ \end{cor} \begin{proof} We may fix the loop order $b$. As $\mathrm{B}_b\mathrm{O} \mathrm{G}_{n+1}$ and $\mathrm{B}_b\mathrm{HG}_{n}$ are finite dimensional in each degree, it is clear that each quasi-isomorphism $$ \Phi: (\mathrm{B}_b\mathrm{HG}_{n}, d+h )\to (\mathrm{B}_b\mathrm{O} \mathrm{G}_{n+1},d),\quad \Phi: (\mathrm{B}_b\mathrm{HG}_{n}, d )\to (\mathrm{B}_b\mathrm{O} \mathrm{G}_{n+1},d_0)$$ has a dual quasi-isomorphism $$ G: (\mathrm{B}_b\mathrm{O} \mathrm{GC}_{n+1}, \delta )\to (\mathrm{B}_b\mathrm{HGC}_{n},\delta+\chi),\quad G: (\mathrm{B}_b\mathrm{O} \mathrm{GC}_{n+1}, \delta_0 )\to (\mathrm{B}_b\mathrm{HGC}_{n},\delta).$$ As $\Phi$ preserves the loop order, its dual map $G$ must be a quasi-isomorphism. \end{proof} \section{Application to ribbon graphs and the moduli space of curves with punctures} \label{s:rib} In this section, we will follow \cite{MW2} in order to define the ribbon graph complex $(\mathrm{RGC}[1],\delta+ \Delta_1)$, as well as the map $F:(\mathrm{O} \mathrm{GC}_1,\delta) \to (\mathrm{RGC}[1],\delta+\Delta_1)$. This will allow us to make the observation that $F$ is also a map of complexes $$F:(\mathrm{O} \mathrm{GC}_1,\delta_0)\to (\mathrm{RGC}[1],\delta).$$ \subsection{Ribbon Graphs} \begin{defi} A \emph{ribbon graph} $\Gamma$ is a triple $(F_\Gamma, \iota_{\Gamma}, \sigma_{\Gamma})$, where $F_{\Gamma}$ is a finite set, $\iota_{\Gamma}:F_{\Gamma}\to F_{\Gamma}$ is an involution with no fixed points, i.e.\ $$\iota_\Gamma^2 = \id,\quad \iota_\Gamma(f)\neq f$$ for every $f\in F_\Gamma$, and $\sigma_{\Gamma}:F_{\Gamma}\to F_{\Gamma}$ is a bijection. \end{defi} The elements of $F_{\Gamma}$ are called \emph{flags} or \emph{half edges}. The orbits of the involution $\iota_{\Gamma}$ are called \emph{edges}, and the set of all edges will be denoted by $E(\Gamma)$. The orbits of $\sigma_{\Gamma}$ are called \emph{vertices}, and the set of all vertices will be denoted by $V(\Gamma)$.
\begin{defi} A \emph{cyclic ordering} on a finite set $A$ is a $\mathbb{Z}_{|A|}$-action on $A$ with precisely one orbit. \end{defi} The difference between an ordinary graph and a ribbon graph is that each vertex in a ribbon graph is equipped with a cyclic ordering of its adjacent (half) edges, given by $f+1= \sigma_\Gamma f.$ We may draw a picture of a ribbon graph $\Gamma$ in the following way: \begin{enumerate} \item For each vertex $(i_1 i_2\ldots i_k)\in V(\Gamma)$, draw a dot with clockwise ordered lines labeled by $i_1,i_2,\ldots, i_k$ connected to the dot $$(i_1 i_2\ldots i_k)\leftrightarrow\begin{tikzpicture}[baseline=-3.5,>=stealth',shorten >=1pt,auto,node distance=1.5cm, thick,main node/.style={circle,draw,font=\sffamily\bfseries}] \node[main node,int] (1) {}; \node[] (2) at ({360/5 }:1.3cm) {}; \node[] (3) at ({2*360/5 }:1.3cm) {}; \node[] (0) at ({3.5*360/5 }:0.7cm) {$\cdots$}; \node[] (0) at ({2.7*360/5 }:0.7cm) {$\cdots$}; \node[] (5) at ({4*360/5 }:1.3cm) {}; \node[] (4) at ({3*360/5 }:1.3cm) {}; \node[] (6) at ({5*360/5 }:1.3cm) {}; \path[every node/.style={font=\sffamily\small}](1) edge node[left]{} (4); \path[every node/.style={font=\sffamily\small}](1) edge node[left]{$i_1$} (2); \path[every node/.style={font=\sffamily\small}](1) edge node[below]{$i_k$} (3); \path[every node/.style={font=\sffamily\small}](1) edge node[right]{$i_3$}(5); \path[every node/.style={font=\sffamily\small}](1) edge node[above]{$i_2$}(6); \end{tikzpicture}.$$ \item For each edge $(i_a i_b)\in E(\Gamma)$, connect the lines labeled by $i_a$ and $i_b$.
\end{enumerate} Two very basic examples of ribbon graphs are $$(\{1,2\}, (12), (1)(2))= \begin{tikzpicture}[baseline=-.55ex,scale=1] \node[int] (a) at (0,0) {}; \node[int] (b) at (1,0) {}; \draw (a) -- (b) node[near start, above]{$_1$} node[near end, above]{$_2$}; \end{tikzpicture} \quad \quad(\{1,2\}, (12), (12))= \begin{tikzpicture}[every loop/.style={},baseline=-.55ex,scale=1] \node[int] (a) at (0,0) {}; \draw (a) edge[loop] node[near start, above]{$_2$}node[near end, above]{$_1$}(a); \end{tikzpicture}.$$ We call the orbits of the permutation $\sigma^{-1}_{\Gamma} \circ \iota_{\Gamma}$ \emph{boundaries} of $\Gamma$, and we denote the set of boundaries by $B(\Gamma)$. For example, the ribbon graph $$ \begin{tikzpicture}[baseline=-.55ex,scale=1] \node[int] (a) at (0,0) {}; \node[int] (b) at (1,0) {}; \draw (a) -- (b) node[near start, above]{$_1$} node[near end, above]{$_2$}; \end{tikzpicture}$$ has one boundary $(12)$, while the ribbon graph $$\begin{tikzpicture}[every loop/.style={},baseline=-.55ex,scale=1] \node[int] (a) at (0,0) {}; \draw (a) edge[loop] node[near start, above]{$_2$}node[near end, above]{$_1$} (a); \end{tikzpicture}$$ has two boundaries, $(1)$ and $(2)$. \subsection{A PROP of ribbon graphs} Let $\mathrm{rgra}_{n,m,k}$ be the set of ribbon graphs with vertex set labeled by $[n]$, boundary set labeled by $[m]$ and edge set labeled by $[k]$. That is, ribbon graphs $$\Gamma=([k]\sqcup[k],\iota_k, \sigma_{\Gamma}),$$ where $\iota_k$ is the natural involution on $[k]\sqcup [k]$, together with bijections $v_{\Gamma}:[n]\to V(\Gamma), \quad b_{\Gamma}:[m]\to B(\Gamma)$. Here, the edge labeled by $i\in [k]$ is the pair $(i_1,i_2)$, where $i_1$ denotes the element $i$ of the first copy of $[k]$ and $i_2$ the element $i$ of the second copy. We say that the edge $(i_1,i_2)$ is \emph{intrinsically oriented} from $i_1$ to $i_2$. The group $\mathbb{P}_k:={\mathbb{S}}_{k}\ltimes {\mathbb{S}}^{\times k}_2$ acts naturally on $\mathrm{rgra}_{n,m,k}$ by permuting edges and reversing the intrinsic orientation.
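Since vertices, edges and boundaries are all orbit sets of explicit permutations, they are easy to compute mechanically. The following minimal sketch (our own code, not part of the paper; the encoding of permutations as Python dicts is an assumption of the example) recovers the boundary counts of the two basic examples above.

```python
def orbits(perm):
    """Orbits of a permutation, given as a dict mapping each flag to its image."""
    seen, result = set(), []
    for start in perm:
        if start in seen:
            continue
        orbit, f = [], start
        while f not in seen:
            seen.add(f)
            orbit.append(f)
            f = perm[f]
        result.append(tuple(orbit))
    return result

def boundaries(iota, sigma):
    """Orbits of sigma^{-1} o iota, i.e. the boundaries of (F, iota, sigma)."""
    sigma_inv = {image: f for f, image in sigma.items()}
    return orbits({f: sigma_inv[iota[f]] for f in iota})

iota = {1: 2, 2: 1}                      # the involution (12)
print(boundaries(iota, {1: 1, 2: 2}))    # single edge: one boundary (1 2)
print(boundaries(iota, {1: 2, 2: 1}))    # loop: two boundaries (1) and (2)
```

As a further check, the one-vertex ribbon graph with $\sigma=(1\,2\,3\,4)$ and edges $(13)$, $(24)$ has a single boundary, so $V-E+B=1-2+1=0=2-2g$ gives the standard genus-one ribbon graph.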
Let $\mathrm{RGra}(n,m)$ be the vector space $$\mathrm{RGra}(n,m):= \prod_{k\ge 0}\left(\Bbbk\langle \mathrm{rgra}_{n,m,k}\rangle [k] \otimes \sgn_k\right)^{\mathbb{P}_k}. $$ The space $$\mathrm{RGra}:=\bigoplus_{n,m\ge 1} \mathrm{RGra}(n,m) $$ is an ${\mathbb{S}}$-bimodule, where ${\mathbb{S}}_n$ acts on $\mathrm{RGra}(n,m)$ by permuting vertex labels, and ${\mathbb{S}}_m$ acts on $\mathrm{RGra}(n,m)$ by permuting boundary labels. In order to define the properadic composition maps $\circ :\mathrm{RGra}\otimes \mathrm{RGra}\to \mathrm{RGra} $, we have to make a few definitions. \begin{defi} Let $A$ and $B$ be two finite sets with cyclic orderings. We say that an \emph{ordered $A$-partition $p$ of $B$} is a partition $$\bigsqcup_{a\in A} p_a =B,$$ where each $$p_a=\{ b_a^1,\ldots, b_a^{k}\}\subset B$$ is ordered with $$b_a^i+1= \begin{cases} b_a^{i+1} & i<k\\ b_{a+r}^1 & i=k, \text{ where $r=\min\{j\in{\mathbb{Z}}|j\geq 1, p_{a+j}\neq \emptyset\}$.} \end{cases}$$ We denote the set of all ordered $A$-partitions of $B$ by $P(A,B)$. \end{defi} \begin{defi} Let $\Gamma=(F,\iota, \sigma)$ be a ribbon graph, and let $v\in V(\Gamma)$, $b\in B(\Gamma)$ such that $v\cap b=\emptyset.$ Then, for each ordered partition $p\in P(b,v)$, we define the \emph{$p$-grafted} ribbon graph $$\circ_p \Gamma= (F,\iota,\circ_p \sigma ),$$ by letting $\circ_p \sigma$ be the unique permutation such that \begin{enumerate} \item $\circ_p \sigma |_{F \setminus (v\cup b)} = \sigma|_{F \setminus (v\cup b)}$; \item for each $j\in b$ $$\circ_p \sigma(j) = \begin{cases} \min p_j & p_j \neq \emptyset,\\ \sigma(j) & p_j = \emptyset; \end{cases}$$ \item for each $i\in p_j$ $$\circ_p \sigma(i) = \begin{cases}\sigma(i) & i \neq \max {p_j},\\ \sigma(j) & i = \max {p_j}.
\end{cases}$$ \end{enumerate} \end{defi} For two ribbon graphs $\Gamma_1=(F_1,\iota_1,\sigma_1)$ and $\Gamma_2=(F_2,\iota_2,\sigma_2)$, $v\in V(\Gamma_1)$, $b\in B(\Gamma_2)$, and $p\in P(b,v)$, we set $$ \Gamma_1\circ_p \Gamma_2 := \left(F_1\sqcup F_2,\iota_1\sqcup\iota_2,\circ_p (\sigma_1\sqcup \sigma_2)\right). $$ A picture of the ribbon graph $\circ_p \Gamma$ is obtained from a picture of the ribbon graph $\Gamma$ by removing the vertex $v$ and reconnecting its adjacent edges to the corners of the boundary $b$ according to the partition $p$. \begin{lemma} \label{lemma:ribbon_PROP} There is a bijection of vertices $$p_V: V(\Gamma)\setminus\{v\} \to V(\circ_p \Gamma)$$ and a bijection of boundaries $$p_B: B(\Gamma)\setminus\{b\} \to B(\circ_p \Gamma),$$ such that $v'\subseteq p_V(v')$ and $b'\subseteq p_B(b')$ for every $v'\in V(\Gamma)\setminus\{v\}$ and $b'\in B(\Gamma)\setminus\{b\}$. Furthermore, if $v'\cap b=\emptyset$ we have equality $v'= p_V(v')$. Similarly, if $v\cap b'=\emptyset$, we have $b'= p_B(b')$. \end{lemma} \begin{proof} Pick a boundary $b'\in B(\Gamma)\setminus \{b\},$ and a flag $j'\in b'$. Suppose that $(\circ_p \sigma)^{-1}\iota(j')\neq \sigma^{-1}\iota(j')$. Then we must have $\iota(j')=\sigma(j)$ or $\iota(j')= \min p_j$ for some $j\in b$. The first case implies that $\sigma^{-1} \iota(j')\in b$ and, therefore, we get $j'\in b$, which contradicts our choice of $j'$. If $\iota(j')=\min p_j$, then $(\circ_p \sigma)^{-1}\iota(j')\in b$. Furthermore, for $r=1,2,3,\ldots$, we have that $((\circ_p \sigma)^{-1}\iota)^r(j')=(\sigma^{-1}\iota)^r(j') \in b$ until $\iota(\sigma^{-1}\iota)^{r-1}(j')=\sigma(j)$, in which case $\iota((\circ_p\sigma)^{-1}\iota)^{r}(j')= \sigma^{-1}\iota (j').$ It follows that $b'$ is a subset of the orbit of $\iota((\circ_p\sigma)^{-1}\iota)^{r}(j')$. Thus, we may define the map $$p_B: B(\Gamma)\setminus\{b\} \to B(\circ_p \Gamma),$$ by setting $p_B(b')$ to be the $(\circ_p\sigma)^{-1}\iota$ orbit of any $j'\in b'$.
From the arguments above, it follows that $p_B$ is injective and $b'\subset p_B(b')$. Finally, we note that for any $j\in b$, there must exist an $r\ge 1$ such that $\iota((\circ_p\sigma)^{-1}\iota)^{r}(j)\notin b$. Hence $p_B$ must also be surjective. If $v\cap b'=\emptyset$, then it is clear from the construction that $b'= p_B(b')$. Similarly, we can define $$p_V: V(\Gamma)\setminus\{v\} \to V(\circ_p \Gamma)$$ by setting $p_V(v')$ to be the $\circ_p\sigma$ orbit of any $i\in v'$. By the same arguments as above, we get that $p_V$ is well defined, bijective, and $v'\subset p_V(v')$. \end{proof} This lemma implies that, for mutually disjoint $v_1,v_2\in V(\Gamma)$, $b_1,b_2\in B(\Gamma)$, and partitions $p_1\in P(b_1,v_1)$, $p_2\in P(b_2,v_2)$, we may define $$\circ_{p_1,p_2} \Gamma:= \circ_{p_2}(\circ_{p_1} \Gamma).$$ It is clear that we have \begin{equation} \label{eq:ribbon_comp_ass} \circ_{p_1,p_2} \Gamma = \circ_{p_2}(\circ_{p_1} \Gamma)= \circ_{p_1}(\circ_{p_2} \Gamma)=\circ_{p_2,p_1} \Gamma.
\end{equation} For each $k\le m_1,n_2$, we define the properadic composition maps \begin{eqnarray*} \circ_k :\mathrm{RGra}_d(n_1,m_1)\otimes \mathrm{RGra}_d(n_2,m_2)&\to &\mathrm{RGra}_d(n_1+n_2-k,m_1+m_2-k), \end{eqnarray*} $$\begin{tikzpicture}[baseline=-0.55ex,scale=0.6] \node[] (0) at (0,0) {$_{\Gamma_1}$}; \node[] (1) at (-1,1.2) {$_1$}; \node[] (2) at (-0.6,1.2) {$_2$}; \node[] (.1) at (0.2,0.8) {\ldots}; \node[] (4) at (1,1.2) {$_{m_1}$}; \draw (0) edge[->] node {} (1); \draw (0) edge[->] node {} (2); \draw (0) edge[->] node {} (4); \node[] (-1) at (-1,-1.2) {$_1$}; \node[] (-2) at (-0.6,-1.2) {$_2$}; \node[] (-.1) at (0.2,-.9) {\ldots}; \node[] (-4) at (1,-1.2) {$_{n_1}$}; \draw (0) edge[<-] node {} (-1); \draw (0) edge[<-] node {} (-2); \draw (0) edge[<-] node {} (-4); \end{tikzpicture} \otimes \begin{tikzpicture}[baseline=-0.55ex,scale=0.6] \node[] (0) at (0,0) {$_{\Gamma_2}$}; \node[] (1) at (-1,1.2) {$_1$}; \node[] (2) at (-0.6,1.2) {$_2$}; \node[] (.1) at (0.2,0.8) {\ldots}; \node[] (4) at (1,1.2) {$_{m_2}$}; \draw (0) edge[->] node {} (1); \draw (0) edge[->] node {} (2); \draw (0) edge[->] node {} (4); \node[] (-1) at (-1,-1.2) {$_1$}; \node[] (-2) at (-0.6,-1.2) {$_2$}; \node[] (-.1) at (0.2,-.9) {\ldots}; \node[] (-4) at (1,-1.2) {$_{n_2}$}; \draw (0) edge[<-] node {} (-1); \draw (0) edge[<-] node {} (-2); \draw (0) edge[<-] node {} (-4); \end{tikzpicture} \mapsto \begin{tikzpicture}[baseline=-0.55ex,scale=0.6] \node[] (0) at (0,0) {$_{\Gamma_1}$}; \node[] (ddots) at (1,0.5) {$\tiny{\ddots}$}; \node[] (1) at (-1,1.5) {}; \node[] (2) at (-0.6,1.5) {}; \node[] (.1) at (0.2,1.1) {$\ldots$}; \node[] (4) at (1,1.5) {}; \node[] (.1) at (0,1.7) {$\overbrace{\qquad \quad }^{1,\ldots, m_1-k}$}; \node[] (10) at (2,1) {$_{\Gamma_2}$}; \node[] (11) at (1.1,-.6) {}; \node[] (12) at (1.5,-0.6) {}; \node[] (va) at (2.2,-0.2) {$\ldots$}; \node[] (13) at (3,-0.6) {}; \node[] (.1) at (2,-0.9) {$\underbrace{\qquad \quad }_{ n_2-k+1,\ldots, n_2}$}; \draw (10) edge[<-] 
node {} (11); \draw (10) edge[<-] node {} (12); \draw (10) edge[<-] node {} (13); \node[] (21) at (1.1,2.1) {}; \node[] (22) at (1.5,2.1) {}; \node[] (-.1) at (2.2,1.7) {$\ldots$}; \node[] (23) at (3,2.1) {}; \node[] (.1) at (2,2.3) {$\overbrace{\qquad \quad }^{ m_2-k+1, \ldots, m_2 }$}; \draw (10) edge[->] node {} (21); \draw (10) edge[->] node {} (22); \draw (10) edge[->] node {} (23); \draw (0) edge[->,bend left] node {} (10); \draw (0) edge[->,bend left=10] node {} (10); \draw (0) edge[->,bend right] node {} (10); \draw (0) edge[->] node {} (1); \draw (0) edge[->] node {} (2); \draw (0) edge[->] node {} (4); \node[] (-1) at (-1,-1) {}; \node[] (-2) at (-0.6,-1) {}; \node[] (-.1) at (0.2,-.7) {\ldots}; \node[] (-4) at (1,-1) {}; \node[] (.1) at (0,-1.2) {$\underbrace{\qquad \quad}_{1,\ldots, n_1-k}$}; \draw (0) edge[<-] node {} (-1); \draw (0) edge[<-] node {} (-2); \draw (0) edge[<-] node {} (-4); \end{tikzpicture} \mapsto \begin{tikzpicture}[baseline=-0.55ex,scale=0.6] \node[] (0) at (0,0) {$_{\Gamma_1\circ_k\Gamma_2}$}; \node[] (1) at (-1,1.2) {$_1$}; \node[] (2) at (-0.6,1.2) {$_2$}; \node[] (.1) at (0.2,0.8) {\ldots}; \node[] (4) at (1,1.2) {$_{m_1+m_2-k}$}; \draw (0) edge[->] node {} (1); \draw (0) edge[->] node {} (2); \draw (0) edge[->] node {} (4); \node[] (-1) at (-1,-1.2) {$_1$}; \node[] (-2) at (-0.6,-1.2) {$_2$}; \node[] (-.1) at (0.2,-.9) {\ldots}; \node[] (-4) at (1,-1.2) {$_{n_1+n_2-k}$}; \draw (0) edge[<-] node {} (-1); \draw (0) edge[<-] node {} (-2); \draw (0) edge[<-] node {} (-4); \end{tikzpicture} $$ composing the boundaries $m_1-k+1,\ldots, m_1$ of $\Gamma_1$ to the vertices $1,\ldots, k$ of $\Gamma_2$, by $$\Gamma_1\circ_{k} \Gamma_2:= \prod_{i=1}^{k}\left(\sum_{p \in P(b_{\Gamma_1}(m_1-k+i),v_{\Gamma_2}(i))} \circ_{p}(\Gamma_1\sqcup \Gamma_2)\right).$$ The element $\Gamma_1\circ_k \Gamma_2$ is the sum of all graphs obtained from $\Gamma_1$ and $\Gamma_2$ by \begin{enumerate} \item Removing the vertices $1,\ldots, k$ of $\Gamma_2$; \item
For each $i\in [k],$ reconnecting each half edge in $v_{\Gamma_2}(i)$ to a corner of the boundary $b_{\Gamma_1}(m_1-k+i)$, respecting the cyclic orientations. \end{enumerate} \begin{prop} The composition maps $\circ_{{\bullet}}:\mathrm{RGra}\otimes \mathrm{RGra} \to \mathrm{RGra}$ defines a PROP structure on $\mathrm{RGra}$. \end{prop} \begin{proof}It follows by Lemma \ref{lemma:ribbon_PROP} and \eqref{eq:ribbon_comp_ass} that the maps are well defined and well behaved. \end{proof} Let $\textsf{LieB}_{0,0}$ be the PROP of (degree shifted) Lie bi-algebras, i.e. the PROP generated by symmetric $3$-valent corollas of degree $1$ \begin{equation}\label{eq:Lie_Corollas} \begin{tikzpicture}[baseline=-.55ex,scale=0.6] \node[int] (a) at (0,0) {}; \node[](out1) at (0.7,-0.7) {$_2$}; \node[](out2) at (-.7,-0.7) {$_1$}; \node[](in) at (0,.7) {}; \draw (out1) edge[->] (a); \draw (out2) edge[->] (a); \draw (a) edge[->] (in); \end{tikzpicture}= \begin{tikzpicture}[baseline=-.55ex,scale=0.6] \node[int] (a) at (0,0) {}; \node[](out1) at (0.7,-0.7) {$_1$}; \node[](out2) at (-.7,-0.7) {$_2$}; \node[](in) at (0,.7) {}; \draw (out1) edge[->] (a); \draw (out2) edge[->] (a); \draw (a) edge[->] (in); \end{tikzpicture}, \begin{tikzpicture}[baseline=-.55ex,scale=0.6] \node[int] (a) at (0,0) {}; \node[](out1) at (0.7,0.7) {$_2$}; \node[](out2) at (-.7,0.7) {$_1$}; \node[](in) at (0,-.7) {}; \draw (a) edge[->] (out1); \draw (a) edge[->] (out2); \draw (in) edge[->] (a); \end{tikzpicture}= \begin{tikzpicture}[baseline=-.55ex,scale=0.6] \node[int] (a) at (0,0) {}; \node[](out1) at (0.7,0.7) {$_1$}; \node[](out2) at (-.7,0.7) {$_2$}; \node[](in) at (0,-.7) {}; \draw (a) edge[->] (out1); \draw (a) edge[->] (out2); \draw (in) edge[->] (a); \end{tikzpicture}, \end{equation} modulo the relations \begin{equation} \label{eq:jacobi} \begin{tikzpicture}[baseline=-.55ex,scale=0.6] \node[int] (a) at (0,0) {}; \node[int](out1) at (0.7,-0.7) {}; \node[](out2) at (-.7,-0.7) {$_1$}; \node[](out3) at (1.4,-1.4) 
{$_3$}; \node[](out4) at (0,-1.4) {$_2$}; \node[](in) at (0,.7) {}; \draw (a) edge[<-] (out1); \draw (a) edge[<-] (out2); \draw (out1) edge[<-] (out3); \draw (out1) edge[<-] (out4); \draw (in) edge[<-] (a); \end{tikzpicture}+\begin{tikzpicture}[baseline=-.55ex,scale=0.6] \node[int] (a) at (0,0) {}; \node[int](out1) at (0.7,-0.7) {}; \node[](out2) at (-.7,-0.7) {$_2$}; \node[](out3) at (1.4,-1.4) {$_1$}; \node[](out4) at (0,-1.4) {$_3$}; \node[](in) at (0,.7) {}; \draw (a) edge[<-] (out1); \draw (a) edge[<-] (out2); \draw (out1) edge[<-] (out3); \draw (out1) edge[<-] (out4); \draw (in) edge[<-] (a); \end{tikzpicture}+\begin{tikzpicture}[baseline=-.55ex,scale=0.6] \node[int] (a) at (0,0) {}; \node[int](out1) at (0.7,-0.7) {}; \node[](out2) at (-.7,-0.7) {$_3$}; \node[](out3) at (1.4,-1.4) {$_2$}; \node[](out4) at (0,-1.4) {$_1$}; \node[](in) at (0,.7) {}; \draw (a) edge[<-] (out1); \draw (a) edge[<-] (out2); \draw (out1) edge[<-] (out3); \draw (out1) edge[<-] (out4); \draw (in) edge[<-] (a); \end{tikzpicture}=0, \end{equation} \begin{equation} \label{eq:cojac} \begin{tikzpicture}[baseline=-.55ex,scale=0.6] \node[int] (a) at (0,0) {}; \node[int](out1) at (0.7,0.7) {}; \node[](out2) at (-.7,0.7) {$_1$}; \node[](out3) at (1.4,1.4) {$_3$}; \node[](out4) at (0,1.4) {$_2$}; \node[](in) at (0,-.7) {}; \draw (a) edge[->] (out1); \draw (a) edge[->] (out2); \draw (out1) edge[->] (out3); \draw (out1) edge[->] (out4); \draw (in) edge[->] (a); \end{tikzpicture}+\begin{tikzpicture}[baseline=-.55ex,scale=0.6] \node[int] (a) at (0,0) {}; \node[int](out1) at (0.7,0.7) {}; \node[](out2) at (-.7,0.7) {$_2$}; \node[](out3) at (1.4,1.4) {$_1$}; \node[](out4) at (0,1.4) {$_3$}; \node[](in) at (0,-.7) {}; \draw (a) edge[->] (out1); \draw (a) edge[->] (out2); \draw (out1) edge[->] (out3); \draw (out1) edge[->] (out4); \draw (in) edge[->] (a); \end{tikzpicture}+\begin{tikzpicture}[baseline=-.55ex,scale=0.6] \node[int] (a) at (0,0) {}; \node[int](out1) at (0.7,0.7) {}; \node[](out2) at 
(-.7,0.7) {$_3$}; \node[](out3) at (1.4,1.4) {$_2$}; \node[](out4) at (0,1.4) {$_1$}; \node[](in) at (0,-.7) {}; \draw (a) edge[->] (out1); \draw (a) edge[->] (out2); \draw (out1) edge[->] (out3); \draw (out1) edge[->] (out4); \draw (in) edge[->] (a); \end{tikzpicture}=0, \end{equation} \begin{equation} \label{eq:IHXor} \begin{tikzpicture}[baseline=-.55ex,scale=0.6] \node[int] (a) at (0,0) {}; \node[int](out1) at (0,0.7) {}; \node[](out4) at (-.7,1.4) {$_1$}; \node[](out3) at (.7,1.4) {$_2$}; \node[](out2) at (-.7,-0.7) {$_1$}; \node[](in) at (.7,-.7) {$_2$}; \draw (a) edge[->] (out1); \draw (a) edge[<-] (out2); \draw (out1) edge[->] (out3); \draw (out1) edge[->] (out4); \draw (in) edge[->] (a); \end{tikzpicture} + \begin{tikzpicture}[baseline=-.55ex,scale=0.6] \node[int] (a) at (0,0) {}; \node[int](out1) at (0.7,0.7) {}; \node[](out4) at (-.7,1.4) {$_1$}; \node[](out3) at (.7,1.4) {$_2$}; \node[](out2) at (0,-.7) {$_1$}; \node[](in) at (1.4,-.7) {$_2$}; \draw (a) edge[->] (out1); \draw (a) edge[<-] (out2); \draw (out1) edge[->] (out3); \draw (a) edge[->] (out4); \draw (in) edge[->] (out1); \end{tikzpicture}+ \begin{tikzpicture}[baseline=-.55ex,scale=0.6] \node[int] (a) at (0,0.7) {}; \node[int](out1) at (0.7,0) {}; \node[](out4) at (0,1.4) {$_1$}; \node[](out3) at (1.4,1.4) {$_2$}; \node[](out2) at (-.7,-.7) {$_1$}; \node[](in) at (0.7,-.7) {$_2$}; \draw (a) edge[<-] (out1); \draw (a) edge[<-] (out2); \draw (out1) edge[->] (out3); \draw (a) edge[->] (out4); \draw (in) edge[->] (out1); \end{tikzpicture} + \begin{tikzpicture}[baseline=-.55ex,scale=0.6] \node[int] (a) at (0,0) {}; \node[int](out1) at (0.7,0.7) {}; \node[](out4) at (-.7,1.4) {$_2$}; \node[](out3) at (.7,1.4) {$_1$}; \node[](out2) at (0,-.7) {$_1$}; \node[](in) at (1.4,-.7) {$_2$}; \draw (a) edge[->] (out1); \draw (a) edge[<-] (out2); \draw (out1) edge[->] (out3); \draw (a) edge[->] (out4); \draw (in) edge[->] (out1); \end{tikzpicture}+ \begin{tikzpicture}[baseline=-.55ex,scale=0.6] \node[int] (a) at 
(0,0.7) {}; \node[int](out1) at (0.7,0) {}; \node[](out4) at (0,1.4) {$_1$}; \node[](out3) at (1.4,1.4) {$_2$}; \node[](out2) at (-.7,-.7) {$_2$}; \node[](in) at (0.7,-.7) {$_1$}; \draw (a) edge[<-] (out1); \draw (a) edge[<-] (out2); \draw (out1) edge[->] (out3); \draw (a) edge[->] (out4); \draw (in) edge[->] (out1); \end{tikzpicture}=0. \end{equation} \begin{prop} [\cite{MW}]\label{prop:LieB_{0,0}-RGra} There is a map of PROPs $$s:\textsf{LieB}_{0,0} \to \mathrm{RGra}$$ given by $$ \begin{tikzpicture}[baseline=-.55ex,scale=0.6] \node[int] (a) at (0,0) {}; \node[](out1) at (0.7,-0.7) {$_2$}; \node[](out2) at (-.7,-0.7) {$_1$}; \node[](in) at (0,.7) {$_1$}; \draw (out1) edge[->] (a); \draw (out2) edge[->] (a); \draw (a) edge[->] (in); \end{tikzpicture} \mapsto \begin{tikzpicture}[baseline=-.55ex,scale=0.6] \node[draw,circle] (a) at (0,0) {$_1$}; \node[draw,circle] (b) at (1.5,0) {$_2$}; \node[] (c) at (0.75,0.5) {$_1$}; \draw (a) edge[->] (b); \end{tikzpicture}, \quad\quad \begin{tikzpicture}[baseline=-.55ex,scale=0.6] \node[int] (a) at (0,0) {}; \node[](out1) at (0.7,0.7) {$_2$}; \node[](out2) at (-.7,0.7) {$_1$}; \node[](in) at (0,-.7) {$_1$}; \draw (a) edge[->] (out1); \draw (a) edge[->] (out2); \draw (in) edge[->] (a); \end{tikzpicture} \mapsto \begin{tikzpicture}[baseline=-.55ex,scale=0.6] \node[draw,circle] (a) at (0,0) {$_1$}; \draw (a) edge[loop] node[midway, below]{$_{1}$} node[midway, above]{$_{2}$}(a); \end{tikzpicture}, $$ \end{prop} \begin{rem} The map $s$ factors through the PROP of involutive Lie bialgebras $\textsf{LieB}_{0,0}^{\diamond}$ which is generated by the corollas \eqref{eq:Lie_Corollas} modulo the relations \eqref{eq:jacobi},\eqref{eq:cojac}, \eqref{eq:IHXor} plus the additional relation \begin{equation}\label{eq:involutive}\begin{tikzpicture}[baseline=-.55ex,scale=0.6] \node[int] (a) at (0,-0.3) {}; \node[int](out1) at (0,0.4) {}; \node[](out4) at (0,1) {}; \node[](in) at (0,-.9) {}; \draw (a) edge[->,bend left] (out1); \draw (a) 
edge[->,bend right] (out1); \draw (in) edge[->] (a); \draw (out1) edge[->] (out4); \end{tikzpicture} =0. \end{equation} \end{rem} \subsection{The ribbon graph complex $\mathrm{RGC}$} Let $\textsf{hoLieB}_{0,0}$ be the quasi-free differential graded PROP generated by skew symmetric corollas $$\begin{tikzpicture}[baseline=-0.55ex,scale=0.6] \node[int] (0) at (0,0) {}; \node[] (1) at (-1,1.2) {$_1$}; \node[] (2) at (-0.6,1.2) {$_2$}; \node[] (.1) at (0.2,0.8) {\ldots}; \node[] (4) at (1,1.2) {$_{m}$}; \draw (0) edge[->] node {} (1); \draw (0) edge[->] node {} (2); \draw (0) edge[->] node {} (4); \node[] (-1) at (-1,-1.2) {$_1$}; \node[] (-2) at (-0.6,-1.2) {$_2$}; \node[] (-.1) at (0.2,-.9) {\ldots}; \node[] (-4) at (1,-1.2) {$_{n}$}; \draw (0) edge[<-] node {} (-1); \draw (0) edge[<-] node {} (-2); \draw (0) edge[<-] node {} (-4); \end{tikzpicture} \quad n\ge 1, m\ge 1, n+m\ge 3$$ of degree $1$, with the differential $$\delta \begin{tikzpicture}[baseline=-0.55ex,scale=0.6] \node[int] (0) at (0,0) {}; \node[] (1) at (-1,1.2) {$_1$}; \node[] (2) at (-0.6,1.2) {$_2$}; \node[] (.1) at (0.2,0.8) {\ldots}; \node[] (4) at (1,1.2) {$_{m}$}; \draw (0) edge[->] node {} (1); \draw (0) edge[->] node {} (2); \draw (0) edge[->] node {} (4); \node[] (-1) at (-1,-1.2) {$_1$}; \node[] (-2) at (-0.6,-1.2) {$_2$}; \node[] (-.1) at (0.2,-.9) {\ldots}; \node[] (-4) at (1,-1.2) {$_{n}$}; \draw (0) edge[<-] node {} (-1); \draw (0) edge[<-] node {} (-2); \draw (0) edge[<-] node {} (-4); \end{tikzpicture} = \sum_{J_1\sqcup J_2 =[m]\atop I_1\sqcup I_2=[n]} \begin{tikzpicture}[baseline=-0.55ex,scale=0.6] \node[int] (0) at (0,0) {}; \node[] (1) at (-1,1.5) {}; \node[] (2) at (-0.6,1.5) {}; \node[] (.1) at (0.2,1.1) {$\ldots$}; \node[] (4) at (1,1.5) {}; \node[] (.1) at (0,1.7) {$\overbrace{\qquad \quad }^{J_1}$}; \node[int] (10) at (2,1) {}; \node[] (11) at (1.1,-.6) {}; \node[] (12) at (1.5,-0.6) {}; \node[] (va) at (2.2,-0.2) {$\ldots$}; \node[] (13) at (3,-0.6) {}; \node[] (.1) at (2,-0.9) 
{$\underbrace{\qquad \quad }_{ I_2}$}; \draw (10) edge[<-] node {} (11); \draw (10) edge[<-] node {} (12); \draw (10) edge[<-] node {} (13); \node[] (21) at (1.1,2.1) {}; \node[] (22) at (1.5,2.1) {}; \node[] (-.1) at (2.2,1.7) {$\ldots$}; \node[] (23) at (3,2.1) {}; \node[] (.1) at (2,2.3) {$\overbrace{\qquad \quad }^{ J_2 }$}; \draw (10) edge[->] node {} (21); \draw (10) edge[->] node {} (22); \draw (10) edge[->] node {} (23); \draw (0) edge[->] node {} (10); \draw (0) edge[->] node {} (1); \draw (0) edge[->] node {} (2); \draw (0) edge[->] node {} (4); \node[] (-1) at (-1,-1) {}; \node[] (-2) at (-0.6,-1) {}; \node[] (-.1) at (0.2,-.7) {\ldots}; \node[] (-4) at (1,-1) {}; \node[] (.1) at (0,-1.2) {$\underbrace{\qquad \quad}_{I_1}$}; \draw (0) edge[<-] node {} (-1); \draw (0) edge[<-] node {} (-2); \draw (0) edge[<-] node {} (-4); \end{tikzpicture}. $$ It was shown in \cite{PROPed} that $\textsf{hoLieB}_{0,0}$ is a minimal resolution of $\textsf{LieB}_{0,0},$ i.e.\ the natural projection map $p:\textsf{hoLieB}_{0,0}\to \textsf{LieB}_{0,0}$ is a quasi-isomorphism. We define the \emph{ribbon graph complex} $(\mathrm{RGC},\delta+\Delta_1)$ to be the deformation complex $$(\mathrm{RGC},\delta+\Delta_1) := \Def(\textsf{hoLieB}_{0,0}\stackrel{s \circ p}{\rightarrow} \mathrm{RGra}).$$ As a graded vector space $$ \mathrm{RGC}\cong \prod_{n,m\ge 1} \left(\mathrm{RGra}(n,m) \right)^{{\mathbb{S}}_{n}\times{\mathbb{S}}_m} $$ is spanned by ribbon graphs in $\mathrm{RGra}$ coinvariant under the actions of permuting vertices and boundaries. 
The differentials $\delta + \Delta_1$ act on a ribbon graph $\Gamma$ with $n$ vertices and $m$ boundaries by $$\delta \begin{tikzpicture}[baseline=-0.55ex,scale=0.6] \node[] (0) at (0,0) {$_{\Gamma}$}; \node[] (1) at (-1,1.2) {$_1$}; \node[] (2) at (-0.6,1.2) {$_2$}; \node[] (.1) at (0.2,0.8) {\ldots}; \node[] (4) at (1,1.2) {$_{m}$}; \draw (0) edge[->] node {} (1); \draw (0) edge[->] node {} (2); \draw (0) edge[->] node {} (4); \node[] (-1) at (-1,-1.2) {$_1$}; \node[] (-2) at (-0.6,-1.2) {$_2$}; \node[] (-.1) at (0.2,-.9) {\ldots}; \node[] (-4) at (1,-1.2) {$_{n}$}; \draw (0) edge[<-] node {} (-1); \draw (0) edge[<-] node {} (-2); \draw (0) edge[<-] node {} (-4); \end{tikzpicture} = \sum_{i=1}^{n} \begin{tikzpicture}[baseline=-0.55ex,scale=0.6] \node[] (0) at (0,0) {$_{\Gamma}$}; \node[] (1) at (-1,1.2) {$_1$}; \node[] (2) at (-0.6,1.2) {$_2$}; \node[] (.1) at (0.2,0.8) {\ldots}; \node[] (4) at (1,1.2) {$_{m}$}; \draw (0) edge[->] node {} (1); \draw (0) edge[->] node {} (2); \draw (0) edge[->] node {} (4); \node[] (-1) at (-1,-1.2) {$_1$}; \node[] (-2) at (-0.6,-1.2) {$_2$}; \node[] (-.1) at (-0.2,-.9) {\tiny{\ldots}}; \node[] (-.1) at (0.5,-.9) {\tiny{\ldots}}; \node[] (-4) at (1,-1.2) {$_{n}$}; \draw (0) edge[<-] node {} (-1); \draw (0) edge[<-] node {} (-2); \draw (0) edge[<-] node {} (-4); \node[int] (i) at (0.2,-2) {}; \node[] (i') at (-0.3,-3) {$_i$}; \node[] (i'1) at (0.7,-3) {$_{n+1}$}; \draw (0) edge[<-] node[near end,left] {$_i$} (i); \draw (i) edge[<-] (i'); \draw (i) edge[<-] (i'1); \end{tikzpicture}-\sum_{i=1}^{m} \begin{tikzpicture}[baseline=-0.55ex,scale=0.6] \node[] (0) at (0,0) {$_{\Gamma}$}; \node[] (1) at (-1,1.2) {$_1$}; \node[] (2) at (-0.6,1.2) {$_2$}; \node[] (.1) at (0.2,0.8) {\ldots}; \node[] (4) at (1,1.2) {$_{m}$}; \draw (0) edge[->] node {} (1); \draw (0) edge[->] node {} (2); \draw (0) edge[->] node {} (4); \node[] (-1) at (-1,-1.2) {$_1$}; \node[] (-2) at (-0.6,-1.2) {$_2$}; \node[] (-.1) at (-0.2,-.9) {\tiny{\ldots}}; \node[] (-.1) 
at (0.5,-.9) {\tiny{\ldots}}; \node[] (-4) at (1,-1.2) {$_{n}$}; \draw (0) edge[<-] node {} (-1); \draw (0) edge[<-] node {} (-2); \draw (0) edge[<-] node {} (-4); \node[int] (i) at (0.2,2) {}; \node[] (i') at (-0.3,3) {$_i$}; \node[] (i'1) at (1.7,1.5) {$_{n+1}$}; \draw (0) edge[->] node[near end,left] {$_i$} (i); \draw (i) edge[->] (i'); \draw (i) edge[<-] (i'1); \end{tikzpicture},$$ $$\Delta_1\begin{tikzpicture}[baseline=-0.55ex,scale=0.6] \node[] (0) at (0,0) {$_{\Gamma}$}; \node[] (1) at (-1,1.2) {$_1$}; \node[] (2) at (-0.6,1.2) {$_2$}; \node[] (.1) at (0.2,0.8) {\ldots}; \node[] (4) at (1,1.2) {$_{m}$}; \draw (0) edge[->] node {} (1); \draw (0) edge[->] node {} (2); \draw (0) edge[->] node {} (4); \node[] (-1) at (-1,-1.2) {$_1$}; \node[] (-2) at (-0.6,-1.2) {$_2$}; \node[] (-.1) at (0.2,-.9) {\ldots}; \node[] (-4) at (1,-1.2) {$_{n}$}; \draw (0) edge[<-] node {} (-1); \draw (0) edge[<-] node {} (-2); \draw (0) edge[<-] node {} (-4); \end{tikzpicture} = \sum_{i=1}^{m} \begin{tikzpicture}[baseline=-0.55ex,scale=0.6] \node[] (0) at (0,0) {$_{\Gamma}$}; \node[] (1) at (-1,1.2) {$_1$}; \node[] (2) at (-0.6,1.2) {$_2$}; \node[] (.1) at (0.2,0.8) {\ldots}; \node[] (4) at (1,1.2) {$_{m}$}; \draw (0) edge[->] node {} (1); \draw (0) edge[->] node {} (2); \draw (0) edge[->] node {} (4); \node[] (-1) at (-1,-1.2) {$_1$}; \node[] (-2) at (-0.6,-1.2) {$_2$}; \node[] (-.1) at (-0.2,-.9) {\tiny{\ldots}}; \node[] (-.1) at (0.5,-.9) {\tiny{\ldots}}; \node[] (-4) at (1,-1.2) {$_{n}$}; \draw (0) edge[<-] node {} (-1); \draw (0) edge[<-] node {} (-2); \draw (0) edge[<-] node {} (-4); \node[int] (i) at (0.2,2) {}; \node[] (i') at (-0.3,3) {$_i$}; \node[] (i'1) at (0.9,3) {$_{m+1}$}; \draw (0) edge[->] node[near end,left] {$_i$} (i); \draw (i) edge[->] (i'); \draw (i) edge[->] (i'1); \end{tikzpicture}- \ \sum_{i=1}^{n} \begin{tikzpicture}[baseline=-0.55ex,scale=0.6] \node[] (0) at (0,0) {$_{\Gamma}$}; \node[] (1) at (-1,1.2) {$_1$}; \node[] (2) at (-0.6,1.2) {$_2$}; \node[] (.1) 
at (0.2,0.8) {\ldots}; \node[] (4) at (1,1.2) {$_{m}$}; \draw (0) edge[->] node {} (1); \draw (0) edge[->] node {} (2); \draw (0) edge[->] node {} (4); \node[] (-1) at (-1,-1.2) {$_1$}; \node[] (-2) at (-0.6,-1.2) {$_2$}; \node[] (-.1) at (-0.2,-.9) {\tiny{\ldots}}; \node[] (-.1) at (0.5,-.9) {\tiny{\ldots}}; \node[] (-4) at (1,-1.2) {$_{n}$}; \draw (0) edge[<-] node {} (-1); \draw (0) edge[<-] node {} (-2); \draw (0) edge[<-] node {} (-4); \node[int] (i) at (0.2,-2) {}; \node[] (i') at (-0.3,-3) {$_i$}; \node[] (i'1) at (1.7,-1.5) {$_{m+1}$}; \draw (0) edge[<-] node[near end,left] {$_i$} (i); \draw (i) edge[<-] (i'); \draw (i) edge[->] (i'1); \end{tikzpicture}, $$ where $$ \begin{tikzpicture}[baseline=-.55ex,scale=0.6] \node[int] (a) at (0,0) {}; \node[](out1) at (1,-1) {$_{m+1}$}; \node[](out2) at (-1,-1) {$_i$}; \node[](in) at (0,1) {$_i$}; \draw (out1) edge[->] (a); \draw (out2) edge[->] (a); \draw (a) edge[->] (in); \end{tikzpicture}= \begin{tikzpicture}[baseline=-.55ex,scale=0.6] \node[draw,circle] (a) at (0,0) {$_i$}; \node[draw,circle] (b) at (1.8,0) {$_{m+1}$}; \node[] (c) at (0.75,0.5) {$_i$}; \draw (a) edge[->] (b); \end{tikzpicture}, \quad \text{ and } \begin{tikzpicture}[baseline=-.55ex,scale=0.6] \node[int] (a) at (0,0) {}; \node[](out1) at (1,1) {$_{m+1}$}; \node[](out2) at (-1,1) {$_i$}; \node[](in) at (0,-1) {$_i$}; \draw (a) edge[->] (out1); \draw (a) edge[->] (out2); \draw (in) edge[->] (a); \end{tikzpicture}=\begin{tikzpicture}[baseline=-.55ex,scale=0.6] \node[draw,circle] (a) at (0,0) {$_i$}; \draw (a) edge[loop] node[midway, below]{$_{i}$} node[midway, above]{$_{m+1}$}(a); \end{tikzpicture}. $$ In words, the first part of the differential, $\delta$, splits vertices in a way that respects the cyclic ordering. The other part of the differential, $\Delta_1$, adds an edge between each pair of corners of each boundary.
\subsection{The Map $F:\mathrm{O} \mathrm{GC}_1 \to \mathrm{RGC}$ } Consider the deformation complex $$ \Def(\textsf{hoLieB}_{0,0} \to \textsf{LieB}_{0,0}) \cong \prod_{n,m\ge 1}\left(\textsf{LieB}_{0,0}(n,m)\right)^{{{\mathbb{S}}_{n}\times{\mathbb{S}}_m}}. $$ It is spanned by oriented graphs with all vertices 3-valent, with ingoing and outgoing hairs, without internal sources or targets, modulo the relations \eqref{eq:jacobi},\eqref{eq:cojac},\eqref{eq:IHXor}. In \cite{MW2}, S. Merkulov and T. Willwacher constructed a quasi-isomorphism \begin{eqnarray*} F_1:\mathrm{O} \mathrm{GC}_1 &\to & \Def(\textsf{hoLieB}_{0,0} \to \textsf{LieB}_{0,0})[1]\\ \Gamma &\mapsto & F_1(\Gamma). \end{eqnarray*} The element $F_1(\Gamma)$ is obtained from a graph $\Gamma\in \mathrm{O} \mathrm{GC}_1$ by attaching an incoming leg to each source, an outgoing leg to each target, and setting it to $0$ if it contains a vertex that is not $3$-valent. Next, the map $s:\textsf{LieB}_{0,0}\to \mathrm{RGra}$ from Proposition \ref{prop:LieB_{0,0}-RGra} gives us a map of complexes $$F_2:\Def(\textsf{hoLieB}_{0,0}\to \textsf{LieB}_{0,0})[1] \to (\mathrm{RGC}[1],\delta+\Delta_1)$$ by $F_2(\Gamma: \textsf{hoLieB}_{0,0}\to \textsf{LieB}_{0,0}) := s\circ \Gamma: \textsf{hoLieB}_{0,0}\to \mathrm{RGra}$. We now have our map \begin{equation} \label{eq:F} F:=F_2\circ F_1:\mathrm{O} \mathrm{GC}_1 \to \Def(\textsf{hoLieB}_{0,0}\to \textsf{LieB}_{0,0})[1] \to (\mathrm{RGC}[1],\delta+\Delta_1). \end{equation} We note that $F_2$ maps a graph in $\textsf{LieB}_{0,0}(n,m)^{{\mathbb{S}}_{n}\times{\mathbb{S}}_m}$ to a sum of ribbon graphs with $n$ vertices and $m$ boundaries. We also have that $F_1$ maps a graph $\Gamma\in \mathrm{O} \mathrm{GC}_1$ with $n$ sources and $m$ targets to a graph in $\textsf{LieB}_{0,0}(n,m)^{{\mathbb{S}}_{n}\times{\mathbb{S}}_m}$. Hence $F$ maps a graph $\Gamma$ with $m$ target vertices to a sum of ribbon graphs with $m$ boundaries. 
We may take a filtration on $\mathrm{RGC}$ by the number of boundaries, and a filtration on $\mathrm{O}\mathrm{GC}_1$ by the number of target vertices. Then $$gr(\mathrm{RGC},\delta+\Delta_1)= (\mathrm{RGC},\delta),$$ and $$ gr(\mathrm{O} \mathrm{GC}_1, \delta) = (\mathrm{O} \mathrm{GC}_1, \delta_0). $$ As $F$ maps a graph with $m$ target vertices to a sum of ribbon graphs with precisely $m$ boundaries, we get that $$ gr F:gr(\mathrm{O} \mathrm{GC}_1,\delta)\to gr(\mathrm{RGC}[1],\delta+\Delta_1) $$ is given by the same map of vector spaces $$F:(\mathrm{O} \mathrm{GC}_1,\delta_0)\to (\mathrm{RGC}[1],\delta).$$ We have now established both maps mentioned in Corollary \ref{cor:Mgn}. \begin{thm} [Corollary \ref{cor:Mgn}] We have a zig-zag of morphisms $$(\mathrm{HGC}_{0}, \delta)\gets (\mathrm{O} \mathrm{GC}_1, \delta_0)\to (\mathrm{RGC}[1],\delta),$$ where the left map is a quasi-isomorphism, given explicitly in \eqref{eq:G} and \eqref{eq:Ge}, and the right map is given in \eqref{eq:F}. \end{thm}
{ "timestamp": "2020-04-17T02:15:49", "yymm": "1912", "arxiv_id": "1912.09438", "language": "en", "url": "https://arxiv.org/abs/1912.09438", "abstract": "We show that the hairy graph complex $(HGC_{n,n},d)$ appears as an associated graded complex of the oriented graph complex $(OGC_{n+1},d)$, subject to the filtration on the number of targets, or equivalently sources, called the fixed source graph complex. The fixed source graph complex $(OGC_1,d_0)$ maps into the ribbon graph complex $RGC$, which models the moduli space of Riemann surfaces with marked points. The full differential $d$ on the oriented graph complex $OGC_{n+1}$ corresponds to the deformed differential $d+h$ on the hairy graph complex $HGC_{n,n}$, where $h$ adds a hair. This deformed complex $(HGC_{n,n},d+h)$ is already known to be quasi-isomorphic to standard Kontsevich's graph complex $GC^2_n$. This gives a new connection between the standard and the oriented version of Kontsevich's graph complex.", "subjects": "Quantum Algebra (math.QA)", "title": "Hairy graphs to ribbon graphs via a fixed source graph complex" }
https://arxiv.org/abs/2005.07932
How far is an extension of $p$-adic fields from having a normal integral basis?
Let $L/K$ be a finite Galois extension of $p$-adic fields with group $G$. It is well-known that $\mathcal{O}_L$ contains a free $\mathcal{O}_K[G]$-submodule of finite index. We study the minimal index of such a free submodule, and determine it exactly in several cases, including for any cyclic extension of degree $p$ of $p$-adic fields.
\section{Introduction} Let $L/K$ be a Galois extension of $p$-adic fields with Galois group $G$, ramification index $e_{L/K}$ and inertia degree $f_{L/K}$, and let $\mathcal{O}_K$ (respectively $\mathcal{O}_L$) denote the ring of integers of $K$ (resp.~$L$). For any $p$-adic field $F$, let $e_F$ and $f_F$ be the absolute ramification index and inertia degree, $\pi_F$ a uniformiser, and $v_F$ the valuation, normalised by $v_F(\pi_F)=1$. We will write $K[G]$ for the $K$-algebra whose elements are formal linear combinations of elements of $G$ with $K$-coefficients. We similarly define the group ring $\mathcal{O}_K[G]$ and observe that $L$ is in a natural way a $K[G]$-module (resp.~$\mathcal{O}_L$ is an $\mathcal{O}_K[G]$-module). The classical normal basis theorem shows that in this setting $L$ is a free $K[G]$-module of rank 1. It is then natural to ask whether $\mathcal{O}_L$ is also a free $\mathcal{O}_K[G]$-module of rank 1. The answer to this question is given by the following result. \begin{theorem}[{\cite{MR1581331}, \cite[Theorem 3 on p.~26]{MR717033}}]\label{tamepadic} Let $L/K$ be a Galois extension of $p$-adic fields with Galois group $G$. Then $\mathcal{O}_L$ is free of rank $1$ as an $\mathcal{O}_K[G]$-module if and only if the extension is tamely ramified. \end{theorem} More generally, one may ask whether $\mathcal{O}_L$ is free over other subrings of $K[G]$. It turns out that there is a natural candidate for a ring over which $\mathcal{O}_L$ could be free, the so-called \textbf{associated order}. \begin{definition} The associated order (of $\mathcal{O}_L$ in $K[G]$) is the ring \[ \mathfrak{A}_{L/K} = \{ \lambda \in K[G] : \lambda \cdot x \in \mathcal{O}_L \; \forall x \in \mathcal{O}_L \}. \] \end{definition} It is well-known that the associated order is indeed an $\mathcal{O}_K$-order in $K[G]$.
Furthermore, if $\mathcal{O}_L$ is free of rank 1 over an $\mathcal{O}_K$-order $\Lambda$ in $K[G]$, then necessarily $\Lambda=\mathfrak{A}_{L/K}$ (see Proposition \ref{associatedfree}). In our setting, we have that $\mathfrak{A}_{L/K}$ coincides with $ \mathcal{O}_K[G]$ precisely when $L/K$ is at most tamely ramified. In this paper, we investigate the situation when $\mathcal{O}_L$ is \emph{not} free over $\mathcal{O}_K[G]$. More specifically, we measure the failure of freeness by studying the minimal index of a free $\mathcal{O}_K[G]$-module inside $\mathcal{O}_L$, namely the quantity \[ m(L/K) := \min_{\alpha \in \mathcal{O}_L} [\mathcal{O}_L : \mathcal{O}_K[G] \alpha], \] where $[\mathcal{O}_L : \mathcal{O}_K[G]\alpha]$ denotes the group index. This quantity is well defined since for every integral normal basis generator $\alpha$ of the extension the index $[\mathcal{O}_L : \mathcal{O}_K[G]\alpha]$ is finite. Clearly, by Theorem \ref{tamepadic}, we have $m(L/K)=1$ if and only if the extension is at most tamely ramified. Notice that, even though this is not clear from the definition, $m(L/K)$ is in principle computable for any finite extension $L/K$: this is the content of Theorem \ref{thm:mLKEffectivelyComputable}. In certain settings it may be more natural to consider indices of the form $[\mathcal{O}_L : \mathfrak{A}_{L/K} \alpha]$, but, as we will see, these differ from $[\mathcal{O}_L : \mathcal{O}_K[G] \alpha]$ only by a constant factor independent of $\alpha$ (cf.~Proposition \ref{productindex}). On the other hand, the proportionality factor $[\mathfrak{A}_{L/K} : \mathcal{O}_K[G]]$ also carries interesting arithmetic content, so that $m(L/K)$ might in fact capture more complete information than the indices $[\mathcal{O}_L : \mathfrak{A}_{L/K} \alpha]$. It is not the first time that the quantity $m(L/K)$ appears in the literature, in more or less explicit form. 
For example, in \cite{MR3411126} Johnston computed an explicit free generator of the ring of integers over its associated order in any wildly and weakly ramified Galois extension $L/K$ of local fields (for some earlier results see also Burns \cite{MR1760494}); a crucial ingredient in his approach is precisely the computation of $m(L/K)$ in this situation. \begin{theorem}[{\cite[Proof of Theorem 1.2]{MR3411126}}]\label{thm:JohnstonWeaklyRamified} Let $L/K$ be a wildly and weakly ramified Galois extension of $p$-adic fields. Then $ m(L/K)=p^{f_L}. $ \end{theorem} In a closely related direction, K\"ock \cite{MR2089083} and Johnston \cite{MR3411126} discuss whether a power of the maximal ideal $(\pi_L)$ can be $\mathcal{O}_K[G]$-free, thus giving some information on the possible indices $[\mathcal{O}_L : \mathcal{O}_K[G]\alpha]$; see also Ullom \cite{MR263790}. Besides being useful for understanding the \textit{additive} structure of $\mathcal{O}_L$, free $\mathcal{O}_K[G]$-submodules $M$ of the ring of integers also appear in the cohomological study of local class field theory. In fact, starting from such an $M$, one can construct a cohomologically trivial submodule $V$ of finite index in $\mathcal{O}_L^\times$ (see for example \cite[p.~8]{MR2467155}); the cohomology of $\mathcal{O}_L^\times$ is then isomorphic to that of the finite quotient $\mathcal{O}_L^\times/V$, so that information on $[\mathcal{O}_L:M]$ (hence on $[\mathcal{O}_L^\times : V]$) translates into bounds for the cohomology groups $H^i(G, \mathcal{O}_L^\times)$. In this paper we provide both widely-applicable estimates for $m(L/K)$ and exact formulas in several special cases. Our first result is the following general upper bound for $m(L/K)$, that will be proved in Subsection \ref{sect:GeneralBound}. Notice that $m(L/K)$ is by definition a power of $p$, so bounding its absolute value is equivalent to bounding its $p$-adic valuation. 
\begin{theorem}\label{thm:IntroGeneralBound} Let $L/K$ be a Galois extension of $p$-adic fields. Then \[ v_p (m(L/K)) \leq f_L(e_{L/K}-1) + \frac{1}{2} [L:\mathbb{Q}_p] \cdot v_p([L:K]). \] \end{theorem} We note that the bound of the previous theorem can be relaxed to an estimate depending only on $[L:\mathbb{Q}_p]$. Much more is known on the Galois structure of $L/K$ when $L$ is absolutely abelian. As a consequence, in this case we obtain an explicit formula for the value of $m(L/K)$. \begin{theorem}\label{thm:IntroAbsolutelyAbelian} Let $L/K$ be a Galois extension of $p$-adic fields, with $p$ odd, and assume that $L/\mathbb{Q}_p$ is abelian. Denote by $L^{nr}$ the maximal unramified subextension of $L/K$. Then $$ \begin{array}{rl} v_p( m(L/K))&=v_{p}(m(L/L^{nr}))\\ & \displaystyle={\frac{f_L}{2}\left(e_{L}\cdot v_p(e_{L/K})- \sum_{d \mid e_{L/K}} \frac{\varphi(d)}{[L^{nr}(\zeta_d):L^{nr}]} v_{L^{nr}}( \operatorname{disc}(L^{nr}(\zeta_d)/L^{nr}))\right)}. \end{array} $$ \end{theorem} We prove Theorem~\ref{thm:IntroAbsolutelyAbelian} in Section~\ref{sec:assab}. In Section~\ref{sec:local-global} we consider an analogous quantity $m(E/F)$ for a number field extension $E/F$ and, in the case when $F=\mathbb{Q}$ and $E/\mathbb{Q}$ is abelian, we establish some relations between the $p$-adic valuation of the minimal index for $E/\mathbb{Q}$, and the $p$-adic valuation of the minimal index for their $p$-adic completions. Section~\ref{sec:degp} is devoted to the special interesting case of cyclic extensions of degree $p$. We prove the following exact formula for $m(L/K)$, which holds without any assumption on $K$. \begin{theorem}\label{thm:IntroCyclicDegreep} Let $L/K$ be a ramified Galois extension of $p$-adic fields of degree $p$, with ramification jump $t$. Let $a \in \{0,\ldots,p-1\}$ be the residue class of $t$ modulo $p$ and set $\nu_i = \left\lfloor \frac{a+it}{p} \right\rfloor$. 
Then if $a \neq 0$ we have $ v_p( m(L/K))= f_K \left(\sum_{i=0}^{p-1} \nu_i+\min_{0\leq i\leq p-1}(i e_K-(p-1)\nu_i) \right), $ while for $a=0$ we have $v_p(m(L/K))=\frac{1}{2}[L:\mathbb{Q}_p]$. \end{theorem} The method used to show Theorem \ref{thm:IntroCyclicDegreep} also allows us to give, in Section~\ref{sect:BertrandiasFerton}, a new proof of a result originally due to Ferton and Bertrandias \cite[Théorème]{MR296047}. \begin{theorem}\label{thm:IntroFertonBertrandias} Let $L/K$ be a totally ramified cyclic extension of degree $p$ of $p$-adic fields. Let $t$ be the unique ramification jump of the extension (in the lower numbering) and let $a \in \{0,\ldots,p-1\}$ be the residue class of $t$ modulo $p$. The following hold: \begin{enumerate} \item if $a=0$ or $a \mid p-1$, then $\mathcal{O}_L$ is free over $\mathfrak{A}_{L/K}$. \item Suppose that the inequality $t < \frac{ep}{p-1} - 1$ holds. If $\mathcal{O}_L$ is free over $\mathfrak{A}_{L/K}$, then $a \mid p-1$. \end{enumerate} \end{theorem} \begin{remark} The inequality $t < \frac{ep}{p-1} - 1$ corresponds to the condition that $L/K$ is not \textit{almost-maximally ramified}, see Definition \ref{def:AlmostMaximallyRamified}. \end{remark} Finally, a natural related question is to understand which elements generate $\mathcal{O}_K[G]$-submodules of minimal index. We obtain some partial results on this problem by exhibiting an explicit minimal element for cyclic extensions of degree $p$ (Proposition \ref{prop:aeq0} and Remark \ref{rmk:MinimalElement}). \subsection*{Acknowledgements} The authors are indebted to Henri Johnston for his many insightful comments, especially on Theorem \ref{thm:IntroGeneralBound}, and to Nigel Byott for helpful discussions and for pointing out some relevant results in the literature. We are also grateful to Cornelius Greither for his comments on a preliminary version of the paper and for suggesting Remark \ref{rmk:greither}. 
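Before moving on to the preliminaries, we remark that the closed formulas above are straightforward to evaluate mechanically. The following sketch (the function names are ours, and in the $a=0$ case we use the identity $[L:\mathbb{Q}_p]=p\,e_Kf_K$ valid for a ramified degree-$p$ extension $L/K$) transcribes the bound of Theorem \ref{thm:IntroGeneralBound} and the exact value of Theorem \ref{thm:IntroCyclicDegreep}:

```python
from fractions import Fraction

def general_bound_vp(f_L, e_rel, deg_L_Qp, vp_deg):
    # Theorem bound: v_p(m(L/K)) <= f_L*(e_{L/K}-1) + (1/2)*[L:Q_p]*v_p([L:K])
    return f_L * (e_rel - 1) + Fraction(deg_L_Qp, 2) * vp_deg

def vp_m_cyclic_degree_p(p, t, e_K, f_K):
    # Exact v_p(m(L/K)) for a ramified cyclic degree-p extension with jump t
    a = t % p
    if a == 0:
        # [L:Q_p] = p*e_K*f_K, so v_p(m) = [L:Q_p]/2
        return Fraction(p * e_K * f_K, 2)
    nu = [(a + i * t) // p for i in range(p)]
    return f_K * (sum(nu) + min(i * e_K - (p - 1) * nu[i] for i in range(p)))
```

For instance, for $t=1$ and $e_K=1$ (a wildly and weakly ramified extension of degree $p$) the formula returns $f_K=f_L$, in agreement with Theorem \ref{thm:JohnstonWeaklyRamified}.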
\section{Preliminaries}\label{sect:Preliminaries} \subsection{Preliminaries on discriminants}\label{sub:discriminants} We will use the notation fixed in the Introduction, so $L/K$ is a Galois extension of $p$-adic fields with Galois group $G$. The normal basis theorem ensures that $L/K$ admits a normal basis, namely a $K$-basis of the form $\{\sigma(\alpha)\}_{\sigma\in G}$ for some $\alpha\in L$ (equivalently, we have $L=K[G]\alpha$). In this case the element $\alpha$ is called a normal basis generator. Clearly, $\alpha$ can be chosen to lie in $\mathcal{O}_L$, and it is easy to see that $\alpha \in \mathcal{O}_L$ is a normal basis generator if and only if $\mathcal{O}_K[G]\alpha$ is a \emph{full} $\mathcal{O}_K$-lattice in $L$; this happens precisely when $[\mathcal O_L:\mathcal{O}_K[G]\alpha]$ is finite. Therefore, in order to compute $m(L/K)$, it suffices to consider elements $\alpha \in \mathcal{O}_L$ that are normal basis generators. Let $X\supseteq Y$ be $\mathcal O_K$-lattices of the same rank (since $\mathcal O_K$ is a principal ideal domain, every $\mathcal O_K$-lattice is free, so its rank is well defined). Choosing two bases for $X$ and $Y$, we easily see that the quotient $X/Y$ is of the form $\oplus_{i=1}^r \mathcal O_K/(\pi_K^{a_i})$, hence in particular finite, and we define the \emph{module index} $[X:Y]_{\mathcal O_K}$ as the ideal of $\mathcal O_K$ generated by $\pi_K^{\sum_{i=1}^r a_i}$. The \emph{subgroup index} is instead defined as $[X:Y]:=|X/Y|$, and we have \[ [X:Y]=N([X:Y]_{\mathcal O_K}), \] where $N$ is the ideal norm. More generally, if $V$ is a finite-dimensional $K$-vector space and $X$ and $Y$ are full $\mathcal{O}_K$-lattices in $V$, then we set $[X:Y]_{\mathcal O_K}=[X:X\cap Y]_{\mathcal O_K}[Y:X\cap Y]^{-1}_{\mathcal O_K}$, and this is a fractional ideal of $\mathcal O_K$.
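When $K=\mathbb{Q}_p$, so that $\mathcal O_K=\mathbb{Z}_p$, both indices are easy to compute from a matrix whose columns express a basis of $Y$ in a basis of $X$: the sum $\sum_{i=1}^r a_i$ of the elementary-divisor valuations equals the $p$-adic valuation of the determinant of that matrix. A minimal sketch (the helper names are ours, and the naive determinant is meant for tiny matrices only):

```python
from fractions import Fraction

def vp(x, p):
    # p-adic valuation of a nonzero rational number
    x = Fraction(x)
    n, d, v = x.numerator, x.denominator, 0
    while n % p == 0:
        n //= p
        v += 1
    while d % p == 0:
        d //= p
        v -= 1
    return v

def det(M):
    # Laplace expansion along the first row
    if len(M) == 1:
        return Fraction(M[0][0])
    return sum((-1) ** j * Fraction(M[0][j])
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def subgroup_index(M, p, f=1):
    # [X:Y] = N([X:Y]_{O_K}) = (p^f)^{v_p(det M)}, with q = p^f the residue field size
    return p ** (f * vp(det(M), p))
```

For example, for $Y=\langle (2,0),(1,1)\rangle\subseteq X=\mathbb{Z}_2^2$ the determinant is $2$, so $[X:Y]_{\mathcal O_K}=(2)$ and $[X:Y]=2$.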
Note that there exists an invertible $K$-linear transformation $\varphi$ of $V$ such that $\varphi(X)=Y$, so that \begin{equation} \label{eq:ind-det} [X:Y]_{\mathcal O_K}=(\det \varphi)\mathcal O_K \end{equation} (see \cite{CF} for details and \cite{DCDJA} for an overview). \begin{remark}\label{remark:nodvr} Let $L/K$ be an extension of number fields and let $X, Y$ be full $\mathcal{O}_K$-lattices in $L$. In this setting, the index $[X:Y]_{\mathcal{O}_K}$ is defined to be the unique ideal of $\mathcal{O}_K$ with the following property: for every maximal ideal $\mathfrak{p}$ of $\mathcal{O}_K$, the $\mathfrak{p}$-adic valuation of $[X:Y]_{\mathcal{O}_K}$ is equal to that of $[X\otimes_{\mathcal{O}_K}\mathcal{O}_{K_\mathfrak p}:Y\otimes_{\mathcal{O}_K}\mathcal{O}_{K_\mathfrak p}]_{\mathcal{O}_{K_\mathfrak p}}$, where $K_\mathfrak p$ is the completion of $K$ at $\mathfrak{p}$. Note that a formula analogous to \eqref{eq:ind-det} holds if $X$ and $Y$ are free over $\mathcal{O}_K$, which happens for instance whenever $K$ has class number $1$. \end{remark} Suppose now that $X$ is an $\mathcal O_K$-lattice and let $V := X \otimes_{\mathcal{O}_K} K$ be the corresponding finite-dimensional vector space. Let $B$ be a non-degenerate symmetric $K$-bilinear form on $V$. \begin{definition}\label{def:Discriminant} We define the \emph{discriminant} of $X$ by means of the module index: $$\disc_{B}(X):=[X':X]_{\mathcal O_K},$$ where $X'$ is the dual module of $X$ with respect to $B$, namely $$X'=\{v\in V\mid B(v,X)\subseteq \mathcal O_K\}.$$ \end{definition} With these definitions, for $Y\subseteq X$ we have the following proposition, which is a special case of \cite[Proposition 4 on p.12]{CF}. \begin{proposition}\label{prop:disc} Let $X,Y$ be $\mathcal O_K$-lattices such that $ X \otimes_{\mathcal{O}_K} K = Y \otimes_{\mathcal{O}_K} K=:V$. Let $B$ be a non-degenerate symmetric bilinear form on $V$.
Then \begin{enumerate} \item $\disc_{B}(Y)=[X:Y]_{\mathcal O_K}^2\disc_{B}(X).$ \item If $X$ is the free $\mathcal O_K$-module generated by $\{x_i\}_{i=1}^n$ then \[ \disc_{B}(X)=\det\big(B(x_i,x_j)\big)_{i,j}\,\mathcal O_K. \] \end{enumerate} \end{proposition} \subsection{Preliminaries on ramification groups} Let $L/K$ be a Galois extension of $p$-adic fields with Galois group $G$. \begin{definition}\label{def:RamificationGroups} For $i \geq -1$ we let \[ G_i = \{g \in G : (g-1)(\mathcal{O}_L) \subseteq (\pi_L)^{i+1} \} \] be the $i$-th ramification group of $G$ in the lower numbering. \end{definition} Clearly, $G_{i+1}\unlhd G_i$ for each $i\ge-1$ and we say that $t$ is a (lower) ramification jump for the extension $L/K$ if $G_t\ne G_{t+1}$. It is well-known that for a fixed Galois extension $L/K$ the group $G_i$ is trivial for $i$ large enough (see \cite[Chapter 4]{MR554237} for details). The extension $L/K$ is called \textit{unramified} (respectively \textit{tamely ramified}, \textit{weakly ramified}) when $G_0$ (respectively $G_1, G_2$) is trivial. The unique ramification jump of a ramified extension of degree $p$ has the following property. \begin{proposition}[{\cite[Chapter IV, §2, Ex.~3]{MR554237} and \cite[III.2, Prop.~2.3]{MR1915966}}]\label{prop:InequalitiesOneAndt} Let $L/K$ be a cyclic ramified extension of degree $p$ and let $t$ be its ramification jump. Then: \begin{enumerate} \item the inequality $1 \leq t \leq \frac{e_Kp}{p-1}$ holds; \item if $p \mid t$, then $t = \frac{e_Kp}{p - 1}$, the ground field $K$ contains the $p$-th roots of unity, and there exists a uniformiser $\pi_K$ of $K$ such that $L = K(\pi_K^{1/p})$. \end{enumerate} \end{proposition} \begin{definition}\label{def:AlmostMaximallyRamified} Let $L/K$ be a ramified cyclic extension of degree $p$.
The extension $L/K$ is called \textit{maximally ramified} if its ramification jump assumes its maximum possible value, namely $t=\frac{e_Kp}{p-1}$, and is called \textit{almost-maximally ramified} if $t$ satisfies \[ \frac{e_Kp}{p-1}-t\le 1. \] More generally, a totally ramified cyclic extension $L/K$ of degree $dp^n$, with $(d,p)=1$, is called \textit{almost-maximally ramified} if its first positive ramification jump $t_1$ satisfies \[ \frac{e_Kdp}{p-1}-t_1\le 1. \] \end{definition} \begin{remark} Jacobinski \cite{jac64} defined an extension $L/K$ with Galois group $G$ to be almost-maximally ramified if all idempotents $e_H=\frac1{|H|}\sum_{\sigma\in H}\sigma$ belong to the associated order $\mathfrak{A}_{L/K}$, when $H$ ranges over all subgroups of $G$ included between two consecutive ramification groups of the extension. Using basic facts on ramification explained in \cite[Chapters 3 and 4]{MR554237}, one can verify that $e_H\in\mathfrak{A}_{L/K}$ if and only if \begin{equation}\label{eq:RamificationInequality} \sum_{i=0}^\infty (|G_i(L/L^H)|-1)\geq e_Lv_p(|H|), \end{equation} where $L^H$ is the field fixed by $H$ and $G_i(L/L^H)$ the $i$-th ramification group of the extension $L/L^H$. One may then show that Jacobinski's definition is equivalent to the inequalities in Definition \ref{def:AlmostMaximallyRamified}. See \cite[$\S$1.2]{MR513880} or \cite[Proposition 1]{MR543208} for further details. \end{remark} \section{General results} Let $L/K$ be a Galois extension of $p$-adic fields with group $G$. As observed in the Introduction, the quantity $m(L/K)=\min_{\alpha \in \mathcal{O}_L} [\mathcal{O}_L : \mathcal{O}_K[G] \alpha]$ is well defined as a minimum, and can be computed on the integers that are normal basis generators. In fact, these are precisely the cases for which $\mathcal{O}_K[G] \alpha$ is a full $\mathcal{O}_K$-submodule of $L$.
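The numerical conditions of Definition \ref{def:AlmostMaximallyRamified} are immediate to test; here is a small sketch (the function names are ours), using exact rational arithmetic for the threshold $\frac{e_Kp}{p-1}$:

```python
from fractions import Fraction

def is_maximally_ramified(p, e_K, t):
    # degree-p case: the jump attains its maximal value t = e_K*p/(p-1)
    return Fraction(t) == Fraction(e_K * p, p - 1)

def is_almost_maximally_ramified(p, e_K, t1, d=1):
    # cyclic of degree d*p^n with (d,p)=1 and first positive jump t1:
    # almost-maximally ramified iff e_K*d*p/(p-1) - t1 <= 1
    return Fraction(e_K * d * p, p - 1) - t1 <= 1
```

For $p=3$ and $e_K=2$ the maximal jump is $t=3$, and precisely the jumps $t\in\{2,3\}$ give almost-maximally ramified extensions of degree $3$.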
We are interested in bounding the value of $m(L/K)$ in terms of the standard invariants of the extensions $L/K$ and $L/\mathbb{Q}_p$. Before doing this, we will show in Theorem \ref{thm:mLKEffectivelyComputable} that in principle the minimal index $m(L/K)$ can be effectively computed for any finite Galois extension $L/K$. In practice, direct computation of $m(L/K)$ with the algorithm we propose is limited to very simple extensions only. For this reason, an \textit{a priori} bound on $m(L/K)$ (such as in Theorem \ref{thm:IntroGeneralBound}) can be very useful both for the study of a single extension and for understanding the behaviour of $m(L/K)$ in families. \subsection{The minimal index $m(L/K)$ is effectively computable} \begin{theorem}\label{thm:mLKEffectivelyComputable} Let $L/K$ be a finite Galois extension of $p$-adic fields with Galois group $G$. The quantity $m(L/K)$ may be determined through a finite, effective procedure. \end{theorem} \begin{proof} We fix a basis $\alpha_1,\ldots,\alpha_n$ of $\mathcal{O}_L$ over $\mathcal{O}_K$ and use it to identify $L$ with $K^n$ and $\mathcal{O}_L$ with $\mathcal{O}_K^n$: for a vector $v=(c_{1},\ldots,c_{n}) \in \mathcal{O}_K^n$, we let $\omega(v) \in \mathcal{O}_L$ be the element $\sum_{i=1}^n c_i \alpha_i$. Note that an integral basis may be computed effectively, for example using the Montes algorithm \cite{MR3105943, MR3276340}. Let $g_1=\operatorname{Id},\ldots,g_n$ be an enumeration of the elements of $G$. Every $g_i$ corresponds to a $K$-linear transformation of $K^n$, which can be represented by a matrix $M_i \in \operatorname{Mat}_{n}(K)$. Notice that $M_k=\{m_{ij}^{(k)}\}$ where $g_k(\alpha_j) = \sum_{i=1}^n m_{ij}^{(k)} \alpha_i$. Since the $\alpha_j$'s are a basis of $\mathcal{O}_L$ over $\mathcal{O}_K$ it follows that the entries of $M_k$ lie in $\mathcal{O}_K$. Let $\omega_1, \omega_2 \in \mathcal{O}_L$ and let $v_1, v_2$ be the vectors of the coordinates of $\omega_1,\omega_2$ in the basis $\alpha_j$.
Assume that $\omega_1$ generates a normal basis and let $[\mathcal{O}_L : \mathcal{O}_K[G]\omega_1]_{\mathcal{O}_K} = \pi_K^{R}\mathcal{O}_K$. With the previous notation, for $i=1,2,$ we have \begin{equation} \label{indici} [\mathcal{O}_L : \mathcal{O}_K[G] \omega_i] = N\left( \det \left( v_i \bigm\vert M_2v_i \bigm\vert \cdots \bigm\vert M_n v_i \right) \right). \end{equation} We claim that if $v_2 \equiv v_1 \pmod{\pi_K^{R+1}}$ then \[ [\mathcal{O}_L : \mathcal{O}_K[G] \omega_2]_{\mathcal{O}_K} =[\mathcal{O}_L : \mathcal{O}_K[G] \omega_1]_{\mathcal{O}_K}: \] indeed from $v_2 \equiv v_1 \pmod{\pi_K^{R+1}}$ we obtain $M_k v_1 \equiv M_kv_2 \pmod{\pi_K^{R+1}}$ for every $k$, so the claim follows from Equation \eqref{indici}. We turn to the description of a possible algorithm to compute $m(L/K)$. First we claim that one can effectively find a normal basis generator $\omega_0$ of $L/K$. The usual proof of the normal basis theorem (see for example \cite[Theorem VI.13.1]{MR1878556}) shows that there exists a non-zero (and effectively computable) polynomial $p(x_1,\ldots,x_n) \in K[x_1,\ldots,x_n]$ such that all elements $\omega=\sum_{i=1}^n c_i\alpha_i$ in $L$ that do \textit{not} generate a normal basis for $L/K$ satisfy $p(c_1,\ldots,c_n)=0$. Hence it suffices to try sufficiently many values of $x_1,\ldots,x_n$ until one finds some $v_0:=(c_1,\ldots,c_n) \in \mathcal{O}_K^n$ for which $p(c_1,\ldots,c_n) \neq 0$. The element $\omega_0 := \omega(v_0) \in \mathcal{O}_L$ is then a normal basis generator for $L/K$. Next we compute the non-negative integer $R_0$ defined by the equality $[\mathcal{O}_L : \mathcal{O}_K[G] \omega_0]=\pi_K^{R_0}\mathcal{O}_K$.
This can also be done effectively: indeed, it suffices to compute the determinant \[ \det\left( v_0 \bigm\vert M_2v_0 \bigm\vert \cdots \bigm\vert M_nv_0 \right) \] to sufficient $\pi_K$-adic precision to ensure that one can determine its $\pi_K$-adic valuation. Fix representatives $v_1,\ldots,v_s$ of the (finitely many) residue classes in $\mathcal{O}_K^n/(\pi_K^{R_0+1} \mathcal{O}_K)^{n}$, and for every $i=1,\ldots,s$ let $\omega_i=\omega(v_i)$. Write $[\mathcal{O}_L : \mathcal{O}_K[G] \omega_i]=\pi_K^{R_i} \mathcal{O}_K$. By the same argument as above, each of the finitely many $R_i$ is effectively computable, and there is no loss of generality in assuming that there is an index $i_0$ for which $\omega_{i_0}=\omega_0$. We claim that $m(L/K)$ is equal to the norm of the ideal $\pi_K^{\min_i R_i}\mathcal{O}_K$. To prove this, let \[ r_i = \min_{v \equiv v_i \bmod{\pi_K^{R_0+1}}} v_K [\mathcal{O}_L : \mathcal{O}_K[G] \omega(v)]_{\mathcal{O}_K}, \] where the minimum is taken over all vectors $v \in \mathcal{O}_K^n$ all of whose coordinates are congruent to the respective coordinates of $v_i$ modulo $\pi_K^{R_0+1}$. We now show that either $r_i=R_i$ or $r_i \geq R_0$. It is clear by definition that $r_i \leq R_i$, so either $r_i=R_i$ or $r_i < R_i$; in the latter case we show that $r_i \geq R_0$. Indeed, suppose that $r_i < R_0$ and let $v$ be a vector realising the minimum $r_i$. Applying the claim above to $\omega(v)$ (whose index is $\pi_K^{r_i}\mathcal{O}_K$, with $r_i+1 \leq R_0+1$), the index would be constant on the coset $v_i +\pi_K^{R_0+1}\mathcal{O}_K^n$, and this is not the case since we are assuming $r_i< R_i$. This proves that for all $i$ we have \[ \min\{r_i, R_0\} = \min\{R_i, R_0 \}; \] note in particular that for the index $i_0$ such that $\omega_{i_0}=\omega_0$ we have $r_{i_0}=R_{i_0}=R_0$. Taking the minimum over all $i$ now yields \[ \begin{aligned} \min_{\omega \in \mathcal{O}_L} v_K[\mathcal{O}_L : \mathcal{O}_K[G] \omega]_{\mathcal{O}_K} & = \min_i r_i = \min_i \min\{r_i, r_{i_0}\} \\ & = \min_i \min\{R_i, R_0\} = \min_i R_i, \end{aligned} \] hence $m(L/K) = N\left( \pi_K^{\min_i R_i} \right)$.
Since the finitely many quantities $R_i$ are effectively computable, so is $m(L/K)$, as claimed. \end{proof} \subsection{The associated order} We briefly review the notion of \textit{associated order} in the general setting of Dedekind domains that we will need later. We also show that the quantity $m(L/K)$ may be expressed in a natural way in terms of the associated order. The following result is well known. \begin{proposition}\label{associatedfree} Let $R$ be a Dedekind domain with field of fractions $K$, and let $A$ be a finite-dimensional $K$-algebra. Let $N$ be an $R$-lattice such that $K N:=K \otimes_{R} N \cong A$ as $A$-modules. Finally, suppose that $\Lambda$ is an $R$-order in $A$ such that $N$ is a free $\Lambda$-module. Then $\Lambda$ is equal to the associated order of $N$ in $A$: \[ \Lambda=\mathfrak{A}(A, N) := \{ \lambda \in A \mid \lambda N \subseteq N \}. \] \end{proposition} \begin{proof} By assumption there exists $\alpha\in N$ such that $N=\Lambda \alpha$ is a free $\Lambda$-module. Then $\alpha$ is also such that $K N=A \alpha$ is free over $A$. The inclusion $\Lambda\subseteq \mathfrak{A}(A, N)$ is clear. Conversely, let $a\in \mathfrak{A}(A, N)$; then $a\alpha\in N=\Lambda \alpha$, so there exists $\lambda\in\Lambda$ such that $a\alpha =\lambda\alpha$ in $A$. Since $K N$ is freely generated by $\alpha$, this means that $a=\lambda\in \Lambda$. \end{proof} \begin{proposition}\label{productindex} Let $L/K$ be a Galois extension of $p$-adic fields with Galois group $G$. Then \[ m(L/K) = [\mathfrak{A}_{L/K}:\mathcal{O}_K[G]] \cdot \min_{\alpha \in \mathcal{O}_L} [\mathcal{O}_L : \mathfrak{A}_{L/K} \alpha]. \] In particular, $\mathcal{O}_L$ is free over $\mathfrak{A}_{L/K}$ if and only if $m(L/K) = [\mathfrak{A}_{L/K}:\mathcal{O}_K[G]]$. In this case, given an element $\beta$ that realises the minimal index $m(L/K)$, the elements realising $m(L/K)$ are precisely those of the form $\lambda \beta$ for $\lambda \in \mathfrak{A}_{L/K}^*$. 
Equivalently, an element $\beta \in \mathcal{O}_L$ realises the minimum if and only if it generates $\mathcal{O}_L$ over $\mathfrak{A}_{L/K}$. \end{proposition} \begin{proof} Let $\beta\in \mathcal{O}_L$ be an element such that \[ m(L/K) = [\mathcal{O}_L : \mathcal{O}_K[G] \beta]. \] By the definition of the associated order, we have that $\mathfrak{A}_{L/K} \beta\subseteq \mathcal{O}_L$, so \[ m(L/K) = [\mathcal{O}_L : \mathfrak{A}_{L/K} \beta]\cdot [\mathfrak{A}_{L/K} \beta : \mathcal{O}_K[G] \beta]. \] Since $\beta$ generates a normal basis for $L/K$, the $K$-linear map $\varphi_\beta \colon K[G]\to K[G] \beta$ defined by $x\mapsto x\beta$ is an isomorphism, so $[\mathfrak{A}_{L/K} \beta : \mathcal{O}_K[G] \beta]=[\mathfrak{A}_{L/K}: \mathcal{O}_K[G]]$. The result follows. \end{proof} \begin{remark}\label{rmk:SeparateIndices} Proposition \ref{productindex} shows that every index $[\mathcal{O}_L : \mathcal{O}_K[G] \alpha]$ differs from the corresponding index $[\mathcal{O}_L : \mathfrak{A}_{L/K} \alpha]$ by the constant factor $[\mathfrak{A}_{L/K} : \mathcal{O}_K[G]]$, which is independent of $\alpha$. Combined with Proposition \ref{associatedfree}, this suggests that it may be more natural to study separately the two quantities $[\mathfrak{A}_{L/K} : \mathcal{O}_K[G]]$ and $\min_{\alpha \in \mathcal{O}_L} [\mathcal{O}_L : \mathfrak{A}_{L/K} \alpha]$. Therefore our results can also be read as bounds on the minimum of $[\mathcal{O}_L : \mathfrak{A}_{L/K} \alpha]$. \end{remark} \subsection{Index of $\mathcal{O}_K[G]$ in a maximal order}\label{indexmaximal} In the spirit of Remark \ref{rmk:SeparateIndices} we now give a simple upper bound on $[\mathfrak{A}_{L/K} : \mathcal{O}_K[G]]$ by estimating the index of $\mathcal{O}_K[G]$ in a \textit{maximal} $\mathcal{O}_K$-order $\mathfrak{M}$ in which it is contained.
We consider on $K[G]$ the bilinear form $(x,y) \mapsto \operatorname{tr}(xy)$, where we denote by $\operatorname{tr}(z)$ the trace of multiplication by $z$ on the $K$-vector space $K[G]$; it is nondegenerate since $K$ has characteristic $0$, so it allows us to compute discriminants of lattices in the sense of Definition \ref{def:Discriminant}. Let $\underline{m} = [\mathfrak{M} : \mathcal{O}_K[G]]_{\mathcal{O}_K}$. By Proposition \ref{prop:disc} we have an equality of ideals \begin{equation}\label{eq:DiscIndex} \underline{m}^2 \cdot \operatorname{disc} \mathfrak{M} = \operatorname{disc} \mathcal{O}_K[G], \end{equation} and writing $\underline{m} = \pi_K^A \mathcal{O}_K$ we obtain \begin{equation}\label{eq:IndexIdealNumber} [\mathfrak{M} : \mathcal{O}_K[G]] = N(\underline{m})= p^{f_KA}. \end{equation} We now show that $\operatorname{disc} \mathcal{O}_K[G]=|G|^{|G|} \mathcal{O}_K$. We claim that the trace of an element $g \in G \subset \mathcal{O}_K[G]$ is $0$ if $g \neq \operatorname{Id}$ and is $|G|$ for $g=\operatorname{Id}$. To see this, notice that an $\mathcal{O}_K$-basis of $\mathcal{O}_K[G]$ is given by $\{g \in G\}$, and in this basis the left multiplication action by $g$ is represented by a permutation matrix, whose trace equals the number of fixed points of the map $h \mapsto gh$. The group axioms imply that either $h \mapsto gh$ has no fixed points ($g \neq \operatorname{Id}$), or every element of $G$ is fixed ($g=\operatorname{Id}$). This proves the claim. In particular, the Gram matrix of the bilinear form $(x,y) \mapsto \operatorname{tr}(xy)$ in this basis is \[ \left( \operatorname{tr}(gh) \right)_{g, h \in G} = |G| \left( \delta_{gh,\operatorname{Id}} \right)_{g, h \in G}, \] whose determinant is clearly $\pm |G|^{|G|}$, because $\left( \delta_{gh,\operatorname{Id}} \right)_{g, h \in G}$ is a permutation matrix.
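For instance, for $G$ of order $2$ and $K=\mathbb{Q}_2$ (a sanity check worked out directly), the Gram matrix above is $2\operatorname{Id}_2$ and $\operatorname{disc}\mathcal{O}_K[G]=4\mathbb{Z}_2$; on the other hand $\mathbb{Q}_2[G]\cong \mathbb{Q}_2\times\mathbb{Q}_2$ has maximal order $\mathfrak{M}=\mathbb{Z}_2\times\mathbb{Z}_2$, whose discriminant is trivial, so Equation \eqref{eq:DiscIndex} gives $\underline{m}=2\mathbb{Z}_2$ and $[\mathfrak{M}:\mathcal{O}_K[G]]=2$. This agrees with the explicit description $\mathcal{O}_K[G]\cong\{(a,b)\in\mathbb{Z}_2\times\mathbb{Z}_2 : a\equiv b \pmod{2}\}$, which visibly has index $2$ in $\mathbb{Z}_2\times\mathbb{Z}_2$.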
From Equation \eqref{eq:DiscIndex} we obtain \[ \underline{m}^2 = \frac{|G|^{|G|} \mathcal{O}_K}{\operatorname{disc} \mathfrak{M}} \bigm\vert |G|^{|G|} \mathcal{O}_K, \] so (writing again $\underline{m}=\pi_K^{A}\mathcal{O}_K$) we have $A \leq\frac12|G| e_K v_p(|G|)$. Via Equation \eqref{eq:IndexIdealNumber}, this implies \begin{equation}\label{eq:IndexMaximalOrderBound} v_p\left(\#\frac{\mathfrak{M}}{\mathcal{O}_K[G]}\right) \leq \frac{1}{2} [K:\mathbb{Q}_p] \cdot |G| \cdot v_p(|G|). \end{equation} We summarise these results in the following Proposition. \begin{proposition}\label{prop:MaximalOrder} Let $\mathfrak{M}$ be a maximal order of $K[G]$. We have $[\mathfrak{M} : \mathcal{O}_K[G]]_{\mathcal{O}_K}=\pi_K^{A}\mathcal{O}_K$ with $A=\frac{1}{2}|G|e_K v_p(|G|) -\frac{1}{2} v_K(\operatorname{disc} \mathfrak{M} ) \leq \frac{1}{2}|G|e_K v_p(|G|)$. \end{proposition} \subsection{Proof of Theorem \ref{thm:IntroGeneralBound} } \label{sect:GeneralBound} In this section we prove a bound on $m(L/K)$ which holds in complete generality. Let $\mathfrak{B}$ be an $\mathcal{O}_K$-order in $K[G]$ containing $\mathcal{O}_K[G]$. We define $\mathfrak{B}\mathcal{O}_L$ to be the $\mathfrak{B}$-lattice generated by $\mathcal{O}_L$, that is, \[ \mathfrak{B} \mathcal{O}_L=\left\{\sum_i b_ix_i : b_i\in \mathfrak{B},\ x_i\in\mathcal{O}_L\right\}\subseteq L, \] and suppose that $\mathfrak{B} \mathcal{O}_L$ is a free $\mathfrak{B}$-module of rank $1$. We note that every maximal order $\mathfrak{B}$ has this property. Indeed, let $\mathfrak{B}$ be a maximal order of the separable $K$-algebra $K[G]$; by the normal basis theorem $K\mathfrak{B}\mathcal{O}_L=L$ is isomorphic to $K\mathfrak{B}=K[G]$ as a left $K[G]$-module, so $\mathfrak{B}\mathcal{O}_L$ is isomorphic to $\mathfrak{B}$ as a left $\mathfrak{B}$-module by \cite[Theorem 18.10]{MR1972204}. \begin{lemma} The set $\mathfrak{B} \mathcal{O}_L$ is a fractional ideal in $L$.
\end{lemma} \begin{proof} Indeed, given $\sum_{g \in G} k_g g$ in $\mathfrak{B}$ (where $k_g \in K$) and $x,y \in \mathcal{O}_L$ we have \[ \left( \sum_{g \in G} k_g g x \right) y = \sum_{g \in G} k_g g \left( x g^{-1}(y) \right) \in \mathfrak{B}\mathcal{O}_L. \] \end{proof} In the light of this lemma we can write $\mathfrak{B}\mathcal{O}_L$ as $\pi_L^{-a}\mathcal{O}_L$ for a certain integer $a \geq 0$. Let now $x$ be a generator of $\mathfrak{B} \mathcal{O}_L$ over $\mathfrak{B}$ (in particular, $x$ is a normal basis generator for $L/K$), and let $0\neq r\in\mathcal O_K$ be such that $rx\in \mathcal{O}_L$. Then we have: \[\begin{split} [\mathfrak{B} \mathcal O_L : \mathcal O_L] [\mathcal O_L : \mathcal O_K[G] rx]&=[\mathfrak{B} \mathcal O_L : \mathcal O_K[G] rx]\\ &=[\mathfrak{B}x : \mathcal O_K[G] x] [\mathcal O_K[G] x : \mathcal O_K[G] rx], \end{split} \] and since $x$ is a normal basis generator this quantity is equal to \[ \begin{split} [\mathfrak{B} : \mathcal O_K[G] ]\cdot [\mathcal O_K[G] : \mathcal O_K[G] r] & =[\mathfrak{B} : \mathcal O_K[G] ]\cdot |\mathcal{O}_K/r\mathcal{O}_K|^{|G|} \\ & =[\mathfrak{B} : \mathcal O_K[G] ]\cdot p^{{f_K|G|v_K(r)}}, \end{split} \] where we have used that $\mathcal O_K[G]$ is a free $\mathcal{O}_K$-module of rank $|G|$. Hence we have \[ m(L/K) \leq \left[ \mathfrak{B} : \mathcal{O}_K[G] \right] \cdot \frac{p^{{f_K|G|v_K(r)}}}{[ \mathfrak{B} \mathcal{O}_L : \mathcal{O}_L]}, \] and we may replace $[ \mathfrak{B} \mathcal{O}_L : \mathcal{O}_L]$ by $[ \pi_L^{-a} \mathcal{O}_L : \mathcal{O}_L] = p^{a f_L}$. Given that $\mathfrak{B}\mathcal{O}_L$ is the fractional ideal $(\pi_L^{-a})$, it is clear that $\pi_K^b x$ is in $\mathcal{O}_L$ provided that $v_L(\pi_K^b) \geq a$, that is, $ b \geq \frac{a}{e_{L/K}}. 
$ Plugging in $r=\pi_K^b$ (with $b=\lceil\frac{a}{e_{L/K}}\rceil$) in the formula above, and noticing that $b \leq \frac{a+e_{L/K}-1}{e_{L/K}}$, we obtain \[ \begin{aligned} m(L/K) & \leq \left[ \mathfrak{B} : \mathcal{O}_K[G] \right] \cdot \frac{ p^{f_K |G| b} }{p^{af_L}} \\ & \leq \left[ \mathfrak{B} : \mathcal{O}_K[G] \right] \cdot p^{f_K f_{L/K}e_{L/K} \frac{a+e_{L/K}-1}{e_{L/K}} - af_L} \\ & = \left[ \mathfrak{B} : \mathcal{O}_K[G] \right] \cdot p^{f_L (e_{L/K}-1)}. \end{aligned} \] Using Equation \eqref{eq:IndexMaximalOrderBound} we then obtain \[ \begin{aligned} v_p (m(L/K)) & \leq f_L(e_{L/K}-1) + \frac{1}{2} [K:\mathbb{Q}_p] \cdot |G| \cdot v_p(|G|) \\ & = f_L(e_{L/K}-1) + \frac{1}{2} [L:\mathbb{Q}_p] v_p([L:K]). \end{aligned} \] This proves Theorem \ref{thm:IntroGeneralBound}, and it is clear that one can get a very simple (albeit rough) estimate by replacing $f_L(e_{L/K}-1)$ with $[L:\mathbb{Q}_p]$, thus obtaining \begin{equation}\label{eq:EasierBound} v_p(m(L/K)) < [L:\mathbb{Q}_p] (1 + \frac{1}{2} v_p([L:K])). \end{equation} \begin{remark}\label{rmk:AlmostOptimalBound} Comparison with cases in which we can compute $m(L/K)$ exactly suggests that the bound of Theorem \ref{thm:IntroGeneralBound} is \emph{sharper} when $L/K$ is highly ramified. In fact, in some maximally ramified cases our bound is almost optimal: take for example $K=\mathbb{Q}_p(\zeta_{p^r})$ and $L=K(\sqrt[p^r]{\pi_K})$. One may show that $v_p(m(L/K))=\frac{r}{2} [L:\mathbb{Q}_p]$ (see Remark \ref{rmk:KummerDegreepn}), while our result -- in the form of Equation \eqref{eq:EasierBound} -- gives the upper bound $\left(\frac{r}{2}+1\right) [L:\mathbb{Q}_p]$, which is essentially sharp for large $r$. Theorem \ref{thm:IntroGeneralBound} might thus be useful in situations where a complicated ramification structure prevents the use of other tools. 
\end{remark} \subsection{Reduction to the totally ramified case}\label{subsect:ReductionToTotallyRamifiedCase} Let $L/K$ be a Galois extension of $p$-adic fields, $L^{nr}$ be its maximal unramified subextension, and $G_0$ be the inertia subgroup of $G=\operatorname{Gal}(L/K)$. In this section we show that $[\mathfrak{A}_{L/K} : \mathcal{O}_K[G]]$ is bounded above by the analogous quantity for the extension $L/L^{nr}$, and discuss some cases in which the equality $m(L/K)=m(L/L^{nr})$ holds. To compare $\mathfrak{A}_{L/K}$ and $\mathfrak{A}_{L/L^{nr}}$ we start with a result of Jacobinski (see the beginning of $\S2.1$ in \cite{MR513880}), according to which we have \[ \mathfrak{A}_{L/K}=\bigoplus_{s\in G/G_0}(\mathfrak{A}_{L/L^{nr}}\cap K[G_0])s, \] where $G/G_0$ denotes a fixed system of left coset representatives. Then \[\begin{split} [\mathfrak{A}_{L/K}:\mathcal{O}_K[G]]&=[\oplus_{s\in G/G_0}(\mathfrak{A}_{L/L^{nr}}\cap K[G_0])s:\oplus_{s\in G/G_0}\mathcal{O}_K[G_0]s]\\ &=[\mathfrak{A}_{L/L^{nr}}\cap K[G_0]:\mathcal{O}_K[G_0]]^{[G:G_0]}\\ &=[\mathcal{O}_{L^{nr}}\otimes_{\mathcal{O}_K}(\mathfrak{A}_{L/L^{nr}}\cap K[G_0]) :\mathcal{O}_{L^{nr}}[G_0]] \end{split} \] (we are using that $\mathcal{O}_{L^{nr}}$ is free over $\mathcal{O}_K$ of rank $[G:G_0]$). As noted for instance by Berg\'e in \cite{MR513880}, we always have an injection \begin{equation}\label{injection} \mathcal{O}_{L^{nr}}\otimes_{\mathcal{O}_K}(\mathfrak{A}_{L/L^{nr}}\cap K[G_0])\hookrightarrow \mathfrak{A}_{L/L^{nr}}. \end{equation} We have thus obtained the following. \begin{proposition}\label{indexassociated} In the above notation, \[ [\mathfrak{A}_{L/K}:\mathcal{O}_K[G]]\leq[\mathfrak{A}_{L/L^{nr}}:\mathcal{O}_{L^{nr}}[G_0]], \] with equality if and only if (\ref{injection}) is a surjection. \end{proposition} However, the injection \eqref{injection} is not always a surjection. 
In the same article, Berg\'e computed the image of \eqref{injection} when $G_1$ is cyclic; see \cite[Remarque after Lemme 5]{MR513880}. She also gave an elegant description of the image of \eqref{injection} in the general case, which we now recall. \begin{proposition}[{\cite[Proposition 4]{MR747998}}]\label{imageberge} The following equality holds: \begin{equation}\label{intersection} \mathcal{O}_{L^{nr}}\otimes_{\mathcal{O}_K}(\mathfrak{A}_{L/L^{nr}}\cap K[G_0])=\bigcap_{g\in G}g\mathfrak{A}_{L/L^{nr}}g^{-1}. \end{equation} In particular, if $G$ is abelian then (\ref{injection}) is a surjection. \end{proposition} Note that $\bigcap_{g\in G}g\mathfrak{A}_{L/L^{nr}}g^{-1}$ is an $\mathcal{O}_{L^{nr}}$-order in $L^{nr}[G_0]$ contained in $\mathfrak{A}_{L/L^{nr}}$, hence $\mathcal{O}_L$ has the structure of a module over it. It is natural to ask whether there is a relation between freeness of $\mathcal{O}_L$ over $\mathfrak{A}_{L/K}$ and over $\bigcap_{g\in G}g\mathfrak{A}_{L/L^{nr}}g^{-1}$ (note that freeness over the latter order would imply $\bigcap_{g\in G}g\mathfrak{A}_{L/L^{nr}}g^{-1}=\mathfrak{A}_{L/L^{nr}}$ by Proposition \ref{associatedfree}). In full generality we only know the following. \begin{theorem}[{\cite[Théorème]{MR747998}}]\label{bergeprojective} Keeping the above notation, $\mathcal{O}_L$ is projective over $\mathfrak{A}_{L/K}$ if and only if it is projective over $\bigcap_{g\in G}g\mathfrak{A}_{L/L^{nr}}g^{-1}$. \end{theorem} Fortunately there are situations in which knowing that $\mathcal{O}_L$ is projective is sufficient to conclude that it is free, for example when the relevant orders are commutative. More precisely, the next proposition shows that in several cases the minimal index $m(L/K)$ is controlled purely by the totally ramified extension $L/L^{nr}$. \begin{proposition}\label{unramifiedindex} Assume that $G_0$ is abelian and that $\mathcal{O}_L$ is free over $\mathfrak{A}_{L/K}$.
Then $\mathcal{O}_{L}$ is free over $\mathfrak{A}_{L/L^{nr}}$ and \[ m(L/K)=m(L/L^{nr})=[\mathfrak{A}_{L/K}:\mathcal{O}_K[G]]=[\mathfrak{A}_{L/L^{nr}}:\mathcal{O}_{L^{nr}}[G_0]]. \] Conversely, if $G$ is abelian and $\mathcal{O}_{L}$ is free over $\mathfrak{A}_{L/L^{nr}}$, then $\mathcal{O}_L$ is free over $\mathfrak{A}_{L/K}$ and as before \[ m(L/K)=m(L/L^{nr})=[\mathfrak{A}_{L/K}:\mathcal{O}_K[G]]=[\mathfrak{A}_{L/L^{nr}}:\mathcal{O}_{L^{nr}}[G_0]]. \] \end{proposition} \begin{proof} For the forward implication, assume that the $\mathfrak{A}_{L/K}$-module $\mathcal{O}_L$ is free, hence also projective. By Theorem \ref{bergeprojective} we have that $\mathcal{O}_L$ is projective over $\bigcap_{g\in G}g\mathfrak{A}_{L/L^{nr}}g^{-1}$. The latter is a commutative order since $G_0$ is abelian, so it is ``clean'', that is, every projective $\bigcap_{g\in G}g\mathfrak{A}_{L/L^{nr}}g^{-1}$-lattice that spans the algebra $L^{nr}[G_0]$ is free (see \cite{MR0175950} or \cite[IX Corollary 1.5]{MR0283014}). It follows that $\mathcal{O}_L$ is free over $\bigcap_{g\in G}g\mathfrak{A}_{L/L^{nr}}g^{-1}$, so the latter is the associated order of $\mathcal{O}_L$ in $L^{nr}[G_0]$, that is, \[ \bigcap_{g\in G}g\mathfrak{A}_{L/L^{nr}}g^{-1}=\mathfrak{A}_{L/L^{nr}} \] and $\mathcal{O}_L$ is free over $\mathfrak{A}_{L/L^{nr}}$. By Proposition \ref{imageberge} we have that (\ref{injection}) is a bijection. Therefore, by Proposition \ref{indexassociated} and Proposition \ref{productindex}, we have \[ m(L/K)=[\mathfrak{A}_{L/K}:\mathcal{O}_K[G]]=[\mathfrak{A}_{L/L^{nr}}:\mathcal{O}_{L^{nr}}[G_0]]=m(L/L^{nr}). \] As for the converse, assume that $\mathcal{O}_L$ is free over $\mathfrak{A}_{L/L^{nr}}$, which coincides with the intersection $ \bigcap_{g\in G}g\mathfrak{A}_{L/L^{nr}}g^{-1}$ since $G$ is abelian. It follows from Theorem \ref{bergeprojective} that $\mathcal{O}_L$ is projective over $\mathfrak{A}_{L/K}$, which is commutative, hence clean. 
We obtain that $\mathcal{O}_L$ is free over $\mathfrak{A}_{L/K}$, and the remaining equalities are proved as above. \end{proof} \section{The case when the associated order is maximal} \label{sec:assab} When $\mathfrak{A}_{L/K}$ is a maximal order in $K[G]$ the problem of studying $m(L/K)$ simplifies considerably. In this section we study this situation and prove Theorem \ref{thm:IntroAbsolutelyAbelian}. We start with the following simple lemma. \begin{lemma}\label{associatedmaximal} Let $L/K$ be a Galois extension of $p$-adic fields with Galois group $G$. Suppose that $\mathfrak{A}_{L/K}$ is a maximal $\mathcal{O}_K$-order in $K[G]$. Then $\mathcal{O}_L$ is free over $\mathfrak{A}_{L/K}$ and $m(L/K)= [\mathfrak{A}_{L/K}:\mathcal{O}_K[G]]$. \end{lemma} \begin{proof} The first assertion follows from \cite[Theorem (18.10)]{MR1972204}, the second from Proposition \ref{productindex}. \end{proof} Under the assumption of Lemma \ref{associatedmaximal}, the discussion in $\S$\ref{indexmaximal} shows that the minimal index $m(L/K)$ is determined by the discriminant of a maximal order in $K[G]$. We now describe some interesting instances of this situation. If $G$ is abelian, the Wedderburn decomposition of $K[G]$ is given by a product of cyclotomic extensions of $K$, \[ K[G]\cong \prod_{\gamma\in \Phi} K(\gamma), \] where $\Phi$ is the set of orbits of the characters of $G$ under the action of the absolute Galois group of $K$ and $K(\gamma)$ is the extension of $K$ generated by the image of any character in the orbit $\gamma$. In this case the unique maximal order is \[ \mathfrak{M}\cong \prod_{\gamma\in \Phi} \mathcal{O}_{K(\gamma)}, \] and using Proposition \ref{prop:disc} one gets \[ \operatorname{disc}\mathfrak{M}=\prod_{\gamma\in \Phi} \operatorname{disc}(K(\gamma)/K). 
\] Hence, by Proposition \ref{prop:MaximalOrder}, when $G$ is abelian we have \begin{equation} \label{formula-max-abeliano} v_p[\mathfrak{M} : \mathcal{O}_K[G]]=\frac{f_K}{2}\left(e_K|G|v_p(|G|)-\sum_{\gamma\in \Phi} v_K( \operatorname{disc}(K(\gamma)/K))\right). \end{equation} If in addition $\mathfrak{A}_{L/K}=\mathfrak{M}$, this is also the value of $m(L/K)$. As a consequence, we can deal with the case of absolutely abelian extensions of $\mathbb{Q}_p$ when $p$ is odd. \begin{proof}[Proof of Theorem \ref{thm:IntroAbsolutelyAbelian}] In this case $L$ is absolutely abelian, and the main result of \cite{MR1627831} shows that $\mathcal{O}_L$ is free over $\mathfrak{A}_{L/K}$. This, together with Proposition \ref{unramifiedindex}, implies $m(L/K)=m(L/L^{nr})$. Thus we can suppose that $L/K$ is totally ramified. By the local version of the Kronecker-Weber Theorem, when the extension is totally ramified we know that $L\subseteq K(\zeta_{p^n})$ for some $n$. Hence $G=\operatorname{Gal}(L/K)$ is naturally isomorphic to a subquotient of $\operatorname{Gal}(\mathbb{Q}_p(\zeta_{p^n})/\mathbb{Q}_p)$, which is cyclic since $p$ is odd. So $G$ is also cyclic, and the Wedderburn decomposition of $K[G]$ is \[ K[G]\cong \prod_{d \mid [L:K]} K(\zeta_d)^{\frac{\varphi(d)}{[K(\zeta_d):K]}}. \] By \cite[Theorem 1]{MR1627831} the associated order $\mathfrak{A}_{L/K}$ is the maximal order in $K[G]$. By Lemma \ref{associatedmaximal} and Equation \eqref{formula-max-abeliano}, we get \[ \begin{aligned} v_p(m(L/K)) &=v_p(m(L/L^{nr})) \\ & =\frac{f_{L}}{2}\left(e_K|G_0|v_p(|G_0|)-\sum_{d\mid e_{L/K}} \frac{\varphi(d)}{[L^{nr}(\zeta_d):L^{nr}]} v_{L^{nr}}( \operatorname{disc}(L^{nr}(\zeta_d)/L^{nr}))\right) \end{aligned} \] as required. \end{proof} \begin{remark}\label{rmk:CasepEqualsTwo} The formula in the previous theorem needs to be modified for $p=2$ and $L/K$ ramified. The main difference is that the associated order is not necessarily maximal in this case (see \cite{MR1627831}).
In addition, since the Galois group $G$ need not be cyclic in this case, for the maximal order we get $$ v_2[\mathfrak{M} : \mathcal{O}_K[G]]=\frac{f_{L}}{2}\left(|G_0|v_2(|G_0|)e_K-\sum_{d\mid e_{L/K}} \frac{a(d)}{[L^{nr}(\zeta_d):L^{nr}]} v_{L^{nr}}( \operatorname{disc}(L^{nr}(\zeta_d)/L^{nr}))\right), $$ where $a(d)$ is the number of elements of order $d$ in $G$. When $\mathfrak{A}_{L/K}$ is maximal, this is also the value of $m(L/K)$, but $m(L/K)$ may be much smaller in general. A general formula may also be established in the case $p=2$ by following the description of $\mathfrak{A}_{L/K}$ given in \cite{MR1627831}, but we content ourselves with describing a case in which the result of Theorem \ref{thm:IntroAbsolutelyAbelian} does not hold for $p=2$. Let $m\ge2$, $L=\mathbb{Q}_2(\zeta_{2^m})$ and $K=\mathbb{Q}_2(\zeta_{2^m}+\zeta_{2^m}^{-1})$. Let $\sigma$ be a generator of the Galois group $G$ of $L/K$, which is cyclic of order 2. We have $K[G]\cong K\oplus K$, and so $\mathfrak{M}\cong\mathcal{O}_K\oplus\mathcal{O}_K$. Via the same isomorphism, which can be taken to be the Chinese Remainder Theorem map $\alpha+\beta\sigma\mapsto(\alpha+\beta,\alpha-\beta)$, one can check that \[ \mathcal{O}_K[G]\cong\{(a,b)\in\mathcal{O}_K\oplus\mathcal{O}_K\mid a-b\in 2\mathcal{O}_K\}. \] By \cite[Proposition 3]{MR1627831} the associated order in this case is the ring generated by $\frac{1-\sigma}{\pi_K}$ over $\mathcal{O}_K[G]$, and via the above isomorphism \[ \mathfrak{A}_{L/K}\cong\{(a,b)\in\mathcal{O}_K\oplus\mathcal{O}_K\mid a-b\in \pi_K^{e_K-1}\mathcal{O}_K\}. \] Since $\mathcal{O}_L$ is free over $\mathfrak{A}_{L/K}$ by \cite[Proposition 3]{MR1627831}, these explicit descriptions allow us to easily compute \[ m(L/K)=[\mathfrak{A}_{L/K}:\mathcal{O}_K[G]]=N(\pi_K)=2, \] while on the other hand \[ [\mathfrak{M} : \mathcal{O}_K[G]]=2^{e_K}=2^{2^{m-2}}.
\] Finally, we remark that in this case the value $m(L/K)$ may also be obtained easily from Theorem \ref{thm:IntroCyclicDegreep} (one may check that in our situation we have $t=1$ and $\nu_1=1$, hence $v_2(m(L/K))=f_K \cdot 1 = 1$). In addition, the description of the associated order can also be obtained from Theorem \ref{thm:StructureOfALK}. \end{remark} The discriminants of the cyclotomic extensions of $\mathbb{Q}_p$ are well-known, so from Theorem \ref{thm:IntroAbsolutelyAbelian} we easily deduce the following. \begin{corollary}\label{cor:AbsolutelyAbelianCase} Let $L/K$ be an absolutely abelian extension of $p$-adic fields with ramification index $p^nd$, where $p$ is an odd prime and $(d,p)=1$, and suppose that $K$ is unramified over $\mathbb{Q}_p$. Then \[ m(L/K)=p^{\frac{f_Ld(p^n-1)}{p-1}}. \] \end{corollary} \begin{proof} Up to replacing $K$ with $L^{nr}$ we can assume that $L$ is totally ramified over $K$ of degree $p^nd$, so from Theorem \ref{thm:IntroAbsolutelyAbelian} we have \begin{equation}\label{eq:thFormula} v_p(m(L/K))=\frac{f_L}{2}\left(dp^n n-\sum_{\delta\mid dp^n } \frac{\varphi(\delta)}{[K(\zeta_\delta):K]} v_{p}( \operatorname{disc}(K(\zeta_\delta)/K))\right). \end{equation} Let $s$ be a positive integer relatively prime to $p$ and let $f_s= [\mathbb{Q}_p(\zeta_{s}):\mathbb{Q}_p].$ We recall that the extension $\mathbb{Q}_p(\zeta_{s})/\mathbb{Q}_p$ is unramified, so its discriminant is trivial, while for each $r\ge1$ we have \begin{equation} \label{eq:disc} \operatorname{disc}(\mathbb{Q}_p(\zeta_{p^rs})/\mathbb{Q}_p )=p^{p^{r-1}(pr-r-1)f_s} \end{equation} (this follows e.g.~from \cite[Theorem 2.20]{MR2078267}, combined with the transitivity of the discriminant in towers). The extension $K/\mathbb{Q}_p$ is unramified, so $K\cap\mathbb{Q}_p(\zeta_{p^rs})= K\cap\mathbb{Q}_p(\zeta_{s})$; denote this extension by $K_s$. 
Letting $[K_s:\mathbb{Q}_p]=l_s$, the extensions $K/K_s$ and $K(\zeta_{p^rs})/\mathbb{Q}_p(\zeta_{p^rs})$ have degree $\frac{f_K}{l_s}$ and, being unramified, have trivial discriminant. We also have \[ [K(\zeta_{p^rs}):K]=[\mathbb{Q}_p(\zeta_{p^rs}):K_s]=\varphi(p^r)\frac{f_s}{l_s}. \] By transitivity of the discriminant in towers of extensions we get $$ \operatorname{disc}(\mathbb{Q}_p(\zeta_{p^rs})/\mathbb{Q}_p )=N_{K_s/\mathbb{Q}_p}(\operatorname{disc}(\mathbb{Q}_p(\zeta_{p^rs})/K_s ))=\operatorname{disc}(\mathbb{Q}_p(\zeta_{p^rs})/K_s )^{l_s}.$$ Computing $\operatorname{disc}(K(\zeta_{p^rs})/K_s)$ along the extensions $K(\zeta_{p^rs}) / \mathbb{Q}_p(\zeta_{p^rs})/K_s$ and $K(\zeta_{p^rs}) / K/K_s$ we also obtain \[ \operatorname{disc}(\mathbb{Q}_p(\zeta_{p^rs})/K_s)^{[K(\zeta_{p^rs}):\mathbb{Q}_p(\zeta_{p^rs})]}= N_{ K/K_s}(\operatorname{disc}(K(\zeta_{p^rs})/K ))=\operatorname{disc}(K(\zeta_{p^rs})/K )^{[K:K_s]}, \] so from \eqref{eq:disc} we get \[ \operatorname{disc}(K(\zeta_{p^rs})/K )=\operatorname{disc}(\mathbb{Q}_p(\zeta_{p^rs})/\mathbb{Q}_p )^{\frac{1}{l_s}}=p^{p^{r-1}(pr-r-1)\frac{f_s}{l_s}}. \] This gives \[ \frac{\varphi(p^rs)}{[K(\zeta_{p^rs}):K]} v_p\left( \operatorname{disc}(K(\zeta_{p^rs})/K)\right)= \varphi(s)p^{r-1}(pr-r-1), \] and, by Equation \eqref{eq:thFormula}, \begin{align*} v_p(m(L/K))=&\frac{f_L}{2}\left(dp^n n-\sum_{r=1}^n\sum_{s\mid d} \varphi(s)p^{r-1}(pr-r-1)\right)\\ =&\frac{f_L}{2}\left(dp^n n-d\sum_{r=1}^np^{r-1}(pr-r-1)\right)\\ =&f_L d\frac{p^n-1}{p-1}. \end{align*} \end{proof} There are also other known cases in which the associated order is maximal. From now on, we no longer assume that $p$ is odd. \begin{lemma}[{\cite[Corollaire 3 to Théorème 1]{MR513880}}]\label{bergealmostmaximal} Let $L/K$ be a totally ramified cyclic extension of $p$-adic fields with almost-maximal ramification. Assume that $K$ is unramified over $\mathbb{Q}_p$. Then $\mathfrak{A}_{L/K}$ is a maximal order. 
\end{lemma} \begin{corollary} \label{cor:almax} Let $L/K$ be a Galois extension of $p$-adic fields with group $G$. Assume that $L/L^{nr}$ is cyclic of degree $p^nd$ (with $(p,d)=1$) and almost-maximally ramified, and that $K/\mathbb{Q}_p$ is unramified. \begin{enumerate} \item $\mathcal{O}_L$ is free over $\mathfrak{A}_{L/L^{nr}}$, and if $G$ is abelian it is also free over $\mathfrak{A}_{L/K}$. \item Assume that $\mathcal{O}_L$ is free over $\mathfrak{A}_{L/K}$. Then $m(L/K)=p^{\frac{f_Ld(p^n-1)}{p-1}}$. \end{enumerate} \end{corollary} \begin{proof} \begin{enumerate} \item $L/L^{nr}$ satisfies the assumptions of Lemma \ref{bergealmostmaximal}, hence $\mathfrak{A}_{L/L^{nr}}$ is a maximal order in $L^{nr}[G_0]$, and in particular $\mathcal{O}_L$ is free over $\mathfrak{A}_{L/L^{nr}}$ by Lemma \ref{associatedmaximal}. If $G$ is abelian, then Proposition \ref{unramifiedindex} applies, giving that $\mathcal{O}_L$ is also free over $\mathfrak{A}_{L/K}$. \item By Proposition \ref{unramifiedindex} we see that $m(L/K)=m(L/L^{nr})=[\mathfrak{A}_{L/L^{nr}} : \mathcal{O}_{L^{nr}}[G_0]]$; since $\mathfrak{A}_{L/L^{nr}}$ is a maximal order of $L^{nr}[G_0]$, it suffices to compute the index of $\mathcal{O}_{L^{nr}}[G_0]$ inside a maximal order of $L^{nr}[G_0]$. Using Equation \eqref{formula-max-abeliano} and the same computation of discriminants as in the proof of Corollary \ref{cor:AbsolutelyAbelianCase} we get the result. \end{enumerate} \end{proof} The associated order can be maximal even when $K/\mathbb{Q}_p$ is ramified. This happens for instance whenever $L/K$ is an almost-maximally ramified Kummer extension of degree $p$: see Proposition \ref{prop:aeq0}, Corollary \ref{cor:AEqZeroOLIsAFree} and Remark \ref{rmk:greither}. \begin{remark} If $G$ is abelian, a necessary condition for the associated order to be maximal is that $G$ is cyclic. For other considerations in the abelian case, see the survey of Thomas \cite{MR2760251}.
\end{remark} \section{A local-global principle} \label{sec:local-global} Our main goal in this paper is to investigate the index $m(L/K)$ for a Galois extension $L/K$ of $p$-adic fields. In this section we shift slightly away from this setting, and show that in some special cases it is possible to study an analogous quantity $m(L/K)$ for extensions of number fields by reducing it to the local indices at the various completions. First of all note that, if $L/K$ is a Galois extension of number fields, the quantity \[ m(L/K):=\min_{\alpha \in \mathcal{O}_L} [\mathcal{O}_L : \mathcal{O}_K[G] \alpha] \] is still well-defined: in fact, as in the case of $p$-adic fields, any normal basis generator gives a finite index. However, in this case it is less interesting to define $m(L/K)$ as a minimum, since for instance it could be the case that the greatest common divisor of all the possible finite indices is strictly smaller than their minimum. We can still make some interesting remarks about the globally defined index. As a preliminary observation, note that the associated order $\mathfrak{A}_{L/K}$ is defined exactly as for extensions of $p$-adic fields, and this continues to be the only possible candidate for an $\mathcal{O}_K$-order over which $\mathcal O_L$ is free. With the same proof as Proposition \ref{productindex}, we have the following. \begin{proposition}\label{productindexglobal} Let $L/K$ be a Galois extension of number fields with Galois group $G$ such that $\mathcal{O}_L$ is free as an $\mathfrak{A}_{L/K}$-module. Then \[ m(L/K)=[\mathfrak{A}_{L/K}:\mathcal{O}_K[G]], \] and the minimal index is realised exactly by the generators of $\mathcal{O}_L$ as an $\mathfrak{A}_{L/K}$-module. \end{proposition} Our main theorem is the following. \begin{theorem}\label{thm:GlobalCase} Let $L/\mathbb{Q}$ be a finite abelian extension. 
Then for every rational prime $p$ and every prime $\mathfrak{P}$ of $L$ above $p$ we have \[ \displaystyle v_p(m(L/\mathbb{Q}))=\frac{[L:\mathbb{Q}]}{e(\mathfrak{P}|p)f(\mathfrak{P}|p)}v_p(m(L_\mathfrak{P}/\mathbb{Q}_p)). \] Hence $ m(L/\mathbb{Q})=\prod_{\mathfrak{P}} m(L_\mathfrak{P}/\mathbb{Q}_p), $ where the product runs over all the primes $\mathfrak{P}$ of $L$, with $p$ denoting the rational prime below $\mathfrak{P}$. \end{theorem} \begin{proof} Let $G$ be the Galois group of $L/\mathbb{Q}$. By Proposition \ref{productindexglobal} and Leopoldt's Theorem \cite{MR0108479} we know that $ m(L/\mathbb{Q})=[\mathfrak{A}_{L/\mathbb{Q}}:\mathbb{Z}[G]]. $ Note that $\mathfrak{A}_{L/\mathbb{Q}}$ and $\mathbb{Z}[G]$ are two $\mathbb{Z}$-lattices in $\mathbb{Q}[G]$, hence both are free of rank $|G|$. Let $f:\mathbb{Q}[G]\rightarrow \mathbb{Q}[G]$ be a map of $\mathbb{Q}$-vector spaces such that $f(\mathfrak{A}_{L/\mathbb{Q}})=\mathbb{Z}[G]$. Then by Remark \ref{remark:nodvr} we have \[ [\mathfrak{A}_{L/\mathbb{Q}}:\mathbb{Z}[G]]=|\det f|. \] Let $p$ be a rational prime; tensoring with $\mathbb{Q}_p$, the map $f$ induces a map $f_p:\mathbb{Q}_p[G]\rightarrow \mathbb{Q}_p[G]$ of $\mathbb{Q}_p$-vector spaces with the same determinant. Setting $\mathfrak{A}_{L/\mathbb{Q},p}:=\mathfrak{A}_{L/\mathbb{Q}}\otimes_\mathbb{Z}\Z_p$, note that $f_p(\mathfrak{A}_{L/\mathbb{Q},p})=\mathbb{Z}_p[G]$. By \eqref{eq:ind-det}, we deduce that $ ([\mathfrak{A}_{L/\mathbb{Q},p}:\mathbb{Z}_p[G]])=(\det f) $ as ideals in $\mathbb{Z}_p$, which implies \[ v_p(m(L/\mathbb{Q}))=v_p([\mathfrak{A}_{L/\mathbb{Q},p}:\mathbb{Z}_p[G]]). \] We now study the $\mathbb{Z}_p$-order $\mathfrak{A}_{L/\mathbb{Q},p}$. Let $\mathfrak{P}$ be a prime of $L$ above $p$ and $D$ be its decomposition group in $G$.
Note that $\mathfrak{A}_{L/\mathbb{Q},p}$ can be seen as the associated order of \[ \mathcal{O}_{L,p}=\mathcal{O}_L\otimes_\mathbb{Z}\Z_p\cong \mathcal{O}_{L_\mathfrak{P}}\otimes_{\mathbb{Z}_p[D]}\mathbb{Z}_p[G] \] in $\mathbb{Q}_p[G]$, in the sense of Proposition \ref{associatedfree}. Then by \cite[Equation (2)]{MR747998} we have the relation \[ \mathfrak{A}_{L/\mathbb{Q},p}=\bigcap_{g\in G}g\left(\mathfrak{A}_{L_\mathfrak{P}/\mathbb{Q}_p}\otimes_{\mathbb{Z}_p[D]}\mathbb{Z}_p[G]\right)g^{-1}, \] and since $G$ is abelian this simply means $ \mathfrak{A}_{L/\mathbb{Q},p}=\mathfrak{A}_{L_\mathfrak{P}/\mathbb{Q}_p}\otimes_{\mathbb{Z}_p[D]}\mathbb{Z}_p[G]. $ As $\mathbb{Z}_p[G]$ is free of rank $[G:D]=[L:\mathbb{Q}]/\left(e(\mathfrak{P}|p)f(\mathfrak{P}|p)\right)$ over $\mathbb{Z}_p[D]$, we find that \[ [\mathfrak{A}_{L/\mathbb{Q},p}:\mathbb{Z}_p[G]]=[\mathfrak{A}_{L_\mathfrak{P}/\mathbb{Q}_p}\otimes_{\mathbb{Z}_p[D]}\mathbb{Z}_p[G]:\mathbb{Z}_p[G]]=[\mathfrak{A}_{L_\mathfrak{P}/\mathbb{Q}_p}:\mathbb{Z}_p[D]]^{[G:D]}. \] The statement follows. \end{proof} From Corollary \ref{cor:AbsolutelyAbelianCase} we immediately deduce the following. \begin{corollary} Let $L/\mathbb{Q}$ be a finite abelian extension and $p$ an odd prime. Let $p^nd$ be the ramification index of any prime of $L$ above $p$, with $(d,p)=1$. Then \[ v_p(m(L/\mathbb{Q}))=\frac{[L:\mathbb{Q}](p^n-1)}{p^n(p-1)}. \] \end{corollary} \begin{remark} Let $\mathfrak{f}$ be the conductor of $L$. Then, since $\mathbb{Q}(\zeta_{\mathfrak{f}})/L$ is tamely ramified above odd primes (as follows for example from \cite[Theorem 8.2]{MR2078267}), in the notation of the above corollary we have that $v_p(\mathfrak{f})=n+1$ for every odd ramified prime $p$. In unpublished notes, Henri Johnston already established, by different methods, a similar formula relating $v_p(m(L/\mathbb{Q}))$ with the degree and conductor of an abelian extension.
\end{remark} \section{Cyclic extensions of degree $p$} \label{sec:degp} \subsection{Preliminaries} In this section we need the following notation. Let $L/K$ be a Galois extension of $p$-adic fields of degree $p$. Let $G=\operatorname{Gal}(L/K)$, and let $\sigma$ be a generator of $G$. We assume that $L/K$ is ramified with ramification jump $t$; we denote by $a \in \{0,\ldots,p-1\}$ the residue class modulo $p$ of $t$ and write $t=pt_0+a$. To simplify the notation, we will also denote simply by $e$ (instead of $e_K$) the absolute ramification index of $K$. \begin{remark}\label{rmk:EGeqA} Proposition \ref{prop:InequalitiesOneAndt} implies $ \frac{p-1}p t=\frac{p-1}p a +(p-1)t_0\le e, $ so we have the inequality \[ e \geq \left\lceil \frac{p-1}p a\right\rceil +(p-1)t_0=a +(p-1)t_0 \] since $p>a$. Moreover, if $L/K$ is assumed \textit{not} to be almost-maximally ramified, then equality cannot hold and we have $e \geq a+(p-1)t_0+1$. Recall that $L/K$ is maximally ramified if $t=\frac{ep}{p-1}$, and this corresponds to the case $a=0$. \end{remark} We will repeatedly need the following well-known lemma. \begin{lemma}\label{lemma:DifferentValuations} Suppose $L/K$ is totally ramified. Then any set of elements $\alpha_0, \ldots, \alpha_{p-1} \in \mathcal{O}_L$ such that $v_L(\alpha_i)=i$ forms a basis of $\mathcal{O}_L$ as a free $\mathcal{O}_K$-module. Similarly, any set of elements $\beta_0,\ldots,\beta_{p-1} \in L$ such that $v_L(\beta_i) \equiv i \pmod{p}$ forms a basis of $L$ as a $K$-vector space. \end{lemma} The case $a=0$ is special, and we handle it first. This case is well studied in the literature (see e.g.~\cite[Proposition 3]{MR296047}), but our approach seems to be new. \begin{proposition}\label{prop:aeq0} Suppose that $L/K$ is ramified and $a=0$.
Then $L=K(\pi_K^{1/p})$ for a suitable uniformiser $\pi_K$ of $K$, we have $m(L/K)=p^{\frac{1}{2}[L:\mathbb{Q}_p]}$, and $\omega = \sum_{i=0}^{p-1} c_i \pi_K^{i/p}$ achieves the minimal index if and only if every $c_i$ is a unit in $\mathcal{O}_K$. \end{proposition} \begin{proof} By Proposition \ref{prop:InequalitiesOneAndt} we have $t=\frac{ep}{p-1}$, the extension $L/K$ is maximally ramified, $K$ contains the $p$-th roots of unity, and $L=K(\pi_K^{1/p})$ for a suitable choice of uniformiser $\pi_K$. Letting $\pi_L := \pi_K^{1/p}$, Lemma \ref{lemma:DifferentValuations} ensures that $1,\pi_L,\ldots,\pi_L^{p-1}$ is an $\mathcal{O}_K$-basis of $\mathcal{O}_L$. Let $\zeta_p = \sigma(\pi_L)/\pi_L$; it is a $p$-th root of unity. Writing an arbitrary element of $\mathcal{O}_L$ as $\omega = \sum_{i=0}^{p-1} c_i \pi_L^i$ (with $c_i \in \mathcal{O}_K$), the Galois conjugates of $\omega$ are the elements $ \omega_j := \sigma^j(\omega)= \sum_{i=0}^{p-1} c_i \zeta_p^{ij} \pi_L^i $, so the index of $\mathcal{O}_K[G] \cdot \omega$ in $\mathcal{O}_L$ is the norm of the determinant of the matrix $M := \left( c_i \zeta_p^{ij} \right)_{i,j=0,\ldots,p-1}$. We have $\det(M) = \prod_{i=0}^{p-1} c_i \cdot \det \left(\zeta_p^{ij} \right)_{i,j=0,\ldots,p-1}$, and the latter is a Vandermonde determinant, so \[ [\mathcal{O}_L : \mathcal{O}_K[G] \cdot \omega] = N(\det M) = N\left( \prod_{i=0}^{p-1} c_i \right) \cdot \prod_{0 \leq i<j \leq p-1} N(\zeta_p^{j}-\zeta_p^i) = N\left( \prod_{i=0}^{p-1} c_i \right) \cdot N(p \mathcal{O}_K)^{p/2}, \] where the last equality follows from the well-known fact that $\operatorname{disc}(x^p-1)=\pm p^p$. The minimum of $[\mathcal{O}_L : \mathcal{O}_K[G] \cdot \omega]$ is then achieved exactly when $\prod_{i=0}^{p-1} c_i$ is a unit in $\mathcal{O}_K$, that is, when every $c_i$ is a unit, and it is equal to $N(p\mathcal{O}_K)^{p/2}$. Notice that $\zeta_p \in K$ ensures that $p-1 \mid e$, hence $N(p\mathcal{O}_K)^{p/2}=p^{ef_Kp/2}$ is an integer.
The statement follows from the fact that $ef_Kp = [K:\mathbb{Q}_p][L:K]=[L:\mathbb{Q}_p]$. \end{proof} \begin{remark}\label{rmk:KummerDegreepn} An immediate extension of the argument in the previous proof shows that when $K$ contains the $p^r$-th roots of unity and $L=K(\sqrt[p^r]{\pi_K})$ we have \[ v_p(m(L/K))=\frac{1}{2} [K:\mathbb{Q}_p] v_p (\operatorname{disc}(x^{p^r}-1)) = \frac{1}{2} [K:\mathbb{Q}_p] rp^r = \frac{1}{2}r [L:\mathbb{Q}_p]. \] \end{remark} \begin{corollary}\label{cor:AEqZeroOLIsAFree} Suppose that $L/K$ is ramified and $a=0$, so we can write $L=K(\pi_K^{1/p})$. Then $\mathcal{O}_L$ is free over $\mathfrak{A}_{L/K}$, and an $\mathcal{O}_K$-basis of the latter is given by $\pi_K^{-ei/(p-1)}(\sigma-1)^i$ for $i=0,\ldots,p-1$. Moreover, $\omega=\sum_{i=0}^{p-1} c_i \pi_K^{i/p}$ generates $\mathcal{O}_L$ over $\mathfrak{A}_{L/K}$ if and only if every $c_i$ is a unit in $\mathcal{O}_K$. \end{corollary} \begin{proof} Let $\pi_L=\pi_K^{1/p}$. We know that $1,\pi_L,\ldots,\pi_L^{p-1}$ is an $\mathcal{O}_K$-basis of $\mathcal{O}_L$. If $j\geq 1$, then we have $\pi_K^{-ei/(p-1)}(\sigma-1)^i \pi_L^j = \pi_K^{-ei/(p-1)} (\zeta_p^j-1)^{i} \pi_L^j$, which has $L$-valuation $-e p i /(p-1) + \frac{iep}{p-1}+j = j \geq 0$, so all the elements $\pi_K^{-ei/(p-1)}(\sigma-1)^i$ do in fact belong to $\mathfrak{A}_{L/K}$. Set $\mathfrak{B}=\bigoplus \mathcal{O}_K \pi_K^{-ei/(p-1)}(\sigma-1)^i$: it is a sub-$\mathcal{O}_K$-lattice of $\mathfrak{A}_{L/K}$ containing $\mathcal{O}_K[G]$. The index $[\mathfrak{B}:\mathcal{O}_K[G]]$ is $N(\prod_{i=0}^{p-1} \pi_K^{ei/(p-1)}) = N( \pi_K^{\frac{1}{2} p e} ) = p^{\frac{1}{2}[L:\mathbb{Q}_p]}$, which by Proposition \ref{prop:aeq0} is also $m(L/K)$.
Proposition \ref{productindex} then yields \[ [\mathfrak{B}: \mathcal{O}_K[G]] = m(L/K) = [\mathfrak{A}_{L/K} : \mathfrak{B}] \cdot [\mathfrak{B}: \mathcal{O}_K[G]] \cdot \min_{\omega \in \mathcal{O}_L} [\mathcal{O}_L : \mathfrak{A}_{L/K}\, \omega], \] which implies that $\mathfrak{B}$ coincides with $\mathfrak{A}_{L/K}$ and that $\mathcal{O}_L$ is free over $\mathfrak{A}_{L/K}$. The last part of the statement follows from the analogous description of the elements that achieve the minimal index in Proposition \ref{prop:aeq0}. \end{proof} \begin{remark}\label{rmk:greither} In the setting of Proposition~\ref{prop:aeq0} the associated order is maximal: this is well-known, but it can also be deduced from our results. In fact, $m(L/K)$ is equal to the index of $\mathcal{O}_K[G]$ in the maximal order $\mathfrak{M}$ (see Equation \eqref{formula-max-abeliano}); on the other hand, since $\mathcal{O}_L$ is free over the associated order by Corollary \ref{cor:AEqZeroOLIsAFree}, Proposition~\ref{productindex} shows that $m(L/K)=[\mathfrak{A}_{L/K}:\mathcal{O}_K[G]]$, so $\mathfrak{A}_{L/K}=\mathfrak{M}$. Proposition \ref{productindex} also shows that the elements realising the minimal index $m(L/K)$ are obtained as a fixed generator times a unit of $\mathfrak{A}_{L/K}=\mathfrak{M}$. More precisely we have $\mathfrak{M} \cong \mathcal{O}_K^p$, with the isomorphism given by a basis of orthogonal idempotents, and the action of $(u_0,\ldots,u_{p-1}) \in (\mathcal{O}_K^\times)^p$ on a generator $\alpha_0=c_0+c_1\pi_L+\cdots + c_{p-1}\pi_L^{p-1}$ sends it to $\sum_{i=0}^{p-1} u_i c_i \pi_L^i$, thus recovering in another way the description of all minimal generators given in Proposition \ref{prop:aeq0}. Finally, $\omega_1$ and $\omega_2$ generate the same $\mathcal{O}_K[G]$-module if and only if $\omega_1/\omega_2$ is a unit of $\mathcal{O}_K[G]$.
As a consequence, the set of Galois modules $M_\omega := \mathcal{O}_K[G] \omega$ realising the minimal index is potentially very large, because $\mathfrak{M}^*$ is much bigger than $\mathcal{O}_K[G]^*$ in general: indeed, the quotient $\mathfrak{M}^*/\mathcal{O}_K[G]^*$ is finite, but (usually) nontrivial, since the torsion subgroup of $\mathfrak{M}^*$ is isomorphic to $(\mathcal{O}_K^*)_{\operatorname{tors}}^p $ while $(\mathcal{O}_K[G]^*)_{\operatorname{tors}}$ is isomorphic to $(\mathcal{O}_K^*)_{\operatorname{tors}}\times G$ (see \cite[Corollary 2.2]{MR2138721}). \end{remark} In all that follows it will be useful to work with a special element $f$ of the group ring, which we now define and whose properties we describe next. \begin{definition} We let $f := \sigma-1 \in \mathcal{O}_K[G]$. \end{definition} The following proposition summarises the key properties of the action of $f$. \begin{proposition}\label{prop:PropertiesOff} Suppose $L/K$ is totally ramified. The following hold: \begin{enumerate} \item the powers $1, f, f^2, \ldots, f^{p-1}$ of $f$ form an $\mathcal{O}_K$-basis of $\mathcal{O}_K[G]$; \item the equality $ f^p = - \sum_{j=1}^{p-1} {p \choose j} f^j $ holds. \end{enumerate} Suppose in addition $a \neq 0$. Then: \begin{enumerate} \setcounter{enumi}{2} \item $v_L(f^i \pi_L^a) = a+ it $ for $i=0, \ldots, p-1$; \item the element $\pi_L^a$ generates a normal basis for $L/K$. Explicitly, $\{f^i \pi_L^a\}_{i=0,\ldots,p-1}$ is a $K$-basis of $L$; \item we have $v_L(f^p \pi_L^a) = ep+t+a$; \item for every $x \in \mathcal{O}_L$ we have $v_L(f x) \geq v_L(x) + t$. \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate}[wide, labelwidth=!, labelindent=0pt] \item By definition, an $\mathcal{O}_K$-basis of $\mathcal{O}_K[G]$ is given by the powers of $\sigma$.
The claim then follows because the transition matrix between powers of $\sigma$ and powers of $f$ is unitriangular (that is, upper-triangular with all diagonal entries equal to $1$), hence invertible over $\mathcal{O}_K$. \item By definition we have $\sigma^p=1$ in $\mathcal{O}_K[G]$, hence we obtain \[ 0 = \sigma^p-1=(1+f)^p-1 = \sum_{j=1}^{p-1} {p \choose j} f^j + f^p \Rightarrow f^p = - \sum_{j=1}^{p-1} {p \choose j} f^j. \] \item We begin by noticing that $t \geq 1$: indeed $t=-1$ would imply that $L/K$ is unramified, and $t=0$ would give $a=0$. By \cite[IV.2, Proposition 5]{MR554237}, since $\sigma$ belongs to $G_t$ but not to $G_{t+1}$ we have \[ \frac{\sigma(\pi_L)}{\pi_L} = 1 + \pi_L^{t} u \] for some $u \in \mathcal{O}_L^\times$. Raising both sides to the $j$-th power, for any $j$ prime to $p$, gives \[ \frac{\sigma(\pi_L^j)}{\pi_L^j} = 1 + \pi_L^{t} u_j \] where $u_j\in \mathcal{O}_L^\times$ since $(j,p)=1$. Rearranging the previous equality gives \[ \sigma(\pi_L^j)-\pi_L^j = \pi_L^{j+t} u_j, \] so we obtain $v_L(f \pi_L^j)=j+t$, provided that $(j,p)=1$. We now show the claim by induction on $0 \leq i \leq p-1$, the base case $i=0$ being trivial. We have \[ f^{i+1}\pi_L^a = f \left( f^i \pi_L^a \right) = f\left( \pi_L^{a+it} v_i \right) \] for some $v_i \in \mathcal{O}_L^\times$. Notice that $i<p-1$ (otherwise we would not need to prove anything for $i+1$), so $a+it \equiv a+ia \equiv a(i+1) \pmod{p}$ is nonzero. 
Using that $\sigma(v_i) \equiv v_i \pmod{\pi_L^{t+1}}$ (since $\sigma \in G_t$) we obtain $\sigma(v_i)=v_i+\gamma \pi_L^{t+1}$ for some $\gamma \in \mathcal{O}_L$, so we can write \[ \begin{aligned} f(\pi_L^{a+it} v_i) & = \sigma( \pi_L^{a+it} ) \sigma( v_i ) - \pi_L^{a+it} v_i \\ & = (\pi_L^{a+it} + \pi_L^{a+(i+1)t} u_{a+it} ) (v_i + \pi_L^{t+1} \gamma) - \pi_L^{a+it} v_i \\ & = \pi_L^{a+(i+1)t} u_{a+it}v_i +\pi_L^{a+(i+1)t+1} \gamma + \pi_L^{a+(i+2)t+1} u_{a+it} \gamma. \end{aligned} \] Since $ u_{a+it}$ and $ v_i$ are units, the last expression contains exactly one summand of minimal valuation, namely $\pi_L^{a+(i+1)t} u_{a+it}v_i$, so the valuation of $f^{i+1}\pi_L^a$ is $a+(i+1)t$, as desired. \item By part (3), the $L$-valuations of the elements $\{f^i \pi_L^a\}_{i=0,\ldots,p-1}$ are all distinct modulo $p$. The claim then follows from Lemma \ref{lemma:DifferentValuations}. \item Follows from (2) and (3) upon noticing that $v_L\left( {p \choose j} \right)=v_L(p)=ep$ for all $j=1,\ldots,p-1$. \item By (4) we may write every element of $\mathcal{O}_L$ as $\sum_{i=0}^{p-1} c_i f^i \pi_L^a$ with $c_i \in K$. The claim then follows combining (3), (5) and Proposition \ref{prop:InequalitiesOneAndt}: together they show that we have $v_L (f(c_i f^i \pi_L^a)) \geq v_L (c_i f^i \pi_L^a)+t$ for all $i$. \end{enumerate} \end{proof} The following quantity will be important in all that follows. \begin{definition}\label{def:nui} We set $\nu_i = \left\lfloor \frac{it+a}{p} \right\rfloor$. \end{definition} In terms of this notation, we have very explicit descriptions of the ring of integers of $L$. \begin{theorem}\label{thm:StructureOfStuff} Assume $a \neq 0$. The set $\{\pi_K^{-\nu_i} f^i \pi_L^a\}_{i=0,\ldots,p-1}$ is an $\mathcal{O}_K$-basis of $\mathcal{O}_L$.
\end{theorem} \begin{proof} The $L$-valuation of $\pi_K^{-\nu_i} f^i\pi_L^a$ is $v_L(f^i\pi_L^a)-\nu_i p$, which by Proposition \ref{prop:PropertiesOff} (3) and the definition of $\nu_i$ is equal to $a+it - p \left\lfloor \frac{it+a}{p} \right\rfloor$. This is nothing but the remainder in the division of $a+it$ by $p$, so it is a non-negative integer in the interval $[0,p-1]$. Furthermore, it is congruent modulo $p$ to $(i+1)a$, so (since $a \not\equiv 0 \pmod{p}$) the $L$-valuations of the $p$ elements $\pi_K^{-\nu_i} f^i\pi_L^a$ for $i=0,\ldots,p-1$ are precisely $\{0,\ldots,p-1\}$ in some order. The claim follows from Lemma \ref{lemma:DifferentValuations}. \end{proof} We conclude this section of preliminaries with a technical lemma that we will need several times. \begin{lemma}\label{lemma:InequalityOne} The inequality $e \geq \nu_{p-1} \geq \nu_{s}$ holds for all $s=0,\ldots,p-1$. Moreover, $e > \nu_s$ holds for all $s=0,\ldots,p-2$. \end{lemma} \begin{proof} The sequence $s \mapsto \nu_s$ is non-decreasing in $s$, so it suffices to prove $e \geq \nu_{p-1}$ and $e > \nu_{p-2}$. By Remark \ref{rmk:EGeqA} we have $e \geq (p-1)t_0 + a$. Now simply observe that $\nu_{p-1} = \left\lfloor \frac{a+(p-1)t}{p} \right\rfloor = \left\lfloor \frac{a+(p-1)pt_0 + (p-1)a}{p} \right\rfloor = a+(p-1)t_0 \leq e$, and that \[ \nu_{p-2} = \left\lfloor \frac{a+(p-2)t}{p} \right\rfloor \leq \frac{a+(p-2)t}{p}=(p-2)t_0 + \frac{(p-1)a}{p} < a+(p-1)t_0. \] \end{proof} \subsection{The minimal index for cyclic extensions of degree $p$} Our purpose in this section is to prove the following result, which -- combined with Proposition \ref{prop:aeq0} -- will give a proof of Theorem \ref{thm:IntroCyclicDegreep}. \begin{theorem}\label{thm:FormulaCyclicExtensionsDegreeP} Suppose $a \neq 0$. Let $\nu_i$ be as in Definition \ref{def:nui} and let $\displaystyle \mu := \min_{0\leq i\leq p-1}(i e-(p-1)\nu_i). $ Then $m(L/K)=p^{f_K(\mu + \sum_{i=0}^{p-1} \nu_i)}$.
\end{theorem} We identify the elements of $L$ with the vectors of their coordinates with respect to the $K$-basis of $L$ given by $\{f^i \pi_L^a\}_{i=0,\ldots,p-1}$. We let $v:=f^p \pi_L^a= - \sum_{i=1}^{p-1} {p \choose i} f^i \pi_L^a$ and, with the identification above, we have $v=\begin{pmatrix} 0 \\ -{p \choose 1} \\ -{p \choose 2} \\ \vdots \\ -{p \choose p-1} \end{pmatrix}$. Also denote by $\lambda_i : L \to K$ the dual basis of $\{f^i \pi_L^a\}_{i=0,\ldots,p-1}$, so that every $\beta \in L$ can be written as $\beta = \sum_{i=0}^{p-1} \lambda_i(\beta) f^i \pi_L^a$. \begin{lemma}\label{lemma:fiv} The following hold for all $j =0,\ldots,p-1$: \begin{enumerate} \item $\lambda_0 (f^jv)=0$; \item $v_K(\lambda_i (f^jv)) \geq 2e$ for $i=1,\ldots,j$; \item $v_K(\lambda_i (f^jv))=e$ for $i=j+1,\ldots,p-1$. \end{enumerate} \end{lemma} \begin{proof} For $j=0$ the statement follows from the well-known fact that $v_p \left({p \choose i} \right)=1$ for all $i=1,\ldots,p-1$. The general case then follows by an immediate induction. \end{proof} We now compare several $\mathcal{O}_K$-lattices in $L$, namely $\mathcal{O}_L$, $\mathcal{O}_K[G] \pi_L^a$, and the lattice $\mathcal{O}_K[G] \beta$ generated by a normal basis generator $\beta \in \mathcal{O}_L$. To each of these lattices $\Lambda$ we associate a matrix whose $i$-th column is the vector of coordinates of the $i$-th generator of $\Lambda$ in the basis $\{f^i\pi_L^a\}_{i=0,\ldots,p-1}$. The theory of Section \ref{sect:Preliminaries} will then allow us to describe the index $[\mathcal{O}_L : \mathcal{O}_K[G] \beta]$ in terms of the determinants of these matrices. 
By Theorem \ref{thm:StructureOfStuff} we have $ \mathcal{O}_L=\langle 1,\pi_K^{-\nu_1}f,...,\pi_K^{-\nu_{p-1}}f^{p-1}\rangle_{\mathcal{O}_K}\pi_L^a, $ so the matrix of the lattice $\mathcal{O}_L$ is simply \begin{equation}\label{eq:TransitionMatrixOL} B := \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ 0 & \pi_K^{-\nu_1} & 0 & \cdots & 0 \\ 0 & 0 & \pi_K^{-\nu_2} & \cdots & 0 \\ \vdots & \vdots & \vdots & \cdots & \vdots \\ 0 & 0 & 0 & \cdots & \pi_K^{-\nu_{p-1}} \end{pmatrix}, \end{equation} while the lattice $\mathcal{O}_K[G]\pi_L^a=\langle 1,f,...,f^{p-1}\rangle_{\mathcal{O}_K}\pi_L^a$ corresponds to the $p \times p$ identity matrix. Now let $\beta\in \mathcal{O}_L$ be a normal basis generator of the extension $L/K$. To write the matrix associated to the lattice $\langle 1,f,...,f^{p-1}\rangle_{\mathcal{O}_K}\beta$ we need to compute the coordinates of $f^j \beta$ in the basis $\{f^i \pi_L^a\}_{i=0,\ldots,p-1}$. Writing $ \beta = \begin{pmatrix} c_0 \\ c_1 \\ \vdots \\ c_{p-1} \end{pmatrix} $ we have $ f\beta = \begin{pmatrix} 0 \\ c_0 \\ c_1 \\ \vdots \\ c_{p-2} \end{pmatrix} + c_{p-1} v, $ and iterating $f$ we find \begin{equation}\label{coefficients} f^j \beta = \sum_{k=0}^{p-1-j} c_k f^{k+j} \pi_L^a + \sum_{k=0}^{j-1} c_{p-1-k} f^{j-1-k} v. \end{equation} Our goal is to estimate the determinant of the matrix corresponding to the lattice $\mathcal{O}_K[G]\beta$. To do this we estimate the valuation of the coefficients of each $f^j\beta$, which are the entries of the aforementioned matrix, looking at the two separate terms of \eqref{coefficients}. We start with the following observation. \begin{remark}\label{rmk:LowerBoundValuationsCi} Since $\mathcal{O}_L$ is free over $\mathcal{O}_K$ with basis $\{\pi_K^{-\nu_i} f^i \pi_L^a\}_{i=0,\ldots,p-1}$ we have $v_K(c_i) \geq -\nu_i$ for all $i=0,\ldots,p-1$. 
\end{remark} \begin{lemma}\label{lemma:InequalityOnValuations1} The following inequality holds for all $j=1,\ldots,p-1$ and $i=1,\ldots,j$: \[ v_K \left( \lambda_i\left( \sum_{k=0}^{j-1} c_{p-1-k} f^{j-1-k} v \right) \right) \geq e - \nu_{p-1-j+i}. \] \end{lemma} \begin{proof} As $\lambda_i$ is linear, the quantity we want to estimate is at least \[ \min_{k=0,\ldots,j-1} v_K( \lambda_i( c_{p-1-k} f^{j-1-k} v ) )= \min_{k=0,\ldots,j-1} \left( v_K(c_{p-1-k}) + v_K( \lambda_i( f^{j-1-k}v ) ) \right). \] Now if $i \leq j-1-k$ we have $v_K(\lambda_i(f^{j-1-k} v)) \geq 2e$ by Lemma \ref{lemma:fiv}, and by the same lemma for all $i$ we have $v_K(\lambda_i(f^{j-1-k} v)) \geq e$, so the previous expression is bounded below by \[ \min\left\{ \min_{k=0,\ldots,j-1-i} (2e + v_K(c_{p-1-k}) ), \min_{k=j-i,\ldots,j-1} (e + v_K(c_{p-1-k}) ) \right\}. \] Now recall from Remark \ref{rmk:LowerBoundValuationsCi} that $v_K(c_{p-1-k}) \geq -\nu_{p-1-k}$, hence we get the lower bound \[ \min \left\{ \min_{k=0,\ldots,j-1-i} (2e -\nu_{p-1-k} ), \min_{k=j-i,\ldots,j-1} (e - \nu_{p-1-k} )\right\}. \] Since the sequence $\nu_s$ is non-decreasing in $s$, each of the two internal minima is attained when $p-1-k$ is as large as possible, that is, at the minimal admissible value of $k$ in each case. Thus we are reduced to studying \[ \min \{ 2e -\nu_{p-1}, e - \nu_{p-1-j+i} \}; \] Lemma \ref{lemma:InequalityOne} implies $2e -\nu_{p-1} \geq e \geq e - \nu_{p-1-j+i}$, so the minimum above is in fact equal to $e - \nu_{p-1-j+i}$, and we are done. \end{proof} The following is a variant of the previous lemma for $i=j+1,\ldots,p-1$.
\begin{lemma}\label{lemma:InequalityOnValuations2} The following inequality holds for all $j=1,\ldots,p-1$ and $i=j+1,\ldots,p-1$: \[ v_K \left( \lambda_i\left( \sum_{k=0}^{j-1} c_{p-1-k} f^{j-1-k} v \right) \right) \geq e-\nu_{p-1}. \] \end{lemma} \begin{proof} This is proved in the same way as the previous lemma, but is in fact easier, because for every index $k$ we have $v_K(c_{p-1-k}) \geq -\nu_{p-1-k} \geq -\nu_{p-1}$ and $v_K(\lambda_i (f^{j-1-k} v)) \geq v_K(p) = e$. \end{proof} Combining Equation \eqref{coefficients} (which implies in particular $\lambda_0(f^j \beta)=0$ for $j > 0$) with Lemmas \ref{lemma:InequalityOne}, \ref{lemma:InequalityOnValuations1}, and \ref{lemma:InequalityOnValuations2} allows us to conclude that the coefficients $a_{ij} := \lambda_i(f^j \beta)$ satisfy \[ v_K(a_{ij}) \geq \begin{cases} - \nu_{i-j} & \text{for } i \geq j \\ e - \nu_{p-1-j+i} & \text{for } i < j. \end{cases} \] Notice moreover that $a_{0,j}=0$ for $j=1,\ldots,p-1$. Let $A=(a_{ij})_{i,j=0,\ldots,p-1}$ be the matrix corresponding to the lattice $\mathcal{O}_K[G] \beta$ in the basis $f^i\pi_L^a$. Recalling that the subgroup index is the norm of the module index, and using relation \eqref{eq:ind-det}, we get \begin{equation}\label{eq:DeterminantsToIndices} \begin{split} [\mathcal{O}_L : \mathcal{O}_K[G] \beta] &= [\mathcal{O}_L : \mathcal{O}_K[G] \pi_L^a] \, [\mathcal{O}_K[G] \pi_L^a:\mathcal{O}_K[G] \beta] \\ &=N\left( \det(B^{-1}) \right)N\left(\det(A) \right)=N( \pi_K^{\sum_{i=0}^{p-1} \nu_i} ) N(\det(A)). \end{split} \end{equation} It follows that the index $[\mathcal{O}_L : \mathcal{O}_K[G] \beta]$ is minimal if and only if the valuation of $\det(A)$ is minimal. We are now in a position to prove our exact formula for $m(L/K)$ in the case of cyclic extensions of order $p$. Let \[ \mu := \min_{0\leq i\leq p-1}(i e-(p-1)\nu_i). \] We will prove that $v_K(\det(A)) \geq \mu$, and that for suitable $\beta$ equality holds.
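Before entering the proof, it may help to see the numbers in a small case. The following is a hypothetical illustration of ours (we assume an extension with the stated invariants exists and simply evaluate the quantities involved).

```latex
% Hypothetical data: p = 5, e = 3 and ramification jump t = 3,
% so t_0 = 0 and a = 3 (note that t <= ep/(p-1) = 15/4 is satisfied).
% Then nu_i = floor((3i+3)/5) gives (nu_0,...,nu_4) = (0,1,1,2,3), and
\[
\mu=\min_{0\leq i\leq 4}\bigl(3i-4\nu_i\bigr)=\min\{0,\,-1,\,2,\,1,\,0\}=-1,
\]
% attained at i = 1, so Theorem \ref{thm:FormulaCyclicExtensionsDegreeP} predicts
\[
m(L/K)=p^{f_K(\mu+\sum_{i=0}^{4}\nu_i)}=5^{f_K(-1+7)}=5^{6f_K}.
\]
```

In particular, $\mu$ can be strictly negative, so the exponent $\mu+\sum_i\nu_i$ genuinely mixes the two contributions.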
In reading the next few paragraphs, the reader may find it useful to bear in mind the following description of $A$, which is equivalent to the inequalities discussed above: we have \begin{equation}\label{eq:AFleshAndBones} \begin{split} A&=\begin{pmatrix} d_0 & 0 & 0 & \cdots & 0 \\ \pi_K^{-\nu_1}d_1 & d_0 & 0 & \cdots & 0 \\ \pi_K^{-\nu_2}d_2 & \pi_K^{-\nu_1}d_1 & d_0 & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ \pi_K^{-\nu_{p-1}}d_{p-1} & \pi_K^{-\nu_{p-2}}d_{p-2} & \pi_K^{-\nu_{p-3}}d_{p-3} & \cdots & d_0 \end{pmatrix}\\ &+\begin{pmatrix} 0 & 0 & 0 & 0 & \cdots & 0 \\ 0 & \pi_K^{e-\nu_{p-1}} \gamma_{1,1} & \pi_K^{e-\nu_{p-2}} \gamma_{1,2} & \pi_K^{e-\nu_{p-3}} \gamma_{1,3} & \cdots & \pi_K^{e-\nu_{1}} \gamma_{1,p-1} \\ 0 & \pi_K^{e-\nu_{p-1}} \gamma_{2,1} & \pi_K^{e-\nu_{p-1}} \gamma_{2,2} & \pi_K^{e-\nu_{p-2}} \gamma_{2,3} & \cdots & \pi_K^{e-\nu_{2}} \gamma_{2,p-1} \\ 0 & \pi_K^{e-\nu_{p-1}} \gamma_{3,1} & \pi_K^{e-\nu_{p-1}} \gamma_{3,2} & \pi_K^{e-\nu_{p-1}} \gamma_{3,3} & \cdots & \pi_K^{e-\nu_{3}} \gamma_{3,p-1} \\ \vdots & \vdots & \vdots& \vdots & \cdots & \vdots \\ 0 & \pi_K^{e-\nu_{p-1}} \gamma_{p-1,1} & \pi_K^{e-\nu_{p-1}} \gamma_{p-1,2} & \pi_K^{e-\nu_{p-1}} \gamma_{p-1,3} & \cdots & \pi_K^{e-\nu_{p-1}} \gamma_{p-1,p-1} \end{pmatrix} \end{split} \end{equation} where we have written $c_i=d_i \pi_K^{-\nu_i}$ with $d_i \in \mathcal{O}_K$ and where every $\gamma_{i,j}$ lies in $\mathcal{O}_K$. The two terms of $A$ correspond to the two terms of \eqref{coefficients}. Let $\tilde{A}$ denote the $(p-1) \times (p-1)$ submatrix $(a_{i,j})_{i,j = 1,\ldots,p-1}$, that is, the bottom-right block of $A$. Since the only non-zero coefficient in the first row of $A$ is $a_{0,0}=c_0=d_0$, the Laplace expansion of $\det A$ gives $\det(A) = d_0 \det(\tilde{A})$.
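For the smallest case $p=3$ the strategy can be carried out by hand (an illustration of ours, with the notation of \eqref{eq:AFleshAndBones}): the bottom-right block reads

```latex
% Editorial illustration for p = 3: the 2x2 block of eq:AFleshAndBones is
\[
\tilde{A}=\begin{pmatrix}
d_0+\pi_K^{e-\nu_2}\gamma_{1,1} & \pi_K^{e-\nu_1}\gamma_{1,2}\\[2pt]
\pi_K^{-\nu_1}d_1+\pi_K^{e-\nu_2}\gamma_{2,1} & d_0+\pi_K^{e-\nu_2}\gamma_{2,2}
\end{pmatrix}.
\]
```

In the Leibniz expansion the diagonal product has valuation at least $0$, while the off-diagonal product has valuation at least $(e-\nu_1)+(-\nu_1)=e-2\nu_1$; hence $v_K(\det \tilde{A})\geq \min(0,\,e-2\nu_1)$, which is exactly $\mu$ when $p=3$, since $2e-2\nu_2\geq 0$ by Lemma \ref{lemma:InequalityOne}.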
Define now \[ b_{i,j} = \begin{cases} \pi_K^{e - \nu_{(p-1)+(i-j)}}, \text{ if } i<j \\ \pi_K^{-\nu_{i-j}}, \text{ if } i \geq j \end{cases} \] for $i,j = 1,\ldots,p-1$: by what we have already shown, $v_K(a_{i,j}) \geq v_K(b_{i,j})$ for all $i,j=1,\ldots,p-1$. Consider the Leibniz formula for $\det(\tilde{A})$: \[ \det(\tilde{A}) = \sum_{\sigma \in S_{p-1}} \operatorname{sgn}(\sigma) \prod_{i=1}^{p-1} a_{i, \sigma(i)}. \] We will show that $\det(\tilde{A})$ (hence also $\det(A)=d_0 \det(\tilde{A})$) has valuation at least $\mu$ by proving that $v_K\left( \prod_{i=1}^{p-1} a_{i, \sigma(i)} \right) \geq \mu$ for all $\sigma$. Since $v_K(a_{i,\sigma(i)}) \geq v_K(b_{i,\sigma(i)})$, it is enough to prove that the inequality $v_K\left( \prod_{i=1}^{p-1} b_{i, \sigma(i)} \right) \geq \mu$ holds for every $\sigma$. Given the definition of the coefficients $b_{i,j}$ we have that in general \[ \prod_{i=1}^{p-1} b_{i,\sigma(i)}=\pi_K^{ke-\sum_{j\in S}\nu_j}, \] where $k=k(\sigma)$ is a non-negative integer and $S=S(\sigma)$ is a multiset of indices of cardinality $p-1$ (taking into account the multiplicities). \begin{lemma}\label{lemma:RelationKS} If for some $\sigma$ we have $ \prod_{i=1}^{p-1} b_{i,\sigma(i)}=\pi_K^{ke-\sum_{j\in S}\nu_j}$, then $\sum_{j\in S}j=(p-1)k$. \end{lemma} \begin{proof} By the definition of $b_{i,j}$ we have \[ \begin{aligned} v_K \left( \prod_{i} b_{i,\sigma(i)} \right) & = \sum_i \begin{cases} e - \nu_{(p-1)+(i-\sigma(i))}, \text{ if } i<\sigma(i) \\ -\nu_{i - \sigma(i)}, \text{ if } i \geq \sigma(i) \end{cases} \\ & = k e - \sum_{j \in S} \nu_j, \end{aligned} \] for $k=\#\{ i \in \{1,\ldots,p-1\} : i < \sigma(i) \}$ and $S$ the multiset $ \left\{s_i \bigm\vert i \in \{1,\ldots,p-1\} \right\}, $ where \[ s_i= \begin{cases} (p-1)+(i-\sigma(i)) \text{ if } i<\sigma(i) \\ i-\sigma(i) \text{ if } i \geq \sigma(i). 
\end{cases} \] In particular, \[ \begin{aligned} \sum_{s \in S} s & = \sum_{i=1}^{p-1} s_i \\ & = \sum_{i=1}^{p-1} \begin{cases} (p-1)+(i-\sigma(i)) \text{ if } i<\sigma(i) \\ i-\sigma(i) \text{ if } i \geq \sigma(i) \end{cases} \\ & =k(p-1) + \sum_{i=1}^{p-1} \begin{cases} i-\sigma(i) \text{ if } i<\sigma(i) \\ i-\sigma(i) \text{ if } i \geq \sigma(i) \end{cases} \\ & = k(p-1) + \sum_{i=1}^{p-1} i - \sum_{i=1}^{p-1} \sigma(i) \\ & = k(p-1). \end{aligned} \] \end{proof} Now consider the function $g:\{0,...,p-1\} \rightarrow \mathbb{Q}$ given by $j \mapsto \frac{j}{p-1}e-\nu_j$ and let $i_m$ be an index that realises the minimum of $g$. Notice that $(p-1)g(i_m)$ realises the minimum of $j \mapsto je - (p-1)\nu_j$, that is, $(p-1)g(i_m)=\mu$. Hence for every $\sigma\in S_{p-1}$ we have \[ \begin{aligned} v_K\left(\prod_ib_{i,\sigma(i)}\right) & = ke-\sum_{j\in S}\nu_j \\ & =\frac{\sum_{j\in S}j}{p-1}e-\sum_{j\in S}\nu_j & (\text{Lemma \ref{lemma:RelationKS}}) \\ & =\sum_{j\in S}g(j)\geq (p-1)g(i_m)=\mu \end{aligned} \] as claimed. Finally, we show that for suitable $\beta\in \mathcal{O}_L$ we do in fact have $v_K(\det A)=\mu$. If the minimum of the function $g(j)$ is $0$ we can take $\beta=\pi_L^a$ (in this case $\mu=0$ and the matrix $A$ is simply the identity), otherwise we claim that $\beta=\pi_L^a+\pi_K^{-\nu_{i_m}}f^{i_m}\pi_L^a$ does the job (in terms of Equation \eqref{eq:AFleshAndBones} we are choosing $d_0=d_{i_m}=1$ and $d_i =0 $ for $i \neq 0, i_m$). Notice that we may assume $i_m>0$, for otherwise the minimum would be $0$. Similarly, we may assume $i_m < p-1$, because $g(p-1) \geq 0$ by Lemma \ref{lemma:InequalityOne}.
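To make the choice of $\beta$ concrete, here is a hypothetical example of ours (assuming an extension with these invariants exists): take $p=5$, $e=3$ and $t=3$, so that $a=3$, $t_0=0$ and $\nu_j=\lfloor(3j+3)/5\rfloor$ gives $(\nu_0,\ldots,\nu_4)=(0,1,1,2,3)$.

```latex
% Editorial illustration with hypothetical data p = 5, e = 3, t = 3:
\[
g(j)=\tfrac{3j}{4}-\nu_j:\qquad
g(0)=0,\quad g(1)=-\tfrac14,\quad g(2)=\tfrac12,\quad g(3)=\tfrac14,\quad g(4)=0,
\]
% so the minimum is attained at i_m = 1, mu = (p-1) g(i_m) = -1, and the
% generator above specialises to
\[
\beta=\pi_L^{3}+\pi_K^{-1}\,f\,\pi_L^{3}.
\]
```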
To show the claim, we begin by noticing that specialising Equation \eqref{coefficients} to our choice of $\beta$ we obtain \begin{equation}\label{eq:ValuationsSpecialA} f^k \beta = f^k \pi_L^a + \begin{cases} \pi_K^{-\nu_{i_m}} f^{k+i_m} \pi_L^a, \text{ if } k+i_m \leq p-1 \\ \pi_K^{-\nu_{i_m}} f^{k+i_m-p} v, \text{ if } k+i_m \geq p. \end{cases} \end{equation} We now show that in the Leibniz development of $\det(A)$ there is precisely one summand of valuation $\mu$, while all the others have strictly larger valuation, whence $v_K(\det A)=\mu$ will follow. Observe that with our choice we have $d_0=1$, so $\det(A)=\det(\tilde{A})$; we will work with this latter determinant and its Leibniz expansion. Let $\tau : \{1,\ldots,p-1\} \to \{1,\ldots,p-1\}$ be defined by \[ \tau(j) = \begin{cases} i_m+j & \text{ if } j+i_m \leq p-1 \\ i_m+j+1-p & \text{ if } j+i_m \geq p. \end{cases} \] One checks that the function $\tau$ is a permutation (indeed, it is easily seen to be injective). The Leibniz term $ \operatorname{sgn} \tau \cdot \prod_{j=1}^{p-1} a_{\tau(j),j} $ in the development of $\det(\tilde{A})$ has valuation \[ \sum_{j=1}^{p-1} v_K(a_{\tau(j),j}) = \sum_{j=1}^{p-1-i_m} (-\nu_{i_m}) + \sum_{j=p-i_m}^{p-1} (e-\nu_{i_m})=(p-1)g(i_m)=\mu. \] We now show that all other terms in the Leibniz development have strictly greater valuation. Since we already know that $v_K \left( \prod_{i=1}^{p-1} a_{i,\sigma(i)} \right) \geq \mu$, it suffices to prove the following. \begin{proposition}\label{prop:SingleLeibnizTermWithMinimalValuation} Let $\sigma$ be a permutation such that $v_K \left( \prod_{i=1}^{p-1} a_{i,\sigma(i)} \right) = \mu$. Then $\sigma = \tau^{-1}$.
\end{proposition} By Equation \eqref{eq:ValuationsSpecialA} and Lemma \ref{lemma:fiv} we know that \begin{equation}\label{eq:ExplicitValuations} v_K(a_{i, j}) \begin{cases} = 0, & \text{ if } i=j \\ = \infty, & \text{ if } j \leq p-i_m-1 \text{ and } i \neq j \text{ and } i \neq j+i_m \\ = -\nu_{i_m}, & \text{ if } j \leq p-i_m-1 \text{ and } i=j+i_m \\ = e- \nu_{i_m}, & \text{ if } j \geq p-i_m \text{ and } i \neq j \text{ and } i \geq j+i_m-(p-1) \\ \geq 2e-\nu_{i_m}, & \text{ if } j \geq p-i_m \text{ and } i \leq j+i_m-p \end{cases} \end{equation} We set for simplicity $\nu := \nu_{i_m}$. We shall need the following observation multiple times: \begin{lemma} We have $e-\nu > 0$. \end{lemma} \begin{proof} This follows from Lemma \ref{lemma:InequalityOne} and the fact that, as already observed, $i_m$ cannot be $p-1$. \end{proof} By what we have already shown, the inequalities \[ v_K\left( a_{i, \sigma(i)} \right) \geq v_K\left( b_{i, \sigma(i)} \right), \quad \sum_{i=1}^{p-1} v_K\left( b_{i, \sigma(i)} \right) \geq \mu \] hold for every permutation $\sigma$. If $\sigma$ is such that $v_K\left(\prod_{i=1}^{p-1} a_{i, \sigma(i)} \right)=\mu$ holds, then equality must hold in both inequalities above. In particular, we must have $v_K(a_{i,\sigma(i)}) = v_K(b_{i,\sigma(i)})$ for all $i$. We now remark that $v_K(b_{i,j})$ is either of the form $-\nu_{i-j} \leq 0 < 2e-\nu$ or of the form $e-\nu_{(p-1)+(i-j)} < 2e-\nu$. It follows that none of the terms $a_{i, \sigma(i)}$ can have valuation $2e-\nu$ (or more), for otherwise its valuation would be strictly greater than the valuation of the corresponding term $b_{i,\sigma(i)}$. Hence, by \eqref{eq:ExplicitValuations}, for each $i$ we have $v_K(a_{i,\sigma(i)})=v_K(b_{i,\sigma(i)}) \in \{0, -\nu, e-\nu\}$. Next suppose that $v_K(a_{i,\sigma(i)}) = e-\nu > 0$.
Then also $v_K(b_{i,\sigma(i)}) > 0$, so $b_{i,\sigma(i)}$ must be of the form $\pi_K^{e-\nu_{(p-1) + (i-\sigma(i))}}$, and in order for equality to hold we must have $\nu_{(p-1) + (i-\sigma(i))} = \nu$. Finally, if $a_{i,\sigma(i)}$ has valuation $0=\nu_0$ or $-\nu$, then so does the corresponding $b_{i,\sigma(i)}$. It follows that \[ v_K\left( \prod_{i=1}^{p-1} b_{i,\sigma(i)} \right) = ke - \sum_{j \in S} \nu_j \] for some multiset $S$ with the property that every $\nu_j$ is either equal to $\nu$ or to $0$, and (by Lemma \ref{lemma:RelationKS}) $(p-1)k = \sum_{j \in S} j$. Notice that we are not claiming that every $j$ is either $0$ or $i_m$, but just that $\nu_j \in \{0, \nu\}$ for every $j \in S$. We may then continue our chain of inequalities as follows: \[ \begin{aligned} \mu & = v_K \left( \prod_{i=1}^{p-1} a_{i,\sigma(i)} \right) = v_K \left( \prod_{i=1}^{p-1} b_{i,\sigma(i)} \right)\\ & = ke - \sum_{j \in S} \nu_j = \sum_{j \in S} \left( \frac{j}{p-1}e - \nu_j \right) \\ & = \sum_{j \in S} g(j) \geq (p-1) g(i_m) = \mu, \end{aligned} \] which again yields that equality must hold everywhere. In particular, the equality $g(j)=g(i_m)$ must hold for every $j \in S$, so that no $j \in S$ can have $\nu_j=0$ (indeed $g(i_m)<0$ since $\mu \neq 0$, while $\nu_j=0$ would give $g(j)=\frac{j}{p-1}e \geq 0$). By the above remarks, this implies that no valuation $v_K(b_{i,\sigma(i)})$ is equal to $0$, hence no valuation $v_K(a_{i,\sigma(i)})$ is equal to $0$ either. Since clearly no $v_K(a_{i,\sigma(i)})$ can be equal to $\infty$ (for otherwise the product $\prod_{i=1}^{p-1} a_{i,\sigma(i)}$ would vanish, hence would not have valuation $\mu$), by Equation \eqref{eq:ExplicitValuations} we have shown that $v_K(a_{i,\sigma(i)})$ is either $-\nu$ or $e-\nu$ for every $i=1,\ldots,p-1$.
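Proposition \ref{prop:SingleLeibnizTermWithMinimalValuation} can also be confirmed by exhaustive search for small parameters. The sketch below (a sanity check only) reuses the hypothetical values $p=5$, $e=7$, $i_m=1$, $\nu=2$ (so $\mu = i_m e - (p-1)\nu = -1$), builds the valuation pattern of \eqref{eq:ExplicitValuations} (using the lower bound $2e-\nu$ in the last case), and verifies that $\tau^{-1}$ is the unique permutation realising the minimal total valuation $\mu$:

```python
import math
from itertools import permutations

# Hypothetical illustrative parameters (same as before): p = 5, e = 7,
# i_m = 1, nu := nu_{i_m} = 2, so mu = i_m*e - (p-1)*nu = -1.
p, e, i_m, nu = 5, 7, 1, 2
mu = i_m * e - (p - 1) * nu

def aval(i, j):
    # Valuation pattern of a_{i,j} from eq:ExplicitValuations (1-indexed i, j).
    if i == j:
        return 0
    if j <= p - i_m - 1:
        return -nu if i == j + i_m else math.inf
    return 2 * e - nu if i <= j + i_m - p else e - nu

def tau_inv(i):
    # Inverse of the permutation tau defined in the text.
    return i - i_m if i > i_m else i - i_m - 1 + p

sums = {s: sum(aval(i, s[i - 1]) for i in range(1, p)) for s in permutations(range(1, p))}
best = min(sums.values())
minimizers = [s for s, v in sums.items() if v == best]
assert best == mu and len(minimizers) == 1
assert minimizers[0] == tuple(tau_inv(i) for i in range(1, p))
```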
Let us recapitulate which terms of $\tilde{A}$ are such that $v_K(a_{i,j})=v_K(b_{i,j})\in \{-\nu,e-\nu\}$ (these are the only entries $a_{i,\sigma(i)}$ that may be involved in a Leibniz term of minimal valuation): from \eqref{eq:ExplicitValuations} we see that if $1\leq j\leq p-1-i_m$ then we can only have $v_K(a_{\sigma^{-1}(j),j})=-\nu$ and $\sigma^{-1}(j)=j+i_m$; if $p-i_m\leq j \leq p-1$ then $v_K(a_{\sigma^{-1}(j),j})=e-\nu$ and $\sigma^{-1}(j)\geq j-p+1+i_m$ (more precisely, although this is not necessary for the proof, note that we also have $\sigma^{-1}(j)\leq j-p+1+\max\{i\geq i_m:\nu_i=\nu_{i_m}\}$). While reading the rest of the proof, the reader may find it useful to look at the following matrix, which displays, at each such position $(i,j)$, the valuation $v_K(a_{i,j})$: \[ \begin{matrix} \phantom{a} \\ \phantom{\vdots} \\ \phantom{\vdots} \\ \phantom{c} \\ \phantom{\vdots} (i_m+1)\text{-th row} \rightarrow \\ \phantom{e} \\ \phantom{\ddots} \\ \phantom{h} \end{matrix} \begin{pmatrix} & & & & e-\nu & & & \\ & & & & \vdots & e-\nu & & \\ & & & & e-\nu & \vdots & \ddots & \\ & & & & & e-\nu & & e-\nu \\ -\nu & & & & & & \ddots & \vdots \\ & -\nu & & & & & & e-\nu \\ & & \ddots & & & & & \\ & & & -\nu & & & & \\ \end{pmatrix}\begin{matrix} \phantom{a} \\ \phantom{\vdots} \\ \phantom{\vdots} \\ \leftarrow i_m\text{-th row} \\ \phantom{\ddots} \\ \phantom{e} \\ \phantom{\ddots} \\ \phantom{h} \end{matrix} \] \begin{lemma}\label{lemma:OnlyPermutation} Let $\sigma : \{1,\ldots,p-1\} \to \{1,\ldots,p-1\}$ be a permutation such that $v_K(a_{i,\sigma(i)})\in \{-\nu, e-\nu\}$ for all $i=1,\ldots,p-1$. Then $\sigma=\tau^{-1}$. \end{lemma} \begin{proof} For every $i=1,\ldots,p-1$ the element $(i,\sigma(i))$ must be among those displayed in the matrix above (call such elements \textit{admissible}).
For $j \leq p-i_m-1$, the $j$-th column contains only one admissible element, in position $j+i_m$, hence $\sigma^{-1}(j)=j+i_m$ for $j=1,\ldots,p-1-i_m$, that is, $\sigma(i)=i-i_m=\tau^{-1}(i)$ for $i=i_m+1,\ldots,p-1$. It remains to determine $\sigma(i)$ for $i \leq i_m$. There is only one admissible element on the first row, in position $(1,p-i_m)$, which forces $\sigma(1)=p-i_m$. After erasing the first $p-i_m$ columns there is only one admissible element on the second row, in position $(2,p-i_m+1)$, hence $\sigma(2)=p-i_m+1$. Continuing in this way, by induction one shows easily that $\sigma(i)=(p-i_m-1)+i=\tau^{-1}(i)$ for $i=1,\ldots,i_m$, hence $\sigma=\tau^{-1}$ as claimed. \end{proof} This finishes the proof of Proposition \ref{prop:SingleLeibnizTermWithMinimalValuation}, hence it shows that for our special choice of $\beta$ we have $v_K(\det(A))=\mu$. It now follows from Equation \eqref{eq:DeterminantsToIndices} and the previous considerations that \[ m(L/K)=N(\pi_K^{\sum_{i=0}^{p-1} \nu_i} ) \cdot N(\pi_K^{\mu})= p^{f_K(\sum_{i=0}^{p-1} \nu_i+\min_{0\leq i\leq p-1}(i e-(p-1)\nu_i))}, \] which concludes the proof of Theorem \ref{thm:FormulaCyclicExtensionsDegreeP}. \begin{remark}\label{rmk:MinimalElement} The proof shows in particular that the element $\beta = \pi_L^a$ (if $\mu=0$) or $\beta=\pi_L^a+\pi_K^{-\nu_{i_m}}f^{i_m}\pi_L^a$ (if $\mu \neq 0$) realises the minimal index. \end{remark} \section{Some classical results on cyclic extensions of degree $p$}\label{sect:BertrandiasFerton} In this section we recover two results, originally due to Bertrandias, Bertrandias and Ferton, concerning the Galois structure of the ring of integers in cyclic extensions of degree $p$: we characterise the associated order and the cases when $\mathcal{O}_L$ is free over $\mathfrak{A}_{L/K}$. The original results may be found in \cite{MR296047}, \cite{MR0374104} and \cite{MR296048}, but these papers do not contain detailed proofs. 
We remark that in \cite{MR296048} the authors also obtain a characterisation of the almost-maximally ramified extensions $L/K$ for which $\mathcal{O}_L$ is $\mathfrak{A}_{L/K}$-free. Specifically, they show that -- for $L/K$ almost-maximally ramified -- the ring of integers $\mathcal{O}_L$ is free over $\mathfrak{A}_{L/K}$ if and only if in the continued fraction expansion $\frac{t}{p} = a_0 + \frac{1}{a_1+\frac{1}{a_2+\cdots}}=[a_0; a_1,a_2,\ldots,a_N]$ one has $N \leq 4$. While it would probably be possible to also obtain this result by our methods, the proof would involve computations very similar to those of \cite{MR543208}, so we prefer not to treat this case. Our approach is based on the following corollary, itself a consequence of Proposition \ref{prop:PropertiesOff}. \begin{corollary}\label{cor:iPlusjGreaterThanP} Let $0 \leq i,j \leq p-1$ be such that $i+j \geq p$. Then \[ \pi_K^{-\nu_i} \, f^i \cdot \pi_K^{-\nu_j} f^j \pi_L^a \] lies in $\mathcal{O}_L$, and if $L/K$ is not almost-maximally ramified, then it also lies in $\pi_K \mathcal{O}_L$. \end{corollary} \begin{proof} We may write \[ \pi_K^{-\nu_i} \, f^i \cdot \pi_K^{-\nu_j} f^j \pi_L^a = \pi_K^{-\nu_i-\nu_j} f^{i+j-p} f^p \pi_L^a, \] and by Proposition \ref{prop:PropertiesOff} (5) we know $v_L(f^p \pi_L^a) = ep+t+a$. Since every further application of $f$ increases the valuation by at least $t$ (by Proposition \ref{prop:PropertiesOff} (6)), we get \[ \begin{aligned} v_L( \pi_K^{-\nu_i} \, f^i \cdot \pi_K^{-\nu_j} f^j \pi_L^a) & \geq v_L(\pi_K^{-\nu_i-\nu_j}) + (i+j-p)t +ep+t+a \\ & = (-\nu_i-\nu_j)p + (i+j-p+1)t + ep + a \end{aligned} \] Using the obvious inequality $\nu_k \leq \frac{a+kt}{p}$ and the fact that $e \geq (p-1)t_0+a=t-t_0$ (Remark \ref{rmk:EGeqA}) we obtain \[ \begin{aligned} v_L(\pi_K^{-\nu_i} \, f^i \cdot \pi_K^{-\nu_j} f^j \pi_L^a) & \geq (-a-it-a-jt) + (i+j-p+1)t + p(t-t_0) + a = 0, \end{aligned} \] so $\pi_K^{-\nu_i} \, f^i \cdot \pi_K^{-\nu_j} f^j \pi_L^a$ is integral. 
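The final simplification in the display above (that the lower bound collapses to exactly $0$ once $\nu_k \leq \frac{a+kt}{p}$, $e \geq t-t_0$ and $t = pt_0+a$ are substituted) is elementary but easy to get wrong. A quick standard-library sanity check over random integer parameters (purely illustrative):

```python
import random

def bound_value(p, t0, a, i, j):
    """The lower bound for v_L(pi_K^{-nu_i} f^i . pi_K^{-nu_j} f^j pi_L^a)
    after substituting nu_k <= (a + k*t)/p and e >= t - t0, with t = p*t0 + a."""
    t = p * t0 + a
    return (-a - i * t - a - j * t) + (i + j - p + 1) * t + p * (t - t0) + a

random.seed(0)
for _ in range(1000):
    p = random.randint(2, 50)
    t0 = random.randint(0, 20)
    a = random.randint(1, p - 1)
    i = random.randint(0, p - 1)
    j = random.randint(0, p - 1)
    assert bound_value(p, t0, a, i, j) == 0    # the bound collapses to exactly 0
```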
If $L/K$ is not almost-maximally ramified, then (again by Remark \ref{rmk:EGeqA}) we have $e \geq t-t_0+1$, which leads to $v_L(\pi_K^{-\nu_i} \, f^i \cdot \pi_K^{-\nu_j} f^j \pi_L^a) \geq p$, that is, this element lies in $\pi_K \mathcal{O}_L$. \end{proof} \subsection{Description of the associated order} \begin{theorem}\label{thm:StructureOfALK} Assume $a \neq 0$. Then the associated order $\mathfrak{A}_{L/K}$ is a free $\mathcal{O}_K$-module with basis $\pi_K^{-n_i} f^i$ for $i=0,\ldots,p-1$, where \[ n_i = \min_{0 \leq j \leq p-1-i} (\nu_{i+j} - \nu_j). \] \end{theorem} \begin{proof} Let $\lambda$ be an element of $K[G]$. As $1, f, \ldots, f^{p-1}$ is a $K$-basis of $K[G]$, we may write uniquely $\lambda = \sum_{i=0}^{p-1} c_i f^i$ with the $c_i$ in $K$. By Theorem \ref{thm:StructureOfStuff}, $\lambda\in\mathfrak{A}_{L/K}$ if and only if $\lambda(\pi_K^{-\nu_j} f^j \pi_L^a)$ is in $\mathcal{O}_L$ for every $j$. Taking $j=0$ yields $ \sum_{i=0}^{p-1} c_i f^i \pi_L^a \in \mathcal{O}_L, $ and since $v_L(c_i f^i \pi_L^a) = pv_K(c_i) + a+it$ by Proposition \ref{prop:PropertiesOff} (3), we see that the valuations of the terms in the previous sum are pairwise distinct (since they are all different modulo $p$). In particular there can be no cancellation among them, hence $c_i f^i \pi_L^a \in \mathcal{O}_L$ holds for all $i=0,\ldots,p-1$. Using again that the valuation of $f^i\pi_L^a$ is $a+it$ we obtain $v_L(c_i) \geq -a-it$, from which it follows that $v_K(c_i) \geq -\nu_i$ is a necessary condition for $\lambda= \sum_{i=0}^{p-1} c_i f^i\in\mathfrak{A}_{L/K}$. We claim that $\lambda=\sum_{i=0}^{p-1} c_i f^i$ is in $\mathfrak{A}_{L/K}$ if and only if each summand $c_i f^i$ is. One implication is clear.
On the other hand, we have that $\lambda\in \mathfrak{A}_{L/K}$ exactly when, for each $j$, $$\left(\sum_{i=0}^{p-1} c_i f^{i}\right) \pi_K^{-\nu_j}f^j \pi_L^a\in \mathcal{O}_L.$$ We rewrite this last condition as $$ \sum_{i=0}^{p-1-j} c_i \pi_K^{-\nu_j} f^{i+j}\pi_L^a +\sum_{i=p-j}^{p-1} c_i \pi_K^{-\nu_j} f^{i+j} \pi_L^a \in\mathcal{O}_L. $$ Now, all the summands $c_i \pi_K^{-\nu_j} f^{i+j} \pi_L^a$ with $i+j\ge p$, namely those of the second sum, are in $\mathcal{O}_L$ by Corollary \ref{cor:iPlusjGreaterThanP}, since $v_K(c_i) \geq -\nu_i$, so the condition reduces to \[ \sum_{i=0}^{p-1-j} c_i \pi_K^{-\nu_j} f^{i+j} \pi_L^a \in \mathcal{O}_L. \] By Proposition \ref{prop:PropertiesOff} (3) we have $v_L(c_i \pi_K^{-\nu_j} f^{i+j} \pi_L^a) \equiv a+(i+j)t \equiv (i+j+1) a \pmod{p}$, namely, the valuations of the terms in the above sum are all distinct, and the only possibility for the sum to be integral is that every summand is itself integral. This shows that $c_if^i\cdot\pi_K^{-\nu_j}f^j\pi_L^a\in\mathcal{O}_L$, for each $i$ and $j$, namely that $c_if^i$ must be in $\mathfrak{A}_{L/K}$. Thus it remains to understand which elements of the form $c_i f^i$ are in $\mathfrak{A}_{L/K}$; equivalently, we require $v_L(c_i f^i \cdot \pi_K^{-\nu_j} f^j \pi_L^a) \geq 0$ for all $j=0,\ldots,p-1$. In turn, this is equivalent to \[ p v_K(c_i) -p\nu_j+ v_L(f^{i+j} \pi_L^a) \geq 0 \; \text{ for all } j=0,\ldots,p-1. \] As observed above, for $i+j\ge p$ this is guaranteed by the condition $v_K(c_i) \geq -\nu_i$. For $j=0,\ldots,p-1-i$, using Proposition \ref{prop:PropertiesOff} (3) we get \[ -p\nu_j+ (i+j)t+a \geq -p v_K(c_i) \Leftrightarrow \nu_{i+j}-\nu_j \geq -v_K(c_i); \] since this needs to hold for every $j$, we obtain $v_K(c_i) \geq -n_i$.
Putting everything together, we see that $\sum_{i=0}^{p-1} c_i f^i$ is in $\mathfrak{A}_{L/K}$ if and only if $v_K(c_i) \geq -n_i$ for every $i$, that is, $\mathfrak{A}_{L/K}$ is free over $\mathcal{O}_K$ with basis $\{\pi_K^{-n_i} f^i\}_{i=0,\ldots,p-1}$. \end{proof} \subsection{On the $\mathfrak{A}_{L/K}$-freeness of $\mathcal{O}_L$} In this section we show how our approach leads to a new proof of Theorem \ref{thm:IntroFertonBertrandias}. Notice that if $a=0$ then $\mathcal{O}_L$ is free over $\mathfrak{A}_{L/K}$ by Corollary \ref{cor:AEqZeroOLIsAFree}, while if $L/K$ is not almost-maximally ramified, then $a \neq 0$ by Proposition \ref{prop:InequalitiesOneAndt}. Hence in what follows we can prove the theorem under the additional assumption that $a \neq 0$. We aim to compare the two expressions \begin{equation}\label{eq:MinimalIndexOverFreeSubmodule} \sum_{i=0}^{p-1} \nu_i + \min_{0 \leq i \leq p-1} (ei -(p-1)\nu_i) \end{equation} and \begin{equation}\label{eq:IndexOverAssociatedOrder} \sum_{i=0}^{p-1} n_i, \end{equation} which, by Theorems \ref{thm:FormulaCyclicExtensionsDegreeP} and \ref{thm:StructureOfALK}, are the minimal index of $\mathcal{O}_L$ over a free $\mathcal{O}_K[G]$-submodule and the index $[\mathfrak{A}_{L/K} : \mathcal{O}_K[G]]$, respectively. As a consequence of Proposition \ref{productindex}, $\mathcal{O}_L$ is free over $\mathfrak{A}_{L/K}$ if and only if \eqref{eq:MinimalIndexOverFreeSubmodule} and \eqref{eq:IndexOverAssociatedOrder} are equal, so we are reduced to showing the following statement. \begin{theorem}\label{thm:BertrandiasFertonReformulation} Let $L/K$ be a degree-$p$ cyclic extension of $p$-adic fields with ramification jump $t$. Let $a$ be the residue class of $t$ modulo $p$. The following hold: \begin{enumerate} \item Suppose $a \mid p-1$. Then \eqref{eq:MinimalIndexOverFreeSubmodule} and \eqref{eq:IndexOverAssociatedOrder} are equal, so $\mathcal{O}_L$ is free over $\mathfrak{A}_{L/K}$. 
\item Suppose $L/K$ is not almost-maximally ramified and that $\mathcal{O}_L$ is free over $\mathfrak{A}_{L/K}$, so that \eqref{eq:MinimalIndexOverFreeSubmodule} and \eqref{eq:IndexOverAssociatedOrder} are equal. Then $a \mid p-1$. \end{enumerate} \end{theorem} \subsubsection{Sufficiency} In this section we assume that $a \mid p-1$ and show that \eqref{eq:MinimalIndexOverFreeSubmodule} and \eqref{eq:IndexOverAssociatedOrder} are equal, thus proving (1) in Theorem \ref{thm:BertrandiasFertonReformulation}. We start with the following arithmetical lemma: \begin{lemma}\label{lemma:CFLength2} Assume that $a \mid p-1$ and set $k=\frac{p-1}{a}$. Then $ \nu_i = it_0 + \left\lfloor \frac{i}{k} \right\rfloor $ for $i=0,\ldots,p-1$. \end{lemma} \begin{proof} By definition we have $\nu_i = \left\lfloor \frac{i(pt_0+a)+a}{p} \right\rfloor = it_0 + \left\lfloor \frac{(i+1)a}{p} \right\rfloor$, so we have to prove that $\left\lfloor \frac{(i+1)a}{p} \right\rfloor=\left\lfloor \frac{i}{k} \right\rfloor$. We establish this by induction on $i$. For $i \leq k-1$ we have $(i+1)a \leq ka = p-1$, and $\left\lfloor \frac{(i+1)a}{p} \right\rfloor=0$. Now let $i \geq k$: we have \[ \left\lfloor \frac{(i+1)a}{p} \right\rfloor = \left\lfloor \frac{(i-k+1)a + ka}{p} \right\rfloor = \left\lfloor \frac{(i-k+1)a + (p-1)}{p} \right\rfloor, \] which is easily seen to be equal to $\left\lfloor \frac{(i-k+1)a}{p} \right\rfloor + 1$, provided that $(i-k+1)a$ is not divisible by $p$. Since $a<p$ and $i-k+1 \leq p-k < p$ we never have $(i-k+1)a \equiv 0 \pmod p$, so we obtain \[ \left\lfloor \frac{(i+1)a}{p} \right\rfloor = \left\lfloor \frac{(i-k+1)a}{p} \right\rfloor + 1 = \left\lfloor \frac{i-k}{k} \right\rfloor +1 = \left\lfloor \frac{i}{k} \right\rfloor \] as claimed. \end{proof} \begin{lemma} Assume that $a \mid p-1$. Then $\min_{0 \leq i \leq p-1} (ei -(p-1)\nu_i)=0$ and $\nu_i =n_i$ for all $i=0,\ldots,p-1$. 
In particular, \eqref{eq:MinimalIndexOverFreeSubmodule} and \eqref{eq:IndexOverAssociatedOrder} are equal. \end{lemma} \begin{proof} We need to prove that $ei - (p-1) \nu_i \geq 0$ for all $i=0,\ldots,p-1$ (notice that equality holds for $i=0$): this can be seen by writing $k=\frac{p-1}{a}$ and using the inequality $e \geq a +(p-1)t_0$ (Remark \ref{rmk:EGeqA}) and Lemma \ref{lemma:CFLength2}, which yield \[ ei - (p-1) \nu_i \geq (a +(p-1)t_0)i - (p-1) \left( \left\lfloor\frac{i}{k}\right\rfloor + it_0 \right) \geq ai - \frac{(p-1)i}{k} = ai - ai=0. \] We now turn to the equality $\nu_i=n_i$. Notice that $n_i$ is defined as the minimum of several quantities, one of which is $\nu_i-\nu_0=\nu_i$, so we certainly have $n_i \leq \nu_i$. As for the opposite inequality, we need to prove that $\nu_{i+j} - \nu_j \geq \nu_i$, that is, using Lemma \ref{lemma:CFLength2} again, \[ (i+j)t_0 + \left\lfloor \frac{i+j}{k} \right\rfloor \geq it_0 + \left\lfloor \frac{i}{k} \right\rfloor + jt_0 + \left\lfloor \frac{j}{k} \right\rfloor, \] which is obvious. \end{proof} \subsubsection{Necessity} We now prove that if $\mathcal{O}_L$ is $\mathfrak{A}_{L/K}$-free, and the extension is not almost-maximally ramified, then $a \mid p-1$. Let $0 \leq i,j \leq p-1$ be such that $i+j \geq p$. By Corollary \ref{cor:iPlusjGreaterThanP} and the obvious inequality $n_i \leq \nu_i$ we see that $\pi_K^{-n_i} \, f^i \cdot \pi_K^{-\nu_j} f^j \pi_L^a$ is in $\pi_K \mathcal{O}_L$. Now start with $\beta \in \mathcal{O}_L$, which we assume to generate $\mathcal{O}_L$ over $\mathfrak{A}_{L/K}$ and which we represent as a vector $\begin{pmatrix} d_0 \\ d_1 \\ \vdots \\ d_{p-1} \end{pmatrix}$ of coordinates in the basis $\pi_K^{-\nu_i} f^i \pi_L^a$ of $\mathcal{O}_L$. We consider the $\mathcal{O}_K$-lattice $\mathfrak{A}_{L/K} \beta$ in $L$, namely, the free $\mathcal{O}_K$-module spanned by $ \pi_K^{-n_i} \, f^i \beta $ for $i=0,\ldots,p-1$.
Using the fact that $\pi_K^{-n_i-\nu_j}f^{i+j} \pi_L^a$ lies in $\pi_K \mathcal{O}_L$ for $i+j \geq p$ we obtain \[ \begin{aligned} \pi_K^{-n_i} f^i \beta = \pi_K^{-n_i} f^i \sum_{j=0}^{p-1} d_j \pi_K^{-\nu_j} f^j \pi_L^a \equiv \sum_{k=i}^{p-1} \left(d_{k-i} \pi_K^{-n_i-\nu_{k-i}+\nu_{k}} \right) \pi_K^{-\nu_{k}} f^{k} \pi_L^a \pmod{\pi_K}. \end{aligned} \] The matrix of the lattice $\mathfrak{A}_{L/K} \beta$, expressed with respect to the basis $\{\pi_K^{-\nu_i}f^i\pi_L^a \}$ of $\mathcal{O}_L$, is then congruent modulo $\pi_K$ to \[ \begin{pmatrix} d_0 & 0 & 0 & \cdots & 0 \\ d_1 & \pi_K^{-n_1+\nu_{1}} d_0 & 0 & \cdots & 0 \\ \vdots & \vdots & \pi_K^{-n_2+\nu_{2}} d_0 & \cdots & \vdots \\ d_{p-1} & & \vdots & & \pi_K^{-n_{p-1}+\nu_{p-1}} d_0 \end{pmatrix}. \] The assumption $\mathfrak{A}_{L/K} \beta=\mathcal{O}_L$ implies that the above matrix must be invertible. On the other hand, it is clear that its determinant is congruent modulo $\pi_K$ to $d_0^p \cdot \pi_K^{\sum_{i=1}^{p-1} (\nu_i-n_i) }$, so it is invertible if and only if this quantity has valuation $0$. Since $v_K(d_0) \geq 0$ and $\nu_i-n_i \geq 0$ for all $i$, this happens if and only if $d_0$ is a $\pi_K$-adic unit and $\nu_i=n_i$ for $i=1,\ldots,p-1$, so that in particular we must have \[ n_i = \min_{0 \leq j \leq p-1-i} (\nu_{i+j} - \nu_j) = \nu_i, \] which implies $\nu_{i+j} - \nu_j \geq \nu_i$ for all $i,j$ such that $i+j \leq p-1$. Set for simplicity $\tilde{\nu}_i := \nu_i - it_0 = \left\lfloor \frac{a(i+1)}{p} \right\rfloor$ and notice that the previous inequality is equivalent to \begin{equation}\label{eq:nuisuperadditive} \tilde{\nu}_{i+j} - \tilde{\nu}_j \geq \tilde{\nu}_i \quad \text{for all }i,j\text{ such that }i+j \leq p-1. \end{equation} Let $k$ be such that $ka<p\le (k+1)a$; clearly $0\le k\le p-1$ and $a(k+1)=p+r$ with $0 \le r < a$, so that $\tilde{\nu}_{k-1}=0$ and $\tilde{\nu}_k=1$.
For $i\ge0$ we have $$\tilde{\nu}_{i+k}=\left\lfloor \frac{ (i+k+1)a }{p} \right\rfloor =\left\lfloor \frac{ ia+r+p }{p} \right\rfloor\le\left\lfloor \frac{ (i+1)a }{p} \right\rfloor+1=\tilde{\nu}_{i}+\tilde{\nu}_{k},$$ which, together with Equation \eqref{eq:nuisuperadditive}, shows that $\tilde{\nu}_{i+k} = \tilde{\nu}_i+\tilde{\nu}_k$ for each $i \leq p-1-k$. An immediate induction then implies $\tilde{\nu}_i = \left\lfloor \frac{i}{k} \right\rfloor$ for all $i \leq p-1$. Setting $i=p-1$ we then obtain \[ a = \tilde{\nu}_{p-1}= \left\lfloor \frac{p-1}{k} \right\rfloor \Rightarrow a \leq \frac{p-1}{k}, \] hence in particular $ak \leq p-1$. We can then set $i=ak \leq p-1$ to obtain \[ a = \left\lfloor \frac{ak}{k} \right\rfloor= \tilde{\nu}_{ak} = \left\lfloor \frac{(ak+1)a}{p} \right\rfloor \Rightarrow a \leq \frac{(ak+1)a}{p} \Rightarrow p \leq ak+1 \Rightarrow ak \geq p-1, \] hence $ak=p-1$ and $a \mid p-1$ as desired. This concludes the proof of Theorem \ref{thm:BertrandiasFertonReformulation} (2). \renewcommand{\bibfont}{\small} \bibliographystyle{plainnatabbrv}
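The two directions of Theorem \ref{thm:BertrandiasFertonReformulation} combine into the statement that, for $1 \leq a \leq p-1$, the superadditivity condition \eqref{eq:nuisuperadditive} for $\tilde{\nu}_i = \lfloor a(i+1)/p \rfloor$ holds if and only if $a \mid p-1$. This equivalence is easy to confirm numerically for small primes (a sanity check only):

```python
def nu_tilde(a, p, i):
    # nu~_i = floor(a(i+1)/p), the t_0-free part of nu_i.
    return (a * (i + 1)) // p

def superadditive(a, p):
    # Condition (eq:nuisuperadditive): nu~_{i+j} >= nu~_i + nu~_j whenever i+j <= p-1.
    return all(nu_tilde(a, p, i + j) >= nu_tilde(a, p, i) + nu_tilde(a, p, j)
               for i in range(p) for j in range(p - i))

# Superadditivity holds exactly when a divides p-1.
for p in (3, 5, 7, 11, 13):
    for a in range(1, p):
        assert superadditive(a, p) == ((p - 1) % a == 0)
```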
https://arxiv.org/abs/1704.00173
Iterated stochastic processes: simulation and relationship with high order partial differential equations
In this paper, we consider the composition of two independent processes: one process corresponds to position and the other one to time. Such processes will be called iterated processes. We first propose an algorithm based on the Euler scheme to simulate the trajectories of the corresponding iterated processes on a fixed time interval. This algorithm is natural and can be implemented easily. We show that it converges almost surely, uniformly in time, with a rate of convergence of order 1/4 and propose an estimation of the error. We then extend the well-known Feynman-Kac formula, which gives a probabilistic representation of partial differential equations (PDEs), to its higher order version using iterated processes. In particular we consider general position processes which are not necessarily Markovian, or which are indexed by the real line while remaining real valued. We also weaken some assumptions from previous works. We show that intertwining diffusions are related to transformations of high order PDEs. Combining our numerical scheme with the Feynman-Kac formula, we simulate functionals of the trajectories and solutions to fourth order PDEs that are naturally associated to a general class of iterated processes.
\section{Introduction} In the present paper we address {\it iterated processes}: we propose a numerical scheme to simulate their trajectories and investigate their relationship with high order partial differential equations (PDEs). Then we use our scheme to approximate numerically the solution of such PDEs. Consider two independent processes $(X_t)_{t\geq 0}$ and $(Y_t)_{t\in I}$, where $I$ is an interval, defined on a probability space $(\Omega, \mathcal{F}, \P)$ and taking values respectively in $\mathbb{R}^d$ and $\mathbb{R}^n$. We define an iterated process $Z$ by replacing the time index of $X$ by $|Y|$, where $|\cdot|$ denotes a norm on $\mathbb{R}^n$, as follows \begin{equation} \label{processusitere} Z_t:=X(|Y_t|),\quad t\in I. \end{equation} In the sequel we call $(X_t)$ (resp. $(Y_t)$) the {\it position} (resp. {\it time}) process. Different types of iterated processes have been considered recently, for instance by Allouba \cite{allouba1, allouba}, Burdzy \cite{burdzy1, burdzy3, burdzy2}, Khoshnevisan and Lewis \cite{khoshnevisan}. Some of these authors extend $X$ to a process indexed by the real line as follows \begin{equation} \label{realline} X_2(t):=\left \lbrace \begin{array}{lcl} X(t)&\text{if}& t\geq 0\\ X'(-t)&\text{if}& t<0\\ \end{array}\right. \end{equation} where $X'$ is an independent copy of $X$. The corresponding iterated process is then defined by $X_2(Y_t)$. \smallskip \noindent In order to study properties of iterated processes, in particular to be able to simulate solutions of related PDEs, we would like to simulate their trajectories and to control the error. We have not found any reference on the simulation of such processes in the literature. In the first part of the paper we propose a scheme based on the classical Euler scheme to simulate the trajectories of the iterated process $(Z_t)$ when $X$ and $Y$ are two diffusions.
For every $T>0$ and positive integer $n$, we construct a process $\tilde{Z}^n$ piecewise linear on a regular partition of $[0,T]$ with mesh $T/n$. We will see that this scheme converges a.s. uniformly on $[0,T]$, as $n$ tends to infinity, to $(Z_t)$ with a rate of convergence of order $1/4$, which means that \begin{equation} \label{rate} \forall\ 0<\alpha<1/4,\quad \lim_{n\rightarrow +\infty} \, \, \, n^\alpha\sup_{t\in[0,T]} |\tilde{Z}^n(t)-Z(t)|=0,\, \, \, \quad {\rm a.s.} \end{equation} It seems that the order of convergence in (\ref{rate}) for iterated processes satisfying our assumptions cannot be better than $1/4$. Indeed for a Brownian motion $Y$, we have for all $\alpha>1/2$ (see \cite{faure}) \begin{equation*} \limsup_n \, \, \, n^\alpha \, |Y_T-Y_T^n|=\infty, \, \, \, \, \, {\rm a.s.} \end{equation*} The convergence of the scheme being uniform, we can use it to simulate quantities depending on the whole trajectory of the process, not only on its value at a fixed time, such as the variations, the mean of a functional of the process or the measure of a subset of the function space $\mathbb{R}^{[0,T]}$ with respect to the law of the process. Our scheme is easy to implement since it only requires the simulation of $2n$ Gaussian random variables, and so it is adapted to methods using simulations of many trajectories such as Monte Carlo methods. It can be extended to processes given by construction (\ref{realline}). We have chosen the Euler scheme because of its simplicity. The Milshtein approximation, for instance, does not provide a better rate of convergence and is more complex. \smallskip \noindent In the subsequent part of the paper we address the connection between iterated processes and high order PDEs.
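As a concrete illustration of definition (\ref{processusitere}), the following sketch simulates a discretised iterated Brownian motion $Z_t = X(|Y_t|)$ with $X$ and $Y$ independent standard Brownian motions. Since Brownian increments are exactly Gaussian, no Euler discretisation of the coefficients is needed in this special case; the grid size and horizon below are illustrative choices, and the piecewise-linear interpolation of $X$ mirrors the construction of $\tilde{Z}^n$:

```python
import math, random

random.seed(0)
T, n = 1.0, 1000                               # horizon and number of steps (illustrative)

# Time process: standard Brownian motion Y on [0, T] via exact Gaussian increments.
dt = T / n
Y = [0.0]
for _ in range(n):
    Y.append(Y[-1] + random.gauss(0.0, math.sqrt(dt)))

# Position process: Brownian motion X on [0, M] with M = max_t |Y_t|.
M = max(abs(y) for y in Y)
ds = M / n
X = [0.0]
for _ in range(n):
    X.append(X[-1] + random.gauss(0.0, math.sqrt(ds)))

def interp(x, grid_step, values):
    """Piecewise-linear interpolation of `values` sampled at multiples of grid_step."""
    k = min(int(x / grid_step), len(values) - 2)
    lam = x / grid_step - k
    return (1 - lam) * values[k] + lam * values[k + 1]

# Iterated process Z_t = X(|Y_t|) on the time grid.
Z = [interp(abs(y), ds, X) for y in Y]
```

A Monte Carlo estimate of a functional of the trajectory is then obtained by repeating this construction over many independent runs.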
There are in the literature several papers which prove that the density of specific iterated processes satisfies a Fokker-Planck type PDE (\cite{orsingher2}, \cite{orsingher}), establish a Feynman-Kac type formula (\cite{allouba}, \cite{funaki}, \cite{erkan}) for such processes or associate them to fractional equations (\cite{baeumer}, \cite{erkan2}, \cite{orsingher3}). We extend these works to general position processes, in particular processes that are not necessarily Markovian (cf. section \ref{gyongy}).\\ When the time process is a Brownian motion or an $\alpha$-stable process, the equation we obtain reduces to the equations obtained by \cite{allouba} and \cite{erkan}, respectively. However, our result holds under more general assumptions on the initial value. Moreover, in this particular case we prove that the solution of the PDE is unique. It is difficult to extend the results of this paper to a general setting where the density of the time process satisfies a PDE whose coefficients depend on the spatial variable. We provide examples of such time processes. \\ We also consider transformations going from one high order PDE to another, and we point out the relationship with intertwining diffusions. However, these results (as well as those in \cite{allouba} and \cite{erkan}) concern PDEs which contain the initial value and some of its derivatives. The PDE obtained in \cite{funaki} does not have this drawback, but the underlying iterated process takes values in the complex plane. Extending the construction (\ref{realline}), we obtain a Feynman-Kac formula where the initial value no longer appears in the PDE. Moreover, the underlying process is real valued. An application of our result provides a stochastic representation of a solution of the Euler-Bernoulli beam equation when one considers the iteration of a Brownian motion by an independent Cauchy process.
\smallskip \noindent In the last part of the paper, we implement the algorithm for the iterated Brownian motion (IBM) given by (\ref{processusitere}) when $X$ and $Y$ are two independent Brownian motions. We simulate its third and fourth order variations and the solution of the corresponding fourth order PDE. \bigskip \noindent The present paper is organized as follows. In section \ref{algorithme} we describe our scheme, prove its uniform convergence and study the error. Section \ref{pdes} is devoted to the relationship between iterated processes and PDEs in the spirit of \cite{allouba} and \cite{erkan}. Transformations of PDEs are addressed in section \ref{transformations}. In section \ref{funakipde}, inspired by \cite{funaki}, we work with construction (\ref{realline}) in order to obtain PDEs that do not contain the initial value. Section \ref{application} implements our scheme in the IBM case. The Appendix (section \ref{appendix}) contains some auxiliary proofs. \section{Numerical scheme.}\label{algorithme} \noindent In this section we describe our numerical scheme and study its convergence. We consider two independent time-inhomogeneous diffusion processes $X$ and $Y$. Our scheme is based on the idea that a pathwise approximation of $X(|Y|)$ may be obtained by the composition of the respective Euler approximations of $X$ and $Y$. More precisely, for $T>0$ and a positive integer $n$, the Euler scheme for $Y$ on $[0,T]$ yields a continuous approximation $Y^n$. If $M_n$ denotes the maximum of $|Y^n|$ on $[0,T]$, the Euler scheme for $X$ on $[0,M_n]$ provides an approximation $X^n$. We prove below that a piecewise linear approximation $\tilde{Z}^n$ of the composition $Z^n:=X^n(|Y^n|)$ converges a.s. uniformly on $[0,T]$, as $n$ tends to infinity, to $Z=X(|Y|)$ with a rate of order $\frac{1}{4}$. \smallskip \noindent We start with the stochastic differential equations (SDEs) satisfied by the position and time processes.
Let us assume that the position process $(X_t)_{t\geq 0}$ starting from $X_0$ satisfies \begin{equation} \label{edsX} dX_t=b_X(t,X_t)dt+\sigma_X(t,X_t)dW_t, \ t\geq 0, \end{equation} where $b_X: \mathbb{R} \times \mathbb{R}^d \rightarrow \mathbb{R}^d$, $\sigma_X: \mathbb{R} \times \mathbb{R}^d \rightarrow \mathbb{R}^{d\times p}$ and $(W_t)$ is a standard $\mathbb{R}^p$ valued Brownian motion. The time process $(Y_t)_{t\in [0,T]}$ starting from $Y_0$ solves \begin{equation} \label{edsY} dY_t=b_Y(t,Y_t)dt+\sigma_Y(t,Y_t)dW'_t, \ t\in[0,T], \end{equation} with $b_Y: \mathbb{R} \times \mathbb{R}^n \rightarrow \mathbb{R}^n$, $\sigma_Y: \mathbb{R} \times \mathbb{R}^n \rightarrow \mathbb{R}^{n\times q}$ and $(W'_t)$ a standard $\mathbb{R}^q$ valued Brownian motion independent from $W$. Several assumptions will be used throughout the paper. Regarding the starting points, we assume that \begin{equation} \label{initial_val} \forall p\geq 1, \, \, \mathbb{E}|X_0|^p+\mathbb{E}|Y_0|^p<\infty. \end{equation} The coefficients of (\ref{edsX}) and (\ref{edsY}) are assumed to be Lipschitz continuous in space, of at most linear growth, and H\"older continuous in time. For simplicity, we write down these assumptions for $b_X$ and $\sigma_X$ only. For $b_Y$ and $\sigma_Y$, the only difference is that the time variable in (\ref{edsY}) belongs to the compact interval $[0,T]$. We assume that there exist two positive real numbers $K$ and $\beta$ such that \begin{eqnarray} &\forall t\geq 0,& \forall x,y\in \mathbb{R}^d,\notag\\ &&|b_X(t,x)-b_X(t,y)|+|\sigma_X(t,x)-\sigma_X(t,y)|\leq K|x-y|, \label{lipschitz_cond}\\ &&|b_X(t,x)|^2+|\sigma_X(t,x)|^2\leq K^2(1+|x|^2), \label{growth_cond} \end{eqnarray} \begin{eqnarray} \label{holder_cond} &\forall s, t\geq 0, &\forall x\in \mathbb{R}^d,\notag\\ &&|b_X(s,x)-b_X(t,x)|+|\sigma_X(t,x)-\sigma_X(s,x)|\leq K|t-s|^\beta.
\end{eqnarray} \noindent Under these assumptions equations (\ref{edsX}) and (\ref{edsY}) have unique strong solutions (cf. \cite{kar}). \bigskip \noindent Let us recall that the Euler scheme for $Y$ on $[0,T]$ given the regular subdivision $t_0=0<t_1<\ldots<t_n=T$ with mesh $\Delta_T=T/n$ is defined recursively by $Y_0^n:=Y_0$ and \begin{equation} \label{eulerpourY} \forall t\in ]t_k, t_{k+1}], \quad Y_t^n:=Y_{t_k}^n+\sigma_Y(t_k,Y_{t_k}^n)(W'_t-W'_{t_k})+b_Y(t_k,Y_{t_k}^n)(t-t_k). \end{equation} Let us define $M_n:=\sup_{t\in [0,T]} |Y_t^n|$, which is a.s. finite (cf. Proposition \ref{estimeesclassiques}), and denote by $X^n$ the approximation resulting from the Euler scheme for $X$ on $[0,M_n]$. We propose to approximate $X(|Y_t|)$ by $X^n(|Y^n_t|)$. The following statement provides an estimate of the mean error associated with this approximation, uniformly in $t\in [0,T]$. In the following, $n$ is assumed to be such that $\Delta_T=T/n\leq 1$. \begin{thm} \label{meanerror} Let us fix $T>0$ and assume that $X$ and $Y$ satisfy (\ref{edsX})-(\ref{holder_cond}). Let $n\geq 1$ be an integer. For $t\in [0,T]$ we set $Z_t:=X(|Y_t|)$ and $Z_t^n:=X^n(|Y_t^n|)$. If $\sup_{t\in [0,T]} \max \{|Y_t|, |Y_t^n|\}$ has a finite Laplace transform, then \begin{equation}\label{errorestimate} \mathbb{E} (\sup_{t\in [0,T]} |Z_t-Z_t^n|^{2p})\leq C(\Delta_T)^\rho, \end{equation} where $C$ is a constant depending only on $K$, $p$, $T$ and $\mathbb{E}|X_0|^{2p}$, and $\rho$ is defined by $\rho=(p-1)\inf \{1,2\beta\}/2$. In particular (\ref{errorestimate}) holds true if $Y_0^2$ has a finite Laplace transform. \end{thm} \smallskip \noindent In Theorem \ref{meanerror}, we have used the exact composition of the respective Euler schemes of $X$ and $Y$ to approximate $X(|Y|)$ by $X^n(|Y^n|)$. Actually another approximation, easier to implement, may be chosen.
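Before turning to this simpler variant, note that the composition $X^n(|Y^n|)$ is straightforward to implement. Below is a minimal Python sketch (an illustration, not part of the paper): it takes both $X$ and $Y$ to be standard Brownian motions, so that each Euler scheme is exact at its grid points, and evaluates a piecewise linear version of $X^n$ at the times $|Y^n_{t_k}|$.

```python
import numpy as np

def euler_path(n, T, rng):
    # Euler scheme on [0, T] for a standard Brownian motion started at 0:
    # with zero drift and unit diffusion it is exact at the n + 1 grid points.
    dt = T / n
    increments = rng.normal(0.0, np.sqrt(dt), size=n)
    return np.concatenate(([0.0], np.cumsum(increments)))

def iterated_euler(n, T, rng):
    # Composed approximation Z^n_{t_k} = X^n(|Y^n_{t_k}|) on the grid of [0, T].
    Y = euler_path(n, T, rng)           # time process on [0, T]
    M_n = np.abs(Y).max()               # random horizon M_n = sup |Y^n|
    X = euler_path(n, M_n, rng)         # position process on [0, M_n]
    grid_X = np.linspace(0.0, M_n, n + 1)
    # Piecewise linear evaluation of X^n at the times |Y^n_{t_k}|.
    return np.interp(np.abs(Y), grid_X, X)

rng = np.random.default_rng(0)
Z = iterated_euler(1000, 1.0, rng)      # one approximate path of X(|Y|)
```

For general coefficients $b_Y$, $\sigma_Y$, the function \texttt{euler\_path} would be replaced by the recursion (\ref{eulerpourY}).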
Let us consider $\bar{Y}_t^n$, a step function defined on $[0,T]$ by $\bar{Y}_t^n=Y^n(kT/n)$ for $t\in[kT/n,(k+1)T/n[$, and $\bar{X}^n$, a step function defined on $[0,M_n]$ by $\bar{X}_t^n=X^n(kM_n/n)$ for $t\in[kM_n/n,(k+1)M_n/n[$. Let $\tilde{Z}^n$ be the linear interpolation between the points $\bar{X}^n(\bar{Y}^n(kT/n))$ for $k=0,1,\ldots,n$. The computation of $\tilde{Z}^n$ is facilitated by the use of piecewise constant processes, whose composition is simpler than the composition of piecewise linear ones. We now show that $Z^n$ and $\tilde{Z}^n$ both converge uniformly to $Z$ with order $\frac{1}{4}$. \begin{thm} \label{rateofconvergence} Let $X$ and $Y$ satisfy (\ref{edsX})-(\ref{holder_cond}). Then $\forall \, \, 0\leq \alpha < \frac{1}{4},$ \begin{equation} \lim_{n\rightarrow +\infty} \, \, \, n^\alpha \, \sup_{t\in [0,T]} {(|Z_t^n-Z_t|+|\tilde{Z}_t^n-Z_t|)}=0, \, \, \, \, {\rm a.s.} \end{equation} \end{thm} \noindent The proof of Theorem \ref{meanerror} relies on the following proposition. \begin{prop}(cf. \cite{faure}) \label{errorboundforX} Under assumptions (\ref{edsX})-(\ref{holder_cond}), \begin{equation}\label{explicitbound} \mathbb{E} (\sup_{t\in [0,M]} |X_t-X_t^n|^{2p})\leq C(1+M^p)M(\Delta_M)^\gamma e^{CM} \end{equation} where $\Delta_M:=M/n$, $\gamma:=p\sup \{1,2\beta\}$ if $\Delta_M\geq1$ and $\gamma:=p\inf \{1,2\beta\}$ otherwise. $C$ is a constant depending only on $K$, $p$, $n$ and $\mathbb{E}|X_0|^{2p}$. \end{prop} In the sequel we need the explicit dependence of (\ref{explicitbound}) on the parameter $M$, since the interval on which we approximate $X$ is random. For the sake of completeness, we provide the proof of Proposition \ref{errorboundforX} in the Appendix (section \ref{appendix}). \medskip \noindent\textbf{Proof of Theorem \ref{meanerror}.} In the proof $C$ and $\tilde C$ denote constants depending only on $K$, $p$, $n$, $T$ and $\mathbb{E}|X_0|^{2p}$ which may vary from line to line. Remember that $Z_t=X(|Y_t|)$ and $Z^n_t=X^n(|Y_t^n|)$.
We start with \begin{multline} \mathbb{E}\left(\sup_{t\in[0,T]} |Z^n_t-Z_t|^{2p}\right)\leq 2^{2p-1}\mathbb{E}\left(\sup_{t\in[0,T]} |X(|Y_t^n|)-X(|Y_t|)|^{2p}\right) \\ +2^{2p-1}\mathbb{E}\left(\sup_{t\in[0,T]} |X^n(|Y_t^n|)-X(|Y_t^n|)|^{2p}\right). \label{eq:82} \end{multline} Let us bound the first term on the RHS. Define $M:=\sup_{t\in[0,T]} \max \{|Y_t|,|Y_t^n|\}$ and $\delta:=\sup_{t\in[0,T]} |Y_t-Y_t^n|$. Let us recall that if $M$ were fixed, the Garsia-Rodemich-Rumsey Lemma (cf. \cite{garsia}) would provide a random variable $\Gamma$ such that $|X_s-X_t|^{2p}\leq c\, \Gamma\, |t-s|^m,$ for all $s,t$ in $[0,M]$ and any $m\in \,]0,p-1[$. In this inequality $c$ is a constant depending on $p$ and $m$. Moreover this lemma provides the following expression for $\Gamma$, \begin{equation}\label{expressiondegamma} \Gamma= \int_{[0,M]^2}\, \frac{|X_a-X_b|^{2p}}{|a-b|^{m+2}}\, da\, db, \end{equation} as well as the estimate \begin{equation*} \mathbb{E} (\Gamma)\leq H c_1 \frac{1}{(p-1-m)}(\frac{M}{2})^{p-m}, \end{equation*} where $c_1$ is a universal constant and $H$ denotes the right-hand side of (\ref{Kolmogorovcriterion}). Here the pair $(M,\delta)$ is random. However, by definition it is independent of $X$. Therefore we can condition on $(M,\delta)$ and write \begin{eqnarray*} \mathbb{E}\left(\sup_{t\in[0,T]} |X(|Y_t^n|)-X(|Y_t|)|^{2p}\right)&\leq& {\tilde C}\, \mathbb{E}\left[ (1+M^p)e^{CM}(\frac{M}{2})^{p-m}\delta^m \right]\\ &\leq& \tilde C\left(\mathbb{E}\left[ P(M)e^{CM}\right] \right)^{1/2} (\mathbb{E} [\delta^{2m} ])^{1/2}. \end{eqnarray*} In the latter expression (obtained by the H\"older inequality) we have set $P(x):=(1+x^p)^2(\frac{x}{2})^{2(p-m)}$. Remember now that $\delta$ is equal to $\sup_{t\in[0,T]} |Y_t-Y_t^n|$. Proposition 14 in \cite{faure} implies \begin{equation*} \mathbb{E} [\delta^{2m} ]\leq (\Delta_T)^{m\inf (2\beta,1)}.
\end{equation*} Therefore we obtain the following upper bound for the first term: \begin{equation} \label{eq:80} \mathbb{E}\left(\sup_{t\in[0,T]} |X(|Y_t^n|)-X(|Y_t|)|^{2p}\right)\leq \tilde C\left(\mathbb{E}\left[ P(M)e^{CM}\right] \right)^{1/2}(\Delta_T)^{(p-1)\inf \{2\beta,1\}/2}. \end{equation} The second term can easily be dominated as follows. Notice first that $\sup_{t\in[0,T]} |X^n(|Y_t^n|)-X(|Y_t^n|)|\leq \sup_{t\in[0,M]} |X_t^n-X_t|$. Conditioning on $M$ (which is independent of $X$) and applying Proposition \ref{errorboundforX}, we obtain \begin{align} \mathbb{E}\left(\sup_{t\in[0,T]} |X^n(|Y_t^n|)-X(|Y_t^n|)|^{2p}\right)&\leq \mathbb{E}\left(\sup_{t\in[0,M]} |X_t^n-X_t|^{2p}\right)\notag \\ &\leq n^{-\gamma}\, \, \mathbb{E} \left[C(1+M^p)M^{1+\gamma} e^{CM}\right], \label{eq:81} \end{align} where $\gamma=p\inf \{1,2\beta\}$. Combining (\ref{eq:82}) with inequalities (\ref{eq:80}) and (\ref{eq:81}) gives the result. \noindent Let us now prove that the assumption that $\sup_{t\in [0,T]} \max \{|Y_t|, |Y_t^n|\}$ has a finite Laplace transform is satisfied whenever $Y_0^2$ has a finite Laplace transform.\\ \noindent Take $\lambda>0$. Then $$\mathbb{E} e^{\lambda \sup_{t\in[0,T]} \max \{|Y_t|,|Y_t^n|\}}\leq \mathbb{E} e^{\lambda \sup_{t\in[0,T]} |Y_t|}+\mathbb{E} e^{\lambda \sup_{t\in[0,T]} |Y_t^n|},$$ \begin{align*} \mathbb{E} e^{\lambda \sup_{t\in[0,T]} |Y_t|}&\leq e^\lambda +\mathbb{E} \left( \sum_{k\geq 0} \frac{\lambda^k}{k!}(\sup_{t\in [0,T]} |Y_t|)^k \mathbf{1}_{\sup_{t\in [0,T]} |Y_t|\geq 1}\right)\\ &\leq e^\lambda +\mathbb{E} \left( \sum_{k\geq 0} \frac{\lambda^k}{k!}(\sup_{t\in [0,T]} |Y_t|)^{2k}\right).\\ \end{align*} Using Proposition \ref{estimeesclassiques} (see the Appendix), this latter term is dominated by $$e^\lambda +\sum_{k\geq 0} \frac{\lambda^k}{k!}\, C(1+T^k)\left\{\mathbb{E}(|Y_0|^{2k})+\left(1+\mathbb{E}(|Y_0|^{2k})\right)T^ke^{CT}\right\},$$ which is finite since $Y_0^2$ has a finite Laplace transform.
The same reasoning can be applied to $Y_t^n$, which satisfies (\ref{eulerpourY}). \hfill \framebox[0.6em]{} \bigskip \noindent In the proof of Theorem \ref{rateofconvergence} we use the following lemma, which enables us to approximate a given function by a sequence of step functions, instead of an arbitrary sequence, without changing the rate of convergence. This lemma is proved in the Appendix. \begin{lem} \label{lem:1} Let $f$ and $(f_n)_n$ denote functions defined on $[0, T]$. For all integers $n\geq 1$, define $\bar{f}_n(t):=f_n(kT/n),\, \, \forall t\in[kT/n, (k+1)T/n]$. Let $\ell>0$, and suppose that $f$ is H\"older continuous with exponent $\beta$ for every $\beta<\ell$. Then, \begin{equation*} \forall\, \, \alpha<\ell,\, \, \lim_{n\rightarrow +\infty} \, \, \, n^\alpha \sup_{t\in[0,T]} |f_n(t)-f(t)|=0 \end{equation*} is equivalent to \begin{equation*} \forall\, \, \alpha<\ell,\, \, \lim_{n\rightarrow +\infty} \, \, \, n^\alpha \sup_{t\in[0,T]} |\bar{f}_n(t)-f(t)|=0. \end{equation*} \end{lem} \noindent\textbf{Proof of Theorem \ref{rateofconvergence}.} We prove the statement in detail for $\tilde{Z}^n$ (the proof for $Z^n$ is similar). By Lemma \ref{lem:1} applied to $Z_t=X(|Y_t|)$, which is locally H\"older continuous with exponent $\beta$ for all $0<\beta<1/4$, it is equivalent to prove that \begin{equation*} \forall \, \, 0\leq \alpha < \frac{1}{4}, \, \, \, \, \lim_{n\rightarrow +\infty} \, n^\alpha \, \sup_{t\in [0,T]} {|\bar{X}^n(|\bar{Y}_t^n|)-X(|Y_t|)|}=0,\, \, {\rm a.s.} \end{equation*} \noindent Let $\alpha\in]0,1/4[$. Let us take $\epsilon>0$ and $M'>0$ such that $\P(\sup_{t\in [0,T]} |Y_t|\geq M')\leq \epsilon$ and set $A:=\{\sup_{t\in [0,T]} |Y_t|<M'\}$. Since $\bar{Y}^n$ converges a.s. uniformly to $Y$ on $[0,T]$, there exists $N$ such that a.s., for all $n\geq N$, $\sup_{t\in [0,T]} |\bar{Y}_t^n-Y_t|<1$. In particular the inequality $\sup_{t\in [0,T]} |\bar{Y}_t^n|<1+M'$ holds a.s. on $A$.
Since $X$ is H\"older continuous on $[0,M'+1]$ with exponent $\beta$ for all $\beta\in]2\alpha,\frac{1}{2}[$, there exists a random variable $C$ such that for all $n\geq N$, a.s. on $A$, \begin{equation*} |X_{|Y_t|}-\bar{X}^n_{|\bar{Y}_t^n|}|\leq C|Y_t-\bar{Y}_t^n|^\beta+ |X_{|\bar{Y}_t^n|}-\bar{X}^n_{|\bar{Y}_t^n|}|. \end{equation*} Taking the supremum and multiplying by $n^\alpha$, we obtain that for all $n\geq N$, a.s. on $A$, \begin{equation*} n^\alpha \sup_{[0,T]} |X_{|Y_t|}-\bar{X}^n_{|\bar{Y}_t^n|}|\leq C(n^{\alpha/\beta} \sup_{[0,T]}|Y_t-\bar{Y}_t^n|)^\beta +n^\alpha \sup_{[0,M'+1]}|X_t-\bar{X}_t^n|. \end{equation*} It is proved in \cite{faure} that \begin{equation*}\label{cv1} \forall \, \, 0\leq \alpha < \frac{1}{2}, \, \, \, \, \lim_{n\rightarrow +\infty} \, n^\alpha \sup_{t\in [0,T]} {|\tilde{X}_t^n-X_t|}=0, \, \, {\rm a.s.} \end{equation*} Lemma \ref{lem:1} applied to $X$ implies that the same convergence holds for $(\bar{X}^n)$ instead of $\tilde{X}^n$, namely \begin{equation*}\label{cv2} \forall \, \, 0\leq \alpha < \frac{1}{2}, \, \, \, \, \lim_{n\rightarrow +\infty} \, n^\alpha \sup_{t\in [0,T]} {|\bar{X}_t^n-X_t|}=0, \, \, {\rm a.s.} \end{equation*} The same argument applies to $Y$. This leads to \begin{equation*} \lim_{n\rightarrow +\infty} \, n^\alpha \sup_{t\in [0,T]} |X_{|Y_t|}-\bar{X}^n_{|\bar{Y}_t^n|}|=0, \quad {\rm a.s.\ on\ } A. \end{equation*} Therefore \begin{equation*} \P\left(\lim_{n\rightarrow +\infty} \, n^\alpha \sup_{t\in [0,T]} |X_{|Y_t|}-\bar{X}^n_{|\bar{Y}_t^n|}|=0\right)\geq \P(A)\geq 1-\epsilon. \end{equation*} We conclude the proof by letting $\epsilon$ tend to $0$. \hfill \framebox[0.6em]{} \bigskip \section{PDEs for iterated processes with general time and position processes}\label{pdes} \noindent In this section, we connect iterated processes and high order PDEs in the spirit of \cite{allouba}, \cite{erkan}. However we consider a general framework where the position or time process is not necessarily Markovian.
It seems difficult to prove a general statement when the density $p_Y$ of the time process satisfies a PDE with space dependent coefficients. In that case we only treat examples. In particular we address iteration with a Brownian motion with drift and with an Ornstein-Uhlenbeck process as time process. When the time process is a Brownian motion or an $\alpha$-stable process, the equation we obtain reduces to the equation obtained respectively by \cite{allouba} and \cite{erkan}. However our result holds under more general assumptions on the initial value. Moreover in this particular case we prove that the solution of the PDE is unique. \bigskip \subsection{Iteration by a general time process} \noindent In this section the time process $(Y_t)_{t\geq 0}$ is real valued, starts from $0$ at time $0$ and satisfies the following assumptions. \medskip \noindent (A1) $Y_t$ admits a density denoted below by $p_Y(t,0,u)$ (or simply $p_Y$) for all $t>0$, \smallskip \noindent (A2) for all $t>0$, $p_Y(t,0,\cdot)\in \mathcal{C}^{r}(\mathbb{R})$ and $\partial^i_t \partial^j_u \, p_Y(t,0,\cdot)\in L^1(\mathbb{R})$ for all $0\leq i\leq q,\ 0\leq j \leq r$, for some integers $q,r$, \smallskip \noindent (A3) the even part $\mathcal{E}(p_Y)$ of $p_Y$ satisfies \begin{equation} \label{edpdensiteY} \sum_{k=1}^{p} \alpha_k \frac{\partial^k}{\partial t^k}\mathcal{E}(p_Y)(t,0,u)=\sum_{i=0}^{q} \sum_{j=1}^{r} \beta_{i,j}\frac{\partial^{i+j}}{\partial t^i \partial u^j}\mathcal{E}(p_Y)(t,0,u), \end{equation} for some integer $p$ and some real valued functions $\alpha_k$, $\beta_{i,j}$ of the time variable (the integers $q,r$ are those appearing in (A2)).
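Assumption (A3) can be verified symbolically in simple cases. The following sympy sketch (an illustration, not part of the paper) checks it for two classical densities: the Gaussian kernel of Brownian motion, which satisfies the heat equation ($p=1$, $\alpha_1=1$, $\beta_{0,2}=1/2$), and the Cauchy kernel, which is harmonic in $(t,u)$ ($p=2$, $\alpha_2=1$, $\beta_{0,2}=-1$); both densities are even in $u$, so they coincide with their even parts.

```python
import sympy as sp

t = sp.symbols('t', positive=True)
u = sp.symbols('u', real=True)

# Brownian motion: p_Y(t, 0, u) solves the heat equation d_t p = (1/2) d_u^2 p.
p_bm = sp.exp(-u**2 / (2*t)) / sp.sqrt(2*sp.pi*t)
residual_bm = sp.simplify(sp.diff(p_bm, t) - sp.Rational(1, 2)*sp.diff(p_bm, u, 2))

# Cauchy process: p_Y(t, 0, u) is harmonic, i.e. d_t^2 p = -d_u^2 p.
p_cauchy = t / (sp.pi * (t**2 + u**2))
residual_cauchy = sp.simplify(sp.diff(p_cauchy, t, 2) + sp.diff(p_cauchy, u, 2))
```

Both residuals simplify to zero, confirming the coefficients listed for these two examples.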
\bigskip \noindent Examples satisfying (A1)-(A3) are provided by \begin{itemize} \item Brownian motion, corresponding to $p=1,\, q=0,\, r=2$, $\alpha_1=1,\, \beta_{0,1}=0,\, \beta_{0,2}=1/2$, \item the Cauchy process, corresponding to $p=2,\, q=0,\, r=2$, $\alpha_1=0,\, \alpha_2=1$, $\beta_{0,1}=0, \, \beta_{0,2}=-1$, \item the telegraph process, whose density satisfies \begin{equation*} \left( \frac{\partial^2}{\partial t^2}+2\lambda \frac{\partial}{\partial t}\right)p_Y(t,0,u)=v^2\frac{\partial^2}{\partial u^2}p_Y(t,0,u), \quad \lambda, v>0, \end{equation*} \item the sum of a telegraph process and an independent Brownian motion. In this case $p_Y$ satisfies (see \cite{Blanchard1993225}) \begin{multline*} \left( \frac{\partial^2}{\partial t^2}+2\lambda \frac{\partial}{\partial t}\right)p_Y(t,0,u)\\ =\left((v^2+\lambda)\frac{\partial^2}{\partial u^2}+\frac{\partial^3}{\partial t \partial u^2}-\frac{1}{4}\frac{\partial^4}{\partial u^4}\right)p_Y(t,0,u), \quad \lambda, v>0. \end{multline*} \end{itemize} Actually in the first three cases the density $p_Y(t,0,\cdot)$ itself is even for all $t$. \medskip \noindent If $p_Y$ itself satisfies an equation of the form (\ref{edpdensiteY}) and if moreover the orders of the partial derivatives w.r.t. the space variable $u$ on the right hand side are even, then $\mathcal{E}(p_Y)$ satisfies the same equation. \medskip \noindent Another example is obtained when $Y$ is a Brownian motion with variance $\sigma^2$ and constant drift $\mu$.
From the Fokker-Planck equation satisfied by $p_Y$, \begin{equation*} \frac{\partial}{\partial t}p_Y(t,0,u)=\frac{1}{2}\sigma^2\frac{\partial^2}{\partial u^2}p_Y(t,0,u)-\mu\frac{\partial}{\partial u}p_Y(t,0,u),\quad \sigma,\mu\in \mathbb{R}, \end{equation*} we deduce that its even and odd parts satisfy the system \begin{align*} \frac{\partial}{\partial t}\mathcal{E}(p_Y)&=\frac{1}{2}\sigma^2\frac{\partial^2}{\partial u^2}\mathcal{E}(p_Y)-\mu\frac{\partial}{\partial u}\mathcal{O}(p_Y),\\ \frac{\partial}{\partial t}\mathcal{O}(p_Y)&=\frac{1}{2}\sigma^2\frac{\partial^2}{\partial u^2}\mathcal{O}(p_Y)-\mu\frac{\partial}{\partial u}\mathcal{E}(p_Y), \end{align*} which implies the following particular case of (\ref{edpdensiteY}): \begin{equation*} \frac{\partial^2}{\partial t^2}\mathcal{E}(p_Y)-\sigma^2\frac{\partial^3}{\partial t\partial u^2}\mathcal{E}(p_Y)-\mu^2\frac{\partial^2}{\partial u^2}\mathcal{E}(p_Y)+\frac{1}{4}\sigma^4\frac{\partial^4}{\partial u^4}\mathcal{E}(p_Y)=0. \end{equation*} \bigskip \noindent In this subsection the position process is a Markov process $(X_t^x)_{t\geq 0}$ starting from a given $x\in \mathbb{R}$ with semigroup $P^t$ and infinitesimal generator $\mathcal{L}$ with domain $D(\mathcal{L})$. We denote by $B(\mathbb{R}^d,\mathbb{R}^d)$ the set of bounded functions from $\mathbb{R}^d$ to $\mathbb{R}^d$ and define $B_0:=\{f \in B(\mathbb{R}^d,\mathbb{R}^d), \|P^tf-f\|_\infty \rightarrow 0 \text{\ as\ } t\downarrow 0\}$. $B_0$ is a closed subset of $B(\mathbb{R}^d,\mathbb{R}^d)$ containing $D(\mathcal{L})$. In the following, we will use iterations of $\mathcal{L}$: \begin{equation*} D(\mathcal{L}^0)=B_0 \text{\quad and\quad} D(\mathcal{L}^n)=\{f\in D(\mathcal{L}^{n-1});\, \mathcal{L}^{n-1}f\in D(\mathcal{L})\},\, \, \forall\, n\geq 1. \end{equation*} \noindent We state below the results of this section (Theorem \ref{FKitere} and its corollaries). Their proofs are provided in section \ref{proofs}.
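Returning to the drifted Brownian motion, the fourth order equation for $\mathcal{E}(p_Y)$ derived above can be double-checked symbolically. In the following sympy sketch (an illustration, not part of the paper), the residual of the equation is computed from the explicit Gaussian density and evaluated at an arbitrary rational point, where it must vanish; the numerical substitution avoids a costly fully symbolic simplification.

```python
import sympy as sp

t = sp.symbols('t', positive=True)
u = sp.symbols('u', real=True)
mu, sigma = sp.symbols('mu sigma', positive=True)

# Density of a Brownian motion with drift mu and variance sigma^2, started at 0.
p = sp.exp(-(u - mu*t)**2 / (2*sigma**2*t)) / (sigma*sp.sqrt(2*sp.pi*t))
E = (p + p.subs(u, -u)) / 2   # even part of the density

# Residual of: d_t^2 E - sigma^2 d_t d_u^2 E - mu^2 d_u^2 E + (sigma^4/4) d_u^4 E = 0.
residual = (sp.diff(E, t, 2)
            - sigma**2 * sp.diff(E, t, u, u)
            - mu**2 * sp.diff(E, u, 2)
            + sigma**4 / 4 * sp.diff(E, u, 4))

value = residual.subs({t: sp.Rational(1, 2), u: sp.Rational(1, 3),
                       mu: sp.Rational(2, 5), sigma: sp.Rational(3, 4)})
```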
\begin{thm} \label{FKitere} Let $Y$ be a real valued process independent of $X^x$ whose density $p_Y$ satisfies assumptions {\rm (A1)}-{\rm (A3)}. Let $f\in D(\mathcal{L}^{r-1})$ where $r$ is the highest order of the partial derivatives in $u$ appearing in (\ref{edpdensiteY}). Define $v(t,x):=\mathbb{E} \left[f(X^x(|Y_t|))\right]$. Then $v\in D(\mathcal{L}^{r})$ and satisfies \begin{equation} \label{edpFKitere} \sum_{k=1}^{p} \alpha_k \frac{\partial^k}{\partial t^k}v(t,x)=\sum_{i=0}^{q}\sum_{j=1}^{r}\beta_{i,j}(-1)^j\frac{\partial^i}{\partial t^i}\mathcal{L}^jv(t,x)+{\cal B}_f(t,x), \end{equation} on $]0,+\infty[\times \mathbb{R}^d$ where $\mathcal{L}$ acts on $x$ and ${\cal B}_f(t,x)$ is the boundary term \begin{equation} \sum_{i=0}^{q}\sum_{j=1}^{r}\, \, \beta_{i,j}\sum_{0\leq 2k\leq j-1} (-1)^{j-2k}\mathcal{L}^{j-1-2k}f(x)\left. \frac{\partial^{i+2k}}{\partial t^i \partial u^{2k}}p_Y(t,0,u)\right|_{u=0}. \end{equation} \end{thm} \bigskip \noindent Restricting to $Y$ a Brownian motion, we recover as a corollary the following result of \cite{allouba}. Moreover we show that in this case (\ref{edpFKitere}) admits a unique solution under weaker assumptions on the initial condition. \begin{corol}\label{corol:2} Let $f\in D(\mathcal{L})$ and $B$ a Brownian motion independent of $X^x$. Then, $v(t,x):=\mathbb{E} \left[f(X^x(|B_t|))\right]$ is an element of $D(\mathcal{L}^2)$ and is the unique solution in $D(\mathcal{L}^2)$ of \begin{equation} \begin{cases} \label{FKibm} \frac{\partial}{\partial t}v(t,x)=\frac{1}{\sqrt{2\pi t}}\mathcal{L}f(x)+\frac{1}{2}\mathcal{L}^2v(t,x), \quad t>0,\ x\in\mathbb{R}^d,\\ v(0,x)=f(x), \quad x\in\mathbb{R}^d. \end{cases} \end{equation} \end{corol} \begin{corol} Let $f\in D(\mathcal{L}^3)$ and $\, Y$ be a Brownian motion with drift $\mu$ and diffusion coefficient $\sigma$, independent of $X^x$. 
Then, there exist two functions of time, $\alpha$ and $\beta$, such that $v(t,x):=\mathbb{E} \left[f(X^x(|Y_t|))\right]$ satisfies \begin{multline*} \frac{\partial^2}{\partial t^2}v(t,x)=\sigma^2\frac{\partial}{\partial t}\mathcal{L}^2v(t,x)+\mu^2\mathcal{L}^2v(t,x)-\frac{1}{4}\sigma^4\mathcal{L}^4v(t,x)\\ +\alpha(t)\mathcal{L}f(x)+\beta(t)\mathcal{L}^3f(x), \quad \forall \, t>0, \forall \, x\in \mathbb{R}^d, \end{multline*} with $v(t,\cdot)\in D(\mathcal{L}^4)$, $\forall\, t>0$. By definition $v(0,x)=f(x)$. \end{corol} \noindent The time processes considered in the above statements are associated to PDEs whose coefficients may depend on the time variable but not on the spatial variable. It seems difficult to extend Theorem \ref{FKitere} to a larger class of time processes. For instance, if $Y$ is an Ornstein-Uhlenbeck process issued from $0$ satisfying $dY_t=-\frac{1}{2}Y_tdt+dW_t$, its density satisfies the Fokker-Planck equation \begin{equation} \label{FPOU} \frac{\partial}{\partial t}p_Y(t,0,u)=\frac{1}{2}\frac{\partial^2}{\partial u^2}p_Y(t,0,u)+\frac{1}{2}\frac{\partial}{\partial u}\left[up_Y(t,0,u)\right], \quad \forall\, t>0, \forall\, u\in \mathbb{R}. \end{equation} \noindent Since some coefficients of this equation are functions of the spatial variable $u$, we cannot apply Theorem \ref{FKitere} directly. However in this particular case a rewriting of the equation leads to the following result. The difficulty is that in general such a rewriting is not possible. \begin{prop}\label{OU} Let $X^x$ be a $\mathbb{R}^d$-valued Markov process with infinitesimal generator $\mathcal{L}$. Let $Y$ be an Ornstein-Uhlenbeck process independent of $X$, satisfying $dY_t=-\frac{1}{2}Y_tdt+dW_t$, $Y_0=0$.
Then, for $f\in D(\mathcal{L})$, the function $v(t,x):=\mathbb{E} \left[f(X^x(|Y_t|))\right]$ satisfies $v(t,\cdot)\in D(\mathcal{L}^2)$ for all $t>0$ and solves \begin{equation} \label{eq:46} \frac{\partial}{\partial t}v(t,x)=\frac{e^{-t}}{2\sqrt{2\pi(1-e^{-t})}}\mathcal{L}f+\frac{1}{2}e^{-t}\mathcal{L}^2v(t,x), \quad t>0,\ x\in\mathbb{R}^d, \end{equation} with initial condition $v(0,x)=f(x)$. \end{prop} \begin{corol} Let $Y$ be a telegraph process with parameters $\lambda>0$ and $v>0$, independent of $X$, and let $f\in D(\mathcal{L})$. Then, $u(t,x)=\mathbb{E} \left[f(X^x(|Y_t|))\right]$ is a solution of \begin{equation*} \begin{cases} \left(\frac{\partial^2}{\partial t^2}+2\lambda\frac{\partial}{\partial t}\right)u(t,x)=v^2p_Y(t,0,0)\mathcal{L}f(x)+v^2\mathcal{L}^2u(t,x), \quad t>0,\ x\in\mathbb{R}^d,\\ u(0,x)=f(x), \quad x\in\mathbb{R}^d. \end{cases} \end{equation*} \end{corol} \subsection{Proofs}\label{proofs} \noindent We now come to the proofs of Theorem \ref{FKitere}, Corollary \ref{corol:2} and Proposition \ref{OU}. We start with the proof of Theorem \ref{FKitere}, which relies on the following lemma proved in the Appendix (section \ref{appendix}). \begin{lem} \label{IPP} Let $g\in L^1(\mathbb{R})$ be differentiable with $g'\in L^1(\mathbb{R})$. Then for all $f\in B_0$, the function $F(g):x\mapsto \int_0^\infty g(s)P^sf(x)ds$ belongs to $D(\mathcal{L})$ and $$\mathcal{L}F(g)=-F(g')-g(0)f.$$ More generally, if $f\in D(\mathcal{L}^{n-1})$ and $g^{(k)}\in L^1(\mathbb{R})$ for all $k\in\{0,\ldots, n\}$, then $F(g)\in D(\mathcal{L}^n)$ and \begin{equation} \mathcal{L}^nF(g)=(-1)^nF(g^{(n)})-\sum_{l=0}^{n-1}(-1)^{n-1-l}g^{(n-1-l)}(0)\mathcal{L}^lf.
\label{formuleIPP} \end{equation} \end{lem} \medskip \noindent\textbf{Proof of Theorem \ref{FKitere}.} The independence of $X$ and $Y$ implies that \begin{equation*} v(t,x)=\int_\mathbb{R} p_Y(t,0,u)P^{|u|}f(x)du, \end{equation*} which, combined with $\mathcal{E}(p_Y)(t,0,u)=\frac{1}{2}(p_Y(t,0,u)+p_Y(t,0,-u))$, yields \begin{equation} v(t,x)=2\int_0^\infty \mathcal{E}(p_Y)(t,0,u)P^uf(x)du. \end{equation} For fixed $t>0$ define $g(\cdot):=\mathcal{E}(p_Y)(t,0,\cdot)$. Then $v(t,\cdot)$ coincides with $2F(g)$. From {\bf (A3)} we obtain \begin{eqnarray*} \sum_{k=1}^{p} \alpha_k(t)\frac{\partial^k}{\partial t^k}v(t,x)&=&2\int_0^\infty \sum_{k=1}^{p} \alpha_k(t)\frac{\partial^k}{\partial t^k}\mathcal{E}(p_Y)(t,0,u)P^u f(x)du\nonumber \\ &=&2\sum_{i=0}^{q}\sum_{j=1}^{r} \beta_{i,j}(t)\frac{\partial^i}{\partial t^i}\int_0^\infty \frac{\partial^j}{\partial u^j}\mathcal{E}(p_Y)(t,0,u)P^u f(x)du\\ &=& 2\sum_{i=0}^{q}\sum_{j=1}^{r} \beta_{i,j}(t)\frac{\partial^i}{\partial t^i} F(g^{(j)})(x),\label{intermediaire} \end{eqnarray*} where we make explicit the fact that $\alpha_k$ and $\beta_{i,j}$ depend only on time. Applying (\ref{formuleIPP}) to each $1\leq j\leq r$, we obtain \begin{equation*} F(g^{(j)})=(-1)^j{\cal L}^j F(g)+\sum_{\ell=0}^{j-1}(-1)^{\ell+1}\left. \frac{\partial^{j-1-\ell}}{\partial u^{j-1-\ell}}\mathcal{E}(p_Y)(t,0,u)\right|_{u=0}\mathcal{L}^\ell f. \end{equation*} We can now conclude using the fact that the even function $g\equiv \mathcal{E}(p_Y)(t,0,\cdot)$ satisfies \begin{equation*} \left. \frac{\partial^\ell}{\partial u^\ell}\mathcal{E}(p_Y)(t,0,u)\right|_{u=0}= \left\lbrace \begin{array}{lcl} \left. \frac{\partial^\ell}{\partial u^\ell}p_Y(t,0,u)\right|_{u=0} &\text{if $\ell$ even}\\ 0 & \text{if $\ell$ odd}\\ \end{array}\right. \end{equation*} for all $0\leq \ell\leq j-1$. \hfill \framebox[0.6em]{}\\ \bigskip \noindent {\bf Proof of Corollary \ref{corol:2}.} We deduce from Theorem \ref{FKitere} that $v\in D(\mathcal{L}^2)$ and that it satisfies (\ref{FKibm}).
Let $\tilde{v}$ be another solution of (\ref{FKibm}) belonging to $D(\mathcal{L}^2)$. Then $u=v-\tilde{v}\in D(\mathcal{L}^2)$ and $u$ is a solution of \begin{equation} \label{eq:44} \frac{\partial}{\partial t}u(t,x)=\frac{1}{2}\mathcal{L}^2u(t,x), \quad t>0,\ x\in\mathbb{R}^d, \end{equation} with initial condition $u(0,x)=0, \forall x\in \mathbb{R}^d$. Since $\mathcal{L}^2$ generates a strongly continuous semigroup on $B_0$ (see \cite{nag86}), the solution of equation (\ref{eq:44}) is unique and vanishes identically, which implies that $v\equiv \tilde{v}$.\hfill \framebox[0.6em]{} \bigskip \noindent{\bf Proof of Proposition \ref{OU}.} Since $p_Y(t,0,u)=\frac{1}{\sqrt{2\pi(1-e^{-t})}}e^{-\frac{u^2}{2(1-e^{-t})}}$, it satisfies the two following identities, \begin{equation*} u\, p_Y(t,0,u)=-(1-e^{-t})\frac{\partial}{\partial u}p_Y(t,0,u), \end{equation*} \begin{equation}\label{pdeOU} \frac{\partial}{\partial t}p_Y(t,0,u)=\frac{1}{2}e^{-t}\frac{\partial^2}{\partial u^2}p_Y(t,0,u), \quad \forall\, t>0,\, \forall\, u\in \mathbb{R}. \end{equation} The argument in the proof of Theorem \ref{FKitere} applies to (\ref{pdeOU}) instead of the original equation (\ref{FPOU}). Thus (\ref{eq:46}) is satisfied. (\ref{eq:46}) is indeed a particular case of (\ref{edpFKitere}) where $p=1, q=0, r=2$, $\beta_{0,1}=0,\, \beta_{0,2}=\frac{1}{2}e^{-t}, \alpha_1=1$. The boundary term $\mathcal{B}_f(t,x)$ is equal to $\frac{1}{2}e^{-t}p_Y(t,0,0)\mathcal{L}f(x)=\frac{e^{-t}}{2\sqrt{2\pi(1-e^{-t})}}\mathcal{L}f(x).$ \hfill \framebox[0.6em]{} \begin{rmq} Let $r_0:=\inf \{k\in \mathbb{N},\, \partial^k_u \, p_Y(t,0,u)|_{u=0}\not \equiv 0\}$. Then the assumption $f\in D(\mathcal{L}^{r-1})$ can be weakened and replaced by $f\in D(\mathcal{L}^{r-1-r_0})$ by a slight modification of equality (\ref{intermediaire}).
For instance, if $0\not \in \mathrm{supp}\, p_Y(t,0,\cdot)$ for all $t>0$, then we only need $f\in B_0$ and equation (\ref{edpFKitere}) becomes \begin{equation*} \sum_{k=1}^p \alpha_k \frac{\partial^k}{\partial t^k}v(t,x)=\sum_{i=0}^{q}\sum_{j=1}^{r} \beta_{i,j}(-1)^j\frac{\partial^i}{\partial t^i}\mathcal{L}^jv(t,x). \end{equation*} Obviously if $Y$ is a non-negative process and $p_Y$ satisfies an equation of the type (\ref{edpdensiteY}), then, removing the absolute value, $v(t,x):=\mathbb{E} \left[f(X^x(Y_t))\right]$ is a solution of (\ref{edpFKitere}). \end{rmq} \subsection{General position processes}\label{gyongy} In this subsection, the position process $X^x$ is not necessarily Markovian. Neither is the time process, for which we assume {\rm (A1)}-{\rm (A3)}. The position process $X^x$ is an It\^o process: it can be written as \begin{equation} X_t^x=x+\int_0^t \delta(s,\omega)dW_s+\int_0^t \beta(s,\omega)ds, \end{equation} where $(W(t),\mathcal{F}_t)$ is a Wiener process, and $\delta$ and $\beta$ are bounded $\mathcal{F}_t$-nonanticipative processes such that $\delta\delta^T$ is uniformly positive definite. \begin{thm} Let us keep the assumptions of Theorem \ref{FKitere} on $Y$. Let $\mathcal{L}$ be the differential operator defined by $$\mathcal{L}:=\frac{1}{2}\sigma(t,x)^2\frac{\partial^2}{\partial x^2}+b(t,x)\frac{\partial}{\partial x}$$ with \begin{eqnarray} \sigma(t,z)&:=&\left(\mathbb{E}(\delta \delta^T(t)\, |\, \ X_t^x=z)\right)^{1/2}, \label{eq:95}\\ b(t,z)&:=&\mathbb{E}(\beta(t)\, |\, \ X_t^x=z). \label{eq:96} \end{eqnarray} Let $f\in D(\mathcal{L}^{r-1})$ where $r$ is the highest order of the partial derivatives in $u$ appearing in (\ref{edpdensiteY}). Define $v(t,x):=\mathbb{E} \left[f(X^x(|Y_t|))\right]$.
Then $v\in D(\mathcal{L}^{r})$ and satisfies \begin{equation} \label{edpFKitere2} \sum_{k=1}^{p} \alpha_k \frac{\partial^k}{\partial t^k}v(t,x)=\sum_{i=0}^{q}\sum_{j=1}^{r}\beta_{i,j}(-1)^j\frac{\partial^i}{\partial t^i}\mathcal{L}^jv(t,x)+{\cal B}_f(t,x), \end{equation} on $]0,+\infty[\times \mathbb{R}^d$ where $\mathcal{L}$ acts on $x$ and ${\cal B}_f(t,x)$ is the boundary term \begin{equation} \sum_{i=0}^{q}\sum_{j=1}^{r}\, \, \beta_{i,j}\sum_{0\leq 2k\leq j-1} (-1)^{j-2k}\mathcal{L}^{j-1-2k}f(x)\left. \frac{\partial^{i+2k}}{\partial t^i \partial u^{2k}}p_Y(t,0,u)\right|_{u=0}. \end{equation} \end{thm} \begin{dem} Under the above assumptions on $\delta$ and $\beta$, by \cite{mimicking} there exists a process $({\cal X}_t)$, weak solution of \begin{equation} {\cal X}_t^x=x+\int_0^t \sigma(s,{\cal X}_s)dW_s+\int_0^t b(s,{\cal X}_s)ds, \end{equation} such that for all $t>0$, ${\cal X}_t^x$ and $X_t^x$ are identically distributed, where the coefficients $b$ and $\sigma$ are given by equations (\ref{eq:95}) and (\ref{eq:96}).\\ Then for all $t$, $\mathbb{E}(f(X_t^x))=\mathbb{E}(f({\cal X}^x_t))$. For $(Y_t)$ independent of $X$ and of ${\cal X}$, we have \begin{equation} v(t,x):=\mathbb{E} \left[f(X^x(|Y_t|))\right]=\mathbb{E} \left[f({\cal X}^x(|Y_t|))\right], \end{equation} therefore \begin{equation}\label{newposition} v(t,x)=\mathbb{E} \left[f({\cal X}^x(|Y_t|))\right]=2\int_0^\infty \mathcal{E}(p_Y)(t,0,u)P^uf(x)du, \end{equation} where $P^u$ in (\ref{newposition}) denotes the semigroup of ${\cal X}$ at time $u$. Hence under assumptions {\bf (A1)}-{\bf (A3)}, the result of Theorem \ref{FKitere} applies to ${\cal X}$, $Y$ and the function $v$, which satisfies PDE (\ref{edpFKitere2}) with ${\cal L}$ the infinitesimal generator of ${\cal X}$.\\ \end{dem} \section{Transformations of high order PDEs}\label{transformations} \noindent In this section we consider transformations between two high order PDEs.
We start with the use of intertwining diffusions to build such a transformation. \subsection{Intertwining diffusions}\label{sec:intertwining} \noindent Consider two diffusions $(X_t)_{t\geq 0}$ and $(U_t)_{t\geq 0}$ which take values in $\mathbb{R}^d$ and $ \mathbb{R}^n$ respectively, with respective semigroups $P_t^X$ and $P_t^U$. The diffusions $X$ and $U$ are said to be intertwined if there exists a density function $\Lambda$ such that the operator $L$ defined by \begin{equation} L\phi(z):=\int_{\mathbb{R}^d} \Lambda(z,x)\phi(x)dx, \quad \forall\, z\in\mathbb{R}^n, \end{equation} satisfies $P_t^U\, L=L\, P_t^X$ for all $t>0$. \bigskip \noindent Assume moreover that there exists a $\mathbb{R}^d\times \mathbb{R}^n$-valued diffusion process $(Z_1,Z_2)$ satisfying: \noindent (i) $X=Z_1$ and $U=Z_2$ in law, \noindent (ii) $\mathbb{E}(f(Z_1(0))|Z_2(0)=z_2)=Lf(z_2)$, \noindent (iii) for any $t>0$ the random variables $Z_2(0)$ and $Z_1(t)$ are conditionally independent given $Z_1(0)$, \noindent (iv) for any $t>0$ the random variables $Z_2(0)$ and $Z_1(t)$ are conditionally independent given $Z_2(t)$. \noindent In this case it is proved in \cite{pal} that $\Lambda$ is a distributional solution of the hyperbolic PDE \begin{equation}\label{edppourLambda} ({\cal L}_X)^* \; \Lambda={\cal L}_U\, \Lambda, \end{equation} where $\mathcal{L}_X$ and $\mathcal{L}_U$ denote the infinitesimal generators of $X$ and $U$. \bigskip \noindent In the following theorem we show that intertwining preserves the structure of (\ref{edpFKitere}), provided that we iterate $X$ and $U$ by the same time process. Let us recall that if $X^x$, $Y$ and $f$ satisfy the assumptions of Theorem \ref{FKitere}, then $v(t,x):=\mathbb{E} \left[f(X^x(|Y_t|))\right]$ is a solution of (\ref{edpFKitere}) with initial condition $v(0,\cdot)\equiv f(\cdot)$.
We rewrite (\ref{edpFKitere}) as \begin{equation} \sum_{k=1}^{p} \alpha_k \frac{\partial^k}{\partial t^k}v(t,x)=Q(\frac{\partial}{\partial t},\mathcal{L}_X)v(t,x)+\mathcal{B}_{f,\mathcal{L}_X}(t,x), \end{equation} where the polynomial $Q(\frac{\partial}{\partial t},\mathcal{L}_X)$ coincides with $\sum_{i=0}^{q}\sum_{j=1}^{r}\beta_{i,j}(-1)^j\frac{\partial^i}{\partial t^i}\mathcal{L}_X^j$. \begin{thm}\label{intertwining} Let $X^x$ be a diffusion. Assume that $X^x$, $Y$ and $f$ satisfy the assumptions of Theorem \ref{FKitere}. Let $U^x$ be another diffusion, independent of $Y$, such that $X$ and $U$ are intertwined. Define $g:=Lf$ and $h(t,x):=\mathbb{E}(g(U^x_{|Y_t|}))$. Then $h$ belongs to $\mathcal{D}((\mathcal{L}_U)^{r-1})$ (where $r$ is the highest order of the partial derivatives in $u$ appearing in (\ref{edpdensiteY})) and satisfies the PDE \begin{equation} \label{eq:83} \sum_{k=1}^{p} \alpha_k \frac{\partial^k}{\partial t^k}h(t,x)=Q(\frac{\partial}{\partial t},\mathcal{L}_U)h(t,x)+\mathcal{B}_{g,\mathcal{L}_U}(t,x), \end{equation} on $]0,+\infty[\times \mathbb{R}^n$, with initial condition $h(0,\cdot)\equiv g(\cdot)$. \end{thm} \begin{rmq} An interesting point is that this theorem maps a solution defined on $\mathbb{R}_+\times \mathbb{R}^d$ into a solution defined on $\mathbb{R}_+\times \mathbb{R}^n$. A version without the boundary term $\mathcal{B}_{f}$ can be derived from Theorem \ref{edpsansf} by a slight modification of the proof. \end{rmq} \bigskip \noindent {\bf Proof of Theorem \ref{intertwining}.} From the relation $P_t^UL=LP_t^X$, we have for all $w\in\mathcal{D}(\mathcal{L}_X)$, $$\frac{P_t^ULw-Lw}{t}=\frac{LP_t^Xw-Lw}{t},$$ and by the boundedness of $L$, the right-hand side converges as $t\rightarrow 0$. Therefore so does the left-hand side, which implies that $Lw\in\mathcal{D}(\mathcal{L}_U)$. Hence, $f\in\mathcal{D}((\mathcal{L}_X)^{r-1})$ implies $Lf\in\mathcal{D}((\mathcal{L}_U)^{r-1})$.
Moreover, by definition, \begin{equation*} h(t,x)=\int_\mathbb{R} p_Y(t,0,\tau)\mathbb{E}(Lf(U_{|\tau|}^x))d\tau=\int_\mathbb{R} p_Y(t,0,\tau) \, P_{|\tau|}^U Lf(x) \, d\tau. \end{equation*} Using first that $P^U L=LP^X$, and then the kernel $\Lambda$, we obtain \begin{eqnarray*} h(t,x)&=&\int_\mathbb{R} p_Y(t,0,\tau) \, LP_{|\tau|}^X f(x) \, d\tau\\ &=&\int_\mathbb{R} p_Y(t,0,\tau) \, \int_{\mathbb{R}^d} \Lambda (x, \rho) P_{|\tau|}^X f(\rho) d\rho \, d\tau. \end{eqnarray*} We can apply Fubini's theorem, which yields \begin{equation*} h(t,x)=\int_{\mathbb{R}^d} \Lambda (x, \rho) \int_\mathbb{R} p_Y(t,0,\tau) \, P_{|\tau|}^X f(\rho) \, d\tau d\rho, \end{equation*} and by definition of $v$, we conclude that \begin{equation}\label{relationhv} h(t,\cdot)=\int_{\mathbb{R}^d} \Lambda (\cdot, \rho) v(t,\rho) d\rho. \end{equation} We apply $\sum_{k=1}^{p} \alpha_k \frac{\partial^k}{\partial t^k}$ to (\ref{relationhv}), exchange the order of $\frac{\partial^k}{\partial t^k}$ and the integral, and use the PDE satisfied by $v$ to obtain \begin{equation} \label{eq:50} \sum_{k=1}^{p} \alpha_k \frac{\partial^k}{\partial t^k} h(t,\cdot)=\int_{\mathbb{R}^d} \Lambda (\cdot, \rho) \left\{Q(\frac{\partial}{\partial t},\mathcal{L}_X)v(t,\rho)+\mathcal{B}_{f,\mathcal{L}_X}(t,\rho)\right\} \, d\rho. \end{equation} To simplify the exposition, we treat the monomial case $Q(M,N)=M^iN^j$ and $\mathcal{B}_{f,\mathcal{L}_X}(t,\rho)=(\mathcal{L}_X)^kf(\rho)\alpha(t)$; the general case follows by linearity. We now decompose the right-hand side of (\ref{eq:50}) into two parts. \begin{multline*} \int_{\mathbb{R}^d} \Lambda (\cdot, \rho) Q(\frac{\partial}{\partial t},\mathcal{L}_X)v(t,\rho) \, d\rho=\int_{\mathbb{R}^d} \Lambda (\cdot, \rho) \frac{\partial^i}{\partial t^i}(\mathcal{L}_X)^jv(t,\rho) \, d\rho\\ =\frac{\partial^i}{\partial t^i}\int_{\mathbb{R}^d} \Lambda (\cdot, \rho) (\mathcal{L}_X)^jv(t,\rho) \, d\rho. \end{multline*} Taking the adjoint brings the term $({\cal L}_X)^*\Lambda(\cdot,\rho)$ under the integral.
By assumption (\ref{edppourLambda}), applied $j$ times, we conclude that \begin{equation*} \frac{\partial^i}{\partial t^i}\int_{\mathbb{R}^d} \Lambda (\cdot, \rho) (\mathcal{L}_X)^jv(t,\rho) \, d\rho= \frac{\partial^i}{\partial t^i}\int_{\mathbb{R}^d} (\mathcal{L}_U)^j\Lambda (\cdot, \rho) v(t,\rho) \, d\rho. \end{equation*} Exchanging $(\mathcal{L}_U)^j$ and the integral, we obtain \begin{equation*} \frac{\partial^i}{\partial t^i}\int_{\mathbb{R}^d} \Lambda (\cdot, \rho) (\mathcal{L}_X)^jv(t,\rho) \, d\rho= \frac{\partial^i}{\partial t^i}(\mathcal{L}_U)^j\int_{\mathbb{R}^d} \Lambda (\cdot, \rho) v(t,\rho) \, d\rho. \end{equation*} So, by definition of $h$, we have shown \begin{equation} \label{eq:51} \int_{\mathbb{R}^d} \Lambda (\cdot, \rho) Q(\frac{\partial}{\partial t},\mathcal{L}_X)v(t,\rho) \, d\rho= Q(\frac{\partial}{\partial t},\mathcal{L}_U)h(t,\cdot). \end{equation} For the boundary term, \begin{multline*} \int_{\mathbb{R}^d} \Lambda (\cdot, \rho) \mathcal{B}_{f,\mathcal{L}_X}(t,\rho) \, d\rho=\int_{\mathbb{R}^d} \Lambda (\cdot, \rho) (\mathcal{L}_X)^kf(\rho)\alpha(t) \, d\rho\\ =\int_{\mathbb{R}^d} (\mathcal{L}_U)^k\Lambda (\cdot, \rho) f(\rho)\alpha(t) \, d\rho =\alpha(t)(\mathcal{L}_U)^k\int_{\mathbb{R}^d} \Lambda (\cdot, \rho) f(\rho) \, d\rho\\ =\alpha(t)(\mathcal{L}_U)^k g(\cdot)=\mathcal{B}_{g,\mathcal{L}_U}(t,\cdot). \end{multline*} This, combined with (\ref{eq:50}) and (\ref{eq:51}), gives the announced PDE for $h$ thanks to (\ref{relationhv}). In all the identities above, ${\cal L}_X$ acts on $\rho$ whereas ${\cal L}_U$ acts on the other variable, represented by a $\cdot$ (as in $(\mathcal{L}_U)^k\Lambda(\cdot, \rho)$). \hfill \framebox[0.6em]{}\\ \bigskip \noindent As an example, let $X=X_\alpha$ and $U=X_\alpha+X_\beta$ with $X_\alpha$ and $X_\beta$ two independent squared-Bessel processes of respective dimensions $2\alpha>0$ and $2\beta>0$ (cf. \cite{pal}).
Their semigroups are intertwined, with \begin{equation*} \Lambda(y,x)=\frac{y^{-1}}{B(\alpha,\beta)}\left(\frac{x}{y}\right)^{\alpha-1}\left(1-\frac{x}{y}\right)^{\beta-1}\mathbf{1}_{0<x<y}. \end{equation*} Here $B$ denotes the Beta function. The infinitesimal generators are given by \begin{equation*} \mathcal{L}_X=2x\partial_{xx}+2\alpha\partial_x\quad \text{and}\quad \mathcal{L}_U=2y\partial_{yy}+2(\alpha+\beta)\partial_y. \end{equation*} Let $f$ be a real function of class $C^2$ with compact support, so that $f\in \mathcal{D}(\mathcal{L}_X)$, and let $W$ be a Brownian motion independent of $X$ and $U$. Then from Corollary \ref{corol:2}, $v(t,x):=\mathbb{E} \left[f(X^x(|W_t|))\right]$ is the unique solution in $\mathcal{D}((\mathcal{L}_X)^2)$ of \begin{equation} \begin{cases} \frac{\partial}{\partial t}v(t,x)=\frac{1}{\sqrt{2\pi t}}\mathcal{L}_Xf(x)+\frac{1}{2}(\mathcal{L}_X)^2v(t,x), \quad t>0,\ x\in\mathbb{R}\\ v(0,x)=f(x), \quad x\in\mathbb{R} \end{cases} \end{equation} and from Theorem \ref{intertwining}, $h(t,x):=\mathbb{E} \left[g(U^x(|W_t|))\right]$ is the unique solution in $\mathcal{D}((\mathcal{L}_U)^2)$ of \begin{equation} \begin{cases} \frac{\partial}{\partial t}h(t,x)=\frac{1}{\sqrt{2\pi t}}\mathcal{L}_Ug(x)+\frac{1}{2}(\mathcal{L}_U)^2h(t,x), \quad t>0,\ x\in\mathbb{R}\\ h(0,x)=g(x), \quad x\in\mathbb{R} \end{cases} \end{equation} where $g(x):=Lf(x)=\frac{1}{B(\alpha,\beta)}\int_0^1f(x\rho)\rho^{\alpha-1}(1-\rho)^{\beta-1}d\rho.$ \subsection{Mapping a high order PDE into another one} \medskip \noindent In this section, we consider two initial-boundary value problems of high order (cf. \cite{bragg}). We will construct a mapping that transforms one of these problems into the other one. We show that this mapping can be expressed using a Feynman-Kac formula. If $x=(x_1,\ldots,x_n)$ is a point in ${\mathbb{R}}^n$ and $\phi$ is a smooth function, we write $D_i\phi(x):=\partial \phi(x)/\partial x_i$.
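Returning for a moment to the squared-Bessel example above, the transformed initial condition $g=Lf$ can be checked numerically. The sketch below (illustrative parameter values and a simple midpoint rule, not from the paper) verifies that $L$ maps the constant function $1$ to $1$, and rescales $f(x)=x$ by the mean $\alpha/(\alpha+\beta)$ of a Beta$(\alpha,\beta)$ variable.

```python
# Numerical sketch of g = Lf for the Beta link kernel of the squared-Bessel
# example (alpha, beta and the midpoint rule are illustrative choices).
import math

def g_of(f, x, alpha, beta, m=20000):
    """Midpoint-rule quadrature of
       g(x) = (1/B(alpha,beta)) * int_0^1 f(x*r) r^(alpha-1) (1-r)^(beta-1) dr."""
    B = math.gamma(alpha) * math.gamma(beta) / math.gamma(alpha + beta)
    h = 1.0 / m
    s = sum(f(x * r) * r ** (alpha - 1) * (1 - r) ** (beta - 1)
            for r in ((k + 0.5) * h for k in range(m)))
    return s * h / B

# Sanity checks: L1 = 1, and for f(x) = x the output is alpha/(alpha+beta) * x.
print(g_of(lambda u: 1.0, 1.0, 2.0, 3.0))   # ~ 1.0
print(g_of(lambda u: u, 1.0, 2.0, 3.0))     # ~ 0.4
```

The midpoint rule avoids the endpoints, so the sketch also tolerates the integrable endpoint singularities that arise when $\alpha<1$ or $\beta<1$.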
We need compositions of the form $D^\alpha:=D_1^{\alpha_1}D_2^{\alpha_2}\cdots D_k^{\alpha_k}$ where $k$ and the $\alpha_i$ are integers. Finally we denote by $P(x,D)$ any polynomial in the partial derivatives of the form \begin{equation*} P(x,D):=\sum_{0\leq |\alpha| \leq m }a_\alpha(x)D^\alpha, \end{equation*} where the $a_\alpha(x)$ are given functions of $x$. The first initial-boundary value problem is \begin{equation}\label{eq:48} \left\lbrace \begin{array}{lcl} \partial^2 v(x,t)/\partial t^2=P(x,D)v(x,t), &t>0,\\ v(x,0)=0, \quad v_t(x,0)=\phi(x),\\ B(x,D)v(x,t)=g(x,t),&x\in {\cal S},\, t>0, \end{array}\right. \end{equation} \noindent where $B(x,D)$ is a non tangential boundary operator and ${\cal S}:=\{ x; S(x)=0\}$, for some function $S$, denotes a cylindrical surface. \begin{thm}\label{transform} Suppose that (\ref{eq:48}) admits a solution $v(x,t)$. Let $B$ be a Brownian motion and suppose that the function $u(x,t):=\frac{\partial^{1/2}}{\partial t^{1/2}}\mathbb{E} (v(x,|B_{2t}|))$ is well defined. Then $u(x,t)$ solves the following problem \begin{equation}\label{eq:49} \left\lbrace \begin{array}{lcl} \partial u(x,t)/\partial t=P(x,D)u(x,t), &t>0,\\ u(x,0)=\phi(x),\\ B(x,D)u(x,t)=f(x,t),&x\in {\cal S},\, t>0, \end{array}\right. \end{equation} where $f(x,t):=\frac{1}{2\sqrt{\pi}t^{3/2}}\int_0^\infty\xi e^{-\xi^2/4t}g(x,\xi)d\xi$. \end{thm} In Theorem \ref{transform} the derivative appearing in $u(x,t)$ is a particular case of the Caputo fractional derivative. Take a function $f$ and a positive real $\gamma$. If $m-1<\gamma<m$ for some integer $m$, the Caputo derivative of $f$ at order $\gamma$ is \begin{equation*} \frac{\partial^\gamma}{\partial t^\gamma}f(t):=\frac{1}{\Gamma(m-\gamma)}\int_0^t \frac{f^{(m)}(u)}{(t-u)^{1+\gamma-m}}du. \end{equation*} If $\gamma$ is an integer, $\frac{\partial^\gamma}{\partial t^\gamma}f(t)$ is the usual derivative $f^{(\gamma)}$. \medskip \noindent {\bf Proof of Theorem \ref{transform}.} Recall that $\Gamma(1/2)=\sqrt{\pi}$.
Then for $\gamma=1/2$, \begin{equation*} \frac{\partial^{1/2}}{\partial t^{1/2}}\frac{e^{-\xi^2/4t}}{\sqrt{4\pi t}}=\frac{1}{\sqrt{\pi}}\int_0^t \frac{\partial}{\partial u}\left(\frac{e^{-\xi^2/4u}}{\sqrt{4\pi u}}\right)\cdot \frac{1}{\sqrt{t-u}}du=\frac{\xi e^{-\xi^2/4t}}{4\sqrt{\pi}t^{3/2}}. \end{equation*} Let $w(t,x):=\mathbb{E} (v(x,|B_{2t}|))=\frac{2}{\sqrt{4\pi t}}\int_0^\infty e^{-\xi^2/4t}v(x,\xi)d\xi$. Then, \begin{align*} \frac{\partial^{1/2}}{\partial t^{1/2}}w(t,x)&=\frac{1}{\sqrt{\pi}}\int_0^t \frac{\partial}{\partial u}\left(\frac{2}{\sqrt{4\pi u}}\int_0^\infty e^{-\xi^2/4u}v(x,\xi)d\xi\right)\cdot \frac{1}{\sqrt{t-u}}du\\ &=\frac{2}{\sqrt{\pi}}\int_0^\infty \int_0^t\frac{\partial}{\partial u}\left(\frac{1}{\sqrt{4\pi u}} e^{-\xi^2/4u}\right)\cdot \frac{1}{\sqrt{t-u}}du\ v(x,\xi)d\xi\\ &=\frac{1}{2\sqrt{\pi}}\int_0^\infty \frac{\xi e^{-\xi^2/4t}}{t^{3/2}} v(x,\xi)d\xi. \end{align*} From \cite{bragg}, $u(x,t)=\frac{1}{2\sqrt{\pi}t^{3/2}}\int_0^\infty\xi e^{-\xi^2/4t}v(x,\xi)d\xi$ is a solution of (\ref{eq:49}).\hfill \framebox[0.6em]{} \subsection{Time change.} \begin{thm}\label{changementdetemps} Let $\alpha:\mathbb{R} \rightarrow {\mathbb{R}}^+$ be a positive, increasing and differentiable function. Let $X$ be an $\mathbb{R}^d$-valued process and $Y$ a real-valued process satisfying the assumptions of Section \ref{algorithme}, such that $\sigma_Y$ and $b_Y$ in $(\ref{edsY})$ depend only on time. Let us define \begin{equation*} \tilde{\sigma}_X(t,x):=\sigma_X(t,x)\sqrt{\alpha'(\alpha^{-1}(t))},\quad {\tilde b}_X(t,x):=b_X(t,x)\alpha'(\alpha^{-1}(t)), \end{equation*} and $\tilde{a}_X:=\tilde{\sigma}_X \tilde{\sigma}_X^T$. Consider the iterated process $Z:=X(\alpha(Y_t))$.
Its density $p_Z(t,x,z)$ satisfies \begin{equation*} \frac{\partial}{\partial t}p_Z(t,x,z)=\frac{1}{2}a_Y(t)\Gamma^2 p_Z(t,x,z)+b_Y(t)\Gamma p_Z(t,x,z), \end{equation*} where the differential operator $\Gamma$ acts on smooth functions $\varphi$ by \begin{equation*} \Gamma\varphi(z):=\frac{1}{2}\sum_{i,j=1}^d\frac{\partial^2}{\partial z_i\partial z_j} \left[\tilde{a}^{i,j}_X(z)\varphi(z)\right]-\sum_{i=1}^d\frac{\partial}{\partial z_i} \left[\tilde{b}^i_X(z)\varphi(z)\right]. \end{equation*} \end{thm} For instance, we can take $\alpha(x):=e^x$.\\ \noindent {\bf Proof of Theorem \ref{changementdetemps}.} By independence of $X$ and $Y$, the density of $Z$ is given by $$p_Z(t,x,z)=\int_\mathbb{R} p_Y(t,0,y)p_X(\alpha(y),x,z)dy.$$ Using the Fokker-Planck equation for $p_Y$ (whose coefficients depend only on time) and integrating by parts in $y$, we then have \begin{align*} \frac{\partial}{\partial t}p_Z(t,x,z)&=\int_\mathbb{R} \frac{\partial}{\partial t}p_Y(t,0,y)p_X(\alpha(y),x,z)dy\\ &=\int_\mathbb{R} \left(\frac{1}{2}a_Y(t)\frac{\partial^2}{\partial y^2}-b_Y(t)\frac{\partial}{\partial y}\right)p_Y(t,0,y)\, p_X(\alpha(y),x,z)dy\\ &=\frac{1}{2}a_Y(t)\int_\mathbb{R} p_Y(t,0,y)\frac{\partial^2}{\partial y^2}p_X(\alpha(y),x,z)dy\\ &\quad +b_Y(t)\int_\mathbb{R} p_Y(t,0,y)\frac{\partial}{\partial y}p_X(\alpha(y),x,z)dy. \end{align*} \noindent Now, since $\alpha'(y)a^{i,j}_X(\alpha(y),z)=\tilde{a}^{i,j}_X(z)$ and $\alpha'(y)b^i_X(\alpha(y),z)=\tilde{b}^i_X(z)$, using the forward Kolmogorov (or Fokker-Planck) equation for $p_X$, we are able to conclude, noting also that $\frac{\partial^2}{\partial y^2}p_X(\alpha(y),x,z)=\Gamma^2 p_X(\alpha(y),x,z)$ because the coefficients of $\Gamma$ do not depend on $y$; indeed, \begin{align*} \frac{\partial}{\partial y}p_X(\alpha(y),x,z)&=\alpha'(y)\left.\frac{\partial}{\partial t}p_X(t,x,z)\right|_{t=\alpha(y)}\\ &=\frac{1}{2}\alpha'(y)\sum_{i,j=1}^d\frac{\partial^2}{\partial z_i\partial z_j} \left[a^{i,j}_X(\alpha(y),z)p_X(\alpha(y),x,z)\right]\\ &-\alpha'(y)\sum_{i=1}^d\frac{\partial}{\partial z_i} \left[b^i_X(\alpha(y),z)p_X(\alpha(y),x,z)\right]\\ &=\Gamma p_X(\alpha(y),x,z).
\end{align*} \hfill \framebox[0.6em]{} \section{Position process indexed by the real line.}\label{funakipde} \medskip \medskip \noindent The PDEs that we have associated to iterated processes so far exhibit terms depending on the initial value (cf. Theorem \ref{FKitere}, where $f(\cdot)\equiv v(0,\cdot)$ and some of its derivatives appear on the right-hand side of (\ref{edpFKitere})). The PDEs obtained in \cite{allouba} and \cite{erkan} for iterated processes have the same drawback, due to the use of the absolute value of the time process. Another type of PDE, without this drawback, is obtained in \cite{funaki}, but the underlying iterated process takes values in the complex plane. Extending the construction (\ref{realline}), we obtain in this section a Feynman-Kac formula in which the initial value no longer appears in the PDE, with a real-valued underlying process. \medskip \noindent Let us consider two Markov processes $(X_+(t))_{t\geq 0}$ and $(X_-(t))_{t\geq 0}$ with infinitesimal generators ${\cal L}_+$ and ${\cal L}_-$ respectively, such that $X_+$ (resp. $X_-$) takes values in $\mathbb{R}^d$ (resp. $\mathbb{R}^n$). Inspired by Funaki's construction \cite{funaki}, we define the $\mathbb{R}^{d+n}$-valued process $(X^{(x_1,x_2)}_t)_{t\in \mathbb{R}}$ starting from $(x_1,x_2)\in \mathbb{R}^d\times\mathbb{R}^n$ and defined for every real time index by \begin{equation} \label{XindexeparR} X^{(x_1,x_2)}_t:=\left\lbrace \begin{array}{lcl} (X_+^{x_1}(t),x_2)& {\rm if} \, t\geq 0,\\ (x_1,X_-^{x_2}(-t))& {\rm if} \, t<0. \end{array}\right. \end{equation} Note that in \cite{funaki} the resulting process takes values in the complex plane, whereas each component of our $X^{(x_1,x_2)}$ is real valued.
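The construction (\ref{XindexeparR}) is easy to simulate: for $t\geq 0$ the first coordinate runs $X_+$ over $[0,t]$, for $t<0$ the second coordinate runs $X_-$ over $[0,-t]$. A minimal sketch, in which both $X_+$ and $X_-$ are taken to be Brownian motions purely for illustration:

```python
# Sketch of the real-line-indexed position process X_t^{(x1,x2)}:
# one coordinate moves while the other stays frozen at its start value.
import random

def brownian_endpoint(start, duration, n=500, rng=None):
    """Euler simulation of a Brownian motion value at time `duration`."""
    rng = rng or random
    h = duration / n
    x = start
    for _ in range(n):
        x += rng.gauss(0.0, h ** 0.5)
    return x

def X(t, x1, x2, rng=None):
    """One realisation of X_t^{(x1,x2)} as in the construction above."""
    if t >= 0:
        return (brownian_endpoint(x1, t, rng=rng), x2)
    return (x1, brownian_endpoint(x2, -t, rng=rng))

random.seed(0)
assert X(0.0, 1.0, 2.0) == (1.0, 2.0)   # duration 0: nothing moves
assert X(1.0, 1.0, 2.0)[1] == 2.0       # t > 0 leaves x2 frozen
assert X(-1.0, 1.0, 2.0)[0] == 1.0      # t < 0 leaves x1 frozen
```

Iterating this process by a real-valued time process $Y_t$, as in the theorem below, then amounts to evaluating `X` at the random argument $Y_t$.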
\medskip \noindent We will be interested in bounded functions $f:\mathbb{R}^d\rightarrow \mathbb{R}$ which admit an extension $\tilde{f}:\mathbb{R}^{d+n}\rightarrow \mathbb{R}$ satisfying \begin{eqnarray}\label{extension} &&(i)\,\, \, \forall x_1\in \mathbb{R}^d,\, \tilde{f}(x_1,0)=f(x_1),\\ &&(ii)\,\, \, \forall x_2\in \mathbb{R}^n,\, \tilde{f}(\cdot,x_2)\in {\cal D}({\cal L}_+),\nonumber\\ &&(iii)\,\, \, \forall x_1\in \mathbb{R}^d,\, \tilde{f}(x_1,\cdot)\in {\cal D}({\cal L}_-),\nonumber\\ &&(iv)\,\, \, \forall (x_1,x_2)\in \mathbb{R}^d\times\mathbb{R}^n,\, (\mathcal{L}_++\mathcal{L}_-)\tilde{f}(x_1,x_2)=0.\nonumber \end{eqnarray} In (iv), the operator $\mathcal{L}_+$ (resp. $\mathcal{L}_-$) acts on $x_1$ (resp. $x_2$). Let us stress the resemblance between (iv) and the intertwining identity (\ref{edppourLambda}) in Section \ref{sec:intertwining}. \bigskip \subsection{Main statement} \begin{thm} \label{edpsansf} Let $X$ be defined by (\ref{XindexeparR}). Let $Y$ be a real-valued continuous process, independent of $X_+$ and $X_-$, such that $Y_0=0$. Let us assume that $Y$ admits a density $p_Y(t,0,y)$ which satisfies \begin{equation*} \frac{\partial}{ \partial t}p_Y(t,0,y)=P(t,\frac{\partial}{ \partial t},\frac{\partial}{ \partial y})p_Y(t,0,y),\, \, \, \forall\, t>0,\, \forall\, y\in \mathbb{R}, \end{equation*} for some polynomial $P$ whose coefficients do not depend on $y$. Suppose moreover that $\frac{\partial^k}{ \partial t^k}p_Y(t,0,\cdot)$ is integrable for every $k$ up to the degree of $P$ in its second variable. Then \begin{equation*}\label{sansf} v(t,x):=\mathbb{E}\left[\tilde{f}(X^{(x,0)}_{Y_t})\right], \end{equation*} is a solution of the PDE \begin{equation}\label{FK} \frac{\partial}{\partial t}v(t,x)=P(t,\frac{\partial}{ \partial t},-\, \mathcal{L}_+)v(t,x),\quad \forall \, x\in \mathbb{R}^d,\, \forall\, t>0, \end{equation} with initial condition $v(0,x)=f(x)$, where $\mathcal{L}_+$ acts on $v$ as a function of $x$.
\end{thm} \bigskip \noindent Note that if $X_+$ and $X_-$ are independent and $f$ satisfies assumptions (\ref{extension}), then for all $t>0$ the function $x\mapsto \mathbb{E} \tilde{f}(X_+^x(t),X_-^0(t))$ satisfies (\ref{extension}) too, with an extension given by $(x,y)\mapsto \mathbb{E} \tilde{f}(X_+^x(t),X_-^y(t))$.\\ Theorem \ref{edpsansf} applies if we choose for instance $f(x):=e^{-x^2}$ with its extension $\tilde{f}(x,y):=e^{-x^2}\cdot \sqrt{1+y^2}$ (cf. (\ref{extension})), and independent diffusions $X_+$ and $X_-$ with infinitesimal generators $\mathcal{L}_+g(x)=\frac{1}{2}\partial_{xx}g(x)+x\partial_xg(x)$ and $\mathcal{L}_-g(y)=(1+y^2)\partial_{yy}g(y)+y\partial_yg(y)$: indeed $\mathcal{L}_+e^{-x^2}=-e^{-x^2}$ and $\mathcal{L}_-\sqrt{1+y^2}=\sqrt{1+y^2}$, so that condition (iv) of (\ref{extension}) holds. \bigskip \noindent Let us mention that the assumptions can be weakened and $\tilde{f}$ can be unbounded when $X_-$ or $X_+$ is a diffusion process. For instance, let $X_+$, $X_-$, $Y$ be independent, $X_+$ be an Ornstein-Uhlenbeck process and $X_-$ and $Y$ be two Brownian motions. In this case $\mathcal{L}_+=\frac{1}{2}\frac{\partial^2}{\partial x^2}-\frac{x}{2}\frac{\partial}{\partial x}$ and $\mathcal{L}_-=\frac{1}{2}\frac{\partial^2}{\partial y^2}$. Let $f(x):=x$ be extended to $\tilde{f}(x,y):=x\cosh(y)$. Then $(\mathcal{L}_++\mathcal{L}_-)\tilde f(x,y)=0$ and $\tilde{f}(x,0)=f(x)=x$, $\forall \, x,y\in \mathbb{R}$. Then for $t>0$, \begin{equation*} \mathbb{E}\left[\tilde{f}(X_t^{(x,0)})\right]=\mathbb{E}\left[\tilde{f}(X_+^x(t),0)\right]=\mathbb{E}\left[X_+^x(t)\right]=xe^{-t/2}. \end{equation*} If $t<0$, \begin{equation*} \mathbb{E}\left[\tilde{f}(X_t^{(x,0)})\right]=\mathbb{E}\left[\tilde{f}(x,X_-^0(-t))\right]=\mathbb{E}\left[x\cosh(X_-^0(-t))\right]=xe^{-t/2}. \end{equation*} Therefore \begin{equation*} \forall\, t\in \mathbb{R}, \quad \mathbb{E}\left[\tilde{f}(X^{(x,0)}_t)\right]=xe^{-t/2}. \end{equation*} Then, iterating by $Y_t$ and setting $v(t,x):=\mathbb{E}\left[\tilde{f}(X^{(x,0)}(Y_t))\right]$, we obtain $v(t,x)=\mathbb{E}\left[xe^{-Y_t/2}\right]=xe^{t/8}$, which indeed satisfies \begin{eqnarray*} \frac{\partial}{\partial t}v(t,x)&=&\frac{1}{2}(\mathcal{L}_+)^2v(t,x)\\ &=&\frac{1}{8}(\partial_x^4-2x\partial_x^3+(x^2-2)\partial_x^2+x\partial_x)v(t,x), \end{eqnarray*} for all $x\in \mathbb{R}$ and $t>0$, with initial condition $v(0,x)=x$, as stated in (\ref{FK}). \bigskip \noindent {\bf Proof of Theorem \ref{edpsansf}.} For $t\in\mathbb{R}$ and $x\in \mathbb{R}^d$, define $\psi(x,t):=\mathbb{E}\left[\tilde{f}(X^{(x,0)}_t)\right]$. Remember that the notation $X^{(x,0)}$ implies that $X_-$ starts at $0$; this is why we write $X_-^0$ below. We start by proving that \begin{equation}\label{FKpourX} \frac{\partial}{\partial t}\psi(x,t)=\mathcal{L}_+\psi(x,t),\quad \forall \, t\in \mathbb{R}, \, \, \forall \, x\in\mathbb{R}^d. \end{equation} If $t>0$, then $\psi(x,\cdot)=\mathbb{E}(f(X_+^x(\cdot)))$ on an open neighborhood of $t$ and therefore $\frac{\partial}{\partial t}\psi(x,t)=\mathcal{L}_+\psi(x,t).$ \noindent If $t<0$, for all $s$ in an open interval containing $t$, we have $\psi(x,s)=\mathbb{E}(\tilde{f}(x,X_-^0(-s)))=\mathbb{E}(g(X_-^0(-s)))=P_-^{-s}g\, (0)$ with $g: x_2\mapsto \tilde {f}(x,x_2)$, where $x$ plays the role of a parameter. Therefore $\frac{\partial}{\partial t}\psi(x,t)=-\, P_-^{-t}\, {\cal L}_- \, g\, (0).$ The operator ${\cal L}_-$ acts on the second variable $x_2$. With the notations of (\ref{extension}), ${\cal L}_- \, g(x_2)$ coincides with ${\cal L}_- \, \tilde{f}(x,x_2)$, so using $(iv)$ of (\ref{extension}) we obtain $-{\cal L}_- \, \tilde{f}(x,x_2)={\cal L}_+\tilde{f}(x,x_2)$ where ${\cal L}_+$ acts only on the first variable $x$.
We conclude that $\frac{\partial}{\partial t}\psi(x,t)=P_-^{-t}\, {\cal L}_+\, g\, (0)={\cal L}_+\, P_-^{-t}\, g\, (0)$, the latter identity being true since ${\cal L}_+$ acts only on $x$. \noindent It remains to study the case $t=0$. Previously we obtained that $\frac{\partial}{\partial t}\psi(x,t)=\mathcal{L}_+\, P_+^t\, f\ (x)$ for $t>0\, $ and $\frac{\partial}{\partial t} \psi(x,t)=\mathcal{L}_+\, P_-^{-t}\, g\, (0)$ for $t<0\, $. Both one-sided derivatives admit the same limit $\mathcal{L}_+\, f\, (x)$ as $t\rightarrow 0$, since $\mathcal{L}_+$ acts only on $x$. Hence $t\mapsto \psi(x,t)$ is differentiable at $t=0$ with derivative $\mathcal{L}_+\, f\, (x)$, which coincides with $\mathcal{L}_+\, \psi\, (x,0)$. Hence (\ref{FKpourX}) is proved. \noindent We now consider $v(t,x):=\mathbb{E}\left[\tilde{f}(X^{(x,0)}_{Y_t})\right]$. Then $v(t,x)=\int_\mathbb{R} p_Y(t,0,s)\psi(x,s)ds$ by independence of $X$ and $Y$, and the following identities hold: \begin{eqnarray*} \frac{\partial}{\partial t}v(t,x)&=&\frac{\partial}{\partial t}\int_\mathbb{R} p_Y(t,0,s)\psi(x,s)ds\\ &=&\int_\mathbb{R} P(t,\frac{\partial}{\partial t},\frac{\partial}{\partial s})p_Y(t,0,s)\psi(x,s)ds. \end{eqnarray*} If we perform successive integrations by parts on each term involving $\frac{\partial}{\partial s}$ and its powers that appear in $P$, we see that ${(-1)}^k\, \int_\mathbb{R} \frac{\partial^k}{\partial s^k}p_Y(t,0,s)\psi(x,s)ds=\int_\mathbb{R} p_Y(t,0,s)\frac{\partial^k}{\partial s^k}\psi(x,s)ds$ for every integer $k$. Then we conclude using (\ref{FKpourX}) that \begin{equation*} \int_\mathbb{R} \frac{\partial^k}{\partial s^k}p_Y(t,0,s)\psi(x,s)ds=(-1)^k\, \int_\mathbb{R} p_Y(t,0,s)({\cal L_+})^k\psi(x,s)ds, \end{equation*} where ${\cal L_+}$ acts on $x$. In this way we have been able to separate the variables $t$ and $x$.
Therefore \begin{equation*} \frac{\partial}{\partial t}\mathbb{E}\left[\tilde{f}(X^{(x,0)}(Y_t))\right]=P(t,\frac{\partial}{\partial t},-\, \mathcal{L}_+) \int_\mathbb{R} p_Y(t,0,s)\psi(x,s)ds. \end{equation*} This is (\ref{FK}) that we wanted to prove. \hfill \framebox[0.6em]{} \subsection{Application of Theorem \ref{edpsansf}} \begin{corol}\label{FKwithkilling} Let us keep the assumptions and notations of Theorem \ref{edpsansf}. Furthermore, we assume that $X_-$ is such that the set $\{s\in [0,t],\, X_-(s)=0\}$ has Lebesgue measure zero $\P$-almost surely for all $t>0$. Let $c_+:\mathbb{R}^d\rightarrow\mathbb{R}_+$ and $c_-:\mathbb{R}^n\rightarrow\mathbb{R}_+$ be two continuous bounded functions. Let $f$ satisfy assumptions (i) to (iii) of (\ref{extension}) and the relation $$\forall (x_1,x_2)\in \mathbb{R}^d\times\mathbb{R}^n,\quad (\mathcal{L}_+-c_+(x_1)+\mathcal{L}_--c_-(x_2))\tilde{f}(x_1,x_2)=0.$$ Set $c(x_1,x_2)=c_+(x_1)\mathbf{1}_{x_2=0}-c_-(x_2)\mathbf{1}_{x_2\neq 0}$; then \\$\nu(t,x):=\mathbb{E} \left[ \exp \left\{ -\int_0^{Y_t} c(X^{(x,0)}_s)ds\right\}\tilde{f}(X^{(x,0)}_{Y_t})\right]$ is a solution of the PDE \begin{equation}\label{FKkilling} \frac{\partial}{\partial t}\nu(t,x)=P(t,\frac{\partial}{\partial t},-(\mathcal{L}_+-c_+(x)))\nu(t,x), \, \, \, \forall \, x\in \mathbb{R}^d,\, \forall\, t>0, \end{equation} with initial condition $\nu(0,x)=f(x)$. \end{corol} \bigskip \noindent In the following statement we show that the Euler-Bernoulli {\it beam equation} \begin{equation} \label{eq:47} \frac{\partial^2}{\partial x^2}\left(g(x)\frac{\partial^2u}{\partial x^2}\right)+m(x)\frac{\partial^2 u}{\partial t^2}=0, \quad t>0,\, \, 0<x<L, \end{equation} where $g(x)>0$ is the flexural rigidity and $m(x)>0$ the linear mass density, can be obtained as a consequence of Theorem \ref{edpsansf}, by considering the iteration of a Brownian motion by an independent Cauchy process. \begin{corol}\label{beam} Let $g$ and $m$ be two positive functions.
Let $X_+$ be a diffusion process with infinitesimal generator $\mathcal{L}_+:=g(x)\partial_{xx}^2$. Consider a Markov process $X_-$ and a Cauchy process $C$ such that $X_+$, $X_-$ and $C$ are independent. Define $\gamma_t:=C_{t/\sqrt{g(x)m(x)}}$. Then for any $f$ extendable to $\tilde{f}$ in the sense of (\ref{extension}), the function \begin{equation*} u(t,x):=\mathbb{E}[\tilde{f}(X^{(x,0)}_{\gamma_t})], \end{equation*} satisfies equation $(\ref{eq:47})$ with initial condition $u(0,x)=f(x), \forall \, x\in \mathbb{R}$. \end{corol} \medskip \noindent As an example, let $Y$ be a Brownian motion with drift $\mu$. Corollary \ref{FKwithkilling} provides a probabilistic representation of the solution to the equation $$ \left \lbrace \begin{array}{lcl} \frac{\partial}{\partial t}u(t,x)=\left(\frac{1}{2}\left(\mathcal{L}_+-c_+\right)^2+\mu \left(\mathcal{L}_+-c_+\right)\right)u(t,x), \quad \forall x\in \mathbb{R},\, \forall t>0,\\ u(0,x)=f(x), \end{array}\right. $$ namely $u(t,x)=\mathbb{E} \left[ \exp \left\{ -\int_0^{Y_t} c(X^{(x,0)}(s))ds\right\}\tilde{f}(X^{(x,0)}_{Y_t})\right]$. A solution to such an equation can be computed with the algorithm developed in this paper and the remark following Theorem \ref{rateofconvergence}. \bigskip \noindent {\bf Proof of Corollary \ref{FKwithkilling}:} Let $(\tilde{X}_+(t))_{t\geq 0}$ and $(\tilde{X}_-(t))_{t\geq 0}$ be two Markov processes with infinitesimal generators ${\cal L}_+-c_+$ and ${\cal L}_--c_-$ respectively, and define the process \begin{equation} \tilde{X}^{(x_1,x_2)}_t:=\left\lbrace \begin{array}{lcl} (\tilde{X}_+^{x_1}(t),x_2)& {\rm if} \, t\geq 0,\\ (x_1,\tilde{X}_-^{x_2}(-t))& {\rm if} \, t<0. \end{array}\right.
\end{equation} From Theorem \ref{edpsansf}, $\nu(t,x):=\mathbb{E}\left[\tilde{f}(\tilde{X}^{(x,0)}_{Y_t})\right]$ is a solution of $$\frac{\partial}{\partial t}\nu(t,x)=P(t,\frac{\partial}{ \partial t},-\, (\mathcal{L}_+-c_+(x)))\nu(t,x).$$ For $t\geq 0$, using the definition (\ref{XindexeparR}) of $X^{(x_1,x_2)}$ and the Feynman-Kac formula, \begin{multline*} \mathbb{E}\left[\tilde{f}(\tilde{X}^{(x,0)}_t)\right]=\mathbb{E}\left[\tilde{f}(\tilde{X}_+^x(t),0)\right]\\=\mathbb{E} \left[\exp\left\{-\int_0^tc_+(X_+^x(s))ds\right\}\tilde{f}(X_+^x(t),0)\right]\\ =\mathbb{E} \left[\exp\left\{-\int_0^tc(X^{(x,0)}(s))ds\right\}\tilde{f}(X^{(x,0)}_t)\right]. \end{multline*} When $t<0$, the following identities hold: \begin{align*} \mathbb{E}\left[\tilde{f}(\tilde{X}^{(x,0)}_t)\right]&=\mathbb{E}\left[\tilde{f}(x,\tilde{X}_-^0(-t))\right]\\ &=\mathbb{E} \left[\exp\left\{-\int_0^{-t}c_-(X_-^0(s))ds\right\}\tilde{f}(x,X_-^0(-t))\right]\\ &=\mathbb{E} \left[\exp\left\{\int_0^t c_-(X_-^0(-s))ds\right\}\tilde{f}(x,X_-^0(-t))\right]. \end{align*} Since $c(X^{(x,0)}(s))=-c_-(X_-^0(-s))$ if $X_-^0(-s)\neq 0$, using the assumption on the zero set of $X_-$, we have $$\mathbb{E}\left[\tilde{f}(\tilde{X}^{(x,0)}_t)\right]=\mathbb{E} \left[\exp\left\{-\int_0^tc(X^{(x,0)}(s))ds\right\}\tilde{f}(X^{(x,0)}_t)\right].$$ So $$\nu(t,x)=\mathbb{E}\left[\tilde{f}(\tilde{X}^{(x,0)}_{Y_t})\right]=\mathbb{E} \left[\exp\left\{-\int_0^{Y_t}c(X^{(x,0)}(s))ds\right\}\tilde{f}(X^{(x,0)}_{Y_t})\right].$$ \hfill \framebox[0.6em]{} \medskip \noindent {\bf Proof of Corollary \ref{beam}.} Since the density of $C$, $p_C(t,0,y)=\frac{t}{\pi(t^2+y^2)}$, satisfies $$\frac{\partial^2}{\partial t^2}p_C(t,0,y)=-\frac{\partial^2}{\partial y^2}p_C(t,0,y),$$ the function $w(t,y):=p_C(t/\sqrt{g(x)m(x)},0,y)$ is a solution of $$g(x)m(x)\frac{\partial^2}{\partial t^2}w(t,y)=-\frac{\partial^2}{\partial y^2}w(t,y).$$ Applying Theorem \ref{edpsansf} then shows that $u$ is a solution of $$(\mathcal{L}_+)^2u+g(x)m(x)\frac{\partial^2 u}{\partial
t^2}=0.$$ Dividing by $g(x)$ ends the proof. \hfill \framebox[0.6em]{} \section{Application of the numerical scheme to the iterated Brownian motion}\label{application} \noindent In this section, we illustrate the algorithm proposed in Section \ref{algorithme}. First we simulate a trajectory of an iterated Brownian motion (IBM) $Z_t=X(|Y_t|)$ on $[0,T]$, where $X$ and $Y$ are two independent Brownian motions. Then we approximate numerically the function $v(t,x):=\mathbb{E} \left[f(Z^x_t)\right]$ and the variations of order three and four of $(Z_t)$. \smallskip \noindent For a fixed positive integer $n$, the Brownian motion $Y$ is evaluated at times $kT/n$ and we define our piecewise constant process $(\bar{Y}_t^n)_{0\leq t \leq T}$ recursively by \begin{equation*} \bar{Y}_{(k+1)T/n}^n=\bar{Y}_{kT/n}^n+\xi_k^n, \end{equation*} where the $\xi_k^n$ are independent centered Gaussian random variables with variance $T/n$. This determines $M_n=\sup_{t\in [0,T]} |\bar{Y}_t^n|$, which is a.s. finite. The same construction can be performed for $\bar{X}_t^n$ on $[0,M_n]$: $$\bar{X}_{(k+1)M_n/n}^n=\bar{X}_{kM_n/n}^n+\zeta_k^n,$$ where the $(\zeta_k^n)$ are independent centered Gaussian random variables with variance $M_n/n$.\\ The composition $\bar{X}^n(|\bar{Y}^n(kT/n)|)$ for $k=0,1,\ldots, n$, is given by \begin{equation}\label{leschema} \bar{X}^n(|\bar{Y}^n(kT/n)|)=\bar{X}^n\left( \frac{M_n}{n} \Big \lfloor \frac{n}{M_n}|\bar{Y}^n(kT/n)|\Big \rfloor \right), \end{equation} with $\lfloor \cdot \rfloor$ the floor function. The continuous approximation $\tilde{Z}^n$ of $Z$ is the linear interpolation between the points defined in (\ref{leschema}).
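The scheme above can be sketched in a few lines (an illustrative implementation; the parameter values are hypothetical):

```python
# Sketch of the two-level scheme for the iterated Brownian motion:
# simulate Y on the grid kT/n, record M_n, simulate X on [0, M_n] with
# mesh M_n/n, then compose through the floor function as in the text.
import math
import random

def ibm_path(T=1.0, n=1000, x0=0.0, rng=random):
    """Returns the points Z_k approximating X(|Y(kT/n)|), k = 0..n."""
    # Piecewise constant approximation of Y on [0, T].
    y = [0.0]
    for _ in range(n):
        y.append(y[-1] + rng.gauss(0.0, math.sqrt(T / n)))
    M = max(abs(v) for v in y)            # M_n = sup |Y-bar|
    # Approximation of X on [0, M] with mesh M/n.
    x = [x0]
    for _ in range(n):
        x.append(x[-1] + rng.gauss(0.0, math.sqrt(M / n)))
    # Composition through the floor function, as in the scheme.
    return [x[min(int(n * abs(v) / M), n)] for v in y]

random.seed(1)
z = ibm_path()
assert len(z) == 1001 and z[0] == 0.0
```

Only $2n$ Gaussian variables are drawn per trajectory, and the composition reduces to an index lookup, which is what makes the Monte Carlo studies below cheap.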
A trajectory of $\tilde{Z}^n$, exhibiting large variations, can be seen in Figure \ref{fig:1}.\\ As mentioned in Section \ref{algorithme}, the scheme (\ref{leschema}) is attractive: it converges uniformly as $n$ tends to infinity, it only requires the simulation of $2n$ independent Gaussian random variables, and the composition of $\bar{X}^n$ with $|\bar{Y}^n|$ is facilitated by the use of step functions. This makes it possible to use methods, such as Monte Carlo methods, which require the simulation of thousands of trajectories (see Figure \ref{fig:2}). Moreover, the almost sure uniform convergence proved in this paper for $Z^n$ and its version $\tilde{Z}^n$, which is more convenient for implementation, makes possible all types of numerical studies that require the approximation of a whole trajectory, and not only of its value at a fixed time; for instance, the variations of various orders. In Figure \ref{fig:3} we apply this remark to the third and fourth order variations of $(Z_t)$, illustrating the following results of \cite{burdzy3}: \begin{equation*} V_3(t):=\lim_{|\Lambda|\rightarrow 0} \sum_{k=1}^n(Z(t_k)-Z(t_{k-1}))^3= 0 \quad \text{in\ } L^p, \end{equation*} \begin{equation*} V_4(t):=\lim_{|\Lambda|\rightarrow 0} \sum_{k=1}^n(Z(t_k)-Z(t_{k-1}))^4= 3t \quad \text{in\ } L^p, \end{equation*} where $\Lambda$ denotes an arbitrary subdivision $t_0=0\leq t_1 \leq \ldots \leq t_n=t $ of $[0,t]$ with mesh $|\Lambda|:= \max_{1\leq k \leq n} |t_k-t_{k-1}|$. \bigskip \noindent The algorithm can also be used to simulate the solution to a fourth-order PDE of the type we studied in the previous sections. Figure \ref{fig:5} shows an approximation of $v(t,x):=\mathbb{E} \left[f(Z^x_t)\right]$ corresponding to $f(x)=e^{-x^2}$.
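The fourth-order variation can likewise be estimated by Monte Carlo on the scheme; the sketch below (self-contained, with illustrative parameters) reproduces the limit $V_4(t)=3t$ within sampling error.

```python
# Monte Carlo sketch of the quartic variation of the iterated Brownian
# motion on [0, 1]: the scheme of the previous section is rebuilt here,
# and the fourth powers of the increments of the composed path are summed.
import math
import random

def quartic_variation(T=1.0, n=400, rng=random):
    # Piecewise constant Y on the grid kT/n.
    y = [0.0]
    for _ in range(n):
        y.append(y[-1] + rng.gauss(0.0, math.sqrt(T / n)))
    M = max(abs(v) for v in y)
    # X on [0, M] with mesh M/n, then composition via the floor function.
    x = [0.0]
    for _ in range(n):
        x.append(x[-1] + rng.gauss(0.0, math.sqrt(M / n)))
    z = [x[min(int(n * abs(v) / M), n)] for v in y]
    return sum((z[k + 1] - z[k]) ** 4 for k in range(n))

random.seed(2)
estimate = sum(quartic_variation() for _ in range(200)) / 200
print(estimate)   # should be close to 3*T = 3
```

Averaging over a few hundred trajectories already makes the sampling noise small compared to the limit value $3t$.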
From Corollary \ref{corol:2} we know that $v(t,x)$ is the unique solution of \begin{equation} \label{eq:45} \frac{\partial}{\partial t}v(t,x)=\frac{1}{2\sqrt{2\pi t}}\frac{\partial^2}{\partial x^2}f(x)+\frac{1}{8}\frac{\partial^4}{\partial x^4}v(t,x), \quad t>0,\ x\in\mathbb{R}, \end{equation} with initial condition $v(0,x)=f(x)$. \begin{figure}[h] \begin{center} \includegraphics[trim = 10mm 9mm 0mm 9mm, clip, scale=0.7]{2} \end{center} \caption{\label{fig:1} A trajectory of $\tilde{Z}^n$ on $[0,1]$ for $n=1000$.} \begin{center} \includegraphics[trim = 18mm 50mm 20mm 50mm, clip,scale=0.7]{1} \end{center} \caption{\label{fig:2}Comparison between the density of the iterated Brownian motion (IBM) at $t=1$ ($Z_1$) and a histogram estimating this density from the simulation of $20000$ trajectories of $\tilde{Z}_1$ for $n=1000$.} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[trim = 10mm 9mm 0mm 9mm, clip, scale=0.7]{3} \end{center} \caption{\label{fig:3} Estimation of $V_3$ and $V_4$ from the simulation of $2000$ trajectories of $\tilde{Z}^n$ for $n=1000$.} \end{figure} \bigskip \begin{figure}[h] \begin{center} \includegraphics[trim = 1mm 9mm 0mm 9mm, clip, scale=0.6]{5} \end{center} \caption{\label{fig:5} Estimation of $v(t,x)=\mathbb{E} \exp\{-(X^x(|Y(t)|))^2\}$, solution of (\ref{eq:45}) ($n=1000$, $20000$ trajectories).} \end{figure} \section{Appendix}\label{appendix} \subsection{Classical results for Section \ref{algorithme}} \begin{prop}(cf. \cite{friedman}). \label{estimeesclassiques} Let $T>0$ and $(U_t)_{0\leq t\leq T}$ be the solution of (\ref{edsX}) under assumptions (\ref{lipschitz_cond})-(\ref{holder_cond}).
Then for all $t, s\in [0,T]$ and $p\geq 1$ such that $\mathbb{E}|U_0|^{2p}<\infty$, \begin{equation} \label{eq:11} \mathbb{E}(|U_t|^{2p})\leq \left(1+\mathbb{E}(|U_0|^{2p})\right)e^{Ct}, \end{equation} \begin{equation} \label{eq:17} \mathbb{E}(\sup_{t\in [0,T]}|U_t|^{2p})\leq C(1+T^p)\, [\mathbb{E}(|U_0|^{2p})+\left(1+\mathbb{E}(|U_0|^{2p})\right)T^p e^{CT}], \end{equation} \begin{equation} \label{Kolmogorovcriterion} \mathbb{E}(|U_t-U_s|^{2p})\leq C\left(1+\mathbb{E}(|U_0|^{2p})\right)(1+T^p)|t-s|^p e^{CT}, \end{equation} for some constant $C>0$ depending only on $K$ and $p$. \end{prop} \bigskip \noindent\textbf{Proof of Proposition \ref{errorboundforX}.} Throughout the proof, $C$ denotes a constant that may change from line to line and depends only on $K$, $p$, $n$ and $\mathbb{E}|X_0|^{2p}$. Keeping the notations of \cite{faure}, we have for $t\in [t_k, t_{k+1}]$: \begin{multline} \label{eq:14} \mathbb{E}(|\epsilon_t^n|^{2p})\leq \mathbb{E}(|\epsilon_{t_k}^n|^{2p})+C\int_{t_k}^t \mathbb{E}\{ |\epsilon_s^n|^{2p}+|\sigma(s,X_s)-\sigma(t_k,\tilde{X}_{t_k}^n)|^{2p}\\ +|b(s,X_s)-b(t_k,\tilde{X}_{t_k}^n)|^{2p}\}ds \end{multline} where $\epsilon_t^n$ denotes the error process defined by $\epsilon_t^n:=X_t-\tilde{X}_t^n$. \begin{eqnarray} \label{eq:13} |b(s,X_s)-b(t_k,\tilde{X}_{t_k}^n)|^{2p}&\leq& C|b(s,X_s)-b(t_k,X_s)|^{2p}+C|b(t_k,X_s)-b(t_k,X_{t_k})|^{2p}\nonumber\\ &+&C|b(t_k,X_{t_k})-b(t_k,\tilde{X}_{t_k}^n)|^{2p}. \end{eqnarray} From Proposition \ref{estimeesclassiques} and the assumptions of Section \ref{algorithme}, we have \begin{align*} &\mathbb{E}|b(s,X_s)-b(t_k,X_s)|^{2p}\leq K^{2p}(\Delta_t)^{2p\beta},\\ &\mathbb{E}|b(t_k,X_s)-b(t_k,X_{t_k})|^{2p}\leq K^{2p}\mathbb{E}|X_s-X_{t_k}|^{2p}\leq C(1+T^p)(\Delta_t)^pe^{CT},\\ &\mathbb{E}|b(t_k,X_{t_k})-b(t_k,\tilde{X}_{t_k}^n)|^{2p}\leq K^{2p}\mathbb{E}|\epsilon_{t_k}^n|^{2p}.
\end{align*} Combining these inequalities with (\ref{eq:13}) yields \begin{equation} \label{eq:15} \mathbb{E}|b(s,X_s)-b(t_k,\tilde{X}_{t_k}^n)|^{2p}\leq C\{(1+T^p)(\Delta_t)^\gamma e^{CT}+\mathbb{E}|\epsilon_{t_k}^n|^{2p}\} \end{equation} for $\gamma=\max(p,2p\beta)$ if $\Delta_t\geq1$ and $\gamma=\min(p,2p\beta)$ otherwise. The same inequality holds for $\sigma$. By writing $\epsilon_s^n=\epsilon_{t_k}^n+(X_s-X_{t_k})+(\tilde{X}_{t_k}^n-\tilde{X}_s^n)$, and using inequality (\ref{eq:11}), we obtain $$\mathbb{E}|\epsilon_s^n|^{2p}\leq C\mathbb{E}|\epsilon_{t_k}^n|^{2p}+C(1+T^p)(\Delta_t)^pe^{CT}.$$ Thus, inequality (\ref{eq:14}) for $t=t_{k+1}$ becomes $$\mathbb{E}(|\epsilon_{t_{k+1}}^n|^{2p})\leq \mathbb{E}(|\epsilon_{t_k}^n|^{2p})(1+C\Delta_t)+C(1+T^p)(\Delta_t)^{\gamma+1} e^{CT}.$$ Consequently, \begin{align} \mathbb{E}(|\epsilon_{t_k}^n|^{2p})&\leq Cne^{nC\Delta_t}(1+T^p)(\Delta_t)^{\gamma+1} e^{CT}\notag\\ &\leq C(1+T^p)T(\Delta_t)^\gamma e^{CT}. \label{eq:16} \end{align} Using \begin{multline*} \epsilon_t^n=\int_0^t \sum_{k=0}^{n-1}(\sigma(s,X_s)-\sigma(t_k,\tilde{X}_{t_k}^n))\mathbf{1}_{[t_k,t_{k+1}]}(s)dW_s\\ +\int_0^t \sum_{k=0}^{n-1}(b(s,X_s)-b(t_k,\tilde{X}_{t_k}^n))\mathbf{1}_{[t_k,t_{k+1}]}(s)ds, \end{multline*} and the Burkholder-Davis-Gundy inequality, we have \begin{multline*} \mathbb{E} (\sup_{t\in [0,T]} |\epsilon_t^n|^{2p})\leq C \mathbb{E}(\int_0^T \sum_{k=0}^{n-1}|\sigma(s,X_s)-\sigma(t_k,\tilde{X}_{t_k}^n)|^{2p}\mathbf{1}_{[t_k,t_{k+1}]}(s)ds)\\ +C\mathbb{E}(\int_0^T \sum_{k=0}^{n-1}|b(s,X_s)-b(t_k,\tilde{X}_{t_k}^n)|^{2p}\mathbf{1}_{[t_k,t_{k+1}]}(s)ds). \end{multline*} Using this last inequality together with (\ref{eq:15}) and (\ref{eq:16}), we complete the proof. \hfill \framebox[0.6em]{}\\ \noindent\textbf{Proof of Lemma \ref{lem:1}.} We suppose that $n^\alpha \sup_{t\in[0,T]} |f_n(t)-f(t)|\rightarrow_n 0$ for all $\alpha<l$. The second implication can be shown similarly.
Let $0<\alpha<l$. Then \begin{equation} \label{eq:2} \sup_{t\in[0,T]}|\bar{f}_n(t)-f(t)|\leq \sup_{t\in[0,T]}|\bar{f}_n(t)-f_n(t)|+\sup_{t\in[0,T]}|f_n(t)-f(t)|. \end{equation} By construction, $$\sup_{t\in[0,T]}|\bar{f}_n(t)-f_n(t)|=\max_k \sup_{t\in [kT/n,(k+1)T/n]}|f_n(t)-f_n(kT/n)|.$$ Let $t, t'\in[0,T]$; then \begin{align} |f_n(t)-f_n(t')|&\leq |f_n(t)-f(t)|+|f(t)-f(t')|+|f(t')-f_n(t')|\notag \\ &\leq 2\sup_{t\in[0,T]}|f_n(t)-f(t)|+|f(t)-f(t')|.\label{eq:4} \end{align} Since $f$ is $\beta$-H\"older continuous for all $\beta<l$, we choose $\beta$ such that $l>\beta>\alpha$, and inequality (\ref{eq:4}) becomes, for some $C>0$, \begin{equation} \label{eq:5} |f_n(t)-f_n(t')|\leq 2\sup_{t\in[0,T]}|f_n(t)-f(t)|+C|t-t'|^\beta. \end{equation} Thus, $$\sup_{t\in [kT/n,(k+1)T/n]}|f_n(t)-f_n(kT/n)|\leq 2\sup_{t\in[0,T]}|f_n(t)-f(t)|+C(T/n)^\beta.$$ Consequently, the right-hand side of inequality (\ref{eq:2}) multiplied by $n^\alpha$ tends to $0$ as $n$ tends to infinity, and the result follows.\hfill \framebox[0.6em]{} \subsection{Classical results for Theorem \ref{FKitere}} \noindent\textbf{Proof of Lemma \ref{IPP}.} We first assume that $g$ is infinitely differentiable with compact support. Let $h>0$, \begin{align*} \frac{P^h-I}{h}F(g)&=\frac{1}{h}\int_0^\infty g(s)(P^{s+h}-P^s)f(x)ds\\ &=\frac{1}{h}\int_h^\infty (g(s-h)-g(s))P^sf(x)ds-\frac{1}{h}\int_0^hg(s)P^sf(x)ds. \end{align*} We have to show the convergence of the right-hand side as $h\downarrow 0$ in \\$(B(\mathbb{R},\mathbb{R}), {||\cdot||_\infty})$.
Let $\epsilon>0$. Since $P^tf$ converges to $f$ uniformly as $t\downarrow 0$ and $g$ is continuous, there exists $\delta>0$ such that for all $s\in [0,\delta]$, $\| P^sf-f\|_\infty<\epsilon/\|g\|_\infty$ and $|g(s)-g(0)|<\epsilon/\|f\|_\infty$, so that \begin{multline*} |g(s)P^sf(x)-g(0)f(x)|\leq |g(s)P^sf(x)-g(s)f(x)|+|g(s)f(x)-g(0)f(x)|\\ \leq \|g\|_\infty |P^sf(x)-f(x)|+\| f\|_\infty |g(s)-g(0)|<2\epsilon. \end{multline*} Thus, for all $0<h<\delta$, \begin{align*} \left\| \frac{1}{h}\int_0^h \left(g(s)P^sf-g(0)f\right)ds \right\|_\infty \leq \frac{1}{h}\int_0^h \|g(s)P^sf-g(0)f\|_\infty ds<2\epsilon. \end{align*} By the dominated convergence theorem, using the bound $\|P^sf\|_\infty\leq \|f\|_\infty$ and the regularity of $g$, we have $$\lim_{h\downarrow 0} \frac{P^h-I}{h}F(g)=-\int_0^\infty g'(s)P^sfds-g(0)f=-F(g')-g(0)f.$$ This ensures that $F(g)\in D(\mathcal{L})$ and $\mathcal{L}F(g)=-F(g')-g(0)f$. Let $(\alpha_n)_{n\in \mathbb{N}}$ be an infinitely differentiable mollifier. We define $g_m(x):=g(x)\mathbf{1}_{|x|\leq m}$ and $\tilde{g}_m(x):=g'(x)\mathbf{1}_{|x|\leq m}$. Then for all $n,m\in \mathbb{N}$, $(g_m \star \alpha_n)$ is $\mathcal{C}^\infty$ with compact support. Since $g_m$ and $\tilde{g}_m$ belong to $L^1(\mathbb{R})$, $(g_m\star \alpha_n)\rightarrow g_m$ in $L^1(\mathbb{R})$ and $(g_m\star \alpha_n)'=(\tilde{g}_m\star \alpha_n)\rightarrow \tilde{g}_m$ in $L^1(\mathbb{R})$. Moreover, $g$ being continuous, $g_m\in L^\infty(\mathbb{R})$ for all $m\in \mathbb{N}$, so $|(g_m\star \alpha_n)(0)-g(0)|=|(g_m\star \alpha_n)(0)-g_m(0)|\rightarrow_n 0$.
This and the convergence of $g_m$ to $g$ and of $\tilde{g}_m$ to $g'$ in $L^1(\mathbb{R})$ imply that we can construct a sequence $(\phi_n)_n$ of $\mathcal{C}^\infty$ functions with compact support such that $$\phi_n\rightarrow_n g \text{\ in\ } L^1(\mathbb{R}),\quad \phi'_n\rightarrow_n g' \text{\ in\ } L^1(\mathbb{R}) \text{\quad and\quad}|\phi_n(0)-g(0)| \rightarrow_n 0.$$ For all $n\in \mathbb{N}$, $F(\phi_n)\in D(\mathcal{L})$ and $\mathcal{L}F(\phi_n)=-F(\phi_n')-\phi_n(0)f$. From the inequality \begin{eqnarray*} |F(\phi_n)(x)-F(g)(x)|&=&\left| \int_0^\infty (\phi_n(s)-g(s))P^sf(x)ds\right|\\ &\leq& \|f\|_\infty \int_0^\infty |\phi_n(s)-g(s)|ds, \end{eqnarray*} we have $\|F(\phi_n)-F(g)\|_\infty\rightarrow_n 0$ and similarly $\|F(\phi'_n)-F(g')\|_\infty\rightarrow_n 0$. Moreover, $$\|\phi_n(0)f-g(0)f\|_\infty \leq \|f\|_\infty |\phi_n(0)-g(0)|\rightarrow_n 0.$$ \noindent Thus $(F(\phi_n))$ is a sequence in $D(\mathcal{L})$ converging to $F(g)$, and $\mathcal{L}F(\phi_n)=-F(\phi'_n)-\phi_n(0)f$ converges to $-F(g')-g(0)f$. Since $\mathcal{L}$ is a closed operator, we conclude that $F(g)\in D(\mathcal{L})$ and $\mathcal{L}F(g)=-F(g')-g(0)f$. The second part of the Lemma is proved by induction. \hfill \framebox[0.6em] \begin{lem}\label{intervertion} Let $g$ be a continuous density function. For all $f$ in the domain of $\mathcal{L}$, $$\mathcal{L}\int_0^\infty P^sf(x)g(s)ds=\int_0^\infty \mathcal{L}P^sf(x)g(s)ds.$$ \end{lem} \noindent\textbf{Proof of Lemma \ref{intervertion}.} Let $h>0$ and $p(h,x,dy):=\P(X_h^x \in dy)$. Then \begin{align*} P^h\int_0^\infty P^sf(x)g(s)ds&=\int_\mathbb{R} p(h,x,dy)\int_0^\infty P^sf(y)g(s)ds\\ &=\int_0^\infty \int_\mathbb{R} p(h,x,dy) P^sf(y)g(s)ds\\ &=\int_0^\infty P^{h+s}f(x)g(s)ds, \end{align*} so that \begin{equation} \label{eq:39} \frac{P^h-I}{h}\int_0^\infty P^sf(x)g(s)ds=\int_0^\infty \frac{P^{h+s}-P^s}{h}f(x)g(s)ds, \end{equation} where $I$ stands for the identity.
Since $(P^{h+s}f(x)-P^sf(x))/h\rightarrow \mathcal{L}P^sf(x)$ uniformly in $x$ as $h\downarrow 0$, letting $h$ decrease to $0$ in equality (\ref{eq:39}) completes the proof. \hfill \framebox[0.6em] \nocite{*} \newpage \renewcommand{\refname}{Bibliography} \bibliographystyle{plain}
https://arxiv.org/abs/2202.03027
Alexander polynomials and signatures of some high-dimensional knots
We give necessary and sufficient conditions for an integer to be the signature of a (4q-1)-knot in the (4q+1)-sphere with a given square-free Alexander polynomial.
\section{Introduction} What are the possibilities for the signatures of knots with a given Alexander polynomial? This question is answered in \cite{B 21} for ``classical'' knots, i.e. knots $K^1 \subset S^3$, with some restrictions on the Alexander polynomial, and the same results hold for knots $K^m \subset S^{m+2}$ if $m \equiv 1 \ {\rm (mod \ 4)}$. In the present paper, we consider high-dimensional knots $K^m \subset S^{m+2}$ with $m \equiv -1 \ {\rm (mod \ 4)}$. In the introduction, we describe the results for $m > 3$; the case $m = 3$ is somewhat different (see Section \ref{3}). \medskip Let $m \geqslant 7$ be an integer with $m \equiv -1 \ {\rm (mod \ 4)}$. An {\it $m$-knot} $K^m \subset S^{m+2}$ is by definition a smooth, oriented submanifold of $S^{m+2}$, homeomorphic to $S^m$; in the following, a {\it knot} will mean an $m$-knot as above. We refer to the book of Michel and Weber \cite {MW} for a survey of high-dimensional knot theory. \medskip Let $K^m$ be a knot. The {\it Alexander polynomial} $\Delta = \Delta_K \in {\bf Z}[X]$ is a polynomial of even degree; set $2n = {\rm deg}(\Delta)$. It satisfies the following three properties (see for instance \cite {Le 69}, Proposition 1) : \medskip (1) $\Delta(X) = X^{2n} \Delta(X^{-1})$, \medskip (2) $\Delta(1) = (-1)^n$, \medskip (3) $\Delta(-1)$ is a square. \medskip Conversely, if $\Delta \in {\bf Z}[X]$ is a degree $2n$ polynomial satisfying conditions (1)-(3), then there exists a knot with Alexander polynomial $\Delta$ (cf. Levine \cite {Le 69}, Proposition 2 and Lemma 3; note that Lemma 3 is based on a result of Kervaire, \cite {K 65}, Th\'eor\`eme II.3). \medskip Let $F^{m+1}$ be a Seifert hypersurface of $K^m$ (see for instance \cite{MW}, Definition 6.16), and let $L = H_n(F^{m+1},{\bf Z})/{\rm Tors}$, where ${\rm Tors}$ is the ${\bf Z}$-torsion subgroup of $H_n(F^{m+1},{\bf Z})$.
Let $S : L \times L \to {\bf Z}$ be the intersection form; since $m \equiv -1 \ {\rm (mod \ 4)}$, the form $S$ is {\it symmetric}. The {\it signature} of $K^m$ is by definition the signature of the symmetric form $S$; it is an invariant of the knot. The form $S$ is even and unimodular, therefore its signature is $ \equiv 0 \ {\rm (mod \ 8)}$. \medskip Levine's construction (see \cite {Le 69}, Proposition 2 and Lemma 3) shows the existence of a knot with Alexander polynomial $\Delta$ and signature $0$. It is natural to ask : {\it what other signatures occur}~? \medskip Let us denote by $\rho (\Delta)$ the number of roots $z$ of $\Delta$ such that $|z| = 1$. If a knot has Alexander polynomial $\Delta$ and signature $s$, then $|s| \leqslant \rho (\Delta)$. This shows that the conditions $s \equiv 0 \ {\rm (mod \ 8)}$ and $|s| \leqslant \rho (\Delta)$ are {\it necessary} for the existence of a knot with Alexander polynomial $\Delta$ and signature $s$; however, these conditions are not sufficient, as shown by the following example, taken from \cite{GM}, Proposition 5.2 : \medskip \noindent {\bf Example 1.} Let $\Delta(X) = (X^6 - 3 X^5 - X^4 + 5 X^3 - X^2 - 3X + 1)(X^4 - X^2 + 1)$; we have $\rho(\Delta) = 8$, hence $s = -8,0$ and $8$ satisfy the above necessary conditions. However, there does not exist any knot with Alexander polynomial $\Delta$ and signature $-8$ or $8$. \medskip Let $\Delta \in {\bf Z}[X]$ be a polynomial satisfying conditions (1)-(3), and suppose that $\Delta$ is {\it square-free}. We associate to $\Delta$ a finite abelian group $G_{\Delta}$ that controls the signatures of the knots with Alexander polynomial $\Delta$ (see \S \ref{group} - \S \ref{knot section}). In particular, we have (cf. Corollary \ref{knot coro sign}) : \medskip \noindent {\bf Theorem 1.} Assume that $G_{\Delta} = 0$, and let $s$ be an integer with $s \equiv 0 \ {\rm (mod \ 8)}$ and $|s| \leqslant \rho(\Delta)$. 
Then there exists a knot with Alexander polynomial $\Delta$ and signature $s$. \medskip The vanishing of the group $G_{\Delta}$ has other geometric consequences : we show the existence of {\it indecomposable} knots with Alexander polynomial $\Delta$ (see \S \ref{indecomposable}). \section{Seifert forms and Seifert pairs}\label{Seifert} Seifert forms are well-known objects of knot theory; the aim of this section is to recall this notion, and to show that it is equivalent to the one of {\it Seifert pairs}; this notion was introduced, under a different name, by Kervaire in \cite {K 71} in the context of knot cobordism; see also Stoltzfus (\cite {St}) and \cite{B 82}, \S 5. \begin{defn} \label{Seifert form} A {\it Seifert form} is by definition a pair $(L,A)$, where $L$ is a free $\bf Z$-module of finite rank and $A : L \times L \to {\bf Z}$ is a $\bf Z$-bilinear form such that the symmetric form $L \times L \to {\bf Z}$ sending $(x,y)$ to $A(x,y) + A(y,x)$ is unimodular (i.e. has determinant $\pm 1$); the {\it signature} of $(L,A)$ is by definition the signature of this symmetric form. \medskip The {\it Alexander polynomial} of $(L,A)$, denoted by $\Delta_A$, is by definition the determinant of the form $L \times L \to {\bf Z}[X]$ given by $$(x,y) \mapsto A(x,y)X + A(y,x).$$ \end{defn} \begin{defn} \label{Seifert pair} A {\it Seifert pair} is by definition a triple $(L,S,a)$, where $L$ is a free $\bf Z$-module of finite rank, $S : L \times L \to {\bf Z}$ is an even (i.e. $S(x,x)$ is an even integer for all $x \in L$), unimodular, symmetric $\bf Z$-bilinear form, and $a : L \to L$ is an injective $\bf Z$-linear map such that $$S(ax,y) = S(x,(1-a)y)$$ for all $x,y \in L$. \end{defn} \medskip Let $(L,S,a)$ be a Seifert pair. Since $S$ is even and unimodular, the rank of $L$ is an even integer; let $n \in {\bf Z}$ be such that ${\rm rank}(L) = 2n$. 
Let $A : L \times L \to {\bf Z}$ be defined by $$A(x,y) = S(ax,y);$$ note that $(L,A)$ is a Seifert form, and we have \begin{prop}\label{bijection} Sending $(L,S,a)$ to $(L,A)$ as above induces a bijection between isomorphism classes of Seifert pairs and of Seifert forms. Let $P_a$ be the characteristic polynomial of $a$. We have $$P_a(X) = (-1)^n X^{2n} \Delta_A(1 - X^{-1}).$$ \end{prop} \medskip Note that $\Delta_A(X) = X^{2n} \Delta_A(X^{-1})$, and that $P_a(X) = P_a(1-X)$. \begin{defn} A {\it lattice} is a pair $(L,S)$, where $L$ is a free $\bf Z$-module of finite rank, and $S : L \times L \to {\bf Z}$ is a symmetric bilinear form with ${\rm det}(S) \not = 0$. We say that $(L,S)$ is unimodular if ${\rm det}(S) = \pm 1$, and even if $S(x,x)$ is an even integer for all $x \in L$. \end{defn} Note that a Seifert pair consists of an even, unimodular lattice $(L,S)$ and an injective endomorphism $a : L \to L$ such that $S(ax,y) = S(x,(1-a)y)$ for all $x,y \in L$. \section {Involutions of $K[X]$, symmetric polynomials and bilinear forms compatible with a module}\label{Witt} Let $K$ be a field, let $R$ be a commutative $K$-algebra, and let $\sigma : R \to R$ be an involution; we say that $\lambda \in R$ is {\it $\sigma$-symmetric} (or symmetric, if the choice of $\sigma$ is clear from the context) if $\sigma(\lambda) = \lambda$. \begin{example} (1) Let $\sigma : K[X,X^{-1}] \to K[X,X^{-1}]$ be the involution sending $X$ to $X^{-1}$. If $\Delta$ is the Alexander polynomial of a Seifert form of rank $2n$, then $X^{-n} \Delta(X)$ is symmetric. \medskip (2) Let $\sigma : K[X] \to K[X]$ be the involution sending $X$ to $1-X$; the symmetric polynomials are the $f \in K[X]$ such that $f(1-X) = f(X)$. The characteristic polynomial of a Seifert pair is symmetric. \end{example} Let $M$ be an $R$-module that is a finite dimensional $K$-vector space.
Recall from \cite {B 21}, \S 1, that a non-degenerate symmetric bilinear form $b : M \times M \to K$ is called an $(R,\sigma)$-{\it bilinear form} if $$b(\lambda x,y) = b(x, \sigma(\lambda) y)$$ for all $x,y \in M$ and for all $\lambda \in R$. \begin{example} Let $(L,S,a)$ be a Seifert pair and set $V = L \otimes_{\bf Z}{\bf Q}$; we denote by $S : V \times V \to {\bf Q}$ and $a : V \to V$ the symmetric bilinear form and the $\bf Q$-linear map induced by $S$ and $a$. Let $\sigma : {\bf Q}[X] \to {\bf Q}[X]$ be the involution sending $X$ to $1-X$. We endow $V$ with a structure of ${\bf Q}[X]$-module by setting $X.x = a(x)$ for all $x \in V$; note that $S : V \times V \to {\bf Q}$ is a $({\bf Q}[X],\sigma)$-bilinear form. \end{example} Let $V$ be a finite dimensional $K$-vector space, and let $q : V \times V \to K$ be a non-degenerate symmetric bilinear form. Following \cite {B 21}, \S 1, we say that $M$ and $(V,q)$ are {\it compatible} if there exists a $K$-linear isomorphism $\phi : M \to V$ such that the bilinear form $b_{\phi} : M \times M \to K$, defined by $b_{\phi}(x,y) = q(\phi(x),\phi(y))$, is an $R$-bilinear form. \begin{example} Let $\sigma : K[X] \to K[X]$ be the involution sending $X$ to $1-X$, and let $P \in K[X]$ be a monic, $\sigma$-symmetric polynomial. Assume $P$ is a product of distinct monic, symmetric, irreducible factors; let us denote by $I$ the set of these polynomials. Set $M = \underset{f \in I} \oplus K[X]/(f)$, regarded as a $K[X]$-module. Let $V$ be a finite dimensional $K$-vector space, and let $q : V \times V \to K$ be a non-degenerate symmetric bilinear form. The module $M$ and the form $(V,q)$ are compatible if and only if there exists an endomorphism $a : V \to V$ with characteristic polynomial $P$ such that $q(ax,y) = q(x,(1-a)y)$ for all $x,y \in V$. \end{example} \section{Milnor signatures}\label{Milnor} We recall the notion of {\it Milnor signatures}, introduced by Milnor in \cite {M 68}, in the context of Seifert pairs.
Let $(L,S,a)$ be a Seifert pair, and let $P \in {\bf Z}[X]$ be the characteristic polynomial of $a$. Assume that the polynomial $P$ is {\it square-free}, i.e. has no repeated factors; we also suppose that if $f \in {\bf Z}[X]$ is a monic, irreducible factor of $P$, then $f(X) = f(1-X)$. \medskip Let $V = L \otimes_{\bf Z}{\bf R}$. Let $f \in {\bf R}[X]$ be a monic, irreducible factor of degree $2$ of $P \in {\bf R}[X]$; note that this implies that $f(X) = f(1-X)$. \begin{defn} The {\it signature} of $(L,S,a)$ at $f$ is by definition the signature of the restriction of $S$ to ${\rm Ker}(f(a))$. \end{defn} \begin{notation} Let ${\rm Irr}_{\bf R}(P)$ be the set of monic, irreducible factors $f \in {\bf R}[X]$ of degree $2$ of $P$. Let $s \in {\bf Z}$. We denote by ${\rm Mil}(P)$ the set of maps $${\rm Irr}_{\bf R}(P) \to \{-2,2 \},$$ and by ${\rm Mil}_s(P)$ the set of $\tau \in {\rm Mil}(P)$ such that $$\underset {f \in {\rm Irr}_{\bf R}(P)} \sum \tau(f) = s.$$ \end{notation} Let $n \geqslant 1$ be an integer, and let $\Delta \in {\bf Z}[X]$ be a polynomial of degree $2n$ such that $\Delta(X) = X^{2n}\Delta(X^{-1})$, $\Delta(1) = (-1)^n$ and that $\Delta(-1)$ is a square of an integer. Suppose that $P(X) = (-1)^nX^{2n} \Delta(1-X^{-1})$. We define ${\rm Mil}_s(\Delta)$ as in \cite {B 21}, \S 26; note that there are obvious bijections between ${\rm Irr}_{\bf R}(P)$ and ${\rm Irr}_{\bf R}(\Delta)$, ${\rm Mil}_s(P)$ and ${\rm Mil}_s(\Delta)$, and that we recover the usual notion of Milnor signature. \medskip If $P$ and $\Delta$ are as above, set $\rho(P) = \rho(\Delta)$; alternatively, $\rho(P)$ can be defined as the number of roots $z$ of $P$ with $z + \overline z = 1$, where $\overline z$ denotes the complex conjugate of $z$. Note that $\rho(\Delta) = |{\rm Irr}_{\bf R}(\Delta)|$ and $\rho(P) = |{\rm Irr}_{\bf R}(P)|$. \section{The obstruction group}\label{group} Let $P \in {\bf Z}[X]$ be a monic polynomial such that $P(1-X) = P(X)$. 
Assume that $P$ is a product of distinct irreducible monic polynomials $f \in {\bf Z}[X]$ such that $f(1-X) = f(X)$. We associate to $P$ an elementary abelian $2$-group $G_P$ that will be useful in the following sections; this construction is similar to the one of \cite{B 21}, \S 21. \medskip Let $I$ be the set of irreducible factors of $P$. If $f,g \in I$, let $\Pi_{f,g}$ be the set of prime numbers $p$ such that $f \ {\rm mod} \ p$ and $g \ {\rm mod} \ p$ have a common factor $h \in {\bf F}_p[X]$ such that $h(1-X) = h(X)$. Let $C(I)$ be the set of maps $I \to {\bf Z}/2{\bf Z}$, and let $C_0(I)$ be the set of $c \in C(I)$ such that $c(f) = c(g)$ if $\Pi_{f,g} \not = \varnothing$. Note that $C_0(I)$ is a group with respect to the addition of maps, and let $G_P$ be the quotient of the group $C_0(I)$ by the subgroup of the constant maps. \begin{example}\label{first example} Let $f_1(X) = X^4 - 2X^3 + 5X^2 - 4X + 1$ and $$f_2(X) = X^4 - 2X^3 + 11X^2 - 10X + 3;$$ set $P = f_1 f_2$. We have $\Pi_{f_1,f_2} = \{2\}$, hence $G_P = 0$. \end{example} \medskip If $P(X) = (-1)^nX^{2n} \Delta(1-X^{-1})$ for some polynomial $\Delta \in {\bf Z}[X]$, set $G_{\Delta} = G_P$. If moreover $\Delta(0) = \pm 1$, then the group $G_{\Delta}$ is equal to the obstruction group ${\mbox{\textcyr{Sh}}}_{\Delta(0) \Delta}$ of \cite{B 21}, \S 21 and \S 25. In particular, 25.8 - 25.11, 31.4 and 31.5 of \cite {B 21} provide examples of obstruction groups in our context as well. This is also the case for the following example, given in the introduction~: \begin{example} Let $g_1(X) = X^6 - 3 X^5 - X^4 + 5 X^3 - X^2 -3X +1$ and $g_2(X) = X^4 - X^2 + 1$; set $\Delta = g_1 g_2$, as in Example 1. Set $f_1(X) = -X^6g_1(1-X^{-1})$, and $f_2 (X) = X^4 g_2(1-X^{-1})$, and let $P = f_1 f_2$. The polynomials $f_1$ and $f_2$ are relatively prime over $\bf Z$, hence $\Pi_{f_1,f_2} = \varnothing$; therefore $G_P \simeq {\bf Z}/2{\bf Z}$.
\end{example} \section{Seifert pairs with a given characteristic polynomial and signature}\label{obstruction} Let $n \geqslant 1$ be an integer, and let $\Delta \in {\bf Z}[X]$ be a polynomial of degree $2n$ such that $\Delta(X) = X^{2n}\Delta(X^{-1})$, $\Delta(1) = (-1)^n$ and that $\Delta(-1)$ is a square of an integer. Set $P(X) = (-1)^nX^{2n} \Delta(1-X^{-1})$. Assume that $P$ is a product of distinct irreducible monic polynomials $f \in {\bf Z}[X]$ such that $f(1-X) = f(X)$, and let $I$ be the set of irreducible, monic factors of $P$. \medskip Let $G_P$ be the group introduced in \S \ref{group}, and set $G_{\Delta} = G_P$. \medskip Let $s$ be an integer such that $s \equiv 0 \ {\rm (mod \ 8)}$ and $|s| \leqslant \rho(P)$. Let $\tau \in {\rm Mil}_s(P)$. The aim of this section is to give a necessary and sufficient condition for the existence of a Seifert pair with characteristic polynomial $P$ and Milnor signature $\tau$. \medskip Let $V$ be a $\bf Q$-vector space of dimension $2n$, and let $S : V \times V \to {\bf Q}$ be a non-degenerate quadratic form of signature $s$ containing an even, unimodular lattice; such a form exists and is unique up to isomorphism (see for instance \cite {B 21}, Lemma 25.5). \medskip Let $M = \underset{f \in I} \oplus {\bf Q}[X]/(f)$, considered as a ${\bf Q}[X]$-module. Let $\sigma : {\bf Q}[X] \to {\bf Q}[X]$ be the $\bf Q$-linear involution such that $\sigma(X) = 1-X$. The Milnor signature $\tau \in {\rm Mil}_s(P)$ determines an $({\bf R}[X],\sigma)$-quadratic form (cf. \cite {B 21}, Example 24.1). The local conditions of \cite {B 21}, \S 24 are satisfied. Indeed, the ${\bf R}[X]$-module $M \otimes {\bf R}$ is compatible with $(V,S)$ by \cite {B 15}, Proposition 8.1. Using a result of Levine (see \cite{Le 69}, Proposition 2) and the bijection between Seifert forms and Seifert pairs (see \S \ref{Seifert}), we see that there exists a Seifert pair of characteristic polynomial $P$.
This implies that for all prime numbers $p$, the ${\bf Q}_p[X]$-module $M \otimes_{\bf Q} {\bf Q}_p$ and the quadratic form $(V,S) \otimes _{\bf Q} {\bf Q}_p$ are compatible. \medskip As in \cite{B 21}, \S 24, we define a homomorphism $\epsilon_{\tau} : G_P \to {\bf Z}/2{\bf Z}$. \begin{theo}\label{main} There exists a Seifert pair with characteristic polynomial $P$ and Milnor signature $\tau$ if and only if $\epsilon_{\tau} = 0$. \end{theo} \noindent {\bf Proof.} By \cite {B 21}, Theorem 24.2, the global conditions are satisfied if and only if $\epsilon_{\tau} = 0$. Using \cite {B 21}, Proposition 6.2, this is equivalent to the existence of a Seifert pair having characteristic polynomial $P$ and Milnor signature $\tau$. \begin{coro}\label{main coro} Assume that $G_P = 0$. Then for all $\tau \in {\rm Mil}_s(P)$ there exists a Seifert pair with characteristic polynomial $P$ and Milnor signature $\tau$. \end{coro} \section{Seifert forms with a given Alexander polynomial and signature}\label{Seifert form section} We keep the notation of the previous section. Using Proposition \ref{bijection}, Theorem \ref{main} and Corollary \ref{main coro} can be reformulated as follows : \begin{theo}\label{main Seifert} There exists a Seifert form with Alexander polynomial $\Delta$ and Milnor signature $\tau$ if and only if $\epsilon_{\tau} = 0$. \end{theo} \begin{coro}\label{main coro Seifert} Assume that $G_{\Delta} = 0$. Then for all $\tau \in {\rm Mil}_s(\Delta)$ there exists a Seifert form with Alexander polynomial $\Delta$ and Milnor signature $\tau$. \end{coro} \section{Knots with a given Alexander polynomial and signature}\label{knot section} We keep the notation of the previous two sections. Let $m \geqslant 7$ be an integer with $m \equiv -1 \ {\rm (mod \ 4)}$. We refer to \cite {MW}, 6.5 for the definition of the Seifert form associated to an $m$-knot. The results of this section rely on a result of Kervaire~: \begin{theo} \label{Kervaire} Let $(L,A)$ be a Seifert form.
Then there exists an $m$-knot with associated Seifert form isomorphic to $(L,A)$. \end{theo} \noindent {\bf Proof.} This is proved by Kervaire in \cite {K 65}, Theorem II.3, and formulated more explicitly by Levine in \cite{Le 69}, Lemma 3 and \cite{Le 70}, Theorem 2. A different proof is given by Michel and Weber in \cite {MW}, Theorem 7.3 (see also the remark at the end of \cite {MW}, \S 7.1). \medskip Combining Theorem \ref{main Seifert} and Corollary \ref{main coro Seifert} with Theorem \ref{Kervaire}, we have the following applications : \begin{theo}\label{knot} There exists an $m$-knot with Alexander polynomial $\Delta$ and Milnor signature $\tau$ if and only if $\epsilon_{\tau} = 0$. \end{theo} \begin{coro}\label{knot coro} Assume that $G_{\Delta} = 0$. Then for all $\tau \in {\rm Mil}_s(\Delta)$ there exists an $m$-knot with Alexander polynomial $\Delta$ and Milnor signature $\tau$. \end{coro} Recall that $s$ is an integer such that $s \equiv 0 \ {\rm (mod \ 8)}$, and that $|s| \leqslant \rho(\Delta)$. \begin{coro}\label{knot coro sign} Assume that $G_{\Delta} = 0$. Then there exists an $m$-knot with Alexander polynomial $\Delta$ and signature $s$. \end{coro} \section {Indecomposable knots with decomposable Alexander polynomial}\label{indecomposable} As an application of Corollary \ref{knot coro sign}, we give some examples of indecomposable knots with decomposable Alexander polynomials. Let $m \geqslant 7$ be an integer with $m \equiv -1 \ {\rm (mod \ 4)}$. \begin{example}\label{4} Let $\Delta = \Delta_1 \Delta_2$, where $\Delta_1(X) = X^4 - X^2 +1$ and $$\Delta_2(X) = 3X^4 - 2X^3 - X^2 - 2X + 3;$$ we have $\rho(\Delta) = 8$. The corresponding polynomial $$P(X) = (-1)^4X^{8} \Delta(1-X^{-1})$$ is the one of Example \ref{first example} : it is equal to $f_1f_2$, where $$f_1(X) = X^4 - 2X^3 + 5X^2 - 4X + 1,$$ and $$f_2(X) = X^4 - 2X^3 + 11X^2 - 10X + 3.$$ \medskip We have $\Pi_{f_1,f_2} = \{2\}$, hence $G_{\Delta} = G_P = 0$.
Corollary \ref{knot coro sign} implies that there exists an $m$-knot with Alexander polynomial $\Delta$ and signature $8$. But such a knot is indecomposable, since an $m$-knot with Alexander polynomial $\Delta_i$ has signature $0$ for $i = 1,2$. \end{example} \begin{example}\label{6} Let $a \geqslant 0$ be an integer, and set $$\Delta_a(X) = X^6 -a X^5 - X^4 +(2a-1) X^3 - X^2 -a X + 1.$$ The polynomial $\Delta_a$ is irreducible, and $\rho(\Delta_a) = 4$ (see \cite {GM}, \S 7.3, Example 1 on page 284). This implies that all $m$-knots with Alexander polynomial $\Delta_a$ have signature $0$. \medskip Let $b \geqslant 0$ be an integer with $b \not = a$. We have $\rho(\Delta_a \Delta_b) = 8$, and if moreover $G_{\Delta_a \Delta_b} = 0$, then there exist $m$-knots with Alexander polynomial $\Delta_a \Delta_b$ and signature $8$; these knots are indecomposable. We can take for instance $a = 0$ and $b = 2$; then $\Pi_{\Delta_a,\Delta_b} = \{2 \}$, hence $G_{\Delta_a \Delta_b} = 0$. \end{example} \section{$3$-knots in the $5$-sphere}\label{3} The signature of a $3$-dimensional knot $K^3 \subset S^5$ is {\it divisible by $16$} (see for instance \cite{KW}, \S 3, page 95). The aim of this section is to show that with this additional restriction, the results of \S \ref{knot section} extend to $3$-knots. \medskip Let $n \geqslant 1$ be an integer, and let $\Delta \in {\bf Z}[X]$ be a polynomial of degree $2n$ such that $\Delta(X) = X^{2n}\Delta(X^{-1})$, $\Delta(1) = (-1)^n$ and that $\Delta(-1)$ is a square of an integer. Set $P(X) = (-1)^nX^{2n} \Delta(1-X^{-1})$. Assume that $P$ is a product of distinct irreducible monic polynomials $f \in {\bf Z}[X]$ such that $f(1-X) = f(X)$. Let $G_{\Delta} = G_P$ be the group introduced in \S \ref{group}. \medskip Let $s$ be an integer such that $s \equiv 0 \ {\rm (mod \ 16)}$, and that $|s| \leqslant \rho(P)$. Let $\tau \in {\rm Mil}_s(P)$.
\begin{theo}\label{knot 3} There exists a $3$-knot with Alexander polynomial $\Delta$ and Milnor signature $\tau$ if and only if $\epsilon_{\tau} = 0$. \end{theo} \noindent {\bf Proof.} This follows from Theorem \ref{main Seifert} and from a result of Levine (see \cite {Le 70}, Theorem 2) : if $A$ is a Seifert form of signature divisible by $16$, then there exists a $3$-knot in the $5$-sphere with Seifert form $S$-equivalent to $A$. Since $S$-equivalent Seifert forms have the same Alexander polynomial and Milnor signature, this completes the proof of the theorem. \begin{coro}\label{knot coro 3} Assume that $G_{\Delta} = 0$. Then for all $\tau \in {\rm Mil}_s(\Delta)$ there exists a $3$-knot with Alexander polynomial $\Delta$ and Milnor signature $\tau$. \end{coro} \begin{coro}\label{knot coro signature} Assume that $G_{\Delta} = 0$. Then there exists a $3$-knot with Alexander polynomial $\Delta$ and signature $s$. \end{coro} \section{Unimodular Seifert forms}\label{unimodular} We conclude by some remarks on a special case, which was already treated in detail in \cite {B 21}. Let $A : L \times L \to {\bf Z}$ be a unimodular Seifert form, i.e. ${\rm det}(A) = \pm 1$, and let $S : L \times L \to {\bf Z}$, defined by $S(x,y) = A(x,y) + A(y,x)$, be the associated even, unimodular lattice. \medskip Let $t : L \to L$ be defined by $A(tx,y) = - A(y,x)$ for all $x,y \in L$; note that $t$ is an isometry of $A$, and hence of $S$, and that the characteristic polynomial of $t$ is ${\rm det}(A) \Delta_A$. \begin{prop} Sending $(L,A)$ to $(L,S,t)$ induces a bijection between isomorphism classes of unimodular Seifert forms and isomorphism classes of even, unimodular lattices with an isometry. \end{prop} Hence the existence of a unimodular Seifert form with a given Alexander polynomial and Milnor signature is equivalent to the existence of an even, unimodular lattice having an isometry of a given characteristic polynomial and Milnor signature. 
This question is treated in \cite{B 21}, \S 25, 27 and 31; we recover the results of \S \ref{Seifert form section} in this special case. \bigskip
https://arxiv.org/abs/1209.6099
Lattices of Equivalence Relations Closed Under the Operations of Relation Algebras
One of the longstanding problems in universal algebra is the question of which finite lattices are isomorphic to the congruence lattices of finite algebras. This question can be phrased as which finite lattices can be represented as lattices of equivalence relations on finite sets closed under certain first order formulas. We generalize this question to a different collection of first-order formulas, giving examples to demonstrate that our new question is distinct. We then prove that every lattice $\m M_n$ can be represented in this new way. [This is an extended version of a paper submitted to \emph{Algebra Universalis}.]
\section{Introduction} One of the longstanding problems in universal algebra is: \begin{prb} \label{FCLRP} {\bf Finite Congruence Lattice Representation Problem:} For which finite lattices $\m l$ is there a finite algebra $\m a$ with $\m l \cong {\rm Con} \m a$? \end{prb} A {\it primitive positive formula} is a first-order formula of the form $\exists \wedge({\rm atomic})$. Suppose that $\vr R$ is a set of relations on a finite set $A$. Let ${\rm PPF}(\vr R)$ be the set of all relations on $A$ definable using primitive positive formulas and relations from $\vr R$. Let ${\rm Eq}(\vr R)$ be the set of all equivalence relations in $\vr R$. It follows from \cite{bod,PoschelKaluznin1979} that $\vr R$ is the set of all universes of direct powers of an algebra $\m a$ with universe $A$ if and only if ${\rm PPF}(\vr R)=\vr R$. (For references on similar characterizations, the reader can see \cite{PosRel}.) Therefore, Problem \ref{FCLRP} can be restated in the following way. \begin{prb} \label{PPFP} For which finite lattices $\m l$ is there a lattice $\vr l$ of equivalence relations on a finite set so that $\m l \cong \vr l$ and $\vr l= {\rm Eq}({\rm PPF}(\vr L))$? \end{prb} A natural extension of this problem is to consider first-order definitions employing types of formulas other than primitive positive formulas. We suggest replacing primitive positive formulas here with any first-order formulas using at most three variables. If $\vr R$ is a set of relations on a finite set $A$, let ${\rm FO3}(\vr R)$ be the set of all relations on $A$ definable using first-order formulas with at most three variables and relations from $\vr R$. Our extension of Problem \ref{PPFP} can be stated as: \begin{prb} \label{FO3P} For which finite lattices $\m l$ is there a lattice $\vr l$ of equivalence relations on a finite set so that $\m l \cong \vr l$ and $\vr l = {\rm Eq}({\rm FO3}(\vr L))$?
\end{prb} Our interest in first-order formulas with three variables stems from a connection with relation algebras. A {\it relation algebra} is an algebra $\mathbf{A}=\langle A,+,\bar{\cdot},;,\cdot^{\cup},1^\text{'}\rangle$ with operations intended to mimic the operations of union, complement, composition, converse, and identity on binary relations. A relation algebra $\m a$ is {\it representable} if there is a set of binary relations $\vr R$ on a set $B$ so that $\m a$ is isomorphic to the algebra $\langle \vr R, \cup, \bar{\cdot}, \circ, \cdot^{\cup}, 1^\text{'}_B\rangle$. A set $\vr R$ of binary relations on a finite set $A$ is closed under the relation algebra operations if and only if every binary relation on $A$ definable with a first-order formula with at most three variables and relations in $\vr R$ is already in $\vr R$ (see Theorem 3.32 of \cite{games} or page 172 of \cite{thebook}). For any set $\vr R$ of binary relations on a set $A$, let ${\rm RA} (\vr R)$ be the relation algebra generated by $\vr R$. Then the above problem becomes: \begin{prb} \label{RAP} For which finite lattices $\m l$ is there a lattice $\vr l$ of equivalence relations on a finite set so that $\m l \cong \vr l$ and $\vr l = {\rm Eq}({\rm RA}(\vr L))$? \end{prb} For any relation algebra $\m a$, let ${\rm Eq}(\m a)$ be the set of equivalence relation elements of $\m a$. Then our problem becomes: \begin{prb} For which finite lattices $\m l$ is there a relation algebra $\m a$ which is representable on a finite set so that $\m l \cong {\rm Eq}(\m a)$? \end{prb} \section{Examples} In this section we give two examples $\vr L$ and $\vr M$ of lattices of equivalence relations on finite sets. In the first example, ${\rm Eq}({\rm PPF}(\vr L)) = \vr L$ but ${\rm Eq}({\rm RA}(\vr L)) \neq \vr L$. In the second example, ${\rm Eq}({\rm RA}(\vr M)) = \vr M$ but ${\rm Eq}({\rm PPF}(\vr M)) \neq \vr M$. This demonstrates that these two notions are indeed distinct.
First, let $\m 2$ be the two-element lattice with universe $\{0,1\}$. Let $\m a = \m 2^2$, and let $\vr L={\rm Con} \m a$. Then $\vr L$ contains four equivalence relations -- the identity relation, the universal relation, and the kernels of the projection homomorphisms. The projection kernels are the relations $\eta_0$ and $\eta_1$ defined so that $(x_0,x_1) \eta_0 (y_0,y_1)$ when $x_0=y_0$ and $(x_0,x_1) \eta_1 (y_0,y_1)$ when $x_1=y_1$. Since $\vr L$ is a congruence lattice, ${\rm Eq}({\rm PPF}(\vr L)) = \vr L$. However, ${\rm RA}(\vr L)$ also contains the equivalence relation $$\gamma = {1\textrm{'}} \cup \overline{(\eta_0 \cup \eta_1)}$$ which is not in $\vr L$, so ${\rm Eq}({\rm RA}(\vr L)) \neq \vr L$. Note that the relation $\gamma$ can also be defined with this first-order formula which only uses {\it two} variables: $$x \gamma y \leftrightarrow (x=y) \vee \neg[(x\eta_0 y) \vee(x \eta_1 y)].$$ Thus $\vr L$ is closed under primitive positive definitions but not under the operations of relation algebras or first-order definitions using at most three variables. For our second example, suppose that $p\geq5$ is prime. 
We consider ${\rm Con}(\mathbb{Z}^2_p)$, which is a copy of $\m M_{p+1}$ consisting of the identity ${1\textrm{'}}$, the universal relation $1$, and $p+1$ atoms $\eta_0,\eta_1,\alpha_1,\ldots,\alpha_{p-1}$, given by \begin{align*} \langle x_0,x_1\rangle\eta_0\langle y_0,y_1\rangle &\leftrightarrow x_0=y_0\\ \langle x_0,x_1\rangle\eta_1\langle y_0,y_1\rangle &\leftrightarrow x_1=y_1\\ \langle x_0,x_1\rangle\alpha_1 \langle y_0,y_1\rangle &\leftrightarrow 1 x_0-x_1=1 y_0-y_1\\ \langle x_0,x_1\rangle\alpha_2\langle y_0,y_1\rangle &\leftrightarrow 2x_0-x_1=2y_0-y_1 \\ &\vdots\\ \langle x_0,x_1\rangle\alpha_k\langle y_0,y_1\rangle &\leftrightarrow kx_0-x_1=ky_0-y_1\\ &\vdots\\ \langle x_0,x_1\rangle\alpha_{p-1}\langle y_0,y_1\rangle &\leftrightarrow (p-1)x_0-x_1=(p-1)y_0-y_1 \end{align*} \begin{lm}\label{lemma1} Suppose that $1\leq n <p-2$ and let $\vr M=\{1, {1\textrm{'}}, \eta_0, \eta_1, \alpha_1, \ldots, \alpha_n\}$. Then ${\rm Eq}({\rm RA}(\vr M)) = \vr M$. \end{lm} This lemma follows from \cite{Lyndon}; the result is not explicitly stated in the paper, although it can be extracted from it. A proof is given in Section \ref{mainsection}. Consider the relation $\alpha_{p-1}$ (which is not in $\vr M$); $\alpha_{p-1}$ can be defined from $\eta_0$, $\eta_1$, and $\alpha_1$ with a primitive positive formula by $$a \alpha_{p-1} b \leftrightarrow \exists c, d \left( a \eta_0 c \wedge c \eta_1 b \wedge a \eta_1 d \wedge d \eta_0 b \wedge c \alpha_1 d \right).$$ Thus ${\rm Eq}({\rm PPF}(\vr M)) \neq \vr M$. The lattice $\vr m$ is closed under the operations of relation algebras and first-order definitions using at most three variables but not under primitive positive definitions. This second example has the following interesting consequence. If $n\geq 1$ and if $p\geq5$ is a prime greater than $n+2$, then the lattice $\vr M$ in the example gives a lattice of equivalence relations closed under the operations of relation algebras which is isomorphic to $\m m_{n+2}$. 
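Both examples can be checked mechanically. The sketch below (illustrative Python, not from the paper) verifies that $\gamma$ is an equivalence relation outside $\vr L$ in the first example, and that the displayed primitive positive formula defines $\alpha_{p-1}$ in $\mathbb{Z}_p^2$ for $p=5$; note that the witnesses $c$ and $d$ are forced to be $(a_0,b_1)$ and $(b_0,a_1)$ by the $\eta$-constraints.

```python
from itertools import product

# First example: the congruence lattice of 2^2 on the four points {0,1}^2.
A = list(product(range(2), repeat=2))
ident = {(x, x) for x in A}
full = {(x, y) for x in A for y in A}
eta0 = {(x, y) for x in A for y in A if x[0] == y[0]}
eta1 = {(x, y) for x in A for y in A if x[1] == y[1]}

# gamma = 1' + complement(eta0 + eta1)
gamma = ident | (full - (eta0 | eta1))

def is_equivalence(r, points):
    return (all((x, x) in r for x in points)
            and all((y, x) in r for (x, y) in r)
            and all((x, w) in r for (x, y) in r for (z, w) in r if y == z))

assert is_equivalence(gamma, A)
assert gamma not in (ident, full, eta0, eta1)    # gamma lies outside L

# Second example: Z_5^2; the primitive positive formula defines alpha_{p-1}.
p = 5
B = list(product(range(p), repeat=2))

def alpha(k):
    return {(x, y) for x in B for y in B
            if (k * x[0] - x[1]) % p == (k * y[0] - y[1]) % p}

a1 = alpha(1)
# a eta0 c and c eta1 b force c = (a0, b1); a eta1 d and d eta0 b force d = (b0, a1)
ppf = {(a, b) for a in B for b in B
       if ((a[0], b[1]), (b[0], a[1])) in a1}
assert ppf == alpha(p - 1)
```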
Note that $\m M_1$ and $\m M_2$ can easily be represented by letting $\vr M$ be $\{1, {1\textrm{'}}, \eta_0 \}$ and $\{1, {1\textrm{'}}, \eta_0, \eta_1 \}$, respectively. Thus we have \begin{thm} For any positive integer $n$, there is a lattice $\vr M$ of equivalence relations on a finite set so that $\vr M \cong \m M_n$ and ${\rm Eq}({\rm RA}(\vr M))= {\rm Eq}({\rm FO3}(\vr M)) = \vr M$. \end{thm} \section{Proof of Lemma \ref{lemma1}} \label{mainsection} In this section, we prove that ${\rm Eq}({\rm RA}(\vr m)) = \vr m$. Again, the results in this section are implicit in \cite{Lyndon}, but the relationship is not immediately apparent; hence we provide ``bottom-up" proofs here. \begin{lm}\label{lemma} For distinct atoms $\alpha$ and $\beta$ of $\vr m$, $\alpha\circ\beta=1$. \end{lm} \begin{proof} There are two nontrivial cases. Case 1: $\eta_i\circ\alpha_k$.\\ Suppose for simplicity that $i=0$. Then let $\langle u_0,u_1\rangle$ and $\langle v_0,v_1\rangle$ be any pairs in $\mathbb{Z}_p^2$. We will show $\langle u_0,u_1\rangle\eta_0\circ\alpha_k\langle v_0,v_1\rangle$. Let $\langle y_0,y_1\rangle=\langle u_0,ku_0+v_1-kv_0\rangle$. Then $\langle u_0,u_1\rangle\eta_0\langle y_0,y_1\rangle$. We need to show $\langle y_0,y_1\rangle\alpha_k\langle v_0,v_1\rangle$. We have \begin{align*} ky_0-y_1 &=ky_0-(ku_0+v_1-kv_0)\\ &=ku_0-ku_0-v_1+kv_0\\ &=kv_0-v_1.\\ \end{align*} Hence $\langle y_0,y_1\rangle\alpha_k\langle v_0,v_1\rangle$.\\ Case 2: $\alpha_i\circ\alpha_j, i\neq j$.\\ Again, let $\langle u_0,u_1\rangle,\langle v_0,v_1\rangle \in\mathbb{Z}_p^2$. 
We need $\langle y_0,y_1\rangle$ such that $$iu_0-u_1=iy_0-y_1 \text{ and }jy_0-y_1=jv_0-v_1.$$ Since $\mathbb{Z}_p$ is a field, we can find $y_0\in\mathbb{Z}_p$ such that $$(j-i)y_0=u_1-iu_0+jv_0-v_1,$$ so $$iy_0=jy_0-u_1+iu_0-jv_0+v_1,$$ \noindent and let $$y_1=j(y_0-v_0)+v_1.$$ Then \begin{align*} iy_0-y_1 &=jy_0-u_1+iu_0-jv_0+v_1-y_1\\ & =jy_0-u_1+iu_0-jv_0+v_1-[jy_0-jv_0+v_1]\\ & =iu_0-u_1.\\ \end{align*} Hence $\langle u_0,u_1\rangle\alpha_i\langle y_0,y_1\rangle$. Also \begin{align*} jy_0-y_1 &=(j-i)y_0+iy_0-y_1\\ &= (j-i)y_0+iy_0-(jy_0-jv_0+v_1)\\ &=(j-i)y_0+(i-j)y_0+jv_0-v_1\\ &=jv_0-v_1.\\ \end{align*} Hence $\langle y_0,y_1\rangle\alpha_j\langle v_0,v_1\rangle$. \end{proof} Thus we see that ${\rm Con}(\mathbb{Z}^2_p)\cong \m M_{p+1}$. Now we define $\mathcal{M}$ to be the sublattice of Con$(\mathbb{Z}_p^2)$ consisting of the identity and universal relations, along with the atoms $\eta_0,\eta_1$, and $\alpha_1$ through $\alpha_{n}$. Then $\mathcal{M}\cong\m M_{n+2}$. Now we are ready to prove Lemma \ref{lemma1}. \begin{proof}[Proof of Lemma \ref{lemma1}] First we establish the following claim: BA$(\mathcal{M})=$RA$(\mathcal{M})$, where BA$(\mathcal{M})$ is the Boolean algebra generated by $\mathcal{M}$. The set $At(\text{BA}(\mathcal{M}))$ of atoms of BA$(\mathcal{M})$ consists of the identity relation $1^\text{'}$ along with $\eta_0\cap \overline{1^\text{'}},\eta_1 \cap \overline{1^\text{'}}$, and $\alpha_1\cap \overline{1^\text{'}}$ through $\alpha_{n}\cap \overline{1^\text{'}}$, and the one additional atom $$\beta=\overline{1^\text{'}+\eta_0+\eta_1+\displaystyle\sum^{n}_{i=1}\alpha_i}.$$ To see this, consider the $\eta _i$'s and $\alpha_j$'s. Any distinct pair of these intersects in $1^\text{'}$, so the $\eta_i\cap\overline{1^\text{'}}$'s and $\alpha_j\cap\overline{1^\text{'}}$'s are minimal nonzero elements, hence are atoms.
The Boolean algebra generated by the $\eta_i$'s and the $\alpha_j$'s will be the same as that generated by $1^\text{'}$, the $\eta_i\cap\overline{1^\text{'}}$'s and the $\alpha_j\cap\overline{1^\text{'}}$'s. By Proposition 4.4 of \cite{SK}, the Boolean algebra generated by these atoms is equal to all joins of meets of atoms and their complements. Since the meet of an atom $a$ with anything is either $a$ or 0, we need only consider joins of meets of complements of atoms. Since we are looking for the atoms of the Boolean algebra, we need only consider meets of complements of atoms. All such meets will be above the meet of all such complements, which is $$\beta=\overline{1^\text{'}}\cdot\overline{\eta_0}\cdot\overline{\eta_1}\cdot \prod_{i=1}^{n}\overline{\alpha_i}=\overline{1^\text{'}+\eta_0+\eta_1+\displaystyle\sum^{n}_{i=1}\alpha_i}.$$ Thus $\beta$ is the only atom not previously listed. We need to show that any composition of atoms is already in BA$(\mathcal{M})$. If $a\in At(\text{BA}(\mathcal{M}))$ is one of the atoms $\eta_i\cap\overline{1^\text{'}}$ or $\alpha_j\cap\overline{1^\text{'}}$, then $a\circ a=1^\text{'}+a$, since $a$ is an equivalence relation, ``minus the identity", that has no singleton equivalence classes. If $a\neq b$ are two such atoms, then $$a\circ b=\overline{1^\text{'}+a+b}.$$ To establish this, we first prove that $$(\alpha_i\cap \overline{1^\text{'}})\circ(\alpha_j\cap\overline{1^\text{'}})=\overline{1^\text{'}+\alpha_i+\alpha_j}. $$ To establish the inclusion $(\alpha_i\cap\overline{1^\text{'}})\circ(\alpha_j\cap\overline{1^\text{'}})\supseteq\overline{1^\text{'}+\alpha_i+\alpha_j}$, consider the proof of Case 2 of Lemma \ref{lemma}. It is not hard to check that if $\langle u_0,u_1\rangle$ and $\langle v_0,v_1\rangle$ are not related by $1^\text{'}$, $\alpha_i$, or $\alpha_j$, then the pair $\langle y_0,y_1\rangle$ given in the proof is distinct from both $\langle u_0,u_1\rangle$ and $\langle v_0,v_1\rangle$.
Hence if $$\langle u_0,u_1\rangle\overline{1^\text{'} +\alpha_i+\alpha_j}\langle v_0,v_1\rangle,$$ then $$\langle u_0,u_1\rangle(\alpha_i\cap\overline{1^\text{'}})\circ(\alpha_j\cap\overline{1^\text{'}})\langle v_0,v_1\rangle.$$ It remains to show that $(\alpha_i\cap\overline{1^\text{'}})\circ(\alpha_j\cap\overline{1^\text{'}})$ contains nothing but $\overline{1^\text{'}+\alpha_i+\alpha_j}$. Since $\alpha_i\cap\overline{1^\text{'}}$ and $\alpha_j\cap\overline{1^\text{'}}$ are disjoint symmetric diversity relations, their composition is disjoint from the identity. To prove that this composition is disjoint from $\alpha_i$ (and by symmetry from $\alpha_j$ as well), suppose for contradiction that there exist pairwise distinct pairs $\langle u_0,u_1\rangle,\langle y_0,y_1\rangle,\langle v_0,v_1\rangle$ with $\langle u_0,u_1\rangle\alpha_i\langle y_0,y_1\rangle\alpha_j\langle v_0,v_1\rangle$ and $\langle u_0,u_1\rangle\alpha_i\langle v_0,v_1\rangle$. Then $iu_0-u_1=iy_0-y_1=iv_0-v_1$ and $jy_0-y_1=jv_0-v_1$. Then $(j-i)y_0=(j-i)v_0$, and so $y_0=v_0$. Then $y_1=v_1$ as well, a contradiction. By a similar argument, $(\eta_i\cap\overline{1^\text{'}})\circ(\alpha_j\cap\overline{1^\text{'}})=\overline{1^\text{'}+\eta_i+\alpha_j}$. These identities hold for all $1\leq i\neq j\leq p-1$, not only for $i,j\leq n$. Moreover, every pair of distinct points lies in exactly one of $\eta_0,\eta_1,\alpha_1,\ldots,\alpha_{p-1}$, so $\beta$ is the union of the relations $\alpha_k\cap\overline{1^\text{'}}$ with $n<k\leq p-1$, and compositions involving $\beta$ reduce to the identities above; in particular $\beta\circ\beta=1$, since $n<p-2$ provides at least two such $k$. Therefore, the composition of atoms is a boolean combination of those atoms. Since composition distributes over union, it follows that BA$(\mathcal{M})=\text{RA}(\mathcal{M})$.\\ Now we are ready to show that Eq(RA$(\mathcal{M})$)$=\mathcal{M}$. First, we note that $\eta_0,\eta_1,$ and $\alpha_1$ through $\alpha_{n}$ are the minimal non-identity elements in Eq$(\text{RA}(\mathcal{M}))$. So suppose that there is some other relation $\gamma\in \text{Eq}(\text{RA}(\mathcal{M}))$. Since $\gamma\in$RA$(\mathcal{M})=$BA$(\mathcal{M})$ and since $\gamma$ is not an atom, $\gamma$ must contain at least two atoms.
Since $\gamma$ also contains $1^\text{'}$, $\gamma$ either contains two of the non-trivial relations $\{\eta_0,\eta_1,\alpha_1,\ldots,\alpha_{n}\}$, or contains the atom $\beta$; in either case $\gamma\circ \gamma=1$ (using $\beta\circ\beta=1$ in the latter case). But $\gamma\circ \gamma=\gamma$, hence $\gamma=1$. So the only non-trivial relations in Eq(RA$(\mathcal{M})$) are $\{\eta_0,\eta_1,\alpha_1,\ldots,\alpha_{n}\}$ along with the unit $1$. Therefore Eq$(\text{RA}(\mathcal{M}))=\mathcal{M}$. \end{proof}
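The computational content of Lemma \ref{lemma1} can be confirmed by brute force for the smallest admissible parameters $p=5$, $n=2$. The sketch below (illustrative Python, not part of the paper) builds the atoms of BA$(\mathcal{M})$, checks that the composition of any two atoms is again a union of atoms (so that composition, which distributes over union, keeps BA$(\mathcal{M})$ closed and BA$(\mathcal{M})=\text{RA}(\mathcal{M})$), and then enumerates the equivalence relations in the resulting algebra.

```python
from itertools import product, combinations

p, n = 5, 2                        # smallest admissible parameters: 1 <= n < p - 2
pts = list(product(range(p), repeat=2))

def rel(pred):
    return frozenset((x, y) for x in pts for y in pts if pred(x, y))

ident = rel(lambda x, y: x == y)
full = rel(lambda x, y: True)
etas = [rel(lambda x, y, i=i: x[i] == y[i]) for i in range(2)]
alphas = [rel(lambda x, y, k=k: (k * x[0] - x[1]) % p == (k * y[0] - y[1]) % p)
          for k in range(1, n + 1)]
M = [ident, full] + etas + alphas

# atoms of the Boolean algebra BA(M): 1', each generator minus 1', and beta
gens = etas + alphas
beta = full - ident - frozenset().union(*gens)
atoms = [ident] + [g - ident for g in gens] + [beta]
assert sum(len(a) for a in atoms) == len(full)                      # atoms partition 1
assert all(frozenset((y, x) for (x, y) in a) == a for a in atoms)   # converse-closed

def compose(r, s):
    succ = {}
    for (y, z) in s:
        succ.setdefault(y, []).append(z)
    return frozenset((x, z) for (x, y) in r for z in succ.get(y, ()))

# every composition of atoms is a union of atoms, so BA(M) = RA(M)
table = {}
for i, a in enumerate(atoms):
    for j, b in enumerate(atoms):
        c = compose(a, b)
        parts = {k for k, at in enumerate(atoms) if at & c}
        assert c == frozenset().union(*(atoms[k] for k in parts))
        table[i, j] = parts

# equivalence relations in BA(M): unions of atoms containing 1' that are
# closed under composition; they turn out to be exactly the members of M
eqs = set()
for m in range(len(atoms)):
    for S in combinations(range(1, len(atoms)), m):
        U = {0} | set(S)
        if all(table[i, j] <= U for i in U for j in U):
            eqs.add(frozenset().union(*(atoms[k] for k in U)))
assert eqs == set(M)
```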
https://arxiv.org/abs/2010.03894
Intrinsic Hierarchical Clustering Behavior Recovers Higher Dimensional Shape Information
We show that specific higher dimensional shape information of point cloud data can be recovered by observing lower dimensional hierarchical clustering dynamics. We generate multiple point samples from point clouds and perform hierarchical clustering within each sample to produce dendrograms. From these dendrograms, we take cluster evolution and merging data that capture clustering behavior to construct simplified diagrams that record the lifetime of clusters akin to what zero dimensional persistence diagrams do in topological data analysis. We compare differences between these diagrams using the bottleneck metric, and examine the resulting distribution. Finally, we show that statistical features drawn from these bottleneck distance distributions detect artefacts of, and can be tapped to recover higher dimensional shape characteristics.
\section{Introduction} \label{intro} Understanding the overall shape of point clouds is a central goal in topological data analysis. The basic task is to recover invariant signatures that characterize either the overall structure, or local properties (manifold learning) of the space where the point cloud is understood to be sampled from. The reward is that these signatures afford us not only a holistic understanding of the data's composition, but also meaningful information about the underlying data that would otherwise be missed by other methods. For example, Perea and Harer \cite{perea} used sliding window embeddings to examine time series gene expression data as point clouds and were able to detect periodicity in the presence of damping missed by state-of-the-art methods. The same strategy was employed in \cite{ignacio} where topology-based features are used as proxy for expert-dependent features to improve diagnosis of Atrial Fibrillation. This idea of interrogating data via its underlying structure is not unique to topological data analysis. Traditionally, insights from clustering algorithms are used to describe the extent to which local or global regions of data tend to group together, or in many other settings to describe the connectivity of an associated graph. This grouping behavior among points, driven by pairwise measures of dissimilarity, provides a lowest dimension characterization of the overall shape of point cloud data. Clustering information, however, is very seldom enough to characterize the shape of point clouds as multiple point clouds may elicit the same overall clustering characteristics while having drastically different overall shapes. A simple example would be if one considers sufficiently sampled points from the digits 0, 3 and 8 considered as point cloud objects. Clustering the points independently in each sample reveals that all three have a single significant cluster (see Figure \ref{pointclouds}).
While density-based clustering may be able to separate these three when all are embedded together in a common space, clustering independently using only intrinsic information within each sample would give very similar characterization for all three point clouds. To circumvent this limitation, the need to appeal to higher dimensional characterization, such as presence of loops or cycles, arises. \begin{figure} \centering \includegraphics[width=0.7\textwidth, height=0.35\textwidth]{mnist2} \caption{Point clouds sampled from the digits 0, 3, and 8 with corresponding dendrograms from hierarchical clustering.} \label{pointclouds} \end{figure} Access to higher dimensional characterization, however, comes at a cost as most methods suffer from combinatorial explosion as the size and complexity of data scale up. Unfortunately, the speed at which technology in this area is progressing is simply outpaced by the speed at which data can be collected. This setting, thus, challenges us to explore other ways of exploiting currently available methods and technology in ways that aid the state-of-the-art. For this, we turn our attention to clustering methods. Consider again the dendrograms below the digits in Figure \ref{pointclouds}. Observe that while the dominant characteristics of all three dendrograms are similar, the merging dynamics among the clusters in the three digits are actually different. It is these subtle differences in clustering behavior that we want to interrogate. In more ways than one, this approach is inspired by the work of Bubenik et al. in \cite{bubenik}, where they showed that specific portions of the summary produced by persistent homology, widely believed and accepted to be artefacts of noise, can actually be used to recover curvature in the underlying space.
In particular, we follow their idea of utilizing unprocessed information from available output summaries to extract supplemental information, and contextualize this approach in the recovery of expensive higher dimensional information. In this work, the main question that we ask is ``\emph{Does intrinsic clustering behavior contain signal about higher dimensional shape characteristics?}" We provide evidence to positively answer this question, and show that it is indeed possible to recover specific higher dimensional shape information of point clouds by observing clustering behavior across multiple point samples. This approach leverages intrinsic hierarchical clustering dynamics within point clouds to detect and retrieve artefacts of higher dimensional characteristics that clustering algorithms, by themselves, are agnostic to. Establishing empirical proof that this method works affords access to the inverse problem of using lower dimensional information, which is considerably easier, cheaper and faster to compute, to either be used as basis for classifying point clouds with varying higher dimensional characteristics, or to explicitly recover this higher dimensional shape information. \section{Pipeline and Methods} \label{sec:1} The first step in our approach is to generate multiple collections of points sampled from a given point cloud. In this work, we consider as point clouds images of handwritten digits from the popular MNIST data set \cite{lecun}. Each collection of sampled points is independently clustered using hierarchical methods, and captured clustering behavior is extracted from their resulting dendrograms. In particular, cluster evolution and lifetime data are used to construct diagrams akin to persistence diagrams in topological data analysis. These diagrams are then compared using the bottleneck metric to obtain an overall comparison of the captured clustering behavior across point samples.
Distributions of these comparisons are examined, and summaries are extracted and tested for classification and prediction tasks. The entire pipeline is shown in Figure \ref{Pipeline}. \begin{figure}[h] \centering \includegraphics[width=\textwidth,height = 0.35\textwidth]{pipeline} \caption{Pipeline for recovering higher dimensional shape information using observed clustering behavior.} \label{Pipeline} \end{figure} \subsection{Point Cloud Sampling} \label{sec:sampling} As the main idea of our approach is to leverage intrinsic characteristics observed from the clustering of sampled sub-collections of points, it is important that each sample is a good representation of the original point cloud. A caveat, however, is that observation is perspective-dependent. Thus, to minimize the overall effect of bias introduced from specific observations of clustering behavior, we capture such from multiple perspectives. We do this by extracting point samples that reflect the distribution of points relative to multiple pre-selected landmark locations in the ambient space where the point cloud lives (see Figure \ref{sampling}a). We use the same grid-based landmark points used by Garin and Tauzin \cite{garin} for their work on the same data set. 
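A minimal sketch of this distribution-preserving sampling step (plain Python, written for illustration; the number of histogram bins, the tie handling, and the per-bin fraction parameter $k$ below are our own choices, not a specification of the authors' implementation):

```python
import math
import random

def landmark_sample(points, landmark, k, n_bins=8, seed=0):
    """Draw roughly 1/k of the points from each bin of the histogram of
    distances to a landmark, preserving the radial distribution."""
    rng = random.Random(seed)
    dists = [math.dist(pt, landmark) for pt in points]
    lo, hi = min(dists), max(dists)
    width = (hi - lo) / n_bins or 1.0          # guard against a degenerate range
    bins = {}
    for pt, d in zip(points, dists):
        b = min(int((d - lo) / width), n_bins - 1)
        bins.setdefault(b, []).append(pt)
    sample = []
    for members in bins.values():
        take = max(1, len(members) // k)       # sampling resolution 1/k per bin
        sample.extend(rng.sample(members, take))
    return sample

rng = random.Random(42)
cloud = [(rng.random(), rng.random()) for _ in range(100)]
sub = landmark_sample(cloud, (0.0, 0.0), k=2)
assert set(sub) <= set(cloud) and 0 < len(sub) <= len(cloud)
```

Repeating this over several landmarks and several values of $k$ yields the multiple sampling settings used throughout.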
\begin{figure}[h] \begin{subfigure}{0.3\textwidth} \includegraphics[width=\textwidth]{grid} \caption{} \label{sample1} \end{subfigure} \begin{subfigure}{0.75\textwidth} \centering \noindent \begin{tikzpicture} \node [anchor=west,style={rotate=90}] (res) at (-0.1,0.25) {\large Resolution}; \begin{scope} \node[anchor=south west,inner sep=0] (image) at (0,0) { \includegraphics[trim= 0 90 1 1,clip,width = 0.8\textwidth, height = 0.4\textwidth]{sampling}}; \begin{scope}[x={(image.south east)},y={(image.north west)}] \draw [-stealth, line width=2pt, cyan] (res) -- ++(0.0,0.6); \end{scope} \end{scope} \end{tikzpicture}% \caption{} \label{sample2} \end{subfigure} \caption{(a) Points are sampled based on their distribution relative to landmark points shown in red. (b) Samples are also taken in varying levels of resolution.} \label{sampling} \end{figure} We also observe clustering behavior in multiple resolutions. Using the distribution of points relative to a pre-selected landmark, we consider several sampling resolutions in which the number of points selected per bin in the distribution histogram is $1/k$th of the bin size for $k = 2, 3, 4, 5, 6$, e.g. a sampling resolution for $k = 2$ selects $1/2$ of the total points in every histogram bin. This will allow us to capture clustering behavior in varying levels of granularity (see Figure \ref{sampling}b). We create $n=30$ instances for each of the 45 pairs of sampling resolution and landmark reference. We refer to each combination pair of resolution and landmark reference as a sampling setting. \subsection{Clustering Diagrams and Bottleneck-based Summaries} \label{sec:bottleneck} For each sub-collection of sampled points, we use four hierarchical clustering algorithms to produce four distinct \emph{dendrograms}. A dendrogram is a tree-like multi-scale distance-parametrized record of the evolution of the clusters within a collection of points. 
Each leaf in the dendrogram represents a cluster and the merging of leaves the merging of the clusters they represent. The distance value at which clusters merge, and hence the clustering dynamics observed, is determined by the algorithm used. By employing several algorithms, we are able to record a nuanced observation of the clustering dynamics and evolution. It is to capture this recorded history of cluster merging that we specifically employ hierarchical clustering algorithms. We would like to examine if features borne out of an examination of this cluster merging history contain recoverable artefacts of higher dimensional characteristics. The four hierarchical clustering algorithms we use are based on Single, Average, Complete, and Ward's linkage. For a quick introduction on these linkage methods, we refer the interested reader to \cite{jain}. To extract the clustering behavior, we take the merge heights $\Lambda := \{d_i\}_{i\in I}$ in the dendrogram and use it to construct a multi-set of points in the plane $\{(0,d)|d\in\Lambda\}$. To ease visualization, we illustrate this multi-set of points as a collection of bars with length corresponding to merge heights in the dendrogram (see Figure \ref{dendrogram}). This multi-set of points, called a \emph{clustering diagram} \cite{leyda}, filters out unnecessary choices needed to properly illustrate a dendrogram while keeping important information that we want to explore. Clustering diagrams record the lifetime of clusters akin to what persistence diagrams in dimension zero do in topological data analysis. In fact, the clustering diagram constructed from the single linkage dendrogram recovers the dimension zero persistence barcode produced by the Vietoris-Rips filtration.
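For single linkage, the merge heights can be computed directly: they are exactly the edge weights of a minimum spanning tree on the sample. A sketch (plain Python, Prim's algorithm; the other three linkages would instead be read off a full dendrogram computation, e.g. SciPy's `scipy.cluster.hierarchy.linkage`):

```python
import math

def single_linkage_merge_heights(points):
    """Merge heights of the single-linkage dendrogram; these coincide with
    the edge weights of a minimum spanning tree (Prim's algorithm)."""
    n = len(points)
    in_tree = [False] * n
    in_tree[0] = True
    best = [math.dist(points[0], q) for q in points]   # distance to the tree
    heights = []
    for _ in range(n - 1):
        j = min((i for i in range(n) if not in_tree[i]), key=best.__getitem__)
        heights.append(best[j])
        in_tree[j] = True
        for i in range(n):
            if not in_tree[i]:
                best[i] = min(best[i], math.dist(points[j], points[i]))
    return sorted(heights)

# the clustering diagram is the multiset {(0, d) : d a merge height}
sample = [(0, 0), (0, 1), (5, 0), (5, 1)]
diagram = [(0.0, d) for d in single_linkage_merge_heights(sample)]
assert [d for _, d in diagram] == [1.0, 1.0, 5.0]
```

The two within-pair merges at height 1 and the final merge at height 5 correspond to the two tight clusters joining last, exactly the dimension-zero barcode of this toy sample.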
\begin{figure}[h] \centering \includegraphics[height=0.35\textwidth,width=0.8\textwidth]{dendro} \caption{Clustering diagrams are constructed by extracting the merge heights of clusters in a dendrogram.} \label{dendrogram} \end{figure} To obtain a measure of the differences among the clustering behaviors observed across the sub-collections of points, we compute the pairwise differences across all clustering diagrams via the \emph{bottleneck distance} implemented in \textsc{Lum\'awig} \cite{lumawig}. Given two multi-sets of points $X$ and $Y$ in the plane, the bottleneck distance between them is defined as $$ d_B(X,Y) = \inf_{\phi} \sup_{x\in X} ||x-\phi(x)||_{\infty} $$ where the infimum is taken over all bijections $\phi:X\sqcup \Delta \to Y\sqcup \Delta$ and $\Delta$ is the diagonal. In general terms, the bottleneck distance measures the cost to transform one multi-set to another. We compute the distribution of pairwise bottleneck distances among the 30 clustering diagrams produced in each sampling setting. This distribution provides insight into how observed clustering behaviors vary across the sampled representations of the original point cloud. From this distribution, we obtain the following statistical summaries: \begin{enumerate} \item Minimum and maximum bottleneck distance values; \item Mean, standard deviation, skewness, and kurtosis of all bottleneck distances; \item Size of the largest bin in the histogram obtained from the distribution. \end{enumerate} \subsection{Higher Dimensional Shape Information via Persistent Homology} We generate higher dimensional characteristics of point clouds using persistent homology. For a quick introduction to persistent homology, we refer the interested reader to \cite{roadmap}. Succinctly, we induce a distance-parameterized sequence of simplicial complexes built over each point cloud sample to obtain mathematically computable topological signatures that persist across multiple scales (see Figure \ref{filtration}).
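Because every point of a clustering diagram has the form $(0,d)$, matching two diagram points costs $|d-d'|$ and sending $(0,d)$ to the diagonal costs $d/2$. For very small diagrams the bottleneck distance can then be computed exactly by brute force, which we sketch below as an illustrative stand-in for the optimized algorithm in \textsc{Lum\'awig}:

```python
from itertools import permutations

def bottleneck_0dim(dx, dy):
    """Exact bottleneck distance between two clustering diagrams given as
    lists of merge heights (each diagram point has the form (0, d)).
    Brute force over bijections after augmenting each side with one
    diagonal slot per point of the other side; factorial time, so only
    suitable for tiny diagrams."""
    X = [('p', d) for d in dx] + [('diag', None)] * len(dy)
    Y = [('p', d) for d in dy] + [('diag', None)] * len(dx)

    def cost(a, b):
        if a[0] == 'p' and b[0] == 'p':
            return abs(a[1] - b[1])      # ||(0,d) - (0,d')||_inf
        if a[0] == 'p':
            return a[1] / 2              # send (0,d) to the diagonal
        if b[0] == 'p':
            return b[1] / 2
        return 0.0                       # diagonal slot matched to diagonal slot

    return min(max((cost(a, b) for a, b in zip(X, perm)), default=0.0)
               for perm in permutations(Y))

assert bottleneck_0dim([3.0], []) == 1.5
assert bottleneck_0dim([4.0], [2.0]) == 2.0
assert bottleneck_0dim([4.0, 1.0], [3.0]) == 1.0
```

\textsc{Lum\'awig} exploits the special structure of dimension-zero diagrams to avoid this factorial search; the brute force above serves only to make the metric concrete.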
In this approach, persistence over a large range of scales is regarded as a measure of significance for the corresponding topological characteristic. \begin{figure} \centering \noindent \begin{tikzpicture} \node [anchor=west,style={rotate=90}] (res) at (-0.1,0.25) {\large Resolution}; \node [anchor=west] (water) at (3,0) {\large Filtration}; \begin{scope} \node[anchor=south west,inner sep=0] (image) at (0,0) { \includegraphics[trim= 0 195 1 1,clip,width = 0.8\textwidth, height = 0.35\textwidth]{filtration}}; \begin{scope}[x={(image.south east)},y={(image.north west)}] \draw [-stealth, line width=2pt, cyan] (res) -- ++(0.0,0.6); \draw [-stealth, line width=2pt, cyan] (water) -- ++(0.4,0.0); \end{scope} \end{scope} \end{tikzpicture}% \caption{A Vietoris-Rips filtration is induced over each point cloud in every sampling setting.} \label{filtration} \end{figure} We extract persistent homological signatures pertaining to cycles (i.e. one-dimensional holes) in the point clouds. In particular, we take the number of detected persistent cycles, the average persistence, and the maximum persistence for each of the 30 sub-collections of points in every sampling setting, and take the mean for each feature. In what follows, we examine whether our clustering-based features can recover both the information provided by persistence-based higher dimensional features and their explicit values. \section{Recovering higher dimensional shape information} Our goal is to provide empirical evidence that statistical summaries obtained from intrinsic clustering behavior contain information that can be used to successfully classify point clouds with varying higher dimensional characteristics, and to explicitly predict values for such characteristics. We employ a \emph{random forest classifier} for the first task, and a \emph{random forest regressor} for the second task. 
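Before turning to these models, note that the persistence-based dimension 1 features described above reduce to simple summaries of the persistence values $d - b$ of the birth-death pairs $(b,d)$ in a dimension 1 persistence diagram. A sketch (any thresholding used to decide which cycles count as detected is omitted here):

```python
def dim1_features(diagram):
    """Summaries of a dimension 1 persistence diagram given as
    (birth, death) pairs: cycle count, average persistence,
    and maximum persistence."""
    pers = [death - birth for birth, death in diagram]
    return len(pers), sum(pers) / len(pers), max(pers)
```

For instance, a diagram with pairs $(0.1, 0.5)$ and $(0.2, 1.0)$ yields two cycles with average persistence $0.6$ and maximum persistence $0.8$.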
Succinctly, a \emph{random forest} is an ensemble of \emph{decision trees} that uses the collective output of all trees to arrive at either a classification or a prediction. We train a 1000-tree random forest using an initial pool of 1,395 features that include clustering-based features and persistence-based higher dimensional features. For distinction, we refer to features coming from the first group as \emph{dimension 0} features and to those from the second as \emph{dimension 1} features. \subsection{Classification of Digit Images in MNIST} \label{mnist} After an initial training, we rank all features from our initial pool by importance, and extract the top 200 features, of which 150 are dimension 0 and 50 are dimension 1. We construct 3 training feature sets from these. The first set consists of only dimension 0 features, the second set consists of only dimension 1 features, and the third set consists of all 200 dimension 0 and dimension 1 features. To obtain comparable results from training across these 3 feature sets, no other fine-tuning is done for the random forest training. We perform a 30-fold stratified cross validation on the full MNIST training data set of 60,000 digit images and report the performance in Table \ref{classifiercross}. We first discuss the performance of the random forest trained using the first feature set of only dimension 0 features. 
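The aggregation step of the ensemble described above can be sketched in a few lines. Here each fitted tree is abstracted as a callable, so this toy shows only the voting and averaging logic, not the tree construction of the actual random forest implementation we use:

```python
from collections import Counter

def forest_classify(trees, x):
    """Classification: majority vote over an ensemble of fitted trees."""
    return Counter(tree(x) for tree in trees).most_common(1)[0][0]

def forest_regress(trees, x):
    """Regression: average prediction over an ensemble of fitted trees."""
    return sum(tree(x) for tree in trees) / len(trees)
```

With three stump classifiers voting ``a'', ``b'', ``a'', the ensemble outputs ``a''; two regressors predicting 1.0 and 3.0 average to 2.0.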
\begin{table}[h] \centering \caption{$F_1$ scores over a 30-fold stratified cross validation of the Random Forest trained on dimension 0 features to classify digits.} \label{classifiercross} \begin{tabular}{cccccccccccc} \hline\noalign{\smallskip} Digit & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & Overall \\ \noalign{\smallskip}\hline\noalign{\smallskip} Mean & 0.777 & 0.927 & 0.649 & 0.653 & 0.644 & 0.629 & 0.791 & 0.741 & 0.699 & 0.705 & 0.722 \\ SD & 0.020 & 0.012 & 0.034 & 0.020 & 0.030 & 0.029 & 0.021 & 0.025 & 0.028& 0.030&0.011 \\ \noalign{\smallskip}\hline \end{tabular} \end{table} Clusters are often interpreted as the components of point cloud data. Using linkage-based clustering algorithms, these clusters are further understood as connected components in the graph induced by the linking criteria, and the hierarchical nature of the algorithm suggests that the significant connected components of the underlying point cloud are captured by the long branches in the corresponding dendrogram. It is worth pointing out that although all the digits represented in the MNIST data set have the same number of significant connected components (i.e. they all have a single connected component), the clustering behavior observed still provides a relatively good basis for classifying the digits, as evidenced by the obtained $F_1$ scores. We note in particular the relatively high scores for the digits $0, 6, 8,$ and $9$, which all share the higher dimensional topological characteristic of possessing cycles\footnote{Although it can be argued that the digit ``4'' may also be considered in this class, handwriting styles introduce variations of this digit that do not contain the topological hole.}. To put this performance in perspective, we investigate how much new information is contributed by dimension 1 features that is unseen by our clustering-based dimension 0 features. 
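For reference, the per-class scores reported in Table \ref{classifiercross} are $F_1$ scores, the harmonic mean of precision and recall. A minimal sketch of the computation for one class:

```python
def f1_score(y_true, y_pred, cls):
    """Per-class F1: harmonic mean of precision and recall for `cls`."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

For example, with true labels $[0,0,1,1]$ and predictions $[0,1,1,1]$, class 1 has precision $2/3$ and recall $1$, hence $F_1 = 0.8$.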
We train the random forest using the two other feature sets and plot their performances against the performance when training with only dimension 0 features. The idea is that, as dimension 1 features capture higher dimensional characteristics present in several digit classes, they themselves could be used as a basis for classification; and since clustering algorithms are agnostic to these higher dimensional characteristics, which in theory are good discriminants for the digits, their inclusion should produce a significant increase across all class scores. \begin{figure}[h] \centering \includegraphics[height = 0.75\textwidth,width=0.8\textwidth]{plots} \caption{Each plot corresponds to the average $F_1$ class score across a 30-fold cross validation on the full MNIST training data. The horizontal axis represents $F_1$ scores for a random forest classifier trained using dimension 0 features. The vertical axis represents $F_1$ scores from the same random forest trained on different feature sets, one using only dimension 1 features (red points), and the other using dimension 0 and dimension 1 features (blue points). The boxplots show the distribution of the relative differences on $F_1$ scores between training sets.} \label{plots} \end{figure} Figure \ref{plots} plots the class and overall $F_1$ scores of the random forest over a 30-fold stratified cross validation over the full MNIST training set. In these plots, each blue point is paired with a red point, as they share the same first coordinate (i.e. the $F_1$ score from training on dimension 0 features). Several important observations can be made from these plots. The first is that dimension 1 features certainly improve the classification performance of the random forest, as evidenced by the significant majority of blue points plotted above the diagonal in many of the classes, and hence in the overall classification. This is most pronounced in the class scores for the digits 0, 4, 8, and 9. 
We verify this observation and measure the improvement introduced by performing a paired t-test using the cross validation scores. We report the result of this test in Table \ref{ttest1}. \begin{table}[h] \centering \caption{Paired t-test of significant difference of $F_1$ scores between training on dimension 0 features and when supplemented with dimension 1 features.} \label{ttest1} \begin{tabular}{cccccccccccc} \hline\noalign{\smallskip} Digit & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & Overall \\ \noalign{\smallskip}\hline\noalign{\smallskip} Mean Diff. & 0.115 & 0.014 & -0.009 & 0.023 & 0.063 & -0.007 & 0.010 & 0.036 & 0.094 & 0.098 & 0.044 \\ $p$-value & 2e-16 & 1e-10 & 0.05 & 3e-05 & 5e-15 & 0.2 & 0.01 &3e-11 & 2e-16& 2e-16&2e-16 \\ \noalign{\smallskip}\hline \end{tabular} \end{table} Now, the fact that red points in Figure \ref{plots} are plotted below the diagonal in all except one class shows that the random forest performed better when trained using only dimension 0 features than when trained using only dimension 1 features. This is confirmed by the red boxplots in the last subfigure, which show the distribution of positive relative differences in $F_1$ scores. In addition, the significant vertical distance of the blue points from the red points highlights the significant increase in classification performance across all classes when dimension 0 features are supplemented to dimension 1 features. We again verify this observation and measure the improvement introduced by performing a paired t-test using the cross validation scores, and report the result of this test in Table \ref{ttest2}. All these suggest that dimension 0 features are able to pick up shape characteristics within the digits that are better discriminants for the digit classes and/or are not seen by dimension 1 features. 
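The test statistic underlying Tables \ref{ttest1} and \ref{ttest2} is the standard paired $t$-statistic, sketched below; converting it to a $p$-value requires the $t$-distribution with $n-1$ degrees of freedom and is omitted from this sketch:

```python
import math

def paired_t_statistic(a, b):
    """t-statistic for paired samples a, b: mean(d) / (sd(d) / sqrt(n)),
    where d is the vector of pairwise differences a - b."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)
```

For paired scores with differences $[1, 2, 3]$, the mean difference is 2 with sample variance 1, giving $t = 2\sqrt{3} \approx 3.46$ on 2 degrees of freedom.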
\begin{table}[h] \centering \caption{Paired t-test of significant difference of $F_1$ scores between training on dimension 1 features and when supplemented with dimension 0 features.} \label{ttest2} \begin{tabular}{cccccccccccc} \hline\noalign{\smallskip} Digit & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & Overall \\ \noalign{\smallskip}\hline\noalign{\smallskip} Mean Diff. & 0.050 & 0.051 & 0.299 & 0.172 & 0.148 & 0.415 & 0.268 & 0.235 & 0.206 & 0.182 & 0.203 \\ $p$-value & 2e-16 & 2e-16 & 2e-16 & 2e-16 & 2e-16 & 2e-16 & 2e-16 &2e-16 & 2e-16& 2e-16&2e-16 \\ \noalign{\smallskip}\hline \end{tabular} \end{table} Finally, we highlight that while dimension 0 and 1 features both benefit when one is supplemented by the other, the increase in performance of the random forest is generally much more significant when dimension 1 features are supplemented by dimension 0 features. \subsection{Predicting Higher Dimensional Shape Characteristics} \label{predict} We now attempt to explicitly recover higher dimensional shape characteristics using only clustering data. We perform this task in two ways. In the first approach, we use the same random forest classifier from the previous section, trained only on dimension 0 features, and replace the labels with the number of topological holes to see if the model can learn to classify the digits according to this higher dimensional characteristic. The classes are defined as follows: the digits 1, 2, 3, 5, and 7 have no topological holes, the digits 0, 4, 6, and 9 have one topological hole, and the digit 8 alone has two topological holes. In this approach, the assigned class provides the predicted value for the higher dimensional characteristic. In a second approach, we train a random forest regressor with 1000 trees on the same dimension 0 features and use each of the 50 dimension 1 features from the previous section as labels in an attempt to predict their values. 
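The relabeling used in the first approach is a fixed map from each digit to its number of topological holes, per the class definition above (a one-line preprocessing step):

```python
# Number of topological holes per digit class, as defined in the text.
HOLES = {1: 0, 2: 0, 3: 0, 5: 0, 7: 0,
         0: 1, 4: 1, 6: 1, 9: 1,
         8: 2}

def relabel(digit_labels):
    """Replace each digit label by its number of topological holes."""
    return [HOLES[d] for d in digit_labels]
```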
In both approaches, we perform a stratified 30-fold cross validation on the full MNIST training data. Table \ref{predclass} shows that the random forest classifier is able to classify the digits 0--9, at a respectable level of accuracy, according to the number of topological holes present in their shape using only features based on observed clustering behavior. We remark that this is somewhat expected considering the relatively good performance of the random forest trained on the same feature set in classifying all 10 digits themselves, as seen in the previous section. Indeed, the increase in overall $F_1$ score could logically be attributed to the pooling together of digits with the same topological shape, as it essentially filters out some of the confusion introduced when topologically equivalent digits, e.g. the digits 6 and 9, are labelled differently. Nevertheless, this does verify that clustering behavior is indeed able to capture artefacts of the identified higher dimensional characteristic. \begin{table}[h] \centering \caption{$F_1$ scores over a 30-fold stratified cross validation of the Random Forest classifier trained on dimension 0 features to classify based on the number of topological holes.} \label{predclass} \begin{tabular}{ccccc} \hline\noalign{\smallskip} \# of Holes: & 0 & 1 & 2 & Overall \\ \noalign{\smallskip}\hline\noalign{\smallskip} Mean & 0.842 & 0.816 & 0.634 & 0.764 \\ SD & 0.008 & 0.009 & 0.031 & 0.014 \\ \noalign{\smallskip}\hline \end{tabular} \end{table} Now, to flesh out in more detail the kind of higher dimensional characteristics that intrinsic hierarchical clustering behavior is able to recover, and the level of precision with which it can do so, we examine the performance of the random forest regressor in predicting each of the persistence-based dimension 1 features. For this part, we do a pre-training of the regressor to identify which 50 out of the 150 dimension 0 features are best suited to predicting each dimension 1 feature. 
Hence our training sets in this exercise are dimension 1 feature-specific. Figure \ref{predreg} shows the distribution of mean relative errors in the predicted values for each dimension 1 feature across a 30-fold cross-validation. The predicted features are stacked from bottom to top based on their original ranking discussed in the previous section. It is clear that the regressor performs best when predicting features induced from sampling at the best resolution, and that performance and prediction variability decay with resolution. It can also be observed that the distributions of mean relative error for samples referenced from the same landmark point follow a consistent trend at resolutions 3 and above. \begin{figure}[h] \centering \includegraphics[height=0.3\textwidth,width=0.8\textwidth]{dim1boxplots} \caption{Distributions of mean relative errors from the prediction of the top 50 dimension 1 features by a random forest regressor trained on dimension 0 features. Each boxplot is obtained using scores from a 30-fold cross validation.} \label{predreg} \end{figure} It is interesting to note, however, that despite this apparent significant deviation of predicted values from actual ones, the regressor is still able to use these predicted values to recover a significant portion of the performance boost reported in Table \ref{ttest1}, where actual dimension 1 features are introduced. In particular, this is most notable for the digits 0, 4, 6, 8, and 9, as observed from Table \ref{ttest3} and Figure \ref{predreg2}, further supporting our hypothesis that our dimension zero features, crafted from an observation of clustering behavior, do detect artefacts of higher dimensional characteristics and can be tapped by machine learning algorithms to recover them. 
\begin{table}[h] \centering \caption{Paired t-test of significant difference of $F_1$ scores obtained when training on dimension 0 features and when the same training set is supplemented with predicted dimension 1 features.} \label{ttest3} \begin{tabular}{cccccccccccc} \hline Digit & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & Overall \\ \hline Mean Diff. & 0.101 & -0.009 & -0.017 & 0.004 & 0.038 & -0.021 & 0.012 & -0.009 & 0.092 & 0.088 & 0.028 \\ $p$-value & 2e-16 & 2e-4 & 0.001 & 0.5 & 2e-9 & 7e-4 & 0.002&0.02 & 2e-16& 2e-16&2e-16 \\ \hline \end{tabular} \end{table} \begin{figure}[h] \centering \includegraphics[height = 0.4\textwidth,width=0.9\textwidth]{plots21} \caption{The blue points are the same points from Figure \ref{plots} representing the comparison between $F_1$ scores obtained when a random forest classifier is trained on dimension 0 features (horizontal component) and when the same training set is supplemented with dimension 1 features (vertical component). The red points have the same horizontal component, but the vertical component represents the $F_1$ scores when the dimension 0 features are supplemented with predicted dimension 1 features. The boxplots show the distribution of the relative differences in $F_1$ scores between training sets.} \label{predreg2} \end{figure} \section{Conclusion} In this paper, we showed that specific artefacts of higher dimensional shape information can be recovered from intrinsic clustering behavior observed from samplings of an underlying point cloud. By employing a multi-referenced and multi-resolution sampling of the point cloud, we are able to capture intrinsic clustering behavior from multiple perspectives and levels of granularity. In addition, our application of several hierarchical clustering algorithms provided a nuanced observation of clustering behavior from these sampling settings. 
Tapping the bottleneck distance provided a holistic way to compare and measure differences between observed clustering behaviors. Finally, we showed that statistical features derived from these computed differences can be used to train a machine learning algorithm to classify point cloud objects with higher dimensional shape characteristics, and to produce approximations of these quantities that can be used as proxies for classification tasks.
https://arxiv.org/abs/2002.02523
Max Min vertex cover and the size of Betti tables
Let $G$ be a finite simple graph on $n$ vertices, that contains no isolated vertices, and let $I(G) \subseteq S = K[x_1, \dots, x_n]$ be its edge ideal. In this paper, we study the pair of integers that measure the projective dimension and the regularity of $S/I(G)$. We show that if the projective dimension of $S/I(G)$ attains its minimum value $2\sqrt{n}-2$ then, with only one exception, its regularity must be 1. We also provide a full description for the spectrum of the projective dimension of $S/I(G)$ when the regularity attains its minimum value 1.
\section{Introduction} In current research in commutative algebra, combinatorics plays a distinguished role. Particularly, the combinatorics of finite graphs has created fascinating research problems in commutative algebra and, vice-versa, algebraic methods and techniques have shed new light on graph-theoretic questions (\cite[Chapters 9 and 10]{HHgtm260}). Let $G$ be a simple graph over the vertex set $V_G = \{x_1, \dots, x_n\}$ and edge set $E_G$. Throughout the paper, all our graphs are assumed to contain no isolated vertices. Let $K$ be a field and identify the vertices in $V_G$ with the variables in the polynomial ring $S = K[x_1, \ldots, x_n]$. The \emph{edge ideal} of $G$, first introduced by Villarreal \cite{V}, is defined by $$I(G) = \langle x_i x_j ~\big|~ \{x_i, x_j\} \in E_G\rangle \subseteq S.$$ Let $\pd(G)$ and $\reg(G)$ denote the \emph{projective dimension} and the Castelnuovo--Mumford \emph{regularity} of $S/I(G)$, respectively. These are fundamental homological invariants that measure the computational complexity of $S/I(G)$. Particularly, $\pd(G)$ and $\reg(G)$ describe the size of the \emph{graded Betti table} of $S/I(G)$. Our work in this paper is motivated by the following basic question. \begin{question} \label{question} Given a positive integer $n$, for which pairs of integers $(p,r)$ does there exist a graph $G$ on $n$ vertices such that $\pd(G) = p$ and $\reg(G) = r$? \end{question} Our approach to Question \ref{question} is purely combinatorial in nature. Specifically, we investigate an important graph-theoretic invariant, namely, the maximum size of a minimal vertex cover of $G$, which we shall denote by $\tau_{\max}(G)$. In graph theory, the two symmetric dual problems, to find a \textsc{max min vertex cover} and to find a \textsc{min max independent set} in a graph, are known to be \textbf{NP}-hard problems and, in recent years, have received growing attention \cite{BdCP, BdCEP, CHLN, GKL, GL, Hall}. 
In particular, a result of Costa, Haeusler, Laber and Nogueira \cite[Theorem 2.2]{CHLN} proves that \begin{align} \tau_{\max}(G) \ge 2\sqrt{n}-2. \label{eq.tau} \end{align} The inequality (\ref{eq.tau}) gives rise to a somewhat surprising bound, which may have been overlooked, that $\pd(G) \ge 2\sqrt{n}-2$; see Corollary \ref{cor.pd}. Our first main result shows that if $\pd(G) = 2\sqrt{n}-2$ then, with only one exception, we must have $\reg(G) = 1$. In fact, since $\pd(G) \ge \tau_{\max}(G)$ (see, for example, the proof of Corollary \ref{cor.pd}), we prove the following stronger statement. For a positive integer $s$, let $H_s$ denote the graph consisting of a complete subgraph $K_s$, each of whose vertices is connected to a set of $s-1$ independent vertices, and these independent sets are pairwise disjoint (see Figure \ref{figHs}). Let $2K_2$ denote the graph consisting of two disjoint edges and let $C_4$ be the induced $4$-cycle. \medskip \noindent\textbf{Theorem \ref{thm.reg}.} Let $G$ be a simple graph on $n$ vertices and suppose that $n$ is a perfect square. If $\tau_{\max}(G) = 2\sqrt{n}-2$ then we must have either \begin{enumerate} \item $G = 2K_2$ and $(\pd(G), \reg(G)) = (2,2)$, or \item $G = C_4$ and $(\pd(G), \reg(G)) = (3,1)$, or \item $G = H_s$, for some $s \in {\mathbb N}$, and $(\pd(G),\reg(G)) = (2\sqrt{n}-2,1)$. \end{enumerate} To establish Theorem \ref{thm.reg}, we characterize all graphs $G$ for which $\tau_{\max}(G) = 2\sqrt{n}-2$, when $n$ is a perfect square. Our classification reads as follows. \medskip \noindent\textbf{Theorem \ref{classification}.} Let $G$ be a simple graph on $n$ vertices and suppose that $n$ is a perfect square. Then $\tau_{\max}(G) = 2\sqrt{n}-2$ if and only if $G$ is either $2K_2$, $C_4$ or $H_s$, for some $s \in {\mathbb N}$. \medskip Theorem \ref{thm.reg} further leads to the following natural special case of Question \ref{question}: what is the spectrum of $\pd(G)$ for all graphs $G$, for which $\reg(G) = 1$? 
Our next main result answers this question. \medskip \noindent\textbf{Theorem \ref{thm.spec}.} Let $n \ge 2$ be any integer. The spectrum of $\pd(G)$ for all graphs $G$ on $n$ vertices, for which $\reg(G) = 1$, is precisely $[2\sqrt{n}-2, n-1] \cap {\mathbb Z}$. \medskip As an immediate consequence of Theorem \ref{thm.spec}, we show in Corollary \ref{cor.spec} that for any pair of positive integers $(p,r)$ such that $r \le n/2$ and $$2 \sqrt{n-2(r-1)}+r-3 \le p \le n-r,$$ there exists a graph on $n$ vertices such that $(\pd(G),\reg(G)) = (p,r)$. The paper is outlined as follows. Section \ref{sec.graph} collects basic notations and terminology of finite simple graphs that will be used in the paper. In Section \ref{sec.gapfree}, we start with a simple proof of the inequality (\ref{eq.tau}) when $G$ is a gap-free graph, giving the non-experts (in our case, the algebraic readers) a glimpse of why $2\sqrt{n}-2$ appears naturally in the bound for $\tau_{\max}(G)$; see Proposition \ref{gap-free}. We also construct, for any given $n \in {\mathbb N}$, a gap-free and chordal graph $G$ such that $\tau_{\max}(G) = \lceil 2\sqrt{n}-2\rceil$, exhibiting that the inequality (\ref{eq.tau}) is sharp. Section \ref{sec.main} is devoted to a new proof of the inequality (\ref{eq.tau}) in the most general situation; see Theorem \ref{tulane}. Our proof is different from that given in \cite[Theorem 2.2]{CHLN} and provides structures that we can later use in the classification of graphs where the equality is attained. Section \ref{sec.class} contains our main results of the paper. We classify graphs $G$ for which $\tau_{\max}(G) = 2\sqrt{n}-2$, in Theorem \ref{classification}, and examine $(\pd(G), \reg(G))$ in this case, in Theorem \ref{thm.reg}. We finally show that when $\reg(G) = 1$, the spectrum of $\pd(G)$ is $[2\sqrt{n}-2, n-1] \cap {\mathbb Z}$, in Theorem~\ref{thm.spec}. 
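Before turning to the proofs, the quantities involved can be checked computationally for small graphs. A brute-force sketch (exponential in $n$, for illustration only) enumerates minimal vertex covers directly; for $n = 4$ it confirms that $2K_2$, $C_4$ and $H_2$ all attain $\tau_{\max} = 2\sqrt{4} - 2 = 2$, in line with Theorem \ref{classification}:

```python
from itertools import combinations

def tau_max(n, edges):
    """Largest size of a minimal vertex cover of the graph on
    vertices 0..n-1 with the given edge list (brute force)."""
    def covers(W):
        return all(u in W or v in W for u, v in edges)

    def minimal(W):
        return covers(W) and all(not covers(W - {w}) for w in W)

    return max(len(W)
               for k in range(n + 1)
               for W in (set(c) for c in combinations(range(n), k))
               if minimal(W))
```

Here $H_2$ is encoded with clique $\{0,1\}$ and one pendant vertex attached to each clique vertex; for contrast, the star $K_{1,3}$ has $\tau_{\max} = 3$, since its three leaves form a minimal vertex cover.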
\begin{ack} The first author was supported by Louisiana BOR grant LEQSF(2017-19)-ENH-TR-25, and the second author was supported by JSPS KAKENHI 19H00637. The authors would like to thank Martin Milanic for informing us that the inequality (\ref{eq.tau}) was already known and drawing our attention to \cite{CHLN}. The authors would also like to thank Antonio Macchia for pointing out to us that $C_4$ should be included in our classification result. \end{ack} \section{Combinatorics of finite graphs and algebraic invariants} \label{sec.graph} Throughout the paper $G$ shall denote a finite simple graph on $n \ge 2$ vertices that contains no isolated vertices. Recall that a finite graph $G$ is \emph{simple} if $G$ has neither \emph{loops} nor \emph{multiple edges}. We shall use $V_G$ and $E_G$ to denote the vertex and edge sets of $G$, respectively. \begin{defn} Let $G$ be a graph. \begin{enumerate} \item A subset $W \subseteq V_G$ is called a \emph{vertex cover} of $G$ if $e \cap W \not= \emptyset$ for every edge $e \in E_G$. A vertex cover $W$ is \emph{minimal} if no proper subset of $W$ is also a vertex cover of $G$. Set $$\tau_{\max}(G) = \max\{ |W| ~\big|~ W \text{ is a minimal vertex cover of } G\}.$$ \item A subset $W \subseteq V_G$ is called an \emph{independent set} if for any $u\not=v \in W$, $\{u,v\} \not\in E_G$. \end{enumerate} \end{defn} \noindent It is easy to see that the complement of a vertex cover is an independent set. Particularly, the complement of a \textsc{maximum minimal vertex cover} is a \textsc{minimum maximal independent set}. For a subset $W \subseteq V_G$, the \emph{induced subgraph} of $G$ over $W$ is the graph whose vertex set is $W$ and whose edge set is $\{\{u,v\} ~\big|~ u,v \in W \text{ and } \{u,v\} \in E_G\}$. A subset $M$ of $E_G$ is a \emph{matching} of $G$ if, for any $e \not= e'$ in $M$, one has $e \cap e' = \emptyset$. The \emph{matching number} of $G$ is the largest size of a matching in $G$, and is denoted by $\beta(G)$. 
A matching $M$ of $G$ is called an \emph{induced matching} of $G$ if the induced subgraph of $G$ over $\bigcup_{e \in M} e$ has no edges other than those already in $M$. The \emph{induced matching number} of $G$ is the largest size of an induced matching of $G$, and is denoted by $\nu(G)$. It is known from \cite{HVT, Ka} that $\nu(G) \leq \reg(G) \leq \beta(G)$. \begin{defn} Let $G$ be a graph. \begin{enumerate} \item $G$ is called \emph{gap-free} if $\nu(G) = 1$. Equivalently, $G$ is gap-free if for any two disjoint edges $e, f \in E_G$, there exists an edge $g \in E_G$ such that $e \cap g \not= \emptyset$ and $f \cap g \not=\emptyset$. \item $G$ is called \emph{chordal} if every cycle of length at least 4 in $G$ has a \emph{chord}. That is, for every cycle $C$ of length at least 4 in $G$, there exist two nonconsecutive vertices $u$ and $v$ on $C$ such that $\{u,v\} \in E_G$. \end{enumerate} \end{defn} \noindent It is known from \cite{DS, HVT} that if $G$ is a chordal graph then $\pd(G) = \tau_{\max}(G)$ and $\reg(G) = \nu(G)$. \begin{defn} Let $W \subseteq V_G$ be a subset of the vertices in $G$. The \emph{neighborhood} (the set of \emph{neighbors}) and the \emph{closed neighborhood} of $W$ are defined by $$N_G(W) = \{u \in V_G ~\big|~ \exists w \in W: \{u,w\} \in E_G\} \text{ and } N_G[W] = N_G(W) \cup W.$$ \end{defn} \noindent When $W = \{v\}$, for simplicity of notation, we shall write $N_G(v)$ and $N_G[v]$ in place of $N_G(W)$ and $N_G[W]$. The \emph{degree} of a vertex $v \in V_G$ is defined to be $\deg_G(v) = |N_G(v)|$. A vertex $v \in V_G$ is called \emph{free} if $\deg_G(v) = 1$. A \emph{leaf} in $G$ is an edge that contains a free vertex. A \emph{path} is an alternating sequence of distinct vertices and edges (except possibly the first and the last vertex) $x_1, e_1, x_2, e_2, \dots, e_s, x_{s+1}$ such that $e_i = \{x_i, x_{i+1}\}$, for $i = 1, \dots, s$. The \emph{length} of a path is the number of edges on the path. 
A path of length $s$ is denoted by $P_s$. A \emph{cycle} is a closed path. An \emph{induced cycle} in $G$ is a cycle which is also an induced subgraph of $G$. An induced cycle of length $s$ is denoted by $C_s$. A \emph{complete} graph is a graph in which any two distinct vertices are connected by an edge. We shall use $K_s$ to denote a complete graph over $s$ vertices. A graph $G$ is \emph{bipartite} if there is a partition $V_G = X \cup Y$ of the vertices of $G$ such that every edge in $G$ connects a vertex in $X$ to a vertex in $Y$. A bipartite graph $G$ with a bi-partition $V_G = X \cup Y$ of its vertices is called a \emph{complete bipartite} graph if $E_G = \{\{x,y\} ~\big|~ x\in X, y\in Y\}$. We use $K_{r,s}$ to denote the complete bipartite graph whose vertices are partitioned into the union of two sets of cardinality $r$ and $s$. Finally, let $K$ be a field and let $S = K[V_G]$ represent the polynomial ring associated to the vertices in $G$. \begin{defn} Let $M$ be a finitely generated graded $S$ module. Then $M$ admits a \emph{minimal graded free resolution} of the form $$0 \rightarrow \bigoplus_{j \in {\mathbb Z}}S(-j)^{\beta_{p,j}(M)} \rightarrow \dots \rightarrow \bigoplus_{j\in {\mathbb Z}}S(-j)^{\beta_{0,j}(M)} \rightarrow M \rightarrow 0.$$ The numbers $\beta_{i,j}(M)$ are called the \emph{graded Betti numbers} of $M$. The \emph{projective dimension} and the \emph{regularity} of $M$ are defined as follows: $$\pd(M) = \max\{i ~\big|~ \exists j: \beta_{i,j}(M) \not= 0\} \text{ and } \reg(M) = \max\{j-i ~\big|~ \beta_{i,j}(M) \not= 0\}.$$ The \emph{graded Betti table} of $M$ is a $\pd(M) \times \reg(M)$ array whose $(i,j)$-entry is $\beta_{i,i+j}(M)$. When $M = S/I(G)$, we write $\pd(G)$ and $\reg(G)$ in place of $\pd(S/I(G))$ and $\reg(S/I(G))$. 
\end{defn} \section{Gap-free graphs and $\lceil 2\sqrt{n} - 2 \rceil$} \label{sec.gapfree} The goal of this section is to provide the non-experts with an easy understanding of why $2\sqrt{n}-2$ appears naturally in the bound for $\tau_{\max}(G)$. This is demonstrated by a short proof of the inequality (\ref{eq.tau}) when $G$ is a gap-free graph. We shall also construct, for any $n \ge 2$, a graph $G$ over $n$ vertices admitting $\tau_{\max}(G) = \lceil 2\sqrt{n}-2\rceil$, showing that the bound for $\tau_{\max}(G)$ in (\ref{eq.tau}) is sharp. The graph $G$ constructed will be gap-free and chordal. \begin{proposition} \label{gap-free} If a graph $G$ on $n$ vertices is gap-free, then \[ \tau_{\max}(G) \geq \lceil 2\sqrt{n} - 2 \rceil. \] \end{proposition} \begin{proof} If $G$ is bipartite then $\tau_{\max}(G) \geq n/2 \geq 2\sqrt{n} - 2$. Thus, we can assume that $G$ is a non-bipartite graph. Since $G$ is gap-free, we can also assume that $G$ is a connected graph. \noindent\textbf{Case 1:} $G$ contains a complete subgraph of size at least 3. Let $q$ denote the maximum integer $q \geq 3$ for which $G$ contains a complete subgraph $K_q$. Since $\tau_{\max}(K_q) = q - 1 \geq 2\sqrt{q} - 2$, one can assume that $q < n$. Without loss of generality, suppose that $\{x_1, \ldots, x_q\}$ is the vertex set of such a $K_q$ in $G$. We claim that the complete subgraph $K_q$ of $G$ can be chosen such that any vertex $x_j$, for $q < j \leq n$, is connected by an edge to a vertex in this $K_q$. Indeed, suppose that this is not the case. Choose such a $K_q$ with the least number of vertices outside of $K_q$ that are not connected to any of the vertices of $K_q$. Let $x_j$, for some $q < j \leq n$, be a vertex outside of $K_q$ that is not connected to any of the vertices in $K_q$. Since $x_j$ is not an isolated vertex in $G$, there exists a vertex $x_k$, for $q < k \not=j \leq n$, such that $\{x_j, x_k\} \in E_G$. 
This, since $G$ is gap-free, implies that $x_k$ must be connected to at least $q - 1$ vertices of $K_q$ (for each edge of $K_q$ disjoint from $\{x_j, x_k\}$ there must be an edge joining the two, and $x_j$ is connected to no vertex of $K_q$). In addition, it follows from the maximality of $q$ that $x_k$ must be connected to exactly $q - 1$ vertices of $K_q$. Assume that $x_k$ is connected to $x_1, \ldots, x_{q - 1}$. Let $K_q'$ be the complete subgraph of $G$ over the vertices $\{x_1, \dots, x_{q-1}, x_k\}$. Consider any vertex $x_l$ outside of $K_q'$ that is connected to a vertex in $K_q$. If $x_l$ is connected to any of the vertices $\{x_1, \dots, x_{q-1}\}$, then $x_l$ is still connected to that vertex in $K_q'$. If $\{x_l, x_q\} \in E_G$ and $\{x_l, x_k\} \not\in E_G$ then, by considering the pair of edges $\{x_l, x_q\}$ and $\{x_j,x_k\}$ and since $G$ is gap-free, we deduce that $\{x_l, x_j\} \in E_G$. This, again since $G$ is gap-free, implies that $x_l$ is connected to at least $q-2$ vertices among $\{x_1, \dots, x_{q-1}\}$. Thus, $x_l$ is connected to a vertex in $K_q'$ (since $q \ge 3$). Hence, the number of vertices outside $K_q'$ that are not connected to $K_q'$ is strictly less than that of $K_q$, a contradiction to the construction of $K_q$. Now, suppose that each vertex $x_j$, for $q < j \leq n$, is connected to a vertex of $K_q$. For $i =1, \dots, q$, let $$W_i = \{j ~\big|~ q < j \leq n, \{x_j,x_i\} \in E_G\},$$ and set $\omega_i = |W_i|$ (note that the sets $W_i$ are not necessarily disjoint). It is easy to see that a minimal vertex cover of $G$ that does not contain $x_i$ must contain $\{x_1, \dots, x_q\} \setminus \{x_i\}$ together with the vertices in $W_i$; such a cover exists, since the complement of any maximal independent set containing $x_i$ is a minimal vertex cover. Thus, it follows that \[ \tau_{\max}(G) \geq \max\{\omega_1, \ldots, \omega_q \} + (q - 1). \] Therefore, \[ \tau_{\max}(G) \geq \frac{\, n - q \,}{q} + (q - 1) = \frac{\, n \,}{q} + q - 2 \geq 2\sqrt{n} - 2. \] \noindent\textbf{Case 2:} $G$ does not contain any complete subgraph of size at least 3. Since $G$ is not bipartite, $G$ contains an odd cycle of length $\ell \ge 5$. 
Since $G$ is gap-free, by considering pairs of non-adjacent edges on this cycle, we deduce that $G$ contains $C_5$ as an induced cycle. Let $x_1, \ldots, x_5$ be the vertices of this $C_5$ in $G$. We claim that each vertex $x_i$, for $i > 5$, is connected by an edge to one of the vertices of $C_5$. Indeed, suppose that there exists a vertex $x_i$, for some $i > 5$, that is not connected to any of the vertices of $C_5$. Since $G$ is connected, $G$ has an edge $\{x_i,x_j\}$ for some $j > 5$. Since $G$ is gap-free, by considering the disjoint edges $\{x_i,x_j\}$ and $\{x_1,x_2\}$, either $x_1$ or $x_2$ must be connected to $x_j$. We can assume that $\{x_1,x_j\} \in E_G$. This, since $G$ has no triangle, implies that $\{x_2, x_j\} \not\in E_G$ and $\{x_5,x_j\} \not\in E_G$. For the same reason, at least one of the edges $\{x_3,x_j\}$ and $\{x_4,x_j\}$ is not in $G$. By symmetry, we may suppose that $\{x_4, x_j\} \not\in E_G$. We then have a gap consisting of the edges $\{x_4, x_5\}$ and $\{x_i, x_j\}$, a contradiction. Now, for $i = 1, \dots, 5$, let $W_i = \{x_k ~\big|~ k> 5 \text{ and } \{x_i,x_k\} \in E_G\}$ and set $b_i = |W_i \cup W_{i+2}|$, where the indices are taken modulo 5. Observe that a minimal vertex cover of $G$ containing neither $x_i$ nor $x_{i+2}$, for some $1 \le i \le 5$, must contain $W_i \cup W_{i+2}$ together with the three remaining vertices of the $C_5$; such a cover exists, since the independent set $\{x_i, x_{i+2}\}$ extends to a maximal independent set, whose complement is a minimal vertex cover. Each vertex outside the $C_5$ lies in at least two of the five unions $W_i \cup W_{i+2}$, so $b_1 + \cdots + b_5 \ge 2(n-5)$, and it follows that there is $1 \leq i \leq 5$ with $b_i \geq 2n/5 -2$. Therefore, \[ \tau_{\max}(G) \geq b_i + 3 \geq 2n/5 + 1 \geq 2\sqrt{n} - 2, \] and the result is proved. \end{proof} \begin{defn} \label{def.Hs} Given $s \in {\mathbb N}$, we define $H_s$ to be the graph consisting of a complete graph $K_s$, each vertex of which is furthermore connected to its own independent set of size $s-1$; these independent sets are pairwise disjoint. \end{defn} \begin{center} \includegraphics[width=5cm, height=5cm]{figHs.pdf} \captionof{figure}{$H_5$. \label{figHs}} \end{center} \begin{example} The graph depicted in Figure \ref{figHs} is $H_s$ for $s = 5$. 
\end{example} The following example gives a graph $G$ over $n$ vertices admitting $\tau_{\max}(G) = \lceil 2\sqrt{n}-2\rceil$ for any $n \ge 2$. \begin{example} \label{ex.equality} Let $a > 0$ be an integer with $a^2 \leq n < (a + 1)^2$. If $n = a^2$ then let $G_{n} = H_a$. The first graph in Figure \ref{fig.G} is $G_{25} = H_5$. It is easy to see that $\tau_{\max}(G_n) = 2(a-1) = 2\sqrt{n}-2.$ If $a^2 < n \leq a^2 + a$ then let $G_n$ be the graph obtained from $H_a$ by adding a leaf $\{x_i, x_{a^2 + i}\}$ to each vertex $x_i$, for $i=1, \dots, n-a^2$, in the complete subgraph $K_a$ of $H_a$. The second graph in Figure \ref{fig.G} is $G_{27}$. Then $\tau_{\max}(G_n) = 2a - 1$. Since $a < \sqrt{n} \leq \sqrt{a^2 + a} < a + 1/2$, one has $\lceil 2\sqrt{n} - 2 \rceil = 2a - 1 = \tau_{\max}(G_n)$. If $a^2 + a < n < (a + 1)^2$ then let $G_n$ be the graph obtained from $G_{a^2+a}$ by adding a leaf $\{x_i, x_{a^2 + a + i}\}$ to each vertex $x_i$, for $i = 1, \dots, n-a^2-a$, in the complete subgraph $K_a$ of $G_{a^2+a}$. The third graph in Figure \ref{fig.G} is $G_{31}$. Then $\tau_{\max}(G_n) = 2a$. Since $a + 1/2 < \sqrt{a^2 + a + 1} \leq \sqrt{n} < a + 1$, one has $\lceil 2\sqrt{n} - 2 \rceil = 2a = \tau_{\max}(G_n)$. \begin{center} \includegraphics[width=4cm, height=4cm]{figHs.pdf} \hspace*{9pt} \includegraphics[width=4cm, height=4cm]{figG27.pdf} \hspace*{9pt} \includegraphics[width=4cm, height=4cm]{figG31.pdf} \captionof{figure}{Graphs with $\tau_{\max}(G) = \lceil 2\sqrt{n}-2\rceil.$ \label{fig.G}} \end{center} Note that all graphs constructed here are chordal and gap-free. \end{example} \section{The bound $\tau_{\max}(G) \geq 2\sqrt{n} - 2$ for an arbitrary graph} \label{sec.main} In this section, we give a new proof for the inequality (\ref{eq.tau}). Our proof is different from that given in \cite[Theorem 2.2]{CHLN} and provides information that we could use in the next section to classify graphs for which (\ref{eq.tau}) becomes an equality. 
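Both the bound $\tau_{\max}(G) \ge \lceil 2\sqrt{n}-2\rceil$ and the fact that $H_s$ attains it can be spot-checked by brute force on very small graphs. The following Python sketch is ours, not part of the paper; the helper names (`tau_max`, `is_cover`, `H`) are our own, and the exhaustive search is exponential, so $n$ is kept tiny.

```python
from itertools import combinations
from math import ceil, sqrt

def is_cover(edges, cover):
    """True if every edge has an endpoint in `cover`."""
    return all(u in cover or v in cover for u, v in edges)

def tau_max(n, edges):
    """Largest size of a minimal vertex cover, by exhaustive search:
    a vertex cover is minimal iff removing any single vertex breaks it."""
    best = 0
    for k in range(n + 1):
        for cand in combinations(range(n), k):
            c = set(cand)
            if is_cover(edges, c) and all(
                not is_cover(edges, c - {v}) for v in c
            ):
                best = max(best, k)
    return best

def H(s):
    """H_s: a clique K_s whose i-th vertex is joined to its own set of
    s-1 pendant (whisker) vertices; n = s^2 vertices in total."""
    edges = [(i, j) for i in range(s) for j in range(i + 1, s)]
    nxt = s
    for i in range(s):
        for _ in range(s - 1):
            edges.append((i, nxt))
            nxt += 1
    return nxt, edges

# H_s attains the bound: tau_max = 2(s-1) = 2*sqrt(n) - 2 with n = s^2.
for s in (2, 3):
    n, edges = H(s)
    assert n == s * s and tau_max(n, edges) == 2 * (s - 1)

# Exhaustive check of tau_max(G) >= ceil(2*sqrt(n) - 2) over all graphs
# with at most 5 vertices and no isolated vertices.
for n in range(2, 6):
    pairs = list(combinations(range(n), 2))
    for mask in range(1 << len(pairs)):
        edges = [p for i, p in enumerate(pairs) if mask >> i & 1]
        if len({v for e in edges for v in e}) == n:  # no isolated vertex
            assert tau_max(n, edges) >= ceil(2 * sqrt(n) - 2)

print("bound verified for all graphs with n <= 5, and for H_2, H_3")
```

Checking all graphs on up to 5 vertices already exercises both equality cases with $n = 4$ ($2K_2$ and $C_4$), since each has $\tau_{\max} = 2 = 2\sqrt{4}-2$.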
\begin{thm} \label{tulane} Let $G$ be a graph on $n$ vertices. We have {\em \[ \tau_{\max}(G) \geq \lceil 2\sqrt{n} - 2 \rceil. \] } \end{thm} \begin{proof} Since $\tau_{\max}(G) \in {\mathbb N}$, it suffices to show that $\tau_{\max}(G) \ge 2\sqrt{n}-2$. Let $W$ be a minimal vertex cover of maximum size in $G$. That is, $|W| = \tau_{\max}(G)$. We partition $W$ into the following two subsets $$A' = \{w \in W ~\big|~ \deg_G(w) \ge 2\} \text{ and } B' = \{w \in W ~\big|~ \deg_G(w) = 1\}.$$ It is clear that if $A' = \emptyset$ then $G$ consists of isolated edges, and so $\tau_{\max}(G) = n/2 \ge 2\sqrt{n}-2$. Thus, we shall assume that $A' \not= \emptyset$. Let $B = B' \cup \{v \in A' ~|~ N_G(v) \subseteq N_G(B')\}$ and let $A = A' \setminus B$. We now have a new partition of $W$, namely, $W = A \cup B$. Note that $N_G(B') = N_G(B)$. For each vertex $v \in A$, let $M(v) = N_G(v) \setminus (W \cup N_G(B))$. Set $a = |A|$, $b = |B|$ and $n_b = |N_G(B)| \le b$. Consider the maximal sets (with respect to inclusion) of the form $M(w)$ for $w \in A$, and suppose that those maximal sets are $M(w_1), \dots, M(w_t)$, for $w_1, \dots, w_t \in A$. Set $D = A \setminus \{w_1, \dots, w_t\}$ and let $d = |D|$. \noindent\textbf{Claim 1.} For any $i = 1, \dots, t$, we have $|M(w_i)| \le b+1+d-n_b.$ \noindent\textit{Proof of Claim.} Let $$D' = \{w \in D ~\big|~ M(w) \not\subseteq M(w_i)\} \cup \{w \in D ~\big|~ ww_i \in E_G\}$$ and let $D'' = D \setminus D'$. Let $H$ be the induced subgraph of $G$ over $D''$ and let $U$ be a minimal vertex cover of $H$. Let $$W' = \big[W \cup M(w_i)\cup N_G(B)\big] \setminus \big[B \cup \{w_i\} \cup (D'' \setminus U)].$$ It suffices to show that $W'$ is a minimal vertex cover of $G$, which then implies that $|W'| \le |W|$; that is, \begin{align} |M(w_i)| \le b+1+|D'' \setminus U|-n_b \le b+1+d-n_b. \label{eq.mvi} \end{align} To see that $W'$ is a vertex cover of $G$, consider any edge $e = xy \in E_G$. 
Since $W$ covers $G$, without loss of generality, we may assume that $x \in W$. If $x \in W'$ then $W'$ covers $e$. Assume that $x \not\in W'$. This implies that $x \in B \cup \{w_i\} \cup (D'' \setminus U)$. If $x \in B$ then $y \in N_G(B) \subseteq W'$, and so $W'$ covers $e$. If $x = w_i$ then either $y \in M(w_i) \subseteq W'$ or $y \in W \cup N_G(B)$. Furthermore, if $y \in W$ either $y$ is among the $w_j$, for $j \not= i$, or $y \in D' \subseteq W'$. Thus, in this case, $W'$ also covers $e$. If $x \in D'' \setminus U$ then, by definition, $M(x) \subseteq M(w_i)$ and $xw_i \not\in E_G$. This implies that at least one of the following happens: \begin{enumerate} \item $y \in M(x)$, \item $y \in \{w_1, \dots, w_t\} \setminus \{w_i\}$, \item $y \in N_G(B)$, \item $y \in D'$, \item $xy$ is an edge in $H$ (which forces $y \in U$). \end{enumerate} In any of these cases, we have $y \in W'$. To see that $W'$ is a minimal vertex cover, consider any vertex cover $W'' \subseteq W'$ of $G$. Observe that $W''$ does not contain any vertex in $B \cup \{w_i\}$, so $W''$ must contain $N_G(B) \cup M(w_i)$. Also, for any $j \not= i$, $M(w_j) \not\subseteq M(w_i)$. This implies that $M(w_j) \not\subseteq W''$, which forces $w_j \in W''$ for all $j \not= i$. Furthermore, for any $w \in D'$, either $M(w) \not\subseteq W''$ or $ww_i \in E_G$. It then follows that $w \in W''$, i.e., $D' \subseteq W''$. Finally, for any vertex $u \in U$, since $U$ is a minimal vertex cover of $H$, there exists an edge $uv$ in $H$ such that $v \not\in W'$. This implies that $v \not\in W''$, and so, $u \in W''$. Hence, $W'' = W'$, and $W'$ is a minimal vertex cover of $G$. $\blacksquare$ We proceed with the proof of our theorem by considering two different cases. \noindent\textbf{Case 1:} $B \not= \emptyset$. 
In this case, Claim 1 gives us \begin{align} n & = \sum_{i=1}^t |M(w_i)| +|A| + |N_G(B)| + |B| \label{eq.nb} \\ & \le t(b+1+d-n_b) + t+d+n_b+b \nonumber \\ & = 2t + t(b+d-n_b) + d+n_b+b \nonumber \\ & = 2t + (t-1)(b+d-n_b) + 2(b+d). \nonumber \end{align} On the other hand $\tau_{\max}(G) = t+d + b$. Observe that, since $b \ge 1$, we have $n_b \ge 1$, and so \begin{align} 4n & \le 4\big[2t + (t-1)(b+d-1) + 2(b+d)\big] \label{eq.nb'}\\ & = 4(t+1)(b+d+1) \le (t+d+b+ 2)^2 \nonumber \\ & = (\tau_{\max}(G)+2)^2. \nonumber \end{align} Hence, $\tau_{\max}(G) \ge 2 \sqrt{n}-2$, and we are done. \noindent\textbf{Case 2:} $B = \emptyset$. Observe that, by Claim 1, for each $i = 1, \dots, t$, \begin{align} |M(w_i)| \le d+1. \label{eq.mv2} \end{align} Observe further that if $D = \emptyset$ then it follows from (\ref{eq.mv2}) that $\tau_{\max}(G) \ge t \ge n/2 \ge 2\sqrt{n}-2$. We shall assume that $D \not= \emptyset$. Since $W$ is a minimal vertex cover of $G$, it follows that $M(v) \not= \emptyset$ for all $v \in A$. Let $M(D) = \bigcup_{w \in D}M(w) \not= \emptyset$. To prove the assertion, it suffices to show that \begin{align} \big|\bigcup_{i=1}^t M(w_i)\big| \le td+1. \label{eq.sum} \end{align} This is because (\ref{eq.sum}) then gives \begin{align} n \le \big|\bigcup_{i=1}^t M(w_i)| + |A| \le td+1+t+d = (t+1)(d+1), \label{eq.nb0} \end{align} which implies that $4n \le (t+d+2)^2 = (\tau_{\max}(G)+2)^2$. To establish (\ref{eq.sum}), we partition $\{w_1, \dots, w_t\}$ into the following two subsets $$V_1 = \{w_i ~\big|~ M(D) \subseteq M(w_i)\} \text{ and } V_2 = \{w_i ~\big|~ M(D) \not\subseteq M(w_i)\}.$$ Consider any $w_i \in V_2$. Since $M(D) \not\subseteq M(w_i)$, there exists a vertex $x \in D$ such that $M(x) \not\subseteq M(w_i)$. Now, apply the same proof as that for Claim 1, for the set $M(w_i)$, observing that $x \in D'$ in this case, and so $|D'' \setminus U| \le d-1$. This implies that $|M(w_i)| \le d$. 
Observe, finally, that if $V_1 = \emptyset$ then we have $\big|\bigcup_{i=1}^t M(w_i)\big| = \big|\bigcup_{w_i \in V_2}M(w_i)\big| \le td$, and if $V_1 \not= \emptyset$ then we have \begin{align} \big|\bigcup_{i=1}^t M(w_i)\big| & \le \big|\bigcup_{w_i \in V_1}M(w_i)\big| + \big|\bigcup_{w_i \in V_2}M(w_i)| \label{eq.sumb0} \\ & \le |M(D)| + (d+1-|M(D)|)|V_1| + d|V_2| \nonumber \\ & = d(|V_1|+|V_2|) + 1 - (|M(D)|-1)(|V_1|-1) \nonumber \\ & \le td+1. \nonumber \end{align} The result is proved. \end{proof} \begin{corollary} \label{cor.pd} Let $G$ be a graph on $n$ vertices. We have $$\pd(G) \ge \lceil 2\sqrt{n}-2\rceil.$$ \end{corollary} \begin{proof} Let $I(G)^\vee$ denote the Alexander dual of the edge ideal $I(G)$ of $G$. See, for example, \cite[Chapter 5]{MS} for more details of the Alexander duality theory. By a result of Terai \cite[Theorem 2.1]{Terai}, we have $$\reg(I(G)^\vee) = \pd(G).$$ Observe that the minimal generators of $I(G)^\vee$ correspond to the minimal vertex covers in $G$. Since the regularity is an upper bound for the maximal generating degree, we have $\reg(I(G)^\vee) \ge \tau_{\max}(G)$. Thus, $\pd(G) \ge \tau_{\max}(G)$ (see also \cite{DS,K}). The assertion now follows from Theorem \ref{tulane}. \end{proof} \section{Classification for $\tau_{\max}(G) = 2\sqrt{n}-2$ and the spectrum of $(\pd(G), \reg(G))$} \label{sec.class} This section is devoted to our main results. We shall classify graphs $G$, when $n$ is a perfect square, for which $\tau_{\max}(G)$ attains its minimum value; that is, when $\tau_{\max}(G) = 2\sqrt{n}-2$. We shall also give the first nontrivial partial answer to Question \ref{question} on the spectrum of pairs of integers $(\pd(G), \reg(G))$. Recall that, for $s \in {\mathbb N}$, $H_s$ is the graph defined in Definition \ref{def.Hs}. \begin{thm} \label{classification} Let $G$ be a graph on $n$ vertices and suppose that $n$ is a perfect square. 
Then $\tau_{\max}(G) = 2\sqrt{n}-2$ if and only if $G$ is either $2K_2$, $C_4$ or $H_s$, for some $s \in {\mathbb N}$. \end{thm} \begin{proof} It is clear that if $G$ is either $2K_2$, $C_4$ or $H_s$ then $\tau_{\max}(G) = \lceil 2\sqrt{n}-2\rceil$. We shall prove the other implication. Let $W$ be a minimal vertex cover of largest size. That is, $|W| = 2\sqrt{n}-2$. We shall use the same notations as in the proof of Theorem \ref{tulane}. Consider the following two possibilities. \noindent\textbf{Case 1:} $B \not= \emptyset$. The proof of Theorem \ref{tulane} shows that $4n = (\tau_{\max}(G)+2)^2$ only if the following conditions are satisfied: \begin{enumerate} \item[(1)] $t = b+d$ (due to (\ref{eq.nb'})), \item[(2)] $n_b = 1$ (due to (\ref{eq.nb'})), and \item[(3)] $M(w_1), \dots, M(w_t)$ are pairwise disjoint and each has exactly $b+1+d-n_b = b+d$ elements (due to (\ref{eq.nb})). \end{enumerate} Condition (3), together with (\ref{eq.mvi}), implies that $|D''\setminus U| = |D|$. This happens if and only if $D'' = D$ and $U = \emptyset$. Thus, $D$ is an independent set, and for all $i = 1, \dots, t$ and $w \in D$, we have $M(w) \subseteq M(w_i)$ and $ww_i \not\in E_G$. This, together with condition (3) again, implies that either $t = 1$ or $M(w) = \emptyset$ for all $w \in D$. Suppose that $t = 1$. Condition (1) then implies that $b = 1$ and $d = 0$. In this case, $G$ is either a path of length 3, i.e., $P_3$, or two disjoint edges, i.e., $2K_2$. Note that $P_3 = H_2$. Suppose that $M(w) = \emptyset$ for all $w \in D$. Since $ww_i \not\in E_G$, it follows that $N_G(w) \subseteq N_G(B)$. This is a contradiction to the construction of $B$ unless $D = \emptyset$. Thus, we have $D = \emptyset$ and $A = \{w_1, \dots, w_t\}$. Let $v_b$ be the only vertex in $N_G(B)$. Observe that if there exists an $i$ such that $w_iv_b \not\in E_G$, then let $$W' = [W \cup M(w_i)] \setminus \{w_i\}.$$ It can be seen that $W'$ is a minimal vertex cover of $G$. 
Thus, $|W'| \le |W|$, and so we must have $|M(w_i)| = 1$. That is, $b=1$ and $|M(w_j)| = 1$ for all $j = 1, \dots, t$. Therefore, $\tau_{\max}(G) = n/2$. In this case, $\tau_{\max}(G) = 2\sqrt{n}-2$ only if $n = 4$, and we have $t=1$ and $G = 2K_2$. \begin{center} \includegraphics[width=5cm, height=5cm]{figHT.pdf} \captionof{figure}{$G$ when $B \not= \emptyset$.\label{fig.HT}} \end{center} Assume that $w_iv_b \in E_G$ for all $i = 1, \dots, t$. Observe further that if there are $i \not= j$ such that $w_iw_j \not\in E_G$ then let $$W' = [W \cup M(w_i) \cup M(w_j) \cup N_G(B)] \setminus [B \cup \{w_i,w_j\}].$$ It can also be seen that $W'$ is a minimal vertex cover of $G$. Thus, $|W'| \le |W|$, and we get $|M(w_i)| + |M(w_j)| + 1 \le b+2$. That is, $2b \le b+1$. Therefore, $b=1$ and again $n = 4$. In this case, $G = P_3$. Suppose, finally, that $w_iw_j \in E_G$ for all $i \not= j$. Clearly, we then have $G = H_{t+1}$, as depicted in Figure \ref{fig.HT} with $t = 4$. \noindent\textbf{Case 2:} $B = \emptyset$. From the minimality of $W$, it follows that $M(w) \not= \emptyset$ for all $w \in A$. Suppose that $D = \emptyset$. Then (\ref{eq.mv2}) implies that $|M(w_i)| = 1$ for all $i = 1, \dots, t$. Thus, $\tau_{\max}(G) \ge n/2$, and so $\tau_{\max}(G) = 2\sqrt{n}-2$ only if $n=4$. Therefore, we also have that $G$ is either $P_3$ or $2K_2$. Suppose that $D \not= \emptyset$. The proof of Theorem \ref{tulane} shows that $4n = (\tau_{\max}(G)+2)^2$ only if the following conditions are satisfied: \begin{enumerate} \item[(4)] $t = d \not= 0$ (due to (\ref{eq.nb0})), \item[(5)] the $M(w_j)$'s, for $w_j \in V_2$, are pairwise disjoint, $M(w_i)$ and $M(w_j)$ are disjoint for any $w_i \in V_1$ and $w_j \in V_2$, and the $M(w_i)$'s, for $w_i \in V_1$, pairwise share exactly $M(D)$ as their set of common vertices (due to (\ref{eq.sumb0})), and \item[(6)] either $|M(D)| = 1$ or $|V_1| = 1$ (due to (\ref{eq.sumb0})). \end{enumerate} Consider first the case where $|V_1| = 1$ in condition (6). 
Without loss of generality, we may assume that $V_1 =\{w_1\}$ and $V_2 = \{w_2, \dots, w_t\}$. In this case, condition (5) states that $M(w_1), \dots, M(w_t)$ are disjoint, $|M(w_1)| = d+1$ and $|M(w_j)| = d$ for $j \ge 2$. Particularly, for all $j \ge 2$, \begin{align} M(D) \cap M(w_j) = \emptyset. \label{eq.MD} \end{align} Applying (\ref{eq.mvi}) to $M(w_1)$ implies that $D$ is an independent set in $G$. Moreover, applying (\ref{eq.mvi}) to $M(w_j)$, for any $j \ge 2$, gives that $|D''| = d-1$. It follows that, for any $j \ge 2$, there is exactly one vertex $w$ in $D$ such that $M(w) \not\subseteq M(w_j)$ or $ww_j \in E_G$. This and (\ref{eq.MD}) force $d = 1$. In this case, $G$ is either a $P_3$ or a $C_4$. Consider now the case where $|M(D)| = 1$. Let $v_d$ be the only vertex in $M(D)$. In this case, we have $M(w) = M(D) = \{v_d\}$ for all $w \in D$. If $V_2 \not= \emptyset$ then let $v \in V_2$. By condition (6) and applying (\ref{eq.mvi}) to $M(v)$, we deduce that $|D'' \setminus U| = d-1$. However, $M(w) \not\subseteq M(v)$ for all $w \in D$ by (\ref{eq.MD}). Thus, in applying (\ref{eq.mvi}) to estimate $M(v)$, we have $D'' = \emptyset$. This is the case only if $d = 1$. Thus, $t=d=1$, and we arrive at a contradiction with the fact that both $V_1$ and $V_2$ are nonempty. Suppose that $V_2 = \emptyset$. Conditions (5) and (6) together state that $M(w_1), \dots, M(w_t)$ pairwise have exactly one vertex $v_d$ in common and each is of size exactly $d+1$. If there are $w_i$ and $w \in D$ such that $ww_i \in E_G$ then, in applying (\ref{eq.mvi}) to $M(w_i)$, we have that $D' \not= \emptyset$. That is, $|D''| \le d-1$, and so $|M(w_i)| \le d$, a contradiction. Hence, $ww_i \not\in E_G$ for all $i$ and all $w \in D$. This shows that the vertices in $D$ are of degree 1. That is, $B \not= \emptyset$, a contradiction. \end{proof} Observe that $H_s$ is a chordal and gap-free graph. The conclusion of Theorem \ref{classification} is no longer true if $n$ is not a perfect square. 
In fact, for any odd integer $p \ge 3$, there exists a graph $G$ over $n = (p+1)^2/4+1$ vertices that is neither chordal nor gap-free and admits $\tau_{\max}(G) = p = \lceil 2\sqrt{n}-2\rceil$. The following example depicts this scenario when $p = 5$ and $n=10$. The example for any odd $p \ge 3$ and $n = (p+1)^2/4+1$ is constructed in a similar manner. \begin{example} \label{ex.notchordal} Let $G$ be the following graph over $10$ vertices (as in Figure \ref{fig.10}). It is easy to see that $\tau_{\max}(G) = 5 = \lceil 2\sqrt{10}-2\rceil$ (the solid black vertices form a minimal vertex cover of maximum cardinality 5). Furthermore, $G$ is neither chordal nor gap-free. \begin{center} \includegraphics[width=3.5cm, height=3.5cm]{figNOTchordal.pdf} \captionof{figure}{A graph with $\tau_{\max}(G) = \lceil 2\sqrt{n}-2\rceil$ which is neither chordal nor gap-free. \label{fig.10}} \end{center} \end{example} The following problem, which we hope to come back to in future work, arises naturally. \begin{problem} Characterize all graphs $G$ on $n$ vertices for which $\tau_{\max}(G) = \lceil 2\sqrt{n}-2\rceil.$ \end{problem} Theorem \ref{classification} furthermore gives us some initial understanding toward Question \ref{question}. \begin{thm} \label{thm.reg} Let $G$ be a graph on $n$ vertices and suppose that $n$ is a perfect square. If $\tau_{\max}(G) = 2\sqrt{n}-2$ then we must have either \begin{enumerate} \item $G = 2K_2$ and $(\pd(G), \reg(G)) = (2,2)$, or \item $G = C_4$ and $(\pd(G), \reg(G)) = (3,1)$, or \item $G = H_s$, for some $s \in {\mathbb N}$, and $(\pd(G),\reg(G)) = (2\sqrt{n}-2,1)$. \end{enumerate} \end{thm} \begin{proof} It follows from Theorem \ref{classification} that $G$ is either $2K_2$, $C_4$ or $H_s$, for some $s \in {\mathbb N}$. If $G$ is $2K_2$ then $(\pd(G), \reg(G)) = (2,2).$ If $G$ is $C_4$ then $(\pd(G), \reg(G)) = (3,1)$. Suppose that $G = H_s$ for some $s \in {\mathbb N}$. Recall that $H_s$ is a chordal and gap-free graph. 
Particularly, the induced matching number of $H_s$ is 1. Thus, by \cite[Corollary 6.9]{HVT}, we have $\reg(G) = 1$. \end{proof} Our next main result serves as a converse to Theorem \ref{thm.reg}; that is, we identify the spectrum of $\pd(G)$ when $\reg(G) = 1$. \begin{thm} \label{thm.spec} Let $n \ge 2$ be any integer. The spectrum of $\pd(G)$, over all graphs $G$ on $n$ vertices for which $\reg(G) = 1$, is precisely $[2\sqrt{n}-2, n-1] \cap {\mathbb Z}$. \end{thm} \begin{proof} By Corollary \ref{cor.pd}, we have $\pd(G) \ge 2\sqrt{n}-2$. Observe that any minimal vertex cover of $G$ needs at most $n-1$ vertices, so ${\frak m} = (x_1, \dots, x_n)$ is not a minimal prime of $I(G)$. Furthermore, since $I(G)$ is squarefree, it has no embedded primes. This implies that ${\frak m}$ is not an associated prime of $I(G)$. It follows that $\depth S/I(G) \ge 1$. By the Auslander--Buchsbaum formula, we then have $\pd(S/I(G)) \le n-1$. It remains to construct a graph $G$ on $n$ vertices, for any given integer $p$ such that $2\sqrt{n}-2 \le p \le n-1$, for which $\pd(G) = p$ and $\reg(G) = 1$. By considering the complete bipartite graph $K_{1,n-1}$, the assertion is clearly true for $p = n-1$. Suppose now that $p \le n-2$. Let $s = \lceil p/2\rceil + 1$ and $T = \lfloor p/2\rfloor + 1$. It can be seen that $sT = \lfloor (p+2)^2/4\rfloor \ge n$. Note further that $s+T = p+2 \le n$. Thus, we can choose $t$ to be the largest integer such that $(s-1)t+T \le n$ (particularly, $1 \le t \le T$), and set $a = (p+2)-(s+t) = T-t$. Let $K_s$ be the complete graph over $s$ vertices $\{x_1, \dots, x_s\}$. For each $i = 1, \dots, s$, let $W_i$ be a set of $t-1$ independent vertices such that the sets $W_i$ are pairwise disjoint and disjoint from the vertices of $K_s$. For each $i = 1, \dots, s$, connect $x_i$ to all the vertices in $W_i$. Observe further that \begin{enumerate} \item $st+a = (s-1)t+T \le n$, and \item $st + sa = sT \ge n$. 
\end{enumerate} Thus, we can find new pairwise disjoint sets $B_1, \dots, B_s$ of independent vertices, which are also disjoint from the vertices in $K_s$ and the $W_i$, such that $|B_1| = a$ and $|B_i| \le a$, for all $2 \le i \le s$, and $\sum_{i=1}^s |B_i| = n-st$. For each $i=1, \dots, s$, connect $x_i$ to all the vertices in $B_i$. Let $G$ be the resulting graph. \begin{center} \includegraphics[width=5cm,height=5cm]{figSpec.pdf} \captionof{figure}{A graph with $(\pd(G), \reg(G)) = (p,1).$ \label{fig.spec}} \end{center} It is easy to see that $G$ is a chordal and gap-free graph over $n$ vertices. It is also easy to see that $\tau_{\max}(G) = (s-1)+(t-1)+a = p$. By \cite[Corollary 6.9]{HVT}, we have $\reg(G) = \nu(G) = 1$. Moreover, it follows from \cite[Theorem 3.2]{FVT} and \cite[Corollary 3.33]{MS} (see also \cite[Corollary 5.6]{DS}) that $\pd(G) = \tau_{\max}(G) = p$. \end{proof} For simplicity of the statement of our next result, given an integer $n \ge 2$, let $$\pdrSpec(n) = \{(p,r) ~\big|~ \text{there is a graph $G$ over $n$ vertices}: \pd(G) = p, \reg(G) = r\}.$$ Theorem \ref{thm.spec} basically states that $(p,1) \in \pdrSpec(n)$ for any integer $p$ with $2\sqrt{n}-2 \le p \le n-1$. The next corollary is an immediate consequence of Theorem \ref{thm.spec}. \begin{corollary} \label{cor.spec} Let $r \le n/2$ be positive integers. We have $(p,r) \in \pdrSpec(n)$ for any integer $p$ with $$2\sqrt{n-2(r-1)}+r-3 \le p \le n-r.$$ \end{corollary} \begin{proof} By Theorem \ref{thm.spec}, for any integer $p'$ such that $$2\sqrt{n-2(r-1)}-2 \le p' \le n-2(r-1)-1,$$ there is a graph $H$ over $n-2(r-1)$ vertices for which $\pd(H) = p'$ and $\reg(H) = 1$. Let $H'$ be the graph consisting of $r-1$ disjoint edges. It is easy to see that $\pd(H') = r-1 = \reg(H')$. Let $G$ be the disjoint union of $H$ and $H'$. 
Since the projective dimension and regularity are additive with respect to disjoint unions of graphs, it follows that $\pd(G) = p'+(r-1)$ and $\reg(G) = 1+(r-1) = r$. The assertion is proved by taking $p' = p - (r-1)$. \end{proof} Note that $\lceil 2\sqrt{n-2(r-1)}\rceil+r-3 \ge \lceil 2\sqrt{n}\rceil -2$. That is, $$\big[\lceil 2\sqrt{n-2(r-1)}\rceil+r-3, n-r\big] \subseteq \big[\lceil 2\sqrt{n}-2\rceil, n-1\big].$$ In other words, as $\reg(G)$ gets larger, the spectrum of $\pd(G)$ appears to become smaller. Further computation suggests that this is indeed the case. \begin{conjecture} Let $r,n \ge 2$ be arbitrary integers. If $(p,r) \in \pdrSpec(n)$ then $(p,r-1) \in \pdrSpec(n)$. \end{conjecture} \bibliographystyle{plain}
https://arxiv.org/abs/1105.6363
Compactly generated quasitopological homotopy groups with discontinuous multiplication
For each positive integer Q there exists a path connected metric compactum X such that the Qth homotopy group of X is compactly generated but not a topological group (with the quotient topology).
\section{Introduction} Given a space $X$ and a positive integer $Q\geq 1,$ the familiar homotopy group $\pi _{Q}(X,p)$ becomes a topological space endowed with the quotient topology induced by the natural surjective map $\Pi _{Q}:M_{Q}(X,p)\rightarrow \pi _{Q}(X,p).$ ($M_{Q}(X,p)$ denotes the space of based maps, with the compact open topology, from the $Q$-sphere $S^{Q}$ into $X$). It is an open problem to understand when $\pi _{Q}(X,p)$ is or is not a topological group with the standard operations. For example, is $\pi _{Q}(X,p)$ always a topological group if $Q\geq 2$ (Problem 5.1 \cite{Brazas}, abstract \cite{Ghane2})? If $Q\geq 1$, must $\pi _{Q}(X,p)$ be a topological group if $X$ is a path-connected continuum and $\pi _{Q}(X,p)$ is compactly generated? We answer both questions in the negative via counterexamples. In general the topology of $\pi _{Q}(X,p)$ is an invariant of the homotopy type of the underlying space $X,$ $\pi _{Q}(X,p)$ is a quasitopological group (i.e. multiplication is continuous separately in each coordinate and group inversion is continuous), and each map $f:X\rightarrow Y$ induces a continuous homomorphism $f_{\ast }:\pi _{Q}(X,p)\rightarrow \pi _{Q}(Y,f(p))$ \cite{GH}. If $X$ has strong local properties (for example if $X$ is locally $n$-connected for all $0\leq n\leq Q$) then $\pi _{Q}(X,p)$ is discrete \cite{Fab1},\cite{Calcut},\cite{GH}, and hence a topological group. If $Q\geq 2$, the group $\pi _{Q}(X,p)$ is abelian, and this fact has the capacity to nullify structural pathology present when $Q=1.$ For example Ghane, Hamed, Mashayekhy, and Mirebrahimi \cite{Ghane2} show if $Q\geq 2$ then $\pi _{Q}(X,p)$ is a topological group if $X$ is the $Q$-dimensional version of the 1-dimensional Hawaiian earring. However in general the standard group multiplication in $\pi _{Q}(X,p)$ can fail to be continuous if $\Pi _{Q}\times \Pi _{Q}:M_{Q}(X,p)\times M_{Q}(X,p)\rightarrow \pi _{Q}(X,p)\times \pi _{Q}(X,p)$ fails to be a quotient map. 
Recent counterexamples respectively of Brazas (Example 4.22 \cite{Brazas}) and Fabel \cite{Fab3} show that $\pi _{1}(X,p)$ fails to be a topological group if $X$ is the union of large circles parameterized by the rationals and joined at a common point, or if $X$ is the 1-dimensional Hawaiian earring. For the main result, given $Q\geq 1$ we obtain a space $X$ as the union of convergent line segments $L_{n}\rightarrow L$, joined at the common point $p,$ with a small $Q$-sphere $S_{n}$ attached to the end of each segment $L_{n}$, and this yields the following. \begin{theorem} For each $Q\in \{1,2,3,...\}$ there exists a compact path connected metric space $X$ such that, with the quotient topology, $\pi _{Q}(X,p)$ is compactly generated and multiplication is discontinuous in $\pi _{Q}(X,p)$. \end{theorem} \subsection{Definitions} If $Y$ is a space and if $A\subset Y$ then the set $A$ is \textbf{closed under convergent sequences} if $A$ enjoys the following property: If the sequence $\{a_{1},a_{2},..\}\subset A$ and if $a_{n}\rightarrow a$ then $a\in A.$ The space $Y$ is a \textbf{sequential space} if $Y$ enjoys the following property: If $A\subset Y$ and if $A$ is closed under convergent sequences then $A$ is a closed subspace of $Y.$ If $Y$ and $Z$ are spaces, the surjective map $q:Y\rightarrow Z$ is a \textbf{quotient map} if for every subset $A\subset Z,$ the set $A$ is closed in $Z$ if and only if the preimage $q^{-1}(A)$ is closed in $Y.$ Given a space $X$ and $p\in X,$ and an integer $Q\geq 1$, let $\pi _{Q}(X,p)$ denote the familiar $Q$th homotopy group of $X$ based at $p.$ To topologize $\pi _{Q}(X,p)$, let $M_{Q}(X,p)$ denote the space of based maps $f:(S^{Q},1)\rightarrow (X,p)$ from the $Q$-sphere $S^{Q}$ into $X,$ and impart $M_{Q}(X,p)$ with the compact open topology. 
Let $\Pi _{Q}:M_{Q}(X,p)\rightarrow \pi _{Q}(X,p)$ be the canonical quotient map such that $\Pi _{Q}(f)=\Pi _{Q}(g)$ iff $f$ and $g$ belong to the same path component of $M_{Q}(X,p),$ and declare $U\subset \pi _{Q}(X,p)$ to be open iff $\Pi _{Q}^{-1}(U)$ is open in $M_{Q}(X,p).$ A \textbf{Peano continuum} is a compact locally path connected metric space. If $Q$ is a positive integer, a \textbf{closed $Q$-cell} is any space homeomorphic to $[0,1]^{Q},$ and the \textbf{$Q$-sphere} $S^{Q}$ is the quotient of $[0,1]^{Q}$ obtained by identifying to a point the $(Q-1)$-dimensional boundary $[0,1]^{Q}\backslash (0,1)^{Q}.$ \subsection{Basic properties of $\protect\pi _{Q}(X,p)$} \begin{lemma} \label{seq}If $X$ is a metrizable space then $M_{Q}(X,p)$ is a metrizable space, and $\pi _{Q}(X,p)$ is a sequential space. \end{lemma} \begin{proof} Since $X$ is metrizable and since $S^{Q}$ is compact, the uniform metric shows $M_{Q}(X,p)$ is metrizable. Moreover since $S^{Q}$ is compact, the compact open topology coincides with the metric topology of uniform convergence in $M_{Q}(X,p).$ Suppose $A\subset \pi _{Q}(X,p)$ and suppose $A$ is closed under convergent sequences. Let $B=\Pi _{Q}^{-1}(A)$. Suppose $b_{n}\rightarrow b$ and $b_{n}\in B.$ Then $\Pi _{Q}(b_{n})\rightarrow \Pi _{Q}(b)$ and hence $\Pi _{Q}(b)\in A.$ Thus $b\in B.$ Thus $B$ is closed under convergent sequences. Hence $B$ is closed in $M_{Q}(X,p)$ since $M_{Q}(X,p)$ is metrizable. Thus, since $\Pi _{Q}$ is a quotient map, $A$ is closed in $\pi _{Q}(X,p).$ Hence $\pi _{Q}(X,p)$ is a sequential space. 
\end{proof} \begin{lemma} \label{reps}Suppose $X$ is metrizable and suppose $z_{n}\rightarrow z$ in $\pi _{Q}(X,p).$ Then there exist $n_{1}<n_{2}<\cdots$ and a convergent sequence $\alpha _{n_{k}}\in M_{Q}(X,p)$ such that $\Pi _{Q}(\alpha _{n_{k}})=z_{n_{k}}.$ \end{lemma} \begin{proof} If there exists $N$ such that $z_{n}=z$ for all $n\geq N,$ then there exists $\alpha \in M_{Q}(X,p)$ such that $\Pi _{Q}(\alpha )=z$ (since $\Pi _{Q}$ is surjective); let $\alpha _{n}=\alpha $ for all $n\geq N,$ and thus the constant sequence $\{\alpha _{n}\}$ converges. If no such $N$ exists, then obtain a subsequence $\{x_{n}\}\subset \{z_{n}\}$ such that $x_{n}\neq z$ for all $n.$ To see that $\{x_{1},x_{2},..\}$ is not closed in $\pi _{Q}(X,p)$, note $z\notin \{x_{1},x_{2},..\}$ and, since $x_{n}\rightarrow z,$ it follows that $z$ is a limit point of the set $\{x_{1},x_{2},..\}.$ Thus, since $\Pi _{Q}$ is a quotient map, $\Pi _{Q}^{-1}\{x_{1},x_{2},..\}$ is not closed in $M_{Q}(X,p).$ Obtain a limit point $\beta \in \overline{\Pi _{Q}^{-1}\{x_{1},x_{2},..\}}\backslash \Pi _{Q}^{-1}\{x_{1},x_{2},..\}.$ Since $M_{Q}(X,p)$ is metrizable (Lemma \ref{seq}), there exists a sequence $\beta _{k}\rightarrow \beta $ such that $\{\beta _{k}\}\subset \Pi _{Q}^{-1}\{x_{1},x_{2},..\}.$ Moreover (refining $\{\beta _{k}\}$ if necessary) there exists a subsequence $\{m_{1},m_{2},...\}\subset \{1,2,...\}$ such that $\Pi _{Q}(\beta _{k})=x_{m_{k}}$, and if $k<j$ then $m_{k}<m_{j}.$ Let $x_{m_{k}}=z_{n_{k}}$ and let $\alpha _{n_{k}}=\beta _{k}.$ \end{proof} \section{Main result} Fixing a positive integer $Q,$ the goal is to construct a path connected compact metric space $X$ such that, if $\pi _{Q}(X,p)$ has the quotient topology, then $\pi _{Q}(X,p)$ is compactly generated but the standard group multiplication is not continuous in $\pi _{Q}(X,p).$ The idea behind the construction of $X$ is to begin with the cone over a convergent sequence $\{w,w_{1},w_{2},..\}$ (i.e.
we have a sequence of convergent line segments $L_{n}\rightarrow L$, joined at a common endpoint $p\in L_{n}$), and then we attach a $Q$-sphere $S_{n}$ of radius $\frac{1}{10^{n}}$ to the opposite end $w_{n}$ of each segment $L_{n}$. Specifically, for $Q\geq 1$ let $R^{Q+1}$ denote Euclidean space of dimension $Q+1$ with Euclidean metric $d.$ Let $p=(0,0,..,0)$, let $w_{n}=(\frac{1}{n},1,0,...,0)$, and let $w=(0,1,0,...,0).$ Consider the line segment $L_{n}=[p,w_{n}]$ and observe for each $n,$ if $i\neq n$ then $d(w_{n},w_{i})\geq \frac{1}{n}-\frac{1}{n+1}=\frac{1}{n^{2}+n}.$ Let $L=[p,w]$, let $\gamma _{n}:L\rightarrow L_{n}$ be the linear bijection fixing $p$, and observe $\gamma _{n}\rightarrow id|L$ uniformly. Let $c_{n}=(\frac{1}{n},1+\frac{1}{10^{n}},0,..,0)$ and let $S_{n}$ denote the Euclidean $Q$-sphere such that $x\in S_{n}$ iff $d(x,c_{n})=\frac{1}{10^{n}}.$ Let $q_{n}=(\frac{1}{n},1+\frac{2}{10^{n}},0,..,0)$ and notice $S_{n}\cap L_{n}=\{w_{n}\}.$ Let $X_{n}=\cup _{k=1}^{n}(L_{k}\cup S_{k})$ and let $X=L\cup \bigcup_{n=1}^{\infty }X_{n}.$ Define retractions $R_{n}:X\rightarrow X_{n}$ such that $R_{n}(x_{1},x_{2},...,x_{Q+1})=(x_{1}^{\ast },x_{2},..,x_{Q+1})$ with $x_{1}^{\ast }$ minimal such that $im(R_{n})\subset X_{n}$. Notice $R_{n}\rightarrow id_{X}$ uniformly and for all $n$ we have $R_{n}R_{n+1}=R_{n}.$ Hence we obtain the natural homomorphism $\phi :\pi _{Q}(X,p)\rightarrow \lim_{\leftarrow }\pi _{Q}(X_{n},p)$ defined via $\phi ([\alpha ])=([R_{1}(\alpha )],[R_{2}(\alpha )],...).$ For $Q\geq 1$ let $G=\pi _{Q}(X,p)$. Thus if $Q=1$ then $G$ is the free group on generators $\{x_{1},x_{2},...\}$, and if $Q\geq 2$ then $G$ is the free abelian group on generators $\{x_{1},x_{2},...\}$; let $\ast :G\times G\rightarrow G$ denote the familiar multiplication.
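For instance (a small illustration of $\ast $, not part of the original argument): when $Q=1$, multiplying reduced words in $G$ concatenates and then cancels, e.g.
\[
(x_{1}\ast x_{2}^{-1})\ast (x_{2}\ast x_{3})=x_{1}\ast x_{3},
\]
while for $Q\geq 2$ the same product is additionally rewritten with its indices sorted, since $G$ is then abelian.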
Elements of $G$ admit a canonical form as maximally reduced words in the letters $\{x_{1},x_{2},..\}$ in the format $x_{n_{1}}^{k_{1}}\ast x_{n_{2}}^{k_{2}}\ast ...\ast x_{n_{m}}^{k_{m}}$ with $n_{i}\neq n_{i+1}.$ If $Q\geq 2$ then $G$ is abelian and we can further require $n_{i}<n_{j}$ whenever $i<j.$ Define $l:G\rightarrow \{0,1,2,3,..\}$ such that $l(x_{n_{1}}^{k_{1}}\ast x_{n_{2}}^{k_{2}}\ast ...\ast x_{n_{m}}^{k_{m}})=m.$ Let $G_{N}$ denote the subgroup of $G$ such that if $x_{n_{1}}^{k_{1}}\ast x_{n_{2}}^{k_{2}}\ast ...\ast x_{n_{m}}^{k_{m}}\in G_{N}$ then $n_{i}\leq N$ for all $i.$ Let $\phi _{N}:G\rightarrow G_{N}$ denote the natural epimorphism such that $\phi _{N}(g)$ is the reduced word obtained from $g$ after deleting all letters $x_{i}$ of index $i>N$. \begin{lemma} \label{gt2}The homomorphism $\phi :\pi _{Q}(X,p)\rightarrow \lim_{\leftarrow }\pi _{Q}(X_{n},p)$ is continuous and one to one, and the space $\pi _{Q}(X,p)$ is $T_{2}$. \end{lemma} \begin{proof} Recall $G=\pi _{Q}(X,p)$ and note $\pi _{Q}(X_{n},p)$ is canonically isomorphic to $G_{n}.$ To see that $\phi $ is continuous, first recall that in general a map $\alpha :Y\rightarrow Z$ induces a continuous homomorphism $\alpha _{\ast }:\pi _{Q}(Y,y)\rightarrow \pi _{Q}(Z,\alpha (y))$ \cite{GH}, and in particular the retractions $R_{n}:X\rightarrow X_{n}$ induce continuous epimorphisms $R_{n\ast }:G\rightarrow G_{n}.$ By definition $\phi =(R_{1\ast },R_{2\ast },...)$ and hence $\phi $ is continuous since $\lim_{\leftarrow }G_{n}$ enjoys the product topology. To see that $\phi $ is one to one, suppose $[f]\in \ker \phi .$ Since $f:S^{Q}\rightarrow X$ is a map and since $S^{Q}$ is a Peano continuum, $im(f)$ is a Peano continuum; hence $im(f)$ is locally path connected, and in particular $im(f)\cap \{q_{1},q_{2},...\}$ is finite.
Obtain $N$ such that $im(f)\cap \{q_{1},q_{2},...\}\subset \{q_{1},...,q_{N}\}.$ Notice $X_{N}$ is a strong deformation retract of $X\backslash \{q_{N+1},q_{N+2},..\}$ (since for $i\geq N+1$ the punctured sphere $S_{i}\backslash \{q_{i}\}$ is contractible to $p,$ and we can contract to $p$ simultaneously for $k\in \{1,2,3,...\}$ the subspaces $L\cup L_{N+k}\cup (S_{N+k}\backslash \{q_{N+k}\})$). In particular, under the strong deformation retraction collapsing $X\backslash \{q_{N+1},q_{N+2},..\}$ to $X_{N}$, $f$ deforms in $X$ to $R_{N}(f),$ and by assumption $R_{N}(f)$ deforms in $X_{N}$ to the constant map (determined by $p$). Hence $f$ is inessential in $X$, and this proves $\phi $ is one to one. Since $X_{n}$ is locally contractible, $\pi _{Q}(X_{n},p)$ is discrete \cite{GH}. Hence $\lim_{\leftarrow }\pi _{Q}(X_{n},p)$ is metrizable and in particular $T_{2}.$ Thus $G$ is $T_{2}$ since $G$ injects continuously into the $T_{2}$ space $\lim_{\leftarrow }G_{n}$. \end{proof} \begin{lemma} \label{sc}Suppose $\{g,g_{1},g_{2},...\}\subset G$ and suppose $g_{n}\rightarrow g.$ Then $\{l(g_{n})\}$ is bounded and, for each $N$, $\phi _{N}(g_{n})\rightarrow \phi _{N}(g).$ \end{lemma} \begin{proof} Suppose $g_{n}\rightarrow g.$ Then $\phi (g_{n})\rightarrow \phi (g)$ in $\lim_{\leftarrow }G_{n},$ since $\phi $ is continuous. This means precisely that for each $N\geq 1,$ the sequence $\phi _{N}(g_{n})\rightarrow \phi _{N}(g).$ To prove $\{l(g_{n})\}$ is bounded, suppose, to obtain a contradiction, that $\{l(g_{n})\}$ is not bounded. Select a subsequence $\{y_{n}\}\subset \{g_{n}\}$ such that $l(y_{n})\rightarrow \infty .$ By Lemma \ref{reps} there exist a subsequence $\{z_{n}\}\subset \{y_{n}\}$ and a convergent sequence of maps $\{\alpha _{n}\}\subset M_{Q}(X,p)$ such that $\Pi (\alpha _{n})=z_{n}$.
Let $z_{n}=x_{s_{1}}^{k_{1}}\ast x_{s_{2}}^{k_{2}}\ast ...\ast x_{s_{m_{n}}}^{k_{m_{n}}}$ and let $Z_{n}=\{q_{s_{1}},q_{s_{2}},...,q_{s_{m_{n}}}\}.$ Thus $Z_{n}$ consists of the tops of the corresponding spheres in $X,$ and hence $Z_{n}\subset im(\alpha _{n}).$ Since $l(z_{n})\rightarrow \infty $, the maps $\{\alpha _{n}\}$ are not equicontinuous, contradicting the fact that the convergent sequence $\{\alpha _{n}\}$ is equicontinuous. \end{proof} \begin{remark} $\pi _{Q}(X,p)$ is compactly generated. (Select a convergent sequence of generators $v_{1},v_{2},...\subset M_{Q}(X,p)$ such that $[v_{n}]$ generates the cyclic group $\pi _{Q}(L_{n}\cup S_{n},p)$ and $[v_{n}]\rightarrow e$, where $e$ denotes the identity in $\pi _{Q}(X,p)$.) \end{remark} \begin{theorem} Multiplication $\ast :\pi _{Q}(X,p)\times \pi _{Q}(X,p)\rightarrow \pi _{Q}(X,p)$ is not continuous. \end{theorem} \begin{proof} Recall $G=\pi _{Q}(X,p)$ and consider the following doubly indexed subset $A\subset G.$ Let $A$ consist of the union of all reduced words of the form $x_{n}^{k}\ast x_{k+1}\ast x_{k+2}\ast ...\ast x_{k+n}$, taken over all pairs of positive integers $n$ and $k$. To prove that $\ast :G\times G\rightarrow G$ is not continuous, it suffices to prove that $A$ is closed in $G,$ and that $\ast ^{-1}(A)$ is not closed in $G\times G.$ To prove $A$ is closed in $G,$ since $G$ is a $T_{2}$ sequential space (Lemmas \ref{seq} and \ref{gt2}), it suffices to prove every convergent sequence in $A$ has its limit in $A.$ Suppose $g_{m}\rightarrow g$ and $g_{m}\in A$ for all $m.$ Let $g_{m}=x_{n_{m}}^{k_{m}}\ast x_{k_{m}+1}\ast x_{k_{m}+2}\ast ...\ast x_{k_{m}+n_{m}}$. Notice $l(g_{m})\geq n_{m}.$ Thus by Lemma \ref{sc} the sequence $\{n_{m}\}$ is bounded.
For each $n_{i},$ by Lemma \ref{sc}, the sequence $\phi _{n_{i}}(g_{m})$ is eventually constant, and thus the sequence $\{k_{m}\}$ is bounded (since every subsequence of $x_{n_{i}}^{1},x_{n_{i}}^{2},x_{n_{i}}^{3},...$ diverges in $G_{n_{i}}$). Thus $\{g_{1},g_{2},...\}$ is a finite set and hence (since $G$ is $T_{1}$) $\{g_{1},g_{2},...\}$ is closed in $G$. Thus $g\in \{g_{1},g_{2},...\}$ and hence $A$ is closed in $G.$ Let $e\in G$ denote the identity of $G.$ To prove $\ast ^{-1}(A)$ is not closed in $G\times G$, we will show $(e,e)\notin \ast ^{-1}(A)$ and $(e,e)$ is a limit point of $\ast ^{-1}(A).$ Note $e\ast e=e$ and $e\notin A$ since $l(e)=0$ and $l(x)\geq 1$ for all $x\in A.$ Thus $(e,e)\notin \ast ^{-1}(A).$ To see that $(e,e)$ is a limit point of $\ast ^{-1}(A)$, suppose $U\subset G$ is open and suppose $e\in U.$ Let $V=\Pi ^{-1}(U)\subset M_{Q}(X,p).$ First we show there exists $N$ such that $x_{n}^{k}\in U$ for all $n\geq N$ and all $k,$ argued as follows. Obtain a closed $Q$-cell $B\subset S^{Q}$ and recall $w=(0,1,0,...,0).$ Obtain an inessential map $\alpha \in M_{Q}(X,p)$ such that $im(\alpha )\subset L$ and such that $\alpha ^{-1}(w)=B.$ Let $k_{1},k_{2},...$ be any sequence of integers and consider the sequence $x_{1}^{k_{1}},x_{2}^{k_{2}},...\subset G.$ For each $n\geq 1$ obtain $\alpha _{n}\in M_{Q}(X,p)$ such that $\Pi (\alpha _{n})=x_{n}^{k_{n}}$, such that $\alpha _{n}|(S^{Q}\backslash B)=\gamma _{n}(\alpha |(S^{Q}\backslash B)),$ and such that $\alpha _{n}(B)\subset S_{n}.$ Hence $\alpha _{n}\rightarrow \alpha $ and thus $\Pi (\alpha _{n})\rightarrow \Pi (\alpha ).$ Hence $x_{n}^{k_{n}}\rightarrow e.$ Since $\{k_{i}\}$ was arbitrary, it follows there exists $N$ such that $x_{n}^{k}\in U$ for all $n\geq N$ and all $k.$ Obtain $N$ as above and for each $k\geq 1$ define $v_{k}=x_{k+1}\ast ...\ast x_{k+N}.$ To see that the sequence $v_{k}\rightarrow e$, select $N$ disjoint closed $Q$-cells $B_{1},B_{2},...,B_{N}\subset S^{Q}$ (and
if $Q=1$ we also require the closed intervals to satisfy $B_{j}<B_{j+1}$ for $1\leq j\leq N-1$). Construct $\beta :S^{Q}\rightarrow L$ such that $\beta ^{-1}(w)=B_{1}\cup B_{2}\cup ...\cup B_{N}.$ Note $\beta $ is inessential since $L$ is contractible. For each $i\in \{1,..,N\}$ and for each $k$ let $f_{i,k}:S^{Q}\rightarrow X$ satisfy $\Pi (f_{i,k})=x_{k+i}$ and $f_{i,k}|(S^{Q}\backslash B_{i})=\gamma _{k}(\beta |(S^{Q}\backslash B_{i})).$ Now let $\beta _{k}=\beta |(S^{Q}\backslash (B_{1}\cup ..\cup B_{N}))\cup f_{1,k}|B_{1}\cup f_{2,k}|B_{2}\cup ...\cup f_{N,k}|B_{N}.$ Notice $\beta _{k}\rightarrow \beta $ uniformly, $\Pi (\beta _{k})=v_{k}$, and $\Pi (\beta )=e,$ and thus $v_{k}\rightarrow e.$ In particular there exists $K>N$ such that $v_{K}\in U.$ Thus $(x_{N}^{K},v_{K})\in U\times U$ and $x_{N}^{K}\ast v_{K}\in A.$ This proves $(e,e)$ is a limit point of $\ast ^{-1}(A),$ and thus $\ast ^{-1}(A)$ is not closed in $G\times G.$ \end{proof}
https://arxiv.org/abs/1906.09477
The phase diagram of approximation rates for deep neural networks
We explore the phase diagram of approximation rates for deep neural networks and prove several new theoretical results. In particular, we generalize the existing result on the existence of deep discontinuous phase in ReLU networks to functional classes of arbitrary positive smoothness, and identify the boundary between the feasible and infeasible rates. Moreover, we show that all networks with a piecewise polynomial activation function have the same phase diagram. Next, we demonstrate that standard fully-connected architectures with a fixed width independent of smoothness can adapt to smoothness and achieve almost optimal rates. Finally, we consider deep networks with periodic activations ("deep Fourier expansion") and prove that they have very fast, nearly exponential approximation rates, thanks to the emerging capability of the network to implement efficient lookup operations.
\section{Introduction} There is a subtle interplay between different notions of complexity for neural networks. One, most obvious, aspect of complexity is the network size measured in terms of the number of connections and neurons. Another is characteristics of the network architecture (e.g., shallow or deep). A third is the type of the activation function used in the neurons. Yet another, important but sometimes overlooked aspect is the precision of operations performed by neurons. All these complexities are connected by tradeoffs: if we fix a particular problem solvable by neural networks, then we have some freedom in decreasing one complexity at the cost of others. The question we address is: \emph{what are the limits of this freedom}? In the present paper we perform a systematic theoretical study of this question in the context of network expressiveness. We fix the classical approximation problem and explore the opportunities potentially present in solving it within different neural network scenarios. Specifically, suppose that we have a class $F$ of maps from the $d$-dimensional cube $[0,1]^d$ to $\mathbb R$, and we want the network to approximate elements of $F$ in the uniform norm $\|\cdot\|_\infty$. We will make the standard assumption that $F$ is a Sobolev- or H\"older ball of smoothness $r>0$ (i.e., a ball of ``$r$ times differentiable functions'', see Section \ref{sec:prelim}). Then, for a particular type of approximation model, we examine the optimal \emph{approximation rate}, i.e. the relation between the approximation accuracy and the required number $W$ of model parameters. Typically, this relation has the form of a power law \begin{align}\label{eq:rate} \|f - \widetilde{f}_{W}\|_{\infty} = O(W^{-p}), \quad \forall f \in F, \end{align} where $\widetilde{f}_W$ is an approximation of $f$ by a model with $W$ parameters, and $p$ is a constant (which we will also refer to as the \emph{rate}).
In standard fully-connected networks, there is one parameter (weight) per each connection and neuron, so $W$ can be equivalently viewed as the size of the model. Our approach in this paper will be to analyze how the rates $p$ depend on various approximation conditions (e.g., network depth, activation functions, etc.). There are several important general ideas explaining which approximation rates $p$ we can reasonably expect in Eq.\eqref{eq:rate}. In the context of abstract approximation theory, we can forget (for a moment) about the network-based implementation of $\widetilde{f}_{W}$ and just think of it as some approximate parameterization of $F$ by vectors $\mathbf w\in\mathbb R^W$. Let us view the approximation process $f\mapsto\widetilde{f}_W$ as a composition of the \emph{weight assignment} map $f\mapsto \mathbf w_f\in \mathbb R^W$ and the \emph{reconstruction} map $\mathbf w_f\mapsto \widetilde{f}_{W}\in\mathcal F,$ where $\mathcal F$ is the full normed space containing $F$. If the weight assignment and reconstruction maps, and hence their composition $f\mapsto\widetilde{f}_{W}$, were both linear, then the l.h.s. of Eq.\eqref{eq:rate} could be estimated by the \emph{linear $W$-width} of the set $F$ (see \cite{constrappr96}). For a Sobolev ball of $d$-variate functions $f$ of smoothness $r$, the linear $W$-width is asymptotically $\sim W^{-r/d}$, suggesting the approximation rate $p=\tfrac{r}{d}.$ Remarkably, this argument extends to \emph{non-linear} weight assignment and reconstruction maps under the assumption that the weight assignment is \emph{continuous}. More precisely, it was proved in \cite{continuous} that, under this assumption, $p$ in Eq.\eqref{eq:rate} cannot be larger than $\tfrac{r}{d}$. An even more important set of ideas is related to estimates of Vapnik-Chervonenkis dimensions of deep neural networks.
The concept of expressiveness in terms of VC-dimension (based on finite set shattering) is weaker than expressiveness in terms of uniform approximation, but upper bounds on the VC-dimension directly imply upper bounds on feasible approximation rates. In particular, the VC-dimension of networks with piecewise-polynomial activations is $O(W^2)$ (\cite{goldberg1995bounding}), which implies that $p$ cannot be larger than $\tfrac{2r}{d}$ -- note the additional factor 2 coming from the power 2 in the VC bound. We refer to the book \cite{anthony2009neural} for a detailed exposition of this and related results. Returning to approximations with networks, the above arguments suggest that the rate $p$ in Eq.\eqref{eq:rate} can be up to $\frac{r}{d}$ assuming the continuity of the weight assignment, and up to $\frac{2r}{d}$ without assuming the continuity, but assuming a piecewise-polynomial activation function such as ReLU. We then face the constructive problem of showing that these rates can indeed be fulfilled by a network computation. One standard general strategy of proving the rate $p=\frac{r}{d}$ is based on polynomial approximations of $f$ (in particular, via the Taylor expansion). A survey of early results along this line for networks with a single hidden layer and suitable activation functions can be found in \cite{pinkus1999review}. An interesting aspect of piecewise-linear activations such as ReLU is that the rate $p=\frac{r}{d}$ cannot be achieved with single-layer networks, but can be achieved with deeper networks implementing approximate multiplication and polynomials (\cite{yarsawtooth, liang2016why, petersen2018optimal, safran2017depth}). 
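The approximate multiplication mentioned above can be sketched numerically. The following is a minimal illustration (our own code, not from the cited works) of the sawtooth construction of \cite{yarsawtooth}: the composed "tooth" function is ReLU-expressible, the partial sums $f_m(x)=x-\sum_{s=1}^m g^{\circ s}(x)/4^s$ interpolate $x^2$ on a grid of step $2^{-m}$, and products follow from a polarization identity.

```python
# Minimal sketch (our illustration) of sawtooth-based approximate
# squaring/multiplication, as used in deep ReLU constructions.

def tooth(x):
    # g(x) = min(2x, 2-2x) on [0,1]; ReLU-expressible as 2*relu(x) - 4*relu(x - 1/2)
    return 2 * x if x < 0.5 else 2 - 2 * x

def sq(x, m):
    """Approximate x^2 on [0,1] by f_m(x) = x - sum_{s<=m} g^{(s)}(x) / 4^s.

    f_m is the piecewise-linear interpolation of x^2 on the grid of step
    2^-m, so the error decays like O(4^-m)."""
    g, out = x, x
    for s in range(1, m + 1):
        g = tooth(g)          # s-fold composition g^{(s)}
        out -= g / 4 ** s
    return out

def mult(x, y, m):
    # polarization: xy = ((x+y)^2 - x^2 - y^2) / 2, with (x+y)/2 in [0,1]
    return (4 * sq((x + y) / 2, m) - sq(x, m) - sq(y, m)) / 2

print(abs(sq(0.3, 10) - 0.09))         # tiny interpolation error
print(abs(mult(0.3, 0.4, 10) - 0.12))  # products inherit the same accuracy
```

Since `tooth` and the final affine combination are all ReLU-expressible, `sq(·, m)` corresponds to a ReLU network of depth $O(m)$ achieving accuracy $O(4^{-m})$, which is the mechanism behind the $p=\tfrac{r}{d}$ constructions.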
\begin{figure} \begin{subfigure}[b]{0.55\textwidth} \centering \includegraphics[scale = 0.7, clip, trim=25mm 21mm 15mm 17mm ]{approx_discrete.pdf} \caption{} \end{subfigure} \begin{subfigure}[b]{0.13\textwidth} \hfill \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \adjustbox{scale=1,right}{% \begin{tikzcd}[scale=0.1, row sep=large] & w = 0.b_1b_2\ldots \arrow[dl, swap, "\lfloor 2w \rfloor"] \arrow[d,"2w-\lfloor 2w\rfloor"] \\ b_1 & w_1=0.b_2b_3\ldots \arrow[dl, swap, "\lfloor 2w_1\rfloor"] \arrow[d,"2w_1-\lfloor 2w_1\rfloor"] \\ b_2 & w_2=0.b_3b_4\ldots\arrow[dl, swap, "\lfloor 2w_2\rfloor"] \arrow[d,"2w_2-\lfloor 2w_2\rfloor"] \\ \ldots & \ldots \end{tikzcd} }\caption{} \end{subfigure} \caption{\textbf{(a)} A high-rate approximation from \cite{yaropt}. The domain $[0,1]^d$ is divided into patches and an approximation to $f$ is encoded in each patch by a single network weight using a binary-type representation. Then, the network computes the approximation $\widetilde f(\mathbf x)$ by finding the relevant weight and decoding it using the bit extraction technique of \cite{bartlett1998almost} (here, $d=1, r=1$, and $p=\tfrac{2r}{d}=2$). \textbf{(b)} Sequential bit extraction by a deep network \cite{bartlett1998almost}. (The floor function $\lfloor\cdot\rfloor$ can be approximated by ReLU with arbitrary accuracy via $\lfloor w\rfloor\approx \tfrac{1}{\delta}(w-1)_+-\tfrac{1}{\delta}(w-1-\delta)_+$ with a small $\delta$.)}\label{fig:approxdiscrete} \end{figure} It was shown in \cite{yaropt} that ReLU networks can also achieve rates $p$ beyond $\frac{r}{d}.$ The result of \cite{yaropt} is stated in terms of the modulus of continuity of $f$; when restricted to H\"older functions with constant $r\le 1$, it implies that on such functions ReLU networks can provide rates $p$ in the interval $(\tfrac{r}{d},\tfrac{2r}{d}]$, in agreement with the mentioned upper bound $\tfrac{2r}{d}$. 
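To make Fig.\ref{fig:approxdiscrete}(b) concrete, here is a small numerical sketch (our own illustration, with an arbitrarily chosen input) of sequential bit extraction using the stated ReLU surrogate of the floor function; it is exact as long as no intermediate $2w_i$ falls in the ambiguity gap $(1,1+\delta)$.

```python
# Sketch of the bit extraction scheme of Fig. (b): b_i = floor(2 w_{i-1}),
# w_i = 2 w_{i-1} - b_i, with floor replaced by its ReLU surrogate
# (1/delta)(w-1)_+ - (1/delta)(w-1-delta)_+ on [0, 2).

def relu(x):
    return x if x > 0 else 0.0

def soft_floor(w, delta=0.1):
    # equals floor(w) for w in [0, 1] and [1 + delta, 2); ramps in between
    return relu(w - 1) / delta - relu(w - 1 - delta) / delta

def extract_bits(w, n, delta=0.1):
    bits = []
    for _ in range(n):
        b = soft_floor(2 * w, delta)  # leading bit of the current remainder
        w = 2 * w - b                 # shift the remaining bits left
        bits.append(round(b))
    return bits

# 5/7 = 0.101101..._2 (repeating), safely away from the ambiguity gap
print(extract_bits(5 / 7, 6))  # [1, 0, 1, 1, 0, 1]
```

Each iteration is one ReLU layer of constant width, which is why extracting $n$ bits costs depth $\Theta(n)$ in the construction of \cite{bartlett1998almost}.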
The construction is quite different from the case $p=\tfrac{r}{d}$ and has a ``coding theory'' rather than ``analytic'' flavor, see Fig.\ref{fig:approxdiscrete}. In agreement with continuous approximation theory and existing VC bounds, the construction inherently requires discontinuous weight assignment (as a consequence of coding finitely many values) and network depth (necessary for the bit extraction part). In this sense, at least in the case of $r\le 1$, one can distinguish two qualitatively different ``approximation phases'': the shallow continuous one corresponding to $p=\tfrac{r}{d}$ (and lower values), and the deep discontinuous one corresponding to $p\in(\tfrac{r}{d},\tfrac{2r}{d}]$. It was shown in \cite{petersen2018optimal, voigtlaender2019approximation} that the shallow rate $p=\tfrac{r}{d},$ but not faster rates, can be achieved if the network weights are discretized with the precision of $O(\log(1/\epsilon))$ bits, where $\epsilon$ is the approximation accuracy. We remark in passing that in recent years there has also been a substantial amount of related research on other aspects of deep network expressiveness: e.g. (just to give a few examples) on performance scaling with input dimension \cite{poggio2017and, montanelli2019new}, depth separation \cite{telgarsky2016benefits, eldan2016power}, generalization from finite training sets \cite{schmidt2020nonparametric}, approximation on manifolds \cite{shaham2018provable}, approximation of discontinuous functions \cite{petersen2018optimal} and specific signal structures \cite{perekrestenko2018universal, grohs2019deep}. These topics are outside the scope of the present paper. \paragraph{Contribution of this paper.} The developments described above leave many questions open. One immediate question is whether and how the deep discontinuous approximation phase generalizes to higher values of smoothness ($r> 1$).
Another natural question is how much the network architectures providing the maximal rate $p=\tfrac{2r}{d}$ depend on the smoothness class. Yet another question is how sensitive the phase diagram is with respect to changing ReLU to other activation functions. In the present paper we resolve some of these questions and, moreover, offer new perspectives on the tradeoffs between different aspects of complexity in neural networks. Specifically: \begin{itemize} \item In Section \ref{sec:phasediag}, we prove that the approximation phase diagram indeed generalizes to arbitrary smoothness $r>0$, with the deep discontinuous phase occupying the region $\tfrac{r}{d}<p\le \tfrac{2r}{d}$. \item In Section \ref{sec:adaptivity}, we prove that the standard fully-connected architecture with a sufficiently large constant width $H$ only depending on the dimension $d$, say $H=2d+10$, can implement approximations that are asymptotically almost optimal for \emph{arbitrary} smoothness $r$. This property can be described as ``universal adaptivity to smoothness'' exhibited by such architectures. \item In Section \ref{sec:otheractiv}, we discuss how the ReLU phase diagram can change if ReLU is replaced by other activation functions. In particular, we prove that the deep discontinuous phase can be constructed for any activation that has a point of nonzero curvature. This implies that the phase diagram for any piecewise polynomial activation is the same as for ReLU. \item In Section \ref{sec:deepfourier} we consider what we call \emph{``deep Fourier expansion''} -- approximation by a deep network with a periodic activation function, which can be seen as a generalization of the usual Fourier series approximation. We prove that such networks can provide much faster, exponential rates compared to the polynomial rates of ReLU networks. The key element of the proof is a new version of the bit extraction procedure replacing sequential extraction by a dichotomy-based lookup.
\item In Section \ref{sec:info} we analyze the distribution of information in the networks implementing the discussed modes of approximation. In particular, we show that in the deep discontinuous ReLU phase the total information $\epsilon^{-d/r}$ is uniformly distributed over the $\epsilon^{-1/p}$ encoding weights, with $\epsilon^{1/p-d/r}$ bits per weight, while in the ``deep Fourier'' case it is all concentrated in a single encoding weight. \end{itemize} \section{Preliminaries}\label{sec:prelim} \paragraph{Smooth functions.} The paper revolves around what we informally describe as ``functions of smoothness $r$'', for any $r>0$. It is convenient to precisely define them in terms of H\"older spaces. Any $r>0$ can be uniquely represented as $r=k+\alpha$ with an integer $k\ge 0$ and $0<\alpha\le 1$. We define the respective H\"older space $\mathcal C^{k,\alpha}([0,1]^d)$ as the space of $k$ times continuously differentiable functions on $[0,1]^d$ having a finite norm \begin{align*} \|f\|_{\mathcal{C}^{k, \alpha}([0,1]^d)} = \max \Big\{ \max_{\k: |\k| \leq k} \max_{\mathbf x \in [0, 1]^d} |D^{\k} f(\mathbf x)|, \max_{\k: |\k| = k} \sup_{\substack{\mathbf x, \mathbf y \in [0, 1]^d, \\ \mathbf x \neq \mathbf y}} \dfrac{|D^{\k} f(\mathbf x) - D^{\k}f(\mathbf y)|}{\|\mathbf x -\mathbf y\|^{\alpha}} \Big \}. \end{align*} Here $D^{\mathbf k}f$ denotes the partial derivative of $f$ corresponding to the multi-index $\mathbf k$. We choose the sets $F$ appearing in Eq.\eqref{eq:rate} to be the unit balls in these H\"older spaces and denote them by $F_{r,d}$. \paragraph{Neural networks.} We consider conventional feedforward neural networks with layouts given by directed acyclic graphs. Each hidden unit performs a computation of the form $\sigma(\sum_{k=1}^K w_kz_k+h),$ where $z_k$ are the signals from the incoming connections, and $w_k$ and $h$ are the weights associated with this unit.
In addition to input units and hidden units, the network is assumed to have a single output unit performing a computation similar to that of hidden units, but without the activation function. In Sections \ref{sec:phasediag} and \ref{sec:adaptivity} we assume that the activation function is ReLU: $\sigma(x)=x_+\equiv \max(0,x).$ In general, we refer to networks with an activation function $\sigma$ as \emph{$\sigma$-networks}. We do not make any special connectivity assumptions about the network architecture. The exception is Section \ref{sec:adaptivity}, where we consider a particular family of architectures in which hidden units are divided into a sequence of layers, and each layer has a constant number of units. Two units are connected if and only if they belong to neighboring layers. The input units are connected to the units of the first hidden layer and only to them; the output unit is connected to the units of the last hidden layer, and only to them. We refer to this as a \emph{standard deep fully-connected architecture with constant width} (see Fig.\ref{fig:standard_net}). Whenever we mention a \emph{piecewise linear} or \emph{piecewise polynomial} activation, we mean that $\mathbb R$ can be divided into \emph{finitely many} intervals on which the activation is linear or polynomial (respectively). Without this condition of finiteness, activations could be made drastically more expressive (e.g., by joining a dense countable subset of polynomials \cite{maiorov1999lower}).
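The computations just described can be sketched in a few lines; the following is our own illustrative code (with hypothetical weights, not an architecture from the paper): each hidden unit applies $\sigma(\sum_k w_k z_k + h)$ with $\sigma=$ ReLU, and the output unit is affine.

```python
# Minimal forward pass of a layered fully-connected ReLU network.

def relu(x):
    return x if x > 0 else 0.0

def forward(x, layers, out_w, out_b):
    """layers: list of (W, b) per hidden layer; the output unit is affine."""
    z = list(x)
    for W, b in layers:  # W: rows of incoming weights, b: per-unit biases h
        z = [relu(sum(w * v for w, v in zip(row, z)) + bi)
             for row, bi in zip(W, b)]
    return sum(w * v for w, v in zip(out_w, z)) + out_b

# one hidden layer of width 2 computing |x| = relu(x) + relu(-x)
layers = [([[1.0], [-1.0]], [0.0, 0.0])]
print(forward([-3.0], layers, [1.0, 1.0], 0.0))  # 3.0
```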
\paragraph{Approximations.} In the accuracy--complexity relation \eqref{eq:rate} we assume that approximations $\widetilde f_W$ are obtained by assigning $f$-dependent weights to a network architecture $\eta_W$ \emph{common} to all $f\in F.$ In particular, this allows us to speak of the weight assignment map $G_W: f\mapsto \mathbf w_f\in\mathbb R^W$ associated with a particular architecture $\eta_W.$ We say that the weight assignment is continuous if this map is continuous with respect to the topology of uniform norm $\|\cdot\|_\infty$ on $F.$ We will be interested in considering different approximation rates $p$, and we interpret Eq.\eqref{eq:rate} in a precise way by saying that a rate $p$ can be achieved iff \begin{align}\label{eq:rate2} \inf_{\eta_W, G_W}\sup_{f\in F}\|f - \widetilde{f}_{\eta_W,G_W}\|_{\infty} \le c_{F,p}W^{-p}, \end{align} where $\widetilde{f}_{\eta_W,G_W}$ denotes the approximation obtained by the weight assignment $G_W$ in the architecture $\eta_W$. Here and in the sequel we generally denote by $c_{a,b,\ldots}$ various positive constants possibly dependent on $a,b,\ldots$ (typically on smoothness $r$ and dimension $d$). Throughout the paper, we will treat $r$ and $d$ as fixed parameters in the asymptotic accuracy-complexity relations. \section{The phase diagram of ReLU networks}\label{sec:phasediag} \begin{figure} \centering \minipage[b][][b]{0.4\textwidth} \includegraphics[scale = 0.45, clip, trim=15mm 0mm 10mm 0mm ]{standardNet2_noType3fonts.pdf} \caption{A standard deep fully-connected architecture with width 5.}\label{fig:standard_net} \endminipage\hfill \minipage[b][][b]{0.55\textwidth} \centering \input{phasediagram_final.tikz} \caption{The phase diagram of approximation rates for ReLU networks. }\label{fig:phase_diagram} \endminipage\hfill \end{figure} Our first main result is the phase diagram of approximation rates for ReLU networks, shown in Fig.\ref{fig:phase_diagram}. 
The ``shallow continuous phase'' corresponds to $p=\tfrac{r}{d}$, the ``deep discontinuous phase'' corresponds to $\tfrac{r}{d}<p\le\tfrac{2r}{d}$, and the infeasible region corresponds to $p>\tfrac{2r}{d}.$ Our main new contribution is the exact location of the deep discontinuous phase for all $r>0$. The precise meaning of the diagram is explained by the following series of theorems (partly established in earlier works). \begin{theorem}[The shallow continuous phase] The approximation rate $p = \frac{r}{d}$ in Eq.\eqref{eq:rate2} can be achieved by ReLU networks having $L \le c_{r,d} \log W$ layers, and with a continuous weight assignment. \end{theorem} This result was proved in \cite{yarsawtooth} in a slightly weaker form, for integer $r$ and with error $O(W^{-r/d} \log^{r/d} W)$ instead of $O(W^{-r/d})$. The proof is based on ReLU approximations of local Taylor expansions of $f$. The extension to non-integer $r$ is immediate thanks to our definition of general $r$-smoothness in terms of H\"older spaces. The logarithmic factor $\log^{r/d} W$ can be removed by observing that the computation of the approximate Taylor polynomial can be isolated from determining its coefficients and hence only needs to be implemented once in the network rather than for each local patch as in \cite{yarsawtooth} (see Remark \ref{rm:nolog}; the idea of isolating operations common to all patches is developed much further in the proof of Theorem \ref{th:deepphase} below, and is applicable in the special case $p=\tfrac{r}{d}$).
\begin{theorem}[Feasibility of rates $p>\tfrac{r}{d}$]\label{th:feasible}{}\hfill \begin{enumerate} \item Approximation rates $p > \frac{2r}{d}$ are infeasible for networks with a piecewise-polynomial activation function and, in particular, for ReLU networks; \item Approximation rates $p \in (\frac{r}{d}, \frac{2r}{d}]$ cannot be achieved with a continuous weight assignment; \item If an approximation rate $p \in (\frac{r}{d}, \frac{2r}{d}]$ is achieved with ReLU networks, then the number of layers $L$ in $\eta_W$ must satisfy $L \geq c_{p,r,d} W^{pd/r - 1} / \log W$ for some $c_{p,r,d} > 0$. \end{enumerate} \end{theorem} These statements follow from existing results on continuous nonlinear approximation (\cite{continuous} for statement 2) and from upper bounds on VC-dimensions of neural networks (\cite{goldberg1995bounding} for statement 1 and \cite{bartlett2019nearly} for statement 3); see \cite[Theorem~1]{yarsawtooth} for a derivation. The extensions to arbitrary $r$ are straightforward. The main new result in this section is the existence of approximations with $p\in(\tfrac{r}{d}, \tfrac{2r}{d}]$: \begin{theorem}[The deep discontinuous phase]\label{th:deepphase} For any $r>0,$ any rate $p \in (\frac{r}{d}, \frac{2r}{d}]$ can be achieved with deep ReLU networks with $L \le c_{r,d} W^{pd/r - 1}$ layers. \end{theorem} This result was proved in \cite{yaropt} in the case $r\le 1$. We generalize this to arbitrary $r$ by combining the coding-based approach of \cite{yaropt} with Taylor expansions. We give a sketch of the proof below; the full proof is given in Section \ref{sec:proofdeepphase}.
\emph{Sketch of proof.} We use two length scales for the approximation: the coarser one $\tfrac{1}{N}$ and the finer one $\tfrac{1}{M}$, with $M\gg N.$ We start by partitioning the cube $[0,1]^d$ into $\sim N^d$ patches (namely, simplexes) of linear size $\sim\tfrac{1}{N},$ and then sub-partitioning them into patches of linear size $\sim\tfrac{1}{M}.$ In each of the finer $M$-patches $\Delta_M$ we approximate the function $f\in F_{r,d}$ by a Taylor polynomial $P_{\Delta_M}$ of degree $\lceil r\rceil-1.$ Then, by the standard Taylor remainder bound, we have $|f(\mathbf x)-P_{\Delta_M}(\mathbf x)|=O(M^{-r})$ on $\Delta_M$. This shows that if $\epsilon$ is the required approximation accuracy, we should choose $M\sim \epsilon^{-1/r}.$ Now, if we tried to simply store the Taylor coefficients for each $M$-patch in the weights of the network, we would need at least $\sim M^{d},$ i.e. $\sim\epsilon ^{-d/r}$, weights in total. This corresponds to the classical rate $p=\tfrac{r}{d}$. In order to save on the number of weights and achieve higher rates, we collect the Taylor coefficients of all $M$-patches lying in one $N$-patch and encode them in a single \emph{encoding weight} associated with this $N$-patch. Given $p>\tfrac{r}{d},$ we choose $N\sim \epsilon^{-1/(pd)},$ so that in total we create $\sim \epsilon^{-1/p}$ encoding weights, each containing information about $\sim (M/N)^d$, i.e. $\sim\epsilon^{-(d/r-1/p)}$, Taylor coefficients. The number of encoding weights then matches the desired complexity $W\sim\epsilon^{-1/p}$. To encode the Taylor coefficients we first need to discretize them. Note that to reconstruct the Taylor approximation in an $M$-patch with accuracy $\epsilon,$ we need to know the Taylor coefficients of order $k$ with precision $\sim M^{-(r-k)}$. We implement an efficient sequential encoding/decoding procedure for the approximate Taylor coefficients of orders $k<\lceil r\rceil$ for all $M$-patches lying in the given $N$-patch $\Delta_N$.
Specifically, choose some sequence $(\Delta_{M})_t$ of the $M$-patches in $\Delta_N$ so that neighboring elements of the sequence correspond to neighboring patches. Then, the order-$k$ Taylor coefficients at $(\Delta_{M})_{t+1}$ can be determined with precision $\sim M^{-(r-k)}$ from the respective and higher order coefficients at $(\Delta_{M})_{t}$ using $O(1)$ predefined discrete values. This allows us to encode all the approximate Taylor coefficients in all the $M$-patches of $\Delta_N$ by a single $O((M/N)^d)$-bit number. To reconstruct the approximate Taylor polynomial for a particular input $\mathbf x\in\Delta_M\subset\Delta_N$, we sequentially reconstruct all the coefficients for the sequence $(\Delta_{M})_t$, and, among them, select the coefficients at the patch $(\Delta_{M})_{t_0}=\Delta_M$. The sequential reconstruction can be done by a deep subnetwork with the help of the bit extraction technique \cite{bartlett1998almost}. The depth of this subnetwork is proportional to the number of $M$-patches in $\Delta_N$, i.e. $\sim (M/N)^d$, which is $\sim \epsilon^{-(d/r-1/p)}$ according to our definitions of $N$ and $M$. If $p\le\tfrac{2r}{d},$ then $\tfrac{d}{r}-\tfrac{1}{p}\le \tfrac{1}{p}$ and hence this depth is smaller than or comparable to the number of encoding weights, $\epsilon^{-1/p}.$ However, if $p>\tfrac{2r}{d},$ then the depth is asymptotically larger than the number of encoding weights, so the total number of weights is dominated by the depth of the decoding subnetwork, which is $\gtrsim \epsilon^{-d/(2r)}$, and the approximation becomes less efficient than at $p=\tfrac{2r}{d}$. This explains why $p=\tfrac{2r}{d}$ is the boundary of the feasible region. Once the (approximate) Taylor coefficients at $\Delta_M\ni \mathbf x$ are determined, an approximate Taylor polynomial $\widetilde P_{\Delta_M}(\mathbf x)$ can be computed by a ReLU subnetwork implementing efficient approximate multiplications \cite{yarsawtooth}.
\qed \section{Fixed-width networks: universal adaptivity to smoothness}\label{sec:adaptivity} The network architectures constructed in the proof of Theorem \ref{th:deepphase} to provide the faster rates $p\in(\tfrac{r}{d},\tfrac{2r}{d}]$ are relatively complex and $r$-dependent. We can ask if such rates can be supported by some simple conventional architectures. It turns out that we can achieve nearly optimal rates using standard fully-connected architectures with sufficiently large constant widths depending only on $d$: \begin{theorem}\label{th:constwidth} Let $\eta_W$ be standard fully-connected ReLU architectures with width $2d+10$ and $W$ weights. Then \begin{equation}\label{eq:constwidth} \inf_{G_W}\sup_{f\in F_{r,d}}\|f-\widetilde{f}_{\eta_W,G_W}\|_\infty\le c_{r,d} W^{-2r/d}\log^{2r/d} W. \end{equation} \end{theorem} The rate in Eq.\eqref{eq:constwidth} differs from the optimal rate with $p=\tfrac{2r}{d}$ only by the logarithmic factor $\log^{2r/d} W$. We give a sketch of the proof of Theorem \ref{th:constwidth} below, and details are provided in Section \ref{sec:proofconstwidth}. An interesting result proved in \cite{hanin2017approximating, lu2017expressive} (see also \cite{lin2018resnet} for a related result for ResNets) states that standard fully-connected ReLU architectures with a fixed width $H$ can approximate any $d$-variate continuous function if and only if $H\ge d+1$. Theorem \ref{th:constwidth} shows that with slightly larger widths, such networks can not only adapt to any function, but also to its smoothness. The results of \cite{hanin2017approximating, lu2017expressive} also show that Theorem \ref{th:constwidth} cannot hold with $d$-independent widths. \emph{Sketch of proof of Theorem \ref{th:constwidth}.} The proof is similar to the proof of Theorem \ref{th:deepphase}, but requires a different implementation of the reconstruction of $\widetilde f(\mathbf x)$ from encoded Taylor coefficients.
The network constructed in Theorem \ref{th:deepphase} traverses $M$-knots of an $N$-patch and computes Taylor coefficients at the new $M$-knot by updating the coefficients at the previous $M$-knot. This computation can be arranged within a fixed-width network, but its width depends on $r$, since we need to store the coefficients from the previous step, and the number of these coefficients grows with $r$ (see \cite{yaropt} for the constant-width fully-connected implementation in the case of $r\le 1,$ in which the Taylor expansion degenerates into the zeroth-order approximation). To implement the approximation using an $r$-independent network width, we can decode the Taylor coefficients afresh at each traversed $M$-knot, instead of updating them. This is slightly less efficient and leads to the additional logarithmic factor in Eq.\eqref{eq:constwidth}, as can be seen in the following way. First, since we need to reconstruct the Taylor coefficients of order $k$ with precision $O(M^{-(r-k)}),$ we need to store $\sim \log M$ bits for each coefficient in the encoding weight. Since $M\sim \epsilon^{-1/r},$ this means a $\sim\log (1/\epsilon)$-fold increase in the depth of the decoding subnetwork. Moreover, an approximate Taylor polynomial must be computed separately for each $M$-patch. Multiplications can be implemented with accuracy $\epsilon$ by a fixed-width ReLU network of depth $\sim\log(1/\epsilon)$ (see \cite{yarsawtooth}). Computation of an approximate polynomial of the components of the input vector $\mathbf x$ can be arranged as a chain of additions and multiplications in a network of constant width independent of the degree of the polynomial -- assuming the coefficients of the polynomial are decoded from the encoding weight and supplied as they become required.
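Such a chain of additions and multiplications can be arranged, for instance, as a Horner scheme, which keeps only $O(1)$ running values regardless of the degree (a schematic sketch; in the actual network each multiplication is itself approximated by a ReLU subnetwork):

```python
def horner(coeffs, x):
    # Evaluate coeffs[0] + coeffs[1]*x + ... + coeffs[k]*x^k while keeping
    # only a single running value, mirroring a constant-width chain in which
    # coefficients are supplied sequentially as they are decoded.
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc
```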
This shows that we can achieve accuracy $\epsilon$ with a network of constant width independent of $r$ at the cost of taking the larger depth $\sim\epsilon ^{-d/(2r)}\log(1/\epsilon)$ (instead of simply $\sim\epsilon ^{-d/(2r)}$ as in Theorem \ref{th:deepphase}). Since $W$ is proportional to the depth, we get $W\sim \epsilon ^{-d/(2r)}\log(1/\epsilon)$. By inverting this relation, we obtain Eq.\eqref{eq:constwidth}. \qed \section{Activation functions other than ReLU}\label{sec:otheractiv} We now discuss how much the ReLU phase diagram of Section \ref{sec:phasediag} can change if we use more complex activation functions. We note first that statement 1 of Theorem \ref{th:feasible} holds not only for ReLU, but for any piecewise-polynomial activation function, so that the region $p>\tfrac{2r}{d}$ remains infeasible for any such activation. Also, since all piecewise-linear activation functions are essentially equivalent (see e.g. \cite[Proposition 1]{yarsawtooth}), the phase diagram for any piecewise-linear activation is the same as for ReLU. Our main result in this section states that Theorem \ref{th:deepphase} establishing the existence of the deep discontinuous phase remains valid for any activation that has a point of nonzero curvature. \begin{theorem}\label{th:deepsecond} Suppose that the activation function $\sigma$ has a point $x_0$ where the second derivative $\tfrac{d^2\sigma}{dx^2}(x_0)$ exists and is nonzero. Then, any rate $p \in (\frac{r}{d}, \frac{2r}{d})$ can be achieved with deep $\sigma$-networks with $L \le c_{r,d} W^{pd/r - 1}$ layers. \end{theorem} The proof is given in Section \ref{sec:deepsecondproof}; its idea is to reduce the approximation by $\sigma$-networks to deep polynomial approximations. Then, we can follow the lines of the proof of Theorem \ref{th:deepphase} with some adjustments (in particular, we replace the usual bit extraction dynamics as in Fig.\ref{fig:approxdiscrete}(b) by a polynomial dynamical system).
We remark that in general, if constrained by degree, polynomials poorly approximate ReLU and other piecewise linear functions \cite{telgarsky2017neural}, but in our setting the polynomials are constrained by their compositional complexity rather than degree, in which case a polynomial approximation of ReLU can be much more accurate. Combined with statement 1 of Theorem \ref{th:feasible}, Theorem \ref{th:deepsecond} implies, in particular, that the phase diagram for general piecewise polynomial activation functions is the same as for ReLU: \begin{corol} Let $\sigma$ be a continuous piecewise polynomial activation function. Then the rates $p<\tfrac{2r}{d}$ are feasible for $\sigma$-networks, and the rates $p>\tfrac{2r}{d}$ are infeasible. \end{corol} A remarkable class of functions that can be seen as a far-reaching generalization of polynomials is the class of Pfaffian functions \cite{khovanskii}. Level sets of these functions admit bounds on the number of their connected components that are similar to analogous bounds for algebraic sets, and this is a key property in establishing upper bounds on VC dimensions of networks. In particular, it was proved in \cite{karpinski1997polynomial} that the VC-dimension of networks with the standard sigmoid activation function $\sigma(x)=1/(1+e^{-x})$ is upper-bounded by $O(W^2k^2),$ where $k$ is the number of computation units (see also \cite[Theorem 8.13]{anthony2009neural}). Since $k\le W$, the bound $O(W^2k^2)$ implies the slightly weaker bound $O(W^4)$. Then, by mimicking the proof of statement 1 of Theorem \ref{th:feasible} and replacing there the bound $O(W^2)$ for piecewise-polynomial activation by the bound $O(W^4)$ for the standard sigmoid activation, we get \begin{theorem}\label{th:sigmoid} For networks with the standard sigmoid activation function $\sigma(x)=1/(1+e^{-x})$, the rates $p>\tfrac{4r}{d}$ are infeasible.
\end{theorem} It appears that there remains a significant gap between the upper and lower VC dimension bounds for networks with $\sigma(x)=1/(1+e^{-x})$ (see a discussion in \cite[Chapter 8]{anthony2009neural}). Likewise, we do not know if the approximation rates up to $p= \frac{4r}{d}$ are indeed feasible with this $\sigma$. All the above results ignore both precision and magnitude of the network weights. In fact, the rates $p>\tfrac{2r}{d}$ can be excluded for rather general activation functions if we put some mild constraints on the growth of the weights. In Section \ref{sec:weightbounds} we explain this point using a covering number bound from \cite[Theorem 14.5]{anthony2009neural}. \section{``Deep Fourier expansion''}\label{sec:deepfourier} Note that the usual Fourier series expansion $f(\mathbf x)\sim\sum_{\mathbf n\in\mathbb Z^d}a_{\mathbf n}e^{2\pi i\mathbf n\cdot\mathbf x}$ for a function $f$ on $[0,1]^d$ can be viewed as a neural network with one hidden layer, the $\sin$ activation function, and predefined weights in the first layer. Standard convergence bounds for Fourier series (see e.g. \cite{jackson1930theory}) correspond to the shallow continuous rate $p=\tfrac{r}{d}$, in agreement with the linearity of the standard assignment of Fourier coefficients. We can ask what happens to the expressiveness of this approximation if we generalize it by removing all constraints on the architecture and weights, i.e., consider a general deep network with the $\sin$ activation function. It turns out that such a model is drastically more expressive than both standard Fourier expansion and deep ReLU networks. The key factor in this is the \emph{periodicity} of the activation function $\sigma=\sin$; the particular form of $\sigma$ is not that important. Our main result below assumes that the network can use both ReLU and $\sigma$ as activation functions; we refer to these networks as \emph{mixed ReLU/$\sigma$} networks. \begin{theorem}\label{th:sin} Fix $r,d$. 
Let $\sigma:\mathbb R\to\mathbb R$ be a Lipschitz periodic function with period $T$. Suppose that $\sigma(x)>0$ for $x\in(0,{T}/{2})$ and $\sigma(x)<0$ for $x\in({T}/{2},T)$, and also that $\max_{x\in\mathbb R}\sigma(x)=-\min_{x\in\mathbb R}\sigma(x).$ Then: \begin{enumerate} \item For any number $W$, we can find a mixed ReLU/$\sigma$ network architecture $\eta_W$ with $W$ weights, and a corresponding weight assignment $G_W$, such that \begin{align}\label{eq:ratesin} \sup_{f\in F_{r,d}}\|f - \widetilde{f}_{\eta_W,G_W}\|_{\infty} \le \exp\big(-c_{r,d}W^{1/2}\big) \end{align} with some $r,d$-dependent constant $c_{r,d}>0.$ \item Moreover, the above architecture $\eta_W$ has only one weight whose value depends on $f\in F_{r,d}$; for all other weights the assignment $G_W$ is $f$-independent. \end{enumerate} \end{theorem} In contrast to the previously considered power law rates \eqref{eq:rate2}, the rate \eqref{eq:ratesin} is exponential and corresponds to $p=\infty$, so that the ReLU-infeasible sector $p>\tfrac{2r}{d}$ is fully feasible for mixed ReLU/periodic networks. Moreover, statement 2 of the above theorem means that all information about the approximated function $f$ can be encoded in a single network weight. \begin{figure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[scale = 0.4, clip, trim=0mm 60mm 0mm 40mm ]{sinBitExtraction.pdf} \caption{With ReLU} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \adjustbox{scale=1,right}{% \includegraphics[scale = 0.4, clip, trim=0mm 14mm 0mm 80mm ]{sinBitExtraction.pdf} } \caption{With a periodic activation $\sigma$} \end{subfigure} \caption{Standard sequential (a) and dichotomy-based (b) bit extraction. Bit extraction is used to decode information from network weights and is crucial in achieving non-classical rates $p>\tfrac{r}{d}.$ Standard bit extraction (\cite{bartlett1998almost}, see Fig.\ref{fig:approxdiscrete}) is available with the threshold or ReLU activation functions. 
The bits are decoded one-by-one, which requires a significant network depth and caps feasible rates at $p=\tfrac{2r}{d}.$ In contrast, the ``deep Fourier expansion'' of Theorem \ref{th:sin} is essentially based on a more efficient dichotomy-based lookup that becomes available if neurons can implement a periodic activation function (see Section \ref{sec:sketchsin}).}\label{fig:sinextraction} \end{figure} The sketch of the proof of Theorem \ref{th:sin} is given in Section \ref{sec:sketchsin}, and details are provided in Section \ref{sec:proofsin}. The main idea of the network design is to compute each digit of the output using a dynamical system controlled by the digits of the input. The faster rate can be interpreted as resulting from an efficient, dichotomy-based lookup that can be performed in networks including both ReLU and a periodic activation, see Fig.\ref{fig:sinextraction}. It is well-known that some exotic activation functions allow one to achieve rates even higher than those we have discussed. For example, a result of \cite{maiorov1999lower} based on the Kolmogorov Superposition Theorem (\cite[p. 553]{constrappr96}) shows the existence of a strictly increasing analytic activation function $\sigma$ such that any $f\in C([0,1]^d)$ can be approximated with arbitrary accuracy by a three-layer $\sigma$-network with only $9d+3$ units. However, in contrast to these results, our Theorem \ref{th:sin} holds for a very simple and general class of activation functions.
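The dichotomy-based lookup of Fig.\ref{fig:sinextraction}(b) rests on the observation that a single periodic unit can read off any bit of a number directly; a minimal sketch with $\sigma=\sin$ (a guard bit is appended so that the relevant fractional parts stay away from the zeros of $\sin$):

```python
import math

def pack(bits):
    """Encode bits b_1..b_K as the binary number w = 0.b_1...b_K 1; the
    trailing guard bit keeps all fractional parts below away from 0 and 1/2."""
    K = len(bits)
    return sum(b * 2.0 ** -(i + 1) for i, b in enumerate(bits)) + 2.0 ** -(K + 1)

def bit_via_periodic(w, k):
    """b_k = 1 iff sin(2*pi*2^(k-1)*w) < 0: one periodic unit per query,
    instead of a sequential ReLU extraction of the first k bits."""
    frac = (2.0 ** (k - 1) * w) % 1.0
    return 1 if math.sin(2.0 * math.pi * frac) < 0.0 else 0

bits = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1]
w = pack(bits)
recovered = [bit_via_periodic(w, k + 1) for k in range(len(bits))]
```

Since $2^{k-1}w \bmod 1 = 0.b_k b_{k+1}\dots$ in binary, the sign of $\sin$ at that point is determined by $b_k$ alone; no sequential stripping of the preceding bits is needed.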
\section{Distribution of information in the network}\label{sec:info} \begin{table} \begin{center} \renewcommand{\arraystretch}{1.5} \begin{tabular}{lccc} \toprule \textrm{Approximation} & \textrm{Shallow ReLU } & \textrm{Deep ReLU}& \textrm{``Deep Fourier''} \\ \midrule Rate ($p$) & $p=\tfrac{r}{d}$ & $p\in(\tfrac{r}{d},\tfrac{2r}{d}]$ & $p=\infty$\\ Weight assignment & continuous & discontinuous & discontinuous \\ Network depth ($L$) & $\log(1/\epsilon)$ & $\epsilon^{1/p-d/r}$ & $\log(1/\epsilon)$\\ Number of weights, total ($W$) & $\epsilon^{-d/r}$ & $\epsilon^{-1/p}$ & $\log^2(1/\epsilon)$\\ Number of encoding weights & $\epsilon^{-d/r}$ & $\epsilon^{-1/p}$ & 1 \\ Bits / encoding weight & $\log(1/\epsilon)$ & $\epsilon^{1/p-d/r}$ & $\epsilon^{-d/r}\log(1/\epsilon)$ \\ \bottomrule \end{tabular} \end{center} \caption{Summary of the examined approximation modes. $\epsilon$ stands for the approximation accuracy $\|f-\widetilde f\|_\infty$ achieved uniformly on the H\"older ball $F_{r,d}$. The expressions in the bottom four rows show the orders of magnitude for various network characteristics w.r.t. $\epsilon$.}\label{tab:info} \end{table} It is interesting to examine how information about the approximated function $f$ is distributed in the network (see Table \ref{tab:info}). The classical theorem of Kolmogorov \cite{KolmogorovTikhomirov} shows that the $\epsilon$-entropy of the H\"older ball $F_{r,d}$ scales as $\epsilon^{-d/r}$ at small $\epsilon.$ This means that any family of networks achieving accuracy $\epsilon$ on this ball must include at least $\epsilon^{-d/r}$ bits of information about $f\in F_{r,d}.$ This imposes constraints on the magnitude and/or precision of network weights: if the network is small and the weights have a limited space of values, the network simply cannot contain the necessary amount of information (\cite{bolcskei2017memory,petersen2018optimal, voigtlaender2019approximation}). 
Classical linear models or ``weakly nonclassical'' models such as shallow ReLU networks contain $\epsilon^{-d/r}$ weights, and a weight precision of $O(\log (1/\epsilon))$ bits is sufficient to accommodate the total $\epsilon$-entropy $\epsilon^{-d/r}$ (\cite{voigtlaender2019approximation}). In contrast, the models in the ``deep discontinuous ReLU'' phase contain far fewer weights and accordingly need a much higher weight precision. Specifically, it follows from the proofs of Theorems \ref{th:deepphase} and \ref{th:deepsecond} that the number of encoding weights in a network with rate $p\in (\tfrac{r}{d},\tfrac{2r}{d}]$ is $\sim\epsilon^{-1/p}$, while each encoding weight must be specified with accuracy $c^{\epsilon^{1/p-d/r}}$ with some constant $c>0,$ i.e. must have $\sim\epsilon^{1/p-d/r}$ bits. In the ``deep Fourier'' model, the encoding weight is unique. At the end of Section \ref{sec:sketchsin} we roughly estimate the information contained in this weight as $\epsilon^{-d/r}\log(1/\epsilon)$, again in agreement with the $\epsilon$-entropy $\epsilon^{-d/r}$ of the H\"older ball $F_{r,d}$. \section{Discussion} Our results highlight tradeoffs between network size and the complexity of activations and/or arithmetic operations: the size can be decreased substantially at the cost of the other complexities. In addition to the increased precision of network operations, this requires the weight assignment to be discontinuous with respect to the fitted function $f$. While we do not discuss learning aspects in this paper, this discontinuity suggests that such networks should be hard to train by usual gradient-based methods, and would probably require other types of fitting algorithms. The mentioned complexity tradeoffs are not unlimited: we have shown that for all piecewise polynomial activations the feasible rates span the sector $p\le \tfrac{2r}{d}$.
We do not know if this remains true for other standard nonpolynomial activations such as the standard sigmoid. This question seems to be essentially rooted in the optimality of Khovanskii's fewnomial bounds, which is a long-standing problem in algebraic geometry \cite{haas2002simple, dickenstein2007extremal}. We have introduced the ``deep Fourier'' model -- a hypothetical computational model assuming that the neurons can perfectly compute a periodic function of their inputs. This model allows one to achieve exponential approximation rates while storing all information in a single weight. This result is purely theoretical; it does not seem possible to implement such a model using practical technologies. Rather, we see the main interest of this result in the theoretical demonstration of a huge network size reduction compared to the usual shallow Fourier expansion, and in the associated novel bit extraction mechanism. \section{Broader impact} Not applicable. \section{Acknowledgments and Funding Transparency Statement} We thank Christoph Schwab for suggesting an extension of Theorem \ref{th:sin} to general periodic activations. We also thank the anonymous reviewers for several useful comments and suggestions. The research was not supported by third parties. The authors are not aware of any conflict of interest associated with this research. \bibliographystyle{unsrt} \section{Theorem \ref{th:deepphase}: proof details}\label{sec:proofdeepphase} We follow the paper \cite{yaropt} where Theorem \ref{th:deepphase} was proved for $r\le 1$, and generalize it to arbitrary $r>0$ using the strategy explained in Section \ref{sec:phasediag}. Given $p \in (\frac{r}{d}, \frac{2r}{d}]$ we show that it is possible to construct a network architecture with $W$ weights and $L = O(W^{pd/r - 1})$ layers which approximates every $f \in {F}_{r, d}$ with error $O(W^{-p})$. In \autoref{rm:nolog} we deal with the case $p = \frac{r}{d}$.
We start by describing the space partition and related constructions. Then we give an overview of the network structure. Finally, we describe in more detail the network computation of the Taylor approximations, which is the main novel element of Theorem \ref{th:deepphase}. \subsection{Space partitions} For an integer $N \geq 1$ we denote by $\mathcal{P}_N$ a standard triangulation of $\mathbb{R}^d$ into simplexes: \begin{align*} \Delta_{N, \mathbf{n}, \rho} = \left\{\mathbf x \in \mathbb{R}^d : 0 \leq x_{\rho(1)} - \frac{\mathbf{n}_{\rho(1)}}{N} \leq \dots \leq x_{\rho(d)} - \frac{\mathbf{n}_{\rho(d)}}{N} \leq \frac{1}{N} \right\}, \end{align*} where $\mathbf{n} \in \mathbb{Z}^d$ and $\rho$ is a permutation of $d$ elements. The vertices of these simplexes are the points of the grid $(\mathbb{Z} / N)^d$. We call the set of all the vertices \emph{the $N$-grid} and a particular vertex \emph{an $N$-knot}. For an $N$-knot, we call the union of the simplexes it belongs to \emph{an $N$-patch}. We denote the set of all $N$-knots by $\mathbf{K}_{N}$. Let $\phi: \mathbb{R}^d \to \mathbb{R}$ be the ``spike'' function defined as the continuous piecewise linear function such that: \begin{enumerate} \item $\phi$ is linear on every simplex from the triangulation $\mathcal{P}_1$; \item $\phi(0) = 1$, $\phi(\mathbf{n}) = 0$ for all other $\mathbf{n} \in \mathbb{Z}^d$. \end{enumerate} The function $\phi(\mathbf x)$ can be computed by a feed-forward ReLU network with $O(d^2)$ weights (see \cite[Section~4.2]{yaropt} for details). We treat $d$ as a constant, so we can say that $\phi(\mathbf x)$ can be computed by a network with a constant number of weights. Note that for integer $N$ and $\mathbf{n} \in \mathbb{Z}^d \cap [0, N]^d$, the function $\phi(N\mathbf x - \mathbf{n})$ is a continuous piecewise linear function which is linear on each simplex from $\mathcal{P}_N$, is equal to 1 at $\mathbf x=\frac{\mathbf{n}}{N}$, and vanishes at all other $N$-knots of $(\mathbb{Z} / N)^d$.
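For intuition, the two defining properties determine $\phi$ uniquely, and one closed form consistent with them (an assumed formula used here only for illustration, not the network implementation of \cite{yaropt}) is $\phi(\mathbf x)=\max\bigl(0,\,1-\max(0,\max_i x_i)+\min(0,\min_i x_i)\bigr)$:

```python
def spike(x):
    # Assumed closed form of the CPwL "spike" (illustrative): affine on every
    # simplex of the standard triangulation P_1, equal to 1 at the origin and
    # to 0 at every other integer point.
    return max(0.0, 1.0 - max(0.0, max(x)) + min(0.0, min(x)))
```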
It is convenient to keep in mind the following two simple propositions: \begin{proposition}\label{prop:linear_interpolation} Suppose we have $K$ $N$-knots $\frac{\mathbf{n}_1}{N}, \dots, \frac{\mathbf{n}_K}{N}$, $\mathbf{n}_i \in \mathbb{Z}^d$, and corresponding numbers $\ell_1, \dots, \ell_K$. Then the function \begin{align*} g(\mathbf x) = \sum_{k=1}^K \ell_k \phi(N\mathbf x - \mathbf{n}_k) \end{align*} has the following properties: \begin{enumerate} \item $g(\mathbf x)$ is linear on each simplex from $\mathcal{P}_N$; \item $g\left( \frac{\mathbf{n}_k}{N} \right) = \ell_k$ for $k=1, \dots, K$. At all other $N$-knots $\frac{\mathbf{n}}{N}$, $g$ vanishes: $g\left( \frac{\mathbf{n}}{N} \right) = 0$; \item $g(\mathbf x)$ can be computed exactly by a network with $O(K)$ weights and $O(1)$ layers. \end{enumerate} \end{proposition} \begin{proposition}\label{prop:constant_interpolation} Suppose we have $K$ $N$-knots $\frac{\mathbf{n}_1}{N}, \dots, \frac{\mathbf{n}_K}{N}$, $\mathbf{n}_i \in \mathbb{Z}^d$, and corresponding numbers $s_1, \dots, s_K$. Suppose also that the $N$-patches associated with $\frac{\mathbf{n}_1}{N}, \dots, \frac{\mathbf{n}_K}{N}$ are disjoint. Then there exists a function $h(\mathbf x)$ with the following properties: \begin{enumerate} \item $h(\mathbf x)$ is linear on each simplex from $\mathcal{P}_N$; \item For $k=1, \dots, K$, $h\left( \mathbf{x} \right) = s_k$ on the $N$-patch associated with $\frac{\mathbf{n}_k}{N}$; \item $h(\mathbf x)$ can be computed exactly by a network with $O(K)$ weights and $O(1)$ layers. \end{enumerate} \end{proposition} \begin{proof} This follows directly from \autoref{prop:linear_interpolation}: we assign the value $s_k$ to all $N$-knots in the $N$-patch associated with $\frac{\mathbf{n}_k}{N}$ and apply \autoref{prop:linear_interpolation}. Since the $N$-patches of interest are disjoint, each $N$-knot has at most one assigned value.
\end{proof} \subsection{The filtering subgrids}\label{ss:overviewandpartitioning} Given the total number of weights $W$, we set $N = W^{1/d}$. We will assume without loss of generality that $N$ is an integer. We consider the triangulation $\mathcal{P}_N$ of $[0,1]^d$ on the length scale $\tfrac{1}{N}$. It is convenient to split the $N$-grid into $3^d$ disjoint subgrids with tripled grid spacing: $$\mathbf N_{\mathbf q} = \{\tfrac{\mathbf{n}}{N}: \mathbf{n} \in \left(\mathbf q + (3\mathbb{Z})^d\right) \cap [0, N]^d\},\quad \mathbf q \in \{0, 1, 2\}^d.$$ Clearly, each subgrid contains $O(N^d)$ knots. Note that the $N$-patches associated with $N$-knots in $\mathbf N_{\mathbf q}$ are disjoint. This means, in particular, that any point $\mathbf{x} \in [0,1]^d$ lies in at most one such $N$-patch. It also means that \autoref{prop:constant_interpolation} is applicable to $\mathbf N_{\mathbf q}$. We will use this observation in \autoref{ss:singlegrid} to construct an efficient approximation in a neighbourhood of $\mathbf N_{\mathbf q}$ for a single $\mathbf{q}$. We call the union of these $N$-patches \emph{the domain of $\mathbf N_{\mathbf q}$}. We compute the full approximation $\widetilde f$ as a sum \begin{align}\label{eq:qdecomp} \widetilde{f}(\mathbf{x}) &= \sum_{\mathbf{q} \in \{0,1,2\}^d} \widetilde{w}_{\mathbf{q}}(\mathbf{x}) \widetilde{f}_{\mathbf{q}}(\mathbf{x}). \end{align} The function $\widetilde{f}_{\mathbf{q}}(\mathbf{x})$ computes $f(\mathbf{x})$ with error $O(W^{-p})$ for every $\mathbf{x}$ in the domain of $\mathbf N_{\mathbf q}$; for $\mathbf{x}$ outside this domain it may compute an arbitrary (garbage) value. We describe $\widetilde{f}_{\mathbf{q}}(\mathbf{x})$ in \autoref{ss:singlegrid}. The final approximation $\widetilde{f}(\mathbf{x})$ is a weighted sum of the $\widetilde{f}_{\mathbf{q}}(\mathbf{x})$ with weights $\widetilde{w}_{\mathbf{q}}(\mathbf{x})$.
We choose the functions $\widetilde{w}_{\mathbf{q}}(\mathbf{x})$ so that $\widetilde{w}_{\mathbf{q}}(\mathbf{x})$ vanishes outside the domain of $\mathbf N_{\mathbf q}$ and \begin{align*} \sum_{\mathbf{q} \in \{0,1,2\}^d} \widetilde{w}_{\mathbf{q}}(\mathbf{x}) \equiv 1. \end{align*} It follows that $\widetilde{f}(\mathbf{x})$ is a weighted sum (with weights summing to 1) of terms approximating $f(\mathbf{x})$ with error $O(W^{-p})$. Consequently, $\widetilde{f}(\mathbf{x})$ approximates $f(\mathbf{x})$ with error $O(W^{-p})$. The function $\widetilde{w}_{\mathbf{q}}(\mathbf{x})$ is obtained by applying \autoref{prop:linear_interpolation} to the $N$-knots from $\mathbf N_{\mathbf q}$ with all values $\ell_1, \ell_2, \dots, \ell_{|\mathbf N_{\mathbf q}|}$ equal to 1. Clearly, $\widetilde{w}_{\mathbf{q}}(\mathbf{x})$ vanishes outside the domain of $\mathbf N_{\mathbf q}$. The sum $\sum_{\mathbf{q} \in \{0,1,2\}^d} \widetilde{w}_{\mathbf{q}}(\mathbf{x})$ is linear on each simplex from $\mathcal{P}_N$ and equals 1 at all $N$-knots, because each $N$-knot belongs to exactly one set $\mathbf N_{\mathbf q}$. Consequently, this sum equals 1 for every $\mathbf{x} \in [0,1]^d$. It follows from \autoref{prop:linear_interpolation} that the network implementing $\widetilde{w}_{\mathbf{q}}(\mathbf{x})$ has $O(N^d) = O(W)$ weights and $O(1)$ layers. The multiplication $\widetilde{w}_{\mathbf{q}}(\mathbf{x}) \widetilde{f}_{\mathbf{q}}(\mathbf{x})$ is implemented approximately, with error $O(W^{-p})$, by the network given by \cite[Proposition~3]{yarsawtooth} and requires $O(\log W)$ additional weights. \subsection{The approximation for a subgrid}\label{ss:singlegrid} Here we describe how we construct $\widetilde{f}_{\mathbf{q}}(\mathbf{x})$ for a single $\mathbf{q} \in \{0, 1, 2\}^d$. Recall that $\widetilde{f}_{\mathbf{q}}(\mathbf{x})$ computes an accurate approximation of $f(\mathbf{x})$ only on the domain of $\mathbf N_{\mathbf q}$.
For any $N$-knot $\frac{\mathbf{n}}{N}$ in $\mathbf N_{\mathbf q}$ we consider the cube with center at $\frac{\mathbf{n}}{N}$ and edge $\frac{2}{N}$: \begin{align*} \left\{\mathbf{x} \in \mathbb{R}^{d} : \max_{1 \leq i \leq d} \left|\mathbf{x}_{i} - \frac{\mathbf{n}_i}{N} \right| \leq \frac{1}{N} \right\}. \end{align*} We call such a cube \emph{an $N$-cube} and denote it by $\mathbf C_{\mathbf{n}}$. Note that $\mathbf C_{\mathbf{n}} = \tfrac{\mathbf{n}}{N} + \mathbf C_{\mathbf{0}}$. Recall that the domain of $\mathbf N_{\mathbf q}$ consists of $|\mathbf N_{\mathbf q}|$ disjoint $N$-patches associated with the $N$-knots from $\mathbf N_{\mathbf q}$. Each $\mathbf{x}$ from the domain of $\mathbf N_{\mathbf q}$ belongs to exactly one such $N$-patch. We call this patch \emph{the $N$-patch for $\mathbf{x}$} and the associated $N$-knot \emph{the $N$-knot for $\mathbf{x}$}. Let us denote the $N$-knot for $\mathbf{x}$ by $\tfrac{\mathbf{n}_{\mathbf{q}}(\mathbf{x})}{N}$. We set $M = W^{p/r}$. Note that $M^{-r} = W^{-p}$ and, therefore, we need to construct an approximation with error $O(M^{-r})$. We will assume without loss of generality that $M$ is an integer divisible by $N$. Then $\mathcal{P}_M$ is a subpartition of $\mathcal{P}_N$. We define $M$-knots and $M$-patches similarly to $N$-knots and $N$-patches. We denote the set of all $M$-knots by $\mathbf{K}_{M}$. Note that there are $O\left((M / N)^d\right)$ $M$-knots in each $N$-patch and each $N$-cube. See Fig.\ref{fig:mesh} for an illustration of all the described constructions. \begin{figure} \centering \includegraphics[scale = 0.45, clip, trim=0mm 0mm 40mm 0mm ]{mesh.png} \caption{The partitions $\mathcal{P}_{N}$ and $\mathcal{P}_{M}$ for $d = 2$ and $\tfrac{M}{N} = 3$. The small black dots are the $M$-knots, and the thin black edges show the triangulation $\mathcal{P}_{M}$. The large blue dots are the $N$-knots; the light blue edges show the triangulation $\mathcal{P}_{N}$.
The red crosses show the points of the subgrid $\mathbf{N}_{\mathbf{q}}$. The filled blue region is the domain of $\mathbf{N}_{\mathbf{q}}$. The bold blue squares show the $N$-cubes $\mathbf{C}_{\mathbf{n}}$ for the points of $\mathbf{N}_{\mathbf{q}}$.} \label{fig:mesh} \end{figure} Suppose that $\mathbf{x}$ lies in an $M$-patch associated with an $M$-knot $\tfrac{\mathbf{m}}{M}$. Consider the Taylor polynomial $P_{\mathbf{m} / M}(\mathbf{x})$ at $\tfrac{\mathbf{m}}{M}$ of order $\lceil r \rceil - 1$. Standard bounds for the remainder of the Taylor polynomial imply that it approximates $f(\mathbf{x})$ with error $O(M^{-r})$ uniformly over $f \in F_{r,d}$. The Taylor polynomial at $\tfrac{\mathbf{m}}{M}$ (and, in fact, any polynomial) can be implemented with error $O(M^{-r})$ by a network with $O(\log M)$ weights and layers. We refer the reader to \cite[Proposition~3]{yarsawtooth} and the proof of \cite[Theorem~1]{yarsawtooth} for details. We can approximate $f(\mathbf{x})$ with error $O(M^{-r})$ by a weighted sum of the Taylor polynomials $P_{\mathbf{m} / M}(\mathbf{x})$ at all $M$-knots: \begin{align}\label{eq:fulltaylor} \widetilde{f}(\mathbf{x}) = \sum_{\tfrac{\mathbf{m}}{M} \in \mathbf{K}_{M}} \phi\left(M \mathbf{x} - \mathbf{m}\right) P_{\mathbf{m} / M}(\mathbf{x}). \end{align} Note that $\phi\left(M \mathbf{x} - \mathbf{m}\right)$ vanishes outside the $M$-patch associated with $\tfrac{\mathbf{m}}{M}$ and \begin{align*} \sum_{\tfrac{\mathbf{m}}{M} \in \mathbf{K}_{M}} \phi\left(M \mathbf{x} - \mathbf{m}\right) \equiv 1. \end{align*} There are $M^d$ terms in \eqref{eq:fulltaylor}, and calculating a single term requires $O(\log M)$ weights. So the total number of weights needed to implement \eqref{eq:fulltaylor} is $O(M^{d} \log M) = O(W^{pd / r} \log W)$. This is clearly infeasible for $p > \frac{r}{d}$. For $p = \frac{r}{d}$ it leads to approximation error $O(W^{-r/d} \log^{r/d} W)$ and recovers the statement of \cite[Theorem~1]{yarsawtooth}.
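As a quick numerical sanity check of the $O(M^{-r})$ rate of local Taylor approximation (an illustration only, not part of the construction; the test function $f = \sin$, the value $r = 3$, and the grid sizes are arbitrary choices), one can measure the worst-case error of the order-$(\lceil r \rceil - 1)$ Taylor polynomial expanded at the nearest $M$-knot. Doubling $M$ should shrink the error by roughly $2^r = 8$:

```python
import math

def taylor_patch_error(M, order=2):
    """Max error over [0, 1] of the order-`order` Taylor polynomial
    of sin, expanded at the nearest knot m/M (so |x - m/M| <= 1/(2M))."""
    derivs = [math.sin, math.cos, lambda t: -math.sin(t), lambda t: -math.cos(t)]
    err = 0.0
    for i in range(2001):
        x = i / 2000
        m = round(x * M)                 # nearest M-knot index
        c = m / M
        p = sum(derivs[k % 4](c) / math.factorial(k) * (x - c) ** k
                for k in range(order + 1))
        err = max(err, abs(math.sin(x) - p))
    return err

e1, e2 = taylor_patch_error(10), taylor_patch_error(20)
print(e1 / e2)   # expected to be close to 2**3 = 8
```

The observed ratio is close to 8, matching the $M^{-r}$ scaling with $r = 3$.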
Note that in this construction the Taylor coefficients at the $M$-knots are weights of the network. Note that the terms of \eqref{eq:fulltaylor} are nonzero only for $M$-knots in the $N$-cube for $\mathbf{x}$. Suppose that $\mathbf{x}$ lies in the domain of $\mathbf{N}_{\mathbf{q}}$ and, therefore, has a well-defined $N$-knot $\tfrac{\mathbf{n}_{\mathbf{q}}(\mathbf{x})}{N}$. For such $\mathbf{x}$ we can write \begin{align}\label{eq:parttaylor} \begin{split} \widetilde{f}_{\mathbf{q}}(\mathbf{x}) &= \sum_{\tfrac{\mathbf{m}}{M} \in \mathbf{K}_{M} \cap \mathbf{C}_{\mathbf{n}_{\mathbf{q}}(\mathbf{x})}} \phi\left(M \mathbf{x} - \mathbf{m}\right) P_{\mathbf{m} / M}(\mathbf{x}) \\ &= \sum_{\tfrac{\mathbf{m}}{M} \in \mathbf{K}_{M} \cap \mathbf{C}_{\mathbf{0}}} \phi\left(M \left(\mathbf{x} - \tfrac{\mathbf{n}_{\mathbf{q}}(\mathbf{x})}{N}\right) - \mathbf{m}\right) P_{\mathbf{m} / M + \mathbf{n}_{\mathbf{q}}(\mathbf{x}) / N}(\mathbf{x}). \end{split} \end{align} There are only $(M / N)^d = W^{pd/r - 1}$ terms in \eqref{eq:parttaylor}. Therefore, if we know $\tfrac{\mathbf{n}_{\mathbf{q}}(\mathbf{x})}{N}$ and the Taylor coefficients at $\tfrac{\mathbf{m}}{M} + \tfrac{\mathbf{n}_{\mathbf{q}}(\mathbf{x})}{N}$, then $\widetilde{f}_{\mathbf{q}}(\mathbf{x})$ can be implemented with error $O(M^{-r})$ by a network with $O\left((M / N)^d \log M\right) = O(W^{pd/r - 1} \log W)$ weights. For $\mathbf{x}$ in the domain of $\mathbf{N}_{\mathbf{q}}$ it holds that $\widetilde{f}_{\mathbf{q}}(\mathbf{x}) = \widetilde{f}(\mathbf{x})$. It follows that $\widetilde{f}_{\mathbf{q}}(\mathbf{x})$ indeed approximates $f(\mathbf{x})$ with error $O(M^{-r}) = O(W^{-p})$ on the domain of $\mathbf{N}_{\mathbf{q}}$. If $\mathbf{x}$ lies in the domain of $\mathbf{N}_{\mathbf{q}}$, then we can compute a single coordinate of $\tfrac{\mathbf{n}_{\mathbf{q}}(\mathbf{x})}{N}$ with the network given by \autoref{prop:constant_interpolation}.
We need to take $\frac{\mathbf{n}_i}{N} \in \mathbf{N}_{\mathbf{q}}$ and set $s_i$ to be the corresponding coordinate of $\mathbf{n}_i$. We compute $\tfrac{\mathbf{n}_{\mathbf{q}}(\mathbf{x})}{N}$ by applying this observation to all coordinates. The constructed network has $O(|\mathbf N_{\mathbf q}|) = O(N^d) = O(W)$ weights and $O(1)$ layers. In \autoref{ss:coefficientscoding} we show that the (approximated) Taylor coefficients for the $(M / N)^d$ $M$-knots $\tfrac{\mathbf{m}}{M} + \tfrac{\mathbf{n}}{N}$, $\tfrac{\mathbf{m}}{M} \in \mathbf{K}_{M} \cap \mathbf{C}_{\mathbf{0}}$, can be computed by a network with $O\left((M / N)^d\right)$ weights and layers from $c_{r, d} \leq 2(d + 1)^{\lceil r \rceil - 1}$ $\mathbf{n}$-dependent values. We call these values \emph{encoding weights for $\mathbf{n}$}. In \autoref{ss:coefficientscoding} we describe how we construct the encoding weights for a particular function $f$ and an $N$-knot $\tfrac{\mathbf{n}}{N}$. We show that using approximated Taylor coefficients computed from the encoding weights instead of the true ones leads to an error bounded by $O(M^{-r}) = O(W^{-p})$. For $\mathbf{x}$ in the domain of $\mathbf{N}_{\mathbf{q}}$ we can calculate the encoding weights for $\mathbf{n}_{\mathbf{q}}(\mathbf{x})$ by the network given by \autoref{prop:constant_interpolation}. Let us finalize the structure of the network computing $\widetilde{f}_{\mathbf{q}}(\mathbf{x})$. For $\mathbf{x}$ in the domain of $\mathbf{N}_{\mathbf{q}}$ it \begin{enumerate} \item Computes $\mathbf{n}_{\mathbf{q}}(\mathbf x)$ and the encoding weights for $\mathbf{n}_{\mathbf{q}}(\mathbf x)$. This step is implemented by applying \autoref{prop:constant_interpolation} and requires $O(N^d) = O(W)$ weights and $O(1)$ layers; \item Given the encoding weights for $\mathbf{n}_{\mathbf{q}}(\mathbf{x})$, computes the (approximated) Taylor coefficients for all $M$-knots $\tfrac{\mathbf{m}}{M} + \tfrac{\mathbf{n}_{\mathbf{q}}(\mathbf{x})}{N}$, $\tfrac{\mathbf{m}}{M} \in \mathbf{K}_{M} \cap \mathbf{C}_{\mathbf{0}}$.
This step requires a network with $O\left((M / N)^d\right) = O(W^{pd/r - 1})$ weights and layers and is described in \autoref{ss:coefficientscoding}; \item Given the (approximated) Taylor coefficients obtained at the previous step, computes an approximation of $P_{\mathbf{m} / M + \mathbf{n}_{\mathbf{q}}(\mathbf{x}) / N}$ for all $M$-knots $\tfrac{\mathbf{m}}{M} + \tfrac{\mathbf{n}_{\mathbf{q}}(\mathbf{x})}{N}$, $\tfrac{\mathbf{m}}{M} \in \mathbf{K}_{M} \cap \mathbf{C}_{\mathbf{0}}$. The approximation with error $O(M^{-r}) = O(W^{-p})$ of a single $P_{\mathbf{m} / M + \mathbf{n}_{\mathbf{q}}(\mathbf{x}) / N}$ can be implemented by a network with $O(\log M) = O(\log W)$ weights and layers. The total number of weights needed at this step is therefore $O\left(|\mathbf{K}_{M} \cap \mathbf{C}_{\mathbf{0}}| \log M\right) = O\left((M / N)^d \log M\right) = O(W^{pd/r - 1} \log W)$. Computations for different $M$-knots can be done in parallel, so the total number of layers is still $O(\log W)$; \item Given $\mathbf{n}_{\mathbf{q}}(\mathbf{x})$, computed at the first step, computes $\phi\left(M \left(\mathbf{x} - \tfrac{\mathbf{n}_{\mathbf{q}}(\mathbf{x})}{N}\right) - \mathbf{m}\right)$ for all $M$-knots $\tfrac{\mathbf{m}}{M} + \tfrac{\mathbf{n}_{\mathbf{q}}(\mathbf{x})}{N}$, $\tfrac{\mathbf{m}}{M} \in \mathbf{K}_{M} \cap \mathbf{C}_{\mathbf{0}}$. This requires $O\left(|\mathbf{K}_{M} \cap \mathbf{C}_{\mathbf{0}}|\right) = O\left((M / N)^d\right) = O(W^{pd/r - 1})$ weights and $O(1)$ layers; \item Combines the outputs of steps 3 and 4 into the final approximation \eqref{eq:parttaylor}. Multiplication with accuracy $O(M^{-r})$ can be implemented by a network with $O(\log M)$ weights and layers, so this step requires $O(|\mathbf{K}_{M} \cap \mathbf{C}_{\mathbf{0}}| \log M) = O\left((M / N)^d \log M\right) = O(W^{pd/r - 1} \log W)$ weights and $O(\log M) = O(\log W)$ layers.
\end{enumerate} Clearly, we can pass forward values obtained at earlier steps without increasing the asymptotic number of weights and layers. Summing up the number of weights needed at each step, we obtain $O\left(W + W^{pd/r - 1} \log W\right)$. For $\tfrac{r}{d} < p < \tfrac{2r}{d}$ this is $O(W)$ and matches the desired approximation rate. For $p = \tfrac{2r}{d}$ it is $O(W \log W)$ and leads to the desired approximation rate up to a logarithmic factor. We show how to deal with this in \autoref{ss:logfactor}. The total number of needed layers is $O(W^{pd/r - 1})$, as desired. \subsection{Encoding and decoding Taylor coefficients}\label{ss:coefficientscoding} It is known that $\sim \epsilon^{-d/r}$ bits are needed to specify a function $f \in F_{r,d}$ with accuracy $\epsilon$ \cite{KolmogorovTikhomirov}. This follows from the bounds for the Kolmogorov $\varepsilon$-entropy of $F_{r,d}$ derived in \cite[\S~4]{KolmogorovTikhomirov}. Here we describe how this specification can be implemented by a neural network. First we introduce some notation. Suppose we have an $M$-knot $\tfrac{\mathbf{m}}{M}$. The Taylor expansion $P_{\mathbf{m} / M}(\mathbf{x})$ of $f(\mathbf{x})$ at $\tfrac{\mathbf{m}}{M}$ is given by \begin{align*} P_{\mathbf{m} / M}(\mathbf{x}) &= \sum_{\k : |\k| \leq \lceil r \rceil - 1} \dfrac{D^{\k}f\left(\frac{\mathbf{m}}{M}\right)}{\k !} \left( \mathbf{x} - \dfrac{\mathbf{m}}{M}\right)^{\k}. \end{align*} We use the usual conventions $\k! = \prod_{i=1}^d k_i!$ and $\left( \mathbf{x} - \tfrac{\mathbf{m}}{M}\right)^{\k} = \prod_{i=1}^d \left( x_i - \tfrac{m_i}{M}\right)^{k_i}$. We denote \begin{align*} a_{\mathbf{m}, \k} = D^{\k}f\left(\frac{\mathbf{m}}{M}\right). \end{align*} We denote by $\a_{\mathbf{m}, \k}$ the approximated Taylor coefficients to be defined later in this section.
The corresponding approximated Taylor expansion is given by \begin{align*} \widehat{P}_{\mathbf{m} / M}(\mathbf{x}) &= \sum_{\k : |\k| \leq \lceil r \rceil - 1} \dfrac{\a_{\mathbf{m}, \k}}{\k !} \left( \mathbf{x} - \dfrac{\mathbf{m}}{M}\right)^{\k}. \end{align*} For any $\mathbf{x}$ in the $M$-patch associated with $\tfrac{\mathbf{m}}{M}$ we have \begin{align*} \left| f(\mathbf{x}) - P_{\mathbf{m} / M}(\mathbf{x}) \right| \leq c_{r, d} M^{-r} \end{align*} for all $f \in F_{r, d}$ and some constant $c_{r, d}$ that does not depend on $M$ and $\mathbf{m}$. We first show how we construct the encoding weights associated with an $N$-knot $\tfrac{\mathbf{n}}{N}$. Our construction is quite similar to the one from the proof of \cite[Theorem~XIV]{KolmogorovTikhomirov}, where bounds for the Kolmogorov $\varepsilon$-entropy of $F_{r,d}$ were derived. Then we discuss how the approximated Taylor coefficients at the $M$-knots in the $N$-cube $\mathbf{C}_{\mathbf{n}}$ are computed from the encoding weights by a network. Our goal is to construct approximated Taylor coefficients $\a_{\mathbf{m}, \k}$ such that for any $\mathbf{x}$ in the $M$-patch associated with $\tfrac{\mathbf{m}}{M}$ we have $|\widehat{P}_{\mathbf{m} / M}(\mathbf{x}) - P_{\mathbf{m} / M}(\mathbf{x})| \leq c_{r, d} M^{-r}$ for some $M$-independent constant $c_{r, d}$. The following proposition states a sufficient condition on such $\a_{\mathbf{m}, \k}$. \begin{proposition}\label{prop:approxbound} Suppose that \begin{align}\label{eq:sufcond} |a_{\mathbf{m}, \k} - \a_{\mathbf{m}, \k}| \leq M^{|\k| - r} \quad \forall \, \k: |\k| \leq \lceil r \rceil - 1. \end{align} Then for any $\mathbf{x}$ in the $M$-patch associated with $\tfrac{\mathbf{m}}{M}$ \begin{align*} \left| \widehat{P}_{\mathbf{m} / M}(\mathbf{x}) - P_{\mathbf{m} / M}(\mathbf{x}) \right| \leq (d + 1)^{\lceil r \rceil - 1} M^{-r}.
\end{align*} \end{proposition} \begin{proof} \begin{align*} \left| \widehat{P}_{\mathbf{m} / M}(\mathbf{x}) - P_{\mathbf{m} / M}(\mathbf{x}) \right| &\leq \sum_{\k : |\k| \leq \lceil r \rceil - 1} \dfrac{1}{\k !} \left|\a_{\mathbf{m}, \k} - a_{\mathbf{m}, \k}\right| \left|\left( \mathbf{x} - \dfrac{\mathbf{m}}{M}\right)^{\k} \right| \\ &\leq \sum_{\k : |\k| \leq \lceil r \rceil - 1} M^{|\k| - r} M^{-|\k|} \\ &\leq (d + 1)^{\lceil r \rceil - 1} M^{-r}. \end{align*} \end{proof} Suppose that two $M$-knots $\tfrac{\mathbf{m}_1}{M}$ and $\tfrac{\mathbf{m}_2}{M}$ are adjacent and that we have $\a_{\mathbf{m}_1, \widehat{\mathbf{k}}}$, $|\widehat{\mathbf{k}}| \leq \lceil r \rceil - 1$, satisfying \eqref{eq:sufcond}. The next proposition, which we use further on, shows how to construct an accurate approximation of the Taylor coefficients at $\tfrac{\mathbf{m}_2}{M}$. \begin{proposition}\label{prop:adjacent} Suppose that two $M$-knots $\tfrac{\mathbf{m}_1}{M}$ and $\tfrac{\mathbf{m}_2}{M}$ are adjacent. Suppose that the approximated Taylor coefficients $\a_{\mathbf{m}_1, \widehat{\mathbf{k}}}$, $|\widehat{\mathbf{k}}| \leq \lceil r \rceil - 1$, at $\tfrac{\mathbf{m}_1}{M}$ satisfy \eqref{eq:sufcond}. Then we can find $c_{\k, \widehat{\mathbf{k}}}$ and $\widetilde{a}_{\mathbf{m}_2, \k}$, $|\k|, |\widehat{\mathbf{k}}| \leq \lceil r \rceil - 1$, such that \begin{enumerate} \item For all $\k: |\k| \leq \lceil r \rceil - 1$ \begin{align*} \widetilde{a}_{\mathbf{m}_2, \k} = \sum_{\widehat{\mathbf{k}}: |\widehat{\mathbf{k}}| \leq \lceil r \rceil - 1} c_{\k, \widehat{\mathbf{k}}} \cdot \widehat a_{\mathbf{m}_1, \widehat{\mathbf{k}}}; \end{align*} \item For all $\k: |\k| \leq \lceil r \rceil - 1$ \begin{align}\label{eq:adjapproxbound} |a_{\mathbf{m}_2, \k} - \widetilde{a}_{\mathbf{m}_2, \k}| < 4 M^{|\k| - r}; \end{align} \item The coefficients $c_{\k, \widehat{\mathbf{k}}}$ depend only on the relative position of $\tfrac{\mathbf{m}_1}{M}$ and $\tfrac{\mathbf{m}_2}{M}$.
\end{enumerate} \end{proposition} \begin{proof} Recall that the $M$-knots $\tfrac{\mathbf{m}_1}{M}$ and $\tfrac{\mathbf{m}_2}{M}$ are adjacent, so they differ by 1 in a single coordinate. Assume without loss of generality that $\mathbf{m}_1 = (m_1, \overline{\mathbf{m}})$ and $\mathbf{m}_2 = (m_1 + 1, \overline{\mathbf{m}})$. Standard bounds for the remainder of a Taylor polynomial imply that for any $\k = (k_1, \dots, k_d)$ and $f \in F_{r, d}$ \begin{align*} \left|D^{(k_1, \dots, k_d)}f\left(\dfrac{\mathbf{m}_2}{M}\right) - \sum_{n = 0}^{\lceil r \rceil - 1 - |\k|} \dfrac{D^{(k_1 + n, \dots, k_d)}f\left(\dfrac{\mathbf{m}_1}{M}\right)}{n!} \cdot \dfrac{1}{M^n} \right| \leq M^{|\k| - r}. \end{align*} In our notation \begin{align}\label{eq:adjtaylorbound} \left|a_{\mathbf{m}_2, (k_1, \dots, k_d)} - \sum_{n = 0}^{\lceil r \rceil - 1 - |\k|} \dfrac{a_{\mathbf{m}_1, (k_1 + n, \dots, k_d)}}{n!} \cdot \dfrac{1}{M^n} \right| \leq M^{|\k| - r}. \end{align} From the assumption that the coefficients $\a_{\mathbf{m}_1, \k}$ satisfy \eqref{eq:sufcond} it follows that \begin{align}\label{eq:substbound} \begin{split} \left|\sum_{n = 0}^{\lceil r \rceil - 1 - |\k|} \dfrac{\left(a_{\mathbf{m}_1, (k_1 + n, \dots, k_d)} - \a_{\mathbf{m}_1, (k_1 + n, \dots, k_d)}\right)}{n!} \cdot \dfrac{1}{M^n} \right| &\leq \sum_{n = 0}^{\lceil r \rceil - 1 - |\k|} \dfrac{M^{|\k| + n - r}}{n!} \cdot \dfrac{1}{M^n} \\ &= M^{|\k| - r} \sum_{n = 0}^{\lceil r \rceil - 1 - |\k|} \dfrac{1}{n!} \\ &< e M^{|\k| - r} < 3 M^{|\k| - r}. \end{split} \end{align} Combining \eqref{eq:adjtaylorbound} and \eqref{eq:substbound}, we obtain \begin{align*} \begin{split} \left|a_{\mathbf{m}_2, (k_1, \dots, k_d)} - \sum_{n = 0}^{\lceil r \rceil - 1 - |\k|} \dfrac{\a_{\mathbf{m}_1, (k_1 + n, \dots, k_d)}}{n!} \cdot \dfrac{1}{M^n} \right| < 4 M^{|\k| - r}.
\end{split} \end{align*} It follows that if for each $\k = (k_1, \dots, k_d)$ we set \begin{align}\label{eq:adjtaylorrepr} \widetilde{a}_{\mathbf{m}_2, (k_1, \dots, k_d)} = \sum_{n = 0}^{\lceil r \rceil - 1 - |\k|} \dfrac{\a_{\mathbf{m}_1, (k_1 + n, \dots, k_d)}}{n!} \cdot \dfrac{1}{M^n}, \end{align} then $\widetilde{a}_{\mathbf{m}_2, \k}$ satisfy \eqref{eq:adjapproxbound}. It remains to note that the coefficients in \eqref{eq:adjtaylorrepr} depend only on the relative position of $\tfrac{\mathbf{m}_1}{M}$ and $\tfrac{\mathbf{m}_2}{M}$, but not on $f \in F_{r, d}$, the values $\a_{\mathbf{m}_1, \k}$, or the $M$-knots $\tfrac{\mathbf{m}_1}{M}$ and $\tfrac{\mathbf{m}_2}{M}$ themselves. \end{proof} Now we are ready to describe how we find $\a_{\mathbf{m}, \k}$ for all $M$-knots $\tfrac{\mathbf{m}}{M}$ in a given $N$-cube $\mathbf{C}_{\mathbf{n}}$. We enumerate the $M$-knots lying in $\mathbf{C}_{\mathbf{n}}$ with numbers $t = 1, \dots, (2 M / N + 1)^d$ and denote them $\tfrac{\mathbf{m}_{\mathbf{n}, t}}{M}$. We choose the enumeration so that any two consecutive $M$-knots are adjacent. We inductively construct $\a_{\mathbf{m}_{\mathbf{n}, t}, \k}$ satisfying \eqref{eq:sufcond} for all $M$-knots $\tfrac{\mathbf{m}_{\mathbf{n}, t}}{M}$. We set $\a_{\mathbf{m}_{\mathbf{n}, 1}, \k} = a_{\mathbf{m}_{\mathbf{n}, 1}, \k}$. Such $\a_{\mathbf{m}_{\mathbf{n}, 1}, \k}$ clearly satisfy \eqref{eq:sufcond}. Suppose that we have constructed $\a_{\mathbf{m}_{\mathbf{n}, t}, \k}$ satisfying \eqref{eq:sufcond}. Since the $M$-knots $\tfrac{\mathbf{m}_{\mathbf{n}, t}}{M}$ and $\tfrac{\mathbf{m}_{\mathbf{n}, t+1}}{M}$ are adjacent, we can apply \autoref{prop:adjacent} to get $\widetilde{a}_{\mathbf{m}_{\mathbf{n}, t+1}, \k}$, $|\k| \leq \lceil r \rceil - 1$, satisfying \eqref{eq:adjapproxbound}.
It follows that there exist integers $B_{\mathbf{n}, \k, t}$ with $|B_{\mathbf{n}, \k, t}| \leq 3$ such that \begin{align*} \left|a_{\mathbf{m}_{\mathbf{n}, t+1}, \k} - \widetilde{a}_{\mathbf{m}_{\mathbf{n}, t+1}, \k} - M^{|\k| - r} B_{\mathbf{n}, \k, t} \right| \leq M^{|\k| - r}. \end{align*} We set \begin{align}\label{eq:coefformula} \a_{\mathbf{m}_{\mathbf{n}, t+1}, \k} = \widetilde{a}_{\mathbf{m}_{\mathbf{n}, t+1}, \k} + M^{|\k| - r} B_{\mathbf{n}, \k, t}. \end{align} Then the coefficients $\a_{\mathbf{m}_{\mathbf{n}, t+1}, \k}$ satisfy \eqref{eq:sufcond}, as desired. See Fig.~\ref{fig:encoding} for an illustration of the algorithm for determining $\a_{\mathbf{m}_{\mathbf{n}, t+1}, \k}$. \begin{figure} \centering \input{encoding.tikz} \caption{An illustration of determining the approximated Taylor coefficients at $\tfrac{\mathbf{m}_{\mathbf{n}, t+1}}{M}$ from the known approximated Taylor coefficients at $\tfrac{\mathbf{m}_{\mathbf{n}, t}}{M}$. The blue line is $D^{\k} f(x)$, and the blue crosses are its values $a_{\mathbf{m}_{\mathbf{n}, t}, \k}$ and $a_{\mathbf{m}_{\mathbf{n}, t+1}, \k}$ at the $M$-knots $\tfrac{\mathbf{m}_{\mathbf{n}, t}}{M}$ and $\tfrac{\mathbf{m}_{\mathbf{n}, t+1}}{M}$, respectively. The red crosses are the desired approximations $\a_{\mathbf{m}_{\mathbf{n}, t}, \k}$ and $\a_{\mathbf{m}_{\mathbf{n}, t+1}, \k}$ of $a_{\mathbf{m}_{\mathbf{n}, t}, \k}$ and $a_{\mathbf{m}_{\mathbf{n}, t+1}, \k}$ satisfying \eqref{eq:sufcond}. Given $\a_{\mathbf{m}_{\mathbf{n}, t}, \k}$, we first apply \autoref{prop:adjacent} to get $\widetilde{a}_{\mathbf{m}_{\mathbf{n}, t+1}, \k}$ satisfying \eqref{eq:adjapproxbound}. This step is illustrated by the brown dashed arrow, and the brown cross is $\widetilde{a}_{\mathbf{m}_{\mathbf{n}, t+1}, \k}$.
Then we choose $B_{\mathbf{n}, \k, t} \in \{-3, \dots, 3\}$ such that $\a_{\mathbf{m}_{\mathbf{n}, t+1}, \k} = \widetilde{a}_{\mathbf{m}_{\mathbf{n}, t+1}, \k} + M^{|\k| - r} B_{\mathbf{n}, \k, t}$ satisfies \eqref{eq:sufcond}.} \label{fig:encoding} \end{figure} For a single $\k$ we encode the $(2M / N + 1)^d$ values $B_{\mathbf{n}, \k, t}$ by a single base-7 number $b_{\mathbf{n}, \k}$: \begin{align*} b_{\mathbf{n}, \k} = \sum_{t=1}^{(2M / N + 1)^d} 7^{-t} \left(B_{\mathbf{n}, \k, t} + 3\right). \end{align*} The numbers $b_{\mathbf{n}, \k}$ and $\a_{\mathbf{m}_{\mathbf{n}, 1}, \k} = a_{\mathbf{m}_{\mathbf{n}, 1}, \k}$ are the encoding weights for $\mathbf{n}$. There are $c_{r, d} \leq 2(d + 1)^{\lceil r \rceil - 1}$ encoding weights. Now we describe how a network reconstructs all $\a_{\mathbf{m}_{\mathbf{n}, t}, \k}$ from the encoding weights. The numbers $B_{\mathbf{n}, \k, t}$ can be reconstructed from $b_{\mathbf{n}, \k}$ by a ReLU network with $O\left((M/N)^d\right)$ weights and layers. We refer to \cite[5.2.2]{yaropt}, where a similar reconstruction is described for ternary numbers. Given $\a_{\mathbf{m}_{\mathbf{n}, t}, \k}$ and $B_{\mathbf{n}, \k, t}$, we first compute $\widetilde{a}_{\mathbf{m}_{\mathbf{n}, t+1}, \k}$ with \eqref{eq:adjtaylorrepr} and then compute $\a_{\mathbf{m}_{\mathbf{n}, t+1}, \k}$ with \eqref{eq:coefformula}. We need $O(1)$ weights and layers at each step, so the total number of needed weights and layers is $O\left((M/N)^d\right)$. For given $\mathbf{x} \in [0, 1]^d$ and $\mathbf{q} \in \{0, 1, 2\}^d$ we obtain the encoding weights for $\mathbf{n}_{\mathbf{q}}(\mathbf{x})$ by applying \autoref{prop:constant_interpolation}. Note that \autoref{prop:adjacent} implies that the coefficients in \eqref{eq:adjtaylorrepr} depend only on the relative position of the $M$-knots $\tfrac{\mathbf{m}_{\mathbf{n}, t}}{M}$ and $\tfrac{\mathbf{m}_{\mathbf{n}, t+1}}{M}$.
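The arithmetic behind this base-7 encoding and its digit-by-digit decoding can be illustrated in plain Python (a sketch of the number format only; exact rationals stand in for the shift-and-threshold steps that the ReLU subnetwork of \cite[5.2.2]{yaropt} emulates):

```python
from fractions import Fraction

def encode(B_values):
    """Pack correction digits B_t in {-3, ..., 3} into one base-7 fraction."""
    b = Fraction(0)
    for t, B in enumerate(B_values, start=1):
        assert -3 <= B <= 3
        b += Fraction(B + 3, 7 ** t)
    return b

def decode(b, count):
    """Recover the digits by repeated shift-and-floor: each step multiplies
    by 7 and splits off the integer part (the operation the network mimics)."""
    out = []
    for _ in range(count):
        b *= 7
        digit = int(b)       # integer part, in 0..6
        b -= digit
        out.append(digit - 3)
    return out

Bs = [3, -2, 0, 1, -3, 2]
assert decode(encode(Bs), len(Bs)) == Bs
```

Note that `encode` always yields a number in $[0, 1)$, so the integer part extracted at each decoding step is exactly the next digit.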
Since the coefficients in \eqref{eq:adjtaylorrepr} depend only on the relative position of consecutive $M$-knots, if we choose the same enumeration of $M$-knots for all $N$-cubes $\mathbf{C}_{\mathbf{n}}$, $\tfrac{\mathbf{n}}{N} \in \mathbf N_{\mathbf q}$, then we can use the network described in the previous paragraph for all possible values of $\mathbf{n}_{\mathbf{q}}(\mathbf{x})$. Note that the encoding weights $b_{\mathbf{n}, \k}$ can be represented as $\sim(M/N)^d$-bit numbers, while the encoding weights $\a_{\mathbf{m}_{\mathbf{n}, 1}, \k} = a_{\mathbf{m}_{\mathbf{n}, 1}, \k}$ can be arbitrary real numbers. Recall that the described construction requires $|\a_{\mathbf{m}_{\mathbf{n}, 1}, \k} - a_{\mathbf{m}_{\mathbf{n}, 1}, \k}| \sim M^{|\k| - r}$. It follows that if we want to encode $\a_{\mathbf{m}_{\mathbf{n}, 1}, \k}$ by a finite number of bits as well, then we need $\sim \log M$ additional bits to achieve the desired accuracy. \subsection{Getting rid of the logarithmic factor}\label{ss:logfactor} Recall that a logarithmic factor arises in the construction described in \autoref{ss:singlegrid} in the case $p = \tfrac{2r}{d}$. This is because we construct $\widetilde{f}_{\mathbf{q}}(\mathbf{x})$ in the form \eqref{eq:parttaylor} with $O\left(W\right)$ terms, and we need $O(\log W)$ weights to implement the approximated Taylor sum arising in each term. Note that for a particular $\mathbf{x}$ most terms in \eqref{eq:parttaylor} vanish, since $\phi(M (\mathbf{x} - \tfrac{\mathbf{n}_{\mathbf{q}}(\mathbf{x})}{N}) - \mathbf{m}) = 0$, and there is no need to compute $P_{\mathbf{m} / M + \mathbf{n}_{\mathbf{q}}(\mathbf{x}) / N}(\mathbf{x})$ for such terms. If we perform the Taylor sum calculation only for a constant number of nonvanishing terms, then the total number of needed weights reduces to $O(W + \log W)$. We can apply the technique used in \autoref{ss:overviewandpartitioning} to detect the nonvanishing terms from the input $\mathbf{x}$.
We split all $M$-knots lying in the $N$-cube $\mathbf{C}_{\mathbf{0}}$ into a disjoint union of $3^{d}$ sets \begin{align*} \mathbf{M}_{\mathbf{s}} = \{\tfrac{\mathbf{m}}{M}: \mathbf{m} \in \left(\mathbf s + (3\mathbb{Z})^d\right) \cap \mathbf{C}_{\mathbf{0}}\},\quad \mathbf{s} \in \{0, 1, 2\}^d. \end{align*} The $M$-patches associated with the $M$-knots in $\mathbf{M}_{\mathbf{s}}$ are disjoint. We call their union the domain of $\mathbf{M}_{\mathbf{s}}$. If $\mathbf{x} - \tfrac{\mathbf{n}_{\mathbf{q}}(\mathbf{x})}{N}$ lies in the domain of $\mathbf{M}_{\mathbf{s}}$, then there is exactly one $M$-knot $\tfrac{\mathbf{m}_{\mathbf{q}, \mathbf{s}}(\mathbf{x})}{M}$ such that $\mathbf{x} - \tfrac{\mathbf{n}_{\mathbf{q}}(\mathbf{x})}{N}$ lies in the $M$-patch associated with $\tfrac{\mathbf{m}_{\mathbf{q}, \mathbf{s}}(\mathbf{x})}{M}$. We can rewrite \eqref{eq:parttaylor} as \begin{align}\label{eq:nologform} \widetilde{f}_{\mathbf{q}}(\mathbf{x}) &= \sum_{\mathbf{s} \in \{0, 1, 2\}^d} \left[\widetilde{f}_{\mathbf{q}, \mathbf{s}}(\mathbf{x}) \sum_{\tfrac{\mathbf{m}}{M} \in \tfrac{\mathbf{n}_{\mathbf{q}}(\mathbf{x})}{N} + \mathbf{M}_{\mathbf{s}}} \phi\left(M \left(\mathbf{x} - \tfrac{\mathbf{n}_{\mathbf{q}}(\mathbf{x})}{N}\right) - \mathbf{m}\right) \right]. \end{align} Here $\widetilde{f}_{\mathbf{q}, \mathbf{s}}(\mathbf{x})$ is a function that calculates $P_{\mathbf{m}_{\mathbf{q}, \mathbf{s}}(\mathbf{x}) / M + \mathbf{n}_{\mathbf{q}}(\mathbf{x}) / N}(\mathbf{x})$ if $\mathbf{x} - \tfrac{\mathbf{n}_{\mathbf{q}}(\mathbf{x})}{N}$ lies in the domain of $\mathbf{M}_{\mathbf{s}}$, and outputs some garbage value otherwise. We also require that $\widetilde{f}_{\mathbf{q}, \mathbf{s}}(\mathbf{x})$ computes an approximation of a Taylor series partial sum only once. The total number of partial sums computed by the network implementing $\widetilde{f}_{\mathbf{q}}(\mathbf{x})$ in the form \eqref{eq:nologform} is therefore reduced to $3^{d}$.
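The combinatorics behind this splitting can be confirmed by a small script (an illustration only; the dimension and grid size are arbitrary choices, and separation by at least 3 grid steps in the sup-norm stands in for disjointness of the associated patches of radius one grid step):

```python
from itertools import product

d, K = 2, 9                         # illustrative dimension and grid size
knots = list(product(range(K + 1), repeat=d))

# Split grid knots into 3**d classes by coordinate residues mod 3.
classes = {s: [m for m in knots if all(mi % 3 == si for mi, si in zip(m, s))]
           for s in product(range(3), repeat=d)}

# Every knot lands in exactly one class ...
assert sum(len(c) for c in classes.values()) == len(knots)

# ... and within a class, distinct knots are >= 3 apart in the sup-norm,
# so patches of radius 1 around them cannot overlap.
for c in classes.values():
    for a in c:
        for b in c:
            if a != b:
                assert max(abs(x - y) for x, y in zip(a, b)) >= 3
```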
The total number of weights needed to implement $\widetilde{f}_{\mathbf{q}}(\mathbf{x})$ is thus reduced from $O(W \log W)$ to $O(W)$. To compute such $\widetilde{f}_{\mathbf{q}, \mathbf{s}}(\mathbf{x})$ we only need to determine the approximated Taylor coefficients for $\tfrac{\mathbf{m}_{\mathbf{q}, \mathbf{s}}(\mathbf{x})}{M} + \tfrac{\mathbf{n}_{\mathbf{q}}(\mathbf{x})}{N}$ among all the coefficients. For each $\tfrac{\mathbf{m}}{M} \in \mathbf{M}_{\mathbf{s}}$ we construct a function $\widehat{w}_{\mathbf{s}, \mathbf{m}}(\mathbf{x})$ that equals 1 on the $M$-patch associated with $\tfrac{\mathbf{m}}{M}$ and vanishes on the other patches of the domain of $\mathbf{M}_{\mathbf{s}}$. Knowing the values $\widehat{w}_{\mathbf{s}, \mathbf{m}}(\mathbf{x} - \tfrac{\mathbf{n}_{\mathbf{q}}(\mathbf{x})}{N})$, we can clearly select the Taylor coefficients for $\tfrac{\mathbf{m}_{\mathbf{q}, \mathbf{s}}(\mathbf{x})}{M} + \tfrac{\mathbf{n}_{\mathbf{q}}(\mathbf{x})}{N}$ from all the Taylor coefficients computed by the network. \begin{remark}\label{rm:nolog} A similar reasoning can be applied to the case $p = \tfrac{r}{d}$. In this case we do not consider an $M$-grid at all, but we can still split the $N$-grid into $3^d$ disjoint sets and compute the approximated Taylor sum once for each set. In this case the weight assignment map is continuous and even linear in $f$. \end{remark} \section{Theorem \ref{th:constwidth}: proof details}\label{sec:proofconstwidth} We follow the network construction used in the proof of Theorem \ref{th:deepphase} and described in Subsections \ref{ss:overviewandpartitioning} and \ref{ss:singlegrid}. We want to show that this construction can be realized within a ReLU network of width $2d+10.$ As explained in Section \ref{sec:adaptivity}, we slightly modify the construction so that we don't update the Taylor coefficients at new $M$-patches, but rather compute them afresh. This will give a slight increase in the size of the network.
Accordingly, we define the parameters $N,M$ in terms of the required accuracy $\epsilon$ rather than the number of weights: specifically, we set $M=\epsilon^{-1/r}$ and $N=\epsilon^{-1/(2r)}.$ Following \cite{yaropt}, we think of the width-$(2d+10)$ network as $2d+10$ ``channels'' that are interconnected and can exchange information. We reserve $d$ channels for passing forward the scalar components of the input vector $\mathbf x$ and one channel for accumulating the approximation $\widetilde{f}(\mathbf x)$. The other channels are used for intermediate computations. The first step in computing the approximation $\widetilde f(\mathbf x)$ is the finite decomposition \eqref{eq:qdecomp} of $\widetilde f$ over the $\mathbf q$-subgrids. The decomposition can be implemented in the width-$(2d+10)$ network in a serial fashion, so we only need to consider the computation of a single term $\widetilde{w}_{\mathbf{q}}(\mathbf{x}) \widetilde{f}_{\mathbf{q}}(\mathbf{x})$. The weight $\widetilde{w}_{\mathbf{q}}(\mathbf{x})$ is just a linear combination of $O(N^d)$ functions $\phi(N\mathbf x-\mathbf n),$ and $\phi$ can be computed by a constant-size chain of linear and ReLU operations (see \cite[Section~4.2]{yaropt}). Thus, $\widetilde{w}_{\mathbf{q}}(\mathbf{x})$ can be computed by a subnetwork using just 2 channels and depth $O(\epsilon^{-d/(2r)}).$ On the other hand, we will show below that $\widetilde{f}_{\mathbf{q}}(\mathbf{x})$ can be computed by a subnetwork using $d+8$ channels and depth $O(\epsilon^{-d/(2r)}\log(1/\epsilon)).$ We can then pass the values $\widetilde{w}_{\mathbf{q}}(\mathbf{x})$ and $\widetilde{f}_{\mathbf{q}}(\mathbf{x})$ to a third subnetwork computing an $O(\epsilon)$-approximation to the product $\widetilde{w}_{\mathbf{q}}(\mathbf{x}) \widetilde{f}_{\mathbf{q}}(\mathbf{x})$. This approximate product can be computed by a width-4 subnetwork of depth $O(\log(1/\epsilon))$ (see \cite[Proposition~3]{yarsawtooth}).
Thus the total computation of the term $\widetilde{w}_{\mathbf{q}}(\mathbf{x}) \widetilde{f}_{\mathbf{q}}(\mathbf{x})$, and hence of the whole approximation $\widetilde f(\mathbf x)$, can be done with the necessary accuracy $\epsilon$ within the width-$(2d+10)$ network of depth $L=O(\epsilon^{-d/(2r)}\log(1/\epsilon)).$ By inverting this relation, we get $\epsilon = O(L^{-2r/d}\log^{2r/d}L),$ as desired. We return now to the computation of $\widetilde{f}_{\mathbf{q}}(\mathbf{x})$. It is based on the expansion \eqref{eq:parttaylor} and can be performed as described in Subsection \ref{ss:singlegrid}. We now examine the individual steps and how they can be implemented in our fixed-width network. \begin{enumerate} \item The $N$-knot positions $\mathbf n_{\mathbf q}(\mathbf x)$ associated with $\mathbf x$ are computed using a linear combination of $O((M/N)^d)$ functions of the form $\phi(N\mathbf x-\mathbf n_k)$. This computation can be performed in a subnetwork of width 2 and depth $O(\epsilon^{-d/(2r)})$. We reserve $d$ channels to pass forward the scalar components of $\mathbf n_{\mathbf q}(\mathbf x)$. Additionally, we reserve one channel for passing forward the encoding weight corresponding to this $\mathbf n_{\mathbf q}(\mathbf x)$. The encoding weight gets transformed as it passes along the network, and bits get decoded from it. Three additional channels are sufficient for bit decoding (see \cite{yaropt} for a description of the decoding procedure). \item We traverse the $O((M/N)^d)$ $M$-knots of the $N$-patch corresponding to $\mathbf n_{\mathbf q}$ and decode from the encoding weight the Taylor coefficients of degree up to $\lceil r\rceil-1$ at these knots. It is sufficient to know these coefficients with precision $O(\epsilon),$ so each Taylor coefficient can be encoded by $K_{\max}=O(\log(1/\epsilon))$ bits $\{b_k\}_{k=0}^{K_{\max}}$ and reconstructed by accumulating the linear combination $\sum_{k=0}^{K_{\max}} 2^{-k}b_k$.
Thus, the total required number of bits in the encoding weight is $O(\epsilon^{-d/(2r)}\log(1/\epsilon))$. Also, all the necessary coefficients can be reconstructed using $O(\epsilon^{-d/(2r)}\log(1/\epsilon))$ layers of width 4. \item At each $M$-knot $\mathbf m/M+\mathbf n_{\mathbf q}(\mathbf x)/N$ in the $N$-patch, we compute the respective Taylor polynomial $P_{\mathbf m/M+\mathbf n_{\mathbf q}(\mathbf x)/N}(\mathbf x)=\sum_{\mathbf k:|\mathbf k|\le \lceil r\rceil-1}a_{\mathbf k}(\mathbf x-(\mathbf m/M+\mathbf n_{\mathbf q}(\mathbf x)/N))^{\mathbf k}$. The values of $\mathbf x$ and $\mathbf n_{\mathbf q}(\mathbf x)$ are provided from the reserved channels, and $\mathbf m$ is defined in the network weights. We don't need to know all the coefficients at once, since the polynomial can be computed serially, one monomial after another and one multiplication after another. To ensure accuracy $\epsilon$, each multiplication requires depth $O(\log(1/\epsilon))$ and width $4$. The total polynomial can then be accumulated using a subnetwork of depth $O(\log(1/\epsilon))$ and width $5$. \item Computation of the values $\phi\big(M (\mathbf{x} - \tfrac{\mathbf{n}_{\mathbf{q}}(\mathbf{x})}{N}) - \mathbf{m}\big)$ can be performed in 2 channels using $O(\epsilon^{-d/(2r)})$ layers in total. \item Once the factors are computed, each product $\phi\big(M (\mathbf{x} - \tfrac{\mathbf{n}_{\mathbf{q}}(\mathbf{x})}{N}) - \mathbf{m}\big) P_{\mathbf{m} / M + \mathbf{n}_{\mathbf{q}}(\mathbf{x}) / N}(\mathbf{x})$ can be computed with accuracy $O(\epsilon)$ in a subnetwork of width $4$ with $O(\log(1/\epsilon))$ layers, which gives $O(\epsilon^{-d/(2r)}\log(1/\epsilon))$ layers in total. \end{enumerate} Summarizing, we see that the computation of $\widetilde f_{\mathbf q}(\mathbf x)$ can be implemented with accuracy $O(\epsilon)$ in a subnetwork occupying $d+8$ channels and spanning $O(\epsilon^{-d/(2r)}\log(1/\epsilon))$ layers, as claimed.
\section{Theorem \ref{th:deepsecond}: proof}\label{sec:deepsecondproof} We generally follow the proof of Theorem \ref{th:deepphase} given in Sections \ref{sec:phasediag} and \ref{sec:proofdeepphase}, and adapt it to the new setting. We start by reducing the approximation by $\sigma$-networks to deep polynomial approximations. We show that the ReLU activation function can be efficiently approximated by iterated polynomials, which allows us to reproduce some parts of the proof of Theorem \ref{th:deepphase} simply by approximating the ReLU. However, other parts, in particular the decoder subnetwork and the selection of the encoding weight, will require more significant changes. \paragraph{Step 1: Reduction to polynomial approximation.} \begin{lemma} Suppose that the activation function $\sigma$ has a point $x_0$ where the second derivative $\tfrac{d^2\sigma}{dx^2}(x_0)$ exists and is nonzero. Then, for any multivariate polynomial $u$, there exists a network architecture such that the polynomial $u$ can be approximated with any accuracy on any bounded set by a $\sigma$-network with this architecture by suitably assigning the weights. \end{lemma} \begin{proof} For $u(x)=x^2$, the desired $\sigma$-network is $$\widetilde u_\delta(x)=(\tfrac{d^2\sigma}{dx^2}(x_0))^{-1}\tfrac{1}{\delta^2}\big(\sigma(x_0+x\delta)+\sigma(x_0-x\delta)-2\sigma(x_0)\big)$$ with a small $\delta$. For any other polynomial, the network can be constructed by using $\widetilde u_\delta$, the polarization identity $xy=\tfrac{1}{2}((x+y)^2-(x-y)^2),$ and linear operations. \end{proof} In view of this lemma, in the sequel we will treat $\sigma$-networks as if capable of exactly implementing any polynomial using some finite architecture. Also, we note that under our assumption on the activation functions and in contrast to ReLU-networks, multiplications can be implemented with any accuracy by fixed-size subnetworks. 
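The lemma's second-difference formula is easy to check numerically, e.g. for the standard sigmoid at $x_0=1$ (the choices of $\sigma$, $x_0$ and $\delta$ below are purely illustrative):

```python
import math

def sigma(x):                      # standard sigmoid; any sigma with sigma''(x0) != 0 works
    return 1.0 / (1.0 + math.exp(-x))

x0 = 1.0
s = sigma(x0)
d2 = s * (1 - s) * (1 - 2 * s)     # closed form for sigma''(x0), nonzero at x0 = 1

def u_tilde(x, delta=1e-3):
    # (sigma''(x0))^{-1} delta^{-2} (sigma(x0 + x*delta) + sigma(x0 - x*delta) - 2*sigma(x0))
    return (sigma(x0 + x * delta) + sigma(x0 - x * delta) - 2 * s) / (d2 * delta ** 2)

# the second difference isolates the x^2 term of the Taylor expansion at x0
err = max(abs(u_tilde(x) - x * x) for x in [k / 50 - 1 for k in range(101)])
assert err < 1e-4
```

Products are then obtained from squares via the polarization identity $xy=\tfrac12((x+y)^2-(x-y)^2)$, exactly as stated in the proof.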
\paragraph{Step 2: Fast polynomial approximation of thresholds and ReLU.} Consider the polynomial $u(x)=\tfrac{1}{2}x(3-x^2)$, which in particular has the following properties: \begin{enumerate} \item $u(0)=0$ and $u(\pm 1)=\pm 1$; \item $u$ is monotone increasing on $[-1,1]$; \item $\tfrac{du}{dx}(\pm 1)=0$. \end{enumerate} Let $u_n$ be the $n$'th iterate of $u$: \begin{equation}\label{eq:un}u_n=\underbrace{u\circ\ldots \circ u}_{n}. \end{equation} \begin{lemma}\label{th:un}\hfill \begin{enumerate} \item $|u_n(x)-\operatorname{sgn}(x)|\le |\operatorname{sgn}(x)-x|^{2^{n/2}}$ for any $x\in[-1,1].$ \item $|xu_n(x)-|x||\le 2^{-n/2}$ for any $x\in[-1,1].$ \end{enumerate} \end{lemma} \begin{proof} 1. Make the change of variables $x=v(y)=1-y$ and let $\widetilde u=v^{-1}\circ u\circ v$. Then $\widetilde u(y)=\tfrac{1}{2}y^2(3-y).$ It is easy to check that $\widetilde u(y)\le y^{\sqrt{2}}$ for any $y\in[0,1].$ Since both $\widetilde u(y)$ and $y^{\sqrt{2}}$ are monotone increasing on $[0,1],$ there is a similar inequality for their $n$'th iterates: $\widetilde u_n(y)\le y^{2^{n/2}}.$ This gives the desired bound for $x\in[0,1]$. The bound for $x\in[-1,0]$ follows by symmetry. 2. By Statement 1, $|xu_n(x)-|x||\le |x||\operatorname{sgn}(x)-x|^{2^{n/2}}\le 2^{-n/2}$ for $x\in[-1,1].$ \end{proof} The lemma implies, in particular, that a size-$O(n)$ $\sigma$-network can provide an approximation of accuracy $2^{-n/2}$ for the functions $|x|$ and $x_+$ on the segment $[-1,1]$ . The ReLU $x_+$ is approximated by $\tfrac{1}{2}(xu_n(x)+x)$. 
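Both statements of Lemma \ref{th:un} can be verified directly; a minimal sketch with $n=20$ iterations:

```python
def u(x):
    return 0.5 * x * (3 - x * x)

def u_n(x, n):
    # n-fold iterate of u; u maps [-1, 1] into itself, so the iteration is stable
    for _ in range(n):
        x = u(x)
    return x

def relu_approx(x, n):
    # x_+ approximated by (x*u_n(x) + x)/2 on [-1, 1]
    return 0.5 * (x * u_n(x, n) + x)

n = 20
xs = [k / 500 - 1 for k in range(1001)]
# Statement 2 of the lemma: |x*u_n(x) - |x|| <= 2^{-n/2}
assert max(abs(x * u_n(x, n) - abs(x)) for x in xs) <= 2 ** (-n / 2)
assert max(abs(relu_approx(x, n) - max(x, 0.0)) for x in xs) <= 2 ** (-n / 2)
```

The doubly exponential convergence in $n$ is what makes the size-$O(n)$ approximation of ReLU possible.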
\paragraph{Step 3: Reduction to $\widetilde{f}_{\mathbf{q}}$.} In the original proof of faster rates for ReLU given in Section \ref{sec:proofdeepphase}, the first step was to represent the approximation $\widetilde f$ by a finite expansion \eqref{eq:qdecomp} over $3^d$ subgrids indexed by $\mathbf q \in \{0,1,2\}^d$: \begin{align*} \widetilde{f}(\mathbf{x}) &= \sum_{\mathbf{q} \in \{0,1,2\}^d} \widetilde{w}_{\mathbf{q}}(\mathbf{x}) \widetilde{f}_{\mathbf{q}}(\mathbf{x}). \end{align*} In the original proof, the ``filtering functions'' $\widetilde{w}_{\mathbf{q}}$ were linear combinations of $O(N^d)$ shifted and rescaled piecewise linear ``spike'' functions $\phi$ (see Prop. \ref{prop:linear_interpolation}). The function $\phi$ can be constructed using several linear and ReLU operations (see \cite[Section~4.2]{yaropt}). We observe now that, using Lemma \ref{th:un}, we can very efficiently approximate the spike functions $\phi$ and then the full filtering functions $\widetilde{w}_{\mathbf{q}}$ by polynomials, simply by approximating each ReLU by the polynomial $\tfrac{1}{2}(xu_n(x)+x)$. Indeed, such an approximation of $\phi$ has accuracy $O(2^{-n/2})$ for a size-$O(n)$ $\sigma$-network (propagation of the error in the computation can be controlled in the standard way, using the Lipschitz continuity of ReLU). We need to remember, however, that the approximation in Lemma \ref{th:un} is valid only on the segment $[-1,1],$ while the shifted and rescaled spike $\phi(N\mathbf x-\mathbf n)$ requires ReLUs to act on a domain of size $O(N)$ if $\mathbf x\in[0,1]^d.$ The domain adaptation can be achieved simply by rescaling the ReLU using the identity $(Nx)_+=Nx_+$. As a result, by adding approximations for all the spikes, we can approximate $\widetilde{w}_{\mathbf{q}}$ by a $\sigma$-network of size $O(nN^d)$ and depth $O(n)$ with uniform accuracy $O(2^{-n/2}N^{d+1})$.
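To illustrate this polynomial substitution, the following sketch approximates a piecewise linear hat function by replacing each of its ReLUs with the rescaled polynomial approximation. Taking the hat $\max(0,1-|x|)$ as a stand-in for the spike $\phi$ is our simplifying assumption for illustration (the actual $\phi$ is specified in Prop. \ref{prop:linear_interpolation}):

```python
def u_n(x, n=20):
    for _ in range(n):
        x = 0.5 * x * (3 - x * x)
    return x

def relu_poly(x, scale, n=20):
    # domain adaptation via x_+ = scale * (x/scale)_+, valid for |x| <= scale
    t = x / scale
    return scale * 0.5 * (t * u_n(t, n) + t)

def spike_poly(x):
    # hat(x) = (x+1)_+ - 2*x_+ + (x-1)_+, each ReLU replaced by its
    # polynomial approximation rescaled to the domain [-4, 4]
    return relu_poly(x + 1, 4) - 2 * relu_poly(x, 4) + relu_poly(x - 1, 4)

def spike(x):
    return max(0.0, 1 - abs(x))

err = max(abs(spike_poly(x) - spike(x)) for x in [k / 250 - 2 for k in range(1001)])
assert err < 0.01       # each ReLU contributes at most scale * 2^{-n/2} / 2 error
```

The per-ReLU error scales linearly with the domain size, which is the source of the extra $N^{d+1}$ factor in the accuracy bound above.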
Let $N=W^{(1-\delta)/d}$ with some small $\delta>0$, and $n=c\log_2 W$ with some $c>2\tfrac{(1-\delta)(d+1)+r}{d}$. Then the accuracy of the $\widetilde{w}_{\mathbf{q}}$ network is within the desired bound $\epsilon=O(W^{-r/d})$, while the size of the $\widetilde{w}_{\mathbf{q}}$ network is $O(W^{1-\delta}\log W)$, also within the desired bound $W$. This shows that our task is essentially reduced to implementing the functions $\widetilde{f}_{\mathbf{q}}$. We examine now the ReLU implementation of $\widetilde{f}_{\mathbf{q}}$ summarized into 5 steps in the end of Section \ref{ss:singlegrid}. We observe that steps 3-5 (computation of the weighted sum of Taylor approximations in a $N$-patch) can be easily implemented with the activation function $\sigma$ instead of ReLU, by invoking again Lemma \ref{th:un} where necessary. In contrast, steps 1-2 require more significant modifications since they directly involve encoding weights that need to be handled with a high precision ($\sim W^{pd/r-1}$ bits). We first describe the suitable modification of step 2 (bit extraction), and then of step 1 (finding $\mathbf n_{\mathbf q}(\mathbf x)$ and the respective encoding weight). \paragraph{Step 4: Bit extraction.} The standard bit extraction procedure (see \cite{bartlett1998almost} and Fig. \ref{fig:approxdiscrete}) decodes a binary sequence from the encoding weight using threshold activation functions ($\lfloor\cdot\rfloor$) or their approximations by ReLU. In our present setting of polynomial approximation, we use instead a polynomial dynamical system. Specifically, consider the polynomial $v(x)=2-3x^2$. Consider the disjoint intervals $I_0=[\tfrac{1}{2},1], I_1=[-1,-\tfrac{1}{2}]$ and observe that they are contained in the interval $[-1,1]$ which is in turn contained in either of the images $v(I_0), v(I_1)$.
Consider a sequence $w_1,\ldots,w_n$ defined by $w_k=v(w_{k-1})$ with some initial value $w_1.$ \begin{lemma}\label{lm:binarypol} For any binary sequence $b_1,\ldots,b_n\in \{0,1\}$, there exists an interval $I\subset [-1,1]$ of length at least $6^{-n}$ such that for any initial value $w_1\in I$ we have $w_k\in I_{b_k}$ for all $k=1,\ldots,n$. \end{lemma} \begin{proof} The interval $I$ can be constructed by sequentially forming pre-images, $I^{(k-1)}=v^{-1}(I^{(k)})\cap I_{b_{k-1}}$, where $ k=n,n-1,\ldots,2,$ and $I^{(n)}=I_{b_{n}}.$ Then $I=I^{(1)};$ the lower bound on the length of $I$ follows since $|\tfrac{dv}{dx}|\le 6$ on $[-1,1].$ \end{proof} The lemma shows that we can decode a length-$n$ binary sequence by a $\sigma$-network of size $O(n)$ starting from an encoding weight defined with precision $6^{-n}$. In contrast to the original ($\lfloor\cdot\rfloor$-based) bit extraction, the values decoded in the present polynomial procedure contain some uncertainty: we only know that $w_k$ belongs to one of the intervals $I_0$ or $I_1$. However, this uncertainty is not important: first, we can reduce it to an arbitrary magnitude by small-size subnetworks implementing a polynomial $u_n$ from Lemma \ref{th:un}; second, by Proposition \ref{prop:adjacent} and Eq.\eqref{eq:sufcond}, some level of uncertainty in the Taylor coefficients $a_{\mathbf{m}, \mathbf{k}}$ is tolerable. \paragraph{Step 5: Computation of the encoding weight corresponding to a given input $\mathbf x$.} In the proof for ReLU networks, the position $\mathbf n_{\mathbf q}(\mathbf x)$ of the $N$-knot containing the given point $\mathbf x$, and the respective encoding weight, were determined {exactly} thanks to the ability of ReLU networks to exactly represent functions piecewise linear on the standard triangulation (see Proposition \ref{prop:constant_interpolation}).
This is no longer possible with general activation functions $\sigma$ or polynomials; any $\sigma$-network trying to determine the encoding weight will inevitably do it with some error. However, though the precision requirement for encoding weights is high, we can use part 1 of Lemma \ref{th:un} to bring this error to an acceptable level without substantially increasing the network size. Indeed, consider a particular $N$-knot $\tfrac{\mathbf n}{N}\in\mathbf N_{\mathbf q}$ and first construct a map $z_{\mathbf n}(\mathbf x)$ such that $z_{\mathbf n}(\mathbf x)\in [\tfrac{1}{2},1]$ for $\mathbf x$ belonging to the corresponding $N$-patch, while $z_{\mathbf n}(\mathbf x)\in [-1,-\tfrac{1}{2}]$ for $\mathbf x$ belonging to the other $N$-patches. Arguing as in Step 3, such a map can be implemented by a $\sigma$-network of size $O(\log W)$, by approximating the respective ReLU map. Next, let $z_{\mathbf n, n}=u_n\circ z_{\mathbf n},$ where $u_n$ is given in Eq.\eqref{eq:un}. Using Lemma \ref{th:un} with $n=O(\log D)$, we can ensure that $|z_{\mathbf n, n}(\mathbf x)-1|<7^{-D}$ on the $\mathbf n$'th patch while $|z_{\mathbf n, n}(\mathbf x)+1|<7^{-D}$ on the other patches. Here, $D$ corresponds to the number of iterations in Lemma \ref{lm:binarypol} and is proportional to the depth of the decoding subnetwork, i.e. $D\sim (M/N)^d\sim W^{pd/r-1}$ so that $\log D = O(\log W).$ We can now combine all the maps $z_{\mathbf n, n}$ into the map $Z(\mathbf x)=\tfrac{1}{2}\sum_{\mathbf n:\mathbf n/N\in\mathbf N_{\mathbf q}}(z_{\mathbf n, n}(\mathbf x)+1)w_{\mathbf n},$ where $w_{\mathbf n}$ is the desired encoding weight in the $\mathbf n$'th patch. By construction, for any $\mathbf x$ in the $\mathbf n$'th patch we have $|Z(\mathbf x)-w_{\mathbf n}|=O(N^d 7^{-D})$, which satisfies the accuracy requirement $6^{-D}$ of Lemma \ref{lm:binarypol}. 
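The backward interval construction from the proof of Lemma \ref{lm:binarypol} (Step 4) can be executed directly; the following sketch (the helper names are ours, and the bit sequence is arbitrary) selects an encoding weight by forming pre-images and then decodes the bits by iterating $v(x)=2-3x^2$:

```python
import math

def v(x):
    return 2 - 3 * x * x

INTERVALS = {0: (0.5, 1.0), 1: (-1.0, -0.5)}   # I_0 and I_1

def preimage(lo, hi, bit):
    # branch of v^{-1}([lo, hi]) lying inside I_bit; solve 2 - 3x^2 = y
    if bit == 0:   # positive branch, where v is decreasing
        return math.sqrt((2 - hi) / 3), math.sqrt((2 - lo) / 3)
    return -math.sqrt((2 - lo) / 3), -math.sqrt((2 - hi) / 3)

def encoding_interval(bits):
    # build I^{(1)} by backward pre-images, as in the proof of the lemma
    lo, hi = INTERVALS[bits[-1]]
    for b in reversed(bits[:-1]):
        lo, hi = preimage(lo, hi, b)
    return lo, hi

bits = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
lo, hi = encoding_interval(bits)
w = (lo + hi) / 2                   # any w in the interval decodes the sequence
decoded = []
for _ in bits:
    # at every step w lands in I_0 or I_1; anything else signals an error
    decoded.append(0 if w >= 0.5 else (1 if w <= -0.5 else -1))
    w = v(w)
assert decoded == bits
assert hi - lo >= 6.0 ** (-len(bits))   # length bound from the lemma
```

The interval shrinks by at most a factor of 6 per pre-image, which matches the $6^{-n}$ length bound and hence the precision requirement on the encoding weight.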
On the other hand, the size of the $\sigma$-network implementing $Z(\mathbf x)$ is $O(N^d\log W).$ Choosing $N\sim W^{(1-\delta)/d}$ with arbitrarily small $\delta>0,$ this size fits the available budget $W.$ \section{Expressiveness of networks with Lipschitz activation functions and slowly growing weights}\label{sec:weightbounds} In this section we clarify why, as mentioned in Section \ref{sec:otheractiv}, under mild assumptions on the growth of network weights, networks with any bounded Lipschitz activation function (in particular, the standard sigmoid $\sigma(x)=1/(1+e^{-x})$) can only achieve the approximation rates $p\le \tfrac{2r}{d}.$ This follows from existing upper bounds on the covering numbers for such networks, in particular \cite[Theorem 14.5]{anthony2009neural}. Specifically, consider a neural network with the following properties. Suppose that the network neurons have (possibly different) Lipschitz activation functions $\sigma$ such that $|\sigma(x)|\le b$ and $|\sigma(x)-\sigma(y)|\le a|x-y|$ for all $x,y\in\mathbb R$. Suppose that there is a constant $V>1/a$ such that for any weight vector $\mathbf w$ associated with a particular neuron, its $l^1$-norm $\|\mathbf w\|_1$ is bounded by $V$. Assume that the network has $L\ge 2$ layers, with connections only between adjacent layers, and has $W$ weights. Assume finally that the neurons in the first layer have non-decreasing activation functions. Let $F$ denote the family of functions on $[0,1]^d$ implementable by such a network. For any finite subset $S\subset [0,1]^d$ consider the restriction $F|_S$ as a subset of $\mathbb R^{|S|}$ equipped with the uniform norm $\|\cdot\|_\infty$. We define the \emph{covering number} $N_\infty(\epsilon, F, S)$ as the smallest number of $\epsilon$-balls in $\mathbb R^{|S|}$ covering the set $F|_S$. 
Then, for any integer $m>0$, we define the covering number $N_\infty(\epsilon, F, m)=\max_{S\subset [0,1]^d, |S|=m}N_\infty(\epsilon, F, S).$ We then have the following bound. \begin{theorem}[Theorem 14.5 of \cite{anthony2009neural}] $N_\infty(\epsilon,F,m)\le \big(\tfrac{4embW(aV)^L}{\epsilon (aV-1)}\big)^W.$ \end{theorem} To obtain the desired bound on approximation rates for H\"older balls $F_{r,d}$, we can now lower-bound $N_\infty(\epsilon,F,m)$ using the $\epsilon$-capacity of H\"older balls. Specifically, observe that the H\"older ball $F_{r,d}$ contains a set $\Phi_\epsilon$ of at least $M_\epsilon=2^{c_{r,d}\epsilon^{-d/r}}$ functions separated by $\|\cdot\|_\infty$-distance $4\epsilon$ (with some constant $c_{r,d}>0$). These functions can be constructed by a standard argument in which we choose in $[0,1]^d$ a grid $S_\epsilon$ of size $c_{r,d} \epsilon^{-d/r}$ (with a spacing $\sim \epsilon^{1/r}$), and then place a properly rescaled spike function with the sign $+$ or $-$ at each point of the grid. The functions of $\Phi_\epsilon$ are mutually $4\epsilon$-separated when restricted to the grid $S_\epsilon$. If our family $F$ of network-implementable functions can $\epsilon$-approximate any function from the balls $F_{r,d}$, then any $\epsilon$-net for $F|_{S_\epsilon}$ is a $2\epsilon$-net for $\Phi_\epsilon|_{S_\epsilon}$, and thus must contain at least $M_\epsilon$ elements. Hence, $M_\epsilon\le N_\infty(\epsilon,F,S_\epsilon)\le N_\infty(\epsilon,F,c_{r,d}\epsilon^{-d/r}),$ i.e. \begin{equation}\label{eq:ccrd} c_{r,d}\epsilon^{-d/r}\le W\log_2 \big(\tfrac{4e c_{r,d}\epsilon^{-d/r-1}bW(aV)^L}{aV-1}\big). \end{equation} Assuming that $1/\epsilon, W, L, V$ grow while the other parameters are held constant, this bound implies that $$\epsilon\ge c_{r,d,a,b}(WL)^{-r/d}\ln^{-r/d} V$$ with some $c_{r,d,a,b}>0.$ Now suppose that $V$ is a function of $W$, i.e. the magnitude of the weights is allowed to depend on the network size.
Suppose that the network achieves the approximation rate $p$, i.e. \begin{equation}\label{eq:ccw} \epsilon \le C_{r,d,a,b} W^{-p}. \end{equation} Since $L\le W$, comparing Eq.\eqref{eq:ccrd} with Eq.\eqref{eq:ccw}, we then find that \begin{equation}\label{eq:clb} \ln V\ge c'_{r,d,a,b} W^{pd/r-2}. \end{equation} Thus, the rates $p>\tfrac{2r}{d}$ require $V$ to grow very rapidly with $W$. This observation agrees with the main result of Section \ref{sec:otheractiv} -- Theorem \ref{th:sin} -- describing approximation with arbitrary rates $p$ by networks with a periodic activation function. In the proof of this theorem, the network weights are defined with the help of rapidly growing constants $a_k$ given in Eq.\eqref{eq:la}. In particular, we have $\log a_K\sim 2^K$ with $K\sim W^{1/2},$ which agrees with the lower bound \eqref{eq:clb}. \section{Theorem \ref{th:sin}: sketch of proof}\label{sec:sketchsin} We can assume without loss of generality that the period $T=2$ and $\max_{x\in\mathbb R}\sigma(x)=-\min_{x\in\mathbb R}\sigma(x)=1$ (these values can always be effectively adjusted in each neuron by rescaling the input and output weights). We divide the proof into three steps. \paragraph{Step 1: reduction to patch-encoders and patch-classifiers.} Recall the coarser partition on the scale $\tfrac{1}{N}$ and the finer partition on the scale $\tfrac{1}{M}$ used in the proofs of Theorems \ref{th:deepphase} and \ref{th:constwidth}. In those theorems, both $N$ and $M$ were $\sim W^a$ with some constant powers $a$. In contrast, we choose now $N=1$, and we'll set $M$ to grow much faster (roughly exponentially) with $W$: this will be possible thanks to the much more efficient decoding available with the $\sin$ activation.
Specifically, note first that we can implement an almost perfect approximation of the parity function $\theta: x\mapsto (-1)^{\lfloor x\rfloor}$ using a constant-size network, by computing $a\sigma(x)$ with a large $a$ and then thresholding the result at 1 and $-1$ using ReLU operations (the approximation only fails in small neighborhoods of the integer points). If the cube $[0,1]^d$ is partitioned into cubic $M$-patches, we can apply rescaled versions of $\theta$ coordinate-wise to create a binary dictionary of these patches. Specifically, we can construct a network of size $\sim d\log_2 M$ that maps a given $\mathbf x\in [0,1]^d$ to a size-$K$ binary sequence encoding the place of the patch $\Delta_M\ni\mathbf x$ in the cube $[0,1]^d$, with $K\sim d\log_2 M$. We call this network the \emph{patch-encoder}. Given a function $f\in F_{r,d},$ we approximate it by a function $\widetilde f$ which is constant in each $M$-patch. Suppose for simplicity and without loss of generality that the smoothness $r\le 1$; then this approximation has accuracy $\epsilon\sim M^{-r}$. Let $\widetilde f_{\Delta_M}$ be the value that the approximation returns on the patch $\Delta_M$. It is sufficient to define $\widetilde f_{\Delta_M}$ with precision $\sim M^{-r}$. Consider the binary expansion of $\widetilde f_{\Delta_M}$ that provides this precision: $\widetilde f_{\Delta_M}=-1+\sum_{k=0}^R \widetilde f_{\Delta_M,k}2^{-k},$ where $R\sim r\log_2 M$ and $\widetilde f_{\Delta_M,k}\in\{0,1\}$. Suppose that for each $k$ we can construct a network that maps each patch $\Delta_M$ to the corresponding bit $\widetilde f_{\Delta_M,k}$. Summing these \emph{patch-classifiers} with coefficients $2^{-k}$, we then reconstruct the full approximation $\widetilde f$. We have thus reduced the task to efficiently implementing an arbitrary binary classifier on the $M$-partition of $[0,1]^d.$ The patch-encoder constructed above efficiently encodes each $M$-patch by a binary $K$-bit sequence.
We can then think of the classifier as an assignment $A:\{0,1\}^K\to\{0,1\}$ that must be implemented by our network. We show below in Step 2 that this can be done by a size-$O(K)$ network, with the assignment encoded in a single weight $w_A$. The total number of network weights (including the patch-encoder and the patch-classifiers on all $R$ scales) can then be bounded by $W=O(KR),$ i.e. $W=O(rd\log_2^2 M)$. The relations $\epsilon\sim M^{-r}$ and $W\sim rd\log_2^2 M$ then yield $\epsilon\sim 2^{-c'W^{1/2}}$ (with $c'\sim\sqrt{r/d}$), as claimed in Eq.\eqref{eq:ratesin}. Note, however, that the proof strategy that we have described requires the network to have $R$ $f$-dependent encoding weights (one for each patch-classifier), while Statement 2 of the theorem claims a unique $f$-dependent weight. In Step 3, we will resolve this issue by showing that these $R$ weights can be decoded from a single weight with a subnetwork of size $O(R)$. To make these arguments fully rigorous, we need to handle the issue of our approximation to the parity function $\theta$ becoming invalid near the boundaries of the patches. This is done in Section \ref{sec:proofsin} using partitions of unity; the resulting complications do not affect the asymptotics. \paragraph{Step 2: implementation of a patch-classifier.} We explain now how an arbitrary assignment $A:\{0,1\}^K\to\{0,1\}$ can be implemented by a network of size $O(K)$ with a single encoding weight $w_A$. Let us define two sequences, $a_k$ and $l_k$: \begin{equation}\label{eq:la}l_1=\tfrac{1}{2}, \quad a_1=2,\quad l_k=\min(\tfrac{l_{k-1}}{2},\tfrac{l_{k-1}}{a_kc_\sigma}),\quad a_k=\tfrac{4}{l_{k-1}},\end{equation} where $c_\sigma$ is the Lipschitz constant of $\sigma$. Consider iterations $g_1\circ g_2\circ\ldots \circ g_K(w_*),$ in which each $g_k$ can be either the identity function $g_k(w)=w$, or $g_k(w)=\sigma (a_k w),$ with some initial value $w_*$.
For each $\mathbf z\in\{0,1\}^K$, let us define $H_{K, w_*}(\mathbf z)$ as the $\operatorname{sgn}$ of the value obtained by substituting the respective functions: $$ H_{K, w_*}(\mathbf z)=\operatorname{sgn}\circ\begin{cases}\operatorname{Id},& z_1=0,\\ \sigma(a_1\cdot), & z_1=1\end{cases}\circ \begin{cases}\operatorname{Id},& z_2=0,\\ \sigma(a_2\cdot), & z_2=1\end{cases}\circ \ldots\circ \begin{cases}\operatorname{Id},& z_K=0,\\ \sigma(a_K\cdot), & z_K=1\end{cases} (w_*) $$ \begin{lemma}\label{lem:sindecoding} For any assignment $A:\{0,1\}^K\to \{0,1\}$ there exists $w_A\in\mathbb R$ such that $H_{K,w_A}(\mathbf z)=A(\mathbf z)$ for all $\mathbf z\in \{0,1\}^K$. \end{lemma} \begin{proof} Proof by induction on $K$, but of a slightly sharper statement: the desired values $w_A$ not only exist, but fill (at least) an interval $I_{K}\subset [-1,1]$ of length $l_K$. The base $K=1$ follows immediately from the 2-periodicity of $\sigma$ and the hypothesis that $\sigma(x)>0$ for $x\in [0,1]$ while $\sigma(x)<0$ for $x\in[1,2]$. Suppose we have proved the statement for $K-1$. Given an assignment $A:\{0,1\}^K\to \{0,1\}$, consider it as a pair of assignments $A_0:\{0,1\}^{K-1}\to \{0,1\},A_1:\{0,1\}^{K-1}\to \{0,1\}.$ By the induction hypothesis, we can find two intervals $I_{K-1}^{(0)}$ and $I_{K-1}^{(1)}$ of length $l_{K-1}$ such that $H_{K-1,w_0}(\mathbf z)=A_0(\mathbf z)$ and $H_{K-1,w_1}(\mathbf z)=A_1(\mathbf z)$ for all $w_0\in I_{K-1}^{(0)},w_1\in I_{K-1}^{(1)}$ and $\mathbf z\in \{0,1\}^{K-1}$. Consider the set \begin{equation}\label{eq:iw}I=\{w\in\mathbb R:w\in I_{K-1}^{(0)}\text{ and }\sigma(a_K w)\in I_{K-1}^{(1)}\}.\end{equation} Then for any $w\in I$, we have the desired property $H_{K,w}(\mathbf z)=A(\mathbf z),\forall \mathbf z\in \{0,1\}^K$. We need to show now that $I$ contains an interval of length $l_K$.
Observe that, by the relation $a_K=\tfrac{4}{l_{K-1}}$ from Eq.\eqref{eq:la}, the length $l_{K-1}$ of the interval $I^{(0)}_{K-1}$ is twice as large as the period $\tfrac{2}{a_K}$ of the function $\sigma(a_K\cdot).$ Using the assumption that $\max \sigma(x)=-\min\sigma(x)=1$, we see that the function $\sigma(a_K\cdot)$ attains both values 1 and -1 on its period. It follows then by continuity of $\sigma$ that there exists a point $w'$ at a distance not more than $\tfrac{l_{K-1}}{4}$ from the center of $I^{(0)}_{K-1}$ such that $\sigma(a_Kw')$ attains any given value from the interval $[-1,1]$. Let this value be the center $w''$ of the interval $I_{K-1}^{(1)}$. Since the function $\sigma(a_K\cdot)$ is Lipschitz with constant $a_Kc_\sigma$, we have $|\sigma(a_Kw)-\sigma(a_Kw')| < \tfrac{l_{K-1}}{2}$ for any $w$ such that $|w-w'|<\tfrac{l_{K-1}}{2a_Kc_{\sigma}}$. Then it follows from the definition $l_K=\min(\tfrac{l_{K-1}}{2},\tfrac{l_{K-1}}{a_Kc_\sigma})$ that the length-$l_K$ interval centered at $w'$ is contained in $I$ given by Eq.\eqref{eq:iw}. \end{proof} This lemma shows that the network can implement any classifier $A$ if the network can somehow branch into applying either $\operatorname{Id}$ or $\sigma(a_k\cdot)$ depending on the signal bit $b\in\{0,1\}$ that is output by the patch-encoder subnetwork. This branching can be easily implemented by forming the linear combination $(1-b)x+b\sigma(a_kx)$, and also noting that a product of any $x\in\{0,1\}$ and $y\in[-1,1]$ admits the ReLU implementation $xy= \max(0,2x+y-1)-x$. \paragraph{Remark.} The construction in Lemma \ref{lem:sindecoding} can be interpreted as a dichotomy-based lookup if we think of the assignment $A$ as a binary sequence of size $S=2^K.$ In each of the network steps we divide the sequence in half, ultimately locating the desired bit in $K\sim\log_2 S$ steps. 
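For a small instance, the realizability of all $2^{2^K}$ assignments can be verified by brute force. The sketch below takes $K=2$ and the illustrative choice $\sigma(x)=\sin(\pi x)$, so $c_\sigma=\pi$ and $a_1=2$, $a_2=8$ per Eq.\eqref{eq:la}; identifying the $\operatorname{sgn}$ output $+1$ with the bit 1 is our convention:

```python
import math

def sigma(x):                      # period-2 activation with max 1 and min -1
    return math.sin(math.pi * x)

AK = [2.0, 8.0]                    # a_1, a_2 from the recursion with c_sigma = pi

def H(w, z):
    # apply g_K first, then g_{K-1}, ..., g_1; read the sign as a bit
    for k in reversed(range(len(z))):
        if z[k] == 1:
            w = sigma(AK[k] * w)
    return 1 if w > 0 else 0

zs = [(0, 0), (0, 1), (1, 0), (1, 1)]
patterns = set()
for i in range(-1000, 1001):       # grid finer than the interval length l_2
    w = i / 1000.0
    patterns.add(tuple(H(w, z) for z in zs))
# every assignment {0,1}^2 -> {0,1} is realized by some encoding weight w
assert len(patterns) == 16
```

The grid step $10^{-3}$ is well below the guaranteed interval length $l_2=\tfrac{1}{16\pi}$, so every realizable assignment is hit by some grid point.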
We can compare this with the less efficient bit extraction procedure of \cite{bartlett1998almost} (for which it is however sufficient to only have the ReLU activation in the network). In this latter procedure, the bits are extracted from the encoding weight one-by-one, and so the lookup requires $\sim S$ steps. \paragraph{Step 3: ensuring a unique $f$-dependent weight.} Steps 1 and 2 have shown that the desired network can be constructed using at most $R$ $f$-dependent weights, say $w^*_1,\ldots,w^*_R$. We observe now that these values can be approximated arbitrarily well by a serial application of the rescaled activation $\sigma$: \begin{lemma} Let $w^*_1,\ldots,w^*_R\in[-1,1].$ Fix $a>0$ and consider the sequence $w_1',\ldots,w_R'$ defined by $w_k'=\sigma(aw'_{k-1})$ with some initial $w_1'$. Then we can find an initial $w_1'\in[-1,1]$ such that $|w^*_k-w_k'|<\tfrac{2}{a}$ for all $k=1,\ldots,R.$ \end{lemma} \begin{proof} We use induction on $R$. The base $R=1$ is trivial. Suppose we have already proved the statement for $R-1$. Let $\widetilde{w}$ be the corresponding initial value for the sequence $\widetilde{w},\sigma(a\widetilde{w}),\sigma(a\sigma(a\widetilde{w})),\ldots$ approximating the sequence $w^*_2,\ldots,w^*_R$. The function $\sigma(a\cdot)$ has period $\tfrac{2}{a},$ and attains all values from $[-1,1]$ on any interval of this length. It follows that we can find $w_1'$ such that $\sigma(aw_1')=\widetilde{w}$ and $|w^*_1-w_1'|<\tfrac{2}{a}$. This gives the desired $w_1'$. \end{proof} Thus, by taking some sufficiently large ($W$-dependent) $a$, we can generate all the $R$ encoding weights $w^*_k$ with sufficient accuracy from a single weight by using an $f$-independent network of complexity $O(R)$, which is within the desired bound \eqref{eq:ratesin}. In Figure \ref{fig:deepFourier} we show the overall network layout.
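The backward construction implicit in this proof can be sketched numerically, with the illustrative choices $\sigma(x)=\sin(\pi x)$ and $a=100$ (small enough that double precision survives the forward iteration; the helper name is ours):

```python
import math

a = 100.0                          # rescaling factor; larger a gives better accuracy

def sigma(x):
    return math.sin(math.pi * x)   # period-2 activation attaining all of [-1, 1]

def seed_weight(targets):
    # backward pass: at each step pick the solution of sigma(a*w) = t
    # nearest to the current target (one exists within a period, i.e. within 2/a)
    t = targets[-1]
    for target in reversed(targets[:-1]):
        best = None
        for m in range(int(a * target / 2) - 2, int(a * target / 2) + 3):
            for theta in (math.asin(t), math.pi - math.asin(t)):
                w = (theta + 2 * math.pi * m) / (math.pi * a)
                if best is None or abs(w - target) < abs(best - target):
                    best = w
        t = best
    return t

targets = [0.3, -0.7, 0.95, 0.1]
w = seed_weight(targets)
for t in targets:                  # the forward pass reproduces each target
    assert abs(w - t) < 2 / a
    w = sigma(a * w)
```

In the actual theorem $a$ grows with $W$, so the single seed weight must be specified with precision far beyond double; the sketch only checks the mechanism of the lemma.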
\begin{figure} \centering \input{deep_Fourier_flowchart.tex} \caption{The network layout overview for the ``deep Fourier'' approximation (see Section \ref{sec:sketchsin}).} \label{fig:deepFourier} \end{figure} \paragraph{Information in the encoding weight.} Let us make a rough estimate of the amount of information contained in the constructed encoding weight. First, we can estimate it as $R$ (the number of patch classifiers) times the information in the weight $w_A$ corresponding to a single patch classifier (see Lemma \ref{lem:sindecoding}). We have $R\sim r\log_2 M\sim \log_2(1/\epsilon).$ The information in $w_A$ can be roughly estimated as $-\log_2 l_K$, where $l_K$ is the length of the interval $I_K$ appearing in the proof of Lemma \ref{lem:sindecoding}. From relations \eqref{eq:la}, for small $l_{k-1}$, by combining $l_k=\tfrac{l_{k-1}}{a_kc_\sigma}$ and $a_k=\tfrac{4}{l_{k-1}}$ we get $l_k=\tfrac{l^2_{k-1}}{4c_\sigma}$, which leads to $-\log_2 l_K\sim 2^K.$ Since $K\sim d\log_2 M\sim \tfrac{d}{r}\log_2(1/\epsilon),$ this gives $-\log_2 l_K\sim \epsilon^{-d/r}$. Summarizing, the total information can be roughly estimated as $\epsilon^{-d/r}\log(1/\epsilon).$ \section{Theorem \ref{th:sin}: proof details}\label{sec:proofsin} Examining the sketch of proof given in Section \ref{sec:sketchsin}, we see that the only significant gap in the given argument is the treatment of boundaries of the patches. Namely, recall that we use approximations to the parity function $\theta(x)=(-1)^{\lfloor x\rfloor}.$ The approximations can be defined by a finite expression in terms of linear, ReLU and $\sin$ operations: $$\widetilde\theta_a(x)=\min(1, \max(-1, a\sigma(x))).$$ By taking $a$ large, we can make $\widetilde\theta_a$ equal $\theta$ outside some small neighborhood of $\mathbb Z$.
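With the illustrative choice $\sigma(x)=\sin(\pi x)$ (period 2, max 1), the clipped approximation $\widetilde\theta_a$ can be checked directly; the grid below stays at distance at least $0.01$ from the integers, where the two functions agree exactly:

```python
import math

def sigma(x):
    return math.sin(math.pi * x)   # illustrative period-2 activation

def theta(x):
    return (-1) ** math.floor(x)   # the parity function

def theta_approx(x, a=1e4):
    # min(1, max(-1, a*sigma(x)))
    return min(1.0, max(-1.0, a * sigma(x)))

xs = [k / 100 + 0.005 for k in range(-300, 300)]
# equality holds once a*|sigma(x)| >= 1, i.e. outside a 1/(pi*a)-neighborhood of Z
assert all(theta_approx(x) == theta(x) for x in xs if min(x % 1, 1 - x % 1) > 0.01)
```

Applied to the rescaled coordinates $2^u x_k$, this is exactly what the patch-encoding functions $g_{u,k}$ below compute: bit $u$ of the dyadic patch index, correct away from the patch boundaries.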
Now, recall that we choose patches $\Delta_M$ as cubes $[\tfrac{m_1}{M}, \tfrac{m_1+1}{M}]\times[\tfrac{m_2}{M}, \tfrac{m_2+1}{M}]\times\ldots\times[\tfrac{m_d}{M}, \tfrac{m_d+1}{M}].$ Assume without loss of generality that $M=2^U$ with some integer $U$. The patch-encoding functions $g_{u,k}:\mathbf x\mapsto\widetilde\theta_a(2^u x_k)$ (with $u=1,2,\ldots,U$ and $k=1,2,\ldots,d$) map the cubes $\Delta_M$ to the values $\pm 1$ everywhere except near the boundaries of these cubes. If we could slightly ``shrink'' the cubes $\Delta_M$ so that they were disjoint, we could adjust $a$ in $\widetilde\theta_a$ so that the functions $g_{u,k}$ were perfectly equal to $\pm 1$ on the whole cubes. The remaining construction of patch-classifying networks in Section \ref{sec:otheractiv} then becomes fully functional and yields the desired asymptotic relation \eqref{eq:ratesin}. Thus, we need to show how to reduce the problem to the case of disjoint patches. This can be done by using suitable filtering functions, similarly to the proofs of Theorems \ref{th:deepphase} and \ref{th:constwidth}. Fix some $a_0>1$ and consider the functions $\Psi_0, \Psi_1:\mathbb R\to [0,1]$ defined by $$\Psi_0(x)=\tfrac{1}{2}(1+\widetilde\theta_{a_0}(2Mx)),\quad \Psi_1=1-\Psi_0.$$ The functions $\Psi_0$ and $\Psi_1$ form a two-element partition of unity.
Furthermore, since $a_0>1,$ there is $\delta>0$ such that \begin{align} \Psi_0(x)=0&\text{ for }x\in(\tfrac{3}{4M}-\delta,\tfrac{3}{4M}+\delta)+\mathbb Z/M,\label{eq:psi}\\ \Psi_1(x)=0&\text{ for } x\in(\tfrac{1}{4M}-\delta,\tfrac{1}{4M}+\delta)+\mathbb Z/M.\label{eq:psi2} \end{align} Taking the product of the partitions of unity over the $d$ coordinates, we can write for $\widetilde f:[0,1]^d\to\mathbb R$: $$\widetilde f=\sum_{\mathbf q\in\{0,1\}^d}(\prod_{s=1}^d\Psi_{q_s})\widetilde f.$$ Thanks to Eqs.\eqref{eq:psi},\eqref{eq:psi2}, for each $\mathbf q\in\{0,1\}^d$, the filtering function $\prod_{s=1}^d\Psi_{q_s}$ vanishes in $[0,1]^d$ outside a $\tfrac{1}{M}$-grid of disjoint cubic patches, exactly as desired. We can then look for the approximation $\widetilde f$ in the form $$\widetilde f=\sum_{\mathbf q\in\{0,1\}^d}(\prod_{s=1}^d\Psi_{q_s})\widetilde f_{\mathbf q},$$ where $\widetilde f_{\mathbf q}$ needs to attain the required values only on the disjoint patches supporting $\prod_{s=1}^d\Psi_{q_s}$ and can be constructed as described in Section \ref{sec:otheractiv}. Having implemented these approximations $\widetilde f_{\mathbf q}$, the final approximation is obtained by implementing approximate products with the filters $\Psi_{q_s}$ and performing summation over $\mathbf q\in\{0,1\}^d$. As shown in \cite[Proposition~3]{yarsawtooth}, multiplication with accuracy $\epsilon$ requires a ReLU subnetwork with $O(\log(1/\epsilon))$ connections. This is asymptotically negligible compared to our bound for the total complexity of the patch-classifiers (which is $O(\log^2(1/\epsilon))$).
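The stated properties of $\Psi_0,\Psi_1$ can be verified numerically with the illustrative parameters $M=8$, $a_0=2$ and $\sigma(x)=\sin(\pi x)$ (the function names are ours):

```python
import math

M, A0 = 8, 2.0                     # illustrative choices with a_0 > 1

def theta_approx(x, a):
    return min(1.0, max(-1.0, a * math.sin(math.pi * x)))

def psi0(x):
    return 0.5 * (1 + theta_approx(2 * M * x, A0))

def psi1(x):
    return 1 - psi0(x)

xs = [k / 10000 for k in range(10001)]
assert all(0.0 <= psi0(x) <= 1.0 for x in xs)        # valid partition weights
# exact vanishing on neighborhoods of 3/(4M) + Z/M and 1/(4M) + Z/M respectively
assert all(psi0(3 / (4 * M) + k / M + d) == 0.0
           for k in range(M) for d in (-0.01, 0.0, 0.01))
assert all(psi1(1 / (4 * M) + k / M + d) == 0.0
           for k in range(M) for d in (-0.01, 0.0, 0.01))
```

The clipping in $\widetilde\theta_{a_0}$ makes the vanishing exact (not just approximate) on a $\delta$-neighborhood of the excluded points, which is what allows the patches supporting each filter to be genuinely disjoint.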
{ "timestamp": "2021-01-07T02:05:12", "yymm": "1906", "arxiv_id": "1906.09477", "language": "en", "url": "https://arxiv.org/abs/1906.09477", "abstract": "We explore the phase diagram of approximation rates for deep neural networks and prove several new theoretical results. In particular, we generalize the existing result on the existence of deep discontinuous phase in ReLU networks to functional classes of arbitrary positive smoothness, and identify the boundary between the feasible and infeasible rates. Moreover, we show that all networks with a piecewise polynomial activation function have the same phase diagram. Next, we demonstrate that standard fully-connected architectures with a fixed width independent of smoothness can adapt to smoothness and achieve almost optimal rates. Finally, we consider deep networks with periodic activations (\"deep Fourier expansion\") and prove that they have very fast, nearly exponential approximation rates, thanks to the emerging capability of the network to implement efficient lookup operations.", "subjects": "Neural and Evolutionary Computing (cs.NE); Machine Learning (cs.LG)", "title": "The phase diagram of approximation rates for deep neural networks", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9811668712109664, "lm_q2_score": 0.7217432003123989, "lm_q1q2_score": 0.7081505176683062 }
https://arxiv.org/abs/1506.01774
A polynomial defined by the SL(2;C)-Reidemeister torsion for a homology 3-sphere obtained by a Dehn surgery along a (2p,q)-torus knot
Let K be a (2p,q)-torus knot and let M_n be the 3-manifold obtained by 1/n-Dehn surgery along K. We consider a polynomial whose zeros are the inverses of the Reidemeister torsion of M_n for SL(2;C)-irreducible representations. Johnson gave a formula for the case of the (2,3)-torus knot under some modification and normalization. We generalize this formula by using Tchebychev polynomials.
\section{Introduction} Reidemeister torsion is a piecewise linear invariant for manifolds. It was originally defined by Reidemeister, Franz and de Rham in the 1930s. In the 1980s Johnson \cite{Johnson} developed a theory of the Reidemeister torsion from the viewpoint of its relation to the Casson invariant. He also derived an explicit formula for the Reidemeister torsion of homology 3-spheres obtained by $\frac1n$-Dehn surgeries along a torus knot for $\mathit{SL}(2;\mathbb C)$-irreducible representations. Let $K$ be a $(2p, q)$-torus knot where $p,q$ are coprime, positive odd integers. Let $M_n$ be a closed 3-manifold obtained by a $\frac1n$-surgery along $K$. We consider the Reidemeister torsion $\tau_\rho(M_n)$ of $M_n$ for an irreducible representation $\rho:\pi_1(M_n)\rightarrow\mathit{SL}(2;\mathbb C)$. Johnson gave a formula for the non-trivial values of $\tau_{\rho}(M_n)$. Furthermore in the case of the trefoil knot, he proposed to consider the polynomial whose zero set coincides with the set of all non-trivial values $\{\frac{1}{\tau_\rho(M_n)}\}$, which is denoted by ${\sigma}_{(2,3,n)}(t)$. Under some normalization of ${\sigma}_{(2,3,n)}(t)$, he gave a 3-term relation among ${\sigma}_{(2,3,n+1)}(t),{\sigma}_{(2,3,n)}(t)$ and ${\sigma}_{(2,3,n-1)}(t)$ by using Tchebychev polynomials. In this paper we consider a generalization of this polynomial for a $(2p, q)$-torus knot. The main results of this paper are Theorem 4.3 and Proposition 5.1. \flushleft{Acknowledgements.} This research was partially supported by JSPS KAKENHI 25400101. \section{Definition of Reidemeister torsion} First let us describe definitions and properties of the Reidemeister torsion for $\mathit{SL}(2;\mathbb C)$-representations. See Johnson \cite{Johnson}, Kitano \cite{Kitano94-1,Kitano94-2} and Milnor \cite{Milnor61, Milnor62, Milnor66} for details. Let $W$ be a $d$-dimensional vector space over $\mathbb C$ and let ${\bf b}=(b_1,\cdots,b_d)$ and ${\bf c}= (c_1,\cdots,c_d)$ be two bases for $W$.
Setting $b_i=\sum p_{ji}c_{j}$, we obtain a nonsingular matrix $P=(p_{ij})$ with entries in $\mathbb C$. Let $[{\bf b}/{\bf c}]$ denote the determinant of $P$. Suppose \[ C_*: 0\rightarrow C_k \overset{\partial_k}{\rightarrow} C_{k-1} \overset{\partial_{k-1}}{\rightarrow} \cdots \overset{\partial_{2}}{\rightarrow} C_1\overset{\partial_1}{\rightarrow} C_0\rightarrow 0 \] is an acyclic chain complex of finite dimensional vector spaces over $\mathbb C$. We assume that a preferred basis ${\bf c}_i$ for $C_i$ is given for each $i$. Choose some basis ${\bf b}_i$ for $B_i=\mathrm{Im}(\partial_{i+1})$ and take a lift of it in $C_{i+1}$, which we denote by $\tilde{\bf b}_i$. Since $B_i=Z_i=\mathrm{Ker}{\partial_{i}}$, the basis ${\bf b}_i$ can serve as a basis for $Z_i$. Furthermore since the sequence \[ 0\rightarrow Z_i \rightarrow C_i \overset{\partial_{i}}{\rightarrow} B_{i-1}\rightarrow 0 \] is exact, the vectors $({\bf b}_i,\tilde{\bf b}_{i-1}) $ form a basis for $C_i$. Here $\tilde{\bf b}_{i-1}$ is a lift of ${\bf b}_{i-1}$ in $C_i$. It is easily shown that $[{\bf b}_i,\tilde{\bf b}_{i-1}/{\bf c}_i]$ does not depend on the choice of a lift $\tilde{\bf b}_{i-1}$. Hence we can simply denote it by $[{\bf b}_i, {\bf b}_{i-1}/{\bf c}_i]$. \begin{definition} The torsion of the chain complex $C_*$ is given by the alternating product \[ \prod_{i=0}^k[{\bf b}_i, {\bf b}_{i-1} /{\bf c}_i]^{(-1)^{i+1}} \] and we denote it by $\tau(C_*)$. \end{definition} \begin{remark} It is easy to see that $\tau(C_\ast)$ does not depend on the choices of the bases $\{{\bf b}_0,\cdots,{\bf b}_k\}$. \end{remark} Now we apply this torsion invariant of chain complexes to the following geometric situations. Let $X$ be a finite CW-complex and $\tilde X$ a universal covering of $X$. The fundamental group $\pi_1 X$ acts on $\tilde X$ from the right-hand side as deck transformations. 
Then the chain complex $C_*(\tilde{X};\mathbb Z)$ has the structure of a chain complex of free $\mathbb Z[\pi_1 X]$-modules. Let $\rho:\pi_1 X\rightarrow \mathit{SL}(2; \mathbb C)$ be a representation. We denote the 2-dimensional vector space $\mathbb C^2$ by $V$. Using the representation $\rho$, $V$ admits the structure of a $\mathbb Z[\pi_1 X]$-module and then we denote it by $V_\rho$. Define the chain complex $C_*(X; V_\rho)$ by $C_*({\tilde X}; \mathbb Z)\otimes_{\mathbb Z[\pi_1 X]} V_\rho$ and choose a preferred basis \[ (\tilde{u}_1\otimes {\bf e}_1, \tilde{u}_1\otimes {\bf e}_2, \cdots,\tilde{u}_d\otimes{\bf e}_1, \tilde{u}_d\otimes{\bf e}_2) \] of $C_i(X; V_\rho)$ where $\{{\bf e}_1 , {\bf e_2}\}$ is a canonical basis of $V=\mathbb C^2$ and $u_1,\cdots,u_d$ are the $i$-cells giving a basis of $C_i(X; \mathbb Z)$. Now we suppose that $C_*(X; V_\rho)$ is acyclic, namely all homology groups $H_*(X; V_\rho)$ are vanishing. In this case we call $\rho$ an acyclic representation. \begin{definition} Let $\rho:\pi_1(X)\rightarrow \mathit{SL}(2; \mathbb C)$ be an acyclic representation. Then the Reidemeister torsion $\tau_\rho(X)\in\mathbb C\setminus\{0\}$ is defined by the torsion $\tau(C_*(X; V_\rho))$ of $C_*(X; V_\rho)$. \end{definition} \begin{remark} \noindent \begin{enumerate} \item We define $\tau_\rho(X)=0$ for a non-acyclic representation $\rho$. \item The definition of $\tau_\rho(X)$ depends on several choices. However it is well known that the Reidemeister torsion is a piecewise linear invariant for $X$ with $\rho$. \end{enumerate} \end{remark} Now let $M$ be a closed orientable 3-manifold with an acyclic representation $\rho:\pi_1(M)\rightarrow \mathit{SL}(2; \mathbb C)$. Here we take a torus decomposition of $M=A\cup_{T^2} B$. For simplicity, we write the same symbol $\rho$ for restricted representations to images of $\pi_1(A),\pi_1(B),\pi_1(T^2)$ in $\pi_1(M)$ by inclusions. By this decomposition, we have the following formula. 
\begin{proposition} Let $\rho:\pi_1(M)\rightarrow \mathit{SL}(2;\mathbb C)$ be a representation. Assume all homology groups $H_{\ast}(T^{2};V_{\rho})=0$. Then all homology groups $H_\ast(M;V_\rho)=0$ if and only if all homology groups $H_\ast(A;V_{\rho})=H_\ast(B;V_{\rho})=0$. In this case, it holds that \[ \tau_\rho(M)=\tau_{\rho}(A)\tau_{\rho}(B). \] \end{proposition} \section{Johnson's theory} We apply the above proposition to a 3-manifold obtained by Dehn surgery along a knot. Now let $K\subset S^3$ be a $(2p, q)$-torus knot with coprime odd integers $p, q$. Further let $N(K)$ be an open tubular neighborhood of $K$ and $E(K)$ its knot exterior $S^3\setminus {N}(K)$. We denote the closure of $N(K)$ by $\overline{N}$, which is homeomorphic to $S^{1}\times D^{2}$. Now we write $M_n$ for the closed orientable 3-manifold obtained by a $\frac1n$-surgery along $K$. Naturally there exists a torus decomposition $M_n=E(K)\cup \overline{N}$ of $M_n$. \begin{remark} This manifold $M_n$ is diffeomorphic to a Brieskorn homology 3-sphere $\Sigma(2p,q,N)$ where $N=|2pq n+1|$. \end{remark} Here the fundamental group of $E(K)$ has a presentation as follows. \[ \pi_1(E(K))=\pi_1 (S^3\setminus K)=\langle x,y\ |\ x^{2p} =y^ q\rangle \] Furthermore the fundamental group $\pi_1(M_n)$ admits the following presentation; \[ \pi_1(M_n)=\langle x,y\ |\ x^{2p} =y^q, m l^n= 1\rangle \] where $m=x^{-r}y^s\ (r,s\in\mathbb Z,\ 2p s- q r=1)$ is a meridian of $K$ and $l=x^{-2p}m^{2p q}=y^{- q}m^{2p q}$ is similarly a longitude. Let $\rho:\pi_1(E(K))=\pi_1(S^3\setminus K)\rightarrow\mathit{SL}(2;\mathbb C)$ be a representation. It is easy to see that a given representation $\rho$ can be extended to $\pi_1(M_n)\rightarrow\mathit{SL}(2;\mathbb C)$ as a representation if and only if $\rho(ml^n)=E$. Here $E$ is the identity matrix in $\mathit{SL}(2;\mathbb C)$.
In this case by applying Proposition 2.5, \[ \tau_\rho(M_n)={\tau_\rho(E(K))}{\tau_\rho(\overline{N})} \] for any acyclic representation $\rho:\pi_1(M_n)\rightarrow\mathit{SL}(2;\mathbb C)$. Now we consider only irreducible representations of $\pi_1(M_n)$, which are extended from representations of $\pi_1(E(K))$. It is seen that the set of the conjugacy classes of the $\mathit{SL}(2;\mathbb C)$-irreducible representations is finite. Any conjugacy class can be represented by $\rho_{(a,b,k)}$ for some $(a,b,k)$ such that \begin{enumerate} \item $0<a<2p,0<b< q, a\equiv b \ \text{mod } 2$, \item $0<k<N=|2p q n+1|, k\equiv na\ \text{mod } 2$, \item $\mathrm{tr}(\rho_{(a,b,k)}(x))=2\cos \frac{a\pi }{2p}$, \item $\mathrm{tr} (\rho_{(a,b,k)}(y))=2 \cos\frac{b\pi}{ q}$, \item $\mathrm{tr} (\rho_{(a ,b,k)}(m)) =2 \cos\frac{k\pi}{N}$. \end{enumerate} Johnson computed $\tau_{\rho_{(a,b,k)}}(M_{n})$ as follows. \begin{theorem}[Johnson] \noindent \begin{enumerate} \item A representation $\rho_{(a,b,k)}$ is acyclic if and only if $a\equiv b\equiv 1, k \equiv n \text{ mod }2$. \item For any acyclic representation ${\rho_{(a,b,k)}}$ with $a\equiv b\equiv 1, k\equiv n \text{ mod }2$, it holds that \[ \tau_{\rho_{(a,b,k)}}(M_{n}) =\frac{1}{2\left(1-\cos\frac{a\pi}{2p}\right) \left(1-\cos\frac{b\pi}{ q}\right) \left(1+\cos\frac{{2}p q k\pi }N\right)}. \] \end{enumerate} \end{theorem} \begin{remark} \noindent \begin{itemize} \item In fact Johnson proved this theorem for any torus knot, not only for a $(2p,q)$-torus knot. \item Johnson's result was generalized to arbitrary Seifert fibered manifolds in \cite{Kitano94-1}. \item In general, it is not true that the set of $\{\tau_\rho(M_n)\}$ is finite. There exists a manifold whose Reidemeister torsion can vary continuously. Please see \cite{Kitano94-2}. \end{itemize} \end{remark} Here assume $K=T(2,3)$ is the trefoil knot.
By considering the set of non-trivial values $\tau_{\rho}(M_{n})$ for irreducible representations $\rho:\pi_{1}(M_{n})\rightarrow\mathit{SL}(2;\mathbb C)$, Johnson defined the polynomial $\bar{\sigma}_{(2, 3,n)}(t)$ of one variable $t$ whose zeros are the set of $\{\frac{1}{\frac12{\tau_\rho(M_{n})}}\}$, which is well defined up to multiplication by nonzero constants. \begin{theorem}[Johnson] Under the normalization $\bar{\sigma}_{(2,3,n)}(0)=(-1)^n$, the following 3-term relation holds: \[ \bar{\sigma}_{(2,3,n+1)}(t)=(t^3-6t^2+9t-2)\bar{\sigma}_{(2,3,n)}(t) -\bar{\sigma}_{(2,3,n-1)}(t). \] \end{theorem} \begin{remark} The polynomial $t^3-6t^2+9t-2$ is given by $2T_6\left(\frac12\sqrt{t}\right)$. Here $T_6(x)$ is the sixth Tchebychev polynomial. \end{remark} Recall the $n$-th Tchebychev polynomial $T_n(x)$ of the first kind can be defined by expressing $\cos n\theta$ as a polynomial in $\cos\theta$. We give a summary of these polynomials. \begin{proposition} The Tchebychev polynomials have the following properties. \begin{enumerate} \item $T_0(x)=1,T_1(x)=x$. \item $T_{-n}(x)=T_n(x)$. \item $T_n(1)={1},T_n(-1)=(-1)^n$. \item $T_n(0)=\begin{cases} & 0\text{ if $n$ is odd,}\\ & (-1)^{\frac n2}\text{ if $n$ is even.} \end{cases}$ \item $T_{n+1}(x)=2xT_n(x)-T_{n-1}(x)$. \item The degree of $T_n(x)$ is $n$. \item {$2T_m(x)T_n(x)=T_{m+n}(x)+T_{m-n}(x)$.} \end{enumerate} \end{proposition} Here we put a short list of $T_n(x)$. \begin{itemize} \item $T_0(x)=1$, \item $T_1(x)=x$, \item $T_2(x)=2x^2-1$, \item $T_3(x)=4x^3-3x$, \item $T_4(x)=8x^4-8x^2+1$, \item $T_5(x)=16x^5-20x^3+5x$, \item $T_6(x)=32x^6-48x^4+18x^2-1$. \end{itemize} \begin{example} Put $p=1, q=3$ and $n=-1$. Then $N=|2\cdot 3\cdot(-1)+1|=5$ and $M_{-1}=\Sigma(2,3,5)$. In this case, it is easy to see that $a=b=1$ and $k=1,3$.
By the above formula, we obtain \[ \begin{split} \tau_{\rho_{(1,1,k)}}(M_{-1}) &=\frac{1}{{2\left(1-\cos\frac{\pi}2\right) \left(1-\cos\frac{\pi}3\right)}{\left(1+\cos\frac{6k\pi}5\right)}}\\ &=\frac{1}{{2(1-0)(1-\frac12)(1+\cos\frac{6k\pi}5)}}\\ &=\frac{1}{{1+\cos\frac{6k\pi}5}}\\ &={3\pm \sqrt{5}}. \end{split} \] Hence we have the two non-trivial values of $\frac{1}{\frac{1}{2}\tau_{\rho}(M_{-1})}$ as \[ \begin{split} \frac{1}{\frac{1}{2}\tau_{\rho}(M_{-1})} &=\frac{1}{\frac{3\pm\sqrt{5}}{2}}\\ &=\frac{2}{3\pm\sqrt{5}}\\ &=\frac{3\mp\sqrt{5}}{2}. \end{split} \] Therefore we have \[ \left(t-\left(\frac{3-\sqrt{5}}{2}\right)\right) \left(t-\left(\frac{3+\sqrt{5}}{2}\right)\right) =t^{2}-3t+1. \] Under Johnson's normalization $\bar{\sigma}_{(2,3,-1)}(0)=-1$, \[ \bar{\sigma}_{(2,3,-1)}(t)=-t^{2}+3t-1. \] Next put $n=1$. In this case $N=|2\cdot 3\cdot 1+1|=7$ and \[ \begin{split} \tau_{\rho_{(1,1,k)}}(M_{1}) &=\frac{1}{{2\left(1-\cos\frac{\pi}2\right) \left(1-\cos\frac{\pi}3\right)}{\left(1+\cos\frac{6k\pi}{7}\right)}}\\ &=\frac{1}{1+\cos\frac{6k\pi}{7}}. \end{split} \] We can check that \[ \begin{split} &\left(t-2{\left(1+\cos\frac{6\pi}7\right)}\right)\left(t-2{\left(1+\cos\frac{6\cdot 3\pi}7\right)}\right)\left(t-2{\left(1+\cos\frac{6\cdot 5\pi}7\right)}\right)\\ =&t^{3}-5t^{2}+6t-1\\ =&\bar{\sigma}_{(2,3,1)}(t). \end{split} \] On the other hand, by using Johnson's formula \[ \begin{split} (t^{3}-6t^{2}+9t-2)\bar{\sigma}_{(2,3,0)}(t)-\bar{\sigma}_{(2,3,-1)}(t) =&(t^{3}-6t^{2}+9t-2)\cdot 1-(-t^{2}+3t-1)\\ =&t^{3}-5t^{2}+6t-1, \end{split} \] we obtain the same polynomial. \end{example} \section{Main theorem} From this section, we consider the generalization for a $(2p, q)$-torus knot. Here $p,q$ are coprime odd integers. In this section we give a formula of the torsion polynomial ${\sigma}_{(2p, q,n)}(t)$ for $M_n=\Sigma(2p, q,N)$ obtained by a $\frac1n$-Dehn surgery along $K$.
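The numerical values in the trefoil example above are easy to confirm by machine. The following minimal sketch (ours, using NumPy; it is not part of the original argument) recovers both polynomials from the surgery formula.

```python
import numpy as np

# tau_{rho_(1,1,k)}(M_{-1}) = 1 / (1 + cos(6 k pi / 5)), k = 1, 3
taus = sorted(1 / (1 + np.cos(6 * k * np.pi / 5)) for k in [1, 3])
assert np.allclose(taus, [3 - np.sqrt(5), 3 + np.sqrt(5)])

# the zeros 1/((1/2) tau) = (3 -+ sqrt(5))/2 give t^2 - 3 t + 1
p = np.polynomial.polynomial.polyfromroots([2 / tau for tau in taus])
assert np.allclose(p, [1, -3, 1])      # ascending order: 1 - 3 t + t^2

# for n = 1 (so N = 7), the zeros 2 (1 + cos(6 k pi / 7)), k = 1, 3, 5,
# give t^3 - 5 t^2 + 6 t - 1, in agreement with the 3-term relation
roots = [2 * (1 + np.cos(6 * k * np.pi / 7)) for k in [1, 3, 5]]
q = np.polynomial.polynomial.polyfromroots(roots)
assert np.allclose(q, [-1, 6, -5, 1])  # ascending order: -1 + 6 t - 5 t^2 + t^3
```

Note that `polyfromroots` returns coefficients in ascending order, so the arrays above read constant term first.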
Although Johnson considered the inverses of $\frac12\tau_{\rho}(M_{n})$, we simply treat torsion polynomials as follows. \begin{definition} A one variable polynomial ${\sigma}_{(2p, q,n)}(t)$ is called the torsion polynomial of $M_n$ if its zero set coincides with the set of all non-trivial values $\{\frac{1}{\tau_\rho(M_n)}\}$ and it satisfies the normalization condition $\sigma_{(2p,q,n)}(0)=(-1)^{\frac{np(q-1)}{2}}$. \end{definition} \begin{remark} If $n=0$, then clearly $M_{n}=S^{3}$ with the trivial fundamental group. Hence we define the torsion polynomial to be trivial. \end{remark} From here assume $n\neq 0$. Recall Johnson's formula \[ \frac{1}{\tau_{\rho_{(a,b,k)}}(M_{n})} ={2\left(1-\cos\frac{a\pi}{2p}\right) \left(1-\cos\frac{b\pi}{ q}\right) \left(1+\cos\frac{2 p q k\pi}N\right)} \] where $0<a<2p,0<b< q, a\equiv b\equiv 1\text{ mod 2}, k\equiv n\text{ mod }2$. Here we put \[ C_{(2p, q,a,b)}=\left(1-\cos\frac{a\pi }{2p}\right)\left(1-\cos\frac{b\pi }{ q}\right) \] and we have \[ \frac{1}{\tau_{\rho_{(a,b,k)}}(M_n)}=4C_{(2p,q,a,b)}\cdot \frac12\left(1+\cos\frac{2 p q k\pi}N\right). \] The main result is the following. \begin{theorem} The torsion polynomial of $M_{n}$ is given by \[ {\sigma}_{(2p, q,n)}(t)= \prod_{(a,b)}Y_{(n,a,b)}(t) \] where \[ Y_{(n, a,b)}(t) = \begin{cases} & 2C_{(2p, q,a,b)}\frac{ T_{{N+1}}\left(\frac{\sqrt{t}}{2\sqrt{C_{(2p, q,a,b)}}}\right) -T_{{N-1}}\left(\frac{\sqrt{t}}{2\sqrt{C_{(2p, q,a,b)}}}\right)}{t-4C_{(2p, q,a,b)}}\hskip 0.3cm (n>0)\\ & -2C_{(2p, q,a,b)}\frac{ T_{{N+1}}\left(\frac{\sqrt{t}}{2\sqrt{C_{(2p, q,a,b)}}}\right) -T_{{N-1}}\left(\frac{\sqrt{t}}{2\sqrt{C_{(2p, q,a,b)}}}\right)}{t-4C_{(2p, q,a,b)}}\ (n<0).\\ \end{cases} \] Here $N=|2pqn+1|$ and the product is taken over pairs of integers $(a,b)$ satisfying the following conditions: \begin{itemize} \item $0<a<2p,0<b< q$, \item $a\equiv b\equiv 1\text{ mod }2$, \item $0<k<N, k\equiv n\text { mod }2$.
\end{itemize} \end{theorem} \begin{proof} \noindent \flushleft{Case 1: $n>0$.} We modify one factor $(1+\cos\frac{2p q k\pi}N)$ of $\displaystyle\frac{1}{\tau_\rho(M_n)}$ as follows. \begin{lemma} The set $\{\cos\frac{2p q k\pi}{N}\ |\ 0<k<N, k\equiv n\text{ mod }2\}$ is equal to the set $\{\cos\frac{2p k\pi }{N}\ |\ 0<k<\frac{N}{2}\}$. \end{lemma} \begin{proof} Now $N=2p q n+1$ is always an odd integer. For any $k>\frac{N}{2}$, clearly $N-k<\frac{N}{2}$. Then \[ \begin{split} \cos\frac{2p q (N-k)\pi}{N} &=\cos\left(2p q\pi -\frac{2p q k\pi}{N}\right)\\ &=(-1)^{2p q}\cos\left(-\frac{2p q k\pi}{N}\right)\\ &=\cos\left(\frac{2p q k\pi}{N}\right). \end{split} \] Here if $k$ is even (resp. odd), then $N-k$ is odd (resp. even). Hence it is seen that \[ \{\cos\frac{2 p q k\pi}{N} \ |\ 0<k<N, k\equiv n\text{ mod }2\}= \{\cos\frac{2 p q k\pi}{N}\ |\ 0<k<\frac{N}{2}\}. \] For any $k<\frac{N}{2}$, there exists a unique $l$ such that $-\frac{N}{2}< l< \frac{N}{2}$ and $l\equiv qk$ mod $N$, and hence a unique $l$ such that $0< l< \frac{N}{2}$ and $l\equiv \pm qk$ mod $N$. Here $\cos\frac{2p q k\pi }{N}=\cos\frac{2p l\pi}{N}$ if and only if $2p q k\equiv \pm 2p l$ mod $N$. Therefore it is seen that \[ \left\{\cos\frac{2pq k\pi}{N}\ |\ 0<k<\frac{N}{2}\right\} = \left\{\cos\frac{2p k\pi}{N}\ |\ 0<k<\frac{N}{2}\right\}. \] \end{proof} Now we can modify \[ \begin{split} \frac12\left(1+\cos\frac{2p k\pi}{N}\right) &=\frac12\cdot2\cos^2\frac{2p k\pi}{2N}\\ &=\cos^2\frac{p k\pi}{N}. \end{split} \] We put \[ z_k=\cos\frac{p k\pi}{N}\ (0<k<N) \] and substitute $x=z_k$ into $T_{N+1}(x)$. Then it holds that \[ \begin{split} T_{N+1}(z_k) &=\cos\left(\frac{(N+1)(p k\pi)}{N}\right)\\ &=\cos\left(p k\pi+\frac{pk\pi }{N}\right)\\ &=(-1)^{p k}z_k . \end{split} \] Similarly it is seen that \[ \begin{split} T_{N-1}(z_k) &=\cos\left(\frac{(N-1)(p k\pi)}{N}\right)\\ &=\cos\left(p k\pi-\frac{p k\pi}{N}\right)\\ &=(-1)^{p k}z_k. \end{split} \] Hence it holds that \[ T_{N+1}(z_k)-T_{N-1}(z_k)=0.
\] By the properties of Tchebychev polynomials, it is seen that \begin{itemize} \item $T_{N+1}(1)-T_{N-1}(1)=0$, \item $T_{N+1}(-1)-T_{N-1}(-1)=0$. \end{itemize} Therefore we consider the following: \[ X_n(x)= \begin{cases} & \frac{T_{N+1}(x)-T_{N-1}(x)}{2(x^2-1)}\hskip 0.3cm (n>0)\\ & -\frac{T_{N+1}(x)-T_{N-1}(x)}{2(x^2-1)}\ (n<0).\\ \end{cases} \] We mention that the degree of $X_n(x)$ is $N-1$. By the above computation, $z_1,\cdots, z_{N-1}$ are the zeros of $X_n(x)$. Further we can see \[ \begin{split} z_{N-k} &=\cos\frac{p(N-k)\pi}{N}\\ &=\cos(p\pi-\frac{ pk\pi}{N})\\ &=(-1)^p\cos(-\frac{pk\pi}{N})\\ &=(-1)^p\cos(\frac{pk\pi}{N})\\ &=-z_k. \end{split} \] This means that the $N-1$ roots $z_1,\cdots, z_{N-1}$ of $X_n(x)=0$ occur in pairs $\pm z_k$. Because $N$ is odd, $T_{N+1}(x)$ and $T_{N-1}(x)$ are even functions, hence functions of $x^2$. Hence $X_n(x)$ is also an even function. Here by replacing $x^2$ by $\frac{t}{4C_{(2p, q,a,b)}}$, namely $x$ by $\frac{\sqrt{t}}{2\sqrt{C_{(2p, q,a,b)}}}$, we put \[ \begin{split} Y_{(n,a,b)}(t) &=X_n\left(\frac{\sqrt{t}}{2\sqrt{C_{(2p, q,a,b)}}}\right)\\ &=\frac{T_{N+1}\left(\frac{\sqrt{t}}{2\sqrt{C_{(2p, q,a,b)}}}\right) -T_{N-1}\left(\frac{\sqrt{t}}{2\sqrt{C_{(2p, q,a,b)}}}\right)} {2\left(\left(\frac{\sqrt{t}}{2\sqrt{C_{(2p, q,a,b)}}}\right)^2-1\right)}\\ &=\frac {T_{N+1}\left(\frac{\sqrt{t}}{2\sqrt{C_{(2p, q,a,b)}}}\right) -T_{N-1}\left(\frac{\sqrt{t}}{2\sqrt{C_{(2p, q,a,b)}}}\right)} { 2\left(\frac{t}{4C_{(2p, q,a,b)}}-1\right) }\\ &=2C_{(2p, q,a,b)} \frac {T_{N+1}\left(\frac{\sqrt{t}}{2\sqrt{C_{(2p, q,a,b)}}}\right) -T_{N-1}\left(\frac{\sqrt{t}}{2\sqrt{C_{(2p, q,a,b)}}}\right)} { t-4C_{(2p, q,a,b)} }. \end{split} \] Here the degree of $Y_{(n,a,b)}(t)$ is $\frac{N-1}{2}$, and the roots of $Y_{(n,a,b)}(t)$ are $4C_{(2p, q,a,b)}z_k^2=4C_{(2p, q,a,b)}\cos^2\frac{k\pi}{N}\ (0<k\le\frac{N-1}{2})$, which are exactly the non-trivial values of $\frac{1}{\tau_{\rho_{(a,b,k)}}(M_n)}$. Therefore we obtain the formula.
\flushleft{Case 2: $n<0$.} In this case we have $N=|2pqn+1|=2pq|n|-1$. By the same arguments, it is easy to see that the claim of the theorem holds. This completes the proof. \end{proof} \begin{remark} By defining $X_{0}(x)=1$, we get $Y_{(0,a,b)}(t)=1$. Then the above statement is also true for $n=0$. \end{remark} \begin{corollary} The degree of $\sigma_{(2p,q,n)}(t)$ is given by $\frac{(N-1)p(q-1)}{4}$. \end{corollary} \begin{proof} The number of the pairs $(a,b)$ is given by $\frac{p(q-1)}{2}$. As the degree of $Y_{(n,a,b)}(t)$ is $\frac{N-1}{2}$, the degree of $\sigma_{(2p,q,n)}(t)$ is given by $\frac{(N-1)p(q-1)}{4}$. \end{proof} \section{3-term relations} Finally we prove 3-term relations for each factor $Y_{(n,a,b)}(t)$ as follows. \begin{proposition} For any $n$, it holds that \[ Y_{(n+1,a,b)}(t)=D(t)Y_{(n,a,b)}(t)-Y_{(n-1,a,b)}(t) \] where $D(t)=2T_{2p q }\left(\frac{\sqrt{t}}{2\sqrt{C_{(2p, q,a,b)}}}\right)$. \end{proposition} \begin{proof} Recall Proposition 3.2 (7): \[ 2T_m(x)T_n(x)=T_{m+n}(x)+T_{m-n}(x). \] Then if $n>0$ we have \[ \begin{split} 2T_{2p q }(x)X_n(x) &=2T_{2p q }(x)\left(\frac{T_{2p q n+2}(x)-T_{2p q n}(x)}{2(x^2-1)}\right)\\ &=\frac{(T_{2p q +2p q n+2}(x)+T_{2p q n+2-2p q }(x))-(T_{2p q +2p q n}(x)+T_{2p q n-2p q }(x))}{2(x^2-1)}\\ &=\frac{T_{2p q (n+1)+2}(x)-T_{2p q (n+1)}(x)+T_{2p q (n-1)+2}(x)-T_{2p q (n-1)}(x)}{2(x^2-1)}\\ &=X_{n+1}(x)+X_{n-1}(x). \end{split} \] Therefore it can be seen that \[ X_{n+1}(x)=2T_{2p q }(x)X_n(x)-X_{n-1}(x) \] and \[ Y_{(n+1,a,b)}(t) =2T_{2p q }\left(\frac{\sqrt{t}}{2\sqrt{C_{(2p,q,a,b)}}}\right) Y_{(n,a,b)}(t)-Y_{(n-1,a,b)}(t). \] If $n=0$, the 3-term relation is \[ Y_{(1,a,b)}(t)=D(t)Y_{(0,a,b)}(t)-Y_{(-1,a,b)}(t). \] It can be seen by direct computation that \[ \begin{split} 2T_{2pq}(x)X_{0}(x)-X_{-1}(x) &=2T_{2pq}(x)-X_{-1}(x)\\ &=X_{1}(x). \end{split} \] If $n<0$, it can be proved similarly. \end{proof} We show some examples. First we treat the $(2,3)$-torus knot again. \begin{example} Put $p=1, q=3$.
In this case $a=b=1$. Then we see \[ C_{(2,3,1,1)}=\left(1-\cos\frac\pi 2\right)\left(1-\cos\frac{\pi}{3}\right)=\frac12. \] By applying Theorem 4.3 and Proposition 5.1, \[ \begin{split} \sigma_{(2,3,-1)}(t) &=\frac{T_{6}\left(\frac{\sqrt{t}}{\sqrt{2}}\right) -T_{4}\left(\frac{\sqrt{t}}{\sqrt{2}}\right)}{2\left(1-\left(\frac{\sqrt{t}}{\sqrt{2}}\right)^{2}\right)}\\ &=-4t^{2}+6t-1.\\ \sigma_{(2,3,0)}(t)&=1.\\ \sigma_{(2,3,1)}(t)&=8t^{3}-20t^{2}+12 t-1. \end{split} \] \end{example} We show one more example. \begin{example} Here put $(2p,q)=(2,5)$. In this case $(a,b)=(1,1)$ or $(1,3)$ and the constants $C_{(2,5,1,1)}, C_{(2,5,1,3)}$ are given as follows: \[ \begin{split} C_{(2,5,1,1)}&=(1-\cos\frac\pi 2)(1-\cos\frac{\pi}{5})\\ &=1-\cos\frac{\pi}{5}\\ &=\frac{1}{4} \left(3-\sqrt{5}\right). \end{split} \] \[ \begin{split} C_{(2,5,1,3)} &=(1-\cos\frac\pi 2)(1-\cos\frac{3\pi}{5})\\ &=1-\cos\frac{3\pi}{5}\\ &=\frac{1}{4} \left(3+\sqrt{5}\right). \end{split} \] First we put $n=-1$. By Theorem 4.3, \[ \begin{split} \sigma_{(2,5,-1)}(t) &=Y_{(-1,1,1)}(t)Y_{(-1,1,3)}(t)\\ &=X_{-1}\left(\frac{\sqrt{t}}{2\sqrt{C_{(2,5,1,1)}}}\right) X_{-1}\left(\frac{\sqrt{t}}{2\sqrt{C_{(2,5,1,3)}}}\right)\\ &=4C_{(2,5,1,1)}C_{(2,5,1,3)} \frac{T_{10}\left(\frac{\sqrt{t}}{2\sqrt{C_{(2,5,1,1)}}}\right) -T_{8}\left(\frac{\sqrt{t}}{2\sqrt{C_{(2,5,1,1)}}}\right)} {t-4C_{(2,5,1,1)}} \frac{T_{10}\left(\frac{\sqrt{t}}{2\sqrt{C_{(2,5,1,3)}}}\right) -T_{8}\left(\frac{\sqrt{t}}{2\sqrt{C_{(2,5,1,3)}}}\right)} {t-4C_{(2,5,1,3)}}\\ &=256 t^{8}-2688 t^{7}+9856 t^{6}-15840 t^{5}+12192 t^4\\ &\ -4608 t^3+820 t^2-60 t+1. \end{split} \] By the definition, \[ \sigma_{(2p,q,0)}(t)=1. \] By applying the 3-term relation \[ Y_{(1,a,b)}(t)=2 T_{10}\left(\frac{\sqrt{t}}{2\sqrt{C_{(2,5,a,b)}}}\right)Y_{(0,a,b)}(t) -Y_{(-1,a,b)}(t), \] we obtain \[ \begin{split} \sigma_{(2,5,1)}(t)= & 1024 t^{10}-13824 t^{9}+70912 t^{8}-177408 t^{7}+236416 t^{6} \\ &-175776 t^{5}+73408 t^{4}-16632 t^3+1880 t^2-90t+1.
\end{split} \] \end{example}
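The trefoil expansions above can be reproduced by machine as a sanity check of Theorem 4.3 and the 3-term relation. The sketch below is our own illustration (the sample points and the fitting procedure are arbitrary choices): it evaluates $X_n$ through the Tchebychev recurrence and fits the resulting polynomial in $t$.

```python
import numpy as np

def T(n, x):
    """Tchebychev polynomial of the first kind, via T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x)."""
    a, b = np.ones_like(x), x
    for _ in range(n):
        a, b = b, 2 * x * b - a
    return a

def sigma_23(n, t):
    """sigma_{(2,3,n)}(t) for the trefoil (single pair (a,b) = (1,1), C = 1/2)."""
    N = abs(6 * n + 1)
    x = np.sqrt(t) / np.sqrt(2)  # x = sqrt(t) / (2 sqrt(C)) with C = 1/2
    sign = 1 if n > 0 else -1
    return sign * (T(N + 1, x) - T(N - 1, x)) / (2 * (x**2 - 1))

t = np.linspace(0.05, 1.9, 40)   # stay away from the removable point t = 4C = 2
# sigma_{(2,3,-1)}(t) = -4 t^2 + 6 t - 1
assert np.allclose(np.polyfit(t, sigma_23(-1, t), 2), [-4, 6, -1])
# sigma_{(2,3,1)}(t) = 8 t^3 - 20 t^2 + 12 t - 1
assert np.allclose(np.polyfit(t, sigma_23(1, t), 3), [8, -20, 12, -1])

# 3-term relation: sigma_{(2,3,1)} = 2 T_6(x) sigma_{(2,3,0)} - sigma_{(2,3,-1)}
x = np.sqrt(t) / np.sqrt(2)
assert np.allclose(sigma_23(1, t), 2 * T(6, x) * 1 - sigma_23(-1, t))
```

The same evaluate-and-fit loop, run over both pairs $(a,b)$, also confirms the $(2,5)$ expansions and the degree formula of Corollary 4.6.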
https://arxiv.org/abs/1503.06281
Bichromatic lines in the plane
Given a set of red and blue points in the plane, a bichromatic line is a line containing at least one red and one blue point. We prove the following conjecture of Kleitman and Pinchasi (unpublished, 2003). Let P be a set of n red, and n or n-1 blue points in the plane. If neither colour class is collinear, then P determines at least |P|-1 bichromatic lines. In fact we are able to achieve the same conclusion under the weaker assumption that P is not collinear or a near-pencil.
\section{Introduction} In this paper we consider sets of red and blue points in the Euclidean plane. If $P$ is such a set, a line containing two or more points of $P$ is said to be \emph{determined} by $P$. A line determined by at least one red and one blue point is called \emph{bichromatic}. In 2003, Kleitman and Pinchasi~\cite{kleitpinch} studied lower bounds on the number of bichromatic lines under the assumption that neither colour class is collinear. They made the following conjecture. \begin{conjecture}[Kleitman--Pinchasi Conjecture]\label{KPconj} Let $P$ be a set of $n$ red, and $n$ or $n-1$ blue points in the plane. If neither colour class is collinear, then $P$ determines at least $|P|-1$ bichromatic lines. \end{conjecture} This conjecture is tight for the arrangement of $n-1$ red and $n-1$ blue points on a line, along with one red and one blue point off the line that are collinear with each other and with some point on the line. In 1948, de Bruijn and Erd\H os~\cite{BE48} proved that every non-collinear set of $n$ points in the plane determines at least $n$ lines. In fact, they proved this result in a more general combinatorial setting. \begin{theorem}[de Bruijn and Erd\H os]\label{dBErd} Let $S$ be a set of cardinality $n$ and $\{ S_1, \ldots, S_k \}$ a collection of subsets of $S$ such that each pair of elements in $S$ is contained in exactly one $S_i$. Then either $S=S_i$ for some $i$, or $k\geq n$. \end{theorem} As noted by de Bruijn and Erd\H os, the special case where $S$ is a set of points in the plane and the $S_i$ are the collinear subsets of $S$ is easier to prove than the general theorem. It follows by induction from the well-known Sylvester-Gallai Theorem (actually first proven by Melchior~\cite{Melchior-DM41}), which says that every finite non-collinear set of points in the plane determines a line with just two points.
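The tight configuration just described is easy to verify by brute force. The following sketch is ours and purely illustrative (the integer coordinates and the choice $n=5$ are hypothetical): it groups pairs of points by the line they span and counts the determined bichromatic lines.

```python
from itertools import combinations
from math import gcd

def line_key(p, q):
    """Normalised integer triple (a, b, c) for the line ax + by + c = 0 through p and q."""
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, x1 - x2
    c = -(a * x1 + b * y1)
    g = gcd(gcd(abs(a), abs(b)), abs(c)) or 1
    a, b, c = a // g, b // g, c // g
    if (a, b, c) < (0, 0, 0):  # fix a canonical orientation
        a, b, c = -a, -b, -c
    return (a, b, c)

n = 5
# n-1 red and n-1 blue points on the x-axis ...
points = [((i, 0), 'red') for i in range(1, n)] \
       + [((i, 0), 'blue') for i in range(n, 2 * n - 1)]
# ... plus one red and one blue point off the line, collinear with (1, 0)
points += [((1, 1), 'red'), ((1, -1), 'blue')]

lines = {}  # line -> set of colours seen on it
for (p, cp), (q, cq) in combinations(points, 2):
    lines.setdefault(line_key(p, q), set()).update([cp, cq])

bichromatic = sum(1 for colours in lines.values() if colours == {'red', 'blue'})
assert bichromatic == len(points) - 1  # |P| - 1 = 9 bichromatic lines
```

The same loop works for any small $n$, and for checking other candidate extremal configurations.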
As motivation, Kleitman and Pinchasi note that together with the following theorem of Motzkin~\cite{motzkinnonmixed}, Conjecture~\ref{KPconj} would imply the plane case of Theorem~\ref{dBErd}. \begin{theorem}[Motzkin] Every non-collinear set of red and blue points in the plane determines a monochromatic line. \end{theorem} Kleitman and Pinchasi~\cite{kleitpinch} came very close to proving Conjecture~\ref{KPconj}, establishing the following theorem. \begin{theorem}[Kleitman and Pinchasi]\label{KPthm} Let $P$ be a set of $n$ red, and $n$ or $n-1$ blue points in the plane. If neither colour class is collinear, then $P$ determines at least $|P|-3$ bichromatic lines. \end{theorem} Purdy and Smith~\cite{purdysmith} proved Conjecture~\ref{KPconj} for $n\geq 79$. We will establish the following strengthening of Conjecture~\ref{KPconj}. \begin{theorem}\label{mainthm} Let $P$ be a set of $n$ red, and $n$ or $n-1$ blue points in the plane. If $P$ is not collinear or a near-pencil, then $P$ determines at least $|P|-1$ bichromatic lines. \end{theorem} \section{Preliminaries} We begin with a few useful observations. \begin{lemma}\label{imp21} Suppose $P$ is a set of $n$ red and $n$ (or $n-1$) blue points, and suppose there is a line $L$ with $r$ red and $b$ blue points. Let $r'= \min \{ n-r,b \}$ and $b'= \min \{ n-b,r \}$ (or $b'= \min \{ n-1-b,r \}$). Then the number of bichromatic lines is at least $$ \sum_{i=0}^{r'-1} (b-i) + \sum_{i=0}^{b'-1} (r -i) = br' - \frac{1}{2}r'(r'-1) + rb'-\frac{1}{2}b'(b'-1) \enspace .$$ Moreover, if $b+r < n$, then $r'=b$, $b'=r$ and the number of bichromatic lines is at least $(b^2 + b + r^2 +r)/2 $. If $L$ is itself bichromatic we may add one more to these totals. \end{lemma} \begin{proof} The bichromatic lines with a red point on $L$ are distinct from those with a blue point on $L$. To count those with a red point, take any $b'$ blue points not on $L$. Order these blue points $p_1, p_2, p_3,\ldots, p_{b'}$.
There are $r$ lines from $p_1$ to the red points on $L$. For $p_2$ there are also $r$ such lines, but $p_1$ may lie on one of them (but not more). So there are $r-1$ lines that were not yet counted. Similarly, for $p_3$ there are at least $r-2$ lines that are not counted previously, and for $p_i$ there are at least $r-i+1$. \end{proof} \begin{proposition}\label{propcoll} There is no counterexample to Theorem~\ref{mainthm} with one colour class collinear. \end{proposition} \begin{proof} Suppose one colour class lies on a line $L$. If red is collinear, then using a similar idea to the proof of Lemma~\ref{imp21} we see that there are at least $n + (n-1)$ bichromatic lines, unless there is only one blue point not on $L$. In that case $P$ is a near-pencil. If blue is collinear and there are $n$ blue points, the same argument applies. Now suppose blue is collinear and there are $n-1$ blue points. If $L$ is bichromatic, we get $(n-1) + (n-2) +1 = |P|-1$ bichromatic lines. If $L$ is monochromatic, we have at least $(n-1)+(n-2) +(n-3)$ bichromatic lines, which suffices as long as $n\geq 4$. Finally, if $n=3$ and $L$ is monochromatic, each of the two blue points on $L$ lies in two bichromatic lines, otherwise there would be four collinear points and $P$ would be a near-pencil. \end{proof} It is simple to check that this implies the following. \begin{corollary}\label{easycoro} There is no counterexample to Theorem~\ref{mainthm} with $|P|-2$ collinear points. \end{corollary} Using these observations we can establish the following strengthening of Claim 2.1 in \cite{kleitpinch}. \begin{lemma}\label{KPlem} There is no counterexample to Theorem~\ref{mainthm} with $n$ collinear points. \end{lemma} \begin{proof} Suppose there is a line $L$ with $n$ or more points of $P$. Proposition~\ref{propcoll} implies that $L$ is bichromatic and that there is at least one red and one blue point not on $L$. Suppose there are at least two of each colour not on $L$.
Then there are at least two bichromatic lines through all except two of the points on $L$. Along with $L$ this yields $2n -2 +1 \geq |P|-1$ bichromatic lines. Corollary~\ref{easycoro} says that there are at least three points not on $L$. So now suppose there is only one point $p$ of some colour not on $L$, and hence at least two of the other colour, say $q_1$ and $q_2$. If $p$ is red, there are $n-1$ red points and at least one blue point on $L$. There are at least $(n-1)+(n-2)$ bichromatic lines through the red points on $L$ and $\{q_1,q_2\}$, one bichromatic line through $p$ and the blue point on $L$, and $L$ itself, giving $2n-1$. Finally, if $p$ is blue and there are $n-1$ blue points in total, then there are $n-2$ blue points and at least two red points on $L$. This gives at least $(n-2)+(n-3)$ bichromatic lines through the blue points on $L$ and $\{q_1,q_2\}$, two bichromatic lines through $p$ and the red points on $L$, and $L$ itself, giving $2n-2=|P|-1$. \end{proof} \section{Large minimal counterexamples} Kleitman and Pinchasi use proof by induction on the size of $P$ to establish Theorem~\ref{KPthm}. They establish an inductive step that works for $n\geq 20$ for both Theorem~\ref{KPthm} and Conjecture~\ref{KPconj}. In this section we reproduce their argument for the sake of completeness, with a few simplifications. We will also recast their argument in terms of a search for a minimal counterexample. Suppose that $P$ is a smallest counterexample to Theorem~\ref{mainthm}, so removing a point from $P$ cannot yield another counterexample. Let $s_{i,j}$ be the number of lines determined by $P$ with exactly $i$ red points and $j$ blue points, where we always assume $i+j \geq 2$. \begin{lemma}\label{1redlem} We may assume that $s_{1,j}=0$ for all $j$. In particular $s_{1,1}=0$, so every line determined by just two points is monochromatic. Moreover, by symmetry, $s_{i,1}=0$ for all $i$ in the case of $n$ blue points. 
\end{lemma} \begin{proof} If $s_{1,j}\geq 1$, removing the red point from such a line would yield either a near-pencil or a smaller counterexample. In the first case, $P$ had all but two points on a line, contradicting Corollary~\ref{easycoro}. \end{proof} Let $S$ be the number of unordered pairs of points in $P$ with the same colour, and let $D$ be the number of unordered pairs with different colours. If there are $n$ blue points then $S- D = 2\binom{n}{2} - n^2 =-n $. If there are $n-1$ blue points then $S-D= \binom{n}{2} + \binom{n-1}{2} - n(n-1) = 1-n $. Thus $S-D \leq 1-n$. Clearly $D = \sum_{i,j\geq1} ijs_{i,j}$. Ignoring the contribution of monochromatic lines with three or more points, we also have \begin{equation}\label{eqn1} S \geq s_{2,0} + s_{0,2} + \sum_{i,j\geq1} \left(\binom{i}{2}+\binom{j}{2} \right)s_{i,j} \enspace. \end{equation} We use a classical inequality due to Melchior~\cite{Melchior-DM41}. Let $t_i$ be the number of lines containing $i$ points in $P$. \begin{theorem}[Melchior's Inequality]\label{melchior} Let $P$ be a non-collinear set of points. Then $$ t_2 \geq 3+ \sum_{i\geq 3}(i-3)t_i \enspace .$$ \end{theorem} Since $t_2=s_{2,0} + s_{0,2}$ by Lemma~\ref{1redlem}, combining Theorem~\ref{melchior} with (\ref{eqn1}) we get $$S-D \geq 3+\sum_{ i,j \geq 1}(i+j-3)s_{i,j} + \sum_{i,j\geq1} \left(\binom{i}{2}+\binom{j}{2} -ij \right)s_{i,j} \enspace. $$ Using $S-D \leq 1-n$ this gives the following lower bound on (twice) the number of bichromatic lines. \begin{equation}\label{eqn2} 2\sum_{i,j\geq 1} s_{i,j} \geq n+2+ \sum_{i,j\geq1} \left(\frac{1}{2}\left((i-j)^2 +i+j\right) -1 \right)s_{i,j} \enspace. \end{equation} Note that coefficients of the $s_{i,j}$ on the right hand side of (\ref{eqn2}) are all positive because we don't allow $i=j=1$. We wish to minimise the right hand side subject to the constraint \begin{equation}\label{eqn3} D=\sum_{i,j\geq 1} ij s_{i,j} \geq n(n-1) \enspace. 
\end{equation}
The minimum is achieved when the only non-zero $s_{i,j}$ is the one for which the ratio
\begin{equation}\label{ratio}
\frac{\frac{1}{2}\left((i-j)^2 +i+j\right) -1}{ij}
\end{equation}
of the coefficients in (\ref{eqn2}) and (\ref{eqn3}) is minimised. This is because this $s_{i,j}$ simultaneously contributes the least to the right hand side of (\ref{eqn2}) and the most to the left hand side of (\ref{eqn3}). Clearly this minimum is achieved when $i=j$ since this minimises both the difference and the sum relative to the product (this is the arithmetic-geometric mean inequality). So (\ref{ratio}) becomes $(i-1)/i^2$, which decreases as $i$ grows larger for $i\geq2$. Now by Lemmas~\ref{imp21} and~\ref{KPlem}, we have that $\frac{1}{2}(i^2 + j^2 + i +j)\leq 2n-2$. This restricts $(i,j)$ to lie within a circle centred at $(-\frac{1}{2},-\frac{1}{2})$. The minimum of (\ref{ratio}) still occurs on the line $i=j$ for this domain. To see this note that the curves on which (\ref{ratio}) is constant are hyperbolas that are symmetric about $i=j$ and tangent to circles centred on $i=j$. Thus if $k$ is the maximum integer such that $k^2+k \leq 2n-2$, then $s_{k,k}$ is the non-zero variable that minimises the right hand side of (\ref{eqn2}). Therefore we may set $k = \floor{\sqrt{2n}}$. The constraint~(\ref{eqn3}) implies that $k^2 s_{k,k} \geq n(n-1)$, which implies $s_{k,k} \geq n(n-1)/k^2$. Since $P$ is a counterexample, $2\sum_{i,j\geq 1} s_{i,j} \leq 4n-4$. Combining all this with (\ref{eqn2}) gives
\begin{equation}\label{eqn5}
4n-4 \geq n+2+ (\floor{\sqrt{2n}} -1 )\frac{n(n-1)}{(\floor{\sqrt{2n}})^2} \enspace.
\end{equation}
The right hand side of (\ref{eqn5}) grows as $\Omega(n^{3/2})$ and the left hand side linearly, so it must be false for large $n$. One can check that it is false for all $n\geq21$. Therefore any minimal counterexamples to Theorem~\ref{mainthm} must occur with $n \leq 20$.
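The claim that (\ref{eqn5}) fails for every $n \geq 21$ is easy to verify mechanically. The following short script (ours, purely illustrative) checks it for $21 \leq n \leq 5000$, using exact integer square roots so that $k = \floor{\sqrt{2n}}$ is computed without floating-point error:

```python
from math import isqrt

def rhs(n):
    # Right hand side of inequality (5), with k = floor(sqrt(2n))
    k = isqrt(2 * n)
    return n + 2 + (k - 1) * n * (n - 1) / k**2

# (5) reads 4n - 4 >= rhs(n): it fails for every n >= 21 ...
assert all(4 * n - 4 < rhs(n) for n in range(21, 5001))
# ... while it still holds at n = 20 (rhs(20) is roughly 74.8 <= 76)
assert 4 * 20 - 4 >= rhs(20)
```

Since the right hand side grows like $n^{3/2}$, only moderate $n$ need checking; beyond that range the gap only widens.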
\section{Small minimal counterexamples}
We continue our search for minimal counterexamples with $n\leq 20$. Similar to Kleitman and Pinchasi, our main tool is computer-based linear programming. We include as many extra constraints as we can to eliminate as many $n$ as possible. In the end we are left with just two cases where a minimal counterexample may exist. We will eliminate these possibilities with direct geometric arguments. As well as constraints arising from the previous discussion, we use Hirzebruch's Inequality~\cite{Hirzebruch}. As before, $t_i$ is the number of lines containing $i$ points in $P$. Note that Corollary~\ref{easycoro} ensures that at most $|P|-3$ points are collinear.
\begin{theorem}[Hirzebruch's Inequality]\label{hirzebruch}
Let $P$ be a set of points with at most $|P|-3$ collinear. Then
$$ t_2 +\frac{3}{4}t_3 \geq n + \sum_{i\geq5}(2i-9)t_i \enspace . $$
\end{theorem}
We also introduce the following three constraints.
\begin{observation}\label{newgpobs}
Suppose there are $n$ blue points. Each red point can lie on at most $\floor{n/2}$ lines determined by two or more blue points.
\end{observation}
\begin{lemma}\label{tricky}
Suppose $P$ is a set of $n$ red and $n$ (or $n-1$) blue points, and suppose there is a line $L$ with $r$ red and $b$ blue points. Let $r'= n-r$ and $b'= n-b$ (or $b'= n-1-b$). Then the number of bichromatic lines is at least
$$ \min_{i\in [b']} \left\{ i + (r-1)\max \left\{ \ceil{\frac{b'}{i}} , i \right\} \right\} + \min_{i\in [r']} \left\{ i + (b-1)\max \left\{ \ceil{\frac{r'}{i}} , i \right\} \right\} \enspace .$$
Moreover, if $r,b\geq 1$ we may add one more to this total.
\end{lemma}
\begin{proof}
Consider the red points on $L$, and suppose that among them $r_1$ lies on the fewest bichromatic lines, and that the number of these lines (excluding $L$) is $i$. Then the other $r-1$ red points on $L$ are each contained in at least $i$ bichromatic lines (excluding $L$).
But they are also contained in at least $\ceil{\frac{b'}{i}}$ bichromatic lines since some line through $r_1$ contains at least this many blue points. \end{proof} \begin{observation}\label{obs1off} A minimal counterexample to Theorem~\ref{mainthm} must determine precisely $|P|-2$ bichromatic lines. If it has fewer we can remove any point to obtain a smaller counterexample. \end{observation} All in all the constraints are as follows. For brevity they are stated only for the case of $n$ red and $n$ blue points. The case of $n-1$ blue points is very similar. \begin{itemize} \item $\sum \binom{i}{2} s_{i,j} = \binom{n}{2} $ (Counting red pairs) \item $\sum \binom{j}{2} s_{i,j} = \binom{n}{2} $ (Counting blue pairs) \item $\sum ij s_{i,j} = n^2 $ (Counting bichromatic pairs) \item $\sum (i+j-3) s_{i,j} \leq -3 $ (Melchior's Inequality~\ref{melchior}) \item $s_{1,1} + s_{0,2} + s_{2,0} + \frac{3}{4}(s_{0,3}+ s_{1,2}+s_{2,1}+s_{3,0}) \geq 2n + \sum_{i+j\geq 5} (2i+2j-9)s_{i,j}$ (Hirzebruch's Inequality~\ref{hirzebruch}) \item If $i+j \geq n$ then $s_{i,j} =0$ (Lemma~\ref{KPlem}) \item If $i=0$ or $j=0$ then $s_{i,j} =0$ (Lemma~\ref{1redlem}) \item If $i^2 +i +j^2 +j \geq 4n-2 $ then $s_{i,j} =0$ (Lemmas~\ref{imp21} and~\ref{KPlem}) \item $\sum_{j\geq 2} i s_{i,j} \leq n\floor{n/2} $ (Observation~\ref{newgpobs}) \item $\sum_{i\geq 2} j s_{i,j} \leq n\floor{n/2} $ (Observation~\ref{newgpobs}) \item Constraints from Lemma~\ref{tricky} \item $\sum_{i,j\geq 1} s_{i,j} = 2n-2$ (Observation~\ref{obs1off}) \item $s_{i,j} \in \N_0$ \end{itemize} Running this linear program\footnote{% The program used to generate the linear programs for each case is available from the author's web page \url{www.ms.unimelb.edu.au/~mspayne/}.} for each case with $n\leq20$ yields just two cases with a feasible solution. They are the cases of $8$ red and $7$ blue points, and $6$ red and $5$ blue points. 
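To make the bookkeeping behind these constraints concrete, the following script (ours; the integer coordinates form an arbitrary illustrative configuration, not one from the paper) computes the $s_{i,j}$ for a set of $3$ red and $3$ blue points by brute force, and checks the three pair-counting identities together with the bichromatic-lines bound of Theorem~\ref{mainthm}:

```python
from math import gcd, comb
from itertools import combinations

# Hypothetical configuration: 3 red and 3 blue points with integer
# coordinates; neither colour class is collinear.  The identities
# checked below hold for any point set.
red  = [(0, 0), (1, 0), (2, 1)]
blue = [(0, 1), (1, 2), (3, 0)]
pts  = [(p, 'r') for p in red] + [(p, 'b') for p in blue]

def line_key(p, q):
    # Canonical triple (A, B, C) with Ax + By + C = 0 through p and q
    A, B = q[1] - p[1], p[0] - q[0]
    C = -(A * p[0] + B * p[1])
    g = gcd(gcd(abs(A), abs(B)), abs(C))
    A, B, C = A // g, B // g, C // g
    if (A, B, C) < (0, 0, 0):  # fix an orientation for the triple
        A, B, C = -A, -B, -C
    return (A, B, C)

# Collect the points lying on each determined line
lines = {}
for (p, cp), (q, cq) in combinations(pts, 2):
    lines.setdefault(line_key(p, q), set()).update([(p, cp), (q, cq)])

# s[(i, j)] = number of lines with exactly i red and j blue points
s = {}
for members in lines.values():
    i = sum(1 for _, colour in members if colour == 'r')
    s[(i, len(members) - i)] = s.get((i, len(members) - i), 0) + 1

bichromatic = sum(v for (i, j), v in s.items() if i >= 1 and j >= 1)
assert sum(comb(i, 2) * v for (i, j), v in s.items()) == comb(len(red), 2)
assert sum(comb(j, 2) * v for (i, j), v in s.items()) == comb(len(blue), 2)
assert sum(i * j * v for (i, j), v in s.items()) == len(red) * len(blue)
assert bichromatic >= len(pts) - 1  # the theorem's bound: here 7 >= 5
```

For the feasibility runs themselves, one would feed the listed constraints for each $n$ to an LP solver, as described above.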
In the first case, with $8$ red and $7$ blue points, the linear program returns a solution with $s_{2,3}=3$. If one adds the constraint that $s_{2,3} =0$, there is no longer a feasible solution. So suppose that $s_{2,3} \geq 1$. Consider a line $L$ containing $3$ blue points $b_1,b_2$ and $b_3$, and not containing $6$ of the red points. Using the proof method from Lemma~\ref{tricky}, one can check that $b_1$ has $2$ lines through the reds, and $b_2$ and $b_3$ have $3$ (other cases don't yield a counterexample). Transform $L$ to the line at infinity with a projective transformation. Then the $6$ red points lie on two parallel lines through $b_1$, with three reds on each. They also lie on three parallel lines through $b_2$. Finally, they should also lie on another set of three parallel lines through $b_3$. This is clearly impossible -- for example, note that there is only one non-crossing straight edged matching on the six red points. In the case with $6$ red and $5$ blue points, the linear program returns a solution with $s_{2,2}=6$, $s_{0,2}=4$, $s_{2,0}=6$ and $s_{2,1}=3$. If one adds the constraint that $s_{2,2}\leq5$, there is no longer a feasible solution. Similarly, there are no solutions with $s_{2,2}\geq7$, and also none with $s_{2,1}\leq 2$. We will show that this is not geometrically realisable. We will work in the projective plane and make use of the following well known fact. It is simply the statement that one projective basis can be transformed to another. \begin{proposition}\label{projsquare} Let $V$ and $W$ be real projective planes. Given $v_1, \ldots, v_4 \in V$ in general position and $w_1, \ldots , w_4 \in W$ in general position, there exists a unique collineation (a bijection that preserves collinearities) from $V$ to $W$ that maps each $v_i$ to $w_i$. \end{proposition} \begin{proposition}\label{66prop} It is not possible to arrange $6$ red points and $5$ blue points in the plane so that $s_{2,2}=6$ and $s_{2,1}=3$. 
\end{proposition} \begin{proof} Suppose for contradiction that $s_{2,2}=6$ and $s_{2,1}=3$. This gives $30$ bichromatic pairs, so there can be no more bichromatic lines. This implies that every blue point is on three lines containing two red points. Label the points $r_1, \ldots, r_6$ and $b_1,\ldots, b_5$. Suppose $\{ r_5,r_6,b_1, b_2\}$ lie on a line $L$. Since $b_1$ is collinear with two pairs in $\{ r_1, r_2, r_3, r_4\}$, this set is in general position. Hence by Proposition~\ref{projsquare} we may assume that they are the vertices of a square, with coordinates $(-1,1),(1,1), (-1,-1)$ and $(1,-1)$ respectively, as shown in Figure~\ref{66pic}. Since $b_2$ is also collinear with two red pairs in $\{ r_1, r_2, r_3, r_4 \} $, we may also assume, without loss of generality, that\footnote{This is the point at infinity in the direction of the $x$-axis.} $b_1 = \overline{r_1r_2} \cap \overline{r_3r_4}= (\infty,0)$ and $b_2 = \overline{r_1r_4}\cap \overline{r_2r_3}= (0,0)$. \begin{figure \begin{center} \includegraphics{65KPconj} \end{center} \caption{Construction for Proposition~\ref{66prop}.} \label{66pic} \end{figure} There is another blue point on the line $\overline{r_1r_2}$ (with equation $y=1$), say $b_3$, and a further blue point on $\overline{r_3r_4}$ (with equation $y=-1$), say $b_4$. The position of either $b_3$ or $b_4$ determines the set $\{r_5,r_6\}$. That is, $\{r_5,r_6\}=\{L \cap \overline{b_3r_3},L \cap \overline{b_3r_4}\}=\{L \cap \overline{b_4r_1}, L \cap \overline{b_4r_2} \}$. Since the configuration described thus far is symmetric about the line $y=0$, it follows that if $b_3=(a,1)$ for some real number $a$, then $b_4=(a,-1)$. At this stage there are six bichromatic lines with only one blue point: $\overline{r_1r_4},$ $\overline{r_4r_6}, \overline{r_6r_2}, \overline{r_2r_3}, \overline{r_3r_5}$ and $\overline{r_5r_1}$. There is one blue point $b_5$ left to determine, and it must lie on three of these lines. 
Note that the bichromatic lines form a cycle on the red points in the order listed. Neighbours in the cycle share a red point, so cannot share a blue point, and so $b_5$ lies on alternating lines in the cycle. By symmetry in the line $y=0$, we may assume $b_5$ lies on $\overline{r_2r_3},\overline{r_4r_6}$ and $\overline{r_5r_1}$. Since $\overline{r_2r_3}$ is the line $x=y$, we can say that $b_5=(c,c)$ for some real number\footnote{The point $b_5$ could also be at infinity on $\overline{r_2r_3}$. This case is easily excluded by inspection since both $\overline{r_4r_6}$ and $\overline{r_5r_1}$ would need to be parallel to $\overline{r_2r_3}$. There is no value of $a$ that achieves this.} $c$. Since $b_5$ lies on $\overline{r_5r_1}=\overline{b_4r_1}$, we have
$$ (c,c) = \lambda(a,-1) + (1-\lambda)(-1,1) $$
for some parameter $\lambda$. Eliminating $\lambda$ from these two equations yields
$$ ac = a-1-3c \enspace .$$
Similarly, since $b_5$ lies on $\overline{r_4r_6}=\overline{b_3r_4}$, we have
$$ (c,c) = \gamma(a,1) + (1-\gamma)(1,-1) $$
for some parameter $\gamma$. Eliminating $\gamma$ from these two equations yields
$$ac=3c-a-1 \enspace .$$
Equating both expressions for $ac$ yields $a =3c$, and substituting this into the above equation yields $3c^2 = -1$. This contradiction concludes the proof.
\end{proof}
\subsection*{Acknowledgements}
I would like to thank Brendan McKay for some fruitful discussions.
https://arxiv.org/abs/1503.06281
Bichromatic lines in the plane
Given a set of red and blue points in the plane, a bichromatic line is a line containing at least one red and one blue point. We prove the following conjecture of Kleitman and Pinchasi (unpublished, 2003). Let P be a set of n red, and n or n-1 blue points in the plane. If neither colour class is collinear, then P determines at least |P|-1 bichromatic lines. In fact we are able to achieve the same conclusion under the weaker assumption that P is not collinear or a near-pencil.
https://arxiv.org/abs/2105.05364
A Hermite Method with a Discontinuity Sensor for Hamilton-Jacobi Equations
We present a Hermite interpolation based partial differential equation solver for Hamilton-Jacobi equations. Many Hamilton-Jacobi equations have a nonlinear dependency on the gradient, which gives rise to discontinuities in the derivatives of the solution, resulting in kinks. We built our solver with two goals in mind: 1) high order accuracy in smooth regions and 2) sharp resolution of kinks. To achieve this, we use Hermite interpolation with a smoothness sensor. The degrees of freedom of Hermite methods are tensor-product Taylor polynomials of degree $m$ in each coordinate direction. The method uses $(m+1)^d$ degrees of freedom per node in $d$-dimensions and achieves an order of accuracy $(2m+1)$ when the solution is smooth. To obtain sharp resolution of kinks, we sense the smoothness of the solution on each cell at each timestep. If the solution is smooth, we march the interpolant forward in time with no modifications. When our method encounters a cell over which the solution is not smooth, it introduces artificial viscosity locally while proceeding normally in smooth regions. We show through numerical experiments that the solver sharply captures kinks once the solution loses continuity in the derivative, while achieving $2m+1$ order accuracy in smooth regions.
\section{Introduction}
We consider the time-dependent Hamilton-Jacobi (HJ) equation
\begin{equation}
\varphi_t + H(\varphi_{x_1},\varphi_{x_2},\dots,\varphi_{x_d}) = 0, \quad \varphi(\mathbf{x},0) = \varphi^0(\mathbf{x}), \quad \mathbf{x} \in \Omega \subset \mathbb{R}^{d},
\label{HJ}
\end{equation}
with periodic boundary conditions on $\partial \Omega$. HJ equations appear in many applications, e.g., optimal control, differential games, image processing and the calculus of variations. The solution of an HJ equation may develop a discontinuity in the derivative even when the initial data is smooth. As in conservation laws, the unique solution can be singled out by the use of viscosity solutions \cite{crandall1983viscosity,crandall1984some}. In particular, the viscosity solution gives requirements on the solution at points of discontinuity that allow us to find the unique physically relevant solution. Many methods have been designed for HJ equations that a) converge to the viscosity solution as the grid is refined even in the presence of kinks, and b) maintain high-order accuracy in smooth regions; here we mention only a few. The first methods to attempt this were introduced in \cite{souganidis1985approximation,crandall1984two}. Essentially non-oscillatory (ENO) \cite{osher1991high} and weighted ENO (WENO) \cite{jiang2000weighted,zhang2003high} schemes have been developed to solve the HJ equation. These methods have been shown to work efficiently, but require a large stencil in order to obtain high-order accuracy. More recently, there has been work using discontinuous Galerkin methods, originally developed for conservation laws. This idea was introduced in \cite{osher1991high}, where the authors exploited the fact that the gradient of the solution satisfies a conservation law system, and used a standard discontinuous Galerkin method to advance the solution in time and then recovered the solution from the derivatives.
In reference \cite{cheng2007discontinuous} the authors developed a discontinuous Galerkin method for directly solving \eqref{HJ} when the Hamiltonian is linear or convex; this eliminates the need to solve systems in the multidimensional case. This work was later improved upon in \cite{cheng2014new}. The improvement extends the method to approximate solutions to \eqref{HJ} when the Hamiltonian is not convex. Other improvements include avoiding reconstruction of the solution across elements by utilizing the Roe speed at the cell interfaces and adding an entropy fix inspired by Harten and Hyman \cite{harten1983self}. In \cite{qiu2005hermite} Qiu and Shu use WENO methods together with Hermite interpolating polynomials (HWENO). These methods are derived from the original WENO methods, but both the function and the first derivative are evolved and used in the reconstruction of the solution. An important advantage of HWENO over WENO is that a more compact stencil may be used for the same order of accuracy. There have been several extensions of this work \cite{qiu2007hermite,zheng2017finite,tao2015high,zahran2016seventh}; in \cite{zahran2016seventh} the authors develop a seventh order method, which is the same order of accuracy we obtain using three derivatives. The success of HWENO methods has inspired us to build a Hermite method solver for \eqref{HJ}. Using Hermite methods, we may achieve an arbitrary order while keeping a compact stencil. Hermite methods were first studied in \cite{goodrich2006hermite}, where the authors use Hermite methods to solve hyperbolic initial-boundary value problems. Stability and convergence of the method were proved and various numerical examples were provided. Several adaptations of the original Hermite method have been developed. In \cite{appelo2018hermite} the authors use Hermite interpolants to solve the wave equation using dissipative and conservative formulations.
A hybrid Hermite-discontinuous Galerkin method was used in \cite{chen2014hybrid}, where the authors approximated the solution of Maxwell's equations. In \cite{cons_hermite} Hermite methods for hyperbolic conservation laws were considered, where the entropy viscosity by Guermond et al., \cite{Guermond2008801}, was adapted to the Hermite framework. The current paper presents the first Hermite method for Hamilton-Jacobi equations. The challenge in solving Hamilton-Jacobi equations stems from the nonlinear dependency on the gradient, which gives rise to discontinuities in the derivatives of the solution, resulting in kinks. To resolve the kinks, we sense the smoothness of the solution on each cell at each timestep by adopting the sensor introduced by Persson and Peraire in \cite{persson2006sub} and later refined by Klockner, Warburton and Hesthaven in \cite{klockner2011viscous}. Following \cite{persson2006sub,klockner2011viscous}, we make no modifications if the solution is smooth, but when the solution fails to be smooth we locally introduce artificial viscosity. Through numerical experiments we demonstrate that the solver has the following properties: 1) high order accuracy in smooth regions and 2) sharp resolution of kinks. \deaa{Although (as for most high order methods) we cannot expect the order of accuracy to be $(2m+1)$ when the solution becomes non-smooth, we observed in \cite{AppHaglinear} that the constant in front of the $h$ term in the error expansion can be significantly smaller when the formal order is high.} The rest of this paper is organized as follows: in Section 2 we introduce the numerical scheme for the one-dimensional and two-dimensional HJ equations. Section 3 explains how we sense the smoothness of each element. Section 4 is devoted to the discussion of numerical experiments. \FinalEdits{Code for the numerical examples can be found in the github repository \url{https://github.com/allenalvarezloya/Hermite_HJ}}.
Conclusions and future work are discussed in Section 5.
\section{Hermite Methods}
A Hermite method of order $2m+1$ approximates the solution to a PDE by element-wise polynomials that interpolate the solution and derivatives up to degree $m$ at the element interfaces. The time evolution is done locally from the center of the element. In Hermite methods, the degrees of freedom are the function and the spatial derivatives, or equivalently the Taylor coefficients. The evolution of the polynomials depends on the nature of the PDE.
\subsection{Hermite Method in One Dimension}
We describe a Hermite PDE method in one dimension for HJ equations. This method closely follows the Hermite solver for conservation laws described in \cite{hagstrom2015solving}. Note that we use this method, with no modifications, while the solution is smooth. Consider the Hamilton-Jacobi equation with periodic boundary conditions
\begin{equation}
\begin{cases}
\varphi_t +H(\varphi_x) = 0,\\
\varphi(x,0)=\varphi^0(x).\\
\end{cases}
\label{PDE}
\end{equation}
\subsubsection{Initialization}
We discretize the spatial coordinate into a primal grid and a dual grid
\begin{equation}
x_i = x_l + i h_x, \quad h_x = (x_r - x_l)/n_x,
\label{Grid}
\end{equation}
where $i = 0,\dots, n_x$ for the primal grid and $i = 1/2, \dots , n_x - 1/2$ for the dual grid. The first step initializes the degrees of freedom by setting them to be the scaled derivatives at each primal grid point. That is,
\[
c_{l,i} = \left.\frac{h_x^l}{l!} \frac{d^l \varphi(x,0)}{dx^l} \right|_{x=x_i}.
\]
We then interpolate the scaled derivative data to obtain the piecewise degree-$2m+1$ polynomial
\begin{equation}
v_{i+\frac{1}{2}}(x,0) = \sum \limits_{l = 0}^{2m+1} d_{l,i+\frac{1}{2}} \left(\frac{x - x_{i+\frac{1}{2}}}{h_x }\right)^l,
\end{equation}
where the coefficients $d_{l,i+\frac{1}{2}}$ are uniquely determined by the interpolation conditions
\begin{equation}
\frac{h_x^l}{l!}\frac{\partial^l }{\partial x^l} v_{i+\frac{1}{2}}(x_i) = c_{l,i}, \quad \frac{h_x^l}{l!}\frac{\partial^l }{\partial x^l} v_{i+\frac{1}{2}}(x_{i+1}) = c_{l,i+1},
\end{equation}
for $l = 0, \dots, m$.
\subsubsection{Evolution}
We treat the coefficients in the Hermite interpolant as functions of time. That is,
\begin{equation}
v_{i+\frac{1}{2}}(x,t) = \sum \limits_{l = 0}^{2m+1} d_{l,i+\frac{1}{2}}(t) \left(\frac{x - x_{i+\frac{1}{2}}}{h_x }\right)^l.
\label{PDE_Hermite_expansion}
\end{equation}
For our PDE $\varphi_t = -H(\varphi_x)$ we substitute \eqref{PDE_Hermite_expansion}:
\begin{equation}
\frac{\partial v_{i+\frac{1}{2}}(x,t) }{\partial t } = \sum \limits_{l = 0}^{2m+1} d_{l,i+\frac{1}{2}}^{'}(t) \left(\frac{x - x_{i+\frac{1}{2}}}{h_x }\right)^l = -H(v_x(x,t)).
\label{PDE_Hermite_expansion_substitution}
\end{equation}
We can differentiate \eqref{PDE_Hermite_expansion_substitution} $k$ times in space and evaluate at $x = x_{i+\frac{1}{2}}$ to obtain
\begin{equation}
\frac{k!}{h_x^k} d^{'}_{k,i+\frac{1}{2}} (t) = -\left. \frac{\partial^k}{\partial x^k} H(v_x(x,t)) \right |_{x = x_{i + \frac{1}{2}}}.
\label{DiffOfHamiltonian}
\end{equation}
When $H$ is nonlinear the differentiation of the RHS can spawn new terms by the product rule. We avoid this by approximating the right hand side by a Taylor polynomial of degree $2m+1$,
\begin{equation}
-H(v_x) \approx \sum \limits_{l=0}^{2m+1} b_{l,i+\frac{1}{2}}(t) \left (\frac{x - x_{i + \frac{1}{2}}}{h_x} \right )^l,
\label{Taylor_expansion}
\end{equation}
for which the differentiation is straightforward.
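Stepping back to the initialization stage, the interpolation conditions amount to a square linear system for the coefficients $d_{l,i+\frac{1}{2}}$. A minimal sketch (ours, with an illustrative function name) in exact rational arithmetic, written in the local variable $t = (x - x_{i+\frac{1}{2}})/h_x$ so that the two nodes sit at $t = \mp \frac{1}{2}$:

```python
from fractions import Fraction
from math import comb

def hermite_coeffs(m, c_left, c_right):
    """Coefficients d_0..d_{2m+1} of v(t) = sum_l d_l t^l matching the
    scaled derivatives c_l = v^{(l)}(t0) / l! at t0 = -1/2 and t0 = +1/2."""
    n = 2 * m + 2
    rows, rhs = [], []
    for t0, data in ((Fraction(-1, 2), c_left), (Fraction(1, 2), c_right)):
        for l in range(m + 1):
            # v^{(l)}(t0)/l! = sum_{j >= l} C(j, l) d_j t0^{j-l}
            rows.append([comb(j, l) * t0 ** (j - l) if j >= l else Fraction(0)
                         for j in range(n)])
            rhs.append(Fraction(data[l]))
    # Gaussian elimination with partial pivoting, in exact arithmetic
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(rows[r][col]))
        rows[col], rows[piv] = rows[piv], rows[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = rows[r][col] / rows[col][col]
            rows[r] = [a - f * b for a, b in zip(rows[r], rows[col])]
            rhs[r] -= f * rhs[col]
    d = [Fraction(0)] * n
    for r in reversed(range(n)):
        d[r] = (rhs[r] - sum(rows[r][j] * d[j] for j in range(r + 1, n))) / rows[r][r]
    return d

# Degree 3 (m = 1): data sampled from v(t) = 1 + 2t + 3t^2 + 4t^3
# is reproduced exactly.
d = hermite_coeffs(1, [Fraction(1, 4), Fraction(2)], [Fraction(13, 4), Fraction(8)])
assert d == [1, 2, 3, 4]
```

Since Hermite interpolation at two nodes is uniquely solvable, any polynomial of degree at most $2m+1$ is recovered exactly, which is the check performed above.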
With this approximation we carry out the differentiation in \eqref{DiffOfHamiltonian} and obtain the local system of ODEs
\begin{equation}
d^{'}_{k,i+\frac{1}{2}}(t) = b_{k,i+\frac{1}{2}}(t), \quad k = 0, \dots , 2m+1.
\label{ODEs1D}
\end{equation}
We can solve this system to evolve our approximate solution using a standard one step ODE solver as described in \cite{HagApp07}. Before we can evolve, we must find the Taylor coefficients $b_{k,i+\frac{1}{2}}(t)$. The computation of the coefficients, $b_{k,i+\frac{1}{2}}$, is problem specific and depends on the form of the Hamiltonian, $H$.
\subsubsection{Polynomial Approximation of Non-linear Hamiltonians}
\deaa{The method described above relies on the computation of the Taylor series coefficients of functions of polynomials. It is convenient and efficient to compute these coefficients through recursions that rely on Cauchy products rather than through the higher order chain rule (Fa\`{a} di Bruno's formula), which has combinatorial complexity. Our discussion here closely follows \cite{neidinger2013efficient}.}
We describe this process for the PDE that is used in Example 3 in the numerical experiments section below,
\begin{align*}
\varphi_t = \cos(\varphi_x + 1).
\end{align*}
A Taylor series expansion for the right hand side of this equation can be obtained by \Allen{the chain rule}. If in general the functions $f(x), w(x)$ and $u(x)$ have Taylor series expansions
\begin{align*}
f(x) &= \sum \limits_{k=0}^{\infty} F_k \left(\frac{x - x_i}{h} \right)^k,\\
w(x) &= \sum \limits_{k=0}^{\infty} W_k \left(\frac{x - x_i}{h} \right)^k,\\
u(x) &= \sum \limits_{k=0}^{\infty} U_k \left(\frac{x - x_i}{h} \right)^k,
\end{align*}
then for non-linearities that satisfy a differential equation $f'(x) = w(x)u'(x)$ we can compute $F_k, k = 1,2,\dots $ using the formula
\begin{align*}
F_k = W_0 U_k + \frac{1}{k} \sum \limits_{j = 0}^{k-1} j U_j W_{k-j}.
\end{align*}
Here we are interested in approximating $\cos(\varphi_x+1)$ and thus note that the functions $s(x) = \sin(u(x))$ and $c(x) = \cos(u(x))$ satisfy $s'(x) = c(x)u'(x)$ and $c'(x) = -s(x)u'(x)$ simultaneously. Thus, we use both relationships and the formula above to obtain $\cos(\varphi_x+1)$. Precisely, first we compute
\[ (\varphi_x)_k = \left\{ \begin{array}{ll} \frac{k+1}{h_x} \varphi_{k+1} & \mbox{if $k = 0 \dots 2m$},\\ 0 & \mbox{if $k = 2m+1$},\end{array} \right. \]
followed by
\[ \Allen{U_k} = \left\{ \begin{array}{ll} (\varphi_x)_k + 1& \mbox{if $k = 0$},\\ (\varphi_x)_k & \mbox{if $k = 1,\dots, 2m+1$}.\end{array} \right. \]
\Allen{Note that the coefficients, $U_k$, are simply computed from the Taylor expansion of $\varphi$.} We then set $C_0 = \cos(U_0)$ and $S_0 = \sin(U_0)$ and update the remaining coefficients using the recursion
\begin{align*}
S_k &= C_0U_k+\texttt{ddot}(U,C,k),\\
C_k &= -S_0U_k-\texttt{ddot}(U,S,k),
\end{align*}
where $U = \Allen{[U_0, U_1, \dots, U_{2m+1}]^T}$, $C = \Allen{[C_0, C_1, \dots, C_{2m+1}]^T}$, $S = \Allen{[S_0, S_1, \dots, S_{2m+1}]^T}$ and \verb+ddot+ is the function given by
\begin{align*}
\texttt{ddot}(A,B,k) = \frac{1}{k} \sum \limits_{j = 0}^{k-1} j A_j B_{k-j}.
\end{align*}
The $C_k$ are the Taylor coefficients of the polynomial approximating $\cos(\varphi_x+1)$. In Table \ref{tab:TaylorTable1} we display the $L_1,L_2$ and $L_{\infty}$ norm errors for $H(\varphi_x) = -\cos(\varphi_x+1)$ with $x_l = 0$, $x_r = 2\pi$ and $n_x = 20, 40, 80$ and $160$ gridpoints. As can be seen from the estimated rates of convergence, the procedure is as accurate as the Hermite interpolation ($h^{2m+2}$). To the left in Figure \ref{fig:TaylorExample} we plot the Taylor series approximation, which can be seen to be accurate.
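The recursion can be implemented in a few lines. The sketch below (ours, with an illustrative function name) folds the leading terms $C_0U_k$ and $-S_0U_k$ into the sums, which is algebraically equivalent; \texttt{U} holds the Taylor coefficients of the inner function, here $(\varphi_x)_0 + 1, (\varphi_x)_1, \dots$:

```python
from math import sin, cos

def sincos_series(U):
    """Taylor coefficients of sin(u) and cos(u) from those of u,
    via s' = c u' and c' = -s u' (leading term folded into the sum)."""
    n = len(U)
    S = [sin(U[0])] + [0.0] * (n - 1)
    C = [cos(U[0])] + [0.0] * (n - 1)
    for k in range(1, n):
        S[k] = sum(j * U[j] * C[k - j] for j in range(1, k + 1)) / k
        C[k] = -sum(j * U[j] * S[k - j] for j in range(1, k + 1)) / k
    return S, C

# Sanity check against the Maclaurin series of sin and cos: u(t) = t
S, C = sincos_series([0.0, 1.0, 0.0, 0.0, 0.0, 0.0])
assert abs(C[2] + 1 / 2) < 1e-14 and abs(C[4] - 1 / 24) < 1e-14
assert abs(S[3] + 1 / 6) < 1e-14 and abs(S[5] - 1 / 120) < 1e-14
```

For $H(\varphi_x) = -\cos(\varphi_x+1)$ one passes the shifted coefficients $U$ and reads off the returned $C_k$.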
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.49\textwidth]{TaylorExample}
\includegraphics[width=0.49\textwidth]{TaylorExample2}
\caption{On the left we plot the numerical approximation to $H(\varphi_x) = -\cos(\varphi_x+1)$ as a solid line and sample the actual function as circles. On the right we plot the numerical approximation to $H(\varphi_x) = -|\varphi_x|$ as a solid line and sample the actual function as circles. In both cases we used $m = 3$ derivatives. \label{fig:TaylorExample}}
\end{center}
\end{figure}
\begin{table}[ht]
\caption{Errors in approximating $H(\varphi_x) = -\cos(\varphi_x+1)$ using the Taylor series approximation recursions in the $L_1$, $L_2$ and $L_{\infty}$ norms, displayed along with estimated rates of convergence. Note that we obtain the expected rate of $2m+2$, where $m$ is the number of derivatives used in the approximation.\label{tab:TaylorTable1}}
\begin{center}
\begin{tabular}{c c c c c c c}
\hline
n & $L_1$ error & Convergence & $L_2$ error & Convergence & $L_{\infty}$ error & Convergence \\
\hline
 & & & $m = 2$ & & &\\
\hline
20 & 1.18e-05 & - & 1.95e-05 & - & 5.05e-05 & - \\
40 & 1.75e-07 & 6.08 & 3.07e-07 & 5.99 & 9.89e-07 & 5.68 \\
80 & 2.74e-09 & 5.99 & 4.86e-09 & 5.98 & 1.61e-08 & 5.94 \\
160 & 4.44e-11 & 5.95 & 7.94e-11 & 5.94 & 2.71e-10 & 5.90 \\
\hline
 & & & $m = 3$ & & &\\
\hline
20 & 1.42e-07 & - & 2.64e-07 & - & 7.65e-07 & - \\
40 & 5.69e-10 & 7.96 & 1.04e-09 & 7.99 & 3.37e-09 & 7.83 \\
80 & 2.19e-12 & 8.02 & 4.06e-12 & 8.00 & 1.32e-11 & 8.00 \\
160 & 8.93e-15 & 7.94 & 1.59e-14 & 8.00 & 5.20e-14 & 7.99 \\
\hline
\end{tabular}
\end{center}
\end{table}
In the case of a Hamiltonian with an absolute value function the above procedure cannot be used. We describe how to handle such cases using the Eikonal equation
\begin{align*}
\varphi_t + |\varphi_x| = 0,
\end{align*}
as an example. Here, we need the expansion of $|\varphi_x|$.
If $\varphi_x$ is not sign definite, then we cannot use the method above to obtain a Taylor series expansion, and instead we proceed by computing
\[ (\varphi_x)_k = \left\{ \begin{array}{ll} \frac{k+1}{h_x} \varphi_{k+1} & \mbox{if $k = 0 \dots 2m$},\\ 0 & \mbox{if $k = 2m+1$}.\end{array} \right. \]
At the cell center the value of the derivative is given by the leading coefficient. That is, $\varphi_x(x_i) = (\varphi_x)_0$. Therefore, as an approximation we take
\[ |(\varphi_x)_k| = \left\{ \begin{array}{ll} (\varphi_x)_k & \mbox{if $(\varphi_x)_0 \geq 0$},\\ -(\varphi_x)_k & \mbox{if $(\varphi_x)_0 < 0$}.\end{array} \right. \]
In Table \ref{tab:TaylorTable2} we display the $L_1,L_2$ and $L_{\infty}$ norm errors for $H(\varphi_x) = |\varphi_x|$ using $x_l = 0$, $x_r = 2\pi$ with $n_x = 20, 40, 80$ and $160$ gridpoints. We show the estimated rates of convergence for each norm. Of course, here we cannot expect the full order of convergence as the nonlinearity is not smooth. To the right in Figure \ref{fig:TaylorExample} we plot the Taylor series approximation, which can be seen to be an accurate approximation to the true Hamiltonian. \Allen{Since the degrees of freedom from a single point are used to evaluate the solution on the entire cell, a change in the sign of $\varphi_x$ results in an error of either $2\varphi_x$ or $-2\varphi_x$ from the point where the sign change occurs to the end of the cell. This implies that the error is $\mathcal{O}(h)$ over the cells where a change in sign of $\varphi_x$ occurs.}
\begin{table}[ht]
\caption{Errors in approximating $H(\varphi_x) = |\varphi_x|$ using the Taylor series approximation in the $L_1$, $L_2$ and $L_{\infty}$ norms, displayed along with estimated rates of convergence.
Note that the Hamiltonian we are trying to interpolate is not smooth, thus we do not obtain the expected rate of $2m+2$, where $m$ is the number of derivatives used in the approximation.} \begin{center} \Allen{ \begin{tabular}{c c c c c c c } \hline n & $L_1$ error & Convergence & $L_2$ error & Convergence & $L_{\infty}$ error & Convergence \\ \hline & & & $m = 2$ & & &\\ \hline 20 & 6.15e-02 & - & 1.20e-01 & - & 3.13e-01 & - \\ 40 & 1.54e-02 & 2.00 & 4.26e-02 & 1.50 & 1.57e-01 & 1.00 \\ 80 & 3.85e-03 & 2.00 & 1.51e-02 & 1.50 & 7.85e-02 & 1.00 \\ 160 & 9.64e-04 & 2.00 & 5.33e-03 & 1.50 & 3.93e-02 & 1.00 \\ \hline & & & $m = 3$ & & &\\ \hline 20 & 6.15e-02 & - & 1.20e-01 & - & 3.13e-01 & - \\ 40 & 1.54e-02 & 2.00 & 4.26e-02 & 1.50 & 1.57e-01 & 1.00 \\ 80 & 3.85e-03 & 2.00 & 1.51e-02 & 1.50 & 7.85e-02 & 1.00 \\ 160 & 9.64e-04 & 2.00 & 5.33e-03 & 1.50 & 3.93e-02 & 1.00 \\ \hline \end{tabular} } \end{center} \label{tab:TaylorTable2} \end{table} \subsubsection{\Allen{Polynomial Approximation of Non-linear Hamiltonians in Higher Dimensions}} \Allen{ The computation of the multivariate Taylor coefficients of functions of polynomials is similar to the univariate case. For a composition $H(\mathbf{x}) = f(u(\mathbf{x}))$ where $f:\mathbb{R} \to \mathbb{R}$ is a standard function we have \begin{align*} \frac{\partial H}{\partial x_i}(\mathbf{x}) & = f'(u(\mathbf{x}))\frac{\partial u}{\partial x_i}(\mathbf{x})\\ & = w(\mathbf{x})\frac{\partial u}{\partial x_i}(\mathbf{x}), \end{align*} which is similar to the one-dimensional relationship $h'(x) = w(x)u'(x)$. Using this relationship one can derive the recursion \begin{align*} H_{\mathbf{k}} & = W_{\mathbf{0}}U_{\mathbf{k}} + \texttt{ddot}(W,U,\mathbf{k}).
\end{align*} Here \begin{align*} \texttt{ddot}(W,U,\mathbf{k}) = \frac{1}{k_p} \sum \limits_{\mathbf{e}_p \leq \mathbf{j} \lvertneqq \mathbf{k}} j_p U(\mathbf{j}) W(\mathbf{k} - \mathbf{j}), \end{align*} where the sum is over all multi-indices with coordinates $j_p = 1,2,\dots,k_p$ and all other $j_i = 0,1,\dots,k_i$, excluding $\mathbf{j} = \mathbf{k}$. We approximate the functions $H(x,y) = \cos(\cos(x+y))$, $H(x,y) = \sin(\sin(x)+\cos(y))$ and $H(x,y) = \sin(\sin(x)\cos(y))$ using the recursion described above. In each example we estimate the rate of convergence by refining the grid by a factor of two starting with $n = 10$ cells in each coordinate direction and ending with $n = 80$. We compute using $m =2$ and $m = 3$ derivatives. For each example we observe the optimal rate of convergence reported in Table \ref{tab:Taylor2Dcoscos}, Table \ref{tab:Taylor2DsinAdd} and Table \ref{tab:Taylor2DsinMult} for $H(x,y) = \cos(\cos(x+y))$, $H(x,y) = \sin(\sin(x)+\cos(y))$ and $H(x,y) = \sin(\sin(x)\cos(y))$, respectively. We display the approximations in Figure \ref{fig:Taylor2D}. } \begin{table}[ht] \caption{\Allen{Errors in approximating $f(x,y) = \cos(\cos(x+y))$ using Taylor series in the $L_1$, $L_2$ and $L_{\infty}$ norms are displayed along with estimated rates of convergence.
We obtain the expected rate of $2m+2$, where $m$ is the number of derivatives used in the approximation.}} \begin{center} \Allen{ \begin{tabular}{c c c c c c c } \hline n & $L_1$ error & Convergence & $L_2$ error & Convergence & $L_{\infty}$ error & Convergence \\ \hline & & & $m = 2$ & & &\\ \hline 10 & 4.03e-04 & - & 1.18e-04 & - & 4.14e-05 & - \\ 20 & 6.23e-06 & 6.02 & 1.83e-06 & 6.01 & 8.33e-07 & 5.63 \\ 40 & 9.69e-08 & 6.01 & 2.86e-08 & 6.00 & 1.32e-08 & 5.98 \\ 80 & 1.51e-09 & 6.00 & 4.47e-10 & 6.00 & 2.06e-10 & 6.00 \\ \hline & & & $m = 3$ & & &\\ \hline 10 & 9.86e-06 & - & 2.89e-06 & - & 2.11e-06 & - \\ 20 & 3.82e-08 & 8.01 & 1.09e-08 & 8.04 & 6.98e-09 & 8.24 \\ 40 & 1.49e-10 & 8.01 & 4.27e-11 & 8.00 & 3.14e-11 & 7.80 \\ 80 & 5.81e-13 & 8.00 & 1.67e-13 & 8.00 & 1.27e-13 & 7.95 \\ \end{tabular} } \end{center} \label{tab:Taylor2Dcoscos} \end{table} \begin{table}[ht] \caption{\Allen{Errors in approximating $f(x,y) = \sin(\sin(x)+\cos(y))$ using Taylor series in the $L_1$, $L_2$ and $L_{\infty}$ norms are displayed along with estimated rates of convergence.
We obtain the expected rate of $2m+2$, where $m$ is the number of derivatives used in the approximation.}} \begin{center} \Allen{ \begin{tabular}{c c c c c c c} \hline n & $L_1$ error & Convergence & $L_2$ error & Convergence & $L_{\infty}$ error & Convergence \\ \hline & & & $m = 2$ & & &\\ \hline 10 & 4.33e-04 & - & 1.17e-04 & - & 8.38e-05 & - \\ 20 & 6.69e-06 & 6.02 & 1.83e-06 & 6.00 & 1.48e-06 & 5.82 \\ 40 & 1.04e-07 & 6.01 & 2.85e-08 & 6.00 & 2.31e-08 & 6.00 \\ 80 & 1.62e-09 & 6.01 & 4.46e-10 & 6.00 & 3.60e-10 & 6.01 \\ \hline & & & $m = 3$ & & &\\ \hline 10 & 8.70e-06 & - & 2.45e-06 & - & 1.70e-06 & - \\ 20 & 3.35e-08 & 8.02 & 9.53e-09 & 8.01 & 7.95e-09 & 7.74 \\ 40 & 1.30e-10 & 8.01 & 3.72e-11 & 8.00 & 3.12e-11 & 7.99 \\ 80 & 5.06e-13 & 8.00 & 1.45e-13 & 8.00 & 1.22e-13 & 8.00 \\ \hline \end{tabular} } \end{center} \label{tab:Taylor2DsinAdd} \end{table} \begin{table}[ht] \caption{\Allen{Errors in approximating $f(x,y) = \sin(\sin(x)\cos(y))$ using Taylor series in the $L_1$, $L_2$ and $L_{\infty}$ norms are displayed along with estimated rates of convergence.
We obtain the expected rate of $2m+2$, where $m$ is the number of derivatives used in the approximation.}} \begin{center} \Allen{ \begin{tabular}{c c c c c c c } \hline n & $L_1$ error & Convergence & $L_2$ error & Convergence & $L_{\infty}$ error & Convergence \\ \hline & & & $m = 2$ & & &\\ \hline 10 & 2.11e-04 & - & 7.01e-05 & - & 4.76e-05 & - \\ 20 & 3.22e-06 & 6.03 & 1.10e-06 & 6.00 & 8.39e-07 & 5.83 \\ 40 & 5.17e-08 & 5.96 & 1.71e-08 & 6.00 & 1.34e-08 & 5.97 \\ 80 & 8.10e-10 & 6.00 & 2.67e-10 & 6.00 & 2.11e-10 & 5.99 \\ \hline & & & $m = 3$ & & &\\ \hline 10 & 3.62e-06 & - & 1.30e-06 & - & 9.47e-07 & - \\ 20 & 1.40e-08 & 8.01 & 5.12e-09 & 7.99 & 3.88e-09 & 7.93 \\ 40 & 5.49e-11 & 7.99 & 1.99e-11 & 8.00 & 1.51e-11 & 8.01 \\ 80 & 2.18e-13 & 7.98 & 7.78e-14 & 8.00 & 5.97e-14 & 7.98 \\ \end{tabular} } \end{center} \label{tab:Taylor2DsinMult} \end{table} \begin{figure}[hbt] \begin{center} \includegraphics[width=0.32\textwidth]{TaylorCosCos.eps} \includegraphics[width=0.32\textwidth]{TaylorSinAdd.eps} \includegraphics[width=0.32\textwidth]{TaylorSinMult.eps} \caption{\Allen{We plot the approximations obtained by the Taylor series recursion examples. Left: $H(x,y) = \cos(\cos(x+y))$. Middle: $H(x,y) = \sin(\sin(x) + \cos(y))$. Right: $H(x,y) = \sin(\sin(x)\cos(y))$. For each example $n = 20$ cells were used with $m = 3$ derivatives.}} \label{fig:Taylor2D} \end{center} \end{figure} \subsubsection{Second Half-step and Boundary Conditions} To complete a full time step we repeat this procedure, starting with the initial data obtained from evolving \eqref{ODEs1D} a half-step $\Delta t / 2$ on the dual grid. At the interior nodes $x_i, \ i = 1, \dots, n_x-1$ the procedure is the same as above, but at the boundary nodes we must fill in ghost polynomials at $x_{-\frac{1}{2}}$ and $x_{n_x+\frac{1}{2}}$, for example by using the properties of the PDE or, as in the case of the periodic boundary conditions considered here, by simply copying data from the opposite boundary.
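In the periodic case, filling the ghost polynomials is just a copy of the coefficient data from the opposite side of the domain. A minimal sketch (the array layout is our own choice, not the paper's implementation):

```python
import numpy as np

# Dual-grid cell polynomials: dual[i] holds the coefficient vector of the
# cell centered at x_{i+1/2}, for i = 0, ..., n_x - 1.
n_x, ncoef = 8, 4
dual = np.arange(n_x * ncoef, dtype=float).reshape(n_x, ncoef)

# Periodic ghosts: the ghost cell at x_{-1/2} is a copy of the cell at
# x_{n_x-1/2}, and the ghost at x_{n_x+1/2} is a copy of the cell at x_{1/2}.
padded = np.vstack([dual[-1], dual, dual[0]])
```

With `padded` in hand, every primal node, including the two boundary nodes, has a dual-grid polynomial on each side to interpolate from.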
\subsection{Hermite Method in Two-Dimensions} The method in higher dimensions is a direct tensor product extension of the one dimensional procedure. We now describe the method for the Hamilton-Jacobi equation in two-dimensions, with periodic boundary conditions \begin{equation} \begin{cases} \varphi_t +H(\varphi_{x},\varphi_{y}) = 0,\\ \varphi(x,y,0)=\varphi^0(x,y),\\ \end{cases} \label{PDE2D} \end{equation} on the rectangular domain $D = [x_L,x_R] \times [y_B,y_T]$. \subsubsection{Initialization} We discretize the grid as follows, \begin{equation} (x_i,y_j) = (x_L + ih_x,y_B+jh_y), \label{2Dgrid} \end{equation} where $h_x = (x_R - x_L)/n_x$ and $h_y = (y_T - y_B)/n_y$, with $i = 0,\dots, n_x$, $j = 0,\dots, n_y$ for the primal grid and $i = 1/2, \dots , n_x - 1/2, \ j = 1/2, \dots , n_y - 1/2$ for the dual grid. The first step initializes the degrees of freedom by setting them to be the scaled derivatives at each point on the primal grid \[ c_{l_x,l_y} = \left.\frac{h_x^{l_x}}{l_x!} \frac{h_y^{l_y}}{l_y!} \frac{\partial^{l_x+l_y} \varphi(x,y,0)}{\partial x^{l_x}\partial y^{l_y}} \right|_{(x,y)=(x_i,y_j)}. \] Note that $c_{l_x,l_y} = c_{l_x,l_y,i,j}$, but we suppress the spatial indices for notational convenience. We interpolate onto the dual grid to obtain the tensor-product Taylor polynomials \begin{equation} v_{i+\frac{1}{2},j+\frac{1}{2}}(x,y,0) = \sum \limits_{l_x = 0}^{2m+1} \sum \limits_{l_y = 0}^{2m+1} d_{l_x,l_y} \left(\frac{x - x_{i+\frac{1}{2}}}{h_x }\right)^{l_x} \left(\frac{y - y_{j+\frac{1}{2}}}{h_y }\right)^{l_y}, \label{2Dinterpolant} \end{equation} where the polynomial interpolates the function values and the partial derivatives at the four corners of the cell. Algorithmically, forming the interpolant is done by repeated one-dimensional interpolation.
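The repeated one-dimensional step is two-point Hermite interpolation: from the function value and $m$ derivatives at two neighboring nodes one obtains the unique degree-$2m+1$ interpolant. A minimal sketch using Newton divided differences with repeated nodes (helper names are ours; a production code would work directly on the scaled coefficients):

```python
import math
import numpy as np

def hermite_newton(xl, xr, fl, fr):
    """Two-point Hermite interpolation. fl and fr are [f, f', ..., f^(m)]
    at xl and xr. Returns (nodes, coefs) of the Newton form of the unique
    degree 2m+1 interpolant."""
    m = len(fl) - 1
    z = np.array([xl] * (m + 1) + [xr] * (m + 1), dtype=float)
    n = len(z)
    dd = np.zeros((n, n))
    dd[:, 0] = [fl[0]] * (m + 1) + [fr[0]] * (m + 1)
    for j in range(1, n):
        for i in range(n - j):
            if z[i + j] == z[i]:
                # repeated node: divided difference equals f^(j)(z_i)/j!
                f = fl if z[i] == xl else fr
                dd[i, j] = f[j] / math.factorial(j)
            else:
                dd[i, j] = (dd[i + 1, j - 1] - dd[i, j - 1]) / (z[i + j] - z[i])
    return z, dd[0, :]

def newton_eval(z, coefs, x):
    """Horner-style evaluation of the Newton form."""
    val = coefs[-1]
    for k in range(len(coefs) - 2, -1, -1):
        val = val * (x - z[k]) + coefs[k]
    return val

# m = 1 data for f(x) = x^3 at x = 0 and x = 1; the degree-3 interpolant
# reproduces x^3 exactly.
z, coefs = hermite_newton(0.0, 1.0, [0.0, 0.0], [1.0, 3.0])
```

In two dimensions this routine is applied first in $y$ (to the function and each $x$-derivative) and then in $x$, exactly as described in the text.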
For example, we may interpolate in the $y$ direction, for the function and all the $x$ derivatives, at grid points $(x_i,y_j)$ and $(x_i,y_{j+1})$ to obtain one interpolant centered at $(x_i,y_{j+1/2})$ and from $(x_{i+1},y_j)$ and $(x_{i+1},y_{j+1})$ to obtain another interpolant centered at $(x_{i+1},y_{j+1/2})$. These two polynomials of degree $m$ in $x$ and degree $2m+1$ in $y$ are then interpolated in the $x$ direction using one-dimensional interpolation. The final result is a polynomial of the form \eqref{2Dinterpolant}. \subsubsection{Evolution} Similar to the one-dimensional case, we treat the coefficients in the Hermite expansions as functions of time. That is, we expand \begin{equation} v_{i+1/2,j+1/2}(x,y,t) = \sum \limits_{l_x = 0}^{2m+1} \sum \limits_{l_y = 0}^{2m+1} d_{l_x,l_y}(t) \left(\frac{x - x_{i+\frac{1}{2}}}{h_x }\right)^{l_x} \left(\frac{y - y_{j+\frac{1}{2}}}{h_y }\right)^{l_y}. \label{PDE_Hermite_expansion2D} \end{equation} For our PDE $\varphi_t = -H(\varphi_x,\varphi_y)$ we substitute \eqref{PDE_Hermite_expansion2D}: \begin{align} \frac{\partial v_{i+\frac{1}{2},j+\frac{1}{2}}(x,y,t) }{\partial t } & = \sum \limits_{l_x = 0}^{2m+1} \sum \limits_{l_y = 0}^{2m+1} d_{l_x,l_y}^{'}(t) \left(\frac{x - x_{i+\frac{1}{2}}}{h_x }\right)^{l_x} \left(\frac{y - y_{j+\frac{1}{2}}}{h_y }\right)^{l_y} \nonumber\\& = -H(\varphi_x,\varphi_y). \label{PDE_Hermite_expansion_substitution2D} \end{align} We differentiate \eqref{PDE_Hermite_expansion_substitution2D} $k$ times in the $x$-coordinate and $l$ times in the $y$-coordinate and evaluate at $(x,y) = (x_{i+\frac{1}{2}},y_{j+\frac{1}{2}})$ to obtain \begin{equation} \frac{k!}{h_x^{k}}\frac{l!}{h_y^{l}} d^{'}_{k,l} (t) = -\left. \frac{\partial^k}{\partial x^k} \frac{\partial^l}{\partial y^l} H(v_{x},v_{y}) \right |_{(x,y) = \left(x_{i + \frac{1}{2}},y_{j+\frac{1}{2}}\right)}. \end{equation} Similar to the one-dimensional case, the differentiation of a non-linear $H$ can spawn new terms by the product rule.
We avoid this by approximating the Hamiltonian by a Taylor polynomial of degree $2m+1$ in each coordinate \begin{equation} H(v_x,v_y) \approx \sum \limits_{l_x = 0}^{2m+1} \sum \limits_{l_y = 0}^{2m+1} b_{l_x,l_y}(t) \left(\frac{x - x_{i+\frac{1}{2}}}{h_x }\right)^{l_x} \left(\frac{y - y_{j+\frac{1}{2}}}{h_y }\right)^{l_y}. \label{Taylor_expansion2D} \end{equation} From \eqref{PDE_Hermite_expansion_substitution2D} and \eqref{Taylor_expansion2D} we obtain the local system of ODEs \begin{equation} d^{'}_{l_x,l_y}(t) = b_{l_x,l_y}(t), \quad l_x = 0, \dots , 2m+1, \ l_y = 0, \dots, 2m+1. \label{ODEs2D} \end{equation} We can evolve the solution of this system using, e.g., a Runge-Kutta method. \subsubsection{Boundary Conditions for the Second Half-step} To complete a full time step we repeat this procedure, starting with the initial data obtained from evolving \eqref{ODEs2D} a half-step on the dual grid. At the interior nodes $(x_i,y_j), \ i = 1,\dots, n_x-1, \ j = 1, \dots, n_y - 1$ the procedure is the same as above, but at the boundary nodes we must fill in ghost polynomials. In our case we use the periodic boundary conditions to fill in the ghost polynomials. \section{Adopting the Persson-Peraire, Klockner-Warburton-Hesthaven Sensor to Hermite Methods} \subsection{Estimating Smoothness} The smoothness detector used to modify our Hermite method is an adaptation of the sensor in \cite{persson2006sub}. The sensor uses orthogonal polynomials, $\lbrace \phi_n \rbrace_{n=0}^{N_p-1}$, on each cell to estimate the smoothness of the solution, where $N_p$ is the order of approximation. The method developed here also utilizes the improvements to the Persson-Peraire sensor proposed by Klockner, Warburton and Hesthaven in \cite{klockner2011viscous}.
On each element $D_k$, the Persson-Peraire sensor computes a smoothness indicator \begin{equation} S_k = \frac{(q_N,\phi_{N_p-1})^2_{L^2_{(D_k)}}}{||q_N||^2_{L^2_{(D_k)}}}, \label{smoothness_sensor} \end{equation} where $N$ is the degree of the interpolating polynomials. \deaa{While this sensor is easy to use and implement we have found that the improved approach in \cite{klockner2011viscous} is more robust. We now explain how we use the approach of \cite{klockner2011viscous} in the context of Hermite methods.} The smoothness estimator we use relies on a projected version of the solution. Precisely, in one dimension we project onto the orthogonal basis spanned by Legendre polynomials in a cell and in two dimensions we project onto a tensor product basis of one-dimensional Legendre polynomials. We take $N = 2m+1$ and \Allen{$N_p = 2m+3$}. For example, in one dimension, let $q_N = \sum \limits_{n = 0}^{N_p-1} \hat{q}_n \phi_n$ be the projection; then its modes (coefficients) decay according to \begin{equation} |\hat{q}_n| \sim cn^{-s}. \label{modal_decay} \end{equation} Taking the logarithm of \eqref{modal_decay} we have \begin{equation} \log |\hat{q}_n| \approx \log (c) -s \log (n), \label{log_modal_decay} \end{equation} and we may find $c$ and $s$ by minimizing the least squares functional \begin{equation} \sum \limits_{n = 1}^{N_p-1} | \log |\hat{q}_n| - (\log(c) - s \log(n))|^2. \end{equation} Note that the sum begins at $n=1$ \Allen{(since $\log(0) = -\infty$ we exclude $n = 0$ from the minimization)}, thus the constant coefficient data is not taken into account when estimating smoothness. \Allen{The removal of the constant-mode information from the estimation process can cause problems. Consider a constant function perturbed by white noise. Since the constant-mode information is removed the smoothness detector only sees the white noise, which could lead to an erroneous smoothness estimate.
A fix to this problem, a technique called \textit{baseline modal decay}, was introduced in \cite{klockner2011viscous}.} Heuristically, the idea is to re-add the sense of scale by distributing the energy according to a perfect modal \Allen{decay} \begin{equation} |\hat{b}_n| \sim C\frac{1}{n^N}, \end{equation} where $N$ is the polynomial degree of the method and the normalizing factor $C$ \Allen{is chosen to enforce} \[ \sum \limits_{n = 1}^{N_p - 1} |\hat{b}_n|^2 = 1. \] We input the coefficients \begin{equation} |\tilde{q}_n|^2 := |\hat{q}_n|^2 + ||q_N||^2_{L^2_{D_k}} |\hat{b}_n|^2 \quad \text{for} \ n \in \lbrace 1, \dots , N_p - 1 \rbrace, \end{equation} into skyline pessimization (described below), instead of the raw coefficients $|\hat{q}_n|$. There are certain situations where the estimator can be fooled. For example, when interpolating the function $x\Theta(x)$, where $\Theta(x)$ is the Heaviside function, \cite{klockner2011viscous} shows that there is an odd-even effect for which odd modes greater than three are numerically zero. A correction of this problem was given by introducing a technique named \textit{skyline pessimization}. The idea is as follows: if a mode $n$ has a small coefficient $|\hat{q}_n|$ such that there exists another coefficient with $n' > n$ and $|\hat{q}_{n'}| \gg |\hat{q}_n|$, then the coefficient $|\hat{q}_n|$ is most likely spurious and should not be taken into account when estimating $s$. Therefore, the idea is to generate a new set of modal coefficients by \begin{equation} \bar{q}_n := \max \limits_{i \in \lbrace \min (n,N_p-2), \dots, N_p -1 \rbrace} |\hat{q}_i | \quad \text{for} \ n \in \lbrace{1,2,\dots,N_p - 1 \rbrace}. \label{skyline} \end{equation} This forces each modal coefficient to be raised up to the largest higher-numbered modal coefficient, eliminating non-monotone decay.
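Putting the pieces together, the exponent $s$ is obtained from a least-squares fit of $\log|\hat{q}_n|$ against $\log n$ after skyline pessimization. A minimal sketch of the estimator (our own helper name, and omitting the baseline-decay correction for brevity):

```python
import numpy as np

def estimate_smoothness(qhat):
    """qhat: modal coefficients [q_0, ..., q_{Np-1}] of the projected solution.
    Returns the decay exponent s from |q_n| ~ c n^{-s}, fitted over n >= 1
    after skyline pessimization."""
    Np = len(qhat)
    a = np.abs(np.asarray(qhat, dtype=float))
    # skyline pessimization: raise each |q_n| to the largest coefficient
    # with index >= min(n, Np-2), eliminating non-monotone decay
    bar = np.array([a[min(n, Np - 2):].max() for n in range(Np)])
    n = np.arange(1, Np)
    # least-squares fit of log|q_n| = log(c) - s log(n); n = 0 is excluded
    slope, _ = np.polyfit(np.log(n), np.log(bar[1:]), 1)
    return -slope

# coefficients decaying like n^{-3} yield an estimate close to s = 3
s = estimate_smoothness([1.0] + [float(n) ** -3 for n in range(1, 8)])
```

Note that the skyline rule deliberately overwrites the last mode by its neighbor's magnitude, so even for a perfect $n^{-3}$ decay the fitted exponent is slightly below 3; this pessimism is by design.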
\subsection{Computing the Viscosity from the Smoothness Sensor} \subsubsection{Implementation in One-Dimension} At each time step, $t_k$, we approximate the smoothness of the function inside each cell. In order to determine the modal coefficients, $\lbrace \hat{q}_n \rbrace$, we take $2m+1$ Legendre-Gauss-Lobatto nodes $z_i$ (see Figure \ref{SensorFig}) and map the nodes in $[-1,0]$ to $[x_k,x_{k + 1/2}]$ and the nodes in $[0,1]$ to $[x_{k + 1/2},x_{k + 1}]$, then we evaluate the Hermite interpolants $p_k(x)$ in $[x_k,x_{k + 1/2}]$ and $p_{k+1}(x)$ in $[x_{k + 1/2},x_{k + 1}]$ on the nodes to obtain the function values required for projection. Once the projection is completed, we compute $\lbrace \hat{q}_n \rbrace$ and approximate $s$ using \eqref{log_modal_decay}. \Allen{Note that the representation of the solution on each cell is a polynomial. This means that if we compute the modal coefficients using one cell, then the sensor will determine the solution is smooth (since it is sensing the smoothness of a polynomial). By taking the left and right halves of each cell we are able to estimate the smoothness of the actual solution.} \begin{figure}[hbt] \begin{center} \includegraphics[width=0.45\textwidth]{LGL1D.eps} \includegraphics[width=0.45\textwidth]{LGL2D.eps} \caption{Left: LGL nodes in one dimension; Right: LGL nodes in two dimensions. \label{SensorFig}} \end{center} \end{figure} If the solution is no longer $C^1$, then we introduce the numerical viscosity $\nu_0 (1 - \kappa_w(s))$, where $s$ is the smoothness estimate and the ramp $\kappa_w(s)$ is adapted from \cite{persson2006sub}: \begin{equation} \kappa_w(s) = \begin{cases} 0 \quad & s < s_0 - w,\\ \frac{1}{2} \left(1 + \sin \frac{\pi (s - s_0)}{2 w}\right) \quad &s_0 - w \leq s \leq s_0 + w,\\ 1 \quad &s_0 + w < s, \end{cases} \end{equation} where we choose $s_0 = 2$ and $w = 1$.
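The smooth activation can be sketched as follows; this is one consistent reading of the formulas above, taking $\kappa_w$ as a unit ramp with $s_0 = 2$, width $w = 1$, and the scaling by $\nu_0$ applied separately:

```python
import math

def kappa_w(s, s0=2.0, w=1.0):
    """Unit ramp: 0 for rough solutions (small s), 1 for smooth ones (large s),
    with a sine transition of half-width w around s0."""
    if s < s0 - w:
        return 0.0
    if s > s0 + w:
        return 1.0
    return 0.5 * (1.0 + math.sin(math.pi * (s - s0) / (2.0 * w)))

def viscosity(s, nu0, s0=2.0, w=1.0):
    # maximum viscosity nu0 for small s; no viscosity once s >= s0 + w
    return nu0 * (1.0 - kappa_w(s, s0, w))
```

The sine transition avoids an on/off switch in the viscosity, which would otherwise introduce its own nonsmoothness in time as the estimate $s$ fluctuates near the threshold.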
These choices activate the viscosity as soon as the solution fails to be $C^1$ ($s = 3$) and give maximum viscosity, $\nu_0$, when the solution is $C^0$. We set the maximum viscosity, $\nu_0$, to be \begin{equation} \nu_0 = \lambda\frac{h}{N}, \label{nu1D} \end{equation} where $\lambda$ is the maximum local characteristic velocity. We estimate $\lambda$ by taking the derivative of the Hamiltonian with respect to $\varphi_x$ at each cell center and taking the maximum of the absolute value: \begin{equation} \lambda = \max \limits_{\Allen{i \in \lbrace 0, \dots, n_x - 1 \rbrace}} |H_{v_x}(v_x(x_{i+\frac{1}{2}}))|. \label{lambda1D} \end{equation} \Allen{Here $\lambda$ is estimated using every cell center.} Note that this value is directly accessible, as $v_x$ is part of the degrees of freedom. Before modifying the PDE we average the viscosity in the spatial domain by setting \[\bar{\kappa}_{i} = \frac{1}{4}(\kappa_{i-1} + 2\kappa_i + \kappa_{i+1}).\] We introduce the numerical viscosity at each timestep when evolving our PDE after interpolation. That is, after the interpolation step we modify the PDE by adding $\bar{\kappa}_i v_{xx}$, making the equation \begin{equation} v_t + H(v_x) = \bar{\kappa}_i v_{xx}, \end{equation} on the $i$th cell. \subsubsection{Implementation in Two-Dimensions} As our orthogonal basis we choose the tensor-product of Legendre polynomials. To check the smoothness of a cell we use the nearest interpolants on the other grid, located at the four corners of the cell. That is, for the smoothness information of the cell centered at $(x_{k+1/2},y_{l+1/2})$ we use the Hermite interpolants at $(x_k,y_l)$, $(x_{k+1},y_{l})$, $(x_{k},y_{l+1})$, and $(x_{k+1},y_{l+1})$. As in one dimension, we partition the cell into four regions and evaluate the function on each region using the interpolant that corresponds to it. For example, for the lower left region we use the information from the Hermite interpolant centered at $(x_k,y_l)$.
We analyze the decay of the coefficients by grouping them by total degree, i.e., the degree of the Legendre polynomial in $x$ plus the degree of the Legendre polynomial in $y$. The tensor-product Legendre polynomial is of the form \begin{equation} p = \sum \limits_{k = 0}^{2m+1} \sum \limits_{l = 0}^{2m+1} C_{k,l} \phi_k \phi_l, \end{equation} where $\phi_k$ and $\phi_l$ are one-dimensional Legendre polynomials. There are several ways to order the polynomials in each degree; however, some orderings will fool the sensor. We take the maximum coefficient in absolute value for each total degree and use that as our input to the sensor for that degree. That is, we take $c_i = \max|C_{k,l}|$, where $k+l = i$. Once we obtain this ordering we apply \textit{baseline modal decay} and \textit{skyline pessimization} in the same way as in the one-dimensional case. We set the maximum viscosity, $\nu_0$, to be \begin{equation} \nu_0 = \lambda\frac{h}{N}, \label{nu2D} \end{equation} where $N$ is the degree of the Hermite interpolating polynomial in one coordinate direction and $h = \max\lbrace h_x,h_y\rbrace$. For estimating $\lambda$ we adapt the Lax-Friedrichs flux given in \cite{crandall1980monotone,osher1991high} by taking the partial derivatives of the Hamiltonian with respect to $\varphi_x$ and $\varphi_y$, evaluating them at the cell centers, and taking the maximum of the absolute value: \begin{equation} \lambda = \max \limits_{i,j}\lbrace |H_{v_x}(v_x(x_{i+\frac{1}{2}},y_{j+\frac{1}{2}}),v_y(x_{i+\frac{1}{2}},y_{j+\frac{1}{2}}))|,|H_{v_y}(v_x(x_{i+\frac{1}{2}},y_{j+\frac{1}{2}}),v_y(x_{i+\frac{1}{2}},y_{j+\frac{1}{2}}))| \rbrace. \label{lambda2D} \end{equation} Before modifying the PDE we average the viscosity in the spatial domain by setting \begin{align*}\bar{\kappa}_{k,l} & = \frac{1}{16}(\kappa_{k-1,l-1}+\kappa_{k+1,l-1}+\kappa_{k-1,l+1}+\kappa_{k+1,l+1}+\\ &2(\kappa_{k,l-1}+\kappa_{k-1,l}+\kappa_{k+1,l}+\kappa_{k,l+1})+4\kappa_{k,l}).
\end{align*} We introduce the numerical viscosity at each timestep when evolving our PDE after interpolation. That is, after the interpolation step we modify the PDE, making the equation \begin{equation} v_t + H(v_x,v_y) = \bar{\kappa}_{k,l}(v_{xx} + v_{yy}). \end{equation} In Figure \ref{fig:smoothess_test} we test the smoothness sensor on two discontinuous functions, a radially symmetric step function and an oblique step function. In both cases the sensor correctly determines the level of smoothness of the underlying function. \begin{figure}[hbt] \begin{center} \includegraphics[width=0.24\textwidth]{RadialStepTopView.eps} \includegraphics[width=0.24\textwidth]{RadialStepSideView.eps} \includegraphics[width=0.24\textwidth]{StepTopView.eps} \includegraphics[width=0.24\textwidth]{StepSideView.eps} \caption{Top: different views of the smoothness sensor when applied to a radial step function $f(x) = 1$ if $r^2 \leq 1$ and zero otherwise. In the smooth region the smoothness is estimated to be 5 and near the discontinuity it is estimated to be approximately 1/2. Bottom: different views of the smoothness sensor for a step function with $f(x) = 1$ if $x+y\leq 1$ and zero otherwise. The results are similar to the radial case. \label{fig:smoothess_test}} \end{center} \end{figure} \section{Numerical Examples} For each of the following examples we expect the convergence rate to be $2m+1$ when the solution is smooth, where $m$ is the number of derivatives used in the interpolation process. When the solution fails to remain smooth we do not expect to see the optimal convergence rates; instead we seek sharp resolution of kinks. For each example we give convergence rates at a time when the solution is still smooth. For the timestepping we use the classical Runge-Kutta method of order 4 (RK4). To this end we choose the timestep small enough so that the temporal error is dominated by the spatial error.
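The classical RK4 step used for advancing the local coefficient systems can be sketched generically as follows (the right-hand side \texttt{rhs} stands for the assembly of the Hamiltonian's Taylor coefficients described earlier; the name is ours):

```python
import math
import numpy as np

def rk4_step(rhs, d, t, dt):
    """One classical fourth-order Runge-Kutta step for d'(t) = rhs(d, t)."""
    k1 = rhs(d, t)
    k2 = rhs(d + 0.5 * dt * k1, t + 0.5 * dt)
    k3 = rhs(d + 0.5 * dt * k2, t + 0.5 * dt)
    k4 = rhs(d + dt * k3, t + dt)
    return d + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# sanity check on d' = -d: one step of size 0.1 matches exp(-0.1) to O(dt^5)
d = rk4_step(lambda d, t: -d, np.array([1.0]), 0.0, 0.1)
```

Since the spatial accuracy is $2m+1$, the temporal step is chosen so the $O(\Delta t^4)$ error of RK4 stays below the spatial error, as stated above.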
\FinalEdits{The derivation of analytical solutions for examples where convergence rates are estimated can be found in the appendix.} \subsection{Examples in One Dimension} \subsubsection{Example 1} In this example we solve the one-dimensional Burgers' equation \begin{equation} \varphi_t +\frac{1}{2} \Allen{(\varphi_x)^2}=0, \nonumber \end{equation} with initial condition \mbox{$\varphi(x,0)=\sin(x)$} and with periodic boundary conditions \mbox{$\varphi(0,t)=\varphi(2\pi,t)$}. The solution is smooth until time $T = 1.0$, at which time a shock forms in $\varphi_x$. Our grid is determined by $x_l = 0$, $x_r = 2\pi$ and the number of grid points, $n_x$. For this example we start with $n_x = 20$ and refine the grid by a factor of two until $n_x = 160$ in order to demonstrate convergence to the viscosity solution. Before the solution develops a kink we demonstrate that our method achieves $2m+1$ order accuracy at time $T=0.5$, as evidenced by the errors measured in the $L_1$, $L_2$ and $L_{\infty}$ norms along with the estimated rates of convergence reported in Table \ref{tab:Example1smooth}. We also demonstrate that we converge to the viscosity solution at time $T = 1.5$ in Figure \ref{fig:Burgers1Dplot}, along with the errors reported in Table \ref{tab:Example1Notsmooth}. \FinalEdits{Note that the kink formed closely resembles an absolute value function. The degree $2m+1$ Hermite interpolant of the absolute value function, $|x|$, can be explicitly written down. It is \begin{equation*} p(x) = \sum \limits_{k = 0}^{m} \binom{2k}{k} \frac{(-1)^{k+1}(x^2-1)^k}{2^{2k}(2k-1)}, \end{equation*} and this in turn corresponds to the first terms of the generalized binomial expansion \[ (1+t)^{\frac{1}{2}} = \sum \limits_{k = 0}^{\infty} \binom{1/2}{k}t^k, \] which is a convergent approximation of the absolute value function when replacing $t = x^2 - 1$ and when $|x|<1$.
At $x=0$ and for a fixed $m$ this approximation is positive; after scaling to a cell of width $h$ this leaves an $\mathcal{O}(h)$ error in the single cell containing the kink. This is the source of the $\mathcal{O}(h)$ convergence in the max norm, $\mathcal{O}(h^{3/2})$ in the 2 norm and $\mathcal{O}(h^{2})$ in the 1 norm.} \begin{table}[ht] \caption{Errors in Example 1 in the $L_1$, $L_2$ and $L_{\infty}$ norms at time $T = 0.5$ are displayed along with estimated rates of convergence. Note that these errors are measured before the kink has formed.} \begin{center} \begin{tabular}{c c c c c c c} \hline n & $L_1$ error & Conv. Rate & $L_2$ error & Conv. Rate & $L_{\infty}$ error & Conv. Rate \\ \hline & & & $m = 2$ & & &\\ \hline 20 & 2.76e-05 & - & 2.70e-05 & - & 5.47e-05 & - \\ 40 & 9.45e-07 & 4.87 & 9.53e-07 & 4.83 & 2.03e-06 & 4.75 \\ 80 & 3.05e-08 & 4.95 & 3.10e-08 & 4.94 & 6.59e-08 & 4.95 \\ 160 & 9.42e-10 & 5.02 & 9.56e-10 & 5.02 & 2.01e-09 & 5.03 \\ \hline & & & $m = 3$ & & &\\ \hline 20 & 3.56e-07 & - & 4.55e-07 & - & 1.26e-06 & - \\ 40 & 2.61e-09 & 7.09 & 3.22e-09 & 7.14 & 8.49e-09 & 7.21 \\ 80 & 2.03e-11 & 7.01 & 2.50e-11 & 7.01 & 6.23e-11 & 7.09 \\ 160 & 1.82e-13 & 6.80 & 1.85e-13 & 7.08 & 4.56e-13 & 7.10 \\ \hline \end{tabular} \end{center} \label{tab:Example1smooth} \end{table} \begin{table}[ht] \caption{Errors in Example 1 in the $L_1$, $L_2$ and $L_{\infty}$ norms at time $T = 1.5$ are displayed along with estimated rates of convergence. Note that these errors occur after a kink has formed.} \begin{center} \begin{tabular}{c c c c c c c} \hline n & $L_1$ error & Conv. Rate & $L_2$ error & Conv. Rate & $L_{\infty}$ error & Conv.
Rate \\ \hline & & & $m = 2$ & & &\\ \hline 20 & 1.16e-02 & - & 1.71e-02 & - & 4.00e-02 & - \\ 40 & 2.73e-03 & 2.09 & 5.86e-03 & 1.54 & 1.97e-02 & 1.02 \\ 80 & 6.81e-04 & 2.00 & 2.08e-03 & 1.50 & 9.85e-03 & 1.00 \\ 160 & 1.70e-04 & 2.00 & 7.28e-04 & 1.51 & 4.87e-03 & 1.01 \\ \hline & & & $m = 3$ & & &\\ \hline 20 & 6.87e-03 & - & 1.46e-02 & - & 3.67e-02 & - \\ 40 & 1.66e-03 & 2.05 & 4.94e-03 & 1.57 & 1.75e-02 & 1.07 \\ 80 & 4.12e-04 & 2.01 & 1.75e-03 & 1.50 & 8.75e-03 & 1.00 \\ 160 & 1.02e-04 & 2.01 & 6.18e-04 & 1.50 & 4.38e-03 & 1.00 \\ \hline \end{tabular} \end{center} \label{tab:Example1Notsmooth} \end{table} \begin{figure}[htb] \begin{center} \includegraphics[width=0.6\textwidth]{Example_01_1D.eps} \caption{Example 1 at time $t = 1.5$, well after the kink has developed. This computation was done with $m = 3$ and $N = 80$ cells. The solid line is the numerical solution and the dots are the exact solution.\label{fig:Burgers1Dplot}} \end{center} \end{figure} The convergence rates displayed in the tables show us that we are converging to the viscosity solution as the grid is being refined. We observe in Figure \ref{fig:Burgers1Dplot} that the method is able to capture the kink formed at $\frac{\pi}{2}$. In Figure \ref{fig:spaceTimeBurgers} we see that as we refine the grid the error is localized where the kink is formed. \begin{figure}[htb] \begin{center} \includegraphics[width=0.48\textwidth]{Burgers50spacetime.eps} \includegraphics[width=0.48\textwidth]{Burgers100spacetime.eps} \caption{Here we display the evolution of the errors under grid refinement. On the left we display the evolution of errors with 50 cells and on the right we display the evolution of errors with 100 cells. \label{fig:spaceTimeBurgers}} \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=0.6\textwidth]{Example_02_1D.eps} \caption{Example 2 at time $t = 1.0$ with $m = 3$ and $N = 80$ cells. The solid line is the numerical solution and the dots are the exact solution.
\label{fig:Eikonal1Dplot}} \end{center} \end{figure} \subsubsection{Example 2} In this example we solve the one-dimensional Eikonal equation \begin{align*} \varphi_t + |\varphi_x|=0, \end{align*} with initial condition \mbox{$\varphi(x,0)=\sin(x)$} and with periodic boundary conditions. The viscosity solution to this equation has a shock forming in $\varphi_x$ at $x = \pi/2$ and a rarefaction wave at $x = 3\pi/2$. The solution is nonsmooth for all $T >0$ so we do not expect order $2m+1$ convergence. To analyze the convergence we use the same grids as in Example 1. We report the $L_1,L_2$ and $L_{\infty}$ errors and their estimated rates of convergence in Table \ref{tab:Eikonal}. In Figure \ref{fig:Eikonal1Dplot} we plot the numerical solution to demonstrate convergence to the viscosity solution. \begin{table}[ht] \caption{Errors in Example 2 in the $L_1$, $L_2$ and $L_{\infty}$ norms at time $T = 1.0$ are displayed along with estimated rates of convergence.} \begin{center} \begin{tabular}{c c c c c c c} \hline n & $L_1$ error & Conv. Rate & $L_2$ error & Conv. Rate & $L_{\infty}$ error & Conv. Rate \\ \hline & & & $m = 2$ & & &\\ \hline 20 & 4.84e-01 & - & 2.20e-01 & - & 1.94e-01 & - \\ 40 & 2.35e-01 & 1.04 & 1.13e-01 & 0.96 & 1.09e-01 & 0.83 \\ 80 & 1.14e-01 & 1.04 & 5.77e-02 & 0.97 & 5.79e-02 & 0.91 \\ 160 & 5.60e-02 & 1.03 & 2.93e-02 & 0.98 & 2.98e-02 & 0.96 \\ \hline & & & $m = 3$ & & &\\ \hline 20 & 3.40e-01 & - & 1.58e-01 & - & 1.39e-01 & - \\ 40 & 1.65e-01 & 1.04 & 8.11e-02 & 0.97 & 7.61e-02 & 0.87 \\ 80 & 8.05e-02 & 1.04 & 4.13e-02 & 0.97 & 3.96e-02 & 0.94 \\ 160 & 3.97e-02 & 1.02 & 2.10e-02 & 0.98 & 2.03e-02 & 0.97 \\ \hline \end{tabular} \end{center} \label{tab:Eikonal} \end{table} \begin{figure}[ht] \begin{center} \includegraphics[width=0.48\textwidth]{Eikonal50spacetime.eps} \includegraphics[width=0.48\textwidth]{Eikonal100spacetime.eps} \caption{Here we display the evolution of the errors with a refinement. 
On the left we display the evolution of errors with 50 cells and on the right we display the evolution of errors with 100 cells. \label{fig:spaceTimeEikonal}} \end{center} \end{figure} The convergence rates displayed in the tables show us that we are converging to the viscosity solution as the grid is being refined. We observe in Figure \ref{fig:Eikonal1Dplot} that the method is able to capture the kink formed at $\frac{\pi}{2}$ and the rarefaction wave at $\frac{3\pi}{2}$. In Figure \ref{fig:spaceTimeEikonal} we see that as we refine the grid the error is localized where the kink is formed. We briefly note that the kink in the Hamiltonian, $H$, causes the piecewise interpolant to lose smoothness between cells where the sign of $\varphi_x$ changes. We plan to investigate whether this can be rectified by using a flux-conservative formulation of the method. \subsubsection{Example 3} \begin{table}[ht] \caption{Errors in Example 3 in the $L_1$, $L_2$ and $L_{\infty}$ norms at time $T = 0.5/{ \Allen{\pi}}^2$ are displayed along with estimated rates of convergence.} \begin{center} \begin{tabular}{c c c c c c c } \hline & & & $m = 2$ & & &\\ \hline n & $L_1$ error & Conv. Rate & $L_2$ error & Conv. Rate & $L_{\infty}$ error & Conv.
Rate \\ \hline 20 & 1.72e-05 & - & 3.49e-05 & - & 1.59e-04 & - \\ 40 & 4.67e-07 & 5.20 & 1.05e-06 & 5.06 & 6.47e-06 & 4.62 \\ 80 & 1.48e-08 & 4.98 & 2.63e-08 & 5.32 & 1.68e-07 & 5.27 \\ 160 & 6.56e-10 & 4.50 & 8.62e-10 & 4.93 & 3.79e-09 & 5.47 \\ \hline & & & $m = 3$ & & &\\ \hline 20 & 4.96e-06 & - & 1.18e-05 & - & 5.77e-05 & - \\ 40 & 4.00e-08 & 6.95 & 1.31e-07 & 6.50 & 8.79e-07 & 6.04 \\ 80 & 2.23e-10 & 7.48 & 7.28e-10 & 7.49 & 4.75e-09 & 7.53 \\ 160 & 1.94e-12 & 6.84 & 4.30e-12 & 7.40 & 3.17e-11 & 7.23 \\ \hline \end{tabular} \end{center} \label{tab:nonConvexSmooth} \end{table} In this example we solve a one-dimensional equation with a nonconvex Hamiltonian \begin{align*} \varphi_t - \cos(\varphi_x + 1) = 0, \end{align*} with initial condition $\varphi(x,0)=-\cos(\pi x)$ and periodic boundary conditions $\varphi(-1,t)=\varphi(1,t)$. This example shows that our scheme has high-order accuracy even when the Hamiltonian is not convex. Our grid is determined by $x_l = -1$, $x_r = 1$ and the number of grid points, $n_x$. For this example we start with $n_x = 20$ and refine the grid until $n_x = 160$ in order to demonstrate convergence to the viscosity solution. Before the solution develops a kink we demonstrate that our method achieves order $2m+1$ accuracy at time $T=\frac{0.5}{\pi^2}$ by reporting the $L_1$, $L_2$ and $L_{\infty}$ errors along with the estimated rates of convergence in Table \ref{tab:nonConvexSmooth}. We also demonstrate that we converge to the viscosity solution at time $T = \frac{1.5}{\pi^2}$ in Figure \ref{fig:NonConvex1Dplot}. The convergence rates displayed in the table show that we are converging to the viscosity solution as the grid is refined. We observe in Figure \ref{fig:NonConvex1Dplot} that the method is able to capture the kinks formed in this example. \begin{figure}[ht] \begin{center} \includegraphics[width=0.6\textwidth]{Example_03_1D.eps} \caption{Example 3 at time $t = 1.5/\pi^2$ with $m = 3$ using $N = 80$ cells.
The solid line is the numerical solution and the dots are the exact solution. \label{fig:NonConvex1Dplot}} \end{center} \end{figure} \Allen{ \subsubsection{Example 4} In this example we solve a one-dimensional Riemann problem with a nonconvex Hamiltonian \begin{align*} \varphi_t + \frac{1}{4}(\varphi_x^2 - 1)(\varphi_x^2 - 4) = 0, \end{align*} with initial data $\varphi(x,0) = -2|x|$. In this example two shocks propagate to the left and right, connected by a rarefaction wave. Our grid is determined by $x_l = -1$, $x_r = 1$ and the number of grid points, $n_x$. For this example we first start with an odd number of grid points $n_x = 21$ and refine by a factor of two until $n_x = 321$ in order to demonstrate convergence to the viscosity solution. We also refine the grid for an even number of grid points, starting with $n_x = 20$ and using $n_x = 320$ on the finest grid. The difference between the two is that with an even number of grid points the discontinuity in the initial data at $x = 0$ falls on a grid point, $x_{n/2}$. When the discontinuity in the initial data is located on a grid point, $x_*$, we define the degrees of freedom at $x_*$ in two ways, depending on whether the data is being used to interpolate to the left or to the right of $x_*$. That is, when interpolating to the dual node located to the left of $x_*$ we compute the degrees of freedom at $x_*$ using the limit of the initial data from the left, $\lim \limits_{x \to x_{*}^{-}} \varphi$, and when interpolating to the dual node located to the right of $x_*$ we compute the degrees of freedom at $x_*$ using the limit of the initial data from the right, $\lim \limits_{x \to x_{*}^{+}} \varphi$. The exact solution is used as the boundary condition. We report the $L_1$, $L_2$ and $L_{\infty}$ errors at time $T = 1.0$ along with estimated rates of convergence in Table \ref{tab:Riemann1D}. We observe first-order convergence for both the even and odd refinements.
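The estimated convergence rates reported in the tables follow from successive error ratios on grids refined by a factor of two, $\mathrm{rate} = \log_2(e_{\mathrm{coarse}}/e_{\mathrm{fine}})$. A minimal sketch of this computation (the error values below are copied from the $m = 2$, odd-refinement rows of Table \ref{tab:Riemann1D}):

```python
import math

def convergence_rates(errors, refinement=2.0):
    """Estimated orders from errors on successively refined grids:
    rate_k = log(e_{k-1} / e_k) / log(refinement)."""
    return [math.log(errors[k - 1] / errors[k]) / math.log(refinement)
            for k in range(1, len(errors))]

# L_1 errors for m = 2, odd refinement (copied from the table).
l1_errors = [4.03e-02, 1.99e-02, 9.87e-03, 4.76e-03]
print([round(r, 2) for r in convergence_rates(l1_errors)])  # → [1.02, 1.01, 1.05]
```

The same helper reproduces the rates in the other tables when fed the corresponding error columns.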
We note that for this more difficult example the amount of viscosity needed appears to scale inversely with the square of the CFL number. That is, at large CFL numbers we need less viscosity to compute the correct solution. \begin{table}[ht] \caption{\Allen{Errors in Example 4 in the $L_1$, $L_2$ and $L_{\infty}$ norms at time $T = 1.0$ are displayed along with estimated rates of convergence.}} \begin{center} \Allen{ \begin{tabular}{c c c c c c c } \hline n & $L_1$ error & Convergence & $L_2$ error & Convergence & $L_{\infty}$ error & Convergence \\ \hline & & & Odd & & &\\ \hline & & & $m = 2$ & & &\\ \hline 41 & 4.03e-02 & - & 3.83e-02 & - & 4.35e-02 & - \\ 81 & 1.99e-02 & 1.02 & 1.91e-02 & 1.00 & 2.15e-02 & 1.02 \\ 161 & 9.87e-03 & 1.01 & 9.54e-03 & 1.00 & 1.04e-02 & 1.05 \\ 321 & 4.76e-03 & 1.05 & 4.66e-03 & 1.03 & 4.84e-03 & 1.10 \\ \hline & & & $m = 3$ & & &\\ \hline 41 & 4.92e-02 & - & 8.23e-02 & - & 6.44e-01 & - \\ 81 & 1.97e-02 & 1.32 & 1.89e-02 & 2.12 & 2.17e-02 & 4.89 \\ 161 & 9.79e-03 & 1.01 & 9.47e-03 & 1.00 & 1.10e-02 & 0.99 \\ 321 & 4.87e-03 & 1.01 & 4.73e-03 & 1.00 & 5.30e-03 & 1.05 \\ \hline & & & Even & & &\\ \hline & & & $m = 2$ & & &\\ \hline 40 & 3.98e-02 & - & 3.80e-02 & - & 4.32e-02 & - \\ 80 & 1.95e-02 & 1.03 & 1.87e-02 & 1.02 & 2.15e-02 & 1.01 \\ 160 & 9.59e-03 & 1.02 & 9.29e-03 & 1.01 & 1.02e-02 & 1.07 \\ 320 & 4.62e-03 & 1.05 & 4.52e-03 & 1.04 & 4.71e-03 & 1.12 \\ \hline & & & $m = 3$ & & &\\ \hline 40 & 3.95e-02 & - & 3.77e-02 & - & 4.43e-02 & - \\ 80 & 1.93e-02 & 1.04 & 1.86e-02 & 1.02 & 2.21e-02 & 1.01 \\ 160 & 9.50e-03 & 1.02 & 9.21e-03 & 1.01 & 1.07e-02 & 1.04 \\ 320 & 4.73e-03 & 1.01 & 4.59e-03 & 1.01 & 5.77e-03 & 0.89 \\ \end{tabular} } \end{center} \label{tab:Riemann1D} \end{table} \begin{figure}[ht] \begin{center} \includegraphics[width=0.6\textwidth]{Example_04_1D.eps} \caption{\Allen{Example 4 at time $t = 1.0$ with $m = 3$ using $N = 321$ cells. 
The solid line is the numerical solution and the dots are the exact solution.} \label{fig:oneDimRiemannPlot}} \end{center} \end{figure} } \subsection{Examples in Two Dimensions} \subsubsection{Example 5} In this example we solve the two-dimensional Burgers' equation \begin{align*} \varphi_t + \frac{1}{2}(\varphi_x + \varphi_y)^2 = 0, \end{align*} with initial condition \mbox{$\varphi(x,y,0) = -\cos(x+y)$} and periodic boundary conditions on $[0,2\pi]^2$. This equation can be reduced to a one-dimensional equation via the change of variables $z = \frac{x+y}{2}$. By the chain rule, \begin{align*} \varphi_x + \varphi_y &= \frac{\partial \varphi}{\partial z} \frac{\partial z}{\partial x}+ \frac{\partial \varphi}{\partial z} \frac{\partial z}{\partial y}\\ & = \frac{1}{2}\frac{\partial \varphi}{\partial z} + \frac{1}{2}\frac{\partial \varphi}{\partial z}\\ & =\frac{\partial \varphi}{\partial z}. \end{align*} Thus, our equation becomes \begin{align*} \varphi_t + \frac{1}{2}\varphi_z^2 = 0, \end{align*} with initial condition \mbox{$\varphi(z,0) = -\cos(2z)$} and periodic boundary conditions on $[0,2\pi]$. We use the grid with $x_L,y_B = 0$ and $x_R,y_T = 2\pi$ with $n_x,n_y = 10$ cells and refine the grid by a factor of two until we have $n_x,n_y = 80$ cells in order to demonstrate convergence to the viscosity solution. Before the solution develops a kink we demonstrate that our method achieves order $2m+1$ accuracy at time $T=0.1$ by reporting the $L_1$, $L_2$ and $L_{\infty}$ errors along with the estimated rates of convergence in Table \ref{tab:2DBurgersSmooth}. We also demonstrate the development of singular features in Figure \ref{fig:2D_burgers}.
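The change of variables can be sanity-checked with finite differences. A minimal sketch (the sample point is arbitrary; this only verifies that $\varphi_x + \varphi_y = \varphi_z$ for data of the form $\varphi = -\cos(x+y)$ with $z = (x+y)/2$):

```python
import math

def phi(x, y):            # two-dimensional initial data
    return -math.cos(x + y)

def u(z):                 # reduced one-dimensional initial data, -cos(2z)
    return -math.cos(2.0 * z)

def central(f, t, h=1e-5):
    """Second-order central difference approximation of f'(t)."""
    return (f(t + h) - f(t - h)) / (2.0 * h)

x, y = 0.7, 1.3
z = 0.5 * (x + y)
phi_x = central(lambda s: phi(s, y), x)
phi_y = central(lambda s: phi(x, s), y)
u_z = central(u, z)
# phi_x + phi_y should equal u_z, so the two-dimensional Hamiltonian
# (phi_x + phi_y)^2 / 2 reduces to u_z^2 / 2 in the variable z.
assert abs((phi_x + phi_y) - u_z) < 1e-7
```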
\begin{figure}[htb] \begin{center} \includegraphics[width=0.48\textwidth]{example12DT01.eps} \includegraphics[width=0.48\textwidth]{example12DT05.eps} \caption{Example 5: on the left is the numerical solution at time $t = 0.1$ approximated using $m = 3$ derivatives on $N = 40$ cells, and on the right is the numerical solution at time $t = 0.5$ using the same number of derivatives and cells. \label{fig:2D_burgers}} \end{center} \end{figure} We see from the table that we obtain the full order of the method while the solution is smooth. The figure shows how the method is able to capture the singular features of the solution. \begin{table}[ht] \caption{Errors in Example 5 in the $L_1$, $L_2$ and $L_{\infty}$ norms at time $T = 0.1$ are displayed along with estimated rates of convergence.} \begin{center} \begin{tabular}{c c c c c c c } \hline n & $L_1$ error & Conv. Rate & $L_2$ error & Conv. Rate & $L_{\infty}$ error & Conv. Rate \\ \hline & & & $m = 2$ & & &\\ \hline 10 & 4.77e-03 & - & 3.78e-04 & - & 8.76e-04 & - \\ 20 & 1.77e-04 & 4.75 & 1.43e-05 & 4.72 & 3.91e-05 & 4.49 \\ 40 & 5.72e-06 & 4.95 & 5.77e-07 & 4.63 & 1.33e-06 & 4.88 \\ 80 & 1.81e-07 & 4.98 & 2.46e-08 & 4.55 & 4.23e-08 & 4.98 \\ \hline & & & $m = 3$ & & &\\ \hline 10 & 1.28e-04 & - & 1.41e-05 & - & 4.29e-05 & - \\ 20 & 9.70e-07 & 7.05 & 4.90e-08 & 8.17 & 3.62e-07 & 6.89 \\ 40 & 7.45e-09 & 7.02 & 2.67e-10 & 7.52 & 2.50e-09 & 7.18 \\ 80 & 5.66e-11 & 7.04 & 2.15e-12 & 6.96 & 1.88e-11 & 7.06 \\ \hline \end{tabular} \end{center} \label{tab:2DBurgersSmooth} \end{table} \subsubsection{Example 6} In this example we solve a two-dimensional nonlinear equation \begin{align*} \varphi_t + \varphi_x\varphi_y=0, \end{align*} with initial condition \mbox{$\varphi(x,y,0) = \sin(x) + \cos(y)$} and periodic boundary conditions on the domain $[-\pi,\pi]^2$. This is a genuinely nonlinear problem with a nonconvex Hamiltonian. The viscosity solution is smooth at time $T = 0.5$; we demonstrate $2m+1$ convergence at this time.
By $T=1.5$ the viscosity solution develops singular features. We use the grid with $x_L,y_B = -\pi$ and $x_R,y_T = \pi$ with $n_x,n_y = 10$ cells and refine the grid by a factor of two until we have $n_x,n_y = 80$ cells in order to demonstrate convergence to the viscosity solution. Before the solution develops singular features we demonstrate that our method achieves order $2m+1$ accuracy at time $T=0.5$ by reporting the $L_1$, $L_2$ and $L_{\infty}$ errors along with the estimated rates of convergence in Table \ref{tab:Example5Smooth}. We also demonstrate the singular features that the solution develops in Figure \ref{fig:2D_nonconvex}. \begin{table}[ht] \caption{Errors in Example 6 in the $L_1$, $L_2$ and $L_{\infty}$ norms at time $T = 0.5$ are displayed along with estimated rates of convergence.} \begin{center} \begin{tabular}{c c c c c c c } \hline n & $L_1$ error & Conv. Rate & $L_2$ error & Conv. Rate & $L_{\infty}$ error & Conv. Rate \\ \hline & & & $m = 2$ & & &\\ \hline 10 & 3.50e-04 & - & 6.78e-05 & - & 2.18e-05 & - \\ 20 & 1.08e-05 & 5.02 & 2.11e-06 & 5.00 & 6.89e-07 & 4.98 \\ 40 & 3.35e-07 & 5.01 & 6.58e-08 & 5.00 & 2.14e-08 & 5.01 \\ 80 & 1.04e-08 & 5.01 & 2.05e-09 & 5.00 & 6.64e-10 & 5.01 \\ \hline & & & $m = 3$ & & &\\ \hline 10 & 2.62e-07 & - & 5.15e-08 & - & 1.76e-08 & - \\ 20 & 1.95e-09 & 7.07 & 3.86e-10 & 7.06 & 1.33e-10 & 7.05 \\ 40 & 1.48e-11 & 7.04 & 2.96e-12 & 7.03 & 1.00e-12 & 7.05 \\ 80 & 1.17e-13 & 6.98 & 2.52e-14 & 6.87 & 1.51e-14 & 6.05 \\ \hline \end{tabular} \end{center} \label{tab:Example5Smooth} \end{table} This is a genuinely two-dimensional nonlinear problem; we still obtain the full order of the method while the solution is smooth, and our method is able to capture the singular features of the solution.
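For context, a classical first-order monotone baseline for such problems is the Lax-Friedrichs scheme; unlike our Hermite method it is only first-order accurate, but it converges to the viscosity solution. The sketch below (not our method; the grid, dissipation parameter $\alpha$ and CFL number are illustrative choices) evolves the one-dimensional model problem $\varphi_t + \frac{1}{2}\varphi_x^2 = 0$ with $\varphi(x,0) = -\cos(x)$ on $[0,2\pi]$:

```python
import math

def lf_solve(n=200, t_final=1.5, alpha=1.5, cfl=0.4):
    """First-order monotone Lax-Friedrichs scheme for
    phi_t + phi_x^2 / 2 = 0, periodic on [0, 2*pi], phi(x,0) = -cos(x).
    Numerical Hamiltonian: H((p- + p+)/2) - (alpha/2)(p+ - p-),
    with alpha an upper bound on |H'(p)| over the relevant gradients."""
    dx = 2.0 * math.pi / n
    dt = cfl * dx / alpha
    phi = [-math.cos(i * dx) for i in range(n)]
    t = 0.0
    while t < t_final:
        step = min(dt, t_final - t)
        new = []
        for j in range(n):
            pm = (phi[j] - phi[j - 1]) / dx            # backward difference
            pp = (phi[(j + 1) % n] - phi[j]) / dx      # forward difference
            h_hat = 0.5 * (0.5 * (pm + pp)) ** 2 - 0.5 * alpha * (pp - pm)
            new.append(phi[j] - step * h_hat)
        phi = new
        t += step
    return phi

phi_T = lf_solve()
# Since H >= 0, the solution cannot increase, and the monotone scheme
# respects the discrete maximum principle under this CFL restriction.
assert max(phi_T) <= 1.0 + 1e-12
```

The added term $\frac{\alpha}{2}(p^+ - p^-)$ is exactly the kind of dissipation that our sensor-driven artificial viscosity adds, except that Lax-Friedrichs applies it everywhere rather than only in nonsmooth cells.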
\begin{figure}[htb] \begin{center} \includegraphics[width=0.48\textwidth]{example22DT05.eps} \includegraphics[width=0.48\textwidth]{example22DT15.eps} \caption{Example 6: on the left is the numerical solution at time $t = 0.5$ approximated using $m = 3$ derivatives on $N = 40$ cells, and on the right is the numerical solution at time $t = 1.5$ using the same number of derivatives and cells. \label{fig:2D_nonconvex}} \end{center} \end{figure} \Allen{ \subsubsection{Example 7} In this example we solve a two-dimensional Riemann problem with a nonconvex Hamiltonian \begin{align*} \varphi_t + \sin(\varphi_x + \varphi_y) = 0, \end{align*} with initial data $\varphi(x,y,0) = \pi(|y| - |x|)$ on the domain $\Omega = [-1,1] \times [-1,1]$. All of the waves propagate out of the domain and no physical boundary conditions are needed. As a naive implementation of outflow boundary conditions we simply extend the computational domain sufficiently so that the periodic boundary conditions do not affect the solution during the simulation time. The results of a simulation using $m = 2$ are displayed in Figure \ref{fig:2D_Riemann}. \begin{figure}[htb] \begin{center} \includegraphics[width=0.8\textwidth]{Riemann2D2.eps} \caption{\Allen{Example 7. Plotted is the numerical solution at time $t = 1.0$ approximated using $m = 2$ derivatives. The computation uses $N = 20$ cells in the physical domain $\Omega = [-1,1] \times [-1,1]$.} \label{fig:2D_Riemann}} \end{center} \end{figure} } \Allen{ \subsubsection{Example 8} In this example we solve a problem related to optimal cost determination \begin{align*} \varphi_t + \sin(y)\varphi_x + (\sin(x)+\text{sign}(\varphi_y))\varphi_y - \frac{1}{2} \sin^2(y) + \cos(x) - 1 = 0, \end{align*} with initial data $\varphi(x,y,0) = 0$ and periodic boundary conditions on $\Omega = [-\pi,\pi] \times [-\pi,\pi]$. The Hamiltonian is not smooth for this example. We are able to capture the viscosity solution well.
In Figure \ref{fig:2D_OptimalControl} we display the numerical solution on the left and the optimal control term $\text{sign}(\varphi_y)$ on the right. \begin{figure}[htb] \begin{center} \includegraphics[width=0.48\textwidth]{optimalCostSolution.eps} \includegraphics[width=0.48\textwidth]{optimalCostfunction.eps} \caption{Example 8: on the left is the numerical solution at time $t = 1.0$ approximated using $m = 2$ derivatives with $N = 40$ cells. On the right is the optimal control term $\text{sign}(\varphi_y)$. \label{fig:2D_OptimalControl}} \end{center} \end{figure} } \section{Conclusions and Future Work} Through the coupling of Hermite methods and a discontinuity sensor, our method attains order $2m+1$ convergence in smooth regions while converging to the viscosity solution when kinks are present. While we were able to achieve the goals of 1) high-order accuracy in smooth regions and 2) sharp resolution of kinks, we believe that there are several ways this method can be improved upon. The Eikonal equation inspired us to develop a flux-conservative Hermite method for HJ equations in order to keep continuity in the derivatives at the element interfaces even when the Hamiltonian is discontinuous. Our method, while effective for a Cartesian grid, cannot handle complex geometries. The next step is to deal with different types of boundary conditions and apply Hermite methods to curvilinear coordinate systems. We believe that sensing on a curvilinear coordinate system will be a straightforward generalization since we can map the curvilinear element onto the reference element on the unit square. \section{Declarations} \subsection{Funding} Allen Alvarez Loya was funded by NSF Grant DGE-1650115. Daniel Appel\"{o} was supported, in part, by NSF Grant DMS-1913076. \subsection{Conflicts of interest/Competing interests} On behalf of all authors, the corresponding author states that there is no conflict of interest.
\subsection{Availability of data and material} All data generated or analyzed during this study are included in this published article. \subsection{Code availability} The numerical examples can be found in the GitHub repository \url{https://github.com/allenalvarezloya/Hermite_HJ}. \subsection{Authors' contributions} \FinalEdits{ \section*{Appendix}
% arXiv:2105.05364, "A Hermite Method with a Discontinuity Sensor for Hamilton-Jacobi Equations" (math.NA), https://arxiv.org/abs/2105.05364
% arXiv:0903.2675, https://arxiv.org/abs/0903.2675
\title{Construction and Covering Properties of Constant-Dimension Codes}
\begin{abstract}
Constant-dimension codes (CDCs) have been investigated for noncoherent error correction in random network coding. The maximum cardinality of CDCs with given minimum distance and how to construct optimal CDCs are both open problems, although CDCs obtained by lifting Gabidulin codes, referred to as KK codes, are nearly optimal. In this paper, we first construct a new class of CDCs based on KK codes, referred to as augmented KK codes, whose cardinalities are greater than previously proposed CDCs. We then propose a low-complexity decoding algorithm for our augmented KK codes using that for KK codes. Our decoding algorithm corrects more errors than a bounded subspace distance decoder by taking advantage of the structure of our augmented KK codes. In the rest of the paper we investigate the covering properties of CDCs. We first derive bounds on the minimum cardinality of a CDC with a given covering radius and then determine the asymptotic behavior of this quantity. Moreover, we show that liftings of rank metric codes have the highest possible covering radius, and hence liftings of rank metric codes are not optimal packing CDCs. Finally, we construct good covering CDCs by permuting liftings of rank metric codes.
\end{abstract}
\section{Introduction}\label{sec:introduction} While random network coding \cite{ho_it06, FS_book07, HL08} has proved to be a powerful tool for disseminating information in networks, it is highly susceptible to errors caused by various sources. Thus, error control for random network coding is critical and has received growing attention recently. Error control schemes proposed for random network coding assume two types of transmission models: some (see, for example, \cite{cai_itw02,song_it03,yeung_cis06,cai_cis06,zhang_itw06,zhang_it08}) depend on and take advantage of the underlying network topology or the particular linear network coding operations performed at various network nodes; others \cite{koetter_it08,silva_it08} assume that the transmitter and receiver have no knowledge of such channel transfer characteristics. The contrast is similar to that between coherent and noncoherent communication systems. Data transmission in noncoherent random network coding can be viewed as sending subspaces through an operator channel \cite{koetter_it08}. Error correction in noncoherent random network coding can hence be treated as a coding problem where codewords are linear subspaces and codes are subsets of the projective space of a vector space over a finite field. Similar to codes defined over complex Grassmannians for noncoherent multiple-antenna channels, codes defined in Grassmannians associated with the vector space play a significant role in error control for noncoherent random network coding; such codes are referred to as constant-dimension codes (CDCs) \cite{koetter_it08}. In addition to the subspace metric defined in \cite{koetter_it08}, an injection metric was defined for subspace codes over adversarial channels in \cite{silva_arxiv08}. Construction of CDCs has received growing attention in the literature recently.
In \cite{koetter_it08}, a Singleton bound for CDCs and a family of codes were proposed; these codes nearly achieve the Singleton bound and are henceforth referred to as KK codes. A multi-step construction of CDCs was proposed in \cite{skachek_arxiv08}, and we call these codes Skachek codes; Skachek codes have larger cardinalities than KK codes in some scenarios, and reduce to KK codes otherwise. Further constructions for small parameter values were given in \cite{kohnert_mmics08} and the Johnson bound for CDCs was derived in \cite{xia_dcc09}. Although the CDCs in \cite{xia_dcc09} are optimal in the sense of the Johnson bound, they exist in some special cases only. Despite these previous works, the maximum cardinality of a CDC with a given minimum distance and how to construct optimal CDCs remain open problems. Although the packing properties of CDCs were investigated in \cite{kohnert_mmics08, xia_dcc09, skachek_arxiv08, koetter_it08}, the covering properties of CDCs have received little attention in the literature. Covering properties are significant for error control codes, and the covering radius is a basic geometric parameter of a code \cite{cohen_book97}. For instance, the covering radius can be viewed as a measure of performance: if minimum distance decoding is used, then the covering radius is the maximum weight of a correctable error vector \cite{berger_book71}; if the code is used for data compression, then the covering radius is a measure of the maximum distortion \cite{berger_book71}. The covering radius is also crucial for code design: if the covering radius is no less than the minimum distance of a code, then there exists a supercode with the same minimum distance and greater cardinality. This paper has two main contributions. First, we introduce a new class of CDCs, referred to as augmented KK codes.
The cardinalities of our augmented KK codes are \textbf{always greater} than those of KK codes, and in \textbf{most} cases the cardinalities of our augmented KK codes are greater than those of Skachek codes. Thus our augmented KK codes represent a step toward solving the open problem (construction of optimal CDCs) mentioned above. Furthermore, we propose an efficient decoding algorithm for our augmented KK codes using the bounded subspace distance decoding algorithm in \cite{koetter_it08}. Our decoding algorithm corrects more errors than a bounded subspace distance decoder. Second, we investigate the covering properties of CDCs. We first derive some key geometric results for Grassmannians. Using these results, we derive upper and lower bounds on the minimum cardinality of a CDC with a given covering radius. Since these bounds are asymptotically tight, we also determine the asymptotic behavior of the minimum cardinality of a CDC with a given covering radius. Although liftings of rank metric codes can be used to construct packing CDCs that are optimal up to a scalar (see, for example, those in \cite{koetter_it08}), we show that all liftings of rank metric codes have the greatest covering radius possible; our result further implies that liftings of rank metric codes are \textbf{not optimal} packing CDCs. We also construct good covering CDCs by permuting liftings of rank metric codes. The rest of the paper is organized as follows. To be self-contained, Section~\ref{sec:preliminaries} reviews some necessary background. In Section~\ref{sec:construction}, we present our augmented KK codes and a decoding algorithm for these codes. In Section~\ref{sec:covering}, we investigate the covering properties of CDCs.
\section{Preliminaries}\label{sec:preliminaries} \subsection{Subspace codes}\label{sec:subspace_codes} We refer to the set of all subspaces of $\mathrm{GF}(q)^n$ with dimension $r$ as the Grassmannian of dimension $r$ and denote it as $E_r(q,n)$; we refer to $E(q,n) = \bigcup_{r=0}^n E_r(q,n)$ as the projective space. For $U,V \in E(q,n)$, both the \emph{subspace metric} \cite[(3)]{koetter_it08} \begin{equation} \label{eq:ds} d_{\mbox{\tiny{S}}}(U,V) \stackrel{\mbox{\scriptsize def}}{=} \dim(U + V) - \dim(U \cap V) = 2\dim(U+V) - \dim(U) - \dim(V) \end{equation} and \emph{injection metric} \cite[Def.~1]{silva_arxiv08} \begin{equation} \label{eq:di} d_{\mbox{\tiny{I}}}(U,V) \stackrel{\mbox{\scriptsize def}}{=} \frac{1}{2} d_{\mbox{\tiny{S}}}(U,V) + \frac{1}{2} |\dim(U) - \dim(V)| = \dim(U + V) - \min\{\dim(U), \dim(V)\} \end{equation} are metrics over $E(q,n)$. A {\em subspace code} is a nonempty subset of $E(q,n)$. The minimum subspace (respectively, injection) distance of a subspace code is the minimum subspace (respectively, injection) distance over all pairs of distinct codewords. \subsection{CDCs and rank metric codes}\label{sec:CDCs_and_rank_metric} The Grassmannian $E_r(q,n)$ endowed with both the subspace and injection metrics forms an association scheme \cite{koetter_it08, delsarte_jct76}. For all $U,V \in E_r(q,n)$, $d_{\mbox{\tiny{S}}}(U,V) = 2d_{\mbox{\tiny{I}}}(U,V)$ and the injection distance provides a natural distance spectrum, i.e., $0\leq d_{\mbox{\tiny{I}}}(U,V) \leq r$. We have $|E_r(q,n)| = {n \brack r}$, where ${n \brack r} = \prod_{i=0}^{r-1} \frac{q^n - q^i}{q^r - q^i}$ is the Gaussian polynomial \cite{andrews_book76}, which satisfies \begin{equation} q^{r(n-r)} \leq {n \brack r} < K_q^{-1} q^{r(n-r)} \label{eq:Gaussian} \end{equation} for all $0 \leq r \leq n$, where $K_q = \prod_{j=1}^\infty (1-q^{-j})$ \cite{gadouleau_it08_dep}. 
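The Gaussian polynomial and the bound (\ref{eq:Gaussian}) are easy to evaluate exactly with integer arithmetic. A minimal sketch (the parameter values are illustrative):

```python
def gaussian_binomial(n, r, q):
    """Number of r-dimensional subspaces of GF(q)^n:
    prod_{i=0}^{r-1} (q^n - q^i) / (q^r - q^i)."""
    num, den = 1, 1
    for i in range(r):
        num *= q**n - q**i
        den *= q**r - q**i
    return num // den  # always an integer

# Illustrative check of the lower bound q^{r(n-r)} <= [n choose r]_q.
n, r, q = 7, 3, 2
g = gaussian_binomial(n, r, q)
print(g)                      # → 11811
assert q**(r * (n - r)) <= g
```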
We denote the number of subspaces in $E_r(q,n)$ at distance $d$ from a given subspace as $N_{\mbox{\tiny{C}}}(d) = q^{d^2} {r \brack d} {n-r \brack d}$ \cite{koetter_it08}, and denote a ball in $E_r(q,n)$ of radius $t$ around a subspace $U$ and its volume as $B_t(U)$ and $V_{\mbox{\tiny{C}}}(t) = \sum_{d=0}^t N_{\mbox{\tiny{C}}}(d)$, respectively. A subset of $E_r(q,n)$ is called a constant-dimension code (CDC). A CDC is thus a subspace code whose codewords have the same dimension. We denote the \textbf{maximum} cardinality of a CDC in $E_r(q,n)$ with minimum distance $d$ as $A_{\mbox{\tiny{C}}}(q,n,r,d)$. Constructions of CDCs and bounds on $A_{\mbox{\tiny{C}}}(q,n,r,d)$ have been given in \cite{koetter_it08, xia_dcc09, skachek_arxiv08, gabidulin_isit08, kohnert_mmics08}. In particular, $A_{\mbox{\tiny{C}}}(q,n,r,1) = {n \brack r}$ and it is shown \cite{skachek_arxiv08, xia_dcc09} for $r \leq \left\lfloor \frac{n}{2}\right\rfloor$ and $2 \leq d \leq r$, \begin{equation}\label{eq:bounds_Ac} \frac{q^{n(r-d+1)} - q^{(r+l)(r-d+1)}}{q^{r(r-d+1)} - 1} \leq A_{\mbox{\tiny{C}}}(q,n,r,d) \leq \frac{{n \brack r-d+1}}{{r \brack r-d+1}}, \end{equation} where $l \equiv n \mod r$. We denote the lower bound on $A_{\mbox{\tiny{C}}}(q,n,r,d)$ in (\ref{eq:bounds_Ac}) as $L(q,n,r,d)$. Since the lower bound is due to the class of codes proposed by Skachek \cite{skachek_arxiv08}, we refer to these codes as Skachek codes. CDCs are closely related to rank metric codes \cite{delsarte_jct78, gabidulin_pit0185, roth_it91}, which can be viewed as sets of matrices in $\mathrm{GF}(q)^{m \times n}$. The rank distance between two matrices ${\bf C}, {\bf D} \in \mathrm{GF}(q)^{m \times n}$ is defined as $d_{\mbox{\tiny{R}}}({\bf C}, {\bf D}) \stackrel{\mbox{\scriptsize def}}{=} \mathrm{rk}({\bf C} - {\bf D})$. 
The maximum cardinality of a rank metric code in $\mathrm{GF}(q)^{m \times n}$ with minimum distance $d$ is given by $\min\{q^{m(n-d+1)}, q^{n(m-d+1)}\}$ and codes that achieve this cardinality are referred to as MRD codes. In this paper, we shall only consider MRD codes that are either introduced independently in \cite{delsarte_jct78, gabidulin_pit0185, roth_it91} for $n \leq m$, or their transpose codes for $n > m$. The number of matrices in $\mathrm{GF}(q)^{m \times n}$ with rank $d$ is denoted as $N_{\mbox{\tiny{R}}}(q,m,n,d) = {n \brack d} \prod_{i=0}^{d-1} (q^m - q^i)$, and the volume of a ball with rank radius $t$ in $\mathrm{GF}(q)^{m \times n}$ as $V_{\mbox{\tiny{R}}}(q,m,n,t) = \sum_{d=0}^t N_{\mbox{\tiny{R}}}(q,m,n,d)$. The minimum cardinality $K_{\mbox{\tiny{R}}}(q^m,n,\rho)$ of a code in $\mathrm{GF}(q)^{m \times n}$ with rank covering radius $\rho$ is studied in \cite{gadouleau_it08_covering, gadouleau_cl08} and satisfies $K_{\mbox{\tiny{R}}}(q^m,n,\rho) = K_{\mbox{\tiny{R}}}(q^n,m,\rho)$ \cite{gadouleau_it08_covering}. CDCs are related to rank metric codes through the lifting operation \cite{silva_it08}. Denoting the row space of a matrix ${\bf M}$ as $R({\bf M})$, the lifting of ${\bf C} \in \mathrm{GF}(q)^{r \times (n-r)}$ is defined as $I({\bf C}) = R({\bf I}_r | {\bf C}) \in E_r(q,n)$. For all ${\bf C}, {\bf D} \in \mathrm{GF}(q)^{r \times (n-r)}$, we have $d_{\mbox{\tiny{I}}}(I({\bf C}), I({\bf D})) = d_{\mbox{\tiny{R}}}({\bf C}, {\bf D})$ \cite{silva_it08}. A KK code in $E_r(q,n)$ with minimum injection distance $d$ is the lifting $I(\mathcal{C}) \subseteq E_r(q,n)$ of an MRD code $\mathcal{C} \subseteq \mathrm{GF}(q)^{ r \times (n-r)}$ with minimum rank distance $d$ and cardinality $\min\{q^{(n-r)(r-d+1)}, q^{r(n-r-d+1)} \}$. An efficient bounded subspace distance decoding algorithm for KK codes was also given in \cite{koetter_it08}. Although the algorithm was presented for $r \leq \frac{n}{2}$, it can be easily generalized to all $r$. 
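The identity $d_{\mbox{\tiny{I}}}(I({\bf C}), I({\bf D})) = d_{\mbox{\tiny{R}}}({\bf C}, {\bf D})$ from \cite{silva_it08} can be verified numerically for $q = 2$ by representing matrix rows as bitmasks. A minimal sketch (the dimensions $r = 3$, $n - r = 4$ and the random trial are illustrative):

```python
import random

def gf2_rank(rows):
    """Rank over GF(2); each row is a nonnegative integer bitmask.
    Standard elimination on the lowest set bit of each pivot row."""
    rank = 0
    rows = list(rows)
    while rows:
        pivot = rows.pop()
        if pivot:
            rank += 1
            lsb = pivot & -pivot
            rows = [r ^ pivot if r & lsb else r for r in rows]
    return rank

def lift_rows(c_rows, r, m):
    """Rows of the generator matrix (I_r | C) as bitmasks: the identity
    occupies the high bits, the codeword C the low m bits."""
    return [(1 << (m + r - 1 - i)) | c_rows[i] for i in range(r)]

r, m = 3, 4   # codewords in GF(2)^{3 x 4}; liftings live in E_3(2, 7)
random.seed(1)
C = [random.randrange(1 << m) for _ in range(r)]
D = [random.randrange(1 << m) for _ in range(r)]
rank_diff = gf2_rank([C[i] ^ D[i] for i in range(r)])   # d_R(C, D)
# d_I = rank of the stacked generators minus min of the two ranks (= r here).
d_inj = gf2_rank(lift_rows(C, r, m) + lift_rows(D, r, m)) - r
assert d_inj == rank_diff
```

Row-reducing the stacked generators shows why: subtracting $({\bf I}_r | {\bf C})$ from $({\bf I}_r | {\bf D})$ leaves $({\bf 0} | {\bf D} - {\bf C})$, so the stacked rank is $r + \mathrm{rk}({\bf C} - {\bf D})$.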
\section{Construction of CDCs} \label{sec:construction} In this section, we construct a new class of CDCs which contain KK codes as proper subsets. Thus we call them augmented KK codes. We will show that the cardinalities of our augmented KK codes are always greater than those of KK codes, and that in most cases the cardinalities of our augmented KK codes are greater than those of Skachek codes. Furthermore, we propose a low-complexity decoder for our augmented KK codes based on the bounded subspace distance decoder in \cite{koetter_it08}. Since dual CDCs preserve the distance, we assume $r \leq \frac{n}{2}$ without loss of generality. \subsection{Augmented KK codes} Our augmented KK code is so named because it has a layered structure and the first layer is simply a KK code. We denote a KK code in $E_r(q,n)$ with minimum injection distance $d$ ($d\leq r$ by definition) and cardinality $q^{(n-r)(r-d+1)}$ as $\mathcal{E}^0$. For $1 \leq k \leq \left\lfloor \frac{r}{d} \right\rfloor$, we first define two MRD codes $\mathcal{C}^k$ and $\mathcal{D}^k$, and then construct $\mathcal{E}^k$ based on $\mathcal{C}^k$ and $\mathcal{D}^k$. $\mathcal{C}^k$ is an MRD code in $\mathrm{GF}(q)^{(r-kd) \times kd}$ with minimum distance $d$ for $k \leq \left\lfloor \frac{r}{d} \right\rfloor - 1$ ($\left\lfloor \frac{n-r}{d} \right\rfloor \geq \left\lfloor \frac{r}{d} \right\rfloor$) and $\mathcal{C}^{\left\lfloor \frac{r}{d} \right\rfloor} = \{ {\bf 0} \} \subseteq \mathrm{GF}(q)^{\left( r - \left\lfloor \frac{r}{d} \right\rfloor d \right) \times \left\lfloor \frac{r}{d} \right\rfloor d}$; $\mathcal{D}^k$ is an MRD code in $\mathrm{GF}(q)^{r \times (n-r-kd)}$ with minimum distance $d$ for $k \leq \left\lfloor \frac{n-r}{d} \right\rfloor - 1$ and $\mathcal{D}^{\left\lfloor \frac{n-r}{d} \right\rfloor} = \{ {\bf 0}\} \subseteq \mathrm{GF}(q)^{r \times \left( n-r- \left\lfloor \frac{n-r}{d} \right\rfloor d \right)}$.
For $1 \leq k < \left\lfloor \frac{r}{d} \right\rfloor$, the block lengths of $\mathcal{C}^k$ and $\mathcal{D}^k$ are at least $d$, and hence existence of MRD codes with the parameters mentioned above is trivial. For $1 \leq k \leq \left\lfloor \frac{r}{d} \right\rfloor$, $I(\mathcal{C}^k)$ and $I(\mathcal{D}^k)$ are either trivial codes or KK codes with minimum injection distance $d$ in $E_{r-kd}(q,r)$ and $E_r(q,n-kd)$, respectively. For $1 \leq k \leq \left\lfloor \frac{r}{d} \right\rfloor$, ${\bf C}_i^k \in \mathcal{C}^k$, and ${\bf D}_j^k \in \mathcal{D}^k$, we define $E_{i,j}^k \in E_r(q,n)$ as the row space of $\left( \begin{array}{c|c|c|c} {\bf I}_{r-kd} & {\bf C}_i^k & {\bf 0} & \multirow{2}{*}{${\bf D}_j^k$}\\ \cline{1-3} {\bf 0} & {\bf 0} & {\bf I}_{kd} & \end{array} \right)$ and $\mathcal{E}^k = \{E_{i,j}^k\}_{i,j=0}^{|\mathcal{C}^k|-1,|\mathcal{D}^k|-1}$. Our augmented KK code is simply $\mathcal{E} = \bigcup_{k=0}^{\left\lfloor \frac{r}{d} \right\rfloor} \mathcal{E}^k$. In order to determine its minimum distance, we first establish two technical results. First, for any two matrices ${\bf A} \in \mathrm{GF}(q)^{a \times n}$, ${\bf B} \in \mathrm{GF}(q)^{b \times n}$, by (\ref{eq:ds}) and (\ref{eq:di}) we can easily show that \begin{eqnarray} \label{eq:ds_bound} d_{\mbox{\tiny{S}}}(R({\bf A}), R({\bf B})) &=& 2\mathrm{rk}({\bf A}^T | {\bf B}^T) - \mathrm{rk}({\bf A}) - \mathrm{rk}({\bf B}) \geq |\mathrm{rk}({\bf A}) - \mathrm{rk}({\bf B})|,\\ \label{eq:di_bound} d_{\mbox{\tiny{I}}}(R({\bf A}), R({\bf B})) &=& \mathrm{rk}({\bf A}^T | {\bf B}^T) - \min\{\mathrm{rk}({\bf A}), \mathrm{rk}({\bf B})\} \geq |\mathrm{rk}({\bf A}) - \mathrm{rk}({\bf B})|. \end{eqnarray} Second, we show that truncating the generator matrices of two subspaces in $E(q,n)$ can only reduce the (subspace or injection) distance between them. \begin{lemma} \label{lemma:truncate} Suppose $0 \leq n_1 \leq n$. 
Let ${\bf A} = ({\bf A}_ 1| {\bf A}_2) \in \mathrm{GF}(q)^{a \times n}$, ${\bf B} = ({\bf B}_ 1| {\bf B}_2) \in \mathrm{GF}(q)^{b \times n}$, where ${\bf A}_1 \in \mathrm{GF}(q)^{a \times n_1}$ and ${\bf B}_1 \in \mathrm{GF}(q)^{b \times n_1}$. Then for $i=1$ and $2$, $d_{\mbox{\tiny{S}}}(R({\bf A}_i), R({\bf B}_i)) \leq d_{\mbox{\tiny{S}}}(R({\bf A}), R({\bf B}))$ and $d_{\mbox{\tiny{I}}}(R({\bf A}_i), R({\bf B}_i)) \leq d_{\mbox{\tiny{I}}}(R({\bf A}), R({\bf B}))$. \end{lemma} \begin{proof} It suffices to prove it for $i=1$ and $n_1 = n-1$. We need to distinguish two cases, depending on $\mathrm{rk}({\bf A}_1^T | {\bf B}_1^T)$. First, if $\mathrm{rk}({\bf A}_1^T | {\bf B}_1^T) = \mathrm{rk}({\bf A}^T | {\bf B}^T)$, then it is easily shown that $\mathrm{rk}({\bf A}_1) = \mathrm{rk}({\bf A})$ and $\mathrm{rk}({\bf B}_1) = \mathrm{rk}({\bf B})$, and hence $d_{\mbox{\tiny{S}}}(R({\bf A}_1), R({\bf B}_1)) = d_{\mbox{\tiny{S}}}(R({\bf A}), R({\bf B}))$ and $d_{\mbox{\tiny{I}}}(R({\bf A}_1), R({\bf B}_1)) = d_{\mbox{\tiny{I}}}(R({\bf A}), R({\bf B}))$ by (\ref{eq:ds_bound}) and (\ref{eq:di_bound}), respectively. Second, if $\mathrm{rk}({\bf A}_1^T | {\bf B}_1^T) = \mathrm{rk}({\bf A}^T | {\bf B}^T) - 1$, then $d_{\mbox{\tiny{S}}}(R({\bf A}_1), R({\bf B}_1)) = 2 \mathrm{rk}({\bf A}^T | {\bf B}^T) - 2 - \mathrm{rk}({\bf A}_1) - \mathrm{rk}({\bf B}_1) \leq d_{\mbox{\tiny{S}}}(R({\bf A}), R({\bf B}))$ by (\ref{eq:ds_bound}) and $d_{\mbox{\tiny{I}}}(R({\bf A}_1), R({\bf B}_1)) = \mathrm{rk}({\bf A}^T | {\bf B}^T) - 1 - \min\{\mathrm{rk}({\bf A}_1), \mathrm{rk}({\bf B}_1)\} \leq d_{\mbox{\tiny{I}}}(R({\bf A}), R({\bf B}))$ by (\ref{eq:di_bound}). \end{proof} \begin{proposition} \label{prop:E_union_K} $\mathcal{E}$ has minimum injection distance $d$. \end{proposition} \begin{proof} We show that any two codewords $E_{i,j}^k, E_{a,b}^c \in \mathcal{E}$ are at injection distance at least $d$ using Lemma~\ref{lemma:truncate}. 
When $c \neq k$, let us assume $c < k$ without loss of generality, and then $d_{\mbox{\tiny{I}}}(E_{i,j}^k, E_{a,b}^c) \geq d_{\mbox{\tiny{I}}}(R({\bf I}_{r-kd}| {\bf 0}), R({\bf I}_{r-cd})) = (k-c)d \geq d$. When $c = k$ and $a \neq i$, then $d_{\mbox{\tiny{I}}}(E_{i,j}^k, E_{a,b}^k) \geq d_{\mbox{\tiny{I}}}(I({\bf C}_i^k), I({\bf C}_a^k)) \geq d$. When $c = k$, $a = i$, and $b \neq j$, then $d_{\mbox{\tiny{I}}}(E_{i,j}^k, E_{i,b}^k) \geq d_{\mbox{\tiny{I}}}(I({\bf D}_j^k), I({\bf D}_b^k)) \geq d$. \end{proof} Let us first determine the cardinality of our augmented KK codes. By construction, $\mathcal{E}$ has cardinality $|\mathcal{E}| = q^{(n-r)(r-d+1)} + \sum_{k=1}^{\left\lfloor \frac{r}{d} \right\rfloor} |\mathcal{C}^k| |\mathcal{D}^k|$, where $|\mathcal{C}^{\left\lfloor \frac{r}{d} \right\rfloor}| = 1$ and $|\mathcal{C}^k| = \min\{ q^{(r-kd)(kd-d+1)}, q^{kd(r-kd-d+1)} \}$ for $1 \leq k \leq \left\lfloor \frac{r}{d} \right\rfloor - 1$ and $|\mathcal{D}^{\left\lfloor \frac{n-r}{d} \right\rfloor}| = 1$ and \\ $|\mathcal{D}^k| = \min\{ q^{r(n-r-kd-d+1)}, q^{(n-r-kd)(r-d+1)} \}$ for $1 \leq k \leq \left\lfloor \frac{n-r}{d} \right\rfloor - 1$. Let us compare the cardinality of our augmented KK codes to those of KK and Skachek codes. Note that all three codes are CDCs with minimum injection distance $d$ in $E_r(q,n)$. First, it is easily shown that our augmented KK codes properly contain KK codes for all parameter values. This is a clear distinction from Skachek codes with cardinality $L(q,n,r,d)$, which by (\ref{eq:bounds_Ac}) reduce to KK codes for $3r > n$. In order to compare our codes to Skachek codes when $3r \leq n$, we first remark that (\ref{eq:bounds_Ac}) and (\ref{eq:Gaussian}) lead to $L(q,n,r,d) - q^{(n-r)(r-d+1)} < K_q^{-1} q^{(n-2r)(r-d+1)}$. Also, we have $|\mathcal{E}| \geq q^{(n-r)(r-d+1)} + |\mathcal{C}^1||\mathcal{D}^1| \geq q^{(n-r)(r-d+1)} + q^{(n-r-d)(r-d+1)}$. 
Hence $|\mathcal{E}| - q^{(n-r)(r-d+1)} > K_q q^{(r-d)(r-d+1)} (L(q,n,r,d) - q^{(n-r)(r-d+1)})$, and our augmented KK codes have a greater cardinality than Skachek codes when $d < r$. We emphasize that for CDCs of dimension $r$, the minimum injection distance $d$ satisfies $d\leq r$. A Skachek code is constructed in multiple steps, and in the $i$-th step ($i\geq 1$), subspaces that correspond to a KK code in $E_r(q,n-ir)$ are added to the code. When $d=r$, $\mathcal{E}$ is actually the code obtained after the first step. \subsection{Decoding of augmented KK codes} Let ${\bf A} = ({\bf A}_0 | {\bf A}_3) \in \mathrm{GF}(q)^{a \times n}$ be the received matrix, where ${\bf A}_0 \in \mathrm{GF}(q)^{a \times r}$ and ${\bf A}_3 \in \mathrm{GF}(q)^{a \times (n-r)}$. We propose a decoding algorithm that either produces the unique codeword in $\mathcal{E}$ closest to $R({\bf A})$ in the subspace metric or returns a failure. Since the minimum injection distance of our augmented KK codes is $d$, their minimum subspace distance is $2d$, and a bounded subspace distance decoder finds the codeword closest to $R({\bf A})$ provided that it lies within subspace distance $d-1$. Our decoding algorithm always returns the nearest codeword when it is at subspace distance at most $d-1$ from the received subspace, and in some cases it also returns the unique nearest codeword at greater distances, thus correcting more errors than a bounded subspace distance decoder. Given the layered structure of $\mathcal{E}$, our decoding algorithm for $\mathcal{E}$ is based on a decoding algorithm for $\mathcal{E}^k$, shown below in Algorithm~\ref{alg:Ek}, for any $k$. We denote the codewords in $\mathcal{E}^0$ as $E_{0,j}^0$ for $0 \leq j \leq |\mathcal{E}^0|-1$.
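Both algorithms below repeatedly evaluate subspace distances, which by (\ref{eq:ds_bound}) and (\ref{eq:di_bound}) reduce to rank computations on stacked generator matrices. A minimal sketch over $\mathrm{GF}(2)$, with matrix rows encoded as integer bitmasks (purely illustrative; this is not the decoder itself):

```python
def rank_gf2(rows):
    # Rank over GF(2) by Gaussian elimination; each row is a bitmask.
    rows, rank = list(rows), 0
    for bit in reversed(range(max((r.bit_length() for r in rows), default=0))):
        pivot = next((r for r in rows if (r >> bit) & 1), None)
        if pivot is None:
            continue
        rows.remove(pivot)
        rows = [r ^ pivot if (r >> bit) & 1 else r for r in rows]
        rank += 1
    return rank

def d_subspace(A, B):
    # d_S(R(A), R(B)) = 2 rk(A^T | B^T) - rk(A) - rk(B)
    return 2 * rank_gf2(A + B) - rank_gf2(A) - rank_gf2(B)

def d_injection(A, B):
    # d_I(R(A), R(B)) = rk(A^T | B^T) - min{rk(A), rk(B)}
    return rank_gf2(A + B) - min(rank_gf2(A), rank_gf2(B))
```

For example, the row spaces of $\{100, 010\}$ and $\{100, 001\}$ are at subspace distance $2$ and injection distance $1$.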
\begin{algorithm}\label{alg:Ek} \renewcommand{\labelenumi}{\ref{alg:Ek}.\theenumi} $\mathrm{EBDD}(k, {\bf A})$.\\ Input: $k$ and ${\bf A} = ({\bf A}_1 | {\bf A}_2 | {\bf A}_3) \in \mathrm{GF}(q)^{a \times n}$, ${\bf A}_1 \in \mathrm{GF}(q)^{a \times (r - kd)}$, ${\bf A}_2 \in \mathrm{GF}(q)^{a \times kd}$, ${\bf A}_3 \in \mathrm{GF}(q)^{a \times (n-r)}$.\\ Output: $(E_{i,j}^k, d_k, f_k)$. \begin{enumerate} \item \label{step:E0} If $k=0$, use the decoder for $\mathcal{E}^0$. If it returns a failure, return $(I({\bf 0}), d, 0)$; otherwise it outputs $E_{0,j}^0$, so calculate $d_k = d_{\mbox{\tiny{S}}}(R({\bf A}), E_{0,j}^0)$ and return $(E_{0,j}^0, d_k, 0)$. \item \label{step:decode_Ck} Use the decoder of $I(\mathcal{C}^k)$ on $({\bf A}_1 | {\bf A}_2)$ to obtain ${\bf C}_i^k$. If the decoder returns a failure, set ${\bf C}_i^k = {\bf 0}$, ${\bf D}_j^k = {\bf 0}$ and return $(E_{i,j}^k, d, 0)$. \item \label{step:decode_Dk} Use the decoder of $I(\mathcal{D}^k)$ on $({\bf A}_1 | {\bf A}_3)$ to obtain ${\bf D}_j^k$. If the decoder returns a failure, set ${\bf D}_j^k = {\bf 0}$ and return $(E_{i,j}^k, d, 0)$. \item \label{step:d0} Calculate $d_k = d_{\mbox{\tiny{S}}}(R({\bf A}), E_{i,j}^k)$ and $f_k = 2d- \max\{d_{\mbox{\tiny{S}}}(R({\bf A}_1 | {\bf A}_2), I({\bf C}_i^k)), d_{\mbox{\tiny{S}}}(R({\bf A}_1 | {\bf A}_3), I({\bf D}_j^k))\}$ and return $(E_{i,j}^k, d_k, f_k)$. \end{enumerate} \end{algorithm} Algorithm~\ref{alg:Ek} is based on the bounded distance decoder proposed in \cite{koetter_it08}. When $k=0$, $\mathcal{E}^0$ is simply a KK code, and the algorithm in \cite{koetter_it08} is used directly; when $k \geq 1$, given the structure of $\mathcal{E}^k$, two decoding attempts are made based on $({\bf A}_1 | {\bf A}_2)$ and $({\bf A}_1 | {\bf A}_3)$, and both are based on the decoding algorithm in \cite{koetter_it08}. We remark that Algorithm~\ref{alg:Ek} always returns $(E_{i,j}^k, d_k, f_k)$.
If a unique nearest codeword in $\mathcal{E}^k$ at distance no more than $d-1$ from $R({\bf A})$ exists, then by Lemma~\ref{lemma:truncate} Steps~\ref{alg:Ek}.\ref{step:decode_Ck} and \ref{alg:Ek}.\ref{step:decode_Dk} succeed and Algorithm~\ref{alg:Ek} returns this unique nearest codeword $E_{i,j}^k$. However, when no such codeword in $\mathcal{E}^k$ at distance no more than $d-1$ exists, the return value $f_k$ can still be used to identify the unique nearest codeword because $f_k$ is a lower bound on the distance from the received subspace to any other codeword in $\mathcal{E}^k$. Also, when $f_k=0$, Algorithm~\ref{alg:E} below always returns a failure. Thus, we call Algorithm~\ref{alg:Ek} an enhanced bounded distance decoder. \begin{lemma} \label{lemma:fk} Suppose the output of $\mathrm{EBDD}(k, {\bf A})$ is $(E_{i,j}^k, d_k, f_k)$. Then $d_{\mbox{\tiny{S}}}(R({\bf A}), E_{u,v}^k) \geq f_k$ for any $E_{u,v}^k \in \mathcal{E}^k$ with $(u,v) \neq (i,j)$. \end{lemma} \begin{proof} The case $f_k = 0$ is trivial, and it suffices to consider $f_k = \min\{2d- d_{\mbox{\tiny{S}}}(R({\bf A}_1 | {\bf A}_2), I({\bf C}_i^k)), 2d- d_{\mbox{\tiny{S}}}(R({\bf A}_1 | {\bf A}_3), I({\bf D}_j^k))\}$. When $u \neq i$, Lemma \ref{lemma:truncate} yields \begin{eqnarray} \nonumber d_{\mbox{\tiny{S}}}(R({\bf A}), E_{u,v}^k) &\geq& d_{\mbox{\tiny{S}}}(R({\bf A}_1|{\bf A}_2), I({\bf C}_u^k))\\ \nonumber &\geq& d_{\mbox{\tiny{S}}}(I({\bf C}_i^k), I({\bf C}_u^k)) - d_{\mbox{\tiny{S}}}(R({\bf A}_1|{\bf A}_2), I({\bf C}_i^k))\\ \nonumber &\geq& 2d- d_{\mbox{\tiny{S}}}(R({\bf A}_1|{\bf A}_2), I({\bf C}_i^k)) \geq f_k. \end{eqnarray} Similarly, when $v \neq j$, we obtain $d_{\mbox{\tiny{S}}}(R({\bf A}), E_{u,v}^k) \geq 2d- d_{\mbox{\tiny{S}}}(R({\bf A}_1 | {\bf A}_3), I({\bf D}_j^k)) \geq f_k$. \end{proof} The algorithm for $\mathcal{E}$ thus follows.
\begin{algorithm}\label{alg:E} \renewcommand{\labelenumi}{\ref{alg:E}.\theenumi} Decoder for $\mathcal{E}$.\\ Input: ${\bf A} = ({\bf A}_0 | {\bf A}_3) \in \mathrm{GF}(q)^{a \times n}$, ${\bf A}_0 \in \mathrm{GF}(q)^{a \times r}$, ${\bf A}_3 \in \mathrm{GF}(q)^{a \times (n-r)}$.\\ Output: Either a failure or the unique nearest codeword in $\mathcal{E}$ to $R({\bf A})$. \begin{enumerate} \item \label{step:t} If $\mathrm{rk}({\bf A}) < r-d+1$, return a failure. \item \label{step:lm} Calculate $r - \mathrm{rk}({\bf A}_0) = ld + m$ where $0 \leq l \leq \left\lfloor \frac{r}{d} \right\rfloor$ and $0 \leq m < d$. \item \label{step:Ek} Call $\mathrm{EBDD}(l, {\bf A})$ to obtain $(E_{i,j}^l,d_l,f_l)$. If $d_l \leq d-1$, return $E_{i,j}^l$. \item \label{step:Ek+1} If $m=0$, return a failure. Otherwise, call $\mathrm{EBDD}(l+1, {\bf A})$ to obtain $(E_{s,t}^{l+1}, d_{l+1}, f_{l+1})$. If $d_{l+1} \leq d-1$, return $E_{s,t}^{l+1}$. \item \label{step:compare_d0} If $d_l < \min\{d+m, f_l, d_{l+1}, f_{l+1}, 2d-m\}$, return $E_{i,j}^l$. If $d_{l+1} < \min\{d+m, d_l, f_l, f_{l+1}, 2d-m\}$, return $E_{s,t}^{l+1}$. \item \label{step:return} Return a failure. \end{enumerate} \end{algorithm} \begin{proposition} \label{prop:decoding_E} If the received subspace is at subspace distance at most $d-1$ from a codeword in $\mathcal{E}$, then Algorithm~\ref{alg:E} returns this codeword. Otherwise, Algorithm~\ref{alg:E} returns either a failure or the unique codeword closest to the received subspace in the subspace metric. \end{proposition} \begin{proof} We first show that Algorithm~\ref{alg:E} returns the unique nearest codeword in $\mathcal{E}$ to the received subspace if it is at subspace distance at most $d-1$.
For all $1 \leq k \leq \left\lfloor \frac{r}{d} \right\rfloor$ and $E_{u,v}^k \in \mathcal{E}^k$, Lemma \ref{lemma:truncate} and (\ref{eq:ds_bound}) yield \begin{equation}\label{eq:d(R(A),El)} d_{\mbox{\tiny{S}}}(R({\bf A}), E_{u,v}^k) \geq d_{\mbox{\tiny{S}}}(R({\bf A}_0), I({\bf C}_u^k)) \geq |r-kd - \mathrm{rk}({\bf A}_0)| = |(l-k)d + m|. \end{equation} Similarly, (\ref{eq:ds_bound}) yields $d_{\mbox{\tiny{S}}}(R({\bf A}), E_{0,v}^0) \geq ld+m$ for any $v$. Hence $d_{\mbox{\tiny{S}}}(R({\bf A}), \mathcal{E}^k) \geq d$ for $k \leq l-1$ or $k \geq l+2$. Therefore, the unique nearest codeword is either in $\mathcal{E}^l$ or $\mathcal{E}^{l+1}$, and applying Algorithm~\ref{alg:Ek} to $\mathcal{E}^l$ and $\mathcal{E}^{l+1}$ always returns the nearest codeword. We now show that when the distance from the received subspace to the code is at least $d$, Algorithm~\ref{alg:E} either produces the unique nearest codeword or returns a failure. First, by (\ref{eq:d(R(A),El)}), $d_{\mbox{\tiny{S}}}(R({\bf A}), \mathcal{E}^{l-1}) \geq d+m$ and $d_{\mbox{\tiny{S}}}(R({\bf A}), \mathcal{E}^{l+2}) \geq 2d-m$, while $d_{\mbox{\tiny{S}}}(R({\bf A}), \mathcal{E}^k) \geq 2d$ for $k \leq l-2$ or $k \geq l+3$. Also, by Lemma \ref{lemma:fk}, $d_{\mbox{\tiny{S}}}(R({\bf A}), E_{u,v}^l) \geq f_l$ for all $(u,v) \neq (i,j)$ and $d_{\mbox{\tiny{S}}}(R({\bf A}), \mathcal{E}^{l+1}) \geq \min\{d_{l+1}, f_{l+1}\}$. Therefore, if $d_l < \min\{d+m, f_l, d_{l+1}, f_{l+1}, 2d-m\}$, then $E_{i,j}^l$ is the unique codeword closest to $R({\bf A})$. Similarly, if $d_{l+1} < \min\{d+m, d_l, f_l, f_{l+1}, 2d-m\}$, then $E_{s,t}^{l+1}$ is the unique codeword closest to $R({\bf A})$. \end{proof} We note that when $\mathrm{rk}({\bf A}) < r-d+1$, by (\ref{eq:ds_bound}) Steps~\ref{alg:Ek}.\ref{step:decode_Ck} and \ref{alg:Ek}.\ref{step:decode_Dk} would both fail, and Algorithm~\ref{alg:E} will return a failure.
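The layer-pruning step in the proof above is easy to check mechanically: writing $r - \mathrm{rk}({\bf A}_0) = ld + m$, the lower bound $|(l-k)d + m|$ on the subspace distance rules out every layer except $k = l$ and, when $m > 0$, $k = l+1$. A small sketch (hypothetical helper, for illustration only):

```python
def reachable_layers(l, m, d, num_layers):
    # Layers k in 0..num_layers whose lower bound |(l-k)d + m| on the
    # subspace distance to R(A) does not already exceed d-1.
    return [k for k in range(num_layers + 1) if abs((l - k) * d + m) <= d - 1]
```

With $d = 3$ and $l = 2$, `reachable_layers(2, 1, 3, 5)` gives `[2, 3]`, while $m = 0$ leaves only layer $l$ itself.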
We also justify why Algorithm~\ref{alg:E} returns a failure if $d_l \geq d$ and $m=0$ in Step \ref{alg:E}.\ref{step:Ek}. Suppose $d_l \geq d$ and $m = 0$, and suppose we were to apply Algorithm~\ref{alg:Ek} to $\mathcal{E}^{l+1}$. Then we have $d_l \geq d+m$, and by (\ref{eq:d(R(A),El)}) $d_{l+1} \geq |d-m| = d+m$. Therefore, neither inequality in Step \ref{alg:E}.\ref{step:compare_d0} is satisfied and the decoder returns a failure. By Proposition~\ref{prop:decoding_E}, Algorithm~\ref{alg:E} decodes beyond half the minimum distance. However, the decoding radius of Algorithm~\ref{alg:E} is limited: it is at most $d+\left \lfloor \frac{d}{2}\right\rfloor$ due to the terms $d+m$ and $2d-m$ in the inequalities in Step~\ref{alg:E}.\ref{step:compare_d0}. We emphasize that this is just an upper bound, and its tightness is unknown. Suppose $r - \mathrm{rk}({\bf A}_0) = ld + m$. When Algorithm~\ref{alg:E} decodes beyond half the minimum distance, it is necessary that $f_l$ and $f_{l+1}$ both be nonzero in Step~\ref{alg:E}.\ref{step:compare_d0}. This implies that the row spaces of $({\bf A}_1 | {\bf A}_2)$ are at subspace distance no more than $d-1$ from $I(\mathcal{C}^l)$ and $I(\mathcal{C}^{l+1})$, and that the row spaces of $({\bf A}_1 | {\bf A}_3)$ are at subspace distance no more than $d-1$ from $I(\mathcal{D}^l)$ and $I(\mathcal{D}^{l+1})$. We note that the inequalities in Step~\ref{alg:E}.\ref{step:compare_d0} are strict in order to ensure that the output of the decoder is the \textbf{unique} nearest codeword from the received subspace. However, if one of the nearest codewords is an acceptable outcome, then equality can be allowed in the inequalities in Step~\ref{alg:E}.\ref{step:compare_d0}. Our decoding algorithm can be readily simplified in order to obtain a bounded subspace distance decoder, by removing Step \ref{alg:E}.\ref{step:compare_d0}.
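The upper bound $d + \left\lfloor \frac{d}{2} \right\rfloor$ on the decoding radius stated above comes from maximizing $\min\{d+m, 2d-m\}$ over $0 \leq m < d$; a one-line numerical check (illustrative only, with a hypothetical helper name):

```python
def radius_cap(d):
    # Largest value of min(d+m, 2d-m) over 0 <= m < d; the strict
    # inequalities in the comparison step involve the terms d+m and 2d-m,
    # so decoding cannot succeed at or beyond this cap.
    return max(min(d + m, 2 * d - m) for m in range(d))
```

One checks that `radius_cap(d)` equals $d + \left\lfloor \frac{d}{2} \right\rfloor$ for all $d \geq 1$.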
We emphasize that the general decoding algorithm has the same order of complexity as this simplified bounded subspace distance decoding algorithm. Finally, we note that the decoding algorithms and discussions above consider the subspace metric. Remarkably, our decoder remains unchanged if the injection metric is used instead. We formalize this in the following proposition. \begin{proposition}\label{prop:decoding_I} If the received subspace is at injection distance at most $d-1$ from a codeword in $\mathcal{E}$, then Algorithm~\ref{alg:E} returns this codeword. Otherwise, Algorithm~\ref{alg:E} returns either a failure or the unique codeword closest to the received subspace in the injection metric. \end{proposition} The proof of Proposition~\ref{prop:decoding_I} is based on the observation that, by (\ref{eq:di}), a codeword in a CDC is closest to the received subspace in the subspace metric if and only if it is closest to the received subspace in the injection metric, and is hence omitted. The complexity of the bounded subspace distance decoder in \cite{koetter_it08} for a KK code in $E(q,n)$ is on the order of $O(n^2)$ operations over $\mathrm{GF}(q^{n-r})$ for $r \leq \frac{n}{2}$, which is hence the complexity of decoding $\mathcal{E}^0$. This algorithm can be easily generalized to include the case where $r > \frac{n}{2}$, and we obtain a complexity on the order of $O(n^2)$ operations over $\mathrm{GF}(q^{\max\{r,n-r\}})$. Thus the complexity of decoding $I(\mathcal{C}^k)$ and $I(\mathcal{D}^k)$ for $k \geq 1$ is on the order of $O(r^2)$ operations over $\mathrm{GF}(q^{\max\{kd,r-kd\}})$ and $O((n-kd)^2)$ operations over $\mathrm{GF}(q^{\max\{r,n-kd-r\}})$, respectively. The complexity of the decoding algorithm for $\mathcal{E}^k$ is on the order of the maximum of these two quantities.
It is easily shown that the complexity is maximized for $k=0$; that is, our decoding algorithm has the same order of complexity as the algorithm for the KK code $\mathcal{E}^0$. \section{Covering properties of CDCs}\label{sec:covering} The packing properties of CDCs have been studied in \cite{koetter_it08, xia_dcc09, skachek_arxiv08, gabidulin_isit08, kohnert_mmics08} and an asymptotic packing rate of CDCs was defined and determined in \cite{koetter_it08}. In this section, we focus instead on the covering properties of CDCs in the Grassmannian. We emphasize that since $d_{\mbox{\tiny{S}}}(U,V) = 2d_{\mbox{\tiny{I}}}(U,V)$ for all $U,V \in E_r(q,n)$, we consider only the injection distance in this section. Furthermore, since $d_{\mbox{\tiny{I}}}(U,V)=d_{\mbox{\tiny{I}}}(U^\perp,V^\perp)$ for all $U,V \in E_r(q,n)$, without loss of generality we assume that $r\leq \left\lfloor \frac{n}{2} \right\rfloor$ in this section. \subsection{Properties of balls in the Grassmannian}\label{sec:balls_CDC} We first investigate the properties of balls in the Grassmannian $E_r(q,n)$, which will be instrumental in our study of covering properties of CDCs. First, we derive bounds on the volume of balls in $E_r(q,n)$. \begin{lemma}\label{lemma:bounds_Vc} For all $q$, $n$, $r \leq \left\lfloor \frac{n}{2} \right\rfloor$, and $0 \leq t \leq r$, $q^{t(n-t)} \leq V_{\mbox{\tiny{C}}}(t) < K_q^{-2} q^{t(n-t)}$. \end{lemma} \begin{proof} First, we have $V_{\mbox{\tiny{C}}}(t) \geq N_{\mbox{\tiny{C}}}(t) \geq q^{t(n-t)}$ by (\ref{eq:Gaussian}). Also, $N_{\mbox{\tiny{C}}}(i) < K_q^{-1} N_{\mbox{\tiny{R}}}(q,n-r,r,i)$ for $0 \leq i \leq t$, and hence $V_{\mbox{\tiny{C}}}(t) < K_q^{-1} V_{\mbox{\tiny{R}}}(q,n-r,r,t) < K_q^{-2} q^{t(n-t)}$ as $V_{\mbox{\tiny{R}}}(q,n-r,r,t) < K_q^{-1} q^{t(n-t)}$ \cite[Lemma 9]{gadouleau_it08_dep}.
\end{proof} We now determine the volume of the intersection of two \textbf{spheres} of radii $u$ and $s$, respectively, with distance $d$ between their centers, which is referred to as the intersection number $J_{\mbox{\tiny{C}}}(u,s,d)$ of the association scheme \cite{brouwer_book89}. The intersection number is an important parameter of an association scheme. \begin{lemma}\label{lemma:Jc} For all $u$, $s$, and $d$ between $0$ and $r$, \begin{equation} \nonumber J_{\mbox{\tiny{C}}}(u,s,d) = \frac{1}{{n \brack r} N_{\mbox{\tiny{C}}}(d)} \sum_{i=0}^r \mu_i E_u(i) E_s(i) E_d(i), \end{equation} where $\mu_i = {n \brack i} - {n \brack i-1}$ and $E_j(i)$ is a $q$-Eberlein polynomial \cite{delsarte_siam76}: \begin{equation} \nonumber E_j(i) = \sum_{l=0}^j (-1)^{j-l} q^{li + {j-l \choose 2}} {r-l \brack r-j} {r-l \brack i} {n-r+l-i \brack l}. \end{equation} \end{lemma} Although Lemma~\ref{lemma:Jc} is obtained by a direct application of Theorems 3.5 and 3.6 in \cite[Chapter II]{bannai_book83}, we present it formally here since it is a fundamental geometric property of the Grassmannian and is instrumental in our study of CDCs. We also obtain a recursion formula for $J_{\mbox{\tiny{C}}}(u,s,d)$. \begin{lemma}\label{lemma:recursion_Jc} $J_{\mbox{\tiny{C}}}(u,s,d)$ satisfies the following recursion: $J_{\mbox{\tiny{C}}}(0,s,d) = \delta_{s,d}$, $J_{\mbox{\tiny{C}}}(u,0,d) = \delta_{u,d}$, and \begin{equation} \nonumber c_{u+1} J_{\mbox{\tiny{C}}}(u+1,s,d) = b_{s-1} J_{\mbox{\tiny{C}}}(u,s-1,d) + (a_s - a_u) J_{\mbox{\tiny{C}}}(u,s,d) + c_{s+1} J_{\mbox{\tiny{C}}}(u,s+1,d) - b_{u-1} J_{\mbox{\tiny{C}}}(u-1,s,d), \end{equation} where $c_j = J_{\mbox{\tiny{C}}}(1,j-1,j) = {j \brack 1}^2$, $b_j = J_{\mbox{\tiny{C}}}(1,j+1,j) = q^{2j+1} {r-j \brack 1} {n-r-j \brack 1}$, and $a_j = J_{\mbox{\tiny{C}}}(1,j,j) = N_{\mbox{\tiny{C}}}(1) - b_j - c_j$ for $0 \leq j \leq r$.
\end{lemma} The proof follows directly from \cite[Lemma 4.1.7]{brouwer_book89}, \cite[Theorem 9.3.3]{brouwer_book89}, and \cite[Chapter 4, (1a)]{brouwer_book89}, and hence is omitted. Let $I_{\mbox{\tiny{C}}}(u,s,d)$ denote the volume of the intersection of two \textbf{balls} in $E_r(q,n)$ with radii $u$ and $s$ and distance $d$ between their centers. Since $I_{\mbox{\tiny{C}}}(u,s,d) = \sum_{i=0}^u \sum_{j=0}^s J_{\mbox{\tiny{C}}}(i,j,d)$, Lemma~\ref{lemma:Jc} also leads to an analytical expression for $I_{\mbox{\tiny{C}}}(u,s,d)$. Proposition~\ref{prop:inter_2_balls} below shows that $I_{\mbox{\tiny{C}}}(u,s,d)$ is non-increasing in $d$. \begin{proposition}\label{prop:inter_2_balls} For all $u$ and $s$, $I_{\mbox{\tiny{C}}}(u,s,d)$ is a non-increasing function of $d$. \end{proposition} The proof of Proposition~\ref{prop:inter_2_balls} is given in Appendix~\ref{app:prop:inter_2_balls}. Therefore, the minimum nonzero intersection between two balls with radii $u$ and $s$ in $E_r(q,n)$ is given by $I_{\mbox{\tiny{C}}}(u,s,u+s) = J_{\mbox{\tiny{C}}}(u,s,u+s)$ for $u+s \leq r$. By Lemma~\ref{lemma:recursion_Jc}, it is easily shown that $J_{\mbox{\tiny{C}}}(u,s,u+s) = {u+s \brack u}^2$ for all $u$ and $s$ when $u+s \leq r$. We derive below an upper bound on the volume of the union of balls in $E_r(q,n)$ with the same radius. \begin{lemma}\label{lemma:B} The volume of the union of \emph{any} $K$ balls in $E_r(q,n)$ with radius $\rho$ is at most \begin{eqnarray} \nonumber B_{\mbox{\tiny{C}}}(K, \rho) &=& KV_{\mbox{\tiny{C}}}(\rho) - \sum_{a=1}^l [A_{\mbox{\tiny{C}}}(q,n,r,r-a+1) - A_{\mbox{\tiny{C}}}(q,n,r,r-a+2)] I_{\mbox{\tiny{C}}}(\rho,\rho,r-a+1)\\ \label{eq:B} && - [K - A_{\mbox{\tiny{C}}}(q,n,r,r-l+1)] I_{\mbox{\tiny{C}}}(\rho,\rho,r-l), \end{eqnarray} where $l = \max\{a: K \geq A_{\mbox{\tiny{C}}}(q,n,r,r-a+1)\}$. \end{lemma} \begin{proof} Let $\{ U_i \}_{i=0}^{K-1}$ denote the centers of $K$ balls with radius $\rho$ and let $\mathcal{V}_j = \{ U_i \}_{i=0}^{j-1}$ for $1 \leq j \leq K$.
Without loss of generality, we assume that the centers are labeled such that $d_{\mbox{\tiny{I}}}(U_j, \mathcal{V}_j)$ is non-increasing for $j \geq 1$. For $1 \leq a \leq l$ and $A_{\mbox{\tiny{C}}}(q,n,r,r-a+2) \leq j < A_{\mbox{\tiny{C}}}(q,n,r,r-a+1)$, we have $d_{\mbox{\tiny{I}}}(U_j, \mathcal{V}_j) = d_{\mbox{\tiny{I}}}(\mathcal{V}_{j+1}) \leq r-a+1$. By Proposition~\ref{prop:inter_2_balls}, the ball centered at $U_j$ hence covers at most $V_{\mbox{\tiny{C}}}(\rho) - I_{\mbox{\tiny{C}}}(\rho, \rho, r-a+1)$ subspaces that are not previously covered by the balls centered at $\mathcal{V}_j$. \end{proof} We remark that using any upper bound on $A_{\mbox{\tiny{C}}}(q,n,r,r-a+1)$ in the proof of Lemma~\ref{lemma:B} leads to a valid upper bound on $B_{\mbox{\tiny{C}}}(K, \rho)$. Hence, although the value of $A_{\mbox{\tiny{C}}}(q,n,r,r-a+1)$ is unknown in general, the upper bound in (\ref{eq:bounds_Ac}) can be used in (\ref{eq:B}) in order to obtain an upper bound on the volume of the union of balls in the Grassmannian. \subsection{Covering CDCs}\label{sec:covering_CDC} The {\em covering radius} of a CDC $\mathcal{C} \subseteq E_r(q,n)$ is defined as $\rho = \max_{U \in E_r(q,n)} d_{\mbox{\tiny{I}}}(U,\mathcal{C})$. We denote the minimum cardinality of a CDC in $E_r(q,n)$ with covering radius $\rho$ as $K_{\mbox{\tiny{C}}}(q,n,r,\rho)$. Since $K_{\mbox{\tiny{C}}}(q,n,n-r,\rho) = K_{\mbox{\tiny{C}}}(q,n,r,\rho)$, we assume $r \leq \left\lfloor \frac{n}{2} \right\rfloor$. Also, $K_{\mbox{\tiny{C}}}(q,n,r,0) = {n \brack r}$ and $K_{\mbox{\tiny{C}}}(q,n,r,r) = 1$, hence we assume $0 < \rho < r$ henceforth. We first derive lower bounds on $K_{\mbox{\tiny{C}}}(q,n,r,\rho)$. \begin{lemma}\label{lemma:bound_B} For all $q$, $n$, $r \leq \left\lfloor \frac{n}{2} \right\rfloor$, and $0 < \rho < r$, $K_{\mbox{\tiny{C}}}(q,n,r,\rho) \geq \min \left\{K : B_{\mbox{\tiny{C}}}(K, \rho) \geq {n \brack r} \right\} \geq \frac{{n \brack r}}{V_{\mbox{\tiny{C}}}(\rho)}$.
\end{lemma} \begin{proof} Let $\mathcal{C}$ be a CDC with cardinality $K_{\mbox{\tiny{C}}}(q,n,r,\rho)$ and covering radius $\rho$. Then the balls around the codewords cover the ${n \brack r}$ subspaces in $E_r(q,n)$; however, by Lemma~\ref{lemma:B}, they cannot cover more than $B_{\mbox{\tiny{C}}}(|\mathcal{C}|, \rho)$ subspaces. Therefore, $B_{\mbox{\tiny{C}}}(K_{\mbox{\tiny{C}}}(q,n,r,\rho), \rho) \geq {n \brack r}$ and we obtain the first inequality. Since $B_{\mbox{\tiny{C}}}(K, \rho) \leq K V_{\mbox{\tiny{C}}}(\rho)$ for all $K$, we obtain the second inequality. \end{proof} The second lower bound in Lemma \ref{lemma:bound_B} is referred to as the sphere covering bound for CDCs. This bound can also be refined by considering the distance distribution of a covering code. \begin{proposition}\label{prop:linear_inequalities} For $0 \leq \delta \leq \rho$, let $T_\delta = \min \sum_{i=0}^r A_i(\delta)$, where the minimum is taken over all integer sequences $\{A_i(\delta)\}$ which satisfy $A_i(\delta) = 0$ for $0 \leq i \leq \delta-1$, $1 \leq A_\delta(\delta) \leq N_{\mbox{\tiny{C}}}(\delta)$, $0 \leq A_i(\delta) \leq N_{\mbox{\tiny{C}}}(i)$ for $\delta+1 \leq i \leq r$, and $\sum_{i=0}^r A_i(\delta) \sum_{s=0}^\rho J_{\mbox{\tiny{C}}}(l,s,i) \geq N_{\mbox{\tiny{C}}}(l)$ for $0 \leq l \leq r$. Then $K_{\mbox{\tiny{C}}}(q,n,r,\rho) \geq \max_{0 \leq \delta \leq \rho} T_\delta$. \end{proposition} \begin{proof} Let $\mathcal{C}$ be a CDC with covering radius $\rho$. For any $U \in E_r(q,n)$ at distance $\delta$ from $\mathcal{C}$, let $A_i(\delta)$ denote the number of codewords at distance $i$ from $U$. Then $\sum_{i=0}^r A_i(\delta) = |\mathcal{C}|$ and we easily obtain $A_i(\delta) = 0$ for $0 \leq i \leq \delta-1$, $1 \leq A_\delta(\delta) \leq N_{\mbox{\tiny{C}}}(\delta)$, and $0 \leq A_i(\delta) \leq N_{\mbox{\tiny{C}}}(i)$ for $\delta+1 \leq i \leq r$. 
Also, for $0 \leq l \leq r$, all the subspaces at distance $l$ from $U$ are covered, hence $\sum_{i=0}^r A_i(\delta) \sum_{s=0}^\rho J_{\mbox{\tiny{C}}}(l,s,i) \geq N_{\mbox{\tiny{C}}}(l)$. \end{proof} We remark that Proposition~\ref{prop:linear_inequalities} gives a tighter lower bound than the sphere covering bound. However, determining $T_\delta$ is computationally infeasible for large parameter values. Another set of linear inequalities is obtained from the inner distribution $\{a_i\}$ of a covering code $\mathcal{C}$, defined as $a_i \stackrel{\mbox{\scriptsize def}}{=} \frac{1}{|\mathcal{C}|} \sum_{C \in \mathcal{C}} |\{D \in \mathcal{C} : d_{\mbox{\tiny{I}}}(C,D) = i\}|$ for $0 \leq i \leq r$ \cite{delsarte_it98}. \begin{proposition}\label{prop:linear_ai} Let $t = \min \sum_{i=0}^r a_i$, where the minimum is taken over all sequences $\{a_i\}$ satisfying $a_0 = 1$, $0 \leq a_i \leq N_{\mbox{\tiny{C}}}(i)$ for $1 \leq i \leq r$, $\sum_{i=0}^r a_i \sum_{s=0}^\rho J_{\mbox{\tiny{C}}}(l,s,i) \geq N_{\mbox{\tiny{C}}}(l)$ for $0 \leq l \leq r$, and $\sum_{i=0}^r a_i \frac{E_i(l)}{N_{\mbox{\tiny{C}}}(i)} \geq 0$ for $0 \leq l \leq r$. Then $K_{\mbox{\tiny{C}}}(q,n,r,\rho) \geq t$. \end{proposition} \begin{proof} Let $\mathcal{C}$ be a CDC with covering radius $\rho$ and inner distribution $\{a_i\}$. Proposition~\ref{prop:linear_inequalities} yields $0 \leq a_i \leq N_{\mbox{\tiny{C}}}(i)$ for $1 \leq i \leq r$ and $\sum_{i=0}^r a_i \sum_{s=0}^\rho J_{\mbox{\tiny{C}}}(l,s,i) \geq N_{\mbox{\tiny{C}}}(l)$ for $0 \leq l \leq r$, while $a_0 = 1$ follows from the definition of $a_i$. By the generalized MacWilliams inequalities \cite[Theorem 3]{delsarte_it98}, $\sum_{i=0}^r a_i F_l(i) \geq 0$, where $F_l(i) = \frac{\mu_l}{N_{\mbox{\tiny{C}}}(i)} E_i(l)$ are the $q$-numbers of the association scheme \cite[(15)]{delsarte_it98}, which yields $\sum_{i=0}^r a_i \frac{E_i(l)}{N_{\mbox{\tiny{C}}}(i)} \geq 0$. Since $\sum_{i=0}^r a_i = |\mathcal{C}|$, we obtain $|\mathcal{C}| \geq t$.
\end{proof} Lower bounds on covering codes with the Hamming metric can be obtained through the concept of the excess of a code \cite{vanwee_jct91}. This concept being independent of the underlying metric, it was adapted to the rank metric in \cite{gadouleau_it08_covering}. We adapt it to the injection metric for CDCs below, thus obtaining the lower bound in Proposition~\ref{prop:excess_bound}. \begin{proposition}\label{prop:excess_bound} For all $q$, $n$, $r \leq \left\lfloor \frac{n}{2} \right\rfloor$, and $0 < \rho < r$, $ K_{\mbox{\tiny{C}}}(q,n,r,\rho) \geq \frac{{n \brack r}} {V_{\mbox{\tiny{C}}}(\rho) - \frac{\epsilon}{\delta}N_{\mbox{\tiny{C}}}(\rho)}$, where $\epsilon \stackrel{\mbox{\scriptsize def}}{=} \left\lceil \frac{b_\rho}{c_{\rho+1}} \right\rceil c_{\rho+1} -b_\rho$, $\delta \stackrel{\mbox{\scriptsize def}}{=} N_{\mbox{\tiny{C}}}(1) - c_\rho + 2\epsilon$, and $b_\rho$ and $c_{\rho+1}$ are defined in Lemma \ref{lemma:recursion_Jc}. \end{proposition} The proof of Proposition \ref{prop:excess_bound} is given in Appendix \ref{app:prop:excess_bound}. We now derive upper bounds on $K_{\mbox{\tiny{C}}}(q,n,r,\rho)$. First, we investigate how to expand covering CDCs. \begin{lemma}\label{lemma:Kc(rho+1)} For all $q$, $n$, $r \leq \left\lfloor \frac{n}{2} \right\rfloor$, and $0 < \rho < r$, $K_{\mbox{\tiny{C}}}(q,n,r,\rho) \leq K_{\mbox{\tiny{C}}}(q,n-1,r,\rho-1) \leq {n-\rho \brack r}$, and $K_{\mbox{\tiny{C}}}(q,n,r,\rho) \leq K_{\mbox{\tiny{C}}}(q,n,r-1,\rho-1) \leq {n \brack r-\rho}$. \end{lemma} The proof of Lemma \ref{lemma:Kc(rho+1)} is given in Appendix \ref{app:lemma:Kc(rho+1)}. The next upper bound is a straightforward adaptation of \cite[Proposition 12]{gadouleau_it08_covering}. 
\begin{proposition}\label{prop:bound_combinatorial} For all $q$, $n$, $r \leq \left\lfloor \frac{n}{2} \right\rfloor$, and $0 < \rho < r$, $K_{\mbox{\tiny{C}}}(q,n,r,\rho) \leq \left\{ 1-\log_{n \brack r} \left({n \brack r} - V_{\mbox{\tiny{C}}}(\rho) \right) \right\}^{-1} + 1$. \end{proposition} The proof of Proposition \ref{prop:bound_combinatorial} is given in Appendix \ref{app:prop:bound_combinatorial}. The next bound is a direct application of \cite[Theorem 12.2.1]{cohen_book97}. \begin{proposition}\label{prop:bound_JSL} For all $q$, $n$, $r \leq \left\lfloor \frac{n}{2} \right\rfloor$, and $0 < \rho < r$, $K_{\mbox{\tiny{C}}}(q,n,r,\rho) \leq \frac{{n \brack r}}{V_{\mbox{\tiny{C}}}(\rho)} \left\{ 1 + \ln V_{\mbox{\tiny{C}}}(\rho) \right\}$. \end{proposition} The bound in Proposition \ref{prop:bound_JSL} can be refined by applying the greedy algorithm described in \cite{clark_ejc97} to CDCs. \begin{proposition}\label{prop:bound_domination} Let $k_0$ be the cardinality of an augmented KK code with minimum distance $2\rho+1$ in $E_r(q,n)$ for $2 \rho < r$ and $k_0 = 1$ for $2 \rho \geq r$. Then for all $k \geq k_0$, there exists a CDC with cardinality $k$ which covers at least ${n \brack r} - u_k$ subspaces, where $u_{k_0} \stackrel{\mbox{\scriptsize def}}{=} {n \brack r} - k_0 V_{\mbox{\tiny{C}}}(\rho)$ and $u_{k+1} = u_k - \left\lceil \frac{u_k V_{\mbox{\tiny{C}}}(\rho)} {\min \left\{ {n \brack r} - k, B_{\mbox{\tiny{C}}}(u_k, \rho) \right\}} \right\rceil$ for all $k \geq k_0$. Thus $K_{\mbox{\tiny{C}}}(q,n,r,\rho) \leq \min\{k : u_k = 0\}$. \end{proposition} The proof of Proposition~\ref{prop:bound_domination} is given in Appendix~\ref{app:prop:bound_domination}. Using the bounds derived above, we finally determine the asymptotic behavior of $K_{\mbox{\tiny{C}}}(q,n,r,\rho)$. The rate of a covering CDC $\mathcal{C} \subseteq E_r(q,n)$ is defined as $\frac{\log_q |\mathcal{C}|}{ \log_q |E_r(q,n)|}$. 
We remark that this rate is defined in a combinatorial sense: the rate describes how well a CDC covers the Grassmannian. We use the following normalized parameters: $r' = \frac{r}{n}$, $\rho' = \frac{\rho}{n}$, and the asymptotic rate $k_{\mbox{\tiny{C}}}(r',\rho') = \liminf_{n \rightarrow \infty} \frac{\log_q K_{\mbox{\tiny{C}}}(q,n,r,\rho)}{\log_q {n \brack r}}$. \begin{proposition}\label{prop:kc} For all $0 \leq \rho' \leq r' \leq \frac{1}{2}$, $k_{\mbox{\tiny{C}}}(r',\rho') = 1 - \frac{\rho'(1-\rho')}{r'(1-r')}$. \end{proposition} \begin{proof} The bounds on $V_{\mbox{\tiny{C}}}(\rho)$ in Lemma \ref{lemma:bounds_Vc} together with the sphere covering bound yield $K_{\mbox{\tiny{C}}}(q,n,r,\rho) > K_q^2 q^{r(n-r) - \rho(n-\rho)}$. Using the bounds on the Gaussian polynomial in Section~\ref{sec:CDCs_and_rank_metric}, we obtain $k_{\mbox{\tiny{C}}}(r',\rho') \geq 1 - \frac{\rho'(1-\rho')}{r'(1-r')}$. Also, Proposition \ref{prop:bound_JSL} leads to $K_{\mbox{\tiny{C}}}(q,n,r,\rho) < K_q^{-1} q^{r(n-r) - \rho(n-\rho)} [1 + \ln (K_q^{-2}) + \rho(n-\rho)\ln q]$, which asymptotically becomes $k_{\mbox{\tiny{C}}}(r',\rho') \leq 1 - \frac{\rho'(1-\rho')}{r'(1-r')}$. \end{proof} The proof of Proposition \ref{prop:kc} indicates that $K_{\mbox{\tiny{C}}}(q,n,r,\rho)$ is on the order of $q^{r(n-r) - \rho(n-\rho)}$. We finish this section by studying the covering properties of liftings of rank metric codes. We first prove that they have maximum covering radius. \begin{lemma} \label{lemma:covering_lifting} Let $I(\mathcal{C}) \subseteq E_r(q,n)$ be the lifting of a rank metric code in $\mathrm{GF}(q)^{r \times (n-r)}$. Then $I(\mathcal{C})$ has covering radius $r$. \end{lemma} \begin{proof} Let $D \in E_r(q,n)$ be generated by $({\bf 0} | {\bf D}_1)$, where ${\bf D}_1 \in \mathrm{GF}(q)^{r \times (n-r)}$ has rank $r$.
Then, for any codeword $I({\bf C})$ generated by $({\bf I}_r | {\bf C})$, it is easily seen that $d_{\mbox{\tiny{I}}}(D, I({\bf C})) = d_{\mbox{\tiny{I}}}(R({\bf 0}), R({\bf I}_r)) = r$ by Lemma~\ref{lemma:truncate}. \end{proof} Lemma~\ref{lemma:covering_lifting} is significant for the design of CDCs. It is shown in \cite{koetter_it08} that liftings of rank metric codes can be used to construct nearly optimal packing CDCs. However, Lemma~\ref{lemma:covering_lifting} indicates that for any lifting of a rank metric code, there exists a subspace at distance $r$ from the code. Hence, adding this subspace to the code leads to a supercode with higher cardinality and the same minimum distance since $d \leq r$. Thus an optimal CDC cannot be designed from a lifting of a rank metric code. Although liftings of rank metric codes have poor covering properties, below we construct a class of covering CDCs by using permuted liftings of rank metric covering codes. We thus relate the minimum cardinality of a covering CDC to that of a covering code with the rank metric. For all $n$ and $r$, we denote the set of subsets of $\{0,1,\ldots,n-1\}$ with cardinality $r$ as $S_n^r$. For all $J \in S_n^r$ and all ${\bf C} \in \mathrm{GF}(q)^{r \times (n-r)}$, let $I(J,{\bf C}) = R(\pi({\bf I}_r | {\bf C})) \in E_r(q,n)$, where $\pi$ is the permutation of $\{0, 1, \ldots, n-1\}$ satisfying $J = \{ \pi(0), \pi(1), \ldots, \pi(r-1)\}$, $\pi(0) < \pi(1) < \ldots < \pi(r-1)$, and $\pi(r) < \pi(r+1) < \ldots < \pi(n-1)$. We remark that $\pi$ is uniquely determined by $J$. It is easily shown that $d_{\mbox{\tiny{I}}}(I(J,{\bf C}), I(J,{\bf D})) = d_{\mbox{\tiny{R}}}({\bf C}, {\bf D})$ for all $J \in S_n^r$ and all ${\bf C}, {\bf D} \in \mathrm{GF}(q)^{r \times (n-r)}$. \begin{proposition}\label{prop:Ks<Kr} For all $q$, $n$, $r \leq \left\lfloor \frac{n}{2} \right\rfloor$, and $0 < \rho < r$, $K_{\mbox{\tiny{C}}}(q,n,r,\rho) \leq {n \choose r} K_{\mbox{\tiny{R}}}(q^{n-r},r,\rho)$. 
\end{proposition} \begin{proof} Let $\mathcal{C} \subseteq \mathrm{GF}(q)^{r \times (n-r)}$ have rank covering radius $\rho$ and cardinality $K_{\mbox{\tiny{R}}}(q^{n-r},r,\rho)$. We show below that $L(\mathcal{C}) = \{I(J,{\bf C}) : J \in S_n^r, {\bf C} \in \mathcal{C}\}$ is a CDC with covering radius $\rho$. Any $U \in E_r(q,n)$ can be expressed as $I(J,{\bf V})$ for some $J \in S_n^r$ and some ${\bf V} \in \mathrm{GF}(q)^{r \times (n-r)}$. Also, by definition, there exists ${\bf C} \in \mathcal{C}$ such that $d_{\mbox{\tiny{R}}}({\bf C}, {\bf V}) \leq \rho$ and hence $d_{\mbox{\tiny{I}}}(U,I(J,{\bf C})) = d_{\mbox{\tiny{R}}}({\bf C}, {\bf V}) \leq \rho$. Thus $L(\mathcal{C})$ has covering radius $\rho$ and cardinality $\leq {n \choose r} K_{\mbox{\tiny{R}}}(q^{n-r},r,\rho)$. \end{proof} It is shown in \cite{gadouleau_it08_covering} that for $r \leq n-r$, $K_{\mbox{\tiny{R}}}(q^{n-r}, r, \rho)$ is on the order of $q^{r(n-r) - \rho(n-\rho)}$, which is also the order of $K_{\mbox{\tiny{C}}}(q,n,r,\rho)$. The bound in Proposition~\ref{prop:Ks<Kr} becomes comparatively tight for large $q$, since the factor ${n \choose r}$ is independent of $q$.
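The permutation $\pi$ determined by $J$ and the generator matrix of the permuted lifting $I(J,{\bf C})$ can be sketched as follows (Python; representing ${\bf C}$ as a list of rows over $\mathrm{GF}(q)$ is our own illustrative choice):

```python
def perm_from_support(J, n):
    """Permutation pi of {0,...,n-1} determined by an r-subset J:
    pi(0),...,pi(r-1) list J in increasing order, and
    pi(r),...,pi(n-1) list the complement in increasing order."""
    J = sorted(J)
    rest = sorted(set(range(n)) - set(J))
    return J + rest

def permuted_lifting(J, C, n):
    """Generator matrix of I(J, C): column k of (I_r | C) is moved to
    position pi(k), so the identity columns end up on the positions in J.
    C is an r x (n-r) list of rows over GF(q)."""
    r = len(C)
    M = [[1 if i == j else 0 for j in range(r)] + list(C[i]) for i in range(r)]
    pi = perm_from_support(J, n)
    G = [[0] * n for _ in range(r)]
    for k in range(n):
        for i in range(r):
            G[i][pi[k]] = M[i][k]
    return G
```

For $J = \{0, 1, \ldots, r-1\}$ the permutation is the identity and the plain lifting $({\bf I}_r | {\bf C})$ is recovered.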
https://arxiv.org/abs/1611.05406
Synchronization in networks with multiple interaction layers
The structure of many real-world systems is best captured by networks consisting of several interaction layers. Understanding how a multi-layered structure of connections affects the synchronization properties of dynamical systems evolving on top of it is a highly relevant endeavour in mathematics and physics, and has potential applications to several societally relevant topics, such as power grid engineering and neural dynamics. We propose a general framework to assess the stability of the synchronized state in networks with multiple interaction layers, deriving a necessary condition that generalizes the Master Stability Function approach. We validate our method by applying it to a network of Rössler oscillators with a double layer of interactions, and show that a highly rich phenomenology emerges. This includes cases where the stability of synchronization can be induced even if both layers would have individually induced unstable synchrony, an effect genuinely due to the true multi-layer structure of the interactions amongst the units in the network.
\section{Introduction} Network theory~\cite{Str001,AlB002,New003,DoM003,Ben004,Boc006,Cal007,New010,CoH010} has proved a fertile ground for the modeling of a multitude of complex systems. One of the main appeals of this approach lies in its power to identify universal properties in the structure of connections amongst the elementary units of a system~\cite{WaS998,BaA999,DGM008}. In turn, this enables researchers to make quantitative predictions about the collective organization of a system at different length scales, ranging from the microscopic to the global scale~\cite{Gui005,For010,del13,Pei14,Wil14,Tre15,New15}. As networks often support dynamical processes, the interplay between structure and the unfolding of collective phenomena has been the subject of numerous studies~\cite{BBV008,BaB013,GBB016}. In fact, many relevant processes and their associated emergent phenomena, such as social dynamics~\cite{CFL009}, epidemic spreading~\cite{Pas015}, synchronization~\cite{Boc002}, and controllability~\cite{LSB011}, have been proved to depend significantly on the complexity of the underlying interaction backbone. Synchronization of systems of dynamical units is a particularly noteworthy topic, since synchronized states are at the core of the development of many coordinated tasks in natural and engineered systems~\cite{Pik001,Str003,Man004}. Thus, in the past two decades, considerable attention has been paid to shed light on the role that network structure plays on the onset and stability of synchronized states~\cite{PeC998,Lag000,Bar002,Nis003,Bel004,Hwa005,Cha005,Mot005,Zhou006,Lod007,JGG011,Bil14,del015}. In the last years, however, the limitations of the simple network paradigm have become increasingly evident, as the unprecedented availability of large data sets with ever-higher resolution level has revealed that real-world systems can be seldom described by an isolated network. 
Several works have proved that mutual interactions between different complex systems cause the emergence of networks composed of multiple layers~\cite{Boc014,Kiv014,Lee015,Bia015}. This way, nodes can be coupled according to different kinds of ties so that each of these interaction types defines an interaction layer. Examples of multilayer systems include social networks, in which individual people are linked and affiliated by different types of relations~\cite{Sze010}, mobility networks, in which individual nodes may be served by different means of transport~\cite{Cardillo,Hal014}, and neural networks, in which the constituent neurons interact over chemical and ionic channels~\cite{Adh011}. Multi-layer networks have thus become the natural framework to investigate new collective properties arising from the interconnection of different systems~\cite{Rad013,JGG15}. The multi-layer studies of processes such as percolation~\cite{Bul010,Son012,Gao012,BiD014,Bax016}, epidemic spreading~\cite{Men012,Gran013,Buono014,Sanz014}, controllability~\cite{MAB016}, evolutionary games~\cite{JGG012,Wang014,Mata015,Wang015} and diffusion~\cite{Gom013} have all evidenced a very different phenomenology from the one found on mono-layer structures. For example, while isolated scale-free networks are robust against random failures of nodes or edges~\cite{Alb000}, interdependent ones are instead very fragile~\cite{Dan016}. Nonetheless, the interplay between multi-layer structure and dynamics remains, in several respects, unexplored and, in particular, the study of synchronization is still in its infancy~\cite{Agu015,Zha015,Sev015,Gambuzza15}. Here, we present a general theory that fills this gap, and generalizes the celebrated Master Stability Function (MSF) approach in complex networks~\cite{PeC998} to the realm of multi-layer complex systems.
Our aim is to provide a full mathematical framework that allows one to evaluate the stability of a globally synchronized state for non-linear dynamical systems evolving in networks with multiple layers of interactions. To do this, we perform a linear stability analysis of the fully synchronized state of the interacting systems, and exploit the spectral properties of the graph Laplacians of each layer. The final result is a system of coupled linear ordinary differential equations for the evolution of the displacements of the network from its synchronized state. Our setting does not require (nor assume) special conditions concerning the structure of each single layer, except that the network is undirected and that the local and interaction dynamics are described by continuous and differentiable functions. Because of this, the evolutionary differential equations are non-variational. We validate our predictions in a network of chaotic Rössler oscillators with two layers of interactions featuring different topologies. We show that, even in this simple case, there is the possibility of inducing the overall stability of the complete synchronization manifold in regions of the phase diagram where each layer, taken individually, is known to be unstable. \section{Results} \subsection{The model} From the structural point of view, we consider a network composed of $N$ nodes which interact via $M$ different layers of connections, each layer having in general different links and representing a different kind of interactions among the units (see Fig.~\ref{duplex} for a schematic illustration of the case of $M=2$ layers and $N=7$ nodes). Notice that in our setting the nodes interacting in each layer are literally the same elements. Node~$i$ in layer~1 is precisely the same node as node~$i$ in layer~2, 3, or~$M$. This contrasts with other works in which there is a one-to-one correspondence between nodes in different layers, but these represent potentially different states. 
The weights of the connections between nodes in layer $\alpha$ ($\alpha=1,\dotsc,M$) are given by the elements of the matrix $\mathbf W^{\left(\alpha\right)}$, which is, therefore, the adjacency matrix of a weighted graph. The sum $q_i^\alpha=\sum_{j=1}^NW_{i,j}^{\left(\alpha\right)}$ ($i=1,\dotsc,N$) of the weights of all the interactions of node $i$ in layer $\alpha$ is the strength of the node in that layer. Regarding the dynamics, each node represents a $d$-dimensional dynamical system. Thus, the state of node $i$ is described by a vector $\mathbf{x}_i$ with $d$ components. The local dynamics of the nodes is captured by a set of differential equations of the form \begin{equation*} \dot{\mathbf x}_i=\mathbf F\left(\mathbf{x}_i\right)\;, \end{equation*} where the dot indicates time derivative and $\mathbf F$ is an arbitrary $C^1$-vector field. Similarly, the interaction in layer $\alpha$ is described by a continuous and differentiable vector field $\mathbf H_{\alpha}$ (different, in general, from layer to layer), possibly weighted by a layer-dependent coupling constant $\sigma_\alpha$. We assume that the interactions between node $i$ and node $j$ are diffusive, i.e., that for each layer in which they are connected, their coupling depends on the difference between $\mathbf H_{\alpha}$ evaluated on $\mathbf{x}_j$ and $\mathbf{x}_i$. Then, the dynamics of the whole system is described by the following set of equations: \begin{equation}\label{eomsys} \dot{\mathbf x}_i=\mathbf F\left(\mathbf{x}_i\right)-\sum_{\alpha=1}^M\sigma_\alpha\sum_{j=1}^NL_{i,j}^{\left(\alpha\right)}\mathbf H_{\alpha}\left(\mathbf{x}_j\right)\:, \end{equation} where $\mathbf L^{\left(\alpha\right)}$ is the graph Laplacian of layer $\alpha$, whose elements are: \begin{equation}\label{lapldef} L_{i,j}^{\left(\alpha\right)} = \begin{cases} q_i^\alpha &\quad\text{if }i=j\;,\\ -W_{i,j}^{\left(\alpha\right)} &\quad\text{otherwise}\;. 
\end{cases} \end{equation} Let us note that our treatment of this setting is valid for all possible choices of $\mathbf F$ and $\mathbf H_{\alpha}$, so long as they are $C^1$, and for any particular undirected structure of the layers. This stands in contrast to other approaches to the study of the same equation set~(\ref{eomsys}) proposed in prior works (and termed as dynamical hyper-networks), which, even though based on ingenious techniques such as simultaneous block-diagonalization, can be applied only to special cases like commuting Laplacians, un-weighted and fully connected layers, and non-diffusive coupling~\cite{Sor012}, or cannot guarantee to always provide a satisfactory solution~\cite{Irv012}. \begin{figure}[b] \centering \includegraphics[width=0.4\textwidth]{duplex.eps} \caption{\label{duplex}Schematic representation of a network with two layers of interaction. The two layers (corresponding here to solid violet and dashed orange links, respectively) are made of links of different type for the same nodes, such as different means of transport between two cities, or chemical and electric connections between neurons. Note that the layers are fully independent, in that they are described by two different Laplacians $\mathbf L^{(1)}$ and $\mathbf L^{(2)}$, so that the presence of a connection between two nodes in one layer does not affect their connection status in the other.} \end{figure} \subsection{Stability of complete synchronization in networks with multiple layers of interactions} We are interested in assessing the stability of synchronized states, which means determining whether a system eventually returns to the synchronized solution after a perturbation. For further details of the following derivations we refer to Materials and Methods. 
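For concreteness, a minimal sketch (Python with NumPy; the layer data and coupling functions are illustrative, not the specific systems studied below) of the graph Laplacian defined above and of the right-hand side of the equations of motion:

```python
import numpy as np

def laplacian(W):
    """Graph Laplacian of a weighted layer: L_ii equals the node
    strength q_i = sum_j W_ij, and L_ij = -W_ij otherwise."""
    return np.diag(W.sum(axis=1)) - W

def multilayer_rhs(X, F, layers):
    """Right-hand side of the multilayer diffusive dynamics:
    dx_i/dt = F(x_i) - sum_alpha sigma_alpha sum_j L^(alpha)_ij H_alpha(x_j).
    X is an (N, d) array of node states; layers is a list of
    (sigma, L, H) triples, with F and H applied row-wise."""
    dX = np.apply_along_axis(F, 1, X).astype(float)
    for sigma, L, H in layers:
        dX -= sigma * (L @ np.apply_along_axis(H, 1, X))
    return dX
```

Note that, since each Laplacian has zero row sums, the coupling term vanishes when all node states coincide, which is exactly why the synchronized solution is invariant.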
First let us note that, since the Laplacians are zero-row-sum matrices, they all have a null eigenvalue, with corresponding eigenvector $N^{-1/2}\left(1,1,\dotsc,1\right)^{\mathrm T}$, where~T indicates transposition. This means that the general system of equations~(\ref{eomsys}) always admits an invariant solution $\mathbf{S}\equiv\{\mathbf{x}_i(t)=\mathbf{s}(t),\,\forall\,i=1,2,\dots,N\}$, which defines the complete synchronization manifold in $\mathbb{R}^{dN}$. As one does not need a very strong forcing to destroy synchronization in an unstable state, we aim at predicting the behavior of the system when the perturbation is small. Then, we first linearize Eqs.~(\ref{eomsys}) around the synchronized manifold $\mathbf{S}$ obtaining the equations ruling the evolution of the local and global synchronization errors $\delta\mathbf{x}_i\equiv\mathbf{x}_i-\mathbf s$ and $\delta\mathbf X\equiv\left(\delta\mathbf{x}_1,\delta\mathbf{x}_2,\dotsc,\delta\mathbf{x}_N\right)^\mathrm{T}$: \begin{equation}\label{linglob} \delta\dot{\mathbf X}=\left(\mathds 1\otimes J\mathbf F\left(\mathbf s\right)-\sum_{\alpha=1}^M\sigma_\alpha\mathbf L^{\left(\alpha\right)}\otimes J\mathbf H_{\alpha}\left(\mathbf s\right)\right)\delta\mathbf X\:, \end{equation} where $\mathds 1$ is the $N$-dimensional identity matrix, $\otimes$ denotes the Kronecker product, and $J$ is the Jacobian operator. Second, we spectrally decompose $\delta\mathbf X$ in the equation above, and project it onto the basis defined by the eigenvectors of one of the layers. The particular choice of layer is completely arbitrary, as the eigenvectors of the Laplacians of each layer form $M$ equivalent bases of $\mathbb{R}^{N}$. In the following, to fix the ideas, we operate this projection onto the eigenvectors of $\mathbf L^{\left(1\right)}$. 
After some algebra, the system of equations~(\ref{linglob}) can be expressed as: \begin{multline}\label{mainsystem} \dot{\boldsymbol\eta}_{j} = \left(J\mathbf F\left(\mathbf s\right)-\sigma_1\lambda_j^{(1)}J\mathbf{H}_1\left(\mathbf s\right)\right)\boldsymbol\eta_{j}+\\ -\sum_{\alpha=2}^M\sigma_\alpha\sum_{k=2}^N\sum_{r=2}^N\lambda_r^{(\alpha)}\Gamma_{r,k}^{(\alpha)}\Gamma_{r,j}^{(\alpha)}J\mathbf H_{\alpha}\left(\mathbf s\right)\boldsymbol\eta_{k}\:, \end{multline} for $j=2,\dotsc,N$, where $\boldsymbol\eta_{j}$ is the vector coefficient of the eigendecomposition of $\delta\mathbf X$, $\lambda_r^{(\alpha)}$ is the $r$th eigenvalue of the Laplacian of layer $\alpha$, sorted in non-decreasing order, and we have put ${\boldsymbol\Gamma}^{(\alpha)}\equiv\mathbf{V^{(\alpha)}}^\mathrm{T}\mathbf{V}^{(1)}$, in which $\mathbf{V^{(\alpha)}}$ indicates the matrix of eigenvectors of the Laplacian of layer $\alpha$. Note that to obtain this result, one must ensure that the Laplacian eigenvectors of each layer are orthonormal, a choice that is always possible because all the Laplacians are real symmetric matrices. Thus, the sums run from~2 rather than~1 because the first eigenvalue of the Laplacian, corresponding to $r=1$, is always~0 for all layers, and the first eigenvector, to which all others are orthogonal, is common to all layers. Equation~\ref{mainsystem} is notable in that it includes prior results about systems with commuting Laplacians as a special case. In fact, if the Laplacians commute they can be simultaneously diagonalized by a common basis of eigenvectors. Thus, in this case, $\mathbf{V}^{(\alpha)}=\mathbf{V}^{(1)}\equiv\mathbf V$ for all $\alpha$. 
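The projection step can be sketched numerically as follows (Python with NumPy; for symmetric matrices, `numpy.linalg.eigh` already returns eigenvalues in non-decreasing order together with an orthonormal eigenbasis, matching the requirements stated above):

```python
import numpy as np

def spectra_and_overlaps(laplacians):
    """For each symmetric layer Laplacian, compute the sorted eigenvalues
    lambda^(alpha) and an orthonormal eigenbasis V^(alpha), then the
    overlap matrices Gamma^(alpha) = (V^(alpha))^T V^(1) that couple the
    eigenmodes of the layers in the projected equations."""
    pairs = [np.linalg.eigh(L) for L in laplacians]
    vals = [w for w, _ in pairs]
    vecs = [V for _, V in pairs]
    gammas = [V.T @ vecs[0] for V in vecs]
    return vals, gammas
```

By construction $\boldsymbol\Gamma^{(1)}$ is the identity, and every $\boldsymbol\Gamma^{(\alpha)}$ is orthogonal; when all the Laplacians commute, the $\boldsymbol\Gamma^{(\alpha)}$ reduce to (signed) identities, consistent with the special case discussed below.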
In turn, this implies that $\boldsymbol\Gamma^{(\alpha)}=\mathds 1$ for all $\alpha$, and Eq.~\ref{mainsystem} becomes \begin{equation*} \begin{split} \dot{\boldsymbol\eta}_{j} &= \left(J\mathbf F\left(\mathbf s\right)-\sigma_1\lambda_j^{(1)}J\mathbf{H}_1\left(\mathbf s\right)\right)\boldsymbol\eta_{j}+\\ & \quad -\sum_{\alpha=2}^M\sigma_\alpha\sum_{k=2}^N\sum_{r=2}^N\lambda_r^{(\alpha)}\delta_{r,k}\delta_{r,j}J\mathbf H_{\alpha}\left(\mathbf s\right)\boldsymbol\eta_{k}\\ &= \left(J\mathbf F\left(\mathbf s\right)-\sum_{\alpha=1}^M\sigma_\alpha\lambda_j^{(\alpha)}J\mathbf H_{\alpha}\left(\mathbf s\right)\right)\boldsymbol\eta_{j}\:,\\ \end{split} \end{equation*} recovering an $M$-parameter variational form as in \cite{Sor012}. Notice that the stability of the synchronized state is completely specified by the maximum conditional Lyapunov exponent $\Lambda$, corresponding to the variation of the norm of $\boldsymbol\Omega\equiv\left(\boldsymbol\eta_2,\dotsc,\boldsymbol\eta_N\right)$. In fact, since $\boldsymbol\Omega$ will evolve on average as $\left|\boldsymbol\Omega\right|\left(t\right)\sim \exp\left(\Lambda t\right)$, the fully synchronized state will be stable against small perturbations only if $\Lambda<0$. \subsection{Case study: networks of Rössler oscillators} To illustrate the predictive power of the framework described above, we apply it to a network of identical Rössler oscillators, with two layers of connections. Note that our method is fully general, and it can be applied to systems composed by any number of layers and containing oscillators of any dimensionality $d$. The particular choice of $M=2$ and $d=3$ for our example allows us to study a complex phenomenology, while retaining ease of illustration. The dynamics of the Rössler oscillators is described by $\dot{\mathbf x}=\left(-y-z,x+ay,b+\left(x-c\right)z\right)^\mathrm{T}$, where we have put $x\equiv x_1$, $y\equiv x_2$ and $z\equiv x_3$. 
The parameters are fixed to the values $a=0.2$, $b=0.2$ and $c=9$, which ensure that the local dynamics of each node is chaotic. Considering each layer of connections individually, it is known that the choice of the function $\mathbf H$ allows (for an ensemble of networked Rössler oscillators) the selection of one of the three classes of stability (see Materials and Methods for more details), which are: \begin{itemize} \item[I:] $\mathbf H\left(\mathbf x\right)=\left(0,0,z\right)$, for which synchronization is always unstable. \item[II:] $\mathbf H\left(\mathbf x\right)=\left(0,y,0\right)$, for which synchronization is stable only for $\sigma_{\alpha}\lambda^{(\alpha)}_2<0.1445$. \item[III:] $\mathbf H\left(\mathbf x\right)=\left(x,0,0\right)$, for which synchronization is stable only for $0.181/\lambda^{(\alpha)}_2<\sigma_{\alpha}<4.615/\lambda^{(\alpha)}_N$. \end{itemize} Because of the double-layer structure, one can now combine different classes of stability in the two layers, studying how one affects the other and identifying new stability conditions arising from the different choices. In the following, we consider three combinations, namely: \begin{itemize} \item \textbf{Case 1:} Layer~1 in class~I and layer~2 in class~II, i.e., $\mathbf{H}_1\left(\mathbf x\right)=\left(0,0,z\right)$ and $\mathbf{H}_2\left(\mathbf x\right)=\left(0,y,0\right)$. \item \textbf{Case 2:} Layer~1 in class~I and layer~2 in class~III, i.e., $\mathbf{H}_1\left(\mathbf x\right)=\left(0,0,z\right)$ and $\mathbf{H}_2\left(\mathbf x\right)=\left(x,0,0\right)$. \item \textbf{Case 3:} Layer~1 in class~II and layer~2 in class~III, i.e., $\mathbf{H}_1\left(\mathbf x\right)=\left(0,y,0\right)$ and $\mathbf{H}_2\left(\mathbf x\right)=\left(x,0,0\right)$.
\end{itemize} As for the choices of the Laplacians $\mathbf L^{\left(1,2\right)}$, we consider three possible combinations: (\emph{i}) both layers as Erd\H{o}s-Rényi networks of equal mean degree (ER-ER); (\emph{ii}) both layers as scale-free networks with power-law exponent~3 (SF-SF); and (\emph{iii}) layer~1 as Erd\H{o}s-Rényi and layer~2 as scale-free (ER-SF). In all cases, the graphs are generated using the algorithm in Ref.~\cite{Gom006}, which allows a continuous interpolation between scale-free and Erd\H{o}s-Rényi structures (see Materials and Methods for details). Therefore, in the following we will consider~9 possible scenarios, i.e., the three combinations of stability classes for each of the three combinations of layer structures. \begin{figure}[t] \centering \includegraphics[width=0.35\textwidth]{Fig2a.eps} \includegraphics[width=0.35\textwidth]{Fig2b.eps} \caption{Maximum Lyapunov exponent for ER-ER topologies in Case~1 (top panel) and Case~2 (bottom panel). The darker blue lines mark the points in the $(\sigma_1,\sigma_2)$ space where $\Lambda$ vanishes, while the striped lines indicate the critical values of $\sigma_2$ if layer~2 is considered in isolation (or, equivalently, if $\sigma_1=0$).}\label{case1-2} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.35\textwidth]{Fig3a.eps} \includegraphics[width=0.35\textwidth]{Fig3b.eps} \caption{Maximum Lyapunov exponent in Case~3 for ER-ER and SF-SF topologies (top and bottom panel, respectively). The darker blue lines mark the points in the $(\sigma_1,\sigma_2)$ plane where the maximum Lyapunov exponent is~0, while the striped lines indicate the stability limits for the $\sigma_1=0$ and $\sigma_2=0$. The points marked in the top panel indicate the choices of coupling strengths used for the numerical validation of the model. 
Note that for SF networks in class III, the stability window disappears.}\label{case3} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=0.9\textwidth]{Fig4.eps} \caption{Numerical validation of the stability analysis. The error of synchronization increases as long as the only active layer is the one predicted to be unstable. When the other layer is switched on, at time~100, the error of synchronization decays exponentially towards~0, as predicted by the model. With respect to Fig.~\ref{case3}, the top-left panel corresponds to region~II, where layer~1 is unstable and layer~2 stable, and the interaction strengths used were $\sigma_1=0.04$ and $\sigma_2=0.3$. The bottom-left panel corresponds to region~IV, where layer~1 is stable and layer~2 is unstable, and the interaction strengths were $\sigma_1=0.15$ and $\sigma_2=0.5$. The top-right and bottom-right panels correspond to region~VI, where both layers are unstable. The layer active from the beginning was layer~1 for the top-right panel and layer~2 for the bottom-right. In both cases the interaction strengths were $\sigma_1=0.04$ and $\sigma_2=0.5$.}\label{valid} \end{figure*} \textbf{Case 1}. Rewriting the system of equations~(\ref{mainsystem}) explicitly for each component of the $\boldsymbol\eta_{j}$, we obtain here: \begin{align} {{}\dot\eta_j}_1 &= -{\eta_j}_2-{\eta_j}_3\;,\label{eqs11}\\ {{}\dot\eta_j}_2 &= {\eta_j}_1+0.2{\eta_j}_2-\sigma_2\sum_{k=2}^N\sum_{r=2}^N\lambda_r^{(2)}\Gamma_{r,k}\Gamma_{r,j}{\eta_k}_2\;,\label{eqs12}\\ {{}\dot\eta_j}_3 &= s_3{\eta_j}_1+\left(s_1-9\right){\eta_j}_3-\sigma_1\lambda_j^{(1)}{\eta_j}_3\label{eqs13}\:, \end{align} from which the maximum Lyapunov exponent can be numerically calculated. 
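A possible numerical implementation of this calculation is sketched below (Python with NumPy; the Benettin-type renormalization scheme is our own choice, and the synchronized trajectory is frozen within each integration step, which is an approximation):

```python
import numpy as np

def rk4(f, y, dt):
    # One classical fourth-order Runge-Kutta step.
    k1 = f(y)
    k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2)
    k4 = f(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def rossler(s, a=0.2, b=0.2, c=9.0):
    x, y, z = s
    return np.array([-y - z, x + a * y, b + (x - c) * z])

def max_lyapunov_case1(lam1, lam2, Gamma, sigma1, sigma2,
                       dt=0.01, steps=20000, transient=5000, renorm=10, seed=0):
    """Benettin-type estimate of the maximum conditional Lyapunov exponent
    for the Case-1 variational equations.  lam1, lam2: nonzero Laplacian
    eigenvalues (lambda_2,...,lambda_N) of the two layers; Gamma: the
    corresponding (N-1)x(N-1) block of (V^(2))^T V^(1)."""
    lam1 = np.asarray(lam1, float)
    M = Gamma.T @ np.diag(lam2) @ Gamma   # M_jk = sum_r lam_r^(2) G_rj G_rk
    s = np.array([1.0, 1.0, 1.0])
    for _ in range(transient):            # relax onto the Roessler attractor
        s = rk4(rossler, s, dt)
    rng = np.random.default_rng(seed)
    eta = rng.standard_normal((len(lam1), 3))
    eta /= np.linalg.norm(eta)
    log_sum = 0.0
    for n in range(steps):
        def var_field(e, s=s):            # s is frozen during the step
            d = np.empty_like(e)
            d[:, 0] = -e[:, 1] - e[:, 2]
            d[:, 1] = e[:, 0] + 0.2 * e[:, 1] - sigma2 * (M @ e[:, 1])
            d[:, 2] = s[2] * e[:, 0] + (s[0] - 9.0) * e[:, 2] - sigma1 * lam1 * e[:, 2]
            return d
        eta = rk4(var_field, eta, dt)
        s = rk4(rossler, s, dt)
        if (n + 1) % renorm == 0:         # renormalize, accumulate log-growth
            nrm = np.linalg.norm(eta)
            log_sum += np.log(nrm)
            eta /= nrm
    return log_sum / (steps * dt)
```

The sign of the returned estimate then discriminates the stable ($\Lambda<0$) from the unstable ($\Lambda>0$) regions of the $(\sigma_1,\sigma_2)$ plane.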
In the top panel of Fig.~\ref{case1-2} we observe that, for ER-ER topologies, the first layer is dominated by the second, as the stability region of the whole system appears to be almost independent of $\sigma_1$, apart from a slight increase of the critical value of $\sigma_2$ as $\sigma_1$ increases. This demonstrates the ability of class~II systems to control the instabilities inherent to systems in class~I. This result appears to be robust with respect to the choice of underlying structures, as qualitatively similar results are obtained for SF-SF, ER-SF and SF-ER topologies (see Fig.~1 in Supplementary Material). \textbf{Case 2}. For Case~2, the system of equations~(\ref{mainsystem}) reads: \begin{align} {{}\dot\eta_j}_1 &= -{\eta_j}_2-{\eta_j}_3-\sigma_2\sum_{k=2}^N\sum_{r=2}^N\lambda_r^{(2)}\Gamma_{r,k}\Gamma_{r,j}{\eta_k}_1\label{eqs21}\;,\\ {{}\dot\eta_j}_2 &= {\eta_j}_1+0.2{\eta_j}_2\label{eqs22}\;,\\ {{}\dot\eta_j}_3 &= s_3{\eta_j}_1+\left(s_1-9\right){\eta_j}_3-\sigma_1\lambda_j^{(1)}{\eta_j}_3\label{eqs23}\:. \end{align} From the bottom panel in Fig.~\ref{case1-2} we observe that, also in this case, the second layer strongly dominates the whole system, as the overall stability window is almost independent of the value of $\sigma_1$. This result, together with that obtained for Case~1, suggests that class~I systems, although intrinsically preventing synchronization, are easily controllable by both class~II and class~III systems; in analogy to Case~1, we observe a slight widening of the stability window for increasing values of $\sigma_1$. Again, the results are almost independent of the choice of the underlying topologies (see Fig.~2 in the Supplementary Material). \textbf{Case 3}.
Finally, for Case~3, equations~(\ref{mainsystem}) become: \begin{align} {{}\dot\eta_j}_1 &= -{\eta_j}_2-{\eta_j}_3-\sigma_2\sum_{k=2}^N\sum_{r=2}^N\lambda_r^{(2)}\Gamma_{r,k}\Gamma_{r,j}{\eta_k}_1\label{eqs31}\\ {{}\dot\eta_j}_2 &= {\eta_j}_1+0.2{\eta_j}_2-\sigma_1\lambda_j^{(1)}{\eta_j}_2\label{eqs32}\\ {{}\dot\eta_j}_3 &= s_3{\eta_j}_1+\left(s_1-9\right){\eta_j}_3\label{eqs33}\:. \end{align} Here, the system reveals its most striking features. In particular, for ER-ER topologies (see Fig.~\ref{case3}, top panel), we observe~6 different regions, identified in the figure by Roman numerals. Namely, in region~I, synchronization is stable in both layers taken individually (or, equivalently, for either $\sigma_1=0$ and $\sigma_2=0$), and, not surprisingly, the full bi-layered network is also stable. Regions~II, III and~IV correspond to scenarios qualitatively similar to the ones seen previously, i.e., where stability properties of one layer dominate over those of the other. Finally, regions~V and~VI are the most important, as within them one finds effects that are genuinely due to the multi-layered nature of the interactions. There, both layers are individually unstable, and synchronization would not be observed at all for either $\sigma_1=0$ or $\sigma_2=0$. However, the emergence of a collective synchronous motion is remarkably obtained with a suitable tuning of the parameters. In these regions, it is therefore the \emph{simultaneous} action of the two layers that induces stability. Taken collectively, the results we obtained for the three cases indicate that a multi-layer interaction topology enhances the stability of the synchronized state, even allowing the possibility of stabilizing systems that are unstable when considered isolated. 
\subsection{Numerical validation} We validate the stability predictions derived from equations~(\ref{mainsystem}) by simulating the full non-linear system of equations~(\ref{eomsys}) for an ER-ER topology in Case~3, with three different choices of coupling constants $\sigma_1$ and $\sigma_2$. The three specific sets of coupling values (shown in the top panel of Fig.~\ref{case3}) correspond to situations in which either one or both layers are unstable when isolated, but yield a stable synchronized state when coupled. More specifically, we have chosen ($\sigma_1=0.04$, $\sigma_2=0.3$) corresponding to region~II, ($\sigma_1=0.15$, $\sigma_2=0.5$) in region~IV, and ($\sigma_1=0.04$, $\sigma_2=0.5$) in region~VI. For all three cases, we initially run the simulations with only the unstable layer active, setting either $\sigma_1=0$ or $\sigma_2=0$ depending on the set of couplings considered. Let us note that for the third set of couplings (region~VI) either layer can be the initially active one, since both are unstable when isolated. Then, after~100 integration steps, we activate the other layer by setting its interaction strength to the (non-zero) value corresponding to the region for which we predicted a stable synchronized state. As the systems evolve, we monitor the norm $\left|\boldsymbol\Omega\right|\left(t\right)$ to evaluate the deviation from the synchronized solution over time. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{Fig5.eps} \caption{Identification of the critical points. For a system with ER-ER topology in Case~3 and fixed $\sigma_2=1$, the synchronization error never vanishes if $\sigma_1<\sigma_C\approx 0.04$. Conversely, as soon as $\sigma_1>\sigma_C$, the system is again able to synchronize (green line). One recovers the mono-layer case by imposing $\sigma_2=0$, for which similar results are found, with a critical coupling strength of approximately~$0.08$ (red line).
Both results are in perfect agreement with the theoretical predictions (see Fig.~\ref{valid}).} \label{scan} \end{figure} The results, in Fig.~\ref{valid}, show that, when only the unstable interaction layer is active, $\left|\boldsymbol\Omega\right|\left(t\right)$ never vanishes. However, as soon as the other layer is switched on, the norm of $\boldsymbol\Omega$ undergoes a sudden change of behaviour, starting an exponential decay towards~0. This confirms the prediction that the unstable behaviour induced by each layer is compensated by the mutual presence of two interaction layers. Qualitatively similar scenarios are observed in Case~3 for SF-SF topologies, as well as for ER-SF and SF-ER structures (see Fig.~3 in Supplementary Material). Again, they confirm the correctness of the predictions, showing that in region~I layer~1 dominates over layer~2, and that in region~II the overall stability can be induced even when both layers are unstable in isolation. To provide an even stronger demonstration of the predictive power of our method, we simulate the full system for the ER-ER topology in Case~3 fixing the value of $\sigma_2$ to~1 and varying the value of $\sigma_1$ from~0 to~$0.2$. Starting from an initially perturbed synchronized state, after a transient of~100 time units we measure the average of $\left|\boldsymbol\Omega\right|$ over the next~20 integration steps. The results, in Fig.~\ref{scan}, show a very good agreement between the simulations and the theoretical prediction (cf.\ Fig.~\ref{valid}). For values of $\sigma_1$ less than a critical value of approximately~$0.04$, the system never synchronizes. Conversely, when $\sigma_1$ crosses the critical value, the system is able to reach a synchronized state. Interestingly, repeating the simulation with $\sigma_2=0$ one recovers the monoplex case. Also in this instance, we find good agreement between theoretical prediction and simulation, with a critical coupling value of approximately~$0.08$.
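The switch-on protocol just described can be sketched for small illustrative networks as follows (Python with NumPy; the synchronization error is measured here as the mean distance of the node states from their centroid, a simple proxy for $\left|\boldsymbol\Omega\right|$, and the coupling functions are the Case-3 choices):

```python
import numpy as np

def simulate_switch_on(L1, L2, sigma1, sigma2,
                       t_on=100.0, t_end=200.0, dt=0.005, seed=1):
    """Integrate the full nonlinear two-layer Roessler network with only
    layer 1 active, switch layer 2 on at t = t_on, and return the
    synchronization error e(t) = mean distance of the nodes from their
    centroid."""
    rng = np.random.default_rng(seed)
    N = L1.shape[0]
    # slightly perturbed synchronized initial condition
    X = np.tile([1.0, 1.0, 1.0], (N, 1)) + 1e-3 * rng.standard_normal((N, 3))

    def H1(X):                    # layer 1: class II coupling (0, y, 0)
        out = np.zeros_like(X)
        out[:, 1] = X[:, 1]
        return out

    def H2(X):                    # layer 2: class III coupling (x, 0, 0)
        out = np.zeros_like(X)
        out[:, 0] = X[:, 0]
        return out

    def rhs(X, s2):
        F = np.column_stack([-X[:, 1] - X[:, 2],
                             X[:, 0] + 0.2 * X[:, 1],
                             0.2 + (X[:, 0] - 9.0) * X[:, 2]])
        return F - sigma1 * (L1 @ H1(X)) - s2 * (L2 @ H2(X))

    errs = []
    for n in range(int(round(t_end / dt))):
        s2 = sigma2 if n * dt >= t_on else 0.0   # layer 2 off before t_on
        k1 = rhs(X, s2)
        k2 = rhs(X + 0.5 * dt * k1, s2)
        k3 = rhs(X + 0.5 * dt * k2, s2)
        k4 = rhs(X + dt * k3, s2)
        X = X + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        errs.append(np.mean(np.linalg.norm(X - X.mean(axis=0), axis=1)))
    return np.array(errs)
```

With Laplacians generated as in the text and the coupling values of the marked regions, the returned error curve reproduces the qualitative behaviour of Figs.~\ref{valid} and~\ref{scan}: bounded away from zero while only the unstable layer is active, then decaying once the second layer is switched on.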
\section{Discussion} The results shown above clearly illustrate the rich dynamical phenomenology that emerges when the multi-layer structure of real networked systems is taken into account. In an explicit example, we have observed that synchronization stability can be induced in unstable networked layers by coupling them with stable ones. In addition, we have shown that stability can be achieved even when all the layers of a complex system are unstable if considered in isolation. This latter result constitutes a clear instance of an effect that is intrinsic to the true multi-layer nature of the interactions amongst the dynamical units. Similarly, we expect that the opposite could also be observed, namely that the synchronizability of a system decreases, or even disappears, when two individually synchronizable layers are combined. On more general grounds, the theory developed here allows one to assess the stability of the synchronized state of coupled non-linear dynamical systems with multi-layer interactions in a fully general setting. The system can have any arbitrary number of layers and, perhaps more importantly, the network structures of each layer can be fully independent, as we do not exploit any special structural or dynamical property to develop our theory. This way, our approach generalizes the celebrated Master Stability Function~\cite{PeC998} to multi-layer structures, retaining the general applicability of the original method. The complexity in the extra layers is reflected in the fact that the formalism yields a set of coupled linear differential equations (Eq.~\ref{mainsystem}), rather than a single parametric variational equation, which is recovered in the case of commuting Laplacians. This system of equations describes the evolution of a set of $d$-dimensional vectors that encode the displacement of each dynamical system from the synchronized state. 
The solution of the system gives a necessary condition for stability: if the norm of these vectors vanishes in time, then the system gets progressively closer to synchronization, which is therefore stable; if, instead, the length of the vectors always remains greater than~0, then the synchronized state is unstable. The generality of the method presented, which is applicable to any undirected structure, and its straightforward implementation for any choice of $C^1$ dynamical setup pave the way for the exploration of synchronization properties on multi-layer networks of arbitrary size and structure. Thus, we are confident that our work can be used in the design of optimal multilayered synchronizable systems, a problem that has attracted much attention in mono-layer complex networks~\cite{design1,design2,design3,design4}. In fact, the straightforward nature of our formalism makes it suitable to be efficiently used together with successful techniques, such as the rewiring of links or the search for an optimal distribution of links weights, in the context of multilayer networks. In turn, these techniques may help in addressing the already-mentioned question of the suppression of synchronization due to the interaction between layers, unveiling possible combinations of stable layers that, when interacting, suppress the dynamical coherence that they show in isolation. Also, we believe that the reliability of our method will provide aid to the highly current field of multiplex network controllability~\cite{LSB011,Nepusz012,Sun013,Gao014,Skardal015}, enabling researchers to engineer control layers to drive the system dynamics towards a desired state. In addition, several extensions of our work towards more general systems are possible. 
A particularly relevant one is the study of multi-layer networks of heterogeneous oscillators, which have a rich phenomenology, and whose synchronizability has been shown to depend on all the Laplacian eigenvalues~\cite{Ska14}, in a way similar to the results presented here. Relaxing the requirement of an undirected structure, our approach can also be used to study directed networks. The graph Laplacians in this case are not necessarily diagonalizable, but a considerable amount of information can be still extracted from them using singular value decomposition. For example, it is already known that directed networks can be rewired to obtain an optimal distribution of in-degrees for synchronization~\cite{Ska16}. Further areas that we intend to explore in future work are those of almost identical oscillators and almost identical layers, which can be approached using perturbative methods and constitute more research directions with even wider applicability. Finally, as our method allows one to study the rich synchronization phenomenology of general multi-layer networks, we believe it will find application in technological, biological and social systems where synchronization processes and multilayered interactions are at work. Some examples are coupled power-grid and communication systems, some brain neuropathologies such as epilepsy, and the onset of coordinated social behavior when multiple interaction channels coexist. Of course, as mentioned above, these applications will demand further advances in order to include specific features such as the non-homogeneity of interacting units or the possibility of directional interactions. 
\section{Materials and Methods} \subsection{Linearization around the synchronized solution} To linearize the system in Eq.~(\ref{eomsys}) around the synchronization manifold, use the fact that for any $C^1$-vector field $\mathbf f$ we can write: \begin{equation*} \mathbf f\left(\mathbf x\right)\approx\mathbf f\left(\mathbf{x_0}\right)+J\mathbf f\left(\mathbf{x_0}\right)\cdot\left(\mathbf x-\mathbf{x_0}\right)\:. \end{equation*} Using this relation, we can expand $\mathbf F$ and $\mathbf H$ around $\mathbf s$ in the system of equations~\ref{eomsys} to obtain: \begin{equation} \delta\dot{\mathbf x}_i = \dot{\mathbf x}_i-\dot{\mathbf s}\approx J\mathbf F\left(\mathbf s\right)\cdot\delta\mathbf{x}_i-\sum_{\alpha=1}^M\sigma_\alpha J\mathbf H_{\alpha}\left(\mathbf s\right)\cdot\sum_{j=1}^N L^{(\alpha)}_{i,j}\delta\mathbf{x}_j\:. \end{equation} Now, we use the Kronecker matrix product to decompose the equation above into self-mixing and interaction terms, and introduce the vector $\delta\mathbf X$, to get the final system of equations~\ref{linglob}. The system~\ref{linglob} can be rewritten by projecting $\delta\mathbf X$ onto the Laplacian eigenvectors of a layer. The choice of layer to carry out this projection is entirely arbitrary, because the Laplacian eigenvectors are always a basis of $\mathbb{R}^N$. Without loss of generality, we choose here layer~1, and we ensure that the eigenvectors are orthonormal. Then, define $\mathds 1_d$ to be the $d$-dimensional identity matrix, and multiply Eq.~\ref{linglob} on the left by $\left({\mathbf{V}^{(1)}}^\mathrm{T}\otimes\mathds 1_d\right)$: \begin{multline*} \left({\mathbf{V}^{(1)}}^\mathrm{T}\otimes\mathds 1_d\right)\delta\mathbf{\dot X} = \Bigg[\left({\mathbf{V}^{(1)}}^\mathrm{T}\otimes\mathds 1_d\right)\left(\mathds 1\otimes J\mathbf F\left(\mathbf s\right)\right)\\ \left.
-\sum_{\alpha=1}^M\sigma_\alpha\left({\mathbf{V}^{(1)}}^\mathrm{T}\otimes\mathds 1_d\right)\left(\mathbf L^{(\alpha)}\otimes J\mathbf H_\alpha\left(\mathbf s\right)\right)\right]\delta\mathbf X\:. \end{multline*} Now, use the relation \begin{equation}\label{kronmurb} \left(\mathbf{M_1}\otimes\mathbf{M_2}\right)\left(\mathbf{M_3}\otimes\mathbf{M_4}\right) = \left(\mathbf{M_1}\mathbf{M_3}\right)\otimes\left(\mathbf{M_2}\mathbf{M_4}\right) \end{equation} to obtain \begin{multline*} \left({\mathbf{V}^{(1)}}^\mathrm{T}\otimes\mathds 1_d\right)\delta\mathbf{\dot X} = \left[{\mathbf{V}^{(1)}}^\mathrm{T}\otimes J\mathbf F\left(\mathbf s\right)\right.\\ \left. -\left(\sigma_1\mathbf{D}^{(1)}{\mathbf{V}^{(1)}}^\mathrm{T}\right)\otimes J\mathbf{H}_1\left(\mathbf s\right)\right]\delta\mathbf X\\ -\sum_{\alpha=2}^M\sigma_\alpha\left({\mathbf{V}^{(1)}}^\mathrm{T}\mathbf L^{(\alpha)}\right)\otimes J\mathbf H_\alpha\left(\mathbf s\right)\delta\mathbf X\:, \end{multline*} where $\mathbf{D}^{(\alpha)}$ is the diagonal matrix of the eigenvalues of layer $\alpha$, and we have split the sum into the first term and the remaining $M-1$ terms. Left-multiply the first occurrence of ${\mathbf{V}^{(1)}}^\mathrm{T}$ in the right-hand-side by $\mathds 1$, and right-multiply $\mathbf F$ and $\mathbf H_1$ by $\mathds 1_d$. Then, using again Eq.~\ref{kronmurb}, it is \begin{multline*} \left({\mathbf{V}^{(1)}}^\mathrm{T}\otimes\mathds 1_d\right)\delta\mathbf{\dot X} = \left[\left(\mathds 1\otimes J\mathbf F\left(\mathbf s\right)\right)\left({\mathbf{V}^{(1)}}^\mathrm{T}\otimes\mathds 1_d\right)\right.\\ \left. -\left(\sigma_1\mathbf{D}^{(1)}\otimes J\mathbf H_1\left(\mathbf s\right)\right)\left({\mathbf{V}^{(1)}}^\mathrm{T}\otimes\mathds 1_d\right)\right]\delta\mathbf X\\ -\sum_{\alpha=2}^M\sigma_\alpha{\mathbf{V}^{(1)}}^\mathrm{T}\mathbf L^{(\alpha)}\otimes J\mathbf H_\alpha\left(\mathbf s\right)\delta\mathbf X\:. 
\end{multline*} Factor out $\left({\mathbf{V}^{(1)}}^\mathrm{T}\otimes\mathds 1_d\right)$ to get \begin{multline*} \left({\mathbf{V}^{(1)}}^\mathrm{T}\otimes\mathds 1_d\right)\delta\mathbf{\dot X} = \left(\mathds 1\otimes J\mathbf F\left(\mathbf s\right)-\sigma_1\mathbf D^{(1)}\otimes J\mathbf H_1\left(\mathbf s\right)\right)\\ \times\left({\mathbf{V}^{(1)}}^\mathrm{T}\otimes\mathds 1_d\right)\delta\mathbf X -\sum_{\alpha=2}^M\sigma_\alpha{\mathbf{V}^{(1)}}^\mathrm{T}\mathbf L^{(\alpha)}\otimes J\mathbf H_\alpha\left(\mathbf s\right)\delta\mathbf X\:. \end{multline*} The relation \begin{equation*} \left(\mathbf{M_1}\otimes\mathbf{M_2}\right)^{-1} = \mathbf{M_1}^{-1}\otimes\mathbf{M_2}^{-1} \end{equation*} implies that $\left({\mathbf{V}^{(1)}}\otimes\mathds 1_d\right)\left({\mathbf{V}^{(1)}}^\mathrm{T}\otimes\mathds 1_d\right)$ is the $dN$-dimensional identity matrix. Then, left-multiply the last $\delta\mathbf X$ by this expression, obtaining \begin{multline*} \left({\mathbf{V}^{(1)}}^\mathrm{T}\otimes\mathds 1_d\right)\delta\mathbf{\dot X} = \left(\mathds 1\otimes J\mathbf F\left(\mathbf s\right)-\sigma_1\mathbf D^{(1)}\otimes J\mathbf H_1\left(\mathbf s\right)\right)\\ \times\left({\mathbf{V}^{(1)}}^\mathrm{T}\otimes\mathds 1_d\right)\delta\mathbf X -\sum_{\alpha=2}^M\sigma_\alpha{\mathbf{V}^{(1)}}^\mathrm{T}\mathbf L^{(\alpha)}\otimes J\mathbf H_\alpha\left(\mathbf s\right)\\ \times\left({\mathbf{V}^{(1)}}\otimes\mathds 1_d\right)\left({\mathbf{V}^{(1)}}^\mathrm{T}\otimes\mathds 1_d\right)\delta\mathbf X\:. \end{multline*} Now define the vector-of-vectors \begin{equation*} \boldsymbol\eta\equiv\left({\mathbf{V}^{(1)}}^\mathrm{T}\otimes\mathds 1_d\right)\delta\mathbf X\:. \end{equation*} Each component of $\boldsymbol\eta$ is the projection of the global synchronization error vector $\delta\mathbf X$ onto the space spanned by the corresponding Laplacian eigenvector of layer~1.
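The two Kronecker-product identities used in this manipulation can be checked numerically. The sketch below, with random matrices of arbitrary illustrative sizes, is only a sanity check of the algebra, not part of the derivation.

```python
# Numerical check of the mixed-product rule
#   (M1 (x) M2)(M3 (x) M4) = (M1 M3) (x) (M2 M4)
# and of the consequence that, for an orthogonal V,
#   (V (x) 1_d)(V^T (x) 1_d) is the identity.
import numpy as np

rng = np.random.default_rng(0)
M1, M3 = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
M2, M4 = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

lhs = np.kron(M1, M2) @ np.kron(M3, M4)
rhs = np.kron(M1 @ M3, M2 @ M4)
assert np.allclose(lhs, rhs)

# orthonormal eigenvector matrix of a symmetric (Laplacian-like) matrix
L = rng.standard_normal((5, 5)); L = L + L.T
_, V = np.linalg.eigh(L)
P = np.kron(V, np.eye(3)) @ np.kron(V.T, np.eye(3))
assert np.allclose(P, np.eye(15))
print("Kronecker identities verified")
```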
The first eigenvector, which defines the synchronization manifold, is common to all layers, and all other eigenvectors are orthogonal to it. Thus, the norm of the projection of $\boldsymbol\eta$ over the space spanned by the last $N-1$ eigenvectors is a measure of the synchronization error in the directions transverse to the synchronization manifold. Because of how $\boldsymbol\eta$ is built, this projection is just the vector $\boldsymbol\Omega$, consisting of the last $N-1$ components of $\boldsymbol\eta$. With this definition of $\boldsymbol\eta$, left-multiply $\mathbf L^{(\alpha)}$ by the identity expressed as $\mathbf V^{(\alpha)}{\mathbf V^{(\alpha)}}^\mathrm{T}$, to obtain \begin{multline*} \dot{\boldsymbol\eta} = \left(\mathds 1\otimes J\mathbf F\left(\mathbf s\right)-\sigma_1\mathbf D^{(1)}\otimes J\mathbf H_1\left(\mathbf s\right)\right)\boldsymbol\eta\\ -\sum_{\alpha=2}^M\sigma_\alpha{\mathbf V^{(1)}}^\mathrm{T}{\mathbf V^{(\alpha)}}\mathbf D^{(\alpha)}{\mathbf V^{(\alpha)}}^\mathrm{T}{\mathbf V^{(1)}}\otimes J\mathbf H_\alpha\left(\mathbf s\right)\boldsymbol\eta\:. \end{multline*} In this vector equation, the first part is purely variational, since it consists of a block-diagonal matrix that multiplies the vector-of-vectors $\boldsymbol\eta$. The second part, instead, mixes different components of $\boldsymbol\eta$. This can be seen more easily by expressing the vector equation as a system of equations, one for each component $j$ of $\boldsymbol\eta$. To write such a system, it is convenient to first define $\boldsymbol\Gamma^{(\alpha)}\equiv{\mathbf V^{(\alpha)}}^\mathrm{T}{\mathbf V^{(1)}}$. Then, consider the non-variational part. Its contribution to the $j$th component of $\dot{\boldsymbol\eta}$ is given by the product of the $j$th row of blocks of the block-matrix by $\boldsymbol\eta$.
In turn, each element of this row of blocks consists of the corresponding element of the $j$th row of ${\boldsymbol\Gamma^{(\alpha)}}^\mathrm{T}\mathbf D^{(\alpha)}\boldsymbol\Gamma^{(\alpha)}$ multiplied by $J\mathbf H_\alpha\left(\mathbf s\right)$: \begin{equation*} \left({\boldsymbol\Gamma^{(\alpha)}}^\mathrm{T}\mathbf D^{(\alpha)}\boldsymbol\Gamma^{(\alpha)}\right)_{j,k} = \sum_{r=1}^N{\Gamma^{(\alpha)}}^\mathrm{T}_{j,r}\lambda_r^{(\alpha)}\Gamma^{(\alpha)}_{r,k}\:. \end{equation*} Summing over all the components $\boldsymbol\eta_k$ yields \begin{multline*} \dot{\boldsymbol\eta}_{j} = \left(J\mathbf F\left(\mathbf s\right)-\sigma_1\lambda_j^{(1)}J\mathbf{H}_1\left(\mathbf s\right)\right)\boldsymbol\eta_j\\ -\sum_{\alpha=2}^M\sigma_\alpha\sum_{k=2}^N\sum_{r=2}^N\lambda_r^{(\alpha)}\Gamma^{(\alpha)}_{r,k}\Gamma^{(\alpha)}_{r,j}J\mathbf H_{\alpha}\left(\mathbf s\right)\boldsymbol\eta_k\:, \end{multline*} which is Eq.~\ref{mainsystem}. Notice that the sums over $r$ and $k$ start from~2, because the first eigenvalue is always~0, and the orthonormality of the eigenvectors guarantees that all the elements of the first column of $\boldsymbol\Gamma^{(\alpha)}$ except the first are~0. Each matrix $\Gamma^{(\alpha)}$ effectively captures the alignment of the Laplacian eigenvectors of layer~$\alpha$ with those of layer~1. If the eigenvectors for layer~$\alpha$ are identical to those of layer~1, as it happens when the two Laplacians commute, then $\Gamma^{(\alpha)}=\mathds 1$.
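The stated properties of $\Gamma^{(\alpha)}$ can be verified on small examples. The sketch below uses a path graph and a star graph as two illustrative layers: it checks that $\Gamma$ is orthogonal, that its first column is $e_1$ up to sign (both layers share the constant eigenvector), and that $\Gamma$ reduces to the identity up to signs when the two Laplacians commute.

```python
# Check of the properties of Gamma^(alpha) = V^(alpha)^T V^(1) on toy
# graphs.  The path and star adjacency matrices are illustrative choices;
# the commuting case takes L^(2) = (L^(1))^2, which shares eigenvectors
# with L^(1).
import numpy as np

def laplacian(adj):
    return np.diag(adj.sum(axis=1)) - adj

# layer 1: path graph on 4 nodes; layer 2: star graph on 4 nodes
A1 = np.array([[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]], float)
A2 = np.array([[0,1,1,1],[1,0,0,0],[1,0,0,0],[1,0,0,0]], float)

_, V1 = np.linalg.eigh(laplacian(A1))
_, V2 = np.linalg.eigh(laplacian(A2))
G = V2.T @ V1

assert np.allclose(G @ G.T, np.eye(4))             # orthogonality
assert np.allclose(np.abs(G[:, 0]), [1, 0, 0, 0])  # first column ~ e_1

# commuting case: shared eigenvectors, so Gamma is the identity up to signs
_, V2c = np.linalg.eigh(laplacian(A1) @ laplacian(A1))
Gc = V2c.T @ V1
assert np.allclose(np.abs(Gc), np.eye(4))
print("Gamma properties verified")
```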
Of course, one can generalize the definition of $\Gamma^{(\alpha)}$ to consider any two layers, introducing the matrices $\Xi^{(\alpha,\beta)}\equiv{\mathbf V^{(\alpha)}}^\mathrm{T}{\mathbf V^{(\beta)}}=\Gamma^{(\alpha)}{\Gamma^{(\beta)}}^\mathrm{T}$ that can even be used to define a measure $\ell_D$ of ``dynamical distance'' between two layers $\alpha$ and $\beta$: \begin{equation*} \ell_D = \sum_{i=2}^N\left[\sum_{j=2}^N \left(\Xi^{(\alpha,\beta)}_{i,j}\right)^2-\left(\Xi^{(\alpha,\beta)}_{i,i}\right)^2\right]\:. \end{equation*} \subsection{MSF and stability classes} A particular case of the treatment we considered above happens when $M=1$. In this case, the second term on the right-hand side of Eq.~\ref{mainsystem} disappears, and the system takes the variational form $\dot{\boldsymbol\eta}_{i}=\mathbf{K}_i\boldsymbol\eta_{i}$, where $\mathbf{K}_i\equiv J\mathbf F\left(\mathbf s\right)-\sigma\lambda_iJ\mathbf H\left(\mathbf s\right)$ is an evolution kernel evaluated on the synchronization manifold. Since $\lambda_1=0$, this equation separates the contribution parallel to the manifold, which reduces to $\dot{\boldsymbol\eta}_{1}=J\mathbf F\left(\mathbf s\right)\boldsymbol\eta_{1}$, from the other $N-1$, which describe perturbations in the directions transverse to the manifold, and that have to be damped for the synchronized state to be stable. Since the Jacobians of $\mathbf F$ and $\mathbf H$ are evaluated on the synchronized state, the variational equations differ only in the eigenvalues $\lambda_i$. Thus, one can extract from each of them a set of $d$ conditional Lyapunov exponents, evaluated along the eigen-modes associated to $\lambda_i$. Putting $\nu\equiv\sigma\lambda_i$, the parametrical behaviour of the largest of these exponents $\Lambda\left(\nu\right)$ defines the so-called Master Stability Function (MSF)~\cite{PeC998}. If the network is undirected, then the spectrum of the Laplacian is real, and the MSF is a real function of $\nu$.
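For constant Jacobians, the conditional Lyapunov exponent can be read off directly as the largest real part of the eigenvalues of $J\mathbf F-\nu J\mathbf H$. The sketch below evaluates this toy MSF on a grid of $\nu$; the Jacobians are invented for illustration (they are not the R\"ossler Jacobians used in the paper), and with $J\mathbf H=\mathds 1$ the resulting curve is $\Lambda(\nu)=0.1-\nu$, crossing zero at $\nu_c=0.1$.

```python
# Minimal illustration of the Master Stability Function for a linear toy:
# with JF and JH constant, Lambda(nu) = max Re eig(JF - nu*JH).
# JF is an invented unstable focus with Lambda(0) = 0.1; JH = identity.
import numpy as np

JF = np.array([[0.1, 1.0], [-1.0, 0.1]])
JH = np.eye(2)

def msf(nu):
    return np.linalg.eigvals(JF - nu * JH).real.max()

nus = np.linspace(0.0, 0.5, 51)
curve = [msf(nu) for nu in nus]
nu_c = nus[next(i for i, v in enumerate(curve) if v < 0)]
print(nu_c)   # first grid value past the exact crossing nu_c = 0.1
```

For a chaotic node dynamics such as R\"ossler, the Jacobians depend on the trajectory and $\Lambda(\nu)$ must instead be estimated by integrating the variational equation, as described in the Numerical calculations subsection below.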
Crucially, for all possible choices of $\mathbf F$ and $\mathbf H$, the MSF of a network falls into one of three possible behaviour classes, defined as follows~\cite{Boc006}: \begin{itemize} \item Class~I: $\Lambda\left(\nu\right)$ never intercepts the $x$ axis. \item Class~II: $\Lambda\left(\nu\right)$ intercepts the $x$ axis in a single point at some $\nu_c \geqslant 0$. \item Class~III: $\Lambda\left(\nu\right)$ is a convex function with negative values within some window $\nu_{c1}<\nu<\nu_{c2}$; in general, $\nu_{c1}\geqslant 0$, with the equality holding when $\mathbf F$ supports a periodic motion. \end{itemize} The elegance of the MSF formalism manifests itself at its finest for systems in Class~III, for which synchronization is stable only if $\sigma\lambda_2>\nu_{c1}$ and $\sigma\lambda_N<\nu_{c2}$ hold simultaneously. This condition implies ${}^{\lambda_N}/{}_{\lambda_2}<{}^{\nu_{c2}}/{}_{\nu_{c1}}$. Since ${}^{\lambda_N}/{}_{\lambda_2}$ is entirely determined by the network topology and ${}^{\nu_{c2}}/{}_{\nu_{c1}}$ depends only on the dynamical functions $\mathbf F$ and $\mathbf H$, one has a simple stability criterion in which structure and dynamics are decoupled. \subsection{Network generation} To generate the networks for our simulations, we use the algorithm described in Ref.~\cite{Gom006}, that creates a one-parameter family of complex networks with a tunable degree of heterogeneity. The algorithm works as follows: start from a fully connected network with $m_0$ nodes, and a set $\mathcal X$ containing $N-m_0$ isolated nodes. At each time step, select a new node from $\mathcal X$, and link it to $m$ other nodes, selected amongst all other nodes. The choice of the target nodes happens uniformly at random with probability $\alpha$, and following a preferential attachment rule with probability $1-\alpha$. 
Repeating these steps $N-m_0$ times, one obtains networks with the same number of nodes and links, whose structure interpolates between ER, for $\alpha=1$, and SF, for $\alpha=0$. \subsection{Numerical calculations} To compute the maximum Lyapunov exponent for a given pair of coupling strengths $\sigma_1$ and $\sigma_2$, we first integrate a single Rössler oscillator from an initial state $\left(0,0,0\right)$ for a transient time $t_\mathrm{trans}$, sufficient to reach the chaotic attractor. The integration is carried out using a fourth-order Runge-Kutta integrator with a time step of $5\times 10^{-3}$, for which we choose a transient time $t_\mathrm{trans}=300$. Then, we integrate the systems for the perturbations (Eqs.~\ref{eqs11}--\ref{eqs13}, \ref{eqs21}--\ref{eqs23} and~\ref{eqs31}--\ref{eqs33}) using Euler's method, again with the same time step of $5\times 10^{-3}$. The initial conditions are such that all the components of all the $\boldsymbol\eta_{j}$ are $1/\sqrt{3\left(N-1\right)}$, making $\boldsymbol\Omega$ a unit vector. At the same time, we continue the integration of the single Rössler unit, to provide $s_1$ and $s_3$, which appear in the perturbation equations. This process is repeated for~500 time windows, each of duration~1 time unit (200~steps). After each window $n$ we compute the norm of the overall perturbation $\left|\boldsymbol\Omega\right|\left(n\right)$, and re-scale the components of the $\boldsymbol\eta_{j}$ so that at the start of the next time window the norm of $\boldsymbol\Omega$ is again set to~1. Finally, when the integration is completed, we estimate the maximum Lyapunov exponent as \begin{equation*} \Lambda = \frac{1}{500}\sum_{n=1}^{500}\log\left(\left|\boldsymbol\Omega\right|\left(n\right)\right)\:. \end{equation*}
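The window-renormalisation procedure above can be sketched on a linear test system $\dot{\mathbf v}=A\mathbf v$, whose maximum Lyapunov exponent is known exactly: the largest real part of the eigenvalues of $A$, here~$0.5$. The window length, step size, and window count mirror the text; the matrix $A$ is an illustrative stand-in for the perturbation equations.

```python
# Sketch of the window-renormalisation estimate of the maximum Lyapunov
# exponent: integrate a unit perturbation for 200 Euler steps of size
# 5e-3, record log|v|, re-scale v to unit norm, repeat for 500 windows,
# and average the logs.  A = diag(0.5, -1) is an illustrative test system.
import math

A = [[0.5, 0.0], [0.0, -1.0]]
dt, steps_per_window, windows = 5e-3, 200, 500

def step(v):  # one Euler step of dv/dt = A v
    return [v[0] + dt * (A[0][0]*v[0] + A[0][1]*v[1]),
            v[1] + dt * (A[1][0]*v[0] + A[1][1]*v[1])]

v = [1/math.sqrt(2), 1/math.sqrt(2)]   # unit initial perturbation
logs = []
for _ in range(windows):
    for _ in range(steps_per_window):
        v = step(v)
    norm = math.hypot(*v)
    logs.append(math.log(norm))        # |Omega|(n) for this window
    v = [c / norm for c in v]          # re-scale back to unit norm

Lambda = sum(logs) / windows           # estimated maximum Lyapunov exponent
print(Lambda)   # close to 0.5
```

The renormalisation at the end of each window is what keeps the computation well-conditioned: without it, the perturbation would grow (or shrink) exponentially and overflow (or underflow) over the 500 windows.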
https://arxiv.org/abs/1611.05406
Synchronization in networks with multiple interaction layers
The structure of many real-world systems is best captured by networks consisting of several interaction layers. Understanding how a multi-layered structure of connections affects the synchronization properties of dynamical systems evolving on top of it is a highly relevant endeavour in mathematics and physics, and has potential applications to several societally relevant topics, such as power grids engineering and neural dynamics. We propose a general framework to assess stability of the synchronized state in networks with multiple interaction layers, deriving a necessary condition that generalizes the Master Stability Function approach. We validate our method applying it to a network of Rössler oscillators with a double layer of interactions, and show that highly rich phenomenology emerges. This includes cases where the stability of synchronization can be induced even if both layers would have individually induced unstable synchrony, an effect genuinely due to the true multi-layer structure of the interactions amongst the units in the network.
https://arxiv.org/abs/2103.05986
Explicit Interval Estimates for Prime Numbers
Using a smoothing function and recent knowledge on the zeros of the Riemann zeta-function, we compute pairs of $(\Delta,x_0)$ such that for all $x \geq x_0$ there exists at least one prime in the interval $(x(1 - \Delta^{-1}), x]$.
\section{Introduction} In 1845, Bertrand \cite{Bertrand_45} postulated that there is at least one prime $p$ satisfying $n < p < 2n-2$ for every integer $n > 3$. In 1850, Chebyshev proved Bertrand's postulate and this kick-started an area of research into the existence of primes in intervals. We say that an interval $[x,x+h]$ is \textit{short} if $h = o(x)$ and \textit{long} if $h = \Omega(x)$, hence Bertrand's postulate is a long interval estimate. There are a number of applications for long interval estimates. Although longer for large $x$, they can be smaller than short intervals for sufficiently small $x$, and so can verify primes in short intervals for small $x$; see \cite{CH_2022} and \cite{Dudek_16} for examples. Long interval results have also been used in problems from additive prime number theory. For example, Dressler \cite{Dressler, DresslerAddendum} refined Bertrand's postulate to show that every positive integer $n\not\in\{1, 2, 4, 6, 9\}$ can be written as the sum of distinct odd primes. Another example is for the ternary Goldbach conjecture: before Helfgott \cite{Helfgott} offered a full proof, Ramar\'{e} and Saouter \cite{RamareSaouter} verified the ternary Goldbach conjecture up to $10^{22}$. For these and other reasons, mathematicians have sought to improve, generalise, or refine Bertrand's postulate. For instance, Shevelev \textit{et al.} \cite{Shevelev} showed that the only integers $k\leq 10^8$ for which there exists a prime in $(k n, (k+1) n)$ for every integer $n > 1$ are $k\in\{1,2,3,5,9,14\}$; this is an extension of \cite{Bachraoui, Loo, Nagura}. There are several types of short interval estimates. For example, Axler \cite{Axler} proved there is always a prime in $(x,x(1 + 198.2/\log^4{x})]$ for $x > 1$. The smallest short interval estimate in the long run would be from Baker, Harman, and Pintz \cite{Baker}, with primes in $[x, x + x^{0.525}]$ for sufficiently large $x$. 
This, however, has not been made explicit, so it is not known how large $x$ would need to be. Better intervals are possible if assuming the Riemann hypothesis. Cram{\'e}r showed that there are primes in $(x,x+c\sqrt{x}\log x]$ for sufficiently large $x$ and some constant $c>0$. Dudek \cite{Dudek_15} determined that we can take $c=3+\epsilon$ for sufficiently large $x$. Bertrand's interval has been narrowed in the form $(x(1 - \Delta^{-1}), x]$, for all $x\geq x_0$ and some large constant $\Delta$. Chronologically, Schoenfeld \cite[Thm.~12]{Schoenfeld76} proved we can take $\Delta=16,598$ for all $x > 2,010,760$; Ramar\'{e} and Saouter \cite{RamareSaouter} computed a table of $\Delta$ for $x>x_0>4\cdot 10^{18}$ for a range of $x_0$, using analytic estimates and a smoothing argument; and Kadiri and Lumley \cite{KadiriLumley} refined Ramar\'{e} and Saouter's method. A selection of $\Delta$ from the latter two are given in Table \ref{comparison_table}. It is important to note that for $x\leq 4\cdot 10^{18}$ there is no need for interval estimates, as Oliveira e Silva, Herzog, and Pardi \cite{O_H_P_14} have computed the gaps between primes in this range. In this article, we update Kadiri and Lumley's work \cite{KadiriLumley} with the most recent explicit estimates for the Riemann zeta-function $\zeta(s)$ and the Chebyshev functions \begin{equation*} \theta(x) = \sum_{p\leq x}\log{p}, \qquad \psi(x) = \sum_{r= 1}^{\left[\frac{\log{x}}{\log{2}}\right]}\theta(x^{\frac{1}{r}}). \end{equation*} In particular, we use the zero-density estimate from Kadiri, Lumley, and Ng \cite{K_L_N_2018}, the zero-counting function estimate from Platt and Trudgian \cite{P_T_15}, the zero-free region from Mossinghoff and Trudgian \cite{MossinghoffTrudgian2015}, the Riemann height from Platt and Trudgian \cite{PlattTrudgianRH}, and bounds on $\psi(x)-\theta(x)$ from Costa Pereira \cite{Costa} and Broadbent \textit{et al.} \cite{BKLNW_20}. 
We also use the $L$-functions and Modular Forms Database (LMFDB) \cite{lmfdb} to extend previous computation of sums over zeros of $\zeta(s)$ and a new generalised version of Ramar\'{e} and Saouter's smooth weight. The resulting problem is one of numeric optimisation, in that we seek to maximise $\Delta$ subject to a constraint, defined over several parameters. These improvements lead to Theorem \ref{thm:main}. \begin{theorem}\label{thm:main} For each pair $(\Delta, x_0)$ in Table \ref{table1}, there exists at least one prime in the interval $(x(1 - \Delta^{-1}), x]$ for all $x\geq x_0$. \end{theorem} The following table compares the most recent values of $\Delta$ for specific $x_0$ to those from Theorem \ref{thm:main}. \begin{table}[H] \centering \begin{tabular}{l|ccc} \multirow{2}{*}{$\log x_0$} & \multicolumn{3}{c}{$\Delta$} \\ \cline{2-4} & \multicolumn{1}{c}{Ramar\'{e} $\&$ Saouter} & \multicolumn{1}{c}{Kadiri $\&$ Lumley} & \multicolumn{1}{c}{Cully-Hugill $\&$ Lee} \\ \hline 46 & $8.13\cdot 10^7$ & $1.48\cdot 10^8$ & $2.97\cdot 10^{12}$ \\ 50 & $1.90\cdot 10^8$ & $7.53\cdot 10^8$ & $2.30\cdot 10^{13}$ \\ 55 & $2.07\cdot 10^8$ & $1.77\cdot 10^9$ & $2.82\cdot 10^{14}$ \\ 60 & $2.09\cdot 10^8$ & $1.96\cdot 10^9$ & $3.44\cdot 10^{15}$ \\ 150 & $2.12\cdot 10^8$ & $2.44\cdot 10^9$ & $9.47\cdot 10^{34}$ \end{tabular} \caption{Values of $\Delta$ from Ramar\'{e} $\&$ Saouter \cite{RamareSaouter}, Kadiri $\&$ Lumley \cite{KadiriLumley}, and Theorem \ref{thm:main}.} \label{comparison_table} \end{table} The jump in our results is primarily due to the new smooth weight. Besides this, the largest improvements came from the higher Riemann height, Kadiri, Lumley, and Ng's zero-density estimate, and Broadbent \textit{et al.}'s estimate for $\psi(x)-\theta(x)$. Depending on $x_0$, these results increased Kadiri and Lumley's $\Delta$ by one or two factors of 10. The remainder of the improvement came from the smooth weight.
Theorem \ref{thm:main} is better than Baker, Harman, and Pintz's result \cite{Baker} for all $$x\leq 4.317\cdot 10^{73}\approx e^{169.55}.$$ That is, using the pairs of $x_0$ and $\Delta$ in Table \ref{table1}, we consecutively confirmed that each interval in Theorem \ref{thm:main} was contained in Baker, Harman, and Pintz's interval. It would be possible to widen this range by computing $\Delta$ for larger $x_0$, but we chose not to continue Table \ref{table1} because of time constraints and computational capacity. \subsection*{Structure} We obtain Theorem \ref{thm:main} by directly applying the explicit estimates (detailed in Section \ref{sec:Background_results}) and an improved smooth weight to Kadiri and Lumley's main theorem \cite[Thm.~2.7]{KadiriLumley}. In Section \ref{sec:backgound_theory}, we outline the smoothing method used by Kadiri and Lumley, and re-state their theorem in Theorem \ref{thm:KLMainTheorem}. We implement the new estimates from Section \ref{sec:backgound_theory} and define the new smooth weight in Section \ref{sec:smoothing_fn_etc}; we also show how it affects the smoothing argument. Finally, we compute pairs of $(\Delta,x_0)$ in Section \ref{sec:results}, and provide some commentary on the optimisation problem. We also discuss avenues for future research. \subsection*{Acknowledgements} We extend our gratitude to Tim Trudgian for suggesting this project to us and for his advice throughout its progress. We also thank David Platt and Richard Brent for their correspondence. \section{Prerequisite Results}\label{sec:Background_results} Our problem is initially defined in terms of $\theta(x)$, and may be translated in terms of $\psi(x)$ using explicit estimates on the difference between $\theta(x)$ and $\psi(x)$. 
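The size of the difference $\psi(x)-\theta(x)$ can be illustrated at small $x$ directly from the definitions of the Chebyshev functions given in the Introduction: the difference equals $\sum_{r\geq 2}\theta(x^{1/r})$ and is dominated by $\theta(\sqrt{x})\approx\sqrt{x}$. The sketch below, using a simple sieve, is purely illustrative; the explicit bounds quoted in this section only apply for much larger $x$ (at least $e^{38}$).

```python
# Small-x illustration of psi(x) - theta(x) via the identity
# psi(x) = sum over r of theta(x^(1/r)); the difference is close to
# sqrt(x) + x^(1/3), matching the shape of the explicit bounds above.
import math

def theta(x):
    """Chebyshev theta via a simple sieve: sum of log p over primes p <= x."""
    n = int(x)
    if n < 2:
        return 0.0
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = bytearray(len(sieve[p*p::p]))
    return sum(math.log(p) for p in range(2, n + 1) if sieve[p])

x = 10**6
psi = sum(theta(x**(1.0/r)) for r in range(1, int(math.log2(x)) + 1))
diff = psi - theta(x)
print(diff)   # roughly sqrt(x) + x^(1/3), i.e. about 1100
```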
To this end, Costa Pereira \cite[Thm.~5]{Costa} gives \begin{equation}\label{Costa_diff} \psi(x)-\theta(x) > 0.999x^\frac{1}{2} + x^\frac{1}{3} \end{equation} for all $x\geq e^{38}$, and Broadbent \textit{et al.} \ \cite[Cor.~5.1]{BKLNW_20} give \begin{equation}\label{Broadbent_diff} \psi(x) - \theta(x) < \alpha_1 x^\frac{1}{2} + \alpha_2 x^\frac{1}{3}, \end{equation} with $\alpha_1= 1+ 1.93378 \cdot 10^{-8}$ and $\alpha_2 = 1.04320$, for all $x\geq e^{40}$. The latter is an improvement on the previous upper bound used in \cite{KadiriLumley} from Costa Pereira \cite{Costa}. Working with $\psi(x)$ enables one to implement the explicit Riemann--von Mangoldt formula \cite[\S 17]{Davenport}, which involves a sum over the non-trivial zeros of $\zeta(s)$, which can be estimated using our present understanding of the density and location of these zeros. We will detail the most recent estimates in the remainder of this section. Henceforth, unless otherwise stated, suppose that $s=\sigma+it$, $\rho=\beta+i\gamma$ is a non-trivial zero of $\zeta(s)$ whenever $0<\beta<1$, and $Z(\zeta)$ denotes the set of non-trivial zeros of $\zeta(s)$. \subsection{The Riemann height} The Riemann Hypothesis (RH) states that $\beta=\frac{1}{2}$ for all $\rho$ in the critical strip. If the RH is known to be true for $|\gamma|\leq H$, then $H$ is called an admissible \textit{Riemann height}. Platt and Trudgian \cite{PlattTrudgianRH} have recently announced that $H = 3,000,175,332,800$ is an admissible Riemann height, and this is the value we will use in our computations. \subsection{Zero-free region of the Riemann zeta function} De la Vall\'{e}e Poussin \cite{ValeePoussin} proved there exists a positive constant $R_0$ such that $\zeta(s)\neq 0$ in the region $|t|\geq T$ and \begin{equation}\label{eqn:dlvp} \sigma \geq 1-\frac{1}{R_0\log t}. 
\end{equation} The best constant in this zero-free region is from Mossinghoff and Trudgian \cite{MossinghoffTrudgian2015}, who establish \eqref{eqn:dlvp} with $R_0=5.573412$ and $T=2$. Korobov \cite{koborov58} and Vinogradov \cite{vinogradov58} independently established an asymptotically superior zero-free region. Ford \cite{ford2002zero} made this result explicit, but it is only better than that of \cite{MossinghoffTrudgian2015} for $|t|>e^{10,152}$. Our estimates only use the zero-free region at the Riemann height $H$, hence we use the result of \cite{MossinghoffTrudgian2015}. \subsection{Zero-density estimates of the Riemann zeta-function} Zero-density estimates bound the number of zeros of $\zeta(s)$ in some rectangular area of the critical strip. The typical notation is, for $\frac{1}{2}\leq \sigma<1$, \begin{align*} N(\sigma, T) &= \#\{\rho:\sigma < \beta < 1, 0 < \gamma < T\}. \end{align*} The number of zeros in the full critical strip is denoted $N(T) =2 N(\frac{1}{2},T)$. Backlund \cite{Backlund_18} established that $N(T)=P(T) + O(\log T)$, where \begin{equation*} P(T) := \frac{T}{2\pi}\log{\frac{T}{2\pi}} - \frac{T}{2\pi} + \frac{7}{8}. \end{equation*} This has been made explicit, in that there are constants $a_1, a_2, a_3$ such that \begin{equation}\label{classical_T} |N(T)-P(T)|\leq R(T) \end{equation} where $R(T) = a_1\log{T} + a_2\log\log{T} + a_3$ for all $T \geq T_0$. A summary of previous estimates for $R(T)$ is given in \cite{Trudgian_14}, and Trudgian proves that we can take $(a_1, a_2, a_3) = (0.112, 0.278, 2.510)$ for $T_0=e$. Platt and Trudgian \cite[Cor.~1]{P_T_15} improved these values to $(a_1, a_2, a_3) = (0.110, 0.290, 2.290)$.\footnote{The authors have been informed in private communications that Platt and Trudgian's result \cite{P_T_15} will be superseded by Hasanalizade \textit{et al.} \cite{HasanalizadeShenWongRiemann}.
One could improve our computations with Hasanalizade \textit{et al.}'s results, but we expect the improvements to be marginal.} For $N(\sigma, T)$, the best explicit estimate for $\sigma$ near 1 is from Kadiri, Lumley, and Ng \cite{K_L_N_2018}. Table 1 of \cite{K_L_N_2018} gives values of $A(\sigma)$ and $B(\sigma)$ such that for $T\geq H$, $k\in \left[ 10^9 / H, 1\right]$, and $\sigma>1/2$, we have \begin{equation}\label{zd_K} N(\sigma, T) \leq A(\sigma) \left( \log(kT) \right)^{2\sigma} (\log T)^{5-4\sigma}T^{\frac{8}{3}(1-\sigma)} + B(\sigma) (\log T)^2 := D(\sigma,T). \end{equation} To give an example, for $\sigma = 0.9$ we can take $A(\sigma) = 11.499$ and $B(\sigma) = 3.186$. It is possible to recalculate $A$ and $B$ with Platt and Trudgian's Riemann height $H$, using the expressions in (4.72) and (4.73) of \cite{K_L_N_2018}. This effects a drop in both constants, so we use the recalculated constants in \eqref{zd_K} for further calculations. Simoni\v{c} \cite{Simonic_19} has given an asymptotically smaller estimate than (\ref{zd_K}) for $\sigma$ near $1/2$. We have not used this estimate because the zero-density estimates are only used at the Riemann height, at which point the bound (\ref{classical_T}) is smaller than the estimate of \cite{Simonic_19} for $\sigma\in [1/2,5/8]$ --- after which (\ref{zd_K}) is better than both. \section{Kadiri and Lumley's Theorem}\label{sec:backgound_theory} In what follows, let $y=(1-\Delta^{-1})x$. To prove that $(\Delta, x_0)$ is a pair for which there exists a prime in $(y, x]$ for all $x\geq x_0$, we will ensure \begin{equation}\label{eqn:condition} \theta(x) - \theta(y) > 0 \end{equation} for $x\geq x_0$. Using Ramar\'{e} and Saouter's smoothing argument from \cite{RamareSaouter}, Kadiri and Lumley translated \eqref{eqn:condition} into an equivalent condition dependent on the non-trivial zeros of $\zeta(s)$, before refining the remainder of Ramar\'{e} and Saouter's arguments through the use of zero-density estimates.
Because we will only update estimates for sums which arise from Kadiri and Lumley's method, and then obtain further improvements using a better choice of smoothing function, we will outline the smoothing argument and re-state Kadiri and Lumley's main theorem in Theorem \ref{thm:KLMainTheorem} for convenience. Throughout the remainder of the paper, suppose that: \begin{itemize} \item $m\geq 2$ is an integer, \item $u=\delta/m$ for $0\leq \delta \leq 10^{-6}$, \item $0 \leq a \leq 1/2$, and \item $X\geq X_0 \geq 3.99\cdot 10^{18}$. \end{itemize} The variable $x$ will be re-parameterised as $x=\exp(u) X(1+\delta (1-a))$, and taking $y = X(1+\delta a)$ implies \begin{equation*} \Delta = \left(1-\frac{1+\delta a}{\exp(u)(1+\delta (1-a))}\right)^{-1}. \end{equation*} We similarly define $x_0= \exp(u) X_0(1+\delta (1-a))$. Note that the bound on $X_0$ is to allow $x_0$ to be as small as $4\cdot 10^{18}$, given the bounds on $m$, $\delta$, and $a$. Ramar\'{e} and Saouter require the smooth weight $f$ to be `$m$-admissible' over $[0, 1]$, so that: \begin{itemize} \item $f$ is $m$-times differentiable, \item $f^{(k)}(0) = f^{(k)}(1) = 0$ for $0 \leq k \leq m - 1$, \item $f$ is non-negative, and \item $f$ is not identically zero. \end{itemize} Along with this, we use the notation: \begin{align*} ||f||_1 &= \int_0^1 |f(t)| dt; \qquad ||f||_2 = \left( \int_0^1 |f(t)|^2 dt\right)^\frac{1}{2}; \\ \nu(f,a) &= \int_0^a f(t) dt + \int_{1-a}^1 f(t) dt;\\ \Xi_{F}(u, X, \delta, t) &= F(\exp(u) X(1+\delta t)) - F(X(1+\delta t)); \\ I(\delta, u, X) &= \frac{1}{||f||_1} \int_0^1 \Xi_{\theta}(u, X, \delta, t) f(t) dt;\\ J(\delta, u, X) &= \frac{1}{||f||_1} \int_0^1 \Xi_{\psi}(u, X, \delta, t) f(t) dt. \end{align*} Note that for all $t\in [a, 1-a]$, $\Xi_{\theta}(u, X, \delta, t) \leq \theta(x) - \theta(y)$.
Multiply both sides by $f(t)$ and integrate over $t\in [a, 1-a]$ to obtain $$\theta(x) - \theta(y)\geq I(\delta, u, X) - E(X) (e^u-1)X,$$ in which \begin{equation}\label{E(x)} E(X) = \frac{2 (1 + \delta)\log(\exp(u)(1+\delta)X_0)\nu(f,a)}{||f||_1\log((\exp(u) - 1)X_0)}. \end{equation} Using \eqref{Costa_diff}, \eqref{Broadbent_diff}, and the bounds on $m$, $\delta$, $a$, $X_0$ we get \begin{equation*} I(\delta, u, X) \geq J(\delta, u, X) - \omega\sqrt{X}, \end{equation*} in which $\omega = 1.0344\cdot 10^{-3}$. The explicit Riemann--von Mangoldt formula is used to estimate $J(\delta, u, X)$, which then requires estimates for a sum over the non-trivial zeros of $\zeta(s)$. Using zero-density estimates and a zero-free region for $\zeta(s)$, Kadiri and Lumley obtain Theorem \ref{thm:KLMainTheorem}. \begin{theorem}[Kadiri--Lumley]\label{thm:KLMainTheorem} Suppose that $\gamma$ are ordinates such that $\rho = \beta + i\gamma\in Z(\zeta)$. There exists a prime in $((1-\Delta^{-1})x,x]$ for all $X\geq X_0$ if \begin{equation}\label{condition} \begin{aligned} F(&0,m,\delta) - B_0(m,\delta){X_0}^{-\frac{1}{2}} - B_1(m, \delta, T_1){X_0}^{-\frac{1}{2}} - B_2(m, \delta, T_1){X_0}^{-\frac{1}{2}} - B_3(m, \delta, \sigma_0){X_0}^{\sigma_0-1}\\ & - B_3(m, \delta, 1-\sigma_0){X_0}^{-\sigma_0} - B_{41}(X_0, m, \delta, \sigma_0) - B_{42}(m, \delta, \sigma_0){X_0}^{-1+\frac{1}{R_0\log{H}}} \\ & - \frac{u}{2(\exp(u) - 1)}{X_0}^{-2} - \frac{\omega}{(\exp(u) - 1)}{X_0}^{-\frac{1}{2}} - \frac{2 (1 + \delta)\log(\exp(u)(1+\delta)X_0)\nu(f,a)}{||f||_1\log((\exp(u) - 1)X_0)} > 0. \end{aligned} \end{equation} The functions $B_i$ are defined as follows. For $0\leq k \leq m$, $s = \sigma + i\tau$ with $\tau > 0$, and $0\leq\sigma\leq 1$, let \begin{equation*} F(k,m,\delta) = \frac{1}{||f||_1} \int_0^1 (1 + \delta t)^{1+k} |f^{(k)}(t)| dt. 
\end{equation*} First, \begin{align*} B_0(m,\delta) = \min\left\{\frac{4 F(0,m,\delta)}{\exp(u/2) + 1} N_0, \frac{4 F(1,m,\delta)}{(\exp(u/2) + 1)\delta} S_0\right\}, \end{align*} in which $N_0$ denotes the number of zeros $\rho\in Z(\zeta)$ such that $0<\gamma \leq T_0$ and \begin{equation*} \sum_{0<\gamma \leq T_0}\frac{1}{\gamma} \leq S_0. \end{equation*} Second, \begin{align*} B_1(m,\delta,T_1) = \min\left\{\frac{4 F(0,m,\delta)}{\exp(u/2) + 1} \left(N(T_1) - N_0\right), \frac{4 F(1,m,\delta)}{(\exp(u/2) + 1)\delta} S_1(T_0, T_1)\right\}, \end{align*} in which \begin{equation*} \sum_{T_0 < \gamma \leq T_1} \frac{1}{\gamma} \leq S_1(T_0, T_1). \end{equation*} Third, \begin{equation*} B_2(m,\delta,T_1) = \frac{2 F(m,m,\delta)}{(\exp(u/2) - 1)\delta^{m}} S_2(m,T_1),\text{ in which } \sum_{T_1 < \gamma \leq H} \frac{1}{{\gamma}^{m+1}} \leq S_2(m, T_1). \end{equation*} Fourth, \begin{equation*} B_3(m,\delta,\sigma) = \frac{2 F(m,m,\delta) (\exp(u\sigma) + 1)}{(\exp(u) - 1)\delta^{m}} S_3(m),\text{ in which } \sum_{\gamma > H} \frac{1}{{\gamma}^{m+1}} \leq S_3(m). \end{equation*} Fifth, \begin{align*} B_{41}(X_0,m,\delta,\sigma_0) &= \frac{2 F(m,m,\delta) (\exp(u) + 1)}{(\exp(u) - 1)\delta^{m}} S_5(X_0,m,\sigma_0),\\ B_{42}(m,\delta,\sigma_0) &= \frac{2 F(m,m,\delta) (\exp(u) + 1)}{(\exp(u) - 1)\delta^{m}} S_4(m,\sigma_0), \end{align*} in which \begin{equation*} \sum_{\substack{\sigma_0 < \beta < 1\\\gamma > H}} \frac{1}{{\gamma}^{m+1}} \leq S_4(m, \sigma_0),\qquad \sum_{\substack{\sigma_0 < \beta < 1\\\gamma > H}} \frac{{X_0}^{-\frac{1}{R_0\log{\gamma}}}}{{\gamma}^{m+1}} \leq S_5(X_0, m, \sigma_0). \end{equation*} \end{theorem} \section{The Smoothing Function and Important Estimates}\label{sec:smoothing_fn_etc} To compute the $B_i$ functions in \eqref{condition}, we need estimates for each $S_i$ and $F(k,m,\delta)$ for $k=0,1,m$. We will provide new estimates for each $S_i$ in Section \ref{ssec:Si} (Lemma \ref{lem:estimates}). 
In order to approximate $F(k,m,\delta)$, our choice of smoothing function is defined and justified in Section \ref{ssec:smoothing_fn}. In Section \ref{ssec:F_k_m_delta}, bounds for $F(k,m,\delta)$ are given (Lemma \ref{F_bounds}). \subsection{Estimating each $S_i$}\label{ssec:Si} For $S_1$, $S_2$, $S_3$ we will use the following lemma from Brent, Platt, and Trudgian \cite[Lem. 3]{BrentPlattTrudgian}, which is a refinement of Lehman's work in \cite[Lem. 1]{Lehman}. For $S_4$ and $S_5$, we will use the zero-density estimate \eqref{zd_K} from Kadiri, Lumley, and Ng. \begin{lemma}[Brent--Platt--Trudgian]\label{lem:BPT} Suppose that $A_0 = 2.067$, $A_1 = 0.059$, $A_2 = 1/150$, $2\pi \leq U \leq V$, and $\phi : [U, V] \to [0, \infty )$ is twice differentiable, non-increasing, and convex on $[U,V]$, so that $\phi'(t) \leq 0$ and $\phi''(t)\geq 0$. Then \begin{equation*} {\sum}'_{\substack{\beta + i\gamma\in Z(\zeta)\\U \leq \gamma \leq V}} \phi(\gamma) = \frac{1}{2\pi} \int_{U}^{V} \phi(t)\log{\frac{t}{2\pi}}dt + \phi(V)Q(V) - \phi(U)Q(U) + E_2(U,V), \end{equation*} in which $Q(T) = N(T) - P(T)$ (as defined in \eqref{classical_T}), $\sum'$ means that if $\beta + iV \in Z(\zeta)$, then the contribution $\phi(V)$ is weighted by $1/2$, and $$|E_2(U,V)| \leq 2(A_0 + A_1\,\log{U})|\phi'(U)| + (A_1 + A_2) \frac{\phi(U)}{U}.$$ \end{lemma} Note that this re-statement can be adjusted to replace the weighted sum $\sum'$ by an ordinary sum $\sum$: accounting for a potential extra contribution of $\phi(V)/2$, the result holds with $$|E_2(U,V)| \leq 2(A_0 + A_1\,\log{U})|\phi'(U)| + (A_1 + A_2) \frac{\phi(U)}{U} + \frac{\phi(V)}{2}.$$ Further, the constants $A_0$ and $A_1$ are taken from Trudgian \cite[Thm.~2.2]{Trudgian_11} and $A_2$ is calculated in Lemma 2 of \cite{BrentPlattTrudgian}.
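To give a sense of scale at the Riemann height, the main term $P(T)$ and error bound $R(T)$ can be evaluated numerically. The sketch below (Python; an illustration only, not part of any proof) uses the Platt--Trudgian constants $(a_1,a_2,a_3)=(0.110,0.290,2.290)$ and the height $H$ quoted above:

```python
import math

# Platt--Trudgian constants for R(T) = a1*log T + a2*log log T + a3
a1, a2, a3 = 0.110, 0.290, 2.290
H = 3_000_175_332_800  # admissible Riemann height (Platt--Trudgian)

def P(T):
    # main term of the Riemann--von Mangoldt formula for N(T)
    return T / (2 * math.pi) * math.log(T / (2 * math.pi)) - T / (2 * math.pi) + 7 / 8

def R(T):
    # explicit error bound, |N(T) - P(T)| <= R(T)
    return a1 * math.log(T) + a2 * math.log(math.log(T)) + a3

print(P(H))  # roughly 1.2e13 zeros with 0 < gamma <= H
print(R(H))  # the error bound, under 7
```

So there are about $1.2\cdot 10^{13}$ zeros with $0<\gamma\leq H$, and the count is known to an additive error below $7$: the error bound is utterly negligible against the main term at this height.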
\begin{lemma}\label{lem:estimates} For integers $m\geq 2$, $X_0 \geq 3.99\cdot 10^{18}$, $T_1\in (T_0, H)$, and $\sigma\in \left(\frac{1}{2},1\right)$, we have \begin{align} S_1(T_0,T_1) &= \frac{1}{2\pi} \log\frac{T_1}{T_0}\log\frac{\sqrt{T_0 T_1}}{2\pi} + \frac{R(T_0)}{T_0} + \frac{R(T_1)+\frac{1}{2}}{T_1} + \mathbf{E}_{T_0},\tag{S1}\label{eqn:estimateS1}\\ S_2(m,T_1) &= \frac{1}{2\pi} \left(\frac{1 + m\log\frac{T_1}{2\pi}}{m^2 {T_1}^m} - \frac{1 + m\log\frac{H}{2\pi}}{m^2 {H}^m}\right) + \frac{R(T_1)}{{T_1}^{m+1}} + \frac{R(H)+\frac{1}{2}}{{H}^{m+1}} + \dot{\mathbf{E}}_{m,T_1},\tag{S2}\label{eqn:estimateS2}\\ S_3(m) &= \frac{1}{2\pi} \left(\frac{1 + m\log\frac{H}{2\pi}}{m^2 {H}^m}\right) + \frac{R(H)}{{H}^{m+1}} + \ddot{\mathbf{E}}_{m,H},\tag{S3}\label{eqn:estimateS3} \end{align} in which \begin{align*} \mathbf{E}_{T_0} &= (A_0 + A_1\log{T_0})\frac{2}{{T_0}^2} + (A_1 + A_2) \frac{1}{{T_0}^2},\\ \dot{\mathbf{E}}_{m,T_1} &= (A_0 + A_1\log{T_1})\frac{2(m+1)}{{T_1}^{m+2}} + (A_1 + A_2) \frac{1}{{T_1}^{m+2}},\\ \ddot{\mathbf{E}}_{m,H} &= (A_0 + A_1\log{H})\frac{2(m+1)}{{H}^{m+2}} + (A_1 + A_2) \frac{1}{{H}^{m+2}}. \end{align*} Moreover, we have \begin{align} S_4(m,\sigma) &= \frac{D(\sigma, H)}{H^{m+1}} + \int_{H}^\infty \frac{\partial D(\sigma, t)}{\partial t} \frac{1}{t^{m+1}}\,dt,\tag{S4}\label{eqn:estimateS4}\\ S_5(X_0, m, \sigma) &= \frac{ D(\sigma, H)}{H^{m+1}} {X_0}^{-\frac{1}{R_0\log{H}}} + \int_{H}^\infty \frac{\partial D(\sigma, t)}{\partial t}\frac{1}{t^{m+1}} \,dt . \tag{S5}\label{eqn:estimateS5} \end{align} \end{lemma} \begin{proof} Using $|Q(T)|\leq R(T)$ from \eqref{classical_T}, we have $$|\phi(V)Q(V) - \phi(U)Q(U)| \leq \phi(V)R(V) + \phi(U)R(U)$$ by the triangle inequality. Using Lemma \ref{lem:BPT} with $\phi(\gamma) = \gamma^{-1}$ we obtain \eqref{eqn:estimateS1}, and with $\phi(\gamma) = \gamma^{-(m+1)}$ we retrieve \eqref{eqn:estimateS2} and \eqref{eqn:estimateS3}.
Suppose that $\phi(t) = o(1)$ as $t\rightarrow \infty$ and recall that $\phi'(t) \leq 0$. Then $\phi(t) N(\sigma, t) \to 0$ as $t\to \infty$, so \begin{equation*} \sum_{\substack{\gamma > H\\\sigma \leq \beta < 1}} \phi(\gamma) = - \int_{H}^\infty N(\sigma, t)\phi'(t)dt. \end{equation*} Using \eqref{zd_K} for $T\geq H$, we observe that \begin{align*} - \int_{H}^\infty N(\sigma, t)\phi'(t)dt \leq - \int_{H}^\infty D(\sigma, t) \phi'(t) dt \end{align*} and with integration by parts, we have \begin{align*} - \int_{H}^\infty D(\sigma, t) \phi'(t)dt \leq D(\sigma, H)\phi(H) + \int_{H}^\infty \frac{\partial D(\sigma, t)}{\partial t} \phi(t) dt. \end{align*} Combining these observations, we have \begin{equation}\label{eqn:abcdefghi} \sum_{\substack{\gamma > H\\\sigma \leq \beta < 1}} \phi(\gamma) \leq D(\sigma, H)\phi(H) + \int_{H}^\infty \frac{\partial D(\sigma, t)}{\partial t}\phi(t)dt. \end{equation} Under the choice $\phi(\gamma) = \gamma^{-(m+1)}$, we retrieve \eqref{eqn:estimateS4}. Noting that $X_0^{\frac{-1}{R_0 \log \gamma}}\leq 1$ for $\gamma> 1$, choosing $\phi(\gamma) = X_0^{\frac{-1}{R_0 \log \gamma}} \gamma^{-(m+1)}$ yields \eqref{eqn:estimateS5}. \end{proof} \begin{remark} Brent, Platt, and Trudgian \cite{BrentPlattTrudgian} write that $A_0$ and $A_1$ can probably be improved significantly. If so, this would improve our work. \end{remark} \subsection{The smoothing function}\label{ssec:smoothing_fn} The weight used in \cite{RamareSaouter} and \cite{KadiriLumley} was $f_1(t)=(4t(1-t))^m$; this is a bell curve over $t\in [0,1]$ with $\max_{[0,1]} f_1(t) = 1$. Larger $m$ narrows the curve, making $f_1$ smaller near the endpoints of $[0,1]$. This function can be generalised to $f_2(t)=(At^n(1-t))^m$, where $n$ is a positive integer, and $A=(n+1)^{n+1} n^{-n}$ to keep $\max_{[0,1]} f_2(t) = 1$. For $n\geq 2$, the peak of $f_2$ moves towards $t=1$, with larger $n$ amplifying this skew. We will use $f_2$, and will justify below that larger $n$ yields better results.
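As a quick numerical check on the normalisation (illustrative only; the exact statement follows from elementary calculus), the following Python sketch confirms that with $A=(n+1)^{n+1}n^{-n}$ the weight $f_2$ attains its maximum value $1$ at $t=n/(n+1)$:

```python
def f2(t, m, n):
    # f_2(t) = (A t^n (1-t))^m with A = (n+1)^{n+1} / n^n, normalised so max f_2 = 1
    A = (n + 1) ** (n + 1) / n ** n
    return (A * t ** n * (1 - t)) ** m

for m, n in [(2, 3), (2, 47), (3, 10)]:
    peak = n / (n + 1)                     # critical point of t^n (1 - t) on [0, 1]
    assert abs(f2(peak, m, n) - 1) < 1e-9  # peak value is exactly 1 by the choice of A
    assert f2(0.5, m, n) < 1               # away from the peak the weight drops below 1
```

The peak $n/(n+1)$ moves towards $t=1$ as $n$ grows, which is the skew described above.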
To justify the choice of $f=f_2$, we can compare the asymptotic behaviour of $\theta(x)-\theta(y)$ to that of its smooth approximation. To simplify the argument, let $a=0$, so that $$x=\exp(u) X(1+\delta)\qquad \text{and}\qquad y = X.$$ Using Rosser and Schoenfeld's estimates for $\theta(t)$ in \cite{RosserSchoenfeld62}, we have \begin{equation*} \theta(x) - \theta(y) = x - y + O\left( \frac{x}{\log{x}}\right), \end{equation*} implying $\theta(x) - \theta(y) \sim (\exp(u)(1+\delta) - 1)X$ as $X\to\infty$. Similarly, \begin{equation}\label{sim1} \theta(\exp(u) X(1+\delta t)) - \theta(X) \sim (\exp(u) (1+\delta t) - 1)X \end{equation} as $X\to\infty$. Following the smoothing argument, for all $t\in [0,1]$, \begin{equation*} \theta(x) - \theta(y) \geq \theta(\exp(u) X(1+\delta t)) - \theta(X). \end{equation*} Multiplying both sides by $f(t)$ and integrating over $t$, we have \begin{equation}\label{smoothed} \theta(x) - \theta(y) \geq \frac{\int_0^1 \left(\theta(\exp(u) X(1+\delta t)) - \theta(X) \right) f(t)dt}{||f||_1}. \end{equation} The right-hand side can be thought of as a smoothed approximation to $\theta(x)-\theta(y)$. A ``good'' choice for $f$ would bring this approximation closer to equivalence. This can be assessed by noting that \eqref{sim1} implies \begin{align} \frac{\int_0^1 \left(\theta(\exp(u) X(1+\delta t)) - \theta(X) \right) f(t)dt}{||f||_1} \sim X\left(\exp(u) \frac{\int_0^1 (1+\delta t) f(t)dt}{||f||_1} - 1\right). \label{sim} \end{align} If the ratio in the right-hand side of \eqref{sim} converges to $1+\delta$, then both sides of \eqref{smoothed} will be asymptotically equivalent. To establish that $f=f_2$ yields this asymptotic equivalence for large $m$ and $n$, we require a closed-form expression for $||f||_1$ with $f=f_2$. Integration by parts yields \begin{align*} ||f||_1 = \int_0^1 (At^n(1-t))^m dt = \frac{A^m m!
(mn)!}{(mn+m+1)!}. \end{align*} Now, integrating by parts $m$ times gives \begin{align*} \int_0^1 (1+\delta t) f(t)dt &= A^m \int_0^1 (t^{mn} +\delta t^{mn+1}) (1-t)^m dt \\ &= A^m m! \int_0^1 \left( \frac{(mn)! t^{mn+m}}{(mn+m)!} +\frac{(mn+1)! \delta t^{mn+m+1}}{(mn+m+1)!} \right) dt \\ &= A^m m! \left( \frac{(mn)!}{(mn+m+1)!} +\frac{(mn+1)! \delta}{(mn+m+2)!} \right). \end{align*} So, \begin{align} \frac{\int_0^1 (1+\delta t) f(t)dt}{||f||_1} &= m! \left( \frac{(mn)!}{(mn+m+1)!} +\frac{(mn+1)! \delta}{(mn+m+2)!} \right) \frac{(mn+m+1)!}{(m!)(mn)!}\nonumber \\ &= 1 +\frac{(mn+1) \delta}{mn+m+2} \sim 1 + \frac{n}{n+1}\delta \quad\text{as}\quad m\to\infty.\label{eq:expression} \end{align} Since $n/(n+1)\to 1$ as $n\to \infty$, the ratio in the right-hand side of \eqref{sim} is indeed asymptotically equivalent to $1+\delta$. Moreover, larger choices of $n$ will improve the approximation in \eqref{smoothed}. Note that if $t$ were restricted to $[a,1-a]$, there would be no change in the implications for taking $n\rightarrow \infty$. Finally, one needs a closed-form expression for $||f^{(m)}||_2$ under the choice $f=f_2$, in order to evaluate the bounds in Lemma \ref{F_bounds}. Therefore, we note that \begin{align*} &\int_0^1 \left( f^{(m)} (t)\right) ^2 dt = \left[ f^{(m)} (t) f^{(m-1)} (t) \right]_0^1 - \int_0^1 f^{(m+1)} (t) f^{(m-1)} (t) dt \\ &\quad= \ldots = (-1)^m \int_0^1 f^{(2m)} (t) f(t) dt \\ &\quad= (-1)^m \left[ f^{(2m)}(t) \left(\int f(t) dt\right) \right]_0^1 + (-1)^{m+1} \int_0^1 f^{(2m+1)} (t) \left(\int f (t) dt \right) dt \\ &\quad= \ldots = (-1)^{mn} \int_0^1 f^{(mn+m)} (t) \mathcal{F}(t) dt, \end{align*} where $$\mathcal{F}(t) = \underbrace{\int \int \ldots \int}_{\text{$mn-m$ times}} f(t) dt.$$ Since $f(t)$ is a polynomial of degree $mn+m$, we have $f^{(mn+m)} (t)=(-A)^m (mn+m)!$.
It can also be shown that \begin{align*} \int_0^1 \mathcal{F}(t) dt &= \int_0^1 A^m \sum_{k=0}^m (-1)^k {\binom{m}{k}} \frac{(mn+k)!}{(2mn-m+k)!}\ t^{2mn-m+k} dt \\ &= A^m \sum_{k=0}^m (-1)^k {\binom{m}{k}} \frac{(mn+k)!}{(2mn-m+k+1)!}. \end{align*} So, we have \begin{align*} ||f^{(m)}||_2 &= A^m \left((mn+m)! \sum_{k=0}^m (-1)^{mn+m+k} {\binom{m}{k}} \frac{(mn+k)!}{(2mn-m+k+1)!} \right)^\frac{1}{2}. \end{align*} \begin{remark} It would be possible to alternatively generalise $f_1$ to $f_3=(4t(1-t)^k)^m$ with $k\geq 1$. This would have an almost identical effect on the smoothing argument, compared to $f_2$, in that none of $\nu(f,a)$, $||f||_1$, $||f||_2$, or $||f^{(m)}||_2$ would change. The only exception is $F(0,m,\delta)$, which would be slightly smaller --- and we want it to be large. This appears to be a consequence of the way in which $x$ was parameterised. If $x$ were re-defined to be the lower end-point of the interval, and parameterised as such, it could be the case that $f_3$ would be the better choice. \end{remark} \subsection{Estimating $F(k,m,\delta)$}\label{ssec:F_k_m_delta} With $f=f_2$ we can bound $F(k,m,\delta )$ using the following update to \cite[Lem. 3.1]{KadiriLumley}. \begin{lemma}\label{F_bounds} Let $B$ be a suitable constant (defined in the proof), \begin{align*} \lambda_0(m,n,\delta) &= \frac{2(B^n - B^{n+1})^m (mn+m+1)!}{m! (mn)!} \\ \lambda_1(m,n,\delta) &= (1+\delta)^2 \frac{2 (B^n - B^{n+1})^m (mn+m+1)!}{m! (mn)!} \\ \lambda(m,n,\delta) &= \sqrt{\frac{(1+\delta)^{2m+3}-1}{\delta(2m+3)}} \frac{(mn+m+1)!}{A^m m! (mn)!} ||f^{(m)}||_2. \end{align*} Then $1\leq F(0,m,\delta) \leq 1+\delta$, \begin{align*} \lambda_0(m,n,\delta) \leq F(1,m,\delta) \leq \lambda_1(m,n,\delta),\qquad\text{and}\qquad F(m,m,\delta) \leq \lambda(m,n,\delta). \end{align*} \end{lemma} \begin{proof} The bounds on $F(0,m,\delta)$ do not change, and follow from $1\leq 1+\delta t\leq 1+\delta$.
For $F(1,m,\delta)$, we have $$\frac{||f'||_1}{||f||_1} \leq F(1,m,\delta) \leq (1+\delta)^2 \frac{||f'||_1}{||f||_1},$$ in which \begin{align*} ||f'||_1 = \int_0^1 |f'(t)| dt &= A^m m \int_0^1 (t^n-t^{n+1})^{m-1} \left| (nt^{n-1} - (n+1) t^n) \right| dt \\ &= A^m m \left( \int_0^B + \int_B^1 \right) (t^n-t^{n+1})^{m-1} \left| (nt^{n-1} - (n+1) t^n) \right| dt, \end{align*} where $B$ is chosen such that the derivative of $g(t)=t^n-t^{n+1}$ (i.e. $nt^{n-1} - (n+1) t^n$) is non-negative for $0\leq t\leq B$ and non-positive for $B\leq t\leq 1$. So, we take $B=\frac{n}{n+1}$, and find \begin{align*} ||f'||_1 &= A^m m \left( \int_0^B g(t)^{m-1} g'(t) dt - \int_B^1 g(t)^{m-1} g'(t) dt \right). \end{align*} For the first integral, \begin{align*} \int_0^B g(t)^{m-1} g'(t) dt &= g(B)^m - \int_0^B (m-1) g(t)^{m-1} g'(t) dt \\ &= \frac{1}{m} g(B)^m. \end{align*} Similar logic can be applied to the second integral, so that we have $$||f'||_1 = 2A^m (B^n - B^{n+1})^m.$$ Lastly, for $F(m,m,\delta)$, the Cauchy--Schwarz inequality implies $$F(m,m,\delta) \leq \sqrt{\int_0^1 (1+\delta t)^{2(m+1)} dt}\, \frac{||f^{(m)}||_2}{||f||_1} = \sqrt{\frac{(1+\delta)^{2m+3}-1}{\delta(2m+3)}} \frac{(mn+m+1)!}{A^m m! (mn)!}||f^{(m)}||_2.$$ \end{proof} \section{Results and Future Research}\label{sec:results} \subsection{Results} The $N_0$ and $S_0$ values can be computed after choosing $T_0$, using the list of zeros from the LMFDB database. Taking $T_0 = 104,537,615$, we have $N_0 = 2.6\cdot 10^{8}$ and $S_0 = 21.98308$. It would be possible to take $T_0$ higher, but this would restrict the range for $T_1$, which has an optimal value given $m$ and $\delta$. Moreover, this optimum will be smaller for small $x$. Hence, we choose $T_0$ to allow $T_1$ to take the full range of optimal values. With the estimates established in Section \ref{sec:smoothing_fn_etc}, we used Theorem \ref{thm:KLMainTheorem} to calculate admissible values for $\Delta$ for some range of $x$.
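The value of $N_0$ can be cross-checked against the main term $P(T)$ of the Riemann--von Mangoldt formula. The sketch below (Python; an illustration, not a substitute for counting the zeros in the LMFDB list) gives $P(T_0)\approx 2.60\cdot 10^8$, consistent with $N_0$ well within the error bound $R(T_0)$:

```python
import math

T0 = 104_537_615

# main term of the Riemann--von Mangoldt formula, evaluated at T0
P_T0 = T0 / (2 * math.pi) * math.log(T0 / (2 * math.pi)) - T0 / (2 * math.pi) + 7 / 8

print(P_T0)  # about 2.60e8, matching N_0 = 2.6e8
```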
After fixing $x_0$, we sought the largest $\Delta$ from sets of $m$, $n$, $a$, $\delta$, $T_1$, and $\sigma_0$ that satisfied (\ref{condition}). Numerical optimisation was used for all parameters except $n$ and $\sigma_0$. As previously indicated, larger $n$ should give consistently better $\Delta$, hence we would take $n$ as large as needed. For $\sigma_0$, the optimal value is independent of the other parameters, as it is the point at which the estimate in (\ref{zd_K}) is smaller than (\ref{classical_T}) for all $\sigma\geq \sigma_0$ at $T=H$. Using (\ref{zd_K}) at $\sigma=0.78$, with $A=5.8773$ and $B=3.869$, we found this to be the case at $\sigma_0=0.7804$. The difficulty in this optimisation problem centred on the fact that the optimal value of each parameter is a function of the other parameters. So, fixing any one parameter would set the optimal values of the others, and potentially conceal the `true' optimum. Our approach was to use the \texttt{NMaximize} function in \texttt{Mathematica}, and specify the \texttt{DifferentialEvolution} method, as this would test a wide range of parameter value-combinations before converging on a solution. These calculations were also run with maximum working precision. Further, to maintain precision with larger $x_0$ and $n$, some of the expressions in Section \ref{sec:smoothing_fn_etc} were re-arranged or approximated, and others were forced to evaluate in some specific order. Table \ref{table1} lists the largest order of $\Delta$ we were able to find for each $x_0$, along with the corresponding parameter values. Note that the parameters were rounded to five decimal places before calculating each $\Delta$. Also, the values of $m$ and $n$ are integers because the bounds on $F(k,m,\delta)$ require integrating some $m$ and/or $n$ times. Furthermore, $n$ could not be even if $m$ was odd, otherwise $||f^{(m)}||_2$ would not be real.
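As a sanity check on this rounding, $\Delta$ can be recomputed from the tabulated parameters via the re-parameterisation of Section \ref{sec:backgound_theory}. A sketch in Python, using the first row of Table \ref{table1}; high-precision \texttt{Decimal} arithmetic is needed because $1-y/x$ suffers catastrophic cancellation at this scale:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # enough precision to survive the cancellation in 1 - y/x

# parameters from the first row of Table 1 (rounded as described in the text)
m = 2
delta = Decimal("1.39801e-12")
a = Decimal("4.71958e-4")

u = delta / m  # u = delta/m, as in Section 2
ratio = (1 + delta * a) / (u.exp() * (1 + delta * (1 - a)))
Delta = 1 / (1 - ratio)

print(Delta)  # about 4.7716e11, the tabulated value
```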
As with those before us, we cannot claim to have found the true optimal values for each parameter, but we have sufficient reason to believe they are close. \begin{table}[H] \begin{tabular}{|l|l|l|l|l|l|l|} \hline $\log x_0$ & $m$ & $n$ & $\delta$ & $a$ & $T_1$ & $\Delta$ \\ \hline $\log(4\cdot 10^{18})$ & 2 & 47 & $1.39801\cdot 10^{-12}$ & $4.71958\cdot 10^{-4}$ & $1.04538\cdot 10^{8}$ & $4.7716\cdot 10^{11}$ \\ \hline 43 & 2 & 47 & $1.25109\cdot 10^{-12}$ & $7.18155\cdot 10^{-4}$ & $1.04538\cdot 10^8$ & $5.3337\cdot10^{11}$ \\ \hline 46 & 2 & 55 & $2.24285\cdot 10^{-13}$ & $1.68957\cdot 10^{-4}$ & $1.04538\cdot 10^8$ & $2.9730\cdot 10^{12}$ \\ \hline 50 & 2 & 61 & $2.89470\cdot 10^{-14}$ & $5.18010\cdot 10^{-4}$ & $1.04538\cdot 10^8$ & $2.3046\cdot 10^{13}$ \\ \hline 55 & 2 & 85 & $2.36015\cdot 10^{-15}$ & $3.22142\cdot 10^{-4}$ & $1.04538\cdot 10^8$ & $2.8258\cdot 10^{14}$ \\ \hline 60 & 2 & 97 & $1.93623\cdot 10^{-16}$ & $2.68169\cdot 10^{-4}$ & $1.04538\cdot 10^8$ & $3.4443\cdot 10^{15}$ \\ \hline 75 & 2 & 201 & $1.16349\cdot 10^{-19}$ & $1.32872\cdot 10^{-4}$ & $1.99909\cdot10^{12}$ & $5.7309\cdot 10^{18}$ \\ \hline 90 & 2 & 465 & $6.51627\cdot 10^{-23}$ & $5.99304\cdot 10^{-4}$ & $6.63318\cdot 10^{11}$ & $1.0238\cdot 10^{22}$ \\ \hline 105 & 2 & 609 & $3.68107\cdot 10^{-26}$ & $4.71942\cdot 10^{-4}$ & $3.00017\cdot 10^{12}$ & $1.8122\cdot 10^{25}$ \\ \hline 120 & 2 & 885 & $4.26161\cdot 10^{-29}$ & $6.99513\cdot 10^{-4}$ & $8.47291\cdot 10^{11}$ & $1.5658\cdot 10^{28}$ \\ \hline 135 & 3 & 1029 & $2.35880\cdot 10^{-32}$ & $5.14483\cdot 10^{-4}$ & $3.00017\cdot 10^{12}$ & $3.1820\cdot 10^{31}$ \\ \hline 150 & 2 & 1171 & $7.03676\cdot 10^{-36}$ & $3.08515\cdot 10^{-4}$ & $1.90772\cdot10^{12}$ & $9.4779\cdot 10^{34}$ \\ \hline \end{tabular} \caption{Admissible pairs of $x_0$ and $\Delta$ for Theorem \ref{thm:main} and the corresponding parameter values.} \label{table1} \end{table} The values listed for $m$, $n$, $\delta$, $a$, and $T_1$ in Table \ref{table1}
can be understood by their relative importance and influence. Large $\Delta$ comes from large $m$, large $a$, and small $\delta$; satisfying \eqref{condition} requires small $m$, large $\delta$, and small $a$. Thus, there are optimal values for each. Of these parameters, the order of $\delta$ is the most influential on $\Delta$. In particular, the order of $\Delta$ is roughly the reciprocal of the order of $\delta$. We have more freedom in reducing $\delta$ with $T_1$ and $n$. Although they do not directly affect $\Delta$, these two parameters allow $m$, $\delta$, and $a$ to increase $\Delta$ and still satisfy \eqref{condition}. The values for $n$ were chosen such that larger $n$ would not increase the order of $\Delta$. As the improvements from increasing $n$ gradually tapered off, we found the optimal value of $m$ to approach its minimum, 2. Thus, when \texttt{NMaximize} gave the optimal value of $m$ to be 2, it indicated we had reached an $n$ past which $\Delta$ could not be substantially improved. Figure \ref{fig:1} illustrates the power-law relationship between $x_0$ and $\Delta$ suggested by our data. For interest, a linear regression of $\log\Delta$ on $\log x_0$ suggests that for the $x_0$ in Table \ref{table1}, \begin{equation}\label{eqn:approximation} \Delta \approx C{x_0}^{0.496}, \end{equation} with $C=e^{5.896}$. Table \ref{table3} suggests that this pattern continues over some range; however, we expect the relationship to taper off eventually.
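The regression \eqref{eqn:approximation} can be reproduced directly from Table \ref{table1}. The sketch below (Python; a hand-rolled ordinary least squares of $\log\Delta$ on $\log x_0$) recovers the quoted exponent and constant:

```python
import math

# (log x0, Delta) pairs transcribed from Table 1
rows = [
    (math.log(4e18), 4.7716e11), (43, 5.3337e11), (46, 2.9730e12),
    (50, 2.3046e13), (55, 2.8258e14), (60, 3.4443e15),
    (75, 5.7309e18), (90, 1.0238e22), (105, 1.8122e25),
    (120, 1.5658e28), (135, 3.1820e31), (150, 9.4779e34),
]
xs = [x for x, _ in rows]
ys = [math.log(d) for _, d in rows]

# ordinary least squares for log(Delta) = log(C) + slope * log(x0)
n = len(rows)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
    / sum((x - xbar) ** 2 for x in xs)
logC = ybar - slope * xbar

print(round(slope, 3), round(logC, 3))  # close to 0.496 and 5.896
```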
\begin{figure} \centering \includegraphics[scale=0.5]{results-plot} \caption{The power-law relationship between $x_0$ and $\Delta$ given in \eqref{eqn:approximation}.} \label{fig:1} \end{figure} \begin{table}[H] \begin{tabular}{|l|l|l|} \hline $\log x_0$ & $\Delta$ & $\Delta_E$ \\ \hline 300 & $4.4893\cdot 10^{67}$ & $1.52623\cdot 10^{67}$ \\ \hline 600 & $6.0664\cdot 10^{132}$ & $6.40675\cdot 10^{131}$ \\ \hline \end{tabular} \caption{More admissible pairs of $x_0$ and $\Delta$ for Theorem \ref{thm:main}, compared to $\Delta_E$, the expected value of $\Delta$ given $x_0$ according to \eqref{eqn:approximation}.} \label{table3} \end{table} \subsection{Future research} There appears to be room to expand on the use of $n$ in the smoothing function. For example, to maximise the left-hand side of \eqref{condition}, we can maximise the only positive term $F(0,m,\delta)$ by increasing $n$. On the other hand, smaller $n$ will reduce $\nu(f,a)$ and other terms depending on the smoothing function $f$, which is also desirable. This indicates that there could be an optimal value of $n$, or that it would be possible to take $n\rightarrow \infty$, and simplify the constraint in (\ref{condition}). It also appears that one can generalise the method used in this paper to number fields. That is, one can obtain pairs $(x_0,\Delta)$ such that there exists a prime ideal $\mathfrak{p}$ in a number field $\mathbb{K}$ with norm $N(\mathfrak{p})$ in the interval $(x(1-\Delta^{-1}),x]$ for all $x\geq x_0$. In this generalisation, one would need to establish an analogous set-up to what we outlined in Section \ref{sec:backgound_theory}, and then estimate a sum over the non-trivial zeros of the Dedekind zeta-function associated to $\mathbb{K}$, which is denoted $\zeta_{\mathbb{K}}$. There is no generalised Riemann height, so we can only use zero-free and zero-density regions of $\zeta_{\mathbb{K}}$ to obtain this estimate.
At the time of writing, the second author provides the latest zero-free results in \cite{Lee}, and Trudgian provides the latest peer-reviewed zero-density results in \cite{TrudgianDZDR}, soon to be superseded by Hasanalizade \textit{et al.} \cite{HasanalizadeShenWong}. \bibliographystyle{amsplain}
https://arxiv.org/abs/0803.2141
Graph products of right cancellative monoids
Our first main result shows that a graph product of right cancellative monoids is itself right cancellative. If each of the component monoids satisfies the condition that the intersection of two principal left ideals is either principal or empty, then so does the graph product. Our second main result gives a presentation for the inverse hull of such a graph product. We then specialise to the case of the inverse hulls of graph monoids, obtaining what we call polygraph monoids. Among other properties, we observe that polygraph monoids are F*-inverse. This follows from a general characterisation of those right cancellative monoids with inverse hulls that are F*-inverse.
\section*{Introduction} Graph products of groups were introduced by E.\ R.\ Green in her thesis \cite{green} and have since been studied by several authors, for example, \cite{hermiller} and \cite{crisplaca}. In these two papers, passing reference is made to graph products of monoids, which are defined in the same way as graph products of groups and have been studied specifically by, among others, Veloso da Costa, and Fohry and Kuske \cite{costa1,costa2,fohry}. In this paper we are interested in graph products of right cancellative monoids. Free products and restricted direct products are special cases of graph products, and a free or (restricted) direct product of right cancellative monoids is again right cancellative. In Section~\ref{graph products}, in our first main result, we generalise these observations to obtain a corresponding result for graph products. We then concentrate on right cancellative monoids in which the intersection of two principal left ideals is either principal or empty. Following the terminology from ring theory (see for example \cite{beauregard}) we call these monoids \textit{left LCM monoids}. A useful concept in the study of these monoids is the notion of the inverse hull of a right cancellative monoid. In Section~\ref{inverse hulls}, after generalities on inverse hulls, we give several (known) characterisations of inverse hulls of left LCM monoids and use them to show that a graph product of left LCM monoids is itself left LCM. We then consider presentations for inverse hulls of graph products of left LCM monoids. In Section~\ref{polygraph monoids} we specialise the presentation to the case where each component monoid is free on one generator, obtaining what we call polygraph monoids, which generalise the polycyclic monoids discussed in \cite[Chapter~9]{lawson}. In the final section, we concentrate on left LCM monoids with two-sided cancellation.
Among these monoids we characterise those with an inverse hull that is $F^*$-inverse (see Section~\ref{F*-inverse} for the definition), and observe that, in particular, polygraph monoids are $F^*$-inverse. We assume that the reader is familiar with the basic ideas of semigroup theory (see, for example, \cite{cp61,howie,lawson}). \section{Graph products} \label{graph products} For us, a \textit{graph} $\Gamma =(V,E)$ is a set $V$ of \textit{vertices} together with an irreflexive, symmetric relation $E \subseteq V \times V$ whose elements are called \textit{edges}. In particular, $\Gamma$ is loop free. We say that $u$ and $v$ are \textit{adjacent} in $\Gamma$ if $(u,v) \in E$. For each $v\in V$, let $M_v$ be a monoid; whenever necessary we can, without loss of generality, assume the monoids $M_v$ are disjoint. We denote the free product of the $M_v$ by $\prod^{\star}M_v$ and write $x \centerdot y$ for the product of $x,y\in \prod^{\star}M_v$. We define the \textit{graph product} $\Gamma_{v\in V}M_v$ of the $M_v$ to be the quotient of $\prod^{\star}M_v$ factored by the congruence generated by the relation $$R_{\Gamma} = \{ (m\centerdot n , n\centerdot m) : m\in M_u, n\in M_v \text{ and } u,v \text{ are adjacent in } \Gamma \}.$$ Alternatively, if for each $M_v$ we have a presentation $\langle A_v \mid R_v \rangle$, then $\Gamma_{v\in V}M_v$ is the monoid with presentation $\langle A \mid R\rangle$ where $$A = \bigcup_{v\in V}A_v \text{ \ and \ } R = \bigcup_{(u,v)\in E}\{ ab=ba: a\in A_u, b\in A_v\} \ \cup \ \bigcup_{v\in V}R_v.$$ For the rest of this section we will write $M$ for $\Gamma_{v\in V}M_v$. The $M_v$ are called the \textit{components} of $M$, and we denote multiplication in both $M$ and its components by concatenation. It follows from Theorem~\ref{green} below that the latter embed naturally in the former, and so there should be no cause for confusion. 
If the graph has no edges, $M$ is the free product of the $M_v$, and at the other extreme, if the graph is complete, $M$ is their restricted direct product. A special case of interest is when all the $M_v$ are isomorphic to the additive monoid of non-negative integers. The graph product is then called a \textit{graph monoid} and denoted by $M(\Gamma)$. Graph monoids are also known variously as \textit{free partially commutative monoids}, \textit{right-angled Artin monoids}, and \textit{trace monoids}. These monoids and the corresponding groups have been extensively investigated (see, for example, \cite{diekert} for monoids, and \cite{charney} for groups). Now let $X$ be the disjoint union of the $M_v\setminus \{1\}$, and for $m \in M_v\setminus \{1\}$ write $C(m) = v$. We denote the product in the free monoid $X^*$ by $x\circ y$ to distinguish it from the products in $M$ and the $M_v$. Clearly there is a canonical surjective homomorphism $\sigma : X^* \to M$ so that each element $a$ of $M$ can be represented by an element of $X^*$, called an \textit{expression} for $a$. If $x_1 \circ x_2 \circ \dots \circ x_n \in X^*$ is an expression for $a \in M$, the $x_i$ are the \textit{components} of the expression, and if $C(x_i) = v$, then $x_i$ is a $v$-component. If $x_i$ and $x_{i+1}$ are both $v$-components, then we may obtain a shorter expression for $a$ by, in the terminology of \cite{hermiller}, \textit{amalgamating} $x_i$ and $x_{i+1}$: if $x_i,x_{i+1}\in M_v$ and $x_ix_{i+1} = 1$, delete $x_i\circ x_{i+1}$; otherwise replace it by the single element $y_i$ of $M_v$ where $y_i = x_ix_{i+1}$ in $M_v$. If $(C(x_j), C(x_{j+1})) \in E$ for some $j$, then we may obtain a different expression for $a$ by replacing $x_j\circ x_{j+1}$ by $x_{j+1}\circ x_j$. Again we follow \cite{hermiller} and call such a move a \textit{shuffle}. Two expressions are \textit{shuffle equivalent} if one can be obtained from the other by a sequence of shuffles.
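In the graph-monoid case (each $M_v$ free on one generator), shuffle equivalence of two words is decidable by the classical projection criterion of trace theory (see \cite{diekert}): two words represent the same element of $M(\Gamma)$ if and only if they contain the same letters with the same multiplicities, and their projections onto every pair of non-adjacent letters agree. A minimal sketch of this criterion (ours; the function and its interface are not from the source):

```python
from collections import Counter
from itertools import combinations

def shuffle_equivalent(u, v, adjacent):
    """Decide whether the words u and v represent the same element of the
    graph monoid M(Gamma), where adjacent(a, b) reports whether the
    generators a and b label adjacent vertices (and hence commute)."""
    if Counter(u) != Counter(v):       # same letters, same multiplicities?
        return False
    # Projections onto each pair of NON-commuting letters must agree.
    for a, b in combinations(sorted(set(u)), 2):
        if adjacent(a, b):
            continue
        if [x for x in u if x in (a, b)] != [x for x in v if x in (a, b)]:
            return False
    return True
```

For instance, if only $a$ and $b$ commute, then `"abc"` and `"bac"` are shuffle equivalent while `"ac"` and `"ca"` are not.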
A \textit{reduced expression} is an element $x_1 \circ x_2 \circ \dots \circ x_n \in X^*$ which satisfies \begin{itemize} \item [$(i)$] whenever $i < j$ and $C(x_i) = C(x_j)$, there exists $k$ with $i < k < j$ and $(C(x_i), C(x_k)) \notin E$. \end{itemize} Notice that no amalgamation is possible in a reduced expression, and that a shuffle of a reduced expression is again a reduced expression. The following is the monoid version of a result of Green \cite{green}, which can also be deduced easily from \cite[Theorem~6.1]{costa1}. \begin{thm} \label{green} Every element of $M$ is represented by a reduced expression. Two reduced expressions represent the same element of $M$ if and only if they are shuffle equivalent. \end{thm} The \textit{length} of an expression is its length as an element of the free monoid $X^*$; it is clear that shuffle equivalent expressions have the same length, and so, in view of the theorem, all reduced expressions representing a given element of $M$ have the same length. We shall use this observation without further comment, but we note that it also allows us to define the \textit{length} of an element of $M$ to be the length of any reduced expression representing it. As an easy consequence of the notion of length we have the following corollary, which we record for later use. First, we recall that a subset $U$ of a monoid $M$ is \textit{right unitary} in $M$ if for all $m\in M$ and $u\in U$, $mu \in U$ implies $m\in U$. There is a dual notion of \textit{left unitary}, and $U$ is \textit{unitary} in $M$ if it is both right and left unitary. \begin{cor}\label{unitary} Each $M_v$ is a unitary submonoid of $M$. \end{cor} \begin{proof} If $c\in M_v$, $a\in M$ and $ac\in M_v$, then $ac$ must have length 1 (or zero) and it follows that $a\in M_v$. Thus $M_v$ is right unitary in $M$, and similarly, it is left unitary. \end{proof} It is natural to ask how properties of $M$ are related to the corresponding properties of the $M_v$.
Several such questions are considered in \cite{costa1,costa2,fohry}. Our interest is in right cancellative monoids which do not seem to have been studied in this context. If $M$ is right cancellative, then so too are the $M_v$ since they are submonoids of $M$. Our first aim is to show the converse, that is, if all the $M_v$ are right cancellative, then so is $M$. Towards this end we introduce the following terminology. Let $a, a' \in M$, $v \in V$ and $c \in M_v \setminus \lbrace 1 \rbrace$. We say that $a$ has \textit{final $v$-component} $c$ and \textit{final $v$-complement} $a'$ if $a$ admits a reduced expression $a_1 \circ a_2 \circ \dots \circ a_m \circ c$ such that $a_1 a_2 \dots a_m = a'$. We say that $a$ has \textit{final $v$-component $1$} and \textit{final $v$-complement $a$} if $a$ has a reduced expression $a_1 \circ \dots \circ a_m$ such that either \begin{itemize} \item[(i)] $C(a_j) \neq v$ for all $j$; or \item[(ii)] there exists $k$ with $(C(a_k), v) \notin E$ and $C(a_j) \neq v$ for all $j \geq k$. \end{itemize} Of course, we may define the dual notions of \textit{initial $v$-component} and \textit{initial $v$-complement} in the obvious way. \begin{prop} \label{comp} For each vertex $v$, each element of $M$ has exactly one final $v$-component and exactly one final $v$-complement. \end{prop} \begin{proof} For existence, suppose $x \in M$ and let $$a_1 \circ \dots \circ a_m$$ be a reduced expression for $x$. If conditions (i) or (ii) apply, then, by definition, $x$ has final $v$-component $1$ and final $v$-complement $x$. Otherwise, there is a largest integer $j$ with $C(a_j) = v$. If $(C(a_k), v) \notin E$ for some $k > j$, then condition (ii) holds. 
Hence $(C(a_k), v) \in E$ for all $k > j$, and it follows easily that one can shuffle $a_j$ to the end to obtain a reduced expression $$a = a_1 \circ \dots \circ a_{j-1} \circ a_{j+1} \circ \dots \circ a_m \circ a_j$$ so that $x$ has final $v$-component $a_j$ and final $v$-complement $a_1 \dots a_{j-1} a_{j+1} \dots a_m$. For uniqueness, suppose first for a contradiction that $x$ has distinct final $v$-components $1$ and $d \neq 1$. Then $x$ has reduced expressions $a = a_1 \circ \dots \circ a_m$ and $b = b_1 \circ \dots \circ b_n \circ d$ where either \begin{itemize} \item[(i)] $C(a_j) \neq v$ for all $j$; or \item[(ii)] there exists $k$ with $(C(a_k), v) \notin E$ and $C(a_j) \neq v$ for all $j \geq k$. \end{itemize} By Theorem~\ref{green}, $b$ can be obtained from $a$ by a sequence of shuffles. But clearly in case (i) such a shuffle can never introduce a $v$-component, while in case (ii) no such shuffle can change the fact that there exists $a_k$ with $(C(a_k), v) \notin E$ and $C(a_j) \neq v$ for all $j \geq k$. Since $b$ does not satisfy either of the conditions (i) or (ii), this gives a contradiction. Suppose now that $x$ has reduced expressions $$a = a_1 \circ \dots \circ a_m \circ c$$ and $$b = b_1 \circ \dots \circ b_m \circ d$$ where $c,d \in M_v$, $c \neq 1$, $d \neq 1$. By Theorem~\ref{green}, $b$ can be obtained from $a$ by a sequence of shuffles. It is clear that no such shuffle can change the value of the last $v$-component, so we must have $c = d$. We now turn our attention to showing that final $v$-complements are unique. If the (unique) final $v$-component of $x$ is $1$ then by definition we have that $x$ is the (unique) final $v$-complement of itself, so there is nothing to prove. So suppose $x$ has final $v$-component $c \neq 1$, and that there are reduced expressions $$a = a_1 \circ \dots \circ a_m \circ c$$ and $$b = b_1 \circ \dots \circ b_m \circ c$$ for $x$. Now by Theorem~\ref{green}, there is a sequence of shuffles which takes $a$ to $b$.
Clearly just by removing those applications which involve the final $v$-component $c$ of the word, we obtain a sequence of shuffles which can be applied to $a_1 \circ \dots \circ a_m$ to yield $b_1 \circ \dots \circ b_m$. Since these expressions are reduced, it follows by Theorem~\ref{green} again that $a_1 \circ \dots \circ a_m$ and $b_1 \circ \dots \circ b_m$ represent the same element. Thus, $x$ has exactly one final $v$-complement. \end{proof} \begin{lem} \label{component lemma} Let $a \in M$ and $c \in M_v$. Suppose $a$ has final $v$-component $d$ and final $v$-complement $a'$. Then $ac$ has final $v$-component $dc$ and final $v$-complement $a'$. \end{lem} \begin{proof} Suppose first that $a$ has final $v$-component $d \neq 1$. Then $a$ has a reduced expression of the form \begin{equation}\label{eq:red} a_1 \circ a_2 \circ \dots \circ a_m \circ d \end{equation} where $a_1 \circ \dots \circ a_m$ is a reduced expression for $a'$. If $dc \neq 1$ then clearly $$a_1 \circ a_2 \circ \dots \circ a_m \circ (dc)$$ is a reduced expression for $ac$, from which the required result is immediate. On the other hand, if $dc = 1$ then $$a_1 \circ a_2 \circ \dots \circ a_m$$ is a reduced expression for $ac = a'dc = a'$. It follows easily from the fact that \eqref{eq:red} is reduced that either this expression contains no $v$-components, or there exists $k$ such that $(C(a_k), v) \notin E$ and $C(a_j) \neq v$ for all $j \geq k$. Thus, $ac$ has final $v$-component $1$ and final $v$-complement $a'$, as required. Now consider the case in which $a$ has final $v$-component $d = 1$. Then $a$ has a reduced expression $$a_1 \circ a_2 \circ \dots \circ a_m$$ where $a = a' = a_1 a_2 \dots a_m$ and either \begin{itemize} \item[(i)] $C(a_j) \neq v$ for all $j$; or \item[(ii)] there exists $k$ with $(C(a_k), v) \notin E$ and $C(a_j) \neq v$ for all $j \geq k$.
\end{itemize} In both cases, it is easy to check that $a_1 \circ a_2 \circ \dots \circ a_m \circ c$ is a reduced expression for $ac$, from which it follows that $ac$ has final $v$-component $dc = c$ and final $v$-complement $a = a'$ as required. \end{proof} \begin{thm} \label{cancellative} A graph product of right [respectively left, two-sided] cancellative monoids is right [respectively left, two-sided] cancellative. \end{thm} \begin{proof} We prove the result for right cancellative monoids. The corresponding result for left cancellative monoids is proved similarly using initial $v$-components and complements, and the result for cancellative monoids is an immediate consequence of the one-sided results. First observe that, since the graph product monoid is generated by elements from the embedded components, it suffices to show that elements of the embedded components are right cancellable, that is, that $ac = bc$ implies $a = b$ whenever $c$ belongs to $M_v$ for some $v \in V$. Suppose that $a$ and $b$ have (unique) final $v$-components $d$ and $e$ respectively, and (unique) final $v$-complements $a'$ and $b'$ respectively. Then by the preceding lemma, $ac$ has final $v$-component $dc$ and final $v$-complement $a'$, while $bc$ has final $v$-component $ec$ and final $v$-complement $b'$. Since $ac = bc$, we deduce from Proposition~\ref{comp} that $dc = ec$ and $a' = b'$. But $d$, $e$ and $c$ lie in $M_v$, which by assumption is right cancellative, so we deduce that $d = e$, and hence that $a = a' d = b' e = b$ as required to complete the proof. \end{proof} We next consider the question of whether a graph product of monoids each of which is embeddable in a group is itself embeddable in a group. A positive answer is a consequence of the next proposition which gives a universal property defining the graph product. We retain the notation of this section.
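Looking back at the proofs above: in the graph-monoid case, the final $v$-component and final $v$-complement of Proposition~\ref{comp} can be read off from any word representing the element. A hedged sketch (ours, not from the source), returning the component as a word $v^k$, with the empty word standing for $1$:

```python
def final_component(w, v, adjacent):
    """Final v-component and final v-complement of the graph-monoid
    element represented by the word w; the component is returned as
    the word v**k, the empty word standing for 1."""
    # Rightmost letter that blocks v: neither v itself nor adjacent to v.
    block = -1
    for i, x in enumerate(w):
        if x != v and not adjacent(x, v):
            block = i
    # Every v to the right of the blocker can be shuffled to the end
    # and amalgamated into a single v-component.
    k = sum(1 for x in w[block + 1:] if x == v)
    complement = w[:block + 1] + "".join(x for x in w[block + 1:] if x != v)
    return v * k, complement
```

For example, if $a$ is adjacent to $v$ but $c$ is not, the word $vav$ has final $v$-component $v^2$ and complement $a$, while $vcv$ has final $v$-component $v$ and complement $vc$.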
\begin{prop} \label{homomorphisms1} Let $N$ be a monoid and suppose that for each $v\in V$ there is a homomorphism $\f_v:M_v \to N$ such that $$ (x\f_v)(y\f_u) = (y\f_u)(x\f_v) \text{ for all } (u,v) \in E \text{ and all } x\in M_v, y\in M_u. \qquad (*)$$ Put $M=\Gamma_{v\in V}M_v$. Then there is a unique homomorphism $\f:M \to N$ such that $x\f = x\f_v$ for all $x\in M_v$ and all $v\in V$. \end{prop} \begin{proof} For each $v\in V$, let $\langle A_v \mid R_v\rangle$ be a presentation for $M_v$, and let $\langle A \mid R \rangle$ be the presentation for $M$ as at the beginning of the section. Let $\h:A\to N$ be the function given by $a\h = a\f_v$ where $v$ is the unique vertex with $a\in A_v$. Since each $\f_v$ is a homomorphism, $\h$ respects the relations in each $R_v$, and by hypothesis, $\h$ also respects all the other relations in $R$. Hence there is a unique homomorphism $\f :M\to N$ which restricts to $\h$ on $A$ and hence to $\f_v$ on each $M_v$. \end{proof} An immediate consequence is the first part of the following result. \begin{prop} \label{homomorphisms2} Let $\G$ be a graph, $V$ its set of vertices and $\{ M_v\}_{v\in V},\{ N_v\}_{v\in V}$ families of monoids. Let $M=\Gamma_{v\in V}M_v$ and $N=\Gamma_{v\in V}N_v$. Then, given homomorphisms $\f_v:M_v \to N_v$ for each $v\in V$, there is a unique homomorphism $\f : M \to N$ such that $m_v\f = m_v\f_v$ for all $m_v\in M_v$ and all $v\in V$. Moreover, if each $\f_v$ is injective, then so is $\f$. \end{prop} \begin{proof} All that remains is to prove the final assertion. Let $a,b \in M$ with $a\f = b\f$ and suppose that $a,b$ have reduced expressions $a_1 \circ \dots \circ a_m$ and $b_1 \circ \dots \circ b_n$ respectively where $a_i \in M_{u_i}$ and $b_j \in M_{v_j}$.
Then $$ (a_1\f_{u_1}) \dots (a_m\f_{u_m}) = a\f = b\f = (b_1\f_{v_1}) \dots (b_n\f_{v_n})$$ and since the $\f_v$ are injective, we have that both $(a_1\f_{u_1}) \circ \dots \circ (a_m\f_{u_m})$ and $(b_1\f_{v_1}) \circ \dots \circ (b_n\f_{v_n})$ are reduced expressions for $a\f$. Hence they are shuffle equivalent so that $m = n$ and for some permutation $\s$ we have $a_i\f_{u_i} = b_{i\s}\f_{v_{i\s}}$ for all $i$. Since $\operatorname{im} \f_v \subseteq N_v$ for all $v$, we see that $u_i = v_{i\s}$ for each $i$, and so $a_i = b_{i\s}$ since $\f_{u_i}$ is injective. It is now clear that $a_1 \circ \dots \circ a_m$ and $b_1 \circ \dots \circ b_n$ are shuffle equivalent so that $a = b$ and hence $\f$ is injective. \end{proof} The following corollary, which can also be easily proved directly, is now immediate. \begin{cor} \label{groupembedding} Let $\Gamma$ be a graph with vertex set $V$. If for each $v\in V$, the monoid $M_v$ is embeddable in a group $G_v$, then the graph product $\Gamma_{v\in V}M_v$ is embeddable in the group $\Gamma_{v\in V}G_v$. \end{cor} In the next section we use ideas about inverse hulls to demonstrate another result about the closure of a class of right cancellative monoids under graph products. Specifically we consider right cancellative monoids which satisfy the condition that the intersection of two principal left ideals is either principal or empty. A right cancellative monoid satisfying this condition is called a \textit{left LCM monoid}. We show that a graph product of left LCM monoids is again a left LCM monoid. The reason for the terminology which is borrowed from ring theory is that the defining condition may also be expressed in terms of divisibility. For a right cancellative monoid $C$ and $a,b\in C$, we say that $a$ is a \textit{left multiple} of $b$ (and that $b$ is a \textit{right factor or divisor} of $a$) if $a=cb$ for some $c\in C$.
If $m$ is a left multiple of both $b$ and $d$, we say it is a \textit{common left multiple} of these elements, and such a common left multiple $m$ is a \textit{least common left multiple} (LCLM) of $b$ and $d$ if every common left multiple of $b$ and $d$ is a left multiple of $m$. Equivalently, $m$ is an LCLM of $b$ and $d$ if and only if $$ Cb \cap Cd = Cm.$$ Least common left multiples are sometimes known as left least common multiples. We note that a left LCM monoid is a right cancellative monoid in which any two elements having a common left multiple have an LCLM. In ring theory (see \cite{beauregard}) an integral domain (not necessarily commutative) is called a \textit{left LCM domain} if the intersection of any two principal left ideals is principal. Thus an integral domain $R$ is a left LCM domain if and only if the cancellative monoid of its non-zero elements is a left LCM monoid. Similarly, one defines \textit{common right factors} and \textit{highest common right factors} (HCRF). An element $d$ of $C$ is an HCRF of $a$ and $b$ in $C$ if and only if $Cd$ is the least upper bound of $Ca$ and $Cb$ in the partially ordered set of principal left ideals of $C$. We remark that LCLMs and HCRFs are not uniquely determined in general being defined only up to left multiplication by a unit. If $C$ is actually cancellative, \textit{common right multiple, common left factor}, LCRM and HCLF are defined symmetrically. Examples of right cancellative LCM monoids abound: the right locally Garside monoids of Dehornoy \cite{dehornoy1} which, as he points out, include all Artin monoids and all Garside monoids; from ring theory, we have already mentioned the multiplicative monoid of non-zero elements of any LCM domain.
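A simple worked instance (ours, not from the source): any free monoid $C = X^*$ is a left LCM monoid, since a principal left ideal $Cb$ consists of all words ending in $b$.

```latex
% Worked instance (ours): for C = X^* a free monoid and b, d \in C,
\[
  Cb \cap Cd \;=\;
  \begin{cases}
    Cd & \text{if $b$ is a suffix of $d$},\\
    Cb & \text{if $d$ is a suffix of $b$},\\
    \emptyset & \text{otherwise};
  \end{cases}
\]
% e.g. with X = {x,y}: C(xy) \cap Cy = C(xy), while Cx \cap Cy = \emptyset.
```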
Examples of LCM monoids which are right cancellative but not left cancellative are provided by principal left ideal right cancellative monoids; specific examples are the monoids of ordinal numbers less than $\omega^{\a}$ (where $\a$ is any ordinal number greater than 1) under the dual of the usual operation of ordinal addition. \section{Inverse hulls} \label{inverse hulls} With any right cancellative monoid $C$, one can associate an inverse monoid called the inverse hull of $C$. Before giving the definition we recall some of the basic concepts of inverse monoids. For more on the general theory of inverse monoids see \cite[Chapter~5]{howie} and \cite{lawson}. An \emph{inverse monoid} is a monoid $M$ such that for all $a\in M$ there is a unique $b\in M$ such that $aba = a$ and $bab = b$. The element $b$ is the \emph{inverse} of $a$ and is denoted by $a^{-1}$. It is worth noting that $(a^{-1})^{-1} =a$ and $(ab)^{-1} = b^{-1}a^{-1}$ for all $a,b \in M$. The set of idempotents $E(M)$ of $M$ forms a commutative submonoid, referred to as the \emph{semilattice of idempotents} of $M$. In fact, a monoid $M$ is an inverse monoid if and only if $E(M)$ is a commutative submonoid and for every $a \in M$, there is an element $b\in M$ such that $aba =a$ (that is, $M$ is regular). An \emph{inverse submonoid} of an inverse monoid $M$ is simply a submonoid $N$ closed under taking inverses. For a non-empty set $X$, a \emph{partial permutation} is a bijection $\s:Y\to Z$ for some subsets $Y,Z$ of $X$. We allow $Y$ and $Z$ to be empty so that the empty function is regarded as a partial permutation. The set of all partial permutations of $X$ is made into a monoid by using the usual rule for composition of partial functions; it is called the \emph{symmetric inverse monoid} on $X$ and denoted by $\ensuremath{\mathscr{I}}_X$. 
That it is an inverse monoid follows from the fact that if $\s$ is a partial permutation of $X$, then so is its inverse (as a function) $\s^{-1}$, and this is the inverse of $\s$ in $\ensuremath{\mathscr{I}}_X$ in the sense above. The idempotents of $\ensuremath{\mathscr{I}}_X$ are the \emph{partial identities} $\e_Y^{}$ for all subsets $Y$ of $X$ where $\e_Y^{}$ is the identity map on the subset $Y$. It is clear that, for $Y,Z \subset X$, we have $\e_Y^{}\e_Z^{} = \e_{Y\cap Z}^{}$ and hence that $E(\ensuremath{\mathscr{I}}_X)$ is isomorphic to the Boolean algebra of all subsets of $X$. The concept of an inverse hull was introduced by Rees \cite{rees1} to give an alternative proof of Ore's theorem about the existence of a group of fractions of a left (or right) Ore cancellative monoid $C$. The name was introduced in \cite{cp61}, where the inverse hull of a right cancellative semigroup $C$ is defined. A detailed study of the inverse hull is carried out in \cite{cherubini} where the authors use a definition slightly different from that in \cite{cp61}. However, the two definitions coincide in the case of inverse hulls of right cancellative monoids, the only case that we consider. After defining what we mean by an inverse hull and recalling some general results, we show that a graph product of left LCM monoids is also a left LCM monoid, and continue by finding a presentation for the inverse hull of such a graph product in terms of presentations for its constituent monoids. As a special case we obtain a presentation of the inverse hull of a graph monoid. \subsection{Generalities about inverse hulls} \label{generalities} As well as being significant in the question of embeddability in a group, the inverse hull of a right cancellative semigroup is also important in describing the structure of bisimple, 0-bisimple, simple and 0-simple inverse semigroups. Let $C$ be a right cancellative monoid.
For an element $a$ of $C$, the mapping $\r_a$ with domain $C$ defined by $$ x\r_a = xa$$ is the \textit{inner right translation} of $C$ determined by $a$. It is injective since $C$ is right cancellative, and so it can be regarded as a member of $\ensuremath{\mathscr{I}}_C$. The inverse submonoid of $\ensuremath{\mathscr{I}}_C$ generated by all the inner right translations of $C$ is the \textit{inverse hull} $IH(C)$ of $C$. The inverse of $\r_a$ is, of course, the partial map $\r_a^{-1}:Ca \to C$, so if $C$ is not a group, then $IH(C)$ contains maps which are not total. The mapping $\eta:C\to IH(C)$ given by $a\eta = \r_a$ is an embedding of $C$ into $IH(C)$. Moreover, $C\eta$ is the right unit subsemigroup of $IH(C)$, that is, it consists of those elements $\r \in IH(C)$ for which there is an element $\t$ with $\r\t = 1_C$. The group of units of $IH(C)$ is $G\eta$ where $G$ is the group of units of $C$. The left unit submonoid $L$ of $IH(C)$ consists of the elements $\r_c^{-1}$ for $c\in C$. For notational convenience, we introduce a left cancellative monoid $C^{-1}$ containing $G$ as its group of units and such that there is an anti-isomorphism $c\mapsto c^{-1}$ from $C$ to $C^{-1}$. Here if $c\in G$, then $c^{-1}$ is its inverse in $G$, and if $c \notin G$, then $c^{-1}$ is a new symbol. We can now extend $\eta$ from $G$ to an isomorphism, also denoted by $\eta$, from $C^{-1}$ to $L$ given by $c^{-1}\eta = \r_c^{-1}$. We remark that if $C$ is a group, then every inner right translation is a permutation of $C$ and $\eta$ is just the Cayley representation of $C$. The empty mapping $\emptyset$ is sometimes a member of $IH(C)$. When it is, it is the zero of $IH(C)$. For ease of expression of some results, we often state them in terms of $IH^0(C)$, where we define $IH^0(C)$ to be the submonoid $IH(C)\cup \{\emptyset\}$ of $\ensuremath{\mathscr{I}}_C$. 
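To make the calculus of inner right translations concrete, here is a small executable sketch (ours, not from the source) for the special case where $C$ is a free monoid on a finite alphabet: a non-zero element $\r_a^{-1}\r_b$ is recorded as the pair of strings $(a,b)$, and composition of the partial maps is computed directly from the definitions (for a one-letter alphabet this recovers the bicyclic monoid).

```python
# A non-zero element rho_a^{-1} rho_b of IH^0(C), for C a free monoid,
# is stored as the pair of strings (a, b); None encodes the empty map 0.
def mul(p, q):
    """Compose rho_a^{-1} rho_b with rho_c^{-1} rho_d in IH^0(C)."""
    if p is None or q is None:
        return None
    (a, b), (c, d) = p, q
    if c.endswith(b):                  # Cb ∩ Cc = Cc, say c = w·b
        w = c[: len(c) - len(b)]
        return (w + a, d)              # rho_{wa}^{-1} rho_d
    if b.endswith(c):                  # Cb ∩ Cc = Cb, say b = w·c
        w = b[: len(b) - len(c)]
        return (a, w + d)              # rho_a^{-1} rho_{wd}
    return None                        # Cb ∩ Cc = ∅, so the product is 0
```

With this representation $\r_x = (\texttt{""},\texttt{"x"})$ and $\r_x^{-1} = (\texttt{"x"},\texttt{""})$, so $\r_x\r_x^{-1}$ is the identity pair while $\r_x^{-1}\r_x$ is a non-identity idempotent.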
Clearly, if $a_1,\dots,a_n,b_1,\dots, b_n$ are elements of $C$, then $\r = \r_{a_1}\r_{b_1}^{-1}\dots \r_{a_n}\r_{b_n}^{-1}$ is a member of $IH(C)$. It is easy to verify that every element of $IH(C)$ can be expressed in this way (see \cite[Lemma~2.5]{cherubini}) using the fact that if $a,b\in C$, then $\r_a\r_b =\r_{ab}$ and $\r_a^{-1}\r_b^{-1} = \r_{ba}^{-1}$. Thus every element can be written in the form $(a_1^{}\eta)(b_1^{-1}\eta)\dots (a_n^{}\eta)(b_n^{-1}\eta)$. It is noted in \cite{cp61} that the inverse hull of an infinite cyclic monoid $\{ x\}^*$ is the bicyclic monoid. This example was generalised by Nivat and Perrot in \cite{np} where they introduced polycyclic monoids as the inverse hulls of free monoids. They give several characterisations of polycyclic monoids, and in particular, show that the polycyclic monoid $P_X$ on a set $X$ with more than one element has the following presentation as a monoid with zero: $$ \langle X \cup X^{-1} \mid xx^{-1}=1,xy^{-1}=0 \text{ for } x\neq y\ (x,y\in X)\rangle.$$ More information on polycyclic monoids can be found in \cite[Chapter~9]{lawson} and \cite{meakinsapir}. An independent study of the inverse hull of the free monoid on an arbitrary nonempty set $X$ was carried out in \cite{knox} where Knox describes it as a Rees quotient of a semidirect product of a semilattice by the free group on $X$. Further examples of inverse hulls are calculated in \cite{mcmc}. We recall that a compatible partial order called the natural partial order is defined on any inverse semigroup $S$ by the rule that $a\leq b$ if $a =eb$ for some idempotent $e$. For later use, we characterise this relation between certain elements of an inverse hull in the following well known lemma. See \cite{lawson2} for a version of this and its corollary. \begin{lem} \label{leq} Let $C$ be a right cancellative monoid and let $a,b,c,d\in C$. 
Then in $IH(C)$, $$ \r_a^{-1}\r_b \leq \r_c^{-1}\r_d \text{ if and only if } a=xc \text{ and } b=xd \text{ for some } x\in C.$$ \end{lem} \begin{proof} If $\r_a^{-1}\r_b \leq \r_c^{-1}\r_d$, then $a\in \operatorname{dom} \r_a^{-1}\r_b$, so $a\in \operatorname{dom} \r_c^{-1}\r_d$, that is, $a\in Cc$, say $a=xc$. Then $$ b= a\r_a^{-1}\r_b = a\r_c^{-1}\r_d = xd.$$ Conversely, $$\r_a^{-1}\r_b = \r_c^{-1}\r_x^{-1}\r_x\r_{d} \leq \r_c^{-1}\r_d.$$ \end{proof} \begin{cor} \label{equal} Let $C$ be a right cancellative monoid and let $a,b,c,d\in C$. Then in $IH(C)$, $$ \r_a^{-1}\r_b = \r_c^{-1}\r_d \text{ if and only if } a=uc \text{ and } b=ud \text{ for some unit } u\in C.$$ \end{cor} \begin{proof} By Lemma~\ref{leq}, there are elements $x,y\in C$ such that $a=xc,\, b=xd,\, c=ya$ and $d=yb$. Hence $a=xya$ and by right cancellation, $1=xy$. It follows that $x$ and $y$ are units. \end{proof} Recall that in any monoid $M$, Green's relation $\mathscr{R}$ is defined by the rule that $a\mathscr{R} b$ if and only if $aM = bM$. The relation $\mathscr{L}$ is the left-right dual of $\mathscr{R}$; we define $\mathscr{H} = \mathscr{R} \cap \mathscr{L}$ and $\mathscr{D} = \mathscr{R}\ \vee \mathscr{L}$. In fact, by \cite[Proposition~2.1.3]{howie}, $\mathscr{D} = \mathscr{R} \circ \mathscr{L} = \mathscr{L} \circ \mathscr{R}$. Finally, $a\ensuremath{\mathscr{J}} b$ if and only if $MaM = MbM$. In an inverse monoid, $a\mathscr{R} b$ if and only if $aa^{-1} = bb^{-1}$ and similarly, $a\mathscr{L}b$ if and only if $a^{-1}a = b^{-1}b$. In $\ensuremath{\mathscr{I}}_X$, we have $\r\ensuremath{\mathscr{R}}\s$ if and only if $\operatorname{dom} \r = \operatorname{dom} \s$, and $\r\L\s$ if and only if $\operatorname{im} \r = \operatorname{im} \s$ \cite[Exercise~5.11.2]{howie}. The following lemma thus follows immediately from \cite[Proposition~3.2.11]{lawson}. \begin{lem} Let $C$ be a right cancellative monoid. 
Then, for elements $\r,\s$ of $IH^0(C)$, \begin{enumerate} \item [(1)] $\r\ensuremath{\mathscr{R}}\s$ in $IH^0(C)$ if and only if $\operatorname{dom} \r = \operatorname{dom} \s$, \item [(2)] $\r\L\s$ in $IH^0(C)$ if and only if $\operatorname{im} \r = \operatorname{im} \s$. \end{enumerate} \end{lem} We mention that $\L$ is a right congruence and $\ensuremath{\mathscr{R}}$ is a left congruence. More information on Green's relations can be found in \cite{howie,lawson}. Finally, an inverse monoid (or semigroup) is $0$-\textit{bisimple} if all its non-zero elements are $\ensuremath{\mathscr{D}}$-related; it is \textit{bisimple} if all its elements are $\ensuremath{\mathscr{D}}$-related. Thus if $a,b$ are nonzero elements of a $0$-bisimple inverse monoid $M$, then there are elements $c,d\in M$ such that $a\L c\ensuremath{\mathscr{R}} b$ and $a\ensuremath{\mathscr{R}} d \L b$. In \cite{np}, it is pointed out that the equivalence of $(1)$ and $(3)$ in the next proposition can be obtained by slightly modifying the theory of Clifford \cite{clifford}. A proof of the whole result can be extracted from \cite{mcal}, but for the convenience of the reader and completeness we give an elementary proof. \begin{prop} \label{0-bisimple} The following are equivalent for a right cancellative monoid $C$: \begin{enumerate} \item [$(1)$] $IH^0(C)$ is $0$-bisimple, \item [$(2)$] The domain of each non-zero element of $IH^0(C)$ is a principal left ideal, \item [$(3)$] $C$ is a left LCM monoid, \item [$(4)$] Every non-zero element of $IH^0(C)$ can be written in the form $\r_c^{-1}\r_d$ for some $c,d\in C$. \end{enumerate} \end{prop} \begin{proof} Suppose that $(1)$ holds, and let $\r$ be a non-zero element of $IH^0(C)$. Then $\r$ is $\ensuremath{\mathscr{D}}$-related to the identity, and so $\ensuremath{\mathscr{R}}$-related to an element $\s$ of the left unit submonoid. 
Hence $\operatorname{dom}\r= \operatorname{dom}\s$ and since $\s =\r_a^{-1}$ for some $a\in C$, we have $\operatorname{dom}\r = Ca$ so that $(2)$ holds. If $(2)$ holds, and $a,b\in C$, then since $Ca\cap Cb$ is the domain of $\r_a^{-1}\r_a\r_b^{-1}\r_b$, we see that $Ca \cap Cb$ is either principal or empty. Thus $(3)$ holds. Now suppose that $(3)$ holds and let $\r$ be a non-zero element of $IH^0(C)$. We have noted that $\r = \r_{a_1}\r_{b_1}^{-1}\dots \r_{a_n}\r_{b_n}^{-1}$ for some $a_i,b_i\in C$, and so it is enough to show that if $c,d\in C$ and $\r_c\r_d^{-1}$ is non-zero, then for some $a,b\in C$ we have $\r_c\r_d^{-1} = \r_a^{-1}\r_b$. Now the domain of $\r_c\r_d^{-1}$ is $(Cc \cap Cd)\r_c^{-1}$, and by assumption, $Cc \cap Cd = Cs$ for some $s\in C$. Thus $s=rc=td$ for some $r,t\in C$ and an easy calculation shows that $\r_c\r_d^{-1} = \r_r^{-1}\r_t$. Finally, if $(4)$ holds, let $\r = \r_a^{-1}\r_b$ be a non-zero element of $IH^0(C)$. Now $\r_a^{-1}$ is $\L$-related to the identity, and since $\L$ is a right congruence, we get $\r\L\r_b$. But $\r_b\R1$, so $\r$ is $\ensuremath{\mathscr{D}}$-related to the identity, and $(1)$ follows. \end{proof} It is worth noting that if $C$ is a left LCM monoid, then the product of two non-zero elements in $IH^0(C)$ is given by $$ (\r_a^{-1}\r_b)(\r_c^{-1}\r_d) = \begin{cases} 0 & \text{if } Cb \cap Cc = \emptyset,\\ \r_{sa}^{-1}\r_{td} & \text{if } Cb \cap Cc = Csb = Ctc. \end{cases} $$ Although it is not relevant to the present paper, it is worth noting that every $0$-bisimple inverse monoid $M$ is isomorphic to $IH^0(C)$ where $C$ is the right unit submonoid of $M$ \cite{np}, so that the preceding proposition applies to all such monoids. We make use of the proposition to prove the next theorem, for which we also need the following lemma.
\begin{lem} \label{intersection1} Let $\G = (V,E)$ be a graph and, for each $v\in V$, let $C_v$ be a right cancellative monoid, and $C= \G_{v\in V}C_v$. Let $c,d$ be nonunits in $C_v,C_u$ respectively where $(u,v)\in E$. Then $$ Cc \cap Cd = Ccd.$$ \end{lem} \begin{proof} Since $(u,v)\in E$, we have $cd = dc$ so that $Ccd \subseteq Cc \cap Cd$. Now suppose that $a \in Cc \cap Cd$ so that $a = sc = td$ for some $s,t \in C$. By Lemma~\ref{component lemma}, $a$ has final $v$-component $c'c$ and final $u$-component $d'd$ where $c'$ is the final $v$-component of $s$ and $d'$ is the final $u$-component of $t$. Neither $c'c$ nor $d'd$ can be $1$ since $c,d$ are not units. Thus $a$ has reduced expressions $x_1\circ \dots \circ x_n \circ (c'c)$ and $y_1 \circ \dots \circ y_n \circ (d'd)$ which, by Theorem~\ref{green}, must be shuffle equivalent. Hence one of the $x_i$, say $x_j$, must be $d'd$ and one can shuffle it to the end to obtain a reduced expression $$ x_1 \circ \dots \circ x_{j-1} \circ x_{j+1} \circ \dots \circ x_n \circ (c'c) \circ (d'd)$$ for $a$. Hence $a = x_1 \dots x_{j-1} x_{j+1} \dots x_n (c'c)(d'd)$, and since $c\in C_v,d'\in C_u$ so that $cd' = d'c$ (as $(u,v)\in E$) we have $$a = x_1 \dots x_{j-1} x_{j+1} \dots x_n c'd'cd \in Ccd$$ completing the proof. \end{proof} \begin{thm} \label{Lpreserved} Let $\G = (V,E)$ be a graph and, for each $v\in V$, let $C_v$ be a left LCM monoid. Then the graph product $C = \G_{v\in V}C_v$ is also a left LCM monoid. \end{thm} \begin{proof} We have that $C$ is right cancellative by Theorem~\ref{cancellative}. To prove that $C$ is a left LCM monoid, we show that every non-zero element of $IH^0(C)$ can be written in the form $\r_a^{-1}\r_b$ for some $a,b \in C$, and appeal to Proposition~\ref{0-bisimple}. We claim that if $c,d\in C$ and $\t = \r_c\r_d^{-1}$ is non-zero, then $\t= \r_a^{-1}\r_b$ for some $a,b\in C$.
The result follows from this claim and our earlier observation that every non-zero element of $IH^0(C)$ can be written in the form $\r_{a_1}\r_{b_1}^{-1}\dots \r_{a_n}\r_{b_n}^{-1}$. We note that the claim is true if one of $c,d$ is a unit: if $r=c^{-1}$ exists, then $$ \t = \r_{r^{-1}}\r_d^{-1} = \r_r^{-1}\r_d^{-1} = \r_{dr}^{-1} = \r_{dr}^{-1}\r_1,$$ and if $d$ is a unit, then $$\r_c \r_d^{-1} = \r_c \r_{d^{-1}} = \r_{cd^{-1}} = \r_1^{-1}\r_{cd^{-1}}.$$ We now assume that $c,d$ are both nonunits and continue by proving the claim in the case when $c$ has length 1, that is, $c\in C_v$ for some $v\in V$. Suppose that $d$ has length 1. If $d\in C_v$, then $\t = \r_{a}^{-1}\r_b$ since $C_v$ is a left LCM monoid. Let $d\in C_u$ with $u \neq v$. If $(u,v) \notin E$, then no reduced expression ending in $c$ is shuffle equivalent to one ending in $d$ and it follows that $Cc \cap Cd = \emptyset$. Thus $\t = \emptyset$, a contradiction. Hence $(u,v) \in E$ so that $cd = dc$. By Lemma~\ref{intersection1}, $Cc \cap Cd = Ccd$. It follows that $\operatorname{dom} \r_c\r_d^{-1} = Cd = \operatorname{dom} \r_d^{-1}\r_c$, and it is easily verified that $\r_c\r_d^{-1} = \r_d^{-1}\r_c$. Hence the claim holds for all $c$ and $d$ of length 1; in fact, we have $\r_c\r_d^{-1} = \r_a^{-1}\r_b$ where $a$ and $b$ have length at most 1. To complete the proof, let $c,d\in C$ have reduced expressions $c_1 \circ \dots \circ c_h$ and $d_1 \circ \dots \circ d_k$ so that $\r_c\r_d^{-1} = \r_{c_1}\dots \r_{c_h}\r^{-1}_{d_k}\dots \r^{-1}_{d_1}$. Now apply the length-one case repeatedly. \end{proof} In the next lemma we compare intersections of principal left ideals in the graph product and in its component monoids. \begin{lem} \label{intersection2} Let $\G = (V,E)$ be a graph and, for each $v\in V$, let $C_v$ be a left LCM monoid and let $C = \G_{v\in V}C_v$.
If $x,y\in C_v$ for some $v\in V$, then $$ C_v x \cap C_v y = \emptyset \text{ if and only if } Cx \cap Cy =\emptyset.$$ Moreover, if $C_v x \cap C_v y = C_v z$, then $Cx \cap Cy = Cz$. \end{lem} \begin{proof} Clearly, if $Cx \cap Cy=\emptyset$, then $C_v x \cap C_v y = \emptyset$. Conversely, suppose that $ax = by$ for some $a,b\in C$. Let $a$ and $b$ have final $v$-components $c$ and $d$ respectively. Then by Lemma~\ref{component lemma}, $ax$ has final $v$-component $cx$ and $by$ has final $v$-component $dy$. But $ax = by$, so by Proposition~\ref{comp}, $cx = dy \in C_vx \cap C_vy$. Suppose that $C_v x \cap C_v y = C_v z$; then certainly, $Cz \subseteq Cx \cap Cy$. If $r= ax = by$ for some $a,b \in C$, then applying Lemma~\ref{component lemma} and Proposition~\ref{comp} again we see that $r$ has final $v$-component $cx = dy$ where $c$ and $d$ are the final $v$-components of $a$ and $b$ respectively. Thus $cx \in C_v x \cap C_v y$ so $cx= mz$ for some $m\in C_v$, and if $r'$ is the final $v$-complement of $r$, then $r=r'mz \in Cz$ as required. \end{proof} We are now in a position to prove the following result which will be important in the next subsection. \begin{prop} \label{embeddedinversehull} If $C$ is the graph product $\G_{v\in V}C_v$ of left LCM monoids $C_v$, then, for each $v\in V$, the inverse hull $IH^0(C_v)$ is embedded in $IH^0(C)$. \end{prop} \begin{proof} For $x\in C_v$ denote the inner right translations of $C_v$ and $C$ determined by $x$ by $\r_x$ and $\d_x$ respectively. Non-zero elements of $IH^0(C_v)$ have the form $\r_x^{-1}\r_y$ and so we can define $\h : IH^0(C_v) \to IH^0(C)$ by $0\h = 0$ and $(\r_x^{-1}\r_y)\h = \d_x^{-1}\d_y$. To see that $\h$ is well defined, suppose that $\r_x^{-1}\r_y = \r_z^{-1}\r_t$. Then by Corollary~\ref{equal}, $x=uz$ and $y=ut$ for some unit $u$ of $C_v$. Certainly $u$ is a unit of $C$, so we have $\d_x^{-1}\d_y = \d_z^{-1}\d_t$ as required.
To see that $\h$ is injective, suppose that $\d_x^{-1}\d_y = \d_z^{-1}\d_t$ where $x,y,z,t \in C_v$. Then by Corollary~\ref{equal}, we have $x=qz$ and $y=qt$ for some unit $q$ of $C$. By Corollary~\ref{unitary}, $C_v$ is unitary in $C$, and since $qt,t\in C_v$, we have $q\in C_v$. It is easy to see that $q^{-1}$ is also in $C_v$, so that $q$ is a unit of $C_v$ and so $\r_x^{-1}\r_y = \r_z^{-1}\r_t$ as required. Finally, we show that $\h$ is a homomorphism. Let $\r_x^{-1}\r_y , \r_z^{-1}\r_t$ be elements of $IH^0(C_v)$. If $C_v y \cap C_v z = \emptyset$, then by Lemma~\ref{intersection2}, $Cy \cap Cz = \emptyset$. From the rule for multiplication following Proposition~\ref{0-bisimple}, we have $(\r_x^{-1}\r_y)(\r_z^{-1}\r_t) =0$, and since, by Theorem~\ref{Lpreserved}, $C$ is left LCM, we also have $(\d_x^{-1}\d_y)(\d_z^{-1}\d_t) =0$. If $C_v y \cap C_v z \neq \emptyset$, then since $C_v$ is a left LCM monoid, we have $C_v y \cap C_v z = C_va$ for some $a\in C_v$, say $a=ry=sz$ where $r,s\in C_v$. By Lemma~\ref{intersection2}, we also have $Cy \cap Cz = Ca$, and so by the rule for multiplication we see that $$ (\r_x^{-1}\r_y)(\r_z^{-1}\r_t) = \r^{-1}_{rx}\r_{st}$$ and $$(\d_x^{-1}\d_y)(\d_z^{-1}\d_t) = \d^{-1}_{rx}\d_{st}.$$ It follows that $\h$ is a homomorphism as required. \end{proof} \subsection{Inverse hulls of graph products of left LCM monoids} Let $\G=(V,E)$ be a graph and $\{ C_v\}_{v\in V}$ be a family of left LCM monoids. Let $C = \G_{v\in V}C_v$ be the graph product of the $C_v$; we have just proved that $C$ is also a left LCM monoid. In this subsection our first goal is to find a presentation (as a monoid with zero) for $IH^0(C)$ in terms of given presentations for the inverse monoids $IH^0(C_v)$. We begin by establishing some notation. Let $D$ be any right cancellative monoid with group of units $G$ and let $Y$ be a symmetric set of monoid generators for $G$ (i.e., $y\in Y$ if and only if $y^{-1} \in Y$).
We assume that $1\notin Y$ and take $Y$ to be empty if $G = \{1\}$. Let $X$ be a set of nonunits in $D$ such that $X \cup Y$ generates $D$. Let $X^{-1} = \{ x^{-1}:x \in X\}$ be a set disjoint from $X$ such that $x \mapsto x^{-1}$ is a bijection, and $X^{-1} \cup Y$ generates the left cancellative monoid $D^{-1}$ anti-isomorphic to $D$. Since any element of $IH(D)$ can be written in the form $ \r_{a_1}\r_{b_1}^{-1}\dots \r_{a_n}\r_{b_n}^{-1}$, it follows that there is a homomorphism from the free monoid $(X \cup X^{-1} \cup Y)^*$ onto $IH(D)$ which sends $x$ to $\r_x$, $y$ to $\r_y$ and $x^{-1}$ to $\r_x^{-1}$. Thus $IH(D)$ has a presentation of the form $\langle X \cup X^{-1} \cup Y \mid R \rangle$ for some set of relations $R$. We can also regard $\langle X \cup X^{-1} \cup Y \mid R \rangle$ as a presentation for $IH^0(D)$ in the class of monoids with zero. Since $\r_x\r_x^{-1} =1$ for all $x\in X$, we can assume that $xx^{-1} = 1$ is a relation in $R$ for every $x\in X$. Similarly, since $\r_y$ is a unit for all $y \in Y$, we can assume that we have relations $yy^{-1} = 1 = y^{-1}y$ in $R$ for all $y\in Y$. Turning to the graph product $C = \G_{v\in V}C_v$ we note that we have a corresponding graph product $C^{-1} = \G_{v\in V}C_v^{-1}$ of the left cancellative monoids $C_v^{-1}$. Writing $G_v$ for the common group of units of $C_v$ and $C_v^{-1}$, we remark that, by \cite[Proposition~7.1]{costa1}, the common group of units of $C$ and $C^{-1}$ is $G= \G_{v\in V}G_v$. We also observe that the anti-isomorphisms between the $C_v$'s and the $C^{-1}_v$'s extend, by a slight variation of Proposition~\ref{homomorphisms2}, to an anti-isomorphism between $C$ and $C^{-1}$. Now put $S_v = IH^0(C_v)$ for each $v\in V$, and let $\langle X_v \cup X_v^{-1} \cup Y_v \mid R_v \rangle$ be a presentation for $S_v$ of the type described in the previous paragraph.
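To illustrate the form of such a presentation, consider the simplest example: let $D$ be the free monogenic monoid on a single generator $x$. Then $G$ is trivial, so $Y = \emptyset$ and we may take $X = \{x\}$. Here $IH(D)$ is the bicyclic monoid, with presentation $$\langle x, x^{-1} \mid xx^{-1} = 1 \rangle,$$ and the single defining relation $xx^{-1} = 1$ is precisely the relation we have assumed belongs to $R$.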
It will be convenient to adopt the following notation convention: $x_v, y_v$ denote elements of $X_v, Y_v$ respectively; $t_v$ denotes an element of $X_v \cup Y_v$ and $z_v$ denotes any element of $Z_v = X_v \cup X_v^{-1} \cup Y_v$. We now put $X = \bigcup_{v\in V}X_v$, $X^{-1} = \bigcup_{v\in V}X_v^{-1}$, $Y = \bigcup_{v\in V}Y_v$, and $Z = X \cup X^{-1} \cup Y$. As in Section~\ref{graph products}, we will want to consider the free monoid on $\bigcup_{v\in V}C_v$ as well as the free monoid $Z^*$. To avoid confusion about the various products, we write $\circ$, as before, for the product in the former free monoid, and $\diamond$ for that in $Z^*$. Next, we introduce several sets of relations amongst words over $X \cup X^{-1} \cup Y$ (and zero) as follows: \begin{itemize} \item [(1)] $R= \bigcup_{v\in V}R_v$; \item [(2)] $N = \{ x_v\diamond y_{u_1} \diamond \dots\diamond y_{u_m} \diamond x_w^{-1} = 0 : m \geq 0, \forall\ x_v \in X_v,x_w \in X_w,\\ \hspace*{1cm} y_{u_i} \in Y_{u_i} \text{ with }(v,w)\notin E \text{ and } v \neq w \}$; \item [(3)] $\operatorname{Com}\, = \{ z_u\diamond z_v = z_v\diamond z_u : \forall\ z_u \in Z_u,z_v \in Z_v \text{ with }(u,v) \in E \}$. \end{itemize} The \textit{polygraph product} of the $S_v$ is defined to be the monoid $\operatorname{PG} = \operatorname{PG}_{v\in V}(S_v)$ given by the presentation $$\langle Z \mid R \cup N \cup \operatorname{Com} \rangle.$$ There is thus a surjective homomorphism $\zeta: Z^* \to \operatorname{PG}$. For each $v\in V$, the generators and relations of $IH^0(C_v)$ are among those for $\operatorname{PG}$ and so there is a monoid homomorphism $\psi_v$ from $IH^0(C_v)$ into $\operatorname{PG}$ determined by $\r_{t_v}\psi_v = t_v\zeta$ and $\r_{x_v}^{-1}\psi_v = x_v^{-1}\zeta$ for $t_v\in X_v \cup Y_v$ and $x_v \in X_v$. The right unit submonoid of $IH^0(C_v)$ is isomorphic to $C_v$ via the map $\eta_v:C_v \to IH^0(C_v)$ given by $c\eta_v = \r_c$. 
As noted in the preceding subsection, we can also extend $\eta_v$ from $G_v$ (the group of units of $C_v$) to the left cancellative monoid $C^{-1}_v$ to give an isomorphism onto the left unit submonoid of $IH^0(C_v)$. Composing $\eta_v$ with the restriction of $\psi_v$ first to the right unit submonoid of $IH^0(C_v)$, then to the left unit submonoid, we obtain monoid homomorphisms from $C_v$ and $C^{-1}_v$ into $\operatorname{PG}$ both of which we denote by $\h_v$. There is no ambiguity here since these homomorphisms agree on the common group of units of $C_v$ and $C^{-1}_v$. We observe that if $c_v = t_1\dots t_n$ where $t_i\in X_v \cup Y_v$, then $$ c_v\h_v = (t_1\eta_v\psi_v) \dots (t_n\eta_v\psi_v) = \r_{t_1}\psi_v \dots \r_{t_n}\psi_v = t_1\zeta\dots t_n\zeta = (t_1 \diamond \dots \diamond t_n)\zeta, $$ and $$ c_v^{-1}\h_v = (t_n^{-1}\dots t_1^{-1})\h_v = \r_{t_n}^{-1}\psi_v \dots \r_{t_1}^{-1}\psi_v = t_n^{-1}\zeta \dots t_1^{-1}\zeta = (t_n^{-1} \diamond \dots \diamond t_1^{-1})\zeta. $$ Now by Proposition~\ref{homomorphisms1} and its dual, there are unique homomorphisms from $C$ into the right unit submonoid of $\operatorname{PG}$, and from $C^{-1}$ into the left unit submonoid of $\operatorname{PG}$ which restrict to $\h_v$ on each $C_v$ and $C_v^{-1}$ respectively. We have noted that the common group of units of $C$ and $C^{-1}$ is $G = \G_{v\in V}G_v$ where $G_v$ is the common group of units of $C_v$ and $C_v^{-1}$. As no non-units are in both $C$ and $C^{-1}$, there is no ambiguity in denoting both homomorphisms by $\h$.
From the above we see that the squares \begin{center} \begin{pspicture}(-1,-1)(4,3) \psset{nodesep=4pt} \rput(0,2){\rnode{A}{$(X \cup Y)^*$}} \rput(3,2){\rnode{B}{$C$}} \rput(0,0){\rnode{C}{$(X\cup X^{-1} \cup Y)^*$}} \rput(3,0){\rnode{D}{$\operatorname{PG}$}} \ncline[arrowsize=3pt 2.5]{->}{A}{B} \ncline[arrowsize=3pt 2.5]{->}{A}{C}\Bput{$\iota$} \ncline[arrowsize=3pt 2.5]{->}{C}{D}\Aput{$\zeta$} \ncline[arrowsize=3pt 2.5]{->}{B}{D}\Aput{$\h$} \end{pspicture} \qquad \qquad \begin{pspicture}(-1,-1)(4,3) \psset{nodesep=4pt} \rput(0,2){\rnode{A}{$(X^{-1} \cup Y)^*$}} \rput(3,2){\rnode{B}{$C^{-1}$}} \rput(0,0){\rnode{C}{$(X\cup X^{-1} \cup Y)^*$}} \rput(3,0){\rnode{D}{$\operatorname{PG}$}} \ncline[arrowsize=3pt 2.5]{->}{A}{B} \ncline[arrowsize=3pt 2.5]{->}{A}{C}\Bput{$\iota$} \ncline[arrowsize=3pt 2.5]{->}{C}{D}\Aput{$\zeta$} \ncline[arrowsize=3pt 2.5]{->}{B}{D}\Aput{$\h$} \end{pspicture} \end{center} are commutative where $\iota$ is the inclusion map. It follows that every non-zero element of $\operatorname{PG}$ can be written in the form $(a_1\h)(b_1^{-1}\h)\dots (a_k\h)(b_k^{-1}\h)$ where $a_i,b_i\in C$. In fact, we can do better than this as we see in the next lemma. \begin{lem} \label{crucial} Every non-zero element of $\operatorname{PG} = \operatorname{PG}_{v\in V}(S_v)$ can be written in the form $(a^{-1}\h)(b\h)$ where $a,b\in C$. \end{lem} \begin{proof} In view of the remark preceding the lemma, it is enough to show that if $c,d\in C$, then either $(c\h)(d^{-1}\h) =0$ or $(c\h)(d^{-1}\h) = (a^{-1}\h)(b\h)$ for some $a,b\in C$. This is clearly true if $c$ or $d$ is a unit of $C$, so we may assume that neither is a unit. We use induction on the length, as defined in Section~\ref{graph products}, of $c$ and $d$. We start by considering $d$ of length 1, and proving by induction on the length of $c$ that for any $c\in C$, either $(c\h)(d^{-1}\h) =0$ or $(c\h)(d^{-1}\h) = (a^{-1}\h)(b\h)$ for some $a,b\in C$ with $a$ of length 1. 
First, suppose that $c$ has length 1. Then $c\in C_u,d\in C_v$ for some $u,v$. If $u=v$, then $$ (c\h)(d^{-1}\h) = (c\h_u)(d^{-1}\h_u) = (\r_c\psi_u)(\r_d^{-1}\psi_u) = (\r_c\r_d^{-1})\psi_u. $$ Since $C_u$ is left LCM, we have, by Proposition~\ref{0-bisimple}, that $\r_c\r_d^{-1}$ is either zero or equal to $\r_a^{-1}\r_b$ for some $a,b \in C_u$. Hence, if non-zero, $$ (c\h)(d^{-1}\h) = (\r_c\r_d^{-1})\psi_u = (\r_a^{-1}\r_b)\psi_u = (\r_a^{-1}\psi_u)(\r_b\psi_u) = (a^{-1}\h)(b\h). $$ If $u\neq v$, let $c = t_1'\dots t_m'$ and $d = t_1\dots t_n$ where $t_i'\in X_u \cup Y_u$ and $t_j \in X_v \cup Y_v$. If $(u,v) \in E$, then $t_i'\diamond t_j = t_j\diamond t_i'$ is a relation in $\operatorname{Com}$ for all $i,j$ and it follows that $(c\h)(d^{-1}\h) = (d^{-1}\h)(c\h)$. Suppose that $(u,v) \notin E$. Since $c,d$ are non-units, not all the $t_i'$ are units and not all the $t_j$ are units. Let $h$ and $k$ be the largest integers such that $t_h'$ and $t_k$ are non-units. Then we can write $x_h'$ for $t_h'$ and $x_k$ for $t_k$, and similarly, we can write $y_i'$ for $t_i'$ when $i > h$ and $y_j$ for $t_j$ when $j > k$. Consider $(x_h'\diamond y_{h+1}' \diamond \dots \diamond y_m' \diamond y_n^{-1} \diamond \dots \diamond y_{k+1}^{-1} \diamond x_k^{-1})\zeta$. This element is zero (by virtue of the relations in $N$) and so $(c\h)(d^{-1}\h) = 0$. Thus our claim is true for all $c$ and $d$ of length 1. Now suppose that for any $c,d\in C$ with $c$ of length less than $m$ and $d$ of length 1, we have $(c\h)(d^{-1}\h) = 0$ or $(c\h)(d^{-1}\h) = (a^{-1}\h)(b\h)$ for some $a,b \in C$ with $a$ of length 1. Next, let $c\in C$ have length $m$, say $c_1 \circ \dots \circ c_m$ is a reduced expression for $c$, and let $d\in C_v$. By the current induction assumption, $((c_2\dots c_m)\h)(d^{-1}\h)$ is either zero or can be written in the form $(a^{-1}\h)(b\h)$ with $a$ of length 1. In the former case, it is clear that $(c\h)(d^{-1}\h) = 0$.
In the latter case, if $(c\h)(d^{-1}\h)$ is non-zero, we have \begin{align*} (c\h)(d^{-1}\h) &= ((c_1\dots c_m)\h)(d^{-1}\h) = (c_1\h)((c_2\dots c_m)\h)(d^{-1}\h)\\ &= (c_1\h)(a^{-1}\h)(b\h)\\ &= (a_1^{-1}\h)(b_1\h)(b\h) = (a_1^{-1}\h)((b_1b)\h) \end{align*} where $a_1$ has length 1, using the fact that $c_1$ and $a$ both have length 1. Thus we have proved our claim that for any $c,d\in C$ with $d$ of length 1, either $(c\h)(d^{-1}\h) =0$ or $(c\h)(d^{-1}\h) = (a^{-1}\h)(b\h)$ for some $a,b\in C$ with $a$ of length 1. Now assume inductively that for any $c\in C$ and any $d\in C$ of length $n-1$, if $(c\h)(d^{-1}\h) \neq 0$, then $(c\h)(d^{-1}\h) = (a^{-1}\h)(b\h)$ for some $a,b\in C$. Let $d\in C$ have a reduced expression $d_1 \circ \dots \circ d_n$ so that \begin{align*} (c\h)(d^{-1}\h) &= (c\h)(d_n^{-1}\h)((d_{n-1}^{-1}\dots d_1^{-1})\h) \\ &= (a_1^{-1}\h)(b_1\h)((d_{n-1}^{-1}\dots d_1^{-1})\h) \text{ for some } a_1,b_1\in C \text{ (by the length~1 case)}\\ &= (a_1^{-1}\h)((b_1\h)((d_{n-1}^{-1}\dots d_1^{-1})\h)) \\ & = (a_1^{-1}\h)(a_2^{-1}\h)(b_2\h) \text{ for some } a_2,b_2\in C \text{ (by the induction assumption)}\\ &= ((a_1^{-1}a_2^{-1})\h)(b_2\h) \\ &= (a^{-1}\h)(b\h) \text{ where } a =a_2a_1 \text{ and } b = b_2. \end{align*} This completes the proof of the lemma. \end{proof} We now consider $IH^0(C)$. We remind the reader that (as a monoid with zero) each $IH^0(C_v)$ is generated by $\{ \r_{x_v}, \r_{x_v}^{-1}, \r_{y_v} : x_v\in X_v, y_v\in Y_v\}$ and that $IH^0(C)$ is generated by $Q =\{ \r_x, \r_x^{-1}, \r_y : x\in X, y\in Y\}$ where $X = \bigcup_{v\in V}X_v$, $X^{-1} = \bigcup_{v\in V}X_v^{-1}$ and $Y = \bigcup_{v\in V}Y_v$. As before, we also assume that $R_v$ is a set of defining relations for $IH^0(C_v)$ and put $R = \bigcup_{v\in V}R_v$. \begin{lem} \label{relations1} With respect to the generating set $Q$, the relations in $R$ are satisfied by $IH^0(C)$.
\end{lem} \begin{proof} By Proposition~\ref{embeddedinversehull}, $IH^0(C_v)$ is embedded in $IH^0(C)$ for all $v\in V$. The relations in $R$ are relations in $R_v$ for some $v$, so hold in $IH^0(C_v)$ and hence in $IH^0(C)$. \end{proof} \begin{lem} \label{relations2} With respect to the generating set $Q$, the relations in $N$ are satisfied by $IH^0(C)$. \end{lem} \begin{proof} Suppose that $x_v\diamond y_{u_1}\diamond \dots\diamond y_{u_m}\diamond x_w^{-1} =0$ is a relation in $N$ so that $(v,w)\notin E$ and $v\neq w$. Then in $IH^0(C)$ we have $$\operatorname{dom} \r_{x_v}\r_{y_{u_1}}\dots \r_{y_{u_m}}\r_{x_w}^{-1} = (Cx_vy_{u_1}\dots y_{u_m} \cap Cx_w)(\r_{x_v}\r_{y_{u_1}}\dots \r_{y_{u_m}})^{-1}.$$ Since $x_v$ is not a unit and $(v,w)\notin E$, in an expression for an element $a$ of $Cx_vy_{u_1}\dots y_{u_m}$, any amalgamation involving $x_v$ produces a non-unit of $C_v$, so a non-unit of $C_w$ cannot be shuffled to the end of the expression. Hence the final $w$-component of $a$ is a unit. But the final $w$-component of an element of $Cx_w$ must be a left multiple of $x_w$ and hence be a non-unit. It follows from Proposition~\ref{comp} that $Cx_vy_{u_1}\dots y_{u_m} \cap Cx_w = \emptyset$ and so $\r_{x_v}\r_{y_{u_1}}\dots \r_{y_{u_m}}\r_{x_w}^{-1} =0$. \end{proof} \begin{lem} \label{relations3} With respect to the generating set $Q$, the relations in $\operatorname{Com}$ are satisfied by $IH^0(C)$. \end{lem} \begin{proof} Following our convention that $t_u,x_u$ denote arbitrary elements of $X_u \cup Y_u$ and $X_u$ respectively, relations in $\operatorname{Com}$ have one of the forms: \begin{itemize} \item [$(i)$] $t_u\diamond t_v = t_v\diamond t_u$; \item [$(ii)$] $x_u\diamond x_v^{-1} = x_v^{-1}\diamond x_u$; \item [$(iii)$] $x_u^{-1}\diamond x_v^{-1} = x_v^{-1}\diamond x_u^{-1}$ \end{itemize} where $(u,v) \in E$. 
Relations of the form $(i)$ are satisfied in $IH^0(C)$ since $$ \r_{t_u}\r_{t_v} = \r_{t_ut_v} = \r_{t_vt_u} = \r_{t_v}\r_{t_u}.$$ Consider a relation as in $(ii)$. By Lemma~\ref{intersection1} we have $Cx_u \cap Cx_v = Cx_ux_v$, and since $x_ux_v = x_vx_u$ in $C$, we have $$\operatorname{dom} \r_{x_u}\r_{x_v}^{-1} = (\operatorname{im} \r_{x_u} \cap \operatorname{dom} \r_{x_v}^{-1})\r_{x_u}^{-1} = (Cx_vx_u)\r_{x_u}^{-1} = Cx_v.$$ Similarly, we calculate $\operatorname{im} \r_{x_u}\r_{x_v}^{-1} = Cx_u$. Since $\operatorname{im} \r_{x_v}^{-1} = C = \operatorname{dom} \r_{x_u}$, it is easy to see that we also have $\operatorname{dom} \r_{x_v}^{-1}\r_{x_u} = Cx_v$ and $\operatorname{im} \r_{x_v}^{-1}\r_{x_u} = Cx_u$, and it follows that $\r_{x_u}\r_{x_v}^{-1} = \r_{x_v}^{-1}\r_{x_u}$. Finally consider a relation of the form $(iii)$. In this case, since $(u,v)\in E$, we also have that $x_u\diamond x_v = x_v\diamond x_u$ is a relation in $\operatorname{Com}$. Hence $ \r_{x_v}\r_{x_u} = \r_{x_u}\r_{x_v}$ follows by $(i)$, and since $IH^0(C)$ is an inverse monoid, $$ \r_{x_u}^{-1}\r_{x_v}^{-1} = (\r_{x_v}\r_{x_u})^{-1} = (\r_{x_u}\r_{x_v})^{-1} = \r_{x_v}^{-1}\r_{x_u}^{-1}.$$ \end{proof} We now use the lemmas together to obtain the following theorem where we retain the notation of this section. \begin{thm} \label{presentation} The monoids $\operatorname{PG}_{v\in V}(S_v)$ and $IH^0(C)$ are isomorphic. \end{thm} \begin{proof} Consider the function $\b:X\cup X^{-1} \cup Y \to IH^0(C)$ given by $x\b = \r_x$, $x^{-1}\b = \r_x^{-1}$ and $y\b = \r_y$. It follows from Lemmas~\ref{relations1} to \ref{relations3} that $\b$ extends to a homomorphism, again denoted by $\b$, from $\operatorname{PG}$ to $IH^0(C)$. Since the latter is generated by $Q$, the homomorphism is surjective. Let $r,s\in \operatorname{PG}$ and suppose that $r\b = s\b$. By Lemma~\ref{crucial}, $r = (a^{-1}\h)(b\h)$ and $s = (c^{-1}\h)(d\h)$ for some $a,b,c,d \in C$. 
Hence $((a^{-1}\h)(b\h))\b = ((c^{-1}\h)(d\h))\b$ so that $\r_a^{-1}\r_b = \r_c^{-1}\r_d$, and hence by Corollary~\ref{equal}, there is a unit $e$ of $C$ such that $c =ea$ and $d= eb$. If $m,n\in C$, then there are corresponding elements $m^{-1},n^{-1}$ in $C^{-1}$ and $(mn)^{-1} = n^{-1}m^{-1}$. Thus, using the fact that $e$ is a unit in $C$, \begin{align*} s &= (c^{-1}\h)(d\h) = ((ea)^{-1}\h)((eb)\h)\\ &= ((a^{-1}e^{-1})\h)((eb)\h) = (a^{-1}\h)(e^{-1}\h)(e\h)(b\h)\\ &= (a^{-1}\h)((e^{-1}e)\h)(b\h) = (a^{-1}\h)(b\h)\\ &= r. \end{align*} Thus $\b$ is an isomorphism and the proof is complete. \end{proof} \section{Polygraph monoids} \label{polygraph monoids} Theorem~\ref{presentation} gives us a presentation for $IH^0(C)$ and also allows us to write the non-zero elements of $\operatorname{PG}$ in the form $a^{-1}b$ with $a,b \in C$ where $a^{-1}b = c^{-1}d$ if and only if $c =ea$ and $d= eb$ for some unit $e$ of $C$. The presentation simplifies considerably in the case when each $C_v$ (and hence also $C$) has trivial group of units, in that $Y = \emptyset$ and consequently $$N = \{ x_u \diamond x_v^{-1} = 0 : \forall\ x_u\in X_u, x_v\in X_v \text{ with }(u,v) \notin E \text{ and } u\neq v \}.$$ Thus we have the presentation $$ \langle X \cup X^{-1} \mid R \cup N \cup \operatorname{Com} \rangle$$ for $IH^0(C)$. A particular instance of this is when each $C_v$ is a free monogenic monoid. Then $S_v = IH^0(C_v)$ is the bicyclic monoid with zero adjoined, and as a monoid with zero it has the presentation with two generators: $\langle x_v,x_v^{-1} \mid x_vx_v^{-1} = 1\rangle$. In this case, the graph product of the $C_v$ is a graph monoid $M(\G)$ with presentation $$\langle x_v\ (v\in V) \mid x_ux_v = x_vx_u \text{ if } (u,v) \in E \rangle.$$ The monoid $IH^0(M(\G))$ is called a \textit{polygraph monoid} and we denote it by $P(\G)$.
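As a small illustrative example, let $\G$ be the path with vertex set $V = \{u,v,w\}$ and edge set $E = \{(u,v),(v,w)\}$. Then $M(\G)$ has presentation $\langle x_u, x_v, x_w \mid x_ux_v = x_vx_u,\ x_vx_w = x_wx_v \rangle$ and, in $P(\G) = IH^0(M(\G))$, the relations in $N$ and $\operatorname{Com}$ give, for instance, $\r_{x_u}\r_{x_w}^{-1} = 0$ (since $(u,w) \notin E$ and $u \neq w$), while $\r_{x_u}\r_{x_v}^{-1} = \r_{x_v}^{-1}\r_{x_u}$ and $\r_{x_v}\r_{x_w}^{-1} = \r_{x_w}^{-1}\r_{x_v}$.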
Put $X = \{ x_v : v\in V\}$ and for $x\in C_u$, $y \in C_v$, write $x \sim y$ if $(u,v) \in E$, and abusing notation, write $x \nsim y$ to mean $u \neq v$ and $(u,v) \notin E$. Then our polygraph monoid has a presentation \begin{align*} \langle X \cup X^{-1} \mid xx^{-1} &= 1;\ xy^{-1} = 0 \text{ if } x \nsim y;\\ xy &= yx,\: xy^{-1} = y^{-1}x,\: x^{-1}y^{-1} = y^{-1}x^{-1} \text{ if } x \sim y \rangle. \end{align*} If $\G$ has no edges, then $M(\G) = X^*$ is the free monoid on $X$ and the polygraph monoid $IH^0(M(\G))$ is the monoid with presentation $$\langle X \cup X^{-1} \mid xx^{-1} = 1; xy^{-1} = 0 \text{ if } x\neq y \rangle,$$ that is, it is the polycyclic monoid introduced in \cite{np} and studied in, among others, \cite{knox,meakinsapir,lawson}. Let $P(\G)$ be the polygraph monoid determined by the graph $\G= (V,E)$. Since $P(\G)$ is the inverse hull (with zero adjoined if necessary) of the graph monoid $M(\G)$, it follows from the remarks following Theorem~\ref{presentation} that every non-zero element of $P(\G)$ can be written as $a^{-1}b$ for some $a,b\in M(\G)$. Since the identity is the only unit in $M(\G)$ it follows that if $a,b,c,d \in M(\G)$, then $a^{-1}b = c^{-1}d$ if and only if $a=c$ and $b=d$. Thus we may regard the non-zero elements of $P(\G)$ as pairs $(a,b)$ where $a,b\in M(\G)$. With this notation, the product in $P(\G)$ is given by $$ (a,b)(c,d) = \begin{cases} 0 \quad \text{ if } M(\G)b \cap M(\G)c = \emptyset\\ (sa,td) \text{ if } M(\G)b \cap M(\G)c = M(\G)sb = M(\G)tc. \end{cases} $$ \begin{prop} \label{polygraphmonoid1} The monoid $P(\G)$ is a $0$-bisimple (bisimple if it has no zero) inverse monoid with $$ E(P(\G)) = \{ (a,a) : a\in M(\G) \} \cup \{0\} $$ as its set of idempotents. \end{prop} \begin{proof} Since graph monoids are left LCM, Proposition~\ref{0-bisimple} gives that $P(\G)$ is a $0$-bisimple (bisimple if it has no zero) inverse monoid. It is easy to verify that any element of the form $(a,a)$ is idempotent.
Suppose that $(a,b)(a,b) = (a,b)$. Then $(sa,tb) = (a,b)$ where $M(\G)b \cap M(\G)a = M(\G)sb = M(\G)ta$. Hence, by the criterion for equality, $sa = a$ and $tb = b$ in $M(\G)$ so that $s = t = 1$. Thus $M(\G)a = M(\G)b$ and hence $a=b$. \end{proof} Since $P(\G)$ is 0-bisimple, $\ensuremath{\mathscr{D}} = \ensuremath{\mathscr{J}}$ and two elements are $\ensuremath{\mathscr{D}}$-related if and only if they are both non-zero or both equal to zero. In the next proposition we characterise the other Green's relations on $P(\G)$. \begin{prop} \label{polygraphmonoid2} For elements $(a,b),(c,d)$ of $P(\G)$, \begin{itemize} \item [$(1)$] $(a,b)^{-1} = (b,a)$; \item [$(2)$] $(a,b)\L (c,d)$ if and only if $b=d$; \item [$(3)$] $(a,b)\ensuremath{\mathscr{R}} (c,d)$ if and only if $a=c$; \item [$(4)$] $\H$ is trivial. \end{itemize} \end{prop} \begin{proof} $(1)$ is an easy calculation. In an inverse monoid, elements $s,t$ are $\L$-related if and only if $s^{-1}s = t^{-1}t$. Using this and (1) we see that in $P(\G)$ we have $(a,b)\L (c,d)$ if and only if $b=d$. The result for $\ensuremath{\mathscr{R}}$ is similar, and then it follows that $\H$ is trivial. \end{proof} We next consider the properties of being $E^*$-unitary or strongly $E^*$-unitary. For any inverse monoid $S$, the semilattice of idempotents of $S$ is denoted by $E(S)$, and if $S$ has a zero, $E^*(S)$ denotes the set of non-zero idempotents. Recall from Section~\ref{graph products} that a subset $U$ of $S$ is \textit{right unitary} in $S$ if for $u\in U$, $s\in S$ we have $su \in U$ if and only if $s\in U$. There is a dual notion of \textit{left unitary}, and if $U$ is both left and right unitary, it is said to be \textit{unitary} in $S$. If $U$ is either $E(S)$ or $E^*(S)$, then it is left unitary if and only if it is right unitary.
We say that $S$ is $E$-\textit{unitary} if $E(S)$ is a unitary subset of $S$, and that it is $E^*$-\textit{unitary} \cite{maria} (or $0$-$E$-unitary \cite{meakinsapir}, \cite{lawson}) if $E^*(S)$ is a unitary subset of $S$. Chapter~9 of \cite{lawson} is devoted to $E^*$-unitary inverse semigroups. A special class of $E^*$-unitary inverse semigroups was introduced independently in \cite{bffg} and \cite{lawson2}. In general, if we adjoin a zero to a semigroup $S$, we denote the semigroup obtained by $S^0$. An inverse semigroup $S$ with zero is \textit{strongly $E^*$-unitary} if there is a group $G$ and a function $\h:S\to G^0$ satisfying: \begin{enumerate} \item $a\h =0$ if and only if $a=0$; \item $a\h =1$ if and only if $a\in E^*(S)$; \item if $ab\neq 0$, then $(ab)\h = (a\h)(b\h)$. \end{enumerate} Condition (1) says that $\h$ is $0$-\textit{restricted}; conditions~(1) and~(2) together say that $\h$ is \textit{idempotent-pure}, that is, the only elements which map to idempotents are idempotents; and condition~(3) says that $\h$ is a \textit{prehomomorphism}. In general, prehomomorphisms between inverse monoids are defined in terms of the natural order on the monoids, but the general definition is equivalent to condition~(3) when the codomain is a group with zero adjoined. Implicit in \cite{bffg} is the result that an inverse semigroup with zero is strongly $E^*$-unitary if and only if it is a Rees quotient of an $E$-unitary inverse semigroup. This was made explicit with an easy proof in \cite{ben}. As well as \cite{bffg} and \cite{ben}, further information about strongly $E^*$-unitary inverse semigroups, including many examples, can be found in the surveys \cite{lawson3} and \cite{mcal2}. We are interested in the connection between strongly $E^*$-unitary inverse monoids and embeddability of cancellative monoids in groups. The following result is due to Margolis \cite{stuart}; we include a proof for completeness. 
\begin{prop} \label{necandsuff} Let $S$ be a cancellative monoid. Then $S$ is embeddable in a group if and only if $IH^0(S)$ is strongly $E^*$-unitary. \end{prop} \begin{proof} Suppose first that $S$ is embedded in a group $G$. As noted in Section~\ref{generalities}, every (non-zero) element $\r$ of $IH^0(S)$ can be expressed as $\r_{a_1}\r_{b_1}^{-1}\dots \r_{a_n}\r_{b_n}^{-1}$ for some elements $a_1,b_1,\dots, a_n,b_n$ of $S$. Define a mapping $\h:IH^0(S) \to G^0$ by putting $0\h=0$ and $(\r_{a_1}\r_{b_1}^{-1}\dots \r_{a_n}\r_{b_n}^{-1})\h = a_1b_1^{-1}\dots a_nb_n^{-1}$ if $\r_{a_1}\r_{b_1}^{-1}\dots \r_{a_n}\r_{b_n}^{-1}$ is non-zero. If $\r = \r_{a_1}\r_{b_1}^{-1}\dots \r_{a_n}\r_{b_n}^{-1} = \r_{c_1}\r_{d_1}^{-1}\dots \r_{c_m}\r_{d_m}^{-1}$ is non-zero, then for every element $x$ in $\operatorname{dom} \r$, we have $$ x\r = x\r_{a_1}\r_{b_1}^{-1}\dots \r_{a_n}\r_{b_n}^{-1} = x\r_{c_1}\r_{d_1}^{-1}\dots \r_{c_m}\r_{d_m}^{-1}$$ so that in $G$, the following equation holds: $$ x\r = xa_1b_1^{-1}\dots a_nb_n^{-1} = xc_1d_1^{-1}\dots c_md_m^{-1}$$ and hence $a_1b_1^{-1}\dots a_nb_n^{-1} = c_1d_1^{-1}\dots c_md_m^{-1}$. Thus $\h$ is well-defined. By definition, $\h$ is 0-restricted. If $\r$ is as defined above and $\r\h =1$, then we have $a_1b_1^{-1}\dots a_nb_n^{-1} =1$ and it follows that $x\r =x$ for all $x\in \operatorname{dom} \r$ so that $\r=I_{\operatorname{dom}\r}$ and $\h$ is idempotent pure. Finally, it is clear from the definition that if $\r,\s\in IH^0(S)$ and $\r\s\neq 0$, then $(\r\s)\h = (\r\h)(\s\h)$ so that $\h$ is a prehomomorphism. Thus $IH^0(S)$ is strongly $E^*$-unitary. For the converse, we suppose that $IH^0(S)$ is strongly $E^*$-unitary and consider a 0-restricted idempotent pure prehomomorphism $\h:IH^0(S) \to G^0$ from $IH^0(S)$ to a group $G$ with zero adjoined. For each $a\in S$, we have the element $\r_a$ of $IH(S)$, and since $\operatorname{dom} \r_a =S$, it follows that $\r_a\r_b =\r_{ab}$ for any $a,b\in S$.
Since $\h$ is 0-restricted, $\r_a\h \in G$, and we have $$ (\r_a\h)(\r_b\h) = (\r_a\r_b)\h = \r_{ab}\h.$$ Hence we can define $\psi:S\to G$ by $a\psi = \r_a\h$, and $(a\psi)(b\psi)=(ab)\psi$, that is, $\psi$ is a homomorphism. It is also injective, for if $a\psi =b\psi$, then $\r_a\h=\r_b\h$. Now $\r_a^{-1}\r_b$ is a non-zero element of $IH^0(S)$, and so $$ (\r_a^{-1}\r_b)\h = (\r_a^{-1}\h)(\r_b\h) = (\r_a^{-1}\h)(\r_a\h) = (\r_a^{-1}\r_a)\h =1$$ since $\r_a^{-1}\r_a$ is a non-zero idempotent. But $\h$ is idempotent pure, so $\r_a^{-1}\r_b$ is an idempotent, that is, it is the identity map on its domain. Hence for $x\in \operatorname{dom}(\r_a^{-1}\r_b)$ we have $x\r_a^{-1}\r_b = x$. Now $x\r_a^{-1} =u$ where $x=ua$ and also $x=u\r_b=ub$ so that $ua=ub$ and $a=b$ by cancellation. Thus $S$ is embedded in $G$. \end{proof} It is well known (and a consequence of Corollary~\ref{groupembedding}) that there is an embedding $\h:M(\G) \to G(\G)$ of the graph monoid $M(\G)$ into the graph group $G(\G)$, and so we have the following. \begin{cor} \label{polygraphmonoid3} For any graph $\G$, the polygraph monoid $P(\G)$ is strongly $E^*$-unitary. \end{cor} In the next section, we see that $P(\G)$ has another special property, namely that it is $F^*$-inverse. \section{$F^*$-inverse $0$-bisimple inverse monoids} \label{F*-inverse} Recall that an inverse monoid $S$ is $F^*$-\textit{inverse} if every non-zero element of $S$ is under a unique maximal element in the natural partial order. If $S$ does not have a zero, it is said to be $F$-inverse, and in this case, the definition is equivalent to every $\s$-class containing a maximum element. (Here $\s$ is the minimum group congruence on $S$.) However, we shall use the term $F^*$-inverse to include both cases. It is easy to verify that every $F^*$-inverse monoid is $E^*$-unitary. An $F^*$-inverse monoid which is also strongly $E^*$-unitary is called \textit{strongly $F^*$-inverse}. 
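A standard example: the bicyclic monoid, being the inverse hull of the free monogenic monoid on a generator $x$, is $F$-inverse. Indeed, the element $\r_{x^i}^{-1}\r_{x^j}$ lies beneath $\r_{x^{i-m}}^{-1}\r_{x^{j-m}}$ where $m = \min(i,j)$, and one checks directly that this is the unique maximal element above it, since $x^{i-m}$ and $x^{j-m}$ have no common left factor other than the identity.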
It follows from Corollary~\ref{polygraphmonoid3} and the results of this section that a polygraph monoid is strongly $F^*$-inverse. We find a criterion for a $0$-bisimple inverse monoid with cancellative right unit submonoid to be $F^*$-inverse in terms of a property of its right unit submonoid. We remark that by a result of Lawson~\cite{lawson2}, for a $0$-bisimple inverse monoid, having a cancellative right unit submonoid is equivalent to being $E^*$-unitary. \begin{lem} \label{maximal} Let $C$ be a right cancellative monoid and suppose that $IH^0(C)$ is $0$-bisimple. If $a,b\in C$ have only units as common left factors, then $\r_a^{-1}\r_b$ is maximal in $IH^0(C)$. \end{lem} \begin{proof} Since $IH^0(C)$ is $0$-bisimple, every element has the form $\r_a^{-1}\r_b$ for some $a,b\in C$. The result is now immediate from Lemma~\ref{leq} and its corollary. \end{proof} If $C$ is a cancellative monoid, we denote the partially ordered set of principal right (resp. left) ideals by $P_r(C)$ (resp. $P_{\ell}(C)$). From the remarks at the end of Section~\ref{graph products}, we see that $P_r(C)$ is a join semilattice if and only if every pair of elements has an HCLF, and it is a meet semilattice if and only if every pair of elements has an LCRM. Corresponding remarks apply to $P_{\ell}(C)$. \begin{prop} \label{mainlemma} Let $C$ be a cancellative monoid and suppose that $IH^0(C)$ is $0$-bisimple. Then $IH^0(C)$ is $F^*$-inverse if and only if $P_r(C)$ is a join semilattice. \end{prop} \begin{proof} Suppose that every pair of elements of $C$ has an HCLF and let $\a$ be a non-zero element of $IH^0(C)$. Then $\a = \r_a^{-1}\r_b$ for some $a,b\in C$. Let $x$ be an HCLF of $a$ and $b$, say $a=xc$ and $b=xd$. Then the only common left factors of $c$ and $d$ are units, so by Lemma~\ref{maximal}, $\r_c^{-1}\r_d$ is maximal. But $\r_a^{-1}\r_b \leq \r_c^{-1}\r_d$ by Lemma~\ref{leq}, so $\a$ lies beneath a maximal element.
If $\r_a^{-1}\r_b \leq \r_p^{-1}\r_q$ for some $p,q\in C$, then by Lemma~\ref{leq}, $a=yp$ and $b=yq$ for some $y\in C$. Hence $x=yz$ for some $z\in C$ so that $a=yp=yzc$ and $b=yq=yzd$. By left cancellation, $p=zc$ and $q=zd$ so that $\r_p^{-1}\r_q \leq \r_c^{-1}\r_d$ by Lemma~\ref{leq}. Thus $\r_c^{-1}\r_d$ is the unique maximal element above $\r_a^{-1}\r_b$, and $IH^0(C)$ is $F^*$-inverse. Conversely, suppose that $IH^0(C)$ is $F^*$-inverse, and let $a,b\in C$. Then there is a unique maximal element $\r_c^{-1}\r_d$ above $\r_a^{-1}\r_b$. By Lemma~\ref{leq}, $a=xc$ and $b=xd$ for some $x\in C$. If $y$ is a common left factor of $a$ and $b$, then $a=yp$ and $b=yq$ for some $p,q\in C$ so that $\r_a^{-1}\r_b \leq \r_p^{-1}\r_q$. Now $\r_p^{-1}\r_q \leq \a$ for some maximal $\a$, and by uniqueness, $\a=\r_c^{-1}\r_d$. It follows that $p=zc$ and $q=zd$ for some $z$ so that $xc= a=yzc$ whence $x=yz$ and $y$ is a left factor of $x$. Thus $x$ is an HCLF of $a$ and $b$. \end{proof} An abstract version of this proposition is given in the following result. \begin{prop} \label{E-unitary} Let $S$ be an $E^*$-unitary $0$-bisimple ($E$-unitary bisimple) inverse monoid, and let $C$ be its right unit submonoid. Then $S$ is $F^*$-inverse ($F$-inverse) if and only if $P_r(C)$ is a join semilattice. \end{prop} \begin{proof} Since $S$ is $0$-bisimple, the right unit submonoid $C$ of $S$ is a left LCM monoid by Proposition~1 of \cite{np}, and from the same proposition we have that $S$ is isomorphic to $IH^0(C)$. By Theorem~5 of \cite{lawson2}, $C$ is cancellative so that the result is now immediate by Proposition~\ref{mainlemma}. \end{proof} A \textit{Garside monoid} is defined to be a cancellative monoid whose only unit is the identity, which is a lattice with respect to both left and right divisibility, and which satisfies additional finiteness conditions (see, for example, \cite{dehornoy}).
Such monoids have proved to be important in the study of algebraic and algorithmic properties of braid groups and, more generally, Artin groups of finite type. We note that if $C$ is a Garside monoid, then since the identity is the only unit, regarded as a partially ordered set under left divisibility, $C$ is order-isomorphic to $P_r(C)$ under reverse inclusion. Thus $P_r(C)$ is a lattice so that $IH(C)$ does not have a zero, and hence the next corollary follows immediately from Proposition~\ref{0-bisimple} and Proposition~\ref{mainlemma}. \begin{cor} The inverse hull of a Garside monoid $C$ is a bisimple $F$-inverse monoid. \end{cor} We now turn to Artin monoids. Recall that an Artin monoid is a monoid generated by a non-empty set $X$ subject to relations of the form $xyx\dots = yxy\dots $ where $x,y \in X$, both sides of a given relation have the same length, and at most one such relation holds for each pair $x,y\in X$. Thus graph monoids are Artin monoids where both sides of each defining relation have length 2. The associated \textit{Artin group} of a given Artin monoid $A$ is the group given by the presentation of $A$ regarded as a group presentation. Rather than the definition, we use some of the properties of Artin monoids which we now recall. The first three in the list below can be found in \cite{bs}, the third is also given in \cite{deligne}, and the fourth is from \cite{paris}. Let $A$ be an Artin monoid. Then we have the following. \begin{enumerate} \item [1.] $A$ is cancellative. \item [2.] The intersection of two principal left (right) ideals of $A$ is either empty or principal. \item [3.] $A$ is left (and right) Ore if and only if it is of finite type. \item [4.] $A$ embeds in its associated Artin group. \end{enumerate} \begin{prop} The inverse hull\ $IH(A)$ of an Artin monoid $A$ is strongly $F^*$-inverse. 
\end{prop} \begin{proof} It follows from Proposition~\ref{necandsuff} and item 4 above that $IH(A)$ is strongly $E^*$-unitary ($E$-unitary in case $A$ is of finite type). Moreover, we have already noted that condition~$(d)$ of Proposition~\ref{0-bisimple} is satisfied. Hence $IH(A)$ is $0$-bisimple (bisimple if $A$ is of finite type). Thus by Proposition~\ref{mainlemma}, it is enough to show that any two elements of $A$ have an HCLF. This is noted in \cite{bs}. The argument is as follows. Since the defining relations of $A$ are homogeneous (i.e., the two words in each relation have the same length), it follows that any factor (left or right) of an element $w$ of $A$ has length at most $|w|$. Hence any element of $A$ has only finitely many left factors. Let $x_1,\dots,x_k$ be the common left factors of two elements $v$ and $w$ of $A$. Then by repeated application of the right-handed version of item 2 (the intersection is non-empty, since it contains $v$), $$ x_1A \cap \dots \cap x_kA = xA$$ for some $x$; that is, $x$ is the least common left multiple of $x_1,\dots,x_k$. Now $x$ is a common left factor of $v$ and $w$ (so it must be one of the $x_i$'s), and it is clearly the HCLF of $v$ and $w$. \end{proof} Since a graph monoid is a special type of Artin monoid, we immediately have the following corollary. \begin{cor} For a graph $\G$, the polygraph monoid is a strongly $F^*$-inverse monoid. \end{cor} \section*{Acknowledgements} The work reported in this paper was started during a visit by the first author to Carleton University where the second author was conducting research supported by the Leverhulme Trust. The first author would like to thank Benjamin Steinberg for making his visit possible, and the School of Mathematics and Statistics at Carleton for its hospitality. The paper was completed when the second author was an RCUK Academic Fellow at the University of Manchester.
{ "timestamp": "2008-03-14T17:57:01", "yymm": "0803", "arxiv_id": "0803.2141", "language": "en", "url": "https://arxiv.org/abs/0803.2141", "abstract": "Our first main result shows that a graph product of right cancellative monoids is itself right cancellative. If each of the component monoids satisfies the condition that the intersection of two principal left ideals is either principal or empty, then so does the graph product. Our second main result gives a presentation for the inverse hull of such a graph product. We then specialise to the case of the inverse hulls of graph monoids, obtaining what we call polygraph monoids. Among other properties, we observe that polygraph monoids are F*-inverse. This follows from a general characterisation of those right cancellative monoids with inverse hulls that are F*-inverse.", "subjects": "Rings and Algebras (math.RA)", "title": "Graph products of right cancellative monoids", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.981166874515169, "lm_q2_score": 0.7217431943271999, "lm_q1q2_score": 0.708150514180613 }
https://arxiv.org/abs/1803.07939
Remarks on Jordan derivations over matrix algebras
Let C be a commutative ring with unity. In this article, we show that every Jordan derivation over an upper triangular matrix algebra T_n(C) is an inner derivation. Further, we extend the result for Jordan derivation on full matrix algebra M_n(C).
\section{Introduction} Throughout this article $C$ denotes a commutative ring with unity, unless otherwise stated, and $T$ denotes a $C$-algebra. Recall that a map $D:T\rightarrow T$ is called a \emph{Jordan derivation} if it is $C$-linear and $D(a^2)=D(a)a+aD(a)$ for all $a\in T$. The map $D$ is said to be a \emph{derivation} if $D(ab)=D(a)b+aD(b)$ for all $a, b\in T$, and an \emph{antiderivation} if $D(ab)=D(b)a+bD(a)$ for all $a, b\in T$. Derivations and antiderivations are the trivial examples of Jordan derivations, but not every Jordan derivation is a derivation (see the example in \cite{E}). A derivation $D: T\rightarrow T$ is said to be an \emph{inner derivation} if there exists $a_0\in T$ such that $D(a)=a_0a-aa_0$ for all $a\in T$. The study of Jordan derivations was initiated by Herstein in 1957. In \cite{D}, he showed that there are no proper Jordan derivations on a prime ring of characteristic not 2. In 1975, Cusack extended Herstein's result in \cite{D1}. Later, in 1988, Bre\v{s}ar \cite{F} proved that every Jordan derivation from a 2-torsion free semiprime ring into itself is a derivation. Recall that a ring $R$ is \emph{2-torsion free} if $2a=0_R$ implies $a=0_R$ for every $a \in R$. The problem of whether every Jordan derivation of a ring or algebra into itself is a derivation was discussed by many mathematicians in \cite{ben,F2,F4}. Now, suppose $A$ and $B$ are unital algebras over $C$, and $M$ is a unital $(A,B)$-bimodule which is faithful as a left $A$-module and as a right $B$-module. Then the $C$-algebra \[\text{Tri}(A,B,M)=\left \{\left( \begin{array}{ccc} a & m \\ 0 & b \end{array} \right)\vert \enspace a \in A,\, b \in B,\, m \in M \right \} \] under the usual matrix operations is said to be a triangular algebra. In the last few years, Jordan derivations over triangular algebras have attracted the attention of many mathematicians.
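To make the definitions above concrete, note that every inner derivation is a derivation, and hence a Jordan derivation; both identities are easy to check numerically on $2\times 2$ integer matrices (an illustrative sketch only; the matrices below are arbitrary choices).

```python
import numpy as np

def inner_derivation(a0):
    """Return the inner derivation D(a) = a0*a - a*a0."""
    return lambda a: a0 @ a - a @ a0

a0 = np.array([[1, 2], [3, 4]])   # arbitrary choice of a0
D = inner_derivation(a0)

x = np.array([[0, 1], [5, 2]])
y = np.array([[7, 0], [1, 3]])

# Derivation identity: D(xy) = D(x)y + xD(y)
assert np.array_equal(D(x @ y), D(x) @ y + x @ D(y))
# Jordan identity:     D(x^2) = D(x)x + xD(x)
assert np.array_equal(D(x @ x), D(x) @ x + x @ D(x))
```

The same check works for any choice of $a_0$, since the Leibniz rule for $a\mapsto a_0a-aa_0$ holds identically.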
In 2005, Benkovi\v{c} \cite{G} proved that every Jordan derivation from an upper triangular matrix algebra $\mathcal{A}$ into an arbitrary bimodule $\mathcal{M}$ is the sum of a derivation and an antiderivation, where $\mathcal{M}$ is 2-torsion free. In 2006, Zhang and Yu \cite{S} proved that every Jordan derivation from $U$ into itself is a derivation, where $U=\text{Tri}(A,B,M)$ is a triangular algebra and $C$ is 2-torsion free. Note that the above result need not hold if $C$ is not $2$-torsion free. In this connection, we give an example of a Jordan derivation over $\text{Tri}(A,B,M)$ which is not a derivation. The construction of the example is the same as that of Example 8 of Cheung \cite{che}, but from a different perspective. \begin{ex} Let $C=\mathbb{Z}_2$, $A=B=\left \{\left( \begin{array}{ccc} a & b \\ 0 & a \end{array} \right)\vert \enspace a,~b \in \mathbb{Z}_2 \right \}$ and $M=\mathcal{T}_2({\mathbb{Z}_2})$, where $\mathbb{Z}_2=\{0,1\}$ is the field with two elements and $\mathcal{T}_2({\mathbb{Z}_2})$ is the algebra of $2\times 2$ upper triangular matrices over $\mathbb{Z}_2$. Note that $\mathbb{Z}_2$ is not a $2$-torsion free ring. Define $D:\text{Tri}(A,B,M)\rightarrow \text{Tri}(A,B,M)$ as \begin{align*} & D(X)=a_{34}e_{12}+a_{24}e_{13}+a_{13}e_{24}+a_{12}e_{34}~\text{for all}~X\in \text{Tri}(A,B,M),\\ & \text{where}~X=a_{11}(e_{11}+e_{22})+a_{33}(e_{33}+e_{44})+a_{12}e_{12}+a_{13}e_{13}+a_{14}e_{14}+a_{24}e_{24}\\ &+a_{34}e_{34},~a_{ij}\in \mathbb{Z}_2. \end{align*} Here $e_{ij}$ represents the $4\times 4$ matrix with $1$ at the $(i,j)$th position and $0$ elsewhere. Then $D$ is a Jordan derivation over $\text{Tri}(A,B,M)$. Let $X=e_{11}+e_{13}+e_{14}+e_{22}+e_{33}+e_{34}+e_{44}$ and $Y=e_{13}+e_{14}+e_{33}+e_{34}+e_{44}$. In this case, $D(XY)=0\neq e_{12}=D(X)Y+XD(Y)$. Therefore, $D$ is not a derivation over $\text{Tri}(A,B,M)$.
\end{ex} Motivated by the above, in Section 2 we prove that every Jordan derivation from the algebra of upper triangular matrices $\mathcal{T}_n(C)$ into itself is a derivation, without assuming $C$ to be $2$-torsion free (Theorem \ref{3.1}). This extends Theorem 2.1 of \cite{S}. Moreover, we prove that every derivation on $\mathcal{T}_n(C)$ is an inner derivation (Theorem \ref{3.3}). In 2007 \cite{R}, Ghosseiri proved that if $R$ is a $2$-torsion free ring, $n \geq 2$, and $D$ is a Jordan derivation on the upper triangular matrix ring $\mathcal{T}_n(R)$, then $D$ is a derivation. As a consequence of Theorem \ref{3.1}, we remark that every Jordan derivation over $\mathcal{T}_n(F)$ is a derivation, where $F=\{0,1\}$ is the two-element field, which is not $2$-torsion free (Corollary \ref{3.2}). In 2009 \cite{ali}, Alizadeh proved that if $A$ is a unital associative ring and $M$ is a $2$-torsion free $A$-bimodule, then every Jordan derivation from $\mathcal{M}_n(A)$ into $\mathcal{M}_n(M)$ is a derivation. In Section 3, we prove that there are no proper Jordan derivations on the full matrix algebra $\mathcal{M}_n(C)$, again without assuming that $C$ is $2$-torsion free (Theorem \ref{4.1}). Further, we prove that every derivation on $\mathcal{M}_n(C)$ is an inner derivation (Theorem \ref{4.2}). Before stating the main results, we have the following:\\ Let $T$ be an algebra, and let $0_T$ and $1_T$ denote the zero and the identity of $T$, respectively. Similarly, $0$ and $1$ denote the zero and the identity of $C$, respectively. $e_{ij}$ denotes the $n\times n$ matrix with $1$ at the $(i,j)$th position and $0$ elsewhere. Let $D: T \rightarrow T$ be a Jordan derivation. Then $D(x^2)=D(x)x+xD(x)$ for all $x\in T$. Replacing $x$ by $x+y$, we have $D((x+y)^2)=D(x+y)(x+y)+(x+y)D(x+y)$, which implies that, for all $x,y \in T$,\begin{equation}\label{eq:1} D(xy+yx)=D(x)y+xD(y)+D(y)x+yD(x). \end{equation} Since $D$ is additive, \begin{equation} \label{eq:2} D(0_T)=0_T.
\end{equation} Also, $D(1_T)=D(1_T^2)=D(1_T)1_T+1_TD(1_T)$, therefore \begin{equation} \label{eq:3} D(1_T)=0_T. \end{equation} \section{Jordan Derivation on $\mathcal{T}_n(C)$} In this section, we show that every Jordan derivation over an upper triangular matrix algebra is an inner derivation. Towards this, we first prove the following: \begin{theorem} \label{3.1} Let $C$ be a commutative ring with unity and $\mathcal{T}_n(C)$ be an algebra of $n\times n$ upper triangular matrices over $C$. Then every Jordan derivation on $\mathcal{T}_n(C)$, $n \geq 2$, into itself is a derivation. \end{theorem} \begin{proof} Let $T=\mathcal{T}_n(C)$ and $D$ be a Jordan derivation from $T$ into itself. Let \begin{eqnarray} \label{eq:3a} D(e_{ii})=\sum\limits_{1 \leq j \leq k \leq n}a_{jk}^{(ii)}e_{jk}~, ~\text{where} ~a_{jk}^{(ii)} \in C. \end{eqnarray} Since $D$ is a Jordan derivation, $D(e_{ii})=D(e_{ii}^2)=D(e_{ii})e_{ii}+e_{ii}D(e_{ii})$. Now, by using \eqref{eq:3a} and equating the coefficients from both sides, we obtain \begin{equation} \label{eq:4} \begin{aligned} D(e_{ii})=a_{1i}^{(ii)}e_{1i}+a_{2i}^{(ii)}e_{2i}+a_{3i}^{(ii)}e_{3i}+\cdots+a_{i-1,i}^{(ii)}e_{i-1,i}+a_{i,i+1}^{(ii)}e_{i,i+1}+\cdots+a_{in}^{(ii)}e_{in}\\ \text{for all} ~i\in \{1,2,3,\cdots,n\}. \end{aligned} \end{equation} Also, from \eqref{eq:3}, we have $D(1_T)=0_T$. Therefore, by using \eqref{eq:4}, we have $\sum\limits_{1 \leq i < j \leq n}(a_{ij}^{(ii)}+a_{ij}^{(jj)})e_{ij}=0_T$. Hence, \begin{eqnarray} \label{eq:5} a_{ij}^{(ii)}+a_{ij}^{(jj)}=0~, ~\text{for} ~1 \leq i<j \leq n. \end{eqnarray} In order to find $D(e_{ij})$ when $1 \leq i < j\leq n$, let $D(e_{ij})=\sum\limits_{1 \leq k \leq l \leq n}a_{kl}^{(ij)}e_{kl}$, for $a_{kl}^{(ij)} \in C$ and $1 \leq i < j\leq n$. Now, by applying \eqref{eq:1} to $e_{ij}=e_{ii}e_{ij}+e_{ij}e_{ii}$, we have $D(e_{ij})=D(e_{ii})e_{ij}+e_{ii}D(e_{ij})+D(e_{ij})e_{ii}+e_{ij}D(e_{ii})$.
Also, by \eqref{eq:4}, we get \begin{equation} \label{eq:5a} \begin{aligned} D(e_{ij}) &= a_{1i}^{(ii)}e_{1j} + a_{2i}^{(ii)}e_{2j} + a_{3i}^{(ii)}e_{3j} + \cdots + a_{i-1,i}^{(ii)}e_{i-1,j} \\ &+ a_{1i}^{(ij)}e_{1i} + a_{2i}^{(ij)}e_{2i} + \cdots + a_{i-1,i}^{(ij)}e_{i-1,i} + a_{i,i+1}^{(ij)}e_{i,i+1} + \cdots + a_{in}^{(ij)}e_{in}. \end{aligned} \end{equation} \vspace{0.2cm} Since $e_{ij}=e_{ij}e_{jj}+e_{jj}e_{ij}$, by using \eqref{eq:1} and substituting the value of $D(e_{ij})$ from \eqref{eq:5a} and that of $D(e_{jj})$ from \eqref{eq:4}, we have \begin{equation} \label{eq:6a} D(e_{ij})=a_{1i}^{(ii)}e_{1j} + a_{2i}^{(ii)}e_{2j} + a_{3i}^{(ii)}e_{3j} + \cdots + a_{i-1,i}^{(ii)}e_{i-1,j} + a_{ij}^{(ij)}e_{ij} + a_{j,j+1}^{(jj)}e_{i,j+1} + \cdots + a_{jn}^{(jj)}e_{in}. \end{equation} Now, from the assumption on $D(e_{ij})$ and \eqref{eq:6a}, \begin{equation} \label{eq:6} D(e_{ij})=a_{1i}^{(ii)}e_{1j} + a_{2i}^{(ii)}e_{2j} + a_{3i}^{(ii)}e_{3j} + \cdots + a_{i-1,i}^{(ii)}e_{i-1,j} + a_{ij}^{(ij)}e_{ij} + a_{i,j+1}^{(ij)}e_{i,j+1} + \cdots + a_{in}^{(ij)}e_{in}. \end{equation} Now, we establish \begin{equation} \label{eq:7} D(e_{ij}e_{kl})=D(e_{ij})e_{kl}+e_{ij}D(e_{kl}) \end{equation} for all $i,j,k,l\in \{1,2,3,\cdots,n\}$. Since \eqref{eq:7} is equivalent to \begin{equation} \label{eq:8} D(e_{kl}e_{ij})=D(e_{kl})e_{ij}+e_{kl}D(e_{ij}), \end{equation} proving \eqref{eq:7} suffices to justify \eqref{eq:8}, and vice versa. During the proof, we frequently use \eqref{eq:4} and \eqref{eq:6}. \underline{Case 1:} Let $i\neq j$. Without loss of generality, let $i>j$. From \eqref{eq:4}, we have \begin{align*} D(e_{ii})e_{jj}+e_{ii}D(e_{jj})=0_T. \end{align*} Therefore, since $e_{ii}e_{jj}=0_T$, $D(e_{ii}e_{jj})=D(e_{ii})e_{jj}+e_{ii}D(e_{jj})$. \underline{Case 2:} Let $j<k$. In this case we want to establish $D(e_{ii}e_{jk})=D(e_{ii})e_{jk}+e_{ii}D(e_{jk})$ for all $i,j,k\in \{1,2,3,\cdots,n\}$.
If $i<j$, then by using \eqref{eq:5}, \begin{align*} D(e_{ii})e_{jk}+e_{ii}D(e_{jk})=(a_{ij}^{(ii)}+a_{ij}^{(jj)})e_{ik}=0_T. \end{align*} If $i=j$, then \begin{align*} D(e_{ik})e_{ii}+e_{ik}D(e_{ii})=0_T. \end{align*} If $i>j$, then \begin{align*} D(e_{ii})e_{jk}+e_{ii}D(e_{jk})=0_T. \end{align*} \underline{Case 3:} Let $i<j$ and $k<l$. Now, we prove $D(e_{ij}e_{kl})=D(e_{ij})e_{kl}+e_{ij}D(e_{kl})$ for all $i,j,k,l\in \{1,2,3,\cdots,n\}$. For $j<k$, \begin{align*} D(e_{kl})e_{ij}+e_{kl}D(e_{ij})=0_T. \end{align*} If $j=k$, then \begin{align*} D(e_{kl})e_{ik}+e_{kl}D(e_{ik})=0_T. \end{align*} Also, for $j>k$, \begin{align*} D(e_{ij})e_{kl}+e_{ij}D(e_{kl})=0_T. \end{align*} Now, to prove that $D$ is a derivation on $T$, let $A, B \in T$, where $A=\sum\limits_{1 \leq i \leq j \leq n}A_{ij}e_{ij}$ and $B=\sum\limits_{1 \leq k \leq l \leq n}B_{kl}e_{kl}$ for some $A_{ij}, B_{kl} \in C$. Since $D$ is a Jordan derivation on $T$ which acts as a derivation on products of the $e_{ij}$'s, we have \begin{align*} D(A)B+AD(B) &= \sum\limits_{1 \leq i \leq j \leq n} \sum\limits_{1 \leq k \leq l \leq n}A_{ij}B_{kl}[D(e_{ij})e_{kl}+e_{ij}D(e_{kl})] \\ &= \sum\limits_{1 \leq i \leq j \leq n} \sum\limits_{1 \leq k \leq l \leq n}A_{ij}B_{kl}D(e_{ij}e_{kl}) \\ &= D(\sum\limits_{1 \leq i \leq l \leq n}C_{il}e_{il})\\ &= D(AB), \end{align*} where $C_{il}=\sum_{j=i}^{l}A_{ij}B_{jl}$. Thus, $D$ is a derivation on $T$. \end{proof} Now, as a corollary, we describe Jordan derivations on $\mathcal{T}_n(F)$, where $n \geq 2$ is a positive integer and $\mathcal{T}_n(F)$ is considered as a ring. In this case, we relax the linearity condition on the map. \begin{cor} \label{3.2} If $F$ is the field with two elements, then every Jordan derivation on $\mathcal{T}_n(F)$, $n \geq 2$, into itself is a derivation. \end{cor} \begin{proof} Let $D$ be a Jordan derivation on $\mathcal{T}_n(F)$. Since $F=\{0,1\}$, every additive map on $\mathcal{T}_n(F)$ is automatically $F$-linear; in particular, $D$ is $F$-linear. Hence $D$ is a derivation by Theorem \ref{3.1}.
\end{proof} \begin{theorem} \label{3.3} Every derivation over $\mathcal{T}_n(C),~n\geq 2,$ is an inner derivation. \end{theorem} \begin{proof} Let $D$ be a derivation on $\mathcal{T}_n(C)$. Since every derivation is a Jordan derivation, all the identities in the proof of Theorem \ref{3.1} hold. Let $i<j<k$. Since $e_{ik}=e_{ij}e_{jk}$ and $D$ is a derivation on $\mathcal{T}_n(C)$, we have $D(e_{ik})=D(e_{ij})e_{jk}+e_{ij}D(e_{jk})$. By \eqref{eq:6}, equating the coefficient of $e_{ik}$ from both sides, \begin{equation} \label{8b} a_{ik}^{(ik)}=a_{ij}^{(ij)}+a_{jk}^{(jk)}. \end{equation} Let $X=\sum\limits_{1 \leq j \leq k \leq n}x_{jk}e_{jk}$, where $x_{jk} \in C$. Now, by using \eqref{eq:4}, \eqref{eq:5}, \eqref{eq:6} and \eqref{8b}, $D(X)=BX-XB$, where \begin{align*} B=\sum_{l=2}^{n}(-a_{1l}^{(1l)})e_{ll}+\sum\limits_{1 \leq i < j \leq n}a_{ij}^{(jj)}e_{ij}. \end{align*} So, $D$ is an inner derivation. \end{proof} As a consequence of Theorems \ref{3.1} and \ref{3.3}, we have the following: \begin{cor} \label{3.4} Every Jordan derivation over $\mathcal{T}_n(C),~n\geq 2,$ is an inner derivation. \end{cor} Let $D$ be a Jordan derivation on $\mathcal{T}_n(C)$. One may ask whether there exists a unique $B\in \mathcal{T}_n(C)$ such that $D(X)=BX-XB$ for all $X\in \mathcal{T}_n(C)$. The following example shows that such a $B$ need not be unique. \begin{ex} Define $D:\mathcal{T}_2(\mathbb{Z}) \rightarrow \mathcal{T}_2(\mathbb{Z})$ by \\ \[D \begin{pmatrix} x_{11}&x_{12}\\ 0&x_{22} \end{pmatrix}= \begin{pmatrix} 0&x_{12}\\ 0&0 \end{pmatrix},\\ ~\text{where}~x_{ij}\in \mathbb{Z},~\text{the ring of integers}. \] An easy computation shows that $D$ is a Jordan derivation. Also, by Corollary \ref{3.4}, $D$ is inner. Note that $D(X)=BX-XB$ for $B = e_{11}$ or $B = -e_{22}$. Hence, there is more than one choice for $B$ in this case. \end{ex} \section{Jordan Derivations on $\mathcal{M}_n(C)$} Now, we state and prove our main theorem on Jordan derivations on the full matrix algebra.
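Before doing so, we note that the non-uniqueness of $B$ in the example closing the previous section is easy to confirm by direct computation (an illustrative sketch only; the test matrix below is an arbitrary choice).

```python
import numpy as np

def D(x):
    """The Jordan derivation of the example above: keep only the (1,2) entry."""
    out = np.zeros_like(x)
    out[0, 1] = x[0, 1]
    return out

e11 = np.array([[1, 0], [0, 0]])
e22 = np.array([[0, 0], [0, 1]])

x = np.array([[3, 5], [0, 7]])  # an arbitrary upper triangular matrix

# Both B = e11 and B = -e22 implement D as X -> BX - XB.
for B in (e11, -e22):
    assert np.array_equal(D(x), B @ x - x @ B)
```

The two choices differ by the identity matrix, which commutes with everything, so both induce the same inner derivation.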
\begin{theorem} \label{4.1} Let $C$ be a commutative ring with unity and $\mathcal{M}_n(C)$ be the full matrix algebra over $C$. Then every Jordan derivation on $\mathcal{M}_n(C)$, $n \geq 2$ into itself is a derivation. \end{theorem} \begin{proof} Let $T=\mathcal{M}_n(C)$, $D$ be a Jordan derivation from $T$ into itself. Let \begin{eqnarray} \label{eq:8a} D(e_{ii})=\sum_{k=1}^{n}\sum_{l=1}^{n}a_{kl}^{(ii)}e_{kl}~, ~\text{for} ~a_{kl}^{(ii)} \in C. \end{eqnarray} Since $D$ is a Jordan derivation, $D(e_{ii})=D(e_{ii}^2)=D(e_{ii})e_{ii}+e_{ii}D(e_{ii})$. Now, by using \eqref{eq:8a} and equating the coefficients from both sides, we obtain, \begin{equation} \label{eq:9} \begin{aligned} D(e_{ii}) &= a_{i1}^{(ii)}e_{i1}+ \cdots +a_{i,i-1}^{(ii)}e_{i,i-1}+a_{i,i+1}^{(ii)}e_{i,i+1}+ \cdots +a_{in}^{(ii)}e_{in} \\ &+ a_{1i}^{(ii)}e_{1i}+ \cdots +a_{i-1,i}^{(ii)}e_{i-1,i}+a_{i+1,i}^{(ii)}e_{i+1,i}+ \cdots +a_{ni}^{(ii)}e_{ni} \\ &\text{for all} ~i\in \{1,2,3,\cdots,n\}. \end{aligned} \end{equation} Also, \begin{equation} \label{eq:10} \begin{aligned} D(1_T)=0_T & \implies \mathop {\sum_{k=1}^{n}\sum_{l=1}^{n}}_{k\neq l}(a_{kl}^{(kk)}+a_{kl}^{(ll)})e_{kl}=0_T \\ & \implies a_{kl}^{(kk)}+a_{kl}^{(ll)}=0 \end{aligned} \end{equation} for all $k,l$ with $k\neq l$, by using \eqref{eq:3} and \eqref{eq:9}. In order to find $D(e_{ij})$ for $i\neq j$, let \begin{eqnarray} \label{eq:10a} D(e_{ij})=\sum_{k=1}^{n}\sum_{l=1}^{n}a_{kl}^{(ij)}e_{kl}~, ~\text{for} ~a_{kl}^{(ij)} \in C ~\text{and} ~i\neq j. \end{eqnarray} From \eqref{eq:2}, we have the identity $D(e_{ij})e_{ij}+e_{ij}D(e_{ij})=D(e_{ij}^{2})=D(0_T)=0_T$. By equating the coefficients of $e_{ij}$ and $e_{ii}$, we obtain \begin{equation} \label{eq:11a} a_{ii}^{(ij)}+a_{jj}^{(ij)}=0 \end{equation} and \begin{equation} \label{eq:12} a_{ji}^{(ij)}=0 \end{equation} respectively. Now, applying \eqref{eq:1} on $e_{ij}=e_{ii}e_{ij}+e_{ij}e_{ii}$, we get $D(e_{ij})=D(e_{ii})e_{ij}+e_{ii}D(e_{ij})+D(e_{ij})e_{ii}+e_{ij}D(e_{ii})$. 
Using \eqref{eq:9} and \eqref{eq:10a}, \begin{equation} \begin{aligned} \label{eq:12b} D(e_{ij}) &= a_{1i}^{(ii)}e_{1j}+ \cdots +a_{i-1,i}^{(ii)}e_{i-1,j}+a_{i+1,i}^{(ii)}e_{i+1,j}+ \cdots +a_{ni}^{(ii)}e_{nj} \\ & + a_{i1}^{(ij)}e_{i1}+ \cdots +a_{i,i-1}^{(ij)}e_{i,i-1}+a_{ii}^{(ij)}e_{ii}+a_{i,i+1}^{(ij)}e_{i,i+1}+ \cdots +a_{in}^{(ij)}e_{in} \\ & + a_{1i}^{(ij)}e_{1i}+ \cdots +a_{i-1,i}^{(ij)}e_{i-1,i}+a_{ii}^{(ij)}e_{ii}+a_{i+1,i}^{(ij)}e_{i+1,i}+ \cdots +a_{ni}^{(ij)}e_{ni} \\ &+a_{ji}^{(ii)}e_{ii}. \end{aligned} \end{equation} Now, from \eqref{eq:10a} and \eqref{eq:12b}, equating the coefficient of $e_{ii}$, \begin{equation} \begin{aligned} \label{eq:12c} a_{ii}^{(ij)}=2a_{ii}^{(ij)}+a_{ji}^{(ii)}. \end{aligned} \end{equation} Therefore, from \eqref{eq:12b} and \eqref{eq:12c}, \begin{equation} \begin{aligned} \label{eq:12a} D(e_{ij}) &= a_{1i}^{(ii)}e_{1j}+ \cdots +a_{i-1,i}^{(ii)}e_{i-1,j}+a_{i+1,i}^{(ii)}e_{i+1,j}+ \cdots +a_{ni}^{(ii)}e_{nj} \\ & + a_{i1}^{(ij)}e_{i1}+ \cdots +a_{i,i-1}^{(ij)}e_{i,i-1}+a_{ii}^{(ij)}e_{ii}+a_{i,i+1}^{(ij)}e_{i,i+1}+ \cdots +a_{in}^{(ij)}e_{in} \\ & + a_{1i}^{(ij)}e_{1i}+ \cdots +a_{i-1,i}^{(ij)}e_{i-1,i}+a_{i+1,i}^{(ij)}e_{i+1,i}+ \cdots +a_{ni}^{(ij)}e_{ni}. \end{aligned} \end{equation} Since $e_{ij}=e_{ij}e_{jj}+e_{jj}e_{ij}$, by using \eqref{eq:1}, \eqref{eq:10} and \eqref{eq:12}, and substituting the values of $D(e_{ij})$ and $D(e_{jj})$ from \eqref{eq:12a} and \eqref{eq:9} respectively, we have \begin{equation} \label{eq:13a} \begin{aligned} D(e_{ij}) &= a_{1i}^{(ii)}e_{1j}+ \cdots +a_{i-1,i}^{(ii)}e_{i-1,j}+a_{i+1,i}^{(ii)}e_{i+1,j}+ \cdots +a_{ni}^{(ii)}e_{nj}+a_{ij}^{(ij)}e_{ij} \\ & + a_{j1}^{(jj)}e_{i1}+ \cdots +a_{j,j-1}^{(jj)}e_{i,j-1}+a_{j,j+1}^{(jj)}e_{i,j+1}+ \cdots +a_{jn}^{(jj)}e_{in}.
\end{aligned} \end{equation} Also, from \eqref{eq:10a} and \eqref{eq:13a}, \begin{equation} \label{eq:13} \begin{aligned} D(e_{ij}) &= a_{1i}^{(ii)}e_{1j}+ \cdots +a_{i-1,i}^{(ii)}e_{i-1,j}+a_{i+1,i}^{(ii)}e_{i+1,j}+ \cdots +a_{ni}^{(ii)}e_{nj} \\ & + a_{i1}^{(ij)}e_{i1}+ \cdots +a_{i,i-1}^{(ij)}e_{i,i-1}+a_{ii}^{(ij)}e_{ii}+a_{i,i+1}^{(ij)}e_{i,i+1}+ \cdots +a_{in}^{(ij)}e_{in} \end{aligned} \end{equation} and \begin{equation} \label{eq:14} a_{il}^{(ij)}=a_{jl}^{(jj)}~, ~\text{for all} ~l\in \{1,2,\cdots,j-1,j+1,\cdots,n\}. \end{equation} Now, equating the coefficient of $e_{jj}$ from \eqref{eq:10a} and \eqref{eq:13}, \begin{equation} \label{eq:14a} a_{jj}^{(ij)}=a_{ji}^{(ii)}. \end{equation} Again, from \eqref{eq:11a} and \eqref{eq:14a}, \begin{equation} \label{eq:11} a_{ii}^{(ij)}+a_{ji}^{(ii)}=0. \end{equation} Now, we establish \begin{equation} \label{eq:15} D(e_{ij}e_{kl})=D(e_{ij})e_{kl}+e_{ij}D(e_{kl}) \end{equation} for all $i,j,k,l\in \{1,2,3,\cdots,n\}$. Since \eqref{eq:15} is equivalent to \begin{equation} \label{eq:16} D(e_{kl}e_{ij})=D(e_{kl})e_{ij}+e_{kl}D(e_{ij}), \end{equation} proving \eqref{eq:15} suffices to justify \eqref{eq:16}, and vice versa. During the proof, we frequently use \eqref{eq:9} and \eqref{eq:13}. \underline{Case 1:} Let $i\neq j$. From \eqref{eq:9} and \eqref{eq:10}, we have \begin{align*} D(e_{ii})e_{jj}+e_{ii}D(e_{jj})=(a_{ij}^{(ii)}+a_{ij}^{(jj)})e_{ij}=0_T. \end{align*} Therefore, since $e_{ii}e_{jj}=0_T$, $D(e_{ii}e_{jj})=D(e_{ii})e_{jj}+e_{ii}D(e_{jj})$. \underline{Case 2:} Let $j\neq k$. In this case we establish $D(e_{ii}e_{jk})=D(e_{ii})e_{jk}+e_{ii}D(e_{jk})$ for all $i,j,k\in \{1,2,3,\cdots,n\}$. If $i\neq j$, then by using \eqref{eq:9}, \eqref{eq:10} and \eqref{eq:13}, \begin{align*} D(e_{ii})e_{jk}+e_{ii}D(e_{jk})=(a_{ij}^{(ii)}+a_{ij}^{(jj)})e_{ik}=0_T. \end{align*} If $i=j$, then by using \eqref{eq:11}, \begin{align*} D(e_{ik})e_{ii}+e_{ik}D(e_{ii})=(a_{ii}^{(ik)}+a_{ki}^{(ii)})e_{ii}=0_T.
\end{align*} \underline{Case 3:} Let $i\neq j$ and $k\neq l$. Now, our goal is to prove $D(e_{ij}e_{kl})=D(e_{ij})e_{kl}+e_{ij}D(e_{kl})$ for all $i,j,k,l\in \{1,2,3,\cdots,n\}$. Let $j\neq k$. Then by Case 2 and \eqref{eq:1}, \begin{equation} \label{eq:17} D(e_{ij}e_{kk}) = 0_T \implies D(e_{ij})e_{kk}+e_{ij}D(e_{kk}) = 0_T \implies a_{ik}^{(ij)}+a_{jk}^{(kk)} = 0. \end{equation} Also, by using \eqref{eq:17}, we have \begin{align*} D(e_{ij})e_{kl}+e_{ij}D(e_{kl})=(a_{ik}^{(ij)}+a_{jk}^{(kk)})e_{il}=0_T. \end{align*} Let $j=k$ and $i\neq l$. Replacing $(i,j,k)$ by $(j,l,i)$ in \eqref{eq:17}, we have \begin{align*} D(e_{jl})e_{ij}+e_{jl}D(e_{ij})=(a_{ji}^{(jl)}+a_{li}^{(ii)})e_{jj}=0_T. \end{align*} Let $j=k$ and $i=l$. Interchanging $i$ and $j$ in \eqref{eq:14}, we get \begin{align*} D(e_{ij})e_{ji}+e_{ij}D(e_{ji}) &= a_{1i}^{(ii)}e_{1i}+ \cdots +a_{i-1,i}^{(ii)}e_{i-1,i}+a_{i+1,i}^{(ii)}e_{i+1,i}+ \cdots +a_{ni}^{(ii)}e_{ni} \\ &+ a_{j1}^{(ji)}e_{i1}+ \cdots +a_{j,i-1}^{(ji)}e_{i,i-1}+a_{j,i+1}^{(ji)}e_{i,i+1}+ \cdots +a_{jn}^{(ji)}e_{in} \\ &= a_{1i}^{(ii)}e_{1i}+ \cdots +a_{i-1,i}^{(ii)}e_{i-1,i}+a_{i+1,i}^{(ii)}e_{i+1,i}+ \cdots +a_{ni}^{(ii)}e_{ni} \\ &+ a_{i1}^{(ii)}e_{i1}+ \cdots +a_{i,i-1}^{(ii)}e_{i,i-1}+a_{i,i+1}^{(ii)}e_{i,i+1}+ \cdots +a_{in}^{(ii)}e_{in} \\ &= D(e_{ii}). \end{align*} This proves our claim. Finally, the proof of $D(AB)=D(A)B+AD(B)$ for all $A, B \in \mathcal{M}_{n}(C)$ is the same as in the proof of Theorem \ref{3.1}. Thus, $D$ is a derivation on $\mathcal{M}_{n}(C)$. \end{proof} In fact, every Jordan derivation over $\mathcal{M}_{n}(C)$ turns out to be an inner derivation. Towards this, we have the following:\\ \begin{theorem} \label{4.2} Let $C$ be a commutative ring with unity and $\mathcal{M}_n(C)$ be the algebra of all $n\times n$ matrices over $C$. Then every derivation of $\mathcal{M}_{n}(C),~n\geq 2,$ is an inner derivation. \end{theorem} \begin{proof} Let $D$ be a derivation on $\mathcal{M}_n(C)$.
Since every derivation is a Jordan derivation, all the identities in the proof of Theorem \ref{4.1} hold. Let $i,j,k$ be pairwise distinct. Since $e_{ik}=e_{ij}e_{jk}$ and $D$ is a derivation on $\mathcal{M}_n(C)$, we have $D(e_{ik})=D(e_{ij})e_{jk}+e_{ij}D(e_{jk})$. By \eqref{eq:13}, equating the coefficient of $e_{ik}$ from both sides, \begin{equation} \label{eq:17a} a_{ik}^{(ik)}=a_{ij}^{(ij)}+a_{jk}^{(jk)}. \end{equation} For $i\neq j$, from $e_{ii}=e_{ij}e_{ji}$, we have $D(e_{ii})=D(e_{ij})e_{ji}+e_{ij}D(e_{ji})$. By \eqref{eq:9} and \eqref{eq:13}, equating the coefficient of $e_{ii}$ from both sides, \begin{equation} \label{eq:17b} a_{ij}^{(ij)}+a_{ji}^{(ji)}=0. \end{equation} Let $X=\sum_{k=1}^{n}\sum_{l=1}^{n}x_{kl}e_{kl}$, where $x_{kl} \in C$. Now, by using \eqref{eq:9}, \eqref{eq:10}, \eqref{eq:13}, \eqref{eq:17a} and \eqref{eq:17b}, $D(X)=BX-XB$, where \begin{align*} B=\sum_{l=2}^{n}(-a_{1l}^{(1l)})e_{ll}+ \mathop {\sum_{i=1}^{n}\sum_{j=1}^{n}}_{i\neq j}a_{ij}^{(jj)}e_{ij}. \end{align*} Thus, $D$ is an inner derivation. \end{proof} As a consequence of Theorems \ref{4.1} and \ref{4.2}, we have the following. \begin{cor} \label{4.3} Every Jordan derivation over $\mathcal{M}_n(C),~n\geq 2,$ is an inner derivation. \end{cor} Let $D$ be a Jordan derivation on $\mathcal{M}_n(C)$. One may ask whether there exists a unique $B\in \mathcal{M}_n(C)$ such that $D(X)=BX-XB$ for all $X\in \mathcal{M}_n(C)$. The following example shows that such a $B$ need not be unique. \begin{ex} Define $D:\mathcal{M}_4(\mathbb{Z}) \rightarrow \mathcal{M}_4(\mathbb{Z})$ by \\ \[D \begin{pmatrix} x_{11}&x_{12}&x_{13}&x_{14}\\ x_{21}&x_{22}&x_{23}&x_{24}\\ x_{31}&x_{32}&x_{33}&x_{34}\\ x_{41}&x_{42}&x_{43}&x_{44} \end{pmatrix}= \begin{pmatrix} 0&x_{12}&-x_{12}+x_{13}&x_{14}\\ -x_{21}+x_{31}&x_{32}&-x_{22}+x_{33}&x_{34}\\ -x_{31}&0&-x_{32}&0\\ -x_{41}&0&-x_{42}&0 \end{pmatrix},\\ ~\text{where}~x_{ij}\in \mathbb{Z}. \] Then it is easy to see that $D$ is a Jordan derivation.
By Corollary \ref{4.3}, $D$ is an inner derivation. Moreover, $D(X)=BX-XB$ for $B=e_{11}+e_{23}$ or $B= 2e_{11}+e_{22}+e_{33}+e_{44}+e_{23}$. Therefore, there are multiple choices for $B$. \end{ex} \section*{Acknowledgement} The authors are thankful to DST, Govt. of India for financial support and to the Indian Institute of Technology Patna for providing the research facilities.
{ "timestamp": "2018-03-22T01:10:12", "yymm": "1803", "arxiv_id": "1803.07939", "language": "en", "url": "https://arxiv.org/abs/1803.07939", "abstract": "Let C be a commutative ring with unity. In this article, we show that every Jordan derivation over an upper triangular matrix algebra T_n(C) is an inner derivation. Further, we extend the result for Jordan derivation on full matrix algebra M_n(C).", "subjects": "Rings and Algebras (math.RA)", "title": "Remarks on Jordan derivations over matrix algebras", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9811668745151689, "lm_q2_score": 0.7217431943271999, "lm_q1q2_score": 0.7081505141806129 }
https://arxiv.org/abs/1712.04068
On radial Schroedinger operators with a Coulomb potential
This paper presents a thorough analysis of 1-dimensional Schroedinger operators whose potential is a linear combination of the Coulomb term 1/r and the centrifugal term 1/r^2. We allow both coupling constants to be complex. Using natural boundary conditions at 0, a two parameter holomorphic family of closed operators is introduced. We call them the Whittaker operators, since in the mathematical literature their eigenvalue equation is called the Whittaker equation. Spectral and scattering theory for Whittaker operators is studied. Whittaker operators appear in quantum mechanics as the radial part of the Schroedinger operator with a Coulomb potential.
\section{Introduction} \setcounter{equation}{0} Consider the differential expression \begin{equation}\label{whita} L_{\beta,\alpha}:=-\partial_z^2+\Big(\alpha-\frac14\Big)\frac{1}{z^2}- \frac{\beta}{z}, \end{equation} where the parameters $\beta,\alpha$ are arbitrary complex numbers. This expression can be understood as an operator acting on functions holomorphic outside of $0$, or acting on compactly supported smooth functions on $]0,\infty[$, or acting on distributions on $]0,\infty[$. We call \eqref{whita} the \emph{formal Whittaker operator}. In this paper we are interested not so much in the formal operator $L_{\beta,\alpha}$ as in some of its realizations as closed operators on $L^2(\mathbb R_+)$, with $\mathbb R_+:=]0,\infty[$. To describe these closed operators it is natural to write $\alpha=m^2$. Then, for any $m\in \mathbb C$ with $\Re(m)>-1$ we introduce an operator $H_{\beta,m}$ which is defined as the closed operator that equals $L_{\beta,m^2}$ on the domain of functions that behave as $x^{\frac12+m}\big(1-\frac{\beta}{1+2m} x \big)$ near zero; see \eqref{eq_def_H} for a precise definition. With this definition we obtain a two-parameter family of closed operators in $L^2(\mathbb R_+)$ \begin{equation*} \mathbb C\times\{m\in \mathbb C\ |\ \Re(m)>-1\}\ni (\beta,m)\mapsto H_{\beta,m}\;\!, \end{equation*} which is holomorphic except for a singularity at $(\beta,m)=(0,-\frac12)$. For $\Re(m) \geqslant 1$ the operator $H_{\beta,m}$ is simply the closure of $L_{\beta,m^2}$ restricted to $C_\mathrm{c}^\infty(\mathbb R_+)$. In fact, for $\Re(m) \geqslant 1$ it is the unique closed realization of $L_{\beta,m^2}$ on $L^2(\mathbb R_+)$. This is not the case when $-1<\Re(m)<1$. Among various closed realizations of $L_{\beta,m^2}$ one can distinguish the minimal one $L_{\beta,m^2}^{\min}$ and the maximal one $L_{\beta,m^2}^{\max}$. The operators $H_{\beta,m}$ lie between $L_{\beta,m^2}^{\min}$ and $L_{\beta,m^2}^{\max}$.
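The role of the boundary behavior $x^{\frac12+m}\big(1-\frac{\beta}{1+2m}x\big)$ can be checked symbolically (a purely illustrative computation): applying the formal operator \eqref{whita} to it cancels the two most singular orders $x^{-\frac32+m}$ and $x^{-\frac12+m}$, leaving only a term of order $x^{\frac12+m}$; this is the behavior built into the domains of the operators $H_{\beta,m}$.

```python
import sympy as sp

x, beta, m = sp.symbols('x beta m')

s = sp.Rational(1, 2) + m
u = x**s * (1 - beta*x/(1 + 2*m))  # the distinguished behavior near 0

# The formal Whittaker operator L_{beta, m^2} applied to u:
Lu = -sp.diff(u, x, 2) + (m**2 - sp.Rational(1, 4))/x**2 * u - beta/x * u

# The orders x^(m - 3/2) and x^(m - 1/2) cancel; only x^(m + 1/2) survives:
assert sp.simplify(Lu - beta**2/(1 + 2*m) * x**s) == 0
```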
They are distinguished by the fact that they are obtained by analytic continuation from the region $\Re(m) \geqslant 1$ where the uniqueness holds. This continuation stops at the vertical line $\Re(m)=-1$, which cannot be passed because on the left of this line the singularity $x^{\frac12+m}$ is not square integrable near $0$. The operators $H_{\beta, m}$ are not the only closed realizations of $L_{\beta,m^2}$ inside the strip $-1<\Re(m)<1$, but they are the {\em distinguished} ones. In fact, for generic $(m^2,\beta)$ in this strip there are two distinguished boundary conditions with the behavior \begin{equation}\label{kappa1} x^{\frac12+m}\big(1-\frac{\beta}{1+ 2m} x \big),\quad x^{\frac12-m}\big(1-\frac{\beta}{1- 2m} x \big) \end{equation} near zero, and they correspond to the operators $H_{\beta,m}$ and $H_{\beta,-m}$. In our paper we consider only the distinguished boundary conditions. We do not discuss other boundary conditions, except for the short remark below. In the generic case, with $-1<\Re(m)<1$, there exist {\em mixed boundary conditions} corresponding to the behavior \begin{equation}\label{kappa} x^{\frac12+ m}\big(1-\frac{\beta}{1+ 2m} x \big)+\kappa x^{\frac12-m}\big(1-\frac{\beta}{1-2m} x \big) \end{equation} near zero, where $\kappa$ is a complex parameter or $\kappa=\infty$ (with an appropriate interpretation of \eqref{kappa}). There are also two {\em degenerate cases}, for which the boundary conditions \eqref{kappa} do not work: If $m^2=0$, then both behaviors in \eqref{kappa1} coincide. If $m^2=\frac14$, $\beta\neq 0$, then only $m=\frac12$ makes sense in \eqref{kappa1}. In the degenerate cases one needs to modify \eqref{kappa} by including appropriate logarithmic terms. The goal of our paper is to study the properties of the family of operators $H_{\beta,m}$. We do not restrict ourselves to real parameters, when $H_{\beta,m}$ are self-adjoint, but we consider general complex parameters. 
In particular we would like to determine which properties survive in the non-self-adjoint setting and which do not. Our paper is in many ways parallel to \cite{BDG} and especially to \cite{DR}, where the special case $\beta=0$ is studied. These papers showed that the theory of Schr\"odinger operators with complex potentials can be very similar to the theory involving real potentials, where self-adjointness is available. This includes functional calculus, spectral and scattering theory. However, the present paper is not just a boring extension of \cite{BDG}---interesting new phenomena appear. First of all, the operators $H_{\beta,m}$ usually have a sequence of eigenvalues accumulating at zero, while for $\beta=0$ these eigenvalues are absent. Depending on the value of the parameters, these eigenvalues disappear into the \emph{nonphysical sheet of the complex plane} and become resonances. In the Appendix we give a few pictures of the spectrum of $H_{\beta,m}$, which illustrate the dependence of eigenvalues and resonances on the parameters. Another phenomenon, which we found quite unexpected, is the presence of a non-removable singularity of the holomorphic function $(\beta,m)\mapsto H_{\beta,m}$ at $(\beta,m)=(0,-\frac12)$. This singularity is closely related to the behavior of the potential at the origin. It is quite curious: it is invisible when we consider just the variable $m$. In fact, as proven already in \cite{BDG}, the map $m\mapsto H_m=H_{0,m}$ is holomorphic around $m=-\frac12$, and $H_{0,-\frac12}$ is the Laplacian on the half-line with the Neumann boundary condition. It is also holomorphic around $m=\frac12$, and $H_{0,\frac12}$ is the Laplacian on the half-line with the Dirichlet boundary condition. Thus one has \begin{align}\label{huli0} H_{0,-\frac12}&\neq H_{0,\frac12}. \end{align} If we introduce the Coulomb potential, then whenever $\beta \neq 0$, \begin{align}\label{huli} H_{\beta,-\frac12}&=H_{\beta,\frac12}.
\end{align} The function $(\beta,m)\mapsto H_{\beta,m}$ is holomorphic around $(0,\frac12)$, in particular, \begin{equation}\label{huli1} \lim_{\beta\to0} (\bbbone+H_{\beta,\frac12})^{-1}=(\bbbone+H_{0,\frac12})^{-1}. \end{equation} But \eqref{huli} implies that \begin{equation*} \lim_{\beta\to0} (\bbbone+H_{\beta,-\frac12})^{-1}=(\bbbone+H_{0,\frac12})^{-1}. \end{equation*} Thus $\beta\mapsto(\bbbone+H_{\beta,-\frac12})^{-1}$ is not even continuous near $\beta =0$. This singularity is closely related to a rather irregular behavior of eigenvalues of $H_{\beta,m}$, see Proposition \ref{more}. As proven in \cite{BDG,DR}, the operators $H_{0,m}$ are rather well-behaved, also in the case of complex $m$. The limiting absorption principle holds, namely the boundary values of the resolvent exist between the usual weighted spaces, and scattering theory works the usual way. In particular, the {\em M{\o}ller operators} (also called {\em wave operators}) exist. They are closely related to the {\em Hankel transformation}, which diagonalizes $H_{0,m}$, or equivalently which intertwines it with a multiplication operator. Most differences between $H_{0,m}$ and $H_{\beta,m}$ for $\beta\neq0$ are caused by the long-range character of the Coulomb potential. In this context, it becomes critical whether $\beta$ is real or not. As is well-known, for real $\beta$, we still have the limiting absorption principle with the usual weighted spaces. The usual M{\o}ller operators do not exist, but {\em modified M{\o}ller operators} do. They can be expressed in terms of an isometric operator, which we call the {\em Hankel-Whittaker transformation}. These properties mostly do not survive when $\beta$ becomes non-real. In the limiting absorption principle we need to change the usual weighted spaces, see Theorem \ref{thm_conv_resolvent}. The Hankel-Whittaker transform is no longer bounded, and to our understanding there is no sensible scattering theory.
Some remnants of scattering theory remain for complex $m$ but real non-zero $\beta$: we show that in this case the {\em intrinsic scattering operator} is well defined, bounded and invertible unless $\Re(m)=-\frac12$. It is usually stressed that constructions of long-range scattering theory are to some degree arbitrary \cite{DG0}. More precisely, one says that modified M{\o}ller operators and the scattering operator have an arbitrary momentum dependent phase factor. However, in the context of Whittaker operators there are distinguished choices for the M{\o}ller operators and for the scattering operator. These choices appear more or less naturally when one wants to write down formulas for these operators in terms of special functions. So one can argue that they were known before in the literature. However, to our knowledge this observation has not been formulated explicitly. Let us sum up the properties of operators $H_{\beta,m}$ in various parameter regions. \begin{enumerate} \item If $\beta=0$ and $-1<m<\infty$, then $H_{\beta,m}$ is self-adjoint and the usual M{\o}ller operators exist. \item If $\beta=0$ and $-1<\Re(m)<\infty$ with $\Im(m)\neq0$, then $H_{\beta,m}$ is not self-adjoint, it is however similar to self-adjoint; the usual M{\o}ller operators exist \cite{BDG,DR}. \item If $\beta\neq0$ with $\Im(\beta)=0$ and if $-1<m<\infty$, then $H_{\beta,m}$ is self-adjoint, and the modified M{\o}ller operators exist. \item If $\beta\neq0$ with $\Im(\beta)=0$ and if $-1<\Re(m)<\infty$ with $\Im(m)\neq0$, then $H_{\beta,m}$ is not self-adjoint; maybe some kind of long-range scattering theory holds; what we know for sure is the boundedness and invertibility of the intrinsic scattering operator unless $\Re(m)=-\frac{1}{2}$. \item If $\Im(\beta)\neq0$ and $-1<\Re(m)<\infty$ with $\Im(m)\neq0$, then $H_{\beta,m}$ is not self-adjoint; it seems that no reasonable scattering theory applies. 
\end{enumerate} The operator $H_{\beta,m}$ is one of the most important exactly solvable differential operators. Its eigenvalue equation for the eigenvalue (energy) $-\frac{1}{4}$ \begin{equation}\label{Whittaker-hyper.1} \Big(-\partial_z^2 +\big(m^2 - \frac{1}{4}\big)\frac{1}{z^2} - \frac{\beta}{z}+\frac{1}{4}\Big)v = 0 \end{equation} is known in the mathematical literature as the {\em Whittaker equation}. In fact, Whittaker published in 1903 a paper \cite{Whi} where he expressed solutions to \eqref{Whittaker-hyper.1} in terms of confluent functions. This is the reason why we call $H_{\beta,m}$ the {\em Whittaker operator}. The best-known application of Whittaker operators concerns the Hydrogen Hamiltonian, that is, the Schr\"odinger operator with a Coulomb potential in dimension 3. More generally, in any dimension the radial part of the Schr\"odinger operator with Coulomb potential reduces to the Whittaker operator. We sketch this reduction in Section \ref{The Coulomb problem in $d$ dimensions}. A brief introduction to the subject can also be found in many textbooks on quantum mechanics, and we refer for example to \cite[Sec.~135]{LL} or \cite{GTV} for a recent approach. The literature on the subject is vast and we list only a few classical papers relevant for our manuscript, namely \cite{Dol,Ges,Gui,Her,Hum,MOC,Mic,MZ,Muk,Sea,TB,Yaf} or more recently \cite[App.~C]{KP}. However, in all these references only real coupling constants are considered. Note that the study of all possible self-adjoint extensions of the Whittaker operator in the real case goes back to the work of Rellich \cite{R}, and was reconsidered with more generality by Bulla-Gesztesy \cite{BG}. In particular, \cite{BG} discuss both mixed boundary conditions of the form \eqref{kappa} and their logarithmic modifications needed in degenerate cases. Let us finally describe the content of this paper. Section \ref{sec_B_functions} is devoted to special functions that we need in our paper.
These functions are essentially eigenfunctions of the formal Whittaker operator \eqref{whita} corresponding to the eigenvalues $-\frac{1}{4}$, $\frac{1}{4}$ and $0$. All of them can be expressed in terms of confluent and Bessel functions. Note that we use slightly different conventions from those in most of the literature. We follow our previous publication \cite{DR}, where we advocated the use of Bessel functions for dimension $1$, denoted $\mathcal{I}_{m}$, $\mathcal{K}_{m}$, $\mathcal{J}_{m}$ and $\mathcal{H}_{m}^\pm$. Here we mimic this approach and introduce systematically the functions $\mathcal{I}_{\beta,m}$, $\mathcal{K}_{\beta,m}$, $\mathcal{J}_{\beta,m}$ and $\mathcal{H}_{\beta,m}^\pm$, which are particularly convenient in the context of the Whittaker operator. Note that $\mathcal{I}_{\beta,m}$, $\mathcal{K}_{\beta,m}$ essentially coincide with the usual Whittaker functions, and $\mathcal{J}_{\beta,m}$ and $\mathcal{H}_{\beta,m}^\pm$ are obtained by analytic continuation to imaginary arguments. In particular, we present the asymptotic behavior of these functions near $0$ and near infinity for any parameters $\beta$ and $m$ in $\mathbb C$. Note that the theory of special functions related to the Whittaker equation is beautiful, rich and useful. We try to present it in a concise and systematic way, which some readers should appreciate. However, the readers who are more interested in operator-theoretic aspects of our paper can skip most of the material of Section \ref{sec_B_functions} and go straight to the next section which constitutes the core of our paper. In Section \ref{sec_Whi_op} we define the closed operators $H_{\beta,m}$ for any $m,\beta\in \mathbb C$ with $\Re(m)>-1$, and investigate their properties. A discussion about the complex eigenvalues of these operators is provided, as well as a description of a limiting absorption principle on suitable spaces. At this point, the distinction between $\Im(\beta)=0$ and $\Im(\beta)\neq 0$ will appear. 
In the final part of the paper, we introduce Hankel-Whittaker transformations which diagonalize our operators, and provide some information about the scattering theory. Some open questions are formulated in the last subsection. \subsection{The Coulomb problem in $d$ dimensions}\label{The Coulomb problem in $d$ dimensions} Let us briefly describe the manifestation of the Whittaker operator in quantum mechanics. We consider the space $L^2(\mathbb R^d)$ and the Schr\"odinger operator with the Coulomb potential in dimension $d$\;\!: \begin{equation}\label{whit1} -\Delta - \frac{\beta}{r}, \end{equation} where $r$ denotes the radial coordinate. In spherical coordinates the expression \eqref{whit1} reads \begin{equation}\label{whit2} -\partial_r^2 - \frac{d-1}{r}\partial_r -\frac{1}{r^2}\Delta_{\mathbb{S}^{d-1}}-\frac{\beta}{r}, \end{equation} where $\Delta_{\mathbb{S}^{d-1}}$ is the Laplace--Beltrami operator on the sphere $\mathbb{S}^{d-1}$. Eigenvectors of $-\Delta_{\mathbb{S}^{d-1}}$ are the spherical harmonics and the corresponding eigenvalues are $\ell(\ell+d-2)$, with $\ell=0,1,2,\dots$ for $d\geq2$; $\ell=0,1$ for $d=1$. Thus on the spherical harmonics of order $\ell$ the expression \eqref{whit2} becomes \begin{align*} &-\partial_r^2 - \frac{d-1}{r}\partial_r +\frac{\ell(\ell+d-2)}{r^2}-\frac{\beta}{r}\\ &=-\partial_r^2 - \frac{d-1}{r}\partial_r +\frac{m^2 - \naw{\frac{d}{2}-1}^2}{r^2}-\frac{\beta}{r}, \end{align*} where $m:= \ell+\frac{d}{2}-1$. By letting $m$ take an arbitrary complex value and by considering $d=1$, we obtain the \emph{Whittaker operator} \begin{equation}\label{Whittaker-dim-one} -\partial_r^2 +\Big(m^2 - \frac{1}{4}\Big)\frac{1}{r^2} - \frac{\beta}{r}. \end{equation} For $\beta=0$ the Whittaker operator simplifies to the \emph{Bessel operator}, see for example \cite{BDG,DR}. 
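Let us note that the rewriting of the centrifugal term above is elementary: with $m=\ell+\frac{d}{2}-1$ one has
\begin{equation*}
m^2-\Big(\frac{d}{2}-1\Big)^2 =\Big(\ell+\frac{d}{2}-1\Big)^2-\Big(\frac{d}{2}-1\Big)^2 =\ell^2+\ell(d-2)=\ell(\ell+d-2).
\end{equation*}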
As for the Bessel operators, the Whittaker operators for distinct dimensions are related by a simple similarity transformation, namely \begin{equation}\label{mimi6} \begin{split} &-\partial_r^2 - \frac{d-1}{r}\partial_r +\Big(m^2 - \Big(\frac{d}{2}-1\Big)^2\Big)\frac{1}{r^2} - \frac{\beta}{r}\\ &=r^{-\frac{d}{2}+\frac{1}{2}}\Big(-\partial_r^2 +\Big(m^2 - \frac{1}{4}\Big)\frac{1}{r^2} - \frac{\beta}{r}\Big)r^{\frac{d}{2}-\frac{1}{2}}. \end{split} \end{equation} It is then a matter of taste to decide which dimension should be treated as the standard one. From the physical point of view $d=3$ is the most important; from the mathematical point of view one can hesitate between $d=2$ and $d=1$. We choose $d=1$, following the tradition going back to Whittaker \cite{Whi}, and consistently with \cite{DR}. The Coulomb problem in the physical dimension $d=3$ has considerable practical importance. Therefore, there is a lot of literature devoted to the equation \begin{equation*} \Big(\partial_r^2 -\ell(\ell+1)\frac{1}{r^2} - \frac{2\eta}{r}+1\Big)v = 0, \end{equation*} called \emph{the Coulomb wave equation}, see \cite[Chap.~14]{AS}, which is directly obtained from the physical problem. For this equation, $\ell$ is a non-negative integer and $\eta$ is a real parameter. Solutions of this equation are often denoted by $F_\ell(\eta,r)$, $G_\ell(\eta,r)$ and $H_\ell^\pm(\eta,r):=G_\ell(\eta,r)\pm \i F_\ell(\eta,r)$, and are called \emph{Coulomb wave functions}. Alternatively, the equation \begin{equation*} \Big(\partial_r^2 -\ell(\ell+1)\frac{1}{r^2} + \frac{2}{r}+\varepsilon\Big)v = 0 \end{equation*} has been considered for $\varepsilon \in \mathbb R$, and its solutions are often denoted by $f(\varepsilon, \ell;r)$, $h(\varepsilon,\ell;r)$, and also $s(\varepsilon,\ell;r)$ and $c(\varepsilon,\ell;r)$. Properties of these functions have been studied for example in \cite{Hum,Sea,TB} and compiled in \cite{NIST} (see also the more recent work \cite{Gas}).
Our aim is to consider the Whittaker operator in its mathematically most natural form, including complex values of parameters, which do not have an obvious physical meaning. This explains some differences of our set-up and conventions compared with those used in the above literature. \subsection{Notation} We shall use the notations $\mathbb R_+$ for $]0,\infty[$, $\mathbb N$ for $\{0,1,2,\dots\}$, while $\mathbb N^\times:=\{1,2,3,\dots\}$. For $\alpha\in \mathbb C$, $\bar \alpha$ means the complex conjugate. $C_{\rm c}^\infty(\mathbb R_+)$ denotes the set of smooth functions on $\mathbb R_+$ with compact support. For an operator $A$ we denote by $\mathcal{D}(A)$ its domain and by $\sigma_{\rm p}(A)$ the set of its eigenvalues (its point spectrum). We also use the notation $\sigma(A)$ for its spectrum, $\sigma_{\rm ess}(A)$ for its essential spectrum and $\sigma_\mathrm d(A)$ for its discrete spectrum. If $z$ is an isolated point of $\sigma(A)$, then $\bbbone_{\{z\}}(A)$ denotes the Riesz projection of $A$ onto $z$. Similarly, if $A$ is self-adjoint and $\Xi$ is a Borel subset of $\sigma(A)$, then $\bbbone_\Xi(A)$ denotes the spectral projection of $A$ onto $\Xi$. The following holomorphic functions are understood as their \emph{principal bran\-ches}, that is, their domain is $\mathbb C\setminus]-\infty,0]$ and on $]0,\infty[$ they coincide with their usual definitions from real analysis: $\ln(z)$, $\sqrt z$, $z^\lambda$. We set $\arg (z):=\Im \big(\ln(z)\big)$. The extensions of these functions to $]-\infty,0]$ or to $]-\infty,0[$ are from the upper half-plane. \medskip {\bf Acknowledgement.} The authors thank M.~Karczmarczyk for his contributions at an early stage of this project. They are also grateful to D.~Siemssen, who helped them to make pictures with Mathematica. 
\section{Bessel and Whittaker functions}\label{sec_B_functions} \setcounter{equation}{0} An important role in our paper is played by various kinds of {\em Whittaker functions}, closely related to {\em confluent hypergeometric functions}. We will also use several varieties of {\em Bessel functions}. In this section we fix the notation concerning these special functions and describe their basic properties. This section plays an auxiliary role in our paper, since almost all of its results can be found in the literature. The readers interested mainly in our operator-theoretic results need only skim this section briefly, and then go to the next one, which constitutes the main part of our paper. We start by recalling the definition of Bessel functions for dimension $1$, which we prefer to use instead of the usual Bessel functions. Their main properties have been discussed in \cite{DR}, therefore there is no need to repeat them here. We then introduce the Whittaker functions $\mathcal{I}_{\beta,m}$, $\mathcal{K}_{\beta,m}$, $\mathcal{J}_{\beta,m}$ and $\mathcal{H}_{\beta,m}^\pm$. These functions are solutions of the hyperbolic-type and trigonometric-type Whittaker equations, as explained below. In our notation and presentation, as much as possible, we stress the analogy between Whittaker functions and Bessel functions. The section ends with a description of zero-energy solutions of the Whittaker operator. \subsection{Hyperbolic and trigonometric Whittaker equation} A simple argument using complex scaling shows that the eigenvalue problem with non-zero energies for the Whittaker operator \eqref{Whittaker-dim-one} can be derived from the following equation, which is known in the literature as the \emph{Whittaker equation} \begin{equation}\label{Whittaker-hyper} \Big(-\partial_z^2 +\big(m^2 - \frac{1}{4}\big)\frac{1}{z^2} - \frac{\beta}{z}+\frac{1}{4}\Big)v = 0.
\end{equation} It is convenient to consider in parallel to \eqref{Whittaker-hyper} the additional equation \begin{equation}\label{Whittaker-trig} \Big(-\partial_z^2 +\big(m^2 - \frac{1}{4}\big)\frac{1}{z^2} - \frac{\beta}{z}-\frac{1}{4}\Big)v = 0. \end{equation} We call it \emph{the trigonometric-type Whittaker equation}. For consistency, the equation \eqref{Whittaker-hyper} is sometimes referred to as \emph{the hyperbolic-type Whittaker equation}. Note that one can pass from \eqref{Whittaker-hyper} to \eqref{Whittaker-trig} by replacing $z$ with $\pm\i z$ and $\beta$ with $\mp\i\beta$. \subsection{Bessel equations and functions}\label{sec_B} In the special case $\beta=0$, by rescaling the independent variable in \eqref{Whittaker-hyper} and \eqref{Whittaker-trig}, we obtain the {\em modified (or hyperbolic-type) Bessel equation for dimension $1$} \begin{equation}\label{lap7} \Big(\partial_z^2-\big(m^2-\frac14\big)\frac{1}{z^2}-1\Big)v = 0, \end{equation} and the {\em standard (or trigonometric-type) Bessel equation for dimension $1$} \begin{equation}\label{lap8} \Big(\partial_z^2-\big(m^2-\frac14\big)\frac{1}{z^2}+1\Big)v = 0. \end{equation} As explained in \eqref{mimi6} they are equivalent to the modified (or hyperbolic-type) Bessel equation \begin{equation}\label{lap5} \Big(\partial_z^2+\frac{1}{z}\partial_z-\frac{m^2}{z^2}-1\Big)v = 0, \end{equation} respectively to the standard (or trigonometric-type) Bessel equation \begin{equation}\label{lap6} \Big(\partial_z^2+\frac{1}{z}\partial_z-\frac{m^2}{z^2}+1\Big)v = 0, \end{equation} which are usually considered in the literature. 
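Explicitly, the equivalence is implemented by the substitution $v=z^{\frac12}u$: one easily checks the operator identity
\begin{equation*}
z^{-\frac12}\Big(\partial_z^2-\big(m^2-\frac14\big)\frac{1}{z^2}\mp1\Big)z^{\frac12} =\partial_z^2+\frac{1}{z}\partial_z-\frac{m^2}{z^2}\mp1,
\end{equation*}
so that $u$ solves \eqref{lap5}, resp.\ \eqref{lap6}, if and only if $z^{\frac12}u$ solves \eqref{lap7}, resp.\ \eqref{lap8}.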
The distinguished solutions of \eqref{lap5} are \begin{align*} \rm{the}\ \emph{modified Bessel function}&&I_m(z),\\ \rm{the}\ \emph{Macdonald function}&&K_m(z), \end{align*} and of \eqref{lap6} are \begin{align*}\rm{the}\ \emph{Bessel function}&&J_m(z),\\ \rm{the}\ \emph{Hankel function of the 1st kind}&&H_m^+(z)=H_m^{(1)}(z),\\ \rm{the}\ \emph{Hankel function of the 2nd kind}&&H_m^-(z)=H_m^{(2)}(z),\\ \rm{the}\ \emph{Neumann function}&&Y_m(z). \end{align*} Following \cite{DR} we prefer functions which solve the two Bessel equations for dimension~1. Namely, we shall use the following functions solving \eqref{lap7} \begin{align*} \rm{the}\ \emph{modified (or hyperbolic) Bessel function for dim.~$1$}\qquad&\mathcal I_m(z):=\sqrt{\frac{\pi z}{2}} I_m(z),\\ \rm{the}\ \emph{Macdonald function for dim.~$1$}\qquad&\mathcal K_m(z):=\sqrt{\frac{2 z}{\pi}} K_m(z). \end{align*} We will also use the following functions solving \eqref{lap8} \begin{align*} \rm{the}\ \emph{(trigonometric) Bessel function for dim.~$1$}\qquad&\mathcal J_m(z):= \sqrt{\frac{\pi z}{2}} J_m(z),\\ \rm{the}\ \emph{Hankel function of the 1st kind for dim.~$1$}\qquad&\mathcal H_m^+(z):=\sqrt{\frac{\pi z}{2}} H^+_m(z),\\ \rm{the}\ \emph{Hankel function of the 2nd kind for dim.~$1$}\qquad&\mathcal H_m^-(z):=\sqrt{\frac{\pi z}{2}} H^-_m(z),\\ \rm{the}\ \emph{Neumann function for dim.~$1$}\qquad&\mathcal Y_m(z):= \sqrt{\frac{\pi z}{2}} Y_m(z). \end{align*} We refer the reader to the Appendix of \cite{DR} for the properties of these functions. \subsection{The function $\mathcal{I}_{\beta,m}$}\label{sec_f1} The hyperbolic-type Whittaker equation \eqref{Whittaker-hyper} can be reduced to the ${}_1F_1$-equation, also known as the {\em confluent equation}: \begin{equation}\label{confluent-equation} \big(z\partial_z^2 + (c-z)\partial_z - a\big)v=0.
\end{equation} Indeed, one has \begin{equation*} -z^{\frac{1}{2}\mp m}\mathrm{e}^{\frac{z}{2}}\Big(-\partial_z^2 + \big(m^2 - \frac{1}{4}\big)\frac{1}{z^2} - \frac{\beta}{z} + \frac{1}{4}\Big)z^{\frac{1}{2}\pm m}\mathrm{e}^{-\frac{z}{2}} = z\partial_z^2 + (c-z)\partial_z - a \end{equation*} for the parameters $c = 1 \pm 2m$ and $a = \frac{1}{2} \pm m-\beta$. Here the sign $\pm$ has to be understood as two possible choices. One of the solutions of the confluent equation is Kummer's confluent hypergeometric function ${}_1F_1(a;c;\cdot)$ defined by \begin{equation}\label{kummer-function} {}_1F_1(a;c;z) = \suma{k=0}{\infty}\frac{(a)_k}{(c)_k}\frac{z^k}{k!}, \end{equation} where $(a)_k:=a(a+1)\cdots(a+k-1)$ is the usual Pochhammer symbol. It is the only solution of \eqref{confluent-equation} behaving as $1$ in the vicinity of $z=0$. It is often convenient to use the closely-related function ${}_1\mathbf{F}_1(a;c;\cdot)$ defined by \begin{equation}\label{whi} {}_1\mathbf{F}_1(a;c;z) = \suma{k=0}{\infty}\frac{(a)_k}{\Gamma(c+k)}\frac{z^k}{k!} = \frac{{}_1F_1(a;c;z)}{\Gamma(c)}. \end{equation} We prefer the normalization \eqref{whi}, and in the sequel the following function $\mathcal{I}_{\beta,m}$ will be treated as one of the standard solutions of the hyperbolic-type Whittaker equation \eqref{Whittaker-hyper}: \begin{align}\label{eq_serie_I} \nonumber \mathcal{I}_{\beta,m}(z) & := z^{\frac{1}{2}+m}\mathrm{e}^{\mp \frac{z}{2}} {}_1\mathbf{F}_1\Big(\frac{1}{2}+m\mp\beta;\,1+2m;\,\pm z\Big) \\ &= z^{\frac{1}{2}+m}\mathrm{e}^{\mp\frac{z}{2}}\suma{k=0}{\infty}\frac{\naw{\frac{1}{2}+m\mp\beta }_k}{\Gamma(1+2m+k)}\;\!\frac{(\pm z)^k}{k!}. \end{align} Note that the sign independence comes from Kummer's first identity \begin{equation*} {}_1F_1(a;\,c;\,z) = \mathrm{e}^z {}_1F_1(c-a;\,c;\,-z). \end{equation*} In the special case $\beta=0$, the function $\mathcal{I}_{0,m}$ essentially coincides with the modified Bessel function.
More precisely, one has \begin{equation}\label{eqI_{0,m}} \mathcal{I}_{0,m}(z) = \frac{\sqrt{\pi z}}{\Gamma\big(\frac{1}{2}+m\big)}I_m\Big(\frac{z}{2}\Big) = \frac{2}{\Gamma\big(\frac{1}{2}+m\big)}\mathcal{I}_m\Big(\frac{z}{2}\Big). \end{equation} For $-\frac{1}{2}-m\pm\beta=:n\in\mathbb N$, the series in \eqref{kummer-function} terminates and we get \begin{equation*} \mathcal{I}_{\pm (\frac{1}{2}+m+n),m}(z) = \frac{n!\;\!z^{\frac{1}{2}+m}\mathrm{e}^{\mp\frac{z}{2}}}{\Gamma(1+2m+n)}L_n^{(2m)}(\pm z), \end{equation*} where \begin{equation}\label{eq_Laguerre} L_n^{(2m)}(z) = \frac{z^{-2m}\mathrm{e}^z}{n!} \frac{\mathrm{d}^n}{\mathrm{d}z^n}\big(\mathrm{e}^{-z}z^{2m+n}\big) \end{equation} are the {\em Laguerre polynomials} (or {\em generalized Laguerre polynomials}). Finally, from equation \eqref{whi} one can deduce the asymptotic behaviour around $0$\;\!: \begin{equation}\label{Ibm-around-zero} \mathcal{I}_{\beta,m}(z)=\frac{z^{\frac{1}{2}+m}}{\Gamma(1+2m)} \Big(1 -\frac{\beta}{1+2m}z+ O(z^2)\Big), \end{equation} while from the asymptotic properties of the ${}_1F_1$-function one obtains for $|\arg(z)|<\frac{\pi}{2}$ and large $|z|$ \begin{equation}\label{Ibm-around-infinity} \mathcal{I}_{\beta,m}(z) = \frac{1}{\Gamma(\frac{1}{2}+m-\beta)}z^{-\beta}\;\!\mathrm{e}^{\frac{z}{2}}\big(1+O(z^{-1})\big). \end{equation} \subsection{The function $\mathcal{K}_{\beta,m}$}\label{sec_f2} The hyperbolic-type Whittaker equation \eqref{Whittaker-hyper} also has a solution with a simple behavior at $\infty$. However, its analysis is somewhat more difficult than that of solutions with a simple behavior at $z=0$, because $z=\infty$ is an irregular singular point. The most convenient way to look for solutions with a simple behavior at $\infty$ is to reduce the Whittaker equation to the {\em ${}_2F_0$ equation} \begin{equation*} \big(w^2\partial_w^2+(-1+(1+a+b)w)\partial_w + ab\big)v=0.
\end{equation*} Indeed, by setting $w=-z^{-1}$ we obtain \begin{align*} &-z^{2-\beta} \mathrm{e}^{\frac{z}{2}}\Big(-\partial_z^2 + \big(m^2 - \frac{1}{4}\big)\frac{1}{z^2} - \frac{\beta}{z} + \frac{1}{4}\Big)z^{\beta}\mathrm{e}^{-\frac{z}{2}}\\ &= w^2\partial_w^2+(-1+(1+a+b)w)\partial_w + ab \end{align*} for the parameters $a = \frac{1}{2} +m -\beta$ and $b = \frac{1}{2} -m -\beta$. The ${}_2F_0$ equation has a distinguished solution \begin{equation*} {}_2F_0(a,b;-;z):=\lim_{c\to\infty}{}_2F_1(a,b;c;cz), \end{equation*} where we take the limit over $|\arg(c)-\pi|<\pi-\epsilon$ with $\epsilon>0$, and the above definition is valid for $z\in\mathbb{C}\setminus[0,+\infty[$. Obviously one has \begin{equation}\label{obvio} {}_2F_0(a,b;-;z)={}_2F_0(b,a;-;z). \end{equation} The function extends to an analytic function on the universal cover of $\mathbb C\setminus\{0\}$ with a branch point of infinite order at 0, and the following asymptotic expansion holds: \begin{equation*} {}_2F_0(a,b;-;z)\sim\sum_{n=0}^\infty\frac{(a)_n(b)_n}{n!}z^n,\quad |\arg(z)|<\pi-\epsilon. \end{equation*} In the literature the ${}_2F_0$ function is seldom used. Instead one uses {\em Tricomi's function} \begin{align*} U(a,c,z) & :=z^{-a}{}_2F_0(a,a-c+1;-;-z^{-1}) \\ & =\frac{\Gamma(1-c)}{\Gamma(1+a-c)}{}_1F_1(a;c;z) + \frac{\Gamma(c-1)}{\Gamma(a)}z^{1-c}{}_1F_1(1+a-c;2-c;z). \end{align*} Tricomi's function is one of the solutions of the confluent equation \eqref{confluent-equation}. We then define \begin{align*} \mathcal{K}_{\beta,m}(z) &:=z^\beta\mathrm{e}^{-\frac{z}{2}} {}_2F_0\Big(\frac12+m-\beta,\frac12-m-\beta;-;-z^{-1}\Big)\\ &= z^{\frac{1}{2}+m}\mathrm{e}^{-\frac{z}{2}} U\Big(\frac{1}{2} + m -\beta;\,1 + 2m;\,z\Big), \end{align*} which is thus a solution of the hyperbolic-type Whittaker equation \eqref{Whittaker-hyper}. The symmetry relation \eqref{obvio} implies that \begin{equation}\label{eq_sym} \mathcal{K}_{\beta,m}(z) = \K{\beta}{-m}(z).
\end{equation} The following connection formulas hold for $2m\notin\mathbb{Z}$\;\!: \begin{align}\label{connec} \mathcal{K}_{\beta,m}(z)& = -\frac{\pi}{\sin(2\pi m)}\Big(\frac{\mathcal{I}_{\beta,m}(z)}{\Gamma(\frac{1}{2}-m-\beta)} - \frac{\I{\beta}{-m}(z)}{\Gamma(\frac{1}{2}+m-\beta)}\Big),\\ \nonumber \mathcal{I}_{\beta,m}(z)& = \frac{\Gamma(\frac{1}{2}-m+\beta)}{2\pi}\Big(\mathrm{e}^{\i\pi m}\K{-\beta}{m}(\mathrm{e}^{\i\pi}z) + \mathrm{e}^{-\i\pi m}\K{-\beta}{m}(\mathrm{e}^{-\i\pi}z)\Big). \end{align} Recall that the Wronskian of two functions $f,g$ is defined as \begin{equation}\label{wron} \mathscr{W}(f,g;x):=f(x)g'(x)-f'(x)g(x). \end{equation} The Wronskian of $\mathcal{I}_{\beta,m}$ and $\mathcal{K}_{\beta,m}$ can be easily computed, and one finds \begin{equation}\label{wron1} \mathscr{W}(\mathcal{I}_{\beta,m},\mathcal{K}_{\beta,m};x) = - \frac{1}{\Gamma(\frac{1}{2} + m - \beta)}. \end{equation} In the special case $\beta=0$, the relation of the function $\K{0}{m}$ to the usual Macdonald function $K_m$, or to the Macdonald function $\mathcal{K}_m$ for dimension $1$, reads \begin{equation}\label{eqK_{0,m}} \K{0}{m}(z) = \sqrt{\frac{z}{\pi}}K_m\Big(\frac{z}{2}\Big)=\mathcal{K}_m\Big(\frac{z}{2}\Big). \end{equation} Also for $\beta - \frac{1}{2}-m=:n\in \mathbb N$ we obtain \begin{equation*} \K{\frac{1}{2} + m+n}{\pm m}(z) = (-1)^n n!\, z^{\frac{1}{2}+m}\mathrm{e}^{-\frac{z}{2}}L_n^{(2m)}(z), \end{equation*} where $L_n^{(2m)}$ are the Laguerre polynomials introduced in \eqref{eq_Laguerre}. Note that for these values of $\beta$ the functions $\mathcal{I}_{\beta,m}$ and $\mathcal{K}_{\beta,m}$ are essentially the same, except for a $z$-independent factor. However, for $\beta = -\big(\frac{1}{2}+m+n\big)$ the function $\mathcal{K}_{\beta,m}$ has a more complicated representation, see \cite{confluent}.
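As an illustration (our own numerical sanity check, not part of the mathematical development; the function names are ours), the series \eqref{eq_serie_I}, the connection formula \eqref{connec} and the Wronskian \eqref{wron1} can be cross-checked in a few lines of stdlib Python for sample real, non-degenerate parameters:

```python
from math import exp, gamma, pi, sin

def whit_I(beta, m, z, N=80):
    # Series for I_{beta,m}(z), upper sign choice in its definition:
    # z^(1/2+m) e^(-z/2) sum_k (1/2+m-beta)_k / Gamma(1+2m+k) * z^k / k!
    a = 0.5 + m - beta
    s, poch, zk, fact = 0.0, 1.0, 1.0, 1.0
    for k in range(N):
        s += poch / gamma(1 + 2 * m + k) * zk / fact
        poch *= a + k          # Pochhammer (a)_{k+1}
        zk *= z                # z^{k+1}
        fact *= k + 1          # (k+1)!
    return z ** (0.5 + m) * exp(-z / 2) * s

def whit_K(beta, m, z):
    # K_{beta,m} via the connection formula (valid for 2m not an integer).
    return -pi / sin(2 * pi * m) * (
        whit_I(beta, m, z) / gamma(0.5 - m - beta)
        - whit_I(beta, -m, z) / gamma(0.5 + m - beta)
    )

beta, m, x, h = 0.3, 0.27, 1.0, 1e-5
dI = (whit_I(beta, m, x + h) - whit_I(beta, m, x - h)) / (2 * h)
dK = (whit_K(beta, m, x + h) - whit_K(beta, m, x - h)) / (2 * h)
wronskian = whit_I(beta, m, x) * dK - dI * whit_K(beta, m, x)
# Compare with W(I, K; x) = -1/Gamma(1/2 + m - beta).
assert abs(wronskian + 1 / gamma(0.5 + m - beta)) < 1e-6
```

The tolerance accounts for the central-difference step; since the Wronskian is constant in $x$, any other evaluation point would do equally well.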
Finally, for $2m\not \in \mathbb Z$ the behaviour of $\mathcal{K}_{\beta,m}$ around zero can be derived from that of $\mathcal{I}_{\beta,m}$ together with the relation \eqref{connec}, while for $2m\in \mathbb Z$ l'H\^opital's rule has to be used, see the next subsection. For simplicity, we provide the asymptotic behaviour only for $\Re(m)\geqslant 0$, since similar results for $\Re(m)\leqslant 0$ can be obtained by taking \eqref{eq_sym} into account. Thus one has: \begin{equation}\label{Kbm-around-zero} \mathcal{K}_{\beta,m}(z) = \begin{cases} -\frac{z^{\frac{1}{2}}\ln(z)}{\Gamma(1-\beta)} + O\big(\abs{z}^{\frac{1}{2}}\big) & \mbox{ for } m=0,\\ z^\frac{1}{2}\Big(\frac{\Gamma(-2m)}{\Gamma(\frac{1}{2}-m-\beta)}z^m + \frac{\Gamma(2m)}{\Gamma(\frac{1}{2}+m-\beta)}z^{-m}\Big) + O\big(\abs{z}^\frac{3}{2}\big) & \mbox{ for } \Re(m)=0,\ m\ne 0, \\ \frac{\Gamma(2m)}{\Gamma(\frac{1}{2}+m-\beta)}z^{\frac{1}{2}-m}+ O\big(\abs{z}^{\frac{1}{2}+\Re(m)}\big) & \mbox{ for } \Re(m)\in ]0,\frac{1}{2}], \ m\neq \frac{1}{2}, \\ \frac{1}{\Gamma(1-\beta)} + O\big(z\ln(z)\big) & \mbox{ for } m=\frac{1}{2}, \\ \frac{\Gamma(2m)}{\Gamma(\frac{1}{2}+m-\beta)}z^{\frac{1}{2}-m}+ O\big(\abs{z}^{\frac{3}{2}-\Re(m)}\big) & \mbox{ for } \Re(m)>\frac{1}{2}. \end{cases} \end{equation} As for the behaviour for large $z$: if $\epsilon>0$ and $\abs{\arg(z)}<\pi-\epsilon$, then one has \begin{equation}\label{Kbm-around-infinity} \mathcal{K}_{\beta,m}(z) = z^\beta\;\!\mathrm{e}^{-\frac{z}{2}} \big(1+O(z^{-1})\big). \end{equation} \begin{remark} In the literature one can find various conventions for solutions of the Whittaker equation. In part of the literature $M_{\beta,m}:=\Gamma(1 + 2m)\;\!\mathcal{I}_{\beta,m}$ is called the Whittaker function of the first kind. The function $\mathcal{K}_{\beta,m}$ is called the Whittaker function of the second kind and denoted by $W_{\beta,m}$.
In \cite{confluent}, our functions $\mathcal{I}_{\beta,m}$ and $\mathcal{K}_{\beta,m}$ correspond to the functions ${\mathscr M}_{\varkappa,\mu/2}$ and $W_{\varkappa,\mu/2}$ with $\varkappa=\beta$ and $\mu/2=m$. With our notation we try to be parallel to the notation for the modified Bessel equation. In fact, for $\beta=0$ our functions $\mathcal{I}_{\beta,m}$ and $\mathcal{K}_{\beta,m}$ are closely related to the modified Bessel function $\mathcal{I}_m$ and the Macdonald function $\mathcal{K}_m$, and this will also hold for $\J{\beta}{m}$ and $\Hpm{\beta}{m}$ with the Bessel function $\mathcal{J}_m$ and the Hankel functions $\mathcal{H}_m^\pm$.
\end{remark}

\subsection{Degenerate case}

In this subsection we consider the hyperbolic Whittaker equation in the special case $m=\pm \frac{p}{2}$ for any $p\in\mathbb N$. It is sometimes called the degenerate case, because the two solutions $\mathcal{I}_{\beta,m}$ and $\mathcal{I}_{\beta,-m}$ in this case are proportional to one another and do not span the solution space. Therefore, we are forced to use the function $\mathcal{K}_{\beta,m}$ to obtain all solutions.

Let us fix $p\in\mathbb N$. We have the identity
\begin{equation}\label{dege}
\mathcal{I}_{\beta,-\frac{p}{2}}(z) = \Big(-\beta - \frac{p-1}{2}\Big)_{p}\, \mathcal{I}_{\beta,\frac{p}{2}}(z),
\end{equation}
or equivalently,
\begin{equation*}
\frac{\I{\beta}{-\frac{p}{2}}(z)} {\Gamma\big(\frac{1+p}{2}-\beta\big)} = \frac{\mathcal{I}_{\beta,\frac{p}{2}}(z)} {\Gamma\big(\frac{1-p}{2}-\beta\big)}.
\end{equation*}
Indeed, the confluent function $_1F_1(a;\,c;\,z)$ diverges as $c\to-p$; however, the divergence is of the same order as that of $\Gamma(c)$ for $c\to-p$. Then, by a straightforward calculation we obtain from \eqref{whi} the equality
\begin{equation*}
{}_1\mathbf{F}_1(a;\,-p+1;\,z) = (a)_{p}\;\!z^{p}\;\!{}_1\mathbf{F}_1(a+p;\,1+p;\,z),
\end{equation*}
which implies \eqref{dege}.
Note that \eqref{dege} also implies that
\begin{equation}\label{dege1}
\mathcal{I}_{\beta,-\frac{p}{2}}(z) = 0,\ \ \ \hbox{ for } \beta\in\Big\{\frac{1-p}{2},\frac{3-p}{2},\dots,\frac{p-1}{2}\Big\}.
\end{equation}
Let us now compare the symmetry \eqref{dege} with similar properties of the modified Bessel functions for dimension $1$. For such functions we have $\mathcal{I}_{-m}(z)=\mathcal{I}_m(z)$ for any $m\in\mathbb{Z}$, which is consistent with \eqref{dege}. But for $m\in\mathbb{Z}+\frac12$, $\mathcal{I}_{-m}$ is not proportional to $\mathcal{I}_m$, which at first sight contradicts \eqref{dege}. However, $\mathcal{I}_{0,m}(z) = \frac{2}{\Gamma\naw{\frac{1}{2}+m}}\mathcal{I}_m\naw{\frac{z}{2}}$ vanishes for $m\in\{\dots,-\frac32,-\frac12\}$, and this makes it consistent with \eqref{dege1}.

The function $\mathcal{K}_{\beta,m}$ is quite complicated in the degenerate case. In order to describe it, let us introduce the \emph{digamma function}
\begin{equation*}
\psi(z) = \partial_z\ln\big(\Gamma(z)\big)=\frac{\Gamma'(z)}{\Gamma(z)}.
\end{equation*}
Let us also set for $k\in\mathbb N$
\begin{align*}
H_k(z)&:= \frac1z+\frac1{z+1}+\dots+\frac{1}{z+k-1},\\
H_k&:= H_k(1)=1+\frac12+\cdots+\frac1k.
\end{align*}
In particular, $H_0(z)=H_0=0$. We have $\psi(1)=-\gamma$, $\psi(\frac12)=-\gamma-2\ln(2)$. Besides, one has $\psi(z+k)= \psi(z)+H_k(z)$ and $\psi(1+k)= -\gamma+H_k$, and for $k\in \mathbb N$
\begin{equation*}
\partial_z\frac{1}{\Gamma(z)}\Big|_{z=-k} = -\frac{\psi(z)}{\Gamma(z)}\Big|_{z=-k} = (-1)^k k!\,.
\end{equation*}
The following statement can be proven by l'H\^opital's rule.
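Before stating it, note that the digamma and harmonic-number identities above admit a quick numerical check (again a sketch, assuming Python with mpmath):

```python
import mpmath as mp

mp.mp.dps = 25
# H_k(z) as defined above, with H_0(z) = 0
H = lambda z, k: sum(1/(z + j) for j in range(k))

z, k = mp.mpf('0.7'), 5
assert abs(mp.digamma(mp.mpf('0.5')) - (-mp.euler - 2*mp.log(2))) < mp.mpf('1e-20')
assert abs(mp.digamma(z + k) - (mp.digamma(z) + H(z, k))) < mp.mpf('1e-20')
assert abs(mp.digamma(1 + k) - (-mp.euler + H(mp.mpf(1), k))) < mp.mpf('1e-20')

# derivative of 1/Gamma at z = -k equals (-1)^k k!
d = mp.diff(lambda t: 1/mp.gamma(t), -k)
assert abs(d - (-1)**k*mp.factorial(k)) < mp.mpf('1e-10')
```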
\begin{theorem} For $p\in \mathbb N$, we have
\begin{align*}
\mathcal{K}_{\beta,\frac{p}{2}}(z) & = \frac{(-1)^{p+1}\ln(z)\mathcal{I}_{\beta,\frac{p}{2}}(z)} {\Gamma\big(\tfrac{1-p}{2}-\beta\big)}\nonumber\\
&+\frac{(-1)^{p+1}\mathrm{e}^{-\frac{z}{2}}}{\Gamma\big(\frac{1-p}{2}-\beta\big)} \sum_{k=0}^\infty \frac{\big(\frac{1+p}{2}-\beta\big)_k\;\!z^{\frac{1+p}{2}+k}}{(p+k)!\;\!k!}\\
&\quad \times \Big(\psi\big(\tfrac{1+p}{2}-\beta+k\big)-\psi(p+1+k) -\psi(1+k)\Big)\\
&+\mathrm{e}^{-\frac{z}{2}}\sum_{j=0}^{p-1} \frac{\big(\frac{1-p}{2}-\beta\big)_j (-1)^j(p-j-1)!}{j!\;\!\Gamma\big(\frac{1+p}{2}-\beta\big)} z^{\frac{1-p}{2}+j}.
\end{align*}
For $p=0$ the above formula simplifies:
\begin{align*}
\mathcal{K}_{\beta,0}(z) & = -\frac{\ln(z) \mathcal{I}_{\beta,0}(z)}{\Gamma\big(\tfrac{1}{2}-\beta\big)} \\
& \quad - \frac{\mathrm{e}^{-\frac{z}{2}}}{\Gamma\naw{\frac{1}{2}-\beta}} \sum_{k=0}^\infty \frac{(\frac12-\beta)_k\;\!z^{\frac12+k}}{(k!)^2} \big(\psi(\tfrac12-\beta+k)-2\psi(1+k)\big).
\end{align*}
\end{theorem}

\subsection{The function $\mathcal{J}_{\beta,m}$}

In this and the next subsections, we consider the trigonometric-type Whittaker equation and its solutions. The function $\J{\beta}{m}$ is defined by the formula
\begin{equation}\label{Jbm-definition}
\J{\beta}{m}(z) = \mathrm{e}^{\mp\i\frac{\pi}{2}(\frac{1}{2}+m)}\I{\mp\i\beta}{m}\big(\mathrm{e}^{\pm\i\frac{\pi}{2}}z\big).
\end{equation}
It is a solution of the trigonometric-type Whittaker equation \eqref{Whittaker-trig} which behaves as $\frac{z^{\frac12+m}}{\Gamma(1+2m)}$ for $z$ near $0$. More precisely, one infers from \eqref{Ibm-around-zero} that for $z$ near $0$
\begin{equation}\label{Jbm-around-zero}
\mathcal{J}_{\beta,m}(z)=\frac{z^{\frac{1}{2}+m}}{\Gamma(1+2m)} \Big(1 -\frac{\beta}{1+2m}z+ O(z^2)\Big).
\end{equation}
It satisfies
\begin{equation*}
\J{\zesp{\beta}}{\zesp{m}}(\zesp{z}) = \zesp{\J{\beta}{m}(z)}.
\end{equation*}
By starting again from the asymptotics of the ${}_1F_1$-function provided for example in \cite[Eq.~13.5.1]{AS}, one can also obtain the asymptotic expansion near infinity. However, note that we consider a real variable $x$ and only $x\to \infty$, since for a complex variable $z$ the asymptotic behaviour depends strongly on the argument of $z$. One thus gets for $x\in \mathbb R_+$ with $x$ large:
\begin{align}\label{Jbm-around-infinity}
\nonumber & \mathcal{J}_{\beta,m}(x) = \frac{\mathrm{e}^{\i\frac{\pi}{2}(\frac{1}{2}+m+\i\beta)}}{\Gamma(\frac{1}{2}+m-\i \beta)} \mathrm{e}^{-\i\frac{x}{2}} x^{-\i \beta} \Big(1+\i\Big(\frac{1}{2}+m+\i\beta\Big)\Big(\frac{1}{2}-m+\i \beta\Big)x^{-1}+O\big(x^{-2}\big)\Big) \\
&\qquad + \frac{\mathrm{e}^{-\i\frac{\pi}{2}(\frac{1}{2}+m-\i\beta)}}{\Gamma(\frac{1}{2}+m+\i \beta)}\mathrm{e}^{\i\frac{x}{2}}x^{\i \beta} \Big(1-\i\Big(\frac{1}{2}+m-\i\beta \Big)\Big(\frac{1}{2}-m-\i \beta\Big)x^{-1}+O\big(x^{-2}\big)\Big).
\end{align}
In the special case $\beta=0$ one has
\begin{equation}\label{eq_J0m_B}
\J{0}{m}(z) = \frac{\sqrt{\pi z}}{\Gamma\naw{\frac{1}{2}+m}}J_m\naw{\frac{z}{2}} = \frac{2}{\Gamma\naw{\frac{1}{2}+m}}\mathcal{J}_m\naw{\frac{z}{2}}.
\end{equation}

\subsection{The functions $\Hpm{\beta}{m}$}\label{sec_f4}

Let us define the functions $\Hpm{\beta}{m}$ by the formula
\begin{equation}\label{Hpm-definition}
\Hpm{\beta}{m}(z) = \mathrm{e}^{\mp \i\frac{\pi}{2}\naw{\frac{1}{2}+m}}\K{\pm\i\beta}{m}(\mathrm{e}^{\mp\i\frac{\pi}{2}}z).
\end{equation}
Note that here the sign $\pm$ means that we have two functions: one for the sign $+$ and one for the sign $-$. The functions $\Hpm{\beta}{m}$ are solutions of the trigonometric-type Whittaker equation \eqref{Whittaker-trig}. One can observe that the property $\Hpm{\beta}{-m}(z) = \mathrm{e}^{\pm\i\pi m}\Hpm{\beta}{m}(z)$ holds.
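The special case \eqref{eq_J0m_B} and the symmetry $\Hpm{\beta}{-m}(z)=\mathrm{e}^{\pm\i\pi m}\Hpm{\beta}{m}(z)$ can be confirmed numerically from the definitions \eqref{Jbm-definition} and \eqref{Hpm-definition}. The sketch below is an illustration assuming Python with mpmath; it identifies $\mathcal{I}_{\beta,m}$ with \texttt{whitm} divided by $\Gamma(1+2m)$ and $\mathcal{K}_{\beta,m}$ with \texttt{whitw}, and uses the upper signs.

```python
import mpmath as mp

mp.mp.dps = 25
beta, m, z = mp.mpf('0.4'), mp.mpf('0.3'), mp.mpf('1.4')
i = mp.mpc(0, 1)

# J_{0,m}(z) from (Jbm-definition), upper signs, with I_{0,m} = whitm/Gamma(1+2m)
J0 = mp.exp(-i*mp.pi/2*(mp.mpf('0.5') + m)) * mp.whitm(0, m, i*z) / mp.gamma(1 + 2*m)
rhs = mp.sqrt(mp.pi*z)/mp.gamma(mp.mpf('0.5') + m) * mp.besselj(m, z/2)
assert abs(J0 - rhs) < mp.mpf('1e-15')

# H^+_{beta,m}(z) from (Hpm-definition), upper signs, and the symmetry in m
Hp = lambda mu: (mp.exp(-i*mp.pi/2*(mp.mpf('0.5') + mu))
                 * mp.whitw(i*beta, mu, mp.exp(-i*mp.pi/2)*z))
assert abs(Hp(-m) - mp.exp(i*mp.pi*m)*Hp(m)) < mp.mpf('1e-15')
```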
For these functions one has the following connection formulas:
\begin{align}
\nonumber \Hpm{\beta}{m}(z)&= \frac{\pm \i\pi}{\sin(2\pi m)}\Big(\frac{\mathrm{e}^{\mp\i\pi m}\J{\beta}{m}(z)}{\Gamma\naw{\frac{1}{2}-m\mp\i\beta}} - \frac{\J{\beta}{-m}(z)}{\Gamma\naw{\frac{1}{2}+m\mp\i\beta}}\Big), \\
\label{eq_missing} \J{\beta}{m}(z)& = \mathrm{e}^{-\pi\beta}\Big(\frac{\Hp{\beta}{m}(z)}{\Gamma\naw{\frac{1}{2}+m+\i\beta}} + \frac{\Hm{\beta}{m}(z)}{\Gamma\naw{\frac{1}{2}+m-\i\beta}}\Big).
\end{align}
The behaviour of $\Hpm{\beta}{m}$ depends qualitatively on $m$, and can be deduced from the asymptotic behaviour of the function $\mathcal{K}_{\beta,m}$ provided in \eqref{Kbm-around-zero}\;\!:
\begin{equation}\label{Hpm-around-zero}
\Hpm{\beta}{m}(z) =
\begin{cases}
\pm\i\frac{z^{\frac{1}{2}}\big(\ln(\mathrm{e}^{\mp\i\frac{\pi}{2}}z)\big)}{\Gamma(1\mp\i\beta)} + O\big(\abs{z}^{\frac{1}{2}}\big) & \mbox{if } m=0,\\
\mp \i z^{\frac{1}{2}}\naw{\frac{\mathrm{e}^{\mp\i\pi m}\Gamma(-2m)z^m}{\Gamma(\frac{1}{2}- m\mp \i\beta)} + \frac{\Gamma(2m)z^{-m}}{\Gamma(\frac{1}{2}+m\mp \i\beta)}} + O\big(\abs{z}^\frac{3}{2}\big) & \mbox{if } \Re(m)=0,\ m\ne 0, \\
\mp\i\frac{\Gamma(2m)}{\Gamma(\frac{1}{2}+m\mp\i\beta)}z^{\frac{1}{2}-m}+ O\big(\abs{z}^{\frac{1}{2}+\Re(m)}\big) & \mbox{if } \Re(m)\in ]0,\frac{1}{2}], \ m\neq \frac{1}{2},\\
\mp\i\frac{1}{\Gamma(1\mp\i\beta)} + O\big(z\ln(\mathrm{e}^{\mp\i\frac{\pi}{2}}z)\big) & \mbox{if } m=\frac{1}{2},\\
\mp\i\frac{\Gamma(2m)}{\Gamma(\frac{1}{2}+m\mp\i\beta)}z^{\frac{1}{2}-m}+ O\big(\abs{z}^{\frac{3}{2}-\Re(m)}\big)& \mbox{if } \Re(m)>\frac{1}{2}.
\end{cases}
\end{equation}
For the behaviour around $\infty$, we have for $|\arg(z)\mp\frac\pi2|< \pi$
\begin{equation}\label{Hpm-around-infinity}
\Hpm{\beta}{m}(z) = \mathrm{e}^{\mp\i\frac{\pi}{2}\naw{\frac{1}{2}+m}}\mathrm{e}^{\frac{\pi\beta}{2}}z^{\pm\i\beta}\;\!\mathrm{e}^{\pm\i\frac{z}{2}}\big(1 + O(z^{-1})\big).
\end{equation}

\subsection{Zero-energy eigenfunctions of the Whittaker operator}\label{Zero-energy eigensolutions of the Whittaker operator}

Bessel functions, which we recalled in Section \ref{sec_B}, play two roles in the present paper. First, as already explained, they are solutions of \eqref{lap7} and \eqref{lap8} in the special case of the Whittaker operator corresponding to $\beta=0$. Second, after a small modification, they are annihilated by the general Whittaker operator. More precisely, for $\beta \neq 0$ let us define the following two functions on $\mathbb R_+$\;\!:
\begin{align}\label{eq_0_en}
\begin{split}
j_{\beta,m}(x)&:= x^{1/4}\mathcal{J}_{2m}\big(2\sqrt{\beta x}\big), \\
y_{\beta,m}(x)&:= x^{1/4} \mathcal{Y}_{2m}\big(2\sqrt{\beta x}\big).
\end{split}
\end{align}
Then the equation
\begin{equation*}
\Big(-\partial_x^2+\big(m^2-\frac14\big)\frac{1}{x^2} - \frac{\beta}{x}\Big)v = 0
\end{equation*}
is solved by the functions $j_{\beta,m}$ and $y_{\beta,m}$. Indeed, this is easily observed by the following direct computation:
\begin{align*}
&\Big[\Big(-\partial_x^2+\big(m^2-\frac14\big)\frac{1}{x^2} - \frac{\beta}{x}\Big)j_{\beta,m}\Big](x) \\
& =-\beta x^{-3/4}\Big(\mathcal{J}_{2m}''\big(2\sqrt{\beta x}\big) - \big(m^2-\tfrac{1}{16}\big)\frac{1}{\beta x} \mathcal{J}_{2m}\big(2\sqrt{\beta x}\big) + \mathcal{J}_{2m}\big(2\sqrt{\beta x}\big) \Big) \\
& =- \beta x^{-3/4}\Big(\mathcal{J}_{2m}'' - \big((2m)^2-\tfrac{1}{4}\big)\frac{1}{(2\sqrt{\beta x})^2} \mathcal{J}_{2m} + \mathcal{J}_{2m}\Big)\big(2\sqrt{\beta x}\big)
\end{align*}
and the expression in the large parentheses vanishes, since $\mathcal{J}_{2m}$ satisfies the Bessel equation for dimension $1$.
The same argument holds for $\mathcal{J}_{2m}$ replaced by $\mathcal{Y}_{2m}$, and therefore for $y_{\beta,m}$ instead of $j_{\beta,m}$. These two functions are linearly independent. Indeed, a short computation yields \begin{equation*} \mathscr{W}(j_{\beta,m},y_{\beta,m};x)=\sqrt{\beta}, \end{equation*} where the Wronskian has been introduced in \eqref{wron}. We will need the asymptotics of these functions near zero. Note that $y_{\beta,m}$ has the same type of asymptotics as $y_{\beta,-m}$, which follows from the relations $\mathcal Y_m(z)=\frac{1}{2\i}\big(\mathcal H_m^+(z)-\mathcal H_m^-(z)\big)$ together with $\mathcal H_{-m}^{\pm}(z)=\mathrm{e}^{\pm \i \pi m}\mathcal H_{m}^{\pm}(z)$. Therefore, in the case of $y_{\beta,m}$ we give only the asymptotics for $\Re(m)\geqslant 0$\;\!: \begin{align} j_{\beta,m}(x) \label{eq_asymp1} & =\frac{\sqrt{\pi} \beta^{\frac{1}{4}+m}}{\Gamma(1+2m)}\;\!x^{\frac{1}{2}+m} \Big(1-\frac{\beta}{1+2m} x + O\big(x^2\big) \Big),\quad \hbox{if } -2m\not\in\mathbb N^\times,\\ \nonumber j_{\beta,m}(x) & =(-1)^{2m}\frac{\sqrt{\pi} \beta^{\frac{1}{4}-m}}{\Gamma(1-2m)}\;\!x^{\frac{1}{2}-m} \Big(1-\frac{\beta}{1-2m} x + O\big(x^2\big) \Big),\quad \hbox{if }-2m\in\mathbb N^\times, \\ \label{eq_asymp2}y_{\beta,m}(x) & = \begin{cases} C_{\beta,0} \Big(x^{1/2} \ln(x) + O\big(x^{1/2}\big)\Big) & \hbox{if } m=0,\\ C_{\beta,m}\;\!\Big(x^{\frac{1}{2}-m} + O\big(x^{\frac{1}{2}+\Re(m)}\big)\Big) & \hbox{if } \Re(m)\in[0,\frac{1}{2}] \hbox{ and } 2m \neq 0,1, \\ C_{\beta, \frac{1}{2}}\Big(1 + O\big(x\ln(x)\big) \Big) & \hbox{if } m = \frac{1}{2}, \\ C_{\beta,m}\;\!\Big(x^{\frac{1}{2}-m} + O\big(x^{\frac{3}{2}-\Re(m)}\big)\Big) & \hbox{if } \Re(m)>\frac{1}{2}, \end{cases} \end{align} where $C_{\beta,m}$ are non-zero constants for $\beta \neq 0$. The above analysis does not include the case $\beta=0$, that is the equation $$ \Big(-\partial_x^2+\Big(m^2-\frac14\Big)\frac{1}{x^2} \Big)v=0. 
$$
For completeness, let us mention that a basis of solutions of this equation is given by
\begin{align}\label{eq_0_en_0_beta}
\begin{split}
x^{\frac{1}{2}+m} \quad & \hbox{ and } \quad x^{\frac{1}{2}-m}\quad \hbox{if } m\neq 0,\\
x^{\frac{1}{2}}\quad & \hbox{ and }\quad x^{\frac{1}{2}}\ln(x) \quad \hbox{if } m= 0.
\end{split}
\end{align}

\section{The Whittaker operator}\label{sec_Whi_op}
\setcounter{equation}{0}

In this section we define and study the Whittaker operators $H_{\beta,m}$, which form a holomorphic family of closed operators on the Hilbert space $L^2(\mathbb R_+)$. This section is the main part of our paper.

\subsection{Preliminaries}

Our basic Hilbert space $L^2(\mathbb R_+)$ is endowed with the scalar product
$$
(h_1|h_2)=\int_0^\infty \overline{h_1(x)}h_2(x)\;\!\mathrm d x.
$$
The bilinear form defined by
$$
\langle h_1|h_2\rangle=\int_0^\infty h_1(x)h_2(x)\;\!\mathrm d x
$$
will also be useful. For an operator $A$ we denote by $A^*$ its Hermitian conjugate. We will however often prefer to use the transpose of $A$, denoted by $A^\#$, rather than $A^*$. If $A$ is bounded, then $A^*$ and $A^\#$ are defined by the relations
\begin{align}
(h_1|Ah_2)&=(A^*h_1|h_2),\\
\langle h_1|Ah_2\rangle&=\langle A^\#h_1|h_2\rangle.
\end{align}
The definition of $A^*$ admits the well-known generalization to the unbounded case, and the definition of $A^\#$ in the unbounded case is analogous. Finally, we shall use the notation $X$ for the operator of multiplication by the variable $x$ in $L^2(\mathbb R_+)$.

\subsection{Maximal and minimal operators}

For any $\alpha, \beta\in \mathbb C$ we consider the differential expression
\begin{equation*}
L_{\beta,\alpha} :=-\partial_x^2+\Big(\alpha-\frac14\Big)\frac{1}{x^2}-\frac{\beta}{x}
\end{equation*}
acting on distributions on $\mathbb{R}_+$.
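As a numerical sanity check (an illustration assuming Python with mpmath), one can verify that the differential expression $L_{\beta,m^2}$ annihilates the zero-energy functions $j_{\beta,m}$, $y_{\beta,m}$ of \eqref{eq_0_en} and reproduce their Wronskian $\sqrt{\beta}$. We use the normalizations $\mathcal J_\nu(w)=\sqrt{\pi w/2}\,J_\nu(w)$ and $\mathcal Y_\nu(w)=\sqrt{\pi w/2}\,Y_\nu(w)$, which are consistent with \eqref{eq_J0m_B} but are an assumption of this sketch, the precise definitions being given earlier in the paper.

```python
import mpmath as mp

mp.mp.dps = 30
beta, m, x = mp.mpf('0.7'), mp.mpf('0.3'), mp.mpf('1.2')

def j(t):  # j_{beta,m}(t) with the assumed normalization of J-script
    w = 2*mp.sqrt(beta*t)
    return t**mp.mpf('0.25') * mp.sqrt(mp.pi*w/2) * mp.besselj(2*m, w)

def y(t):  # y_{beta,m}(t) with the assumed normalization of Y-script
    w = 2*mp.sqrt(beta*t)
    return t**mp.mpf('0.25') * mp.sqrt(mp.pi*w/2) * mp.bessely(2*m, w)

# L_{beta,m^2} j = 0 (residual computed by numerical differentiation)
res = -mp.diff(j, x, 2) + (m**2 - mp.mpf('0.25'))/x**2 * j(x) - beta/x * j(x)
assert abs(res) < mp.mpf('1e-12')

# Wronskian: j y' - j' y = sqrt(beta)
wron = j(x)*mp.diff(y, x) - mp.diff(j, x)*y(x)
assert abs(wron - mp.sqrt(beta)) < mp.mpf('1e-12')
```

Since the derivatives are numerical, the tolerances are kept well above the working precision.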
We denote by $L_{\beta,\alpha}^{\max}$ and $L_{\beta,\alpha}^{\min}$ the corresponding maximal and minimal operators associated with it in $L^2(\mathbb R_+)$, see \cite[Sec.~4 \& App.~A]{BDG} for the details. We also recall from this reference that the domain ${\mathcal D}(L_{\beta,\alpha}^{\max})$ is given by
\begin{equation*}
{\mathcal D}(L_{\beta,\alpha}^{\max}) = \big\{f\in L^2(\mathbb R_+) \mid L_{\beta,\alpha} f\in L^2(\mathbb R_+)\big\}
\end{equation*}
while $L_{\beta,\alpha}^{\min}$ is the closure of the restriction of $L_{\beta,\alpha}$ to $C_{\rm c}^\infty(\mathbb{R}_+)$. The operators $L_{\beta,\alpha}^{\min}$ and $L_{\beta,\alpha}^{\max}$ are closed and one observes that
\begin{equation*}
\big(L_{\beta,\alpha}^{\min}\big)^* = L_{\bar\beta,\bar \alpha}^{\max}\quad \hbox{ and } \quad \big(L_{\beta,\alpha}^{\min}\big)^\# = L_{\beta, \alpha}^{\max}.
\end{equation*}
In order to compare the domains ${\mathcal D}(L_{\beta,\alpha}^{\min})$ and ${\mathcal D}(L_{\beta,\alpha}^{\max})$ a preliminary result is necessary. We say that $f\in {\mathcal D}(L_{\beta,\alpha}^{\min})$ around $0$ (or, by an abuse of notation, $f(x)\in {\mathcal D}(L_{\beta,\alpha}^{\min})$ around $0$) if there exists $\zeta\in C_{\rm c}^\infty\big([0,\infty[\big)$ with $\zeta=1$ around $0$ such that $f\zeta\in {\mathcal D}(L_{\beta,\alpha}^{\min})$. Let us note that we will often write $\alpha=m^2$, where $m\in\mathbb C$. We also recall that the functions $j_{\beta,m}$ and $y_{\beta,m}$ have been introduced in \eqref{eq_0_en} and \eqref{eq_0_en_0_beta}.
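For instance, the two-term asymptotics \eqref{eq_asymp1} of $j_{\beta,m}$ near zero can be tested numerically; the sketch below assumes Python with mpmath and the normalization $\mathcal J_\nu(w)=\sqrt{\pi w/2}\,J_\nu(w)$ (an assumption of this illustration, consistent with \eqref{eq_J0m_B}).

```python
import mpmath as mp

mp.mp.dps = 30
beta, m = mp.mpf('0.7'), mp.mpf('0.3')

def j(t):  # j_{beta,m}(t) = t^{1/4} sqrt(pi w/2) J_{2m}(w), w = 2 sqrt(beta t)
    w = 2*mp.sqrt(beta*t)
    return t**mp.mpf('0.25') * mp.sqrt(mp.pi*w/2) * mp.besselj(2*m, w)

# compare with sqrt(pi) beta^{1/4+m}/Gamma(1+2m) x^{1/2+m} (1 - beta x/(1+2m))
x = mp.mpf('1e-8')
two_terms = (mp.sqrt(mp.pi) * beta**(mp.mpf('0.25') + m) / mp.gamma(1 + 2*m)
             * x**(mp.mpf('0.5') + m) * (1 - beta/(1 + 2*m)*x))
assert abs(j(x)/two_terms - 1) < mp.mpf('1e-12')  # remainder is O(x^2)
```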
\begin{proposition}\label{lem_properties}
\begin{enumerate}
\item[(i)] If $f\in {\mathcal D}(L_{\beta,\alpha}^{\max})$, then $f$ and $f'$ are continuous functions on $\mathbb R_+$, and converge to $0$ at infinity.
\item[(ii)] If $f\in {\mathcal D}(L_{\beta,\alpha}^{\min})$, then near $0$ one has:
\begin{enumerate}
\item If $\alpha=0$ then $f(x) = o\big(x^{\frac{3}{2}}|\ln(x)|\big)$ and $f'(x)=o\big(x^{\frac{1}{2}}|\ln(x)|\big)$,
\item If $\alpha\neq0$ then $f(x)=o\big(x^{\frac{3}{2}}\big)$ and $f'(x)=o\big(x^{\frac{1}{2}}\big)$.
\end{enumerate}
\item[(iii)] If $|\Re(m)|<1$ and $f\in {\mathcal D}(L_{\beta,m^2}^{\max})$, then there exist $a,b\in \mathbb{C}$ such that:
\begin{equation*}
f -aj_{\beta,m} - by_{\beta,m} \in{\mathcal D}(L_{\beta,m^2}^{\min})\hbox{ around }0.
\end{equation*}
\item[(iv)] If $|\Re(m)|\geqslant 1$, then ${\mathcal D}(L_{\beta,m^2 }^{\min})={\mathcal D}(L_{\beta,m^2 }^{\max})$.
\item[(v)] If $|\Re(m)| < 1$, then ${\mathcal D}(L_{\beta,m^2 }^{\min})$ is a subspace of ${\mathcal D}(L_{\beta,m^2 }^{\max})$ of codimension $2$.
\end{enumerate}
\end{proposition}

\begin{proof}
Since the above statements have already been proved for $\beta=0$ in \cite{BDG}, we consider only the case $\beta\neq 0$.

From the asymptotics at zero given in \eqref{eq_asymp1} one can observe that $j_{\beta,m}$ belongs to $L^2$ near $0$ whenever $\Re(m)>-1$. By \eqref{eq_asymp2}, the function $y_{\beta,m}$ also belongs to $L^2$ near $0$ but only for $|\Re(m)|<1$. For other values of the parameters, these functions are not $L^2$ near $0$.

The proof of (i) and (iii) now consists in a simple application of standard results on second order differential operators, as presented for example in the Appendix of \cite{BDG}. More precisely, (i) is a direct consequence of Proposition A.2 of this reference, while (iii) is an application of its Proposition A.5. Statement (v) is a direct consequence of (iii).

For the statement (ii), let us write $\alpha=m^2$.
First we consider the case $|\Re(m)|<1$. For any function $g$ which is $L^2$ near $0$, let us set $\|g\|_x:=\big(\int_0^x|g(y)|^2 \;\!\mathrm d y\big)^{1/2}$ for $x\in \mathbb R_+$ small enough. It is then proved in Proposition A.7 of \cite{BDG} that if $f\in {\mathcal D}(L_{\beta,m^2}^{\min})$ then one has \begin{align} \label{desc_f} f(x) & = o(1) \big(|j_{\beta,m}(x)|\;\!\|y_{\beta,m}\|_x + |y_{\beta,m}(x)|\;\!\|j_{\beta,m}\|_x\big), \\ \label{desc_f'} f'(x) & = o(1) \big(|{j_{\beta,m}}'(x)|\;\!\|y_{\beta,m}\|_x + |{y_{\beta,m}}'(x)|\;\!\|j_{\beta,m}\|_x\big). \end{align} By computing these expressions in each case one gets that if $m=0$, then $\|j_{\beta,m}\|_x=O(x)$ and $\|y_{\beta,m}\|_x=O\big(x|\ln(x)|\big)$, while if $\Re(m)\geqslant 0$ and $m\neq 0$ then $\|j_{\beta,m}\|_x=O\big(x^{1+\Re(m)}\big)$ and $\|y_{\beta,m}\|_x=O\big(x^{1-\Re(m)}\big)$. Based on these estimates and on the asymptotic expansions of $j_{\beta,m}$, $y_{\beta,m}$ near $0$ one directly infers from \eqref{desc_f} the estimate on $f$ for any $f\in {\mathcal D}(L_{\beta,m^2}^{\min})$. For the estimate on $f'$, it is necessary to compute ${j_{\beta,m}}' $, ${y_{\beta,m}}' $, and the only surprise comes from the special case $m=\frac{1}{2}$. By using \eqref{desc_f'} and the estimate on $\|j_{\beta,m}\|_x$, $\|y_{\beta,m}\|_x$ obtained above, one deduces the behavior of $f'$ for any $f\in {\mathcal D}(L_{\beta,m^2}^{\min})$. The case $|\Re(m)|\geqslant 1$ of the statement (ii) follows by an obvious modification of the proof of Prop. 4.11 of \cite{BDG}. The proof of statement (iv) is deferred to Subsection \ref{sec_resol}. \end{proof} \subsection{The holomorphic family of Whittaker operators} Recall from \eqref{eq_asymp1} that if $-2m\not\in\mathbb N^\times$, then $ j_{\beta,m}(x)=C_{m,\beta}x^{\frac{1}{2}+m} \big(1-\frac{\beta}{1+2m} x + O(x^2) \big)$, which belongs to $L^2$ near $0$ if $\Re(m)>-1$. 
This motivates the following definition: For $m,\beta\in \mathbb C$ with $\Re(m)>-1$, except for the case $m=-\frac{1}{2}$, $\beta\neq0$, we define the closed operator $H_{\beta,m}$ as the restriction of $L_{\beta,m^2}^{\max}$ to the domain
\begin{align}
\nonumber {\mathcal D}(H_{\beta,m}) & = \Big\{f\in {\mathcal D}(L_{\beta,m^2}^{\max})\mid \hbox{ for some } c \in \mathbb{C},\\
\label{eq_def_H} &\qquad f(x)- c x^{\frac{1}{2}+m} \Big(1-\frac{\beta}{1+2m} x \Big)\in{\mathcal D}(L_{\beta,m^2}^{\min})\hbox{ around } 0\Big\}.
\end{align}
Note that for $\beta=0$, the expression $\frac{\beta}{1+2m}$ is interpreted as $0$, also in the case $m=-\frac12$. In the exceptional case excluded above we set
\begin{equation}\label{exce}
H_{\beta,-\frac{1}{2}}:= H_{\beta,\frac{1}{2}},\quad \beta\neq0.
\end{equation}
Let us stress that \eqref{exce} does not extend to $\beta=0$. In fact, $H_{0,-\frac12}\neq H_{0,\frac12}$, as we know from \cite{BDG,DR}: $H_{0,-\frac12}$ is the Neumann Laplacian on $\mathbb R_+$, and $H_{0,\frac12}$ is the Dirichlet Laplacian on $\mathbb R_+$. More information about the singularity at $(\beta,m)=(0,-\frac12)$ will be provided in Proposition \ref{more}.

The following statements can be proved directly:
\begin{theorem}
\begin{enumerate}
\item[(i)] For any $m\in \mathbb C$ with $\Re(m)>-1$ and any $\beta \in \mathbb C$ one has
\begin{equation*}
\big(H_{\beta,m}\big)^* = H_{\bar \beta,\bar m},\quad \big(H_{\beta,m}\big)^\# = H_{ \beta, m}.
\end{equation*}
\item[(ii)] For any real $m>-1$ and any real $\beta$ the operator $H_{\beta,m}$ is self-adjoint.
\item[(iii)] For $\Re(m) \geqslant 1$,
$$
L_{\beta,m^2 }^{\min}= H_{\beta,m}= L_{\beta,m^2 }^{\max}.
$$
\item[(iv)] For $-1<\Re(m)<1$,
$$
L_{\beta,m^2 }^{\min}\subsetneq H_{\beta,m}\subsetneq L_{\beta,m^2 }^{\max},
$$
and the inclusions of the corresponding domains are of codimension $1$.
\end{enumerate}\end{theorem} \begin{proof} Recall from \cite[Prop.~A.2]{BDG} that for any $f\in {\mathcal D}(L_{\beta,m^2}^{\max})$ and $g\in {\mathcal D}(L_{\bar \beta,\bar m^2}^{\max})$, the functions $f,f',g,g'$ are continuous on $\mathbb R_+$. In addition, the Wronskian of $\bar f$ and $g$, as introduced in \eqref{wron}, possesses a limit at zero, and we have the equality \begin{equation*} (L_{\beta,m^2}^{\max}f|g) - (f|L_{\bar \beta,\bar m^2}^{\max}g) = -\mathscr{W}(\bar f,g;0). \end{equation*} In particular, if $f\in {\mathcal D}(H_{\beta,m})$ one infers that \begin{equation*} (H_{\beta,m}f|g) = (f|L_{\bar \beta, \bar m^2}^{\max}g) -\mathscr{W}(\bar f,g;0). \end{equation*} Thus, $g\in {\mathcal D}\big((H_{\beta,m})^*\big)$ if and only if $\mathscr{W}(\bar f,g;0)=0$, and then $(H_{\beta,m})^*g=L_{\bar \beta,\bar m^2}^{\max}g$. By taking into account the explicit description of ${\mathcal D}(H_{\beta,m})$, straightforward computations show that $\mathscr{W}(\bar f,g;0)=0$ if and only if $g\in {\mathcal D}(H_{\bar \beta,\bar m})$. One then deduces that $(H_{\beta,m})^*= H_{\bar \beta,\bar m}$. Note that the property for the transpose of $H_{\beta,m}$ can be proved similarly, which finishes the proof of (i). The statement (ii) is a straightforward consequence of the statement (i). The statements (iii) and (iv) are consequences of Proposition \ref{lem_properties}. \end{proof} \begin{remark} In the spirit of \cite{DR} one could consider more general boundary conditions, and thus other realizations of the Whittaker operator. However, in this paper we stick to the most natural boundary conditions introduced above. This approach corresponds to the one of the original paper \cite{BDG}, where $\beta=0$. \end{remark} \subsection{The resolvent}\label{sec_resol} From now on, we consider fixed $m,\beta\in \mathbb C$ with $\Re(m)>-1$. 
In order to study the resolvent of the operator $H_{\beta,m}$, let us introduce the set $\sigma_{\beta,m}\subset \mathbb C$ which will be related later on to the spectrum of $H_{\beta,m}$\;\!:
$$
\sigma_{\beta,m}:=\Big\{k\in \mathbb C\mid \Re(k)>0 \hbox{ and } \frac{\beta}{2k}-m-\frac{1}{2}\not \in \mathbb N\Big\}.
$$
Let us consider $k \in \sigma_{\beta,m}$. By a scaling argument together with the material of Section \ref{sec_B_functions} one easily observes that the two functions
\begin{equation}\label{eq_2_sol}
x\mapsto \K{\frac{\beta}{2k}}{m}(2kx) \qquad \hbox{and}\qquad x\mapsto \I{\frac{\beta}{2k}}{m}(2kx)
\end{equation}
are linearly independent solutions of the equation $(L_{\beta,m^2}+k^2)v=0$. From \eqref{Kbm-around-infinity} one infers that the first function is always in $L^2$ near infinity, but it belongs to $L^2$ near zero only for $|\Re(m)|<1$. On the other hand, the second function belongs to $L^2$ around $0$ for any $m$ with $\Re(m)>-1$ but it does not belong to $L^2$ near infinity. If in addition $m\neq-\frac12$, then one has, as $x\to 0$,
\begin{equation*}
\I{\beta}{m}(x)\sim\frac{x^{\frac12+m}}{\Gamma(1+2m)}\Big(1-\frac{\beta}{(1+2m)}x\Big).
\end{equation*}
Therefore, it follows that
\begin{equation}
\I{\frac{\beta}{2k}}{m}(2kx) \sim\frac{(2kx)^{\frac12+m}}{\Gamma(1+2m)}\Big(1-\frac{\beta}{(1+2m)}x\Big),
\label{whitti}
\end{equation}
which means that the function on the left-hand side of \eqref{whitti} belongs to the domain of $H_{\beta,m}$ around $0$.
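One can also confirm numerically that the functions in \eqref{eq_2_sol} are annihilated by $L_{\beta,m^2}+k^2$. The sketch below assumes Python with mpmath, identifying $\mathcal{I}_{\beta,m}$ with \texttt{whitm} divided by $\Gamma(1+2m)$ and $\mathcal{K}_{\beta,m}$ with \texttt{whitw}.

```python
import mpmath as mp

mp.mp.dps = 30
beta, m, k = mp.mpf('0.4'), mp.mpf('0.3'), mp.mpf('0.8')
x = mp.mpf('1.5')

u = lambda t: mp.whitm(beta/(2*k), m, 2*k*t) / mp.gamma(1 + 2*m)  # I_{beta/2k,m}(2kt)
v = lambda t: mp.whitw(beta/(2*k), m, 2*k*t)                      # K_{beta/2k,m}(2kt)

# residual of (L_{beta,m^2} + k^2) f, via numerical second derivative
for f in (u, v):
    res = (-mp.diff(f, x, 2) + (m**2 - mp.mpf('0.25'))/x**2 * f(x)
           - beta/x*f(x) + k**2*f(x))
    assert abs(res) < mp.mpf('1e-12')
```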
Based on these observations and on the standard theory of Green's function, we expect that the inverse of the operator $H_{\beta,m}+k^2$ for suitable $k$ is given by the operator $R_{\beta,m}(-k^2)$ whose kernel is given for $x,y\in \mathbb R_+$ by \begin{align}\label{The-resolvent} \nonumber &R_{\beta,m}(-k^2;x,y)\\ & := \tfrac{1}{2k}\Gamma\naw{\tfrac{1}{2}+m-\tfrac{\beta}{2k}} \begin{cases} \I{\frac{\beta}{2k}}{m}(2k x)\K{\frac{\beta}{2k}}{m}(2k y) & \mbox{ for }0<x<y,\\ \I{\frac{\beta}{2k}}{m}(2k y)\K{\frac{\beta}{2k}}{m}(2k x) & \mbox{ for }0<y<x. \end{cases} \end{align} We still need to check the exceptional case $m=-\frac12$. By \eqref{dege} and \eqref{eq_sym}, we have $$ \mathcal{I}_{\beta,-\frac12}(x)=-\beta \mathcal{I}_{\beta,\frac12}(x),\quad \mathcal{K}_{\beta,-\frac12}(x)=\mathcal{K}_{\beta,\frac12}(x). $$ As a consequence we infer that \begin{equation*} R_{\beta,-\frac{1}{2}}(-k^2;x,y)= R_{\beta,\frac{1}{2}}(-k^2;x,y),\quad\beta\neq0, \end{equation*} which is consistent with \eqref{exce}. \begin{remark} If $\beta=0$, by taking the relations \eqref{eqI_{0,m}} and \eqref{eqK_{0,m}} into account one infers that \begin{equation*} R_{0,m}(-k^2;x,y) = \frac{1}{k}\begin{cases} \mathcal I_m(k x)\mathcal K_m(k y) & \mbox{ for }0<x<y,\\ \mathcal I_m(k y)\mathcal K_m(k x) & \mbox{ for }0<y<x. \end{cases} \end{equation*} This expression corresponds to the starting point for the study of the resolvent in \cite{BDG}. \end{remark} The next statement provides the precise link between the resolvent of $H_{\beta,m}$ and the operator $R_{\beta,m}(-k^2)$. \begin{theorem}\label{thm_resolvent} Let $m,\beta\in \mathbb C$ with $\Re(m)>-1$ and let $k\in \sigma_{\beta,m}$. Then the operator $R_{\beta,m}(-k^2)$ defined by the kernel \eqref{The-resolvent} belongs to $\mathcal B\big(L^2(\mathbb R_+)\big)$ and equals $(H_{\beta,m}+k^2)^{-1}$. 
Moreover, the map $(\beta,m)\mapsto H_{\beta,m}$ is a holomorphic family of closed operators except for a singularity at $(\beta,m)=(0,-\frac12)$.
\end{theorem}

Let us emphasize that this statement already provides information about the spectrum $\sigma(H_{\beta,m})$ of $H_{\beta,m}$. Indeed, one infers that
\begin{align}\label{pqpq}
\nonumber \sigma(H_{\beta,m})&\subset \big\{-k^2 \mid \Re(k)\geqslant 0 \hbox{ and } k\not \in \sigma_{\beta,m}\big\}\\
&=[0,\infty[ \,\bigcup\Big\{\lambda_N \mid N\in \mathbb N,\ N+m+\frac12\neq0,\ \Re\Big(\frac{\beta}{N+m+\frac{1}{2}}\Big)>0\Big\},
\end{align}
where we have set
\begin{equation}
\lambda_N:= -\frac{\beta^2}{4(N+m+\frac{1}{2})^2}.\label{delam}
\end{equation}
Later on, we shall see that the inclusion in \eqref{pqpq} is in fact an equality.

The proof of Theorem \ref{thm_resolvent} is based on a preliminary technical lemma.

\begin{lemma}
Let $m,\beta\in \mathbb C$ with $\Re(m)>-1$ and let $k\in \sigma_{\beta,m}$. Then for any $x,y\in \mathbb R_+$ one has:
\begin{enumerate}
\item[(i)] If $\Re(m)\geqslant 0$ with $m\neq 0$ then
\begin{align}\label{eq_est1}
\nonumber & |R_{\beta,m}(-k^2;x,y)| \\
\nonumber & \leqslant C_{\frac{\beta}{2k},m}^2 \tfrac{|\Gamma(\frac{1}{2}+m-\frac{\beta}{2k})|}{2|k|} \mathrm{e}^{-|x-y|\Re(k)} \min\{1,2x|k|\}^{\frac{1}{2}} \min\{1,2y|k|\}^{\frac{1}{2}} \\
& \quad \times \begin{cases} \max\{1,2x|k|\}^{-\Re(\frac{\beta}{2k})}\max\{1,2y|k|\}^{\Re(\frac{\beta}{2k})} & \mbox{ for }0<x<y,\\
\max\{1,2y|k|\}^{-\Re(\frac{\beta}{2k})}\max\{1,2x|k|\}^{\Re(\frac{\beta}{2k})} & \mbox{ for }0<y<x.
\end{cases} \end{align} \item[(ii)] If $\Re(m)\leqslant 0$ with $m\neq 0$ then \begin{align}\label{eq_est2} \nonumber &|R_{\beta,m}(-k^2;x,y)| \\ \nonumber & \leqslant C_{\frac{\beta}{2k},m}^2 \tfrac{|\Gamma(\frac{1}{2}+m-\frac{\beta}{2k})|}{2|k|} \mathrm{e}^{-|x-y|\Re(k)} \min\{1,2x|k|\}^{\Re(m)+\frac{1}{2}} \min\{1,2y|k|\}^{\Re(m)+\frac{1}{2}} \\ & \quad \times \begin{cases} \max\{1,2x|k|\}^{-\Re(\frac{\beta}{2k})}\max\{1,2y|k|\}^{\Re(\frac{\beta}{2k})} & \mbox{ for }0<x<y,\\ \max\{1,2y|k|\}^{-\Re(\frac{\beta}{2k})}\max\{1,2x|k|\}^{\Re(\frac{\beta}{2k})} & \mbox{ for }0<y<x. \end{cases} \end{align} \item[(iii)] If $m=0$ then \begin{align}\label{eq_est3} \nonumber &|R_{\beta,0}(-k^2;x,y)| \\ \nonumber & \leqslant C_{\frac{\beta}{2k}}^2 \tfrac{|\Gamma(\frac{1}{2}-\frac{\beta}{2k})|}{2|k|} \mathrm{e}^{-|x-y|\Re(k)} \min\{1,2x|k|\}^{\frac{1}{2}} \min\{1,2y|k|\}^{\frac{1}{2}} \\ \nonumber & \quad \times \big(1+\big|\ln(\min\{1,2x|k|\})\big|\big) \big(1+\big|\ln(\min\{1,2y|k|\})\big|\big) \\ & \quad \times \begin{cases} \max\{1,2x|k|\}^{-\Re(\frac{\beta}{2k})}\max\{1,2y|k|\}^{\Re(\frac{\beta}{2k})} & \mbox{ for }0<x<y,\\ \max\{1,2y|k|\}^{-\Re(\frac{\beta}{2k})}\max\{1,2x|k|\}^{\Re(\frac{\beta}{2k})} & \mbox{ for }0<y<x. \end{cases} \end{align} \end{enumerate} The constants $C_{\frac{\beta}{2k},m}$ and $C_{\frac{\beta}{2k}}$ are independent of $x$ and $y$. \end{lemma} \begin{proof} Observe first that for $\epsilon>0$ and $|\arg(z)|<\pi-\epsilon$ one deduces from \eqref{Kbm-around-zero} and \eqref{Kbm-around-infinity} that \begin{equation*} |\K{\frac{\beta}{2k}}{m}(z)| \leqslant C_{\frac{\beta}{2k},m}\mathrm{e}^{-\Re(z)/2}\min\{1,|z|\}^{-|\Re(m)|+\frac{1}{2}} \max\{1,|z|\}^{\Re(\frac{\beta}{2k})} \end{equation*} for $m\neq 0$, while for $m=0$ \begin{equation*} |\K{\frac{\beta}{2k}}{0}(z)| \leqslant C_{\frac{\beta}{2k}} \mathrm{e}^{-\Re(z)/2} \min\{1,|z|\}^{\frac{1}{2}} \big(1+\big|\ln(\min\{1,|z|\})\big|\big) \max\{1,|z|\}^{\Re(\frac{\beta}{2k})}. 
\end{equation*} Similarly, from \eqref{Ibm-around-zero} and \eqref{Ibm-around-infinity} one infers that \begin{align*} |\I{\frac{\beta}{2k}}{m}(z)| & \leqslant C_{\frac{\beta}{2k},m}\mathrm{e}^{\Re(z)/2}\min\{1,|z|\}^{\Re(m)+\frac{1}{2}}\max\{1,|z|\}^{-\Re(\frac{\beta}{2k})} \quad \hbox{for } m\neq 0, \\ |\I{\frac{\beta}{2k}}{0}(z)| & \leqslant C_{\frac{\beta}{2k}} \mathrm{e}^{\Re(z)/2} \min\{1,|z|\}^{\frac{1}{2}} \max\{1,|z|\}^{-\Re(\frac{\beta}{2k})} . \end{align*} As a consequence of these estimates, if $m\neq 0$ one infers that for $0<x<y$ \begin{align*} & |R_{\beta,m}(-k^2;x,y)|\\ & \leqslant C_{\frac{\beta}{2k},m}^2 \frac{\big|\Gamma(\frac{1}{2}+m-\frac{\beta}{2k})\big|}{2|k|} \mathrm{e}^{(x-y)\Re(k)} \min\{1,2x|k|\}^{\Re(m)+\frac{1}{2}} \\ & \quad \times \min\{1,2y|k|\}^{-|\Re(m)|+\frac{1}{2}} \max\{1,2x|k|\}^{-\Re(\frac{\beta}{2k})}\max\{1,2y|k|\}^{\Re(\frac{\beta}{2k})} \end{align*} while for $0<y<x$ one has \begin{align*} & |R_{\beta,m}(-k^2;x,y)|\\ & \leqslant C_{\frac{\beta}{2k},m}^2 \frac{\big|\Gamma(\frac{1}{2}+m-\frac{\beta}{2k})\big|}{2|k|} \mathrm{e}^{(y-x)\Re(k)} \min\{1,2y|k|\}^{\Re(m)+\frac{1}{2}} \\ & \quad \times \min\{1,2x|k|\}^{-|\Re(m)|+\frac{1}{2}} \max\{1,2y|k|\}^{-\Re(\frac{\beta}{2k})}\max\{1,2x|k|\}^{\Re(\frac{\beta}{2k})}. \end{align*} Then, if $\Re(m)\geqslant 0$ one observes that $\big|\frac{kx}{ky}\big|<1$ in the first case, and $\big|\frac{ky}{kx}\big|<1$ in the second case. This directly leads to the first part of the statement. Similarly, for $\Re(m)\leqslant 0$ one has $-|\Re(m)|=\Re(m)$, from which one infers the second part of the statement. The special case $m=0$ is straightforward. \end{proof} \begin{proof}[Proof of Theorem \ref{thm_resolvent}] Observe first that for $k\in \sigma_{\beta,m}$ the Gamma factor in \eqref{The-resolvent} is harmless. 
Thus, in order to show that the kernel \eqref{The-resolvent}, with the Gamma factor removed, defines a bounded operator for any $k\in \mathbb C$ with $\Re(k)>0$, it is sufficient to consider separately the two regions $$ \Omega:=\Big\{(x,y)\in \mathbb R_+ \times \mathbb R_+ \mid x\geqslant (2|k|)^{-1},y\geqslant (2|k|)^{-1}\Big\} $$ and $\mathbb R_+ \times \mathbb R_+\setminus \Omega$. In the latter region, thanks to the previous lemma it is easily seen that the kernel $R_{\beta,m}(-k^2;\cdot,\cdot)$ belongs to $L^2$, and thus defines a Hilbert--Schmidt operator. For the kernel on $\Omega$ one can employ Schur's test and observe that $R_{\beta,m}(-k^2;\cdot,\cdot)$ belongs to $L^\infty\big([(2|k|)^{-1},\infty[;L^1([(2|k|)^{-1},\infty[)\big)$ with the two variables taken in either order. If $\Re\big(\frac{\beta}{2k}\big)\leqslant 0$, then this computation is easy and reduces to the one already performed in the proof of \cite[Lem.~4.4]{BDG}. We shall therefore consider only the case $\Re\big(\frac{\beta}{2k}\big)>0$, and check that \begin{equation*} \sup_{y\geqslant (2|k|)^{-1}}\int_{(2|k|)^{-1}}^\infty \big|R_{\beta,m}(-k^2;x,y)\big|\;\!\mathrm d x <\infty, \end{equation*} the other condition being obtained similarly. For fixed $y\in \big[(2|k|)^{-1},\infty\big[$ we divide the above integral into three parts, namely $x\in \big[(2|k|)^{-1},\frac{(2|k|)^{-1}+y}{2}\big[$, $x\in \big[\frac{(2|k|)^{-1}+y}{2},y\big[$ and $x\in[y,\infty[$.
For the first part it is enough to observe that \begin{align}\label{est1} \nonumber & \int_{(2|k|)^{-1}}^{\frac{(2|k|)^{-1}+y}{2}}\mathrm{e}^{-(y-x)\Re(k)} \big(\tfrac{y}{x}\big)^{\Re(\frac{\beta}{2k})}\;\!\mathrm d x \\ \nonumber & \leqslant \mathrm{e}^{-y\Re(k)} (2|k|y)^{\Re(\frac{\beta}{2k})}\int_{(2|k|)^{-1}}^{\frac{(2|k|)^{-1}+y}{2}} \mathrm{e}^{x\Re(k)}\;\!\mathrm d x \\ & = \tfrac{1}{\Re(k)}\mathrm{e}^{-y\Re(k)} (2|k|y)^{\Re(\frac{\beta}{2k})} \big(\mathrm{e}^{(\frac{y+(2|k|)^{-1}}{2})\Re(k)}-\mathrm{e}^{(2|k|)^{-1}\Re(k)}\big). \end{align} For the second part one observes that \begin{align}\label{est2} \nonumber & \int_\frac{(2|k|)^{-1}+y}{2}^{y}\mathrm{e}^{-(y-x)\Re(k)} \big(\tfrac{y}{x}\big)^{\Re(\frac{\beta}{2k})}\;\!\mathrm d x \\ \nonumber & \leqslant \mathrm{e}^{-y\Re(k)} 2^{\Re(\frac{\beta}{2k})} \int_\frac{(2|k|)^{-1}+y}{2}^{y} \mathrm{e}^{x\Re(k)}\;\!\mathrm d x \\ & = \tfrac{2^{\Re(\frac{\beta}{2k})}}{\Re(k)}\mathrm{e}^{-y\Re(k)} \big(\mathrm{e}^{y\Re(k)}-\mathrm{e}^{(\frac{y+(2|k|)^{-1}}{2})\Re(k)}\big). \end{align} For the third part one has (with $y\geqslant (2|k|)^{-1}$) \begin{align}\label{est3} \nonumber &\int_y^\infty \mathrm{e}^{-(x-y)\Re(k)} \big(\tfrac{x}{y}\big)^{\Re(\frac{\beta}{2k})}\;\!\mathrm d x \\ \nonumber & = \int_0^\infty \mathrm{e}^{-z\Re(k)}\big(1+\tfrac{z}{y}\big)^{\Re(\frac{\beta}{2k})}\;\!\mathrm d z \\ & \leqslant \int_0^\infty \mathrm{e}^{-z\Re(k)}\big(1+2|k|z\big)^{\Re(\frac{\beta}{2k})}\;\!\mathrm d z . \end{align} Finally, it only remains to observe that the three expressions \eqref{est1}, \eqref{est2} and \eqref{est3} are bounded for $y\geqslant (2|k|)^{-1}$. As a consequence, one deduces that the kernel restricted to $\Omega$ defines a bounded operator in $L^2(\mathbb R_+)$, and by summing up the information one deduces the boundedness of $R_{\beta,m}(-k^2)$. 
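For completeness, here is a brief sketch of this last observation, with $p:=\Re\big(\frac{\beta}{2k}\big)>0$. The expression \eqref{est2} is bounded by $\frac{2^{p}}{\Re(k)}$, while \eqref{est3} is a convergent integral which does not depend on $y$. For \eqref{est1}, by dropping the subtracted exponential and using $\Re(k)\leqslant |k|$ one obtains the bound
\begin{equation*}
\tfrac{1}{\Re(k)}\;\!(2|k|y)^{p}\;\!\mathrm{e}^{\frac{\Re(k)}{4|k|}}\;\!\mathrm{e}^{-\frac{y}{2}\Re(k)}
\leqslant \tfrac{\mathrm{e}^{1/4}}{\Re(k)}\;\!(2|k|y)^{p}\;\!\mathrm{e}^{-\frac{y}{2}\Re(k)},
\end{equation*}
and the map $y\mapsto y^{p}\mathrm{e}^{-\frac{y}{2}\Re(k)}$ is bounded on $\mathbb R_+$ for any fixed $k$ with $\Re(k)>0$.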
The equality of $R_{\beta,m}(-k^2)$ with $(H_{\beta,m}+k^2)^{-1}$ and the mentioned holomorphic property follow from standard arguments; see for example Appendix A and Proposition 2.3 in \cite{BDG}. \end{proof} \begin{proof}[Proof of Proposition \ref{lem_properties} (iv)] We want to show that $\Re(m)\geqslant 1$ implies $L_{\beta,m^2}^{\min} = L_{\beta,m^2}^{\max}$. With the information provided in the previous statements, this can be done by copying {\it mutatis mutandis} the proof of \cite[Prop.~4.10]{BDG}. \end{proof} \subsection{Point spectrum and eigenprojections} In this subsection we provide more information on the point spectrum of $H_{\beta,m}$ and exhibit an expression for the projection on the corresponding eigenfunctions. \begin{theorem}\label{thm_spectrum} Let $m, \beta\in \mathbb C$ with $\Re(m)>-1$. Then we have \begin{align}\notag \sigma_\mathrm{p}(H_{\beta,m})&= \sigma_\mathrm{d}(H_{\beta,m})\\ &=\Big\{\lambda_N \mid N\in \mathbb N, N+m+\frac12\neq0,\ \Re\Big(\frac{\beta}{N+m+\frac{1}{2}}\Big)>0\Big\},\label{mio2}\end{align} where the $\lambda_N$ were defined in \eqref{delam}. All eigenvalues are of multiplicity $1$. The kernel of the Riesz projection $P_N$ corresponding to $\lambda_N$ is given for $x,y\in \mathbb R_+$ by \begin{align}\label{Eigenprojection-eq} \nonumber P_N(x,y) &= \frac{N!}{\Gamma(1+2m+N)}\Big( \frac{\beta}{N+m+\frac{1}{2}}\Big)^{1+2m}\exp\Big(-\frac{\beta}{2(N+m+\frac{1}{2})}(x+y)\Big)\\ &\quad \times(xy)^{\frac{1}{2}+m} L^{(2m)}_N\Big( \frac{\beta}{N+m+\frac{1}{2}}\;\!x\Big)L^{(2m)}_N\Big( \frac{\beta}{N+m+\frac{1}{2}}\;\!y\Big), \end{align} where $L_N^{(2m)}$ is the Laguerre polynomial introduced in \eqref{eq_Laguerre}. \end{theorem} For the following proof let us observe that we can consider $\beta\neq 0$, since the second condition in \eqref{mio2} is never satisfied for $\beta=0$.
In addition, the case $\beta=0$ has already been considered in \cite{BDG}, where it was shown that the operator $H_{0,m}$ has no point spectrum. \begin{proof} Observe first that for any $N\in\mathbb N$ the three conditions $\Re(k)>0$, $\frac{\beta}{2k}-m-\frac{1}{2}= N$ and $\lambda_N=-k^2$ are equivalent to the condition in \eqref{mio2}. Now, this situation takes place exactly when the two solutions of the equation $L_{\beta,m^2}f=-k^2f$ provided in \eqref{eq_2_sol} are not linearly independent, see also \eqref{wron1}. This means that, modulo a multiplicative constant, for $k=\frac{\beta}{2(N+m+\frac{1}{2})}$ the map $x\mapsto \I{N+m+\frac{1}{2}}{m}\big(\frac{\beta}{N+m+\frac{1}{2}}x\big)$ and the map $x\mapsto \K{N+m+\frac{1}{2}}{m}\big(\frac{\beta}{N+m+\frac{1}{2}}x\big)$ are equal. From the discussion following \eqref{eq_2_sol}, one infers that these functions belong to $L^2(\mathbb R_+)$ for any $\Re(m)>-1$. It remains to show that these functions belong to ${\mathcal D}(H_{\beta,m})$. For that purpose let us consider one of them and use \eqref{Ibm-around-zero} to get \begin{align*} &\I{N+m+\frac{1}{2}}{m}\Big(\frac{\beta}{N+m+\frac{1}{2}}x\Big) \\ &= \frac{\big(\frac{\beta}{N+m+\frac{1}{2}}x\big)^{\frac{1}{2}+m}}{\Gamma(1+2m)} \Big(1 -\frac{N+m+\frac{1}{2}}{1+2m}\frac{\beta}{N+m+\frac{1}{2}}x+ O\big(x^2\big)\Big)\\ &= \frac{\big(\frac{\beta}{N+m+\frac{1}{2}}\big)^{\frac{1}{2}+m}}{\Gamma(1+2m)} x^{\frac{1}{2}+m}\Big(1-\frac{\beta}{1+2m}x+O\big(x^2\big)\Big). \end{align*} By comparing this expression with the description of ${\mathcal D}(H_{\beta,m})$ one directly deduces the first statement of the theorem. Let $\gamma$ be a contour encircling an eigenvalue $\lambda_N$ in the complex plane, with no other eigenvalue inside $\gamma$ and with no intersection with $[0,\infty[$. The Riesz projection corresponding to this eigenvalue is then given by \begin{equation*} P_N = -\frac{1}{2\pi\i}\int_{\gamma}R_{\beta,m}(z)\;\!\mathrm d z.
\end{equation*} By setting $z=-k^2$ we get \begin{equation} P_N = -\frac{1}{2\pi\i}\int_{\gamma}R_{\beta,m}(-k^2)\;\!\mathrm d (-k^2) = \frac{1}{2\pi\i}\int_{\gamma^*}2kR_{\beta,m}(-k^2)\;\!\mathrm d k \end{equation} for some appropriate curve $\gamma^*$. Now, by looking at the expression for the resolvent provided in \eqref{The-resolvent}, one observes that only the first factor is singular for $\frac{1}{2}+m-\frac{\beta}{2k}=-N$, and more precisely one gets for the residue of this term $\mathrm{Res}(\Gamma,-N)=\frac{(-1)^N}{N!}$. By substituting $k = \frac{\beta}{2(N+m+\frac{1}{2})}$ in the expression for the resolvent one thus gets \begin{equation*} P_N(x,y) = \frac{(-1)^N}{N!} \begin{cases} \mathcal{I}_{N+m+\frac{1}{2},m}\big(\frac{\beta}{N+m+\frac{1}{2}} x\big)\mathcal{K}_{N+m+\frac{1}{2},m}\big(\frac{\beta}{N+m+\frac{1}{2}} y\big) \\ \qquad\qquad\qquad\qquad\qquad\qquad\mbox{ for }0<x<y,\\ \mathcal{I}_{N+m+\frac{1}{2},m}\big(\frac{\beta}{N+m+\frac{1}{2}} y\big)\mathcal{K}_{N+m+\frac{1}{2},m}\big(\frac{\beta}{N+m+\frac{1}{2}} x\big) \\ \qquad\qquad\qquad\qquad\qquad\qquad \mbox{ for }0<y<x. \end{cases} \end{equation*} Finally, by recalling that for an index equal to $N+m+\frac{1}{2}$ the functions $\mathcal{I}_{N+m+\frac{1}{2},m}$ and $\mathcal{K}_{N+m+\frac{1}{2},m}$ have a simple behaviour and are essentially the same, as mentioned in Sections \ref{sec_f1} and \ref{sec_f2}, one directly infers the explicit formula provided in \eqref{Eigenprojection-eq}. It remains to show that there are no eigenvalues in $[0,\infty[$. We will consider separately $0$ and $]0,\infty[$. First, let us consider the functions $x\mapsto h_{\beta,m}^\pm(x):=x^{1/4}\mathcal{H}_{2m}^\pm\big(2\sqrt{\beta x}\big)$. By the arguments of Subsection \ref{Zero-energy eigensolutions of the Whittaker operator}, they satisfy $L_{\beta,m^2}h_{\beta,m}^\pm=0$.
By the asymptotic expansions of these functions near $0$ provided in \cite[App.~A.5]{DR} one easily infers that $h_{\beta,m}^\pm$ are $L^2$ near $0$ for $|\Re(m)|<1$ and not otherwise. Also, since for large $z$ one has \begin{equation*} \mathcal H_m^\pm(z) = \mathrm{e}^{\pm\i (z-\frac{1}{2}\pi m-\frac{1}{4}\pi)}\big(1+O(|z|^{-1})\big)\ , \end{equation*} we deduce that $$ h_{\beta,m}^\pm(x) = x^{1/4}\mathrm{e}^{\pm\i (2\sqrt{\beta x}-\pi m-\frac{1}{4}\pi)}\big(1+O(x^{-1/2})\big), $$ which means that one (and only one) of these functions is in $L^2$ near infinity if and only if $\sqrt{\beta}$ has a non-zero imaginary part. However, since none of these functions has an asymptotic behavior near $0$ of the form $x^{\frac{1}{2}+m} \big(1-\frac{\beta}{1+2m} x \big)+o(x^{\frac{3}{2}})$, one deduces that none of them belongs to ${\mathcal D}(H_{\beta,m})$. Hence $0$ is never an eigenvalue of $H_{\beta,m}$. Let us now consider the equation $L_{\beta,m^2}v=\mu^2 v$ for some $\mu>0$. Two linearly independent solutions are provided by the functions $x\mapsto \Hpm{\frac{\beta}{2\mu}}{m}(2\mu x)$ introduced in Section \ref{sec_f4}. By the asymptotic expansion around $0$ provided in \eqref{Hpm-around-zero}, one infers that these functions are $L^2$ near $0$ if $|\Re(m)|<1$ and not otherwise. Then, from the asymptotic expansion near $\infty$ provided in \eqref{Hpm-around-infinity} one deduces that \begin{equation*} \Hpm{\frac{\beta}{2\mu}}{m}(2\mu x) = \mathrm{e}^{\mp\i\frac{\pi}{2}\naw{\frac{1}{2}+m}}\mathrm{e}^{\frac{\pi\beta}{4\mu}}(2\mu x)^{\pm\i\frac{\beta}{2\mu}} \;\!\mathrm{e}^{\pm\i\mu x}\big(1 + O(x^{-1})\big). \end{equation*} Again, one infers that one (and only one) of these functions is in $L^2$ near infinity if and only if $\beta$ has a non-zero imaginary part.
However, by taking the asymptotic expansion near $0$ provided in \eqref{Hpm-around-zero}, one observes that none of these functions belongs to ${\mathcal D}(H_{\beta,m})$, from which we deduce that $\mu^2$ is never an eigenvalue of $H_{\beta,m}$. \end{proof} Let us describe more precisely the point spectrum $\sigma_\mathrm{p}\big(H_{\beta,m}\big)$ when the operator $H_{\beta,m}$ is self-adjoint, that is, when $\beta$ and $m$ are real. \begin{corollary} \begin{enumerate} \item[(i)] For $m\in[-1/2,\infty[$ and $\beta<0$ one has $\sigma_\mathrm{p}\big(H_{\beta,m}\big)=\emptyset$. \item[(ii)] For $m\in]-1/2,\infty[$ and $\beta>0$ one has $\sigma_\mathrm{p}\big(H_{\beta,m}\big)=\big\{-\frac{\beta^2}{4(N+m+\frac12)^2}\mid N\in \mathbb N\big\}$. \item[(iii)] For $m\in]-1,-1/2[$ and $\beta<0$ one has $\sigma_\mathrm{p}\big(H_{\beta,m}\big)=\big\{-\frac{\beta^2}{4(m+\frac12)^2}\big\}$. \item[(iv)] For $m\in]-1,-1/2]$ and $\beta>0$ one has $\sigma_\mathrm{p}\big(H_{\beta,m}\big)=\big\{-\frac{\beta^2}{4(N+m+\frac12)^2}\mid N\in \mathbb N^\times\big\}$. \end{enumerate} \end{corollary} The singularity of the holomorphic function $(\beta,m)\mapsto H_{\beta,m}$ at $(0,-\frac12)$ may seem surprising. The following proposition helps to explain why this singularity arises: it indicates that the point spectrum has a rather wild behavior for parameters near this singularity. \begin{proposition}\label{more} For every neighborhood $\mathcal{V}$ of $(0,-\frac12)$ in $\mathbb C\times\mathbb C$ and every $z\in\mathbb C$ we can find $(\beta,m)\in\mathcal{V}$ such that $z\in\sigma(H_{\beta,m})$. \end{proposition} \begin{proof} Clearly, one has $[0,\infty[\subset \sigma(H_{\beta,m})$. Moreover, if $z\not\in[0,\infty[$, $\epsilon>0$, $\beta_\epsilon:=2\epsilon\sqrt{-z}$ and $m_\epsilon:=-\frac12+\epsilon$, then one has $z\in\sigma(H_{\beta_\epsilon,m_\epsilon})$ as a consequence of Theorem \ref{thm_spectrum} for $N=0$.
Clearly, $(\beta_\epsilon,m_\epsilon)\to(0,-\frac12)$ as $\epsilon \to 0$. \end{proof} \begin{remark} Let us recall that when $\beta=0$ one has $\sigma_\mathrm{p}\big(H_{0,m}\big)=\emptyset$ and $\sigma(H_{0,m})=[0,\infty[$ for any $m$ with $m>-1$, as shown in \cite{BDG}. In that respect, the result obtained in (iii) sounds surprising: for $\beta<0$, $-\frac{\beta}{x}$ may seem to be a positive perturbation of $H_{0,m}$, but nevertheless $H_{\beta,m}$ has a negative eigenvalue! However, let us emphasize that there is no contradiction, since the domains of $H_{\beta,m}$ and $H_{0,m}$ are not the same: no comparison argument applies. \end{remark} \begin{remark} For $\beta$ and $m$ as in the \emph{physical} quantum-mechanical hydrogen atom, the set $\{\lambda_N\}_{N\in \mathbb N}$ coincides with the usual hydrogen atom point spectrum. Physicists introduce non-negative integers $\ell:=m-\frac{1}{2}$ and $n:=\ell+N+1$, called respectively \emph{the azimuthal quantum number} and \emph{the main quantum number}. Letting $\ell$ range over $0\leqslant \ell \leqslant n-1$, these numbers account for the $n$-fold degeneracy of the eigenvalue $E_n = -\frac{\beta^2}{4n^2}$. \end{remark} \subsection{Dilation analyticity} The group of dilations is defined for any $\theta \in \mathbb R$ by $$ U_\theta f(x):=\mathrm{e}^{\frac{\theta}{2}}f(\mathrm{e}^\theta x),\quad f\in L^2(\mathbb R_+). $$ It is easily observed that $U_\theta {\mathcal D}(H_{\beta,m}) = {\mathcal D}(H_{\mathrm{e}^\theta \beta,m})$ and \begin{equation}\label{anal} U_\theta H_{\beta,m}U_\theta^{-1}=\mathrm{e}^{-2\theta}H_{\mathrm{e}^\theta\beta, m}. \end{equation} The r.h.s.~of \eqref{anal} can be extended to an analytic function \begin{equation}\label{dila} \mathbb C\ni\theta\mapsto H_{\beta,m}(\theta):=\mathrm{e}^{-2\theta}H_{\mathrm{e}^\theta\beta, m}.
\end{equation} As a consequence, the operator $H_{\beta,m}$ is an example of a dilation analytic Schr\"odinger operator, where the domain of analyticity is the whole complex plane. In addition, there is a periodicity in the imaginary direction: we have \begin{equation*} H_{\beta,m}(\theta)= H_{\beta,m}(\theta+2\i\pi). \end{equation*} An operator $H_{\beta,m}$ with a non-real $\beta$ can always be transformed by dilation analyticity into an operator with a real parameter. More precisely, if $\beta=\mathrm{e}^{\i\phi}|\beta|$ is any complex number, then we have \begin{align*} H_{\beta,m}(-\i\phi)&=\mathrm{e}^{2\i\phi}H_{|\beta|, m},\\ H_{\beta,m}(\i\pi-\i\phi)&=\mathrm{e}^{2\i\phi}H_{-|\beta|, m}. \end{align*} Note that these relations will be used in the Appendix for the explicit description of the spectrum of $H_{\beta,m}$. \subsection{Boundary value of the resolvent and spectral density} Our next aim is to look at the boundary value of the resolvent of $H_{\beta,m}$ on the real axis. For that purpose and for any $s\in\mathbb R$ we introduce the space $\langle X\rangle^{s} L^2(\mathbb R_+)$, where $\langle X\rangle:= (1+X^2)^{1/2}$. Clearly, for $s\geqslant 0$ the space $ \langle X\rangle^{-s}L^2(\mathbb R_+)$ is the domain of $\langle X\rangle^{s}$, which we endow with the graph norm, while $\langle X\rangle^s L^2(\mathbb R_+)$ can be identified with the anti-dual of $\langle X\rangle^{-s} L^2(\mathbb R_+)$. Let $m,\beta\in \mathbb C$ with $\Re(m)>-1$. We set \begin{equation}\label{exco} \Omega_{\beta,m}^\pm:= \mathbb R_+\backslash \Big\{\pm \tfrac{\i\beta}{2(N+\frac12+m)}\mid N\in\mathbb N,\ N+\frac12+m\neq0 \Big\} \end{equation} and $\Omega_{\beta,m}:=\Omega_{\beta,m}^+\cap\Omega_{\beta,m}^-$. We say that $(\beta,m)$ is \emph{an exceptional pair} if there exists $N\in\mathbb N$ such that $N+\frac12+m\neq0$ and $\frac{\i \beta}{N+m+\frac12}\in\mathbb R\backslash\{0\}$. 
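For a concrete example, take $m=-\frac12+\i\mu$ with $\mu\in\mathbb R\setminus\{0\}$ and $\beta\in\mathbb R\setminus\{0\}$. Then for $N=0$ one has
\begin{equation*}
\frac{\i\beta}{N+\frac12+m}=\frac{\i\beta}{\i\mu}=\frac{\beta}{\mu}\in\mathbb R\setminus\{0\},
\end{equation*}
so that $(\beta,m)$ is an exceptional pair, and the set $\Omega_{\beta,m}$ excludes from $\mathbb R_+$ the single point $\big|\frac{\beta}{2\mu}\big|$.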
Note that \begin{align*} \Omega_{\beta,m}&= \mathbb R_+, \text{ if $(\beta,m)$ is not an exceptional pair},\\ \Omega_{\beta,m}&= \mathbb R_+\backslash\Big\{\Big|\frac{\Re(\beta)}{2\Im(m)}\Big|\Big\}, \text{ if $(\beta,m)$ is an exceptional pair and $\beta\not\in\i\mathbb R$},\\ \Omega_{\beta,m}&= \mathbb R_+\backslash\Big\{\frac{|\beta|}{2(N+\frac12+m)}\mid N+\frac12+m>0,\ N\in\mathbb N\Big\}, \text{ if $\beta\in\i\mathbb R$, $\beta\neq0$, $m\in]-1,\infty[$.} \end{align*} The theorem that we state below has some restrictions when $(\beta,m)$ is an exceptional pair. Its statement is rather involved since any $\beta \in \mathbb C$ is considered. In the special case $\Im(\beta)=0$ some simplifications take place. \begin{theorem}\label{thm_conv_resolvent} Let $m,\beta\in \mathbb C$ with $\Re(m)>-1$. Let us fix $k_0>0$ and consider any $k\in]k_0,\infty[\,\bigcap\Omega_{\beta,m}^\pm$. Then, the boundary values of the resolvent \begin{equation*} R_{\beta,m}(k^2 \pm \i 0):= \mathop{\lim}\limits_{\epsilon \searrow 0}R_{\beta,m}(k^2 \pm\i\epsilon) \end{equation*} exist in the sense of operators from $\langle X\rangle^{-s} L^2(\mathbb R_+)$ to $ \langle X\rangle^s L^2(\mathbb R_+)$ for any $s>\frac{1}{2}+\frac{|\Im(\beta)|}{2k_0}$, uniformly in $k$ on each compact subset of $]k_0,\infty[\,\bigcap\Omega_{\beta,m}^\pm$. For $x,y\in \mathbb R_+$ the kernel of ${R_{\beta,m}(k^2\pm \i 0)}$ is given by \begin{align}\notag &R_{\beta,m}(k^2\pm \i 0;x,y)\\ =&\pm\tfrac{\i}{2k}\Gamma\naw{\tfrac{1}{2} + m \mp \tfrac{\i\beta}{2k}}\begin{cases} \J{\frac{\beta}{2k}}{m}(2k x)\Hpm{\frac{\beta}{2k}}{m}(2k y) & \mbox{ for }0<x<y,\\ \J{\frac{\beta}{2k}}{m}(2k y)\Hpm{\frac{\beta}{2k}}{m}(2k x) & \mbox{ for }0<y<x. \end{cases}\label{eq_kernel_1} \end{align} \end{theorem} Before starting the proof, let us emphasize the role played by $\beta$. If $\Im(\beta)= 0$, then the limiting absorption principle takes place in the usual spaces, with the exponent $s>\frac{1}{2}$. 
On the other hand, if $\Im(\beta)\neq 0$, an additional weight is necessary for the limiting absorption principle. \begin{proof} Let $k>0$ and assume that $\pm \tfrac{\i\beta}{2k}-m-\tfrac{1}{2}\not \in \mathbb N$. Let us then consider the operator $\langle X\rangle^{-s}R_{\beta,m}(k^2\pm \i \epsilon)\langle X\rangle^{-s}$ whose kernel is \begin{equation}\label{eq_kernel_2} \langle x\rangle^{-s}R_{\beta,m}(k^2\pm \i \epsilon;x,y)\langle y\rangle^{-s}, \end{equation} see also \eqref{The-resolvent}. We show that the corresponding operator is Hilbert--Schmidt and converges in the Hilbert--Schmidt norm to the operator whose kernel is provided by \eqref{eq_kernel_1}. For that purpose, let us also set $k^\mp_\epsilon:=\sqrt{-k^2\mp \i\epsilon}$ and observe that $\Re(k^\mp_\epsilon)>0$ and that $\lim\limits_{\epsilon\searrow 0}k^\mp_\epsilon=\mp \i k$. As a consequence, one has $k^\mp_\epsilon = \mp \i k + O(\epsilon)$. We consider first the slightly more complicated case when $-1<\Re(m)\leqslant 0$ and $m\neq 0$. By the estimate \eqref{eq_est2} the expression \eqref{eq_kernel_2} is bounded for $\epsilon$ small enough by \begin{align}\label{eq_to_maj} \nonumber & C_k \tfrac{\big|\Gamma(\frac{1}{2}+m-\frac{\beta}{2k^\mp_\epsilon})\big|}{2|k|} \min\{1,2x|k|\}^{\Re(m)+\frac{1}{2}} \langle x\rangle^{-s} \\ &\quad \times \min\{1,2y|k|\}^{\Re(m)+\frac{1}{2}} \langle y\rangle^{-s} \begin{cases} \big(\frac{\langle y\rangle}{\langle x\rangle}\big)^{\Re(\frac{\beta}{2k_\epsilon^\mp})} & \mbox{ for }y>x,\\ \big(\frac{\langle x\rangle}{\langle y\rangle}\big)^{\Re(\frac{\beta}{2k_\epsilon^\mp})} & \mbox{ for }x>y, \end{cases} \end{align} for a constant $C_k$ independent of $x$ and $y$ but which depends on $k$. We then observe that \begin{equation*} \lim_{\epsilon\searrow0} \Re\big(\tfrac{\beta}{2k_\epsilon^\mp}\big) = \mp \frac{\Im(\beta)}{2k}.
\end{equation*} Since $\mp \frac{\Im(\beta)}{2k}\leqslant \frac{|\Im(\beta)|}{2k_0}$, by taking $\epsilon$ sufficiently small our assumption on $s$ implies that the expression \eqref{eq_to_maj} belongs to $L^2(\mathbb R_+ \times \mathbb R_+)$. On the other hand, starting with the expression \eqref{The-resolvent} and by taking the equalities \eqref{Jbm-definition} and \eqref{Hpm-definition} into account, one also observes that for fixed $x$ and $y$, the expression given in \eqref{eq_kernel_2} converges as $\epsilon\searrow 0$ to \begin{equation}\label{eq_kernel_3} \langle x\rangle^{-s}R_{\beta,m}(k^2\pm \i 0;x,y)\langle y\rangle^{-s}, \end{equation} with $R_{\beta,m}(k^2\pm \i 0;x,y)$ defined in \eqref{eq_kernel_1}. We can thus apply the Lebesgue dominated convergence theorem and deduce that \eqref{eq_kernel_2} converges in $L^2(\mathbb R_+ \times \mathbb R_+)$ to \eqref{eq_kernel_3}. This convergence is equivalent to \begin{equation*} \lim_{\epsilon\searrow 0} \ \langle X\rangle^{-s}R_{\beta,m}(k^2\pm \i \epsilon)\langle X\rangle^{-s} = \langle X\rangle^{-s}R_{\beta,m}(k^2\pm \i 0)\langle X\rangle^{-s} \end{equation*} in the Hilbert--Schmidt norm. Note finally that the uniform convergence in $k$ on each compact subset of $]k_0,\infty[$ can be checked directly on the above expressions. For $\Re(m) \geqslant 0$ with $m\neq 0$, the same proof holds with the estimate \eqref{eq_est1} instead of \eqref{eq_est2}. Finally, for $m=0$ the result can be obtained by using \eqref{eq_est3}. \end{proof} Based on the previous result, the corresponding \emph{spectral density} can now be defined. \begin{proposition} Let $m\in \mathbb C$ with $\Re(m)>-1$, let $\beta\in \mathbb C$ and let us fix $k_0>0$.
Then for $k\in]k_0,\infty[\,\bigcap\Omega_{\beta,m}$ the {\em spectral density} defined by \begin{align*} p_{\beta,m}(k^2) & := \mathop{\lim}\limits_{\epsilon\searrow 0}\frac{1}{2\pi \i}\naw{R_{\beta,m}(k^2+ \i \epsilon) - R_{\beta,m}(k^2 - \i \epsilon)} \\ &= \frac{1}{2\pi \i}\naw{R_{\beta,m}(k^2+ \i 0) - R_{\beta,m}(k^2- \i 0)} \end{align*} exists in the sense of operators from $\langle X\rangle^{-s} L^2(\mathbb R_+)$ to $\langle X\rangle^s L^2(\mathbb R_+)$ for any $s>\frac{1}{2}+\frac{|\Im(\beta)|}{2k_0}$. The kernel of this operator is provided for $x,y\in\mathbb{R}_+$ by \begin{equation}\label{spectral-density} p_{\beta,m}(k^2;x,y) =\frac{\mathrm{e}^{\frac{\pi\beta}{2k}}}{4\pi k}\Gamma\Big(\tfrac{1}{2}+m+\tfrac{\i\beta}{2k}\Big)\Gamma\Big(\tfrac{1}{2}+m-\tfrac{\i\beta}{2k}\Big) \J{\frac{\beta}{2k}}{m}(2k x)\J{\frac{\beta}{2k}}{m}(2k y). \end{equation} \end{proposition} \begin{proof} The existence of the limit is provided by Theorem \ref{thm_conv_resolvent} while the explicit formula \eqref{spectral-density} can be deduced from \eqref{eq_kernel_1} together with \eqref{eq_missing}. \end{proof} Note that for $\beta=0$ the expression obtained above reduces to \begin{equation} p_{0,m}(k^2;x,y) = \frac{1}{\pi k}\mathcal J_m(k x)\mathcal J_m(k y), \end{equation} by taking the relation \eqref{eq_J0m_B} into account. This expression corresponds to the one obtained in a less general context in \cite[Prop.~4.4]{DR}. \subsection{Reminder about the Hankel transform} As we mentioned before, to some extent this paper can be viewed as a continuation of \cite{BDG,DR}. These two papers were devoted to Schr\"odinger operators with the inverse square potential. Among other things, certain natural transformations diagonalizing these operators were introduced. They were called there {\em (generalized) Hankel transformations}. In the present paper we would like to find natural transformations that diagonalize $H_{\beta,m}$. 
We will mimic as closely as possible our previous constructions. Therefore, we devote this subsection to a summary of selected results of \cite{BDG,DR}. Recall first that for any $m\in\mathbb C$ with $\Re(m)>-1$ the operator $H_m=H_{0,m}$ can be diagonalized using the {\em Hankel transformation} $\mathscr{F}_m=\mathscr{F}_m^\#=\mathscr{F}^{-1}_m$, which is a bounded operator on $L^2(\mathbb R_+)$ such that \begin{align*} \mathscr{F}_m X^2&=H_m\mathscr{F}_m. \end{align*} For any $m,m'$ we also define the M{\o}ller operators corresponding to the pair $\big(H_m,H_{m'}\big)$ by the time dependent formula \begin{equation*} W_{m,m'}^\pm:=\slim_{t\to\infty}\mathrm{e}^{\pm\i tH_m}\mathrm{e}^{\mp\i tH_{m'}}. \end{equation*} These operators can be expressed in terms of the Hankel transformation: \begin{equation*} W_{m,m'}^\pm=\mathrm{e}^{\pm\i\frac{\pi}{2}(m-m')}\mathscr{F}_m\mathscr{F}_{m'}. \end{equation*} It is also natural to introduce the pair of operators \begin{equation}\label{emme} \mathscr{F}_m^\pm:=\mathrm{e}^{\pm\i\frac{\pi}{2}m}\mathscr{F}_m, \end{equation} which can be called the {\em incoming/outgoing Hankel transformations}. Then we can write \begin{equation*} W_{m,m'}^\pm=\mathscr{F}_{m}^\pm \mathscr{F}_{m'}^{\mp \#}\ . \end{equation*} Note that the definition \eqref{emme} may look trivial, but we will see that in some situations it is more natural to generalize $\mathscr{F}_m^\pm$ rather than $\mathscr{F}_m$. The operators $H_m$ have very special properties, and therefore some of the properties of the Hankel transformations are specific to this class of operators. A more general class of 1-dimensional Schr\"odinger operators $H_{m,\kappa}$ on the half-line has been considered in \cite{DR}. They are generalizations of $H_m$ obtained by considering general boundary conditions at zero. Exceptional cases exist for this family; however, for non-exceptional pairs $(m,\kappa)$ one can generalize the construction of the incoming/outgoing Hankel transformations.
In fact, one can define a pair of bounded and left-invertible operators $\mathscr{F}_{m,\kappa}^\pm$ that diagonalize $H_{m,\kappa}$. They satisfy \begin{align*} \mathscr{F}_{m,\kappa}^{\mp\#}\mathscr{F}_{m,\kappa}^\pm &=\bbbone,\\ \mathscr{F}_{m,\kappa}^\pm \mathscr{F}_{m,\kappa}^{\mp\#}&= \bbbone_{\mathbb R_+}(H_{m,\kappa}),\\ \mathscr{F}_{m,\kappa}^\pm X^2\mathscr{F}_{m,\kappa}^{\mp \#}&=H_{m,\kappa}\;\! \bbbone_{\mathbb R_+}(H_{m,\kappa}), \end{align*} where $\bbbone_{\mathbb R_+}(H_{m,\kappa})$ in the self-adjoint case is the projection on the continuous subspace of $H_{m,\kappa}$, and in the general case it is its obvious generalization. We called $\mathscr{F}_{m,\kappa}^\pm $ the \emph{outgoing/incoming generalized Hankel transformations}. The operators $\mathscr{F}_{m,\kappa}^+$ and $\mathscr{F}_{m,\kappa}^{-}$ are linked by the relation \begin{equation}\mathscr{F}_{m,\kappa}^+\mathcal{G}_{m,\kappa}=\mathscr{F}_{m,\kappa}^{-},\label{linked} \end{equation} where $\mathcal{G}_{m,\kappa}$ is a bounded and boundedly invertible operator commuting with $X$. One can formulate scattering theory for an arbitrary pair of Hamiltonians $H_{m,\kappa}$, $H_{m',\kappa'}$, as it is done in \cite{DR}. Alternatively, one can fix a \emph{reference Hamiltonian}, which is simpler, to which the more complicated \emph{interacting Hamiltonian} $H_{\beta,m}$ will be compared. Two choices of the reference Hamiltonian can be viewed as equally simple: the {\em Dirichlet Laplacian} $H_\mathrm{D}:=H_{\frac12}$ and the {\em Neumann Laplacian} $H_\mathrm{N}:=H_{-\frac12}$. 
Recall from \cite[Sec.~4.7]{DR} that the Hankel transformation for $H_\mathrm{D}$ is the sine transformation $\mathscr{F}_{\frac12}=\mathscr{F}_\mathrm{D}=\mathscr{F}_\mathrm{D}^{-1}=\mathscr{F}_\mathrm{D}^\#=\mathscr{F}_\mathrm{D}^*$, while the Hankel transformation for $H_\mathrm{N}$ is the cosine transformation $\mathscr{F}_{-\frac12}=\mathscr{F}_\mathrm{N}=\mathscr{F}_\mathrm{N}^{-1}=\mathscr{F}_\mathrm{N}^\#=\mathscr{F}_\mathrm{N}^*$. These transformations are defined by \begin{align*} (\mathscr{F}_\mathrm{D} f)(x)&=\sqrt{\frac{2}{\pi}}\int\sin(xk)f(k)\;\!\mathrm d k,\\ (\mathscr{F}_\mathrm{N} f)(x)&=\sqrt{\frac{2}{\pi}}\int\cos(xk)f(k)\;\!\mathrm d k. \end{align*} Following \eqref{emme} and \eqref{linked}, we also introduce \begin{align*} \mathscr{F}_\mathrm{D}^\pm:=\mathrm{e}^{\pm\i\frac{\pi}{4}}\mathscr{F}_\mathrm{D},&\quad\mathcal{G}_\mathrm{D}:=\mathrm{e}^{\i\frac{\pi}{2}}\bbbone,\\ \mathscr{F}_\mathrm{N}^\pm:=\mathrm{e}^{\mp\i\frac{\pi}{4}}\mathscr{F}_\mathrm{N},& \quad\mathcal{G}_\mathrm{N}:=\mathrm{e}^{-\i\frac{\pi}{2}}\bbbone. \end{align*} For any non-exceptional pair $(m,\kappa)$ with $m>-1$ one can introduce the corresponding M{\o}ller operators with respect to the Dirichlet and Neumann dynamics: \begin{align*} W_{m,\kappa;\mathrm{D}}^\pm&:=\slim_{t\to\infty}\mathrm{e}^{\pm\i tH_{m,\kappa}}\mathrm{e}^{\mp\i tH_\mathrm{D}},\\ W_{m,\kappa;\mathrm{N}}^\pm&:=\slim_{t\to\infty}\mathrm{e}^{\pm\i tH_{m,\kappa}}\mathrm{e}^{\mp\i tH_\mathrm{N}}. \end{align*} The corresponding scattering operators are then defined by \begin{equation*} S_{m,\kappa;\mathrm{D}} := W_{m,\kappa;\mathrm{D}}^{-\#} W_{m,\kappa;\mathrm{D}}^- \ \hbox{ and }\ S_{m,\kappa;\mathrm{N}} := W_{m,\kappa;\mathrm{N}}^{-\#} W_{m,\kappa;\mathrm{N}}^-.
\end{equation*} We can also express the M{\o}ller operators in terms of the generalized Hankel transformations \begin{equation*} W_{m,\kappa;\mathrm{D}}^\pm =\mathscr{F}_{m,\kappa}^\pm\mathscr{F}_\mathrm{D}^{\mp\#} \hbox{ and }\ W_{m,\kappa;\mathrm{N}}^\pm =\mathscr{F}_{m,\kappa}^\pm\mathscr{F}_\mathrm{N}^{\mp\#}. \end{equation*} By conjugating with $\mathscr{F}_\mathrm{D}$, resp. $\mathscr{F}_\mathrm{N}$, the scattering operator can be brought to a diagonal form, where up to an inessential factor it coincides with $\mathcal{G}_{m,\kappa}$\;\!: \begin{equation*} \mathscr{F}_\mathrm{D} S_{m,\kappa;\mathrm{D}}\mathscr{F}_\mathrm{D} = \i\mathcal{G}_{m,\kappa}\ \hbox{ and } \ \mathscr{F}_\mathrm{N}S_{m,\kappa;\mathrm{N}}\mathscr{F}_\mathrm{N} = -\i\mathcal{G}_{m,\kappa}. \end{equation*} \subsection{Hankel-Whittaker transformation} It is natural to ask whether the operators $H_{\beta,m}$ considered in this paper also possess diagonalizing operators and a satisfactory scattering theory. There exists actually a candidate for a generalization of incoming/outgoing Hankel transformations $\mathscr{F}_m^\pm$. For any $\beta,m\in \mathbb C$ with $\Re(m)>-1$ let us define the kernel \begin{equation}\label{eq_def_F} \mathscr{F}^\pm_{\beta,m}(x,k) := \frac {1}{\sqrt{2\pi}} \mathrm{e}^{\pm \i \frac{\pi}{2}m}\mathrm{e}^{\frac{\pi\beta}{4k}} \Gamma\big(\tfrac{1}{2}+m\pm\tfrac{\i\beta}{2k}\big)\J{\frac{\beta}{2k}}{m}(2xk), \end{equation} where $x,k\in \mathbb R_+$. This kernel can be used to define a linear transformation on any $f\in C_\mathrm{c}(\mathbb R_+)$\;\!: \begin{equation*} \big(\mathscr{F}_{\beta,m}^\pm f\big)(x):=\int_0^\infty\mathscr{F}_{\beta,m}^\pm(x,k)f(k)\;\!\mathrm d k. \end{equation*} We call $\mathscr{F}_{\beta,m}^\pm$ the \emph{outgoing/incoming Hankel--Whittaker transformation}. 
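As a simple consistency check, observe that for $\beta=0$ the exponential factor in \eqref{eq_def_F} disappears and the Gamma factor reduces to a constant, namely
\begin{equation*}
\mathscr{F}^\pm_{0,m}(x,k) = \frac{1}{\sqrt{2\pi}}\;\!\mathrm{e}^{\pm \i \frac{\pi}{2}m}\;\!\Gamma\big(\tfrac{1}{2}+m\big)\J{0}{m}(2xk),
\end{equation*}
which, up to the normalization of $\J{0}{m}$, corresponds to the incoming/outgoing Hankel transformations $\mathscr{F}_m^\pm=\mathrm{e}^{\pm\i\frac{\pi}{2}m}\mathscr{F}_m$ introduced in \eqref{emme}.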
We also introduce the function $g_{\beta,m}:\mathbb R_+\to \mathbb C$ by \begin{equation}\label{eq_def_G} g_{\beta,m}(k):=\mathrm{e}^{-\i\pi m}\frac{\Gamma(\frac12+m-\i\frac{\beta}{2k})} {\Gamma(\frac12+m+\i\frac{\beta}{2k})} \end{equation} and the corresponding multiplication operator \begin{equation*} \mathcal{G}_{\beta,m}:=g_{\beta,m}(X), \end{equation*} which we can call the \emph{intrinsic scattering operator}. Let us collect the most obvious properties of $\mathscr{F}_{\beta,m}^\pm$ and $\mathcal{G}_{\beta,m}$. Recall that the set $\Omega_{\beta,m}$ has been introduced just after \eqref{exco}, and that if $(\beta,m)$ is not an exceptional pair then $\Omega_{\beta,m}=\mathbb R_+$. \begin{theorem} Let $m,\beta\in \mathbb C$ with $\Re(m)>-1$, let us also fix $k_0>0$ and let $s>\frac{1}{2}+\frac{|\Im(\beta)|}{2k_0}$. \begin{enumerate} \item[(i)] $\mathscr{F}_{\beta,m}^\pm$ maps $C_{\rm c}\big(]k_0,\infty[\,\bigcap\Omega_{\beta,m}\big)$ into $ \langle X\rangle^{s}L^2(\mathbb R_+)$. \item[(ii)] If $h\in C_{\rm c}\big(]k_0,\infty[\,\bigcap\Omega_{\beta,m}\big)$, then in the sense of quadratic forms on $\langle X\rangle^{-s}L^2(\mathbb R_+)$ we have \begin{equation}\label{diago} \int_0^\infty h(k^2)\;\! p_{\beta,m}(k^2)\;\!\mathrm d k^2 =\mathscr{F}_{\beta,m}^\mp h(X^2)\mathscr{F}_{\beta,m}^{\pm\#}. \end{equation} \item[(iii)] The equalities $\mathscr{F}_{\beta,m}^+\mathcal{G}_{\beta,m}=\mathscr{F}_{\beta,m}^-$ and $\mathscr{F}_{\beta,m}^-\mathcal{G}_{\beta,m}^{-1}=\mathscr{F}_{\beta,m}^+$ hold. 
\item[(iv)] For fixed $k\in \Omega_{\beta,m}$ the following asymptotics hold as $x\to\infty$\;\!: \begin{align} \nonumber \mathscr{F}_{\beta,m}^+(x,k) = & \frac{1}{\sqrt{2\pi}}\Big(\mathrm{e}^{-\i\frac{\pi}{4}}\mathrm{e}^{\i kx}(2kx)^{\i\frac{\beta}{2k}}\big(1+O(x^{-1})\big) \\ \label{mimi1} & \qquad +g_{\beta,m}^{-1}(k)\mathrm{e}^{\i\frac{\pi}{4}}\mathrm{e}^{-\i kx}(2kx)^{-\i\frac{\beta}{2k}}\big(1+O(x^{-1})\big)\Big),\\ \nonumber \mathscr{F}_{\beta,m}^-(x,k) = & \frac{1}{\sqrt{2\pi}}\Big(\mathrm{e}^{\i\frac{\pi}{4}}\mathrm{e}^{-\i kx}(2kx)^{-\i\frac{\beta}{2k}}\big(1+O(x^{-1})\big) \\ \label{mimi2} & \qquad +g_{\beta,m}(k)\mathrm{e}^{-\i\frac{\pi}{4}}\mathrm{e}^{\i kx}(2kx)^{\i\frac{\beta}{2k}}\big(1+O(x^{-1})\big)\Big). \end{align} \end{enumerate} \end{theorem} \begin{proof} The proof of (i) reduces to showing that the map $x\mapsto \langle x\rangle^{-s} \sup_{k\in K} \J{\frac{\beta}{2k}}{m}(2kx)$ is in $L^2(\mathbb R_+)$ for any compact set $K\subset ]k_0,\infty[\,\bigcap\Omega_{\beta,m}$. The $L^2$-integrability near $0$ follows from \eqref{Jbm-around-zero}, while the $L^2$-integrability near infinity follows from \eqref{Jbm-around-infinity}. Note that the factor $x^{\pm \i\beta}$, which becomes $(2kx)^{\pm \i \frac{\beta}{2k}}$ after the required change of variables, imposes the dependence on $k_0$ for the lower limit of the index $s$. The proofs of (ii) and (iii) consist in direct computations. Finally, (iv) can be obtained by again taking into account the asymptotic expansion of $\J{\beta}{m}$ provided in \eqref{Jbm-around-infinity}. \end{proof} Note that \eqref{diago} essentially says that $\mathscr{F}_{\beta,m}^\pm$ diagonalize the continuous part of $H_{\beta,m}$, since the l.h.s.~of \eqref{diago} can be interpreted as $h(H_{\beta,m})$. In the self-adjoint case, this would correspond to the absolutely continuous part of $H_{\beta,m}$. Clearly, this condition does not fix $\mathscr{F}_{\beta,m}^\pm$ completely. 
The additional condition for our choice of $\mathscr{F}_{\beta,m}^\pm$ comes from scattering theory, which is expressed in the asymptotics \eqref{mimi1} and \eqref{mimi2}. In that framework the functions $x\mapsto \mathscr{F}_{\beta,m}^\pm(x,k)$ can be viewed as outgoing/incoming distorted waves (or generalized eigenfunctions) of $H_{\beta,m}$ associated with the eigenvalue $k^2$. Note that if we set $\beta=0$, then \eqref{mimi1} and \eqref{mimi2} have the form of the usual distorted waves in the short-range case. On the other hand, the factors $(kx)^{\i\beta}$ are needed because of the long-range part of the potential, while the factors $\mathrm{e}^{\pm\i\frac{\pi}{4}}$ are related to the {\em Maslov index} and are needed to make our definitions consistent with the case $\beta=0$ described in \cite{DR}. Let us now recall from \cite{BDG,DR} that $\mathscr{F}_{0,m}^\pm$ are unitary for real $m$, and are bounded for more general $m$. It is natural to ask about the boundedness of $\mathscr{F}_{\beta,m}^\pm$ in the general framework introduced here, but they seem to be rather ill-behaved operators. Note that the operators $\mathcal{G}_{\beta,m}$ are better behaved, and their behavior is easier to study: \begin{proposition} \begin{enumerate} \item[(i)] If $m,\beta$ are real, then $\mathcal{G}_{\beta,m}$ is unitary. \item[(ii)] If $\beta$ is real and $\Re(m)\neq-\frac12$, then $\mathcal{G}_{\beta,m}$ is bounded and boundedly invertible. \item[(iii)] In all other cases $\mathcal{G}_{\beta,m}$ is either unbounded or has an unbounded inverse. \end{enumerate} \end{proposition} \begin{proof} For (i), it is sufficient to recall that $\Gamma(\overline{z})=\overline{\Gamma(z)}$. For (ii), by assuming that $\Re(m)\neq -\frac{1}{2}$ we make sure that the arguments of the Gamma functions in the numerator and the denominator of $g_{\beta,m}$ never pass through $0$, a pole of $\Gamma$. In addition, by using Stirling's formula one observes that $g_{\beta,m}(k)$ remains bounded for $k\to 0$ and for $k\to \infty$. 
Finally, in the case (iii) either the numerator or the denominator of $g_{\beta,m}$ can have local singularities, and in addition either $g_{\beta,m}(k)$ or $g_{\beta,m}^{-1}(k)$ is unbounded as $k\to 0$. This last result is again a consequence of Stirling's formula. \end{proof} We conjecture that $\mathscr{F}_{\beta,m}^\pm$ is unbounded in $L^2(\mathbb R_+)$ for all non-real $\beta$. In the case $\Im(\beta)=0$ but $\Im(m)\neq0$ the question remains open. For real $\beta$ and $m$, which correspond to self-adjoint $H_{\beta,m}$, the transformations $\mathscr{F}_{\beta,m}^\pm$ are bounded, as we discuss in the next subsections. \subsection{Hankel-Whittaker transformation for real parameters} Throughout this and the next subsection we assume that $\beta,m\in\mathbb R$ with $m>-1$. The operators $H_{\beta,m}$ are then self-adjoint and their spectral and scattering theory is well understood. In the real case, the Hankel-Whittaker transformation satisfies \begin{equation*} \mathscr{F}_{\beta,m}^{\pm*}=\mathscr{F}_{\beta,m}^{\mp\#}. \end{equation*} Because of this identity, we can avoid using Hermitian conjugation in our formulas in favor of transposition. We do this because we would like our formulas to generalize easily to the non-self-adjoint case, where so far their meaning is to a large extent unclear. 
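For real $\beta,m$ this identity can be read off directly from the kernel \eqref{eq_def_F}. As a sketch of the computation (our own, using $\Gamma(\overline{z})=\overline{\Gamma(z)}$ and the fact that for real parameters $\J{\frac{\beta}{2k}}{m}(2xk)$ is real):

```latex
\overline{\mathscr{F}^\pm_{\beta,m}(x,k)}
  = \frac{1}{\sqrt{2\pi}}\,\mathrm{e}^{\mp\i\frac{\pi}{2}m}\,
    \mathrm{e}^{\frac{\pi\beta}{4k}}\,
    \Gamma\big(\tfrac{1}{2}+m\mp\tfrac{\i\beta}{2k}\big)
    \J{\frac{\beta}{2k}}{m}(2xk)
  = \mathscr{F}^\mp_{\beta,m}(x,k),
```

so the integral kernel of $\mathscr{F}_{\beta,m}^{\pm*}$, namely $\overline{\mathscr{F}^\pm_{\beta,m}(x,k)}$ with the arguments transposed, coincides with that of $\mathscr{F}_{\beta,m}^{\mp\#}$.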
\begin{theorem}\label{rewrite} $\mathscr{F}_{\beta,m}^{\pm}$ are isometries that diagonalize $H_{\beta,m}$ on the range of the spectral projection of $H_{\beta,m}$ onto $\mathbb R_+$\;\!: \begin{align} \mathscr{F}_{\beta,m}^{\mp \#} \mathscr{F}_{\beta,m}^{\pm} &= \bbbone,\label{rel1}\\ \mathscr{F}_{\beta,m}^\pm \mathscr{F}_{\beta,m}^{\mp \#}& = \bbbone_{\mathbb R_+}(H_{\beta,m}),\label{rel2}\\ \mathscr{F}_{\beta,m}^\pm X^2&= H_{\beta,m}\mathscr{F}_{\beta,m}^\pm.\label{rel3} \end{align} \end{theorem} \begin{proof} For any $0<a<b$, we can apply Stone's formula \begin{equation} \bbbone_{]a,b[}(H_{\beta,m})= \slim_{\epsilon\searrow0}\frac{1}{2\pi\i}\int_a^b\big(R_{\beta,m}(\lambda+\i\epsilon)- R_{\beta,m}(\lambda-\i\epsilon)\big)\;\!\mathrm d \lambda. \label{stone1}\end{equation} We can reinterpret \eqref{stone1} in the sense of a quadratic form on appropriate weighted spaces, writing \begin{equation}\label{stone2} \bbbone_{]a,b[}(H_{\beta,m})= \int_a^b p_{\beta,m}(k^2)\;\!\mathrm d k^2. \end{equation} Now \eqref{rel2} and \eqref{rel3} follow from \eqref{stone2} and from the identity \begin{equation} p_{\beta,m}(k^2;x,y)= \frac{1}{2k} \mathscr{F}_{\beta,m}^-(x,k)\mathscr{F}_{\beta,m}^+(y,k). \label{stone5}\end{equation} It remains to prove \eqref{rel1}. To simplify notation, we will write $\mathscr{F}$ for $\mathscr{F}_{\beta,m}^\pm$ and $H$ for $H_{\beta,m}$. By \eqref{stone2} and \eqref{stone5}, for any interval $I$ we have \begin{equation}\label{squa} \mathscr{F}\bbbone_I(X^2) \mathscr{F}^*=\bbbone_I(H). \end{equation} By squaring \eqref{squa} we obtain the equality \begin{align}\label{squa1} \mathscr{F}\bbbone_I(X^2) \mathscr{F}^*\mathscr{F}\bbbone_I(X^2) \mathscr{F}^* &=\bbbone_I(H). \end{align} By setting $P:= \mathscr{F}^*\mathscr{F}$ and comparing \eqref{squa} and \eqref{squa1} we infer that \begin{equation}\label{squa2} P\bbbone_I(X^2)P\bbbone_I(X^2)P=P\bbbone_I(X^2)P. 
\end{equation} Clearly, the r.h.s.~of \eqref{squa2} is equal to $P\bbbone_I(X^2)^2P$, and therefore $$ P\bbbone_I(X^2)(\bbbone-P)\bbbone_I(X^2)P=0. $$ Equivalently one has \begin{equation*} \big((\bbbone-P)\bbbone_I(X^2)P\big)^*(\bbbone-P)\bbbone_I(X^2)P=0. \end{equation*} Consequently one gets $(\bbbone-P)\bbbone_I(X^2)P=0$ and $P\bbbone_I(X^2)(\bbbone-P)=0$. By subtraction we finally obtain \begin{equation*} P\bbbone_I(X^2)=\bbbone_I(X^2)P. \end{equation*} Thus $P$ is a projection commuting with all spectral projections of $X^2$ onto intervals. But $X^2$ has multiplicity $1$. Therefore, there exists a Borel set $\Xi\subset\mathbb R_+$ such that \begin{equation*} P=\bbbone_\Xi(X^2)=\mathscr{F}^* \mathscr{F}. \end{equation*} Suppose that $\mathbb R_+\backslash\Xi$ has positive measure. Then we can find $k_0\in\mathbb R_+$ such that for any $\epsilon>0$ $$ I_\epsilon:=[k_0-\epsilon,k_0+\epsilon]\setminus\Xi $$ also has positive measure. Let $f_\epsilon$ be the characteristic function of $I_\epsilon$, understood as an element of $L^2(\mathbb R_+)$. Then one infers that \begin{equation}\label{kerno} \|f_\epsilon\|\neq0 \quad \hbox{ and }\quad \mathscr{F} f_\epsilon=0. \end{equation} From the explicit formula for $\mathscr{F}(x,k)$ we immediately see that for any $k_0\in\mathbb R_+$ we can find $x_0\in\mathbb R_+$ such that $\mathscr{F}(x_0,k_0)\neq0$. We also know that $\mathscr{F}(x,k)$ is continuous in both variables. Therefore, we can find $\epsilon>0$ such that for $x\in[x_0-\epsilon,x_0+\epsilon]$ and $k\in[k_0-\epsilon,k_0+\epsilon]$ we have \begin{equation*} |\mathscr{F}(x,k)-\mathscr{F}(x_0,k_0)|<\frac12|\mathscr{F}(x_0,k_0)|. 
\end{equation*} Now one has \begin{equation*} \big(\mathscr{F} f_\epsilon\big)(x)=\int_{I_\epsilon}\mathscr{F}(x,k)\mathrm d k, \end{equation*} and therefore, \begin{equation*} \big|\big(\mathscr{F} f_\epsilon\big)(x)\big|\geqslant \frac12|I_\epsilon|\,|\mathscr{F}(x_0,k_0)|,\quad x\in[x_0-\epsilon,x_0+\epsilon], \end{equation*} where $|I_\epsilon|$ denotes the Lebesgue measure of $I_\epsilon$. Hence $ \mathscr{F} f_\epsilon\neq0$, which is a contradiction with \eqref{kerno}. \end{proof} In the case of real $m,\beta$, we have $|g_{\beta,m}(k)|=1$ for any $k\in \mathbb R_+$. Therefore, the whole information about $g_{\beta,m}(k)$ is contained in its argument. One half of the argument of $g_{\beta,m}(k)$ is called the {\em phase shift} \begin{equation*} \delta_{\beta,m}(k):=-\frac{\pi}{2}m+\frac{1}{\i}\log\Big(\Gamma \big(\tfrac{1}{2}+m-\tfrac{\i\beta}{2k}\big)\Big). \end{equation*} We have the relations \begin{align*} g_{\beta,m}(k)&=\mathrm{e}^{\i 2\delta_{\beta,m}(k)},\\ \mathcal{G}_{\beta,m}&=\mathrm{e}^{\i 2\delta_{\beta,m}(X)}. \end{align*} In the real case, one can avoid using the incoming/outgoing Hankel-Whittaker transformations $\mathscr{F}_{\beta,m}^{\pm}$, and instead introduce a single $\mathscr{F}_{\beta,m}$ given by the kernel \begin{equation*} \mathscr{F}_{\beta,m}(x,k) := \frac {1}{\sqrt{2\pi}} \mathrm{e}^{\frac{\pi\beta}{4k}} \Big|\Gamma\big(\tfrac{1}{2}+m\pm\tfrac{\i\beta}{2k}\big)\Big|\J{\frac{\beta}{2k}}{m}(2xk). \end{equation*} Note that \begin{equation*} \mathscr{F}_{\beta,m}^\pm= \mathscr{F}_{\beta,m}\mathrm{e}^{\mp \i \delta_{\beta,m}(X)}. \end{equation*} We can rewrite Theorem \ref{rewrite} in terms of the operator $\mathscr{F}_{\beta,m}$\;\!: \begin{align*} \mathscr{F}_{\beta,m}^{ \#} \mathscr{F}_{\beta,m} &= \bbbone,\\ \mathscr{F}_{\beta,m} \mathscr{F}_{\beta,m}^{\#}& = \bbbone_{\mathbb R_+}(H_{\beta,m}),\\ \mathscr{F}_{\beta,m} X^2&= H_{\beta,m}\mathscr{F}_{\beta,m}. 
\end{align*} However, with $\mathscr{F}_{\beta,m}$ one loses analyticity; therefore we prefer to continue using $\mathscr{F}_{\beta,m}^{\pm}$. \begin{remark} In the setting of the Coulomb problem, when $m+\frac{1}{2}=\ell \in \mathbb N$ and $\beta \in \mathbb R$, the expression $$ \delta_\ell(k):=\arg\Big(\Gamma\big(\ell+1-\i\frac{\beta}{2k}\big)\Big) $$ is called \emph{the Coulomb phase shift}. Note that an expression close to \eqref{eq_def_G} was introduced in \cite[Eq.~(2.3a)]{TB}. In the setting of the Coulomb potential in $d=3$, an additional function called \emph{the Gamow factor} is often introduced, see for example \cite[Eq.~14.1.7]{AS}. In our framework this factor does not seem to play an important role. \end{remark} \subsection{Scattering theory for real parameters} Since the Coulomb potential is long-range, we do not have the standard short-range scattering theory between arbitrary $H_{\beta,m}$ and $H_{\beta',m'}$. However, if we fix $\beta$ then the scattering theory between $H_{\beta,m}$ and $H_{\beta,m'}$ is short-range. One can argue that for $\beta\neq0$ there is only one natural reference Hamiltonian, namely $H_{\beta,\mathrm{S}}:=H_{\beta,-\frac12}= H_{\beta,\frac12}$. Here we use the subscript $\mathrm{S}$ for \emph{standard}. The situation of two equally justified reference Hamiltonians $H_{0,-\frac12}=H_\mathrm{N}$ and $H_{0,\frac12}=H_\mathrm{D}$ seems to be specific for $\beta=0$. By the standard methods of time-dependent long-range scattering theory, as described for example in \cite{DG0}, we can show, for any real $\beta,m$ with $m>-1$, the existence of the M{\o}ller operators \begin{equation*} W_{\beta,m;\beta,\mathrm{S}}^\pm :=\slim_{t\to\infty}\mathrm{e}^{\pm\i tH_{\beta,m}}\mathrm{e}^{\mp\i tH_{\beta,\mathrm{S}}} \bbbone_{\mathbb R_+}(H_{\beta,\mathrm{S}}). 
\end{equation*} These operators can also be expressed in terms of the Hankel-Whittaker transform: \begin{equation*} W_{\beta,m;\beta,\mathrm{S}}^\pm =\mathscr{F}_{\beta,m}^\pm\mathscr{F}_{\beta,\mathrm{S}}^{\mp\#} =\mathscr{F}_{\beta,m}\mathrm{e}^{\mp\i (\delta_{\beta,m}(X)-\delta_{\beta,\mathrm{S}}(X))} \mathscr{F}_{\beta,\mathrm{S}}^{\#}. \end{equation*} In order to compare distinct $\beta$ and $\beta'$ we need to use modified wave operators. There exist various constructions, and we select a construction involving a time-independent modifier that is similar in some sense to the celebrated Isozaki-Kitada construction. As the reference Hamiltonian we use $H_\mathrm{D}$. The modifier is chosen to be $\mathscr{F}_{\beta,\mathrm{S}}^\pm\mathscr{F}_\mathrm{D}^{\mp \#}$. Note that the modifier does not depend on $m$. It depends on $\pm$, and as mentioned above this allows us to obtain expressions analytic in the parameters. With the results obtained so far, one can easily prove the following statement: \begin{theorem} If $\beta,m\in\mathbb R$, $m>-1$, then there exist \begin{equation*} W_{\beta,m;\mathrm{D}}^\pm :=\slim_{t\to\infty}\mathrm{e}^{\pm\i tH_{\beta,m}} \mathscr{F}_{\beta,\mathrm{S}}^\pm\mathscr{F}_\mathrm{D}^{\mp\#}\mathrm{e}^{\mp\i tH_{\mathrm{D}}}, \end{equation*} and the following equalities hold: \begin{equation}\label{wave1} W_{\beta,m;\mathrm{D}}^\pm =\mathscr{F}_{\beta,m}^\pm\mathscr{F}_{\mathrm{D}}^{\mp\#} =\mathscr{F}_{\beta,m}\mathrm{e}^{\mp \i ( \delta_{\beta,m}(X)-\delta_{\mathrm{D}}(X))} \mathscr{F}_{\mathrm{D}}^{\#}. \end{equation} In addition, the scattering operator $S_{\beta,m;\mathrm{D}}:= W_{\beta,m;\mathrm{D}}^{-\#} W_{\beta,m;\mathrm{D}}^-$ satisfies \begin{equation*} \mathscr{F}_\mathrm{D} S_{\beta,m;\mathrm{D}}\mathscr{F}_\mathrm{D} = \i\mathcal{G}_{\beta,m}. \end{equation*} \end{theorem} Let us now compare what we obtained with the literature. 
Recall from \eqref{eq_def_F} that the kernel $\mathscr{F}_{\beta,m}^{\pm}(x,k)$ is given by \begin{align} \nonumber &\mathscr{F}^\pm_{\beta,m}(x,k) \\\label{eq_f2} & = \frac {1}{\sqrt{2\pi}} \mathrm{e}^{\pm \i \frac{\pi}{2}m}\mathrm{e}^{\frac{\pi\beta}{4k}} \Gamma\big(\tfrac{1}{2}+m\pm\tfrac{\i\beta}{2k}\big)\J{\frac{\beta}{2k}}{m}(2xk) \\ \notag& = \frac {1}{\sqrt{2\pi}} \mathrm{e}^{\pm \i \frac{\pi}{2}m}\mathrm{e}^{\frac{\pi\beta}{4k}} \Gamma\big(\tfrac{1}{2}+m\pm\tfrac{\i\beta}{2k}\big) (2kx)^{\frac{1}{2}+m} \mathrm{e}^{-\i kx} {}_1\mathbf{F}_1\Big(\frac{1}{2}+m+\i \frac{\beta}{2k};\,1+2m;\,2\i kx\Big). \end{align} For the scattering operator, we obtain the multiplication operator \begin{equation}\label{eq_f3} \big(\mathscr{F}_\mathrm{D} S_{\beta,m;\mathrm{D}}\mathscr{F}_\mathrm{D}\big)(k) = \mathrm{e}^{\i\pi(\frac{1}{2}-m)}\frac{\Gamma(\frac12+m-\i\frac{\beta}{2k})}{\Gamma(\frac12+m+\i\frac{\beta}{2k})}. \end{equation} Unsurprisingly, the expressions obtained in \eqref{eq_f2} and \eqref{eq_f3} coincide with the ones available in the literature, as for example in \cite{GPT,MOC,Mic,Muk,Yaf}. Note that in these references, only the cases $m\geqslant 0$ are considered, and most of the time only the case $m=\ell+\frac{1}{2}$ with $\ell \in \mathbb N$. Let us conclude with one feature of the scattering theory for Whittaker operators that is worth pointing out. Common wisdom says that for long-range potentials the modified M{\o}ller operators, and hence also the modified scattering operator, are not canonically defined: they are defined only up to an arbitrary momentum-dependent phase factor. However, in the case of Whittaker operators there exists a choice that can be viewed as canonical, namely the one provided in \eqref{wave1}.
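The unitarity relation $|g_{\beta,m}(k)|=1$ and the representation $g_{\beta,m}=\mathrm{e}^{2\i\delta_{\beta,m}}$ for real parameters can also be checked numerically. The following Python sketch is our own illustration, not part of the paper: it uses the Lanczos approximation as a stand-in for the Gamma function at complex arguments, and implements the phase shift as $-\frac{\pi}{2}m$ plus the argument of the Gamma factor.

```python
import cmath
import math

# Lanczos approximation (g = 7, 9 coefficients) of the Gamma function,
# valid for complex arguments; a standard numerical stand-in.
_LANCZOS = [
    0.99999999999980993, 676.5203681218851, -1259.1392167224028,
    771.32342877765313, -176.61502916214059, 12.507343278686905,
    -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7,
]

def cgamma(z):
    z = complex(z)
    if z.real < 0.5:  # reflection formula Gamma(z) Gamma(1-z) = pi / sin(pi z)
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1.0 - z))
    z -= 1.0
    x = _LANCZOS[0]
    for i in range(1, len(_LANCZOS)):
        x += _LANCZOS[i] / (z + i)
    t = z + 7.5
    return math.sqrt(2.0 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * x

def g(beta, m, k):
    """g_{beta,m}(k) = e^{-i pi m} Gamma(1/2+m-i beta/2k) / Gamma(1/2+m+i beta/2k)."""
    z = 0.5 + m - 0.5j * beta / k
    return cmath.exp(-1j * math.pi * m) * cgamma(z) / cgamma(z.conjugate())

def delta(beta, m, k):
    """Phase shift: -pi m / 2 plus the argument of Gamma(1/2+m-i beta/2k)."""
    return -0.5 * math.pi * m + cmath.phase(cgamma(0.5 + m - 0.5j * beta / k))

beta, m, k = 1.3, 0.7, 2.0
assert abs(abs(g(beta, m, k)) - 1.0) < 1e-10           # G_{beta,m} is unitary
assert abs(g(beta, m, k) - cmath.exp(2j * delta(beta, m, k))) < 1e-10
```

For real $\beta,m$ the two assertions hold for every $k>0$, since $\Gamma(\overline{z})=\overline{\Gamma(z)}$ makes the numerator and the denominator of $g_{\beta,m}$ complex conjugates of each other.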
https://arxiv.org/abs/1711.00506
Generation and application of multivariate polynomial quadrature rules
The search for multivariate quadrature rules of minimal size with a specified polynomial accuracy has been the topic of many years of research. Finding such a rule allows accurate integration of moments, which play a central role in many aspects of scientific computing with complex models. The contribution of this paper is twofold. First, we provide novel mathematical analysis of the polynomial quadrature problem that provides a lower bound for the minimal possible number of nodes in a polynomial rule with specified accuracy. We give concrete but simplistic multivariate examples where a minimal quadrature rule can be designed that achieves this lower bound, along with situations that showcase when it is not possible to achieve this lower bound. Our second main contribution comes in the formulation of an algorithm that is able to efficiently generate multivariate quadrature rules with positive weights on non-tensorial domains. Our tests show success of this procedure in up to 20 dimensions. We test our method on applications to dimension reduction and chemical kinetics problems, including comparisons against popular alternatives such as sparse grids, Monte Carlo and quasi Monte Carlo sequences, and Stroud rules. The quadrature rules computed in this paper outperform these alternatives in almost all scenarios.
\section{Reduced quadrature algorithms} \label{app:algorithms} This section presents pseudocode that outlines how to compute reduced quadrature rules as detailed in Section~\ref{sec:algorithm}. Algorithm~\ref{alg:reduced-quadrature} presents the entire set of steps for computing reduced quadrature rules, and Algorithm~\ref{alg:cluster} details the clustering algorithm used to generate the initial condition for the local optimization that computes the final reduced quadrature rule. \input{algorithm} \section{Ridge function quadrature}\label{app:ridge-function-moments} Consider a variable $y\in \mathbb{R}^d$ and a linear transformation $A \in\mathbb{R}^{s\times d}:\R^d\rightarrow \R^s$ which maps the variables $y$ into a lower-dimensional set of variables $x = A y \in\mathbb{R}^s$, where $s\le d$. When $\nu$ is a measure in $\R^d$, this transformation induces a measure $\mu$ in $\R^s$. In this section we describe how to compute the moments of the measure $\mu$ in terms of those for $\nu$. Being able to compute such moments allows one to efficiently integrate ridge functions (see Section~\ref{sec:dim-reduction}). One approach is to approximate the moments of $\mu$ via Monte Carlo sampling. That is, generate a set of samples $Y=\{y_{i}\}_{i=1}^M \subset \R^d$ from the measure $\nu$, and then compute a set of samples $X=AY$ in the $s$-dimensional space. Given a multi-index set $\Lambda$ with basis $p_\alpha$, $\alpha \in \Lambda$, the $\mu$-moments of $p_\alpha$ can then be computed approximately via $$\frac{1}{M}\sum_{i=1}^M p_\alpha (x_{i}) = \frac{1}{M} \sum_{i=1}^M p_\alpha( A y_i).$$ Such an approach is useful when one cannot directly evaluate $\dx{\mu(x)}$ but rather only has samples from the measure. However, the accuracy of a moment-matching quadrature rule will be limited by the accuracy of the moments that are being matched. 
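As a concrete sketch of this sample-based approach (our own illustration; the matrix $A$ below is an arbitrary example of a dimension-reducing map), in Python:

```python
import numpy as np

rng = np.random.default_rng(0)
d, s, M = 3, 2, 200_000

# A hypothetical dimension-reducing map A : R^3 -> R^2 (arbitrary entries).
A = np.array([[0.6, -0.2, 0.5],
              [0.1, 0.8, -0.3]])

# Sample y ~ nu (uniform on [-1,1]^3) and push the samples forward: x = A y.
Y = rng.uniform(-1.0, 1.0, size=(M, d))
X = Y @ A.T

# Monte Carlo estimate of the mu-moment of p(x) = x_1 x_2.
mc_moment = np.mean(X[:, 0] * X[:, 1])

# For the uniform measure E[y_j y_k] = delta_{jk}/3, so the exact moment
# for this particular p is (1/3) a_1 . a_2; the sampling error is O(M^{-1/2}).
exact = A[0] @ A[1] / 3.0
assert abs(mc_moment - exact) < 5e-3
```

The $O(M^{-1/2})$ error visible here is exactly the limitation discussed above: a quadrature rule matched to these moments inherits the sampling error.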
For moments evaluated using Monte Carlo sampling the error in these moments decays slowly, at a rate proportional to $M^{-1/2}$ with a constant depending on the variance of the polynomial $p_\alpha$. However, when the higher-dimensional measure $\nu$ is a tensor-product measure, $\nu(y)=\prod_{i=1}^d\nu_i(y_i)$, with each $\nu_i(y_i)$ a univariate measure, then the moments of the lower-dimensional measure $\mu$ can be computed analytically using, for example, a monomial basis. We need to compute the moments of a monomial basis of the variables $x$ with respect to the measure $\mu(x)$. Computing an expression of the measure $\mu(x)$ in terms of $\nu$ is difficult in general. Instead, we leverage the following equality \begin{align*} \int_{D} P_\alpha(y) d\nu(y) &= \int_{D} p_\alpha(x) d\mu(x), & P_\alpha(y) &\coloneqq p_\alpha(A y). \end{align*} In particular, monomials in $x$ can be expanded in terms of the variables $y$, e.g.\ $x^p=(A y)^p$, and the integrals of the resulting polynomials $P_\alpha$ reduce to sums of products of univariate integrals which can be computed analytically or to machine precision with univariate Gaussian quadrature. For example, let $y\in[-1,1]^3$, $\nu(y)$ be the uniform probability measure, and $x=Ay$, where $A\in\mathbb{R}^{2\times 3}$; then the moment of $x_1x_2$ is \begin{align*} \int_{D} x_1 x_2 d\mu(x)= \int_{[-1,1]^3} (A_{11}y_1+A_{12}y_2+A_{13}y_3)(A_{21}y_1+A_{22}y_2+A_{23}y_3) d\nu(y). \end{align*} The high-dimensional integrand on the right-hand side can be expanded into sums of products of univariate terms, and thus can be integrated with univariate quadrature. Let $A$ have rows $a_j^T \in \R^d$, $j = 1, \ldots, s$, and for simplicity assume that $\nu$ is a measure on $[-1,1]^d$. 
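The reduction to univariate integrals illustrated by this example can be carried out mechanically. The following Python snippet (our own sketch, for the uniform measure on $[-1,1]^d$) expands $\prod_j (a_j^T y)^{\alpha^{(j)}}$ into monomials in $y$ and integrates term by term:

```python
from collections import defaultdict
from itertools import product
import math

def poly_mul(p, q):
    """Multiply polynomials in y, stored as {exponent tuple: coefficient}."""
    r = defaultdict(float)
    for (e1, c1), (e2, c2) in product(p.items(), q.items()):
        r[tuple(a + b for a, b in zip(e1, e2))] += c1 * c2
    return dict(r)

def mu_moment(A, alpha):
    """Moment of x^alpha under mu, where x = A y and y is uniform on [-1,1]^d.

    Expands prod_j (a_j . y)^{alpha_j} into monomials y^beta and uses the
    univariate moments int_{-1}^{1} y^p dy / 2 = 0 (p odd), 1/(p+1) (p even)."""
    s, d = len(A), len(A[0])
    poly = {(0,) * d: 1.0}  # the constant polynomial 1
    for j in range(s):
        a_j = {tuple(int(i == k) for i in range(d)): A[j][k] for k in range(d)}
        for _ in range(alpha[j]):
            poly = poly_mul(poly, a_j)
    total = 0.0
    for beta, c in poly.items():
        term = 1.0
        for p in beta:
            term *= 0.0 if p % 2 else 1.0 / (p + 1)
        total += c * term
    return total

A = [[0.6, -0.2, 0.5], [0.1, 0.8, -0.3]]
# Moment of x_1 x_2; the closed form for this case is (1/3) a_1 . a_2.
assert math.isclose(mu_moment(A, (1, 1)), (0.6*0.1 - 0.2*0.8 - 0.5*0.3) / 3.0)
```

The number of monomials grows quickly with the degrees $\alpha^{(j)}$ and with $s$, which is the cost trade-off noted below; for moderate degrees the computation is exact up to floating-point rounding.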
For a general multi-index $\alpha$, we have \begin{align}\nonumber \int_D p_\alpha(x) \dx{\mu}(x) = \int_D x^\alpha \dx{\mu}(x) &= \int_{[-1,1]^d} \prod_{j=1}^s \left( a_j^T y\right)^{\alpha^{(j)}} \dx{\nu}(y) \\\label{eq:multinomial-expansion} &= \int_{[-1,1]^d} \prod_{j=1}^s \left[ \sum_{|\beta| = \alpha^{(j)}} \left(\begin{array}{c} \alpha^{(j)} \\ \beta \end{array} \right) a_j^\beta y^\beta \right] \dx{\nu}(y), \end{align} where we have, for a generic $\beta \in \N_0^d$, \begin{align*} a_j^T &= \left( \begin{array}{ccc} a_{j,1} & \ldots & a_{j,d} \end{array}\right), & a_j^\beta &= \prod_{k=1}^d a_{j,k}^{\beta^{(k)}}, \end{align*} and the multinomial coefficients \begin{align*} \left(\begin{array}{c} \alpha^{(j)} \\ \beta \end{array} \right) &\coloneqq \frac{\alpha^{(j)} !}{\beta !} = \left( \begin{array}{c} \alpha^{(j)} \\ \beta^{(1)},\,\beta^{(2)},\,\ldots,\,\beta^{(d)}\end{array} \right). \end{align*} Inspecting \eqref{eq:multinomial-expansion} and using the tensor-product structure of $\nu$, we see that this can be evaluated exactly via sums and products of univariate integrals of $y^\beta$. While exact, this approach becomes quite expensive for large $k \coloneqq \alpha^{(j)}$ (in which case there are $\left( \begin{array}{c} k + d - 1 \\ d - 1 \end{array}\right)$ summands under the product), or when $s$ is large (in which case one must expand an $s$-fold product). Nevertheless, one can use this approach for relatively large $s$ and $d$ since the univariate integrands in \eqref{eq:multinomial-expansion} are very inexpensive to tabulate, and it is only processing the combination of them that is expensive. \section{Acknowledgements} J.D.~Jakeman's work was supported by DARPA EQUiPS. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell International, Inc., for the U.S. 
Department of Energy's National Nuclear Security Administration under contract DE-NA-0003525. A. Narayan is partially supported by AFOSR FA9550-15-1-0467 and DARPA EQUiPS N660011524053. \input{appendix} \bibliographystyle{plain} \section{Conclusions}\label{sec:conclusion} In this paper we present a flexible numerical algorithm for generating polynomial quadrature rules. The algorithm employs optimization to find a quadrature rule that can exactly integrate, up to a specified optimization tolerance, a set of polynomial moments. We provide novel theoretical analysis of the minimal number of nodes of a polynomial rule that is exact for a given set of polynomial moments. Intuition is developed using a simple set of analytical multivariate examples that address existence and uniqueness of optimal rules. In practice we often cannot computationally find an optimal quadrature rule. However, we can find rules that have significantly fewer points than the number of moments being matched. Typically the number of points $M$ is only slightly larger ($<10$ points) than the number of moments, $N$, divided by the dimension plus one, $d+1$, i.e. $M\approx N/(d+1)$. The algorithm we present is flexible in the sense that it can construct quadrature rules with positive weights: (i) for any measure for which moments can be computed; (ii) using analytic or sample-based moments; (iii) for any set of moments, e.g. total-degree, hyperbolic-cross polynomials. We have shown that this algorithm can be used to efficiently integrate functions in a variety of settings: (i) total-degree integration on hypercubes; (ii) integration for non-tensor-product measures; (iii) high-order integration using approximate moments; (iv) ANOVA approximations; and (v) ridge function integration. \section{Introduction}\label{sec:intro} Let $D \subset \R^d$ be a domain with nonempty interior. 
Given a finite, positive measure $\mu$ on $D$ and a $\mu$-measurable function $f: D \rightarrow \R$, our main goal is the computation of \begin{align*} \int_D f(x) \dx{\mu(x)}. \end{align*} The need to evaluate such integrals arises in many areas, including finance, stochastic programming, robust design and uncertainty quantification. Typically these integrals are approximated numerically by quadrature rules of the form \begin{align}\label{eq:quadrature} \int_D f(x) \dx{\mu(x)} &\approx \sum_{m=1}^M w_m f\left( x_m \right). \end{align} Examples of common quadrature rules for high-dimensional integration are Monte Carlo and quasi Monte Carlo methods~\cite{Halton_NM_1960,Hammersley_H_book_1964,Niederreiter_book_1992,Sobol_S_IJMPC_1995}, and Smolyak integration rules~\cite{Gerstner_G_NA_1998,Gerstner_G_C_2003,Smolyak_SMD_1963}. Quadrature rules are typically designed and constructed with respect to some notion of approximation optimality. The particular approximation optimality that we seek in this paper is based on polynomial approximation. Our goal is to find a set of quadrature nodes $x_j \in D$, $j = 1, \ldots, M$, and a corresponding set of weights $w_j \in \R$ such that \begin{align}\label{eq:quadrature-condition} \sum_{j=1}^M w_j p\left( x_j \right) &= \int_D p(x) \dx{\mu(x)}, & p &\in P_{\Lambda}, \end{align} where $\Lambda \subset \N_0^d$ is a multi-index set of size $N$, and $P_\Lambda$ is an $N$-dimensional polynomial space defined by $\Lambda$. (We make this precise later.) $P_\Lambda$ can be a relatively ``standard'' space, such as the space of all $d$-variate polynomials up to a given finite degree, or more intricate spaces such as those defined by $\ell^p$ balls in index space, or hyperbolic cross spaces. 
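For instance, in one dimension with $\dx{\mu}=\dx{x}$ on $[-1,1]$ and $P_\Lambda$ the polynomials of degree at most $2n-1$, condition \eqref{eq:quadrature-condition} is satisfied by the $n$-point Gauss--Legendre rule. A quick numerical check of this (our own illustration, using NumPy):

```python
import numpy as np

n = 3
x, w = np.polynomial.legendre.leggauss(n)  # n-point Gauss-Legendre on [-1,1]

# The rule reproduces the moments of dx on [-1,1] for all degrees <= 2n-1.
for k in range(2 * n):
    exact = 0.0 if k % 2 else 2.0 / (k + 1)
    assert abs(np.sum(w * x**k) - exact) < 1e-12
```

Here the multi-index set is simply $\Lambda=\{0,1,\ldots,2n-1\}$, of size $N=2n$, matched by only $M=n$ nodes.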
In this paper we will present a method for numerically generating polynomial-based cubature rules.\footnote{Although the focus of this paper is on polynomial-based quadrature rules, and the review of quadrature rules contained here reflects this focus, other types of quadrature rules exist. Polynomial-based cubature rules are useful when the integrand $f$ has high regularity. When the function has less regularity, e.g.\ only piecewise continuity, alternative rules based upon other basis functions, such as piecewise polynomials, can be preferable: for example Simpson's rule for univariate functions, sparse grids based upon piecewise polynomials for multivariate functions~\cite{Bungartz_G_AN_2004}, and the numerous adaptive versions of these methods, e.g.~\cite{Pfluger_PB_JC_2010}.} This paper provides two major contributions to the existing literature. First, we provide a lower bound on the number of points that make up a polynomial quadrature rule. Our analysis is straightforward, but to the authors' knowledge this is the first reported bound of its kind. Our second contribution is a numerical method for generating quadrature rules that are exact for an arbitrary set of polynomial moments. Our method has the following features: \begin{itemize} \item Positive quadrature rules are generated. \item Any measure $\mu$ for which moments are computable is applicable. Many existing quadrature methods only apply to tensor-product measures. Our method constructs quadrature rules for measures with, for example, non-linear correlation between variables. \item Analytical or sample-based moments may be used. In some settings it may be possible to compute moments of a measure exactly, but in other settings only samples from the measure are available. For example, one may wish to integrate a function using Markov Chain Monte Carlo-generated samples from a posterior of a Bayesian inference problem. \item A quadrature that is faithful to arbitrary sets of moments may be generated. 
Many quadrature methods are exact for certain polynomial spaces, for example total-degree or sparse grid spaces. However, some functions may be more accurately represented by alternative polynomial spaces, such as hyperbolic cross spaces. In these situations it may be more prudent to construct rules that can match a customized set of moments. \item Efficient integration of ridge functions is possible. Some high-dimensional functions can be represented by a small number of linear combinations of the input variables. In this case it is more efficient to integrate these functions over this lower-dimensional coordinate space. Such a dimension-reducing transformation typically induces a new measure on a non-tensorial space of lower-dimensional variables. For example, a high-dimensional uniform probability measure on a hypercube may be transformed into a non-uniform measure on a zonotope (a multivariate polytope). \end{itemize} Our algorithm falls into the class of moment-matching methods. There have been some recent attempts at generating quadrature using moment matching via optimization approaches. These methods frequently either start with a small candidate set and add points until moments are matched~\cite{Mehrotra_P_SJO_2013}, or start with a large set of candidate points and reduce them until no more points can be removed without numerically violating the moment conditions~\cite{ryu_extensions_2014,vahid2017,Vandebos_KD_JCP_2017}. These approaches sometimes go by other names, such as scenario generation or scenario reduction methods. This paper presents a quadrature/scenario reduction moment-matching method based upon the work in~\cite{ryu_extensions_2014}. The method in~\cite{ryu_extensions_2014} consists of two steps. The first step generates a quadrature rule with $M=N$ points, the existence of which is guaranteed by Tchakaloff's theorem~\cite{Tchakaloff_BSM_1957}. 
The second step uses this quadrature rule as an initial guess for a local gradient-based optimization procedure that searches for a quadrature rule with $M\le N$ points. The initial quadrature rule is generated by drawing a large number of points from the domain of integration and then solving an inequality-constrained linear program to find a quadrature rule with positive weights that matches all desired moments. Similar approaches can be found in~\cite{Arnst_GPR_IJNME_2012,Constantine_PW_IJNME_2014}. The $N$ points comprising this initial quadrature rule are then grouped into $M\le N$ clusters. A new approximate quadrature rule is then formed by combining points and weights within a cluster into a single point and weight. Our numerical method differs from~\cite{ryu_extensions_2014} in the following ways: (i) we use a different formulation of the linear program to generate the initial quadrature rule; (ii) we show numerically that this linear program only needs to be solved to very limited accuracy; (iii) we present an automated way of selecting the clusters from the initial rule -- in~\cite{ryu_extensions_2014} no method for clustering points is presented; (iv) we provide extensive numerical testing of our method in a number of settings. The theoretical contributions of this paper include a lower bound on the number of points in the final quadrature rule, and a definition of quasi-optimality in terms of this bound. We also provide a simple means of testing whether the quadrature rules generated by any method are quasi-optimal. The remainder of the paper is structured as follows. In Section~\ref{sec:optimal-quadrature} we introduce some nomenclature, define quasi-optimality, and present a means to verify if a quadrature rule is quasi-optimal. We also use these theoretical tools to analytically derive a quasi-optimal rule for additive functions. In Section~\ref{sec:algorithm} we detail our algorithm for generating quadrature rules. 
We then present a wide range of numerical examples, in Section~\ref{sec:numerical-results}, which explore the properties of the quadrature rules that our numerical algorithm can generate. \subsection{Existing quadrature rules} We give a concise description of some existing quadrature rules. There are numerous approaches for computing polynomial-based quadrature rules, so our goal is not to be comprehensive, but instead to concentrate on rules that are related to our proposed method. For univariate functions Gaussian quadrature is one of the most commonly used approaches. The nodes of a Gaussian quadrature rule are prescribed by the roots of the polynomials orthogonal with respect to the measure $\mu$ \cite{szego_orthogonal_1975}. The resulting quadrature rule is always positive (meaning the weights $w_m$ are all positive) and the rules are optimal in the sense that given a Gaussian quadrature rule of degree of exactness $p$, no rule with fewer points can be used to integrate all degree-$p$ polynomials. When integrating multivariate functions with respect to tensor-product measures on a hypercube, accurate and efficient quadrature rules can be found by taking tensor products of one-dimensional Gaussian quadrature rules. These rules will be optimal for functions that can be represented exactly by tensor-product polynomial spaces of degree $p$. However, the use of such quadrature rules is limited to a small number of dimensions, say 3-4, because the number of points in the rule grows exponentially with dimension. Sparse grid quadrature methods have been successfully used as an alternative to tensor-product quadrature for multivariate functions \cite{Gerstner_G_NA_1998,Gerstner_G_C_2003,Smolyak_SMD_1963}. The number of points in sparse grid rules grows logarithmically with dimension for a fixed level of accuracy. Unlike tensor-product rules, however, the quadrature weights will not all be positive.
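The exponential point growth of tensor-product rules is easy to see concretely. The following sketch (our own illustration, not part of the algorithm in this paper) assembles a tensor-product Gauss-Legendre rule with NumPy and verifies its exactness on a tensor-product monomial:

```python
import numpy as np

def tensor_gauss(n_1d, d):
    """Tensor-product Gauss-Legendre rule on [-1, 1]^d with n_1d**d points."""
    x, w = np.polynomial.legendre.leggauss(n_1d)
    grids = np.meshgrid(*([x] * d), indexing="ij")
    nodes = np.stack([g.ravel() for g in grids], axis=1)   # shape (n_1d**d, d)
    weights = np.ones(n_1d ** d)
    for g in np.meshgrid(*([w] * d), indexing="ij"):
        weights *= g.ravel()
    return nodes, weights

# Exactness on the tensor-product monomial x^2 * y^4 over [-1, 1]^2;
# the exact integral is (2/3) * (2/5) = 4/15.
nodes, weights = tensor_gauss(3, 2)       # 9 points, exact per axis up to degree 5
approx = np.sum(weights * nodes[:, 0]**2 * nodes[:, 1]**4)
```

For $d=10$ the same 3-point univariate rule requires $3^{10} = 59049$ nodes, which is the exponential growth that motivates sparse grids.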
Sparse grid quadrature delays the curse of dimensionality by focusing on integrating polynomial spaces that have high-degree univariate terms but low-degree interaction terms. High-dimensional cubature rules can often be more effective than sparse grid rules when integrating functions that are well represented by total-degree polynomials. These rules have positive weights and typically consist of a very small number of points. However, such highly effective cubature rules are difficult to construct and have only been derived for a specific set of measures, integration domains and polynomial degrees of exactness~\cite{Hammer_S_MTAC_1958,Stroud_book_71,Xiu_ANM_2008}. Tensor-product integration schemes generally produce approximations to the integral \eqref{eq:quadrature} whose accuracy scales like $M^{-r/d}$, where $r$ indicates the maximum order of continuous partial derivatives in any direction \cite{Novak_R_inbook_1997}. This convergence rate illustrates both the blessing of smoothness -- regularity accelerates convergence exponentially -- and the curse of dimensionality -- convergence is exponentially hampered by large dimension. Sparse grids built from univariate quadrature rules with $O(2^l)$ points have a similar error, scaling like $O(M^{-r}l^{(d-1)(r+1)})$~\cite{Gerstner_G_NA_1998}. A contrasting approach is given by Monte Carlo (MC) and quasi Monte Carlo (QMC) methods. These approximations produce convergence rates of $O(M^{-\frac{1}{2}})$ and $O(\log(M)^dM^{-1})$, respectively~\cite{Caflisch_AN_1998}. MC points are random, selected as independent and identically-distributed realizations of a random variable, and QMC points are deterministically generated as sequences that minimize discrepancy. \section{The LP algorithm}\label{app:lp-algorithm} In this section we detail the reduced quadrature algorithm first presented in~\cite{ryu_extensions_2014}.
Let a finite index set $\Lambda$ be given with $|\Lambda| = N$, and suppose that $\left\{ p_j \right\}_{j=1}^N$ is a basis for $P_\Lambda$. In addition, let $r$ be a polynomial on $\R^d$ such that $r \not\in P_\Lambda$. We seek to find a positive Borel measure $\nu$ solving \begin{align*} \textrm{minimize } &\int r(x) \dx{\nu(x)} \\ \textrm{subject to } &\int p_j(x) \dx{\nu(x)} = \int p_j(x) \dx{\mu(x)}, \hskip 5pt j=1, \ldots, N \end{align*} The authors in \cite{ryu_extensions_2014} propose the following procedure for approximating a solution: \begin{enumerate}[label=\arabic*)] \item Choose a candidate mesh, $\left\{y_k\right\}_{k=1}^S$, of $S$ points on $D$. Solve the much more tractable finite-dimensional linear problem \begin{align*} \textrm{minimize } &\sum_{k=1}^S v_k r(y_k) \\ \textrm{subject to } &\sum_{k=1}^S v_k p_j(y_k) = \int p_j(x) \dx{\mu(x)}, \hskip 5pt j=1, \ldots, N \\ \textrm{and } & v_k \geq 0, \hskip 5pt k=1, \ldots, S \end{align*} The optimization is over the $S$ scalars $v_k$. Let the solution to this problem be denoted $v_k^\ast$. \item Identify $M \leq N$ clusters from the solution above. Partition the index set $\left\{1, \ldots, S\right\}$ into these $M$ clusters, denoted $C_j$, $j=1, \ldots, M$. Construct $M$ averaged points and weights from this clustering: \begin{align*} \widehat{{x}}_j &= \frac{1}{\widehat{w}_j} \sum_{k \in C_j} v_k^\ast y_k, & \widehat{w}_j &= \sum_{k \in C_j} v^\ast_k \end{align*} Note that the size-$M$ set $\left\{\widehat{{x}}_j, \widehat{w}_j \right\}_{j=1}^M$ is a positive quadrature rule, but it is no longer a solution to the optimization in the previous step.
\item Solve the nonlinear optimization problem for the nodes ${x}_{(k)}$ and weights ${w}_{(k)}$, $k=1, \ldots, M$, \begin{align*} \textrm{minimize } &\sum_{j=1}^N \left( \int p_j(x) \dx{\mu(x)} - \sum_{k=1}^M w_k p_j({x}_{(k)}) \right)^2 \\ \textrm{subject to } & {x}_{(k)} \in D \textrm{ and } w_k \geq 0, \end{align*} using the initial guess ${x}_{(k)} \gets \widehat{{x}}_k$ and $w_k \gets \widehat{w}_k$. \end{enumerate} We refer to the above method as the LP algorithm. \section{Numerically generating multivariate quadrature rules} \label{sec:algorithm} In this section we describe our proposed algorithm for generating multivariate quadrature rules; the algorithm generates rules having significantly fewer points than the number of moments being matched. We will refer to such rules as \textit{reduced quadrature} rules. We will compare the number of nodes in the rules we generate with the lower bound $L(\Lambda)$ defined in Section \ref{sec:optimal-quadrature}, along with the heuristic bound \eqref{eq:counting-heuristic}. Our method is an adaptation of the method presented in \cite{ryu_extensions_2014}. The authors there showed that one can recover a univariate Gauss quadrature rule as the solution to an infinite-dimensional linear program (LP) over nonnegative measures. Let a finite index set $\Lambda$ be given with $|\Lambda| = {N}$, and suppose that $\left\{ p_j \right\}_{j=1}^{N}$ is any basis for $P_\Lambda$. In addition, let $r$ be a polynomial on $\R^d$ such that $r \not\in P_\Lambda$; we seek a positive Borel measure $\nu$ on $\R^d$ solving \begin{align}\label{eq:inifite-lp} \textrm{minimize } &\int r(\V{x}) \dx{\nu(\V{x})} \\ \textrm{subject to } &\int p_j(\V{x}) \dx{\nu(\V{x})} = \int p_j(\V{x}) \dx{\mu(\V{x})}, \hskip 5pt j=1, \ldots, {N} \end{align} The restriction that $\nu$ is a positive measure will enter as a constraint into the optimization problem.
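A discretized instance of \eqref{eq:inifite-lp} can be sketched with an off-the-shelf LP solver. The example below is our own hypothetical univariate instance (Lebesgue measure on $[-1,1]$, monomial moments through degree three, objective $r(x)=x^4$); the two Gauss nodes are appended to the random candidate mesh so that a feasible positive rule is guaranteed to exist at finite $S$:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N = 4                                      # matched moments: degrees 0..3
g = np.array([-1.0, 1.0]) / np.sqrt(3.0)   # 2-point Gauss nodes guarantee feasibility
y = np.concatenate([rng.uniform(-1.0, 1.0, 198), g])
S = y.size                                 # 200 candidate points on D = [-1, 1]
P = np.vander(y, N, increasing=True).T     # P[j, k] = y_k ** j
b = np.array([2.0, 0.0, 2.0 / 3.0, 0.0])   # moments of dx on [-1, 1]
r = y ** 4                                 # objective polynomial r(x) = x^4 not in P_Lambda

# minimize sum_k v_k r(y_k)  subject to  P v = b  and  v >= 0
res = linprog(c=r, A_eq=P, b_eq=b, bounds=(0, None))
v = res.x
support = np.flatnonzero(v > 1e-10)        # nonzero weights define the quadrature rule
```

Vertex solutions returned by the simplex method are supported on at most $N$ candidates; in this instance the minimizer concentrates on the two Gauss nodes, so the univariate Gauss rule is recovered as the LP solution.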
A nontrivial result is that a positive measure solution to this problem exists with $|\supp \nu| = M \leq {N}$. Such a solution immediately yields a positive quadrature rule with nodes $\left\{\V{x}_{1}, \ldots, \V{x}_{M} \right\} = \supp \nu$ and weights given by $w_j = \nu(\V{x}_{j})$. Unfortunately, the above optimization problem is NP-hard, and so the authors in \cite{ryu_extensions_2014} propose a two-step procedure for computing an approximate solution. The first step solves a finite-dimensional linear problem similar to~\eqref{eq:inifite-lp} to find a $K$-point positive quadrature rule with $K \leq N$. In the second step, the $K\leq{N}$ points are clustered into $M$ groups, where $M$ is automatically chosen by the algorithm. The clusters are then used to form a set of $M$ averaged points and weights, which form an approximate quadrature rule. This approximate quadrature rule is refined using a local gradient-based method to optimize a moment-matching objective. The precise algorithm is outlined in detail in Appendix~\ref{app:lp-algorithm}. In this paper we also adopt a similar two-step procedure to compute reduced quadrature rules. We outline these two steps in detail in the following sections. Pseudocode for generating reduced quadrature rules is presented in Algorithm~\ref{alg:reduced-quadrature} contained in Appendix~\ref{app:algorithms}. \subsection{Generating an initial condition}\label{sec:algorithm-initial} Let an index set $\Lambda$ be given with $|\Lambda| = {N} < \infty$, and suppose that $\left\{ p_j \right\}_{j=1}^{N}$ is a basis for $P_\Lambda$.
We seek to find a discrete measure $\nu=\sum_k v_k\delta_{\V{x}^{(k)}}$ by choosing a large candidate mesh, $X_S = \left\{ x_k \right\}_{k=1}^S$, with $S \gg N$ points on $D$ and solving the $\ell_1$ minimization problem \begin{align}\label{eq:l1-problem} \begin{split} \textrm{minimize } &\sum_{k=1}^S |v_k| \\ \textrm{subject to } &\sum_{k=1}^S v_k p_j({x}_k) = \int p_j({x}) \dx{\mu({x})}, \hskip 5pt j=1, \ldots, {N} \\ \textrm{and } & v_k \geq 0, \hskip 5pt k=1, \ldots, S \end{split} \end{align} The optimization is over the $S$ scalars $v_k$. With $\bs{v} \in \R^S$ a vector containing the $v_k$, the non-zero coefficients then define a quadrature rule with $K=\left\|\bs{v}\right\|_0$ points, $K \leq N$. The points corresponding to the non-zero coefficients $v_k$ are the quadrature points and the coefficients themselves are the weights. This $\ell_1$-minimization problem as well as the residual-based linear program used by~\cite{ryu_extensions_2014} become increasingly difficult to solve as the size of $\Lambda$ and the dimension $d$ of the space increase. The ability to find a solution is highly sensitive to the candidate mesh. Through extensive experimentation we found that by solving~\eqref{eq:l1-problem} approximately via \begin{align}\label{eq:bpdn-problem} \begin{split} \textrm{minimize } &\sum_{k=1}^S |\alpha_k| \\ \textrm{subject to } &\lvert \sum_{k=1}^S \alpha_k p_j(\V{x}^{(k)}) - \int p_j(\V{x}) \dx{\mu(\V{x})}\rvert<\epsilon, \hskip 5pt j=1, \ldots, {N} \\ \textrm{and } & \alpha_k \geq 0, \hskip 5pt k=1, \ldots, S \end{split} \end{align} for some $\epsilon >0$, we were able to find solutions that, although possibly inaccurate with respect to the moment-matching criterion, could be used as an initial condition for the local optimization to find an accurate reduced quadrature rule. This is discussed further in Section~\ref{sec:tp-measures}.
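The following sketch illustrates the same idea on a small univariate instance of our own devising: matching moments with a sparse nonnegative weight vector over a large candidate mesh. We substitute SciPy's nonnegative least squares for the positive-coefficient $\ell_1$ solver (an assumption made purely to keep the example self-contained); its active-set solution is likewise supported on at most $N$ candidates. The nodes of a 3-point Gauss rule are appended to the mesh so that an exact positive solution exists:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
N = 6                                           # matched moments: degrees 0..5
xg, _ = np.polynomial.legendre.leggauss(3)      # seed mesh so an exact positive rule exists
y = np.concatenate([rng.uniform(-1.0, 1.0, 300), xg])
P = np.vander(y, N, increasing=True).T          # P[j, k] = y_k ** j
b = np.array([2.0 / (j + 1) if j % 2 == 0 else 0.0 for j in range(N)])

v, resid = nnls(P, b)                           # minimize ||P v - b||_2 subject to v >= 0
support = np.flatnonzero(v > 1e-10)             # sparse: at most N active weights
```

The nonzero entries of `v` and the corresponding candidates form the initial positive quadrature rule passed to the clustering step.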
We were not able to find such approximate initial conditions using the linear program used in~\cite{ryu_extensions_2014}; we show results supporting this in Section \ref{sec:numerical-results}. We solved~\eqref{eq:bpdn-problem} using least angle regression with a LASSO modification~\cite{Tibshirani_JRSSBM_1996} whilst enforcing positivity of the coefficients. This algorithm iteratively adds and removes positive weights $\alpha_k$ until $\epsilon=0$, or until no new weights can be added without violating the positivity constraints. This allows one to drive $\epsilon$ to small values without requiring an \textit{a priori} estimate of $\epsilon$. In our examples, the candidate mesh ${X}_S$ is selected by generating uniformly random samples over the integration domain $D$, regardless of the integration measure. Better sampling strategies for generating the candidate mesh undoubtedly exist, but these strategies will be $D$- and $\Lambda$-dependent, and are not the central focus of this paper. Our limited experimentation here suggested that there was only marginal benefit from exploring this issue. \subsection{Finding a reduced quadrature rule} Once an initial condition has been found by solving~\eqref{eq:bpdn-problem}, we use a simple greedy clustering algorithm to generate an initial condition for a local gradient-based moment-matching optimization. \subsubsection{Clustering} The greedy clustering algorithm finds the point with the smallest weight and combines it with its nearest neighbor. These two points are then replaced by a point whose coordinates are a convex combination of the two clustered points, where the convex weights are proportional to the quadrature weights of the two points. The newly clustered point set has one less point than before. This process is repeated until the desired number of points $M$ is reached.
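A one-dimensional sketch of this greedy merge (our own minimal illustration; the multivariate version in Algorithm~\ref{alg:cluster} is analogous) makes one useful property visible: because the replacement point is the weight-convex combination of the merged pair, each merge preserves the total weight and the weighted mean exactly:

```python
import numpy as np

def greedy_cluster(x, w, M):
    """Greedily merge the smallest-weight point with its nearest neighbor
    until only M points remain (1-d sketch of the clustering step)."""
    x, w = x.astype(float).copy(), w.astype(float).copy()
    while len(w) > M:
        i = np.argmin(w)                          # smallest-weight point
        d = np.abs(x - x[i])
        d[i] = np.inf
        j = np.argmin(d)                          # its nearest neighbor
        wn = w[i] + w[j]
        xn = (w[i] * x[i] + w[j] * x[j]) / wn     # weight-convex combination
        keep = [k for k in range(len(w)) if k not in (i, j)]
        x = np.append(x[keep], xn)
        w = np.append(w[keep], wn)
    return x, w

x0 = np.array([-0.9, -0.5, 0.1, 0.2, 0.8])
w0 = np.array([0.3, 0.4, 0.05, 0.45, 0.8])
xc, wc = greedy_cluster(x0, w0, 3)
```

Zeroth and first moments survive the merges exactly; higher moments are only approximated, which is why the clustered rule still needs the local refinement step.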
At the termination of the clustering algorithm a set of points $\widehat{{x}}_m$ and weights $\widehat{w}_m$, defining an approximate quadrature rule, is returned. Pseudo-code describing this greedy clustering algorithm is given in Algorithm~\ref{alg:cluster} in Appendix~\ref{app:algorithms}. The ability to find an accurate reduced quadrature rule is dependent on the choice of the number of specified clusters $M$. As stated in Section~\ref{sec:optimal-quadrature}, there is a strong heuristic that couples the dimension $d$, the number of matched moments $N$, and the number of points $M$. For general $\mu$ and given ${N}=|\Lambda|$ we set the number of clusters to be \begin{align}\label{eq:dof-heuristic} M = \frac{N}{d+1}. \end{align} As shown in Section~\ref{sec:optimal-quadrature} this heuristic will not always produce a quadrature rule that exactly integrates all specified moments. Moreover, the heuristic may overestimate the actual number of points in a quasi-optimal quadrature rule. It is tempting to set $M$ to the lower bound value $L(\Lambda)$, defined in \eqref{eq:L-definition}. However, a sobering result is that in all our experiments we were never able to numerically find a quadrature rule with fewer points than that specified by \eqref{eq:dof-heuristic}; therefore, we could not find any rules with $M$ points where $L(\Lambda) \leq M < N/(d+1)$. However, we were able to identify situations in which the heuristic underestimated the requisite number of points in a reduced quadrature rule. For example, if one wants to match the moments of a total-degree basis of degree $k$ one must use at least $\binom{\lfloor k/2 \rfloor + d}{d}$ points. This lower bound is typically violated by the heuristic for low degree $k$ and high dimension $d$. For example, for $d=10$ and $k=2$ we have $M=6$ using the heuristic, yet the theoretical lower bound for $M$ from Theorem~\ref{lemma:N-condition} requires $M \geq L(\Lambda)=11$.
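The counts quoted in this example follow from binomial-coefficient arithmetic, which can be checked directly:

```python
from math import comb, ceil

def total_degree_dim(d, k):
    """Dimension of the total-degree-k polynomial space in d variables."""
    return comb(k + d, d)

d, k = 10, 2
N = total_degree_dim(d, k)            # number of moments to match: |Lambda| = 66
heuristic = ceil(N / (d + 1))         # degree-of-freedom heuristic: 6 points
lower = total_degree_dim(d, k // 2)   # half-set size L(Lambda): 11 points
```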
In this case our theoretical analysis sets a lower bound for $M$ that is more informative than the heuristic \eqref{eq:dof-heuristic}. \subsubsection{Local-optimization} The approximate quadrature rule $\widehat{{x}}_k$, $\widehat{w}_k$ generated by the clustering algorithm is used as an initial guess for the following local optimization \begin{subequations} \begin{align}\label{eq:local-optimization} \textrm{minimize } &\sum_{j=1}^{N} \left( \int p_j({x}) \dx{\mu({x})} - \sum_{m=1}^M w_m p_j({x}_{m}) \right)^2 \\\label{eq:local-constraints} \textrm{subject to } & {x}_{m} \in D \textrm{ and } w_m \geq 0. \end{align} \end{subequations} The objective $f$ defined by \eqref{eq:local-optimization} is a polynomial in the nodes and weights, and its gradient is easily computed: \begin{subequations}\label{eq:jacobian} \begin{align} \frac{\partial f}{\partial x_{k}^{(s)}}&= -2\sum_{j=1}^{N}\left[ w_k\frac{\partial p_j({x}_{k})}{\partial x_{k}^{(s)}} \left(q(p_j)-\sum_{m=1}^M w_m p_j({x}_{m})\right)\right]\\ \frac{\partial f}{\partial w_k}&= -2\sum_{j=1}^{N}\left[ p_j({x}_{k}) \left(q(p_j)-\sum_{m=1}^M w_m p_j({x}_{m})\right)\right] \end{align} \end{subequations} for $s = 1, \ldots, d$ and $k = 1, \ldots, M$, where $q(p_j) = \int p_j(x) \dx{\mu}(x)$. We use a gradient-based nonlinear least squares method to solve the local optimization problem. Defining the optimization tolerance $\tau=10^{-10}$, the procedure exits when either $|f_{i}-{f_{i-1}}|<\tau f_i$ or $\lVert g_s\rVert_\infty < \tau$, where $f_i$ and $f_{i-1}$ are the values of the objective at steps $i$ and $i-1$ respectively, and $g_s$ is the value of the gradient scaled to respect the constraints \eqref{eq:local-constraints}. This local optimization procedure in conjunction with the cluster-based initial guess can frequently find a quadrature rule of size $M$ as determined by the degree-of-freedom heuristic~\eqref{eq:dof-heuristic}.
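A univariate instance of \eqref{eq:local-optimization} can be sketched with a bound-constrained nonlinear least-squares solver (our own minimal example using Lebesgue measure on $[-1,1]$ and monomial moments; the implementation used for the numerical results differs in its details). Starting from a rough clustered guess, the optimizer recovers the two-point Gauss-Legendre rule, which is the unique two-point rule matching the first four moments:

```python
import numpy as np
from scipy.optimize import least_squares

mom = np.array([2.0, 0.0, 2.0 / 3.0, 0.0])        # moments of dx on [-1, 1], degrees 0..3

def residuals(theta, M=2):
    """Moment-matching residuals for an M-point rule theta = [nodes, weights]."""
    x, w = theta[:M], theta[M:]
    return np.array([w @ x**j for j in range(len(mom))]) - mom

theta0 = np.array([-0.5, 0.5, 1.0, 1.0])          # rough clustered initial guess
res = least_squares(residuals, theta0,
                    bounds=([-1, -1, 0, 0], [1, 1, np.inf, np.inf]))
x_opt, w_opt = res.x[:2], res.x[2:]
```

The bounds enforce the constraints \eqref{eq:local-constraints}: nodes confined to the domain and nonnegative weights.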
However, in some cases a quadrature rule with $M$ points cannot be found for very high accuracy requirements (in all experiments we say that a quadrature rule is found if the iterations yield $\lvert f_i\rvert<10^{-8}$). In these situations one might be able to find an $M$-point rule using another initial condition. (Recall the initial condition provided by $\ell_1$-minimization is found using a randomized candidate mesh.) However, we found it more effective to simply increment the size of the desired quadrature rule to $M+1$. This can be done repeatedly until a quadrature rule with the desired accuracy is reached. Although one might fear that $M$ must be incremented many times using this procedure, we found that no more than $10$ increments $(M \rightarrow M + 10)$ were ever needed. This is described in more detail in Section~\ref{sec:tp-measures}. Note that both generating the initial condition using~\eqref{eq:bpdn-problem} and solving the local optimization problem~\eqref{eq:local-optimization} involve matching moments. We assume in this paper that moments are available and given; in our examples we compute these moments analytically (or to machine precision with high-order quadrature), unless otherwise specified. \section{Quasi-optimality in multivariate polynomial quadrature}\label{sec:optimal-quadrature} The goal of this section is to mathematically codify relationships between the number of quadrature points $M$ and the dimension $N$ of the polynomial space $P_\Lambda$. In particular, we provide a theoretical lower bound for $M$ for a fixed $\Lambda$. This lower bound can be used to define a notion of optimality in polynomial quadrature rules. We also provide related characterizations of optimal quadrature rules in both one and several dimensions. \subsection{Notation}\label{sec:notation} With $d \geq 1$ fixed, we consider a positive measure $\mu$ on $\R^d$ with support $D = \mathrm{supp}\; \mu$. This support may be unbounded.
The $L^2_\mu(D)$ inner product and norm are defined as \begin{align*} \left\langle f, g \right\rangle_{\mu} &\coloneqq \int_D f(x) g(x) \dx{\mu}(x), & \|f\|_\mu^2 &= \left\langle f, f \right\rangle_\mu, & L^2_\mu(D) = \left\{ f: D \rightarrow \R \; | \; \|f\|^2_{\mu} < \infty \right\} \end{align*} To prevent degeneracy of polynomials with respect to $\mu$ and to ensure finite moments of $\mu$, we assume \begin{align}\label{eq:mu-assumption} 0 < \left\| p \right\|_\mu < \infty, \end{align} for all algebraic polynomials $p(x)$. One can guarantee the lower inequality if, for example, there is any open Euclidean ball in $D$ inside which $\mu$ has a density function. A point $x$ in $\R^d$ has coordinate representation $x = \left(x^{(1)}, \ldots, x^{(d)} \right) \in \R^d$; for a multi-index $\alpha \in \N_0^d$ with coordinates $\alpha = \left(\alpha^{(1)}, \ldots, \alpha^{(d)}\right)$, we have $\alpha ! = \prod_{j=1}^d \alpha^{(j)} !$ and $x^\alpha = \prod_{j=1}^d \left[ x^{(j)}\right]^{\alpha^{(j)}}$. We let $\Lambda \subset \N_0^d$ denote a multi-index set of finite size $N$. If $\alpha, \beta \in \N_0^d$ are any two multi-indices and $k \in \R$, we define $\alpha + \beta$, $k \alpha$, and $\left\lfloor k \alpha \right\rfloor$ componentwise. The partial ordering $\alpha \leq \beta$ is true if all the componentwise conditions are true. We define the following standard properties and operations on multi-index sets: \begin{definition} Let $\Lambda$ and $\Theta$ be two multi-index sets, and let $k \in [0, \infty)$. \begin{enumerate}[labelwidth=3.5cm,itemindent=1em,leftmargin=!] \item[(Minkowski addition)] The sum of two multi-index sets is \begin{align*} \Lambda + \Theta &= \left\{ \alpha + \beta \; |\; \alpha \in \Lambda,\; \beta \in \Theta \right\} \end{align*} \item[(Scalar multiplication)] The expression $k \Lambda$ is defined as \begin{align*} k \Lambda &= \left\{ k \alpha\; | \; \alpha \in \Lambda \right\}.
\end{align*} Note that this need not be a set of multi-indices. \item[(Downward closed)] $\Lambda$ is a downward closed set if $\alpha \in \Lambda$ implies that $\beta \in \Lambda$ for all $\beta \leq \alpha$. \item[(Downward closure)] For any finite $\Lambda$, $\widebar{\Lambda}$ is the smallest downward closed set containing $\Lambda$, \begin{align*} \widebar{\Lambda} = \left\{ \alpha \in \N_0^d \; | \; \alpha \leq \beta \textrm{ for some } \beta \in \Lambda \right\}. \end{align*} \item[(Convexity)] $\Lambda$ is convex if for any $p \in [0, 1]$ and any $\alpha, \beta \in \Lambda$, then $\left\lfloor p \alpha + (1-p) \beta \right\rfloor \in \Lambda$. \end{enumerate} \end{definition} With our notation, scalar multiplication is not consistent with Minkowski addition. In particular we have $2 \Theta \subseteq \Theta + \Theta$ in general. If $\Lambda$ is downward closed, then $\widebar{\Lambda} = \Lambda$. The polynomial space $P_\Lambda$ is defined by a given multi-index $\Lambda$: \begin{align*} P_\Lambda &= \mathrm{span} \left\{ x^\alpha \; | \; \alpha \in \Lambda \right\}, & |\Lambda| &= N, \end{align*} and $P_\Lambda$ has dimension $N$ in $L^2_\mu(D)$ under the assumption \eqref{eq:mu-assumption}. Note that we make no particular assumptions on the structure of $\Lambda$. I.e., we do not assume $\Lambda$ is downward closed, but much of our theory and all our numerical examples use downward closed index sets. On $\N_0^d$, we will make use of the $\ell^p$ norm $\left\|\cdot\right\|_p$ for $0 \leq p \leq \infty$, and the associated ball $B_{p}(r)$ of radius $r \geq 0$ to define index sets. These sets are defined by \begin{align*} B_{p}(r) = \left\{ \alpha \in \N_0^d \; | \; \left\|\alpha \right\|_p \leq r \right\}. 
\end{align*} The $\ell^p$ norms are defined for $p=0$, $0 < p < \infty$, and $p = \infty$ by, respectively, \begin{align*} \left\| \alpha \right\|_0 &= \sum_{j=1}^d \mathbbm{1}_{\alpha_j \neq 0}, & \left\| \alpha \right\|_p^p &= \sum_{j=1}^d \alpha_j^p, & \left\| \alpha \right\|_\infty &= \max_{1 \leq j \leq d} \alpha_j. \end{align*} The index sets $B_{p}(r)$ are all downward closed, and are convex if $p \geq 1$. The set $B_0(r)$ equals $\N_0^d$ when $r \geq d$. \subsection{Quasi-optimal quadrature}\label{sec:quasi-optimal quadrature} With $M$ fixed, the theoretical and computational tractability of computing a solution to \eqref{eq:quadrature-condition} depends on $\Lambda$, $D$, and $\mu$. In particular, it is unreasonable to expect that $\Lambda$ can be arbitrarily large; if this were true then $\mu$ could be approximated to arbitrary accuracy by a sum of $M$ Dirac delta distributions, which would allow us to violate the lower inequality in \eqref{eq:mu-assumption}. There is a strong heuristic that motivates the possible size of $\Lambda$: The set $\left\{ x_1, \ldots, x_M \right\}$ represents $M d$ degrees of freedom, and varying $w_j$ ($j=1, \ldots, M$) represents an additional $M$ degrees of freedom. For an $N$-dimensional space $P_\Lambda$, \eqref{eq:quadrature-condition} can be ensured with $N$ constraints. Thus we expect for general $(\mu,D)$ that $\Lambda$ (and thus $N$) must be small enough to satisfy \begin{align}\label{eq:counting-heuristic} |\Lambda| = N \leq (d + 1) M. \end{align} We will show that this heuristic does not always produce a faithful bound on sizes of quadrature rules; we provide instead a strict lower bound on the number of points $M$ in a quadrature rule for a given $\Lambda$. To proceed we require the notion of `half-sets'. \begin{definition} Let $\Lambda \in \N_0^d$ be a finite, nontrivial, downward-closed set.
A multi-index set $\Theta$ is \begin{enumerate} \item a half-set for $\Lambda$ if $\Theta + \Theta \subseteq \Lambda$ \item a maximal half-set for $\Lambda$ if it is a half-set of maximal size. I.e., if $|\Theta| = L$, with \begin{align}\label{eq:L-definition} L = L(\Lambda) &= \max \left\{ |\Theta|\; | \; \Theta \textrm{ a multi-index set satisfying } \Theta + \Theta \subseteq \Lambda \right\} \end{align} \end{enumerate} We call $L$ the \textit{maximal half-set size} of $\Lambda$. \end{definition} Recall that $2 \Theta \subseteq \Theta + \Theta$ so that the terminology ``half" should not be conflated with the operation of halving each index in an index set. If $\Lambda$ is not downward closed, it may not have any half-sets. However, all nontrivial downward-closed sets have at least one half-set (the zero set $\left\{ 0\right\}$ is one such half-set). Thus maximal half-sets always exist in this case, but they are not necessarily unique. An example in $d=2$ illustrates this non-uniqueness: \begin{gather*} \Lambda = B_0(1) \cap B_1(2) = \left\{ (0,0),\; (0,1),\; (0,2),\; (1,0),\; (2,0) \right\}, \\ \Theta_1 = \left\{ (0,0),\; (1,0) \right\}, \hskip 10pt \Theta_2 = \left\{ (0,0),\; (0,1) \right\}. \end{gather*} We have $L(\Lambda) = 2$, and both $\Theta_1$ and $\Theta_2$ are maximal half-sets for $\Lambda$. If $\Lambda$ is both downward-closed and convex, then its maximal half-set is unique and easily computed. \begin{theorem} Let $\Lambda$ be convex and downward-closed. Then its maximal half-set $\Theta$ is unique, given by $\Theta = \left\lfloor \frac{1}{2} \Lambda \right\rfloor$. \end{theorem} \begin{proof} Let $\Theta$ be any half-set for $\Lambda$. Then for any $\theta \in \Theta$, we have $2 \theta \in \Lambda$, so that $\theta \in \left\lfloor \frac{1}{2} \Lambda \right\rfloor$. Thus $\Theta \subseteq \left\lfloor \frac{1}{2} \Lambda \right\rfloor$, showing that any half-set must be contained in $\left\lfloor \frac{1}{2} \Lambda \right\rfloor$.
Now let $\theta_1, \theta_2 \in \left\lfloor \frac{1}{2} \Lambda \right\rfloor$. Then $2 \theta_1, 2\theta_2 \in \widebar{\Lambda} = \Lambda$. Thus, \begin{align*} \theta_1 + \theta_2 &= \frac{1}{2} (2 \theta_1) + \frac{1}{2} (2 \theta_2) \in \Lambda, \end{align*} where the set inclusion holds by convexity of $\Lambda$. Thus, $\left\lfloor \frac{1}{2} \Lambda \right\rfloor$ is itself a half-set; since by the previous observation it dominates any half-set, it must be the unique largest (maximal) half-set. \end{proof} We can now state one of the main results of this section: The number $L(\Lambda)$ in \eqref{eq:L-definition} is a lower bound on the size of any quadrature rule satisfying \eqref{eq:quadrature-condition}. \begin{theorem}\label{lemma:N-condition} Let $\Lambda$ be a finite downward-closed set, and suppose that an $N$-point quadrature rule $\left\{x_j, w_j \right\}_{j=1}^N$ exists satisfying \eqref{eq:quadrature-condition}. Then $N \geq L(\Lambda)$, with $L$ the maximal half-set size defined in \eqref{eq:L-definition}. \end{theorem} \begin{proof} Let $\Theta$ be any set satisfying $\Theta + \Theta \subseteq \Lambda$ and $|\Theta| = L$, with $L$ defined in \eqref{eq:L-definition}. We choose any size-$L$ $\mu$-orthonormal basis for $P_\Theta$: \begin{align*} P_\Theta &= \mathrm{span} \left\{ q_j\right\}_{j=1}^L, & \int q_j(x) q_k(x) \dx{\mu(x)} &= \delta_{k,j} \end{align*} Note that $q_j$ and $q_k$ have monomial expansions \begin{align*} q_j(x) &= \sum_{\alpha \in \Theta} c_\alpha x^\alpha, & q_k(x) &= \sum_{\alpha \in \Theta} d_\alpha x^\alpha, \end{align*} for fixed $j,k=1, \ldots, L$ and some constants $c_\alpha$ and $d_\alpha$. This implies \begin{align*} q_j(x) q_k(x) = \sum_{\alpha, \beta \in \Theta} c_\alpha d_\beta x^{\alpha + \beta} = \sum_{\alpha \in (\Theta + \Theta)} f_\alpha x^\alpha.
\end{align*} Since $\Theta$ is a half-set for $\Lambda$, then $q_j q_k \in P_\Lambda$ and is therefore exactly integrated by the quadrature rule \eqref{eq:quadrature-condition}. Then consider the $L \times L$ matrix $\bs{G}$ defined as \begin{align*} \bs{G} &= \bs{V}^T \bs{W} \bs{V}, & (G)_{j,k} &= \sum_{n=1}^N w_n q_j(x_n) q_k(x_n) \\ (V)_{j,k} &= q_k(x_j), & (W)_{j,k} &= w_j \delta_{j,k}. \end{align*} The matrices $\bs{V}$ and $\bs{W}$ are $N \times L$ and $N \times N$, respectively. Since the quadrature rule exactly integrates $q_j q_k$, then $\bs{G} = \bs{I}$, the $L \times L$ identity matrix. Thus, the matrix product $\bs{V}^T \bs{W} \bs{V}$ has rank $L$. If $N < L$ then, e.g., $\mathrm{rank}(\bs{W}) < L$ and so it is not possible that $\mathrm{rank} \left(\bs{V}^T \bs{W} \bs{V}\right) = L$. \end{proof} This result shows that if $N < L(\Lambda)$, then an $N$-point quadrature rule satisfying \eqref{eq:quadrature-condition} cannot exist. This is nontrivial information. As an example, let $d=2$, and consider \begin{align*} \Lambda = B_{\infty}(2) = \left\{ (0,0), (0,1), (0,2), (1,0), (1,1), (1,2), (2,0), (2,1), (2,2) \right\}, \end{align*} corresponding to the tensor-product space of degree 2. Since $|\Lambda| = 9$, the heuristic \eqref{eq:counting-heuristic} suggests that it is possible to find a rule with only 3 points. However, let $\Theta = \left\{ (0,0), (0,1), (1,0), (1,1) \right\}$. Then $\Theta + \Theta = \Lambda$ and $|\Theta| = 4$. Thus, no 3-point rule that is accurate on $P_\Lambda$ can exist. Another observation from Theorem \ref{lemma:N-condition} is that we may justifiably call an $N$-point quadrature rule optimal if $N = L$, in the sense that one cannot achieve the same accuracy with a smaller number of nodes. \begin{definition} Let $\Lambda$ be a downward-closed multi-index set.
We call an $M$-point quadrature rule \textit{quasi-optimal} for $\Lambda$ if it satisfies \eqref{eq:quadrature-condition} with $M=L(\Lambda)$, with $L$ defined in \eqref{eq:L-definition}. \end{definition} We call these rules \underline{quasi}-optimal because, given a quasi-optimal rule for $\Lambda$, it may be possible to generate a quadrature rule of equal size that is accurate on an index set that is strictly larger than $\Lambda$ (see Section \ref{sec:poly-spaces}). Quasi-optimal quadrature rules are not necessarily unique and sometimes do not exist. However, the weights for quasi-optimal quadrature rules have a precise behavior. Under the assumption \eqref{eq:mu-assumption}, we can find an orthonormal basis $q_j$ for $P_\Lambda$: \begin{align*} \left\langle q_j, q_k \right\rangle_\mu &= \delta_{j,k}, & P_\Lambda = \mathrm{span} \left\{ q_j \right\}_{j=1}^N \end{align*} Given this orthonormal basis, define \begin{align*} \lambda_\Lambda(x) &= \frac{1}{\sum_{j=1}^N q_j^2(x)}. \end{align*} The quantity $\lambda_\Lambda$ depends on $D$, $\mu$, and $P_\Lambda$, but not on the particular basis $q_j$ we have chosen: Through a unitary transform we may map $\left\{ q_j \right\}_{j=1}^N$ into any other orthonormal basis for $P_\Lambda$, but this transformation leaves the quantity above unchanged. The weights for a quasi-optimal quadrature rule on $\Lambda$ are evaluations of $\lambda_\Theta$, where $\Theta$ is a maximal half-set for $\Lambda$. \begin{theorem}\label{lemma:w-condition} Let $\Lambda$ be a finite downward-closed set, and let an $M$-point quadrature rule $\left\{x_m, w_m \right\}_{m=1}^M$ be quasi-optimal for $\Lambda$. Then \begin{align}\label{eq:w-condition} w_j &= \lambda_\Theta(x_j), & j&=1, \ldots, M, \end{align} where $\Theta$ is a(ny) maximal half-set for $\Lambda$.
\end{theorem} \begin{proof} Let $\ell_j(x)$, $j=1, \ldots, N$, be the cardinal Lagrange interpolants from the space $P_{\Theta}$ on the nodes $\left\{x_j\right\}_{j=1}^N$.\footnote{The interpolation problem on the nodes $x_j$ for any basis of $P_\Theta$ is well-posed: the Vandermonde-like matrix $\bs{V}$ in the proof of Theorem \ref{lemma:N-condition} must have full rank and is square since $N=L$, and is thus invertible. Thus, the $\ell_j$ are well-defined.} These cardinal interpolants satisfy $\ell_j(x_k) = \delta_{j,k}$. The weights $w_j$ must be positive since \begin{align}\label{eq:w-positive} w_j = \sum_{k=1}^N w_k \ell_j^2(x_k) = \int \ell_j^2(x) \dx{\mu(x)} > 0, \end{align} where the second equality uses the fact that the quadrature rule is exact on $P_\Lambda \ni \ell_j^2$. By \eqref{eq:w-positive}, then $\sqrt{\bs{W}}$ is well-defined. With $\bs{V}$ and $\bs{W}$ defined in the proof of Theorem \ref{lemma:N-condition}, then $\bs{V}^T \bs{W} \bs{V} = \bs{I}$, and both matrices are square ($N = L$). Thus, $\sqrt{\bs{W}} \bs{V}$ is an orthogonal matrix, so that \begin{align*} \sqrt{\bs{W}} \bs{V} \bs{V}^T \sqrt{\bs{W}^T} &= \bs{I} \\ \bs{V} \bs{V}^T &= \bs{W}^{-1} \end{align*} Taking the diagonal components of the left- and right-hand sides shows that $w_j = \left( \sum_{k=1}^N (V)^2_{j,k} \right)^{-1}$. \end{proof} \section{Theoretical examples and counterexamples of quasi-optimal quadrature}\label{sec:conseequences} To provide further insight into our definition of quasi-optimality, we investigate the consequences of the above theoretical results through some examples. \subsection{Univariate rules}\label{sec:univariate} The example in this section shows (i) that optimal quadrature rules in one dimension are Gauss quadrature rules, and (ii) that among the many quasi-optimal rules, some may be capable of accurately integrating more polynomials than others. Let $d = 1$ with $\Lambda = \left\{ 0, \ldots, N-1\right\}$.
The maximal half-set for $\Lambda$ is $\Theta = \left\{ 0, \ldots, \left\lfloor\frac{N-1}{2} \right\rfloor\right\}$, and thus \begin{align*} L(\Lambda) = \left\lfloor \frac{N-1}{2} \right\rfloor + 1. \end{align*} We consider two cases, that of even $N$ and that of odd $N$. Suppose $N$ is even. Then a quasi-optimal quadrature rule has $M = |\Theta| = \frac{N}{2}$ abscissae, and exactly integrates $N = 2M$ (linearly independent) polynomials. I.e., the $M$-point rule exactly integrates polynomials up to degree $2 M - 1$. It is well known that this quadrature rule is unique: it is the $\mu$-Gauss-Christoffel quadrature rule \cite{szego_orthogonal_1975}. With $\left\{q_j\right\}_{j \geq 0}$ a family of $L^2_\mu(D)$-orthonormal polynomials with $\deg q_j = j$, the abscissae $x_1, \ldots, x_M$ are the zeros of $q_M$, and the weights are given by \eqref{eq:w-condition}. This quadrature rule can be efficiently computed from the eigenvalues and eigenvectors of the symmetric tridiagonal Jacobi matrix associated with $\mu$ \cite{golub_calculation_1969}. Thus, this quadrature rule can be computed without resorting to optimization routines. Now suppose $N$ is odd. Then a quasi-optimal quadrature rule has $M = L(\Lambda) = \frac{N+1}{2}$ abscissae, and exactly integrates $N = 2 M - 1$ polynomials. I.e., the $M$-point rule exactly integrates polynomials up to degree $2 M - 2$. Clearly a Gauss-Christoffel rule is a quasi-optimal rule in this case. However, by our definition there are (uncountably) \textit{infinitely} many quasi-optimal rules in this case \cite{szego_orthogonal_1975}. In particular, for arbitrary $c \in \R$, the zero set of the polynomial \begin{align}\label{eq:gauss-radau} q_M(x) - c q_{M-1}(x) \end{align} corresponds to the abscissae of a quasi-optimal quadrature rule, with the weights given by \eqref{eq:w-condition}.
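For the uniform probability measure on $[-1,1]$ this one-parameter family is easy to check numerically. The sketch below (a verification in the Legendre setting, with $M=4$ and an arbitrary value of $c$) builds the abscissae as the zeros of $q_M - c\,q_{M-1}$, assigns the weights via \eqref{eq:w-condition}, and confirms exactness up to degree $2M-2$; taking $c=0$ recovers the Gauss-Christoffel rule.

```python
import numpy as np
from numpy.polynomial import legendre as leg

M, c = 4, 0.7  # c = 0 recovers the Gauss-Christoffel rule

def q_coeffs(k):
    # Legendre-series coefficients of q_k = sqrt(2k+1) P_k, orthonormal
    # w.r.t. the uniform probability measure dx/2 on [-1,1]
    v = np.zeros(k + 1)
    v[k] = np.sqrt(2.0 * k + 1.0)
    return v

# abscissae: zeros of q_M(x) - c q_{M-1}(x)
series = q_coeffs(M) - c * np.append(q_coeffs(M - 1), 0.0)
nodes = leg.legroots(series)

# weights from eq. (w-condition): w_j = 1 / sum_{k < M} q_k(x_j)^2
Q = np.array([leg.legval(nodes, q_coeffs(k)) for k in range(M)])
weights = 1.0 / np.sum(Q**2, axis=0)

# the M-point rule is exact for all polynomials of degree <= 2M - 2
for p in range(2 * M - 1):
    exact = 1.0 / (p + 1) if p % 2 == 0 else 0.0
    assert np.isclose(weights @ nodes**p, exact)
```

Any real $c$ passes the same check, illustrating the uncountable family of quasi-optimal rules in the odd-$N$ case.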
Like the Gauss-Christoffel case, computation of these quadrature rules can be accomplished via eigendecompositions of symmetric tridiagonal matrices \cite{narayan2017}. In classical scientific computing scenarios, sets of $N$-point rules with polynomial accuracy of degree $2 N -2$ have been called Gauss-Radau rules, and are traditionally associated with situations when $\supp \mu$ is an interval and one of the quadrature abscissae is collocated at one of the endpoints of this interval \cite{golub_modified_1973}. \subsection{Quasi-optimal multivariate quadrature}\label{sec:additive-set-example} This section furnishes an example where a quasi-optimal multivariate quadrature rule can be explicitly constructed. Consider a tensorial domain and probability measure, i.e., suppose \begin{align*} \mu &= \times_{j=1}^d \mu_j, & D &= \otimes_{j=1}^d \supp \mu_j, \end{align*} where $\mu_j$ are univariate probability measures. We let $\Lambda = B_{0}(1) \cap B_{1}(n)$ for any $n \geq 2$. With $e_j$ the cardinal $j$'th unit vector in $\N_0^d$, $\Lambda$ is the set of indices of the form $q e_j$ for $0 \leq q \leq n$; the size of $\Lambda$ is $N = d n + 1$. Note that this $\Lambda$ would arise in situations where one seeks to approximate an unknown multivariate function as a sum of univariate functions; this is rarely a reasonable assumption in practice. However, in this case we can explicitly construct optimal quadrature rules. The heuristic \eqref{eq:counting-heuristic} suggests that we can construct a rule satisfying \eqref{eq:quadrature-condition} if we use \begin{align*} M \geq n \left(\frac{d}{d+1}\right) + \frac{1}{d+1} \end{align*} nodes, which is approximately $n$ nodes. However, we can achieve this with fewer nodes, only $M = \left\lfloor n/2 \right\rfloor + 1$, independent of $d$. Here the heuristic \eqref{eq:counting-heuristic} is too pessimistic when $d \geq 2$.
Associated to each $\mu_j$, we need the corresponding system of orthonormal polynomials $q$ and the univariate $\lambda$ function. For each $j=1, \ldots, d$, let $q_{n,j}(x)$, $n=0, 1, \ldots,$ denote an $L^2_{\mu_j}$-orthonormal polynomial family with $\deg q_{n,j} = n$. Define \begin{align*} \lambda_{n,j}\left(\cdot \right) = \frac{1}{\sum_{i=0}^{n} q_{i,j}^2(\cdot)}. \end{align*} Note that $\lambda_{0,j} = 1/q_{0,j}^2 \equiv 1$ for all $j$ since each $\mu_j$ is a probability measure. Consider the index sets $\Theta_j = \left\{ 0, e_j, 2e_j, \ldots, \left\lfloor n/2 \right\rfloor e_j \right\}$, for $j = 1, \ldots, d$, where $0$ is the origin in $\N_0^d$. Each $\Theta_j$ is a maximal half-set for $\Lambda$, and $L$ in \eqref{eq:L-definition} is given by $L = \left\lfloor n/2 \right\rfloor+1$. To construct a quasi-optimal quadrature rule with $M = L$ nodes, we note that the weights $w_j$, $j=1, \ldots, M$, must be given by \eqref{eq:w-condition}, which holds for \textit{any} maximal half index set $\Theta$. I.e., it must simultaneously hold for \textit{all} $\Theta_j$. Thus, \begin{align}\label{eq:christoffel-equivalence} w_m = \lambda_{{\Theta_j}}\left(x_m\right) = \left[ \sum_{i=0}^{\left\lfloor n/2 \right\rfloor} q_{i,j}^2\left(x_m^{(j)}\right) \prod_{\substack{s = 1, \ldots, d\\s \neq j}} q^2_{0,s}\left(x_m^{(s)}\right) \right]^{-1} = \lambda_{\left\lfloor n/2 \right\rfloor,j} \left( x_m^{(j)}\right), \end{align} for $j = 1, \ldots, d$. This implies in particular that the coordinates $x_m^{(j)}$ of node $m$ must satisfy \begin{align*} \lambda_{\left\lfloor n/2 \right\rfloor,1} \left( x_m^{(1)}\right) = \lambda_{\left\lfloor n/2 \right\rfloor,2} \left( x_m^{(2)}\right) = \cdots = \lambda_{\left\lfloor n/2 \right\rfloor,d} \left( x_m^{(d)}\right). \end{align*} We can satisfy this condition in certain cases.
Suppose $\mu_1 = \mu_2 = \cdots = \mu_d$; then $\lambda_{n,1} = \lambda_{n,2} = \cdots = \lambda_{n,d}$, and so we can satisfy \eqref{eq:christoffel-equivalence} by setting $x^{(1)}_m = x^{(2)}_m = \cdots = x^{(d)}_m$ for all $m = 1, \ldots, M$. Thus the nodes for a quasi-optimal quadrature rule could lie in $\R^d$ along the line defined by \begin{align*} x^{(1)} = x^{(2)} = \cdots = x^{(d)}. \end{align*} In order to satisfy the integration condition \eqref{eq:quadrature-condition} we need to distribute the nodes in an appropriate way. Having effectively reduced the problem to one dimension, this is easily done: we choose a Gauss-type quadrature rule as described in the previous section. Let the $j$th coordinates of the quadrature rule be the $M$-point Gauss quadrature nodes for $\mu_j$, i.e., \begin{align*} \left\{ x_1^{(j)}, x_2^{(j)}, \ldots, x_M^{(j)} \right\} = q_{M,j}^{-1}(0). \end{align*} This uniquely defines $x_1, \ldots, x_M$, and $w_m$ is then uniquely defined since we have satisfied \eqref{eq:christoffel-equivalence}. Thus a ``diagonal" Gauss quadrature rule with $M = \left\lfloor n/2 \right\rfloor + 1$ nodes and equal coordinate values for each abscissa is a quasi-optimal rule in this case. \subsection{Quasi-optimal multivariate quadrature: non-uniqueness} The previous example, with an additional assumption, allows us to construct $2^{d-1}$ distinct quasi-optimal quadrature rules. Again take $\Lambda = B_{0}(1) \cap B_{1}(n)$ for $n\geq 2$, and let $\mu = \times_{j=1}^d \mu_j$ with identical univariate measures $\mu_j = \mu_k$ for all $j, k$. To this add the assumption that $\mu_j$ is a symmetric measure; i.e., for any set $A \subset \R$, $\mu_j(A) = \mu_j(-A)$. In this case the univariate orthogonal polynomials $q_{k,j}$ are even (odd) functions if $k$ is even (odd). Thus, the zero set $q_{k,j}^{-1}(0)$ is symmetric around 0, and $\lambda_{\left\lfloor n/2 \right\rfloor,j}$ is always an even function.
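Both the ``diagonal" construction and its sign-flipped copies are straightforward to verify numerically. The following sketch assumes identical uniform marginals on $[-1,1]$ and checks that placing $M = \lfloor n/2 \rfloor + 1$ Gauss nodes along the diagonal, with or without flipping the sign of individual coordinates, exactly integrates every monomial $x_j^q$, $0 \leq q \leq n$, indexed by $\Lambda$ (the sign pattern chosen is arbitrary):

```python
import numpy as np

d, n = 3, 6
M = n // 2 + 1                        # quasi-optimal size, independent of d
g, w = np.polynomial.legendre.leggauss(M)
w = w / 2.0                           # uniform *probability* measure on [-1,1]

diag_nodes = np.tile(g[:, None], (1, d))   # equal coordinates per node
sigma = np.array([1.0, -1.0, 1.0])         # an arbitrary sign pattern
flip_nodes = diag_nodes * sigma            # sign-flipped copy

for nodes in (diag_nodes, flip_nodes):
    for j in range(d):
        for q in range(n + 1):        # monomials x_j^q with q*e_j in Lambda
            exact = 1.0 / (q + 1) if q % 2 == 0 else 0.0
            assert np.isclose(w @ nodes[:, j] ** q, exact)
```

Only $M = 4$ nodes are needed here, whereas the counting heuristic suggests roughly $n = 6$.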
With $x_m$ the quasi-optimal set of nodes defined in the previous section, let \begin{align*} y_m^{(j)} &= \sigma_j x_m^{(j)}, & \sigma_j \in \left\{ -1, +1 \right\}, \end{align*} for a fixed but arbitrary sign pattern $\sigma_1, \ldots, \sigma_d$. Using the above properties, one can show that the nodes $\left\{y_1, \ldots, y_M\right\}$ and the weights $w_1, \ldots, w_M$ define a quadrature rule satisfying \eqref{eq:quadrature-condition}, and of course this rule has the same number of nodes as the quasi-optimal rule from the previous section. By varying the $\sigma_j$, we can create $2^{d-1}$ unique distributions of nodes, thus showing that at least this many quasi-optimal rules exist. \subsection{Quasi-optimal multivariate quadrature: non-existence} We again use the setup of Section \ref{sec:additive-set-example}, but this time to illustrate that it is possible for a quasi-optimal rule not to exist. We consider $d=2$, take $\Lambda = B_{0}(1) \cap B_{1}(3)$, and let $\mu = \mu_1 \times \mu_2$ for two univariate probability measures $\mu_1$ and $\mu_2$. However, this time we let these measures be different: \begin{align*} \dx{\mu_1}(t) &= \frac{1}{2} \dx{t}, & \mathrm{supp}\;\mu_1 &= [-1,1], \\ \dx{\mu_2}(t) &= \frac{3}{4} (1-t^2) \dx{t}, & \mathrm{supp}\;\mu_2 &= [-1,1]. \end{align*} With our choice of $\Lambda$, we have $L(\Lambda) = 2$ with two maximal half-sets: \begin{align*} \Theta_1 &= \left\{ (0,0), (1,0) \right\}, & \Theta_2 &= \left\{ (0,0), (0,1) \right\}. \end{align*} In this simple case the explicit $\mu_1$- and $\mu_2$-orthonormal families take the expressions \begin{align*} q_{0,1}(t) &= 1, & q_{1,1}(t) &= \sqrt{3} t, \\ q_{0,2}(t) &= 1, & q_{1,2}(t) &= \frac{\sqrt{5}}{2} t, \end{align*} so that associated to $\Theta_1$ and $\Theta_2$, respectively, we have the functions \begin{align*} \lambda_{1,1}(t) &= \frac{1}{1 + 3 t^2}, & \lambda_{1,2}(t) &= \frac{4}{4 + 5 t^2}.
\end{align*} Since by \eqref{eq:christoffel-equivalence} we require $w_m = \lambda_{1,1}\left(x_m^{(1)}\right) = \lambda_{1,2}\left(x_m^{(2)}\right)$, this implies that \begin{subequations} \begin{align}\label{eq:noexist-a} 3 \left( x_m^{(1)}\right)^2 &= \frac{5}{4} \left( x_m^{(2)} \right)^2, & m = 1, 2. \end{align} However, the condition \eqref{eq:quadrature-condition} also implies for our choice of $\Lambda$ that \begin{align*} \int_{-1}^1 p(t) \frac{1}{2} \dx{t} &= \sum_{m=1}^2 w_m p\left(x^{(1)}_m\right), & p &\in \mathrm{span} \left\{ 1, t, t^2, t^3 \right\}, \\ \int_{-1}^1 p(t) \frac{3}{4}(1-t^2) \dx{t} &= \sum_{m=1}^2 w_m p\left(x^{(2)}_m\right), & p &\in \mathrm{span} \left\{ 1, t, t^2, t^3 \right\}. \end{align*} The conditions above imply that $\left\{x^{(1)}_m\right\}_{m=1}^2$ and $\left\{x^{(2)}_m\right\}_{m=1}^2$ must be the nodes of the 2-point $\mu_1$- and $\mu_2$-Gauss quadrature rules, respectively, which are both unique. Thus, $\left\{x^{(1)}_m\right\}_{m=1}^2 = \left\{ \pm \sqrt{3}/3 \right\}$ and $\left\{ x^{(2)}_m\right\}_{m=1}^2 = \left\{ \pm 1/\sqrt{5} \right\}$, so that we have the equality \begin{align}\label{eq:noexist-b} 3 \left( x_m^{(1)}\right)^2 &= 5 \left( x_m^{(2)} \right)^2, & m = 1, 2. \end{align} \end{subequations} We arrive at a contradiction: simultaneous satisfaction of \eqref{eq:noexist-a} and \eqref{eq:noexist-b} implies that all coordinates of the abscissae are 0, which cannot satisfy \eqref{eq:quadrature-condition}. Thus, no quasi-optimal quadrature rule for this example exists. \section{Numerical results} \label{sec:numerical-results} In this section we will explore the numerical properties of the multivariate quadrature algorithm we have proposed in Section \ref{sec:algorithm}. First we will numerically compare the performance of our algorithm with other popular quadrature strategies for tensor-product measures.
We will then demonstrate the flexibility of our approach for computing quadrature rules for non-tensor-product measures, for which many existing approaches are not applicable. Finally we will investigate the utility of our reduced quadrature rules for high-dimensional integration by leveraging selected moment matching and dimension reduction. Our tests will compare the method in this paper (Section \ref{sec:algorithm}), which we call \textsc{Reduced quadrature}, to other standard quadrature methods. We summarize these methods below. \begin{itemize} \item \textsc{Monte Carlo} -- The integration rule \begin{align*} \int_D f(x) \dx{\mu}(x) \approx \frac{1}{M} \sum_{m=1}^M f(X_m), \end{align*} where $X_m$ are independent and identically distributed samples from the probability measure $\mu$. \item \textsc{Sparse grid} -- The integration rule for $\mu$ the uniform measure over $[-1,1]^d$ given by a multivariate sparse-grid rule generated from a univariate Clenshaw-Curtis quadrature rule~\cite{Gerstner_G_NA_1998}. \item \textsc{Cubature} -- Stroud cubature rules of degree 2, 3, and 5~\cite{Stroud_book_71}. These rules again integrate on $[-1,1]^d$ with respect to the uniform measure. \item \textsc{Sobol} -- A quasi-Monte Carlo Sobol sequence~\cite{Sobol_S_IJMPC_1995} for approximating integrals on $[-1,1]^d$ using the uniform measure. \item \textsc{Reduced quadrature} -- The method in this paper, described in Section \ref{sec:algorithm}. \item \textsc{$\ell_1$ quadrature} -- The ``initial guess" for the \textsc{Reduced quadrature} algorithm, obtained from the solution of the LP algorithm summarized in Section \ref{sec:algorithm-initial}. \end{itemize} The \textsc{Sparse grid}, \textsc{Sobol}, and \textsc{Cubature} methods are usually applicable only for integrating over $[-1,1]^d$ with the uniform measure.
When $\mu$ has a density $w(x)$ with support $D \subseteq [-1,1]^d$, we will use these methods to evaluate $\int_D f(x) \dx{\mu}(x)$ by integrating $f(x) w(x)$ with respect to the uniform measure on $[-1,1]^d$, where we assign $w = 0$ on $[-1,1]^d \backslash D$. \subsection{Tensor product measures}\label{sec:tp-measures} Our reduced quadrature approach can generate quadrature rules for non-tensor-product measures, but in this section we investigate the performance of our algorithm in the more standard setting of tensor-product measures. We begin by discussing the computational cost of computing our quadrature rules. Specifically, we compute quadrature rules for the uniform probability measure on $D=[-1,1]^d$ in up to 10 dimensions for all total-degree spaces with degree at most 20 or with subspace dimension at most 3003. In all cases we were able to generate an efficient quadrature rule using our approach, for which the number of quadrature points was equal to or only slightly larger ($<10$ points) than the number of points suggested by the heuristic~\eqref{eq:dof-heuristic}. The number of points in each quadrature rule, the number of moments matched $\lvert\Lambda\rvert$, the size of a quasi-optimal rule $L(\Lambda)$, the number of points $\ceil{\lvert\Lambda\rvert/(d+1)}$ suggested by the heuristic~\eqref{eq:dof-heuristic}, and the number of iterations taken by the non-linear least squares algorithm are presented in Table~\ref{tab:uniform-quad-rules-max-degree}. The final two columns of Table \ref{tab:uniform-quad-rules-max-degree}, the number of points in the reduced quadrature rules and the number of iterations used by the local optimization, are given as ranges because the final result is sensitive to the random candidate sets used to generate the initial condition.
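The deterministic columns of Table~\ref{tab:uniform-quad-rules-max-degree} can be reproduced with a few lines: for the total-degree-$k$ space in $d$ dimensions, $\lvert\Lambda\rvert = \binom{d+k}{d}$, a maximal half-set is the total-degree-$\lfloor k/2 \rfloor$ space, and the heuristic suggests $\lceil \lvert\Lambda\rvert/(d+1)\rceil$ points:

```python
from math import comb, ceil

def td_dim(d, k):
    # dimension of the total-degree-k polynomial space in d variables
    return comb(d + k, d)

rows = []
for d, k in [(2, 20), (3, 20), (4, 13), (5, 10), (10, 5)]:
    N = td_dim(d, k)               # |Lambda|: number of moments matched
    L = td_dim(d, k // 2)          # L(Lambda): size of the half-degree space
    h = ceil(N / (d + 1))          # counting-heuristic point estimate
    rows.append((d, k, N, L, h))

assert rows[0] == (2, 20, 231, 66, 77)
assert rows[-1] == (10, 5, 3003, 66, 273)
```

These values match the corresponding table columns for every $(d, k)$ pair tested.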
The ranges presented in the table represent the minimum and maximum number of points and iterations used to generate the largest quadrature rules for each dimension over 10 different initial candidate meshes. We observe only very minor variation in the number of points, but more sensitivity in the number of iterations. The number of iterations needed by the local optimization to compute the quadrature rules was always reasonable: at most a small multiple of the number of moments being matched. However, the largest rules did take several hours to compute on a single-core machine due to the cost of running the optimizer (we used scipy.optimize.least\_squares) and forming the Jacobian matrix of the objective. Profiling the simulation revealed that the long run times were due to the cost of evaluating the Jacobian \eqref{eq:jacobian} of the objective function, and the singular value decomposition repeatedly called by the optimizer. This run time could probably be reduced by using a Krylov-based nonlinear least squares algorithm, and by computing the Jacobian in parallel. We expect that such improvements would allow one to construct such rules in higher dimensions in a reasonable amount of time; however, even our unoptimized code was able to compute such rules on a single core in a moderate number of dimensions. \begin{table}[ht] \begin{center} \begin{tikzpicture} \node (tbl) { \begin{tabular}{ c c c c c c c} \arrayrulecolor{darkblue} Dimension & Degree& $\lvert\Lambda\rvert$ & $L(\Lambda)$ & $\ceil{\lvert\Lambda\rvert/(d+1)}$ & No. Points &No.
Iterations\\[1.ex] 2 & 20 & 231 & 66 & 77 &77-79 & 262-854 \\ \midrule 3 & 20 & 1771 & 286 & 443 &445-447 & 736-1459\\ \midrule 4 & 13 & 2380 & 210 & 476 &479-480 & 782-2033\\ \midrule 5 & 10 & 3003 & 252 & 501 &506-508 & 3181-4148\\ \midrule 10 & 5 & 3003 & 66 & 273 &273-274 & 511-2229 \\[0.5ex] \end{tabular} }; \begin{pgfonlayer}{background} \draw[rounded corners,top color=lightblue,bottom color=darkblue!50!black, draw=white] ($(tbl.north west)+(0.14,0)$) rectangle ($(tbl.north east)-(0.13,0.9)$); \draw[rounded corners,top color=white,bottom color=darkblue!50!black, middle color=lightblue,draw=blue!20] ($(tbl.south west) +(0.12,0.5)$) rectangle ($(tbl.south east)-(0.12,0)$); \draw[top color=blue!1,bottom color=gray!50,draw=white] ($(tbl.north east)-(0.13,0.6)$) rectangle ($(tbl.south west)+(0.13,0.2)$); \end{pgfonlayer} \end{tikzpicture} \end{center} \caption{Results for computing \textsc{Reduced quadrature} rules for total-degree spaces for the uniform measure on $[-1,1]^d$. Tabulated are the number of moments matched ($|\Lambda|$), the theoretical lower bound on the quadrature rule size from \eqref{eq:L-definition}, the number of quadrature points suggested by the counting heuristic \eqref{eq:dof-heuristic}, the number of {reduced quadrature} points found, and the number of iterations required in the optimization. The {reduced quadrature} algorithm output is random since the initial candidate grid is a random set.
Thus, the final two columns give a range of results over 10 runs of the algorithm.} \label{tab:uniform-quad-rules-max-degree} \end{table} To compare the performance of our reduced quadrature rules to existing algorithms, consider the corner-peak function often used to test quadrature methods~\cite{Genz_Book_1987}, \begin{align}\label{eq:genz-cp} f_{\mathrm{CP}}(\V{x})=\left(1+\sum_{i=1}^d c_i\, x_i \right)^{-(d+1)},\quad \V{x}\in\Gamma=[0,1]^d. \end{align} The coefficients $c_i$ can be used to control the variability over $D$ and the effective dimensionality of this function. We generate each $c_i$ randomly from the interval $[0,1]$ and subsequently normalize them so that $\sum_{i=1}^d c_i = 1$. For $X$ a uniform random variable on $D = [-1,1]^d$, the mean and variance of $f(X)$ can be computed analytically; these values correspond to computing integrals over $D = [-1,1]^d$ with $\mu$ the uniform measure. Figure~\ref{fig:genz-cp-convergence} plots the convergence of the error in the mean of the corner-peak function using the reduced quadrature rules generated for $\mu$ the uniform probability measure on $[-1,1]^d$ with $d=2,3,4,5,10$. Because generating the initial condition requires randomly sampling over the domain of integration for a given degree, we generate 10 quadrature rules and plot both the median error and the minimum and maximum errors. There is sensitivity to the random candidate set used to generate the initial condition; however, for each of the 10 repetitions a quadrature rule was always found, and the error in the approximation of the mean of $f$ using these rules converges exponentially fast. For a given number of samples, the error in the reduced quadrature rule estimate of the mean is significantly lower than the error of a Clenshaw-Curtis-based sparse grid. It is also more accurate than Sobol sequences up to 10 dimensions. Furthermore, the reduced quadrature rule is competitive with the Stroud degree 2, 3, and 5 cubature rules.
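To see why polynomial-exact rules do well on $f_\mathrm{CP}$, the sketch below integrates a $d=2$ instance on $[0,1]^2$ with a plain tensor-product Gauss-Legendre rule as a stand-in (the reduced rules themselves require the optimization pipeline; the coefficients $c_1 = 0.3$, $c_2 = 0.7$ are arbitrary illustrative choices) and compares the result against the closed-form mean:

```python
import numpy as np

c1, c2 = 0.3, 0.7                    # normalized so that c1 + c2 = 1

def f_cp(x1, x2):
    # corner-peak integrand for d = 2
    return (1.0 + c1 * x1 + c2 * x2) ** -3

# closed-form mean over [0,1]^2 with the uniform measure
exact = (1 - 1/(1 + c1) - 1/(1 + c2) + 1/(1 + c1 + c2)) / (2 * c1 * c2)

# 10-point tensor-product Gauss-Legendre rule mapped to [0,1]
g, w = np.polynomial.legendre.leggauss(10)
t, wt = (g + 1.0) / 2.0, w / 2.0
X1, X2 = np.meshgrid(t, t)
approx = np.einsum('i,j,ij->', wt, wt, f_cp(X1, X2))

assert abs(approx - exact) < 1e-8    # exponentially accurate for smooth f
```

The analytic integrand makes polynomial-based rules converge exponentially, which is the behavior the reduced rules inherit in the figure.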
However, unlike the Stroud rules, the polynomial exactness of the reduced quadrature rule is restricted neither to low degrees (shown here) nor to total-degree spaces (as discussed in Section~\ref{sec:poly-spaces}). In Figure~\ref{fig:genz-cp-convergence} we also plot the error of the quadrature rule used as the initial condition for the local optimization used to generate the reduced rule. As expected, the error of the initial condition is larger than that of the final reduced rule for a given number of points. Moreover, it becomes increasingly difficult to find an accurate initial condition as the degree increases. In two dimensions, as the degree is increased past 13, the error in the initial condition begins to increase. Similar degradation in accuracy also occurs in the other, higher-dimensional examples. However, even when the accuracy of the initial condition is extremely low, it still provides a good starting point for the local optimization, allowing us to accurately generate the reduced quadrature rule. \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{tikz/error-convergence-genz-cp-d-2-uniform-total-degree-exact-standalone.pdf} \includegraphics[width=0.48\textwidth]{tikz/error-convergence-genz-cp-d-3-uniform-total-degree-exact-standalone.pdf}\\ \includegraphics[width=0.48\textwidth]{tikz/error-convergence-genz-cp-d-4-uniform-total-degree-exact-standalone.pdf} \includegraphics[width=0.48\textwidth]{tikz/error-convergence-genz-cp-d-5-uniform-total-degree-exact-standalone.pdf}\\ \includegraphics[width=0.48\textwidth]{tikz/error-convergence-genz-cp-d-10-uniform-total-degree-exact-standalone.pdf}\\ \end{center} \caption{Convergence of the error in the mean of the corner-peak function $f_\mathrm{CP}$~\eqref{eq:genz-cp} computed using the reduced quadrature rule generated for the uniform measure. Convergence is shown for $d=2,3,4,5,10$, which are respectively shown left to right starting from the top left.
Solid lines represent the median error of the total-degree quadrature rules for 10 different initial conditions. Transparent bands represent the minimum and maximum errors of the same 10 repetitions.} \label{fig:genz-cp-convergence} \end{figure} \subsection{Beyond tensor product measures}\label{sec:non-tp-measures} Numerous methods have been developed for multivariate quadrature with tensor-product measures; however, much less attention has been given to computing quadrature rules for arbitrary measures, for example those that exhibit non-linear correlation between dimensions. In this setting Monte Carlo quadrature is the most popular method of choice. \subsubsection{Chemical reaction model} The reduced quadrature method we have developed can generate quadrature rules to integrate over arbitrary measures. Consider the following model of competing species absorbing onto a surface out of a gas phase: \begin{align}\label{eq:chemical-species} \begin{split} \frac{du_1}{dt} &= x_1z-cu_1 - 4du_1u_2\\ \frac{du_2}{dt} &= 2x_2z^2 - 4du_1u_2\\ \frac{du_3}{dt} &= ez - fu_3\\ z=1-u_1&-u_2-u_3,\quad u_1(0)=u_2(0)=u_3(0)=0 \end{split} \end{align} The constants $c$, $d$, $e$, and $f$ are fixed at the nominal values $c=0.04$, $d=1.0$, $e=0.36$, and $f=0.016$. The parameters $x_1$ and $x_2$ will vary over a domain $D$ endowed with a non-tensor-product probability measure $\mu$. Viewing $X = (X_1, X_2) \in \R^2$ as a random variable with probability density $\dx{\mu}(x)$, we are interested in computing the mean of the mass fraction of the third species, $u_3$, at $t=100$ seconds. We will consider two test problems, problem $\mathrm{I}$ and problem $\mathrm{II}$, each defined by its own domain $D$ and measure $\mu$. We therefore have two rectangular domains $D_{\mathrm{I}}$ and $D_{\mathrm{II}}$ with measures $\mu_{\mathrm{I}}$ and $\mu_{\mathrm{II}}$, respectively.
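For concreteness, the quantity of interest can be evaluated at a single parameter point with any standard ODE integrator. The sketch below uses a hand-rolled RK4 step and assumes the interpretation $z = 1 - u_1 - u_2 - u_3$ (the vacant fraction) with $x_1$ entering as the first adsorption-rate coefficient; the point $(x_1, x_2) = (1.6, 20)$ is an arbitrary choice inside $D_{\mathrm{II}}$:

```python
import numpy as np

# Nominal constants; (x1, x2) = (1.6, 20) is an arbitrary point in D_II.
# Assumption: z = 1 - u1 - u2 - u3 is the vacant-site fraction and x1
# plays the role of the first adsorption-rate coefficient.
c, d_, e, f = 0.04, 1.0, 0.36, 0.016
x1, x2 = 1.6, 20.0

def rhs(u):
    u1, u2, u3 = u
    z = 1.0 - u1 - u2 - u3
    return np.array([x1 * z - c * u1 - 4.0 * d_ * u1 * u2,
                     2.0 * x2 * z**2 - 4.0 * d_ * u1 * u2,
                     e * z - f * u3])

# classical RK4; dt small enough for the fast initial transient (rates ~ 2*x2)
u, dt = np.zeros(3), 1.0e-2
for _ in range(int(100.0 / dt)):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u = u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

u3_final = u[2]   # mass fraction of the third species at t = 100
```

Averaging this scalar output over quadrature nodes weighted by $\mu$ yields the mean of interest.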
The domains and the measures are defined via affine mappings of a canonical domain and measure: \begin{align}\label{eq:banana-density} \dx{\mu}(x) &= C\exp(-(\frac{1}{10}x_1^4 + \frac{1}{2}(2x_2-x_1^2)^2)), & {x}&\in D = [-3,3]\times[-2,6], \end{align} where $C$ is a constant chosen to normalize $\mu$ as a probability measure. This density is called a ``banana density", and is a truncated non-linear transformation of a bivariate standard normal distribution. We define $(D_{\mathrm{I}}, \mu_{\mathrm{I}})$ and $(D_{\mathrm{II}}, \mu_{\mathrm{II}})$ as the result of affinely mapping $D$ to the domains \begin{align*} D_{\mathrm{I}}&= [0,4.5]\times[5,35], & D_{\mathrm{II}} &= [1.28,1.92]\times[16.6,24.9]. \end{align*} The response surface of the mass fraction over the integration domain of problem I is shown in the right of Figure~\ref{fig:banana-density}. The response has a strong non-linearity, which makes it ideal for testing high-order polynomial quadrature. For comparison, the integration domain of problem II is also depicted in the right of Figure~\ref{fig:banana-density}. The domain of problem II is smaller and thus the non-linearity of the response is weaker; consequently, computing the mean of problem II is easier for polynomial quadrature methods than computing the mean of problem I. \begin{figure} \begin{center} \includegraphics[width=0.4\textwidth]{tikz/banana-density.pdf} \includegraphics[width=0.49\textwidth]{figures/chemical-reaction-response.pdf} \end{center} \caption{(Left) Contour plot of the banana density \eqref{eq:banana-density} mapped to $D_I$. (Right) Response surface of the mass fraction $u_3$ at $t=100$ predicted by the chemical reaction model. The response is plotted over the entire domain of problem I and the black box represents the smaller domain of problem II.
} \label{fig:banana-density} \end{figure} Figure~\ref{fig:banana-chemical-reaction-convergence} compares the convergence of the error in the mean of the mass fraction of the chemical reaction model computed using the reduced quadrature rule with the estimates of the mean computed using Monte Carlo sampling and Clenshaw-Curtis-based sparse grids. Unlike in previous examples, the mean cannot be computed analytically, so instead we compute a reference mean using $10^6$ samples from a $2$-dimensional Sobol sequence. Sparse grids can only be constructed for tensor-product measures, so here we investigate the performance of sparse grids by including the probability density in the integrand and integrating with respect to the uniform measure. This is the most common strategy to tackle a non-tensor-product integration problem using a tensor-product quadrature rule. For the more challenging integral defined over $D_\mathrm{I}$, reduced quadrature out-performs Monte Carlo and sparse-grid quadrature; however, the difference is more pronounced when integrating over $D_\mathrm{II}$. The apparent slower rate of convergence of reduced quadrature for problem I is because a high polynomial degree is needed to accurately approximate the steep response-surface features over this domain. The performance of the reduced quadrature method is related to how well the integrand can be approximated by a polynomial. \begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{tikz/error-convergence-chemical-reaction-d-2-banana-total-degree-exact-standalone.pdf} \includegraphics[width=0.49\textwidth]{tikz/error-convergence-chemical-reaction-easy-d-2-banana-total-degree-exact-standalone.pdf} \end{center} \caption{Convergence of the error in the mean of the mass fraction of the chemical reaction model computed using the reduced quadrature rule generated for the banana density over (right) $D_\mathrm{II}$ and (left) $D_\mathrm{I}$.
Error is compared to popular alternative quadrature methods.} \label{fig:banana-chemical-reaction-convergence} \end{figure} \subsubsection{Sample-based moments}\label{sec:non-tp-sample-moments} Note that both generating the initial condition~\eqref{eq:bpdn-problem} and solving the local optimization problem~\eqref{eq:local-optimization} involve matching moments. Throughout this paper we have computed the moments of the polynomials we are matching analytically (or to machine precision with high-order quadrature). However, situations may arise when one only has samples from the probability density of a set of random variables. For example, Bayesian inference~\cite{Stuart_AN_2010} is often used to infer densities of random variables conditioned on available observational data. The resulting so-called posterior densities are almost never tensor-product densities. Moreover, it is difficult to compute analytical expressions for the posterior density, and so Markov Chain Monte Carlo (MCMC) sampling is often used to draw a set of random samples from the posterior density. Consider a model $f_\text{p}({x}):\realsmap{d}{1}$ parameterized by $d$ variables ${x}=(x_1,\ldots,x_{d})\in D \subset\mathbb{R}^{d}$, predicting an unobservable quantity of interest (QoI). There is a true underlying value of $x$ that is unknown. In the example \eqref{eq:chemical-species}, $f_\text{p}$ is the mass fraction $u_3$ of the chemical reaction model. We wish to quantify the effect of the uncertain variables on the model prediction, and we use Bayesian inference to accomplish this. In the standard inverse-problem setup, we have no direct measurements of $f_\text{p}$, but we can make observations ${{y}_o}$ of other quantities which we can use to constrain estimates of uncertainty in the unobservable QoI. To make this precise, let $f_\text{o}({x}) : \realsmap{d}{n_o}$ be a model, parameterized by the same $d$ random variables ${x}$, which predicts a set of $n_o$ observable quantities.
Bayes' rule can be used to define the posterior density for the model parameters ${x}$ given observational data ${{y}_o}$: \begin{align}\label{eq:posterior} \pi({x}|{{y}_o})=\frac{\pi({{y}_o}|{x})\pi({x})}{\int_{D} \pi({{y}_o}|{x})\pi({x})\dx{{x}}}, \end{align} where any prior knowledge on the model parameters is captured through the prior density $\pi({x})$. The construction of the posterior is often not the end goal of an analysis; instead, one is often interested in statistics of the unobservable QoI. Here we will focus on moments of the data-informed predictive distribution, for example the mean prediction \begin{equation}\label{eq:post-pred-moments} m_p = \int_D f_\text{p}({x})\pi({x}|{{y}_o})\dx{{x}}. \end{equation} In practice the posterior of the chemical model parameters may be obtained from observational data ${y}_o$ from chemical reaction experiments and an observational model $f_\text{o}$ that is used to numerically simulate those experiments. However, for ease of discussion we will assume the posterior distribution~\eqref{eq:posterior} is given by the banana-type density defined over $D_\mathrm{I}$ or $D_\mathrm{II}$. The banana density \eqref{eq:banana-density} has been used previously to test Markov Chain Monte Carlo (MCMC) methods~\cite{Parno_M_arxiv_2014} and thus is a good test case for Bayesian inference problems. As in the previous section, we will use the reduced quadrature method to compute $m_p$; however, here we will investigate the performance of the quadrature rules that are generated from \textit{approximate} moments. The approximate moments are those computed via Monte Carlo sampling from $\pi(x | y_o)$. These approximate moments are used to generate reduced quadrature rules, i.e., they are used as inputs when solving~\eqref{eq:bpdn-problem} and~\eqref{eq:local-optimization}. Figure~\ref{fig:banana-chemical-reaction-convergence-samples} illustrates the effect of using a finite number of samples to approximate the polynomial moments.
The left of the figure depicts the convergence of the error in the mean of the mass fraction for the banana density defined over $D_\mathrm{I}$. The right shows errors for the banana density defined over $D_\mathrm{II}$. The right-hand figure illustrates that the accuracy of the quadrature rule is limited by the accuracy of the moments used to generate the quadrature rule. Once the error in the approximation of the mean reaches the accuracy of the moments, the error stops decreasing when the number of quadrature points is increased. The error saturation point can be roughly estimated by the typical Monte Carlo error, which scales like $P^{-1/2}$, where $P$ is the number of samples used to estimate the polynomial moments. The saturation of error present in the right-hand figure is not as evident in the left-hand figure; this is because the error of the quadrature rules using exact moments is greater than the error in the Monte Carlo estimates of the moments based upon $10^4$ and $10^6$ samples. Our Monte Carlo samples from $\pi(x|y_o)$ were generated using rejection sampling, which is an exact sampler. While MCMC is the tool of choice for sampling from high-dimensional non-standard distributions, it is an approximate sampler. If we had generated samples using MCMC, then our approximate moments would contain two error sources: that from the finite sample size, and that from approximate sampling. For this reason, we opted to use the exact rejection sampling technique. However, we expect that the results in Figure \ref{fig:banana-chemical-reaction-convergence-samples} would look similar when using MCMC samples.
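A minimal sketch of the sampling step: since the unnormalized banana density \eqref{eq:banana-density} is bounded by 1 on its box, rejection sampling against a uniform proposal is immediate, and sample moments (here the first moment of $x_1$, which is zero by the symmetry of the density in $x_1$) can then be fed to the moment-matching problems:

```python
import numpy as np

rng = np.random.default_rng(0)

def banana_unnorm(x1, x2):
    # unnormalized banana density from eq. (banana-density); bounded by 1
    return np.exp(-(x1**4 / 10.0 + 0.5 * (2.0 * x2 - x1**2) ** 2))

def rejection_sample(n):
    # uniform proposal on [-3,3] x [-2,6]; accept with prob. density/1
    out = []
    while len(out) < n:
        x1 = rng.uniform(-3.0, 3.0, size=4 * n)
        x2 = rng.uniform(-2.0, 6.0, size=4 * n)
        u = rng.uniform(0.0, 1.0, size=4 * n)
        keep = u < banana_unnorm(x1, x2)
        out.extend(zip(x1[keep], x2[keep]))
    return np.array(out[:n])

samples = rejection_sample(20000)
# sample-based moment estimate; E[x1] = 0 by the x1 -> -x1 symmetry
m10 = samples[:, 0].mean()
```

The statistical error of such moment estimates is the $P^{-1/2}$ saturation floor discussed above.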
\begin{figure} \begin{center} \includegraphics[width=0.49\textwidth]{tikz/error-convergence-chemical-reaction-d-2-banana-samples-total-degree-samples-1e2-standalone.pdf} \includegraphics[width=0.49\textwidth]{tikz/error-convergence-chemical-reaction-easy-d-2-banana-samples-total-degree-samples-1e2-standalone.pdf} \end{center} \caption{Convergence of the error in the mean of the mass fraction of the chemical reaction model computed using the reduced quadrature rule generated for the banana density using approximate moments. The $x$ axis in the plots indicates the number of points in a reduced quadrature rule generated from approximate moments. The numbers in the legend indicate the number of Monte Carlo samples used to approximate the moments. (Left) Convergence over $D_\mathrm{I}$ and (right) over $D_\mathrm{II}$.} \label{fig:banana-chemical-reaction-convergence-samples} \end{figure} \subsection{High-dimensional quadrature} The cost of computing integrals with total-degree quadrature rules increases exponentially with dimension. This cost can be ameliorated if one is willing to consider subspaces that are more exotic than total-degree spaces. In this section we show how our reduced quadrature rules can generate quadrature for non-standard subspaces, taking advantage of certain structures that may exist in an integrand. Specifically we will demonstrate the utility of reduced quadrature rules for integrating high-dimensional functions that can be approximated by (i) low-order ANOVA expansions and (ii) functions of a small number of linear combinations of the function variables (such functions are often referred to as ridge functions). \subsubsection{ANOVA decompositions}\label{sec:poly-spaces} Dimension-wise decompositions of the input-output relationship have arisen as a successful means of delaying or even breaking the curse of dimensionality for certain applications~\cite{Griebel_H_JC_2010,Foo_K_JCP_2010,Ma_Z_JCP_2009,Jakeman_R_SGA_2013}.
Such an approach represents the model outputs as a high-dimensional function $f(x)$ dependent on the model inputs $x=(x_1,\ldots,x_d)$. It uses an ANOVA type decomposition \begin{equation} f(x_1,\ldots,x_d)=f_0+\sum_{i=1}^df_i(x_i)+\sum_{1\le i<j\le d}f_{i,j}(x_i, x_j)+\cdots+f_{1,\ldots,d}(x_1,\ldots,x_d) \end{equation} which separates the function into a sum of subproblems. The first term is the zeroth-order effect, which is a constant throughout the $d$-dimensional variable space. The $f_i(x_i)$ terms are the first-order terms, which represent the effect of each variable acting independently of all others. The $f_{i,j}(x_i,x_j)$ terms are the second-order terms, which are the contributions of $x_i$ and $x_j$ acting together, and so on. In practice only a small number of interactions contribute significantly to the system response, and consequently only a low-order approximation is needed to represent the input-output mapping accurately~\cite{Wang_S_SISC_2005}. In this section we show that reduced quadrature rules can be constructed for functions admitting ANOVA type structure. Specifically, consider the following modified version of the corner peak function~\eqref{eq:genz-cp} \begin{align}\label{eq:genz-modified-cp} f_{\mathrm{MCP}}=\sum_{i=1}^{d-1}\left(1+c_ix_i+c_{i+1}x_{i+1}\right)^{-3}, \quad x\in D = [0,1]^d \end{align} with $d=20$. This function has at most second-order ANOVA terms, and its moments with respect to the uniform probability measure on $D$ can be computed exactly. We can create a quadrature rule ideally suited to integrating functions that have at most second-order ANOVA terms. Specifically we need only customize the index set $\Lambda$ that is input to Algorithm~\ref{alg:reduced-quadrature}. To emphasize a second-order ANOVA approximation, we compute moments of the form $$\int_\Gamma p_\alpha(x)\,d\mu(x),\quad \forall\alpha\in\Lambda=\{\alpha\;|\;\lVert\alpha\rVert_0\le 2 \textrm{ and } \lVert\alpha\rVert_1\le k\}$$ for some degree $k$.
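To make the structure of this index set concrete, the sketch below enumerates $\Lambda=\{\alpha\;|\;\lVert\alpha\rVert_0\le 2 \textrm{ and } \lVert\alpha\rVert_1\le k\}$ and evaluates the modified corner-peak integrand. The helper names are illustrative, and the choice $c_i=1$ is an assumption made only for this example.

```python
import itertools
import numpy as np

d, k = 20, 5

def second_order_anova_indices(d, k):
    """Enumerate Lambda = {alpha : ||alpha||_0 <= 2 and ||alpha||_1 <= k}."""
    indices = [(0,) * d]                                   # constant term
    for i in range(d):                                     # first-order terms
        for p in range(1, k + 1):
            alpha = [0] * d
            alpha[i] = p
            indices.append(tuple(alpha))
    for i, j in itertools.combinations(range(d), 2):       # second-order terms
        for p in range(1, k):
            for q in range(1, k + 1 - p):
                alpha = [0] * d
                alpha[i], alpha[j] = p, q
                indices.append(tuple(alpha))
    return indices

Lambda = second_order_anova_indices(d, k)

def f_mcp(x, c=None):
    """The modified corner-peak integrand of eq. (genz-modified-cp); the
    choice c_i = 1 is an assumption made only for this example."""
    c = np.ones(x.shape[-1]) if c is None else c
    return sum((1.0 + c[i] * x[..., i] + c[i + 1] * x[..., i + 1]) ** -3
               for i in range(x.shape[-1] - 1))

print(len(Lambda))
```

For $d=20$ and $k=5$ this set contains 2001 indices, compared with $\binom{25}{5}=53130$ for the full total-degree space of degree 5, which is what makes the tailored rule so much cheaper.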
In Figure~\ref{fig:genz-modified-cp-error} we compare the error in the mean computed using our reduced quadrature method with the errors in the estimates of the mean computed using popular high-dimensional integration methods, specifically Clenshaw--Curtis sparse grids, quasi-Monte Carlo integration based upon Sobol sequences, and low-degree (Stroud) cubature rules. By tailoring the reduced quadrature rule to the low-order ANOVA structure of the integrand, the error in the estimate of the mean is orders of magnitude smaller than the estimates computed using the alternative methods. \begin{figure} \begin{center} \raisebox{.09\textwidth}{ \includegraphics[width=0.24\textwidth]{tikz/draw-3d-indices.pdf} \includegraphics[width=0.24\textwidth]{tikz/hyperbolic-3d-indices.pdf} } \includegraphics[width=0.49\textwidth]{tikz/error-convergence-genz-cp-second-order-d-20-uniform-second-order-hyperbolic-cross-exact-standalone.pdf} \end{center} \caption{(Left) Comparison of a three-dimensional total-degree index set of degree 5 with a second-order ANOVA index set of degree 5. (Right) Convergence of the error in the mean of the modified corner-peak function $f_\mathrm{MCP}$~\eqref{eq:genz-modified-cp} computed using reduced quadrature rules for the uniform measure.} \label{fig:genz-modified-cp-error} \end{figure} Note that sparse grids are a form of an anchored ANOVA expansion~\cite{Griebel_H_JC_2010} and delay the curse of dimensionality by assigning a decreasing number of samples to the resolution of higher-order ANOVA terms. However, unlike our reduced quadrature rule method, sparse grids cannot be tailored to the exact ANOVA structure of the function. \subsubsection{Ridge functions}\label{sec:dim-reduction} In this section we show that our algorithm can be used to integrate high-dimensional ridge functions. Ridge functions are multivariate functions that can be expressed as a function of a small number of linear combinations of the input variables~\cite{Pinkus_book_2015}.
We define a ridge function to be a function $f: \mathbb{R}^d\rightarrow\mathbb{R}$ that can be expressed as a function $g$ of $s < d$ rotated variables, \begin{align*} f(y) &= g(a y), & a &\in\mathbb{R}^{s\times d}. \end{align*} In applications it is common for $d$ to be very large, but for $f$ to be an (approximate) ridge function with $s \ll d$. When integrating a ridge function one need not integrate in $\mathbb{R}^d$ but rather can focus on the more tractable problem of integrating in $\mathbb{R}^s$. The difficulty then becomes integrating with respect to the transformed measure of the high-dimensional integral in the lower-dimensional space, which is typically unknown and challenging to compute. When the $d$-dimensional domain is a hypercube, the corresponding $s$-dimensional domain of integration is a multivariate zonotope, i.e., a convex, centrally symmetric polytope that is the $s$-dimensional linear projection of a $d$-dimensional hypercube. The vertices of the zonotope are a subset of the vertices of the $d$-dimensional hypercube projected onto the $s$-dimensional space via the matrix $a$. When computing moment-matching quadrature rules on zonotopes we must amend the local optimization problem to include linear inequality constraints, to enforce that the quadrature points chosen remain inside the zonotope. The inequality constraints of the zonotope are the same constraints that define the convex hull of the zonotope vertices.\footnote{Note that most non-linear least squares optimizers do not allow the specification of inequality constraints, so to compute quadrature rules on a zonotope we used a sequential quadratic program.} Computing all the vertices of the zonotope $\{a v\;|\;v\in[-1,1]^d\}$ can be challenging: they are projections of the hypercube vertices, whose number grows exponentially with the dimension $d$. To address this issue we use a randomized algorithm~\cite{Stinson_GC_ARXIV_2016} to find a subset of vertices of the zonotope.
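The randomized vertex search rests on a simple fact: for a direction $w\in\mathbb{R}^s$, the vertex of the zonotope maximizing $\langle w,\cdot\rangle$ is $a\,\mathrm{sign}(a^Tw)$, so sampling random directions yields a subset of vertices. The snippet below is a minimal illustration of this idea only; the matrix $a$ (random, with orthonormal rows) and the number of directions are illustrative assumptions, and this is not the full algorithm of~\cite{Stinson_GC_ARXIV_2016}.

```python
import numpy as np

rng = np.random.default_rng(1)
d, s = 20, 2
# A random projection with orthonormal rows (an illustrative choice of `a`)
a = np.linalg.qr(rng.standard_normal((d, s)))[0].T       # shape (s, d)

def zonotope_vertex_subset(a, n_dirs=200):
    """For a random direction w in R^s, the vertex of {a v : v in [-1, 1]^d}
    maximizing <w, x> is a @ sign(a.T @ w); duplicate sign patterns are merged."""
    verts = {tuple(np.sign(a.T @ w)) for w in rng.standard_normal((n_dirs, a.shape[0]))}
    return np.array([a @ np.array(v) for v in verts])

V = zonotope_vertex_subset(a)
print(V.shape)   # a subset of the zonotope's vertices, each a point in R^s
```

Each recovered vertex is the projection of a hypercube vertex, so only $n_\mathrm{dirs}$ sign patterns are ever examined instead of all $2^d$ hypercube corners.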
This algorithm produces a convex hull that is a good approximation of the true zonotope with high probability. In all our testing we found that the approximation of the zonotope hull did not noticeably affect the accuracy of the quadrature rules we generated. We assume that $y \in \R^d$ is the $d$-dimensional variable with a measure $\nu$, which is typically of tensor-product form. In the projected space $x \coloneqq a y \in \R^s$, this induces a new measure $\mu$ on the zonotope $D$ that is not of tensor-product form. With this setup, the $\mu$-moments can be computed analytically by taking advantage of the relationship $x=a y$. For further details see Appendix~\ref{app:ridge-function-moments}. Once a quadrature rule on a zonotope $D$ is constructed, some further work is needed before it can be applied to integrate the function $f$ on the original $d$-dimensional hypercube. Specifically we must transform the quadrature points $x\in D$ back into the hypercube. The inverse transformation of $a$ is not unique, but if the function is (approximately) constant in the directions orthogonal to the rows of $a$, then any choice will do. In our example we set this transformation to be $y = a^T x$. The integral of the ridge function can then be approximated by $$ \int_{D_y} f(y)\,d\nu(y) \approx \sum_{i=1}^M f(a^T x_i) w_i $$ where $(x_i, w_i)$ is a quadrature rule generated to integrate over the $s$-dimensional domain $D$ with the non-tensor-product measure $\mu$. In the following we will consider the integration of a high-dimensional ridge function with $\nu$ the uniform probability measure on $[-1,1]^d$. We will again consider integrating the moments of the mass fraction of the third species $u_3(t=100)$ of the competing species model from Section~\ref{sec:non-tp-measures}. However, now we set $x=a y$, where $y\in\R^d$ with $d=20$ and $a \in\mathbb{R}^{2\times 20}$ is a randomly generated matrix with orthogonal rows.
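The back-transformation step can be illustrated directly. Assuming $a$ has orthonormal rows (so that $a a^T = I_s$), a back-transformed quadrature point satisfies $f(a^T x) = g(a a^T x) = g(x)$, so the quadrature sum only ever evaluates the low-dimensional profile. A minimal sketch, in which the profile $g$ is a hypothetical stand-in for the competing species model:

```python
import numpy as np

rng = np.random.default_rng(2)
d, s = 20, 2
a = np.linalg.qr(rng.standard_normal((d, s)))[0].T   # orthonormal rows: a @ a.T = I_s

g = lambda x: np.exp(-np.sum(x**2, axis=-1))         # hypothetical low-dimensional profile
f = lambda y: g(y @ a.T)                             # the ridge function f(y) = g(a y)

# Back-transform y = a^T x: since a @ a.T = I_s, evaluating the ridge function
# at the back-transformed point reproduces the profile, f(a^T x) = g(a a^T x) = g(x).
x = rng.standard_normal((5, s))                      # stand-ins for quadrature points
print(np.allclose(f(x @ a), g(x)))                   # prints True
```

This is why the non-uniqueness of the right inverse is harmless for an exact ridge function: any other choice differs only in directions along which $f$ is constant.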
This makes the mass fraction a ridge function of two variables. For a realization of $a$ we plot the resulting two-dimensional zonotope that defines the domain of integration of the variables $x$ in Figure~\ref{fig:zonotope} (left). The new probability density $\mu$ is depicted in the same figure. It is obvious that the transformed density $\mu$ is no longer uniform. In Figure~\ref{fig:zonotope} (right) we plot the convergence of the error in the mean value of the ridge function computed using reduced quadrature rules. The error for the reduced quadrature approach decays exponentially fast and, for a given number of function evaluations, is orders of magnitude smaller than the error obtained using Clenshaw--Curtis sparse grids and Sobol sequences (a QMC method) in the 20-dimensional space. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{figures/zonotope-density-d-20-s-2.pdf} \includegraphics[width=0.49\textwidth]{tikz/error-convergence-chemical-reaction-active-subspace-d-20-active-subspace-total-degree-exact-standalone.pdf} \end{center} \caption{(Left) The zonotope defining the domain of integration and the joint probability density of the two variables defining the two-dimensional ridge function of 20 uniform variables. The affine mapping described in Section \ref{sec:non-tp-measures} is used to center the mean of the zonotope density over the highly non-linear region of the chemical reaction model response surface shown in the right of Figure~\ref{fig:banana-density}. (Right) The convergence of the error in the mean value of the ridge function computed using reduced quadrature rules over the two-dimensional zonotope compared against more standard quadrature rules in the full 20-dimensional space.} \label{fig:zonotope} \end{figure}
https://arxiv.org/abs/1711.00506
Generation and application of multivariate polynomial quadrature rules
The search for multivariate quadrature rules of minimal size with a specified polynomial accuracy has been the topic of many years of research. Finding such a rule allows accurate integration of moments, which play a central role in many aspects of scientific computing with complex models. The contribution of this paper is twofold. First, we provide novel mathematical analysis of the polynomial quadrature problem that provides a lower bound for the minimal possible number of nodes in a polynomial rule with specified accuracy. We give concrete but simplistic multivariate examples where a minimal quadrature rule can be designed that achieves this lower bound, along with situations that showcase when it is not possible to achieve this lower bound. Our second main contribution comes in the formulation of an algorithm that is able to efficiently generate multivariate quadrature rules with positive weights on non-tensorial domains. Our tests show success of this procedure in up to 20 dimensions. We test our method on applications to dimension reduction and chemical kinetics problems, including comparisons against popular alternatives such as sparse grids, Monte Carlo and quasi Monte Carlo sequences, and Stroud rules. The quadrature rules computed in this paper outperform these alternatives in almost all scenarios.
https://arxiv.org/abs/1506.06781
Spectral stability of metric-measure Laplacians
We consider a "convolution mm-Laplacian" operator on metric-measure spaces and study its spectral properties. The definition is based on averaging over small metric balls. For reasonably nice metric-measure spaces we prove stability of convolution Laplacian's spectrum with respect to metric-measure perturbations and obtain Weyl-type estimates on the number of eigenvalues.
\section{Introduction} \label{sec:notation} This paper is motivated by \cite{BIK14} where we approximate a compact Riemannian manifold by a weighted graph and show that the spectra of the Beltrami--Laplace operator on the manifold and the graph Laplace operator are close to each other. The key constructions of \cite{BIK14} can be regarded as a definition of an operator which approximates the Beltrami--Laplace operator. The definition is based on averaging over small balls. The construction makes sense for general metric-measure spaces, which in particular include Riemannian manifolds and weighted graphs. In this paper we show that an analogue of some results from \cite{BIK14} holds for a large class of metric-measure spaces. Namely we introduce a ``convolution Laplacian'' operator with a parameter $\rho>0$ (a radius) and prove that its spectrum enjoys stability under metric-measure approximations. Recall that a metric-measure space is a triple $(X,d,\mu)$ where $(X,d)$ is a metric space and $\mu$ is a Borel measure on $X$. All metric spaces in this paper are compact and all measures are finite. We denote by $B_r(x)$ the metric ball of radius $r$ centered at a point $x\in X$. Our main object of study is defined as follows. \begin{definition}\label{d:rho-laplacian} Let $X=(X,d,\mu)$ be a metric-measure space and $\rho>0$. The \textit{$\rho$-Laplacian} $\Delta^\rho_X\colon L^2(X)\to L^2(X)$ is defined by \begin{equation} \label{1.21.12} \Delta^\rho_X u(x)= \frac{1}{\rho^2\mu(B_\rho(x))} \int_{B_\rho(x)} \big(u(x)-u(y)\big)\, d\mu(y) \end{equation} for $u\in L^2(X)$. \end{definition} If $X$ is a Riemannian $n$-manifold, then $\Delta^\rho_X$ converges as $\rho\to 0$ (e.g.\ on smooth functions) to the Beltrami--Laplace operator multiplied by the constant $\frac{-1}{2(n+2)}$. For general metric-measure spaces, it is not clear what should replace the normalizing constant $\frac1{2(n+2)}$ so it does not appear in our definition. 
It is plausible that $\Delta^\rho_X$ has a meaningful limit as $\rho\to 0$ for a large class of metric-measure spaces $X$. We hope to address this question elsewhere. In this paper we consider the operator $\Delta^\rho_X$ for a fixed ``small'' value of $\rho$. Our goal is to study the spectrum of $\Delta^\rho_X$ and its stability properties. Another interesting case is when $X$ is a discrete space. In this case all needed geometric data amounts to the weights of points and the information about which pairs of points are within distance $\rho$. This structure is just a weighted graph (without any lengths assigned to edges) and the $\rho$-Laplacian defined by \eqref{1.21.12} is just the classic weighted graph Laplacian. Spectral theory of graph Laplacians is a well developed subject, see e.g.\ \cite{Chung,Sunada}. In the case when $X$ is a Riemannian manifold, the spectral properties of $\rho$-Laplacians are studied in \cite{LM} in connection with random walks on the manifold. In this paper we study the topic from a different viewpoint. Namely we are interested in spectral stability under metric-measure perturbations. As shown in Section \ref{sec:prelim}, $\Delta^\rho_X$ is a non-negative self-adjoint operator with respect to a certain scalar product on $L^2(X)$. Hence the spectrum of $\Delta^\rho_X$ is a subset of $[0,+\infty)$. Moreover $\operatorname{spec}(\Delta^\rho_X)\subset[0,2\rho^{-2}]$. The spectrum of a bounded self-adjoint operator divides into the discrete and essential spectrum. The discrete spectrum is the set of isolated eigenvalues of finite multiplicity and the essential spectrum is everything else. It turns out that the essential spectrum of $\Delta^\rho_X$, if nonempty, is the single point $\{\rho^{-2}\}$. In our set-up we are concerned only with parts of the spectrum that are substantially below this value. The following Theorem \ref{t:second} is a non-technical implication of our main results.
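In the discrete case the integral in \eqref{1.21.12} becomes a finite sum, so the $\rho$-Laplacian of a finite mm-space can be written down as a matrix. A minimal sketch (the point set, weights, and helper names are illustrative):

```python
import numpy as np

def rho_laplacian_matrix(points, weights, rho):
    """Matrix of the rho-Laplacian of eq. (1.21.12) on a finite mm-space:
    (Delta u)_i = (1 / (rho^2 mu(B_rho(x_i)))) * sum_{j : d(x_i,x_j) < rho} w_j (u_i - u_j)."""
    D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    N = D < rho                                # open-ball adjacency (includes i itself)
    ball = N @ weights                         # mu(B_rho(x_i))
    L = -(N * weights[None, :])                # off-diagonal entries: -w_j for neighbors
    np.fill_diagonal(L, np.diag(L) + ball)     # diagonal entry becomes mu(B_rho(x_i)) - w_i
    return L / (rho**2 * ball[:, None])

# A toy discrete mm-space: equally spaced points on the unit circle, unit weights
n = 40
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
pts = np.column_stack([np.cos(t), np.sin(t)])
L = rho_laplacian_matrix(pts, np.ones(n), rho=0.5)
print(np.allclose(L @ np.ones(n), 0))          # constants lie in the kernel; prints True
```

For unit weights this matrix is exactly a (normalized) graph Laplacian of the $\rho$-neighborhood graph, matching the discussion above.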
It asserts that under suitable conditions lower parts of $\rho$-Laplacian spectra converge as the metric-measure spaces in question converge. Denote by $\lambda_k(X,\rho)$ the $k$-th smallest eigenvalue of $\Delta^\rho_X$ (counting multiplicities). \begin{theorem} \label{t:second} Let a sequence $\{X_n\}$ of metric-measure spaces converge to $X=(X,d,\mu)$ in the sense of Fukaya \cite{Fu}. Assume that $d$ is a length metric and $X$ satisfies a version of the Bishop--Gromov inequality: there is $\Lambda>0$ such that \begin{equation}\label{e:bishop-gromov} \frac{\mu(B_{r_1}(x))}{\mu(B_{r_2}(x))} \le \left(\frac{r_1}{r_2}\right)^{\!\!\Lambda} \end{equation} for all $x\in X$ and $r_1\ge r_2>0$. Then $$ \lambda_k(X,\rho) = \lim_{n\to\infty} \lambda_k(X_n,\rho) $$ for all $\rho>0$ and all $k$ such that $\lambda_k(X,\rho)<\rho^{-2}$. \end{theorem} Theorem \ref{t:second} follows from the more general but more technical Theorem \ref{t:stability} which works for larger classes of spaces and provides estimates on the rate of convergence. Now we discuss the hypotheses of Theorem \ref{t:second}. We emphasize that ``niceness'' conditions in Theorem \ref{t:second} are imposed only on the limit space~$X$. The spaces $X_n$ need not satisfy them. In particular, $X_n$ can be discrete approximations of~$X$. Thus, for every ``nice'' space $X$, the spectrum of $\Delta^\rho_X$ can be approximated by spectra of graph Laplacians. By definition, a metric is a \textit{length metric} if every pair of points can be connected by a geodesic segment realizing the distance between the points. This condition can be relaxed to an assumption about intersection of balls, see the BIV condition in Definition \ref{d:BIV}. The classic Bishop--Gromov inequality deals with volumes of balls in Riemannian manifolds with Ricci curvature bounded from below. It implies \eqref{e:bishop-gromov} with $\Lambda$ depending on the dimension of the manifold, its diameter, and the lower bound for Ricci curvature.
(In the case of non-negative Ricci curvature $\Lambda$ is just equal to the dimension.) The Bishop--Gromov inequality holds for spaces with generalized Ricci curvature bounds in the sense of Lott--Sturm--Villani \cite{Sturm06,Sturm06-II,LV}. Other classes of spaces satisfying \eqref{e:bishop-gromov} include Finsler manifolds, dimensionally homogeneous polyhedral spaces, Carnot groups, etc. The Fukaya convergence combines the Gromov--Hausdorff convergence of metric spaces and weak convergence of measures. See Definition \ref{d:fukaya} for details. Beware of the fact that, unlike most definitions used in this paper, the Fukaya convergence is sensitive to open sets of zero measure. See the example in Section~\ref{sec:zero-limit-measure} for an illustration of this subtle issue. Actually in this paper we use another notion of metric-measure approximation which is more suitable to the problem. It allows us to obtain nice estimates on the difference of eigenvalues of $\rho$-Laplacians of close metric-measure spaces. \subsection*{Structure of the paper} In Section \ref{sec:prelim} we introduce some notation and collect basic facts about $\rho$-Laplacians. In Section \ref{sec:examples} we discuss some examples. In Section \ref{sec:wasserstein} we introduce a notion of ``closeness'' of metric-measure spaces, which we call $(\ep,\de)$-closeness. Loosely speaking, metric-measure spaces $X$ and $Y$ are $(\ep,\de)$-close if $Y$ is a result of imprecise measurements in $X$ where distances are measured with a small additive inaccuracy $\ep$ and volumes are measured with a small relative inaccuracy~$\de$. The formal definition is a combination of Gromov--Hausdorff distance and a ``relative'' version of Prokhorov distance between measures. The main results of Section \ref{sec:wasserstein} characterize $(\ep,\de)$-closeness in terms of measure transports and Wasserstein distances. 
In Section~\ref{sec:stability} we prove Theorem \ref{t:stability} which is a quantitative version of Theorem~\ref{t:second}. It asserts that, if metric-measure spaces $X$ and $Y$ are $(\ep,\de)$-close and satisfy certain conditions, then the lower parts of the spectra of their $\rho$-Laplacians are also close. The conditions in Theorem \ref{t:stability} can be thought of as a ``discretized'' version of those from Theorem~\ref{t:second}. In Section \ref{sec:TXY} we give a direct construction of a map between $L^2(X)$ and $L^2(Y)$ realizing the spectral closeness in Theorem \ref{t:stability}. The results of Section~\ref{sec:TXY} complement Theorem \ref{t:stability} but they are not used in its proof. In Section \ref{sec:weyl} we obtain Weyl-type estimates for the number of eigenvalues in an interval $[0,c\rho^{-2}]$ where $c<1$ is a suitable constant. See Theorems \ref{t:ess-spectrum} and \ref{t:many-eigenvalues}. For a Riemannian manifold our estimates are of the same order as those given by Weyl's asymptotic formula for Beltrami--Laplace eigenvalues. However our estimates are formulated in terms of packing numbers rather than the dimension and total volume. \subsection*{Acknowledgement} We are grateful to Y.~Eliashberg, L.~Polterovich, and an anonymous editor of ``Geometry \& Topology'' for pointing out weaknesses of a preliminary version of the paper. We did our best to fix these issues. We are grateful to F.~Galvin for helping us to find references. \section{Preliminaries} \label{sec:prelim} In the sequel we abbreviate metric-measure spaces as ``mm-spaces''. We use notation $d_X$ and $\mu_X$ for the metric and measure of a mm-space $X$. In some cases we consider semi-metrics, that is, distances are allowed to be zero. All definitions apply to semi-metrics with no change. To simplify computations and incorporate constructions from \cite{BIK14} into the present set-up, we introduce weighted $\rho$-Laplacians.
Let $X=(X,d,\mu)$ be a mm-space and $\phi\colon X\to\R_+$ a positive measurable function bounded away from 0 and $\infty$ on the support of $\mu$. We call $\phi$ the \textit{normalizing function}. We define a \emph{weighted $\rho$-Laplacian} $\Delta^\rho_\phi$ by $$ \Delta^\rho_\phi u(x)= \frac{1}{\phi(x)} \int_{B_\rho(x)} \bigl(u(x)-u(y)\bigr)\, d\mu(y) . $$ We regard $\Delta^\rho_\phi$ as an operator on $L^2(X)$. Note that this operator does not change if one replaces $X$ by the support of its measure. Definition \ref{d:rho-laplacian} corresponds to the normalizing function $\phi(x)=\rho^2\mu(B_\rho(x))$. Due to compactness of $X$, this function is bounded away from 0 and $\infty$ on the support of $\mu$. The operator $\Delta^\rho_\phi$ is self-adjoint on $L^2(X,\phi\mu)$ where $\phi\mu$ is the measure with density $\phi$ w.r.t.~$\mu$. Indeed, for $u,v\in L^2(X)$ we have \begin{align*} \langle \Delta^\rho_\phi u, v\rangle_{L^2(X,\phi\mu)} &= \int_X \phi(x) v(x) \frac1{\phi(x)} \int_{B_\rho(x)} \bigl(u(x)-u(y)\bigr)\, d\mu(y) d\mu(x) \\ &= \iint_{d(x,y)<\rho} v(x)\big(u(x)-u(y)\bigr)\, d\mu(y) d\mu(x) \end{align*} and the right-hand side is clearly symmetric in $u$ and $v$. The corresponding Dirichlet energy form $$ D^\rho_X(u) = \langle\Delta^\rho_\phi u, u\rangle_{L^2(X,\phi\mu)} $$ does not depend on $\phi$ and is given by \begin{equation}\label{e:dirichlet} D^\rho_X(u) = \frac12 \iint_{d(x,y) <\rho} \bigl(u(x)- u(y)\bigr)^2 \, d\mu(x) d\mu(y). \end{equation} Note that the Dirichlet form is non-negative. When dealing with $\rho$-Laplacians from Definition \ref{d:rho-laplacian}, that is when $\phi(x)=\rho^2\mu(B_\rho(x))$, we denote the measure $\phi\mu$ by $\mu^\rho$. We denote the scalar product and norm in $L^2(X,\mu^\rho)$ by $\langle\cdot,\cdot\rangle_{X^\rho}$ and $\|\cdot\|_{X^\rho}$, resp. 
That is, \begin{gather} \label{e:murho} d\mu^\rho(x)/d\mu(x)=\rho^2\mu(B_\rho(x)) , \\ \label{e:L2rho} \langle u, v \rangle_{X^\rho}= \rho^2 \int_X \mu(B_\rho(x)) u(x) v(x)\, d\mu(x) , \\ \label{e:L2norm} \|u\|^2_{X^\rho}= \rho^2 \int_X \mu(B_\rho(x)) u(x)^2\, d\mu(x) . \end{gather} The norm of $\Delta^\rho_X$ in $L^2(X,\mu^\rho)$ is bounded by $2\rho^{-2}$. Indeed, \begin{align*} D_X^\rho(u) &= \frac12 \iint_{d(x,y)<\rho} (u(x)-u(y))^2 \,d\mu(x)d\mu(y) \\ &\le \iint_{d(x,y)<\rho} (u(x)^2+u(y)^2) \,d\mu(x)d\mu(y) \\ &= 2\int_X \mu(B_\rho(x)) u(x)^2 \,d\mu(x) = 2\rho^{-2} \|u\|^2_{X^\rho} . \end{align*} Thus the spectrum of $\Delta^\rho_X$ is contained in $[0,2\rho^{-2}]$. The $\rho$-Laplacian $\Delta^\rho_X$ can be rewritten in the form $\Delta^\rho_X u = \rho^{-2} u - A u$ where $$ Au(x) = \frac{1}{\rho^2\mu(B_\rho(x))} \int_{B_\rho(x)} u(y)\, d\mu(y) . $$ Observe that $A$ is an integral operator with a bounded kernel. Hence it is a compact operator on $L^2(X)$. It follows that the essential spectrum of $\Delta^\rho_X$ is the same as that of the operator $u\mapsto\rho^{-2} u$. Namely it is empty if $L^2(X)$ is finite-dimensional and the single point $\{\rho^{-2}\}$ otherwise. A similar argument shows that the essential spectrum of $\Delta^\rho_\phi$ is located between the infimum and supremum of the function $x\mapsto \mu(B_\rho(x))/\phi(x)$. \begin{notation} \label{n:lambda} Let $\lambda_\infty=\lambda_\infty(X,\rho,\phi)$ be the infimum of the essential spectrum of $\Delta^\rho_\phi$. If there is no essential spectrum (that is, if $L^2(X)$ is finite-dimensional), we set $\lambda_\infty=\infty$. For every $k\in\N$ we define $\lambda_k=\lambda_k(X,\rho,\phi)\in[0,+\infty]$ as follows. First let $0=\lambda_1\le\lambda_2\le\dots$ be the eigenvalues of $\Delta^\rho_\phi$ (with multiplicities) which are smaller than $\lambda_\infty$. If there are only finitely many such eigenvalues, we set $\lambda_k=\lambda_\infty$ for all larger values of~$k$.
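The bound $\operatorname{spec}(\Delta^\rho_X)\subset[0,2\rho^{-2}]$ can be checked numerically on a finite mm-space, where the Dirichlet form \eqref{e:dirichlet} and the norm \eqref{e:L2norm} become finite sums. A minimal sketch (the random point cloud and weights are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, rho = 30, 0.4
pts = rng.uniform(size=(n, 2))                        # an illustrative point cloud
w = rng.uniform(0.5, 1.5, size=n)                     # point masses mu({x_i})
N = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) < rho
ball = N @ w                                          # mu(B_rho(x_i))

def dirichlet(u):
    """D_X^rho(u): half the double sum of (u_i - u_j)^2 w_i w_j over d(x_i, x_j) < rho."""
    return 0.5 * np.sum(N * (u[:, None] - u[None, :])**2 * np.outer(w, w))

def norm_rho_sq(u):
    """||u||^2_{X^rho} = rho^2 * sum_i mu(B_rho(x_i)) u_i^2 w_i."""
    return rho**2 * np.sum(ball * u**2 * w)

# Rayleigh quotients of random functions never exceed 2 * rho^{-2}
ratios = [dirichlet(u) / norm_rho_sq(u) for u in rng.standard_normal((200, n))]
print(max(ratios) <= 2 / rho**2)                      # prints True
```

The inequality used in the code is exactly the discrete instance of the chain of estimates displayed above.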
We abuse the language and refer to $\lambda_k(X,\rho,\phi)$ as the \emph{$k$-th eigenvalue} of $\Delta^\rho_\phi$ even though it may be equal to $\lambda_\infty$. For the $\rho$-Laplacian $\Delta^\rho_X$ we drop $\phi$ from the notation and denote the $k$-th eigenvalue by $\lambda_k(X,\rho)$. \end{notation} By the standard Min-Max Theorem, for every $k\in\N$ we have \begin{equation} \label{e:minmax-phi} \lambda_k(X,\rho,\phi) =\inf_{H^k} \sup_{u \in H^k\setminus\{0\}} \left(\frac{ D^\rho_X(u)}{\|u\|_{L^2(X,\phi\mu)}^2}\right) \end{equation} and in particular \begin{equation} \label{e:minmax} \lambda_k(X,\rho) =\inf_{H^k} \sup_{u \in H^k\setminus\{0\}} \left(\frac{ D^\rho_X(u)}{\|u\|_{X^\rho}^2}\right) \end{equation} where the infima are taken over all $k$-dimensional subspaces $H^k$ of $L^2(X)$. This formula is our main tool for eigenvalue estimates. We emphasize that it holds in both cases $\lambda_k<\lambda_\infty$ and $\lambda_k=\lambda_\infty$. As an immediate application, we observe that the eigenvalues are stable with respect to small relative changes of the normalizing function and measure. If $\mu_1$ and $\mu_2$ are measures on $X$ satisfying $a\mu_1\le\mu_2\le b\mu_1$ where $a$ and $b$ are positive constants, then for the corresponding mm-spaces $X_1=(X,d,\mu_1)$ and $X_2=(X,d,\mu_2)$ we have \begin{equation}\label{e:la-change-mu} \frac{a^2}{b^2} \le \frac{\lambda_k(X_2,\rho)}{\lambda_k(X_1,\rho)} \le \frac{b^2}{a^2} \end{equation} for every $k\in\N$. This follows from \eqref{e:minmax} and the inequalities \begin{gather*} a^2\le D^\rho_{X_2}(u)/D^\rho_{X_1}(u)\le b^2 , \\ a^2\le \|u\|^2_{X_2^\rho}/\|u\|^2_{X_1^\rho}\le b^2 , \end{gather*} which hold for all $u\in L^2(X)$. Note that multiplying the measure by a constant does not change the $\rho$-Laplacian. 
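On a finite mm-space the min-max formula \eqref{e:minmax} is the variational characterization of a generalized symmetric eigenvalue problem $Ku=\lambda Mu$, where $K$ is the matrix of the Dirichlet form and $M$ the diagonal Gram matrix of $\langle\cdot,\cdot\rangle_{X^\rho}$, so the eigenvalues $\lambda_k(X,\rho)$ can be computed directly. A sketch (the point cloud and the uniform weights $1/n$ are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n, rho = 25, 0.6
pts = rng.uniform(size=(n, 2))                        # an illustrative point cloud
w = np.full(n, 1.0 / n)                               # point masses mu({x_i})
N = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) < rho
ball = N @ w                                          # mu(B_rho(x_i))

# Stiffness matrix of the Dirichlet form, u^T K u = D_X^rho(u), and diagonal
# Gram matrix of <.,.>_{X^rho}; the eigenvalues solve K u = lambda M u.
K = np.diag(w * ball) - w[:, None] * N * w[None, :]
M = np.diag(rho**2 * ball * w)
M_inv_half = np.diag(1.0 / np.sqrt(np.diag(M)))
lam = np.sort(np.linalg.eigvalsh(M_inv_half @ K @ M_inv_half))
print(lam[0], lam[-1])   # lambda_1 = 0 up to rounding; all values lie in [0, 2 rho^{-2}]
```

Since $M$ is diagonal, the generalized problem reduces to an ordinary symmetric eigenproblem for $M^{-1/2}KM^{-1/2}$, whose Rayleigh quotients are precisely $D^\rho_X(u)/\|u\|_{X^\rho}^2$.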
For any two normalizing functions $\phi_1$ and $\phi_2$ \eqref{e:minmax-phi} implies that \begin{equation}\label{e:la-change-phi} \inf_{x\in X}\frac{\phi_1(x)}{\phi_2(x)} \le \frac{\lambda_k(X,\rho,\phi_2)}{\lambda_k(X,\rho,\phi_1)} \le \sup_{x\in X}\frac{\phi_1(x)}{\phi_2(x)}. \end{equation} For nice spaces such as Riemannian manifolds, the volume of small $\rho$-balls is almost constant as a function of the center of the ball. In such cases one can consider a weighted $\rho$-Laplacian with a constant normalizing function and conclude that its spectrum is close to that of $\Delta^\rho_X$ (cf.\ Section \ref{subsec:riem}). \section{Examples} \label{sec:examples} \subsection{Riemannian manifolds} \label{subsec:riem} The paper \cite{BIK14} deals with the case of $X$ being a closed Riemannian $n$-manifold $M$ or a discrete approximation of~$M$. In the terminology of Section \ref{sec:prelim}, the object studied in \cite{BIK14} is a weighted $\rho$-Laplacian with constant normalization function $\phi(x)=\phi_\rho:=\frac{\nu_n\rho^{n+2}}{2n+4}$. Here $\nu_n$ is the volume of the unit ball in $\R^n$. As $\rho\to 0$, we have $\mu(B_\rho(x))\sim\nu_n\rho^n$ uniformly in $x\in M$. Hence $\phi_\rho/\rho^2\mu(B_\rho(x))\to \frac1{2n+4}$. Thus, by \eqref{e:la-change-phi}, the spectrum of $\Delta^\rho_X$ is close to that of $\Delta^\rho_\phi$ multiplied by $\frac1{2n+4}$. The results of \cite{LM} imply that the spectrum of $\Delta^\rho_X$, where $X$ is a Riemannian manifold, converges as $\rho\to 0$ to the Beltrami--Laplace spectrum multiplied by $\frac1{2n+4}$. In \cite{BIK14} similar convergence is shown for graph Laplacians arising from discrete approximations of a Riemannian manifold. Theorem \ref{t:second} generalizes this result. Note that the scalar product $\langle\cdot,\cdot\rangle_{X^\rho}$ and Dirichlet form $D^\rho_X$ tend to 0 as $\rho\to 0$. To make them comparable with the Riemannian counterparts one multiplies them by $\rho^{-n-2}$. 
\subsection{Finsler manifolds} Let $X$ be a closed Finsler manifold $M$ with smooth and quadratically convex Finsler structure. First recall that there are many reasonable notions of volume for Finsler manifolds, see e.g.~\cite{Thompson}. Different volume definitions obviously lead to different $\rho$-Laplacians. Still the issues we study in this paper are not sensitive to the choice of volume. Consider a tangent space $V=T_xM$ at a point $x\in M$. It is equipped with a norm $\|\cdot\|=\|\cdot\|_x$ which is the restriction of the Finsler structure. Let $B$ be the unit ball of $\|\cdot\|$. There is a unique ellipsoid $E\subset V$ such that the integrals of every quadratic form over $B$ and over $E$ coincide. Rescaling $E$ by a suitable factor (depending on the chosen Finsler volume definition) and regarding the resulting ellipsoid as the unit ball of a Euclidean metric, one obtains a Euclidean metric $|\cdot|$ on $V$ whose $\rho$-Laplacian coincides with that of $\|\cdot\|$ on the set of quadratic forms on $V$. Applying this construction to every $x\in M$ one obtains a family of quadratic forms on the tangent spaces, thus defining a Riemannian metric on~$M$. It is very likely that the spectra of $\rho$-Laplacians of the Finsler metric converge as $\rho\to 0$ to the Beltrami--Laplace spectrum of this Riemannian metric. \subsection{Piecewise Riemannian polyhedra} Let $X$ be a finite simplicial complex whose faces are equipped with Riemannian metrics which agree on the intersections of faces. First assume that $X$ is dimensionally homogeneous of dimension $n$. In this case one can mostly follow the analysis of the Riemannian case. The difference is that, due to boundary terms, the Riemannian Dirichlet energy $\int_X\|du\|^2$ is not always equal to $\langle \Delta u,u\rangle$ where $\Delta$ is the Beltrami--Laplace operator. They are however equal on the subspace of functions satisfying Kirchhoff's condition.
This condition says that, at every point in an $(n-1)$-dimensional face, the sum of normal derivatives in the adjacent $n$-dimensional faces equals~0. For instance, if $X$ is a manifold with boundary, this boils down to the Neumann boundary condition. It is plausible that the spectra of $\Delta^\rho_X$ converge as $\rho\to 0$ to the spectrum of the Beltrami--Laplace operator with Kirchhoff's condition. The problem can also be studied for polyhedral spaces with varying local dimension. For instance, consider a two-dimensional membrane with a one-dimensional string attached. One can equip this space with a measure which is one-dimensional on the string and two-dimensional on the membrane. Unlike the previous examples, we cannot apply our results to this example because it does not satisfy the doubling condition, which is violated near the point where the string is attached to the membrane. It is rather intriguing whether Theorem~\ref{t:second} still holds in this situation. \subsection{Disappearing measure support} \label{sec:zero-limit-measure} The following example shows that one has to be careful with limits of mm-spaces if the limit measure does not have full support. Let $X$ be a disjoint union of two compact Riemannian manifolds $M_1$ and $M_2$. Define a distance $d$ on $X$ as follows: in each component it is the standard Riemannian distance, and the distance between the components is a large constant. For each $t\ge 0$ define a measure $\mu_t$ on $X$ by $\mu_t = \operatorname{vol}_{M_1}+t\operatorname{vol}_{M_2}$ where $\operatorname{vol}_{M_i}$, $i=1,2$, are Riemannian volumes on the components. Then $\mu_t$ weakly converges to $\mu_0=\operatorname{vol}_{M_1}$ as $t\to 0$. For every $t>0$, locally constant functions form a two-dimensional subspace in $L^2(X)$. Hence the zero eigenvalue of $\Delta^\rho_{X_t}$ has multiplicity~2. Thus $\lambda_2(X_t,\rho)=0$ for all $t>0$.
On the other hand, $\lambda_2(X_0,\rho)>0$ since the $\rho$-Laplacian of $X_0$ is the same as that of the component $M_1$. Thus $\lambda_2(X_0,\rho)\ne\lim_{t\to0}\lambda_2(X_t,\rho)$. A formal reason for the failure of Theorem \ref{t:second} in this example is that the Bishop--Gromov condition \eqref{e:bishop-gromov} is not satisfied. Another issue is that $d$ is not a length metric. The latter can be fixed by connecting $M_1$ and $M_2$ by a long segment and taking the induced intrinsic metric. \section{Relative Prokhorov and Wasserstein closeness} \label{sec:wasserstein} This section is devoted to the notion of $(\ep,\de)$-closeness that we use in our spectrum stability results. This notion is introduced in Definition \ref{d:mm-close}. The main results of this section are Proposition \ref{p:prokhorov-vs-wasserstein} and Corollary \ref{c:mm-wasserstein}, which characterize $(\ep,\de)$-closeness in terms of measure transport. We use the following notation. For a metric space $(X,d)$, a set $A\subset X$, and $r\ge 0$, we denote by $A^r$ the closed $r$-neighborhood of $A$. That is, $A^r=\{x\in X:\, d(x,A)\le r\}$. \begin{definition}[relative Prokhorov closeness] \label{d:relative-prokhorov} Let $Z$ be a metric space, $\mu_1$, $\mu_2$ finite Borel measures on $Z$, and $\ep,\de\ge 0$. We say that $\mu_1$ and $\mu_2$ are \emph{relative $(\ep,\de)$-close} if for every Borel set $A \subset Z$, $$ e^\de \mu_1(A^\ep) \geq \mu_2(A) \quad\text{and}\quad e^\de \mu_2(A^\ep) \geq \mu_1(A) . $$ \end{definition} This definition is similar to that of Prokhorov's distance on the space of measures \cite{Prokhorov}. The crucial difference is that we use multiplicative corrections rather than additive ones. The topology arising from Definition \ref{d:relative-prokhorov} is stronger than the standard weak topology on the space of measures on $Z$. If, however, we restrict ourselves to the subspace of measures with full support, then the topologies are the same.
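To make Definition \ref{d:relative-prokhorov} concrete, here is a brute-force check for finite atomic measures (a minimal sketch; the function name and interface are ours, not part of the paper). For atomic measures it suffices to test sets of atoms: for a general Borel $A$, replacing $A$ by its intersection with the atoms only decreases the right-hand sides while shrinking $A^\ep$, so the worst cases are unions of atoms.

```python
import itertools
import math

def relatively_close(pts1, w1, pts2, w2, dist, eps, delta):
    """Brute-force check of relative (eps, delta)-closeness for two
    finite atomic measures on a common metric space.

    pts1, pts2: atoms of the two measures; w1, w2: their weights;
    dist: the metric. Only subsets of atoms need to be tested."""
    atoms = list(set(pts1 + pts2))

    def measure(pts, w, s):  # mass the atomic measure (pts, w) gives to s
        return sum(wi for p, wi in zip(pts, w) if p in s)

    def nbhd(a):  # closed eps-neighborhood of a, restricted to the atoms
        return {q for q in atoms if any(dist(q, p) <= eps for p in a)}

    for r in range(1, len(atoms) + 1):
        for a in itertools.combinations(atoms, r):
            # the two inequalities of the definition, for A = set(a)
            if math.exp(delta) * measure(pts1, w1, nbhd(a)) < measure(pts2, w2, set(a)):
                return False
            if math.exp(delta) * measure(pts2, w2, nbhd(a)) < measure(pts1, w1, set(a)):
                return False
    return True
```

For instance, unit masses at $0$ and $1$ versus unit masses at $0.1$ and $0.9$ on the real line are relative $(0.1,0)$-close but not relative $(0.05,0)$-close; scaling the second measure by $e^{0.2}$ destroys $(0.1,0)$-closeness but preserves $(0.1,0.2)$-closeness, illustrating the multiplicative correction.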
We combine Definition \ref{d:relative-prokhorov} with the notion of Gromov--Hausdorff (GH) distance analogously to the definition of Gromov--Wasserstein distances as in e.g.\ \cite{Sturm06-I}. Recall that metric spaces $(X,d_X)$ and $(Y,d_Y)$ are $\ep$-close in the GH distance iff the disjoint union $X\sqcup Y$ can be equipped with a (semi-)metric $d$ extending $d_X$ and $d_Y$ and such that $X$ and $Y$ are contained in the $\ep$-neighborhoods of each other with respect to~$d$. For discussion of GH distance see e.g.~\cite{BBI}. \begin{definition} \label{d:mm-close} Let $\ep,\de\ge 0$. We say that mm-spaces $X=(X,d_X,\mu_X)$ and $Y=(Y,d_Y,\mu_Y)$ are \emph{mm-relative $(\ep,\de)$-close} if there exists a semi-metric $d$ on $X\sqcup Y$ extending $d_X$ and $d_Y$ and such that $\mu_X$ and $\mu_Y$ are relative $(\ep,\de)$-close in $(X\sqcup Y,d)$ in the sense of Definition \ref{d:relative-prokhorov}. In the sequel we abbreviate ``mm-relative $(\ep,\de)$-close'' to just $(\ep,\de)$-close. \end{definition} Observe that, if the measures have full support, then $(\ep,\delta)$-closeness of mm-spaces $(X,d_X,\mu_X)$ and $(Y,d_Y,\mu_Y)$ implies that the metric spaces $(X,d_X)$ and $(Y,d_Y)$ are $\ep$-close in the sense of Gromov--Hausdorff distance. The following example motivated Definition \ref{d:mm-close} as well as a number of other definitions and assumptions in this paper. \begin{example}[discretization, cf.~\cite{BIK14}] \label{x:discretization} Let $X$ be a mm-space and $Y$ a finite $\ep$-net in $X$. We can associate a small basin in $X$ to every point of $Y$ and move all measure from each basin to its point. More precisely, there is a partition of $X$ into measurable sets $V_y$, $y\in Y$, such that each $V_y$ is contained in the ball $B_\ep(y)$. We assign the weight equal to $\mu_X(V_y)$ to each $y$ thus defining a measure $\mu_Y$ on $Y$. 
If we regard $\mu_Y$ as a measure on $X$, then it is relative $(\ep,0)$-close to $\mu_X$ in the sense of Definition \ref{d:relative-prokhorov}. We can also regard $Y$ equipped with $\mu_Y$ as a separate mm-space. Then it is $(\ep,0)$-close to $X$ in the sense of Definition \ref{d:mm-close}. Now consider the result of some ``measurement errors'' in~$Y$. Namely, let $Y'=(Y,d_Y',\mu_Y')$ be a mm-space with the same point set $Y$ and such that $|d_Y'-d_Y^{}|<\ep$ and $e^{-\de}\le \mu_Y'/\mu_Y^{}\le e^\de$. Then $Y'$ is $(2\ep,\de)$-close to~$X$. \end{example} Now we show that Fukaya convergence (used in Theorem \ref{t:second}) implies convergence with respect to $(\ep,\de)$-closeness, provided that the limit measure has full support. Recall that Fukaya convergence is defined as follows. \begin{definition}[cf.~{\cite[(0.2)]{Fu}}]\label{d:fukaya} A sequence $X_n=(X_n,d_n,\mu_n)$ of mm-spaces converges to a mm-space $X=(X,d,\mu)$ in the sense of Fukaya if the following holds. There exist a sequence $\sigma_n\to 0$ of positive numbers and a sequence $f_n\colon X_n\to X$ of measurable maps such that \begin{enumerate} \item $f_n(X_n)$ is a $\sigma_n$-net in $X$; \item $|d(f_n(x),f_n(y))-d_n(x,y)| < \sigma_n$ for all $x,y\in X_n$; \item the push-forward measures $(f_n)^{}_*\mu_n$ weakly converge to~$\mu$. \end{enumerate} \end{definition} \begin{proposition} \label{p:fukaya} Let $X_n$ converge to $X$ in the sense of Fukaya and assume that $\mu_X$ has full support. Then there exist sequences $\ep_n,\de_n\to 0$ such that $X_n$ is $(\ep_n,\de_n)$-close to~$X$ for all $n$. \end{proposition} \begin{proof} Let $X$, $X_n$, $\sigma_n$, $f_n$ be as above. The existence of $f_n$ implies that $X_n$ is $2\sigma_n$-close to $X$ in the GH distance, see e.g.\ \cite[Cor.~7.3.28]{BBI}. Moreover, there is a metric $d_n'$ on the disjoint union $X\sqcup X_n$ such that $d_n'$ extends both $d$ and $d_n$ and \begin{equation}\label{e:fukaya3} d_n'(x,f_n(x)) \le \sigma_n \end{equation} for all $x\in X_n$.
It suffices to prove that for every $\ep,\de>0$ the spaces $X_n$ eventually get $(\ep,\de)$-close to~$X$. Fix $\ep$ and $\de$. Let $\nu=\nu(\ep,\de)>0$ be so small that $$ (e^\de-1)(\mu(B_{\ep/3}(x))-\nu) \ge \nu $$ for all $x\in X$. Such $\nu$ exists since the measures of $(\ep/3)$-balls in $X$ are bounded away from~0. This is where we use the assumption that $\mu$ has full support. Since $(f_n)_*^{}\mu_n$ weakly converges to $\mu$, the Prokhorov distance between $(f_n)_*^{}\mu_n$ and $\mu$ tends to~0, see \cite{Prokhorov}. This implies that for all sufficiently large~$n$ we have \begin{gather} \label{e:fukaya1} \mu(A^{\ep/3}) + \nu > \mu_n(f_n^{-1}(A)) , \\ \label{e:fukaya2} \mu_n(f_n^{-1}(A^{\ep/3})) + \nu > \mu(A) \end{gather} for every Borel set $A\subset X$. Now consider the disjoint union $Z_n=X\sqcup X_n$ equipped with the metric $d_n'$. The measures $\mu$ and $\mu_n$ can be regarded as measures on~$Z_n$. If $\sigma_n<\ep/3$ then by \eqref{e:fukaya3} we have $f_n^{-1}(A^{\ep/3}) \subset A^\ep$ for all $A\subset X$ and $(f_n(B))^{\ep/3}\subset B^\ep$ for all $B\subset X_n$. Here the neighborhoods are taken in $(Z_n,d_n')$. These inclusions along with \eqref{e:fukaya1} and \eqref{e:fukaya2} imply that \begin{gather} \label{e:fukaya4} \mu(A^\ep)+\nu > \mu_n(A), \\ \label{e:fukaya5} \mu_n(A^\ep)+\nu > \mu(A) \end{gather} for every Borel set $A\subset Z_n$ provided that $n$ is large enough. Let $A\subset Z_n$ be a nonempty set and $\sigma_n<\ep/6$. Then there exists $x\in X$ such that $A^\ep$ contains the ball $B_{5\ep/6}(x)$. This fact is trivial if $A\cap X\ne\emptyset$, otherwise it follows from \eqref{e:fukaya3}. Let $D=B_{\ep/3}(x)\cap X$. By \eqref{e:fukaya3} we have $$ f_n^{-1}(D^{\ep/3})\subset f_n^{-1}(B_{2\ep/3}(x))\subset B_{5\ep/6}(x) \subset A^\ep . $$ Therefore \begin{equation}\label{e:fukaya7} \mu_n(A^\ep) \ge \mu_n(f_n^{-1}(D^{\ep/3})) > \mu(D)-\nu \end{equation} by \eqref{e:fukaya2}. 
Since $D$ is an $(\ep/3)$-ball in $X$, by the definition of $\nu$ we have $$ (e^\de-1)(\mu(D)-\nu) \ge \nu . $$ This and \eqref{e:fukaya7} imply that $(e^\de-1)\mu_n(A^\ep)\ge\nu$ and therefore \begin{equation}\label{e:fukaya8} e^\de \mu_n(A^\ep) \ge \mu_n(A^\ep)+\nu > \mu(A) \end{equation} by \eqref{e:fukaya5}. Similarly, since $D\subset A^\ep$, we have $\mu(A^\ep)\ge\mu(D)$. This inequality and \eqref{e:fukaya4} imply that \begin{equation}\label{e:fukaya9} e^\de \mu(A^\ep) \ge \mu_n(A) \end{equation} in the same way as \eqref{e:fukaya7} and \eqref{e:fukaya5} imply \eqref{e:fukaya8}. Now \eqref{e:fukaya8} and \eqref{e:fukaya9} imply that $X_n$ and $X$ are $(\ep,\de)$-close. The proposition follows. \end{proof} Now we reformulate $(\ep,\de)$-closeness in terms of measure transport. Recall that a \textit{measure coupling} (or a \emph{measure transportation plan}) between measure spaces $(X,\mu_X)$ and $(Y,\mu_Y)$ is a measure $\ga$ on $X\times Y$ whose marginals on $X$ and $Y$ coincide with $\mu_X$ and $\mu_Y$, resp. The \emph{marginals} are push-forwards of $\ga$ by the coordinate projections from $X\times Y$ to the factors. Obviously a measure coupling exists if and only if $\mu_X(X)=\mu_Y(Y)$. In our set-up $X$ and $Y$ are compact subsets of a metric space $(Z,d)$ and all measures are finite Borel. In this case $\mu_X$ and $\mu_Y$ can be regarded as measures on $Z$ and, assuming that $\mu_X(X)=\mu_Y(Y)$, one defines the $L^\infty$-Wasserstein distance $W_\infty(\mu_X,\mu_Y)$ as the minimum of all $\ep\ge 0$ such that there exists a coupling $\ga$ between $\mu_X$ and $\mu_Y$ such that $d(x,y)\le\ep$ for $\ga$-almost all pairs $(x,y)\in X\times Y$. (The minimum exists due to the weak compactness of the space of measures.) For discussion of Wasserstein distances, see e.g.~\cite{Villani}. \begin{proposition}[approximate coupling] \label{p:prokhorov-vs-wasserstein} Let $Z$ be a compact metric space and $\mu_X$, $\mu_Y$ finite Borel measures on $Z$. 
Then the following two conditions are equivalent: \begin{enumerate} \item[(i)] $\mu_X$ and $\mu_Y$ are relative $(\ep,\de)$-close (see Definition \ref{d:relative-prokhorov}); \item[(ii)] there exist measures $\widetilde\mu_X$ and $\widetilde\mu_Y$ on $Z$ such that $$ e^{-\de}\mu_X \le \widetilde\mu_X \le \mu_X, \qquad e^{-\de}\mu_Y \le \widetilde\mu_Y \le \mu_Y $$ and $W_\infty(\widetilde\mu_X,\widetilde\mu_Y) \le \ep $. \end{enumerate} In particular, $\mu_X$ and $\mu_Y$ are relative $(\ep,0)$-close iff $W_\infty(\mu_X,\mu_Y) \le \ep$. \end{proposition} For comparison of mm-spaces we have the following corollary, which avoids explicitly mentioning metrics on disjoint unions. \begin{corollary} \label{c:mm-wasserstein} Let $X,Y$ be compact mm-spaces and $\ep,\de\ge 0$. Then the following two conditions are equivalent: \begin{enumerate} \item[(i)] $X$ and $Y$ are mm-relative $(\ep,\de)$-close (see Definition \ref{d:mm-close}). \item[(ii)] There exist measures $\widetilde\mu_X$ on $X$ and $\widetilde\mu_Y$ on $Y$ such that \begin{equation}\label{e:mm-wasserstein1} e^{-\de}\mu_X \le \widetilde\mu_X \le \mu_X, \qquad e^{-\de}\mu_Y \le \widetilde\mu_Y \le \mu_Y \end{equation} and a measure coupling $\ga$ between $(X,\widetilde\mu_X)$ and $(Y,\widetilde\mu_Y)$ such that \begin{equation}\label{e:mm-wasserstein2} |d_X(x_1,x_2)-d_Y(y_1,y_2)| \le 2\ep \end{equation} for all pairs $(x_1,y_1),(x_2,y_2)\in\operatorname{supp}(\ga)$. \end{enumerate} \end{corollary} In particular, $(\ep,0)$-closeness of mm-spaces is equivalent to $\ep$-closeness with respect to the $L^\infty$ Gromov--Wasserstein distance. The proofs of Proposition \ref{p:prokhorov-vs-wasserstein} and Corollary \ref{c:mm-wasserstein} occupy the rest of this section. We prove the proposition by means of discrete approximations. We begin with a version of it for bipartite graphs. Let $G=(V,E)$ be a bipartite graph with partite sets $M$ and $W$.
That is, the set $V$ of vertices is the union of disjoint sets $M$ and $W$, and each edge connects a vertex from $M$ to a vertex from $W$. (Exercise: guess where the notations $M$ and $W$ came from.) For a set $A\subset V$ we denote by $N_G(A)$ its graph neighborhood, i.e., the set of vertices adjacent to at least one vertex from $A$. A \textit{matching} in $G$ is a set of pairwise disjoint edges. The classic Hall's Marriage Theorem \cite{Hall} states the following. If for every set $A\subset M$ one has $|N_G(A)|\ge|A|$, then there exists a matching that covers $M$ (that is, the set of endpoints of the matching contains $M$). For discussion of Hall's Theorem and related topics see e.g.\ \cite[Ch.~7]{Ore}. We need the following generalization of Hall's Theorem. \begin{lemma}[Dulmage--Mendelsohn \cite{DM}] \label{l:marriage} Let $G=(V,E)$ be a bipartite graph with partite sets $M$ and $W$. Let $M_0\subset M$ and $W_0\subset W$ be sets such that, for every subset $A$ of either $M_0$ or $W_0$, one has $|N_G(A)|\ge |A|$. Then $G$ contains a matching that covers $M_0\cup W_0$. \end{lemma} This lemma is proven as Theorem 1 in \cite{DM}. It can also be seen as a combination of Hall's Theorem and Ore's Mapping Theorem, see \cite[Theorem 7.4.1]{Ore} or \cite[Theorem 2.3.1]{Mirsky}. The next lemma is a ``continuous'' generalization of Lemma \ref{l:marriage} where the finite sets $M$ and $W$ are replaced by metric spaces $X$ and $Y$, and a closed set $E\subset X\times Y$ plays the role of the set of edges of the graph. \begin{lemma}\label{l:Hall} Let $X$ and $Y$ be compact metric spaces. Let $\mu_X^{}$, $\mu'_X$ be finite Borel measures on $X$ and $\mu_Y^{}$, $\mu'_Y$ finite Borel measures on $Y$ such that $\mu_X^{}\ge\mu'_X$ and $\mu_Y^{}\ge\mu'_Y$. Let $E \subset X \times Y$ be a closed set.
Suppose that, for any Borel sets $A\subset X$ and $B \subset Y$ one has \begin{equation} \label{1.23.12} \mu_Y^{}(A^E) \geq \mu'_X(A),\quad \mu_X^{}(B^E) \geq \mu'_Y(B), \end{equation} where \begin{align*} A^E &= \{y \in Y: \text{there is $x\in A$ such that $(x,y)\in E$} \}, \\ B^E &= \{x \in X: \text{there is $y\in B$ such that $(x,y)\in E$} \}. \end{align*} Then there exist measures $\widetilde \mu_X$, $\widetilde \mu_Y$ such that \begin{equation} \label{4.23.12} \mu_X'\leq \widetilde \mu_X^{} \leq \mu_X^{}, \quad \mu_Y'\leq \widetilde \mu_Y^{} \leq \mu_Y^{}, \end{equation} and a measure coupling $\ga$ between $\widetilde \mu_X$ and $\widetilde \mu_Y$ such that $\operatorname{supp}(\ga) \subset E$. \end{lemma} \begin{proof} First we prove the lemma in the special case when $X$ and $Y$ are finite sets. By means of approximation we may assume that all values of the measures $\mu_X^{},\mu_X',\mu_Y^{},\mu_Y'$ are rational numbers. Multiplying by a common denominator we make them integers. Then we derive the statement from Lemma \ref{l:marriage} as follows. Split each point $x\in X$ into $\mu_X^{}(x)$ points of unit weight (do not forget that $\mu_X^{}(x)\in\Z$). Paint $\mu'_X(x)$ of these points in red and the remaining $\mu_X^{}(x)-\mu'_X(x)$ points in green. Similarly, split each point $y\in Y$ into $\mu_Y^{}(y)$ points of which $\mu'_Y(y)$ are red and the rest are green. Let $M$ and $W$ be the sets of points descending from points of $X$ and $Y$, resp. Let $M_0$ and $W_0$ be the sets of red points from $M$ and $W$, resp. Now construct a bipartite graph $G$ with partite sets $M$ and $W$ as follows. For $x\in X$ and $y\in Y$ such that $(x,y)\in E$, connect every descendant of $x$ to every descendant of $y$ by an edge in $G$. If $(x,y)\notin E$ then there are no edges between descendants of $x$ and~$y$. The relation \eqref{1.23.12} implies that the graph $G$ satisfies the assumptions of Lemma \ref{l:marriage}. 
Therefore $G$ contains a matching $E_0$ covering $M_0\cup W_0$. For each pair $(x,y)\in X\times Y$ define a point measure $\ga(x,y)$ equal to the number of edges from $E_0$ connecting descendants of $x$ and~$y$. Then $\ga$ is a desired coupling between some measures $\widetilde\mu_X$ and $\widetilde\mu_Y$ satisfying \eqref{4.23.12}. Thus we are done with the discrete case. Passing to the general case, fix a sequence $\sigma_n\to 0$ of positive numbers. For each $n$, divide $X$ and $Y$ into a finite number of Borel subsets $\Omega_X^i$, $\Omega_Y^j$ with $\operatorname{diam}(\Omega_X^i)<\sigma_n$ and $\operatorname{diam}(\Omega_Y^j)<\sigma_n$. Choose points $x_i\in\Omega_X^i,\, y_j\in\Omega_Y^j$ and associate to them point measures $\mu_{i,n}^{}=\mu_X(\Omega^i_X),\, \mu'_{i,n}=\mu'_X(\Omega^i_X)$ and $\mu_{j,n}^{}=\mu_Y(\Omega^j_Y),\, \mu'_{j,n}=\mu'_Y(\Omega^j_Y)$. This defines atomic measures $\mu_{X,n}^{},\mu'_{X,n}$ on $X$ and $\mu_{Y,n}^{},\mu'_{Y,n}$ on $Y$ and the relation \eqref{1.23.12} holds for these discrete measures with $E_n$ in place of $E$, where $E_n$ is the $2\sigma_n$-neighborhood of $E$ with respect to the product distance on $X \times Y$. By the discrete case proven above, there is a measure $\ga_n$ on $X\times Y$ whose marginals $\widetilde\mu_{X,n}$ and $\widetilde\mu_{Y,n}$ satisfy \begin{equation} \label{2.23.12} \mu'_{X, n} \leq \widetilde \mu_{X, n}^{} \leq \mu_{X, n}^{}, \quad \mu'_{Y, n} \leq \widetilde \mu_{Y, n}^{} \leq \mu_{Y, n}^{} \end{equation} and such that $\operatorname{supp}(\ga_n) \subset E_n$. By the weak compactness of the space of measures we may assume that the sequences $\widetilde\mu_{X,n}^{}$, $\widetilde\mu_{Y,n}^{}$ and $\ga_n$ weakly converge to some measures $\widetilde\mu_X^{}$, $\widetilde\mu_Y^{}$ and~$\ga$, resp. Then $\operatorname{supp}(\ga)\subset E$ and $\ga$ is a measure coupling between $\widetilde\mu_X$ and $\widetilde\mu_Y$. 
Also observe that the measures $\mu_{X,n}^{}$, $\mu'_{X,n}$, $\mu_{Y,n}^{}$, $\mu'_{Y,n}$ weakly converge to $\mu_X^{}$, $\mu'_X$, $\mu_Y^{}$, $\mu'_Y$, resp. This and \eqref{2.23.12} imply that $\widetilde\mu_X^{}$ and $\widetilde\mu_Y^{}$ satisfy \eqref{4.23.12}. \end{proof} \begin{proof}[Proof of Proposition \ref{p:prokhorov-vs-wasserstein}] Let $X=\operatorname{supp}(\mu_X)$ and $Y=\operatorname{supp}(\mu_Y)$. The implication (i)$\Rightarrow$(ii) follows from Lemma \ref{l:Hall} by substituting $\mu'_X=e^{-\de}\mu_X$, $\mu'_Y=e^{-\de}\mu_Y$, and $$ E = \{ (x,y)\in X\times Y :\, d(x,y)\le\ep \} . $$ To prove the implication (ii)$\Rightarrow$(i), let $\widetilde\mu_X$ and $\widetilde\mu_Y$ be as in Proposition \ref{p:prokhorov-vs-wasserstein}(ii), and let $\ga$ be a measure coupling between $\widetilde\mu_X$ and $\widetilde\mu_Y$ realizing the $L^\infty$-Wasserstein distance. Then $\operatorname{supp}(\ga)\subset E$. This implies that, for every Borel set $A\subset X$, $$ \widetilde\mu_X(A) = \ga(A\times Y) \le \ga(X\times(A^\ep\cap Y)) = \widetilde\mu_Y(A^\ep) , $$ where the inequality follows from the inclusion $$ (A\times Y)\cap E \subset X\times(A^\ep\cap Y) . $$ Therefore $$ \mu_X(A) \le e^\de\,\widetilde\mu_X(A) \le e^\de\,\widetilde\mu_Y(A^\ep) \le e^\de\mu_Y(A^\ep) . $$ Similarly $\mu_Y(B) \le e^\de\mu_X(B^\ep)$ for every Borel set $B\subset Y$. Thus $\mu_X$ and $\mu_Y$ are relative $(\ep,\de)$-close. \end{proof} \begin{proof}[Proof of Corollary \ref{c:mm-wasserstein}] (i)$\Rightarrow$(ii): By definition, there exists a semi-metric $d$ on the disjoint union $Z=X\sqcup Y$ such that $\mu_X$ and $\mu_Y$, regarded as measures on~$Z$, are relative $(\ep,\de)$-close. Proposition \ref{p:prokhorov-vs-wasserstein} implies that there exist measures $\widetilde\mu_X$ and $\widetilde\mu_Y$ satisfying \eqref{e:mm-wasserstein1} and a measure coupling $\ga$ between them such that $d(x,y)\le\ep$ for all $(x,y)\in\operatorname{supp}(\ga)$.
This property and the triangle inequality imply \eqref{e:mm-wasserstein2}. (ii)$\Rightarrow$(i): The proof is similar to that of \cite[Theorem 7.3.25]{BBI}. Let $\ga$ be a measure coupling between $\widetilde\mu_X$ and $\widetilde\mu_Y$ such that \eqref{e:mm-wasserstein1} and \eqref{e:mm-wasserstein2} are satisfied. Define a semi-metric $d$ on $X\sqcup Y$ by setting $d|_{X\times X}=d_X$, $d|_{Y\times Y}=d_Y$, and $$ d(x,y) = \inf_{(x',y')\in\operatorname{supp}(\ga)} \left\{ d_X(x,x')+d_Y(y,y')+\ep \right\} . $$ The triangle inequality for $d$ easily follows from \eqref{e:mm-wasserstein2}, thus $d$ is indeed a semi-metric. The definition of $d$ implies that $d(x,y)=\ep$ if $(x,y)\in\operatorname{supp}(\ga)$. Therefore $W_\infty(\widetilde\mu_X,\widetilde\mu_Y)\le\ep$ where $\widetilde\mu_X$ and $\widetilde\mu_Y$ are regarded as measures on~$X\sqcup Y$. By Proposition \ref{p:prokhorov-vs-wasserstein} this implies that $\mu_X$ and $\mu_Y$ are relative $(\ep,\de)$-close and hence the mm-spaces $X$ and $Y$ are $(\ep,\de)$-close. \end{proof} \section{Stability of eigenvalues} \label{sec:stability} In this section we formulate and prove Theorem \ref{t:stability}, which is one of the main results of this paper. Informally, it says that if two mm-spaces are close, then the lower parts of the spectra of their $\rho$-Laplacians are close. First we introduce the conditions on mm-spaces needed in the theorem. \begin{definition}[SLV condition] \label{d:SLV} Let $X$ be a mm-space and $\Lambda,\rho,\ep>0$. We say that $X$ satisfies the \emph{spherical layer volume condition} with parameters $\Lambda$, $\rho$, $\ep$, if for every $x\in \operatorname{supp}(\mu_X)$, \begin{equation}\label{e:SLV} \frac{\mu(B_{\rho+\ep}(x) \setminus B_{\rho}(x))}{\mu(B_{\rho}(x))} \leq \Lambda \,\frac{\ep}{\rho}. \end{equation} We abbreviate this condition as $SLV(\Lambda,\rho,\ep)$. \end{definition} \begin{definition}[BIV condition] \label{d:BIV} Let $X$ be a mm-space, $0<\ep\le\rho/2$ and $\Lambda>0$.
We say that $X$ satisfies the \emph{ball intersection volume condition} with parameters $\Lambda$, $\rho$, and $\ep$, if for all $x,y\in\operatorname{supp}(\mu_X)$ such that $d_X(x,y)\le\rho+\ep$, $$ \mu(B_{\rho}(x)\cap B_{\rho}(y)) \ge \Lambda^{-1} \mu(B_{\rho+\ep}(x)). $$ We abbreviate this condition as $BIV(\Lambda,\rho,\ep)$. \end{definition} Note that the Bishop--Gromov inequality \eqref{e:bishop-gromov} implies $SLV(\Lambda',\rho,\ep)$ for all $\rho\ge\ep>0$ with $\Lambda'$ depending on the parameter $\Lambda$ of \eqref{e:bishop-gromov}. For $\ep=\rho$, the SLV condition \eqref{e:SLV} turns into a doubling condition: $$ \mu(B_{2\rho}(x)) \le 2\Lambda\mu(B_\rho(x)) . $$ If $d$ is a length metric and this doubling condition holds for all $\rho>0$, then $X$ satisfies $BIV(\Lambda',\rho,\ep)$ for all $\rho>0$ and $\ep\le\rho/2$, where $\Lambda'$ depends only on $\Lambda$. This follows from the fact that the intersection $B_\rho(x)\cap B_\rho(y)$ contains a ball of radius $\frac{\rho-\ep}2$. The next lemma shows that the conditions SLV and BIV are in a sense stable with respect to the $(\ep,\de)$-closeness introduced in Section \ref{sec:wasserstein}. \begin{lemma} \label{l:conditions-stable} Let $X$ and $Y$ be $(\ep,\de)$-close mm-spaces (see Definition \ref{d:mm-close}) where $0<\ep\le\rho/12$. Then: 1. If $X$ satisfies $SLV(\Lambda,\rho-2\ep,5\ep)$, then $Y$ satisfies $SLV(6e^{2\de}\Lambda,\rho,\ep)$. 2. If $X$ satisfies $BIV(\Lambda,\rho-2\ep,5\ep)$, then $Y$ satisfies $BIV(e^{2\de}\Lambda,\rho,\ep)$. \end{lemma} \begin{proof} We may assume that $\mu_X$ and $\mu_Y$ have full support. By definition, there is a metric $d$ on the disjoint union $Z=X\sqcup Y$ such that $\mu_X$ and $\mu_Y$ are relative $(\ep,\de)$-close in $(Z,d)$. Throughout this proof all balls, neighborhoods, etc., are considered in the space $(Z,d)$. Since the measures have full support, the Hausdorff distance between $X$ and $Y$ is no greater than~$\ep$.
That is, for every $y\in Y$ there exists $x\in X$ such that $d(x,y)\le\ep$, and vice versa. Let $y\in Y$. Take $x\in X$ such that $d(x,y)\le\ep$. Recall that $A^\ep$ denotes the closed $\ep$-neighborhood of a set $A$. The triangle inequality implies that $$ (B_{\rho-2\ep}(x))^\ep\subset B_{\rho}(y) $$ and $$ (B_{\rho+\ep}(y)\setminus B_{\rho}(y))^\ep \subset B_{\rho+3\ep}(x)\setminus B_{\rho-2\ep}(x) . $$ These inclusions and the relative $(\ep,\de)$-closeness of $\mu_X$ and $\mu_Y$ imply that $$ \mu_Y(B_{\rho}(y)) \ge e^{-\de} \mu_X(B_{\rho-2\ep}(x)) $$ and $$ \mu_Y (B_{\rho+\ep}(y)\setminus B_{\rho}(y)) \le e^\de\mu_X(B_{\rho+3\ep}(x)\setminus B_{\rho-2\ep}(x)). $$ Therefore $$ \frac{\mu_Y(B_{\rho+\ep}(y) \setminus B_{\rho}(y))}{\mu_Y(B_{\rho}(y))} \le e^{2\de}\,\frac{\mu_X(B_{\rho+3\ep}(x) \setminus B_{\rho-2\ep}(x))}{\mu_X(B_{\rho-2\ep}(x))} \le e^{2\de}\Lambda \,\frac{5\ep}{\rho-2\ep} \le 6e^{2\de}\Lambda\,\frac\ep\rho $$ and the first claim of the lemma follows. To prove the second claim, consider points $y_1,y_2\in Y$ such that $d(y_1,y_2)\le\rho+\ep$. We have to prove that $$ Q := \frac{\mu_Y(B_{\rho}(y_1)\cap B_{\rho}(y_2))}{\mu_Y(B_{\rho+\ep}(y_1))} \ge (e^{2\de}\Lambda)^{-1} . $$ Choose $x_1,x_2\in X$ such that $d(x_1,y_1)\le\ep$ and $d(x_2,y_2)\le\ep$. The triangle inequality implies that $d(x_1,x_2)\le\rho+3\ep$, $$ (B_{\rho+\ep}(y_1))^\ep \subset B_{\rho+3\ep}(x_1) $$ and $$ (B_{\rho-2\ep}(x_1)\cap B_{\rho-2\ep}(x_2))^\ep \subset B_{\rho}(y_1)\cap B_{\rho}(y_2) . $$ Therefore, by the relative $(\ep,\de)$-closeness of $\mu_X$ and $\mu_Y$, $$ \mu_Y(B_{\rho+\ep}(y_1)) \le e^\de \mu_X(B_{\rho+3\ep}(x_1)) $$ and $$ \mu_Y(B_{\rho}(y_1)\cap B_{\rho}(y_2)) \ge e^{-\de}\mu_X(B_{\rho-2\ep}(x_1)\cap B_{\rho-2\ep}(x_2)) . $$ Hence $$ Q \ge e^{-2\de} \, \frac{\mu_X(B_{\rho-2\ep}(x_1)\cap B_{\rho-2\ep}(x_2))}{\mu_X(B_{\rho+3\ep}(x_1))} \ge e^{-2\de}\Lambda^{-1} $$ where the last inequality follows from the BIV condition for $X$. This finishes the proof of Lemma \ref{l:conditions-stable}.
\end{proof} Now we are in a position to state our main theorem. \begin{theorem}\label{t:stability} For every $\Lambda>0$ there exists $C=C(\Lambda)>0$ such that the following holds. If $X$ and $Y$ are mm-spaces which are $(\ep,\de)$-close and satisfy the conditions $SLV(\Lambda,\rho,2\ep)$ and $BIV(\Lambda,\rho,2\ep)$, $0\le\ep\le\rho/4$, $\de\ge 0$, then \begin{equation}\label{e:main-estimate} e^{-4\de}(1+C\ep/\rho)^{-1}\le\frac{\lambda_k(X,\rho)}{\lambda_k(Y,\rho)} \le e^{4\de}(1+C\ep/\rho) \end{equation} for all $k$ such that $\lambda_k(X,\rho)< e^{-4\de}(1+C\ep/\rho)^{-1}\rho^{-2}$. \end{theorem} The proof of Theorem \ref{t:stability} occupies the rest of this section. First we prove the theorem for $\de=0$ (see Proposition \ref{p:ep-0-close}). In this case Corollary \ref{c:mm-wasserstein} implies that the mm-spaces $X$ and $Y$ in question admit a measure coupling $\ga$ satisfying \eqref{e:mm-wasserstein2}. To estimate the difference between the eigenvalues of $\Delta^\rho_X$ and $\Delta^\rho_Y$, we transform $X$ to $Y$ in three steps. In the case when $X$ and $Y$ are discrete spaces these steps can be described as follows. First, we split each atom of $X$ into several points and distribute the measure between them. The distances between the descendants of each atom are set to be zero, so we obtain a semi-metric-measure space. Second, we ``transport'' the points to their destinations in $Y$. The formal meaning of this is that we keep the point set and the measure but change the distances between points. Finally, we glue together some points to obtain $Y$. The last step is inverse to the first one with $Y$ in place of $X$. Having provided this intuition in the discrete case, let us proceed with a formal construction of ``splitting''. It is slightly more cumbersome. Let $\ga$ be a measure coupling between mm-spaces $X$ and $Y$. Recall that $\ga$ is a measure on $X\times Y$ and for every Borel set $A \subset X$, \begin{equation} \label{4.21.12} \mu_X(A)= \ga(A \times Y).
\end{equation} Define a semi-metric $d_{X|X\times Y}$ on $X\times Y$ by $$ d_{X|X\times Y}((x_1, y_1), (x_2, y_2))=d_X(x_1, x_2). $$ The desired splitting of $X$ is the mm-space $X_\ga=(X\times Y,d_{X|X\times Y},\ga)$. We do not use the (non-Hausdorff) topology arising from the semi-metric $d_{X|X\times Y}$. We equip $X\times Y$ with the standard product Borel $\sigma$-algebra. An interested reader may check that the arguments below also apply if one replaces the semi-metric $d_{X|X\times Y}$ by a genuine metric $d$ defined by $$ d((x_1, y_1), (x_2, y_2))=\max\{d_X(x_1, x_2),c\rho^{-2}d_Y(y_1,y_2)\} $$ where $c$ is a sufficiently small constant, $0<c<1/\operatorname{diam}(Y)$. Applying Definition \ref{d:rho-laplacian} to $X_\ga$ we define the associated $\rho$-Laplacian $\Delta^\rho_{X_\ga}$. Even though $X_\ga$ is almost the same space as $X$, the spectrum of $\Delta^\rho_{X_\ga}$ may slightly differ from that of~$\Delta^\rho_X$. We compare the two spectra in the following lemma: \begin{lemma}\label{l:splitting} Let $X$ and $X_\ga$ be as above. Then $$ \operatorname{spec}(\Delta^\rho_{X_\ga})\subset\operatorname{spec}(\Delta^\rho_X)\cup\{\rho^{-2}\} . $$ Furthermore, every eigenvalue smaller than $\rho^{-2}$ has the same multiplicity in the two spectra. \end{lemma} \begin{proof} Consider a subspace $L\subset L^2(X\times Y,\ga)$ given by $L=\pi^*_X(L^2(X))$ where $\pi_X\colon X \times Y \to X$ is the coordinate projection. In other words, $L$ consists of functions which are constant on every fiber $\{x\}\times Y$, $x\in X$. Due to \eqref{4.21.12}, $\pi_X^*$ is a Hilbert space isomorphism between $L^2(X)$ and $L$. We decompose $L^2(X\times Y,\ga)$ into a direct sum $L\oplus L^\perp$. Loosely speaking, $L^\perp$ consists of functions which are orthogonal to constants in every fiber. More precisely, if $u\in L^\perp$ then \begin{equation}\label{e:splitting1} \int_{A\times Y} u\,d\ga = 0 \end{equation} for every Borel set $A\subset X$. 
The statement of the lemma is a consequence of the following three facts: \begin{enumerate} \item[(1)] $L$ and $L^\perp$ are invariant under $\Delta^\rho_{X_\ga}$; \item[(2)] $\pi_X^*$ provides an equivalence between $\Delta^\rho_X$ and $\Delta^\rho_{X_\ga}|_L$; \item[(3)] for every $u\in L^\perp$ we have $\Delta^\rho_{X_\ga}u=\rho^{-2}u$. \end{enumerate} To prove these facts, observe that a $\rho$-ball $B_\rho^{X_\ga}(x,y)$ of the semi-metric $d_{X|X\times Y}$ is of the form $$ B_\rho^{X_\ga}(x,y) = B_\rho^X(x) \times Y . $$ Hence for a function $u=\pi_X^*(v)\in L$, where $v\in L^2(X)$, we have \begin{align*} \Delta^\rho_{X_\ga}u(x,y) &= \frac{\rho^{-2}}{\ga(B^X_\rho(x)\times Y)}\int_{B^X_\rho(x)\times Y} [u(x,y)-u(x_1,y_1)] \,d\ga(x_1,y_1) \\ &= \frac{\rho^{-2}}{\mu_X(B^X_\rho(x))}\int_{B^X_\rho(x)} [v(x)-v(x_1)] \,d\mu_X(x_1) = \Delta^\rho_X v(x) \end{align*} where the second identity follows from \eqref{4.21.12}. Thus $\Delta^\rho_{X_\ga}(\pi_X^*(v))=\pi_X^*(\Delta^\rho_X v)$, proving (2) and the first part of (1). For every $u\in L^\perp$ we have \begin{align*} \Delta^\rho_{X_\ga}u(x,y) &= \frac{\rho^{-2}}{\ga(B^X_\rho(x)\times Y)}\int_{B^X_\rho(x)\times Y} [u(x,y)-u(x_1,y_1)] \,d\ga(x_1,y_1) \\ &= \rho^{-2} u(x,y) -\frac{\rho^{-2}}{\ga(B^X_\rho(x)\times Y)}\int_{B^X_\rho(x)\times Y} u(x_1,y_1) \,d\ga(x_1,y_1) \\ &= \rho^{-2} u(x,y) \end{align*} where the last identity follows from \eqref{e:splitting1}. This proves (3) and the second part of (1). \end{proof} The next lemma takes care of the second step of the construction, in which the distances are changed. \begin{lemma}\label{l:distance-change} Let $X_1=(X,d_1,\mu)$, $X_2=(X,d_2,\mu)$ be two mm-spaces with the same point set $X$ and measure $\mu$. Let $\Lambda\ge 1$, $0<\ep\le\rho/2$, and assume that $X_1$, $X_2$ satisfy the conditions $SLV(\Lambda,\rho,\ep)$ and $BIV(\Lambda,\rho,\ep)$. Also assume that \begin{equation}\label{e:dist-error} |d_1(x,y)-d_2(x,y)| \le \ep \end{equation} for all $x,y\in\operatorname{supp}(\mu)$.
Then, for every $k\in\N$, \begin{equation}\label{e:la-change} (1+C\ep/\rho)^{-1}\le\frac{\lambda_k(X_2,\rho)}{\lambda_k(X_1,\rho)} \le 1+C\ep/\rho \end{equation} where $C$ is a constant depending only on $\Lambda$. \end{lemma} \begin{proof} We may assume that \eqref{e:dist-error} holds for all $x,y\in X$, otherwise just replace $X$ by $\operatorname{supp}(\mu)$. We estimate the eigenvalues by means of the min-max formula \eqref{e:minmax}. For $i=1,2$, let $B^i_\rho(x)$ denote the $\rho$-ball of $d_i$ centered at $x\in X$, $\|\cdot\|_i=\|\cdot\|_{X_i^\rho}$ (see \eqref{e:L2norm}), and $D_i=D^\rho_{X_i}$ (see \eqref{e:dirichlet}). The only difference as we pass from $X_1$ to $X_2$ is that the balls $B^i_\rho(x)$ are different. The assumption \eqref{e:dist-error} implies that $B^1_\rho(x)\subset B^2_{\rho+\ep}(x)$ for every $x\in X$. This and the condition $SLV(\Lambda,\rho,\ep)$ for $X_2$ imply that $$ \frac{\mu(B^1_\rho(x))}{\mu(B^2_\rho(x))} \le 1+\frac{\mu(B^2_{\rho+\ep}(x)\setminus B^2_\rho(x))}{\mu(B^2_\rho(x))} \le 1+\Lambda\ep/\rho . $$ This and \eqref{e:L2norm} imply that \begin{equation}\label{e:dist-change0} \|u\|^2_1 \le (1+\Lambda\ep/\rho) \|u\|^2_2 \end{equation} for every $u\in L^2(X)$. For the Dirichlet forms we have \begin{align*} D_2(u) &= \iint_{d_2(x, y) <\rho} |u(x)-u(y)|^2 \, d\mu(x) d\mu(y) \\ &\leq \iint_{d_1(x, y) <\rho+\ep} |u(x)-u(y)|^2 \, d\mu(x) d\mu(y) \\ &= D_1(u) +\iint_{L} |u(x)-u(y)|^2 \, d\mu(x) d\mu(y), \end{align*} where $$ L=\{(x,y)\in X\times X : \rho \le d_1(x, y) <\rho+\ep \} . $$ Hence \begin{equation}\label{e:dist-change1} D_2(u)-D_1(u) \le \iint_{L} |u(x)-u(y)|^2 \, d\mu(x) d\mu(y) . \end{equation} Let us estimate the right-hand side of \eqref{e:dist-change1}. For every $(x,y)\in L$ consider the set $U(x,y)=B_\rho^1(x) \cap B_\rho^1(y)$. Recall that \begin{equation}\label{e:dist-change2} \mu(U(x,y)) \ge \Lambda^{-1} \max\{\mu(B^1_\rho(x)),\mu(B^1_\rho(y))\} \end{equation} by the condition $BIV(\Lambda,\rho,\ep)$ for $X_1$. 
For every $z \in U(x, y)$ we have $$ |u(x)-u(y)|^2 \leq 2 (|u(x)-u(z)|^2+|u(z)-u(y)|^2). $$ Integrating this inequality and taking into account \eqref{e:dist-change2} yields that \begin{align*} |u(x)-u(y)|^2 &\le \frac{2}{\mu(U(x,y))} \int_{U(x, y)} (|u(x)-u(z)|^2+|u(z)-u(y)|^2) \, d\mu(z) \\ &\le 2\Lambda \bigl(Q(x)+Q(y)\bigr) \end{align*} where $$ Q(x) = \frac1 {\mu(B^1_\rho(x))} \int_{B^1_\rho(x)} |u(x)-u(z)|^2\, d\mu(z) . $$ This and \eqref{e:dist-change1} imply that \begin{align*} D_2(u)-D_1(u) &\le 2\Lambda \iint_L (Q(x)+Q(y))\,d\mu(x)d\mu(y) \\ &= 4\Lambda \iint_L Q(x)\,d\mu(x)d\mu(y) \\ &= 4\Lambda \int_X \mu(B^1_{\rho+\ep}(x)\setminus B^1_\rho(x))\, Q(x) \,d\mu(x) \\ &\le \frac{4\Lambda^2\ep}{\rho} \int_X \mu(B^1_\rho(x))\, Q(x) \,d\mu(x) \\ &=\frac{4\Lambda^2\ep}{\rho} D_1(u) \end{align*} where the second inequality follows from the condition $SLV(\Lambda,\rho,\ep)$ for $X_1$. Thus \begin{equation}\label{e:dist-change-D} D_2(u) \le (1+4\Lambda^2\ep/\rho) D_1(u) . \end{equation} This and \eqref{e:dist-change0} imply that $$ \frac{D_2(u)}{\|u\|^2_2} \le (1+\Lambda\ep/\rho)(1+4\Lambda^2\ep/\rho) \frac{D_1(u)}{\|u\|^2_1} $$ for every $u\in L^2(X)\setminus\{0\}$. By the min-max formula \eqref{e:minmax} this implies the second inequality in \eqref{e:la-change} with $C=\Lambda+4\Lambda^2+4\Lambda^3$. Then the first inequality in \eqref{e:la-change} follows by swapping $X_1$ and $X_2$. \end{proof} The following proposition deals with the case $\de=0$ of Theorem \ref{t:stability}. \begin{proposition} \label{p:ep-0-close} For every $\Lambda>0$ there exists $C=C(\Lambda)>0$ such that the following holds. Let $0<\ep\le\rho/4$ and let $X$, $Y$ be mm-spaces that are $(\ep,0)$-close and satisfy the conditions $SLV(\Lambda,\rho,2\ep)$ and $BIV(\Lambda,\rho,2\ep)$. Then $$ (1+C\ep/\rho)^{-1} \le \frac{\lambda_k(X,\rho)}{\lambda_k(Y,\rho)} \le 1+C\ep/\rho $$ for every $k\in\N$ such that $\lambda_k(X,\rho)<(1+C\ep/\rho)^{-1}\rho^{-2}$. 
\end{proposition} \begin{proof} By Corollary \ref{c:mm-wasserstein}, there exists a measure coupling $\ga$ between $X$ and~$Y$ satisfying \eqref{e:mm-wasserstein2}. With this coupling, we construct mm-spaces $$ X_\ga=(X\times Y,d_{X|X\times Y},\ga) \quad\text{and}\quad Y_\ga=(X\times Y,d_{Y|X\times Y},\ga) $$ as explained in the text before Lemma \ref{l:splitting}. Then Lemma \ref{l:splitting} implies that $\lambda_k(X_\ga,\rho)=\lambda_k(X,\rho)$ provided that $\lambda_k(X,\rho)<\rho^{-2}$. The spaces $X_\ga$ and $Y_\ga$ inherit the conditions $SLV(\Lambda,\rho,2\ep)$ and $BIV(\Lambda,\rho,2\ep)$ from $X$ and $Y$. Due to \eqref{e:mm-wasserstein2}, $X_\ga$ and $Y_\ga$ satisfy the assumptions of Lemma \ref{l:distance-change} with $2\ep$ in place of $\ep$. Hence $$ (1+C\ep/\rho)^{-1} \le \frac{\lambda_k(X_\ga,\rho)}{\lambda_k(Y_\ga,\rho)} \le 1+C\ep/\rho $$ where $C$ is a constant depending only on $\Lambda$. If $\lambda_k(X_\ga,\rho)<(1+C\ep/\rho)^{-1}\rho^{-2}$, this implies that $\lambda_k(Y_\ga,\rho)<\rho^{-2}$ and therefore $\lambda_k(Y_\ga,\rho)=\lambda_k(Y,\rho)$ by Lemma~\ref{l:splitting}. The proposition follows. \end{proof} \begin{proof}[\bf Proof of Theorem \ref{t:stability}] Let $X$, $Y$ be as in Theorem \ref{t:stability}. By Corollary \ref{c:mm-wasserstein}, there exist measures $\widetilde\mu_X$ and $\widetilde\mu_Y$ satisfying \eqref{e:mm-wasserstein1} and such that the mm-spaces $\widetilde X=(X,d_X,\widetilde\mu_X)$ and $\widetilde Y=(Y,d_Y,\widetilde\mu_Y)$ are $(\ep,0)$-close. By \eqref{e:la-change-mu} we have \begin{equation}\label{e:tildeX} e^{-2\de} \le \frac{\lambda_k(\widetilde X,\rho)}{\lambda_k(X,\rho)} \le e^{2\de} \end{equation} and \begin{equation}\label{e:tildeY} e^{-2\de} \le \frac{\lambda_k(\widetilde Y,\rho)}{\lambda_k(Y,\rho)} \le e^{2\de} . \end{equation} Now we estimate the ratio $\lambda_k(\widetilde X,\rho)/\lambda_k(\widetilde Y,\rho)$. 
Due to \eqref{e:mm-wasserstein1}, $\widetilde X$ and $\widetilde Y$ satisfy the conditions $SLV(\Lambda',\rho,2\ep)$ and $BIV(\Lambda',\rho,2\ep)$ with $\Lambda'=e^\de\Lambda$. By Proposition \ref{p:ep-0-close} applied to $\widetilde X$ and $\widetilde Y$ we have \begin{equation}\label{e:tilderatio} (1+C\ep/\rho)^{-1} \le \frac{\lambda_k(\widetilde X,\rho)}{\lambda_k(\widetilde Y,\rho)} \le 1+C\ep/\rho \end{equation} provided that $\lambda_k(\widetilde X,\rho) < (1+C\ep/\rho)^{-1}\rho^{-2}$. Here $C$ is a constant depending only on $\Lambda$. The desired estimate \eqref{e:main-estimate} follows from \eqref{e:tilderatio}, \eqref{e:tildeX} and \eqref{e:tildeY}. \end{proof} \begin{proof}[\bf Proof of Theorem \ref{t:second}] Let $X=(X,d,\mu)$ and $X_n=(X_n,d_n,\mu_n)$ be as in Theorem \ref{t:second}. The Bishop--Gromov condition \eqref{e:bishop-gromov} implies that $\mu$ has full support. By Proposition \ref{p:fukaya} it follows that $X_n$ is $(\ep_n,\de_n)$-close to $X$ where $\ep_n,\de_n\to 0$. Fix $\rho>0$ and assume that $\ep_n<\rho/24$. As explained after Definition \ref{d:BIV}, the assumption that $d$ is a length metric and \eqref{e:bishop-gromov} imply that $X$ satisfies $SLV(\Lambda',r,\ep)$ and $BIV(\Lambda',r,\ep)$ for all $r>0$ and $\ep\le r/2$, where $\Lambda'$ depends only on $\Lambda$. By Lemma \ref{l:conditions-stable} it follows that $X_n$ satisfies $SLV(\Lambda'',\rho,2\ep_n)$ and $BIV(\Lambda'',\rho,2\ep_n)$ for some $\Lambda''$ depending only on $\Lambda$. Now Theorem \ref{t:stability} implies that, for some $C=C(\Lambda)$, $$ e^{-4\de_n}(1+C\ep_n/\rho)^{-1} \le \frac{\lambda_k(X_n,\rho)}{\lambda_k(X,\rho)} \le e^{4\de_n}(1+C\ep_n/\rho) $$ for all $n,k$ such that $\lambda_k(X,\rho)<e^{-4\de_n}(1+C\ep_n/\rho)^{-1}\rho^{-2}$. Thus $\lambda_k(X_n,\rho)\to\lambda_k(X,\rho)$ as $n\to\infty$. 
\end{proof} \section{Transport of $\rho$-Laplacians} \label{sec:TXY} In this section we further analyze the structures that appeared in the proof of Theorem \ref{t:stability}. Our goal is to construct a map $T_{XY}\colon L^2(X)\to L^2(Y)$ which shows ``almost equivalence'' of the $\rho$-Laplacians $\Delta^\rho_X$ and $\Delta^\rho_Y$. See Proposition \ref{p:TXY} for a precise formulation. Let $X$, $Y$ be as in Theorem \ref{t:stability}. As in the proof of Theorem \ref{t:stability}, let $\ga$ be a measure coupling provided by Corollary \ref{c:mm-wasserstein} and $\widetilde\mu_X$, $\widetilde\mu_Y$ the marginals of~$\ga$. The coordinate projection $\pi_X\colon X\times Y\to X$ determines two maps $$ I_X\colon L^2(X,\widetilde\mu_X)\to L^2(X\times Y,\ga) $$ and $$ P_X\colon L^2(X\times Y,\ga)\to L^2(X,\widetilde\mu_X) $$ which are dual to each other. Namely, $I_X=\pi_X^*$ is the map given by $$ (I_Xu)(x,y) = u(x), \qquad u\in L^2(X),\ x\in X,\ y\in Y . $$ Note that $I_X$ is an isometric embedding of $L^2(X,\widetilde\mu_X)$ into $L^2(X\times Y,\ga)$. Let $L_X\subset L^2(X\times Y,\ga)$ be the image of $I_X$. Then $P_X$ is the composition of the orthogonal projection onto $L_X$ and the map $I_X^{-1}\colon L_X\to L^2(X)$. Loosely speaking, $P_X$ sends each function on $X\times Y$ to the family of its average values over the fibers $\{x\}\times Y$, $x\in X$. More precisely, by the disintegration theorem (see \cite{Pachl} or \cite[Theorem 452I]{Fremlin}), for a.e.\ $x\in X$ there is a measure $\nu_x$ on $Y$ such that $$ \gamma(A \times B)= \int_A \nu_x(B)\, d\widetilde\mu_X(x) $$ for all Borel sets $A \subset X$ and $B \subset Y$. Then, for a.e.\ $x\in X$, \begin{equation}\label{e:PX} (P^{}_X\varphi)(x) = \int_{Y} \varphi(x, y)\, d\nu_x(y) . \end{equation} Since $\widetilde\mu_X$ is a marginal measure of $\ga$, \eqref{e:PX} implies that $P_X \circ I_X=\operatorname{id}$. Similarly one defines maps $I_Y$, $P_Y$ and a subspace $L_Y$. We introduce a map $T_{XY}\colon L^2(X)\to L^2(Y)$ by $T_{XY}=P_Y\circ I_X$. 
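For finite mm-spaces the maps $I_X$, $P_X$, $P_Y$ and $T_{XY}$ reduce to matrix operations determined by the coupling $\ga$, which makes their basic identities easy to check by hand. The sketch below is our own toy illustration (the coupling matrix is chosen arbitrarily); it verifies that $P_X\circ I_X=\operatorname{id}$, that $T_{XY}$ maps constants to constants, and that fiberwise averaging is a contraction in the plain $L^2$ norms.

```python
import numpy as np

# Toy finite model: X has 3 points, Y has 2 points, and gamma is a
# 3 x 2 coupling matrix; its row/column sums are the marginals.
gamma = np.array([[0.2, 0.1],
                  [0.1, 0.3],
                  [0.2, 0.1]])
m, k = gamma.shape
mu_X = gamma.sum(axis=1)     # marginal tilde-mu_X
mu_Y = gamma.sum(axis=0)     # marginal tilde-mu_Y

def I_X(u):
    """Lift u in L^2(X) to X x Y: (I_X u)(x, y) = u(x)."""
    return np.repeat(u[:, None], k, axis=1)

def P_X(v):
    """Average v over the fibers {x} x Y w.r.t. the disintegration nu_x."""
    return (gamma * v).sum(axis=1) / mu_X

def P_Y(v):
    """Average v over the fibers X x {y}."""
    return (gamma * v).sum(axis=0) / mu_Y

def T_XY(u):
    """The transport map T_XY = P_Y o I_X."""
    return P_Y(I_X(u))

u = np.array([1.0, -2.0, 0.5])
```

Here the $x$-th row of $\ga$, normalized by $\widetilde\mu_X(x)$, plays the role of the disintegration measure $\nu_x$ in \eqref{e:PX}.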
By \eqref{e:PX}, for $u \in L^2(X)$, $$ (T^{}_{XY}u)(y) = \int_{X} u(x) \, d\nu_y, $$ where $\nu_y$ is defined similarly to $\nu_x$ and the integral in the right-hand side exists for a.e.~$y\in Y$. The main result of this section is the following proposition. \begin{proposition} \label{p:TXY} For every $\Lambda>0$ there exists $C=C(\Lambda)>0$ such that the following holds. Let $X$, $Y$, $\rho$, $\ep$, $\Lambda$ be as in Theorem \ref{t:stability}. Then for every $u\in L^2(X)$ the map $T_{XY}$ defined above satisfies \begin{equation} \label{e:TXY01} A^{-1}\|u\|^2_{X^\rho} - A\rho^2 D^\rho_X(u) \le\|T^{}_{XY}u\|^2_{Y^\rho} \le A\|u\|^2_{X^\rho} , \end{equation} \begin{equation}\label{e:TXY02} D^\rho_Y(T^{}_{XY}u) \le A D^\rho_X(u) , \end{equation} and \begin{equation}\label{e:TXY03} \| T_{YX}(T_{XY}u) - u \|^2_{X^\rho} \le A\rho^2 D^\rho_X(u) , \end{equation} where $$ A = e^{\de}(1+C\ep/\rho) . $$ \end{proposition} We are interested in the situation when $\de$ and $\ep/\rho$ are small. Then $A$ is close to~1 and the cumbersome formulas \eqref{e:TXY01} and \eqref{e:TXY03} can be informally interpreted in the following way. At not too high energy levels (that is, if the Dirichlet energy of a unit vector $u$ is substantially smaller than $\rho^{-2}$), the operator $T_{XY}$ almost preserves the norm and the inner product by \eqref{e:TXY01} and $T_{YX}\circ T_{XY}$ is close to identity by \eqref{e:TXY03}. \begin{proof}[Proof of Proposition \ref{p:TXY}] Let $\ga$, $\widetilde\mu_X$, $\widetilde\mu_Y$ be as above. Consider mm-spaces $\widetilde X=(X,d_X,\widetilde\mu_X)$ and $\widetilde Y=(Y,d_Y,\widetilde\mu_Y)$, the corresponding $\rho$-Laplacians, norms $\|\cdot\|_{\widetilde X^\rho}$ and $\|\cdot\|_{\widetilde Y^\rho}$, and Dirichlet forms $D^\rho_{\widetilde X}$ and $D^\rho_{\widetilde Y}$ (see \eqref{e:L2norm} and \eqref{e:dirichlet}). 
As in Section \ref{sec:stability} we equip $X\times Y$ with two semi-distances $d_{X|X\times Y}$ and $d_{Y|X\times Y}$ and denote by $\widetilde X_\ga$ and $\widetilde Y_\ga$ the corresponding mm-spaces (see the proof of Proposition \ref{p:ep-0-close}). These mm-spaces determine $\rho$-Laplacians $\Delta^\rho_{\widetilde X_\ga}$ and $\Delta^\rho_{\widetilde Y_\ga}$, scalar products $\langle\cdot,\cdot\rangle_{\widetilde X_\ga^\rho}$ and $\langle\cdot,\cdot\rangle_{\widetilde Y_\ga^\rho}$, and Dirichlet forms $D^\rho_{\widetilde X_\ga}$ and $D^\rho_{\widetilde Y_\ga}$ (see \eqref{e:L2rho} and \eqref{e:dirichlet}). The structures introduced above satisfy the following properties (see the proof of Lemma \ref{l:splitting}): \begin{itemize} \item $L_X$ and $L_X^\perp$ are orthogonal with respect to $\langle\cdot,\cdot\rangle_{\widetilde X_\ga^\rho}$; \item $I_X$ is an isometric embedding with respect to norms $\|\cdot\|_{\widetilde X^\rho}$ and $\|\cdot\|_{\widetilde X_\ga^\rho}$; \item $I_X$ preserves the Dirichlet form $D^\rho_{\widetilde X}$, that is $D^\rho_{\widetilde X_\ga}(I_Xu)=D^\rho_{\widetilde X}(u)$ for all $u\in L^2(X)$; \item $L_X$ and $L_X^\perp$ are invariant under $\Delta^\rho_{\widetilde X_\ga}$ and hence they are orthogonal with respect to $D^\rho_{\widetilde X_\ga}$. \end{itemize} Recall that $P_X$ is the composition of the orthogonal projection to $L_X$ and the map $I_X^{-1}$. Hence $P_X$ increases neither norms nor Dirichlet forms. Similar properties hold for $Y$ in place of $X$. As in the proof of Lemma \ref{l:distance-change}, for every $v\in L^2(X\times Y,\ga)$ we have (see \eqref{e:dist-change0} and \eqref{e:dist-change-D}) \begin{equation}\label{e:TXY-norm} A_1^{-1}\le \|v\|^2_{\widetilde X_\ga^\rho} \big/ \|v\|^2_{\widetilde Y_\ga^\rho} \le A_1 \end{equation} and \begin{equation}\label{e:TXY-D} A_2^{-1}\le D^\rho_{\widetilde X_\ga}(v) \big/ D^\rho_{\widetilde Y_\ga}(v) \le A_2 \end{equation} where $A_1=1+\Lambda\ep/\rho$ and $A_2=1+4\Lambda^2\ep/\rho$. 
Let $u\in L^2(X)$ and $v=I_X(u)$. Then \begin{equation} \label{e:TXY1} \|T^{}_{XY}u\|^2_{\widetilde Y^\rho} = \|P_Yv\|^2_{\widetilde Y^\rho} \le \|v\|^2_{\widetilde Y_\ga^\rho} \le A_1\|v\|^2_{\widetilde X_\ga^\rho} =A_1\|u\|^2_{\widetilde X^\rho} . \end{equation} Now we estimate $\|T_{XY}u\|^2_{Y^\rho}$ from below. Decompose $v$ as $v=v_1+v_2$ where $v_1\in L_Y$ and $v_2\in L_Y^\perp$. As shown in the proof of Lemma \ref{l:splitting}, the $\rho$-Laplacian $\Delta^\rho_{\widetilde Y_\ga}$ acts on $L_Y^\perp$ by multiplication by $\rho^{-2}$. Hence \begin{equation}\label{e:TXY4} D^\rho_{\widetilde Y_\ga}(v) = D^\rho_{\widetilde Y_\ga}(v_1) + D^\rho_{\widetilde Y_\ga}(v_2) \ge D^\rho_{\widetilde Y_\ga}(v_2) = \rho^{-2}\|v_2\|^2_{\widetilde Y_\ga^\rho} \end{equation} and therefore $$ \|v_1\|^2_{\widetilde Y_\ga^\rho} =\|v\|^2_{\widetilde Y_\ga^\rho} - \|v_2\|^2_{\widetilde Y_\ga^\rho} \ge \|v\|^2_{\widetilde Y_\ga^\rho} - \rho^2 D^\rho_{\widetilde Y_\ga}(v) . $$ Thus \begin{multline} \label{e:TXY2} \|T^{}_{XY}u\|^2_{\widetilde Y^\rho} = \|v_1\|^2_{\widetilde Y_\ga^\rho} \ge \|v\|^2_{\widetilde Y_\ga^\rho} - \rho^2 D^\rho_{\widetilde Y_\ga}(v) \\ \ge A_1^{-1} \|v\|^2_{\widetilde X_\ga^\rho} - A^{}_2\rho^2 D^\rho_{\widetilde X_\ga}(v) = A_1^{-1} \|u\|^2_{\widetilde X^\rho} - A^{}_2\rho^2 D^\rho_{\widetilde X}(u) \end{multline} by \eqref{e:TXY-norm} and \eqref{e:TXY-D}. Now \eqref{e:TXY01} follows from \eqref{e:TXY1}, \eqref{e:TXY2} and the bounds \eqref{e:mm-wasserstein1} for $\widetilde\mu_X$ and $\widetilde\mu_Y$. To estimate the Dirichlet form of $T_{XY}u$, observe that $$ D^\rho_{\widetilde Y}(T^{}_{XY}u) = D^\rho_{\widetilde Y_\ga}(v_1) \le D^\rho_{\widetilde Y_\ga}(v) $$ and \begin{equation}\label{e:TXY5} D^\rho_{\widetilde Y_\ga}(v) \le A_2 D^\rho_{\widetilde X_\ga}(v) = A_2 D^\rho_{\widetilde X}(u) \end{equation} by \eqref{e:TXY-D}. These estimates and \eqref{e:mm-wasserstein1} imply \eqref{e:TXY02}. 
To prove \eqref{e:TXY03}, observe that $I_Y(T_{XY}u)=v_1$ and therefore $$ T_{YX}(T_{XY}u) - u = P_X(v_1)-u = P_X(v_1-v) = P_X(v_2) . $$ Further, $$ \|P_X(v_2)\|^2_{\widetilde X^\rho} \le \|v_2\|^2_{\widetilde X_\ga^\rho} \le A_1\|v_2\|^2_{\widetilde Y_\ga^\rho} \le A_1\rho^2 D^\rho_{\widetilde Y_\ga}(v) \le A_1A_2 \rho^2 D^\rho_{\widetilde X}(u) $$ by \eqref{e:TXY-norm}, \eqref{e:TXY4}, and \eqref{e:TXY5}. Thus $$ \|T_{YX}(T_{XY}u) - u\|^2_{\widetilde X^\rho} \le A_1A_2 \rho^2 D^\rho_{\widetilde X}(u) . $$ This and \eqref{e:mm-wasserstein1} imply \eqref{e:TXY03}. \end{proof} Having obtained estimates on the closeness of eigenvalues in Theorem \ref{t:stability}, we would naturally like to show that the corresponding eigenspaces are also close. The most naive formulation definitely fails: if an eigenvalue has multiplicity 2, then it has a two-dimensional eigenspace, and a small perturbation generically splits this eigenspace into two orthogonal one-dimensional eigenspaces. An original eigenvector may fail to be close to either of the new eigenspaces; it is still close to a linear combination of the new eigenvectors. In our case we have a similar situation. Let $u$ be an eigenvector of $\Delta^\rho_X$ with eigenvalue $\lambda$ which is substantially smaller than $\rho^{-2}$. Then $T_{XY}(u)$ is close to a linear combination of eigenvectors of $\Delta^\rho_Y$ with eigenvalues close to~$\lambda$. We do not give a precise formulation of this statement; it is a direct reformulation of Theorem 3 in \cite{BIK14}, and the proof is an application of Proposition~\ref{p:TXY} and straightforward linear algebra. \section{Weyl-type estimates} \label{sec:weyl} In this section we prove Theorems \ref{t:ess-spectrum} and \ref{t:many-eigenvalues}. Theorem \ref{t:ess-spectrum} gives a Weyl-type upper bound on the number of eigenvalues in the lower part of the spectrum of $\Delta^\rho_X$, and Theorem \ref{t:many-eigenvalues} provides a similar lower bound. 
To formulate the theorems we need notation for packing numbers. For a compact metric space $X$ and $r>0$ we denote by $N_X(r)$ the maximum number of points in an $r$-separated set in $X$. Recall that a set $Y\subset X$ is \emph{$r$-separated} if $d_X(y_1,y_2)\ge r$ for all distinct $y_1,y_2\in Y$. For $R>0$, we denote by $\#^\rho_X(R)$ the number of eigenvalues of $\Delta^\rho_X$ in the interval $[0,R]$, counted with multiplicities. Equivalently, $$ \#_X^\rho(R) = \sup \{ k\in\N : \lambda_k(X,\rho)\le R \} . $$ Note that $\#_X^\rho(R)=\infty$ if $R\ge\lambda_\infty(X,\rho)$. \begin{theorem} \label{t:ess-spectrum} For every $\Lambda\ge 1$ there exists $c=c(\Lambda)>0$ such that the following holds. Let $X$ be a mm-space satisfying the condition $BIV(\Lambda,\frac56\rho,\frac5{12}\rho)$ and the following restricted doubling condition: $$ \mu_X(B_{5\rho/3}(x)) \le \Lambda\, \mu_X(B_{5\rho/6}(x)) $$ for all $x\in\operatorname{supp}(\mu_X)$. Then $$ \#_X^\rho(c\rho^{-2}) \le N_X(\rho/24) . $$ \end{theorem} If $X$ is a Riemannian manifold then $N_X(r) \sim C_n\mu(X)r^{-n}$ as $r\to 0$, where $n$ is the dimension of $X$. In this case the conclusion of Theorem \ref{t:ess-spectrum} can be restated as follows: for $R=c\rho^{-2}$, we have $$ \#_X^\rho(R) \le C(n,\Lambda) \mu(X) R^{n/2} . $$ The reader is invited to compare the right-hand side of this formula with the classical Weyl asymptotics for the Laplace--Beltrami spectrum. \begin{proof}[\bf Proof of Theorem \ref{t:ess-spectrum}] Let $X$ be a mm-space satisfying the assumptions of the theorem. Fix $\ep=\rho/24$. Let $Y$ be a maximal $\ep$-separated set in~$X$ and $N=N_X(\ep)$ the cardinality of $Y$. Then $Y$ is an $\ep$-net in $X$. Equip $Y$ with a measure $\mu_Y$ as in Example \ref{x:discretization} so that the resulting mm-space is $(\ep,0)$-close to $X$. By the assumptions of the theorem, $X$ satisfies the conditions $SLV(\Lambda,\rho-4\ep,10\ep)$ and $BIV(\Lambda,\rho-4\ep,10\ep)$. 
By Lemma \ref{l:conditions-stable} it follows that $Y$ satisfies $SLV(6\Lambda,\rho,2\ep)$ and $BIV(\Lambda,\rho,2\ep)$. Therefore Theorem~\ref{t:stability} applies to $X$ and $Y$ with $6\Lambda$ in place of $\Lambda$. By Theorem~\ref{t:stability}, for every $k\ge 1$ at least one of the following holds: either $$ \lambda_k(X,\rho) > C^{-1}\rho^{-2} $$ or $$ C^{-1}\le\frac{\lambda_k(X,\rho)}{\lambda_k(Y,\rho)} \le C $$ where $C$ is a constant depending only on $\Lambda$. Since $\dim L^2(Y)=N$, we have $\lambda_k(Y,\rho)=\infty$ for all $k>N$. Hence for $k=N+1$ the second alternative above cannot occur unless $\lambda_k(X,\rho)=\infty$. We conclude that $\lambda_{N+1}(X,\rho)>C^{-1}\rho^{-2}$. Therefore for $c=C^{-1}$ we have $\#_X^\rho(c\rho^{-2})\le N$. \end{proof} \begin{theorem}\label{t:many-eigenvalues} Let $X=(X,d,\mu)$ be a mm-space whose measure has full support. Let $\mu^\rho$ be the measure defined by \eqref{e:murho}. Let $r\ge \rho$ and $N=N_X(3r)$. Then $$ \lambda_N(X,\rho) \le 4 Q(r) r^{-2} $$ where $$ Q(r) = \sup_{x\in X}\frac{\mu^\rho(B_{2r}(x))}{\mu^\rho(B_{r/2}(x))} . $$ \end{theorem} For spaces satisfying reasonable assumptions, Theorem \ref{t:many-eigenvalues} complements Theorem \ref{t:ess-spectrum} by giving a lower bound on $\#_X^\rho(c\rho^{-2})$ of the same order of magnitude as in Theorem~\ref{t:ess-spectrum}. Indeed, let $c$ be the constant from Theorem~\ref{t:ess-spectrum} and assume that $Q(r)\le Q_{\max}$ for all $r\ge\rho$. Then, applying Theorem \ref{t:many-eigenvalues} to $r=2\sqrt{c^{-1}Q_{\max}}\,\rho$ we get $$ \#_X^\rho(c\rho^{-2}) = \#_X^\rho(4Q_{\max}r^{-2}) \ge N_X(3r) = N_X(C_1\rho) $$ where $C_1=6\sqrt{c^{-1}Q_{\max}}$. 
\begin{proof}[\bf Proof of Theorem \ref{t:many-eigenvalues}] By the min-max formula \eqref{e:minmax}, it suffices to construct a linear subspace $H\subset L^2(X)$ such that $\dim H=N$ and \begin{equation}\label{e:many-eigenvalues1} D^\rho_X(u)\le 4Q(r)r^{-2}\|u\|^2_{L^2(X,\mu^\rho)} \end{equation} for every $u\in H$. Here $D^\rho_X$ is the Dirichlet form given by \eqref{e:dirichlet}. Let $\{x_1,\dots,x_N\}$ be a $3r$-separated set in $X$. For each $i$, define a function $u_i\colon X\to\R$ by $$ u_i(x)= \max\left\{ 1- \frac{d(x, x_i)}{r} , 0 \right\} . $$ Let $H$ be the linear span of $u_1,\dots,u_N$. We are going to show that \eqref{e:many-eigenvalues1} is satisfied for all $u\in H$. The supports of the $u_i$ are separated by distance at least $r\ge\rho$. Hence $u_i\perp u_j$ and $\Delta^\rho_X(u_i)\perp u_j$ in $L^2(X,\mu^\rho)$ for all $i\ne j$. Therefore it suffices to verify \eqref{e:many-eigenvalues1} for $u=u_i$ only. \smallbreak Since $u_i(x)\ge \frac12$ for all $x\in B_{r/2}(x_i)$, we have \begin{align*} \|u_i\|^2_{L^2(X,\mu^\rho)} &= \rho^2 \int_X \mu(B_\rho(x)) u^2_i(x)\, d\mu(x) \\ &\geq \frac{\rho^2}{4} \int_{B_{r/2}(x_i)} \mu(B_\rho(x))\, d\mu(x) = \frac{\rho^2}{4}\mu^\rho(B_{r/2}(x_i)) . \end{align*} Since $u_i(x)=0$ if $x\notin B_r(x_i)$ and $u_i$ is $(1/r)$-Lipschitz, we have \begin{align*} D^\rho_X(u_i) &= \int_{B_{r+\rho}(x_i)} \int_{B_\rho(x)} |u_i(x)-u_i(y)|^2 \, d\mu(y)d\mu(x) \\ &\leq \frac{\rho^2}{r^2} \int_{B_{r+\rho}(x_i)} \mu(B_\rho(x)) \, d\mu(x) = \frac{\rho^2}{r^2} \mu^\rho(B_{r+\rho}(x_i)). \end{align*} Thus $$ \frac{D^\rho_X(u_i)}{\|u_i\|^2_{L^2(X,\mu^\rho)}} \le \frac 4{r^2}\, \frac{\mu^\rho(B_{r+\rho}(x_i))}{\mu^\rho(B_{r/2}(x_i))} \le 4r^{-2} Q(r) . $$ The theorem follows. \end{proof}
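The bound $\lambda_N(X,\rho)\le 4Q(r)r^{-2}$ is easy to probe numerically. The following self-contained sketch is our own illustration, not part of the paper: it discretizes the $\rho$-Laplacian for $60$ uniformly spaced points on a circle of circumference $1$, takes $\mu^\rho$ with density proportional to $\mu(B_\rho(x))$ (a fixed power of $\rho$ in the normalization cancels in $Q(r)$), builds a $3r$-separated set greedily, and checks the eigenvalue bound.

```python
import numpy as np

m = 60                                   # uniform points on a circle of circumference 1
mu = np.full(m, 1.0 / m)
idx = np.arange(m)
steps = np.abs(idx[:, None] - idx[None, :])
D = np.minimum(steps, m - steps) / m     # arc-length distance matrix

rho, r = 0.05, 0.10                      # r >= rho, as in the theorem

# Matrix of the rho-Laplacian: (Lu)(x) = rho^{-2} (u(x) - mean of u over B_rho(x)).
inball = D < rho
ballmass = inball @ mu                   # mu(B_rho(x))
W = inball * mu[None, :] / ballmass[:, None]
lam = np.sort(np.linalg.eigvals((np.eye(m) - W) / rho**2).real)

# Greedy 3r-separated set; its size is a lower bound for N_X(3r).
chosen = []
for i in idx:
    if all(D[i, j] >= 3 * r for j in chosen):
        chosen.append(i)
N = len(chosen)

# Q(r), with mu^rho having density proportional to mu(B_rho(x)).
w = ballmass * mu
Q = max(w[D[i] < 2 * r].sum() / w[D[i] < r / 2].sum() for i in idx)
bound = 4 * Q / r**2
```

On this homogeneous example the inequality holds with large slack, which is expected: the proof only uses crude tent-function test vectors.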
https://arxiv.org/abs/1506.06781
Spectral stability of metric-measure Laplacians
We consider a "convolution mm-Laplacian" operator on metric-measure spaces and study its spectral properties. The definition is based on averaging over small metric balls. For reasonably nice metric-measure spaces we prove stability of the convolution Laplacian's spectrum with respect to metric-measure perturbations and obtain Weyl-type estimates on the number of eigenvalues.
https://arxiv.org/abs/1308.3133
Intersections of multiplicative translates of 3-adic Cantor sets
Motivated by a question of Erdős, this paper considers questions concerning the discrete dynamical system on the 3-adic integers given by multiplication by 2. Let the 3-adic Cantor set consist of all 3-adic integers whose expansions use only the digits 0 and 1. The exception set is the set of 3-adic integers whose forward orbits under this action intersects the 3-adic Cantor set infinitely many times. It has been shown that this set has Hausdorff dimension 0. Approaches to upper bounds on the Hausdorff dimensions of these sets leads to study of intersections of multiplicative translates of Cantor sets by powers of 2. More generally, this paper studies the structure of finite intersections of general multiplicative translates of the 3-adic Cantor set by integers 1 < M_1 < M_2 < ...< M_n. These sets are describable as sets of 3-adic integers whose 3-adic expansions have one-sided symbolic dynamics given by a finite automaton. As a consequence, the Hausdorff dimension of such a set is always of the form log(\beta) for an algebraic integer \beta. This paper gives a method to determine the automaton for given data (M_1, ..., M_n). Experimental results indicate that the Hausdorff dimension of such sets depends in a very complicated way on the integers M_1,...,M_n.
\section{Introduction} We study the following problem. Let the $3$-adic Cantor set $\Sigma_3 := \Sigma_{3, \bar{2}}$ be the subset of all $3$-adic integers whose $3$-adic expansions consist of the digits $0$ and $1$ only. This set is a well-known fractal having Hausdorff dimension $\dim_{H}(\Sigma_{3}) = \log_3 2 \approx 0.630929$. By a {\em multiplicative translate} of such a Cantor set we mean a multiplicatively rescaled set $r \Sigma_{3}= \{ r x: x\in \Sigma_3\}$, where we restrict to $r= \frac{p}{q} \in {\mathbb Q}^{\times}$ being a rational number that is $3$-integral, meaning that $3$ does not divide $q$. In this paper we study sets given as finite intersections of such multiplicative translates: \begin{equation}\label{eq100} C( r_1, r_2, \cdots, r_n) := \bigcap _{i=1}^{n} \frac{1}{r_i} \Sigma_3, \end{equation} where now each $\frac{1}{r_i}$ is $3$-integral. These sets are fractals and our object is to obtain bounds on their Hausdorff dimensions. Our motivation for studying this problem arose from a problem of Erd\H{o}s \cite{Erd79} which is described in Section \ref{sec11}. In principle the Hausdorff dimensions of the sets $C( r_1, r_2, \cdots, r_n)$ are explicitly computable in closed form. This comes about as follows. We show each such set has the property that the $3$-adic expansions of all the members of $C( r_1, r_2, \cdots, r_n)$ are characterizable as the output labels of all infinite paths in a labeled finite automaton which start from a marked initial vertex. General sets of such path labels associated to a finite automaton form symbolic dynamical systems that we call {\em path sets} and which we study in \cite{AL12a}. The sets $C( r_1, r_2, \cdots, r_n)$ are then $p$-adic path set fractals (with $p=3$), using terminology we introduced in \cite{AL12b}. 
These sets are collections of all $p$-adic numbers whose $p$-adic expansions have digits described by the labels along infinite paths according to a digit assignment map taking path labels in the graph to $p$-adic digits. In \cite[Theorem 2.10]{AL12b} we showed that a {\em $p$-adic path set fractal} is any set $Y$ in ${\mathbb Z}_p$ constructed by a $p$-adic analogue of a real number graph-directed fractal construction, as given in Mauldin and Williams \cite{MW86}. This geometric object $Y$ is given as the set-valued fixed point of a dilation functional equation using a set of $p$-adic affine maps, cf. \cite[Theorem 2.6]{AL12b}. We showed in \cite[Theorem 1.4]{AL12b} that if $X$ is a $p$-adic path set fractal then any multiplicative translate $r X$ by a $p$-integral rational number $r$ is also a $p$-adic path set fractal. In addition $p$-adic path set fractals are closed under set intersection, a property they inherit from path sets, see \cite[Theorem 1.2]{AL12a}. Since the $3$-adic Cantor set is a $3$-adic path set fractal, the full shift on two symbols, these closure properties immediately imply that every set $C( r_1, r_2, ..., r_n)$ is a $3$-adic path set fractal. In \cite[Theorem 1.1]{AL12b} we showed that the Hausdorff dimension of a $p$-adic path set fractal $X$ is directly computable from the adjacency matrix of a suitable presentation of $X$. One has \[ \dim_{H}(X) = \log_p \beta, \] in which $\beta$ is the spectral radius $\rho(\mathbf{A})$ of the adjacency matrix $\mathbf{A}$ of a finite automaton which gives a suitable presentation of the given path set; see Section \ref{sec2}. This spectral radius coincides with the {\em Perron eigenvalue} (\cite[Definition 4.4.2]{LM95}) of the nonnegative integer matrix $\mathbf{A} \neq 0$, which is the largest real eigenvalue $\beta \ge 0$ of $\mathbf{A}$. 
For adjacency matrices of graphs containing at least one directed cycle, which are nonnegative integer matrices, the Perron eigenvalue is necessarily a real algebraic integer and satisfies $\beta \ge 1$. In the case at hand we know a priori that $1 \le \beta \le 2.$ Everything here is algorithmically effective, as discussed in Sections \ref{sec2} and \ref{sec3}. This paper presents theoretical and experimental results about these sets. In Section \ref{sec3} we give an algorithm to compute an efficient presentation of the underlying path set of $ \mathcal{C}(1, M)$ for integers $M \ge 1$, which is simpler than the general constructions given in \cite{AL12a}, \cite{AL12b}. We extend this method to $ \mathcal{C}(1, M_1, M_2, ..., M_n)$. We give a complete analysis of the structure of the resulting path set presentations for two infinite families $\mathcal{C}(1, M_k)$ of integers $\{ M_k: k \ge 1\}$ whose $3$-adic expansions take an especially simple form. These examples exhibit rather complicated automata in the presentations. We experimentally use the algorithm for $ \mathcal{C}(1, M_1, M_2, ..., M_n)$ to compute various examples indicating that the automata depend in an extremely complicated way on the $3$-adic arithmetic properties of the integers $M_i$. This complexity is reflected in the behavior of the Hausdorff dimension function, and leads to many open questions. \subsection{Motivation: Erd\H{o}s problem}\label{sec11} Erd\H os \cite{Erd79} conjectured that for every $n \geq 9$, the ternary expansion of $2^n$ contains the ternary digit $2$. A weak version of this conjecture asserts that there are only finitely many $n$ such that the ternary expansion of $2^n$ consists of $0$'s and $1$'s. Both versions of this conjecture appear difficult. In \cite{Lag09} the second author proposed a $3$-adic generalization of this problem, as follows. 
Let $\mathbb{Z}_3$ denote the $3$-adic integers, and let a $3$-adic integer $\alpha$ have $3$-adic expansion \[ (\alpha)_3 := (\cdots a_2 a_1 a_0)_3= a_0 + a_1 \cdot 3 + a_2 \cdot 3^2 + \cdots, ~~\mbox{with all}~~ a_i \in \{ 0, 1, 2\}. \] \begin{defn} \label{defn-exceptional} The {\em $3$-adic exceptional set} $\mathcal{E}(\mathbb{Z}_3)$ is defined by \[ \mathcal{E}(\mathbb{Z}_3) := \{\lambda \in \mathbb{Z}_3 : \text{for infinitely many $n \ge 0$ the expansion $(2^n \lambda)_3$ omits the digit $2$}\}. \] \end{defn} The weak version of Erd\H os's conjecture above is equivalent to the assertion that $\mathcal{E}(\mathbb{Z}_3)$ does not contain the integer $1$. The exceptional set seems an interesting object in its own right. It is forward invariant under multiplication by $2$, and one may expect it to be a very small set in terms of measure or dimension. At present it remains possible that the $\mathcal{E}(\mathbb{Z}_3)$ is a countable set, or even that it consists of the single element $\{ 0\}.$ In 2009 the second author put forward the following conjecture asserting that the exceptional set is small in the sense of Hausdorff dimension ( \cite[Conjecture 1.7]{Lag09}). \begin{conj}\label{cj11}{\em (Exceptional Set Conjecture)} The $3$-adic exceptional set $\mathcal{E}(\mathbb{Z}_3)$ has Hausdorff dimension zero, i.e. \begin{equation} \dim_{H}(\mathcal{E}(\mathbb{Z}_3) )=0. \end{equation} \end{conj} \noindent As limited evidence in favor of this conjecture, the paper \cite{Lag09} showed that the Hausdorff dimension of $\mathcal{E} (\mathbb{Z}_3)$ is at most $\frac{1}{2},$ as explained below. 
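Small cases of these conjectures are easy to probe by machine. The following short script is our own sanity check (not from the paper): it confirms that for $0\le n\le 1000$ the ternary expansion of $2^n$ omits the digit $2$ only for $n=0,2,8$, i.e.\ for $2^n\in\{1,4,256\}$, consistent with Erd\H{o}s's conjecture in this range.

```python
def ternary(n):
    """Ternary (base-3) digit string of a nonnegative integer."""
    if n == 0:
        return "0"
    digits = []
    while n:
        n, d = divmod(n, 3)
        digits.append(str(d))
    return "".join(reversed(digits))

# Exponents n <= 1000 for which the ternary expansion of 2^n omits the digit 2.
omits = [n for n in range(1001) if "2" not in ternary(2**n)]
```

Much larger ranges of $n$ have been checked in the literature; this snippet only illustrates the computation.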
That paper initiated a strategy to obtain upper bounds for $\dim_{H}(\mathcal{E}(\mathbb{Z}_3) )$ based on the containment relation \begin{equation}\label{1013} \mathcal{E}(\mathbb{Z}_3) \subseteq \bigcap_{k=1}^{\infty} \mathcal{E}^{(k)}(\mathbb{Z}_3), \end{equation} where \begin{equation}\label{eq120} \mathcal{E}^{(k)}(\mathbb{Z}_3) := \{\lambda \in \mathbb{Z}_3 : \text{for at least $k$ values of $n \ge 0$, $(2^n \lambda)_3$ omits the digit 2}\}. \end{equation} These sets form a nested family \[ \Sigma_{3, \bar{2}}= \mathcal{E}^{(1)}(\mathbb{Z}_3 ) \supseteq \mathcal{E}^{(2)}(\mathbb{Z}_3) \supseteq \mathcal{E}^{(3)}(\mathbb{Z}_3 ) \supseteq \cdots \] The containment relation (\ref{1013}) immediately implies inequalities relating the Hausdorff dimension of these sets, namely \begin{equation}\label{1015} \dim_{H}(\mathcal{E}(\mathbb{Z}_3))\le \Gamma, \end{equation} where $\Gamma$ is defined by \begin{equation}\label{Gamma} \Gamma := \lim_{k \to \infty} \dim_{H}(\mathcal{E}^{(k)}(\mathbb{Z}_3)). \end{equation} The inequality \eqref{1015} raises the subsidiary problem of obtaining upper bounds for $\Gamma$, which in turn requires obtaining bounds for the individual $\dim_{H}(\mathcal{E}^{(k)}(\mathbb{Z}_3))$. We note the possibility that $\dim_{H}(\mathcal{E}(\mathbb{Z}_3))< \Gamma$ may hold. The analysis of the sets $\mathcal{E}^{(k)}(\mathbb{Z}_3)$ for $k \ge 2$ leads to the study of particular sets of the kind \eqref{eq100} considered in this paper. We have \begin{equation} \mathcal{E}^{(k)}(\mathbb{Z}_3) = \bigcup_{0 \leq m_1 < \ldots < m_k} \mathcal{C}(2^{m_1},\ldots,2^{m_k}).
\end{equation} We next give a simplification, showing that for the purposes of computing Hausdorff dimension we may, without loss of generality, restrict this set union to subsets having $m_1=0$ so that $2^{m_1}=1.$ \begin{defn} \label{de15a} The {\em restricted $3$-adic exceptional set} $\mathcal{E}_1(\mathbb{Z}_3)$ is given by $$ \mathcal{E}_1(\mathbb{Z}_3) := \{\lambda \in \mathbb{Z}_3 : \text{for $n=0$ and infinitely many other $n$, $(2^n \lambda)_3$ omits the digit $2$}\}. $$ \end{defn} It is easy to see that $$ \mathcal{E}(\mathbb{Z}_3)= \bigcup_{n=0}^{\infty} \frac{1}{2^n} \mathcal{E}_1(\mathbb{Z}_3). $$ Since the right side is a countable union of sets we obtain $$ \dim_{H} ( \mathcal{E}(\mathbb{Z}_3)) = \sup_{n \ge 0} \Big(\dim_{H} (\frac{1}{2^n} \mathcal{E}_1(\mathbb{Z}_3))\Big) = \dim_{H} (\mathcal{E}_1(\mathbb{Z}_3)). $$ We also have $\mathcal{E}_1(\mathbb{Z}_3) \subset \Sigma_{3, \bar{2}}$. Now set $$ \mathcal{E}_1^{(k)}(\mathbb{Z}_3) := \{\lambda \in \Sigma_{3, \bar{2}} : \text{for at least $k$ values of $n \ge 0$, $(2^n \lambda)_3$ omits the digit 2}\}. $$ For $0< m_1<m_2 < \cdots < m_k$ we have the set identities $$\mathcal{C}(2^{m_1},\ldots,2^{m_k}) = \frac{1}{2^{m_1}}\mathcal{C}(1,2^{m_2-m_1},\ldots,2^{m_k-m_1}).$$ These identities yield $ \mathcal{E}^{(k)}(\mathbb{Z}_3) = \bigcup_{n=0}^{\infty} 2^{-n} \mathcal{E}_1^{(k)}(\mathbb{Z}_3). $ Again, since this is a countable union of sets, we obtain the equality $$ \dim_{H}( \mathcal{E}^{(k)}(\mathbb{Z}_3)) = \sup_{n \ge 0} \Big( \dim_{H}\big(2^{-n} \mathcal{E}_1^{(k)}(\mathbb{Z}_3)\big)\Big)= \dim_{H}(\mathcal{E}_1^{(k)}(\mathbb{Z}_3)) $$ asserted above. It also follows that \begin{equation}\label{newgamma} \Gamma = \lim_{k \to \infty} \dim_{H}(\mathcal{E}_1^{(k)}(\mathbb{Z}_3)). \end{equation} We now have \begin{equation} \mathcal{E}_1^{(k)}(\mathbb{Z}_3) = \bigcup_{0 < m_1 < \ldots < m_{k-1}} \mathcal{C}(1, 2^{m_1},\ldots,2^{m_{k-1}}).
\end{equation} The right side of this expression is a countable union of sets, so we have \begin{equation}\label{1016} \dim_{H}( \mathcal{E}_1^{(k)}(\mathbb{Z}_3) ) = \sup_{0 < m_1 < \ldots < m_{k-1}} \Big( \dim_{H}\big(\mathcal{C}(1, 2^{m_1},\ldots,2^{m_{k-1}})\big)\Big). \end{equation} Upper bounds for the right side of this formula are obtained by bounding above the Hausdorff dimensions of all the individual sets $\mathcal{C}(1, 2^{m_1},\ldots,2^{m_{k-1}})$, of the form \eqref{eq100}. Lower bounds may be obtained by determining the Hausdorff dimension of specific individual sets $\mathcal{C}(1, 2^{m_1},\ldots,2^{m_{k-1}})$. By this means the second author \cite[Theorem 1.6 (ii)]{Lag09} obtained the upper bound \begin{equation}\label{lowdim11} \Gamma \le \dim_{H} (\mathcal{E}^{(2)}(\mathbb{Z}_3) ) =\dim_{H} (\mathcal{E}_1^{(2)}(\mathbb{Z}_3) ) \le \frac{1}{2}, \end{equation} and using \eqref{1015} we conclude that \begin{equation}\label{record-bound} \dim_{H} (\mathcal{E}({\mathbb Z}_3)) \le \frac{1}{2}. \end{equation} \subsection{Generalized exceptional set problem}\label{sec11a} We are interested in obtaining improved upper bounds on $\dim_{H}(\mathcal{E}(\mathbb{Z}_3))$. To progress further with the approach above, one needs a better understanding of the structure of the sets $\mathcal{C}(1, 2^{m_1}, ..., 2^{m_k})$, in the hope of obtaining uniform bounds on their Hausdorff dimension. One approach to upper bounding the exceptional set is to relax its defining conditions to allow arbitrary positive integers $M$ in place of powers of $2$. Since the $3$-adic Cantor set $\Sigma_{3, \bar{2}}$ is forward invariant under multiplication by $3$, we will restrict to integers $M \not\equiv 0\, (\bmod \, 3)$. For application to the exceptional set $\mathcal{E}(\mathbb{Z}_3)$, the discussion in Section \ref{sec11} indicates that it suffices to consider the restricted family of sets $\mathcal{C}(1,M_1,\ldots,M_n)$, i.e. taking $M_0=1$.
We define a relaxed version of the restricted $3$-adic exceptional set, as follows. \begin{defn} \label{defn-generalized} The {\em $3$-adic generalized exceptional set} is the set \[\mathcal{E}_{\star}(\mathbb{Z}_3) := \{\lambda \in \mathbb{Z}_3 : \text{there are infinitely many $M \ge 1$, $M \not\equiv 0 ~(\bmod \, 3)$, including \text{$M=1$}, }\] \[ \text{such that the $3$-adic expansion} ~(M \lambda)_3 \text{ omits the digit $2$}\}.\] \end{defn} When considering the intersective sets $\mathcal{C}(1,M_1,\ldots,M_n)$, we can then further restrict to require all $M_i \equiv 1 ~(\bmod \, 3)$, since any $M \equiv 2 ~(\bmod \, 3)$ has $\mathcal{C}(1,M)= \{ 0\}.$ We have $\mathcal{E}_1 (\mathbb{Z}_3) \subset \mathcal{E}_{\star} (\mathbb{Z}_3) \subset \Sigma_{3, \bar{2}}$ and therefore \begin{equation} \dim_{H}( \mathcal{E}(\mathbb{Z}_3)) = \dim_{H}( \mathcal{E}_1(\mathbb{Z}_3)) \le \dim_{H}( \mathcal{E}_{\star}(\mathbb{Z}_3)). \end{equation} Thus upper bounds for the Hausdorff dimension of the generalized exceptional set yield upper bounds for that of the exceptional set. \begin{prob}\label{pr14} {\em (Generalized Exceptional Set Problem)} Determine upper and lower bounds for the Hausdorff dimension of the generalized exceptional set $\mathcal{E}_{\star} (\mathbb{Z}_3)$. In particular, determine whether $\dim_{H} (\mathcal{E}_{\star} (\mathbb{Z}_3)) =0$ or $\dim_{H} (\mathcal{E}_{\star} (\mathbb{Z}_3)) >0$ holds. \end{prob} We next define a family of sets in parallel to $\mathcal{E}_1^{(k)}({\mathbb Z}_3)$ above. We define \[ \mathcal{E}_\star^{(k)}(\mathbb{Z}_3) := \{ \lambda \in \mathbb{Z}_3 : \text{there exist \, $1 = M_1 < M_2< \cdots < M_k$, \,with all $M_i \equiv \, 1 (\bmod\, 3)$}, \] \[ \quad\quad \text{such that each $3$-adic expansion} ~(M_i \lambda)_3 \text{ omits the digit $2$}\}. \] Then in parallel to the case above, we have \[ \mathcal{E}_\star^{(k)}(\mathbb{Z}_3) = \bigcup_{{1 < M_1 < \ldots < M_{k-1}}\atop{M_i \equiv 1 (\bmod ~3)}} \mathcal{C}(1, M_1,\ldots,M_{k-1}).
\] In consequence we have the inclusion \[ \mathcal{E}_\star(\mathbb{Z}_3) \subseteq \bigcap_{k=1}^{\infty} \mathcal{E}_\star^{(k)}(\mathbb{Z}_3). \] This inclusion yields the bound \begin{equation}\label{eq112} \dim_{H}( \mathcal{E}_\star(\mathbb{Z}_3)) \le \Gamma_{\star}, \end{equation} where we define \begin{equation}\label{Gamma-star} \Gamma_{\star}:= \lim_{k \to \infty} \dim_{H}( \mathcal{E}_\star^{(k)}(\mathbb{Z}_3)). \end{equation} As far as we know, it is possible that $\dim_{H}( \mathcal{E}_\star(\mathbb{Z}_3)) < \Gamma_{\star}$ holds. The second author \cite[Theorem 1.6]{Lag09} obtained the upper bound \begin{equation} \Gamma_{\star} \le \dim_{H}( \mathcal{E}_{\star}^{(2)}(\mathbb{Z}_3)) \le \frac{1}{2}, \end{equation} which in fact yielded \eqref{lowdim11}. Our interest in the generalized exceptional set problem stemmed from the fact that if it were true that $\dim_{H} (\mathcal{E}_{\star} (\mathbb{Z}_3)) =0$, then the Exceptional Set Conjecture \ref{cj11} would follow. However a main result of our investigation establishes that this does not hold: we obtain the lower bounds $$ \Gamma_{\star} \ge \dim_{H}(\mathcal{E}_{\star} (\mathbb{Z}_3)) \ge \frac{1}{2} \log_3 2 \approx 0.315464, $$ see Theorem \ref{th110} below. This inconvenient fact limits the upper bounds attainable on $\dim_{H}(\mathcal{E}(\mathbb{Z}_3))$ via the relaxed problem. \subsection{Algorithmic Results}\label{sec12} We study the size, as measured by Hausdorff dimension, of intersections of multiplicative translates of the $3$-adic Cantor set $\Sigma_3:= \Sigma_{3, \bar{2}}$, namely the sets $$\mathcal{C}(1, M_1,\ldots,M_n):= \Sigma_{3, \bar{2}} \cap \frac{1}{M_1}\Sigma_{3, \bar{2}} \cap \cdots \cap \frac{1}{M_n}\Sigma_{3, \bar{2}}, $$ where $1< M_1 < \cdots < M_n$ are positive integers.
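Before any automaton machinery, the definition above can be checked by brute force: the low $n$ ternary digits of $M\lambda$ depend only on $\lambda \bmod 3^n$, so one can count the length-$n$ digit strings over $\{0,1\}$ for which every $M_i \lambda$ is free of the digit $2$ in its $n$ lowest digits, and use $\log_3(\mathrm{count})/n$ as a crude estimate of the Hausdorff dimension. A Python sketch (ours; purely illustrative, not the paper's algorithm):

```python
import math
from itertools import product

def low_digits_omit_two(y, n):
    """Do the n lowest ternary digits of y avoid the digit 2?"""
    for _ in range(n):
        y, r = divmod(y, 3)
        if r == 2:
            return False
    return True

def count_admissible(Ms, n):
    """Count x with n ternary digits in {0,1} such that every M*x
    also has its n lowest ternary digits in {0,1}."""
    mod = 3**n
    count = 0
    for bits in product((0, 1), repeat=n):
        x = sum(b * 3**k for k, b in enumerate(bits))
        if all(low_digits_omit_two(M * x % mod, n) for M in Ms):
            count += 1
    return count

# Crude dimension estimate for C(1, 4).
n = 12
est = math.log(count_admissible([4], n), 3) / n
print(round(est, 3))
```

For $n = 12$ the estimate is already within a few percent of the exact value $\log_3 \frac{1+\sqrt{5}}{2} \approx 0.438$, which Theorem \ref{th17a} below gives for $L_2 = (11)_3 = 4$.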
As remarked above, via results in \cite{AL12a}, \cite{AL12b} these sets have a nice description, with their members having $p$-adic expansions describable by finite automata, which permits effective computation of their Hausdorff dimension. These results are reviewed in Section \ref{sec2}, and the necessary definitions for presentations of a path set used in the following theorem appear there. \begin{thm} \label{th12} {\rm (Dimension of $\mathcal{C}(1, M_1, ..., M_n)$)} (1) There is a terminating algorithm that takes as input any finite set of integers $1\leq M_1 < \ldots < M_n$, and gives as output a labeled directed graph $\mathcal{G}=(G,\mathcal{L})$ with a marked starting vertex $v_0$, which is a presentation of a path set $X= X(1, M_1, M_2, \cdots , M_n)$ describing the $3$-adic expansions of the elements of the space \[ \mathcal{C}(1, M_1, ..., M_n):= \Sigma_3 \cap \frac{1}{M_1}\Sigma_3 \cap \ldots \cap \frac{1}{M_n}\Sigma_3. \] This presentation is right-resolving and all vertices are reachable from the marked vertex. The graph $G$ has at most $\prod_{i=1}^n (1 + \lfloor \frac{1}{2} M_i \rfloor)$ vertices. (2) The topological entropy of the path set $X$ is $\log \beta$, where $\beta$ is the Perron eigenvalue of the adjacency matrix $A$ of the directed graph $G$; it is a real algebraic integer satisfying $1 \leq \beta \leq 2$. Furthermore the Hausdorff dimension $$ \dim_{H} (\mathcal{C}(1, M_1, ..., M_n)) = \log_3 \beta. $$ This dimension falls in the interval $[0, \log_3 2]$. \end{thm} This construction is quite explicit in the special case $\mathcal{C}(1, M)$. Already in that case the associated graphs $G$ can be very complicated, and there exist examples where the graph has an arbitrarily large number of strongly connected components, cf. \cite{ABL13}. We have computed Hausdorff dimensions of many examples of such intersections. In the process we have found some infinite families of integers where the graph structures are analyzable, see Section \ref{sec4} and \cite{ABL13}.
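For the two-fold sets $\mathcal{C}(1, M)$ with $M \equiv 1 \,(\bmod\, 3)$, the algorithm of Theorem \ref{th12} can be prototyped in a few lines: the vertices are the possible ``carry'' values $N$, an exit edge labeled $a \in \{0,1\}$ exists when $(N + a) \bmod 3 \ne 2$, and it leads to $N' = \lfloor (N + Ma)/3 \rfloor$; this is the construction proved in Section \ref{sec3}. A Python sketch (ours), computing $\log_3 \beta$ with $\beta$ the spectral radius of the adjacency matrix:

```python
import numpy as np

def build_presentation(M):
    """BFS construction of the presentation of X(1, M), M = 1 (mod 3):
    vertices are carry values N, edges carry digit labels a in {0,1}."""
    assert M % 3 == 1
    vertices, queue, edges = [0], [0], []
    while queue:
        N = queue.pop()
        for a in (0, 1):
            if (N + a) % 3 != 2:          # emitted digit of M*alpha is 0 or 1
                N2 = (N + M * a) // 3     # carry update step
                if N2 not in vertices:
                    vertices.append(N2)
                    queue.append(N2)
                edges.append((N, N2))
    A = np.zeros((len(vertices), len(vertices)))
    for u, v in edges:
        A[vertices.index(u), vertices.index(v)] += 1
    return vertices, A

def dimension(M):
    """Hausdorff dimension log_3(beta), beta the spectral radius of A."""
    _, A = build_presentation(M)
    beta = max(abs(np.linalg.eigvals(A)))
    return float(np.log(beta) / np.log(3))

# L_2 = 4 = (11)_3 and N_2 = 10 = (101)_3 both give log_3((1+sqrt(5))/2).
print(round(dimension(4), 6), round(dimension(10), 6))
```

Both calls return $\log_3 \frac{1+\sqrt 5}{2} \approx 0.438018$, matching the two infinite families analyzed in Section \ref{sec4}.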
From the viewpoint of fractal constructions, the sets constructed give specific interesting examples of graph-directed fractals, which appear to have structure depending on the integers $(M_1, ..., M_n)$ in an intricate way. \subsection{Hausdorff dimension results: Two infinite families}\label{sec13a} We begin with some simple restrictions on the Hausdorff dimension of the sets $\mathcal{C}(1, M)$ which can be read off from the $3$-adic expansion of $M$; this expansion coincides with the ternary expansion of $M$, written $(M)_3$, read backwards, where $$ (M)_3 := (a_k a_{k-1} \cdots a_1 a_0)_3, \quad\quad \mbox{for} \quad M = \sum_{j=0}^k a_j 3^j. $$ If the lowest-order nonzero $3$-adic digit of $M$ equals $2$, then $\mathcal{C}(1, M)=\{0\}$, whence its Hausdorff dimension $\dim_{H}(\mathcal{C}(1, M))=0$. On the other hand, if the positive integers $M_1, ..., M_k$ all have digits $a_j= 0$ or $a_j=1$ in their $3$-adic expansions, then the Hausdorff dimension $\dim_{H}(\mathcal{C}(1, M_1, M_2, ..., M_k))$ must be positive. We have found several infinite families of integers having ternary expansions of a simple form, whose path set presentations have a regular structure in the family parameter $k$, which permits their Hausdorff dimension to be determined. The simplest family takes $M_k= 3^k = (10^k)_3$. In this trivial case $\mathcal{C}(1, 3^k) = \Sigma_{3, \bar{2}}$, whence \begin{equation} \dim_{H}( \mathcal{C}(1, M_k)) = \log_3 2 \approx 0.630929. \end{equation} In Section \ref{sec4} we analyze two other infinite families in detail, as follows. The first of these families is $L_k= \frac{1}{2}(3^{k}-1) = (1^{k})_3$, for $k \ge 1$. \begin{thm}\label{th17a} {\rm (Infinite Family $L_k=\frac{1}{2}(3^k-1)$)} (1) Let $L_k= \frac{1}{2}(3^{k}-1) = (1^{k})_3$.
The path set presentation $({\mathcal G}, v_0)$ for the path set $X(1, L_k)$ underlying $\mathcal{C}(1, L_k)$ has exactly $k$ vertices and is strongly connected. (2) For every $k \ge 1$, \[ \dim_H(\mathcal{C}(1,L_k)) = \dim_{H} \mathcal{C}(1, (1^k)_3) = \log_3 \beta_k, \] where $\beta_k $ is the unique real root greater than $1$ of $\lambda^k - \lambda^{k-1}- 1=0$. (3) For all $k \ge 3$ there holds \[ \dim_{H} \Big(\mathcal{C}(1,L_k)\Big) = \frac{ \log_3 k}{k} + O\left(\frac{\log\log (k)}{k}\right). \] \end{thm} The Hausdorff dimension $\dim_{H}(\mathcal{C}(1,L_k))$ is positive but approaches $0$ as $k \to \infty$. This result is proved in Section \ref{sec42}. Secondly, we consider the family $N_k= 3^{k} + 1 = (10^{k-1}1)_3$. Our main results concern this family. \begin{thm}\label{th14} {\rm (Infinite Family $N_k=3^k+1$)} (1) Let $N_k=3^k+1= (10^{k-1}1)_3$. The path set presentation $({\mathcal G}, v_0)$ for the path set $X(1, N_k)$ underlying $\mathcal{C}(1, N_k)$ has exactly $2^k$ vertices and is strongly connected. (2) For every integer $k \geq 1$, there holds \[ \dim_H(\mathcal{C}(1,N_k)) = \dim_{H} \mathcal{C}(1, (10^{k-1}1)_3) = \log_3\bigg(\frac{1 + \sqrt{5}}{2}\bigg) \approx 0.438018. \] \end{thm} Here the Hausdorff dimension is constant as $k \to \infty$. Theorem \ref{th14} is a direct consequence of results established in Section \ref{sec43} (Theorem \ref{th34} and Proposition \ref{pr45}). We also include results on multiple intersections of sets in the two infinite families above in Section \ref{sec45}. It is easy to see that for each infinite family above, the Hausdorff dimensions of arbitrarily large intersections are always positive.
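The root $\beta_k$ in Theorem \ref{th17a} is easy to approximate numerically: since $f(\lambda) = \lambda^k - \lambda^{k-1} - 1$ has $f(1) = -1 < 0 \le f(2)$, bisection on $[1, 2]$ converges to it. A short Python sketch (ours) tabulating $\dim_H \mathcal{C}(1, L_k) = \log_3 \beta_k$:

```python
import math

def beta(k):
    """Unique real root > 1 of x^k - x^(k-1) - 1 = 0, by bisection."""
    f = lambda x: x**k - x**(k - 1) - 1
    lo, hi = 1.0, 2.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

# k = 1 recovers log_3 2, k = 2 gives log_3 of the golden ratio, and the
# values decay to 0, consistent with the (log_3 k)/k asymptotics above.
dims = {k: math.log(beta(k), 3) for k in (1, 2, 3, 10, 100)}
for k, d in dims.items():
    print(k, round(d, 6))
```

The decay of these values illustrates the contrast with the family $N_k$, whose dimension stays constant.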
We give some lower bounds on the dimension; Theorem \ref{th413} gives multiple intersections that establish $\Gamma_{\star} \ge \frac{1}{2} \log_3 2.$ In a sequel \cite{ABL13} we analyze a third infinite family $P_k= (20^{k-1}1)_3 = 2 \cdot 3^k +1$, whose underlying path set graphs exhibit much more complicated behavior; they have an unbounded number of strongly connected components as $k \to \infty$. \subsection{Hausdorff dimension results: exceptional sets}\label{sec14a} In addition we are able to combine graphs in the infinite family $\mathcal{C}(1, N_k)$ in such a way as to obtain sets $\mathcal{C}(1, M_1, M_2, ..., M_n)$ with distinct $M_i \equiv 1 ~(\bmod \, 3)$ which have Hausdorff dimension further bounded away from zero. In Section \ref{sec51} we establish the following lower bound on the Hausdorff dimension of the generalized exceptional set. We are indebted to A. Bolshakov for observing this result, which improves on Theorem \ref{th413}. \begin{thm}\label{th110} The generalized exceptional set $\mathcal{E}_{\star}$ satisfies $$ \dim_{H} ( \mathcal{E}_{\star}) \ge \frac{1}{2} \log_3 2 \approx 0.315464. $$ In fact, $$ \dim_{H} ( \{ \lambda \in \Sigma_{3, \bar{2}}: \, N_{2k+1} \lambda \in \Sigma_{3, \bar{2}} \,\,\mbox{for all}\, k \ge 1\}) \ge \frac{1}{2} \log_3 2. $$ \end{thm} This result is an immediate corollary of Theorem \ref{th51a}. The proof strongly uses the fact that the integers $N_{2k+1}$ have only two nonzero $3$-adic digits. In Section \ref{sec53} we give numerical improvements, for small $k$, on the lower bounds of \cite{Lag09} for the Hausdorff dimension of the enclosing sets $\mathcal{E}^{(k)}(\mathbb{Z}_3)$ that upper bound that of the exceptional set $\mathcal{E}(\mathbb{Z}_3)$. These improvements come via explicit examples. \subsection{Extensions of Results}\label{sec15} The results of this paper show that the Generalized Exceptional Set $\mathcal{E}_{\star}({\mathbb Z}_3)$ has positive Hausdorff dimension.
Theorem \ref{th110} shows that to make further progress on the Exceptional Set Conjecture one cannot relax the problem to consider general integers $M$; it will be necessary to consider a smaller class of integers that have some special properties in common with the integers $2^k$. In a sequel \cite{ABL13} we investigate another approach towards the Exceptional Set Conjecture. Let $n_3(M)$ denote the number of nonzero ternary digits of $M$. There we ask whether $\dim_{H} \mathcal{C}(1, M)$ necessarily decreases to $0$ as $n_3(M) \to \infty$. It is a known fact that the number of nonzero ternary digits in $(2^n)_3$ goes to infinity as $n \to \infty$, i.e. for each $k \ge 2$ there are only finitely many $n$ with $(2^n)_3$ having at most $k$ nonzero ternary digits. This result was first established in 1971 by Senge and Straus, see \cite{SS71}, and a quantitative version of this assertion follows from results of C. L. Stewart \cite[Theorem 1]{St80}. It follows that if it were true that $\dim_{H} \mathcal{C}(1, M) \to 0$ as $n_3(M) \to \infty$, then the Exceptional Set Conjecture would follow. This paper and its sequel \cite{ABL13} study the Hausdorff dimension of these sets in the special case of multiplicative translates of $3$-adic Cantor sets, but one may also consider many more complicated path set fractals in the sense of \cite{AL12b} in place of the Cantor set. The algorithmic methods of this paper apply to $p$-adic numbers for any prime $p$ and to the $g$-adic numbers considered by Mahler \cite{Mah61} for any integer $g \ge 2$. \subsection{Overview}\label{sec16} Section \ref{sec2} reviews properties of $p$-adic path sets and their symbolic dynamics, drawing on \cite{AL12a} and \cite{AL12b}. The general framework of these papers includes intersections of multiplicative translates of $3$-adic Cantor sets as a special case. Section \ref{sec2} also states a formula for computing the Hausdorff dimension of such sets.
Section \ref{sec3} of this paper gives algorithmic constructions and proves Theorem \ref{th12}. It also presents examples. Section \ref{sec4} studies two infinite families of intersections of $3$-adic Cantor sets and proves Theorems \ref{th17a} and \ref{th14}. Section \ref{sec5} gives applications, which include the lower bound on the Hausdorff dimension of the generalized exceptional set $\mathcal{E}_\star(\mathbb{Z}_3)$ and lower bounds on $\dim_{H}(\mathcal{E}^{(k)}(\mathbb{Z}_3))$ for small $k$. \subsection{Notation} The notation $(m)_3$ means either the base $3$ expansion of the positive integer $m$, or else the $3$-adic expansion of the $3$-adic integer $m$. In the $3$-adic case this expansion is to be read right to left, so that it is compatible with the ternary expansion. That is, $\alpha = \sum_{j=0}^{\infty} a_j 3^j$ would be written $( \cdots a_2 a_1 a_0)_3$.\medskip \section{Symbolic Dynamics and Graph-Directed Constructions} \label{sec2} \subsection{Symbolic Dynamics, Graphs and Finite Automata} \label{sec21} The constructions of this paper are based on the fact that the points in intersections of multiplicative translates of $3$-adic Cantor sets have $3$-adic expansions that are describable in terms of allowable paths generated by finite directed labeled graphs. We use symbolic dynamics on certain closed subsets of the one-sided shift space $\Sigma={\mathcal A}^{{\mathbb N}}$ with fixed symbol alphabet ${\mathcal A}$, which for our application will be specialized to ${\mathcal A}=\{0,1,2\}$. A basic reference for directed graphs and symbolic dynamics, which we follow, is Lind and Marcus \cite{LM95}. By a {\em graph} we mean a finite directed graph, allowing loops and multiple edges. A {\em labeled graph} is a graph assigning labels to each directed edge; these labels are drawn from a finite symbol alphabet. A labeled directed graph can be interpreted as a {\em finite automaton} in the sense of automata theory.
In our applications to $3$-adic digit sets, the labels are drawn from the alphabet ${\mathcal A}= \{ 0, 1, 2\}.$ In a directed graph, a vertex is a {\em source} if all directed edges touching that vertex are outgoing; it is a {\em sink} if all directed edges touching that vertex are incoming. A vertex is {\em essential} if it is neither a source nor a sink, and is called {\em stranded} otherwise. A graph is \emph{essential} if all of its vertices are essential. A graph $G$ is {\em strongly connected} if for each pair of vertices $i, j$ there is a directed path from $i$ to $j$. We let $SC(G)$ denote the set of strongly connected component subgraphs of $G$. We use some basic facts from Perron-Frobenius theory of nonnegative matrices. The {\em Perron eigenvalue} (\cite[Definition 4.4.2]{LM95}) of a nonnegative real matrix $\mathbf{A} \neq 0$ is the largest real eigenvalue $\beta \ge 0$ of $\mathbf{A}$. A nonnegative matrix is {\em irreducible} if for each pair of indices $(i, j)$ some power ${\bf A}^m$ has $(i,j)$-th entry nonzero. A nonnegative matrix ${\bf A}$ is {\em primitive} if some power ${\bf A}^k$ for an integer $k \ge 1$ has all entries positive; primitivity implies irreducibility but not vice versa. The {\em Perron-Frobenius theorem} (\cite[Theorem 4.2.3]{LM95}) for an irreducible nonnegative matrix ${\bf A}$ states that: \begin{enumerate} \item The Perron eigenvalue $\beta$ is geometrically and algebraically simple, and has an everywhere positive eigenvector ${\bf v}.$ \item All other eigenvalues $\mu$ have $|\mu| \le \beta$, so that $\beta = \sigma({\bf A})$, the spectral radius of ${\bf A}$. \item Any other everywhere positive eigenvector must be a positive multiple of ${\bf v}$. \end{enumerate} For a general nonnegative real matrix $\mathbf{A} \neq 0$, the Perron eigenvalue need not be simple, but it still equals the spectral radius $\sigma({\bf A})$ and it has at least one everywhere nonnegative eigenvector.
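These facts are easy to verify numerically for small matrices. A numpy sketch (ours) for the irreducible (indeed primitive) matrix $\big(\begin{smallmatrix}1&1\\1&0\end{smallmatrix}\big)$, whose Perron eigenvalue is the golden ratio:

```python
import numpy as np

# A primitive 0-1 matrix with characteristic polynomial x^2 - x - 1.
A = np.array([[1.0, 1.0], [1.0, 0.0]])
eigvals, eigvecs = np.linalg.eig(A)
i = int(np.argmax(eigvals.real))      # index of the largest real eigenvalue
beta = eigvals.real[i]                # Perron eigenvalue = spectral radius
v = eigvecs[:, i].real
v = v / v[0]                          # rescale so all entries share one sign

print(beta)                           # (1 + sqrt(5))/2 = 1.6180...
print(bool(np.all(v > 0)))            # the Perron eigenvector can be taken > 0
```

Matrices of exactly this type arise below as adjacency matrices of path set presentations.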
We apply this theory to adjacency matrices of graphs. A (vertex-vertex) {\em adjacency matrix} ${\bf A} ={\bf A}_{G}$ of the directed graph $G$ has entry $a_{ij}$ counting the number of directed edges from vertex $i$ to vertex $j$. The adjacency matrix is irreducible if and only if the associated graph is strongly connected, and we also call the graph {\em irreducible} in this case. Here primitivity of the adjacency matrix of a directed graph $G$ is equivalent to the graph being strongly connected and aperiodic, i.e. the greatest common divisor of its (directed) cycle lengths is $1$. For an adjacency matrix of a graph containing at least one directed cycle, its Perron eigenvalue is necessarily a real algebraic integer $\beta \ge 1$ (see Lind \cite{Lin84} for a characterization of these numbers). \subsection{$p$-Adic path sets, sofic shifts and $p$-adic path set fractals} \label{sec21b} Our basic objects are special cases of the following definition. A {\em pointed graph} is a pair $({\mathcal G}, v)$ consisting of a directed labeled graph ${\mathcal G} =(G, \mathcal{L})$ and a marked vertex $v$ of ${\mathcal G}$. Here $G$ is a (directed) graph and $\mathcal{L}$ is an assignment of a label $\ell$ to each directed edge $e = (v_1, v_2)$ of $G$, yielding labeled edges $(v_1, v_2, \ell)$; no two such triples are the same (but multiple edges and loops are otherwise permitted). \begin{defn} \label{de211} Given a pointed graph $({\mathcal G}, v)$, its associated \emph{path set} ${\mathcal P} = X_\mathcal{G}(v) \subset {\mathcal A}^{{\mathbb N}}$ is the set of all infinite one-sided symbol sequences $(x_0, x_1, x_2, ...) \in {\mathcal A}^{{\mathbb N}}$, giving the successive labels of all one-sided infinite walks in $\mathcal{G}$ issuing from the distinguished vertex $v$. Many different $(\mathcal{G}, v)$ may give the same path set ${\mathcal P}$, and we call any such $(\mathcal{G},v)$ a \emph{presentation} of ${\mathcal P}$.
\end{defn} An important class of presentations has the following extra property. We say that a directed labeled graph ${\mathcal G} =(G, \mathcal{L})$ is {\em right-resolving} if for each vertex of ${\mathcal G}$ all directed edges outward have distinct labels. (In automata theory ${\mathcal G}$ is called a {\em deterministic automaton}.) One can show that every path set has a right-resolving presentation. Note that the labeled graph ${\mathcal G}$ without a marked vertex determines a {\em one-sided sofic shift} in the sense of symbolic dynamics, as defined in \cite{AL12a}. This sofic shift comprises the set union of the path sets at all vertices of ${\mathcal G}$. Path sets are closed sets in the shift topology, but are in general non-invariant under the one-sided shift operator. Those path sets ${\mathcal P}$ that are invariant are exactly the one-sided sofic shifts \cite[Theorem 1.4]{AL12a}. We study the path set concept in symbolic dynamics in \cite{AL12a}. The collection of path sets $X:= X_{({\mathcal G}, v_0)}$ in a given alphabet is closed under finite union and intersection (\cite{AL12a}). The symbolic dynamics analogue of Hausdorff dimension is topological entropy. The {\em topological entropy} $H_{top}(X)$ of a path set $X$ is given by $$ H_{top}(X) := \limsup_{n \to \infty} \frac{1}{n} \log N_n(X), $$ where $N_n(X)$ counts the number of distinct blocks of symbols of length $n$ appearing in elements of $X$. The topological entropy is easy to compute for a right-resolving presentation. By \cite[Theorem 1.13]{AL12a}, it is \begin{equation} \label{top-entropy} H_{top}(X) = \log \beta \end{equation} where $\beta$ is the Perron eigenvalue of the adjacency matrix ${\bf A}={\bf A}_G$ of the underlying directed graph $G$ of ${\mathcal G}$, i.e. the spectral radius of ${\bf A}$. \subsection{$p$-Adic Symbolic Dynamics and Graph Directed Constructions}\label{sec23a} We now suppose ${\mathcal A} = \{0, 1,2, ..., p-1\}$.
We can view the elements of a path set $X$ on this alphabet geometrically as describing the digits in the $p$-adic expansion of a $p$-adic integer. This is done using a map $\phi: {\mathcal A}^{{\mathbb N}} \to {\mathbb Z}_p$ from symbol sequences into ${\mathbb Z}_p$. We call the resulting image set $K = \phi(X)$ a \emph{$p$-adic path set fractal}. Such sets are studied in \cite{AL12b}, where they are related to graph-directed fractal constructions. The class of $p$-adic path set fractals is closed under $p$-adic addition and multiplication by rational numbers $r \in {\mathbb Q}$ that lie in ${\mathbb Z}_p$ (\cite{AL12b}). It is possible to compute the Hausdorff dimension of a $p$-adic path set fractal directly from a suitable presentation of the underlying path set $X=X_{{\mathcal G}}(v)$. We will use the following result. \begin{prop}\label{pr22a} Let $p$ be a prime, and $K$ a set of $p$-adic integers whose allowable $p$-adic expansions are described by the symbolic dynamics of a $p$-adic path set $X_K$ on symbols $\mathcal{A} =\{ 0, 1, 2, \cdots, p-1\}$. Let $(\mathcal{G},v_0)$ be a presentation of this path set that is right-resolving. (1) The map $\phi_p: \mathbb{Z}_p \rightarrow [0,1]$ taking $\alpha= \sum_{k=0}^{\infty}{a_k p^k} \in \mathbb{Z}_p$ to the real number with base $p$ expansion $\phi_p(\alpha) :=\sum_{k=0}^\infty \frac{a_k}{p^{k+1}}$ is a continuous map, and the image of $K$ under this map, $K':= \phi_p(K) \subset [0,1]$, is a graph-directed fractal in the sense of Mauldin-Williams. (2) The Hausdorff dimension of the $p$-adic path set fractal $K$ is \begin{equation} \dim_{H}(K) = \dim_{H}(K') = \log_p \beta, \end{equation} where $\beta$ is the spectral radius of the adjacency matrix ${\bf A}$ of $G$. \end{prop} \begin{proof} These results are proved in \cite[Section 2]{AL12b}. \end{proof} In this paper we treat the case $p=3$ with ${\mathcal A} = \{ 0, 1, 2\}$.
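The map $\phi_p$ of Proposition \ref{pr22a}(1) is immediate to implement on finite digit truncations. A tiny Python sketch (ours; the helper name is an illustration, not notation from \cite{AL12b}):

```python
def phi_p(digits, p):
    """Real-number image of a p-adic integer with the given digits
    a_0, a_1, ... (a finite truncation): the sum of a_k / p^(k+1)."""
    return sum(a / p**(k + 1) for k, a in enumerate(digits))

# The 3-adic integer ...111 (all digits 1), which equals -1/2 in Z_3,
# maps toward sum_{k >= 0} 3^-(k+1) = 1/2.
approx = phi_p([1] * 40, 3)
print(approx)
```

Truncating to $n$ digits determines $\phi_p(\alpha)$ to within $p^{-n}$, which is what makes the graph-directed real fractal $K'$ a faithful geometric picture of $K$.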
The $3$-adic Cantor set is a $3$-adic path set fractal, so these general properties above guarantee that the intersection of a finite number of multiplicative translates of $3$-adic Cantor sets will itself be a $3$-adic path set fractal $K$, generated from an underlying path set. To do calculations with such sets we will need algorithms for converting presentations of a given $p$-adic path set to presentations of new $p$-adic path sets derived by the operations above. The $p$-adic arithmetic operations are treated in \cite{AL12b} and union and intersection are treated in \cite{AL12a}. \section{Structure of Intersection Sets $\mathcal{C}(1, M_1, M_2, ..., M_n)$} \label{sec3} We show that the sets $\mathcal{C}(1, M_1,\ldots , M_n)$ consist of those $3$-adic integers whose $3$-adic expansions are describable as path sets $X(1, M_1, \cdots , M_n)$. We also present an algorithm which, when given the data $(M_1, ...,M_n)$ as input, produces as output a presentation $({\mathcal G}, v_0)$ of the path set $X(1,M_1,\ldots,M_n)$. \subsection{Constructing a path set presentation $X(1, M)$}\label{sec31} We describe an algorithmic procedure to obtain a path set presentation $X(1, M)$ for the $3$-adic expansions of elements in $\mathcal{C}(1, M)$. Since $\mathcal{C}(1, 3^j M) = \mathcal{C}(1, M)$, we may reduce to the case $M \not\equiv 0~(\bmod ~3)$, and since $\mathcal{C}(1, M) = \{0\}$ if $M \equiv 2 ~(\bmod\, 3)$ it suffices to consider the case $M \equiv 1 \, (\bmod \, 3)$. \begin{thm}\label{th31n} For $M \ge 1$, with $M \equiv 1 ~(\bmod \, 3)$, the set $\mathcal{C}(1, M) = \Sigma_3 \cap \frac{1}{M} \Sigma_3$ has $3$-adic expansions given by a path set $X(1, M)$ which has an algorithmically computable path set presentation $(\mathcal{G}, v_0)$, in which the vertices $v_{m}$ are labeled with a subset of the integers $0 \le m \le \lfloor \frac{1}{2} M \rfloor$, always including $m=0$, and of cardinality at most $\lfloor \frac{M}{2}\rfloor$.
This presentation is right-resolving, connected and essential. \end{thm} \begin{proof} The labeled graph $\mathcal{G} = (G, \mathcal{L})$ will have edge labels drawn from $\{0, 1\}$ and the vertices $v_N$ of the underlying directed graph $G$ will be labeled by a subset of the integers $N$ satisfying $0 \le N \le M+1.$ The marked vertex $v_0$ corresponds to $N=0$ and is the starting vertex of the algorithm. The idea is simple. Suppose that $$ \alpha:= \sum_{j=0}^{\infty} a_j 3^j \in \Sigma_3 \cap \frac{1}{M} \Sigma_3. $$ Here all $a_j \in \{0, 1\}$ and in addition $$ M \alpha = \sum_{j=0}^{\infty} b_j 3^j \in \Sigma_3. $$ Suppose the first $n$ digits $$\alpha_n = \sum_{j=0}^{n-1} a_j 3^j$$ are chosen. Since $M \equiv 1 ~(\bmod \, 3)$ this uniquely specifies the first $n$ digits of $$ M \alpha_n := \sum_{j=0}^{m+n-1} b_j^{(n)} 3^j, $$ namely $$ b_j^{(n)} = b_j \,\, \mbox{for} \quad 0 \le j \le n-1, $$ which have $b_j \in \{0, 1\},$ for $0 \le j \le n-1.$ Here the remaining digits $b_{n+k}^{(n)}$ for $0 \le k \le m-1$ are unrestricted, with $$ m =\lfloor \log_3 M \rfloor +1. $$ We have followed a path in the graph $G$ corresponding to edges labeled $(a_0, a_1, ..., a_{n-1})$. The vertex we arrive at after these steps will be labeled by the value of the ``carry-digit'' part of $M \alpha_n$, which is $$ N= \sum_{j =n}^{m+n -1} b_j^{(n)} 3^{j-n}. $$ The value of the bottom $3$-adic digit $b_n^{(n)}$ of $N$ will determine the allowable exit edges from vertex $v_N$, and the labels of the vertices reached. The requirement is that the next digit $a_{n}$ satisfy \begin{equation}\label{allowable0} a_n + b_n^{(n)} \equiv 0, 1 ~(\bmod \, 3). \end{equation} If such a value is chosen, then we will be able to create a valid $\alpha_{n+1}$ and $\beta_{n+1} := M \alpha_{n+1}$ will have $$ b_{n}^{(n+1)} \equiv a_n + b_n^{(n)} ~(\bmod \, 3).
$$ There always exists at least one exit edge from each reachable vertex $v_N$, since for $b_n^{(n)}=0$ the admissible $a_n =0, 1$; for $b_n^{(n)}=1$ the only admissible $a_n = 0$, and for $b_n^{(n)} =2$ the only admissible $a_n=1$, in order that the digits $a_{n}, b_{n}$ both belong to $\{0, 1\}$. The important point is that the vertex label $N$ is all that must be remembered to decide on an admissible exit edge in the next step, since its bottom digit determines the allowable exit edge values $a \in \{0, 1\}$ by requiring \begin{equation}\label{allowable} a+ N \equiv 0, 1 \, ~(\bmod \, 3), \end{equation} and for an exit edge labeled $a$ one can determine the new vertex label $v_{N'}$ as \begin{equation} \label{update-step} N' := \lfloor \frac{N + M a}{3} \rfloor. \end{equation} To the graph $G$ one adds a directed edge for each allowable value $a_n=0$ or $1$ from $v_N$ to $v_{N'}$ labeled by $a_n$. Now one sees that there are only finitely many vertices $v_N$ that can be reached from the vertex $v_0$. One proves by induction on the number of steps $n$ taken that any reachable vertex $v_N$ has vertex label $$ 0 \le N \le \lfloor \frac{M}{2} \rfloor. $$ This holds for the initial vertex, while for the induction step, we obtain from \eqref{update-step} that $$ N' \le \frac{N+ Ma}{3} \le \frac{ M/2 + M}{3} \le \frac{M}{2}. $$ Thus the process of constructing the graph will halt. It is easily seen that the presentation $\mathcal{G}= (G, v_0)$ obtained this way has the desired properties. \begin{enumerate} \item[(1)] The graph $G$ is right-resolving because every vertex has exit edges with distinct edge-labels by construction. \item[(2)] The graph $G$ is essential because every vertex has at least one admissible exit edge, as shown above. \item[(3)] The graph is connected since we include in it only vertices reachable from $v_0$.
\end{enumerate} Since $G$ is essential, $\mathcal{G}$ is a presentation of a certain $3$-adic path set via the correspondence taking infinite walks beginning at the $v_0$-state in $\mathcal{G}$ to the label sequences of the edges traversed. Denote this path set $X_{\mathcal{G},0}$. It remains to prove that this is the path set $X(1,M)$ corresponding to $\mathcal{C}(1, M)$, which is the claim that $$ X_{\mathcal{G},0} = X(1,M). $$ To prove the claim, let $\Phi : X_{\mathcal{G},0} \rightarrow \mathbb{Z}_3$ be the map \[\cdots a_2 a_1 a_0 \mapsto \sum_{k=0}^{\infty}{a_k3^k}.\] $\Phi$ is clearly an injection. $\Phi(X_{\mathcal{G},0}) \subset \mathcal{C}(1,M)$: Since $ \cdots a_2 a_1 a_0 \in X_{\mathcal{G},0}$ is a word in the full shift on $\{0,1\}$, $\Phi(\cdots a_2 a_1 a_0) = \sum_{k=0}^{\infty}{a_k3^k}$ omits the digit 2, so that $\Phi(X_{\mathcal{G},0}) \subset \Sigma_3$. But the algorithm was constructed specifically so that, given a path $\pi = a_l a_{l-1} \cdots a_2 a_1 a_0$ in $\mathcal{G}$ originating at 0, there is an edge labeled $a_{l+1} \in \{0,1\}$ from the terminal vertex $t(\pi)$ if and only if each digit of the 3-adic expansion of $M \cdot \big(\sum_{k=0}^{l+1}{a_k3^k}\big)$ which cannot be altered by any potential $(l+2)$-nd digit is either 0 or 1. This shows both that $\Phi(X_{\mathcal{G},0}) \subset \frac{1}{M} \Sigma_3$ and $\mathcal{C}(1,M) \subset \Phi(X_{\mathcal{G},0})$, so that $\Phi : X_{\mathcal{G},0} \rightarrow \mathcal{C}(1,M)$ is a bijection. Assigning the appropriate metric to $X_{\mathcal{G},0}$ makes $\Phi$ an isomorphism, proving the claim. \end{proof} We obtain an algorithm to construct $\mathcal{G}= (G, v_0)$ based on the construction above.\medskip \noindent {\bf Algorithm A} {\em (Algorithmic Construction of Path Set Presentation $X(1, M)$).} \begin{enumerate} \item (Initial Step) Start with initial marked vertex $v_0$, and initial vertex set $I_{0} := \{ v_0\}$.
Add an exit edge with edge label $0$ giving a self-loop to $v_0$, and add another exit edge with edge label $1$ going to new vertex $v_m$ with vertex label $m := \lfloor M/3 \rfloor$. Add these two edges and their labels to form (labeled) edge table $E_1$. Form the new vertex set $I_{1} := \{ v_m\}$ and the current vertex set $V_1 := \{v_0, v_m\}$, and go to Recursive Step with $j=1$. \item (Recursive step) Given value $j$, a nonempty new vertex set $I_j$ of level $j$ vertices, a current vertex set $V_j$ and current edge set $E_j$. At step $j+1$ determine all allowable exit edge labels from vertices $v_N$ in $I_j$, using the criterion \eqref{allowable}, and compute vertices reachable by these exit edges, with reachable vertex labels computed by update equation \eqref{update-step}. Add these new edges and their labels to the current edge set to make updated current edge set $E_{j+1}$. Collect all vertices reached that are not in current vertex set $V_j$ into a new vertex set $I_{j+1}$. Update current vertex set $V_{j+1} = V_j \cup I_{j+1}.$ Go to test step. \item (Test step). If the new vertex set $I_{j+1}$ is empty, halt, with the complete presentation ${\mathcal G} = (G, v_0)$ given by sets $V_{j+1}, E_{j+1}$. If $I_{j+1}$ is nonempty, reset $j \mapsto j+1$ and go to Recursive Step. \end{enumerate} The correctness of the algorithm follows from the discussion above. \subsection{Constructing a path set presentation $X(1,M_1,\ldots,M_n)$} \label{sec31b} Given integers $1 \leq M_1 < \ldots < M_n$, we now have a way to construct graph presentations of the path sets $X(1,M_i)$ for each $i$. Since \[X(1,M_1,\ldots,M_n) = \bigcap_{i=1}^n X(1,M_i),\] we need to know how to combine these graphs. Recall the following definition from Lind and Marcus \cite{LM95}: \begin{defn} Let $\mathcal{G}_1$ and $\mathcal{G}_2 $ be labeled graphs with the same alphabet $\mathcal{A}$, and let their underlying graphs be $G_1 = (\mathcal{V}_1,\mathcal{E}_1)$ and $G_2 = (\mathcal{V}_2,\mathcal{E}_2)$.
The label product $\mathcal{G}_1 \star \mathcal{G}_2$ of $\mathcal{G}_1$ and $\mathcal{G}_2$ has underlying graph $G$ with vertex set $\mathcal{V} = \mathcal{V}_1 \times \mathcal{V}_2$ and edge set $\mathcal{E} = \{(e_1,e_2) \in \mathcal{E}_1 \times \mathcal{E}_2 : e_1 \text{ and } e_2 \text{ have the same label}\}$; the edge $(e_1,e_2)$ carries this common label. \end{defn} In \cite[Proposition 4.3]{AL12a}, we show that if $(\mathcal{G}_i,v_i)$ is a graph presentation of the path set $\mathcal{P}_i$, then $(\mathcal{G}_1 \star \mathcal{G}_2,(v_1,v_2))$ is a graph presentation for $\mathcal{P}_1 \cap \mathcal{P}_2$. It follows that we can form a presentation of $\mathcal{C}(1,M_1, \cdots , M_n)$ as the label product \[(\mathcal{G},v) = (\mathcal{G}_1 \star \mathcal{G}_2 \star \cdots \star \mathcal{G}_n, (v_1,v_2,\ldots, v_n)),\] where $(\mathcal{G}_i,v_i)$ is the presentation of $\mathcal{C}(1,M_i)$ just constructed. \begin{thm}\label{th33n} For $1 < M_1< M_2 < \cdots < M_n$, with all $M_i \equiv 1 ~(\bmod \, 3)$, the set $$ \mathcal{C}(1, M_1, M_2, \cdots, M_n) = \bigcap_{i=1}^n \mathcal{C}(1, M_i) = \Sigma_3 \cap (\bigcap_{i=1}^n \frac{1}{M_i} \Sigma_3), $$ has $3$-adic expansions of its elements given by a path set $X(1, M_1, M_2, \cdots, M_n)$. This path set has an algorithmically computable presentation $(\mathcal{G}, v_{\bf 0})$, in which the vertices $v_{\bf{N}}$ are labeled with a subset of integer vectors ${\bf{N}} =(N_1, N_2, ..., N_n)$ with $0 \le N_i \le \frac{1}{2} M_i$, always including the zero vector $\bf{0}$. The presentation has at most $\prod_{i=1}^n ( 1+ \lfloor \frac{1}{2}M_i \rfloor)$ vertices in the underlying graph. This presentation is right-resolving, connected and essential. \end{thm} \begin{proof} The presentation is obtained by recursively applying the label product construction to the presentations of the path sets $X(1, M_i)$ associated to $\mathcal{C}(1, M_i)$, see Algorithm B below. Each step preserves the properties of the presentation graph being right-resolving, connected and essential.
The number of states of the label product construction is at most the product of the numbers of states in the two presentations being combined. By Theorem \ref{th31n}, the presentation of $\mathcal{C}(1, M_i)$ has at most $(1 + \lfloor \frac{1}{2} M_i\rfloor )$ vertices. The bound given follows by induction on the successive label product constructions. \end{proof} \noindent{\bf Algorithm B} {\em (Algorithmic Construction of Path Set Presentation $X(1, M_1, \ldots, M_n)$).} \begin{enumerate} \item (Initial Step) Construct presentations $\mathcal{G}_i = (G_i, {\mathcal L}_i)$ of the path sets $X(1, M_i)$ associated to $\mathcal{C}(1, M_i)$ for $1 \le i \le n$, using Algorithm A. Apply the label product construction to form $\mathcal{H}_2 := \mathcal{G}_1 \star \mathcal{G}_2$. \item For $2 \le i \le n-1$, apply the label product construction to form $$\mathcal{H}_{i+1}=\mathcal{H}_{i} \star \mathcal{G}_{i+1}.$$ Halt when $\mathcal{H}_n$ is computed. \end{enumerate} \subsection{Path Set Characterization of $\mathcal{C}(1, M_1, \ldots, M_n)$} \label{sec32} From Theorem \ref{th33n} we easily derive the following result. \begin{thm} \label{th11} For any integers $1\leq M_1<\ldots<M_n$, let $$\mathcal{C}(1,M_1,\ldots,M_n):= \Sigma_{3} \cap \frac{1}{M_1}\Sigma_{3} \cap \ldots \cap \frac{1}{M_n}\Sigma_{3}. $$ This is the set of all $3$-adic integers $\lambda \in \Sigma_{3}$ such that each $M_j \lambda$ omits the digit $2$ in its $3$-adic expansion. Then: (1) The complete set of the $3$-adic expansions of numbers in the set $\mathcal{C}(1,M_1,\ldots,M_n)$ is a path set in the alphabet ${\mathcal A}= \{ 0, 1, 2\}.$ (2) The Hausdorff dimension of $\mathcal{C}(1,M_1,\ldots,M_n)$ is $\log_3 \beta$, where $\log \beta$ is the topological entropy of this path set. Here $\beta$ necessarily satisfies $1 \le \beta \le 2$, and $\beta$ is a Perron number, i.e.
it is a real algebraic integer $\beta \ge 1$ such that all its other algebraic conjugates satisfy $|\sigma(\beta)| < \beta.$ \end{thm} \begin{proof} Theorem~\ref{th33n} gives an explicit construction of a presentation $({\mathcal G}, v)$ showing that the set of $3$-adic expansions of elements of $\mathcal{C}(1,M_1,\ldots,M_n)$ is a path set. By Proposition ~\ref{pr22a} the Hausdorff dimension of $\mathcal{C}(1,M_1,\ldots,M_n)$ is $\log_3 \beta$, where $\beta$ is the spectral radius of the adjacency matrix $A$ of the underlying graph $G$. Since $A$ is a 0-1 matrix, by Perron-Frobenius theory the spectral radius is itself an eigenvalue of $A$, necessarily a nonnegative real number $\beta$. It is a root of the characteristic polynomial of $A$, a monic polynomial over $\mathbb{Z}$, so that $\beta$ is necessarily an algebraic integer. By construction, the sum of the entries of any row in $A$ is either 1 or 2, so that we also have $1 \leq \beta \leq 2$. \end{proof} \begin{rem} The adjacency matrix $A$ in the sets above {\em need not be irreducible}. Example \ref{example33} below presents the graph for $\mathcal{C}(1, 19)$, which has a reducible adjacency matrix $A$. Here the underlying graph $G$ has two strongly connected components. \end{rem} Combining the results above establishes Theorem \ref{th12}. \begin{proof}[Proof of Theorem \ref{th12}] (1) This follows from Theorem \ref{th31n} and Theorem \ref{th33n}, with the algorithm for constructing the presentation of the path set $X(1, M_1, M_2, \cdots , M_n)$ given by combining {Algorithm A} and { Algorithm B}. (2) This follows from Theorem \ref{th11}. \end{proof} \subsection{Examples}\label{sec33} We present several examples of path set presentations. \begin{exmp}\label{example31} The $3$-adic Cantor set $\Sigma_3 = \mathcal{C}(1) = \mathcal{C}(1, 1)$ has a path set presentation $ ({\mathcal G}, v_0)$ pictured in Figure \ref{fig31}. It is the full shift on two symbols, and the initial vertex is the vertex labeled $0$.
The underlying graph $G$ of ${\mathcal G}$ is a double cover of a one-vertex graph with two symbols. The advantage of the graph $G$ pictured is that a path in it is completely determined by the sequence of vertex symbols that it passes through. \begin{figure}[ht]\label{fig31} \centering \psset{unit=1pt} \begin{pspicture}(-80,-40)(80,40) \newcommand{\noden}[2]{\node{#1}{#2}{n}} \noden{0}{-35,0} \noden{1}{35,0} \bcircle{n0}{90}{0} \dline{n0}{n1}{1}{0} \bcircle{n1}{270}{1} \end{pspicture} \newline \hskip 0.5in {\rm FIGURE 3.1.} Path set presentation of Cantor shift $\Sigma_3=\mathcal{C}(1)$. The marked vertex is $0$. \newline \newline \end{figure} \end{exmp} \pagebreak \begin{exmp}\label{example32} A path set presentation of $\mathcal{C}(1,7)$, with $7=(21)_3$, is shown in Figure \ref{fig32}. The vertex labeled $0$ is the marked initial state. \begin{figure}[ht]\label{fig32} \centering \psset{unit=1pt} \begin{pspicture}(-80,-50)(80,150) \newcommand{\noden}[2]{\node{#1}{#2}{n}} \noden{0}{0,100} \noden{1}{50,50} \noden{2}{-50,50} \noden{10}{0,0} \bcircle{n0}{0}{0} \bcircle{n10}{180}{1} \bline{n0}{n2}{1} \bline{n2}{n10}{1} \bline{n10}{n1}{0} \bline{n1}{n0}{0} \end{pspicture} \newline \hskip 0.5in {\rm FIGURE 3.2.} Path set presentation of $\mathcal{C}(1,7)$. The marked vertex is $0$. \end{figure} The graph in Figure \ref{fig32} has adjacency matrix \begin{equation*} \bf{A} = \left(\begin{array}{cccc} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 0 \\ \end{array}\right), \end{equation*} which has Perron-Frobenius eigenvalue $\beta = \frac{1 + \sqrt{5}}{2}$, so \[\dim_H(\mathcal{C}(1,7)) = \log_3 \left( \frac{1 + \sqrt{5}}{2} \right) \approx 0.438018. \] \end{exmp} \begin{exmp}\label{example33} A path set presentation of $\mathcal{C}(1,19)$, with $19 = (201)_3$, is shown in Figure \ref{fig33}. The node labeled $0$ is the marked initial state.
\begin{figure}[ht]\label{fig33} \centering \psset{unit=1pt} \begin{pspicture}(-100,-165)(100,160) \newcommand{\noden}[2]{\node{#1}{#2}{n}} \noden{0}{0,110} \noden{1}{50,55} \noden{10}{50,-55} \noden{100}{0,-110} \noden{22}{-50,-55} \noden{20}{-50,55} \noden{21}{0,-25} \noden{2}{0,25} \bcircle{n0}{0}{0} \bcircle{n100}{180}{1} \dline{n2}{n21}{1}{0} \aline{n0}{n20}{1} \aline{n20}{n22}{1} \aline{n22}{n100}{1} \aline{n100}{n10}{0} \aline{n20}{n2}{0} \bline{n1}{n0}{0} \bline{n10}{n21}{1} \bline{n10}{n1}{0} \end{pspicture} \newline \hskip 0.5in {\rm FIGURE 3.3.} Path set presentation of $\mathcal{C}(1,19)$. The marked vertex is $0$. \end{figure} The graph in Figure \ref{fig33} has adjacency matrix \\ \begin{equation*} \bf{A} = \left(\begin{array}{cccccccc}1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 \\0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 \\1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\\end{array}\right),\end{equation*} which has Perron eigenvalue $\beta \approx 1.465571$, so \[\dim_H(\mathcal{C}(1,19)) = \log_3 \beta \approx 0.347934.\] \end{exmp} \begin{exmp}\label{example34} We consider implementation of the algorithm for $\mathcal{C}(1,7,19)$. We start from the presentations of $\mathcal{C}(1, 7)$ and $\mathcal{C}(1, 19)$ in Examples \ref{example32} and \ref{example33}. Taking the label product gives us a presentation of $\mathcal{C}(1,7,19)$, which is shown in Figure \ref{fig34}.
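The whole construction is mechanical, and it is easy to script as a check on these examples. The following Python sketch is an illustration only, not part of the formal development (all function names are ours): it implements the allowability criterion \eqref{allowable} and update rule \eqref{update-step} of Algorithm A together with the label product, and estimates the Perron eigenvalue of the resulting adjacency matrix by power iteration.

```python
import math

def presentation(M):
    """Algorithm A: presentation of the path set X(1, M), for M = 1 (mod 3).

    Vertices are carry values N; an exit edge labeled a in {0, 1} is
    allowable when (a + N) mod 3 is 0 or 1, and leads to the vertex
    N' = (N + M*a) // 3.  Returns {vertex: {edge label: target vertex}}.
    """
    graph, frontier = {}, [0]
    while frontier:
        N = frontier.pop()
        if N in graph:
            continue
        graph[N] = {}
        for a in (0, 1):
            if (a + N) % 3 in (0, 1):          # criterion (allowable)
                graph[N][a] = (N + M * a) // 3  # update step
                frontier.append(graph[N][a])
    return graph

def label_product(G1, G2):
    """Label product: keep pairs of edges carrying the same label.

    (Vertices with no exit edges would need trimming in general;
    that does not arise in the examples checked here.)
    """
    prod, frontier = {}, [(0, 0)]
    while frontier:
        u, v = frontier.pop()
        if (u, v) in prod:
            continue
        prod[(u, v)] = {}
        for a in (0, 1):
            if a in G1[u] and a in G2[v]:
                prod[(u, v)][a] = (G1[u][a], G2[v][a])
                frontier.append(prod[(u, v)][a])
    return prod

def perron_eigenvalue(G, iters=500):
    """Power iteration for the spectral radius of the adjacency matrix."""
    x = {v: 1.0 for v in G}
    beta = 1.0
    for _ in range(iters):
        y = {v: 0.0 for v in G}
        for v, edges in G.items():
            for w in edges.values():
                y[w] += x[v]        # one count per directed edge v -> w
        beta = sum(y.values())      # x was normalized to total mass 1
        x = {v: y[v] / beta for v in G}
    return beta

def hausdorff_dim(G):
    """log_3 of the Perron eigenvalue."""
    return math.log(perron_eigenvalue(G), 3)
```

For $M=7$ this recovers the four vertices of Figure \ref{fig32} and the eigenvalue $\frac{1+\sqrt{5}}{2}$; the label product of the presentations for $M=7$ and $M=19$ recovers the six-vertex graph of Figure \ref{fig34} with $\beta \approx 1.46557$.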
\begin{figure}[ht]\label{fig34} \centering \psset{unit=1.3pt} \begin{pspicture}(-50,-45)(50,155) \newcommand{\nodeq}[2]{\node{#1}{#2}{q}} \nodeq{0-0}{0,115} \nodeq{2-20}{-35,75} \nodeq{10-22}{-35,40} \nodeq{10-100}{0,0} \nodeq{1-10}{35,40} \nodeq{0-1}{35,75} \bcircle{q0-0}{0}{0} \bline{q0-0}{q2-20}{1} \bline{q2-20}{q10-22}{1} \bline{q10-22}{q10-100}{1} \bline{q10-100}{q1-10}{0} \aline{q1-10}{q0-1}{0} \bline{q0-1}{q0-0}{0} \bcircle{q10-100}{180}{1} \end{pspicture} \newline \hskip 0.5in {\rm FIGURE 3.4.} Path set presentation of $\mathcal{C}(1,7,19)$. The marked vertex is $0$. \end{figure} This graph $G$ for $\mathcal{C}(1, 7, 19)$ has adjacency matrix $\bf{A}$ given by: \begin{equation*} \bf{A} = \left(\begin{array}{cccccc} 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ \end{array}\right). \end{equation*} The Perron eigenvalue $\beta \approx 1.46557$ of this matrix is the largest real root of $\lambda^6 -2 \lambda^5 +\lambda^4 -1 = 0$. The Hausdorff dimension of $\mathcal{C}(1,7,19)$ is then \begin{equation} \dim_H(\mathcal{C}(1,7,19)) = \log_3 \beta \approx 0.347934. \end{equation} \end{exmp} \begin{exmp} The set $\mathcal{C}(1,43)$, with $M = 43= (1121)_3$, has $M \equiv \, 1\, (\bmod \, 3)$ but nevertheless has Hausdorff dimension $0$. A presentation of the path set associated to $\mathcal{C}(1,43)$ is given in Figure \ref{fig35}.
\begin{figure}[ht]\label{fig35} \centering \psset{unit=1.3pt} \begin{pspicture}(-80,-15)(80,105) \newcommand{\nodeq}[2]{\node{#1}{#2}{q}} \nodeq{0}{0,90} \nodeq{112}{-45,90} \nodeq{201}{-45,55} \nodeq{20}{-45,0} \nodeq{121}{0,0} \nodeq{2}{0,25} \nodeq{12}{45,0} \nodeq{120}{45,55} \bcircle{q0}{270}{0} \bline{q0}{q112}{1} \bline{q112}{q201}{1} \bline{q201}{q20}{0} \bline{q20}{q121}{1} \aline{q20}{q2}{0} \dline{q121}{q12}{0}{1} \aline{q2}{q120}{1} \aline{q120}{q12}{0} \bline{q120}{q201}{1} \end{pspicture} \bigskip \newline \hskip 0.5in {\rm FIGURE 3.5.} Path set presentation of $\mathcal{C}(1,43)$. The marked vertex is $0$. \newline \newline \end{figure} The graph in Figure \ref{fig35} has four strongly connected components, with vertex sets $\{0\}, \{ 112\}, \{ 2, 120, 201, 20\},$ and $\{ 12, 121\}$ respectively, each of whose underlying path sets has Hausdorff dimension $0$. \end{exmp} \newpage \section{Infinite Families}\label{sec4} \subsection{Basic Properties}\label{sec41} We have the following simple result, showing the influence of the digits in the $3$-adic expansion of $M$ on the size of the sets $\mathcal{C}(1,M)$ and $\mathcal{C}(1, M_1, M_2, \cdots, M_n)$. \begin{thm} \label{th31} (1) If the smallest nonzero $3$-adic digit in the $3$-adic expansion of the positive integer $M$ is $2$, then $\mathcal{C}(1, M)= \{0\}$, and \begin{equation} \dim_{H}( \mathcal{C}(1,M))=0. \end{equation} (2) If positive integers $M_1, M_2 , ..., M_n \in \Sigma_3$ all have the property that their $3$-adic expansions $(M_i)_3$ (equivalently their ternary expansions) contain only digits $0$ and $1$, then \begin{equation} \dim_{H} (\mathcal{C}(1, M_1, M_2, ..., M_n)) >0. \end{equation} \end{thm} \paragraph{\bf Remark.} For neither (1) nor (2) does the converse hold. The example $M=43= (1121)_3$ has $\dim_{H}( \mathcal{C}(1, M))=0$, but its $3$-adic expansion has smallest nonzero digit $1$.
The example $M=64 =(2101)_3$ has $\dim_{H}(\mathcal{C}(1, M))> 0$, but its $3$-adic expansion has a digit $2$. \begin{proof}[Proof of Theorem~\ref{th31}] (1) Suppose the smallest nonzero $3$-adic digit in the $3$-adic expansion of the positive integer $M$ is $2$. Then the graph presentation of the path set $X(1, M)$ associated to $\mathcal{C}(1,M)$ constructed using Algorithm A consists of only the node labeled $0$ and the self-loop labeled $0$ at this node (i.e. $\mathcal{C}(1,M) = \{0\}$), whence $\dim_H (\mathcal{C}(1,M)) = 0$. This holds because the smallest nonzero digit of $MN$ for any nonzero $N \in \Sigma_3$ is $2$, so that $MN \notin \Sigma_3$. (2) Suppose $M_1,\ldots , M_n \in \Sigma_3$ are positive integers so that all of their $3$-adic expansions have only the digits $0$ and $1$. For each $M_i$, let $m_i$ be the largest nonzero ternary position of $M_i$ (i.e. $M_i = 3^{m_i} +$ \emph{lower order terms}). Then in the graph presentation constructed for $X(1,M_i)$ by Algorithm A, the walk starting at the origin, then moving along an edge labeled $1$ (which exists since $(M_i)_3$ omits the digit $2$), then moving along $m_i$ consecutive edges labeled $0$, is a directed cycle at $0$. Since the edge labeled $0$ is a loop at $0$, if we let $m = \max_{1 \leq i \leq n} m_i$, then the graph presentation of the path set $X(1, M_1, \ldots, M_n)$ of $\mathcal{C}(1,M_1,\ldots, M_n)$ has a directed cycle at $0$ of length $m+1$, given by first traversing the edge labeled $1$, then traversing $m$ consecutive edges labeled $0$. This cycle and the loop of length one at $0$ are distinct directed cycles at $0$. It follows that the associated path set has positive topological entropy, and hence $\mathcal{C}(1,M_1, \ldots , M_n)$ has positive Hausdorff dimension by \cite[Theorem 3.1 (iii)]{AL12b}.
\end{proof} \subsection{The family $L_k= (1^k)_3 = \frac{1}{2}(3^{k}-1)$.} \label{sec42} The path set presentations $(\mathcal{G},v_0)$ of the sets $\mathcal{C}(1,L_k)$ are particularly simple to analyze. \begin{thm}\label{th32} (1) For $k \ge 1$, and $L_k = \frac{1}{2}(3^{k}-1)$, there holds \begin{equation} \dim_H(\mathcal{C}(1,L_k)) =\log_3 \beta_k, \end{equation} where $\beta_k$ is the unique real root greater than $1$ of \begin{equation} \lambda^k - \lambda^{k-1} -1 =0. \end{equation} (2) For $k \ge 6$, the values $\beta_k$ satisfy the bounds \begin{equation}\label{3030} 1+ \frac{\log k}{k} - \frac{2 \log\log k}{k} \le \beta_k \le 1 + \frac{\log k}{k}. \end{equation} Consequently, as $k \to \infty$, \begin{equation}\label{hdL} \dim_{H} (\mathcal{C}(1,L_k)) = \frac{\log_3 k}{k} \Big( 1 + O \Big(\frac{\log\log k}{\log k}\Big)\Big). \end{equation} \end{thm} \begin{minipage}{\linewidth} \begin{center} \begin{tabular}{|c | r | r |} \hline \mbox{Path set} & \mbox{Perron eigenvalue} & \mbox{Hausdorff dim} \vline \\ \hline $\mathcal{C}(1,L_1)$ & $2.000000$ & $ 0.630929$ \\ $\mathcal{C}(1,L_2)$ & $1.618034$ & $ 0.438018$ \\ $\mathcal{C}(1,L_3)$ & $1.465571$ & $0.347934$ \\ $\mathcal{C}(1,L_4)$ & $1.380278$ & $0.293358$ \\ $\mathcal{C}(1,L_5)$ & $1.324718$ & $0.255960$ \\ $\mathcal{C}(1,L_6)$ & $1.285199$ & $0.228392$ \\ $\mathcal{C}(1,L_7)$ & $1.255423$ & $0.207052$ \\ $\mathcal{C}(1,L_8)$ & $1.232055$ & $0.189948$ \\ $\mathcal{C}(1,L_9)$ & $1.213150$ & $0.175877$ \\ \hline \end{tabular} \par \bigskip \hskip 0.5in {\rm TABLE 4.1.} Hausdorff dimensions of $\mathcal{C}(1,L_k)$ (to six decimal places) \newline \newline \end{center} \end{minipage} We first analyze the structure of the directed graph $(\mathcal{G}, v_0)$ in this presentation. \begin{prop} \label{pr42a} For $L_k= (1^k)_3 = \frac{1}{2}(3^k -1)$ the path set $\mathcal{C}(1,L_k)$ has a presentation $({\mathcal G}, v_0)$ given by Algorithm A which has exactly $k$ vertices.
The vertices $v_m$ have labels $m=0$ and $m =(1^j)_3$, for $1 \le j \le k-1$. The underlying directed graph $G$ is strongly connected and primitive. \end{prop} \begin{proof} The presentation $(\mathcal{G},v_0)$ of $\mathcal{C}(1,L_k)$ has an underlying directed graph $G$ having $k$ vertices $v_N$ with $N= 0$ and $N= (1^j)_3$ for $1 \le j \le k-1$. The vertex $v_0$ has two exit edges labeled $0$ and $1$, and all other vertices have a unique exit edge labeled $0$. The edges form a self-loop at $0$ labeled $0$, and a directed $k$-cycle, whose vertex labels are $$ 0 \to (1^{k-1})_3 \to (1^{k-2})_3 \to \cdots \to (1^2)_3 \to (1)_3 \to 0. $$ This cycle certifies strong connectivity of the graph $G$, and in it all edge labels are $0$ except the edge $0 \to (1^{k-1})_3$ labeled $1$. Primitivity follows because $G$ is strongly connected and has a cycle of length $1$ at vertex $(0)_3$. \end{proof} \begin{proof}[Proof of Theorem \ref{th32}] (1) By appropriate ordering of the vertices, the adjacency matrix $\bf{A}$ of $\mathcal{G}$ is the $k \times k$ matrix \begin{equation*} \bf{A} = \left(\begin{array}{ccccc} 1 & 1 & 0 & \ldots &0\\ 0 & 0 & 1 & \ddots & \vdots\\ \vdots & \vdots & \ddots & \ddots & 0 \\ 0 & 0 & \ldots & 0 & 1 \\ 1 & 0 & \ldots& 0 & 0 \end{array}\right). \end{equation*} The characteristic polynomial of this matrix is \[ p_k(\lambda) :=\det ( \lambda\bf{I} - \bf{A}) = \det \left(\begin{array}{ccccc} \lambda - 1 & -1 & 0 & \ldots & 0 \\ 0 & \lambda & -1 &\ddots & \vdots \\ \vdots & \vdots & \ddots & \ddots & 0 \\ 0 & 0 & \ldots & \lambda & -1 \\ -1 & 0 & \ldots & 0 & \lambda \end{array}\right). \] Expansion of this determinant by minors on the first column yields \begin{align} p_k(\lambda) = (\lambda - 1) \lambda^{k-1} + (-1)^{k-1} (-1) (-1)^{k-1} = \lambda^k - \lambda^{k-1} - 1. \end{align} The Perron eigenvalue of the nonnegative matrix $\bf{A}$ will be a positive real root $\beta_k \geq 1$ of $p_k(\lambda)$.
By \eqref{top-entropy} the topological entropy of the path set $X(1, L_k)$ associated to $\mathcal{C}(1, L_k)$ is $\log \beta_k$, while by Proposition \ref{pr22a} the Hausdorff dimension of the $3$-adic path set fractal $\mathcal{C}(1,L_k)$ itself is $\log_3 \beta_k$. (2) We estimate the size of $\beta_k$. There is at most one real root $\beta_k \ge 1$ since for $\lambda > 1-1/k$ one has \begin{eqnarray*} p_k^{'}(\lambda) &= & k \lambda^{k-1} - (k-1)\lambda^{k-2} = \lambda^{k-2} (k\lambda - (k-1)) > 0. \end{eqnarray*} For the lower bound, we consider $p_k(\lambda)$ for $\lambda >1$ and define a variable $y >0$ by $\lambda= 1+ \frac{y}{k}$, and set $x := \lambda^k>1$, noting that $ x = (1+\frac{y}{k})^k < e^y. $ Now \[ \lambda^{k-1}+1 = \frac{x}{1+ \frac{y}{k}} +1 \ge x\left(1- \frac{y}{k}\right) +1 = x+\left(1- \frac{xy}{k}\right), \] which exceeds $x$ whenever $xy < k$. Thus we have $p_k(1+ \frac{y}{k}) < 0$ whenever $xy < ye^y \le k$. The choice $y =\log k - 2 \log\log k$ gives, for $k \ge 3$, \[ y e^y \le \log k (e^{\log k - 2 \log\log k}) \le \frac{k}{\log k} \le k. \] Thus we have, for $k \ge 3$, $p_k( 1+ \frac{\log k}{k}- 2\frac{\log\log k}{k}) <0$, so \[ \beta_k \ge 1+ \frac{\log k}{k}- 2\frac{\log\log k}{k}, \] which is the lower bound in \eqref{3030}. For the upper bound, it suffices to show $p_k (1+ \frac{\log k}{k}) >0$ for $k \ge 6$. We wish to show $ (1+ \frac{\log k}{k})^{k-1} (\frac{\log k}{k}) >1$ for $k \ge 6$. This becomes $(1+ \frac{\log k}{k})^{k-1} > \frac{k}{\log k}$, and on taking logarithms requires $$ (k -1)\log (1 + \frac{\log k}{k}) > \log k - \log\log k. $$ Using the approximation $\log (1+x) \ge x - \frac{1}{2} x^2$ valid for $0< x< 1,$ we verify this inequality holds for $k \ge 6$, and the upper bound in \eqref{3030} follows. The asymptotic estimate \eqref{hdL} for the Hausdorff dimension of $\mathcal{C}(1, L_k)$ immediately follows by taking logarithms to base $3$ of the estimates above.
\end{proof} The results above imply Theorem \ref{th17a} in the introduction. \begin{proof}[Proof of Theorem \ref{th17a}] Assertion (1) follows from Proposition \ref{pr42a}. Assertions (2) and (3) follow from Theorem \ref{th32}. \end{proof} \subsection{The family $N_k= (10^{k-1}1)_3=3^k+1$.}\label{sec43} We prove the following result. \begin{thm}\label{th34} For every integer $k \geq 1$, and $N_k=3^k+1=(10^{k-1}1)_3$, \begin{equation} \dim_H(\mathcal{C}(1,N_k)) = \dim_{H} \mathcal{C}(1, (10^{k-1}1)_3) = \log_3\bigg(\frac{1 + \sqrt{5}}{2}\bigg) \approx 0.438018. \end{equation} \end{thm} To prove this result we first characterize the presentation $\mathcal{G}= (G, v_0)$ associated to $N_k$ by the construction of Theorem \ref{th31n}. \begin{prop} \label{pr45} For $N_k= 3^k +1$ the path set $\mathcal{C}(1,N_k)$ has a presentation ${\mathcal G} =(G, v_0)$ given by Algorithm A with the following properties. (1) The vertices $v_m$ have labels $m$ that comprise those integers $0 \le m \le \frac{1}{2}(3^k-1)$ whose $3$-adic expansion $(m)_3$ omits the digit $2$. (2) The directed graph $G$ has exactly $2^k$ vertices. (3) The directed graph $G$ is strongly connected and primitive. \end{prop} \begin{proof} (1) Any vertex $v_m$ reachable from $v_0$ has a $3$-adic expansion (equivalently ternary expansion) $(m)_3$ that omits the digit $2$, and has at most $k$ $3$-adic digits. This is proved by induction on the number of steps $n$ taken. The base case has the node $(0)_3$. For the induction step, every vertex in the graph has an exit edge labeled $0$, and vertices with labels $m \equiv 0 ~(\bmod \, 3)$ also have an exit edge labeled $1$. The exit edges labeled $0$ map $m= (b_{k-1} b_{k-2} \cdots b_{1}b_{0})_3 $ to $m'= (0 b_{k-1} b_{k-2} \cdots b_{2}{b_1})_3$. The exit edges labeled $1$ map $m$ to $m' = (1 b_{k-1} b_{k-2} \cdots b_2 b_1)_3$.
For both types of exit edges the new vertex reached at the next step omits the digit $2$ from its $3$-adic expansion, completing the induction step. (2) There are exactly $2^k$ possible such vertex labels $m$ in which $(m)_3$ omits the digit $2$. Call such vertex labels {\em admissible}. The largest such $m$ is $\frac{1}{2}(3^k -1).$ (3) To show the graph $G_k$ is strongly connected it suffices to establish that: \begin{enumerate} \item[(R1)] Every vertex $v_m$ with admissible label $m$ is reachable by a directed path in $G$ from the initial vertex $0 = (00\cdots 0)_3$. \item[(R2)] All admissible vertices $v_m$ have a directed path in $G$ from $v_m$ to $v_0$. \end{enumerate} Note that (R1), (R2) together imply that $G$ is strongly connected. To show (R1), write $m = (b_{k-1} \cdots b_0)_3$, with all $b_j =0$ or $1$, and let $i$ be the smallest index with $b_i=1$. Starting from $v_0$, we may follow a directed series of exit edges labeled in order $b_i, b_{i+1}, b_{i+2}, \cdots, b_{k-1}$ to arrive at $v_m$. Such edges exist in $G$, because all intermediate vertices $v_{m'}$ reached along this path have $m' \equiv 0 ~(\bmod 3)$, so that exit edges labeled both $0$ and $1$ are available at each step. Indeed, after $j$ steps the vertex label $(m_j)_3$ has its $k-j$ initial $3$-adic digits equal to $0$, and each intermediate step has $j \le k-1-i \leq k-1$, so at least one initial digit $0$ remains. To show (R2), we observe that from any vertex $v_m$, following a path of exit edges all labeled $0$ will eventually arrive at the vertex $v_0$. This is permissible since $(m)_3$ has all digits $0$ or $1$, so an exit edge labeled $0$ exists at every vertex. Now $G_k$ is strongly connected, and it is primitive since it has a loop at vertex $0$. This completes the proof. \end{proof} To obtain an adjacency matrix for this graph, we must choose a suitable ordering of the vertex labels. Order the vertices of $\mathcal{G}$ recursively as follows: the $(0^k)_3$-vertex is first $I_1$, and the $(10^{k-1})_3$-vertex is second $I_2$.
Now, suppose that at step $j$ we have ordered the vertices $I_1,\ldots, I_m$, in that order, with $m=2^j$, the vertices $I_{\frac{m}{2}+1},\ldots,I_m$ being those added at step $j$. Then for $1 \le j < k$, we assert that there will be precisely $m$ further vertices, all distinct from $I_1,\ldots,I_m$, to which some $I_i$ with $\frac{m}{2} < i \le m$ has an out-edge. We can label these $J_{i1},J_{i2}$ for $\frac{m}{2} < i \le m$ so that $J_{i1}$ has an in-edge labeled 0 from $I_i$, and $J_{i2}$ has an in-edge labeled 1 from $I_i$. Assuming this assertion, at step $j+1$ we expand our ordering to $I_1, \ldots, I_m,J_{(\frac{m}{2}+1)1},J_{(\frac{m}{2}+1)2},\ldots,J_{m1},J_{m2}$. \begin{prop} \label{pr47} The ordering of the vertices above is valid, and the adjacency matrix $\bf{A}$ of the underlying graph $G$ of $\mathcal{G}$ is the following $2^k \times 2^k$ matrix $\mathbf{A} = (a_{ij})$: \begin{equation*} a_{ij} = \left\{ \begin{array}{rl} 1 & \text{if } 1 \le i \le 2^{k-1} \text{ and } j \in \{2i-1,2i\} ; \\ 1 & \text{if } 2^{k-1} < i \le 2^k \text{ and } j = 2(i -2^{k-1})-1; \\ 0 & \text{otherwise}. \end{array}\right. \end{equation*} This description is consistent and exhaustive, characterizing $\mathbf{A}$. \end{prop} To illustrate this, we have for $k =2$ \begin{equation*} \bf{A} = \left(\begin{array}{cccc} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ \end{array}\right), \end{equation*} while for $k = 3$ we have \begin{equation*} \bf{A} = \left(\begin{array}{cccccccc} 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ \end{array}\right). \end{equation*} \begin{proof} First, we address the ordering of the vertices of $\mathcal{G}$. According to the prescription of the proposition, $I_1 = (0)_3$, $I_2 = (10^{k-1})_3$.
In the next step, there is an out-edge labeled $0$ from vertex $(10^{k-1})_3$ to $(10^{k-2})_3$, and an out-edge labeled $1$ from vertex $(10^{k-1})_3$ to vertex $(110^{k-2})_3$. This gives $I_3 = (10^{k-2})_3$, $I_4 = (110^{k-2})_3$. In general, for $k_1 + \cdots + k_r < k$ with all $k_i$ nonnegative, if we have a vertex $(1^{k_1} 0^{k_2} 1^{k_3} \cdots 1^{k_r} 0^{k -\Sigma k_i})_3$, it has an out-edge labeled $1$ to a vertex $(1^{k_1 + 1} 0^{k_2} 1^{k_3} \cdots 1^{k_r} 0^{k - 1-\Sigma k_i})_3$ and an out-edge labeled $0$ to a vertex $(1^{k_1} 0^{k_2} 1^{k_3} \cdots 1^{k_r} 0^{k - 1-\Sigma k_i})_3$. On the other hand, a vertex labeled $(1^{k_1} 0^{k_2} 1^{k_3} \cdots 1^{k_r})_3$ ending in $1$ has a single out-edge labeled $0$ to the vertex $(1^{k_1} 0^{k_2} 1^{k_3} \cdots 1^{k_r -1})_3$. Thus, if an edge-walk originating at the $0$-vertex has label $(e_r e_{r-1} \cdots e_1 )_3$, the terminal vertex of this edge walk is the vertex $(e_r e_{r-1} \cdots e_1 0^{k -r})_3$. Now, for any vertex ending in $0$, edges labeled $0$ and $1$ are both admissible, which means that an edge walk labeled $e_1 e_2 \cdots e_{k}$ is admissible for any values $e_j \in \{0,1\}$, $1 \leq j \leq k$. But this, then, says that all possible vertex labels from $\{0,1\}^{k}$ are achieved. Moreover, we showed above that a vertex with label from $\{0,1\}^{k}$ has out-edges only to other vertices labeled from $\{0,1\}^{k}$, so this is precisely the set of vertices of $\mathcal{G}$. The $r$-th step of the vertex ordering procedure adds precisely those vertices which end in $0^{k -r}$, of which there are $2^{r-1} = 2 \cdot 2^{r-2}$. The procedure ends at the $k$-th step with those vertices which end in $1$. In all, there are $2^{k}$ vertices, one for each label from $\{0,1\}^{k}$. Now we can understand the definition of the coefficients $a_{ij}$ of the adjacency matrix $\mathbf{A}$ of the underlying graph $G$ of $\mathcal{G}$.
Vertex $(0)_3$ maps into itself and vertex $(10^{k-1})_3$, which are ordered first and second with respect to the ordering. Thus $a_{11} = a_{12} = 1$, $a_{1j} = 0$ for $j > 2$. Now suppose a vertex is ordered $i^{th}$ ($I_i$) at the $r^{th}$ stage, and $r \leq k-1$, so that not all vertices have yet been ordered. There are $2^r$ vertices ordered so far (so $1 \leq i \leq 2^r$), and the $(r+1)^{st}$ stage of the construction orders the next $2^r$ vertices precisely so that the out-edges from vertex $I_i$ go to vertices $I_{2i-1}$ and $I_{2i}$. This gives the prescription for $a_{ij}$ for $1 \leq i \leq 2^{k-1}$. Observe that the vertices $I_{2^{k-1}+1}, I_{2^{k-1} + 2}, \ldots, I_{2^k}$ have labels ending in $1$. Hence, such a vertex labeled $m$ has a single out-edge to the vertex labeled $(m-1)/3$. But if $m$ is the label of $I_{2^{k-1}+r}$, then $(m-1)/3$ is the label of $I_{2r-1}$. But $(2^{k-1}+r,2r-1)$ can be rewritten $(i,2(i-2^{k-1})-1)$. This gives the result. \end{proof} We are now ready to prove Theorem ~\ref{th34}. \begin{proof}[Proof of Theorem ~\ref{th34}] Let $\mathbf{A}_k$ be the adjacency matrix of the presentation of $\mathcal{C}(1,N_k)$ constructed via our algorithm. We directly find a strictly positive eigenvector ${\bf v}_k$ of $\mathbf{A}_k$ having $\mathbf{A}_k{\bf v}_k^T = (\frac{1+\sqrt{5}}{2}){\bf v}_k^T$. Here ${\bf v}_k$ is a $1 \times 2^k$ row vector, with ${\bf v}_k^T$ its transpose, and let ${\bf v}_k^{(j)} $ denote its $j$-th entry. The Perron-Frobenius Theorem \cite[Theorem 4.2.3]{LM95} then implies that $\alpha = \frac{1+\sqrt{5}}{2}$ is the Perron eigenvalue of $\mathbf{A}_k$. Theorem ~\ref{th12} will then give us that \[\dim_H(\mathcal{C}(1,N_k)) =\log_3 \bigg(\frac{1+\sqrt{5}}{2}\bigg).\] Let $\phi = \frac{1+ \sqrt{5}}{2}$ be the golden ratio.
We define the vector ${\bf v}_k$ recursively as follows: \begin{enumerate} \item[(1)] ${\bf v}_1 = (\phi,1) = (\phi^1,\phi^0)$; \item[(2)] If ${\bf v}_{j-1} = (\phi^{k_1},\phi^{k_2},\ldots , \phi^{k_{2^{j-1}}})$, then \[ {\bf v}_j = (\phi^{k_1+1},\phi^{k_2+1},\ldots,\phi^{k_{2^{j-1}}+1},\phi^{k_1},\phi^{k_2},\ldots , \phi^{k_{2^{j-1}}}). \] \end{enumerate} Note that ${\bf v}_j$ is obtained from ${\bf v}_{j-1}$ by adjoining $\phi {\bf v}_{j-1}$ to the front of ${\bf v}_{j-1}$. We need now to check that $\mathbf{A}_k {\bf v}_k^T= \phi {\bf v}_k^T$. We will argue by induction on $k$. The base case is easy. Now observe that if we write \begin{equation*} \bf{A}_k = \left(\begin{array}{c} T_k \\ B_k \\ \end{array}\right) \end{equation*} for $T_k$ and $B_k$ each $2^{k-1} \times 2^k$ blocks, then we have \begin{equation*} B_{k+1} = \left(\begin{array}{cc} B_k & 0 \\ 0 & B_k \\ \end{array}\right) \end{equation*} and \begin{equation*} T_{k+1} = \left(\begin{array}{cc} T_k & 0 \\ 0 & T_k \\ \end{array}\right). \end{equation*} It follows easily from this and the definition of the vectors ${\bf v}_k$ that if $\mathbf{A}_k {\bf v}_k^T = \phi {\bf v}_k^T$, then $\mathbf{A}_{k+1} {\bf v}_{k+1}^T = \phi {\bf v}_{k+1}^T$. This proves the theorem. \end{proof} \begin{proof}[Proof of Theorem \ref{th14}.] Here (1) follows from Proposition \ref{pr45}, and (2) follows from Theorem \ref{th34}. \end{proof} \subsection{Hausdorff dimension bounds for $\mathcal{C}(1, M_1, ..., M_n)$ with $M_i$ in families }\label{sec45} The path set structures of each of the three infinite families are compatible with each other, as a function of $k$, so that the associated $\mathcal{C}(1, M_1, ..., M_n)$ all have positive Hausdorff dimension. We treat them separately.
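Before treating the families, the adjacency-matrix prescription of Proposition \ref{pr47} and the eigenvector recursion above are easy to sanity-check numerically. The following sketch (ours, not part of the paper; it assumes NumPy) builds $\mathbf{A}_k$ from the $a_{ij}$ prescription and ${\bf v}_k$ from the recursion, and verifies $\mathbf{A}_k {\bf v}_k^T = \phi\, {\bf v}_k^T$ for small $k$:

```python
# Sanity check (ours): A_k from the a_{ij} prescription of Proposition 4.7,
# v_k from the recursion, and the eigenvalue relation A_k v_k^T = phi v_k^T.
import numpy as np

PHI = (1 + 5 ** 0.5) / 2  # the golden ratio

def adjacency(k):
    n = 2 ** k
    A = np.zeros((n, n))
    for i in range(1, n + 1):            # 1-based indices, as in the proposition
        if i <= n // 2:
            A[i - 1, 2 * i - 2] = 1      # j = 2i - 1
            A[i - 1, 2 * i - 1] = 1      # j = 2i
        else:
            A[i - 1, 2 * (i - n // 2) - 2] = 1   # j = 2(i - 2^{k-1}) - 1
    return A

def eigvec(k):
    v = np.array([PHI, 1.0])             # v_1 = (phi, 1)
    for _ in range(k - 1):               # v_j = (phi * v_{j-1}, v_{j-1})
        v = np.concatenate([PHI * v, v])
    return v

for k in range(1, 7):
    A, v = adjacency(k), eigvec(k)
    assert np.allclose(A @ v, PHI * v)
```

For $k=2$ this reproduces the $4\times 4$ matrix displayed above, with ${\bf v}_2 = (\phi^2,\phi,\phi,1)$.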
\begin{thm} \label{thm-family1} For the family $L_k= \frac{1}{2}( 3^{k} -1) = (1^k)_3$, for $1 \leq k_1 < \ldots < k_n$, the pointed graph $\mathcal{G}(0,\ldots,0)$ of the path set $X(1, L_{k_1}, \ldots, L_{k_n})$ associated to $\mathcal{C}(1,L_{k_1},\ldots,L_{k_n})$ is isomorphic to the pointed graph $(\mathcal{G}_{k_n},0)$ presenting $\mathcal{C}(1,L_{k_n})$. In particular \begin{equation} \dim_H(\mathcal{C}(1,L_{k_1},\ldots,L_{k_n})) = \dim_H(\mathcal{C}(1,L_{k_n})). \end{equation} \end{thm} \begin{proof} The presentation $(\mathcal{G}_k,0)$ of $\mathcal{C}(1,L_k)$ constructed with Algorithm A consists of a self-loop at the $0$-vertex and a cycle of length $k$ through the $0$-vertex. Taking in Algorithm B the label product $\mathcal{G}_{k_1} \star \cdots \star \mathcal{G}_{k_n}$ gives a graph $\mathcal{G}$ with a self-loop at the $(0,\ldots, 0)$-vertex and a cycle \begin{equation*} \xymatrix{ (0,\ldots,0) \ar[r]^{1} & (1^{k_1-1}, \ldots,1^{k_n-1}) \ar[r]^0 &(1^{k_1-2},\ldots,1^{k_n-2}) \ar[r]^0 & \cdots \\ & \quad \cdots \ar[r]^0 & (0,\ldots, 0 , 1) \ar[r]^0 & (0,\ldots,0). \\} \end{equation*} This cycle has length $k_n$. We can then see that the graph $\mathcal{G}$ is isomorphic to $\mathcal{G}_{k_n}$ by an isomorphism sending $(0,\ldots,0)$ to $0$. \end{proof} We next treat multiple intersections drawn from the second family $N_k$. \begin{thm}\label{th413} For the family $N_k= 3^k+1= (10^{k-1}1)_3$ the following hold. (1) For $1 \le k_1 < k_2 < \cdots < k_n$, one has \begin{equation}\label{lowerbound2} \dim_H(\mathcal{C}(1,N_{k_1}, N_{k_2},\ldots,N_{k_n})) \geq \dim_H(\mathcal{C}(1, L_{k_n+1})). \end{equation} Equality holds when $k_j = j$ for $1 \le j \le n$. (2) For fixed $n \ge 1$, there holds \begin{equation}\label{3500} \liminf_{k \rightarrow \infty} \dim_H(\mathcal{C}(1,N_k,\ldots,N_{k+n-1})) \geq \frac{1}{2} (\log_3 2) \approx 0.315464.
\end{equation} In particular, $\Gamma_{\star} \ge \frac{1}{2} (\log_3 2).$ \end{thm} \begin{proof} (1) It is easy to see that the set $\mathcal{C}(1,N_{k_1}, N_{k_2},\ldots,N_{k_n})$ contains the set $$ Y_{k_n} := \{ \lambda = \sum_{j=1}^{\infty} 3^{\ell_1 + \cdots + \ell_j} \in {\mathbb Z}_{3, \bar{2}}: \, \mbox{all}~~\ell_j \ge k_n +1\}. $$ (Here we allow finite sums, corresponding to some $\ell_j = +\infty$.) This fact holds by observing that if $\lambda \in Y_{k_n}$ then $N_{k_j} \lambda \in \Sigma_{3, \bar{2}}$ for $1 \le j \le n$, because $$ N_{k_j}\lambda = \Big(\sum_{i=1}^{\infty} 3^{\ell_1 + \cdots + \ell_i}\Big) + \Big(\sum_{i=1}^{\infty} 3^{\ell_1 + \cdots + \ell_i+k_j}\Big) $$ and the $3$-adic addition has no carry operations since all exponents are distinct. The set $Y_{k_n}$ is a $3$-adic path set fractal and it is easily checked to be identical with $\mathcal{C}(1, L_{k_n+1})$, using the structure of its associated graph. This proves \eqref{lowerbound2}. To show equality holds, one must show that allowable sequences for each of $N_{1}, N_{2}, \ldots ,N_{n}$ require gaps of size at least $n+1$ between each successive nonzero $3$-adic digit in an element of $\mathcal{C}(1, N_1, N_2, \ldots, N_n).$ This can be done by induction on the position of the current nonzero $3$-adic digit; we omit details. (2) We study the symbolic dynamics of the elements of the underlying path sets in $\mathcal{C}(1,N_{k+j-1})$, for $1 \le j \le n$, given in Theorem \ref{th34}, and use this to lower bound the Hausdorff dimension. \medskip \noindent {\bf Claim.} {\em The $3$-adic path set underlying $\mathcal{C}(1,N_k,\ldots,N_{k+n-1})$ contains all symbol sequences which, when subdivided into successive blocks of length $2k + n$, have every such block of the form $$(00 \cdots 00 a_k a_{k-1} \cdots a_3 a_2 1)_3 \,\, \mbox{with each} \,\, a_i \in \{0,1\}. $$ } \begin{proof}[Proof of claim.]
It suffices to show that all sequences split into blocks of length $2k+n$ of the form $(00 \cdots 00 a_k a_{k-1} \cdots a_3 a_2 1)_3$ occur in $\mathcal{C}(1,N_j)$ for each $k \leq j \leq k+n-1$, since this will imply the statement for the label product. Consider the presentation $\mathcal{G}_j$ of $\mathcal{C}(1,N_j)$ given by our algorithm. Beginning at the $0$-vertex, an edge labeled $1$ takes us to the state $(10^{j-1})_3$. From a vertex whose label ends in $0$, one may traverse an edge with label $1$ or $0$. But if we are at a vertex whose label is $a0$, an edge labeled $0$ takes us to a vertex labeled $a$, and an edge labeled $1$ takes us to a vertex labeled $1a$ (this is specific to the case of $N_j$). In other words, we apply the truncated shift map to our vertex label and either concatenate with $1$ on the left or not. It follows that from the vertex $(10^{j-1})_3$ the next $(j-1)$ edges traversed may be labeled either $0$ or $1$. At this point the initial $1$ from $(10^{j-1})_3$ has moved to the far right of our vertex label. Therefore, our choice is restricted: we must traverse an edge labeled $0$. Since our vertex label, whatever it is, consists of only $0$'s and $1$'s, we can in any case traverse $j$ or more consecutive edges labeled $0$ to get back to the $0$-vertex. Thus, first traversing an edge labeled $1$, then traversing edges labeled $0$ or $1$ freely for the next $(k-1)$ steps, then traversing $k+n$ edges labeled $0$ and returning to the $0$-vertex, is possible in the graph $\mathcal{G}_j$ for each $k \leq j \leq k+n-1$. It follows that all sequences of the desired form are in each $\mathcal{C}(1,N_j)$, and hence in $\mathcal{C}(1,N_k, \ldots, N_{k+n-1})$, proving the claim. \end{proof} With this claim in hand, we see that there are at least $2^{k-1}$ admissible $(2k + n)$-blocks in $\mathcal{C}(1,N_k,\ldots,N_{k+n-1})$.
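The claim can also be confirmed by direct computation for small parameters. The sketch below (ours, not from the paper) takes $k=3$, $n=2$, assembles $\lambda$ from a few successive blocks of the stated form, and checks that $N_j\lambda$ omits the ternary digit $2$:

```python
# Finite check (ours) of the Claim with k = 3, n = 2: every lambda built from
# blocks (0^{k+n} a_k ... a_2 1)_3 has N_j * lambda free of the digit 2.
from itertools import product

def ternary_ok(x):
    # True if the ternary expansion of x uses only the digits 0 and 1.
    while x:
        x, r = divmod(x, 3)
        if r == 2:
            return False
    return True

k, n = 3, 2
B = 2 * k + n                     # block length
# value of one block: digit 1 at position 0, free digits a_2..a_k above it
block_values = [1 + sum(a * 3 ** (i + 1) for i, a in enumerate(bits))
                for bits in product((0, 1), repeat=k - 1)]
for blocks in product(block_values, repeat=3):        # three consecutive blocks
    lam = sum(v * 3 ** (B * i) for i, v in enumerate(blocks))
    assert all(ternary_ok((3 ** j + 1) * lam) for j in range(k, k + n))
print("claim verified for k=3, n=2")
```

Each product $N_j v$ here occupies fewer than $2k+n$ ternary digits, so successive blocks never interact under $3$-adic addition.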
We conclude that the maximum eigenvalue $\beta_{n,k}$ of the adjacency matrix of the graph $\mathcal{G}_{n,k}$ of $\mathcal{C}(1, N_k, N_{k+1}, \cdots, N_{k+n-1})$ must satisfy $(\beta_{n,k})^{2k+n} \ge 2^{k-1}.$ This yields $$ \beta_{n,k} \ge 2^{\frac{k-1}{2k+n}}, $$ and hence $\liminf_{k \to \infty} \beta_{n,k} \ge \sqrt{2}$. The Hausdorff dimension formula in Proposition \ref{pr22a} then yields \begin{equation} \liminf_{k \rightarrow \infty} \dim_{H}\big(\mathcal{C}(1,N_k,\ldots,N_{k+n-1})\big) \ge \liminf_{k \rightarrow \infty} \log_3 \beta_{n,k} \ge \frac{1}{2} \log_3 2, \end{equation} as asserted. The lower bound $\Gamma_{\star} \ge \frac{1}{2} \log_3 2$ follows immediately from this bound, see \eqref{Gamma-star}. \end{proof} \section{Applications } \label{sec5} We give several applications to improving bounds for the Hausdorff dimension of various sets. \subsection{Hausdorff dimension of the generalized exceptional set $\mathcal{E}_\star(\mathbb{Z}_3)$} \label{sec51} Theorem \ref{th413} (2) shows that there are arbitrarily large families $\mathcal{C}(1, N_{k_1}, ..., N_{k_n})$ having Hausdorff dimension uniformly bounded below. If one properly restricts the choice of the $N_{k_j}$ then one can obtain an infinite set in this way, as was pointed out to us by Artem Bolshakov. It yields a nontrivial lower bound on the Hausdorff dimension of the generalized exceptional set. \begin{thm}\label{th51a} {\em (Lower Bound for Generalized Exceptional Set)} (1) The subset $Y$ of the $3$-adic Cantor set $\Sigma_{3, \bar{2}}$ given by $$ Y := \{ \lambda =\sum_{j=0}^{\infty} a_j 3^j : \mbox{all}~~ a_{2k} \in \{0, 1\}, \,\, \mbox{all}\,\, a_{2k+1}=0\} \subset \mathbb{Z}_3 $$ is a $3$-adic path set fractal having $\dim_{H}(Y) = \frac{1}{2} \log_3 2 \approx 0.315464$.
This set satisfies $$ Y \subset \mathcal{C}(1, N_{2k+1}), \,\, \mbox{ for all} \,\,k \ge 0, $$ where $N_k = 3^k+1$, and in consequence $$Y \subseteq \bigcap_{k=1}^{\infty} \mathcal{C}(1, N_{2k+1}).$$ (2) One has \begin{equation}\label{bdd1} \dim_{H} \Big( \{ \lambda \in \Sigma_{3, \bar{2}}: \,\, N_{2k+1} \lambda \in \Sigma_{3, \bar{2}} \,\, \mbox{for all}\, k \ge 0\}\Big) \ge \dim_{H}(Y) = \frac{1}{2} \log_3 2. \end{equation} Therefore \begin{equation} \label{bdd2} \dim_{H} (\mathcal{E}_{\ast}) \ge \frac{1}{2} \log_3 2 \approx 0.315464. \end{equation} \end{thm} \begin{proof} (1) The $3$-adic path set fractal property of $Y \subset \Sigma_{3, \bar{2}}$ is easily established, since the underlying graph of its symbolic dynamics is pictured in Figure ~\ref{fig51}. The Perron eigenvalue of its adjacency matrix is $\sqrt{2}$, and its Hausdorff dimension is $\frac{1}{2} \log_3 2$ by Proposition \ref{pr22a}. \begin{figure}[ht]\label{fig51} \centering \psset{unit=1.3pt} \begin{pspicture}(-125,-20)(125,30) \newcommand{\nodeq}[2]{\node{#1}{#2}{q}} \nodeq{0}{-125,0} \nodeq{1}{125,0} \aline{q0}{q1}{1} \dline{q0}{q1}{0}{0} \end{pspicture} \bigskip \newline \hskip 0.5in {\rm FIGURE 5.1.} Presentation of $Y$. \newline \newline \end{figure} The elements of $Y$ can be rewritten in the form $\lambda =\sum_{j=0}^{\infty} b_{2j} 3^{2j},$ with all $b_{2j} \in \{0, 1\}$. We then have $$ N_{2k+1} \lambda = \sum_{j=0}^{\infty} b_{2j}3^{2j} + \sum_{j=0}^{\infty} b_{2j} 3^{2j+2k+1} \in \Sigma_{3, \bar{2}}, $$ and the inclusion in the Cantor set $\Sigma_{3, \bar{2}}$ follows because the sets of $3$-adic exponents in the two sums on the right side are disjoint, so there are no carry operations in combining them under $3$-adic addition. This establishes that $Y \subset \mathcal{C}(1, N_{2k+1})$. (2) All elements $\lambda \in Y$ have $N_{2k+1} \lambda \in \Sigma_{3, \bar{2}}$ for all $k \ge 0$.
Thus $$ Y \subset \{ \lambda \in \Sigma_{3, \bar{2}}: \,\, N_{2k+1} \lambda \in \Sigma_{3, \bar{2}} \,\,\mbox{for all}\, k \ge 0\}. $$ The result \eqref{bdd1} follows, from which \eqref{bdd2} is immediate. \end{proof} Theorem \ref{th110} is included as part (2) of this result. \subsection{Bounds for approximations to the exceptional set $\mathcal{E}(\mathbb{Z}_3)$}\label{sec53} We conclude with numerical results concerning Hausdorff dimensions of the upper approximation sets $\mathcal{E}^{(k)}(\mathbb{Z}_3)$ to the exceptional set $\mathcal{E}(\mathbb{Z}_3)$. Recall that the only powers of $2$ that are known to have ternary expansions that omit the digit $2$ are $2^0= 1=(1)_3, 2^2= 4 = (11)_3$, and $2^8=256= (100111)_3$. In contrast $2^4 = 16 = (121)_3$ and $2^6=64= (2101)_3$. We begin with empirical results about the sets $\mathcal{C}(1,2^{m_1},\ldots,2^{m_n})$ obtained via Algorithm A. Here we note the necessary condition $2^{m_i} \equiv 1 ~(\bmod \, 3)$, i.e. that all the exponents $m_i$ be even, for positive Hausdorff dimension.
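The assertion above about small powers of $2$ is easy to re-verify by machine; a short sketch (ours):

```python
# Empirical check (ours, not exhaustive): among small powers of 2, only
# 2^0, 2^2 and 2^8 have ternary expansions omitting the digit 2.
def ternary_digits(x):
    digits = []
    while x:
        x, r = divmod(x, 3)
        digits.append(r)
    return digits[::-1] or [0]

hits = [m for m in range(60) if 2 not in ternary_digits(2 ** m)]
print(hits)   # -> [0, 2, 8]
```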
\begin{minipage}{\linewidth}\label{tab54} \begin{center} \begin{tabular}{|c |r @{.} l |} \hline Set & \multicolumn{2}{c}{Hausdorff dimension} \vline \\ \hline $\mathcal{C}(1,2^2)$ & 0&438018 \\ $\mathcal{C}(1,2^4)$ & 0&255960 \\ $\mathcal{C}(1, 2^6)$ & 0&278002 \\ $\mathcal{C}(1, 2^8)$ & 0&287416 \\ $\mathcal{C}(1, 2^{10})$ & 0&215201 \\ $\mathcal{C}(1,2^{12})$ & 0&244002 \\ $\mathcal{C}(1,2^{14})$ & 0&267112 \\ \hline $\mathcal{C}(1,2^2,2^4)$ & 0&\\ $\mathcal{C}(1,2^2,2^6)$ & 0& \\ $\mathcal{C}(1,2^2,2^8)$ & 0&228392 \\ $\mathcal{C}(1,2^2,2^{10})$ & 0& \\ $\mathcal{C}(1,2^4,2^6)$ & 0 &\\ $\mathcal{C}(1,2^4,2^8)$ & 0 &\\ $\mathcal{C}(1,2^4,2^{10})$ & 0& \\ $\mathcal{C}(1,2^6,2^8)$ & 0 &\\ $\mathcal{C}(1,2^6,2^{10})$ & 0& \\ $\mathcal{C}(1,2^8,2^{10})$ & 0& \\ \hline $\mathcal{C}(1,2^2,2^8,2^{12})$ & 0& \\ $\mathcal{C}(1,2^2,2^8,2^{14})$ & 0 &\\ $\mathcal{C}(1,2^2,2^8,2^{16})$ & 0 & \\ \hline \end{tabular} \par \bigskip \hskip 0.5in {\rm TABLE 5.2.} Hausdorff dimension of $\mathcal{C}(1,2^{m_1},\ldots,2^{m_k})$ (to six decimal places) \newline \newline \end{center} \end{minipage} \begin{thm} \label{th15} The following bounds hold for sets $\mathcal{E}^{(k)}(\mathbb{Z}_3)$. \begin{eqnarray*} \dim_H(\mathcal{E}^{(2)}(\mathbb{Z}_3)) &\geq &\log_3\bigg(\frac{1 + \sqrt{5}}{2}\bigg) \approx 0.438018,\\ \dim_H(\mathcal{E}^{(3)}(\mathbb{Z}_3)) & \ge& \log_3 \beta_1 \approx 0.228392, \end{eqnarray*} where $\beta_1 \approx 1.28520$ is a root of $\lambda^6-\lambda^5 -1=0$. \end{thm} \begin{proof} We have \begin{eqnarray*} \dim_H(\mathcal{E}^{(2)}(\mathbb{Z}_3)) &=& \sup_{0\leq m_1 < m_2} \dim_H(\mathcal{C}(2^{m_1},2^{m_2}))\\ & \ge & \dim_H(\mathcal{C}(2^0,2^2)) = \log_3\left(\frac{1 + \sqrt{5}}{2}\right). \end{eqnarray*} The bound for $N_1= 2^2 = (11)_3$ follows from Theorem \ref{th14}, taking $k=1$. 
We also have \begin{eqnarray*} \dim_H(\mathcal{E}^{(3)}(\mathbb{Z}_3)) &=& \sup_{0\leq m_1 < m_2<m_3} \dim_H(\mathcal{C}(2^{m_1},2^{m_2}, 2^{m_3}))\\ & \ge & \dim_H(\mathcal{C}(2^0,2^2,2^8)) = \log_3 \beta_1 \approx 0.228392, \end{eqnarray*} where $\beta_1 \approx 1.28520$ is a root of $\lambda^6- \lambda^5 -1=0.$ \end{proof} It is unclear whether $\dim_H(\mathcal{E}^{(k)}(\mathbb{Z}_3))$ is positive for any $k \ge 4$. Currently $\mathcal{C}(1,2^2, 2^8)$ is the only component of $\mathcal{E}^{(3)}(\mathbb{Z}_3)$ known to have positive Hausdorff dimension. At present we do not know of any set $\mathcal{C}(1,2^{m_1},2^{m_2},2^{m_3})$ that has positive Hausdorff dimension. \subsection*{Acknowledgments} The authors thank Artem Bolshakov for making the key observation that Theorem \ref{th51a} should hold. W. Abram acknowledges the support of an NSF Graduate Research Fellowship and of the University of Michigan, where this work was carried out.
https://arxiv.org/abs/1309.1749

\title{Nodal theorems for the Dirac equation in $d \ge 1$ dimensions}

\begin{abstract}
A single particle obeys the Dirac equation in $d \ge 1$ spatial dimensions and is bound by an attractive central monotone potential that vanishes at infinity. In one dimension, the potential is even, and monotone for $x\ge 0.$ The asymptotic behavior of the wave functions near the origin and at infinity is discussed. Nodal theorems are proven for the cases $d=1$ and $d > 1$, which specify the relationship between the numbers of nodes $n_1$ and $n_2$ in the upper and lower components of the Dirac spinor. For $d=1$, $n_2 = n_1 + 1,$ whereas for $d >1,$ $n_2 = n_1 +1$ if $k_d > 0,$ and $n_2 = n_1$ if $k_d < 0,$ where $k_d = \tau(j + \frac{d-2}{2}),$ and $\tau = \pm 1.$ This work generalizes the classic results of Rose and Newton in 1951 for the case $d=3.$ Specific examples are presented with graphs, including Dirac spinor orbits $(\psi_1(r), \psi_2(r)), r \ge 0.$
\end{abstract}
\section{Introduction} Rose and Newton \cite{rose,rose_book} proved a nodal theorem for a single fermion moving in three dimensions in an attractive central potential which was not too singular at the origin. The Dirac spinor for this problem is constructed with two radial functions, $\psi_1$ and $\psi_2$, with respective numbers of nodes $n_1$ and $n_2.$ The nodal theorem is to the effect that if $\kappa >0,$ then $n_2 = n_1+1,$ and if $\kappa < 0,$ then $n_2 = n_1,$ where $\kappa = \pm(j+\half).$ It is the purpose of the present paper to generalize this nodal theorem to $d \ge 1$ dimensions. We find it convenient to consider the cases $d=1$ and $d >1$ separately, since the parities of the radial functions are important only for $d=1$: thus we arrive at two theorems. In one dimension we find $n_2 = n_1+1,$ and in $d>1$ dimensions we prove a theorem which corresponds to the Rose-Newton result with $\kappa$ replaced by $k_d = \tau(j + \frac{d-2}{2}),$ where $\tau = \pm 1.$ It is always interesting to know {\it a priori} the structure of the solution to a problem. For want of suitable theorems, the nodal structures of bound-state solutions of the Dirac equation have often been tacitly assumed in the recent derivations of relativistic comparison theorems, to the effect that $V_1 \le V_2 \Rightarrow E_1 \le E_2.$ Earlier, such theorems were not expected since the Dirac Hamiltonian is not bounded below and consequently it is difficult to characterize its discrete spectrum variationally: thus comparison theorems are not immediately evident. The first results \cite{hallct1} were for ground-state problems in $d=3$ dimensions where the nodal structure was given by the Rose-Newton theorem. This Dirac comparison theorem was later extended to the ground state in $d\ge 1$ dimensions by G. Chen \cite{chen1, chen2}.
More recent results were based on monotonicity arguments and yielded comparison theorems for the excited states \cite{hallct2, hallct3}; subsequently, applications to atomic physics \cite{softcore} were considered. These theorems and their application required either the precise nodal structure of the ground state, or the knowledge that, for an interesting class of problems, it made sense to compare the eigenvalues of states with the same nodal structure; in particular, the states needed to be suitably labelled. From a practical perspective, we have found that knowing the nodal structure is very helpful in computing bound-state eigenvalues numerically (by shooting methods) and for plotting the graphs of the radial functions $\{\psi_1(r), \psi_2(r)\}$ or the corresponding `Dirac spinor orbits' $\mb{r}(t) = (\psi_1(t),\psi_2(t)), t\ge 0$. Articles by Leviatan {\it et al.} \cite{gs2, gs3} show how Dirac nodal structure is important for nuclear physics. \section{One--dimensional case $d=1$} A Dirac equation in one spatial dimension, in natural units $\hbar=c=1$, for a potential $V$ which is symmetric with respect to reflection, i.e. $V(-x)=V(x)$, reads \cite{calog}: \begin{equation*} \left(\sigma_1\frac{\partial}{\partial x}-(E-V)\sigma_3+m\right)\psi=0, \end{equation*} where $m$ is the mass of the particle, $\sigma_1$ and $\sigma_3$ are Pauli matrices, and the energy $E$ satisfies $-m<E<m$, \cite{Spectrumd11, Spectrumd12}. Taking the two-component Dirac spinor as $\psi=\left(\begin{array}{c}u_1 \\ u_2\end{array}\right)$, the above matrix equation can be decomposed into a system of first-order linear differential equations \cite{Dombey, Qiong}: \begin{equation}\label{d1l} u_1'=-(E+m-V)u_2, \end{equation} \begin{equation}\label{d1r} u_2'=\ph(E-m-V)u_1, \end{equation} where prime $'$ denotes the derivative with respect to $x$.
For bound states, $u_1$ and $u_2$ satisfy the normalization condition \begin{equation*} (u_1,u_1) + (u_2,u_2) = \int\limits_{-\infty}^{\infty}(u_1^2 + u_2^2)dx = 1. \end{equation*} Now we can state the theorem concerning the number of nodes of the Dirac spinor components for the one-dimensional case. \medskip \noindent{\bf Nodal Theorem in $d=1$ dimension:} ~~{\it We assume $V$ is a negative even potential that is monotone for $x\ge 0$ and satisfies \begin{equation*} \lim_{x\to 0}V(x)=-\nu \qquad \text{and} \qquad \lim_{x\to \pm\infty}V(x)=0, \end{equation*} where $\nu$ is a positive constant. Then $u_1$ and $u_2$ have definite and opposite parities. If $n_1$ and $n_2$ are, respectively, the numbers of nodes of the upper $u_1$ and lower $u_2$ components of the Dirac spinor, then \begin{equation*} n_2=n_1+1. \end{equation*}} \medskip \noindent{\bf Proof:}~~ We suppose that $\{u_1(x), u_2(x)\}$ is a solution of Eqs.(\ref{d1l},~\ref{d1r}). Since $V$ is an even function of $x$, by direct substitution we find that $\{u_1(-x), -u_2(-x)\}$ and $\{-u_1(-x), u_2(-x)\}$ are also solutions of Eqs.(\ref{d1l},~\ref{d1r}) with the same energy $E$. Therefore, by using linear combinations, we see that the solutions $\{u_1(x), u_2(x)\}$ of Eqs.(\ref{d1l},~\ref{d1r}) can be chosen to have definite and necessarily opposite parities, that is to say, $u_1(x)$ is even and $u_2(x)$ is odd, or {\it vice versa}. Also the energy spectrum of bound states is non-degenerate: details of this can be found in Refs. \cite{Coutinho, Qiong}. Now we analyse the asymptotic behavior of $u_1$ and $u_2$. At positive or negative infinity, after some rearrangements, the system of Eqs.(\ref{d1l},~\ref{d1r}) becomes asymptotically \begin{equation*} u_1''=(m^2-E^2)u_1, \end{equation*} \begin{equation*} u_2''=(m^2-E^2)u_2.
\end{equation*} Solutions at infinity are given by \begin{equation*} u_1=b^+_1e^{-x\beta}, \qquad u_2=b^+_2e^{-x\beta}, \end{equation*} and at negative infinity \begin{equation*} u_1=b^-_1e^{x\beta}, \qquad u_2=b^-_2e^{x\beta}, \end{equation*} where $b^\pm_1$ and $b^\pm_2$ are constants of integration and $\beta=\sqrt{m^2-E^2}$. Substitution of these solutions into Eqs.(\ref{d1l},~\ref{d1r}) as $x\longrightarrow\pm\infty$ gives us the values for $b^\pm_2$ \begin{equation*} b^+_2=\frac{b^+_1\beta}{m+E} \qquad\text{and}\qquad b^-_2=-\frac{b^-_1\beta}{m+E}. \end{equation*} Considering the following two limits: \begin{equation*} \lim_{x\to+\infty}\frac{u_2}{u_1}=\frac{\beta}{m+E}>0 \qquad\text{and}\qquad \lim_{x\to-\infty}\frac{u_2}{u_1}=-\frac{\beta}{m+E}<0, \end{equation*} we conclude that at positive infinity $u_1$ and $u_2$ have to vanish with the same signs and with different signs at negative infinity. We note that physics does not favour $\infty$ over $-\infty$: $u_1$ and $u_2$ can be interchanged by choosing a different representation for the matrix spinor equation leading to the coupled equations Eqs.(\ref{d1l},~\ref{d1r}). We now introduce the following functions \begin{equation*} W_1(x):=E+m-V(x) \qquad\text{and}\qquad W_2(x):=E-m-V(x), \end{equation*} in terms of which Eqs.(\ref{d1l},~\ref{d1r}) become \begin{equation}\label{d3l} u'_1=-W_1u_2, \end{equation} \begin{equation}\label{d3r} u'_2=\ph W_2u_1. \end{equation} Since $V$ is even, and $u_1$ and $u_2$ have definite and opposite parities, for the remainder of the proof we restrict our attention to the positive half axis $x\ge 0$. As $V$ is negative, and $-m<E<m$, it follows that $W_1>0$, $\forall x$. Also $V$ is monotone nondecreasing and vanishes for large $x$. Thus $W_2$ is monotone nonincreasing and $\lim\limits_{x\to\infty}W_2(x)=E-m<0$, but for small $x$, depending on $V$, $W_2$ may be positive. 
Hence, $W_2$ is either always negative or is positive near the origin and then changes sign once, say at $x=x_c$. We shall now prove that the wave functions $u_1$ and $u_2$ may have nodes only on the interval $(0, x_c]$ where $W_2\ge 0$, and that these nodes are alternating. We observe parenthetically that, if $W_2$ is always negative, there are no excited states. Let us start from the interval $(x_c, \infty)$ where $W_2<0$. Without loss of generality, we assume that at the point $x_1\in(x_c, \infty)$ the function $u_2$ is increasing and has a node, so $u_2(x_1)=0$ and $u_2'(x_1)>0$. Then Eqs.(\ref{d3l},~\ref{d3r}) lead to \begin{equation*} u_1(x_1)<0, \quad u_1'(x_1)=0, \quad {\rm and } \quad u_1''(x_1)<0. \end{equation*} Since $u_1$ and $u_2$ have to vanish at infinity, as $x$ increases $u_2$ has to reach a local maximum at some point $x_2>x_1$ ($x_2$ is such that there are no nodes or extrema of $u_2$ between $x_1$ and $x_2$) and then decrease; thus $u_2(x_2)>0$, $u_2'(x_2)=0$, and $u_2''(x_2)<0$. From \eq{d3l} it follows that $u_1'(x_2)<0$. On the other hand, \eq{d3r} at $x_2$ gives $W_2(x_2)u_1(x_2)=u_2'(x_2)=0$, so $u_1(x_2)=0$; differentiating \eq{d3r} we obtain $u_2''=W_2'u_1+W_2u_1'$, so that $u_2''(x_2)=W_2(x_2)u_1'(x_2)$, and since $u_2''(x_2)<0$ and $W_2(x_2)<0$, we must have $u_1'(x_2)>0$, which is not possible. A similar contradiction is reached if instead we consider a node of $u_1(x)$ at $x_1$. Thus nodes of $u_1$ and $u_2$ can occur only on $(0, x_c]$. Indeed, if we apply the same reasoning as above with $x_1$, $x_2\in (0, x_c]$, we find no contradiction; for $x>x_2$, the functions $u_1$ and $u_2$ can have more nodes or vanish as $x \longrightarrow \infty$. We also conclude that the nodes of $u_1$ and $u_2$ are alternating. The same conclusions are reached if $u_2$ is decreasing at $x_1$, or, as we observed above, if, instead of $u_2$, we consider $u_1$ first.
If we assume that $u_1$ is even and $u_2$ is odd, this leads to four cases at the origin: \begin{equation*} {\it case \ 1}: u_1(0)>0,\ u_1'(0)=0,\ u_2(0)=0,\ u_2'(0)>0, \end{equation*} \begin{equation*} {\it case \ 2}: u_1(0)>0,\ u_1'(0)=0,\ u_2(0)=0,\ u_2'(0)<0, \end{equation*} \begin{equation*} {\it case \ 3}: u_1(0)<0,\ u_1'(0)=0,\ u_2(0)=0,\ u_2'(0)>0, \end{equation*} \begin{equation*} {\it case \ 4}: u_1(0)<0,\ u_1'(0)=0,\ u_2(0)=0,\ u_2'(0)<0. \end{equation*} We can reduce these four cases to two because {\it case \ 1} and {\it case \ 2} correspond to {\it case \ 4} and {\it case \ 3}, respectively, multiplied by $-1$. We choose to keep {\it cases \ 1} and {\it 2}. Now, at $x=0$, $u_1$ is positive and has a local maximum. Eqs.(\ref{d3l},~\ref{d3r}) then imply: $u_2(0)=0$ and $u_2'(0)>0$, which rules out {\it case 2}. Thus, for $u_1$ even and $u_2$ odd, we need only consider {\it case 1}. Finally, {\it case \ 1} (and {\it case \ 4}) describe the behavior of $u_1$ and $u_2$ at the origin; the fact that the nodes of these components are alternating tells us how they behave further out, and we know that at infinity the components vanish with the same signs. Also we have assumed that $u_1$ is even, so $u_2$ is odd, and reflection of their graphs gives us their behavior on $(-\infty,\ 0]$. In particular, $u_1$ and $u_2$ vanish at negative infinity with different signs. From the above it follows that $u_2$ has one node more than $u_1$. If $u_1$ is odd, then $u_2$ is even, and, by a similar analysis, we again find that $n_2 = n_1 +1.$ We observe that although $u_1$ can be node free when $u_2$ is odd, when $u_1$ is odd, $u_2$ must have at least two nodes. This completes the proof of the theorem. \hfill $\Box$ As an illustration for this result we plot wave functions for the laser-dressed Coulomb potential which is studied, for example, in Refs. \cite{laser1,laser2,Hall3}, and is given by \begin{equation*} V=-\frac{v}{(x^2+\lambda^2)^{1/2}}. 
\end{equation*} We consider $v=0.9$, $\lambda=0.5$, and $m=1$: in Figure~1, $u_1$ is even; and in Figure~2, $u_1$ is odd. \begin{figure} \centering{\includegraphics[height=13cm,width=5cm,angle=-90]{dnodefig1.eps}} \caption{Dirac wave functions $u_1$ and $u_2$, such that $u_1$ is even with $u_1(0)>0$, $u_1'(0)=0$, and $u_2$ is odd with $u_2(0)=0$, $u_2'(0)>0$; $n_1=4$, $n_2=5$, $E=0.93011$.} \end{figure} \begin{figure} \centering{\includegraphics[height=13cm,width=5cm,angle=-90]{dnodefig2.eps}} \caption{Dirac wave functions $u_1$ and $u_2$, such that $u_1$ is odd with $u_1(0)=0$, $u_1'(0)<0$, and $u_2$ is even with $u_2(0)>0$, $u_2'(0)=0$; $n_1=3$, $n_2=4$, $E=0.89177$.} \end{figure} \section{The higher dimensional cases $d>1$} For a central potential in $d>1$ dimensions the Dirac equation can be written \cite{jiang} in natural units $\hbar=c=1$ as \begin{equation*} i{{\partial \Psi}\over{\partial t}} =H\Psi,\quad {\rm where}\quad H=\sum_{s=1}^{d}{\alpha_{s}p_{s}} + m\beta+V, \end{equation*} where $m$ is the mass of the particle, $V$ is a spherically symmetric potential, and $\{\alpha_{s}\}$ and $\beta$ are the Dirac matrices which satisfy anti-commutation relations; the identity matrix is implied after the potential $V$. For stationary states, some algebraic calculations in a suitable basis, the details of which may be found in Refs. \cite{jiang, agboola, salazar, yasuk}, lead to a pair of first-order linear differential equations in two radial functions $\{\psi_1, \psi_2\}$, namely \begin{equation}\label{dcel1} \psi_1'=(E+m-V)\psi_2-\frac{k_d}{r}\psi_1, \end{equation} \begin{equation}\label{dcer1} \psi_2'=\frac{k_d}{r}\psi_2-(E-m-V)\psi_1, \end{equation} where $r = \|\mb{r}\|$, prime $'$ denotes the derivative with respect to $r$, $k_{d}=\tau(j+{{d-2}\over{2}})$, $\tau = \pm 1$, $j=1/2$, $3/2$, $5/2$, $\ldots$. It is shown in Refs. \cite{Spectrum1, Spectrum2, Spectrum3} that the energy for bound states lies in the interval $(-m, m)$. 
We note that the variable $\tau$ is sometimes written $\omega$, as, for example, in the book by Messiah \cite{messiah}, and the radial functions are often written $\psi_1 = G$ and $\psi_2 = F,$ as in the book by Greiner \cite{greiner}. For $d > 1,$ these functions vanish at $r = 0$, and, for bound states, they may be normalized by the relation \begin{equation*} (\psi_1,\psi_1) + (\psi_2,\psi_2) = \int\limits_0^{\infty}(\psi_1^2(r) + \psi_2^2(r))dr = 1. \end{equation*} We use inner products {\it without} the radial measure $r^{(d-1)}$ because the factor $r^{\frac{(d-1)}{2}}$ is already built into each radial function. We shall assume that the potential $V$ is such that there is a discrete eigenvalue $E$ and that Eqs.(\ref{dcel1},~\ref{dcer1}) are the eigenequations for the corresponding radial eigenstates. Now we can state the Nodal Theorem for $d>1$. \medskip \noindent{\bf Nodal Theorem in $d>1$ dimensions:} ~~{\it We assume $V$ is a negative nondecreasing potential which vanishes at infinity. If $n_1$ and $n_2$ are the numbers of nodes of the upper $\psi_1$ and lower $\psi_2$ components of the Dirac wave function, then: \begin{equation*} n_2=n_1+1 \quad \text{if}\quad k_d>0 \end{equation*} and \begin{equation*} n_2=n_1 \quad \text{if}\quad k_d<0. \end{equation*}} \section{Asymptotic behavior of the wave functions} \subsection{Near the origin} Here we shall prove that near the origin the two components of the Dirac wave function start with the same sign if $k_d>0$ and with opposite signs if $k_d<0$. We rewrite the system of Eqs.(\ref{dcel1},~\ref{dcer1}) in the following way \begin{equation}\label{dcel3} \psi_1'=W_1\psi_2-\frac{k_d}{r}\psi_1, \end{equation} \begin{equation}\label{dcer3} \psi_2'=\frac{k_d}{r}\psi_2-W_2\psi_1, \end{equation} where \begin{equation*} W_1(r):=E+m-V(r) \quad \text{and} \quad W_2(r):=E-m-V(r). \end{equation*} Since $-m<E<m$ and the potential $V$ is negative, the function $W_1>0$.
Meanwhile, $W_2(r)$ is a monotone nonincreasing function which can change sign at most once, and if so from positive to negative; for reasons which will become clear in Section V, we consider for now only a region where $W_2\ge 0$. We first assume that $k_d>0$. Since $\psi_1(0)=0$, if $\psi_1\ge 0$ near the origin, clearly $\psi_1'\ge 0$. Then Eq.(\ref{dcel3}) implies that $\psi_2\ge 0$. Similarly, if $\psi_1\le 0$ near $r=0$, then $\psi_1'\le 0$, and it follows from Eq.(\ref{dcel3}) that $\psi_2\le 0$. In the case $k_d<0$, if $\psi_2\ge 0$ near the origin, then $\psi_2'$ must be nonnegative as well, and Eq.(\ref{dcer3}) leads to $\psi_1\le 0$. Finally, if $\psi_2\le 0$, it follows that $\psi_2'\le 0$, and Eq.(\ref{dcer3}) gives $\psi_1\ge 0$. Consequently, near the origin the wave functions $\psi_1$ and $\psi_2$ have the same signs if $k_d>0$ and opposite signs if $k_d<0$. \subsection{At infinity} In this section we shall prove that the two components of the wave function vanish at infinity with different signs. Since $\lim\limits_{r\to\infty}V(r)=0$, at infinity the coupled Dirac equations Eqs.(\ref{dcel1},~\ref{dcer1}) become \begin{equation*} \psi_1'=(E+m)\psi_2, \end{equation*} \begin{equation*} \psi_2'=-(E-m)\psi_1. \end{equation*} We know that the bound-state wave functions must vanish at infinity, so if $\psi_1\ge 0$ before it vanishes, then $\psi_1'\le 0$, and if $\psi_1\le 0$, then $\psi_1'\ge 0$. Now if we assume $\psi_1\ge 0$, the above equations lead to $\psi_2\le 0$, and {\it vice versa}. This means that at infinity the component functions $\psi_1$ and $\psi_2$ vanish with different signs. This feature does not depend on the sign of $k_d$.
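At infinity this limiting system can be solved in closed form, which makes the sign statement easy to check numerically. A minimal sketch (assumed values $E=0.5$, $m=1$; the decaying solution is $\psi_1=e^{-kr}$, $\psi_2=-(k/(E+m))e^{-kr}$ with $k=\sqrt{m^2-E^2}$):

```python
import math

# Sketch (assumed values E = 0.5, m = 1): the decaying solution of the
# asymptotic system psi1' = (E+m) psi2, psi2' = -(E-m) psi1 is
#   psi1(r) = exp(-k r),  psi2(r) = -(k/(E+m)) exp(-k r),  k = sqrt(m^2 - E^2),
# so the two components indeed vanish at infinity with opposite signs.
E, m = 0.5, 1.0
k = math.sqrt(m * m - E * E)

def psi1(r): return math.exp(-k * r)
def psi2(r): return -(k / (E + m)) * math.exp(-k * r)

def d(f, r, h=1e-6):            # central finite difference
    return (f(r + h) - f(r - h)) / (2 * h)

for r in [1.0, 5.0, 10.0]:
    assert abs(d(psi1, r) - (E + m) * psi2(r)) < 1e-6   # psi1' = (E+m) psi2
    assert abs(d(psi2, r) + (E - m) * psi1(r)) < 1e-6   # psi2' = -(E-m) psi1
    assert psi1(r) * psi2(r) < 0                        # opposite signs
```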
\begin{figure} \centering{\includegraphics[height=11cm,width=5cm,angle=-90]{dnodefig3.eps}} \caption{Wave functions have same signs near the origin and different at infinity: $n_1=0$, $n_2=1$, $\tau=1$, $E=0.98472$.} \end{figure} \begin{figure} \centering{\includegraphics[height=11cm,width=5cm,angle=-90]{dnodefig4.eps}} \caption{Wave functions have different signs near the origin and different at infinity: $n_1=n_2=0$, $\tau=-1$, $E=0.97487$.} \end{figure} \begin{figure}[ht] \begin{center} \begin{minipage}[ht]{0.4\linewidth} \includegraphics[width=0.75\linewidth, angle=-90]{dnodefig5.eps} \caption{Dirac spinor orbit corresponding to Fig. 3.} \end{minipage} \hfill \begin{minipage}[ht]{0.4\linewidth} \includegraphics[width=0.75\linewidth, angle=-90]{dnodefig6.eps} \caption{Dirac spinor orbit corresponding to Fig. 4.} \end{minipage} \end{center} \end{figure} Now we study the graphs of the wave functions. We use Figures 3 to 10 as illustrations of the Nodal Theorem for $d>1$. We plot the upper and lower components of the Dirac wave function, $\psi_1$ and $10\psi_2$ respectively, corresponding to the Hellmann potential \cite{hel1,hel2,hel3,Hall2} in the form $V(r)=-A/r+Be^{-Cr}/r$, with $A=0.7$, $B=0.5$, $C=0.25$, $m=1$, $j=3/2$, $d=5$, and $\tau = \pm 1$. If $k_d>0$ (Figure 3), both wave function components start at the origin with the same sign, but at infinity they must have different signs: thus one of the wave function components will have at least one node more than the other. When $k_d<0$ (Figure 4), $\psi_1$ and $\psi_2$ start with different signs and then vanish with different signs, so the numbers of nodes in $\psi_1$ and $\psi_2$ must be the same or differ by an even number. We can see from this simplest possible example that the number of nodes of the Dirac wave functions depends on the sign of $k_d$. These results are generalized by the Nodal Theorem for $d>1$.
\section{Proof of the Nodal Theorem in $d>1$ dimensions} Now we let $u_1=r^{k_d}\psi_1$ and $u_2=r^{-k_d}\psi_2$. On the interval $(0, \infty)$, the nodes of $u_1$ and $u_2$ coincide with the nodes of $\psi_1$ and $\psi_2$, and Eqs.(\ref{dcel1},~\ref{dcer1}) in terms of $u_1$ and $u_2$ become \begin{equation}\label{dcel2} u_1'=r^{2k_d}W_1u_2, \end{equation} \begin{equation}\label{dcer2} u_2'=-r^{-2k_d}W_2u_1, \end{equation} where \begin{equation*} W_1(r):=E+m-V(r) \quad \text{and} \quad W_2(r):=E-m-V(r). \end{equation*} We immediately have $W_1>0$ for all $r\in[0, \infty)$. As in the one-dimensional case, we shall prove that the nodes of the radial wave functions $\psi_1$ and $\psi_2$ occur only where $W_2$ is nonnegative. Since $\lim\limits_{r\to\infty}V(r)=0$ and $E-m<0$, we have $W_2<0$ for $r$ sufficiently large. Consider the interval $(r_c, \infty)$ on which $W_2<0$, and suppose that at a point $r_1\in (r_c, \infty)$ the function $u_2$ is increasing and has a node. Then at $r_1$, according to Eqs.(\ref{dcel2},~\ref{dcer2}), the function $u_1$ obeys the following conditions \begin{equation*} u_1'(r_1)=0, \quad u_1(r_1)>0, \quad {\rm and} \quad u_1''(r_1)>0. \end{equation*} Using the fact that $u_1$ and $u_2$ ($\psi_1$ and $\psi_2$, respectively) must both vanish at infinity, we see that at some point $r_2>r_1$, such that there are no nodes or extrema of $u_2$ between $r_1$ and $r_2$, the function $u_2$ must reach a local maximum and then eventually decrease to zero. Therefore $u_2(r_2)>0$, $u_2'(r_2)=0$, $u_2''(r_2)<0$. \eq{dcel2} then implies $u_1'(r_2)>0$, but \eq{dcer2} leads to $u_1'(r_2)<0$, which is a contradiction. Thus the upper and lower components of the Dirac wave function intersect the $r$ axis only on $(0, r_c]$, where $W_2\ge 0$. If we apply the same reasoning for $r_2\in(0, r_c]$, we find no contradiction; moreover, we conclude that the nodes of $\psi_1$ and $\psi_2$ are alternating.
As in the one-dimensional case, we arrive at the same results if $u_2$ is decreasing near $r_1$, or if we consider $u_1$ instead of $u_2$. Now, following Rose and Newton \cite{rose} and using what we call `Dirac spinor orbits' \cite{Hallso}, we consider the graphs generated parametrically by the points $(\psi_1(r), \psi_2(r))\in \Re^2$, $r\ge 0$. We form a vector \mb{R} with initial point at the origin and terminal point $(\psi_1(r), \psi_2(r))$, $r\ge 0$, which makes an angle $\varphi$ with the $\psi_1$ axis. Thus \begin{equation*} \rho=\tan\varphi=\frac{\psi_2}{\psi_1}. \end{equation*} From this it follows that \begin{equation*} \rho'=(1+\rho^2) \varphi'. \end{equation*} Since $1+\rho^2$ is positive, we have \begin{equation}\label{sign} \sgn(\rho')=\sgn(\varphi'). \end{equation} Direct substitution of $\rho=\psi_2/\psi_1$ into Eqs.(\ref{dcel1},~\ref{dcer1}) shows that $\rho$ satisfies the Riccati equation \begin{equation*} \rho'=\frac{2k_d\rho}{r}-W_2-W_1\rho^2. \end{equation*} In the limit as $\rho$ approaches zero we have \begin{equation*} \rho'=-W_2\le 0, \end{equation*} since we consider only the region where nodes occur, i.e.\ $W_2\ge 0$. If instead $\rho\longrightarrow\infty$, we have \begin{equation*} \rho'=-W_1\rho^2< 0. \end{equation*} Thus, from \eq{sign}, it follows that $\varphi'$ is negative whenever $\rho\longrightarrow 0$ (that is, $\psi_2\longrightarrow 0$) or $\rho\longrightarrow\infty$ (that is, $\psi_1\longrightarrow 0$), which means that $\varphi$ decreases through every axis crossing in the region $(0, r_c]$ where the nodes of $\psi_1$ and $\psi_2$ occur. Therefore, in the $\psi_1$--$\psi_2$ plane, the vector \mb{R} rotates clockwise.
\begin{figure} \centering{\includegraphics[height=11cm,width=5cm,angle=-90]{dnodefig7.eps}} \caption{Wave functions $\psi_1$ and $\psi_2$ have the same signs near the origin and different at infinity: $n_1=5$, $n_2=6$, $\tau=1$, $E=0.99697$.} \end{figure} \begin{figure} \centering{\includegraphics[height=11cm,width=5cm,angle=-90]{dnodefig8.eps}} \caption{Wave functions $\psi_1$ and $\psi_2$ have different signs near the origin and different at infinity: $n_1=n_2=5$, $\tau=-1$, $E=0.99626$.} \end{figure} \begin{figure}[ht] \begin{center} \begin{minipage}[ht]{0.4\linewidth} \includegraphics[width=0.75\linewidth, angle=-90]{dnodefig9.eps} \caption{Dirac spinor orbit corresponding to Fig. 7.} \end{minipage} \hfill \begin{minipage}[ht]{0.4\linewidth} \includegraphics[width=0.75\linewidth, angle=-90]{dnodefig10.eps} \caption{Dirac spinor orbit corresponding to Fig.8.} \end{minipage} \end{center} \end{figure} Thus when $k_d$ and $\rho$ are positive (first and third quadrants), a node of $\psi_2$ occurs first; and when $k_d$ and $\rho$ are negative (second and fourth quadrants), a node of $\psi_1$ occurs first. The case when $k_d>0$ is illustrated in Figure 7. Without loss of generality we consider the first quadrant, thus the wave functions start with the same positive sign and a node of $\psi_2$ occurs first; after this, $\psi_1$ has a node, and so on. At infinity $\psi_1$ and $\psi_2$ vanish with different signs. Therefore the number of nodes in the lower component of the Dirac wave function $\psi_2$ must exceed the number of nodes of $\psi_1$ by 1. Using the same reasoning for the case $k_d<0$, illustrated in Figure 8, we see that when $k_d<0$ the number of nodes in the upper and lower components of the Dirac wave function is the same. This completes the proof of the theorem. \hfill $\Box$ We also plot the `Dirac spinor orbits', Figures 5, 6, 9, and 10, and they confirm the clockwise rotation of \mb{R}. 
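The substitution $u_1=r^{k_d}\psi_1$, $u_2=r^{-k_d}\psi_2$ made at the start of the proof can also be checked numerically: the transformed equations hold along any solution of the radial system, with no eigenvalue condition required. A sketch with assumed data ($V(r)=-1/(r+1)$, $E=0.5$, $m=1$, $k_d=1$; these values are illustrative, not taken from the figures), comparing finite differences of $u_1,u_2$ against the right-hand sides:

```python
import math

# Numerical sanity check (assumed data, not from the paper): integrate
#   psi1' = W1 psi2 - (k/r) psi1,   psi2' = (k/r) psi2 - W2 psi1
# with RK4 and verify that u1 = r^k psi1, u2 = r^(-k) psi2 satisfy
#   u1' = r^(2k) W1 u2,   u2' = -r^(-2k) W2 u1.
E, m, kd = 0.5, 1.0, 1.0
V = lambda r: -1.0 / (r + 1.0)
W1 = lambda r: E + m - V(r)
W2 = lambda r: E - m - V(r)

def rhs(r, y):
    p1, p2 = y
    return (W1(r) * p2 - kd / r * p1, kd / r * p2 - W2(r) * p1)

def rk4(r0, r1, y0, n):
    h = (r1 - r0) / n
    rs, ys, r, y = [r0], [y0], r0, y0
    for _ in range(n):
        k1 = rhs(r, y)
        k2 = rhs(r + h / 2, tuple(a + h / 2 * b for a, b in zip(y, k1)))
        k3 = rhs(r + h / 2, tuple(a + h / 2 * b for a, b in zip(y, k2)))
        k4 = rhs(r + h, tuple(a + h * b for a, b in zip(y, k3)))
        y = tuple(a + h / 6 * (b + 2 * c + 2 * dd + e)
                  for a, b, c, dd, e in zip(y, k1, k2, k3, k4))
        r += h
        rs.append(r); ys.append(y)
    return rs, ys

rs, ys = rk4(0.5, 3.0, (0.1, 0.05), 2500)
u1 = [r ** kd * y[0] for r, y in zip(rs, ys)]
u2 = [r ** -kd * y[1] for r, y in zip(rs, ys)]
h = rs[1] - rs[0]
for i in range(1, len(rs) - 1, 100):
    r = rs[i]
    du1 = (u1[i + 1] - u1[i - 1]) / (2 * h)      # central difference
    du2 = (u2[i + 1] - u2[i - 1]) / (2 * h)
    assert abs(du1 - r ** (2 * kd) * W1(r) * u2[i]) < 1e-3
    assert abs(du2 + r ** (-2 * kd) * W2(r) * u1[i]) < 1e-3
```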
We note that corresponding orbits can be plotted for the $d=1$ case, but the vector \mb{R} would in this case rotate counterclockwise in the representation we have used. \section{Conclusion} At first sight a general nodal theorem for the Dirac equation in $d$ dimensions might appear to be out of reach. However, for central potentials, the bound states in all cases are built from just two radial functions. For attractive potentials which are no more singular than Coulomb and which vanish at large distances, the logical possibilities are limited in a similar fashion to the three-dimensional case first analysed by Rose and Newton in 1951. We are therefore able to establish nodal theorems for all dimensions $d \ge 1$. Such theorems are useful for the study and computation of bound-state solutions, and they are essential for the establishment of relativistic comparison theorems. \section{Acknowledgments} One of us (RLH) gratefully acknowledges partial financial support of this research under Grant No.\ GP3438 from the Natural Sciences and Engineering Research Council of Canada.\medskip \section*{References}
% Source: https://arxiv.org/abs/1802.01916
\title{Domination, almost additivity, and thermodynamic formalism for planar matrix cocycles}
\begin{abstract}
In topics such as the thermodynamic formalism of linear cocycles, the dimension theory of self-affine sets, and the theory of random matrix products, it has often been found useful to assume positivity of the matrix entries in order to simplify or make feasible certain types of calculation. It is natural to ask how positivity may be relaxed or generalised in a way which enables similar calculations to be made in more general contexts. On the one hand one may generalise by considering almost additive or asymptotically additive potentials which mimic the properties enjoyed by the logarithm of the norm of a positive matrix cocycle; on the other hand one may consider matrix cocycles which are dominated, a condition which includes positive matrix cocycles but is more general. In this article we explore the relationship between almost additivity and domination for planar cocycles. We show in particular that a locally constant linear cocycle in the plane is almost additive if and only if it is either conjugate to a cocycle of isometries, or satisfies a property slightly weaker than domination which is introduced in this paper. Applications to matrix thermodynamic formalism are presented.
\end{abstract}
\section{Introduction} For the purposes of this article a \emph{linear cocycle} over a dynamical system $T \colon X \to X$ will be a skew-product \[ F \colon X \times \mathbb{R}^d \to X \times \mathbb{R}^d, \quad (x,p) \mapsto (Tx,\mathsf{A}(x) p), \] where $\mathsf{A} \colon X \to GL_d(\mathbb{R})$ is continuous and $X$ is a compact metric space. Writing $\mathsf{A}^n_T(x) = \mathsf{A}(T^{n-1}x) \cdots \mathsf{A}(x)$, we thus have $F^n(x,p) = (T^nx, \mathsf{A}^n_T(x) p)$ for all $n \in \mathbb{N}$ and \begin{equation} \label{eq:cocycle-id} \mathsf{A}^{m+n}_T(x) = \mathsf{A}^m_T(T^nx) \mathsf{A}^n_T(x) \end{equation} for all $m,n \in \mathbb{N}$. In numerous contexts it has been found useful to consider cocycles in which all of the matrices $\mathsf{A}(x)$ are \emph{positive}: we note for example such diverse articles as \cite{FurstenbergKesten1960,HueterLalley95,Jungers12,Pollicott10}. Under this assumption the cocycle satisfies the inequality \[ \left|\log \|\mathsf{A}^{m+n}_T(x)\| -\log \|\mathsf{A}^m_T(T^nx)\|-\log \| \mathsf{A}^n_T(x)\|\right| \leq C \] for some constant $C>0$ depending only on $\mathsf{A}$. This has led some authors to extend results for positive linear cocycles by considering, instead of a linear cocycle, a sequence of continuous functions $f_n \colon X \to \mathbb{R}$ satisfying the inequality \[ \left|f_{n+m}(x)-f_m(T^nx)-f_n(x)\right| \leq C \] for all $x \in X$ and $n,m \geq 1$. Such sequences of functions are referred to in the literature as \emph{almost additive} and have been investigated in \cite{Barreira06,BarreiraDoutor09,BomfimVarandas15,IommiYayama12,Yayama16}. The condition of almost additivity implies trivially a further property, \emph{asymptotic additivity} (see for example Feng and Huang \cite[Proposition~A.5]{FengHuang10}), which has been applied in \cite{Cao13,FengHuang10,IommiYayama17}. 
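The boundedness of the defect $\log \|\mathsf{A}^{m+n}_T(x)\| -\log \|\mathsf{A}^m_T(T^nx)\|-\log \| \mathsf{A}^n_T(x)\|$ for positive matrices, and its failure in general, can be seen in a small experiment (illustrative assumptions: entries drawn from $[1,2]$; the Frobenius norm is used, which is submultiplicative and equivalent to the operator norm up to a factor $\sqrt{2}$, so boundedness of the defect does not depend on this choice):

```python
import random, math

# Sketch: for positive matrices the additivity defect
#   log||AB|| - log||A|| - log||B||
# stays in a bounded window, whereas for a parabolic shear it is
# unbounded below.  (Assumed entry range [1,2]; Frobenius norm.)
random.seed(0)

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def fnorm(A):
    return math.sqrt(sum(A[i][j] ** 2 for i in range(2) for j in range(2)))

def defect(A, B):
    return math.log(fnorm(mul(A, B))) - math.log(fnorm(A)) - math.log(fnorm(B))

# positive entries in [1, 2]: defect bounded (here within (-1.5, 0])
for _ in range(1000):
    A = [[random.uniform(1, 2) for _ in range(2)] for _ in range(2)]
    B = [[random.uniform(1, 2) for _ in range(2)] for _ in range(2)]
    assert -1.5 < defect(A, B) <= 1e-12

# parabolic shear: defect(A^n, A^n) ~ -log n, so no uniform constant C exists
def shear_pow(n):              # A^n for A = [[1,1],[0,1]]
    return [[1.0, float(n)], [0.0, 1.0]]

assert defect(shear_pow(100), shear_pow(100)) < defect(shear_pow(10), shear_pow(10)) < 0
```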
In another category of works, positivity is replaced by the more general hypothesis of \emph{domination}: under this hypothesis there exists a continuous splitting $\mathbb{R}^d=\mathcal{U}(x)\oplus \mathcal{V}(x)$, which is preserved by the cocycle, such that $\|\mathsf{A}^n_T(x)u\| \geq Ce^{n\varepsilon} \|\mathsf{A}^n_T(x)v\|$ for all unit vectors $u \in \mathcal{U}(x)$ and $v\in \mathcal{V}(x)$, for some constants $C,\varepsilon>0$ (see \cite{BochiGourmelon09} and references therein). For linear cocycles the hypothesis of domination implies the hypothesis of almost additivity, but the converse is false, as can be seen trivially for the case of cocycles where all of the linear maps are isometries, or where all are equal to the identity. The purpose of this article is to explore precisely the relationship between domination and almost additivity in the context of locally constant two-dimensional linear cocycles over the shift. In this project we are motivated principally by applications to the topics of matrix thermodynamic formalism and the geometry of self-affine fractals. We consider cocycles in the simplest non-commutative setting, namely in the case of planar matrices. A cocycle is dominated if and only if there is a uniform exponential gap between the singular values of its iterates. This is equivalent to the existence of a strongly invariant multicone in the projective space; see \cite{AvilaBochiYoccoz2010,BochiGourmelon09}. Domination originates from \cite{Mane1978, Mane1984} and is an important concept in differentiable dynamical systems; see \cite{BochiViana2005, BonattiDiazPujals2003}. Our contribution in this article to this line of research is to show that a planar matrix cocycle is dominated if and only if its matrices are proximal and the norms in the generated sub-semigroup satisfy a certain multiplicativity property; see Corollary \ref{thm:dom-cor1}.
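The isometry example mentioned above is easy to make concrete: a cocycle of rotations is trivially almost additive, yet the determinant-to-norm ratio used to characterise domination never decays. A minimal sketch (rotation angles chosen arbitrarily):

```python
import math

# Sketch: a cocycle of rotations is almost additive (all operator norms
# equal 1) but never dominated, since |det(R_1...R_n)| / ||R_1...R_n||^2
# stays equal to 1 for every n instead of decaying exponentially.
def rot(t):
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def opnorm_rot(A):
    # a rotation matrix has orthonormal columns, hence operator norm 1;
    # we verify orthonormality instead of computing singular values
    c0 = (A[0][0], A[1][0]); c1 = (A[0][1], A[1][1])
    assert abs(c0[0] ** 2 + c0[1] ** 2 - 1) < 1e-9
    assert abs(c0[0] * c1[0] + c0[1] * c1[1]) < 1e-9
    return 1.0

P = rot(0.3)
for t in [0.7, 1.1, 2.0, 0.4]:          # arbitrary angles
    P = mul(P, rot(t))
assert abs(det(P) / opnorm_rot(P) ** 2 - 1.0) < 1e-9   # ratio stays 1: no domination
```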
Higher dimensions are more difficult: it is shown in \cite[\S 4]{BochiGourmelon09} that the connected components of the multicone need not be convex. Of the several motivations for studying almost additive potentials, this article is concerned principally with thermodynamic formalism. In Theorem \ref{thm:holder} we will show that almost additive potentials arising from the norm potential of a two-dimensional locally-constant linear cocycle over the full shift can in almost all cases be studied simply by using the classical thermodynamic formalism. In fact, in our results, we are able to characterise all the properties of equilibrium states for these norm potentials in terms of the properties of the matrices. Theorem \ref{thm:fortuples} gives a positive answer to \cite[Question 7.4]{BaranyKaenmakiKoivusalo2017} in the two-dimensional case. Furthermore, in Example \ref{example}, answering a folklore question, we show the existence of a quasi-Bernoulli equilibrium state which is not a Gibbs measure for any H\"older continuous potential. \section{Preliminaries and statements of results} For the remainder of this article we specialise to cocycles whose values are invertible two-dimensional real matrices. We take $\mathsf{A} \subset GL_2(\mathbb{R})$, set $X=\mathsf{A}^\mathbb{N}$, denote the left shift on $X$ by $T$, and let $\mathsf{A}(x)$ be the first matrix in the infinite sequence $x \in X$. Let \[ F \colon X \times \mathbb{R}^2 \to X \times \mathbb{R}^2, \quad (x,p) \mapsto (Tx,\mathsf{A}(x) p) \] be a linear cocycle over $T$. We see that $\mathsf{A}_T^n(x)$ is the product of the first $n$ matrices in the sequence $x \in X$, and the cocycle identity \eqref{eq:cocycle-id} clearly holds. Let $\mathcal{S}(\mathsf{A})$ denote the sub-semigroup generated by $\mathsf{A}$, that is, $\mathcal{S}(\mathsf{A})=\{A_1\cdots A_n:n\in\mathbb{N}\text{ and }A_i\in\mathsf{A}\text{ for all }i\in\{1,\ldots,n\}\}$.
So in particular, $\mathsf{A}_T^n(x) \in\mathcal{S}(\mathsf{A})$ for all $x = (A_1,A_2,\ldots) \in X$ and $n\in\mathbb{N}$. \subsection{Domination} \label{sec:subdom} Following \cite{BochiGourmelon09} we say that a compact and nonempty subset $\mathsf{A}\subset GL_2(\mathbb{R})$ is \emph{dominated} if there exist constants $C>0$ and $0<\tau<1$ such that $$ \frac{|\det(A_1\cdots A_n)|}{\|A_1\cdots A_n\|^2}\leq C \tau^{n} $$ for all $A_1,\dots,A_n\in\mathsf{A}$. We let $\mathbb{RP}^1$ denote the real projective line, which is the set of all lines through the origin in $\mathbb{R}^2$. We call a proper subset $\mathcal{C}\subset\mathbb{RP}^1$ a \emph{multicone} if it is a finite union of closed projective intervals. We say that $\mathcal{C}\subset\mathbb{RP}^1$ is a \emph{strongly invariant multicone} for $\mathsf{A}\subset GL_2(\mathbb{R})$ if it is a multicone and $A\mathcal{C}\subset\mathcal{C}^o$ for all $A\in\mathsf{A}$. Here $\mathcal{C}^o$ is the interior of $\mathcal{C}$. By \cite[Theorem~B]{BochiGourmelon09}, a compact set $\mathsf{A} \subset GL_2(\mathbb{R})$ has a strongly invariant multicone if and only if $\mathsf{A}$ is dominated. We say that $\mathcal{C}\subset\mathbb{RP}^1$ is an \emph{invariant multicone} for $\mathsf{A}\subset GL_2(\mathbb{R})$ if it is a multicone and $A\mathcal{C}\subset\mathcal{C}$ for all $A \in \mathsf{A}$. Recall that a matrix $A$ is \emph{proximal} if it has two real eigenvalues with unequal absolute values, \emph{parabolic} if it has only one eigenspace, i.e.\ the single eigenvalue has geometric multiplicity one, and \emph{conformal} if it has two eigenvalues with the same absolute values. In other words, a matrix $A$ is conformal if and only if there exists an invertible matrix $M$, which we call a \emph{conjugation matrix} of $A$, such that $|\det(A)|^{-1/2}MAM^{-1}\in O(2)$, where $O(2)$ is the group of $2 \times 2$ orthogonal matrices. 
Furthermore, we say that a set $\mathsf{A}\subset GL_2(\mathbb{R})$ is \emph{strongly conformal} if all the elements of $\mathsf{A}$ are conformal with respect to the same conjugation matrix. Strong conformality is equivalent to all the elements of the generated sub-semigroup being conformal. For a matrix $A$ with real eigenvalues, let $\lambda_u(A)$ and $\lambda_s(A)$ be the largest and smallest eigenvalues of $A$ in absolute value, respectively. If the eigenvalues are equal in absolute value, then the choice of $\lambda_u(A)$ and $\lambda_s(A)$ is arbitrary. Note that if $A$ is diagonalisable, then there exist linearly independent subspaces $u(A),s(A)\in\mathbb{RP}^1$ such that $|\lambda_u(A)|=\|A|u(A)\|$ and $|\lambda_s(A)|=\|A|s(A)\|$. We call $u(A)\in\mathbb{RP}^1$ the eigenspace of $A$ corresponding to $\lambda_u(A)$ and $s(A)\in\mathbb{RP}^1$ the eigenspace corresponding to $\lambda_s(A)$. If $\mathsf{A}\subset GL_2(\mathbb{R})$, then we define $X_u(\mathsf{A})$ and $X_s(\mathsf{A})$ to be the closures of the sets of all unstable and stable directions of proximal elements of $\mathcal{S}(\mathsf{A})$, i.e.\ the sets \begin{align*} X_u(\mathsf{A})&=\overline{\{u(A): A\in\SS(\mathsf{A})\text{ is proximal}\}}, \\ X_s(\mathsf{A})&=\overline{\{s(A):A\in\SS(\mathsf{A})\text{ is proximal}\}}, \end{align*} respectively. Recall that $\mathcal{S}(\mathsf{A})$ is the sub-semigroup of $GL_2(\mathbb{R})$ generated by $\mathsf{A}$, i.e. the set of all finite products formed by the elements of $\mathsf{A}$. We say that $\mathsf{A}\subset GL_2(\mathbb{R})$ has an \emph{unstable multicone $\mathcal{C}$} if $\SS(\mathsf{A})$ contains at least one proximal element and \begin{enumerate} \item\label{i-X3} $\mathcal{C}\cap X_s(\mathsf{A})=\emptyset$, \item\label{i-X4} $\partial\mathcal{C}\cap X_u(\mathsf{A})=\emptyset$, \item\label{i-X5} each connected component of $\mathcal{C}$ intersects $X_u(\mathsf{A})$.
\end{enumerate} Finally, we say that a semigroup $\mathcal{S}\subset GL_2(\mathbb{R})$ is \emph{almost multiplicative} if there exists a constant $\kappa>0$ such that $\|AB\|\geq \kappa \|A\|\|B\|$ for all $A,B \in \mathcal{S}$. We note that since clearly $\|AB\| \leq \|A\|\|B\|$ for all $A,B \in \mathcal{S}(\mathsf{A})$ for every $\mathsf{A} \subset GL_2(\mathbb{R})$, the condition $\|AB\|\geq \kappa \|A\|\|B\|$ for all $A,B \in \mathcal{S}(\mathsf{A})$ is equivalent to the statement that every cocycle taking values in $\mathcal{S}(\mathsf{A})$ is almost additive in the sense defined in the introduction. Our main result for matrix cocycles is the following theorem. \begin{theorem}\label{thm:justdomin} Let $\mathsf{A}\subset GL_2(\mathbb{R})$. If the sub-semigroup $\mathcal{S}(\mathsf{A})$ is almost multiplicative, then exactly one of the two following conditions holds: \begin{enumerate} \item $\mathsf{A}$ is strongly conformal, \item\label{thm:item2} $\mathsf{A}$ has an invariant unstable multicone and $\SS(\mathsf{A})$ does not contain parabolic elements. \end{enumerate} \end{theorem} The next two propositions show that if the proximal elements of $\mathsf{A}$ form a compact set, then the converse claim holds in Theorem \ref{thm:justdomin}. \begin{proposition}\label{prop:connect} Let $\mathsf{A}\subset GL_2(\mathbb{R})$ be such that $\mathsf{A}$ has an invariant unstable multicone and $\SS(\mathsf{A})$ does not contain parabolic elements. Let $\mathsf{A}_e$ be the collection of all conformal elements of $\mathsf{A}$. Then \begin{enumerate} \item\label{item:that} $\mathsf{A}\setminus\mathsf{A}_e$ is nonempty and contains only proximal elements, \item\label{item:this} $\mathsf{A}_e$ is strongly conformal and $\mathcal{S}(\{|\det(A)|^{-1/2}A : A \in \mathsf{A}_e\})$ is finite.
\end{enumerate} Moreover, if $\mathsf{A}\setminus\mathsf{A}_e$ is compact, then $\mathsf{A}\setminus\mathsf{A}_e$ has a strongly invariant multicone $\mathcal{C}$ such that $A\mathcal{C}=\mathcal{C}$ for all $A\in\mathsf{A}_e$. \end{proposition} \begin{proposition}\label{prop:converse} Let $\mathsf{A}_e, \mathsf{A}_h \subset GL_2(\mathbb{R})$ be such that \begin{enumerate} \item $\mathsf{A}_h$ is nonempty, compact, and has a strongly invariant multicone $\mathcal{C}$, \item $\mathsf{A}_e$ is strongly conformal and $A\mathcal{C}=\mathcal{C}$ for all $A\in\mathsf{A}_e$. \end{enumerate} Then $\mathcal{S}(\mathsf{A}_e\cup\mathsf{A}_h)$ is almost multiplicative. \end{proposition} The previous three statements have two immediate corollaries. The first one studies the case where $\mathsf{A}$ contains only proximal elements. The second one is for finite collections. \begin{corollary}\label{thm:dom-cor1} If $\mathsf{A}\subset GL_2(\mathbb{R})$ is compact, then the following two statements are equivalent: \begin{enumerate} \item $\mathsf{A}$ has a strongly invariant multicone, \item $\mathsf{A}$ contains only proximal elements and $\mathcal{S}(\mathsf{A})$ is almost multiplicative. \end{enumerate} \end{corollary} \begin{corollary}\label{thm:justdomin2} If $\mathsf{A}\subset GL_2(\mathbb{R})$ is finite, then the following two statements are equivalent: \begin{enumerate} \item\label{it:domin1} the sub-semigroup $\mathcal{S}(\mathsf{A})$ is almost multiplicative, \item $\mathsf{A}$ can be decomposed into two sets $\mathsf{A}_e$ and $\mathsf{A}_h$ such that $\mathsf{A}_e$ is strongly conformal and if $\mathsf{A}_h\neq\emptyset$, then $\mathsf{A}_h$ has a strongly invariant multicone $\mathcal{C}$ such that $A\mathcal{C}=\mathcal{C}$ for all $A\in\mathsf{A}_e$. \end{enumerate} \end{corollary} \subsection{Thermodynamic formalism} If the set $\mathsf{A} \subset GL_2(\mathbb{R})$ is finite, then it makes sense to consider thermodynamic formalism for matrix cocycles. 
In this context, it is rather standard practice to use a separate alphabet to index the elements of the sub-semigroup. Let $N \ge 2$ be an integer and $\Sigma = \{ 1,\ldots,N \}^\mathbb{N}$ be the collection of all infinite words obtained from integers $\{ 1,\ldots,N \}$. We denote the left shift operator by $\sigma$ and equip $\Sigma$ with the product discrete topology. The \emph{shift space} $\Sigma$ is clearly compact. If $\mathtt{i} = i_1i_2\cdots \in \Sigma$, then we define $\mathtt{i}|_n = i_1 \cdots i_n$ for all $n \in \mathbb{N}$. The empty word $\mathtt{i}|_0$ is denoted by $\varnothing$. Define $\Sigma_n = \{ \mathtt{i}|_n : \mathtt{i} \in \Sigma \}$ for all $n \in \mathbb{N}$ and $\Sigma_* = \bigcup_{n \in \mathbb{N}} \Sigma_n \cup \{ \varnothing \}$. Thus $\Sigma_*$ is the collection of all finite words. The length of $\mathtt{i} \in \Sigma_* \cup \Sigma$ is denoted by $|\mathtt{i}|$. If $\mathtt{i} \in \Sigma_n$ for some $n$, then we set $[\mathtt{i}] = \{ \mathtt{j} \in \Sigma : \mathtt{j}|_n = \mathtt{i} \}$. The set $[\mathtt{i}]$ is called a \emph{cylinder set}. Cylinder sets are open and closed and they generate the Borel $\sigma$-algebra. The longest common prefix of $\mathtt{i},\mathtt{j} \in \Sigma_* \cup \Sigma$ is denoted by $\mathtt{i} \wedge \mathtt{j}$. The concatenation of two words $\mathtt{i} \in \Sigma_*$ and $\mathtt{j} \in \Sigma_* \cup \Sigma$ is denoted by $\mathtt{i}\mathtt{j}$. If $A \subset \Sigma$ and $\mathtt{i} \in \Sigma_*$, then $\mathtt{i} A = \{ \mathtt{i}\mathtt{j} : \mathtt{j} \in A \}$. For example, if $\mathtt{i},\mathtt{j} \in \Sigma_*$, then $[\mathtt{i}\mathtt{j}] = \mathtt{i}[\mathtt{j}] = \mathtt{i}\mathtt{j}\Sigma$. If $\mathtt{i} \in \Sigma_*$ and $n \in \mathbb{N}$, then by $\mathtt{i}^n$ we mean the concatenation $\mathtt{i}\cdots\mathtt{i}$ where $\mathtt{i}$ is repeated $n$ times.
Finally, denote by $\sharp_k\mathtt{i}$ the number of appearances of the symbol $k \in \{ 1,\ldots,N \}$ in $\mathtt{i} \in \Sigma_*$, i.e.\ $\sharp_k\mathtt{i}=\sharp\{n:i_n=k\text{ for }n \in \{1,\ldots,|\mathtt{i}|\}\}$. We say that the sequence $\Phi = (\phi_n)_{n \in \mathbb{N}}$ of functions $\phi_n \colon \Sigma \to \mathbb{R}$ is \emph{sub-additive} if there exists $C_1 \ge 0$ such that \begin{equation*} \phi_{n+m}(\mathtt{i}) \le \phi_n(\mathtt{i}) + \phi_m(\sigma^n\mathtt{i}) + C_1 \end{equation*} for all $n,m \in \mathbb{N}$ and $\mathtt{i} \in \Sigma$. A sub-additive sequence $\Phi = (\phi_n)_{n \in \mathbb{N}}$ is \emph{almost-additive} if there exists $C_2 \ge 0$ such that \begin{equation*} \phi_{n+m}(\mathtt{i}) \ge \phi_n(\mathtt{i}) + \phi_m(\sigma^n\mathtt{i}) - C_2 \end{equation*} for all $n,m \in \mathbb{N}$ and $\mathtt{i} \in \Sigma$. Finally, we say that an almost-additive sequence $\Phi$ is \emph{additive} if the constants $C_1$ and $C_2$ in the above inequalities can be chosen to be $0$. For example, if $\phi \colon \Sigma \to \mathbb{R}$ is a function, then $(\sum_{k=0}^{n-1} \phi \circ \sigma^k)_{n \in \mathbb{N}}$ is additive. In this context, the function $\phi$ is called a \emph{potential}. We say that a potential $\phi$ is \emph{H\"older continuous} if there exist $C>0$ and $0<\tau<1$ such that $$ |\phi(\mathtt{i})-\phi(\mathtt{j})|\leq C\tau^{|\mathtt{i}\wedge\mathtt{j}|} $$ for all $\mathtt{i},\mathtt{j}\in\Sigma$. If $\Phi = (\phi_n)_{n \in \mathbb{N}}$ is sub-additive, then the \emph{pressure} of $\Phi$ is defined by \begin{equation}\label{eq:pressure} P(\Phi) = \lim_{n \to \infty} \tfrac{1}{n} \log \sum_{\mathtt{i} \in \Sigma_n} \exp\max_{\mathtt{j} \in [\mathtt{i}]}\phi_n(\mathtt{j}). \end{equation} The limit above exists by the standard properties of sub-additive sequences.
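For the locally constant norm potentials considered later, $\phi_n$ is constant on cylinders of level $n$, so the maximum in \eqref{eq:pressure} is attained and the pressure can be approximated at a finite level. A sketch with an assumed pair of matrices (the Frobenius norm is used as a proxy for the operator norm; by norm equivalence this does not change the limit):

```python
import math
from itertools import product

# Finite-level pressure approximation (assumed example matrices):
#   P_n(s) = (1/n) log sum_{|i|=n} ||A_i||^s  ->  P(Phi^s).
# At s = 0 every term equals 1, so P_n(0) = log N exactly for every n.
A = {1: [[2.0, 1.0], [1.0, 1.0]], 2: [[3.0, 1.0], [2.0, 1.0]]}

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def fnorm(X):
    return math.sqrt(sum(X[i][j] ** 2 for i in range(2) for j in range(2)))

def pressure(s, n):
    total = 0.0
    for word in product(A, repeat=n):
        P = A[word[0]]
        for k in word[1:]:
            P = mul(P, A[k])
        total += fnorm(P) ** s
    return math.log(total) / n

assert abs(pressure(0.0, 10) - math.log(2)) < 1e-9
# all products have norm > 1 here, so the level-n pressure increases in s
assert pressure(1.0, 10) > pressure(0.0, 10)
```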
Let $\mu$ be a $\sigma$-invariant probability measure on $\Sigma$ and recall that the \emph{Kolmogorov-Sinai entropy} of $\mu$ is \begin{equation*} h_\mu = -\lim_{n \to \infty} \tfrac{1}{n} \sum_{\mathtt{i} \in \Sigma_n} \mu([\mathtt{i}]) \log\mu([\mathtt{i}]). \end{equation*} In addition, if $\Phi = (\phi_n)_{n \in \mathbb{N}}$ is a sub-additive sequence, then we set \begin{equation*} \Lambda_\mu(\Phi) = \lim_{n \to \infty} \tfrac{1}{n} \int_\Sigma \phi_n(\mathtt{i}) \,\mathrm{d}\mu(\mathtt{i}). \end{equation*} It is easy to see that \begin{equation*} P(\Phi) \ge h_\mu + \Lambda_\mu(\Phi) \end{equation*} for all $\sigma$-invariant probability measures $\mu$. The variational principle \begin{equation*} P(\Phi) =\sup\left\{ h_\mu + \Lambda_\mu(\Phi):\mu\text{ is $\sigma$-invariant and }\Lambda_\mu(\Phi)\neq-\infty\right\} \end{equation*} is proved in \cite{CaoFengHuang}. For matrix cocycles this was obtained earlier in \cite{Kaenmaki2004}. A $\sigma$-invariant measure $\mu$ satisfying \begin{equation*} P(\Phi) = h_\mu + \Lambda_\mu(\Phi) \end{equation*} is called an \emph{equilibrium state} for $\Phi$. Such a measure always exists in the context of matrix cocycles, but it is not known if a general sub-additive sequence has an equilibrium state; see \cite{Barreira2010}. We say that a probability measure $\mu$ on $\Sigma$ is \emph{quasi-Bernoulli} if there exists a constant $C\ge 1$ such that $$ C^{-1}\mu([\mathtt{i}])\mu([\mathtt{j}])\leq\mu([\mathtt{i}\mathtt{j}])\leq C\mu([\mathtt{i}])\mu([\mathtt{j}]) $$ for all $\mathtt{i},\mathtt{j}\in\Sigma_*$. If the constant $C$ above can be chosen to be $1$, then $\mu$ is a \emph{Bernoulli} measure. In other words, a probability measure $\mu$ is Bernoulli if there exists a probability vector $(p_1,\ldots,p_N)$ such that $$ \mu([\mathtt{i}])=p_{i_1}\cdots p_{i_{n}} $$ for all $\mathtt{i}=i_1\cdots i_n\in\Sigma_n$ and $n \in \mathbb{N}$.
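For a Bernoulli measure, the inequality $P(\Phi) \ge h_\mu + \Lambda_\mu(\Phi)$ can already be observed at a single level $n$, where it reduces to the classical Gibbs inequality $\sum_{\mathtt{i}} \mu([\mathtt{i}])\log\bigl(q_{\mathtt{i}}/\mu([\mathtt{i}])\bigr) \le \log\sum_{\mathtt{i}} q_{\mathtt{i}}$, with $q_{\mathtt{i}}$ the norm of the corresponding matrix product. A sketch with assumed weights and matrices:

```python
import math
from itertools import product

# Finite-level check of P >= h + Lambda (assumed example data): for a
# Bernoulli measure mu, the level-n entropy and energy terms combine into
#   (1/n) sum_i mu[i] (log q_i - log mu[i]) <= (1/n) log sum_i q_i,
# which is exactly the Gibbs inequality.
A = {1: [[2.0, 1.0], [1.0, 1.0]], 2: [[3.0, 1.0], [2.0, 1.0]]}
p = {1: 0.3, 2: 0.7}           # assumed Bernoulli weights

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def fnorm(X):
    return math.sqrt(sum(X[i][j] ** 2 for i in range(2) for j in range(2)))

n = 8
level_sum, total, musum = 0.0, 0.0, 0.0
for word in product(A, repeat=n):
    P = A[word[0]]
    mu = p[word[0]]
    for k in word[1:]:
        P = mul(P, A[k])
        mu *= p[k]
    q = fnorm(P)
    level_sum += mu * (math.log(q) - math.log(mu))   # entropy + energy combined
    total += q
    musum += mu

assert abs(musum - 1.0) < 1e-9                       # mu is a probability measure
assert level_sum / n <= math.log(total) / n + 1e-12  # h + Lambda <= P at level n
```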
Let $\phi \colon \Sigma \to \mathbb{R}$ be a continuous potential and $\Phi = (\sum_{k=0}^{n-1} \phi \circ \sigma^k)_{n \in \mathbb{N}}$. We say that a Borel probability measure $\mu$ on $\Sigma$ is a \emph{Gibbs measure} for $\phi$ if there exists a constant $C \ge 1$ such that \begin{equation}\label{eq:gibbsmeas} C^{-1}\exp\biggl(-nP(\Phi) + \sum_{k=0}^{n-1} \phi(\sigma^k(\mathtt{j}))\biggr) \le \mu([\mathtt{i}]) \le C\exp\biggl(-nP(\Phi) + \sum_{k=0}^{n-1} \phi(\sigma^k(\mathtt{j}))\biggr) \end{equation} for all $\mathtt{i} \in \Sigma_n$, $\mathtt{j} \in [\mathtt{i}]$, and $n \in \mathbb{N}$. For example, the Bernoulli measure obtained from a probability vector $(p_1,\ldots,p_N)$ is a Gibbs measure for the potential $\mathtt{i}\mapsto\log p_{\mathtt{i}|_1}$. If $\phi$ is H\"older continuous, then there is a unique $\sigma$-invariant Gibbs measure, which is also the unique equilibrium state; see \cite[Theorems 1.4 and 1.22]{Bowen}. Similarly, if $\Phi = (\phi_n)_{n \in \mathbb{N}}$ is sub-additive, then a Borel probability measure $\mu$ on $\Sigma$ is a \emph{Gibbs-type measure} for $\Phi$ if there exists a constant $C \ge 1$ such that \begin{equation}\label{eq:Gibbstype} C^{-1}\exp\biggl(-nP(\Phi) + \phi_n(\mathtt{j})\biggr) \le \mu([\mathtt{i}]) \le C\exp\biggl(-nP(\Phi) + \phi_n(\mathtt{j})\biggr) \end{equation} for all $\mathtt{i} \in \Sigma_n$, $\mathtt{j} \in [\mathtt{i}]$, and $n \in \mathbb{N}$. It is easy to see that a $\sigma$-invariant Gibbs-type measure is ergodic and hence the unique equilibrium state; see \cite[\S 3.2]{KaenmakiReeve2014}. If $\Phi$ is almost-additive, then, similarly as with continuous potentials, there exist conditions that guarantee the existence of a $\sigma$-invariant Gibbs-type measure; see \cite[\S 4.2]{Barreira2010}. Our main objective is to study thermodynamic formalism in the setting of matrix cocycles.
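The claim above that the Bernoulli measure is a Gibbs measure for $\mathtt{i}\mapsto\log p_{\mathtt{i}|_1}$ can be checked directly, with the optimal constant $C=1$ in \eqref{eq:gibbsmeas}:

```latex
% For \mathtt{i} \in \Sigma_n and \mathtt{j} \in [\mathtt{i}] we have \mathtt{j}|_n = \mathtt{i},
% so the Birkhoff sum only sees the first n symbols of \mathtt{j}; moreover P(\Phi) = 0
% because \sum_{\mathtt{i} \in \Sigma_n} p_{i_1} \cdots p_{i_n} = 1 for every n. Hence
\begin{equation*}
\exp\biggl(-nP(\Phi) + \sum_{k=0}^{n-1} \phi(\sigma^k(\mathtt{j}))\biggr)
  = p_{j_1} \cdots p_{j_n} = p_{i_1} \cdots p_{i_n} = \mu([\mathtt{i}]),
\end{equation*}
% and both inequalities in \eqref{eq:gibbsmeas} hold with C = 1.
```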
Let $\mathsf{A} = (A_1,\ldots,A_N) \in GL_2(\mathbb{R})^N$, $s > 0$, and define $\phi_n^s \colon \Sigma \to \mathbb{R}$ for all $n \in \mathbb{N}$ by setting $\phi_n^s(\mathtt{i}) = \log\| A_{\mathtt{i}|_n} \|^s$, where $A_\mathtt{i} = A_{i_1} \cdots A_{i_n}$ for all $\mathtt{i} = i_1 \cdots i_n \in \Sigma_n$ and $n \in \mathbb{N}$. Then the sequence $\Phi^s = (\phi_n^s)_{n \in \mathbb{N}}$ parametrised by $s > 0$ is sub-additive. By \cite[Theorems~2.6 and 4.1]{Kaenmaki2004}, for every choice of the matrix tuple $\mathsf{A}$, there exists an ergodic equilibrium state for $\Phi^s$. The structure of the set of all equilibrium states for $\Phi^s$ is well known. We say that $\mathsf{A} = (A_1,\ldots,A_N) \in GL_2(\mathbb{R})^N$ is \emph{irreducible} if there does not exist a $1$-dimensional linear subspace $V$ such that $A_iV=V$ for all $i \in \{ 1,\ldots,N \}$; otherwise $\mathsf{A}$ is \emph{reducible}. In a reducible tuple $\mathsf{A}$, all the matrices are simultaneously upper triangular in some basis. If $\mathsf{A}$ is irreducible, then there is a unique equilibrium state, which is a Gibbs-type measure for $\Phi^s$; see \cite[Proposition 1.2]{FengKaenmaki2011}. It is worthwhile to remark that irreducibility does not imply that $\Phi^s$ is almost-additive. In the reducible case, there can be two distinct ergodic equilibrium states; see \cite[Theorem 1.7]{FengKaenmaki2011}. Recall also that the set $\{ \mathsf{A} \in GL_2(\mathbb{R})^N : \mathsf{A} \text{ is irreducible} \}$ is open, dense, and of full Lebesgue measure in $GL_2(\mathbb{R})^N$. In fact, the complement of the set is a finite union of $(4N-1)$-dimensional algebraic varieties; see \cite[Propositions 3.4 and 3.6]{KaenmakiLi2017}. The following four results characterise, in terms of the matrix tuple, different kinds of properties that equilibrium states for $\Phi^s$ can have.
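For the reader's convenience, we record why $\Phi^s$ is sub-additive: since $A_{\mathtt{i}|_{n+m}} = A_{\mathtt{i}|_n} A_{(\sigma^n\mathtt{i})|_m}$ and the operator norm is sub-multiplicative,

```latex
\begin{equation*}
\phi_{n+m}^s(\mathtt{i})
  = s \log\| A_{\mathtt{i}|_n} A_{(\sigma^n\mathtt{i})|_m} \|
  \le s \log\| A_{\mathtt{i}|_n} \| + s \log\| A_{(\sigma^n\mathtt{i})|_m} \|
  = \phi_n^s(\mathtt{i}) + \phi_m^s(\sigma^n\mathtt{i}),
\end{equation*}
```

so the constant $C_1 = 0$ works. The reverse inequality has no such automatic counterpart, which is why almost-additivity of $\Phi^s$ is a genuine restriction on the tuple $\mathsf{A}$.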
\begin{proposition}\label{prop:gibbstype} If $\mathsf{A}=(A_1,\ldots,A_N)\in GL_2(\mathbb{R})^N$ and $\mu$ is an ergodic equilibrium state for $\Phi^s$, then the following two statements are equivalent: \begin{enumerate} \item\label{it:gibbstype1} $\mu$ is a Gibbs-type measure for $\Phi^s$, \item\label{it:gibbstype2} at least one of the following three conditions holds: \begin{enumerate} \item\label{casea} $\mathsf{A}$ is irreducible, \item\label{caseb} $\mathsf{A}$ is strongly conformal, \item\label{casec} $\mathsf{A}$ is reducible with a common invariant subspace $V$ and there exists $\varepsilon>0$ such that either the closed $\varepsilon$-neighbourhood of $V$ or the closure of its complement is an invariant unstable multicone. \end{enumerate} \end{enumerate} \end{proposition} Note that $\mathsf{A}$ can be both irreducible and strongly conformal and that neither condition implies the other. \begin{proposition}\label{prop:bernoulli} If $\mathsf{A}=(A_1,\ldots,A_N)\in GL_2(\mathbb{R})^N$ and $\mu$ is an ergodic equilibrium state for $\Phi^s$, then the following two statements are equivalent: \begin{enumerate} \item\label{it:bernoulli1} $\mu$ is a Bernoulli measure, \item\label{it:bernoulli2} $\mathsf{A}$ is reducible or $\mathsf{A}$ is strongly conformal. \end{enumerate} \end{proposition} In the previous two propositions, one has to assume that the equilibrium state is ergodic; see \cite[Example 6.2]{KaenmakiVilppolainen2010} for a counter-example. We remark that the Bernoulli property has been studied earlier in \cite[Theorem 13]{Morris2016}. Since the propositions give a complete characterisation of the properties in the reducible case, we can restrict our attention to irreducible matrix tuples.
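To make the notion of reducibility concrete (an illustrative example, with matrices chosen ad hoc), any tuple of upper triangular matrices is reducible, since the first coordinate axis is a common invariant line:

```latex
% The line V = span{(1,0)^T} satisfies AV = V for every invertible
% upper triangular matrix A; for instance,
\begin{equation*}
\begin{pmatrix} 2 & 1 \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ 0 \end{pmatrix}
= \begin{pmatrix} 2x \\ 0 \end{pmatrix}
\quad\text{and}\quad
\begin{pmatrix} 1 & 3 \\ 0 & 2 \end{pmatrix}
\begin{pmatrix} x \\ 0 \end{pmatrix}
= \begin{pmatrix} x \\ 0 \end{pmatrix},
\end{equation*}
```

so both matrices fix $V = \mathrm{span}\{(1,0)^T\}$ and the pair is reducible. By Proposition \ref{prop:bernoulli}, every ergodic equilibrium state for $\Phi^s$ over such a tuple is a Bernoulli measure.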
\begin{theorem}\label{thm:fortuples} If $\mathsf{A}=(A_1,\ldots,A_N)\in GL_2(\mathbb{R})^N$ is irreducible and $\mu$ is an equilibrium state for $\Phi^s$, then the following four statements are equivalent: \begin{enumerate} \item\label{thm:ftv} $\mu$ is a quasi-Bernoulli measure, \item\label{thm:ftvvv} $\mathcal{S}(\mathsf{A})$ is almost multiplicative, \item\label{thm:fti} $\mathsf{A}$ can be decomposed into two sets $\mathsf{A}_e$ and $\mathsf{A}_h$ such that $\mathsf{A}_e$ is strongly conformal and if $\mathsf{A}_h\neq\emptyset$, then $\mathsf{A}_h$ has a strongly invariant multicone $\mathcal{C}$ such that $A\mathcal{C}=\mathcal{C}$ for all $A\in\mathsf{A}_e$, \item\label{thm:ftvii} there exist a constant $C>0$ and a $\mu$-almost everywhere continuous potential $f\in L^1(\mu)$ such that \begin{equation}\label{eq:shadowing} \Biggl|\sum_{k=0}^{n-1}f(\sigma^k\mathtt{i})-\log\|A_{\mathtt{i}|_n}\|\Biggr|\leq C \end{equation} for all $\mathtt{i} \in \Sigma$ and $n \in \mathbb{N}$. \end{enumerate} \end{theorem} The previous theorem gives a positive answer to \cite[Question 7.4]{BaranyKaenmakiKoivusalo2017} in the two-dimensional case. \begin{theorem}\label{thm:holder} If $\mathsf{A}=(A_1,\ldots,A_N)\in GL_2(\mathbb{R})^N$ is irreducible and $\mu$ is an equilibrium state for $\Phi^s$, then the following three statements are equivalent: \begin{enumerate} \item\label{it:holder1} $\mu$ is a Gibbs measure for some H\"older continuous potential, \item\label{it:holder2} $\mathsf{A}$ has a strongly invariant multicone or $\mathsf{A}$ is strongly conformal, \item\label{it:holder3} there exist a constant $C>0$ and a H\"older-continuous potential $f$ such that $$ \Biggl|\sum_{k=0}^{n-1}f(\sigma^k\mathtt{i})-\log\|A_{\mathtt{i}|_n}\|\Biggr|\leq C $$ for all $\mathtt{i} \in \Sigma$ and $n \in \mathbb{N}$.
\end{enumerate} \end{theorem} \begin{figure}[t] \centering \begin{tikzpicture}[scale=1.3] \draw [densely dotted] (3,3) rectangle (7,6); \node[align=center, text width=4cm] at (5,5) {Bernoulli ($\mathsf{A}$ is reducible but does not have an invariant multicone)}; \node[align=center, text width=4cm] at (5,3.5) {Bernoulli (other cases)}; \draw (2,2) -- (8,2) -- (8,4.03) -- (7.03,4.03) -- (7.03,6.03) -- (2.97,6.03) -- (2.97,4.03) -- (2,4.03) -- (2,2); \node[align=center] at (5,2.5) {Gibbs}; \draw[densely dashed] (1,1) -- (9,1) -- (9,4.06) -- (7.06,4.06) -- (7.06,6.06) -- (2.94,6.06) -- (2.94,4.06) -- (1,4.06) -- (1,1); \node[align=center] at (5,1.5) {quasi-Bernoulli}; \draw[ultra thick] (0,0) -- (10,0) -- (10,4.10) -- (0,4.10) -- (0,0); \node[align=center] at (5,0.5) {Gibbs-type}; \end{tikzpicture} \caption{Classification of equilibrium states for $\Phi^s$.} \label{fig:illustration} \end{figure} Figure \ref{fig:illustration} illustrates how different properties of equilibrium states for $\Phi^s$ are related. The following example shows that the inclusions depicted in the figure are strict. \begin{example} \label{example} (1) It can happen that an equilibrium state for $\Phi^s$ is a Gibbs measure for some H\"older-continuous potential, but is not a Bernoulli measure: Choose two positive matrices \begin{equation*} A_1 = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} \quad\text{and}\quad A_2 = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}. \end{equation*} Then $(A_1,A_2)$ is irreducible and has a strongly invariant multicone (namely the union of the first and third quadrants). The claim now follows from Theorem \ref{thm:holder} and Proposition \ref{prop:bernoulli}. (2) It can happen that an equilibrium state for $\Phi^s$ is a quasi-Bernoulli measure, but is not a Gibbs measure for any H\"older-continuous potential: Let $A_1$ and $A_2$ be as above.
Then $(A_1,A_2,I)$ is irreducible and has an invariant multicone (namely the union of the first and third quadrants). The claim now follows from Theorems \ref{thm:fortuples} and \ref{thm:holder}. (3) It can happen that an equilibrium state for $\Phi^s$ is a Gibbs-type measure for $\Phi^s$, but is not a quasi-Bernoulli measure: Choose two matrices \begin{equation*} A_3 = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix} \quad\text{and}\quad A_4 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}. \end{equation*} Then $(A_3,A_4)$ is irreducible, has no invariant multicone, and its matrices are not all conformal. The claim now follows from Proposition \ref{prop:gibbstype} and Theorem \ref{thm:fortuples}. We remark that this phenomenon has been observed earlier in \cite[\S 1.4]{FraserJordanJurga2017}. Another way to see the claim is to consider a tuple of two conformal matrices which is irreducible but whose elements do not share a conjugation matrix. \end{example} \section{Characterization of domination}\label{sec:dom} In this section, we prove Theorem \ref{thm:justdomin} and Propositions \ref{prop:connect} and \ref{prop:converse}. Let $\mathsf{A} \subset GL_2(\mathbb{R})$ and recall that $\mathcal{S}(\mathsf{A})$ is the sub-semigroup of $GL_2(\mathbb{R})$ generated by $\mathsf{A}$. Let $\mathscr{S}(\mathsf{A}) = \overline{\mathbb{R}\mathcal{S}(\mathsf{A})}\subset M_2(\mathbb{R})$ and note that $\mathscr{S}(\mathsf{A})$ is a sub-semigroup of $M_2(\mathbb{R})$. Define \begin{equation*} \mathscr{R}(\mathsf{A}) = \{ A \in \mathscr{S}(\mathsf{A}) : \rank(A)=1 \}. \end{equation*} \begin{lemma}\label{lem:onlyel} If $\mathsf{A} \subset GL_2(\mathbb{R})$, then $\mathscr{R}(\mathsf{A})=\emptyset$ if and only if $\mathsf{A}$ is strongly conformal.
\end{lemma} \begin{proof} If $\mathsf{A}$ is strongly conformal, then by definition there exists a conjugation matrix $M \in GL_2(\mathbb{R})$ such that $|\det(A)|^{-1/2} MAM^{-1}\in O(2)$ for all $A \in \mathsf{A}$, which implies that $|\det(A)|^{-1/2} MAM^{-1}\in O(2)$ for all nonzero $A \in \mathscr{S}(\mathsf{A})=\overline{\mathbb{R}\SS(\mathsf{A})}$. In particular, all nonzero elements of $\mathscr{S}(\mathsf{A})$ have rank $2$ and therefore $\mathscr{R}(\mathsf{A})=\emptyset$. Suppose conversely that $\mathscr{R}(\mathsf{A})=\emptyset$. We claim that the set $$ \mathscr{S}'(\mathsf{A})=\{|\det(A)|^{-1/2}A : A\in\mathscr{S}(\mathsf{A})\setminus\{\boldsymbol{0}\}\}=\mathscr{S}(\mathsf{A})\cap\{A \in M_2(\mathbb{R}) : |\det(A)|=1\} $$ is compact. It is obviously closed, being the intersection of $\mathscr{S}(\mathsf{A})$ with the closed set $\{A \in M_2(\mathbb{R}) : |\det(A)|=1\}$. If it contains a sequence of elements $(A_n)$ such that $\|A_n\| \to \infty$, then this sequence can without loss of generality be taken to be a sequence of elements of $\mathbb{R}\mathcal{S}(\mathsf{A})$. The sequence of normalised matrices $\|A_n\|^{-1}A_n$ has an accumulation point which necessarily has determinant zero and norm one and belongs to $\mathscr{S}(\mathsf{A})$; this limit point is thus an element of $\mathscr{R}(\mathsf{A})$, which is a contradiction, and we conclude that $\{|\det(A)|^{-1/2}A : A\in\mathscr{S}(\mathsf{A})\setminus\{\boldsymbol{0}\}\}$ is bounded. It is therefore compact as claimed. The set $\mathscr{S}'(\mathsf{A})$ is thus a compact sub-semigroup of $GL_2(\mathbb{R})$. We claim that it is a group. To show this it is sufficient to show that the inverse of every $A\in \mathscr{S}(\mathsf{A})$ with $|\det(A)|=1$ belongs to $\mathscr{S}(\mathsf{A})$.
If $A \in \mathscr{S}'(\mathsf{A})$ is arbitrary, use the compactness of $\mathscr{S}'(\mathsf{A})$ to take a convergent subsequence $(A^{n_k})_{k=1}^\infty$ of the sequence $(A^n)_{n=1}^\infty$ with limit $B \in \mathscr{S}'(\mathsf{A}) \subset GL_2(\mathbb{R})$, say. The sequence $(A^{-n_k})_{k=1}^\infty$ clearly converges to $B^{-1}$ and therefore $A^{n_{k+1} - n_k-1} \to A^{-1}$ as $k \to \infty$. Thus $A^{-1}$ is an accumulation point of a sequence of elements of $\mathscr{S}(\mathsf{A})$, hence an element of $\mathscr{S}(\mathsf{A})$. The set $\mathscr{S}'(\mathsf{A})$ is therefore a compact subgroup of $GL_2(\mathbb{R})$. If $m$ is the Haar measure on $\mathscr{S}'(\mathsf{A})$ and $\langle \cdot,\cdot\rangle$ is the standard inner product on $\mathbb{R}^2$, it is easy to see that $\langle u,v\rangle':=\int \langle Au,Av\rangle \,\mathrm{d} m(A)$ defines an inner product on $\mathbb{R}^2$ which is invariant under every element of $\mathscr{S}'(\mathsf{A})$. Every inner product on $\mathbb{R}^2$ is related to the standard one by a change of basis, so there exists $X \in GL_2(\mathbb{R})$ such that $\langle u,v\rangle'=\langle Xu,Xv\rangle$ for all $u,v \in \mathbb{R}^2$. In particular, $\langle XAX^{-1}u,XAX^{-1}v\rangle=\langle AX^{-1}u,AX^{-1}v\rangle'=\langle X^{-1}u,X^{-1}v\rangle'=\langle u,v\rangle$ for all $u,v \in \mathbb{R}^2$ and $A \in \mathscr{S}'(\mathsf{A})$, which yields $\mathscr{S}'(\mathsf{A})\subset XO(2)X^{-1}$. Thus $\mathscr{S}(\mathsf{A})$ is strongly conformal and therefore $\mathsf{A}$ is strongly conformal as required. \end{proof} We note that according to the previous lemma, $\mathscr{R}(\mathsf{A})\neq\emptyset$ if and only if $\SS(\mathsf{A})$ contains at least one proximal or parabolic element. In the next lemma, we exclude parabolic elements. \begin{lemma} \label{thm:dom-lemma0} Let $\mathsf{A} \subset GL_2(\mathbb{R})$ with $\mathscr{R}(\mathsf{A}) \ne \emptyset$ be such that $\mathcal{S}(\mathsf{A})$ is almost multiplicative.
Then $\mathcal{S}(\mathsf{A})$ does not contain parabolic elements and $\mathscr{R}(\mathsf{A})$ does not contain nilpotent elements. \end{lemma} \begin{proof} Suppose that $\mathcal{S}(\mathsf{A})$ contains a parabolic element. This means that, after a suitable change of basis, there exists $A\in\mathcal{S}(\mathsf{A})$ such that $$ A=\begin{pmatrix} a & 0 \\ b & a \end{pmatrix}, $$ where $b\neq 0$. Since $$ A^n=\begin{pmatrix} a^n & 0 \\ na^{n-1}b & a^n \end{pmatrix} $$ for all $n \in \mathbb{N}$, there exists $c>0$ such that $c^{-1}n|a^{n-1}b|\leq\|A^n\|\leq cn|a^{n-1}b|$ for all $n \in \mathbb{N}$. It follows directly that $\lim_{n \to \infty} \|A^{2n}\|/\|A^n\|^2=0$, which contradicts the condition $\|AB\|\geq \kappa \|A\|\|B\|$. Observe that the relation $\|AB\| \ge \kappa\|A\|\|B\|$ holds for all $A,B \in \mathscr{S}(\mathsf{A})$ by continuity. Similarly, if there exists a nilpotent $A\in\mathscr{R}(\mathsf{A})$, then $0=\|A^n\|\geq\kappa^{n-1}\|A\|^n>0$ for some $n\in\mathbb{N}$, which is again a contradiction. \end{proof} Assuming $\mathscr{R}(\mathsf{A}) \ne \emptyset$, we define the set $X_u$ of all \emph{unstable directions} of proximal elements of $\mathcal{S}(\mathsf{A})$ to be \begin{equation*} X_u = X_u(\mathsf{A}) = \{V \in \mathbb{RP}^1 : V=A\mathbb{R}^2 \text{ for some } A \in \mathscr{R}(\mathsf{A}) \} \end{equation*} and the set $X_s$ of all \emph{stable directions} to be \begin{equation*} X_s = X_s(\mathsf{A}) = \{ \ker(A) \in \mathbb{RP}^1 : A \in \mathscr{R}(\mathsf{A}) \}. \end{equation*} \begin{lemma} \label{thm:dom-lemma1} Let $\mathsf{A} \subset GL_2(\mathbb{R})$ with $\mathscr{R}(\mathsf{A}) \ne \emptyset$ be such that $\mathcal{S}(\mathsf{A})$ is almost multiplicative. Then the sets $X_u$ and $X_s$ are nonempty, compact, and disjoint. Furthermore, $AX_u \subset X_u$ for all $A \in \mathscr{S}(\mathsf{A})$.\end{lemma} \begin{proof} First, we note again that the relation $\|AB\| \ge \kappa\|A\|\|B\|$ holds for all $A,B \in \mathscr{S}(\mathsf{A})$.
To see that $X_u$ and $X_s$ are disjoint, note that if $V \in X_u \cap X_s$, then there exist nonzero $B_1,B_2 \in \mathscr{R}(\mathsf{A})$ such that $B_2\mathbb{R}^2=V$ and $B_1V=\{0\}$. Hence $B_1B_2$ is the zero matrix but $B_1$ and $B_2$ are not, which contradicts $\|B_1B_2\|\geq \kappa \|B_1\|\|B_2\|>0$. It follows that $X_u \cap X_s$ is empty. The nonempty set \[ \mathscr{R}_1(\mathsf{A})=\{B \in \mathscr{R}(\mathsf{A}) : \|B\|=1\}= \{B \in \mathscr{S}(\mathsf{A}) : \det(B)=0 \text{ and }\|B\|=1\} \] is clearly a closed bounded subset of $\mathscr{S}(\mathsf{A})$, and in particular is compact. It follows that $X_u$ and $X_s$ are the images of continuous functions $\mathscr{R}_1(\mathsf{A}) \to \mathbb{RP}^1$ and hence are compact and nonempty. To see the last claim, consider a subspace $U$ such that $U=AV$ for some $V \in X_u$ and $A \in \mathscr{S}(\mathsf{A})$. We have $V=B\mathbb{R}^2$ for some $B \in \mathscr{R}(\mathsf{A})$. Clearly $AB$ has rank at most $1$ and is nonzero since $\|AB\| \ge \kappa\|A\|\|B\|>0$, so $AB \in \mathscr{R}(\mathsf{A})$ and $U \in X_u$. \end{proof} The following lemma shows that the definitions of unstable and stable directions agree with the ones given in \S \ref{sec:subdom}. \begin{lemma} Let $\mathsf{A} \subset GL_2(\mathbb{R})$ with $\mathscr{R}(\mathsf{A}) \ne \emptyset$ be such that $\mathcal{S}(\mathsf{A})$ is almost multiplicative. Then \begin{align*} X_u&=\overline{\{u(A): A\in\SS(\mathsf{A})\text{ is proximal}\}}, \\ X_s&=\overline{\{s(A):A\in\SS(\mathsf{A})\text{ is proximal}\}}. \end{align*} \end{lemma} \begin{proof} Let us first demonstrate the inclusions \begin{align} X_u &\subset \overline{\{u(A): A\in\SS(\mathsf{A})\text{ is proximal}\}}, \label{eq:eztet} \\ X_s &\subset \overline{\{s(A): A\in\SS(\mathsf{A})\text{ is proximal}\}}.
\label{eq:aztat} \end{align} Before doing so, we show that for every $A \in \mathscr{R}(\mathsf{A})$ with $\|A\|=1$ and any sequence $(B_n)_{n=1}^\infty$ of elements of $\SS(\mathsf{A})$ such that $\|B_n\|^{-1}B_n\to A$ as $n\to\infty$, the matrix $B_n$ is proximal for all sufficiently large $n$. By Lemma~\ref{thm:dom-lemma0}, no $B_n$ may be a parabolic matrix. Suppose for a contradiction that, after passing to a suitable subsequence, every $B_n$ is conformal. Write $B_n':=\|B_n\|^{-1}B_n$ for all $n \in \mathbb{N}$. Since $A$ has rank one we have $\det(A)=0$ and therefore $\det(B_n') \to 0$. Since every $B_n'$ is conformal it satisfies $(\tr B_n')^2\leq 4|\det(B_n')|$ and therefore $\tr B_n' \to 0$. By the Cayley-Hamilton theorem, we have $(B_n')^2-(\tr B_n' )B_n'+(\det(B_n'))I=\boldsymbol{0}$ and since $B_n' \to A$ we deduce that $(B_n')^2 \to 0$. Since $\|B_n'\|=1$ for all $n \in \mathbb{N}$ we get $\|B_n^2\|/\|B_n\|^2 = \|(B_n')^2\|/\|B_n'\|^2 =\|(B_n')^2\|\to 0$, but this contradicts $ \|B_n^2 \| \geq \kappa \|B_n\|^2$. We conclude that $B_n$ is proximal for all sufficiently large $n$, as claimed. It is well known that the maps $u(\cdot)$ and $s(\cdot)$ are continuous on proximal matrices. Moreover, by Lemma~\ref{thm:dom-lemma0}, every $A\in\mathscr{R}(\mathsf{A})$ is proximal. Hence, if $V\in X_u$, then there exists a proximal $A\in\mathscr{R}(\mathsf{A})$ with $\|A\|=1$ such that $V=A\mathbb{R}^2=u(A)$. Moreover, there exists a sequence of proximal matrices $B_n\in\SS(\mathsf{A})$ such that $\|B_n\|^{-1}B_n\to A$ and thus, by the continuity of $u$, $u(B_n)=u(\|B_n\|^{-1}B_n)\to u(A)=V$, which shows \eqref{eq:eztet}. Similarly, if $V\in X_s$, then there exists a proximal $A\in\mathscr{R}(\mathsf{A})$ with $\|A\|=1$ such that $V=\ker(A)=s(A)$, and there exists a sequence of proximal matrices $B_n\in\SS(\mathsf{A})$ such that $\|B_n\|^{-1}B_n\to A$.
Applying now the continuity of $s$, we get $s(B_n)=s(\|B_n\|^{-1}B_n)\to s(A)=V$, showing \eqref{eq:aztat}. To finish the characterization of $X_u$ and $X_s$, it is sufficient to show that \begin{align} X_u &\supset \{u(A): A\in\SS(\mathsf{A})\text{ is proximal}\}, \label{eq:eztet2} \\ X_s &\supset \{s(A): A\in\SS(\mathsf{A})\text{ is proximal}\}, \label{eq:aztat2} \end{align} since we may then appeal to Lemma \ref{thm:dom-lemma1} and the fact that the sets $X_u$ and $X_s$ are closed. If $V=u(A)$ for some proximal $A\in\SS(\mathsf{A})$, then $\|A^{n}\|^{-1}A^n\to B$ as $n\to\infty$, where $B\in\mathscr{R}(\mathsf{A})$ is such that $B\mathbb{R}^2=u(A)=V$. This shows \eqref{eq:eztet2}. Similarly, if $V=s(A)$ for some proximal $A\in\SS(\mathsf{A})$, then $\|A^{n}\|^{-1}A^n\to B$ as $n\to\infty$, where $B\in\mathscr{R}(\mathsf{A})$ is such that $\ker(B)=V$. This shows \eqref{eq:aztat2} and completes the proof. \end{proof} Let $d$ be the metric on $\mathbb{RP}^1$ defined by taking $d(U,V)$ to be the angle between the subspaces $U$ and $V$. If $\mathsf{A} \subset GL_2(\mathbb{R})$ is such that $\mathscr{R}(\mathsf{A}) \ne \emptyset$, then we define \[ \mathcal{V}_n = \{U\in \mathbb{RP}^1 \colon d(U,V)< \tfrac{1}{n}\text{ for some } V\in X_u \} \] and \[ \mathcal{U}_n =\bigcup_{A \in \mathcal{S}(\mathsf{A})} A\mathcal{V}_n \] for all $n \in \mathbb{N}$. \begin{lemma} \label{thm:dom-lemma2} Let $\mathsf{A} \subset GL_2(\mathbb{R})$ with $\mathscr{R}(\mathsf{A}) \ne \emptyset$ be such that $\mathcal{S}(\mathsf{A})$ is almost multiplicative. Then there is $n_0 \in \mathbb{N}$ such that $\overline{\mathcal{U}}_n$ as defined above is an invariant unstable multicone for all $n \ge n_0$.
\end{lemma} \begin{proof} Note that for all $n \in \mathbb{N}$ the invariance of $\overline{\mathcal{U}}_n$ and the property \eqref{i-X4} in the definition of the unstable multicone (see \S \ref{sec:subdom}) follow immediately from the definition of the set $\mathcal{U}_n$ and the continuity of each $A \in \mathcal{S}(\mathsf{A})$ as an action on $\mathbb{RP}^1$. Let us prove the property \eqref{i-X5} for all $n \in \mathbb{N}$. Obviously $\mathcal{V}_n$ is open, and since each $A \in \mathcal{S}(\mathsf{A})$ is invertible and therefore induces a homeomorphism of $\mathbb{RP}^1$, each $\mathcal{U}_n$ is open too. It is clear from the definition that every connected component of $\mathcal{V}_n$ intersects $X_u$. If $U \in \mathcal{U}_n$, then $U=AU'$ for some $A \in \mathcal{S}(\mathsf{A})$ and $U' \in \mathcal{V}_n$. Let $\mathcal{I}\subset \mathcal{V}_n$ be an open connected set which contains $U'$ and which also intersects $X_u$. The set $A\mathcal{I}$ then contains $U$, is connected, and intersects $AX_u$. Since $AX_u \subset X_u$ by Lemma \ref{thm:dom-lemma1}, we conclude that each connected component of $\mathcal{U}_n$ intersects $X_u$. To show that the property \eqref{i-X3} holds for all large enough $n$, let us suppose the contrary. In this case $\overline{\mathcal{U}}_n \cap X_s$ must be nonempty for infinitely many $n \in \mathbb{N}$. This implies that in any prescribed neighbourhoods of $X_u$ and $X_s$ we may find a subspace $U$ in the neighbourhood of $X_u$ and a matrix $A \in \mathcal{S}(\mathsf{A})$ such that $AU$ belongs to the neighbourhood of $X_s$. It follows that we may choose a sequence of subspaces $(U_n)$ converging to a limit $U \in X_u$ and a sequence $(A_n)$ of elements of $\mathcal{S}(\mathsf{A})$ such that $A_nU_n$ converges to a limit $V \in X_s$. 
Define $B_n:=\|A_n\|^{-1}A_n \in \mathscr{S}(\mathsf{A})$ for every $n \in \mathbb{N}$, and by passing to a subsequence if necessary we may suppose that $(B_n)$ converges to a limit $B \in \mathscr{S}(\mathsf{A})$ with norm 1. We claim that $BU=V$. Let $(u_n)$ be a sequence of unit vectors such that $u_n \in U_n$ for every $n \in \mathbb{N}$ and such that $(u_n)$ converges to a unit vector $u \in U$. It is enough to show that $(B_nu_n)$ converges to $Bu$ and that $Bu$ is nonzero, since we have then shown that $V=\lim_{n \to \infty} B_nU_n=BU$. To see that $Bu$ is nonzero we note that $u \in U \in X_u$ and $B \in \mathscr{S}(\mathsf{A})$ with $B \neq 0$, so if $Bu=0$ then $u \in \ker B \in X_s$ and we have $U \in X_s \cap X_u$ contradicting Lemma \ref{thm:dom-lemma1}. On the other hand since \[0\leq \|B_nu_n-Bu\| \leq \|B_nu_n-B_nu\|+\|B_nu-Bu\| \leq \|u_n-u\|+\|B_n-B\| \to 0 \] we have $B_nu_n \to Bu$ as $n \to \infty$ as required. But the equation $BU=V$ is impossible since $BU \in X_u$ by Lemma \ref{thm:dom-lemma1} and therefore $V \in X_s \cap X_u$, contradicting Lemma \ref{thm:dom-lemma1}. We conclude that $\overline{\mathcal{U}}_n \cap X_s$ must be empty for all large enough $n$ and therefore property \eqref{i-X3} holds for all $n$ sufficiently large. We are left to show that $\overline{\mathcal{U}}_n$ is a multicone. To that end, it suffices to show that $\partial\mathcal{U}_n$ contains only finitely many points. To see this suppose for a contradiction that $U \in \mathbb{RP}^1$ is an accumulation point of a sequence $(U_k)_{k=1}^\infty$ of distinct elements of $\partial\mathcal{U}_n$. We will find it convenient to identify a small open neighbourhood $\mathcal{I}$ of $U$ with a bounded open interval $(a,b)\subset \mathbb{R}$. By passing to a subsequence if necessary we may assume that $(U_k)_{k=1}^\infty$ is monotone with respect to the natural order on $\mathcal{I}$, and without loss of generality we assume $(U_k)_{k=1}^\infty$ to be strictly increasing. 
We assert that every interval $(U_k,U_{k+2})$ contains a point of $X_u$. Since $U_{k+1}$ is in the closure of $\mathcal{U}_n$, there exists a point of $\mathcal{U}_n$ in the interval $(U_k,U_{k+2})$. Since neither $U_k$ nor $U_{k+2}$ can belong to $\mathcal{U}_n$, it follows that some connected component of $\mathcal{U}_n$ is contained wholly within the interval $(U_k,U_{k+2})$. By \eqref{i-X5}, this implies that a point of $X_u$ must lie in the interval $(U_k,U_{k+2})$. Since this is true for every $k \in \mathbb{N}$, it follows that $U$ is an accumulation point of $X_u$ and hence, by Lemma \ref{thm:dom-lemma1}, $U$ belongs to $X_u$. But $X_u$ is a subset of $\mathcal{U}_n$ and therefore $U \in \mathcal{U}_n$, which implies that $U_k \in \mathcal{U}_n$ for all sufficiently large $k$. This is clearly impossible since no element of $\partial \mathcal{U}_n$ can be an element of $\mathcal{U}_n$. This contradiction proves that $\partial \mathcal{U}_n$ must be finite. \end{proof} The above lemmas prove Theorem \ref{thm:justdomin}: \begin{proof}[Proof of Theorem~\ref{thm:justdomin}] If $\mathscr{R}(\mathsf{A}) = \emptyset$, then, by Lemma~\ref{lem:onlyel}, the set $\mathsf{A}$ is strongly conformal. If $\mathscr{R}(\mathsf{A}) \ne \emptyset$, then the claim follows from Lemmas \ref{thm:dom-lemma0} and \ref{thm:dom-lemma2}. \end{proof} Let us next turn to the proof of the propositions. \begin{lemma}\label{lem:elcone} Let $A\in GL_2(\mathbb{R})$ and let $\mathcal{C}$ be a multicone such that $A\mathcal{C}\subset\mathcal{C}$. If $A$ is conformal, then $A\mathcal{C}=\mathcal{C}$. \end{lemma} \begin{proof} By a suitable change of basis, we may assume that $A\in O(2)$. In this case, $A$ preserves Lebesgue measure on $\mathbb{RP}^1$. If $A\mathcal{C}\subsetneq\mathcal{C}$, then, since $\mathcal{C}$ is a finite union of closed projective intervals and $A$ is a homeomorphism, $A\mathcal{C}$ must have smaller Lebesgue measure than $\mathcal{C}$, which is a contradiction.
\end{proof} We remark that the converse statement is false: if $A$ is proximal and $\mathcal{C}$ is a closed projective interval with one endpoint equal to $u(A)$ and the other endpoint equal to $s(A)$, then $A\mathcal{C}=\mathcal{C}$ but $A$ is not conformal. If $\mathsf{A}\subset GL_2(\mathbb{R})$ and $\mathsf{A}_e$ is the collection of all conformal elements of $\mathsf{A}$, then we write \begin{equation*} \mathcal{F}(\mathsf{A}):=\mathcal{S}(\{|\det(A)|^{-1/2}A : A \in \mathsf{A}_{e}\}). \end{equation*} \begin{lemma} \label{thm:dom-lemma3} Let $\mathsf{A}\subset GL_2(\mathbb{R})$ be such that $\mathscr{R}(\mathsf{A})\neq\emptyset$ and $\mathsf{A}_e$ be the collection of all conformal elements of $\mathsf{A}$. If $\mathcal{C}$ is an invariant unstable multicone of $\mathsf{A}$, then $\mathsf{A}_e = \{A \in \mathsf{A} : A\mathcal{C} = \mathcal{C}\}$ is strongly conformal and $\mathcal{F}(\mathsf{A})$ is finite. \end{lemma} \begin{proof} Since $\mathsf{A}$ has an invariant multicone $\mathcal{C}$, it follows from Lemma~\ref{lem:elcone} that $A\mathcal{C}=\mathcal{C}$ for all $A\in\mathsf{A}_e$. Hence $\mathsf{A}_e \subset \{A\in\mathsf{A}: A\mathcal{C}=\mathcal{C}\}$. Write $\mathsf{A}_{e}' = \{|\det(A)|^{-1/2}A : A\in\mathsf{A}\text{ and }A\mathcal{C}=\mathcal{C}\}$. Let us first assume that $\#\partial \mathcal{C}>2$. Let $B_1,B_2 \in \mathcal{S}(\mathsf{A}_{e}')$ and suppose that $B_1$ and $B_2$ induce the same permutation of $\partial\mathcal{C}$. Then $B_1^{-1}B_2$ fixes every point of $\partial\mathcal{C}$ and therefore has more than $2$ invariant subspaces and is necessarily equal to $\pm I$. It follows that in this case $\mathcal{S}(\mathsf{A}_{e}')$ has at most $2(\#\partial\mathcal{C})!$ distinct elements. Let us now assume that $\#\partial \mathcal{C}=2$. Write $\partial \mathcal{C}=\{U_1,U_2\}$, and let $u_1 \in U_1$ and $u_2 \in U_2$ be so that $\{u_1,u_2\}$ is a basis for $\mathbb{R}^2$. 
Every element of $\mathcal{S}(\mathsf{A}_{e}')$ preserves $\partial\mathcal{C}$ and hence is either diagonal or antidiagonal in this basis (where by an antidiagonal matrix we mean a $2\times 2$ matrix with both main diagonal entries equal to zero and both other entries nonzero). Let $D$ be the matrix for which $Du_1=u_1$ and $Du_2=-u_2$. A diagonal element of $\mathcal{S}(\mathsf{A}_{e}')$ cannot be proximal since then either $U_1$ or $U_2$ would be the stable space of that matrix, contradicting the property $X_s \cap \mathcal{C} =\emptyset$ of the unstable multicone $\mathcal{C}$. It follows that every diagonal element of $\mathcal{S}(\mathsf{A}_{e}')$ must belong to $\{\pm I,\pm D\}$. Let $A_1,\ldots,A_\ell$ be the anti-diagonal elements of $\mathsf{A}_{e}'$ and define $S=\{\pm I, \pm D\} \cup \{\pm A_1,\ldots, \pm A_\ell\} \cup \{\pm DA_1,\ldots,\pm DA_\ell\}$. The set $S$ is a semigroup since $A_iD=-DA_i$ and since each $A_iA_j$ is diagonal and hence equal to $\pm I$ or $\pm D$. In particular, $\mathcal{S}(\mathsf{A}_{e}')$ is contained in a finite semigroup. Thus, $\mathsf{A}_{e}'$ is strongly conformal, which implies that $\{A\in\mathsf{A}: A\mathcal{C}=\mathcal{C}\}\subset\mathsf{A}_{e}$. \end{proof} \begin{lemma} \label{thm:dom-lemma4} Let $\mathsf{A}\subset GL_2(\mathbb{R})$ be such that $\mathsf{A}$ has an invariant unstable multicone $\mathcal{C}$ and $\SS(\mathsf{A})$ does not contain parabolic elements. Let $\mathsf{A}_{e}$ be the collection of all conformal elements of $\mathsf{A}$. Then $$ A_1F_1\cdots A_nF_n\mathcal{C} \subset \mathcal{C}^o $$ for all $n\geq (\#\partial\mathcal{C})^2+1$, $A_1,\ldots,A_n \in \mathsf{A} \setminus \mathsf{A}_{e}$, and $F_1,\ldots,F_n \in \mathcal{F}(\mathsf{A})$. \end{lemma} \begin{proof} It is sufficient to show that every point of $\partial\mathcal{C}$ is mapped into $\mathcal{C}^\circ$ by $A_1F_1\cdots A_nF_n$.
Clearly, if there exists $\ell \in \{1,\ldots,n\}$ such that $A_\ell F_\ell\cdots A_nF_n\mathcal{C}\subset\mathcal{C}^o$, then our claim follows. Suppose for a contradiction that there exist $n\geq (\#\partial\mathcal{C})^2+1$, $A_1,\ldots,A_n\in\mathsf{A}\setminus\mathsf{A}_{e}$, and $F_1,\ldots,F_n\in\mathcal{F}(\mathsf{A})$ such that for every $\ell \in \{1,\ldots,n\}$ there exist $V_\ell,W_\ell\in\partial\mathcal{C}$ for which $$ A_\ell F_\ell\cdots A_nF_nV_{\ell}=W_{\ell}. $$ Since $n\geq (\#\partial\mathcal{C})^2+1$, there exist $\ell_1<\ell_2$ such that $V_{\ell_1}=V_{\ell_2}$ and $W_{\ell_1}=W_{\ell_2}$. Hence, $$ A_{\ell_1}F_{\ell_1}\cdots A_{\ell_2-1}F_{\ell_2-1}W_{\ell_2}=W_{\ell_2}. $$ Thus, if $A_{\ell_1}F_{\ell_1}\cdots A_{\ell_2-1}F_{\ell_2-1}$ is proximal, then $W_{\ell_2}\in X_u\cup X_s$. This is impossible, since $\partial\mathcal{C}\cap (X_s\cup X_u)=\emptyset$. If $A_{\ell_1}F_{\ell_1}\cdots A_{\ell_2-1}F_{\ell_2-1}$ is conformal, then $A_{\ell_1}F_{\ell_1}\cdots A_{\ell_2-1}F_{\ell_2-1}\mathcal{C}=\mathcal{C}$ by Lemma~\ref{lem:elcone}. This is also impossible, since $\mathcal{C}\supsetneq A_{\ell_1}\mathcal{C}\supset A_{\ell_1}F_{\ell_1}\cdots A_{\ell_2-1}F_{\ell_2-1}\mathcal{C}$ by Lemma~\ref{thm:dom-lemma3}. \end{proof} \begin{lemma}\label{lem:fin} Let $\mathsf{A}\subset GL_2(\mathbb{R})$ be such that $\mathsf{A}$ has an invariant unstable multicone $\mathcal{C}$ and $\SS(\mathsf{A})$ does not contain parabolic elements. Let $\mathsf{A}_{e}$ be the collection of all conformal elements of $\mathsf{A}$. If $\mathsf{A} \setminus \mathsf{A}_e$ is compact, then $$ \mathsf{B}=\{A_1A_2\colon A_1\in\mathsf{A}\setminus\mathsf{A}_{e}\text{ and }A_2\in\mathcal{F}(\mathsf{A})\} $$ has a strongly invariant multicone. \end{lemma} \begin{proof} Write $m=(\#\partial\mathcal{C})^2+1$ and note that, by Lemma~\ref{thm:dom-lemma4}, $\mathsf{B}^{m}$ has a strongly invariant multicone.
Since $\mathsf{A}\setminus\mathsf{A}_{e}$ is compact by the assumption and $\mathcal{F}(\mathsf{A})$ is finite by Lemma \ref{thm:dom-lemma3}, $\mathsf{B}^m$ is compact. Hence, by \cite[Theorem~B]{BochiGourmelon09}, $\mathsf{B}^m$ is dominated, i.e.\ there exist constants $C>0$ and $\tau>1$ such that $$ \frac{\|B_1\cdots B_n\|}{\|(B_1\cdots B_n)^{-1}\|^{-1}}\geq C\tau^n $$ for all $B_1,\ldots,B_n\in\mathsf{B}^m$ and all $n\in\mathbb{N}$. Choose $k \in \mathbb{N}$ and let $A_iF_i\in\mathsf{B}$ for all $i \in \{1,\ldots,k\}$. Write $k=qm+p$, where $q\in\mathbb{N}\cup\{0\}$ and $p \in \{0,\ldots,m-1\}$. Then \begin{align*} \frac{\|A_1F_1\cdots A_kF_k\|}{\|(A_1F_1\cdots A_kF_k)^{-1}\|^{-1}} & = \dfrac{\|B_1\cdots B_q\cdot A_{k-p+1}F_{k-p+1}\cdots A_kF_k\|}{\|(B_1\cdots B_q\cdot A_{k-p+1}F_{k-p+1}\cdots A_kF_k)^{-1}\|^{-1}}\\ & \geq \dfrac{\|B_1\cdots B_q\|\|(A_{k-p+1}F_{k-p+1}\cdots A_kF_k)^{-1}\|^{-1}}{\|(B_1\cdots B_q)^{-1}\|^{-1}\|A_{k-p+1}F_{k-p+1}\cdots A_kF_k\|}\\ & \geq C\tau^{q}\dfrac{\|(A_{k-p+1}F_{k-p+1}\cdots A_kF_k)^{-1}\|^{-1}}{\|A_{k-p+1}F_{k-p+1}\cdots A_kF_k\|}. \end{align*} By choosing $C'=C\tau^{-1}\min_{\ell\in\{1,\ldots,m-1\}} \|(A_{1}F_{1}\cdots A_\ell F_\ell)^{-1}\|^{-1}/\|A_{1}F_{1}\cdots A_\ell F_\ell\|$ and $\tau'=\tau^{1/m}$, it follows again from \cite[Theorem~B]{BochiGourmelon09} that $\mathsf{B}$ has a strongly invariant multicone. \end{proof} The following lemma is \cite[Lemma~2.2]{BochiMorris15}. \begin{lemma}\label{lem:Morris} Let $\mathcal{C}_0,\mathcal{C}\subset\mathbb{RP}^1$ be multicones such that $\mathcal{C}_0\subset\mathcal{C}^o$. Then there exists a constant $\kappa_0>0$ such that $\|A|V\|\geq \kappa_0\|A\|$ for all $V\in\mathcal{C}_0$ and for every matrix $A\in GL_2(\mathbb{R})$ with $A\mathcal{C}\subset\mathcal{C}_0$. \end{lemma} We are now ready to prove the propositions: \begin{proof}[Proof of Proposition~\ref{prop:connect}] The assertion \eqref{item:this} follows immediately from Lemma~\ref{thm:dom-lemma3}. Let us verify \eqref{item:that}. 
If $\mathsf{A}_{e} = \mathsf{A}$, then $\SS(\mathsf{A})$ is strongly conformal since $\mathsf{A}$ is. This means that $\SS(\mathsf{A})$ does not contain a proximal matrix and thus, $\mathsf{A}$ cannot have an unstable multicone by definition. Therefore, \eqref{item:that} holds. To prove the final claim, it is sufficient to show that, by assuming $\mathsf{A}\setminus\mathsf{A}_e$ to be compact, there exists an invariant multicone $\mathcal{C}$ such that $A\mathcal{C}\subset\mathcal{C}^o$ for all $A\in\mathsf{A}\setminus\mathsf{A}_{e}$ and $A\mathcal{C}=\mathcal{C}$ for all $A\in\mathsf{A}_{e}$. By Lemma~\ref{thm:dom-lemma3}, the set $\mathcal{F}(\mathsf{A})$ is finite. Therefore, the set $\mathsf{B}=\{A_1A_2 : A_1\in\mathsf{A}\setminus\mathsf{A}_{e}\text{ and }A_2\in\mathcal{F}(\mathsf{A})\}$ is compact and, by Lemma~\ref{lem:fin}, it has a strongly invariant multicone $\mathcal{C}_0$. Defining $$ \mathcal{C}=\bigcup_{F\in\mathcal{F}(\mathsf{A})}F\mathcal{C}_0, $$ we have $$ A\mathcal{C} = \bigcup_{F\in\mathcal{F}(\mathsf{A})} AF\mathcal{C}_0 \subset\mathcal{C}_0^o\subset\mathcal{C}^o $$ for all $A\in\mathsf{A}\setminus\mathsf{A}_{e}$. We have finished the proof since for any $A\in\mathsf{A}_{e}$, $A\mathcal{C}=\mathcal{C}$ holds trivially. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:converse}] Let $\varepsilon>0$ and define $$ \mathcal{C}_0=\bigcup_{F\in\mathcal{F}(\mathsf{A})} F\biggl( \biggl\{ U \in \mathbb{RP}^1 : d(U,V) \le \varepsilon \text{ for some } V \in \bigcup_{A\in\mathsf{A}_{h}}A\mathcal{C} \biggr\} \biggr). $$ Recall that $\mathcal{F}(\mathsf{A})$ is finite by Lemma~\ref{thm:dom-lemma3}. By compactness of $\mathsf{A}_h$, we may choose $\varepsilon>0$ small enough so that $\mathcal{C}_0\subset\mathcal{C}^o$, $A\mathcal{C}\subset\mathcal{C}_0$ for all $A\in\mathsf{A}_{h}$, and $A\mathcal{C}_0=\mathcal{C}_0$ for all $A\in\mathsf{A}_{e}$. 
Observe that every element $A\in\mathcal{S}(\mathsf{A}_h \cup \mathsf{A}_e)$ can be written in the form $(c_0c_1\cdots c_k)F_0\prod_{i=1}^kA_iF_i$, where $c_i\in\mathbb{R}\setminus\{0\}$, $k\in\mathbb{N}\cup\{0\}$, $A_i\in\mathsf{A}_{h}$, and $F_i\in\mathcal{F}(\mathsf{A})$. Therefore, $A\mathcal{C}\subset\mathcal{C}_0$ for all $A\in\mathcal{S}(\mathsf{A}_{h}\cup\mathsf{A}_{e})\setminus\mathcal{S}(\mathsf{A}_{e})$. By Lemma~\ref{lem:Morris}, there exists a constant $\kappa_0=\kappa_0(\mathcal{C}_0,\mathcal{C})$ such that $\|A|V\|\geq\kappa_0\|A\|$ for all $V\in\mathcal{C}_0$ and for every matrix $A \in GL_2(\mathbb{R})$ with $A\mathcal{C}\subset\mathcal{C}_0$. Hence, $$ \|AB\|\geq\|AB|V\|=\|A|BV\|\|B|V\|\geq\kappa_0^2\|A\|\|B\| $$ for all $A,B\in\mathcal{S}(\mathsf{A}_{h}\cup\mathsf{A}_{e})\setminus\mathcal{S}(\mathsf{A}_{e})$. If $A\in\mathcal{S}(\mathsf{A}_{e})$ or $B\in\mathcal{S}(\mathsf{A}_{e})$, then $\|AB\|\geq \kappa'\|A\|\|B\|$ holds trivially for some $\kappa'>0$ by the finiteness of $\mathcal{F}(\mathsf{A})$. \end{proof} \section{Classification of equilibrium states} This section is devoted to the proofs of Propositions \ref{prop:gibbstype} and \ref{prop:bernoulli}, and Theorems \ref{thm:fortuples} and \ref{thm:holder}. In order to keep the proof of Theorem \ref{thm:holder} as readable as possible, we have postponed the proof of a key technical lemma, Lemma \ref{prop:key}, to \S \ref{sec:3m}. Before we start with the proofs of the propositions, we state a couple of auxiliary lemmas. We recall that $\lambda_u(A)$ is the eigenvalue of $A$ with the largest absolute value, and similarly, $\lambda_s(A)$ is the eigenvalue of $A$ with the smallest absolute value. Note that $|\lambda_u(A)|=\|A|u(A)\|$ and $|\lambda_s(A)|=\|A|s(A)\|$, where $u(A)$ is the eigenspace corresponding to $\lambda_u(A)$ and $s(A)$ the eigenspace corresponding to $\lambda_s(A)$. 
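To illustrate the role of these eigenspaces (an illustrative example, not part of the original argument), consider the proximal matrices $$ A=\begin{pmatrix}2&0\\0&\tfrac12\end{pmatrix}\quad\text{and}\quad B=\begin{pmatrix}\tfrac12&0\\0&2\end{pmatrix}, $$ for which $u(A)=s(B)$ is spanned by $(1,0)^T$ and $s(A)=u(B)$ by $(0,1)^T$. Here $AB=I$, so $\lambda_u(AB)=1\neq 4=\lambda_u(A)\lambda_u(B)$: the multiplicativity of $\lambda_u$ fails precisely because the pair shares neither its unstable nor its stable eigenspaces.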
The following two lemmas are special cases of the result of Protasov and Voynov; see \cite[Theorem~2]{ProtVoy}. In order to keep the paper as self-contained as possible, we give here alternative proofs. \begin{lemma}\label{lem:eigenspace} Let $\mathsf{A}=(A_1,\ldots,A_N)\in GL_2(\mathbb{R})^N$ be such that all the elements of $\mathsf{A}$ are proximal. Then the following two statements are equivalent: \begin{enumerate} \item\label{it:eig1} $\lambda_u(A_iA_j)=\lambda_u(A_i)\lambda_u(A_j)$ for all $i,j$, \item\label{it:eig2} $u(A_i)=u(A_j)$ for all $i,j$ or $s(A_i)=s(A_j)$ for all $i,j$. \end{enumerate} \end{lemma} \begin{proof} It is easy to see that \eqref{it:eig2} implies \eqref{it:eig1}. Let us show that \eqref{it:eig1} implies \eqref{it:eig2}. By the assumption and the multiplicativity of the determinant, we have $\lambda_s(A_iA_j)=\lambda_s(A_i)\lambda_s(A_j)$ for all $i,j$. First note that $s(A_i)\neq u(A_j)$ for any $i\neq j$. Indeed, $s(A_i)= u(A_j)$ would imply that the matrix $A_iA_j$ has eigenvalue $\lambda_s(A_i)\lambda_u(A_j)$. Thus, either $\lambda_s(A_i)\lambda_u(A_j)=\lambda_u(A_i)\lambda_u(A_j)$ or $\lambda_s(A_i)\lambda_u(A_j)=\lambda_s(A_i)\lambda_s(A_j)$, which implies that either $\lambda_s(A_i)=\lambda_u(A_i)$ or $\lambda_u(A_j)=\lambda_s(A_j)$, which contradicts proximality. We prove the statement by induction. Since $s(A_1)\neq u(A_2)$, after a suitable change of basis, the matrices $A_1$ and $A_2$ have the form $$ A_1=\begin{pmatrix} \lambda_u(A_1) & 0 \\ a & \lambda_s(A_1) \end{pmatrix}\quad\text{and}\quad A_2=\begin{pmatrix} \lambda_u(A_2) & b \\ 0 & \lambda_s(A_2) \end{pmatrix}. $$ Hence, $\tr(A_1A_2)=\lambda_u(A_1A_2)+\lambda_s(A_1A_2)=\lambda_u(A_1)\lambda_u(A_2)+\lambda_s(A_1)\lambda_s(A_2)+ab$. So $ab=0$: if $b=0$ then $s(A_1)=s(A_2)$, and if $a=0$ then $u(A_1)=u(A_2)$. 
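For the reader's convenience, the product behind the trace computation above can be written out explicitly (a routine verification): $$ A_1A_2=\begin{pmatrix} \lambda_u(A_1) & 0 \\ a & \lambda_s(A_1) \end{pmatrix}\begin{pmatrix} \lambda_u(A_2) & b \\ 0 & \lambda_s(A_2) \end{pmatrix}=\begin{pmatrix} \lambda_u(A_1)\lambda_u(A_2) & \lambda_u(A_1)b \\ a\lambda_u(A_2) & ab+\lambda_s(A_1)\lambda_s(A_2) \end{pmatrix}, $$ whose trace is indeed $\lambda_u(A_1)\lambda_u(A_2)+\lambda_s(A_1)\lambda_s(A_2)+ab$.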
Let us then assume that the first $N-1$ matrices have the property that either $u(A_i)=u(A_j)$ for all $i,j\in\{1,\ldots,N-1\}$ or $s(A_i)=s(A_j)$ for all $i,j\in\{1,\ldots,N-1\}$. We may assume without loss of generality that $u(A_i)=u(A_j)$ for all $i,j\in\{1,\ldots,N-1\}$. For a fixed $i\in\{1,\ldots,N-1\}$ the equation $\lambda_u(A_i)\lambda_u(A_N)=\lambda_u(A_iA_N)$ holds only if $u(A_i)=u(A_N)$ or $s(A_i)=s(A_N)$. If $u(A_i)=u(A_N)$ for some $i\in\{1,\dots,N-1\}$, then the proof is complete; otherwise $s(A_i)=s(A_N)$ must hold for all $i\in\{1,\dots,N-1\}$, which again implies the claimed property. \end{proof} \begin{lemma}\label{lem:eigenspace2} Let $\mathsf{A}=(A_1,\ldots,A_N)\in GL_2(\mathbb{R})^N$ be such that all the elements of $\mathsf{A}$ are proximal. The following two statements are equivalent: \begin{enumerate} \item\label{it:eig1b} $|\lambda_u(AB)|=|\lambda_u(A)\lambda_u(B)|$ for all $A,B\in\mathcal{S}(\mathsf{A})$, \item\label{it:eig2b} $u(A_i)=u(A_j)$ for all $i,j$ or $s(A_i)=s(A_j)$ for all $i,j$. \end{enumerate} \end{lemma} \begin{proof} It is again easy to see that \eqref{it:eig2b} implies \eqref{it:eig1b}. Therefore, we assume that \eqref{it:eig1b} holds. Let us first show that $\lambda_u(A_iA_j)=\lambda_u(A_i)\lambda_u(A_j)$ or $\lambda_u(A_iA_j^2)=\lambda_u(A_i)\lambda_u(A_j)^2$ for every $i\neq j$. Suppose for a contradiction that there exist $i \ne j$ such that \[ \lambda_u(A_iA_j)=-\lambda_u(A_i)\lambda_u(A_j)\quad\text{and}\quad\lambda_u(A_iA_j^2)=-\lambda_u(A_i)\lambda_u(A_j)^2. \] Hence, $\lambda_u(A_iA_j)\lambda_u(A_j)=-\lambda_u(A_i)\lambda_u(A_j)^2=\lambda_u(A_iA_j^2)$ and, by Lemma~\ref{lem:eigenspace} applied to the matrix pair $(A_iA_j,A_j)$, we have $u(A_iA_j)=u(A_j)$ or $s(A_iA_j)=s(A_j)$. Assuming $u(A_iA_j)=u(A_j)$, we have $-\lambda_u(A_i)\lambda_u(A_j)^2v(A_j)=A_iA_j^2v(A_j)=\lambda_u(A_j)^2A_iv(A_j)$, where $v(A_j)\in u(A_j)$ is a unit vector. 
But this is a contradiction since this would imply that $\lambda_u(A_i)=0$ or $\lambda_s(A_i)=-\lambda_u(A_i)$. The case $s(A_iA_j)=s(A_j)$ is similar. If $\lambda_u(A_iA_j)=\lambda_u(A_i)\lambda_u(A_j)$, then \eqref{it:eig2b} follows from Lemma~\ref{lem:eigenspace}. Similarly, if $\lambda_u(A_iA_j^2)=\lambda_u(A_i)\lambda_u(A_j)^2$, then again by Lemma~\ref{lem:eigenspace}, $u(A_j)=u(A_j^2)=u(A_i)$ or $s(A_j)=s(A_j^2)=s(A_i)$. The proof can be finished by induction similarly to the proof of Lemma~\ref{lem:eigenspace}. \end{proof} The following lemma is a simple application of \cite[Theorem~1.7(ii)--(iii)]{FengKaenmaki2011}. \begin{lemma}\label{lem:triang} Let $\mathsf{A}=(A_1,\ldots,A_N)\in GL_2(\mathbb{R})^N$ be such that $$ A_i= \begin{pmatrix} a_i & b_i \\ 0 & c_i \end{pmatrix} $$ for all $i \in \{1,\ldots,N\}$, where $a_i,b_i,c_i \in \mathbb{R}$, and let $\mu_a$ and $\mu_c$ be the Bernoulli measures obtained from the probability vectors $(\sum_{i=1}^N|a_i|^s)^{-1}(|a_1|^s,\ldots,|a_N|^s)$ and $(\sum_{i=1}^N|c_i|^s)^{-1}(|c_1|^s,\ldots,|c_N|^s)$, respectively. If $\mu$ is an ergodic equilibrium state for $\Phi^s$, then \begin{equation*} \mu \in \begin{cases} \{\mu_a\}, &\text{if } \sum_{i=1}^N|a_i|^s>\sum_{i=1}^N|c_i|^s, \\ \{\mu_c\}, &\text{if } \sum_{i=1}^N|a_i|^s<\sum_{i=1}^N|c_i|^s, \\ \{\mu_a,\mu_c\}, &\text{if } \sum_{i=1}^N|a_i|^s=\sum_{i=1}^N|c_i|^s. \end{cases} \end{equation*} \end{lemma} The following lemma is \cite[Proposition 1.2]{FengKaenmaki2011}. \begin{lemma}\label{lem:irred} If $\mathsf{A}\in GL_2(\mathbb{R})^N$ is irreducible, then there is a unique equilibrium state, which is a Gibbs-type measure for $\Phi^s$. \end{lemma} We are now ready to prove the propositions. \begin{proof}[Proof of Proposition~\ref{prop:gibbstype}] Let us first show that \eqref{it:gibbstype2} implies \eqref{it:gibbstype1}. Lemma~\ref{lem:irred} shows that if $\mathsf{A}$ is irreducible then the equilibrium state is a Gibbs-type measure for $\Phi^s$. 
Also, if $\mathsf{A}$ is strongly conformal, the conclusion is straightforward. We may thus assume that $\mathsf{A}$ is reducible with a common invariant subspace $V$ and that there exists $\varepsilon>0$ such that either the closed $\varepsilon$-neighbourhood of $V$ or the closure of its complement is an invariant unstable multicone. Note that $\SS(\mathsf{A})$ cannot contain any parabolic elements, since otherwise the neighbourhood (or the closure of its complement) could not be invariant. We may, by Proposition~\ref{prop:connect}, assume that for some $M \in \mathbb{N}$ the tuple $\mathsf{A}_{h}=(A_1,\ldots,A_M)$ has a strongly invariant multicone $\mathcal{C}$ and $\mathsf{A}_{e}=(A_{M+1},\ldots,A_N)$ is such that $A_i\mathcal{C}=\mathcal{C}$ for all $i \in \{M+1,\ldots,N\}$. Thus, either $V\in\mathcal{C}^o$ or $V\notin\mathcal{C}$. If $V\in\mathcal{C}^o$, then $u(A_i)=V$ for all $i\in\{1,\ldots,M\}$ and if $V\notin\mathcal{C}$, then $s(A_i)=V$ for all $i\in\{1,\ldots,M\}$. By the invariance of $V$ and since $\SS(\mathsf{A})$ does not contain parabolic elements, every $A\in\SS(\mathsf{A})$ is diagonalisable. So in the first case, for any $A_{i_1},\ldots,A_{i_n}\in\mathsf{A}$, $$ |\lambda_u(A_{i_1}\cdots A_{i_n})|=\|A_{i_1}\cdots A_{i_n}|V\|=\prod_{\ell=1}^n\|A_{i_\ell}|V\|=\prod_{\ell=1}^n|\lambda_u(A_{i_\ell})|. $$ In the second case similarly, $|\lambda_s(A_{i_1}\cdots A_{i_n})|=\prod_{\ell=1}^n|\lambda_s(A_{i_\ell})|$, but by the multiplicativity of the determinant $|\lambda_u(A_{i_1}\cdots A_{i_n})|=\prod_{\ell=1}^n|\lambda_u(A_{i_\ell})|$. Moreover, by Lemma~\ref{lem:Morris}, there exists a constant $C>0$ such that for every $A\in\SS(\mathsf{A})\setminus\SS(\mathsf{A}_e)$ $$ |\lambda_u(A)|\leq\|A\|\leq C|\lambda_u(A)|, $$ and $|\lambda_u(A)|=\|A\|$ for $A\in\SS(\mathsf{A}_e)$ trivially. 
Hence, the Bernoulli measure $\lambda$ obtained from the probability vector $$ \biggl(\frac{|\lambda_u(A_1)|^s}{\sum_{i=1}^N |\lambda_u(A_i)|^s},\ldots,\frac{|\lambda_u(A_N)|^s}{\sum_{i=1}^N |\lambda_u(A_i)|^s}\biggr) $$ is a $\sigma$-invariant Gibbs-type measure for $\Phi^s$. Therefore, $\mu=\lambda$. Let us then show that \eqref{it:gibbstype1} implies \eqref{it:gibbstype2}. We may assume without loss of generality that $\mathsf{A}$ is reducible with a common invariant subspace $V$. Moreover, let us assume that neither any closed $\varepsilon$-neighbourhood of $V$ nor the closure of its complement is an invariant unstable multicone. Our goal is to show that the only remaining possibility, namely that $\mathsf{A}$ is strongly conformal, holds. By reducibility, after a change of basis, every $A_{\mathtt{i}}\in\SS(\mathsf{A})$ has the form $$ A_\mathtt{i}= \begin{pmatrix} a_\mathtt{i} & b_\mathtt{i} \\ 0 & c_\mathtt{i} \end{pmatrix}, $$ where $a_{\mathtt{i}}=\prod_{k=1}^{|\mathtt{i}|}a_{i_k}$ and $c_{\mathtt{i}}=\prod_{k=1}^{|\mathtt{i}|}c_{i_k}$ with some $a_i,b_i,c_i \in \mathbb{R}$ for $i \in \{1,\ldots,N\}$. Then, by Lemma~\ref{lem:triang}, $\mu=\mu_a$ or $\mu=\mu_c$, where $\mu_a$ and $\mu_c$ are defined in the formulation of Lemma~\ref{lem:triang}. If one of the matrices, say $A_\mathtt{i}\in\SS(\mathsf{A})$, is parabolic, then $a_\mathtt{i}=c_\mathtt{i}$ and $b_\mathtt{i}\neq0$. It follows that there exists $c>0$ such that $c^{-1}n|a_\mathtt{i}|^{n-1}|b_\mathtt{i}|\leq\|A_\mathtt{i}^n\|\leq cn|a_\mathtt{i}|^{n-1}|b_\mathtt{i}|$ for all $n \in \mathbb{N}$. By \cite[Theorem~1.7(ii)]{FengKaenmaki2011}, we may assume that $P(\Phi^s) = \log\sum_{i=1}^N |a_i|^s$ and that $\mu = \mu_a$. The definition of $\mu_a$ thus implies that $$ C^{-1}\frac{|a_\mathtt{i}|^{ns}}{n^s|a_\mathtt{i}|^{s(n-1)}|b_\mathtt{i}|^s}\leq \frac{\mu([\mathtt{i}^n])}{\|A_{\mathtt{i}}^n\|^s\exp(-n|\mathtt{i}|P(\Phi^s))}\leq C\frac{|a_\mathtt{i}|^{ns}}{n^s|a_\mathtt{i}|^{s(n-1)}|b_\mathtt{i}|^s} $$ for all $n \in \mathbb{N}$. 
This is a contradiction since $\mu$ was assumed to be a Gibbs-type measure for $\Phi^s$. Thus, $\SS(\mathsf{A})$ does not contain any parabolic element. The common invariant subspace $V$ and the fact that $\SS(\mathsf{A})$ does not contain parabolic elements imply that all the matrices in $\mathsf{A}$ are diagonalisable. Since neither any $\varepsilon$-neighbourhood of $V$ nor the closure of its complement is an invariant unstable multicone, either $|a_k|=|c_k|$ and $b_k=0$ for every $k \in \{1,\ldots,N\}$ (which implies that $\mathsf{A}$ is strongly conformal) or there exist $i\neq j$ such that $|a_i|<|c_i|$ and $|a_j|>|c_j|$. If $\mu=\mu_a$, then $$ C^{-1}<\frac{\mu([i^n])}{\|A_{i}^n\|^s\exp(-nP(\Phi^s))}\leq C'\frac{|a_i|^{sn}}{|c_i|^{sn}} $$ for all $n \in \mathbb{N}$, and similarly, if $\mu=\mu_c$, then $$ C^{-1}<\frac{\mu([j^n])}{\|A_{j}^n\|^s\exp(-nP(\Phi^s))}\leq C'\frac{|c_j|^{sn}}{|a_j|^{sn}} $$ for all $n \in \mathbb{N}$. Since both inequalities lead to a contradiction, it follows that $\mathsf{A}$ must be strongly conformal. \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:bernoulli}] Let us first show that \eqref{it:bernoulli2} implies \eqref{it:bernoulli1}. If $\mathsf{A}$ is reducible, then the statement follows directly from Lemma~\ref{lem:triang}. If $\mathsf{A}$ is strongly conformal, then the statement is straightforward. Let us then show that \eqref{it:bernoulli1} implies \eqref{it:bernoulli2}. Suppose to the contrary that $\mu$ is a Bernoulli measure while $\mathsf{A}$ is irreducible and not strongly conformal. By Lemma~\ref{lem:irred}, $\mu$ is a Gibbs-type measure for $\Phi^s$, that is, there exists a constant $C>0$ such that \begin{equation}\label{eq:conttobern} C^{-1}\leq\frac{\mu([\mathtt{i}])}{\|A_{\mathtt{i}}\|^s\exp(-nP(\Phi^s))}\leq C \end{equation} for all $\mathtt{i}\in\Sigma_n$ and $n \in \mathbb{N}$. 
Since $\mu$ is a Bernoulli measure and $\mathsf{A}$ is not strongly conformal, Theorem~\ref{thm:justdomin} implies that $\mathsf{A}$ has an invariant unstable multicone $\mathcal{C}$ and $\SS(\mathsf{A})$ does not contain any parabolic element. We may, by Proposition~\ref{prop:connect}, assume that for some $M \in \mathbb{N}$ the tuple $\mathsf{A}_{h}=(A_1,\ldots,A_M)$ has a strongly invariant multicone $\mathcal{C}$ and $\mathsf{A}_{e}=(A_{M+1},\ldots,A_N)$ is strongly conformal with $A_i\mathcal{C}=\mathcal{C}$ for all $i \in \{M+1,\ldots,N\}$. By \eqref{eq:conttobern} and the Bernoulli property of $\mu$, $$ C^{-1/n}\leq\frac{\mu([\mathtt{i}])}{\|A_{\mathtt{i}}^n\|^{s/n}\exp(-|\mathtt{i}|P(\Phi^s))}\leq C^{1/n} $$ for all $\mathtt{i}\in\Sigma_*$ and $n\in \mathbb{N}$. Thus, by letting $n \to \infty$, we see that $$ |\lambda_u(A_{\mathtt{i}})|=\mu([\mathtt{i}])^{1/s}\exp(|\mathtt{i}|P(\Phi^s)/s) $$ for all $\mathtt{i}\in\Sigma_*\setminus\bigcup_{k\in\mathbb{N}}\{M+1,\ldots,N\}^k$. Since $\mu$ is a Bernoulli measure, we see that $|\lambda_u(A_{\mathtt{i}\mathtt{j}})|=|\lambda_u(A_{\mathtt{i}})\lambda_u(A_{\mathtt{j}})|$ for any two $\mathtt{i},\mathtt{j}\in\Sigma_*\setminus\bigcup_{k\in\mathbb{N}}\{M+1,\ldots,N\}^k$. Thus Lemma~\ref{lem:eigenspace2} implies that there exists a subspace $V$ such that $u(A_{\mathtt{i}})=V$ for all $\mathtt{i}\in\Sigma_*\setminus\bigcup_{k \in \mathbb{N}}\{M+1,\ldots,N\}^k$ or $s(A_{\mathtt{i}})=V$ for all $\mathtt{i}\in\Sigma_*\setminus\bigcup_{k \in \mathbb{N}}\{M+1,\ldots,N\}^k$. Without loss of generality, we may assume that we are in the first case. Since $|\lambda_u(A_\mathtt{i}^2A_j)|=|\lambda_u(A_\mathtt{i})\lambda_u(A_\mathtt{i} A_j)|$ and $|\lambda_u(A_\mathtt{i}^3A_j)|=|\lambda_u(A_\mathtt{i})^2\lambda_u(A_\mathtt{i} A_j)|$, for every $j\in\{M+1,\ldots,N\}$, we have by Lemma~\ref{lem:eigenspace} that $u(A_{\mathtt{i}}^kA_j)=u(A_{\mathtt{i}})$, where $k=1$ or $k=2$. 
Therefore $A_{\mathtt{i}}^kA_jV=A_{\mathtt{i}}^kA_ju(A_{\mathtt{i}}^kA_j)=u(A_{\mathtt{i}}^kA_j)=V=A_{\mathtt{i}}^kV$, which implies that $A_jV=V$. Thus, $V$ is an invariant subspace for $\mathsf{A}$. This contradicts the irreducibility assumption. \end{proof} Let us next prove the theorems. For the existence of the function in the statement \eqref{thm:ftvii} of Theorem~\ref{thm:fortuples} we need the following lemma. \begin{lemma}\label{lem:convmeas} Let $\mathsf{A} \subset GL_2(\mathbb{R})$ be a finite set such that $\mathsf{A}=\mathsf{A}_h\cup\mathsf{A}_e$, where $\mathsf{A}_e$ is strongly conformal and $\mathsf{A}_h\neq\emptyset$ has a strongly invariant multicone $\mathcal{C}$ such that $A\mathcal{C}=\mathcal{C}$ for all $A\in\mathsf{A}_e$. Let $m$ be the Haar measure generated by $\mathsf{A}_e$ normalised on $\mathcal{C}$. Then for every $\mathtt{i}\in\Sigma$ there exists a probability measure $\nu_{\mathtt{i}}$ on $\mathcal{C}$ such that $$ \nu_{\mathtt{i}}=\lim_{n\to\infty}(A_{\mathtt{i}|_n})_*m. $$ In particular, $(A_j)_*\nu_{\mathtt{i}} = \nu_{j\mathtt{i}}$. \end{lemma} \begin{proof} Write $\mathsf{A}_h=\{A_1,\ldots,A_M\}$ and $\mathsf{A}_e=\{A_{M+1},\ldots,A_N\}$. Let us divide $\Sigma$ into two disjoint sets \begin{align} \hat\Sigma&=\{i_1i_2\cdots\in\Sigma: i_n\in\{1,\ldots,M\}\text{ for infinitely many }n\in\mathbb{N}\},\label{eq:decombsigma1}\\ \Upsilon&=\{i_1i_2\cdots\in\Sigma: \text{ there is }n_0\in\mathbb{N}\text{ such that }i_n\in\{M+1,\ldots,N\}\text{ for all }n > n_0\}.\label{eq:decombsigma2} \end{align} Fix $\mathtt{i} \in \hat\Sigma$. By definition, $\mathtt{i}\in\hat\Sigma$ can be written as $\mathtt{i} = \mathtt{i}_1j_1\mathtt{i}_2j_2\cdots$, where $$\mathtt{i}_k \in \bigcup_{n\in\mathbb{N}}\{M+1,\ldots,N\}^n \cup \{\varnothing\}$$ and $j_k \in \{1,\ldots,M\}$ for all $k \in \mathbb{N}$. 
Thus, $A_{\mathtt{i}_kj_k}\mathcal{C}\subset\mathcal{C}^o$ for every $k\in\mathbb{N}$, and there exists a unique $V=V(\mathtt{i})\in\mathbb{RP}^1$ such that $V=\bigcap_{n=0}^{\infty}A_{\mathtt{i}_1j_1}\cdots A_{\mathtt{i}_nj_n}(\mathcal{C})$. Let $g\colon\mathbb{RP}^1\to\mathbb{R}$ be a continuous function. Since $\mathbb{RP}^1$ is compact, for every $\varepsilon>0$ there exists $r>0$ such that for every $V,W\in\mathbb{RP}^1$ with $d(V,W)<r$, $|g(V)-g(W)|<\varepsilon$. Thus, by choosing $n$ sufficiently large so that $\diam(A_{\mathtt{i}_1j_1}\cdots A_{\mathtt{i}_{n}j_{n}}(\mathcal{C}))<r$, we have $$ \biggl|\int g(V)\,\mathrm{d}(A_{\mathtt{i}|_{n}})_*m(V)-g(V(\mathtt{i}))\biggr|\leq\varepsilon. $$ Hence, $\lim_{n\to\infty}(A_{\mathtt{i}|_{n}})_*m$ exists and equals $\delta_{V(\mathtt{i})}$. On the other hand, if $\mathtt{i}\in\Upsilon$, then clearly $\lim_{n\to\infty}(A_{\mathtt{i}|_{n}})_*m=(A_{\mathtt{i}|_{k}})_*m$, where $k$ is the smallest $n_0$ satisfying the condition in \eqref{eq:decombsigma2}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:fortuples}] The equivalence of \eqref{thm:ftvvv} and \eqref{thm:fti} follows directly from Corollary~\ref{thm:justdomin2}. By Lemma~\ref{lem:irred}, the equilibrium state $\mu$ is unique and a Gibbs-type measure for $\Phi^s$. Thus, also \eqref{thm:ftv} and \eqref{thm:ftvvv} can be immediately seen to be equivalent. Let us show that \eqref{thm:ftvii} implies \eqref{thm:ftv}. Plugging \eqref{thm:ftvii} into \eqref{eq:Gibbstype}, we see that $$ C^{-1}\exp\biggl(-nP(\Phi) + s\sum_{k=0}^{n-1} f(\sigma^k\mathtt{i})\biggr) \le \mu([\mathtt{i}|_n]) \le C\exp\biggl(-nP(\Phi) + s\sum_{k=0}^{n-1} f(\sigma^k\mathtt{i})\biggr) $$ holds for every $\mathtt{i}\in\Sigma$ and $n\in\mathbb{N}$, from which the quasi-Bernoulli property clearly follows. It remains to show that \eqref{thm:fti} implies \eqref{thm:ftvii}. By Lemma~\ref{lem:convmeas}, $\nu_{\mathtt{i}}=\lim_{n\to\infty}(A_{\mathtt{i}|_{n}})_*m$ exists for every $\mathtt{i}\in\Sigma$. 
Define $f\colon\Sigma\to\mathbb{R}$ by setting $$ f(\mathtt{i})=\int\log\|A_{\mathtt{i}|_1}|V\|\,\mathrm{d}\nu_{\sigma\mathtt{i}}(V) $$ for all $\mathtt{i} \in \Sigma$. Clearly, $$ \int|f(\mathtt{i})|\,\mathrm{d}\mu(\mathtt{i})\leq\iint|\log\|A_{\mathtt{i}|_1}|V\||\,\mathrm{d}\nu_{\sigma\mathtt{i}}(V)\,\mathrm{d}\mu(\mathtt{i})\leq\iint C\,\mathrm{d}\nu_{\sigma\mathtt{i}}(V)\,\mathrm{d}\mu(\mathtt{i})=C, $$ where $C=\log\max\{\max_{i}\{\|A_{i}\|\},\max_i\{\|A_i^{-1}\|\}\}$. Let $\hat\Sigma$ and $\Upsilon$ be as in \eqref{eq:decombsigma1} and \eqref{eq:decombsigma2}, respectively. Since $\mu$ is atomless and $\Upsilon$ is countable, $\mu(\Upsilon)=0$ and $\hat\Sigma$ has full $\mu$-measure. Furthermore, every $\mathtt{i} \in \hat\Sigma$ satisfies $$ \diam(A_{\mathtt{i}|_{n}}(\mathcal{C}))\leq C\tau^{\sharp_{1}\mathtt{i}|_n+\cdots+\sharp_{M}\mathtt{i}|_n}\to0 $$ as $n\to\infty$, where $C>0$ and $\tau\in(0,1)$ are constants independent of $\mathtt{i}$. Therefore, for $\mu$-almost every $\mathtt{i}$ and for any sequence $(\mathtt{j}_n)_{n\in\mathbb{N}}$ converging to $\mathtt{i}$ and sufficiently large $n$, \begin{align*} |f(\mathtt{i})-f(\mathtt{j}_n)|&=\biggl|\int\log\|A_{\mathtt{i}|_1}|V\|\,\mathrm{d}\nu_{\sigma\mathtt{i}}(V)-\int\log\|A_{\mathtt{j}_n|_1}|V\|\,\mathrm{d}\nu_{\sigma\mathtt{j}_n}(V)\biggr|\\ &=\biggl|\log\|A_{\mathtt{i}|_1}|V(\sigma\mathtt{i})\|-\int\log\|A_{\mathtt{i}|_1}|V\|\,\mathrm{d}\nu_{\sigma\mathtt{j}_n}(V)\biggr|\\ &\leq C\dist(\delta_{V(\sigma\mathtt{i})},\nu_{\sigma\mathtt{j}_n})\leq C'\diam(A_{\sigma\mathtt{i}\wedge\sigma\mathtt{j}_n}(\mathcal{C})), \end{align*} which converges to $0$ as $n\to\infty$. Note that $$ \sum_{k=0}^{n-1}f(\sigma^k\mathtt{i})=\int\log\|A_{\mathtt{i}|_n}|V\|\,\mathrm{d}\nu_{\sigma^n\mathtt{i}}(V) $$ for every $n\in \mathbb{N}$ and $\mathtt{i}\in\Sigma$. By Lemma~\ref{lem:Morris}, there exists $\kappa>0$ such that $\|A_{\mathtt{i}|_n}\|\geq\|A_{\mathtt{i}|_n}|V\|\geq\kappa\|A_{\mathtt{i}|_n}\|$ for all $V\in\mathcal{C}$. Therefore, \eqref{thm:ftvii} follows. 
\end{proof} The following lemma, which we refer to as the three matrices lemma, is the key observation in the proof of Theorem~\ref{thm:holder}. \begin{lemma}\label{prop:key} If $\mathsf{A}=(A_1,A_2,A_3)\in GL_2(\mathbb{R})^3$ is such that $A_3=cI$ for some $c \in\mathbb{R}\setminus \{0\}$ and $(A_1,A_2)$ is irreducible and dominated, then for every H\"older continuous potential $f\colon\{1,2,3\}^{\mathbb{N}}\to\mathbb{R}$ and every $C>0$ there exist $\mathtt{i}\in\{1,2,3\}^{\mathbb{N}}$ and $n\in\mathbb{N}$ such that $$ \Biggl|\sum_{k=0}^{n-1}f(\sigma^k\mathtt{i})-\log\|A_{\mathtt{i}|_n}\|\Biggr|> C. $$ \end{lemma} The proof of the lemma takes several pages. In order not to disrupt the flow of the proofs in this section, we have postponed it to \S\ref{sec:3m}. \begin{proof}[Proof of Theorem~\ref{thm:holder}] By Lemma~\ref{lem:irred}, the equilibrium state $\mu$ is unique and a Gibbs-type measure for $\Phi^s$. Taking the potential $f$ in \eqref{it:holder3}, it is clear that $\mu$ is Gibbs for the potential $sf$. On the other hand, if $\mu$ is Gibbs for the potential $g$ then $\frac{1}{s}g$ clearly satisfies \eqref{it:holder3}. Let us show that \eqref{it:holder2} implies \eqref{it:holder3}. If $\mathsf{A}$ has a strongly invariant multicone $\mathcal{C}$, then, by e.g.\ \cite[Lemma 2.4]{BaranyRams2017}, there exist H\"older-continuous functions $V\colon\Sigma\to\mathbb{RP}^1$ and $f\colon\Sigma\to\mathbb{R}$ such that \begin{equation}\label{eq:hpot} V(\mathtt{i})=\bigcap_{n=1}^{\infty}A_{\mathtt{i}|_n}(\mathcal{C}) \quad \text{and} \quad f(\mathtt{i})=\log\|A_{\mathtt{i}|_1}|V(\sigma\mathtt{i})\| \end{equation} for all $\mathtt{i}\in\Sigma$. Moreover, by Lemma~\ref{lem:Morris}, there is a constant $C>0$ such that $$ \Biggl|\sum_{k=0}^{m-1}f(\sigma^k\mathtt{i})-\log\|A_{\mathtt{i}|_m}\|\Biggr|\leq C $$ for all $\mathtt{i}\in\Sigma$ and $m\in\mathbb{N}$. 
On the other hand, if $\mathsf{A}$ is strongly conformal then, by choosing $f(\mathtt{i})=\frac{1}{2}\log|\det(A_{\mathtt{i}|_1})|$, the claimed properties follow. It remains to show that \eqref{it:holder3} implies \eqref{it:holder2}. Let us assume to the contrary that there exist a constant $C>0$ and a H\"older-continuous function $f$ such that \begin{equation}\label{eq:tocontra2} \Biggl|\sum_{k=0}^{n-1}f(\sigma^k\mathtt{i})-\log\|A_{\mathtt{i}|_n}\|\Biggr|\leq C \end{equation} for all $\mathtt{i}\in\Sigma$ and $n\in\mathbb{N}$, while $\mathsf{A}$ does not have a strongly invariant multicone and $\mathsf{A}$ is not strongly conformal. Thus, by Theorem~\ref{thm:fortuples}, $\mathsf{A}$ can be decomposed into $\mathsf{A}_h\neq\emptyset$ and a strongly conformal set $\mathsf{A}_e\neq\emptyset$ such that $\mathsf{A}_h$ has a strongly invariant multicone $\mathcal{C}$ and $A\mathcal{C}=\mathcal{C}$ for every $A\in\mathsf{A}_e$. As usual, let $\mathsf{A}_h=\{A_1,\ldots,A_M\}$ and $\mathsf{A}_e=\{A_{M+1},\ldots,A_N\}$. The equilibrium state $\mu$ is a quasi-Bernoulli measure. Recall that, by Proposition~\ref{prop:connect}, $\{|\det(A)|^{-1/2}A:A\in\SS(\mathsf{A}_e)\}$ is finite. Hence, there exists $A_\mathtt{j}\in\SS(\mathsf{A}_e)$ such that $A_\mathtt{j}=cI$. Since $\mathsf{A}_h$ is non-empty and $\mathsf{A}$ is irreducible, $X_u(\mathsf{A})$ and $X_s(\mathsf{A})$ contain at least two points each. Then there exist four proximal matrices $A_{\mathtt{i}_1},A_{\mathtt{i}_2},A_{\mathtt{i}_3},A_{\mathtt{i}_4}\in\SS(\mathsf{A})$ such that $u(A_{\mathtt{i}_1})\neq u(A_{\mathtt{i}_2})$ and $s(A_{\mathtt{i}_3})\neq s(A_{\mathtt{i}_4})$. Taking $q>0$ sufficiently large we have that $A_{\mathtt{i}_1}^q\mathcal{C}\cap A_{\mathtt{i}_2}^q\mathcal{C}=\emptyset$ and $A_{\mathtt{i}_3}^{-q}(\mathcal{C}^o)^c\cap A_{\mathtt{i}_4}^{-q}(\mathcal{C}^o)^c=\emptyset$. 
Clearly, $u(A_{\mathtt{i}_1}^qA_{\mathtt{i}_3}^{q})\in A_{\mathtt{i}_1}^q\mathcal{C}$ and $u(A_{\mathtt{i}_2}^qA_{\mathtt{i}_4}^{q})\in A_{\mathtt{i}_2}^q\mathcal{C}$ and so $u(A_{\mathtt{i}_1}^qA_{\mathtt{i}_3}^{q})\neq u(A_{\mathtt{i}_2}^qA_{\mathtt{i}_4}^{q})$. Similarly, $s(A_{\mathtt{i}_1}^qA_{\mathtt{i}_3}^{q})\in A_{\mathtt{i}_3}^{-q}(\mathcal{C}^o)^c$ and $s(A_{\mathtt{i}_2}^qA_{\mathtt{i}_4}^{q})\in A_{\mathtt{i}_4}^{-q}(\mathcal{C}^o)^c$ and so $s(A_{\mathtt{i}_1}^qA_{\mathtt{i}_3}^{q})\neq s(A_{\mathtt{i}_2}^qA_{\mathtt{i}_4}^{q})$. Thus, $(A_{\mathtt{i}_1}^qA_{\mathtt{i}_3}^{q},A_{\mathtt{i}_2}^qA_{\mathtt{i}_4}^{q})$ is dominated and irreducible. There exist $n_1,n_2,n_3\geq1$ such that $\ell:=n_3|\mathtt{j}|=n_1q(|\mathtt{i}_1|+|\mathtt{i}_3|)=n_2q(|\mathtt{i}_2|+|\mathtt{i}_4|)$. Let us define $\Gamma = \{(\mathtt{i}_1^q\mathtt{i}_3^q)^{n_1},(\mathtt{i}_2^q\mathtt{i}_4^q)^{n_2},\mathtt{j}^{n_3}\}^\mathbb{N}$. By \eqref{eq:tocontra2}, the H\"older continuous potential $h=\sum_{j=0}^{\ell-1}f\circ\sigma^j$ satisfies $$ \Biggl|\sum_{k=0}^{m-1}h(\sigma^k\mathtt{i})-\log\|A_{\mathtt{i}|_m}\|\Biggr|\leq C $$ for all $m\in\mathbb{N}$ and $\mathtt{i}\in\Gamma$, where $\sigma$ denotes the left-shift operator on $\Gamma$. Since this contradicts Lemma~\ref{prop:key}, we have finished the proof. \end{proof} \section{The three matrices lemma}\label{sec:3m} In this section, we prove Lemma~\ref{prop:key}. Throughout the section, we assume that $\mathsf{A}=(A_1,A_2,A_3)\in GL_2(\mathbb{R})^3$ is such that $A_3=cI$ for some $c \in \mathbb{R}\setminus\{0\}$, and $(A_1,A_2)$ is irreducible and has a strongly invariant multicone $\mathcal{C}$. Note that there exists a multicone $\mathcal{C}_0 \subset \mathcal{C}^o$ such that $A_i\mathcal{C} \subset \mathcal{C}_0$ for $i=1,2$. Without loss of generality, by multiplying the matrix triple $\mathsf{A}$ by $c^{-1}$, we may assume that $c=1$. This does not affect the existence of a H\"older continuous potential. 
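To spell out why this normalisation is harmless (a short computation, not part of the original text): replacing $\mathsf{A}=(A_1,A_2,A_3)$ by $c^{-1}\mathsf{A}=(c^{-1}A_1,c^{-1}A_2,c^{-1}A_3)$ changes the norms of the products by $$ \log\|c^{-n}A_{\mathtt{i}|_n}\|=\log\|A_{\mathtt{i}|_n}\|-n\log|c|, $$ so a potential $f$ approximates $\mathtt{i}\mapsto\log\|A_{\mathtt{i}|_n}\|$ with a uniformly bounded error if and only if $f-\log|c|$ does the same for the rescaled triple.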
For simplicity, let us denote $\Sigma=\{1,2,3\}^{\mathbb{N}}$ and $\Gamma=\{1,2\}^{\mathbb{N}}$. Let the Borel $\sigma$-algebras of $\Sigma$ and $\Gamma$ be $\mathcal{B}_{\Sigma}$ and $\mathcal{B}_{\Gamma}$, respectively. As in \eqref{eq:decombsigma2}, let $\Upsilon=\bigcup_{n=0}^{\infty}\bigcup_{\mathtt{i}\in\Sigma_n}\{\mathtt{i}3^{\infty}\} \subset \Sigma$ be the countable set of infinite words whose tail consists only of $3$'s, and define $\hat\Sigma = \Sigma \setminus \Upsilon$. Notice that each $\mathtt{i}\in\hat\Sigma$ can be written in the form $\mathtt{i}=3^{k_1}i_13^{k_2}i_2\cdots$, where $k_n\in\mathbb{N}\cup\{0\}$ and $i_n\in\{1,2\}$ for all $n \in \mathbb{N}$. Relying on this representation, let us define a function $\kappa\colon \hat\Sigma\to\Gamma$ by setting $$ \kappa(3^{k_1}i_13^{k_2}i_2\cdots)=i_1i_2\cdots $$ for all $\mathtt{i}\in \hat\Sigma$. The definition of $\kappa$ can be naturally extended to $\Sigma_*$ by $\kappa(3^{k_1}i_13^{k_2}i_2\cdots i_n3^{k_{n+1}})=i_1i_2\cdots i_n$ and $\kappa(3^k)=\emptyset$, where $k_n\in\mathbb{N}\cup\{0\}$. Observe that $\kappa^{-1}(C)$ is a countable union of cylinder sets in $\Sigma$ for every cylinder set $C$ in $\Gamma$. Thus $\kappa\colon(\hat\Sigma,\mathcal{B}_{\Sigma})\to(\Gamma,\mathcal{B}_{\Gamma})$ is measurable. With a slight abuse of notation, we denote both left-shift operators on $\Sigma$ and $\Gamma$ by $\sigma$. Finally, let us observe that \begin{equation}\label{eq:invkappa} \kappa(\sigma\mathtt{i})= \begin{cases} \kappa(\mathtt{i}), & \mbox{if } \mathtt{i}|_1=3 \\ \sigma\kappa(\mathtt{i}), & \mbox{if } \mathtt{i}|_1\neq3. \end{cases} \end{equation} Let $\mu_h$ be the unique ergodic Gibbs measure on $\Gamma$ for the H\"older continuous potential $h\colon\Gamma\to\mathbb{R}$ defined by \begin{equation*} h(\mathtt{i})=\log\|A_{\mathtt{i}|_1}|V(\sigma\mathtt{i})\|, \end{equation*} where $V(\mathtt{i})=\bigcap_{n=1}^{\infty}A_{\mathtt{i}|_n}(\mathcal{C})$. 
Since \begin{equation*} \sum_{k=0}^{m-1} h(\sigma^k\mathtt{i}) = \log\|A_{\mathtt{i}|_m}|V(\sigma^m\mathtt{i})\|, \end{equation*} Lemma~\ref{lem:Morris} implies $$ \Biggl|\sum_{k=0}^{m-1}h(\sigma^k\mathtt{i})-\log\|A_{\mathtt{i}|_m}\|\Biggr|\leq C $$ for all $\mathtt{i}\in\Gamma$ and $m\in \mathbb{N}$. Let us assume to the contrary that the statement of Lemma~\ref{prop:key} fails. This means that there is a H\"older continuous potential $f\colon\Sigma\to\mathbb{R}$ and a constant $C>0$ such that \begin{equation}\label{eq:ass} \Biggl|\sum_{k=0}^{n-1}f(\sigma^k\mathtt{i})-\log\|A_{\mathtt{i}|_n}\|\Biggr|\leq C \end{equation} for all $n\in\mathbb{N}$ and $\mathtt{i}\in\Sigma$. Our goal is to show that in this case the Gibbs measure $\mu_h$ is a Bernoulli measure. By Proposition~\ref{prop:bernoulli}, as the tuple $(A_1,A_2)$ is irreducible and contains only proximal matrices, this is a contradiction. We will show this after some auxiliary lemmas. The proof of the following lemma follows easily from the definition of $\kappa$ and the domination of the tuple $(A_1,A_2)$, and we leave it to the reader. \begin{lemma}\label{lem:boundonnorm} There exists $C>0$ such that $$ \Biggl|\log\|A_{\mathtt{i}|_n}\|-\sum_{k=0}^{n-1-\sharp_3\mathtt{i}|_n}h(\sigma^k\kappa(\mathtt{i}))\Biggr|\leq C $$ for all $\mathtt{i}\in\hat\Sigma$ and $n\in\mathbb{N}$. \end{lemma} \begin{comment} \begin{proof} If $\mathtt{i}|_n$ contains an element other than $3$, then $A_{\mathtt{i}|_n}$ maps the multicone $\mathcal{C}$ into $\mathcal{C}_0$. Therefore, by Lemma~\ref{lem:Morris}, there exists a constant $C>0$ such that $$ |\log\|A_{\mathtt{i}|_n}\|-\log\|A_{\mathtt{i}|_n}|V(\kappa(\sigma^n\mathtt{i}))\||\leq C $$ for all $n\in\mathbb{N}$. Note also that if $\mathtt{i}$ contains only $3$'s, then the above inequality is a triviality.
By the definition of $V$, we have $$ \log\|A_{\mathtt{i}|_n}|V(\kappa(\sigma^n\mathtt{i}))\|=\sum_{k=0}^{n-1}\log\|A_{(\sigma^k\mathtt{i})|_1}|V(\kappa(\sigma^k\mathtt{i}))\| $$ for all $n\in\mathbb{N}$. Notice that $\log\|A_3|V\|=0$ for all $V\in\mathbb{RP}^1$. Moreover, if $(\sigma^k\mathtt{i})|_1 \neq 3$, then, by recalling \eqref{eq:invkappa}, we have $V(\kappa(\sigma^k\mathtt{i}))=V(\sigma^{k-\sharp_3\mathtt{i}|_k}\kappa(\mathtt{i}))$. Thus, $$ \sum_{k=0}^{n-1}\log\|A_{(\sigma^k\mathtt{i})|_1}|V(\kappa(\sigma^k\mathtt{i}))\|=\sum_{k=0}^{n-\sharp_3\mathtt{i}|_n}h(\sigma^k\kappa(\mathtt{i})) $$ which finishes the proof. \end{proof} \end{comment} Let $f$ be the H\"older continuous potential in \eqref{eq:ass} and let $\mu_f$ be the unique ergodic Gibbs measure for the potential $f$ on $\Sigma$. By the definition of the pressure and \eqref{eq:ass}, we have $$ P\biggl(\biggl(\sum_{k=0}^{n-1}f\circ\sigma^{k}\biggr)_n\biggr)=P((\mathtt{i}\mapsto\log\|A_{\mathtt{i}|_n}\|)_n)=\lim_{n\to\infty}\tfrac{1}{n}\log\sum_{\mathtt{i}\in\Sigma_n}\|A_{\mathtt{i}}\|. $$ Let us denote the common quantity by $Q$. Then by the definition of Gibbs measures \eqref{eq:gibbsmeas}, there exists a constant $C>0$ such that $$ C^{-1}\exp\biggl(\sum_{k=0}^{n-1}f(\sigma^k\mathtt{i})-nQ\biggr)\leq\mu_f([\mathtt{i}|_n])\leq C\exp\biggl(\sum_{k=0}^{n-1}f(\sigma^k\mathtt{i})-nQ\biggr), $$ for every $\mathtt{i}\in\Sigma$. Let us write $$ R=\lim_{n\to\infty}\tfrac{1}{n}\log\sum_{\mathtt{i}\in\Gamma_n}\|A_{\mathtt{i}}\|. $$ By a simple calculation, recalling that $A_3=I$, we see that $$ Q=\lim_{n\to\infty}\tfrac{1}{n}\log\sum_{\mathtt{i}\in\Sigma_n}\|A_{\mathtt{i}}\|=\lim_{n\to\infty}\tfrac{1}{n}\log\sum_{\ell=0}^n\binom{n}{\ell}\sum_{\mathtt{i}\in\Gamma_{\ell}}\|A_{\mathtt{i}}\|. 
$$ Since for every $\varepsilon>0$ there exists a constant $K>0$ such that $$ K^{-1}e^{(R-\varepsilon)\ell}\leq\sum_{\mathtt{i}\in\Gamma_{\ell}}\|A_{\mathtt{i}}\|\leq Ke^{(R+\varepsilon)\ell} $$ for every $\ell\in\mathbb{N}$, we see that $\log(1+e^{R-\varepsilon})\leq Q\leq \log(1+e^{R+\varepsilon})$. Since $\varepsilon>0$ was arbitrary, we get \begin{equation}\label{eq:presseq} Q=\log(1+e^{R}). \end{equation} Let us define the Perron-Frobenius operators $\mathcal{L}_f$ and $\mathcal{L}_h$ on $\Sigma$ and on $\Gamma$, respectively, for the H\"older-continuous potentials $f$ and $h$ as $$ (\mathcal{L}_f(\psi))(\mathtt{i})=\sum_{i=1}^3e^{f(i\mathtt{i})}\psi(i\mathtt{i}) \quad \text{and} \quad (\mathcal{L}_h(\phi))(\mathtt{i})=\sum_{i=1}^2e^{h(i\mathtt{i})}\phi(i\mathtt{i}). $$ By \cite[Theorem~1.7 and the proof of Theorem~1.16]{Bowen}, there exist unique functions $\psi_f\colon\Sigma\to\mathbb{R}$ and $\phi_h\colon\Gamma\to\mathbb{R}$ (i.e.\ eigenfunctions) and unique probability measures $\nu_f$ on $\Sigma$ and $\nu_h$ on $\Gamma$ (i.e.\ eigenmeasures) such that $$ \mathcal{L}_f(\psi_f)=e^{Q}\psi_f,\quad \mathcal{L}_h(\phi_h)=e^{R}\phi_h,\quad \mathcal{L}_f^*\nu_f=e^{Q}\nu_f,\quad\mathcal{L}_h^*(\nu_h)=e^{R}\nu_h, $$ and $\int \psi_f\,\mathrm{d}\nu_f=1=\int\phi_h\,\mathrm{d}\nu_h$. Moreover, by \cite[Lemmas~1.8 and 1.10]{Bowen}, the potentials $\log\psi_f$ and $\log\phi_h$ are H\"older continuous. By induction, it is easy to see that for any function $\varphi$ \[ \begin{split} (\mathcal{L}_f^n(\varphi))(\mathtt{i})&=\sum_{j_1}e^{f(j_1\mathtt{i})}(\mathcal{L}_f^{n-1}(\varphi))(j_1\mathtt{i})=\sum_{j_1,j_2}e^{f(j_1\mathtt{i})}e^{f(j_2j_1\mathtt{i})}(\mathcal{L}_f^{n-2}(\varphi))(j_2j_1\mathtt{i})\\ &=\sum_{j_1,\ldots,j_n}e^{f(j_1\mathtt{i})+\cdots+f(j_n\ldots j_1\mathtt{i})}\varphi(j_n\ldots j_1\mathtt{i})=\sum_{\mathtt{k}\in\Sigma_n}e^{\sum_{k=0}^{n-1}f(\sigma^k\mathtt{k}\mathtt{i})}\varphi(\mathtt{k}\mathtt{i}). 
\end{split} \] By \cite[Proposition~1.14]{Bowen} and the uniqueness of the ergodic Gibbs measure, \begin{equation}\label{eq:conf} \mu_f(B)=\int_B\psi_f\,\mathrm{d}\nu_f \quad \text{and} \quad \mu_h(B')=\int_{B'}\phi_h\,\mathrm{d}\nu_h \end{equation} for every $B\in\mathcal{B}_{\Sigma}$ and $B'\in\mathcal{B}_{\Gamma}$. Thus, for any $\mathtt{j}\in\Sigma_n$ and every $B\in\mathcal{B}_{\Sigma}$ \begin{equation} \label{eq:thisone} \begin{split} \mu_f([\mathtt{j}] \cap \sigma^{-|\mathtt{j}|}(B)) &= \int\psi_f(\mathtt{i})\mathds{1}_{[\mathtt{j}] \cap \sigma^{-|\mathtt{j}|}(B)}(\mathtt{i})\,\mathrm{d}\nu_f(\mathtt{i})\\ &= \int\psi_f(\mathtt{i})\mathds{1}_{[\mathtt{j}] \cap \sigma^{-|\mathtt{j}|}(B)}(\mathtt{i})e^{-nQ}\,\mathrm{d}(\mathcal{L}_f^*)^n(\nu_f)(\mathtt{i})\\ &= \int\mathcal{L}_f^n(\psi_f \mathds{1}_{[\mathtt{j}] \cap \sigma^{-|\mathtt{j}|}(B)})(\mathtt{i})e^{-nQ}\,\mathrm{d}\nu_f(\mathtt{i})\\ &=\int\sum_{\mathtt{k}\in\Sigma_n}\exp\biggl(\sum_{k=0}^{n-1}f(\sigma^k\mathtt{k}\mathtt{i})-nQ\biggr)\psi_f(\mathtt{k}\mathtt{i})\mathds{1}_{[\mathtt{j}] \cap \sigma^{-|\mathtt{j}|}(B)}(\mathtt{k}\mathtt{i})\,\mathrm{d}\nu_f(\mathtt{i})\\ &=\int_B\exp\biggl(\sum_{k=0}^{n-1}f(\sigma^k\mathtt{j}\mathtt{i})-nQ\biggr)\psi_f(\mathtt{j}\mathtt{i})\,\mathrm{d}\nu_f(\mathtt{i})\\ &=\int_B\exp\biggl(\sum_{k=0}^{n-1}f(\sigma^k\mathtt{j}\mathtt{i})-nQ\biggr)\frac{\psi_f(\mathtt{j}\mathtt{i})}{\psi_f(\mathtt{i})}\,\mathrm{d}\mu_f(\mathtt{i}). \end{split} \end{equation} Let $\hat f(\mathtt{i})=f(\mathtt{i})+\log\psi_f(\mathtt{i})-\log\psi_f(\sigma\mathtt{i})$. Since $\log\psi_f$ is H\"older continuous and thus uniformly bounded over $\Sigma$, there exists $C>0$ such that $$ \Biggl|\sum_{k=0}^{n-1}\hat f(\sigma^k\mathtt{i})-\log\|A_{\mathtt{i}|_n}\|\Biggr|\leq C $$ for all $n\in\mathbb{N}$ and $\mathtt{i}\in\Sigma$.
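The pressure identity $Q=\log(1+e^R)$ of \eqref{eq:presseq} is, at its core, the binomial theorem: if $\sum_{\mathtt{i}\in\Gamma_\ell}\|A_{\mathtt{i}}\|$ is replaced by its growth rate $e^{R\ell}$, then $\sum_{\ell=0}^n\binom{n}{\ell}e^{R\ell}=(1+e^R)^n$. A quick numerical check of this reduction (the value of $R$ is an arbitrary illustration of our own):

```python
import math

R = 0.7   # arbitrary illustrative growth rate
n = 200

# Model sum_{i in Gamma_l} ||A_i|| by e^{R*l}; the sum over Sigma_n then
# becomes sum_l C(n,l) e^{R*l} = (1 + e^R)^n by the binomial theorem.
S = sum(math.comb(n, l) * math.exp(R * l) for l in range(n + 1))
Q = math.log(S) / n
assert abs(Q - math.log(1 + math.exp(R))) < 1e-9
```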
By \eqref{eq:thisone}, \begin{equation}\label{eq:thison2} \mu_{f}([\mathtt{i}] \cap \sigma^{-|\mathtt{i}|}(B))=\int_{B}\exp\left(\sum_{k=0}^{|\mathtt{i}|-1}\hat f(\sigma^k\mathtt{i}\mathtt{j})-|\mathtt{i}|Q\right)\,\mathrm{d}\mu_{f}(\mathtt{j}) \end{equation} for all $\mathtt{i}\in\Sigma_*$ and $B\in\mathcal{B}_{\Sigma}$. Let us denote the ratio $(1+e^R)^{-1}$ by $q$. Define $$ \eta([\mathtt{i}])=q^{\sharp_3\mathtt{i}}(1-q)^{|\mathtt{i}|-\sharp_3\mathtt{i}}\mu_h([\kappa(\mathtt{i})]) $$ for all $\mathtt{i}\in\Sigma_*$ and notice that \begin{equation}\label{eq:toeta} \begin{split} \sum_{i=1}^3\eta([\mathtt{i} i])&=\sum_{i=1}^2q^{\sharp_3\mathtt{i}}(1-q)^{|\mathtt{i}|+1-\sharp_3\mathtt{i}}\mu_h([\kappa(\mathtt{i} i)])+q^{\sharp_3\mathtt{i}+1}(1-q)^{|\mathtt{i}|-\sharp_3\mathtt{i}}\mu_h([\kappa(\mathtt{i})])\\ &=q^{\sharp_3\mathtt{i}}(1-q)^{|\mathtt{i}|+1-\sharp_3\mathtt{i}}\sum_{i=1}^2\mu_h([\kappa(\mathtt{i})i])+q^{\sharp_3\mathtt{i}+1}(1-q)^{|\mathtt{i}|-\sharp_3\mathtt{i}}\mu_h([\kappa(\mathtt{i})])\\ &=q^{\sharp_3\mathtt{i}}(1-q)^{|\mathtt{i}|-\sharp_3\mathtt{i}}\mu_h([\kappa(\mathtt{i})])(1-q+q)=\eta([\mathtt{i}]) \end{split} \end{equation} for all $\mathtt{i}\in\Sigma_*$. Thus, by Kolmogorov's extension theorem, $\eta$ can be extended to a probability measure on $(\Sigma,\mathcal{B}_{\Sigma})$. We shall denote the extension by $\eta$ too. The following lemma shows that $\eta$ is mixing and hence ergodic. \begin{lemma}\label{lem:propeta} The measure $\eta$ is $\sigma$-invariant and mixing on $\Sigma$. \end{lemma} \begin{proof} Since $\mu_h$ is $\sigma$-invariant, the proof of $\sigma$-invariance of $\eta$ is similar to \eqref{eq:toeta}, and therefore, we omit it. To prove that $\eta$ is mixing, it is sufficient to show that $$ \lim_{n\to\infty}\eta([\mathtt{i}]\cap\sigma^{-n}[\mathtt{j}])=\eta([\mathtt{i}])\eta([\mathtt{j}]) $$ for all $\mathtt{i},\mathtt{j}\in\Sigma_*$.
Let $n>|\mathtt{i}|$ and observe that \begin{align*} \eta([\mathtt{i}]\cap\sigma^{-n}[\mathtt{j}])&=\sum_{\mathtt{h}\in\Sigma_{n-|\mathtt{i}|}}\eta([\mathtt{i}\mathtt{h}\mathtt{j}])=\sum_{\mathtt{h}\in\Sigma_{n-|\mathtt{i}|}}q^{\sharp_3\mathtt{i}+\sharp_3\mathtt{j}+\sharp_3\mathtt{h}}(1-q)^{|\mathtt{i}|-\sharp_3\mathtt{i}+|\mathtt{j}|-\sharp_3\mathtt{j}+|\mathtt{h}|-\sharp_3\mathtt{h}}\mu_h([\kappa(\mathtt{i}\mathtt{h}\mathtt{j})])\\ &=q^{\sharp_3\mathtt{i}+\sharp_3\mathtt{j}}(1-q)^{|\mathtt{i}|-\sharp_3\mathtt{i}+|\mathtt{j}|-\sharp_3\mathtt{j}}\sum_{\mathtt{h}\in\Sigma_{n-|\mathtt{i}|}}q^{\sharp_3\mathtt{h}}(1-q)^{|\mathtt{h}|-\sharp_3\mathtt{h}}\mu_h([\kappa(\mathtt{i})\kappa(\mathtt{h})\kappa(\mathtt{j})])\\ &=q^{\sharp_3\mathtt{i}+\sharp_3\mathtt{j}}(1-q)^{|\mathtt{i}|-\sharp_3\mathtt{i}+|\mathtt{j}|-\sharp_3\mathtt{j}}\sum_{\ell=0}^{n-|\mathtt{i}|}\binom{n-|\mathtt{i}|}{\ell}q^{n-|\mathtt{i}|-\ell}(1-q)^{\ell}\sum_{\mathtt{k}\in\Gamma_{\ell}}\mu_h([\kappa(\mathtt{i})\mathtt{k}\kappa(\mathtt{j})]). \end{align*} Hence, \begin{equation}\label{eq:refasked} \frac{\eta([\mathtt{i}]\cap\sigma^{-n}[\mathtt{j}])}{\eta([\mathtt{i}])\eta([\mathtt{j}])}=\sum_{\ell=0}^{n-|\mathtt{i}|}\binom{n-|\mathtt{i}|}{\ell}q^{n-|\mathtt{i}|-\ell}(1-q)^{\ell}\frac{\mu_h([\kappa(\mathtt{i})]\cap\sigma^{-\ell-|\kappa(\mathtt{i})|}[\kappa(\mathtt{j})])}{\mu_h([\kappa(\mathtt{i})])\mu_h([\kappa(\mathtt{j})])}. \end{equation} By \cite[Proposition~1.14]{Bowen}, the measure $\mu_h$ is mixing. Thus, for every $\varepsilon>0$ there exists $N$ such that if $\ell\geq N$, then $$ e^{-\varepsilon}\leq\frac{\mu_h([\kappa(\mathtt{i})]\cap\sigma^{-\ell-|\kappa(\mathtt{i})|}[\kappa(\mathtt{j})])}{\mu_h([\kappa(\mathtt{i})])\mu_h([\kappa(\mathtt{j})])}\leq e^{\varepsilon}.
$$ Hence, for $n>N+|\mathtt{i}|$ we get \begin{align*} \sum_{\ell=0}^{n-|\mathtt{i}|}&\binom{n-|\mathtt{i}|}{\ell}q^{n-|\mathtt{i}|-\ell}(1-q)^{\ell}\frac{\mu_h([\kappa(\mathtt{i})]\cap\sigma^{-\ell-|\kappa(\mathtt{i})|}[\kappa(\mathtt{j})])}{\mu_h([\kappa(\mathtt{i})])\mu_h([\kappa(\mathtt{j})])}\\ &\leq e^{\varepsilon}\sum_{\ell=N}^{n-|\mathtt{i}|}\binom{n-|\mathtt{i}|}{\ell}q^{n-|\mathtt{i}|-\ell}(1-q)^{\ell}+\mu_h([\kappa(\mathtt{j})])^{-1}\sum_{\ell=0}^{N-1}\binom{n-|\mathtt{i}|}{\ell}q^{n-|\mathtt{i}|-\ell}(1-q)^{\ell}\\ &\leq e^{\varepsilon}+\mu_h([\kappa(\mathtt{j})])^{-1}N(n-|\mathtt{i}|)^N(1-q)^{n-|\mathtt{i}|-N}, \end{align*} where in the last inequality we used $\binom{n}{k} \leq n^k$. By a similar argument, \begin{align*} \sum_{\ell=0}^{n-|\mathtt{i}|}\binom{n-|\mathtt{i}|}{\ell}&q^{n-|\mathtt{i}|-\ell}(1-q)^{\ell}\frac{\mu_h([\kappa(\mathtt{i})]\cap\sigma^{-\ell-|\kappa(\mathtt{i})|}[\kappa(\mathtt{j})])}{\mu_h([\kappa(\mathtt{i})])\mu_h([\kappa(\mathtt{j})])}\\ &\geq e^{-\varepsilon}\sum_{\ell=N}^{n-|\mathtt{i}|}\binom{n-|\mathtt{i}|}{\ell}q^{n-|\mathtt{i}|-\ell}(1-q)^{\ell}\geq e^{-\varepsilon}-N(n-|\mathtt{i}|)^N(1-q)^{n-|\mathtt{i}|-N}. \end{align*} By \eqref{eq:refasked} and letting $n\to\infty$, we see that for every $\varepsilon>0$ \begin{equation*} e^{-\varepsilon}\leq\lim_{n\to\infty}\frac{\eta([\mathtt{i}]\cap\sigma^{-n}[\mathtt{j}])}{\eta([\mathtt{i}])\eta([\mathtt{j}])}\leq e^{\varepsilon}. \end{equation*} Since $\varepsilon>0$ was arbitrary, we conclude that $\lim_{n\to\infty}\eta([\mathtt{i}]\cap\sigma^{-n}[\mathtt{j}])=\eta([\mathtt{i}])\eta([\mathtt{j}])$, which finishes the proof. \end{proof} \begin{proposition}\label{prop:etaismuf} If $\eta$ and $\mu_f$ are as above, then $\eta=\mu_f$. \end{proposition} \begin{proof} Since both $\eta$ and $\mu_f$ are ergodic measures, it suffices to show that they are equivalent.
By Lemma~\ref{lem:boundonnorm} and our assumption \eqref{eq:ass} on $f$, \begin{align*} \eta([\mathtt{i}])&=q^{\sharp_3\mathtt{i}}(1-q)^{|\mathtt{i}|-\sharp_3\mathtt{i}}\mu_h([\kappa(\mathtt{i})])\\ &\leq Cq^{\sharp_3\mathtt{i}}(1-q)^{|\mathtt{i}|-\sharp_3\mathtt{i}}\exp\biggl(\sum_{k=0}^{|\kappa(\mathtt{i})|-1}h(\sigma^k\kappa(\mathtt{i})\mathtt{j})-|\kappa(\mathtt{i})|R\biggr)\\ &\leq C'(1-q)^{|\mathtt{i}|-\sharp_3\mathtt{i}}\|A_{\mathtt{i}}\|\exp\bigl(-(|\mathtt{i}|-\sharp_3\mathtt{i})R-\sharp_3\mathtt{i}\log(1+e^R)\bigr)\\ &=C'\|A_{\mathtt{i}}\|\exp(-|\mathtt{i}|\log(1+e^R))\\ &\leq C''\exp\biggl(\sum_{k=0}^{|\mathtt{i}|-1}f(\sigma^k\mathtt{i}\mathtt{j}')-|\mathtt{i}|\log(1+e^R)\biggr)\leq C'''\mu_f([\mathtt{i}]) \end{align*} for all $\mathtt{i}\in\Sigma_*$. The other inequality follows by a similar argument. Since for every cylinder set $[\mathtt{i}]$, the ratio $\eta([\mathtt{i}])/\mu_f([\mathtt{i}])$ is bounded away from $0$ and $\infty$ uniformly, the statement follows. \end{proof} By \eqref{eq:thison2} and \eqref{eq:presseq}, \begin{equation*} \mu_{f}([i] \cap \sigma^{-1}(B))=\int_{B}\exp(\hat f(i\mathtt{i})-Q)\,\mathrm{d}\mu_{f}(\mathtt{i})=\frac{1}{1+e^R}\int_{B}\exp(\hat f(i\mathtt{i}))\,\mathrm{d}\mu_{f}(\mathtt{i}) \end{equation*} for all $i\in\{1,2,3\}$ and $B\in\mathcal{B}_{\Sigma}$. By Proposition~\ref{prop:etaismuf} and recalling the definition of $\eta$, we have \begin{equation*} \mu_{f}([3] \cap \sigma^{-1}(B))=\eta([3] \cap \sigma^{-1}(B))=\frac{1}{1+e^{R}}\eta(B)=\frac{1}{1+e^{R}}\mu_{f}(B). \end{equation*} Since $\hat f$ is H\"older continuous and the above two equations hold for every $B\in\mathcal{B}_{\Sigma}$, we conclude that \begin{equation}\label{eq:f'piecewise} \hat f(\mathtt{i})=0 \end{equation} for all $\mathtt{i}\in[3]$. 
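The cancellation in the chain of inequalities above comes from an elementary identity for $q=(1+e^R)^{-1}$: since $1-q=e^R(1+e^R)^{-1}$, one has $q^{\sharp_3\mathtt{i}}(1-q)^{|\mathtt{i}|-\sharp_3\mathtt{i}}e^{-(|\mathtt{i}|-\sharp_3\mathtt{i})R}=(1+e^R)^{-|\mathtt{i}|}=e^{-|\mathtt{i}|Q}$. A quick numerical check (all values chosen arbitrarily by us for illustration):

```python
import math

R = 1.3                        # arbitrary illustrative value
q = 1.0 / (1.0 + math.exp(R))  # q = (1 + e^R)^{-1} = e^{-Q}

n, n3 = 17, 5  # |i| and the number of 3's in i, chosen arbitrarily

lhs = q**n3 * (1 - q)**(n - n3) * math.exp(-(n - n3) * R)
rhs = (1.0 + math.exp(R))**(-n)  # (1 + e^R)^{-|i|} = e^{-|i|Q}
assert math.isclose(lhs, rhs, rel_tol=1e-12)
```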
By \eqref{eq:thison2}, we have \begin{align*} \mu_f([i_13^{k_1}\cdots i_n3^{k_n}])&=\int \exp\biggl(\sum_{\ell=0}^{k_1+\cdots+k_n+n-1}\hat f(\sigma^{\ell}i_13^{k_1}\cdots i_n3^{k_n}\mathtt{j})-(k_1+\cdots+k_n+n)Q\biggr)\,\mathrm{d}\mu_f(\mathtt{j})\\ &=q^{k_1+\cdots+k_n}\int \exp(\hat f(i_13^{k_1}\cdots\mathtt{j})+\cdots+\hat f(i_n3^{k_n}\mathtt{j})-nQ)\,\mathrm{d}\mu_f(\mathtt{j}) \end{align*} for every $k_1,\ldots,k_n\in\mathbb{N}$, $i_1,\dots,i_n\in\{1,2\}$, and $n\in\mathbb{N}$. Since $\hat f$ is H\"older continuous, we have $$ \lim_{k_{1},\ldots,k_n\to\infty}\hat f(i_{\ell}3^{k_{\ell}}\cdots i_n3^{k_n}\mathtt{j})=\hat f(i_{\ell}3^{\infty}) $$ uniformly for all $\ell\in\{1,\ldots,n\}$ and $\mathtt{j}\in\Sigma$. Hence, by the dominated convergence theorem, $$ \lim_{k_{1},\ldots,k_n\to\infty}\frac{\mu_f([i_13^{k_1}\cdots i_n3^{k_n}])}{q^{k_1+\cdots+k_n}}=\prod_{\ell=1}^ne^{\hat f(i_{\ell}3^{\infty})-Q}. $$ On the other hand, by the definition of $\eta$ and Proposition~\ref{prop:etaismuf}, $$ \frac{\mu_f([i_13^{k_1}\cdots i_n3^{k_n}])}{q^{k_1+\cdots+k_n}}=(1-q)^{n}\mu_h([i_1\cdots i_n]). $$ It follows that $$ \mu_h([i_1\cdots i_n])=\prod_{\ell=1}^ne^{\hat f(i_{\ell}3^{\infty})-R} $$ and hence, $\mu_h$ is a Bernoulli measure. This contradicts Proposition~\ref{prop:bernoulli} and finishes the proof of Lemma~\ref{prop:key}. \begin{acknowledgements} Bal\'azs B\'ar\'any acknowledges support from the grants NKFI PD123970, OTKA K123782, and the J\'anos Bolyai Research Scholarship of the Hungarian Academy of Sciences. Antti K\"aenm\"aki was supported by the Finnish Center of Excellence in Analysis and Dynamics Research. Ian Morris was supported by the Leverhulme Trust (Research Project Grant number RPG-2016-194). All the authors were partially supported by the ERC grant 306494. The research was started at the Hebrew University of Jerusalem. The authors thank the HUJI, and especially Professor Michael Hochman, for warm hospitality.
B\'ar\'any and K\"aenm\"aki also thank the Institut Mittag-Leffler, where the paper was finished. Finally, the authors thank the anonymous referee for the careful reading and valuable comments which improved the paper. \end{acknowledgements}
https://arxiv.org/abs/1802.01916
Domination, almost additivity, and thermodynamic formalism for planar matrix cocycles
In topics such as the thermodynamic formalism of linear cocycles, the dimension theory of self-affine sets, and the theory of random matrix products, it has often been found useful to assume positivity of the matrix entries in order to simplify or make feasible certain types of calculation. It is natural to ask how positivity may be relaxed or generalised in a way which enables similar calculations to be made in more general contexts. On the one hand one may generalise by considering almost additive or asymptotically additive potentials which mimic the properties enjoyed by the logarithm of the norm of a positive matrix cocycle; on the other hand one may consider matrix cocycles which are dominated, a condition which includes positive matrix cocycles but is more general. In this article we explore the relationship between almost additivity and domination for planar cocycles. We show in particular that a locally constant linear cocycle in the plane is almost additive if and only if it is either conjugate to a cocycle of isometries, or satisfies a property slightly weaker than domination which is introduced in this paper. Applications to matrix thermodynamic formalism are presented.
https://arxiv.org/abs/1611.02911
Maximising $H$-Colourings of Graphs
For graphs $G$ and $H$, an $H$-colouring of $G$ is a map $\psi:V(G)\rightarrow V(H)$ such that $ij\in E(G)\Rightarrow\psi(i)\psi(j)\in E(H)$. The number of $H$-colourings of $G$ is denoted by $\hom(G,H)$. We prove the following: for all graphs $H$ and $\delta\geq3$, there is a constant $\kappa(\delta,H)$ such that, if $n\geq\kappa(\delta,H)$, the graph $K_{\delta,n-\delta}$ maximises the number of $H$-colourings among all connected graphs with $n$ vertices and minimum degree $\delta$. This answers a question of Engbers. We also disprove a conjecture of Engbers on the graph $G$ that maximises the number of $H$-colourings when the assumption of the connectivity of $G$ is dropped. Finally, let $H$ be a graph with maximum degree $k$. We show that, if $H$ does not contain the complete looped graph on $k$ vertices or $K_{k,k}$ as a component and $\delta\geq\delta_0(H)$, then the following holds: for $n$ sufficiently large, the graph $K_{\delta,n-\delta}$ maximises the number of $H$-colourings among all graphs on $n$ vertices with minimum degree $\delta$. This partially answers another question of Engbers.
\section{Introduction} Let $G$ be a simple, loopless graph and let $H$ be a simple graph, possibly with loops. A \textit{graph homomorphism} from $G$ to $H$ is a map $\psi:V(G)\rightarrow V(H)$ such that $ij\in E(G)\Rightarrow\psi(i)\psi(j)\in E(H)$. An \textit{$H$-colouring} of $G$ is a graph homomorphism from $G$ to $H$. We denote by $\hom(G,H)$ the number of $H$-colourings of $G$. Given a class of graphs $\mathcal{G}$ and a fixed graph $H$, it is natural to ask which $G\in\mathcal{G}$ maximises $\hom(G,H)$. Various classes of graphs have been considered (see Cutler \cite{C12} for a survey). For instance, a number of authors, such as Galvin \cite{G13}, have studied the class of all $\delta$-regular graphs for fixed $\delta$; others, including Loh, Pikhurko and Sudakov \cite{LPS10}, have investigated the class of all graphs with $n$ vertices and $m$ edges. In this paper, we consider the class of all graphs with minimum degree at least $\delta$. This class was studied by Engbers \cite{E15,E16} who raised a number of questions and conjectures. We will answer two of these and provide a partial answer to a third. In Section \ref{sec:connectedgraph}, we consider the case when $\mathcal{G}$ is the set of all \textit{connected} graphs on $n$ vertices with minimum degree at least $\delta$. For this $\mathcal{G}$ and any non-regular graph $H$, Engbers \cite{E16} showed that, for any fixed $\delta\geq2$ and $n$ sufficiently large, $\hom(G,H)$ is maximised uniquely by $G=K_{\delta,n-\delta}$. In this paper, we will extend this result by showing that it holds for all graphs $H$. This answers a question posed by Engbers \cite{E16}. An $H$-colouring of $G$ requires that each component of $G$ is mapped to a component of $H$. As we are only considering connected graphs $G$, each $H$-colouring of $G$ maps $G$ to a single component of $H$. We therefore begin with the case when $H$ is connected. 
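For small instances, $\hom(G,H)$ can be computed by brute force straight from the definition, which is a convenient way to sanity-check the counting identities used in this paper (e.g.\ proper $q$-colourings are exactly $K_q$-colourings). A self-contained sketch of our own, exponential in $|V(G)|$ and useful only for tiny graphs:

```python
from itertools import product

def hom(G_vertices, G_edges, H_adj):
    """Count homomorphisms G -> H by brute force.

    H_adj[u] is the set of H-neighbours of u; a loop at u means u in H_adj[u].
    """
    count = 0
    for psi in product(list(H_adj), repeat=len(G_vertices)):
        colour = dict(zip(G_vertices, psi))
        if all(colour[j] in H_adj[colour[i]] for i, j in G_edges):
            count += 1
    return count

# hom(C_4, K_3) counts proper 3-colourings of the 4-cycle; the chromatic
# polynomial of C_n gives (q-1)^n + (-1)^n (q-1) = 2^4 + 2 = 18 at q = 3.
C4 = ([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)])
K3 = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b'}}
assert hom(*C4, K3) == 18
```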
\begin{theorem} \label{thm:connectedgraph} For every $\delta\geq3$ and every connected graph $H$, there exists a constant $\kappa(\delta,H)$ such that the following holds: if $n\geq\kappa(\delta,H)$ and $G$ is a connected graph on $n$ vertices with minimum degree at least $\delta$, then we have $\hom(G,H)\leq\hom(K_{\delta,n-\delta},H)$. Further, if $H$ is not a complete looped graph or a complete balanced bipartite graph, we have equality if and only if $G=K_{\delta,n-\delta}$. \end{theorem} \noindent Extending this result to all graphs $H$ follows as an easy corollary. If $H$ has $h$ components $H_1,\dots H_h$, then $\hom(G,H)=\hom(G,H_1)+\dots+\hom(G,H_h)$ because $G$ is a connected graph. For $n$ sufficiently large, $G=K_{\delta,n-\delta}$ maximises $\hom(G,H_i)$ for each component $H_i$ and so $G=K_{\delta,n-\delta}$ also maximises $\hom(G,H)$. \begin{corollary} \label{cor:connected} For every $\delta\geq3$ and every graph $H$, there exists a constant $\kappa(\delta,H)$ such that the following holds: if $n\geq\kappa(\delta,H)$ and $G$ is a connected graph on $n$ vertices with minimum degree at least $\delta$, then we have $\hom(G,H)\leq\hom(K_{\delta,n-\delta},H)$. Further, if $H$ has a component which is neither a complete looped graph nor a complete balanced bipartite graph, we have equality if and only if $G=K_{\delta,n-\delta}$. \end{corollary} \noindent We may identify a proper $q$-colouring of a graph $G$ with a graph homomorphism from $G$ into $K_q$. Therefore, counting the number of proper $q$-colourings of $G$ corresponds to counting the number of proper graph homomorphisms from $G$ into $K_q$. As $K_q$ is a connected graph, the following corollary also follows immediately from Theorem \ref{thm:connectedgraph}. This answers another question posed by Engbers \cite{E16}. \begin{corollary} Fix $\delta\geq3$ and $q>2$. 
Then, for $n$ sufficiently large, $K_{\delta,n-\delta}$ uniquely maximises the number of proper $q$-colourings amongst all connected graphs on $n$ vertices with minimum degree at least $\delta$. \end{corollary} A natural extension to Corollary \ref{cor:connected} is to allow $G$ to have more than one component. Here the picture is less complete. If $H$ is the graph consisting of a single edge with one of the vertices looped, then counting the number of $H$-colourings of a graph $G$ is equivalent to counting the number of independent sets in $G$. Extending previous work on this topic, Cutler and Radcliffe \cite{CR14} gave complete results for all values of $n$ and $\delta$. In particular, if $n\geq2\delta$, then $K_{\delta,n-\delta}$ is the unique graph which maximises $\hom(G,H)$. Galvin \cite{G13} conjectured that, for any $H$, if $G$ is a $\delta$-regular graph on $n$ vertices, then $\hom(G,H)\leq\max\{\hom(K_{\delta,\delta},H)^{n/2\delta},\hom(K_{\delta+1},H)^{n/(\delta+1)}\}$. If this were true, it would mean that, whenever $2\delta(\delta+1)|n$, the $\delta$-regular graph on $n$ vertices which maximises the number of $H$-colourings is either $\frac n{2\delta}K_{\delta,\delta}$ or $\frac n{\delta+1}K_{\delta+1}$. Galvin's conjecture was shown to be false by Sernau \cite{S16}. He produced an infinite family of counterexamples as follows: fix $\delta$ and any simple loopless graph $H$ with no $(\delta+1)$-clique. Take any connected $\delta$-regular graph $G$ on $n<2\delta$ vertices with $\hom(G,H)>0$. He proved that there existed $k\in\mathbb{N}$ such that $\hom(G,kH)>\max\{\hom(K_{\delta+1},kH)^{n/(\delta+1)},\hom(K_{\delta,\delta},kH)^{n/2\delta}\}$ and hence that Galvin's conjecture was false. Engbers \cite{E15} considered a similar question to Galvin but only when the order of $G$ was sufficiently large. He asked which graph on $n$ vertices with minimum degree $\delta$ maximises the number of $H$-colourings as the value of $n$ increases.
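The independent-set reformulation mentioned above can be verified directly: if $H$ is a single edge with a loop on one endpoint, then in any $H$-colouring the preimage of the unlooped vertex is an independent set, and conversely, so $\hom(G,H)$ equals the number of independent sets of $G$. A self-contained brute-force sketch (our own illustration, not code from \cite{CR14}):

```python
from itertools import product

def hom(G_vertices, G_edges, H_adj):
    """Count homomorphisms G -> H by brute force (loops: u in H_adj[u])."""
    count = 0
    for psi in product(list(H_adj), repeat=len(G_vertices)):
        colour = dict(zip(G_vertices, psi))
        if all(colour[j] in H_adj[colour[i]] for i, j in G_edges):
            count += 1
    return count

# H = looped vertex 'a' joined to unlooped vertex 'b'; the preimage of 'b'
# must be independent, so hom(G, H) = number of independent sets of G.
H_ind = {'a': {'a', 'b'}, 'b': {'a'}}
C4 = ([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)])
assert hom(*C4, H_ind) == 7  # the 4-cycle has exactly 7 independent sets
```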
For general $H$ and $\delta=1$ or $\delta=2$, Engbers showed that $\hom(G,H)$ is maximised by one of $\frac{n}{\delta+1}K_{\delta+1}$, $\frac{n}{2\delta}K_{\delta,\delta}$ or $K_{\delta,n-\delta}$ (where the graph that maximises $\hom(G,H)$ depends on the structure of $H$). These results led him to make the following conjecture. \begin{conjecture} [\cite{E15}] \label{conj:main} Fix $\delta\geq1$ and any graph $H$. Let $G$ be a graph on $n$ vertices with minimum degree at least $\delta$. There exists a constant $c(\delta,H)$ such that, for $n\geq c(\delta,H)$, we have \[ \hom(G,H)\leq\max\big\{\hom(K_{\delta+1},H)^{\frac n{\delta+1}},\hom(K_{\delta,\delta},H)^{\frac n{2\delta}},\hom(K_{\delta,n-\delta},H)\big\}. \] \end{conjecture} \noindent In Section \ref{sec:counterexample}, we will use ideas similar to Sernau's to construct counterexamples to Conjecture \ref{conj:main} whenever $\delta\geq3$. On the other hand, we can show that Conjecture \ref{conj:main} does hold in certain circumstances. In Section \ref{sec:largedelta}, we will consider the case when the graph $H$ is fixed and $\delta$ and $n$ are sufficiently large. In particular, for each $k\in\mathbb{N}$, we consider the family $\mathcal{H}_k$ of all graphs with maximum degree $k$ that do not contain the complete looped graph on $k$ vertices or $K_{k,k}$ as a component. We will prove the following theorem. \begin{theorem} \label{thm:largedelta} Fix any $k\in\mathbb{N}$. For every graph $H\in\mathcal{H}_k$ and every $\delta\geq\delta_0(H)$, the following holds: there exists a constant $n_0(\delta,H)$ such that, if $n\geq n_0(\delta,H)$ and $G$ is a graph on $n$ vertices with minimum degree $\delta$, then $\hom(G,H)\leq\hom(K_{\delta,n-\delta},H)$. Equality holds if and only if $G=K_{\delta,n-\delta}$.
\end{theorem} \noindent The graph $K_{\delta,n-\delta}$ need not maximise the number of $H$-colourings if $H$ has maximum degree $k$ and contains either the complete looped graph on $k$ vertices or $K_{k,k}$ as a component (i.e. $H\notin\mathcal{H}_k$). This is discussed in more detail in Section \ref{sec:conclusion}. \begin{convention} Throughout this paper, $G$ will be a simple graph without loops. We will adopt the same convention for vertex degrees as Engbers \cite{E16}: for any vertex $v\in V(H)$, we define $d(v)=|\{w\in V(H):vw\in E(H)\}|$. In particular, adding a loop to a vertex in $H$ increases the degree by one. \end{convention} \section{Proof of Theorem \ref{thm:connectedgraph}} \label{sec:connectedgraph} The following definition was introduced by Engbers \cite{E15}. We will use it in the proof of Theorem \ref{thm:connectedgraph} as well as in Section \ref{sec:largedelta}. \begin{definition} For any graph $H$ with maximum degree $k$ and $\delta\geq1$, we define $S(\delta,H)$ to be the set of vectors in $V(H)^{\delta}$ such that the elements of the vector have $k$ neighbours in common. We define $s(\delta,H)=|S(\delta,H)|$. As $H$ has at least one vertex of degree $k$, we have $s(\delta,H)\geq1$. \end{definition} \noindent We will need the following theorem of Erd\H{o}s and P\'{o}sa. \begin{theorem}[\cite{D10}] \label{thm:erdos-posa} There is a function $f:\mathbb{N}\rightarrow\mathbb{R}$ such that, given any $d\in\mathbb{N}$, every graph contains either $d$ disjoint cycles or a set of at most $f(d)$ vertices meeting all its cycles. \end{theorem} \noindent We will frequently use the following lemma of Engbers. \begin{lemma}[\cite{E15}] \label{lem:pathcounting} Suppose $H$ is not the complete looped graph on $k$ vertices or $K_{k,k}$. Then, for any two vertices $i$, $j$ of $H$ and for $r\geq4$, there are at most $(k^2-1)k^{r-4}$ $H$-colourings of $P_r$ that map the initial vertex of that path to $i$ and the terminal vertex to $j$. 
\end{lemma} \noindent We will also need the following simple observation. \begin{proposition} \label{prop:colouring} Let $G$ and $H$ be graphs with $G$ connected and $X\subseteq V(G)$. Suppose the vertices of $X$ have already been mapped to vertices of $H$. The remaining vertices of $G$ can be mapped into $V(H)$ in such a way that there are at most $\Delta(H)$ choices for each vertex of $V(G)\backslash X$. \end{proposition} \begin{proof} Because $G$ is connected, there is a path from each vertex of $V(G)\backslash X$ to $X$. We order the vertices of $V(G)\backslash X$ by increasing distance from $X$. Each vertex $v\in V(G)\backslash X$ either has a neighbour in $X$ or a neighbour before it in the ordering. Therefore, when we come to colour $v$, one of its neighbours has already been coloured so there are at most $\Delta(H)$ choices for $v$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:connectedgraph}] Let $\delta\geq3$ be fixed and let $H$ be a connected graph with maximum degree $k\in\mathbb{N}$. We have $|V(H)|\geq k$. There are two special cases to look at before we consider a general $H$. \begin{enumerate} \item \textit{$H$ is the complete looped graph on $k$ vertices.} \\ If $G$ is any graph on $n$ vertices, we find that $\hom(G,H)=k^n$ because any vertex of $G$ can be mapped to any vertex of $H$. Hence, as any graph on $n$ vertices with minimum degree $\delta$ maximises the number of $H$-colourings, we have $\hom(G,H)\leq\hom(K_{\delta,n-\delta},H)$ as required. \item \textit{$H=K_{k,k}$.} \\ $H$ is bipartite so $\hom(G,H)\neq0$ if and only if $G$ is bipartite. For any connected bipartite graph $G$ on $n$ vertices, $\hom(G,H)=2k^n$. This means that any connected bipartite graph on $n$ vertices with minimum degree $\delta$ maximises the number of $H$-colourings and hence $\hom(G,H)\leq\hom(K_{\delta,n-\delta},H)$ as required. 
\end{enumerate} As the theorem is true in these two cases, we may assume that $H$ is not the complete looped graph on $k$ vertices or $K_{k,k}$. We may also assume that $k\geq2$ as we have already dealt with the cases when $H$ is a single looped vertex and when $H=K_{1,1}$. Hence we may apply Lemma \ref{lem:pathcounting} when required. Let $G$ be a graph on $n$ vertices with minimum degree $\delta$ that has the maximum number of $H$-colourings. We know that $H$ has at least one vertex $v$ of degree $k$. When considering $H$-colourings of $K_{\delta,n-\delta}$, we can map the vertex class of size $\delta$ to $v$ and the other vertex class to the neighbours of $v$. Hence, $\hom(K_{\delta,n-\delta},H)\geq k^{n-\delta}$. We will proceed to determine the structure of $G$. The assumption that $G$ has the most $H$-colourings tells us that $\hom(G,H)\geq\hom(K_{\delta,n-\delta},H)$. We will show that, for $n$ sufficiently large, we must have $G=K_{\delta,n-\delta}$. \\ \\ \textit{Claim 1: $G$ has a bounded number of disjoint cycles.}\\ Suppose that $G$ has $d$ disjoint cycles. We colour $G$ in the following way. Pick any vertex of $G$ and map it to any vertex of $H$. Take a shortest path from the starting vertex to a vertex on one of the disjoint cycles. There are at most $k$ ways to map each vertex on this path to vertices of $H$. We then consider the other vertices on the cycle (as the end vertex of the path has already been mapped to a vertex of $H$). Lemma \ref{lem:pathcounting} gives at most $(k^2-1)k^{t-3}$ ways to map these vertices to $H$, where $t$ is the number of vertices in the cycle. We then repeat this process of finding a shortest path from the already mapped vertices to one of the disjoint cycles and mapping the vertices in the path and cycle to $H$. Once all of the vertices in disjoint cycles have been considered, any remaining vertices can be mapped greedily with at most $k$ choices for each by Proposition \ref{prop:colouring}.
Therefore \[ \hom(G,H)\leq|V(H)|(k^2-1)^dk^{n-2d-1}<|V(H)|k^{n-1}e^{-\frac{d}{k^2}}. \] This is strictly smaller than $k^{n-\delta}$ whenever $d>k^2\log|V(H)|+k^2(\delta-1)\log k$. As $\hom(G,H)$ is maximal, it follows that $G$ has a bounded number of disjoint cycles. This bound only depends on $H$ and $\delta$. Hence we have proved the claim. \\ \\ Applying Theorem \ref{thm:erdos-posa} to $G$, we find that there exists a constant $\alpha=\alpha(H,\delta)$ such that $G$ can be made acyclic by removing at most $\alpha$ vertices. We can therefore partition the vertices of $G$ into a set $A$ of size at most $\alpha$ and a set $F$ such that $G[F]$ is a forest. We will show that we can make $F$ into an independent set by moving at most a constant number of vertices from $F$ to $A$. This constant depends only on $\delta$ and $H$ and not on the number of vertices in $G$. We say that a component of a graph is \textit{non-trivial} if it contains at least one edge. \\ \\ \textit{Claim 2: The forest $F$ has a bounded number of non-trivial components.}\\ Suppose $F$ has $a$ non-trivial components, $G_1,\dots G_a$. Each $G_i$ is a tree and so contains a maximal path $P_i$. As every vertex in $G$ has degree at least $\delta\geq3$, each end-vertex of $P_i$ must have a neighbour in $A$. We colour $G$ in the following way. First map $A$ into $H$. There are at most $|V(H)|^{|A|}$ ways to do this. We then consider each $G_i$ in turn. By Lemma \ref{lem:pathcounting}, there are at most $(k^2-1)k^{|P_i|-2}$ ways to colour $P_i$ and at most $k$ ways to colour each of the other vertices of $G_i$. Finally, we consider the remaining vertices of $G$, each of which has at most $k$ possible choices by Proposition \ref{prop:colouring}. Hence \[ \hom(G,H)\leq|V(H)|^{|A|}(k^2-1)^ak^{n-|A|-2a}<|V(H)|^{\alpha}k^{n-\alpha}e^{-\frac{a}{k^2}}. \] This is strictly less than $k^{n-\delta}$ whenever $a>k^2\alpha\log|V(H)|+k^2(\delta-\alpha)\log k$.
The maximality of $\hom(G,H)$ means that there exists a constant depending only on $\delta$ and $H$ that bounds the number of non-trivial components of $F$ and hence proves the claim. \\ \\ Let $T$ be any non-trivial component of $F$. Define $T'$ to be the subtree obtained from $T$ by deleting all of the leaves. We will show that the size of $T'$ is bounded by a constant that only depends on $\delta$ and $H$. This is done in two steps: first we show that the maximal length of a path in $T$ is bounded and then we show that $T'$ can only have a bounded number of leaves. Together, these two claims bound the size of $T'$. \\ \\ \textit{Claim 3: The length of the longest path in $T$ is bounded.}\\ Suppose the longest path $P$ in $T$ is $u_1v_1u_2v_2\dots$ and has length $b$. We may write $b=2b'+r$ where $r\in\{0,1\}$. The minimum degree of $G$ is at least $\delta\geq3$ and $T$ is acyclic. Therefore, each vertex of $P$ has a neighbour which is not on $P$. Further, every leaf of $T$ must have a neighbour in $A$. We colour the vertices of $G$ as follows. First, colour $A$. Next, we colour the vertices of $P$ using the following algorithm. Initially, $i=1$. The algorithm colours vertices $u_i$ and $v_i$ at step $i$ (and possibly some other vertices of $T$ that do not lie on $P$). 
\begin{figure}[h] \centering \begin{tikzpicture} \draw (-1,-0.5) ellipse (0.6cm and 2cm) node {$A$}; \node[main node, label={0:$u_1$}] (u) at (3,1.5) {}; \node[main node, label={0:$v_1$}] (v) at (3,0.5) {}; \node[main node, label={10:$P$}] (w) at (3,-2.5) {}; \path[draw,thick] (u) edge node {} (v) (v) edge node[below, rotate=90] {...} (w) (u) edge node {} (-1,1.1) (v) edge node {} (-1,0.5); \draw[blue] (3,1) ellipse (0.7cm and 1cm); \draw[red] (-1,-0.5) ellipse (0.8cm and 2.2cm); \draw (7,-0.5) ellipse (0.6cm and 2cm) node {$A$}; \node[main node, label={0:$u_1$}] (u) at (11,1.5) {}; \node[main node, label={10:$v_1$}] (v) at (11,0.5) {}; \node[main node, label={10:$P$}] (w) at (11,-2.5){}; \node[main node, label={270:$Q_1$}] (x) at (9,-1.5) {}; \path[draw,thick] (u) edge node {} (v) (v) edge node[below, rotate=90] {...} (w) (u) edge node {} (7,1.1) (x) edge node {} (7,0.5); \draw[decorate, decoration=snake] (v)--++(x); \draw[red] (7,-0.5) ellipse (0.8cm and 2.2cm); \draw [blue] plot [smooth cycle, tension=2] coordinates {(11,1.9) (11,-0.3) (8.6,-1.7)}; \begin{customlegend}[legend cell align=left,legend entries={vertices already coloured, vertices coloured in this step}, legend style={at={(5.5,-3.5)},anchor=north,font=\footnotesize}] \csname pgfplots@addlegendimage\endcsname{no markers,red} \csname pgfplots@addlegendimage\endcsname{no markers,blue} \end{customlegend} \end{tikzpicture} \caption{On the left, $v_1$ has a neighbour in $A$; on the right, $v_1$ does not.} \label{fig:longpathstart} \end{figure} At the $i^\text{th}$ step, consider vertices $u_i$ and $v_i$ on $P$. If $i=1$, $u_i$ is an end-vertex of $P$ and so has a neighbour in $A$; if $i\neq1$, $u_i$ has $v_{i-1}$ as a neighbour. Hence, we know $u_i$ is adjacent to a vertex which has already been coloured. Consider the vertex $v_i$. If $v_i$ has a neighbour in A, we have a path of length 4 starting and ending at vertices which have already been coloured. 
Lemma \ref{lem:pathcounting} tells us there are at most $k^2-1$ choices for $u_i$ and $v_i$ (see Figure \ref{fig:longpathstart}). If $v_i$ does not have a neighbour in $A$, it must have another neighbour in $T$ which does not lie on $P$. Take a maximal path $Q_i$ in $T$, which starts at $v_i$ and avoids $P$. The end-vertex of $Q_i$ that is not $v_i$ must be a leaf in $T$ and hence has a neighbour in $A$ (see Figure \ref{fig:longpathstart}). We therefore have a path of length $|Q_i|+3$ which starts and ends with vertices that have already been coloured and has $u_i\cup Q_i$ as the internal vertices. Lemma \ref{lem:pathcounting} gives at most $(k^2-1)k^{|Q_i|-1}$ ways to colour the path $u_i\cup Q_i$. We then proceed to the $(i+1)^\text{th}$ step of the algorithm. After $b'$ steps, we have coloured $2b'$ vertices of $P$ (and possibly some other vertices of $T$). We finish by colouring all of the remaining vertices of $G$, each of which has at most $k$ choices by Proposition \ref{prop:colouring}. Therefore \[ \hom(G,H)\leq|V(H)|^{|A|}(k^2-1)^{b'}k^{n-|A|-2b'}<|V(H)|^{\alpha}k^{n-\alpha}e^{-\frac{b'}{k^2}}. \] This is strictly less than $k^{n-\delta}$ whenever $b'>k^2\alpha\log|V(H)|+k^2(\delta-\alpha)\log k$. Because $\hom(G,H)$ is maximal, there exists a constant depending only on $\delta$ and $H$ which bounds the length of a maximal path in any non-trivial component of $F$ as required. \\ \\ \textit{Claim 4: $T'$ has a bounded number of leaves.}\\ Suppose $T'$ has $l$ leaves. Each leaf of $T'$ has at least two neighbours which are not in $T'$ because the minimum degree of $G$ is at least $\delta\geq3$. At least one of these neighbours is a leaf of $T$. Similarly, every leaf of $T$ has a neighbour in $A$. We colour $G$ by first colouring the vertices of $A$. For each leaf $v$ of $T'$, there are two possibilities. If $v$ has two neighbours $u$ and $w$ which are leaves of $T$, there is a path of length 5 with end vertices in $A$ and internal vertices $u$, $v$ and $w$.
By Lemma \ref{lem:pathcounting} there are at most $(k^2-1)k$ ways to colour the path $uvw$. If $v$ only has one neighbour $u$ which is a leaf of $T$, then $v$ must also have a neighbour in $A$ because it has at least $\delta$ neighbours and only one of these can be in $T'$ (see Figure \ref{fig:treeleaves}). Apply Lemma \ref{lem:pathcounting} to the path with end vertices in $A$ and internal vertices $u$ and $v$. There are at most $k^2-1$ choices for the colours of $u$ and $v$. \begin{figure}[h] \centering \begin{tikzpicture} \draw (-1.1,-0.5) ellipse (0.6cm and 2cm) node {$A$}; \node[main node,label={270:$u$}] (u) at (0.9,0.5) {}; \node[main node,label={90:$w$}] (w) at (0.9,-1.5) {}; \node[main node,label={180:$v$}] (v) at (1.9,-0.5) {}; \draw (2.7,-0.5) ellipse (1.4cm and 0.6cm) node {$T'$}; \path[draw,thick] (u) edge node {} (v) (w) edge node {} (v) (v) edge node {} (2.3,-0.5) (u) edge node {} (-1.1,0.1) (w) edge node {} (-1.1,-1.1); \draw[red] (-1.1,-0.5) ellipse (0.8cm and 2.2cm); \draw [blue] plot [smooth cycle, tension=2] coordinates {(0.7,0.7) (0.7,-1.7) (2.1,-0.5)}; \draw (6.9,-0.5) ellipse (0.6cm and 2cm) node {$A$}; \node[main node,label={90:$u$}] (u) at (8.9,-0.5) {}; \node[main node,label={90:$v$}] (v) at (9.9,-0.5) {}; \draw (10.7,-0.5) ellipse (1.4cm and 0.6cm) node {$T'$}; \path[draw,thick] (u) edge node {} (v) (v) edge node {} (10.3,-0.5) (u) edge node {} (6.9,0) (v) edge[bend left] node {} (6.9,-1); \draw[red] (6.9,-0.5) ellipse (0.8cm and 2.2cm); \draw[blue] (9.4,-0.5) circle (0.8cm); \begin{customlegend}[legend cell align=left,legend entries={vertices already coloured, vertices coloured in this step}, legend style={at={(5.5,-3.5)},anchor=north,font=\footnotesize}] \csname pgfplots@addlegendimage\endcsname{no markers,red} \csname pgfplots@addlegendimage\endcsname{no markers,blue} \end{customlegend} \end{tikzpicture} \caption{On the left, $v$ has two leaves as neighbours; on the right, $v$ has one.} \label{fig:treeleaves} \end{figure} Once each 
leaf of $T'$ has been assigned to a vertex of $H$, there are at most $k$ choices for each of the remaining vertices of $G$ by Proposition \ref{prop:colouring}. Therefore \[ \hom(G,H)\leq|V(H)|^{|A|}(k^2-1)^lk^{n-|A|-2l}<|V(H)|^{\alpha}k^{n-\alpha}e^{-\frac{l}{k^2}}. \] This is strictly less than $k^{n-\delta}$ whenever $l>k^2\alpha\log|V(H)|+k^2(\delta-\alpha)\log k$. The maximality of $\hom(G,H)$ means that the maximum number of leaves $T'$ can have is bounded above by a constant depending only on $\delta$ and $H$ as required. \\ \\ Claims 3 and 4 show that, for each non-trivial component $T$ of $F$, the subtree $T'$ consisting of $T$ without its leaves has maximal size bounded by a constant $t(\delta,H)$. Claim 2 shows that there are at most $a(\delta,H)$ non-trivial components of $F$ for some constant $a(\delta,H)$. We can make $F$ into an independent set by moving some (possibly all) of the vertices of each $T'$ from $F$ to $A$. If any non-trivial component has $T'=\emptyset$, then $T$ is a single edge and in this case we just move one of the end vertices from $F$ to $A$. Hence, by moving at most $a(\delta,H)t(\delta,H)$ vertices from $F$ to $A$, we can turn the forest into an independent set. We have now partitioned the vertices of $G$ into sets of vertices $L$ and $R$ where $|L|\leq \alpha(H,\delta)+a(\delta,H)t(\delta,H)$ and $R$ is an independent set. The size of $L$ is bounded above by a constant that only depends on $\delta$ and $H$; it does not depend on the size of $G$. Each vertex in $R$ has at least $\delta$ neighbours in $L$ because of the minimum degree of the vertices in $G$. By the pigeonhole principle, there exists a set $Y\subseteq L$ of size $\delta$ such that $Y$ is contained in the neighbourhood of at least $(n-|L|)/{{|L|}\choose{\delta}}\geq cn$ vertices of $R$ for some constant $c=c(\delta,H)$. Hence, $G$ contains the subgraph $K_{\delta,cn}$. 
If $G$ does not contain $K_{\delta,n-\delta}$ as a subgraph, then $Y$ is not a dominating set for $G$. Therefore, the subgraph induced by $G\backslash Y$ has a non-trivial component. If $G\backslash Y$ contains a non-trivial tree, take a maximal path $X$ in this tree. Otherwise, choose $X$ to be a cycle together with a shortest path from the cycle to $Y$. We may colour the vertices of $G$ in such a way that $Y$ is always coloured first. Recall the definition of $S(\delta,H)$ given at the beginning of Section \ref{sec:connectedgraph}. If $Y$ is coloured using a vector from $S(\delta,H)$, we then colour the vertices of $X$. There are at most $(k^2-1)k^{|X|-2}$ ways to do this. Finally, we colour the remaining vertices, each of which has at most $k$ choices by Proposition \ref{prop:colouring}. This gives at most $s(\delta,H)(k^2-1)k^{n-\delta-2}$ such colourings. Alternatively, if $Y$ is not coloured using a vector from $S(\delta,H)$, then there are at most $k-1$ ways to map each of the other $cn$ vertices of the $K_{\delta,cn}$ subgraph into $H$. There are then at most $k$ choices for each of the remaining vertices of $G$ by Proposition \ref{prop:colouring}. There are at most $|V(H)|^\delta(k-1)^{cn}k^{n-\delta-cn}$ such colourings. Combining the above gives \begin{align*} \hom(G,H) &\leq s(\delta,H)(k^2-1)k^{n-\delta-2}+|V(H)|^\delta(k-1)^{cn}k^{n-cn-\delta}\\ &=s(\delta,H)k^{n-\delta}-s(\delta,H)k^{n-\delta-2}+|V(H)|^\delta(k-1)^{cn}k^{n-cn-\delta}\\ &<s(\delta,H)k^{n-\delta} \end{align*} for sufficiently large values of $n$. If $G$ contains $K_{\delta,n-\delta}$ as a subgraph and $G\neq K_{\delta,n-\delta}$, then we know that $G$ contains at least one extra edge between two vertices in the same partition class. Clearly, every mapping of $G$ into $H$ is also a mapping of $K_{\delta,n-\delta}$ into $H$. We will show below that the converse is not true.
If $ij$ is an edge in $H$, then mapping the size $\delta$ partition class of $K_{\delta,n-\delta}$ to $i$ and the other partition class to $j$ is a proper mapping of $K_{\delta,n-\delta}$ into $H$. However, it is only a proper mapping of $G$ to $H$ if the partition class containing the extra edge is mapped to a looped vertex. Therefore, if $H$ has a non-looped vertex, $\hom(G,H)<\hom(K_{\delta,n-\delta},H)$. Suppose every vertex of $H$ is looped. We assumed that $H$ was connected and not the complete looped graph so there will be non-adjacent vertices $j$ and $j'$ which have a common neighbour $i$. We may map the partition class with the extra edge to vertices $j$ and $j'$ and the other partition class to $i$. If the extra edge has one endpoint mapped to $j$ and the other to $j'$, we do not get a proper $H$-colouring of $G$ but it is a valid $H$-colouring of $K_{\delta,n-\delta}$. Hence $\hom(G,H)<\hom(K_{\delta,n-\delta},H)$. Therefore, if $\hom(G,H)$ is maximal and $n$ is sufficiently large, then we must have $G=K_{\delta,n-\delta}$. \end{proof} \section{Counterexample to Conjecture \ref{conj:main}} \label{sec:counterexample} We write $T_t(x)$ for the \textit{$t$-partite Tur\'{a}n graph on $x$ vertices} (i.e. the complete $t$-partite graph on $x$ vertices with the vertex classes as equal as possible). For every $\delta\geq3$, we will construct a graph $H$ such that, for infinitely many values of $n$, the number of $H$-colourings is uniquely maximised by a disjoint union of complete multipartite graphs. This shows that Conjecture \ref{conj:main} does not hold. For simplicity, we first assume that $(t-1)|\delta$ for some $3\leq t\leq\delta$. \begin{theorem} \label{thm:counter} Fix $\delta\geq3$ and $3\leq t\leq\delta$ such that $\delta=(t-1)\alpha$ for some $\alpha\in\mathbb{N}$.
Then there exists a constant $k_0(\delta)$ such that the following holds for all values of $m\in\mathbb{N}$: if $k\geq k_0(\delta)$ and $G$ is any graph on $n=mt\alpha$ vertices with minimum degree at least $\delta$, then we have $\hom(G,kK_t)\leq\hom(mT_t(t\alpha),kK_t)$ with equality if and only if $G=mT_t(t\alpha)$. \end{theorem} \begin{proof} Fix $\delta\geq3$ and $3\leq t\leq\delta$ as above where $\delta=(t-1)\alpha$. Take $k$ sufficiently large that $(t!k)^{1/(t\alpha)}>tk^{1/(t\alpha+1)}$. Clearly, $\hom(K_{t+1},kK_t)=0$ and so we only need to consider graphs which are $K_{t+1}$-free. Any $K_{t+1}$-free graph with minimum degree at least $\delta$ has at least $t\alpha$ vertices. Tur\'{a}n's theorem tells us that $T_t(t\alpha)$ is the only such graph with exactly $t\alpha$ vertices. It is easy to see that $\hom(T_t(t\alpha),kK_t)=t!k$. Let $m\in\mathbb{N}$ and take $G$ to be any graph on $n=mt\alpha$ vertices with minimum degree at least $\delta$. We may assume that $G$ has $a$ components $G_1,\dots G_a$ with $|G_1|\geq\dots\geq|G_a|\geq t\alpha$. Then $\hom(G,kK_t)=\prod_{i=1}^a\hom(G_i,kK_t)$. If $|G_1|=t\alpha$, then $|G_i|=t\alpha$ for all $i$ and hence $G=mT_t(t\alpha)$. Suppose that $|G_1|>t\alpha$. We know that, if $|G_i|=t\alpha$, then $G_i=T_t(t\alpha)$ and $\hom(G_i,kK_t)=t!k$. If $|G_i|>t\alpha$, then we may colour the vertices of $G_i$ greedily to get $\hom(G_i,kK_t)\leq tk(t-1)^{|G_i|-1}<kt^{|G_i|}$. We chose $k$ such that $(t!k)^{1/(t\alpha)}>tk^{1/(t\alpha+1)}$. Using this and the fact that $|G_i|\geq t\alpha+1$, we have $\hom(G_i,kK_t)<(t!k)^{|G_i|/(t\alpha)}$. Combining these two observations, we get \[ \hom(G,kK_t)=\prod_{i=1}^a\hom(G_i,kK_t)<(t!k)^{n/(t\alpha)}=(t!k)^m=\hom(mT_t(t\alpha),kK_t). \] Therefore, if $G$ is any graph on $n=mt\alpha$ vertices with minimum degree at least $\delta$, we have $\hom(G,kK_t)\leq\hom(mT_t(t\alpha),kK_t)$. We have equality if and only if $G=mT_t(t\alpha)$.
\end{proof} \noindent We may use the techniques above to show that, if $(t-1)|(\delta +1)$, then a similar result holds -- there is a graph $H$ such that the number of $H$-colourings is uniquely maximised by a union of complete $t$-partite graphs. Therefore, for every $\delta\geq3$, by taking $t=3$, we can produce a counterexample to Conjecture \ref{conj:main}. In all of the examples we have seen so far, the number of $H$-colourings has been maximised by the union of complete multipartite graphs. We will now give an example where this is not the case. Take $\delta=7$ and $t=4$ and choose $k$ as in Theorem \ref{thm:counter}. Let $H=kK_4$, $m\in\mathbb{N}$ and take $G$ to be any graph on $n=10m$ vertices with minimum degree at least $7$. As before, we may assume that $G$ is $4$-colourable. If $G$ has a component with at least $11$ vertices, then we can show, in a similar way to Theorem \ref{thm:counter}, that $\hom(G,kK_4)<\hom(mT_4(10),kK_4)$. However, the number of $H$-colourings is not maximised by $mT_4(10)$. Let $T'$ be the graph formed from $T_4(10)$ by removing a perfect matching between the two vertex classes of size $2$. Then $\hom(mT',kK_4)=2\hom(mT_4(10),kK_4)$. \section{Proof of Theorem \ref{thm:largedelta}} \label{sec:largedelta} We will need the following simple observation. \begin{proposition} \label{prop:disjointcycles} Fix $d\in\mathbb{N}$. Let $G$ be any graph with minimum degree at least $3d$. Then $G$ has at least $d$ disjoint cycles. \end{proposition} \begin{proof} If $d=1$, the minimum degree of $G$ is at least 3 and so $G$ contains a cycle. If $d>1$, take $C$ to be a shortest cycle in $G$. Each vertex in $G$ has at most 3 neighbours on $C$ or else we would be able to find a shorter cycle. Removing the vertices in $C$ reduces the minimum degree by at most 3. Therefore, by induction, we can find at least $d-1$ disjoint cycles in $G\backslash V(C)$. 
\end{proof} \noindent Before proving Theorem \ref{thm:largedelta}, we will prove a couple of useful lemmas. Recall the definitions of $S(\delta,H)$ and $s(\delta,H)$ given at the start of Section \ref{sec:connectedgraph}. \begin{lemma} \label{lem:boundlargebipartite} Fix $\delta\geq1$ and $k\geq2$. Fix $H$ to be any graph with maximum degree $k$. Then there exists a constant $\beta(\delta,H)$ such that, for $n\geq\beta(\delta,H)$, we have $\hom(K_{\delta,n-\delta},H)\leq s(\delta,H)k^{n+1-\delta}$. \end{lemma} \begin{proof} The graph $K_{\delta,n-\delta}$ has two vertex classes. Denote the class of size $\delta$ by $Z$. When we are counting the number of $H$-colourings of $K_{\delta,n-\delta}$, we will colour vertices in $Z$ first and then the remaining vertices may be coloured greedily. There are two possibilities: either $Z$ is coloured so that all of the vertices used in $H$ have $k$ common neighbours (i.e. we use a vector from $S(\delta,H)$) or the vertices in $H$ used to colour $Z$ have strictly fewer than $k$ neighbours in common. First, we consider the case where $Z$ is coloured using a vector from $S(\delta,H)$. When we come to colour the vertices of $G\backslash Z$, there are exactly $k$ choices for each one. Therefore, there are exactly $s(\delta,H)k^{n-\delta}$ such colourings. Next, we consider the case where $Z$ is coloured so that the vertices used do not have $k$ common neighbours in $H$. This leaves at most $k-1$ choices for each vertex of $G\backslash Z$. Hence, there are at most $|V(H)|^{\delta}(k-1)^{n-\delta}$ such colourings. Combining the above gives \[ \hom(K_{\delta,n-\delta},H)\leq s(\delta,H)k^{n-\delta}+|V(H)|^{\delta}(k-1)^{n-\delta}. \] Hence, for $n$ sufficiently large, we have \begin{align*} \hom(K_{\delta,n-\delta},H)&\leq s(\delta,H)k^{n-\delta}+k^{n-\delta}\\ &\leq s(\delta,H)k^{n+1-\delta}. \end{align*} This proves the required result.
\end{proof} \begin{lemma} \label{lem:boundlargedelta} Fix $H$ to be any graph with maximum degree $k\in\mathbb{N}$ that does not have the complete looped graph on $k$ vertices or $K_{k,k}$ as a component. There exists a constant $\delta_0(H)$ such that, if $\delta\geq\delta_0(H)$ and $G$ is a connected graph on $n$ vertices with minimum degree $\delta$, then $\hom(G,H)<k^{n-1}$. \end{lemma} \begin{proof} The minimum degree condition on $G$ ensures that $n\geq\delta+1$. The restrictions on $H$ mean that $k\geq2$. Let $H$ have $h$ components $H_1,\dots H_h$. As $G$ is connected, any $H$-colouring of $G$ maps $G$ to a single component $H_i$ and so $\hom(G,H)=\sum_{i=1}^h\hom(G,H_i)$. We therefore first count the number of $H_i$-colourings of $G$ for each $i\in[h]$. There are three cases to consider. \\ \\ \textit{Case 1.} Let $H_i$ be a complete looped graph on $l$ vertices where $l<k$. Then $\hom(G,H_i)=l^n\leq(k-1)^n$. This is strictly less than $k^{n-h-1}$ whenever $n>\frac{(h+1)\log k}{\log k-\log(k-1)}$. \\ \\ \textit{Case 2.} Let $H_i=K_{l,l}$ where $l<k$. Then $\hom(G,H_i)\leq2l^n\leq2(k-1)^n$. This is strictly less than $k^{n-h-1}$ whenever $n>\frac{\log2+(h+1)\log k}{\log k-\log(k-1)}$. \\ \\ \textit{Case 3.} Let $H_i$ be any connected graph which is not the complete looped graph on $l$ vertices or $K_{l,l}$ for some $l\leq k$. Suppose $G$ has $d$ vertex disjoint cycles $C_1,\dots C_d$. We colour $G$ in the following way: \begin{enumerate} \item Pick any vertex of $G$ and map it to any vertex of $H_i$. \item Find a shortest path $P$ from the already coloured vertices of $G$ to an uncoloured vertex on one of the cycles $C_j$. There are at most $k$ ways to map each vertex on this path to vertices of $H_i$. \item The end vertex of $P$ has already been mapped to a vertex of $H_i$ so we consider the other vertices on the cycle $C_j$. Lemma \ref{lem:pathcounting} gives at most $(k^2-1)k^{|C_j|-3}$ ways to map these vertices to $H_i$.
\item If, for some $j'\in\{1,\dots d\}$, the cycle $C_{j'}$ has not yet been coloured, go back to step 2. \item Colour any remaining uncoloured vertices in a greedy fashion. By Proposition \ref{prop:colouring}, there are at most $k$ choices for each vertex. \end{enumerate} By colouring $G$ in this way, we find that \[ \hom(G,H_i)\leq|V(H_i)|(k^2-1)^dk^{n-2d-1}<|V(H_i)|k^{n-1}e^{-\frac d{k^2}}. \] This is strictly less than $k^{n-h-1}$ whenever $d>k^2\log|V(H_i)|+k^2h\log k$. \\ \\ Choose $\delta\geq\max\big\{3k^2\log|V(H)|+3k^2h\log k+3,\frac{\log2+(h+1)\log k}{\log k-\log(k-1)}\big\}$ and note that $n\geq\delta+1$. If $H_i$ is in either Case 1 or Case 2, then $n$ is large enough that $\hom(G,H_i)<k^{n-h-1}$. If $H_i$ is in Case 3, then, by Proposition \ref{prop:disjointcycles}, the number of disjoint cycles in $G$ is greater than $k^2\log|V(H_i)|+k^2h\log k$ and hence $\hom(G,H_i)<k^{n-h-1}$. Then \[ \hom(G,H)=\sum_{i=1}^{h}\hom(G,H_i)<hk^{n-h-1}<k^{n-1}. \] Hence, if $H$ does not contain the complete looped graph on $k$ vertices or $K_{k,k}$ as a component, we have $\hom(G,H)<k^{n-1}$ for $\delta$ sufficiently large as required. \end{proof} \noindent We are now ready to prove the main result. \begin{proof}[Proof of Theorem \ref{thm:largedelta}] Let $H$ be any graph with maximum degree $k$ that does not have the complete looped graph on $k$ vertices or $K_{k,k}$ as a component. This allows us to apply Lemma \ref{lem:boundlargedelta} as required. Choose $\delta\geq\delta_0(H)$ where $\delta_0(H)$ is the constant found in Lemma \ref{lem:boundlargedelta}. Set $\lambda(\delta,H)=\max\{\kappa(\delta,H), \beta(\delta,H)\}$ where $\kappa(\delta,H)$ is the constant found in Theorem \ref{thm:connectedgraph} and $\beta(\delta,H)$ is the constant found in Lemma \ref{lem:boundlargebipartite}. Now, choose $n>(\delta-1)(\lambda(\delta,H)-1)$. Let $G$ be a graph on $n$ vertices with minimum degree $\delta$ that has the maximum number of $H$-colourings.
Clearly, $\hom(G,H)\geq\hom(K_{\delta,n-\delta},H)\geq s(\delta,H)k^{n-\delta}\geq k^{n-\delta}$. Let $G$ have $t$ components $G_1,\dots G_t$. An $H$-colouring of $G$ consists of separate $H$-colourings of each component $G_i$ and therefore $\hom(G,H)=\prod_{i=1}^t\hom(G_i,H)$. As $G$ has the most $H$-colourings among all graphs on $n$ vertices with minimum degree $\delta$, we must also have that $G_i$ has the most $H$-colourings among all graphs on $|G_i|$ vertices with minimum degree $\delta$ for each $i\in\{1,\dots t\}$. \\ \\ \noindent \textit{Claim 1: $G$ has a bounded number of components.}\\ By Lemma \ref{lem:boundlargedelta}, we have that $\hom(G_i,H)<k^{|G_i|-1}$ for each $i\in\{1,\dots t\}$ so \[ \hom(G,H)=\prod_{i=1}^t\hom(G_i,H)<\prod_{i=1}^tk^{|G_i|-1}=k^{n-t}. \] If $t\geq\delta$, then we have $\hom(G,H)<k^{n-\delta}\leq\hom(K_{\delta,n-\delta},H)$ and this contradicts our assumption that $G$ has the maximum number of $H$-colourings. \\ \\ \noindent Hence we know that $G$ has at most $\delta-1$ components. By the pigeonhole principle, there is a component of $G$ with at least $\lambda(\delta,H)$ vertices. Without loss of generality, we may assume this component is $G_1$. By Theorem \ref{thm:connectedgraph}, we have that $G_1=K_{\delta,|G_1|-\delta}$ and, applying Lemma \ref{lem:boundlargebipartite}, we find that $\hom(G_1,H)\leq s(\delta,H)k^{|G_1|+1-\delta}$. \\ \\ \noindent \textit{Claim 2: $G$ has exactly one component.}\\ Suppose $t>1$. We know $\hom(G_1,H)\leq s(\delta,H)k^{|G_1|+1-\delta}$. By Lemma \ref{lem:boundlargedelta}, we have $\hom(G_2,H)<k^{|G_2|-1}$. Hence \begin{align*} \hom(G_1\cup G_2,H)&<s(\delta,H)k^{|G_1|+1-\delta}k^{|G_2|-1}\\ &=s(\delta,H)k^{|G_1|+|G_2|-\delta}\\ &\leq\hom(K_{\delta,|G_1|+|G_2|-\delta},H). \end{align*} Replacing $G_1\cup G_2$ by $K_{\delta,|G_1|+|G_2|-\delta}$ increases the number of $H$-colourings of $G$, which contradicts our assumption that $G$ has the maximum number of $H$-colourings.
\\ \\ \noindent We have seen that $G$ has exactly one component $G_1$ and that this component is $K_{\delta,|G_1|-\delta}$. In other words, if $G$ has the maximum number of $H$-colourings, then $G=K_{\delta,n-\delta}$ as required. \end{proof} \section{Conclusion} \label{sec:conclusion} We have shown that, given any graph $H$ and any $\delta\geq3$, for sufficiently large $n$, the graph $G=K_{\delta,n-\delta}$ maximises $\hom(G,H)$ among all connected graphs on $n$ vertices with minimum degree $\delta$. If $H$ has a component which is neither a complete looped graph nor a complete balanced bipartite graph, then $K_{\delta,n-\delta}$ is the unique such maximising graph. We have also considered the more general question which was asked by Engbers \cite{E16}: what happens if we consider all graphs on $n$ vertices with minimum degree $\delta$, rather than just those which are connected? There are two situations which arise and we will discuss both. We will first look at the case where $H$ is fixed and $\delta\geq\delta_0(H)$. By making $\delta$ sufficiently large in relation to $|H|$, we are able to identify the maximising graph for certain graphs $H$. We will then consider what happens when $\delta$ is fixed and $H$ is allowed to be any graph, which we will refer to as the \textit{general case}. In what follows, we take $G$ to be any graph on $n$ vertices with minimum degree $\delta$. We assume that $G$ has $t$ components $G_1,\dots G_t$. If $H$ is fixed with maximum degree $k$ and $\delta$ is sufficiently large, then the graph which maximises the number of $H$-colourings depends on the structure of $H$. Some of the different possible graphs which maximise $\hom(G,H)$ are given below. \begin{enumerate} \item \textit{$H$ is $h$ disjoint copies of the complete looped graph on $k$ vertices.}\\ It is easy to see that $\hom(G,H)=\prod_{i=1}^t|V(H)|k^{|G_i|-1}=h^tk^n$. 
When $h=1$, $\hom(G,H)=k^n$ for any graph $G$ on $n$ vertices and so every graph $G$ maximises the number of $H$-colourings. When $h>1$, $\hom(G,H)$ is maximised when $G$ has as many components as possible. The minimum number of vertices in a component of $G$ is $\delta+1$ which occurs when the component is $K_{\delta+1}$. Writing $n=a(\delta+1)+b$ where $b\in\{0,\dots\delta\}$, we have that $\hom(G,H)$ is maximised by any graph with $a$ components, e.g. $(a-1)K_{\delta+1}\cup K_{\delta+b+1}$. \item \textit{$H$ is $h$ disjoint copies of $K_{k,k}$.}\\ It is easy to see that, if a graph is not bipartite, it is not possible to map it into $H$. Therefore \[ \hom(G,H)= \begin{cases} \prod_{i=1}^t\hom(G_i,H)=(2h)^tk^n&\text{if every $G_i$ is bipartite}\\ 0&\text{if some $G_i$ is not bipartite.} \end{cases} \] Clearly, the number of $H$-colourings is maximised when $G$ is bipartite and has as many components as possible. The smallest possible bipartite component of $G$ is $K_{\delta,\delta}$ which has $2\delta$ vertices. Writing $n=2a\delta+b$ where $b\in\{0,\dots2\delta-1\}$, we have that $\hom(G,H)$ is maximised by any bipartite graph with $a$ components, e.g. $(a-1)K_{\delta,\delta}\cup K_{\delta,\delta+b}$. \item \textit{No component of $H$ is the complete looped graph on $k$ vertices or $K_{k,k}$.}\\ In Section \ref{sec:largedelta}, we showed that, for any $\delta\geq\delta_0(H)$, there exists a constant $n_0(\delta,H)$ such that, if $n\geq n_0(\delta,H)$, then $K_{\delta,n-\delta}$ uniquely maximises the number of $H$-colourings. \end{enumerate} \noindent From the examples given above, it is clear to see that there is not a simple answer to the question of which graph $G$ maximises $\hom(G,H)$ when $H$ is fixed and $\delta$ is sufficiently large. We make the following conjecture.
\begin{conjecture} For any graph $H$ and any $\delta\geq\delta_0(H)$, there exists a constant $n_0(\delta,H)$ such that the following holds: if $G$ is a graph with minimum degree $\delta$ and at least $n_0(\delta,H)$ vertices, then \[ \hom(G,H)\leq\max\big\{\hom(K_{\delta+1},H)^{\frac{|G|}{\delta+1}},\hom(K_{\delta,\delta},H)^{\frac{|G|}{2\delta}},\hom(K_{\delta,|G|-\delta},H)\big\}. \] \end{conjecture} \noindent This conjecture implies that, for a fixed graph $H$ and $\delta$ sufficiently large, the following holds: for sufficiently large $n$ satisfying suitable divisibility conditions, the number of $H$-colourings is always maximised by one of $\frac n{\delta+1}K_{\delta+1}$, $\frac n{2\delta}K_{\delta,\delta}$ or $K_{\delta,n-\delta}$.
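The closed-form counts appearing in the examples above (and the value $\hom(T_t(t\alpha),kK_t)=t!k$ from Theorem \ref{thm:counter}) are easy to verify by exhaustive search on small instances. The following Python sketch is an illustration added for checking purposes only; the helper functions are ours and not part of the paper.

```python
from itertools import product

def hom(n_G, G_edges, n_H, H_adj):
    """Count homomorphisms G -> H by brute force over all vertex maps.
    H_adj is a set of ordered pairs (i, j); loops appear as (i, i)."""
    return sum(
        all((phi[u], phi[v]) in H_adj for u, v in G_edges)
        for phi in product(range(n_H), repeat=n_G)
    )

def complete_multipartite(sizes):
    """Vertices 0..sum(sizes)-1; edges between all pairs in distinct parts."""
    parts, start = [], 0
    for s in sizes:
        parts.append(range(start, start + s))
        start += s
    edges = [(u, v) for i, P in enumerate(parts) for Q in parts[i + 1:]
             for u in P for v in Q]
    return start, edges

def sym(pairs):
    """Close a set of ordered pairs under symmetry."""
    pairs = set(pairs)
    return pairs | {(v, u) for u, v in pairs}

# H = kK_t with t = 3, k = 2 (two disjoint triangles on vertices 0..5).
kKt = sym((c * 3 + i, c * 3 + j) for c in range(2)
          for i in range(3) for j in range(3) if i != j)
nT, T_edges = complete_multipartite([2, 2, 2])   # T_3(6) = K_{2,2,2}
assert hom(nT, T_edges, 6, kKt) == 12            # = t! * k

# H = 2 disjoint complete looped graphs on 2 vertices: a connected G lands
# entirely in one copy, giving h^t * k^n for a graph with t components.
H_loop = sym((c * 2 + i, c * 2 + j) for c in range(2)
             for i in range(2) for j in range(2))
nK4, K4 = complete_multipartite([1, 1, 1, 1])    # K_4
G_edges = K4 + [(u + 4, v + 4) for u, v in K4]   # K_4 u K_4 on 8 vertices
assert hom(8, G_edges, 4, H_loop) == 2 ** 2 * 2 ** 8   # h^t k^n = 1024

# H = K_{2,2}: a connected bipartite G gives (2h)^t k^n; here h = t = 1.
nB, B_edges = complete_multipartite([3, 3])      # K_{3,3}
K22 = sym((i, j + 2) for i in range(2) for j in range(2))
assert hom(nB, B_edges, 4, K22) == 2 * 2 ** 6    # = 128
```

Each assertion matches the corresponding closed formula: $t!k$ for $T_3(6)\to2K_3$, $h^tk^n$ for disjoint complete looped components, and $(2h)^tk^n$ for a connected bipartite graph mapped into $K_{k,k}$.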
https://arxiv.org/abs/1701.01585
A Positivstellensatz for forms on the positive orthant
Let $p$ be a nonconstant form in $\mathbb{R}[x_1,\dots,x_n]$ with $p(1,\dots,1)>0$. If $p^m$ has strictly positive coefficients for some integer $m\ge1$, we show that $p^m$ has strictly positive coefficients for all sufficiently large $m$. More generally, for any such $p$, and any form $q$ that is strictly positive on $(\mathbb{R}_+)^n\setminus\{0\}$, we show that the form $p^mq$ has strictly positive coefficients for all sufficiently large $m$. This result can be considered as a strict Positivstellensatz for forms relative to $(\mathbb{R}_+)^n\setminus\{0\}$. We give two proofs, one based on results of Handelman, the other on techniques from real algebra.
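The shape of the result can be seen numerically in the smallest nontrivial instance of the Pólya case $p=x_1+x_2$. The sketch below is ours, not part of the paper: it represents binary forms by coefficient lists and finds the smallest $m$ for which $p^mq$ has strictly positive coefficients, taking $q=x^2-xy+y^2$, a form with a negative coefficient that is nonetheless strictly positive on $(\mathbb{R}_+)^2\setminus\{0\}$.

```python
def multiply(a, b):
    """Convolve coefficient lists; a[i] is the coefficient of x^(deg-i) y^i."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

p = [1, 1]      # p = x + y (Polya's classical case)
q = [1, -1, 1]  # q = x^2 - x*y + y^2 = (x - y)^2 + x*y > 0 on R_+^2 \ {0}

prod, m = q[:], 0
while not all(c > 0 for c in prod):  # strictly positive: every coefficient > 0
    prod = multiply(prod, p)
    m += 1
print(m, prod)  # m = 3: (x+y)^3 q = x^5 + 2x^4y + x^3y^2 + x^2y^3 + 2xy^4 + y^5
```

Note that $m=1$ and $m=2$ fail only because some coefficients vanish ($(x+y)q=x^3+y^3$), which is exactly the distinction between nonnegative and strictly positive coefficients.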
\section{Introduction} \label{sec: intro} Given polynomials $p, q \in {\mathbb{R}}[\mathsf{x}] := {\mathbb{R}}[x_1, \dots, x_n]$ where all the coefficients of $p$ are nonnegative, Handelman \cite{Handelman86} gave a necessary and sufficient condition (reproduced as Theorem \ref{thm: HandelmanCharacterization} below) for there to exist a nonnegative integer $m$ such that the coefficients of $p^m q$ are all nonnegative. In another paper \cite{Handelman92}, Handelman showed that, given a polynomial $p \in {\mathbb{R}}[\mathsf{x}]$ such that $p(1, \dots, 1) > 0$, if the coefficients of $p^m$ are all nonnegative for some $m > 0$, then the coefficients of $p^m$ are all nonnegative for every sufficiently large $m$. In the case where $p$ is a form (i.e.\ a~homogeneous polynomial), there is a stronger positivity condition that $p$ may satisfy. If $p(x) = \sum_{|w| = d} a_wx^w \in {\mathbb{R}}[\mathsf{x}]$ is homogeneous of degree $d$ (with $a_w\in{\mathbb{R}}$), we say that $p$ {\emph{has strictly positive coefficients}} if $a_w > 0$ for all $|w| = d$. Here we use standard multi-index notation, where an $n$-tuple $w = (w_1,\ldots, w_n)$ of nonnegative integers has length $|w| := w_1 + \cdots + w_n$ and $x^{w} = x_1^{w_1} x_2^{w_2} \cdots x_n^{w_n}$. Denote the closed positive orthant of real $n$-space by $${\mathbb{R}}_+^n\>:=\>\{x=(x_1,\dots,x_n)\in{\mathbb{R}}^n\colon x_1,\dots,x_n \ge0\}.$$ Our main result is as follows: \begin{theorem} \label{thm: strictPositivstellensatz} Let $p \in {\mathbb{R}}[\mathsf{x}]$ be a nonconstant real form. The following are equivalent: \begin{enumerate}[(A)] \item \label{thm: someOddPos} The form $p^m$ has strictly positive coefficients for some odd $m\ge1$. \item \label{thm: somePos} The form $p^m$ has strictly positive coefficients for some $m\ge1$, and $p(x) > 0$ at some point $x \in {\mathbb{R}}_+^n$. 
\item \label{thm: eventualPos} For each real form $q \in {\mathbb{R}}[\mathsf{x}]$ strictly positive on ${\mathbb{R}}_+^n \setminus\{0\}$, there exists a positive integer $m_0$ such that $p^m q$ has strictly positive coefficients for all $m \ge m_0$. \end{enumerate} \end{theorem} Theorem \ref{thm: strictPositivstellensatz} can be derived from an isometric imbedding theorem for holomorphic bundles, due to Catlin-D'Angelo \cite{CD99}. The argument is sketched in an appendix at the end of this paper. Another condition equivalent to each of the three conditions of Theorem \ref{thm: strictPositivstellensatz} was given by the second author and To in \cite{TanTo}. The line of argumentation in \cite{TanTo} is analytic in nature, and the proof therein invokes Catlin-D'Angelo's isometric embedding theorem. As the statement of Theorem \ref{thm: strictPositivstellensatz} involves only real polynomials, it is desirable to give a purely algebraic proof, which is what we shall do below. We will in fact give two proofs of a very different nature. Both are independent of Catlin-D'Angelo's proof in \cite{CD99}, which uses compactness of the $\bar\partial$-Neumann operator on pseudoconvex domains of finite type in ${\mathbb{C}}^n$ and an asymptotic expansion of the Bergman kernel function by Catlin \cite{Catlin99}. We remark that in the case when $n = 2$, Theorem \ref{thm: strictPositivstellensatz} follows from De~Angelis' work in \cite{deAngelis03} and has been independently observed by Handelman \cite{HandelmanMO}. Our first proof of Theorem \ref{thm: strictPositivstellensatz} uses the criterion of Handelman \cite{Handelman86} mentioned above. Our second proof reduces Theorem \ref{thm: strictPositivstellensatz} to the archimedean local-global principle due to the first author, in a spirit similar to \cite{Scheiderer12}. For a real form, having strictly positive coefficients is a certificate for being strictly positive on ${\mathbb{R}}_+^n\setminus\{0\}$.
Therefore Theorem \ref{thm: strictPositivstellensatz} can be seen as a Positivstellensatz for forms $q$, relative to ${\mathbb{R}}_+^n\setminus \{0\}$. In particular, the case where $p = x_1 + \cdots + x_n$ specializes to the classical P\'olya Positivstellensatz \cite{Polya28} (reproduced in \cite[pp. 57--60]{HLP52}). For any $n \ge 2$ and even $d \ge 4$, there are examples of degree-$d$ $n$-ary real forms $p$ with some negative coefficient that satisfy the equivalent conditions of Theorem \ref{thm: strictPositivstellensatz} (see Example \ref{eq: DV} below). \medbreak \paragraph{{\textbf{Acknowledgements.}}} The second author would like to thank his PhD supervisor Professor Wing-Keung To for his continued guidance and support. We would also like to thank David Handelman for his answer on MathOverflow \cite{HandelmanMO}, and the anonymous referee for pointing out reference \cite{kr}. \section{A theorem of Handelman} Let $p = \sum_{w \in {\mathbb{Z}}^n} c_w x^w\in{\mathbb{R}}[\mathsf{x},\mathsf{x}^{-1}] := {\mathbb{R}}[x_1, \ldots, x_n, x_1^{-1}, \ldots, x_n^{-1}]$ be a Laurent polynomial. Following Handelman \cite{Handelman86} we introduce the following terminology. The {\emph{Newton diagram}} of $p$ is the set $\Log(p) := \{w \in {\mathbb{Z}}^n : c_w \ne0\}$. A subset $F$ of $\Log(p)$ is a \emph{relative face} of $\Log(p)$ if there exists a face $K$ of the convex hull of $\Log(p)$ in ${\mathbb{R}}^n$ such that $F = K \cap \Log(p)$. In particular, the subset $\Log(p)$ is itself a relative face of $\Log(p)$, called the \emph{improper relative face}. Given a set $F\subset{\mathbb{Z}}^n$, an integer $k\ge1$ and a point $z\in{\mathbb{Z}}^n$, we write $kF+z:=\{w^{(1)}+\cdots+w^{(k)}+z\colon w^{(1)},\dots,w^{(k)}\in F\}\subset{\mathbb{Z}}^n$. For a subset $E$ of ${\mathbb{Z}}^n$ and the above Laurent polynomial $p$ we write $p_E:=\sum_{w\in E}c_wx^w$. \begin{definition}\label{dfnstratum} Let $p\in{\mathbb{R}}[\mathsf{x},\mathsf{x}^{-1}]$ be a nonzero Laurent polynomial. 
Given a relative face $F$ of $\Log(p)$ and a finite subset $S$ of ${\mathbb{Z}}^n$, a {\emph{stratum of $S$ with respect to $F$}} is a nonempty subset $E\subset S$ such that \begin{enumerate}[(i)] \item\label{cond: stratumOne} there exist $k\ge1$ and $z\in{\mathbb{Z}}^n$ such that $E\subset kF+z$; and \item\label{cond: stratumTwo} whenever $E\subset kF+z$ for some $z\in{\mathbb{Z}}^n$ and some $k\ge1$, it follows that $E=(kF+z)\cap S$. \end{enumerate} A stratum $E$ of $S$ with respect to $F$ is \emph{dominant} if, in addition, the following holds: \begin{enumerate}[(i)] \setcounter{enumi}{2} \item \label{cond: stratumDominant} If $E\subset(k\Log(p)+z)\setminus(kF+z)$ for some $k\ge1$ and some $z\in{\mathbb{Z}}^n$, then $(kF+z)\cap S=\emptyset$. \end{enumerate} \end{definition} \begin{theorem}[Handelman {\cite[Theorem A]{Handelman86}}] \label{thm: HandelmanCharacterization} Let $p$ and $q$ be Laurent polynomials in ${\mathbb{R}}[\mathsf{x},\mathsf{x}^{-1}]$, where $p$ has nonnegative coefficients. Then $p^m q$ has nonnegative coefficients for some positive integer $m$ if, and only if, both the following conditions hold: \begin{enumerate}[\indent(a)] \item \label{cond: HandelmanConditionOne} For each dominant stratum $E$ of $\Log(q)$ with respect to the improper relative face $\Log(p)$, the polynomial $q_{E}$ is strictly positive on the interior of ${\mathbb{R}}_+^n$. \item \label{cond: HandelmanConditionTwo} For each proper relative face $F$ of $\Log(p)$, and each dominant stratum $E$ of $\Log(q)$ with respect to $F$, there exists a positive integer $m$ such that $p_F^m q_E$ has nonnegative coefficients. \end{enumerate} \end{theorem} \noindent Here, for a Laurent polynomial $f$, by ``$f$ has nonnegative coefficients'', we mean that all coefficients of $f$ are nonnegative. As observed in \cite{Handelman86}, the product of a suitable monomial with $p_F$ (resp. 
$q_E$) is a Laurent polynomial involving fewer than $n$ variables (when $F$ is proper), so that the condition \eqref{cond: HandelmanConditionTwo} is inductive. \section{First proof of Theorem \ref{thm: strictPositivstellensatz}} \label{sec: proofOfMainTheorem} We fix an integer $n\ge1$ and use the notation $\mathsf{x}=(x_1,\dots,x_n)$ and $[n]=\{1,\dots,n\}$. Given $z\in{\mathbb{Z}}^n$ and a subset $J$ of $[n]$, let $z_J:=(z_j)_{j\in J}\in{\mathbb{Z}}^J$ denote the corresponding truncation of~$z$. For a nonnegative integer $d$, we write $({\mathbb{Z}}^n_+)_d=\{w\in{\mathbb{Z}}^n\colon w_1\ge0,\dots,w_n\ge0$, $w_1+\cdots+w_n=d\}$. \begin{lemma}\label{domstrat} Let $p\in{\mathbb{R}}[\mathsf{x}]$ be a form of degree $d\ge1$ with strictly positive coefficients. Let $e\ge0$, and let $S\subset({\mathbb{Z}}^n_+)_e$ be a nonempty subset. \begin{itemize} \item[(a)] The relative faces of $\Log(p)$ are the sets $F_J:=\{w\in ({\mathbb{Z}}^n_+)_d \colon w_J=0\}$, where $J\subset[n]$ is a subset. \item[(b)] Let $J\subset [n]$. For each stratum $E$ of $S$ with respect to $F_J$, there exists $\beta\in {\mathbb{Z}}_+^J$ satisfying $|\beta|\le e$ such that $$E\>=\>E_{J,\beta}\>:=\>\{w\in S\colon w_J=\beta\}.$$ \item[(c)] If $S=({\mathbb{Z}}_+^n)_e$ and $\emptyset\ne J\subsetneq[n]$, the stratum $E_{J,\beta}$ of $S$ with respect to $F_J$ is dominant if and only if $\beta=0$. \end{itemize} \end{lemma} In particular, $E=S$ is the only stratum of $S$ with respect to the improper relative face $\Log(p)$ of $\Log(p)$, by (b). Note that this stratum is dominant for trivial reasons. \begin{proof} By assumption we have $\Log(p)=({\mathbb{Z}}^n_+)_d$. Denote this set by $F$. Assertion (a) is clear. Note that $J=\emptyset$ resp.\ $J=[n]$ gives $F_J=F$ resp.\ $F_J=\emptyset$. To prove (b), fix a subset $J\subset[n]$, and let $E\subset S$ be a stratum of $S$ with respect to $F_J$. So there exist $k\ge1$ and $z\in{\mathbb{Z}}^n$ such that $E=(kF_J+z)\cap S$.
By the particular shape of $F$ we have $$kF_J+z\>=\>\{w\in{\mathbb{Z}}^n\colon|w|=ke+|z|,\ w\ge z,\ w_J=z_J\},$$ where $w\ge z$ means $w_i\ge z_i$ for $i=1,\dots,n$. Therefore $E\subset E_{J,\beta}$ with $\beta:=z_J$. Note that $E_{J,\beta}$ can be nonempty only when $|\beta|\le e$. The proof of (b) will be completed if we show that $E_{J,\beta}\subset lF_J+y$ holds for suitable $l\ge1$ and $y\in{\mathbb{Z}}^n$. To this end it suffices to observe that there exist $l\ge1$ and $y\in{\mathbb{Z}}^n$ such that $ld\ge e$, $|y|=e-ld$, $y_J=\beta$ and $y_i\le0$ for $i\in[n]\setminus J$. These $l$ and $y$ will do the job. It remains to prove (c), so assume now that $S=({\mathbb{Z}}^n_+)_e$. Let $J\subsetneq [n]$ be a proper subset, and let $\beta\in{\mathbb{Z}}_+^J$ be such that $E_{J,\beta}$ is nonempty (hence a stratum of $S$). First assume $\beta\ne0$. There exist $k\ge1$ and $z\in{\mathbb{Z}}^n$ such that $0\le z_J\le\beta$ and $z_J\ne\beta$, such that $z_{[n]\setminus J} \le0$, and such that $|z|=e-kd$. Then $E_{J,\beta}\subset kF+z$ and $E_{J,\beta}\cap(kF_J+z)=\emptyset$. But there exists $w\in S$ with $w_J=z_J$, showing that $(kF_J+z) \cap S\ne\emptyset$, whence $E_{J,\beta}$ is not dominant. On the other hand, $E_{J,\beta}$ is easily seen to be dominant when $\beta=0$. \end{proof} We now give a first proof of Theorem \ref{thm: strictPositivstellensatz}. The implications \eqref{thm: someOddPos} $\Rightarrow$ \eqref{thm: somePos} and \eqref{thm: eventualPos} $\Rightarrow$ \eqref{thm: someOddPos} are trivial. To prove \eqref{thm: somePos} $\Rightarrow$ \eqref{thm: eventualPos}, it suffices to show the following apparently weaker statement: \begin{lemma}\label{weakerstatement} Given forms $f,\,g\in{\mathbb{R}}[x_1,\dots,x_n]$, where $f$ is nonconstant with strictly positive coefficients and where $g$ is strictly positive on ${\mathbb{R}}^n_+\setminus\{0\}$, there exists $l\ge1$ such that $f^lg$ has nonnegative coefficients. 
\end{lemma} Assuming that Lemma \ref{weakerstatement} has been shown, we can immediately state a stronger version of this lemma. Namely, under the same assumptions it follows that $f^lg$ actually has strictly positive coefficients for suitable $l\ge1$. Indeed, choose a form $g'$ with $\deg(g')=\deg(g)$ such that $g'$ has strictly positive coefficients and the difference $h:=g-g'$ is strictly positive on ${\mathbb{R}}_+^n\setminus\{0\}$, for instance $g'=c(x_1+\cdots+x_n)^{\deg(g)}$ with sufficiently small $c>0$. Applying Lemma \ref{weakerstatement} to $(f,h)$ instead of $(f,g)$ gives $l\ge1$ such that $f^lh$ has nonnegative coefficients. Since $f^lg'$ has strictly positive coefficients, the same is true for $f^lg=f^lg'+f^lh$. Now assume that condition \eqref{thm: somePos} of Theorem \ref{thm: strictPositivstellensatz} holds. Then the form $p$ is strictly positive on ${\mathbb{R}}_+^n\setminus\{0\}$. In order to prove \eqref{thm: eventualPos}, fix a form $q$ strictly positive on ${\mathbb{R}}_+^n\setminus\{0\}$ and apply the strengthened version of Lemma \ref{weakerstatement} to $(f,g)=(p^m,\,p^iq)$ for $0\le i\le m-1$. Taking the maximum over $i$, this gives $l\ge1$ such that $p^{lm+i}q$ has strictly positive coefficients for $0\le i\le m-1$; since multiplying by $p^m$ preserves strict positivity of the coefficients, $p^Mq$ has strictly positive coefficients for all $M\ge lm$, which is \eqref{thm: eventualPos}. So indeed it suffices to prove Lemma \ref{weakerstatement}. \begin{proof}[Proof of Lemma \ref{weakerstatement}] The case $n = 1$ is trivial. Suppose that $n>1$ and the above statement holds in fewer than $n$ variables. Let $\deg(f)=d\ge1$ and $\deg(g)=e$. As before, choose a form $g'$ with $\deg(g')=e$ and with strictly positive coefficients such that $h:=g-g'$ is strictly positive on ${\mathbb{R}}^n_+\setminus\{0\}$. This can be done in such a way that $\Log(h)=({\mathbb{Z}}^n_+)_e$, i.e.\ all coefficients of $h$ are nonzero. We shall verify that the pair $(f,h)$ satisfies the conditions in Theorem \ref{thm: HandelmanCharacterization}. Since $f$ has strictly positive coefficients, the only (dominant) stratum of $S=\Log(h)$ with respect to $F=\Log(f)$ is $E=S$, by Lemma \ref{domstrat}.
Thus $h_E=h$ is strictly positive on ${\mathbb{R}}_+^n\setminus\{0\}$, so that condition \eqref{cond: HandelmanConditionOne} is satisfied. Next, let $J\subset[n]$ be a proper nonempty subset. Using the notation of Lemma \ref{domstrat}, the only dominant stratum of $S=\Log(h) = ({\mathbb{Z}}_+^n)_e$ with respect to the proper relative face $F_J$ of $F=\Log(f)$ is $E:=E_{J,0}=\{w\in S\colon w_J=0\}$, according to Lemma \ref{domstrat}(c). Without loss of generality we may assume $J=\{r+1,\dots,n\}$ for some $1\le r<n$, where $J$ has cardinality $n-r$. Then $h_E$ is a form in ${\mathbb{R}}[x_1,\dots,x_r]$ that is strictly positive on ${\mathbb{R}}_+^r\setminus\{0\}$, since $h_E(x_1,\dots,x_r)=h(x_1,\dots,x_r,0,\dots,0)>0$ for all $(x_1,\dots,x_r)\in{\mathbb{R}}_+^r\setminus\{0\}$. Moreover, $f_{F_J}$ is a form in ${\mathbb{R}}[x_1,\dots,x_r]$ with strictly positive coefficients. By the inductive hypothesis there exists $m\ge1$ such that all coefficients of $(f_{F_J})^mh_E$ are nonnegative, which shows that $(f,h)$ satisfies condition \eqref{cond: HandelmanConditionTwo}. Therefore, by Theorem \ref{thm: HandelmanCharacterization}, there exists $l\ge1$ such that $f^lh$ has nonnegative coefficients. \end{proof} \section{Archimedean local-global principle for semirings} \label{sec:archlgpsemiring} Let $A$ be a (commutative unital) ring, and let $T\subset A$ be a subsemiring of $A$, i.e.\ a subset containing $0,\,1$ and closed under addition and multiplication. Recall that $T$ is said to be \emph{archimedean} if for any $f\in A$ there exists $n\in{\mathbb{Z}}$ with $n+f\in T$, i.e.\ if $T+{\mathbb{Z}}=A$. The real spectrum $\mathrm{Sper}(A)$ of $A$ (see e.g.\ \cite{bcr} 7.1, \cite{ma} 2.4) can be defined as the set of all pairs $\alpha=(\mathfrak{p},\le)$ where $\mathfrak{p}$ is a prime ideal of $A$ and $\le$ is an ordering of the residue field of~$\mathfrak{p}$. 
Given a semiring $T\subset A$, let $X_A(T)\subset\mathrm{Sper}(A)$ be the set of all $\alpha\in\mathrm{Sper}(A)$ such that $f\ge_\alpha0$ for every $f\in T$. We say that $f\in A$ satisfies $f\ge0$ (resp.\ $f>0$) on $X_A(T)$ if $f\ge_\alpha0$ (resp.\ $f>_\alpha0$) for every $\alpha\in X_A(T)$. We recall the archimedean Positivstellensatz in the following form. In a weaker form, this result was already proved by Krivine \cite{kr}. \begin{theorem}[{\cite{sw}} Corollary~2] \label{archpss} Let $A$ be a ring, and let $T\subset A$ be an archimedean semiring containing $\frac1n$ for some integer $n>1$. If $f\in A$ satisfies $f>0$ on $X_A(T)$, then $f\in T$. \end{theorem} We will need to apply Theorem \ref{archlgpsemiring} below, which is a local-global principle for archimedean semirings. A slightly weaker version of this result was already proved in \cite{bss} Theorem 6.5. We give a new proof which is considerably shorter than the proof in \cite{bss}. \begin{theorem}\label{archlgpsemiring} Let $A$ be a ring, let $T\subset A$ be an archimedean semiring containing $\frac1n$ for some integer $n>1$, and let $f\in A$. Assume that for any maximal ideal $\mathfrak{m}$ of $A$ there exists an element $s\in A\setminus\mathfrak{m}$ such that $s\ge0$ on $X_A(T)$ and $sf\in T$. Then $f\in T$. \end{theorem} \begin{proof} By the hypothesis, the elements $s$ lie in no common maximal ideal and hence generate the unit ideal, so there exist an integer $k\ge1$ and elements $s_1,\dots,s_k\in A$ with $\langle s_1,\dots,s_k\rangle=\langle1\rangle$, and with $s_if\in T$ and $s_i\ge0$ on $X_A(T)$ for $i=1,\dots,k$. By \cite{sch:surf} Prop.\ 2.7 there exist $a_1,\dots,a_k\in A$ with $\sum_{i=1}^ka_is_i=1$ and with $a_i>0$ on $X_A(T)$ ($i=1,\dots,k$). Since $T$ is archimedean, the last condition implies $a_i\in T$, by the Positivstellensatz \ref{archpss}. It follows that $f=\sum_{i=1}^k a_i(s_if)\in T$. \end{proof} \section{Second proof of Theorem \ref{thm: strictPositivstellensatz}} \label{sec: 2ndproofOfMainTheorem} As in the first proof, it suffices to prove Lemma \ref{weakerstatement}.
So let $f\in{\mathbb{R}}[\mathsf{x}]={\mathbb{R}}[x_1,\dots,x_n]$ be a form of degree $\deg(f)=d\ge1$ with strictly positive coefficients, say $f=\sum_{|\alpha|=d}c_\alpha x^\alpha$. Let $S\subset{\mathbb{R}}[\mathsf{x}]$ be the semiring consisting of all polynomials with nonnegative coefficients. We shall work with the ring $$A\>=\>\Bigl\{\frac p{f^r}\colon r\ge0,\ p\in{\mathbb{R}}[\mathsf{x}]_{dr}\Bigr\}$$ of homogeneous fractions of degree zero, considered as a subring of the field ${\mathbb{R}}(\mathsf{x})$ of rational functions. Let $V\subset\P^{n-1}$ be the complement of the projective hypersurface $f=0$. Then $V$ is an affine algebraic variety over ${\mathbb{R}}$, with affine coordinate ring ${\mathbb{R}}[V]=A$. As a ring, $A$ is generated by ${\mathbb{R}}$ and by the fractions $y_\alpha=\frac{x^\alpha}f$ where $|\alpha|=d$. Let $T$ be the subsemiring of $A$ generated by ${\mathbb{R}}_+$ and by the $y_\alpha$ ($|\alpha|=d$). So the elements of $T$ are precisely the fractions $\frac p{f^r}$, where $r\ge0$ and $p\in S$ is homogeneous of degree~$dr$. The semiring $T$ is archimedean, as follows from the identity $\sum_{|\alpha|=d}c_\alpha y_\alpha=1$ and from $c_\alpha>0$ for all $\alpha$ (\cite{BerrWormann} Lemma~1). First we prove Lemma \ref{weakerstatement} under an extra condition. \begin{lemma} \label{lem: assumeDegreeCondition} Let $f,\,g\in{\mathbb{R}}[x_1,\dots,x_n]$ be forms where $f$ is nonconstant with strictly positive coefficients and $g$ is strictly positive on ${\mathbb{R}}^n_+\setminus\{0\}$. If $\deg(f)$ divides $\deg(g)$, there exists $m\ge1$ such that $f^mg$ has nonnegative coefficients. \end{lemma} \begin{proof} Suppose that $r$ is a positive integer such that $\deg(g)=r\deg(f)$. Then the fraction $\frac g{f^r}$ lies in $A$ and is strictly positive on $X_A(T)$, since $g$ is positive on ${\mathbb{R}}^n_+$.
Hence the archimedean Positivstellensatz (Theorem \ref{archpss}) gives $\frac g{f^r}\in T$, and clearing denominators we get the desired conclusion. \end{proof} \begin{remark} When $\deg(f)=1$, Lemma \ref{lem: assumeDegreeCondition} is in fact P\'olya's Positivstellensatz \cite{Polya28}. In this case, our proof above becomes essentially the same as the proof of \cite{Polya28} given by Berr and W\"ormann in \cite{BerrWormann}. \end{remark} For the general case when $\deg(f)$ does not necessarily divide $\deg(g)$, we need a more refined argument as follows. It is similar to the approach in \cite{Scheiderer12}. \begin{proof} [Proof of Lemma \ref{weakerstatement}] Fix integers $k\ge0$, $r\ge0$ such that $k+e=dr$, and consider the fraction $\varphi:=\frac{x_1^kg}{f^r}\in A$. It suffices to show $\varphi\in T$. Indeed, this means that there are $s\ge0$ and $p\in S$, homogeneous of degree $ds$, such that $\varphi=\frac p{f^s}$. We may assume $s\ge r$, then $f^{s-r}x_1^kg$ has nonnegative coefficients. Clearly this implies that $f^{s-r}g$ has nonnegative coefficients. We prove $\varphi\in T$ by applying the local-global principle \ref{archlgpsemiring}. So let $\mathfrak{m}$ be a maximal ideal of $A$. Then $\mathfrak{m}$ corresponds to a closed point $z$ of the scheme $V$, and hence of $\P^{n-1}$. There exist real numbers $t_1,\dots,t_n>0$ such that the linear form $l=\sum_{i=1}^nt_ix_i$ does not vanish in $z$. Hence the element $\psi:=\frac{l^d}f$ of $A$ does not lie in $\mathfrak{m}$. On the other hand, $\psi>0$ on $X_A(T)$, since $l$ and $f$ are strictly positive on ${\mathbb{R}}^n_+\setminus\{0\}$. By Lemma \ref{lem: assumeDegreeCondition}, applied to $l$ and $g$, there exists an integer $N\ge1$ for which $l^Ng\in S$. Choose an integer $m\ge1$ so large that $md\ge N$. Then $$\psi^m\varphi\>=\>\frac{l^{md}x_1^kg}{f^{m+r}}$$ lies in $T$. From Theorem \ref{archlgpsemiring} we therefore deduce $\varphi\in T$, as desired. 
\end{proof} We conclude with an example, as promised in the introduction. \begin{example} \label{eq: DV} For $n \ge 2$ and even $d = 2k \ge 4$, the form $$p_{\lambda} = (x_1+x_2)^{2k} - \lambda x_1^kx_2^k + \sum_{\stackrel{|w| = 2k}{w_i \neq 0 {\text{ for some } i \ge 3}}} x^w\>\in{\mathbb{R}}[x_1,\dots,x_n]$$ of degree $d$ satisfies the equivalent conditions of Theorem \ref{thm: strictPositivstellensatz} and has a negative coefficient (of the monomial $x_1^kx_2^k$) whenever $\binom{2k}{k}<\lambda < 2^{2k - 1}$. Indeed, it suffices to check the case when $n = 2$, in which case the verification follows along the same lines as a result of D'Angelo-Varolin \cite[Theorem 3]{DV04}. \end{example} \section*{Appendix: Proof of Theorem \ref{thm: strictPositivstellensatz} from Catlin-D'Angelo's Theorem} In this appendix, we sketch how the results of Catlin-D'Angelo \cite{CD99} can be used to deduce Theorem \ref{thm: strictPositivstellensatz}. As in the first and second proofs of Theorem \ref{thm: strictPositivstellensatz}, it suffices to prove Lemma \ref{weakerstatement}. Let $\mathsf{z} = (z_1, \ldots, z_n)$. Denote by ${\mathbb{C}}[\mathsf{z}, \conj{\mathsf{z}}]$ the complex polynomial algebra in the indeterminates $z_1, \dots, z_n, \conj{z_1}, \dots, \conj{z_n}$. Equipped with conjugation, ${\mathbb{C}}[\mathsf{z}, \conj{\mathsf{z}}]$ has the structure of a commutative complex $\ast$-algebra. A polynomial $P \in {\mathbb{C}}[\mathsf{z}, \conj{\mathsf{z}}]$ is said to be {\emph{Hermitian}} if $P$ equals its conjugate $\conj{P}$. Equivalently, $P$ is Hermitian if and only if $P(z, \conj{z})$ is real for all $z \in {\mathbb{C}}^{n}$. A Hermitian polynomial $P\in {\mathbb{C}}[\mathsf{z}, \conj{\mathsf{z}}]$ is said to be \emph{positive on ${\mathbb{C}}^{n}\setminus\{0\}$} if $P(z, \conj{z}) > 0$ for all $z \in {\mathbb{C}}^{n}\setminus\{0\}$.
The {\emph{bidegree}} of a monomial $z^\alpha \conj{z}^{\beta} = z_1^{\alpha_1} z_2^{\alpha_2} \cdots z_n^{\alpha_n} \conj{z}_1^{\beta_1} \conj{z}_2^{\beta_2} \cdots \conj{z}_n^{\beta_n} \in {\mathbb{C}}[\mathsf{z},\conj{\mathsf{z}}]$ is $(|\alpha|,|\beta|) = (\alpha_1+ \cdots + \alpha_n, \beta_1 + \cdots+ \beta_n)$. A {\emph{bihomogeneous polynomial}} is a complex linear combination of monomials of the same bidegree. If a bihomogeneous polynomial $P = \sum_{|\alpha| = d, |\beta| = e} a_{\alpha\beta} z^\alpha \conj{z}^\beta$ is Hermitian, then $d = e$, i.e. $P$ has bidegree $(d, d)$. From \cite[Definition 2]{CD99}, a Hermitian bihomogeneous polynomial $P$ is said to satisfy the \emph{strong global Cauchy-Schwarz} (in short, \emph{SGCS}) \emph{inequality} if $|P(z, \conj{w})|^2 < P(z, \conj{z}) P(w, \conj{w})$ whenever $z, w\in {\mathbb{C}}^{n}$ are linearly independent. The following result is a special case of \cite[Corollary of Theorem 1]{CD99} (where the matrix $M$ of bihomogeneous polynomials in \cite[Corollary of Theorem 1]{CD99} has size $1 \times 1$). \begin{theorem}[Catlin-D'Angelo {\cite[Corollary of Theorem 1]{CD99}}] \label{thm: CDThm} Let $R \in {\mathbb{C}}[\mathsf{z},\conj{\mathsf{z}}]$ be a nonconstant Hermitian bihomogeneous polynomial such that $R$ is positive on ${\mathbb{C}}^n\setminus\{0\}$, the domain $\{z \in {\mathbb{C}}^{n} : R(z, \conj{z}) < 1\}$ is strongly pseudoconvex, and $R$ satisfies the SGCS inequality. Then for each Hermitian bihomogeneous polynomial $Q \in {\mathbb{C}}[\mathsf{z},\conj{\mathsf{z}}]$ positive on ${\mathbb{C}}^n \setminus\{0\}$, there exists $l \ge 1$ and polynomials $h_1,\ldots, h_N \in {\mathbb{C}}[\mathsf{z}] \subset{\mathbb{C}}[\mathsf{z},\conj{\mathsf{z}}]$ such that $R^l Q = \sum_{k = 1}^N h_k \conj{h_k}$. \end{theorem} \begin{proof}[Proof sketch of Lemma \ref{weakerstatement} from Theorem \ref{thm: CDThm}] Let $f = \sum_{|\alpha|=d} c_\alpha x^\alpha \in {\mathbb{R}}[\mathsf{x}]$ be a form of degree $d$. 
Suppose that $f$ is nonconstant with strictly positive coefficients. One verifies that $R := \sum_{|\alpha|=d} c_\alpha z^\alpha \conj{z}^\alpha$ is a nonconstant Hermitian bihomogeneous polynomial that is positive on ${\mathbb{C}}^n\setminus\{0\}$, the domain $\{z \in {\mathbb{C}}^{n} : R(z, \conj{z}) < 1\}$ is strongly pseudoconvex, and that $R$ satisfies the SGCS inequality. Now suppose that $g = \sum_{|\beta|= e} b_\beta x^\beta \in {\mathbb{R}}[\mathsf{x}]$ is a form of degree $e$ which is strictly positive on ${\mathbb{R}}^n_+\setminus\{0\}$. This implies that $Q := \sum_{|\beta|=e} b_\beta z^\beta \conj{z}^\beta$ is a Hermitian bihomogeneous polynomial that is positive on ${\mathbb{C}}^n\setminus\{0\}$. Thus we may apply Theorem \ref{thm: CDThm} to obtain $l \ge 1$ such that $R^l Q = \sum_{k = 1}^N h_k \conj{h_k}$ for some polynomials $h_1,\ldots, h_N \in {\mathbb{C}}[\mathsf{z}] \subset{\mathbb{C}}[\mathsf{z},\conj{\mathsf{z}}]$. Hence $R^l Q = \sum_{|\alpha| = |\beta| = ld + e} a_{\alpha\beta} z^\alpha \conj{z}^\beta$ for some positive semidefinite Hermitian matrix $A = (a_{\alpha\beta})_{|\alpha| = |\beta| = ld + e}$. Writing $f^lg = \sum_{|\gamma| = ld + e} a_{\gamma}^\prime x^\gamma$, we see that $A$ is in fact the diagonal matrix $\mathrm{diag}(a_{\gamma}^\prime)_{|\gamma| = ld + e}$. Since $A$ is positive semidefinite, all the coefficients $a_{\gamma}^\prime$ of $f^l g$ are nonnegative. This completes the proof of Lemma \ref{weakerstatement}. \end{proof}
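The example above can be checked by direct computation. The sketch below is ours, not from the paper: it takes the smallest admissible instance $n=2$, $k=2$, $\lambda=7$ (so $\binom42=6<\lambda<2^3=8$), represents binary forms as coefficient lists, and searches for the smallest $m$ with $p_\lambda^m$ having strictly positive coefficients.

```python
from math import comb

def multiply(a, b):
    """Convolve coefficient lists of binary forms; a[i] is the coefficient of x^(d-i) y^i."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

k, lam = 2, 7  # degree d = 2k = 4; binom(4, 2) = 6 < lam < 2^3 = 8
p = [comb(2 * k, i) for i in range(2 * k + 1)]
p[k] -= lam    # p = (x + y)^(2k) - lam * x^k y^k, one negative coefficient

power, m = p[:], 1
while not all(c > 0 for c in power):
    power = multiply(power, p)
    m += 1
print(m)  # smallest m with p^m strictly positive
```

For this instance $m=4$: $p^2$ has a vanishing coefficient (of $x^5y^3$) and $p^3$ still has a negative one, while every coefficient of $p^4$ is positive, so condition (\ref{thm: somePos}) holds (note also $p(1,1)=2^4-\lambda>0$).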
https://arxiv.org/abs/math/9905032
Asymptotics of Plancherel measures for symmetric groups
We consider the asymptotics of the Plancherel measures on partitions of $n$ as $n$ goes to infinity. We prove that the local structure of a Plancherel typical partition (which we identify with a Young diagram) in the middle of the limit shape converges to a determinantal point process with the discrete sine kernel. On the edges of the limit shape, we prove that the joint distribution of suitably scaled 1st, 2nd, and so on rows of a Plancherel typical diagram converges to the corresponding distribution for eigenvalues of random Hermitian matrices (given by the Airy kernel). This proves a conjecture due to Baik, Deift, and Johansson by methods different from the Riemann-Hilbert techniques used in their original papers math.CO/9810105 and math.CO/9901118 and from the combinatorial approach proposed by Okounkov in math.CO/9903176. Our approach is based on an exact determinantal formula for the correlation functions of the poissonized Plancherel measures involving a new kernel on the 1-dimensional lattice. This kernel is expressed in terms of Bessel functions and we obtain it as a degeneration of the hypergeometric kernel from the paper math.RT/9904010 by Borodin and Olshanski. Our asymptotic analysis relies on the classical asymptotic formulas for the Bessel functions and depoissonization techniques.
\section{Introduction}\label{s1} \subsection{Plancherel measures}\label{s11} Given a finite group $G$, by the corresponding Plancherel measure we mean the probability measure on the set $G^\wedge$ of irreducible representations of $G$ which assigns to a representation $\pi\in G^\wedge$ the weight $(\dim\pi)^2/|G|$. For the symmetric group $S(n)$, the set $S(n)^\wedge$ is the set of partitions $\lambda$ of the number $n$, which we shall identify with Young diagrams with $n$ squares throughout this paper. The Plancherel measure on partitions $\lambda$ arises naturally in representation--theoretic, combinatorial, and probabilistic problems. For example, the Plancherel distribution of the first part of a partition coincides with the distribution of the longest increasing subsequence of a uniformly distributed random permutation \cite{Sch}. We denote the Plancherel measure on partitions of $n$ by $M_n$ \begin{equation}\label{e0001} M_n(\lambda)=\frac{(\dim\lambda)^2}{n!} \,, \quad |\lambda|=n \,, \end{equation} where $\dim\lambda$ is the dimension of the corresponding representation of $S(n)$. The asymptotic properties of these measures as $n\to\infty$ have been studied very intensively, see the References and below. In the seventies, Logan and Shepp \cite{LS} and, independently, Vershik and Kerov \cite{VK1,VK2} discovered the following measure concentration phenomenon for $M_n$ as $n\to\infty$. Let $\lambda$ be a partition of $n$ and let $i$ and $j$ be the usual coordinates on the diagrams, namely, the row number and the column number. Introduce new coordinates $u$ and $v$ by \begin{equation*} u=\frac{j-i}{\sqrt{n}} \,, \quad v=\frac{i+j}{\sqrt{n}} \,, \end{equation*} that is, we flip the diagram, rotate it $135^\circ$ as in Figure \ref{fig1}, and scale it by the factor of $n^{-1/2}$ in both directions. 
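The connection recalled above between $\lambda_1$ under the Plancherel measure and the longest increasing subsequence of a random permutation, together with the Logan-Shepp and Vershik-Kerov result that $\lambda_1/\sqrt n\to2$, can be checked empirically. The sketch below is ours, not part of the paper; it uses patience sorting to compute the longest increasing subsequence length, and the sample size and seed are arbitrary.

```python
import random
from bisect import bisect_left

def lis_length(seq):
    """Length of the longest increasing subsequence (patience sorting, O(n log n))."""
    tails = []
    for x in seq:
        i = bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return len(tails)

random.seed(1)
n, trials = 4000, 25
avg = sum(lis_length(random.sample(range(n), n)) for _ in range(trials)) / trials
print(avg / n ** 0.5)  # empirically close to 2 (slightly below, due to the n^{1/6} correction)
```

The ratio comes out near $1.9$ for $n=4000$; the gap to the limiting constant $2$ is of order $n^{-1/3}$, reflecting the Tracy-Widom fluctuations discussed below.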
\begin{figure}[!h] \centering \scalebox{.85}{\includegraphics{limshape.eps}} \caption{The limit shape of a typical diagram.} \label{fig1} \end{figure} After this scaling, the Plancherel measures $M_n$ converge as $n\to\infty$ (see \cite{LS,VK1,VK2} for precise statements) to the delta measure supported on the following shape: \begin{equation*} \{ |u|\le 2, |u| \le v \le \Omega(u)\}\,, \end{equation*} where the function $\Omega(u)$ is defined by \begin{equation*} \Omega(u)=\begin{cases} \frac2\pi\, \left(u \arcsin(u/2) + \sqrt{4-u^2}\right)\,, & |u|\le 2 \,,\\ |u|\,, & |u|>2 \,. \end{cases} \end{equation*} The function $\Omega$ is plotted in Figure \ref{fig1}. As explained in great detail in \cite{Ke5}, this limit shape $\Omega$ is very closely connected to Wigner's semicircle law for the distribution of eigenvalues of random matrices, see also \cite{Ke2,Ke3,Ke4}. {}From a different point of view, the connection with random matrices was observed in \cite{BDJ1,BDJ2}, and also in the earlier papers \cite{J,R,Re}. In \cite{BDJ1}, Baik, Deift, and Johansson made the following conjecture. They conjectured that in the $n\to\infty$ limit and after proper scaling the joint distribution of $\lambda_i$, $i=1,2,\dots$, becomes identical to the joint distribution of the largest eigenvalues of a Gaussian random Hermitian matrix (which is known to be the so-called Airy ensemble, see Section \ref{s14}). They proved this for the individual distributions of $\lambda_1$ and $\lambda_2$ in \cite{BDJ1} and \cite{BDJ2}, respectively. A combinatorial proof of the full conjecture was given by one of us in \cite{O}. It was based on an interplay between maps on surfaces and ramified coverings of the sphere. In this paper we study the local structure of a typical Plancherel diagram both in the bulk of the limit shape $\Omega$ and on its edge, where by the study of the edge we mean the study of the behavior of $\lambda_1$, $\lambda_2$, and so on.
We employ an analytic approach based on an exact formula in terms of Bessel functions for the correlation functions of the so-called \emph{poissonization} of the Plancherel measures $M_n$, see Theorem \ref{t1} in the following section, and the so-called \emph{depoissonization} techniques, see Section \ref{s15}. The exact formula in Theorem \ref{t1} is a limit case of a formula from \cite{BO}, see also the recent paper \cite{O2} for a more general result. The use of poissonization and depoissonization is very much in the spirit of \cite{BDJ1,J,V} and represents the principle of the equivalence of ensembles, well known in statistical mechanics. Our two main results are the following. In the bulk of the limit shape $\Omega$, we prove that the local structure of a Plancherel typical partition converges to a determinantal point process with the discrete sine kernel, see Theorem \ref{t2}. This result is parallel to the corresponding result for random matrices. On the edge of the limit shape, we give an analytic proof of the Baik-Deift-Johansson conjecture, see Theorem \ref{t3}. These results will be stated in Subsections \ref{s13} and \ref{s14} of the present Introduction, respectively. Simultaneously and independently, results equivalent to our Theorems \ref{t1b} and \ref{t3} were obtained by K.~Johansson \cite{J2}. \subsection{Poissonization and correlation functions}\label{s12} For $\theta>0$, consider the {\it poissonization} $M^\theta$ of the measures $M_n$ \begin{equation*} M^\theta (\lambda) = e^{-\theta}\sum_n \frac{\theta^n}{n!}\, M_n(\lambda) = e^{-\theta} \theta^{|\lambda|} \, \left(\frac{\dim\lambda}{|\lambda|!} \right)^2 \,. \end{equation*} This is a probability measure on the set of all partitions. Our first result is the computation of the correlation functions of the measures $M^\theta$. By correlation functions we mean the following. By definition, set $$ \mathcal{D}(\lambda)=\{\lambda_i-i\}\subset\mathbb{Z} \,.
$$ Also, following \cite{VK3}, define the {\em modified Frobenius coordinates} $\operatorname{Fr}(\lambda)$ of a partition $\lambda$ by \begin{multline}\label{e0009} \operatorname{Fr}(\lambda)=\left(\mathcal{D}(\lambda)+{\textstyle \frac12}\right)\triangle \left(\mathbb{Z}_{\le 0} -{\textstyle \frac12}\right)\\ = \left\{p_1+{\textstyle \frac12},\dots,p_d+{\textstyle \frac12},-q_1-{\textstyle \frac12},\dots,-q_d-{\textstyle \frac12} \right\} \subset \mathbb{Z}+{\textstyle \frac12}\,, \end{multline} where $\triangle$ stands for the symmetric difference of two sets, $d$ is the number of squares on the diagonal of $\lambda$, and $p_i$'s and $q_i$'s are the usual Frobenius coordinates of $\lambda$. Recall that $p_i$ is the number of squares in the $i$th row to the right of the diagonal, and $q_i$ is the number of squares in the $i$th column below the diagonal. The equality \eqref{e0009} is a well known combinatorial fact discovered by Frobenius, see Ex.~I.1.15(a) in \cite{M}. Note that, in contrast to $\operatorname{Fr}(\lambda)$, the set $\mathcal{D}(\lambda)$ is infinite and, moreover, it contains all but finitely many negative integers. The sets $\mathcal{D}(\lambda)$ and $\operatorname{Fr}(\lambda)$ have the following nice geometric interpretation. Let the diagram $\lambda$ be flipped and rotated $135^\circ$ as in Figure \ref{fig1}, but not scaled. Denote by $\omega_\lambda$ a piecewise linear function with $\omega'_\lambda=\pm 1$ whose graph is given by the upper boundary of $\lambda$ completed by the lines $$ v=|u|\,, \quad u\notin[-\lambda'_1,\lambda_1] \,. $$ Then $$ k\in\mathcal{D}(\lambda) \Leftrightarrow \omega'_\lambda\Big|_{[k,k+1]}=-1 \,. $$ In other words, if we consider $\omega_\lambda$ as a history of a walk on $\mathbb{Z}$ then $\mathcal{D}(\lambda)$ are those moments when a step is made in the negative direction. It is therefore natural to call $\mathcal{D}(\lambda)$ the \emph{descent set} of $\lambda$.
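To make the combinatorics concrete, here is a small computational check of the identity \eqref{e0009} relating $\mathcal{D}(\lambda)$ and $\operatorname{Fr}(\lambda)$ (our own illustration, not part of the original text); since $\mathcal{D}(\lambda)$ is infinite, the comparison is performed inside a finite window of $\mathbb{Z}+\frac12$.

```python
def conjugate(lam):
    return [sum(1 for l in lam if l > j) for j in range(lam[0])] if lam else []

def descent_set(lam, lo):
    """D(lam) = {lam_i - i : i >= 1} intersected with [lo, infinity);
    here lam_i = 0 for i beyond the length of lam."""
    rows = max(len(lam), -lo) + 1
    vals = [(lam[i] if i < len(lam) else 0) - (i + 1) for i in range(rows)]
    return {v for v in vals if v >= lo}

def modified_frobenius(lam):
    """Fr(lam) = {p_i + 1/2} U {-q_i - 1/2}, in terms of the usual
    Frobenius coordinates p_i = lam_i - i, q_i = lam'_i - i."""
    conj = conjugate(lam)
    d = sum(1 for i, l in enumerate(lam, 1) if l >= i)
    fr = set()
    for i in range(1, d + 1):
        fr.add(lam[i - 1] - i + 0.5)       # p_i + 1/2
        fr.add(-(conj[i - 1] - i) - 0.5)   # -q_i - 1/2
    return fr

# Fr(lam) equals (D(lam)+1/2) symmetric-difference (Z_{<=0}-1/2), in a window
lam, N = (5, 3, 3, 1), 12
window = {k + 0.5 for k in range(-N, N)}
d_half = {v + 0.5 for v in descent_set(lam, -N - 1)}
z_half = {k - 0.5 for k in range(-N + 1, 1)}   # {-N+1/2, ..., -1/2}
assert modified_frobenius(lam) == (d_half ^ z_half) & window
```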
As we shall see, the correspondence $\lambda\mapsto\mathcal{D}(\lambda)$ is a very convenient way to encode the local structure of the boundary of $\lambda$. The halves in the definition of $\operatorname{Fr}(\lambda)$ have the following interpretation: one splits the diagonal squares in half and gives half to the rows and half to the columns. \begin{definition} The correlation functions of $M^\theta$ are the probabilities that the sets $\operatorname{Fr}(\lambda)$ or, respectively, $\mathcal{D}(\lambda)$ contain a fixed subset $X$. More precisely, we set \begin{alignat}{2} \rho^\theta(X)&= M^\theta\left(\left\{\lambda\,|\, X\subset \operatorname{Fr}(\lambda)\right\} \right)\,,\quad &&X\subset\mathbb{Z}+{\textstyle \frac12}\,,\label{e11a}\\ \boldsymbol{\varrho}^\theta(X)&= M^\theta\left(\left\{\lambda\,|\, X\subset \mathcal{D}(\lambda)\right\} \right)\,,\quad &&X\subset\mathbb{Z}\,. \label{e0006} \end{alignat} \end{definition} \begin{theorem}\label{t1} For any $X=\{x_1,\dots,x_s\}\subset \mathbb{Z}+{\textstyle \frac12}$ we have \begin{equation*} \rho^\theta(X)= \det \Big[\mathsf{K}(x_i,x_j)\Big]_{1\le i,j \le s}\,, \end{equation*} where the kernel $\mathsf{K}$ is given by the following formula \begin{equation}\label{e12a} \mathsf{K}(x,y)=\left\{ \begin{array}{ll} \displaystyle \sqrt{\theta}\,\,\frac{\kk_+(|x|,|y|)}{|x|-|y|}\,, & xy >0 \,,\\[10pt] \displaystyle \sqrt{\theta}\,\,\frac{\kk_-(|x|,|y|)}{x-y} \,, & xy<0 \,. \end{array} \right. \end{equation} The functions $\kk_\pm$ are defined by \begin{align}\label{e12} \kk_+(x,y)&= J_{x-\frac12}\, J_{y+\frac12} - J_{x+\frac12} \, J_{y-\frac12} \,, \\ \kk_-(x,y)&= J_{x-\frac12}\, J_{y-\frac12} + J_{x+\frac12} \, J_{y+\frac12} \label{e13}\,, \end{align} where $J_x=J_x(2\sqrt{\theta})$ is the Bessel function of order $x$ and argument $2\sqrt{\theta}$. \end{theorem} This theorem is established in Section \ref{s21}, see also Remark \ref{r11} below.
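As a numerical sanity check of Theorem \ref{t1} (our own addition, not part of the original argument), one can compare, for the one-point set $X=\{\frac12\}$ and $\theta=1$, the truncated Poisson sum defining $\rho^\theta(X)$ with the diagonal value $\mathsf{K}(\frac12,\frac12)$. Since $J_x$ is entire in the order $x$, the diagonal value is the limit of \eqref{e12a} as $y\to x$, which we evaluate by a central difference; all function names are ours.

```python
import math

def bessel_j(nu, w, terms=40):
    # J_nu(w) via the defining power series (adequate for small w, nu > -1)
    return sum((-1) ** m * (w / 2.0) ** (2 * m + nu)
               / (math.factorial(m) * math.gamma(m + nu + 1))
               for m in range(terms))

def partitions(n, max_part=None):
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def dim_lambda(lam):
    # hook length formula: dim = n! / (product of hook lengths)
    n = sum(lam)
    conj = [sum(1 for l in lam if l > j) for j in range(lam[0])] if lam else []
    hooks = 1
    for i, li in enumerate(lam):
        for j in range(li):
            hooks *= (li - j) + (conj[j] - i) - 1
    return math.factorial(n) // hooks

theta = 1.0
# rho^theta({1/2}), truncated at |lambda| <= 14: note that 1/2 is a modified
# Frobenius coordinate of lam exactly when lam_i = i for some i
rho = sum(math.exp(-theta) * theta ** n
          * (dim_lambda(lam) / math.factorial(n)) ** 2
          for n in range(15) for lam in partitions(n)
          if any(l == i for i, l in enumerate(lam, 1)))

# K(1/2,1/2) as the y -> 1/2 limit of the kernel, via central differencing
w, x, eps = 2 * math.sqrt(theta), 0.5, 1e-5
numer = lambda y: (bessel_j(x - 0.5, w) * bessel_j(y + 0.5, w)
                   - bessel_j(x + 0.5, w) * bessel_j(y - 0.5, w))
k_diag = -math.sqrt(theta) * (numer(x + eps) - numer(x - eps)) / (2 * eps)

assert abs(rho - k_diag) < 1e-4   # both are approximately 0.4749 at theta = 1
```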
By the complementation principle, see Sections \ref{sA3} and \ref{s21b}, Theorem \ref{t1} is equivalent to the following \begin{theorem}\label{t1b} For any $X=\{x_1,\dots,x_s\}\subset \mathbb{Z}$ we have \begin{equation}\label{e0005} \boldsymbol{\varrho}^\theta(X)= \det \Big[\mathsf{J}(x_i,x_j)\Big]_{1\le i,j \le s}\,, \end{equation} where the kernel $\mathsf{J}$ is given by the following formula \begin{equation}\label{e212} \mathsf{J}(x,y)=\mathsf{J}(x,y;\theta)=\sqrt{\theta}\,\, \frac{J_x\, J_{y+1} - J_{x+1} \, J_{y}} {x-y} \,, \end{equation} where $J_x=J_x(2\sqrt\theta)$. \end{theorem} \begin{remark}\label{r11} Theorem \ref{t1} is a limit case of Theorem 3.3 of \cite{BO}. For the reader's convenience a direct proof of it is given in Section \ref{s2}. Another proof of the results of \cite{BO} will appear in \cite{BO2}. Various limit cases of the results of \cite{BO} are discussed in \cite{BO3}. By different methods, the formula \eqref{e0005} was obtained by K.~Johansson \cite{J2}. A representation--theoretic proof of a more general formula than Theorem 3.3 of \cite{BO} has been subsequently given in \cite{O2}. \end{remark} \begin{remark} Observe that all Bessel functions involved in the above formulas are of integer order. Also note that ratios like $\mathsf{J}(x,y)$ are entire functions of $x$ and $y$ because $J_x$ is an entire function of $x$. In particular, the values $\mathsf{J}(x,x)$ are well defined. Various denominator--free formulas for the kernel $\mathsf{J}$ are given in Section \ref{s21}.
\end{remark} \subsection{Asymptotics in the bulk of the spectrum}\label{s13} Given a sequence of subsets \begin{equation*} X(n)=\left\{x_1(n)<\dots<x_s(n)\right\}\subset\mathbb{Z}\,, \end{equation*} where $s=|X(n)|$ is some fixed integer, we call this sequence {\em regular} if the following limits \begin{align} a_i &= \lim_{n\to\infty} \frac{x_i(n)}{\sqrt{n}}\,, \label{e14}\\ d_{ij}&=\lim_{n\to\infty}\left(x_i(n)-x_j(n)\right)\,, \label{e15} \end{align} exist, finite or infinite. Here $i,j=1,\dots,s$. Observe that if $d_{ij}$ is finite then $d_{ij}=x_i(n)-x_j(n)$ for $n\gg 0$. In the case when $X(n)$ can be represented as $X(n)=X'(n)\cup X''(n)$ and the distance between $X'(n)$ and $X''(n)$ goes to $\infty$ as $n\to\infty$ we shall say that the sequence {\it splits}; otherwise, we call it {\em nonsplit}. Obviously, $X(n)$ is nonsplit if and only if all $x_i(n)$ stay at a finite distance from each other. Define the correlation functions $\boldsymbol{\varrho}(n,\,\cdot\,)$ of the measures $M_n$ by the same rule as in \eqref{e0006} \begin{equation*} \boldsymbol{\varrho}(n,X)= M_n\left(\left\{\lambda\,|\, X\subset \mathcal{D}(\lambda)\right\} \right)\,. \end{equation*} We are interested in the limit of $\boldsymbol{\varrho}(n,X(n))$ as $n\to\infty$. This limit will be computed in Theorem \ref{t2} below. As we shall see, if $X(n)$ splits, then the limit correlations factor accordingly. Introduce the following {\em discrete sine kernel} which is a translation invariant kernel on the lattice $\mathbb{Z}$ $$ \mathsf{S}(k,l;a)=\mathsf{S}(k-l,a)\,, \quad k,l\in\mathbb{Z}\,, $$ depending on a real parameter $a$: \begin{align*} \mathsf{S}(k,a)&=\frac{\sin(\arccos(a/2)\,k)}{\pi k}\\ &= \frac{\sqrt{4-a^2}}{2\pi} \frac{U_{k-1}(a/2)}{k}\,, \quad k\in \mathbb{Z} \,. \end{align*} Here $U_k$ is the Tchebyshev polynomial of the second kind.
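The equality of the two expressions for $\mathsf{S}(k,a)$ rests on the classical identity $U_{k-1}(\cos\phi)=\sin(k\phi)/\sin\phi$ with $\phi=\arccos(a/2)$; a quick numerical confirmation (our own illustration, function names are ours):

```python
import math

def chebyshev_u(n, t):
    # Chebyshev polynomial of the second kind, U_n, via the three-term recurrence
    u_prev, u_cur = 1.0, 2.0 * t
    if n == 0:
        return u_prev
    for _ in range(n - 1):
        u_prev, u_cur = u_cur, 2.0 * t * u_cur - u_prev
    return u_cur

def sine_kernel(k, a):
    # discrete sine kernel S(k, a), for |a| < 2 and k != 0
    return math.sin(math.acos(a / 2.0) * k) / (math.pi * k)

for a in (-1.5, 0.0, 0.7):
    for k in range(1, 8):
        alt = math.sqrt(4 - a * a) / (2 * math.pi) * chebyshev_u(k - 1, a / 2) / k
        assert abs(sine_kernel(k, a) - alt) < 1e-12
```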
We agree that \begin{equation*} \mathsf{S}(0,a)=\frac{\arccos(a/2)}\pi\,, \quad \mathsf{S}(\infty,a)=0 \end{equation*} and also that $$ \mathsf{S}(k,a)= \cases 0\,,& \textup{$a\ge 2$, or $a\le -2$ and $k\ne 0$}\,,\\ 1 \,, &\textup{$a\le -2$ and $k= 0$}\,. \endcases $$ The following result describes the local structure of a Plancherel typical partition. \begin{theorem}\label{t2} Let $X(n)\subset\mathbb{Z}$ be a regular sequence and let the numbers $a_i$, $d_{ij}$ be defined by \eqref{e14}, \eqref{e15}. If $X(n)$ splits, that is, if $X(n)=X'(n)\cup X''(n)$ and the distance between $X'(n)$ and $X''(n)$ goes to $\infty$ as $n\to\infty$ then \begin{equation}\label{e1t2} \lim_{n\to\infty} \boldsymbol{\varrho}(n,X(n)) = \lim_{n\to\infty} \boldsymbol{\varrho}(n,X'(n)) \cdot \lim_{n\to\infty} \boldsymbol{\varrho}(n,X''(n))\,. \end{equation} If $X(n)$ is nonsplit then \begin{equation}\label{e2t2} \lim_{n\to\infty} \boldsymbol{\varrho}\left(n,X(n)\right) = \det \Big[\,\mathsf{S}(d_{ij},a) \Big]_{1\le i,j \le s}\,, \end{equation} where $\mathsf{S}$ is the discrete sine kernel and $a=a_1=a_2=\dots$. \end{theorem} We prove this theorem in Section \ref{s3}. \begin{remark} Notice that, in particular, Theorem \ref{t2} implies that, as $n\to\infty$, the shape of a typical partition $\lambda$ near any point of the limit curve $\Omega$ is described by a stationary random process. For distinct points on the curve $\Omega$ these random processes are independent. \end{remark} \begin{remark} By complementation, see Sections \ref{sA3} and \ref{s32}, one obtains from Theorem \ref{t2} an equivalent statement about the asymptotics of the following correlation functions \begin{equation*} \rho(n,X)= M_n\left(\left\{\lambda\,|\, X\subset \operatorname{Fr}(\lambda)\right\} \right)\,. \end{equation*} \end{remark} \begin{remark} The discrete sine kernel was studied before, see \cite{Wi1,Wi2}, mainly as a model case for the continuous sine kernel.
In particular, the asymptotics of Toeplitz determinants built from the discrete sine kernel was obtained by H.~Widom in \cite{Wi2} answering a question of F.~Dyson. As pointed out by S.~Kerov, this asymptotics has interesting consequences for the Plancherel measures. \end{remark} \begin{remark} Note that, in particular, Theorem \ref{t2} implies that the limit density (the 1-point correlation function) is given by \begin{equation}\label{e16} \boldsymbol{\varrho}(\infty,a)=\cases \frac1\pi\, {\arccos(a/2)}\,, & |a|\le 2\,,\\ 0\,, & a>2 \,,\\ 1\,, & a<-2 \,. \endcases \end{equation} This is in agreement with the Logan-Shepp-Vershik-Kerov result about the limit shape $\Omega$. More concretely, the function $\Omega$ is related to the density \eqref{e16} by \begin{equation*} \boldsymbol{\varrho}(\infty,u)=\frac{1-\Omega'(u)}2 \,, \end{equation*} which can be interpreted as follows. Approximately, we have \begin{equation*} \#\left\{i\,\left|\, \frac{\lambda_i}{\sqrt{n}} \in [u,u+\De u]\right.\right\} \approx \sqrt{n} \, \boldsymbol{\varrho}(\infty,u) \, \De u \,. \end{equation*} Set $w=\dfrac{i}{\sqrt{n}}$. Then the above relation reads $\De w \approx \boldsymbol{\varrho}(\infty,u) \, \De u $ and it should be satisfied on the boundary $v=\Omega(u)$ of the limit shape. Since $v=u+2w$, we conclude that \begin{equation*} \boldsymbol{\varrho}(\infty,u)\approx \frac{d w}{du}=\frac{1-\Omega'}2 \,, \end{equation*} as was to be shown. \end{remark} \begin{remark} The discrete sine-kernel $\mathsf{S}$ becomes especially nice near the diagonal, that is, where $a=0$. Indeed, \begin{equation*} \mathsf{S}(x,0)= \cases 1/2\,, & x=0 \,,\\ {(-1)^{(x-1)/2}}\Big/(\pi x) \,, & x=\pm1,\pm3,\dots \,, \\ 0\,, &x=\pm2,\pm4,\dots \,. \endcases \end{equation*} \end{remark} \subsection{Behavior near the edge of the spectrum and the Airy ensemble}\label{s14} The discrete sine kernel $\mathsf{S}(k,a)$ vanishes if $a\ge 2$.
Therefore, it follows from Theorem \ref{t2} that the limit correlations $\lim\boldsymbol{\varrho}(n,X(n))$ vanish if $a_i\ge 2$ for some $i$. However, as will be shown below in Proposition \ref{p41}, after a suitable scaling near the edge $u=2$, the correlation functions $\boldsymbol{\varrho}^\theta$ converge to the correlation functions given by the Airy kernel \cite{F,TW} \begin{equation*} \mathsf{A}(x,y)=\frac{A(x)A'(y)-A'(x)A(y)}{x-y}\,. \end{equation*} Here $A(x)$ is the Airy function: \begin{equation}\label{e17} A(x)=\frac1\pi\,\int_0^\infty \cos\left(\frac{u^3}3+xu\right)\,du. \end{equation} In fact, the following more precise statement is true about the behavior of the Plancherel measure near the edge $u=2$. By symmetry, everything we say about the edge $u=2$ applies to the opposite edge $u=-2$. Consider the random point process on $\mathbb{R}$ whose correlation functions are given by the determinants \begin{equation*} \rho^{\textup{Airy}}_k(x_1,\dots,x_k)= \det \Big[\,\mathsf{A}(x_i,x_j) \Big]_{1\le i,j \le k} \end{equation*} and let \begin{equation*} \zeta=(\zeta_1 > \zeta_2 > \zeta_3 > \dots ) \in\mathbb{R}^\infty \end{equation*} be its random configuration. We call the random variables $\zeta_i$'s the {\it Airy ensemble}. It is known \cite{F,TW} that the Airy ensemble describes the behavior of the (properly scaled) 1st, 2nd, and so on largest eigenvalues of a Gaussian random Hermitian matrix. The distribution of individual eigenvalues was obtained by Tracy and Widom in \cite{TW} in terms of certain Painlev\'e transcendents. It has been conjectured by Baik, Deift, and Johansson that the random variables \begin{equation*} \widetilde{\lambda}= \left(\widetilde{\lambda}_1 \ge \widetilde{\lambda}_2 \ge \dots \right)\,, \quad \widetilde{\lambda}_i = n^{1/3} \, \left( \frac{\lambda_i}{n^{1/2}}-2 \right) \end{equation*} converge, in distribution and together with all moments, to the Airy ensemble. 
They verified this conjecture for the individual distributions of $\lambda_1$ and $\lambda_2$ in \cite{BDJ1} and \cite{BDJ2}, respectively. In particular, in the case of $\lambda_1$, this generalizes the result of \cite{VK1,VK2} that $\frac{\lambda_1}{\sqrt n} \to 2$ in probability as $n\to\infty$. The computation of $\lim \frac{\lambda_1}{\sqrt n}$ was known as the Ulam problem; different solutions to this problem were given in \cite{AD,J,S}. Convergence of all expectations of the form \begin{equation}\label{e18a} \left\langle \prod_{k=1}^r \sum_{i=1}^\infty e^{t_k \widetilde{\lambda}_i} \right\rangle\,, \quad t_1,\dots,t_r>0\,, \quad r=1,2,\dots\,, \end{equation} to the corresponding quantities for the Airy ensemble was established in \cite{O}. The proof in \cite{O} was based on a combinatorial interpretation of \eqref{e18a} as the asymptotics in a certain enumeration problem for random surfaces. In the present paper we use different ideas to prove the following \begin{theorem} \label{t3} As $n\to\infty$, the random variables $\widetilde{\lambda}$ converge, in joint distribution, to the Airy ensemble. \end{theorem} This is done in Section \ref{s4} using methods described in the next subsection. The result stated in Theorem \ref{t3} was independently obtained by K.~Johansson in \cite{J2}. \subsection{Poissonization and depoissonization}\label{s15} We obtain Theorems \ref{t2} and \ref{t3} from Theorem \ref{t1} using the so-called depoissonization techniques. We recall that the fundamental idea of depoissonization is the following. Given a sequence $b_1,b_2,b_3,\dots$, its {\em poissonization} is, by definition, the function \begin{equation}\label{e18} B(\theta)=e^{-\theta} \sum_{k=1}^\infty \frac{\theta^k}{k!}\, b_k \,. \end{equation} Provided the $b_k$ do not grow too rapidly, this is an entire function of $\theta$. In combinatorics, it is usually called the exponential generating function of the sequence $\{b_k\}$.
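To illustrate the heuristic $B(n)\approx b_n$ on a toy example (our own, not from the original text), take the slowly varying sequence $b_k=1/(k+1)$, whose poissonization has the closed form $B(\theta)=(1-e^{-\theta})/\theta$; then $B(n)/b_n\to1$ as $n\to\infty$.

```python
import math

def b(k):
    # a toy, slowly varying sequence (our illustration)
    return 1.0 / (k + 1)

def poissonize(theta, terms=700):
    """B(theta) = e^{-theta} * sum_k theta^k b_k / k!, summed term by term
    to avoid overflow in theta^k and k! separately."""
    total, term = 0.0, math.exp(-theta)   # term = e^{-theta} theta^k / k!
    for k in range(terms):
        total += term * b(k)
        term *= theta / (k + 1)
    return total

# closed form: e^{-t} * sum_k t^k / (k! (k+1)) = (1 - e^{-t}) / t
assert abs(poissonize(5.0) - (1 - math.exp(-5.0)) / 5.0) < 1e-14

# depoissonization heuristic: B(n) is close to b_n for large n
n = 200
assert abs(poissonize(n) / b(n) - 1) < 0.01
```

The relative error here is of order $1/n$, consistent with the Poisson variable $\eta$ having fluctuations of size $\sqrt{n}$ over which $b_k$ varies little.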
Various methods of extracting asymptotics of sequences from their generating functions are classically known and widely used, see for example \cite{V} where such methods are used to obtain the limit shape of a typical partition under various measures on the set of partitions. A probabilistic way to look at the generating function \eqref{e18} is the following. If $\theta\ge 0$ then $B(\theta)$ is the expectation of $b_\eta$ where $\eta\in\{0,1,2,\dots\}$ is a Poisson random variable with parameter $\theta$. Because $\eta$ has mean $\theta$ and standard deviation $\sqrt\theta$, one expects that \begin{equation}\label{e19} B(n) \approx b_n\,, \quad n\to\infty \,, \end{equation} provided the variations of $b_k$ for $|k-n|\le \operatorname{const}\sqrt n$ are small. One possible regularity condition on $b_n$ which implies \eqref{e19} is monotonicity. In a very general and very convenient form, a depoissonization lemma for nonincreasing nonnegative $b_n$ was established by K.~Johansson in \cite{J}. We use this lemma in Section \ref{s4} to prove Theorem \ref{t3}. Another approach to depoissonization is to use a contour integral \begin{equation}\label{e110} b_n = \frac{n!}{2\pi i} \int_C \frac{B(z)\, e^{z}}{z^n} \, \frac{dz}{z} \,, \end{equation} where $C$ is any contour around $z=0$. Suppose, for a moment, that $b_n$ is constant $b=b_n=B(z)$. The function $e^z/z^n=e^{z-n\ln z}$ has a unique critical point $z=n$. If we choose $|z|=n$ as the contour $C$, then only neighborhoods of size $|z-n|\le \operatorname{const}\sqrt{n}$ contribute to the asymptotics of \eqref{e110}. Therefore, for general $\{b_n\}$, we still expect that provided the overall growth of $B(z)$ is under control and the variations of $B(z)$ for $|z-n| \le \operatorname{const}\sqrt n$ are small, the asymptotically significant contribution to \eqref{e110} will come from $z=n$. That is, we still expect \eqref{e19} to be valid. 
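Since $B(z)e^z=\sum_k b_k z^k/k!$ is entire, \eqref{e110} is Cauchy's coefficient formula and is exact for every $n$; only the asymptotic analysis requires the saddle point considerations above. For the toy sequence $b_k=1/(k+1)$ (our own illustration, with $B(z)e^z=(e^z-1)/z$) the contour integral over $|z|=n$ can be evaluated numerically:

```python
import cmath
import math

def contour_coefficient(n, samples=400):
    """b_n = (n!/(2 pi i)) * integral over |z|=n of B(z) e^z z^{-n-1} dz,
    evaluated by the trapezoidal rule; here B(z) e^z = (e^z - 1)/z
    corresponds to the toy sequence b_k = 1/(k+1)."""
    total = 0j
    for m in range(samples):
        z = n * cmath.exp(2 * math.pi * 1j * m / samples)
        f = (cmath.exp(z) - 1) / z      # B(z) e^z
        # with z = n e^{it}, dz = i z dt, so (1/(2 pi i)) dz/z = dt/(2 pi)
        total += f / z ** n
    return math.factorial(n) / samples * total.real

# the trapezoidal rule is spectrally accurate for this periodic integrand
assert abs(contour_coefficient(20) - 1 / 21) < 1e-10
```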
See, for example, \cite{JS} for a comprehensive discussion and survey of this approach. We use this approach to prove Theorem \ref{t2} in Section \ref{s3}. The growth conditions on $B(z)$ which are suitable in our situation are spelled out in Lemma \ref{l31}. In our case, the functions $B(\theta)$ are combinations of the Bessel functions. Their asymptotic behavior as $\theta\approx n \to\infty$ can be obtained directly from the classical results on asymptotics of Bessel functions which are discussed, for example, in Watson's fundamental treatise \cite{W}. These asymptotic formulas for Bessel functions are derived using the integral representations of Bessel functions and the steepest descent method. The different behavior of the asymptotics in the bulk $(-2,2)$ of the spectrum, near the edges $\pm 2$ of the spectrum, and outside of $[-2,2]$ is produced by the different location of the saddle point in these three cases. \subsection{Organization of the paper} Section \ref{s2} contains the proof of Theorems \ref{t1} and \ref{t1b} and also various formulas for the kernels $\mathsf{K}$ and $\mathsf{J}$. We also discuss a difference operator which commutes with $\mathsf{J}$ and its possible applications. Section \ref{s3} deals with the behavior of the Plancherel measure in the bulk of the spectrum; there we prove Theorem \ref{t2}. Theorem \ref{t3} and a similar result (Theorem \ref{t4}) for the poissonized measure $M^\theta$ are established in Section \ref{s4}. At the end of the paper there is an Appendix, where we have collected some necessary results about Fredholm determinants, point processes, and convergence of trace class operators. \subsection{Acknowledgements} In many different ways, our work was inspired by the work of J.~Baik, P.~Deift, and K.~Johansson, on the one hand, and by the work of A.~Vershik and S.~Kerov, on the other. It is our great pleasure to thank them for this inspiration and for many fruitful discussions.
\section{Correlation functions of the measures $M^\theta$}\label{s2} \subsection{Proof of Theorem \ref{t1}}\label{s21} As noted above, Theorem \ref{t1} is a limit case of Theorem 3.3 of \cite{BO}. That theorem concerns a family $\{M^{(n)}_{zz'}\}$ of probability measures on partitions of $n$, where $z,z'$ are certain parameters. When the parameters go to infinity, $M^{(n)}_{zz'}$ tends to the Plancherel measure $M_n$. Theorem 3.3 in \cite{BO} gives a determinantal formula for the correlation functions of the measure \begin{equation}\label{e21} M^\xi_{zz'} = (1-\xi)^t\,\sum_{n=1}^\infty \frac{(t)_n}{n!}\,\xi^n \, M^{(n)}_{zz'} \end{equation} in terms of a certain {\it hypergeometric kernel}. Here $t=zz'>0$ and $\xi\in(0,1)$ is an additional parameter. As $z,z'\to\infty$ and $\xi=\frac{\theta}{t}\to0$, the negative binomial distribution in \eqref{e21} tends to the Poisson distribution with parameter $\theta$. In the same limit, the hypergeometric kernel becomes the kernel $\mathsf{K}$ of Theorem \ref{t1}. The Bessel functions appear as a suitable degeneration of hypergeometric functions. Recently, these results of \cite{BO} were considerably generalized in \cite{O2}, where it was shown how this type of correlation functions computations can be done using simple commutation relations in the infinite wedge space. For the reader's convenience, we present here a direct and elementary proof which uses the same ideas as in \cite{BO} plus an additional technical trick, namely, differentiation with respect to $\theta$, which kills the denominators. This trick yields a denominator--free integral formula for the kernel $\mathsf{K}$, see Proposition \ref{p28}. Our proof here is a verification, not a deduction. For more conceptual approaches the reader is referred to \cite{BO2,O2}. Let $x,y\in\mathbb{Z}+{\textstyle \frac12}$.
Introduce the following kernel $\mathsf{L}$ \begin{equation*} \mathsf{L}(x,y;\theta)=\cases 0\,, & xy>0 \,, \\ \dfrac1{x-y}\dfrac{\theta^{(|x|+|y|)/2}}{\Gamma(|x|+{\textstyle \frac12})\, \Gamma(|y|+{\textstyle \frac12})} \,, & xy <0 \,. \endcases \end{equation*} We shall consider the kernels $\mathsf{K}$ and $\mathsf{L}$ as operators in the $\ell^2$ space on $\mathbb{Z}+{\textstyle \frac12}$. We recall that simple multiplicative formulas (for example, the hook formula) are known for the number $\dim\lambda$ in \eqref{e0001}. For our purposes, it is convenient to rewrite the hook formula in the following determinantal form. Let $\lambda=(p_1,\dots,p_d\,|\,q_1,\dots,q_d)$ be the Frobenius coordinates of $\lambda$, see Section \ref{s12}. We have \begin{equation}\label{e11} \frac{\dim\lambda}{|\lambda|!}=\det\left[\frac1{(p_i+q_j+1)\, p_i!\, q_j!} \right]_{1\le i,j\le d}\,. \end{equation} The following proposition follows from \eqref{e11} by a straightforward computation. \begin{proposition}\label{p21} Let $\lambda$ be a partition. Then \begin{equation}\label{e22} M^\theta(\lambda)= e^{-\theta}\, \det \Big[\mathsf{L}(x_i,x_j;\theta)\Big]_{1\le i,j \le s}\,, \end{equation} where $\operatorname{Fr}(\lambda)=\{x_1,\dots,x_s\}\subset\mathbb{Z}+{\textstyle \frac12}$ are the modified Frobenius coordinates of $\lambda$. \end{proposition} Let $\operatorname{Fr}_*\left(M^\theta\right)$ be the push-forward of $M^\theta$ under the map $\operatorname{Fr}$. Note that the image of $\operatorname{Fr}$ consists of sets $X\subset\mathbb{Z}+{\textstyle \frac12}$ having equally many positive and negative elements. For other $X\subset\mathbb{Z}+{\textstyle \frac12}$, the right-hand side of \eqref{e22} can be easily seen to vanish. Therefore $\operatorname{Fr}_*\left(M^\theta\right)$ is a determinantal point process (see the Appendix) corresponding to $\mathsf{L}$, that is, its configuration probabilities are determinants of the form \eqref{e22}.
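The determinantal form \eqref{e11} can be verified exactly in rational arithmetic (a sanity check of our own, not part of the original argument; function names are ours):

```python
import math
from fractions import Fraction

def dim_lambda(lam):
    # hook length formula: dim = n! / (product of hook lengths)
    n = sum(lam)
    conj = [sum(1 for l in lam if l > j) for j in range(lam[0])] if lam else []
    hooks = 1
    for i, li in enumerate(lam):
        for j in range(li):
            hooks *= (li - j) + (conj[j] - i) - 1
    return math.factorial(n) // hooks

def frobenius(lam):
    # usual Frobenius coordinates (p_1, ..., p_d | q_1, ..., q_d)
    conj = [sum(1 for l in lam if l > j) for j in range(lam[0])]
    d = sum(1 for i, l in enumerate(lam, 1) if l >= i)
    return ([lam[i] - (i + 1) for i in range(d)],
            [conj[i] - (i + 1) for i in range(d)])

def det(a):
    # exact determinant of a square matrix of Fractions, Gaussian elimination
    a, n, sign = [row[:] for row in a], len(a), 1
    for i in range(n):
        piv = next(r for r in range(i, n) if a[r][i] != 0)
        if piv != i:
            a[i], a[piv], sign = a[piv], a[i], -sign
        for r in range(i + 1, n):
            c = a[r][i] / a[i][i]
            a[r] = [a[r][col] - c * a[i][col] for col in range(n)]
    prod = Fraction(sign)
    for i in range(n):
        prod *= a[i][i]
    return prod

for lam in [(2, 1), (4, 3, 1), (5, 3, 3, 1)]:
    p, q = frobenius(lam)
    m = [[Fraction(1, (pi + qj + 1) * math.factorial(pi) * math.factorial(qj))
          for qj in q] for pi in p]
    assert det(m) == Fraction(dim_lambda(lam), math.factorial(sum(lam)))
```

For instance, $\lambda=(4,3,1)$ has $p=(3,1)$, $q=(2,0)$, and both sides equal $1/576=70/8!$.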
\begin{corollary}\label{c22} $\det(1+\mathsf{L})=e^{\theta}$. \end{corollary} This follows from the fact that $M^\theta$ is a probability measure. This is explained in Propositions \ref{p51} and \ref{p54} in the Appendix. Note that, in general, one needs to check that $\mathsf{L}$ is a trace class operator. However, because of the special form of $\mathsf{L}$, it suffices to check a weaker claim -- that $\mathsf{L}$ is a Hilbert--Schmidt operator, which is immediate. Theorem \ref{t1} now follows from general properties of determinantal point processes (see Proposition \ref{p55} in the Appendix) and the following \begin{proposition}\label{p23} $\mathsf{K}=\mathsf{L}\,(1+\mathsf{L})^{-1}$. \end{proposition} We shall need the following three identities for Bessel functions, which are degenerations of the identities (3.13)--(3.15) in \cite{BO} for the hypergeometric function. The first identity is due to Lommel (see \cite{W}, Section 3.2 or \cite{HTF}, 7.2.(60)) \begin{equation}\label{e23} J_\nu(2z)\,J_{1-\nu}(2z) + J_{-\nu}(2z)\, J_{\nu-1}(2z) = \frac{\sin\pi\nu}{\pi\, z} \,. \end{equation} The other two identities are the following. \begin{lemma}\label{l24} For any $\nu\ne 0,-1,-2,\dots$ and any $z\ne 0$ we have \begin{align} &\sum_{m=0}^\infty \frac1{m+\nu} \frac{z^m}{m!} \, J_m(2z) =\frac{\Gamma(\nu)\, J_\nu(2z)}{z^\nu} \,,\label{e24}\\ &\sum_{m=0}^\infty \frac1{m+\nu} \frac{z^m}{m!} \, J_{m+1}(2z) = \frac1z-\frac{\Gamma(\nu)\, J_{\nu-1}(2z)}{z^\nu} \label{e25}\,. \end{align} \end{lemma} \begin{proof} Another identity due to Lommel (see \cite{W}, Section 5.23, or \cite{HTF}, 7.15.(10)) reads \begin{equation*} \sum_{m=0}^\infty \frac{\Gamma(\nu-s+m)}{\Gamma(\nu+m+1)} \frac{z^m}{m!} \, J_{m+s}(2z) = \frac{\Gamma(\nu-s)}{\Gamma(s+1)} \, \frac{J_\nu(2z)}{z^{\nu-s}} \,. \end{equation*} Substituting $s=0$ we get \eqref{e24}.
Substituting $s=1$ yields \begin{equation}\label{e25a} \sum_{m=0}^\infty \frac1{(m+\nu)(m+\nu-1)} \frac{z^m}{m!} \, J_{m+1}(2z) = \frac{\Gamma(\nu-1)\, J_\nu(2z)}{z^{\nu-1}} \,. \end{equation} Let $r(\nu,z)$ be the difference between the left-hand side and the right-hand side of \eqref{e25}. Using \eqref{e25a} and the recurrence relation \begin{equation}\label{e26} J_{\nu+1} (2z)-\frac{\nu}{z} J_\nu(2z) + J_{\nu-1} (2z) = 0 \end{equation} we find that $r(\nu+1,z)=r(\nu,z)$. Hence, for any $z$, it is a periodic function of $\nu$, and it suffices to show that $\lim_{\nu\to\infty} r(\nu,z)=0$. Clearly, the left-hand side in \eqref{e25} goes to 0 as $\nu\to\infty$. From the defining series for $J_\nu$ it is clear that \begin{equation}\label{e27} J_\nu(2z)\sim \frac{z^\nu}{\Gamma(\nu+1)} \,, \quad \nu\to\infty \,, \end{equation} which implies that the right-hand side of \eqref{e25} also goes to $0$ as $\nu\to\infty$. This concludes the proof. \end{proof} \begin{proof}[Proof of Proposition \ref{p23}] It is convenient to set $z=\sqrt{\theta}$. Since the operator $1+\mathsf{L}$ is invertible, we have to check that \begin{equation*} \mathsf{K}+\mathsf{K}\, \mathsf{L} - \mathsf{L} = 0 \,. \end{equation*} This is clearly true for $z=0$; therefore, it suffices to check that \begin{equation}\label{e28} \dot \mathsf{K} + \dot \mathsf{K} \, \mathsf{L} + \mathsf{K}\dot \mathsf{L} -\dot \mathsf{L} = 0\,, \end{equation} where $\dot \mathsf{K} = \frac{\partial \mathsf{K}}{\partial z}$ and $\dot \mathsf{L} = \frac{\partial \mathsf{L}}{\partial z}$.
Using the formulas \begin{alignat}{2}\label{e29} \frac{d}{dz}\, J_x(2z)&=-&&2J_{x+1}(2z)+\frac{x}{z}\, J_{x}(2z) \\ &=&&2J_{x-1}(2z)-\frac{x}{z}\, J_{x}(2z) \nonumber \end{alignat} one computes \begin{equation*} \dot \mathsf{K}(x,y)= \cases J_{|x|-\frac12}\, J_{|y|+\frac12} + J_{|x|+\frac12} \, J_{|y|-\frac12} \,, & xy >0 \,,\\ \operatorname{sgn}(x) \left(J_{|x|-\frac12}\, J_{|y|-\frac12} - J_{|x|+\frac12} \, J_{|y|+\frac12} \right) \,, & xy <0\,, \endcases \end{equation*} where $J_x=J_x(2z)$. Similarly, \begin{equation*} \dot \mathsf{L} (x,y)=\cases 0\,, & xy>0 \,, \\ \operatorname{sgn}(x)\, \dfrac{z^{|x|+|y|-1}}{\Gamma(|x|+{\textstyle \frac12})\, \Gamma(|y|+{\textstyle \frac12})} \,, & xy <0 \,. \endcases \end{equation*} Now the verification of \eqref{e28} becomes a straightforward application of the formulas \eqref{e24} and \eqref{e25}, except for the occurrence of the singularity $\nu\in\mathbb{Z}_{\le 0}$ in those formulas. This singularity is resolved using \eqref{e23}. This concludes the proof of Proposition \ref{p23} and Theorem \ref{t1}. \end{proof} \subsection{Proof of Theorem \ref{t1b}}\label{s21b} Recall that by construction $$ \operatorname{Fr}(\lambda)=\left(\mathcal{D}(\lambda)+{\textstyle \frac12}\right)\triangle \left(\mathbb{Z}_{\le 0} -{\textstyle \frac12}\right)\,. $$ Let us check that this and Proposition \ref{pAc} imply Theorem \ref{t1b}. In Proposition \ref{pAc} we substitute $$ \mathfrak{X}=\mathbb{Z}+{\textstyle \frac12}\,, \quad Z=\mathbb{Z}_{\le 0}-{\textstyle \frac12}\,, \quad K=\mathsf{K}\,. $$ By definition, set $$ \varepsilon(x)=\operatorname{sgn}(x)^{x+1/2}\,, \quad x\in\mathbb{Z}+{\textstyle \frac12}\,.
$$ We have the following \begin{lemma}\label{l0001} $\mathsf{K}^\triangle(x,y) = \varepsilon(x)\, \varepsilon(y) \, \mathsf{J}(x-{\textstyle \frac12},y-{\textstyle \frac12})$. \end{lemma} Since the $\varepsilon$-factors cancel out of all determinantal formulas, this lemma together with Proposition \ref{pAc} establishes the equivalence of Theorems \ref{t1} and \ref{t1b}. \begin{proof}[Proof of Lemma \ref{l0001}] Using the relation $$ J_{-n}=(-1)^n J_n $$ and the definition of $\mathsf{K}$ one computes \begin{equation}\label{e210} \mathsf{K}(x,y) = \operatorname{sgn}(x)\, \varepsilon(x)\, \varepsilon(y) \, \mathsf{J}(x-{\textstyle \frac12},y-{\textstyle \frac12})\,, \quad x\ne y\,. \end{equation} Clearly, the relation \eqref{e210} remains valid for $x=y>0$. It remains to consider the case $x=y<0$. In this case we have to show that $$ 1-\mathsf{K}(x,x)=\mathsf{J}(x-{\textstyle \frac12},x-{\textstyle \frac12})\,, \quad x\in\mathbb{Z}_{\le 0}-{\textstyle \frac12} \,. $$ Rewrite it as \begin{equation}\label{e004} 1-\mathsf{J}(k,k)=\mathsf{J}(-k-1,-k-1)\,, \quad k=-x-{\textstyle \frac12}\in\mathbb{Z}_{\ge 0}\,. \end{equation} By \eqref{e213} this is equivalent to \begin{multline*} 1-\sum_{m=0}^\infty (-1)^m\,\frac{(2k+m+2)_m}{\Gamma(k+m+2)\Gamma(k+m+2)}\, \frac{\theta^{k+m+1}}{m!}\\ =\sum_{n=0}^\infty (-1)^n\,\frac{(-2k+n)_n}{\Gamma(-k+n+1)\Gamma(-k+n+1)}\, \frac{\theta^{-k+n}}{n!}\,. \end{multline*} Examine the right--hand side. The terms with $n=0,\dots,k-1$ vanish because then $1/\Gamma(-k+n+1)=0$. The term with $n=k$ is equal to 1, which corresponds to the 1 on the left--hand side. Next, the terms with $n=k+1,\dots,2k$ vanish because for these values of $n$, the expression $(-2k+n)_n$ vanishes. Finally, for $n\ge 2k+1$, set $n=2k+1+m$. Then the $n$th term in the second sum is equal to minus the $m$th term in the first sum. Indeed, this follows from the trivial relation $$ -(-1)^m\,\frac{(2k+m+2)_m}{m!}=(-1)^n\,\frac{(-2k+n)_n}{n!}\,, \qquad n=2k+1+m. $$ This concludes the proof.
\end{proof} \subsection{Various formulas for the kernel $\mathsf{J}$}\label{s22} Recall that since $J_x$ is an entire function of $x$, the function $\mathsf{J}(x,y)$ is entire in $x$ and $y$. We shall now obtain several denominator--free formulas for the kernel $\mathsf{J}$. \begin{proposition}\label{p27} \begin{equation}\label{e213} \mathsf{J}(x,y;\theta)=\sum_{m=0}^\infty(-1)^m\, \frac{(x+y+m+2)_m}{\Gamma(x+m+2)\Gamma(y+m+2)}\, \frac{\theta^{\frac{x+y}2+m+1}}{m!}\,. \end{equation} \end{proposition} \begin{proof} Straightforward computation using a formula due to Nielsen, see Section 5.41 of \cite{W} or \cite{HTF}, formula 7.2.(48) . \end{proof} \begin{proposition}\label{p28} Suppose $x+y>-2$. Then \begin{equation*} \mathsf{J}(x,y;\theta)=\frac12\, \int_0^{2\sqrt\theta} (J_x(z)\, J_{y+1}(z) + J_{x+1}(z)\, J_{y}(z))\, dz \end{equation*} \end{proposition} \begin{proof} Follows from a computation done in the proof of Proposition \ref{p23} \begin{equation*} \frac{\partial}{\partial\theta} \, \mathsf{J}(x,y;\theta) = \frac{1}{2\sqrt{\theta}} \, (J_x \, J_{y+1} + J_{x+1}\, J_{y}) \,, \quad J_x=J_x(2\sqrt{\theta})\,, \end{equation*} and the following corollary of \eqref{e213} $$ \mathsf{J}(x,y;0)=0\,, \quad x+y>-2\,. $$ \end{proof} \begin{remark} Observe that by Proposition \ref{p28} the operator $\frac{\partial\mathsf{J}}{\partial\theta}$ is a sum of two operators of rank 1. \end{remark} \begin{proposition}\label{p29} \begin{equation}\label{e214} \mathsf{J}(x,y;\theta)=\sum_{s=1}^\infty J_{x+s}\, J_{y+s}\,, \qquad J_x=J_x(2\sqrt\theta). \end{equation} \end{proposition} \begin{proof} Our argument is similar to an argument due to Tracy and Widom, see the proof of the formula (4.6) in \cite{TW}. 
The recurrence relation \eqref{e26} implies that \begin{equation}\label{e215} \mathsf{J}(x+1,y+1)-\mathsf{J}(x,y)=-J_{x+1}\, J_{y+1}\,. \end{equation} Consequently, the difference between the left-hand side and the right-hand side of \eqref{e214} is a function which depends only on $x-y$. Let $x$ and $y$ go to infinity in such a way that $x-y$ remains fixed. Because of the asymptotics \eqref{e27} both sides in \eqref{e214} tend to zero and, hence, the difference is actually 0. \end{proof} In the same way as in \cite{TW} this results in the following \begin{corollary}\label{c210} For any $a\in\mathbb{Z}$, the restriction of the kernel $\mathsf{J}$ to the subset $\{a,a+1,a+2,\dots\}\subset\mathbb{Z}$ determines a nonnegative trace class operator in the $\ell^2$ space on that subset. \end{corollary} \begin{proof} By Proposition \ref{p29}, the restriction of $\mathsf{J}$ to $\{a,a+1,a+2,\dots\}$ is the square of the kernel $(x,y)\mapsto J_{x+y+1-a}(2\sqrt\theta)$. Since the latter kernel is real and symmetric, the kernel $\mathsf{J}$ is nonnegative. Hence, it remains to prove that its trace is finite. Again, by Proposition \ref{p29}, this trace is equal to \begin{equation*} \sum_{s=1}^\infty s \, (J_{a+s}(2\sqrt\theta))^2. \end{equation*} This sum is clearly finite by \eqref{e27}. \end{proof} \begin{remark} The kernel $\mathsf{J}$ resembles a Christoffel--Darboux kernel and, in fact, the operator in $\ell^2(\mathbb{Z})$ defined by the kernel $\mathsf{J}$ is an Hermitian projection operator. Recall that $\mathsf{K}=\mathsf{L}(1+\mathsf{L})^{-1}$, where $\mathsf{L}$ is of the form $$ \mathsf{L}=\begin{bmatrix} 0 & A \\ -A^* & 0 \end{bmatrix} $$ One can prove that this together with Lemma \ref{l0001} implies that $\mathsf{J}$ is an Hermitian projection kernel. However, in contrast to a Christoffel--Darboux kernel, it projects to an infinite--dimensional subspace.
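The projection property stated here can be illustrated numerically from the expansion \eqref{e214} together with $J_{-n}=(-1)^nJ_n$. The sketch below is an illustration only; the window $[-60,60]$, the truncation of the $s$-sum, and the value $2\sqrt\theta=10$ are ad hoc choices, and the integer-order Bessel values are computed from their defining Taylor series in double precision.

```python
import math

def bessel_j(n, z, terms=60):
    # integer-order Bessel function J_n(z) from its Taylor series;
    # negative orders via J_{-n} = (-1)^n J_n
    if n < 0:
        return (-1.0) ** (-n) * bessel_j(-n, z, terms)
    t = (z / 2.0) ** n / math.factorial(n)
    s = t
    for m in range(1, terms):
        t *= -(z * z / 4.0) / (m * (n + m))
        s += t
    return s

z = 10.0                 # z = 2*sqrt(theta), i.e. theta = 25
W, S = 60, 90            # spatial window [-W, W]; truncation of the s-sum
J = {n: bessel_j(n, z) for n in range(-W - S, W + S + 1)}

def kernel(x, y):
    # J(x, y; theta) = sum_{s >= 1} J_{x+s} J_{y+s}, formula (e214)
    return sum(J[x + s] * J[y + s] for s in range(1, S))

pts = list(range(-W, W + 1))
K = {(x, y): kernel(x, y) for x in pts for y in pts}

# idempotence K o K = K, checked away from the edges of the window,
# where the truncation of the lattice is not felt
err = max(abs(sum(K[x, m] * K[m, y] for m in pts) - K[x, y])
          for x in range(-25, 26) for y in range(-25, 26))
assert err < 1e-8
```

The check rests on the classical orthogonality $\sum_{m\in\mathbb{Z}}J_{m+s}(z)J_{m+t}(z)=\delta_{st}$, which makes $\mathsf{J}\circ\mathsf{J}=\mathsf{J}$ exact on the full lattice.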
\end{remark} \subsection{Commuting difference operator} Consider the difference operators $\De$ and $\nabla$ on the lattice $\mathbb{Z}$, $$ (\De f)(k)=f(k+1)-f(k)\,, \qquad (\nabla f)(k)=f(k)-f(k-1)\,. $$ Note that $\nabla=-\De^*$ as operators on $\ell^2(\mathbb{Z})$. Consider the following second order difference Sturm--Liouville operator \begin{equation}\label{e0002} D=\De\circ\alpha\circ\nabla +\beta\,, \end{equation} where $\alpha$ and $\beta$ are operators of multiplication by certain functions $\alpha(k)$, $\beta(k)$. The operator \eqref{e0002} is self--adjoint in $\ell^2(\mathbb{Z})$. A straightforward computation shows that \begin{multline}\label{e0003} \big[D f\big](k)=(-\alpha(k+1)-\alpha(k)+\beta(k))f(k)+\\ \alpha(k)f(k-1)+\alpha(k+1)f(k+1)\,. \end{multline} It follows that if $\alpha(s)=0$ for a certain $s\in\mathbb{Z}$ then the space of functions $f(k)$ vanishing for $k<s$ is invariant under $D$. \begin{proposition} Let $[\mathsf{J}]_s$ denote the operator in $\ell^2(\{s,s+1,\dots\})$ obtained by restricting the kernel $\mathsf{J}$ to $\{s,s+1,\dots\}$. Then the difference Sturm--Liouville operator \eqref{e0002} commutes with $[\mathsf{J}]_s$ provided $$ \alpha(k)=k-s, \qquad \beta(k)=-\,\frac{k(k+1-s-2\sqrt\theta)}{\sqrt\theta}+\textup{const} \,. $$ \end{proposition} \begin{proof} Since $[\mathsf{J}]_s$ is the square of the operator with the kernel $J_{k+l+1-s}$, it suffices to check that the latter operator commutes with $D$, with the above choice of $\alpha$ and $\beta$. But this is readily checked using \eqref{e0003}. \end{proof} This proposition is a counterpart of a known fact about the Airy kernel, see \cite{TW}. 
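The commutation asserted in the proposition can also be confirmed numerically on a truncated window. In the sketch below (an illustration with ad hoc parameters $\theta=4$, $s=3$, window size $40$; the constant in $\beta$ is set to $0$, since it cancels in the commutator), the lower edge $k=s$ is handled exactly because $\alpha(s)=0$, while the error introduced by the artificial upper edge is confined to the last row and column, which are excluded from the check:

```python
import math

def bessel_j(n, z, terms=60):
    # integer-order Bessel function J_n(z) from its Taylor series
    t = (z / 2.0) ** n / math.factorial(n)
    s = t
    for m in range(1, terms):
        t *= -(z * z / 4.0) / (m * (n + m))
        s += t
    return s

theta = 4.0
rt = math.sqrt(theta)          # 2*sqrt(theta) = 4
s0 = 3                         # the cut-off s of the proposition
N = 40                         # truncation size
idx = list(range(s0, s0 + N))

# the kernel (k, l) -> J_{k+l+1-s}(2 sqrt(theta)), whose square is [J]_s;
# it suffices to check that D commutes with this kernel
A = [[bessel_j(k + l + 1 - s0, 2.0 * rt) for l in idx] for k in idx]

def alpha(k):
    return k - s0

def beta(k):
    return -k * (k + 1 - s0 - 2.0 * rt) / rt   # const = 0; it cancels anyway

# the tridiagonal matrix of D, from (e0003)
D = [[0.0] * N for _ in range(N)]
for i, k in enumerate(idx):
    D[i][i] = -alpha(k + 1) - alpha(k) + beta(k)
    if i > 0:
        D[i][i - 1] = alpha(k)
    if i < N - 1:
        D[i][i + 1] = alpha(k + 1)

# D A - A D vanishes away from the artificial upper boundary
err = max(abs(sum(D[i][m] * A[m][j] - A[i][m] * D[m][j] for m in range(N)))
          for i in range(N - 1) for j in range(N - 1))
assert err < 1e-8
```

Since $D$ is tridiagonal, the truncated products agree exactly with the infinite ones for all but the last row and column, so the residual error is pure floating-point rounding.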
Moreover, in the scaling limit when $\theta\to\infty$ and $$ k=2\sqrt\theta+x\,\theta^{1/6}, \qquad s=2\sqrt\theta+\varsigma\,\theta^{1/6}, $$ the difference operator $D$ becomes, for a suitable choice of the constant, the differential operator $$ \frac d{dx}\circ(x-\varsigma)\circ \frac d{dx} -x(x-\varsigma), $$ which commutes with the Airy operator restricted to $(\varsigma,+\infty)$. The above differential operator is exactly that of Tracy and Widom \cite{TW}. \begin{remark} Presumably, this commuting difference operator can be used to obtain, as was done in \cite{TW} for the Airy kernel, asymptotic formulas for the eigenvalues of $[\mathsf{J}]_s$, where $s=2\sqrt\theta+\varsigma\,\theta^{1/6}$ and $\varsigma\ll 0$. Such asymptotic formulas may be very useful if one wishes to refine Theorem \ref{t3} and to establish convergence of moments in addition to convergence of distribution functions. For individual distributions of $\lambda_1$ and $\lambda_2$ the convergence of moments was obtained, by other methods, in \cite{BDJ1,BDJ2}. \end{remark} \section{Correlation functions in the bulk of the spectrum}\label{s3} \subsection{Proof of Theorem \ref{t2}} We refer the reader to Section \ref{s13} of the Introduction for the definition of a regular sequence $X(n)\subset\mathbb{Z}$ and the statement of Theorem \ref{t2}. Also, in this section, we shall be working in the bulk of the spectrum, that is, we shall assume that all numbers $a_i$ defined in \eqref{e14} lie inside $(-2,2)$. The edges $\pm2$ of the spectrum and its exterior will be treated in the next section. In our proof, we shall follow the strategy explained in Section \ref{s15}.
Namely, in order to compute the limit of $\boldsymbol{\varrho}(n,X(n))$ we shall use the contour integral \begin{equation*} \boldsymbol{\varrho}(n,X(n)) = \frac{n!}{2\pi i} \int_{|\theta|=n} \boldsymbol{\varrho}^\theta(X(n))\, \frac{e^\theta}{\theta^{n+1}} \, d\theta \,, \end{equation*} compute the asymptotics of $\boldsymbol{\varrho}^\theta$ for $\theta\approx n$, and estimate $|\boldsymbol{\varrho}^\theta|$ away from $\theta=n$. Both tasks will be accomplished using classical results about the Bessel functions. We start our proof with the following lemma which formalizes the above informal depoissonization argument. The hypothesis of this lemma is very far from optimal, but it is sufficient for our purposes. For the rest of this section, we fix a number $0<\alpha<1/4$ which shall play an auxiliary role. \begin{lemma}\label{l31} Let $\{f_n\}$ be a sequence of entire functions \begin{equation*} f_n(z)=e^{-z} \sum_{k\ge 0} \frac{f_{nk}}{k!} \, z^k \,, \quad n =1,2,\dots \,, \end{equation*} and suppose that there exist constants $f_\infty$ and $\gamma$ such that \begin{align} &\max_{|z|= n} \left|f_n(z)\right| = O\left(e^{\gamma\,\sqrt{n}}\right) \label{A} \\ & \max_{|z/n-1|\le n^{-\alpha}} \left|f_n(z)-f_\infty\right| e^{-\gamma|z-n|/\sqrt{n}} = o(1) \label{B}\,, \end{align} as $n\to\infty$. Then \begin{equation*} \lim_{n\to\infty} f_{nn} = f_\infty\,. \end{equation*} \end{lemma} \begin{proof} By replacing $f_n(z)$ by $f_n(z)-f_\infty$, we may assume that $f_\infty=0$. By the Cauchy and Stirling formulas, we have \begin{equation*} f_{nn} =(1+o(1))\, \sqrt{\frac{n}{2\pi}} \, \int_{|\zeta|=1} \frac{f_n(n\zeta)\, e^{n(\zeta-1)}}{\zeta^n} \, \frac{d\zeta}{i\zeta} \,. \end{equation*} Choose some large $C>0$ and split the circle $|\zeta|=1$ into two parts as follows: \begin{equation*} S_1=\left\{ \frac{C}{n^{1/4}}\le |\zeta-1|\right\}\,, \quad S_2=\left\{ \frac{C}{n^{1/4}}\ge|\zeta-1|\right\} \,.
\end{equation*} The inequality \eqref{A} and the equality \begin{equation*} \left|e^{n(\zeta-1)}\right|=e^{-n|\zeta-1|^2/2} \end{equation*} imply that the integral $\int_{S_1}$ decays exponentially provided $C$ is large enough. On $S_2$, the inequality \eqref{B} applies for sufficiently large $n$ and gives \begin{equation*} \max_{z\in S_2} \left|f_n(n\zeta)\right| e^{-\gamma\sqrt{n}|\zeta-1|} = o(1) \,. \end{equation*} Therefore, the integral $\int_{S_2}$ is $o(1)$ times the following integral \begin{equation*} \sqrt{n} \int_{|\zeta|=1} \frac{d\zeta}{i\zeta} \, \exp\left(-n\frac{|\zeta-1|^2}2 + \gamma\sqrt{n}|\zeta-1|\right) \sim \int_{-\infty}^\infty e^{-s^2/2 +\gamma |s|} \, ds \,. \end{equation*} Hence, $\int_{S_2}=o(1)$ and the lemma follows. \end{proof} \begin{definition} Denote by $\aF$ the algebra (with respect to termwise addition and multiplication) of sequences $\{f_n(z)\}$ which satisfy the properties \eqref{A} and \eqref{B} for some constants $f_\infty$ and $\gamma$ depending on the sequence. Introduce the following map \begin{equation*} \operatorname{\mathcal{ L}im}:\aF\to \mathbb{C}\,, \quad \{f_n(z)\} \mapsto f_\infty \,, \end{equation*} which is clearly a homomorphism. \end{definition} \begin{remark} Note that we do not require $f_n(z)$ to be entire. Indeed, the kernel $\mathsf{J}$ may have a square root branching, see the formula \eqref{e213}. \end{remark} By Theorem \ref{t1b}, the correlation functions $\boldsymbol{\varrho}^\theta$ belong to the algebra generated by sequences of the form \begin{equation*} \{f_n(z)\}=\left\{\mathsf{J}(x_n,y_n;z)\right\}\,, \end{equation*} where the sequence $X=X(n)=\{x_n,y_n\}\subset \mathbb{Z}$ is regular, which, we recall, means that the limits \begin{equation*} a=\lim_{n\to\infty} \frac{x_n}{\sqrt{n}}\,, \quad d=\lim_{n\to\infty} (x_n-y_n) \end{equation*} exist, finite or infinite. Therefore, we first consider such sequences.
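Before turning to the formal statement, here is a purely numerical illustration of the limit that follows (it is not used in the argument; the parameters $\theta=10^4$, $a=1/2$, $d=2$, and all truncations are ad hoc choices). The value $\mathsf{J}(x_n,y_n;\theta)$ with $x_n=[a\sqrt\theta]$ and $x_n-y_n=d$, computed from \eqref{e214} with integer-order Bessel values generated by Miller's downward recurrence, is compared against the discrete sine kernel $\mathsf{S}(d,a)=\sin\bigl(d\arccos(a/2)\bigr)/(\pi d)$:

```python
import math

def bessel_row(nmax, z):
    # J_0(z), ..., J_nmax(z) by Miller's downward recurrence,
    # normalized with the identity J_0 + 2 * sum_k J_{2k} = 1
    M = nmax + int(z) + 60
    j = [0.0] * (M + 2)
    j[M] = 1e-300
    for n in range(M, 0, -1):
        j[n - 1] = (2.0 * n / z) * j[n] - j[n + 1]
        if abs(j[n - 1]) > 1e250:              # rescale to avoid overflow
            j = [v * 1e-250 for v in j]
    norm = j[0] + 2.0 * sum(j[k] for k in range(2, M + 1, 2))
    return [v / norm for v in j[:nmax + 1]]

theta = 1.0e4
z = 2.0 * math.sqrt(theta)                     # = 200
a, d = 0.5, 2
x = int(a * math.sqrt(theta))                  # x_n = 50
y = x - d                                      # y_n, so x_n - y_n = d
S = 260                                        # the s-sum decays fast past n = z
J = bessel_row(x + S, z)
kern = sum(J[x + s] * J[y + s] for s in range(1, S))

sine = math.sin(d * math.acos(a / 2.0)) / (math.pi * d)
assert abs(kern - sine) < 0.02
```

At $\theta=10^4$ the two values already agree to a couple of decimal places, consistent with the $O(\theta^{-1/2})$ accuracy of the Debye expansions used below.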
\begin{proposition}\label{p34} If $X=\{x_n,y_n\}\subset\mathbb{Z}$ is regular then \begin{equation*} \left\{\mathsf{J}(x_n,y_n;z)\right\}\in\aF\,, \quad \operatorname{\mathcal{ L}im}\left(\left\{\mathsf{J}(x_n,y_n;z)\right\}\right)= \mathsf{S}(d,a) \,. \end{equation*} \end{proposition} In the proof of this proposition it will be convenient to allow $X\subset \mathbb{C}$. For complex sequences $X$ we shall require $a\in\mathbb{R}$; the number $d\in\mathbb{C}$ may be arbitrary. \begin{lemma}\label{l35} Suppose that a sequence $X\subset\mathbb{C}$ is as above and, additionally, suppose that $\Im x_n$, $\Im y_n$ are bounded and $d\ne 0$. Then the sequence $\left\{\mathsf{J}(x_n,y_n;z)\right\}$ satisfies \eqref{B} with $f_\infty = \mathsf{S}(d,a)$ and certain $\gamma$. \end{lemma} \begin{proof}[Proof of Lemma] We shall use Debye's asymptotic formulas for Bessel functions of complex order and large complex argument, see, for example, Section 8.6 in \cite{W}. Introduce the following function \begin{equation*} F(x,z)=z^{1/4} \, J_x(2\sqrt{z}) \,. \end{equation*} The formula \eqref{e212} can be rewritten as follows \begin{equation}\label{e31} \mathsf{J}(x,y;z)=\frac{F(x,z)\, F(y+1,z) - F(x+1,z) \,F(y,z)}{x-y} \,. \end{equation} The asymptotic formulas for Bessel functions imply that \begin{equation}\label{e32} F(x,z)=\frac{\cos \left(\sqrt{z} \, G(u) +\frac{\pi}4\right)}{H(u)^{1/2}} \left(1+O\left(z^{-1/2}\right)\right) \,, \quad u=\frac{x}{\sqrt{z}} \,, \end{equation} where \begin{equation*} G(u)=\frac{\pi}{2}\left(u-\Omega(u)\right)\,, \quad H(u)=\frac{\pi}2 \, \sqrt{4-u^2} \,, \end{equation*} provided that $z\to\infty$ in such a way that $u$ stays in some neighborhood of $(-2,2)$; the precise form of this neighborhood can be seen in Figure 22 in Section 8.61 of \cite{W}.
Because we assume that \begin{equation*} \lim_{n\to\infty} \frac{x_n}{\sqrt{n}}\,,\,\lim_{n\to\infty} \frac{y_n}{\sqrt{n}} \in (-2,2) \,, \end{equation*} and because $|z/n-1|<n^{-\alpha}$, the ratios ${x_n}/{\sqrt{z}}$, ${y_n}/{\sqrt{z}}$ stay close to $(-2,2)$. For future reference, we also point out that the constant in $O\left(z^{-1/2}\right)$ in \eqref{e32} is uniform in $u$ provided $u$ is bounded away from the endpoints $\pm2$. First we estimate $\Im \left(\sqrt{z} \, G(u)\right) $. The function $G$ clearly takes real values on the real line. From the obvious estimate \begin{equation*} \left| \Im \left(\sqrt{z} \, G(u) \right) \right| \le \left| \Im \left(\sqrt{n} \, G(x/\sqrt{n}) \right) \right| + \left| \sqrt{z} \, G(x/\sqrt{z}) - \sqrt{n} \, G(x/\sqrt{n}) \right| \end{equation*} and the boundedness of $G$, $G'$, and $|\Im x|$ we obtain an estimate of the form \begin{equation}\label{e33} \max_{|z/n-1|\le n^{-\alpha}} |F(x;z)| e^{-\operatorname{const}\, |z-n|/\sqrt{n}} = O(1)\,. \end{equation} If $d=\infty$ then because of the denominator in \eqref{e31} the estimate \eqref{e33} implies that \begin{equation*} \mathsf{J}(x_n,y_n;z)=o\left(e^{\operatorname{const}\, |z-n|/\sqrt{n}}\right)\,. \end{equation*} Since $\mathsf{S}(\infty,a)=0$, it follows that in this case the lemma is established. Assume, therefore, that $d$ is finite. Observe that for any bounded increment $\De x$ we have \begin{multline}\label{e34} F(x+\De x,z)=\frac{\cos \left(\sqrt{z} \, G(u) +G'(u)\,\De x+ \frac{\pi}4\right)} {H(u)^{1/2}}\\ +O\left( \frac{(\De x)^2}{\sqrt z}\, e^{\operatorname{const}\, |z-n|/\sqrt{n}}\right)\,, \end{multline} and, in particular, the last term is $o\left(e^{\operatorname{const}\, |z-n|/\sqrt{n}}\right)$. 
Using the trigonometric identity \begin{equation*} \cos\left(A\right)\cos\left(B+C\right)-\cos\left(A+C\right)\cos\left(B\right)= \sin\left(C\right)\sin\left(A-B\right)\,, \end{equation*} and observing that \begin{equation*} G'(u)=\arccos(u/2) \,, \quad \sin(G'(u))=\frac{\sqrt{4-u^2}}2=\frac{H(u)}{\pi}\,, \end{equation*} we compute \begin{multline*} F(x_n;z)\, F(y_n+1;z) -F(x_n+1;z) \,F(y_n;z) = \\ \frac1{\pi} \sin\left(\arccos\left(\frac{x_n}{2\sqrt{z}}\right)\, (x_n-y_n)\right) + o\left(e^{\operatorname{const}\, |z-n|/\sqrt{n}}\right)\,. \end{multline*} Since, by hypothesis, \begin{equation*} \frac{x_n}{\sqrt{z}} \to a\,, \quad (x_n-y_n) \to d \,, \end{equation*} and $d\ne 0$, the lemma follows. \end{proof} \begin{remark} Below we shall need this lemma for a variable sequence $X=\{x_n,y_n\}$. Therefore, let us spell out explicitly under what conditions on $X$ the estimates in Lemma \ref{l35} remain uniform. We need the sequences $\frac{x_n}{\sqrt{n}}$ and $\frac{y_n}{\sqrt{n}}$ to converge uniformly; then, in particular, the ratios $\frac{x_n}{\sqrt{n}}$ and $\frac{y_n}{\sqrt{n}}$ are uniformly bounded away from $\pm 2$. Also, we need $\Im x_n$ and $\Im y_n$ to be uniformly bounded. Finally, we need $|d|$ to be uniformly bounded from below. \end{remark} \begin{proof}[Proof of Proposition] First, we check the condition \eqref{B}. In the case $d\ne 0$ this was done in the previous lemma. Suppose, therefore, that $\{x_n\}$ is a regular sequence in $\mathbb{Z}_{\ge 0}$ and consider the asymptotics of $\mathsf{J}(x_n,x_n;z)$. Because the function $\mathsf{J}(x,y;z)$ is an entire function of $x$ and $y$ we have \begin{equation}\label{e35} \mathsf{J}(x,x;z) = \frac1{2\pi} \int_0^{2\pi} \mathsf{J}\left(x,x+r e^{i t};z\right) \, dt \,, \end{equation} where $r$ is arbitrary; we shall take $r$ to be some small but fixed number. 
From the previous lemma we know that \begin{equation*} \mathsf{J}\left(x,x+r e^{i t};z\right)= \frac1{\pi r e^{it}} \sin\left(\omega\left(\frac{x}{\sqrt{z}}\right)\, re^{it}\right) + o\left(e^{\operatorname{const}\, |z-n|/\sqrt{n}}\right) \,. \end{equation*} {}From the above remark it follows that this estimate is uniform in $t$. This implies the property \eqref{B} for $\mathsf{J}(x_n,x_n;z)$. To prove the estimate \eqref{A} we use Schl\"afli's integral representation (see Section 6.21 in \cite{W}) \begin{multline}\label{e36} J_x(2\sqrt{z})=\frac1\pi \int_0^\pi \cos \left(x t - 2 \sqrt{z} \, \sin t\right) \, dt - \\ \frac{\sin \pi x}{\pi} \int_0^\infty e^{-x t -2\sqrt{z} \, \sinh t} \, dt \,, \end{multline} which is valid for $|\arg z|<\pi$ and even for $\arg z=\pm \pi$ provided $\Re x>0$ or $x\in\mathbb{Z}$. If $x\in\mathbb{Z}$ then the second summand in \eqref{e36} vanishes and the first is $O\left(e^{\operatorname{const}|z|^{1/2}}\right)$ uniformly in $x\in\mathbb{Z}$. This implies the estimate \eqref{A} provided $d\ne 0$. It remains, therefore, to check \eqref{A} for $\mathsf{J}(x_n,x_n;z)$ where $\{x_n\}\subset\mathbb{Z}$ is a regular sequence. Again, we use \eqref{e35}. Observe that, since $\Re\sqrt{z}\ge 0$, the second summand in \eqref{e36} is uniformly small provided $\Im x$ is bounded from above and $\Re x$ is bounded from below. Therefore, \eqref{e35} produces the \eqref{A} estimate for $x_n\ge 1$. For $x_n\le 0$ we use the relation \eqref{e004} and the recurrence \eqref{e215} to obtain the estimate. \end{proof} \begin{proof}[Proof of Theorem \ref{t2}] Let $X(n)$ be a regular sequence and let the numbers $a_i$ and $d_{ij}$ be defined by \eqref{e14}, \eqref{e15}. We shall assume that $|a_i|<2$ for all $i$. The validity of the theorem in the case when $|a_i|\ge 2$ for some $i$ will be obvious from the results of the next section.
We have \begin{align} \boldsymbol{\varrho}^\theta(X(n))&=e^{-\theta}\, \sum_{k=0}^\infty \boldsymbol{\varrho}(k,X(n))\, \frac{\theta^k}{k!} \label{e37}\\ &=\det \Big[\mathsf{J}(x_i(n),x_j(n))\Big]_{1\le i,j \le s}\,, \label{e38} \end{align} where the first line is the definition of $\boldsymbol{\varrho}^\theta$ and the second is Theorem \ref{t1b}. From \eqref{e37} it is obvious that $\boldsymbol{\varrho}^\theta$ is entire. Therefore, we can apply Lemma \ref{l31} to it. It is clear that Lemma \ref{l31}, together with Proposition \ref{p34}, implies Theorem \ref{t2}. The factorization \eqref{e1t2} follows from the vanishing $\mathsf{S}(\infty,a)=0$. \end{proof} \subsection{Asymptotics of $\rho(n,X)$}\label{s32} Recall that the correlation functions $\rho(n,X)$ were defined by \begin{equation*} \rho(n,X)= M_n\left(\left\{\lambda\,|\, X\subset \operatorname{Fr}(\lambda)\right\} \right)\,, \quad X\subset\mathbb{Z}+{\textstyle \frac12}\,. \end{equation*} The asymptotics of these correlation functions can be easily obtained from Theorem \ref{t2} by complementation, see Sections \ref{sA3} and \ref{s21b}, and the result is the following. Let $X(n)\subset \mathbb{Z}+{\textstyle \frac12}$ be a regular sequence. If it splits, then the limit $\lim_{n\to\infty} \rho(n,X(n))$ factors as in \eqref{e1t2}. Suppose, therefore, that $X(n)$ is nonsplit. Here one has to distinguish two cases. If $X(n)\subset\mathbb{Z}_{\ge 0}+{\textstyle \frac12}$ or $X(n)\subset\mathbb{Z}_{\le 0}-{\textstyle \frac12}$ then we shall say that this sequence is \emph{off-diagonal}. Geometrically, it means that $X(n)$ corresponds to modified Frobenius coordinates of only one kind: either the row ones or the column ones. For off-diagonal sequences we obtain from Theorem \ref{t2} by complementation that \begin{equation*} \lim_{n\to\infty} \rho\left(n,X(n)\right) = \det \Big[\,\mathsf{S}(d_{ij},|a|) \Big]_{1\le i,j \le s}\,, \end{equation*} where $\mathsf{S}$ is the discrete sine kernel and $a=a_1=a_2=\dots$.
If $X(n)$ is nonsplit and \emph{diagonal}, that is, if it is nonsplit and includes both positive and negative numbers, then one has to assume additionally that the number of positive and negative elements of $X(n)$ stabilizes for sufficiently large $n$. In this case the limit correlations are given by the kernel \begin{equation}\label{edefD} \mathsf{D}(x,y)=\cases \mathsf{S}\left(x-y,0\right) \,, & xy>0\,, \\ \displaystyle \frac{\cos\left(\frac\pi 2(x+y)\right)}{\pi(x-y)} \,, &xy<0 \,. \endcases \end{equation} Remark that this kernel \emph{is not} translation invariant. Note, however, that $$ \mathsf{D}(x+1,y+1)=\operatorname{sgn}(xy)\, \mathsf{D}(x,y)\,, $$ provided $x$ and $x+1$ have the same sign and similarly for $y$. Therefore, if the subsets $X\subset\mathbb{Z}+{\textstyle \frac12}$ and $X+m$, $m\in\mathbb{Z}$, have the same number of positive and negative elements then $$ \det \Big[ \mathsf{D}(x_i,x_j)\Big]_{x_i\in X} = \det \Big[ \mathsf{D}(x_i+m,x_j+m)\Big]_{x_i\in X} \,. $$ \section{Edge of the spectrum: convergence to the Airy ensemble}\label{s4} \subsection{Results and strategy of proof}\label{s41} In this section we prove Theorem \ref{t3} which was stated in Section \ref{s14} of the Introduction. We refer the reader to Section \ref{s14} for a discussion of the relation between Theorem \ref{t3} and the results obtained in \cite{BDJ1,BDJ2,O}. Recall that the Airy kernel was defined as follows \begin{equation*} \mathsf{A}(x,y)=\frac{A(x)A'(y)-A'(x)A(y)}{x-y}\,, \end{equation*} where $A(x)$ is the Airy function \eqref{e17}. The {\it Airy ensemble} is, by definition, a random point process on $\mathbb{R}$, whose correlation functions are given by \begin{equation*} \rho^{\textup{Airy}}_k(x_1,\dots,x_k)= \det \Big[\,\mathsf{A}(x_i,x_j) \Big]_{1\le i,j \le k}\,. \end{equation*} This ensemble was studied in \cite{TW}. We denote by $\zeta_1 > \zeta_2 > \dots $ a random configuration of the Airy ensemble.
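The Airy kernel also admits the integral form $\mathsf{A}(x,y)=\int_0^\infty A(x+t)A(y+t)\,dt$, recalled as \eqref{e48} below. As a hedged numerical sketch (the Taylor truncation, the integration cutoff $T=7$, and the test points are ad hoc choices), one can generate $A$ from the Taylor recursion for $A''=xA$, with $A(0)=3^{-2/3}/\Gamma(2/3)$ and $A'(0)=-3^{-1/3}/\Gamma(1/3)$, and check that the two expressions for the kernel agree:

```python
import math

A0 = 3.0 ** (-2.0 / 3.0) / math.gamma(2.0 / 3.0)      # A(0)
A1 = -(3.0 ** (-1.0 / 3.0)) / math.gamma(1.0 / 3.0)   # A'(0)

# Taylor coefficients of the solution of A'' = x A:  n(n-1) a_n = a_{n-3}
coef = [A0, A1, 0.0]
for n in range(3, 180):
    coef.append(coef[n - 3] / (n * (n - 1)))

def airy(x):
    return sum(c * x ** k for k, c in enumerate(coef))

def airy_prime(x):
    return sum(k * coef[k] * x ** (k - 1) for k in range(1, len(coef)))

def kernel_cd(x, y):
    # difference-quotient form of the Airy kernel, x != y
    return (airy(x) * airy_prime(y) - airy_prime(x) * airy(y)) / (x - y)

def kernel_int(x, y, T=7.0, n=1400):
    # Simpson's rule for int_0^T A(x+t) A(y+t) dt; the neglected tail is tiny
    h = T / n
    f = [airy(x + i * h) * airy(y + i * h) for i in range(n + 1)]
    return (h / 3.0) * (f[0] + f[n]
                        + 4.0 * sum(f[1:n:2]) + 2.0 * sum(f[2:n:2]))

assert abs(kernel_cd(0.5, 1.5) - kernel_int(0.5, 1.5)) < 1e-5
```

The Taylor series is accurate in the moderate range needed here; for large arguments it suffers cancellation, which is why the integration is cut off where $A$ has already decayed.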
Theorem \ref{t3} says that after a proper scaling and normalization, the rows $\lambda_1,\lambda_2,\dots$ of a Plancherel random partition $\lambda$ converge in joint distribution to the Airy ensemble. Namely, the following random variables $\widetilde{\lambda}$ \begin{equation*} \widetilde{\lambda}= \left(\widetilde{\lambda}_1 \ge \widetilde{\lambda}_2 \ge \dots \right)\,, \quad \widetilde{\lambda}_i = n^{1/3} \, \left( \frac{\lambda_i}{n^{1/2}}-2 \right) \,, \end{equation*} converge, in joint distribution, to the Airy ensemble as $n\to\infty$. In the proof of Theorem \ref{t3}, we shall follow the strategy explained in Section \ref{s15} of the Introduction. First, we shall prove that under the poissonized measure $M^\theta$ on the set of partitions $\lambda$, the random variables $\widetilde{\lambda}$ converge, in joint distribution, to the Airy ensemble as $\theta\approx n\to\infty$. This result is stated below as Theorem \ref{t4}. From this, using certain monotonicity and Lemma \ref{l48} which is due to K.~Johansson, we shall conclude that the same is true for the measures $M_n$ as $n\to\infty$. The proof of Theorem \ref{t4} will be based on the analysis of the behavior of the correlation functions of $M^\theta$, $\theta\approx n \to \infty$, near the point $2\sqrt{n}$. From the expression for correlation functions of $M^\theta$ given in Theorem \ref{t1} it is clear that this amounts to the study of the asymptotics of $J_{2\sqrt{n}}(2\sqrt\theta)$ when $\theta\approx n \to \infty$. This asymptotics is classically known and from it we shall derive the following \begin{proposition}\label{p41} Set $r=\sqrt\theta$. We have \begin{equation*} r^{\frac 13}\mathsf{J}\left(2r+xr^{\frac 13},2r+yr^{\frac 13},{r^2} \right)\to \mathsf{A}(x,y), \quad r\to+\infty\,, \end{equation*} uniformly in $x$ and $y$ on compact sets of $\mathbb{R}$. 
\end{proposition} The prefactor $r^{\frac 13}$ corresponds to the fact that we change the local scale near $2r$ to get non-vanishing limit correlations. Using this and verifying certain tail estimates we obtain the following \begin{theorem}\label{t4} For any fixed $m=1,2,\dots$ and any $a_1,\dots,a_m\in\mathbb{R}$ we have \begin{multline}\label{e401} \lim_{\theta\to+\infty} M^\theta\left(\left\{\lambda\,\left|\, \frac{\lambda_i-2\sqrt{\theta}}{\theta^{\frac 16}}\,< a_i\,, \ 1\le i\le m\right.\right\}\right)=\\ \operatorname{Prob}\{\zeta_i< a_i\,,\ 1\le i\le m\}\,, \end{multline} where $\zeta_1>\zeta_2>\dots$ is the Airy ensemble. \end{theorem} Observe that the limit behavior of $\widetilde{\lambda}$ is, obviously, identical with the limit behavior of similarly scaled $1$st, $2$nd, and so on, maximal Frobenius coordinates. Proofs of Proposition \ref{p41} and Theorem \ref{t4} are given in Section \ref{s42}. In Section \ref{s43}, using a depoissonization argument based on Lemma \ref{l48} we deduce Theorem \ref{t3}. \begin{remark} We consider the behavior of any number of initial rows of $\lambda$, where $\lambda$ is a Plancherel random partition. By symmetry, the same results describe the behavior of any number of initial columns of $\lambda$. \end{remark} \subsection{Proof of Theorem \ref{t4}}\label{s42} Suppose we have a point process on $\mathbb{R}$ with determinantal correlation functions \begin{equation*} \rho_k(x_1,\dots,x_k)=\det[K(x_i,x_j)]_{1\le i,j \le k}\,, \end{equation*} for some kernel $K(x,y)$. Let $I$ be a possibly infinite interval $I\subset\mathbb{R}$. By $[K]_I$ we denote the operator in $L^2(I,dx)$ obtained by restricting the kernel on $I\times I$. Assume $[K]_I$ is a trace class operator. Then the intersection of the random configuration $X$ with $I$ is finite almost surely and \begin{equation*} \operatorname{Prob}\{|X\cap I|=N\}=\frac{(-1)^N}{N!}\left.\frac{d^N}{dz^N}\det\Big(1-z[K]_I\Big)\right|_{z=1}\,.
\end{equation*} In particular, the probability that $X\cap I$ is empty is equal to \begin{equation*} \operatorname{Prob}\{X\cap I=\emptyset\}=\det\Big(1-[K]_I\Big)\,. \end{equation*} More generally, if $I_1,\dots,I_m$ is a finite family of pairwise nonintersecting intervals such that the operators $[K]_{I_1},\dots,[K]_{I_m}$ are trace class then \begin{multline}\label{e41} \operatorname{Prob}\{|X\cap I_1|=N_1,\dots,|X\cap I_m|=N_m \}\\ =\prod_{i=1}^m\frac{(-1)^{N_i}}{N_i!}\left.\frac{\partial^{N_1+\dots+N_m}} {\partial z_1^{N_1}\dots\partial z_m^{N_m}} \det\Big(1-z_1[K]_{I_1}-\dots-z_m[K]_{I_m}\Big)\right|_{z_1=\dots=z_m=1}. \end{multline} Here the operators $\{[K]_{I_i}\}$ are considered to be acting in the same Hilbert space, for example, in $L^2(I_1\sqcup I_2\sqcup\dots\sqcup I_m,dx)$. In the case of intersecting intervals $I_1,\dots,I_m$, the probabilities $$ \operatorname{Prob}\{|X\cap I_1|=N_1,\dots,|X\cap I_m|=N_m \} $$ are finite linear combinations of expressions of the form \eqref{e41}. Therefore, in order to show the convergence in distribution of point processes with determinantal correlation functions, it suffices to show the convergence of expressions of the form \eqref{e41}. The formula \eqref{e41} is discussed, for example, in \cite{TW2}. It remains valid for processes on a lattice such as $\mathbb{Z}$ in which case the kernel $K$ should be an operator on $\ell^2(\mathbb{Z})$. As verified, for example, in Proposition \ref{p58} in the Appendix, the right-hand side of \eqref{e41} is continuous in each $[K]_{I_i}$ with respect to the trace norm. We shall show that after a suitable embedding of $\ell^2(\mathbb{Z})$ inside $L^2(\mathbb{R})$ the kernel $\mathsf{J}(x,y;\theta)$ converges to the Airy kernel $\mathsf{A}(x,y)$ as $\theta\to\infty$.
Namely, we shall consider a family of embeddings $\ell^2(\mathbb{Z})\to L^2(\mathbb{R})$, indexed by a positive number $r>0$, which are defined by \begin{equation}\label{e402} \ell^2(\mathbb{Z})\owns\chi_k \mapsto r^{1/6} \, \chi_{\left[\frac{k-2r}{r^{1/3}},\frac{k+1-2r}{r^{1/3}}\right]} \in L^2(\mathbb{R})\,, \quad k\in\mathbb{Z}\,, \end{equation} where $\chi_k\in\ell^2(\mathbb{Z})$ is the characteristic function of the point $k\in\mathbb{Z}$ and, similarly, the function on the right is the characteristic function of a segment of length $r^{-1/3}$. Observe that this embedding is isometric. Let $\mathsf{J}_r$ denote the kernel on $\mathbb{R}\times\mathbb{R}$ that is obtained from the kernel $\mathsf{J}(\,\cdot\,,\,\cdot\,,r^2)$ on $\mathbb{Z}\times\mathbb{Z}$ using the embedding \eqref{e402}. We shall establish the following \begin{proposition}\label{p44} We have \begin{equation*} [\mathsf{J}_r]_{[a,\infty)}\to[\mathsf{A}]_{[a,\infty)}\,, \quad r\to\infty \,, \end{equation*} in the trace norm for all $a\in\mathbb{R}$ uniformly on compact sets in $a$. \end{proposition} This proposition immediately implies Theorem \ref{t4} as follows \begin{proof}[Proof of Theorem \ref{t4}] Consider the left-hand side of \eqref{e401} and choose for each $a_i$ a pair of functions $k_i^-(r),k_i^+(r)\in\mathbb{Z}$ such that \begin{equation*} \frac{k_i^-(r)-2r}{r^{1/3}}=a_i^-(r) \le a_i \le a_i^+(r) = \frac{k_i^+(r)-2r}{r^{1/3}} \end{equation*} and $a_i^-(r),a_i^+(r)\to a_i$ as $r\to \infty$. Then, on the one hand, the probability in the left-hand side of \eqref{e401} lies between the corresponding probabilities for $a_i^-(r)$ and $a_i^+(r)$. On the other hand, the probabilities for $a_i^-(r)$ and $a_i^+(r)$ can be expressed in the form \eqref{e41} for the kernel $\mathsf{J}_r$ and by Proposition \ref{p44} and the continuity of the Airy kernel they converge to one and the same limit given by the Airy kernel as $r\to\infty$.
\end{proof} Now we get to the proofs of Propositions \ref{p41} and \ref{p44} which will require some computations. Recall that the Airy function can be expressed in terms of Bessel functions as follows \begin{equation} A(x)=\cases \frac 1\pi \sqrt{\frac x3}K_{\frac 13}\left(\frac 23 x^{\frac 32}\right)\,,&x\ge 0\,,\\ \frac{\sqrt{|x|}} 3\left[J_{\frac 13}\left(\frac 23 |x|^{\frac 32}\right)+J_{-\frac 13}\left(\frac 23 |x|^{\frac 32}\right)\right]\,,&x\le 0\,, \endcases \end{equation} see Section 6.4 in \cite{W}. Also recall that \begin{equation}\label{e42} A(x) \sim \frac1{2 x^{1/4} \sqrt{\pi}}\, e^{-\frac23 x^{3/2}}\,, \quad x\to+\infty \,, \end{equation} see, for example, the formula 7.23 (1) in \cite{W}. \begin{lemma}\label{l45} For any $x\in\mathbb{R}$ we have \begin{equation}\label{e43} \left|r^{\frac 13}J_{2r+xr^{\frac 13}}(2r)- A(x)\right|= O(r^{-\frac 13})\,, \quad r\to\infty\,, \end{equation} moreover, the constant in $O(r^{-\frac 13})$ is uniform in $x$ on compact subsets of $\mathbb{R}$. \end{lemma} \begin{proof} Assume first that $x\ge 0$. We denote \begin{equation*} \nu=2r+xr^{\frac 13},\quad \alpha=\operatorname{arccosh}\left(1+xr^{-\frac 23}/2\right)\ge 0. \end{equation*} It will be convenient to use the following notation \begin{equation*} P=\nu(\tanh\alpha-\alpha),\quad Q=\frac \nu3 \tanh^3\alpha. \end{equation*} The formula 8.43(4) in \cite{W} reads \begin{equation}\label{e44} J_{\nu}(2r)=\frac {\tanh\alpha}{\pi\sqrt{3}}e^{P+Q}K_{\frac 13}\left(Q\right)+\frac{3\gamma_1}{\nu}\,e^{P} \end{equation} where $|\gamma_1|<1$.
We have the following estimates as $r\to+\infty$ \begin{align*} &\alpha=x^{\frac 12}r^{-\frac 13}+O(r^{-1}),\\ &\tanh\alpha=\alpha+O(\alpha^3)=x^{\frac 12}r^{-\frac 13}+O(r^{-1}),\\ &P+Q=\nu\cdot O(\alpha^5)=O(r^{-\frac 23}),\quad e^{P+Q}=1+O(r^{-\frac 23}),\\ &Q=\frac 13\,\left(2r+xr^{\frac 13}\right)\left(x^{\frac 32}r^{- 1}+O(r^{-\frac 43})\right)=\frac{2x^{\frac 32}}3+O(r^{-\frac 13}),\\ &K_{\frac 13}\left(Q\right)=K_{\frac 13}\left(\frac {2x^{\frac 32}}{3}\right)+O(r^{-\frac 13}),\\ &P\le 0,\quad\frac{3\gamma_1}{\nu}\,e^{P}=O(r^{-1}). \end{align*} Substituting this into \eqref{e44}, we obtain the claim \eqref{e43} for $x\ge 0$. Assume now that $x\le 0$. Denote \begin{equation*} \nu=2r+xr^{\frac 13},\quad \beta=\operatorname{arccos}\left(1+xr^{-\frac 23}/2\right)\ge 0,\quad y=|x|. \end{equation*} Introduce the notation \begin{equation*} \widetilde P=\nu(\tan\beta-\beta),\quad \widetilde Q=\frac \nu 3\tan^3\beta\,. \end{equation*} The formula 8.43 (5) in \cite{W} reads \begin{multline}\label{e45} J_{\nu}(2r)=\frac 13 \tan \beta\cos\left(\widetilde P-\widetilde Q\right)\left[J_{-\frac 13}\left(\widetilde Q\right)+J_{\frac 13}\left(\widetilde Q\right)\right]\\ +\frac 1{\sqrt{3}} \tan \beta\sin\left(\widetilde P-\widetilde Q\right)\left[J_{-\frac 13}\left(\widetilde Q\right)-J_{\frac 13}\left(\widetilde Q\right)\right]+\frac{24\gamma_2}{\nu} \end{multline} where $|\gamma_2|<1$.
Again we have the estimates as $r\to+\infty$ \begin{align*} &\beta=y^{\frac 12}r^{-\frac 13}+O(r^{-1}),\\ &\tan\beta=\beta+O(\beta^3)=y^{\frac 12}r^{-\frac 13}+O(r^{-1}),\\ &\widetilde P-\widetilde Q=\nu\cdot O(\beta^5)=O(r^{-\frac 23}),\\ &\cos\left(\widetilde P-\widetilde Q\right)=1+O(r^{-\frac 43}),\quad \sin\left(\widetilde P-\widetilde Q\right)=O(r^{-\frac 23}),\\ &\widetilde Q=\frac 13\left(2r-yr^{\frac 13}\right)\left(y^{\frac 32}r^{- 1}+O(r^{-\frac 43})\right)=\frac{2y^{\frac 32}}3+O(r^{-\frac 13}),\\ &J_{\pm\frac 13}\left(\widetilde Q\right)=J_{\pm\frac 13}\left(\frac {2y^{\frac 32}}{3}\right)+O(r^{-\frac 13}). \end{align*} These estimates after substituting into \eqref{e45} produce \eqref{e43} for $x\le 0$. \end{proof} \begin{lemma}\label{l46} There exist $C_1,C_2,C_3,\varepsilon>0$ such that for any $A>0$ and $s>0$ we have \begin{alignat}{2} \left|J_{r+Ar^{\frac 13}+s}(r)\right|&\le C_1\,{r^{-\frac13}}{\exp\left(-C_2\left(A^{\frac 32}+sA^{\frac 12}r^{-\frac 13}\right)\right)},\quad &s\le \varepsilon r \,, \label{e46a} \\ \left|J_{r+Ar^{\frac 13}+s}(r)\right|&\le \exp\left(-C_3(r+s)\right), \quad &s\ge \varepsilon r\,, \label{e46b} \end{alignat} for all $r\gg 0$. \end{lemma} \begin{proof} First suppose that $s\le\varepsilon r$. Set $\nu=r+Ar^{\frac 13}+s$. We shall use \eqref{e44} with $\alpha=\operatorname{arccosh}(\nu/r)$. Provided $\varepsilon$ is chosen small enough and $r$ is sufficiently large, $\alpha$ will be close to $0$ and we will be able to use Taylor expansions. For $r\gg 0$ we have \begin{equation*} \alpha=\operatorname{arccosh}(1+Ar^{-\frac 23}+sr^{-1})\ge \operatorname{const}\,(Ar^{-\frac 23}+sr^{-1})^{\frac 12} \,, \end{equation*} and, similarly, \begin{equation*} -P=\nu(\alpha-\tanh\alpha)\ge \operatorname{const}\,(A+sr^{-\frac 13})^{\frac 32}\,. \end{equation*} Since $(A+h)^{\frac 32}\ge (A+h)\,A^{\frac 12}\ge A^{\frac 32}+A^{\frac 12}h$ for $h\ge 0$, we have $$ -P\ge \operatorname{const}\,(A^{\frac 32}+sA^{\frac 12}r^{-\frac 13})\,. $$ The constant here is strictly positive.
Since $K_{\frac 13}(x)\le \operatorname{const}\, x^{-\frac 12} e^{-x}$ (see, for example, the formula 7.23 (1) in \cite{W}) we obtain \begin{multline*} \tanh\alpha \,e^{P+Q}K_{\frac 13}\left(Q\right)\le \operatorname{const}\, \frac{e^{P}}{\sqrt{\nu\tanh\alpha}}\\ \le \frac{\operatorname{const}} {r^{\frac 13}} \exp\left(-\operatorname{const}\,\left(A^{\frac 32}+sA^{\frac 12}r^{-\frac 13}\right)\right)\,, \end{multline*} where we used that $\tanh\alpha\ge \operatorname{const}\, r^{-\frac 13}$. Finally, we note that \begin{equation*} \frac {e^P}{\nu}\le \frac1r\, \exp\left({-\operatorname{const}\,\left(A^{\frac 32}+sA^{\frac 12}r^{-\frac 13}\right)}\right), \end{equation*} and this completes the proof of \eqref{e46a}. The estimate \eqref{e46b} follows directly from the formulas 8.5 (9), (4), (5) in \cite{W}. \end{proof} \begin{lemma}\label{l47} For any $\delta>0$ there exists $M>0$ such that for all $x,y>M$ and large enough $r$ \begin{equation*} \left|\mathsf{J}\left(2r+xr^{\frac 13},2r+yr^{\frac 13}, {r^2}\right)\right|<\delta r^{-\frac 13}. \end{equation*} \end{lemma} \begin{proof} From \eqref{e214} we have \begin{equation}\label{e47} \mathsf{J}\left(2r+xr^{\frac 13},2r+yr^{\frac 13}, {r^2} \right)=\sum_{s=1}^\infty J_{2r+xr^{\frac 13}+s}(2r)\,J_{2r+yr^{\frac 13}+s}(2r). \end{equation} Let us split the sum in \eqref{e47} into two parts \begin{equation*} {\sum}_1=\sum_{s\le \varepsilon r}\,, \quad {\sum}_2=\sum_{s> \varepsilon r}\,, \end{equation*} that is, one sum for $s\le\varepsilon r$ and the other for $s> \varepsilon r$, and apply Lemma \ref{l46} to these two sums. Note that $2r$ here corresponds to $r$ in Lemma \ref{l46}; this produces factors of $2^{\frac 13}$ and does not affect the estimate. Let the $c_i$'s stand for some positive constants not depending on $M$.
From \eqref{e46a} we obtain the following estimate for the first sum $$ {\sum}_1 \le c_1\, r^{-\frac 23}\, \exp\left(-c_2\,M^{\frac 32}\right) \, \sum_{s=1}^{[\varepsilon r]} q^s $$ where $$ q=\exp\left(-c_2M^{\frac 12}r^{-\frac 13}\right)\,, \quad 0<q<1\,. $$ Therefore, $$ {\sum}_1 \le \frac{c_1\, r^{-\frac 23}\, \exp\left(-c_2\,M^{\frac 32}\right) }{1-q} \le r^{-\frac 13}\cdot c_3\exp(-c_2M^{\frac 32})M^{-\frac 12} \,. $$ We can choose $M$ so that $c_3\exp(-c_2M^{\frac 32})M^{-\frac 12}<\delta/2$. For the second sum we use \eqref{e46b} and obtain \begin{equation*} {\sum}_2 \le \sum_{s\ge \varepsilon r} \exp(-c_4 (r+s))\le c_5\exp(-c_4r). \end{equation*} Clearly, this is less than $\delta r^{-\frac 13}/2$ for $r\gg0$. \end{proof} \begin{proof}[Proof of Proposition \ref{p41}] As shown in \cite{CL,TW}, the Airy kernel has the following integral representation \begin{equation}\label{e48} \mathsf{A}(x,y)=\int_{0}^\infty A(x+t)A(y+t)dt. \end{equation} The formula \eqref{e47} implies that for any integer $N>0$ \begin{multline}\label{e49} \mathsf{J}\left(2r+xr^{\frac 13},2r+yr^{\frac 13},{r^2} \right)= \sum_{s=1}^N J_{2r+xr^{\frac 13}+s}(2r)\,J_{2r+yr^{\frac 13}+s}(2r)\\ +\mathsf{J}\left(2r+xr^{\frac 13}+N,2r+yr^{\frac 13}+N,{r^2} \right)\,. \end{multline} Let us fix $\delta>0$ and pick $M>0$ according to Lemma \ref{l47}. Since, by assumption, $x$ and $y$ lie in a compact set of $\mathbb{R}$, we can fix $m$ such that $x,y \ge m$. Set $$ N=[(M-m+1)\, r^\frac13] \,. $$ Then the inequalities \begin{equation*} x+Nr^{-\frac 13}>M,\quad y+Nr^{-\frac 13}>M \end{equation*} are satisfied for all $x,y$ in our compact set and Lemma \ref{l45} applies to the sum in \eqref{e49}. We obtain \begin{multline*} \left|r^{\frac 23}\sum_{s=1}^{N} J_{2r+xr^{\frac 13}+s}(2r)\,J_{2r+yr^{\frac 13}+s}(2r)-\right.\\ \left.
\sum_{s=1}^{N} A(x+sr^{-\frac 13})A(y+sr^{-\frac 13})\right|=O(1) \end{multline*} because the number of summands is $N=O(r^{\frac 13})$ and $A(x)$ is bounded on subsets of $\mathbb{R}$ which are bounded from below. Note that \begin{equation*} r^{-\frac 13} \sum_{s=1}^{N} A(x+sr^{-\frac 13})A(y+sr^{-\frac 13}) \end{equation*} is a Riemann sum for the integral \begin{equation*} \int\limits_{0}^{M-m+1} A(x+t)A(y+t)\,dt, \end{equation*} and it converges to this integral as $r\to+\infty$. Since the absolute value of the second term in the right-hand side of \eqref{e49} does not exceed $\delta r^{-\frac 13}$ by the choice of $N$, we get \begin{equation*} \left|r^{\frac 13}\mathsf{J}\left(2r+xr^{\frac 13},2r+yr^{\frac 13},{r^2} \right)-\int\limits_{0}^{M-m+1} A(x+t)A(y+t)dt\right|\le \delta+o(1) \end{equation*} as $r\to+\infty$, and this estimate is uniform on compact sets. Now let $\delta\to 0$ and $M\to+\infty$. By \eqref{e42} the integral \eqref{e48} converges uniformly in $x$ and $y$ on compact sets and we obtain the claim of the proposition. \end{proof} \begin{proof}[Proof of Proposition \ref{p44}] It is clear that Proposition \ref{p41} implies the convergence of $[\mathsf{J}_r]_a$ to $[\mathsf{A}]_a$ in the weak operator topology. Therefore, by Proposition \ref{p56}, it remains to prove that $\operatorname{tr}[\mathsf{J}_r]_a\to\operatorname{tr}[\mathsf{A}]_a$ as $r\to+\infty$. We have \begin{equation*} \operatorname{tr}[\mathsf{J}_r]_a=\sum_{k=[2r+ar^{\frac13}]}^\infty\mathsf{J}(k,k;{r^2})+o(1)\,, \end{equation*} where the $o(1)$ correction comes from the fact that $a$ may not be a number of the form $\frac{k-2r}{r^{1/3}}$, $k\in\mathbb{Z}$. By \eqref{e47} we have \begin{equation}\label{e403} \sum_{k=[2r+ar^{\frac13}]}^\infty\mathsf{J}(k,k;{r^2}) =\sum_{l=1}^\infty l\left(J_{[2r+ar^{\frac13}]+l}(2r)\right)^2 \,.
\end{equation} Similarly, \begin{equation}\label{e404} \operatorname{tr}[\mathsf{A}]_a=\int_a^{\infty}\mathsf{A}(s,s)ds=\int_0^\infty t(A(a+t))^2dt\,. \end{equation} Since we already established the uniform convergence of kernels on compact sets, it is enough to show that both \eqref{e403} and \eqref{e404} go to zero as $a\to+\infty$ and $r\to+\infty$. For the Airy kernel this is clear from \eqref{e42}. For the kernel $\mathsf{J}_r$ it is equivalent to the following statement: for any $\delta>0$ there exists $M_0>0$ such that for all $M>M_0$ and large enough $r$ we have \begin{equation}\label{e410} \left|\sum_{l=1}^\infty l \, J_{2r+Mr^{\frac 13}+l}^2(2r)\right|<\delta \,. \end{equation} We shall employ Lemma \ref{l46} for $A=M$. Again, we split the sum in \eqref{e410} into two parts \begin{equation*} {\sum}_1=\sum_{l\le \varepsilon r}\,, \quad {\sum}_2=\sum_{l> \varepsilon r}\,. \end{equation*} For the first sum Lemma \ref{l46} gives \begin{equation*} {\sum}_1\le c_1r^{-\frac 23}\exp\left(-c_2M^{\frac 32}\right)\sum_{l\le [\varepsilon r]}l\, q^l\,, \end{equation*} where \begin{equation*} q=\exp\left(-c_2M^{\frac 12}r^{-\frac 13}\right)\,, \quad 0<q<1\,, \end{equation*} and the $c_i$'s are some positive constants that do not depend on $M$. Since $\sum l\, q^l = q (1-q)^{-2}$ we obtain \begin{equation*} {\sum}_1 \le c_1r^{-\frac 23}\exp\left(-c_2M^{\frac 32}\right) \, \frac{q}{(1-q)^2} \le c_3\frac{\exp\left(-c_2M^{\frac 32}\right)}{M} \,. \end{equation*} This can be made arbitrarily small by taking $M$ sufficiently large. For the other part of the sum we have the estimate \begin{equation*} {\sum}_2 \le \sum_{l> \varepsilon r}l \exp(-c_4(r+l)) \end{equation*} which, evidently, goes to zero as $r\to+\infty$.
\end{proof} \subsection{Depoissonization and proof of Theorem \ref{t3}}\label{s43} Fix some $m=1,2,\dots$ and denote by $F_n$ the distribution function of $\lambda_1,\dots,\lambda_m$ under the Plancherel measure $M_n$: \begin{equation*} F_n(x_1,\dots,x_m)=M_n\left(\left\{\lambda\,\left|\, \lambda_i< x_i\,, \ 1\le i\le m\right.\right\}\right) \,. \end{equation*} Also, set \begin{equation*} F(\theta,x)=e^{-\theta} \sum_{k=0}^\infty\frac {\theta^k}{k!}\, F_k(x). \end{equation*} This is the distribution function corresponding to the measure $M^\theta$. The measures $M_n$ can be obtained as the distribution at time $n$ of a certain random growth process of a Young diagram, see e.g.\ \cite{VK2}. This implies that \begin{equation*} F_{n+1}(x)\le F_n(x)\,, \quad x\in\mathbb{R}^m \,. \end{equation*} Also, by construction, $F_n$ is monotone in $x$ and similarly \begin{equation}\label{e4101} F(\theta,x) \le F(\theta,y)\,, \quad x_i \le y_i\,, \quad i=1,\dots,m \,. \end{equation} We shall use these monotonicity properties together with the following lemma. \begin{lemma}[Johansson, \cite{J}]\label{l48} There exist constants $C>0$ and $n_0>0$ such that for any nonincreasing sequence $\{b_n\}_{n=0}^\infty\subset[0,1]$ \begin{equation*} 1\ge b_0\ge b_1\ge b_2\ge b_3\ge\dots\ge 0, \end{equation*} and its exponential generating function \begin{equation*} B(\theta)=e^{-\theta} \sum_{k=0}^\infty\frac {\theta^k}{k!}\cdot b_k \end{equation*} we have for all $n>n_0$ the following inequalities: \begin{equation*} B(n+4\sqrt{n\ln n})-\frac C{n^2}\le b_n\le B(n-4\sqrt{n\ln n})+\frac C{n^2}. \end{equation*} \end{lemma} This lemma implies that for all $x\in \mathbb{R}^m$ \begin{equation}\label{e411} F(n+4\sqrt{n\ln n},x)-\frac C{n^2}\le F_n(x)\le F(n-4\sqrt{n\ln n},x)+\frac C{n^2}\,. \end{equation} Set \begin{equation*} \bar 1=(1,\dots,1)\,.
\end{equation*} Theorem \ref{t4} asserts that \begin{equation}\label{e412} F\left(\theta, 2{\theta}^\frac12\, \bar 1 + \theta^{\frac 16} \, x\right)\to F(x),\quad \theta\to +\infty,\quad x\in \mathbb{R}^m, \end{equation} where $F(x)$ is the corresponding distribution function for the Airy ensemble. Note that $F(x)$ is continuous. Denote $n_\pm=n\pm 4\sqrt{n\ln n}$. Then for $i=1,\dots,m$ \begin{equation*} 2n^\frac12_\pm +n^\frac16_\pm\, x_i =2n^\frac12 +n^{\frac 16}\, x_i +O((\ln n)^{1/2}) \,. \end{equation*} Hence, for any $\varepsilon>0$ and all sufficiently large $n$ we have \begin{equation*} 2n^\frac12_+ +n_+^{\frac 16} \, (x_i-\varepsilon) \le 2n^\frac12 +n^{\frac 16}\, x_i \le 2n^\frac12_- +n_-^{\frac 16} \, (x_i+\varepsilon) \,, \end{equation*} for $i=1,\dots,m$. By \eqref{e4101} this implies that \begin{align*} F\left(n_+,2n^\frac12\,\bar 1+n^{\frac 16}\, x\right)&\ge F\left(n_+,2n^\frac12_+\, \bar 1+n_+^{\frac 16} \, \left(x-\varepsilon\, \bar 1\right)\right)\\ F\left(n_-,2n^\frac12\,\bar 1+n^{\frac 16}\, x\right)&\le F\left(n_-,2n^\frac12_-\, \bar 1+n_-^{\frac 16} \, \left(x+\varepsilon\, \bar 1\right)\right) \,. \end{align*} {}From this and \eqref{e411} we obtain \begin{multline*} F\left(n_+,2n^\frac12_+\, \bar 1+n_+^{\frac 16} \, \left(x-\varepsilon\, \bar 1\right)\right)-\frac C{n^2}\\ \le F_n\left(2n^\frac12\,\bar 1+n^{\frac 16}\, x \right)\le \\ F\left(n_-,2n^\frac12_-\, \bar 1+n_-^{\frac 16} \,\left(x+\varepsilon\, \bar 1\right)\right)+\frac C{n^2} \,. \end{multline*} {}From this and \eqref{e412} we conclude that \begin{equation*} F\left(x-\varepsilon\,\bar 1\right)+o(1)\le F_n\left(2n^\frac12\,\bar 1+n^{\frac 16}\, x \right)\le F\left(x+\varepsilon\, \bar 1\right)+o(1) \end{equation*} as $n\to\infty$. 
Since $\varepsilon>0$ is arbitrary and $F(x)$ is continuous we obtain \begin{equation*} F_n \left(2n^\frac12\,\bar 1+n^{\frac 16}\, x \right)\to F(x),\quad n\to \infty, \quad x\in \mathbb{R}^m, \end{equation*} which is the statement of Theorem \ref{t3}.
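The transition asymptotics \eqref{e43} can also be probed numerically. The following Python sketch (a numerical illustration only, not part of the argument; it assumes SciPy is available and uses the normalization $A=\operatorname{Ai}$) compares $r^{\frac 13}J_{2r+xr^{\frac 13}}(2r)$ with the Airy function for a moderate value of $r$:

```python
# Numerical illustration (not part of the argument): the transition asymptotics
#   r^{1/3} * J_{2r + x r^{1/3}}(2r)  ->  Ai(x)   as r -> +infinity,
# i.e., the content of the estimate (e43) with A = Ai.
from scipy.special import airy, jv

def scaled_bessel(x, r):
    """Return r^{1/3} * J_{2r + x r^{1/3}}(2r)."""
    nu = 2.0 * r + x * r ** (1.0 / 3.0)
    return r ** (1.0 / 3.0) * jv(nu, 2.0 * r)

def Ai(x):
    return airy(x)[0]        # airy(x) returns the tuple (Ai, Ai', Bi, Bi')

r = 1.0e4
for x in (-2.0, -1.0, 0.0, 1.0, 2.0):
    print(f"x = {x:+.1f}:  r^(1/3) J = {scaled_bessel(x, r):+.6f},"
          f"  Ai(x) = {Ai(x):+.6f}")
```

At $r=10^4$ the two printed columns agree well within the $O(r^{-\frac 13})$ error allowed by \eqref{e43}.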
https://arxiv.org/abs/2212.14005
Rowmotion Markov Chains
Rowmotion is a certain well-studied bijective operator on the distributive lattice $J(P)$ of order ideals of a finite poset $P$. We introduce the rowmotion Markov chain ${\bf M}_{J(P)}$ by assigning a probability $p_x$ to each $x\in P$ and using these probabilities to insert randomness into the original definition of rowmotion. More generally, we introduce a very broad family of toggle Markov chains inspired by Striker's notion of generalized toggling. We characterize when toggle Markov chains are irreducible, and we show that each toggle Markov chain has a remarkably simple stationary distribution. We also provide a second generalization of rowmotion Markov chains to the context of semidistrim lattices. Given a semidistrim lattice $L$, we assign a probability $p_j$ to each join-irreducible element $j$ of $L$ and use these probabilities to construct a rowmotion Markov chain ${\bf M}_L$. Under the assumption that each probability $p_j$ is strictly between $0$ and $1$, we prove that ${\bf M}_{L}$ is irreducible. We also compute the stationary distribution of the rowmotion Markov chain of a lattice obtained by adding a minimal element and a maximal element to a disjoint union of two chains. We bound the mixing time of ${\bf M}_{L}$ for an arbitrary semidistrim lattice $L$. In the special case when $L$ is a Boolean lattice, we use spectral methods to obtain much stronger estimates on the mixing time, showing that rowmotion Markov chains of Boolean lattices exhibit the cutoff phenomenon.
\section{Introduction}\label{sec:intro} \subsection{Distributive Lattices} Let $P$ be a finite poset, and let $J(P)$ denote the set of order ideals of $P$. For $S\subseteq P$, let \[\Delta(S)=\{x\in P:x\leq s\text{ for some }s\in S\}\quad\text{and}\quad\nabla(S)=\{x\in P:x\geq s\text{ for some }s\in S\},\] and let $\min(S)$ and $\max(S)$ denote the set of minimal elements and the set of maximal elements of $S$, respectively. \dfn{Rowmotion}, one of the most well-studied operators in the growing field of dynamical algebraic combinatorics, is the bijection $\mathrm{Row}\colon J(P)\to J(P)$ defined by\footnote{Many authors define rowmotion to be the inverse of the operator that we have defined. Our definition agrees with the conventions used in \cite{Barnard, BarnardHanson, Semidistrim, ThomasWilliams}.} \begin{equation}\label{eq:row_def} \mathrm{Row}(I)=P\setminus \nabla(\max(I)). \end{equation} We refer the reader to \cite{StrikerWilliams,ThomasWilliams} for the history of rowmotion. The purpose of this article is to introduce randomness into the ongoing saga of rowmotion by defining certain Markov chains. We were inspired by the articles \cite{Ayyer, Poznanovic, Rhodes}; these articles define Markov chains based on the \emph{promotion} operator, which is closely related to rowmotion in special cases \cite{Bernstein,StrikerWilliams} (though our Markov chains are fundamentally different from these promotion-based Markov chains). For each $x\in P$, fix a probability $p_x\in[0,1]$. We define the \dfn{rowmotion Markov chain} ${\bf M}_{J(P)}$ with state space $J(P)$ as follows. Starting from a state $I\in J(P)$, select a random subset $S$ of $\max(I)$ by adding each element $x\in\max(I)$ into $S$ with probability $p_x$; then transition to the new state $P\setminus\nabla(S)=\mathrm{Row}(\Delta(S))$. 
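To make the transition rule concrete, the following Python sketch (ours, for illustration; the $3$-element poset and the probabilities are hypothetical choices) enumerates $J(P)$ for a small poset, computes the exact one-step distribution from each state, and checks that the weights $\prod_{x\in I}p_x^{-1}$ appearing in \Cref{thm:main_distributive} below are indeed stationary:

```python
# Illustrative sketch (not from the paper): the rowmotion Markov chain M_{J(P)}
# on a small hypothetical poset, with an exact check that the weights
# prod_{x in I} 1/p_x give a stationary distribution.
from itertools import combinations

# Hypothetical 3-element poset with relations x < z and y < z.
elements = ['x', 'y', 'z']
less = {('x', 'z'), ('y', 'z')}          # strict order relation

def below(s):
    # Delta(S): all elements lying below (or equal to) some element of S
    return {e for e in elements if any(e == t or (e, t) in less for t in s)}

def above(s):
    # nabla(S): all elements lying above (or equal to) some element of S
    return {e for e in elements if any(e == t or (t, e) in less for t in s)}

def maximal(ideal):
    return {e for e in ideal if not any((e, f) in less for f in ideal)}

ideals = [frozenset(c) for k in range(len(elements) + 1)
          for c in combinations(elements, k) if below(set(c)) == set(c)]

def step_distribution(I, p):
    """Exact one-step distribution from state I: include each x in max(I)
    in S independently with probability p[x], then move to P minus nabla(S)."""
    dist = {}
    mx = sorted(maximal(I))
    for k in range(len(mx) + 1):
        for S in combinations(mx, k):
            prob = 1.0
            for e in mx:
                prob *= p[e] if e in S else 1.0 - p[e]
            new_state = frozenset(set(elements) - above(set(S)))
            dist[new_state] = dist.get(new_state, 0.0) + prob
    return dist

p = {'x': 0.3, 'y': 0.5, 'z': 0.7}       # arbitrary probabilities in (0, 1)

# Candidate stationary distribution: pi(I) proportional to prod_{x in I} 1/p_x.
weight = {I: 1.0 for I in ideals}
for I in ideals:
    for e in I:
        weight[I] /= p[e]
Z = sum(weight.values())
pi = {I: weight[I] / Z for I in ideals}

# Applying one step of the chain to pi should return pi.
pi_next = {I: 0.0 for I in ideals}
for I in ideals:
    for J, q in step_distribution(I, p).items():
        pi_next[J] += pi[I] * q

print(max(abs(pi[I] - pi_next[I]) for I in ideals))   # numerically ~0
```

The brute-force enumeration over subsets of $\max(I)$ is exponential in $|\max(I)|$, which is harmless for toy examples like this one.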
Thus, for any $I,I'\in J(P)$, the transition probability from $I'$ to $I$ is \[\mathbb P(I'\to I)=\begin{cases} \left(\prod\limits_{x\in \min(P\setminus I)}p_x\right)\left(\prod\limits_{x'\in \max(I')\setminus\min(P\setminus I)}(1-p_{x'})\right) & \mbox{if }\min(P\setminus I)\subseteq\max(I'); \\ 0 & \mbox{otherwise.}\end{cases}\] Observe that if $p_x=1$ for all $x\in P$, then ${\bf M}_{J(P)}$ is deterministic and agrees with the rowmotion operator. On the other hand, if $p_x=0$ for all $x\in P$, then ${\bf M}_{J(P)}$ is deterministic and sends every order ideal of $P$ to the order ideal $P$. \begin{example}\label{Exam1} Suppose $P$ is the poset \[\begin{array}{l}\includegraphics[height=.9cm]{RowmotionMarkovPIC2}\end{array},\] whose elements $x,y,z$ are as indicated. Then $J(P)$ forms a distributive lattice with $5$ elements. The transition diagram of ${\bf M}_{J(P)}$ is drawn over the Hasse diagram of $J(P)$ in \Cref{Fig1}. \end{example} \begin{figure}[ht] \begin{center}{\includegraphics[height=13.391cm]{RowmotionMarkovPIC1}} \end{center} \caption{The transition diagram of ${\bf M}_{J(P)}$, where $P$ is the $3$-element poset from \Cref{Exam1}. The elements of each order ideal in $J(P)$ are circled and blue. }\label{Fig1} \end{figure} Our first main theorem states that ${\bf M}_{J(P)}$ is irreducible as long as each probability $p_x$ is strictly between 0 and 1 and explicitly provides the unique stationary distribution. \begin{theorem}\label{thm:main_distributive} Let $P$ be a finite poset. For each $x\in P$, fix a probability $p_x\in (0,1)$. The rowmotion Markov chain ${\bf M}_{J(P)}$ is irreducible. For $I\in J(P)$, the probability of the state $I$ in the stationary distribution of ${\bf M}_{J(P)}$ is \[\frac{1}{Z(J(P))}\prod_{x\in I}p_x^{-1},\] where $\displaystyle Z(J(P))=\sum_{I'\in J(P)}\prod_{x'\in I'}p_{x'}^{-1}$. 
\end{theorem} Although it is often easy to see why specific Markov chains are irreducible, the irreducibility in \Cref{thm:main_distributive} is not obvious. Indeed, the difficulty arises from the fact that this statement concerns \emph{all} finite posets. It is also surprising that there is such a clean formula for the stationary distribution in this level of generality. Suppose ${\bf M}$ is an irreducible finite Markov chain with state space $\Omega$, transition matrix $Q$, and stationary distribution $\pi$. For $x\in\Omega$, let $Q^i(x,\cdot)$ denote the distribution on $\Omega$ in which the probability of a state $x'$ is the probability of reaching $x'$ by starting at $x$ and applying $i$ transitions (this probability is the entry in $Q^i$ in the row indexed by $x$ and the column indexed by $x'$). In other words, $Q^i(x,\cdot)$ is the law on $\Omega$ after $i$ steps of the Markov chain, starting at $x$. The \dfn{total variation distance} $d_{\mathrm{TV}}=d_{\mathrm{TV}}^{\Omega}$ is the metric on the space of distributions on $\Omega$ defined by \[ d_{\mathrm{TV}}(\mu,\nu)=\max_{A\subseteq \Omega} |\mu(A)-\nu(A)| = \frac{1}{2}\sum_{x\in\Omega} |\mu(x)-\nu(x)|. \] For $\varepsilon>0$, the \dfn{mixing time} of ${\bf M}$, denoted $t^{\mathrm{mix}}_{{\bf M}}(\varepsilon)$, is the smallest nonnegative integer $i$ such that $d_{\mathrm{TV}}(Q^i(x,\cdot),\pi)<\varepsilon$ for all $x\in\Omega$. The \dfn{width} of a finite poset $P$, denoted $\mathrm{width}(P)$, is the maximum size of an antichain in $P$. In \Cref{sec:mixing}, we use the method of coupling to prove the following bound on the mixing time of an arbitrary rowmotion Markov chain. \begin{theorem}\label{thm:general_mixing} Let $P$ be a finite poset, and fix a probability $p_x\in(0,1)$ for each $x\in P$. Let $\overline p=\max\limits_{x\in P}p_x$. 
For each $\varepsilon>0$, the mixing time of ${\bf M}_{J(P)}$ satisfies \[t^{\mathrm{mix}}_{{\bf M}_{J(P)}}(\varepsilon)\leq\left\lceil\frac{\log\varepsilon}{\log\left(1-\left(1-\overline{p}\right)^{\mathrm{width}(P)}\right)}\right\rceil.\] \end{theorem} It is instructive to see what \Cref{thm:general_mixing} tells us when $p_x=1/2$ for all $x\in P$. In this case, our upper bound on $t^{\text{mix}}_{{\bf M}_{J(P)}}(\varepsilon)$ is \[\left\lceil\frac{\log\varepsilon}{\log(1-2^{-\mathrm{width}(P)})}\right\rceil\approx 2^{\mathrm{width}(P)}\log(1/\varepsilon),\] which depends only on $\varepsilon$ and $\mathrm{width}(P)$ (not $|P|$). Hence, this bound is actually quite good when $P$ is ``tall and slender.'' In fact, it is nearly optimal when $P$ is a chain (so $J(P)$ is an $(n+1)$-element chain). \begin{theorem}\label{thm:chain} Let $P$ be an $n$-element chain. Fix a probability $p\in(0,1)$, and set $p_x=p$ for all $x\in P$. For each $\varepsilon>0$, the mixing time of ${\bf M}_{J(P)}$ satisfies \[\frac{\log\varepsilon}{\log p}-\frac{\log(p-p^{n+1})-\log(2-2p^{n+1})}{\log p}<t^{\mathrm{mix}}_{{\bf M}_{J(P)}}(\varepsilon)\leq\left\lceil\frac{\log\varepsilon}{\log p}\right\rceil.\] \end{theorem} When $p$ is fixed, the term $\displaystyle \frac{\log(p-p^{n+1})-\log(2-2p^{n+1})}{\log p}$ in \Cref{thm:chain} approaches the constant $\dfrac{\log(p/2)}{\log p}$ as $n\to\infty$. At the other extreme, we can drastically improve the bound in \Cref{thm:general_mixing} when $P$ is an antichain (so $J(P)$ is a Boolean lattice). \begin{theorem}\label{thm:antichain} Let $P$ be an $n$-element antichain, and fix a probability $p_x\in(0,1)$ for each $x\in P$. Let $\overline p=\max\limits_{x\in P}p_x$. 
For each $\varepsilon>0$, the mixing time of ${\bf M}_{J(P)}$ satisfies \[t^{\mathrm{mix}}_{{\bf M}_{J(P)}}(\varepsilon)\leq\left\lceil\frac{\log\varepsilon-\log n}{\log\overline{p}}\right\rceil.\] \end{theorem} We will actually derive \Cref{thm:antichain} from a coupling argument that applies to arbitrary products of semidistrim lattices. When $P$ is an $n$-element antichain and $p_x=1/2$ for all $x\in P$, the upper bound for $t^{\text{mix}}_{{\bf M}_{J(P)}}(\varepsilon)$ in \Cref{thm:general_mixing} is roughly $2^n\log(1/\varepsilon)$, whereas the bound in \Cref{thm:antichain} is roughly $\dfrac{\log(1/\varepsilon)+\log n}{\log 2}$, improving the asymptotics in $n$ from exponential to logarithmic. The coefficient on the $\log(1/\varepsilon)$ term is also lower in \cref{thm:antichain} than in \cref{thm:general_mixing}, so \cref{thm:antichain} provides an asymptotically better bound as $\varepsilon\to0$ as well. \subsection{Semidistrim Lattices} If $P$ is a finite poset, then we can order $J(P)$ by inclusion to obtain a distributive lattice. In fact, Birkhoff's Fundamental Theorem of Finite Distributive Lattices \cite{Birkhoff} states that every finite distributive lattice is isomorphic to the lattice of order ideals of some finite poset. Thus, instead of viewing rowmotion as a bijective operator on the set of order ideals of a finite poset, one can equivalently view it as a bijective operator on the set of \emph{elements} of a distributive lattice. This perspective has led to more general definitions of rowmotion in recent years. Barnard \cite{Barnard} showed how to extend the definition of rowmotion to the broader family of \emph{semidistributive} lattices, while Thomas and Williams \cite{ThomasWilliams} discussed how to extend the definition to the family of \emph{trim} lattices. (Every distributive lattice is semidistributive and trim, but there are semidistributive lattices that are not trim and trim lattices that are not semidistributive.)
One notable example motivating these extended definitions comes from Reading's \emph{Cambrian lattices} \cite{ReadingCambrian}. Suppose $c$ is a Coxeter element of a finite Coxeter group $W$. Reading \cite{ReadingClusters} found a bijection from the $c$-Cambrian lattice to the $c$-noncrossing partition lattice of $W$; under this bijection, rowmotion on the $c$-Cambrian lattice corresponds to the well-studied \emph{Kreweras complementation} operator on the $c$-noncrossing partition lattice of $W$ \cite{Barnard, ThomasWilliams}. See \cite{DefantLin, HopkinsCDE, ThomasWilliams} for other non-distributive lattices where rowmotion has been studied. Recently, the first author and Williams \cite{Semidistrim} introduced the even broader family of \emph{semidistrim} lattices and showed how to define a natural rowmotion operator on them; this is now the broadest family of lattices where rowmotion has been defined. It turns out that we can extend our definition of rowmotion Markov chains to semidistrim lattices. Let us sketch the details here and wait until \Cref{sec:semidistrim} to define semidistrim lattices properly and explain why this definition specializes to the one given above when the lattice is distributive. Let $L$ be a semidistrim lattice, and let $\mathcal{J}_L$ and $\mathcal{M}_L$ be the set of join-irreducible elements of $L$ and the set of meet-irreducible elements of $L$, respectively. There is a specific bijection $\kappa_L\colon\mathcal{J}_L\to\mathcal{M}_L$ satisfying certain properties. The \emph{Galois graph} of $L$ is the loopless directed graph $G_L$ with vertex set $\mathcal{J}_L$ such that for all distinct $j,j'\in\mathcal{J}_L$, there is an arrow $j\to j'$ if and only if $j\not\leq\kappa_L(j')$. Let $\mathrm{Ind}(G_L)$ be the set of independent sets of $G_L$. There is a particular way to label the edges of the Hasse diagram of $L$ with elements of $\mathcal{J}_L$; we write $j_{uv}$ for the label of the edge $u\lessdot v$. 
For $w\in L$, let $\mathcal{D}_L(w)$ be the set of labels of the edges of the form $u\lessdot w$, and let $\mathcal{U}_L(w)$ be the set of labels of the edges of the form $w\lessdot v$. Then $\mathcal{D}_L(w)$ and $\mathcal{U}_L(w)$ are actually independent sets of $G_L$. Moreover, the maps $\mathcal{D}_L,\mathcal{U}_L\colon L\to \mathrm{Ind}(G_L)$ are bijections. The \emph{rowmotion} operator $\mathrm{Row}\colon L\to L$ is defined by $\mathrm{Row}=\mathcal{U}_L^{-1}\circ\mathcal{D}_L$. The \emph{rowmotion Markov chain} ${\bf M}_L$ has $L$ as its set of states. For each $j\in\mathcal{J}_L$, we fix a probability $p_j\in[0,1]$. Starting at a state $u\in L$, we choose a random subset $S$ of $\mathcal{D}_L(u)$ by adding each element $j\in\mathcal{D}_L(u)$ into $S$ with probability $p_j$ and then transition to the new state $u'=\mathrm{Row}_L(\bigvee S)$. When $p_j=1$ for all $j\in\mathcal{J}_L$, the Markov chain ${\bf M}_L$ is deterministic and agrees with rowmotion; indeed, this follows from \cite[Theorem~5.6]{Semidistrim}, which tells us that $\bigvee\mathcal{D}_L(u)=u$ for all $u\in L$. The following theorem generalizes the first statement in \Cref{thm:main_distributive}. \begin{theorem}\label{thm:semidistrim_irreducible} Let $L$ be a semidistrim lattice, and fix a probability $p_j\in(0,1)$ for each join-irreducible element $j\in\mathcal{J}_L$. The rowmotion Markov chain ${\bf M}_L$ is irreducible. \end{theorem} Let us remark that this theorem is not at all obvious. Our proof uses a delicate induction that relies on some difficult results about semidistrim lattices proven in \cite{Semidistrim}. For example, we use the fact that intervals in semidistrim lattices are semidistrim. We can also generalize \Cref{thm:general_mixing} to the realm of semidistrim lattices in the following theorem. Given a semidistrim lattice $L$ and an element $u\in L$, we write $\mathrm{ddeg}(u)$ for the \dfn{down-degree} of $u$, which is the number of elements of $L$ covered by $u$. 
Let $\alpha(G_L)$ denote the independence number of the Galois graph $G_L$; that is, $\alpha(G_L)=\max\limits_{\mathcal I\in\mathrm{Ind}(G_L)}|\mathcal I|$. Equivalently, $\alpha(G_L)=\max\limits_{u\in L}\mathrm{ddeg}(u)$. If $P$ is a finite poset, then $\alpha(G_{J(P)})=\mathrm{width}(P)$. \begin{theorem}\label{thm:semidistrim_mixing} Let $L$ be a semidistrim lattice, and fix a probability $p_j\in(0,1)$ for each $j\in \mathcal{J}_L$. Let $\overline p=\max\limits_{j\in\mathcal{J}_L}p_j$. For each $\varepsilon>0$, the mixing time of ${\bf M}_{L}$ satisfies \[t^{\mathrm{mix}}_{{\bf M}_{L}}(\varepsilon)\leq\left\lceil\frac{\log\varepsilon}{\log\left(1-\left(1-\overline{p}\right)^{\alpha(G_L)}\right)}\right\rceil.\] \end{theorem} We were not able to find a formula for the stationary distribution of the rowmotion Markov chain of an arbitrary semidistrim (or even semidistributive or trim) lattice; this serves to underscore the anomalistic nature of the formula for distributive lattices in \Cref{thm:main_distributive}. However, there is one family of semidistrim (in fact, semidistributive) lattices where we were able to find such a formula. Given positive integers $a$ and $b$, let $\includeSymbol{hexx}_{a,b}$ be the lattice obtained by taking two disjoint chains $x_1<\cdots <x_a$ and $y_1<\cdots< y_b$ and adding a bottom element $\hat 0$ and a top element $\hat 1$. Let us remark that $\includeSymbol{hexx}_{m-1,m-1}$ is isomorphic to the weak order of the dihedral group of order $2m$, whereas $\includeSymbol{hexx}_{m-1,1}$ is isomorphic to the $c$-Cambrian lattice of that same dihedral group (for any Coxeter element $c$). We have $\mathcal J_{\includeSymbol{hexx}_{a,b}}=\mathcal M_{\includeSymbol{hexx}_{a,b}}=\{x_1,\ldots,x_a,y_1,\ldots,y_b\}$. 
For $2\leq i\leq a$ and $2\leq i'\leq b$, we have $\kappa_{\includeSymbol{hexx}_{a,b}}(x_i)=x_{i-1}$ and $\kappa_{\includeSymbol{hexx}_{a,b}}(y_{i'})=y_{i'-1}$; moreover, $\kappa_{\includeSymbol{hexx}_{a,b}}(x_1)=y_b$ and $\kappa_{\includeSymbol{hexx}_{a,b}}(y_1)=x_a$. This is illustrated in \Cref{Fig3} when $a=3$ and $b=2$. \Cref{Fig2} shows the transition diagram of ${\bf M}_{\text{\includeSymbol{hexx}}_{2,1}}$. \begin{figure}[ht] \begin{center}{\includegraphics[height=8.992cm]{RowmotionMarkovPIC6}} \end{center} \caption{The lattice $\includeSymbol{hexx}_{3,2}$. Next to each edge $u\lessdot v$ is a box containing the edge label $j_{uv}$. The red arrows represent the action of $\kappa_{\includeSymbol{hexx}_{3,2}}$.}\label{Fig3} \end{figure} \begin{theorem}\label{thm:hexx} Fix positive integers $a$ and $b$, and let $\kappa=\kappa_{\includeSymbol{hexx}_{a,b}}$. For each $j\in\mathcal{J}_{\text{\includeSymbol{hexx}}_{a,b}}$, fix a probability $p_j\in(0,1)$. There is a constant $Z(\includeSymbol{hexx}_{a,b})$ (depending only on $a$ and $b$) such that in the stationary distribution of ${\bf M}_{\text{\includeSymbol{hexx}}_{a,b}}$, we have \begin{align*} \mathbb P(\hat 0)&=\frac{1}{Z(\includeSymbol{hexx}_{a,b})}p_{x_1}p_{y_1}\left(1-\prod_{j\in\mathcal{J}_{\text{\includeSymbol{hexx}}_{a,b}}}p_j\right); \\ \mathbb P(\hat 1)&=\frac{1}{Z(\includeSymbol{hexx}_{a,b})}\left(1-\prod_{j\in\mathcal{J}_{\text{\includeSymbol{hexx}}_{a,b}}}p_j\right); \\ \mathbb P(x_i)&=\frac{1}{Z(\text{\includeSymbol{hexx}}_{a,b})}\left((1-p_{x_1})\prod_{\substack{j\in\mathcal{J}_{\text{\includeSymbol{hexx}}_{a,b}} \\ \kappa(j)\geq x_i}}p_j+(1-p_{y_1})\prod_{\substack{j\in\mathcal{J}_{\text{\includeSymbol{hexx}}_{a,b}} \\ \kappa(j)\not<x_i}}p_j\right)\quad\text{for}\quad 1\leq i \leq a; \\ \mathbb P(y_i)&=\frac{1}{Z(\text{\includeSymbol{hexx}}_{a,b})}\left((1-p_{y_1})\prod_{\substack{j\in\mathcal{J}_{\text{\includeSymbol{hexx}}_{a,b}} \\ \kappa(j)\geq 
y_i}}p_j+(1-p_{x_1})\prod_{\substack{j\in\mathcal{J}_{\text{\includeSymbol{hexx}}_{a,b}} \\ \kappa(j)\not<y_i}}p_j\right)\quad\text{for}\quad 1\leq i \leq b. \end{align*} \end{theorem} \begin{figure}[ht] \begin{center}{\includegraphics[height=11.702cm]{RowmotionMarkovPIC4}} \end{center} \caption{The transition diagram of ${\bf M}_{\text{\includeSymbol{hexx}}_{2,1}}$ drawn over the Hasse diagram of $\includeSymbol{hexx}_{2,1}$. Next to each edge $u\lessdot v$ is a box containing the edge label $j_{uv}$. }\label{Fig2} \end{figure} \cref{sec:prelim} provides preliminary background on Markov chains and posets, as well as a thorough introduction to rowmotion on semidistrim lattices. In \cref{sec:irreducible}, we prove \cref{thm:semidistrim_irreducible}, which implies the first statement in \cref{thm:main_distributive}. \cref{sec:stationary} proves the second statement of \Cref{thm:main_distributive} and proves \cref{thm:hexx}. \cref{sec:mixing} is devoted to mixing times; it is in this section that we prove \cref{thm:general_mixing,thm:semidistrim_mixing,thm:chain,thm:antichain}. We conclude in \cref{sec:conclusion} with a discussion of further research and open questions. \section{Preliminaries}\label{sec:prelim} \subsection{Markov Chains} In this article, a (finite) \dfn{Markov chain} ${\bf M}$ consists of a finite set $\Omega$ of \dfn{states} together with a \dfn{transition probability} $\mathbb P(s\to s')$ assigned to each pair $(s,s')\in\Omega\times\Omega$ so that $\sum_{s'\in\Omega}\mathbb P(s\to s')=1$ for every $s\in \Omega$. The set $\Omega$ is called the \dfn{state space} of ${\bf M}$. We can represent ${\bf M}$ via its \dfn{transition diagram}, which is the directed graph with vertex set $\Omega$ in which we draw an arrow $s\to s'$ labeled by the transition probability $\mathbb P(s\to s')$ whenever this transition probability is positive. 
We can also represent ${\bf M}$ via its \dfn{transition matrix}, which is the matrix $Q=(Q(s,s'))_{s,s'\in\Omega}$ with rows and columns indexed by $\Omega$, where $Q(s,s')=\mathbb P(s\to s')$. Note that $Q$ is row-stochastic, meaning each of its rows consists of probabilities that sum to $1$. A \dfn{stationary} distribution of ${\bf M}$ is a probability distribution $\pi$ on $\Omega$ such that the vector $(\pi(s))_{s\in\Omega}$ is a left eigenvector of $Q$ with eigenvalue $1$. We say ${\bf M}$ is \dfn{irreducible} if for all $s,s'\in\Omega$, there exists a directed path from $s$ to $s'$ in the transition diagram of ${\bf M}$. It is well known that if ${\bf M}$ is irreducible, then it has a unique stationary distribution. \subsection{Posets} All posets in this article are assumed to be finite. Given a poset $P$ and elements $x,y\in P$ with $x\leq y$, the \dfn{interval} from $x$ to $y$ is the set $[x,y]=\{z\in P:x\leq z\leq y\}$. Whenever we consider such an interval $[x,y]$, we will tacitly view it as a subposet of $P$. If $x<y$ and $[x,y]=\{x,y\}$, then we say $y$ \dfn{covers} $x$ and write $x\lessdot y$. If $P_1,\ldots,P_r$ are posets, then their \dfn{product} $\prod_{i=1}^rP_i$ is the poset whose underlying set is the Cartesian product of $P_1,\ldots,P_r$ and whose order relation is defined so that $(x_1,\ldots,x_r)\leq (x_1',\ldots,x_r')$ if and only if $x_i\leq x_i'$ in $P_i$ for all $i\in [r]$. A \dfn{lattice} is a poset $L$ such that any two elements $u,v\in L$ have a greatest lower bound, which is called their \dfn{meet} and denoted by $u\wedge v$, and a least upper bound, which is called their \dfn{join} and denoted by $u\vee v$. We denote the unique minimal element of $L$ by $\hat 0$ and the unique maximal element of $L$ by $\hat 1$. The meet and join operations are commutative and associative, so we can write $\bigwedge X$ and $\bigvee X$ for the meet and join, respectively, of an arbitrary subset $X\subseteq L$. 
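As a concrete illustration of meets and joins of arbitrary subsets (using a standard example, the divisor lattice, which is not one of the lattices studied in this article): the positive divisors of $60$, ordered by divisibility, form a lattice in which meet is $\gcd$ and join is $\operatorname{lcm}$, so $\bigwedge X$ and $\bigvee X$ can be computed by folding; the initial values of the folds realize the empty-set conventions stated next.

```python
from functools import reduce
from math import gcd

# The divisors of 60, ordered by divisibility, form a lattice with
# meet = gcd, join = lcm, 0-hat = 1, and 1-hat = 60.  (Illustrative only.)
TOP, BOTTOM = 60, 1

def lcm(u, v):
    return u * v // gcd(u, v)

def big_meet(X):
    # Folding from 1-hat realizes the convention: the meet of the empty set is 1-hat.
    return reduce(gcd, X, TOP)

def big_join(X):
    # Folding from 0-hat realizes the convention: the join of the empty set is 0-hat.
    return reduce(lcm, X, BOTTOM)

assert big_meet([12, 20]) == 4 and big_join([12, 20]) == 60
assert big_meet([]) == TOP and big_join([]) == BOTTOM
```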
We use the conventions $\bigwedge\emptyset=\hat 1$ and $\bigvee\emptyset=\hat 0$. An element that covers $\hat 0$ is called an \dfn{atom}. \subsection{Semidistrim Lattices}\label{sec:semidistrim} Following \cite{Semidistrim}, we provide the necessary background on semidistrim lattices. Let $L$ be a lattice. An element $j\in L$ is called \dfn{join-irreducible} if it covers a unique element of $L$; in this case, we write $j_*$ for the unique element of $L$ covered by $j$. Dually, an element $m\in L$ is called \dfn{meet-irreducible} if it is covered by a unique element of $L$; in this case, we write $m^*$ for the unique element of $L$ that covers $m$. Let $\mathcal{J}_L$ and $\mathcal{M}_L$ be the set of join-irreducible elements of $L$ and the set of meet-irreducible elements of $L$, respectively. We say a join-irreducible element $j_0\in\mathcal{J}_L$ is \dfn{join-prime} if there exists $m_0\in\mathcal{M}_L$ such that we have a partition $L=[j_0,\hat 1]\sqcup[\hat 0,m_0]$. In this case, $m_0$ is called \dfn{meet-prime}, and the pair $(j_0,m_0)$ is called a \dfn{prime pair}. A \dfn{pairing} on a lattice $L$ is a bijection $\kappa\colon \mathcal{J}_L\to \mathcal{M}_L$ such that \[\kappa(j)\wedge j=j_*\quad\text{and}\quad\kappa(j)\vee j=(\kappa(j))^*\] for every $j\in \mathcal{J}_L$. (Not every lattice has a pairing.) We say $L$ is \dfn{uniquely paired} if it has a unique pairing; in this case, we denote the unique pairing by $\kappa_L$. If $L$ is uniquely paired and $(j_0,m_0)$ is a prime pair of $L$, then $\kappa_L(j_0)=m_0$. Suppose $L$ is uniquely paired. For $u\in L$, we write \[J_L(u)=\{j\in\mathcal{J}_L:j\leq u\}\quad\text{and}\quad M_L(u)=\{j\in\mathcal{J}_L:\kappa_L(j)\geq u\}.\] There is an associated loopless directed graph $G_L$, called the \dfn{Galois graph} of $L$, defined as follows. The vertex set of $G_L$ is $\mathcal{J}_L$. For distinct $j,j'\in\mathcal{J}_L$, there is an arrow $j\to j'$ in $G_L$ if and only if $j\not\leq\kappa_L(j')$. 
An \dfn{independent set} of $G_L$ is a set $\mathcal I$ of vertices of $G_L$ such that for all $j,j'\in \mathcal I$, there is not an arrow from $j$ to $j'$ in $G_L$. Let $\mathrm{Ind}(G_L)$ be the set of independent sets of $G_L$. We say a uniquely paired lattice $L$ is \dfn{compatibly dismantlable} if either $|L|=1$ or there is a prime pair $(j_0,m_0)$ of $L$ such that the following compatibility conditions hold: \begin{itemize} \item $[j_0,\hat{1}]$ is compatibly dismantlable, and there is a bijection \[\alpha\colon M_L(j_0) \to \mathcal{J}_{[j_0,\hat 1]}\] given by $\alpha(j)=j_0\vee j$ such that $\kappa_{[j_0,\hat 1]}(\alpha(j))=\kappa_L(j)$ for all $j\in M_L(j_0)$; \item $[\hat{0},m_0]$ is compatibly dismantlable, and there is a bijection \[\beta\colon \kappa_L(J_L(m_0)) \to \mathcal{M}_{[\hat 0,m_0]}\] given by $\beta(m)=m_0\wedge m$ such that $\beta(\kappa_L(j))=\kappa_{[\hat 0,m_0]}(j)$ for all $j\in J_L(m_0)$. \end{itemize} Such a prime pair $(j_0,m_0)$ is called a \dfn{dismantling pair} for $L$. \begin{proposition}[{\cite[Proposition~5.3]{Semidistrim}}] Let $L$ be a compatibly dismantlable lattice. For every cover relation $u\lessdot v$ in $L$, there is a unique join-irreducible element $j_{uv}\in J_L(v)\cap M_L(u)$. \end{proposition} Suppose $L$ is compatibly dismantlable. The previous proposition allows us to label each edge $u\lessdot v$ in the Hasse diagram of $L$ with the join-irreducible element $j_{uv}$. For $w\in L$, we define the \dfn{downward label set} $\mathcal{D}_L(w)=\{j_{uw}:u\lessdot w\}$ and the \dfn{upward label set} $\mathcal{U}_L(w)=\{j_{wv}:w\lessdot v\}$. A lattice $L$ is called \dfn{semidistrim} if it is compatibly dismantlable and $\mathcal{D}_L(w),\mathcal{U}_L(w)\in\mathrm{Ind}(G_L)$ for all $w\in L$. As mentioned in \Cref{sec:intro}, semidistrim lattices generalize semidistributive lattices and trim lattices. 
\begin{theorem}[{\cite[Theorem~6.2]{Semidistrim}}] Semidistributive lattices are semidistrim, and trim lattices are semidistrim. Hence, distributive lattices are semidistrim. \end{theorem} \begin{example}\label{exam:1} Let us explicate how these general notions specialize when we consider a distributive lattice. Let $P$ be a finite poset, and let $L=J(P)$. For $x\in P$, write $\Delta(x)$ instead of $\Delta(\{x\})$ and $\nabla(x)$ instead of $\nabla(\{x\})$. There is a natural bijection $P\to \mathcal{J}_{L}$ given by $x\mapsto\Delta(x)$. The unique pairing on $L$ is given by $\kappa_L(\Delta(x))=P\setminus\nabla(x)$. Every order ideal in $J(P)$ either contains $\Delta(x)$ or is contained in $P\setminus\nabla(x)$, but not both. Hence, $(\Delta(x),P\setminus\nabla(x))$ is a prime pair (this shows that every join-irreducible element of a distributive lattice is join-prime). The Galois graph $G_L$ is isomorphic (via the map $x\mapsto\Delta(x)$) to the directed comparability graph of $P$, which has vertex set $P$ and has an arrow $x\to y$ for every strict order relation $y<x$ in $P$. The independent sets in $G_L$ correspond to antichains in $P$. It turns out that for any $x_0\in P$, the pair $(\Delta(x_0),P\setminus\nabla(x_0))$ is a dismantling pair. Indeed, the interval $[\Delta(x_0),\hat 1]=\{I\in J(P):x_0\in I\}$ can be identified with the lattice $J(P\setminus\Delta(x_0))$, and the set $M_L(\Delta(x_0))$ can be identified with $P\setminus\Delta(x_0)$. Hence, the bijection $\alpha\colon M_L(\Delta(x_0))\to\mathcal{J}_{[\Delta(x_0),\hat 1]}$ is the usual correspondence between elements of $P\setminus\Delta(x_0)$ and join-irreducible elements of $J(P\setminus\Delta(x_0))$. Similarly, $[\hat 0,P\setminus\nabla(x_0)]=\{I\in J(P):x_0\not\in I\}$ can be identified with the lattice $J(P\setminus\nabla(x_0))$, and the set $\kappa_L(J_L(P\setminus\nabla(x_0)))$ can be identified with $P\setminus\nabla(x_0)$. 
Hence, the bijection $\beta\colon \kappa_L(J_L(P\setminus\nabla(x_0)))\to\mathcal{M}_{[\hat 0,P\setminus\nabla(x_0)]}$ is the usual correspondence between elements of $P\setminus\nabla(x_0)$ and meet-irreducible elements of $J(P\setminus\nabla(x_0))$. Whenever we have a cover relation $I\lessdot I'$ in $L$, there is a unique element $z\in P$ such that $I'=I\sqcup\{z\}$; then $j_{II'}=\Delta(z)$. For $I\in L$, the downward label set $\mathcal{D}_L(I)$ and the upward label set $\mathcal{U}_L(I)$ correspond (via the map $x\mapsto\Delta(x)$) to $\max(I)$ and $\min(P\setminus I)$, respectively; these are both independent sets in $G_L$ (i.e., antichains in $P$), so $L$ is semidistrim. \end{example} \begin{theorem}[{\cite[Theorem~6.4]{Semidistrim}}]\label{thm:row_well_defined} If $L$ is a semidistrim lattice, then the maps $\mathcal{D}_L\colon L\to\mathrm{Ind}(G_L)$ and $\mathcal{U}_L\colon L\to\mathrm{Ind}(G_L)$ are bijections. \end{theorem} Let $L$ be a semidistrim lattice. Using the preceding theorem, we can define the \dfn{rowmotion} operator $\mathrm{Row}_L\colon L\to L$ by \[\mathrm{Row}_L=\mathcal{U}_L^{-1}\circ\mathcal{D}_L.\] Referring to \Cref{exam:1}, we find that this definition agrees with the one given in \eqref{eq:row_def} when $L$ is distributive. Moreover, this definition coincides with the definition due to Barnard \cite{Barnard} when $L$ is semidistributive and with the definition due to Thomas and Williams \cite{ThomasWilliams} when $L$ is trim. We can now define rowmotion Markov chains for semidistrim lattices, thereby vastly generalizing the definition we gave in \Cref{sec:intro} for distributive lattices. \begin{definition}\label{def:semidistrim_Markov} Let $L$ be a semidistrim lattice. For each $j\in \mathcal{J}_L$, fix a probability $p_j\in[0,1]$. We define the \dfn{rowmotion Markov chain} ${\bf M}_L$ as follows. The state space of ${\bf M}_L$ is $L$. 
For any $u,u'\in L$, the transition probability from $u'$ to $u$ is \[\mathbb P(u'\to u)=\begin{cases} \left(\prod\limits_{j\in \mathcal{U}_L(u)}p_j\right)\left(\prod\limits_{j'\in \mathcal{D}_L(u')\setminus\mathcal{U}_L(u)}(1-p_{j'})\right) & \mbox{if }\mathcal{U}_L(u)\subseteq\mathcal{D}_L(u'); \\ 0 & \mbox{otherwise.}\end{cases}\] \end{definition} If $L$ is semidistrim and $u\in L$, then we have \cite[Theorem~5.6]{Semidistrim} \[u=\bigvee\mathcal{D}_L(u)=\bigwedge\kappa_L(\mathcal{U}_L(u)).\] It follows from \Cref{thm:row_well_defined} that $\mathcal I=\mathcal{D}_L(\bigvee\mathcal I)=\mathcal{U}_L(\bigwedge\kappa_L(\mathcal I))$ for every $\mathcal I\in\mathrm{Ind}(G_L)$. This enables us to give a more intuitive description of the rowmotion Markov chain ${\bf M}_L$ as follows. Starting from a state $u\in L$, choose a random subset $S$ of $\mathcal{D}_L(u)\in\mathrm{Ind}(G_L)$ by adding each element $j\in\mathcal{D}_L(u)$ into $S$ with probability $p_j$; then transition to the new state $\bigwedge\kappa_L(S)=\mathrm{Row}_L(\bigvee S)$. Observe that if $p_j=1$ for all $j\in\mathcal{J}_L$, then ${\bf M}_{L}$ is deterministic and agrees with rowmotion. On the other hand, if $p_j=0$ for all $j\in \mathcal{J}_L$, then ${\bf M}_L$ is deterministic and sends all elements of $L$ to~$\hat 1$. \section{Irreducibility}\label{sec:irreducible} In this section, we prove \Cref{thm:semidistrim_irreducible}, which tells us that rowmotion Markov chains of semidistrim lattices are irreducible. Note that the first statement in \Cref{thm:main_distributive} is a special case of this result. The main idea is to use induction on the size of the lattice. When $L$ is a distributive lattice $J(P)$, the argument proceeds by choosing an element $x$ of $P$, identifying $J(P\setminus \Delta(x))$ with a sublattice of $J(P)$, and showing that every arrow in the transition diagram of ${\bf M}_{J(P\setminus\Delta(x))}$ is also an arrow in the transition diagram of ${\bf M}_{J(P)}$. 
To handle arbitrary semidistrim lattices, we use the same basic strategy, but we must invoke the following difficult result from \cite{Semidistrim}. \begin{theorem}[{\cite[Theorem~7.8, Corollary~7.9, Corollary~7.10]{Semidistrim}}]\label{thm:stuff_we_need} Let $L$ be a semidistrim lattice, and let $[u,v]$ be an interval in $L$. Then $[u,v]$ is a semidistrim lattice. There are bijections \[\alpha_{u,v}\colon J_L(v)\cap M_L(u)\to\mathcal{J}_{[u,v]}\quad\text{and}\quad\beta_{u,v}\colon\kappa_L(J_L(v)\cap M_L(u))\to\mathcal{M}_{[u,v]}\] given by $\alpha_{u,v}(j)=u\vee j$ and $\beta_{u,v}(m)=v\wedge m$. We have $\kappa_{[u,v]}(\alpha_{u,v}(j))=\beta_{u,v}(\kappa_L(j))$ for all $j\in J_L(v)\cap M_L(u)$. The map $\alpha_{u,v}$ is an isomorphism from an induced subgraph of the Galois graph $G_L$ to the Galois graph $G_{[u,v]}$. If $u\leq w\lessdot w'\leq v$ and $j_{ww'}$ is the label of the cover relation $w\lessdot w'$ in $L$, then $\alpha_{u,v}(j_{ww'})$ is the label of the same cover relation in $[u,v]$. \end{theorem} \begin{proof}[Proof of \Cref{thm:semidistrim_irreducible}] Let $L$ be a semidistrim lattice. The proof is trivial when $|L|=1$, so we may assume $|L|\geq 2$ and proceed by induction on $|L|$. Fix $u\in L$. The transition diagram of ${\bf M}_L$ contains an arrow $u\to \hat 1$; our goal is to prove that it also contains a path from $\hat 1$ to $u$. First, suppose $u=\hat 0$. Let $k$ be the size of the orbit of $\mathrm{Row}_L$ containing $\hat 1$. The transition diagram of ${\bf M}_L$ contains the path \[\hat 1\to\mathrm{Row}_L(\hat 1)\to\mathrm{Row}_L^2(\hat 1)\to\cdots\to\mathrm{Row}_L^{k-1}(\hat 1).\] But $\mathrm{Row}_L(\hat 0)=\hat 1=\mathrm{Row}_L(\mathrm{Row}_L^{k-1}(\hat 1))$, so $\mathrm{Row}_L^{k-1}(\hat 1)=\hat 0=u$. This demonstrates that the desired path exists in this case. Now suppose $u\neq\hat 0$. 
By \Cref{thm:stuff_we_need}, the interval $[u,\hat 1]$ is a semidistrim lattice, so it follows by induction that ${\bf M}_{[u,\hat 1]}$ is irreducible. Suppose $w\to w'$ is an arrow in the transition diagram of ${\bf M}_{[u,\hat 1]}$. This means that there exists a set $S\subseteq \mathcal{D}_{[u,\hat 1]}(w)$ such that $w'=\bigwedge\kappa_{[u,\hat 1]}(S)$. Let $\alpha_{u,\hat 1}$ and $\beta_{u,\hat 1}$ be the bijections from \Cref{thm:stuff_we_need} (where we have set $v=\hat 1$). Note that $\beta_{u,\hat 1}(m)=\hat 1\wedge m=m$ for all $m\in\kappa_L(M_L(u))$. Let $T=\alpha_{u,\hat 1}^{-1}(S)$. It follows from \Cref{thm:stuff_we_need} that $T\subseteq\mathcal{D}_L(w)$ and that \[w'=\bigwedge\kappa_{[u,\hat 1]}(S)=\bigwedge\kappa_{[u,\hat 1]}(\alpha_{u,\hat 1}(T))=\bigwedge\beta_{u,\hat 1}(\kappa_L(T))=\bigwedge\kappa_L(T).\] This shows that $w\to w'$ is an arrow in the transition diagram of ${\bf M}_{L}$. We have proven that all arrows in the transition diagram of ${\bf M}_{[u,\hat 1]}$ are also arrows in the transition diagram of ${\bf M}_L$. Since ${\bf M}_{[u,\hat 1]}$ is irreducible, there is a path from $\hat 1$ to $u$ in the transition diagram of ${\bf M}_{[u,\hat 1]}$. This path is also in the transition diagram of ${\bf M}_{L}$, so the proof is complete. \end{proof} \section{Stationary Distributions}\label{sec:stationary} This section is devoted to completing the proof of \Cref{thm:main_distributive} and to proving \Cref{thm:hexx}. That is, we will compute the stationary distribution of the rowmotion Markov chain of an arbitrary distributive lattice and the stationary distribution of the rowmotion Markov chain of $\includeSymbol{hexx}_{a,b}$. We begin with distributive lattices. In what follows, we write $\mathcal A(P)$ for the set of antichains of a finite poset $P$. Note that the map $I\mapsto \max(I)$ is a bijection from $J(P)$ to $\mathcal A(P)$. \begin{lemma}\label{lem:stationary} Let $P$ be a finite poset, and let $A\in\mathcal A(P)$. 
For each $x\in P$, fix a real number $q_x>0$. We have \[\sum_{\substack{A'\in\mathcal A(P) \\ A'\supseteq A}}\prod_{x\in A'}q_x\prod_{x'\in\Delta(A')\setminus A'}(1+q_{x'})=\prod_{y\in A}q_y\prod_{y'\in P\setminus\nabla(A)}(1+q_{y'}).\] \end{lemma} \begin{proof} The map $S\mapsto S\cup A$ is a bijection from the power set of $P\setminus\nabla(A)$ to the set $\{T\subseteq P:\max(T)\supseteq A\}$. Therefore, \[\prod_{y\in A}q_y\prod_{y'\in P\setminus\nabla(A)}(1+q_{y'})=\sum_{\substack{T\subseteq P \\ \max(T)\supseteq A}}\prod_{x\in T}q_x.\] For each $A'\in \mathcal A(P)$ with $A'\supseteq A$, the map $S\mapsto S\cup A'$ is a bijection from the power set of $\Delta(A')\setminus A'$ to the set $\{T\subseteq P:\max(T)=A'\}$, so \[\sum_{\substack{T\subseteq P \\ \max(T)=A'}}\prod_{x\in T}q_x=\prod_{x\in A'}q_x\prod_{x'\in\Delta(A')\setminus A'}(1+q_{x'}).\] The desired result follows by summing over all such $A'$. \end{proof} \begin{proof}[Proof of \Cref{thm:main_distributive}] Fix a finite poset $P$ and a probability $p_x\in(0,1)$ for each $x\in P$. We proved in \Cref{sec:irreducible} that ${\bf M}_{J(P)}$ is irreducible, so it has a unique stationary distribution. Fix $I\in J(P)$. Our goal is to prove that \begin{equation}\label{eq:stationary} \prod_{x\in I}p_x^{-1}=\sum_{I'\in J(P)}\mathbb P(I'\to I)\prod_{x'\in I'}p_{x'}^{-1}. \end{equation} Let $A=\min(P\setminus I)$. If $\mathbb P(I'\to I)\neq 0$, then $\max(I')\supseteq A$, and in this case, we have $\mathbb P(I'\to I)=\prod\limits_{y\in A}p_y\prod\limits_{y'\in\max(I')\setminus A}(1-p_{y'})$. 
Writing $A'=\max(I')$, we can rewrite the right-hand side of \eqref{eq:stationary} as \begin{align*} \sum_{\substack{A'\in\mathcal A(P) \\ A'\supseteq A}}\prod_{y\in A}p_y\prod_{y'\in A'\setminus A}(1-p_{y'})\prod_{x'\in \Delta(A')}p_{x'}^{-1}&=\sum_{\substack{A'\in\mathcal A(P) \\ A'\supseteq A}}\prod_{y\in A}\frac{p_y}{1-p_y}\prod_{y'\in A'}\frac{1-p_{y'}}{p_{y'}}\prod_{x'\in \Delta(A')\setminus A'}p_{x'}^{-1} \\ &=\sum_{\substack{A'\in\mathcal A(P) \\ A'\supseteq A}}\prod_{y\in A}q_y^{-1}\prod_{y'\in A'}q_{y'}\prod_{x'\in \Delta(A')\setminus A'}(1+q_{x'}), \end{align*} where we define $q_x=\dfrac{1-p_x}{p_x}$. Appealing to \Cref{lem:stationary}, we can rewrite this as \[\prod_{x\in P\setminus\nabla(A)}(1+q_{x}).\] Since $P\setminus \nabla(A)=I$ and $1+q_{x}=p_{x}^{-1}$, this completes the proof. \end{proof} We now move on to the lattices $\includeSymbol{hexx}_{a,b}$. \begin{proof}[Proof of \Cref{thm:hexx}] Recall that $\includeSymbol{hexx}_{a,b}$ is obtained by adding the minimal element $\hat 0$ and the maximal element $\hat 1$ to the disjoint chains $x_1<\cdots<x_a$ and $y_1<\cdots<y_b$. The join-irreducible elements are $x_1,\dots,x_a,y_1,\dots,y_b$; for simplicity, let $q_i=p_{x_i}$ and $r_i = p_{y_i}$. The transition probabilities of ${\bf M}_{\text{\includeSymbol{hexx}}_{a,b}}$ are as follows: \begin{equation*} \setlength{\tabcolsep}{12pt} \begin{tabular}{llll} $\mathbb{P}(\hat 1 \to \hat 0) = q_1 r_1$ & $\mathbb{P}(\hat 0 \to \hat 1) = 1$ & $\mathbb{P}(x_{i+1}\to x_i) = q_{i+1}$ & $\mathbb{P}(y_{i+1} \to y_i) = r_{i+1}$ \\ $\mathbb{P}(\hat 1 \to \hat 1) = (1-q_1)(1-r_1)$ & & $\mathbb{P}(x_1 \to y_b) = q_1$ & $\mathbb{P}(y_1 \to x_a) = r_1$ \\ $\mathbb{P}(\hat 1 \to x_a) = (1-q_1)r_1$ & & $\mathbb{P}(x_i \to \hat 1) = 1-q_i$ & $\mathbb{P}(y_i \to \hat 1) = 1-r_i$ \\ $\mathbb{P}(\hat 1 \to y_b) = q_1(1-r_1)$.
&&& \end{tabular} \end{equation*} Dividing out the normalization factor $Z(\includeSymbol{hexx}_{a,b})$, it suffices to show that the following measure $\mu$ is stationary: \begin{align*} \mu(\hat 0) &= q_1r_1\left(1-\prod_{i=1}^a q_i \prod_{i'=1}^b r_{i'}\right) \\ \mu(\hat 1) &= 1-\prod_{i=1}^a q_i \prod_{i'=1}^b r_{i'} \\ \mu(x_i) &= (1-q_1) r_1 \prod_{k=i+1}^a q_k + q_1(1-r_1) \prod_{k=i+1}^a q_k \prod_{k'=1}^b r_{k'} \\ \mu(y_i) &= q_1(1-r_1) \prod_{k=i+1}^b r_k + (1-q_1)r_1\prod_{k=1}^a q_k \prod_{k'=i+1}^b r_{k'}. \end{align*} First, we have \begin{align*} \sum_{z\in\,\includeSymbol{hexx}_{a,b}}\mu(z)\mathbb{P}(z\to\hat0) = q_1r_1\mu(\hat 1) &= \mu(\hat 0); \\ \sum_{z\in\,\includeSymbol{hexx}_{a,b}}\mu(z)\mathbb{P}(z\to x_i) = q_{i+1}\mu(x_{i+1}) &= \mu(x_i); \\ \sum_{z\in\,\includeSymbol{hexx}_{a,b}}\mu(z)\mathbb{P}(z\to y_{i'}) = r_{i'+1}\mu(y_{i'+1}) &= \mu(y_{i'}) \end{align*} for $1 \leq i \leq a-1$ and $1 \leq i' \leq b-1$. For $\hat 1$, we have \begin{align*} \sum_{z\in\,\includeSymbol{hexx}_{a,b}}\mu(z)\mathbb{P}(z\to\hat1) &= (1-q_1)(1-r_1)\mu(\hat1) + \mu(\hat0) + \sum_{i=1}^a (1-q_i)\mu(x_i) + \sum_{i'=1}^b (1-r_{i'})\mu(y_{i'}). \end{align*} To show this equals $\mu(\hat1)$, it suffices to show \begin{align*} \sum_{i=1}^a (1-q_i)\mu(x_i) + \sum_{i'=1}^b (1-r_{i'})\mu(y_{i'}) = (q_1 + r_1 - 2q_1r_1)\mu(\hat1).
\end{align*} We expand the first sum as \begin{align*} \sum_{i=1}^a (1-q_i)\mu(x_i) &= (1-q_1)r_1 \sum_{i=1}^a \left((1-q_i)\prod_{k=i+1}^a q_k\right) + q_1(1-r_1)\prod_{i'=1}^b r_{i'} \sum_{i=1}^a \left((1-q_i)\prod_{k=i+1}^a q_k\right) \\ &= (1-q_1)r_1\left(1-\prod_{i=1}^a q_i\right) + q_1(1-r_1)\prod_{i'=1}^b r_{i'}\left(1-\prod_{i=1}^a q_i\right) \\ &= (1-q_1)r_1 + q_1(1-r_1)\prod_{i'=1}^b r_{i'} - (1-q_1)r_1\prod_{i=1}^a q_i - q_1(1-r_1)\prod_{i=1}^a q_i \prod_{i'=1}^b r_{i'} \end{align*} and similarly expand the second sum as \begin{align*} \sum_{i'=1}^b (1-r_{i'})\mu(y_{i'}) = q_1(1-r_1) + (1-q_1)r_1\prod_{i=1}^a q_i - q_1(1-r_1)\prod_{i'=1}^b r_{i'} - (1-q_1)r_1\prod_{i=1}^a q_i \prod_{i'=1}^b r_{i'}. \end{align*} Combining them yields \begin{align*} \sum_{i=1}^a (1-q_i)\mu(x_i) + \sum_{i'=1}^b (1-r_{i'})\mu(y_{i'}) &= \left((1-q_1)r_1 + q_1(1-r_1)\right)\left(1-\prod_{i=1}^a q_i\prod_{i'=1}^b r_{i'}\right) \\ &= (q_1+r_1-2q_1r_1)\mu(\hat 1), \end{align*} as desired. It remains to check $x_a$ and $y_b$. For $x_a$, we have \begin{align*} \sum_{z\in\,\includeSymbol{hexx}_{a,b}}\mu(z)\mathbb{P}(z\to x_a) &= (1-q_1)r_1\mu(\hat1) + r_1\mu(y_1) \\ &= (1-q_1)r_1 - (1-q_1)r_1 \prod_{i=1}^a q_i \prod_{i'=1}^b r_{i'} \\ &\hphantom{=}+ q_1(1-r_1)\prod_{i'=1}^b r_{i'} + (1-q_1)r_1 \prod_{i=1}^a q_i \prod_{i'=1}^b r_{i'} \\ &= (1-q_1)r_1 + q_1(1-r_1)\prod_{i'=1}^b r_{i'} \\ &= \mu(x_a). \end{align*} The computation for $y_b$ is essentially identical. This completes the proof that $\mu$ is stationary. \end{proof} \section{Mixing Times}\label{sec:mixing} We now study the mixing times of rowmotion Markov chains. \subsection{Couplings} Let ${\bf M}$ be an irreducible Markov chain with state space $\Omega$, stationary distribution $\pi$, and transition probabilities $\mathbb P(s\to s')$ for all $s,s'\in \Omega$.
A \dfn{Markovian coupling} for ${\bf M}$ is a sequence $(X_i,Y_i)_{i\geq 0}$ of pairs of random variables with values in $\Omega$ such that for every $i\geq 0$ and all $s,s',s''\in\Omega$, we have \[\mathbb P(X_{i+1}=s\vert X_i=s', Y_i=s'')=\mathbb P(s'\to s)\] and \[\mathbb P(Y_{i+1}=s\vert X_i=s', Y_i=s'')=\mathbb P(s''\to s).\] It is well known that \begin{equation} d_{\mathrm{TV}}(Q^i(x,\cdot),\pi) \leq \mathbb{P}(X_i\neq Y_i) \end{equation} for any Markovian coupling for ${\bf M}$ with $X_0=x$ and $Y_0\sim\pi$. In fact, for a \dfn{coupling} of two distributions $\mu$ and $\nu$ on $\Omega$, i.e., a joint distribution $(X,Y)$ with marginal distributions $X\sim\mu$ and $Y\sim\nu$, we have \begin{equation} d_{\mathrm{TV}}(\mu,\nu) \leq \mathbb{P}(X\neq Y), \end{equation} and this inequality becomes equality when taking the infimum of $\mathbb{P}(X\neq Y)$ over all such couplings $(X,Y)$. A coupling $(X,Y)$ such that $d_{\mathrm{TV}}(\mu,\nu)=\mathbb{P}(X\neq Y)$ always exists and is called an \dfn{optimal} coupling. \subsection{General Upper Bound} We now prove \cref{thm:general_mixing}. \begin{proof}[Proof of \cref{thm:general_mixing}] Recall that $\overline p = \max\limits_{x\in P} p_x$. For any $I\in J(P)$, we have (viewing $P$ as an element of $J(P)$) \[ \mathbb P(I\to P)=\prod_{x\in\max(I)} (1-p_x) \geq (1-\overline p)^{\lvert\max(I)\rvert} \geq (1-\overline p)^{\mathrm{width}(P)}. \] Thus, for any $I\in J(P)$, we can construct a Markovian coupling $(X_i,Y_i)_{i\geq 0}$ with $X_0=I$ and $Y_0\sim\pi$ such that \begin{itemize} \item if $X_i\neq Y_i$, then $X_{i+1}=Y_{i+1}=P$ with probability at least $(1-\overline p)^{\mathrm{width}(P)}$ (the other transition probabilities do not matter so long as they induce the correct marginal transition probabilities) and \item if $X_i=Y_i$, then $X_{i+1}=Y_{i+1}$. 
\end{itemize} This implies \[ \mathbb{P}(X_{i+1}\neq Y_{i+1})\leq \left(1-(1-\overline p)^{\mathrm{width}(P)}\right)\mathbb{P}(X_i\neq Y_i)\] for all $i\geq 0$, so \[ d_{\mathrm{TV}}(Q^k(x,\cdot),\pi) \leq \mathbb{P}(X_k\neq Y_k) \leq \left(1-(1-\overline p)^{\mathrm{width}(P)}\right)^k \] for all $k \geq 0$. As the inequality \[ \left(1-(1-\overline p)^{\mathrm{width}(P)}\right)^k \leq \varepsilon \] is equivalent to \[ k \geq \frac{\log\varepsilon}{\log\left(1-(1-\overline p)^{\mathrm{width}(P)}\right)}, \] we have \[ t^{\mathrm{mix}}_{{\bf M}_{J(P)}}(\varepsilon) \leq \left\lceil \frac{\log\varepsilon}{\log\left(1-(1-\overline p)^{\mathrm{width}(P)}\right)}\right\rceil, \] as desired. \end{proof} \begin{proof}[Proof of \cref{thm:semidistrim_mixing}] Let $L$ be a semidistrim lattice, and consider the Markov chain ${\bf M}_L$. For each $u\in L$, we have \[ \mathbb P(u\to\hat 1)=\prod_{j\in\mathcal{D}_L(u)} (1-p_j) \geq (1-\overline p)^{|\mathcal{D}_L(u)|}\geq (1-\overline p)^{\alpha(G_L)}.\] The rest of the proof then follows just as in the preceding proof of \Cref{thm:general_mixing}. \end{proof} \begin{remark}\label{rem:improve_mixing} Let $\mathcal A(P)$ be the set of antichains of a poset $P$. We can straightforwardly improve the $\log\left(1-(1-\overline p)^{\mathrm{width}(P)}\right)$ term of \cref{thm:general_mixing} by instead using \[ \log\left(1-\min_{A\in\mathcal A(P)}\prod_{x\in A}(1-p_x)\right); \] however, when $p_x=p$ is the same across all $x\in P$, or more generally when some antichain $A$ of size $|A|=\mathrm{width}(P)$ has $p_x=\overline p$ for all $x\in A$, these two bounds coincide. Similarly, we can improve the $\log\left(1-(1-\overline p)^{\alpha(G_L)}\right)$ term of \cref{thm:semidistrim_mixing} by instead using \[ \log\left(1-\min_{\mathcal I\in\mathrm{Ind}(G_L)}\prod_{j\in\mathcal I} (1-p_j)\right).
\] \end{remark} \ \subsection{Chains}\label{subsec:chains} We now consider the special case when $P$ is a chain with $n$ elements and $p_x=p$ for all $x\in P$, where $p\in(0,1)$ is fixed. Letting $z_k$ denote the unique $k$-element order ideal of $P$, we find that $J(P)$ is the chain $\{z_0<z_1<\cdots<z_n\}$. For each $i\in[n]$, we have the transition probabilities $\mathbb P(z_i\to z_{i-1})=p$ and $\mathbb P(z_i\to z_n)=1-p$. We also have the transition probability $\mathbb P(z_0\to z_n)=1$. According to \cref{thm:main_distributive}, the unique stationary distribution of ${\bf M}_{J(P)}$ is given by \[ \pi(z_i) = \frac{p^{-i}}{\displaystyle\sum_{j=0}^{n} p^{-j}} = \frac{1-p^{-1}}{p^i-p^{i-n-1}}, \] and \cref{thm:general_mixing} bounds the mixing time by \[ t^{\mathrm{mix}}_{{\bf M}_{J(P)}}(\varepsilon) \leq \left\lceil \frac{\log\varepsilon}{\log p}\right\rceil. \] We will see that this bound is asymptotically optimal in the sense that \[\lim\limits_{\varepsilon\to 0^+}\frac{1}{\log\varepsilon}t^{\mathrm{mix}}_{{\bf M}_{J(P)}}(\varepsilon)=\frac{1}{\log p}.\] For $z_k\in J(P)$ and $i\geq n+1$, we have \begin{align*} Q^{i}(z_k,z_n) &= 1-p + pQ^{i-1}(z_k,z_0) = 1-p + p^2 Q^{i-2}(z_k,z_1) = \cdots = 1-p + p^{n+1} Q^{i-n-1}(z_k,z_n). \end{align*} Applying the same reasoning for $1 \leq i \leq n+1$, we find that \begin{align*} Q^i(z_k,z_n) = 1-p+p^iQ^0(z_k,z_{i-1}) = \begin{cases} 1-p+p^{k+1} & \mbox{if }i = k + 1; \\ 1-p & \mbox{if }i \neq k+1. \end{cases} \end{align*} Consequently, for all $1 \leq \ell \leq n+1$ and $m \geq 0$, we have \begin{align*} Q^{m(n+1)+\ell}(z_k,z_n) = \begin{cases} \frac{(1-p)(1-p^{(m+1)(n+1)})}{1-p^{n+1}} + p^{m(n+1)+k+1} & \mbox{if }\ell = k+1; \\ \frac{(1-p)(1-p^{(m+1)(n+1)})}{1-p^{n+1}} & \mbox{if }\ell \neq k+1. 
\end{cases} \end{align*} Setting $k=n$ and $\ell=n+1$ yields \begin{align*} Q^{(m+1)(n+1)}(z_n,z_n) - \pi(z_n) &= \frac{(1-p)(1-p^{(m+1)(n+1)})}{1-p^{n+1}}+p^{(m+1)(n+1)}-\frac{p-1}{p^{n+1}-1} \\ &= p^{(m+1)(n+1)}\frac{p-p^{n+1}}{1-p^{n+1}}, \end{align*} which also holds when $m+1=0$; hence, \[ d_{\mathrm{TV}}(Q^{m(n+1)}(z_n,\cdot),\pi) \geq p^{m(n+1)}\frac{p-p^{n+1}}{2-2p^{n+1}} \] for all $m\geq0$. It is well known that $d_{\mathrm{TV}}(Q^i(x,\cdot),\pi)$ is monotonically non-increasing in $i$, so this implies that \[ t^{\mathrm{mix}}_{{\bf M}_{J(P)}}(\varepsilon) > \frac{\log\varepsilon+\log(2-2p^{n+1})-\log(p-p^{n+1})}{\log p}, \] and thus \[ \frac{\log \varepsilon}{\log p}+\frac{\log(2-2p^{n+1})-\log(p-p^{n+1})}{\log p} < t^{\mathrm{mix}}_{{\bf M}_{J(P)}}(\varepsilon) \leq \left\lceil \frac{\log\varepsilon}{\log p}\right\rceil. \] This proves \cref{thm:chain}. \subsection{Boolean Lattices}\label{subsec:boolean} In the following proposition, we bound the total variation distance on a product $\Omega = \prod_{i=1}^n \Omega_i$ of state spaces $\Omega_i$ via optimal couplings. Our Boolean lattice mixing time bound, \cref{thm:antichain}, will follow as a special case of this framework since we can view Boolean lattices as products of 2-element chains. \begin{proposition}\label{prop:product} Let $\Omega=\prod_{i=1}^n \Omega_i$ be the product of state spaces $\Omega_1,\ldots,\Omega_n$. For each $i\in[n]$, let $\mu_i$ and $\nu_i$ be distributions on $\Omega_i$. Let $\mu$ and $\nu$ be the joint distributions on $\Omega$ of $(x_1,\dots,x_n)$ and $(y_1,\dots,y_n)$, where $x_1,\dots,x_n$ are independent, $y_1,\ldots, y_n$ are independent, and we have $x_i \sim \mu_i$ and $y_i\sim\nu_i$ for all $i\in[n]$. Then \[ d_{\mathrm{TV}}^{\Omega}(\mu,\nu) \leq 1 - \prod_{i=1}^n \left(1-d_{\mathrm{TV}}^{\Omega_i}(\mu_i,\nu_i)\right). \] \end{proposition} \begin{proof} Consider an optimal coupling $(X_i,Y_i)$ for $(\mu_i,\nu_i)$ for each $i\in[n]$. 
Let $(X,Y)$ be the coupling of $(\mu,\nu)$ given by $X=(X_1,\dots,X_n)$ and $Y=(Y_1,\ldots,Y_n)$, where the pairs $(X_1,Y_1),\ldots,(X_n,Y_n)$ are chosen independently of one another. Then by optimality of each $(X_i,Y_i)$, \[ d_{\mathrm{TV}}^{\Omega}(\mu,\nu) \leq \mathbb{P}(X\neq Y) = 1 - \prod_{i=1}^n \mathbb{P}(X_i=Y_i) = 1 - \prod_{i=1}^n \left(1-d_{\mathrm{TV}}^{\Omega_i}(\mu_i,\nu_i)\right). \qedhere\] \end{proof} In the context of mixing times, the total variation distance $d_{\mathrm{TV}}(Q^i(x,\cdot),\pi)$ can be viewed as a function mapping $i$ to $\varepsilon$; then the mixing time $t^{\mathrm{mix}}_{{\bf M}}(\varepsilon)$ is correspondingly the inverse function mapping $\varepsilon$ to $i$, after suitably modifying $d_{\mathrm{TV}}$ to be a surjective function onto $(0,1]$ (i.e., adjusting for the fact that we must consider rounding since mixing times are integers). Hence, for a product $\mathfrak{L}=\prod_{i=1}^n L_i$ of semidistrim lattices $L_i$, which is itself semidistrim \cite[Theorem~7.3]{Semidistrim}, \cref{prop:product} theoretically allows one to bound $t^{\mathrm{mix}}_{{\bf M}_{\mathfrak{L}}}(\varepsilon)$ using the functions $t^{\mathrm{mix}}_{{\bf M}_{L_i}}(\cdot)$, though the inversion process may not yield a simple closed-form expression. This implicitly uses the fact that ${\bf M}_{\mathfrak{L}}$ is (isomorphic to) the Markov chain obtained by running ${\bf M}_{L_1},\dots,{\bf M}_{L_n}$ independently, ensuring the distributions $\mu,\nu$ in \cref{prop:product} satisfy the independent coupling assumption. We now consider the case when $P$ is an antichain with $n$ elements. Here, $J(P)$ is the Boolean lattice consisting of all subsets of $P$.
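When $P$ is an antichain, the intuitive description of the chain given after \Cref{def:semidistrim_Markov} becomes very concrete: from a subset $I$, keep each $x\in I$ independently with probability $p_x$ to form $S$, and transition to the complement $P\setminus S$. The following sketch (a numerical sanity check on a hypothetical $3$-element instance, not part of any proof) builds the full transition matrix this way and verifies the stationary distribution predicted by \Cref{thm:main_distributive}.

```python
import itertools
import numpy as np

# Rowmotion Markov chain on the Boolean lattice of subsets of {0,...,n-1}
# (P an antichain): from a subset I, keep each x in I with probability p[x]
# to form S, then transition to the complement of S.  Hypothetical instance:
n = 3
p = [0.3, 0.5, 0.7]

def subsets(iterable):
    items = list(iterable)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in itertools.combinations(items, r)]

states = subsets(range(n))
index = {s: k for k, s in enumerate(states)}
full = frozenset(range(n))

Q = np.zeros((len(states), len(states)))
for I in states:
    for S in subsets(I):
        prob = np.prod([p[x] if x in S else 1 - p[x] for x in I])
        Q[index[I], index[full - S]] += prob

assert np.allclose(Q.sum(axis=1), 1.0)  # Q is row-stochastic

# Stationary distribution predicted by the theorem: pi(I) proportional to
# the product of 1/p_x over x in I.
pi = np.array([np.prod([1 / p[x] for x in I]) for I in states])
pi /= pi.sum()
assert np.allclose(pi @ Q, pi)
```

The same check runs instantly for any small $n$, since the matrix has size $2^n\times 2^n$.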
\Cref{thm:main_distributive} gives the unique stationary distribution of ${\bf M}_{J(P)}$ as \[ \pi(I) = \frac{\displaystyle\prod_{i\in I}p_i^{-1}}{\displaystyle\sum_{I'\subseteq P}\prod_{i\in I'}p_i^{-1}} = \frac{\displaystyle\prod_{i\in I}p_i^{-1}}{\prod_{i=1}^n\left(1+p_i^{-1}\right)}, \] and \cref{rem:improve_mixing} allows us to bound the mixing time by \[ t^{\mathrm{mix}}_{{\bf M}_{J(P)}}(\varepsilon) \leq \left\lceil\frac{\log\varepsilon}{\displaystyle\log\left(1-\prod_{i=1}^n(1-p_i)\right)}\right\rceil. \] Using \cref{prop:product}, we can provide a better bound. Notice that $J(P)=\prod_{x\in P} C_x$, where $C_x$ is a 2-element chain $0_x\lessdot 1_x$, with ${\bf M}_{C_x}$ given by $\mathbb{P}(0_x\to 1_x)=1$ and $\mathbb{P}(1_x\to 0_x)=p_x$. Let $Q_x$ denote the transition matrix of ${\bf M}_{C_x}$, and notice that for any distributions $\mu,\nu$ on $C_x$, we have $d_{\mathrm{TV}}^{C_x}(\mu Q_x,\nu Q_x)=p_xd_{\mathrm{TV}}^{C_x}(\mu,\nu)$. This implies that \[ d_{\mathrm{TV}}^{C_x}(Q_x^i(0_x,\cdot),\pi_x)=\frac{p_x^i}{1+p_x} \quad\text{and}\quad d_{\mathrm{TV}}^{C_x}(Q_x^i(1_x,\cdot),\pi_x)=\frac{p_x^{i+1}}{1+p_x}\leq\frac{p_x^i}{1+p_x}, \] where $\pi_x=\left(\frac{p_x}{1+p_x},\frac{1}{1+p_x}\right)$ is the stationary distribution of ${\bf M}_{C_x}$. \Cref{prop:product} implies that for all $I\in J(P)$, \[ d_{\mathrm{TV}}^{J(P)}(Q^i(I,\cdot),\pi) \leq 1 - \prod_{x\in P} \left(1-\frac{p_x^i}{1+p_x}\right) \leq 1 - \left(1-\sum_{x\in P} \frac{p_x^i}{1+p_x}\right) \leq \sum_{x\in P} p_x^i \leq n\overline p^i. \] Thus, \[ t^{\mathrm{mix}}_{{\bf M}_{J(P)}}(\varepsilon) \leq \left\lceil\frac{\log\varepsilon-\log n}{\log \overline p}\right\rceil; \] this proves \cref{thm:antichain}. Notice that the coefficient of $\log(1/\varepsilon)$ is $1/\log(1/\overline{p})$. This is smaller than the coefficient given by \cref{thm:general_mixing}, or specifically \cref{rem:improve_mixing}, because \[ \overline p \leq 1 - \prod_{i=1}^n (1-p_i).
\] Thus, unlike for chains, there is an asymptotically better bound for Boolean lattices than that given by \cref{thm:general_mixing}. While \cref{thm:general_mixing} gives a bound that is exponential in $n$, \cref{thm:antichain} gives a bound that is logarithmic in $n$. \section{Future Directions}\label{sec:conclusion} In \Cref{thm:semidistrim_mixing}, we obtained a general upper bound for the mixing time of the rowmotion Markov chain of a semidistrim lattice. As exemplified by \Cref{thm:antichain}, this result can be drastically improved if one restricts to certain families of lattices. It would be interesting to obtain more precise estimates for these mixing times. In \Cref{thm:main_distributive,thm:hexx}, we computed the stationary distributions of rowmotion Markov chains of distributive lattices and the lattices $\includeSymbol{hexx}_{a,b}$. It would be quite interesting to find other special families of semidistrim lattices for which one can compute these stationary distributions. \section*{Acknowledgements} Colin Defant was supported by the National Science Foundation under Award No.\ 2201907 and by a Benjamin Peirce Fellowship at Harvard University.
https://arxiv.org/abs/2212.14005
Rowmotion Markov Chains
https://arxiv.org/abs/1301.0075
On eigenvalues of Seidel matrices and Haemers' conjecture
For a graph $G$, let $S(G)$ be the Seidel matrix of $G$ and $\te_1(G),...,\te_n(G)$ be the eigenvalues of $S(G)$. The Seidel energy of $G$ is defined as $|\te_1(G)|+...+|\te_n(G)|$. Willem Haemers conjectured that the Seidel energy of any graph with $n$ vertices is at least $2n-2$, the Seidel energy of the complete graph with $n$ vertices. Motivated by this conjecture, we prove that for any $\al$ with $0<\al<2$, $|\te_1(G)|^\al+...+|\te_n(G)|^\al\g (n-1)^\al+n-1$ if and only if $|{\rm det} S(G)|\g n-1$. This, in particular, implies Haemers' conjecture for all graphs $G$ with $|{\rm det} S(G)|\g n-1$.
\section{Introduction} Let $G$ be a simple graph with vertex set $\{v_1,\ldots,v_n\}$. The {\em Seidel matrix} of $G$ is an $n\times n$ matrix $S(G)=(s_{ij})$ where $s_{11}=\cdots=s_{nn}=0$ and for $i\ne j$, $s_{ij}$ is $-1$ if $v_i$ and $v_j$ are adjacent, and is $1$ otherwise. The {\em Seidel energy} of $G$, denoted by $\s(G)$, is defined as the sum of the absolute values of the eigenvalues of $S(G)$. Considering the complete graph $K_n$, its Seidel matrix is $I-J$. Hence the eigenvalues of $S(K_n)$ are $1-n$ and $1$ (the latter with multiplicity $n-1$). So $\s(K_n)=2n-2$. Haemers conjectured that this is the smallest Seidel energy of an $n$-vertex graph: \noindent{\bf Conjecture} (Haemers \cite{ham}){\bf.} {\em For any graph $G$ on $n$ vertices, $\s(G)\g\s(K_n)$.} We show that the conjecture is true if $|{\rm det}\,S(G)|\g|{\rm det}\,S(K_n)|=n-1$. To be more precise, we prove the following more general statement, which constitutes the main result of the present paper. \begin{thm}\label{main} Let $G$ be a graph with $n$ vertices and let $\te_1,\ldots,\te_n$ be the eigenvalues of $S(G)$. Then the following are equivalent: \begin{itemize} \item[\rm(i)] $|{\rm det}\,S(G)|\g n-1;$ \item[\rm(ii)] for any $0<\al<2$, \begin{equation}\label{maineq} |\te_1|^\al+\cdots+|\te_n|^\al\g(n-1)^\al+(n-1). \end{equation} \end{itemize} \end{thm} The implication `(ii)$\Rightarrow$(i)' is straightforward in view of the fact that $$\lim_{\al\to0^+}\left(\frac{|\te_1|^\al+\cdots+|\te_n|^\al}{n}\right)^\frac{1}{\al}=|\te_1\cdots\te_n|^\frac{1}{n}.$$ We prove the implication `(i)$\Rightarrow$(ii)' in Section~3. The proof is based on the KKT method in nonlinear programming. We briefly explain this method in Section~2. For more results of the same flavor as (\ref{maineq}) on Laplacian and signless Laplacian eigenvalues of graphs see \cite{agko1,agko2}.
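The facts about $S(K_n)$ recalled above, and the power-mean limit behind `(ii)$\Rightarrow$(i)', are easy to confirm numerically. A minimal sketch (the choice $n=7$ and the sample exponents $\al$ are illustrative):

```python
import numpy as np

n = 7
S = np.eye(n) - np.ones((n, n))          # Seidel matrix of K_n is I - J
theta = np.linalg.eigvalsh(S)

# eigenvalues are 1 - n (once) and 1 (n - 1 times), so K_n has
# Seidel energy 2n - 2 and |det S(K_n)| = n - 1
assert abs(np.abs(theta).sum() - (2 * n - 2)) < 1e-8
assert abs(abs(np.linalg.det(S)) - (n - 1)) < 1e-6

# the power mean tends to the geometric mean |theta_1 ... theta_n|^{1/n}
gm = abs(np.linalg.det(S)) ** (1.0 / n)
for alpha in (0.5, 0.1, 0.01):
    pm = np.mean(np.abs(theta) ** alpha) ** (1.0 / alpha)
    assert pm >= gm - 1e-9               # power means dominate the limit
assert abs(pm - gm) < 0.01               # already close for alpha = 0.01
```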
\section{Karush--Kuhn--Tucker (KKT) conditions} In nonlinear programming, the Karush--Kuhn--Tucker (KKT) conditions are necessary for a local solution to a minimization problem provided that some regularity conditions are satisfied. Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers, which allows only equality constraints. For details see \cite{book}. Consider the following optimization problem: \begin{quote} Minimize $f(\x)$\\ subject to:\\ $~~~~~g_j(\x)=0$,~ for $j\in J$,\\ $~~~~~h_i(\x)\lee0$,~ for $i\in I$, \end{quote} where $I$ and $J$ are finite sets of indices. Suppose that the objective function $f:\mathbb{R}^n\to\mathbb{R}$ and the constraint functions $g_j:\mathbb{R}^n\to\mathbb{R}$ and $h_i:\mathbb{R}^n\to\mathbb{R}$ are continuously differentiable at a point $\x^*$. If $\x^*$ is a local minimum that satisfies some regularity conditions, then there exist constants $\mu_i$ and $\la_j$, called KKT multipliers, such that \begin{align*} \nabla f(\x^*)+\sum_{j\in J}&\,\mu_j\nabla g_j(\x^*)+\sum_{i\in I}\la_i\nabla h_i(\x^*)={\bf0}\\ g_j(\x^*)&=0,~~~\hbox{for all $j\in J$},\\ h_i(\x^*)&\lee0,~~~\hbox{for all $i\in I$},\\ \la_i&\g0,~~~\hbox{for all $i\in I$},\\ \la_ih_i(\x^*)&=0,~~~\hbox{for all $i\in I$}. \end{align*} In order for a minimum point to satisfy the above KKT conditions, it should satisfy some regularity conditions (or constraint qualifications). The one which suits our problem is the Mangasarian--Fromovitz constraint qualification (MFCQ). Let $I(\x^*)$ be the set of indices of active inequality constraints at $\x^*$, i.e. $I(\x^*)=\left\{i\in I\mid h_i(\x^*)=0\right\}$. 
We say that MFCQ holds at a feasible point $\x^*$ if the set of gradient vectors $\{\nabla g_j(\x^*)\mid j\in J\}$ is linearly independent and there exists $\w\in\mathbb{R}^n$ such that \begin{align*} \nabla g_j(\x^*)\w^\top&=0,~~~ \hbox{for all $j\in J$},\\ \nabla h_i(\x^*)\w^\top&<0,~~~ \hbox{for all $i\in I(\x^*)$}. \end{align*} \begin{thm} {\rm(\cite{mf}, see also \cite[Section 12.6]{book})} If a local minimum $\x^*$ of the function $f(\x)$ subject to the constraints $g_j(\x)=0$, for $j\in J$, and $h_i(\x)\lee0$, for $i\in I$, satisfies MFCQ, then it satisfies the KKT conditions. \end{thm} \section{Proofs} In this section we prove the non-trivial part of Theorem~\ref{main}, that is the implication `(i)$\Rightarrow$(ii)'. We formulate this as an optimization problem. To this end, we need to come up with appropriate constraints. The main constraint is made by the assumption $|{\rm det}\,S(G)|\g n-1$. The other ones are obtained by the following straightforward lemma. \begin{lem} For any graph $G$ with $n$ vertices, we have \begin{itemize} \item[\rm(i)] $\te_1(G)^2+\cdots+\te_n(G)^2=(n-1)^2+n-1;$ \item[\rm(ii)] $\te_1(G)^4+\cdots+\te_n(G)^4\lee \te_1(K_n)^4+\cdots+\te_n(K_n)^4=(n-1)^4+n-1;$ \item[\rm(iii)] ${\displaystyle\max_{1\lee i\lee n}}\te_i(G)^2\lee{\displaystyle\max_{1\lee i\lee n}}\te_i(K_n)^2=(n-1)^2$. \end{itemize} \end{lem} Now, we can describe our problem as the minimization of the function $$f(\x):=x_1^p+\cdots+x_n^p,~~~ \x=(x_1,\ldots,x_n)\in\mathbb{R}^n,$$ with fixed $0<p<1$, subject to the constraints: \begin{align} g(\x)&:=x_1+\cdots+x_n-n(n-1)=0,\label{g=}\\ h(\x)&:=x_1^2+\cdots+x_n^2-(n-1)^4-(n-1)\lee0,\label{h<}\\ d(\x)&:=(n-1)^2-{\textstyle\prod_{i=1}^nx_i}\lee0,\label{prod}\\ k_i(\x)&:=x_i-(n-1)^2\lee0,~~\hbox{for $i=1,\ldots,n$},\\ l_i(\x)&:=\xi-x_i\lee0,~~\hbox{for $i=1,\ldots,n$}\label{li}, \end{align} where $\xi>0$ is fixed so that if for some $i$, $x_i=\xi$, then $\prod_{i=1}^nx_i<(n-1)^2$.
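Before turning to the proof, one can sanity-check this formulation numerically: the point $\e=((n-1)^2,1,\ldots,1)$, corresponding to the squared eigenvalues of $S(K_n)$, is feasible (for small $\xi$), attains the claimed value $(n-1)^{2p}+n-1$, and a crude random search over feasible points finds nothing smaller. A sketch with illustrative values of $n$ and $p$ (not taken from the text):

```python
import numpy as np

n, p = 6, 0.4                                     # illustrative, 0 < p < 1
e = np.array([(n - 1) ** 2] + [1.0] * (n - 1))    # candidate minimizer
claimed = (n - 1) ** (2 * p) + n - 1

assert abs(e.sum() - n * (n - 1)) < 1e-9                  # g(e) = 0
assert (e ** 2).sum() <= (n - 1) ** 4 + (n - 1) + 1e-9    # h(e) <= 0
assert np.prod(e) >= (n - 1) ** 2 - 1e-9                  # d(e) <= 0
assert e.max() <= (n - 1) ** 2 + 1e-9                     # k_i(e) <= 0
assert abs((e ** p).sum() - claimed) < 1e-9    # f(e) = (n-1)^{2p} + n - 1

rng = np.random.default_rng(0)
for _ in range(20000):                         # crude random feasible search
    x = rng.uniform(0.5, (n - 1) ** 2, n)
    x *= n * (n - 1) / x.sum()                 # enforce the sum constraint
    if (x.max() <= (n - 1) ** 2
            and (x ** 2).sum() <= (n - 1) ** 4 + (n - 1)
            and np.prod(x) >= (n - 1) ** 2):
        assert (x ** p).sum() >= claimed - 1e-9   # no smaller feasible value
```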
Theorem~\ref{main} now follows if we prove that the minimum of $f(\x)$ subject to (\ref{g=})--(\ref{li}) is $(n-1)^{2p}+n-1$. \begin{lem}\label{mfcq} Let $\e$ be a local minimum of $f(\x)$ subject to the constraints (\ref{g=})--(\ref{li}). Then $\e$ satisfies MFCQ. \end{lem} \begin{proof}{Let $\e=(e_1,\ldots,e_n)$. With no loss of generality assume that $e_1\g\cdots\g e_n$. If $e_1=e_n$, then, in view of (\ref{g=}), all $e_i$ are equal to $n-1$. In this case, in none of the inequality constraints (\ref{h<})--(\ref{li}) equality occurs for $\e$ and so we are done. If $e_1>e_n$, then MFCQ is fulfilled by setting $\w=(-1,0,\ldots,0,1)$. }\end{proof} \begin{lem}\label{bennet} {\rm(\cite{ben})} Suppose $\al,\be,\nu,\omega, a, b, c, d$ are positive numbers and that \begin{align*} \al+\be&=\nu+\omega,\\ \al a+\be b&=\nu c+\omega d,\\ \max\{a, b\}&\lee\max\{c, d\},\\ a^\al b^\be&\g c^\nu d^\omega. \end{align*} Then the inequality $$\al a^p+\be b^p\g\nu c^p+\omega d^p$$ holds for $0\lee p\lee1$. \end{lem} \begin{thm}\label{ep} Let $\e\in\mathbb{R}^n$ satisfy the constraints (\ref{g=})--(\ref{li}). Then $f(\e)\g(n-1)^{2p}+n-1$. \end{thm} \begin{proof}{It suffices to prove the assertion for local minima. So assume that $\e=(e_1,\ldots,e_n)$ is a local minimum of $f(\x)$ subject to the constraints (\ref{g=})--(\ref{li}). Suppose that $e_1\g \cdots\g e_n$. 
By Lemma~\ref{mfcq}, $\e$ satisfies KKT conditions, namely \begin{equation}\label{nabla} \nabla f(\e)+\mu\nabla g(\e)+\la\nabla h(\e)+\delta\nabla d(\e)+\sum_{i=1}^n\left(\rho_i\nabla k_i(\e)+\gamma_i\nabla l_i(\e)\right)={\bf0}, \end{equation} \vspace{-.8cm} \begin{align} e_1&+\cdots+e_n-n(n-1)=0,\label{sum}\\ \la&\g0,~~~\la h(\e)=0,\label{h}\\ \delta&\g0,~~~\delta d(\e)=0,\nonumber\\ \rho_i&\g0,~~~\rho_i k_i(\e)=0,~~\hbox{for $i=1,\ldots,n$},\label{rho}\\ \gamma_i&\g0,~~~\gamma_i l_i(\e)=0,~~\hbox{for $i=1,\ldots,n$}.\label{gamma} \end{align} By the choice of $\xi$ we have $l_i(\e)<0$ for $i=1,\ldots,n$ and hence by (\ref{gamma}), $\gamma_1=\cdots=\gamma_n=0$. If we let $D=\prod_{i=1}^ne_i$, then (\ref{nabla}) can be written as $$ pe_i^{p-1}+\mu+2\la e_i-\frac{\delta D}{e_i}+\rho_i=0,~~ \hbox{for $i=1,\ldots,n$}.$$ We consider the following two cases. \noi{\bf Case 1.} $e_1=(n-1)^2$. Then by (\ref{sum}) and since $\e$ satisfies (\ref{prod}), we have $$ 1=\frac{e_2+\cdots+e_n}{n-1}\g (e_2\cdots e_n)^\frac{1}{n-1}\g1.$$ It turns out that $e_2=\cdots=e_n=1$ and we are done. \noi{\bf Case 2.} $e_1<(n-1)^2$. So, by (\ref{rho}), $\rho_1=\cdots=\rho_n=0$. It turns out that $e_1,\ldots,e_n$ must satisfy the following equation: \begin{equation}\label{pxp} px^p=\delta D-\mu x-2\la x^2. \end{equation} The curves of $y=px^p$ and $y=\delta D-\mu x-2\la x^2$ intersect in at most two points in $x>0$ and so (\ref{pxp}) has at most two positive roots. If it has one positive root, then by (\ref{sum}), $e_1=\cdots=e_n=n-1$. Hence $f(\e)=n(n-1)^p$ which is greater than $(n-1)^{2p}+n-1$ for $n\g3$. Next assume that (\ref{pxp}) has two positive roots, say $a$ and $b$. These two together with $c=(n-1)^2$ and $d=1$ satisfy the conditions of Lemma~\ref{bennet}. This implies that $f(\e)\g(n-1)^{2p}+n-1$, completing the proof. }\end{proof}
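Theorem~\ref{main} can also be tested empirically on random graphs: whenever $|{\rm det}\,S(G)|\g n-1$, the bound (\ref{maineq}) should hold for every $0<\al<2$. A small randomized check (the graph size, sample count and exponents are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
checked = 0
for _ in range(300):
    A = np.triu(rng.integers(0, 2, (n, n)), 1)
    A = A + A.T                                  # random simple graph
    S = np.ones((n, n)) - np.eye(n) - 2 * A      # its Seidel matrix
    theta = np.linalg.eigvalsh(S)
    if abs(np.linalg.det(S)) >= n - 1:           # hypothesis (i)
        checked += 1
        for alpha in (0.5, 1.0, 1.5):            # sample 0 < alpha < 2
            lhs = (np.abs(theta) ** alpha).sum()
            assert lhs >= (n - 1) ** alpha + (n - 1) - 1e-8
assert checked > 0                               # hypothesis (i) occurred
```

With $\al=1$ this is exactly the Seidel-energy bound $\s(G)\g 2n-2$ for graphs satisfying the determinant condition.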
https://arxiv.org/abs/2010.11615
Lipschitz property of bistable or combustion fronts and its applications
For a class of reaction-diffusion equations describing propagation phenomena, we prove that for any entire solution $u$, the level set $\{u=\lambda\}$ is a Lipschitz graph in the time direction if $\lambda$ is close to $1$. Under a further assumption that $u$ connects $0$ and $1$, it is shown that all level sets are Lipschitz graphs. By a blowing down analysis, the large scale motion law for these level sets and a characterization of the minimal speed for travelling waves are also given.
\section{Introduction}\label{sec introduction} \setcounter{equation}{0} \subsection{Lipschitz property for level sets} Consider a smooth, entire solution $u$ to the reaction-diffusion equation \begin{equation}\label{eqn} \partial_tu-\Delta u=f(u), \quad 0<u<1 \quad \mbox{in } ~~\R^n\times \R. \end{equation} In this paper we are mainly interested in the Lipschitz property of the level sets $\{u=\lambda\}$ and their geometric motion law at large scales. Our main hypotheses on $f$ are \begin{description} \item [{\bf (F1)}] $f\in \mbox{Lip}([0,1])$, $f(0)=f(1)=0$ and $f\in C^{1,\alpha}([0,\gamma)\cup(1-\gamma,1])$ for some $\alpha,\gamma \in(0,1)$; \item [{\bf (F2)}] $f^\prime(1)<0$; \item [{\bf (F3)}] $\int_{0}^{1}f(u)du>0$. \end{description} Sometimes we also need \begin{description} \item[{\bf (F4)}] there exists a $\theta\in(0,1)$ such that $f>0$ in $(\theta,1)$, and either $f<0$ in $(0,\theta)$ with $f^\prime(0)<0$, or $f\equiv 0$ in $(0,\theta)$. \end{description} Typical examples are the bistable nonlinearity $f(u)=u(u-\theta)(1-u)$ with $\theta<1/2$ and the combustion nonlinearity. The reaction-diffusion equation \eqref{eqn} is used in the modelling of biological propagation phenomena, see Aronson and Weinberger \cite{Aronson1978multidimensional}. Entire solutions have been studied by many people since the work of Hamel and Nadirashvili \cite{Hamel1999entire,Hamel2001entire}, see \cite{Hamel2016monotonicity, Hamel2016bistable, Hamel2016Fisher-KPP,Bu2019transition} for the homogeneous case. There is also a large literature devoted to the study of heterogeneous cases. In particular, a very general notion of travelling fronts, \emph{transition fronts}, was introduced by Berestycki and Hamel in \cite{Hamel2007generalized, Hamel2012transition}. The geometry of an entire solution is complicated in general. To study the Lipschitz property of $\{u=\lambda\}$, we introduce some assumptions on the entire solution $u$.
The first one is \begin{description} \item [{\bf(H1)}] For any $t\in\R$, $\sup_{x\in\R^n}u(x,t)=1$. \end{description} Under this assumption we prove \begin{thm}[Half Lipschitz property for entire solutions]\label{main result 3} Suppose $f$ satisfies {\bf (F1-F3)}. There exists a $b_0\in(0,1)$ such that, if an entire solution $u$ satisfies {\bf(H1)}, then for any $\lambda\in[1-b_0,1)$, $\{u=\lambda\}=\{t=h_\lambda(x)\}$ is a globally Lipschitz graph on $\R^n$. \end{thm} In general, if $\lambda$ is close to $0$, $\{u=\lambda\}$ does not satisfy this Lipschitz property, see the example given after Theorem \ref{main result 1}. In order to establish the full Lipschitz property, we need more assumptions. A natural one is \begin{description} \item [{\bf(H2)}] $u\to 0$ uniformly as $\mbox{dist}((x,t),\{u\geq 1-b_0\})\to+\infty$. \end{description} Here $\mbox{dist}$ denotes the standard Euclidean distance on $\R^n\times\R$. \begin{thm}[Full Lipschitz property for entire solutions]\label{main result 4} Suppose $f$ satisfies {\bf (F1-F4)}, and $u$ is an entire solution satisfying {\bf (H1-H2)}. Then for any $\lambda\in(0,1)$, $\{u=\lambda\}=\{t=h_\lambda(x)\}$ is a globally Lipschitz graph on $\R^n$. \end{thm} The proof of Theorem \ref{main result 3} relies on the propagation phenomena (see Aronson and Weinberger \cite{Aronson1978multidimensional}) in \eqref{eqn}. Roughly speaking, by {\bf(F3)}, $1$ represents a more stable state than $0$, so $\{u\approx 1\}$ will invade $\{u\approx0\}$. This gives us a cone of monotonicity at large scales, see Lemma \ref{lem forward Lip} for the precise statement. Although this only implies a Lipschitz property for $\{u=\lambda\}$ at large scales, it can be propagated to a real Lipschitz property by utilizing some estimates on positive solutions to the linear parabolic equation \[\partial_tw-\Delta w=f^\prime(1)w.\] Here the nondegeneracy condition $f^\prime(1)<0$ will be crucial for this argument.
After establishing the Lipschitz property of $\{u=\lambda\}$ for $\lambda$ close to $1$, under the hypothesis {\bf(H2)}, we can apply the maximum principle and sliding method (in the time direction, cf. Guo and Hamel \cite{Hamel2016monotonicity}) to extend this Lipschitz property backward in time, which is Theorem \ref{main result 4}. \subsection{Travelling wave solutions} The solution $u$ is a travelling wave in the direction $-e_n$ and with speed $\kappa>0$, if there exists a function $v\in C^2(\R^n)$ such that \[u(x,t)=v(x+\kappa te_n).\] Here $v$ satisfies the elliptic equation \begin{equation}\label{travelling wave eqn} -\Delta v+\kappa \partial_nv=f(v) \quad \mbox{in }~~ \R^n. \end{equation} Among the class of travelling wave solutions, the one dimensional travelling wave is of particular importance. By \cite[Theorem 4.1]{Aronson1978multidimensional}, under the hypothesis {\bf (F1-F4)}, there exists a unique constant $\kappa_\ast>0$ and a unique (up to a translation) solution to the one dimensional problem \begin{equation}\label{1D wave} -g^{\prime\prime}(t)+\kappa_\ast g^\prime(t)=f(g(t)), \quad g(-\infty)=0, \quad g(+\infty)=1. \end{equation} Theorem \ref{main result 3} applied to $v$ gives \begin{thm}[Half Lipschitz property for travelling waves]\label{main result 1} Suppose $f$ satisfies {\bf (F1-F3)} and $v$ is an entire solution of \eqref{travelling wave eqn}, satisfying $\sup_{\R^n}v=1$. For any $\lambda\in[1-b_0,1)$, $\{v=\lambda\}=\{x_n=h_\lambda(x^\prime), x^\prime\in\R^{n-1}\}$ is a globally Lipschitz graph on $\R^{n-1}$. \end{thm} As in the entire solution case, in general, this property does not hold for level sets $\{v=\lambda\}$ with $\lambda$ close to $0$.
For example, in Hamel and Roquejoffre \cite{Hamel2011heteroclinic}, it is shown that when $n=2$, there exist solutions $v$ of \eqref{eqn}, which are monotone in $x_1$ and satisfy \[ \left\{\begin{aligned} &v(x_1,x_2)\to 1 \quad \mbox{uniformly as } x_1\to+\infty,\\ &v(x_1,x_2)\to \varphi(x_2) \quad \mbox{locally uniformly as}~ x_1\to-\infty, \end{aligned}\right. \] where $\varphi$ is an $L$-periodic solution (for some $L>0$) of \[-\varphi^{\prime\prime}=f(\varphi) \quad \mbox{in } \R.\] Hence when $\lambda$ is close to $0$, $\{v=\lambda\}$ is the graph of an $L$-periodic function $h_\lambda$, satisfying $h_\lambda(kL)=-\infty$ for any $k\in\mathbb{Z}$. Clearly it cannot be a globally Lipschitz graph. As in the entire solution case, in order to get the Lipschitz property for all level sets, we need more assumptions; a natural one is the decay condition \eqref{assumption} in the following theorem. \begin{thm}[Full Lipschitz property for travelling waves]\label{main result 2} Suppose $f$ satisfies {\bf (F1-F4)}, $v$ is an entire solution of \eqref{travelling wave eqn}, satisfying $\sup_{\R^n}v=1$ and \begin{equation}\label{assumption} v(x)\to 0 \quad \mbox{uniformly as } \mbox{dist}(x,\{v\geq 1-b_0\})\to+\infty. \end{equation} Then for any $\lambda\in(0,1)$, $\{v=\lambda\}=\{x_n=h_\lambda(x^\prime)\}$ is a globally Lipschitz graph on $\R^{n-1}$. \end{thm} This theorem also holds for the monostable case, that is, instead of {\bf(F4)}, we assume \begin{description} \item[{\bf (F4$^\prime$)}] $f>0$ in $(0,1)$, and $f^\prime(0)>0$. \end{description} The assumption \eqref{assumption} holds automatically in the monostable case. Hence we get a small improvement on the same Lipschitz property for all level sets proved in \cite{Hamel2001entire}, where they require the nonlinearity $f$ to be concave. However, we do not know how to prove the parabolic case, see discussions in Subsection \ref{subsec monostable case}.
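The constant $\kappa_\ast$ in \eqref{1D wave} can be approximated by direct simulation of the one-dimensional Cauchy problem. For the cubic bistable nonlinearity $f(u)=u(1-u)(u-\theta)$ with $\theta<1/2$, the wave speed is classically known in closed form, $\kappa_\ast=(1-2\theta)/\sqrt{2}$, which makes it a convenient test case. A minimal explicit finite-difference sketch (the value of $\theta$ and the grid parameters are illustrative):

```python
import numpy as np

theta = 0.25
f = lambda u: u * (1.0 - u) * (u - theta)
c_exact = (1.0 - 2.0 * theta) / np.sqrt(2.0)    # classical closed form

dx, dt = 0.2, 0.01                              # dt < dx^2 / 2 for stability
x = np.arange(0.0, 80.0 + dx / 2, dx)
u = (x < 20.0).astype(float)                    # state 1 invades state 0

def front(u):
    """Position of the level set {u = 1/2} by linear interpolation."""
    i = np.where(u >= 0.5)[0][-1]
    return x[i] + dx * (u[i] - 0.5) / (u[i] - u[i + 1])

pos = {}
for step in range(6001):
    if step in (2000, 6000):
        pos[step] = front(u)
    lap = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx ** 2
    u[1:-1] += dt * (lap + f(u[1:-1]))          # explicit Euler, ends pinned

speed = (pos[6000] - pos[2000]) / (4000 * dt)   # measured after a transient
assert abs(speed - c_exact) < 0.05
```

The measured level-set speed matches $(1-2\theta)/\sqrt 2$ to within discretization error, illustrating the invasion of the state $0$ by the state $1$ at speed $\kappa_\ast$.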
Existence, qualitative properties and classification of solutions to \eqref{travelling wave eqn} with Lipschitz level sets have been studied by many people, see \cite{Hamel2000conical,Hamel2005conical, Hamel2006classification, Taniguchi2005stability, Taniguchi2006stability, Taniguchi2007pyramidal,Taniguchi2009,Taniguchi2011pyramidal,Taniguchi2012fronts,Taniguchi2015convex}. \subsection{Blowing down limits} Once we know level sets of $u$ are Lipschitz graphs, we would like to study their large scale structures. Take a $b\in(0,1)$ such that $\{u=b\}=\{t=h(x)\}$ is a globally Lipschitz graph on $\R^n$. For any $\lambda>0$, let \[h_\lambda(x):=\frac{1}{\lambda}h(\lambda x).\] They are uniformly Lipschitz. Therefore for any $\lambda_i\to\infty$, there exists a subsequence (not relabelled) such that $h_{\lambda_i}$ converges to $h_\infty$ in $C_{loc}(\R^n)$. (This limit may depend on the choice of subsequences.) We have the following characterization of $h_\infty$. \begin{thm}\label{thm blowing down limit} Under the assumptions of Theorem \ref{main result 4}, the blowing down limit $h_\infty$ is a viscosity solution of \begin{equation}\label{limit eqn 1} |\nabla h_\infty|^2-\kappa_\ast^{-2}=0 \quad \mbox{in } \quad \R^{n}. \end{equation} \end{thm} \begin{rmk}[Level set formulation] Equation \eqref{limit eqn 1} is the level set formulation of the geometric motion equation for the family of hypersurfaces $\Sigma(t):=\{x:h_\infty(x)=t\}$, \begin{equation}\label{geometric motion} V_{\Sigma(t)}=\kappa_\ast\nu_{\Sigma(t)}. \end{equation} Here $\nu_{\Sigma(t)}=-\nabla h_\infty/|\nabla h_\infty|$ is the unit normal vector of $\Sigma(t)$. See Fife \cite[Chapter 1]{Fife1988CBMS} for a formal derivation of this equation. The equation \eqref{limit eqn 1} also corresponds to the fact that the global mean speed of transition fronts equals $\kappa_\ast$, see Hamel \cite{Hamel2016bistable}.
\end{rmk} \begin{rmk}\label{rmk representation for level set limit} Because $h_\infty(0)=0$, the following representation formula holds for $h_\infty$ (see for example Monneau, Roquejoffre and Roussier-Michon \cite[Section 2]{Monneau2013graph}): there exists a closed set $\Xi\subset\mathbb{S}^{n-1}$ such that \[ h_\infty(x)=\inf_{\xi\in\Xi}\xi\cdot x.\] As a consequence, $h_\infty$ is concave and $1$-homogeneous. \end{rmk} The connection between reaction-diffusion equations and motion by mean curvature in the framework of viscosity solutions has been explored by many people in the 1980s and 1990s. In particular, the asymptotic behavior of solutions to the Cauchy problem for \eqref{eqn} has been studied by Barles, Bronsard, Evans, Soner and Souganidis in \cite{Evans1989optics,Barles1990wavefront,Barles1992front,Barles1994asymptotic}, in the framework of Hamilton--Jacobi equations and level set motions. We use the same idea, but now for the study of entire solutions of \eqref{eqn} (in the spirit of \cite{Hamel1999entire, Hamel2001entire}), where we are free to perform scalings to study the large scale structure of entire solutions. From this blowing down analysis we also get a characterization of the minimal speed. \begin{thm}\label{thm minimal speed} Suppose $f$ satisfies {\bf (F1-F4)}, $v$ is an entire solution of \eqref{travelling wave eqn}, satisfying $\sup_{\R^n}v=1$ and \eqref{assumption}. Then $\kappa\geq\kappa_\ast$. Furthermore, if $\kappa=\kappa_\ast$, there exists a constant $t\in\R$ such that \[v(x)\equiv g(x_n+t) \quad \mbox{in }~~ \R^n.\] \end{thm} \subsection{Further problems}\label{sec discussion} To put our results in a wider perspective, here we mention some further problems about \eqref{eqn} and \eqref{travelling wave eqn}. Some of these problems are well known to experts in this field. {\bf Problem 1.} Extend results in this paper to the monostable case.
{\bf Problem 2.} Theorem \ref{thm blowing down limit} gives only the main order term of the front motion law. The next order term has been formally derived in Fife \cite{Fife1988CBMS}. Using the language of viscosity solutions, the family of hypersurfaces $\Sigma(t):=\{u(t)=1/2\}$ should be an approximate viscosity solution at large scales (in the sense of Savin \cite{Savin2009DeGiorgi,Savin2017DeGiorgi}) of the forced mean curvature flow \begin{equation}\label{geometric motion refined} V_{\Sigma(t)}=\left[\kappa_\ast -H_{\Sigma(t)}\right]\nu_{\Sigma(t)}. \end{equation} Here $H_{\Sigma(t)}$ denotes the mean curvature of $\Sigma(t)$. {\bf Problem 3.} In \cite{Hamel2001entire}, Hamel and Nadirashvili proposed a conjecture about the classification of entire solutions. For travelling wave solutions in the bistable and combustion case, this conjecture may be broken into two steps: \begin{enumerate} \item There exists a one-to-one correspondence between solutions of \eqref{travelling wave eqn} and solutions of \begin{equation}\label{travelling wave for forced MCF} \mbox{div}\left(\frac{\nabla h}{\sqrt{1+|\nabla h|^2}}\right)=\kappa_\ast-\frac{\kappa}{\sqrt{1+|\nabla h|^2}}. \end{equation} This is the travelling wave equation of \eqref{geometric motion refined}, see \cite{Monneau2013graph} for a discussion on this equation. \item There exists a one-to-one correspondence between solutions of \eqref{travelling wave for forced MCF} and nonnegative Borel measures on $\mathbb{S}^{n-1}$. \end{enumerate} {\bf Problem 4.} In view of the above discussion and Taniguchi's theorem in \cite{Taniguchi2015convex}, a less ambitious question is whether the converse of Theorem \ref{thm blowing down limit} holds, that is, given a homogeneous viscosity solution $h_\infty$ of \eqref{limit eqn 1}, does there exist an entire solution of \eqref{eqn} so that its level set $\{u=1/2\}$ is asymptotic to $\{t=h_\infty(x)\}$?
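The representation $h_\infty(x)=\inf_{\xi\in\Xi}\xi\cdot x$ from Remark~\ref{rmk representation for level set limit} can be checked against \eqref{limit eqn 1} numerically: away from the kinks, such an infimum of linear functions has gradient of norm $\kappa_\ast^{-1}$, and it is concave and $1$-homogeneous. A small sketch in the plane, where, as a normalization on our part, each direction $\xi$ is scaled to have norm $\kappa_\ast^{-1}$ so that the eikonal equation holds (the value of $\kappa_\ast$ and the chosen directions are illustrative):

```python
import numpy as np

kappa = 1.7                                     # illustrative kappa_*
angles = (0.3, 1.2, 2.5)                        # illustrative directions
xi = np.array([(np.cos(a), np.sin(a)) for a in angles]) / kappa

def h(x):
    """h(x) = inf over xi of xi . x (an infimum of linear functions)."""
    return (xi @ x).min()

rng = np.random.default_rng(2)
eps = 1e-6
for _ in range(200):
    x, y = rng.normal(size=2) * 5, rng.normal(size=2) * 5
    assert abs(h(2.0 * x) - 2.0 * h(x)) < 1e-9             # 1-homogeneous
    assert h(0.5 * (x + y)) >= 0.5 * (h(x) + h(y)) - 1e-9  # concave
    vals = xi @ x
    if np.sort(vals)[1] - vals.min() > 1e-3:    # unique minimizer: smooth pt
        g = np.array([(h(x + eps * e) - h(x - eps * e)) / (2 * eps)
                      for e in np.eye(2)])
        assert abs(np.linalg.norm(g) - 1.0 / kappa) < 1e-4  # |grad h|=1/k*
```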
\subsection{Notations and organization of the paper} Throughout the paper we keep the following conventions. \begin{itemize} \item We use $C$ (large) and $c$ (small) to denote various universal constants, which could be different from line to line. \item The parabolic boundary of a domain $\Omega\subset\R^n\times\R$ is denoted by $\partial^p\Omega$. \item A function $u$ belongs to $C^{2,1}(\R^n\times\R)$ if it is $C^2$ in the $x$-variables and $C^1$ in the $t$-variable. \end{itemize} The remaining part of this paper is organized as follows. In Section \ref{sec propagation} we study the propagation phenomena in \eqref{eqn} and use this to prove Theorem \ref{main result 3}. In Section \ref{sec case near 0} we prove Theorem \ref{main result 4} by the sliding method. An elliptic Harnack inequality is established in Section \ref{sec elliptic Harnack}. In Section \ref{sec blowing down} we perform the blowing down analysis. In Section \ref{sec geometric motion} we prove Theorem \ref{thm blowing down limit}. In Section \ref{sec representation} we give a representation formula for the blowing down limits. With this knowledge of blowing down limits, we prove Theorem \ref{thm minimal speed} in Section \ref{sec minimal speed} by using the sliding method again. \section{Propagation phenomena}\label{sec propagation} \setcounter{equation}{0} \subsection{Cone of monotonicity at large scales} Standard parabolic regularity theory implies that $u$, $\nabla u$, $\nabla^2u$ and $\partial_tu$ are all bounded in $\R^n\times \R$. By the Lipschitz property of $u$ in $t$, $\sup_{x\in\R^n}u(x,t)$ is a Lipschitz function of $t$. We start with the following simple observation, which is related to the hypothesis {\bf(H1)}. \begin{prop}\label{prop dichotomy} Either $\sup_{x\in\R^n}u(x,t)\equiv 1$ or $\sup_{x\in\R^n}u(x,t)<1$ in $(-\infty,+\infty)$. \end{prop} \begin{proof} Denote \[\mathcal{I}:=\left\{t: \sup_{x\in\R^n}u(x,t)=1\right\}.\] By continuity, $\mathcal{I}$ is a closed subset of $\R$.
We claim that $\mathcal{I}$ is also open. Therefore it is either empty or the entire real line. Indeed, if $\sup_{x\in\R^n}u(x,t_0)=1$, then there exists a sequence of points $x_j\in\R^n$ such that $u(x_j,t_0)\to1$. Let \[u_j(x,t):=u(x_j+x,t_0+t).\] By standard parabolic regularity theory and the Arzel\`a--Ascoli theorem, $u_j\to u_\infty$ in $C^{2,1}_{loc}(\R^n\times \R)$, where $u_\infty$ is an entire solution of \eqref{eqn}. Since $0\leq u_\infty\leq 1$ and $u_\infty(0,0)=1$, by {\bf(F1)} and the strong maximum principle, $u_\infty\equiv 1$. As a consequence, for any $\varepsilon>0$ and $t\in(-\varepsilon,\varepsilon)$, \[\lim_{j\to\infty}u(x_j,t_0+t)=1.\] Hence $\sup_{x\in\R^n}u(x,t)=1$ in $(t_0-\varepsilon,t_0+\varepsilon)$ and the claim follows. \end{proof} From now on it is always assumed that ${\bf (H1)}$ holds, i.e. $\sup_{x\in\R^n}u(x,t)\equiv 1$ for any $t\in\R$. \begin{lem}\label{lem close to 1} For any $b>0$ and $R>0$, there exists a constant $\varepsilon:=\varepsilon(b,R)>0$ such that for any $(x,t)\in\R^n\times\R$, if $u(x,t)\geq 1-\varepsilon$, then $u\geq 1-b$ in $B_{R}(x)\times(t-R,t+R)$. \end{lem} \begin{proof} This follows from a contradiction argument similar to the proof of Proposition \ref{prop dichotomy}, by applying the strong maximum principle to the limiting solution. \end{proof} The following result is essentially \cite[Lemma 5.1]{Aronson1978multidimensional} (see also \cite[Lemma 3.5]{polacik2011threshold}). We will use the notations of forward and backward light cones in space-time: \[ \left\{ \begin{aligned} & \mathcal{C}^+_{\lambda}(x,t):=\left\{(y,s): ~~ s>t, |y-x|<\lambda(s-t)\right\},\\ & \mathcal{C}^-_{\lambda}(x,t):=\left\{(y,s): ~~ s<t, |y-x|<\lambda(t-s)\right\}. \end{aligned}\right. \] \begin{lem}[Propagation to state $1$]\label{lem propagation to 1} There exists a constant $b_1\in(0,1)$ such that for any $b\in[0,b_1)$ and $\delta>0$, there exists an $R:=R(b,\delta)$ so that the following holds.
If $w$ is the solution to the Cauchy problem \begin{equation}\label{Cauchy problem} \left\{ \begin{aligned} & \partial_t w-\Delta w=f(w) \quad & \mbox{in } ~~ \R^n\times(0,+\infty),\\ &w(0)=\left(1-b\right)\chi_{B_R}, \end{aligned}\right. \end{equation} where $R\geq R(b,\delta)$, then \[w(x,t)> 1-b \quad \mbox{in } ~~ \mathcal{C}^+_{\kappa_\ast-\delta}(0,0).\] \end{lem} By decreasing $b_1$ further, we may assume $f^\prime\leq f^\prime(1)/2$ in $[1-b_1,1]$. For applications below, we need an a priori estimate for a linear parabolic equation. \begin{lem}\label{lem A.1} Given a constant $M>0$, if $w$ satisfies \[ \left\{\begin{aligned} &\partial_tw-\Delta w\leq -Mw \quad &\mbox{in } B_1\times(-1,0),\\ &0\leq w\leq 1 \quad &\mbox{in } B_1\times(-1,0), \end{aligned}\right. \] then \[ w\leq \frac{C}{M} \quad \mbox{in } B_{1/2}\times(-1/2,0).\] \end{lem} This can be proved, for example, by constructing a suitable sup-solution. The first application of this lemma is \begin{lem}\label{lem inf u} For any entire solution $u$, if it is not exactly $1$, then \[\inf_{\R^n\times\R}u<1-b_1.\] \end{lem} \begin{proof} Assume to the contrary, $u\geq 1-b_1$ everywhere. By our choice of $b_1$, we get \begin{equation}\label{inequality near 1} \partial_t(1-u)-\Delta (1-u)\leq \frac{f^\prime(1)}{2}(1-u) \quad \mbox{in} ~~ \R^n\times\R. \end{equation} An iteration of Lemma \ref{lem A.1} gives $u\equiv 1$. \end{proof} The next lemma is our main technical tool for the proof of the Lipschitz property. \begin{lem}\label{lem forward Lip} There exist two constants $D>0$, $0<b_2<b_1$ so that the following holds. For any $(x,t)\in\{u=1-b_2\}$, \[u>1-b_2 \quad \mbox{in} \quad \mathcal{C}^+_{\kappa_\ast-\delta}(x,t+D).\] \end{lem} \begin{proof} Take $R:=R(b_1,\delta)$ according to Lemma \ref{lem propagation to 1}, $b_2:=\varepsilon(b_1,R)$ according to Lemma \ref{lem close to 1}. Then $u(x,t)=1-b_2$ implies $u(y,t)\geq 1-b_1$ for any $y\in B_{R}(x)$.
Combining Lemma \ref{lem propagation to 1} and the comparison principle, we deduce that $u> 1-b_1$ in $\mathcal{C}_{\kappa_\ast-\delta}^+(x,t)$. Now $1-u$ satisfies the differential inequality \eqref{inequality near 1} in $\mathcal{C}_{\kappa_\ast-\delta}^+(x,t)$. By Lemma \ref{lem A.1}, we find a $D>0$, which depends only on $b_1,b_2$ and $f^\prime(1)$, such that $u>1-b_2$ in $\mathcal{C}^+_{\kappa_\ast-\delta}(x,t+D)$. \end{proof} Three corollaries follow from this lemma; the first two are rather direct consequences. \begin{coro}\label{coro backward cone} For any $(x,t)\in\{u=1-b_2\}$, \[u<1-b_2 \quad \mbox{in} \quad \mathcal{C}^-_{\kappa_\ast-\delta}(x,t-D).\] \end{coro} \begin{coro} For any $(x,t)\in\{u=1-b_2\}$, $\{u=1-b_2\}$ lies between $\partial \mathcal{C}^-_{\kappa_\ast-\delta}(x,t-D)$ and $\partial \mathcal{C}^+_{\kappa_\ast-\delta}(x,t+D)$. \end{coro} \begin{coro}\label{coro not stationary} If $u$ is an entire solution of \eqref{eqn} satisfying {\bf(H1)}, then it cannot be independent of $t$, unless $u\equiv 1$. \end{coro} \begin{proof} Assume, to the contrary, that $\partial_tu\equiv 0$ in $\R^n\times \R$. By {\bf (H1)}, there exists a point $(x,0)\in\{u=1-b_2\}$. By Lemma \ref{lem forward Lip}, $u>1-b_2$ in $\mathcal{C}^+_{\kappa_\ast-\delta}(x,D)$. Then we get $u\geq 1-b_2$ everywhere. By Lemma \ref{lem inf u}, $u\equiv 1$. \end{proof} \begin{prop}\label{prop construction of Lip graphs} The level set $\{u=1-b_2\}$ belongs to the $D$-neighborhood of a globally Lipschitz graph $\{t=h_\ast(x)\}$. \end{prop} \begin{proof} Let \[h_\ast(x):=\inf_{(y,s)\in\{u=1-b_2\}} \left[s+ D+\frac{|x-y|}{\kappa_\ast-\delta}\right].\] It is a globally Lipschitz function on $\R^n$, with its Lipschitz constant at most $(\kappa_\ast-\delta)^{-1}$. To check this, we only need to show that $h_\ast>-\infty$ at one point. (This then implies that it is finite everywhere.) In fact, take an arbitrary point $(x_0,t_0)\in\{u=1-b_2\}$.
(The existence of such a point is guaranteed by {\bf (H1)} and Lemma \ref{lem inf u}.) By Corollary \ref{coro backward cone}, we see for any $ (y,s)\in\{u=1-b_2\}$, \[|y-x_0|>\left(\kappa_\ast-\delta\right)\left(t_0-D-s\right).\] In other words, $(x_0,t_0)\notin\mathcal{C}_{\kappa_\ast-\delta}^+(y,s+ D)$. Then by definition, we get \[h_\ast(x_0)\geq t_0. \qedhere\] \end{proof} We modify $h_\ast$ into a smooth function. Take a standard cut-off function $\eta\in C_0^\infty(\R^n)$, $\eta\geq 0$ and $\int_{\R^n}\eta=1$. Define \[h^\ast(x):=\int_{\R^n}\eta(x-y)\left[h_\ast(y)+1\right]dy.\] It is directly verified that $h^\ast$ has the same Lipschitz constant as $h_\ast$. Moreover, by choosing $\eta$ suitably, we have \begin{equation}\label{2.1} h_\ast \leq h^\ast\leq h_\ast+2. \end{equation} Denote \[\Omega^\ast:=\left\{(x,t):~~ t>h^\ast(x)\right\}.\] \begin{lem}\label{lem comparison with linear eqn} There exists a universal constant $c<1$ such that \[ cb_2\leq 1-u\leq b_2 \quad \mbox{on } \{t=h^\ast(x)\}.\] \end{lem} \begin{proof} The second inequality is a direct consequence of the fact that $u>1-b_2$ in $\Omega^\ast$, thanks to \eqref{2.1} and the construction of $h_\ast$ in the proof of Proposition \ref{prop construction of Lip graphs}. The first inequality follows by applying the Harnack inequality to the linear parabolic equation \[\partial_t\left(1-u\right)-\Delta\left(1-u\right)=V\left(1-u\right)\] in the parabolic cylinder $B_{3\sqrt{D}}(x)\times (t-9D,t+9D)$. In the above, $V:=-f(u)/(1-u)$ is an $L^\infty$ function. \end{proof} \subsection{Proof of Theorem \ref{main result 3}} Before proving Theorem \ref{main result 3}, we first need to construct a comparison function. Consider the problem \begin{equation}\label{linear eqn} \left\{\begin{aligned} & \partial_tw^\ast-\Delta w^\ast=f^\prime(1)w^\ast, \quad & \mbox{in } \Omega^\ast,\\ &w^\ast=1 \quad & \mbox{on } \partial\Omega^\ast. \end{aligned}\right.
\end{equation} \begin{prop}\label{prop property for linear eqn} \begin{enumerate} \item There exists a unique solution of \eqref{linear eqn} in $L^\infty(\Omega^\ast)\cap C^\infty(\overline{\Omega^\ast})$. \item There exists a universal constant $C$ such that for any $(x,t)\in \Omega^\ast$, \begin{equation}\label{exponential decay} \frac{1}{C}e^{-C\left[ t-h^\ast(x)\right]}\leq w^\ast(x,t)\leq Ce^{-\frac{ t-h^\ast(x)}{C}}. \end{equation} \item There exists a universal constant $C$ such that \begin{equation}\label{gradient bound 1} \frac{|\nabla w^\ast|}{w^\ast}+\frac{|\partial_tw^\ast|}{w^\ast}\leq C \quad \mbox{in }~~ \Omega^\ast. \end{equation} \item There exists a universal constant $c$ such that \begin{equation}\label{time derivative bound 1} \frac{\partial_tw^\ast}{w^\ast}\leq -c \quad \mbox{in }~~ \Omega^\ast. \end{equation} \item There exists a universal constant $C$ such that \begin{equation}\label{Lip} \frac{|\nabla w^\ast|}{|\partial_tw^\ast|}\leq C \quad \mbox{in }~~ \Omega^\ast. \end{equation} As a consequence, all level sets of $w^\ast$ are Lipschitz graphs in the $t$-direction. \end{enumerate} \end{prop} \begin{proof} (i) Existence, uniqueness and regularity of the solution can be proved as in Ole\u{\i}nik and Radkevi\v{c} \cite[Chapter 1]{Oleinik1969book}, see respectively Section 5, Section 6 and Section 8 therein. (ii) The exponential upper bound follows from iteratively applying the estimate in Lemma \ref{lem A.1}. The lower bound follows from iteratively applying the standard Harnack inequality. (iii) By the regularity theory in \cite{Oleinik1969book}, there exists a universal constant $C$ such that \[|\nabla w^\ast|+|\partial_t w^\ast|\leq C \quad \mbox{in } \Omega^\ast.\] Hence for any constant vector $(\xi,s)\in\R^{n+1}$, $\xi\cdot\nabla w^\ast+s\partial_tw^\ast$ is a bounded solution of \eqref{linear eqn}. As in (ii), it converges to $0$ as $t-h^\ast(x)\to+\infty$. 
Then \eqref{gradient bound 1} follows from an application of the comparison principle. (iv) For any $\rho>0$, in the half ball \[\mathcal{B}_{\rho}(0,0):=\left\{(x,t): \quad |x|^2+t^2<\rho^2, ~~ t<0 \right\},\] the function \[w_\rho(x,t):= e^{\alpha\left(|x|^2+t^2-\rho^2\right)}\] is a sup-solution of \eqref{linear eqn}, provided that $\alpha$ is small enough (depending only on $\rho$, the space dimension $n$ and $f^\prime(1)$). On $\left\{|x|^2+t^2=\rho^2, ~~ t<-\left(\kappa_\ast-\delta\right)^{-1}|x|\right\}$, there exists a constant $c(\rho)>0$ such that \[\partial_tw_\rho(x,t)=2\alpha tw_\rho(x,t)\leq -c(\rho).\] Since $\sup_{\R^n}|\nabla^2h^\ast|\leq C$, for any $(x,h^\ast(x))$, there exists a half ball $\mathcal{B}_{1/C}(y,s)$ tangent to $\partial \Omega^\ast$ at this point. Moreover, because $|\nabla h^\ast(x)|\leq \left(\kappa_\ast-\delta\right)^{-1}$, \[ t-s\leq -\left(\kappa_\ast-\delta\right)^{-1}|x-y|.\] By the comparison principle in $\mathcal{B}_{1/C}(y,s)$, $w_{1/C}(\cdot-(y,s))\geq w^\ast$ in $\mathcal{B}_{1/C}(y,s)$. Therefore \[\partial_tw^\ast\left(x,h^\ast(x)\right)\leq -c.\] Then \eqref{time derivative bound 1} follows from an application of the comparison principle as in (iii). (v) This is a direct consequence of \eqref{gradient bound 1} and \eqref{time derivative bound 1}. \end{proof} \begin{lem}\label{lem comparison with linear eqn 2} There exists a universal constant $C$ such that \begin{equation}\label{control from above and below} \frac{b_2}{C}\leq \frac{1-u}{w^\ast}\leq Cb_2 \quad \mbox{in } \Omega^\ast. \end{equation} \end{lem} \begin{proof} For each $k\geq 1$, let \[\Omega^\ast_k:=\left\{(x,t): ~~ k-1<t-h^\ast(x)<k \right\}.\] Similar to \eqref{exponential decay}, for any $(x,t)\in \Omega^\ast$ we have \begin{equation}\label{exponential decay for u} \frac{1}{C}e^{-C\left[ t-h^\ast(x)\right]}\leq 1-u(x,t)\leq Ce^{-\frac{ t-h^\ast(x)}{C}}.
\end{equation} Hence there exists a $\sigma\in(0,1)$ such that \[ \sigma^k f^\prime(1)\left(1-u\right)\leq \left[\partial_t-\Delta-f^\prime(1)\right]\left(1-u\right)\leq -\sigma^k f^\prime(1)\left(1-u\right) \quad \mbox{in } \Omega^\ast_k.\] For each $k$, define $w_k^\ast$ inductively as the unique solution of \begin{equation}\label{inductive eqn} \left\{\begin{aligned} &\partial_tw_k^\ast-\Delta w_k^\ast=f^\prime(1)\left(1-\sigma^k\right)w_k^\ast \quad &\mbox{in } \Omega^\ast_k,\\ &w_k^\ast=w_{k-1}^\ast \quad &\mbox{on } \{t=h^\ast(x)+k-1\}. \end{aligned}\right. \end{equation} Here $w_{0}^\ast:\equiv b_2$. As before, the existence and uniqueness of $w_k^\ast$ follows from Ole\u{\i}nik and Radkevi\v{c} \cite[Chapter 1]{Oleinik1969book}. By inductively applying the comparison principle, we get \begin{equation}\label{control from above 1} 1-u\leq w_k^\ast \quad \mbox{in } \Omega^\ast_k. \end{equation} Next for each $k$, denote \[M_k:=\sup_{\Omega^\ast_k}\frac{w_k^\ast}{w^\ast}.\] A direct calculation shows that $\left(M_k w^\ast\right)^{1-\sigma^k}$ is a sup-solution of \eqref{inductive eqn} in $\Omega^\ast_{k+1}$. (Here we also need an inductive assumption that $M_k w^\ast <1$ on $\{t=h^\ast(x)+k\}$.) From this we deduce that \[M_{k+1}\leq M_k^{1-\sigma^{k}}\sup_{\Omega^\ast_{k+1}}\left(w^\ast\right)^{-\sigma^{k}}\leq M_k^{1-\sigma^{k}} e^{Ck\sigma^k},\] where the last inequality follows by substituting \eqref{exponential decay} to estimate $\inf_{\Omega^\ast_{k+1}}w^\ast$. From this inequality it is readily deduced that there exists a universal, finite upper bound on $M_k$ as $k\to\infty$. Combining this fact with \eqref{control from above 1} we obtain the upper bound in \eqref{control from above and below}. 
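For the reader's convenience, here is a sketch (our verification, in the notation above) of the direct calculation behind the sup-solution property of $\left(M_k w^\ast\right)^{1-\sigma^k}$. Write $v:=\left(M_k w^\ast\right)^{\theta}$ with $\theta:=1-\sigma^k$. Using $\partial_tw^\ast-\Delta w^\ast=f^\prime(1)w^\ast$, a direct computation gives \[ \partial_tv-\Delta v=\theta\frac{\partial_tw^\ast-\Delta w^\ast}{w^\ast}\,v+\theta\left(1-\theta\right)\left|\frac{\nabla w^\ast}{w^\ast}\right|^2v \geq \theta f^\prime(1)v\geq f^\prime(1)\left(1-\sigma^{k+1}\right)v, \] where the first inequality uses $\theta\left(1-\theta\right)\geq 0$, and the second uses $f^\prime(1)<0$ together with $1-\sigma^k<1-\sigma^{k+1}$. Hence $v$ is a sup-solution of the equation in \eqref{inductive eqn} (with $k$ replaced by $k+1$) in $\Omega^\ast_{k+1}$.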
The lower bound in \eqref{control from above and below} follows in the same way by considering \begin{equation*} \left\{\begin{aligned} &\partial_tw_{k,\ast}-\Delta w_{k,\ast}=f^\prime(1)\left(1+\sigma^k\right)w_{k,\ast} \quad &\mbox{in } \Omega^\ast_k,\\ &w_{k,\ast} =w_{k-1,\ast} \quad &\mbox{on } \{t=h^\ast(x)+k-1\}. \end{aligned}\right. \qedhere \end{equation*} \end{proof} \begin{coro}\label{coro differential Harnack} There exists a universal constant $C>0$ such that \begin{equation}\label{gradient bound 2} \frac{|\nabla u|+|\partial_tu|}{1-u}\leq C \quad \mbox{ in } ~~ \Omega^\ast. \end{equation} \end{coro} \begin{proof} For any $(x,t)\in \Omega^\ast$, the standard gradient estimate gives \begin{equation}\label{gradient estimate} |\nabla u(x,t)|+|\partial_tu(x,t)|\leq C\sup_{B_1(x)\times (t-1,t)}(1-u). \end{equation} By the previous lemma, for any $(y,s)\in B_1(x)\times (t-1,t)$, \begin{equation}\label{Harnack} 1-u(y,s)\leq Cw^\ast(y,s) \leq C^2w^\ast(x,t)\leq C^3 \left[1-u(x,t)\right]. \end{equation} Here the second inequality follows by integrating \eqref{gradient bound 1} along the segment from $(y,s)$ to $(x,t)$. Substituting \eqref{Harnack} into \eqref{gradient estimate} we get \eqref{gradient bound 2}. \end{proof} \begin{prop}\label{prop monotonicity in time 1} There exists a $b_0\in(0,b_2)$ such that in $\{u>1-b_0\}$, \begin{equation}\label{time derivative bound 2} \frac{\partial_tu}{1-u}\geq c. \end{equation} \end{prop} \begin{proof} Assume, to the contrary, that there exists a sequence of points $(x_k,t_k)$ such that $u(x_k,t_k)\to 1$ but \begin{equation}\label{absurd assumption 1} \frac{\partial_tu(x_k,t_k)}{1-u(x_k,t_k)}\to 0. \end{equation} Denote $R_k:=\mbox{dist}((x_k,t_k),\partial\Omega^\ast)$. Because $u(x_k,t_k)\to1$, by \eqref{exponential decay for u}, $R_k$ goes to infinity as $k\to\infty$.
Let \[u_k(x,t):=\frac{1-u(x_k+x,t_k+t)}{1-u(x_k,t_k)}, \quad w_k(x,t):=\frac{w^\ast(x_k+x,t_k+t)}{1-u(x_k,t_k)}.\] By definition, $u_k(0,0)=1$, while by \eqref{control from above and below}, we have \[ \frac{b_2}{C}\leq \frac{u_k}{w_k}\leq Cb_2 \quad \mbox{in } B_{cR_k}(0)\times(-cR_k,cR_k).\] Furthermore, \eqref{gradient bound 1} and \eqref{gradient bound 2} are transformed into \[\frac{|\nabla u_k|+|\partial_tu_k|}{u_k}\leq C, \quad \frac{|\nabla w_k|+|\partial_tw_k|}{w_k}\leq C \quad \mbox{in } B_{cR_k}(0)\times(-cR_k,cR_k).\] Then by standard parabolic regularity theory, $u_k$ and $w_k$ are uniformly bounded in $C^{2+\alpha,1+\alpha/2}_{loc}(\R^n\times\R)$. After passing to a subsequence, $u_k\to u_\infty$, $w_k\to w_\infty$ in $C^{2,1}_{loc}(\R^n\times\R)$. Both of them are solutions of \[\partial_t w-\Delta w=f^\prime(1) w \quad \mbox{in }~~ \R^n\times \R.\] By \cite{Widder1963heat} or \cite{Lin2019ancient}, there exists a Borel measure $\mu$ supported on $\{\lambda=|\xi|^2\}\subset\R^{n+1}$ such that \[ w_\infty(x,t)=\int_{\{\lambda=|\xi|^2\}}e^{\left[f^\prime(1)+\lambda\right]t+\xi\cdot x}d\mu(\xi,\lambda).\] Because \begin{equation}\label{2.2} \frac{b_2}{C}\leq \frac{u_\infty}{w_\infty}\leq Cb_2 \quad \mbox{in }~~ \R^n\times \R, \end{equation} there exists a function $\Theta$ on $\{\lambda=|\xi|^2\}$ with $b_2/C\leq \Theta \leq Cb_2$ such that \[ u_\infty(x,t)=\int_{\{\lambda=|\xi|^2\}}e^{\left[f^\prime(1)+\lambda\right]t+\xi\cdot x}\Theta(\xi,\lambda)d\mu(\xi,\lambda).\] This follows by writing $u_\infty$ as the same integral representation with another measure $\widetilde{\mu}$, applying the Radon-Nikodym theorem to these two measures, and then using \eqref{2.2} to estimate the Radon-Nikodym derivative $\frac{d\widetilde{\mu}}{d\mu}$. Because $w_\infty$ still satisfies the inequality \eqref{time derivative bound 1}, the support of $\mu$ is contained in $\{\lambda\leq -f^\prime(1)-c\}$.
Hence we also have \begin{eqnarray*} \partial_tu_\infty&=& \int_{\{\lambda=|\xi|^2\}}\left[f^\prime(1)+\lambda\right]e^{\left[f^\prime(1)+\lambda\right]t+\xi\cdot x}\Theta(\xi,\lambda)d\mu(\xi,\lambda) \\ &\leq & -cu_\infty. \end{eqnarray*} This contradicts \eqref{absurd assumption 1}: indeed, \eqref{absurd assumption 1} forces $\partial_tu_\infty(0,0)=0$, while the above inequality yields $\partial_tu_\infty(0,0)\leq -cu_\infty(0,0)=-c<0$. \end{proof} Theorem \ref{main result 3} follows by combining \eqref{gradient bound 2} and \eqref{time derivative bound 2}. \section{Proof of Theorem \ref{main result 4}}\label{sec case near 0} \setcounter{equation}{0} For simplicity, denote $h(x):=h_{1-b_0}(x)$. \subsection{The combustion and bistable case}\label{sec bistable} In these two cases we need the assumption {\bf (H2)}, that is, $u(x,t)\to0$ uniformly as $\mbox{dist}((x,t),\{t=h(x)\})\to +\infty$. First we use the sliding method to prove \begin{prop}\label{prop monotonicity in bistable case} $u$ is increasing in $t$. \end{prop} \begin{proof} The fact that $\partial_tu>0$ in $\{u>1-b_0\}$ has been established in Proposition \ref{prop monotonicity in time 1}. Now we use the sliding method to treat the remaining case. For any $\lambda\in\R$, let \[u^\lambda(x,t):=u(x,t+\lambda).\] We want to show that for any $\lambda>0$, $u^\lambda\geq u$ in $\R^n\times\R$. {\bf Step 1.} If $\lambda$ is large enough, $u^\lambda\geq u$ in $\R^n\times\R$. By {\bf(H2)}, there exists a constant $L>0$ such that \begin{equation}\label{small in the right} \sup_{\{t<h(x)-L\}}u \ll 1. \end{equation} If $\lambda> L$, we have \[ u^\lambda \geq 1-b_0 \geq u \quad \mbox{in } \{h(x)-L\leq t \leq h(x)\}.\] In $\{t<h(x)-L\}$, we have \[\partial_t\left(u-u^\lambda\right)_+-\Delta \left(u-u^\lambda\right)_+\leq V\left(u-u^\lambda\right)_+,\] where \[ V:= \begin{cases} \frac{f(u)-f(u^\lambda)}{u-u^\lambda}, & \mbox{if } u>u^\lambda\\ f^\prime(u), & \mbox{otherwise}. \end{cases} \] By \eqref{small in the right} and {\bf(F4)}, $V\leq 0$ in $\{t<h(x)-L\}$.
Because $\left(u-u^\lambda\right)_+=0$ on $\{t=h(x)-L\}$ and $\left(u-u^\lambda\right)_+\to 0$ as $\mbox{dist}((x,t),\{t=h(x)-L\})\to+\infty$, by the maximum principle we obtain \[u\leq u^\lambda \quad \mbox{in } ~~ \{t<h(x)-L\}.\] {\bf Step 2.} Now \[ \lambda_\ast:=\inf\left\{\lambda: ~~ \forall ~ \lambda^\prime>\lambda, ~~ u^{\lambda^\prime}\geq u ~~ \mbox{ in } \R^n\times\R\right\}\] is well defined. We claim that $\lambda_\ast=0$. By continuity, $u^{\lambda_\ast}\geq u$ in $\R^n\times\R$. By the strong maximum principle, either $u^{\lambda_\ast}>u$ strictly or $u^{\lambda_\ast}\equiv u$. The latter is excluded if $\lambda_\ast>0$, because in this case $u^{\lambda_\ast}>u$ in $\{t>h(x)\}$. {\bf Claim.} If $\lambda_\ast>0$, for any $L>0$, there exists a constant $\varepsilon_1:=\varepsilon_1(\lambda_\ast,L)>0$ such that \[ u^{\lambda_\ast}-u\geq \varepsilon_1 \quad \mbox{in }~~ \{h(x)-L\leq t \leq h(x)\}. \] We prove this claim by contradiction. Assume there exists a sequence of points $(x_i,t_i)\in \{h(x)-L\leq t \leq h(x)\}$ such that $u^{\lambda_\ast}(x_i,t_i)-u(x_i,t_i)\to 0$. Set \[u_i(x,t):=u(x_i+x,t_i+t).\] They satisfy the following conditions: \begin{itemize} \item there exists a constant $b(L)\in(0,1)$ such that $b(L)\leq u_i(0,0)\leq 1-b(L)$; \item $u_i^{\lambda_\ast}\geq u_i$ in $\R^n\times\R$; \item $u_i^{\lambda_\ast}-u_i\geq c\lambda_\ast \left(1-u_i\right)$ in $\{u_i\geq 1-b_0\}$ (thanks to Proposition \ref{prop monotonicity in time 1}); \item $u_i^{\lambda_\ast}(0,0)-u_i(0,0)\to 0$. \end{itemize} These lead to a contradiction after letting $i\to+\infty$, and the proof of this claim is complete. By this claim and Proposition \ref{prop monotonicity in time 1}, for any $L>0$, we find another constant $\varepsilon_2:=\varepsilon_2(\lambda_\ast,L)>0$ such that, if $\lambda\geq \lambda_\ast-\varepsilon_2$, \begin{equation}\label{positive lower bound} u^{\lambda} \geq u \quad \mbox{in }~~ \{t\geq h(x)-L\}.
\end{equation} Then as in Step 1, by \eqref{positive lower bound} and the comparison principle, for these $\lambda$, $u^\lambda\geq u$ in $\{t<h(x)-L\}$ (hence everywhere in $\R^n\times\R$). This contradicts the definition of $\lambda_\ast$. Therefore we must have $\lambda_\ast=0$. \end{proof} Combining this proposition with Corollary \ref{coro not stationary} and the strong maximum principle, we obtain \begin{coro}\label{coro strict monotonicity} In $\R^n\times\R$, $\partial_tu>0$ strictly. \end{coro} \begin{prop}\label{prop 3.3} There exists a universal constant $c>0$ such that \[ \partial_tu\geq c|\nabla u| \quad \mbox{in }~~ \R^n\times\R.\] \end{prop} \begin{proof} In $\{u>1-b_0\}$, this inequality follows by combining Corollary \ref{coro differential Harnack} and Proposition \ref{prop monotonicity in time 1}. In view of Corollary \ref{coro strict monotonicity}, following the argument in the second step of the proof of Proposition \ref{prop monotonicity in bistable case}, for any $L>0$, we find a positive lower bound for $\partial_tu$ in $\{h(x)-L\leq t \leq h(x)\}$. Hence trivially we have \[|\nabla u|\leq C \leq \frac{C}{c}\partial_tu \quad \mbox{in }~~ \{h(x)-L\leq t \leq h(x)\}.\] Finally, we apply the maximum principle to the linearized equation \[\left(\partial_t-\Delta\right)\left(\partial_tu-c\xi\cdot\nabla u\right)=f^\prime(u)\left(\partial_tu-c\xi\cdot\nabla u\right)\] to show that $\partial_tu-c\xi\cdot\nabla u\geq 0$ in $\{t<h(x)-L\}$, where $\xi$ is an arbitrary unit vector in $\R^n$ and $c>0$ is a small constant. \end{proof} Theorem \ref{main result 4} is a direct consequence of this proposition. \subsection{A remark on the monostable case}\label{subsec monostable case} For the monostable case, we note the following important ``hair trigger'' phenomenon (see \cite[Theorem 3.1]{Aronson1978multidimensional}).
\begin{lem}\label{lem propagation to 1 in monostable case} For any $\lambda\in(0,1)$, $\delta>0$ and $(x,t)\in\R^n\times\R$, there exists a constant $D:=D(x,t,\lambda)>0$ such that \[u> \lambda \quad \mbox{in } ~~ \mathcal{C}^+_{\kappa_\ast-\delta}(x,t+D).\] \end{lem} \begin{lem}\label{lem 3.1} If $f$ is monostable, then $u\to0$ uniformly as $\mbox{dist}((x,t),\{u\geq 1-b_0\})\to +\infty$. \end{lem} \begin{proof} This follows from the following Liouville type result: if $u$ is an entire solution of \eqref{eqn} satisfying $0\leq u \leq 1-b_0$, then $u\equiv 0$. This Liouville theorem is a direct consequence of Lemma \ref{lem propagation to 1 in monostable case}. \end{proof} Unfortunately, in this case we need to impose the following assumption: \begin{equation}\label{differential Harnack 2} \frac{|\partial_tu|+|\nabla u|}{u} \leq C \quad \mbox{in }~~ \{t<h(x)\}. \end{equation} Of course, if $u$ is a travelling wave solution, this assumption holds by applying the standard elliptic Harnack inequality and interior gradient estimates (see Lemma \ref{lem gradient bound 2} below), but we do not know how to prove it in the parabolic case. \begin{lem}\label{lem 3.2} Given $\kappa>0$, assume $u$ is an entire positive solution of \[\partial_tu-\Delta u=\kappa u.\] Then \[\frac{|\nabla u|}{\partial_t u}\leq \frac{1}{2\sqrt{\kappa}} \quad \mbox{in }~~ \R^n\times\R.\] As a consequence, all level sets of $u$ are Lipschitz graphs in the $t$ direction, with their Lipschitz constants at most $\left(2\sqrt{\kappa}\right)^{-1}$.
\end{lem} \begin{proof} By \cite{Widder1963heat} or \cite{Lin2019ancient}, there exists a Borel measure $\mu$ supported on $\{\lambda=|\xi|^2\}\subset\R^{n+1}$ such that \[ u(x,t)=\int_{\{\lambda=|\xi|^2\}}e^{\left[\kappa+\lambda\right]t+\xi\cdot x}d\mu(\xi,\lambda).\] Then we have \begin{eqnarray*} \partial_tu(x,t) &=& \int_{\{\lambda=|\xi|^2\}}\left[\kappa+\lambda\right]e^{\left[\kappa+\lambda\right]t+\xi\cdot x}d\mu(\xi,\lambda) \\ &=& \int_{\{\lambda=|\xi|^2\}}\left[\kappa+|\xi|^2\right]e^{\left[\kappa+\lambda\right]t+\xi\cdot x}d\mu(\xi,\lambda) \\ &\geq&2\sqrt{\kappa}\int_{\{\lambda=|\xi|^2\}}|\xi|e^{\left[\kappa+\lambda\right]t+\xi\cdot x}d\mu(\xi,\lambda)\\ &\geq& 2\sqrt{\kappa}|\nabla u(x,t)|. \qedhere \end{eqnarray*} \end{proof} \begin{coro} There exists a constant $L>0$ such that \[\frac{|\nabla u|}{\partial_tu}\leq \frac{1}{4\sqrt{f^\prime(0)}} \quad \mbox{in } ~~ \{t<h(x)-L\}.\] \end{coro} \begin{proof} For any $(x_i,t_i)\in\{t<h(x)\}$ with $t_i-h(x_i)\to -\infty$, by Lemma \ref{lem 3.1}, $u(x_i,t_i)\to 0$. Set \[u_i(x,t):=\frac{u(x_i+x,t_i+t)}{u(x_i,t_i)}.\] By definition, $u_i>0$ and $u_i(0,0)=1$. Integrating \eqref{differential Harnack 2}, we see $u_i$ are uniformly bounded in any compact set of $\R^n\times\R$. Then by standard parabolic regularity theory, we can take a subsequence $u_i\to u_\infty$ in $C^{2,1}_{loc}(\R^n\times\R)$. Here $u_\infty$ is an entire solution of \[\partial_tu_\infty-\Delta u_\infty =f^\prime(0) u_\infty.\] The claim then follows from Lemma \ref{lem 3.2}. \end{proof} Theorem \ref{main result 4} in the monostable case (under the hypothesis \eqref{differential Harnack 2}) follows from the same sliding method as in the previous subsection. \section{An elliptic Harnack inequality}\label{sec elliptic Harnack} \setcounter{equation}{0} From now on, unless otherwise stated, it is always assumed that ${\bf (F1-F4)}$ and ${\bf (H1-H2)}$ hold. In this section we prove an elliptic Harnack inequality for $u$. 
This will be used in the blowing down analysis in the next section. In $\{t>h(x)\}$, the desired estimate has been given in Corollary \ref{coro differential Harnack}, so here we consider the remaining part $\{t<h(x)\}$. \begin{prop}\label{prop differential Harnack 1} There exists a universal constant $C>0$ such that \begin{equation}\label{differential Harnack 1} \frac{|\partial_tu|+|\nabla u|}{u} \leq C \quad \mbox{in }~~ \left\{t<h(x)\right\}. \end{equation} \end{prop} Before proving this proposition, we first record, for later use, the following exponential decay of $u$ in $\{t<h(x)\}$. \begin{prop}\label{prop negative part} Under the hypothesis {\bf(H2)}, \begin{equation}\label{exponential decay 2} u(x,t)\leq Ce^{c\left[t-h(x)\right]} \quad \mbox{in} \quad \{t<h(x)\}. \end{equation} \end{prop} \begin{proof} Because $f$ is of combustion type or bistable, by choosing $L$ large enough, we have \[ \partial_tu-\Delta u\leq 0 \quad \mbox{in} \quad \{t<h(x)-L\}.\] Take a radially symmetric function $\varphi\in C^2(\R^n)$ such that $\varphi\leq 0$, $\varphi(x)\equiv -2\kappa_\ast|x|$ outside a large ball, $|\Delta\varphi|\ll 1$ and $|\nabla\varphi|\leq C$ in $\R^n$. By taking a small $\mu>0$, the function \[w(x,t):=e^{\mu\left[t-\varphi(x)\right]}\] is a sup-solution of the heat equation in $\mathcal{D}:=\{(x,t): t<\varphi(x)\}$. Moreover, $w=1$ on $\partial\mathcal{D}$. For each $(x,t)\in \{t<h(x)\}$, by enlarging $L$ further (depending on the Lipschitz constant of $h$, but independent of $x$), the domain \[\mathcal{D}_x:=\left\{(y,s): s<h(x)-2L+\varphi(y-x)\right\}\subset \{s<h(y)-L\}.\] A comparison with a suitable translation of $w$ leads to \eqref{exponential decay 2}. \end{proof} Take a large $L>0$ so that $u\ll 1$ in $\left\{t<h(x)-L\right\}$. This is possible by {\bf (H2)}.
In $\left\{h(x)-L\leq t \leq h(x)\right\}$, \eqref{differential Harnack 1} is a direct consequence of the facts that $u$ has a positive lower bound here while both $\partial_tu$ and $|\nabla u|$ are bounded. It thus remains to show that \eqref{differential Harnack 1} holds in $\{t<h(x)-L\}$. We first prove the combustion case. \begin{proof}[Proof of Proposition \ref{prop differential Harnack 1} in the combustion case] If $f$ is of combustion type, $u$, $\partial_tu$ and $\partial_{x_i}u$ all satisfy the heat equation in $\{t<h(x)-L\}$. The estimate \eqref{differential Harnack 1} then follows from the comparison principle. For example, because both $\partial_tu$ and $u$ converge to $0$ uniformly as $\mbox{dist}((x,t),\{t=h(x)\})\to +\infty$, if $\partial_tu\leq Mu$ on $\{t=h(x)-L\}$ for some constant $M>0$, then \[\partial_tu\leq Mu \quad \mbox{in } ~~ \{t<h(x)-L\}. \qedhere\] \end{proof} Next, we prove the bistable case. \begin{proof}[Proof of Proposition \ref{prop differential Harnack 1} in the bistable case] Take a $b\in(0,1)$ sufficiently small so that $f\in C^{1,\alpha}([0,b])$, \begin{equation}\label{4.1} f^\prime(u)\leq f^\prime(0)/2 \quad \mbox{and} \quad |f^\prime(u)-f^\prime(0)|\leq Cu^\alpha, \quad \mbox{for any}~~~ u\in[0,b]. \end{equation} For any $\lambda\in(0,b)$, denote $\Omega_\lambda:=\{u<\lambda\}$. A direct calculation using \eqref{4.1} shows that for some universal constant $C>1$ (independent of $\lambda$), \[\partial_tu^{1-C\lambda^\alpha}-\Delta u^{1-C\lambda^\alpha}\geq f^\prime(u) u^{1-C\lambda^\alpha}.\] On the other hand, $\partial_tu$ is a solution of this linearized equation. 
Therefore, if we denote \[ M(\lambda):= \sup_{\partial\Omega_\lambda}\frac{\partial_tu}{u^{1-C\lambda^\alpha}}=\sup_{\partial\Omega_\lambda}\frac{\partial_tu}{\lambda^{1-C\lambda^\alpha}},\] applying the comparison principle as in the proof of the combustion case, we obtain \begin{equation}\label{comparison in backward domain} \partial_tu\leq M(\lambda)u^{1-C\lambda^\alpha} \quad \mbox{in} \quad \Omega_\lambda. \end{equation} From this inequality and the fact that $\partial\Omega_{\lambda/2}\subset \Omega_\lambda$, we deduce that \[ M\left(\frac{\lambda}{2}\right)\leq M(\lambda) \left(\frac{\lambda}{2}\right)^{-C\left(1-2^{-\alpha}\right) \lambda^\alpha}.\] This inequality implies that \[\limsup_{\lambda\to0}M(\lambda)<+\infty.\] Substituting this estimate into \eqref{comparison in backward domain}, we find a constant $C$ such that for any $\lambda\in(0,b)$, in $\Omega_\lambda\setminus \Omega_{\lambda/2}$, \begin{equation}\label{estimate for u_t} \partial_tu \leq Cu^{1-C\left(2u\right)^\alpha} \leq 2Cu. \end{equation} Here to deduce the last inequality, we have used the inequality (perhaps after choosing a smaller $b$) \[ u^{-C2^\alpha u^\alpha}\leq 2, \quad \mbox{if}~~~ u\leq b.\] Finally, the estimate for $|\nabla u|/u$ follows by combining \eqref{estimate for u_t} and Proposition \ref{prop 3.3}. \end{proof} \section{Blowing down analysis}\label{sec blowing down} \setcounter{equation}{0} Recall that the one dimensional travelling wave $g$ (see \eqref{1D wave}) is strictly increasing, and it converges to $1$ and $0$ exponentially as $t\to \pm\infty$. 
In fact, by {\bf(F1)} and ${\bf (F4)}$, there exist four positive constants $\alpha_\pm$ and $\beta_\pm$ such that \[g(t)=1-\alpha_+e^{-\beta_+t}+O\left(e^{-(1+\alpha)\beta_+t}\right) \quad \mbox{as }~~ t\to+\infty,\] \[g(t)= \alpha_-e^{\beta_-t}+O\left(e^{(1+\alpha)\beta_-t}\right) \quad \mbox{as }~~ t\to-\infty,\] where \[\beta_+:=-\lim_{t\to+\infty}\frac{g^{\prime\prime}(t)}{g^\prime(t)}=\frac{-\kappa_\ast+\sqrt{\kappa_\ast^2-4f^\prime(1)}}{2},\] \[\beta_-:=\lim_{t\to-\infty}\frac{ g^{\prime\prime}(t)}{g^\prime(t)}=\frac{\kappa_\ast+\sqrt{\kappa_\ast^2-4f^\prime(0)}}{2}.\] Because $f^\prime(0)\leq 0$, $\beta_-\geq\kappa_\ast$. Following \cite{Barles1992front}, set $\Phi:=g^{-1}\circ u$. It satisfies \begin{equation}\label{distance eqn 2} \partial_t\Phi-\Delta\Phi=\kappa_\ast+\frac{g^{\prime\prime}(\Phi)}{g^\prime(\Phi)}\left(|\nabla\Phi|^2-1\right). \end{equation} \begin{lem}\label{lem Lipschitz} There exists a universal constant $C>0$ such that \[ |\partial_t\Phi|+|\nabla\Phi|\leq C \quad \mbox{in } ~~ \R^n\times\R.\] \end{lem} \begin{proof} By Proposition \ref{prop differential Harnack 1}, \[|\partial_t\Phi|\leq C\frac{|\partial_t u|}{u}\leq C, \quad |\nabla\Phi|\leq C\frac{|\nabla u|}{u}\leq C, \quad \mbox{in} ~~\{u\leq 1-b_0\}.\] By Corollary \ref{coro differential Harnack}, \[|\partial_t\Phi|\leq C\frac{|\partial_t u|}{1-u}\leq C, \quad |\nabla\Phi|\leq C\frac{|\nabla u|}{1-u}\leq C, \quad \mbox{in}~~ \{u\geq 1-b_0\}.\qedhere\] \end{proof} \begin{lem}[Semi-concavity]\label{lem semi-concavity} There exists a universal constant $C$ such that for any $(x,t)\in\{\Phi>0\}$, \[\nabla^2\Phi(x,t)\leq \frac{C}{\Phi(x,t)},\] and for any $(x,t)\in\{\Phi<0\}$, \[\nabla^2\Phi(x,t)\geq \frac{C}{\Phi(x,t)}.\] \end{lem} This follows from a standard maximum principle argument applied to $\eta\nabla^2\Phi(\xi,\xi)$, for any $\xi\in\R^n$ and a suitable cut-off function $\eta$. 
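The exponents $\beta_\pm$ introduced above can be recovered by a short computation, which we sketch here for the reader's convenience (our verification, assuming the travelling wave ODE $g^{\prime\prime}-\kappa_\ast g^\prime+f(g)=0$, the sign convention consistent with \eqref{distance eqn 2}). Linearizing this ODE at the two equilibria and inserting the exponential ans\"atze gives \[ 1-g\sim\alpha_+e^{-\beta_+t}: \quad \beta_+^2+\kappa_\ast\beta_++f^\prime(1)=0, \qquad g\sim\alpha_-e^{\beta_-t}: \quad \beta_-^2-\kappa_\ast\beta_-+f^\prime(0)=0. \] Taking the positive root of the first quadratic equation (recall $f^\prime(1)<0$) and the larger root of the second one yields exactly the two formulas for $\beta_\pm$ displayed above.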
For each $\varepsilon>0$, let $\Phi_\varepsilon(x,t):=\varepsilon\Phi(\varepsilon^{-1}x,\varepsilon^{-1}t)$. It satisfies \begin{equation}\label{vanishing viscosity eqn 1} \partial_t\Phi_\varepsilon-\varepsilon\Delta\Phi_\varepsilon =\kappa_\ast +\frac{g^{\prime\prime}(\varepsilon^{-1}\Phi_\varepsilon)}{g^\prime(\varepsilon^{-1}\Phi_\varepsilon)}\left(|\nabla\Phi_\varepsilon|^2-1\right). \end{equation} By the uniform Lipschitz bound on $\Phi_\varepsilon$ from Lemma \ref{lem Lipschitz}, there exists a subsequence of $\varepsilon\to0$ such that $\Phi_\varepsilon\to\Phi_\infty$ in $C_{loc}(\R^n\times\R)$. The limit $\Phi_\infty$ may depend on the choice of subsequences. But for notational simplicity, we will always write $\varepsilon\to0$ instead of $\varepsilon_i\to0$. By the standard vanishing viscosity method, we get \begin{lem}\label{lem blowing down limit} In the open set $\{\Phi_\infty>0\}$, $\Phi_\infty$ is a viscosity solution of \begin{equation}\label{H-J eqn 3} \partial_t\Phi_\infty+\beta_+|\nabla\Phi_\infty|^2-\kappa_\ast-\beta_+=0. \end{equation} In the open set $\{\Phi_\infty<0\}$ (if non-empty), $\Phi_\infty$ is a viscosity solution of \begin{equation}\label{H-J eqn 4} \partial_t\Phi_\infty-\beta_-|\nabla\Phi_\infty|^2-\kappa_\ast+\beta_-=0. \end{equation} \end{lem} Since $h(x)$ is globally Lipschitz on $\R^n$, by taking a further subsequence, we may also assume \[ \varepsilon_i h\left(\varepsilon_i^{-1} x\right)\to h_\infty(x) \quad \mbox{in} ~~ C_{loc}(\R^n).\] \begin{lem}\label{lem nondegeneracy} $\{\Phi_\infty>0\}=\{t>h_\infty(x)\}$. \end{lem} \begin{proof} Because $u\geq 1-b_0$ in $\{t>h(x)\}$, by Lemma \ref{lem A.1} we get \[1-u(x,t)\leq Ce^{-c\left[t-h(x)\right]} \quad \mbox{in} ~~ \{t>h(x)\}.\] Using the expansion of $g$ near infinity, this is rewritten as \begin{equation}\label{linear growth} \Phi(x,t)\geq c\left[t-h(x)\right]-C \quad \mbox{in} ~~ \{t>h(x)\}.
\end{equation} Taking the scaling $\Phi_\varepsilon$ and letting $\varepsilon\to 0$, we obtain \begin{equation}\label{linear growth 2} \Phi_\infty(x,t)\geq c\left[t-h_\infty(x)\right]>0 \quad \mbox{in} ~~\{t>h_\infty(x)\}. \end{equation} Finally, because $\Phi\leq g^{-1}(1-b_0)$ in $\{t<h(x)\}$, $\Phi_\infty\leq 0$ in $\{t<h_\infty(x)\}$. \end{proof} \begin{lem}\label{lem unbdd below} $h_\infty$ is unbounded from below. \end{lem} \begin{proof} This is a direct consequence of Proposition \ref{prop dichotomy}. \end{proof} \begin{lem}\label{lem Lip constant} The Lipschitz constant of $h_\infty$ is at most $\kappa_\ast^{-1}$. \end{lem} \begin{proof} For any $(x_0,t_0)\in\{\Phi_\infty>0\}$, for all $\varepsilon$ small, $\Phi_\varepsilon(x_0,t_0)\geq \Phi_\infty(x_0,t_0)/2$. By definition, $u(\varepsilon^{-1}x_0,\varepsilon^{-1}t_0)$ is very close to $1$. By Lemma \ref{lem forward Lip}, for any $\delta>0$, there exists a $D(\delta)$ such that $\mathcal{C}^+_{\kappa_\ast-\delta}(\varepsilon^{-1} x_0, \varepsilon^{-1}t_0+D(\delta))\subset\{u>1-b_2\}$. A scaling of this gives $\mathcal{C}^+_{\kappa_\ast-\delta}(x_0,t_0+D(\delta)\varepsilon)\subset \{\Phi_\varepsilon>0\}$. Letting $\varepsilon\to0$ and then $\delta\to 0$, with the help of \eqref{linear growth 2}, we deduce that $\mathcal{C}^+_{\kappa_\ast}(x_0,t_0)\subset \{\Phi_\infty> 0\}$. This implies that the Lipschitz constant of $h_\infty$ is at most $\kappa_\ast^{-1}$. \end{proof} Finally, under the assumption of Theorem \ref{main result 4}, the following non-degeneracy condition in $\Omega_\infty^-$ holds. \begin{prop}\label{prop nondegeneracy} In $\Omega_\infty^-$, \begin{equation}\label{linear growth 3} \Phi_\infty(x,t)\leq c\left[t-h_\infty(x)\right]<0. \end{equation} \end{prop} This can be proved by scaling Proposition \ref{prop negative part}.
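To conclude this section, we record an illustrative consistency check (not part of the argument above, and assuming the planar travelling wave ansatz $u=g(e\cdot x+\kappa_\ast t)$ for a unit vector $e\in\R^n$, the orientation consistent with \eqref{distance eqn 2}). For such a solution, $\Phi=e\cdot x+\kappa_\ast t$ is invariant under the scaling $\Phi_\varepsilon(x,t)=\varepsilon\Phi(\varepsilon^{-1}x,\varepsilon^{-1}t)$, so $\Phi_\infty=e\cdot x+\kappa_\ast t$ and $h_\infty(x)=-\kappa_\ast^{-1}e\cdot x$, which attains the Lipschitz bound in Lemma \ref{lem Lip constant}. Since $\partial_t\Phi_\infty=\kappa_\ast$ and $|\nabla\Phi_\infty|=1$, both Hamilton-Jacobi equations are verified directly: \[ \partial_t\Phi_\infty+\beta_+|\nabla\Phi_\infty|^2-\kappa_\ast-\beta_+=\kappa_\ast+\beta_+-\kappa_\ast-\beta_+=0, \] \[ \partial_t\Phi_\infty-\beta_-|\nabla\Phi_\infty|^2-\kappa_\ast+\beta_-=\kappa_\ast-\beta_--\kappa_\ast+\beta_-=0. \]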
\section{Geometric motion: Proof of Theorem \ref{thm blowing down limit}}\label{sec geometric motion} \setcounter{equation}{0} In this section we prove Theorem \ref{thm blowing down limit}. This theorem does not follow directly from the result on the front motion law established in \cite{Barles1993front}, although it can be reduced to that result by constructing suitable comparison functions. The main reason is that, for entire solutions of \eqref{eqn}, it is not clear whether $|\nabla\Phi|\leq 1$ holds everywhere. (We believe this is not true in general.) \begin{proof}[Proof of Theorem \ref{thm blowing down limit}] We divide the proof into two steps, verifying the sub-solution and super-solution properties respectively. {\bf Step 1.} For any $\varphi\in C^1(\R^n)$ satisfying $\varphi\geq h_\infty$ and $\varphi=h_\infty$ at one point, say the origin $0$, we want to show that $|\nabla\varphi(0)|\leq \kappa_\ast^{-1}$. Assume to the contrary that there exists $\delta>0$ such that \begin{equation}\label{absurd assumption 3} |\nabla\varphi(0)|=\left(\kappa_\ast-3\delta\right)^{-1}. \end{equation} The tangent plane of $\{t=\varphi(x)\}$ at $(0,0)$ is $\left\{\left(\kappa_\ast-3\delta\right)t=-x\cdot\xi\right\}$, where $\xi:=\nabla\varphi(0)/|\nabla\varphi(0)|$. Since $h_\infty\leq\varphi$, we can find constants $\rho>0$, $\sigma>0$ and $t_0<0$, all small in absolute value, such that \[\Phi_\infty(x,t_0)\geq \sigma \quad \mbox{ in} \quad \mathcal{D}:=\left\{x\cdot \xi\geq -\left(\kappa_\ast-2\delta\right)t_0\right\}\cap B_\rho(0).\] For all $\varepsilon$ small, consider the Cauchy problem \[ \left\{ \begin{aligned} & \partial_t w-\Delta w=f(w) \quad & \mbox{in } \R^n\times(t_0/\varepsilon,+\infty),\\ &w(x,t_0/\varepsilon)=\left(1-e^{-c\sigma\varepsilon^{-1}\mbox{dist}\left(\varepsilon x,\partial\mathcal{D}\right)}\right)\chi_{\mathcal{D}/\varepsilon}(x), \end{aligned}\right. \] where $\chi_{\mathcal{D}/\varepsilon}$ denotes the characteristic function of $\mathcal{D}/\varepsilon$. 
As in Lemma \ref{lem propagation to 1} (or by the motion law for front propagation starting from $\partial(\mathcal{D}\cap B_\rho)$, see \cite[Main Theorem]{Barles1992front} or \cite[Theorem 9.1]{Barles1993front}), we get \[ w(x,0)\geq 1-b_0 \quad \mbox{in} \quad \left\{x\cdot\xi\geq \frac{\delta t_0}{\varepsilon}\right\}\cap B_{\frac{\rho+(\kappa_\ast-\delta)t_0}{\varepsilon}}(0).\] By the comparison principle, $u\geq w$ in $\R^n\times[t_0/\varepsilon,0]$. After a scaling, with the help of \eqref{linear growth 2}, we get \[\Phi_\infty> 0 \quad \mbox{in} \quad \left\{x\cdot\xi\geq \delta t_0 \right\}\cap B_{\rho+(\kappa_\ast-\delta)t_0}(0).\] In particular, $(0,0)$ is an interior point of $\{\Phi_\infty>0\}$. This is a contradiction. {\bf Step 2.} In the same way, we can show that for any $\varphi\in C^1(\R^n)$ satisfying $\varphi\leq h_\infty$ and $\varphi(0)=h_\infty(0)$, $|\nabla\varphi(0)|\geq \kappa_\ast^{-1}$. Assume this is not true, that is, there exists a $\delta>0$ such that $|\nabla\varphi(0)|=\left( \kappa_\ast+3\delta\right)^{-1}$. The only difference from Step 1 is the construction of the comparison function. Now we need to consider, for all $\varepsilon$ small, the Cauchy problem \[ \left\{ \begin{aligned} & \partial_t w-\Delta w=f(w) \quad & \mbox{in } \R^n\times(t_0/\varepsilon,+\infty),\\ &w(x,t_0/\varepsilon)=1-\left(1-e^{-c\sigma\varepsilon^{-1}\mbox{dist}\left(\varepsilon x,\partial\mathcal{D}\right)}\right)\chi_{\mathcal{D}/\varepsilon}(x), \end{aligned}\right. \] where $\mathcal{D}=\left\{x\cdot\xi\leq -\left(\kappa_\ast+2\delta\right)t_0\right\}\cap B_\rho(0)$. 
As in Step 1, by the motion law for front propagation starting from $\partial(\mathcal{D}\cap B_\rho)$ given in \cite[Main Theorem]{Barles1992front} or \cite[Theorem 9.1]{Barles1993front}, we deduce that \[ w(x,0)\leq b_0 \quad \mbox{in} \quad \left\{x\cdot\xi\leq -\delta \frac{t_0}{\varepsilon}\right\}\cap B_{\frac{\rho+(\kappa_\ast-\delta)t_0}{\varepsilon}}(0).\] This implies that $(0,0)$ is an interior point of $\{\Phi_\infty<0\}$, which is a contradiction. \end{proof} \section{Representation formula for the blowing down limit}\label{sec representation} \setcounter{equation}{0} In this section, we give an explicit representation formula for $\Phi_\infty$. We first consider the forward problem \eqref{H-J eqn 3} in $\Omega_\infty^+:=\{t>h_\infty(x)\}$, and then the backward problem \eqref{H-J eqn 4} in $\Omega_\infty^-:=\{t<h_\infty(x)\}$. The main tool used in this section is the theory of generalized characteristics associated with $\Phi_\infty$. We will follow closely the treatment in Cannarsa, Mazzola and Sinestrari \cite{Cannarsa2015global}. \subsection{Forward problem} \label{subsec positive part} This subsection is devoted to the forward problem \eqref{H-J eqn 3} in $\Omega_\infty^+$. We first note the following pointwise monotonicity relation. \begin{lem}\label{lem pointwise monotonicity} For any $(x,t),(y,s)\in\Omega_\infty^+$ with $t>s$, if the segment connecting $(y,s)$ and $(x,t)$ is contained in $\Omega_\infty^+$, then \begin{equation}\label{pointwise monototinicity} \Phi_\infty(x,t)\leq\Phi_\infty(y,s)+\left(\kappa_\ast+\beta_+\right)(t-s)+\frac{|x-y|^2}{4\beta_+(t-s)}. \end{equation} \end{lem} \begin{proof} Since $\Phi_\infty$ is Lipschitz, it is differentiable a.e. and satisfies \eqref{H-J eqn 3} a.e. in $\Omega_\infty^+$. Discarding a set of measure zero, we may assume that for a.e. $\tau\in[0,1]$, $\Phi_\infty$ is differentiable at the point $X(\tau):=((1-\tau)y+\tau x, (1-\tau)s+\tau t)$. 
(The general case follows by an approximation using the continuity of $\Phi_\infty$.) Then we have \begin{eqnarray*} \frac{d}{d\tau}\Phi_\infty\left(X(\tau)\right) &=& \partial_t\Phi_\infty\left(X(\tau)\right)\left(t-s\right) +\nabla\Phi_\infty\left(X(\tau)\right)\cdot(x-y) \\ &=& \left(\kappa_\ast+\beta_+\right)\left(t-s\right) -\beta_+|\nabla\Phi_\infty\left(X(\tau)\right)|^2\left(t-s\right)\\ &&+\nabla\Phi_\infty\left(X(\tau)\right)\cdot(x-y)\\ &\leq&\left(\kappa_\ast+\beta_+\right)\left(t-s\right)+\frac{|x-y|^2}{4\beta_+\left(t-s\right)}, \end{eqnarray*} where the last step uses Young's inequality $\nabla\Phi_\infty\cdot(x-y)\leq \beta_+(t-s)|\nabla\Phi_\infty|^2+\frac{|x-y|^2}{4\beta_+(t-s)}$. Integrating this inequality in $\tau$, we obtain \eqref{pointwise monototinicity}. \end{proof} Next we establish a localized Hopf-Lax formula for $\Phi_\infty$. \begin{lem}[Localized Hopf-Lax formula I]\label{lem localized Hopf-Lax 1} There exists a constant $K$ depending only on the Lipschitz constant of $\Phi_\infty$ so that the following holds. For any $(x,t)\in\Omega_\infty^+$, there exists an $\varepsilon>0$ such that $B_{K\varepsilon}(x)\times(t-\varepsilon,t)\subset\Omega_\infty^+$, and \begin{equation}\label{localized Hopf-Lax 1} \Phi_\infty(x,t)=\min_{y\in B_{K\varepsilon}(x)}\left[\Phi_\infty(y,t-\varepsilon)+\left(\kappa_\ast+\beta_+\right)\varepsilon+\frac{|x-y|^2}{4\beta_+\varepsilon}\right]. \end{equation} \end{lem} \begin{proof} Denote $Q:=B_{K\varepsilon}(x)\times(t-\varepsilon,t)$. By Lemma \ref{lem pointwise monotonicity}, we can apply Lions \cite[Theorem 10.1 and Theorem 11.1]{Lions1982book} to deduce that \begin{equation}\label{local representation 1} \Phi_\infty(x,t)=\inf_{(y,s)\in \partial^pQ}\left[\Phi_\infty(y,s)+\left(\kappa_\ast+\beta_+\right)(t-s)+\frac{|x-y|^2}{4\beta_+(t-s)}\right]. 
\end{equation} If $K$ is large enough (compared to the Lipschitz constant of $\Phi_\infty$), then for any $(y,s)\in \partial B_{K\varepsilon}(x)\times[t-\varepsilon,t)$, we have \[\Phi_\infty(y,s)+\left(\kappa_\ast+\beta_+\right)(t-s)+\frac{|x-y|^2}{4\beta_+(t-s)}> \Phi_\infty(x,t-\varepsilon)+\left(\kappa_\ast+\beta_+\right)\varepsilon.\] Therefore the infimum in \eqref{local representation 1} is attained in the interior of $B_{K\varepsilon}(x)\times\{t-\varepsilon\}$, and it must be a minimum. \end{proof} Now we use this localized Hopf-Lax formula to study backward characteristic curves of $\Phi_\infty$. We will restrict our attention to differentiable points. Take a point $(x_0,t_0)\in\Omega_\infty^+$ so that $\Phi_\infty(\cdot,t_0)$ is differentiable at $x_0$. Denote $p_0:=\nabla\Phi_\infty(x_0,t_0)$. By Lemma \ref{lem localized Hopf-Lax 1}, for any $s<t_0$ sufficiently close to $t_0$, there exists a point $(x(s),s)\in \mathcal{C}^-_{K}(x_0,t_0)\cap\Omega_\infty^+$ such that \begin{equation}\label{min pt} \Phi_\infty(x_0,t_0)= \Phi_\infty(x(s),s)+\left(\kappa_\ast+\beta_+\right)(t_0-s)+\frac{|x_0-x(s)|^2}{4\beta_+(t_0-s)}. \end{equation} \begin{lem}\label{lem form of characteristic} Under the above setting, we have \begin{equation}\label{representation of min pt} x(s)=x_0-2\beta_+(t_0-s)p_0. \end{equation} \end{lem} \begin{proof} By Lemma \ref{lem localized Hopf-Lax 1}, for any $x$ close to $x_0$, \begin{equation}\label{5.1.1} \Phi_\infty(x,t_0)\geq \Phi_\infty( x(s),s)+\left(\kappa_\ast+\beta_+\right)(t_0-s)+\frac{|x- x(s)|^2}{4\beta_+(t_0-s)}. 
\end{equation} Subtracting \eqref{min pt} from \eqref{5.1.1} leads to \[\Phi_\infty(x,t_0)-\Phi_\infty(x_0,t_0)\geq \frac{x+x_0-2 x(s)}{4\beta_+(t_0-s)}\cdot(x-x_0).\] On the other hand, because $\Phi_\infty(\cdot,t_0)$ is differentiable at $x_0$, we have \[\Phi_\infty(x,t_0)-\Phi_\infty(x_0,t_0)=p_0\cdot (x-x_0)+o\left(|x-x_0|\right).\] These two relations hold for any $x$ sufficiently close to $x_0$, so \[p_0=\frac{x_0- x(s)}{2\beta_+(t_0-s)}. \qedhere\] \end{proof} \begin{coro} The minimum in \eqref{localized Hopf-Lax 1} is attained at a unique point. \end{coro} The curve \[\{(x(s),s): ~~~ x(s)=x_0-2\beta_+(t_0-s)p_0, ~~ s\leq t_0\}\] is called the backward characteristic curve of $\Phi_\infty$ starting from $(x_0,t_0)$. \begin{lem}\label{lem propagation of differentiability} Under the above setting, $\Phi_\infty(\cdot,s)$ is differentiable at $x(s)$. Moreover, \begin{equation}\label{propagation of differentiability} \nabla\Phi_\infty(x(s),s)=p_0. \end{equation} \end{lem} \begin{proof} Because $x(s)$ attains the minimum in \eqref{local representation 1}, for any $z$ sufficiently close to $x(s)$, we have \begin{eqnarray*} && \Phi_\infty(x(s),s)+\left(\kappa_\ast+\beta_+\right)(t_0-s)+\frac{|x_0-x(s)|^2}{4\beta_+(t_0-s)} \\ &\leq& \Phi_\infty(z,s)+\left(\kappa_\ast+\beta_+\right)(t_0-s)+\frac{|x_0-z|^2}{4\beta_+(t_0-s)}. \end{eqnarray*} After simplification, this reads \[\Phi_\infty(z,s)\geq \Phi_\infty(x(s),s)+\frac{x_0-x(s)}{2\beta_+(t_0-s)}\cdot\left[z-x(s)\right]+O\left(|z-x(s)|^2\right).\] Since $\Phi_\infty(\cdot,s)$ is semi-concave, this inequality implies that $\Phi_\infty(\cdot,s)$ is differentiable at $x(s)$, and its gradient is given by \eqref{propagation of differentiability}. \end{proof} By Lemma \ref{lem localized Hopf-Lax 1} and Lemma \ref{lem propagation of differentiability}, the characteristic curve can be extended indefinitely in the backward direction, unless it hits the boundary $\partial\Omega_\infty^+$ in finite time. 
Now we show that the latter case must happen. \begin{lem}\label{lem hit boundary 1} For any $(x_0,t_0)\in\Omega_\infty^+$ with $\Phi_\infty(\cdot,t_0)$ differentiable at $x_0$, there exists an $s_0<t_0$ such that \[(x_0-2\beta_+(t_0-s_0)p_0,s_0)\in\partial\Omega_\infty^+.\] \end{lem} \begin{proof} If $( x(s),s)=(x_0-2\beta_+(t_0-s)p_0,s)\in\Omega_\infty^+$, by \eqref{representation of min pt}, \eqref{min pt} can be rewritten as \begin{equation}\label{values along characteristic 1} \Phi_\infty(x_0-2\beta_+(t_0-s)p_0,s)=\Phi_\infty(x_0,t_0)-\left(\kappa_\ast+\beta_++\beta_+|p_0|^2\right)(t_0-s). \end{equation} Hence there exists an $s_0$ such that $ \Phi_\infty(x_0-2\beta_+(t_0-s_0)p_0,s_0)=0$ and $\Phi_\infty(x_0-2\beta_+(t_0-s)p_0,s)>0$ for any $s\in(s_0,t_0]$. Because $\Phi_\infty>0$ in $\Omega_\infty^+$ and $\Phi_\infty=0$ on $\partial\Omega_\infty^+$, $(x_0-2\beta_+(t_0-s_0)p_0,s_0)\in\partial\Omega_\infty^+$. \end{proof} \begin{lem}\label{lem representation of blowing down limit 1} For any $(x,t)\in\Omega_\infty^+$, \[\Phi_\infty(x,t)=\inf_{y\in\{h_\infty<t\}}\left\{\left(\kappa_\ast+\beta_+\right)\left[t-h_\infty(y)\right]+\frac{|x-y|^2}{4\beta_+\left[t-h_\infty(y)\right]}\right\}.\] \end{lem} \begin{proof} Choosing $(y,s)=(y,h_\infty(y))$ with $h_\infty(y)<t$ in \eqref{pointwise monototinicity} (here we may assume the segment connecting this point and $(x,t)$ is contained in $\Omega_\infty^+$), and then taking the infimum over $y$, we obtain \begin{equation}\label{5.1} \Phi_\infty(x,t)\leq\inf_{y\in\{h_\infty<t\}}\left\{\left(\kappa_\ast+\beta_+\right)\left[t-h_\infty(y)\right]+\frac{|x-y|^2}{4\beta_+\left[t-h_\infty(y)\right]}\right\}. \end{equation} To show that this is an equality, we may assume without loss of generality that $x$ is a differentiable point of $\Phi_\infty(\cdot,t)$. Then by Lemma \ref{lem hit boundary 1}, in particular \eqref{values along characteristic 1}, we find that $y=x-2\beta_+(t-s_0)p_0$, with $p_0=\nabla\Phi_\infty(x,t)$ and $s_0$ given by Lemma \ref{lem hit boundary 1} applied at $(x,t)$, attains the equality in \eqref{5.1}. 
\end{proof} \subsection{Backward problem}\label{sec negative part} For the backward problem \eqref{H-J eqn 4}, we still use backward characteristics to determine the form of $\Phi_\infty$ in $\Omega_\infty^-$. The proofs are similar to those for the forward problem, so most results in this subsection will be stated without proof. \begin{lem}\label{lem pointwise monotonicity 2} For any $(x,t),(y,s)\in\Omega_\infty^-$ with $t>s$, \begin{equation}\label{pointwise monototinicity 2} \Phi_\infty(x,t)\geq\Phi_\infty(y,s)+\left(\kappa_\ast-\beta_-\right)(t-s)-\frac{|x-y|^2}{4\beta_-(t-s)}. \end{equation} \end{lem} Because $\Omega_\infty^-$ is convex (see Remark \ref{rmk representation for level set limit}), the segment connecting $(y,s)$ and $(x,t)$ is always contained in $\Omega_\infty^-$. In $\Omega_\infty^-$, $\widetilde{\Phi}_\infty:=-\Phi_\infty$ is a viscosity solution of \begin{equation}\label{H-J eqn 5} \partial_t\widetilde{\Phi}_\infty+\beta_-|\nabla\widetilde{\Phi}_\infty|^2+\kappa_\ast-\beta_-=0. \end{equation} Hence we have the following localized Hopf-Lax formula. \begin{lem}[Localized Hopf-Lax formula II]\label{lem localized Hopf-Lax 2} There exists a constant $K$ depending only on the Lipschitz constant of $\Phi_\infty$ so that the following holds. For any $(x,t)\in\Omega_\infty^-$, there exists an $\varepsilon>0$ such that $B_{K\varepsilon}(x)\times(t-\varepsilon,t)\subset\Omega_\infty^-$, and \begin{equation}\label{localized Hopf-Lax 2} \Phi_\infty(x,t)=\max_{y\in B_{K\varepsilon}(x)}\left[\Phi_\infty(y,t-\varepsilon)+\left(\kappa_\ast-\beta_-\right)\varepsilon-\frac{|x-y|^2}{4\beta_-\varepsilon}\right]. \end{equation} \end{lem} Take a point $(x_0,t_0)\in\Omega_\infty^-$ so that $\Phi_\infty(\cdot,t_0)$ is differentiable at $x_0$. Denote $p_0:=\nabla\Phi_\infty(x_0,t_0)$. 
By Lemma \ref{lem localized Hopf-Lax 2}, for any $s<t_0$ sufficiently close to $t_0$, there exists a point $(x(s),s)\in \mathcal{C}^-_{K}(x_0,t_0)\cap\Omega_\infty^-$ such that \begin{equation}\label{min pt 2} \Phi_\infty(x_0,t_0)= \Phi_\infty(x(s),s)+\left(\kappa_\ast-\beta_-\right)(t_0-s)-\frac{|x_0-x(s)|^2}{4\beta_-(t_0-s)}. \end{equation} \begin{lem}\label{lem form of characteristic 2} Under the above setting, we have \begin{equation}\label{representation of min pt 2} x(s)=x_0+2\beta_-(t_0-s)p_0. \end{equation} \end{lem} \begin{lem}\label{lem propagation of differentiability 2} Under the above setting, $\Phi_\infty(\cdot,s)$ is differentiable at $x(s)$. Moreover, \begin{equation}\label{propagation of differentiability 2} \nabla\Phi_\infty(x(s),s)=p_0. \end{equation} \end{lem} By Lemma \ref{lem localized Hopf-Lax 2} and Lemma \ref{lem propagation of differentiability 2}, the characteristic curve can be extended indefinitely in the backward direction, unless it hits the boundary $\partial\Omega_\infty^-$ in finite time. Now we show that the latter case must happen. \begin{lem}\label{lem hit boundary 2} For any $(x_0,t_0)\in\Omega_\infty^-$ with $\Phi_\infty(\cdot,t_0)$ differentiable at $x_0$, there exists an $s_0<t_0$ such that \[(x_0+2\beta_-(t_0-s_0)p_0,s_0)\in\partial\Omega_\infty^-.\] \end{lem} \begin{proof} If $(x(s),s)=(x_0+2\beta_-(t_0-s)p_0,s)\in\Omega_\infty^-$, by \eqref{representation of min pt 2}, \eqref{min pt 2} can be rewritten as \begin{equation}\label{values along characteristic 2} \Phi_\infty(x_0+2\beta_-(t_0-s)p_0,s)=\Phi_\infty(x_0,t_0)-\left(\kappa_\ast-\beta_--\beta_-|p_0|^2\right)(t_0-s). \end{equation} If $\beta_->\kappa_\ast$, there exists an $s_0$ such that $ \Phi_\infty(x_0+2\beta_-(t_0-s_0)p_0,s_0)=0$ and $\Phi_\infty(x_0+2\beta_-(t_0-s)p_0,s)<0$ for any $s\in(s_0,t_0]$. Because $\Phi_\infty<0$ in $\Omega_\infty^-$ and $\Phi_\infty=0$ on $\partial\Omega_\infty^-$, $(x_0+2\beta_-(t_0-s_0)p_0,s_0)\in\partial\Omega_\infty^-$. 
If $\beta_-=\kappa_\ast$, this is still the case, unless $p_0=0$. However, if $p_0=0$, the characteristic curve is $(x_0,s)$, and \eqref{values along characteristic 2} reads \[\Phi_\infty(x_0,s)\equiv \Phi_\infty(x_0,t_0), \quad \mbox{for any } s<t_0.\] This cannot happen by \eqref{linear growth 3}. \end{proof} With these lemmas in hand, similar to Lemma \ref{lem representation of blowing down limit 1}, we get \begin{lem}\label{lem representation of blowing down limit 2} For any $(x,t)\in\Omega_\infty^-$, \[\Phi_\infty(x,t)=\sup_{y\in\{h_\infty<t\}} \left\{\left(\kappa_\ast-\beta_-\right)\left[t-h_\infty(y)\right]-\frac{|x-y|^2}{4\beta_-\left[t-h_\infty(y)\right]}\right\}.\] \end{lem} \begin{rmk} \begin{itemize} \item In the above, we used only backward characteristic curves starting from differentiable points. From a non-differentiable point there could emanate many backward characteristic curves; see \cite{Cannarsa2015global}. \item In the monostable case, where $\kappa_\ast/2\leq \beta_-< \kappa_\ast$, the backward characteristic curves may stay in the domain forever and never hit the boundary. We expect that the above representation formula still holds in this case, but do not know how to prove it. \item The existence of a nontrivial viscosity solution to \eqref{H-J eqn 4} imposes some restrictions on the domain $\{t<h_\infty(x)\}$. The following question seems interesting and, as far as the author knows, has not been explored in the literature: under what conditions on the domain $\{t<h_\infty(x)\}$ can we prove the nonexistence of viscosity solutions of a Hamilton-Jacobi equation? We may ask the same question about the implications of the existence of globally Lipschitz viscosity solutions. \end{itemize} \end{rmk} \section{Characterization of minimal speed: Proof of Theorem \ref{thm minimal speed}}\label{sec minimal speed} \setcounter{equation}{0} In this section we consider the travelling wave equation \eqref{travelling wave eqn}. 
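As motivation for the constants $K_\pm$ introduced just below, we note that they arise from completing the square in the Hamilton-Jacobi equations \eqref{H-J eqn 1} and \eqref{H-J eqn 2}, which will be derived later in this section for the blowing down limit $\Psi_\infty$; this is the computation behind the eikonal reformulation of the representation formulas.

```latex
% From (H-J eqn 1): kappa d_n Psi - kappa_* + beta_+ (|grad Psi|^2 - 1) = 0.
% Divide by beta_+ and complete the square in the e_n-direction:
\left|\nabla\Psi_\infty+\frac{\kappa}{2\beta_+}e_n\right|^2
=|\nabla\Psi_\infty|^2+\frac{\kappa}{\beta_+}\partial_n\Psi_\infty+\frac{\kappa^2}{4\beta_+^2}
=1+\frac{\kappa_\ast}{\beta_+}+\frac{\kappa^2}{4\beta_+^2}=K_+^2.
% Likewise, from (H-J eqn 2):
\left|\nabla\Psi_\infty-\frac{\kappa}{2\beta_-}e_n\right|^2
=1-\frac{\kappa_\ast}{\beta_-}+\frac{\kappa^2}{4\beta_-^2}=K_-^2.
```

Note that the first term under each square root involves $\kappa_\ast$ while the last involves the wave speed $\kappa$; when $\kappa=\kappa_\ast$ both squares are perfect, which is used in the proof of Theorem \ref{thm minimal speed}.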
Denote the constants \[K_+:=\sqrt{1+\frac{\kappa_\ast}{\beta_+}+\frac{\kappa^2}{4\beta_+^2}}, \quad \quad K_-:=\sqrt{1-\frac{\kappa_\ast}{\beta_-}+\frac{\kappa^2}{4\beta_-^2}}.\] Abusing notation, we will use the following notation for cones in $\R^n$: \[\mathcal{C}^+_{\lambda}(x):=\{y: y_n-x_n>\lambda|y^\prime-x^\prime|\}, \quad \mathcal{C}^-_{\lambda}(x):=\{y: y_n-x_n<-\lambda|y^\prime-x^\prime|\}.\] As in Section \ref{sec blowing down}, set $\Psi:=g^{-1}\circ u$. It satisfies \begin{equation}\label{distance eqn} -\Delta\Psi+\kappa\partial_n\Psi=\kappa_\ast+\frac{g^{\prime\prime}(\Psi)}{g^\prime(\Psi)}\left(|\nabla\Psi|^2-1\right). \end{equation} Since this is an elliptic equation, we have the following unconditional, global Lipschitz bound on $\Psi$. This lemma holds as long as the nonlinearity satisfies $f(0)=f(1)=0$, no matter whether it is monostable, combustion or bistable. \begin{lem}\label{lem gradient bound 2} There exists a universal constant $C$ such that $|\nabla\Psi|\leq C$ on $\R^n$. \end{lem} \begin{proof} By definition, \[\nabla\Psi=\frac{\nabla u}{g^\prime(\Psi)}.\] Since $g^\prime$ has a positive lower bound on any compact subset of $\R$, by the gradient bound on $u$, $|\nabla\Psi|$ is bounded in $\{1/4<u<3/4\}$. In $\{u<1/4\}$, \[g^\prime(\Psi)\geq c g(\Psi)=cu.\] Hence we have \[|\nabla\Psi|\leq C\frac{|\nabla u|}{u}\leq C,\] where the last inequality follows from the Harnack inequality and interior gradient estimates applied to \eqref{travelling wave eqn}. Similarly, in $\{u>3/4\}$, \[|\nabla\Psi|\leq C\frac{|\nabla u|}{1-u}\leq C.\qedhere\] \end{proof} As before, $\Psi$ is still semi-concave. 
\begin{lem}[Semi-concavity]\label{semi-concavity 2} There exists a universal constant $C$ such that for any $x\in\{\Psi>0\}$, \[\nabla^2 \Psi(x)\leq \frac{C}{\Psi(x)},\] and for any $x\in\{\Psi<0\}$, \[\nabla^2 \Psi(x)\geq \frac{C}{\Psi(x)}.\] \end{lem} For each $\varepsilon>0$, let $\Psi_\varepsilon(x):=\varepsilon\Psi(\varepsilon^{-1}x)$, which satisfies \begin{equation}\label{vanishing viscosity eqn 2} -\varepsilon\Delta\Psi_\varepsilon+\kappa\partial_n\Psi_\varepsilon=\kappa_\ast +\frac{g^{\prime\prime}(\varepsilon^{-1}\Psi_\varepsilon)}{g^\prime(\varepsilon^{-1}\Psi_\varepsilon)}\left(|\nabla\Psi_\varepsilon|^2-1\right). \end{equation} By the uniform Lipschitz bound on $\Psi_\varepsilon$ from Lemma \ref{lem gradient bound 2}, for any sequence $\varepsilon_i\to0$, there exists a subsequence such that $\Psi_{\varepsilon_i}\to\Psi_\infty$ in $C_{loc}(\R^n)$. Then the standard vanishing viscosity method gives \begin{lem}\label{blowing down limit 2} In the open set $\{\Psi_\infty>0\}$, $\Psi_\infty$ is a viscosity solution of \begin{equation}\label{H-J eqn 1} \kappa\partial_n\Psi_\infty-\kappa_\ast+\beta_+\left(|\nabla\Psi_\infty|^2-1\right)=0. \end{equation} In the open set $\{\Psi_\infty<0\}$ (if non-empty), $\Psi_\infty$ is a viscosity solution of \begin{equation}\label{H-J eqn 2} \kappa\partial_n\Psi_\infty-\kappa_\ast-\beta_-\left(|\nabla\Psi_\infty|^2-1\right)=0. \end{equation} \end{lem} \begin{rmk}\label{rmk 6.3} Equations \eqref{H-J eqn 1} and \eqref{H-J eqn 2} are the corresponding travelling wave equations for the time-dependent Hamilton-Jacobi equations \eqref{H-J eqn 3} and \eqref{H-J eqn 4}. \end{rmk} Recall that \[\{v=1-b_0\}=\{x_n=h(x^\prime)\}.\] As before, we define the blowing down limit $h_\infty$ of $h$. By Lemma \ref{lem nondegeneracy}, we still have \[\{\Psi_\infty>0\}=\{x_n>h_\infty(x^\prime)\}.\] \begin{prop}\label{prop blowing down limit 2} The Lipschitz constant of $h_\infty$ is at most $\sqrt{\kappa^2/\kappa_\ast^2-1}$. 
In particular, we must have $\kappa\geq\kappa_\ast$. \end{prop} \begin{rmk} Under the assumptions of Theorem \ref{main result 2}, the blowing down limit $h_\infty$ is a viscosity solution of \begin{equation}\label{eikonal eqn 2} |\nabla h_\infty|^2-\frac{\kappa^2}{\kappa_\ast^2}+1=0 \quad \mbox{in } \quad \R^{n-1}. \end{equation} This follows from a reduction of Theorem \ref{thm blowing down limit}. \end{rmk} \begin{proof}[Proof of Proposition \ref{prop blowing down limit 2}] The blowing down limit of the level set for the entire solution $v(x+\kappa t e_n)$ is the graph \[ t= \frac{h_\infty(x^\prime)-x_n}{\kappa}.\] By Lemma \ref{lem Lip constant}, its Lipschitz constant is at most $\kappa_\ast^{-1}$. \end{proof} By Lemma \ref{lem semi-concavity}, $\Psi_\varepsilon$ are uniformly semi-concave in any compact set of $\{\Psi_\infty>0\}$. As a consequence, $\Psi_\infty$ is locally semi-concave in this open set. The sup-differential of $\Psi_\infty$ is then well defined at every point in $\{\Psi_\infty>0\}$. Recall that \[\partial\Psi_\infty(x):=\left\{\xi\in\R^n: \limsup_{y\to x}\frac{\Psi_\infty(y)-\Psi_\infty(x)-\xi\cdot (y-x)}{|y-x|}\leq 0\right\} \] is a compact convex subset of $\R^n$. Because $\Psi_\varepsilon\to\Psi_\infty$ uniformly on any compact set of $\R^n$, by the uniform semi-concavity of $\Psi_\varepsilon$, we deduce that for any $x_\varepsilon\to x_\infty\in\{\Psi_\infty>0\}$, \begin{equation}\label{convergence of gradients} \mbox{each limit point of} ~~ \nabla\Psi_\varepsilon(x_\varepsilon) ~~ \mbox{as}~ \varepsilon\to0 \in \partial\Psi_\infty(x_\infty). \end{equation} If $\Psi_\infty<0$ in $\{x_n<h_\infty(x^\prime)\}$, the same result holds for the negative part of $\Psi_\infty$, with sup-differentials replaced by sub-differentials. 
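A minimal justification of \eqref{convergence of gradients}, using only the local uniform convergence $\Psi_\varepsilon\to\Psi_\infty$ and the uniform semi-concavity (with a local constant $C$), runs as follows.

```latex
% Let xi be a limit of grad Psi_eps(x_eps) along a subsequence. Near x_infty,
% uniform semi-concavity gives, for all y in a fixed neighbourhood,
\Psi_\varepsilon(y)\leq \Psi_\varepsilon(x_\varepsilon)
+\nabla\Psi_\varepsilon(x_\varepsilon)\cdot(y-x_\varepsilon)+C|y-x_\varepsilon|^2.
% Letting eps -> 0 (uniform convergence on compact sets, x_eps -> x_infty):
\Psi_\infty(y)\leq \Psi_\infty(x_\infty)+\xi\cdot(y-x_\infty)+C|y-x_\infty|^2.
% Dividing by |y - x_infty| and letting y -> x_infty:
\limsup_{y\to x_\infty}
\frac{\Psi_\infty(y)-\Psi_\infty(x_\infty)-\xi\cdot(y-x_\infty)}{|y-x_\infty|}\leq 0,
% i.e. xi belongs to the sup-differential of Psi_infty at x_infty.
```

The same argument with the inequalities reversed gives the sub-differential statement in $\{\Psi_\infty<0\}$.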
A reduction of Lemma \ref{lem representation of blowing down limit 1} and Lemma \ref{lem representation of blowing down limit 2} gives \begin{prop}\label{prop solution of H-J eqn} \begin{itemize} \item For any $x=(x^\prime,x_n)\in\{\Psi_\infty>0\}$, \begin{equation}\label{explicit formulation 6.1} \Psi_\infty(x)= \inf_{y^\prime\in \R^{n-1}}\left[K_+ \sqrt{|x^\prime-y^\prime|^2+(x_n-h_\infty(y^\prime))^2} -\frac{\kappa}{2\beta_+}\left(x_n-h_\infty(y^\prime)\right)\right]. \end{equation} \item Assume $\Psi_\infty<0$ in $\{x_n<h_\infty(x^\prime)\}$. Then for any $x=(x^\prime,x_n)\in\{x_n<h_\infty(x^\prime)\}$, \begin{equation}\label{explicit formulation 6.2} \Psi_\infty(x)= -\inf_{y^\prime\in \R^{n-1}}\left[K_- \sqrt{|x^\prime-y^\prime|^2+(x_n-h_\infty(y^\prime))^2} -\frac{\kappa}{2\beta_-}\left(x_n-h_\infty(y^\prime)\right)\right]. \end{equation} \end{itemize} \end{prop} \begin{rmk} The representation formulas \eqref{explicit formulation 6.1} and \eqref{explicit formulation 6.2} (when $\beta_->\kappa_\ast$) can be proved directly by rewriting \eqref{H-J eqn 1} and \eqref{H-J eqn 2} as eikonal equations. For example, in the case of \eqref{H-J eqn 1}, we can define a norm $\|\cdot\|$ on $\R^n$ whose unit ball is $B_{K_+}\left(0^\prime,-\frac{\kappa}{2\beta_+}\right)$. (This is possible because this ball contains the origin as an interior point.) The Hamilton-Jacobi equation \eqref{H-J eqn 1} is then equivalent to the eikonal type equation \begin{equation}\label{eikonal eqn} \|\nabla\Psi_\infty(x)\|^2-1=0. \end{equation} Then we can prove that \[ \Psi_\infty(x)= \inf_{y\in \partial\{\Psi_\infty>0\}}\|x-y\|^\ast.\] Here $\|\cdot\|^\ast$ denotes the dual norm of $\|\cdot\|$. \end{rmk} We are now ready to prove Theorem \ref{thm minimal speed}. \begin{proof}[Proof of Theorem \ref{thm minimal speed}] We have shown that $\kappa\geq\kappa_\ast$ in Proposition \ref{prop blowing down limit 2}. It remains to characterize the $\kappa=\kappa_\ast$ case. 
From Proposition \ref{prop blowing down limit 2} it follows that, if $\kappa=\kappa_\ast$, we must have $\nabla h_\infty=0$ a.e. in $\R^{n-1}$. Since $h_\infty(0)=0$, we get \begin{equation}\label{flat limit} h_\infty\equiv 0 \quad \mbox{in } ~~ \R^{n-1}. \end{equation} This holds for any blowing down limit $h_\infty$, so the blowing down limit is unique. Substituting \eqref{flat limit} into \eqref{explicit formulation 6.1} and \eqref{explicit formulation 6.2}, and noting that $\kappa=\kappa_\ast$ implies \[K_+=1+\frac{\kappa_\ast}{2\beta_+},\quad K_-=1-\frac{\kappa_\ast}{2\beta_-},\] we deduce that \begin{equation}\label{1d limit} \Psi_\infty(x)\equiv x_n \quad \mbox{in }~~ \R^n. \end{equation} {\bf Claim.} For any $\varepsilon>0$, there exists an $L(\varepsilon)>0$ such that \[|\nabla^\prime\Psi|\leq \varepsilon \partial_n\Psi \quad \mbox{in }~~ \{|\Psi|\geq L(\varepsilon)\}.\] By this claim, similar to the proof of Theorem \ref{main result 2} (or as in the proof of the Gibbons conjecture in \cite{Farina1999symmetry,Hamel2000Gibbons}), applying the sliding method we deduce that \[|\nabla^\prime\Psi|\leq \varepsilon \partial_n\Psi \quad \mbox{in }~~ \R^n.\] Letting $\varepsilon\to0$, we deduce that $\nabla^\prime\Psi\equiv 0$, or equivalently, $v$ is a function of $x_n$ only. Because we have \eqref{assumption} and $\sup_{\R^n}v=1$, by the uniqueness of $g$, we find a constant $t\in\R$ such that \[v(x)\equiv g(x_n+t) \quad \mbox{in }~~ \R^n.\] {\bf Proof of the claim.} Assume to the contrary that there exists a sequence of points $x_i$ with \[\varepsilon_i^{-1}:=|\Psi(x_i)|\to +\infty,\] but \begin{equation}\label{absurd assumption 4} |\nabla^\prime\Psi(x_i)|\geq \varepsilon \partial_n\Psi(x_i). \end{equation} Let $\Psi_i(x):=\varepsilon_i\Psi(\varepsilon_i^{-1}x)$, so that $\nabla\Psi_i(\varepsilon_i x_i)=\nabla\Psi(x_i)$ and $|\Psi_i(\varepsilon_i x_i)|=1$. After a translation in the $x^\prime$-variables (under which the equation is invariant), we may assume the points $\varepsilon_i x_i$ converge. By \eqref{1d limit}, $\Psi_i\to\Psi_\infty\equiv x_n$, whose only sup- (or sub-)differential at every point is $\{e_n\}$; hence \eqref{convergence of gradients} (or its analogue for the negative part) forces $\nabla^\prime\Psi(x_i)\to0$ and $\partial_n\Psi(x_i)\to1$, contradicting \eqref{absurd assumption 4}. 
\end{proof} {\bf Acknowledgement.} The author's research was supported by the National Natural Science Foundation of China, No.~11871381 and No.~11631011. He would like to thank Professor Yihong Du for pointing out an error in the draft version, and Professor Rui Huang for a discussion several years ago.
https://arxiv.org/abs/1410.6236
Local And Global Colorability of Graphs
It is shown that for any fixed $c \geq 3$ and $r$, the maximum possible chromatic number of a graph on $n$ vertices in which every subgraph of radius at most $r$ is $c$ colorable is $\tilde{\Theta}\left(n ^ {\frac{1}{r+1}} \right)$ (that is, $n^\frac{1}{r+1}$ up to a factor poly-logarithmic in $n$). The proof is based on a careful analysis of the local and global colorability of random graphs and implies, in particular, that a random $n$-vertex graph with the right edge probability has typically a chromatic number as above and yet most balls of radius $r$ in it are $2$-degenerate.
\section{Introduction} \subsection{Notation and Definitions} For a simple undirected graph $G=(V,E)$ denote by $d(u,v)$ the \emph{distance} between the vertices $u,v\in V$. The \emph{degree} of a vertex $v \in V$, denoted by $\deg(v)$, is the number of its neighbours in $G$. A subset $V' \subseteq V$ is \emph{independent} if no edge of $G$ has both of its endpoints in $V'$. The \emph{chromatic number} of $G$, denoted by $\chi(G)$, is the minimal number of independent subsets of $V$ whose union covers $V$. A graph is \emph{$k$-degenerate} if the minimum degree of every subgraph of it is at most $k$. In particular, a $k$-degenerate graph is $(k+1)$-colorable. We will work with random graphs $G_{n,p}$ in the Erd\H{o}s-R\'enyi model, in which there are $n$ labelled vertices and each edge is included in the graph with probability $p$, independently of all other edges. We say that a property of $G$ holds \emph{with high probability (w.h.p.\@)} if this property holds with probability that tends to 1 as $n$ tends to $\infty$. In this paper we are only interested in graphs with large chromatic number $\ell$. It is therefore equivalent to say that a property holds w.h.p.\@ if its probability tends to 1 as $\ell$ tends to $\infty$. Consider the following definition of $r$-local colorability: \begin{definition} \label{def:r_ball} Let $r$ be a positive integer. Let ${U_r}(v,G)$ be the ball of radius $r$ around $v\in V$ in $G$ (i.e.\@ the subgraph induced on all vertices of $V$ whose distance from $v$ is $\leq r$). Let \begin{equation} \ell{\chi_r}(G)=\max_{v\in V}\chi({U_r}(v,G)) \end{equation} denote the \emph{$r$-local chromatic number} of $G$. \end{definition} We also say that $U_r(v,G)$ is the $r$-ball around $v$ in $G$. Finally, we define the main quantity discussed in this paper. 
\begin{definition} \label{def:f_c(n,r)} For $\ell \geq c \geq 2$ and $r > 0$ let $f_c(\ell,r)$ be the greatest integer $n$ such that every graph on $n$ vertices whose $r$-local chromatic number is $\leq c$ is $\ell$-colorable. \end{definition} In other words, $f_c(\ell,r) + 1$ is the minimal number of vertices in a non-$\ell$-colorable graph in which every $r$-ball is $c$-colorable. Note that $f_{c_1}(\ell,r) \leq f_{c_2}(\ell,r)$ for $c_1 \geq c_2$. Definitions \ref{def:r_ball} and \ref{def:f_c(n,r)} appear explicitly in the paper of Bogdanov \cite{bogdanov14}, but the quantity $f_c(\ell,r)$ itself had been investigated well before (see Sections \ref{sec:bg}, \ref{sec:conc_remarks} for more details). The main goal of this paper is to estimate $f_c(\ell, r)$ for fixed $c,r$ as $\ell$ tends to $\infty$. The main result is an upper bound on $f_c(\ell, r)$, tight up to a polylogarithmic factor, for all fixed $c \geq 3$ and $r$. \subsection{Background and our contribution} \label{sec:bg} Fix an $r > 0$. Somewhat surprisingly, the gap between $f_2(\ell, r)$ and $f_3(\ell, r)$ might be much bigger than the gap between $f_3(\ell, r)$ and $f_c(\ell, r)$ for any other fixed $c \geq 3$. Here is a short background on previous results regarding $f_c(\ell, r)$ for fixed $c$ and $r$ and large $\ell$, and our contributions to these problems. \subsubsection*{Known upper bounds for $f_c(\ell, r)$ with fixed $c,r$, large $\ell$} Erd\H{o}s \cite{erdos59} showed that for sufficiently large $m$ there exists a graph $G$ with $m^{1+1/2k}$ vertices that contains neither a cycle of length $\leq k$ nor an independent set of size $m$. As an easy consequence, $G$ is not $m^{1/2k}$-colorable. Put $k=2r+1, \ell=m^{1/2k}$ and note that $G$ has $n=m^{1+1/2k} = \ell^{2k+1} = \ell^{4r+3}$ vertices and $\ell\chi_r(G) \leq 2$ but is not $\ell$-colorable. Hence $$ f_2(\ell, r) < \ell^{4r+3}. $$ A better estimate follows from the results of Krivelevich in \cite{Kriv95}. 
Indeed, Theorem 1 in his paper implies that there exists an absolute positive constant $c$ so that \begin{equation} \label{e91} f_2(\ell, r) < (c \ell \log {\ell})^{2r} \end{equation} An upper bound for $f_3(\ell, r)$ can be derived from another result by Erd\H{o}s \cite{erdos62}. Erd\H{o}s worked with random graphs in the $G_{n,m}$ model, in which we consider random graphs with $n$ vertices and exactly $m$ edges. He showed that, with probability $> 0.8$, for sufficiently large $k \leq O(n^{1/3})$ the graph $G_{n,kn}$ is not $\frac{k}{\log{k}}$-colorable but every subgraph spanned by ${O}(nk^{-3})$ vertices is $3$-colorable. It is easy to show that with high probability every $r$-ball in $G_{n,kn}$ has $O(k)^{r}$ vertices (later we prove and apply a similar result for graphs in the $G_{n,p}$ model). Combining the above results and taking $k = 2\ell \log{\ell}$, $n = O(k)^{r+3} = O(\ell \log{\ell})^{r+3}$, it follows that with positive probability the graph $G_{n,kn}$ is not $\ell$-colorable but every $r$-ball (and in fact every subgraph on $O(nk^{-3}) = O(k)^{r}$ vertices) is $3$-colorable. Hence there exists $\beta > 0$ such that: \begin{equation} f_c(\ell, r) \leq f_3(\ell, r) \leq (\beta \ell \log{\ell})^{r+3} \end{equation} for large $\ell$, fixed $r \geq 3$ and fixed $c \geq 3$. \subsubsection*{Known lower bounds for $f_c(\ell, r)$ with fixed $c,r$, large $\ell$} Bogdanov \cite{bogdanov14} showed that for all $r>0$ and $\ell \geq c \geq 2$: \begin{equation} \label{eq:bogd} f_c(\ell,r) \geq \frac{(\ell/c+r/2)(\ell/c+r/2+1) \ldots(\ell/c+3r/2)}{(r+1)^{r+1}} \geq \left( \frac{\ell / c + r / 2} {r+1} \right)^{r+1} \end{equation} When $c$ and $r$ are fixed, \eqref{eq:bogd} implies that $f_c(\ell,r) = \Omega(\ell^{r+1})$.
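As a quick numeric sanity check of \eqref{eq:bogd} (illustrative only, not part of any proof; the function names are ours), one can compare the middle and right-hand expressions of the bound for a few small parameter values:

```python
from math import prod

def bogdanov_product(l, c, r):
    # Middle expression of the bound:
    # prod_{t=0}^{r} (l/c + r/2 + t) / (r+1)^(r+1)
    return prod(l / c + r / 2 + t for t in range(r + 1)) / (r + 1) ** (r + 1)

def bogdanov_power(l, c, r):
    # Right-hand side: ((l/c + r/2) / (r+1))^(r+1)
    return ((l / c + r / 2) / (r + 1)) ** (r + 1)

# Each of the r+1 factors in the product is at least l/c + r/2,
# so the middle expression dominates the right-hand side.
for l, c, r in [(10, 2, 1), (100, 3, 2), (1000, 4, 3)]:
    assert bogdanov_product(l, c, r) >= bogdanov_power(l, c, r)
```

The second inequality of \eqref{eq:bogd} holds simply because each of the $r+1$ factors in the product is at least $\ell/c + r/2$.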
\subsubsection*{A special case - $f_c(\ell, 1)$ for fixed $c$, large $\ell$} It is not difficult to prove that $f_2(\ell,1)=\Theta(\ell^2\log{\ell})$, using the known fact that the Ramsey number $R(t,3)$ is $\Theta(t^2 / \log{t})$ (see \cite{ajtai80}, \cite{kim95}). In Section \ref{sec:r_is_1} we extend this result to every fixed $c \geq 2$, showing that $f_c(\ell, 1) = \Theta(\ell^2 \log{\ell})$ for any fixed $c \geq 2$. \subsubsection*{The main contribution} The main result in this paper is an improved upper bound for $f_3(\ell, r)$. We show that for fixed $r > 0$: \begin{equation} \label{eq:main_contrib} f_3(\ell, r) \leq \left( 10 \ell \log{\ell} \right) ^ {r+1} \end{equation} Fix $r$ and $c \geq 3$. By the result above (together with \eqref{eq:bogd}) it follows that there exists a constant $\delta = \delta(r,c)$ such that \begin{equation} (\delta \ell)^{r+1} \leq f_c(\ell, r) \leq f_3(\ell, r) \leq (10 \ell \log{\ell})^{r+1} \end{equation} The last result determines, up to a logarithmic factor, the maximum possible chromatic number $M_{c,r}(n)$ of a graph on $n$ vertices in which every $r$-ball is $c$-colorable: \begin{equation} a \frac{n^{\frac{1}{r+1}}}{\log{n}} \leq M_{c,r}(n) \leq b_{c,r} n^\frac{1}{r+1} \end{equation} for suitable positive constants $a$, $b_{c,r}$. \\ Note that for $c=2$ the best known estimates are weaker; namely, it is only known that $$ \Omega\left(\frac{n^{1/(2r)}}{\log n}\right) \leq M_{2,r}(n) \leq O(n^{1/(r+1)}). $$ \subsection{Paper Structure} The rest of the paper is organized as follows: \begin{itemize} \item In Section \ref{sec:grad_reveal} we present the basic approach of gradually revealing information on a random graph. Two examples of this are given. Both will be useful in subsequent sections. \item In Section \ref{sec:f_5} we give an upper bound for $f_5(\ell,r)$ for fixed $r$ and large $\ell$ using the random graph $G_{n,p}$ with $n = (10 \ell \log{\ell})^{r+1}$ and $p = \frac{3}{10} (10 \ell \log{\ell})^{-r}$.
It is shown that with high probability, all $r$-balls in the graph are $4$-degenerate. \item In Section \ref{sec:f_4}, the same upper bound is obtained for $f_4(\ell, r)$. It is shown that most $r$-balls in the above graph are $4$-colorable. Deleting the center of every non-$4$-colorable $r$-ball results in a graph with $r$-local chromatic number $ \leq 4$ and chromatic number $> \ell$ with positive probability. \item Section \ref{sec:f_3} includes the proof of the main result of the paper. It is shown that typically most $r$-balls in the above graph are $2$-degenerate. This proof is much harder than the previous one. Again we delete the center of every non-$2$-degenerate $r$-ball to obtain a graph with $r$-local chromatic number at most $3$ and chromatic number $> \ell$ with positive probability. Note that the result in this section is stronger than those in the previous two sections. Still, we prefer to include all three as each of the results has its merits: indeed, to get local $5$-colorability it suffices to consider random graphs with no changes. Getting local $4$-colorability requires some modifications in the random graph, but the proof is very short. Getting local $3$-colorability is significantly more complicated, and is proved by a delicate exposure of the information about the edges of the random graph considered. \item In Section \ref{sec:c_non_const} we extend the result from Section \ref{sec:f_3} to large values of $c$. \item In Section \ref{sec:r_is_1} it is shown that $f_c(\ell, 1) = \Theta(\ell^2 \log{\ell})$ for any fixed $c \geq 2$. \item The final Section \ref{sec:conc_remarks} contains some concluding remarks, including a discussion of what can be proved about the behaviour of $f_c(\ell,r)$ for non-constant values of $r$. \end{itemize} \section{Gradually Revealing the Random Graph} \label{sec:grad_reveal} In random graphs of the $G_{n,p}$ model the edges can be examined (that is, accepted to the graph or rejected from it) in any order.
This fact can be used to reveal some of the information regarding the graph, while preserving the randomness of other information. Two examples of this basic approach are shown below, both will be used later in this paper. \subsection{Spanning tree with root} \label{subsec:spantree} Let $r > 0$. This model first determines the vertices of $U_r(v,G)$ while also revealing a spanning tree for this subgraph, and only then continues to reveal all other edges of the graph. Choose a root vertex $v$. Let $L_i = L_i(v,G)$ denote \emph{the $i$-th level} with respect to $v$ in $G$ - that is, the set of all vertices of distance $i$ from $v$. Trivially, $L_0(v,G) = \{ v \}$. Also define $L_{\leq i} = L_{\leq i}(v,G) = \bigcup_{j=0}^{i} L_j(v,G)$. Assuming $L_i$ is already known and $T$ is constructed up to the $i$-th level, reveal $L_{i+1}$ and expand $T$ as follows: for every $u \in V$ not in the tree, examine the possible edges from $u$ to $L_i$ one by one. Stop either when an examined edge from $u$ to $L_i$ is accepted to the graph (in this case, $u \in L_{i+1}$ and the accepted edge is added to the tree) or when all possible edges from $u$ to $L_i$ are rejected (here $u \notin L_{i+1}$). An easy induction shows that the newly added vertices are exactly all vertices of $L_{i+1}$. Stop this process after $L_r$ is revealed. The remaining unexamined edges can later be examined in any order. Let $T = T(v)$ be the spanning tree of $U_r(v,G)$ and let $R = R(v) = U_r(v,G) \setminus T(v)$ (i.e. R is the subgraph of $U_r(v,G)$ whose edges are those of $U_r(v,G)$ not in $T(v)$). Note that $R$ only consists of unexamined (at this point) edges and rejected edges. This model with $R$ and $T$ defined as above will be used in Sections \ref{sec:f_5} and \ref{sec:f_4}. \subsection{Reveal vertices, then connect them} \label{subsec:reveal} Let $r > 0$ and $v \in V$. 
This model consists of two phases: the creation phase determines the vertices of $U_r(v,G)$ while the connection phase gradually reveals all edges of $U_r(v,G)$, separating them into a spanning tree $T$ and a subgraph $R$ containing all other edges. \begin{description} \label{subsec:gradual} \item {\bf{Creation phase} } This phase constructs $L_{i+1}$ given $L_i$ (starting at $i=0$ and ending at $i=r-1$) in the following manner: for every $u \notin L_{\leq i}$, flip a coin with probability $p$ a total of $\vert L_{i} \vert$ times or until the first "yes" answer, whichever comes first. In case of "yes" add $u$ to $L_{i+1}$. \item {\bf{Connection phase} } Connect $L_i$ to $L_{i-1}$, starting at $i=r$ and ending at $i=1$. The connection of $L_i$ to $L_{i-1}$ consists of three steps: \begin{description} \item {\bf{Inner step} } Connect every pair of vertices in $L_i$ randomly and independently with probability $p$. \item {\bf{Counting step} } For every $u \in L_i$, let $k_u \leq \vert L_{i-1} \vert$ be the number of coin flips taken in the creation phase until the first "yes" that determined $u \in L_i$. Flip the coin $\vert L_{i-1} \vert - k_u$ more times. Let $t_u \geq 0$ be the number of additional "yes" answers obtained. \item {\bf{Linkage step}} For every $u \in L_i$, reveal the neighbours of $u$ in $L_{i-1}$: choose a vertex in $L_{i-1}$ randomly. Connect it to $u$ and add this edge to $T$. Now choose (randomly and independently) $t_u$ more vertices from $L_{i-1}$, connect each of them to $u$ and add the resulting edges to $R$. \end{description} \end{description} All other possible edges can be later examined in an arbitrary order. This model will be used in Section \ref{sec:f_3}. \section{$4$-Degeneracy and Upper Bound For $f_5(\ell,r)$} \label{sec:f_5} \begin{thm} \label{thm:main_result_f_5} Let $r>0$.
There exists $\ell_0 = \ell_0(r)$ such that for every $\ell > \ell_0$: \begin{equation} f_5\left(\ell,r\right)<\left(10\ell\log \ell\right)^{r+1} \end{equation} \end{thm} \begin{proof} Define $d(\ell) := 3\ell\log{\ell}$. Our choice of a random graph for the proof is based on the following proposition. \begin{prop} \label{prop:not_l_colorable} Any random graph $G_{n,p}$ with $np = d(\ell)$ satisfies w.h.p.\@ \begin{equation} \chi\left(G\right)>\ell \end{equation} \end{prop} \begin{proof} By a standard first moment argument (see \cite{BollobasErdos1975}), w.h.p.\@ there is no independent set of size $(1+o(1))\frac{2\log{np}}{p} = (1+o(1))\frac{2\log{\ell}}{p}$ in $G$. Consequently, \begin{equation} \chi(G) \geq (1-o(1))\frac{n}{\frac{2 \log{\ell}}{p}} = (1-o(1)) \frac{d}{2\log{\ell}} = (1-o(1)) \frac{3 \ell \log{\ell}}{2\log{\ell}} > \ell \end{equation} for $\ell$ large enough. \end{proof} Take the random graph $G = (V, E) = G_{n, p}$ with $n = (10\ell \log{\ell})^{r+1}$ and $p = \frac{3}{10}(10\ell \log{\ell})^{-r}$. $G$ is not $\ell$-colorable with high probability since $np=d(\ell)$. We will show that w.h.p.\@ every $r$-ball in $G$ is $4$-degenerate (and hence $5$-colorable). \begin{lem} \label{lem:max_deg} Fix $r > 0$ and let $\epsilon > 0$ be an arbitrary constant. The maximum degree of a vertex in the random graph $G_{n,p}$ with $n = (10 \ell \log{\ell})^{r+1}$ and $p = \frac{3}{10} (10 \ell \log{\ell})^{-r}$ is w.h.p.\@ no more than $(1+\epsilon)d$. \end{lem} \begin{proof} Let $v \in V$. We have $deg(v) \sim Bin(n-1, p)$ and $\mu = E[deg(v)] = d - p$. 
We use the following known Chernoff bound (see A.1.12 in \cite{AlonSpencer08}): For a binomial random variable $X$ with expectation $\mu$, and for all $\epsilon >0$ (including $\epsilon >1$): \begin{equation} \label{eq:cher_bound1} \Pr(X > (1+\epsilon)\mu) < \left( \frac{e^{\epsilon}}{(1+\epsilon)^{(1+\epsilon)}} \right)^{\mu} \end{equation} Noting that $(1+\epsilon)d > (1+\epsilon)\mu$, this bound in our case implies \begin{align} \Pr \left[ \deg(v) \geq (1+\epsilon) d \right] < \left[ \frac{e^\epsilon}{(1+\epsilon)^{(1+\epsilon)}} \right]^{\mu} = {\gamma_\epsilon}^{d-p} \end{align} where $\gamma_\epsilon = {e^\epsilon}{(1+\epsilon)^{-(1+\epsilon)}} < 1$ is a positive constant. Therefore, the probability that there exists a vertex with degree $\geq (1+\epsilon)d$ is no more than \begin{align} \begin{split} n {\gamma_\epsilon}^{d-p} =& e ^ {\log{n} + (d-p) \log{\gamma_\epsilon}} = e ^ {(1+o(1))(r+1) \log{\ell} - (1+o(1)) \log(1/{\gamma_\epsilon}) \cdot 3 \ell \log{\ell}} \\ \leq& \ell ^ {(2+o(1))r - (3+o(1))\log(1/{\gamma_\epsilon}) \ell} \xrightarrow{\ell \rightarrow \infty} 0 \end{split} \end{align} Hence with high probability the maximum degree is $< (1+\epsilon) d$. \end{proof} \begin{lem} \label{lem:dist_r} Fix $r$ and let $\epsilon > 0$. Then with high probability all $r$-balls in $G_{n,p}$ (with $n,p$ as before) contain at most $(1+\epsilon)^r d^r$ vertices. \end{lem} \begin{proof} The max degree in the graph is w.h.p.\@ $<(1+\epsilon)d$. In this case, an easy induction shows that every $i$-ball in the graph has at most $(1+\epsilon)^i d^i$ vertices. Setting $i=r$ gives the desired result. \end{proof} We are now ready to prove the main result of this section. \begin{thm} \label{thm:4_degenerate} Fix $r$ and let $n = (10 \ell \log{\ell})^{r+1}$, $p = \frac{3}{10} (10 \ell \log{\ell})^{-r}$. Then with high probability, every $r$-ball in $G_{n,p}$ is $4$-degenerate.
\end{thm} To prove this, note that the probability that not every $r$-ball is $4$-degenerate is no more than \begin{align} \begin{split} \nonumber \Pr \left[ \exists v : U_r(v,G) \mbox{ not $4$-degenerate and }\forall u \in V : \deg(u) < (1+\epsilon)d \right] & + \\ + \Pr \big[ \exists u \in V : \deg(u) \geq (1+\epsilon)d \big] & \leq \\ \Pr \left[ \exists v : U_r(v,G) \mbox{ not $4$-degenerate} \bigg\vert \forall u \in V : \deg(u) < (1+\epsilon)d \right] + o(1) &\leq \\ n \Pr \left[ U_r(v_0,G) \mbox{ not $4$-degenerate} \bigg\vert \forall u \in V : \deg(u) < (1+\epsilon)d \right] + o(1) & \end{split} \end{align} where $v_0 \in V$ is an arbitrary vertex. It is therefore enough to show that for fixed $r>0, v \in V$ and suitable $\epsilon > 0$: \begin{equation} \label{eq:enough_to_show} \lim_{\ell \rightarrow \infty}n \Pr \left[ U_r(v,G) \mbox{ not $4$-degenerate} \bigg\vert \forall u \in V : \deg(u) < (1+\epsilon)d \right] = 0 \end{equation} For the rest of the proof, assume that the maximum degree of $G$ is less than $(1+\epsilon)d$. Fix $v \in V$. A non-$4$-degenerate $r$-ball contains a subgraph with average degree at least $5$, hence it is enough to show that with probability high enough, every subgraph $S = (V_S, E_S) \subseteq U_r(v,G)$ satisfies $|E_S| < 5|V_S| / 2$. Construct a spanning tree $T$ with root $v \in V$ for $U_r(v,G)$ in the spanning tree model described in Subsection \ref{subsec:spantree}. Let $S = (V_S, E_S) \subseteq U_r(v,G)$ be an induced subgraph and put $s = |V_S|$. Assume that $s \geq 6$ (as every subgraph on $< 6$ vertices has minimum degree $\leq 4$). The possible edges of $S$ are either in $T$ or rejected from the graph or not examined yet. $S \cap T$ is a forest and contains at most $s-1$ edges. $S \setminus T$ contains at most $\binom{s}{2}$ unexamined possible edges (all other edges are rejected). The probability that an unexamined edge is accepted to the graph is no more than $p$.
Note that here we ignore the conditioning on the maximum degree. By the FKG Inequality (cf., e.g., \cite{AlonSpencer08}, Chapter 6) this conditioning can only reduce the probability that we are bounding. Let $X$ be the random variable that counts the number of edges in $S \setminus T$. Then $X$ is dominated by $Bin \left(\binom{s}{2}, p \right)$. That is, for a random variable $Y \sim Bin \left(\binom{s}{2}, p \right)$ we have $\Pr(X > k) \leq \Pr(Y > k)$ for every $k$. Hence \begin{equation} \Pr\left( |E_S| \geq \frac{5s}{2}\right) \leq \Pr\left(X > \frac{3s}{2}\right) \leq \Pr\left(Y > \frac{3s}{2}\right) \end{equation} The expectation of $Y$ is $\mu = \binom{s}{2}p$. An easy consequence of the Chernoff bound in \eqref{eq:cher_bound1} implies that \begin{equation} \Pr\left(Y > (1+\tau)\mu \right) < \left( \frac{e}{1+\tau} \right)^{(1+\tau)\mu} \end{equation} Putting $1+\tau=\frac{3}{p(s-1)}$ we get \begin{equation} \Pr\left(Y > \frac{3s}{2}\right) < \left( \frac {e p (s-1)} {3} \right)^{3s / 2} < (ps)^{3s/2} \end{equation} Pick $\epsilon = \frac{1}{9}$.
The number of induced subgraphs $S \subseteq U_{r}\left(v,G\right)$ on $s$ vertices is \begin{equation} \binom{\vert{U_r}(v,G)\vert}{s} \leq \binom{(1+\epsilon)^r d^r}{s} \leq e^s [(1+\epsilon)d]^{rs} s^{-s} = e^s \left[\frac{10d}{9} \right]^{rs} s^{-s} \end{equation} The probability that $U_r(v,G)$ is not $4$-degenerate is therefore no more than \begin{align} \sum_{s=6}^{[(1+\epsilon)d]^r} { e^s \left[\frac{10d}{9} \right]^{rs} s^{-s} (ps)^{3s/2} } = \label{eq:after_setting_values} \sum_{s=6}^{[(1+\epsilon)d]^r} \left[ ep \left(\frac{10}{9}d \right)^r \right]^s \left[ ps \right]^{s/2} \end{align} but $ep(\frac{10d}{9})^r = \frac{3e}{10} (\frac{10d}{3})^{-r} (\frac{10d}{9})^r = \frac{3e}{10} 3^{-r} < 1/3$, and the last expression is \begin{align} & \leq \sum_{s=6}^{[(1+\epsilon)d]^r} 3^{-s} \left( ps \right)^{s/2} \leq \sum_{s=6}^{d^{1/10}} \left( ps \right)^{s/2} + \sum_{s=d^{1/10}+1}^{\infty} 3^{-s} \\ & \leq d^{1/10} (pd^{1/10})^3 + 3^{-d^{1/10}} \leq d^{-26r/10} + 3^{-d^{1/10}} \end{align} Since $n = (10 \ell \log{\ell})^{r+1} \leq (4d)^{r+1} \leq (4d)^{2r}$, we conclude that \begin{align} n \Pr \left[ U_r(v,G) \mbox{ not $4$-degenerate} \bigg\vert \forall u \in V : \deg(u) < (1+\epsilon)d \right] \leq& \\ \left[d^{-26r/10} + 3^{-d^{1/10}}\right] (4d)^{2r} \leq O(1) \left[d^{-r/2} + e^{-d^{1/10} + 2r \log{d}}\right] \xrightarrow{\ell \rightarrow \infty} 0 & \end{align} This proves \eqref{eq:enough_to_show} and completes the proof of the Theorem. Theorem \ref{thm:main_result_f_5} follows from Proposition \ref{prop:not_l_colorable} and the last theorem. \end{proof} \section{Upper Bound For $f_4(\ell,r)$} \label{sec:f_4} \begin{thm} \label{thm:bound_on_f_4} Let $r > 0$.
There exists $\ell_0(r)$ such that for every $\ell > \ell_0$: \begin{equation} f_4(\ell,r) < (10\ell \log{\ell}) ^ {r+1} \end{equation} \end{thm} \begin{proof} Once again we take the random graph $G_{n,p}$ with $n = (10\ell \log{\ell}) ^ {r+1}, p = \frac{3}{10}(10\ell \log{\ell}) ^ {-r}$ and assume that the maximum degree in $G$ is less than $(1+\epsilon)d = 10d/9$ (taking $\epsilon = 1/9$). Let $v \in V$ and construct a spanning tree $T(v)$ for $U_r(v,G)$ as in Subsection \ref{subsec:spantree}. Let $R(v) = U_r(v,G) \setminus T(v)$ be the subgraph of all other edges of $U_r(v,G)$. At this point, the possible edges of $R$ are either rejected or unexamined. Suppose that $R$ is $2$-colorable. $T$ is a tree and is thus $2$-colorable. The Cartesian product of a $2$-coloring of $T$ and a $2$-coloring of $R$ is a valid $4$-coloring of $U_r(v,G) = T \cup R$. To make $R$ $2$-colorable, it is enough to get rid of all cycles of odd length in it. This can be done by deleting a vertex (or an edge) from each such cycle. The expected number of cycles of length $k$ in $R(v)$ is no more than $$ \binom{(1+\epsilon)^r d^r }{k} \frac{(k-1)!}{2} p^k \leq \frac{(1 + \epsilon)^{rk}d^{rk} p^k}{2k} $$ \begin{equation} \label{eq:num_cycles_length_k} \leq \frac{1}{2k} \left( \frac{10d}{9} \right)^{rk} \left( \frac{10d}{3} \right)^{-rk} \leq \frac{1}{2k} 3^{-k} \end{equation} Consequently, the expected number of odd cycles in $R(v)$ is bounded by \begin{equation} \label{eq:odd_length_cycles} \sum_{i = 1}^{\infty} \frac{1}{2(2i+1)} 3^{-2i-1} < \frac{1}{100} \end{equation} So the probability that $R(v)$ is not $2$-colorable is less than $1/100$. Let $G'$ be a graph obtained from $G$ by removing every $v$ for which $R(v)$ contains an odd cycle (that is, the center of each $r$-ball for which $R$ is not $2$-colorable). Observe that $\ell\chi_r(G') \leq 4$.
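As a quick numeric sanity check (illustrative only, outside the proof), the series in \eqref{eq:odd_length_cycles} can be summed directly:

```python
# Partial sum of sum_{i >= 1} 3^(-(2i+1)) / (2 * (2i + 1)).
# The tail beyond i = 40 is smaller than 3^(-80), so 40 terms
# determine the sum to far more than the needed precision.
total = sum(3 ** -(2 * i + 1) / (2 * (2 * i + 1)) for i in range(1, 41))
assert total < 1 / 100
```

The sum evaluates to roughly $0.0066$, comfortably below $1/100$.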
By \eqref{eq:odd_length_cycles}, the expected number of vertices that need to be removed to obtain $G'$ is less than $\frac{n}{100}$. By Markov's inequality, with probability at least $1/2$ the number of vertices to be removed is less than $\frac{n}{50}$ (note that this computation is without the conditioning on the maximum degree, but by the FKG inequality the same estimate holds also after this conditioning). On the other hand, w.h.p.\@ there is no independent set of size $(1+o(1)) \frac{2\log(d)}{p}$ in $G$ (as was discussed in the proof of Proposition \ref{prop:not_l_colorable}). Consequently there is no independent set of such size in $G'$. We conclude that with probability $\geq \frac{1}{2} - o(1)$, the chromatic number of $G'$ is at least \begin{equation} \frac{n - \frac{n}{50}}{(1+o(1)) \frac{2\log(d)}{p}} = (1-o(1)) \frac{49d}{100\log{d}} = (1-o(1)) \frac{49 \cdot 3\ell \log{\ell}}{100 \log{\ell}} > \ell \end{equation} for $\ell$ large enough. Recall that these estimates are only true assuming the maximum degree is $< (1+\epsilon)d$, but this property holds with high probability. Thus, the process described above generates with probability $\frac{1}{2} - o(1)$ a graph $G'$ on at most $(10\ell \log{\ell})^{r+1}$ vertices which is not $\ell$-colorable, but with $r$-local chromatic number $\leq 4$. This completes the proof. \end{proof} \section{$2$-Degeneracy And Upper bound For $f_3(\ell,r)$} \label{sec:f_3} The main result proved in this section is \begin{thm} \label{thm:bound_on_f_3} Let $r > 0$. There exists $\ell_0(r)$ such that for every $\ell > \ell_0$: \begin{equation} f_3(\ell,r) < (10\ell \log{\ell}) ^ {r+1} \end{equation} \end{thm} To prove this, we show the following. \begin{thm} \label{thm:prob_2_deg} Let $r > 0$, $v \in V$ where $G = G_{n,p} = (V,E)$, $n = (10 \ell \log{\ell})^{r+1}$, $p = \frac{3}{10} (10 \ell \log{\ell})^{-r}$. Then $U_r(v,G)$ is $2$-degenerate with probability at least $0.99 - o(1)$.
\end{thm} The rest of this section is organized as follows. First it is shown that Theorem \ref{thm:bound_on_f_3} follows easily from Theorem \ref{thm:prob_2_deg}. To prove Theorem \ref{thm:prob_2_deg}, we consider an algorithm that checks if $U_r(v,G)$ is $2$-degenerate while revealing it as in Subsection \ref{subsec:reveal}. The algorithm is shown to be valid (that is, a "yes" answer implies that $U_r(v,G)$ is indeed $2$-degenerate). The last part of this section shows that a "yes" answer is returned with probability $> 0.99 - o(1)$. To see why Theorem \ref{thm:bound_on_f_3} follows from Theorem \ref{thm:prob_2_deg}, note that the expected number of non-$2$-degenerate $r$-balls in $G_{n,p}$ is no more than $(\frac{1}{100} + o(1))n$. Taking $G = G_{n,p}$ and deleting the centers of all non-$2$-degenerate $r$-balls generates a graph $G'$ with $\ell\chi_r(G') \leq 3$. Markov's inequality implies, as in Section \ref{sec:f_4}, that with probability at least $1/2-o(1)$ we do not delete more than $\frac{2}{100} n$ centers, thus $\chi(G') > \ell$ holds with probability $> \frac{1}{2} - o(1)$. This completes the proof of Theorem \ref{thm:bound_on_f_3}. The rest of this section is dedicated to proving Theorem \ref{thm:prob_2_deg}. Let $v \in V$. For the (more complicated) analysis of this problem, we use the model of revealing $U_r(v,G)$ presented in Subsection \ref{subsec:reveal}. We start with some definitions. First, recall the definition of a level with respect to a vertex. \begin{definition} For a subgraph $F=(V_F, E_F) \subseteq U_r(v,G)$, let \[ L_i(v, F) = \{ u \in V_F : d(u,v) = i \} \] denote the \emph{$i$-th level} (with respect to $v$ in $F$). Moreover, define \[ L_{\geq i}(v,F) = \bigcup_{j=i}^{r} { L_j (v, F) } \mbox{ ; } L_{\leq i}(v,F) = \bigcup_{j=0}^{i} { L_j (v, F) } \] \end{definition} Note that the distance $d(u,v)$ here denotes distance in $G$, not in $F$. The notation $L_i$ (without specifying $v$ and $F$) refers to $L_i(v,G)$.
The same holds for $L_{\geq i} = L_{\geq i}(v,G)$ and $L_{\leq i} = L_{\leq i}(v,G)$. For convenience we will also sometimes use these notations to describe the induced subgraph of $F$ on the relevant set of vertices. The next definition presents a few special types of paths and cycles, to be used later when describing and analyzing the algorithm. \begin{definition} Let $F \subseteq U_r(v,G)$. \begin{itemize} \item An \emph{$i$-path} in $F$ is a simple path in $L_{\geq i}(v, F)$ whose endpoints belong to $L_i (v,F)$. \item An \emph{$i$-cycle} in $F$ is a simple cycle in $L_{\geq i}(v,F)$ with at least one vertex in $L_i (v,F)$. \item An \emph{$i$-horseshoe} in $F$ is a path of the form \[ u w_1 \ldots w_k z \] where $u,z \in L_{i-1}(v, F)$, $k \geq 1$, $u w_1, w_k z \in R$ and $w_1 \ldots w_k $ is an $i$-path in $F$. Specifically in the case $k=1$ we also require $u \neq z$. \item An \emph{$i$-sub-horseshoe} in $F$ is a path of the form \begin{equation} u' w_1 \ldots w_k z' \end{equation} where $u',z' \in L_{i-1}(v,F)$, $k \geq 1$, $u'w_1, w_k z' \in F$ and $w_1 \ldots w_k$ is included in the interior of some $i$-horseshoe. Specifically in the case $k=1$ we also require $u' \neq z'$. \end{itemize} \end{definition} Note that every $i$-horseshoe is also an $i$-sub-horseshoe, but the other direction is not true in general. Here the interior of a path denotes the induced subpath on all vertices except for the endpoints. \subsection{Algorithm for checking if $U_r(v,G)$ is $2$-degenerate} \label{subsec:algo_2_deg} Consider the following algorithm to check if $U_r(v,G)$ is $2$-degenerate. This algorithm always returns "no" if the ball is not $2$-degenerate, but is not guaranteed to return "yes" for a $2$-degenerate ball. We will show that the probability of a "yes" answer is high enough, implying that the $r$-ball is $2$-degenerate with high enough probability.
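For reference, $2$-degeneracy itself has a simple characterization by peeling: a graph is $2$-degenerate if and only if repeatedly deleting vertices of degree at most $2$ eventually removes all vertices. A minimal standalone sketch of this standard check (illustrative only; it is not the algorithm analyzed in this section, which must work while the edges are still being revealed):

```python
def is_k_degenerate(adj, k):
    # Peeling check: a graph is k-degenerate iff repeatedly deleting
    # vertices of degree <= k eventually removes every vertex.
    adj = {v: set(ns) for v, ns in adj.items()}
    queue = [v for v, ns in adj.items() if len(ns) <= k]
    removed = set()
    while queue:
        v = queue.pop()
        if v in removed:
            continue
        removed.add(v)
        for u in adj[v]:
            adj[u].discard(v)
            if len(adj[u]) <= k and u not in removed:
                queue.append(u)
        adj[v] = set()
    return len(removed) == len(adj)

# A 4-cycle is 2-degenerate; the complete graph K_4 is not.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
k4 = {i: {j for j in range(4) if j != i} for i in range(4)}
assert is_k_degenerate(c4, 2) and not is_k_degenerate(k4, 2)
```

The algorithm described next is a pessimistic variant of this peeling procedure, adapted to the gradual revealing of $U_r(v,G)$.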
Our algorithm (applied while revealing $U_r(v,G)$ as described in Subsection \ref{subsec:reveal}) maintains a subgraph $F$ which initially consists of all vertices of $U_r(v,G)$ where the edges are not yet revealed. It then gradually reveals information about the edges of $U_r(v,G)$ and adds these edges to $F$, while deleting vertices all of whose neighbours in $F$ have been revealed but whose degree is at most $2$. Some conditions might lead to a "no" answer returned by the algorithm, but if it succeeds in deleting all vertices of $F$, it returns "yes". It can be seen as a pessimistic version of the naive approach of trying to remove vertices of degree $\leq 2$ from the graph until all the vertices are removed (a "yes" answer) or until a subgraph with minimum degree $\geq 3$ is revealed (a "no" answer). Our algorithm is less accurate but easier to analyze than the naive approach. \subsubsection*{Algorithm \ref{subsec:algo_2_deg} - detailed description} \label{subsub:algorithm} \begin{enumerate} \item\label{enum:creation} Creation phase \begin{enumerate} \item Reveal the levels $L_i$ of $U_r(v,G)$. \begin{enumerate} \item\label{enum:creation_ret_no} If for some $1\leq i \leq r$ it holds that $|L_i| > (1+\epsilon) d |L_{i-1}|$ with $\epsilon = 1/9$, return "no". \item\label{enum:init_f} Initialize a subgraph $F$ with all vertices of $U_r(v,G)$ and no edges. \end{enumerate} \end{enumerate} \item\label{enum:conn} Connection phase: For every level $L_i$ from $i=r$ to $i=1$ do: \begin{enumerate} \item\label{enum:edges_i} Inner step: reveal all inner edges of $L_i$, i.e.\@ edges in $G$ of the form $\{ u,u' \}$ where $u \neq u' \in L_i$. Add them to $F$. \begin{enumerate} \item\label{enum:cycle1} At this point all edges of $L_{\geq i}(v,F)$ are revealed. If there exists an $i$-cycle in $F$, return "no". \end{enumerate} \item\label{enum:counting} Counting step: for every $u \in L_i$, determine how many neighbours it has in $L_{i-1}$.
\begin{enumerate} \item\label{enum:remove} At this point we know the degree (in $F$) of all vertices in $L_{\geq i}$. If there exists $u \in L_{\geq i}(v,F)$ with degree $\leq 2$ in $F$ - delete $u$. Repeat until all vertices of $L_{\geq i}(v,F)$ are of degree $> 2$ in $F$. \item\label{enum:horseshoes} The number of $i$-sub-horseshoes in $F$ is also known now. If this number is bigger than $b_i$ (to be determined later), return "no". Moreover, the structures of the $i$-(sub-)horseshoes are known aside from the identities of their endpoints in $L_{i-1}$. \end{enumerate} \item\label{enum:link} Linkage step: For every $u \in L_i$, reveal the neighbours of $u$ in $L_{i-1}$, adding one of the new edges to $T$ and the others to $R$. Add all new edges to $F$. \begin{enumerate} \item At this point, all the $i$-horseshoes and $i$-sub-horseshoes are revealed. \end{enumerate} \end{enumerate} \end{enumerate} Finally, if the connection phase ends without returning "no", the algorithm returns "yes". \begin{lem}[validity of the algorithm] \label{lem:validity} If algorithm \ref{subsub:algorithm} returns "yes", then $U_r(v,G)$ is $2$-degenerate. \end{lem} \begin{proof} Assume that the algorithm returned "yes". At the end of iteration $i=1$, $L_{\geq 1}(v,F)$ does not contain cycles - since a "no" has not been returned before then. Therefore, $L_{\geq 1}(v,F) = F \setminus \{ v \}$ is a forest and thus $1$-degenerate, implying that $F$ is $2$-degenerate at that point. Note that the algorithm does not need to inspect the edges between $v$ and $L_1$, since the $1$-degeneracy of $F \setminus \{ v \}$ suffices. Observe that if a vertex $u$ has degree $\leq 2$ in a graph $H$, then $H$ is $2$-degenerate if and only if $H \setminus \{ u \}$ is $2$-degenerate. Let $v_1, \ldots, v_m$ be the ordered sequence of vertices that were deleted from $F$ during the algorithm. Let $F_i = U_r(v,G) \setminus \{ v_1, \ldots, v_i \}$ for $i=0,\ldots,m$.
Clearly, $v_{i+1}$ is of degree at most $2$ in $F_i$ (since we only delete a vertex if it is of degree at most $2$ in $F$ at that point). The previous observation implies that $F_i$ is $2$-degenerate if and only if $F_{i+1}$ is $2$-degenerate. Moreover, the first argument states that $F_m$ is $2$-degenerate. Therefore, by induction $F_i$ is $2$-degenerate for every $i$. Noting that $F_0 =U_r(v,G)$ finishes the proof. \end{proof} \subsection{Analysis of the algorithm} We first present notation that is used throughout the analysis. Afterwards we characterize the set of vertices in $L_{\geq j}$ that survive iteration $i=j$. We use this characterization to give bounds (valid with high probability) on the number of $j$-sub-horseshoes revealed in a given iteration as well as the probability of revealing a $j$-cycle. This gives us the desired lower bound on the probability that the algorithm returns "yes", which implies (along with Lemma \ref{lem:validity}) that an $r$-ball in $G_{n,p}$ is $2$-degenerate with sufficiently high probability. \paragraph{Notation} The following quantities are of interest for analysing algorithm \ref{subsub:algorithm}: \begin{description} \item[$n_j$] number of vertices in $L_j(v,G)$. \item[$c_j$] number of $j$-cycles in $F$ at the end of the inner step \eqref{enum:edges_i} in iteration $i=j$ of the connection phase of algorithm \ref{subsub:algorithm}. \item[$h_j$] number of $j$-sub-horseshoes in $F$ at the end of the counting step \eqref{enum:counting} in iteration $i=j$ of the connection phase. \end{description} The next group of notations refers to the probability of getting a "no" answer at some point of the algorithm, assuming a "no" has not been returned before then. \begin{description} \item [$q^l$] probability that step \eqref{enum:creation_ret_no} reveals that $n_{j+1} > (1+\epsilon) d n_j$ for some $j$. \item[$q_j^c$] probability that $c_j > 0$ assuming the algorithm has not returned "no" before iteration $i=j$ of the connection phase.
\item[$q_j^h$] probability that $h_j > b_j$ ($b_j$ will be determined later) assuming the algorithm has not returned "no" before iteration $i=j$ of the connection phase. \end{description} Note that $h_1 = 0$ and these three conditions are the only ones that lead to a "no" answer, implying the following lemma. \begin{lem} \label{lem:prob_alg_no} The probability that algorithm \ref{subsec:algo_2_deg} returns "no" is no more than \begin{equation} q^l + \sum_{j=1}^{r} q_j^c + \sum_{j=2}^{r} q_j^h \end{equation} \end{lem} The proof of Theorem \ref{thm:prob_2_deg} follows from the next Theorem, along with Lemmas \ref{lem:validity} and \ref{lem:prob_alg_no}. \begin{thm} \label{thm:f_3_probs} The following holds with respect to algorithm \ref{subsec:algo_2_deg} on $G_{n,p}$ and $v$ defined as above: \begin{align} \label{eq:q_l_proof} q^l = o(1) & \\ \label{eq:p_r_c_proof} q_r^c < \frac{1}{100} & \\ \label{eq:proof_all_other_probs} \sum_{j=1}^{r-1} q_j^c + \sum_{j=2}^{r} q_j^h = o(1)& \end{align} \end{thm} \begin{proof} \eqref{eq:q_l_proof} is immediate from Lemma \ref{lem:max_deg}. As in \eqref{eq:num_cycles_length_k} and \eqref{eq:odd_length_cycles} and since $L_r$ is of size at most $(1+\epsilon)^r d^r$, the expected number of cycles in $L_r$ is no more than \begin{equation} \sum_{k=3}^{\infty} \frac{1}{2k} 3^{-k} < \frac{1}{100} \end{equation} which proves \eqref{eq:p_r_c_proof}. In the rest of the proof we establish \eqref{eq:proof_all_other_probs}. \paragraph{Horseshoes and sub-horseshoes} We start by explaining why horseshoes and sub-horseshoes are important for the analysis of this problem. \begin{lem} A vertex in $L_{\geq j}$ might remain in $F$ after step \ref{enum:remove} of iteration $i = j$ of the algorithm only if it lies in some $j$-horseshoe of $F$ at that point. \end{lem} \begin{proof} Observe $F$ at the end of step \ref{enum:counting} in iteration $i = j$ of the algorithm. 
Let $w \in L_{\geq j}(v,F)$ be a vertex that is not contained in any $j$-horseshoe at this point. Then there is at most one edge $e$ touching $w$ that is the first edge of a path $P$ from $w$ to $L_{j-1}$ whose interior is in $L_{\geq j}$ and last edge is in $R$ (note that this interior might also be empty if $P$ is a single edge). Otherwise, let $e_1 \neq e_2$ be such edges and let $P_1, P_2$ be the corresponding paths. Since $L_{\geq j}(v,F)$ does not contain cycles at this point, the interiors of $P_1$ and $P_2$ are disjoint. Thus $w$ lies in the horseshoe $P_1 \cup P_2$, a contradiction. Hence there exists at most one edge $e$ of this type. We can assume that there exists exactly one (if there is none, the argument below applies with $S_w$ replaced by the whole connected component of $w$). Let $S_w$ be the connected component of $w$ in $L_{\geq j}(v,F) \setminus \{e\}$ at this point. Any vertex of $S_w$ aside from $w$ has at most one neighbour in $F$ outside $S_w$ (namely its parent in $T$). Moreover, $S_w$ is a forest and thus contains a leaf $z \neq w$. Such a $z$ has degree $\leq 2$ in $F$ and can be removed from it. \\ Repeating this, the process ends when all vertices of $S_w \setminus \{w\}$ are removed from $F$, leaving $w$ with at most two neighbours: its parent in $T$ and the other endpoint of $e$. At this point, $w$ can be removed from $F$, completing the proof. \end{proof} One can check that the following corollary, providing a similar result for edges, is a consequence of the last lemma. \begin{cor} An edge of $U_r(v,G)$ that has an endpoint in $L_{\geq j}$ might remain in $F$ after step \ref{enum:remove} of iteration $i = j$ of the algorithm only if it lies in some $j$-sub-horseshoe of $F$ at that point. \end{cor} Recall that the bounds $b_j$ in step \ref{enum:horseshoes} have not been defined yet. Take $b_1 = 0$ since there are no $1$-horseshoes. For $1 < j \leq r$ take $b_j = \frac{n_{j-1}}{\ell}$. The reasoning for these choices will be clearer later.
\paragraph{$r$-horseshoes and $q_r^h$} An $r$-horseshoe of length $k+1$ is a path in $R$ with both endpoints in $L_{r-1}$ and $k > 0$ interior points in $L_r$. The number of candidates to be $r$-horseshoes of length $k+1$ is $\leq n_{r-1}^2 n_r^k$. The FKG inequality implies that each candidate is indeed an $r$-horseshoe in $U_r(v,G)$ with probability at most $p^{k+1}$. Such a horseshoe, if it exists, forms no more than $3k^2$ $r$-sub-horseshoes. Combining everything we get \begin{align} \label{eq:E[h_r]} E[h_r| \mbox{algorithm did not return "no" before sampling $h_r$}] & \leq \\ \label{eq:pn_j} \sum_{k=1}^{\infty} 3k^2 p^{k+1} n_{r-1}^2 n_r^k = 3n_{r-1} (p n_{r-1}) \sum_{k=1}^{\infty} k^2 (pn_r)^k & \leq \\ 3n_{r-1} \frac{3^{-r}} {d} \sum_{k=1}^{\infty} k^2 3^{-rk} & \leq O(1) \frac{n_{r-1}} {d} \end{align} The inequality in \eqref{eq:pn_j} holds since \begin{equation} pn_j \leq \frac{3}{10} \left(\frac{10}{3}d \right)^{-r} (10/9)^j d^j < 3^{-r} d^{j-r} \end{equation} Applying Markov's inequality to \eqref{eq:E[h_r]} we get: \begin{equation} \label{eq:p_r_h_final} q_r^h \leq \frac{O(1) \frac{n_{r-1}} {d}}{b_r} = O \left(\frac{\ell}{d} \right) = O(1/\log{\ell}) = o(1) \end{equation} \subsubsection*{$j$-cycles and $j$-horseshoes for $j<r$} Assume that the algorithm has not returned "no" in step \ref{enum:creation_ret_no} or in iterations $r, r-1, \ldots, j+1$ of the connection phase. In particular, the number of $(j+1)$-sub-horseshoes in $F$ is at most $b_{j+1}$ and there are no $(j+1)$-cycles in $F$. At this point in the algorithm, the inner structures of the $(j+1)$-horseshoes are known, but their endpoints are not yet determined (as the last possible "no" answer of iteration $j+1$ of the connection phase comes after the inner structures are determined but before step \ref{enum:link} is taken).
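For the reader's convenience, the bound $pn_j < 3^{-r} d^{j-r}$ invoked in \eqref{eq:pn_j}, and used repeatedly below, can be unpacked as follows. This is a sketch assuming, as guaranteed when step \eqref{enum:creation_ret_no} has not returned "no", that $n_j \leq (1+\epsilon)^j d^j = (10/9)^j d^j$, and recalling that $p = \frac{3}{10}(10\ell\log{\ell})^{-r}$ with $d = 3\ell\log{\ell}$, so that $10\ell\log{\ell} = \frac{10}{3}d$:

```latex
% Sketch: p n_j < 3^{-r} d^{j-r}, assuming n_j <= (10/9)^j d^j and j <= r.
\begin{align*}
p n_j &\leq \frac{3}{10}\left(\frac{10}{3}d\right)^{-r}\left(\frac{10}{9}\right)^{j} d^{j}
       \leq \frac{3}{10}\left(\frac{3}{10}\cdot\frac{10}{9}\right)^{r} d^{j-r}
       && \text{(since $j \leq r$)} \\
      &= \frac{3}{10}\, 3^{-r}\, d^{j-r} \;<\; 3^{-r} d^{j-r}.
\end{align*}
```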
A $j$-cycle has parameters $m,k$ (with $1 \leq m \leq n_j\ ,\ 0 \leq k \leq m$) if it consists of exactly $m$ vertices in $L_j$, $k$ internally-disjoint $(j+1)$-sub-horseshoes (the interiors are disjoint since a $j$-cycle is simple) and $m-k$ inner edges of $L_j$. It is clear that any $j$-cycle in $F$ can be presented in such a form. A $j$-horseshoe with parameters $m,k$ ($1 \leq m \leq n_j\ ,\ 0 \leq k \leq m-1$) is defined similarly: it consists of $m$ vertices in $L_j$, $k$ internally-disjoint $(j+1)$-sub-horseshoes, $m-1-k$ inner edges of $L_j$ and two edges down to $L_{j-1}$ that are in $R$. Again, any $j$-horseshoe can be presented in this form. We now bound the expected number of $j$-cycles and $j$-horseshoes. We do so by estimating the number of such objects with parameters $m,k$ for all possible values of $m,k$. Fix $a_1, \ldots, a_k, b_1, \ldots, b_k \in V$ (not necessarily distinct) and internally-disjoint $(j+1)$-sub-horseshoes $H_1, \ldots, H_k$. The probability that a specific $H_i$ has endpoints $a_i, b_i$ is at most $\frac{1}{\binom{n_j}{2}} \leq \frac{4}{n_j^2}$ (this is true for $n_j \geq 2$ ; if $n_j = 1$ then there are no $(j+1)$-sub-horseshoes anyway). These $k$ events are independent (as per step \ref{enum:link} in the connection phase), and the probability that all of them occur together is at most $\frac{1}{\binom{n_j}{2}^k} \leq \frac{4^k}{n_j^{2k}}$. There are no more than $b_{j+1}^k$ possible ordered choices of $(H_1, \ldots, H_k)$. Therefore, the expected number of ordered sets of $k$ internally-disjoint $(j+1)$-sub-horseshoes with endpoints $(a_1, b_1), \ldots, (a_k, b_k)$ is no more than \begin{equation} \frac{4^k b_{j+1}^k} {n_j^{2k}} \leq \frac{4^k}{\ell^k n_j^k} \end{equation} \paragraph{$j$-cycles} First we bound the expected number of $j$-cycles with parameters $m,k$ in $F$ after step \ref{enum:edges_i} in iteration $i=j$ of the connection phase. 
Fix $m$ vertices $(v_1, v_2, \ldots, v_m) \in L_j$ and order them cyclically (there are at most $n_j^m$ such orderings). Now fix $k$ couples of neighbouring vertices $a_i, b_i$ in the chosen cyclic order (there are $\binom{m}{k} \leq 2^m$ possible choices of $k$-tuples). The expected number of $k$-tuples of internally-disjoint $(j+1)$-sub-horseshoes with endpoints $a_i, b_i$ is no more than $\frac{4^k}{\ell^k n_j^k}$. The probability for any other couple of neighbours in the cyclic ordering to have an edge between them is $p$ independently of everything else. Since the expectation of a product of independent random variables is the product of their expectations, we get that the expected number of $j$-cycles with parameters $m,k$ is no more than \begin{equation} n_j^m 2^m \frac{4^k}{\ell^k n_j^k} p^{m-k} = (2pn_j)^{m-k} \left( \frac{8}{\ell} \right)^k \leq \left(\frac{1}{d}\right)^{m-k} \left( \frac{8}{\ell} \right)^k \leq \left(\frac{8}{\ell}\right)^m \end{equation} Hence the total expected number of $j$-cycles is no more than \begin{equation} \sum_{m=1}^{n_j} \sum_{k=0}^{m} \left(\frac{1}{d}\right)^{m-k} \left( \frac{8}{\ell} \right)^k = \sum_{m=1}^{n_j} \left( \frac{8}{\ell} \right)^m (1+o(1)) = \frac{8}{\ell} (1+o(1)) = o(1) \end{equation} In particular we get \begin{equation} \label{eq:p_j_c_final} q_j^c = O(1/\ell) = o(1) \end{equation} \paragraph{$j$-horseshoes} We bound the expected number of $j$-(sub-)horseshoes with parameters $m,k$ in $F$ after step \ref{enum:counting} in iteration $i=j$ of the connection phase. Fix $m$ linearly ordered vertices $(v_1, v_2, \ldots, v_m) \in L_j$ (there are at most $n_j^m$ such orderings). Now fix $k$ couples of neighbouring vertices $a_i, b_i$ in the chosen linear ordering (there are $\binom{m-1}{k} \leq 2^{m-1}$ possible choices). The expected number of $k$-tuples of internally-disjoint $(j+1)$-sub-horseshoes with endpoints $a_i, b_i$ is no more than $\frac{4^k}{\ell^k n_j^k}$.
The probability for any other couple of neighbours in the linear ordering to have an edge between them is $p$ independently of everything else. For a vertex in $L_j$, the expected number of neighbours it has in $L_{j-1}$ via $R$ is no more than $n_{j-1} p$. Combining all of the above, the expected number of $j$-horseshoes with parameters $m,k$ is no more than \begin{align} & n_j^m 2^{m-1} \frac{4^k}{\ell^k n_j^k} p^{m-1-k} (n_{j-1}p)^2 = \\ & (2 n_j p)^{m-1-k} (n_j p) (n_{j-1} p) (8/\ell)^k n_{j-1} \leq \\ & d^{(j-r)(m-1-k)} d^{j-r} d^{j-1-r} (8/\ell)^k n_{j-1} \leq \\ & d^{-(m-1-k)} d^{-3} (8/\ell)^k n_{j-1} = \Theta(\log{\ell})^{k} d^{-m-2} n_{j-1} \end{align} Each $j$-horseshoe with such parameters contributes no more than $3m^2$ $j$-sub-horseshoes, and the total expected number of $j$-sub-horseshoes in $F$ is at most \begin{align} & 3\sum_{m=1}^{n_j} \sum_{k=0}^{m-1} \Theta(\log{\ell})^{k} d^{-m-2} n_{j-1} m^2 \leq \\ & \sum_{m=1}^{n_j} o(1) \Theta(\log{\ell})^{m+2} d^{-m-2} m^2 n_{j-1} \leq \Theta(\ell)^{-3} n_{j-1} \end{align} By Markov's inequality, \begin{equation} \label{eq:p_j_h_final} q_j^h \leq \frac{\Theta(\ell)^{-3} n_{j-1}}{b_j} \leq \Theta(\ell)^{-2} = o(1) \end{equation} The proof of \eqref{eq:proof_all_other_probs} is now complete by \eqref{eq:p_j_c_final}, \eqref{eq:p_j_h_final} and since $r$ is fixed. \end{proof} \begin{remark} Special care should be taken in proofs of this type to ensure that no source of randomness is used more than once, that is, to prevent information that has already been revealed at some point of the algorithm from being treated as random later on. In particular, note that the information needed to determine how many $j$-sub-horseshoes there are does not interfere with the information needed to compute, given the interiors of the $j$-sub-horseshoes but not yet their endpoints, the probability that $k$ specific internally-disjoint $j$-sub-horseshoes have $k$ specific couples of endpoints.
\end{remark} \iffalse \section{Upper Bound For $f_2(\ell,r)$} \label{sec:f_2} This section gives an upper bound for $f_2(\ell,r)$ using simple probabilistic arguments similar to those seen above. Note that a graph $G$ has $\ell\chi_r(G) \leq 2$ if and only if it does not contain cycles of odd length $\leq 2r+1$. \begin{thm} There exists $\ell_0$ such that for every $\ell > \ell_0$, $r > 0$, \begin{equation} f_2(\ell,r) \leq (10\ell\log{\ell})^{2r+1} \end{equation} \end{thm} \begin{proof} Take $G=G_{n,p}$ with $n = (10\ell\log{\ell})^{2r+1}$ and $p = \frac{3}{10}(10\ell\log{\ell})^{-2r}$. As before the expected degree of a vertex is $d = np = 3\ell\log{\ell}$ and w.h.p.\@ there is no independent set of size $[1+o(1)]\frac{2\log{\ell}}{p}$. The expected number of cycles of length $2i+1$ in $G_{n,p}$ for $1 \leq i \leq r$ is no more than \begin{equation} \frac{1}{4i+2} d^{2i+1} \end{equation} Delete a vertex from every cycle of odd length $\leq 2r+1$ to obtain a graph $G'$ with $\ell\chi_r(G) \leq 2$. The expected number of deleted vertices is \begin{equation} [1+o(1)]\frac{1}{4r+2} d^{2r+1} \end{equation} For $\ell$ large enough, the probability that no more than $d^{2r+1}$ vertices are deleted is $> 3/4$. In this case the number of vertices of $G'$, denoted by $n'$, satisfies \begin{align} n' \geq n - d^{2r+1} \geq \frac{1000-27}{1000} n > \frac{9}{10}n. \end{align} Hence with positive probability $G'$ has more than $\frac{9}{10}n$ vertices but contains no independent set of size $\frac{2.5\log{\ell}}{p}$, and its chromatic number is at least $\frac{9np}{25\log{\ell}} = \frac{9d}{25\log{\ell}} > \ell$ as desired. \end{proof} \begin{remark} Deleting edges to remove cycles is more economical than deleting vertices. The bound above can probably be improved by a factor between $\ell$ and $\ell \log{\ell}$ by deleting edges instead of vertices. 
\end{remark} \fi \section{$f_c(\ell,r)$ For Non-Constant $c$} \label{sec:c_non_const} In the previous sections we considered $f_c(\ell,r)$ for small fixed values of $c$. In this section we extend our results to large values of $c$. Take a graph $G$ on $n$ vertices, $0.98 (10 \ell \log{\ell})^{r+1} \leq n \leq (10 \ell \log{\ell})^{r+1}$, with $\ell\chi_r(G) = 3$ and with no independent set of size $(1+o(1))\frac{2\log(d)}{p}$, where, as before, $d=3 \ell \log \ell$ and $p=\frac{3}{10}(10 \ell \log \ell)^{-r}$. Such a graph $G$ exists by the results in Section \ref{sec:f_3}. Construct the following graph $G_k$: every vertex in the original $G$ is expanded to a $k$-clique. Two vertices in $G_k$ are connected if they lie in the same clique or if the cliques in which they lie were neighbours in $G$. Every independent set in $G_k$ contains at most one vertex from each clique, and thus the maximum independent set in $G_k$ is of size $<(1+o(1))\frac{2\log(d)}{p}$. There are $kn$ vertices in $G_k$ and thus its chromatic number is (for $\ell$ large enough) \begin{equation} \chi \left( G_k \right) \geq \frac{kn}{(1+o(1))\frac{2\log(d)}{p}} > k\ell \end{equation} Every $r$-ball in $G_k$ is contained in an expanded $r$-ball from $G$. Thus \begin{equation} \ell{\chi_r}(G_k) \leq 3k \end{equation} We conclude that for $\ell^*$ large enough \begin{equation} f_{3k}(k\ell^*, r) < kn \leq k (10 \ell^* \log{\ell^*})^{r+1} \end{equation} Taking $c=3k$ and $\ell = k\ell^*$, the last result implies that \begin{equation} f_c(\ell, r) < \frac{c}{3} \left( 10 \frac{\ell}{c/3} \log{\left( \frac{\ell}{c/3} \right)} \right)^{r+1} < \frac{(30 \ell \log{\ell})^{r+1}}{c^r} \end{equation} When $c$ and $\ell$ are not of this form, we need to replace them by $3 \lfloor c/3 \rfloor \leq c$ and $\lfloor c/3 \rfloor \big\lceil \frac{\ell}{\lfloor c/3 \rfloor} \big\rceil \geq \ell$ respectively. The following Theorem summarizes the discussion.
\begin{thm} \label{thm:large_c} There exists $\ell_0$ such that for every positive $c$ divisible by $3$ and $\ell \geq \max(c, \ell_0)$ divisible by $c/3$: \begin{equation} f_c(\ell, r) < \frac{(30 \ell \log{\ell})^{r+1}}{c^r} \end{equation} Thus for every $c \geq 3$: \begin{equation} f_c(\ell, r) < \frac{[O(\ell \log{\ell})]^{r+1}}{c^r} \end{equation} \end{thm} \begin{remark} The contribution of $c$ in this upper bound is $c^{-r}$, whereas this contribution in the corresponding lower bound by Bogdanov in \eqref{eq:bogd} is $c^{-r-1}$. \end{remark} \section{${f_c}(\ell,1)$} \label{sec:r_is_1} As stated in Section \ref{sec:bg} it is known that ${f_2}(\ell,1) = \Theta \left( \ell^2 \log \ell \right)$. In this section it is shown that $f_c(\ell, 1) = \Theta(\ell^2 \log{\ell})$ for any fixed $c \geq 2$. Since $f_c(\ell,1) \leq f_2(\ell,1)$, we only need to show that $f_c(\ell,1) = \Omega(\ell^2 \log{\ell})$ for fixed $c \geq 2$. \begin{thm} \label{thm:lower_bound} There exists $\alpha > 0$ such that for every $\ell \geq c \geq 2$ \begin{equation} {f_c}(\ell,1) \geq \alpha \frac{\ell^2 \log \ell}{c \log c} \end{equation} In particular, for any fixed $c \geq 2$: \begin{equation} f_c(\ell, 1) = \Theta(\ell^2 \log{\ell}) \end{equation} \end{thm} \begin{proof} Let $G=(V,E)$ be a graph on $n = f_c(\ell, 1) + 1$ vertices with $\ell{\chi_1}(G) \leq c$ but $\chi(G) > \ell$. Our goal is to show\footnote{In fact we need to show this for $n-1$ instead of $n$ but it is clearly equivalent.} that $n \geq \alpha \frac{\ell^2 \log \ell}{c \log c}$ for a suitable choice of $\alpha$. By taking a critical subgraph of $G$ we can assume that the minimum degree of $G$ is at least $\ell$ (in a vertex-minimal subgraph with chromatic number greater than $\ell$, a vertex of degree less than $\ell$ could be removed and any proper $\ell$-colouring of the rest extended to it), and clearly we can also assume that $n \leq \zeta \ell^2 \log{\ell}$ for some absolute constant $\zeta > 0$. By these assumptions, the average degree $d$ in $G$ satisfies $\ell \leq d < n \leq \zeta \ell^2 \log{\ell}$.
\paragraph{Large independent set in $G$} Observe that \begin{itemize} \item There exists $v \in V$ with $\deg(v) \geq d$. The neighborhood of $v$ is $c$-colorable, and thus contains an independent set of size at least $d/c \geq \ell/c$. \item The first author \cite{alon1996independence} showed that there exists $\beta > 0$ such that any graph $G$ on $n$ vertices with average degree $d \geq 1$ and $\ell\chi_1(G) \leq c$ contains an independent set of size \[ \frac {\beta} {\log{c}} \frac{n}{d} \log{d} \] \end{itemize} \begin{lem} There exists an independent set of size $\geq \delta \sqrt{ \frac{n\log{n}}{c\log{c}}}$ in $G$ where $\delta > 0$ is a suitable global constant. \end{lem} \begin{proof} There exists an independent set of size \begin{equation} \max \bigg\{ \frac{d}{c}, \frac {\beta} {\log{c}} \frac{n}{d} \log{d} \bigg\} \geq \sqrt {\frac{d}{c} \frac { \beta} {\log{c}} \frac{n}{d} \log{d} } \geq \sqrt { \frac{\beta n \log{\ell} } {c \log{c}}} \geq \delta \sqrt {\frac{n \log{n}}{c \log{c}}} \end{equation} as needed. \end{proof} Removing an independent set of size $\delta \sqrt {\frac{f_c(\ell,1) \log{f_c(\ell,1)}}{c \log{c}}} \leq \delta \sqrt {\frac{n \log{n}}{c \log{c}}}$ from $G$ results in a non-$(\ell-1)$-colorable graph. Hence \begin{equation} \label{eq:induct1} f_c(\ell-1, 1) \leq f_c(\ell,1) - \delta \sqrt {\frac{f_c(\ell, 1) \log{f_c(\ell, 1)}}{c \log{c}}} \end{equation} For $\delta$ small enough and $c \geq 2$, the function \begin{equation} h(x):= x - \delta \sqrt{ \frac{x\log x}{c\log c}} \end{equation} is increasing in the domain $[2, \infty)$. Now take $\alpha = \min(1, \delta^2 / 9)$ and fix $c \geq 2$. We will show that $f_c(\ell,1) \geq \alpha \frac{\ell^2 \log{\ell}} {c \log{c}}$ for every $\ell \geq c$ by induction on $\ell$. 
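For completeness, here is a sketch of why $h$ is increasing on $[2,\infty)$ for $\delta$ small enough (the constants below are not optimized):

```latex
% Differentiating h(x) = x - \delta\sqrt{x\log x/(c\log c)} for x \geq 2:
\begin{equation*}
h'(x) \;=\; 1 \;-\; \frac{\delta}{\sqrt{c\log{c}}}\cdot\frac{\log{x}+1}{2\sqrt{x\log{x}}} .
\end{equation*}
% The factor (\log x + 1)/(2\sqrt{x\log x}) is decreasing on [2,\infty) and
% smaller than 1 at x = 2, while \sqrt{c\log c} \geq \sqrt{2\log 2} > 1 for
% every c \geq 2.  Hence h'(x) > 1 - \delta > 0 whenever \delta < 1.
```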
The base case $\ell = c$ satisfies \begin{equation} {f_c}(c,1) = c = \frac{\ell^2 \log \ell}{c \log c} \geq \alpha \frac{\ell^2 \log \ell}{c \log c} \end{equation} Assuming that ${f_c}(\ell,1) \geq \alpha \frac{\ell^2 \log \ell}{c \log c}$ and using \eqref{eq:induct1} we get \begin{equation} \label{eq:h_bigger} \alpha \frac{\ell^2 \log \ell}{c \log c} \leq {f_c}(\ell+1,1) - \delta \sqrt{ \frac{{f_c}(\ell+1,1)\log{{f_c}(\ell+1,1)}}{c\log{c}}} = h\left( {f_c}(\ell+1,1) \right) \end{equation} Note that $f_c(\ell+1, 1) \geq \ell+1$. If $\alpha \frac{(\ell+1)^2 \log (\ell+1)}{c \log c} \leq \ell+1$ then we are finished. Otherwise, take $x=\alpha \frac{(\ell+1)^2 \log (\ell+1)}{c \log c} \geq \ell+1 \geq 3$. Then \[ \begin{split} x - \alpha \frac{\ell^2 \log \ell}{c \log c} &= \alpha \left[ ((\ell+1)^2 - \ell^2) \frac{\log (\ell+1)}{c \log c} + (\log(\ell+1) - \log{\ell}) \frac{\ell^2}{c \log c} \right] \\ &\leq \alpha \frac{(2\ell+1) \log (\ell+1) + \ell}{c \log c} \leq 3\alpha \frac{(\ell+1) \log (\ell+1)}{c \log c} \\ & = 3\alpha \sqrt{ \frac { \frac{(\ell+1)^2 \log(\ell+1)} {c\log{c}} \log(\ell+1)} {c \log{c}} } \leq 3\sqrt{\alpha} \sqrt{ \frac {x\log{x}} {c\log{c}}} \leq \delta \sqrt{ \frac {x\log{x}} {c\log{c}}} \end{split} \] The last inequality and \eqref{eq:h_bigger} imply that \begin{equation} h(x) \leq \alpha \frac{\ell^2 \log \ell}{c \log c} \leq h(f_c(\ell+1,1)) \label{eq:h_smaller} \end{equation} By the monotonicity of $h$, \begin{equation} \alpha \frac{(\ell+1)^2 \log (\ell+1)}{c \log c} = x \leq f_c(\ell+1,1) \end{equation} finishing the induction step and completing the proof. \end{proof} \section{Final Remarks} \label{sec:conc_remarks} \subsection{Non-constant $r$} Our bounds for $f_c(\ell, r)$ are valid for fixed values of $r$. These bounds still hold if we require that $r \leq \gamma \ell$ for a suitable global constant $\gamma > 0$ instead of requiring $r$ to be fixed. 
The following amendments of the proof need to be made: \begin{itemize} \item In Lemma \ref{lem:max_deg} we need to make sure that $\ell ^ {(2+o(1))r - (3+o(1))\log(1/{\gamma_\epsilon}) \ell} \xrightarrow{\ell \rightarrow \infty} 0$ where $\epsilon = 1/9$ and $\gamma_\epsilon = {e^\epsilon}{(1+\epsilon)^{-(1+\epsilon)}} < 1$. For $\ell$ large enough, this expression indeed tends to $0$ for every $r \leq \log(1 / \gamma_\epsilon) \ell$. Take a suitable $\gamma \leq \log(1/\gamma_\epsilon)$ that is good for every $\ell \geq 2$. \item In Section \ref{sec:f_3} take \begin{align} b_j = \begin{cases} \frac{n_{r-1}}{\ell} & j=r \\ \frac{n_{j-1}}{d} & 1<j<r \\ 0 & j=1 \end{cases} \end{align} It can be shown that now $q_j^c, q_j^h \leq O(1/d)$ for any $j < r$. This proves \eqref{eq:proof_all_other_probs} in Theorem \ref{thm:f_3_probs} and completes the proof of Theorem \ref{thm:bound_on_f_3}. \end{itemize} Note also that in Theorem \ref{thm:4_degenerate} a slightly different analysis is needed for large $r$, but the stated result remains valid. \subsection{More on $f_c(\ell, r)$} Our general upper bound for $f_c(\ell, r)$ is \begin{equation} f_c(\ell, r) < \frac{[O(\ell \log{\ell})]^{r+1}}{c^r} \end{equation} We have already seen that this bound is tight up to a polylogarithmic factor for fixed $c \geq 3$ and $r$. For other range of the parameters and in particular when $r$ is very large there is a result of Kierstead, Szemer\'edi and Trotter \cite{kierstead83} providing a lower bound for $f_c(\ell, r)$, which is close to being tight in this range. See also \cite{bogdanov13}. In some cases, however, the gap between the known upper and lower bounds is large. In particular, it will be interesting to understand better the behaviour of $f_c(r,r)$, and of $f_2(\ell,r)$. \iffalse showed that Here are some other results regarding $f_c(\ell, r)$. 
Kierstead, Szemer\'edi and Trotter \cite{kierstead83} showed that \[ {f_c}(k(c-1)+1,2kn^{1/k}) \geq n \] Under certain conditions on $\ell$ and $r$, this result can be rewritten as \begin{equation} \label{eq:lower_kierstead_conc} f_c(\ell,r) \geq \left({\frac{r(c-1)}{2(\ell-1)}} \right) ^ {\frac{\ell-1}{c-1}} \end{equation} Bogdanov \cite{bogdanov13} obtained the upper bound \begin{equation} {f_c}(k(c-1),r) < \frac{(2rc+1)^k-1}{2r} \end{equation} Which is equivalent (under a certain condition on $\ell$) to \begin{equation} \label{eq:upper_bogadnov_conc} f_c(\ell,r) < \frac{(2rc+1)^{{\frac{\ell}{c-1}}}-1}{2r} \end{equation} \eqref{eq:lower_kierstead_conc} and \eqref{eq:upper_bogadnov_conc} have roughly the same order\footnote{the lower bound actually has a larger order in $r$, but there is no contradiction since these bounds are true under different conditions.} in $r$, giving a good estimation for $f_c(\ell, r)$ when $\ell$ is fixed and $r$ is large. However, the known upper and lower bounds are far apart when $\ell = \Theta(r)$. When $c$ is fixed, the lower bound by Kierstead et al.\@ is \begin{equation} f_3(\Theta(r), r) \geq \left( \frac{r}{\Theta(r)} \right)^{\Theta(r)} = \Theta(1)^{\Theta(r)} \end{equation} While our upper bound as well as Bogdanov's bound give \begin{equation} f_3(\Theta(r), r) \leq \Theta(r)^{\Theta(r)} \end{equation} \fi The question of obtaining a better estimation of $f_c(\ell, r)$ in the general case (as well as for fixed $c \geq 3$ and fixed $r$) is left as an open problem. \bibliographystyle{plain}
https://arxiv.org/abs/1610.02079
Almost Engel compact groups
We say that a group $G$ is almost Engel if for every $g\in G$ there is a finite set ${\mathscr E}(g)$ such that for every $x\in G$ all sufficiently long commutators $[...[[x,g],g],\dots ,g]$ belong to ${\mathscr E}(g)$, that is, for every $x\in G$ there is a positive integer $n(x,g)$ such that $[...[[x,g],g],\dots ,g]\in {\mathscr E}(g)$ if $g$ is repeated at least $n(x,g)$ times. (Thus, Engel groups are precisely the almost Engel groups for which we can choose ${\mathscr E}(g)=\{ 1\}$ for all $g\in G$.) We prove that if a compact (Hausdorff) group $G$ is almost Engel, then $G$ has a finite normal subgroup $N$ such that $G/N$ is locally nilpotent. If in addition there is a uniform bound $|{\mathscr E}(g)|\leq m$ for the orders of the corresponding sets, then the subgroup $N$ can be chosen of order bounded in terms of $m$. The proofs use the Wilson--Zelmanov theorem saying that Engel profinite groups are locally nilpotent.
\section{Introduction} A group $G$ is called an Engel group if for every $x,g\in G$ the equation $[x,g,g,\dots , g]=1$ holds, where $g$ is repeated in the commutator sufficiently many times depending on $x$ and $g$. (Throughout the paper, we use the left-normed simple commutator notation $[a_1,a_2,a_3,\dots ,a_r]=[...[[a_1,a_2],a_3],\dots ,a_r]$.) A group is said to be locally nilpotent if every finite subset generates a nilpotent subgroup. Clearly, any locally nilpotent group is an Engel group. Wilson and Zelmanov \cite{wi-ze} proved the converse for profinite groups: any Engel profinite group is locally nilpotent. Later Medvedev \cite{med} extended this result to Engel compact (Hausdorff) groups. In this paper we consider \emph{almost Engel} groups in the following precise sense. \begin{definition} \label{d} We say that a group $G$ is \emph{almost Engel} if for every $g\in G$ there is a \emph{finite} set ${\mathscr E}(g)$ such that for every $x\in G$ all sufficiently long commutators $[x,g,g,\dots ,g]$ belong to ${\mathscr E}(g)$, that is, for every $x\in G$ there is a positive integer $n(x,g)$ such that $$[x,\underbrace{g,g,\dots ,g}_n]\in {\mathscr E}(g)\qquad \text{for all }n\geq n(x,g). $$ \end{definition} \noindent Thus, Engel groups are precisely the almost Engel groups for which we can choose ${\mathscr E}(g)=\{ 1\}$ for all $g\in G$. We prove that almost Engel compact groups are finite-by-(locally nilpotent). By a compact group we mean a compact Hausdorff topological group. \begin{theorem}\label{t-e} Suppose that $G$ is an almost Engel compact group. Then $G$ has a finite normal subgroup $N$ such that $G/N$ is locally nilpotent. \end{theorem} It also follows from Theorem~\ref{t-e} that there is a locally nilpotent subgroup of finite index -- just consider $C_G(N)$. The proof uses the aforementioned Wilson--Zelmanov theorem for profinite groups.
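To illustrate Definition~\ref{d} on a small concrete group (a worked example, using the commutator convention $[x,g]=x^{-1}x^g$ fixed below), consider the symmetric group $S_3$:

```latex
% g = (1 2 3): every commutator [x,g] lies in the derived subgroup
% A_3 = <(1 2 3)>, which is abelian and contains g, so [x,g,g] = 1 for
% all x; hence g is an Engel element and one can take E(g) = {1}.
%
% g = (1 2): the two 3-cycles are fixed points of the map z -> [z,g], e.g.
\begin{equation*}
[(1\,2\,3),(1\,2)] \;=\; (1\,2\,3)^{-1}\bigl((1\,2\,3)\bigr)^{(1\,2)}
 \;=\; (1\,3\,2)(1\,3\,2) \;=\; (1\,2\,3),
\end{equation*}
% so the iterated commutators [x,(1 2),...,(1 2)] starting from x = (1 2 3)
% never reach 1.  The smallest admissible set is
% E((1 2)) = {1, (1 2 3), (1 3 2)}, and (1 2) is not an Engel element.
```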
First the case of a finite group $G$ is considered, where obviously the result must be quantitative: namely, given a uniform bound $ |{\mathscr E}(g)|\leq m$ for the cardinalities of the sets ${\mathscr E}(g)$ in the above definition, we prove that the order of the nilpotent residual $\gamma _{\infty}(G)=\bigcap _i\gamma _i(G)$ is bounded in terms of $m$ only. Then Theorem~\ref{t-e} is proved for profinite groups. Finally, the result for compact groups is derived with the use of the structure theorems for compact groups. As in the case of finite groups, if there is a uniform bound $m$ for the cardinalities of the sets ${\mathscr E}(g)$ in Theorem~\ref{t-e}, then the subgroup $N$ in the conclusion can be chosen to be of order bounded in terms of $m$ (Corollary~\ref{c-em}). In an earlier paper \cite{khu-shu153} we obtained similar results about finite and profinite groups with a stronger condition of ``almost Engel'' type. That condition means that every element $g$ of the group is ``almost $n$-Engel'' for some $n=n(g)$ depending on $g$. Note, however, that Engel groups do not necessarily satisfy that condition. The new Definition~\ref{d} of almost Engel groups in the present paper imposes a weaker and more natural ``almost Engel'' condition and includes Engel groups (when all the subsets ${\mathscr E}(g)$ consist only of 1). Thus, the results of the present paper are stronger than in \cite{khu-shu153} even for (pro)finite groups, and cover a wider class of compact groups. First in \S\,\ref{s-1} we collect some elementary properties of minimal subsets ${\mathscr E}(g)$ in the definition of almost Engel groups. We deal with finite groups in \S\,\ref{s-f}, with profinite groups in \S\,\ref{s-pf}, and consider the general case of compact groups in \S\,\ref{s-comp}. Our notation and terminology are standard; for profinite groups, see, for example, \cite{wilson}.
We say for short that an element $g$ of a group $G$ is an \textit{Engel element} if for any $x\in G$ we have $[x,g,g,\dots , g]=1$, where $g$ is repeated in the commutator sufficiently many times depending on $x$ (such elements $g$ are often called left Engel elements). A subgroup (topologically) generated by a subset $S$ is denoted by $\langle S\rangle$. For a group $A$ acting by automorphisms on a group $B$ we use the usual notation for commutators $[b,a]=b^{-1}b^a$ and $[B,A]=\langle [b,a]\mid b\in B,\;a\in A\rangle$, and for centralizers $C_B(A)=\{b\in B\mid b^a=b \text{ for all }a\in A\}$ and $C_A(B)=\{a\in A\mid b^a=b\text{ for all }b\in B\}$. Throughout the paper we shall write, say, ``$(a,b,\dots )$-bounded'' to abbreviate ``bounded above in terms of $a, b,\dots $ only''. \section{Properties of Engel sinks} \label{s-1} Throughout this section we assume that $G$ is an almost Engel group in the sense of Definition~\ref{d}, so that for every $g\in G$ there is a finite set ${\mathscr E}(g)$ such that for every $x\in G$ there is a positive integer $n(x,g)$ such that \begin{equation} \label{e-def} [x,\underbrace{g,g,\dots ,g}_n]\in {\mathscr E}(g)\qquad \text{for any }n\geq n(x,g). \end{equation} If ${\mathscr E}'(g)$ is another finite set with the same property for possibly different numbers $n'(x,g)$, then ${\mathscr E}(g)\cap {\mathscr E}'(g)$ also satisfies the same condition with the numbers $n''(x,g)=\max\{n(x,g),n'(x,g)\}$. Hence for every $g\in G$ there is a \emph{minimal} set satisfying the definition, which we again denote by ${\mathscr E}(g)$ and call the \emph{Engel sink for $g$}, or simply \emph{$g$-sink} for short. \emph{Henceforth we shall always use the notation ${\mathscr E}(g)$ to denote the (minimal) Engel sinks, and $n(x,g)$ the corresponding numbers satisfying \eqref{e-def}.} For a fixed $g\in G$, consider the mapping of ${\mathscr E}(g)$ by the rule $z\to [z,g]$, which maps ${\mathscr E}(g)$ into itself by definition. 
By the minimality of ${\mathscr E}(g)$ this mapping is a permutation of ${\mathscr E}(g)$: indeed, the image of ${\mathscr E}(g)$ under this mapping is again a set satisfying \eqref{e-def} (with the numbers $n(x,g)$ increased by $1$), so by minimality it coincides with the finite set ${\mathscr E}(g)$. Therefore we can speak of orbits (cycles) of this permutation on ${\mathscr E}(g)$. It follows that every $z\in {\mathscr E}(g)$ can be represented in the form \begin{equation}\label{e-orb1} z=[z,\underbrace{g,\dots,g}_{k}]\qquad \text{for some }k\geq 1, \end{equation} and therefore also as \begin{equation}\label{e-orb2} z=[z,\underbrace{g,\dots,g}_{jk}]\qquad \text{for any positive integer }j. \end{equation} Conversely, elements satisfying \eqref{e-orb1} belong to ${\mathscr E}(g)$. We have thus proved the following. \begin{lemma}\label{l-sink} For any $g\in G$ the $g$-sink ${\mathscr E}(g)$ consists precisely of all elements $z$ such that $z=[z,{g,\dots,g}]$, where $g$ occurs at least once. \end{lemma} Clearly, every subgroup $H$ of $G$ is also an almost Engel group. Moreover, by Lemma~\ref{l-sink}, for $h\in H$ the $h$-sink constructed within $H$ is precisely the subset ${\mathscr E}(h)\cap H$ of the $h$-sink ${\mathscr E}(h)$ in $G$. If $N$ is a normal subgroup of $G$, then $G/N$ is also an almost Engel group. For $\bar g=gN$ the $\bar g$-sink in $G/N$ is the image of ${\mathscr E}(g)$ in this quotient group. These properties will be used throughout the paper without special references. For any element $h$ of the centralizer $C_G(g)$ the equation $z=[z,g, \dots, g]$ implies $z^h=[z^h,g^h, \dots, g^h]=[z^h,g, \dots, g]$. Hence ${\mathscr E}(g)$ is invariant under conjugation by $h$ by Lemma~\ref{l-sink}. If $|{\mathscr E}(g)|=m$, it follows that $h^{m!}$ centralizes ${\mathscr E}(g)$. We have thus proved the following. \begin{lemma}\label{l-c-s} If $h\in C_G(g)$ and $|{\mathscr E}(g)|=m$, then $h^{m!}$ centralizes ${\mathscr E}(g)$. \end{lemma} Engel sinks have especially nice properties in metabelian groups. We denote by $M'$ the derived subgroup of a group $M$. \begin{lemma}\label{l-metab} Let $M$ be a metabelian almost Engel group.
{\rm (a)} Every sink ${\mathscr E}(g)$ is a normal subgroup contained in $M'$. {\rm (b)} Elements of ${\mathscr E}(g)$ in the same orbit under the map $z\to [z,g]$ have the same order. \end{lemma} \begin{proof} (a) By \eqref{e-orb1} and \eqref{e-orb2}, every element $z\in {\mathscr E}(g)$ can be represented as $z=[z,g,\dots,g]$ with $g$ repeated $jk(z)$ times for $k(z)\geq 1$ and for every $j=1,2,\dots$. In particular, ${\mathscr E}(g)\subseteq M'$. For $z_1,z_2\in {\mathscr E}(g)$ choose $j$ such that $jk(z_1)k(z_2)$ is larger than $n(z_1z_2,g)$. Then, by the standard metabelian laws, $$ \begin{aligned}z_1z_2&=[z_1,{g,\dots,g}][z_2,{g,\dots,g}]\\ &= [z_1z_2,{g,\dots,g}]\in {\mathscr E}(g),\end{aligned} $$ where $g$ is repeated ${jk(z_1)k(z_2)}$ times in each commutator. Thus, the finite set ${\mathscr E}(g)$ is a subgroup. For any $m\in M$, choose $jk(z)$ larger than $n([z,m],g)$. Then $[z,m]=[[z,g,\dots,g],m]=[[z,m],g,\dots,g]\in {\mathscr E}(g)$, where $g$ is repeated ${jk(z)}$ times in each commutator; this means that ${\mathscr E}(g)$ is a normal subgroup of $M$. (b) Let $z\in {\mathscr E}(g)$. Since $z\in M'$, we have $[z^k,g]=[z,g]^k$ for any integer $k$. Hence the order of $[z,g]$ divides the order of $z$. Going in this way over the orbit, we return to $z$, which implies that the orders of all elements in the orbit are the same. \end{proof} \section{Finite almost Engel groups} \label{s-f} Of course, any finite group $G$ is almost Engel, and every element $g\in G$ has finite (minimal) $g$-sink ${\mathscr E}(g)$. A meaningful result must be of quantitative nature, and this is what we prove in this section. The following theorem will also be used in the proof of the main results on profinite and compact groups. \begin{theorem}\label{t-finite} Let $G$ be a finite group, and $m$ a positive integer. Suppose that for every $g\in G$ the cardinality of the $g$-sink ${\mathscr E}(g)$ is at most $m$. 
Then $G$ has a normal subgroup $N$ of order bounded in terms of $m$ such that $G/N$ is nilpotent. \end{theorem} The conclusion of the theorem can also be stated as a bound in terms of $m$ for the order of the nilpotent residual subgroup $\gamma _{\infty}(G)$, the intersection of all terms of the lower central series (which for a finite group is of course also equal to some subgroup $\gamma _n(G)$). First we recall or prove a few preliminary results. We shall use the following well-known properties of coprime actions: if $\alpha $ is an automorphism of a finite group $G$ of coprime order, $(|\alpha |,|G|)=1$, then $ C_{G/N}(\alpha )=C_G(\alpha )N/N$ for any $\alpha $-invariant normal subgroup $N$, the equality $[G,\alpha ]=[[G,\alpha ],\alpha ]$ holds, and if $G$ is in addition abelian, then $G=[G,\alpha ]\times C_G(\alpha )$. \begin{lemma}\label{l0} Let $P$ be a finite $p$-subgroup of a group $G$, and $g\in G$ a $p'$-element normalizing $P$. Then the order of $[P,g]$ is bounded in terms of the cardinality of the $g$-sink ${\mathscr E}(g)$. \end{lemma} \begin{proof} For the abelian $p$-group $V=[P,g]/[P,g]'$ we have $V=[V,g]$ and $C_V(g)=1$ because the action of $g$ on $V$ is coprime. Then $V=\{[v,g]\mid v\in V\}$ and therefore also $$ V=\{[v,\underbrace{g,\dots ,g}_n\,]\mid v\in V\} $$ for any $n$. Hence, $V$ is contained in the image of ${\mathscr E}(g)\cap [P,g]$ in $[P,g]/[P,g]'$, whence $|V|\leq |{\mathscr E}(g)|$. Since $[P,g]$ is a nilpotent group, its order is bounded in terms of $|[P,g]/[P,g]'|$ and its nilpotency class. We claim that, as a crude bound, the nilpotency class of $[P,g]$ is at most $2|{\mathscr E}(g)|+1$. Let $\gamma_i$ denote the terms of the lower central series of $[P,g]$. 
The number of factors of the lower central series of $[P,g]$ on which $g$ acts nontrivially is at most $|{\mathscr E}(g)|$, because for any such factor $U=\gamma _i/\gamma _{i+1}$ we have $$ 1\ne [U,g]=\{[u, \underbrace{g,\dots ,g}_n\,]\mid u\in U\} $$ for any $n$, since the action of $g$ on $U$ is coprime, and therefore there is an element of ${\mathscr E}(g)$ in $\gamma_i\setminus \gamma _{i+1}$. It remains to observe that $g$ cannot act trivially on two consecutive nontrivial factors of the lower central series of $[P,g]$. Indeed, if $[\gamma_i, g]\leq \gamma _{i+1}$ and $[\gamma_{i+1}, g]\leq \gamma _{i+2}$, then by the Three Subgroup Lemma the inclusions $[\gamma_i, g, [P,g]]\leq [\gamma _{i+1}, [P,g]]=\gamma _{i+2}$ and $[[P,g],\gamma_{i}, g]=[ \gamma _{i+1},g]\leq \gamma _{i+2}$ imply the inclusion $[g, [P,g],\gamma _i]=[[P,g],\gamma _i]=\gamma _{i+1}\leq \gamma _{i+2}$, and the last inclusion implies that $\gamma_{i+1}=1$, since the lower central series of the nilpotent group $[P,g]$ strictly decreases until it reaches the trivial subgroup. \end{proof} The following lemma already appeared in \cite{khu-shu153}, but we reproduce the proof for the benefit of the reader. \begin{lemma}\label{l2} Let $V$ be an elementary abelian $q$-group, and $U$ a $q'$-group of automorphisms of $V$. If $|[V,u]|\leq m$ for every $u\in U$, then $|[V,U]|$ is $m$-bounded, and therefore $|U|$ is also $m$-bounded. \end{lemma} \begin{proof} First suppose that $U$ is abelian. We consider $V$ as an ${\mathbb F} _qU$-module. Pick $u_1\in U$ such that $[V,u_1]\ne 0$. By Maschke's theorem, $V= [V,u_1]\oplus C_V(u_1)$, and both summands are $U$-invariant, since $U$ is abelian. If $C_U([V,u_1])=1$, then $|U|$ is $m$-bounded and $[V,U]$ has $m$-bounded order being generated by $[V,u]$, $u\in U$. Otherwise pick $1\ne u_2\in C_U([V,u_1])$; then $V= [V,u_1] \oplus [V,u_2] \oplus C_V(\langle u_1,u_2\rangle )$. If $1\ne u_3\in C_U([V,u_1]\oplus [V,u_2])$, then $V= [V,u_1]\oplus [V,u_2]\oplus [V,u_3] \oplus C_V(\langle u_1,u_2,u_3\rangle )$, and so on.
If $C_U([V,u_1]\oplus\dots \oplus [V,u_k])=1$ at some $m$-bounded step $k$, then again $[V,U]$ has $m$-bounded order. However, if there are too many steps, then for the element $w=u_1u_2\cdots u_k$ we shall have $0\ne [V,u_i]= [[V,u_i],w]$, so that $[V,w] = [V,u_1]\oplus\dots \oplus [V,u_k]$ will have order greater than $m$, a contradiction. We now consider the general case. Since every element $u\in U$ acts faithfully on $[V,u]$, the exponent of $U$ is $m$-bounded. If $P$ is a Sylow $p$-subgroup of $U$, let $M$ be a maximal normal abelian subgroup of $P$. By the above, $|[V,M]|$ is $m$-bounded. Since $M$ acts faithfully on $[V,M]$, we obtain that $|M|$ is $m$-bounded. Hence $|P|$ is $m$-bounded, since $C_P(M)= M$ and $P/M$ embeds in the automorphism group of $M$. Since $|U|$ has only $m$-boundedly many prime divisors, it follows that $|U|$ is $m$-bounded. Since $[V,U]=\sum_{u\in U}[V,u]$, we obtain that $|[V,U]|$ is also $m$-bounded. \end{proof} Recall that the Fitting series starts with the Fitting subgroup $F_1(G)=F(G)$, and by induction, $F_{k+1}(G)$ is the inverse image of $F(G/F_k(G))$. If $G$ is a soluble group, then the least number $h$ such that $F_h(G)=G$ is the \textit{Fitting height} of $G$. The following lemma is well known and is easy to prove (see, for example, \cite[Lemma~10]{khu-maz}). \begin{lemma}\label{l-metan} If $G$ is a finite group of Fitting height 2, then $\gamma _{\infty}(G)=\prod _q [F_q,G_{q'}]$, where $F_q$ is a Sylow $q$-subgroup of $F(G)$, and $G_{q'}$ is a Hall ${q'}$-subgroup of $G$. \end{lemma} We now approach the proof of Theorem~\ref{t-finite} with the following lemma. \begin{lemma}\label{l3} If $G$ is a finite group such that $|{\mathscr E}(g)|\leq m$ for all $g\in G$, then $G/F(G)$ has exponent at most $m!$. \end{lemma} \begin{proof} Every element $g\in G$ centralizes all its powers. Therefore by Lemma~\ref{l-c-s}, since $|{\mathscr E}(g^{m!})|\leq m$ by hypothesis, $g^{m!}$ centralizes ${\mathscr E}(g^{m!})$. 
By the minimality of the $g^{m!}$-sink, then ${\mathscr E}(g^{m!})=\{1\}$. This means that $g^{m!}$ is an Engel element and therefore belongs to the Fitting subgroup $F(G)$ by Baer's theorem \cite[Satz~III.6.15]{hup}. \end{proof} We are now ready to prove Theorem~\ref{t-finite}. \begin{proof}[Proof of Theorem~\ref{t-finite}] Recall that $G$ is a finite group such that $|{\mathscr E}(g)|\leq m$ for every $g\in G$. We need to show that $|\gamma _{\infty }(G)|$ is $m$-bounded. First suppose that $G$ is soluble. Since $G/F(G)$ has $m$-bounded exponent by Lemma~\ref{l3}, the Fitting height of $G$ is $m$-bounded, which follows from the Hall--Higman theorems \cite{ha-hi}. Hence we can use induction on the Fitting height, with trivial base when the group is nilpotent and $\gamma _{\infty }(G)=1$. When the Fitting height is at least 2, consider the second Fitting subgroup $F_2(G)$. By Lemma \ref{l-metan} we have $\gamma _{\infty }(F_2(G))=\prod _q [F_q,H_{q'}]$, where $F_q$ is a Sylow $q$-subgroup of $F(G)$, and $H_{q'}$ is a Hall ${q'}$-subgroup of $F_2(G)$, the product taken over prime divisors of $|F(G)|$. For a given $q$, let $\bar H_{q'}=H_{q'}/C_{H_{q'}}(F_q)$, and let $V$ be the Frattini quotient $F_q/\Phi (F_q)$. Note that $\bar H_{q'}$ acts faithfully on $V$, since the action is coprime \cite[Satz~III.3.18]{hup}. For every $x\in \bar H_{q'}$ the order $|[V,x]|$ is $m$-bounded by Lemma~\ref{l0}. Then $|\bar H_{q'}|$ is $m$-bounded by Lemma~\ref{l2}. As a result, $|[F_q , H_{q'}]|= |[F_q ,\bar H_{q'}]|$ is $m$-bounded, since $[F_q ,\bar H_{q'}]$ is the product of $m$-boundedly many subgroups $[F_q ,\bar h]$ for $h\in H_{q'}$, each of which has $m$-bounded order by Lemma~\ref{l0}. For the same reasons, there are only $m$-boundedly many primes $q$ for which $[F_q , H_{q'}]\ne 1$. As a result, $|\gamma _{\infty }(F_2(G))|$ is $m$-bounded. Induction on the Fitting height applied to $G/\gamma _{\infty }(F_2(G))$ completes the proof in the case of soluble $G$. 
Now consider the general case. Most of the following arguments follow the same scheme as in the proof of Theorem~1.2 in \cite{khu-shu153}. First we show that the quotient $G/R(G)$ by the soluble radical is of $m$-bounded order. Let $E$ be the socle of $G/R(G)$. It is known that $E$ contains its centralizer in $G/R(G)$, so it suffices to show that $E$ has $m$-bounded order. In the quotient by the soluble radical, $E=S_1\times\dots\times S_k$ is a direct product of non-abelian finite simple groups $S_i$. Since the exponent of $G/F(G)$ is $m$-bounded by Lemma~\ref{l3}, the exponent of $E$ is also $m$-bounded. Now the classification of finite simple groups implies that every $S_i$ has $m$-bounded order, and it remains to show that the number of factors is also $m$-bounded. By Shmidt's theorem \cite[Satz~III.5.1]{hup}, every $S_i$ has a non-nilpotent soluble subgroup $R_i$, for which $\gamma _{\infty} (R_i)\ne 1$. Since we already proved our theorem for soluble groups, we can apply it to $T=R_1\times \dots \times R_k$. We obtain that $|\gamma _{\infty} (T)|$ is $m$-bounded, whence the number of factors is $m$-bounded. Thus, $|G/R(G)|$ is $m$-bounded. Since $|\gamma _{\infty} (R(G))|$ is $m$-bounded by the soluble case proved above, we can consider $G/\gamma _{\infty} (R(G))$ and assume that $R(G)=F(G)$ is nilpotent. Then $|G/F(G)|$ is $m$-bounded. We now use induction on $|G/F(G)|$. The basis of this induction includes the trivial case $G/F(G)=1$ when $\gamma _{\infty}(G)=1$. But the bulk of the proof deals with the case where $G/F(G)$ is a non-abelian simple group. Thus, suppose that $G/F(G)$ is a non-abelian simple group of $m$-bounded order. Let $g\in G$ be an arbitrary element. The subgroup $F(G)\langle g\rangle$ is soluble, and therefore $|\gamma _{\infty }(F(G)\langle g\rangle)|$ is $m$-bounded by the above.
Since $\gamma _{\infty }(F(G)\langle g\rangle)$ is normal in $F(G)$, its normal closure $\langle \gamma _{\infty }(F(G)\langle g\rangle) ^G\rangle$ is a product of at most $|G/F(G)|$ conjugates, each normal in $F(G)$, and therefore has $m$-bounded order. Choose a transversal $\{t_1,\dots, t_k\}$ of $G$ modulo $F(G)$ and set $$ K=\prod _i\langle \gamma _{\infty }(F(G)\langle t_i\rangle) ^G\rangle, $$ which is a normal subgroup of $G$ of $m$-bounded order. It is sufficient to obtain an $m$-bounded estimate for $|\gamma _{\infty }(G/K)|$. Hence we can assume that $K=1$. We remark that then \begin{equation}\label{e-nil} [F(G), g, \dots , g]=1\qquad \text{for any } g\in G, \end{equation} when $g$ is repeated sufficiently many times. Indeed, $g\in F(G)t_i$ for some $t_i$, and the subgroup $F(G)\langle t_i\rangle$ is nilpotent due to our assumption that $K=1$. We now claim that \begin{equation}\label{e-nil2} [F(G), G, \dots , G]=1 \end{equation} if $G$ is repeated sufficiently many times. It is sufficient to prove that $[F_q, G, \dots , G]=1$ for every Sylow $q$-subgroup $F_q$ of $F(G)$. For any $q'$-element $h\in G$ we have $[F_q,h]=[F_q,h,h]$ and therefore $[F_q,h]=1$ in view of \eqref{e-nil}. Let $H$ be the subgroup of $G$ generated by all $q'$-elements. Then $G=F_qH$ since $G/F(G)$ is non-abelian simple, and $[F_q,H]=1$, so that $$ [F_q, G, \dots , G]=[F_q, F_q, \dots , F_q]=1 $$ for a sufficiently long commutator. We finally show that $D:=\gamma _{\infty }(G)$ has $m$-bounded order. First we show that $D=[D,D]$. Indeed, since $G/F(G)$ is non-abelian simple, $D$ is nonsoluble and we must have $$ G=F(G)[D,D]. $$ Taking repeatedly commutator with $G$ on both sides and applying \eqref{e-nil2}, we obtain $D=\gamma _{\infty }(G)\leq [D,D]$, so $D=[D,D]$. Since $F(G)\cap D$ is hypercentral in $D$ by \eqref{e-nil2} and $[D,D]=D$, it follows that $F(G)\cap D\leq Z(D)\cap [D,D]$ by the well-known Gr\"un lemma \cite[Satz~4]{gr}. 
Thus, $D$ is a central covering of the simple group $D/(F(G)\cap D)\cong G/F(G)$, and therefore by Schur's theorem \cite[Hauptsatz~V.23.5]{hup} the order of $D$ is bounded in terms of the $m$-bounded order of $G/F(G)$. Thus, we have proved that $|\gamma _{\infty }(G)|$ is $m$-bounded in the case where $G/F(G)$ is a non-abelian simple group. We now finish the proof of Theorem~\ref{t-finite} by induction on the $m$-bounded order $k=|G/F(G)|$ proving that $|\gamma _{\infty }(G)|$ is $(m,k)$-bounded. The basis of this induction is the case of $G/F(G)$ being simple: nonabelian simple was considered above, and simple of prime order is covered by the soluble case. Now suppose that $G/F(G)$ has a nontrivial proper normal subgroup with full inverse image $N$, so that $F(G)<N\lhd G$. Since $F(N)=F(G)$, by induction applied to $N$, the order $|\gamma _{\infty }(N)|$ is bounded in terms of $m$ and $|N/F(G)|<k$. Since $N/\gamma _{\infty }(N)\leq F( G/\gamma _{\infty }(N))$, by induction applied to $G/\gamma _{\infty }(N)$ the order $| \gamma _{\infty }(G/\gamma _{\infty }(N) )|$ is bounded in terms of $m$ and $|G/N|<k$. As a result, $|\gamma _{\infty }(G)| $ is $(m,k)$-bounded, as required. \end{proof} \section{Profinite almost Engel groups}\label{s-pf} In this and the next sections, unless stated otherwise, a subgroup of a topological group will always mean a closed subgroup, all homomorphisms will be continuous, and quotients will be by closed normal subgroups. This also applies to taking commutator subgroups, normal closures, subgroups generated by subsets, etc. Of course, any finite subgroup is automatically closed. We also say that a subgroup is generated by a subset $X$ if it is generated by $X$ as a topological group. In this section we prove Theorem~\ref{t-e} for profinite groups, while Corollary~\ref{c-em} for profinite groups is an immediate corollary of Theorem~\ref{t-finite}. \begin{theorem}\label{t-eprof} Suppose that $G$ is an almost Engel profinite group. 
Then $G$ has a finite normal subgroup $N$ such that $G/N$ is locally nilpotent. \end{theorem} Recall that pro-(finite nilpotent) groups, that is, inverse limits of finite nilpotent groups, are called \textit{pro\-nil\-po\-tent} groups. \begin{lemma}\label{l-p-n} An almost Engel profinite group is pro\-nil\-po\-tent if and only if it is locally nilpotent. \end{lemma} \begin{proof} Of course, any locally nilpotent profinite group is pro\-nil\-po\-tent. Conversely, suppose that $G$ is an almost Engel pro\-nil\-po\-tent group. We claim that all Engel sinks are trivial: ${\mathscr E}(g)=\{ 1\}$ for every $g\in G$. Indeed, otherwise by Lemma~\ref{l-sink} ${\mathscr E}(g)$ contains a non-trivial element of the form $z=[z,g,\dots ,g]$ with $g$ occurring at least once. Choosing an open normal subgroup $N$ with nilpotent quotient $G/N$ such that $z\not\in N$, we obtain a contradiction. Thus, ${\mathscr E}(g)=\{ 1\}$ for every $g\in G$, which means that all elements of $G$ are Engel elements, that is, $G$ is an Engel profinite group. Then $G$ is locally nilpotent by the Wilson--Zelmanov theorem \cite[Theorem~5]{wi-ze}. \end{proof} Recall that the pro\-nil\-po\-tent residual of a profinite group $G$ is $\gamma _{\infty}(G)=\bigcap _i\gamma _i(G)$, where $\gamma _i(G)$ are the terms of the lower central series; this is the smallest normal subgroup with pro\-nil\-po\-tent quotient. The following lemma is well known and is easy to prove. Here, element orders are understood as Steinitz numbers. The same results also hold in the special case of finite groups. \begin{lemma}\label{l-res} {\rm (a)} The pro\-nil\-po\-tent residual $\gamma _{\infty}(G)$ of a profinite group $G$ is equal to the subgroup generated by all commutators $[x,y]$, where $x,y$ are elements of coprime orders. {\rm (b)} For any normal subgroup $N$ of a profinite group $G$ we have $\gamma _{\infty}(G/N)= \gamma _{\infty}(G)N/N$. 
\end{lemma} \begin{proof} Part (a) follows from the characterization of pro\-nil\-po\-tent groups as profinite groups all of whose Sylow subgroups are normal. Part (b) follows from the fact that for any elements $\bar x, \bar y$ of coprime orders in a quotient $G/N$ of a profinite group $G$ one can find pre-images $x,y\in G$ which also have coprime orders. \end{proof} The following generalization of Hall's criterion for nilpotency \cite{hall58}, which will be used later, already appeared in \cite{khu-shu153}, but we reproduce the proof for the benefit of the reader. We denote the derived subgroup of a group $B$ by $B'$. \begin{proposition}\label{p-hall} {\rm (a)} Suppose that $B$ is a normal subgroup of a group $A$ such that $B$ is nilpotent of class $c$ and $\gamma _{d}(A/B')$ is finite of order $k$. Then the subgroup $C=C_A(\gamma _{d}(A/B'))=\{a\in A\mid [\gamma _{d}(A), a]\leq B'\}$ has finite $k$-bounded index and is nilpotent of $(c,d)$-bounded class. {\rm (b)} Suppose that $B$ is a normal subgroup of a profinite group $A$ such that $B$ is pro\-nil\-po\-tent and $\gamma _{\infty}(A/B')$ is finite. Then the subgroup $D=C_A(\gamma _{\infty }(A/B'))=\{a\in A\mid [\gamma _{\infty }(A), a]\leq B'\}$ is open and pro\-nil\-po\-tent. \end{proposition} \begin{proof} (a) Since $A/C$ embeds into $\operatorname{Aut}\gamma _{d}(A/B')$, the order of $A/C$ is $k$-bounded. We claim that $C$ is nilpotent of $(c,d)$-bounded class. Indeed, using simple-commutator notation for subgroups, we have $$ [\underbrace{C,\dots ,C}_{d+1}, C, C,\dots ]\leq [[\gamma _d(A), C], C, \dots ]\leq [[B,B], C, \dots ], $$ since $[\gamma _d(A), C]\leq B'$ by construction. Applying repeatedly the Three Subgroup Lemma, we obtain \begin{align*} [[B,B], \underbrace{C, \dots ,C}_{2d-1}, C, \dots ]&\leq \prod _{i+j=2d-1}[[B, \underbrace{C, \dots ,C}_{i}], [B,\underbrace{C, \dots ,C}_{j}], C, \dots ]\\ &\leq [[[B,\underbrace{C, \dots ,C}_{d}], B], C, \dots ]\\ &\leq [[[B,B],B],C,\dots ]. 
\end{align*} Thus, $\gamma _{d+1}(C)\leq \gamma _2(B)$, then $\gamma _{(d+1)+(2d-1)}(C)\leq \gamma _3(B)$, then a similar calculation gives \allowbreak $\gamma _{(d+1)+(2d-1)+(3d-2)}(C)\leq \gamma _4(B)$, and so on. An easy induction shows that $\gamma _{1+f(c,d)}(C)\leq \gamma _{c+1}(B)=1$ for $1+f(c,d)=1+dc(c+1)/2-c(c-1)/2$, so that $C$ is nilpotent of class $ f(c,d)$. (b) As a centralizer of a normal section, $D$ is a closed normal subgroup. Since $A/D$ embeds into $\operatorname{Aut}\gamma _{\infty }(A/B')$, the subgroup $D$ has finite index; thus, $D$ is an open subgroup. We now show that the image of $D=C_A(\gamma _{\infty }(A/B'))$ in any finite quotient $\bar A$ of $A$ is nilpotent. Let bars denote the images in $\bar A$. Then $\gamma _{\infty }(\bar A/\bar B')=\overline{\gamma _{\infty }(A/B')}$ by Lemma~\ref{l-res}(b). Therefore, $\bar D\leq C_{\bar A}(\gamma _{\infty }(\bar A/\bar B'))$. In a finite group, $\gamma _{\infty }(\bar A/\bar B')=\gamma _{d}(\bar A/\bar B')$ for some positive integer $d$. Hence $\bar D$ is nilpotent by part (a). \end{proof} In general, the set of Engel elements in a profinite group may not be closed. But in an almost Engel group, Engel elements form a closed set, and moreover the following holds. \begin{lemma}\label{l-closed} Let $G$ be an almost Engel profinite group, and $k$ a positive integer. Then the set $$ E_{k}=\{x\in G\mid |{\mathscr E}(x)|\leq k\} $$ is closed in $G$. \end{lemma} \begin{proof} We wish to show equivalently that the complement of $E_k$ is an open subset of $G$. Every element $g\in (G\setminus E_k)$ is characterized by the fact that $|{\mathscr E}(g)|\geq k+1$. Let $z_1,z_2,\dots ,z_{k+1}$ be some $k+1$ distinct elements in ${\mathscr E}(g)$. Using Lemma~\ref{l-sink} we can write for every $i=1,\dots ,k+1$ \begin{equation}\label{e-open} z_i=[z_i,g,\dots ,g], \quad \text{where } g \text{ is repeated } k_i\geq 1 \text{ times}.
\end{equation} Let $N$ be an open normal subgroup of $G$ such that the images of $z_1,z_2,\dots ,z_{k+1}$ are distinct elements in $G/N$. Then equations \eqref{e-open} show that for any $u\in N$ the Engel sink $ {\mathscr E}(gu)$ contains an element in each of the $k+1$ cosets $z_iN$. Thus, all elements in the coset $gN$ are contained in $G\setminus E_k$. We have shown that every element of $G\setminus E_k$ has a neighbourhood that is also contained in $G\setminus E_k$, which is therefore an open subset of $G$. \end{proof} Recall that in Theorem~\ref{t-eprof} we need to show that an almost Engel profinite group $G$ has a finite normal subgroup such that the quotient is locally nilpotent. The first step is to prove the existence of an open locally nilpotent subgroup. \begin{proposition}\label{p-pf1} If $G$ is an almost Engel profinite group, then it has an open normal pro\-nil\-po\-tent subgroup. \end{proposition} Of course, the subgroup in question will also be locally nilpotent by Lemma~\ref{l-p-n}; the result can also be stated as the openness of the largest normal pro\-nil\-po\-tent subgroup. \begin{proof} For every $g\in G$ we choose an open normal subgroup $N_g$ such that ${\mathscr E}(g)\cap N_g=1$. Then $g$ is an Engel element in $N_g\langle g\rangle$. By Baer's theorem \cite[Satz~III.6.15]{hup}, in every finite quotient of $N_g\langle g\rangle$ the image of $g$ belongs to the Fitting subgroup. As a result, the subgroup $[N_g, g]$ is pro\-nil\-po\-tent. Let $\tilde N_g$ be the normal closure of $[N_g, g]$ in $G$. Since $[N_g, g]$ is normal in $N_g$, which has finite index, $[N_g, g]$ has only finitely many conjugates, so $\tilde N_g$ is a product of finitely many normal subgroups of $N_g$, each of which is pro\-nil\-po\-tent. Hence, so is $\tilde N_g$. Therefore all the subgroups $\tilde N_g$ are contained in the largest normal pro\-nil\-po\-tent subgroup $K$. 
It is easy to see that $G/K$ is an $FC$-group (that is, every conjugacy class is finite): indeed, every $\bar g\in G/K$ is centralized by the image of $N_g$, which has finite index in $G$. A~profinite $FC$-group has finite derived subgroup \cite[Lemma~2.6]{sha}. Hence we can choose an open subgroup of $G/K$ that has trivial intersection with the finite derived subgroup of $G/K$ and therefore is abelian; let $H$ be its full inverse image in $G$. Thus, $H$ is an open subgroup such that the derived subgroup $H'$ is contained in $K$. We now consider the metabelian quotient $M=H/K'$, which is also an almost Engel group, and temporarily use the symbols ${\mathscr E}(g)$ for the Engel $g$-sinks in $M$. For every positive integer $k$, consider the set $$ E_{k}=\{x\in M\mid |{\mathscr E}(x)|\leq k\}. $$ By Lemma~\ref{l-closed}, every set $E_k$ is closed in $M$. Since $M$ is an almost Engel group, we have $$ M=\bigcup _{i} E_{i}. $$ By the Baire category theorem \cite[Theorem~34]{kel}, one of these sets contains an open subset; that is, there is an open subgroup $U$ and a coset $aU$ such that $aU\subseteq E_{m}$ for some $m$. In other words, $|{\mathscr E}(au)|\leq m$ for all $u\in U$. We claim that $|{\mathscr E}(u)|\leq m^2$ for any $u\in U$. Indeed, by Lemma~\ref{l-metab}(a) both ${\mathscr E}(a)$ and ${\mathscr E}(au)$ are normal subgroups of $M$ contained in $M'$. In the quotient $$ \bar M=M/\big({\mathscr E}(a) {\mathscr E}(au)\big), $$ both $\bar M'\langle \bar a\rangle$ and $\bar M' \langle \bar a \bar u\rangle$ are normal locally nilpotent subgroups. Hence their product, which contains $\bar u$, is also a locally nilpotent subgroup by the Hirsch--Plotkin theorem \cite[12.1.2]{rob}. As a result, ${\mathscr E}(u)\leq {\mathscr E}(a) {\mathscr E}(au)$ and therefore $|{\mathscr E}(u)|\leq |{\mathscr E}(a)| \cdot |{\mathscr E}(au)|\leq m^2$. Since $M'\leq K/K'$, it is easy to see that ${\mathscr E}(u) ={\mathscr E}(uk)$ for any $u\in U$ and any $k\in K/K'$. 
Therefore, setting $V=U(K/K')$ we obtain the uniform bound $|{\mathscr E}(v)|\leq m^2$ for all $v\in V$. The same inequality holds in every finite quotient $\bar V$ of $V$, to which we can therefore apply Theorem~\ref{t-finite}. As a result, $|\gamma _{\infty}(\bar V)|\leq n$ for some number $n=n(m)$ depending only on $m$. Then also $|\gamma _{\infty}( V)|\leq n$. Let $W$ be the full inverse image of $V$, which is an open subgroup of $G$ containing $K$, and let $ \Gamma$ be the full inverse image of $\gamma _{\infty}( V)$. Now let $F=C_W(\gamma _{\infty}( V))=\{w\in W\mid [\Gamma ,w]\leq K'\}$. By Proposition~\ref{p-hall}(b), this is an open normal pro\-nil\-po\-tent subgroup, which completes the proof of Proposition~\ref{p-pf1}. \end{proof} \begin{proof}[Proof of Theorem~\ref{t-eprof}] Recall that $G$ is an almost Engel profinite group, and we need to show that $\gamma _{\infty}(G)$ is finite. Henceforth we denote by $F(L)$ the largest normal pro\-nil\-po\-tent subgroup of a profinite group $L$. By Proposition~\ref{p-pf1} we already know that $G$ has an open normal pro\-nil\-po\-tent subgroup, so that $F(G)$ is also open. Further arguments largely follow the scheme of proof of Theorem~1.1 in \cite{khu-shu153}. Since $G/F(G)$ is finite, we can use induction on $|G/F(G)|$. The basis of this induction includes the trivial case $G/F(G)=1$ when $\gamma _{\infty}(G)=1$. But the bulk of the proof deals with the case where $G/F(G)$ is a finite simple group. Thus, we assume that $G/F(G)$ is a finite simple group (abelian or non-abelian). Let $p$ be a prime divisor of $|G/F(G)|$, and $g\in G\setminus F(G)$ an element of order $p^n$, where $n$ is either a positive integer or $\infty$ (so $p^n$ is a Steinitz number). For any prime $q\ne p$, the element $g$ acts by conjugation on the Sylow $q$-subgroup $Q$ of $F(G)$ as an automorphism of order dividing $p^n$.
The subgroup $[Q, g]$ is a normal subgroup of $Q$ and therefore also a normal subgroup of $F(G)$. The image of $[Q, g]$ in any finite quotient has order bounded in terms of $|{\mathscr E}(g)|$ by Lemma~\ref{l0}. It follows that $[Q, g]$ is finite of order bounded in terms of $|{\mathscr E}(g)|$. Since $[Q, g]$ is normal in $F(G)$, its normal closure $\langle [Q, g]^G\rangle $ in $G$ is a product of finitely many conjugates and is therefore also finite. Let $R$ be the product of these closures $\langle [Q, g]^G\rangle $ over all Sylow $q$-subgroups $Q$ of $F(G)$ for $q\ne p$. Since $[Q, g]$ is finite of order bounded in terms of $|{\mathscr E}(g)|$ as shown above, there are only finitely many primes $q$ such that $[Q,g]\ne 1$ for the Sylow $q$-subgroup $Q$ of $F(G)$. Therefore $R$ is finite, and it is sufficient to prove that $\gamma _{\infty }(G/R)$ is finite. Thus, we can assume that $R=1$. Note that then $[Q, g^a]=1$ for any conjugate $g^a$ of $g$ and any Sylow $q$-subgroup $Q$ of $F(G)$ for $q\ne p$. Choose a transversal $\{t_1,\dots, t_k\}$ of $G$ modulo $F(G)$. Let $G_1=\langle g^{t_1}, \dots ,g^{t_k}\rangle$. Clearly, $G_1F(G)/F(G)$ is generated by the conjugacy class of the image of $g$. Since $G/F(G)$ is simple, we have $G_1F(G)=G$. By our assumption, the Cartesian product $T$ of all Sylow $q$-subgroups of $F(G)$ for $q\ne p$ is centralized by all elements $g^{t_i}$. Hence, $[G_1, T]=1$. Let $P$ be the Sylow $p$-subgroup of $F(G)$ (possibly, trivial). Then also $[PG_1, T]=1$, and therefore $$\gamma _{\infty }(G)=\gamma _{\infty }(G_1F(G))= \gamma _{\infty }(PG_1).$$ The image of $\gamma _{\infty }(PG_1)\cap T$ in $G/P$ is contained both in the centre and in the derived subgroup of $PG_1/P$ and therefore is isomorphic to a subgroup of the Schur multiplier of the finite group $G/F(G)$. Since the Schur multiplier of a finite group is finite \cite[Hauptsatz~V.23.5]{hup}, we obtain that $\gamma _{\infty }(G)\cap T=\gamma _{\infty }( PG_1)\cap T$ is finite. 
Therefore we can assume that $T=1$, in other words, that $F(G)$ is a $p$-group. If $|G/F(G)|=p$, then $G$ is a pro-$p$ group, so it is pro\-nil\-po\-tent, which means that $\gamma _{\infty }( G)=1$ and the proof is complete. If $G/F(G)$ is a non-abelian simple group, then we choose another prime $r\ne p$ dividing $|G/F(G)|$ and repeat the same arguments as above with $r$ in place of $p$. As a result, we reduce the proof to the case $F(G)=1$, where the result is obvious. We now finish the proof of Theorem~\ref{t-eprof} by induction on $|G/F(G)|$. The basis of this induction where $G/F(G)$ is a simple group was proved above. Now suppose that $G/F(G)$ has a nontrivial proper normal subgroup with full inverse image $N$, so that $F(G)<N\lhd G$. Since $F(N)=F(G)$, by induction applied to $N$ the group $\gamma _{\infty }(N)$ is finite. Since $N/\gamma _{\infty }(N)\leq F( G/\gamma _{\infty }(N))$, by induction applied to $G/\gamma _{\infty }(N)$ the group $ \gamma _{\infty }(G/\gamma _{\infty }(N) )$ is also finite. As a result, $\gamma _{\infty }(G) $ is finite, as required. \end{proof} \section{Compact almost Engel groups} \label{s-comp} In this section we prove the main Theorem~\ref{t-e} about compact almost Engel groups. We use the structure theorems for compact groups and the results of the preceding section on profinite almost Engel groups. Recall that a group $H$ is said to be \emph{divisible} if for every $h\in H$ and every positive integer $k$ there is an element $x\in H$ such that $x^k=h$. \begin{proposition}\label{pr-div} If $H$ is an almost Engel divisible group, then $H$ is in fact an Engel group. \end{proposition} \begin{proof} We need to show that ${\mathscr E}(h)=\{ 1\}$ for every $h\in H$. Let $|{\mathscr E}(h)|=k$. Let $g\in H$ be an element such that $g^{k!}=h$. Since $g$ centralizes $h$, by Lemma~\ref{l-c-s} we obtain that $h=g^{k!}$ centralizes ${\mathscr E}(h)$. By the minimality of the $h$-sink, then ${\mathscr E}(h)=\{1\}$, as required. 
\end{proof} We are ready to prove Theorem~\ref{t-e}. \begin{proof}[Proof of Theorem~\ref{t-e}] Let $G$ be an almost Engel compact group; we need to show that there is a finite subgroup $N$ such that $G/N$ is locally nilpotent. By the well-known structure theorems (see, for example, \cite[Theorems~9.24 and 9.35]{h-m}), the connected component of the identity $G_0$ in $G$ is a divisible group such that $G_0/Z(G_0)$ is a Cartesian product of simple compact Lie groups, while the quotient $G/G_0$ is a profinite group. By Proposition~\ref{pr-div}, $G_0$ is an Engel group. Compact Lie groups are linear groups, and linear Engel groups are locally nilpotent by the Garashchuk--Suprunenko theorem \cite{gar-sup} (see also \cite{grb}). Since quotients of Engel groups are again Engel groups, every simple factor of $G_0/Z(G_0)$ would be a locally nilpotent non-abelian simple group, which is impossible; hence $G_0=Z(G_0)$ is an abelian subgroup. \begin{remark} If a compact group $G$ is an Engel group, then by the above $G$ is an extension of an abelian subgroup $G_0$ by a profinite group $G/G_0$. Being an Engel group, $G/G_0$ is locally nilpotent by the Wilson--Zelmanov theorem \cite[Theorem~5]{wi-ze}. It is known that an Engel abelian-by-(locally nilpotent) group is locally nilpotent (see, for example, \cite[12.3.3]{rob}). This gives an alternative proof of Medvedev's theorem \cite{med}. \end{remark} We proceed with the proof of Theorem~\ref{t-e}. \begin{lemma}\label{l-eng} For every $g\in G$ we have ${\mathscr E}(g)\cap G_0=1$, which is equivalent to the fact that for any $x\in G_0$ we have $$ [x,g,\dots ,g]=1 $$ if $g$ is repeated sufficiently many times. \end{lemma} \begin{proof} Suppose the opposite and choose $1\ne z\in {\mathscr E}(g)\cap G_0$. By Lemma~\ref{l-sink} then $z=[z,g,\dots, g]$ with $g$ occurring at least once. Hence $z$ belongs also to the $g$-sink within the semidirect product $G_0\langle g\rangle$. Since this subgroup is metabelian, by Lemma~\ref{l-metab}(a) the $g$-sink in $G_0\langle g\rangle$ is a subgroup, which is equal to ${\mathscr E}(g)\cap G_0$.
Therefore we can choose $z_1$ in ${\mathscr E}(g)\cap G_0$ of some prime order $p=|z_1|$. Again by Lemma~\ref{l-sink} we have $z_1=[z_1,g,\dots, g]$ with $g$ occurring at least once. In the divisible group $G_0$ for every $k=1,2,\dots $ there is an element $z_k$ such that $z_k^{p^k}=z_1$. We have $y_k= [z_k, g,\dots ,g]\in {\mathscr E}(g)\cap G_0$ when $g$ is repeated sufficiently many times. Then $y_k^{p^k}= [z_k^{p^k}, g,\dots ,g]=[z_1,g,\dots, g]$, which is an element of the orbit of $z_1$ in ${\mathscr E}(g)\cap G_0$ under the mapping $x\mapsto [x,g]$ and therefore has the same order $p=|z_1|$ by Lemma~\ref{l-metab}(b). Thus, $y_k$ is an element of ${\mathscr E}(g)$ of order exactly $p^{k+1}$, for $k=1,2,\dots $. As a result, ${\mathscr E}(g)$ is infinite, contradicting the hypothesis. \end{proof} Applying Theorem~\ref{t-eprof} to the almost Engel profinite group $\bar G=G/G_0$ we obtain a finite normal subgroup $D$ with locally nilpotent quotient. Then $D$ contains all Engel sinks $\bar{\mathscr E}(g)$ of elements $g\in\bar G$ and therefore also the subgroup $E$ generated by them: $$ E:=\langle \bar{\mathscr E}(g)\mid g\in \bar G\rangle\leq D. $$ Clearly, $\bar{\mathscr E}(g)^h=\bar{\mathscr E}(g^h)$ for any $h\in\bar G$; hence $E$ is a finite normal subgroup of $\bar G$. We may replace $D$ by $E$: indeed, $\bar G/E$ is an Engel profinite group, and hence is also locally nilpotent by the Wilson--Zelmanov theorem \cite[Theorem~5]{wi-ze}. We now consider the action of $\bar G$ by automorphisms on $G_0$ induced by conjugation. \begin{lemma}\label{l-central} The subgroup $E$ acts trivially on $G_0$. \end{lemma} \begin{proof} The abelian divisible group $G_0$ is a direct product $A_0\times\prod _pA_p$ of a torsion-free divisible group $A_0$ and Sylow subgroups $A_p$ over various primes $p$. Clearly, every Sylow subgroup is normal in $G$. First we show that $E$ acts trivially on each $A_p$.
It is sufficient to show that for every $g\in \bar G$ every element $z\in \bar{\mathscr E}(g)$ acts trivially on $A_p$. Consider the action of $\langle z, g\rangle$ on $A_p$. Note that $\langle z, g\rangle=\langle z^{\langle g\rangle}\rangle\langle g\rangle$, where $\langle z^{\langle g\rangle}\rangle$ is a finite $g$-invariant subgroup, since it is contained in the finite subgroup $E$. For any $a\in A_p$ the subgroup $$ \langle a^{\langle g\rangle}\rangle=\langle a,[a,g], [a,g,g],\dots \rangle $$ is a finite $p$-group by Lemma~\ref{l-eng}, and this subgroup is $g$-invariant. Its images under the action of elements of the finite group $\langle z^{\langle g\rangle}\rangle$ generate a finite $p$-group $B$, which is $\langle z, g\rangle$-invariant. Lemma~\ref{l-eng} implies that the image of $\langle z, g\rangle$ in its action on $B$ must be a $p$-group. Indeed, otherwise this image contains a $p'$-element $y$ that acts non-trivially on the Frattini quotient $V=B/\Phi (B)$. Then $V=[V,y]\times C_V(y)$ with $[V,y]\ne 1$ and $C_{[V,y]}(y)=1$, whence $[V,y]=\{[v,y]\mid v\in [V,y]\}$ and therefore also $[V,y]=\{[v,{y,\dots ,y}]\mid v\in [V,y]\} $ with $y$ repeated $n$ times, for any $n$, contrary to Lemma~\ref{l-eng}. But since $z$ is an element of $\bar{\mathscr E}(g)$, by Lemma~\ref{l-sink} we have $z=[z,g,\dots ,g]$ with at least one occurrence of $g$. Since a finite $p$-group is nilpotent, this implies that the image of $z$ in its action on $B$ must be trivial. In particular, $z$ centralizes $a$. As a result, $E$ acts trivially on $A_p$, for every prime $p$. We now show that $E$ also acts trivially on the quotient $W=G_0/\prod _pA_p$ of $G_0$ by its torsion part. Note that $W$ can be regarded as a vector space over ${\mathbb Q}$. Every element $g\in E$ has finite order and therefore by Maschke's theorem $W=[W,g]\times C_W(g)$. If $[W,g]\ne 1$, then $[W,g]=\{[w,{g,\dots ,g}]\mid w\in [W,g]\} $ with $g$ repeated $n$ times, for any $n$. This contradicts Lemma~\ref{l-eng}.
Thus, $E$ acts trivially both on $W$ and on $\prod _pA_p$. Then any automorphism of $G_0$ induced by conjugation by $g\in E$ acts on every element $a\in A_0$ as $a^g=at$, where $t=t(a)$ is an element of finite order in $G_0$. Since $a^{g^i}=at^i$, the order of $t$ must divide the order of $g$. Assuming the action of $E$ on $G_0$ to be non-trivial, choose an element $g\in E$ acting on $G_0$ as an automorphism of some prime order $p$. Then there is $a\in A_0$ such that $a^g=at$, where $t\in G_0$ has order $p$. For any $k=1,2,\dots $ there is an element $a_k\in A_0$ such that $a_k^{p^k}=a$. Then $a_k^g=a_kt_k$, where $t_k^{p^k}=t$. Thus $|t_k|=p^{k+1}$, and therefore $p^{k+1}$ divides the order of $g$, for every $k=1,2,\dots$. This is a contradiction, since $g$ has finite order. \end{proof} Let $F$ be the full inverse image of $E$ in $G$. Recall that we have normal subgroups $G_0\leq F\leq G$ such that $G_0$ is divisible, $F/G_0$ is finite, and $G/F$ is locally nilpotent. We aim to produce a finite normal subgroup $N$ such that $G/N$ is locally nilpotent. By Lemma~\ref{l-central} the subgroup $G_0$ is contained in the centre of the full inverse image $F$ of $E$, so that $F$ has centre of finite index. Then the derived subgroup $F'$ is finite by Schur's theorem \cite[Satz~IV.2.3]{hup}. We can assume that $F'=1$, so that $F$ is abelian. In the abstract abelian group $F$ the divisible subgroup $G_0$ has a complement $C$, which is obviously finite, so that $F=G_0\times C$. Let $M$ be the normal closure of $C$ in $G$. Note that $G/M$ is a locally nilpotent group. Indeed, $G/F$ is locally nilpotent, while $F/M$ is $G$-isomorphic to $G_0/(G_0\cap M)$, so that $G/M$ is an Engel group by Lemma~\ref{l-eng}. Being an abelian-by-(locally nilpotent) Engel group, $G/M$ is locally nilpotent. Consider the natural semidirect product $M\rtimes (G/C_G(M))$ with the induced topology, in which the action of $G/C_G(M)$ on $M$ is continuous.
Both $M$ and $G/C_G(M)$ are profinite groups; therefore $M\rtimes (G/C_G(M))$ is also a profinite group (see \cite[Lemma 1.3.6]{wilson}). Note that $G/C_G(M)$ is locally nilpotent: since $M\leq C_G(M)$, it is a quotient of the locally nilpotent group $G/M$. The group $M\rtimes (G/C_G(M))$ is also almost Engel, so by Theorem~\ref{t-eprof} it contains a finite normal subgroup with locally nilpotent quotient. This finite subgroup therefore contains the subgroup $K$ generated by all Engel sinks in $M\rtimes (G/C_G(M))$; since $G/C_G(M)$ is locally nilpotent, $K\leq M$. Since $G/M$ is also locally nilpotent, the Engel sinks in $G$ are all contained in $M$ and coincide with the Engel sinks in $M\rtimes (G/C_G(M))$. Renaming $K$ as $N$, regarded as a subgroup of $G$, we arrive at the required result. Indeed, $N$ is a normal subgroup because ${\mathscr E}(g)^h={\mathscr E}(g^h)$ for any $h\in G$. The group $G/N$ is locally nilpotent, being an Engel group that is an extension of an abelian group $F/N$ by a locally nilpotent group $G/F$. The proof of Theorem~\ref{t-e} is complete. \end{proof} \begin{corollary} \label{c-em} Let $G$ be an almost Engel compact group such that for some positive integer $m$ all Engel sinks ${\mathscr E}(g)$ have cardinality at most $m$. Then $G$ has a finite normal subgroup $N$ of order bounded in terms of $m$ such that $G/N$ is locally nilpotent. \end{corollary} \begin{proof} By Theorem~\ref{t-e} the group $G$ is finite-by-(locally nilpotent). Therefore every abstract finitely generated subgroup $H$ of $G$ is finite-by-nilpotent and residually finite. By Theorem~\ref{t-finite}, every finite quotient of $H$ has nilpotent residual of $m$-bounded order. Hence $\gamma _{\infty}(H)$ is also finite of $m$-bounded order, and $H/\gamma _{\infty}(H)$ is nilpotent. Thus, every finitely generated subgroup of $G$ has a normal subgroup of $m$-bounded order with nilpotent quotient. By the standard inverse limit argument, the group $G$ has a normal subgroup of $m$-bounded order with locally nilpotent quotient.
\end{proof} \section*{Acknowledgements} The authors thank Gunnar Traustason and John Wilson for stimulating discussions. The first author was supported by the Russian Science Foundation, project no. 14-21-00065, and the second by the Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'ogico (CNPq), Brazil. The first author thanks CNPq-Brazil and the University of Brasilia for support and hospitality that he enjoyed during his visits to Brasilia.
https://arxiv.org/abs/1812.04131
An upper bound for the restricted online Ramsey number
The restricted $(m,n;N)$-online Ramsey game is a game played between two players, Builder and Painter. The game starts with $N$ isolated vertices. Each turn Builder picks an edge to build and Painter chooses whether that edge is red or blue, and Builder aims to create a red $K_m$ or blue $K_n$ in as few turns as possible. The restricted online Ramsey number $\tilde{r}(m,n;N)$ is the minimum number of turns that Builder needs to guarantee her win in the restricted $(m,n;N)$-online Ramsey game. We show that if $N=r(n,n)$, \[ \tilde{r}(n,n;N)\le \binom{N}{2} - \Omega(N\log N), \] motivated by a question posed by Conlon, Fox, Grinshpun and He. The equivalent game played on infinitely many vertices is called the online Ramsey game. As almost all known Builder strategies in the online Ramsey game end up reducing to the restricted setting, we expect further progress on the restricted online Ramsey game to have applications in the general case.
\section{Introduction} Ramsey's theorem states that for any $m, n\ge 3$, there exists a least positive integer $N = r(m,n)$ such that any red-blue coloring of the edges of the complete graph $K_{N}$ contains either a red $m$-clique or blue $n$-clique. These particular integers $r(m,n)$ are the \textit{Ramsey numbers}. Determining the growth rate of the Ramsey numbers $r(m,n)$ is perhaps the central problem of Ramsey theory, and much is still unknown. An early result of Erd\H os and Szekeres guarantees that $r(n,n)\leq 2^{2n}$ \cite{ErSk}, and in the other direction Erd\H os proved that $r(n,n) \ge 2^{n/2}$ \cite{Erdos}. No exponential improvement has been made on either bound in the decades since they were proven. This paper is concerned with a widely-studied variant of Ramsey numbers, called \textit{online Ramsey numbers}. First we define the \textit{online Ramsey game}, which is played between two players called Builder and Painter. Fix positive integers $m, n\ge 3$. The game takes place on an infinite set of isolated vertices. Each turn, Builder chooses two non-adjacent vertices and builds the edge between them. Painter then paints the edge either red or blue. Builder wins when a red $m$-clique or blue $n$-clique appears in the graph, and Painter's goal is to prevent Builder's win for as long as possible. The \textit{online Ramsey number} $\tilde{r}(m,n)$ is the smallest $t$ such that Builder has a strategy to win within $t$ turns regardless of how Painter plays. Online Ramsey numbers were first introduced by Beck \cite{beck} and independently by Kurek and Ruciński \cite{kr}. One can easily find an exponential bound on the online Ramsey number $\tilde{r}(m,n)$ using the classical Ramsey number as follows.
\begin{equation} \label{eq:trivial} \frac{r(m,n)}{2} \leq \tilde{r}(m,n) \leq \binom{r(m,n)}{2} \end{equation} However, unlike the classical Ramsey numbers which have seen no exponential improvements in decades, both sides of (\ref{eq:trivial}) have been improved. Conlon \cite{Conlon} proved an exponential improvement on the upper bound, showing that for infinitely many $n$ \[\tilde{r}(n,n) \leq 1.001^{-n}\binom{r(n,n)}{2}.\] In the other direction, Conlon, Fox, Grinshpun, and He \cite{xiaoyu} used the probabilistic method to prove an exponential improvement to the lower bound as well, showing for all $n\ge 3$, \[\tilde{r}(n,n) \geq 2^{(2-\sqrt{2})n + O(1)}.\] Fix positive integers $m, n$, and $N$. The $(m,n;N)$-\textit{restricted online Ramsey game} is the online Ramsey game played on a finite vertex set of size $N$. The \textit{restricted online Ramsey number} $\tilde{r}(m,n;N)$, defined in \cite{xiaoyu}, is the number of moves Builder must take to ensure victory in this game. Of course, this number is only defined when $N \geq r(m,n)$. Many of the Builder strategies used throughout \cite{xiaoyu} and \cite{Conlon} reduce the $(m,n)$-online Ramsey game to the $(m',n';r(m',n'))$-restricted game, for some $m' < m$ and $n' < n$, and then apply the trivial bound \[ \tilde{r}(m',n';r(m',n')) \le \binom{r(m',n')}{2}. \] Improved bounds on the restricted online Ramsey number may allow for corresponding improvements in these Builder strategies, motivating our work on these problems. Our main result on the restricted online Ramsey game is the following. \begin{theorem}\label{thm:main} If $n\ge 3$ and $N = r(n,n)$, then \[ \tilde{r}(n,n;N)\leq{N \choose 2} - \Omega(N\log N). \] \end{theorem} Since $\binom{N}{2}$ is the total number of edges in $K_N$, we say that Builder may always save $\Omega(N\log N)$ moves in the restricted $(n,n;N)$-online Ramsey game. The rest of this paper is organized as follows. 
In the next section, we make some basic definitions and collect some useful results from extremal graph theory. After that, we divide the proof of Theorem~\ref{thm:main} into two sections. Section~\ref{sec:medbip} shows that if Builder builds a large complete multipartite graph, some pair of parts will have many edges of both colors between them. Section~\ref{sec:indpair} then shows that within these two parts of the graph, Builder can save many moves by constructing a large family of what we call {\it independent pairs}. Finally in Section~\ref{sec:closing} we mention some open problems surrounding the restricted online Ramsey number. \section{Preliminaries} Henceforth, we let $N=r(n,n)$, and simply call the $(n,n;N)$-restricted online Ramsey game the $(n,n;N)$-game. Informally, a bichromatic graph is a graph whose edges are colored red or blue. \begin{definition} A \textit{bichromatic graph} is a triple of sets $G=(V,R,B)$ where $R,B\subseteq {V\choose 2}$ and $R\cap B=\emptyset$. We say that $V$ is the vertex set and $R$ (resp. $B$) is the set of red (resp. blue) edges of $G$. \end{definition} If $G$ is a bichromatic graph, write $d_R (G) = \frac{|R|}{|R|+|B|}$ for the density of red edges (out of all the edges) in $G$. We say that a bichromatic graph is {\it $\varepsilon$-color-balanced} if $\varepsilon \le d_R(G) \le 1-\varepsilon$. Induced bichromatic subgraphs are defined in the obvious way. The \textit{underlying graph} of a bichromatic graph $G=(V,R,B)$ is the (uncolored) graph $(V,R\cup B)$, and we say an (uncolored) graph is contained in $G$ if it is a subgraph of the underlying graph of $G$. A bichromatic graph is complete if its underlying graph is complete. We will think of bichromatic graphs as intermediate states in the $(n,n;N)$-game, and show that if Builder can reach certain bichromatic bipartite graphs $G$, Builder will be able to save many moves from those $G$.
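As a quick numerical illustration of these definitions (an example added here for concreteness, not part of the original argument): suppose $G=(V,R,B)$ is a complete bichromatic graph on four vertices with $|R|=2$ red edges and $|B|=4$ blue edges. Then
\[
d_R(G)=\frac{|R|}{|R|+|B|}=\frac{2}{2+4}=\frac{1}{3},
\]
so $G$ is $\varepsilon$-color-balanced precisely for $0<\varepsilon\le \frac{1}{3}$, since the condition $\varepsilon\le d_R(G)\le 1-\varepsilon$ amounts to $\varepsilon\le\frac{1}{3}$ here.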
\begin{definition} If $G$ is a bichromatic graph on $N$ vertices, define $s(G)$ to be the largest $s\in \mathbb{N}$ for which Builder can win the $(n,n;N)$-game starting from $G$ using $\binom{N}{2}-e(G)-s$ moves, where $e(G)=|R|+|B|$ is the number of built edges of $G$. \end{definition} We can now restate our Theorem~\ref{thm:main} in terms of $G$. \begin{theorem}\label{thm:nlogn} If $G$ is the empty bichromatic graph on $N$ vertices, $$s(G)=\Omega(N\log N).$$ \end{theorem} In other words, Builder can always save $\Omega(N\log N)$ edges in the $(n,n;N)$-game. In order to prove this theorem, we look to build structures from which Builder can always save moves. For any bichromatic graph $G=(V,R,B)$, we call a pair $(u,v)\in V^2$ of non-adjacent vertices an {\it unbuilt pair}. \begin{definition} If $G = (V, R, B)$ is a bichromatic graph, two vertex-disjoint pairs $(u_1,u_2)$ and $(v_1,v_2)$ in $V^2$ are {\it independent} if both are unbuilt and there exist both a red edge $u_i v_j \in R$ and a blue edge $u_{i'} v_{j'} \in B$ for some (not necessarily distinct) $i,i',j,j'\in\{1,2\}$. \end{definition} The essential observation is that if two pairs $(u_1, u_2)$ and $(v_1, v_2)$ are independent in $G$, then the four vertices $u_1, u_2, v_1, v_2$ will never be in the same monochromatic clique. \begin{lemma}\label{lem:pairwise} If $G$ is a bichromatic graph on $N$ vertices containing $s$ pairs $p_1,\ldots, p_s$ each independent of each of $t$ pairs $q_1,\ldots, q_t$, then $s(G)\ge \min(s,t)$. \begin{proof} Builder's strategy from this point forward is to build all the edges other than $p_1,\ldots, p_s$ and $q_1, \ldots, q_t$. Once this is done, let $G'$ be the resulting bichromatic graph. We claim that Builder only needs to build either $p_1,\ldots, p_s$ or $q_1, \ldots, q_t$ in $G'$ to win. Indeed, if all pairs $p_1,\ldots, p_s$ and $q_1, \ldots, q_t$ are built, the resulting graph is complete of size $N$ and must contain a monochromatic clique by Ramsey's theorem.
But independent pairs cannot lie in a monochromatic clique together. In particular, either building all the $p_i$ or all the $q_j$ alone will force a monochromatic clique already. The result follows. \end{proof} \end{lemma} In general, if $G$ is a bichromatic graph on $N$ vertices containing unbuilt pairs $p_{j,1},\ldots, p_{j,s_j}$ for all $j=1,\cdots,t$ (so there are $\sum_{j=1}^ts_j$ of them), such that $p_{j_1,k}$ is independent of $p_{j_2,l}$ for any $j_1\neq j_2$, then $s(G)\ge s_1+s_2+\cdots+s_t-\max(s_1,s_2,\cdots,s_t)$. We then collect two old results in extremal graph theory that we will need. The first shows that a sparse graph contains a larger clique or independent set than Ramsey's theorem predicts. \begin{lemma}\label{lem:bigclique} (Erd\H os and Szemer\'edi \cite{es}.) There exists a universal constant $a>0$ such that if $G$ is a graph on $N$ vertices with $r\leq \varepsilon N^2$ edges, then either $G$ or its complement contains a clique of size $s$ where \[ s > \frac{a\log N}{\varepsilon \log (\varepsilon^{-1})}. \] \end{lemma} The second result is a theorem of K\"ov\'ari, S\'os, and Tur\'an \cite{Kovari}, addressing the famous problem of Zarankiewicz. It shows that dense bipartite graphs contain large complete bipartite subgraphs. \begin{lemma} (K\"ov\'ari, S\'os, Tur\'an \cite{Kovari}.)\label{lem:Kovari} Suppose $m\ge s\ge 1$ and $n\ge t\ge 1$, $G = (U,V,E)$ is a bipartite graph for which $|U| = m$ and $|V|=n$, and $G$ contains no subgraph isomorphic to $K_{s,t}$. Then, \[ |E| < (t-1)^{1/s}(m-s+1)n^{1-1/s}+(s-1)n. \] \end{lemma} We only need a straightforward consequence of the above result. \begin{lemma}\label{lem:computation} Suppose $G=(U,V,E)$ is a bipartite graph satisfying $|U|=N_0,|V|=\binom{N_0}{2}$ with at least $\delta|U||V|$ edges, and $N_0$ is sufficiently large. Then, $G$ contains a copy of $K_{a,b}$, where $a = \delta \log N_0$ and $b=N_0\log N_0$. \begin{proof} Suppose otherwise.
Using Lemma~\ref{lem:Kovari}, we can compute that for $N_0$ sufficiently large, \begin{align*} |E|&<(N_0\log N_0-1)^{\frac{1}{\delta \log N_0}}(N_0-\delta \log N_0+1)\binom{N_0}{2}^{1-\frac{1}{\delta \log N_0}}+(\delta \log N_0-1)\binom{N_0}{2}\\ &<(N_0\log N_0)^{\frac{1}{\delta \log N_0}}N_0(N_0^2)^{1-\frac{1}{\delta \log N_0}}+\delta \log N_0\cdot N_0^2\\ &<2^{-\frac{1}{\delta}}(\log N_0)^{\frac{1}{\delta \log N_0}}N_0^3+o(\delta N_0^3)\\ &<\delta N_0^3. \end{align*} This is a contradiction. \end{proof} \end{lemma} Finally, for the sake of notational convenience in the following sections, we further make the following definitions. Write $K_{(X \times Y)}$ for the complete multipartite graph with $X$ parts $U_1, \ldots, U_X$ each of size $Y$. If $G = (V,R,B)$ has underlying graph $K_{(X\times Y)}$, define the {\it $\varepsilon$-reduced graph} of $G$ to be the graph $G_{\varepsilon}$ whose vertices are the parts $U_i$ of $G$, and whose edges are defined as follows. If $U_i, U_j$ are distinct parts of $G$, then there is a red edge between them in $G_{\varepsilon}$ if $d_R(G[U_i \cup U_j]) > 1 - \varepsilon$, and a blue edge if instead $d_R(G[U_i \cup U_j]) < \varepsilon$. \section{Constructing a color-balanced bipartite graph}\label{sec:medbip} The first step of the Builder strategy is to construct a bichromatic graph $G$ with a large color-balanced bipartite subgraph. The main lemma of this section is that if Builder starts by building a $K_{(X\times Y)}$ and the reduced graph $G_\varepsilon$ turns out to be complete, then Builder can save many moves in the $(n,n;N)$-game. \begin{lemma}\label{thm:multipartite} For all $n$, there exist universal constants $C, \varepsilon > 0$ such that if $G$ is a bichromatic graph on $N = r(n,n)$ vertices with induced subgraph $K_{(C \times N/C)}$ and the $\varepsilon$-reduced graph $G_{\varepsilon}$ of $G$ is complete, then \[ s(G) \ge \Omega(N^2). 
\] \end{lemma} \begin{proof} Suppose we have a bichromatic graph $G=(V,R,B)$ with a complete $\varepsilon$-reduced graph $G_{\varepsilon} = (V',R',B')$. Since $r(t,t)\le 2^{2t}$ by the Erd\H os--Szekeres upper bound, we see that \[ |V'|=C=2^{2\cdot \frac{1}{2}\log_2 C}\ge r(\tfrac{1}{2}\log_2 C,\tfrac{1}{2}\log_2 C). \] Therefore, there is a monochromatic clique of size $t = \frac{1}{2}\log_2 C$ in $G_{\varepsilon}$. Without loss of generality, let it be blue, and let its vertices be $U_1, \ldots, U_t$. The definition of the reduced graph tells us that $H = G[U_1\cup \ldots \cup U_t]$ is complete multipartite with $t$ parts and between each pair of distinct parts at most an $\varepsilon$ fraction of the edges are red. Therefore overall, $d_R(H) < \varepsilon$. It suffices to show that Builder can always guarantee a monochromatic clique of size $n$ by building all the unbuilt pairs in $H$, since then Builder wins with $\Omega(N^2)$ pairs unbuilt in the rest of $G$. First, note that the only pairs unbuilt in $H$ are the pairs within individual parts, of which there are $t\binom{N/C}{2}$. For $C$ sufficiently large, this is less than an $\varepsilon$ fraction of all the pairs of vertices in $H$. In particular, since $d_R(H) < \varepsilon$, if Builder builds the remaining edges within $H$, the resulting complete bichromatic graph $H'$ satisfies $d_R(H') < 2 \varepsilon$ regardless of Painter's choices. From Lemma~\ref{lem:bigclique} it directly follows that, regardless of Painter's choices, $H'$ will contain a monochromatic clique of size at least \[ u = \Omega \Big( \frac{\log (N)}{\varepsilon \log (\varepsilon^{-1})}\Big). \] Since $n = \Theta( \log N)$, taking $\varepsilon$ small enough and $C$ large enough in terms of $\varepsilon$, we can make $u > n$. In total, we have exhibited a strategy which guarantees a monochromatic $n$-clique if we begin with a complete multipartite graph $K_{(C\times N/C)}$ in which no pair of parts is $\varepsilon$-color-balanced.
Given this starting point, we can then fill in the unbuilt pairs in $O(\log C)$ of the parts. Within the remaining $(1-o(1))C$ parts Builder thus saves $\Omega(N^2)$ moves, completing the proof. \end{proof} Lemma~\ref{thm:multipartite} handles the case when $G_\varepsilon$ is complete. The remaining case is that for some pair of parts $U_i, U_j$, the induced subgraph $G[U_i \cup U_j]$ is color-balanced. We handle this case in the next section. \section{Independent pairs}\label{sec:indpair} By Lemma~\ref{thm:multipartite}, it will suffice to solve the problem for color-balanced bipartite graphs $G$. To do so we show that such graphs contain many independent pairs. \begin{definition} Let $G = (V, R, B)$ be a bipartite bichromatic graph with bipartition $V = V_1 \sqcup V_2$. The \textit{left vertex-pair incidence graph} of $G$ is the bipartite graph $H_L = (V', E')$ with vertex set $V' = V_1 \sqcup \binom{V_2}{2}$ and $(u,(v_1,v_2))\in E'$ if and only if $(u,v_1)$ and $(u,v_2)$ are two edges in $G$ of different colors. \end{definition} Observe that if $(u, (v_1, v_2))$ is an edge of the left vertex-pair incidence graph of $G$, then every unbuilt pair containing $u$ is independent of $(v_1, v_2)$, provided $(v_1,v_2)$ is itself an unbuilt pair disjoint from it. Define the \textit{right vertex-pair incidence graph} $H_R$ of $G$ to be the left vertex-pair incidence graph of $G$ with the sides of its bipartition $V_1 \sqcup V_2$ swapped. \begin{lemma}\label{lem:leastdensity} For all $\varepsilon >0$, there exists $\delta>0$ depending only on $\varepsilon$ such that if $G$ has underlying graph $K_{N_0, N_0}$ for sufficiently large $N_0$ and $G$ is $\varepsilon$-color-balanced, then at least one of its vertex-pair incidence graphs $H_L$ and $H_R$ has at least $\delta N_0^3$ edges. \begin{proof} Suppose $G$ has vertex bipartition $V=V_1 \sqcup V_2$. Let $\delta=\frac{\varepsilon^5}{2(1+\varepsilon)}$, $\mu=\varepsilon^2$, and $\nu=\frac{\varepsilon}{2(1+\varepsilon)}$.
Define a vertex in $V_1$ to be color-balanced if it has at least $\mu N_0$ neighbors of each color. Then, for each color-balanced vertex in $V_1$, its corresponding vertex in $H_L$ has degree at least $(\mu N_0)^2=\mu^2 N_0^2$. If there are at least $\nu N_0$ color-balanced vertices in $V_1$, then there are at least $\nu N_0\cdot \mu^2 N_0^2=\nu\mu^2 N_0^3$ edges in $H_L$, which means $H_L$ has at least $\delta N_0^3$ edges, and we would be done. It remains to consider the case in which there are fewer than $\nu N_0$ color-balanced vertices in $V_1$. The vertices of $V_1$ which are not color-balanced must have more than $(1-\mu) N_0$ neighbors of a single color. Let $S_R$ be the set of vertices with at least $(1-\mu) N_0$ red neighbors, and $S_B$ be the set of vertices with this many blue neighbors. We know $|S_R| + |S_B|>(1-\nu)N_0$. Since $G$ is $\varepsilon$-color-balanced, $d_R (G) \le 1-\varepsilon$. Also, by counting red edges from $S_R$ alone, $d_R(G) \geq |S_R|\cdot (1-\mu) / N_0$, so it follows that \[ |S_R|\le\frac{(1-\varepsilon)N_0}{1-\mu}=\frac{N_0}{1+\varepsilon}. \] This implies $|S_B|>(1-\nu)N_0-\frac{N_0}{1+\varepsilon}=\frac{\varepsilon N_0}{2(1+\varepsilon)}$. Likewise, we can show that $|S_R|>\frac{\varepsilon N_0}{2(1+\varepsilon)}$. For each pair $(u_1, u_2) \in S_R \times S_B$, there must be at least $(1-2\mu)N_0$ vertices $v$ for which $(u_1, v)$ is red and $(u_2, v)$ is blue. Each triple $(u_1, u_2, v)$ gives an edge in the right vertex-pair incidence graph $H_R$. We find that the total number of edges in $H_R$ must be at least \[ |S_R| \cdot |S_B| \cdot (1-2\mu) N_0 \ge \delta N_0^3. \] Thus, either $H_L$ or $H_R$ has at least $\delta N_0^3$ edges, as desired. \end{proof} \end{lemma} We can now complete the proof of Theorem~\ref{thm:nlogn}. \noindent {\it Proof of Theorem~\ref{thm:nlogn}.} Builder's strategy is to first construct a complete multipartite graph $G=K_{(C\times N_0)}$, where $N_0 = N/C$.
By Lemma~\ref{thm:multipartite}, if the $\varepsilon$-reduced graph of $G$ is complete, then $s(G) = \Omega(N^2)$, and in particular $s(G)=\Omega(N\log N)$, as desired. Otherwise, we can find an induced subgraph $H=G[U_i \cup U_j]$ on two of the parts of $G$ which is $\varepsilon$-color-balanced. By Lemma~\ref{lem:leastdensity}, it follows that (without loss of generality) the left vertex-pair incidence graph $H_L$ of $H$ has at least $\delta N_0^3=\delta(\varepsilon)N_0^3$ edges. Then, by Lemma \ref{lem:computation}, $H_L$ contains a subgraph $H^*$ isomorphic to $K_{a,b}$, where $a=\delta \log N_0$ and $b=N_0\log N_0$. Let $P$ be the set of all pairs $(u,u')$ where $u$ is one of the $a$ vertices on the left side of $H^*$ and $u'$ is another vertex on the left side of $H$. We have that $|P| \ge \delta N_0 \log N_0$. Let $Q$ be the set of all pairs $(v_i, v_j)$ of vertices of $H$ represented by vertices on the right side of $H^*$, so that $|Q| = N_0 \log N_0$. Since $(u, (v_i, v_j))$ is an edge of the left vertex-pair incidence graph for every such $u, v_i, v_j$, it follows that every $p \in P$ is independent of every $q\in Q$. By Lemma~\ref{lem:pairwise}, $s(G) \ge \min(|P|, |Q|) \ge \Omega(N\log N)$, as desired. \hfill \qed \section{Closing Remarks}\label{sec:closing} The off-diagonal case of the restricted online Ramsey game is equally interesting, and we believe even larger savings can be made here. \begin{conj}\label{conj:off-diagonal} There exists an absolute constant $c$ such that if $N=r(3,n)$, then \[ \tilde{r}(3,n;N) \le (1-c)\binom{N}{2}. \] \end{conj} Suppose Builder orders the vertices $v_1,\ldots, v_N$ arbitrarily and employs the following strategy. On step $i$, Builder builds all currently unbuilt pairs between $v_i$ and $v_1,\ldots,v_{i-1}$. However, during the course of the game, Builder will come across many edges $(v_i,v_j)$ with a common neighbor $v_k$ such that $(v_i,v_k)$ and $(v_j,v_k)$ are both red. In this case, if $(v_i,v_j)$ is built and colored red, then Builder obtains a red triangle and wins.
We call such edges $(v_i, v_j)$ {\it forced edges}: edges that Painter will certainly paint blue, so that Builder may skip building them until they can be used to fill in a complete blue $n$-clique with certainty. We conjecture that regardless of Painter's actions, either Builder will quickly obtain a blue $n$-clique, or else a constant fraction of the edges in $K_N$ will become forced. If true, this would prove Conjecture~\ref{conj:off-diagonal}. We remark that if exponential improvements are made to the upper bounds on either the diagonal or off-diagonal restricted online Ramsey numbers, then such improvements would translate to exponential improvements on the unrestricted online Ramsey numbers as well. However, it seems unlikely that such improvements are even true. Conlon, Fox, Grinshpun, and He \cite{xiaoyu} asked a somewhat different question about the restricted online Ramsey number. Fix $m,n \ge 3$ and let $N$ vary: how does the quantity $\tilde{r}(m,n;N)$ change? In this paper we studied the diagonal case (where $m=n$) and let $N = r(m,n)$, the minimum value for which $\tilde{r}(m,n;N)$ is defined, while for $N$ sufficiently large, $\tilde{r}(m,n;N) = \tilde{r}(m,n)$. These authors conjectured that $\tilde{r}(m,n;N)$ decreases substantially as $N$ varies between these values. Even the simplest question, whether $\tilde{r}(m,n;N)>\tilde{r}(m,n)$ holds for any $N$, is unknown to us at this time. \section{Acknowledgements} We would like to thank George Schaeffer for organizing the Stanford Undergraduate Research in Mathematics program where this research took place, and Stanford University for the opportunity and funding to pursue this project. We are grateful to Jacob Fox for his valuable input and suggestions regarding the restricted online Ramsey game.
https://arxiv.org/abs/1103.3545
Maximal eigenvalues of a Casimir operator and multiplicity-free modules
Let $\g$ be a finite-dimensional complex semisimple Lie algebra and $\b$ a Borel subalgebra. Then $\g$ acts on its exterior algebra $\w\g$ naturally. We prove that the maximal eigenvalue of the Casimir operator on $\w\g$ is one third of the dimension of $\g$, that the maximal eigenvalue $m_i$ of the Casimir operator on $\w^i\g$ is increasing for $0\le i\le r$, where $r$ is the number of positive roots, and that the corresponding eigenspace $M_i$ is a multiplicity-free $\g$-module whose highest weight vectors correspond to certain ad-nilpotent ideals of $\b$. We also obtain a result describing the set of weights of the irreducible representation of $\g$ with highest weight a multiple of $\rho$, where $\rho$ is one half the sum of the positive roots.
\section{Introduction} \setcounter{equation}{0}\setcounter{theorem}{0} Let $\g$ be a finite-dimensional complex semisimple Lie algebra and $U(\g)$ its universal enveloping algebra. The study of the $\g$-module structure of its exterior algebra $\w\g$ has a long history. Although this module structure is still not fully understood, Kostant has done a lot of important work on it; see for example \cite{k1} and \cite{k2}. Let $Cas\in U(\g)$ be the Casimir element with respect to the Killing form. Let $m_i$ be the maximal eigenvalue of $Cas$ on $\wedge ^{i} \g$ and $M_i$ be the corresponding eigenspace. Let $p$ be the maximal dimension of commutative subalgebras of $\g$. In \cite{k1} it is proved that $m_i\le i$ for any $i$ and $m_i=i$ for $0\le i\le p$, and that if $m_i=i$ then $M_i$ is a multiplicity-free $\g$-module whose highest weight vectors correspond to $i$-dimensional abelian ideals of $\b$. The integer $p$ for all the simple Lie algebras was determined by Malcev, and Suter gave a uniform formula for $p$ in \cite{s}. Fix a Cartan subalgebra $\h$ of $\g$ and a set $\ddt^+$ of positive roots. Let $\rho\in \h^{*}$ be one half the sum of all the positive roots. For any $\ld\in\h^{*}$, let $V_{\ld}$ denote the irreducible representation of $\g$ with highest weight $\ld$. In this paper we will prove the following result, which extends some theorems of Kostant. Let $n=dim \ \g$ and $r$ be the number of positive roots. \begin{theorem} [Theorem \ref{e}] One has $m_i\le n/3$ for $i=0,1,\cdots,n$, and $m_i=n/3$ if and only if $i=r,r+1,\cdots,r+l$. For $s=0,1,\cdots,l$, $M_{r+s}=\left( \begin{array}{c} l \\ s \\ \end{array} \right) V_{2\rho}$. For $0\le i< r$ one has $m_i< m_{i+1}$. For $1\le i\le r$, $M_i$ is a multiplicity-free $\g$-module, whose highest weight vectors correspond to certain ad-nilpotent ideals of $\b$. In fact $\oplus_{i=0}^r M_i$ is also a multiplicity-free $\g$-module. 
\end{theorem} This result relates $M_i$ to ad-nilpotent ideals of $\b$, which are classified in \cite{pp}. But it will be complicated to determine those ad-nilpotent ideals of $\b$ corresponding to the highest weight vectors of $M_i$. To prove this theorem, we need the following interesting result. \begin{prop}[Proposition \ref{d}] Let $k\in \mathbb{Z}^+$. The set of weights of $V_{k\rho}$ (whose dimension is $(k+1)^r$) is $$\{\sum_{i=1}^r c_i\al_i|\al_i\in\ddt^+, c_i=-k/2,-k/2+1,\cdots,k/2-1,k/2\}.$$ \end{prop} \section{Weights of a representation with highest weight a multiple of $\rho$} \setcounter{equation}{0}\setcounter{theorem}{0} Let $\g$ be a finite-dimensional complex semisimple Lie algebra. Fix a Cartan subalgebra $\h$ of $\g$ and a Borel subalgebra $\b$ of $\g$ containing $\h$. Let $\ddt$ be the set of roots of $\g$ with respect to $\h$ and $\ddt^+$ be the set of positive roots whose corresponding root spaces lie in $\b$. Let $W$ be the Weyl group. Let $\n=[\b,\b]$. Then $\b=\h\oplus\n$. An ideal of $\b$ contained in $\n$ is called an ad-nilpotent ideal of $\b$, as it consists of ad-nilpotent elements. Let $\Gamma\st \h^{*}$ be the lattice of $\g$-integral linear forms on $\h$ and $\Lambda\st\Gamma$ the subset of dominant integral linear forms. Let $(,)$ be the bilinear form on $\h^{*}$ induced by the Killing form. Let $l=dim\ \h$, $r=|\ddt^+|$ and $n=l+2r=dim\ \g$. Assume $\ddt^+=\{\al_1,\al_2,\cdots,\al_r\}.$ For any $\ld\in \Lambda$, let $\pi_\ld:\g\rt End(V_\ld)$ be the irreducible representation of $\g$ with highest weight $\ld$, and $\Ga(V_\ld)$ be the set of weights, with multiplicities. Any $\ga\in\Ga(V_\ld)$ will appear $k$ times if the dimension of the $\ga$-weight space is $k$. For example $\Ga(\g)=\ddt\cup\{0,\cdots,0\}$ ($l$ times). 
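For instance, for $\g=\mathfrak{sl}_3$ one can check directly that the multiset $\{\sum_{i} c_i\al_i \mid c_i=\pm 1/2\}$ from the description of $\Ga(V_\rho)$ stated in the introduction coincides with $\Ga(\g)=\ddt\cup\{0,0\}$, the weights of the adjoint representation. The following short script (our own illustration, not part of the paper; it writes weights in coordinates with respect to the two simple roots) performs this check:

```python
from itertools import product
from collections import Counter

# Positive roots of sl_3, written in the basis of simple roots (alpha1, alpha2):
pos_roots = [(1, 0), (0, 1), (1, 1)]
k = 1  # for k = 1, V_{k rho} is the adjoint representation, of dimension (k+1)^3 = 8

# The multiset { sum_i c_i alpha_i : c_i in {-k/2, -k/2+1, ..., k/2} }
weights = Counter()
for cs in product([j - k / 2 for j in range(k + 1)], repeat=len(pos_roots)):
    w = tuple(sum(c * a[t] for c, a in zip(cs, pos_roots)) for t in range(2))
    weights[w] += 1

# Weights of the adjoint representation: each of the 6 roots once,
# and the zero weight with multiplicity l = 2
adjoint = Counter()
for root in pos_roots:
    adjoint[(float(root[0]), float(root[1]))] += 1
    adjoint[(-float(root[0]), -float(root[1]))] += 1
adjoint[(0.0, 0.0)] += 2

assert weights == adjoint                     # the claimed multiset, for sl_3 and k = 1
assert sum(weights.values()) == (k + 1) ** 3  # Weyl's dimension formula, (k+1)^r
```

The same enumeration works for any root system once the positive roots are listed in simple-root coordinates.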
If $U\st V_\ld$ is an $\h$-invariant subspace then we will also use $\Ga(U)$ to denote the set of weights of $U$ with multiplicities and define $$<U>=\sum_{\ga\in\Ga(U)} \ga.$$ For any $S\st \Ga(V_\ld)$, we also define $<S>=\sum_{\ga\in S} \ga.$ Let $\rho\in \h^{*}$ be one half the sum of all the positive roots. For any $k\in \mathbb{Z}^+$, the representation $V_{k\rho}$ of $\g$ has dimension $(k+1)^r$ by Weyl's dimension formula. The following result describes the set of weights of $V_{k\rho}$, which is well-known if $k=1$ (see e.g. \cite{k0}). \begin{prop}\lb{d} The set of weights of $V_{k\rho}$ is $$\Ga(V_{k\rho})=\{\sum_{i=1}^r c_i\al_i|\al_i\in\ddt^+, c_i=-k/2,-k/2+1,\cdots,k/2-1,k/2\},$$ or equivalently, $$\Gamma(V_{k\rho})=\{k\rho-\sum_{i=1}^r c_i\al_i|\al_i\in\ddt^+, c_i=0,1,\cdots,k\}.$$ \end{prop} \bp By Weyl's denominator formula $$\prod_{i=1}^r (e^{\frac{k+1}{2}\al_i}-e^{-\frac{k+1}{2}\al_i})=\sum_{w\in W} sgn(w) e^{w((k+1)\rho)}.$$ Then for $c_i=-k/2,-k/2+1,\cdots,k/2-1,k/2$ with $i=1,\cdots,r$, \bee \begin{split} \sum_{c_1,\cdots,c_r} e^{\sum_{i=1}^r c_i\al_i}&=\prod_{i=1}^r(e^{(-\frac{k}{2})\al_i}+e^{(-\frac{k}{2}+1)\al_i}+\cdots+e^{(\frac{k}{2}-1)\al_i}+e^{(\frac{k}{2})\al_i})\\ &=\prod_{i=1}^r\frac{e^{\frac{k+1}{2}\al_i}-e^{-\frac{k+1}{2}\al_i}}{e^{\frac{1}{2}\al_i}-e^{-\frac{1}{2}\al_i}}\\&=\frac{\sum_{w\in W} \ sgn(w) e^{w((k+1)\rho)}}{\prod_{i=1}^r (e^{\frac{1}{2}\al_i}-e^{-\frac{1}{2}\al_i})}\\&=char(V_{k\rho}). \end{split} \eee \ep Let $Cas\in U(\g)$ be the Casimir element corresponding to the Killing form. For any $\ld\in\Ga$, define $$Cas(\ld)=(\ld+\rho,\ld+\rho)-(\rho,\rho).$$ The following result is well-known. \begin{lem}\lb{c} If $\ld\in\Lambda$ then $Cas(\ld)$ is the scalar value taken by $Cas$ on $V_\ld$. For any $\mu\in \Ga(V_\ld)$ one has $Cas(\mu)\le Cas(\ld)$ and $Cas(\mu)<Cas(\ld)$ if $\mu\neq\ld$. 
\end{lem} \section{Maximal eigenvalues of a Casimir operator and the corresponding eigenspaces} \setcounter{equation}{0}\setcounter{theorem}{0} Let $\wedge \g$ be the exterior algebra of $\g$. Then $\g$ acts on $\wedge \g$ naturally. Let $m_i$ be the maximal eigenvalue of $Cas$ on $\wedge ^{i} \g$ and $M_i$ be the corresponding eigenspace. One knows that $\wedge ^{i}\g$ is isomorphic to $\w ^{n-i}\g$ as $\g$-modules for each $i$, so one has $$m_i=m_{n-i}$$ and $$M_i\cong M_{n-i}.$$ Let $p$ be the maximal dimension of commutative subalgebras of $\g$. Kostant showed that $m_i\le i$ and $m_i=i$ for $0\le i\le p$, and that if $m_i=i$ then $M_i$ is spanned by $\wedge ^{i} \a$, where $\a$ runs through $i$-dimensional commutative subalgebras of $\g$. A nonzero vector $w\in\w\g$ is called decomposable if $w=z_1\w z_2\w\cdots\w z_k$ for some positive integer $k$, where $z_i\in \g$. In this case let $\a(w)$ be the respective $k$-dimensional subspace spanned by $z_1,z_2,\cdots,z_k$. \begin{theorem} [Proposition 6 and Theorem 7 of \cite{k1}]\lb{b} (1) Let $$w=z_1\w z_2\w\cdots\w z_k\in\w^k\g$$ be a decomposable vector. Then $w$ is a highest weight vector if and only if $\a(w)$ is $\b$-normal, i.e., $[\b,\a(w)]\st\a(w)$. In this case the highest weight of the simple $\g$-module generated by $w$ is $<\a(w)>$. Thus there is a one-to-one correspondence between all the decomposably-generated simple $\g$-submodules of $\w^k\g$ and all the $k$-dimensional $\b$-normal subspaces of $\g$. (2) Let $\a_1,\a_2$ be any two ideals of $\b$ lying in $\n$. Then $<\a_1>=<\a_2>$ if and only if $\a_1=\a_2$. Thus, if $V_1\st\w^k\g,V_2\st\w^j\g$ are two decomposably-generated simple $\g$-submodules which correspond to ideals of $\b$ lying in $\n$, then $V_1$ is equivalent to $V_2$ if and only if $V_1=V_2$. \end{theorem} \begin{theorem} \lb{e} (1) One has $$m_i=max\{||\rho+\ga_1+\cdots+\ga_i||^2-||\rho||^2\ |\{\ga_t|t=1,\cdots,i\}\st\Ga(\g)\}$$ for any $i$. 
(2) One has $m_i\le n/3$ for $i=0,1,\cdots,n$, and $m_i=n/3$ if and only if $i=r,r+1,\cdots,r+l$. For $s=0,1,\cdots,l$, $M_{r+s}=\left( \begin{array}{c} l \\ s \\ \end{array} \right) V_{2\rho}$. (3) For $0\le k< r$ one has $m_k< m_{k+1}$. For $1\le k\le r$, $M_k$ is a multiplicity-free $\g$-module, whose highest weight vectors correspond to those $k$-dimensional ad-nilpotent ideals $\a$ of $\b$ such that $Cas(<\a>)=m_k$. In fact $\oplus_{k=0}^r M_k$ is also a multiplicity-free $\g$-module. \end{theorem} \bp (1) For $j=1,\cdots,r,$ let $x_j$ (resp. $y_j$) be a weight vector corresponding to $\al_j$ (resp. $-\al_j$). Let $\{h_1,\cdots,h_l\}$ be a basis of $\h$. Then $$A=\{x_1,\cdots,x_r,y_1,\cdots,y_r,h_1,\cdots,h_l\}$$ is a basis of $\g$ consisting of weight vectors. Then $$B_i=\{a_1\wedge a_2\wedge \cdots\wedge a_i| a_j\in A\}$$ is a basis of $\w^{i}\g$ consisting of weight vectors. Let $$C_i=\{v\in B_i\ | Cas(<\a(v)>)=m_i\}.$$ Then by Corollary 2.1 of \cite{k1} $M_i$ is the direct sum of simple $\g$-modules with highest weight vectors $v\in C_i$. It is clear that $$Cas(<\a(v)>)=||\rho+\ga_1+\cdots+\ga_i||^2-||\rho||^2$$ if the weight of $a_j$ is $\ga_j$, thus (1) follows. (2) For any $S=\{\ga_j|j=1,\cdots,i\}\st\Ga(\g)$, $<S>$ is a weight of $\pi_{2\rho}$ by Proposition \ref{d}. Thus by Lemma \ref{c} $Cas(<S>)\le Cas(2\rho)=8||\rho||^2=n/3$, as $||\rho||^2=n/24$. So $m_i=n/3$ if and only if there exists $S\st\Ga(\g)$ such that $|S|=i$ and $<S>=2\rho$. Then $S$ must be of the form $\{x_1,\cdots,x_r,h_{j_1},\cdots,h_{j_s}\}$ and thus $r\le i\le r+l$. For $0\le s\le l$, it is clear that $$C_{r+s}=\{x_1\w\cdots\w x_r\w h_{j_1}\w\cdots\w h_{j_s}|1\le j_1<j_2<\cdots<j_s\le l \},$$ thus $M_{r+s}=\left( \begin{array}{c} l \\ s \\ \end{array} \right) V_{2\rho}$. (3) We first show $m_{k+1}>m_{k}$ for $0\le k<r$, which clearly holds in the case $k=0$. Assume $1\le k<r$. Let $v=a_1\w\cdots\w a_k\in C_k$. 
Then $v$ is a highest weight vector of $M_k$, whose weight is $<S>$ with $S=\Ga(\a(v))$. Then $Cas(<S>)=m_k$, and $[\b,\a(v)]\st\a(v)$ by Theorem \ref{b} (1). Recall that for $\ga=\sum_{i=1}^l k_i \ga_i\in\ddt^+$ where $\{\ga_i|i=1,\cdots,l\}$ is the set of simple roots, its height is defined as $\sum_{i=1}^l k_i$. Choose a positive root $\al$ in $\ddt^+\setminus S$ (which is nonempty as $k<r$) with largest height. Set $T=S\cup \{\al\}$. Let $a\in A$ be the $\al$-weight vector and let $u=v\w a\in B_{k+1}$. By the choice of $\al$ it is clear that $[\b,\a(u)]\st\a(u)$, thus $u$ is also a highest weight vector, whose weight is $<T>=<S>+\al$. As $$(<T>,\al)=(<S>,\al)+(\al,\al)>0,$$ $<S>\in \Ga(V_\ld)$ with $\ld={<T>}$. Then $$m_{k+1}\geq Cas(<T>)>Cas(<S>)=m_k.$$ Now assume $1\le k\le r$. Let $v=a_1\w\cdots\w a_k\in C_k$, and let $S=\Ga(\a(v))$. We will show $S\st\ddt^+$. If not, let $S^{'}=S\setminus(S\cap(-S))$. Then $<S^{'}>=<S>$ and $|S^{'}|=t<k$. Thus $m_k=Cas(<S>)=Cas(<S^{'}>)\le m_t$, which contradicts the previous result. Thus for $1\le k\le r$ one always has $S\st\ddt^+$. Any $v\in C_k$ is a highest weight vector, so $[\b,\a(v)]\st \a(v)$. And if $1\le k\le r$ we have just shown $\Ga(\a(v))\st \ddt^+$. Thus $\a(v)$ is an ad-nilpotent ideal of $\b$. Let $\ld(v)=<\a(v)>$. Then $$M_k=\oplus_{v\in C_k}V_{\ld(v)}.$$ By Theorem \ref{b} (2), if $v_1,v_2\in C_k$ with $v_1\neq v_2$, then $\a(v_1)\neq\a(v_2)$ and $\ld(v_1)\neq \ld(v_2)$. Thus $M_k$ is a multiplicity-free $\g$-module, whose highest weight vectors correspond to the ad-nilpotent ideals $\a$ of $\b$ such that $Cas(<\a>)=m_k$. By Theorem \ref{b} (2) one can further get that $\oplus_{k=0}^r M_k$ is also a multiplicity-free $\g$-module. \ep \begin{rem} Considering the isomorphism of $\g$-modules $\w^k\g$ and $\w^{n-k}\g$, $M_k$ is multiplicity-free for $0\le k\le r$ and $n-r\le k\le n$. For $r\le k\le r+l$ ($r+l=n-r$), we have shown that $M_k$ is primary of type $\pi_{2\rho}$. 
As $\g$-modules one has $\w\g=2^l V_\rho\otimes V_\rho$ (see \cite{k2}), so $\w\g$ contains exactly $2^l$ copies of $V_{2\rho}$, which is just $\oplus_{s=0}^l M_{r+s}$. \end{rem}
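For $\g=\mathfrak{sl}_2$ (so $n=3$ and $l=r=1$) the theorem can be verified numerically: the maximal Casimir eigenvalues on $\w^0\g,\dots,\w^3\g$ should be $0,1,1,0$, with the maximum $n/3=1$ attained exactly for $i=r,\dots,r+l=1,2$. The following script (our own numerical check, not part of the paper) builds the Casimir operator on each $\w^i\g$ from the adjoint matrices and the Killing form values $B(e,f)=4$, $B(h,h)=8$:

```python
import numpy as np
from itertools import combinations

# Adjoint matrices of sl_2 in the ordered basis (e, h, f):
# [h,e] = 2e, [h,f] = -2f, [e,f] = h
ad = {
    'e': np.array([[0., -2., 0.], [0., 0., 1.], [0., 0., 0.]]),
    'h': np.array([[2., 0., 0.], [0., 0., 0.], [0., 0., -2.]]),
    'f': np.array([[0., 0., 0.], [-1., 0., 0.], [0., 2., 0.]]),
}

def wedge_action(A, i, n=3):
    """Matrix of the derivation action of A on wedge^i of C^n."""
    basis = list(combinations(range(n), i))
    idx = {S: a for a, S in enumerate(basis)}
    M = np.zeros((len(basis), len(basis)))
    for a, S in enumerate(basis):
        for pos, s in enumerate(S):
            for m in range(n):
                if A[m, s] == 0 or (m != s and m in S):
                    continue
                T = list(S)
                T[pos] = m
                sign = 1.0  # sort T, tracking the sign of the permutation
                for p in range(len(T)):
                    for q in range(len(T) - 1):
                        if T[q] > T[q + 1]:
                            T[q], T[q + 1] = T[q + 1], T[q]
                            sign = -sign
                M[idx[tuple(T)], a] += sign * A[m, s]
    return M

# Casimir w.r.t. the Killing form (dual basis: e <-> f/4, f <-> e/4, h <-> h/8):
# Cas = e f / 4 + f e / 4 + h h / 8, acting on each wedge^i(sl_2)
max_eigenvalues = []
for i in range(4):
    E, H, F = (wedge_action(ad[x], i) for x in 'ehf')
    cas = E @ F / 4 + F @ E / 4 + H @ H / 8
    max_eigenvalues.append(max(np.linalg.eigvals(cas).real))

# m_i = n/3 = 1 exactly for i = r, ..., r+l = 1, 2, as the theorem predicts
assert np.allclose(max_eigenvalues, [0.0, 1.0, 1.0, 0.0])
```

Here $\w^1\g$ is the adjoint representation, on which the Killing-form Casimir acts as the identity, and $\w^2\g\cong\w^1\g$, matching $M_1\cong M_2\cong V_{2\rho}$.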
https://arxiv.org/abs/1209.4410
Technical details on Kuranishi structure and virtual fundamental chain
This is an expository article on the theory of Kuranishi structure and is based on a series of pdf files we uploaded for the discussion of the google group named `Kuranishi' (with its administrator H. Hofer). There we replied to several questions concerning Kuranishi structure raised by K. Wehrheim. By the time we submitted this article to the e-print arXiv, all the questions and objections raised in that google group had been answered, supplemented, or refuted by us. We first discuss the abstract theory of Kuranishi structure and virtual fundamental chain/cycle. This part can be read independently of the other parts. We then describe the construction of a Kuranishi structure on the moduli space of pseudoholomorphic curves, including the complete analytic detail of the gluing construction as well as the smoothness of the resulting Kuranishi structure. The case of the S^1-equivariant Kuranishi structure, which appears in the study of time-independent Hamiltonians and the moduli space of Floer's equation, is included.
\part{Introduction} \section{Introduction} This is an expository article describing the theory of Kuranishi structures, their construction for the moduli spaces of pseudo-holomorphic curves of various kinds, and the way to use them to define and study virtual fundamental classes. Our purpose is to provide enough technical detail that mathematicians who want to apply Kuranishi structures for various purposes can feel confident of their foundation. We believe that by now (16 years after its discovery), the methodology of Kuranishi structures can be used as a `black-box', meaning that mathematicians can freely use it without checking its details at the level we provide in this article, once they understand the general methodology and basic ideas. To assure them of the preciseness and correctness of all the basic results stated in \cite{FOn}, \cite{fooo:book1}, we provide thorough details so that people can use them without doubt. \par The origin of this article lies in a series of pdf files \cite{Fu1,FOn2,fooo:ans3,fooo:ans34,fooo:ans5} that the authors of the present article uploaded for the discussion of the google group named `Kuranishi' (with its administrator H. Hofer), started around March 14, 2012. In these pdf files, the present authors prepared replies to several questions concerning the foundation of Kuranishi structures which were raised by K. Wehrheim. (We say more about the discussion in this google group in Part \ref{origin}.) The pdf files that we uploaded can be obtained from the home page of the second named author (http://www.math.wisc.edu/~oh/). \par The theory of Kuranishi structures first appeared in January 1996 in a paper by the first and fourth named authors, which was published as \cite{FuOn99I}. More details thereof were published in \cite{FOn}. These papers contained some technically inaccurate points, which were corrected in the book written by the present authors \cite{fooo:book1}. 
In \cite{fooo:book1} the same methodology as the one used in \cite{FuOn99I,FOn} is applied systematically to the construction of a filtered $A_{\infty}$ structure on the singular (co)homology of a Lagrangian submanifold of a symplectic manifold. \par The construction of the Kuranishi structure on the moduli space of pseudo-holomorphic curves \cite{FOn} is written in a way suitable for the purpose of \cite{FOn} (especially \cite[Theorem 1.1, Corollary 1.4]{FOn}). (See Subsection \ref{subsec342} for more discussion on this point.) In \cite{fooo:book1} (especially in its Section A1.4) we provide more detail so that it can be used for the chain level construction employed there. The present article contains even more details of the various parts of this construction. \par Meanwhile there are several articles which describe the story of Kuranishi structures, for example \cite{joyce}, \cite{joyce2}. \par Several other versions of the construction of the virtual fundamental chain or cycle via Kuranishi structures are included in some of the papers of the present authors (\cite{fooo:toric1,fooo:toric2,fooo:toricmir,cyclic}) aimed at various applications. \par This article is {\it not} a research paper {\it but} an expository article. All the results, together with the basic ideas of their proofs, had been published in the references we mention above. \par Kuranishi structure is one of the various versions of the technique called virtual fundamental cycles/chains. Several other versions of the same technique appeared in 1996 (the same year as \cite{FOn}): \cite{LiTi98, LiuTi98,Rua99, Sie96}. Some more detail of \cite{LiTi98, LiuTi98} appeared in \cite{LuT}. \par Later on, other versions of the virtual technique appeared \cite{HWZ,ChiMo}. Also there are various expository articles, e.g., \cite{Salamon,McDuff07,operad,suugakuexp,MW1}, which one can obtain in various places. 
\par Because of its origin, this article is written in such a style that it will serve as a reference confirming the solidness of the foundation of the theory. Therefore preciseness and rigor were our major concerns in writing this article. We are planning to provide a text in the future which is more easily accessible to non-experts, such as graduate students of the area or researchers from other related fields. \section{The technique of virtual fundamental cycle/chain} We start with a very brief review of the technique of virtual fundamental cycles/chains. In differential geometry, various moduli spaces appear as `the set of solutions of nonlinear partial differential equations'. Here we put the phrase in quotes because there are several important issues to be taken care of. \begin{enumerate} \item[(A)] The moduli space is in general very singular. \item[(B)] We need to take appropriate equivalence classes of the solution set to obtain a moduli space. \item[(C)] We need an appropriate compactification to obtain a useful moduli space. \end{enumerate} In the case of moduli spaces appearing in differential geometry, the point (A) was studied by Kuranishi in the Kodaira-Spencer theory of deformation of complex structures. For each compact complex manifold $(X,J)$, Kuranishi constructed a finite dimensional complex manifold $V$ on which the group of automorphisms $\text{Aut}(X,J)$ acts and a holomorphic map $s : V \to E$ to a complex vector space, such that $E$ is acted on by $\text{Aut}(X,J)$ and $s$ is $\text{Aut}(X,J)$-equivariant, and the moduli space of complex structures is locally described near $J$ as \begin{equation}\label{kuranishimodel} s^{-1}(0)/\text{Aut}(X,J). \end{equation} This is called the Kuranishi model. The map $s$ is called the {\it Kuranishi map}. 
\par In the 1980s, first by Donaldson in gauge theory and then by Gromov in the theory of pseudo-holomorphic curves, the idea of using the fundamental homology class of various moduli spaces to obtain an invariant was discovered. In the theory of pseudo-holomorphic curves, the Gromov-Witten invariant was obtained in this way. Such a theory was rigorously built in the case of semi-positive symplectic manifolds by Ruan \cite{ruanD} and Ruan-Tian \cite{RuTi95}. (See also \cite{McSa94}.) \par Around the same time, Floer used the moduli space of solutions of the pseudo-holomorphic curve equation with an extra term defined by a Hamiltonian vector field and succeeded in rigorously defining a homology theory, which is now called Floer homology of periodic Hamiltonian systems. In \cite{Flo89I} Floer assumed that the symplectic manifold is monotone. This assumption was weakened to semi-positivity in \cite{HoSa95} and \cite{Ono95}. \par In both cases, we need to study the moduli spaces of virtual dimensions $0$ and $1$ (see e.g., page 1020 \cite{FOn}). We construct a multi-valued perturbation (multisection) inductively from the smallest energy (thanks to Gromov's compactness theorem) and can arrange that no zeros appear in the strata of negative virtual dimension. Combined with Gromov's compactness theorem, we can also arrange that there are finitely many zeros in the strata of virtual dimension $0$. When the virtual dimension of the moduli space is $1$, we pick a Kuranishi neighborhood for each zero of the multisection such that they are mutually disjoint. Then we extend the multisection so that the weighted number of zeros in the virtual dimension $0$ stratum coincides with the one with sufficiently large fixed gluing parameter $T$ (see Subsection \ref{subsec:Smoothness of coordinate changes} below). 
We also note that in the case of Gromov-Witten invariants, what we need to study is not the detailed geometric data which the moduli space carries but only its homology class, which is significantly weaker information. For the purpose of \cite{fooo:book1}, we have to work {\it not} with homology classes {\it but} with chains. \par Roughly speaking, the moduli space can be regarded as the zero set of a section of a certain infinite dimensional bundle over an infinite dimensional space. Thus its fundamental class is nothing but the `Poincar\'e dual' to the `Euler class' of the bundle. It is well-known that in the finite dimensional case the Euler class is a topological invariant of the bundle, and so in particular we can take any section to study it, when the base space is a closed manifold. In the infinite dimensional situation we need to take sections so that they satisfy appropriate Fredholm properties, but such sections still exist so abundantly that one has much freedom to perturb. \par Thus, in a situation where the automorphism group of the objects is trivial, it is very easy to find a perturbation of the equation in an abstract way and find a perturbed moduli space that is smooth. This was actually the way taken by Donaldson \cite[II.3]{Don83} in gauge theory. \par The problem becomes nontrivial when the points (B),(C) enter. Let us restrict our discussion below to the case of the moduli space of pseudo-holomorphic curves. \par The point (B) causes the most serious trouble in case the group of automorphisms is noncompact. In fact, in such a case the moduli space is not Hausdorff in general. This point is studied in the work of Mumford, who introduced the notion of stability and used it systematically to study moduli spaces of algebraic varieties. The case of curves (Riemann surfaces) is an important case of it. 
\par Gromov-Witten theory, or the theory of pseudo-holomorphic curves, is a natural generalization thereof where we consider pairs $(\Sigma,u)$, where $\Sigma$ is a Riemann surface (which includes the case of complex curves with nodal singularities when we study compactification), and $u : \Sigma \to X$ is a pseudo-holomorphic map. (We may include a finite number of marked points $\vec z \in \Sigma$, which are nonsingular and distinct.) The group $\text{Aut}(\Sigma,\vec z,u)$ of automorphisms consists of biholomorphic maps $v : \Sigma \to \Sigma$ such that $u\circ v = u$ and $v$ fixes every marked point in $\vec z$. \par The notion of stable maps, coined by Kontsevich, clarifies the issue here. He called the triple $(\Sigma,\vec z,u)$ stable when $\text{Aut}(\Sigma,\vec z,u)$ is of finite order. This is a natural generalization of the notion of stability due to Mumford defined for the case of stable curves $(\Sigma,\vec z)$. Kontsevich observed that the moduli space of stable maps $(\Sigma,\vec z,u)$ is Hausdorff. The first and fourth named authors gave a precise definition of the relevant topology and gave the proof of the Hausdorff property in \cite[Definition 10.3, Lemma 10.4]{FOn}. The Hausdorff property is discussed also in \cite{Sie96}. \par On the other hand, though stability implies that the group $\text{Aut}(\Sigma,\vec z,u)$ is finite, it may still be nontrivial. It means that in the local description as in (\ref{kuranishimodel}), the group of automorphisms can still be nontrivial. In other words, our situation is closer to that of an orbibundle on an orbifold than to a vector bundle on a manifold. An orbibundle is a vector bundle in the category of orbifolds. \par It is a well-known classical fact that a generic equivariant section of an equivariant vector bundle is not necessarily transversal to zero, even when the group is finite. After taking the quotient, it means that an orbibundle over an orbifold may not have transversal sections. 
A simple explanation of this fact is that the Euler class of the orbibundle is not necessarily an element of the cohomology group with integer coefficients but is defined only in the cohomology group with rational coefficients. \par In 1996, three approaches to this point were proposed, worked out, and applied to the study of the moduli space of pseudo-holomorphic curves. \par One approach, due to J. Li and G. Tian, is an analytic version of `locally free resolution of the tangent complex'. The approach by Ruan \cite{Rua99} uses de Rham theory and the representative of the Euler class in de Rham cohomology. \par The approach of \cite{FOn} is based on multi-valued perturbations, which were called multisections. \par Before explaining more about the method of \cite{FOn}, we explain the point (C). As usual, a compactification of a moduli space of geometric objects is obtained by adding certain kinds of `singular objects'. In the case of the moduli space of pseudo-holomorphic curves, such a singular object consists of a triple $(\Sigma,\vec z,u)$ where $\Sigma$ is a curve with only double points, i.e. nodal singularities, as its singularities, $\vec z$ are marked points and $u : \Sigma \to X$ is a pseudo-holomorphic map. We can define the stability condition as Kontsevich did. \par The very important point of the story is that we can define a coordinate chart in a neighborhood of such an object $(\Sigma,\vec z,u)$ in the same way as in the case when $\Sigma$ is smooth. In other words, the moduli space of the triples can also be described locally by the Kuranishi model (\ref{kuranishimodel}). This is a consequence of the process called gluing or stretching the neck. Such a process has its origin in the work of Taubes in gauge theory. By now it has very much become standard practice also in the case of pseudo-holomorphic curves. \par We thus find that each point of the compactification of our moduli space has a local description by the Kuranishi model (\ref{kuranishimodel}). 
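A minimal example, standard in the literature (the normalization of the weights below is our own illustration and is not taken from \cite{FOn}), shows both the failure of equivariant transversality and the rationality of the resulting count:

```latex
\begin{example}
Let $\Gamma = \mathbb{Z}/2$ act on $V = \mathbb{R}$ and on $E = \mathbb{R}$,
in both cases by $x \mapsto -x$. A $\Gamma$-equivariant section
$s \colon V \to E$ satisfies $s(-x) = -s(x)$, hence $s(0) = 0$: no
single-valued equivariant section avoids the fixed point. The section
$s(x) = x$ is transversal to zero; its unique zero lies at the orbifold
point of $V/\Gamma$ and is counted with weight $1/|\Gamma| = 1/2$.
Alternatively, the multisection with the two branches
$s_1(x) = x - \epsilon$ and $s_2(x) = x + \epsilon$ (for $\epsilon > 0$)
is equivariant as a multi-valued section, since
$\{s_1(-x), s_2(-x)\} = -\{s_1(x), s_2(x)\}$, and each branch is
transversal to zero. Its zero set in $V/\Gamma$ is the single point
$[\epsilon]$, at whose lift $x = \epsilon$ only the branch $s_1$ vanishes,
contributing its weight $1/2$. Either way the weighted count of zeros is
$1/2$: the ``Euler number'' of this orbibundle is rational but not integral.
\end{example}
```

The weighted zero count is independent of the (multi)section chosen, which is what makes it a usable substitute for the Euler number.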
\par We can then find a multi-valued perturbation (= multisection) on each of the Kuranishi models so that $0$ is a regular value for each branch of our multi-valued perturbation. Then the task is to formulate the way these local constructions (perturbations) are globally compatible. \par The notion of Kuranishi structures was introduced for this purpose. Namely, a Kuranishi structure by definition provides a way of describing the moduli space locally as $ s^{-1}(0)/\Gamma $ where $s : V \to E$ is a $\Gamma$-equivariant map from a space $V$ equipped with an action of a finite group $\Gamma$ to a vector space $E$ on which $\Gamma$ acts linearly. We call such a local description a Kuranishi neighborhood. \par A Kuranishi structure also involves coordinate changes between Kuranishi neighborhoods and requires certain compatibility between coordinate changes. \par Thus the main idea of this story is as follows. \begin{enumerate} \item To define some general notion of `spaces' that contains various moduli spaces of pseudo-holomorphic curves as examples and to work out the transversality issue in that abstract setting. \item To use multi-valued abstract perturbations, which we call multisections, to achieve the relevant transversality. \end{enumerate} In this article we describe the technical details of this method. \section{The outline of this article} The theory of Kuranishi structures is divided into two parts. One is the abstract theory in which we first define the notion of Kuranishi structure and then describe how we obtain a virtual fundamental chain/cycle or its homology class in that abstract setting. The other is the methodology of implementing the abstract theory of Kuranishi structures in the study of concrete moduli problems, especially that of the moduli space of pseudo-holomorphic curves. \par We discuss the first point in Part \ref{Part2} and the second point in Parts \ref{secsimple} and \ref{generalcase}. 
\par The definition of the Kuranishi structure is given in Section \ref{secdefnkura}. \par To construct a perturbation (multisection) of a given Kuranishi map that is transversal to $0$, we work in a local chart (Kuranishi neighborhood) and apply an appropriate induction process. \par This is very similar to Thom's original proof of the transversality theorem in differential topology. Later on, the proof of the transversality theorem via Sard's theorem combined with the Baire category theorem gained more popularity. However, the latter approach, which uses functional analysis, runs into trouble in the case of multisections. In fact, the sum of multisections is rather delicate to define. There seems to be no way to define the sum so that an additive inverse exists. Because of this, it is unlikely that the totality of the multisections forms an infinite dimensional vector space. \par To work out the way to define multisections inductively, we need to find a clever choice of a system of Kuranishi neighborhoods. We call such a system of Kuranishi neighborhoods `a good coordinate system'. \par We remark that in the local description $ s^{-1}(0)/\Gamma $ ($s : V \to E$) of our moduli space, the number $\dim V - \text{rank}\, E$ is the `virtual' dimension of our moduli space and is a well-defined number. In other words, it is independent of the Kuranishi neighborhood. On the other hand, the dimension of the base $V$ may depend on the Kuranishi neighborhood. As a consequence, the coordinate change sometimes exists only in one direction, namely from the Kuranishi neighborhood with smaller $\dim V$ to the one with bigger $\dim V$. This makes the proof of the existence of a {\it consistent} system of Kuranishi neighborhoods much more nontrivial compared to the case of ordinary manifolds. 
Recall that already in the case of an orbifold (that is, the case when the obstruction space $E$ is always zero), the order of the group $\Gamma$ may vary, and so the natural procedure for constructing a transversal multisection of an orbibundle over an orbifold is via induction over the order of $\Gamma$. The case of a Kuranishi structure is slightly more nontrivial since the dimension of $V$ may also vary. \par The definition of a good coordinate system is given in Section \ref{defgoodcoordsec}. The existence of such a good coordinate system is proved in Section \ref{sec:existenceofGCS}. \par We alert the readers that in this article more conditions are required in our definition of a good coordinate system compared to those of our earlier papers \cite{FOn}, \cite{fooo:book1}. The reason is that it is more convenient for the purpose of writing the technical details of a part of the construction of the virtual fundamental chain/cycle. These details were recently requested by a few people\footnote{It includes D. Yang, K. Wehrheim, D. McDuff. We thank them for asking this question.}. One example is the question of how we restrict the domains (of the Kuranishi neighborhoods) of the perturbations so that the zero sets of the Kuranishi maps defined on the individual Kuranishi neighborhoods can be glued together to define a Hausdorff, compact space. \par We emphasize that this problem of Hausdorff-ness and compactness is of a very different nature from, for example, that of the Hausdorff-ness or compactness of the moduli space itself. The latter problem is related to key geometric notions such as stability and requires studying certain fundamental points of the story like Gromov compactness and the removable singularity results. On the other hand, the former problem, though it is rather tedious and complicated to write down a precise way to resolve it, is of a technical nature. 
It boils down to finding the right way of restricting various domains (Kuranishi neighborhoods) with much care and patience. It goes without saying, however, that writing down this tedious technicality at least once is certainly a nontrivial and meaningful labor to undertake for the good of the field, which is the main purpose of Part \ref{Part2}. This makes Part \ref{Part2} rather lengthy and complicated. Section \ref{defgoodcoordsec} contains some of those technical arguments. For the purpose above, we use a general lemma in general topology which we prove in Section \ref{gentoplem}. This lemma (Proposition \ref{metrizable}) is, we suspect, well-known in principle. We include its proof here for the sake of completeness since we could not locate an appropriate reference. We gather well-known facts on orbifolds, their embeddings, and bundles on them in Section \ref{ofd} for the reader's convenience. These technical points, however, should not be confused with the basic and conceptual points of the theory of Kuranishi structures. We believe that the readers, especially those with geometric applications in their minds, can safely forget most of those technical arguments once they go through them and convince themselves of the soundness of the foundation. The bottom line of the Kuranishi methodology is to establish the existence of a Kuranishi structure on the \emph{compactified} moduli spaces in the relevant moduli problems. (This step is {\it not} automatic.) Then the rest follows automatically from the general theory of Kuranishi structures. \par In Section \ref{sec:existenceofGCS}, we prove the existence of a good coordinate system in the sense defined in this paper (which is more restrictive than the one in \cite{FOn} or \cite{fooo:book1}). The proof is based on an idea with its origin in the proof of Lemma 6.3 in \cite[page 957]{FOn}. 
We work by a downward induction on the dimension $\dim V$ of the Kuranishi neighborhood, and in each dimension we glue several Kuranishi neighborhoods (of the same dimension) to obtain a bigger Kuranishi neighborhood. \par\smallskip Parts \ref{secsimple} and \ref{generalcase} provide details of the construction of the Kuranishi structure on the moduli space of pseudo-holomorphic curves. There are two main issues in the construction. \par One is of an analytic nature. Namely, we construct a Kuranishi neighborhood of a given element of the compactified moduli space. In the case when the given element is a stable map from a nonsingular curve (Riemann surface), the analytic part of the construction is fairly standard functional analysis. \par In the case when the element is a stable map from a curve which has a nodal singularity, its neighborhood still contains stable maps from nonsingular curves. So we need to study the problem of gluing, or of stretching the neck. Such a problem of gluing solutions of non-linear elliptic partial differential equations has been studied extensively in gauge theory and in symplectic geometry during the last decade of the 20th century. Several methods have been developed to solve the problem which are also applicable to our case. In this article, following \cite[Section A1.4]{fooo:book1}, we employ the alternating method, which was first exploited by Donaldson \cite{Don86I} in gauge theory. \par In this method, one solves the linearization of the given equation on each piece of the domain (that is, the completion of the complement of the neck region of the source of our pseudo-holomorphic curve). Then we construct a sequence of families of maps that converges to a family of solutions of a version of the Cauchy-Riemann equation, that is, \begin{equation}\label{mainequation00} \overline{\partial} u' \equiv 0 \mod E(u') \end{equation} parameterized by a manifold (or an orbifold). 
Here $E(u')$ is a family of finite dimensional vector spaces of smooth sections of an appropriate vector bundle depending on $u'$. More precisely, $E(u')$ is defined using additional marked points, which make the domain curve stable; see Parts \ref{secsimple} and \ref{generalcase}. \par \subsection{Smoothness of coordinate changes}\label{subsec:Smoothness of coordinate changes} The authors were sometimes asked a question about the smoothness of the Kuranishi map $s$ and of the coordinate changes of the Kuranishi neighborhoods\footnote{Among others, Y.B. Ruan, C.C. Liu, J. Solomon, I. Smith and H. Hofer asked the question. We thank them for asking this question.}. \par This problem is described as follows. Note that in our formulation the neck region is a long cylinder $[-T,T] \times S^1$ or a long rectangle $[-T,T] \times [0,1]$, and the case when the source curve is singular corresponds to the case when $T = \infty$. So a part of the coordinates of our Kuranishi neighborhood is naturally parametrized by $(T_0,\infty]$ or its products. Note that $\infty$ is included in $(T_0,\infty]$. As a topological space $(T_0,\infty]$ has an unambiguous meaning. On the other hand there is no obvious choice of its smooth structure as a manifold with boundary. Moreover, for several maps such as the Kuranishi map $s$, it is not obvious whether they are smooth with respect to a given coordinate on $(T_0,\infty]$. (See \cite[Remark 13.16]{FOn}.) As we will explain in Subsection \ref{subsec342} there are several ways to resolve this problem. One approach is rather topological and uses the fact that the chart is smooth in the $T$-slice where the gluing parameter $T$ above is fixed. This approach is strong enough to establish all the results of \cite{FOn}. The method of \cite{McSa94} which is quoted in \cite{FOn} is strong enough to carry out this approach. However it is not clear to the authors whether it is good enough to establish smoothness of the Kuranishi map or of the coordinate changes at $T=\infty$. 
(This point was mentioned by the first and the fourth named authors themselves in the year 1996 in \cite[Remark 13.16]{FOn}.) To prove the existence of a Kuranishi structure that literally satisfies our axioms, we take a different method in this article. \par Using the alternating method described in \cite[Section A1.4]{fooo:book1} for the same purpose, we can find an appropriate coordinate chart at $T=\infty$ so that the Kuranishi map and the coordinate changes of our Kuranishi neighborhoods are of $C^{\infty}$ class. For this purpose, we take the parameter $s = 1/T$. As we mentioned in \cite[page 771]{fooo:book1} this parameter $s$ is different from the one taken in algebraic geometry when the target $X$ is projective. The parameter used in algebraic geometry corresponds to $e^{-T}$. It seems likely that in our situation, either where the almost complex structure is non-integrable and/or where we include a Lagrangian submanifold as the boundary condition (the source being a bordered stable curve), the Kuranishi map or the coordinate changes are {\it not} smooth with respect to the parameter $e^{-T}$. But they are smooth in our parameter $s = 1/T$, as is proved in \cite[Proposition A1.56]{fooo:book1} and Part \ref{generalcase}. \par The proof of this smoothness is based on the exponential decay estimate of the solution of the equation (\ref{mainequation00}) with respect to $T$, that is, the length of the neck. The proof of this exponential decay is given in \cite[Section A1.4, Lemma A1.58]{fooo:book1}. Because, after the publication of \cite{fooo:book1}, we still heard some demand for more details of this smoothness proof, we provide such details in Part \ref{secsimple} and Section \ref{glueing}. \par In Part \ref{secsimple}, we restrict ourselves to the case where we glue two (bordered) stable maps such that the source (without considering the map) is already stable. 
By restricting ourselves to this case we can explain all the analytic issues needed to work out the general case without making the notation too complicated. \par We provide the relevant analytic details using the same induction scheme as \cite[Section A1.4, page 773-776]{fooo:book1}. The only difference is that we use the $L^2_{m}$ space (the space of maps whose derivatives up to order $m$ are of $L^2$ class) here, while we used the $L^p_{1}$ space following the tradition of the symplectic geometry community in \cite[Section A1.4]{fooo:book1}. Using the $L^2_{m}$ space in place of the $L^p_{1}$ space, it actually becomes easier to study the derivatives of our solution with respect to the parameter $T$. See Remark \ref{Abremark}. \par In Section \ref{alternatingmethod} we provide the details of the estimate and show that the induction scheme of \cite[Section A1.4]{fooo:book1} provides a convergent family of solutions of our equation (\ref{mainequation00}). (This estimate is actually fairly straightforward to prove, although tedious to write down.) \par In Section \ref{subsecdecayT}, we provide the details of the above-mentioned exponential decay estimate of the $T$-derivatives of our solutions. In Section \ref{surjinj}, we review the well-established classical proof of the fact that the solutions obtained exhaust all the nearby solutions and also that the map from the parameter space to the moduli space is injective. \par\medskip \subsection{Construction of Kuranishi structure} In the first half of Part \ref{generalcase}, we discuss the second main issue, which enters the construction of a Kuranishi structure. The problem here is as follows. To define Kuranishi neighborhoods, we need to take the obstruction spaces which appear on the right hand side of (\ref{mainequation00}). We need to choose them so that the Kuranishi neighborhoods that are the solution spaces of (\ref{mainequation00}) can be glued together. 
In other words we need to choose them so that we can define smooth coordinate changes. \par We need to fix a parametrization of the source (the curve) for the following reason. Let $\frak p=(\Sigma,\vec z,u)$ be an element of the moduli space of stable maps. First we consider the case where the domain $(\Sigma, \vec z)$ is stable. Consider a vector space $E_\frak p$ spanned by finitely many smooth sections of the bundle $u^*TX \otimes \Lambda^{0,1}$. For $\frak q=(\Sigma',\vec z',u')$ close to $\frak p$, we would like to transport $E_\frak p$ to $\frak q$. Let $K$ be a compact subset of $\Sigma$ where elements of $E_\frak p$ are supported. (The subset $K$ is chosen so that it is disjoint from the nodal singularities.) If we fix a diffeomorphism $K \to \Sigma'$ onto its image, then we can use the parallel transport along geodesics to transfer $E_\frak p$ to $E_\frak p(\frak q)$. This family $E_\frak p(\frak q)$ is obviously a smooth family of vector spaces of smooth sections, so we can study the solution space of (\ref{mainequation00}) by using the implicit function theorem, for example. In the case when $(\Sigma,\vec z)$ is stable we can choose such a diffeomorphism $K \to \Sigma'$ (up to an ambiguity by a finite group of automorphisms) by using the universal family of curves over the Deligne-Mumford moduli space. \par In the case when $(\Sigma,\vec z)$ is not a stable curve (but $\frak p=(\Sigma,\vec z,u)$ is a stable map) we need some additional argument. In \cite{FOn} we gave two methods to resolve this trouble. One is explained in \cite[Section 15]{FOn} and the other in \cite[Appendix]{FOn}. The first one uses the center of mass technique from Riemannian geometry. We explain it a bit in Subsection \ref{comparizon}. In \cite{fooo:book1} and several other places we used the second technique (the one in \cite[Appendix]{FOn}). Therefore we mainly use the second method in this article. 
In this method we add additional marked points $\vec w$ on $\Sigma$ so that $(\Sigma,\vec z \cup \vec w)$ becomes stable. We also put additional marked points $\vec w'$ on $\Sigma'$ so that $(\Sigma',\vec z' \cup \vec w')$ becomes stable. Then we can transport $E_\frak p$ to $E_\frak p(\frak q \cup \vec w')$. The resulting moduli space (in fact, we take a direct sum \eqref{obstruction} below in the later argument), which we call a thickened moduli space, has too many extra parameters compared to those required by the virtual dimension obtained by dimension counting. The extra parameters correspond to the positions of the points $\vec w'$. We kill this extra freedom as follows. We take a finite collection of codimension two submanifolds $D_i \subset X$, one for each added marked point $w_i \in \vec w$, so that $D_i$ transversally intersects $u(\Sigma)$ at $u(w_i)$, and require $w'_i$ to satisfy $u'(w'_i) \in D_i$. \par This gives a construction of a Kuranishi neighborhood at each point of our moduli space. To obtain Kuranishi neighborhoods which can be glued into a global structure, we proceed as follows. \par We take a sufficiently dense finite subset $\{\frak p_c = (\Sigma_c,\vec z_c,u_c)\}_c$ of our moduli space. For each $\frak p_c$ we fix a finite dimensional vector space of sections $E_c=E_{\frak p_c}$ which will be a part of the obstruction space $E$. We also fix additional marked points $\vec w_c$ so that $(\Sigma_c,\vec z_c\cup \vec w_c)$ becomes stable. \par We next consider $(\Sigma',\vec z',u')$ for which we will set up our equation. We take all $\frak p_c$'s which are `sufficiently close' to $(\Sigma',\vec z',u')$. For each such $c$, we take additional marked points $\vec w'_c$ so that $(\Sigma',\vec z'\cup \vec w'_c)$ becomes close to $(\Sigma_c,\vec z_c\cup \vec w_c)$ in an obvious topology. 
We then use the diffeomorphism obtained from this closeness, and parallel transport, to transfer $E_c$ to a finite dimensional vector space $E_c(u',\vec w'_c)$ of sections on $\Sigma'$. We take the sum of them over all $c$'s to obtain the fiber of the obstruction bundle \begin{equation} E(u';(\vec w'_c)) = \bigoplus_{c} E_c(u';\vec w'_c). \label{obstruction} \end{equation} We remark here that this space depends on all the additional marked points $\bigcup_c \vec w'_c$. We solve the equation \eqref{mainequation00} to obtain a moduli space of larger dimension. Finally we cut this space down by requiring that each of those additional marked points lies in the corresponding codimension 2 submanifold that we chose at the time we defined $E_c$. \par This process of defining $E(u';(\vec w'_c))$ and its solution space is described in detail in Sections \ref{Graph}--\ref{settin2}. \par In the first two of those sections, we discuss the following point. Recall that the diffeomorphism $K \to \Sigma'$ is determined if the source $(\Sigma,\vec z)$ is stable. More precisely we proceed as follows. Note that $(\Sigma',\vec z')$ is close to $(\Sigma,\vec z)$ in the Deligne-Mumford moduli space. To identify the set $K \subset \Sigma$ with a subset of $\Sigma'$ we need to fix a trivialization of the universal family. Actually the universal family is not a smooth fiber bundle even in the orbifold sense, since there is a singular fiber which corresponds to the nodal curve. So to specify the embedding $K \to \Sigma'$ we also need to fix the way to resolve the node. The notion of `coordinate at infinity' is introduced for this purpose. \par After introducing the notion of `coordinate at infinity', we define the notion of obstruction bundle data in Section \ref{stabilization}. 
The obstruction bundle data consist of the finite dimensional vector spaces of sections $E_c$ (which will give the obstruction bundle $E_c(u';\vec w'_c)$ nearby) together with the additional marked points $\vec w_c$ which we use to transport $E_c$ to the nearby maps as explained above. Using these data, the thickened moduli space, which is the solution space of the equation \begin{equation}\label{mainequation01} \overline{\partial} u' \equiv 0 \mod E(u';(\vec w'_c)), \end{equation} is defined in Section \ref{settin2}. \par Section \ref{glueing} is a generalization of the analytic argument of Part \ref{secsimple}. Most of the arguments in Part \ref{secsimple} can be generalized here without change. The most important new point here is the following. For a point $\frak p$ in the moduli space, we construct a thickened moduli space containing it. To obtain the vector space $E$ at $\frak p$, which is the fiber of the obstruction bundle there, we consider various $\frak p_c$ in a neighborhood of $\frak p$ and $E_c(u';\vec w'_{c})$ parallel transported from $E_c$ using the marked points $\vec w'_{c}$ as mentioned before. On the other hand, we fix a parameterization of the source of the map $u'$ using a stabilization at $\frak p$ and some other additional marked points $\vec w'_{\frak p}$ associated to the stabilization. Therefore the parametrization of $\Sigma'$ used to define $E_c(u')$ is different from the one that we use to study our equation (\ref{mainequation01}). As long as we work with smooth curves (curves without nodal singularities) this is not really an issue, since elements of $E_c$ are smooth sections and they behave nicely under a diffeomorphism (or under a change of variables). However, when we study gluing of solutions (that is, the case when $\frak p$ has a node), we need to study the asymptotic behavior of this coordinate change as the gluing parameter $T$ goes to infinity. 
Study of this asymptotic behavior is also needed when we prove smoothness of the coordinate change at the boundary or at the corner. The main ingredients that we use for this purpose are Propositions \ref{changeinfcoorprop}, \ref{reparaexpest}, which are generalizations of \cite[Lemma A1.59]{fooo:book1}. Propositions \ref{changeinfcoorprop}, \ref{reparaexpest} are proved in Section \ref{changeinfcoorprop}. \par In Section \ref{cutting} we discuss in detail the process of imposing the condition $u'(w_i) \in D_i$ to cut down the dimension of the thickened moduli space. In particular we show that after doing this cutting down and taking the quotient by the finite group of automorphisms, the set of the solutions of the associated Cauchy-Riemann equations has the right dimension. \par We then show that the moduli space of the solutions of the equation (\ref{mainequation01}), this time requiring that the left hand side becomes exactly zero, is homeomorphic to the original unperturbed moduli space. This fact is used in Section \ref{chart} to define a Kuranishi neighborhood at every point of the moduli space. \par In the next three sections, we construct the coordinate changes between Kuranishi neighborhoods and show that they are compatible. \subsection{$S^1$-equivariant Kuranishi structure and multi-sections} As we mentioned before, Floer studied the pseudo-holomorphic curve equation with an extra term defined by a Hamiltonian vector field and used its moduli space to define the Floer homology of a periodic Hamiltonian system. We can define a Kuranishi structure on the moduli space of solutions of Floer's equation in the same way. We can use this to generalize Floer's definition of the Floer homology of periodic Hamiltonian systems to an arbitrary symplectic manifold. \par This part of the generalization is actually fairly straightforward. The point mainly discussed in Part \ref{S1equivariant} is not the definition but the calculation of the Floer homology of a periodic Hamiltonian system. 
Namely, it coincides with the singular homology. This fact is used in the proof of the homological version of Arnold's conjecture. There are two methods to verify this calculation. One uses the method of Bott-Morse theory, and the other is based on the study of the case of an autonomous Hamiltonian that is $C^2$-small and Morse-Smale. In Part \ref{S1equivariant} we use the second method, following \cite{FOn}. (The approach using Bott-Morse theory is written in \cite[Section 26]{fooospectr}.) \par The key point is to use the $S^1$ symmetry of the problem. Namely, when the Hamiltonian is time independent, the moduli space we study has an extra $S^1$ symmetry arising from domain rotations. Therefore the contribution of the relevant Floer moduli space to the matrix elements of the resulting boundary operator is concentrated on the fixed point set of the $S^1$-action, which exactly corresponds to the moduli space of Morse gradient flows. This $S^1$ symmetry is also used in the approach via Bott-Morse theory. \par In Part \ref{S1equivariant} we define the notion of an $S^1$-equivariant Kuranishi structure and prove the $S^1$-equivariant counterparts of the various results on Kuranishi structures. We also construct an $S^1$-equivariant Kuranishi structure on the moduli space of the solutions of Floer's equation and use it to calculate the Floer homology of periodic Hamiltonian systems. \subsection{Epilogue} The last part is a kind of appendix. We have already mentioned that the origin of this article is our replies uploaded for the discussion in the google group `Kuranishi', during which we replied mainly to the questions raised by K. Wehrheim. In the first three sections of Part \ref{origin}, we describe the discussion of that google group and the role of the pdf files we posted there, from our point of view. 
\par In the arXiv and even in the published literature, we have seen a few articles which express a negative view of the foundations and raise doubts about the soundness of virtual fundamental chain or cycle techniques, although these techniques have been used successfully for various purposes in the published references. In our point of view, many such doubts raised in those articles are not based on a precise understanding of the virtual fundamental cycle techniques but on some prejudice about the mathematical point of view and on a few minor technically imprecise statements made in the published articles on the virtual fundamental cycle techniques. \par Recently we have seen another instance of such a writing \cite{MW1} in arXiv, posted by the very person who had asked us questions in the google group and had already received our answers, which are mostly the same as Parts \ref{Part2} - \ref{S1equivariant} in this article except for some polishing of the presentation. They posed several difficulties which arise in {\em their} approach: for example, the Hausdorff property of certain spaces and the smoothness of obstruction bundles, which we consciously excluded by taking the route via the finite dimensional reduction. (They should be taken care of if one works directly in the infinite dimensional setting.) We comment on \cite{MW1} more in Section \ref{HowMWiswrong}. \subsection{Thanks} We would like to thank all the participants, especially Wehrheim and McDuff, in the discussions of the `Kuranishi' google group for motivating us to go through this painstaking labour by their meticulous reading and questioning of our writings. KF is supported by JSPS Grant-in-Aid for Scientific Research \# 19104001, 2322404 and Global COE Program G08. YO is supported by US NSF grant \# 0904197. HO is supported by JSPS Grant-in-Aid for Scientific Research \#23340015. KO is supported by JSPS Grant-in-Aid for Scientific Research \# 2124402. 
\par KF thanks the Simons Center for Geometry and Physics for hospitality and financial support while this work was done. \par\newpage \part{Kuranishi structure and virtual fundamental chain}\label{Part2} The purpose of this part is to give the definitions of Kuranishi structure and good coordinate system and to explain the construction of a virtual fundamental chain of a space with Kuranishi structure using a good coordinate system. We also provide the details of the proof of the existence of a good coordinate system on a compact metrizable space with Kuranishi structure and tangent bundle in Section \ref{sec:existenceofGCS}. We take the definition of \cite[Appendix]{fooo:book1}. \section{Definition of Kuranishi structure} \label{secdefnkura} In this section we give the definition of Kuranishi structure. We mostly follow the exposition and the notations used in \cite{fooo:book1}. Here is a technical remark. In \cite{fooo:book1} we do not use the notion of germ of Kuranishi neighborhoods, which was discussed in \cite{FOn}. The notion of germ is not needed for the proofs of any of the results in \cite{FOn}. See Section \ref{gernkuranishi} about germs of Kuranishi neighborhoods etc. In particular, as in the exposition of \cite{fooo:book1}, the cocycle condition $$ \underline{\phi}_{pq} \circ \underline{\phi}_{qr}=\underline{\phi}_{pr} $$ is an exact equality and not one modulo automorphisms of a Kuranishi neighborhood. (Note that there may be a nontrivial automorphism of a Kuranishi neighborhood.) This is important to avoid the usage of 2-categories. Here \begin{equation}\label{eq:barphipq} \underline{\phi}_{pq}: U_{pq} \to U_p \end{equation} is an embedding of orbifolds $U_{pq}=V_{pq}/\Gamma_q \to U_p =V_p/\Gamma_p$, which is induced by the $h_{pq}$-equivariant map \begin{equation}\label{eq:phipq} \phi_{pq}:V_{pq} \to V_p \end{equation} where \begin{equation}\label{eq:hpq} h_{pq}: \Gamma_q \to \Gamma_p \end{equation} is a group homomorphism. 
See below for the precise definitions of these notations. We want to avoid using the language of 2-categories unless it is absolutely necessary, because we feel that it makes things unnecessarily complicated and that it is also harder to use. (See Section \ref{ofd}, where we summarize the notation and definitions on orbifolds that we use in this article.) \par Let $X$ be a compact metrizable space and $p \in X$. We define a Kuranishi neighborhood at a point $p$ in $X$ as follows. \begin{defn} (\cite[Definition A1.1]{fooo:book1})\label{Definition A1.1} A {\it Kuranishi neighborhood} at $p \in X$ is a quintuple $(V_p, E_p, \Gamma_p, \psi_p, s_p)$ such that: \begin{enumerate} \item $V_p$ is a smooth manifold of finite dimension, which may or may not have boundary or corner. \item $E_p$ is a real vector space of finite dimension. \par \item $\Gamma_p$ is a finite group acting smoothly and effectively on $V_p$ and having a linear representation on $E_p$. \item $s_p$ is a $\Gamma_p$-equivariant smooth map $V_p \to E_p$. \par \item $\psi_p$ is a homeomorphism from $s_p^{-1}(0)/\Gamma_p$ to a neighborhood of $p$ in $X$. \end{enumerate} \begin{rem} We {\it always} assume orbifolds to be effective. In our application to the moduli space of pseudo-holomorphic curves we can take the obstruction spaces so that the orbifolds appearing in its Kuranishi neighborhoods are always effective, except in the case when the target space $X$ is zero dimensional. \end{rem} We denote $U_p = V_p/\Gamma_p$ and call $U_p$ a \emph{Kuranishi neighborhood}. We sometimes also call $V_p$ a \emph{Kuranishi neighborhood} of $p$ by an abuse of terminology. \par We call $E_p \times V_p \to V_p$ an {\it obstruction bundle} and $s_p$ a {\it Kuranishi map}. For $x \in V_p$, denote by $(\Gamma_p)_x$ the isotropy subgroup at $x$, i.e., $$(\Gamma_p)_x = \{ \gamma \in \Gamma_p \,\vert \,\gamma x = x \}.$$ Let $o_p$ be a point in $V_p$ with $s_p(o_p) = 0$ and $\psi_p([o_p]) = p$. 
We will assume that $o_p$ is fixed by all elements of $ \Gamma_p $. Therefore $o_p$ is the unique point of $V_p$ which is mapped to $p$ by $\psi_p$. \end{defn} \begin{defn}(\cite[Definition A1.3]{fooo:book1})\label{Definition A1.3} Let $(V_p, E_p, \Gamma_p, \psi_p, s_p)$ and $(V_q, E_q, \Gamma_q, \psi_q, s_q)$ be a pair of Kuranishi neighborhoods of $p \in X$ and $q \in \psi_p(s_p^{-1}(0)/\Gamma_p)$, respectively. We call a triple $\Phi_{pq} = (\hat\phi_{pq},\phi_{pq},h_{pq})$ a {\it coordinate change} if \begin{enumerate} \item[(1)] $h_{pq}$ is an injective homomorphism $\Gamma_q \to \Gamma_p$. \item[(2)] $\phi_{pq} : V_{pq} \to V_p$ is an $h_{pq}$-equivariant smooth embedding from a $\Gamma_q$ invariant open neighborhood $V_{pq}$ of $o_q$ to $V_p$, such that the induced map $\underline{\phi}_{pq}: U_{pq} \to U_p$ is injective. Here and hereafter $\underline{\phi}_{pq} : U_{pq} \to U_p$ is the map induced by ${\phi}_{pq}$ and $U_{pq} = V_{pq}/\Gamma_q$. \item[(3)] $(\hat\phi_{pq},\phi_{pq})$ is an $h_{pq}$-equivariant embedding of vector bundles $E_q \times V_{pq} \to E_p \times V_p$. \end{enumerate} In other words, the triple $\Phi_{pq}$ induces an embedding of orbibundles $$ \underline{\hat\phi}_{pq} : \frac{E_q\times V_{pq}}{\Gamma_q} \to \frac{E_p\times V_{p}}{\Gamma_p}, $$ in the sense of Definition \ref{defn:embedding}. \par The triple $\Phi_{pq}$ is required to satisfy the following compatibility conditions. \begin{enumerate} \item[(4)] $\hat\phi_{pq}\circ s_q = s_p\circ\phi_{pq}$. Here and hereafter we sometimes regard $s_p$ as a section $s_p : V_p \to E_p\times V_p$ of the trivial bundle $E_p \times V_p \to V_p$. \item[(5)] $\psi_q = \psi_p\circ \underline{\phi}_{pq}$ on $(s_q^{-1}(0) \cap V_{pq})/\Gamma_q$. \item[(6)] The map $h_{pq}$ restricts to an isomorphism $(\Gamma_q)_x \to (\Gamma_p)_{\phi_{pq}(x)}$ for any $x \in V_{pq}$. Here $$ (\Gamma_q)_x = \{ \gamma \in \Gamma_{q} \mid \gamma x = x\}. 
$$ \end{enumerate} \end{defn} \begin{defn}\label{Definition A1.5}(\cite[Definition A1.5]{fooo:book1}) A {\it Kuranishi structure} on $X$ assigns a Kuranishi neighborhood $(V_p, E_p, \Gamma_p, \psi_p, s_p)$ for each $p \in X$ and a coordinate change $(\hat\phi_{pq},\phi_{pq},h_{pq})$ for each $q \in \psi_p(s_p^{-1}(0)/\Gamma_p)$ such that the following holds. \begin{enumerate} \item $\dim V_p - \operatorname{rank} E_p$ is independent of $p$. \par \item If $r \in \psi_q((V_{pq}\cap s_q^{-1}(0))/\Gamma_q)$, $q \in \psi_p(s_p^{-1}(0)/\Gamma_p)$, then there exists $\gamma_{pqr}^{\alpha} \in \Gamma_p$ for each connected component $(\phi_{qr}^{-1}(V_{pq}) \cap V_{qr} \cap V_{pr})_\alpha$ of $\phi_{qr}^{-1}(V_{pq}) \cap V_{qr} \cap V_{pr}$ such that $$ h_{pq} \circ h_{qr} = \gamma_{pqr}^{\alpha} \cdot h_{pr} \cdot (\gamma_{pqr}^{\alpha})^{-1} , \quad \phi_{pq} \circ \phi_{qr} = \gamma_{pqr}^{\alpha} \cdot \phi_{pr}, \quad \hat\phi_{pq} \circ \hat\phi_{qr} = \gamma_{pqr}^{\alpha}\cdot \hat\phi_{pr}. $$ Here the second equality holds on $(\phi_{qr}^{-1}(V_{pq}) \cap V_{qr} \cap V_{pr})_\alpha$ and the third equality holds on $E_r \times (\phi_{qr}^{-1}(V_{pq}) \cap V_{qr} \cap V_{pr})_\alpha$. \end{enumerate} \end{defn} Next we recall that, for a section $s$ of a vector bundle $E$ on a manifold $V$, the linearization of $s$ induces a canonical map from the restriction of the tangent bundle to the zero set $s^{-1}(0)$ to $E\vert_{s^{-1}(0)}$. 
We note that the differential $d_{\text{fiber}}s_p$ of the Kuranishi map induces a bundle map \begin{equation}\label{eq:tangent} d_{\text{fiber}}s_p ~:~ N_{V_{pq}}V_p \to \frac{\hat{\phi}_{pq}^*(E_p \times V_{p})}{E_q\times {V_{pq}}} \end{equation} of $\Gamma_q$-equivariant bundles on $V_{pq} \cap s_q^{-1}(0)$, and a commutative diagram \begin{equation} \begin{CD} 0 @ >>> T_{x}V_{pq} @ > {d_x\phi_{pq}} >> T_{\phi_{pq}(x)}V_{p} @ >>> (N_{V_{pq}}V_p)_x @ > >> 0\\ && @ V{ds_{q}}VV @ V{ds_p}VV @VVV\\ 0 @ >>> (E_q)_x @ >{\hat{\phi}_{pq}}>> (E_p)_{\phi_{pq}(x)} @>>> \frac{(E_p)_{\phi_{pq}(x)}}{(E_q)_x}@>>> 0 \end{CD} \end{equation} \begin{defn}[Tangent bundle]\label{defn:tangentbundle} \label{defn:Kura-tangent} We say that a {\it space with Kuranishi structure $(X,{\mathcal K})$ has a tangent bundle} if the map \eqref{eq:tangent} is an $h_{pq}$-equivariant bundle isomorphism on $V_{pq} \cap s_q^{-1}(0)$. \par We say it is orientable if the bundles $$ \det E_q^* \otimes \det TV_q\Big|_{s_q^{-1}(0) \cap \psi_q^{-1}(U_{pq})} $$ have trivializations compatible with the isomorphisms \eqref{eq:tangent}. We call a space with Kuranishi structure and tangent bundle a {\it Kuranishi space}. \end{defn} \begin{defn}\label{Definition A1.13} Consider the situation of Definition \ref{Definition A1.5}. Let $Y$ be a topological space. A family $\{f_p\}$ of $\Gamma_p$-equivariant continuous maps $f_p : V_p \to Y$ is said to be a {\it strongly continuous map} if $$ f_p \circ \phi_{pq} = f_q $$ on $V_{pq}$. A strongly continuous map induces a continuous map $f: X \to Y$. We will ambiguously denote $f = \{f_p\}$ when the meaning is clear. When $Y$ is a smooth manifold, a strongly continuous map $f: X \to Y$ is defined to be smooth if all $f_p:V_p \to Y$ are smooth. We say that it is {\it weakly submersive} if each $f_p$ is a submersion. 
\end{defn} \section{Definition of good coordinate system} \label{defgoodcoordsec} The construction of a multivalued perturbation (multisection) proceeds by induction on the coordinates. For this induction to work we need to make a clever choice of the coordinates we work with. Such a system of coordinates is called a \emph{good coordinate system}, which was introduced in \cite{FOn}. In this section we define the notion of a good coordinate system following \cite{FOn}. But, as an abstract framework, we require some additional conditions, for example Condition \ref{Joyce}, due to Joyce. In \cite{FOn} we shrank Kuranishi neighborhoods several times. To describe this procedure in great detail, we use these additional conditions. \begin{defn}\label{ofddef1} An orbifold is a special case of a Kuranishi space where all the obstruction bundles are trivial, i.e., $E_p=0$ for all $p \in X$. \end{defn} \begin{rem} If we try to define the notion of morphisms between Kuranishi spaces (spaces equipped with a Kuranishi structure), it seems necessary to work systematically in a 2-category. When one is interested in Kuranishi spaces in their own right, this is certainly the more natural approach. (This is, we suppose, the approach taken by D. Joyce \cite{joyce2}.) Our purpose is to use the notion of Kuranishi structure as a method of defining various invariants by means of abstract perturbations, and to apply them to symplectic geometry and/or mirror symmetry etc. For this reason, we take a route that is as short as possible and at the same time as general as possible for our particular purpose. \end{rem} See Section \ref{ofd} about our terminology on orbifolds. We will define there the notions of orbifolds, vector bundles on them, and their embeddings. Hereafter we denote \begin{equation} \mathcal U_{p} = \psi_{p}(s_{p}^{-1}(0)/\Gamma_{p}) \end{equation} which defines an open neighborhood of $p$ in $X$ by the assumption on $\psi_{p}$. 
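\par The following elementary example, which does not arise from any moduli problem and is included only to fix ideas, illustrates Definition \ref{Definition A1.1} and the role of transversality. Let $V_p = \mathbb{R}^2$, let $E_p = \mathbb{R}$, let $\Gamma_p = \mathbb{Z}/2$ act on $V_p$ by $(x,y) \mapsto (-x,-y)$ and trivially on $E_p$, and let $s_p(x,y) = x^2 + y^2$. Then $s_p$ is $\Gamma_p$-equivariant, $s_p^{-1}(0)/\Gamma_p$ is a single point, and the virtual dimension is $\dim V_p - \operatorname{rank} E_p = 1$. The Kuranishi map $s_p$ is not transversal to $0$, and indeed its zero set has dimension $0$ rather than $1$. A transversal $\Gamma_p$-equivariant perturbation such as $s_p - \epsilon$ (with $\epsilon > 0$) has zero set a circle, whose dimension equals the virtual dimension, while $s_p + \epsilon$ has empty zero set; both perturbed zero sets represent the same (trivial) bordism class.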
We modify the definition of good coordinate system in \cite[Lemma 6.3]{fooo:book1} as follows. In our definition of good coordinate system our `Kuranishi neighborhood' is an orbifold which may not be a global quotient. The definition is given in terms of orbifolds and the obstruction bundles, which are orbibundles in general. From now on, we call an orbibundle simply a vector bundle. \begin{defn}\label{goodcoordinatesystem} Let $X$ be a space with Kuranishi structure. A \emph{good coordinate system} on it consists of a finite partially ordered set $(\frak P,\le)$ and data $(U_{\frak p}, E_{\frak p}, \psi_{\frak p}, s_{\frak p})$ for each $\frak p \in \frak P$, with the following properties. \begin{enumerate} \item $U_{\frak p}$ is an orbifold of finite dimension, possibly with boundary or corners. \item $E_{\frak p}$ is a real vector bundle over $U_{\frak p}$. \item $s_{\frak p}$ is a section of $E_{\frak p} \to U_{\frak p}$. \item $\psi_{\frak p}$ is an open embedding of $s_{\frak p}^{-1}(0)$ into $X$. \item If $\frak q \le \frak p$, then there exists an embedding of vector bundles $$ \hat{\underline\phi}_{\frak p \frak q}:E_{\frak q}\vert_{U_{\frak p\frak q}} \to E_{\frak p} $$ over an embedding ${\underline\phi}_{\frak p\frak q}: U_{\frak p \frak q} \to U_{\frak p}$ of orbifolds such that \begin{enumerate} \item $U_{\frak p\frak q}$ is an open subset of $U_{\frak q}$ such that \begin{equation}\label{eq23} \psi_{\frak q}(U_{\frak p\frak q} \cap s_{\frak q}^{-1}(0)) = \psi_{\frak p}(U_{\frak p}\cap s_{\frak p}^{-1}(0)) \cap \psi_{\frak q}(U_{\frak q}\cap s_{\frak q}^{-1}(0)), \end{equation} \item $\hat{\underline\phi}_{\frak p\frak q}\circ s_\frak q = s_\frak p \circ \underline \phi_{\frak p\frak q}$, \, $\psi_\frak q = \psi_\frak p\circ \underline \phi_{\frak p\frak q}$, \item $d_{\text{fiber}}s_{\frak p}$ induces an isomorphism of vector bundles on $s_{\frak q}^{-1}(0)\cap U_{\frak p\frak q}$:
\begin{equation}\label{15tangent} N_{U_{\frak p\frak q}}U_\frak p \cong \frac{{\underline\phi}_{\frak p\frak q}^*E_\frak p} {(E_\frak q)\vert_{U_{\frak p\frak q}}}. \end{equation} \end{enumerate} \item If $\frak r \le \frak q \le \frak p$ and $\psi_{\frak p}(s_{\frak p}^{-1}(0)) \cap \psi_{\frak q}(s_{\frak q}^{-1}(0)) \cap \psi_{\frak r}(s_{\frak r}^{-1}(0)) \ne \emptyset$, then we have $$ \underline\phi_{\frak p\frak q} \circ \underline\phi_{\frak q\frak r} = \underline\phi_{\frak p\frak r}, \quad \hat{\underline\phi}_{\frak p\frak q} \circ \hat{\underline\phi}_{\frak q\frak r} = \hat{\underline\phi}_{\frak p\frak r}. $$ Here the first equality holds on ${\underline\phi}_{\frak q\frak r}^{-1}(U_{\frak p\frak q}) \cap U_{\frak q\frak r} \cap U_{\frak p\frak r}$, and the second equality holds on $(E_{\frak r})\vert_{{\underline\phi}_{\frak q\frak r}^{-1}(U_{\frak p\frak q}) \cap U_{\frak q\frak r} \cap U_{\frak p\frak r}}$. \par\medskip \item $$ \bigcup_{\frak p\in \frak P} \psi_{\frak p}(s_{\frak p}^{-1}(0)) = X. $$ \item If $\psi_{\frak p}(s_{\frak p}^{-1}(0)) \cap \psi_{\frak q}(s_{\frak q}^{-1}(0)) \ne \emptyset$, then either $\frak p\le \frak q$ or $\frak q \le \frak p$ holds. \item Conditions \ref{Joyce}, \ref{plusalpha}, \ref{plusalpha2} and \ref{proper} below hold. \end{enumerate} \end{defn} \begin{rem} For the definition of good coordinate system given here, we assume more conditions than those given in \cite{FOn}. We use them to describe more explicitly the process of shrinking the Kuranishi neighborhoods that enters the construction of multisections. \par On the other hand, we prove the existence of such a restrictive good coordinate system assuming the existence of a Kuranishi structure with \emph{the same definition} as the one given in \cite{fooo:book1}. Therefore the conclusion, namely the existence of the virtual fundamental chain or cycle associated to the Kuranishi structure, is the same as in \cite{fooo:book1}.
\end{rem} \begin{conds}[Joyce \cite{joyce}] \label{Joyce} Suppose $\frak p \ge \frak q \ge \frak r$. Then $$ \underline\phi_{\frak p\frak q}(U_{\frak p\frak q}) \cap \underline\phi_{\frak p\frak r}(U_{\frak p\frak r}) = \underline\phi_{\frak p\frak r} (\underline\phi_{\frak q\frak r}^{-1}(U_{\frak p\frak q}) \cap U_{\frak p\frak r}). $$ \end{conds} \begin{lem}\label{iikaejoyce} Condition \ref{Joyce} is equivalent to the following statement: \par If $\frak p \ge \frak q \ge \frak r$ and $x \in U_{\frak p\frak r}$, $y \in U_{\frak p\frak q}$ with $\underline{\phi}_{\frak p\frak r}(x) = \underline{\phi}_{\frak p\frak q}(y)$, then \begin{enumerate} \item $x \in \underline \phi_{\frak q\frak r}^{-1}(U_{\frak p\frak q}) \cap U_{\frak q\frak r}$, \item $\underline{\phi}_{\frak q\frak r}(x) = y$. \end{enumerate} \end{lem} \begin{proof} This is obvious. \end{proof} \begin{conds}\label{plusalpha} \begin{enumerate} \item If $ \bigcap_{i\in I} U_{\frak p_i\frak q} \ne \emptyset, $ then $ \bigcap_{i\in I} \mathcal U_{\frak p_i\frak q} \ne \emptyset. $ \item If $ \bigcap_{i\in I} \underline{\phi}_{\frak p\frak q_i}(U_{\frak p\frak q_i}) \ne \emptyset, $ then $ \bigcap_{i\in I} \mathcal U_{\frak p\frak q_i} \ne \emptyset. $ \end{enumerate} \end{conds} Here and hereafter we put \begin{equation} \mathcal U_{\frak p\frak q} = \psi_{\frak q}(s_{\frak q}^{-1}(0)\cap U_{\frak p\frak q}). \end{equation} Condition \ref{plusalpha} and Definition \ref{goodcoordinatesystem} (8) imply the following: \begin{lem}\label{intersection} Suppose $\frak q \le \frak p_j$ for $ j =1,\dots, J$ and $ \bigcap_{j=1}^{J} U_{\frak p_j\frak q} \ne \emptyset $. Then the set $\{\frak q\} \cup \{\frak p_j \mid j =1,\dots, J\}$ is linearly ordered. (Namely for each $\frak r,\frak s \in \{\frak q\} \cup \{\frak p_j \mid j =1,\dots, J\}$ at least one of $\frak r \ge \frak s$ or $\frak s \ge \frak r$ holds.)
\end{lem} \begin{conds}\label{plusalpha2} Suppose $U_{\frak p\frak r}\cap U_{\frak q\frak r} \ne \emptyset$ or ${\underline \phi}_{\frak q\frak r}^{-1} (U_{\frak p\frak q}) \ne \emptyset$. If in addition $\frak p \ge \frak q\ge \frak r$, then we have $$ {\underline \phi}_{\frak q\frak r}^{-1} (U_{\frak p\frak q}) = U_{\frak p\frak r} \cap U_{\frak q\frak r}. $$ \end{conds} \begin{conds}\label{proper} The map $ U_{\frak p\frak q} \to U_{\frak p} \times U_{\frak q} $ defined by \begin{equation} x \mapsto ({\underline \phi}_{\frak p\frak q}(x),x) \end{equation} is proper. \end{conds} The existence of a good coordinate system is proved in Section \ref{sec:existenceofGCS}. In the rest of this section, we introduce an equivalence relation on the disjoint union $$ \widetilde U(X;\frak P) = \bigcup_{\frak p\in \frak P} U_{\frak p} $$ and a quotient space $U(X;\frak P)$ thereof. We may use the set $U(X;\frak P)$ as a `global thickening' of $X$, in which a perturbation of the zero set $s^{-1}(0)$ will reside.\footnote{Actually we do {\it not} need to use such a space to define the virtual fundamental chain $f_*([X])$. We may instead take simplicial decompositions of the zero sets $s^{-1}_{\frak p}(0)$ (after appropriately shrinking the domains) so that they are compatible with the coordinate changes, by an induction on the partial order of the set $\frak P$, and use them to define $f_*([X])$. The existence of an appropriate shrinking is intuitively clear. (See \cite[Answer to Question 3]{Fu1}, for example.) However, writing this intuitive picture in detail without introducing a formal definition is rather cumbersome. (In our opinion, the details we provide in this article go beyond what is required in common research papers.)
This is the reason why we choose to define the set $U(X)$ explicitly.} For simplicity of notation, we omit the dependence on $\frak P$ and simply write $\widetilde U(X)$, $U(X)$ for $\widetilde U(X;\frak P)$, $U(X;\frak P)$, respectively. \begin{lem}\label{equiv} The following relation $\sim$ is an equivalence relation on $\widetilde U(X)$. \par\medskip Let $x \in U_{\frak p}$ and $y \in U_{\frak q}$. We say $x\sim y$ if and only if \begin{enumerate} \item $x = y$, or \item $\frak p \ge \frak q$ and $\underline\phi_{\frak p\frak q}(y) = x$, or \item $\frak q \ge \frak p$ and $\underline\phi_{\frak q\frak p}(x) = y$. \end{enumerate} \end{lem} \begin{proof} Only transitivity is nontrivial. Let $x_1 \sim x_2$, $x_2 \sim x_3$, $x_i \in U_{\frak p_i}$. \par Suppose $\frak p_1 \le \frak p_2 \le \frak p_3$. Condition \ref{plusalpha2} implies $U_{\frak p_3\frak p_1} \cap U_{\frak p_2\frak p_1} = (\underline\phi_{\frak p_2\frak p_1})^{-1}(U_{\frak p_3\frak p_2})$. Therefore Definition \ref{goodcoordinatesystem} (6) implies $$ x_3 = \underline\phi_{\frak p_3\frak p_2}(x_2) = \underline\phi_{\frak p_3\frak p_2} \underline\phi_{\frak p_2\frak p_1}(x_1) = \underline\phi_{\frak p_3\frak p_1}(x_1). $$ Namely $x_3 \sim x_1$. The case $\frak p_1 \ge \frak p_2 \ge \frak p_3$ is similar. \par Suppose $\frak p_1 \ge \frak p_2 \le \frak p_3$. Condition \ref{plusalpha} and Definition \ref{goodcoordinatesystem} (8) imply either $\frak p_1 \ge \frak p_3$ or $\frak p_1 \le \frak p_3$. Let us assume $\frak p_1 \le \frak p_3$. Then Condition \ref{plusalpha2} implies $x_2 \in \underline\phi_{\frak p_1\frak p_2}^{-1}(U_{\frak p_3\frak p_1} )$. Hence Definition \ref{goodcoordinatesystem} (6) implies $$ \underline\phi_{\frak p_3\frak p_1}(x_1) = \underline\phi_{\frak p_3\frak p_1}(\underline\phi_{\frak p_1\frak p_2}(x_2)) = x_3. $$ Namely $x_1 \sim x_3$. The case $\frak p_1 \ge \frak p_3$ is similar. \par Finally, let us assume $\frak p_1 \le \frak p_2 \ge \frak p_3$.
By Condition \ref{plusalpha} and Definition \ref{goodcoordinatesystem} (8), we have either $\frak p_1 \le \frak p_3$ or $\frak p_1 \ge \frak p_3$. Then Condition \ref{Joyce} implies $ x_3 = \underline\phi_{\frak p_3\frak p_1}(x_1)$ or $x_1 = \underline\phi_{\frak p_1\frak p_3}(x_3), $ as required. \end{proof} \begin{defn}\label{clrelation} We define $U(X)$ to be the set of $\sim$-equivalence classes. \par The map $\Pi_\frak p : U_\frak p \to U(X)$ sends an element of $U_{\frak p}$ to its equivalence class. The map $\psi_\frak p^{-1}: \psi_\frak p(s_{\frak p}^{-1}(0) \cap U_\frak p) \to s_{\frak p}^{-1}(0) \cap U_\frak p$ followed by the restriction of $\Pi_\frak p$ to $s_{\frak p}^{-1}(0) \cap U_\frak p$ defines an injective map \begin{equation}\label{eq:iotap} \iota_\frak p: \psi_\frak p(s_{\frak p}^{-1}(0) \cap U_\frak p) \to U(X). \end{equation} \begin{lem}\label{lem:glueiotap} The maps \eqref{eq:iotap} are consistent on the overlaps. We denote the resulting global map by $I: X \to U(X)$. \end{lem} \begin{proof} Let $p \in X$ and suppose \begin{equation}\label{eq:p=} p = \psi_\frak p(x) = \psi_{\frak q}(y) \end{equation} for $x \in s_{\frak p}^{-1}(0) \cap U_\frak p$ and $y \in s_{\frak q}^{-1}(0) \cap U_{\frak q}$. It is enough to prove $x \sim y$. By Definition \ref{goodcoordinatesystem} (8), either $\frak p = \frak q$, $\frak p < \frak q$ or $\frak q < \frak p$. If $\frak q = \frak p$, we must have $x = y$ since $\psi_\frak p: s_{\frak p}^{-1}(0) \cap U_\frak p \to X$ is one-one. For the remaining two cases, we will focus on the case $\frak q < \frak p$ since the other case is the same. In this case we are given an embedding of orbifolds $\underline \phi_{\frak p\frak q}:U_{\frak p\frak q} \to U_\frak p$, and by Definition \ref{goodcoordinatesystem} (5-b) we have $ \psi_\frak q = \psi_\frak p \circ \underline \phi_{\frak p\frak q} $ on $U_{\frak p\frak q}$.
On the other hand, it follows from \eqref{eq23} and \eqref{eq:p=} that $p = \psi_\frak q(x')$ with $x' \in s_{\frak q}^{-1}(0) \cap U_{\frak p\frak q} \subset s_{\frak q}^{-1}(0) \cap U_\frak q$. Since $\psi_\frak q$ is one-one on $s_{\frak q}^{-1}(0) \cap U_\frak q$, $x' = y$. Then we derive $$ \psi_\frak p(\underline\phi_{\frak p\frak q}(y)) = \psi_\frak q(y) = p $$ from Definition \ref{goodcoordinatesystem} (5-b). Since $\psi_\frak p (x) = p$ as well and $\psi_\frak p$ is one-one on $s_{\frak p}^{-1}(0) \cap U_\frak p$, we obtain $x = \underline\phi_{\frak p\frak q}(y)$. This proves $x \sim y$, which finishes the proof of the lemma. \end{proof} \end{defn} \begin{prop}\label{goodcoordinateprop2} Suppose we have a good coordinate system. Then there exist open subsets $U'_{\frak p} \subset U_{\frak p}$ and $U'_{\frak p\frak q} \subset U_{\frak p\frak q}$ such that the restrictions to $U'_{\frak p}$ and $U'_{\frak p\frak q}$ give a good coordinate system, and $U'_{\frak p}$ and $U'_{\frak p\frak q}$ are relatively compact in $U_{\frak p}$ and $U_{\frak p\frak q}$, respectively. \end{prop} \begin{proof} We take an open subset $U'_{\frak p} \subset U_{\frak p}$ for each $\frak p$ that is relatively compact in $U_{\frak p}$ and such that $$ \bigcup_{\frak p\in \frak P}\psi_{\frak p}(s_{\frak p}^{-1}(0) \cap U'_{\frak p}) = X. $$ We may choose them so that \begin{equation}\label{for04} \bigcap_{i\in I} \mathcal U_{\frak p_i} \ne \emptyset \,\,\, \Leftrightarrow \,\,\, \bigcap_{i\in I} \mathcal U'_{\frak p_i} \ne \emptyset, \end{equation} where $\mathcal U'_{\frak p} = \psi_{\frak p}(s_{\frak p}^{-1}(0) \cap U'_{\frak p})$. We put \begin{equation}\label{defUprime} U'_{\frak p\frak q} = U_{\frak p\frak q} \cap U'_{\frak q} \cap \underline\phi_{\frak p\frak q}^{-1}(U'_{\frak p}). \end{equation} Condition \ref{proper} implies that $U'_{\frak p\frak q}$ is relatively compact in $U_{\frak p\frak q}$. It is straightforward to check that these sets satisfy the conditions in Definition \ref{goodcoordinatesystem}.
\end{proof} \begin{rem}\label{chooseR} \begin{enumerate} \item If a compact subset $\mathcal K_{\frak p}$ of $\mathcal U_\frak p$ is given for each $\frak p$, then we may choose $U'_\frak p$ etc.\ in Proposition \ref{goodcoordinateprop2} so that $\mathcal U'_\frak p$ contains $\mathcal K_\frak p$. \item On the other hand, we may choose $U'_{\frak p}$ as small as we want as long as the condition $\bigcup_{\frak p \in \frak P}\mathcal U'_{\frak p}= X$ is satisfied. In fact, at the beginning of the proof we take $U'_{\frak p}$ so that this is satisfied and do not need to change it. \item In the case of the good coordinate system we produce in Section \ref{sec:existenceofGCS}, the index set $\frak P$ is a subset of the set of natural numbers with the obvious order $<$. So it is in fact linearly ordered. Some of the combinatorial problems we took care of above are simpler in that case. \end{enumerate} \end{rem} We define $U'(X)$ from $U'_{\frak p}$ and $U'_{\frak p\frak q}$ in the same way as in Definition \ref{clrelation}. Let $K_{\frak p}$ be the closure of $U'_{\frak p}$ in $U_{\frak p}$, which is compact. We have: \begin{equation} \bigcup_{\frak p \in \frak P}\psi_{\frak p}(s_{\frak p}^{-1}(0) \cap K_{\frak p}) = X. \end{equation} For each $\frak p > \frak q$ we put $$ K_{\frak p\frak q} = \underline\phi_{\frak p\frak q}^{-1}(K_{\frak p}) \cap K_{\frak q}. $$ Since $\underline\phi_{\frak p\frak q}$ is proper, it follows that $K_{\frak p\frak q}$ is compact. In the same way as in the proof of Lemma \ref{equiv}, the sets $K_{\frak p\frak q}$ and the restrictions of $\underline\phi_{\frak p\frak q}$ to them induce an equivalence relation. So we are in the following situation. \begin{assump}\label{Kassumption} \begin{enumerate} \item $(\frak P,\le)$ is a finite partially ordered set. \item $K_{\frak p}$ is a compact Hausdorff space for each $\frak p$.
\item For $\frak p,\frak q \in \frak P$ with $\frak p > \frak q$, $K_{\frak p\frak q} \subset K_{\frak q}$ is a compact subset and $\underline\phi_{\frak p\frak q} : K_{\frak p\frak q} \to K_{\frak p}$ is an embedding. \item We define a relation $\sim$ on $\widetilde K(\frak P) = \bigcup_{\frak p \in \frak P} K_{\frak p}$ (disjoint union) as follows. Let $x \in K_{\frak p}$ and $y \in K_{\frak q}$. We say $x\sim y$ if and only if \begin{enumerate} \item $x = y$, or \item $\frak p \ge \frak q$ and $\underline\phi_{\frak p\frak q}(y) = x$, or \item $\frak q \ge \frak p$ and $\underline\phi_{\frak q\frak p}(x) = y$. \end{enumerate} Then $\sim$ is an equivalence relation. \end{enumerate} \end{assump} Let $K(\frak P)$ be the set of $\sim$-equivalence classes of $\widetilde K(\frak P)$. We equip it with the quotient topology. \par In this situation the following holds. \begin{prop}\label{metrizable} In addition to Assumption \ref{Kassumption}, assume that each $K_{\frak p}$ satisfies the second axiom of countability and is locally compact. Then $K(\frak P)$ is metrizable. (In particular it is Hausdorff.) \end{prop} The proof of this proposition will be given in Section \ref{gentoplem}. We continue our discussion. \begin{defn}\label{Imapdef} We define a map $\frak J_{K(\frak P)U'(X)} : U'(X) \to K(\frak P)$ by sending the ${\sim}$-equivalence class $[x]$ of $x \in \widetilde U'(X)$ to the equivalence class of $x$ in $K(\frak P)$. \end{defn} By the definition (\ref{defUprime}) of $U'_{\frak p\frak q}$, we find that if $\tilde x \sim \tilde y$ in $\widetilde U(X)$ for $\tilde x, \tilde y \in \widetilde U'(X)$, then $\tilde x \sim \tilde y$ in $\widetilde U'(X)$. Therefore $\frak J_{K(\frak P)U'(X)}$ is injective. \begin{defn} We equip $U'(X)$ with the weakest topology for which the map $\frak J_{K(\frak P)U'(X)} : U'(X) \to K(\frak P)$ is continuous (with respect to the topology of $K(\frak P)$). Henceforth we simply call this kind of topology the \emph{weak topology}.
\end{defn} The weak topology on $U'(X)$ is nothing but the subspace topology of $K(\frak P)$ if we identify $U'(X)$ with its image in $K(\frak P)$ under the injective map $\frak J_{K(\frak P)U'(X)}$. Hereafter we use this topology on $U'(X)$ only, not the quotient topology of $ U'(X) = \widetilde U'(X)/\sim $, unless otherwise mentioned explicitly. \begin{rem}\label{rem520} The weak topology of $U'(X)$ is Hausdorff since $K(\frak P)$ is Hausdorff and the map $\frak J_{K(\frak P)U'(X)} : U'(X) \to K(\frak P)$ is injective. This topology is in general different from the quotient topology on $U'(X)$ regarded as a set of equivalence classes. We sometimes also call this topology the induced topology when no confusion can occur. In fact the quotient topology does not necessarily satisfy the first axiom of countability, as pointed out in \cite[Example 6.1.14]{MW1}. \end{rem} \begin{cor}\label{corollary521} $U'(X)$ (equipped with the weak topology) is metrizable. \end{cor} \begin{proof} This is a consequence of Proposition \ref{metrizable}. \end{proof} We start from $U'_{\frak p}$ and repeat the process. Namely we take relatively compact subsets $U''_{\frak p}$ and define $U''(X)$. We use $K'_{\frak p}$, the closure of $U''_{\frak p}$, and define $K'(\frak P)$. We define an injective map $\frak J_{U'(X)U''(X)}$ in the same way as in Definition \ref{Imapdef}. We equip $U''(X)$ with the weak topology of the map $\frak J_{U'(X)U''(X)}$. \begin{lem}\label{fraIUUhomeo} $\frak J_{U'(X)U''(X)}$ is a topological embedding, i.e., a homeomorphism onto its image. \end{lem} \begin{proof} We define $\frak J_{K(\frak P)K'(\frak P)} : K'(\frak P) \to K(\frak P)$ in the same way. It is injective. Since $K'(\frak P)$ is compact and $K(\frak P)$ is Hausdorff, $\frak J_{K(\frak P)K'(\frak P)}$ is a topological embedding.
We then have the diagram $$ \xymatrix{U'(X) \ar[r] & K(\frak P) \\ U''(X) \ar[u] \ar[r] & K'(\frak P) \ar[u]} $$ in which the two horizontal arrows and the right vertical arrow are all topological embeddings. The lemma immediately follows from this. \end{proof} \begin{rem} Hereafter we further shrink $U'_{\frak p}$ several times by taking relatively compact subsets. We always equip the resulting spaces with {\it the weak topology} of the relevant injective map. Lemma \ref{fraIUUhomeo} implies that the weak topology induced from $K'(\frak P) = (\bigcup \overline U''_{\frak p})/\sim$ on $U''(X)$ coincides with the one induced from $K(\frak P)$. \end{rem} We define a map $\Pi_{\frak p} : U'_{\frak p} \to U'(X)$ as before and define the canonical injective map $I': X \to U'(X)$ as in Lemma \ref{lem:glueiotap} applied to $U'(X)$. \begin{lem} \begin{enumerate} \item $I': X \to U'(X)$ is a topological embedding. \item $\Pi_{\frak p} : U'_{\frak p} \to U'(X)$ is a topological embedding. \end{enumerate} \end{lem} \begin{proof} The well-definedness of $I'$ is proved in Lemma \ref{lem:glueiotap}. Statement (1) follows from the fact that $I'$ is injective and continuous, $X$ is compact and $U'(X)$ is Hausdorff. \par We consider a map $\Pi_{\frak p} : K_{\frak p} \to K(\frak P)$, which is defined in the same way. This map is injective and continuous. Moreover $K_{\frak p}$ is compact and $K(\frak P)$ is Hausdorff. Therefore $\Pi_{\frak p} : K_{\frak p} \to K(\frak P)$ is a topological embedding. Since $\Pi_{\frak p} : U'_{\frak p} \to U'(X)$ is its restriction and the topologies of $U'_{\frak p}$ and of $U'(X)$ are the weak topologies, the lemma follows. \end{proof} Hereafter we write $I$ for $I' : X \to U'(X)$ also. \begin{rem} Hereafter we equip $U'(X)$, and the similar spaces obtained by restricting $U'_{\frak p}$ to relatively compact subsets several times, with a metric. We fix a metric on $K(\frak P)$, and the metric we use is always the restriction of this metric.
This metric is compatible with the weak topology on $U'(X)$. Since $K(\frak P)$ is compact, the metric on it is unique up to equivalence. (Here two metrics $d$ and $d'$ are said to be equivalent if there exist homeomorphisms $\Phi_i : \R_{\ge 0} \to \R_{\ge 0}$ ($i=1,2$) such that $$ \Phi_1(d(x,y)) \le d'(x,y) \le \Phi_2(d(x,y)) $$ for all $x,y$.) \end{rem} \begin{lem}\label{openneighborhood} Let $x = \psi_\frak p(\tilde x) \in X$ and $\tilde x \in s_\frak p^{-1}(0) \subset U'_\frak p$. Then there exists a neighborhood $\frak O_\frak p(x)$ of $\tilde x$ in $U'_\frak p$ such that \begin{enumerate} \item $\Pi_\frak p : \frak O_\frak p(x) \to U'(X)$ is a topological embedding. \item $\Pi_\frak p(\frak O_\frak p(x))$ is an open subset of $ \bigcup_{\frak q\le \frak p} \Pi_\frak q(U'_\frak q). $ \end{enumerate} \end{lem} \begin{proof} Choose an open neighborhood $\frak O_\frak p(x) \subset U'_\frak p$ of $\tilde x$ that is relatively compact in $U'_\frak p$. Clearly the map $\Pi_\frak p : \frak O_\frak p(x) \to U'(X)$ is injective and continuous. By this choice, it extends continuously to the closure of $\frak O_\frak p(x)$, which is compact. Since $U'(X)$ equipped \emph{with the weak topology} is Hausdorff, (1) follows. \par We next prove (2). Let $\frak q \le \frak p$. Since $\Pi_\frak p : \frak O_\frak p(x) \to U'(X)$ is an embedding and its image is relatively compact, we may assume $\Pi_\frak p (\frak O_\frak p(x)) \subset U''(X)$ by choosing $U''(X)$ appropriately, where $U''(X)$ is as in Lemma \ref{fraIUUhomeo}. Therefore it suffices to show that $\Pi_\frak p (\frak O_\frak p(x))$ is open in the quotient topology of $U'(X)$. (This is because $K'(\frak P) \to U'(X)$ is continuous with respect to the quotient topologies.) For this purpose, it suffices to show that $$ (\Pi_\frak q)^{-1}(\Pi_\frak p(\frak O_\frak p(x))) $$ is open in $U'_\frak q$.
In fact $$ \underline{\phi}_{\frak p\frak q}^{-1}(\frak O_\frak p(x)) \cap U'_{\frak p\frak q} = \underline{\phi}_{\frak p\frak q}^{-1}(\frak O_\frak p(x)) \cap \left(U_{\frak p\frak q} \cap U'_\frak q \cap \underline{\phi}_{\frak p\frak q}^{-1}(U'_\frak p)\right) = \underline{\phi}_{\frak p\frak q}^{-1}(\frak O_\frak p(x)) \cap U'_\frak q. $$ But we have $\underline{\phi}_{\frak p\frak q}^{-1}(\frak O_\frak p(x)) \cap U'_\frak q \subset \underline{\phi}_{\frak p\frak q}^{-1}(U'_\frak p) \cap U'_\frak q$ and hence by definition, $$ \underline{\phi}_{\frak p\frak q}^{-1}(\frak O_\frak p(x)) \cap U'_\frak q = \Pi_\frak q^{-1}(\Pi_\frak p(\frak O_\frak p(x))). $$ Combining these, we have finished the proof. \end{proof} The next lemma plays a key role in the next section in establishing basic properties of the virtual fundamental chain. \begin{lem}\label{unionofofd} For any $x \in X$ there exist $\frak q_{1}, \dots, \frak q_{m} \in \frak P$ with $\frak q_{1} \le \dots \le \frak q_{m}$ and open sets $\Omega_{\frak q_i}(x) \subset U''_{\frak q_i}$ with the following properties. \begin{enumerate} \item $x \in \mathcal U''_{\frak q_1}$ and $x \in \overline{\mathcal U''_{\frak q_i}}$ for $i=2,\dots,m$. \item $\psi_{\frak q_1}^{-1}(x) \in \Omega_{\frak q_1}(x)$. \item $\psi_{\frak q_i}^{-1}(x) \in \overline \Omega_{\frak q_i}(x) \setminus \Omega_{\frak q_i}(x)$ for $i>1$. Here the closure is taken in $U_{\frak q_i}$. \item The map $\Pi_{\frak q_i} : \Omega_{\frak q_i}(x) \to U''(X)$ is a topological embedding. \item The union of the images of $\Pi_{\frak q_i} : \Omega_{\frak q_i}(x) \to U''(X)$ is a neighborhood of $I(x)$. \item $\dim U_{\frak q_1} < \dim U_{\frak q_i}$ for $i\ne 1$. \end{enumerate} \end{lem} \begin{proof} By Lemma \ref{intersection} there exists a maximal $\frak q_1$ such that $x \in \mathcal U''_{\frak q_1}$.
\begin{sublem}\label{linearordercotto} There exists a maximal element $\frak q_m$ of the set $ \{ \frak q\in \frak P \mid x \in \overline{\mathcal U''_\frak q}\}. $ \end{sublem} \begin{proof} Let $\frak q,\frak q' \in \{ \frak q\in \frak P \mid x \in \overline{\mathcal U''_\frak q}\}. $ Since the closure of $\mathcal U''_\frak q$ is contained in $\mathcal U'_\frak q$, it follows that $\mathcal U_\frak q \cap \mathcal U_{\frak q'} \ne \emptyset$. Therefore by (\ref{for04}), $\mathcal U''_\frak q \cap \mathcal U''_{\frak q'} \ne \emptyset$. The sublemma follows from Definition \ref{goodcoordinatesystem} (8). \end{proof} By Sublemma \ref{linearordercotto} we can take $\frak q_1,\dots,\frak q_m$ such that $\frak q_{1} \le \dots \le \frak q_{m}$ and $$ \{ \frak q\mid x \in \overline{\mathcal U''_\frak q} \} \cap \{ \frak q\mid \dim U_{\frak q} > \dim U_{\frak q_1} \} = \{\frak q_i \mid i=2,\dots,m \}. $$ Then we can use Lemma \ref{openneighborhood} to find the required $\Omega_{\frak q_i}(x) = \frak O_{\frak q_i}(x) \cap U''_{\frak q_i}$. (We do not need $\frak q_i$ ($i\ne 1$) with $\dim U_{\frak q_1} = \dim U_{\frak q_i}$ since $x$ is in (the interior of) $\Omega_{\frak q_1}(x)$.) \end{proof} We put \begin{equation}\label{frakUdef} \frak U(x) = \bigcup_{i=1,\dots,m}\Pi_{\frak q_i}(\Omega_{\frak q_i}(x)). \end{equation} We take the intersection of $\frak U(x)$ and a sufficiently small open neighborhood of $x$ in $U''(X)$. Then the property (4) of Definition \ref{goodcoordinatesystem} implies that it gives an open neighborhood of $x$ in $U''(X)$. Such $\frak U(x)$ forms a neighborhood basis of $x$. (See the proof of Sublemmas \ref{9sublem1}, \ref{9sublem2}.) \begin{exm}\label{omeganeighex} We consider the following three subsets $U''_i$ ($i=1,2,3$) of $\R^3$: $U''_1$ is the $x$-axis, $ U''_2 = \{(x,y,0) \mid x > -y^2\}$, and $ U''_3 = \{(x,y,z) \mid x >0\}. $ We put $U''(X) = U''_1\cup U''_2 \cup U''_3$ and consider $\vec 0 = (0,0,0) \in U''(X)$.
Then the neighborhood $\frak U(\vec 0)$ we described above is typically obtained as follows: We take $\Omega_1 = \{(x,0,0) \mid \vert x\vert < 3\epsilon\}$, $\Omega_2 = \{(x,y,0) \mid x > -y^2, \,\, x^2+y^2 < (2\epsilon)^2\}$, $\Omega_3 = \{(x,y,z) \mid x> 0, x^2+y^2 + z^2< \epsilon^2\}$. The union of these three sets is $\frak U(\vec 0)$. \end{exm} \section{Construction of virtual fundamental chain} We now start the construction of the perturbation and the virtual fundamental chain. We use the notion of multisection for this purpose. (The method of using abstract multivalued perturbations to define a virtual fundamental chain or cycle for moduli spaces of pseudo-holomorphic curves was introduced in January 1996 in \cite{FuOn99I}.) \par Let us review the definition of a multisection here. We assume that a finite group $\Gamma$ acts on a manifold $V$ and on a vector space $E$. The symmetric group $\frak S_n$ of order $n!$ acts on the product $E^n$ by $$ \sigma(x_1,\ldots,x_n) = (x_{\sigma(1)},\ldots,x_{\sigma(n)}). $$ Let $S^n(E)$ be the quotient space $E^n/\frak S_n$. Then the $\Gamma$-action on $E$ induces an action on $S^n(E)$. The map $E^n \to E^{nm}$ defined by $$(x_1,\ldots,x_n) \mapsto (\underbrace{x_1,\ldots,x_1}_{\text{$m$ times}},\underbrace{x_2,\ldots,x_2}_{\text{$m$ times}},\ldots,\underbrace{x_n,\ldots,x_n}_{\text{$m$ times}})$$ induces a $\Gamma$-equivariant map $S^n(E) \to S^{nm}(E)$. \par \begin{defn} An {\it $n$-multisection} $s$ of $\pi : E\times V \to V$ is a $\Gamma$-equivariant map $V \to S^n(E)$. We say that it is {\it liftable} if there exists $\widetilde s = (\widetilde{s}_1,\ldots,\widetilde{s}_n): V \to E^n$ whose composition with the quotient map $E^n \to S^n(E)$ is $s$. (We do not assume $\widetilde s$ to be $\Gamma$-equivariant.) Each of $\widetilde{s}_1,\ldots,\widetilde{s}_n$ is said to be a {\it branch} of $s$. \par If $s : V \to S^n(E)$ is an $n$-multisection, then it induces an $nm$-multisection for each $m$ by composing it with $S^n(E) \to S^{nm}(E)$.
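\par For instance (a standard toy illustration, not part of the definition): let $V$ be a point, $E = \R$, and let $\Gamma = \{\pm 1\}$ act trivially on $V$ and by multiplication on $E$. A $\Gamma$-equivariant single-valued section satisfies $s = -s$, hence vanishes identically and is never transversal to zero. On the other hand, for any $\epsilon \ne 0$, $$ s = [\epsilon, -\epsilon] \in S^2(E) $$ defines a $\Gamma$-equivariant $2$-multisection; it is liftable with branches $\pm\epsilon$, each of which is nowhere zero and hence transversal to zero. This is the basic reason why multivalued perturbations are needed in the presence of nontrivial finite group actions.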
\par An $n$-multisection $s$ is said to be {\it equivalent} to an $m$-multisection $s'$ if the induced $nm$-multisections coincide with each other. An equivalence class of this equivalence relation is called a {\it multisection}. \par A liftable multisection is said to be {\it transversal} to zero if each of its branches is transversal to zero. \par A family of multisections $s_{\epsilon}$ is said to {\it converge} to $s$ as $\epsilon \to 0$ if there exists $n$ such that each $s_{\epsilon}$ is represented by an $n$-multisection $s_{\epsilon}^n$ and $s_{\epsilon}^n$ converges to an $n$-multisection representing $s$. \end{defn} From now on we assume all the multisections are liftable unless otherwise stated. \begin{defn} Let $U$ be an orbifold (which is not necessarily a global quotient) and $E$ a vector bundle on $U$ in the sense of Definition \ref{orbibundledef}. A {\it multisection} of $E$ on $U$ is given by $\{U_i\}$ and $\{s_i\}$, where: \begin{enumerate} \item $U_i = V_i/\Gamma_i$ is a coordinate system of our orbifold $U$ in the sense of Definition \ref{ofddefn}. \item $E\vert_{U_i} = (E_i\times V_i)/\Gamma_i$. \item $s_i : V_i \to S^{n_i}(E_i)$ is an $n_i$-multisection of the restriction $E_i$ of our bundle $E$ to $U_i$. \item The restriction of $s_i$ to $U_i \cap U_j$ is equivalent to the restriction of $s_j$ to $U_i \cap U_j$ for each $i,j$. \end{enumerate} We say $(\{U_i\},\{s_i\})$ and $(\{U'_i\},\{s'_i\})$ define the same multisection if the restriction of $s_i$ to $U_i\cap U'_j$ is equivalent to the restriction of $s'_j$ to $U_i\cap U'_j$ for each $i,j$. \par Liftability, transversality, and convergence can be defined in the same way as above. \end{defn} We now start the construction of the perturbation (a system of multisections of the obstruction bundles). Using Proposition \ref{goodcoordinateprop2}, we shrink the Kuranishi neighborhoods $U_{\frak p}$ as follows.
\par First we take an extension of the subbundle $\hat{\underline\phi}_{\frak p\frak q}(E_{\frak q} \vert_{U'_{\frak p\frak q}})$ of $E_{\frak p}\vert_{{\underline\phi}_{\frak p\frak q}(U'_{\frak p\frak q})}$ to a neighborhood of ${\underline\phi}_{\frak p\frak q}(U'_{\frak p\frak q})$ in $U_{\frak p}$.\footnote{Here and hereafter we write $E_{\frak p}$ in place of $(E_{\frak p} \times V_{\frak p})/\Gamma_{\frak p}$ for simplicity.} We also fix a splitting \begin{equation}\label{splittingsubbundle} E_{\frak p}= E_{\frak q} \oplus E_{\frak q}^{\perp} \end{equation} on a neighborhood of ${{\underline\phi}_{\frak p\frak q}(U'_{\frak p\frak q})}$. We can take such an extension of the subbundle and such a splitting since $U'_{\frak p\frak q}$ is a relatively compact open subset of $U_{\frak p\frak q}$, which is a suborbifold of $U_{\frak p}$. \par Using the splitting (\ref{splittingsubbundle}), the normal differential \begin{equation}\label{dfiber} d_{\rm fiber}s_{\frak p} : N_{U'_{\frak p\frak q}}U'_{\frak p} \to \frac{{\underline\phi}_{\frak p\frak q}^*E_{\frak p}}{E_{\frak q}} \end{equation} is defined. (Note that without fixing the splitting, $d_{\rm fiber}s_{\frak p}$ is well-defined only on $s_{\frak q}^{-1}(0) \cap U_{\frak p\frak q}$.) \par By the definition of the tangent bundle, the map (\ref{dfiber}) is a bundle isomorphism on $s_{\frak q}^{-1}(0) \cap U'_{\frak p\frak q}$. We take an open neighborhood $\frak W'_{\frak p\frak q}$ of $s_{\frak q}^{-1}(0) \cap U'_{\frak p\frak q}$ in $U'_{\frak q}$ so that (\ref{dfiber}) remains a bundle isomorphism on $\frak W'_{\frak p\frak q}$. \par We take $U''_{\frak q}$ for each $\frak q$ so that $$ U''_{\frak q} \cap U'_{\frak p\frak q} \subset \frak W'_{\frak p\frak q} $$ for each $\frak p$. (We can take such $U''_{\frak q}$ by Remark \ref{chooseR} (2).) Thus from now on we may assume that (\ref{dfiber}) is an isomorphism on $U_{\frak p\frak q}$. \par We start with these $U_{\frak p}$, $U_{\frak p\frak q}$ and repeat the construction of the last section.
Namely we take $U^{(n)}_\frak p, U^{(n)}_{\frak p\frak q}$ such that \begin{enumerate} \item The conclusion of Proposition \ref{goodcoordinateprop2} is satisfied when we replace $U_\frak p, U_{\frak p\frak q}$ by $U^{(n-1)}_\frak p, U^{(n-1)}_{\frak p\frak q}$ and $U^{\prime}_\frak p, U^{\prime}_{\frak p\frak q}$ by $U^{(n)}_\frak p, U^{(n)}_{\frak p\frak q}$. \item The conclusions of Lemmas \ref{iikaejoyce}-\ref{unionofofd} hold for $U^{(n)}_\frak p, U^{(n)}_{\frak p\frak q}$. \item $U^{(1)}_\frak p, U^{(1)}_{\frak p\frak q}$ are $U'_\frak p, U'_{\frak p\frak q}$, respectively. \end{enumerate} Let $U^{(n)}(X)$ be the space obtained from $U^{(n)}_\frak p, U^{(n)}_{\frak p\frak q}$ as in Definition \ref{clrelation}. Let us consider the good coordinate system $(U_\frak p, E_\frak p, \psi_\frak p, s_\frak p)$, $\frak p\in \frak P$, of our Kuranishi structure. Let $\#\frak P = N$. We put $\frak P = \{\frak p_1,\ldots,\frak p_N\}$, where $\frak p_i < \frak p_j$ only if $i < j$. We take $n = 10N^2$ and consider $U^{(n)}_\frak p, U^{(n)}_{\frak p\frak q}$ as above. \begin{prop}\label{existmulti} For each $\epsilon>0$, there exists a system of multisections $s_{\epsilon,\frak p}$ on $U^{(n)}_\frak p$ for $\frak p \in \frak P$ with the following properties. \begin{enumerate} \item $s_{\epsilon,\frak p}$ is transversal to $0$. \item $s_{\epsilon,\frak p}\circ{\underline\phi}_{\frak p\frak q} = \hat{\underline\phi}_{\frak p\frak q}\circ s_{\epsilon,\frak q}$. \item The derivative of (an arbitrary branch of) $s_{\epsilon,\frak p}$ induces an isomorphism \begin{equation}\label{2tangent} N_{U_{\frak p\frak q}}U_\frak p \cong \frac{{\underline\phi}_{\frak p\frak q}^*E_\frak p}{E_\frak q\vert_{U_{\frak p\frak q}}} \end{equation} that coincides with the isomorphism (\ref{dfiber}). \item The $C^0$ distance of $s_{\epsilon,\frak p}$ from $s_\frak p$ is smaller than $\epsilon$. \end{enumerate} \end{prop} The proof of Proposition \ref{existmulti} is by double induction.
Since the indices appearing in the proof are rather cumbersome, we explain the construction in two simple cases before giving the proof of Proposition \ref{existmulti}. \par\medskip We first explain the case where we have two Kuranishi neighborhoods $U_1, U_2$ and the coordinate transformation $\underline{\phi}_{21} : U_{21} \to U_2$, where $U_{21} \subset U_1$. \par The first step is to construct a multisection $s_{1,\epsilon}$ on $U_1$ that is a perturbation of the Kuranishi map $s_1$ and is transversal to $0$. Existence of such $s_{1,\epsilon}$ is a consequence of \cite[Lemma 3.14]{FOn}. \par We next extend it to $U_2$ as follows. We take relatively compact subsets $U_i^{(1)} \subset U_i$ such that $\psi_i(s_i^{-1}(0) \cap U_i^{(1)})$ ($i=1,2$) still cover our space $X$. We put $U_{21}^{(1)} = U_1^{(1)} \cap \underline{\phi}_{21}^{-1}(U_2^{(1)})$. This set is relatively compact in $U_{21}$. \par We consider the normal bundle $N_{U_{21}^{(1)}}U_2$ and identify its disk bundle with a tubular neighborhood ${\mathcal N}_{U_{21}^{(1)}}U_2$. \par Using the fiber derivative of $s_2$ we have an isomorphism $$ E_2\vert_{{\mathcal N}_{U_{21}^{(1)}}U_2} \cong \pi^*(E_1 \oplus E_1^\perp) \cong \pi^* (E_1 \oplus N_{U_{21}^{(1)}} U_2). $$ By the implicit function theorem, the $E_1^\perp$-component of the Kuranishi map $s_2$ induces a diffeomorphism from a sufficiently small tubular neighborhood of $U_{21}^{(1)}$ in $U_2$ onto a neighborhood of the zero section of $E_1^\perp$. We can extend $s_{1,\epsilon}$ to the tubular neighborhood so that the first factor is the pullback of $s_{1,\epsilon}$ and the second factor is the $E_1^\perp$-component of the Kuranishi map $s_2$, which is regarded as the inclusion of ${\mathcal N}_{U_{21}^{(1)}}U_2$ into $N_{U_{21}^{(1)}}U_2$. We denote the extension we obtain by $s'_{\epsilon,2}$.
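\par Schematically, writing $\pi : {\mathcal N}_{U_{21}^{(1)}}U_2 \to U_{21}^{(1)}$ for the projection of the tubular neighborhood and using the identification $E_2\vert_{{\mathcal N}_{U_{21}^{(1)}}U_2} \cong \pi^* (E_1 \oplus N_{U_{21}^{(1)}} U_2)$ above, the extension just constructed takes the form $$ s'_{\epsilon,2}(y) = \bigl(s_{1,\epsilon}(\pi(y)),\, \iota(y)\bigr), $$ where $\iota : {\mathcal N}_{U_{21}^{(1)}}U_2 \to N_{U_{21}^{(1)}} U_2$ denotes the inclusion. (This display is only a schematic summary of the construction described above.)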
\par Now we use the relative version of \cite[Lemma 3.14]{FOn} to extend $s'_{\epsilon,2}$ to $U_2$ further so that the extension coincides with $s'_{\epsilon,2}$ on a slightly smaller tubular neighborhood. \par\medskip We next explain the case where we have three Kuranishi neighborhoods $U_1, U_2, U_3$, coordinate changes $\underline{\phi}_{21}, \underline{\phi}_{32}, \underline{\phi}_{31}$ and $U_{21}, U_{32}, U_{31}$. \par We take $U_i^{(j)}$, relatively compact in $U_i^{(j-1)}$, such that $\bigcup_{i=1,2,3}U_i^{(j)}$ still covers $X$ in an obvious sense. \par In exactly the same way as in the case of two Kuranishi neighborhoods, we obtain $s_{1,\epsilon}^{(2)}$ and $s_{2,\epsilon}^{(2)}$ on $U_{1}^{(2)}$ and $U_{2}^{(2)}$ respectively, which are compatible on the overlapped part. \par We next extend them to $U_3$. \par We extend $s_{1,\epsilon}^{(2)}$ and $s_{2,\epsilon}^{(2)}$ to the tubular neighborhoods of $U_{31}^{(3)}$ and $U_{32}^{(3)}$ in $U^{(2)}_3$. We denote them by $s_{3,\epsilon}^{(3),1}$ and $s_{3,\epsilon}^{(3),2}$. \par Let us study them on the part where the two tubular neighborhoods intersect. Let $x \in U_{31}^{(3)} \cap U_{32}^{(3)}$. We have $$ (N_{U_{31}^{(3)}}U_3^{(2)})_x \cong (N_{U_{21}^{(3)}}U_2^{(2)})_{x} \oplus (N_{U_{32}^{(3)}}U_3^{(2)})_{\underline{\phi}_{32}(x)}. $$ This isomorphism is compatible with the isomorphism $$ \left( \frac{E_3}{E_1}\right)_x \cong \left( \frac{E_2}{E_1}\right)_x \oplus \left( \frac{E_3}{E_2}\right)_{\underline{\phi}_{32}(x)}. $$ Therefore we can glue $s_{3,\epsilon}^{(3),1}$ and $s_{3,\epsilon}^{(3),2}$ on the overlapped part by a partition of unity. \par We then extend them to the whole $U_3^{(3)}$ by the relative version of \cite[Lemma 3.14]{FOn}. \par The general case, which is given below, is similar, though the notation is rather cumbersome. \begin{proof} \footnote{The argument below is the one written in \cite{Fu1}.
(Readers who prefer a shorter proof may consult pages 3--4 of \cite{Fu1}.)} We will construct a system $ (s_{\epsilon,\frak p}^k; \frak p \in \{\frak p_1, \dots, \frak p_k\}) $, where $s_{\epsilon,\frak p}^k$ is a multisection on $U^{(10k^2)}_{\frak p}$, by upward induction on $k$, so that they satisfy the conditions (1)-(4) above. When $k=1$, the proof is a standard perturbation argument combined with the averaging process over the finite isotropy group. (See the proof of Theorem 3.1 in \cite{FOn} for the details.) So we fix $k$ here and explain the process of constructing $s_{\epsilon,\frak p_i}^k$ for $i = 1, \ldots, k$, under the hypothesis that we have already constructed the corresponding sections $s_{\epsilon,\frak p_i}^{k-1}$ for $i = 1,\ldots, k-1$. For $i<k$, the multisection $s_{\epsilon,\frak p_i}^k$ is obtained by restricting the domain of $s_{\epsilon,\frak p_i}^{k-1}$ and performing some adjustment around the boundary. (See (\ref{skpidefform}).) We identify the image $\underline\phi_{\frak p_k\frak p_i}(U_{\frak p_k\frak p_i}^{(m)})$ in $U^{(m)}_{\frak p_k}$ with $U_{\frak p_k\frak p_i}^{(m)}$ and regard the latter as a subset of $U^{(m)}_{\frak p_k}$ for any integer $m$. (Here $i<k$.) We put \begin{equation} \mathcal N_k^i = \bigcup_{j=i}^{k-1} \mathcal N^i_{U_{\frak p_k\frak p_j}^{(10 (k-1)^2+10(k-i))}} U_{\frak p_k}^{(10 (k-1)^2+10(k-i))} \end{equation} and will define a multisection $s^{k,i}_{\epsilon,\frak p_k}$ on $\mathcal N_k^i$ by downward induction on $i$. \par Here the open subset $\mathcal N^i_{U_{\frak p_k\frak p_j}^{(10 (k-1)^2+10(k-i))}}U_{\frak p_k}^{(10 (k-1)^2+10(k-i))}$ is a tubular neighborhood of $U_{\frak p_k\frak p_j}^{(10 (k-1)^2+10(k-i))}$, which will be chosen in the inductive process.
\par We assume the closure of $$\mathcal N^{i}_{U_{\frak p_k\frak p_j}^{(m+1)}}U_{\frak p_k}^{(m+1)}$$ is compact in $$\mathcal N^{i+1}_{U_{\frak p_k\frak p_j}^{(m)}}U_{\frak p_k}^{(m)}.$$ \par Let us start the downward induction at $i=k-1$. We have an embedding $U_{\frak p_k\frak p_{k-1}}^{(10 (k-1)^2)} \to U_{\frak p_k}^{(10 (k-1)^2)}$. We take its tubular neighborhood $\mathcal N^{k-1}_{U_{\frak p_k\frak p_{k-1}}^{(10 (k-1)^2)}}U_{\frak p_k}^{(10 (k-1)^2)}$. We also have $s_{\epsilon,\frak p_{k-1}}^{k-1}$ on $U_{\frak p_k\frak p_{k-1}}^{(10 (k-1)^2+10(k-i-1))}$ by the induction hypothesis. We have already taken a splitting \begin{equation}\label{eq:Epk} E_{\frak p_k} = E_{\frak p_{k-1}} \oplus E^{\perp}_{\frak p_{k-1}} \end{equation} on $U_{\frak p_k\frak p_{k-1}}^{(10 (k-1)^2+10(k-i-1))}$. (Here we identify $E_{\frak p_{k-1}}$ with its image by $\hat{\phi}_{\frak p_k\frak p_{k-1}}$.) We have also extended the bundles $E_{\frak p_{k-1}}$, $E^{\perp}_{\frak p_{k-1}}$ to a small tubular neighborhood $\mathcal N^{k-1}_{U_{\frak p_k\frak p_{k-1}}^{(10 (k-1)^2)}}U_{\frak p_k}^{(10 (k-1)^2)}$.\par We extend $s_{\epsilon,\frak p_{k-1}}^{k-1}$ to $\mathcal N^{k-1}_{U_{\frak p_k\frak p_{k-1}}^{(10 (k-1)^2)}}U_{\frak p_k}^{(10 (k-1)^2)}$ so that \begin{enumerate} \item[(a)] it is $\epsilon/2^n$-close to the Kuranishi map $s_{\frak p_k}$, \item[(b)] it coincides with the perturbation already defined at the zero section of the normal bundle for the first component $E_{\frak p_{k-1}}$ of \eqref{eq:Epk}, and \item[(c)] the second component thereof in the splitting \eqref{eq:Epk} is the same as the given Kuranishi map $s_{\frak p_k}$. \end{enumerate} The transversality condition is obviously satisfied, and the properties (2)-(4) also hold by construction. We thus obtain $s^{k,k-1}_{\epsilon,\frak p_k}$. \par We now go to the inductive step to construct $s^{k,i}_{\epsilon,\frak p_k}$, assuming we have $s^{k,i+1}_{\epsilon,\frak p_k}$.
\par We consider the embedding $$ U_{\frak p_k\frak p_j}^{(10 (k-1)^2+10(k-i)-10)} \to U_{\frak p_k}^{(10 (k-1)^2+10(k-i)-10)}, $$ for $i+1 \leq j \leq k$. We identify $U_{\frak p_k\frak p_j}^{(10 (k-1)^2+10(k-i)-10)}$ with the image of the embedding and take its tubular neighborhood \begin{equation}\label{311} \mathcal N^{i+1}_{U_{\frak p_k\frak p_j}^{(10 (k-1)^2+10(k-i)-10)}}U_{\frak p_k}^{(10 (k-1)^2+10(k-i)-10)}. \end{equation} Note that we already have our section $s^{k,i+1}_{\epsilon,\frak p_k}$ on ${\mathcal N}_k^{i+1}$, which is the union of (\ref{311}) for $j=i+1, \dots, k-1$. \par We next apply the same argument as in the first step to obtain a multisection $s_{\epsilon,\frak p_k}^{\prime k,i}$ on a tubular neighborhood $$ \mathcal N^{+,i}_k=\mathcal N^{+,i}_{U_{\frak p_k\frak p_i}^{(10 (k-1)^2+10(k-i)-9)}}U_{\frak p_k}^{(10 (k-1)^2+10(k-i)-9)}. $$ (Here we use the fact that we have a multisection $s_{\epsilon,\frak p_i}^{k-1}$ by the induction hypothesis.) \par Note that ${\mathcal N}_k^{i+1}$ and ${\mathcal N}^{+,i}_k$ are open subsets of the orbifold $U_{\frak p_k}$. We take a smooth function $$ \chi : {\mathcal N}_k^{i+1} \cup {\mathcal N}_k^{+,i} \to [0,1] $$ such that $\{\chi, 1-\chi\}$ is a partition of unity subordinate to $\{ \mathcal N_k^{i+1}, \mathcal N_k^{+,i} \}$ and $\chi = 1$ on \begin{equation}\label{315} \bigcup_{j=i+1}^{k-1} \mathcal N^i_{U_{\frak p_k\frak p_j}^{(10 (k-1)^2+10(k-i)-8)}} U_{\frak p_k}^{(10 (k-1)^2+10(k-i)-8)} \subset \mathcal N_k^{i+1}. \end{equation} We put: \begin{equation}\label{312} s^{k,i}_{\epsilon,\frak p_k} = \chi s^{k,i+1}_{\epsilon,\frak p_k} + (1-\chi) s_{\epsilon,\frak p_k}^{\prime k,i}. \end{equation} \begin{rem} The sum of multisections is a bit delicate to define. In our case $s^{k,i+1}_{\epsilon,\frak p_k}$ and $s_{\epsilon,\frak p_k}^{\prime k,i}$ are defined by extending the multisections to the tubular neighborhood. This process does not change the number of branches. (Namely they are extended branch-wise.)
So though these two do not coincide as sections, each branch of $s^{k,i+1}_{\epsilon,\frak p_k}$ has a corresponding branch of $s_{\epsilon,\frak p_k}^{\prime k,i}$. So we can apply the formula (\ref{312}) branch-wise. \end{rem} We find that $s^{k,i}_{\epsilon, \frak p_k}$ is transversal to zero on $$\bigcup_{j=i+1}^{k-1} \mathcal N^i_{U_{\frak p_k\frak p_j}^{(10 (k-1)^2+10(k-i)-8)}} U_{\frak p_k}^{(10 (k-1)^2+10(k-i)-8)} \subset \mathcal N_k^{i+1},$$ since $\chi =1$ there. Note that $s^{k,i+1}_{\epsilon, \frak p_k}$ and $s^{\prime k,i}_{\epsilon, \frak p_k}$ coincide on $U_{\frak p_k \frak p_i}^{(10(k-1)^2 +10(k-i)-9)} \cap \mathcal N_k^{i+1}$ up to first derivatives (including the normal direction to $U_{\frak p_k \frak p_i}^{(10(k-1)^2 +10(k-i)-9)}$ in $U_{\frak p_k}^{(10(k-1)^2 +10(k-i)-9)}$). Hence $s_{\epsilon, \frak p_k}^{k,i}$ is transversal to zero on $$\mathcal N^i_{U_{\frak p_k \frak p_i}^{(10(k-1)^2 + 10(k-i) -7)}} U_{\frak p_k}^{(10(k-1)^2+10(k-i)-7)},$$ if we take this tubular neighborhood of $U_{\frak p_k \frak p_i}^{(10(k-1)^2 + 10(k-i) -7)}$ sufficiently small. We restrict (\ref{312}) to $$ \bigcup_{j=i}^{k-1}\mathcal N^i_{U_{\frak p_k\frak p_j}^{(10 (k-1)^2+10(k-i)-7)}} U_{\frak p_k}^{(10 (k-1)^2+10(k-i)-7)}. $$ Then the multisection (\ref{312}) is transversal to the zero section. It also satisfies the properties (2), (3), (4). \begin{rem} We use Lemma \ref{intersection} here for the consistency of tubular neighborhoods. We may use Mather's compatible system of tubular neighborhoods \cite{Math73}. However, the present situation is much simpler because of Lemma \ref{intersection}. So the compatibility of tubular neighborhoods is obvious. \end{rem} The section $s^{k,i}_{\epsilon,\frak p_k}$ coincides with $s^{k,i+1}_{\epsilon,\frak p_k}$ on the overlapped part because we took $\chi = 1$ on (\ref{315}). \par\medskip We continue the induction down to $i=1$.
Then we have the required multisection $s^{k,1}_{\epsilon,\frak p_k}$ on $$ \mathcal N_k^1 = \bigcup_{j=1}^{k-1} \mathcal N^1_{U_{\frak p_k\frak p_j}^{(10 k^2- 10)}}U_{\frak p_k}^{(10 k^2- 10)}. $$ (Note that $10 (k-1)^2+10(k-1) = 10k^2 - 10k < 10 k^2- 10$ since $k \ge 2$.) By \cite[Lemma 3.14]{FOn} we can extend this to $U_{\frak p_k}^{(10 k^2- 9)}$ so that it satisfies (1) and coincides with $s^{k,1}_{\epsilon,\frak p_k}$ on $$ \bigcup_{j=1}^{k-1} \mathcal N^1_{U_{\frak p_k\frak p_j}^{(10 k^2- 9)}}U_{\frak p_k}^{(10 k^2- 9)}. $$ We thus obtain $s^{k}_{\epsilon,\frak p_k}$. \par For $i<k$ we define $s^{k}_{\epsilon,\frak p_i}$ as follows: \begin{equation}\label{skpidefform} s^{k}_{\epsilon,\frak p_i} = \begin{cases} \displaystyle (\widehat{\underline{\phi}}_{\frak p_k\frak p_i})^{-1} \circ s^{k}_{\epsilon,\frak p_k} \circ \underline{\phi}_{\frak p_k\frak p_i} &\text{when the right hand side is defined,} \\ s^{k-1}_{\epsilon,\frak p_i} &\text{otherwise.} \end{cases} \end{equation} \par The properties (1)-(4) are satisfied. The proof of Proposition \ref{existmulti} is complete. \end{proof} We have thus constructed our perturbation, namely a system of multisections $s_{\epsilon,\frak p}$. To obtain a virtual fundamental chain and prove its basic properties we need to restrict the section $s_{\epsilon,\frak p}$ to an appropriate neighborhood of the union of the zero sets of $s_{\frak p}$ and study the properties of $s_{\epsilon,\frak p}$ there. Namely we will prove the following: \begin{enumerate} \item The zero set $s_{\epsilon,\frak p}^{-1}(0)$ converges to the zero set $s_{\frak p}^{-1}(0)$. \item The union of the zero sets $s_{\epsilon,\frak p}^{-1}(0)$ is compact. \item The union of the zero sets $s_{\epsilon,\frak p}^{-1}(0)$ carries a fundamental cycle. \end{enumerate} We need to shrink the domain to prove this. \par Note that we have $\lim_{\epsilon\to 0}s_{\epsilon,\frak p} = s_{\frak p}$.
So if the domains of the multisections are compact, this implies that the union of the zeros of $s_{\epsilon,\frak p}$ converges to the union of the zeros of $s_{\frak p}$, which is nothing but our space $X$. \par Since the domains $U_{\frak p}^{(k)}$ of $s_{\epsilon,\frak p}$ are noncompact, the actual proof is slightly more involved. Note that we took a sequence $U_{\frak p}^{(k)}$ such that $U_{\frak p}^{(k+1)}$ is relatively compact in $U_{\frak p}^{(k)}$. We use this fact in place of the compactness of the domain. Then the point to take care of is the fact that $s_{\frak p}^{-1}(0)$ may intersect the boundary $\partial U_{\frak p}^{(k+1)}$. We use Lemma \ref{unionofofd} for this purpose. (See Remark \ref{rem68888}.) \par We already shrank our good coordinate system several times. We shrink it again below. Let us denote by $U_{\frak p}$, $U(X)$ etc. the good coordinate system and the space obtained when we proved Proposition \ref{existmulti}. (In other words, $U_{\frak p} =U^{(N)}_{\frak p}$ in the notation used in the proof of Proposition \ref{existmulti}.) During the discussion of the rest of this section, we restart the numbering of the shrunken good coordinate systems and will again write $U^{(m)}_{\frak p}$. \par Lemma \ref{compactmainlemma} below is the key lemma. Note that all the spaces $U^{(m)}(X)$, with the weak topology induced from $K(\frak P)$ by the injective map $\frak J_{K(\frak P)U^{(m)}(X)}$, can be regarded as subsets of $K(\frak P) = \bigcup \overline U'_{\frak p} / \sim$ (where $K(\frak P)$ is as in Definition \ref{Imapdef}). Recall that the space $K(\frak P)$ is compact and metrizable. We take and fix a metric on it. We consider the metrics on $U^{(m)}(X)$ induced from the metric on $K(\frak P)$. Then by definition the maps $\frak I_{U^{(m')}(X)U^{(m)}(X)}$ are isometries for $m' < m$. \par For a given metric space $Z$ and a subset $C \subset Z$ we denote its $\epsilon$-neighborhood by \begin{equation} B_{\epsilon}(C;Z) = \{z \in Z \mid d(z,C) < \epsilon\}.
\end{equation} \begin{lem}\label{compactmainlemma} We may choose $U^{(2)}_{\frak p}$, $U^{(3)}_{\frak p}$ and $\delta > 0$ so that \begin{equation}\label{320formula} \aligned &B_{\delta}(I(X),U^{(2)}(X)) \cap \bigcup_{\frak p \in \frak P} \Pi_{\frak p} (\overline s_{\epsilon,\frak p}^{-1}(0) \cap U^{(2)}(X)) \\ &= \frak I_{U^{(2)}(X)U^{(3)}(X)} \left(B_{\delta}(I(X),U^{(3)}(X)) \cap \bigcup_{\frak p \in \frak P} \Pi_{\frak p} (\overline s_{\epsilon,\frak p}^{-1}(0) \cap U^{(3)}(X)) \right). \endaligned \end{equation} \end{lem} \begin{rem}\label{rem67} Note that if we take $U^{(3)}_{\frak p} \subset U^{(2)}_{\frak p}$ for each $\frak p$, then $U^{(3)}(X)$ is defined by $$ U^{(3)}(X) = \bigcup_{\frak p} \Pi_{\frak p}(U^{(3)}_{\frak p}) \subset U^{(2)}(X), $$ where $\Pi_{\frak p} : U^{(2)}_{\frak p} \to U^{(2)}(X)$. \end{rem} \begin{rem}\label{rem68888} We remark that Lemma \ref{compactmainlemma} claims that the union of the zero sets of $s_{\epsilon,\frak p}$ on $U^{(2)}(X)$ is equal to the union of the zero sets of $s_{\epsilon,\frak p}$ on $U^{(3)}(X)$, if we restrict them to a small neighborhood of $I(X)$. \end{rem} \begin{exm} We consider $U'_i$ in Example \ref{omeganeighex}. We define an obstruction bundle $E_i$ on it such that $E_1$ is the trivial bundle, $E_2 = U'_2 \times \R$, and $E_3 = U'_3 \times \R^2$. We define sections (= Kuranishi maps) $s_i$ by $s_2(x,y,0) = y$, $s_3(x,y,z) = (y,z)$. This defines a Kuranishi structure and a good coordinate system in an obvious way. The sections $s_i$ are already transversal to $0$, so we do not need to perturb them. \par We put $U^{(2)}(X) = U'(X)$, $U_i^{(2)}(X) = U'_i(X)$. We take $$ \aligned &U^{(3)}_1 = \{(x,0,0) \mid x < 2c\}, \\ &U^{(3)}_2 = \{ (x,y,0) \mid x > -y^2 + c\}, \\ & U^{(3)}_3 = \{ (x,y,z) \mid x >c\}. \endaligned $$ Here $c>0$. In this example the zero set of the section $s_i$ lies on the $x$-axis. So the lemma holds.
The key point of the proof of Lemma \ref{compactmainlemma} is that, in a neighborhood of the border of $U^{(2)}_{2}$ and $U^{(2)}_3$, the condition $s_i(x) =0$ implies that the point $x$ lies in $U^{(3)}_1$. This is a consequence of Proposition \ref{existmulti} (3). \end{exm} \begin{proof}[Proof of Lemma \ref{compactmainlemma}] In Lemma \ref{unionofofd} we defined the set $\Omega_{\frak q_i}(x) \subset U''_{\frak q_i}$ for $x \in X$. We take $U^{(k)}_{\frak p}$ in place of $U''_{\frak p}$ and obtain $\Omega_{\frak q_i}^{(k)}(x) \subset U^{(k)}_{\frak q_i}$. \par We then define $\frak U^{(k)}(x)$ by (\ref{frakUdef}). We choose a neighborhood $O_x$ of $x$ in $U^{(2)}(X)$ and $\delta_1$ so that \begin{equation}\label{Oxsmallerthaq1} O_x \cap \Pi_{\frak q_1}(\Omega^{(2)}_{\frak q_1}(x))= O_x \cap \overline{\Pi_{\frak q_1}(\Omega^{(2)}_{\frak q_1}(x))} \end{equation} and $$ O_x \cap \frak U^{(2)}(x) \supset B_{\delta_1}(I(x);U^{(2)}(X)). $$ (Note that $x$ is in the interior of $\Omega^{(2)}_{\frak q_1}(x)$. So we can take $O_x$ small enough that (\ref{Oxsmallerthaq1}) is satisfied.) \par \begin{sublem}\label{sublem35} We may take $U^{(3),x}_{\frak p}$ (depending on $x$) and $\delta_3>0$ (independent of $x$) so that \begin{equation}\label{sublem69maineq} \aligned &O_{x} \cap \frak U^{(2)}(x) \cap B_{\delta_3}(I(x),U^{(2)}(X)) \cap \bigcup_{\frak p \in \frak P} \Pi_{\frak p} ( \overline s_{\epsilon,\frak p}^{-1}(0)) \\ & \subseteq\frak I_{U^{(2)}(X)U^{(3),x}(X)} \left(B_{\delta_3}(I(X),U^{(3),x}(X))\cap \bigcup_{\frak p \in \frak P} \Pi_{\frak p} (\overline s_{\epsilon,\frak p}^{-1}(0) \cap U^{(3),x}(X) ) \right) \endaligned \end{equation} holds for each $x \in X$, provided $\epsilon$ is sufficiently small. \end{sublem} We note that $\{U^{(3),x}_{\frak p} \mid \frak p \in \frak P\}$ determines $U^{(3),x}(X)$ as in Remark \ref{rem67}. \begin{proof} We have $$ U^{(2)}(X) \cap O_x = \bigcup_{i=1,\dots,m}\Pi_{\frak q_i}(\Omega^{(2)}_{\frak q_i}(x))\cap O_x.
$$ By (\ref{Oxsmallerthaq1}) we have $$ O_x \cap \Pi_{\frak q_1}(\Omega^{(2)}_{\frak q_1}(x))= O_x \cap \overline{\Pi_{\frak q_1}(\Omega^{(2)}_{\frak q_1}(x))}. $$ Therefore we may choose $U^{(3),x}_{\frak p}$ close enough to $U^{(2)}_{\frak p}$ so that $$ O_{x} \cap \Pi_{\frak q_1}(\Omega^{(2)}_{\frak q_1}({x})) \subset \frak I_{U^{(2)}(X)U^{(3),x}(X)}(\Pi_{\frak q_1}(U^{(3),x}_{\frak q_1})). $$ \par On the other hand, in a sufficiently small tubular neighborhood $\mathcal N_{U_{\frak q_i\frak q_1}}U_{\frak q_i}$ ($i \ge 2$) the zero set $s_{\epsilon,\frak q_i}^{-1}(0)$ is contained in the subset $U_{\frak q_i\frak q_1} \subset \mathcal N_{U_{\frak q_i\frak q_1}}U_{\frak q_i}$. (Here $U_{\frak q_i\frak q_1}$ is identified with the zero section of the normal bundle $N_{U_{\frak q_i\frak q_1}}U_{\frak q_i}$.) This is a consequence of Proposition \ref{existmulti} (3). \par We can choose $\delta_3>0$ sufficiently small so that $O_{x} \cap \frak U^{(2)}(x) \cap B_{\delta_3}(I(x),U^{(2)}(X))$ is contained in this tubular neighborhood. (We may choose $\delta_3$ independent of $x$ since $X$ is compact.) The sublemma follows. \end{proof} We find a finite number of $x_i \in I(X)$, $i=1,\dots,\frak I$, and $\delta > 0$, such that \begin{equation}\label{coverOXI} \bigcup_{i=1}^{\frak I} O_{x_i} \cap \frak U^{(2)}(x_i) \cap B_{\delta_3}(I(x_i),U^{(2)}(X)) \supset B_{\delta}(I(X);U^{(2)}(X)). \end{equation} We choose $U^{(3)}_{\frak p}$ such that $$ U^{(2)}_{\frak p} \supset \overline U^{(3)}_{\frak p} \supset U^{(3)}_{\frak p} \supset \bigcup_{i=1}^{\frak I} U^{(3),x_i}_{\frak p}. $$ Then (\ref{sublem69maineq}) holds for all $x_i$ ($i=1,\dots,\frak I$) when we replace $U^{(3),x_i}_{\frak p}$ by this $U^{(3)}_{\frak p}$.
By (\ref{coverOXI}) we have: $$ \aligned &\bigcup_{i=1}^{\frak I} \left( O_{x_i} \cap \frak U^{(2)}(x_i) \cap B_{\delta_3}(I(x_i),U^{(2)}(X)) \cap \bigcup_{\frak p \in \frak P} \Pi_{\frak p} ( \overline s_{\epsilon,\frak p}^{-1}(0)) \right)\\ &\supset B_{\delta}(I(X),U^{(2)}(X)) \cap \bigcup_{\frak p \in \frak P} \Pi_{\frak p} (\overline s_{\epsilon,\frak p}^{-1}(0) \cap U^{(2)}(X)) . \endaligned$$ Therefore (\ref{sublem69maineq}) implies that the left hand side of (\ref{320formula}) is contained in the right hand side. The inclusion in the opposite direction is obvious. \end{proof} \begin{lem}\label{cuttedmodulilem} The set \begin{equation}\label{cuttedmoduli} B_{\delta}(I(X),U^{(3)}(X)) \cap \bigcup_{\frak p \in \frak P} \Pi_{\frak p} (\overline s_{\epsilon,\frak p}^{-1}(0) \cap U^{(3)}(X)) \end{equation} is compact if $\epsilon > 0$ is sufficiently small. \end{lem} \begin{proof} Lemma \ref{compactmainlemma} implies \begin{equation}\label{closurehuhen} \aligned &B_{\delta}(I(X),U^{(3)}(X)) \cap \bigcup_{\frak p \in \frak P} \Pi_{\frak p} (\overline s_{\epsilon,\frak p}^{-1}(0) \cap U^{(3)}(X))\\ &= B_{\delta}(I(X),\overline U^{(3)}(X)) \cap \bigcup_{\frak p \in \frak P} \Pi_{\frak p} (\overline s_{\epsilon,\frak p}^{-1}(0) \cap \overline U^{(3)}(X)). \endaligned\end{equation} (Here $\overline U^{(3)}(X)$ is the closure of $U^{(3)}(X)$ in $U^{(2)}(X)$.) We remark that $U^{(3)}(X)$ is a relatively compact subspace of $U^{(2)}(X)$. The right hand side of (\ref{closurehuhen}) is clearly compact. \end{proof} Hereafter we fix $\delta$ and write (\ref{cuttedmoduli}) as $s_{\epsilon}^{-1}(0)_{\delta}$. It is a subspace of $U^{(3)}(X)$. \begin{lem} $$ \lim_{\epsilon\to 0} \frak I_{U^{(2)}(X)U^{(3)}(X)}(s_{\epsilon}^{-1}(0)_{\delta}) \subseteq B_{\delta}(I(X),U^{(2)}(X)) \cap \bigcup_{\frak p \in \frak P} \Pi_{\frak p} (\overline s_{\frak p}^{-1}(0) \cap U^{(2)}(X)). $$ Here the convergence is in the Hausdorff distance.
\end{lem} \begin{proof} This is a consequence of the next sublemma applied to the right hand side of (\ref{closurehuhen}) chartwise and branchwise. \end{proof} \begin{sublem} Let $E \to Z$ be a vector bundle on a compact metric space $Z$ and $s$ its section. Suppose that $s_{\epsilon}$ is a family of sections converging to $s$. Then $$ \lim_{\epsilon \to 0} s_{\epsilon}^{-1}(0) \subseteq s^{-1}(0). $$ \end{sublem} \begin{proof} Let $\rho >0$. We put $$ \epsilon(\rho) = \inf \{ \vert s(x) \vert \mid x \in Z \setminus B_{\rho}(s^{-1}(0);Z)\}. $$ Clearly $$ s_{\epsilon}^{-1}(0) \subseteq B_{\rho}(s^{-1}(0);Z) $$ if $\epsilon < \epsilon(\rho)$. The compactness of $Z$ implies $\epsilon(\rho)>0$. \end{proof} \begin{lem} $s^{-1}_{\epsilon}(0)_{\delta}$ has a triangulation. \end{lem} This is proved in \cite[Lemma 6.9]{FOn}. \par Suppose we have a strongly continuous map $f = \{f_\frak p \mid \frak p \in \frak P\}$ to a topological space $Z$ and our Kuranishi space $X$ is oriented (\cite[A1.17]{fooo:book1}). Then we can assign a weight to each simplex of top dimension in $s^{-1}_{\epsilon}(0)_{\delta}$ and obtain $f_*[X]$ (\cite[(6.10)]{FOn}). It is a singular chain of $Z$ (with rational coefficients). Thus we have constructed a virtual fundamental chain of $X$. \par We consider the case where our Kuranishi structure has no boundary. \begin{lem}\label{cycleproperties} We can put the weight on each of the top dimensional simplices so that $f_*([X])$ is a cycle. \end{lem} \begin{proof} It suffices to prove that for each $x \in s^{-1}_{\epsilon}(0)_{\delta}$ the zero set of each branch of $s_{\epsilon,\frak p}$ is a smooth manifold in a neighborhood of $x$. This is again a consequence of the proof of Lemma \ref{compactmainlemma}, as follows. Suppose $x \in O_{x_i} \cap \frak U^{(2)}(x_i)$. Then we proved that $s^{-1}_{\epsilon}(0)_{\delta}$ in a neighborhood of $x$ coincides with the zero set of $s_{\epsilon,\frak q_1}$.
On the other hand, each branch of $s_{\epsilon,\frak q_1}$ is transversal to $0$ by Proposition \ref{existmulti}. \end{proof} \begin{rem} We remark that we use the metric on $U(X)$ to {\it prove} that $s_{\epsilon}^{-1}(0)_{\delta}$ has the required properties for sufficiently small $\delta$ and $\epsilon$. In particular, we did {\it not} use the metric to {\it define} $s_{\epsilon}$. Therefore the virtual fundamental cycle (chain) we obtained is obviously independent of the choice of the metric on $U(X)$. \end{rem} In sum, we have defined a virtual fundamental cycle $f_*([X])$ using the good coordinate system on the Kuranishi structure of $X$, which has a tangent bundle, is oriented, and has no boundary. Using the relative version of our construction of this and the next sections, we can show that the cohomology class of $f_*([X])$ is independent of the choices. (See \cite[Lemmas 17.8, 17.9]{FOn2} etc.) \section{Existence of good coordinate system}\label{sec:existenceofGCS} The purpose of this section is to prove the next theorem. \begin{thm}\label{goodcoordinateexists} Let $X$ be a compact metrizable space with a Kuranishi structure in the sense of Definition \ref{Definition A1.5}. Then $X$ has a good coordinate system in the sense of Definition \ref{goodcoordinatesystem}, which is compatible with the Kuranishi structure in the following sense. \end{thm} \begin{defn} Let $X$ be a space with a Kuranishi structure. A good coordinate system is said to be {\it compatible} with this Kuranishi structure if the following holds. \par Let $(U'_\frak p,E'_\frak p,s'_\frak p,\psi'_\frak p)$ be a chart of the given good coordinate system of $X$.
For each $q = \psi'_\frak p(\tilde{q})$, $\tilde{q} \in U'_\frak p \cap (s'_\frak p)^{-1}(0)$, there exists $(\hat{\underline\phi}_{\frak p q},{\underline\phi}_{\frak p q})$ such that \begin{enumerate} \item[{(1)}] There exist an open subset $U_{\frak p q}$ of $U_q$ and an embedding of vector bundles $$ \hat{\underline\phi}_{\frak p q}:E_{q}\vert_{U_{\frak p q}} \to E'_{\frak p} $$ over an embedding of orbifolds ${\underline\phi}_{\frak p q}: U_{\frak p q} \to U'_{\frak p}$ that satisfy \begin{enumerate} \item ${\underline\phi}_{\frak p q}([o_q]) = q$. \item $\hat{\underline\phi}_{\frak pq}\circ s_q = s'_\frak p\circ{\underline\phi}_{\frak pq}$. \item $\psi_q = \psi'_\frak p\circ \underline{\phi}_{\frak pq}$ on $s_q^{-1}(0) \cap U_{\frak pq}$. \end{enumerate} \item[{(2)}] $d_{\rm fiber}s'_{\frak p}$ induces an isomorphism of vector bundles at $s_{q}^{-1}(0)\cap U_{\frak pq}$: \begin{equation}\label{15tangent22}\nonumber N_{U_{\frak p q}}U'_\frak p \cong \frac{\underline\phi_{\frak p q}^*E'_\frak p}{E_q\vert_{U_{\frak p q}}}. \end{equation} \item[{(3)}] If $r \in \psi_q(U_{\frak pq}\cap s_q^{-1}(0))$ and $q \in \psi'_\frak p((s'_\frak p)^{-1}(0)\cap U'_{\frak p})$, then $$ \underline{\phi}_{\frak pq} \circ \underline{\phi}_{qr} = \underline{\phi}_{\frak pr}, \quad \hat{\underline{\phi}}_{\frak pq} \circ \hat{\underline{\phi}}_{qr} = \hat{\underline{\phi}}_{\frak pr}. $$ Here the first equality holds on $$ \underline\phi_{qr}^{-1}(U_{\frak p q}) \cap U_{qr} \cap U_{\frak p r} $$ and the second equality holds on $E_r\vert_{\underline\phi_{qr}^{-1}(U_{\frak p q}) \cap U_{qr} \cap U_{\frak p r}}$. \item[{(4)}] Suppose that $\frak o \ge \frak p$ and the coordinate change of the good coordinate system is given by $(U'_{\frak o\frak p},\hat{\underline{\phi}'}_{\frak o\frak p},{\underline{\phi}'}_{\frak o\frak p})$.
Let $q \in \psi'_{\frak p}({s'_\frak p}^{-1}(0)\cap U'_{\frak o\frak p} )$. Then we have $$ {\underline{\phi}'}_{\frak o\frak p} \circ {\underline{\phi}}_{\frak pq} = {\underline{\phi}}_{\frak oq}, \quad \hat{\underline{\phi}'}_{\frak o\frak p} \circ \hat{\underline{\phi}}_{\frak pq} = \hat{\underline{\phi}}_{\frak oq}. $$ Here the first equality holds on ${\underline{\phi}}_{\frak pq}^{-1}(U'_{\frak o\frak p}) \cap U_{\frak pq} \cap U_{\frak oq}$, and the second equality holds on $E_q\vert_{{\underline{\phi}}_{\frak pq}^{-1}(U'_{\frak o\frak p}) \cap U_{\frak pq} \cap U_{\frak oq}}$. \end{enumerate} \end{defn} In the above definition, we use $\{(U_p, E_p, s_p, \psi_p)\}$ for the Kuranishi neighborhoods in the definition of the Kuranishi structure and $\{(U'_{\frak p}, E'_{\frak p},s'_{\frak p}, \psi'_{\frak p})\}$ for the data in the definition of the good coordinate system, in order to distinguish them in the definition of compatibility. However, we will write $\{(U_{\frak p}, E_{\frak p}, s_{\frak p}, \psi_{\frak p})\}$ instead of $\{(U'_{\frak p}, E'_{\frak p},s'_{\frak p}, \psi'_{\frak p})\}$ in the rest of this section. \begin{proof}[Proof of Theorem \ref{goodcoordinateexists}] For any point $p\in X$, the dimension $\dim U_p$ of the orbifold $U_p$ is well defined. We put $d_p = \dim U_p$ and $$ X(\frak d) = \{p \in X \mid d_p = \frak d\}. $$ \par The first part of the proof is to construct an orbifold (plus an obstruction bundle etc.) that is a `neighborhood' of a compact subset of $X(\frak d)$. Let us define such a notion precisely. \begin{defn}\label{pureofdnbd} Let $K_*$ be a compact subset of $X(\frak d)$. A {\it pure orbifold neighborhood} of $K_*$ is $(U_*,E_*,s_*,\psi_*)$ such that the following holds. \begin{enumerate} \item $U_*$ is a $\frak d$-dimensional orbifold. \item $E_*$ is a vector bundle whose rank is $\frak d - \dim X$. (Here $\dim X$ is the dimension of $X$ as a Kuranishi space.) \item $s_*$ is a section of $E_*$.
\item $\psi_* : s_*^{-1}(0) \to X$ is a homeomorphism onto a neighborhood $\mathcal U_*$ of $K_*$ in $X(\frak d)$. \end{enumerate} We also assume the following compatibility condition with the Kuranishi structure of $X$. For any $p \in \psi_*(s_*^{-1}(0)) \subset X$, there exists $(U_{*p}, \hat{\underline{\phi}}_{*p},\underline{\phi}_{*p})$ such that \begin{enumerate} \item[(5)] $U_{*p}$ is an open neighborhood of $[o_p]$ in $U_p$. \item[(6)] $\hat{\underline{\phi}}_{*p} : E_p\vert_{U_{*p}} \to E_*$ is an embedding of vector bundles over an embedding of orbifolds ${\underline{\phi}}_{*p} : U_{*p} \to U_*$ such that \begin{enumerate} \item $\hat{\underline{\phi}}_{*p}\circ s_p = s_*\circ{\underline{\phi}}_{*p}$ on $U_{*p}$. \item $\psi_p = \psi_*\circ \underline{\phi}_{*p}$ on $s_p^{-1}(0) \cap U_{*p}$. \end{enumerate} \item[(7)] The restriction of $ds_*$ to the normal direction induces an isomorphism \begin{equation}\label{tangent**} N_{U_{*p}}U_* \cong \frac{\underline{\phi}_{*p}^*E_*}{E_p\vert_{U_{*p}}} \end{equation} as vector bundles on $s_p^{-1}(0) \cap U_{*p}$. \item[(8)] If $q \in \psi_p(s_p^{-1}(0)\cap U_{*p})$, then $$ \underline\phi_{*p} \circ \underline\phi_{pq} = \underline\phi_{*q}, \quad \underline{\hat\phi}_{*p} \circ \underline{\hat\phi}_{pq} = \underline{\hat\phi}_{*q}. $$ Here the first equality holds on $\underline{\phi}_{pq}^{-1}(U_{*p}) \cap U_{pq} \cap U_{*q}$, and the second equality holds on $E_q\vert_{\underline{\phi}_{pq}^{-1}(U_{*p}) \cap U_{pq} \cap U_{*q}}$. \end{enumerate} \end{defn} Hereafter we denote \begin{equation}\label{calUnasi} \mathcal U_* = \psi_*(s_*^{-1}(0)). \end{equation} The goal of the first part of the proof of Theorem \ref{goodcoordinateexists} is to prove the following. \begin{prop}\label{purecover} For any compact subset $K$ of $X(\frak d)$ there exists a pure orbifold neighborhood of $K$.
\end{prop} \begin{proof} We cover $K$ by a finite number of $\mathcal U_{p_j}$'s, where $p_j \in K$ and $$ \psi_{p_j}(s_{p_j}^{-1}(0)) = \mathcal U_{p_j}. $$ There exist compact subsets $K_j$ of $\mathcal U_{p_j}$ such that the union of the $K_j$ contains $K$. Thus, to prove Proposition \ref{purecover}, it suffices by induction to prove the following lemma. \end{proof} \begin{lem}\label{2gluelemma} Let $K_1, K_2$ be compact subsets of $X(\frak d)$. Suppose $K_1$ and $K_2$ have pure orbifold neighborhoods. Then $K_1 \cup K_2$ has a pure orbifold neighborhood. \end{lem} \begin{proof} Let $(U_i,E_i,s_i,\psi_i)$ be a pure orbifold neighborhood of $K_i$. Note that $\dim U_1 = \dim U_2 = \frak d$. We denote the map $\underline\phi_{*p}$, the open set $U_{*p}$, etc. for $(U_i,E_i,s_i,\psi_i)$ by $\underline\phi_{ip}$, $U_{ip}$, etc. (Namely we replace $*$ by $i \in \{1,2\}$.) The open subset $\mathcal U_i$ is as in (\ref{calUnasi}). \par Let $q \in K_1 \cap K_2$. We take an open subset $U_{12;q}$ such that \begin{equation} o_q \in U_{12;q} \subset U_{1q} \cap U_{2q} \subset U_q \end{equation} and \begin{equation} \mathcal U_{12;q} = \mathcal U_{1q} \cap \mathcal U_{2q} \subset X. \end{equation} Here $ \mathcal U_{iq} = \psi_q(s_q^{-1}(0) \cap U_{iq}). $ We take $q_1,\dots, q_I$ such that $$ K_1 \cap K_2 \subseteq \bigcup_{i=1}^I \mathcal U_{12;q_i}. $$ For each $i$ we take a relatively compact open subset $U^-_{12;q_i}$ of $U_{12;q_i}$ such that $$ K_1 \cap K_2 \subseteq \bigcup_{i=1}^I \mathcal U^-_{12;q_i}; \quad \mathcal U^-_{12;q_i} := \psi_{q_i}(s_{q_i}^{-1}(0) \cap U^-_{12;q_i}). $$ By a standard argument in general topology we can choose them so that the following holds. \begin{conds}\label{cond46} If $\mathcal U_{12;q_i} \cap \mathcal U_{12;q_{i'}} \ne \emptyset$, then $\mathcal U_{12;q_i} \cap \mathcal U_{12;q_{i'}} \cap X(\frak d) \ne \emptyset$. \end{conds} We assume the same condition for $\{\mathcal U^-_{12;q_i}\}$.
\par For each $r \in K_1 \cap K_2$ we take an open subset $U^0_r$ of $U_r$ containing $o_r$ so that the following holds. \begin{conds}\label{cond47} \begin{enumerate} \item $U^0_r \subset U_{1r} \cap U_{2r}$. \item If $\underline\phi_{1r}(U^0_r) \cap \overline{\underline\phi_{1q_i}(U_{12;q_i}^-)} \ne \emptyset$, then $$ U^0_r \subset U_{q_ir} \cap \underline{\phi}_{q_ir}^{-1}(U_{12;q_i}). $$ \item If $\underline\phi_{2r}(U^0_r) \cap \overline{\underline\phi_{2q_i}(U_{12; q_i}^-)} \ne \emptyset$, then $$ U^0_r \subset U_{q_ir}\cap \underline{\phi}_{q_ir}^{-1}(U_{12;q_i}). $$ \end{enumerate} \end{conds} \begin{lem}\label{existsUr0} There exists such a choice of $U^0_r$. \end{lem} \begin{proof} We first observe \begin{equation}\label{75ineq} s_1^{-1}(0) \cap \overline{\underline\phi_{1q_i}(U_{12; q_i}^-)} = s_1^{-1}(0) \cap {\underline\phi_{1q_i}(\overline U_{12; q_i}^-)} \subset s_1^{-1}(0) \cap {\underline\phi_{1q_i}(U_{12; q_i})}. \end{equation} We also have a similar inclusion when we replace $\underline\phi_{1q_i}$ etc. by $\underline\phi_{2q_i}$ etc. We next decompose $\{1,\dots,I\}$ into disjoint unions $$ \{1,\dots,I\} = I_1^{\rm in} \cup I_1^{\rm out} = I_2^{\rm in} \cup I_2^{\rm out}, $$ where we define $$ I_1^{\rm in} = \left\{ i \mid r \in \psi_{q_i} (s_{q_i}^{-1}(0)), \ \underline\phi_{1r}(o_r) \in \overline{\underline{\phi}_{1q_i}(U_{12;q_i}^-)}\right\}, \quad I_1^{\rm out} = \{1,\dots,I\} \setminus I_1^{\rm in} $$ and similarly for $I_2^{\rm in}$, $I_2^{\rm out}$.
Then we put \begin{equation}\label{eq:Ur0} \aligned &U_r^0 =\\ &\left(\bigcap_{i \in I_1^{\rm in}} \left(U_{q_ir}\cap \underline\phi_{q_ir}^{-1}(U_{12;q_i})\right)\cap \bigcap_{i \in I_1^{\rm out}}\underline\phi_{1r}^{-1} \left(U_1 \setminus \overline{\underline\phi_{1q_i}(U_{12; q_i}^-)}\right) \right) \\ &\cap \left(\bigcap_{i \in I_2^{\rm in}} \left(U_{q_ir} \cap \underline\phi_{q_ir}^{-1}(U_{12;q_i})\right)\cap \bigcap_{i \in I_2^{\rm out}} \underline\phi_{2r}^{-1}\left(U_2 \setminus \overline{\underline\phi_{2q_i}(U_{12; q_i}^-)}\right)\right). \endaligned \end{equation} Using (\ref{75ineq}), it is easy to see that $U_r^0$ is an open neighborhood of $o_r$ satisfying all the required properties. \end{proof} We choose $r_1,\dots,r_J \in K_1\cap K_2$ such that \begin{equation}\label{eq:cup1toJ} \bigcup_{j=1}^J \mathcal U_{r_j}^0 \supset K_1 \cap K_2. \end{equation} We put \begin{equation} \aligned U_{21}^{(1)} &= \bigcup_{i=1}^I\bigcup_{j=1}^J \left( \underline{\phi}_{1r_j}(U^0_{r_j}) \cap \underline{\phi}_{1q_i}(U^-_{12;q_i}) \right) \subset U_1, \\ U_{12}^{(1)} &= \bigcup_{i=1}^I\bigcup_{j=1}^J \left( \underline{\phi}_{2r_j}(U^0_{r_j}) \cap \underline{\phi}_{2q_i}(U^-_{12;q_i}) \right) \subset U_2. \endaligned \end{equation} They are open subsets of the orbifolds $U_1$, $U_2$ and hence are themselves orbifolds. We note that $$ \bigcup_{i=1}^I \left( \underline{\phi}_{1r_j}(U^0_{r_j}) \cap \underline{\phi}_{1q_i}(U_{12;q_i}) \right) \supset \underline{\phi}_{1r_j}(U_{r_j}^0) $$ and that the sets $\mathcal U_{12;q_i}$, $i=1, \dots, I$, cover $K_1 \cap K_2$ (similarly for $\underline{\phi}_{2r_j}$, $\underline{\phi}_{2q_i}$). Hence \begin{equation} U_{21}^{(1)} \supset \psi_1^{-1}(K_1\cap K_2), \qquad U_{12}^{(1)} \supset \psi_2^{-1}(K_1\cap K_2) \end{equation} by \eqref{eq:cup1toJ}.
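\begin{rem}
The gluing performed in the next lemma must be followed by a shrinking argument, since gluing two spaces along an open overlap does not automatically yield a Hausdorff space. A standard example (not specific to the present situation) is the following: glue two copies of $\mathbb{R}$ along $\mathbb{R} \setminus \{0\}$ by the identity map. The resulting `line with two origins' is locally Euclidean, but the two origins admit no disjoint open neighborhoods, so the glued space is not Hausdorff. The passage to compact subsets later in this proof is designed to rule out this phenomenon.
\end{rem}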
\begin{lem}\label{ofdgluemainsublam} There exists an open embedding of orbifolds $\underline{\phi}_{21} : U_{21}^{(1)} \to U_2$ such that $\underline{\phi}_{21}(U_{21}^{(1)}) = U_{12}^{(1)}$ and such that the following holds: \begin{enumerate} \item If $x = \underline{\phi}_{1r_j}(\tilde x_j)$ then \begin{equation}\label{phi21defformula} \underline{\phi}_{21}(x) = \underline{\phi}_{2r_j}(\tilde x_j). \end{equation} \item There exists a bundle isomorphism \begin{equation} \hat{\underline{\phi}}_{21} : E_1\vert_{U_{21}^{(1)}} \to E_2\vert_{U_{12}^{(1)}} \nonumber\end{equation} over $\underline{\phi}_{21}$. On the fiber at $x = \underline{\phi}_{1r_j}(\tilde x_j)$ we have \begin{equation}\label{hatphi21defformula} \hat{\underline{\phi}}_{21} = \hat{\underline{\phi}}_{2r_j} \circ \hat{\underline{\phi}}_{1r_j}^{-1}. \end{equation} \item On $U_{21}^{(1)}$ we have: \begin{equation}\label{compatis428} s_2 \circ \underline{\phi}_{21} = \hat{\underline{\phi}}_{21} \circ s_1. \end{equation} \item On $s_1^{-1}(0) \cap U_{21}^{(1)}$, we have: \begin{equation}\label{compatis429} \psi_2 \circ \underline{\phi}_{21} = \psi_1. \end{equation} \end{enumerate} \end{lem} \begin{proof} For the statement (1), we note that the right hand side of (\ref{phi21defformula}) is well-defined because of Condition \ref{cond47} (1). So to define $\underline{\phi}_{21}$ it suffices to show that the right hand side of (\ref{phi21defformula}) is independent of $j$. Suppose $$ x = \underline{\phi}_{1r_j}(\tilde x_j) = \underline{\phi}_{1r_{j'}}(\tilde x_{j'}) \in \underline{\phi}_{1q_i}(U^-_{12;q_i}). $$ By Condition \ref{cond47} (2) we have $\tilde x_j \in U_{q_ir_j}$, $\tilde x_{j'} \in U_{q_ir_{j'}}$ and $\underline{\phi}_{q_ir_j}(\tilde x_j) \in U_{12;q_i}$, $\underline{\phi}_{q_ir_{j'}}(\tilde x_{j'}) \in U_{12;q_i}$.
Since $$ \underline{\phi}_{1q_i}(\underline{\phi}_{q_ir_j}(\tilde x_j)) = x = \underline{\phi}_{1q_i}(\underline{\phi}_{q_ir_{j'}}(\tilde x_{j'})), $$ it follows that $$ \underline{\phi}_{q_ir_j}(\tilde x_j) = \underline{\phi}_{q_ir_{j'}}(\tilde x_{j'}). $$ Therefore $$ \underline{\phi}_{2r_j}(\tilde x_j) = \underline{\phi}_{2q_i}(\underline{\phi}_{q_ir_j}(\tilde x_j)) = \underline{\phi}_{2q_i}(\underline{\phi}_{q_ir_{j'}}(\tilde x_{j'})) = \underline{\phi}_{2r_{j'}}(\tilde x_{j'}), $$ as required. \par We have thus defined $\underline{\phi}_{21}$. We can define $\underline{\phi}_{12}$ in a similar way. It is easy to see that $\underline{\phi}_{21}\circ \underline{\phi}_{12}$ and $\underline{\phi}_{12}\circ \underline{\phi}_{21}$ are identity maps. Therefore $\underline{\phi}_{21}$ is an isomorphism. \par For (2), we define $\hat{\underline{\phi}}_{21}$ by (\ref{hatphi21defformula}). We can prove that it is well-defined and is an isomorphism in the same way as in the proof of (1). \par Finally, for (3) and (4), we note that (\ref{compatis428}) follows from (\ref{phi21defformula}) and (\ref{hatphi21defformula}), while (\ref{compatis429}) follows from (\ref{phi21defformula}). \end{proof} We now use the bundle isomorphism $(\hat{\underline{\phi}}_{21},\underline{\phi}_{21})$ to glue the pure orbifold neighborhoods $(U_1,E_1,s_1,\psi_1)$ and $(U_2,E_2,s_2,\psi_2)$. \par We first shrink $U_1, \, U_2$ so that Definition \ref{pureofdnbd} (4) holds, as follows. We choose open subsets $V_1, V_2 \subset X$ such that \begin{equation} K_1 \subset V_1 \subset \psi_1(s_1^{-1}(0)), \qquad K_2 \subset V_2 \subset \psi_2(s_2^{-1}(0)), \qquad V_1 \cap V_2 \subset \psi_1(U_{21}^{(1)} \cap s_1^{-1}(0)). \end{equation} (Such $V_1$, $V_2$ exist since $K_1 \cap K_2 \subset \psi_1(U_{21}^{(1)} \cap s_1^{-1}(0))$.)
\par We take open sets $U'_i \subset U_i$ such that $U'_i \cap s_i^{-1}(0) = \psi_i^{-1}(V_i)$ and set $U^{\prime (1)}_{21} = U^{(1)}_{21} \cap U'_1 \cap \underline{\phi}_{21}^{-1}(U_2')$, $U^{\prime (1)}_{12} = U^{(1)}_{12} \cap U'_2 \cap \underline{\phi}_{12}^{-1}(U_1')$. By restricting the base of the bundle $E_i$, the maps $\underline{\phi}_{21}$ etc. are defined in an obvious way and satisfy the conclusion of Lemma \ref{ofdgluemainsublam}. Moreover we have \begin{equation}\label{intproperty} \psi_1(U'_{1} \cap s_1^{-1}(0)) \cap \psi_2(U'_{2} \cap s_2^{-1}(0)) \subset \psi_1(U_{21}^{\prime (1)} \cap s_1^{-1}(0)). \end{equation} (\ref{intproperty}) implies that $\psi_1$, $\psi_2$ induce an {\it injective} map $(U'_{1} \cap s_1^{-1}(0)) \#_{\underline{\phi}_{21}} (U'_{2} \cap s_2^{-1}(0)) \to X$. \par This would have finished the proof of Lemma \ref{2gluelemma} if we had already established that the glued space $U_1' \#_{\underline\phi_{21}} U_2'$ is Hausdorff, which is not guaranteed by the construction so far. In order to obtain a Hausdorff space after gluing we need to shrink the domains further, as follows. \par The argument of this shrinking process is somewhat similar to that of Section \ref{defgoodcoordsec}. Let $U_1^0 \subset U'_1$ and $U_2^0 \subset U'_2$ be relatively compact open subsets such that $$ \psi_1(s_1^{-1}(0) \cap U_1^0) \supset K_1, \qquad \psi_2(s_2^{-1}(0) \cap U_2^0) \supset K_2. $$ Let $W_{21} \subset U_{21}^{(1)}$ be a relatively compact open subset such that \begin{equation}\label{Wcond} W_{21} \supset s_1^{-1}(0) \cap \overline {(U_1^0 \cap \underline{\phi}_{21}^{-1}(U_2^0))}. \end{equation} The existence of $W_{21}$ with this property follows from the next lemma. \begin{lem}\label{lemma49} $s_1^{-1}(0) \cap \overline{(U_1^0 \cap \underline{\phi}_{21}^{-1}(U_2^0))}$ is compact. \end{lem} \begin{proof} $\psi_1(\overline U_{1}^{0} \cap s_1^{-1}(0))$ is a compact subset of $V_1$.
Therefore $\overline U_{1}^{0} \cap s_1^{-1}(0)$ is compact. Since $s_1^{-1}(0) \cap \overline{(U_1^0 \cap \underline{\phi}_{21}^{-1}(U_2^0))}$ is a closed subset thereof, the lemma follows. \end{proof} We next prove: \begin{lem}\label{lem410} There exists an open subset $U_1^{00}$ with $$ \psi_1^{-1}(K_1) \subset U_1^{00} \subset \overline U_1^{00} \subset U_1^0 $$ such that \begin{equation}\label{415form} \overline U_1^{00} \cap \overline{\underline{\phi}_{21}^{-1}(U^0_2)} \subset W_{21}. \end{equation} \end{lem} \begin{proof} For each $x\in \psi_1^{-1}(K_1)$ we define a neighborhood $U_x$ of $x$ in $U_1$ as follows. \begin{enumerate} \item If $x \notin \overline{\underline{\phi}_{21}^{-1}(U^0_2)}$ then $\overline U_x \cap \overline{\underline{\phi}_{21}^{-1}(U^0_2)} = \emptyset$. \item If $x \in \overline{\underline{\phi}_{21}^{-1}(U^0_2)}$ then $\overline U_x \subset W_{21}$. \end{enumerate} We require in addition that $\overline U_x \subset U_1^0$; this is possible since $\psi_1^{-1}(K_1) \subset U_1^0$. Note $\psi_1^{-1}(K_1) \cap \overline{\underline{\phi}_{21}^{-1}(U^0_2)} \subset W_{21}$ by (\ref{Wcond}). Therefore we can find such $U_x$ in Case (2). We cover $\psi_1^{-1}(K_1)$ by a finite number of such $U_x$'s. Let $U_1^{00}$ be their union. Then it has the required properties. \end{proof} Let $C_2$ be the closure of $U_2^0$ in $U'_2$ and $C_1$ be the closure of $U_1^{00}$ in $U'_1$. We put $$ C_{21} = C_1 \cap \underline{\phi}_{21}^{-1}(C_2), \quad C_{12} = \underline{\phi}_{21}(C_{21}). $$ We put \begin{equation}\label{eq:simonC} C = (C_1 \cup C_2)/\sim \end{equation} where $\sim$ is defined as follows: $x\sim y$ if and only if one of the following holds. \begin{enumerate} \item $x = y$, or \item $x \in C_{21}$, $y = \underline{\phi}_{21}(x) \in C_2$, or \item $x \in C_{12}$, $y = \underline{\phi}_{12}(x) \in C_1$. \end{enumerate} We put the quotient topology on $C$. \begin{lem}\label{C21compact} $C_{21}$ is a compact subset of $C_1$. \end{lem} \begin{proof} It suffices to show that $C_{21}$ is a closed subset of $C_1$.
But this is obvious since $\underline\phi_{21}$ is a continuous map to $U_2$ and $C_2$, being compact, is a closed subset of $U_2$. \end{proof} \begin{lem}\label{sublemhaus} $C$ is Hausdorff. \end{lem} \begin{proof} This is a consequence of Proposition \ref{metrizable}. \end{proof} Let $\Pi_i : C_i \to C$ be the map which sends an element to its equivalence class. Since $C_i$ is compact and $C$ is Hausdorff, the map $\Pi_i$ is a topological embedding. We set $$ U = \Pi_1(U^{00}_1) \cup \Pi_2(U^0_2) \subset C $$ and put the subspace topology on it. This set carries an orbifold structure, since each of $U_1^{00}$ and $U_2^0$ does and the gluing is done by an (orbifold) diffeomorphism. Then by restricting $E_i$, $s_i$, $\psi_i$, etc. and gluing them, we obtain the required glued objects $E$, $s$, $\psi$, etc. \begin{lem}\label{XcapU} The quadruple $(U,E,s,\psi)$ constructed above satisfies the conditions of a pure orbifold neighborhood of $K_1 \cup K_2$. \end{lem} \begin{proof} Definition \ref{pureofdnbd} (1), (2) and (3) are obvious. We can use (\ref{intproperty}) and (\ref{415form}) to show that $\psi : s^{-1}(0) \to X$ is injective, where $s^{-1}(0) \subset U$. Therefore it is a topological embedding onto a neighborhood of $K_1 \cup K_2$. The proof of Lemma \ref{XcapU} is complete. \end{proof} Therefore the proofs of Lemma \ref{2gluelemma} and Proposition \ref{purecover} are complete. \end{proof} \par\medskip We have thus completed the first part of the proof of Theorem \ref{goodcoordinateexists} and enter the second part. \par Put $$ \frak D = \{\frak d \in \Z_{>0} \mid X(\frak d) \ne \emptyset \} $$ and regard $(\frak D,\le)$ as an ordered set with the order induced from $\Z$. A subset $D \subset \frak D$ is said to be an {\it ideal} if $$ \frak d \in D, \,\frak d' \ge \frak d \quad \Rightarrow \quad \frak d' \in D. $$ We call a subset $D' \subset D$ of an ideal $D$ a {\it sub-ideal} thereof if $D'$ is an ideal with respect to the induced order. For $D \subset \frak D$ we put $$ X(D) = \bigcup_{\frak d \in D} X(\frak d).
$$ Then $X(D)$ is a closed subset of $X$ if $D$ is an ideal. \begin{defn}\label{mixednbd} Let $D \subset \frak D$ be an ideal. A \emph{mixed orbifold neighborhood} of $X(D)$ is given by quadruples $$ (\mathcal K_{\frak d} \subset U_{\frak d}, E_{\frak d},s_{\frak d}, \psi_{\frak d}) $$ together with embeddings $(\underline{\hat\phi}_{\frak d\frak d'}, \underline\phi_{\frak d\frak d'})$ of vector bundles for each pair $(\frak d,\frak d')$ with $\frak d < \frak d'$. We assume they have the following properties: \begin{enumerate} \item $\mathcal K_{\frak d}$ is a compact subset of $X(\frak d)$. \item $(U_{\frak d}, E_{\frak d},s_{\frak d}, \psi_{\frak d})$ is a pure orbifold neighborhood of $\mathcal K_{\frak d}$. \item Let $\psi_{\frak d} : U_{\frak d} \cap s_{\frak d}^{-1}(0) \to X$ be as in Definition \ref{pureofdnbd} (4). Then we have \begin{equation}\label{Kislargerthan} \mathcal K_{\frak d} \supset X(\frak d) \setminus \bigcup_{\frak d' > \frak d} \psi_{\frak d'} \left(U_{\frak d'} \cap s_{\frak d'}^{-1}(0)\right). \end{equation} \item $U_{\frak d'\frak d}$ is an open neighborhood of $$ \psi_{\frak d}^{-1}\left( \mathcal K_{\frak d}\cap \psi_{\frak d'} \left(U_{\frak d'} \cap s_{\frak d'}^{-1}(0)\right) \right) $$ in $U_{\frak d}$. \item The map $$ \underline{\hat\phi}_{\frak d'\frak d} : E_{\frak d}\vert_{U_{\frak d'\frak d}} \to E_{\frak d'} $$ is an embedding of vector bundles over an embedding of orbifolds $$ \underline\phi_{\frak d'\frak d} : U_{\frak d'\frak d} \to U_{\frak d'} $$ such that \begin{enumerate} \item The equality $$ s_{\frak d'} \circ \underline\phi_{\frak d'\frak d} = \underline{\hat\phi}_{\frak d'\frak d}\circ s_{\frak d} $$ holds on $U_{\frak d'\frak d}$. \item The equality $$ \psi_{\frak d'} \circ \underline\phi_{\frak d'\frak d} = \psi_{\frak d} $$ holds on $U_{\frak d'\frak d} \cap s_{\frak d}^{-1}(0)$.
\end{enumerate} \item The restriction of $ds_{\frak d'}$ to the normal direction induces an isomorphism \begin{equation}\label{tangent***} N_{U_{\frak d'\frak d}}U_{\frak d'} \cong \frac{\underline{\phi}_{\frak d'\frak d}^*E_{\frak d'}}{E_\frak d\vert_{U_{\frak d'\frak d}}} \end{equation} as vector bundles on $U_{\frak d'\frak d}\cap s_{\frak d}^{-1}(0)$. \item If $p \in \psi_{\frak d}(U_{\frak d'\frak d} \cap s_{\frak d}^{-1}(0)) \subset \psi_{\frak d'}(U_{\frak d'} \cap s_{\frak d'}^{-1}(0))$, then $$ \underline\phi_{\frak d'\frak d}\circ \underline\phi_{\frak d p} = \underline\phi_{\frak d' p}, \quad \underline{\hat\phi}_{\frak d'\frak d}\circ \underline{\hat\phi}_{\frak d p} = \underline{\hat\phi}_{\frak d' p} $$ on $\underline{\phi}_{\frak d p}^{-1}(U_{\frak d' \frak d}) \cap U_{\frak d p} \cap U_{{\frak d}'p}$ and $E_p\vert_{\underline{\phi}_{\frak d p}^{-1}(U_{\frak d' \frak d}) \cap U_{\frak d p} \cap U_{{\frak d}'p}}$, respectively. (We write $\underline{\phi}_{\frak d p}$ etc. instead of $\underline\phi_{* p}$ etc. for the structure maps of $U_{\frak d}$. Namely we replace $*$ by $\frak d$.) \item The space $U(D)$, obtained from the disjoint union of the $U_{\frak d}$ ($\frak d \in D$) by gluing along the maps $\underline\phi_{\frak d'\frak d}$, is Hausdorff. The map ${\Pi}_{\frak d} : U_{\frak d} \to U(D)$ is a topological embedding. We have \begin{equation}\label{Dddd} U_{\frak d'\frak d} = {\Pi}_{\frak d}^{-1}{\Pi}_{\frak d'}(U_{\frak d'}) \end{equation} and \begin{equation}\label{Icompati} {\Pi}_{\frak d'} \circ \underline{\phi}_{\frak d'\frak d} = {\Pi}_{\frak d} \end{equation} on $U_{\frak d'\frak d}$. Moreover $$ U(D) = \bigcup_{\frak d \in D} \Pi_{\frak d}(U_{\frak d}). $$ We call $U(D)$ the {\it total space} of our mixed orbifold neighborhood. \item We define a subset $s_D^{-1}(0)$ of $U(D)$ by $s_D^{-1}(0) = \bigcup_{\frak d \in D}{\Pi}_{\frak d}(s_\frak d^{-1}(0))$. We define $\psi_D : s_D^{-1}(0) \to X$ such that $\psi_D\circ {\Pi}_{\frak d} = \psi_{\frak d}$ on $s_\frak d^{-1}(0) \subset U_{\frak d}$. (This is well-defined by (7).)
We require that $$ \psi_D : s_D^{-1}(0) \to X $$ is a topological embedding onto a neighborhood of $X(D)$ in $X$. \end{enumerate} \end{defn} Note that (\ref{Kislargerthan}) implies \begin{equation} X(D) \subset \bigcup_{\frak d \in D} \psi_{\frak d} \left(s_{\frak d}^{-1}(0)\right). \end{equation} We also have the following: \begin{lem}\label{mixednbdrestrict} Let $U(D)'$ be an open subset of $U(D)$ such that $$ U(D)' \supset \psi_D^{-1}(X(D)) \cap \bigcup_{\frak d \in D} {\Pi}_{\frak d}(s_{\frak d}^{-1}(0)). $$ Then there exists a mixed orbifold neighborhood of $X(D)$ whose total space is the above $U(D)'$. \end{lem} \begin{proof} We put $U'_{\frak d} = U_{\frak d} \cap {\Pi}_{\frak d}^{-1}(U(D)')$, $U'_{\frak d'\frak d} = {\Pi}_{\frak d}^{-1}({\Pi}_{\frak d'}(U_{\frak d'}) \cap U(D)')$. We define $E'_{\frak d}$ and the various maps by restricting those of $U(D)$. It is straightforward to check that they satisfy the required properties (1)-(9) of Definition \ref{mixednbd}. \end{proof} Another way to shrink $U(D)$ is as follows. \begin{lem}\label{mixednbdrestrict2} Let $ (\mathcal K_{\frak d} \subset U_{\frak d}, E_{\frak d},s_{\frak d}, \psi_{\frak d}) $ be a mixed orbifold neighborhood of $X(D)$. Suppose we are given $U'_{\frak d} \subset U_{\frak d}$ for each $\frak d$ such that $$ \psi_{\frak d}(s_{\frak d}^{-1}(0) \cap U'_{\frak d}) \supset \mathcal K_{\frak d}. $$ Then $U'_{\frak d}$ and the restriction of the other data define a mixed orbifold neighborhood of $X(D)$. \end{lem} \begin{proof} The proof is obvious. \end{proof} \par The goal of the second part of the proof of Theorem \ref{goodcoordinateexists} is to prove the following: \begin{prop}\label{existmixed} For any ideal $D$ there exists a mixed orbifold neighborhood of $X(D)$. \end{prop} \begin{proof} The proof is by induction on $\# D$. If $\# D = 1$ then $D = \{\frak d\}$ with $\frak d$ maximal in $\frak D$. We note that $X(\frak d)$ is compact when $\frak d$ is maximal.
We put $\mathcal K_{\frak d} = X(\frak d)$, which is compact. We use Proposition \ref{purecover} to obtain a pure orbifold neighborhood $U_{\frak d}$ of $\mathcal K_{\frak d} = X(\frak d)$. The proposition is proved in this case. \par Suppose we have proved the proposition for all $D'$ with $\# D' < \#D$. We will prove it for $D$. Let $\frak d_0$ be the minimal element of $D$. We put $D' = D \setminus \{\frak d_0\}$, which is a sub-ideal of $D$ with $\# D' < \#D$. Therefore, by the induction hypothesis, we have a mixed orbifold neighborhood of $X(D')$. We denote it by $U^{(1)}_{\frak d}$, $\mathcal K_{\frak d}$, $\underline{\phi}_{\frak d'\frak d}$ etc. (Here $\frak d,\frak d' \in D'$.) \par Let $\mathcal K_{\frak d_0}$ be a compact subset of $X(\frak d_0)$ such that \begin{equation} \bigcup_{\frak d \in D'}\psi^{(1)}_{\frak d} \left(U^{(1)}_{\frak d} \cap (s_{\frak d}^{(1)})^{-1}(0)\right) \supset \overline{(X(\frak d_0) \setminus \mathcal K_{\frak d_0})}. \end{equation} We apply Proposition \ref{purecover} to $\mathcal K_{\frak d_0}$ to obtain $U^{(1)}_{\frak d_0}$. The main part of the proof is to glue $U^{(1)}_{\frak d_0}$ with the mixed orbifold neighborhood of $X(D')$ to obtain the required mixed orbifold neighborhood of $X(D)$. The construction is similar to the proof of Lemma \ref{ofdgluemainsublam}. \begin{rem} We would like to remark that one outstanding difference between the gluing maps $\underline \phi_{ij}$ for the pure orbifold neighborhoods and $\underline\phi_{\frak d'\frak d}$ for the mixed orbifold neighborhoods is that the former is a diffeomorphism while the latter is only an embedding. \end{rem} Because of this difference, we repeat the details of the gluing process. \par Let $U^{(2)}(D')$ be a relatively compact open subset of $U^{(1)}(D')$ satisfying \begin{equation}\label{U2cond1} U^{(2)}(D') \supset \psi_{D'}^{-1}(X(D')).
\end{equation} We may choose it sufficiently close to $U^{(1)}(D')$ so that \begin{equation}\label{U2cond2} \bigcup_{\frak d \in D'}\psi^{(1)}_{\frak d} \left(U^{(2)}_{\frak d} \cap (s_{\frak d}^{(1)})^{-1}(0)\right) \supset \overline{(X(\frak d_0) \setminus \mathcal K_{\frak d_0})}. \end{equation} Here $U^{(2)}_{\frak d} $ is obtained from $U^{(2)}(D')$ as in the proof of Lemma \ref{mixednbdrestrict}. \par Let $U^{(2)}_{\frak d_0} \subset U^{(1)}_{\frak d_0}$ be a relatively compact open subset such that \begin{equation}\label{U2cond3} (\psi_{\frak d_0}^{(1)})^{-1}(\mathcal K_{\frak d_0}) \subset U^{(2)}_{\frak d_0}. \end{equation} We remark that (\ref{U2cond2}) and (\ref{U2cond3}) imply \begin{equation}\label{U2cond4} \psi^{(1)}_{\frak d_0} \left((s_{\frak d_0}^{(1)})^{-1}(0)\cap U^{(2)}_{\frak d_0}\right) \cup \psi^{(1)}_{D'} \left((s_{D'}^{(1)})^{-1}(0)\cap U^{(2)}(D')\right) \supset X(\frak d_0). \end{equation} \par We put \begin{equation}\label{Ldefn} \aligned \mathcal L_{\frak d_0} = X(\frak d_0) &\cap \psi^{(1)}_{\frak d_0}\left ((s_{\frak d_0}^{(1)})^{-1}(0) \cap U^{(2)}_{\frak d_0}\right)\\ &\cap\psi^{(1)}_{D'} \left((s_{D'}^{(1)})^{-1}(0)\cap U^{(2)}(D')\right). \endaligned \end{equation} \par For each $q \in \overline{\mathcal L}_{\frak d_0}$ we take an open neighborhood $U^{(01)}_q$ of $o_q$ in $U_q$ that satisfies the following conditions. \begin{conds} \begin{enumerate} \item If $\frak d > \frak d_0$ and $q \in \overline{\psi^{(1)}_{\frak d}((s_{\frak d}^{(1)})^{-1}(0)\cap U_{\frak d}^{(2)})}\cap \overline{\mathcal L}_{\frak d_0}$, then $U^{(01)}_q \subset U^{(1)}_{\frak d q}$. Here $U^{(1)}_{\frak d q}$ is as in Definition \ref{pureofdnbd} (5) for the pure orbifold neighborhood $U^{(1)}_{\frak d}$. \item $U^{(01)}_q \subset U^{(1)}_{\frak d_0 q}$, where $U^{(1)}_{\frak d_0 q}$ is as in Definition \ref{pureofdnbd} (5) for the pure orbifold neighborhood $U^{(1)}_{\frak d_0}$.
\end{enumerate} \end{conds} For a finite subset $\frak I = \{q_i \mid i = 1,\dots, I\} \subset \overline{\mathcal L}_{\frak d_0}$ and each given $\frak d \in D'$, we write \begin{equation}\label{eq:Ifrakd} \frak I_{\frak d} = \frak I \cap \overline{\psi^{(1)}_{\frak d}((s_{\frak d}^{(1)})^{-1}(0)\cap U_{\frak d}^{(2)})} \cap \overline{\mathcal L}_{\frak d_0}. \end{equation} We also denote $$ \mathcal U^{(21)}_{\frak d}:= \psi^{(1)}_{\frak d}((s_{\frak d}^{(1)})^{-1}(0)\cap U_{\frak d}^{(2)}) \subset \mathcal U^{(1)}_{\frak d}. $$ By the compactness of $\overline{\mathcal L}_{\frak d_0}$, it follows that there exists a finite subset $\frak I$ of $\overline{\mathcal L}_{\frak d_0}$ such that the following condition holds. \begin{conds}\label{cond415} For any $\frak d \in D'$ we have: $$ \bigcup_{q_i \in \frak I_{\frak d}} \mathcal U_{q_i}^{(01)} \supset \overline{\mathcal U^{(21)}_{\frak d}} \cap \overline{\mathcal L}_{\frak d_0}. $$ Here $\mathcal U_{q_i}^{(01)} = \psi_{q_i}^{(1)}((s_{q_i}^{(1)})^{-1}(0)\cap U_{q_i}^{{(01)}})$. \end{conds} Since $$ \overline{\mathcal L}_{\frak d_0} \subset \bigcup_{\frak d \in D'} \overline{\mathcal U^{(21)}_{\frak d}}, $$ Condition \ref{cond415} implies \begin{equation} \bigcup_{q_i \in \frak I} \mathcal U_{q_i}^{(01)} \supset \overline{\mathcal L}_{\frak d_0}. \end{equation} We may assume that $\{\mathcal U_{q_i}^{(01)}\}$ satisfies Condition \ref{cond46}. \par We next take a relatively compact open subset $U_{q_i}^{(01)-}$ of $U_{q_i}^{{(01)}}$ such that the following holds. \begin{conds}\label{cond416} For any $\frak d \in D'$, we have: $$ \bigcup_{q_i \in \frak I_{\frak d}} \mathcal U_{q_i}^{(01)-} \supset \overline{\mathcal U^{(21)}_{\frak d}} \cap \overline{\mathcal L}_{\frak d_0}. $$ Here $\mathcal U_{q_i}^{(01)-} = \psi^{(1)}_{q_i}((s^{(1)}_{q_i})^{-1}(0)\cap U_{q_i}^{(01)-})$.
\end{conds} We also assume that $\{\mathcal U_{q_i}^{(01)-}\}$ satisfies Condition \ref{cond46}. \par For $r \in \overline{\mathcal L}_{\frak d_0}$ we take an open neighborhood $U_r^{0}$ of $o_r$ in $U_r$ with the following properties. \begin{conds}\label{cond417} \begin{enumerate} \item If $\frak d >\frak d_0$ and $r \in \overline{\mathcal U^{(21)}_{\frak d}}\cap \overline{\mathcal L}_{\frak d_0}$, then $U_r^0 \subset U_{\frak d r}^{(1)}$. \item $U_r^0 \subset U_{\frak d_0 r}^{(1)}$. \item If $\underline{\phi}^{(1)}_{\frak d_0 r}(U_r^0) \cap \overline{\underline{\phi}^{(1)}_{\frak d_0 q_i}(U_{q_i}^{(01)-})} \ne \emptyset$ then $U_r^0 \subset U_{q_i r}^{(1)} \cap (\underline{\phi}_{q_i r}^{(1)})^{-1}(U^{(01)}_{q_i})$. \item If $\underline{\phi}^{(1)}_{\frak d r}(U_r^0) \cap \overline{\underline{\phi}^{(1)}_{\frak d q_i}(U_{q_i}^{(01)-})} \ne \emptyset$ then $U^0_r \subset U^{(1)}_{q_i r} \cap ( \underline{\phi}_{q_i r}^{(1)})^{-1}(U_{q_i}^{(01)})$. \end{enumerate} \end{conds} The existence of such $U^0_r$ is proved in the same way as Lemma \ref{existsUr0}. We choose a subset $$ \frak J = \{r_j \mid r_j \in\overline{\mathcal L}_{\frak d_0}, \ j=1, \dots, J \} \subset \overline{\mathcal L}_{\frak d_0} $$ such that the following holds for each $\frak d \in D'$: \begin{equation} \bigcup_{r_j \in \frak J_{\frak d}} \mathcal U_{r_j}^0 \supset \overline{\mathcal U^{(21)}_{\frak d}} \cap \overline{\mathcal L}_{\frak d_0}, \end{equation} where $$ \frak J_{\frak d} =\frak J \cap \overline{\psi^{(1)}_{\frak d}((s_{\frak d}^{(1)})^{-1}(0)\cap U_{\frak d}^{(2)})}\cap \overline{\mathcal L}_{\frak d_0}. $$ We now put \begin{equation} U_{\frak d\frak d_0}^{(1)} =\bigcup_{r_ j \in \frak J_{\frak d}} \bigcup_{q_i \in \frak I_{\frak d}} \underline{\phi}^{(1)}_{\frak d_0 r_j}(U_{r_j}^0) \cap \underline{\phi}^{(1)}_{\frak d_0 q_i}(U_{q_i}^{(01)-}). \end{equation} This is an open subset of $U_{\frak d_0}^{(1)}$.
Since $U_{\frak d_0}^{(1)}$ is an orbifold, its open subset $U_{\frak d\frak d_0}^{(1)}$ is also an orbifold. We remark that \begin{equation}\label{Liscontained1} \psi^{(1)}_{\frak d_0} (U_{\frak d\frak d_0}^{(1)} \cap (s^{(1)}_{\frak d_0})^{-1}(0)) \supset \overline{\mathcal L}_{\frak d_0} \cap \mathcal U^{(21)}_{\frak d}. \end{equation} The following lemma is an analog to Lemma \ref{ofdgluemainsublam}. As we mentioned before, one difference is that the gluing map $\underline\phi^{(1)}_{\frak d \frak d_0}$ is not an open embedding. \begin{lem}\label{lem418} There exists an embedding of orbifolds $\underline\phi^{(1)}_{\frak d \frak d_0} : U_{\frak d\frak d_0}^{(1)} \to U_{\frak d}^{(1)}$ with the following properties. \begin{enumerate} \item If $x = \underline{\phi}^{(1)}_{\frak d_0 r_j}(\tilde x_j)$, then \begin{equation}\label{440} \underline\phi^{(1)}_{\frak d \frak d_0}(x) = \underline{\phi}^{(1)}_{\frak d r_j}(\tilde x_j). \end{equation} \item There exists an embedding of vector bundles $$ \underline{\hat\phi}^{(1)}_{\frak d \frak d_0} : E_{\frak d_0}^{(1)}\vert_{U_{\frak d\frak d_0}^{(1)}} \to E_{\frak d}^{(1)} $$ that covers $\underline\phi^{(1)}_{\frak d \frak d_0}$. \item If $\frak d > \frak d' > \frak d_0$ then $$ \underline{\phi}^{(1)}_{\frak d \frak d_0} = \underline{\phi}^{(1)}_{\frak d \frak d'} \circ \underline{\phi}^{(1)}_{\frak d' \frak d_0} $$ on $ (\underline{\phi}^{(1)}_{\frak d' \frak d_0})^{-1}(U_{\frak d \frak d'}^{(1)}) \cap U_{\frak d \frak d_0}^{(1)} $ and $$ \underline{\hat\phi}^{(1)}_{\frak d \frak d_0} = \underline{\hat\phi}^{(1)}_{\frak d \frak d'} \circ \underline{\hat\phi}^{(1)}_{\frak d' \frak d_0} $$ on $ E^{(1)}_{\frak d_0}\vert_{(\underline{\phi}^{(1)}_{\frak d' \frak d_0})^{-1}(U_{\frak d \frak d'}^{(1)}) \cap U_{\frak d \frak d_0}^{(1)} } $. 
\item We have $$ s_{\frak d}^{(1)} \circ \underline{\phi}^{(1)}_{\frak d \frak d_0} = \underline{\hat\phi}^{(1)}_{\frak d \frak d_0} \circ s^{(1)}_{\frak d_0} $$ on $U_{\frak d \frak d_0}^{(1)}$. \item We have $$ \psi_{\frak d}^{(1)} \circ \underline{\phi}^{(1)}_{\frak d \frak d_0} = \psi_{\frak d_0}^{(1)} $$ on $U_{\frak d \frak d_0}^{(1)} \cap (s_{\frak d_0}^{(1)})^{-1}(0)$. \item The restriction of $ds_{\frak d}^{(1)}$ to the normal direction induces an isomorphism \begin{equation}\label{tangent***2} N_{U^{(1)}_{\frak d\frak d_0}}U^{(1)}_{\frak d} \cong \frac{E^{(1)}_{\frak d}} {\underline{\hat\phi}^{(1)}_{\frak d\frak d_0}(E^{(1)}_{\frak d_0}\vert_{U^{(1)}_{\frak d\frak d_0}})} \end{equation} as vector bundles on $U^{(1)}_{\frak d\frak d_0}\cap (s_{\frak d_0}^{(1)})^{-1}(0)$. \end{enumerate} \end{lem} \begin{proof} The proof is similar to the proof of Lemma \ref{ofdgluemainsublam}. \par Note that the right hand side of (\ref{440}) is well-defined because of Condition \ref{cond417} (1). We first show that the right hand side of (\ref{440}) is independent of $j$. Suppose $$ x = \underline{\phi}^{(1)}_{\frak d_0 r_j}(\tilde x_j)= \underline{\phi}^{(1)}_{\frak d_0 r_{j'}}(\tilde x_{j'}) \in \underline{\phi}^{(1)}_{\frak d_0 q_i}(U_{q_i}^{(01)-}). $$ Then by Condition \ref{cond417} (3) we have $\tilde x_j \in U^{(1)}_{q_i r_j}$, $\tilde x_{j'} \in U^{(1)}_{q_i r_{j'}}$ and $\underline{\phi}^{(1)}_{q_i r_j}(\tilde x_j) \in U_{q_i}^{(01)}$, $\underline{\phi}^{(1)}_{q_i r_{j'}}(\tilde x_{j'}) \in U_{q_i}^{(01)}$. Since $$ \underline{\phi}^{(1)}_{\frak d_0 q_i}(\underline{\phi}^{(1)}_{q_i r_j}(\tilde x_j)) = x = \underline{\phi}^{(1)}_{\frak d_0 q_i}(\underline{\phi}^{(1)}_{q_i r_{j'}}(\tilde x_{j'})), $$ it follows that $$ \underline{\phi}^{(1)}_{q_i r_j}(\tilde x_j) = \underline{\phi}^{(1)}_{q_i r_{j'}}(\tilde x_{j'}).
$$
Therefore
$$
\underline{\phi}^{(1)}_{\frak d r_j}(\tilde x_j) =\underline{\phi}^{(1)}_{\frak d q_i}(\underline{\phi}^{(1)}_{q_i r_j}(\tilde x_j)) = \underline{\phi}^{(1)}_{\frak d q_i}(\underline{\phi}^{(1)}_{q_i r_{j'}}(\tilde x_{j'})) = \underline{\phi}^{(1)}_{\frak d r_{j'}}(\tilde x_{j'})
$$
as required.
We remark that $\underline{\phi}^{(1)}_{\frak d_0 r_j}$ is an open embedding of orbifolds. Therefore $\underline\phi^{(1)}_{\frak d \frak d_0}$ defined by (\ref{440}) is an embedding of orbifolds.
\par
The proof of (2) is similar. Then the proofs of (3)-(6) are straightforward.
\end{proof}
The pure orbifold neighborhoods $U^{(1)}_{\frak d}$ ($\frak d > \frak d_0$) and $U^{(1)}_{\frak d_0}$ together with $\underline\phi_{\frak d\frak d_0}$ etc. satisfy the properties required in Definition \ref{mixednbd} except the following two points.
\begin{enumerate}
\item[(A)] Hausdorff-ness of the space $U(D)$, which is required in Definition \ref{mixednbd} (8).
\item[(B)] Injectivity of the map $\psi_D$, which is required in Definition \ref{mixednbd} (9).
\end{enumerate}
\par
In the rest of the proof we shrink $U^{(1)}_{\frak d}$, $U^{(1)}_{\frak d_0}$ again so that (A), (B) above are satisfied. We shrink only in the ways appearing in Lemma \ref{mixednbdrestrict} or in Lemma \ref{mixednbdrestrict2}. Therefore the properties required in Definition \ref{mixednbd} other than (A) and (B) continue to hold after shrinking.
\par
We put
\begin{equation}\label{44344}
U_{\frak d \frak d_0}^{(2)} = (\underline\phi^{(1)}_{\frak d \frak d_0})^{-1} (U_{\frak d}^{(2)}) \cap U_{\frak d_0}^{(2)}.
\end{equation}
Let $\underline\phi^{(2)}_{\frak d \frak d_0}$ and $\underline{\hat\phi}^{(2)}_{\frak d \frak d_0}$ be the restrictions of $\underline\phi^{(1)}_{\frak d \frak d_0}$ and $\underline{\hat\phi}^{(1)}_{\frak d \frak d_0}$ to $U_{\frak d \frak d_0}^{(2)}$ and $E_{\frak d_0}\vert_{U_{\frak d \frak d_0}^{(2)}}$, respectively.
\par
The inclusion (\ref{Liscontained1}) and the definition of $U_{\frak d\frak d_0}^{(2)}$ imply
\begin{equation}\label{Liscontained2}
\psi^{(2)}_{\frak d_0} (U_{\frak d\frak d_0}^{(2)} \cap (s^{(2)}_{\frak d_0})^{-1}(0)) \supset \overline{\mathcal L}_{\frak d_0} \cap \psi^{(2)}_{\frak d}((s_{\frak d}^{(2)})^{-1}(0) \cap U_{\frak d}^{(2)}) \cap \psi_{\frak d_0}^{(2)} ((s_{\frak d_0}^{(2)})^{-1}(0) \cap U_{\frak d_0}^{(2)}).
\end{equation}
\begin{lem}\label{lem1}
There exist $U_{\frak d_0}^{(3)} \subset U_{\frak d_0}^{(2)}$ and $U_{\frak d}^{(3)} \subset U_{\frak d}^{(2)}$ such that the following holds. We define $U_{\frak d \frak d_0}^{(3)}$ and $U_{\frak d \frak d'}^{(3)}$ for $\frak d,\frak d' \in D'$ with $\frak d > \frak d' >\frak d_0$ in the same way as (\ref{44344}). The various bundles, maps, and sections are defined by restriction in the obvious way. Then we have:
\begin{enumerate}
\item (\ref{U2cond1})-(\ref{U2cond4}) hold when we replace ${*}^{(1)}$ by ${*}^{(3)}$. (Here and hereafter $*$ stands for anything such as $U_{\frak d_0}$ etc.)
\item The conclusion of Lemma \ref{lem418} holds when we replace ${*}^{(1)}$ by ${*}^{(3)}$.
\item (\ref{Liscontained2}) holds when we replace ${*}^{(2)}$ by ${*}^{(3)}$.
\item
\begin{equation}\label{overremove1}
\psi_{\frak d_0}(s_{\frak d_0}^{-1}(0) \cap U_{\frak d_0}^{(3)}) \cap \psi_{\frak d}(s_{\frak d}^{-1}(0) \cap U^{(3)}_{\frak d}) \subseteq \psi_{\frak d_0} (s_{\frak d_0}^{-1}(0) \cap U_{\frak d\frak d_0}^{(3)})
\end{equation}
for $\frak d > \frak d_0$.
\end{enumerate}
\end{lem}
The opposite inclusion of \eqref{overremove1} is obvious. We note that (4) above implies that Property (B) (the injectivity required in Definition \ref{mixednbd} (9)) holds for $U_{\frak d_0}^{(3)}$, $U_{\frak d}^{(3)}$.
\begin{proof}
By (\ref{Liscontained2}) and (\ref{Ldefn}) we have
$$
\aligned
&\psi_{\frak d}^{(2)}(s_{\frak d}^{-1}(0) \cap U^{(2)}_{\frak d}) \cap \psi_{\frak d_0}^{(2)}(s_{\frak d_0}^{-1}(0) \cap U^{(2)}_{\frak d_0}) \cap X(D) \\
&=\psi_{\frak d}^{(2)}(s_{\frak d}^{-1}(0) \cap U^{(2)}_{\frak d}) \cap \psi_{\frak d_0}^{(2)}(s_{\frak d_0}^{-1}(0) \cap U^{(2)}_{\frak d_0}) \cap X(\frak d_0) \\
&\subset \overline{\mathcal L}_{\frak d_0} \cap \psi^{(2)}_{\frak d}((s_{\frak d}^{(2)})^{-1}(0)\cap U_{\frak d}^{(2)}) \subset \psi^{(2)}_{\frak d_0} (U_{\frak d\frak d_0}^{(2)} \cap (s^{(2)}_{\frak d_0})^{-1}(0)).
\endaligned$$
Therefore we can choose open sets $V_{\frak d_0}, V_{D'} \subset X$ such that:
\begin{enumerate}
\item $V_{D'} \supset X(D')$.
\item $V_{D'} \supset \overline{(X(\frak d_0) \setminus \mathcal K_{\frak d_0})}$.
\item $\mathcal K_{\frak d_0} \subset V_{\frak d_0}$.
\item $V_{D'} \cup V_{\frak d_0} \supset X(D)$.
\item $V_{D'} \cap V_{\frak d_0} \cap \psi^{(2)}_{\frak d}((s_{\frak d}^{(2)})^{-1}(0)\cap U_{\frak d}^{(2)})\subset \psi^{(2)}_{\frak d_0} (U_{\frak d\frak d_0}^{(2)} \cap (s^{(2)}_{\frak d_0})^{-1}(0)) $.
\end{enumerate}
We take $U_{\frak d_0}^{(3)} \subset U_{\frak d_0}^{(2)}$ and $U(D')^{(3)} \subset U(D')^{(2)}$ (where $U(D')^{(2)} = \bigcup_{\frak d \in D'} \Pi_{\frak d}(U_{\frak d}^{(2)})$) such that
$$
\aligned
&U_{\frak d_0}^{(3)} \cap (s_{\frak d_0}^{(2)})^{-1}(0) =(\psi_{\frak d_0}^{(2)})^{-1}(V_{\frak d_0}),\\
&U(D')^{(3)} \cap (s_{D'}^{(2)})^{-1}(0) =(\psi_{D'}^{(2)})^{-1}(V_{D'}).
\endaligned
$$
We use $U(D')^{(3)}$ to obtain the sets $U_{\frak d}^{(3)}$ for $\frak d \in D'$. Then Formula (\ref{44344}), after replacing ${*}^{(2)}$ by ${*}^{(3)}$, determines $U_{\frak d\frak d_0}^{(3)}$. The open sets $U_{\frak d\frak d'}^{(3)}$ for $\frak d,\frak d' \in D'$ $(\frak d > \frak d')$ are defined similarly. The bundles, maps, and sections are defined by restriction in the obvious way. Then the conclusion of Lemma \ref{lem418} holds.
\par
Conditions (1)-(4) above imply (\ref{U2cond1})-(\ref{U2cond4}), respectively. Condition (5) implies (\ref{overremove1}).
\end{proof}
\begin{rem}
A similar formula
\begin{equation}\label{overremove2}
\psi_{\frak d_1}(s_{\frak d_1}^{-1}(0) \cap U^{(3)}_{\frak d_1}) \cap \psi_{\frak d_2}(s_{\frak d_2}^{-1}(0) \cap U^{(3)}_{\frak d_2}) \subseteq \psi_{\frak d_1} (s_{\frak d_1}^{-1}(0) \cap U^{(3)}_{\frak d_2\frak d_1})
\end{equation}
for $\frak d_2 > \frak d_1 > \frak d_0$ is a consequence of Definition \ref{mixednbd} (9) applied to $D'$, and so is a part of the induction hypothesis. (More precisely, (\ref{overremove2}) with $U^{(3)}_{\frak d_1}$ etc. replaced by $U^{(1)}_{\frak d_1}$ etc. is the consequence of the induction hypothesis. Then (\ref{overremove2}) follows easily from the definitions.)
\end{rem}
It remains to shrink so that (A) (Hausdorff-ness) holds. The way we shrink here is similar to the construction of the pure orbifold neighborhood. Note that we are gluing many spaces $U_{\frak d}$. We will reduce the problem to the gluing of two spaces $U_{\frak d_0}$ and $U(D')$. For this purpose we need to modify the charts so that the maps $\underline\phi_{\frak d\frak d_0}$ can be glued to give a map $U_{\frak d_0} \to U(D')$. The details follow.
\par
We shrink again each of $U^{(3)}_{\frak d}$ with $\frak d > \frak d_0$ to obtain $U^{(4)}_{\frak d}$. We take it so that $U^{(4)}_{\frak d}$ is relatively compact in $U^{(3)}_{\frak d}$ and $U^{(4)}(D') = \bigcup_{\frak d\in D'}\Pi_{\frak d}(U^{(4)}_{\frak d})$ still carries the structure of a mixed orbifold neighborhood. We also shrink $U^{(3)}_{\frak d_0}$ to a relatively compact subset $U^{(4)}_{\frak d_0}$.
The domain of the coordinate change is defined by:
$$
U^{(4)}_{\frak d_2\frak d_1} = \Pi^{-1}_{\frak d_1}(\Pi_{\frak d_1}(U^{(4)}_{\frak d_1})\cap \Pi_{\frak d_2}(U^{(4)}_{\frak d_2})) = (\underline\phi^{(3)}_{\frak d_2\frak d_1})^{-1}(U^{(4)}_{\frak d_2}) \cap U^{(4)}_{\frak d_1}
$$
and
$$
U^{(4)}_{\frak d\frak d_0} = (\underline\phi^{(3)}_{\frak d\frak d_0})^{-1}(U_{\frak d}^{(4)}) \cap U^{(4)}_{\frak d_0}.
$$
Bundles and bundle maps etc. are obtained by restriction in the obvious way. The conclusion of Lemma \ref{lem1} holds with $*^{(3)}$ replaced by $*^{(4)}$.
\par
For each point $x \in \psi_{\frak d_0}^{-1}(\mathcal K_{\frak d_0})$ we take a neighborhood $U_x$ of $x$ in $U^{(4)}_{\frak d_0}$ with the following property.
\begin{proper}\label{propertyforUxX}
If $\frak d_2 > \frak d_1 > \frak d_0$ and
$$
U_x \cap U^{(4)}_{\frak d_1 \frak d_0} \cap U^{(4)}_{\frak d_2 \frak d_0} \ne \emptyset
$$
then
$$
\underline\phi^{(3)}_{\frak d_1\frak d_0}(U_x) \subseteq U^{(3)}_{\frak d_2 \frak d_1}.
$$
\end{proper}
Such a choice is possible because of the next lemma.
\begin{lem}
$$
(\psi_{\frak d_0}^{(3)})^{-1}(\mathcal K_{\frak d_0}) \cap \overline{(U^{(4)}_{\frak d_1 \frak d_0} \cap U^{(4)}_{\frak d_2 \frak d_0})} \subset (\underline\phi^{(3)}_{\frak d_1\frak d_0})^{-1} (U^{(3)}_{\frak d_2\frak d_1}).
$$
\end{lem}
\begin{proof}
Note that $\frak K_{\frak d_0} = \psi_{\frak d_0}^{(3)}((s_{\frak d_0}^{(3)})^{-1}(0) \cap \overline{U}_{\frak d_0}^{(4)})$ is a compact subset of $\psi_{\frak d_0}^{(3)}((s_{\frak d_0}^{(3)})^{-1}(0) \cap U_{\frak d_0}^{(3)})$ and $\frak K_{\frak d_i} = \psi_{\frak d_i}^{(3)}((s_{\frak d_i}^{(3)})^{-1}(0) \cap \overline{U}_{\frak d_i}^{(4)})$ is a compact subset of $\psi_{\frak d_i}^{(3)}((s_{\frak d_i}^{(3)})^{-1}(0) \cap U^{(3)}_{\frak d_i})$ for $i=1,2$. (Here $\overline{U}_{\frak d_0}^{(4)}$ is the closure of ${U}_{\frak d_0}^{(4)}$ in ${U}_{\frak d_0}^{(3)}$.)
Using (\ref{overremove1}), (\ref{overremove2}), we find
$$
\aligned
\frak K_{\frak d_0} \cap \frak K_{\frak d_i} &\subset \psi_{\frak d_0}^{(3)}(s_{\frak d_0}^{-1}(0) \cap U_{\frak d_0}^{(3)}) \cap \psi^{(3)}_{\frak d_i}(s_{\frak d_i}^{-1}(0) \cap U^{(3)}_{\frak d_i})\\
&\subseteq \psi_{\frak d_0}^{(3)} (s_{\frak d_0}^{-1}(0) \cap U_{\frak d_i\frak d_0}^{(3)}),
\endaligned
$$
and
$$
\frak K_{\frak d_1} \cap \frak K_{\frak d_2} \subset \psi^{(3)}_{\frak d_1}(s_{\frak d_1}^{-1}(0) \cap U^{(3)}_{\frak d_1}) \cap \psi^{(3)}_{\frak d_2}(s_{\frak d_2}^{-1}(0) \cap U^{(3)}_{\frak d_2}) \subseteq \psi^{(3)}_{\frak d_1} ((s^{(3)}_{\frak d_1})^{-1}(0) \cap U^{(3)}_{\frak d_2\frak d_1} ).
$$
Now the lemma follows from:
$$
\psi^{(3)}_{\frak d_0} \left( (\psi_{\frak d_0}^{(3)})^{-1}(\mathcal K_{\frak d_0}) \cap \overline{(U^{(4)}_{\frak d_1 \frak d_0} \cap U^{(4)}_{\frak d_2 \frak d_0})} \right) \subset \frak K_{\frak d_0} \cap \frak K_{\frak d_1} \cap \frak K_{\frak d_2}.
$$
\end{proof}
We cover $\psi_{\frak d_0}^{-1}(\mathcal K_{\frak d_0})$ by finitely many sets $U_{x_i}$ ($i=1,\dots,I$) among such $U_x$'s and let $U^{(5)}_{\frak d_0}$ be their union $\bigcup_{i=1}^{I}U_{x_i}$.
\begin{lem}
\begin{equation}\label{Mcformula-}
U_{\frak d_1 \frak d_0}^{(5)} \cap U_{\frak d_2 \frak d_0}^{(5)} \subset (\underline\phi^{(5)}_{\frak d_1 \frak d_0})^{-1}(U_{\frak d_2 \frak d_1}^{(4)}).
\end{equation}
Here
$$
U_{\frak d \frak d_0}^{(5)} = U_{\frak d \frak d_0}^{(4)} \cap U^{(5)}_{\frak d_0}
$$
and ${\underline\phi}^{(5)}_{\frak d \frak d_0}$ is the restriction of ${\underline\phi}^{(3)}_{\frak d \frak d_0}$ to $U_{\frak d \frak d_0}^{(5)}$.
\end{lem}
\begin{proof}
Suppose $y \in U_{\frak d_1 \frak d_0}^{(5)} \cap U_{\frak d_2 \frak d_0}^{(5)}$. Then we have ${\underline\phi}^{(3)}_{\frak d_2\frak d_0}(y) \in U_{\frak d_2}^{(4)}$ and ${\underline\phi}^{(3)}_{\frak d_1\frak d_0}(y)\in U_{\frak d_1}^{(4)}$. There exists $i \in \{1,\dots,I\}$ with $y \in U_{x_i}$.
Since
$$
y \in U_{x_i} \cap U^{(5)}_{\frak d_1 \frak d_0} \cap U^{(5)}_{\frak d_2 \frak d_0},
$$
Property \ref{propertyforUxX} implies:
$$
{\underline\phi}_{\frak d_1\frak d_0}^{(3)}(y)\in {\underline\phi}_{\frak d_1\frak d_0}^{(3)}(U_{x_i}) \subseteq U_{\frak d_2 \frak d_1}^{(3)}.
$$
By Lemma \ref{lem418} (3),
$$
{\underline\phi}^{(3)}_{\frak d_2\frak d_1}({\underline\phi}_{\frak d_1\frak d_0}^{(3)}(y)) = {\underline\phi}_{\frak d_2\frak d_0}^{(3)}(y).
$$
Therefore
$$
{\underline\phi}^{(3)}_{\frak d_1\frak d_0}(y) \in ({\underline\phi}^{(3)}_{\frak d_2 \frak d_1})^{-1}(U_{\frak d_2}^{(4)})\cap U_{\frak d_1}^{(4)} = U_{\frak d_2 \frak d_1}^{(4)}.
$$
Thus
$$
y \in ({\underline\phi}^{(5)}_{\frak d_1 \frak d_0})^{-1}(U_{\frak d_2 \frak d_1}^{(4)})
$$
as required.
\end{proof}
Let $U^{(6)}_{\frak d}$ be a relatively compact subset of $U^{(4)}_{\frak d}$ for $\frak d \in D'$ and $U^{(6)}_{\frak d_0}$ a relatively compact subset of $U^{(5)}_{\frak d_0}$. We may choose them so that Lemma \ref{lem1} (1)-(4) hold when we replace $*^{(3)}$ and $*^{(2)}$ by $*^{(6)}$.
\par
We define
\begin{equation}
U^{(6)}(D) = \bigcup_{\frak d \in D} U^{(6)}_{\frak d}/\sim.
\end{equation}
Here $\sim$ is defined as follows: $x \sim y$ if and only if one of the following holds.
\begin{enumerate}
\item $x = y$.
\item $x \in U^{(6)}_{\frak d'}$, $y \in U^{(6)}_{\frak d'\frak d} \cap U^{(6)}_{\frak d}$, $x = \underline{\phi}^{(6)}_{\frak d'\frak d}(y)$.
\item $y \in U^{(6)}_{\frak d'}$, $x \in U^{(6)}_{\frak d'\frak d} \cap U^{(6)}_{\frak d}$, $y = \underline{\phi}^{(6)}_{\frak d'\frak d}(x)$.
\end{enumerate}
We define $U^{(6)}(D')$ in the same way.
\par\medskip
We define a map ${\Pi}_{\frak d} : U^{(6)}_{\frak d} \to U^{(6)}(D)$ by sending an element to its equivalence class.
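As an aside, the set-level content of the quotient $U^{(6)}(D) = \bigcup_{\frak d \in D} U^{(6)}_{\frak d}/\sim$ can be modelled combinatorially: at the level of underlying sets, gluing charts along coordinate changes is a union-find computation, and the compatibility $\underline{\phi}_{\frak d''\frak d} = \underline{\phi}_{\frak d''\frak d'} \circ \underline{\phi}_{\frak d'\frak d}$ of Lemma \ref{lem418} (3) is what makes $\sim$ transitive. The following Python sketch is only a toy illustration with hypothetical finite ``charts'' and gluing maps; it ignores all topological and orbifold structure.

```python
# Toy set-level model of U(D) = (union of charts)/~ .
# Charts are finite sets of labeled points; a "coordinate change"
# changes[(d2, d1)] is a partial injection from chart d1 into chart d2.
# These finite charts and maps are hypothetical illustrations only.

def glue(charts, changes):
    """Return a dict sending each point (chart, label) to a class representative."""
    parent = {}

    def find(a):
        # union-find with path halving
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for d, pts in charts.items():
        for p in pts:
            parent[(d, p)] = (d, p)
    # identify x ~ phi_{d2 d1}(x), as in the definition of ~ above
    for (d2, d1), phi in changes.items():
        for x, y in phi.items():
            union((d1, x), (d2, y))
    # the analog of Pi_d: send each element to its equivalence class
    return {a: find(a) for a in parent}

charts = {1: {"a", "b"}, 2: {"p", "q"}, 3: {"u"}}
changes = {
    (2, 1): {"b": "p"},   # phi_{21}
    (3, 2): {"p": "u"},   # phi_{32}
    (3, 1): {"b": "u"},   # phi_{31} = phi_{32} o phi_{21} (compatibility)
}
classes = glue(charts, changes)
print(len(set(classes.values())))  # 3: the classes {a}, {b,p,u}, {q}
```

Transitivity of the identification ($b \sim p \sim u$) is automatic in the union-find; the compatibility condition guarantees that the identification coming from $\phi_{31}$ is consistent with the ones already made, so no unintended collapsing occurs.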
We remark that we have a continuous map
$$
\underline{\phi}^{(6)}_{D'\frak d_0} : U^{(6)}_{D'\frak d_0} \to U^{(6)}(D')
$$
from $U^{(6)}_{D'\frak d_0} = \bigcup_{\frak d \in D'} U^{(6)}_{\frak d\frak d_0} \subset U^{(6)}_{\frak d_0}$ such that
$$
\underline{\phi}^{(6)}_{D'\frak d_0} = {\Pi}^{(6)}_{\frak d}\circ \underline{\phi}^{(6)}_{\frak d\frak d_0}
$$
holds on $U^{(6)}_{\frak d\frak d_0}$. (Here ${\Pi}^{(6)}_{\frak d} : U^{(6)}_{\frak d} \to U^{(6)}(D')$ is the map sending an element to its equivalence class.) This is a consequence of Lemma \ref{lem418} (3) and \eqref{Mcformula-}.
\begin{rem}\label{thankmac}
The authors thank D. McDuff, who pointed out that the inclusion (\ref{Mcformula-}) is necessary to show that such $U^{(6)}_{D'\frak d_0}$ exists, during our discussion at the Google group `Kuranishi'.
\end{rem}
\par
We now come to the last step, which is to achieve Hausdorff-ness. We use a trick similar to the one in the last part of the proof of Proposition \ref{purecover} to modify $U^{(6)}_{\frak d}$ etc. as follows. Note that $U^{(6)}(D)$ can also be written as
$$
U^{(6)}(D) = (U^{(6)}(D') \cup U^{(6)}_{\frak d_0})/\sim,
$$
where $x \sim y$ if and only if one of the following holds.
\begin{enumerate}
\item $x = y$, or
\item $x \in U^{(6)}(D')$, $y \in U^{(6)}_{D'\frak d_0} \subset U^{(6)}_{\frak d_0}$, $x = \underline{\phi}^{(6)}_{D'\frak d_0}(y)$, or
\item $y \in U^{(6)}(D')$, $x \in U^{(6)}_{D'\frak d_0} \subset U^{(6)}_{\frak d_0}$, $y = \underline{\phi}^{(6)}_{D'\frak d_0}(x)$.
\end{enumerate}
\par\medskip
We also note that $U^{(6)}(D')$ is already Hausdorff (with respect to the quotient topology) by the induction hypothesis.
\begin{rem}
In fact the obvious map $U^{(6)}(D') \to U(D')$ is injective and continuous. (It may not be a topological embedding, however. See Remark \ref{rem520}.)
\end{rem}
Now the rest of the construction is similar to the one in Lemmas \ref{lemma49} - \ref{XcapU}.
We take a relatively compact subset $U^{(7)}(D')$ of $U^{(6)}(D')$ such that
\begin{equation}\label{formula440}
U^{(7)}(D') \supset X(D')
\end{equation}
and a relatively compact subset $U^{(7)}_{\frak d_0}$ of $U^{(6)}_{\frak d_0}$ such that
\begin{equation}\label{formula441}
U^{(7)}_{\frak d_0} \supset (\psi^{(6)}_{\frak d_0})^{-1}(\mathcal K_{\frak d_0}).
\end{equation}
We take $W_{D'\frak d_0} \subset U^{(6)}_{D'\frak d_0}$ such that
$$
W_{D'\frak d_0} \supset (s_{\frak d_0}^{(6)})^{-1}(0) \cap \overline{(U^{(7)}_{\frak d_0} \cap (\underline{\phi}^{(6)}_{D'\frak d_0})^{-1}(U^{(7)}(D')))}.
$$
The existence of such $W_{D'\frak d_0}$ can be proved in the same way as Lemma \ref{lemma49}. In the same way as Lemma \ref{lem410}, we can find $U^{(8)}_{\frak d_0}$ such that together with $U^{(8)}(D') = U^{(7)}(D')$ it satisfies
\begin{equation}
\overline U^{(8)}_{\frak d_0} \cap \overline{(\underline{\phi}^{(6)}_{D'\frak d_0})^{-1}(U^{(8)}(D'))} \subset W_{D'\frak d_0}.
\end{equation}
Moreover (\ref{formula440}), (\ref{formula441}) hold with $*^{(7)}$ replaced by $*^{(8)}$.
\par
We put $C_1 = \overline U^{(8)}_{\frak d_0}$, $C_2 = \overline U^{(8)}(D')$. We put $C_{21} = (\underline{\phi}^{(6)}_{D'\frak d_0})^{-1}(C_2)$. We then define $C = (C_1 \cup C_2)/\sim$, where $\sim$ is defined by using the restriction of $\underline{\phi}^{(6)}_{D'\frak d_0}$ to $C_{21}$, as before. (We equip it with the quotient topology.) We can prove that $C_{21}$ is compact in the same way as Lemma \ref{C21compact}. Therefore $C$ is Hausdorff by Proposition \ref{metrizable}.
\par
Let $\Pi_i : C_i \to C$ be the obvious map. We put
$$
U(D) = \Pi_1(U^{(8)}_{\frak d_0}) \cup \Pi_2(U^{(8)}(D')) \subset C
$$
and equip it with the topology induced from that of $C$. We define
$$
U_{\frak d_0} = U^{(8)}_{\frak d_0}, \qquad U_{\frak d} = \Pi_{\frak d}^{-1}(U^{(8)}(D')) \subset U^{(6)}_{\frak d}.
$$
They are orbifolds.
We obtain bundles, sections, maps, and coordinate changes on them by restriction in the obvious way. The proof of Proposition \ref{existmixed} is complete.
\end{proof}
\begin{lem}\label{linearcond}
We may choose $U(D)$ so that the following holds in addition. Let $\frak d_k > \frak d_0$ for $k = 1,\dots,K$. If
$$
\bigcap_{k=1}^K {\Pi}_{\frak d_k}(U_{\frak d_k}) \cap {\Pi}_{\frak d_0}(U_{\frak d_0}) \ne \emptyset
$$
then
$$
\bigcap_{k=1}^K {\Pi}_{\frak d_k}(U_{\frak d_k}\cap s_{\frak d_k}^{-1}(0)) \cap {\Pi}_{\frak d_0}(U_{\frak d_0}\cap s_{\frak d_0}^{-1}(0)) \ne \emptyset.
$$
\end{lem}
\begin{proof}
We will modify $U(D)$ so that it satisfies this additional condition by induction on $\#D$.
\par
The inductive step is as follows. We take $\frak d_0 \in D$ that is minimal in $D$. We put $D' = D \setminus \{\frak d_0\}$.
\par
We modify $U_{\frak d_0}$ so that the conclusion of the lemma holds by induction on $K$. We assume the conclusion of the lemma holds for $K \le K_0-1$. We consider the case $K = K_0$. Let
$$
\frak C = \{\{\frak d_1,\dots,\frak d_{K_0}\} \mid \text{(\ref{sempty}) holds and the $\frak d_i$ are all different}\},
$$
where
\begin{equation}\label{sempty}
\bigcap_{k=1}^{{K_0}} {\Pi}_{\frak d_k}(U_{\frak d_k}\cap s_{\frak d_k}^{-1}(0)) \cap {\Pi}_{\frak d_0}(U_{\frak d_0}\cap s_{\frak d_0}^{-1}(0)) =\emptyset.
\end{equation}
We shrink $U_{\frak d}$ a bit so that we may assume
\begin{equation}\label{semptycl}
\bigcap_{k=1}^{{K_0}} \overline{{\Pi}_{\frak d_k}(U_{\frak d_k}\cap s_{\frak d_k}^{-1}(0))} \cap \overline{{\Pi}_{\frak d_0}(U_{\frak d_0}\cap s_{\frak d_0}^{-1}(0))} =\emptyset
\end{equation}
for $\{\frak d_1,\dots,\frak d_{K_0}\} \in \frak C$.
\par
We replace $U_{\frak d_0}$ by
\begin{equation}\label{Ud_0prime}
U'_{\frak d_0} = U_{\frak d_0} \setminus \bigcup_{\{\frak d_1,\dots,\frak d_{K_0}\} \in \frak C} \bigcap_{k=1}^{K_0}\overline U_{\frak d_k\frak d_0}.
\end{equation}
We will prove that $U_{\frak d}$ ($\frak d > \frak d_0$) together with $U'_{\frak d_0}$ satisfy the required property for $K \le K_0$.
\par
We first consider the case $K\le K_0 -1$. Suppose
$$
\bigcap_{k=1}^K {\Pi}_{\frak d_k}(U_{\frak d_k}) \cap {\Pi}_{\frak d_0}(U'_{\frak d_0}) \ne \emptyset.
$$
Then
$$
\bigcap_{k=1}^K {\Pi}_{\frak d_k}(U_{\frak d_k}) \cap {\Pi}_{\frak d_0}(U_{\frak d_0}) \ne \emptyset.
$$
Then by the induction hypothesis we have
$$
\bigcap_{k=1}^K {\Pi}_{\frak d_k}(U_{\frak d_k}\cap s_{\frak d_k}^{-1}(0)) \cap {\Pi}_{\frak d_0}(U_{\frak d_0}\cap s_{\frak d_0}^{-1}(0)) \ne \emptyset.
$$
We note that
\begin{equation}\label{UprimeandUons-0}
U'_{\frak d_0} \cap s^{-1}_{\frak d_0}(0) =U_{\frak d_0} \cap s^{-1}_{\frak d_0}(0)
\end{equation}
by (\ref{semptycl}), (\ref{Ud_0prime}). Therefore
$$
\bigcap_{k=1}^K {\Pi}_{\frak d_k}(U_{\frak d_k}\cap s_{\frak d_k}^{-1}(0)) \cap {\Pi}_{\frak d_0}(U'_{\frak d_0}\cap s_{\frak d_0}^{-1}(0)) \ne \emptyset
$$
as required.
\par
We next consider the case $K=K_0$. Suppose
$$
\bigcap_{k=1}^{K_0} {\Pi}_{\frak d_k}(U_{\frak d_k}\cap s_{\frak d_k}^{-1}(0)) \cap {\Pi}_{\frak d_0}(U'_{\frak d_0}\cap s_{\frak d_0}^{-1}(0)) = \emptyset.
$$
Then by (\ref{UprimeandUons-0}) we have
$$
\bigcap_{k=1}^{K_0} {\Pi}_{\frak d_k}(U_{\frak d_k}\cap s_{\frak d_k}^{-1}(0)) \cap {\Pi}_{\frak d_0}(U_{\frak d_0}\cap s_{\frak d_0}^{-1}(0)) = \emptyset.
$$
Namely $\{ \frak d_1,\dots,\frak d_{K_0}\} \in \frak C$. Therefore
$$
\bigcap_{k=1}^{K_0} {\Pi}_{\frak d_k}(U_{\frak d_k}) \cap {\Pi}_{\frak d_0}(U'_{\frak d_0}) = \emptyset
$$
by (\ref{Ud_0prime}). The proof of the inductive step is complete.
\end{proof}
We are now in a position to complete the proof of Theorem \ref{goodcoordinateexists}. We apply Proposition \ref{existmixed} to obtain a mixed orbifold neighborhood of $X(\frak D) = X$. We put $\frak P = \frak D \subset \Z_{> 0}$. The order is $\le$.
For $\frak d \in \frak D = \frak P$, we have $U_{\frak d}$, $E_{\frak d}$, $s_{\frak d}$, $\psi_{\frak d}$ by Definition \ref{mixednbd} (2)(3). Let us check Definition \ref{goodcoordinatesystem} (1)-(9).
\par
Definition \ref{goodcoordinatesystem} (1)-(4) follow from Definition \ref{mixednbd} (2)(3). Definition \ref{goodcoordinatesystem} (5)(6) follow from Definition \ref{mixednbd} (4)-(8). Definition \ref{goodcoordinatesystem} (7) follows from Definition \ref{mixednbd} (9). Definition \ref{goodcoordinatesystem} (8) is obvious since $\frak P \subset \Z_{> 0}$. Condition \ref{Joyce} in Definition \ref{goodcoordinatesystem} (9) follows from Definition \ref{mixednbd} (8)(9). Conditions \ref{plusalpha} and \ref{plusalpha2} in Definition \ref{goodcoordinatesystem} (9) follow from Lemma \ref{linearcond} and (\ref{Dddd}). Condition \ref{proper} follows from Definition \ref{mixednbd} (8), in particular the Hausdorff-ness of $U(X)$.
\par
The proof of Theorem \ref{goodcoordinateexists} is now complete.
\end{proof}
\section{Appendix: a lemma on general topology}\label{gentoplem}
In this appendix we prove Proposition \ref{metrizable}. We assume Assumption \ref{Kassumption}. We first prove the following.
\begin{lem}\label{hausdorfflema}
$K(\frak P)$ is Hausdorff.
\end{lem}
\begin{rem}
Note that $\{(x,y) \in (\coprod_{\frak p} K_{\frak p}) \times (\coprod_{\frak p} K_{\frak p}) \mid x\sim y\}$ is a closed subset. However this does not immediately imply that $K(\frak P)$ is Hausdorff. (This is because $\Pi_{\frak p}$ is not an open mapping.)
\end{rem}
\begin{proof}
The proof is by induction on $\#\frak P$. Denote by $\pi: \coprod_{\frak p} K_{\frak p} \to K(\frak P)$ the projection.
\par
The case $\#\frak P = 1$ is trivial. Suppose $\frak P = \{\frak p,\frak q\}$ and $\frak p>\frak q$. We remark that $\Pi_{\frak q} : K_{\frak q} \to K(\frak P)$ and $\Pi_{\frak p} : K_{\frak p} \to K(\frak P)$ are both closed mappings.
\par
Let $p, q \in K(\frak P)$ with $p \ne q$.
We need to find open neighborhoods $A_p, \, A_q \subset K(\frak P)$ such that $A_p \cap A_q =\emptyset$. There are 4 cases to consider. We put $q = [x]$, $p =[y]$.
\begin{enumerate}
\item $x, \, y\in K_{\frak q} \setminus K_{\frak p\frak q}$ or $x, \, y\in K_{\frak p} \setminus \underline{\phi}_{\frak p\frak q}(K_{\frak p\frak q})$.
\item $x \in K_{\frak q}\setminus K_{\frak p\frak q}$, $y\in K_{\frak p} \setminus \underline{\phi}_{\frak p\frak q}(K_{\frak p\frak q})$. Or the same with $x$ and $y$ exchanged.
\item $x \in K_{\frak q} \setminus K_{\frak p\frak q}$, $y \in K_{\frak p\frak q}$. There are 3 similar cases where $x$, $y$ are exchanged and/or $\frak p$, $\frak q$ are exchanged.
\item $x, \, y\in K_{\frak p\frak q}$.
\end{enumerate}
Case (1). Suppose $x, \, y\in K_{\frak q} \setminus K_{\frak p\frak q}$. Choose disjoint open subsets $A'_x, A'_y$ of $K_{\frak q}$ such that $x \in A'_x, y \in A'_y$. Then $A_x = \Pi_{\frak q}(A'_x \setminus K_{\frak p\frak q})$ and $A_y = \Pi_{\frak q}(A'_y \setminus K_{\frak p\frak q})$ have the required properties.
\par
\noindent
Case (2). $A_x = \Pi_{\frak q}(K_{\frak q} \setminus K_{\frak p\frak q})$ and $A_y = \Pi_{\frak p}(K_{\frak p} \setminus \underline{\phi}_{\frak p\frak q}(K_{\frak p\frak q}))$ have the required properties.
\par
\noindent
Case (3). Note that $K_{\frak q}$ is normal since it is Hausdorff and compact. Therefore there exist disjoint open subsets $A'_x, A'_y$ of $K_{\frak q}$ such that $x \in A'_x, \,\, K_{\frak p\frak q} \subset A'_y$. Let $A''_y$ be an open subset of $K_{\frak p}$ containing $\underline{\phi}_{\frak p\frak q}(K_{\frak p\frak q})$. Then $A_x = \Pi_{\frak q}(A'_x)$, $A_y = \Pi_{\frak q}(A'_y) \cup \Pi_{\frak p}(A''_y)$ have the required properties.
\par
\noindent
Case (4). We take disjoint open subsets $A'_x$ and $A'_y$ of $K_{\frak q}$ such that $x \in A'_x$ and $y \in A'_y$. Since $K_{\frak q}$ is normal we may assume $\overline A'_x \cap \overline A'_y = \emptyset$.
\par
We take open subsets $A''_x$ and $A''_y$ of $K_{\frak p}$ such that $A''_x \cap \underline{\phi}_{\frak p\frak q}(K_{\frak p\frak q}) = \underline{\phi}_{\frak p\frak q}(A'_x \cap K_{\frak p\frak q})$ and $A''_y \cap \underline{\phi}_{\frak p\frak q}(K_{\frak p\frak q}) = \underline{\phi}_{\frak p\frak q}(A'_y \cap K_{\frak p\frak q})$. Such $A''_x$ exists since $\underline{\phi}_{\frak p\frak q}(A'_x \cap K_{\frak p\frak q})$ is an open subset of $\underline{\phi}_{\frak p\frak q}(K_{\frak p\frak q})$ with respect to the subspace topology. However $A''_x \cap A''_y$ may be nonempty.
\par
Since $K_{\frak p}$ is normal and $\overline A'_x \cap \overline A'_y = \emptyset$, we can find disjoint open subsets $A'''_x$ and $A'''_y$ of $K_{\frak p}$ such that $\underline{\phi}_{\frak p\frak q}(\overline A'_x \cap K_{\frak p\frak q}) \subset A'''_x$, $\underline{\phi}_{\frak p\frak q}(\overline A'_y \cap K_{\frak p\frak q}) \subset A'''_y$.
\par
Now $A_x = \Pi_{\frak q}(A'_x) \cup \Pi_{\frak p}(A''_x\cap A'''_x)$, $A_y = \Pi_{\frak q}(A'_y) \cup \Pi_{\frak p}(A''_y\cap A'''_y)$ have the required properties.
\par
This proves the lemma for the case $\# \frak P = 2$.
\par\medskip
Now suppose that the lemma holds for the case $\#\frak P < n$ ($n \in \N$), and let us prove the case $\#\frak P = n$. Let $\frak p_0 \in \frak P$ be an element which is minimal with respect to the partial order. We put $\frak P' = \frak P \setminus \{\frak p_0\}$. We obtain $K(\frak P')$. By the induction hypothesis, it is Hausdorff. We put
\begin{equation}\label{closedcover}
K_{\frak P'\frak p_0} = \bigcup_{\frak p \in \frak P'} K_{\frak p\frak p_0} \subset K_{\frak p_0}.
\end{equation}
Let $\Pi'_{\frak p} : K_{\frak p} \to K(\frak P')$ ($\frak p \in \frak P'$) be the map which sends an element to its equivalence class. For $x \in K_{\frak p\frak p_0}$ we put
\begin{equation}\label{kiregirevarsho}
\underline\phi_{\frak P'\frak p_0}(x) = \Pi'_{\frak p}(\underline\phi_{\frak p\frak p_0}(x)).
\end{equation}
It is easy to see that (\ref{kiregirevarsho}) induces a map
\begin{equation}
\underline\phi_{\frak P'\frak p_0} : K_{\frak P'\frak p_0} \to K(\frak P').
\end{equation}
\begin{sublem}
$\underline\phi_{\frak P'\frak p_0}$ is continuous.
\end{sublem}
\begin{proof}
The restriction of $\underline\phi_{\frak P'\frak p_0}$ to each $K_{\frak p\frak p_0}$ is continuous by the definition of the quotient topology. Moreover (\ref{closedcover}) is a covering by closed sets. The sublemma follows.
\end{proof}
Since $K_{\frak P'\frak p_0}$ is compact and $K(\frak P')$ is Hausdorff by the induction hypothesis, it follows that $\underline\phi_{\frak P'\frak p_0}$ is a topological embedding. Thus $K_{\frak p_0}$, $K(\frak P')$, $K_{\frak P'\frak p_0}$, $\underline\phi_{\frak P'\frak p_0}$ satisfy the assumption of Lemma \ref{hausdorfflema}, where $\frak P = \{\frak P',\frak p_0\}$ (with $\frak p_0 < \frak P'$). We then obtain a Hausdorff space, which we write $K'(\frak P)$. The proof of Lemma \ref{hausdorfflema} is completed by the next sublemma.
\end{proof}
\begin{sublem}
$K'(\frak P)$ is homeomorphic to $K(\frak P)$.
\end{sublem}
\begin{proof}
We have obvious projection maps
\begin{equation}\label{quotient1}
\pi'' : K_{\frak p_0} \sqcup K(\frak P') \to K'(\frak P),
\end{equation}
\begin{equation}\label{quotient2}
\pi' : \coprod_{\frak p \in \frak P'} K_{\frak p} \to K(\frak P'),
\end{equation}
\begin{equation}\label{quotient3}
\pi : \coprod_{\frak p \in \frak P} K_{\frak p} \to K(\frak P).
\end{equation}
The topologies of the right hand sides are the quotient topologies with respect to these maps. Composing (\ref{quotient1}) and (\ref{quotient2}) we obtain
\begin{equation}
\pi'' \circ (\text{id} \sqcup \pi' ) : \coprod_{\frak p \in \frak P} K_{\frak p} \to K'(\frak P).
\end{equation}
The topology of $K'(\frak P)$ is the quotient topology of this map.
\par
Using the fact that $\sim$ is an equivalence relation, it is easy to see the following.
If $x,y \in \coprod_{\frak p \in \frak P} K_{\frak p}$, then $\pi(x) = \pi(y)$ if and only if $\pi'' \circ (\text{id} \sqcup \pi' )(x) = \pi'' \circ (\text{id} \sqcup \pi' )(y)$. This implies the sublemma.
\end{proof}
We now recall that a family $\{U_i \mid i \in I\}$ of subsets of a topological space $X$, each containing $x \in X$, is said to be a \emph{neighborhood basis} of $x$ if
\begin{enumerate}
\item each $U_i$ contains an open neighborhood of $x$,
\item for each open set $U$ containing $x$ there exists $i$ such that $U_i \subset U$.
\end{enumerate}
A family of open subsets $\{U_i \mid i \in I\}$ of a topological space $X$ is said to be a basis of the open sets if for each $x$ the set $\{U_i \mid x \in U_i\}$ is a neighborhood basis of $x$. A topological space is said to satisfy the second axiom of countability if there exists a countable basis of open subsets $\{U_i \mid i \in I\}$. We next prove the following.
\begin{lem}\label{lem2kasan}
If in addition each $K_{\frak p}$ satisfies the second axiom of countability, then $K(\frak P)$ also satisfies the second axiom of countability.
\end{lem}
\begin{proof}
For each $\frak p$, we take a countable set $\frak U_{\frak p} =\{U_{\frak p,i} \subset K_{\frak p} \mid i \in I_{\frak p}\}$ which is a basis of the open sets of $K_{\frak p}$. We may assume $\emptyset \in \frak U_{\frak p}$ and each $U_{\frak p,i}$ is open.
\par
For each $\vec i = (i_{\frak p})_{\frak p \in \frak P}$ ($i_{\frak p} \in I_{\frak p}$) we define $U(\vec i)$ to be the interior of the set
\begin{equation}\label{U+basis1}
U^+(\vec i) := \bigcup_{\frak p \in \frak P}\Pi_{\frak p}(U_{\frak p,i_{\frak p}}).
\end{equation}
This is a countable family of open subsets of $K(\frak P)$. We will prove that this family is a basis of the open sets of $K(\frak P)$.
\par
Let $q \in K(\frak P)$. We put
\begin{equation}\label{defPxsss}
\frak P(q) = \{\frak p \in \frak P \mid q=[x], \, x \in K_{\frak p}\}.
\end{equation}
Here and hereafter we identify $K_{\frak p}$ with its image in $K(\frak P)$. Note that since $K(\frak P)$ is Hausdorff and $K_{\frak p}$ is compact, the natural inclusion map $K_{\frak p} \to \coprod_{\frak p \in\frak P} K_{\frak p}$ induces a topological embedding $K_{\frak p} \to K(\frak P)$.
\par
For $\frak p \in \frak P(q)$, we have $x_{\frak p} \in K_{\frak p}$ with $[x_\frak p] = q$. We put
$$
I_{\frak p}(q) = \{i \in I_{\frak p} \mid q = [x] \text{ for some } x \in U_{\frak p,i}\};
$$
the sets $U_{\frak p,i}$ with $i \in I_{\frak p}(q)$ form a countable neighborhood basis of $x_{\frak p}$ in $K_{\frak p}$. For each $\vec i = (i_{\frak p}) \in \prod_{\frak p\in \frak P(q)} I_{\frak p}(q)$, we set
\begin{equation}\label{U+basis12}
U^+(\vec i) =\bigcup_{\frak p \in \frak P(q)} \Pi_{\frak p}(U_{\frak p,i_{\frak p}}) \subset K(\frak P).
\end{equation}
We claim that the collection $\{U^+(\vec i) \mid \vec i \in \prod_{\frak p\in \frak P(q)} I_{\frak p}(q)\}$ is a neighborhood basis of $q$ in $K(\frak P)$ for any $q$. The claim follows from Sublemmas \ref{9sublem1} and \ref{9sublem2}.
\par
\begin{sublem}\label{9sublem1}
The subset $U^+(\vec i)$ is a neighborhood of $q$ in $K(\frak P)$.
\end{sublem}
\begin{proof}
For $\frak p \in \frak P(q)$ the set $K_{\frak p} \setminus U_{\frak p,i_{\frak p}}$ is a closed subset of $K_{\frak p}$ and so is compact. Therefore $\Pi_{\frak p}(K_{\frak p} \setminus U_{\frak p,i_{\frak p}})$ is compact and so is closed.
\par
If $\frak p \notin \frak P(q)$ then we consider $\Pi_{\frak p}(K_{\frak p})$, which is closed.
\par
Now we put
$$
K = \bigcup_{\frak p \in \frak P(q)}\Pi_{\frak p}(K_{\frak p} \setminus U_{\frak p,i_{\frak p}}) \cup \bigcup_{\frak p \notin \frak P(q)}\Pi_{\frak p}(K_{\frak p}).
$$
This is a finite union of closed sets and so is closed. It is easy to see that
$$
q \in K(\frak P) \setminus K \subset U^+(\vec i).
$$
\end{proof}
\begin{sublem} \label{9sublem2}
The collection $\{U^+(\vec i)\}$ satisfies property (2) of the definition of a neighborhood basis above.
\end{sublem} \begin{proof} We remark that the map $K_{\frak p} \to K(\frak P)$ is a topological embedding. Therefore $U\cap K_{\frak p}$ is an open set of $K_{\frak p}$. Hence for each $\frak p \in \frak P(q)$, the set $U\cap K_{\frak p}$ is a neighborhood of $x_{\frak p}$ in $K_{\frak p}$. By the definition of a neighborhood basis in $K_\frak p$, there exists $i_{\frak p}$ such that $U_{\frak p,i_{\frak p}} \subset U\cap K_{\frak p}$. We put $\vec i = (i_{\frak p})$. Then $U^+(\vec i) \subset U$ as required. \end{proof} We remark that $U^+(\vec i)$ in (\ref{U+basis12}) is a special case of $U^+(\vec i)$ in (\ref{U+basis1}). (We take $U_{\frak p,i_{\frak p}} = \emptyset$ for $\frak p \notin \frak P(q)$.) Therefore the family $\{U(\vec i)\}$ is a countable basis of the open sets of $K(\frak P)$. The lemma is proved. \end{proof} \begin{lem}\label{loccomp} If each $K_{\frak p}$ is locally compact, then $K(\frak P)$ is locally compact. \end{lem} \begin{proof} Let $x \in K(\frak P)$. We define $\frak P(x)$ by (\ref{defPxsss}). For each $\frak p \in \frak P(x)$, we take a neighborhood basis $\{U_{\frak p,i} \mid i \in I_{\frak p}\}$ of $x$ in $K_{\frak p}$ such that the $U_{\frak p,i}$ are all compact. \par For each $\vec i = (i_{\frak p}) \in \prod_{\frak p\in \frak P(x)} I_{\frak p}$ we define $U^+(\vec i)$ by (\ref{U+basis12}). They form a neighborhood basis of $x$ in $K(\frak P)$ by Sublemmas \ref{9sublem1} and \ref{9sublem2}. Since the $U^+(\vec i)$ are all compact, the lemma follows. \end{proof} Combining Lemmas \ref{hausdorfflema}, \ref{lem2kasan}, \ref{loccomp} and a celebrated result of Urysohn (the metrization theorem) we obtain Proposition \ref{metrizable}. \section{Orbifold via coordinate system} \label{ofd} In this section we review orbifolds, embeddings of orbifolds, and vector bundles over them. Let $X$ be a paracompact Hausdorff space.
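Before giving the formal definition, it may help to keep in mind the most basic example. (The following example is standard; it is included only as an illustration and is not used in the sequel.)
\begin{exm}
Let $\Gamma = \Z_n$ act on $V = \C$ by $z \mapsto e^{2\pi\sqrt{-1}/n}z$. This action is smooth and effective, and the origin $o = 0$ is fixed by all the elements of $\Gamma$. The quotient $X = \C/\Z_n$ is a paracompact Hausdorff space, and $(V,\Gamma,{\rm id})$ is an orbifold chart of $X$ at the cone point $p = [0]$ in the sense of Definition \ref{ofddefn} below. At any point $q \ne p$ the isotropy group of a lift of $q$ is trivial, so a chart at $q$ is given by a sufficiently small disk with the trivial group.
\end{exm}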
\begin{defn}[orbifold]\label{ofddefn} \begin{enumerate} \item An orbifold chart of $X$ at $p\in X$ is a triple $(V_p,\Gamma_p,\psi_p)$ where $V_p$ is a manifold on which a finite group $\Gamma_p$ acts smoothly and effectively, $o_p \in V_p$ is a point fixed by all the elements of $\Gamma_p$, and $\psi_p$ is a homeomorphism from the quotient space $U_p = V_p/\Gamma_p$ onto a neighborhood of $p$ in $X$ such that $\psi_p(o_p) = p$. \item Let $(V_p,\Gamma_p,\psi_p)$ be as above and $q \in \psi_p(U_p)$. A coordinate change is a triple $(V_{pq},\phi_{pq},h_{pq})$ where $h_{pq} : \Gamma_q \to \Gamma_p$ is a group homomorphism, $V_{pq} \subseteq V_q$ is a $\Gamma_q$ invariant open neighborhood of $o_q$ in $V_q$ and $\phi_{pq} : V_{pq} \to V_p$ is an $h_{pq}$ equivariant smooth open embedding of manifolds. We assume that they satisfy \begin{equation}\label{91form} \psi_p \circ \underline\phi_{pq} = \psi_q \end{equation} where $\underline\phi_{pq} : V_{pq}/\Gamma_q \to V_{p}/\Gamma_p$ is induced by $\phi_{pq}$. \end{enumerate} We call $(\{ (V_p,\Gamma_p,\psi_p) \},\{(V_{pq},\phi_{pq},h_{pq})\})$ an {\it orbifold structure} on $X$. \end{defn} We can use (\ref{91form}) to show $ \underline{\phi}_{pq} \circ \underline{\phi}_{qr} = \underline{\phi}_{pr} $ on $ \underline{\phi}_{qr}^{-1}(U_{pq}) \cap U_{pr}. $ We can use this fact to show that Definition \ref{ofddefn} is equivalent to Definition \ref{ofddef1}. \begin{rem} We assumed that the action of $\Gamma_p$ is effective. We always do so in this article. To emphasize this point we sometimes say effective orbifold instead of orbifold. \par Sometimes effectivity of the $\Gamma_p$-action is not assumed in the definition of an orbifold. In such a case the definition of an (uneffective) orbifold becomes rather complicated if we use coordinate charts. See \cite[Section 4]{fooo:overZ}. The notion of morphisms between uneffective orbifolds is also harder to define if we use the language of coordinate charts. (See Example \ref{notembedding} below.)
\end{rem} \begin{defn}[embedding of orbifold]\label{emborf} Let $X,Y$ be orbifolds and $F : X \to Y$ a continuous map. $F$ is said to be an {\it embedding} of an orbifold if $F$ is a topological embedding and the following conditions are satisfied for each $q \in X$ and $p=F(q) \in Y$. \begin{enumerate} \item There exists an open subset $V^X_{pq} \subset V^X_q$ of the chart of $q$ that is $\Gamma^X_q$ invariant and contains $o_q$. \item There exists a smooth embedding $F_{q} : V^X_{pq} \to V^Y_p$ of manifolds. \item There exists a group isomorphism $h^F_{pq} : \Gamma_q^X \to \Gamma^Y_p$ such that $F_{q}$ is $h^F_{pq}$ equivariant. \item The map $h^F_{pq}$ restricts to an isomorphism $(\Gamma^X_q)_x \to (\Gamma^Y_p)_{F_{q}(x)}$ for any $x \in V^X_{pq}$. \item $F_{q}$ induces the map $F\vert_{\psi_q(V^X_{pq}/\Gamma^X_q)} : \psi_q(V^X_{pq}/\Gamma^X_q)\to \psi_p(V^Y_p/\Gamma^Y_p) \subset Y$. \end{enumerate} \end{defn} \begin{rem} \begin{enumerate} \item Note that in the above definition $F_q$ and $h^F_{pq}$ are required to {\it exist} for $F$ to be an embedding of an orbifold, but they are {\it not} a part of the data which defines an embedding of an orbifold. In other words, two embeddings between orbifolds are equal if they coincide set theoretically. (Namely they coincide as maps between sets.) \item In general, we need to be very careful to define the notion of morphisms between orbifolds. In fact a natural framework in which to define the category of orbifolds is that of 2-categories. In other words, the correct notion to define is not that of two morphisms being the same but that of their being equivalent. \item On the other hand, as long as we use only effective orbifolds and embeddings in the above sense, we do not need to use 2-categories. In fact, in such a case orbifolds and embeddings between them can be studied in a similar way to the case of manifolds and embeddings between them.
This simplifies the discussion significantly, especially when implementing the Kuranishi structure in applications. Because of this, we use only embeddings of orbifolds and no other maps between them for the purpose of the study of Kuranishi structures. \end{enumerate} \end{rem} It is easy to see that the (set theoretical) composition of embeddings of orbifolds is an embedding of an orbifold. \begin{rem}\label{notembedding} Let $\Z_2$ act on $\R^2$ by $(x,y) \mapsto (-x,-y)$. The quotient space $\R^2/\Z_2$ has a natural structure of an orbifold. We regard a one point space $\{p\}$ as a manifold, and hence an orbifold. The map which sends $p$ to the equivalence class of $(0,0)$ is a continuous map and a topological embedding $\{p\} \to \R^2/\Z_2$. But it is not an embedding of orbifolds in our sense. In fact the isotropy groups are not isomorphic. \end{rem} \begin{defn}\label{diffeoofd} Two embeddings of orbifolds are said to be the {\it same} if they coincide set theoretically. \par The identity map is an embedding of orbifolds. \par An embedding of an orbifold is said to be a {\it diffeomorphism} if it has an inverse (as a set theoretical map) and if the inverse is an embedding of an orbifold. \par Suppose we have two orbifold structures on the same space $X$. We say they are the {\it same} if the identity is a diffeomorphism between the two orbifold structures. \par An open set of an orbifold has an obvious orbifold structure. \par We say an embedding of an orbifold is an {\it open embedding} if it gives a diffeomorphism onto an open set of the target. \end{defn} \begin{rem} We remark that the definition of two orbifold structures being the same we gave above is a straightforward generalization of the well-known definition in the case of manifold structures. We need to be careful if we try to generalize this definition to the case of Kuranishi structure. \end{rem} \begin{rem}\label{localchart} Sometimes orbifolds are defined in a slightly different way, as follows.
Let $X$ be a paracompact Hausdorff space and $\bigcup_i \mathcal U_i = X$ be a locally finite cover. We consider homeomorphisms $$ \psi_i :U_i = V_i/\Gamma_i \to \mathcal U_i, $$ where $V_i$ is a smooth manifold and $\Gamma_i$ is a finite group with a smooth and effective action on $V_i$. We say that $(\mathcal U_i,\psi_i,V_i,\Gamma_i)$ defines an orbifold structure on $X$ if the maps \begin{equation}\label{psiijtrans} \psi_{ji} = \psi^{-1}_j \circ \psi_i : U_{ji} = \psi_i^{-1}(\mathcal U_j) \to U_j \end{equation} are open embeddings of orbifolds in the sense of Definition \ref{diffeoofd} for each $i,j$. \par It is easy to show that this definition is equivalent to Definition \ref{ofddefn}. \end{rem} Various notions appearing in the theory of manifolds, such as Riemannian metrics, differential forms, integration, etc., can be generalized to the case of orbifolds in a straightforward way. \begin{defn}[orbibundle]\label{orbibundledef} Let $X$ be a paracompact Hausdorff space. Suppose we are given an orbifold structure $(\{ (V_p,\Gamma_p,\psi_p) \},\{(V_{pq},\phi_{pq},h_{pq})\})$ on $X$ in the sense of Definition \ref{ofddefn}. \par A vector bundle on $X$ (we sometimes also call it an {\it orbibundle}) is an orbifold $\mathcal E$ together with a continuous map $\pi : \mathcal E \to X$ and $E_{p}, \widehat{\phi}_{pq}, \widehat{\psi}_p$ for each $p$ or $p,q$ such that the following holds. \begin{enumerate} \item $E_p \to V_p$ is a $\Gamma_p$ equivariant smooth vector bundle on a manifold $V_p$. \item $\widehat{\phi}_{pq} : E_q\vert_{V_{pq}} \to E_p$ is an $h_{pq}$ equivariant bundle map over ${\phi}_{pq}$, which is a fiberwise isomorphism.
\item $\widehat{\psi}_p : E_p/\Gamma_p \to \pi^{-1}(\mathcal U_p)$ is a diffeomorphism of orbifolds such that $$ \begin{CD} E_p/\Gamma_p @>{\widehat\psi_p}>> \pi^{-1}(\mathcal U_p) \\ @VVV @VV{\pi}V \\ U_p @>>{\psi_p}> \mathcal U_p \end{CD} $$ and $$ \begin{CD} (E_q\vert_{V_{pq}})/\Gamma_q @>{\widehat\psi_q}>> \pi^{-1}(\mathcal U_{pq}) \\ @V{\underline{\widehat{\phi}}_{pq}}VV @VVV \\ E_p/\Gamma_p @>>{\widehat\psi_p}> \pi^{-1}(\mathcal U_{p}) \end{CD} $$ commute. Here $\underline{\widehat{\phi}}_{pq}$ is induced by ${\widehat{\phi}}_{pq}$ and $\mathcal U_{pq} = \psi_q(V_{pq}/\Gamma_q)$. \item The rank of the vector bundle $E_p$ is independent of $p$.\footnote{This condition is automatic if $X$ is connected.} We call it the {\it rank} of our vector bundle $\mathcal E$. \end{enumerate} More precisely we say $((\mathcal E,\pi),\{(E_{p}, \widehat{\psi}_p)\},\{\widehat{\phi}_{pq}\})$ is a vector bundle. Sometimes we say $\mathcal E$ is a {\it vector bundle} etc. by an abuse of notation. \end{defn} \begin{exm} If $X$ is an orbifold, its tangent bundle $TX$ is defined as an orbibundle on $X$ in an obvious way. \end{exm} We can define (Whitney) sum and tensor product of orbibundles in an obvious way. \begin{defn}[Embedding of vector bundle]\label{defn:embedding} Let $((\mathcal E^X,\pi),\{(E^X_{p}, \widehat{\psi}^X_p)\},\{\widehat{\phi}^X_{pq}\})$ and $((\mathcal E^Y,\pi),\{(E^Y_{p}, \widehat{\psi}^Y_p)\},\{\widehat{\phi}^Y_{pq}\})$ be vector bundles over orbifolds $X$ and $Y$, respectively. Let $F : X \to Y$ be an embedding of an orbifold. Then an {\it embedding of a vector bundle} $\hat F : {\mathcal E}_X \to {\mathcal E}_Y$ over $F$ is an embedding of an orbifold such that the following holds in addition. \par Let $q\in X$ and $p = F(q)$. Let $V^X_{pq}$, $F_{q} : V^X_{pq} \to V^Y_p$, $h^F_{pq} : \Gamma_q^X \to \Gamma^Y_p$ be as in Definition \ref{emborf}. 
Then there exists an $h^F_{pq}$ equivariant embedding of vector bundles $$ \hat F_{pq} : E^X_q\vert_{V^X_{pq}} \to E^Y_p $$ such that the following diagram commutes. $$ \begin{CD} (E^X_q\vert_{V^X_{pq}})/\Gamma^X_q @>{\widehat\psi^X_q}>> \pi^{-1}(\mathcal U^X_{pq}) \\ @V{\underline{\hat F}_{pq}}VV @VV{\widehat{F}}V \\ E^Y_p/\Gamma_p^Y @>>{\widehat\psi^Y_p}> \pi^{-1}(\mathcal U^Y_{p}) \end{CD} $$ Here $\underline{\hat F}_{pq}$ is induced from ${\hat F}_{pq}$. \end{defn} \begin{defn} \begin{enumerate} \item An embedding of a vector bundle $\hat F : {\mathcal E}_X \to {\mathcal E}_Y$ is said to be an {\it open embedding} of a vector bundle if it is an open embedding as a map between orbifolds. \item An {\it isomorphism} between vector bundles is an embedding of a vector bundle that is a diffeomorphism between orbifolds. \item Two vector bundles on a given orbifold $X$ are said to be {\it isomorphic} if there exists an isomorphism between them which covers the identity map. \item If $\mathcal E_a$, $a=1,2$, are orbibundles over the same orbifold $X$ and $\hat F : \mathcal E_1 \to \mathcal E_2$ is an embedding of an orbibundle over the identity map, then we say $\mathcal E_1$ is a {\it subbundle} of $\mathcal E_2$. \item If $\mathcal E_1$ is a subbundle of $\mathcal E_2$, we can define the quotient bundle $\mathcal E_2/\mathcal E_1$ in an obvious way. \end{enumerate} \end{defn} It is easy to see that the (set theoretical) composition of embeddings of orbibundles is an embedding of an orbibundle. \begin{rem} If two orbifold structures on $X$ are the same, the notion of vector bundles on it is the same in the following sense. The isomorphism classes of vector bundles on one structure correspond to the isomorphism classes of vector bundles on the other structure by a canonical bijection. The proof is easy and is left to the reader. \end{rem} \begin{rem} Suppose a structure of orbifold on $X$ is given by $(\mathcal U_i,\psi_i,V_i,\Gamma_i)$ as in Remark \ref{localchart}.
Then we can define a vector bundle on it by $(E_i,\hat\psi_{ij})$ as follows. \begin{enumerate} \item $E_i \to V_i$ is a $\Gamma_i$ equivariant vector bundle. \item Let $\psi_{ji} = \psi^{-1}_j \circ \psi_i : U_{ji} = \psi_i^{-1}(\mathcal U_j) \to U_j$ be as in (\ref{psiijtrans}). Then $$ \hat\psi_{ij} : (E_i/\Gamma_i)\vert_{U_{ji}} \to E_j/\Gamma_j $$ is an open embedding of vector bundles in the above sense. \end{enumerate} It is easy to see that this definition is equivalent to Definition \ref{orbibundledef}. \end{rem} \begin{lem}[Induced bundle] Let $F : X \to Y$ be an embedding of an orbifold and $\mathcal E^Y$ be a vector bundle on $Y$. Then there exists a vector bundle $\mathcal E^X$ on $X$ with the same rank as $\mathcal E^Y$ and an embedding of an orbibundle $\hat F : \mathcal E^X \to \mathcal E^Y$ which covers $F$. \par $(\mathcal E^X,\hat F)$ is unique in the following sense. If $(\mathcal E_a^X,\hat F_a)$, $a=1,2$, are two such choices, then there exists an isomorphism of vector bundles $\hat I : \mathcal E_1^X \to \mathcal E_2^X$ which covers the identity and satisfies $ \hat F_2 \circ \hat I = \hat F_1. $ \end{lem} The proof is easy and is left to the reader. We call $\mathcal E^X$ the {\it induced bundle} and write it as $F^*\mathcal E^Y$. In the case where $X$ is an open subset of $Y$, $\mathcal E^X = F^* \mathcal E^Y$ is called the {\it restriction} of $\mathcal E^Y$ to $X$. \begin{rem} When we consider a map between orbifolds which is not an embedding, the induced bundle may not be well-defined. \end{rem} \begin{exm} If $F : X \to Y$ is an embedding of an orbifold, it induces an embedding of orbibundles $dF : TX \to TY$ between their tangent bundles in an obvious way. Therefore $TX$ is a subbundle of the pull-back bundle $F^*TY$. \end{exm} \begin{defn} If $F : X \to Y$ is an embedding of an orbifold, the normal bundle $N_XY$ is by definition the quotient bundle $F^*TY/TX$. \end{defn} \begin{rem} We can prove the tubular neighborhood theorem in an obvious way.
Namely $N_XY$ as an orbifold is diffeomorphic to an open neighborhood of $F(X)$ in $Y$. The proof is similar to the proof of the tubular neighborhood theorem for manifolds. \end{rem} Finally we mention two easy lemmas about gluing orbifolds or orbibundles by a diffeomorphism or an isomorphism. \par Let $X$, $Y$ be orbifolds and $U_{YX} \subset X$ be an open set. Suppose $F : U_{YX} \to Y$ is an open embedding of orbifolds. \par We define an equivalence relation $\sim $ on the disjoint union $X \sqcup Y$ as follows. We say $x\sim y$ if and only if \begin{enumerate} \item $x = y$, or \item $x \in U_{YX}$ and $y \in Y$ and $y = F(x)$, or \item $y \in U_{YX}$ and $x \in Y$ and $x = F(y)$. \end{enumerate} Let $X \cup_F Y$ be the set of equivalence classes equipped with the quotient topology. \begin{lem}\label{gluelemmaord} If $X \cup_F Y$ is Hausdorff, it has a structure of an orbifold such that the maps \begin{equation}\label{XYembedtounion} X \to X \cup_F Y, \qquad Y \to X\cup_F Y \end{equation} which send $x \in X$ (resp. $y\in Y$) to its equivalence class are open embeddings of orbifolds. \end{lem} The proof is easy and is left to the reader. \begin{rem} One needs to be very careful when trying to generalize Lemma \ref{gluelemmaord} to uneffective orbifolds. \end{rem} \begin{lem} In the situation of Lemma \ref{gluelemmaord} we assume in addition that we are given two orbibundles $\mathcal E^X$ over $X$ and $\mathcal E^Y$ over $Y$ with the same rank. Suppose furthermore that $F$ is covered by an open embedding $\hat F : \mathcal E^X\vert_{U_{YX}} \to \mathcal E^Y$ of an orbibundle. Here $\mathcal E^X\vert_{U_{YX}}$ is the restriction of the orbibundle $\mathcal E^X$ to $U_{YX}$. \par Then there exists a structure of orbibundle on $\mathcal E^X \cup_{\hat F} \mathcal E^Y$ together with embeddings of orbibundles $$ \mathcal E^X \to \mathcal E^X \cup_{\hat F} \mathcal E^Y, \qquad \mathcal E^Y \to \mathcal E^X \cup_{\hat F} \mathcal E^Y $$ which cover the maps in (\ref{XYembedtounion}).
\end{lem} The proof is easy and is left to the reader. \par\newpage \part{Construction of the Kuranishi structure 1: Gluing analysis in the simple case} \label{secsimple} In this part we give a detailed proof of the gluing analysis (stretching the neck) for pseudo-holomorphic curves with nodes, and also a decay estimate of exponential order with respect to the length of the neck. This analysis implies the smoothness of the Kuranishi chart at infinity. In Part \ref{secsimple} we discuss a simple case where we have two pseudo-holomorphic maps from stable and irreducible curves joined at one point. The general case will be discussed in the next part. \section{Setting} We will describe the general case in Part \ref{generalcase}. To simplify the notation and clarify the main analytic point of the proof, we treat in this section the case where we glue holomorphic maps from two stable bordered Riemann surfaces to $(X,L)$. \par Let $\Sigma_i$ ($i=1,2$) be a bordered Riemann surface with one end. We assume that there are compact subsets $K_i \subset \Sigma_i$ such that $\Sigma_i \setminus K_i$ are half infinite cylinders. For $T>0$, we put coordinates $[-5T, \infty) \times [0,1]$ and $(-\infty, 5T] \times [0,1]$ on $\Sigma_1 \setminus K_1$ and $\Sigma_2 \setminus K_2$, respectively. We identify their ends as follows. \begin{equation} \aligned \Sigma_1 &= K_1 \cup ((-5T,\infty)\times [0,1]), \\ \Sigma_2 &= ((-\infty,5T)\times [0,1]) \cup K_2. \endaligned \end{equation} Here $K_i$ are compact and $\pm \infty$ are the ends. We put \begin{equation} \Sigma_T = K_1 \cup ((-5T,5T)\times [0,1]) \cup K_2. \end{equation} We use $\tau$ for the coordinate of the factors $(-5T,\infty)$, $(-\infty,5T)$, or $(-5T,5T)$ and $t$ for the coordinate of the second factor $[0,1]$. \par Let $X$ be a symplectic manifold with a compatible (or tame) almost complex structure and $L$ be its Lagrangian submanifold.
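The identification above can be made explicit as follows. (The following remark only records what the coordinates just introduced mean; it contains no new material.)
\begin{rem}
A point of the neck region of $\Sigma_T$ is written as $(\tau,t)$ with $\tau \in (-5T,5T)$ and $t \in [0,1]$. The same pair $(\tau,t)$ also determines a point of $\Sigma_1 \setminus K_1$ (where $\tau$ ranges over $(-5T,\infty)$) and a point of $\Sigma_2 \setminus K_2$ (where $\tau$ ranges over $(-\infty,5T)$). Thus $\Sigma_T$ is obtained from $\Sigma_1$ and $\Sigma_2$ by cutting off the parts of their ends where $\tau \ge 5T$ or $\tau \le -5T$ and identifying the remaining neck coordinates. As $T \to \infty$ the neck $(-5T,5T)\times [0,1]$ becomes longer and longer, and $\Sigma_T$ degenerates to the union of $\Sigma_1$ and $\Sigma_2$ joined at the point corresponding to $\tau = \pm\infty$.
\end{rem}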
\par Let $$ u_i : (\Sigma_i,\partial \Sigma_i) \to (X,L), \qquad i=1,2 $$ be pseudo-holomorphic maps of finite energy. Then, by the removable singularity theorem, which is now standard, we have the asymptotic value \begin{equation}\label{tauinf} \lim_{\tau\to \infty} u_1(\tau,t) \in L \end{equation} and \begin{equation}\label{tauminf} \lim_{\tau\to -\infty} u_2(\tau,t) \in L. \end{equation} The limits (\ref{tauinf}) and (\ref{tauminf}) are independent of $t$. \par We assume that the limit (\ref{tauinf}) coincides with (\ref{tauminf}) and denote it by $p_0 \in L$. \par We fix coordinates of $X$ and of $L$ in a neighborhood of $p_0$. This fixes a trivialization of the tangent bundles $TX$ and $TL$ in a neighborhood of $p_0$. Hereafter we assume the following: \begin{equation} \text{\rm Diam}(u_1([-5T,\infty)\times [0,1])) \le \epsilon_1, \qquad \text{\rm Diam}(u_2((-\infty,5T]\times [0,1])) \le \epsilon_1. \end{equation} \par The maps $u_i$ determine homology classes $\beta_i = [u_i] \in H_2(X,L)$. \par We take a compact subset $K_i^{\rm obst}$ of the interior of $K_i$ and take \begin{equation}\label{Eitake} E_i \subset \Gamma(K_i^{\rm obst};u_i^*TX\otimes \Lambda^{0,1}) \end{equation} a finite dimensional linear subspace consisting of smooth sections supported in $K_i^{\rm obst}$. \par For simplicity we also fix a complex structure of the source $\Sigma_i$. The version where it can move will be discussed in Part \ref{generalcase}. We also assume that $\Sigma_i$ equipped with marked points $\vec z_i$ is stable. The process of adding marked points to stabilize it will also be discussed in Part \ref{generalcase}. Let \begin{equation}\label{lineeq} D_{u_i}\overline{\partial} : L^2_{m+1,\delta}((\Sigma_i,\partial \Sigma_i);u_i^*TX,u_i^*TL) \to L^2_{m,\delta}(\Sigma_i;u_i^{*}TX \otimes \Lambda^{01}) \end{equation} be the linearization of the Cauchy-Riemann equation. Here we define the weighted Sobolev space we use as follows.
\begin{defn}(\cite[Section 7.1.3]{fooo:book1})\footnote{In \cite{fooo:book1} the $L^p_1$ space is used instead of the $L^2_{m}$ space.} Let $L^2_{m+1,loc}((\Sigma_i,\partial \Sigma_i);u_i^*TX;u_i^*TL)$ be the set of sections $s$ of $u_i^*TX$ which are locally of $L^2_{m+1}$-class. (Namely, their differentials up to order $m+1$ are of $L^2$-class. Here $m$ is sufficiently large, say larger than $10$.) We also assume $s(z) \in u_i^*TL$ for $z \in \partial \Sigma_i$. \par The weighted Sobolev space $L^2_{m+1,\delta}((\Sigma_i,\partial \Sigma_i);u_i^*TX,u_i^*TL)$ is the set of all pairs $(s,v)$ of elements $s$ of $L^2_{m+1,loc}((\Sigma_i,\partial \Sigma_i);u_i^*TX;u_i^*TL)$ and $v \in T_{p_0}L$, (here $p_0 \in L$ is the point (\ref{tauinf}) or (\ref{tauminf})) such that \begin{equation}\label{weight1} \sum_{k=0}^{m+1} \int_{\Sigma_i \setminus K_i} e^{\delta\vert \tau \pm 5T\vert}\vert \nabla^k(s - \text{\rm Pal}(v))\vert^2 < \infty, \end{equation} where $\text{\rm Pal} : T_{p_0}X \to T_{u_i(\tau,t)}X$ is defined by the trivialization we fixed right after (\ref{tauminf}). (Here $\pm$ is $+$ for $i=1$ and $-$ for $i=2$.) The norm is defined as the sum of (\ref{weight1}), the norm of $v$ and the $L^2_{m+1}$ norm of $s$ on $K_i$. (See (\ref{normformjula}).) \par $L^2_{m,\delta}(\Sigma_i;u_i^{*}TX \otimes \Lambda^{01})$ is defined similarly, without the boundary condition and without $v$. (See (\ref{normformjula52}).) \end{defn} When we define $D_{u_i}\overline{\partial}$ we forget the $v$ component and use $s$ only. \begin{rem} The positive number $\delta$ is chosen as follows. (\ref{tauinf}) and a standard estimate imply that there exists $\delta_1 > 0$ such that \begin{equation}\label{approestu} \left\vert\frac{d}{d\tau}u_i\right\vert_{C^k}(\tau,t) < C_ke^{-\delta_1 \vert \tau\vert} \end{equation} for any $k$. We choose $\delta$ smaller than $\delta_1/10$. (\ref{approestu}) implies $$ \left\vert(D_{u_i}\overline\partial)(\text{\rm Pal}(v))\right\vert < C_ke^{-\delta_1 \vert \tau\vert/10}\Vert v\Vert.
$$ Therefore (\ref{lineeq}) is defined and bounded. \end{rem} It is a standard fact that (\ref{lineeq}) is Fredholm. \par We work under the following assumption. \begin{assump}\label{DuimodEi} \begin{equation}\label{DuimodEi0} D_{u_i}\overline{\partial} : L^2_{m+1,\delta}((\Sigma_i,\partial \Sigma_i);u_i^*TX,u_i^*TL) \to L^2_{m,\delta}(\Sigma_i;u_i^{*}TX \otimes \Lambda^{01})/E_i \end{equation} is surjective. Moreover the following (\ref{Duievsurj}) holds. Let $ (D_{u_i}\overline{\partial})^{-1}(E_i) $ be the kernel of (\ref{DuimodEi0}). We define \begin{equation}\label{Duiev} D{\rm ev}_{i,\infty} : L^2_{m+1,\delta}((\Sigma_i,\partial \Sigma_i);u_i^*TX,u_i^*TL) \to T_{p_0}L \end{equation} by $$ D{\rm ev}_{i,\infty}(s,v) = v. $$ Then \begin{equation}\label{Duievsurj} D{\rm ev}_{1,\infty} - D{\rm ev}_{2,\infty} : (D_{u_1}\overline{\partial})^{-1}(E_1) \oplus (D_{u_2}\overline{\partial})^{-1}(E_2) \to T_{p_0}L \end{equation} is surjective. \end{assump} \par Let us now state the result. Let \begin{equation}\label{uprime} u' : (\Sigma_T,\partial\Sigma_T) \to (X,L) \end{equation} be a smooth map. We consider the following condition depending on $\epsilon >0$. \begin{conds}\label{nearbyuprime} \begin{enumerate} \item $u'\vert_{K_i}$ is $\epsilon$-close to $u_i\vert_{K_i}$ in the $C^1$ sense. \item The diameter of $u'([-5T,5T]\times [0,1])$ is smaller than $\epsilon$. \end{enumerate} \end{conds} \par We take $\epsilon_2$ sufficiently small compared to the `injectivity radius' of $X$ so that the next definition makes sense.\footnote{More precisely, we assume that $$ \{(x,y) \in X \times X \mid d(x,y) < \epsilon_2\} \subset {\rm E}(\{ (x,v) \in TX \mid \vert v\vert < \epsilon\}), $$ where ${\rm E} : \{ (x,v) \in TX \mid \vert v\vert < \epsilon\} \to X \times X$ is induced by an exponential map of a certain connection on $TX$.
See (\ref{defE}).} For $u'$ satisfying Condition \ref{nearbyuprime} for $\epsilon <\epsilon_2$, the map $$ I_{u'} : E_i \to \Gamma(\Sigma_T;(u')^*TX \otimes \Lambda^{01}) $$ is the complex linear part of the parallel translation along the short geodesic between $u_i(z)$ and $u'(z)$. (Here $z \in K_i^{\text{\rm obst}}$.) We put \begin{equation} E_i(u') = I_{u'}(E_i). \end{equation} \par The equation we study is \begin{equation}\label{mainequation} \overline\partial u' \equiv 0 , \quad \mod E_1(u') \oplus E_2(u'). \end{equation} \begin{rem} In the actual construction of the Kuranishi structure, we take several $u_i$'s and take $E_i$'s for each of them. Then in place of $E_1(u') \oplus E_2(u')$ we take the sum of finitely many of them. Here we simplify the notation. There are not so many differences between the proof of Theorem \ref{gluethm1} and the proof of the corresponding result in the case where we take several such $u_i$'s and $E_i$'s. See \cite[pages 4-5]{Fu1} and Section \ref{glueing}. \end{rem} Theorem \ref{gluethm1} describes all the solutions of (\ref{mainequation}). To state this precisely we need a bit more notation. \par We consider the following condition for $u'_i : (\Sigma_i,\partial\Sigma_i) \to (X,L)$. \begin{conds}\label{uiconds} \begin{enumerate} \item $u_i'\vert_{K_i}$ is $\epsilon$-close to $u_i\vert_{K_i}$ in the $C^1$ sense. \item The diameter of $u_1'([-5T,\infty)\times [0,1])$ (resp. $u_2'((-\infty,5T]\times [0,1])$) is smaller than $\epsilon$. \end{enumerate} \end{conds} Then we define $$ I_{u'_i} : E_i \to \Gamma(\Sigma_i;(u_i')^*TX \otimes \Lambda^{01}) $$ by using the parallel transport in the same way as $I_{u'}$. (This makes sense if $u'_i$ satisfies Condition \ref{uiconds} for $\epsilon < \epsilon_2$.) We put \begin{equation} E_i(u'_i) = I_{u'_i}(E_i). \end{equation} So we can define the equation \begin{equation}\label{mainequationui} \overline\partial u_i' \equiv 0 , \quad \mod E_i(u'_i).
\end{equation} \begin{defn} The set of solutions of equation (\ref{mainequationui}) with finite energy and satisfying Condition \ref{uiconds} for $\epsilon = \epsilon_2$ is denoted by $\mathcal M^{E_i}((\Sigma_i,\vec z_i);\beta_i)_{\epsilon_2}$. Here $\beta_i$ is the homology class of $u_i$. \end{defn} \begin{rem} In the usual story of pseudo-holomorphic curves, we identify $u_i$ and $u_i'$ if there exists a biholomorphic map $v : (\Sigma_i,\vec z_i)\to (\Sigma_i,\vec z_i)$ such that $u'_i = u_i\circ v$. In our situation where $\Sigma_i$ has no sphere or disk bubble and has nontrivial boundary with at least one boundary marked point (that is, $\tau =\pm\infty$), such $v$ is necessarily the identity map. Namely $\Sigma_i$ has no nontrivial automorphism. \end{rem} The surjectivity of (\ref{DuimodEi0}), (\ref{Duievsurj}) and the implicit function theorem imply that if $\epsilon_2$ is small then there exist a finite dimensional vector space $\tilde V_i$ and a neighborhood $V_i$ of $0$ in $\tilde V_i$ such that $$ \mathcal M^{E_i}((\Sigma_i,\vec z_i);\beta_i)_{\epsilon_2} \cong V_i. $$ Since we assume that $\Sigma_i$ is nonsingular, the group $\text{\rm Aut}((\Sigma_i,\vec z_i),u_i)$ is trivial. (In the case when there is a sphere bubble, the automorphism group can be nontrivial. That case will be discussed later.) \par For any $\rho_i \in V_i$ we denote by $u_i^{\rho_i} : (\Sigma_i,\partial\Sigma_i) \to (X,L)$ the corresponding solution of (\ref{mainequationui}). \par We have an evaluation map $$ \text{\rm ev}_{i,\infty} : \mathcal M^{E_i}((\Sigma_i,\vec z_i);\beta_i)_{\epsilon_2} \to L $$ that is smooth. Namely $$ \text{\rm ev}_{i,\infty} (u'_i) = \lim_{\tau\to \pm\infty}u'_i(\tau,t).
$$ (Here $\pm = +$ for $i=1$ and $-$ for $i=2$.)\footnote{This is a consequence of the fact that $u'_i$ is pseudo-holomorphic outside a compact set and has finite energy.} We consider the fiber product: \begin{equation}\label{fpmoduli} \mathcal M^{E_1}((\Sigma_1,\vec z_1);\beta_1)_{\epsilon_2} \times_L \mathcal M^{E_2}((\Sigma_2,\vec z_2);\beta_2)_{\epsilon_2}. \end{equation} The surjectivity of (\ref{Duievsurj}) implies that this fiber product is transversal, and so is $$ V_1 \times_L V_2. $$ An element of $V_1 \times_L V_2$ is written as $\rho = (\rho_1,\rho_2)$. \begin{defn} Let $\beta =\beta_1 + \beta_2$. We denote by $\mathcal M^{E_1+E_2}((\Sigma_T,\vec z);\beta)_{\epsilon}$ the set of solutions of (\ref{mainequation}) satisfying Condition \ref{nearbyuprime} for this $\epsilon$. \end{defn} \begin{thm}\label{gluethm1} For each sufficiently small $\epsilon_3$ and sufficiently large $T$, there exist $\epsilon_1, \epsilon_2$ and a map $$ \text{\rm Glu}_T : \mathcal M^{E_1}((\Sigma_1,\vec z_1);\beta_1)_{\epsilon_2} \times_L \mathcal M^{E_2}((\Sigma_2,\vec z_2);\beta_2)_{\epsilon_2} \to \mathcal M^{E_1+E_2}((\Sigma_T,\vec z);\beta)_{\epsilon_1} $$ that is a diffeomorphism onto its image. The image contains $\mathcal M^{E_1+E_2}((\Sigma_T,\vec z);\beta)_{\epsilon_3}$. \end{thm} The exponential decay estimate for this map is given in Section \ref{subsecdecayT} (Theorem \ref{exdecayT}). \par\bigskip \section{Proof of Theorem \ref{gluethm1} : 1 - Bump function and weighted Sobolev norm} \label{subsec12} The proof of Theorem \ref{gluethm1} was given in \cite[Section 7.1.3]{fooo:book1}. The exponential decay estimate of the solution was proved in \cite[Section A1.4]{fooo:book1} together with a slightly modified version of the proof of Theorem \ref{gluethm1}. Here we follow the proof of \cite[Section A1.4]{fooo:book1} and give more details.
As mentioned there, the origin of the proof is Donaldson's paper \cite{Don86I}, and its Bott-Morse version is in \cite{Fuk96II}. \par We first introduce certain bump functions. Let $\mathcal A_T \subset \Sigma_T$ and $\mathcal B_T \subset \Sigma_T$ be the domains defined by $$ \mathcal A_T = [-T-1,-T+1] \times [0,1], \qquad \mathcal B_T = [T-1,T+1] \times [0,1]. $$ We may regard $\mathcal A_T,\mathcal B_T \subset \Sigma_i$. The third domain is $$ \mathcal X = [-1,1] \times [0,1] \subset \Sigma_T. $$ We may also regard $\mathcal X \subset \Sigma_i$. \par Let $\chi_{\mathcal A}^{\leftarrow}$, $\chi_{\mathcal A}^{\rightarrow}$ be smooth functions on $[-5T,5T]\times [0,1]$ such that \begin{equation} \chi_{\mathcal A}^{\leftarrow}(\tau,t) = \begin{cases} 1 & \tau < -T-1 \\ 0 & \tau > -T+1. \end{cases} \end{equation} $$ \chi_{\mathcal A}^{\rightarrow} = 1 - \chi_{\mathcal A}^{\leftarrow}. $$ \par We define \begin{equation} \chi_{\mathcal B}^{\leftarrow}(\tau,t) = \begin{cases} 1 & \tau < T-1 \\ 0 & \tau > T+1. \end{cases} \end{equation} $$ \chi_{\mathcal B}^{\rightarrow} = 1 - \chi_{\mathcal B}^{\leftarrow}. $$ \par We define \begin{equation} \chi_{\mathcal X}^{\leftarrow}(\tau,t) = \begin{cases} 1 & \tau < -1 \\ 0 & \tau > 1. \end{cases} \end{equation} $$ \chi_{\mathcal X}^{\rightarrow} = 1 - \chi_{\mathcal X}^{\leftarrow}. $$ We extend these functions to $\Sigma_T$ and $\Sigma_i$ ($i=1,2$) so that they are locally constant outside $[-5T,5T]\times [0,1]$. We denote the extensions by the same symbols. \par\medskip We next introduce weighted Sobolev norms and their local versions for sections on $\Sigma_T$ or $\Sigma_i$ as follows. \par We define $e_{i,\delta} : \Sigma_i \to [1,\infty)$ of $C^{\infty}$ class as follows.
\begin{equation}\label{e1delta} e_{1,\delta}(\tau,t) \begin{cases} =e^{\delta\vert \tau + 5T\vert} &\text{if $\tau > 1 - 5T$ }\\ =1 &\text{on $K_1$} \\ \in [1,10] &\text{if $\tau < 1 - 5T$} \end{cases} \end{equation} \begin{equation}\label{e2delta} e_{2,\delta}(\tau,t) \begin{cases} =e^{\delta\vert \tau - 5T\vert} &\text{if $\tau < 5T-1$ }\\ =1 &\text{on $K_2$} \\ \in [1,10] &\text{if $\tau > 5T-1$} \end{cases} \end{equation} We also define $e_{T,\delta} : \Sigma_T \to [1,\infty)$ as follows: \begin{equation}\label{eTdelta} e_{T,\delta}(\tau,t) \begin{cases} =e^{\delta\vert \tau - 5T\vert} &\text{if $1<\tau < 5T-1$ }\\ = e^{\delta\vert \tau + 5T\vert} &\text{if $-1>\tau > 1-5T$ }\\ =1 &\text{on $K_1\cup K_2$} \\ \in [1,10] &\text{if $\vert\tau - 5T\vert < 1$ or $\vert\tau + 5T\vert < 1$} \\ \in [e^{5T\delta}/10,e^{5T\delta}] &\text{if $\vert\tau\vert < 1$}. \end{cases} \end{equation} The weighted Sobolev norm we use for $L^2_{m,\delta}(\Sigma_i;u_i^*TX\otimes \Lambda^{01})$ is \begin{equation} \Vert s\Vert^2_{L^2_{m,\delta}} = \sum_{k=0}^m \int_{\Sigma_i} e_{i,\delta} \vert \nabla^k s\vert^2 \text{\rm vol}_{\Sigma_i}. \end{equation} For $(s,v) \in L^2_{m+1,\delta}((\Sigma_i,\partial \Sigma_i);u_i^*TX,u_i^*TL)$ we define \begin{equation}\label{normformjula} \aligned \Vert (s,v)\Vert^2_{L^2_{m+1,\delta}} = &\sum_{k=0}^{m+1} \int_{K_i} \vert \nabla^k s\vert^2 \text{\rm vol}_{\Sigma_i}\\ &+ \sum_{k=0}^{m+1} \int_{\Sigma_i \setminus K_i} e_{i,\delta}\vert \nabla^k(s - \text{\rm Pal}(v))\vert^2 \text{\rm vol}_{\Sigma_i} + \Vert v\Vert^2. \endaligned \end{equation} We next define a weighted Sobolev norm for the sections on $\Sigma_T$. Let $$ s \in L^2_{m+1}((\Sigma_T,\partial \Sigma_T);u^*TX,u^*TL). $$ Since we take $m$ large, $s$ is continuous. So $s(0,1/2) \in T_{u(0,1/2)}X$ is well defined. There is a canonical trivialization of $TX$ in a neighborhood of $p_0$ that we fixed right after (\ref{tauminf}). We use it to define $\text{\rm Pal}$ below.
We put \begin{equation}\label{normformjula5} \aligned \Vert s\Vert^2_{L^2_{m+1,\delta}} = &\sum_{k=0}^{m+1} \int_{K_1} \vert \nabla^k s\vert^2 \text{\rm vol}_{\Sigma_1} + \sum_{k=0}^{m+1} \int_{K_2} \vert \nabla^k s\vert^2 \text{\rm vol}_{\Sigma_2}\\ &+ \sum_{k=0}^{m+1} \int_{[-5T,5T]\times [0,1]} e_{T,\delta}\vert \nabla^k(s - \text{\rm Pal}(s(0,1/2)))\vert^2 \text{\rm vol}_{\Sigma_T} \\&+ \Vert s(0,1/2)\Vert^2. \endaligned \end{equation} For $$ s \in L^2_{m}((\Sigma_T,\partial \Sigma_T);u^*TX\otimes \Lambda^{01}) $$ we define \begin{equation}\label{normformjula52} \Vert s\Vert^2_{L^2_{m,\delta}} = \sum_{k=0}^{m} \int_{\Sigma_T} e_{T,\delta}\vert \nabla^k s\vert^2 \text{\rm vol}_{\Sigma_T}. \end{equation} These norms were used in \cite[Section 7.1.3]{fooo:book1}. \par For a subset $W$ of $\Sigma_i$ or $\Sigma_T$ we define $ \Vert s\Vert_{L^2_{m,\delta}(W\subset \Sigma_i)} $, $ \Vert s\Vert_{L^2_{m,\delta}(W\subset \Sigma_T)} $ by restricting the domains of the integrations in (\ref{normformjula}), (\ref{normformjula52}) or (\ref{normformjula5}) to $W$. \par Let $(s_j,v_j) \in L^2_{m+1,\delta}((\Sigma_i,\partial \Sigma_i);u_i^*TX,u_i^*TL)$ for $j=1,2$. We define the inner product between them by: \begin{equation}\label{innerprod1} \aligned \langle\!\langle (s_1,v_1),(s_2,v_2)\rangle\!\rangle_{L^2_{\delta}} = & \int_{\Sigma_i\setminus K_i} (s_1-\text{\rm Pal}v_1,s_2-\text{\rm Pal}v_2) \\ &+\int_{K_i} (s_1,s_2) + ( v_1,v_2). \endaligned \end{equation} We also use an exponential map. (The same map was used in \cite[pages 410-411]{fooo:book1}.) We take a diffeomorphism \begin{equation}\label{defE} {\rm E} = ({\rm E} _1,{\rm E} _2): \{ (x,v) \in TX \mid \vert v\vert < \epsilon\} \to X \times X \end{equation} to its image such that $$ {\rm E} _1(x,v) = x, \quad \left.\frac{d {\rm E} _2(x,tv)}{dt}\right\vert_{t=0} = v $$ and $$ {\rm E} (x,v) \in L\times L, \qquad \text{for $x \in L$, $v \in T_xL$}.
$$ Furthermore we may take it so that \begin{equation}\label{Einanbdofp0} {\rm E} (x,v) = (x,x+v) \end{equation} on a neighborhood of $p_0$. \par To find such $\rm E$, we take a linear connection $\nabla$ (which may not be the Levi-Civita connection of a Riemannian metric) on $TX$ such that $TL$ is parallel with respect to $\nabla$. We then use geodesics with respect to $\nabla$ to define an exponential map. Namely we define $\rm E$ so that $t\mapsto {\rm E} _2(x,tv)$ is a $\nabla$-geodesic with initial direction $v$. Note that we may take $\nabla$ so that in a neighborhood of $p_0$ it coincides with the standard trivial connection with respect to the coordinate we fixed. Then (\ref{Einanbdofp0}) follows. \section{Proof of Theorem \ref{gluethm1} : 2 - Gluing by the alternating method} \label{alternatingmethod} Let us start with $$ u^{\rho} = (u_1^{\rho_1},u_2^{\rho_2}) \in \mathcal M^{E_1}((\Sigma_1,\vec z_1);\beta_1)_{\epsilon_2} \times_L \mathcal M^{E_2}((\Sigma_2,\vec z_2);\beta_2)_{\epsilon_2}. $$ Here $\rho_i \in V_i$ and the corresponding map $(\Sigma_i,\partial \Sigma_i) \to (X,L)$ is denoted by $u_i^{\rho_i}$. Let $\rho = (\rho_1,\rho_2)$. We put $$ p^{\rho} = \lim_{\tau\to \infty} u_1^{\rho_1}(\tau,t) = \lim_{\tau\to -\infty} u_2^{\rho_2}(\tau,t). $$ \par\medskip \noindent{\bf Pregluing}: \begin{defn} We define \begin{equation} u_{T,(0)}^{\rho} = \begin{cases} \chi_{\mathcal B}^{\leftarrow} (u_1^{\rho_1} - p^{\rho}) + \chi_{\mathcal A}^{\rightarrow} (u_2^{\rho_2} - p^{\rho}) + p^{\rho} & \text{on $[-5T,5T] \times [0,1]$} \\ u_1^{\rho_1} & \text{on $K_1$} \\ u_2^{\rho_2} & \text{on $K_2$}. \end{cases} \end{equation} \end{defn} Note that we use the coordinate of the neighborhood of $p_0$ to define the sum in the first line.
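The following consequence of the definition may be helpful to keep in mind. On the middle part of the neck both bump functions are identically $1$, so, in the coordinate of the neighborhood of $p_0$,
$$
u^{\rho}_{T,(0)}(\tau,t) = u_1^{\rho_1}(\tau,t) + u_2^{\rho_2}(\tau,t) - p^{\rho}
\qquad \text{on $[-T+1,T-1]\times [0,1]$},
$$
while $u^{\rho}_{T,(0)} = u_1^{\rho_1}$ on $\{\tau \le -T-1\}$ and $u^{\rho}_{T,(0)} = u_2^{\rho_2}$ on $\{\tau \ge T+1\}$. In particular $\overline\partial u^{\rho}_{T,(0)}$ can differ from $\overline\partial u_1^{\rho_1} + \overline\partial u_2^{\rho_2}$ only on the neck region, where both $u_i^{\rho_i}$ are exponentially close to the constant map $p^{\rho}$.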
\par\medskip \noindent{\bf Step 0-3}: \begin{lem}\label{lem18} If $\delta < \delta_1/10$, then there exists $\frak e^{\rho} _{i,T,(0)} \in E_i$ such that \begin{equation}\label{startingestimate} \Vert\overline\partial u^{\rho} _{T,(0)} - \frak e^{\rho} _{1,T,(0)} - \frak e^{\rho} _{2,T,(0)}\Vert_{L_{m,\delta}^2} < C_{1,m}e^{-\delta T}. \end{equation} Moreover \begin{equation}\label{frakeissmall} \Vert \frak e^{\rho} _{i,T,(0)}\Vert_{L_{m}^2(K_i)} < \epsilon_{4,m}. \end{equation} Here $\epsilon_{4,m}$ is a positive number which we may choose arbitrarily small by taking $V_i$ to be a sufficiently small neighborhood of zero in $\tilde V_i$. \par Moreover $\frak e^{\rho} _{i,T,(0)}$ is independent of $T$. \end{lem} \begin{proof} We put $$ \frak e^{\rho}_{i,T,(0)} = \overline\partial u_i^{\rho_i} \in E_i. $$ Then by definition the support of $\overline\partial u^{\rho} _{T,(0)} - \frak e^{\rho} _{1,T,(0)} - \frak e^{\rho} _{2,T,(0)}$ is contained in $[-5T,5T]\times [0,1]$, where $u_1^{\rho_1}$ and $u_2^{\rho_2}$ are exponentially close to the constant map $p^{\rho}$. The estimate (\ref{startingestimate}) follows. \end{proof} \par\medskip \noindent{\bf Step 0-4}: \begin{defn}\label{deferfirst} We put $$ \aligned {\rm Err}^{\rho}_{1,T,(0)} &= \chi_{\mathcal X}^{\leftarrow} (\overline\partial u^{\rho} _{T,(0)} - \frak e^{\rho} _{1,T,(0)}), \\ {\rm Err}^{\rho}_{2,T,(0)} &= \chi_{\mathcal X}^{\rightarrow} (\overline\partial u^{\rho} _{T,(0)} - \frak e^{\rho} _{2,T,(0)}). \endaligned$$ We regard them as elements of the weighted Sobolev spaces $L^2_{m,\delta}(\Sigma_1;(u_1^{\rho_1})^*TX\otimes \Lambda^{01})$ and $L^2_{m,\delta}(\Sigma_2;(u_2^{\rho_2})^*TX\otimes \Lambda^{01})$ respectively. (We extend them by $0$ outside a compact set.) \end{defn} \par\medskip \noindent{\bf Step 1-1}: We first cut $u^{\rho}_{T,(0)}$ and extend to obtain maps $\hat u^{\rho}_{i,T,(0)} : (\Sigma_i,\partial\Sigma_i) \to (X,L)$ $(i=1,2)$ as follows. (These maps are used to set up the linearized operator (\ref{lineeqstep0}).)
\begin{equation} \aligned &\hat u^{\rho}_{1,T,(0)}(z) \\ &= \begin{cases} \chi_{\mathcal B}^{\leftarrow}(\tau-T,t) u^{\rho}_{T,(0)}(\tau,t) + \chi_{\mathcal B}^{\rightarrow}(\tau-T,t)p^{\rho} &\text{if $z = (\tau,t) \in [-5T,5T] \times [0,1]$} \\ u^{\rho}_{T,(0)}(z) &\text{if $z \in K_1$} \\ p^{\rho} &\text{if $z \in [5T,\infty)\times [0,1]$}. \end{cases} \\ &\hat u^{\rho}_{2,T,(0)}(z) \\ &= \begin{cases} \chi_{\mathcal A}^{\rightarrow}(\tau+T,t) u^{\rho}_{T,(0)}(\tau,t) + \chi_{\mathcal A}^{\leftarrow}(\tau+T,t)p^{\rho} &\text{if $z = (\tau,t) \in [-5T,5T] \times [0,1]$} \\ u^{\rho}_{T,(0)}(z) &\text{if $z \in K_2$} \\ p^{\rho} &\text{if $z \in (-\infty,-5T]\times [0,1]$}. \end{cases} \endaligned \end{equation} Let \begin{equation}\label{lineeqstep0} \aligned D_{\hat u^{\rho}_{i,T,(0)}}\overline{\partial} : L^2_{m+1,\delta}((\Sigma_i,\partial \Sigma_i);&(\hat u^{\rho}_{i,T,(0)})^*TX,(\hat u^{\rho}_{i,T,(0)})^*TL) \\ &\to L^2_{m,\delta}(\Sigma_i;(\hat u^{\rho}_{i,T,(0)})^{*}TX \otimes \Lambda^{01}) \endaligned \end{equation} be the linearization of the Cauchy-Riemann equation. \begin{lem}\label{surjlineastep11} We put $E_i = E_i(\hat u^{\rho}_{i,T,(0)})$. We have \begin{equation}\label{surj1step1} \text{\rm Im}(D_{\hat u^{\rho}_{i,T,(0)}}\overline{\partial}) + E_i = L^2_{m,\delta}(\Sigma_i;(\hat u^{\rho}_{i,T,(0)})^{*}TX \otimes \Lambda^{01}). \end{equation} Moreover \begin{equation}\label{Duievsurjstep1} D{\rm ev}_{1,\infty} - D{\rm ev}_{2,\infty} : (D_{\hat u^{\rho}_{1,T,(0)}}\overline{\partial})^{-1}(E_1) \oplus (D_{\hat u^{\rho}_{2,T,(0)}}\overline{\partial})^{-1}(E_2) \to T_{p^{\rho}}L \end{equation} is surjective. \end{lem} \begin{proof} Since $\hat u^{\rho}_{i,T,(0)}$ is exponentially close to $u_i$, this is a consequence of Assumption \ref{DuimodEi}. \end{proof} Note that $E_i(u'_i)$ actually depends on $u'_i$. So to obtain a linearized equation of (\ref{mainequation}) we need to take that effect into account.
Let $\Pi_{E_i(u_i')}$ be the projection to $E_i(u_i')$ with respect to the $L^2$ norm. Namely we put \begin{equation}\label{form118} \Pi_{E_i(u_i')}(A) = \sum_{a=1}^{\dim E_i} \langle\!\langle A,\mathbf e_{i,a}(u'_i) \rangle\!\rangle_{L^2(K_i)}\mathbf e_{i,a}(u'_i), \end{equation} where $\mathbf e_{i,a}$, $a=1,\dots,\dim E_i(u_i')$, is an orthonormal basis of $E_i(u_i')$ consisting of elements supported in $K_i$. \par We put \begin{equation}\label{DEidef} (D_{u'_i}E_i)(A,v) = \frac{d}{ds}(\Pi_{E_i({\rm E} (u_i',sv))}(A))\vert_{s=0}. \end{equation} Here $v \in \Gamma((\Sigma_i,\partial \Sigma_i),(u'_i)^*TX,(u'_i)^*TL)$. (Then ${\rm E} (u_i',sv)$ is a map $(\Sigma_i,\partial \Sigma_i) \to (X,L)$ defined in (\ref{defE}).) \begin{rem} We use an isomorphism \begin{equation}\label{parallemap} \Gamma(\Sigma_i ; {\rm E} (u_i',sv)^*TX\otimes \Lambda^{01}) \cong \Gamma(\Sigma_i;(u'_i)^*TX \otimes \Lambda^{01}) \end{equation} to define the right hand side of (\ref{DEidef}). The map (\ref{parallemap}) is defined as follows. Let $z \in \Sigma_i$. We have a path $r \mapsto {\rm E} (u_i'(z),rsv(z))$ joining $u'_i(z)$ to $ {\rm E} (u_i',sv)(z)$. We use the connection $\nabla$, with respect to which $TL$ is parallel, to define parallel transport along this path. Its complex linear part defines an isomorphism (\ref{parallemap}). \par We note that the same isomorphism (\ref{parallemap}) is used also to define $D_{u'_i}\overline \partial$. Namely $$ (D_{u'_i}\overline \partial)(v) = \frac{d}{ds}(\overline\partial {\rm E} (u_i',sv))\vert_{s=0} $$ where the right hand side is defined by using (\ref{parallemap}). \end{rem} \par We put $$ \Pi_{E_i(u_i')}^{\perp}(A) = A - \Pi_{E_i(u_i')}(A). $$ \par The equation (\ref{mainequationui}) is equivalent to the following: \begin{equation}\label{mainequationui2} \Pi_{E_i(u_i')}^{\perp}\overline\partial u'_i = 0.
\end{equation} We calculate the linearization $$ \left.\frac{\partial}{\partial s} \Pi_{E_i({\rm E}(u_i',s V))}^{\perp} \overline\partial {\rm E}(u_i',s V)\right\vert_{s=0} $$ to obtain the linearized equation: \begin{equation}\label{linearized2221} D_{u'_i}\overline\partial (V) - (D_{u'_i}E_i)(\overline\partial u'_i,V) \equiv 0 \mod E_i(u'_i). \end{equation} We note that $$ \overline\partial \hat u^{\rho}_{i,T,(0)} - \frak e^{\rho} _{i,T,(0)} $$ is exponentially small. So we use the operator \begin{equation}\label{144op} V \mapsto D_{\hat u^{\rho}_{i,T,(0)}}\overline{\partial}(V) - (D_{\hat u^{\rho}_{i,T,(0)}}E_i)(\frak e^{\rho} _{i,T,(0)}, V) \end{equation} as an approximation of the linearization of (\ref{mainequationui2}). \begin{lem}\label{lem112} We put $E_i = E_i(\hat u^{\rho}_{i,T,(0)})$. We have \begin{equation}\label{surj1step1modififed} \text{\rm Im}(D_{\hat u^{\rho}_{i,T,(0)}}\overline{\partial} - (D_{\hat u^{\rho}_{i,T,(0)}}E_i)(\frak e^{\rho} _{i,T,(0)}, \cdot)) + E_i = L^2_{m,\delta}(\Sigma_i;(\hat u^{\rho}_{i,T,(0)})^{*}TX \otimes \Lambda^{01}). \end{equation} Moreover \begin{equation}\label{Duievsurjstep12} \aligned D{\rm ev}_{1,\infty} - &D{\rm ev}_{2,\infty} : (D_{\hat u^{\rho}_{1,T,(0)}}\overline{\partial} - (D_{\hat u^{\rho}_{1,T,(0)}}E_1)(\frak e^{\rho} _{1,T,(0)}, \cdot))^{-1}(E_1) \\ &\oplus (D_{\hat u^{\rho}_{2,T,(0)}}\overline{\partial} -(D_{\hat u^{\rho}_{2,T,(0)}}E_2)(\frak e^{\rho} _{2,T,(0)}, \cdot))^{-1}(E_2) \to T_{p^{\rho}}L \endaligned \end{equation} is surjective. \end{lem} \begin{proof} (\ref{frakeissmall}) implies that $ (D_{\hat u^{\rho}_{i,T,(0)}}E_i)(\frak e^{\rho} _{i,T,(0)}, \cdot)$ is small in operator norm. The lemma follows from Lemma \ref{surjlineastep11}. \end{proof} \begin{rem}\label{independetmm} Note that (\ref{frakeissmall}) is proved by taking $V_i$ in a small neighborhood of $0$ (in $\tilde V_i$) with respect to the $C^m$ norm.
(Note $V_i \subset \mathcal M^{E_i}((\Sigma_i,\vec z_i);\beta_i)_{\epsilon_2}$ and $V_i$ consists of smooth maps.) However we can take $V_i$ independent of $m$ so that the conclusion of Lemma \ref{lem112} holds for every $m$. In fact, elliptic regularity implies that if the conclusion of Lemma \ref{lem112} holds for some $m$ then it holds for all $m'>m$. (The inequality (\ref{frakeissmall}) holds only for that particular $m$. However this inequality is used only to show Lemma \ref{lem112}.) \end{rem} We consider \begin{equation}\label{gibker} \aligned &\text{\rm Ker}(D{\rm ev}_{1,\infty} - D{\rm ev}_{2,\infty})\\ \cap &\left((D_{\hat u^{\rho}_{1,T,(0)}}\overline{\partial} - (D_{\hat u^{\rho}_{1,T,(0)}}E_1)(\frak e^{\rho} _{1,T,(0)}, \cdot))^{-1}(E_1) \right.\\ &\quad\oplus \left.(D_{\hat u^{\rho}_{2,T,(0)}}\overline{\partial} - (D_{\hat u^{\rho}_{2,T,(0)}}E_2)(\frak e^{\rho} _{2,T,(0)}, \cdot))^{-1}(E_2)\right). \endaligned\end{equation} This is a finite dimensional subspace of \begin{equation}\label{cocsissobolev} \text{\rm Ker}(D{\rm ev}_{1,\infty} - D{\rm ev}_{2,\infty}) \cap \bigoplus_{i=1}^2 L^2_{m+1,\delta}((\Sigma_i,\partial \Sigma_i);(\hat u^{\rho}_{i,T,(0)})^{*}TX,(\hat u^{\rho}_{i,T,(0)})^{*}TL) \end{equation} consisting of smooth sections. \begin{defn}\label{defnfraH} We denote by $\frak H(E_1,E_2)$ the intersection of the $L^2$ orthogonal complement of (\ref{gibker}) with (\ref{cocsissobolev}). Here the $L^2$ inner product is defined by (\ref{innerprod1}). \end{defn} \begin{defn}\label{def105} We define $(V^{\rho}_{T,1,(1)},V^{\rho}_{T,2,(1)},\Delta p^{\rho}_{T,(1)})$ as follows. \begin{equation}\label{144ffff} \aligned (D_{\hat u^{\rho}_{i,T,(0)}}\overline{\partial})(V^{\rho}_{T,i,(1)}) - &(D_{\hat u^{\rho}_{i,T,(0)}}E_i)(\frak e^{\rho} _{i,T,(0)},V^{\rho}_{T,i,(1)}) \\ &+ {\rm Err}^{\rho}_{i,T,(0)} \in E_i(\hat u^{\rho}_{i,T,(0)}).
\endaligned \end{equation} \begin{equation} D{\rm ev}_{1,\infty}(V^{\rho}_{T,1,(1)}) = D{\rm ev}_{2,\infty}(V^{\rho}_{T,2,(1)}) = \Delta p^{\rho}_{T,(1)}. \end{equation} Moreover $$ ((V^{\rho}_{T,1,(1)},\Delta p^{\rho}_{T,(1)}),(V^{\rho}_{T,2,(1)},\Delta p^{\rho}_{T,(1)})) \in \frak H(E_1,E_2). $$ \end{defn} Lemma \ref{lem112} implies that such $(V^{\rho}_{T,1,(1)},V^{\rho}_{T,2,(1)},\Delta p^{\rho}_{T,(1)})$ exists and is unique. \begin{lem} If $\delta < \delta_1/10$, then \begin{equation}\label{142form} \Vert (V^{\rho}_{T,i,(1)},\Delta p^{\rho}_{T,(1)})\Vert_{L^2_{m+1,\delta}(\Sigma_i)} \le C_{2,m}e^{-\delta T}, \qquad \vert\Delta p^{\rho}_{T,(1)}\vert \le C_{2,m}e^{-\delta T}. \end{equation} \end{lem} This is immediate from the construction and the uniform boundedness of the right inverse of $D_{\hat u^{\rho}_{i,T,(0)}}\overline{\partial} - (D_{\hat u^{\rho}_{i,T,(0)}}E_i)(\frak e^{\rho} _{i,T,(0)}, \cdot)$. \par\medskip \noindent{\bf Step 1-2}: We use $(V^{\rho}_{T,1,(1)},V^{\rho}_{T,2,(1)},\Delta p^{\rho}_{T,(1)})$ to find an approximate solution $u^{\rho}_{T,(1)}$ of the next level. \begin{defn} We define $u^{\rho}_{T,(1)}(z) $ as follows. (Here $\rm E$ is as in (\ref{defE}).) \begin{enumerate} \item If $z \in K_1$, we put \begin{equation} u^{\rho}_{T,(1)}(z) = {\rm E} (\hat u^{\rho}_{1,T,(0)}(z),V^{\rho}_{T,1,(1)}(z)). \end{equation} \item If $z \in K_2$, we put \begin{equation} u^{\rho}_{T,(1)}(z) = {\rm E} (\hat u^{\rho}_{2,T,(0)}(z),V^{\rho}_{T,2,(1)}(z)). \end{equation} \item If $z = (\tau,t) \in [-5T,5T]\times [0,1]$, we put \begin{equation} \aligned u^{\rho}_{T,(1)}(\tau,t) = &\chi_{\mathcal B}^{\leftarrow}(\tau,t) (V^{\rho}_{T,1,(1)}(\tau,t) - \Delta p^{\rho}_{T,(1)})\\ &+\chi_{\mathcal A}^{\rightarrow}(\tau,t)(V^{\rho}_{T,2,(1)}(\tau,t)-\Delta p^{\rho}_{T,(1)}) +u^{\rho}_{T,(0)}(\tau,t) + \Delta p^{\rho}_{T,(1)}.
\endaligned \end{equation} \end{enumerate} \end{defn} We recall that $\hat u^{\rho}_{1,T,(0)}(z) = u^{\rho}_{T,(0)}(z)$ on $K_1$ and $\hat u^{\rho}_{2,T,(0)}(z) = u^{\rho}_{T,(0)}(z)$ on $K_2$. \par\medskip \noindent{\bf Step 1-3}: Let $0 < \mu < 1$. We fix it throughout the proof. \begin{lem}\label{mainestimatestep13} There exists $\delta_2$ such that for any $\delta < \delta_2$ and $T > T(\delta,m,\epsilon_{5,m})$, there exists $\frak e^{\rho} _{i,T,(1)} \in E_i$ with the following properties. $$ \Vert\overline\partial u^{\rho} _{T,(1)} - (\frak e^{\rho} _{1,T,(0)} + \frak e^{\rho} _{1,T,(1)}) - (\frak e^{\rho} _{2,T,(0)} + \frak e^{\rho} _{2,T,(1)})\Vert_{L_{m,\delta}^2} < C_{1,m}\mu\epsilon_{5,m} e^{-\delta T}. $$ (Here $C_{1,m}$ is the constant given in Lemma \ref{lem18}.) Moreover \begin{equation}\label{frakeissmall2} \Vert \frak e^{\rho} _{i,T,(1)}\Vert_{L_{m}^2(K_i)} < C_{3,m}e^{-\delta T}. \end{equation} \end{lem} \begin{proof} The existence of $\frak e^{\rho} _{i,T,(1)}$ satisfying $$ \Vert\overline\partial u^{\rho} _{T,(1)} - (\frak e^{\rho} _{1,T,(0)} + \frak e^{\rho} _{1,T,(1)}) - (\frak e^{\rho} _{2,T,(0)} + \frak e^{\rho} _{2,T,(1)})\Vert_{L_{m,\delta}^2(K_1\cup K_2\subset \Sigma_T)} < C_{1,m}\mu\epsilon_{5,m} e^{-\delta T}/10 $$ is a consequence of the fact that (\ref{linearized2221}) is the linearized equation of (\ref{mainequationui2}) and the estimate (\ref{142form}). More explicitly, we can prove it by a routine calculation as follows. We first estimate on $K_1$.
We have: \begin{equation}\label{151ffff} \aligned \overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(0)},V^{\rho}_{T,1,(1)}))& \\ = \overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(0)},0)) &+\int_0^1 \frac{\partial}{\partial s} \overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(0)},sV^{\rho}_{T,1,(1)}))ds \\ =\overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(0)},0)) &+ (D_{\hat u^{\rho}_{1,T,(0)}}\overline\partial)(V^{\rho}_{T,1,(1)}) \\ &+\int_0^1 ds \int_0^s \frac{\partial^2}{\partial r^2} \overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(0)},rV^{\rho}_{T,1,(1)}))dr. \endaligned\end{equation} We remark \begin{equation}\label{152ff} \aligned &\left\Vert\int_0^1 ds \int_0^s \frac{\partial^2}{\partial r^2} \overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(0)},rV^{\rho}_{T,1,(1)}))dr \right\Vert_{L^2_m(K_1)} \\ &\le C_{3,m} \Vert V^{\rho}_{T,1,(1)}\Vert_{L^2_{m+1,\delta}}^2 \le C_{4,m}e^{-2\delta T}. \endaligned \end{equation} We have \begin{equation}\label{155ff} \aligned \Pi_{E_1({\rm E} (\hat u^{\rho}_{1,T,(0)},V^{\rho}_{T,1,(1)}))}^{\perp}&\\ = \Pi_{E_1(\hat u^{\rho}_{1,T,(0)})}^{\perp} &+ \int_0^1 \frac{\partial}{\partial s} \Pi_{E_i({\rm E} (\hat u^{\rho}_{1,T,(0)},sV^{\rho}_{T,1,(1)}))}^{\perp} ds \\ = \Pi_{E_1(\hat u^{\rho}_{1,T,(0)})}^{\perp} &- (D_{\hat u^{\rho}_{1,T,(0)}}E_1)(\cdot,V^{\rho}_{T,1,(1)})\\ &+ \int_0^1 ds \int_0^s \frac{\partial^2}{\partial r^2} \Pi_{E_1({\rm E} (\hat u^{\rho}_{1,T,(0)},rV^{\rho}_{T,1,(1)}))}^{\perp} dr. \endaligned\end{equation} We can estimate the third term of the right hand side of (\ref{155ff}) in the same way as in (\ref{152ff}). \par On the other hand, (\ref{151ffff}) implies that \begin{equation}\label{156ff} \left\Vert\overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(0)},V^{\rho}_{T,1,(1)})) - \frak e^{\rho} _{1,T,(0)}\right\Vert_{L^2_{m}(K_1)} \le C_{6,m}e^{-\delta T}. 
\end{equation} Therefore, using (\ref{155ff}) and (\ref{142form}), we have \begin{equation} \aligned &\bigg\Vert\Pi_{E_1({\rm E} (\hat u^{\rho}_{1,T,(0)},V^{\rho}_{T,1,(1)}))}^{\perp}\overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(0)},V^{\rho}_{T,1,(1)})) \\ &- \Pi_{E_1(\hat u^{\rho}_{1,T,(0)},0)}^{\perp}\overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(0)},V^{\rho}_{T,1,(1)}))\\ &- \Pi_{E_1({\rm E} (\hat u^{\rho}_{1,T,(0)},V^{\rho}_{T,1,(1)}))}^{\perp}(\frak e^{\rho} _{1,T,(0)}) + \Pi_{E_1(\hat u^{\rho}_{1,T,(0)},0)}^{\perp}(\frak e^{\rho} _{1,T,(0)}) \bigg\Vert_{L^2_{m}(K_1)} \le C_{7,m}e^{-2\delta T}. \endaligned\end{equation} Therefore using (\ref{155ff}) we have: \begin{equation}\label{160ff} \aligned &\Vert\Pi_{E_1({\rm E} (\hat u^{\rho}_{1,T,(0)},V^{\rho}_{T,1,(1)}))}^{\perp}\overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(0)},V^{\rho}_{T,1,(1)})) \\ &- \Pi_{E_1(\hat u^{\rho}_{1,T,(0)},0)}^{\perp}\overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(0)},V^{\rho}_{T,1,(1)}))\\ &+ (D_{\hat u^{\rho}_{1,T,(0)}}E_1)(\frak e^{\rho} _{1,T,(0)},V^{\rho}_{T,1,(1)})\Vert_{L^2_{m}(K_1)} \le C_{8,m}e^{-2\delta T}. \endaligned\end{equation} By (\ref{144ffff}) and Definition \ref{deferfirst}, we have: \begin{equation}\label{161ff} \aligned \overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(0)},0)) &+ (D_{\hat u^{\rho}_{1,T,(0)}}\overline\partial)(V^{\rho}_{T,1,(1)})\\ &- (D_{\hat u^{\rho}_{1,T,(0)}}E_1)(\frak e^{\rho} _{1,T,(0)},V^{\rho}_{T,1,(1)}) \in E_1(\hat u^{\rho}_{1,T,(0)}) \endaligned \end{equation} on $K_1$. 
\par (\ref{160ff}) and (\ref{161ff}) imply \begin{equation}\label{162ff} \aligned &\Vert\Pi_{E_1({\rm E} (\hat u^{\rho}_{1,T,(0)},V^{\rho}_{T,1,(1)}))}^{\perp}\overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(0)},V^{\rho}_{T,1,(1)})) \\ &- \Pi_{E_1(\hat u^{\rho}_{1,T,(0)},0)}^{\perp}\overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(0)},V^{\rho}_{T,1,(1)}))\\ &+ \Pi_{E_1(\hat u^{\rho}_{1,T,(0)},0)}^{\perp}\overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(0)},0))\\ &+ \Pi_{E_1(\hat u^{\rho}_{1,T,(0)},0)}^{\perp}(D_{\hat u^{\rho}_{1,T,(0)}}\overline\partial)(V^{\rho}_{T,1,(1)})\Vert_{L^2_{m}(K_1)} \le C_{9,m}e^{-2\delta T}. \endaligned\end{equation} Combined with (\ref{151ffff}) and (\ref{152ff}), we have \begin{equation}\label{eq164} \aligned &\Vert\Pi_{E_1({\rm E} (\hat u^{\rho}_{1,T,(0)},V^{\rho}_{T,1,(1)}))}^{\perp}(\overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(0)},V^{\rho}_{T,1,(1)})))\Vert_{L^2_m(K_1)}\\ &\le C_{10,m}e^{-2\delta T} \le C_{1,m}e^{-\delta T}\epsilon_{5,m}\mu/10, \endaligned \end{equation} for $T>T_m$ if we choose $T_m$ so that $C_{10,m}e^{-\delta T_m} < C_{1,m}\epsilon_{5,m}\mu/10$. \par It follows from (\ref{156ff}) and (\ref{eq164}) that $$ \Vert\Pi_{E_1({\rm E} (\hat u^{\rho}_{1,T,(0)},V^{\rho}_{T,1,(1)}))}(\overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(0)},V^{\rho}_{T,1,(1)})) - \frak e^{\rho} _{1,T,(0)})\Vert_{L^2_{m}(K_1)} \le C_{11,m}e^{-\delta T}. $$ Then (\ref{frakeissmall2}) follows by setting $$ \frak e^{\rho} _{1,T,(1)} = \Pi_{E_1({\rm E} (\hat u^{\rho}_{1,T,(0)},V^{\rho}_{T,1,(1)}))}(\overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(0)},V^{\rho}_{T,1,(1)})) - \frak e^{\rho} _{1,T,(0)}). $$ \par The estimate on $K_2$ is the same. \par Let us estimate $\overline\partial u^{\rho} _{T,(1)}$ on $[-T+1,T-1]\times [0,1]$.
The inequality $$ \Vert\overline\partial u^{\rho} _{T,(1)} \Vert_{L_{m,\delta}^2([-T+1,T-1]\times [0,1]\subset \Sigma_T)} < C_{1,m}\mu\epsilon_{5,m} e^{-\delta T}/10 $$ is also a consequence of the fact that (\ref{linearized2221}) is the linearized equation of (\ref{mainequationui2}) and the estimate (\ref{142form}). (Note that the bump functions $\chi_{\mathcal B}^{\leftarrow}$ and $\chi_{\mathcal A}^{\rightarrow}$ are identically $1$ there.) On $\mathcal A_T$ we have \begin{equation}\label{estimateatA1} \overline\partial u^{\rho} _{T,(1)} = \overline\partial(\chi_{\mathcal A}^{\rightarrow}(V^{\rho}_{T,2,(1)}-\Delta p^{\rho}_{T,(1)})+V^{\rho}_{T,1,(1)} +u^{\rho}_{T,(0)}). \end{equation} Note $$ \aligned \Vert \overline\partial(\chi_{\mathcal A}^{\rightarrow}(V^{\rho}_{T,2,(1)}-\Delta p^{\rho}_{T,(1)}))\Vert_{L^2_m(\mathcal A_T)} &\le C_{3,m}e^{-6T\delta}\Vert V^{\rho}_{T,2,(1)}-\Delta p^{\rho}_{T,(1)}\Vert_{L^2_{m+1,\delta}(\mathcal A_T \subset \Sigma_{2})}\\ &\le C_{12,m} e^{-7T\delta}. \endaligned $$ The first inequality follows from the fact that the weight function $e_{2,\delta}$ is around $e^{6T\delta}$ on $\mathcal A_T$. The second inequality follows from (\ref{142form}). On the other hand the weight function $e_{T,\delta}$ is around $e^{4T\delta}$ on $\mathcal A_T$.\footnote{This drop of the weight is the main part of the idea. It was used in \cite[page 414]{fooo:book1}. See \cite[Figure 7.1.6]{fooo:book1}.} Therefore \begin{equation}\label{2ff160} \Vert \overline\partial(\chi_{\mathcal A}^{\rightarrow}(V^{\rho}_{T,2,(1)}-\Delta p^{\rho}_{T,(1)})) \Vert_{L^2_{m,\delta}(\mathcal A_T\subset \Sigma_T)} \le C_{13,m} e^{-3T\delta}. \end{equation} Note $$ {\rm Err}^{\rho}_{2,T,(0)} = 0 $$ on $\mathcal A_T$.
Using this, in the same way as we did on $K_1$, we can show \begin{equation}\label{2ff161} \Vert\overline\partial(V^{\rho}_{T,1,(1)} +u^{\rho}_{T,(0)})\Vert_{L^2_{m,\delta}(\mathcal A_T \subset \Sigma_T)} \le C_{1,m}e^{-\delta T}\epsilon_{5,m}\mu/20 \end{equation} for $T > T_m$. Therefore by taking $T$ large we have \begin{equation}\label{2ff162} \Vert\overline\partial u^{\rho} _{T,(1)} \Vert_{L_{m,\delta}^2(\mathcal A_T\subset \Sigma_T)} < C_{1,m}\mu\epsilon_{5,m} e^{-\delta T}/10. \end{equation} (Note that the almost complex structure may not be integrable, so it may not be constant with respect to the flat metric we are taking in the neighborhood of $p_0$. However we can still deduce (\ref{2ff162}) from (\ref{2ff161}) and (\ref{2ff160}).) \par The estimates on $\mathcal B_T$ and on $([-5T,-T-1]\cup [T+1,5T]) \times [0,1]$ are similar. The proof of Lemma \ref{mainestimatestep13} is complete. \end{proof} \par\medskip \noindent{\bf Step 1-4}: \begin{defn} We put $$ \aligned {\rm Err}^{\rho}_{1,T,(1)} &= \chi_{\mathcal X}^{\leftarrow} (\overline\partial u^{\rho} _{T,(1)} - (\frak e^{\rho} _{1,T,(0)} + \frak e^{\rho} _{1,T,(1)})) , \\ {\rm Err}^{\rho}_{2,T,(1)} &= \chi_{\mathcal X}^{\rightarrow} (\overline\partial u^{\rho} _{T,(1)} - (\frak e^{\rho} _{2,T,(0)} + \frak e^{\rho} _{2,T,(1)})). \endaligned$$ We regard them as elements of the weighted Sobolev spaces $L^2_{m,\delta}(\Sigma_1;(\hat u_{1,T,(1)}^{\rho})^*TX\otimes \Lambda^{01})$ and $L^2_{m,\delta}(\Sigma_2;(\hat u_{2,T,(1)}^{\rho})^*TX\otimes \Lambda^{01})$ respectively. (We extend them by $0$ outside a compact set.) \end{defn} We put $p^{\rho}_{(1)} = p^{\rho} + \Delta p^{\rho}_{T,(1)}$. \par\medskip We now proceed to Step 2-1 and continue inductively. In other words, we will prove the following estimates by induction on $\kappa$.
\begin{eqnarray} \left\Vert (V^{\rho}_{T,i,(\kappa)},\Delta p^{\rho}_{T,(\kappa)})\right\Vert_{L^2_{m+1,\delta}(\Sigma_i)} &<& C_{2,m}\mu^{\kappa-1}e^{-\delta T}, \label{form0182} \\ \left\vert \Delta p^{\rho}_{T,(\kappa)}\right\vert &<& C_{2,m}\mu^{\kappa-1}e^{-\delta T}, \label{form0183} \\ \left\Vert u^{\rho}_{T,(\kappa)} - u^{\rho}_{T,(0)} \right\Vert_{L^2_{m+1,\delta}(\Sigma_{T})} &<& C_{14,m}e^{-\delta T}, \label{form0184a} \\ \left\Vert {\rm Err}^{\rho}_{i,T,(\kappa)} \right\Vert_{L^2_{m,\delta}(\Sigma_i)} &<& C_{1,m}\epsilon_{5,m}\mu^{\kappa}e^{-\delta T}, \label{form0185} \\ \left\Vert \frak e^{\rho} _{i,T,(\kappa)}\right\Vert_{L^2_{m}(K_i^{\text{\rm obst}})} &<& C_{15,m}\mu^{\kappa-1}e^{-\delta T}, \quad \text{for $\kappa \ge 1$}. \label{form0186} \end{eqnarray} \begin{rem} The left hand side of (\ref{form0184a}) is defined as follows. We define $\frak u^{\rho}_{T,(\kappa)}$ by $ u^{\rho}_{T,(\kappa)} = {\rm E}(u^{\rho}_{T,(\kappa-1)},\frak u^{\rho}_{T,(\kappa)}). $ Then the left hand side of (\ref{form0184a}) is $$ \Vert \frak u^{\rho}_{T,(\kappa)} \Vert_{L^2_{m+1,\delta}((\Sigma_T,\partial\Sigma_T);(u^{\rho}_{T,(\kappa-1)})^*TX, (u^{\rho}_{T,(\kappa-1)})^*TL)}. $$ \end{rem} More precisely, the claim we will prove is: for any $\epsilon_{5,m}$ we can choose $T_m$ so that (\ref{form0182}) and (\ref{form0183}) imply (\ref{form0185}) and (\ref{form0186}) for given $T>T_m$, and we can choose $\epsilon_{5,m}$ so that (\ref{form0185}) and (\ref{form0186}) for $\kappa$ imply (\ref{form0182}) and (\ref{form0183}) for $\kappa + 1$. (It is easy to see that (\ref{form0182}) and (\ref{form0183}) imply (\ref{form0184a}).) \par Below we describe Steps $\kappa$-1,\dots,$\kappa$-4. \par\medskip \noindent{\bf Step $\kappa$-1}: We first cut $u^{\rho}_{T,(\kappa-1)}$ and extend to obtain maps $\hat u^{\rho}_{i,T,(\kappa-1)} : (\Sigma_i,\partial\Sigma_i) \to (X,L)$ $(i=1,2)$ as follows.
\begin{equation} \aligned &\hat u^{\rho}_{1,T,(\kappa-1)}(z) \\ &= \begin{cases} \chi_{\mathcal B}^{\leftarrow}(\tau-T,t) u^{\rho}_{T,(\kappa-1)}(\tau,t) + \chi_{\mathcal B}^{\rightarrow}(\tau-T,t)p^{\rho}_{(\kappa-1)} &\text{if $z = (\tau,t) \in [-5T,5T] \times [0,1]$} \\ u^{\rho}_{T,(\kappa-1)}(z) &\text{if $z \in K_1$} \\ p^{\rho}_{(\kappa-1)} &\text{if $z \in [5T,\infty)\times [0,1]$}. \end{cases} \\ &\hat u^{\rho}_{2,T,(\kappa-1)}(z) \\ &= \begin{cases} \chi_{\mathcal A}^{\rightarrow}(\tau+T,t) u^{\rho}_{T,(\kappa-1)}(\tau,t) + \chi_{\mathcal A}^{\leftarrow}(\tau+T,t)p^{\rho}_{(\kappa-1)} &\text{if $z = (\tau,t) \in [-5T,5T] \times [0,1]$} \\ u^{\rho}_{T,(\kappa-1)}(z) &\text{if $z \in K_2$} \\ p^{\rho}_{(\kappa-1)} &\text{if $z \in (-\infty,-5T]\times [0,1]$}. \end{cases} \endaligned \end{equation} Let \begin{equation}\label{lineeqstepkappa} \aligned D_{\hat u^{\rho}_{i,T,(\kappa-1)}}\overline{\partial} : L^2_{m+1,\delta}((\Sigma_i,\partial \Sigma_i);&(\hat u^{\rho}_{i,T,(\kappa-1)})^*TX,(\hat u^{\rho}_{i,T,(\kappa-1)})^*TL) \\ &\to L^2_{m,\delta}(\Sigma_i;(\hat u^{\rho}_{i,T,(\kappa-1)})^{*}TX \otimes \Lambda^{01}) \endaligned \end{equation} be the linearization of the Cauchy-Riemann equation. \begin{lem}\label{surjlineastep11kappa} We put $E_i = E_i(\hat u^{\rho}_{i,T,(\kappa-1)})$. We have \begin{equation}\label{surj1stepkappa} \text{\rm Im}(D_{\hat u^{\rho}_{i,T,(\kappa-1)}}\overline{\partial}) + E_i = L^2_{m,\delta}(\Sigma_i;(\hat u^{\rho}_{i,T,(\kappa-1)})^{*}TX \otimes \Lambda^{01}). \end{equation} Moreover \begin{equation}\label{Duievsurjstepkappa} D{\rm ev}_{1,\infty} - D{\rm ev}_{2,\infty} : (D_{\hat u^{\rho}_{1,T,(\kappa-1)}}\overline{\partial})^{-1}(E_1) \oplus (D_{\hat u^{\rho}_{2,T,(\kappa-1)}}\overline{\partial})^{-1}(E_2) \to T_{p^{\rho}_{(\kappa-1)}}L \end{equation} is surjective. \end{lem} \begin{proof} Since $\hat u^{\rho}_{i,T,(\kappa-1)}$ is exponentially close to $u_i$, this is a consequence of Assumption \ref{DuimodEi}. \end{proof} We denote \begin{equation} (\frak {se})^{\rho} _{i,T,(\kappa-1)} = \sum_{a=0}^{\kappa-1}\frak e^{\rho} _{i,T,(a)}.
\end{equation} \begin{lem}\label{lem112kappa} We have \begin{equation}\label{surj1step1modififedkappa} \aligned &\text{\rm Im}(D_{\hat u^{\rho}_{i,T,(\kappa-1)}}\overline{\partial} - (D_{\hat u^{\rho}_{i,T,(\kappa-1)}}E_i)((\frak {se})^{\rho} _{i,T,(\kappa-1)}, \cdot)) + E_i \\ &= L^2_{m,\delta}(\Sigma_i;(\hat u^{\rho}_{i,T,(\kappa-1)})^{*}TX \otimes \Lambda^{01}). \endaligned\end{equation} Moreover \begin{equation}\label{Duievsurjstep1kappa2} \aligned &D{\rm ev}_{1,\infty} - D{\rm ev}_{2,\infty} \\ : &(D_{\hat u^{\rho}_{1,T,(\kappa-1)}}\overline{\partial} - (D_{\hat u^{\rho}_{1,T,(\kappa-1)}}E_1)((\frak {se})^{\rho} _{1,T,(\kappa-1)}, \cdot))^{-1}(E_1) \\ &\oplus (D_{\hat u^{\rho}_{2,T,(\kappa-1)}}\overline{\partial} - (D_{\hat u^{\rho}_{2,T,(\kappa-1)}}E_2)((\frak {se})^{\rho} _{2,T,(\kappa-1)}, \cdot))^{-1}(E_2) \to T_{p^{\rho}_{(\kappa-1)}}L \endaligned \end{equation} is surjective. \end{lem} \begin{proof} By (\ref{frakeissmall}) and (\ref{form0186}) we have \begin{equation}\label{eq181} \left\Vert \sum_{a=0}^{\kappa-1}\frak e^{\rho} _{i,T,(a)}\right\Vert_{L_{m}^2(K_i)} < \epsilon_{4,m} + C_{15,m}\frac{e^{-\delta T}}{1-\mu}. \end{equation} This implies that $ (D_{\hat u^{\rho}_{i,T,(\kappa-1)}}E_i)((\frak {se})^{\rho} _{i,T,(\kappa-1)}, \cdot)$ is small in operator norm. The lemma follows from Lemma \ref{surjlineastep11kappa}. \end{proof} Note that Remark \ref{independetmm} still applies to Lemma \ref{lem112kappa}. \begin{defn} We define $(V^{\rho}_{T,1,(\kappa)},V^{\rho}_{T,2,(\kappa)},\Delta p^{\rho}_{T,(\kappa)})$ as follows. \begin{equation}\label{formula158} \aligned D_{\hat u^{\rho}_{i,T,(\kappa-1)}}\overline{\partial}(V^{\rho}_{T,i,(\kappa)}) - (D_{\hat u^{\rho}_{i,T,(\kappa-1)}}E_i) &((\frak {se})^{\rho} _{i,T,(\kappa-1)}, V^{\rho}_{T,i,(\kappa)}) \\ &+ {\rm Err}^{\rho}_{i,T,(\kappa-1)} \in E_i(\hat u^{\rho}_{i,T,(\kappa-1)}). \endaligned \end{equation} \begin{equation} D{\rm ev}_{1,\infty}(V^{\rho}_{T,1,(\kappa)}) = D{\rm ev}_{2,\infty}(V^{\rho}_{T,2,(\kappa)}) = \Delta p^{\rho}_{T,(\kappa)}.
\end{equation} We also require \begin{equation}\label{inHhanru} ((V^{\rho}_{T,1,(\kappa)},\Delta p^{\rho}_{T,(\kappa)}),(V^{\rho}_{T,2,(\kappa)},\Delta p^{\rho}_{T,(\kappa)})) \in \frak H(E_1,E_2). \end{equation} \end{defn} Lemma \ref{lem112kappa} implies that such $(V^{\rho}_{T,1,(\kappa)},V^{\rho}_{T,2,(\kappa)},\Delta p^{\rho}_{T,(\kappa)})$ exists and is unique. \begin{rem} Note that in (\ref{inHhanru}) we use the same space $\frak H(E_1,E_2)$ as in Definition \ref{def105}. We may use the orthogonal complement of $$ \text{\rm Ker}(D{\rm ev}_{1,\infty} - D{\rm ev}_{2,\infty}) \cap \bigoplus_{i=1}^2 (D_{\hat u^{\rho}_{i,T,(\kappa-1)}}\overline{\partial} - (D_{\hat u^{\rho}_{i,T,(\kappa-1)}}E_i)((\frak {se})^{\rho} _{i,T,(\kappa-1)}, \cdot))^{-1}(E_i) $$ instead. The reason we use the same space as in Definition \ref{def105} here is that a calculation needed for the exponential decay estimate of the $T$ derivative then becomes a bit shorter. Since $\hat u^{\rho}_{i,T,(\kappa-1)}$ is sufficiently close to $\hat u^{\rho}_{i,T,(0)}$, the unique existence of $(V^{\rho}_{T,1,(\kappa)},V^{\rho}_{T,2,(\kappa)},\Delta p^{\rho}_{T,(\kappa)})$ satisfying (\ref{formula158})--(\ref{inHhanru}) holds by (\ref{eq181}). \end{rem} \begin{lem}\label{estimageVkappa} If $\delta < \delta_1/10$ and $T>T(\delta,m)$, then \begin{equation}\label{142form23} \aligned &\Vert (V^{\rho}_{T,i,(\kappa)},\Delta p^{\rho}_{T,(\kappa)})\Vert_{L^2_{m+1,\delta}(\Sigma_i)} \le C_{2,m}\mu^{\kappa-1}e^{-\delta T}, \\ &\vert\Delta p^{\rho}_{T,(\kappa)}\vert \le C_{2,m}\mu^{\kappa-1}e^{-\delta T}. \endaligned \end{equation} \end{lem} \begin{proof} This follows from the uniform boundedness of the inverse of (\ref{surj1step1modififedkappa}) together with the $\kappa-1$ version of Lemma \ref{mainestimatestep13} (that is, Lemma \ref{mainestimatestep13kappa}). \end{proof} This lemma implies (\ref{form0182}) and (\ref{form0183}).
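Since $0<\mu<1$, the bounds in Lemma \ref{estimageVkappa} sum to a geometric series; the following short computation (a sanity check rather than a step of the proof) makes this explicit:
\begin{equation*}
\sum_{\kappa=1}^{\infty} \Vert (V^{\rho}_{T,i,(\kappa)},\Delta p^{\rho}_{T,(\kappa)})\Vert_{L^2_{m+1,\delta}(\Sigma_i)}
\le C_{2,m}e^{-\delta T}\sum_{\kappa=1}^{\infty}\mu^{\kappa-1}
= \frac{C_{2,m}e^{-\delta T}}{1-\mu}.
\end{equation*}
Thus the total correction accumulated over all stages of the iteration is itself exponentially small in $T$; this summability is what makes the iteration converge.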
\par\medskip \noindent{\bf Step $\kappa$-2}: We use $(V^{\rho}_{T,1,(\kappa)},V^{\rho}_{T,2,(\kappa)},\Delta p^{\rho}_{T,(\kappa)})$ to find an approximate solution $u^{\rho}_{T,(\kappa)}$ at the next level. \begin{defn} We define $u^{\rho}_{T,(\kappa)}(z)$ as follows. \begin{enumerate} \item If $z \in K_1$, we put \begin{equation} u^{\rho}_{T,(\kappa)}(z) = {\rm E} (\hat u^{\rho}_{1,T,(\kappa-1)}(z),V^{\rho}_{T,1,(\kappa)}(z)). \end{equation} \item If $z \in K_2$, we put \begin{equation} u^{\rho}_{T,(\kappa)}(z) = {\rm E} (\hat u^{\rho}_{2,T,(\kappa-1)}(z),V^{\rho}_{T,2,(\kappa)}(z)). \end{equation} \item If $z = (\tau,t) \in [-5T,5T]\times [0,1]$, we put \begin{equation} \aligned u^{\rho}_{T,(\kappa)}(\tau,t) = &\chi_{\mathcal B}^{\leftarrow}(\tau,t) (V^{\rho}_{T,1,(\kappa)}(\tau,t) - \Delta p^{\rho}_{T,(\kappa)})\\ &+\chi_{\mathcal A}^{\rightarrow}(\tau,t)(V^{\rho}_{T,2,(\kappa)}(\tau,t)-\Delta p^{\rho}_{T,(\kappa)})\\ &+u^{\rho}_{T,(\kappa-1)}(\tau,t)+\Delta p^{\rho}_{T,(\kappa)}. \endaligned \end{equation} \end{enumerate} \end{defn} We note that $\hat u^{\rho}_{1,T,(\kappa-1)}(z) = u^{\rho}_{T,(\kappa-1)}(z)$ on $K_1$ and $\hat u^{\rho}_{2,T,(\kappa-1)}(z) = u^{\rho}_{T,(\kappa-1)}(z)$ on $K_2$. \par Inequality (\ref{form0184a}) is immediate from the definition together with (\ref{form0182}) and (\ref{form0183}), since $0<\mu<1$. \par\medskip \noindent{\bf Step $\kappa$-3}: \begin{lem}\label{mainestimatestep13kappa} For each $\epsilon_5>0$ we have the following. If $\delta < \delta_2$ and $T > T(\delta,m,\epsilon_5)$, then there exists $\frak e^{\rho} _{i,T,(\kappa)} \in E_i$ such that $$ \left\Vert\overline\partial u^{\rho} _{T,(\kappa)} - \sum_{a=0}^{\kappa}\frak e^{\rho} _{1,T,(a)} -\sum_{a=0}^{\kappa}\frak e^{\rho} _{2,T,(a)} \right\Vert_{L_{m,\delta}^2} < C_{1,m}\mu^{\kappa}\epsilon_5 e^{-\delta T}. $$ (Here $C_{1,m}$ is as in Lemma \ref{lem18}.)
Moreover \begin{equation}\label{frakeissmall2kappa} \Vert \frak e^{\rho} _{i,T,(\kappa)}\Vert_{L_{m}^2(K_i)} <C_{15,m}\mu^{\kappa-1}e^{-\delta T}. \end{equation} \end{lem} \begin{proof} The proof is similar to the proof of Lemma \ref{mainestimatestep13} and proceeds as follows. We have: \begin{equation}\label{2151ffff} \aligned \overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(\kappa-1)},V^{\rho}_{T,1,(\kappa)}))& \\ = \overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(\kappa-1)},0)) &+\int_0^1 \frac{\partial}{\partial s} \overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(\kappa-1)},sV^{\rho}_{T,1,(\kappa)}))ds \\ =\overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(\kappa-1)},0)) &+ (D_{\hat u^{\rho}_{1,T,(\kappa-1)}}\overline\partial)(V^{\rho}_{T,1,(\kappa)}) \\ &+\int_{0}^1 ds\int_0^s \frac{\partial^2}{\partial r^2} \overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(\kappa-1)},rV^{\rho}_{T,1,(\kappa)}))dr. \endaligned\end{equation} We remark \begin{equation}\label{2152ff} \aligned &\left\Vert \int_{0}^1 ds\int_0^s \frac{\partial^2}{\partial r^2} \overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(\kappa-1)},rV^{\rho}_{T,1,(\kappa)}))dr \right\Vert_{L^2_m(K_1)} \\ &\le C_{4,m} \Vert V^{\rho}_{T,1,(\kappa)}\Vert_{L^2_{m+1,\delta}}^2 \le C_{5,m}e^{-2\delta T}\mu^{2(\kappa-1)}. \endaligned \end{equation} We have \begin{equation}\label{2155ff} \aligned \Pi_{E_1({\rm E} (\hat u^{\rho}_{1,T,(\kappa-1)},V^{\rho}_{T,1,(\kappa)}))}^{\perp}&\\ = \Pi_{E_1(\hat u^{\rho}_{1,T,(\kappa-1)})}^{\perp} &+ \int_0^1 \frac{\partial}{\partial s} \Pi_{E_1({\rm E} (\hat u^{\rho}_{1,T,(\kappa-1)},sV^{\rho}_{T,1,(\kappa)}))}^{\perp} ds \\ = \Pi_{E_1(\hat u^{\rho}_{1,T,(\kappa-1)})}^{\perp} &- (D_{\hat u^{\rho}_{1,T,(\kappa-1)}}E_1)(\cdot,V^{\rho}_{T,1,(\kappa)})\\ &+ \int_{0}^1 ds\int_0^s \frac{\partial^2}{\partial r^2} \Pi_{E_1({\rm E} (\hat u^{\rho}_{1,T,(\kappa-1)},rV^{\rho}_{T,1,(\kappa)}))}^{\perp} dr.
\endaligned\end{equation} We can estimate the third term of the right hand side of (\ref{2155ff}) in the same way as (\ref{2152ff}). \par On the other hand, (\ref{2151ffff}) implies that \begin{equation}\label{2156ff} \left\Vert\overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(\kappa-1)},V^{\rho}_{T,1,(\kappa)})) - \frak{se}^{\rho} _{1,T,(\kappa-1)}\right\Vert_{L^2_{m}(K_1)} \le C_{6,m}e^{-\delta T}\mu^{\kappa-1}. \end{equation} Therefore \begin{equation} \aligned &\Vert\Pi_{E_1({\rm E} (\hat u^{\rho}_{1,T,(\kappa-1)},V^{\rho}_{T,1,(\kappa)}))}^{\perp} \overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(\kappa-1)},V^{\rho}_{T,1,(\kappa)})) \\ &- \Pi_{E_1(\hat u^{\rho}_{1,T,(\kappa-1)})}^{\perp}\overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(\kappa-1)},V^{\rho}_{T,1,(\kappa)}))\\ &+ (D_{\hat u^{\rho}_{1,T,(\kappa-1)}}E_1)(\frak{se}^{\rho} _{1,T,(\kappa-1)},V^{\rho}_{T,1,(\kappa)})\Vert_{L^2_{m}(K_1)} \le C_{7,m}e^{-2\delta T}\mu^{\kappa-1}. \endaligned\end{equation} By (\ref{formula158}) we have: \begin{equation} \aligned \overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(\kappa-1)},0)) &+ (D_{\hat u^{\rho}_{1,T,(\kappa-1)}}\overline\partial)(V^{\rho}_{T,1,(\kappa)})\\ &- (D_{\hat u^{\rho}_{1,T,(\kappa-1)}}E_1)(\frak{se}^{\rho} _{1,T,(\kappa-1)},V^{\rho}_{T,1,(\kappa)}) \in E_1(\hat u^{\rho}_{1,T,(\kappa-1)}) \endaligned \end{equation} on $K_1$. \par Summing up, we have \begin{equation} \aligned &\Vert\Pi_{E_1({\rm E} (\hat u^{\rho}_{1,T,(\kappa-1)},V^{\rho}_{T,1,(\kappa)}))}^{\perp}(\overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(\kappa-1)},V^{\rho}_{T,1,(\kappa)})))\Vert_{L^2_m(K_1)}\\ &\le C_{10,m}e^{-2\delta T}\mu^{\kappa-1} \le C_{1,m}e^{-\delta T}\epsilon_{5,m}\mu^{\kappa}/10 \endaligned \end{equation} for $T>T_m$.
\par It follows from (\ref{2156ff}) that $$ \Vert\Pi_{E_1({\rm E} (\hat u^{\rho}_{1,T,(\kappa-1)},V^{\rho}_{T,1,(\kappa)}))}(\overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(\kappa-1)},V^{\rho}_{T,1,(\kappa)})) - \frak{se}^{\rho} _{1,T,(\kappa-1)})\Vert_{L^2_{m}(K_1)} \le C_{8,m}e^{-\delta T}\mu^{\kappa-1}. $$ Then (\ref{frakeissmall2kappa}) follows by putting $$ \aligned \frak e^{\rho}_{1,T,(\kappa)} &= \Pi_{E_1({\rm E} (\hat u^{\rho}_{1,T,(\kappa-1)},V^{\rho}_{T,1,(\kappa)}))}(\overline\partial ({\rm E} (\hat u^{\rho}_{1,T,(\kappa-1)},V^{\rho}_{T,1,(\kappa)})) - \frak{se}^{\rho} _{1,T,(\kappa-1)})\\ &\in E_1({\rm E} (\hat u^{\rho}_{1,T,(\kappa-1)},V^{\rho}_{T,1,(\kappa)})) \cong E_1. \endaligned $$ \par Let us estimate $\overline\partial u^{\rho} _{T,(\kappa)}$ on $[-T,T]\times [0,1]$. The inequality $$ \Vert\overline\partial u^{\rho} _{T,(\kappa)} \Vert_{L_{m,\delta}^2([-T,T]\times [0,1]\subset \Sigma_T)} < C_{1,m} \mu^{\kappa}\epsilon_{5,m} e^{-\delta T}/10 $$ is also a consequence of the fact that (\ref{linearized2221}) is the linearized equation of (\ref{mainequationui2}) and the estimate (\ref{142form23}). (Note the bump functions $\chi_{\mathcal B}^{\leftarrow}$ and $\chi_{\mathcal A}^{\rightarrow}$ are $\equiv 1$ there.) On $\mathcal A_T$ we have \begin{equation}\label{2estimateatA1} \overline\partial u^{\rho} _{T,(\kappa)} = \overline\partial(\chi_{\mathcal A}^{\rightarrow}(V^{\rho}_{T,2,(\kappa)}-\Delta p^{\rho}_{T,(\kappa)})+V^{\rho}_{T,1,(\kappa)} +u^{\rho}_{T,(\kappa-1)}). \end{equation} Note $$ \aligned \Vert \overline\partial(\chi_{\mathcal A}^{\rightarrow}(V^{\rho}_{T,2,(\kappa)}-\Delta p^{\rho}_{T,(\kappa)}))\Vert_{L^2_m(\mathcal A_T)} &\le C_{3,m}e^{-6T\delta}\Vert V^{\rho}_{T,2,(\kappa)}-\Delta p^{\rho}_{T,(\kappa)}\Vert_{L^2_{m+1,\delta}(\mathcal A_T \subset \Sigma_{2})}\\ &\le C_{12,m} e^{-7T\delta}\mu^{\kappa-1}.
\endaligned $$ The first inequality follows from the fact that the weight function $e_{2,\delta}$ is around $e^{6T\delta}$ on $\mathcal A_T$. The second inequality follows from (\ref{142form23}). On the other hand, the weight function $e_{T,\delta}$ is around $e^{4T\delta}$ on $\mathcal A_T$.\footnote{This drop of the weight is the main part of the idea. It was used in \cite[page 414]{fooo:book1}. See \cite[Figure 7.1.6]{fooo:book1}.} Therefore \begin{equation}\label{ff160} \Vert \overline\partial(\chi_{\mathcal A}^{\rightarrow}(V^{\rho}_{T,2,(\kappa)}-\Delta p^{\rho}_{T,(\kappa)})) \Vert_{L^2_{m,\delta}(\mathcal A_T\subset \Sigma_T)} \le C_{13,m} e^{-3T\delta}\mu^{\kappa-1}. \end{equation} Note $$ {\rm Err}^{\rho}_{2,T,(\kappa-1)} = 0 $$ on $\mathcal A_T$. Therefore in the same way as we did on $K_1$ we can show \begin{equation}\label{ff161} \Vert\overline\partial(V^{\rho}_{T,1,(\kappa)} +u^{\rho}_{T,(\kappa-1)})\Vert_{L^2_{m,\delta}(\mathcal A_T \subset \Sigma_T)} \le C_{1,m}e^{-\delta T}\epsilon_{5,m}\mu^{\kappa}/20 \end{equation} for $T > T_m$. Therefore by taking $T$ large we have \begin{equation}\label{ff162} \Vert\overline\partial u^{\rho} _{T,(\kappa)} \Vert_{L_{m,\delta}^2(\mathcal A_T\subset \Sigma_T)} < C_{1,m}\mu^{\kappa}\epsilon_{5,m} e^{-\delta T}/10. \end{equation} \par The estimates on $\mathcal B_T$ and on $([-5T,-T-1]\cup [T+1,5T]) \times [0,1]$ are similar. The proof of Lemma \ref{mainestimatestep13kappa} is complete. \end{proof} \par\medskip \noindent{\bf Step $\kappa$-4}: \begin{defn} We put $$ \aligned {\rm Err}^{\rho}_{1,T,(\kappa)} &= \chi_{\mathcal X}^{\leftarrow} \left(\overline\partial u^{\rho} _{T,(\kappa)} - \sum_{a=0}^{\kappa}\frak e^{\rho} _{1,T,(a)} \right), \\ {\rm Err}^{\rho}_{2,T,(\kappa)} &= \chi_{\mathcal X}^{\rightarrow} \left(\overline\partial u^{\rho} _{T,(\kappa)} - \sum_{a=0}^{\kappa}\frak e^{\rho} _{2,T,(a)}\right).
\endaligned$$ We regard them as elements of the weighted Sobolev spaces $L^2_{m,\delta}(\Sigma_1;(\hat u_{1,T,(\kappa)}^{\rho})^*TX\otimes \Lambda^{01})$ and $L^2_{m,\delta}(\Sigma_2;(\hat u_{2,T,(\kappa)}^{\rho})^*TX\otimes \Lambda^{01})$, respectively. (We extend them by $0$ outside a compact set.) \end{defn} We put $p^{\rho}_{(\kappa)} = p^{\rho}_{(\kappa-1)} + \Delta p^{\rho}_{T,(\kappa)}$. \par Lemma \ref{mainestimatestep13kappa} implies (\ref{form0185}) and (\ref{form0186}). \par \medskip We have thus described all the induction steps. For each fixed $m$ there exists $T_m$ such that if $T > T_m$ then $$ \lim_{\kappa \to \infty} u^{\rho} _{T,(\kappa)} $$ converges in the $L_{m+1,\delta}^2$ sense to the solution of (\ref{mainequation}). The limit is automatically of $C^{\infty}$ class by elliptic regularity. We have thus constructed the map in Theorem \ref{gluethm1}. We will prove its surjectivity and injectivity in Section \ref{surjinj} below. Before doing so we prove an exponential decay estimate of its $T$ derivative. \section{Exponential decay of $T$ derivatives} \label{subsecdecayT} We first state the result of this subsection. We recall that for $T$ sufficiently large and $\rho = (\rho_1,\rho_2) \in V_1\times_L V_2$ we have defined $u^{\rho} _{T,(\kappa)}$. We denote its limit by \begin{equation}\label{limitkappainf} u^{\rho} _{T} = \lim_{\kappa \to \infty} u^{\rho} _{T,(\kappa)} : (\Sigma_T,\partial\Sigma_T) \to (X,L). \end{equation} The main result of this subsection is an estimate of the $T$ and $\rho$ derivatives of this map. We prepare some notation to state the result. \par We change the coordinates of $\Sigma_i$ and $\Sigma_T$ as follows. In the last section we put $$ \Sigma_1 = K_1 \cup ([-5T,\infty) \times [0,1]) $$ and used $(\tau,t)$ as the coordinate on $[-5T,\infty) \times [0,1]$. This identification depends on $T$.
So we rewrite it as $$ \Sigma_1 = K_1 \cup ([0,\infty) \times [0,1]), $$ where the coordinate on $[0,\infty) \times [0,1]$ is $(\tau',t)$ with \begin{equation} \tau' =\tau + 5T. \end{equation} Similarly we rewrite $$ \Sigma_2 = ((-\infty,5T] \times [0,1]) \cup K_2 $$ as $$ \Sigma_2 = ((-\infty,0] \times [0,1]) \cup K_2 $$ and use the coordinate $(\tau'',t)$, where \begin{equation} \tau'' =\tau - 5T. \end{equation} We may use either $(\tau',t)$ or $(\tau'',t)$ as the coordinate on $ \Sigma_T \setminus (K_1\cup K_2) $. \par Let $S$ be a positive number. We have $ K_i \subset \Sigma_T. $ We put \begin{equation}\label{1104} \aligned K_1^{+S} &= K_1 \cup ([0,S]\times [0,1]) \subset \Sigma_T, \\ K_2^{+S} &= ([-S,0]\times [0,1]) \cup K_2 \subset \Sigma_T. \endaligned \end{equation} Here the inclusion $K_1 \cup ([0,S]\times [0,1]) \subset \Sigma_T$ uses the coordinate $\tau'$ and the inclusion $([-S,0]\times [0,1]) \cup K_2 \subset \Sigma_T$ uses the coordinate $\tau''$. \par We may also regard $K_i^{+S} \subset \Sigma_i$. Note that the spaces $K_i^{+S}$ are independent of $T$, as long as $10T>S$. \par We restrict the map $u^{\rho} _{T}$ to $K_i^{+S}$. We thus obtain a map $$ \text{\rm Glures}_{i,S} : [T_m,\infty) \times V_1 \times_L V_2 \to \text{\rm Map}_{L^2_{m+1}}((K_i^{+S},K_i^{+S}\cap\partial \Sigma_i),(X,L)) $$ by \begin{equation} \begin{cases} \text{\rm Glures}_{1,S}(T,\rho)(x) &= u^{\rho} _{T}(x) \qquad x \in K_1\\ \text{\rm Glures}_{1,S}(T,\rho)(\tau',t) &= u^{\rho}_T(\tau'-5T,t) \end{cases} \end{equation} \begin{equation} \begin{cases} \text{\rm Glures}_{2,S}(T,\rho)(x) &= u^{\rho} _{T}(x) \qquad x \in K_2\\ \text{\rm Glures}_{2,S}(T,\rho)(\tau'',t) &= u^{\rho}_T(\tau''+5T,t) \end{cases} \end{equation} where on the right hand sides $u^{\rho}_T$ is written in the coordinate $(\tau,t)$ of $\Sigma_T$. Here $\text{\rm Map}_{L^2_{m+1}}((K_i^{+S},K_i^{+S}\cap\partial \Sigma_i),(X,L))$ is the space of maps of $L^2_{m+1}$ class ($m$ is sufficiently large, say $m>10$).
It has the structure of a Hilbert manifold in an obvious way. This Hilbert manifold is independent of $T$, so we can define the $T$ derivative of a family of elements of $\text{\rm Map}_{L^2_{m+1}}((K_i^{+S},K_i^{+S}\cap\partial \Sigma_i),(X,L))$ parametrized by $T$. \begin{rem}\label{differentm} The domain and the target of the map $\text{\rm Glures}_{i,S}$ depend on $m$. However, its image actually lies in the set of smooth maps. Also, none of the constructions of $u^{\rho} _{T}$ depends on $m$. (The proof of the convergence of (\ref{limitkappainf}) depends on $m$; so the number $T_m$ depends on $m$.) Therefore the map $\text{\rm Glures}_{i,S}$ is {\it independent} of $m$ on the intersection of the domains. Namely, the map $\text{\rm Glures}_{i,S}$ constructed by using the $L^2_{m_1}$ norm coincides with the map $\text{\rm Glures}_{i,S}$ constructed by using the $L^2_{m_2}$ norm on $[\max\{T_{m_1},T_{m_2}\},\infty) \times V_1\times_L V_2$. \end{rem} \begin{thm}\label{exdecayT} For each $m$ and $S$ there exist $T(m), C_{16,m,S}, \delta > 0$ such that the following holds for $T>T(m)$, $n + \ell \le m - 10$ and $\ell > 0$. \begin{equation} \left\Vert \nabla_{\rho}^n \frac{d^{\ell}}{dT^{\ell}} \text{\rm Glures}_{i,S}\right\Vert_{L^2_{m+1-\ell}} < C_{16,m,S}e^{-\delta T}. \end{equation} Here $\nabla_{\rho}^n$ is the $n$-th derivative in the $\rho$ direction. \end{thm} \begin{rem} Theorem \ref{exdecayT} is basically equivalent to \cite[Lemma A1.58]{fooo:book1}. The proof below is basically the same as the one in \cite[page 776]{fooo:book1}. We add some more details. \end{rem} \begin{proof} \par The construction of $u_{T,(\kappa)}^{\rho}$ was by induction on $\kappa$. We divide the inductive step, which constructs $u_{T,(\kappa+1)}^{\rho}$ from $u_{T,(\kappa)}^{\rho}$, into two parts. \begin{enumerate} \item[(Part A)] Start from $(V^{\rho}_{T,1,(\kappa)},V^{\rho}_{T,2,(\kappa)},\Delta p^{\rho}_{T,(\kappa)})$ and end with ${\rm Err}^{\rho}_{1,T,(\kappa)}$ and ${\rm Err}^{\rho}_{2,T,(\kappa)}$.
These are Steps $\kappa$-2, $\kappa$-3 and $\kappa$-4. \item[(Part B)] Start from ${\rm Err}^{\rho}_{1,T,(\kappa)}$ and ${\rm Err}^{\rho}_{2,T,(\kappa)}$ and end with $(V^{\rho}_{T,1,(\kappa+1)},V^{\rho}_{T,2,(\kappa+1)},\Delta p^{\rho}_{T,(\kappa+1)})$. This is Step $(\kappa+1)$-1. \end{enumerate} \par\medskip We will prove the following inequalities by induction on $\kappa$, under the assumption $T > T(m)$, $\ell >0$, $n+\ell \le m-10$. \begin{eqnarray} \left\Vert \nabla_{\rho}^n \frac{\partial^{\ell}}{\partial T^{\ell}} (V^{\rho}_{T,i,(\kappa)},\Delta p^{\rho}_{T,(\kappa)})\right\Vert_{L^2_{m+1-\ell,\delta}(\Sigma_i)} &<& C_{17,m}\mu^{\kappa-1}e^{-\delta T}, \label{form182} \\ \left\Vert \nabla_{\rho}^n \frac{\partial^{\ell}}{\partial T^{\ell}} \Delta p^{\rho}_{T,(\kappa)}\right\Vert &<& C_{17,m}\mu^{\kappa-1}e^{-\delta T}, \label{form183} \\ \left\Vert \nabla_{\rho}^n \frac{\partial^{\ell}}{\partial T^{\ell}} u^{\rho}_{T,(\kappa)}\right\Vert_{L^2_{m+1-\ell,\delta}(K_i^{+5T+1})} &<& C_{18,m}e^{-\delta T}, \label{form184} \\ \left\Vert \nabla_{\rho}^n \frac{\partial^{\ell}}{\partial T^{\ell}}{\rm Err}^{\rho}_{i,T,(\kappa)} \right\Vert_{L^2_{m-\ell,\delta}(\Sigma_i)} &<& C_{19,m}\epsilon_{6,m}\mu^{\kappa}e^{-\delta T}, \label{form185} \\ \left\Vert \nabla_{\rho}^n \frac{\partial^{\ell}}{\partial T^{\ell}} \frak e^{\rho} _{i,T,(\kappa)}\right\Vert_{L^2_{m-\ell}(K_i^{\text{\rm obst}})} &<& C_{19,m} \mu^{\kappa-1}e^{-\delta T}. \label{form186} \end{eqnarray} More precisely, the claim we will prove is the following: For each $\epsilon_{6,m}$, we can choose $T(m)$ so that (\ref{form182}) and (\ref{form183}) imply (\ref{form185}) and (\ref{form186}) for $T > T(m)$, and we can choose $\epsilon_{6,m}$ so that (\ref{form185}) and (\ref{form186}) for $\kappa$ imply (\ref{form182}) and (\ref{form183}) for $\kappa + 1$. (\ref{form184}) follows from (\ref{form182}) and (\ref{form183}). \begin{rem} We use the $L^2_{m+1}$ norm on $K_i^{+5T+1}$ only in formula (\ref{form184}).
Note that we use the coordinate $(\tau',t)$ on $K_1^{+5T+1} \setminus K_1$, and $(\tau'',t)$ on $K_2^{+5T+1} \setminus K_2$. We remark also that $\Sigma_T = K_1^{+5T+1} \cup K_2^{+5T+1}$. \end{rem} \begin{rem}\label{rem136} Note that $(V^{\rho}_{T,i,(\kappa)},\Delta p^{\rho}_{T,(\kappa)})$ appearing in (\ref{form182}) is an element of the weighted Sobolev space $L^2_{m+1,\delta}((\Sigma_i,\partial \Sigma_i);(\hat u^{\rho}_{i,T,(\kappa-1)})^*TX,(\hat u^{\rho}_{i,T,(\kappa-1)})^*TL)$ that depends on $T$ and $\rho$. To make sense of $T$ and $\rho$ derivatives we identify $$ \aligned &L^2_{m+1,\delta}((\Sigma_i,\partial \Sigma_i);(\hat u^{\rho}_{i,T,(\kappa-1)})^*TX,(\hat u^{\rho}_{i,T,(\kappa-1)})^*TL)\\ &\cong L^2_{m+1,\delta}((\Sigma_i,\partial \Sigma_i);u_{i}^*TX,u_{i}^*TL) \endaligned $$ as follows. We find $V$ such that $\hat u^{\rho}_{i,T,(\kappa-1)} = {\rm E}(u_{i},V)$. We use the parallel transport with respect to the path $r \mapsto {\rm E}(u_{i},rV)$ and its complex linear part to define this isomorphism. The same remark applies to (\ref{form185}) and (\ref{form186}). \end{rem} \begin{rem} The square of the left hand side of (\ref{form182}), in case $i=1$, is: $$ \aligned &\left\Vert \nabla_{\rho}^n \frac{\partial^{\ell}}{\partial T^{\ell}} V^{\rho}_{T,1,(\kappa)}\right\Vert^2_{L^2_{m+1-\ell}(K_1)} \\ &+ \sum_{k=0}^{m+1-\ell} \int_{[0,\infty)\times [0,1]} e_{1,T}(\tau',t) \left\Vert\nabla_{\tau',t}^k\nabla_{\rho}^n \frac{\partial^{\ell}}{\partial T^{\ell}} \big( V^{\rho}_{T,1,(\kappa)} - \text{\rm Pal}(\Delta p^{\rho}_{T,(\kappa)}) \big)\right\Vert^2 d\tau'dt. \endaligned $$ Note that we apply Remark \ref{rem136} to define the $T$ and $\rho$ derivatives in the above formula. \par The case $i=2$ is similar, using the $\tau''$ coordinate. \end{rem} \par\medskip \noindent{\bf (Part A)} (See \cite[page 776 paragraph (A) and (B)]{fooo:book1}.) \par We assume (\ref{form182}) and (\ref{form183}).
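Before carrying out the estimates, it may help to record one loop of the induction schematically, with the decay orders of the $T$ and $\rho$ derivatives taken from (\ref{form182})--(\ref{form185}):
\begin{equation*}
\underbrace{(V^{\rho}_{T,i,(\kappa)},\Delta p^{\rho}_{T,(\kappa)})}_{O(\mu^{\kappa-1}e^{-\delta T})}
\ \xrightarrow{\ \text{\rm (Part A)}\ }\
\underbrace{{\rm Err}^{\rho}_{i,T,(\kappa)}}_{O(\epsilon_{6,m}\mu^{\kappa}e^{-\delta T})}
\ \xrightarrow{\ \text{\rm (Part B)}\ }\
\underbrace{(V^{\rho}_{T,i,(\kappa+1)},\Delta p^{\rho}_{T,(\kappa+1)})}_{O(\mu^{\kappa}e^{-\delta T})}.
\end{equation*}
Each loop thus gains a factor $\mu<1$, which is what closes the induction.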
\par We find that \begin{enumerate} \item \begin{equation}\label{1113formu} {\rm Err}^{\rho}_{1,T,(\kappa)}(z) = \Pi_{E_1(\hat u^{\rho}_{1,T,(\kappa-1)})}^{\perp} \overline \partial {\rm E} (\hat u^{\rho}_{1,T,(\kappa-1)}(z),V^{\rho}_{T,1,(\kappa)}(z)) \end{equation} for $z \in K_1$. \item \begin{equation}\label{form1106} \aligned &{\rm Err}^{\rho}_{1,T,(\kappa)}(\tau',t) \\ = &(1-\chi(\tau'-5T))\overline\partial\big(\chi(\tau'-4T)(V^{\rho}_{T,2,(\kappa)}(\tau'-10T,t) - \Delta p^{\rho}_{T,(\kappa)}) \\ &\qquad\qquad\qquad\qquad+V^{\rho}_{T,1,(\kappa)}(\tau',t) +u^{\rho}_{T,(\kappa-1)}(\tau',t)\big), \endaligned \end{equation} for $(\tau',t) \in [0,\infty)\times [0,1]$. (Note $\tau' = \tau''+10T$ and the variable of $V^{\rho}_{T,2,(\kappa)}$ is $(\tau'',t)$.) \end{enumerate} Here $\chi : \R \to [0,1]$ is a smooth function such that \begin{equation}\label{chichi} \chi(\tau) \begin{cases} =0 & \tau < -1 \\ =1 & \tau > 1 \\ \in [0,1] &\tau \in [-1,1]. \end{cases} \end{equation} \par Note that in Formulas (\ref{form182})-(\ref{form186}) the Sobolev norms on the left hand side are $L^2_{m+1-\ell,\delta}(\Sigma_i)$ etc., not $L^2_{m+1,\delta}(\Sigma_i)$ etc. The origin of this loss of differentiability (in the sense of Sobolev space) is the term $V^{\rho}_{T,2,(\kappa)}(\tau'-10T)$. In fact, we have $$ \frac{\partial }{\partial T}V^{\rho}_{T_1,2,(\kappa)}(\tau'-10T) = -10 \frac{\partial }{\partial \tau''}V^{\rho}_{T_1,2,(\kappa)}(\tau'-10T) $$ for a fixed $T_1$. Hence $\partial/\partial T$ is continuous only as a map $L^2_{m+1} \to L^2_m$. We remark that in (\ref{form182}) for $i=2$ we use the coordinate $(\tau'',t)$ on $(-\infty,0]\times [0,1]$ to define the $T$ derivative of $V_{T,2,(\kappa)}^{\rho}$. \par Taking this fact into account, the proof goes as follows. \par We can estimate the $T$ and $\rho$ derivatives of ${\rm Err}^{\rho}_{1,T,(\kappa)}$ on $K_1$ in the same way as in the proof of Lemma \ref{mainestimatestep13kappa}.
\begin{rem}\label{remark127} The fact we use here is that maps such as $(u,v) \mapsto {\rm E}(u,v)$ and $(u,v) \mapsto \Pi^{\perp}_{E_i(u)}(v)$ are smooth maps $L^2_{m+1,loc} \times L^2_{m+1,\delta} \to L^2_{m+1,\delta}$ or $L^2_{m+1,loc} \times L^2_{m,\delta} \to L^2_{m,\delta}$, and $u \mapsto \overline\partial u$ is a smooth map $L^2_{m+1,\delta} \to L^2_{m,\delta}$. (Since we assume $m$ is sufficiently large, this is a well-known fact.) Moreover the maps $T \mapsto u^{\rho}_{T,(\kappa-1)}$ and $T \mapsto V^{\rho}_{T,1,(\kappa)}$ are of $C^{\ell}$ class as maps $[T(m),\infty) \to L^2_{m+1-\ell,\delta}$, with their differentials estimated by the induction hypotheses (\ref{form184}) and (\ref{form182}). \par We note that $\rho \mapsto u^{\rho}_{T,(\kappa-1)}$ is smooth as a map $V_1 \times_L V_2 \to L^2_{m+1,\delta}$. \end{rem} The estimates of the $T$ and $\rho$ derivatives of (\ref{form1106}) are as follows. \par We first consider the domain $\tau' \in [4T+1,\infty)$. There we have \begin{equation}\label{form11010} \aligned {\rm Err}^{\rho}_{1,T,(\kappa)}(\tau',t) = &(1-\chi(\tau'-5T))\overline\partial(V^{\rho}_{T,2,(\kappa)}(\tau'-10T,t)\\ &+V^{\rho}_{T,1,(\kappa)}(\tau',t) +u^{\rho}_{T,(\kappa-1)}(\tau',t)- \Delta p^{\rho}_{T,(\kappa)}). \endaligned \end{equation} By the same calculation as in the proof of Lemma \ref{mainestimatestep13kappa}, (\ref{form11010}) is equal to $$ \aligned (1-\chi(\tau'-5T))\int_0^1ds \int_0^s \frac{\partial^2}{\partial r^2}& \overline\partial \big(r(V^{\rho}_{T,2,(\kappa)}(\tau'-10T) - \Delta p^{\rho}_{T,(\kappa)})\\ &+r(V^{\rho}_{T,1,(\kappa)}(\tau',t) - \Delta p^{\rho}_{T,(\kappa)})\\ & +u^{\rho}_{T,(\kappa-1)}(\tau',t) + r\Delta p^{\rho}_{T,(\kappa)}\big) dr. \endaligned $$ (Note that we are away from the support of $E_i$.)\footnote{Note $\overline\partial$ is non-constant.
So $\overline\partial (r(V^{\rho}_{T,2,(\kappa)}(\tau'-10T) - \Delta p^{\rho}_{T,(\kappa)}) +r(V^{\rho}_{T,1,(\kappa)}(\tau',t) - \Delta p^{\rho}_{T,(\kappa)}) +u^{\rho}_{T,(\kappa-1)}(\tau',t) + r\Delta p^{\rho}_{T,(\kappa)})$ is nonlinear in $r$.} Using the fact that $T \mapsto (V^{\rho}_{T,1,(\kappa)}(\tau',t)-\Delta p^{\rho}_{T,(\kappa)}) + (V^{\rho}_{T,2,(\kappa)}(\tau'-10T)-\Delta p^{\rho}_{T,(\kappa)})$ and $T \mapsto u^{\rho}_{T,(\kappa-1)}(\tau',t)$ are of $C^{\ell}$ class as maps to $L^2_{m+1-\ell,\delta}$, we can estimate it to obtain the required estimate (\ref{form185}) on this part. We remark that $T \mapsto (V^{\rho}_{T,2,(\kappa)},\Delta p^{\rho}_{T,(\kappa)})$ is of $C^{\ell}$ class, with exponential decay estimates on its $T$ derivatives, as a map $[T(m),\infty) \to L^{2}_{m-\ell+1,\delta}$. This follows from the induction hypothesis as follows. \begin{equation}\label{1106} \aligned &\left.\frac{\partial^{\ell}}{\partial T^{\ell}} \left( V^{\rho}_{T,2,(\kappa)}(\tau' - 10 T) \right)\right\vert_{T=T_1} \\ & = \sum_{\ell_1+\ell_2 = \ell} \binom{\ell}{\ell_1} (-10)^{\ell_2} \frac{\partial^{\ell_1}}{\partial T^{\ell_1}}\frac{\partial^{\ell_2}}{(\partial \tau'')^{\ell_2}} V^{\rho}_{T,2,(\kappa)}(\tau' - 10 T_1). \endaligned \end{equation} The $L^2_{m+1-\ell,\delta}$-norm of the right hand side can be estimated by (\ref{form182}). \par We next consider $\tau' \in [0,4T+1]$. There we have \begin{equation}\label{form110622} \aligned {\rm Err}^{\rho}_{1,T,(\kappa)}(\tau',t) = &\overline\partial(\chi(\tau'-4T)(V^{\rho}_{T,2,(\kappa)}(\tau'-10T)- \Delta p^{\rho}_{T,(\kappa)}) \\ &+V^{\rho}_{T,1,(\kappa)}(\tau',t) +u^{\rho}_{T,(\kappa-1)}(\tau',t)). \endaligned \end{equation} Note $$ \overline\partial u^{\rho}_{T,(\kappa-1)}(\tau',t) = {\rm Err}^{\rho}_{1,T,(\kappa-1)}(\tau',t) $$ there.
Therefore we can calculate in the same way as in the proof of Lemma \ref{mainestimatestep13kappa} to find $$ \aligned &\overline\partial(V^{\rho}_{T,1,(\kappa)}(\tau',t) +u^{\rho}_{T,(\kappa-1)}(\tau',t))\\ &=\int_0^1ds \int_0^s \frac{\partial^2}{\partial r^2} \overline\partial ( r(V^{\rho}_{T,1,(\kappa)}(\tau',t) - \Delta p^{\rho}_{T,(\kappa)}) +u^{\rho}_{T,(\kappa-1)}(\tau',t) + r\Delta p^{\rho}_{T,(\kappa)})dr. \endaligned$$ We can again estimate the right hand side by using the fact that the maps $T \mapsto (V^{\rho}_{T,1,(\kappa)}(\tau',t),\Delta p^{\rho}_{T,(\kappa)})$ and $T \mapsto u^{\rho}_{T,(\kappa-1)}(\tau',t)$ are of $C^{\ell}$ class as maps to $L^2_{m+1-\ell,\delta}$ with the estimate (\ref{form184}). \par Finally we observe that the ratio between the weight functions of $L_{m+1,\delta}^2(\Sigma_2)$ and of $L_{m+1,\delta}^2(\Sigma_T)$ is $e^{2T\delta}$ on $\tau = -T$ (that is, $\tau' = 4T$). We use this fact to estimate $\overline\partial(\chi(\tau'-4T)(V^{\rho}_{T,2,(\kappa)}(\tau'-10T)- \Delta p^{\rho}_{T,(\kappa)}))$. We thus obtain the required estimate (\ref{form185}) for ${\rm Err}^{\rho}_{1,T,(\kappa)}$ on $\tau' \in [0,4T+1]$. \par We thus obtain an estimate for ${\rm Err}^{\rho}_{1,T,(\kappa)}(\tau',t)$. \par The estimate of the derivatives of ${\rm Err}^{\rho}_{2,T,(\kappa)}(\tau'',t)$ is similar. Thus we have (\ref{form185}). \par We note that $\frak e^{\rho} _{i,T,(0)}$ is independent of $T$ as an element of $E_i$. Among the $\frak e^{\rho} _{i,T,(a)}$'s, the term $\frak e^{\rho} _{i,T,(0)}$ is the only one that does not decay exponentially with respect to $T$. Once we note this point, the rest of the proof of (\ref{form186}) is the same as the proof of Lemma \ref{mainestimatestep13kappa}. \par We finally prove (\ref{form184}). On $K_1$ we have $$ u^{\rho}_{T,(\kappa)} = {\rm E} (u^{\rho}_{T,(\kappa-1)},V^{\rho}_{T,1,(\kappa)}). $$ So using $\mu<1$, (\ref{form184}) follows from (\ref{form182}) on $K_1$.
\par On $(\tau',t) \in [0,5T+1) \times [0,1]$ we have: $$ \aligned &u^{\rho}_{T,(\kappa)}(\tau',t) \\ &= V^{\rho}_{T,1,(\kappa)}(\tau',t) +\chi(\tau'-4T)(V^{\rho}_{T,2,(\kappa)}(\tau'-10T,t)- \Delta p^{\rho}_{T,(\kappa)})\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad +u^{\rho}_{T,(\kappa-1)}(\tau',t) \\ &= \sum_{a=1}^{\kappa} V^{\rho}_{T,1,(a)}(\tau',t) + \chi(\tau'-4T)\sum_{a=1}^{\kappa}(V^{\rho}_{T,2,(a)}(\tau'-10T,t)- \Delta p^{\rho}_{T,(a)})\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad +u^{\rho}_{T,(0)}(\tau',t). \endaligned $$ Then using a calculation similar to (\ref{1106}) we obtain (\ref{form184}) on $(\tau',t) \in [0,5T+1) \times [0,1]$. \begin{rem}\label{Abremark} In \cite{Abexotic} Abouzaid used the $L^p_1$ norm for the maps $u$. He then proved that the gluing map is continuous with respect to $T$ (that is, $S$ in the notation of \cite{Abexotic}) but did not prove its differentiability with respect to $T$. (Instead, he used the technique of removing the part of the moduli space with $T>T_0$. See Subsection \ref{subsec342}. This technique certainly works for the purpose of \cite{Abexotic}.) In fact, if we use the $L^p_1$ norm instead of the $L^2_m$ norm, then the left hand side of (\ref{form184}) becomes an $L^p_{-1}$ norm, which is hard to use. \par Abouzaid mentioned in \cite[Remark 5.1]{Abexotic} that this point is related to the fact that quotients of Sobolev spaces by the diffeomorphisms in the source are not naturally equipped with the structure of a smooth Banach manifold. Indeed, in the situation where there is an automorphism of $\Sigma_2$, for example when $\Sigma_2$ is a disk with one boundary marked point $(-\infty,t)$, the $T$ parameter is killed by a part of the automorphism group. So the shift of $V^{\rho}_{T,2,(\kappa)}$ by $T$ that appears in the second term of (\ref{form1106}) will be equivalent to the action of the automorphism group of $\Sigma_2$ in such a situation.
The shift of $T$ causes the loss of differentiability in the sense of Sobolev space in the formulas (\ref{form182})-(\ref{form186}). However, at the end of the day we can still obtain differentiability to $C^{\infty}$ order, and its exponential decay, by using various {\it weighted} Sobolev spaces with various $m$ simultaneously. (See Remark \ref{differentm} also.) \end{rem} \par\medskip \noindent{\bf (Part B)} (See \cite[page 776 the paragraph next to (B)]{fooo:book1}.) \par We assume (\ref{form182})-(\ref{form186}) for $\kappa$ and will prove (\ref{form182}) and (\ref{form183}) for $\kappa+1$. This part is nontrivial only because the construction here is global (we solve a linear equation). So we first review the set-up of the function space, which is independent of $T$. \par In Definition \ref{defnfraH} we defined a function space $\frak H(E_1,E_2)$, which is a subspace of (\ref{cocsissobolev}). Since (\ref{cocsissobolev}) is still $T$-dependent, we rewrite it a bit. We consider $u_{i}^{\rho} : (\Sigma_i,\partial\Sigma_i) \to (X,L)$, which is $T$-{\it independent}. \par The maps $\hat u^{\rho}_{i,T,(\kappa)}$ are close to $u_{i}^{\rho}$. (Namely, the $C^0$ distance between them is smaller than the injectivity radius of $X$.) We take a connection on $TX$ so that $L$ is totally geodesic. We use the complex linear part of the parallel transport with respect to this connection to send $$ \bigoplus_{i=1}^2 L^2_{m,\delta}((\Sigma_i,\partial \Sigma_i);(u_{i}^{\rho})^{*}TX,(u_{i}^{\rho})^{*}TL) $$ to $$ \bigoplus_{i=1}^2 L^2_{m,\delta}((\Sigma_i,\partial \Sigma_i);(\hat u^{\rho}_{i,T,(\kappa)})^{*}TX,(\hat u^{\rho}_{i,T,(\kappa)})^{*}TL). $$ Note that $\text{\rm Ker}(D{\rm ev}_{1,\infty} - D{\rm ev}_{2,\infty})$ is sent to $\text{\rm Ker}(D{\rm ev}_{1,\infty} - D{\rm ev}_{2,\infty})$ by this map.
Therefore we obtain an isomorphism between \begin{equation}\label{Tindependentfcs} \text{\rm Ker}(D{\rm ev}_{1,\infty} - D{\rm ev}_{2,\infty}) \cap \bigoplus_{i=1}^2 L^2_{m,\delta}((\Sigma_i,\partial \Sigma_i);(u_{i}^{\rho})^{*}TX,(u_{i}^{\rho})^{*}TL) \end{equation} and \begin{equation}\label{cocsissobolevkappa} \text{\rm Ker}(D{\rm ev}_{1,\infty} - D{\rm ev}_{2,\infty}) \cap \bigoplus_{i=1}^2 L^2_{m,\delta}((\Sigma_i,\partial \Sigma_i);(\hat u^{\rho}_{i,T,(\kappa)})^{*}TX,(\hat u^{\rho}_{i,T,(\kappa)})^{*}TL). \end{equation} In case $\kappa =0$ we send $\frak H(E_1,E_2)$ by this isomorphism to obtain a subspace of (\ref{Tindependentfcs}), which we denote by $\frak H(E_1,E_2)$ by an abuse of notation. We send it to the subspace of (\ref{cocsissobolevkappa}) and denote the latter by $\frak H(E_1,E_2;\kappa,T)$. We thus have an isomorphism $$ I_{1,\kappa,T} : \frak H(E_1,E_2) \to \frak H(E_1,E_2;\kappa,T). $$ \par We next use the parallel transport in the same way to find an isomorphism $$ I_{2,\kappa,T} : L^2_{m,\delta}(\Sigma_i;(u_{i}^{\rho})^*TX \otimes \Lambda^{01}) \to L^2_{m,\delta}(\Sigma_i;(\hat u^{\rho}_{i,T,(\kappa)})^{*}TX \otimes \Lambda^{01}). $$ Thus the composition $$ I_{2,\kappa,T} ^{-1}\circ \left(D_{\hat u^{\rho}_{i,T,(\kappa-1)}}\overline{\partial} - (D_{\hat u^{\rho}_{i,T,(\kappa-1)}}E_i)((\frak {se})^{\rho} _{i,T,(\kappa-1)}, \cdot)\right) \circ I_{1,\kappa,T} $$ defines an operator $$ D_{\kappa,T} :\frak H(E_1,E_2) \to L^2_{m,\delta}(\Sigma_i;(u_{i}^{\rho})^*TX \otimes \Lambda^{01}). $$ Here the domain and the target are independent of $T$ and $\kappa$. \begin{rem} Note that $D_{\hat u^{\rho}_{i,T,(\kappa-1)}}\overline{\partial} - (D_{\hat u^{\rho}_{i,T,(\kappa-1)}}E_i)((\frak {se})^{\rho} _{i,T,(\kappa-1)},\cdot)$ is the differential operator in (\ref{linearized2221}) and (\ref{144op}). This differential operator gives the linearization of the right hand side of (\ref{1113formu}). \end{rem} \par We next eliminate the $T$- and $\kappa$-dependence of $E_i$.
We consider the finite-dimensional subspace: $$ E_i(\hat u^{\rho}_{i,T,(\kappa)}) \subset L^2_{m,\delta}(\Sigma_i;(\hat u^{\rho}_{i,T,(\kappa)})^{*}TX \otimes \Lambda^{01}). $$ Let us consider $$ E_{i,(\kappa),T} = I_{2,\kappa,T} ^{-1}(E_i(\hat u^{\rho}_{i,T,(\kappa)})), $$ which may depend on $T$. However $$ E_{i,(0)} = I_{2,\kappa,T} ^{-1}(E_i(\hat u^{\rho}_{i,T,(0)})) $$ is independent of $T$ since $\hat u^{\rho}_{i,T,(0)}= u_{i}^{\rho}$ on $K_i$. Let $E_{i,(0)}^{\perp}$ be the $L^2$ orthogonal complement of $E_{i,(0)}$ in $L^2_{m,\delta}(\Sigma_i;(u_{i}^{\rho})^{*}TX \otimes \Lambda^{01})$. \par We have \begin{equation}\label{artificialsplitting} E_{i,(\kappa),T} \oplus E_{i,(0)}^{\perp} = L^2_{m,\delta}(\Sigma_i;(u_{i}^{\rho})^*TX \otimes \Lambda^{01}). \end{equation} Therefore the inclusion induces an isomorphism $$ E_{i,(0)}^{\perp} \cong L^2_{m,\delta}(\Sigma_i;(u_{i}^{\rho})^*TX \otimes \Lambda^{01})/E_{i,(\kappa),T}. $$ We thus obtain \begin{equation} \overline D_{\kappa,T} :\frak H(E_1,E_2) \to E_{i,(0)}^{\perp}. \end{equation} The induction hypothesis implies the following: \begin{enumerate} \item There exist $C_{20,m}, C_{21,m} >0$ such that \begin{equation} C_{20,m} \Vert V\Vert_{L^2_{m+1,\delta}} \le \Vert\overline D_{0,T}(V)\Vert_{L^2_{m,\delta}} \le C_{21,m} \Vert V\Vert_{L^2_{m+1,\delta}}. \end{equation} \item \begin{equation}\label{estimateDover} \Vert\overline D_{\kappa,T}(V) - \overline D_{0,T}(V) \Vert_{L^2_{m,\delta}} \le C_{21,m} e^{-\delta T}\Vert V\Vert_{L^2_{m+1,\delta}}. \end{equation} Moreover \begin{equation}\label{11230} \left\Vert\nabla_{\rho}^n \frac{\partial^{\ell}}{\partial T^{\ell}} \overline D_{\kappa,T}(V) \right\Vert_{L^2_{m-\ell,\delta}} \le C_{22,m} e^{-\delta T} \Vert V\Vert_{L^2_{m+1,\delta}}.
\end{equation} \end{enumerate} In fact, (\ref{11230}) follows from \begin{eqnarray} &\displaystyle\left\Vert \nabla_{\rho}^n \frac{\partial^{\ell}}{\partial T^{\ell}}\hat u^{\rho}_{i,T,(\kappa)}\right\Vert_{L^2_{m-\ell}(K_i)} \le C_{23,m}e^{-\delta T},\label{1124}\\ &\displaystyle\left\Vert \nabla_{\rho}^n \frac{\partial^{\ell}}{\partial T^{\ell}}\hat u^{\rho}_{i,T,(\kappa)}\right\Vert_{L^2_{m-\ell}([S,S+1]\times [0,1])} \le C_{23,m}e^{-\delta T}\label{1125} \end{eqnarray} for any $S\in [0,\infty)$. (See also Remark \ref{newremark11}.) Note that the weighted Sobolev norm $ \Vert \nabla_{\rho}^n \frac{\partial^{\ell}}{\partial T^{\ell}}\hat u^{\rho}_{i,T,(\kappa)}\Vert_{L^2_{m-\ell,\delta}(\Sigma_i)} $ can be large because $$ \frac{\partial}{\partial T} \chi_{\mathcal B}^{\leftarrow}(\tau-T,t) u^{\rho}_{T,(\kappa-1)} $$ is only estimated by $e^{-3\delta T}$ on the support of $\chi_{\mathcal B}^{\leftarrow}(\tau-T,t)$, while the weight $e_{1,\delta}$ is roughly $e^{7T\delta}$ on the support of $\chi_{\mathcal B}^{\leftarrow}(\tau-T,t)$. However this does not cause any problem in proving (\ref{11230}). In fact the operator $\overline D_{\kappa,T}$ is a differential operator whose coefficients depend on $\hat u^{\rho}_{i,T,(\kappa)}$. So to estimate the operator norm of its derivatives with respect to the {\it weighted} Sobolev norm, we only need to estimate the local Sobolev norm without weight of $\hat u^{\rho}_{i,T,(\kappa)}$, which is provided by (\ref{1124}) and (\ref{1125}). \par We note that $\overline D_{0,T}$ is independent of $T$. So we write $\overline D_0$. Now we have: \begin{equation} \aligned \overline D_{\kappa,T}^{-1} &= \left( ( 1 + (\overline D_{\kappa,T} - \overline D_0)\overline D_0^{-1})\overline D_0\right)^{-1} \\ & = \overline D_0^{-1}\sum_{k=0}^{\infty} (-1)^k ((\overline D_{\kappa,T} - \overline D_0)\overline D_0^{-1})^k.
\endaligned \end{equation} Note that (\ref{estimateDover}), combined with the lower bound in Item (1) above, shows that the operator norm of $(\overline D_{\kappa,T} - \overline D_0)\overline D_0^{-1}$ is at most $C_{21,m}C_{20,m}^{-1}e^{-\delta T}$, so this series converges once $T$ is sufficiently large. Therefore \begin{equation}\label{derivativeDest} \left\Vert \nabla_{\rho}^n \frac{\partial^{\ell}}{\partial T^{\ell}} \overline D_{\kappa,T}^{-1}(W)\right\Vert_{L^2_{m+1-\ell,\delta}} \le C_{24,m} e^{-\delta T}\Vert W\Vert_{L^2_{m,\delta}} \end{equation} for $\ell > 0$ and $\ell + n \le m$. (Here we assume $W$ is $T$-independent.) Since $$ (V^{\rho}_{T,1,(\kappa+1)},V^{\rho}_{T,2,(\kappa+1)},\Delta p^{\rho}_{T,(\kappa+1)}) = (I_{1,\kappa,T}\circ \overline D_{\kappa,T}^{-1} \circ I_{2,\kappa,T}^{-1}) ({\rm Err}^{\rho}_{1,T,(\kappa)},{\rm Err}^{\rho}_{2,T,(\kappa)}), $$ (\ref{form185}) and (\ref{derivativeDest}) imply (\ref{form182}) and (\ref{form183}) for $\kappa +1$. \par The proof of Theorem \ref{exdecayT} is now complete. \end{proof} \begin{rem}\label{newremark11} Let us add some more explanation about the proof of (\ref{estimateDover}) and (\ref{11230}), especially the relation between the two operators $D_{\kappa,T}$ and $\overline D_{\kappa,T}$. We consider the direct sum decomposition \begin{equation}\label{artificialsplitting2} E_{i,(\kappa),T} \oplus E_{i,(0)}^{\perp} = L^2_{m,\delta}(\Sigma_i;(u_{i}^{\rho})^*TX \otimes \Lambda^{01}). \end{equation} Note that this is not an orthogonal decomposition. We take an isomorphism $$ B_{i,(\kappa),T} : L^2_{m,\delta}(\Sigma_i;(u_{i}^{\rho})^*TX \otimes \Lambda^{01}) \to L^2_{m,\delta}(\Sigma_i;(u_{i}^{\rho})^*TX \otimes \Lambda^{01}) $$ such that, with respect to the orthogonal decomposition \begin{equation}\label{bettersplitting2} E_{i,(0)} \oplus E_{i,(0)}^{\perp} = L^2_{m,\delta}(\Sigma_i;(u_{i}^{\rho})^*TX \otimes \Lambda^{01}), \end{equation} the restriction $ B_{i,(\kappa),T}\vert_{E_{i,(0)}^{\perp}} $ is the identity map and the restriction $B_{i,(\kappa),T}\vert_{E_{i,(0)}}$ is the canonical isomorphism $$ A_{i,(\kappa),T} : E_{i,(0)} \to E_{i,(\kappa),T} $$ given by parallel transport. Namely we put $$ B_{i,(\kappa),T} = A_{i,(\kappa),T} \circ \Pi_{E_{i,(0)}} + \Pi_{E_{i,(0)}^{\perp}}.
$$ It is easy to prove \begin{equation}\label{estimateDoverB} \Vert B_{i,(\kappa),T}(V) - V \Vert_{L^2_{m,\delta}} \le C_{25,m} e^{-\delta T}\Vert V\Vert_{L^2_{m,\delta}}. \end{equation} Moreover \begin{equation}\label{11230B} \left\Vert\nabla_{\rho}^n \frac{\partial^{\ell}}{\partial T^{\ell}} B_{i,(\kappa),T}(V) \right\Vert_{L^2_{m-\ell,\delta}} \le C_{26,m} e^{-\delta T} \Vert V\Vert_{L^2_{m,\delta}}. \end{equation} Note that $$ C_{i,(\kappa),T} = \Pi_{E_{i,(0)}^{\perp}}\circ B_{i,(\kappa),T}^{-1} $$ is the projection to the second factor in (\ref{artificialsplitting2}) and hence \begin{equation}\label{relDandDover} \overline D_{\kappa,T} = \Pi_{E_{i,(0)}^{\perp}}\circ B_{i,(\kappa),T}^{-1}\circ D_{\kappa,T}. \end{equation} We can use (\ref{estimateDoverB}), (\ref{11230B}) and (\ref{relDandDover}) to prove (\ref{estimateDover}) and (\ref{11230}). \end{rem} \section{Surjectivity and injectivity of the gluing map} \label{surjinj} In this section we prove surjectivity and injectivity of the map $\text{\rm Glu}_T$ in Theorem \ref{gluethm1} and complete the proof of Theorem \ref{gluethm1}.\footnote{ Here surjectivity means the second half of the statement of Theorem \ref{gluethm1}, that is `The image contains $\mathcal M^{E_1+E_2}((\Sigma_T,\vec z);\beta)_{\epsilon_3}$.'} The proof goes along the lines of \cite{Don83}. (See also \cite{freedUhlen}.) The surjectivity proof is written in \cite[Section 14]{FOn} and injectivity is proved in the same way. (\cite[Section 14]{FOn} studies the case of pseudo-holomorphic curves without boundary. However it can be adapted easily to the bordered case, as we mentioned in \cite[page 417 lines 21-26]{fooo:book1}.) Here we explain the argument in our situation in more detail. \par We begin with the following a priori estimate.
\begin{prop}$($\cite[Lemma 11.2]{FOn}$)$\label{neckaprioridecay} There exist $\epsilon_3, C_{27,m}, \delta_2 > 0$ such that if $u : (\Sigma_T,\partial\Sigma_T) \to (X,L)$ is an element of $\mathcal M^{E_1+E_2}((\Sigma_T,\vec z);\beta)_{\epsilon}$ for $0 <\epsilon<\epsilon_3$ then we have \begin{equation} \left\Vert \frac{\partial u}{\partial \tau}\right\Vert_{C^m([\tau-1,\tau+1] \times [0,1])} \le C_{27,m} e^{-\delta_2 (5T - \vert\tau\vert)}. \end{equation} \end{prop} The proof is the same as that of \cite[Lemma 11.2]{FOn}, which is given in \cite[Section 14]{FOn}, and so is omitted. \par We also have the following: \begin{lem}\label{gluedissmoothindex} $\mathcal M^{E_1+E_2}((\Sigma_T,\vec z);\beta)_{\epsilon}$ is a smooth manifold of dimension $\dim V_1 + \dim V_2 - \dim L$. \end{lem} This is a consequence of the implicit function theorem and the index sum formula. \par \medskip \begin{proof}[Proof of surjectivity] During this proof we take $m$ sufficiently large and fix it. We will fix $\epsilon$ and $T_0$ during the proof and assume $T>T_0$. (They are chosen so that the discussion below works.) Let $u : (\Sigma_T,\partial\Sigma_T) \to (X,L)$ be an element of $\mathcal M^{E_1+E_2}((\Sigma_T,\vec z);\beta)_{\epsilon}$. The purpose here is to show that $u$ is in the image of $\text{\rm Glu}_T$. We define $u_i' : (\Sigma_i,\partial\Sigma_i) \to (X,L)$ as follows. We put $p^u_0 = u(0,0) \in L$. \begin{equation} \aligned &u'_{1}(z) \\ &= \begin{cases} \chi_{\mathcal B}^{\leftarrow}(\tau-T,t) u(\tau,t) + \chi_{\mathcal B}^{\rightarrow}(\tau-T,t)p_0^u &\text{if $z = (\tau,t) \in [-5T,5T] \times [0,1]$} \\ u(z) &\text{if $z \in K_1$} \\ p_0^u &\text{if $z \in [5T,\infty)\times [0,1]$}. \end{cases} \\ &u'_2(z) \\ &= \begin{cases} \chi_{\mathcal A}^{\rightarrow}(\tau+T,t) u(\tau,t) + \chi_{\mathcal A}^{\leftarrow}(\tau+T,t)p_0^u &\text{if $z = (\tau,t) \in [-5T,5T] \times [0,1]$} \\ u(z) &\text{if $z \in K_2$} \\ p_0^u &\text{if $z \in (-\infty,-5T]\times [0,1]$}.
\end{cases} \endaligned \end{equation} Proposition \ref{neckaprioridecay} implies \begin{equation} \Vert \Pi_{E_i(u'_i)}\overline\partial u'_{i} \Vert_{L^2_{m,\delta}(\Sigma_i)} \le C_{28,m} e^{-\delta T}. \end{equation} Here we take $\delta < \delta_2/10$. On the other hand, by assumption and elliptic regularity we have \begin{equation} \Vert u'_i - u_i \Vert_{L^2_{m+1,\delta}(\Sigma_i)} \le C_{29,m} \epsilon. \end{equation} Therefore by the implicit function theorem we have the following: \begin{lem} There exists $\rho_i \in V_i$ such that \begin{equation}\label{1121} \Vert u'_i - u^{\rho_i}_i \Vert_{L^2_{m+1,\delta}(\Sigma_i)} \le C_{30,m} e^{-\delta T}, \end{equation} $\rho = (\rho_1,\rho_2) \in V_1 \times_L V_2$, and \begin{equation} \vert\rho_i\vert \le C_{31,m}\epsilon. \end{equation} \end{lem} (Note that when $\rho_i = 0$ we have $u_i^{\rho_i} = u_i$.) \par By (\ref{1121}) we have \begin{equation}\label{1123} \Vert u - u^{\rho}_T \Vert_{L^2_{m+1,\delta}(\Sigma_T)} \le C_{32,m} e^{-\delta T}. \end{equation} Here $u^{\rho}_T =\text{\rm Glu}_T(\rho)$. \par We take $V \in \Gamma((\Sigma_T,\partial \Sigma_T);(u^{\rho}_T)^*TX;(u^{\rho}_T)^*TL)$ so that $$ u(z) = {\rm E} (u^{\rho}_T(z),V(z)). $$ We define $u^s : (\Sigma_T,\partial \Sigma_T) \to (X,L)$ by \begin{equation} u^s(z) ={\rm E} (u^{\rho}_T(z),sV(z)). \end{equation} (\ref{1123}) implies \begin{equation} \Vert \Pi^{\perp}_{(E_1+E_2)(u^s)}\overline\partial u^s \Vert_{L^2_{m,\delta}(\Sigma_T)} \le C_{33,m} e^{-\delta T} \end{equation} and \begin{equation} \left\Vert \frac{\partial}{\partial s} u^s \right\Vert_{L^2_{m+1,\delta}(K_i^{+S})} \le C_{34,m} e^{-\delta T} \end{equation} for each $s \in [0,1]$. \begin{lem} If $T$ is sufficiently large, then there exists $\hat u^s : (\Sigma_T,\partial \Sigma_T) \to (X,L)$ $(s \in [0,1])$ with the following properties. \begin{enumerate} \item $$ \overline\partial \hat u^s \equiv 0 \mod (E_1+E_2)(\hat u^s).
$$ \item \begin{equation}\label{sderovatove} \left\Vert \frac{\partial}{\partial s} \hat u^s \right\Vert_{L^2_{m+1,\delta}(K_i^{+S})} \le 2C_{35,m} e^{-\delta T}. \end{equation} \item $\hat u^s = u^s$ for $s=0,1$. \end{enumerate} \end{lem} \begin{proof} Run the alternating method described in Subsection \ref{alternatingmethod} in its one-parameter family version. Since $u^s$ is already a solution for $s=0,1$, it does not change. \end{proof} \begin{lem}\label{immersion} The map $\text{\rm Glu}_T : V_1 \times_L V_2 \to \mathcal M^{E_1+E_2}((\Sigma_T,\vec z);\beta)_{\epsilon}$ is an immersion if $T$ is sufficiently large. \end{lem} \begin{proof} We consider the composition of $\text{\rm Glu}_T$ with $$ \mathcal M^{E_1+E_2}((\Sigma_T,\vec z);\beta)_{\epsilon} \to {L^2_{m+1}}((K_i^{+S},K_i^{+S}\cap\partial \Sigma_i),(X,L)) $$ defined by restriction. In the case $T = \infty$ this composition is obtained by restriction of maps. By unique continuation, this is certainly an immersion for $T=\infty$. Then Theorem \ref{exdecayT} implies that it is an immersion for sufficiently large $T$. \end{proof} Now we will prove that $$ A = \{s \in [0,1] \mid \hat u^s \in \text{image of $\text{\rm Glu}_T$}\} $$ is open and closed. Lemma \ref{gluedissmoothindex} implies that $\mathcal M^{E_1+E_2}((\Sigma_T,\vec z);\beta)_{\epsilon}$ is a smooth manifold and has the same dimension as $V_1 \times_L V_2$. Therefore Lemma \ref{immersion} implies that $A$ is open. The closedness of $A$ follows from (\ref{sderovatove}). \par Note that $0 \in A$. Therefore $1\in A$. Namely $u$ is in the image of $\text{\rm Glu}_T$, as required. \end{proof} \begin{proof}[Proof of injectivity] Let $\rho^j = (\rho_1^j,\rho_2^j) \in V_1\times_L V_2$ for $j=0,1$. We assume \begin{equation} \text{\rm Glu}_T(\rho^0) = \text{\rm Glu}_T(\rho^1) \end{equation} and \begin{equation} \Vert \rho_i^j\Vert < \epsilon. \end{equation} We will prove that $\rho^0 = \rho^1$ if $T$ is sufficiently large and $\epsilon$ is sufficiently small.
We may assume that $V_1\times_L V_2$ is connected and simply connected. Then we have a path $s \mapsto \rho^s =(\rho^s_1,\rho^s_2) \in V_1 \times_L V_2$ such that \begin{enumerate} \item $\rho^s = \rho^j$ for $s=j$, $j=0,1$. \item $$ \left\Vert \frac{\partial}{\partial s} \rho^s \right\Vert \le \Phi_1(\epsilon) $$ where $\lim_{\epsilon \to 0}\Phi_1(\epsilon) = 0$. \end{enumerate} We define $V(s) \in \Gamma((\Sigma_T,\partial \Sigma_T);(u^{\rho^0}_T)^*TX;(u^{\rho^0}_T)^*TL)$ such that $$ u^{\rho^s}_T(z) = {\rm E} (u^{\rho^0}_T(z),V(s)(z)). $$ (By (2), $u^{\rho^s}_T(z)$ is $C^0$-close to $u^{\rho^0}_T(z)$ as $\epsilon \to 0$. Therefore such a $V(s)$ exists and is unique if $\epsilon$ is small.) Note that $V(1) = V(0)$ since $u^{\rho^1}_T = u^{\rho^0}_T$. Therefore for $w \in D^2 = \{w \in \C \mid \vert w\vert \le 1\}$ there exists $V(w)$ such that \begin{enumerate} \item $V(s) = V(w)$ if $w = e^{2\pi\sqrt{-1}s}$. \item We put $w = x + \sqrt{-1}y$. \begin{equation} \left\Vert \frac{\partial}{\partial x} V(w) \right\Vert _{L^2_{m+1,\delta}(\Sigma_T)} + \left\Vert \frac{\partial}{\partial y} V(w) \right\Vert _{L^2_{m+1,\delta}(\Sigma_T)} \le \Phi_2(\epsilon) \end{equation} where $\lim_{\epsilon \to 0}\Phi_2(\epsilon) = 0$. \end{enumerate} We put $u^w(z) = {\rm E}(u^{\rho^0}_T(z),V(w)(z))$. \begin{lem} If $T$ is sufficiently large and $\epsilon$ is sufficiently small then there exists $\hat u^w : (\Sigma_T,\partial \Sigma_T) \to (X,L)$ $(w \in D^2)$ with the following properties. \begin{enumerate} \item $$ \overline\partial \hat u^w \equiv 0 \mod (E_1+E_2)(\hat u^w). $$ \item \begin{equation}\label{sderovatove131} \left\Vert \frac{\partial}{\partial x} \hat u^w \right\Vert_{L^2_{m+1,\delta}(K_i^{+S})} + \left\Vert \frac{\partial}{\partial y} \hat u^w \right\Vert_{L^2_{m+1,\delta}(K_i^{+S})} \le \Phi_3(\epsilon) \end{equation} with $\lim_{\epsilon \to 0}\Phi_3(\epsilon) = 0$. \item $\hat u^w = u^w$ for $w\in \partial D^2$.
\end{enumerate} \end{lem} \begin{proof} Run the alternating method described in Subsection \ref{alternatingmethod} in its two-parameter family version. \end{proof} \begin{lem}\label{liftinglemma} If $T$ is sufficiently large and $\epsilon$ is sufficiently small, there exists a smooth map $F : D^2 \to V_1\times_L V_2$ such that \begin{enumerate} \item $\text{\rm Glu}_T(F(w)) = \hat u^{w}$. \item If $s \in [0,1]$ then we have: $$ F(e^{2\pi\sqrt{-1}s}) =\rho^s. $$ \end{enumerate} \end{lem} \begin{proof} Note that $\rho \mapsto \text{\rm Glu}_T(\rho)$ is a local diffeomorphism. So we can apply the proof of the homotopy lifting property as follows. Let $D_{r}^2 = \{z \in \C \mid \vert z - (r-1) \vert \le r\}$. We put $$ A = \{r \in [0,1] \mid \text{$\exists$ $F : D^2_r \to V_1\times_L V_2$ satisfying (1) above and $F(-1) = \rho^{1/2}$}\}. $$ Since $\text{\rm Glu}_T$ is a local diffeomorphism, $A$ is open. We can use (\ref{sderovatove131}) to show closedness of $A$. Since $0\in A$, it follows that $1\in A$. The proof of Lemma \ref{liftinglemma} is complete. \end{proof} The proof of Theorem \ref{gluethm1} is now complete. \end{proof} \par \newpage \part{Construction of the Kuranishi structure 2: Construction in the general case}\label{generalcase} \section{Graph associated to a stable map} \label{Graph} We first recall the definition of the moduli space of (bordered) stable maps of genus zero. \begin{defn} Let $\beta \in H_2(X,L;\Z)$ and $k,\ell \ge 0$. The {\it compactified moduli space of pseudo-holomorphic disks with $k+1$ boundary marked points and $\ell$ interior marked points with boundary condition given by $L$}, which we denote by $\mathcal M_{k+1,\ell}(\beta)$, is the set of equivalence classes of $((\Sigma,\vec z,\vec z^{\text{\rm int}}),u)$, where: \begin{enumerate} \item $\Sigma$ is a bordered semi-stable curve of genus zero with one boundary component $\partial\Sigma$.
\item $u : (\Sigma,\partial\Sigma) \to (X,L)$ is a pseudo-holomorphic map of homology class $\beta$. \item $\vec z = (z_0,\dots,z_k)$ are boundary marked points. None of them are singular points and they are all distinct. We assume that they respect the cyclic order of $\partial\Sigma$. \item $\vec z^{\text{\rm int}} = (z^{\text{\rm int}}_1,\dots,z^{\text{\rm int}}_{\ell})$ are interior marked points of $\Sigma$. None of them are singular points and they are all distinct. \end{enumerate} \par We say $((\Sigma,\vec z,\vec z^{\text{\rm int}}),u)$ is {\it equivalent} to $((\Sigma',\vec z',\vec z^{\text{\rm int} \prime}),u')$ if there exists a biholomorphic map $v : \Sigma' \to \Sigma$ such that $u\circ v = u'$ and $v(z'_i) = z_i$, $v(z^{\text{\rm int} \prime}_i) = z^{\text{\rm int}}_i$. \end{defn} \begin{defn} Let $\alpha \in H_2(X;\Z)$ and $\ell \ge 0$. The {\it compactified moduli space of pseudo-holomorphic spheres with $\ell$ (interior) marked points}, which we denote by $\mathcal M_{\ell}^{\rm cl}(\alpha)$, is the set of equivalence classes of $((\Sigma,\vec z^{\text{\rm int}}),u)$, where: \begin{enumerate} \item $\Sigma$ is a semi-stable curve of genus zero without boundary. \item $u : \Sigma \to X$ is a pseudo-holomorphic map of homology class $\alpha$. \item $\vec z^{\text{\rm int}} = (z^{\text{\rm int}}_1,\dots,z^{\text{\rm int}}_{\ell})$ are marked points of $\Sigma$. None of them are singular points and they are all distinct. \end{enumerate} \par We say $((\Sigma,\vec z^{\text{\rm int}}),u)$ is {\it equivalent} to $((\Sigma',\vec z^{\text{\rm int} \prime}),u')$ if there exists a biholomorphic map $v : \Sigma' \to \Sigma$ such that $u\circ v = u'$ and $v(z^{\text{\rm int} \prime}_i) = z^{\text{\rm int}}_i$. \end{defn} \par The topology of $\mathcal M_{\ell}^{\rm cl}(\alpha)$ is defined in \cite[Definition 10.3]{FOn} and the topology of $\mathcal M_{k+1,\ell}(\beta)$ is defined in \cite[Definition 7.1.42]{fooo:book1}. (See Definition \ref{convdefn}.)
\par It is proved in \cite[Theorem 11.1 and Lemma 10.4]{FOn} that $\mathcal M_{\ell}^{\rm cl}(\alpha)$ is compact and Hausdorff. $\mathcal M_{k+1,\ell}(\beta)$ is also compact and Hausdorff. See \cite[Theorem 7.1.43]{fooo:book1} and the references therein. \par We refer \cite[Section 2.1]{fooo:book1} for the moduli space $\mathcal M_{k+1,\ell}(\beta)$. See also \cite{Liu}. \par We consider the case when $X$ is a point and denote the moduli space of that case by $\mathcal M_{k+1,\ell}$. We call it Deligne-Mumford moduli space. (This is a slight abuse of notation since Deligne-Mumford studied the case when there is no boundary.) We define $\mathcal M_{\ell}^{\text{\rm cl}}$ in the same way. \par \begin{thm}\label{existsKura} $\mathcal M_{\ell}^{\rm cl}(\alpha)$ has a Kuranishi structure (without boundary) and $\mathcal M_{k+1,\ell}(\beta)$ has a Kuranishi structure with corners. \end{thm} \begin{rem} \begin{enumerate} \item Theorem \ref{existsKura} in case of $\mathcal M_{\ell}^{\rm cl}(\alpha)$ is a special case of \cite[Theorem 7.10]{FOn}. In the case of $\mathcal M_{k+1,\ell}(\beta)$, Theorem \ref{existsKura} is \cite[Theorem 2.1.29]{fooo:book1}. \item In the case of $\mathcal M_{k+1,\ell}(\beta)$ we need to describe the way how various moduli spaces with different $k$, $\ell$, $\beta$ are related along their boundaries and corners, for the application. See \cite[Proposition 7.1.2]{fooo:book1} for the precise statement on this point. It is easy to see that the proof we will give in this note implies that version. \end{enumerate} \end{rem} Below we give a detailed proof of Theorem \ref{existsKura}. The proof is based on the proof in \cite{FOn}. The smoothness of coordinate at infinity is useful especially in the case of $\mathcal M_{k+1,\ell}(\beta)$. On that point we follow the method of \cite[Section 7.2 and Appendix A1.4]{fooo:book1}. \begin{rem} We discuss the case of genus zero here. 
We can handle the case of moduli spaces of pseudo-holomorphic curves with or without boundary, of arbitrary genus, and with an arbitrary number of boundary components, in the same way. The case of several Lagrangian submanifolds with pairwise clean intersections can also be handled in the same way. To slightly simplify the notation we restrict ourselves to the case of disks, which is the case mainly used in our book \cite{fooo:book1}, and of spheres, which was explicitly asked about in the google group `Kuranishi'. In fact {\it no} new idea is required for the generalization to higher genus etc. as far as the construction of the Kuranishi structure is concerned. \end{rem} In a way similar to \cite[Section 8]{FOn}, we stratify $\mathcal M_{k+1,\ell}(\beta)$ as follows. To each element $\frak p = [(\Sigma,\vec z,\vec z^{\text{\rm int}}),u]$ of $\mathcal M_{k+1,\ell}(\beta)$ we associate $\mathcal G = \mathcal G_{\frak p}$, a graph with some extra data, as follows. \par A vertex $\rm v$ of $\mathcal G$ corresponds to an irreducible component $\Sigma_{\rm v}$ of $\Sigma$. (It is either a disk or a sphere.) We put the data $\beta_{\rm v} = [u\vert _{\Sigma_{\rm v}}]$, which is either an element of $H_2(X,L;\Z)$ or an element of $H_2(X;\Z)$. \par To each singular point $z$ of $\Sigma$ we associate an edge ${\rm e}_z$ of $\mathcal G$. The edge ${\rm e}_z$ joins the two vertices ${\rm v}_1,{\rm v}_2$ such that $z \in \Sigma_{{\rm v}_i}$. Note that $z$ can be either a boundary or an interior singular point. We also denote by $z_{\rm e}$ the singular point of $\Sigma$ corresponding to the edge $\rm e$. \par For each vertex ${\rm v}$ we also include the data of which marked points are contained in $\Sigma_{\rm v}$. \begin{defn} We call the graph $\mathcal G$ equipped with the extra data described above the {\it combinatorial type} of $\frak p = [(\Sigma,\vec z,\vec z^{\text{\rm int}}),u]$. We denote by $\mathcal M_{k+1,\ell}(\beta;\mathcal G)$ the set of $\frak p$ with combinatorial type $\mathcal G$.
\par We write $\overset{\circ}{\mathcal M}_{k+1,\ell}(\beta)$ for the stratum $\mathcal M_{k+1,\ell}(\beta;{\rm pt})$, where ${\rm pt}$ is the graph without edges.\footnote{$\overset{\circ}{\mathcal M}_{k+1,\ell}(\beta)$ is slightly smaller than the `interior' of ${\mathcal M}_{k+1,\ell}(\beta)$. Namely elements of $\overset{\circ}{\mathcal M}_{k+1,\ell}(\beta)$ do not contain any disk or sphere bubbles. On the other hand, elements of the interior of ${\mathcal M}_{k+1,\ell}(\beta)$ may contain sphere bubbles.} \par We say that $\mathcal G$ is {\it stable} if the corresponding pseudo-holomorphic curve is stable. We say that $\mathcal G$ is {\it source stable} if the marked bordered curve obtained by forgetting the map is stable. \end{defn} Let $\mathcal G$ and $\mathcal G'$ be combinatorial types. We say $\mathcal G \succ \mathcal G'$ if $\mathcal G'$ is obtained from $\mathcal G$ by iterating the following process finitely many times. \par Take an edge ${\rm e}$ of $\mathcal G$. We shrink $\rm e$ and identify the two vertices ${\rm v}_1$, ${\rm v}_2$ contained in $\rm e$. Let ${\rm v}$ be the vertex obtained by identifying ${\rm v}_1$ and ${\rm v}_2$. We put $\beta_{\rm v} =\beta_{{\rm v}_1} + \beta_{{\rm v}_2}$. The marked points assigned to ${\rm v}_1$ or ${\rm v}_2$ are assigned to ${\rm v}$. \begin{lem} If $$ \overline{\mathcal M_{k+1,\ell}(\beta;\mathcal G)} \cap \mathcal M_{k+1,\ell}(\beta;\mathcal G') \ne \emptyset, $$ then $\mathcal G \succ \mathcal G'$. \end{lem} The proof is easy and so is omitted. \par Sometimes we add the following data to $\mathcal G$. \begin{enumerate} \item An orientation of each edge. We say that $\mathcal G$ is {\it oriented} when we include this data.\footnote{Actually in our case of genus $0$ with at least one marked point there is a canonical way to orient the edges, as follows. We remove $z_{\rm e}$ from $\Sigma$. Then there is a component which contains the $0$-th boundary marked point (or the first interior marked point if $\partial \Sigma = \emptyset$).
If $\rm v$ is a vertex contained in $\rm e$, we orient $\rm e$ so that $\rm v$ is inward if and only if the corresponding irreducible component lies in the connected component of $\Sigma$ minus $z_{\rm e}$ that contains the $0$-th boundary marked point.} \item A length $T_{\rm e} \in \R_{>0}$ assigned to each edge ${\rm e}$. \end{enumerate} We say an edge $\rm e$ is an {\it outgoing edge} of its vertex $\rm v$ and an {\it incoming edge} of its vertex ${\rm v}'$ if the orientation of $\rm e$ goes from $\rm v$ to ${\rm v}'$. By an abuse of terminology we say $\rm v$ is an {\it incoming vertex} (resp. {\it outgoing vertex}) of $\rm e$ if $\rm e$ is an incoming edge (resp. outgoing edge) of $\rm v$. \footnote{This might be different from the usual meaning of the English words incoming and outgoing.} \par We use the following notation. \par\medskip $C^0_{\mathrm d}(\mathcal G) = $ the set of the vertices that correspond to a disk component. \par $C^0_{\mathrm s}(\mathcal G) = $ the set of the vertices that correspond to a sphere component. \par $C^0(\mathcal G) = C^0_{\mathrm d}(\mathcal G) \cup C^0_{\mathrm s}(\mathcal G)$. \par $C^1_{\mathrm o}(\mathcal G) = $ the set of the edges that correspond to a boundary singular point. \par $C^1_{\mathrm c}(\mathcal G) = $ the set of the edges that correspond to an interior singular point. \par $C^1(\mathcal G) = C^1_{\mathrm o}(\mathcal G) \cup C^1_{\mathrm c}(\mathcal G)$. \par\medskip Here d, s, o, c stand for disk, sphere, open (string), closed (string), respectively. \par We define the moduli space of marked stable maps from a genus zero curve {\it without} boundary in the same way. We denote it by $\mathcal M_{\ell}^{\text{\rm cl}}(\alpha)$ where $\alpha \in H_2(X;\Z)$. ($\ell$ is the number of (interior) marked points.) In the same way we can associate to each of its elements a combinatorial type, which is a graph $\mathcal G$. In this case there is no $C^0_{\mathrm d}(\mathcal G)$ or $C^1_{\mathrm o}(\mathcal G)$.
We define $\mathcal M_{\ell}^{\text{\rm cl}}(\alpha;\mathcal G)$, $\overset{\circ}{\mathcal M}_{\ell}^{\text{\rm cl}}(\alpha)$, in the same way. \par Let us introduce some more notation. Let $\frak p \in \mathcal M_{k+1,\ell}(\beta)$. We put $$ \frak p = (\frak x,u) = ((\Sigma,\vec z,\vec z^{\rm int}),u). $$ Then we sometimes write $\frak x = \frak x_{\frak p}$, $\Sigma = \Sigma_{\frak p} =\Sigma_{\frak x}$, $\vec z = \vec z_{\frak p} =\vec z_{\frak x}$, $\vec z^{\rm int} = \vec z^{\rm int}_{\frak p} =\vec z^{\rm int}_{\frak x}$. We also write $u = u_{\frak p}$. We use a similar notation in case $\frak p \in \mathcal M_{\ell}^{\text{\rm cl}}(\alpha)$. \begin{defn} We put \begin{equation} \aligned \Gamma_{\frak p} = \{ v : \Sigma_{\frak p} \to \Sigma_{\frak p} \mid &\text{$v$ is a biholomorphic map},\,\, v(z_{\frak p,i}) =z_{\frak p,i}, \\ &v(z_{\frak p,i}^{\text{\rm int}}) =z_{\frak p,i}^{\text{\rm int}}, u_{\frak p} \circ v = u_{\frak p}\}. \endaligned \end{equation} \begin{equation}\label{2132} \aligned \Gamma^+_{\frak p} = \{ v : \Sigma_{\frak p} \to \Sigma_{\frak p} \mid &\text{$v$ is a biholomorphic map},\,\, v(z_{\frak p,i}) =z_{\frak p,i}, \\ &\exists \sigma\in \frak S_{\ell} \,\,v(z_{\frak p,i}^{\text{\rm int}}) =z_{\frak p,\sigma(i)}^{\text{\rm int}}, \,\, u_{\frak p} \circ v = u_{\frak p}\}. \endaligned \end{equation} Here $\frak S_{\ell}$ is the group of permutations of $\{1,\dots,\ell\}$. \par The assignment $v \mapsto \sigma$ defines a group homomorphism \begin{equation}\label{permrep} \Gamma^+_{\frak p} \to \frak S_{\ell}. \end{equation} When $\frak H$ is a subgroup of $\frak S_{\ell}$ we denote by $\Gamma^{\frak H}_{\frak p}$ its inverse image under (\ref{permrep}). We denote $$ \mathcal M_{k+1,\ell}(\beta;\frak H)= \mathcal M_{k+1,\ell}(\beta)/\frak H, $$ where $\frak H$ acts by permutation of the interior marked points.
\par In case $X$ is a point we write $\mathcal M_{k+1,\ell}(\frak H)$ and define the groups $\Gamma^{\frak H}_{\frak x}$, $\Gamma^{+}_{\frak x}$ for an element $\frak x \in \mathcal M_{k+1,\ell}$. Note that in our case of genus zero with at least one boundary marked point, the group $\Gamma_{\frak x}$ is trivial. (However this fact is never used in this article.) \par We define similar notions in the case of $\mathcal M_{\ell}^{\text{\rm cl}}$ etc. \end{defn} \par\medskip \section{Coordinate around the singular point} \label{coordinateinf} Let us assume that $\mathcal G$ is an oriented combinatorial type that is source stable and $\frak H$ is a subgroup of $\frak S_{\ell}$. Let $\frak x = [\Sigma,\vec z,\vec z^{\text{\rm int}}] \in \mathcal M_{k+1,\ell}(\frak H)$ with combinatorial type $\mathcal G$. It is well-known that $\mathcal M_{k+1,\ell}(\frak H)$ is an effective orbifold with boundary and corners, whose local model is $\frak V({\frak x})/\Gamma^{\frak H}_{\frak x}$. Let us describe this neighborhood in more detail below. \par For each ${\rm v} \in C^0_{\mathrm d}(\mathcal G)$, the element $\frak x$ determines a marked disk $\frak x_{\rm v} \in \overset{\circ}{\mathcal M}_{k_{\rm v}+1,\ell_{\rm v}}$. Here $k_{\rm v}$ is the sum of the number of edges $\in C^1_{\mathrm o}(\mathcal G)$ containing ${\rm v}$ and the number of boundary marked points assigned to ${\rm v}$. $\ell_{\rm v}$ is the sum of the number of edges $\in C^1_{\mathrm c}(\mathcal G)$ containing ${\rm v}$ and the number of interior marked points assigned to ${\rm v}$. (In other words the singular points of $\Sigma$ that are contained in $\Sigma_{\rm v}$ are regarded as marked points of $\frak x_{\rm v}$.) \par For each ${\rm v} \in C^0_{\mathrm s}(\mathcal G)$, the element $\frak x$ determines a marked sphere $\frak x_{\rm v} \in \overset{\circ}{\mathcal M}_{\ell_{\rm v}}^{\text{\rm cl}}$ in the same way.
\par
Let $\frak V(\frak x_{\rm v})/\Gamma^{\frak H}_{\frak x_{\rm v}}$ be the neighborhood of $\frak x_{\rm v}$ in ${\mathcal M}_{k_{\rm v}+1,\ell_{\rm v}}({\frak H})$ or in ${\mathcal M}_{\ell_{\rm v}}^{\text{\rm cl}}(\frak H)$, respectively, according to whether ${\rm v} \in C^0_{\rm d}(\mathcal G)$ or ${\rm v} \in C^0_{\rm s}(\mathcal G)$. The group $\Gamma^{\frak H}_{\frak x}$ acts on the product $\prod \frak V(\frak x_{\rm v})$. The quotient
$$
\frak V(\frak x;\mathcal G)/\Gamma^{\frak H}_{\frak x} = \left(\prod_{{\rm v}\in C^0(\mathcal G)} \frak V(\frak x_{\rm v})\right) / \Gamma^{\frak H}_{\frak x}
$$
is a neighborhood of $\frak x$ in $\mathcal M_{k+1,\ell}(\mathcal G;\frak H)$.
\par
A neighborhood of $\frak x$ in $\mathcal M_{k+1,\ell}(\frak H)$ is identified with
\begin{equation}\label{nbhdofstratum}
\left( \frak V(\frak x;\mathcal G) \times \left(\prod_{{\rm e}\in C^1_{\mathrm o}(\mathcal G)} (T_{{\rm e},0},\infty]\right) \times \left(\prod_{{\rm e}\in C^1_{\mathrm c}(\mathcal G)} ((T_{{\rm e},0},\infty] \times S^1)/\sim\right)\right)/\Gamma^{\frak H}_{\frak x}.
\end{equation}
\begin{rem}\label{rem:161}
The equivalence relation $\sim$ in (\ref{nbhdofstratum}) is defined as follows. $(T,\theta) \sim (T',\theta')$ if $(T,\theta) = (T',\theta')$ or $T=T'=\infty$.
\par
The action of $\Gamma^{\frak H}_{\frak x}$ on
$$
\left(\prod_{{\rm e}\in C^1_{\mathrm o}(\mathcal G)} (T_{{\rm e},0},\infty]\right) \times \left(\prod_{{\rm e}\in C^1_{\mathrm c}(\mathcal G)} ((T_{{\rm e},0},\infty] \times S^1)/\sim\right)
$$
is by exchanging the factors associated to the edges $\rm e$ and by rotation of the $S^1$ factors. (See the proof of Lemma \ref{Phisiequv}.)
\end{rem}
We will define a map from (\ref{nbhdofstratum}) to $\mathcal M_{k+1,\ell}({\frak H})$. (See Definition \ref{def214}.) We need to fix a coordinate of $\Sigma$ around each of the singular points for this purpose.
For the sake of consistency with the analytic construction in Section \ref{secsimple}, we use cylindrical coordinates.
\begin{defn}\label{coordinatainfdef}
Let
\begin{equation}\label{fibrationsigma}
\pi : \frak M_{\frak x_{\rm v}} \to \frak V(\frak x_{\rm v})
\end{equation}
be a fiber bundle whose fiber is a two dimensional manifold together with a fiberwise complex structure. This fiber bundle is the universal family in the sense of (2) below. We call (\ref{fibrationsigma}) with the extra data described below a {\it universal family with coordinate at infinity} if the following conditions are satisfied.
\begin{enumerate}
\item $\frak M_{\frak x_{\rm v}}$ has a fiberwise biholomorphic $\Gamma^+_{\frak x_{\rm v}}$ action and $\pi$ is $\Gamma^+_{\frak x_{\rm v}}$ equivariant.
\item For $\frak y \in \frak V(\frak x_{\rm v})$ the fiber $\pi^{-1}(\frak y)$ is biholomorphic to $\Sigma_{\frak y}$ minus the marked points corresponding to the singular points of $\frak y$.
\item As a part of the data we fix a closed subset $\frak K_{\frak x_{\rm v}} \subset \frak M_{\frak x_{\rm v}}$ such that $\pi : \frak K_{\frak x_{\rm v}} \to \frak V(\frak x_{\rm v})$ is proper.
\item We consider the direct product
\begin{equation}\label{endproductstri}
\aligned
\frak V(\frak x_{\rm v}) \times
&\bigcup_{{\rm e}\in C^1_{\mathrm o}(\mathcal G) \atop \text{${\rm e}$ is an outgoing edge of ${\rm v}$}} (0,\infty) \times [0,1] \\
&\cup \bigcup_{{\rm e}\in C^1_{\mathrm o}(\mathcal G) \atop \text{${\rm e}$ is an incoming edge of ${\rm v}$}} (-\infty,0) \times [0,1] \\
&\cup \bigcup_{{\rm e}\in C^1_{\mathrm c}(\mathcal G) \atop \text{${\rm e}$ is an outgoing edge of ${\rm v}$}} (0,\infty) \times S^1 \\
&\cup \bigcup_{{\rm e}\in C^1_{\mathrm c}(\mathcal G) \atop \text{${\rm e}$ is an incoming edge of ${\rm v}$}} (-\infty,0) \times S^1.
\endaligned
\end{equation}
(Here and hereafter the symbols $\cup$ and $\bigcup$ in (\ref{endproductstri}) denote the {\it disjoint} union.)
\par
As a part of the data we fix a diffeomorphism between $\frak M_{\frak x_{\rm v}}\setminus \frak K_{\frak x_{\rm v}}$ and (\ref{endproductstri}) that commutes with the projection to $\frak V(\frak x_{\rm v})$ and is a fiberwise biholomorphic map. Moreover the diffeomorphism sends each end corresponding to a singular point $z_{\rm e}$ to the end in (\ref{endproductstri}) corresponding to the edge ${\rm e}$.
\item The diffeomorphism in (4) extends to a fiber preserving diffeomorphism
$$
\frak M_{\frak x_{\rm v}} \cong \frak V(\frak x_{\rm v}) \times (\Sigma_{\frak x_{\rm v}} \setminus \{\text{singular points}\}).
$$
This diffeomorphism sends each of the interior or boundary marked points of the fiber of $\frak y$ to the corresponding marked point of $\{\frak y\} \times \Sigma_{\frak x_{\rm v}}$. However, this diffeomorphism does {\it not} preserve the fiberwise complex structure. As a part of the data we fix this extension of the diffeomorphism.
\item The action of an element of $\Gamma^+_{\frak x_{\rm v}}$ on (\ref{endproductstri}) is given by exchanging the factors associated to the edges $\rm e$ and by rotation of the $S^1$ factors.
\end{enumerate}
Hereafter we sometimes write {\it coordinate at infinity} in place of universal family with coordinate at infinity.
\end{defn}
\begin{exm}
Let $\frak x_{\text{\rm v}}$ be $S^2$ with $\ell + 2$ marked points
$$
z_0=0, z_1 = \infty, z_2=1, \dots, z_{\ell+1} = e^{2\pi\sqrt{-1}(\ell-1)/\ell}.
$$
Let $\frak H \subset \frak S_{\ell+2}$ be the subgroup $\frak S_{\ell}$ consisting of elements that fix $z_0, z_1$. We assume that $z_0$ and $z_1$ correspond to singular points of $\frak x$. It is easy to see that $\Gamma_{\frak x}^{\frak H} = \Z_{\ell}$. Then $\Sigma_{\frak x_{\rm v}} \setminus \{z_0,z_1\} = \R \times S^1$ and the action of $\Gamma_{\frak x}^{\frak H}$ is given by rotation of the $S^1$ factors.
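Concretely, writing the cylindrical coordinate as $z = e^{2\pi(\tau + \sqrt{-1}t)}$ (this particular convention is chosen here only for illustration), the generator of $\Gamma_{\frak x}^{\frak H} \cong \Z_{\ell}$ is the map $v(z) = e^{2\pi\sqrt{-1}/\ell}z$. It fixes $z_0 = 0$ and $z_1 = \infty$, permutes $z_2,\dots,z_{\ell+1}$ cyclically, and in the coordinate $(\tau,t)$ it acts by
$$
(\tau,t) \mapsto (\tau,t+1/\ell),
$$
which is indeed a rotation of the $S^1$ factor.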
\end{exm}
\begin{defn}\label{defn288}
Suppose we are given a coordinate at infinity for each of the $\frak x_{\rm v}$, where $\frak x_{\rm v}$ corresponds to an irreducible component of $\frak x$. We say that they are {\it invariant under the $\Gamma^+_{\frak x}$-action} if the following holds.
\par
We define a fiber bundle
\begin{equation}\label{2149}
\pi : \underset{{\rm v}\in C^0(\mathcal G)}{\bigodot}\frak M_{\frak x_{\rm v}} \to \prod_{{\rm v}\in C^0(\mathcal G)}\frak V(\frak x_{\rm v})
\end{equation}
as follows. We take the projections $\prod_{{\rm v}\in C^0(\mathcal G)}\frak V(\frak x_{\rm v}) \to \frak V(\frak x_{\rm v})$ and pull back the bundle (\ref{fibrationsigma}) by each of them. We thus obtain fiber bundles over $\prod_{{\rm v}\in C^0(\mathcal G)}\frak V(\frak x_{\rm v})$, and (\ref{2149}) is the disjoint union of those bundles over ${\rm v}\in C^0(\mathcal G)$. In other words the fiber of (\ref{2149}) at $(\frak y_{\rm v} : {\rm v} \in C^0(\mathcal G))$ is the disjoint union of the fibers of (\ref{fibrationsigma}) at the points $\frak y_{\rm v}$.
\par
The fiber bundle (\ref{2149}) has a $\Gamma^+_{\frak x_{\rm v}}$-action. We consider its restriction to
\begin{equation}\label{2endproductstri}
\pi : \underset{{\rm v}\in C^0(\mathcal G)}{\bigodot}(\frak M_{\frak x_{\rm v}}\setminus \frak K_{\frak x_{\rm v}}) \to \prod_{{\rm v}\in C^0(\mathcal G)}\frak V(\frak x_{\rm v}).
\end{equation}
The group $\Gamma^+_{\frak x}$ acts on the sum of the second factors of (\ref{endproductstri}) by exchanging the factors associated to the edges $\rm e$ and by rotation of the $S^1$ factors. We require that (\ref{2endproductstri}) is invariant under this action.
\par
Moreover we assume that the diffeomorphisms in Definition \ref{coordinatainfdef} (4)(5) are $\Gamma^+_{\frak x}$ equivariant.
\end{defn}
Now we fix coordinates at infinity for the $\frak x_{\rm v}$ that are invariant under the $\Gamma^{\frak H}_{\frak x}$ action.
We will use it to define a map from (\ref{nbhdofstratum}) to a neighborhood of $\frak x$ in $\mathcal M_{k+1,\ell}({\frak H})$ as follows. Let $(\frak y_{\rm v} : {\rm v} \in C^0(\mathcal G))$ be given with $\frak y_{\rm v} \in \frak V(\frak x_{\rm v})$. Take a representative $\Sigma_{\frak y_{\rm v}}$ of $\frak y_{\rm v}$. We put $K_{\frak y_{\rm v}} = \Sigma_{\frak y_{\rm v}}\cap \frak K_{\frak x_{\rm v}}$. The coordinate at infinity defines a biholomorphic map between $\bigcup_{{\rm v} \in C^0(\mathcal G)}(\Sigma_{\frak y_{\rm v}} \setminus K_{\frak y_{\rm v}})$ and
\begin{equation}\label{endidentify}
\aligned
&\bigcup_{{\rm e}\in C^1_{\mathrm o}(\mathcal G) \atop \text{${\rm e}$ is an outgoing edge of ${\rm v}$}} (0,\infty) \times [0,1] \\
&\cup \bigcup_{{\rm e}\in C^1_{\mathrm o}(\mathcal G) \atop \text{${\rm e}$ is an incoming edge of ${\rm v}$}} (-\infty,0) \times [0,1] \\
&\cup \bigcup_{{\rm e}\in C^1_{\mathrm c}(\mathcal G) \atop \text{${\rm e}$ is an outgoing edge of ${\rm v}$}} (0,\infty) \times S^1 \\
&\cup \bigcup_{{\rm e}\in C^1_{\mathrm c}(\mathcal G) \atop \text{${\rm e}$ is an incoming edge of ${\rm v}$}} (-\infty,0) \times S^1.
\endaligned
\end{equation}
We denote the coordinates on the summands of (\ref{endidentify}) by $(\tau'_{\rm e},t_{\rm e})$, $(\tau''_{\rm e},t_{\rm e})$, $(\tau'_{\rm e},t'_{\rm e})$, $(\tau''_{\rm e},t''_{\rm e})$, respectively. (Here we identify $S^1 = \R/\Z$ so $t_{\rm e} \in [0,1]$ or $t'_{\rm e}, t''_{\rm e} \in \R/\Z$.)
\par
Now let $((T_{\rm e};{\rm e}\in C^1_{\mathrm o}(\mathcal G)),((T_{\rm e},\theta_{\rm e});{\rm e}\in C^1_{\mathrm c}(\mathcal G)))$ be an element of
\begin{equation}\label{infnbfparam}
\left(\prod_{{\rm e}\in C^1_{\mathrm o}(\mathcal G)} (T_{{\rm e},0},\infty]\right) \times \left(\prod_{{\rm e}\in C^1_{\mathrm c}(\mathcal G)} ((T_{{\rm e},0},\infty] \times S^1)/\sim\right).
\end{equation}
(Here $\theta_{\rm e} \in \R/\Z$.)
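We remark in passing (this reformulation is added only as a guide; it is the standard description of gluing parameters) that each factor $((T_{{\rm e},0},\infty] \times S^1)/\!\sim$ in (\ref{infnbfparam}) may be identified with a disk: putting
$$
w_{\rm e} = e^{-2\pi(T_{\rm e}+\sqrt{-1}\theta_{\rm e})},
$$
the factor becomes the disk of radius $e^{-2\pi T_{{\rm e},0}}$ centered at the origin of the complex plane, with $w_{\rm e}=0$ corresponding to $T_{\rm e}=\infty$. This is well defined on the quotient because the relation $\sim$ of Remark \ref{rem:161} collapses $\{\infty\}\times S^1$ to a single point. Similarly each factor $(T_{{\rm e},0},\infty]$ is identified with the half-open interval $[0,e^{-2\pi T_{{\rm e},0}})$ via $T_{\rm e} \mapsto e^{-2\pi T_{\rm e}}$.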
\begin{defn}\label{def29}
We denote the space (\ref{infnbfparam}) by $(\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1)$.
\end{defn}
\par
We first consider the case where $T_{\rm e} \ne \infty$ for all ${\rm e}$. We define $\tau_{\rm e}$ for ${\rm e} \in C^1(\mathcal G)$ and $t_{\rm e}$ for ${\rm e}\in C^1_{\mathrm c}(\mathcal G)$ as follows.
\begin{eqnarray}
\tau_{\rm e} &=& \tau'_{\rm e} - 5T_{\rm e} = \tau''_{\rm e} + 5T_{\rm e}, \label{cctau1}\\
t_{\rm e} &=& t'_{\rm e} = t''_{\rm e} - \theta_{\rm e}.\label{ccttt1}
\end{eqnarray}
We note that (\ref{cctau1}), (\ref{ccttt1}) are consistent with the notation of Section \ref{subsecdecayT}. We consider
\begin{equation}\label{neckopen}
[-5T_{\rm e},5T_{\rm e}] \times [0,1]
\end{equation}
for each ${\rm e} \in C^1_{\mathrm o}(\mathcal G)$ with coordinate $(\tau_{\rm e},t_{\rm e})$ and
\begin{equation}\label{neckclosed}
[-5T_{\rm e},5T_{\rm e}] \times S^1
\end{equation}
for each ${\rm e} \in C^1_{\mathrm c}(\mathcal G)$ with coordinate $(\tau_{\rm e},t_{\rm e})$.
\par
We now consider the union
\begin{equation}\label{summandtoglued}
\aligned
\bigcup_{{\rm v} \in C^0(\mathcal G)}K_{\frak y_{\rm v}} &\cup \bigcup_{{\rm e}\in C^1_{\mathrm o}(\mathcal G)} [-5T_{\rm e},5T_{\rm e}] \times [0,1] \\
&\cup \bigcup_{{\rm e}\in C^1_{\mathrm c}(\mathcal G)} [-5T_{\rm e},5T_{\rm e}] \times S^1.
\endaligned
\end{equation}
(\ref{cctau1}) and (\ref{ccttt1}) describe how we glue the various summands in (\ref{summandtoglued}) to obtain a bordered Riemann surface, which is nonsingular in the present case where all $T_{\rm e} \ne \infty$.
\begin{defn}\label{def214}
We denote by ${\overline\Phi}((\frak y_{\rm v};{\rm v} \in C^0(\mathcal G)),(T_{\rm e};{\rm e}\in C^1_{\mathrm o}(\mathcal G)),((T_{\rm e},\theta_{\rm e});{\rm e}\in C^1_{\mathrm c}(\mathcal G)))$ the element of $\mathcal M_{k+1,\ell}$ represented by the above bordered Riemann surface.
\par
Hereafter we write ${\frak y}= (\frak y_{\rm v};{\rm v} \in C^0(\mathcal G))$, $\vec T^{\mathrm o} = (T_{\rm e};{\rm e}\in C^1_{\mathrm o}(\mathcal G))$, $\vec T^{\mathrm c} = (T_{\rm e};{\rm e}\in C^1_{\mathrm c}(\mathcal G))$, and $\vec \theta = (\theta_{\rm e};{\rm e}\in C^1_{\mathrm c}(\mathcal G))$. We put $\vec T = (\vec T^{\mathrm o},\vec T^{\mathrm c})$. We denote ${\overline{\Phi}}({\frak y},\vec T^{\mathrm o},(\vec T^{\mathrm c},\vec{\theta})) = {\overline{\Phi}}({\frak y},\vec T,\vec \theta) \in \mathcal M_{k+1,\ell}$.
\par\medskip
We next consider the case when some $T_{\rm e} =\infty$. We define a graph $\mathcal G'$ as follows: we shrink all the edges ${\rm e}$ of $\mathcal G$ with $T_{\rm e} \ne \infty$. The various data we associate to $\mathcal G'$ are induced by those associated to $\mathcal G$ in an obvious way. The element ${\overline{\Phi}}({\frak y},\vec T^{\mathrm o},(\vec T^{\mathrm c},\vec{\theta}))$ is contained in $\mathcal M_{k+1,\ell}(\mathcal G')$. Namely we glue (\ref{summandtoglued}) to obtain a (noncompact) bordered Riemann surface $\Sigma'$. Then we add a finite number of points (each of which corresponds to an edge with infinite length) to obtain a (singular) stable bordered curve ${\overline{\Phi}}({\frak y},\vec T^{\mathrm o},(\vec T^{\mathrm c},\vec{\theta}))$ such that ${\overline{\Phi}}({\frak y},\vec T^{\mathrm o},(\vec T^{\mathrm c},\vec{\theta}))$ minus singular points is $\Sigma'$.
\par
Thus we have defined
$$
{\overline\Phi} : \prod_{{\rm v}\in C^0(\mathcal G)}\frak V(\frak x_{\rm v}) \times (\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1) \to \mathcal M_{k+1,\ell}.
$$
\end{defn}
We define some terminology below.
\begin{defn}\label{defcoreandneck}
We call $K_{\frak y_{\rm v}}$ as in (\ref{summandtoglued}) a component of the {\it core} of $\frak y$ or of ${\overline{\Phi}}({\frak y},\vec T^{\mathrm o},(\vec T^{\mathrm c},\vec{\theta}))$.
Each connected component of the second or third term of (\ref{summandtoglued}) is called a component of the {\it neck region}. In case $T_{\rm e}$ is infinity, there is a domain identified with $([0,\infty) \cup (-\infty,0]) \times [0,1]$ or with $([0,\infty) \cup (-\infty,0]) \times S^1$ corresponding to it. We also call it a component of the neck region. The union of all the components of the core and the neck region is ${\overline{\Phi}}({\frak y},\vec T^{\mathrm o},(\vec T^{\mathrm c},\vec{\theta}))$ minus singular points.
\end{defn}
\begin{rem}\label{rem216}
Note that $\mathcal M_{k+1,\ell}$ has an $\frak S_{\ell}$ action by permutation of the interior marked points. A local chart of $\mathcal M_{k+1,\ell}$ at $\frak x$ is of the form $\frak V/\Gamma_{\frak x}$, and a local chart of $\mathcal M_{k+1,\ell}/\frak S_{\ell}$ at $[\frak x]$ is of the form $\frak V/\Gamma^+_{\frak x}$.
\end{rem}
\begin{lem}\label{Phisiequv}
The map ${\overline\Phi}$ is $\Gamma^{+}_{\frak x}$ equivariant.
\end{lem}
\begin{proof}
We first define a $\Gamma^{+}_{\frak x}$ action on (\ref{infnbfparam}). Note that an element of $\Gamma^{+}_{\frak x}$ acts on the graph $\mathcal G$ in an obvious way. So it determines how the factors of (\ref{infnbfparam}) are exchanged. The rotation part of the action is defined as follows. By Definition \ref{coordinatainfdef} (6) we can determine the rotation of the $t_{\rm e}$ coordinate induced by an element of $\Gamma^{+}_{\frak x}$. Therefore by (\ref{ccttt1}) the action on the $\theta_{\rm e}$ coordinate is determined.
\par
Once the $\Gamma^{+}_{\frak x}$ action on (\ref{infnbfparam}) is defined, the equivariance of the map ${\overline{\Phi}}$ is immediate from the definition.
\end{proof}
Note that the space (\ref{nbhdofstratum}) has a stratification. (This stratification is induced by the stratification of $(0,\infty]$ that consists of $(0,\infty)$ and $\{\infty\}$.)
The map ${\overline{\Phi}}$ respects this stratification and the stratification of $\mathcal M_{k+1,\ell}$ by $\{\mathcal M_{k+1,\ell}(\mathcal G)\}$. Moreover ${\overline{\Phi}}$ is continuous and strata-wise smooth. We do not discuss the smooth structure of (\ref{nbhdofstratum}) yet. (See Section \ref{chart}.)
\par
We remark that the map ${\overline{\Phi}}$ {\it depends} on the choice of coordinate at infinity. The next result describes how ${\overline{\Phi}}$ depends on this choice.
\par
Let
\begin{equation}
\overline{\Phi}_1 : \prod_{{\rm v}\in C^0(\mathcal G)}\frak V^{(1)}(\frak x_{\rm v}) \times (\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1) \to \mathcal M_{k+1,\ell}
\end{equation}
be the map in Definition \ref{def214}. Suppose
$$
\frak Y_0 = \overline{\Phi}_1(\frak y_0,\vec T_{\frak Y_0},\vec \theta_{\frak Y_0})
$$
and $\mathcal G_{\frak Y_0}$ is the combinatorial type of $\frak Y_0$. Note $\mathcal G_{\frak Y_0}$ is obtained from $\mathcal G_{\frak x}$ by shrinking several edges. Therefore we may regard
$$
C^1(\mathcal G_{\frak Y_0}) \subseteq C^1(\mathcal G_{\frak x}).
$$
Namely ${\rm e} \in C^1(\mathcal G_{\frak x})$ is canonically identified with an element of $C^1(\mathcal G_{\frak Y_0})$ if $T_{\frak Y_0,{\rm e}} = \infty$.
\par
We take a coordinate at infinity of $\frak Y_0$. By Definition \ref{def214} it determines an embedding
\begin{equation}
\overline{\Phi}_2 : \prod_{{\rm v}\in C^0(\mathcal G_{\frak Y_0})}\frak V^{(2)}(\frak Y_{0,\rm v}) \times (\vec T^{\rm o}_1,\infty] \times ((\vec T^{\rm c}_1,\infty] \times \vec S^1) \to \mathcal M_{k+1,\ell}.
\end{equation}
Here an element of $(\vec T^{\rm o}_1,\infty] \times ((\vec T^{\rm c}_1,\infty] \times \vec S^1)$ is $((T_{\rm e};{\rm e} \in C^1_{\rm o}(\mathcal G_{\frak Y_0})),((T_{\rm e},\theta_{\rm e});{\rm e} \in C^1_{\rm c}(\mathcal G_{\frak Y_0})))$.
\par
We put
\begin{equation}
\overline{\Phi}_{12} = \overline{\Phi}_1^{-1} \circ \overline{\Phi}_2.
\end{equation}
We next define $\Psi_{12}$. Let $(\frak z_{\rm v}) \in \prod_{{\rm v}\in C^0(\mathcal G_{\frak Y_0})}\frak V^{(2)}(\frak Y_{0,\rm v})$. We denote by $\vec{\infty} \in (\vec T^{\rm o}_1,\infty] \times ((\vec T^{\rm c}_1,\infty] \times \vec S^1)$ the point whose components are all $\infty$. Then $\overline{\Phi}_2((\frak z_{\rm v}),\vec\infty)$ has the same combinatorial type $\mathcal G_{\frak Y_0}$ as $\frak Y_0$. We define $\Psi_{12}^{\frak y}((\frak z_{\rm v}))\in \prod_{{\rm v}\in C^0(\mathcal G)}\frak V^{(1)}(\frak x_{\rm v})$ and $\vec T',\vec\theta'$ by
$$
\overline{\Phi}_1^{-1}(\overline{\Phi}_2((\frak z_{\rm v}),\vec\infty)) = (\Psi_{12}^{\frak y}((\frak z_{\rm v})),\vec T',\vec\theta').
$$
We note that $T'_{\rm e} = \infty$ if ${\rm e} \in C^1(\mathcal G_{\frak Y_0}) \subset C^1(\mathcal G_{\frak x})$. Then we put
\begin{equation}\label{2167}
\Psi_{12}((\frak z_{\rm v}),\vec T,\vec\theta) = ({\Psi}^{\frak y}_{12}((\frak z_{\rm v})),\vec T'',\vec\theta'')
\end{equation}
where
$$
T''_{\rm e} =
\begin{cases}
T_{\rm e} &\text{if ${\rm e} \in C^1(\mathcal G_{\frak Y_0})$} \\
T'_{\rm e} &\text{if ${\rm e} \in C^1(\mathcal G_{\frak x}) \setminus C^1(\mathcal G_{\frak Y_0})$},
\end{cases}
$$
$$
\theta''_{\rm e} =
\begin{cases}
\theta_{\rm e} &\text{if ${\rm e} \in C_{\rm c}^1(\mathcal G_{\frak Y_0})$} \\
\theta'_{\rm e} &\text{if ${\rm e} \in C_{\rm c}^1(\mathcal G_{\frak x}) \setminus C_{\rm c}^1(\mathcal G_{\frak Y_0})$}.
\end{cases}
$$
\begin{rem}
If $\frak Y_0$ has the same combinatorial type as $\frak x$ then $\Psi_{12}$ is the identity map. Note that even in the case $\frak Y_0=\frak x$ the map $\overline{\Phi}_{12}$ may not be the identity map since $\overline{\Phi}_j$ depends on the choice of coordinate at infinity.
\end{rem}
\par
Let $k_{T,{\rm e}} = 0,1,2,\dots$, $k_{\theta,{\rm e}} = 0,1,2,\dots$ and define
$$
\frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}} = \prod_{{\rm e}\in C^1(\mathcal G)}\frac{\partial^{k_{T,{\rm e}}}}{\partial T_{\rm e}^{k_{T,{\rm e}}}}.
$$
We define $\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}}$ in the same way. We put
$$
\vec k_{T}\cdot \vec T = \sum_{{\rm e}\in C^1(\mathcal G)} k_{T,{\rm e}}T_{\rm e}, \quad
\vec k_{\theta}\cdot \vec T^{\rm c} = \sum_{{\rm e}\in C^1_{\rm c}(\mathcal G)} k_{\theta,{\rm e}}T_{\rm e}.
$$
\begin{prop}\label{changeinfcoorprop}
In the above situation we have the following inequality for any compact subset $\frak V_0(\frak x;\mathcal G)$ of $\frak V(\frak x;\mathcal G)$:
\begin{equation}
\left\Vert \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}} \frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}} ({\overline{\Phi}}_{12} - {\Psi}_{12}) \right\Vert_{C^k} \le C_{1,k} e^{-\delta' (\vec k_{T}\cdot \vec T+\vec k_{\theta}\cdot \vec T^{\rm c})}, \label{2144}
\end{equation}
for $\vert \vec k_T\vert, \vert \vec k_{\theta}\vert \le k$ with $\vert \vec k_T\vert + \vert \vec k_{\theta}\vert \ne 0$, where the left hand side is the $C^k$ norm (as a map of $\frak y$) and $\delta' > 0$ depends only on $\delta$ and $k$.
\end{prop}
\begin{rem}
The estimates in Proposition \ref{changeinfcoorprop} hold strata-wise. Namely in the situation where some of the $T_{\rm e}$ are infinity, we only consider $\vec k_{T}, \vec k_{\theta}$ such that $k_{T,\rm e}=k_{\theta,\rm e} = 0$ for the edges $\rm e$ with $T_{\rm e} = \infty$.
\end{rem}
\begin{rem}\label{metricfiber}
During the proof of Proposition \ref{changeinfcoorprop} and also during various discussions in later sections, we need metrics on the source and the target to define various norms etc.
For this purpose we take a Riemannian metric on $X$ and also a family of metrics on the fibers of (\ref{fibrationsigma}) such that outside $\frak K_{\frak x_{\rm v}}$ it coincides with the standard flat metric (via the coordinates $\tau$ and $t$). We include it in the data of a universal family with coordinate at infinity. Since we use it only to fix norms etc., it is not an important part of those data.
\end{rem}
Proposition \ref{changeinfcoorprop} is a generalization of \cite[Lemma A1.59]{fooo:book1} and will be used for the same purpose later, to derive the exponential decay estimate of the coordinate change of our Kuranishi structure. We suspect Proposition \ref{changeinfcoorprop} is not new. However, for completeness' sake, the proof will be given later in Subsection \ref{proposss}.
\begin{rem}
In case $\frak Y_0 = \frak x$, Proposition \ref{changeinfcoorprop} implies that there exist $\vec{\Delta T} : \frak V_0(\frak x;\mathcal G) \to \R^{\# C^1(\mathcal G)}$, $\vec{\Delta \theta} : \frak V_0(\frak x;\mathcal G) \to (S^1)^{\# C^1_{\text{\rm c}}(\mathcal G)}$ such that the $\vec T$ component (resp. $\vec\theta$ component) of ${\overline{\Phi}}_{21}$ goes to $\vec T + \vec{\Delta T}$ (resp. $\vec{\theta} + \vec{\Delta \theta}$) in exponential order as $\vec T$ goes to infinity. (\ref{2144}) implies that the ${\frak y}$ component of ${\overline{\Phi}}_{21}$ goes to $\frak y$ in exponential order as $\vec T$ goes to infinity.
\end{rem}
Proposition \ref{changeinfcoorprop} describes the coordinate change (change of the parametrization) of the moduli space. A coordinate at infinity determines a parametrization of the (bordered) curve itself, since it includes the trivialization of the fiber bundle (\ref{fibrationsigma}). Proposition \ref{reparaexpest} below describes how this parametrization changes when we change the coordinate at infinity.
\par
Let $\overline{\Phi}_{12} = \overline{\Phi}_{1}^{-1} \circ \overline{\Phi}_{2}$ be as in Proposition \ref{changeinfcoorprop} and let $(\frak y_j,\vec T_j,\vec \theta_j)$ ($j=1,2$) be in the domain of $\overline{\Phi}_{j}$. We assume
\begin{equation}\label{2255}
(\frak y_1,\vec T_1,\vec \theta_1) = \overline{\Phi}_{12} (\frak y_2,\vec T_2,\vec \theta_2).
\end{equation}
Let $\Sigma_{(\frak y_j,\vec T_j,\vec \theta_j)}$ be a curve representing $\overline{\Phi}_{j}(\frak y_j,\vec T_j,\vec \theta_j)$. It comes with a coordinate at infinity. By (\ref{2255}) and stability, there exists a {\it unique} isomorphism
\begin{equation}\label{2256}
\frak v_{(\frak y_2,\vec T_2,\vec \theta_2)} : \Sigma_{(\frak y_2,\vec T_2,\vec \theta_2)} \to \Sigma_{(\frak y_1,\vec T_1,\vec \theta_1)}
\end{equation}
of marked curves.
\par
Let $K_{\rm v}^{(j)}$ be the core of $\Sigma_{(\frak y_j,\vec T_j,\vec \theta_j)}$. We take a compact subset $K_{{\rm v},0}^{(2)} \subset K_{\rm v}^{(2)}$ such that
\begin{equation}
\frak v_{(\frak y_2,\vec T_2,\vec \theta_2)} (K_{{\rm v},0}^{(2)}) \subset K_{\rm v}^{(1)}
\end{equation}
for sufficiently large $\vec T_1$. Note that the sets $K_{\rm v}^{(1)}$ and $K_{{\rm v},0}^{(2)}$ are independent of $(\frak y_2,\vec T_2,\vec \theta_2)$. Let
$$
C^k(K_{{\rm v},0}^{(2)},K_{\rm v}^{(1)})
$$
be the space of $C^k$ maps with the $C^k$ topology. The restriction of $\frak v_{(\frak y_2,\vec T_2,\vec \theta_2)}$ to $K_{{\rm v},0}^{(2)}$ defines an element of it, which we denote by
$$
\text{\rm Res}(\frak v_{(\frak y_2,\vec T_2,\vec \theta_2)}) \in C^k(K_{{\rm v},0}^{(2)},K_{\rm v}^{(1)}).
$$ \begin{prop}\label{reparaexpest} There exist $C_{2,k}$, $T_k$ such that for each ${\rm e}_0 \in C^1_{\rm c}(\mathcal G_{\frak y_2})$ we have \begin{equation} \aligned &\left\Vert \nabla_{\frak y_2}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T_2^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta_2^{\vec k_{\theta}}} \frac{\partial}{\partial T_{2,{\rm e}_0}} \text{\rm Res}(\frak v_{(\frak y_2,\vec T_2,\vec \theta_2)}) \right\Vert_{C^k} < C_{2,k}e^{-\delta_2 T_{2,{\rm e}_0}}, \\ &\left\Vert \nabla_{\frak y_2}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T_2^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta_2^{\vec k_{\theta}}} \frac{\partial}{\partial \theta_{2,{\rm e}_0}} \text{\rm Res}(\frak v_{(\frak y_2,\vec T_2,\vec \theta_2)}) \right\Vert_{C^k} < C_{2,k}e^{-\delta_2 T_{2,{\rm e}_0}}, \endaligned \end{equation} if each of $T_{2,{\rm e}}$ is greater than $T_k$ and $\vert{\vec k_{T}}\vert + \vert{\vec k_{\theta}}\vert +n \le k$. Here $\vec T_2 = (T_{2,{\rm e}}; {\rm e}\in C^1(\mathcal G_{\frak y_2}))$, $\vec \theta_2 = (\theta_{2,{\rm e}}; {\rm e}\in C^1_{\rm c}(\mathcal G_{\frak y_2}))$. \par The first inequality also holds for ${\rm e}_0 \in C^1_{\rm o}(\mathcal G_{\frak y_2})$. \end{prop} We note that when all the numbers $T_{2,{\rm e}}$ are $\infty$, $\overline{\Phi}_2(\frak y_2,\vec T_2,\vec \theta_2)$ has the same combinatorial type as $\frak Y_0$. (Note $\overline{\Phi}_2$ gives a coordinate of the Deligne-Mumford moduli space in a neighborhood of $\frak Y_0$.) 
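The passage from these derivative bounds to a bound on a difference is the elementary integration argument (recorded here for the reader's convenience): if $f$ denotes $\text{\rm Res}(\frak v_{(\frak y_2,\vec T_2,\vec \theta_2)})$ regarded as a function of the single variable $T_{2,{\rm e}_0}$, the other variables being fixed, then
$$
\left\Vert f(T_{2,{\rm e}_0}) - f(\infty)\right\Vert_{C^k}
\le \int_{T_{2,{\rm e}_0}}^{\infty}\left\Vert \frac{\partial f}{\partial T}\right\Vert_{C^k}\, dT
\le \int_{T_{2,{\rm e}_0}}^{\infty} C_{2,k}e^{-\delta_2 T}\, dT
= \frac{C_{2,k}}{\delta_2}e^{-\delta_2 T_{2,{\rm e}_0}},
$$
and summing such contributions over the edges (and arguing similarly in the $\theta_{2,{\rm e}}$ variables) gives a bound by a constant multiple of $e^{-\delta_2 \min_{\rm e} T_{2,{\rm e}}}$.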
Then, integrating on $T_{2,{\rm e}}$, Proposition \ref{reparaexpest} implies:
\begin{cor}\label{corestimatecoochange}
\begin{equation}
\left\Vert \nabla_{\frak y_2}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T_2^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta_2^{\vec k_{\theta}}} \left(\text{\rm Res}(\frak v_{(\frak y_2,\vec T_2,\vec \theta_2)}) - \text{\rm Res}(\frak v_{(\frak y_2,\vec \infty)})\right) \right\Vert_{C^k} < C_{3,k}e^{-\delta_2 T_{2,{\rm min}}},
\end{equation}
if $T_{2,{\rm e}} \ge T_{2,{\rm min}} > T_k$ for all $\rm e$ and $\vert{\vec k_{T}}\vert + \vert{\vec k_{\theta}}\vert +n \le k$. Here $T_{2,{\rm min}}=\min (T_{2,{\rm e}} ; {\rm e} \in C^1(\mathcal G_{\frak y_2}))$.
\end{cor}
\par\medskip
In later subsections we also use parametrized versions of Propositions \ref{changeinfcoorprop} and \ref{reparaexpest}, which we discuss now.
\par
Let $Q_{\rm v}$ be finite dimensional manifolds. Suppose we have a fiber bundle
\begin{equation}\label{fibrationsigmafami}
\pi : \tilde{\frak M}^{(2)}_{\frak x_{\rm v}} \to Q_{\rm v} \times \frak V(\frak x_{\rm v})
\end{equation}
that is a universal family (\ref{fibrationsigma}) when we restrict it to each of $\{\xi_{\rm v}\} \times \frak V(\frak x_{\rm v})$ for $\xi_{\rm v} \in Q_{\rm v}$. We put
$$
Q = \prod_{{\rm v} \in C^0(\mathcal G)} Q_{\rm v}.
$$
\begin{defn}\label{def:Qfamily}
A $Q$-{\it parametrized family of coordinates at infinity} is a fiber bundle (\ref{fibrationsigmafami}) together with a trivialization such that for each $\xi = (\xi_{\rm v})$ the restriction to $\{\xi_{\rm v}\} \times\frak V(\frak x_{\rm v})$ gives a coordinate at infinity in the sense of Definition \ref{coordinatainfdef}.
\end{defn}
Suppose a $Q$-parametrized family of coordinates at infinity in the above sense is given. Then we can perform the construction we already described for each $\xi$ and obtain a map
\begin{equation}
\overline{\Phi}_{2} : Q \times \prod_{{\rm v}\in C^0(\mathcal G)}\frak V(\frak x_{\rm v}) \times (\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1) \to \mathcal M_{k+1,\ell}.
\end{equation}
Note that for each $\xi \in Q$ it gives a diffeomorphism onto a neighborhood of $\frak x$ in $\mathcal M_{k+1,\ell}$.
\par
Suppose we have an (unparametrized) coordinate at infinity, that is, a fiber bundle
$$
\pi : {\frak M}^{(1)}_{\frak x_{\rm v}} \to \frak V(\frak x_{\rm v})
$$
equipped with a trivialization. It induces an embedding
$$
\overline{\Phi}_{1} : \prod_{{\rm v}\in C^0(\mathcal G)}\frak V(\frak x_{\rm v}) \times (\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1) \to \mathcal M_{k+1,\ell}.
$$
They induce a map
\begin{equation}\label{coodinatechange12}
\aligned
\overline{\Phi}_{12}: &Q \times \prod_{{\rm v}\in C^0(\mathcal G)}\frak V(\frak x_{\rm v}) \times (\vec T^{\rm o \prime}_0,\infty] \times ((\vec T^{\rm c \prime}_0,\infty] \times \vec S^1)\\
&\to \prod_{{\rm v}\in C^0(\mathcal G)}\frak V(\frak x_{\rm v}) \times (\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1)
\endaligned
\end{equation}
by the formula:
$$
\overline{\Phi}_{1}(\overline{\Phi}_{12}(\xi,\frak y,\vec T,\vec\theta)) = \overline{\Phi}_{2}(\xi,\frak y,\vec T,\vec\theta).
$$
Here $\vec T^{\rm o \prime}_0$ and $\vec T^{\rm c \prime}_0$ are sufficiently large compared with $\vec T^{\rm o}_0$ and $\vec T^{\rm c}_0$.
\par
Moreover we have a family of biholomorphic maps:
\begin{equation}\label{mapvbra}
\frak v_{(\xi,\frak y,\vec T,\vec\theta)} : \Sigma_{\vec T,\vec\theta}^{\frak y,\xi,(2)} \to \Sigma_{\vec T',\vec \theta'}^{\frak y',(1)}.
\end{equation}
Here $(\frak y',\vec T',\vec\theta') = \overline{\Phi}_{12}(\xi,\frak y,\vec T,\vec \theta)$ and $\Sigma_{\vec T',\vec \theta'}^{\frak y',(1)}$, $\Sigma_{\vec T,\vec\theta}^{\frak y,\xi,(2)}$ are marked bordered curves representing $\overline{\Phi}_{1}(\overline{\Phi}_{12}(\xi,\frak y,\vec T,\vec\theta))$ and $\overline{\Phi}_{2}(\xi,\frak y,\vec T,\vec\theta)$, respectively.
\begin{lem}\label{changeinfcoorproppara}
There exist $C_{4,k}$, $C_{5,k}$ such that:
\begin{equation}\label{2336}
\left\Vert \nabla_{\xi}^{\vec k_{\xi}}\frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}} \frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}} ({\overline{\Phi}}_{12}(\xi,\frak y,\vec T,\vec \theta) - \Psi_{12}(\frak y,\vec T,\vec \theta)) \right\Vert_{C^k} \le C_{4,k} e^{-\delta (\vec k_{T}\cdot \vec T+\vec k_{\theta}\cdot \vec T^{\rm c})}
\end{equation}
for $\vert \vec k_{\xi}\vert$, $\vert\vec k_T\vert, \vert \vec k_{\theta}\vert \le k$, if each of the $T_{{\rm e}}$ is greater than $T_k$. The left hand side is the $C^k$ norm (as a function of $\frak y$).
Moreover for each ${\rm e}_0 \in C^1_{\rm c}(\mathcal G_{\frak y_2})$ we have
\begin{equation}\label{2337}
\aligned
&\left\Vert \nabla_{\xi}^{\vec k_{\xi}}\nabla_{\frak y}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}} \frac{\partial}{\partial T_{{\rm e}_0}} \text{\rm Res}(\frak v_{(\xi,\frak y,\vec T,\vec \theta)}) \right\Vert_{C^k} < C_{5,k}e^{-\delta_2 T_{{\rm e}_0}}, \\
&\left\Vert \nabla_{\xi}^{\vec k_{\xi}}\nabla_{\frak y}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}} \frac{\partial}{\partial \theta_{{\rm e}_0}} \text{\rm Res}(\frak v_{(\xi,\frak y,\vec T,\vec \theta)}) \right\Vert_{C^k} < C_{5,k}e^{-\delta_2 T_{{\rm e}_0}},
\endaligned
\end{equation}
if each of the $T_{{\rm e}}$ is greater than $T_k$ and $\vert \vec k_{\xi}\vert + \vert{\vec k_{T}}\vert + \vert{\vec k_{\theta}}\vert +n \le k$.
\par
The first inequality of (\ref{2337}) also holds for ${\rm e}_0 \in C^1_{\rm o}(\mathcal G_{\frak y_2})$.
\end{lem}
Note that (\ref{2336}), (\ref{2337}) are parametrized versions of Propositions \ref{changeinfcoorprop}, \ref{reparaexpest}, respectively. For the proof, see Section \ref{proposss}.
\par\medskip
\section{Stabilization of the source by adding marked points and obstruction bundles}
\label{stabilization}

Let $((\Sigma,\vec z,\vec z^{\text{\rm int}}),u) = (\frak x,u) \in \mathcal M_{k+1,\ell}(\beta;\mathcal G)$. We assume that $\mathcal G$ is stable but is not source stable. In Section \ref{secsimple} we assumed that the source is stable. In order to carry out the analytic details in the general case in a way similar to Section \ref{secsimple}, we stabilize the source by adding marked points.
In other words, we use the method of \cite[appendix]{FOn} for this purpose.\footnote{16 years of experience shows that the method of \cite[appendix]{FOn} is easier to use in various applications than the method of \cite[Section 13]{FOn}.} \begin{rem}\label{rem161} We note that the method of \cite[appendix]{FOn} had been used earlier in various places by many people. A non-exhaustive list is \cite[Proposition 7.11, Theorem 9.1]{Wo}, \cite[appendix]{FOn}, \cite[beginning of Section 3 and the proof of Lemma 3.1]{LiTi98}, \cite[page 395]{Si}, \cite[page 424]{fooo:book1}, \cite[Section 4.3]{fooo:toricmir}. See also \cite[(3.9)]{Rua99}. \end{rem} We recall: \begin{defn} An irreducible component $\frak x_{\rm v} = (\Sigma_{\rm v},\vec z_{\rm v},\vec z_{\rm v}^{\text{\rm int}})$ of $\frak x$ is said to be {\it unstable}, if and only if one of the following holds: \begin{enumerate} \item $\frak x_{\rm v} \in \mathcal M_{k_{\rm v}+1,\ell_{\rm v}}$ and $k_{\rm v} + 1+ 2\ell_{\rm v} < 3$. \item $\frak x_{\rm v} \in \mathcal M^{\rm cl}_{\ell_{\rm v}}$ and $\ell_{\rm v} < 3$. \end{enumerate} \end{defn} There is at least one boundary marked point in case $\frak x_{\rm v}$ is a disk ($\frak x \in \mathcal M_{k+1,\ell}$ and $k+1 > 0$), and at least one interior marked point in case $\frak x_{\rm v}$ is a sphere. (This is because it should be attached to a disk or to a sphere.) (Note we assume $\ell \ge 1$ in the case of $\mathcal M_{\ell}^{\rm cl}$.) Therefore there are three cases where $\frak x_{\rm v}$ is unstable: \begin{enumerate} \item[(a)] $\frak x_{\rm v}$ is a disk. $\frak x_{\rm v} \in \mathcal M_{k_{\rm v}+1,\ell_{\rm v}}$ and $k_{\rm v} = 0$ or $1$. $\ell_{\rm v} = 0$. \item[(b)] $\frak x_{\rm v}$ is a sphere. $\frak x_{\rm v} \in \mathcal M^{\rm cl}_{\ell_{\rm v}}$ and $\ell_{\rm v} =2$. \item[(c)] $\frak x_{\rm v}$ is a sphere. $\frak x_{\rm v} \in \mathcal M^{\rm cl}_{\ell_{\rm v}}$ and $\ell_{\rm v} =1$.
\end{enumerate} \begin{rem} In the case of higher genus there are some other kinds of irreducible components that are unstable. For example, $T^2$ without marked points is unstable. We can handle them in the same way. If we consider also $\mathcal M_{0}^{\rm cl}(\alpha)$, then $\mathcal M_{0}^{\rm cl}$ also appears. \end{rem} \begin{defn}(\cite[Section 13 p989 and appendix p1047]{FOn}) A {\it minimal stabilization} is a choice of additional interior marked points, where we put one interior marked point $w_{\rm v}$ of $\Sigma_{\rm v}$ for each $\frak x_{\rm v}$ satisfying (a) or (b) above and two interior marked points $w_{{\rm v},1}$, $w_{{\rm v},2}$ for each $\frak x_{\rm v}$ satisfying (c) above, so that the following holds. \begin{enumerate} \item $w_{\rm v} \notin \vec z_{\rm v}^{\text{\rm int}}$. $w_{{\rm v},1}, w_{{\rm v},2} \notin \vec z_{\rm v}^{\text{\rm int}}$. They are not singular. \item $u$ is an immersion at $w_{\rm v}$, $w_{{\rm v},1}$, $w_{{\rm v},2}$. \item Let $v \in \Gamma_{(\frak x,u)}^+$ such that $v\Sigma_{\rm v} = \Sigma_{\rm v'}$. Suppose $\frak x_{\rm v}$ satisfies (a) or (b) above. Then $v w_{\rm v} = v' w_{\rm v'}$ for some $v' \in \Gamma_{(\frak x_{\rm v'},u)}^+$. Suppose $\frak x_{\rm v}$ satisfies (c) above. Then there exists $v' \in \Gamma_{(\frak x_{\rm v'},u)}^+$ such that $v w_{{\rm v},i} = v' w_{{\rm v}',i}$ for $i=1,2$. \item $w_{{\rm v},1}\ne v'w_{{\rm v},2}$ for any $v' \in \Gamma_{(\frak x_{\rm v},u)}^+$. \end{enumerate} \end{defn} (We add three marked points in the case of $\mathcal M_{0}^{\rm cl}$.) \begin{defn}\label{symstabili} A {\it symmetric stabilization} is a choice of additional marked points $\vec w = (w_1,\dots,w_{\ell'}) \in \text{\rm Int}\,\Sigma$, such that: \begin{enumerate} \item $\vec w \cap \vec z^{\text{\rm int}} = \emptyset$. \item $w_i \ne w_j$ for $i\ne j$. \item $u$ is an immersion at each $w_i$. \item $(\Sigma, \vec z, \vec w \cup \vec z^{\text{\rm int}})$ is stable. 
\item For each $v \in \Gamma_{(\frak x,u)}^+$ there exists $\sigma_{v} \in \frak S_{\ell'}$, such that $$ v(w_i) = w_{\sigma_{v}(i)}. $$ \end{enumerate} \end{defn} We note that a minimal stabilization induces a symmetric stabilization. Namely we take $$ \aligned &\{v w_{\rm v} \mid v \in \Gamma_{(\frak x_{\rm v},u)}^+, \,\, \text{$\frak x_{\rm v}$ satisfies (a) or (b)}\}\\ &\cup \{v w_{{\rm v},i} \mid v \in \Gamma_{(\frak x_{\rm v},u)}^+, i=1,2, \,\, \text{$\frak x_{\rm v}$ satisfies (c)}\}. \endaligned $$ Since the notion of symmetric stabilization is more general, we use symmetric stabilization in this note. Symmetric stabilization was used in \cite{fooo:toricmir}. \par We write $$ \frak x\cup \vec w = (\Sigma, \vec z, \vec z^{\text{\rm int}}\cup \vec w ) $$ when $\frak x = (\Sigma, \vec z, \vec z^{\text{\rm int}})$. \begin{rem} In our genus zero case, Definition \ref{symstabili} (4) implies that the automorphism group of $(\Sigma, \vec z, \vec z^{\text{\rm int}}\cup \vec w)$ is trivial.\footnote{In the case of higher genus, we may include the triviality of the automorphism as a part of the definition of the symmetric stabilization. If we do so then (\ref{sigmahomotoS}) is still an injective homomorphism.} So we can define an {\it injective} homomorphism \begin{equation}\label{sigmahomotoS} \sigma : \Gamma_{(\frak x,u)} \to \frak S_{\ell'} \end{equation} by $$ v(w_i) = w_{\sigma(i)}. $$ (Here $\frak S_{\ell'}$ is the symmetric group of order $\ell'!$.) We denote by $\frak H_{(\frak x,u)}$ the image of (\ref{sigmahomotoS}). In a similar way we obtain an injective homomorphism \begin{equation} \sigma : \Gamma^+_{(\frak x,u)} \to \frak S_{\ell}\times \frak S_{\ell'}. \end{equation} We denote its image by $\frak H^+_{(\frak x,u)}$. \end{rem} \par We use the notion of symmetric stabilization of $\frak x \in \mathcal M_{k+1,\ell}(\beta;\mathcal G)$ to define the notion of obstruction bundle data as follows. 
\begin{defn}\label{obbundeldata} An {\it obstruction bundle data $\frak E_{\frak p}$ centered at} $$ \frak p = (\frak x,u) = ((\Sigma,\vec z,\vec z^{\text{\rm int}}),u) \in \mathcal M_{k+1,\ell}(\beta;\mathcal G) $$ consists of the data satisfying the conditions described below. \begin{enumerate} \item A symmetric stabilization $\vec w = (w_1,\dots,w_{\ell'})$ of $(\frak x,u)$. We denote by $\mathcal G_{\vec w \cup \frak x}$ the combinatorial type of $\vec w \cup \frak x$. \item A neighborhood $\frak V(\frak x_{\rm v} \cup \vec w_{\rm v})$ of $ \frak x_{\rm v} \cup \vec w_{\rm v} = (\Sigma_{\frak x_{\rm v}},\vec z_{\frak x_{\rm v}},\vec z^{\rm int}_{\rm v} \cup \vec w_{\rm v})$ in $\overset{\circ}{\mathcal M} _{k_{\rm v}+1,\ell_{\rm v}+\ell_{\rm v}'}$ or $\overset{\circ}{\mathcal M}^{\rm cl} _{\ell_{\rm v}+\ell_{\rm v}'}$. Here $\frak x_{\rm v} \in \overset{\circ}{\mathcal M} _{k_{\rm v}+1,\ell_{\rm v}}$ or $\in \overset{\circ}{\mathcal M}^{\rm cl} _{\ell_{\rm v}+\ell_{\rm v}'}$ is an irreducible component of $\frak x$ and $\vec w_{\rm v}$ is the part of $\vec w$ that is contained in this irreducible component. \item A universal family with coordinate at infinity of $\frak x_{\rm v} \cup \vec w_{\rm v}$ defined on $\frak V(\frak x_{\rm v} \cup \vec w_{\rm v})$. (We use the notation of Definition \ref{coordinatainfdef}.) We assume that it is invariant under the $\Gamma_{(\frak x\cup \vec w,u)}^{\frak H^+_{(\frak x,u)}}$ action in the sense we will explain later. \item A compact subset $K^{\rm obst}_{\rm v}$ such that $K^{\rm obst}_{\rm v} \times \frak V(\frak x_{\rm v}\cup \vec w_{\rm v})$ is contained in $\frak K_{\frak x_{\rm v}}$, which is defined in Definition \ref{coordinatainfdef} (3). We assume that they are $\Gamma_{(\frak x\cup \vec w,u)}^{\frak H^+_{(\frak x,u)}}$ invariant in the sense we will explain later. We call $K^{\rm obst}_{\rm v}$ the {\it support of the obstruction bundle}.
\item A $\frak y \in \frak V(\frak x_{\rm v} \cup \vec w_{\rm v})$-parametrized smooth family of finite dimensional complex linear subspaces $E_{\frak p,{\rm v}}(\frak y,u)$ of $$ \Gamma_0(\text{\rm Int}\,K^{\rm obst}_{\rm v}; u^*TX \otimes \Lambda^{01}). $$ Here $\Gamma_0$ denotes the set of the smooth sections with compact support on the domain $\Sigma_{\frak y_{\rm v}}$ induced by $\frak y_{\rm v} \in \frak V(\frak x_{\rm v} \cup \vec w_{\rm v})$. We regard $u : \Sigma_{\frak x_{\rm v}} \to X$ also as a map from $\Sigma_{\frak y_{\rm v}}$ by using the smooth trivialization of the universal family given as a part of Definition \ref{coordinatainfdef} (5). \par We assume that $\bigoplus_{{\rm v} \in C^0(\mathcal G)} E_{{\rm v}}$ is invariant under the $\Gamma_{(\frak x\cup \vec w,u)}^{\frak H^+_{\frak p}}$ action in the sense we will explain later. \item For each $\text{\rm v} \in C^0_{\rm d}(\mathcal G_{\frak p})$ and $\frak y_{\rm v} \in \frak V(\frak x_{\rm v}\cup \vec w_{\rm v})$ the differential operator \begin{equation}\label{ducomponents0} \aligned \overline D_{u} \overline\partial : &L^2_{m+1,\delta}((\Sigma_{\frak y_{\rm v}},\partial \Sigma_{\frak y_{\rm v}}); u^*TX, u^*TL) \\ &\to L^2_{m,\delta}(\Sigma_{\frak y_{\rm v}}; u^*TX \otimes \Lambda^{01})/E_{\frak p,{\rm v}}(\frak y,u) \endaligned \end{equation} is surjective. (We define the above weighted Sobolev spaces in the same way as in Section \ref{subsec12}. See Section \ref{glueing} for the precise definition in the general case.) \par If $\text{\rm v} \in C^0_{\rm s}(\mathcal G_{\frak p})$ and $\frak y_{\rm v} \in \frak V(\frak x_{\rm v}\cup \vec w_{\rm v})$, the differential operator \begin{equation}\label{ducomponentssphere} \aligned \overline D_{u} \overline\partial : &L^2_{m+1,\delta}(\Sigma_{\frak y_{\rm v}}; u^*TX) \\ &\to L^2_{m,\delta}(\Sigma_{\frak y_{\rm v}}; u^*TX \otimes \Lambda^{01})/E_{\frak p,{\rm v}}(\frak y,u) \endaligned \end{equation} is surjective. 
\item The kernels of (\ref{ducomponents0}) and (\ref{ducomponentssphere}) satisfy the transversality property for evaluation maps described in Condition \ref{transeval}. \item For each $w_i \in \Sigma_{\rm v}$ we take a codimension $2$ submanifold $\mathcal D_{i}$ of $X$ such that $u(w_i) \in \mathcal D_i$ and $$ u_*T_{w_i}\Sigma_{\rm v} + T_{u(w_i)}\mathcal D_i = T_{u(w_i)}X. $$ Moreover $\{\mathcal D_i\}$ is invariant under the $\Gamma_{\frak p}^+$ action in the following sense. If $v \in \Gamma_{\frak p}^+$ and $v(w_i) = w_{\sigma(i)}$, then \begin{equation}\label{Dequivalent} \mathcal D_i = \mathcal D_{\sigma(i)}. \end{equation} (Note $u(w_i) = u(w_{\sigma(i)})$ since $u\circ v = u$.) \end{enumerate} \end{defn} \begin{conds}\label{transeval} Suppose a vertex $\text{\rm v} \in C^0_{\rm d}(\mathcal G_{\frak p})$ is contained in an edge ${\rm e} \in C^1_{\rm o}(\mathcal G_{\frak p})$. Let $z_{\rm e}$ be the singular point of $\Sigma_{\frak x}$ corresponding to the edge ${\rm e} \in C^1_{\rm o}(\mathcal G_{\frak p})$. We define \begin{equation}\label{evaluflagcomp} \text{\rm ev}_{\rm v,e} : L^2_{m+1,\delta}((\Sigma_{\frak y_{\rm v}},\partial \Sigma_{\frak y_{\rm v}}); u^*TX, u^*TL) \to T_{u(z_{\rm e})}L \end{equation} by $ s \mapsto \pm s(z_{\rm e}) $ where we take $+$ if $\text{\rm v}$ is an outgoing vertex of $\rm e$ and we take $-$ if $\text{\rm v}$ is an incoming vertex of $\rm e$. If $\text{\rm v} \in C^0_{\rm d}(\mathcal G_{\frak p})$ and ${\rm e} \in C^1_{\rm c}(\mathcal G_{\frak p})$, then we define \begin{equation}\label{evaluflagcomp20} \text{\rm ev}_{\rm v,e} : L^2_{m+1,\delta}((\Sigma_{\frak y_{\rm v}},\partial\Sigma_{\frak y_{\rm v}}); u^*TX,u^*TL) \to T_{u(z_{\rm e})}X \end{equation} by the same formula.
In a similar way we define \begin{equation}\label{evaluflagcompcl} \text{\rm ev}_{\rm v,e} : L^2_{m+1,\delta}(\Sigma_{\frak y_{\rm v}}; u^*TX) \to T_{u(z_{\rm e})}X, \end{equation} if ${\rm e} \in C^1_{\rm c}(\mathcal G_{\frak p})$ and $\text{\rm v} \in C^0_{\rm s}(\mathcal G_{\frak p})$ is its vertex. \par Combining all of (\ref{evaluflagcomp}), (\ref{evaluflagcomp20}), (\ref{evaluflagcompcl}) we obtain a map: \begin{equation} \aligned \text{\rm ev}_{\mathcal G_{\frak p}} : &\bigoplus_{\text{\rm v} \in C^0_{\rm d}(\mathcal G_{\frak p})} L^2_{m+1,\delta}((\Sigma_{\frak y_{\rm v}},\partial \Sigma_{\frak y_{\rm v}}); u^*TX, u^*TL) \\ &\oplus \bigoplus_{\text{\rm v} \in C^0_{\rm s}(\mathcal G_{\frak p})} L^2_{m+1,\delta}(\Sigma_{\frak y_{\rm v}}; u^*TX) \\ &\to \bigoplus_{{\rm e} \in C^1_{\rm o}(\mathcal G_{\frak p})} T_{u(z_{\rm e})}L \oplus \bigoplus_{{\rm e} \in C^1_{\rm c}(\mathcal G_{\frak p})} T_{u(z_{\rm e})}X. \endaligned \end{equation} The condition we require is that the restriction of $\text{\rm ev}_{\mathcal G_{\frak p}} $ to $$ \bigoplus_{\text{\rm v} \in C^0(\mathcal G_{\frak p})} \text{\rm Ker}\overline D_{u_{\rm v}} \overline\partial $$ is surjective. \end{conds} \begin{rem} In \cite{fooo:book1} we used Kuranishi structures on $\mathcal M_{k+1,\ell}(\beta)$ so that the evaluation maps $\text{\rm ev} : \mathcal M_{k+1,\ell}(\beta) \to L^{k+1} \times X^{\ell}$ are weakly submersive. To construct Kuranishi structures satisfying this additional property, we need to require an additional assumption to the obstruction bundle data. Namely we need to assume that the evaluation maps at the marked points $$\aligned {\rm ev} : &\bigoplus_{\text{\rm v} \in C^0_{\rm d}(\mathcal G_{\frak p})} L^2_{m+1,\delta}((\Sigma_{\frak y_{\rm v}},\partial \Sigma_{\frak y_{\rm v}}); u^*TX, u^*TL) \to \prod_{i=0}^{k} T_{u(z_i)}L \times \prod_{i=1}^{\ell} T_{u(z_i^{\rm int})}X \endaligned$$ are also surjective. 
But we do not include it in the definition here, since there are cases where we do not assume it. \end{rem} We next explain the precise meaning of invariance under the action in (3), (4), (5). The invariance in (3) is defined in Definition \ref{defn288}. The $\Gamma_{(\frak x\cup \vec w,u)}^{\frak H^+_{(\frak x,u)}}$ action on $\frak K_{\frak x_{\rm v}}$ is induced by that action. (See Definition \ref{defn288}.) So in (4) we require that (the totality of) $K^{\rm obst}_{\rm v}$ is invariant under this action. To make sense of (5) we define a $\Gamma_{(\frak x\cup \vec w,u)}^{\frak H^+_{(\frak x,u)}}$ action on \begin{equation}\label{sumobstructionspace} \bigoplus_{{\rm v} \in C^0(\mathcal G)}\Gamma_0(\text{\rm Int}\,K^{\rm obst}_{\rm v}; u^*TX \otimes \Lambda^{01}). \end{equation} If $v \in \Gamma_{(\frak x\cup \vec w,u)}^{\frak H^+_{(\frak x,u)}}$ then $v\Sigma_{\rm v} = \Sigma_{\rm v'}$ for some ${\rm v}'$ and $K^{\rm obst}_{{\rm v}'} = vK^{\rm obst}_{{\rm v}}$ by (4). Moreover $u\circ v = u$ holds on $\Sigma_{\rm v}$. Therefore we obtain $$ v_* : \Gamma_0(\text{\rm Int}\,K^{\rm obst}_{\rm v}; u^*TX \otimes \Lambda^{01}) \cong \Gamma_0(\text{\rm Int}\,K^{\rm obst}_{{\rm v}'}; u^*TX \otimes \Lambda^{01}). $$ These maps induce a $\Gamma_{(\frak x\cup \vec w,u)}^{\frak H^+_{(\frak x,u)}}$ action on (\ref{sumobstructionspace}). Note that this describes the action at $\vec w \cup \frak p = (\vec w \cup \frak x,u)$. When we move to a nearby point $(\frak y,u)$, the situation becomes slightly different, since $v_*\frak y = \frak y$ no longer holds. We have a smooth trivialization of the bundle (\ref{fibrationsigma}). (Definition \ref{coordinatainfdef} (5).) Namely we are given a diffeomorphism $$ v : K_{\rm v}(\frak y) \to K_{\rm v'}(\frak y) $$ between the cores. (Here we write $K_{\rm v}(\frak y)$ in place of $K_{\rm v}$ to include its complex structure.) However this is not a biholomorphic map.
On the other hand $$ v : K_{\rm v}(\frak y) \to K_{\rm v'}(v_*\frak y) $$ is a biholomorphic map by Definition \ref{coordinatainfdef} (1). Therefore we still obtain a map \begin{equation}\label{sentbyv} \aligned v_* : &\Gamma_0(\text{\rm Int}\,K^{\rm obst}_{\rm v}(\frak y); u^*TX \otimes \Lambda^{01})\\ &\cong \Gamma_0(\text{\rm Int}\,K^{\rm obst}_{{\rm v}'}(v_*\frak y); (u\circ v^{-1})^*TX \otimes \Lambda^{01}). \endaligned \end{equation} Definition \ref{obbundeldata} (5) means $$ v_*\left(E_{\frak p,{\rm v}}(\frak y,u)\right) = E_{\frak p,{\rm v}'} (v_*\frak y,u\circ v^{-1}) = E_{\frak p,{\rm v}'} (v_*\frak y,u) $$ where the map $v_*$ appearing at the beginning of the formula is the map (\ref{sentbyv}). \begin{rem}\label{rem23} The condition (8), especially $u(w_i) \in \mathcal D_i$, is assumed only for $\frak p$ and $\vec w$. For a general point of $\frak V(\frak y_{\rm v} \cup \vec w_{\rm v})$ this condition is not assumed at this stage. We impose this condition only at a later step (Section \ref{cutting}; see also Definition \ref{transconst}) and only on the solutions of the equation. \end{rem} \begin{lem}\label{existobbundledata} For each $\frak p$ there exists an obstruction bundle data $\frak E_{\frak p}$ centered at $\frak p$. \end{lem} \begin{proof} Existence of a symmetric stabilization is obvious. We can find $E_{\frak p,\text{\rm v}}(\frak p \cup \vec w_{\frak p})$ for ${\rm v} \in C^0(\mathcal G_{\frak p \cup \vec w_{\frak p}})$ satisfying (6), (7) by the unique continuation properties of the linearization of the Cauchy-Riemann equation. We can make them $\Gamma_{\frak p \cup \vec w_{\frak p}}^{\frak H^+_{\frak p}}$ invariant by taking the union of the images of the actions. Then we extend them to a small neighborhood of $\frak p \cup \vec w_{\frak p}$ in a way such that (6), (7) are satisfied. We make them $\Gamma_{\frak p \cup \vec w_{\frak p}}^{\frak H^+_{\frak p}}$ invariant by taking an average as follows.
Let $\frak y = (\frak y_{\rm v})$ such that $\frak y_{\rm v} \in \frak V(\frak x_{\rm v} \cup \vec w_{\rm v})$. Using the trivialization of the bundle (\ref{fibrationsigma}) we can define $$ \frak I'_{\frak y} : \bigoplus_{{\rm v} \in C^0(\mathcal G_{\frak p \cup \vec w_{\frak p}})} E_{\frak p,{\rm v}} \to \bigoplus_{{\rm v} \in C^0(\mathcal G)} \Gamma(\Sigma_{\frak y,{\rm v}}; u^*TX\otimes \Lambda^{01}). $$ Note for $v \in \Gamma_{\frak p \cup \vec w_{\frak p}}^{\frak H^+_{\frak p}}$ the equality $ v_* \circ \frak I'_{\frak y} = \frak I'_{v\frak y}\circ v_* $ may not be satisfied. However since $ v_* \circ \frak I'_{\frak p} = \frak I'_{\frak p}\circ v_* $ we may assume $$ \Vert v_* \circ \frak I'_{\frak y} - \frak I'_{v\frak y}\circ v_* \Vert $$ is small by taking $\frak V(\frak x_{\rm v} \cup \vec w_{\rm v})$ small. Therefore $$ \frak I_{\frak y} = \frac{1}{\# \Gamma_{\frak p \cup \vec w_{\frak p}}}\sum_{v \in \Gamma_{\frak p \cup \vec w_{\frak p}}^{\frak H^+_{\frak p}}} (v^{-1})_* \circ \frak I'_{v\frak y} \circ v_* $$ is injective and close to $\frak I'_{\frak p}$. We hence obtain the required $E_{\frak p}(\frak y)$ by $$ E_{\frak p}(\frak y) = \text{\rm Im}\frak I_{\frak y} . $$ The existence of the codimension $2$ submanifolds $\mathcal D_i$ is obvious. \end{proof} The obstruction bundle data determines $$ E_{\frak p}(\frak y,u) = \bigoplus_{{\rm v} \in C^0(\mathcal G_{\frak p \cup \vec w_{\frak p}})}E_{\frak p,{\rm v}}(\frak y,u) \subset \bigoplus_{{\rm v} \in C^0(\mathcal G_{\frak p \cup \vec w_{\frak p}})}L^2_{m,\delta}(\Sigma_{\frak y_{\rm v}}; u^*TX \otimes \Lambda^{01}) $$ for $\frak y \in \frak V(\frak x\cup \vec w)$. This subspace plays the role of (a part of) the obstruction bundle of the Kuranishi structure we will construct. To define our equation and thickened moduli space we need to extend the family of linear subspaces $E_{\frak p}(\cdot)$ so that we associate $E_{\frak p}(\frak q)$ to an object $\frak q$ which is `close' to $\frak p$. 
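Before turning to the definition of this close-ness, we record a direct check of the equivariance of the averaged map $\frak I_{\frak y}$ constructed in the proof of Lemma \ref{existobbundledata}. (This is a short supplementary computation; it uses only the identity $(ab)_* = a_*b_*$.) Writing $\Gamma = \Gamma_{\frak p \cup \vec w_{\frak p}}^{\frak H^+_{\frak p}}$, for $w \in \Gamma$ we have:

```latex
% Equivariance of the averaged map: substitute v = v'w in the sum over the group.
$$
\aligned
w_* \circ \frak I_{\frak y}
&= \frac{1}{\# \Gamma}\sum_{v \in \Gamma} (w v^{-1})_* \circ \frak I'_{v\frak y} \circ v_*
\\
&= \frac{1}{\# \Gamma}\sum_{v' \in \Gamma} (v'^{-1})_* \circ \frak I'_{v' w\frak y} \circ v'_* \circ w_*
&& (v = v'w)
\\
&= \frak I_{w\frak y} \circ w_*.
\endaligned
$$
```

Hence $w_* E_{\frak p}(\frak y) = w_*\,\text{\rm Im}\,\frak I_{\frak y} = \text{\rm Im}\,\frak I_{w\frak y} = E_{\frak p}(w\frak y)$, which is the required invariance.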
We will define this close-ness below. (This is a generalization of Condition \ref{nearbyuprime}.) \par We use the map $$ \overline{\Phi} : \prod_{\rm v \in C^{0}(\mathcal G_{\frak p \cup \vec w_{\frak p}})}\frak V(\frak x_{\rm v} \cup \vec w_{\rm v}) \times (\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1) \to \mathcal M_{k+1,\ell+\ell'}. $$ (See Definition \ref{def29}.) Let $\frak Y = \overline{\Phi}(\frak y,\vec T,\vec \theta)$ be an element of $\mathcal M_{k+1,\ell+\ell'}$ that is represented by $(\Sigma_{\frak Y},\vec z_{\frak Y}, \vec z^{\text{\rm int}}_{\frak Y} \cup \vec w_{\frak Y})$. By construction (\ref{summandtoglued}) we have $$ \aligned \Sigma_{\frak Y} = \bigcup_{{\rm v} \in C^0(\mathcal G_{\frak p \cup \vec w_{\frak p}})} K^{\frak Y}_{{\rm v} } &\cup \bigcup_{{\rm e}\in C^1_{\mathrm o}(\mathcal G_{\frak p \cup \vec w_{\frak p}})} [-5T_{\rm e},5T_{\rm e}] \times [0,1] \\ &\cup \bigcup_{{\rm e}\in C^1_{\mathrm c}(\mathcal G_{\frak p \cup \vec w_{\frak p}})} [-5T_{\rm e},5T_{\rm e}] \times S^1. \endaligned $$ We called the second and the third summand the neck region. In case $T_{\rm e} = \infty$ the product of the union of two half lines and $[0,1]$ or $S^1$ is also called the neck region. See Definition \ref{defcoreandneck}. \begin{defn}\label{epsiloncloseto} Let $u' : (\Sigma_{\frak Y} ,\partial\Sigma_{\frak Y}) \to (X,L)$ be a smooth map in homology class $\beta$. We say that $(\Sigma_{\frak Y},u')$ is {\it $\epsilon$-close to $\frak p$} with respect to the given obstruction bundle data if the following holds. \begin{enumerate} \item Since $\frak Y = \overline{\Phi}(\frak y,\vec T,\vec \theta)$ the core $K^{\frak Y}_{{\rm v} } \subset \Sigma_{\frak Y}$ is identified with $K^{\frak y}_{\rm v} \subset \Sigma_{\frak y}$. We require \begin{equation}\label{c1close} \vert u - u'\vert_{C^{10}(K^{\frak Y}_{{\rm v} })} < \epsilon \end{equation} for each $\rm v$. 
(We regard $u$ as a map from $\Sigma_{\frak y}$ by using the smooth trivialization of the universal family given as a part of Definition \ref{coordinatainfdef} (4).) \item The map $u'$ is holomorphic on each of the neck regions. \item The diameter of the $u'$-image of each connected component of the neck region is smaller than $\epsilon$. \item $T_{\rm e} > \epsilon^{-1}$ for each $\rm e$. \end{enumerate} \end{defn} \begin{rem} We use the metrics of the source and of $X$ to define the left hand side of (\ref{c1close}). See Remark \ref{metricfiber}. \end{rem} \begin{rem} We note that Definition \ref{epsiloncloseto} is not a definition of a topology on a certain set. In fact, `$(\Sigma_{\frak Y},u')$ is close to $\frak p$' is defined only when $\frak p$ is an element of $\mathcal M_{k+1,\ell}(\beta)$, but $(\Sigma_{\frak Y},u')$ may not be an element of $\mathcal M_{k+1,\ell}(\beta)$. \par Even in case $(\Sigma_{\frak Y},u') \in \mathcal M_{k+1,\ell}(\beta)$, the fact that $(\Sigma_{\frak Y},u')$ is $\epsilon$-close to $\frak p$ does not imply that $\frak p$ is $\epsilon$-close to $(\Sigma_{\frak Y},u')$. In fact, if $(\Sigma_{\frak Y},u')$ is $\epsilon$-close to $\frak p$ then $\mathcal G_{\frak p} \succ \mathcal G_{\frak Y}$. \par On the other hand, we have the following. If $(\Sigma_{\frak Y},u') \in \mathcal M_{k+1,\ell}(\beta)$ and is $\epsilon_1$-close to $\frak p$ and if $(\Sigma_{\frak Y'},u'')$ is $\epsilon_2$-close to $(\Sigma_{\frak Y},u')$, then $(\Sigma_{\frak Y'},u'')$ is $(\epsilon_1 + o(\epsilon_2))$-close to $\frak p$. (Here $\lim_{\epsilon_2\to 0} o(\epsilon_2) = 0$.) \end{rem} Let $\frak Y = \overline{\Phi}(\frak y,\vec T,\vec \theta)$ and $u' : (\Sigma_{\frak Y} ,\partial\Sigma_{\frak Y}) \to (X,L)$ be a smooth map in homology class $\beta$ such that $(\Sigma_{\frak Y},u')$ is $\epsilon$-close to $\frak p$. We assume that $\epsilon$ is smaller than the injectivity radius of $X$. Let ${\rm v} \in C^0(\mathcal G)$.
\begin{defn}\label{Emovevvv} Suppose that we are given an obstruction bundle data $\frak E_{\frak p}$ centered at $\frak p$. We define a map \begin{equation}\label{Ivpdefn} I^{{\rm v},\frak p}_{(\frak y,u),(\frak Y,u')} : E_{\frak p,\rm v}(\frak y,u) \to \Gamma_0(\text{\rm Int}\,K^{\rm obst}_{\rm v}; (u')^*TX \otimes \Lambda^{01}) \end{equation} by using the complex linear part of the parallel transport along the path of the form $t \mapsto {\rm E}(u(z),tv)$, where $ {\rm E}(u(z),v) = u'(z)$. (Note this is a short geodesic joining $u(z)$ and $u'(z)$ with respect to the connection which we used to define $\rm E$.) Here we identify $$ K^{\rm obst}_{\rm v} \subset K_{\rm v} \subset \Sigma_{\frak y}, \quad K^{\rm obst}_{\rm v} \subset K_{\rm v} \subset \Sigma_{\frak Y}. $$ \par We denote the image of (\ref{Ivpdefn}) by $E_{\frak p,\rm v}(\frak Y,u')$. \end{defn} The map $I^{{\rm v},\frak p}_{(\frak y,u),(\frak Y,u')}$ is $\Gamma_{(\frak x\cup \vec w,u)}^{\frak H^+_{(\frak x,u)}}$ invariant in the sense of Lemma \ref{ptequiv} below. Note we have an injective homomorphism $\Gamma_{(\frak x\cup \vec w,u)}^{\frak H^+_{(\frak x,u)}} \to \frak S_{\ell} \times \frak S_{\ell'}$ such that the $\Gamma_{(\frak x\cup \vec w,u)}^{\frak H^+_{(\frak x,u)}}$ action on the elements of $\frak V(\frak x \cup \vec w)$ is identified with the permutation of the $\ell$ marked points in $\frak x$ and the $\ell'$ marked points $\vec w$. (See (\ref{2132}).) For $v \in \Gamma_{(\frak x\cup \vec w,u)}^{\frak H^+_{(\frak x,u)}}$ we define $v_*\frak Y$ by permuting the marked points of $\frak Y$ in the same way. If $(\frak Y,u')$ is $\epsilon$-close to $\frak p$ then $(v_*\frak Y,u')$ is $\epsilon$-close to $\frak p$. Let $\rm v'$ be the image of the vertex $\rm v$ under the action of $v$ on $\mathcal G$, with respect to the $\Gamma_{(\frak x\cup \vec w,u)}^{\frak H^+_{(\frak x,u)}}$ action. (See the discussion of Definition \ref{obbundeldata} (5) given right above Remark \ref{rem23}.)
We remark that $v_*\frak Y = \overline{\Phi}(v_*\frak y,v_*\vec T,v_*\vec \theta)$. By using diffeomorphism in Definition \ref{def29}, we have a map $v : \frak y \to v_*\frak y$. Note there exists a map (diffeomorphism) $v : \Sigma_{\frak y} \to \Sigma_{\frak y}$ that permutes the marked points in the required way. However this map is not holomorphic in general. It becomes biholomorphic as a map $v : \Sigma_{\frak y} \to \Sigma_{v_*\frak y}$. \begin{lem}\label{ptequiv} The following diagram commutes. \begin{equation} \CD E_{\frak p,\rm v}(\frak y,u) @>{I^{{\rm v},\frak p}_{(\frak y,u),(\frak Y,u')}}>> \Gamma_0(\text{\rm Int}\,K^{\rm obst}_{\rm v}(\frak Y); (u')^*TX \otimes \Lambda^{01}) \\ @V{v_*}VV @V V{v_*}V \\ E_{\frak p,\rm v'}(v_*\frak y,u\circ v^{-1}) @>{I^{{\rm v}',\frak p}_{(v_*\frak y,u),(v_*\frak Y,u'\circ v^{-1})}}>> \Gamma_0(\text{\rm Int}\,K^{\rm obst}_{\rm v'}(v_*\frak Y); (u'\circ v^{-1})^*TX \otimes \Lambda^{01}) \endCD \label{A1.16}\end{equation} Here we define $\rm v'$ by $v(K_{\rm v}) = K_{\rm v'}$. \end{lem} \begin{proof} The lemma follows from the fact that parallel transport etc. is independent of the enumeration of the marked points. (Note the left vertical arrow is well-defined by Definition \ref{obbundeldata} (5).) \end{proof} \begin{cor}\label{vindependenEcor} $$ v_*\left(\bigoplus_{\rm v \in C^0(\mathcal G)}E_{\frak p,\rm v}(\frak Y,u')\right) = \bigoplus_{\rm v \in C^0(\mathcal G)}E_{\frak p,\rm v}(v_*\frak Y,u'\circ v^{-1}). $$ \end{cor} This is a consequence of Lemma \ref{ptequiv} and Definition \ref{obbundeldata} (5). \par\medskip We next show that the Fredholm regularity (Definition \ref{obbundeldata} (6)) and evaluation map transversality (Definition \ref{obbundeldata} (7)) are preserved when we take $(\frak Y,u')$ that is $\epsilon$-close to $\frak p$. (See Proposition \ref{linearMV}.) To state them precisely we need some preparation. 
\par Let $\frak Y = \overline{\Phi}(\frak y,\vec T,\vec \theta)$ be an element of $\mathcal M_{k+1,\ell+\ell'}$ that is represented by $(\Sigma_{\frak Y},\vec z_{\frak Y}, \vec z^{\text{\rm int}}_{\frak Y} \cup \vec w_{\frak Y})$. We denote by $\mathcal G_{\frak Y}$ the combinatorial type of $\frak Y$. (Here $\mathcal G_{\frak y}$ is the combinatorial type of $\frak y$ and $\mathcal G_{\frak Y}$ is obtained from $\mathcal G_{\frak y}$ by shrinking the edges $\rm e$ such that $T_{\rm e} \ne \infty$.) Let $\rm v \in C^0_{\rm d}(\mathcal G_{\frak Y})$. We have a differential operator \begin{equation}\label{ducomponents20} \aligned D_{u',\rm v} \overline\partial : L^2_{m+1,\delta}((\Sigma_{\frak Y_{\rm v}},\partial \Sigma_{\frak Y_{\rm v}});& (u')^*TX, (u')^*TL) \\ &\to L^2_{m,\delta}(\Sigma_{\frak Y_{\rm v}}; (u')^*TX \otimes \Lambda^{01}). \endaligned \end{equation} In case $\rm v \in C^0_{\rm s}(\mathcal G_{\frak Y})$ we have \begin{equation}\label{ducomponents20c} D_{u',\rm v} \overline\partial : L^2_{m+1,\delta}(\Sigma_{\frak Y_{\rm v}}; (u')^*TX) \to L^2_{m,\delta}(\Sigma_{\frak Y_{\rm v}}; (u')^*TX \otimes \Lambda^{01}). \end{equation} \begin{defn}\label{fredreg} We say $(\frak Y,u')$ is {\it Fredholm regular} with respect to the obstruction bundle data $\frak E_{\frak p}$ if, for each $\rm v \in C^0_{\rm d}(\mathcal G_{\frak Y})$, the sum of the image of (\ref{ducomponents20}) and $E_{\frak p,\rm v}(\frak Y,u')$ is $L^2_{m,\delta}(\Sigma_{\frak Y_{\rm v}}; (u')^*TX \otimes \Lambda^{01})$ and, for each $\rm v \in C^0_{\rm s}(\mathcal G_{\frak Y})$, the sum of the image of (\ref{ducomponents20c}) and $E_{\frak p,\rm v}(\frak Y,u')$ is $L^2_{m,\delta}(\Sigma_{\frak Y_{\rm v}}; (u')^*TX \otimes \Lambda^{01})$. \end{defn} Using this terminology, Definition \ref{obbundeldata} (6) means that $(\frak x,u)$ is Fredholm regular with respect to the obstruction bundle data $\frak E_{\frak p}$. \par We next define the notion of evaluation map transversality.
\begin{defn}\label{maptrans} A {\it flag} of $\mathcal G$ is a pair $(\mathrm v,\mathrm e)$ of an edge $\rm e$ and one of its vertices $\rm v$. Suppose $\mathcal G$ is oriented. We say a flag $(\mathrm v,\mathrm e)$ is {\it incoming} if $\rm e$ is an incoming edge. Otherwise it is said to be {\it outgoing}. We denote by $z_{\rm e}$ the singular point corresponding to an edge $\rm e$. \end{defn} For each flag $(\rm v,\rm e)$ of $\mathcal G_{\frak Y}$, we define \begin{equation}\label{evaluflagcomp2} \text{\rm ev}_{\rm v,e} : L^2_{m+1,\delta}((\Sigma_{\frak Y_{\rm v}},\partial \Sigma_{\frak Y_{\rm v}}); (u')^*TX, (u')^*TL) \to T_{u'(z_{\rm e})}L, \end{equation} if ${\rm v} \in C^0_{\rm d}(\mathcal G_{\frak Y})$, $\rm e \in C^1_{\rm o}(\mathcal G_{\frak Y})$ in the same way as (\ref{evaluflagcomp}), \begin{equation}\label{evaluflagcomp23} \text{\rm ev}_{\rm v,e} : L^2_{m+1,\delta}((\Sigma_{\frak Y_{\rm v}},\partial \Sigma_{\frak Y_{\rm v}}) ; (u')^*TX, (u')^*TL) \to T_{u'(z_{\rm e})}X, \end{equation} if ${\rm v} \in C^0_{\rm d}(\mathcal G_{\frak Y})$, $\rm e \in C^1_{\rm c}(\mathcal G_{\frak Y})$ in the same way as (\ref{evaluflagcomp20}), and \begin{equation}\label{evaluflagcompcl2} \text{\rm ev}_{\rm v,e} : L^2_{m+1,\delta}(\Sigma_{\frak Y_{\rm v}}; (u')^*TX) \to T_{u'(z_{\rm e})}X, \end{equation} if ${\rm v} \in C^0_{\rm s}(\mathcal G_{\frak Y})$, ${\rm e} \in C^1_{\rm c}(\mathcal G_{\frak Y})$ in the same way as (\ref{evaluflagcompcl}). \par Combining them we obtain \begin{equation}\label{evaluationmap2diff} \aligned \text{\rm ev}_{\mathcal G_{\frak Y}} : &\bigoplus_{\text{\rm v} \in C^0_{\rm d}(\mathcal G_{\frak Y})} L^2_{m+1,\delta}((\Sigma_{\frak Y_{\rm v}},\partial \Sigma_{\frak Y_{\rm v}}); (u')^*TX, (u')^*TL) \\ &\oplus \bigoplus_{\text{\rm v} \in C^0_{\rm s}(\mathcal G_{\frak Y})} L^2_{m+1,\delta}(\Sigma_{\frak Y_{\rm v}}; (u')^*TX) \\ &\to \bigoplus_{{\rm e} \in C^1_{\rm o}(\mathcal G_{\frak Y})} T_{u'(z_{\rm e})}L \oplus \bigoplus_{{\rm e} \in C^1_{\rm c}(\mathcal G_{\frak Y})} T_{u'(z_{\rm e})}X.
\endaligned \end{equation} \begin{defn}\label{def:1720} Suppose $(\frak Y,u')$ is Fredholm regular with respect to the obstruction bundle data $\frak E_{\frak p}$. We say that $(\frak Y,u')$ is {\it evaluation map transversal} with respect to the obstruction bundle data $\frak E_{\frak p}$ if the restriction of (\ref{evaluationmap2diff}) to the direct sum of the kernels of the operators (\ref{ducomponents20}) and (\ref{ducomponents20c}), composed with the projections modulo $E_{\frak p,\rm v}(\frak Y,u')$, is surjective. \end{defn} Using this terminology, Definition \ref{obbundeldata} (7) means that $(\frak x,u)$ is evaluation map transversal with respect to the obstruction bundle data $\frak E_{\frak p}$. \par\medskip Proposition \ref{linearMV} below says that Fredholm regularity and evaluation map transversality are preserved if $(\frak Y,u')$ is sufficiently close to $\frak p$. To state it we need to note the following point. \par When we define $\epsilon$-close-ness, we put the condition that the image of each connected component of the neck region has diameter $< \epsilon$. But we did not assume a similar condition for $\frak p$ and $\frak E_{\frak p}$ itself. So in case this condition is not satisfied for $\frak p$, there cannot exist any object that is $\epsilon$-close to $\frak p$. In particular, $\frak p$ itself is not $\epsilon$-close to $\frak p$. \par However, we can always modify the core $K_{\rm v}$ so that $\frak p$ itself becomes $\epsilon$-close to $\frak p$ as follows. We take a positive number $R_{(\rm v,\rm e)}$ for each flag of $\mathcal G$ and write $\vec R$ for the totality of such $R_{(\rm v,\rm e)}$.
We put \begin{equation}\label{extendedcore} \aligned K^{+\vec R}_{\rm v} = K_{\rm v} &\cup \bigcup_{{\rm e}\in C^1_{\mathrm o}(\mathcal G) \atop \text{$(\rm v,{\rm e})$ is an outgoing flag}} (0,R_{(\rm v,\rm e)}] \times [0,1] \\ &\cup \bigcup_{{\rm e}\in C^1_{\mathrm o}(\mathcal G) \atop \text{$(\rm v,{\rm e})$ is an incoming flag}} [-R_{(\rm v,\rm e)},0) \times [0,1] \\ &\cup \bigcup_{{\rm e}\in C^1_{\mathrm c}(\mathcal G) \atop \text{$(\rm v,{\rm e})$ is an outgoing flag} }(0,R_{(\rm v,\rm e)}] \times S^1 \\ &\cup \bigcup_{{\rm e}\in C^1_{\mathrm c}(\mathcal G) \atop \text{$(\rm v,{\rm e})$ is an incoming flag}} [-R_{(\rm v,\rm e)},0) \times S^1. \endaligned \end{equation} \begin{defn} We can define an obstruction bundle data $\frak E_{\frak p}$ centered at $\frak p$ using $K^{+\vec R}_{\rm v}$ in place of $K_{\rm v}$. We call it the obstruction bundle data obtained by {\it extending the core} and write $\frak E_{\frak p}^{+ \vec R}$. We call (\ref{extendedcore}) the {\it extended core}. (In case we need to specify $\vec R$ we call it the $\vec R$-extended core.) (\ref{extendedcore}) is a generalization of (\ref{1104}). \end{defn} \begin{prop}\label{linearMV} Let $\frak p \in \mathcal M_{k+1,\ell}(\beta)$ and $\frak E_{\frak p}$ be an obstruction bundle data centered at $\frak p$. Then there exist $\epsilon > 0$ and $\vec R$ with the following properties. \begin{enumerate} \item If $(\frak Y,u')$ is $\epsilon$-close to $\frak p$ with respect to $\frak E_{\frak p}^{+ \vec R}$, then $(\frak Y,u')$ is Fredholm regular with respect to $\frak E_{\frak p}^{+ \vec R}$. \item If $(\frak Y,u')$ is $\epsilon$-close to $\frak p$ with respect to $\frak E_{\frak p}^{+ \vec R}$, then $(\frak Y,u')$ is evaluation map transversal with respect to $\frak E_{\frak p}^{+ \vec R}$. \item $\frak p$ is $\epsilon$-close to $\frak p$ with respect to $\frak E_{\frak p}^{+ \vec R}$. 
\end{enumerate} \end{prop} \begin{proof} By using the fact that the diameter of the $u'$ image of each connected component of the neck region is small, we can prove an exponential decay estimate of $u'$ on the neck region. This is an analogue of Lemma \ref{neckaprioridecay} and its proof is the same as the proof of \cite[Lemma 11.2]{FOn}. Then the rest of the proof of (1),(2) is a version of the proof of the Mayer-Vietoris principle of Mrowka \cite{Mr}. See \cite[Proposition 7.1.27]{fooo:book1} or \cite[Lemma 8.5]{Fuk96II}. (3) is obvious. \end{proof} So far we have discussed the case of bordered genus zero curves. The case of genus zero curves without boundary is the same, so we do not repeat it. \footnote{The higher genus case is also the same.} \par\medskip \section{The differential equation and thickened moduli space} \label{settin2} To construct a Kuranishi neighborhood of each point in our moduli space $\mathcal M_{k+1,\ell}(\beta)$ or $\mathcal M_{\ell}^{\rm cl}(\alpha)$, we need to assign an obstruction bundle to each point of it. To do so we follow the approach of \cite[end of page 1003]{FOn} and \cite[end of page 423 to middle of page 424]{fooo:book1}. The outline of the argument is as follows. For each $\frak p \in \mathcal M_{k+1,\ell}(\beta)$ we take an obstruction bundle data $\frak E_{\frak p}$. We then consider a closed neighborhood $\frak W_{\frak p}$ of $\frak p$ in $\mathcal M_{k+1,\ell}(\beta)$ so that its elements, together with certain marked points added, are $\epsilon_{\frak p}$-close to $\frak p$ with respect to $\frak E_{\frak p}^{+\vec R}$. Here we choose $\epsilon_{\frak p}$ and $\frak E_{\frak p}^{+\vec R}$ so that Proposition \ref{linearMV} holds. We next take a finite number of $\frak p_c \in \mathcal M_{k+1,\ell}(\beta)$ such that $$ \bigcup_{c} {\rm Int}\,\frak W_{\frak p_c} = \mathcal M_{k+1,\ell}(\beta).
$$ For $\frak p \in \mathcal M_{k+1,\ell}(\beta)$, we collect all $E_{\frak p_c}$ such that $\frak p_c$ satisfies $\frak p \in \frak W_{\frak p_c}$. The sum will be the obstruction bundle $\mathcal E_{\frak p}$ at $\frak p$. We now describe this process in more detail. \par We first define the subset $\frak W_{\frak p}$ in more detail. We note that in Definition \ref{epsiloncloseto}, we need $\ell + \ell_{\frak p}$ interior marked points to define $\epsilon$-closeness to an element $\frak p \in \mathcal M_{k+1,\ell}(\beta)$. (Here $\ell_{\frak p}$ is the number of marked points we add as a part of the obstruction bundle data $\frak E_{\frak p}$.) We start by describing the process of forgetting those $\ell_{\frak p}$ marked points. \par \begin{defn}\label{transconst} We consider the situation of Definition \ref{epsiloncloseto}. Let $\frak Y = \overline{\Phi}(\frak y,\vec T,\vec \theta)$ and let $u' : (\Sigma_{\frak Y} ,\partial\Sigma_{\frak Y}) \to (X,L)$ be a smooth map in the homology class $\beta$ that is $\epsilon$-close to $\frak p$. We say $(\frak Y,u')$ satisfies the {\it transversal constraint} if for each $w_i \in \vec w$ we have \begin{equation} u'(w_i) \in \mathcal D_{\frak p,i}. \end{equation} \end{defn} Let us explain the notation appearing in the above definition. We have $\vec w_{\frak p}$, the additional marked points on $\Sigma_{\frak p}$ that are a part of the obstruction bundle data $\frak E_{\frak p}$. The element $\frak y$ is in a neighborhood $\frak V(\frak x_{\frak p}\cup \vec w_{\frak p})$. (This neighborhood $\frak V(\frak x_{\frak p}\cup \vec w_{\frak p})$ is also a part of the data $\frak E_{\frak p}$.) $(\vec T,\vec \theta)$ is as in Definition \ref{def214}. Thus $\frak Y = {\overline\Phi}(\frak y,\vec T,\vec \theta)$ is a bordered genus zero curve with $k+1$ boundary marked points and $\ell + \ell_{\frak p}$ interior marked points. ($\ell_{\frak p}$ is the number of points in $\vec w_{\frak p}$.)
We denote by $w_{\frak p,i}$ the $(\ell + i)$-th interior marked point. (It is the $i$-th among the additional marked points.) For each $i=1,\dots,\ell_{\frak p}$, we took $\mathcal D_{\frak p,i}$ that is transversal to $u_{\frak p}(\Sigma_{\frak p})$ at $u_{\frak p}(w_{\frak p,i})$ as a part of the data $\frak E_{\frak p}$. \par \begin{lem}\label{transpermutelem} For each $\frak p \in \mathcal M_{k+1,\ell}(\beta)$ and each obstruction bundle data $\frak E_{\frak p}$ centered at $\frak p$, there exists $\epsilon_{\frak p}$ such that the following holds. \par Let $\frak q = (\frak x_{\frak q},u_{\frak q}) \in \mathcal M_{k+1,\ell}(\beta)$. We consider the set of symmetric markings $\vec w'_{\frak p}$ of $\frak x_{\frak q}$ with $\#\vec w'_{\frak p} = \ell_{\frak p}$, such that the following holds. \begin{enumerate} \item There exists $\frak y\in \frak V(\frak x_{\frak p}\cup \vec w_{\frak p})$ and $(\vec T,\vec \theta) \in (\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1)$ such that $\frak x_{\frak q}\cup\vec w'_{\frak p} = \overline{\Phi}(\frak y,\vec T,\vec \theta)$. \item $(\frak x_{\frak q}\cup \vec w'_{\frak p}, u_{\frak q})$ is $\epsilon_{\frak p}$-close to $\frak p$. \item $(\frak x_{\frak q}\cup\vec w'_{\frak p}, u_{\frak q})$ satisfies the transversal constraint. \end{enumerate} Then the set of such $\vec w'_{\frak p}$ consists of a single $\Gamma_{\frak p}$-orbit if it is nonempty. Here we regard $\Gamma_{\frak p} \subset \frak S_{\ell'}$ by (\ref{sigmahomotoS}) and $\Gamma_{\frak p}$ acts on the set of $\vec w'_{\frak p}$'s by permutation. \end{lem} The proof of Lemma \ref{transpermutelem} is not difficult. We however postpone it to Section \ref{cutting}, where the transversal constraint is studied more systematically. \par We are now ready to give the definition of $\frak W_{\frak p} \subset \mathcal M_{k+1,\ell}(\beta)$.
\par First for each $\frak p \in \mathcal M_{k+1,\ell}(\beta)$ we take and fix an obstruction bundle data $\frak E_{\frak p}$. Let $\vec w_{\frak p}$ be the additional marked points we take as a part of $\frak E_{\frak p}$. We take $\epsilon_{\frak p}$ so that Proposition \ref{linearMV} and Lemma \ref{transpermutelem} hold. Moreover we may change $\frak E_{\frak p}$ if necessary so that Proposition \ref{linearMV} holds for $\frak E_{\frak p}^{+ \vec R} = \frak E_{\frak p}$. \begin{defn}\label{openW+++} $\frak W^+(\frak p)$ is the set of all $\frak q \in \mathcal M_{k+1,\ell}(\beta)$ such that the set of $\vec w'_{\frak p}$ satisfying (1)-(3) of Lemma \ref{transpermutelem} is nonempty. The constant $\epsilon_{\frak p}$ (which is often denoted by $\epsilon_{\frak p_c}$ or $\epsilon_c$) is determined later. (See Lemma \ref{nbdregmaineq} (Remark \ref{Lem258}), Proposition \ref{forgetstillstable}, Lemma \ref{setisopen}, Lemma \ref{injectivitypp3}, and Sublemma \ref{lub296lem} (Remark \ref{rem298}).) See also 2 lines above Definition \ref{obbundle1}. We note that $\frak W^+(\frak p)$ is open, as we will see in Section \ref{cutting}. See Remark \ref{rem:2111}. \par We choose a compact subset $\frak W_{\frak p} \subset \frak W^+(\frak p)$ that is a neighborhood of $\frak p$. We take $\frak W^0_{\frak p}$ that is a compact subset of ${\rm Int}\,\,\frak W_{\frak p}$ and is a neighborhood of $\frak p$. \par We take a finite set $\{\frak p_{c} \mid c \in \frak C\} \subset \mathcal M_{k+1,\ell}(\beta)$ such that \begin{equation}\label{mpccoverM} \bigcup_{c \in \frak C} \text{\rm Int}\,\, \frak W^0_{\frak p_c} = \mathcal M_{k+1,\ell}(\beta). \end{equation} \end{defn} We fix this set $\{\frak p_{c} \mid c \in \frak C\}$ in the rest of the construction of the Kuranishi structure. From now on, no obstruction bundle data at $\frak p$ for $\frak p \notin \{\frak p_c \mid c \in \frak C\}$ is used in this note.
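Definition \ref{defn243} below imposes the transversal constraint (Definition \ref{transconst}) on the added marked points. The following schematic local model may help explain why this constraint rigidifies each added marked point; the coordinates below are chosen purely for illustration and are not part of the data fixed in the text.

```latex
% Local model (illustration only).  Near u_{\frak p}(w_{\frak p,i}) choose
% holomorphic coordinates (z_1,\dots,z_n) on X in which the divisor is a slice:
\mathcal D_{\frak p,i} = \{(z_1,\dots,z_n) \mid z_1 = 0\},
\qquad \operatorname{codim}_{\R}\mathcal D_{\frak p,i} = 2.
% Transversality of \mathcal D_{\frak p,i} to u_{\frak p} at w_{\frak p,i}
% says that w \mapsto z_1(u_{\frak p}(w)) has nonvanishing derivative there.
% Hence, for u' sufficiently C^1-close to u_{\frak p}, the constraint
u'(w_i) \in \mathcal D_{\frak p,i} \iff z_1(u'(w_i)) = 0
% is a pair of real equations cut out transversally, and the implicit
% function theorem determines w_i uniquely near w_{\frak p,i}.  Thus the
% 2\ell_{\frak p} real parameters gained by adding \ell_{\frak p} interior
% marked points are exactly cancelled by the constraint.
```

This is the standard divisor-constraint mechanism; the text's Lemma \ref{transpermutelem} refines it by accounting for the residual $\Gamma_{\frak p}$ symmetry.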
\begin{defn}\label{defn243} For $\frak p \in \mathcal M_{k+1,\ell}(\beta)$, we define $$ \frak C(\frak p) = \{ c \in \frak C \mid \frak p \in \frak W_{\frak p_c}\}. $$ We also choose additional marked points $\vec w_{c}^{\frak p}$ of $\frak x_{\frak p}$ for each $c\in \frak C(\frak p)$ such that \begin{enumerate} \item There exist $\frak y\in \frak V(\frak x_{\frak p_c}\cup \vec w_{\frak p_c})$ and $(\vec T,\vec \theta) \in (\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1)$ such that $\frak x_{\frak p}\cup\vec w_c^{\frak p} = \overline{\Phi}(\frak y,\vec T,\vec \theta)$. \item $(\frak x_{\frak p}\cup \vec w_c^{\frak p},u_{\frak p})$ is $\epsilon_{\frak p_c}$-close to $\frak p_c$. \item $(\frak x_{\frak p}\cup \vec w_c^{\frak p},u_{\frak p})$ satisfies the transversal constraint. \end{enumerate} \end{defn} \begin{lem} For each $\frak p$ there exists a neighborhood $U$ of it such that if $\frak q \in U$ then $$ \frak C(\frak q) \subseteq \frak C(\frak p). $$ \end{lem} \begin{proof} The lemma follows from the fact that $\frak W_{\frak p_c}$ is closed. \end{proof} \par We next define an obstruction bundle $\mathcal E_{\frak p}$ for each $\frak p = (\frak x_{\frak p},u_{\frak p}) \in \mathcal M_{k+1,\ell}(\beta)$. Take $c \in \frak C(\frak p)$. Let $\vec w^{\frak p}_c$ be as in Definition \ref{defn243}. By Definition \ref{Emovevvv}, the map \begin{equation}\label{Ivpdefn22} I^{{\rm v},\frak p_c}_{(\frak y_c,u_c),(\frak p\cup \vec w^{\frak p}_c)} : E_{\frak p_c,\rm v}(\frak y_c,u_c) \to \Gamma_0(\text{\rm Int}\,K^{\rm obst}_{\rm v}; u_{\frak p}^*TX \otimes \Lambda^{01}) \end{equation} is defined. Here $\frak x_{\frak p} \cup \vec w^{\frak p}_c = \overline{\Phi}(\frak y_c,\vec T_c,\vec \theta_c)$ and $\frak y_c \in \frak V(\frak x_{\frak p_c}\cup \vec w_{\frak p_c})$. Note $ K_{\rm v}^{\rm obst} \subset K_{\rm v} \subset \frak x_{\frak p_c}.
$ We also have $K_{\rm v}^{\rm obst} \subset \frak x_{\frak p}$ since $\vec w^{\frak p}_c \cup \frak x_{\frak p} = \overline{\Phi}(\frak y_c,\vec T_c,\vec \theta_c)$. \begin{lem} The image $ E_{c}(\frak p) $ of (\ref{Ivpdefn22}) depends only on $\frak p \in \frak W^+(\frak p_c)$ and is independent of the choices of $\vec w_c^{\frak p}$ satisfying Definition \ref{defn243} (1)-(3). \end{lem} \begin{proof} This is a consequence of Corollary \ref{vindependenEcor} and Lemma \ref{transpermutelem}. \end{proof} \begin{defn}\label{def:187} We define \begin{equation}\label{kuraequation} \mathcal E_{\frak C(\frak p)}(\frak p) = \sum_{c \in \frak C(\frak p)} E_{c}(\frak p). \end{equation} For $\frak A \subset \frak C(\frak p)$ we put \begin{equation} \mathcal E_{\frak A} (\frak p) = \sum_{c \in \frak A} E_{c}(\frak p). \end{equation} \end{defn} The defining equation of the thickened moduli space at $\frak p$ is $$ \overline\partial u_{\frak p} \equiv 0 \mod \mathcal E_{\frak C(\frak p)}({\frak p}). $$ We need to extend the subspace $\mathcal E_{\frak C(\frak p)}({\frak p})$ to a family of subspaces parametrized by a neighborhood of $\frak p$. Before doing so we need the following. \begin{lem}\label{transbetweenEs} By perturbing $E_{\frak p_c}$ (that is a part of the obstruction bundle data $\frak E_{\frak p_c}$) we may assume that $$ E_{c}(\frak p) \cap E_{c'}(\frak p) = \{0\}, $$ if $c,c' \in \frak C(\frak p)$ and $c\ne c'$. \end{lem} \begin{proof} The proof will be given in Section \ref{Lemma49}. \end{proof} Now we start extending the equation (\ref{kuraequation}) to an element $\frak q$ in a `neighborhood' of $\frak p$. We do not yet assume that $\frak q$ satisfies the transversal constraint (Definition \ref{transconst}). So to define $E_{c}(\frak q)$ we need to include $\vec w'_c$ for all $c \in \frak C(\frak p)$ as marked points of $\frak q$.
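The defining equation above relaxes $\overline\partial u = 0$ into a finite dimensional quotient. The following standard piece of linear Fredholm theory, recorded here for the reader's convenience (it is implicit in the text; $W$ and $\Omega^{0,1}$ stand for the relevant weighted Sobolev spaces of sections), explains why the obstruction spaces later contribute additively to the dimension of the thickened moduli space.

```latex
% Let D_u\overline\partial : W \to \Omega^{0,1} be the linearized
% Cauchy-Riemann operator at u, a Fredholm operator.  The relaxed equation
\overline\partial u' \equiv 0 \mod \mathcal E,
\qquad \dim_{\R}\mathcal E < \infty,
% has as linearization the composition with the quotient projection \pi:
\pi \circ D_u\overline\partial : W \longrightarrow \Omega^{0,1}/\mathcal E.
% Since \pi is surjective with finite dimensional kernel \mathcal E,
\operatorname{ind}\bigl(\pi \circ D_u\overline\partial\bigr)
= \operatorname{ind}\bigl(D_u\overline\partial\bigr) + \dim_{\R}\mathcal E.
% When the relaxed equation is Fredholm regular, i.e. \pi\circ D_u\overline\partial
% is surjective, its solution set is a smooth manifold of this index near u.
```

This is the source of the summand $\sum_{c}\dim_{\R}E_c$ in the dimension formula of Corollary \ref{smoothness00}.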
We also take more marked points $\vec w_{\frak p}$ to stabilize $\frak p$ and take corresponding additional marked points $\vec w'_{\frak p}$ on $\Sigma_{\frak q}$. The marked points $\vec w_{\frak p}$ are used to fix the coordinate to perform the gluing construction in Section \ref{glueing}. The points $\vec w'_c$ are used to define the map (\ref{Ivpdefn22}). Thus they have different roles. \par A technical point to take care of is the following. We may assume that the $\ell_c$ components of $\vec w_c^{\frak p}$ are mutually different for each $c$. (This is because the $\ell_c$ components of $\vec w_{\frak p_c}$ are mutually different.) However there is no obvious way to arrange that $\vec w_c^{\frak p} \cap \vec w_{c'}^{\frak p} = \emptyset$ for $c \ne c'$. Note that in the usual stable map compactification, at the point where two or more marked points come to coincide, we put a `phantom bubble' so that they become different points on this bubbled component. For our purpose, the proof becomes simpler when we do {\it not} put a phantom bubble in case one of the components of $\vec w_{c}^{\frak p}$ coincides with one of the components of $\vec w_{c'}^{\frak p}$ for $c \ne c'$. Taking these points into account we define $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta,\frak p)_{\epsilon_{0},\vec T_{0}}$ below. \par We first review the situation we are working in and prepare some notations. Let $\frak p \in \mathcal M_{k+1,\ell}(\beta)$. We defined $\frak C(\frak p)$ in Definition \ref{defn243}. For $c \in \frak C(\frak p)$ we fixed an obstruction bundle data $\frak E_{\frak p_c}$ centered at $\frak p_c$. The additional marked points $\vec w_{\frak p_c}$ are a part of the data $\frak E_{\frak p_c}$. We put $\ell_c = \#\vec w_{\frak p_c}$. We also put $\epsilon_c = \epsilon_{\frak p_c}$, where the right hand side is as in Lemma \ref{transpermutelem}.
As mentioned before, we take $\frak E_{\frak p_c}$ so that Proposition \ref{linearMV} holds for $\frak E_{\frak p_c}^{+ \vec R} = \frak E_{\frak p_c}$. \begin{defn}\label{stabdata} A {\it stabilization data} at $\frak p$ consists of the following data. \begin{enumerate} \item A symmetric stabilization $\vec w_{\frak p} = (w_{\frak p,1},\dots,w_{\frak p,\ell_{\frak p}})$ of $\frak p$. We put $\ell_{\frak p} = \#\vec w_{\frak p}$. \item For each $w_{\frak p,i}$ ($i=1,\dots,\ell_{\frak p}$), we take and fix $\mathcal D_{\frak p,i}$ such that it is a codimension two submanifold of $X$ and is transversal to $u_{\frak p}$ at $u_{\frak p}(w_{\frak p,i})$. We also assume $u_{\frak p}(w_{\frak p,i}) \in \mathcal D_{\frak p,i}$. \item We assume that $\{\mathcal D_{\frak p,i}\mid i=1,\dots,\ell_{\frak p}\}$ is invariant under the $\Gamma_{\frak p}$ action in the same sense as in Definition \ref{obbundeldata} (8) (\ref{Dequivalent}). \item A coordinate at infinity of $\frak p \cup \vec w_{\frak p}$. \item $\vec w_{\frak p} \cap \vec w_c^{\frak p} = \emptyset$ for any $c \in \frak C(\frak p)$. \item Let $K^{\rm obst}_{{\rm v},c}$ be the support of the obstruction bundle as in Definition \ref{obbundeldata} (4). (Here ${\rm v} \in C^0(\mathcal G_{\frak p_c})$.) Since $\frak x_{\frak p} = \overline{\Phi}(\frak y,\vec T,\vec \theta)$ we may regard $K^{\rm obst}_{{\rm v},c} \subset \Sigma_{\frak p}$. We require $$ K^{\rm obst}_{{\rm v},c} \subset \bigcup_{{\rm v}' \in C^0(\mathcal G_{\frak p})} {\rm Int} K_{{\rm v}'}. $$ Here the cores $K_{{\rm v}'}$ on the right hand side are those of the coordinate at infinity given by Item (4) of Definition \ref{stabdata}. \end{enumerate} \end{defn} A stabilization data at $\frak p$ is defined similarly to an obstruction bundle data centered at $\frak p$, but it does not include $K_{\rm v}^{\rm obst}$ or $E_{\frak p,\rm v}$.
The stabilization data at $\frak p$ has no relation to the obstruction bundle data at $\frak p$.\footnote{In case $\frak p = \frak p_c$ we have both stabilization data and obstruction bundle data at $\frak p$. The notation $\vec w_{\frak p}$ is used for both structures. They may not coincide. We use the same symbol for both since this cannot cause any confusion and the case $\frak p = \frak p_c$ does not play a role in our discussion.} \par We fix a metric on all the Deligne-Mumford moduli spaces. Let $\frak V_{\epsilon_0}(\frak p \cup \vec w_{\frak p})$ be the $\epsilon_0$-neighborhood of $\frak p \cup \vec w_{\frak p}$ in $\mathcal M_{k+1,\ell+\ell_{\frak p}}(\mathcal G_{(\frak p \cup \vec w_{\frak p})})$, where $\mathcal G_{(\frak p \cup \vec w_{\frak p})}$ is the combinatorial type of $\frak p \cup \vec w_{\frak p}$. \begin{defn} [{\rm Definition of $ {\frak U}_{k+1,(\ell;\ell_{\frak p},(\ell_c))}(\beta,\frak p;\frak B)_{\epsilon_{0},\vec T_{0}} $}] \label{defn251} We fix a stabilization data at $\frak p$ and an obstruction bundle data centered at $\frak p_{c}$ for each $c \in \frak C(\frak p)$. Let $\frak B \subset \frak C(\frak p)$. For each $c \in \frak C(\frak p)$ we chose $\vec w^{\frak p}_c$ in Definition \ref{defn243}. \par For $\epsilon_0 > 0$ and $\vec T_{0} = (\vec T^{\rm o}_0,\vec T^{\rm c}_0) = (T_{{\rm e},0} : {\rm e} \in C^1(\mathcal G_{\frak p}))$ we consider the set of all $(\frak Y,u',(\vec w'_c ; c\in \frak B))$ such that the following holds for some $\vec R$. \begin{enumerate} \item There exist $\frak y \in \frak V_{\epsilon_0}(\frak p \cup \vec w_{\frak p})$, $(\vec T,\vec \theta) \in (\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1)$ such that $$ \frak Y = \overline{\Phi}(\frak y,\vec T,\vec \theta) \in \mathcal M_{k+1,\ell+\ell_{\frak p}}. $$ \item $u'$ is $\epsilon_0$-close to $u_{\frak p}$ on the extended core $K_{\rm v}^{+\vec R}$ of $\Sigma_{\frak p}$ in $C^{10}$-topology.
We use the coordinate at infinity of $\frak p \cup \vec w_{\frak p}$ that is included in the stabilization data at $\frak p$ to define this $C^{10}$-closeness. \item Moreover we assume that the diameter of the $u'$ image of each neck region of $\Sigma_{\frak Y}$ is smaller than $\epsilon_0$. We assume furthermore that $u'$ is pseudo-holomorphic in the neck regions. (The neck region here is the complement of the union of the extended cores $K_{\rm v}^{+\vec R}$.) \item We write $\frak Y = \frak Y_0 \cup \vec w'_{\frak p}(\frak Y)$, where $\vec w'_{\frak p}(\frak Y)$ are the $\ell_{\frak p}$ marked points that correspond to $\vec w_{\frak p}$. We assume that $(\frak Y_0\cup \vec w'_c,u')$ is $\epsilon_{0}$-close to $\frak p \cup \vec w^{\frak p}_c$ in the sense of Definition \ref{epsiloncloseto} after extending the core of $\frak p \cup \vec w_c^{\frak p}$ by $\vec R$. \end{enumerate} \par We say that $(\frak Y^{(1)},u^{\prime (1)},(\vec w^{\prime (1)}_c ; c\in \frak B))$ is {\it weakly equivalent} to $(\frak Y^{(2)},u^{\prime (2)},(\vec w^{\prime (2)}_c ; c\in \frak B))$ if there exists a bi-holomorphic map $v : \frak Y^{(1)}\to \frak Y^{(2)}$ such that \begin{enumerate} \item[(a)] $u^{\prime (1)} = u^{\prime (2)}\circ v$. \item[(b)] $v(w^{\prime (1)}_{c,i}) = w^{\prime (2)}_{c,\sigma_c(i)}$, where $\sigma_c \in \frak S_{\ell_c}$. \item[(c)] $v$ sends the $i$-th boundary marked point of $\frak Y^{(1)}$ to the $i$-th boundary marked point of $\frak Y^{(2)}$. $v$ sends the 1st, \dots, $\ell$-th interior marked points of $\frak Y^{(1)}$ to the corresponding interior marked points of $\frak Y^{(2)}$. $v$ sends the $(\ell+k)$-th interior marked point of $\frak Y^{(1)}$ to the $(\ell+\sigma(k))$-th interior marked point of $\frak Y^{(2)}$ for $k=1,\dots,\ell_{\frak p}$, where $\sigma \in \frak S_{\ell_{\frak p}}$.
\end{enumerate} \par We denote by $\overline{\frak U}_{k+1,(\ell;\ell_{\frak p},(\ell_c))}(\beta,\frak p;\frak B)_{\epsilon_{0},\vec T_{0}}$ the set of all weak equivalence classes of $(\frak Y,u',(\vec w'_c ; c\in \frak B))$ satisfying (1)-(4) above. (Here we use the weak equivalence relation defined by (a), (b), (c).) \par We say that $(\frak Y^{(1)},u^{\prime (1)},(\vec w^{\prime (1)}_c ; c\in \frak B))$ is {\it equivalent} to $(\frak Y^{(2)},u^{\prime (2)},(\vec w^{\prime (2)}_c ; c\in \frak B))$ if, in addition, $\sigma = \sigma_c = {\rm identity}$ in (a)-(c) above. Let $$ {\frak U}_{k+1,(\ell;\ell_{\frak p},(\ell_c))}(\beta,\frak p;\frak B)_{\epsilon_{0},\vec T_{0}} $$ be the set of equivalence classes of this equivalence relation. \end{defn} \begin{lem}\label{ippaiequivalnce} We may choose $\epsilon_{0}$ sufficiently small so that the following holds. Suppose $(\frak Y^{(1)},u^{\prime (1)},(\vec w^{\prime (1)}_c ; c\in \frak B))$ is weakly equivalent to $(\frak Y^{(2)},u^{\prime (2)},(\vec w^{\prime (2)}_c ; c\in \frak B))$ in the above sense and $ \frak Y^{(j)} = \overline{\Phi}(\frak y^{(j)} ,\vec T^{(j)} ,\vec \theta^{(j)} ) \in \mathcal M_{k+1,\ell+\ell_{\frak p}}. $ Then we have $$ (\frak y^{(2)} ,\vec T^{(2)} ,\vec \theta^{(2)} ) = v_* (\frak y^{(1)} ,\vec T^{(1)} ,\vec \theta^{(1)} ) $$ for some $v \in \Gamma_{\frak p} \subset \frak S_{\ell_{\frak p}}$. \end{lem} \begin{proof} The proof is by contradiction. Suppose there exist a sequence of positive numbers $\epsilon_{0,a} \to 0$ and objects $(u'_{(j),a},(\frak y^{(j),a} ,\vec T^{(j),a} ,\vec \theta^{(j),a}), (\vec w^{\prime (j), a}_c ; c\in \frak B))$ for $j=1,2$ and $a=1,2,\dots$ such that: \begin{enumerate} \item The object $(u'_{(1),a},(\frak y^{(1),a} ,\vec T^{(1),a} ,\vec \theta^{(1),a}),(\vec w^{\prime (1), a}_c ; c\in \frak B))$ is weakly equivalent to the object $(u'_{(2),a},(\frak y^{(2),a} ,\vec T^{(2),a} ,\vec \theta^{(2),a}),(\vec w^{\prime (2),a}_c ; c\in \frak B))$.
\item $ \frak Y^{(j),a} = \overline{\Phi}(\frak y^{(j),a} ,\vec T^{(j),a} ,\vec \theta^{(j),a} ) \in \mathcal M_{k+1,\ell+\ell_{\frak p}}. $ \item The objects $(u'_{(j),a},(\frak y^{(j),a} ,\vec T^{(j),a} ,\vec \theta^{(j),a}),(\vec w^{\prime (j),a}_c ; c\in \frak B))$ are representatives of elements of $\frak U_{k+1,(\ell;\ell_{\frak p},(\ell_c))}(\beta,\frak p;\frak B)_{\epsilon_{0,a},\vec T_{0}}$. \item There is no $v \in \Gamma_{\frak p}$ satisfying $ (\frak y^{(2),a} ,\vec T^{(2),a} ,\vec \theta^{(2),a}) = v_* (\frak y^{(1),a} ,\vec T^{(1),a} ,\vec \theta^{(1),a}) $. \end{enumerate} We will deduce a contradiction. By assumption there exist $\vec R_a \to \infty$ and biholomorphic maps $v_a : \frak Y^{(1),a} \to \frak Y^{(2),a}$ such that \begin{enumerate} \item[(I)] $\vert u'_{(2),a} \circ v_a - u'_{(1),a}\vert_{C^{10}(K_{{\rm v}}^{+\vec R_a})} < \epsilon_{0,a}$. \item[(II)] The diameter of the $u'_{(j),a}$ image of each connected component of the complement of the union of the extended cores $K_{{\rm v}}^{+\vec R_a}$ is smaller than $\epsilon_{0,a}$. \item[(III)] $v_a(w^{\prime (1), a}_{c,i}) = w^{\prime (2), a}_{c,\sigma_c(i)}$, where $\sigma_c \in \frak S_{\ell_c}$. \item[(IV)] $v_a$ sends the $i$-th boundary marked point of $\frak Y^{(1),a}$ to the $i$-th boundary marked point of $\frak Y^{(2),a}$. $v_a$ sends the 1st, \dots, $\ell$-th interior marked points of $\frak Y^{(1),a}$ to the corresponding interior marked points of $\frak Y^{(2),a}$. $v_a$ sends the $(\ell+k)$-th interior marked point of $\frak Y^{(1),a}$ to the $(\ell+\sigma_a(k))$-th interior marked point of $\frak Y^{(2),a}$ for $k=1,\dots,\ell_{\frak p}$, where $\sigma_a \in \frak S_{\ell_{\frak p}}$. \end{enumerate} By (I) and (II) we may take a subsequence (still denoted by the same symbol) such that $v_a$ converges to a biholomorphic map $v : \Sigma_{\frak p} \to \Sigma_{\frak p}$ such that $u_{\frak p} \circ v = u_{\frak p}$.
Then (III) and (IV) imply that $v \in \Gamma_{\frak p}$. \par So, changing $\frak Y^{(2),a}$ by $v$, we may assume $v = {\rm identity}$. Therefore $v_a$ converges to the identity. The stability then implies that $v_a$ is the identity. This contradicts (4). \end{proof} \begin{defn}\label{defEc} Let $\frak q^+ = (\frak Y,u',(\vec w'_c ; c\in \frak B)) \in \frak U_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak B)_{\epsilon_{0},\vec T_{0}}$. We define $$ E_c(\frak q^+) \subset \bigoplus_{\rm v \in C^0(\mathcal G(\frak Y))} \Gamma_0(\text{\rm Int}\,K^{\rm obst}_{\rm v}; (u')^*TX \otimes \Lambda^{01}) $$ as follows, where $\mathcal G(\frak Y)$ is the combinatorial type of $(\frak Y,u')$. We regard $K^{\rm obst}_{\rm v}$ as a subset of $\frak Y$. We note that $\frak p \cup \vec w^{\frak p}_{c}$ is $\epsilon_{\frak p_c}$-close to $\frak p_{c} \cup \vec w_{\frak p_c}$ and $(\frak Y\cup \vec w'_c,u')$ is $\epsilon_{0}$-close to $\frak p \cup \vec w^{\frak p}_{c}$ in the sense of Definition \ref{epsiloncloseto}. Therefore we have \begin{equation}\label{Ivpdefn2211} I^{{\rm v},\frak p_c}_{(\frak y_c,u_c),(\frak Y\cup \vec w'_c,u')} : E_{\frak p_c,\rm v}(\frak y_c,u_c) \to \Gamma_0(\text{\rm Int}\,K^{\rm obst}_{\rm v}; (u')^*TX \otimes \Lambda^{01}). \end{equation} Here $\frak p_c = (\frak x_c,u_c)$ and $\frak Y\cup \vec w'_c = \overline{\Phi}(\frak y_c,\vec T,\vec \theta)$. We also regard $K^{\rm obst}_{\rm v}$ as a subset of $\frak x_c$. (Note that the core of $\frak Y$ is canonically identified with the core of $\frak y_c$.) Then we define \begin{equation} E_c(\frak q^+) = \sum_{\rm v \in C^0(\mathcal G(\frak Y))} I^{{\rm v},\frak p_c}_{(\frak y_c,u_c),(\frak Y\cup \vec w'_c,u')}(E_{\frak p_c,\rm v}(\frak y_c,u_c)) \end{equation} and put \begin{equation}\label{kuraequationplus} \mathcal E_{\frak B}(\frak q^+) = \sum_{c \in \frak B} E_{c}(\frak q^+).
\end{equation} For $\frak A \subset \frak B$ we put \begin{equation} \mathcal E_{\frak A}(\frak q^+) = \sum_{c \in \frak A} E_{c}(\frak q^+). \end{equation} \end{defn} \begin{rem} When we define $E_c(\frak q^+)$, we use the additional marked points $\vec w'_c$ and $\vec w_{\frak p_c}$ that are assigned to $\frak p_c$. So this subspace is taken in a way independent of $\frak p$. This is important to prove that the coordinate change satisfies the cocycle condition later. We explained this point in \cite[the last three lines in the answer to question 4]{Fu1}. \end{rem} The next lemma is a consequence of Lemmas \ref{ippaiequivalnce} and \ref{ptequiv}. \begin{lem} Suppose that $(\frak Y^{(1)},u^{\prime (1)},(\vec w^{\prime (1)}_c ; c\in \frak B))$ is weakly equivalent to $(\frak Y^{(2)},u^{\prime (2)},(\vec w^{\prime (2)}_c ; c\in \frak B))$ and $v$ is as in Lemma \ref{ippaiequivalnce}. We put $\frak q^{+ (j)} = (\frak Y^{(j)},u^{\prime (j)},(\vec w^{\prime (j)}_c ; c\in \frak B))$. Then $$ E_c(\frak q^{+ (2)}) = v_*E_c(\frak q^{+ (1)}). $$ \end{lem} Now we define: \begin{defn}\label{defthickened} The {\it thickened moduli space} $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon_{0},\vec T_{0}}$ is the subset of $\frak U_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak B)_{\epsilon_{0},\vec T_{0}}$ consisting of the equivalence classes of elements $\frak q^+ = (\frak Y,u',(\vec w'_c ; c\in \frak B)) \in \frak U_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak B)_{\epsilon_{0},\vec T_{0}}$ that satisfy \begin{equation}\label{mainequationformulamod} \overline\partial u' \equiv 0 \mod \mathcal E_{\frak A}(\frak q^+). \end{equation} In case $\frak A = \frak B$ we write $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)_{\epsilon_{0},\vec T_{0}}$. \end{defn} \begin{lem}\label{nbdregmaineq} Assume $\frak A \ne \emptyset$. 
We can choose $\epsilon_{0}$, $\epsilon_{\frak p_c}$ sufficiently small and $\vec T_{0}$ sufficiently large such that the following holds after extending the core of $\frak p \cup \vec w_{\frak p}$. \begin{enumerate} \item If $\frak q^+ = (\frak Y,u',(\vec w'_c ; c \in \frak B))$ is in $\frak U_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak B)_{\epsilon_{0},\vec T_{0}}$ then the equation (\ref{mainequationformulamod}) is Fredholm regular. \item If $\frak q^+ = (\frak Y,u',(\vec w'_c ; c\in \frak B))$ is in $\frak U_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak B)_{\epsilon_{0},\vec T_{0}}$ then $\frak q^+$ is evaluation map transversal. \item $\frak p \in \frak U_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak B)_{\epsilon_{0},\vec T_{0}}$. \end{enumerate} \end{lem} Here the definition of Fredholm regularity is the same as Definition \ref{fredreg} and the definition of evaluation map transversality is the same as Definition \ref{maptrans}. The proof of Lemma \ref{nbdregmaineq} is the same as that of Proposition \ref{linearMV}. \begin{rem}\label{Lem258} More precisely, we first choose $\epsilon_{\frak p_c}$ so that Lemma \ref{nbdregmaineq} holds for $\frak q^+ =\frak p \cup \vec w_{\frak p}$. (The choice of $\epsilon_{\frak p_c}$ is done at the stage when we take $\frak W^+(\frak p_c)$ in Definition \ref{transconst}.) Then we take $\epsilon_0$ small so that Lemma \ref{nbdregmaineq} holds for any element $\frak q^+$ of $\frak U_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak B)_{\epsilon_{0},\vec T_{0}}$. \end{rem} \par \begin{cor}\label{smoothness00} If $\epsilon_{0}$ and $\epsilon_{\frak p_c}$ are sufficiently small, then $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon_{0},\vec T_{0}}$ has a stratumwise structure of a smooth manifold. The dimension of the top stratum is $$ \dim \mathcal M_{k+1,\ell}(\beta) + 2\sum_{c\in \frak B}\ell_c + 2\ell_{\frak p} + \sum_{c \in \frak A}\dim_{\R} E_c.
$$ Here $\dim \mathcal M_{k+1,\ell}(\beta)$ is the virtual dimension, given by $$ \dim \mathcal M_{k+1,\ell}(\beta) = k +1 + 2\ell -3 + 2\mu(\beta). $$ ($\mu(\beta)$ is the Maslov index.) The dimension of the stratum $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;\mathcal G)_{\epsilon_{0},\vec T_{0}}$ is $$ \dim \mathcal M_{k+1,\ell}(\beta) + 2\sum_{c\in \frak B}\ell_c + 2\ell_{\frak p} + \sum_{c \in \frak A}\dim_{\R} E_c - 2\#C^1_{\rm c}(\mathcal G) - \#C^1_{\rm o}(\mathcal G). $$ $\Gamma_{\frak p}$ acts effectively on $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon_{0},\vec T_{0}}$. \end{cor} Corollary \ref{smoothness00} is an immediate consequence of Lemma \ref{nbdregmaineq}, the implicit function theorem, and an index calculation. \begin{rem} We can define the topology of $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon_{0},\vec T_{0}}$ in the same way as the topology of $\mathcal M_{k+1,\ell}(\beta)$. We omit it here and will define the topology of $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon_{0},\vec T_{0}}$ in the next subsection. (Definition \ref{topthickmoduli}.) \end{rem} So far we have described the case of $\mathcal M_{k+1,\ell}(\beta)$. The case of $\mathcal M^{\rm cl}_{\ell}(\alpha)$ is similar with obvious modifications. \par\medskip \section{Gluing analysis in the general case} \label{glueing} The purpose of this section is to generalize Theorems \ref{gluethm1} and \ref{exdecayT} to the case of the thickened moduli space $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)_{\epsilon_{0},\vec T_{0}}$ we defined in the last subsection. Actually this generalization is straightforward. \par We first state the result. Let $\mathcal G_{\frak p}$ be the combinatorial type of $\frak p$.
We first consider the stratum $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;\mathcal G_{\frak p})_{\epsilon_{0}}$. We did not include $\vec T_{0}$ in the notation since this parameter does not play a role in our stratum. (Note $T_{{\rm e},0}$ is the gluing parameter. We do not perform gluing to obtain an element in the same stratum as $\frak p$.) We write \begin{equation}\label{2193} V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;\epsilon_0) = \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;\mathcal G_{\frak p})_{\epsilon_{0}}. \end{equation} In this subsection this space plays the role of $V_1 \times_L V_2$ in Theorem \ref{gluethm1}. In case $\frak B = \frak A$, we put $$ V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\epsilon_0) := V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak A;\epsilon_0). $$ \begin{lem}\label{lem:191} $V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;\epsilon_0)$ has the structure of a smooth manifold. \end{lem} \begin{proof} This is a special case of Corollary \ref{smoothness00} and is a consequence of Lemma \ref{transpermutelem} (2) and (3). We give a proof for completeness. \par Let $c \in \frak C({\frak p})$. Since $\frak p \prec \frak p_c$, there exists a map $\pi : \mathcal G_{\frak p_c} \to \mathcal G_{\frak p}$. For each ${\rm v}' \in C^0(\mathcal G_{\frak p_c})$ we obtain an element $\frak p_{c,{\rm v}'} \in \mathcal M_{k_{{\rm v}'}+1,\ell_{{\rm v}'}}(\beta_{{\rm v}'})$ and $\frak p_{c,{\rm v}'} \cup \vec w_{c,{\rm v}'}\in \mathcal M_{k_{{\rm v}'}+1,\ell_{{\rm v}'}+ \ell_{c,{\rm v}'}}(\beta_{{\rm v}'})$. For ${\rm v} \in C^0_{\rm d}(\mathcal G_{\frak p})$ the union of $\frak p_{c,{\rm v}'}$ for all ${\rm v}'$ with $\pi({\rm v}') = {\rm v}$ is an element $\frak p_{c,{\rm v}} \in \mathcal M_{k_{\rm v}+1,\ell_{\rm v}}(\beta_{{\rm v}})$. 
Together with the union of the $\vec w_{c,{\rm v}'}$'s it gives $\frak p_{c,{\rm v}} \cup \vec w_{c,{\rm v}}\in \mathcal M_{k_{\rm v}+1,\ell_{\rm v}+ \ell_{c,{\rm v}}}(\beta_{{\rm v}})$. The obstruction bundle data centered at $\frak p_c$ induces one centered at $\frak p_{c,{\rm v}}$ in an obvious way. \par Let $\frak p_{\rm v} \in \mathcal M_{k_{\rm v}+1,\ell_{\rm v}}(\beta_{\rm v})$ be an element obtained by restricting various data of $\frak p$ to the irreducible component of $\frak x_{\frak p}$ corresponding to the vertex ${\rm v}$ in an obvious way. We have additional marked points $\vec w_c^{\frak p_{\rm v}}$ by restricting $\vec w_c^{\frak p}$. Then $\frak p_{\rm v} \cup\vec w_c^{\frak p_{\rm v}}$ is $\epsilon_c$-close to $\frak p_{c,{\rm v}} \cup \vec w_{c,{\rm v}}$. \par We have taken the additional marked points $\vec w_{\frak p}$ on $\frak p$. Let $\vec w_{\frak p,{\rm v}}$ be the part of it that lies on the irreducible component $\frak p_{\rm v}$. Then $\frak p_{\rm v} \cup \vec w_{\frak p,{\rm v}} \in \mathcal M_{k_{\rm v}+1,\ell_{\rm v}+\ell_{\frak p,{\rm v}}}(\beta_{{\rm v}})$. \par Using $\frak p_{c,{\rm v}}, \vec w_{c,{\rm v}}, \frak p_{\rm v}, \vec w_{\frak p,{\rm v}}, \vec w_{c}^{\frak p_{\rm v}}$ etc., we define $ \mathcal M_{k_{\rm v}+1,(\ell_{\rm v},\ell_{\frak p,{\rm v}},(\ell_{c,{\rm v}}))} (\beta_{\rm v};\frak p_{\rm v};\frak A;\frak B;\text{\rm point})_{\epsilon_{0}}. $ (Note that $\frak p_{\rm v}$ is irreducible. So the corresponding graph is trivial, that is, the graph without edges.) We note again that $\frak p_{\rm v}$ is irreducible and is source stable. So the thickened moduli space $ \mathcal M_{k_{\rm v}+1,(\ell_{\rm v},\ell_{\frak p,{\rm v}},(\ell_{c,{\rm v}}))} (\beta_{\rm v};\frak p_{\rm v};\frak A;\frak B;\text{\rm point})_{\epsilon_{0}} $ is the set parametrized by the solutions of the equation $$ \overline\partial u' \equiv 0 \mod \mathcal E_{\frak B}(u') $$ together with the complex structure of the source. 
By Lemma \ref{transpermutelem} (2) the linearized operator of this equation is surjective. Therefore $ \mathcal M_{k_{\rm v}+1,(\ell_{\rm v},\ell_{\frak p,{\rm v}},(\ell_{c,{\rm v}}))} (\beta_{\rm v};\frak p_{\rm v};\frak A;\frak B;\text{\rm point})_{\epsilon_{0}} $ is a smooth manifold in a neighborhood of $(\frak p_{\rm v},\vec w_{\frak p,{\rm v}},(\vec w_c^{\frak p_{\rm v}}))$ for each ${\rm v} \in C^0_{\rm d}(\mathcal G_{\frak p})$. (Note that we add marked points so that there is no automorphism of elements of $\mathcal M_{k_{\rm v}+1,\ell_{\rm v}}(\beta_{\rm v})$. So it is not only an orbifold but is also a manifold.) The case $\rm v \in C_{\rm s}^0(\mathcal G_{\frak p})$ can be discussed in the same way, and we obtain $\mathcal M^{\rm cl}_{(\ell_{\rm v},\ell_{\frak p,{\rm v}},(\ell_{c,{\rm v}}))} (\beta_{\rm v};\frak p_{\rm v};\frak A;\frak B;\text{\rm point})_{\epsilon_{0}} $, which is also a smooth manifold. \par We take the product of these spaces over all $\rm v \in C^0(\mathcal G_{\frak p})$. By taking evaluation maps we have $$ \aligned &\prod_{\rm v \in C_{\rm d}^0(\mathcal G_{\frak p})}\mathcal M_{k_{\rm v}+1,(\ell_{\rm v},\ell_{\frak p,{\rm v}},(\ell_{c,{\rm v}}))} (\beta_{\rm v};\frak p_{\rm v};\frak A;\frak B;\text{\rm point})_{\epsilon_{0}} \\ &\times \prod_{\rm v \in C_{\rm s}^0(\mathcal G_{\frak p})} \mathcal M^{\rm cl}_{(\ell_{\rm v},\ell_{\frak p,{\rm v}},(\ell_{c,{\rm v}}))} (\beta_{\rm v};\frak p_{\rm v};\frak A;\frak B;\text{\rm point})_{\epsilon_{0}} \\ &\to \left(\prod_{\rm e\in C_{\rm o}^1(\mathcal G_{\frak p})} L \times \prod_{\rm e\in C_{\rm c}^1(\mathcal G_{\frak p})} X\right)^2. \endaligned $$ Lemma \ref{transpermutelem} (3) implies that this map is transversal to the diagonal set $\prod_{\rm e\in C_{\rm o}^1(\mathcal G_{\frak p})} L \times \prod_{\rm e\in C_{\rm c}^1(\mathcal G_{\frak p})} X = L^{\# C_{\rm o}^1(\mathcal G_{\frak p})}\times X^{\#C_{\rm c}^1(\mathcal G_{\frak p})}$. 
The inverse image of the diagonal set is $V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;\epsilon_0)$. \end{proof} \par The gluing we will perform below defines a map \begin{equation}\label{219433} \aligned \text{\rm Glu} : V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;\epsilon_1) &\times (\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1) \\ &\to \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon_{0},\vec T_{0}}. \endaligned \end{equation} For a fixed $(\vec T,\vec \theta)$ we denote the restriction of $\text{\rm Glu}$ to $V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;\epsilon_1) \times \{(\vec T,\vec \theta)\}$ by $\text{\rm Glu}_{(\vec T,\vec \theta)}$. \begin{defn} $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;(\vec T,\vec \theta))_{\epsilon_{0},\vec T_{0}}$ is the subset of the space $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon_{0},\vec T_{0}}$ consisting of the equivalence classes of $(\frak Y,u')$ such that $\frak Y ={\overline\Phi}(\frak y,\vec T,\vec \theta)$, where the combinatorial type of $\frak y$ is $\mathcal G_{\frak p}$. In case $\frak A = \frak B$, we put $$ \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)_{\epsilon_{0},\vec T_{0}} = \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak A)_{\epsilon_{0},\vec T_{0}}. 
$$ \end{defn} \begin{thm}\label{gluethm3} For each sufficiently small $\epsilon_3$ and sufficiently large $\vec T$, there exist $\epsilon_2$, $\epsilon_4$ and a $\Gamma_{\frak p}^+$-equivariant map \begin{equation} \aligned \text{\rm Glu}_{(\vec T,\vec \theta)} : &V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;\epsilon_4)\\ &\to \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;(\vec T,\vec \theta))_{\epsilon_{2}} \endaligned \end{equation} which is a diffeomorphism onto its image. The image of $\text{\rm Glu}_{(\vec T,\vec \theta)}$ contains the space $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;(\vec T,\vec \theta))_{\epsilon_{3}}$. \end{thm} Here $\vec T$ being sufficiently large means that each of its components is sufficiently large. Theorem \ref{gluethm3} is a generalization of Theorem \ref{gluethm1}. \begin{defn}\label{topthickmoduli} We define a topology on $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;(\vec T,\vec \theta))_{\epsilon}$ for $\epsilon < \epsilon_3$ and $\vec T_0$ large so that $\text{\rm Glu}$ is a homeomorphism onto its image. \par It is easy to see that this topology coincides with the topology that is defined in the same way as the topology of $\mathcal M_{k+1,\ell}(\beta)$. \end{defn} To state a generalization of Theorem \ref{exdecayT}, that is, the exponential decay estimate of $T$-derivatives, we take $\vec R$ and the extended core $K_{\rm v}^{+\vec R}$ as in (\ref{extendedcore}). By restriction we define a map \begin{equation}\label{2195form} \aligned \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}&(\beta;\frak p;\frak A;\frak B;(\vec T,\vec \theta))_{\epsilon_{2},\vec T_{0}} \\ &\to C^{\infty}((K_{\rm v}^{+\vec R},K_{\rm v}^{+\vec R}\cap \partial \Sigma_{\frak p,{\rm v}}),(X,L)). 
\endaligned \end{equation} We compose it with $\text{\rm Glu}_{(\vec T,\vec \theta)}$ and obtain $\text{\rm Glures}_{(\vec T,\vec \theta),{\rm v},\vec R}$. \begin{thm}\label{exdecayT33} For each $m$ and $\vec R$ there exist $T(m)$, $C_{6,m,\vec R}$ and $\delta$ such that the following holds for $T^{\rm o}_{\rm e}>T(m)$, $T^{\rm c}_{\rm e}>T(m)$, $n + \vert{\vec k_{T}}\vert + \vert{\vec k_{\theta}}\vert \le m - 10$, and $\vert{\vec k_{T}}\vert + \vert{\vec k_{\theta}}\vert > 0$. \begin{equation}\label{est196} \left\Vert \nabla_{\rho}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}} \text{\rm Glures}_{(\vec T,\vec \theta), {\rm v},\vec R}\right\Vert_{L^2_{m+1-\vert{\vec k_{T}}\vert - \vert{\vec k_{\theta}}\vert}} < C_{6,m,\vec R}e^{-\delta' (\vec k_{T}\cdot \vec T+\vec k_{\theta}\cdot \vec T^{\rm c})}. \end{equation} \par Here $\nabla_{\rho}^n$ is the $n$-th derivative in the $\rho \in V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;\epsilon_2)$ direction and $\delta'>0$ depends only on $\delta$ and $m$. \end{thm} The proofs of Theorems \ref{gluethm3} and \ref{exdecayT33} occupy the rest of this subsection. We begin by introducing some notation. Suppose that $(\frak x^{\rho,+},u^{\rho},(\vec w^{\rho}_{c}))$ is a representative of an element $\rho$ of $V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\epsilon_0)$. We put $\Sigma_{\frak x^{\rho,+}} = \Sigma^{\rho}$. Its marked points are denoted by $\vec z^{\rho}$, $\vec z^{{\rm int},\rho}$ and $\vec w^{\rho}_{\frak p}$, $\vec w^{\rho}_{c}$. Here the $w$'s are additional marked points. 
We divide each of the irreducible components $\Sigma^{\rho}_{\rm v}$ of $\Sigma^{\rho}$ as \begin{equation}\label{sigmayv} \aligned K^{\rho}_{\rm v} &\cup \bigcup_{{\rm e}\in C^1_{\mathrm o}(\mathcal G) \atop \text{${\rm e}$ is an outgoing edge of ${\rm v}$}} (0,\infty) \times [0,1] \\ &\cup \bigcup_{{\rm e}\in C^1_{\mathrm o}(\mathcal G) \atop \text{${\rm e}$ is an incoming edge of ${\rm v}$}} (-\infty,0) \times [0,1] \\ &\cup \bigcup_{{\rm e}\in C^1_{\mathrm c}(\mathcal G) \atop \text{${\rm e}$ is an outgoing edge of ${\rm v}$}} (0,\infty) \times S^1 \\ &\cup \bigcup_{{\rm e}\in C^1_{\mathrm c}(\mathcal G) \atop \text{${\rm e}$ is an incoming edge of ${\rm v}$}} (-\infty,0) \times S^1, \endaligned \end{equation} where the coordinates of the 2nd, 3rd, 4th, and 5th summands are $(\tau'_{\rm e},t_{\rm e})$, $(\tau''_{\rm e},t_{\rm e})$, $(\tau'_{\rm e},t_{\rm e}')$, and $(\tau''_{\rm e},t_{\rm e}'')$, respectively. Here $\tau'_{\rm e} \in (0,\infty)$, $\tau''_{\rm e} \in (-\infty,0)$. \par We call the end corresponding to $\rm e$ the {\it $\rm e$-th end}. \par We recall \begin{eqnarray}\label{neckcoordinate2} \tau_{\rm e} &=& \tau'_{\rm e} - 5T_{\rm e} = \tau''_{\rm e} + 5T_{\rm e}, \label{cctau12}\\ t_{\rm e} &=& t'_{\rm e} = t''_{\rm e} - \theta_{\rm e}.\label{ccttt12} \end{eqnarray} We put $$ u^{\rho}_{\rm v} = u^{\rho}\vert_{K_{\rm v}}, \qquad u^{\rho}_{\rm e} = u^{\rho}\vert_{\text{$\rm e$-th neck region}}. $$ We denote by $\Sigma_{\frak Y} = \Sigma_{\vec T,\vec \theta}^{\rho}$ a representative of $\frak Y = \overline{\Phi}(\frak y,\vec T,\vec \theta)$. The curve $\Sigma_{\vec T,\vec \theta}^{\rho}$ is a union \begin{equation}\label{neddecomposit} \aligned \bigcup_{\rm v \in C^0(\mathcal G_{\frak p})}K^{\rho}_{\rm v} &\cup \bigcup_{{\rm e}\in C^1_{\mathrm o}(\mathcal G)} [-5T_{\rm e},5T_{\rm e}] \times [0,1] \\ &\cup \bigcup_{{\rm e}\in C^1_{\mathrm c}(\mathcal G)} [-5T_{\rm e},5T_{\rm e}] \times S^1. 
\endaligned \end{equation} The coordinates on the 2nd and 3rd summands are $(\tau_{\rm e},t_{\rm e})$. \par We call $[-5T_{\rm e},5T_{\rm e}] \times [0,1]$ or $[-5T_{\rm e},5T_{\rm e}] \times S^1$ the {\it $\rm e$-th neck}. \par In case $T_{\rm e} = \infty$, the curve $\Sigma_{\vec T,\vec \theta}^{\rho}$ contains $([0,\infty) \cup (-\infty,0]) \times [0,1]$ or $([0,\infty) \cup (-\infty,0]) \times S^1$ corresponding to the $\rm e$-th edge. We call $[0,\infty) \times [0,1]$ (or $[0,\infty) \times S^1$) the {\it outgoing $\rm e$-th end} and $(-\infty,0] \times [0,1]$ (or $(-\infty,0] \times S^1$) the {\it incoming $\rm e$-th end}. \par We call $K_{\rm v}$ the {\it $\rm v$-th core}. \par The restriction of $u^{\rho}$ to $K_{\rm v}$ is written as $u^{\rho}_{\rm v}$. The restriction of $u^{\rho}$ to the $\rm e$-th neck is written as $u^{\rho}_{\rm e}$. \par For each $\rm e$, let $\rm v_1$ and $\rm v_2$ be its incoming and outgoing vertices. We have \begin{equation}\label{ryogawalimitcoin} \lim_{\tau_{\rm e}\to -\infty} u_{\rm v_2}^{\rho}(\tau_{\rm e},t_{\rm e}) = \lim_{\tau_{\rm e}\to \infty} u_{\rm v_1}^{\rho}(\tau_{\rm e},t_{\rm e}), \end{equation} and (\ref{ryogawalimitcoin}) is independent of $t_{\rm e}$. We write this limit as $p^{\rho}_{\rm e}$. We take a Darboux coordinate in a neighborhood of each $p^{\rho}_{\rm e}$ such that $L$ is flat in this coordinate. We choose the map $\rm E$ such that (\ref{Einanbdofp0}) holds in this neighborhood of $p^{\rho}_{\rm e}$. \par For $\rm e \in C^{1}_{\rm o}(\mathcal G_{\frak p})$ with $T_{\rm e} \ne \infty$, we define \begin{equation}\label{DemanAetc} \aligned \mathcal A_{{\rm e},T} &= [-T_{\rm e}-1,-T_{\rm e}+1] \times [0,1] \subset [-5T_{\rm e},5T_{\rm e}] \times [0,1], \\ \mathcal B_{{\rm e},T} &= [T_{\rm e}-1,T_{\rm e}+1] \times [0,1] \subset [-5T_{\rm e},5T_{\rm e}] \times [0,1], \\ \mathcal X_{{\rm e},T} &= [-1,+1] \times [0,1] \subset [-5T_{\rm e},5T_{\rm e}] \times [0,1]. 
\endaligned \end{equation} In case $\rm e \in C^{1}_{\rm c}(\mathcal G_{\frak p})$, the sets $\mathcal A_{{\rm e},T}$, $\mathcal B_{{\rm e},T}$, $\mathcal X_{{\rm e},T}$ are defined in the same way as above replacing $[0,1]$ by $S^1$. \par If $\rm v$ is a vertex of $\rm e$ then $\mathcal A_{{\rm e},T}$, $\mathcal B_{{\rm e},T}$, $\mathcal X_{{\rm e},T}$ may also be regarded as subsets of $\Sigma_{\rm v}^{\rho}$. \par Let $\chi_{{\rm e},\mathcal A}^{\leftarrow}$, $\chi_{{\rm e},\mathcal A}^{\rightarrow}$ be smooth functions on $[-5T_{\rm e},5T_{\rm e}]\times [0,1]$ or $[-5T_{\rm e},5T_{\rm e}]\times S^1$ such that \begin{equation}\label{eq201} \chi_{{\rm e},\mathcal A}^{\leftarrow}(\tau_{\rm e},t_{\rm e}) = \begin{cases} 1 & \tau_{\rm e} < -T_{\rm e}-1 \\ 0 & \tau_{\rm e} > -T_{\rm e}+1. \end{cases} \end{equation} $$ \chi_{{\rm e},\mathcal A}^{\rightarrow} = 1 - \chi_{{\rm e},\mathcal A}^{\leftarrow}. $$ We define \begin{equation} \chi_{{\rm e},\mathcal B}^{\leftarrow}(\tau_{\rm e},t_{\rm e}) = \begin{cases} 1 & \tau_{\rm e} < T_{\rm e}-1 \\ 0 & \tau_{\rm e} > T_{\rm e}+1. \end{cases} \end{equation} $$ \chi_{{\rm e},\mathcal B}^{\rightarrow} = 1 - \chi_{{\rm e},\mathcal B}^{\leftarrow}. $$ We define \begin{equation} \chi_{{\rm e},\mathcal X}^{\leftarrow}(\tau_{\rm e},t_{\rm e}) = \begin{cases} 1 & \tau_{\rm e} < -1 \\ 0 & \tau_{\rm e} > 1. \end{cases} \end{equation} $$ \chi_{{\rm e},\mathcal X}^{\rightarrow} = 1 - \chi_{{\rm e},\mathcal X}^{\leftarrow}. $$ We extend these functions to $\Sigma_{\vec T,\vec \theta}^{\rho}$ and $\Sigma_{\rm v}^{\rho}$ so that they are locally constant on the cores. We denote them by the same symbols. \par We next introduce weighted Sobolev norms and their local versions for sections on $\Sigma_{\rm v}^{\rho}$ as follows. 
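Before stating the definitions, we recall the standard reason (an observation for the reader, not a new assumption) why the asymptotic constants $v_{\rm e}$ must be recorded separately in the weighted spaces: on a strip-like end the weight grows exponentially, so a section asymptotic to a nonzero constant is never integrable against it.

```latex
% On an outgoing end with coordinate \tau'_{\rm e} \in (0,\infty) and weight
% e^{\delta \tau'_{\rm e}}, a section equal to a constant v \ne 0 satisfies
\int_{1}^{\infty}\!\!\int_{0}^{1}
  e^{\delta \tau'_{\rm e}} \vert v \vert^{2}\, dt_{\rm e}\, d\tau'_{\rm e}
  = \vert v \vert^{2} \int_{1}^{\infty} e^{\delta \tau'_{\rm e}}\, d\tau'_{\rm e}
  = \infty .
% Subtracting the parallel transport {\rm Pal}(v_{\rm e}) leaves a difference
% decaying like e^{-\delta_0 \tau'_{\rm e}}; choosing 0 < \delta < \delta_0
% makes the weighted integral finite.
```

Here $\delta_0$ denotes the exponential decay rate of the sections in question; its existence is an assumption of this illustration, matching the usual choice of $\delta$ smaller than the decay rate of pseudo-holomorphic curves on the ends.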
We define a smooth function $e_{\rm v,\delta} : \Sigma_{\rm v}^{\rho} \to [1,\infty)$ by \begin{equation}\label{e1deltamulti} e_{\rm v,\delta}(\tau_{\rm e},t_{\rm e}) \begin{cases} =1 &\text{on $K_{\rm v}$,} \\ =e^{\delta\vert \tau_{\rm e} + 5T_{\rm e}\vert} &\text{if $\tau_{\rm e} > 1 - 5T_{\rm e}$, and $\rm e$ is an outgoing edge of $\rm v$,}\\ \in [1,10] &\text{if $\tau_{\rm e} < 1 - 5T_{\rm e}$, and $\rm e$ is an outgoing edge of $\rm v$,}\\ =e^{\delta\vert \tau_{\rm e} - 5T_{\rm e}\vert} &\text{if $\tau_{\rm e} < 5T_{\rm e}-1$, and $\rm e$ is an incoming edge of $\rm v$,}\\ \in [1,10] &\text{if $\tau_{\rm e} > 5T_{\rm e}-1$, and $\rm e$ is an incoming edge of $\rm v$.} \end{cases} \end{equation} We also define a weight function $e_{\vec T,\delta} : \Sigma_{\vec T,\vec \theta}^{\rho} \to [1,\infty)$ as follows: \begin{equation}\label{e2delta} e_{\vec T,\delta}(\tau_{\rm e},t_{\rm e}) \begin{cases} =e^{\delta\vert \tau_{\rm e} - 5T_{\rm e}\vert} &\text{if $1<\tau_{\rm e} < 5T_{\rm e}-1$,}\\ = e^{\delta\vert \tau_{\rm e} + 5T_{\rm e}\vert} &\text{if $-1>\tau_{\rm e} > 1-5T_{\rm e}$,}\\ =1 &\text{on $K_{\rm v}$,} \\ \in [1,10] &\text{if $\vert\tau_{\rm e} - 5T_{\rm e}\vert < 1$ or $\vert\tau_{\rm e} + 5T_{\rm e}\vert < 1$,} \\ \in [e^{5T_{\rm e}\delta}/10,e^{5T_{\rm e}\delta}] &\text{if $\vert\tau_{\rm e}\vert < 1$}. \end{cases} \end{equation} The weighted Sobolev norm we use for $L^2_{m,\delta}(\Sigma^{\rho}_{\rm v};(u_{\rm v}^{\rho})^*TX\otimes \Lambda^{01})$ is given by \begin{equation} \Vert s\Vert^2_{L^2_{m,\delta}} = \sum_{k=0}^m \int_{\Sigma_{\rm v}^{\rho}} e_{\rm v,\delta} \vert \nabla^k s\vert^2 \text{\rm vol}_{\Sigma_{\rm v}^{\rho}}. \end{equation} \begin{defn}\label{Sobolev263} The Sobolev space $L^2_{m+1,\delta}((\Sigma_{\rm v}^{\rho},\partial \Sigma_{\rm v}^{\rho});(u_{\rm v}^{\rho})^*TX,(u_{\rm v}^{\rho})^*TL)$ consists of elements $(s,\vec v)$ with the following properties. 
\begin{enumerate} \item $\vec v = (v_{\rm e})$ where $\rm e$ runs on the set of edges of $\rm v$ and $v_{\rm e} \in T_{p^{\rho}_{\rm e}}(X)$ (in case $\rm e\in C^1_{\rm c}(\mathcal G)$) or $v_{\rm e} \in T_{p^{\rho}_{\rm e}}(L)$ (in case $\rm e\in C^1_{\rm o}(\mathcal G)$). \item The following norm is finite. \begin{equation}\label{normformjulamulti} \aligned \Vert (s,\vec v)\Vert^2_{L^2_{m+1,\delta}} = &\sum_{k=0}^{m+1} \int_{K_{\rm v}} \vert \nabla^k s\vert^2 \text{\rm vol}_{\Sigma_{\rm v}^{\rho}}+ \sum_{{\rm e} \text{: edges of $\rm v$}}\Vert v_{\rm e}\Vert^2\\ &+ \sum_{k=0}^{m+1}\sum_{{\rm e} \text{: edges of $\rm v$}} \int_{\text{e-th end}} e_{\rm v,\delta}\vert \nabla^k(s - \text{\rm Pal}(v_{\rm e}))\vert^2 \text{\rm vol}_{\Sigma_{\rm v}^{\rho}}. \endaligned \end{equation} \end{enumerate} \end{defn} \begin{defn} We define \begin{equation}\label{evaluationmkepot} \aligned D{\rm ev}_{\mathcal G_{\frak p}} : &\bigoplus_{{\rm v}\in C^0_{\rm d}(\mathcal G_{\frak p})} L^2_{m+1,\delta}((\Sigma_{{\rm v}}^{\rho},\partial \Sigma_{\rm v}^{\rho});(u_{\rm v}^{\rho})^*TX,(u_{\rm v}^{\rho})^*TL)\\ &\oplus \bigoplus_{{\rm v}\in C^0_{\rm s}(\mathcal G_{\frak p})} L^2_{m+1,\delta}(\Sigma_{{\rm v}}^{\rho};(u_{\rm v}^{\rho})^*TX) \\ &\to \bigoplus_{\rm e \in C^1_{\rm o}(\mathcal G_{\frak p})}T_{p^{\rho}_{\rm e}}L \oplus \bigoplus_{\rm e \in C^1_{\rm c}(\mathcal G_{\frak p})}T_{p^{\rho}_{\rm e}}X \endaligned\end{equation} as in (\ref{evaluationmap2diff}). \end{defn} \begin{defn}\label{Lhattaato1} We denote the kernel of (\ref{evaluationmkepot}) by $$ L^2_{m+1,\delta}((\Sigma^{\rho},\partial \Sigma^{\rho});(u^{\rho})^*TX,(u^{\rho})^*TL). $$ \end{defn} \par We next define weighted Sobolev norms for the sections of various bundles on $\Sigma_{\vec T,\vec \theta}^{\rho}$. Let $$ u' : (\Sigma_{\vec T,\vec \theta}^{\rho},\partial \Sigma_{\vec T,\vec \theta}^{\rho}) \to (X,L) $$ be a smooth map of homology class $\beta$ that is pseudo-holomorphic in the neck region and has finite energy. 
(We include the case when $u'$ is not pseudo-holomorphic in the neck region but satisfies the same exponential decay estimate as the pseudo-holomorphic curve.) We first consider the case when all $T_{\rm e} \ne \infty$. In this case $\Sigma_{\vec T,\vec \theta}^{\rho}$ is compact. We consider an element $$ s \in L^2_{m+1}((\Sigma_{\vec T,\vec \theta}^{\rho},\partial \Sigma_{\vec T,\vec \theta}^{\rho});(u')^*TX,(u')^*TL). $$ Since we take $m$ large, the section $s$ is continuous. We take a point $(0,1/2)_{\rm e}$ in the $\rm e$-th neck. Since $s \in L^2_{m+1}$, its value $s((0,1/2)_{\rm e}) \in T_{u'((0,1/2)_{\rm e})}X$ is well-defined. \par We take a coordinate around $p_{\rm e}^{\rho}$ such that, in case $\rm e \in C^1_{\rm o}(\mathcal G)$, our Lagrangian submanifold $L$ is linear in this coordinate. We use this coordinate to find a canonical trivialization of $TX$ in a neighborhood of $p_{\rm e}^{\rho}$, and we use this trivialization to define $\text{\rm Pal}$ below. We put \begin{equation}\label{normformjula5multi} \aligned \Vert s\Vert^2_{L^2_{m+1,\delta}} = &\sum_{k=0}^{m+1}\sum_{\rm v} \int_{K_{\rm v}} \vert \nabla^k s\vert^2 \text{\rm vol}_{\Sigma_{\rm v}^{\rho}}\\ &+ \sum_{k=0}^{m+1}\sum_{\rm e} \int_{\text{e-th neck}} e_{\vec T,\delta}\vert \nabla^k(s - \text{\rm Pal}(s((0,1/2)_{\rm e})))\vert^2 dt_{\rm e}d\tau_{\rm e} \\&+ \sum_{\rm e}\Vert s((0,1/2)_{\rm e})\Vert^2. \endaligned \end{equation} For a section $ s \in L^2_{m}(\Sigma_{\vec T,\vec \theta}^{\rho};(u')^*TX\otimes \Lambda^{01}) $ we define \begin{equation}\label{normformjula52multi} \Vert s\Vert^2_{L^2_{m,\delta}} = \sum_{k=0}^{m} \int_{\Sigma_{\vec T,\vec \theta}^{\rho}} e_{\vec T,\delta}\vert \nabla^k s\vert^2 \text{\rm vol}_{\Sigma_{\vec T,\vec \theta}^{\rho}}. \end{equation} \par We next consider the case when some of the edges $\rm e$ have infinite length, namely $T_{\rm e} = \infty$. Let $C^{1,\rm{inf}}_{\rm o}(\mathcal G_{\frak p},\vec T)$ (resp. 
$C^{1,\rm{inf}}_{\rm c}(\mathcal G_{\frak p},\vec T)$) be the set of elements $\rm e$ in $C^{1}_{\rm o}(\mathcal G_{\frak p})$ (resp. $C^{1}_{\rm c}(\mathcal G_{\frak p})$) with $T_{\rm e} = \infty$ and let $C^{1,\rm{fin}}_{\rm o}(\mathcal G_{\frak p},\vec T)$ (resp. $C^{1,\rm{fin}}_{\rm c}(\mathcal G_{\frak p},\vec T)$) be the set of elements ${\rm e} \in C^{1}_{\rm o}(\mathcal G_{\frak p})$ (resp. $C^{1}_{\rm c}(\mathcal G_{\frak p})$) with $T_{\rm e} \ne \infty$. Note the ends of $\Sigma_{\vec T,\vec \theta}^{\rho}$ correspond two to one to $C^{1,\rm{inf}}_{\rm o}(\mathcal G_{\frak p},\vec T) \cup C^{1,\rm{inf}}_{\rm c}(\mathcal G_{\frak p},\vec T)$. The ends that correspond to an element $\rm e$ of $C^{1,\rm{inf}}_{\rm o}(\mathcal G_{\frak p},\vec T)$ are $([-5T_{\rm e},\infty) \times [0,1]) \cup ((-\infty,5T_{\rm e}] \times [0,1])$ and the ends that correspond to ${\rm e} \in C^{1,\rm{inf}}_{\rm c}(\mathcal G_{\frak p},\vec T)$ are $([-5T_{\rm e},\infty) \times S^1) \cup ((-\infty,5T_{\rm e}] \times S^1)$. We have a weight function $e_{\rm v,\delta}(\tau_{\rm e},t_{\rm e})$ on these ends. \begin{defn}\label{Lhattaato2} An element of $$ L^2_{m+1,\delta}((\Sigma_{\vec T,\vec \theta}^{\rho},\partial \Sigma_{\vec T,\vec \theta}^{\rho});(u')^*TX,(u')^*TL) $$ is a pair $(s,\vec v)$ such that \begin{enumerate} \item $s$ is a section of $(u')^*TX$ on $\Sigma_{\vec T,\vec \theta}^{\rho}$ minus the singular points $z_{\rm e}$ with $T_{\rm e} = \infty$. \item $s$ is locally of $L^2_{m+1}$ class. \item On $\partial \Sigma_{\vec T,\vec \theta}^{\rho}$ the restriction of $s$ is in $(u')^*TL$. \item $\vec v = (v_{\rm e})$ where ${\rm e}$ runs in $C^{1,{\rm inf}}({\mathcal G}_{\frak p},{\vec T})$ and $v_{\rm e}$ is as in Definition \ref{Sobolev263} (1). 
\item For each $\rm e$ with $T_{\rm e} = \infty$, the integral \begin{equation}\label{intatinfedge} \aligned &\sum_{k=0}^{m+1}\int_{0}^{\infty} \int_{t_{\rm e}} e_{\rm v,\delta}(\tau_{\rm e},t_{\rm e})\vert\nabla^k(s(\tau_{\rm e},t_{\rm e}) - {\rm Pal}(v_{\rm e}))\vert^2 d\tau_{\rm e} dt_{\rm e}\\ &+ \sum_{k=0}^{m+1}\int^{0}_{-\infty} \int_{t_{\rm e}} e_{\rm v,\delta}(\tau_{\rm e},t_{\rm e})\vert\nabla^k(s(\tau_{\rm e},t_{\rm e}) - {\rm Pal}(v_{\rm e}))\vert^2 d\tau_{\rm e} dt_{\rm e} \endaligned \end{equation} is finite. (Here we integrate over ${t_{\rm e}\in [0,1]}$ (resp. ${t_{\rm e}\in S^1}$) if ${\rm e }\in C_{\rm o}^{1,{\rm inf}}({\mathcal G}_{\frak p},\vec T)$ (resp. ${\rm e} \in C_{\rm c}^{1,{\rm inf}}({\mathcal G}_{\frak p},\vec T)$).) \end{enumerate} We define \begin{equation}\label{2210} \Vert (s,\vec v)\Vert^2_{L^2_{m+1,\delta}} = (\ref{normformjula5multi}) + \sum_{{\rm e} \in C^{1,{\rm inf}}({\mathcal G}_{\frak p},\vec T)}(\ref{intatinfedge}) + \sum_{{\rm e} \in C^{1,{\rm inf}}(\mathcal G_{\frak p},\vec T)}\Vert v_{{\rm e}}\Vert^2. \end{equation} An element of $$ L^2_{m,\delta}(\Sigma_{\vec T,\vec \theta}^{\rho};(u')^*TX\otimes \Lambda^{01}) $$ is a section $s$ of the bundle $(u')^*TX\otimes \Lambda^{01}$ such that it is locally of $L^2_{m}$-class and \begin{equation}\label{intatinfedge3} \aligned &\sum_{k=0}^{m}\int_{0}^{\infty} \int_{t_{\rm e}} e_{\rm v,\delta}\vert\nabla^k s(\tau_{\rm e},t_{\rm e})\vert^2 d\tau_{\rm e} dt_{\rm e}\\ &+ \sum_{k=0}^{m}\int^{0}_{-\infty} \int_{t_{\rm e}} e_{\rm v,\delta}\vert\nabla^k s(\tau_{\rm e},t_{\rm e})\vert^2 d\tau_{\rm e} dt_{\rm e} \endaligned \end{equation} is finite. We define \begin{equation}\label{2212} \Vert s\Vert^2_{L^2_{m,\delta}} = (\ref{normformjula52multi}) + \sum_{{\rm e} \in C^{1,{\rm inf}}({\mathcal G}_{\frak p},\vec T)}(\ref{intatinfedge3}). 
\end{equation} \end{defn} \par For a subset $W$ of $\Sigma_{\rm v}^{\rho}$ or $\Sigma^{\rho}_{\vec T,\vec \theta}$ we define $ \Vert s\Vert_{L^2_{m,\delta}(W\subset \Sigma^{\rho}_{\rm v})} $, $ \Vert s\Vert_{L^2_{m,\delta}(W\subset \Sigma_{\vec T,\vec \theta}^{\rho})} $ by restricting the domain of integration in (\ref{normformjula52multi}), (\ref{normformjula5multi}), (\ref{2210}) or (\ref{2212}) to $W$. \par Let $(s_j,\vec v_j) \in L^2_{m+1,\delta}((\Sigma_{\rm v}^{\rho},\partial \Sigma_{\rm v}^{\rho});(u_{\rm v}^{\rho})^*TX,(u_{\rm v}^{\rho})^*TL)$ for $j=1,2$. We define an inner product between them by: \begin{equation}\label{innerprod} \aligned &\langle\!\langle (s_1,\vec v_1),(s_2,\vec v_2)\rangle\!\rangle_{L^2_{\delta}}\\ = & \sum_{{\rm e}\in C^1(\mathcal G_{\frak p})}\int_{\text{{\rm e}-th neck}} e_{\vec T,\delta} (s_1-\text{\rm Pal}(v_{1,{\rm e}}), s_2-\text{\rm Pal}(v_{2,{\rm e}})) \\ &+\sum_{{\rm v}\in C^0(\mathcal G_{\frak p})}\int_{K_{\rm v}} (s_1,s_2) +\sum_{{\rm e}\in C^1(\mathcal G_{\frak p})} (v_{1,\rm e},v_{2,\rm e}). \endaligned \end{equation} \par Now we start the gluing process. We begin with the maps $$u_{\rm v}^{\rho} : (\Sigma_{\rm v}^{\rho},\partial \Sigma_{\rm v}^{\rho}) \to (X,L)$$ for each $\rm v$ such that $(u_{\rm v}^{\rho};{\rm v} \in C^0(\mathcal G_{\frak p}))$ constitutes an element of $V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\epsilon_2)$. Let $(\vec T,\vec\theta) \in (\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1)$. 
For $\kappa = 0,1,2,\dots$, we will define a sequence of maps \begin{eqnarray} u_{\vec T,\vec \theta,(\kappa)}^{\rho} &:& (\Sigma_{\vec T,\vec \theta}^{\rho},\partial\Sigma_{\vec T,\vec \theta}^{\rho}) \to (X,L) \\ \hat u_{{\rm v},\vec T,\vec \theta,(\kappa)}^{\rho} &:& (\Sigma_{\rm v}^{\rho},\partial \Sigma_{\rm v}^{\rho}) \to (X,L) \end{eqnarray} and elements \begin{eqnarray} \frak e^{\rho} _{c,\vec T,\vec \theta,(\kappa)} &\in& E_c = \bigoplus_{{\rm v}\in C^0(\mathcal G_{\frak p_{c}})} E_{c,\rm v} \\ {\rm Err}^{\rho}_{{\rm v},\vec T,\vec \theta,(\kappa)} &\in& L^2_{m,\delta}(\Sigma_{\rm v}^{\rho};(\hat u_{{\rm v},\vec T,\vec \theta,(\kappa)}^{\rho})^*TX\otimes \Lambda^{01}). \end{eqnarray} Note $E_{c,\rm v} \subset \Gamma(K_{\rm v};u_{\frak p_c}^*TX \otimes \Lambda^{01})$ is a finite-dimensional space which we take as a part of the obstruction bundle data centered at $\frak p_c$. \par Moreover we will define $V^{\rho}_{\vec T,\vec \theta,{\rm v},(\kappa)}$ for ${\rm v} \in C^0(\mathcal G_{\frak p})$ and $\Delta p^{\rho}_{{\rm e},\vec T,\vec \theta,(\kappa)}$ for ${\rm e} \in C^1(\mathcal G_{\frak p})$. The pair $((V^{\rho}_{\vec T,\vec \theta,{\rm v},(\kappa)}),(\Delta p^{\rho}_{{\rm e},\vec T,\vec \theta,(\kappa)}))$ is an element of the weighted Sobolev space $L^2_{m+1,\delta}((\Sigma^{\rho}_{\rm v},\partial \Sigma^{\rho}_{\rm v}); (\hat u_{{\rm v},\vec T,\vec \theta,(\kappa-1)}^{\rho})^*TX,(\hat u_{{\rm v},\vec T,\vec \theta,(\kappa-1)}^{\rho})^*TL)$. \par The construction of these objects is a straightforward generalization of the construction given in Section \ref{alternatingmethod} and proceeds by induction on $\kappa$ as follows. \par\medskip \noindent{\bf Pregluing}: We first define an approximate solution $u_{\vec T,\vec \theta,(0)}^{\rho}$. For ${\rm e} \in C^1(\mathcal G_{\frak p})$ we denote by ${\rm v}_{\leftarrow}({\rm e})$ and ${\rm v}_{\rightarrow}({\rm e})$ its two vertices. 
Here $\rm e$ is an outgoing edge of ${\rm v}_{\leftarrow}({\rm e})$ and is an incoming edge of ${\rm v}_{\rightarrow}({\rm e})$. We put: \begin{equation}\label{22190} u_{\vec T,\vec \theta,(0)}^{\rho} = \begin{cases} \chi_{{\rm e},\mathcal B}^{\leftarrow} (u_{{\rm v}_{\leftarrow}({\rm e})}^{\rho} - p^{\rho}_{\rm e}) + \chi_{{\rm e},\mathcal A}^{\rightarrow} (u_{{\rm v}_{\rightarrow}({\rm e})}^{\rho} - p^{\rho}_{\rm e}) + p^{\rho}_{\rm e} & \text{on the ${\rm e}$-th neck} \\ u_{\rm v}^{\rho} & \text{on $K_{\rm v}$} . \end{cases} \end{equation} \par\medskip \noindent{\bf Step 0-3}: We next define \begin{equation} \sum_{c\in \frak A} \frak e^{\rho} _{c,\vec T,\vec \theta,(0)} = \overline\partial u_{\rm v}^{\rho}, \qquad \text{on $K_{\rm v}$}. \end{equation} Here we identify $E_c \cong E_c(u_{\rm v}^{\rho})$ on $K_{\rm v}$ by the parallel transport as we did in Definition \ref{defEc}. See also Definition \ref{Emovevvv}. Note that $\overline\partial u_{\rm v}^{\rho}$ is contained in $\oplus E_c$ since $(u_{\rm v}^{\rho};{\rm v} \in C^0(\mathcal G_{\frak p}))$ is an element of $V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\epsilon_0)$. \par We put \begin{equation} \frak{se}^{\rho}_{\vec T,\vec \theta,(0)} := \sum_{c\in \frak A} \frak e^{\rho} _{c,\vec T,\vec \theta,(0)}. \end{equation} \par\medskip \noindent{\bf Step 0-4}: We next define \begin{equation}\label{2222} {\rm Err}^{\rho}_{{\rm v},\vec T,\vec \theta,(0)} = \begin{cases} \chi_{{\rm e},\mathcal X}^{\leftarrow} \overline\partial u_{\vec T,\vec \theta,(0)}^{\rho} & \text{on the ${\rm e}$-th neck if $\rm e$ is outgoing} \\ \chi_{{\rm e},\mathcal X}^{\rightarrow} \overline\partial u_{\vec T,\vec \theta,(0)}^{\rho} & \text{on the ${\rm e}$-th neck if $\rm e$ is incoming} \\ \overline\partial u_{\vec T,\vec \theta,(0)}^{\rho} - \frak{se}^{\rho}_{\vec T,\vec \theta,(0)} & \text{on $K_{\rm v}$} . \end{cases} \end{equation} See Remark \ref{rem270}. 
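The following bookkeeping only unwinds the definitions above (it is the standard starting point of the alternating method, not a new claim): on the cores the error term vanishes by construction.

```latex
% On K_{\rm v} we have u^{\rho}_{\vec T,\vec \theta,(0)} = u^{\rho}_{\rm v}
% and \frak{se}^{\rho}_{\vec T,\vec \theta,(0)} = \overline\partial u^{\rho}_{\rm v},
% so the third case of (\ref{2222}) gives
{\rm Err}^{\rho}_{{\rm v},\vec T,\vec \theta,(0)}
  = \overline\partial u^{\rho}_{\vec T,\vec \theta,(0)}
    - \frak{se}^{\rho}_{\vec T,\vec \theta,(0)}
  = \overline\partial u^{\rho}_{\rm v} - \overline\partial u^{\rho}_{\rm v}
  = 0
  \qquad \text{on } K_{\rm v}.
```

On the $\rm e$-th neck, away from the regions where the cutoff functions vary, $u^{\rho}_{\vec T,\vec \theta,(0)}$ coincides with $u^{\rho}_{{\rm v}_{\leftarrow}({\rm e})}$, with $u^{\rho}_{{\rm v}_{\rightarrow}({\rm e})}$, or with their sum minus $p^{\rho}_{\rm e}$ in the Darboux coordinate; since the obstruction spaces are supported in the cores, the maps $u^{\rho}_{\rm v}$ are pseudo-holomorphic there, so the error term is concentrated on the neck and one expects it to be small for large $T_{\rm e}$, in line with the exponential decay estimates of Theorem \ref{exdecayT33}.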
\par\medskip \noindent{\bf Step 1-1}: We put \begin{equation} \aligned &\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)}(z) \\ &= \begin{cases} \chi_{{\rm e},\mathcal B}^{\leftarrow}(\tau_{\rm e}-T_{\rm e},t_{\rm e}) &\!\!\!\!\!\!u^{\rho}_{\vec T,\vec \theta,(0)}(\tau_{\rm e},t_{\rm e}) + \chi_{{\rm e},\mathcal B}^{\rightarrow}(\tau_{\rm e}-T_{\rm e},t_{\rm e})p^{\rho}_{\rm e} \\ &\text{if $z = (\tau_{\rm e},t_{\rm e})$ is on the $\rm e$-th neck that is outgoing} \\ \chi_{{\rm e},\mathcal A}^{\rightarrow}(\tau_{\rm e}-T_{\rm e},t_{\rm e}) &\!\!\!\!\!\!u^{\rho}_{\vec T,\vec \theta,(0)}(\tau_{\rm e},t_{\rm e}) + \chi_{{\rm e},\mathcal A}^{\leftarrow}(\tau_{\rm e}-T_{\rm e},t_{\rm e})p^{\rho}_{\rm e} \\ &\text{if $z = (\tau_{\rm e},t_{\rm e})$ is on the $\rm e$-th neck that is incoming} \\ u^{\rho}_{\vec T,\vec \theta,(0)}(z) &\text{if $z \in K_{\rm v}$.} \end{cases} \endaligned \end{equation} We denote the (covariant) linearization of the Cauchy-Riemann equation at this map $\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)}$ by \begin{equation}\label{lineeqstep0vvv} \aligned D_{\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)}}\overline{\partial} : L^2_{m+1,\delta}((\Sigma_{\rm v}^{\rho},\partial \Sigma_{\rm v}^{\rho});&(\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)})^*TX,(\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)})^*TL) \\ &\to L^2_{m,\delta}(\Sigma_{\rm v}^{\rho};(\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)})^{*}TX \otimes \Lambda^{01}). \endaligned \end{equation} \par We next study the obstruction bundle $E_c$. We recall that at $u^{\rho}_{\vec T,\vec \theta,(0)}$ the obstruction bundle $E_c(u^{\rho}_{\vec T,\vec \theta,(0)})$ was defined as follows. (See Definition \ref{defEc}.) We use the added marked points $\vec w^{\rho}_c$ and consider $\Sigma_{\vec T,\vec \theta}^{\rho} \cup \vec w^{\rho}_c$. Here, by abuse of notation, we include the $k+1$ boundary and $\ell$ interior marked points in the notation $\Sigma_{\vec T,\vec \theta}^{\rho}$. 
(The additional marked points $\vec w_{\frak p}^{\rho}$ and $\vec w_{c}^{\rho}$ are {\it not} included.) By assumption $\Sigma_{\vec T,\vec \theta}^{\rho} \cup \vec w^{\rho}_c$ is $(\epsilon_c + o(\epsilon_0))$-close to $\frak p_c$. Therefore the diffeomorphism between the cores of $\Sigma_{\frak p_c}$ and of $\Sigma_{\vec T,\vec \theta}^{\rho}$ is determined by the obstruction bundle data $\frak E_{\frak p_c}$. Using this diffeomorphism and the parallel transport we have \begin{equation}\label{2224} I^{{\rm v},\frak p_c}_{(\frak y_c,u_c),(\Sigma_{\vec T,\vec \theta}^{\rho} \cup \vec w^{\rho}_c,u^{\rho}_{\vec T,\vec \theta,(0)})} : E_{c,{\rm v}}(\frak y_c,u_c) \to \Gamma(K_{\rm v} ; (u^{\rho}_{\vec T,\vec \theta,(0)})^{*}TX \otimes \Lambda^{01}). \end{equation} The notation in (\ref{2224}) is as follows. There is a map $\pi : \mathcal G_{\frak p_c} \to\mathcal G_{\frak p}$ shrinking several edges. For ${\rm v} \in C^0(\mathcal G_{\frak p})$ we put $$ E_{c,{\rm v}} = \bigoplus_{{\rm v}' \in C^0(\mathcal G_{\frak p_c}) \atop \pi({\rm v}') = {\rm v}}E_{c,{\rm v}'} $$ where $E_{c,{\rm v}'}$ is the obstruction bundle that is included in the obstruction bundle data $\frak E_{\frak p_c}$ at $\frak p_c$. It determines $ E_{c,{\rm v}}(\frak y_c,u_c) = \bigoplus_{{\rm v}' \in C^0(\mathcal G_{\frak p_c}) \atop \pi({\rm v}') = {\rm v}}E_{c,{\rm v}'}(\frak y_c,u_c) $. Then (\ref{2224}) is defined by Definition \ref{Emovevvv}. \begin{rem}\label{rem267} In Definition \ref{stabdata} (6) we assumed that the image of $K_{{\rm v},c}^{\rm obst}$ by the diffeomorphism mentioned above is always contained in the core of $\Sigma_{\vec T,\vec \theta}^{\rho}$. (Here $K_{{\rm v},c}^{\rm obst}$ is the support of $E_{c,\rm v}$.) Note that by the core we mean the core with respect to the coordinate at infinity which is included as a part of the stabilization data at $\frak p$ here.
\end{rem} The vector space $E_c(u^{\rho}_{\vec T,\vec \theta,(0)})$ is the sum over ${\rm v} \in C^0(\mathcal G_{\frak p})$ of the images of (\ref{2224}). \par We next consider the obstruction bundle at $\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)}$. A technical point we need to take care of here is that the obstruction bundle we use is {\it not} $E_c(\coprod_{{\rm v}\in C^0(\mathcal G_{\frak p})}\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)})$ but is slightly different from it. Let $K_{{\rm v},c}^{\rm obst} \subset K_{\rm v} \subset \Sigma_{\vec T,\vec \theta}^{\rho}$ be the image of the set $K_{{\rm v},c}^{\rm obst}$ by the above mentioned diffeomorphism that is induced by the stabilization data at $\frak p$. We remark that we may regard $K_{\rm v}$ as a subset of $\Sigma_{\rm v}^{\rho}$ also by using the stabilization data at $\frak p$. Moreover on $K_{\rm v}$ we have $\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)} = u^{\rho}_{\vec T,\vec \theta,(0)}$. So we have \begin{equation}\label{lem77maesiki} \aligned \text{Image of (\ref{2224})} \,\,\subset \,\, &\Gamma(K_{\rm v} ; (u^{\rho}_{\vec T,\vec \theta,(0)})^{*}TX \otimes \Lambda^{01})\\ &= \bigoplus_{{\rm v} \in C^0(\mathcal G_{\frak p})}\Gamma(K_{\rm v} ; (\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)})^{*}TX \otimes \Lambda^{01}). \endaligned \end{equation} \begin{defn}\label{defn277} We regard the left hand side of (\ref{lem77maesiki}) as a subspace of $$ \bigoplus_{{\rm v} \in C^0(\mathcal G_{\frak p})}\Gamma(K_{\rm v} ; (\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)})^{*}TX \otimes \Lambda^{01}) $$ and denote it by $$ \bigoplus_{{\rm v} \in C^0(\mathcal G_{\frak p})} E'_c(\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)}) \subset \bigoplus_{{\rm v} \in C^0(\mathcal G_{\frak p})} L^2_{m,\delta}(\Sigma_{\rm v};(\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)})^{*}TX \otimes \Lambda^{01}). 
$$ We also define $$ \mathcal E'_{\frak p,{\rm v},\frak A}(\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)}) = \bigoplus_{c\in \frak A} E'_c(\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)}), \quad \mathcal E'_{\frak p,\frak A}(\hat u^{\rho}_{\vec T,\vec \theta,(0)}) = \bigoplus_{{\rm v} \in C^0(\mathcal G_{\frak p})}\mathcal E'_{\frak p,{\rm v},\frak A}(\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)}). $$ \end{defn} \begin{rem} The reason why $E'_c(\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)}) \ne E_c(\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)})$ is as follows. The union of the domains of $\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)}$ over $\rm v$ is $\Sigma_{\frak p}$. When we identify the core of $\Sigma_{\frak p}$ with the core of $\Sigma_{\vec T,\vec \theta}^{\rho}$, we use the additional marked points $\vec w_{\frak p}$ included in the stabilization data at $\frak p$. We now consider the two diffeomorphisms: \begin{eqnarray} K_{{\rm v},c}^{\rm obst} &\longrightarrow& \text{Core of $\Sigma_{\vec T,\vec \theta}^{\rho}$} \longrightarrow \text{Core of $\Sigma_{\frak p}$} \label{2225}\\ K_{{\rm v},c}^{\rm obst} &\longrightarrow& \text{Core of $\Sigma_{\frak p}$}.\label{2226} \end{eqnarray} We note that the diffeomorphism of the second arrow of (\ref{2225}) is defined by using the additional marked points $\vec w_{\frak p}$. The other arrows are defined by using the additional marked points $\vec w_{\frak p_c}$. Therefore in general (\ref{2225}) $\ne$ (\ref{2226}). The definition of $E'_c(\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)})$ uses (\ref{2225}) and the definition of $E_c(\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)})$ uses (\ref{2226}). This phenomenon does not occur in the situation of Part \ref{secsimple}. This is because we took $\frak p = \frak p_c$ in Part \ref{secsimple}. \end{rem} \begin{rem}\label{rem270} In the situation of Part \ref{secsimple} we have ${\rm Err}^{\rho}_{{\rm v},\vec T,\vec \theta,(0)} = 0$ on the core $K_{\rm v}$. 
However this is not the case in the current situation. In fact, by definition we have \begin{equation}\label{2227} \sum_{\rm v \in C^0(\mathcal G_{\frak p})}{\rm Err}^{\rho}_{{\rm v},\vec T,\vec \theta,(0)} = \overline\partial u_{\vec T,\vec \theta,(0)}^{\rho} - \frak{se}^{\rho}_{\vec T,\vec \theta,(0)} \end{equation} and \begin{equation}\label{2228} \frak{se}^{\rho}_{\vec T,\vec \theta,(0)} = \sum_{c\in \frak A} \frak e^{\rho} _{c,\vec T,\vec \theta,(0)} = \overline\partial u_{\rm v}^{\rho} \end{equation} on $K_{\rm v}$. Moreover $u_{\rm v}^{\rho} = u_{\vec T,\vec \theta,(0)}^{\rho}$ on $K_{\rm v}$. However (\ref{2227}) is nonzero, because the way we identify an element $\frak e^{\rho} _{c,\vec T,\vec \theta,(0)} \in E_c$ with a section on $K_{\rm v}$ differs between the cases of $u_{\rm v}^{\rho}$ and of $u_{\vec T,\vec \theta,(0)}^{\rho}$. Namely, in (\ref{2227}) we regard $\frak e^{\rho} _{c,\vec T,\vec \theta,(0)}$ (which is a part of $ \frak{se}^{\rho}_{\vec T,\vec \theta,(0)}$) as an element of $E_c(u_{\vec T,\vec \theta,(0)}^{\rho})$. In (\ref{2228}) we regard $\frak e^{\rho} _{c,\vec T,\vec \theta,(0)}$ as an element of $E_c(u_{\rm v}^{\rho})$. \par We identify $K_{\rm v} \subset \Sigma_{\vec T,\vec \theta}^{\rho}$ with $K_{\rm v} \subset \Sigma^{\rho}_{\rm v}$ by using the stabilization data at $\frak p$. Thus $\frak e^{\rho} _{c,\vec T,\vec \theta,(0)}$ in (\ref{2227}) is also regarded as an element of $E'_c(u_{\rm v}^{\rho})$. So ${\rm Err}^{\rho}_{{\rm v},\vec T,\vec \theta,(0)}$ is nonzero on $K_{\rm v}$ because $E'_c(u_{\rm v}^{\rho}) \ne E_c(u_{\rm v}^{\rho})$. But this difference is exponentially small. Namely, we have the next lemma. \end{rem} \begin{lem}\label{0estimatekkk} Put $T_{\rm min} = \min\{T_{\rm e} \mid {\rm e} \in C^1(\mathcal G_{\frak p})\}$.
Then there exists $T_m$ such that the following inequality holds: \begin{equation}\label{2251zero} \left\Vert \nabla_{\rho}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}} {\rm Err}^{\rho}_{{\rm v},\vec T,\vec \theta,(0)} \right\Vert_{L^2_{m-\vert{\vec k_{T}}\vert - \vert{\vec k_{\theta}}\vert-1,\delta}(\Sigma_{\rm v}^{\rho})} < C_{7,m}e^{-\delta T_{{\rm min}}} \end{equation} for $\vert{\vec k_{T}}\vert + \vert{\vec k_{\theta}}\vert \le m - 10$ and $T_{{\rm min}} > T_m$. \end{lem} The proof is given later, right after the proof of Lemma \ref{288lam}. \par In Definition \ref{defn277} we defined $\mathcal E'_{\frak p,{\rm v},\frak A}(\cdot)$ for $\cdot = \hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)}$. We next extend it to nearby maps. Let $u'_{\rm v} : (\Sigma_{\rm v}^{\rho},\partial\Sigma_{\rm v}^{\rho}) \to (X,L)$ be a smooth map which is sufficiently close to $\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)}$ in the $C^{10}$ sense on $K_{\rm v}$. We define $\mathcal E'_{\frak p,{\rm v}, \frak A}(u'_{\rm v})$ as follows. We identify $K_{\rm v}$ with a subset of $\Sigma_{\vec T,\vec \theta}^{\rho}$ by using the additional marked points $\vec w_{\frak p}^{\rho}$. Take any $u'' : (\Sigma_{\vec T,\vec \theta}^{\rho},\partial \Sigma_{\vec T,\vec \theta}^{\rho}) \to (X,L)$ that coincides with $u'_{\rm v}$ on $K_{\rm v}$ and is sufficiently close to $u_{\frak p}$ so that $E_{c}(u'')=\bigoplus_{{\rm v}'\in C^0(\mathcal G_{\frak p_c})} E_{c,{\rm v}'}(u'')$ is defined. We put $$ E_{c,{\rm v}}(u'') = \bigoplus_{{\rm v}' \in C^0 (\mathcal G_{\frak p_c}) \atop \pi({\rm v}') = {\rm v}}E_{c,{\rm v}'}(u''). $$ By definition, $E_{c,\rm v}(u'')$ is independent of $u''$ but depends only on $u'_{\rm v}$, and lies in $\Gamma(K_{\rm v} ; (u'_{\rm v})^{*}TX \otimes \Lambda^{01})$.
Again using the diffeomorphism defined by the marked points $\vec w_{\frak p}^{\rho}$, we identify this space with a subspace of $\Gamma(\Sigma_{\rm v}^{\rho} ; (u'_{\rm v})^{*}TX \otimes \Lambda^{01})$. This is, by definition, $E'_{c,{\rm v}}(u'_{\rm v})$. (This is the case ${\rm v} \in C^0_{\rm d}(\mathcal G_{\frak p})$. The case of ${\rm v} \in C^0_{\rm s}(\mathcal G_{\frak p})$ is similar.) We put \begin{equation}\label{2230} \mathcal E'_{\frak p,{\rm v},\frak A}(u'_{\rm v}) = \sum_{c \in \frak A}E'_{c,{\rm v}}(u'_{\rm v}), \qquad \mathcal E'_{\frak p,\frak A}(u') = \sum_{{\rm v} \in C^0(\mathcal G_{\frak p})} \mathcal E'_{\frak p,{\rm v},\frak A}(u'_{\rm v}). \end{equation} \par Let $$ \Pi_{\mathcal E'_{\frak p,\frak A}(u')} ~:~ \bigoplus_{{\rm v} \in C^0(\mathcal G_{\frak p})}L^2_{m,\delta}(\Sigma_{\rm v}^{\rho};(u'_{{\rm v}})^{*}TX \otimes \Lambda^{01}) \to \mathcal E'_{\frak p,\frak A}(u') $$ be the $L^2$-orthogonal projection. We next define its derivative in the direction of an element $$ v = (v_{\rm v}) \in \bigoplus_{{\rm v} \in C^0_{\rm d}(\mathcal G_{\frak p})}\Gamma((\Sigma^{\rho}_{\rm v},\partial\Sigma^{\rho}_{\rm v});(u_{\rm v}')^{*}TX,(u_{\rm v}')^{*}TL) \oplus \bigoplus_{{\rm v} \in C^0_{\rm s}(\mathcal G_{\frak p})}\Gamma(\Sigma^{\rho}_{\rm v};(u_{\rm v}')^{*}TX) $$ by \begin{equation}\label{DEidefvv} (D_{u'_{\rm v}}\mathcal E'_{\frak p,\frak A})((A_{\rm v}),(v_{\rm v})) = \frac{d}{ds}(\Pi_{\mathcal E'_{\frak p,\frak A}({\rm E} (u_{\rm v}',sv_{\rm v}))}(A_{\rm v}))\vert_{s=0} \end{equation} as in (\ref{DEidef}), where $$ A_{\rm v} \in L^2_{m,\delta}(\Sigma_{\rm v}^{\rho};(u_{\rm v}')^{*}TX \otimes \Lambda^{01}).
$$ \par We use the operator \begin{equation}\label{144opvv} V \mapsto D_{\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)}}\overline{\partial}(V) - (D_{\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)}} \mathcal E'_{\frak p,\frak A})(\frak{se}^{\rho}_{\vec T,\vec \theta,(0)} , V) \end{equation} as the linearization of the Cauchy-Riemann equation modulo $\mathcal E_{\frak A}$.\footnote{Here we consider $\mathcal E_{\frak A}$ and not $\mathcal E'_{\frak A}$. Note that we are studying the Cauchy-Riemann equation for $u^{\rho}_{\vec T,\vec\theta,(0)}$. The obstruction space $\mathcal E'_{\frak A}(\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)})$ is sent to $\mathcal E_{\frak A}(u^{\rho}_{\vec T,\vec\theta,(0)})$ by the identification using the stabilization data at $\frak p$.} \par We recall that $$ L^2_{m+1,\delta}((\Sigma^{\rho},\partial \Sigma^{\rho});(\hat u^{\rho}_{\vec T,\vec \theta,(0)})^*TX,(\hat u^{\rho}_{\vec T,\vec \theta,(0)})^*TL) $$ is the kernel of (\ref{evaluationmkepot}) for $\hat u^{\rho}_{\vec T,\vec \theta,(0)} = (\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)}) _{{\rm v}\in C^0(\mathcal G_{\frak p})}$. The direct sum of (\ref{144opvv}) induces an operator on $ L^2_{m+1,\delta}((\Sigma^{\rho},\partial \Sigma^{\rho});(\hat u^{\rho}_{\vec T,\vec \theta,(0)})^*TX,(\hat u^{\rho}_{\vec T,\vec \theta,(0)})^*TL) $ by restriction. \begin{lem}\label{28011} The sum of the image of the direct sum of the operators (\ref{144opvv}) on $$L^2_{m+1,\delta}((\Sigma^{\rho},\partial \Sigma^{\rho});(\hat u^{\rho}_{\vec T,\vec \theta,(0)})^*TX,(\hat u^{\rho}_{\vec T,\vec \theta,(0)})^*TL)$$ and the subspace $\mathcal E'_{\frak p,\frak A}(\hat u^{\rho}_{\vec T,\vec \theta,(0)})$ is $$ \bigoplus_{{\rm v}\in C^0(\mathcal G_{\frak p})} L^2_{m,\delta}(\Sigma^{\rho}_{\rm v};(\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)})^{*}TX \otimes \Lambda^{01}) $$ if $\vec T$ is sufficiently large. \end{lem} \begin{proof} This is a consequence of Lemma \ref{nbdregmaineq}.
\end{proof} Lemma \ref{28011} is a generalization of Lemma \ref{lem112}. \begin{defn} The $L^2$-orthogonal complement of $$ \left(D_{\hat u^{\rho}_{\vec T,\vec \theta,(0)}}\overline{\partial} - (D_{\hat u^{\rho}_{\vec T,\vec \theta,(0)}}\mathcal E'_{\frak p,\frak A})(\frak{se}^{\rho}_{\vec T,\vec \theta,(0)} , \cdot)\right)^{-1}(\mathcal E'_{\frak p,\frak A}(\hat u^{\rho}_{\vec T,\vec \theta,(0)})) $$ in $$ L^2_{m+1,\delta}((\Sigma^{\rho},\partial \Sigma^{\rho});(\hat u^{\rho}_{\vec T,\vec \theta,(0)})^*TX,(\hat u^{\rho}_{\vec T,\vec \theta,(0)})^*TL) $$ is denoted by $\frak H(\rho,\vec T,\vec \theta)$. \par We take $\vec T = \vec\infty = (\infty,\dots,\infty)$ and write $\frak H(\rho) = \frak H(\rho,\vec\infty,\vec \theta_0)$. Then the restriction of (\ref{144opvv}) to $\frak H(\rho)$ induces an isomorphism onto $$ \bigoplus_{{\rm v}\in C^0(\mathcal G_{\frak p})} L^2_{m,\delta}(\Sigma^{\rho}_{\rm v};(\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)})^{*}TX \otimes \Lambda^{01})/\mathcal E'_{\frak p,\frak A}(\hat u^{\rho}_{\vec T,\vec \theta,(0)}) $$ for sufficiently large $\vec T$.
\end{defn} \begin{defn} We define $V^{\rho}_{\vec T,\vec \theta,{\rm v},(1)}$ for ${\rm v} \in C^0(\mathcal G_{\frak p})$ and $\Delta p^{\rho}_{{\rm e},\vec T,\vec \theta,(1)}$ for ${\rm e} \in C^1(\mathcal G_{\frak p})$ so that $((V^{\rho}_{\vec T,\vec \theta,{\rm v},(1)})_{\rm v},(\Delta p^{\rho}_{{\rm e},\vec T,\vec \theta,(1)})_{\rm e}) \in \frak H(\rho)$ is the unique element such that \begin{equation} \aligned D_{\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)}}\overline{\partial}(V^{\rho}_{\vec T,\vec \theta,{\rm v},(1)}) &- (D_{\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)}}\mathcal E'_{\frak p,\frak A})(\frak{se}^{\rho}_{\vec T,\vec \theta,(0)} , V^{\rho}_{\vec T,\vec \theta,{\rm v},(1)})\\ &+ {\rm Err}^{\rho}_{{\rm v},\vec T,\vec \theta,(0)} \in \mathcal E'_{\frak p,\frak A}(\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)}) \endaligned \end{equation} and \begin{equation} \lim_{\tau_{\rm e} \to \pm \infty} V^{\rho}_{\vec T,\vec \theta,{\rm v},(1)}(\tau_{\rm e},t_{\rm e}) = \Delta p^{\rho}_{{\rm e},\vec T,\vec \theta,(1)}, \end{equation} where $\pm \infty= + \infty$ if $\rm e$ is outgoing and $=-\infty$ if $\rm e$ is incoming. \end{defn} \par\medskip \noindent{\bf Step 1-2}: \begin{defn} We define $u_{\vec T,\vec \theta,(1)}^{\rho}(z)$ as follows. (Here $\rm E$ is the map as in (\ref{defE}).) \begin{enumerate} \item If $z \in K_{\rm v}$, we put \begin{equation} u_{\vec T,\vec \theta,(1)}^{\rho}(z) = {\rm E} (u_{\vec T,\vec \theta,(0)}^{\rho}(z),V^{\rho}_{\vec T,\vec \theta,{\rm v},(1)}(z)). 
\end{equation} \item If $z = (\tau_{\rm e},t_{\rm e}) \in [-5T_{\rm e},5T_{\rm e}]\times [0,1]$ or $S^1$, we put \begin{equation} \aligned u_{\vec T,\vec \theta,(1)}^{\rho}(\tau_{\rm e},t_{\rm e}) = &\chi_{{\rm e},\mathcal B}^{\leftarrow}(\tau_{\rm e},t_{\rm e}) (V^{\rho}_{\vec T,\vec \theta,{\rm v}_{\leftarrow}({\rm e}),(1)} (\tau_{\rm e},t_{\rm e}) - \Delta p^{\rho}_{{\rm e},\vec T,\vec \theta,(1)})\\ &+\chi_{{\rm e},\mathcal A}^{\rightarrow}(\tau_{\rm e},t_{\rm e})(V^{\rho}_{\vec T,\vec \theta,{\rm v}_{\rightarrow}({\rm e}),(1)}(\tau_{\rm e},t_{\rm e})-\Delta p^{\rho}_{{\rm e},\vec T,\vec \theta,(1)})\\ &+u_{\vec T,\vec \theta,(0)}^{\rho}(\tau_{\rm e},t_{\rm e}) +\Delta p^{\rho}_{{\rm e},\vec T,\vec \theta,(1)}. \endaligned \end{equation} \end{enumerate} \end{defn} \par\medskip \noindent{\bf Step 1-3}: We define: \begin{equation} \frak e^{\rho}_{\vec T,\vec \theta,(1)} = \Pi_{\mathcal E_{\frak p,\frak A}({\rm E} (u_{\vec T,\vec \theta,(0)}^{\rho},V^{\rho}_{\vec T,\vec \theta,{\rm v},(1)}))}(\overline\partial {\rm E} (u_{\vec T,\vec \theta,(0)}^{\rho},V^{\rho}_{\vec T,\vec \theta,{\rm v},(1)})) \end{equation} and \begin{equation} \frak{se}^{\rho}_{\vec T,\vec \theta,(1)} = \frak{se}^{\rho}_{\vec T,\vec \theta,(0)} + \frak e^{\rho}_{\vec T,\vec \theta,(1)}. \end{equation} \par\medskip \noindent{\bf Step 1-4}: We take $0 < \mu < 1$ and fix it throughout this subsection. \begin{defn} We put \begin{equation} {\rm Err}^{\rho}_{{\rm v},\vec T,\vec \theta,(1)} = \begin{cases} \chi_{{\rm e},\mathcal X}^{\leftarrow} \overline\partial u_{\vec T,\vec \theta,(1)}^{\rho} & \text{on the ${\rm e}$-th neck if $\rm e$ is outgoing} \\ \chi_{{\rm e},\mathcal X}^{\rightarrow} \overline\partial u_{\vec T,\vec \theta,(1)}^{\rho} & \text{on the ${\rm e}$-th neck if $\rm e$ is incoming} \\ \overline\partial u_{\vec T,\vec \theta,(1)}^{\rho} - \frak{se}^{\rho}_{\vec T,\vec \theta,(1)} & \text{on $K_{\rm v}$}.
\end{cases} \end{equation} We extend them by $0$ outside a compact set and regard them as elements of the function space $L^2_{m,\delta}(\Sigma_{\rm v}^{\rho};(\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(1)})^{*}TX \otimes \Lambda^{01})$, where $\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(1)}$ will be defined in the next step. \end{defn} We put $p^{\rho}_{{\rm e},\vec T,\vec \theta,(1)} = p^{\rho}_{{\rm e},\vec T,\vec \theta,(0)} + \Delta p^{\rho}_{{\rm e},\vec T,\vec \theta,(1)}$. \par We now repeat the above construction as Step 2-1 and so on, continuing inductively on $\kappa$. \par The main estimates of these objects are given in the next proposition. We put $R_{(\rm v,e)} = 5T_{\rm e} + 1$ and $\vec R = (R_{(\rm v,e)})$. \begin{prop}\label{expesgen1} There exist $T_m, C_{8,m}, C_{9,m}, C_{10,m}, \epsilon_{5,m} > 0$ and $0<\mu<1$ such that the following inequalities hold if $T_{\rm e}>T_m$ for all $\rm e$. We put $\vec{T}=(T_{\rm e}; {\rm e}\in C^1(\mathcal G_{\frak p}))$ and $T_{\rm min} = \min \{T_{\rm e} \mid {\rm e} \in C^1(\mathcal G_{\frak p})\}$.
\begin{eqnarray} \left\Vert \left((V^{\rho}_{\vec T,\vec \theta,{\rm v},(\kappa)}),(\Delta p^{\rho}_{{\rm e},\vec T,\vec \theta,(\kappa)})\right)\right\Vert_{L^2_{m+1,\delta}(\Sigma^{\rho}_{\rm v})} &<& C_{8,m}\mu^{\kappa-1}e^{-\delta T_{\rm min}}, \label{form0182vv} \\ \left\Vert (\Delta p^{\rho}_{{\rm e},\vec T,\vec \theta,(\kappa)})\right\Vert &<& C_{8,m}\mu^{\kappa-1}e^{-\delta T_{\rm min}}, \label{form0183vv} \\ \left\Vert u_{\vec T,\vec \theta,(\kappa)}^{\rho}- u_{\vec T,\vec \theta,(0)}^{\rho} \right\Vert_{L^2_{m+1,\delta} (K_{\rm v}^{+\vec R})} &<& C_{9,m}e^{-\delta T_{\rm min}}, \label{form0184} \\ \left\Vert{\rm Err}^{\rho}_{{\rm v},\vec T,\vec \theta,(\kappa)} \right\Vert_{L^2_{m,\delta}(\Sigma^{\rho}_{\rm v})} &<& C_{10,m}\epsilon_{5,m}\mu^{\kappa}e^{-\delta T_{\rm min}}, \label{form0185vv} \\ \left\Vert \frak e^{\rho} _{\vec T,\vec \theta,(\kappa)}\right\Vert_{L^2_{m}(K_{\rm v}^{\text{\rm obst}})} &<& C_{10,m}\mu^{\kappa-1}e^{-\delta T_{\rm min}}, \label{form0186vv} \end{eqnarray} where we assume $\kappa \ge 1$ in (\ref{form0186vv}). \end{prop} \begin{proof} The proof is the same as the discussion in Subsection \ref{alternatingmethod} and so is omitted.\footnote{Actually we need some new argument for the case $\kappa = 0$ of (\ref{form0185vv}). We will discuss it later during the proof of Lemma \ref{sublem278}.} \end{proof} (\ref{form0182vv}) implies that $ u_{\vec T,\vec \theta,(\kappa)}^{\rho} $ converges in the $C^k$ topology for each $k$ as $\kappa$ goes to $\infty$, provided $T_{\rm e} > T_{k+10}$ for all $\rm e$. We define \begin{equation} {\rm Glu}_{\vec T,\vec \theta}(\rho) = \lim_{\kappa\to \infty} u_{\vec T,\vec \theta,(\kappa)}^{\rho} = u_{\vec T,\vec \theta}^{\rho}. \end{equation} (\ref{form0185vv}) and (\ref{form0186vv}) imply $$ \overline\partial u_{\vec T,\vec \theta}^{\rho} = \sum_{\kappa=0}^{\infty}\frak e^{\rho} _{\vec T,\vec \theta,(\kappa)} \in \mathcal E_{\frak A}(u_{\vec T,\vec \theta}^{\rho}).
$$ Therefore $$ u_{\vec T,\vec \theta}^{\rho} \in \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;(\vec T^{\rm o},\vec T^{\rm c},\vec \theta))_{\epsilon_{2},\vec T_0}. $$ We thus have defined ${\rm Glu}_{\vec T,\vec \theta}$. \par We next prove Theorem \ref{exdecayT33}. The main part of the proof is the next proposition. \begin{prop}\label{expesgen2Tdev} There exist $T_m, C_{11,m}, C_{12,m}, C_{13,m}, C_{14,m}, \epsilon_{6,m} > 0$ and $0<\mu<1$ such that the following inequalities hold if $T_{\rm e}>T_m$ for all $\rm e$. \par Let ${\rm e}_0 \in C^1_{\rm o}(\mathcal G_{\frak p})$. Then for each $\vec k_{T}$, $\vec k_{\theta}$ we have \begin{equation} \aligned &\left\Vert \nabla_{\rho}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}} \frac{\partial}{\partial T_{{\rm e}_0}}((V^{\rho}_{\vec T,\vec \theta,{\rm v},(\kappa)}),(\Delta p^{\rho}_{{\rm e},\vec T,\vec \theta,(\kappa)}))\right\Vert_{L^2_{m+1-\vert{\vec k_{T}}\vert - \vert{\vec k_{\theta}}\vert-1,\delta}(\Sigma^{\rho}_{\rm v})}\\ &< C_{11,m}\mu^{\kappa-1}e^{-\delta T_{{\rm e}_0}}, \label{form0182vv2} \endaligned \end{equation} \begin{equation} \displaystyle\left\Vert \nabla_{\rho}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}} \frac{\partial}{\partial T_{{\rm e}_0}} (\Delta p^{\rho}_{{\rm e},\vec T,\vec \theta,(\kappa)})\right\Vert < C_{11,m}\mu^{\kappa-1}e^{-\delta T_{{\rm e}_0}}, \label{form0183v2v} \end{equation} \begin{equation} \left\Vert \nabla_{\rho}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}} \frac{\partial}{\partial T_{{\rm e}_0}} u_{\vec T,\vec \theta,(\kappa)}^{\rho} \right\Vert_{L^2_{m+1-\vert{\vec k_{T}}\vert - \vert{\vec k_{\theta}}\vert -1,\delta}(K_{\rm v}^{+\vec R})} < C_{12,m}e^{-\delta
T_{{\rm e}_0}}, \label{form01842} \end{equation} \begin{equation} \aligned &\left\Vert \nabla_{\rho}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}} \frac{\partial}{\partial T_{{\rm e}_0}} {\rm Err}^{\rho}_{{\rm v},\vec T,\vec \theta,(\kappa)} \right\Vert_{L^2_{m-\vert{\vec k_{T}}\vert - \vert{\vec k_{\theta}}\vert-1,\delta}(\Sigma^{\rho}_{\rm v})} \\ &< C_{13,m}\epsilon_{6,m}\mu^{\kappa}e^{-\delta T_{{\rm e}_0}}, \label{form0185vv2} \endaligned \end{equation} \begin{equation} \left\Vert \nabla_{\rho}^n\frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}} \frac{\partial}{\partial T_{{\rm e}_0}} \frak e^{\rho} _{\vec T,\vec \theta,(\kappa)}\right\Vert_{L^2_{m-\vert{\vec k_{T}}\vert - \vert{\vec k_{\theta}}\vert-1}(K_{\rm v}^{\text{\rm obst}})} < C_{14,m}\mu^{\kappa-1}e^{-\delta T_{{\rm e}_0}}, \label{form0186vv2} \end{equation} for $\vert{\vec k_{T}}\vert + \vert{\vec k_{\theta}}\vert + n < m -11$. \par Let ${\rm e}_{0} \in C^1_{\rm c}(\mathcal G_{\frak p})$. Then the same inequalities as above hold if we replace $\frac{\partial}{\partial T_{{\rm e}_0}}$ by $\frac{\partial}{\partial \theta_{{\rm e}_0}}$. \end{prop} \begin{proof}[Proposition \ref{expesgen2Tdev} $\Rightarrow$ Theorem \ref{exdecayT33}] Note that if $\vec k_{T} \ne 0$ or $\vec k_{\theta} \ne 0$, then $$ \vec k_{T}\cdot \vec T+\vec k_{\theta}\cdot \vec T^{\rm c} \le 2k \max \{T_{\rm e} \mid k_{T,{\rm e}} \ne 0 \,\text{ or }\, k_{\theta,{\rm e}} \ne 0 \}. $$ It is then easy to see that Proposition \ref{expesgen2Tdev} implies Theorem \ref{exdecayT33} by putting $\delta' = \delta/2k$. \end{proof} \begin{proof}[Proof of Proposition \ref{expesgen2Tdev}] The proof is mostly the same as the argument of Subsection \ref{subsecdecayT}. The new part is the proof of the next lemma.
\begin{lem}\label{sublem278} Let ${\rm e}_{0} \in C^1_{\rm c}(\mathcal G_{\frak p})$. We have \begin{equation}\label{2251} \left\Vert \nabla_{\rho}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}} \frac{\partial}{\partial T_{{\rm e}_0}} {\rm Err}^{\rho}_{{\rm v},\vec T,\vec \theta,(0)} \right\Vert_{L^2_{m-\vert{\vec k_{T}}\vert - \vert{\vec k_{\theta}}\vert-1,\delta}(\Sigma^{\rho}_{\rm v})} < C_{15,m}e^{-\delta T_{{\rm e}_0}} \end{equation} and \begin{equation}\label{2252} \left\Vert \nabla_{\rho}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}} \frac{\partial}{\partial \theta_{{\rm e}_0}} {\rm Err}^{\rho}_{{\rm v},\vec T,\vec \theta,(0)} \right\Vert_{L^2_{m-\vert{\vec k_{T}}\vert - \vert{\vec k_{\theta}}\vert-1,\delta}(\Sigma^{\rho}_{\rm v})} < C_{15,m}e^{-\delta T_{{\rm e}_0}}. \end{equation} \end{lem} \begin{proof} We recall (\ref{2222}), \begin{equation} {\rm Err}^{\rho}_{{\rm v},\vec T,\vec \theta,(0)} = \begin{cases} \chi_{{\rm e},\mathcal X}^{\leftarrow} \overline\partial u_{\vec T,\vec \theta,(0)}^{\rho} & \text{on the ${\rm e}$-th neck if $\rm e$ is outgoing} \\ \chi_{{\rm e},\mathcal X}^{\rightarrow} \overline\partial u_{\vec T,\vec \theta,(0)}^{\rho} & \text{on the ${\rm e}$-th neck if $\rm e$ is incoming} \\ \overline\partial u_{\vec T,\vec \theta,(0)}^{\rho} - \frak{se}^{\rho}_{\vec T,\vec \theta,(0)} & \text{on $K_{\rm v}$} . \end{cases} \end{equation} We first estimate ${\rm Err}^{\rho}_{{\rm v},\vec T,\vec \theta,(0)}$ on the neck region. Let ${\rm e} \in C^1_{\rm c}(\mathcal G_{\frak p})$ be an outgoing edge of $\rm v$. Let ${\rm v}'$ be the other vertex of $\rm e$.
We have \begin{equation}\label{neckerror1} \aligned &\hskip-3.4cm{\rm Err}^{\rho}_{{\rm v},\vec T,\vec \theta,(0)} (\tau'_{\rm e},t'_{\rm e}) \\ = (1-\chi(\tau'_{\rm e}-5T_{\rm e}))\overline \partial \bigg( &p^{\rho}_{\rm e} + (1-\chi(\tau'_{\rm e}-6T_{\rm e})) (u^{\rho}_{\rm v}(\tau'_{\rm e},t'_{\rm e}) - p^{\rho}_{\rm e})\\ &+ \chi(\tau'_{\rm e}-4T_{\rm e})(u^{\rho}_{{\rm v}'}(\tau'_{\rm e}-10T_{\rm e},t'_{\rm e}+\theta_{\rm e}) - p^{\rho}_{\rm e})\bigg). \endaligned \end{equation} Note that we use the coordinates $(\tau'_{\rm e},t'_{\rm e})$ for $u^{\rho}_{\rm v}$ and $(\tau''_{\rm e},t''_{\rm e})$ for $u^{\rho}_{{\rm v}'}$. (See (\ref{cctau12}), (\ref{ccttt12}).) The function $\chi$ is as in (\ref{chichi}). \par If ${\rm e}_{0} \ne {\rm e}$, then the derivative $\partial/\partial T_{{\rm e}_0}$ or $\partial/\partial \theta_{{\rm e}_0}$ of (\ref{neckerror1}) vanishes. \par Let us study $\partial/\partial T_{{\rm e}}$ or $\partial/\partial \theta_{{\rm e}}$ of (\ref{neckerror1}) in the case ${\rm e}_{0} = {\rm e}$. We apply $\partial/\partial \theta_{{\rm e}}$ to the third line of (\ref{neckerror1}) to obtain \begin{equation}\label{neckerror2} \aligned &(1-\chi(\tau'_{\rm e}-5T_{\rm e})) \frac{\partial}{\partial \theta_{{\rm e}}}\overline \partial \left(\chi(\tau'_{\rm e}-4T_{\rm e})u^{\rho}_{{\rm v}'}(\tau'_{\rm e}-10T_{\rm e},t'_{\rm e}+\theta_{\rm e}) \right)\\ &= (1-\chi(\tau'_{\rm e}-5T_{\rm e}))\chi(\tau'_{\rm e}-4T_{\rm e})\overline \partial \bigg(\frac{\partial}{\partial t'_{{\rm e}}} u^{\rho}_{{\rm v}'}(\tau'_{\rm e}-10T_{\rm e},t'_{\rm e}+\theta_{\rm e}) \bigg). \endaligned\end{equation} The support of (\ref{neckerror2}) is contained in the domain $ 4T_{\rm e} - 1 \le \tau'_{\rm e} \le 5T_{\rm e} + 1 $, that is, $ -6T_{\rm e} - 1 \le \tau''_{\rm e} \le -5T_{\rm e} + 1 $. There the $C^m$ norm of $u^{\rho}_{{\rm v}'}$ is estimated as $$ \Vert u^{\rho}_{{\rm v}'}\Vert_{C^m([-6T_{\rm e} - 1,-5T_{\rm e} + 1])} \le C_{11,m}\,e^{-5T_{\rm e} \delta_1}.
$$ On the other hand, the weight function $e_{\rm v,\delta}$ given in \eqref{e1deltamulti} is estimated by $e^{5T_{\rm e}\delta}$ on the support. Therefore this term has the required estimate. (Note $\delta < \delta_1/10$.) The other terms and the other cases of the estimate on the neck region are similar. \par\medskip We next estimate ${\rm Err}^{\rho}_{{\rm v},\vec T,\vec \theta,(0)}$ on the core. As we explained in Remark \ref{rem270} this is nonzero because of the difference of the parametrization of the core. So to study it, we need to discuss the dependence of the parametrization of the core on the coordinate at infinity. Proposition \ref{reparaexpest}, Corollary \ref{corestimatecoochange} and Lemma \ref{changeinfcoorproppara} give the estimates we need. \par We consider $\frak p_c$ and the obstruction bundle data $\frak E_{\frak p_c}$ there. Let $\mathcal G_c$ be the combinatorial type of $\frak p_c$. Note $\frak p \in \frak W_{\frak p_c}$ and $(\frak x_{\frak p}\cup \vec w_c^{\frak p}, u_{\frak p})$ is $\epsilon_{\frak p_c}$-close to $\frak p_c$. Let $\mathcal G(\frak p,c)$ be the combinatorial type of $(\frak x_{\frak p}\cup \vec w_c^{\frak p}, u_{\frak p})$. By Definition \ref{epsiloncloseto} (1) we have $ \mathcal G_c \succ \mathcal G(\frak p,c) $. Let $$ \frak x_{\frak p}\cup \vec w_c^{\frak p} = {\overline\Phi}(\frak y_1,\vec T_{1},\vec \theta_{1}). $$ Note that the singular points of $\frak p$ correspond one-to-one to the edges $\rm e$ of $\frak y_{1}$ such that $T_{1,{\rm e}} = \infty$. \par For each ${\rm v}' \in C^0(\mathcal G_{\frak p_c})$, we denote the corresponding core of $\Sigma_{\frak p_c}$ by $K_{{\rm v}'}^c$. We may also regard $$ K_{{\rm v}'}^c \subset \Sigma_{\frak p}. $$ Let $\pi : \mathcal G_{\frak p_c} \to \mathcal G_{\frak p}$ be the map shrinking the edges $\rm e$ with $T_{\rm e} \ne \infty$. We put ${\rm v} = \pi({\rm v}')$.
Then there exists $\vec R$ such that \begin{equation}\label{inclusionisv} K_{{\rm v}'}^c \subset K_{\rm v}^{+\vec R}. \end{equation} Here the right hand side is the core of the coordinate at infinity of $\frak p$, which is included in the stabilization data of $\frak p$. The inclusion (\ref{inclusionisv}) is obtained from the map $\frak v_{\xi,\frak y,\vec T,\vec\theta}$ appearing in Lemma \ref{changeinfcoorproppara} as follows. \par We put $$ \{{\rm v}(i)\mid i=1,\dots,n_{c,{\rm v}}\} = \{{\rm v}' \in C^0(\mathcal G_{\frak p_c}) \mid \pi({\rm v}') = {\rm v}\}. $$ We consider the union $$ K^{c}_{{\rm v},0} = \bigcup_{i=1}^{n_{c,{\rm v}}} K^{\rm obst}_{{\rm v}(i)} \subset \Sigma_{\frak p_c}. $$ \par We consider $\Sigma_{\vec T,\vec \theta}^{\rho}$, which is the domain of $u_{\vec T,\vec \theta,(0)}^{\rho}$. The parameter $\rho$ includes both the marked points $\vec w^{\rho}_c$ and $\vec w^{\rho}_{\frak p}$. By forgetting $\vec w^{\rho}_{\frak p}$ we have an embedding $$ \frak v_{c,{\rm v}(i),\rho,\vec T,\vec\theta} : K^{c}_{{\rm v}(i)} \to \Sigma^{\rho}_{\vec T,\vec\theta}. $$ (Here the parameter $\vec w^{\rho}_{\frak p}$, which is a part of $\rho$, plays the role of the parameter $\xi \in Q$ in Lemma \ref{changeinfcoorproppara}.) \par By forgetting $\vec w^{\rho}_{c}$ we have an embedding $$ \frak v_{\frak p,{\rm v},\rho,\vec T,\vec\theta} : K_{\rm v} \to \Sigma^{\rho}_{\vec T,\vec\theta}. $$ \par We consider $K_{{\rm v}(i)}^{\rm obst} \subset K^c_{{\rm v}(i)}$, which is a compact set we fixed as a part of the obstruction bundle data centered at $\frak p_{c}$. By Remark \ref{rem267}, we may assume $$ \frak v_{c,{\rm v}(i),\rho,\vec T,\vec\theta}(K_{{\rm v}(i)}^{\rm obst}) \subset \frak v_{\frak p,{\rm v},\rho,\vec T,\vec\theta}(K_{\rm v}).
$$ Therefore, taking the union over $i=1,\dots,n_{c,{\rm v}}$, we obtain \begin{equation}\label{corechangeest} \frak v_{(\frak p,c),{\rm v},\rho,\vec T,\vec\theta} := \frak v_{\frak p,{\rm v},\rho,\vec T,\vec\theta}^{-1}\circ \left(\coprod_{i=1}^{n_{c,{\rm v}}} \frak v_{c,{\rm v}(i),\rho,\vec T,\vec\theta} \right) : K^{c}_{{\rm v},0} \to K_{\rm v}. \end{equation} We denote this map by $$ \text{\rm Res}(\frak v_{(\frak p,c),{\rm v},\rho,\vec T,\vec\theta}) \in C^m(K^{c}_{{\rm v},0},K_{\rm v}). $$ We can estimate it by using Lemma \ref{changeinfcoorproppara}, which is a family version of Proposition \ref{reparaexpest} and Corollary \ref{corestimatecoochange}. (See Lemma \ref{288lam} below.) \par We next describe how $\frak v_{(\frak p,c),{\rm v},\rho,\vec T,\vec\theta}$ and its estimate are related to the estimate of ${\rm Err}^{\rho}_{{\rm v},\vec T,\vec \theta,(0)}$. We first recall that $$ \overline\partial u^{\rho}_{\rm v} \in \bigoplus_{c\in \frak A} E_{c,{\rm v}} $$ by assumption. We denote by $\frak e^{\rho}_{c,\vec T,\vec \theta,(0)}$ the sum of its $E_{c,{\rm v}}$ components over $\rm v$. It is actually independent of $\vec T,\vec \theta$, so we write it as $\frak e^{\rho}_{c,(0)}$ here. We remark that we identify $$ E_{c,{\rm v}} \subset \Gamma_0(K_{\rm v};(u^{\rho}_{\rm v})^{*}TX\otimes \Lambda^{01}) $$ using the obstruction bundle data centered at $\frak p_c$. Here $K_{\rm v} \subset \Sigma_{\frak y}$. (Note that the combinatorial type of $\frak y$ is the same as that of $\frak p$.) \par In (\ref{22190}), we used $u^{\rho}_{\rm v}$ to obtain a map $$ u_{\vec T,\vec \theta,(0)}^{\rho} : (\Sigma_{\vec T,\vec \theta}^{\rho},\partial\Sigma_{\vec T,\vec \theta}^{\rho}) \to (X,L). $$ Moreover $u^{\rho}_{\rm v} = u_{\vec T,\vec \theta,(0)}^{\rho}$ on $K_{\rm v}$.
However $$ E_{c,{\rm v}}(u^{\rho}_{\rm v} ) \ne E_{c,{\rm v}}(u_{\vec T,\vec \theta,(0)}^{\rho}), $$ as subsets of $$ \Gamma(K_{\rm v};(u^{\rho}_{\rm v})^*TX \otimes \Lambda^{01}) = \Gamma(K_{\rm v};(u_{\vec T,\vec \theta,(0)}^{\rho})^*TX \otimes \Lambda^{01}). $$ In fact, $ E_{c,{\rm v}}(u_{\vec T,\vec \theta,(0)}^{\rho})$ is defined by the diffeomorphism $\frak v_{(\frak p,c),{\rm v},\rho,\vec T,\vec\theta}$ and $E_{c,{\rm v}}(u^{\rho}_{\rm v} )$ is defined by the diffeomorphism $\frak v_{(\frak p,c),{\rm v},\rho,\vec \infty}$. \par Therefore, by definition, ${\rm Err}^{\rho}_{{\rm v},\vec T,\vec \theta,(0)}$ on $K_{\rm v}$ is \begin{equation} \overline\partial u_{\vec T,\vec \theta,(0)}^{\rho} - \sum_c\frak e^{\rho}_{c,(0)} = \sum_c \left(\frak e^{\rho,1}_{c,(0)} - \frak e^{\rho,2}_{c,(0)}\right), \end{equation} where $\frak e^{\rho,1}_{c,(0)} \in \bigoplus_{{\rm v} \in C^0(\mathcal G_{\frak p})} E_{c,{\rm v}}(u_{\vec T,\vec \theta,(0)}^{\rho})$ and $\frak e^{\rho,2}_{c,(0)} \in \bigoplus_{{\rm v} \in C^0(\mathcal G_{\frak p})}E_{c,{\rm v}}(u^{\rho}_{\rm v} )$ are defined as follows: \begin{equation} \aligned \frak e^{\rho,1}_{c,(0)}( \frak v_{(\frak p,c),{\rm v},\rho,\vec T,\vec\theta}(z)) &= \text{\rm Pal}_{u_{\frak p_c,{\rm v}}(z),u^{\rho}_{\rm v}(\frak v_{(\frak p,c),{\rm v},\rho,\vec T,\vec\theta}(z))}(\frak e^{\rho}_{c,(0)}), \\ \frak e^{\rho,2}_{c,(0)}( \frak v_{(\frak p,c),{\rm v},\rho,\vec \infty}(z)) &= \text{\rm Pal}_{u_{\frak p_c,{\rm v}}(z),u^{\rho}_{\rm v}(\frak v_{(\frak p,c),{\rm v},\rho,\vec \infty}(z))}(\frak e^{\rho}_{c,(0)}). \endaligned \end{equation} Thus Lemma \ref{288lam} below implies $$ \left\Vert{\rm Err}^{\rho}_{{\rm v},\vec T,\vec \theta,(0)} \right\Vert_{L^2_{m,\delta}(K_{\rm v})} < C_{8,m}\epsilon_{1,m}e^{-\delta T_{\rm min}}. $$ This is the case $\kappa = 0$ of (\ref{form0185vv}) on $K_{\rm v}$. \par Proposition \ref{reparaexpest} implies the estimates (\ref{2251}) and (\ref{2252}) on $K_{\rm v}$.
The proof of Lemma \ref{sublem278} is complete assuming Lemma \ref{288lam}. \end{proof} \begin{lem}\label{288lam} There exist $C_{15,k}$, $T_k$ such that for each ${\rm e}_0 \in C^1_{\rm c}(\mathcal G_{\frak p})$ we have: \begin{equation} \aligned &\left\Vert \nabla_{\frak y_2}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T_2^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta_2^{\vec k_{\theta}}} \frac{\partial}{\partial T_{2,{\rm e}_0}} (\frak v_{(\frak p,c),{\rm v},\rho,\vec T,\vec\theta}) \right\Vert_{C^k} < C_{15,k}e^{-\delta_2 T_{2,{\rm e}_0}}, \\ &\left\Vert \nabla_{\frak y_2}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T_2^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta_2^{\vec k_{\theta}}} \frac{\partial}{\partial \theta_{2,{\rm e}_0}} (\frak v_{(\frak p,c),{\rm v},\rho,\vec T,\vec\theta}) \right\Vert_{C^k} < C_{15,k}e^{-\delta_2 T_{2,{\rm e}_0}}, \endaligned \end{equation} whenever $T_{2,{\rm e}_0}$ is greater than $T_k$ and $\vert{\vec k_{T}}\vert + \vert{\vec k_{\theta}}\vert +n \le k$. \par The first inequality also holds for ${\rm e}_0 \in C^1_{\rm o}(\mathcal G_{\frak p})$. \end{lem} \begin{proof} It suffices to prove the same estimate for $\frak v_{\frak p,{\rm v},\rho,\vec T,\vec\theta}$ and $\frak v_{c,{\rm v}(i),\rho,\vec T,\vec\theta}$. Note that $\rho \in V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\epsilon_0)$ contains various data, of which we use only a part. We now recall the parameter space which contains only the data we use. \par Let $\frak V(\frak x_{\frak p}\cup \vec w_{c}^{\frak p})$ be a neighborhood of $\frak x_{\frak p}\cup \vec w_{c}^{\frak p}$ in the stratum of the Deligne-Mumford moduli space that consists of elements of the same combinatorial type as $\frak x_{\frak p}\cup \vec w_{c}^{\frak p}$.
We also take $\frak V(\frak x_{\frak p}\cup \vec w_{\frak p})$ and $\frak V(\frak x_{\frak p}\cup \vec w_{c}^{\frak p}\cup \vec w_{\frak p})$ that are neighborhoods in the stratum of the Deligne-Mumford moduli space of $\frak x_{\frak p}\cup \vec w_{\frak p}$ and $\frak x_{\frak p}\cup \vec w_{c}^{\frak p}\cup \vec w_{\frak p}$, respectively. \par We can take those three neighborhoods so that there exist $Q_1$ and $Q_2$ such that \begin{equation}\label{2282iso} Q_1 \times \frak V(\frak x_{\frak p}\cup \vec w_{c}^{\frak p}) \cong \frak V(\frak x_{\frak p}\cup \vec w_{c}^{\frak p}\cup \vec w_{\frak p}) \cong Q_2 \times \frak V(\frak x_{\frak p}\cup \vec w_{\frak p}) \end{equation} and that the isomorphisms in (\ref{2282iso}) are compatible with the forgetful maps $$ \frak V(\frak x_{\frak p}\cup \vec w_{c}^{\frak p}\cup \vec w_{\frak p}) \to \frak V(\frak x_{\frak p}\cup \vec w_{c}^{\frak p}) $$ and $$ \frak V(\frak x_{\frak p}\cup \vec w_{c}^{\frak p}\cup \vec w_{\frak p}) \to \frak V(\frak x_{\frak p}\cup \vec w_{\frak p}). $$ We consider the universal family $$ \frak M(\frak x_{\frak p}\cup \vec w_{c}^{\frak p}\cup \vec w_{\frak p}) \to \frak V(\frak x_{\frak p}\cup \vec w_{c}^{\frak p}\cup \vec w_{\frak p}). $$ Together with other data, it gives a coordinate at infinity; we take any one of them. \par Using (\ref{2282iso}), this coordinate at infinity of $\frak x_{\frak p}\cup \vec w_{c}^{\frak p}\cup \vec w_{\frak p}$ induces a $Q_1$-parametrized family of coordinates at infinity of $\frak x_{\frak p}\cup \vec w_{c}^{\frak p}$ and a $Q_2$-parametrized family of coordinates at infinity of $\frak x_{\frak p}\cup \vec w_{\frak p}$. (See Definition \ref{def:Qfamily} for the definition of a $Q$-parametrized family of coordinates at infinity.)
\par Comparing these with the given coordinates at infinity of $\frak x_{\frak p}\cup \vec w_{c}^{\frak p}$ and of $\frak x_{\frak p}\cup \vec w_{\frak p}$, we obtain the maps $\frak v_{\frak p,{\rm v},\rho,\vec T,\vec\theta}$ and $\frak v_{c,{\rm v}(i),\rho,\vec T,\vec\theta}$. Therefore Lemma \ref{288lam} follows from Lemma \ref{changeinfcoorproppara}. \end{proof} We have thus completed the first step of the induction to prove Proposition \ref{expesgen2Tdev}. The other steps are similar to the proof of Theorem \ref{exdecayT}. \par When we study $T_{\rm e}$ and $\theta_{\rm e}$ derivatives and prove Proposition \ref{expesgen2Tdev}, we again need to estimate the $T_{\rm e}$ and $\theta_{\rm e}$ derivatives of the map $$ E_{c} \to \Gamma_0(K_{\rm v};(u_{\vec T,\vec \theta,(\kappa)}^{\rho})^*TX \otimes \Lambda^{01}). $$ This map is defined by using the diffeomorphism $\frak v_{(\frak p,c),{\rm v},\rho,\vec T,\vec\theta}$. Therefore we can use Lemma \ref{288lam} in the same way as above to obtain the required estimate.\footnote {We remark that $E_c$ is a finite dimensional vector space consisting of smooth sections with compact support. So it is easy, using Lemma \ref{288lam}, to estimate the effect of the change of variables by $\frak v_{(\frak p,c),{\rm v},\rho,\vec T,\vec\theta}$ on its elements.} \par The proof of Proposition \ref{expesgen2Tdev} is complete. \end{proof} \begin{proof}[Proof of Lemma \ref{0estimatekkk}] We can prove Lemma \ref{0estimatekkk} by integrating the inequality in Lemma \ref{288lam}. \end{proof} Thus we have proved Theorem \ref{exdecayT33}. \par We can use it in the same way as in Section \ref{surjinj} to prove surjectivity and injectivity of the map ${\rm Glu}_{\vec T,\vec \theta}$. \par To show that ${\rm Glu}_{\vec T,\vec \theta}$ is $\Gamma_{\frak p}^+$-equivariant, we only need to remark that if $\frak p_c \in \frak C(\frak p)$ then $\Gamma_{\frak p}^+ \subseteq \Gamma_{\frak p_c}^+$. (In fact all the constructions are equivariant.)
\par The proof of Theorem \ref{gluethm3} is complete. \qed \begin{rem} We close this subsection with another technical remark. Theorems \ref{gluethm3} and \ref{exdecayT33} imply that $$ \aligned \text{\rm Glu} : V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;\epsilon_1) &\times (\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1) \\ &\to \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon_{0},\vec T_{0}} \endaligned $$ is a strata-wise $C^m$ diffeomorphism if, for all $\rm e$, $T_{{\rm e},0}$ is larger than a number {\it depending} on $m$. Using Theorem \ref{exdecayT33} we can define smooth structures on both sides so that the map becomes a $C^{m}$ diffeomorphism. (See Section \ref{chart}. We will use $s_{\rm e}= T_{\rm e}^{-1}$ as a coordinate.) \par Note that the domain and the target of $\text{\rm Glu} $ have strata-wise $C^{\infty}$ structures.\footnote{This is an easy consequence of the implicit function theorem.} However, the construction we gave does not show that $\text{\rm Glu}$ is of $C^{\infty}$-class. This is not really an issue for our purpose of defining a virtual fundamental chain or cycle. Indeed, a Kuranishi structure of $C^k$ class with sufficiently large $k$ is enough for such a purpose. (A $C^1$-structure is enough.) \par On the other hand, as we will explain in Section \ref{toCinfty}, Theorems \ref{gluethm3} and \ref{exdecayT33} are enough to prove the existence of a Kuranishi structure of $C^{\infty}$ class. Except in Section \ref{toCinfty}, we fix $m$ and will construct a Kuranishi structure of $C^m$ class. For this purpose we choose $T_{{\rm e},0}$ so that it is larger than $T_{10m}$. Therefore our construction of $\text{\rm Glu}$ works on $L^2_{10m+1,\delta}$.
\end{rem} \par\medskip \section{Cutting down the solution space by transversals} \label{cutting} In Section \ref{glueing}, we described the thickened moduli space $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon_{0},\vec T_{0}}$ by a gluing construction. Its dimension is given by $$\aligned &\dim \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon_{0},\vec T_{0}}\\ &= \text{\rm virdim}\, \mathcal M_{k+1,\ell}(\beta) + \dim_{\R} \mathcal E_{\frak A} + (2\ell_{\frak p} + 2\sum_{c\in \frak B}\ell_c)\\ &= k+1+2\ell + \mu(\beta) - 3 + \dim_{\R} \mathcal E_{\frak A} + (2\ell_{\frak p} + 2\sum_{c\in \frak B}\ell_c). \endaligned$$ \par Note that the dimension of the Kuranishi neighborhood of $\frak p$ in $\mathcal M_{k+1,\ell}(\beta)$ must be $\text{\rm virdim}\, \mathcal M_{k+1,\ell}(\beta) + \dim_{\R} \mathcal E_{\frak A}$. Therefore we need to cut down this moduli space $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon_{0},\vec T_{0}}$ to obtain a Kuranishi neighborhood. We do so by requiring the transversal constraint as in Definition \ref{transconst}. We will define it below in a slightly generalized form. (For example, we define it for $(\frak x,u)$ such that $u$ is not necessarily pseudo-holomorphic but only satisfies the equation $\overline\partial u \equiv 0 \mod \mathcal E_{\frak A}(u)$.) \par Let $\frak p \in \mathcal M_{k+1,\ell}(\beta)$ and $\emptyset \ne \frak A \subseteq \frak B \subseteq \frak C(\frak p)$. We consider a subset $\frak B^- \subseteq \frak B$ with $\frak A \subseteq \frak B^-$. Let $\vec w_{\frak p} = (w_{\frak p,1},\dots,w_{\frak p,\ell_{\frak p}})$ be a symmetric stabilization of $\frak x_{\frak p}$ that is a part of the stabilization data at $\frak p$. Let $I\subset \{1,\dots,\ell_{\frak p}\}$ and consider $\vec w^{-}_{\frak p} = (w_{\frak p,i}; i \in I)$. For simplicity of notation we put $I= \{1,\dots,\ell^-_{\frak p}\}$.
We assume that $\vec w^-_{\frak p}$ is already a symmetric stabilization of $\frak x_{\frak p}$. It induces a stabilization data at $\frak p$ in an obvious way. We thus obtain $\mathcal M_{k+1,(\ell,\ell^-_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B^-)_{\epsilon_{0},\vec T_{0}}$. \begin{defn}\label{def7289} An element $(\frak Y,u',(\vec w'_c;c\in \frak B))$ of $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon_{0},\vec T_{0}}$ is said to satisfy the (partial) {\it transversal constraint} for $\vec w_{\frak p} \setminus \vec w^-_{\frak p}$ and $\frak B \setminus \frak B^-$ if the following holds. \begin{enumerate} \item If $i > \ell^-_{\frak p}$ then $ u'(w'_{\frak p,i}) \in \mathcal D_{\frak p,i}. $ Here $w'_{\frak p,i}$, $i=1,\dots,\ell_{\frak p}$ denote the $(\ell+1)$-th, \dots, $(\ell+\ell_{\frak p})$-th interior marked points of $\frak Y$. \item If $c \in \frak B \setminus \frak B^-$ and $i=1,\dots,\ell_{c}$ then $ u'(w'_{c,i}) \in \mathcal D_{c,i}$. Here $\vec w'_c = (w'_{c,1},\dots,w'_{c,\ell_c})$. \end{enumerate} We denote by $$ \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B) ^{\vec w^-_{\frak p},\frak B^-}_{\epsilon_{0},\vec T_{0}} $$ the set of all elements of the thickened moduli space $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon_{0},\vec T_{0}}$ satisfying the transversal constraint for $\vec w_{\frak p} \setminus \vec w^-_{\frak p}$ and $\frak B \setminus \frak B^-$. \end{defn} Our next goal is to show that $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B) ^{\vec w^-_{\frak p},\frak B^-}_{\epsilon_{0},\vec T_{0}}$ is homeomorphic to $\mathcal M_{k+1,(\ell,\ell^-_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B^-)_{\epsilon_{0},\vec T_{0}}$. (Proposition \ref{forgetstillstable}.) To prove this we first define an appropriate forgetful map.
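This homeomorphism is consistent with the dimension formula at the beginning of this section: each transversal constraint $u'(w'_{i}) \in \mathcal D_{i}$ cuts the dimension by $2$, so that
$$\aligned
&\dim \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)^{\vec w^-_{\frak p},\frak B^-}_{\epsilon_{0},\vec T_{0}} \\
&= \dim \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon_{0},\vec T_{0}} - 2(\ell_{\frak p}-\ell^-_{\frak p}) - 2\sum_{c\in \frak B\setminus\frak B^-}\ell_c \\
&= \text{\rm virdim}\, \mathcal M_{k+1,\ell}(\beta) + \dim_{\R} \mathcal E_{\frak A} + 2\ell^-_{\frak p} + 2\sum_{c\in \frak B^-}\ell_c \\
&= \dim \mathcal M_{k+1,(\ell,\ell^-_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B^-)_{\epsilon_{0},\vec T_{0}}.
\endaligned$$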
\begin{defn}\label{defnforget} Let $(\frak Y,u',(\vec w'_c;c\in \frak B)) \in \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon_{0},\vec T_{0}}$. Note $\frak Y = \frak Y_0 \cup \vec w_{\frak p}$ and $\vec w_{\frak p}$ consists of $\ell_{\frak p}$ interior marked points. We take only $\ell^-_{\frak p}$ of them, denote them by $\vec w_{\frak p}^-$, and put $\frak Y^- = \frak Y_0 \cup \vec w_{\frak p}^-$. We assume that $\frak Y^-$ is stable and $\frak x_{\frak p} \cup \vec w_{\frak p}^-$ is also stable. We also assume that $\Gamma_{\frak p}$ preserves $\vec w_{\frak p}$ as a set. We define the forgetful map by: \begin{equation}\label{2265} \frak{forget}_{\frak B,\frak B^-;\vec w_{\frak p},\vec w^-_{\frak p}} (\frak Y,u',(\vec w'_c;c\in \frak B)) = (\frak Y^-,u',(\vec w'_c;c\in \frak B^-)). \end{equation} \end{defn} \begin{lem}\label{fraAforget1} The map $\frak{forget}_{\frak B,\frak B^-;\vec w_{\frak p},\vec w^-_{\frak p}}$ defines $$ \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon_{0},\vec T_{0}} \to \mathcal M_{k+1,(\ell,\ell^-_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B^-)_{\epsilon_{0},\vec T_{0}}. $$ This map is a continuous and strata-wise smooth submersion. The fiber is $2(\ell_{\frak p} - \ell^-_{\frak p}) + 2\sum_{c\in \frak B \setminus \frak B^-}\ell_c$ dimensional. \end{lem} \begin{proof} We note that $\frak Y^-$ is still stable. (This is because $\frak x_{\frak p} \cup \vec w^{-}_{\frak p}$ is stable.) Therefore $\frak{forget}_{\frak B,\frak B^-;\vec w_{\frak p},\vec w^-_{\frak p}}$ preserves the stratification. Note we forget the positions of the $\ell_{\frak p} - \ell^-_{\frak p} + \sum_{c\in \frak B \setminus \frak B^-}\ell_c$ marked points.
There is no constraint on those marked points other than those coming from the condition that $(\frak Y,u')$ is $\epsilon_0$-close to $(\frak x_{\frak p}\cup \vec w_{\frak p},u_{\frak p})$ and $(\frak Y_0 \cup \vec w'_c,u')$ are $\epsilon_0$-close to $\frak p \cup \vec w_c^{\frak p}$ for all $c \in \frak A$. These are open conditions. Therefore this map is a strata-wise smooth submersion and the fiber is $2(\ell_{\frak p} - \ell^-_{\frak p}) + 2\sum_{c\in \frak B \setminus \frak B^-}\ell_c$ dimensional. \end{proof} \begin{prop}\label{forgetstillstable} The following holds if $\epsilon_{0},\epsilon_{\frak p_{c}}$ are sufficiently small. \begin{enumerate} \item The space $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B) ^{\vec w^-_{\frak p},\frak B^-}_{\epsilon_{0},\vec T_0}$ is a strata-wise smooth submanifold of our thickened moduli space $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon_{0},\vec T_{0}}$ of codimension $2(\ell_{\frak p} - \ell^-_{\frak p}) + 2\sum_{c\in \frak B \setminus \frak B^-}\ell_c$. \item The restriction of $\frak{forget}_{\frak B,\frak B^-;\vec w_{\frak p},\vec w^-_{\frak p}}$ induces a homeomorphism $$ \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon_{0},\vec T_{0}}^{\vec w^-_{\frak p},\frak B^-} \to \mathcal M_{k+1,(\ell,\ell^-_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B^-)_{\epsilon_{0},\vec T_{0}} $$ that is a strata-wise diffeomorphism. \end{enumerate} \end{prop} \begin{rem} Note that if $c \in \frak B$ then $\frak p \in \frak M_{\frak p_c}$ and $\epsilon_c$ is used to define $\frak M_{\frak p_c}$. (See Definition \ref{openW+++}.) \end{rem} \begin{proof} We consider the evaluation maps at the $(\ell_{\frak p} - \ell^-_{\frak p}) + \sum_{c\in \frak B \setminus \frak B^-}\ell_c$ marked points that we forget by the map $\frak{forget}_{\frak B,\frak B^-;\vec w_{\frak p},\vec w^-_{\frak p}}$.
It defines a continuous and strata-wise smooth map \begin{equation}\label{evatforgottenmark} \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon_{0},\vec T_{0}} \to X^{(\ell_{\frak p} - \ell^-_{\frak p}) + \sum_{c\in \frak B \setminus \frak B^-}\ell_c}. \end{equation} We consider the submanifold \begin{equation}\label{transforgottenmark} \prod_{i=\ell^-_{\frak p}+1}^{\ell_{\frak p}}\mathcal D_{\frak p,i} \times \prod_{c \in \frak B \setminus \frak B^-}\prod_{i=1}^{\ell_{c}}\mathcal D_{c,i} \end{equation} of the right hand side of (\ref{evatforgottenmark}). By Proposition \ref{linearMV} (2), the map (\ref{evatforgottenmark}) is transversal to (\ref{transforgottenmark}) at $\frak p$ if $\epsilon_{\frak p_c}$ is sufficiently small. Therefore we may assume (\ref{evatforgottenmark}) is transversal to (\ref{transforgottenmark}) everywhere. Since $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B) ^{\vec w^-_{\frak p},\frak B^-}_{\epsilon_{0},\vec T_{0}}$ is the inverse image of (\ref{transforgottenmark}) by the map (\ref{evatforgottenmark}), the statement (1) follows. \par By choosing $\epsilon_0$ sufficiently small we can ensure that the image under the map (\ref{evatforgottenmark}) of each fiber of the map $\frak{forget}_{\frak B,\frak B^-;\vec w_{\frak p},\vec w^-_{\frak p}}$ intersects the submanifold (\ref{transforgottenmark}) at exactly one point. Moreover by stability the elements of $\mathcal M_{k+1,(\ell,\ell^-_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B^-)_{\epsilon_{0},\vec T_{0}} $ have no automorphisms. The statement (2) follows. \end{proof} We next consider a similar but slightly different case of the transversal constraint.
Namely: \begin{defn}\label{defn9797} An element $(\frak Y,u',(\vec w'_c;c\in \frak B))$ of $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon_{0},\vec T_{0}}$ is said to satisfy the {\it transversal constraint at all additional marked points} if the following holds. Let $w'_{\frak p,i}$, $i=1,\dots,\ell_{\frak p}$ denote the $(\ell+1)$-th, \dots, $(\ell+\ell_{\frak p})$-th interior marked points of $\frak Y$. We put $\vec w'_c = (w'_{c,1},\dots,w'_{c,\ell_c})$. \begin{enumerate} \item For all $i = 1,\dots, \ell_{\frak p}$ we have $ u'(w'_{\frak p,i}) \in \mathcal D_{\frak p,i}. $ \item For all $c \in \frak B$ and $i=1,\dots,\ell_{c}$ we have $ u'(w'_{c,i}) \in \mathcal D_{c,i}. $ \end{enumerate} \par We denote by $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B) ^{{\rm trans}}_{\epsilon_{0},\vec T_{0}}$ the set of all elements of the thickened moduli space $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon_{0},\vec T_{0}}$ satisfying the transversal constraint at all additional marked points. \end{defn} \begin{lem}\label{transstratasmf} The set $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B) ^{{\rm trans}}_{\epsilon_{0},\vec T_{0}}$ is a closed subset of our space $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon_{0},\vec T_{0}}$ and is a strata-wise smooth submanifold of codimension $2\ell_{\frak p} + 2\sum_{c\in \frak B}\ell_c$. \end{lem} \begin{rem} We note that the restriction of the map $\rm Glu$ to the thickened moduli space $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B) ^{{\rm trans}}_{\epsilon_{0},\vec T_{0}}$ is a homeomorphism onto its image. \end{rem} \begin{proof} By Proposition \ref{forgetstillstable} it suffices to consider the case $\frak A = \frak B$.
In a way similar to the proof of Proposition \ref{forgetstillstable}, we define \begin{equation}\label{evatforgottenmark2} \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)_{\epsilon_{0},\vec T_{0}} \to X^{\ell_{\frak p} + \sum_{c\in \frak A}\ell_c} \end{equation} which is the evaluation map at all the added marked points. Note that at the point $\frak p$, when we perturb the added marked points $\vec w'_{\frak p}$ and $\vec w'_{c}$, we still obtain an element of the thickened moduli space. This is because the map $u_{\frak p}$ is pseudo-holomorphic. Therefore, the evaluation map (\ref{evatforgottenmark2}) is transversal to \begin{equation}\label{transforgottenmark2} \prod_{i=1}^{\ell_{\frak p}}\mathcal D_{\frak p,i} \times \prod_{c \in \frak A }\prod_{i=1}^{\ell_{c}}\mathcal D_{c,i} \end{equation} at $\frak p$. This implies that (\ref{evatforgottenmark2}) is transversal to (\ref{transforgottenmark2}) everywhere on $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)_{\epsilon_{0},\vec T_{0}}$, if $\epsilon_0$ is sufficiently small. Since $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{{\rm trans}}_{\epsilon_{0},\vec T_{0}}$ is the inverse image of (\ref{transforgottenmark2}) by the map (\ref{evatforgottenmark2}), the lemma follows. \end{proof} \begin{defn}\label{zerosetofspreddf} We denote by $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{{\rm trans}}_{\epsilon_{0},\vec T_{0}} \cap \frak s^{-1}(0)$ the set of all $(\frak Y,u',(\vec w'_c;c\in \frak A)) \in \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{{\rm trans}}_{\epsilon_{0},\vec T_{0}}$ such that $u'$ is pseudo-holomorphic.
\end{defn} Our space $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{{\rm trans}}_{\epsilon_{0},\vec T_{0}} \cap \frak s^{-1}(0)$ is a closed subset of the moduli space $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{{\rm trans}}_{\epsilon_{0},\vec T_{0}}$. \par By forgetting all the additional marked points we obtain a map \begin{equation}\label{forget1} \frak{forget} : \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{{\rm trans}}_{\epsilon_{0},\vec T_{0}} \cap \frak s^{-1}(0) \to \mathcal M_{k+1,\ell}(\beta). \end{equation} We recall that we have injective homomorphisms $$ \aligned &\Gamma_{\frak p} \to \frak S_{\ell_{\frak p}} \times \prod_{c\in \frak A}\frak S_{\ell_c}, \\ &\Gamma^+_{\frak p} \to \frak S_{\ell} \times \frak S_{\ell_{\frak p}} \times \prod_{c\in \frak A}\frak S_{\ell_c}. \endaligned $$ The group $\Gamma^+_{\frak p}$ acts on $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)_{\epsilon_{0},\vec T_{0}}$ as follows. We regard $\Gamma^+_{\frak p} \subset \frak S_{\ell} \times \frak S_{\ell_{\frak p}} \times \prod_{c\in \frak A}\frak S_{\ell_c}$. Then the action of $\Gamma_{\frak p}^+$ on $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)_{\epsilon_{0},\vec T_{0}}$ is by exchanging the interior marked points. It is easy to see that $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{{\rm trans}}_{\epsilon_{0},\vec T_{0}}$ is invariant under this action. Therefore (\ref{forget1}) induces a map \begin{equation}\label{forget2} \overline{\frak{forget}} : \bigg(\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{{\rm trans}}_{\epsilon_{0},\vec T_{0}} \cap \frak s^{-1}(0)\bigg)/\Gamma_{\frak p} \to \mathcal M_{k+1,\ell}(\beta). 
\end{equation} \begin{rem} The map (\ref{forget2}) induces a map $$ \bigg(\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{{\rm trans}}_{\epsilon_{0},\vec T_{0}} \cap \frak s^{-1}(0)\bigg)/\Gamma_{\frak p}^+ \to \mathcal M_{k+1,\ell}(\beta)/\frak S_{\ell}. $$ See Remark \ref{rem216}. We can use this remark to construct an $\frak S_{\ell}$ invariant Kuranishi structure on $\mathcal M_{k+1,\ell}(\beta)$. \end{rem} \begin{prop}\label{charthomeo} The map (\ref{forget2}) is a homeomorphism onto an open neighborhood of $\frak p$. \end{prop} \begin{proof} The geometric intuition behind this proposition is clear. We will give a detailed proof below for completeness' sake. We first review the definition of the topology of $\mathcal M_{k+1,\ell}(\beta)$ given in \cite[Definition 10.2, 10.3]{FOn}, \cite[Definition 7.1.39, 7.1.42]{fooo:book1}. \begin{defn}\label{convdefn} Let $\frak p_a = ((\Sigma_a,\vec z_a,\vec z^{\rm int}_a),u_a), \frak p_{\infty} = ((\Sigma,\vec z,\vec z^{\rm int}),u) \in \mathcal M_{k+1,\ell}(\beta)$. We {\it assume} $(\Sigma_a,\vec z_a,\vec z^{\rm int}_a)$ and $(\Sigma,\vec z,\vec z^{\rm int})$ are stable. We say that a sequence $((\Sigma_a,\vec z_a,\vec z^{\rm int}_a),u_a)$ {\it stably converges} to $((\Sigma,\vec z,\vec z^{\rm int}),u)$ and write $$ \underset{a\to \infty}{\rm lims}\,\, \frak p_a = \frak p_{\infty} $$ if the following holds. \begin{enumerate} \item We assume $$\lim_{a\to \infty}(\Sigma_a,\vec z_a,\vec z^{\rm int}_a) = (\Sigma,\vec z,\vec z^{\rm int})$$ in the Deligne-Mumford moduli space $\mathcal M_{k+1,\ell}$. We take a coordinate at infinity of $(\Sigma,\vec z,\vec z^{\rm int})$. It determines a diffeomorphism between the cores of $\Sigma_a$ and of $\Sigma$ for large $a$. \item For each $\epsilon$ we can extend the core appropriately so that there exists $a_0$ such that the following inequality and (3) hold for $a > a_0$. $$ \vert u_a - u\vert_{C^1(\text{Core})} < \epsilon.
$$ Here we regard $u_a$ and $u$ as maps from the cores of $\Sigma_a$ and $\Sigma$ via the above-mentioned diffeomorphism. \item The diameter of the image of each connected component of the neck region by $u_a$ is smaller than $\epsilon$. \end{enumerate} \end{defn} \begin{defn} Let $\frak p_a = ((\Sigma_a,\vec z_a,\vec z^{\rm int}_a),u_a)$, $\frak p_{\infty} = ((\Sigma,\vec z,\vec z^{\rm int}),u) \in \mathcal M_{k+1,\ell}(\beta)$. We say that $\frak p_a$ converges to $\frak p_{\infty}$ and write $$ \lim_{a\to \infty}\frak p_a = \frak p_{\infty} $$ if there exist $\ell' \ge 0$ and $\frak q_a = ((\Sigma_a,\vec z_a,\vec z^{\rm int}_a\cup \vec z^{+,\rm int}_a),u_a)$, $\frak q_{\infty} = ((\Sigma,\vec z,\vec z^{\rm int}\cup \vec z^{+,\rm int}_{\infty}),u) \in \mathcal M_{k+1,\ell+\ell'}(\beta)$ such that \begin{equation}\label{Sconv2} \underset{a\to \infty}{\rm lims}\,\, \frak q_a = \frak q_{\infty} \end{equation} and \begin{equation}\label{conv2} \frak{forget}_{(k+1;\ell+\ell'),(k+1;\ell)}(\frak q_a)= \frak p_a, \quad \frak{forget}_{(k+1;\ell+\ell'),(k+1;\ell)}(\frak q_{\infty})= \frak p_{\infty}. \end{equation} \end{defn} Here $$ \frak{forget}_{(k+1;\ell+\ell'),(k+1;\ell)} : \mathcal M_{k+1,\ell+\ell'}(\beta) \to \mathcal M_{k+1,\ell}(\beta) $$ is the map forgetting the $(\ell+1)$-st, \dots, $(\ell+\ell')$-th interior marked points (and shrinking the irreducible components that become unstable; see \cite[p.~419]{fooo:book1}). \par Now we prove the following: \begin{lem}\label{setisopen} If $\epsilon_0$, $\epsilon_{\frak p_c}$ are sufficiently small, then the image of (\ref{forget2}) is an open subset of $\mathcal M_{k+1,\ell}(\beta)$. \end{lem} \begin{proof} Let $$ \frak p' \in \overline{\frak{forget}} \bigg(\big(\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{{\rm trans}}_{\epsilon_{0},\vec T_{0}} \cap \frak s^{-1}(0)\big)/\Gamma_{\frak p}\bigg) $$ and $ \frak p_a \in \mathcal M_{k+1,\ell}(\beta) $ such that $ \lim_{a\to\infty} \frak p_a = \frak p'.
$ We will prove $$ \frak p_a \in \overline{\frak{forget}} \bigg(\big(\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{{\rm trans}}_{\epsilon_{0},\vec T_{0}} \cap \frak s^{-1}(0)\big)/\Gamma_{\frak p}\bigg) $$ for all sufficiently large $a$. \par We put $\frak p' = (\frak Y_0,u')$ and $$ (\frak Y_0\cup \vec w'_{\frak p},u',(\vec w'_c;c\in \frak A)) \in \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{{\rm trans}}_{\epsilon_{0},\vec T_{0}} \cap \frak s^{-1}(0). $$ We also put $\frak p_a = (\frak x_{\frak p_a},u_{\frak p_a})$. By Definition \ref{convdefn}, there exist $\frak q_a, \frak q_{\infty} \in \mathcal M_{k+1,\ell+\ell'}(\beta)$ such that (\ref{Sconv2}) holds and \begin{equation}\label{conv3} \frak{forget}_{(k+1;\ell+\ell'),(k+1;\ell)}(\frak q_a)= \frak p_a, \quad \frak{forget}_{(k+1;\ell+\ell'),(k+1;\ell)}(\frak q_{\infty})= \frak p'. \end{equation} Let $\vec z^{+,\rm int}_a \subset \frak x_{\frak q_a}, \vec z^{+,\rm int}_{\infty} \subset \frak x_{\frak q_{\infty}}$ be the interior marked points that are not the marked points of $\frak p_a$ or of $\frak p'$. By perturbing $\frak q_a$ and $\frak q_{\infty}$ a bit we may assume \begin{equation}\label{notinDsss} \aligned u_{\frak q_a}(z^{+,\rm int}_{a,i}) &\notin \bigcup_{i=1}^{\ell_{\frak p}} \mathcal D_{\frak p,i} \cup \bigcup_{c \in \frak A}\bigcup_{i=1}^{\ell_{c}} \mathcal D_{c,i},\\ u_{\frak q_{\infty}}(z^{+,\rm int}_{{\infty},i}) &\notin \bigcup_{i=1}^{\ell_{\frak p}} \mathcal D_{\frak p,i} \cup \bigcup_{c \in \frak A}\bigcup_{i=1}^{\ell_{c}} \mathcal D_{c,i}. \endaligned \end{equation} We consider the map $\Sigma_{\frak q_{\infty}} \to \Sigma_{\frak p'}$ that shrinks the irreducible components which become unstable after forgetting the $(\ell+1)$-st, \dots, $(\ell+\ell')$-th marked points $\vec z^{+,\rm int}_{\infty}$ of $\frak x_{\frak q_{\infty}}$.
By (\ref{notinDsss}) none of the points $\vec w'_{\frak p}$, $\vec w'_c$ are contained in the image of the irreducible components of $\Sigma_{\frak q_{\infty}}$ that we shrink. Therefore $\vec w'_{\frak p}, \vec w'_c \subset \Sigma_{\frak p'}$ may be regarded as points of $\Sigma_{\frak q_{\infty}}$. \par Then by extending the core if necessary we may assume that $\vec w'_{\frak p}, \vec w'_c$ are in the core of $\Sigma_{\frak q_{\infty}}$. Here we use the coordinate at infinity that appears in the definition of $\underset{a\to \infty}{\rm lims}\,\, \frak q_a = \frak q_{\infty}$. \par We note that $$ u_{\frak q_{\infty}}(w'_{\frak p,i}) \in \mathcal D_{\frak p,i}, \quad u_{\frak q_{\infty}}(w'_{c,i}) \in \mathcal D_{c,i}. $$ We also note that $u_{\frak q_a}$ converges to $u_{\frak q_{\infty}}$ in $C^1$-topology on the core. Moreover $u_{\frak q_{\infty}}$ is transversal to $\mathcal D_{\frak p,i}$ (resp. $\mathcal D_{c,i}$) at $u_{\frak q_{\infty}}(w'_{\frak p,i})$ (resp. $u_{\frak q_{\infty}}(w'_{c,i})$). Therefore, for sufficiently large $a$ there exist $w'_{a,\frak p,i}, w'_{a,c,i} \in \Sigma_{\frak q_a}$ with the following properties. \begin{enumerate} \item $u_{\frak q_a}(w'_{a,\frak p,i}) \in \mathcal D_{\frak p,i}$. \item $u_{\frak q_a}(w'_{a,c,i}) \in \mathcal D_{c,i}$. \item $\lim_{a \to \infty} w'_{a,\frak p,i} = w'_{\frak p,i}$. \item $\lim_{a \to \infty} w'_{a,c,i} = w'_{c,i}$. \end{enumerate} Here in the statements (3) and (4) we use the identification of the core of $\Sigma_{\frak q_a}$ and of $\Sigma_{\frak q_{\infty}}$ induced by the coordinate at infinity that appears in the definition of $\underset{a\to \infty}{\rm lims}\,\, \frak q_a = \frak q_{\infty}$. We send $w'_{a,\frak p,i}$ by the map $\Sigma_{\frak q_a} \to \Sigma_{\frak p_a}$ and denote it by the same symbol. We thus obtain $\vec w'_{a,\frak p} \subset \Sigma_{\frak p_a}$. The additional marked points $w'_{a,c,i}$ induce $\vec w'_{a,c} \subset \Sigma_{\frak p_a}$ in the same way. 
\par Using (1)-(4) above and the fact that $u_{\frak q_a}$ converges to $u_{\frak q_{\infty}}$ in $C^1$-topology we can easily show that $$ (\frak x_{\frak p_a} \cup \vec w'_{a,\frak p}, u_{\frak p_a}, (\vec w'_{a,c};c\in \frak A)) \in \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{{\rm trans}}_{\epsilon_{0},\vec T_{0}} \cap \frak s^{-1}(0) $$ for sufficiently large $a$. Thus we have $$ \aligned \frak p_a &= \overline{\frak{forget}}((\frak x_{\frak p_a} \cup \vec w'_{a,\frak p}, u_{\frak p_a}, (\vec w'_{a,c};c\in \frak A))) \\ &\in \overline{\frak{forget}} \bigg(\big(\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{{\rm trans}}_{\epsilon_{0},\vec T_{0}} \cap \frak s^{-1}(0)\big)/\Gamma_{\frak p}\bigg) \endaligned $$ for sufficiently large $a$. The proof of Lemma \ref{setisopen} is complete. \end{proof} \begin{lem}\label{injectivitypp} If $\epsilon_0$ is sufficiently small, then the map (\ref{forget2}) is injective. \end{lem} \begin{proof} The proof is by contradiction. We assume that there exists $\epsilon_{0}^{(n)}$ with $\epsilon_{0}^{(n)} \to 0$ as $n\to \infty$, and \begin{equation}\label{nearpisassump} \aligned &(\frak Y_{j;(n),0}\cup \vec w'_{j;(n),\frak p},u'_{j;(n)},(\vec w'_{j;(n),c};c\in \frak A))\\ &\in \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{{\rm trans}}_{\epsilon^{(n)}_{0},\vec T_{0}} \cap \frak s^{-1}(0) \endaligned \end{equation} for $j=1,2$. Here we extend the core of the coordinate at infinity of $\frak p$ by $\vec R_{(n)} \to \infty$ to define the right hand side of (\ref{nearpisassump}). 
We assume
\begin{equation}\label{Y1eqY2}
(\frak Y_{1;{(n)},0},u'_{1;{(n)}}) \sim (\frak Y_{2;{(n)},0},u'_{2;{(n)}})
\end{equation}
in $\mathcal M_{k+1,\ell}(\beta)$ but
\begin{equation}\label{Y1isY2}
\aligned
&[(\frak Y_{1;{(n)},0}\cup \vec w'_{1;{(n)},\frak p},u'_{1;{(n)}},(\vec w'_{1;{(n)},c};c\in \frak A))]\\
&\ne [(\frak Y_{2;{(n)},0}\cup \vec w'_{2;{(n)},\frak p},u'_{2;{(n)}},(\vec w'_{2;{(n)},c};c\in \frak A))]
\endaligned
\end{equation}
in $\big(\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{{\rm trans}}_{\epsilon_{0}^{(n)},\vec T_{0}} \cap \frak s^{-1}(0)\big)/\Gamma_{\frak p}$. We will deduce a contradiction.
\par
The condition (\ref{Y1eqY2}) implies that there exists $v_{(n)} : \Sigma_{\frak Y_{1;{(n)},0}} \to \Sigma_{\frak Y_{2;{(n)},0}}$ with the following properties.
\begin{enumerate}
\item $v_{(n)}$ is a biholomorphic map.
\item $u'_{2;{(n)}} \circ v_{(n)} = u'_{1;{(n)}}$.
\item $v_{(n)}$ sends the $k+1$ boundary marked points and the $\ell$ interior marked points of $\frak Y_{1;{(n)},0}$ to the corresponding marked points of $\frak Y_{2;{(n)},0}$.
\end{enumerate}
We take a coordinate at infinity associated to the stabilization data at $\frak p$. Then (\ref{nearpisassump}) implies that the core of $\frak Y_{j;{(n)},0}$ ($j=1,2$) is identified with the extended core $(K_{\rm v}^{\frak p})^{+\vec R_{(n)}}$ of $\frak p$. This identification may not preserve the complex structures but preserves the $k+1$ boundary and $\ell+\ell'$ interior marked points. Therefore $v_{(n)}$ induces
$$
v_{(n)} : K_{0,{\rm v}}^{\frak p} \to (K_{\rm v}^{\frak p})^{+\vec R_{(n)}}
$$
where $K_{0,{\rm v}}^{\frak p}$ is a compact set such that $w'_{1;(n),\frak p,i}, w'_{1;(n),c,i} \in K_{0,{\rm v}}^{\frak p}$. (We may extend the core so that we can find such $K_{0,{\rm v}}^{\frak p}$.)
\par
We may take $\vec R_{(n)} \to \infty$ so that the $u'_{j;(n)}$ image of each of the connected components of the complement of $(K_{\rm v}^{\frak p})^{+\vec R_{(n)}}$ has diameter $< \epsilon_{0}^{(n)}$.
\par
We consider the complex structure of $\Sigma_{\frak p}$ on $(K_{\rm v}^{\frak p})^{+\vec R_{(n)}}$ and denote it by $j_{\frak p}$. Then we have
\begin{equation}\label{almstjequiv}
\lim_{n\to \infty}\Vert (v_{(n)} )_* j_{\frak p} - j_{\frak p}\Vert_{C^1\left((K_{\rm v}^{\frak p})^{+\vec R^-_{(n)}}\right)} = 0
\end{equation}
where $\vec R^{-}_{(n)} \to \infty$ is chosen so that $v_{(n)}((K_{\rm v}^{\frak p})^{+\vec R^-_{(n)}}) \subset (K_{\rm v}^{\frak p})^{+\vec R_{(n)}}$.
\par
On the other hand, by Property (2) above and (\ref{nearpisassump}) we have
\begin{equation}\label{almostucomp}
\lim_{n\to \infty}\Vert u\circ v_{(n)} - u \Vert_{C^1\left((K_{\rm v}^{\frak p})^{+\vec R^-_{(n)}}\right)} =0.
\end{equation}
We use (\ref{almstjequiv}) and (\ref{almostucomp}) to prove the following.
\begin{sublem}
After taking a subsequence if necessary, there exists $v' \in\Gamma_{\frak p}$ such that
$$
\lim_{n\to\infty}\Vert v_{(n)} - v' \Vert_{C^1((K^{\frak p}_{{\rm v}})^{+\vec R})} = 0
$$
for any $\vec R$.
\end{sublem}
\begin{proof}
Since $v_{(n)}$ is biholomorphic with respect to a pair of complex structures converging to $(j_{\frak p},j_{\frak p})$, we can use Gromov compactness to show that it converges in the compact $C^{\infty}$ topology outside finitely many points after taking a subsequence if necessary. Let $v'$ be the limit. By Property (2) above we have $u \circ v' = u$.
\par
On the irreducible component of $\frak x_{\frak p}$ where $u$ is not constant, we use $u \circ v' = u$ together with the fact that $v_{(n)}$ is biholomorphic to show that there is no bubble on this component. Namely $v_{(n)}$ converges everywhere on this component.
\par
The irreducible component of $\frak x_{\frak p}$ where $u$ is trivial is stable since $\frak p$ is stable.
We note that $v'$ preserves the marked points of $\frak p$. This implies that $v'$ is not a constant map on this component. Then using the fact that $v_{(n)}$ is biholomorphic we can again show that there is no bubble on this component.
\par
We thus proved that $v_{(n)}$ converges to $v'$ everywhere. It is then easy to see that $v' \in \Gamma_{\frak p}$.
\end{proof}
By replacing $(\frak Y_{2;(n),0}\cup \vec w'_{2;(n),\frak p},u'_{2;(n)},(\vec w'_{2;(n),c};c\in \frak A))$ using the action of $v' \in \Gamma_{\frak p}$, we may assume that
\begin{equation}\label{closetoid}
\lim_{n\to \infty}\Vert v_{(n)} - {\rm identity} \Vert_{C^1(K^{\frak p}_{0,{\rm v}})} =0.
\end{equation}
Then, $u'_{1;(n)}(w'_{1;(n),\frak p,i}), u'_{2;(n)}(w'_{2;(n),\frak p,i}) \in \mathcal D_{\frak p,i}$ imply
\begin{equation}
v_{(n)}(w'_{1;(n),\frak p,i}) = w'_{2;(n),\frak p,i}.
\end{equation}
\par
We next take a coordinate at infinity associated to the obstruction bundle data centered at $\frak p_c$. Then we can think of the restriction $v_{(n)} : K^{\frak p_c}_{0,{\rm v}} \to K^{\frak p_c}_{{\rm v}}$, which satisfies
\begin{equation}\label{closetoid2}
\lim_{n\to \infty}\Vert v_{(n)} - {\rm identity} \Vert_{C^1(K^{\frak p_c}_{{0,\rm v}})} = 0.
\end{equation}
(In fact, we may take $\vec R$ so that for each ${\rm v}\in C^0(\mathcal G_{\frak p_c})$ we have ${\rm v}' \in C^0(\mathcal G_{\frak p})$ such that $K_{\rm v}^{\frak p_c} \subset (K_{{\rm v}'}^{\frak p})^{+\vec R}$.)
\par
Then, $u'_{1;(n)}(w'_{1;(n),c,i}), u'_{2;(n)}(w'_{2;(n),c,i}) \in \mathcal D_{c,i}$ imply
\begin{equation}\label{closetoid299}
v_{(n)}(w'_{1;(n),c,i}) = w'_{2;(n),c,i}.
\end{equation}
Properties (1), (2) and (\ref{closetoid2}), (\ref{closetoid299}) contradict (\ref{Y1isY2}). The proof of Lemma \ref{injectivitypp} is complete.
\end{proof}
\begin{lem}\label{injectivitypp3}
If $\epsilon_0$, $\epsilon_{\frak p_c}$ are sufficiently small, then (\ref{forget2}) is a homeomorphism onto its image.
\end{lem}
\begin{proof}
It is easy to see that the map (\ref{forget2}) is continuous. It is injective by Lemma \ref{injectivitypp}. It suffices to show that the inverse is continuous. The proof of the continuity of the inverse is similar to the proof of Lemma \ref{setisopen}. We nevertheless repeat the details of the proof for completeness' sake. Let
$$
(\frak x_{\frak p_a} \cup \vec w'_{a,\frak p}, u_{\frak p_a}, (\vec w'_{a,c};c\in \frak A)) \in \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{{\rm trans}}_{\epsilon_{0},\vec T_{0}} \cap \frak s^{-1}(0)
$$
and
$$
(\frak x_{\frak p_{\infty}} \cup \vec w'_{{\infty},\frak p}, u_{\frak p_{\infty}}, (\vec w'_{{\infty},c};c\in \frak A)) \in \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{{\rm trans}}_{\epsilon_{0},\vec T_{0}} \cap \frak s^{-1}(0).
$$
We put $\frak p_{\infty} = (\frak x_{\frak p_{\infty}},u_{\frak p_{\infty}})$, $\frak p_{a} = (\frak x_{\frak p_{a}},u_{\frak p_{a}})$ and assume
\begin{equation}
\lim_{a\to \infty} \frak p_{a} = \frak p_{\infty}
\end{equation}
in $\mathcal M_{k+1,\ell}(\beta)$.
\par
By Definition \ref{convdefn}, there exist $\frak q_a, \frak q_{\infty} \in \mathcal M_{k+1,\ell+\ell'}(\beta)$ such that (\ref{Sconv2}) holds and
\begin{equation}\label{conv3b}
\frak{forget}_{(k+1;\ell+\ell'),(k+1;\ell)}(\frak q_a)= \frak p_a, \quad \frak{forget}_{(k+1;\ell+\ell'),(k+1;\ell)}(\frak q_{\infty})= \frak p_{\infty}.
\end{equation}
Let $\vec z^{+,{\rm int}}_a \subset \frak x_{\frak q_a}, \vec z^{+,{\rm int}}_{\infty} \subset \frak x_{\frak q_{\infty}}$ be the marked points of $\frak q_a, \frak q_{\infty}$ that are not marked points of $\frak p_a$ or of $\frak p_{\infty}$. By perturbing $\frak q_a$ and $\frak q_{\infty}$ a bit we may assume (\ref{notinDsss}).
\par
We consider the map $\Sigma_{\frak q_{\infty}} \to \Sigma_{\frak p_{\infty}}$ that shrinks the components which become unstable after forgetting the $(\ell+1)$-th,\dots, $(\ell + \ell')$-th marked points $\vec z^{+,{\rm int}}_{\infty}$ of $\frak x_{\frak q_{\infty}}$. By (\ref{notinDsss}) none of the points $\vec w'_{\infty,\frak p}$, $\vec w'_{\infty,c}$ are contained in the image of the components of $\Sigma_{\frak q_{\infty}}$ that we shrink. So $\vec w'_{\infty,\frak p}, \vec w'_{\infty,c} \subset \Sigma_{\frak p_{\infty}}$ may be regarded as points of $\Sigma_{\frak q_{\infty}}$.
\par
Then by extending the core if necessary we may assume that $\vec w'_{\infty,\frak p}, \vec w'_{\infty,c}$ are in the core of $\Sigma_{\frak q_{\infty}}$. Here we use the coordinate at infinity that appears in the definition of $\underset{a\to \infty}{\rm lims}\,\, \frak q_a = \frak q_{\infty}$.
\par
We remark that $u_{\frak q_{\infty}}(w'_{\infty,\frak p,i}) \in \mathcal D_{\frak p,i}$ and $u_{\frak q_{\infty}}(w'_{\infty,c,i}) \in \mathcal D_{c,i}$. We also remark that $u_{\frak q_a}$ converges to $u_{\frak q_{\infty}}$ in the $C^1$-topology on the core. Moreover $u_{\frak q_{\infty}}$ is transversal to $\mathcal D_{\frak p,i}$ (resp. $\mathcal D_{c,i}$) at $u_{\frak q_{\infty}}(w'_{\infty,\frak p,i})$ (resp. $u_{\frak q_{\infty}}(w'_{\infty,c,i})$). Therefore, for sufficiently large $a$ there exist $w''_{a,\frak p,i}, w''_{a,c,i} \in \Sigma_{\frak q_a}$ with the following properties.
\begin{enumerate}
\item $u_{\frak q_a}(w''_{a,\frak p,i}) \in \mathcal D_{\frak p,i}$.
\item $u_{\frak q_a}(w''_{a,c,i}) \in \mathcal D_{c,i}$.
\item $\lim_{a \to \infty} w''_{a,\frak p,i} = w'_{\infty,\frak p,i}$.
\item $\lim_{a \to \infty} w''_{a,c,i} = w'_{\infty,c,i}$.
\end{enumerate}
Here in (3) and (4) we use the identification of the core of $\Sigma_{\frak q_a}$ and of $\Sigma_{\frak q_{\infty}}$ induced by the coordinate at infinity that appears in the definition of $\underset{a\to \infty}{\rm lims}\,\, \frak q_a = \frak q_{\infty}$.
We send $w''_{a,\frak p,i}$ by the map $\Sigma_{\frak q_a} \to \Sigma_{\frak p_a}$ and denote it by the same symbol. We thus obtain $\vec w''_{a,\frak p} \subset \Sigma_{\frak p_a}$. The additional marked points $w''_{a,c,i}$ induce $\vec w''_{a,c} \subset \Sigma_{\frak p_a}$ in the same way.
\begin{sublem}\label{lub296lem}
$w''_{a,\frak p,i} = w'_{a,\frak p,i}$ and $w''_{a,c,i} = w'_{a,c,i}$ if $\epsilon_0$ and $\epsilon_{\frak p_c}$ are small and $a$ is large.
\end{sublem}
\begin{proof}
Note that $(\frak x_{\frak p_a}\cup \vec w_{a,\frak p_a},u_{\frak p_a})$ and $(\frak x_{\frak p_{\infty}}\cup \vec w_{{\infty},\frak p_{\infty}},u_{\frak p_{\infty}})$ are both $\epsilon_0$-close to $(\frak x_{\frak p}\cup\vec w_{\frak p},u_{\frak p})$. Then we can choose $\epsilon_0$ small so that (3) above implies
$$
d(w'_{a,\frak p,i},w''_{a,\frak p,i}) \le 3\epsilon_0
$$
for sufficiently large $a$. We can also show that
$$
d(w'_{a,c,i},w''_{a,c,i}) \le 3(o(\epsilon_0)+ \epsilon_{\frak p_c})
$$
in the same way. (Here $\lim_{\epsilon_0 \to 0}o(\epsilon_0) =0$.) On the other hand we have $u_{\frak q_a}(w'_{a,\frak p,i}) \in \mathcal D_{\frak p,i}$, $u_{\frak q_a}(w'_{a,c,i}) \in \mathcal D_{c,i}$. They imply the sublemma.
\end{proof}
\begin{rem}\label{rem298}
In the last step we need to assume that $\epsilon_{\frak p_c}$ is small. More precisely, when we take $\epsilon_{\frak p_c}$ at the stage of Definition \ref{openW+++} we require the following.
\par
If $d(w'_{c,i},w''_{c,i}) \le 4\epsilon_{\frak p_c}$, $w'_{c,i},w''_{c,i} \in \Sigma_{\frak p}$ and $u_{\frak p}(w'_{c,i}) \in \mathcal D_{c,i}$, $u_{\frak p}(w''_{c,i}) \in \mathcal D_{c,i}$, then $w'_{c,i} = w''_{c,i}$.
\par
We next choose $\epsilon_0$ so small that the same statement holds for $\frak p_a$, with $4\epsilon_{\frak p_c}$ replaced by $3\epsilon_{\frak p_c}$.
\end{rem}
Now (3) and (4) above imply
$$
\lim_{a\to\infty}(\frak x_{\frak p_a} \cup \vec w'_{a,\frak p}, u_{\frak p_a}, (\vec w'_{a,c};c\in \frak A)) = (\frak x_{\frak p_{\infty}} \cup \vec w'_{{\infty},\frak p}, u_{\frak p_{\infty}}, (\vec w'_{{\infty},c};c\in \frak A))
$$
in $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{{\rm trans}}_{\epsilon_{0},\vec T_{0}} \cap \frak s^{-1}(0)$ as required.
\end{proof}
The proof of Proposition \ref{charthomeo} is complete.
\end{proof}
\begin{proof}[Proof of Lemma \ref{transpermutelem}]
Lemma \ref{transpermutelem} is proved in the same way as Lemma \ref{injectivitypp}, except for the following point. We remark that at the stage when we state Lemma \ref{transpermutelem} we had not yet proved Theorems \ref{gluethm3} and \ref{exdecayT33}. In fact, to fix the obstruction bundle $E_c$ we used Lemma \ref{transpermutelem}. However the argument here is not circular, for the following reason.
\par
When we prove Lemma \ref{transpermutelem}, we take an obstruction bundle data centered at $\frak p$ only, the same point as the one where we start the gluing construction. We use the obstruction bundle induced by this obstruction bundle data to go through the gluing argument (the proof of Theorems \ref{gluethm3} and \ref{exdecayT33}). We do not need the conclusion of Lemma \ref{transpermutelem} for the gluing argument. Then we obtain $\text{\rm Glu}$. We use this map to go through the proof of Lemma \ref{injectivitypp} and prove Lemma \ref{transpermutelem}.
\end{proof}
\begin{rem}\label{rem:2111}
In Definition \ref{openW+++} we mentioned that we prove openness of the set $\frak W^+(\frak p)$ in Subsection \ref{cutting}. Indeed, it follows from Lemma \ref{setisopen}. We remark that openness of $\frak W^+(\frak p)$ was used to define the set $\frak C(\frak p)$ and so was used in the proof of Theorems \ref{gluethm3} and \ref{exdecayT33}.
However the argument is not circular, for the same reason as we explained in the proof of Lemma \ref{transpermutelem} above.
\end{rem}
\par\medskip
\section{Construction of Kuranishi chart}
\label{chart}

In Lemma \ref{fraAforget1}, Proposition \ref{forgetstillstable} and Lemma \ref{transstratasmf}, we discussed {\it strata-wise} differentiable structures of the spaces $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B) ^{\vec w^-_{\frak p},\frak B^-}_{\epsilon_{0},\vec T_{0}}$ and $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{{\rm trans}}_{\epsilon_{0},\vec T_{0}}$ and maps among them. These spaces are actually differentiable manifolds with corners and the maps are differentiable maps between them. As we mentioned in \cite[pages 771--773]{fooo:book1}, this is a consequence of the exponential decay estimates (Theorems \ref{exdecayT} and \ref{exdecayT33}). We first discuss this point in detail here.
\par
Let $V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;\epsilon_1)$ be as in (\ref{2193}). We put
\begin{equation}
\aligned
&V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;\epsilon_1)^{\vec w^-_{\frak p},\frak B^-}\\
&= \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B) ^{\vec w^-_{\frak p},\frak B^-}_{\epsilon_{2},\vec T_{0}} \cap V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;\epsilon_1),
\endaligned
\end{equation}
\begin{equation}
\aligned
&V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\epsilon_1)^{\rm trans}\\
&= \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{\rm trans}_{\epsilon_{2},\vec T_{0}} \cap V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\epsilon_1).
\endaligned
\end{equation}
(See Definitions \ref{def7289}, \ref{defn9797}.) We note that the right hand sides are independent of $\epsilon_2$ and $\vec T_0$ if $\epsilon_1$ is sufficiently small.
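The mechanism by which such exponential decay estimates upgrade strata-wise smoothness to smoothness up to the corner can be illustrated by an elementary one-variable model; the function $f$ below is introduced only for this illustration and appears nowhere else in this section. Suppose $f : (T_0,\infty] \to \mathbb R$ satisfies $f(\infty) = 0$ and
$$
\left\vert \frac{d^k f}{dT^k}(T) \right\vert \le C_k e^{-\delta T}, \qquad 0 \le k \le m,
$$
for some $C_k, \delta > 0$. Then $g(s) = f(1/s)$, extended by $g(0) = 0$, is of $C^m$-class on $[0,1/T_0)$. Indeed, by the chain rule $d^k g/ds^k$ is a finite sum of terms of the form $P(s^{-1})\,(d^i f/dT^i)(1/s)$ with $i \le k$ and $P$ a polynomial, and $s^{-N}e^{-\delta/s} \to 0$ as $s \to 0^+$ for every $N$. The change of variable $s_{\rm e} = 1/T_{\rm e}$ in (\ref{2294}) below plays exactly this role.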
By Proposition \ref{forgetstillstable} (1) and Lemma \ref{transstratasmf}, $V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;\epsilon_1)^{\vec w^-_{\frak p},\frak B^-}$ and $V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\epsilon_1)^{\rm trans}$ are $C^m$-submanifolds.
\par
The next proposition says that the thickened moduli spaces $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B) ^{\vec w^-_{\frak p},\frak B^-}_{\epsilon_{0},\vec T_{0}}$ and $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{\rm trans}_{\epsilon_{0},\vec T_{0}}$ are graphs of the maps ${\rm End}_{\vec w^-_{\frak p},\frak B^-}$ and ${\rm End}_{\rm trans}$, which enjoy exponential decay estimates.
\begin{prop}\label{graphdecaythem}
There exist strata-wise $C^m$-maps
$$
\aligned
{\rm End}_{\vec w^-_{\frak p},\frak B^-} : &V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;\epsilon_1)^{\vec w^-_{\frak p},\frak B^-} \times (\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1)\\
&\to V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;\epsilon_1)
\endaligned
$$
and
$$
\aligned
{\rm End}_{\rm trans} : V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\epsilon_1)^{\rm trans} &\times (\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1)\\
&\to V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\epsilon_0)
\endaligned
$$
with the following properties.
\begin{enumerate} \item $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B) ^{\vec w^-_{\frak p},\frak B^-}_{\epsilon_{0},\vec T_{0}}$ is described by the map ${\rm End}_{\vec w^-_{\frak p},\frak B^-}$ as follows: $$\aligned &\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B) ^{\vec w^-_{\frak p},\frak B^-}_{\epsilon_{0},\vec T_{0}}\\ &= \bigg\{ {\rm Glu}({\rm End}_{\vec w^-_{\frak p},\frak B^-}(\frak q,(\vec T,\vec\theta)), \vec T,\vec\theta) \\ &\qquad\mid (\frak q,(\vec T,\vec\theta)) \in V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;\epsilon_1)^{\vec w^-_{\frak p},\frak B^-} \times (\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1) \bigg\}. \endaligned$$ We also have $$\aligned &\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{\rm trans}_{\epsilon_{0},\vec T_{0}}\\ &= \bigg\{ {\rm Glu}({\rm End}_{\rm trans}(\frak q,(\vec T,\vec\theta)), \vec T,\vec\theta) \\ &\qquad\mid (\frak q,(\vec T,\vec\theta)) \in V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\epsilon_1)^{\rm trans} \times (\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1) \bigg\}. \endaligned$$ \item The maps ${\rm End}_{\vec w^-_{\frak p},\frak B^-}$ and ${\rm End}_{\rm trans}$ enjoy the following exponential decay estimate. 
\begin{equation}
\left\Vert \nabla_{\frak q}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}} {\rm End}_{\vec w^-_{\frak p},\frak B^-}\right\Vert_{C^0} < C_{16,m,\vec R}e^{-\delta' (\vec k_{T}\cdot \vec T+\vec k_{\theta}\cdot \vec T^{\rm c})},
\end{equation}
\begin{equation}
\left\Vert \nabla_{\frak q}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}} {\rm End}_{\rm trans}\right\Vert_{C^0} < C_{16,m,\vec R}e^{-\delta' (\vec k_{T}\cdot \vec T+\vec k_{\theta}\cdot \vec T^{\rm c})},
\end{equation}
if $n + \vert \vec k_{T}\vert+\vert \vec k_{\theta}\vert \le m$. Here $\nabla_{\frak q}^n$ denotes the $n$-th derivative in the directions of the parameter space $V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;\epsilon_1)^{\vec w^-_{\frak p},\frak B^-}$ or of the parameter space $V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\epsilon_1)^{\rm trans}$.
\end{enumerate}
\end{prop}
\begin{proof}
We prove the estimate for the case of $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B) ^{\vec w^-_{\frak p},\frak B^-}_{\epsilon_{0},\vec T_{0}}$. The other case is entirely similar.
\par
We consider the evaluation map (\ref{evatforgottenmark})
\begin{equation}\label{forgetmoikkai}
\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon_{0},\vec T_{0}} \to X^{(\ell_{\frak p} - \ell^-_{\frak p}) + \sum_{c\in \frak B \setminus \frak B^-}\ell_c}
\end{equation}
and compose it with (\ref{2193})
$$
\aligned
\text{\rm Glu} : V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;\epsilon_1) &\times (\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1) \\
&\to \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon_{0},\vec T_{0}}
\endaligned
$$
to obtain
\begin{equation}\label{form314}
\aligned
{\rm ev}_{\vec w^-_{\frak p},\frak B^-} : V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;\epsilon_1) &\times (\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty]\times \vec S^1) \\
&\to X^{(\ell_{\frak p} - \ell^-_{\frak p}) + \sum_{c\in \frak B \setminus \frak B^-}\ell_c}.
\endaligned
\end{equation}
\begin{lem}\label{evalexdecay11}
The map ${\rm ev}_{\vec w^-_{\frak p},\frak B^-}$ enjoys the following exponential decay estimate.
\begin{equation}
\left\Vert \nabla_{\rho}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}} {\rm ev}_{\vec w^-_{\frak p},\frak B^-}\right\Vert_{C^0} < C_{17,m,\vec R}e^{-\delta' (\vec k_{T}\cdot \vec T+\vec k_{\theta}\cdot \vec T^{\rm c})},
\end{equation}
if $n + \vert \vec k_{T}\vert+\vert \vec k_{\theta}\vert \le m$. Here $\nabla_{\rho}^n$ denotes the $n$-th derivative in the directions of the parameter space $V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;\epsilon_1)$.
\end{lem} \begin{proof} We remark that (\ref{form314}) factors through \begin{equation}\label{22942} \aligned \text{\rm Glures} : &V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;\epsilon_1) \times (\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1)\\ &\to \prod_{{\rm v}\in C^0(\mathcal G_{\frak p})}L^2_{m}((K_{\rm v}^{+\vec R},K_{\rm v}^{+\vec R}\cap \partial \Sigma_{\frak p,{\rm v}}),(X,L)). \endaligned \end{equation} In fact we may take $\vec R$ so that all the marked points are in the extended core $\bigcup_{{\rm v}\in C^0(\mathcal G_{\frak p})}K_{\rm v}^{+\vec R}$. Therefore the lemma is an immediate consequence of Theorem \ref{exdecayT33}. \end{proof} By definition, we have: $$ V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;\epsilon_0)^{\vec w^-_{\frak p},\frak B^-} = {\rm ev}_{\vec w^-_{\frak p},\frak B^-}^{-1} \bigg( \prod_{i=\ell^-_{\frak p}+1}^{\ell_{\frak p}}\mathcal D_{\frak p,i} \times \prod_{c \in \frak B \setminus \frak B^-}\prod_{i=1}^{\ell_{c}}\mathcal D_{c,i} \bigg). $$ (See the proof of Proposition \ref{forgetstillstable}.) Proposition \ref{graphdecaythem} is then a consequence of Lemma \ref{evalexdecay11} and the implicit function theorem. \end{proof} We next change the coordinate of $(\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1)$. The original coordinates are $((T_{{\rm e}}),(\theta_{\rm e})) \in (\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1)$. \begin{defn} We define \begin{equation}\label{2294} \aligned s_{{\rm e}} = \frac{1}{T_{{\rm e}}} \in \left[0,\frac{1}{T_{{\rm e},0}}\right), \qquad &\text{if ${\rm e} \in C^1_{{\rm o}}(\mathcal G_{\frak p})$}, \\ \frak z_{{\rm e}} = \frac{1}{T_{{\rm e}}}\exp(2\pi\sqrt{-1}\theta_{\rm e}) \in D^2\left(\frac{1}{T_{{\rm e},0}}\right), \qquad &\text{if ${\rm e} \in C^1_{{\rm c}}(\mathcal G_{\frak p})$}. \endaligned \end{equation} We also put $s_{{\rm e}} = 0$ (resp. 
$\frak z_{{\rm e}} = 0$) if $T_{\rm e} = \infty$. Here we put $ D^2(r) = \{ z \in \C \mid \vert z\vert < r\}. $
\end{defn}
By this change of coordinates, $(\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1)$ is identified with
\begin{equation}\label{2295}
\prod_{{\rm e} \in C^1_{{\rm o}}(\mathcal G_{\frak p})}\left[0,\frac{1}{T_{{\rm e},0}}\right) \times \prod_{{\rm e} \in C^1_{{\rm c}}(\mathcal G_{\frak p})} D^2\left(\frac{1}{T_{{\rm e},0}}\right).
\end{equation}
\begin{defn}
We denote the right hand side of (\ref{2295}) by $[0,(\vec T^{\rm o}_0)^{-1}) \times D^2((\vec T^{\rm c}_0)^{-1})$.
\end{defn}
\begin{rem}\label{rcstratifiation}
The space $[0,(\vec T^{\rm o}_0)^{-1}) \times D^2((\vec T^{\rm c}_0)^{-1})$ has a stratification that is induced by the stratifications
$$[0,1/T_{{\rm e},0}) = \{0\} \cup (0,1/T_{{\rm e},0}) $$
and
$$ D^2(1/T_{{\rm e},0}) = \{0\} \cup (D^2(1/T_{{\rm e},0}) \setminus \{0\} ). $$
This stratification corresponds to the stratification of $(\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1)$ that we defined before, under the coordinate change (\ref{2294}).
\end{rem}
We note that $[0,(\vec T^{\rm o}_0)^{-1}) \times D^2((\vec T^{\rm c}_0)^{-1})$ is a smooth manifold with corners. The above stratification is finer than the stratification associated to its structure of a manifold with corners.
\par
We then regard $\text{\rm Glu}$ as a map
\begin{equation}\label{2296}
\aligned
\text{\rm Glu}' : V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;\epsilon_1) &\times [0,(\vec T^{\rm o}_0)^{-1}) \times D^2((\vec T^{\rm c}_0)^{-1})\\
&\to \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B) _{\epsilon_{0},\vec T_{0}}.
\endaligned
\end{equation}
\begin{cor}
The inverse image
$$
(\text{\rm Glu}')^{-1}\left( \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B) ^{\vec w^-_{\frak p},\frak B^-}_{\epsilon_{0},\vec T_{0}}\right)
$$
is a $C^m$-submanifold of $V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B;\epsilon_1) \times [0,(\vec T^{\rm o}_0)^{-1}) \times D^2((\vec T^{\rm c}_0)^{-1})$. It is transversal to the strata of the stratification mentioned in Remark \ref{rcstratifiation}.
\par
The same holds for $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{\rm trans}_{\epsilon_{0},\vec T_{0}}$.
\end{cor}
This is an immediate consequence of Proposition \ref{graphdecaythem}.
\begin{rem}
We can actually promote this $C^m$-structure to a $C^{\infty}$-structure, as we will explain in Subsection \ref{toCinfty}. The same remark applies to all the constructions of Subsections \ref{chart}-\ref{kstructure}.
\end{rem}
\begin{defn}\label{defVVVVV}
We put
$$
V_{k+1,\ell}((\beta;\frak p;\frak A);\epsilon_{0},\vec T_{0}) = \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{\rm trans}_{\epsilon_{0},\vec T_{0}}
$$
and regard it as a $C^m$-manifold with corners so that $\text{\rm Glu}'$ is a $C^m$-diffeomorphism.
\end{defn}
\begin{lem}
The action of $\Gamma_{\frak p}$ on $V_{k+1,\ell}((\beta;\frak p;\frak A);\epsilon_{0},\vec T_{0})$ is of $C^m$-class.
\end{lem}
\begin{proof}
Note that the $\Gamma_{\frak p}$-action on $(\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1)$ is by exchanging the factors associated to the edges $\rm e$ and by the rotation of the $S^1$ factors. Therefore it becomes a smooth action on $[0,(\vec T^{\rm o}_0)^{-1}) \times D^2((\vec T^{\rm c}_0)^{-1})$. By construction $\text{\rm Glu}'$ is $\Gamma_{\frak p}$-equivariant. The lemma follows.
\end{proof}
The orbifold $V_{k+1,\ell}((\beta;\frak p;\frak A);\epsilon_{0},\vec T_{0})/\Gamma_{\frak p}$ is a chart of the Kuranishi neighborhood of $\frak p$ which we define in this section. Note that we may assume that the action of $\Gamma_{\frak p}$ on $V_{k+1,\ell}((\beta;\frak p;\frak A);\epsilon_{0},\vec T_{0})$ is effective, by enlarging the obstruction bundle if necessary.
\par\smallskip
We next define an obstruction bundle. Recall that we fixed a complex vector space $E_c$ for each $c \in \frak A$. ($E_c = \bigoplus_{{\rm v} \in C^0(\mathcal G_{\frak p_c})}E_{c,{\rm v}}$ and $E_{c,{\rm v}}$ is a subspace of $ \Gamma_0(\text{\rm Int}\,K^{\rm obst}_{\rm v}; u_{\frak p_c}^*TX \otimes \Lambda^{01}) $.) By Definition \ref{obbundeldata} (5), $E_c$ carries a $\Gamma_{\frak p_c}$-action. We have $\Gamma_{\frak p} \subset \Gamma_{\frak p_c}$, because $\frak p\cup \vec w^{\frak p}_c$ is $\epsilon_c$-close to $\frak p_c \cup \vec w_{\frak p_c}$. Therefore we have a $\Gamma_{\frak p}$-action on
$$
E_{\frak A} = \bigoplus_{c\in \frak A} E_c.
$$
\begin{defn}\label{obbundle1}
The obstruction bundle of our Kuranishi chart is the bundle
\begin{equation}
\frac{\left( V_{k+1,\ell}((\beta;\frak p;\frak A);\epsilon_{0},\vec T_{0}) \times E_{\frak A}\right)}{\Gamma_{\frak p}} \to \frac{\left( V_{k+1,\ell}((\beta;\frak p;\frak A);\epsilon_{0},\vec T_{0}) \right)}{\Gamma_{\frak p}}.
\end{equation}
\end{defn}
We next define the Kuranishi map, which is a section of the obstruction bundle. Let $\frak q^+= (\frak x_{\frak q},u_{\frak q};(\vec w^{\frak q}_{c}; c\in \frak A)) \in \mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{\rm trans}_{\epsilon_{0},\vec T_{0}}$. By definition we have
$$
\overline\partial u_{\frak q} \in \mathcal E_{\frak A}(\frak q^+).
$$
By Definition \ref{defEc} we have an isomorphism (\ref{Ivpdefn2211})
\begin{equation}
I^{{\rm v},\frak p_c}_{(\frak y_c,u_c),(\frak x_{\frak q}\cup \vec w^{\frak q}_{c},u_{\frak q})} : E_{\frak p_c,\rm v}(\frak y_c,u_c) \to \Gamma_0(\text{\rm Int}\,K^{\rm obst}_{\rm v}; (u_{\frak q})^*TX \otimes \Lambda^{01}).
\end{equation}
The direct sum of the right hand side over $c \in \frak A$ and ${\rm v} \in C^0(\mathcal G_{\frak p_c})$ is by definition $\mathcal E_{\frak A}(\frak q^+)$. Sending the element $\overline\partial u_{\frak q}$ by the inverse of $I^{{\rm v},\frak p_c}_{(\frak y_c,u_c),(\frak x_{\frak q}\cup \vec w^{\frak q}_{c},u_{\frak q})}$ we obtain an element
\begin{equation}\label{2299}
\bigoplus_{c \in \frak A \atop {\rm v} \in C^0(\mathcal G_{\frak p_c})}{I^{{\rm v},\frak p_c}_{(\frak y_c,u_c),(\frak x_{\frak q}\cup \vec w^{\frak q}_{c},u_{\frak q})}}^{-1}(\overline\partial u_{\frak q}) \in E_{\frak A}.
\end{equation}
\begin{defn}
We denote the element (\ref{2299}) by $\frak s(\frak q^+)$. The section $\frak s$ is called the {\it Kuranishi map}.
\end{defn}
\begin{lem}
The section $\frak s$ defined above is a $C^m$-section of the obstruction bundle in Definition \ref{obbundle1} and is $\Gamma_{\frak p}$-equivariant.
\end{lem}
\begin{proof}
The $\Gamma_{\frak p}$-equivariance is immediate from the construction.
\par
To prove that $\frak s$ is of $C^m$-class, we first remark that $\frak s$ extends to the thickened moduli space $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)_{\epsilon_{0},\vec T_{0}}$ by the same formula. We consider the composition of $\frak q^+ \mapsto \frak s(\frak q^+)$ with the map $\text{\rm Glu}'$ (\ref{2296}). Since $K_{\rm v}^{\rm obst}$ lies in the core, this composition factors through $\text{\rm Glures}$ (\ref{22942}). (Here we identify $(\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1)$ with $[0,(\vec T^{\rm o}_0)^{-1}) \times D^2((\vec T^{\rm c}_0)^{-1})$.)
Therefore by Theorem \ref{exdecayT33} we have \begin{equation} \left\Vert \nabla_{\rho}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}}(\frak s\circ\text{\rm Glu}) \right\Vert_{C^0} < C_{18,m,\vec R}e^{-\delta' (\vec k_{T}\cdot \vec T+\vec k_{\theta}\cdot \vec T^{\rm c})}, \end{equation} if $n + \vert \vec k_{T}\vert+\vert \vec k_{\theta}\vert \le m$. Therefore $\frak s$ is of $C^{m}$-class. \end{proof} We note that the zero set of the section $\frak s$ coincides with the set $$\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A) ^{{\rm trans}}_{\epsilon_{0},\vec T_{0}} \cap \frak s^{-1}(0)$$ which we defined in Definition \ref{zerosetofspreddf}. \begin{defn}\label{defpsi} We define a local parametrization map $$ \psi : \frac{\frak s^{-1}(0)}{\Gamma_{\frak p}} \to \mathcal M_{k+1,\ell}(\beta) $$ to be the map (\ref{forget2}). \end{defn} Proposition \ref{charthomeo} implies that $\psi$ is a homeomorphism to an open neighborhood of $\frak p$. \par In summary we have proved the following: \begin{prop}\label{chartprop} Let $\frak p \in \mathcal M_{k+1,\ell}(\beta)$. We take a stabilization data at $\frak p$ and $\frak A \subset \frak C(\frak p)$. ($\frak A \ne \emptyset$.) Then there exists a Kuranishi neighborhood of $\mathcal M_{k+1,\ell}(\beta)$ at $\frak p$. Namely : \begin{enumerate} \item An (effective) orbifold $V_{k+1,\ell}((\beta;\frak p;\frak A);\epsilon_{0},\vec T_{0})/\Gamma_{\frak p}$. \item A vector bundle $$ \frac{\left( V_{k+1,\ell}((\beta;\frak p;\frak A);\epsilon_{0},\vec T_{0}) \times E_{\frak A}\right)}{\Gamma_{\frak p}} \to \frac{\left( V_{k+1,\ell}((\beta;\frak p;\frak A);\epsilon_{0},\vec T_{0}) \right)}{\Gamma_{\frak p}} $$ on it. \item Its section $\frak s$ of $C^m$-class. 
\item A homeomorphism
$$
\psi : \frac{\frak s^{-1}(0)}{\Gamma_{\frak p}} \to \mathcal M_{k+1,\ell}(\beta)
$$
onto an open neighborhood of $\frak p$ in $\mathcal M_{k+1,\ell}(\beta)$.
\end{enumerate}
\end{prop}
Before closing this subsection, we prove that the evaluation maps on $\mathcal M_{k+1,\ell}(\beta)$ extend to our Kuranishi neighborhood as $C^m$-maps.
\par
We consider the map
\begin{equation}\label{evaluationext}
{\rm ev} : V_{k+1,\ell}((\beta;\frak p;\frak A);\epsilon_{0},\vec T_{0}) =\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)^{\rm trans}_{\epsilon_{0},\vec T_{0}} \to L^{k+1} \times X^{\ell}
\end{equation}
that is the evaluation map at the $0$-th,\dots,$k$-th boundary marked points and the $1$st,\dots,$\ell$-th interior marked points.
\begin{lem}
The map (\ref{evaluationext}) is a $C^m$-map and is $\Gamma_{\frak p}$-equivariant.
\end{lem}
\begin{proof}
We first remark that (\ref{evaluationext}) extends to $\mathcal M_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)_{\epsilon_{0},\vec T_{0}}$. Its composition with $\text{\rm Glu}$ factors through \text{\rm Glures} (\ref{22942}). Therefore by Theorem \ref{exdecayT33} we have
\begin{equation}
\left\Vert \nabla_{\rho}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}}({\rm ev}\circ\text{\rm Glu}) \right\Vert_{C^0} < C_{19,m,\vec R}e^{-\delta' (\vec k_{T}\cdot \vec T+\vec k_{\theta}\cdot \vec T^{\rm c})},
\end{equation}
if $n + \vert \vec k_{T}\vert+\vert \vec k_{\theta}\vert \le m$. Therefore ${\rm ev}$ is of $C^{m}$-class. The $\Gamma_{\frak p}$-equivariance is immediate from the definition.
\end{proof}
\begin{rem}
Proposition \ref{chartprop} holds and can be proved when we replace $\mathcal M_{k+1,\ell}(\beta)$ by $\mathcal M_{\ell}^{\rm cl}(\alpha)$. The proof is the same.
\end{rem}
\par\medskip
\section{Coordinate change - I: Change of the stabilization and of the coordinate at infinity}
\label{changeofmarking}

In this subsection and the next, we define coordinate changes between the Kuranishi neighborhoods we constructed in the last subsection and prove a version of compatibility of the coordinate changes. In Subsection \ref{kstructure} we will adjust the sizes of the Kuranishi neighborhoods and of the domains of the coordinate changes so that they literally satisfy the definition of the Kuranishi structure.
\par
We begin by recalling the facts we have proved so far. We take a finite set $\{\frak p_c \mid c \in \frak C\} \subset \mathcal M_{k+1,\ell}(\beta)$ and fix an obstruction bundle data $\frak E_{\frak p_c}$ centered at each $\frak p_c$.
\par
Let $\frak w_{\frak p}$ be a stabilization data at $\frak p\in \mathcal M_{k+1,\ell}(\beta)$. The stabilization data $\frak w_{\frak p}$ consists of the following:
\begin{enumerate}
\item The additional marked points $\vec w_{\frak p}$ of $\frak x_{\frak p}$.
\item The codimension 2 submanifolds $\mathcal D_{\frak p,i}$.
\item A coordinate at infinity of $\frak x_{\frak p} \cup \vec w_{\frak p}$.
\end{enumerate}
By an abuse of notation we denote the coordinate at infinity also by $\frak w_{\frak p}$ from now on. Let $\ell_{\frak p} = \# \vec w_{\frak p}$ and $\frak A \subset \frak C(\frak p)$. We always assume that $\frak A \ne \emptyset$.
\par
By taking a sufficiently small $\epsilon_0$ and a sufficiently large $\vec T_{0}$, we obtained a Kuranishi neighborhood at $\frak p$ by Proposition \ref{chartprop}. The Kuranishi neighborhood is $V_{k+1,\ell}((\beta;\frak p;\frak A);\epsilon_{0},\vec T_{0})/\Gamma_{\frak p}$. This Kuranishi neighborhood depends on $\epsilon_0, \vec{T}_0$ as well as $\frak w_{\frak p}$. During the construction of the coordinate change, we need to shrink this Kuranishi neighborhood several times.
We use a pair of positive numbers $(\frak o,\mathcal T)$ to specify the size as follows. We consider \begin{equation}\label{gluemapsss} \aligned \text{\rm Glu} : V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\epsilon_1) &\times (\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1) \\ &\to \mathcal M^{{\frak w}_{\frak p}}_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)_{\epsilon_{0},\vec T_{0}}. \endaligned \end{equation} \begin{rem} Here and hereafter we include the symbol ${\frak w}_{\frak p}$ in the notation of the thickened moduli space, to show the stabilization data at $\frak p$ that we use to define it. In fact the dependence of the thickened moduli space on the stabilization data is an important point to study in this subsection. \end{rem} $V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\epsilon_1)$ is a smooth manifold. We fix a metric on it. Let \begin{equation}\label{gluemapsssdom} B_{\frak o}^{{\frak w}_{\frak p}}(\frak p;V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\epsilon_1)) \end{equation} be the $\frak o$ neighborhood of $\frak p$ in this space. We put $T_{{\rm e},0} = \mathcal T$ for all $\rm e$ and denote it by $\vec{\mathcal T}$. Since this space is independent of $\epsilon_1$ if $\frak o$ is sufficiently small compared to $\epsilon_1$ we omit $\epsilon_1$ from the notation. We consider \begin{equation}\label{gluemapsssdom2} B^{{\frak w}_{\frak p}}_{\frak o}(\frak p;V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A))\times (\vec{\mathcal T},\infty] \times ((\vec{\mathcal T},\infty] \times \vec S^1). \end{equation} \begin{defn} We say that $(\frak o,\mathcal T)$ is $\frak w_{\frak p}$ {\it admissible} if the domain of the map (\ref{gluemapsss}) includes (\ref{gluemapsssdom2}). We say it is admissible if it is clear which stabilization data we take. 
\par
We say $(\frak o,\mathcal T)>(\frak o',\mathcal T')$ if $\frak o>\frak o'$ and $1/\mathcal T > 1/\mathcal T'$.
\end{defn}
\begin{defn}
We denote by $V(\frak p,\frak w_{\frak p};(\frak o,\mathcal T);\frak A)$ the intersection of the image of the set (\ref{gluemapsssdom2}) by the map (\ref{gluemapsss}) and $\mathcal M^{{\frak w}_{\frak p}}_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)_{\epsilon_{0},\vec T_{0}}^{\rm trans}$.
\par
The restrictions of the obstruction bundle, the Kuranishi map and the map $\psi$ to $V(\frak p,\frak w_{\frak p};(\frak o,\mathcal T);\frak A)$ are written as $\mathcal E_{\frak p,\frak w_{\frak p};(\frak o,\mathcal T);\frak A}$, $\frak s_{\frak p,\frak w_{\frak p};(\frak o,\mathcal T);\frak A}$ and $\psi_{\frak p,\frak w_{\frak p};(\frak o,\mathcal T);\frak A}$, respectively.
\par
They define a Kuranishi neighborhood. Sometimes we denote this Kuranishi neighborhood by $V(\frak p,\frak w_{\frak p};(\frak o,\mathcal T);\frak A)$, by an abuse of notation.
\end{defn}
The main result of this subsection is the following.
\begin{prop}\label{prop2117}
Let $\frak w^{(j)}_{\frak p}$, ($j=1,2$) be stabilization data at $\frak p$ and $\frak A \supseteq \frak A^{(1)} \supseteq \frak A^{(2)} \ne \emptyset$. Suppose $(\frak o^{(1)},\mathcal T^{(1)})$ is $\frak w^{(1)}_{\frak p}$ admissible.
\par
Then there exists $(\frak o^{(2)}_0,\mathcal T^{(2)}_0)$ such that if $(\frak o^{(2)},\mathcal T^{(2)}) < (\frak o^{(2)}_0,\mathcal T^{(2)}_0)$ then $(\frak o^{(2)},\mathcal T^{(2)})$ is $\frak w_{\frak p}^{(2)}$ admissible and we have a coordinate change from $V(\frak p,\frak w^{(2)}_{\frak p};(\frak o^{(2)},\mathcal T^{(2)});\frak A^{(2)})$ to $V(\frak p,\frak w^{(1)}_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)})$. Namely there exists $(\phi_{12},\widehat{\phi}_{12})$ with the following properties.
\begin{enumerate}
\item
$$
\phi_{12} : V(\frak p,\frak w^{(2)}_{\frak p};(\frak o^{(2)},\mathcal T^{(2)});\frak A^{(2)}) \to V(\frak p,\frak w^{(1)}_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)})
$$
is a $\Gamma_{\frak p}$-equivariant $C^m$ embedding.
\item
$$
\widehat{\phi}_{12} : \mathcal E_{\frak p,\frak w^{(2)}_{\frak p};(\frak o^{(2)},\mathcal T^{(2)});\frak A^{(2)}} \to \mathcal E_{\frak p,\frak w^{(1)}_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)}}
$$
is a $\Gamma_{\frak p}$-equivariant embedding of vector bundles of $C^m$-class that covers $\phi_{12}$.
\item The next equality holds.
$$
\frak s_{\frak p,\frak w^{(1)}_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)}} \circ \phi_{12} = \widehat{\phi}_{12} \circ \frak s_{\frak p,\frak w^{(2)}_{\frak p};(\frak o^{(2)},\mathcal T^{(2)});\frak A^{(2)}}.
$$
\item The next equality holds on $\frak s_{\frak p,\frak w^{(2)}_{\frak p};(\frak o^{(2)},\mathcal T^{(2)});\frak A^{(2)}}^{-1}(0)$.
$$
\tilde\psi_{\frak p,\frak w^{(1)}_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)}} \circ \phi_{12} = \tilde\psi_{\frak p,\frak w^{(2)}_{\frak p};(\frak o^{(2)},\mathcal T^{(2)});\frak A^{(2)}}.
$$
Here $\tilde\psi_{\frak p,\frak w^{(1)}_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)}}$ is the composition of $\psi_{\frak p,\frak w^{(1)}_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)}}$ and the projection map
$$
V(\frak p,\frak w^{(1)}_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)}) \to V(\frak p,\frak w^{(1)}_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)}) / \Gamma_{\frak p}.
$$
The definition of $\tilde\psi_{\frak p,\frak w^{(2)}_{\frak p};(\frak o^{(2)},\mathcal T^{(2)});\frak A^{(2)}}$ is similar.
\item Let $\frak q^{(2)} \in V(\frak p,\frak w^{(2)}_{\frak p};(\frak o^{(2)},\mathcal T^{(2)});\frak A^{(2)})$ and $\frak q^{(1)} = \phi_{12}(\frak q^{(2)})$.
Then the derivative of $\frak s_{\frak p,\frak w^{(1)}_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)}}$ induces an isomorphism
$$
\frac
{T_{\frak q^{(1)}}V(\frak p,\frak w^{(1)}_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)})}
{T_{\frak q^{(2)}}V(\frak p,\frak w^{(2)}_{\frak p};(\frak o^{(2)},\mathcal T^{(2)});\frak A^{(2)})}
\cong
\frac
{\left(\mathcal E_{\frak p,\frak w^{(1)}_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)}}\right)_{\frak q^{(1)}}}
{\left(\mathcal E_{\frak p,\frak w^{(2)}_{\frak p};(\frak o^{(2)},\mathcal T^{(2)});\frak A^{(2)}}\right)_{\frak q^{(2)}}}.
$$
\end{enumerate}
\end{prop}
\begin{proof}
We divide the proof into several cases.
\par\medskip
\noindent{\bf Case 1}: The case $\vec w_{\frak p}^{(1)} = \vec w_{\frak p}^{(2)}$, $\mathcal D^{(1)}_{\frak p,i} = \mathcal D^{(2)}_{\frak p,i}$ and $\frak A^{(1)} = \frak A^{(2)}$.
\par
This is the case when only the coordinate at infinity $\frak w_{\frak p}^{(1)}$ is different from $\frak w_{\frak p}^{(2)}$. A part of the data of the coordinate at infinity is a fiber bundle (\ref{fibrationsigma}) that is:
\begin{equation}\label{fibrationsigma2}
\pi : \frak M^{(j)}_{(\frak x_{\frak p} \cup \vec w_{\frak p})_{\rm v}} \to \frak V^{(j)}((\frak x_{\frak p} \cup \vec w_{\frak p})_{\rm v})
\end{equation}
where $\frak V^{(j)}((\frak x_{\frak p} \cup \vec w_{\frak p})_{\rm v})$ is a neighborhood of $(\frak x_{\frak p} \cup \vec w_{\frak p})_{\rm v}$ in the Deligne-Mumford moduli space $\mathcal M_{k_{\rm v}+1,\ell_{\rm v}}$ or $\mathcal M_{\ell_{\rm v}}^{\rm cl}$. (${\rm v} \in C^0(\mathcal G_{\frak x_{\frak p} \cup \vec w_{\frak p}})$.)
We choose an open neighborhood $\frak V^{(2)-}((\frak x_{\frak p} \cup \vec w_{\frak p})_{\rm v}) \subset \frak V^{(2)}((\frak x_{\frak p}\cup \vec w_{\frak p})_{\rm v})$ of $(\frak x_{\frak p}\cup \vec w_{\frak p})_{\rm v}$ so that
\begin{equation}
\frak V^{(2)-}((\frak x_{\frak p} \cup \vec w_{\frak p})_{\rm v})\subset \frak V^{(1)}((\frak x_{\frak p} \cup \vec w_{\frak p})_{\rm v}).
\end{equation}
We put $\frak M^{(2) - }_{(\frak x_{\frak p}\cup \vec w_{\frak p})_{\rm v}} = \pi^{-1}(\frak V^{(2)-}((\frak x_{\frak p} \cup \vec w_{\frak p})_{\rm v}))$. Then there exists a unique bundle map
$$
\Phi_{12} : \frak M^{(2) - }_{(\frak x_{\frak p}\cup \vec w_{\frak p})_{\rm v}} \to \frak M^{(1)}_{(\frak x_{\frak p}\cup \vec w_{\frak p})_{\rm v}}
$$
that preserves the marked points and is a fiberwise biholomorphic map. This is a consequence of stability. By extending the core of ${\frak w}^{(2)}_{\frak p}$ we may assume
\begin{equation}\label{extendcore2banme}
\Phi_{12}(\frak K^{(2) - }_{(\frak x_{\frak p}\cup \vec w_{\frak p})_{\rm v}} ) \supset (\frak K^{(1)}_{(\frak x_{\frak p}\cup \vec w_{\frak p})_{\rm v}}) \cap \pi^{-1}(\frak V^{(2)-}((\frak x_{\frak p} \cup \vec w_{\frak p})_{\rm v})).
\end{equation}
\begin{lem}\label{lemma21118}
Let $\epsilon_0$ and $\mathcal T^{(1)}$ be given. Then there exist $\epsilon'_0$, $\mathcal T^{(2)}$ such that
\begin{equation}\label{lem2118formula}
\mathcal M^{\frak w_{\frak p}^{(2) -}}_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)
_{\epsilon'_{0},\vec{\mathcal T}^{(2)}}
\subset
\mathcal M^{\frak w_{\frak p}^{(1)}}_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)
_{\epsilon_{0},\vec{\mathcal T}^{(1)}}.
\end{equation}
\end{lem}
Here we define $\frak w_{\frak p}^{(2)-}$ from $\frak w_{\frak p}^{(2)}$ by shrinking $\frak V^{(2)}((\frak x_{\frak p} \cup \vec w_{\frak p})_{\rm v})$ to $\frak V^{(2)-}((\frak x_{\frak p} \cup \vec w_{\frak p})_{\rm v})$ and extending the core so that (\ref{extendcore2banme}) is satisfied, and use it to define the left hand side.
\begin{proof}
Since the equation (\ref{mainequationformulamod}) is independent of the stabilization data at $\frak p$, it suffices to show
$$
\frak U_{k+1,(\ell,\ell_{\frak p},(\ell_c))}^{\frak w_{\frak p}^{(2)-}}(\beta;\frak p)_{\epsilon'_{0},\vec{\mathcal T}^{(2)}} \subseteq \frak U^{\frak w_{\frak p}^{(1)}}_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p)_{\epsilon_{0},\vec{\mathcal T}^{(1)}}.
$$
Here the meaning of the symbols `$(2)-$' and `$(1)$' is similar to (\ref{lem2118formula}).
\par
An element of $\frak U_{k+1,(\ell,\ell_{\frak p},(\ell_c))}^{\frak w_{\frak p}^{(2)-}}(\beta;\frak p)_{\epsilon'_{0},\vec{\mathcal T}^{(2)}}$ is $(\frak Y_0 \cup \vec w'_{\frak p},u',(\vec w'_c))$. Let us check that it satisfies (1)-(4) of Definition \ref{defn251} applied to $\frak U^{\frak w_{\frak p}^{(1)}}_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p)_{\epsilon_{0},\vec{\mathcal T}^{(1)}}$.
\par
(1) is obvious. (2) follows from (\ref{extendcore2banme}). (4) is also obvious.
\par
We will prove (3). We note that $\frak p$ is $\epsilon_0$-close to $\frak p$ itself by our choice. So the diameter of the $u_{\frak p}$ image of each connected component of the neck region (with respect to ${\frak w}^{(1)}$) is smaller than $\epsilon_0$. We take $\epsilon'_0$ so that the diameter of the $u_{\frak p}$ image of each connected component of the neck region (with respect to ${\frak w}^{(1)}$) is smaller than $\epsilon_0 - 2\epsilon'_0$.
Now since the $C^0$ distance between $u'$ and $u_{\frak p}$ on the core of ${\frak w}^{(2)}$ is smaller than $\epsilon'_0$,
$$\aligned
&u'\big( \text{e-th neck with respect to $\frak w^{(1)}_{\frak p}$} \big)\\
&\subset \text{$\epsilon'_0$ neighborhood of $u_{\frak p}\big( \text{e-th neck with respect to $\frak w^{(2)}_{\frak p}$} \big)$.}
\endaligned$$
(3) follows.
\end{proof}
Using the fact that $\mathcal D^{(1)}_{\frak p,i} = \mathcal D^{(2)}_{\frak p,i}$, Lemma \ref{lemma21118} implies
\begin{equation}\label{lem21110formula}
\mathcal M^{\frak w_{\frak p}^{(2)-}}_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)
^{\rm trans}_{\epsilon'_{0},\vec{\mathcal T}^{(2)}}
\subset
\mathcal M^{\frak w_{\frak p}^{(1)}}_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)
^{\rm trans}_{\epsilon_{0},\vec{\mathcal T}^{(1)}}.
\end{equation}
Let
\begin{equation}\label{gluemapsss1}
\aligned
\text{\rm Glu}^{(1)} : &B^{\frak w_{\frak p}^{(1)}}_{\frak o^{(1)}}(\frak p;V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)) \\
&\times (\vec{\mathcal T}^{(1)},\infty] \times ((\vec{\mathcal T}^{(1)},\infty] \times \vec S^1) \to \mathcal M^{\frak w_{\frak p}^{(1)}}_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)_{\epsilon_{0},\vec{\mathcal T}^{(1)}}
\endaligned
\end{equation}
and
$$
\aligned
\text{\rm Glu}^{(2)-} : &B^{\frak w_{\frak p}^{(2)-}}_{\frak o^{(2)}}(\frak p;V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)) \\
&\times (\vec{\mathcal T}^{(2)},\infty] \times ((\vec{\mathcal T}^{(2)},\infty] \times \vec S^1) \to \mathcal M^{\frak w_{\frak p}^{(2)-}}_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)_{\epsilon_{0},\vec{\mathcal T}^{(2)}}
\endaligned
$$
be appropriate restrictions of (\ref{gluemapsss}). The image of $\text{\rm Glu}^{(1)}$ is an open neighborhood of $\frak p \cup \vec w_{\frak p}$.
Therefore there exists $(\frak o^{(2)}_0, \mathcal T^{(2)}_0)$ such that for any $(\frak o^{(2)},\mathcal T^{(2)}) < (\frak o^{(2)}_0,\mathcal T^{(2)}_0)$ we have \begin{equation}\label{gluemapsss2} \aligned &\text{\rm Glu}^{(2)-}\big(B^{\frak w_{\frak p}^{(2)-}}_{\frak o^{(2)}}(\frak p;V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)) \times (\vec{\mathcal T}^{(2)},\infty] \times ((\vec{\mathcal T}^{(2)},\infty] \times \vec S^1)\big) \\ &\subset \text{\rm Glu}^{(1)}\big(B^{\frak w_{\frak p}^{(1)}}_{\frak o^{(1)}}(\frak p;V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)) \times (\vec{\mathcal T}^{(1)},\infty] \times ((\vec{\mathcal T}^{(1)},\infty] \times \vec S^1)\big). \endaligned \end{equation} This in turn implies $$ V(\frak p,\frak w^{(2)}_{\frak p};(\frak o^{(2)},\mathcal T^{(2)});\frak A) \subset V(\frak p,\frak w^{(1)}_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A). $$ Let $\phi_{12}$ be this natural inclusion. \begin{lem}\label{2120lem} $\phi_{12}$ is a $C^m$-map. \end{lem} \begin{proof} Let $$ \aligned &\hat V(\frak p,\frak w^{(2)}_{\frak p};(\frak o^{(2)},\mathcal T^{(2)});\frak A)\\ &\subset B^{\frak w_{\frak p}^{(2)-}}_{\frak o^{(2)}}(\frak p;V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)) \times (\vec{\mathcal T}^{(2)},\infty] \times ((\vec{\mathcal T}^{(2)},\infty] \times \vec S^1) \endaligned$$ be the inverse image of $V(\frak p,\frak w^{(2)-}_{\frak p};(\frak o^{(2)},\mathcal T^{(2)});\frak A)$ by $\text{\rm Glu}^{(2)-}$ and let $$ \aligned &\hat V(\frak p,\frak w^{(1)}_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A)\\ &\subset B^{\frak w_{\frak p}^{(1)}}_{\frak o^{(1)}}(\frak p;V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)) \times (\vec{\mathcal T}^{(1)},\infty] \times ((\vec{\mathcal T}^{(1)},\infty] \times \vec S^1) \endaligned$$ be the inverse image of $V(\frak p,\frak w^{(1)}_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A)$ by $\text{\rm Glu}^{(1)}$. 
\par
We consider the maps
$$\aligned
&B^{\frak w_{\frak p}^{(1)}}_{\frak o^{(1)}}(\frak p;V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)) \to \prod_{{\rm v}\in C^0(\mathcal G_{\frak p})}\frak V^{(1)}((\frak x_{\frak p} \cup \vec w_{\frak p})_{\rm v}) \\
&B^{\frak w_{\frak p}^{(2)-}}_{\frak o^{(2)}}(\frak p;V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)) \to \prod_{{\rm v}\in C^0(\mathcal G_{\frak p})}\frak V^{(2)-}((\frak x_{\frak p} \cup \vec w_{\frak p})_{\rm v})
\endaligned$$
that forget the maps. (Namely they send $(\frak y,u')$ to $\frak y$.)
\par
We then define a map
\begin{equation}\label{evaluandTfac}
\aligned
&\frak F^{(1)} : \hat V(\frak p,\frak w^{(1)}_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A) \\\to
&\prod_{{\rm v}\in C^0(\mathcal G_{\frak p})} C^m((K_{{\rm v},(1)}^{+\vec R_{(1)}},K_{{\rm v},(1)}^{+\vec R_{(1)}}\cap \partial \Sigma_{{\rm v},(1)}),(X,L))\\
&\times \prod_{{\rm v}\in C^0(\mathcal G_{\frak p})}\frak V^{(1)}((\frak x_{\frak p} \cup \vec w_{\frak p})_{\rm v})\\
&\times (\vec{\mathcal T}^{(1)},\infty] \times ((\vec{\mathcal T}^{(1)},\infty] \times \vec S^1).
\endaligned
\end{equation}
Here the first factor is induced by the map
$$
\mathcal M^{\frak w_{\frak p}^{(1)}}_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A)_{\epsilon_{0},\vec T_{0}} \to L^2_{m+10}((K_{{\rm v},(1)}^{+\vec R_{(1)}},K_{{\rm v},(1)}^{+\vec R_{(1)}}\cap \partial \Sigma_{{\rm v},(1)}),(X,L))
$$
that is the map $\text{\rm Glu}^{(1)}$ followed by the restriction of the domain to the core $K_{{\rm v},(1)}^{+\vec R_{(1)}}$. (See (\ref{2195form}).) (We put the symbol $(1)$ in $K_{{\rm v},(1)}^{+\vec R_{(1)}}$ to clarify that this core is induced by $\frak w_{\frak p}^{(1)}$.) We chose $T_{{\rm e},0}$ so that the gluing construction works for $L^2_{10m+1}$. (See the end of Section \ref{glueing}.) The second and the third factors are the obvious projections.
The map $\frak F^{(1)}$ is a $C^m$ embedding of the $C^m$ manifold $\hat V(\frak p,\frak w^{(1)}_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A)$, with corners.
\par
We also consider a similar embedding
\begin{equation}\label{evaluandTfac2}
\aligned
&\frak F^{(2)} : \hat V(\frak p,\frak w^{(2)}_{\frak p};(\frak o^{(2)},\mathcal T^{(2)});\frak A) \\
\to &\prod_{{\rm v}\in C^0(\mathcal G_{\frak p})} C^{2m}((K_{{\rm v},(2)}^{+\vec R_{(2)}},K_{{\rm v},(2)}^{+\vec R_{(2)}} \cap \partial \Sigma_{{\rm v},(2)}),(X,L))\\
&\times \prod_{{\rm v}\in C^0(\mathcal G_{\frak p})}\frak V^{(2)}((\frak x_{\frak p} \cup \vec w_{\frak p})_{\rm v})\\
&\times (\vec{\mathcal T}^{(2)},\infty] \times ((\vec{\mathcal T}^{(2)},\infty] \times \vec S^1).
\endaligned
\end{equation}
\par
We denote by $\frak X(1,m)$ the right hand side of (\ref{evaluandTfac}) and by $\frak X(2,2m)$ the right hand side of (\ref{evaluandTfac2}).
\par
We next study the change of parametrization of the core. Let us use the notation in Proposition \ref{reparaexpest}. For $(\rho,\vec T,\vec \theta) \in \prod_{{\rm v} \in C^0(\mathcal G_{\frak p})}\frak V^{(2)}((\frak x_{\frak p} \cup \vec w_{\frak p})_{\rm v}) \times (\vec{\mathcal T}^{(2)},\infty] \times ((\vec{\mathcal T}^{(2)},\infty] \times \vec S^1)$ we have a map
$$
\frak v_{\rho,\vec T,\vec \theta} : \Sigma_{\vec T,\vec \theta}^{(2)} \to \Sigma_{\overline{\Phi}_{12}({\rho,\vec T,\vec \theta})}^{(1)}.
$$
The source $\Sigma_{\vec T,\vec \theta}^{(2)}$ is obtained using the coordinate at infinity $\frak w_{\frak p}^{(2)}$ and the target $\Sigma_{\overline{\Phi}_{12}({\rho,\vec T,\vec \theta})}^{(1)}$ is obtained using the coordinate at infinity $\frak w_{\frak p}^{(1)}$. We may assume that
$$
\frak v_{\rho,\vec T,\vec \theta}(K_{{\rm v},(2)}^{+\vec R_{(2)}}) \subset K_{{\rm v},(1)}^{+\vec R_{(1)}}.
$$
We then define a map
$$
\frak H_{12} : \frak X(2,2m) \to \frak X(1,m)
$$
by the formula
\begin{equation}\label{defnH12}
\frak H_{12}(u,(\rho,\vec T,\vec \theta)) = (u\circ \frak v_{(\rho,\vec T,\vec \theta)},\overline{\Phi}_{12}(\rho,\vec T,\vec \theta)).
\end{equation}
\begin{sublem}\label{sublem1}
$\frak H_{12}$ is a $C^m$-map.
\end{sublem}
\begin{proof}
By Proposition \ref{changeinfcoorprop}, the map $\overline{\Phi}_{12}$ is a $C^m$ diffeomorphism. Therefore the second and the third factors of $\frak H_{12}$ are $C^m$-maps. The first factor is of $C^m$-class because of Proposition \ref{reparaexpest} and the well-known fact that the map $C^{m}(M_1,M_2) \times C^{2m}(M_2,M_3) \to C^{m}(M_1,M_3)$ given by $(v,u) \mapsto u\circ v$ is a $C^m$ map.
\end{proof}
On the other hand we have:
\begin{sublem}\label{sublem2}
$$
\frak H_{12} \circ \frak F^{(2)} = \frak F^{(1)} \circ \phi_{12}.
$$
\end{sublem}
This is immediate from the construction.
\par
Since $\frak F^{(2)}$ and $\frak F^{(1)}$ are both $C^m$ embeddings, Sublemmas \ref{sublem1} and \ref{sublem2} imply Lemma \ref{2120lem}.
\end{proof}
The map $\phi_{12}$ is obviously $\Gamma_{\frak p}$-equivariant. We then define
$$
\aligned
\widehat{\phi}_{12} = \phi_{12} \times {\rm identity} : &V(\frak p,\frak w^{(2)}_{\frak p};(\frak o^{(2)},\mathcal T^{(2)});\frak A) \times \bigoplus_{c\in \frak A} E_c\\
&\subset V(\frak p,\frak w^{(1)}_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A)\times \bigoplus_{c\in \frak A} E_c.
\endaligned
$$
Conditions (2)-(5) are trivial to verify. It also follows that the maps obtained are $\Gamma_{\frak p}$-equivariant. (During the proof of Proposition \ref{prop2117}, the $\Gamma_{\frak p}$-equivariance is always trivial to prove. So we do not mention it any more.)
\par\medskip
\noindent{\bf Case 2}: The case ${\frak w}_{\frak p}^{(1)} = {\frak w}_{\frak p}^{(2)}$ and $\frak A^{(1)} \ne \frak A^{(2)}$.
\par Assume that $\frak B \supseteq \frak A^{(1)} \supset \frak A^{(2)}$ ($\frak B\subseteq \frak C(\frak p)$). If we regard $$ V(\frak p,\frak w_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)}) \subset \mathcal M^{{\frak w}_{\frak p}^{(1)}}_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A^{(1)};\frak B) ^{\rm trans}_{\epsilon_{0},\vec{\mathcal T}^{(1)}}, $$ then we may also regard $$ V(\frak p,\frak w_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(2)}) \subset \mathcal M^{{\frak w}_{\frak p}^{(1)}}_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A^{(1)};\frak B) ^{\rm trans}_{\epsilon_{0},\vec{\mathcal T}^{(1)}}. $$ Moreover \begin{equation}\label{2315emb} V(\frak p,\frak w_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(2)}) \subset V(\frak p,\frak w_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)}). \end{equation} We can show that (\ref{2315emb}) is a $C^m$-map in the same way as the proof of Lemma \ref{2120lem}. (Actually the proof is easier since there is no coordinate change of the source and so $\frak H_{12}$ is the identity map in the situation of Case 2.) \par Furthermore an element $(\frak Y,u',(\vec w'_{a,c};c\in \frak B))$ of $V(\frak p,\frak w_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)})$ is in $V(\frak p,\frak w_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(2)})$ if and only if \begin{equation}\label{2315+1} \frak s_{\frak p,\frak w_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)}}(\frak Y,u',(\vec w'_{a,c};c\in \frak B)) = \overline\partial u' \in \mathcal E_{\frak p,\frak w_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(2)}}. \end{equation} We put $\frak q^+ = (\frak Y,u',(\vec w'_{a,c};c\in \frak B))$. 
By Lemmas \ref{nbdregmaineq} and \ref{transstratasmf}, $d_{\frak q^+}\frak s$ induces an isomorphism: $$ \frac {T_{\frak q^+}V(\frak p,\frak w_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)})} {T_{\frak q^+}V(\frak p,\frak w_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(2)})} \cong \frac {\left(\mathcal E_{\frak p,\frak w_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)}}\right)_{\frak q^{+}}} {\left(\mathcal E_{\frak p,\frak w_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(2)}}\right)_{\frak q^{+}}}. $$ We have thus obtained a coordinate change in this case. \par\medskip The other two cases are as follows. \par\smallskip \noindent{\bf Case 3}: The case $\vec{w}_{\frak p}^{(1)} \subset \vec{w}_{\frak p}^{(2)}$ and $\frak A^{(1)} = \frak A^{(2)}$. The stabilization data ${\frak w}_{\frak p}^{(1)}$ is induced from ${\frak w}_{\frak p}^{(2)}$. \par\smallskip \noindent{\bf Case 4}: The case $\vec{w}_{\frak p}^{(1)} \supset \vec{w}_{\frak p}^{(2)}$ and $\frak A^{(1)} = \frak A^{(2)}$. The stabilization data ${\frak w}_{\frak p}^{(2)}$ is induced from ${\frak w}_{\frak p}^{(1)}$. \par\smallskip Let us explain the notion that `stabilization data ${\frak w}_{\frak p}^{(1)}$ is induced from ${\frak w}_{\frak p}^{(2)}$.' Suppose $\vec{w}_{\frak p}^{(1)} \subset \vec{w}_{\frak p}^{(2)}$. Let \begin{equation}\label{2149quote} \pi : \underset{{\rm v}\in C^0(\mathcal G_{\frak p})}{{\bigodot}}\frak M^{(2)}_{(\frak x_{\frak p}\cup \vec w^{(2)}_{\frak p})_{\rm v}} \to \prod_{{\rm v}\in C^0(\mathcal G_{\frak p})}\frak V^{(2)}((\frak x_{\frak p}\cup \vec w^{(2)}_{\frak p})_{\rm v}) \end{equation} be the fiber bundle (\ref{2149}) that is a part of the data included in ${\frak w}_{\frak p}^{(2)}$. 
Here $\frak V^{(2)}((\frak x_{\frak p}\cup \vec w^{(2)}_{\frak p})_{\rm v})$ is an open neighborhood of $(\frak x_{\frak p}\cup \vec w^{(2)}_{\frak p})_{\rm v}$ in $\mathcal M_{k_{\rm v}+1,\ell_{\rm v} + \ell^{(2)}_{\rm v}}$ or in $\mathcal M^{\rm cl}_{\ell_{\rm v} + \ell^{(2)}_{\rm v}}$. (They are contained in the top stratum of the Deligne-Mumford moduli spaces.)
\par
The forgetful map of the marked points in $\vec{w}_{\frak p}^{(2)} \setminus \vec{w}_{\frak p}^{(1)}$ induces a map
$$
\frak{forget}_{\rm v} : \mathcal M_{k_{\rm v}+1,\ell_{\rm v} + \ell^{(2)}_{\rm v}} \to \mathcal M_{k_{\rm v}+1,\ell_{\rm v} + \ell^{(1)}_{\rm v}}
$$
etc. We put
$$
\frak{forget}_{\rm v}(\frak V^{(2)}((\frak x_{\frak p}\cup \vec w^{(2)}_{\frak p})_{\rm v})) = \frak V^{(1),+}((\frak x_{\frak p}\cup \vec w^{(1)}_{\frak p})_{\rm v}).
$$
We take $\frak V^{(1)}((\frak x_{\frak p}\cup \vec w^{(1)}_{\frak p})_{\rm v}) \subset \frak V^{(1),+}((\frak x_{\frak p}\cup \vec w^{(1)}_{\frak p})_{\rm v})$ that is a neighborhood of $(\frak x_{\frak p}\cup \vec w^{(1)}_{\frak p})_{\rm v}$ such that there exists a section
\begin{equation}\label{sect}
\frak{sect}_{\rm v} : \frak V^{(1)}((\frak x_{\frak p}\cup \vec w^{(1)}_{\frak p})_{\rm v}) \to \frak V^{(2)}((\frak x_{\frak p}\cup \vec w^{(2)}_{\frak p})_{\rm v})
\end{equation}
of the map $\frak{forget}_{\rm v}$. Then we can pull back (\ref{2149quote}) by $\frak{sect} = (\frak{sect}_{\rm v})$ to obtain a fiber bundle
\begin{equation}\label{2149quote2}
\pi : \underset{{\rm v}\in C^0(\mathcal G_{\frak p})}{\bigodot}\frak M^{(1)}_{(\frak x_{\frak p}\cup \vec w^{(1)}_{\frak p})_{\rm v}} \to \prod_{{\rm v}\in C^0(\mathcal G_{\frak p})}\frak V^{(1)}((\frak x_{\frak p}\cup \vec w^{(1)}_{\frak p})_{\rm v}).
\end{equation}
Moreover we can pull back a trivialization of the fiber bundle (\ref{2149quote}) to one of the fiber bundle (\ref{2149quote2}). Thus we obtain a coordinate at infinity of $(\frak x_{\frak p}\cup \vec w^{(1)}_{\frak p})_{\rm v}$.
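\begin{rem}
To spell out the pullback construction, for each ${\rm v}$ the total space of (\ref{2149quote2}) may be described as the fiber product
$$
\frak M^{(1)}_{(\frak x_{\frak p}\cup \vec w^{(1)}_{\frak p})_{\rm v}}
= \left\{ (\frak y,\frak z) \in \frak V^{(1)}((\frak x_{\frak p}\cup \vec w^{(1)}_{\frak p})_{\rm v}) \times \frak M^{(2)}_{(\frak x_{\frak p}\cup \vec w^{(2)}_{\frak p})_{\rm v}} \,\Big\vert\, \pi(\frak z) = \frak{sect}_{\rm v}(\frak y) \right\},
$$
with the projection to the first factor as the bundle map. In particular the fiber over $\frak y$ is canonically identified with the corresponding fiber of (\ref{2149quote}).
\end{rem}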
\begin{defn}
We call the coordinate at infinity obtained as above the {\it coordinate at infinity induced from ${\frak w}_{\frak p}^{(2)}$}.
\par
We also take codimension $2$ submanifolds $\mathcal D^{(1)}_{\frak p,i}$ that are included as a part of the stabilization data ${\frak w}_{\frak p}^{(1)}$, so that $\mathcal D^{(1)}_{\frak p,i} = \mathcal D^{(2)}_{\frak p,i}$ for $i=1,\dots,\#\vec w^{(1)}_{\frak p}$. We thus have obtained a stabilization data ${\frak w}_{\frak p}^{(1)}$. We call it {\it the stabilization data induced from ${\frak w}_{\frak p}^{(2)}$}.
\end{defn}
We now construct a coordinate change of the Kuranishi structures in Case 3. In Definition \ref{defnforget} we defined a forgetful map
$$
\aligned
\frak{forget}_{\frak B,\frak B^-;\vec w_{\frak p},\vec w^-_{\frak p}} &: \mathcal M^{\frak w_{\frak p}}_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon'_{0},\vec{\mathcal T}^{(2)}}\\
&\to \mathcal M^{\frak w_{\frak p}^{-}}_{k+1,(\ell,\ell^-_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B^-)_{\epsilon_{0},\vec{\mathcal T}^{(1)}}.
\endaligned
$$
Here we shrink the base space of (\ref{2149quote}) so that this map is well-defined. We need to extend the core of the domain and replace $\epsilon_0$ by $\epsilon'_0$ in the same way as in Lemma \ref{2120lem}. We then obtain a stabilization data, which we denote by $\frak w^{(2)-}_{\frak p}$.
\par
Taking $\vec w_{\frak p} = \vec w_{\frak p}^{(2)-}$ and $\vec w^-_{\frak p} = \vec w^{(1)}_{\frak p}$ and $\frak B^- = \frak B$ we have
$$
\aligned
\frak{forget}_{\frak B,\frak B;\vec w_{\frak p}^{(2)-},\vec w_{\frak p}^{(1)}} &: \mathcal M^{{\frak w}_{\frak p}^{(2)-}}_{k+1,(\ell,\ell_{\frak p}^{(2)},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon'_{0},\vec{\mathcal T}^{(2)}}\\
&\to \mathcal M^{{\frak w}_{\frak p}^{(1)}}_{k+1,(\ell,\ell^{(1)}_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon_{0},\vec{\mathcal T}^{(1)}}.
\endaligned $$ It induces a map \begin{equation}\label{2318} \aligned \mathcal M^{{\frak w}_{\frak p}^{(2)}}_{k+1,(\ell,\ell_{\frak p}^{(2)},(\ell_c))}(\beta;\frak p;\frak A;\frak B)^{\vec w_{\frak p}^{(2)}\setminus \vec w_{\frak p}^{(1)}}_{\epsilon'_{0},\vec{\mathcal T}^{(2)}}\to \mathcal M^{{\frak w}_{\frak p}^{(1)}}_{k+1,(\ell,\ell^{(1)}_{\frak p},(\ell_c))}(\beta;\frak p;\frak A;\frak B)_{\epsilon_{0},\vec{\mathcal T}^{(1)}} \endaligned \end{equation} which is a strata-wise differentiable open embedding by Proposition \ref{forgetstillstable}. We denote the map (\ref{2318}) by $\tilde\phi_{12}$. \begin{lem}\label{tildephecm} $\tilde\phi_{12}$ is of $C^m$-class in a neighborhood of $\frak p \cup \vec w_{\frak p}^{(2)}$. \end{lem} \begin{proof} The proof is similar to the proof of Lemma \ref{2120lem}. We use Lemma \ref{changeinfcoorproppara}, which is a parametrized version of Propositions \ref{changeinfcoorprop} and \ref{reparaexpest}. Let $$ \frak x_{\frak p}\cup \vec w_{\frak p}^{(2)} = \tilde{\frak x} \in \prod_{{\rm v}\in C^0(\mathcal G_{\frak p})}\frak V^{(2)}((\frak x_{\frak p}\cup \vec w^{(2)}_{\frak p})_{\rm v}) $$ and $\frak{forget}(\tilde{\frak x}) = \frak x = \frak x_{\frak p}\cup \vec w_{\frak p}^{(1)}$. Let $\frak V^{(1)-}((\frak x_{\frak p}\cup \vec w^{(1)}_{\frak p})_{\rm v})$ be a neighborhood of $\frak x_{\rm v}$. \par Let $\frak{sect}_{(1),{\rm v}}$ be the section we chose in (\ref{sect}). It gives a stabilization data $\frak w_{\frak p}^{(1)}$. We take $$ \frak{sect}_{(2),{\rm v}} : Q_{\rm v} \times \frak V^{(1)-}((\frak x_{\frak p}\cup \vec w^{(1)}_{\frak p})_{\rm v}) \to \frak V^{(2)}((\frak x_{\frak p}\cup \vec w^{(2)}_{\frak p})_{\rm v}) $$ such that the following condition is satisfied. \begin{conds} \begin{enumerate} \item $ \frak{forget}(\frak{sect}_{(2),{\rm v}}(\xi,\frak y_{\rm v})) = \frak y_{\rm v}. $ \item $\frak{sect}_{(2),{\rm v}}$ is a diffeomorphism onto an open neighborhood of $\tilde{\frak x}_{\rm v}$.
\end{enumerate} \end{conds} Pulling back $\frak w^{(2)}_{\frak p}$ by $\frak{sect}_{(2)}$ we have a $Q = \prod Q_{\rm v}$-parametrized family of stabilization data, which we call $\tilde{\frak w}^{(2)}_{\frak p}$. We denote the image of $\frak{sect}_{(2),\rm v}$ by $ \frak V^{(2)-}((\frak x_{\frak p}\cup \vec w^{(2)}_{\frak p})_{\rm v})$. \par We use $\frak w_{\frak p}^{(1)}$ in the same way as in the proof of Lemma \ref{2120lem} to obtain \begin{equation}\label{evaluandTfacsaa} \aligned &\frak F^{(1)} : \hat V^-(\frak p,\frak w^{(1)}_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A) \\ \to &\prod_{{\rm v}\in C^0(\mathcal G_{\frak p})} C^{m}((K_{{\rm v},(1)}^{+\vec R_{(1)}},K_{{\rm v},(1)}^{+\vec R_{(1)}} \cap \partial \Sigma_{{\rm v},(1)}),(X,L))\\ &\times \prod_{{\rm v}\in C^0(\mathcal G_{\frak p})}\frak V^{(1)-}((\frak x_{\frak p} \cup \vec w^{(1)}_{\frak p})_{\rm v})\\ &\times (\vec{\mathcal T}^{(1)},\infty] \times ((\vec{\mathcal T}^{(1)},\infty] \times \vec S^1). \endaligned \end{equation} (Here we put $-$ in $ \hat V^-(\frak p,\frak w^{(1)}_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A)$ to clarify that this space uses $\frak V^{(1)-}((\frak x_{\frak p} \cup \vec w^{(1)}_{\frak p})_{\rm v})$.) \par We use $\frak w_{\frak p}^{(2)}$ to obtain \begin{equation}\label{evaluandTfac2paaa} \aligned &\frak F^{(2)} : \hat V^-(\frak p,\frak w^{(2)}_{\frak p};(\frak o^{(2)},\mathcal T^{(2)});\frak A) \\ \to &\prod_{{\rm v}\in C^0(\mathcal G_{\frak p})} C^{2m}((K_{{\rm v},(2)}^{+\vec R_{(2)}},K_{{\rm v},(2)}^{+\vec R_{(2)}}\cap \partial \Sigma_{{\rm v},(2)}),(X,L))\\ &\times \prod_{{\rm v}\in C^0(\mathcal G_{\frak p})}\frak V^{(2)-}((\frak x_{\frak p} \cup \vec w^{(2)}_{\frak p})_{\rm v})\\ &\times (\vec{\mathcal T}^{(2)},\infty] \times ((\vec{\mathcal T}^{(2)},\infty] \times \vec S^1). \endaligned \end{equation} Let $\frak X(1,m)$, $\frak X(2,2m)$ be the spaces in the right hand side of (\ref{evaluandTfacsaa}), (\ref{evaluandTfac2paaa}) respectively.
\par We apply Lemma \ref{changeinfcoorproppara} to the family of coordinates at infinity $\tilde{\frak w}^{(2)}_{\frak p}$ and the coordinate at infinity ${\frak w}^{(1)}_{\frak p}$. It gives estimates of the map $\overline{\Phi}_{12}$ defined in (\ref{coodinatechange12}) and of $\frak v_{(\xi,\rho,\vec T,\vec\theta)}$ as in (\ref{mapvbra}). \par We define $ \frak H_{12} : \frak X(2,2m) \to \frak X(1,m) $ by \begin{equation}\label{defnH12p} \frak H_{12}(u,\frak{sect}_{(2)}(\xi,\rho),(\vec T,\vec \theta)) = (u\circ \frak v_{(\xi,\rho,\vec T,\vec \theta)},\overline{\Phi}_{12}(\xi,\rho,\vec T,\vec \theta)). \end{equation} By construction we have \begin{equation} \frak H_{12} \circ \frak F^{(2)} = \frak F^{(1)} \circ \tilde{\phi}_{12}. \end{equation} Lemma \ref{changeinfcoorproppara} implies that $\frak H_{12}$ is a $C^m$-map. Moreover $\frak F^{(1)}$ and $\frak F^{(2)}$ are $C^m$-embeddings. Therefore $\tilde\phi_{12}$ is a $C^m$-map on $ \hat V^-(\frak p,\frak w^{(2)}_{\frak p};(\frak o^{(2)},\mathcal T^{(2)});\frak A)$. The proof of Lemma \ref{tildephecm} is complete. \end{proof} We go back to the construction of the coordinate change in Case 3. By requiring the transversal constraint at all the marked points, $\tilde\phi_{12}$ induces the required coordinate change $\phi_{12}$. Since $\frak A^{(1)} = \frak A^{(2)}$, it is easy to find the bundle map $\hat\phi_{12}$ that has the required properties. \begin{rem} Note that the map (\ref{2318}) and the coordinate change $\phi_{12}$ we obtain are independent of the choice of the section in (\ref{sect}). But $\phi_{12}$ depends on the codimension 2 submanifolds we take, since the process to take $\rm trans$ depends on them. We use the coordinate at infinity (or the map $\frak{sect}_{\rm v}$ of (\ref{sect})) only to {\it prove} that $\phi_{12}$ is of $C^m$-class. \end{rem} Using the fact that the map (\ref{2318}) is a local diffeomorphism, we construct the coordinate change in Case 4 as the inverse of the one in Case 3.
\par\medskip We have thus constructed the coordinate change in the 4 cases above. The general case can be constructed by a composition of them. \par Let us be given $({\frak w}_{\frak p}^{(1)},\frak A^{(1)})$ and $({\frak w}_{\frak p}^{(2)},\frak A^{(2)})$. We say that the pair $(({\frak w}_{\frak p}^{(1)},\frak A^{(1)}),({\frak w}_{\frak p}^{(2)},\frak A^{(2)}))$ is of Type 1,2,3,4, if we can apply Case 1,2,3,4, respectively. We call the coordinate change so obtained {\it the coordinate change of Type} 1,2,3,4, respectively. \begin{lem}\label{lem2121} For given $({\frak w}_{\frak p}^{(1)},\frak A^{(1)})$ and $({\frak w}_{\frak p}^{(6)},\frak A^{(6)})$ with $\vec w^{(1)}_{\frak p}\cap \vec w^{(6)}_{\frak p} = \emptyset$, there exist $({\frak w}_{\frak p}^{(j)},\frak A^{(j)})$ for $j=2,\dots,5$ such that: \par The pair $(({\frak w}_{\frak p}^{(1)},\frak A^{(1)}),({\frak w}_{\frak p}^{(2)},\frak A^{(2)}))$ is of type $2$, \par The pair $(({\frak w}_{\frak p}^{(2)},\frak A^{(2)}),({\frak w}_{\frak p}^{(3)},\frak A^{(3)}))$ is of type $1$, \par The pair $(({\frak w}_{\frak p}^{(3)},\frak A^{(3)}),({\frak w}_{\frak p}^{(4)},\frak A^{(4)}))$ is of type $3$, \par The pair $(({\frak w}_{\frak p}^{(4)},\frak A^{(4)}),({\frak w}_{\frak p}^{(5)},\frak A^{(5)}))$ is of type $4$, \par The pair $(({\frak w}_{\frak p}^{(5)},\frak A^{(5)}),({\frak w}_{\frak p}^{(6)},\frak A^{(6)}))$ is of type $1$. \end{lem} \begin{proof} We put $({\frak w}_{\frak p}^{(2)},\frak A^{(2)}) = ({\frak w}_{\frak p}^{(1)},\frak A^{(6)})$ and $\frak A^{(j)} = \frak A^{(6)}$ for all $j=2,\dots,6$. \par Let $\vec w^{(4)}_{\frak p} = \vec w^{(1)}_{\frak p} \cup \vec w^{(6)}_{\frak p}$. (Note this is a disjoint union by assumption.) We take (any) coordinate at infinity for $\frak x_{\frak p} \cup \vec w^{(4)}_{\frak p}$. The codimension 2 submanifolds are determined from the data given in $\frak w^{(1)}_{\frak p}$ and $\frak w^{(6)}_{\frak p}$. We have thus defined $({\frak w}_{\frak p}^{(4)},\frak A^{(4)})$.
\par We take the coordinates at infinity induced from ${\frak w}_{\frak p}^{(4)}$ so that the sets of additional marked points are $\vec w^{(1)}_{\frak p}$ and $\vec w^{(6)}_{\frak p}$. We thus obtain $({\frak w}_{\frak p}^{(3)},\frak A^{(3)})$ and $({\frak w}_{\frak p}^{(5)},\frak A^{(5)})$, respectively. It is easy to see that they have the required properties. \end{proof} \begin{rem} We need the hypothesis $\vec w^{(1)}_{\frak p}\cap \vec w^{(6)}_{\frak p} = \emptyset$ in Lemma \ref{lem2121}. Otherwise it might happen that $w^{(1)}_{\frak p,i} = w^{(6)}_{\frak p,j}$ but $\mathcal D^{(1)}_{\frak p,i} \ne \mathcal D^{(6)}_{\frak p,j}$. \end{rem} By Lemma \ref{lem2121} we can define a coordinate change for the pairs $({\frak w}_{\frak p}^{(1)},\frak A^{(1)})$ and $({\frak w}_{\frak p}^{(2)},\frak A^{(2)})$ as the composition of 5 coordinate changes. We have thus constructed the required coordinate change $$ \phi_{12} : V(\frak p,\frak w^{(2)}_{\frak p};(\frak o^{(2)},\mathcal T^{(2)});\frak A^{(2)}) \to V(\frak p,\frak w^{(1)}_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)}) $$ in the case $\vec w^{(1)}_{\frak p} \cap \vec w^{(2)}_{\frak p} = \emptyset$. \par In the general case we take ${\frak w}_{\frak p}^{(0)}$ such that $\vec w^{(1)}_{\frak p} \cap \vec w^{(0)}_{\frak p} = \vec w^{(2)}_{\frak p} \cap \vec w^{(0)}_{\frak p} = \emptyset$ and put $$ \phi_{12} = \phi_{10} \circ \phi_{02}. $$ The proof of Proposition \ref{prop2117} is complete. \end{proof} We remark that in the proof of Lemma \ref{lem2121} we made a choice of coordinate at infinity of $\frak x_{\frak p} \cup \vec w^{(4)}_{\frak p}$. We also took ${\frak w}^{(0)}_{\frak p}$ at the last step of the proof of Proposition \ref{prop2117}. However the resulting coordinate change is independent of these choices if we shrink the domain. Namely we have the following Lemma \ref{lem123pre}.
We put \begin{equation}\label{constructedUUU} U(\frak p,\frak w_{\frak p};(\frak o,\mathcal T);\frak A) = V(\frak p,\frak w^{(1)}_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)})/\Gamma_{\frak p}. \end{equation} This is an orbifold. \begin{rem} We may replace the obstruction bundle by a bigger one and may assume that the orbifold (\ref{constructedUUU}) is effective. This is because the action of $\Gamma_{\frak p}$ on $\Gamma(\Sigma_{\frak p};u_{\frak p}^*TX\otimes \Lambda^{01})$ is effective and it is still effective if we restrict to the space of smooth sections supported on a compact subset of a core. \footnote{The same applies to the case of higher genus. The only exception is the case where $X$ is a point.} In this article we always assume the effectivity of orbifolds. \end{rem} The map $$ \phi_{12} : V(\frak p,\frak w^{(2)}_{\frak p};(\frak o^{(2)},\mathcal T^{(2)});\frak A^{(2)}) \to V(\frak p,\frak w^{(1)}_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)}) $$ induces $$ \underline\phi_{12} : U(\frak p,\frak w^{(2)}_{\frak p};(\frak o^{(2)},\mathcal T^{(2)});\frak A^{(2)}) \to U(\frak p,\frak w^{(1)}_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)}) $$ which is an embedding of orbifolds. The embedding of vector bundles $\widehat{\phi}_{12}$ induces $\underline{\widehat{\phi}}_{12}$, which is an embedding of orbibundles. \begin{lem}\label{lem123pre} We use the notation in Proposition \ref{prop2117}. If two different choices of $(\frak o^{(2),j}_0,\mathcal T^{(2),j}_0)$ $(j=1,2)$ and $(\phi_{12}^j,\widehat{\phi}_{12}^j)$ $(j=1,2)$ are made, then there exists $(\frak o^{(3)},\mathcal T^{(3)})$ such that $(\frak o^{(3)},\mathcal T^{(3)}) < (\frak o^{(2),j}_0,\mathcal T^{(2),j}_0)$ $(j=1,2)$ and $$ (\underline\phi_{12}^1,\widehat{\underline\phi}_{12}^1) = (\underline\phi_{12}^2,\widehat{\underline\phi}_{12}^2) $$ on $ U(\frak p,\frak w^{(2)}_{\frak p};(\frak o^{(3)},\mathcal T^{(3)});\frak A^{(2)}) $.
\end{lem} \begin{proof} We first prove the next lemma. \begin{lem}\label{lem123} Let $\vec w_{\frak p}^{(1)} \subset \vec w_{\frak p}^{(2)}$. Let ${\frak w}_{\frak p}^{(i,j)}$, $i=1,2$, $j=1,2$, be stabilization data at $\frak p$ such that the additional marked points associated to ${\frak w}_{\frak p}^{(i,j)}$ are $\vec w_{\frak p}^{(j)}$. \par We assume that $(({\frak w}_{\frak p}^{(i,1)},\frak A),({\frak w}_{\frak p}^{(i,2)},\frak A))$ is of type $3$.\footnote{Namely we assume that the coordinate at infinity of ${\frak w}_{\frak p}^{(i,1)}$ is induced by that of ${\frak w}_{\frak p}^{(i,2)}$ and that the submanifolds we assigned in Definition \ref{stabdata} (2) coincide with each other when they are assigned to the same marked points.} \par Let $\phi_{(i,j);(i',j')}$ be the coordinate change from the coordinate associated with ${\frak w}_{\frak p}^{(i',j')}$ to the one associated with ${\frak w}_{\frak p}^{(i,j)}$. Then we have \begin{equation}\label{comb4sq} \underline\phi_{(1,1);(1,2)}\circ \underline\phi_{(1,2);(2,2)} = \underline\phi_{(1,1);(2,1)}\circ \underline\phi_{(2,1);(2,2)} \end{equation} on a small neighborhood of $\frak p$ in the Kuranishi chart associated with ${\frak w}_{\frak p}^{(2,2)}$. The same equality holds for $\widehat{\underline\phi}_{(i,j);(i',j')}$. \par The same conclusion holds when $\vec w_{\frak p}^{(2)} \subset \vec w_{\frak p}^{(1)}$ and we replace `type $3$' by `type $4$'. \end{lem} \begin{rem} We remark that the difference between ${\frak w}_{\frak p}^{(1,j)}$ and ${\frak w}_{\frak p}^{(2,j)}$ is the coordinate at infinity. \end{rem} \begin{proof} This lemma, as well as several other lemmas that appear later, is a consequence of the following general observation. \par We consider an open subset $\mathcal U \subset \mathcal M_{k+1,\ell+\ell'}$ of the Deligne-Mumford moduli space. Let $$ \pi : \frak M(\mathcal U) \to \mathcal U $$ be the restriction of the universal family to $\mathcal U$.
Suppose we have a {\it topological space} $\Xi$ consisting of (appropriate equivalence classes of) pairs $(\frak x,u')$ where $\frak x \in \mathcal U$ and $u' : \pi^{-1}(\frak x) \to X$ is a smooth map. \footnote{Here the equivalence relation is defined by an appropriate reparametrization of the source by a biholomorphic map.} Here we emphasize that we regard $\Xi$ as a topological space and do not need to use any other structure such as a smooth structure. \par Suppose $(V_i,\Gamma_i,E_i,\frak s_i,\psi_i)$ is a Kuranishi neighborhood at $\frak p$. We assume that the coordinate change $\underline\phi_{ji}$ is defined as follows: Suppose that there exists a homeomorphism $\Phi_i : V_i/\Gamma_i \to \Xi$ onto an open neighborhood of $\frak x$ with $\frak x = \Phi_{i}(\frak p)$ for all $i$ and $$ \underline\phi_{ji} = \Phi_{j}^{-1} \circ \Phi_{i} $$ holds on a neighborhood of $\frak p$. Then we have $$ \underline\phi_{12}\circ \underline\phi_{23} = \underline\phi_{13} $$ on a neighborhood of $\frak p$. Indeed, $\underline\phi_{12}\circ \underline\phi_{23} = (\Phi_{1}^{-1} \circ \Phi_{2})\circ(\Phi_{2}^{-1} \circ \Phi_{3}) = \Phi_{1}^{-1} \circ \Phi_{3} = \underline\phi_{13}$. \begin{rem} Note it is important here that we only need to check set-theoretical equality. This is because our orbifolds are always effective orbifolds and we consider only embeddings as maps between them. Therefore we do not need to study the orbifold structure or smooth structure to prove compatibility of the coordinate changes etc. \end{rem} \begin{rem}\label{rem124} Later we will use a slightly more general case. Namely we consider the case when there are $V_{i,j}$ and $\Phi_{i,j} : V_{i,j}/\Gamma_{i,j} \to \Xi$ for $(i,j) = (1,1),\dots,(1,m)$ and $(i,j) = (2,1),\dots,(2,n)$. We assume $V_{1,1} = V_{2,1}$ and $V_{1,m} = V_{2,n}$. Suppose $\frak x = \Phi_{i,j}(\frak p)$ is independent of $i,j$ and $\Phi_{i,j}$ is a homeomorphism onto a neighborhood of $\frak x$. We put: $ \underline\phi_{(i,j)(i,j+1)} = \Phi_{i,j}^{-1} \circ \Phi_{i,j+1}.
$ Then we have $$ \underline\phi_{(1,1)(1,2)}\circ\dots\circ \underline\phi_{(1,m-1)(1,m)} = \underline\phi_{(2,1)(2,2)}\circ\dots\circ \underline\phi_{(2,n-1)(2,n)} $$ on a neighborhood of $\frak p$. This is again obvious. \end{rem} Now we apply the observation above to the situation of Lemma \ref{lem123}. The role of $\Xi$ is taken by $$ \mathcal M^{{\frak w}_{\frak p}^{(2,2)}}_{k+1,(\ell,\ell_{\frak p}^{(2)},(\ell_c))}(\beta;\frak p;\frak A;\frak B) ^{\rm trans}_{\epsilon_{0},\vec{\mathcal T}^{(2)}}. $$ We note that this set depends on the coordinate at infinity. However Lemma \ref{lemma21118} implies that it is independent of the coordinate at infinity on a neighborhood of $\frak p$. We have thus proved (\ref{comb4sq}). \par Note the bundle maps $\widehat{\underline\phi}_{(i,j);(i',j')}$ are nothing but the identity maps on the fibers in our situation. The proof of Lemma \ref{lem123} is complete. \end{proof} Lemma \ref{lem123pre} for the case $\vec w^{(1)}_{\frak p}\cap \vec w^{(2)}_{\frak p} = \emptyset$ is immediate from Lemma \ref{lem123}. \par Let us prove the general case. We need to prove the independence of the coordinate change from the choice of ${\frak w}_{\frak p}^{(0)}$. Let ${\frak w}_{\frak p}^{(0,1)}$, ${\frak w}_{\frak p}^{(0,2)}$ be two such choices. Namely we assume $\vec w^{(1)}_{\frak p}\cap \vec w^{(0,i)}_{\frak p} =\vec w^{(2)}_{\frak p}\cap \vec w^{(0,i)}_{\frak p} = \emptyset$ for $i=1,2$. We first assume $\vec w^{(0,1)}_{\frak p} \cap \vec w^{(0,2)}_{\frak p} = \emptyset$ in addition. We put $\vec w^{(0)}_{\frak p} = \vec w^{(0,1)}_{\frak p} \cup \vec w^{(0,2)}_{\frak p}$. We take a stabilization data ${\frak w}_{\frak p}^{(0)}$ so that the codimension $2$ submanifolds are induced by ${\frak w}_{\frak p}^{(0,i)}$. Then $\underline\phi_{(0,i),0}$ are compositions of a coordinate change of type 3 and one of type 1, and $ \underline\phi_{0,(0,i)} $ are compositions of a coordinate change of type 4 and one of type 1.
Therefore from the first part of the proof we have $$\aligned \underline\phi_{1(0,1)}\circ \underline\phi_{(0,1)2} &= \underline\phi_{1(0,1)}\circ \underline\phi_{(0,1)0}\circ \underline\phi_{0(0,1)}\circ\underline\phi_{(0,1)2}\\ &=\underline\phi_{10}\circ \underline\phi_{02} =\underline\phi_{1(0,2)}\circ \underline\phi_{(0,2)2} \endaligned$$ as required.\footnote{Here $\phi_{1(0,1)}$ is the coordinate change from the Kuranishi chart associated with $\vec w^{(0,1)}_{\frak p}$ to the one associated with $\vec w^{(1)}_{\frak p}$. The notations for the other coordinate changes are similar.} \par To remove the condition $\vec w^{(0,1)}_{\frak p} \cap \vec w^{(0,2)}_{\frak p} = \emptyset$ it suffices to remark that there exists ${\vec w}^{(0,3)}_{\frak p}$ such that $\vec w^{(1)}_{\frak p}\cap \vec w^{(0,3)}_{\frak p} =\vec w^{(2)}_{\frak p}\cap \vec w^{(0,3)}_{\frak p} = \emptyset$ and $\vec w^{(0,1)}_{\frak p} \cap \vec w^{(0,3)}_{\frak p} =\vec w^{(0,2)}_{\frak p} \cap \vec w^{(0,3)}_{\frak p} = \emptyset$. The proof of Lemma \ref{lem123pre} is complete. \end{proof} Now we prove the compatibility of the coordinate transformations stated in Proposition \ref{prop2117}. \begin{lem}\label{lem125} Let $({\frak w}_{\frak p}^{(j)},\frak A^{(j)})$ be pairs of stabilization data at $\frak p$ and subsets $\frak A^{(j)} \subset \frak C(\frak p)$, for $j=1,2,3$. Suppose $\frak A^{(1)} \supseteq \frak A^{(2)} \supseteq \frak A^{(3)} \ne \emptyset$ and let $(\frak o^{(1)},\mathcal T^{(1)})$ be admissible for $({\frak w}_{\frak p}^{(1)},\frak A^{(1)})$. \par By Proposition \ref{prop2117} we have admissible $(\frak o^{(2)},\mathcal T^{(2)})$ and $(\frak o^{(3)},\mathcal T^{(3)})$ such that the coordinate change $$(\phi_{1j},\hat{\phi}_{1j}) : V(\frak p,\frak w^{(j)}_{\frak p};(\frak o,\mathcal T);\frak A^{(j)}) \to V(\frak p,\frak w^{(1)}_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)}) $$ exists if $(\frak o^{(j)},\mathcal T^{(j)}) > (\frak o,\mathcal T)$. (Here $j=2,3$.)
\par By Proposition \ref{prop2117} there exists admissible $(\frak o^{(4)},\mathcal T^{(4)})$ such that a coordinate change $$(\phi_{23},\hat{\phi}_{23}) : V(\frak p,\frak w^{(3)}_{\frak p};(\frak o,\mathcal T);\frak A^{(3)}) \to V(\frak p,\frak w^{(2)}_{\frak p};(\frak o^{(2)},\mathcal T^{(2)});\frak A^{(2)}) $$ exists if $(\frak o^{(4)},\mathcal T^{(4)}) > (\frak o,\mathcal T)$. \par Now there exists $(\frak o^{(5)},\mathcal T^{(5)})$ with $(\frak o^{(5)},\mathcal T^{(5)}) < (\frak o^{(j)},\mathcal T^{(j)})$ $(j=3,4)$ such that we have \begin{equation} (\underline\phi_{13},\hat{\underline\phi}_{13}) = (\underline\phi_{12},\hat{\underline\phi}_{12})\circ(\underline\phi_{23},\hat{\underline\phi}_{23}) \end{equation} on $U(\frak p,\frak w^{(3)}_{\frak p};(\frak o^{(5)},\mathcal T^{(5)});\frak A^{(3)})$. \end{lem} \begin{proof} We first prove the case when $\vec w_{\frak p}^{(1)}$, $\vec w_{\frak p}^{(2)}$, $\vec w_{\frak p}^{(3)}$ are mutually disjoint. \par We note that we may assume $\frak A^{(1)} = \frak A^{(2)} = \frak A^{(3)}$. In fact the coordinate change of type 2 (that is, the coordinate change which replaces $\frak A$ by its subset $\frak A^-$) is defined by inclusion of the domains, where $\frak A^-$ is obtained from $\frak A$ by the equation (\ref{2315+1}). This process commutes with other types of coordinate changes. So we assume $\frak A^{(1)} = \frak A^{(2)} = \frak A^{(3)} = \frak A$. \par We also note that the composition of two coordinate changes of type $j$ (for $j=1,\dots,4$) is again a coordinate change of type $j$.
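For later use, let us spell out the telescoping computation that underlies Remark \ref{rem124}, writing $\underline\phi_{(i,j)(i,j+1)} = \Phi_{i,j}^{-1}\circ\Phi_{i,j+1}$ as in the general observation, and assuming $\Phi_{1,1} = \Phi_{2,1}$ and $\Phi_{1,m} = \Phi_{2,n}$, which is the situation in which the remark is applied below:
$$
\aligned
\underline\phi_{(1,1)(1,2)}\circ\dots\circ \underline\phi_{(1,m-1)(1,m)}
&= \Phi_{1,1}^{-1}\circ \Phi_{1,m}
= \Phi_{2,1}^{-1}\circ \Phi_{2,n} \\
&= \underline\phi_{(2,1)(2,2)}\circ\dots\circ \underline\phi_{(2,n-1)(2,n)}.
\endaligned
$$
All intermediate factors $\Phi_{i,j}$ cancel in pairs, so the identity holds on any neighborhood of $\frak p$ on which all the maps involved are defined.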
\par Now using Lemma \ref{lem2121}, we can find $\frak w_{\frak p}^{(i,j)}$, $i=1,2,3$, $j=2,\dots,6$, such that $(\frak w_{\frak p}^{(i,j)},\frak w_{\frak p}^{(i,j+1)})$ is as in the conclusion of Lemma \ref{lem2121} and $$ {\frak w}_{\frak p}^{(1,2)} = {\frak w}_{\frak p}^{(3,2)} = {\frak w}_{\frak p}^{(1)}, \quad {\frak w}_{\frak p}^{(1,6)} = {\frak w}_{\frak p}^{(2,2)} = {\frak w}_{\frak p}^{(2)}, \quad {\frak w}_{\frak p}^{(2,6)} = {\frak w}_{\frak p}^{(3,6)} = {\frak w}_{\frak p}^{(3)}. $$ Then $$ \aligned \underline\phi_{12} &= \underline\phi_{(1,2)(1,3)}\circ\underline\phi_{(1,3)(1,4)}\circ\underline\phi_{(1,4)(1,5)}\circ\underline\phi_{(1,5)(1,6)}, \\ \underline\phi_{23} &= \underline\phi_{(2,2)(2,3)}\circ\underline\phi_{(2,3)(2,4)}\circ\underline\phi_{(2,4)(2,5)}\circ\underline\phi_{(2,5)(2,6)}, \\ \underline\phi_{13} &= \underline\phi_{(3,2)(3,3)}\circ\underline\phi_{(3,3)(3,4)}\circ\underline\phi_{(3,4)(3,5)}\circ\underline\phi_{(3,5)(3,6)}. \endaligned $$ Therefore we can apply the general observation mentioned in the course of the proof of Lemma \ref{lem123}, in the form of Remark \ref{rem124}, to prove Lemma \ref{lem125} in our case. \par In fact we can take $\Xi$ as follows. We consider $\vec w_{\frak p}^{(i,4)}$ for $i=1,2,3$ and put $\vec w_{\frak p} = \vec w_{\frak p}^{(1,4)} \cup \vec w_{\frak p}^{(2,4)} \cup \vec w_{\frak p}^{(3,4)}$. We take (any) coordinate at infinity of $\frak x_{\frak p} \cup \vec w_{\frak p}$. We take the codimension 2 submanifolds $\mathcal D_{\frak p,i}$ (that are a part of the data ${\frak w}_{\frak p}$) so that they coincide with those taken for ${\frak w}^{(i)}_{\frak p}$, $i=1,2,3$. (Note we use the assumption that $\vec w_{\frak p}^{(1)}$, $\vec w_{\frak p}^{(2)}$, $\vec w_{\frak p}^{(3)}$ are mutually disjoint here.) We have thus defined the stabilization data $\frak w_{\frak p}$.
Then $$ \Xi = \mathcal M^{{\frak w}_{\frak p}}_{k+1,(\ell,\ell_{\frak p}^{(+)},(\ell_c))}(\beta;\frak p;\frak A;\frak B) ^{\rm trans}_{\epsilon_{0},\vec{\mathcal T}_1}, $$ where $\ell_{\frak p}^{(+)} = \#\vec w_{\frak p}$. \par We finally remove the condition that $\vec w_{\frak p}^{(1)}$, $\vec w_{\frak p}^{(2)}$, $\vec w_{\frak p}^{(3)}$ are mutually disjoint. We take $\vec{w}_{\frak p}^{(4)}, \vec{w}_{\frak p}^{(5)}$ such that $$ \vec w_{\frak p}^{(i)} \cap \vec{w}_{\frak p}^{(4)} = \emptyset = \vec w_{\frak p}^{(i)} \cap \vec{w}_{\frak p}^{(5)} $$ for $i=1,2,3$ and $\vec w_{\frak p}^{(4)} \cap \vec{w}_{\frak p}^{(5)} = \emptyset$. We also take codimension two transversal submanifolds $\mathcal D_i$ for each of those additional marked points. We have thus obtained the stabilization data $\frak w_{\frak p}^{(4)}$, $\frak w_{\frak p}^{(5)}$. Then we have $$ \underline\phi_{12}\circ \underline\phi_{23} = \underline\phi_{15}\circ \underline\phi_{52}\circ\underline\phi_{24}\circ \underline\phi_{43} = \underline\phi_{15}\circ \underline\phi_{54}\circ \underline\phi_{43} = \underline\phi_{15}\circ \underline\phi_{53} = \underline\phi_{13}. $$ Here the first and the last equalities are the definitions. The second and the third equalities follow from the case of Lemma \ref{lem125} which we have already proved. The proof of Lemma \ref{lem125} is complete. \end{proof} \section{Coordinate change - II: Coordinate change among different strata} \label{differentstratum} In this section we construct coordinate changes between the Kuranishi charts we constructed in Proposition \ref{chartprop} for the general case. Let $\frak p(1) \in \mathcal M_{k+1,\ell}(\beta)$. We take a stabilization data $\frak w_{\frak p(1)}$ at $\frak p(1)$ and $\frak A^{(1)} \subseteq \frak C(\frak p(1))$. We use them to define the Kuranishi neighborhood $V(\frak p(1),\frak w_{\frak p(1)};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)})$ given in Definition \ref{defpsi}.
Let \begin{equation}\label{322eq} \psi_{\frak p(1),\frak w_{\frak p(1)};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)}} : \frak s_{\frak p(1),\frak w_{\frak p(1)};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)}}^{-1}(0) /\Gamma_{\frak p(1)} \to \mathcal M_{k+1,\ell}(\beta) \end{equation} be the map in Proposition \ref{chartprop}. We assume that $\frak p(2)$ is contained in its image. \par We will define the notion of {\it induced stabilization data} at $\frak p(2)$. We recall that the stabilization data $\frak w_{\frak p(1)}$ includes the fiber bundle (\ref{2149}) \begin{equation}\label{21492} \pi : \underset{{\rm v}\in C^0(\mathcal G_{\frak p(1)})}{\bigodot}\frak M^{(1)}((\frak x_{\frak p(1)}\cup \vec w_{\frak p(1)})_{\rm v})\to \prod_{{\rm v}\in C^0(\mathcal G_{\frak p(1)})}\frak V^{(1)}((\frak x_{\frak p(1)}\cup \vec w_{\frak p(1)})_{\rm v}). \end{equation} Here $\frak V^{(1)}((\frak x_{\frak p(1)}\cup \vec w_{\frak p(1)})_{\rm v})$ is a neighborhood of $(\frak x_{\frak p(1)}\cup \vec w_{\frak p(1)})_{\rm v}$ in the Deligne-Mumford moduli space $\mathcal M_{k_{\rm v}+1,\ell_{\rm v}+\ell_{\frak p(1),\rm v}}$. The product on the right hand side of (\ref{21492}) is identified with a neighborhood of $\frak x_{\frak p(1)} \cup \vec w_{\frak p(1)}$ in the stratum $\mathcal M_{k+1,\ell+\ell_{\frak p(1)}}(\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)}})$ of the Deligne-Mumford moduli space $\mathcal M_{k+1,\ell+\ell_{\frak p(1)}}$. We denote this neighborhood by $\frak V(\frak x_{\frak p(1)}\cup \vec w_{\frak p(1)})$. \begin{conds} We consider a symmetric stabilization $\vec w_{\frak p(2)}$ on $\frak x_{\frak p(2)}$, an element $\sigma_0\in \frak V(\frak x_{\frak p(1)}\cup \vec w_{\frak p(1)})$ and $(\vec S^{\rm o}_{0},(\vec S^{\rm c}_{0},\vec \theta_0)) \in (\vec{\mathcal T}^{(1)},\infty] \times ((\vec{\mathcal T}^{(1)},\infty] \times \vec S^1)$ that satisfy the following two conditions.
\begin{enumerate}\label{condindtransdata} \item $ \frak x_{\frak p(2)}\cup \vec w_{\frak p(2)} = \overline{\Phi}(\sigma_0;\vec S^{\rm o}_{0},(\vec S^{\rm c}_{0},\vec \theta_0)). $ \item $\frak p(2) \cup \vec w_{\frak p(2)}$ satisfies the transversal constraint at all marked points. Namely for each $i=1,\dots,\ell_{\frak p(1)}$ we have $$ u_{\frak p(2)}(w_{\frak p(2),i}) \in \mathcal D_{\frak p(1),i}. $$ Here $\mathcal D_{\frak p(1),i}$ is a codimension 2 submanifold included in the stabilization data $\frak w_{\frak p(1)}$. (We remark $\#\vec w_{\frak p(2)} = \# \vec w_{\frak p(1)} = \ell_{\frak p(1)}$.) \end{enumerate} \end{conds} An element of $\Gamma_{\frak p(1)}$ is regarded as an element of the permutation group $\frak S_{\ell_{\frak p(1)}}$. So it transforms $\vec w_{\frak p(2)}$ by permutation. The group $\Gamma_{\frak p(1)}$ acts also on the set of pairs $(\sigma_0;\vec S^{\rm o}_{0},(\vec S^{\rm c}_{0},\vec \theta_0))$. We then have the following: \begin{lem}\label{dependcompsst} The set of triples $(\vec w_{\frak p(2)},\sigma_0;\vec S^{\rm o}_{0},(\vec S^{\rm c}_{0},\vec \theta_0))$ satisfying Condition \ref{condindtransdata} consists of a single $\Gamma_{\frak p(1)}$-orbit. \end{lem} \begin{proof} This is an immediate consequence of Proposition \ref{charthomeo}. \end{proof} We continue the construction of the induced stabilization data at $\frak p(2)$. Let $\mathcal G_{\frak p(2)\cup \vec w_{\frak p(2)}}$ be the combinatorial type of $\frak p(2)\cup \vec w_{\frak p(2)}$. In general it is different from the combinatorial type $\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)}}$ of $\frak p(1)\cup \vec w_{\frak p(1)}$. In fact the graph $\mathcal G_{\frak p(2)\cup \vec w_{\frak p(2)}}$ is obtained from the graph $\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)}}$ by shrinking all the edges $\rm e$ such that $S_{0,{\rm e}} \ne \infty$. 
We denote by $C^{1,{\rm fin}}(\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)}})$ the set of edges $\rm e$ with $S_{0,{\rm e}} \ne \infty$. We have \begin{equation} C^{1}(\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)}}) = C^{1,{\rm fin}}(\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)}}) \sqcup C^{1}(\mathcal G_{\frak p(2)\cup \vec w_{\frak p(2)}}). \end{equation} Here the right hand side is the {\it disjoint} union. Choose $\Delta S \in \R_{>0}$ that is sufficiently small compared with the numbers $S_{0,{\rm e}}$. (We may take for example $\Delta S = 1$.) \par Let $\frak V^{(2)}(\frak x_{\frak p(2)}\cup \vec w_{\frak p(2)})$ be a neighborhood of $\frak x_{\frak p(2)}\cup \vec w_{\frak p(2)}$ in the stratum $\mathcal M_{k+1,\ell+\ell_{\frak p(1)}}(\mathcal G_{\frak p(2)\cup \vec w_{\frak p(2)}})$ of the Deligne-Mumford moduli space $\mathcal M_{k+1,\ell+\ell_{\frak p(1)}}$. We can take them so that there exists an identification \begin{equation}\label{2324} \aligned \frak V^{(2)}(\frak x_{\frak p(2)}\cup \vec w_{\frak p(2)}) =&\,\frak V^{(1)}(\frak x_{\frak p(1)}\cup \vec w_{\frak p(1)}) \\ &\times \prod_{{\rm e} \in C^{1,{\rm fin}}_{\rm o}(\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)}})} ((S_{0,\rm e}-\Delta S,S_{0,\rm e}+\Delta S) \times [0,1])\\ &\times \prod_{{\rm e} \in C^{1,{\rm fin}}_{\rm c}(\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)}})} ((S_{0,\rm e}-\Delta S,S_{0,\rm e}+\Delta S)\times S^1). \endaligned \end{equation} \par Let $\overline{\rm v}$ be a vertex of $\mathcal G_{\frak p(2)\cup \vec w_{\frak p(2)}}$. We take the subgraph $\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)},\overline{\rm v}}$ of the graph $\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)}}$ as follows. There exists a map $\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)}} \to \mathcal G_{\frak p(2)\cup \vec w_{\frak p(2)}}$ that shrinks the edges $\rm e$ with $S_{0,\rm e} \ne \infty$.
An edge ${\rm e} \in C^1(\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)}})$ is an edge of $\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)},\overline{\rm v}}$ if its image under this map is the vertex $\overline{\rm v}$ or an edge containing $\overline{\rm v}$. Then we have \begin{equation}\label{232400} \aligned &\frak V^{(2)}((\frak x_{\frak p(2)}\cup \vec w_{\frak p(2)})_{\overline{\rm v}}) \\ =&\prod_{{\rm v} \in C^0(\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)},\overline{\rm v}})}\frak V^{(1)}((\frak x_{\frak p(1)}\cup \vec w_{\frak p(1)})_{\rm v}) \\ &\times \prod_{{\rm e} \in C^{1,{\rm fin}}_{\rm o}(\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)},\overline{\rm v}})} ((S_{0,\rm e}-\Delta S,S_{0,\rm e}+\Delta S)\times [0,1])\\ &\times \prod_{{\rm e} \in C^{1,{\rm fin}}_{\rm c}(\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)},\overline{\rm v}})} ((S_{0,\rm e}-\Delta S,S_{0,\rm e}+\Delta S)\times S^1). \endaligned \end{equation} The universal family over the Deligne-Mumford moduli space restricts to a fiber bundle \begin{equation}\label{2326} \pi : \frak M^{(2)}((\frak x_{\frak p(2)}\cup \vec w_{\frak p(2)})_{\overline{\rm v}})\to \frak V^{(2)}((\frak x_{\frak p(2)}\cup \vec w_{\frak p(2)})_{\overline{\rm v}}). \end{equation} The fiber at $(\sigma;\vec S^{\rm o},(\vec S^{\rm c},\vec \theta))$ of this bundle, which we denote by $\Sigma_{(\sigma;\vec S^{\rm o},(\vec S^{\rm c},\vec \theta))}$, is the union of the following three types of 2-dimensional manifolds. \begin{enumerate} \item[(I)] For each ${\rm v} \in C^0(\mathcal G_{\frak p(2)\cup \vec w_{\frak p(2)}})$ we consider the core $K_{\rm v}^{\sigma_{\rm v}}$ that is contained in $\Sigma_{\sigma_{\rm v}}$. (Here $\sigma_{\rm v} \in \frak V^{(1)}((\frak x_{\frak p(1)}\cup \vec w_{\frak p(1)})_{\rm v})$ is a component of $\sigma$ and $\Sigma_{\sigma_{\rm v}}$ is the Riemann surface corresponding to this element ${\sigma_{\rm v}}$.)
\item[(II)] If ${\rm e}\in C^1_{\mathrm o}(\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)},\overline{\rm v}})$, $S_{0,{\rm e}} = \infty$ and ${\rm e}$ goes to an outgoing edge of $\overline{\rm v}$, we have $[0,\infty) \times [0,1]$. \par If ${\rm e}\in C^1_{\mathrm o}(\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)},\overline{\rm v}})$, $S_{0,{\rm e}} = \infty$ and ${\rm e}$ goes to an incoming edge of $\overline{\rm v}$, we have $(-\infty,0] \times [0,1]$. \par If ${\rm e}\in C^1_{\mathrm c}(\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)},\overline{\rm v}})$, $S_{0,{\rm e}} = \infty$ and ${\rm e}$ goes to an outgoing edge of $\overline{\rm v}$, we have $[0,\infty) \times S^1$. \par If ${\rm e}\in C^1_{\mathrm c}(\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)},\overline{\rm v}})$, $S_{0,{\rm e}} = \infty$ and ${\rm e}$ goes to an incoming edge of $\overline{\rm v}$, we have $(-\infty,0] \times S^1$. \item[(III)] If ${\rm e}\in C^1_{\mathrm o}(\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)},\overline{\rm v}})$, $S_{0,{\rm e}} \ne \infty$, we have $[-5S_{{\rm e}},5S_{{\rm e}}] \times [0,1]$. If ${\rm e}\in C^1_{\mathrm c}(\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)},\overline{\rm v}})$, $S_{0,{\rm e}} \ne \infty$, we have $[-5S_{{\rm e}},5S_{{\rm e}}] \times S^1$. \end{enumerate} \begin{defn} The core $K_{\overline{\rm v}}$ of $\Sigma_{(\sigma;\vec S^{\rm o},(\vec S^{\rm c},\vec \theta))}$ is the union of the subsets of type I or type III. \end{defn} On the complement of the core, the fiber bundle (\ref{2326}) has a trivialization, given by the identification of the subsets of type II with the standard sets mentioned there. This trivialization preserves complex structures. \par This trivialization extends to the subsets of type I. In fact, such an extension is a part of the data included in the coordinate at infinity of $\frak w_{\frak p(1)}$. Note that this extension of the trivialization does not respect the fiberwise complex structure.
\par Note, however, that this trivialization does {\it not} extend to a trivialization of the fiber bundle (\ref{2326}) if there exists an edge ${\rm e}\in C^1_{\mathrm c}(\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)},\overline{\rm v}})$ with $S_{0,{\rm e}} \ne \infty$. In fact, there exists an $S^1$ factor in (\ref{232400}) that corresponds to such an edge $\rm e$, and our fiber bundle has nontrivial monodromy around it, namely the Dehn twist on the domain $[-5S_{0,{\rm e}},5S_{0,{\rm e}}] \times S^1$. \par Therefore, to find a coordinate at infinity that satisfies Definition \ref{coordinatainfdef} (5), we need to restrict the domain. We take a sufficiently small $\Delta\theta$ (for example $\Delta\theta = 1/10$) and put \begin{equation}\label{232400aa} \aligned &\frak V((\frak x_{\frak p(2)}\cup \vec w_{\frak p(2)})_{\overline{\rm v}}) \\ =&\prod_{{\rm v} \in C^0(\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)},\overline{\rm v}})}\frak V((\frak x_{\frak p(1)}\cup \vec w_{\frak p(1)})_{\rm v}) \\ &\times \prod_{{\rm e} \in C^{1,{\rm fin}}_{\rm o}(\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)},\overline{\rm v}})} ((S_{0,\rm e}-\Delta S,S_{0,\rm e}+\Delta S)\times [0,1])\\ &\times \prod_{{\rm e} \in C^{1,{\rm fin}}_{\rm c}(\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)},\overline{\rm v}})} ((S_{0,\rm e}-\Delta S,S_{0,\rm e}+\Delta S)\times (\theta_{0,{\rm e}}-\Delta{\theta},\theta_{0,{\rm e}}+\Delta{\theta})). \endaligned \end{equation} (Note $\frak x_{\frak p(2)}\cup \vec w_{\frak p(2)} = \overline{\Phi}(\sigma_0;\vec S^{\rm o}_{0},(\vec S^{\rm c}_{0},\vec \theta_0))$ and $\theta_{0,{\rm e}}$ is a component of $\vec \theta_0$.) \par We consider the fiber bundle \begin{equation}\label{23262} \pi : \frak M((\frak x_{\frak p(2)}\cup \vec w_{\frak p(2)})_{\overline{\rm v}})\to \frak V((\frak x_{\frak p(2)}\cup \vec w_{\frak p(2)})_{\overline{\rm v}}) \end{equation} in place of (\ref{2326}).
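\begin{rem}
To illustrate the monodromy discussed above (this remark is only illustrative; the normalization of the Dehn twist below is one standard choice and is not part of the data fixed in the text): going once around the $S^1$ factor of (\ref{232400}) corresponding to an edge ${\rm e} \in C^{1,{\rm fin}}_{\rm c}(\mathcal G_{\frak p(1)\cup \vec w_{\frak p(1)},\overline{\rm v}})$ rotates the gluing identification of the corresponding neck by a full turn. Up to isotopy relative to the boundary, the resulting monodromy is the self-map
$$
(s,t) \longmapsto \left(s,\, t + \chi\left(\frac{s+5S_{0,{\rm e}}}{10S_{0,{\rm e}}}\right)\right), \qquad (s,t) \in [-5S_{0,{\rm e}},5S_{0,{\rm e}}] \times S^1,
$$
where $S^1 = \R/\Z$ and $\chi : [0,1] \to [0,1]$ is any smooth function with $\chi \equiv 0$ near $0$ and $\chi \equiv 1$ near $1$. This is the Dehn twist on the neck, which is not isotopic to the identity relative to the boundary. In (\ref{232400aa}) each such $S^1$ factor is replaced by the interval $(\theta_{0,{\rm e}}-\Delta\theta,\theta_{0,{\rm e}}+\Delta\theta)$, so the loops generating this monodromy are no longer present in the base, and the fiber bundle (\ref{23262}) admits a global trivialization.
\end{rem}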
\par Now we can extend the trivialization of the fiber bundle defined in the complement of the core to a trivialization that is defined everywhere. (But it does not preserve the complex structures.) We have thus defined a coordinate at infinity of $\frak p(2)$. \par We take the codimension 2 submanifolds $\mathcal D_{\frak p(1),i}$ that are a part of $\frak w_{\frak p(1)}$ and put $$ \mathcal D_{\frak p(2),i} = \mathcal D_{\frak p(1),i}. $$ \begin{defn} The stabilization data at $\frak p(2)$ that is obtained as above is called the {\it stabilization data induced by $\frak w_{\frak p(1)}$}. \end{defn} \begin{rem} There is more than one way of extending the trivialization of the fiber bundle that is given on the parts of type I and type II to the whole space. However, the way to do so is determined once we fix the following two families of diffeomorphisms. \begin{enumerate} \item A family of diffeomorphisms from the rectangles $[-5S_{{\rm e}},5S_{{\rm e}}] \times [0,1]$ to $[0,1]\times [0,1]$ such that they are the obvious isometries in a neighborhood of $\partial[-5S_{{\rm e}},5S_{{\rm e}}] \times [0,1]$. Here the parameter is $S_{{\rm e}} \in (S_{0,\rm e}-\Delta S,S_{0,\rm e}+\Delta S)$. \item A family of diffeomorphisms from the annuli $[-5S_{{\rm e}},5S_{{\rm e}}] \times S^1$ to $[0,1]\times S^1$ such that they are the obvious isometries in a neighborhood of $\{-5S_{{\rm e}}\} \times S^1$ and the rotation by $\theta_{\rm e}$ in a neighborhood of $\{5S_{{\rm e}}\} \times S^1$. Here the parameters are $S_{{\rm e}} \in (S_{0,\rm e}-\Delta S,S_{0,\rm e}+\Delta S)$ and $\theta_{\rm e} \in (\theta_{0,{\rm e}}-\Delta{\theta},\theta_{0,{\rm e}}+\Delta{\theta})$. \end{enumerate} Such families of diffeomorphisms obviously exist. We can take one and use it whenever we define the induced coordinate at infinity. In that sense the notion of induced coordinate at infinity and of induced stabilization data is well-defined. (Namely, it can be taken to be independent of $\frak p(1)$, for example.)
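\par For instance (this is only one concrete choice among many; the auxiliary functions $\chi$ and $\varphi_{S_{\rm e}}$ below are not part of the data fixed elsewhere in the text), fix a smooth function $\chi : [0,1] \to [0,1]$ with $\chi \equiv 0$ near $0$ and $\chi \equiv 1$ near $1$, and a smooth family of orientation preserving diffeomorphisms $\varphi_{S_{\rm e}} : [-5S_{{\rm e}},5S_{{\rm e}}] \to [0,1]$ with $\varphi_{S_{\rm e}}(s) = s + 5S_{{\rm e}}$ in a neighborhood of $s = -5S_{{\rm e}}$ and $\varphi_{S_{\rm e}}(s) = s - 5S_{{\rm e}} + 1$ in a neighborhood of $s = 5S_{{\rm e}}$. Then
$$
(s,t) \longmapsto \left(\varphi_{S_{\rm e}}(s),\, t + \theta_{\rm e}\,\chi(\varphi_{S_{\rm e}}(s))\right)
$$
is a family of diffeomorphisms as in (2): near $\{-5S_{{\rm e}}\} \times S^1$ it is the isometry $(s,t) \mapsto (s+5S_{{\rm e}},t)$, and near $\{5S_{{\rm e}}\} \times S^1$ it is an isometry composed with the rotation by $\theta_{\rm e}$. Dropping the rotation term $\theta_{\rm e}\,\chi(\varphi_{S_{\rm e}}(s))$ from the second factor gives a family as in (1).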
\end{rem} \par In Section \ref{coordinateinf}, we discussed how the parametrization changes when we change the coordinate at infinity. There we defined a map $\Psi_{12}$. (See (\ref{2167}).) The following is obvious from the definition. We use the notation in Propositions \ref{changeinfcoorprop} and \ref{reparaexpest}. \begin{lem}\label{lempsi12} If we take the induced core on $\frak Y_0$, then $\overline{\Phi}_{12} = \Psi_{12}$. Moreover $\frak v_{\frak y_2,\vec T_2,\vec\theta_2}$ is the identity map on the core $K_{\rm v}$. \end{lem} The first main result of this subsection is the following. \begin{prop}\label{prop21333} Let $\frak p(1) \in \mathcal M_{k+1,\ell}(\beta)$ and take a stabilization data $\frak w_{\frak p(1)}$ at $\frak p(1)$ and an admissible $(\frak o^{(1)}, \mathcal T^{(1)})$. Let $\frak p(2)$ be in the image of (\ref{322eq}). We take the induced stabilization data $\frak w_{\frak p(2)}$. Let $\frak A \subseteq \frak C(\frak p(2)) \subseteq \frak C(\frak p(1))$. \par Then there exists an admissible $(\frak o_0^{(2)}, \mathcal T_0^{(2)})$ such that if $(\frak o^{(2)}, \mathcal T^{(2)}) < (\frak o_0^{(2)}, \mathcal T_0^{(2)})$, then there exists a coordinate change $$ (\phi_{12},\hat\phi_{12}) : V(\frak p(2),\frak w_{\frak p(2)};(\frak o^{(2)},\mathcal T^{(2)});\frak A)\to V(\frak p(1),\frak w_{\frak p(1)};(\frak o^{(1)},\mathcal T^{(1)});\frak A).
$$ \end{prop} \begin{proof} We have maps \begin{equation}\label{gluemapsss1a} \aligned \text{\rm Glu}^{(1)} :&B_{\frak o^{(1)}}^{\frak w_{\frak p(1)}}(\frak p;V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p(1);\frak A)) \times (\vec{\mathcal T}^{(1)},\infty] \times ((\vec{\mathcal T}^{(1)},\infty] \times \vec S^1) \\ &\to \mathcal M^{\frak w_{\frak p(1)}}_{k+1,(\ell,\ell_{\frak p(1)},(\ell_c))}(\beta;\frak p(1);\frak A)_{\epsilon_{0,1},\vec{\mathcal T}^{(1)}} \endaligned \end{equation} and \begin{equation}\label{gluemapsss1a2} \aligned \text{\rm Glu}^{(2)} :&B_{\frak o^{(2)}}^{\frak w_{\frak p(2)}}(\frak p;V_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p(2);\frak A)) \times (\vec{\mathcal T}^{(2)},\infty] \times ((\vec{\mathcal T}^{(2)},\infty] \times \vec S^1) \\ &\to \mathcal M^{\frak w_{\frak p(2)}}_{k+1,(\ell,\ell_{\frak p(1)},(\ell_c))}(\beta;\frak p(2);\frak A)_{\epsilon_{0,2},\vec{\mathcal T}^{(2)}} \endaligned \end{equation} by the gluing constructions at $\frak p(1)$ and at $\frak p(2)$, respectively. (More precisely, for a given $\epsilon_{0,2}$, the map (\ref{gluemapsss1a2}) is defined by choosing $\frak o^{(2)}$ small and $\mathcal T^{(2)}$ large.) \par By the assumption and Proposition \ref{charthomeo}, there exists $\vec w_{\frak p(2),c}$ such that $$ (\frak p(2)\cup \vec w_{\frak p(2)},(\vec w_{\frak p(2),c})) \in \mathcal M^{\frak w_{\frak p(1)}}_{k+1,(\ell,\ell_{\frak p(1)},(\ell_c))}(\beta;\frak p(1);\frak A)_{\epsilon_{0,1},\vec{\mathcal T}^{(1)}}^{\rm trans}. $$ We observe $$ (\frak p(2)\cup \vec w_{\frak p(2)},(\vec w_{\frak p(2),c})) \in \mathcal M^{\frak w_{\frak p(2)}}_{k+1,(\ell,\ell_{\frak p(2)},(\ell_c))}(\beta;\frak p(2);\frak A)_{\epsilon_{0,2},\vec{\mathcal T}^{(2)}} $$ and the image of (\ref{gluemapsss1a2}) defines a neighborhood basis as we vary $\epsilon_{0,2}$.
Therefore by taking $\epsilon_{0,2}$ small and $\mathcal T^{(2)}$ large, we may assume that \begin{equation}\label{thikenincl232} \aligned &\mathcal M^{\frak w_{\frak p(2)}}_{k+1,(\ell,\ell_{\frak p(1)},(\ell_c))}(\beta;\frak p(2);\frak A)_{\epsilon_{0,2},\vec{\mathcal T}^{(2)}}\\ &\subset \mathcal M^{\frak w_{\frak p(1)}}_{k+1,(\ell,\ell_{\frak p(1)},(\ell_c))}(\beta;\frak p(1);\frak A)_{\epsilon_{0,1},\vec{\mathcal T}^{(1)}} \endaligned\end{equation} and this is an open embedding. By construction, an element of the thickened moduli space $\mathcal M^{\frak w_{\frak p(2)}}_{k+1,(\ell,\ell_{\frak p(2)},(\ell_c))}(\beta;\frak p(2);\frak A)_{\epsilon_{0,2},\vec{\mathcal T}^{(2)}}$ satisfies the transversal constraint at all additional marked points with respect to $\frak w_{\frak p(1)}$ if and only if the transversal constraint at all additional marked points with respect to $\frak w_{\frak p(2)}$ is satisfied. Therefore \begin{equation}\label{2332} \aligned &\mathcal M^{\frak w_{\frak p(2)}}_{k+1,(\ell,\ell_{\frak p(1)},(\ell_c))}(\beta;\frak p(2);\frak A)_{\epsilon_{0,2},\vec{\mathcal T}^{(2)}}^{\rm trans}\\ &\subset \mathcal M^{\frak w_{\frak p(1)}}_{k+1,(\ell,\ell_{\frak p(1)},(\ell_c))}(\beta;\frak p(1);\frak A)_{\epsilon_{0,1},\vec{\mathcal T}^{(1)}}^{\rm trans} \endaligned\end{equation} and this is an open embedding. We can thus define a continuous strata-wise $C^m$-map $\phi_{12}$ as the inclusion map. It is an open embedding of $C^m$-class strata-wise. \begin{lem}\label{coodinatedifferentp} $\phi_{12}$ is of $C^m$-class. \end{lem} \begin{proof} The proof is similar to that of Lemma \ref{2120lem}. We repeat the details for completeness. Let $\hat V(\frak p(j),\frak w_{\frak p(j)};(\frak o^{(j)},\mathcal T^{(j)});\frak A)$ be the inverse image of $V(\frak p(j),\frak w_{\frak p(j)};(\frak o^{(j)},\mathcal T^{(j)});\frak A)$ by $\text{\rm Glu}^{(j)}$. (Here $j=1,2$.)
It suffices to show that $$ \aligned \tilde\phi_{12} = (\text{\rm Glu}^{(1)})^{-1}\circ \text{\rm Glu}^{(2)} : &\hat V(\frak p(2),\frak w_{\frak p(2)};(\frak o^{(2)},\mathcal T^{(2)});\frak A) \\ &\to \hat V(\frak p(1),\frak w_{\frak p(1)};(\frak o^{(1)},\mathcal T^{(1)});\frak A) \endaligned $$ is of $C^m$-class. We obtain maps \begin{equation}\label{evaluandTfacjj} \aligned \frak F^{(j)} : \hat V&(\frak p(j),\frak w_{\frak p(j)};(\frak o^{(j)},\mathcal T^{(j)});\frak A)\\ \to &\prod_{{\rm v}\in C^0(\mathcal G_{\frak p(1)})} C^m((K_{\rm v}^{+\vec R},K_{\rm v}^{+\vec R} \cap \partial\Sigma_{{\rm v},(1)}),(X,L))\\ &\times \prod_{{\rm v}\in C^0(\mathcal G_{\frak p(1)})}\frak V((\frak x_{\frak p(1)} \cup \vec w_{\frak p(1)})_{\rm v})\times (\vec{\mathcal T}^{(j)},\infty] \times ((\vec{\mathcal T}^{(j)},\infty] \times \vec S^1) \endaligned \end{equation} in the same way as (\ref{evaluandTfac}) for $j=1,2$. We remark here that we take the graph $\mathcal G_{\frak p(1)}$ for the case $j=2$ also. By applying Theorem \ref{exdecayT33} we find that (\ref{evaluandTfacjj}) is a $C^m$-embedding for $j=1$. \par We will prove that (\ref{evaluandTfacjj}) is a $C^m$-embedding for $j=2$ also. It follows from Theorem \ref{exdecayT33} applied to the gluing at $\frak p(2)$ that $\frak F^{(2)}$ is of $C^m$-class. We put $\frak F^{(2)} = (\frak F^{(2)}_1,\frak F^{(2)}_2)$. Here $\frak F^{(2)}_1$ (resp. $\frak F^{(2)}_2$) is the map to the factor in the second line (resp. the third line). It suffices to show that $\frak F^{(2)}_1$ is a $C^m$-embedding on each fiber of $\frak F^{(2)}_2$. Note that the factors of the third line parametrize the complex structure of the source. The fact that $\frak F^{(2)}_1$ is an embedding on the fiber at $T_{\rm e} = \infty$ follows from Theorem \ref{exdecayT33} applied to the gluing at $\frak p(1)$.
Then we apply Theorem \ref{exdecayT33} to the gluing at $\frak p(2)$ to show that $\frak F^{(2)}_1$ is an embedding on each fiber of $\frak F^{(2)}_2$ if $\mathcal T^{(2)}$ is sufficiently large. \par Now using the obvious fact that $\frak F^{(1)}\circ \tilde\phi_{12} = \frak F^{(2)}$, we conclude that $\tilde\phi_{12}$ is a $C^m$-embedding. \end{proof} \begin{rem} In contrast to the case of the proof of Lemma \ref{2120lem}, we do have $\frak F^{(1)}\circ \tilde\phi_{12} = \frak F^{(2)}$. This is because we are using the coordinate at infinity $\frak w_{\frak p(2)}$ that is induced from $\frak w_{\frak p(1)}$, and so the parametrization of the core is the same. \end{rem} We thus have defined $\phi_{12}$. We define $\hat\phi_{12} = \phi_{12} \times {\rm identity}$. It is easy to see that $\phi_{12}$ is $\Gamma_{\frak p(2)}$-equivariant. Other properties are also easy to prove. The proof of Proposition \ref{prop21333} is now complete. \end{proof} \begin{rem} In Lemma \ref{dependcompsst} we proved that the two choices of $\vec w_{(2)}$ are transformed to each other under the $\Gamma_{\frak p(1)}$-action. More precisely, we have the following. The action of $\Gamma_{\frak p(1)}$ is given by the permutation of the marked points $\vec w_{(2)}$. If $\gamma \in \Gamma_{\frak p(2)}$, the permutation of $\vec w_{(2)}$ gives an equivalent element. Namely, there exists a biholomorphic map $\frak x_{\frak p(2)} \cup \vec w_{(2)} \to \frak x_{\frak p(2)} \cup \gamma\vec w_{(2)}$. \par In case $\gamma \notin \Gamma_{\frak p(2)}$, $\frak x_{\frak p(2)} \cup \vec w_{(2)}$ is not biholomorphic to $\frak x_{\frak p(2)} \cup \gamma\vec w_{(2)}$. Each of the choices $\vec w_{(2)}$ and $\gamma\vec w_{(2)}$ induces a stabilization data at $\frak p(2)$, which we write $\frak w_{(2)}$ and $\gamma\frak w_{(2)}$, respectively. They define coordinate changes.
We remark that there is a canonical diffeomorphism $$ \mathcal M^{\frak w_{\frak p(2)}}_{k+1,(\ell,\ell_{\frak p(1)},(\ell_c))}(\beta;\frak p(2);\frak A)^{\rm trans}_{\epsilon_{0,2},\vec{\mathcal T}^{(2)}} \cong \mathcal M^{\gamma\frak w_{\frak p(2)}}_{k+1,(\ell,\ell_{\frak p(1)},(\ell_c))}(\beta;\frak p(2);\frak A)^{\rm trans}_{\epsilon_{0,2},\vec{\mathcal T}^{(2)}} $$ by permutation of the marked points. Namely, we have $$ \gamma : V(\frak p(2),\frak w_{\frak p(2)};(\frak o^{(2)},\mathcal T^{(2)});\frak A) \to V(\frak p(2),\gamma\frak w_{\frak p(2)};(\frak o^{(2)},\mathcal T^{(2)});\frak A). $$ On the other hand, $\gamma \in \Gamma_{\frak p(1)}$ acts on $V(\frak p(1),\frak w_{\frak p(1)};(\frak o^{(1)},\mathcal T^{(1)});\frak A)$. Since our construction is $\Gamma_{\frak p(1)}$-equivariant we have $$ \gamma \circ \phi_{12} = \phi_{12} \circ \gamma. $$ Here $\phi_{12}$ on the left hand side uses $\frak w_{\frak p(2)}$ and $\phi_{12}$ on the right hand side uses $\gamma\frak w_{\frak p(2)}$. This is the same as the case of coordinate changes of orbifold charts. \par We remark that this phenomenon eventually causes the appearance of $\gamma^{\alpha}_{pqr}$ in Definition \ref{Definition A1.5}. \end{rem} Combined with the result of the last subsection, Proposition \ref{prop21333} implies the following. \begin{cor}\label{cchanges} Let $\frak p(1) \in \mathcal M_{k+1,\ell}(\beta)$. We take a stabilization data $\frak w_{\frak p(1)}$ at $\frak p(1)$ and an admissible $(\frak o^{(1)}, \mathcal T^{(1)})$. \par Let $\frak p(2)$ be in the image of (\ref{322eq}). We take a stabilization data $\frak w_{\frak p(2)}$ at $\frak p(2)$. \par Then there exists an admissible $(\frak o_0^{(2)}, \mathcal T_0^{(2)})$ such that the following holds for $\frak A^{(j)} \subseteq \frak C(\frak p(j))$ with $\frak A^{(2)} \subseteq \frak A^{(1)}$.
\par For any $(\frak o^{(2)}, \mathcal T^{(2)}) < (\frak o_0^{(2)}, \mathcal T_0^{(2)})$, there exists a coordinate change $(\phi_{12},\hat\phi_{12})$ from the Kuranishi chart $V(\frak p(2),\frak w_{\frak p(2)};(\frak o^{(2)},\mathcal T^{(2)});\frak A^{(2)})$ to the Kuranishi chart $V(\frak p(1),\frak w_{\frak p(1)};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)})$. \end{cor} \begin{proof} Let $\frak w_{\frak p(2)}'$ be the stabilization data at $\frak p(2)$ induced by $\frak w_{\frak p(1)}$. Then the required coordinate change is obtained by composing the three coordinate changes associated to the pairs, $((\frak w_{\frak p(1)},\frak A^{(1)}),(\frak w_{\frak p(1)},\frak A^{(2)}))$, $((\frak w_{\frak p(1)},\frak A^{(2)}),(\frak w_{\frak p(2)}',\frak A^{(2)}))$, $((\frak w_{\frak p(2)}',\frak A^{(2)}),(\frak w_{\frak p(2)},\frak A^{(2)}))$. They are obtained by Proposition \ref{prop2117}, Proposition \ref{prop21333}, Proposition \ref{prop2117}, respectively. \end{proof} \begin{rem} By construction, the coordinate change given in Corollary \ref{cchanges} is independent of the choices involved in the definition, in a neighborhood of $\frak p(2)$. \end{rem} We next prove the compatibility of the coordinate changes in Corollary \ref{cchanges}. \begin{prop}\label{compaticoochamain} Let $\frak p(1) \in \mathcal M_{k+1,\ell}(\beta)$. We take a stabilization data $\frak w_{\frak p(1)}$ at $\frak p(1)$ and an admissible $(\frak o^{(1)}, \mathcal T^{(1)})$. \par Let $\frak p(2)$ be in the image of (\ref{322eq}). We take a stabilization data $\frak w_{\frak p(2)}$ at $\frak p(2)$. Let $(\frak o_0^{(2)}, \mathcal T_0^{(2)})$ be as in Corollary \ref{cchanges}. \par Then there exists $\epsilon_{7} = \epsilon_{7}(\frak p(1),\frak w_{\frak p(1)},\frak p(2),\frak w_{\frak p(2)})$ with the following properties for each $(\frak o^{(2)}, \mathcal T^{(2)}) < (\frak o_0^{(2)}, \mathcal T_0^{(2)})$. \par Let $\frak p(3) \in \mathcal M_{k+1,\ell}(\beta)$.
We assume $d(\frak p(2),\frak p(3)) < \epsilon_{7}$.\footnote{$d$ here is any metric on $\mathcal M_{k+1,\ell}(\beta)$.} Then for any stabilization data $\frak w_{\frak p(3)}$ at $\frak p(3)$, there exists an admissible $(\frak o_0^{(3)}, \mathcal T_0^{(3)})$ such that if $(\frak o^{(3)}, \mathcal T^{(3)}) < (\frak o_0^{(3)}, \mathcal T_0^{(3)})$ and $\frak A^{(j)} \subseteq \frak C(\frak p(j))$ ($j=1,2,3$) with $\frak A^{(1)} \supseteq \frak A^{(2)}\supseteq \frak A^{(3)}$, then we have the following. \begin{enumerate} \item There exists a coordinate change $$(\phi_{23},\hat\phi_{23}) : V(\frak p(3),\frak w_{\frak p(3)};(\frak o^{(3)},\mathcal T^{(3)});\frak A^{(3)})\to V(\frak p(2),\frak w_{\frak p(2)};(\frak o^{(2)},\mathcal T^{(2)});\frak A^{(2)}) $$ as in Corollary \ref{cchanges}. \item There exists a coordinate change $$ (\phi_{13},\hat\phi_{13}) : V(\frak p(3),\frak w_{\frak p(3)};(\frak o^{(3)},\mathcal T^{(3)});\frak A^{(3)}) \to V(\frak p(1),\frak w_{\frak p(1)};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)}) $$ as in Corollary \ref{cchanges}. \item We have $$ (\underline\phi_{13},\hat{\underline\phi}_{13}) = (\underline\phi_{12},\hat{\underline\phi}_{12}) \circ (\underline\phi_{23},\hat{\underline\phi}_{23}). $$ Here $$ (\phi_{12},\hat\phi_{12}) : V(\frak p(2),\frak w_{\frak p(2)};(\frak o^{(2)},\mathcal T^{(2)});\frak A^{(2)}) \to V(\frak p(1),\frak w_{\frak p(1)};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)}) $$ is the coordinate change in Corollary \ref{cchanges} and $$ (\underline\phi_{12},\hat{\underline\phi}_{12}) : U(\frak p(2),\frak w_{\frak p(2)};(\frak o^{(2)},\mathcal T^{(2)});\frak A^{(2)}) \to U(\frak p(1),\frak w_{\frak p(1)};(\frak o^{(1)},\mathcal T^{(1)});\frak A^{(1)}) $$ is induced by it. \end{enumerate} \end{prop} \begin{proof} For the same reason as in the case of Proposition \ref{prop21333}, we may assume $\frak A^{(1)} = \frak A^{(2)} = \frak A^{(3)} = \frak A$. We will assume this throughout the proof.
\par We first prove the following. \begin{lem}\label{lem2144} Let $\frak w_{\frak p(2)}^{(1)}$ be the stabilization data at $\frak p(2)$ induced by $\frak w_{\frak p(1)}$ and $\frak w_{\frak p(3)}^{(1)}$ the stabilization data at $\frak p(3)$ induced by $\frak w_{\frak p(2)}^{(1)}$. Then $\frak w_{\frak p(3)}^{(1)}$ is the stabilization data induced by $\frak w_{\frak p(1)}$. \end{lem} The proof is obvious. \begin{lem}\label{lem21409} Let $\frak w_{\frak p(1)}^{(1)} = \frak w_{\frak p(1)}$ and $\frak w_{\frak p(2)}^{(1)}$, $\frak w_{\frak p(3)}^{(1)}$ be as in Lemma \ref{lem2144}. We denote by $(\phi_{ij},\hat\phi_{ij})$ (for $1\le i < j\le 3$) the coordinate changes induced by the pair $(\frak w_{\frak p(i)}^{(1)},\frak w_{\frak p(j)}^{(1)})$. Then we have \begin{equation}\label{2140formula} (\phi_{12},\hat\phi_{12})\circ (\phi_{23},\hat\phi_{23}) = (\phi_{13},\hat\phi_{13}) \end{equation} in a neighborhood of $\frak p(3)$. \end{lem} \begin{proof} We can choose $\epsilon_{0,j}$ $(j=1,2,3)$ such that $$ \aligned &\mathcal M^{\frak w^{(1)}_{\frak p(3)}}_{k+1,(\ell,\ell_{\frak p(1)},(\ell_c))}(\beta;\frak p(3);\frak A)_{\epsilon_{0,3},\vec{\mathcal T}^{(3)}}^{\rm trans}\\ &\subset \mathcal M^{\frak w^{(1)}_{\frak p(2)}}_{k+1,(\ell,\ell_{\frak p(1)},(\ell_c))}(\beta;\frak p(2);\frak A)_{\epsilon_{0,2},\vec {\mathcal T}^{(2)}}^{\rm trans}\\ &\subset \mathcal M^{\frak w^{(1)}_{\frak p(1)}}_{k+1,(\ell,\ell_{\frak p(1)},(\ell_c))}(\beta;\frak p(1);\frak A)_{\epsilon_{0,1},\vec{\mathcal T}^{(1)}}^{\rm trans}. \endaligned $$ The maps (\ref{2140formula}) are all induced by this inclusion in a neighborhood of $\frak p(3)$. Hence the lemma. \end{proof} The proof of the next lemma is the main part of the proof of Proposition \ref{compaticoochamain}. \begin{lem}\label{2141} Let $\frak p(2) \in \mathcal M_{k+1,\ell}(\beta)$ and let $\frak w_{\frak p(2)}^{(1)}$, $\frak w_{\frak p(2)}^{(2)}$ be two stabilization data at $\frak p(2)$. 
Suppose a $\frak w_{\frak p(2)}^{(1)}$-admissible $(\frak o^{(21)}, \mathcal T^{(21)})$ is given. Take a $\frak w_{\frak p(2)}^{(2)}$-admissible $(\frak o_0^{(22)}, \mathcal T_0^{(22)})$ such that if $(\frak o^{(22)}, \mathcal T^{(22)}) < (\frak o_0^{(22)}, \mathcal T_0^{(22)})$ then there exists a coordinate change $$ (\phi_{(21)(22)},\hat\phi_{(21)(22)}) : V(\frak p(2),\frak w^{(2)}_{\frak p(2)};(\frak o^{(22)},\mathcal T^{(22)});\frak A)\to V(\frak p(2),\frak w^{(1)}_{\frak p(2)};(\frak o^{(21)},\mathcal T^{(21)});\frak A) $$ as in Proposition \ref{prop2117}. \par Then there exists $\epsilon_{8} = \epsilon_8(\frak p(2),\frak w_{\frak p(2)}^{(1)},\frak w_{\frak p(2)}^{(2)},(\frak o^{(21)}, \mathcal T^{(21)}),(\frak o^{(22)}, \mathcal T^{(22)}))$ such that if $\frak p(3) \in \mathcal M_{k+1,\ell}(\beta)$ satisfies $d(\frak p(2),\frak p(3)) < \epsilon_{8}$, then the following holds. \begin{enumerate} \item There exists a stabilization data $\frak w_{\frak p(3)}^{(1)}$ at $\frak p(3)$ induced from $\frak w_{\frak p(2)}^{(1)}$ and a stabilization data $\frak w_{\frak p(3)}^{(2)}$ at $\frak p(3)$ induced from $\frak w_{\frak p(2)}^{(2)}$. \item There exists a $\frak w_{\frak p(3)}^{(1)}$-admissible $(\frak o_0^{(31)}, \mathcal T_0^{(31)})$ such that if $(\frak o^{(31)}, \mathcal T^{(31)}) < (\frak o_0^{(31)}, \mathcal T_0^{(31)})$ then the coordinate change $$ (\phi_{(21)(31)},\hat\phi_{(21)(31)}) : V(\frak p(3),\frak w^{(1)}_{\frak p(3)};(\frak o^{(31)},\mathcal T^{(31)});\frak A) \to V(\frak p(2),\frak w^{(1)}_{\frak p(2)};(\frak o^{(21)},\mathcal T^{(21)});\frak A) $$ as in Proposition \ref{prop21333} exists.
\item There exists a $\frak w_{\frak p(3)}^{(2)}$-admissible $(\frak o_0^{(32)}, \mathcal T_0^{(32)})$ such that if $(\frak o^{(32)}, \mathcal T^{(32)}) < (\frak o_0^{(32)}, \mathcal T_0^{(32)})$ then the coordinate change $$ (\phi_{(22)(32)},\hat\phi_{(22)(32)}) : V(\frak p(3),\frak w^{(2)}_{\frak p(3)};(\frak o^{(32)},\mathcal T^{(32)});\frak A) \to V(\frak p(2),\frak w^{(2)}_{\frak p(2)};(\frak o^{(22)},\mathcal T^{(22)});\frak A) $$ as in Proposition \ref{prop21333} exists. \item There exists a $\frak w_{\frak p(3)}^{(2)}$-admissible $(\frak o_0^{(32)\prime}, \mathcal T_0^{(32)\prime})$ such that if $(\frak o^{(32)\prime}, \mathcal T^{(32)\prime}) < (\frak o_0^{(32)\prime}, \mathcal T_0^{(32)\prime})$ then the coordinate change $$ (\phi_{(31)(32)},\hat\phi_{(31)(32)}) : V(\frak p(3),\frak w^{(2)}_{\frak p(3)};(\frak o^{(32)\prime},\mathcal T^{(32)\prime});\frak A) \to V(\frak p(3),\frak w^{(1)}_{\frak p(3)};(\frak o^{(31)},\mathcal T^{(31)});\frak A) $$ as in Proposition \ref{prop2117} exists. \item Suppose $(\frak o^{(32)\prime\prime}, \mathcal T^{(32)\prime\prime}) < (\frak o_0^{(32)\prime}, \mathcal T_0^{(32)\prime})$ and $(\frak o^{(32)\prime\prime}, \mathcal T^{(32)\prime\prime}) < (\frak o_0^{(32)}, \mathcal T_0^{(32)})$. Then we have \begin{equation} \aligned &(\underline\phi_{(21)(22)},\hat{\underline\phi}_{(21)(22)})\circ ({\underline\phi}_{(22)(32)},\hat{\underline\phi}_{(22)(32)})\\ &= ({\underline\phi}_{(21)(31)},\hat{\underline\phi}_{(21)(31)})\circ ({\underline\phi}_{(31)(32)},\hat{\underline\phi}_{(31)(32)}) \endaligned\end{equation} on $U(\frak p(3),\frak w^{(2)}_{\frak p(3)};(\frak o^{(32)\prime\prime},\mathcal T^{(32)\prime\prime});\frak A)$. \end{enumerate} \end{lem} \begin{rem} The statement (1) above was proved at the beginning of this subsection. The statements (2) and (3) above were proved by Proposition \ref{prop21333}. The statement (4) above was proved by Proposition \ref{prop2117}. So only the statement (5) is new in Lemma \ref{2141}. 
\end{rem} \begin{proof}[Lemma \ref{2141} $\Rightarrow$ Proposition \ref{compaticoochamain}] Let $\frak w_{\frak p(2)}^{(1)}$ be the stabilization data at $\frak p(2)$ induced by $\frak w_{\frak p(1)}$. \par We apply Lemma \ref{2141} to $\frak w_{\frak p(2)}^{(1)}$ and $\frak w_{\frak p(2)}^{(2)} = \frak w_{\frak p(2)}$. We then obtain $\epsilon_{8}$. This $\epsilon_{8}$ is $\epsilon_7$ in Proposition \ref{compaticoochamain}. Suppose $\frak p(3) \in \mathcal M_{k+1,\ell}(\beta)$ satisfies $d(\frak p(2),\frak p(3)) < \epsilon_{8}$. We obtain $\frak w_{\frak p(3)}^{(1)}, \frak w_{\frak p(3)}^{(2)}$ from Lemma \ref{2141} (1). \par Using the pair of stabilization data $(\frak w_{\frak p(1)},\frak w_{\frak p(2)}^{(1)})$ we obtain the coordinate change $(\phi_{1(21)},\hat\phi_{1(21)})$ by Proposition \ref{prop21333}. \par Using the pair of stabilization data $(\frak w_{\frak p(3)}^{(2)},\frak w_{\frak p(3)})$ we obtain the coordinate change $(\phi_{(32)3},\hat\phi_{(32)3})$ by Proposition \ref{prop2117}. \par Now by using Lemma \ref{2141} (5) we have \begin{equation}\label{eq2337} \aligned &({\underline\phi}_{1(21)},\hat{\underline\phi}_{1(21)}) \circ ({\underline\phi}_{(21)(22)},\hat{\underline\phi}_{(21)(22)})\circ ({\underline\phi}_{(22)(32)},\hat{\underline\phi}_{(22)(32)})\circ ({\underline\phi}_{(32)3},\hat{\underline\phi}_{(32)3})\\ &= ({\underline\phi}_{1(21)},\hat{\underline\phi}_{1(21)}) \circ ({\underline\phi}_{(21)(31)},\hat{\underline\phi}_{(21)(31)})\circ ({\underline\phi}_{(31)(32)},\hat{\underline\phi}_{(31)(32)})\circ ({\underline\phi}_{(32)3},\hat{\underline\phi}_{(32)3}) \endaligned \end{equation} in a neighborhood of $[\frak p(3)]$.
\par By the definition of $(\phi_{12},\hat{\phi}_{12})$ and $(\phi_{23},\hat{\phi}_{23})$ given in the proof of Corollary \ref{cchanges}, we have \begin{equation} ({\underline\phi}_{1(21)},\hat{\underline\phi}_{1(21)}) \circ ({\underline\phi}_{(21)(22)},\hat{\underline\phi}_{(21)(22)}) = ({\underline\phi}_{12},\hat{\underline\phi}_{12}) \end{equation} and \begin{equation} ({\underline\phi}_{(22)(32)},\hat{\underline\phi}_{(22)(32)})\circ ({\underline\phi}_{(32)3},\hat{\underline\phi}_{(32)3}) = ({\underline\phi}_{23},\hat{\underline\phi}_{23}). \end{equation} On the other hand, by Lemma \ref{lem21409}, $(\phi_{1(21)},\hat\phi_{1(21)}) \circ (\phi_{(21)(31)},\hat\phi_{(21)(31)})$ is the coordinate change given by Proposition \ref{prop21333}. By Lemma \ref{lem125}, $(\phi_{(31)(32)},\hat\phi_{(31)(32)})\circ (\phi_{(32)3},\hat\phi_{(32)3})$ is the coordinate change given by Proposition \ref{prop2117}. Therefore, by the definition given in the proof of Corollary \ref{cchanges}, \begin{equation}\label{eq2340} \aligned ({\underline\phi}_{13},\hat{\underline\phi}_{13}) = ({\underline\phi}_{1(21)},\hat{\underline\phi}_{1(21)}) &\circ ({\underline\phi}_{(21)(31)},\hat{\underline\phi}_{(21)(31)})\\ &\circ ({\underline\phi}_{(31)(32)},\hat{\underline\phi}_{(31)(32)})\circ ({\underline\phi}_{(32)3},\hat{\underline\phi}_{(32)3}). \endaligned \end{equation} Proposition \ref{compaticoochamain} follows from (\ref{eq2337})-(\ref{eq2340}). \end{proof} \begin{proof}[Proof of Lemma \ref{2141}] By definition, the coordinate change $(\phi_{(21)(22)},\hat\phi_{(21)(22)})$ is a composition of finitely many coordinate changes, each of which is of type 1, 3, or 4. (The notion of coordinate changes of type 1,3,4 is defined right before Lemma \ref{lem2121}.) Therefore it suffices to prove the lemma in the case when $(\phi_{(21)(22)},\hat\phi_{(21)(22)})$ is of one of the types 1,3,4. We prove each of those cases below. \par\medskip \noindent{\bf Case 1}: $(\phi_{(21)(22)},\hat\phi_{(21)(22)})$ is of type 1.
\par We use the notation in the proof of Proposition \ref{prop2117} with $\frak p$ being replaced by $\frak p(2)$ or $\frak p(3)$. \par By Lemma \ref{lemma21118} we have \begin{equation}\label{lem2118formula23} \mathcal M^{\frak w_{\frak p(2)}^{(2) -}}_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p(2);\frak A) _{\epsilon'_{0,2},\vec{\mathcal T}^{(2) \prime}} \subset \mathcal M^{\frak w_{\frak p(2)}^{(1)}}_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p(2);\frak A) _{\epsilon_{0,2},\vec{\mathcal T}^{(1)}}. \end{equation} (Here we replace $\epsilon_{0}, \epsilon'_{0}$ in (\ref{lem2118formula}) by $\epsilon_{0,2}, \epsilon'_{0,2}$. We also put $\ell_{\frak p} = \ell_{\frak p(2)} = \ell_{\frak p(3)}$.) Also by Lemma \ref{lemma21118} we have \begin{equation}\label{lem2118formula233} \mathcal M^{\frak w_{\frak p(3)}^{(2) -}}_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p(3);\frak A) _{\epsilon'_{0,3},\vec{\mathcal T}^{(3) \prime}} \subset \mathcal M^{\frak w_{\frak p(3)}^{(1)}}_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p(3);\frak A) _{\epsilon_{0,3},\vec{\mathcal T}^{(3)}}. \end{equation} (Here we replace $\epsilon_{0}, \epsilon'_{0}$ in (\ref{lem2118formula}) by $\epsilon_{0,3}, \epsilon'_{0,3}$.) \par By the definition of type 1, we use the same codimension 2 submanifolds to impose the transversal constraint.
Therefore we have \begin{equation}\label{lem2118formula23tra} \mathcal M^{\frak w_{\frak p(2)}^{(2) -}}_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p(2);\frak A) _{\epsilon'_{0,2},\vec{\mathcal T}^{(2) \prime}}^{\rm trans} \subset \mathcal M^{\frak w_{\frak p(2)}^{(1)}}_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p(2);\frak A) _{\epsilon_{0,2},\vec{\mathcal T}^{(2)}}^{\rm trans} \end{equation} and \begin{equation}\label{lem2118formula233tra} \mathcal M^{\frak w_{\frak p(3)}^{(2) -}}_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p(3);\frak A) _{\epsilon'_{0,3},\vec{\mathcal T}^{(3) \prime}}^{\rm trans} \subset \mathcal M^{\frak w_{\frak p(3)}^{(1)}}_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p(3);\frak A) _{\epsilon_{0,3},\vec {\mathcal T}^{(3)}}^{\rm trans}. \end{equation} On the other hand, by (\ref{2332}) we have \begin{equation}\label{2332ss} \mathcal M^{\frak w_{\frak p(3)}^{(1)}}_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p(3);\frak A)_{\epsilon_{0,3},\vec {\mathcal T}^{(3)}}^{\rm trans} \subset \mathcal M^{\frak w_{\frak p(2)}^{(1)}}_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p(2);\frak A)_{\epsilon_{0,2},\vec {\mathcal T}^{(2)}}^{\rm trans}. \end{equation} Note that the stabilization data $\frak w_{\frak p(2)}^{(2) -}$ and $\frak w_{\frak p(3)}^{(2) -}$ appearing in (\ref{lem2118formula23tra}) and (\ref{lem2118formula233tra}) are obtained by extending the core of the coordinate at infinity included in $\frak w_{\frak p(2)}^{(2)}$ and $\frak w_{\frak p(3)}^{(2)}$, respectively. Therefore by further extending the core we may assume that $\frak w_{\frak p(3)}^{(2) -}$ is induced from $\frak w_{\frak p(2)}^{(2) -}$. 
Therefore again by (\ref{2332}) we have \begin{equation}\label{2332s2s} \mathcal M^{\frak w_{\frak p(3)}^{(2)-}}_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p(3);\frak A)_{\epsilon''_{0,3},\vec{\mathcal T}^{(3)\prime\prime}}^{\rm trans} \subset \mathcal M^{\frak w_{\frak p(2)}^{(2)-}}_{k+1,(\ell,\ell_{\frak p},(\ell_c))}(\beta;\frak p(2);\frak A)_{\epsilon'_{0,2},\vec {\mathcal T}^{(2) \prime}}^{\rm trans}. \end{equation} By definition, the coordinate changes $\phi_{(21)(22)}$, $\phi_{(31)(32)}$, $\phi_{(21)(31)}$, $\phi_{(22)(32)}$ are the inclusion maps (\ref{lem2118formula23tra}), (\ref{lem2118formula233tra}), (\ref{2332ss}) and (\ref{2332s2s}) in neighborhoods of $\frak p(2)$, $\frak p(3)$, $\frak p(3)$, $\frak p(3)$, respectively. The lemma is proved in this case. \par\medskip \noindent{\bf Case 2}: Void. \par\medskip \noindent{\bf Case 4}: $(\phi_{(21)(22)},\hat\phi_{(21)(22)})$ is of type 4. \par We have $\vec w_{\frak p(2)}^{(1)} \supset \vec w_{\frak p(2)}^{(2)}$. Therefore $\vec w_{\frak p(3)}^{(1)} \supset \vec w_{\frak p(3)}^{(2)}$. It follows that $(\phi_{(31)(32)},\hat\phi_{(31)(32)})$ is also of type 4. We have the following commutative diagram. 
\begin{equation}\label{cd347} \begin{CD} \mathcal M^{\frak w_{\frak p(2)}^{(1)}}_{k+1,(\ell,\ell_{\frak p(2)}^{(1)},(\ell_c))}(\beta;\frak p(2);\frak A) _{\epsilon_{0},\vec{\mathcal T}^{(1)}} @ > {\frak{forget}_{\frak A,\frak A;\vec w_{\frak p(2)}^{(1)},\vec w_{\frak p(2)}^{(2)}}} >> \mathcal M^{\frak w_{\frak p(2)}^{(2)}}_{k+1,(\ell,\ell_{\frak p(2)}^{(2)},(\ell_c))}(\beta;\frak p(2);\frak A) _{\epsilon_{0},\vec{\mathcal T}^{(2)}} \\ @ AA{\subset}A @ AA{\subset}A\\ \mathcal M^{\frak w_{\frak p(3)}^{(1)}}_{k+1,(\ell,\ell_{\frak p(3)}^{(1)},(\ell_c))}(\beta;\frak p(3);\frak A) _{\epsilon'_{0},\vec{\mathcal T}^{(1) \prime}} @ > {\frak{forget}_{\frak A,\frak A;\vec w_{\frak p(3)}^{(1)},\vec w_{\frak p(3)}^{(2)}}} >> \mathcal M^{\frak w_{\frak p(3)}^{(2)}}_{k+1,(\ell,\ell_{\frak p(3)}^{(2)},(\ell_c))}(\beta;\frak p(3);\frak A) _{\epsilon'_{0},\vec{\mathcal T}^{(2) \prime}} \end{CD} \end{equation} We note that we use the same codimension 2 submanifold to put the transversal constraint. Therefore (\ref{cd347}) induces: \begin{equation}\label{cd348} \begin{CD} \mathcal M^{\frak w_{\frak p(2)}^{(1)}}_{k+1,(\ell,\ell_{\frak p(2)}^{(1)},(\ell_c))}(\beta;\frak p(2);\frak A) _{\epsilon_{0},\vec{\mathcal T}^{(1)}}^{\rm trans} @ > {\frak{forget}_{\frak A,\frak A;\vec w_{\frak p(2)}^{(1)},\vec w_{\frak p(2)}^{(2)}}} >> \mathcal M^{\frak w_{\frak p(2)}^{(2)}}_{k+1,(\ell,\ell_{\frak p(2)}^{(2)},(\ell_c))}(\beta;\frak p(2);\frak A) _{\epsilon_{0},\vec{\mathcal T}^{(2)}}^{\rm trans} \\ @ AA{\subset}A @ AA{\subset}A\\ \mathcal M^{\frak w_{\frak p(3)}^{(1)}}_{k+1,(\ell,\ell_{\frak p(3)}^{(1)},(\ell_c))}(\beta;\frak p(3);\frak A) _{\epsilon'_{0},\vec{\mathcal T}^{(1) \prime}}^{\rm trans} @ > {\frak{forget}_{\frak A,\frak A;\vec w_{\frak p(3)}^{(1)},\vec w_{\frak p(3)}^{(2)}}} >> \mathcal M^{\frak w_{\frak p(3)}^{(2)}}_{k+1,(\ell,\ell_{\frak p(3)}^{(2)},(\ell_c))}(\beta;\frak p(3);\frak A) _{\epsilon'_{0},\vec{\mathcal T}^{(2) \prime}}^{\rm trans} \end{CD} \end{equation} The commutativity of
(\ref{cd348}) gives Lemma \ref{2141} in this case. \par\medskip \noindent{\bf Case 3}: $(\phi_{(21)(22)},\hat\phi_{(21)(22)})$ is of type 3. \par We obtain the following commutative diagram in the same way. \begin{equation}\label{cd349} \begin{CD} \mathcal M^{\frak w_{\frak p(2)}^{(1)}}_{k+1,(\ell,\ell_{\frak p(2)}^{(1)},(\ell_c))}(\beta;\frak p(2);\frak A) _{\epsilon_{0},\vec{\mathcal T}^{(1)}}^{\rm trans} @ < {\frak{forget}_{\frak A,\frak A;\vec w_{\frak p(2)}^{(2)},\vec w_{\frak p(2)}^{(1)}}} << \mathcal M^{\frak w_{\frak p(2)}^{(2)}}_{k+1,(\ell,\ell_{\frak p(2)}^{(2)},(\ell_c))}(\beta;\frak p(2);\frak A) _{\epsilon_{0},\vec{\mathcal T}^{(2)}}^{\rm trans} \\ @ AA{\subset}A @ AA{\subset}A\\ \mathcal M^{\frak w_{\frak p(3)}^{(1)}}_{k+1,(\ell,\ell_{\frak p(3)}^{(1)},(\ell_c))}(\beta;\frak p(3);\frak A) _{\epsilon'_{0},\vec{\mathcal T}^{(1) \prime}}^{\rm trans} @ < {\frak{forget}_{\frak A,\frak A;\vec w_{\frak p(3)}^{(2)},\vec w_{\frak p(3)}^{(1)}}} << \mathcal M^{\frak w_{\frak p(3)}^{(2)}}_{k+1,(\ell,\ell_{\frak p(3)}^{(2)},(\ell_c))}(\beta;\frak p(3);\frak A) _{\epsilon'_{0},\vec{\mathcal T}^{(2) \prime}}^{\rm trans} \end{CD} \end{equation} All the above arrows are local diffeomorphisms. This implies the lemma in this case. The proof of Lemma \ref{2141} is complete. \end{proof} The proof of Proposition \ref{compaticoochamain} is complete. \end{proof} \par\medskip \section{Wrap-up of the construction of Kuranishi structure} \label{kstructure} In this section we complete the proof of Theorem \ref{existsKura}. We will prove the case of $\mathcal M_{k+1,\ell}(\beta)$. The case of $\mathcal M^{\rm cl}_{\ell}(\alpha)$ is the same. \par In this section we fix stabilization data $\frak w_{\frak p}$ at $\frak p$ for each $\frak p$ and always use it. We also take $\frak A = \frak C(\frak p)$ unless otherwise specified. So we omit them from the notation of the Kuranishi chart. We write $\frak d =(\frak o,\mathcal T)$.
Thus we write $$ (V(\frak p;\frak d),\mathcal E_{(\frak p;\frak d)},\frak s_{(\frak p;\frak d)}, \psi_{(\frak p;\frak d)}) $$ to denote our Kuranishi neighborhood. \par For simplicity of notation we denote by $\tilde\psi_{(\frak p;\frak d)}$ the composition of $\psi_{(\frak p;\frak d)}$ and the projection $\frak s_{(\frak p;\frak d)}^{-1}(0) \to \frak s_{(\frak p;\frak d)}^{-1}(0)/\Gamma_{\frak p}$. \par The next lemma is the main technical lemma we use for the construction. \begin{lem}\label{lem2143} There exist finite subsets $\frak P_j = \{\frak p(j,i) \mid i=1,\dots,N_j\} \subset \mathcal M_{k+1,\ell}(\beta)$ for $j=1,2,3$, and admissible $\frak d(j,1,i) > \frak d(j,2,i)$ for $j=1,2,3$, $i=1,\dots,N_j$, that satisfy the following properties. \begin{enumerate} \item For each $j=1,2,3$, $$ \bigcup_{i=1}^{N_j} \tilde\psi_{(\frak p(j,i);\frak d(j,2,i))} (\frak s_{(\frak p(j,i);\frak d(j,2,i))}^{-1}(0)) = \mathcal M_{k+1,\ell}(\beta). $$ \item The following holds for $j > j'$. If $$ \frak p(j,i) \in \tilde\psi_{(\frak p(j',i');\frak d(j',2,i'))} (\frak s_{(\frak p(j',i');\frak d(j',2,i'))}^{-1}(0)), $$ then there exists a coordinate change $$ \phi_{(j',i'),(j,i)}: V(\frak p(j,i);\frak d(j,1,i)) \to V(\frak p(j',i');\frak d(j',1,i')) $$ as in Corollary \ref{cchanges}. \item Let $j=1$ or $2$, $i_1,\dots,i_m \in \{1,\dots,N_{j+1}\}$. If $$ \bigcap_{n=1}^{m} \tilde\psi_{(\frak p(j+1,i_n);\frak d(j+1,1,i_n))} (\frak s_{(\frak p(j+1,i_n);\frak d(j+1,1,i_n))}^{-1}(0)) \ne \emptyset, $$ then there exists $i$ independent of $n$ such that $$ \frak p(j+1,i_n) \in \tilde\psi_{(\frak p(j,i);\frak d(j,2,i))} (\frak s_{(\frak p(j,i);\frak d(j,2,i))}^{-1}(0)) $$ for any $n = 1,\dots,m$. \item Let $i_j \in \{1,\dots,N_j\}$.
If $$ \frak p(3,i_3) \in \tilde\psi_{(\frak p(2,i_2);\frak d(2,2,i_2))} (\frak s_{(\frak p(2,i_2);\frak d(2,2,i_2))}^{-1}(0)) $$ and $$ \frak p(2,i_2) \in \tilde\psi_{(\frak p(1,i_1);\frak d(1,2,i_1))} (\frak s_{(\frak p(1,i_1);\frak d(1,2,i_1))}^{-1}(0)), $$ then there exists a coordinate change $$ \phi_{(1,i_1),(3,i_3)}: V(\frak p(3,i_3);\frak d(3,1,i_3)) \to V(\frak p(1,i_1);\frak d(1,1,i_1)) $$ as in Corollary \ref{cchanges}. Moreover we have \begin{equation}\label{2451} {\underline\phi}_{(1,i_1),(2,i_2)}\circ {\underline\phi}_{(2,i_2),(3,i_3)} ={\underline\phi}_{(1,i_1),(3,i_3)} \end{equation} everywhere on $U(\frak p(3,i_3);\frak d(3,1,i_3)) = V(\frak p(3,i_3);\frak d(3,1,i_3))/\Gamma_{\frak p(3,i_3)}$. \end{enumerate} \end{lem} \begin{proof} For each $\frak p \in \mathcal M_{k+1,\ell}(\beta)$, we take admissible $\frak d(\frak p,1;1) >\frak d(\frak p,1;2) >\frak d(\frak p,1;3)$. Then we have $\frak P_1 = \{\frak p(1,i) \mid i=1,\dots,N_1\}$ such that \begin{equation}\label{2351} \bigcup_{i=1}^{N_1} \tilde\psi_{(\frak p(1,i);\frak d(\frak p(1,i),1;3))} (\frak s_{(\frak p(1,i);\frak d(\frak p(1,i),1;3))}^{-1}(0)) = \mathcal M_{k+1,\ell}(\beta). \end{equation} We put $\frak d(1,1,i) = \frak d(\frak p(1,i),1;1)$, $\frak d(1,2,i) = \frak d(\frak p(1,i),1;2)$. Then, since \begin{equation}\label{2351ato} \aligned &\tilde\psi_{(\frak p(1,i);\frak d(\frak p(1,i),1;3))} (\frak s_{(\frak p(1,i);\frak d(\frak p(1,i),1;3))}^{-1}(0))\\ &\subset \tilde\psi_{(\frak p(1,i);\frak d(1,2,i))} (\frak s_{(\frak p(1,i);\frak d(1,2,i))}^{-1}(0)), \endaligned \end{equation} Lemma \ref{lem2143} (1) holds for $j=1$. \par\smallskip For each $\frak p \in \mathcal M_{k+1,\ell}(\beta)$ we take an admissible $\frak d(\frak p,2;1)$ so that the following conditions hold.
\begin{conds}\label{conds144} \begin{enumerate} \item[(a)] If $ \frak p \in \tilde\psi_{(\frak p(1,i);\frak d(1,2,i))} (\frak s_{(\frak p(1,i);\frak d(1,2,i))}^{-1}(0)), $ then there exists a coordinate change $$ \phi_{(1,i),(2,\frak p)}: V(\frak p;\frak d(\frak p,2;1)) \to V(\frak p(1,i);\frak d(1,1,i)) $$ as in Corollary \ref{cchanges}. \item[(b)] If $$ \tilde\psi_{(\frak p;\frak d(\frak p,2;1))} (\frak s_{(\frak p;\frak d(\frak p,2;1))}^{-1}(0))\cap \tilde\psi_{(\frak p(1,i);\frak d(\frak p(1,i),1;3))} (\frak s_{(\frak p(1,i);\frak d(\frak p(1,i),1;3))}^{-1}(0)) \ne \emptyset $$ then $$ \tilde\psi_{(\frak p;\frak d(\frak p,2;1))} (\frak s_{(\frak p;\frak d(\frak p,2;1))}^{-1}(0)) \subseteq \tilde\psi_{(\frak p(1,i);\frak d(\frak p(1,i),1;2))} (\frak s_{(\frak p(1,i);\frak d(\frak p(1,i),1;2))}^{-1}(0)). $$ \item[(c)] Let $\epsilon_{9}(\frak p)$ be the positive number we define below. If an element $\frak q \in \mathcal M_{k+1,\ell}(\beta)$ satisfies $ \frak q \in \tilde{\psi}_{(\frak p;\frak d(\frak p,2;1))} (\frak s_{(\frak p;\frak d(\frak p,2;1))}^{-1}(0)), $ then $d(\frak p,\frak q) < \epsilon_{9}(\frak p)$. \end{enumerate} \end{conds} Here $\epsilon_{9}(\frak p)$ is defined as follows. For each $i=1,\dots,N_1$ we put $\frak p(1) = \frak p(1,i)$, $\frak p(2) = \frak p$ and apply Proposition \ref{compaticoochamain}. We then obtain $\epsilon_{7}(i,\frak p)$. We define $$ \epsilon_{9}(\frak p) = \min \{ \epsilon_{7}(i,\frak p) \mid i=1,\dots,N_1\}. $$ The existence of such $\frak d(\frak p,2;1)$ is obvious. Furthermore for each $\frak p \in \mathcal M_{k+1,\ell}(\beta)$, we take $\frak d(\frak p,2;2),\frak d(\frak p,2;3)$ such that $\frak d(\frak p,2;1) >\frak d(\frak p,2;2) >\frak d(\frak p,2;3)$. Then we have $\frak P_2 = \{\frak p(2,i) \mid i=1,\dots,N_2\}$ such that \begin{equation}\label{2352} \bigcup_{i=1}^{N_2} \tilde\psi_{(\frak p(2,i);\frak d(\frak p(2,i),2;3))} (\frak s_{(\frak p(2,i);\frak d(\frak p(2,i),2;3))}^{-1}(0)) = \mathcal M_{k+1,\ell}(\beta). 
\end{equation} We put $\frak d(2,1,i) = \frak d(\frak p(2,i),2;1)$, $\frak d(2,2,i) = \frak d(\frak p(2,i),2;2)$. Then (\ref{2352}) and $\frak d(\frak p,2;2) >\frak d(\frak p,2;3)$ imply Lemma \ref{lem2143} (1) for $j=2$. Lemma \ref{lem2143} (2) for $(j,j')=(2,1)$ follows immediately from Condition \ref{conds144} (a). \begin{sublem}\label{sublem2145} Lemma \ref{lem2143} (3) holds for $j=1$. \end{sublem} \begin{proof} Suppose $$ \bigcap_{n=1}^m \tilde\psi_{(\frak p(2,i_n);\frak d(2,1,i_n))} (\frak s_{(\frak p(2,i_n);\frak d(2,1,i_n))}^{-1}(0)) \ne \emptyset. $$ Then (\ref{2351}) implies that there exists $i$ such that $$ \aligned &\bigcap_{n=1}^m \tilde\psi_{(\frak p(2,i_n);\frak d(2,1,i_n))} (\frak s_{(\frak p(2,i_n);\frak d(2,1,i_n))}^{-1}(0))\\ &\cap \tilde\psi_{(\frak p(1,i);\frak d(\frak p(1,i),1;3))} (\frak s_{(\frak p(1,i);\frak d(\frak p(1,i),1;3))}^{-1}(0)) \ne \emptyset. \endaligned $$ Therefore Condition \ref{conds144} (b) and (\ref{2351ato}) imply $$ \tilde\psi_{(\frak p(2,i_n);\frak d(2,1,i_n))} (\frak s_{(\frak p(2,i_n);\frak d(2,1,i_n))}^{-1}(0))\subset \tilde\psi_{(\frak p(1,i);\frak d(1,2,i))} (\frak s_{(\frak p(1,i);\frak d(1,2,i))}^{-1}(0)) $$ for any $n$. In particular $$ \frak p(2,i_n) \in \tilde\psi_{(\frak p(1,i);\frak d(1,2,i))} (\frak s_{(\frak p(1,i);\frak d(1,2,i))}^{-1}(0)) $$ as required. \end{proof} For each $\frak p \in \mathcal M_{k+1,\ell}(\beta)$ we take an admissible $\frak d(\frak p,3;1)$ so that the following conditions hold. \begin{conds}\label{cond146} \begin{enumerate} \item[(a)] If $ \frak p \in \tilde\psi_{(\frak p(2,i);\frak d(2,2,i))} (\frak s_{(\frak p(2,i);\frak d(2,2,i))}^{-1}(0)), $ then there exists a coordinate change $$ \phi_{(2,i),(3,\frak p)}: V(\frak p;\frak d(\frak p,3;1)) \to V(\frak p(2,i);\frak d(2,1,i)) $$ as in Corollary \ref{cchanges}. 
\item[(b)] If $$ \tilde\psi_{(\frak p;\frak d(\frak p,3;1))} (\frak s_{(\frak p;\frak d(\frak p,3;1))}^{-1}(0))\cap \tilde\psi_{(\frak p(2,i);\frak d(\frak p(2,i),2;3))} (\frak s_{(\frak p(2,i);\frak d(\frak p(2,i),2;3))}^{-1}(0)) \ne \emptyset, $$ then $$ \tilde\psi_{(\frak p;\frak d(\frak p,3;1))} (\frak s_{(\frak p;\frak d(\frak p,3;1))}^{-1}(0)) \subseteq \tilde\psi_{(\frak p(2,i);\frak d(\frak p(2,i),2;2))} (\frak s_{(\frak p(2,i);\frak d(\frak p(2,i),2;2))}^{-1}(0)). $$ \item[(c)] Void. \item[(d)] Let $(i_1,i_2)$ be an arbitrary pair of integers such that $$ \aligned \frak p &\in \tilde\psi_{(\frak p(2,i_2);\frak d(2,2,i_2))} (\frak s_{(\frak p(2,i_2);\frak d(2,2,i_2))}^{-1}(0)),\\ \frak p(2,i_2) &\in \tilde\psi_{(\frak p(1,i_1);\frak d(1,2,i_1))} (\frak s_{(\frak p(1,i_1);\frak d(1,2,i_1))}^{-1}(0)). \endaligned $$ Then $$ \frak d(\frak p,3;1) < \frak d(i_1,i_2). $$ Here the right hand side is defined below. \item[(e)] Under the same assumption as in (d), there exists a coordinate change $$ \phi_{(1,i_1),(3,\frak p)}: V(\frak p;\frak d(\frak p,3;1)) \to V(\frak p(1,i_1);\frak d(1,1,i_1)) $$ as in Corollary \ref{cchanges}. \end{enumerate} \end{conds} The definition of $\frak d(i_1,i_2)$ is as follows. We put $\frak p(1) = \frak p(1,i_1)$ and $\frak p(2) = \frak p(2,i_2)$ and $\frak p(3) = \frak p$. We also put $(\frak o^{(1)}, \mathcal T^{(1)}) = \frak d(1,1,i_1)$, $(\frak o^{(2)}, \mathcal T^{(2)}) = \frak d(2,1,i_2)$. Using Condition \ref{conds144} (c) we can apply Proposition \ref{compaticoochamain} to obtain $(\frak o^{(3)}_0, \mathcal T^{(3)}_0)$, which we take as $\frak d(i_1,i_2)$. \par The existence of $\frak d(\frak p,3;1)$ is obvious. Furthermore for each $\frak p\in \mathcal M_{k+1,\ell}(\beta)$, we take $\frak d(\frak p,3;2)$ with $\frak d(\frak p,3;1) > \frak d(\frak p,3;2)$.
Then we have $\frak P_3 = \{\frak p(3,i) \mid i=1,\dots,N_3\}$ such that \begin{equation}\label{23523} \bigcup_{i=1}^{N_3} \tilde\psi_{(\frak p(3,i);\frak d(\frak p(3,i),3;2))} (\frak s_{(\frak p(3,i);\frak d(\frak p(3,i),3;2))}^{-1}(0)) = \mathcal M_{k+1,\ell}(\beta). \end{equation} We put $\frak d(3,1,i) = \frak d(\frak p(3,i),3;1)$, $\frak d(3,2,i) = \frak d(\frak p(3,i),3;2)$. \par Now Lemma \ref{lem2143} (1) for $j=3$ follows from (\ref{23523}). Lemma \ref{lem2143} (2) for $(j,j')=(3,2), (3,1)$ follows from Condition \ref{cond146} (a), (e). The proof of Lemma \ref{lem2143} (3) for $j=2$ is the same as the proof of Sublemma \ref{sublem2145}. \par Finally Lemma \ref{lem2143} (4) is a consequence of Condition \ref{conds144} (c), Condition \ref{cond146} (d), (e), and Proposition \ref{compaticoochamain}. The proof of Lemma \ref{lem2143} is complete. \end{proof} \begin{proof} [Proof of Theorem \ref{existsKura}] We start the construction of a Kuranishi structure on $\mathcal M_{k+1,\ell}(\beta)$. Let $\frak p \in \mathcal M_{k+1,\ell}(\beta)$. There exists $i(\frak p) \in \{1,\dots,N_3\}$ such that $$ \frak p \in \tilde\psi_{(\frak p(3,i(\frak p));\frak d(\frak p(3,i(\frak p)),3;2))} (\frak s_{(\frak p(3,i(\frak p));\frak d(\frak p(3,i(\frak p)),3;2))}^{-1}(0)). $$ We take any such $i(\frak p)$ and fix it. Choose $\hat{\frak p} \in V(\frak p(3,i(\frak p));\frak d(3,1,i(\frak p)))$ such that $$ \tilde\psi_{(\frak p(3,i(\frak p));\frak d(\frak p(3,i(\frak p)),3;2))}(\hat{\frak p}) =\frak p. $$ We have an embedding $$ \bigoplus_{c \in \frak C(\frak p)} \mathcal E_c \subset \mathcal E_{(\frak p(3,i(\frak p));\frak d(\frak p(3,i(\frak p)),3;2))} $$ of vector bundles. (This is because $\frak C(\frak p) \subseteq \frak C(\frak p(3,i(\frak p)))$.)
We take a neighborhood $V_{\frak p}$ of $\hat{\frak p}$ in the set $$ W_{\frak p}^{(3)} = \bigg\{ \frak v \in V(\frak p(3,i(\frak p));\frak d(3,1,i(\frak p))) \mid \frak s_{(\frak p(3,i(\frak p));\frak d(\frak p(3,i(\frak p)),3;2))}(\frak v) \in \bigoplus_{c \in \frak C(\frak p)} \mathcal E_c\bigg\} $$ such that $V_{\frak p}$ is $\Gamma_{\frak p}$-invariant. The sum $\bigoplus_{c \in \frak C(\frak p)} \mathcal E_c$ defines a $\Gamma_{\frak p}$-equivariant vector bundle on $V_{\frak p}$ that we denote by $E_{\frak p}$. The restriction to $V_{\frak p}$ of the section $\frak s_{(\frak p(3,i(\frak p));\frak d(\frak p(3,i(\frak p)),3;2))}$ and the map $\tilde\psi_{(\frak p(3,i(\frak p));\frak d(\frak p(3,i(\frak p)),3;2))}$ (divided by $\Gamma_{\frak p(3,i(\frak p))}$) give our $\frak s_{\frak p}$ and $\psi_{\frak p}$, respectively. We can show easily that $(V_{\frak p},\Gamma_{\frak p},E_{\frak p},\frak s_{\frak p},\psi_{\frak p})$ is a Kuranishi chart of $\frak p$. \par We next define coordinate changes. Let $\frak q \in \psi_{\frak p}(\frak s^{-1}_{\frak p}(0))$. This implies $\frak C(\frak q) \subseteq \frak C(\frak p)$. \par We note that $i(\frak p)$ may be different from $i(\frak q)$. On the other hand, we have $$ \aligned &\tilde\psi_{(\frak p(3,i(\frak p));\frak d(3,1,i(\frak p)))} (\frak s_{(\frak p(3,i(\frak p));\frak d(3,1,i(\frak p)))}^{-1}(0))\\ &\cap \tilde\psi_{(\frak p(3,i(\frak q));\frak d(3,1,i(\frak q)))} (\frak s_{(\frak p(3,i(\frak q));\frak d(3,1,i(\frak q)))}^{-1}(0)) \ne \emptyset. \endaligned $$ In fact, $\frak q$ is contained in the intersection. Therefore by Lemma \ref{lem2143} (3), there exists $i(\frak p,\frak q)$ such that $$ \frak p(3,i(\frak p)), \frak p(3,i(\frak q)) \in \tilde\psi_{(\frak p(2,i(\frak p,\frak q));\frak d(2,2,i(\frak p,\frak q)))} (\frak s_{(\frak p(2,i(\frak p,\frak q));\frak d(2,2,i(\frak p,\frak q)))}^{-1}(0)).
$$ Therefore by Lemma \ref{lem2143} (2), we have coordinate changes: $$ \phi_{(\frak p(2,i(\frak p,\frak q))(3,i(\frak p))} : V(\frak p(3,i(\frak p));\frak d(3,1,i(\frak p))) \to V(\frak p(2,i(\frak p,\frak q));\frak d(2,1,i(\frak p,\frak q))) $$ and $$ \phi_{(\frak p(2,i(\frak p,\frak q))(3,i(\frak q))} : V(\frak p(3,i(\frak q));\frak d(3,1,i(\frak q))) \to V(\frak p(2,i(\frak p,\frak q));\frak d(2,1,i(\frak p,\frak q))). $$ We sometimes write them as $\phi_{(\frak p\frak q)\frak p}$, $\phi_{(\frak p\frak q)\frak q}$ for simplicity. \par By the compatibility of $\psi$ with coordinate changes, $$ \frak q \in \phi_{(\frak p(2,i(\frak p,\frak q))(3,i(\frak p))}(V_{\frak p}) \cap \phi_{(\frak p(2,i(\frak p,\frak q))(3,i(\frak q))} (V_{\frak q}). $$ We consider $$ W_{\frak q}^{(2)} = \bigg\{ \frak v \in V(\frak p(2,i(\frak p,\frak q));\frak d(2,1,i(\frak p,\frak q))) \,\,\bigg\vert\,\, \frak s_{(\frak p(2,i(\frak p,\frak q));\frak d(2,1,i(\frak p,\frak q)))}(\frak v) \in \bigoplus_{c \in \frak C(\frak q)} \mathcal E_c\bigg\}. $$ Both $ \phi_{(\frak p(2,i(\frak p,\frak q))(3,i(\frak p))}(V_{\frak p}) \cap W^{(2)}_{\frak q} $ and $\phi_{(\frak p(2,i(\frak p,\frak q))(3,i(\frak q))} (V_{\frak q})$ are open subsets of $W^{(2)}_{\frak q}$. This fact is proved by dimension counting and by the fact that $\phi_{(\frak p(2,i(\frak p,\frak q))(3,i(\frak p))}$ and $\phi_{(\frak p(2,i(\frak p,\frak q))(3,i(\frak q))}$ are embeddings. \par We put \begin{equation} V_{\frak p\frak q} = \phi_{(\frak p(2,i(\frak p,\frak q))(3,i(\frak q))}^{-1} (\phi_{(\frak p(2,i(\frak p,\frak q))(3,i(\frak p))}(V_{\frak p}) \cap W^{(2)}_{\frak q}) \end{equation} and \begin{equation}\label{eq23777} \phi_{\frak p\frak q} = \phi_{(\frak p(2,i(\frak p,\frak q))(3,i(\frak p))}^{-1} \circ \phi_{(\frak p(2,i(\frak p,\frak q))(3,i(\frak q))}.
\end{equation} We can define $\hat{\phi}_{\frak p\frak q}$ by using $\hat{\phi}_{(\frak p(2,i(\frak p,\frak q))(3,i(\frak p))}$ and $\hat{\phi}_{(\frak p(2,i(\frak p,\frak q))(3,i(\frak q))}$. We have thus constructed a coordinate change. \par We finally prove the compatibility of coordinate changes. Let $\frak q \in \tilde\psi_{\frak p}(\frak s^{-1}_{\frak p}(0))$, and $\frak r \in \tilde\psi_{\frak q}(\frak s^{-1}_{\frak q}(0))$. We then obtain $i(\frak p,\frak q)$, $i(\frak p,\frak r)$, $i(\frak q,\frak r)$ as above. \par We note that $$ \aligned &\tilde\psi_{(\frak p(2,i(\frak p,\frak q));\frak d(2,2,i(\frak p,\frak q)))} (\frak s_{(\frak p(2,i(\frak p,\frak q));\frak d(2,2,i(\frak p,\frak q)))}^{-1}(0))\\ &\supseteq \tilde\psi_{(\frak p(3,i(\frak q));\frak d(3,1,i(\frak q)))} (\frak s_{(\frak p(3,i(\frak q));\frak d(3,1,i(\frak q)))}^{-1}(0))\\ &\supseteq \tilde\psi_{\frak q}(\frak s_{\frak q}^{-1}(0)) \ni \frak r. \endaligned $$ Therefore $$ \aligned &\tilde\psi_{(\frak p(2,i(\frak p,\frak q));\frak d(2,2,i(\frak p,\frak q)))} (\frak s_{(\frak p(2,i(\frak p,\frak q));\frak d(2,2,i(\frak p,\frak q)))}^{-1}(0))\\ &\cap \tilde\psi_{(\frak p(2,i(\frak q,\frak r));\frak d(2,2,i(\frak q,\frak r)))} (\frak s_{(\frak p(2,i(\frak q,\frak r));\frak d(2,2,i(\frak q,\frak r)))}^{-1}(0))\\ &\cap \tilde\psi_{(\frak p(2,i(\frak p,\frak r));\frak d(2,2,i(\frak p,\frak r)))} (\frak s_{(\frak p(2,i(\frak p,\frak r));\frak d(2,2,i(\frak p,\frak r)))}^{-1}(0)) \endaligned $$ is nonempty.
Therefore Lemma \ref{lem2143} (2) and (3) imply that there exists $i(\frak p,\frak q,\frak r)$ such that we have coordinate changes: $$ \aligned {\underline\phi}_{(\frak p(1,i(\frak p,\frak q,\frak r))(2,i(\frak p,\frak q))} : &U(\frak p(2,i(\frak p,\frak q));\frak d(2,1,i(\frak p,\frak q)))\\ &\to U(\frak p(1,i(\frak p,\frak q,\frak r));\frak d(1,1,i(\frak p,\frak q,\frak r)))\\ {\underline\phi}_{(\frak p(1,i(\frak p,\frak q,\frak r))(2,i(\frak q,\frak r))} : &U(\frak p(2,i(\frak q,\frak r));\frak d(2,1,i(\frak q,\frak r)))\\ &\to U(\frak p(1,i(\frak p,\frak q,\frak r));\frak d(1,1,i(\frak p,\frak q,\frak r)))\\ {\underline\phi}_{(\frak p(1,i(\frak p,\frak q,\frak r))(2,i(\frak p,\frak r))} : &U(\frak p(2,i(\frak p,\frak r));\frak d(2,1,i(\frak p,\frak r)))\\ &\to U(\frak p(1,i(\frak p,\frak q,\frak r));\frak d(1,1,i(\frak p,\frak q,\frak r))). \endaligned $$ \footnote{Here $U(\frak p(2,i(\frak p,\frak q));\frak d(2,1,i(\frak p,\frak q))) = V(\frak p(2,i(\frak p,\frak q));\frak d(2,1,i(\frak p,\frak q)))/ \Gamma_{\frak p(2,i(\frak p,\frak q))}$ etc..} We write them as ${\underline\phi}_{(\frak p\frak q\frak r)(\frak p\frak q)}$, ${\underline\phi}_{(\frak p\frak q\frak r)(\frak q\frak r)}$, ${\underline\phi}_{(\frak p\frak q\frak r)(\frak p\frak r)}$. By Lemma \ref{lem2143} (4) we obtain $$ \aligned {\underline\phi}_{(\frak p(1,i(\frak p,\frak q,\frak r))(3,i(\frak p))} : &U(\frak p(3,i(\frak p));\frak d(3,1,i(\frak p)))\\ &\to U(\frak p(1,i(\frak p,\frak q,\frak r));\frak d(1,1,i(\frak p,\frak q,\frak r)))\\ {\underline\phi}_{(\frak p(1,i(\frak p,\frak q,\frak r))(3,i(\frak q))} : &U(\frak p(3,i(\frak q));\frak d(3,1,i(\frak q)))\\ &\to U(\frak p(1,i(\frak p,\frak q,\frak r));\frak d(1,1,i(\frak p,\frak q,\frak r)))\\ {\underline\phi}_{(\frak p(1,i(\frak p,\frak q,\frak r))(3,i(\frak r))} : &U(\frak p(3,i(\frak r));\frak d(3,1,i(\frak r)))\\ &\to U(\frak p(1,i(\frak p,\frak q,\frak r));\frak d(1,1,i(\frak p,\frak q,\frak r))). 
\endaligned $$ We write them as ${\underline\phi}_{(\frak p\frak q\frak r)\frak p}$, ${\underline\phi}_{(\frak p\frak q\frak r)\frak q}$, ${\underline\phi}_{(\frak p\frak q\frak r)\frak r}$. \par By Lemma \ref{lem2143} (4) we have $$ \aligned &{\underline\phi}_{(\frak p\frak q\frak r)(\frak p\frak q)} \circ {\underline\phi}_{(\frak p\frak q)\frak p} = {\underline\phi}_{(\frak p\frak q\frak r)\frak p}, \qquad {\underline\phi}_{(\frak p\frak q\frak r)(\frak p\frak q)} \circ {\underline\phi}_{(\frak p\frak q)\frak q} = {\underline\phi}_{(\frak p\frak q\frak r)\frak q}, \\ &{\underline\phi}_{(\frak p\frak q\frak r)(\frak q\frak r)} \circ {\underline\phi}_{(\frak q\frak r)\frak q} = {\underline\phi}_{(\frak p\frak q\frak r)\frak q}, \qquad {\underline\phi}_{(\frak p\frak q\frak r)(\frak q\frak r)} \circ {\underline\phi}_{(\frak q\frak r)\frak r} = {\underline\phi}_{(\frak p\frak q\frak r)\frak r}, \\ &{\underline\phi}_{(\frak p\frak q\frak r)(\frak p\frak r)} \circ {\underline\phi}_{(\frak p\frak r)\frak r} = {\underline\phi}_{(\frak p\frak q\frak r)\frak r}, \qquad {\underline\phi}_{(\frak p\frak q\frak r)(\frak p\frak r)} \circ {\underline\phi}_{(\frak p\frak r)\frak p} = {\underline\phi}_{(\frak p\frak q\frak r)\frak p}.
\endaligned $$ Now we calculate: $$ \aligned {\underline\phi}_{\frak p\frak q} \circ {\underline\phi}_{\frak q\frak r} &= {\underline\phi}_{(\frak p\frak q)\frak p}^{-1}\circ {\underline\phi}_{(\frak p\frak q)\frak q}\circ {\underline\phi}_{(\frak q\frak r)\frak q}^{-1}\circ {\underline\phi}_{(\frak q\frak r)\frak r}\\ &= {\underline\phi}_{(\frak p\frak q\frak r)\frak p}^{-1}\circ {\underline\phi}_{(\frak p\frak q\frak r)(\frak p\frak q)} \circ {\underline\phi}_{(\frak p\frak q)\frak q} \circ {\underline\phi}_{(\frak q\frak r)\frak q}^{-1}\circ{\underline\phi}_{(\frak p\frak q\frak r)(\frak q\frak r)}^{-1}\circ {\underline\phi}_{(\frak p\frak q\frak r)\frak r}\\ &={\underline\phi}_{(\frak p\frak q\frak r)\frak p}^{-1}\circ{\underline\phi}_{(\frak p\frak q\frak r)\frak r}\\ &={\underline\phi}_{(\frak p\frak r)\frak p}^{-1} \circ{\underline\phi}_{(\frak p\frak q\frak r)(\frak p\frak r)}^{-1}\circ{\underline\phi}_{(\frak p\frak q\frak r)(\frak p\frak r)}\circ{\underline\phi}_{(\frak p\frak r)\frak r}\\ &= {\underline\phi}_{(\frak p\frak r)\frak p}^{-1} \circ{\underline\phi}_{(\frak p\frak r)\frak r} = {\underline\phi}_{\frak p\frak r}. \endaligned $$ Note (\ref{2451}) holds everywhere on $U(\frak p(3,i_3);\frak d(3,1,i_3))$. Therefore we can perform the above calculation everywhere on ${\underline\phi}_{\frak q\frak r}^{-1}(U_{\frak p\frak q}) \cap U_{\frak p\frak r}$. (The maps appearing in the intermediate stages of the calculation are defined on larger domains.) \par The proof of the consistency of the bundle maps $\hat{{\underline\phi}}_{\frak p\frak q},$ $\hat{{\underline\phi}}_{\frak q\frak r}$, $\hat{{\underline\phi}}_{\frak p\frak r}$ is the same, using $\hat{{\underline\phi}}_{(\frak p\frak q\frak r)\frak r}$ etc. \par The proof of Theorem \ref{existsKura} is now complete. \end{proof} \par\medskip \section{Appendix: Proof of Proposition \ref{changeinfcoorprop}} \label{proposss} In this section we prove Propositions \ref{changeinfcoorprop}, \ref{reparaexpest} and Lemma \ref{changeinfcoorproppara}.
It seems likely that there are several different ways to prove them. We prove them by an alternating method similar to the ones used in the proofs of Theorems \ref{gluethm1}, \ref{exdecayT}, \ref{gluethm3}, \ref{exdecayT33}. \par In view of Lemma \ref{lempsi12}, it suffices to prove them in the case $\frak x = \frak Y_0$. So we assume it throughout this section. \par We begin by describing the situation. We consider the universal bundle (\ref{fibrationsigma}). The base space $\frak V(\frak x_{\rm v})$ is a neighborhood of $\frak x_{\rm v}$ in the Deligne-Mumford moduli space. Suppose we have two coordinates at infinity, which we write $\frak w{(j)}$, $j=1,2$. We denote the universal bundle (\ref{fibrationsigma}) over $\frak V(\frak x_{\rm v})$ that is a part of $\frak w{(j)}$ by \begin{equation}\label{universal} \pi^{(j)} : \frak M^{(j)}_{\frak x_{\rm v}} \to \frak V(\frak x_{\rm v}). \end{equation} Actually $\pi^{(1)} = \pi^{(2)}$ but we distinguish them.\footnote{To prove Lemma \ref{changeinfcoorproppara}, we need to consider a parametrized family and so the parameter $\xi$ should be added to many of the objects we define. To simplify the notation we omit it.} The fiber at the base point $\frak x_{\rm v}$ is written as $\Sigma^{(j)}_{\rm v}$ and the fiber at $\rho_{\rm v} \in \frak V(\frak x_{\rm v})$ is written as $\Sigma^{\rho, (j)}_{\rm v}$. \par We have an isomorphism \begin{equation}\label{universaliso} \hat\varphi_{12} : \frak M^{(2)}_{\frak x_{\rm v}} \to \frak M^{(1)}_{\frak x_{\rm v}} \end{equation} of fiber bundles that preserves fiberwise complex structures and marked points. Such an isomorphism is unique since we assumed $\frak x_{\rm v}$ to be stable. \par By Definition \ref{coordinatainfdef} (5) we have a trivialization: \begin{equation}\label{universaltrivia} \varphi^{(j)}_{\rm v} : \Sigma^{(j)}_{\rm v} \times \frak V(\frak x_{\rm v}) \to \frak M^{(j)}_{\frak x_{\rm v}}.
\end{equation} The map $\varphi^{(j)}_{\rm v}$ is a diffeomorphism of fiber bundles of $C^{\infty}$-class, and preserves the complex structure on the neck (ends). Moreover it preserves the $\Gamma_{\frak p}$-action and the marked points. \par Let $\rho = (\rho_{\rm v})$. The restriction of the composition $(\varphi^{(1)}_{\rm v})^{-1} \circ \hat\varphi_{12}\circ \varphi^{(2)}_{\rm v}$ to the fiber at $\rho_{\rm v} \in \frak V(\frak x_{\rm v})$ becomes a diffeomorphism \begin{equation}\label{3362} u^{\rho}_{\rm v} : (\Sigma^{(2)}_{\rm v},j^{(2)}_{\rho}) \to (\Sigma^{(1)}_{\rm v},j^{(1)}_{\rho}). \end{equation} We note that $u^{\rho}_{\rm v}$ is a diffeomorphism and is biholomorphic in the neck region. (Note that the complex structure of the neck region is fixed by the definition of the coordinate at infinity.) It is also biholomorphic (everywhere) with respect to the family of complex structures $j^{(1)}_{\rho}, j^{(2)}_{\rho}$ parametrized by $\rho$. \par The map (\ref{3362}) preserves the marked points and is $\Gamma_{\frak x}$-equivariant. We also assume that the image of the neck region under $u^{\rho}_{\rm v}$ is contained in the neck region. (We can always assume so by extending the neck of the coordinate at infinity $\frak w(1)$ of the source.) Hereafter we write $\Sigma^{\rho,(j)}_{\rm v} = (\Sigma^{(j)}_{\rm v},j^{(j)}_{\rho})$ when we do not need to write $j^{(j)}_{\rho}$ explicitly. \begin{rem} We fix a trivialization as a smooth fiber bundle since it is important to fix a parametrization to study the $\rho$-derivative of the $\rho$-parametrized family of maps from the fibers. \end{rem} \par In (\ref{2256}) we introduced the map \begin{equation}\nonumber \frak v_{(\frak y_2,\vec T_2,\vec \theta_2)} : \Sigma_{(\frak y_2,\vec T_2,\vec \theta_2)} \to \Sigma_{(\frak y_1,\vec T_1,\vec \theta_1)}.
\end{equation} Here the marked bordered curves $\Sigma_{(\frak y_j,\vec T_j,\vec \theta_j)}$ ($j=1,2$) are obtained by gluing $\Sigma^{(j)}_{\rm v}$ in a way parametrized by $\frak y_j,\vec T_j,\vec \theta_j$. The idea of the proof is to construct the map $\frak v_{(\frak y_2,\vec T_2,\vec \theta_2)}$ by gluing the maps $u^{\rho}_{\rm v}$ using the alternating method. In this section we use the notation $u$, $\rho$ in place of $\frak v$, $\frak y$. \par We introduce several function spaces. Let $$ \frak y = \rho = (\rho_{\rm v}) \in \prod_{{\rm v} \in C^{0}(\mathcal G_{\frak x})}\frak V(\frak x_{\rm v}). $$ We write $\Sigma^{\rho, (j)}_{\vec T,\vec \theta}$ following the notation of the gluing construction in Subsection \ref{glueing}. \par We use the decomposition (\ref{sigmayv}) and (\ref{neddecomposit}) with the coordinate (\ref{neckcoordinate2}). The domains (\ref{DemanAetc}) are also used. We use the bump functions (\ref{eq201})-(\ref{e2delta}). \par On the function space \begin{equation}\label{363form} L^2_{m,\delta}(\Sigma^{\rho,(2)}_{\rm v};(u_{\rm v}^{\rho})^*T\Sigma^{\rho',(1)}_{\rm v}\otimes \Lambda^{01}) \end{equation} we define the norm \begin{equation} \Vert s\Vert^2_{L^2_{m,\delta}} = \sum_{k=0}^m \int_{\Sigma_{\rm v}^{\rho}} e_{\rm v,\delta} \vert \nabla^k s\vert^2 \text{\rm vol}_{\Sigma_{\rm v}^{\rho}}. \end{equation} We modify Definition \ref{Sobolev263} as follows. \begin{defn}\label{Sobolev26322} The Sobolev space $$ L^2_{m+1,\delta}((\Sigma_{\rm v}^{\rho,(2)},\partial \Sigma_{\rm v}^{\rho,(2)});(u_{\rm v}^{\rho})^*T\Sigma_{\rm v}^{\rho',(1)},(u_{\rm v}^{\rho})^*T\partial \Sigma_{\rm v}^{\rho',(1)}) $$ consists of elements $(s,\vec v)$ with the following properties.
\begin{enumerate} \item $\vec v = (v_{\rm e})$ where $\rm e$ runs over the set of edges of $\rm v$ and $$ v_{\rm e} = c_1\frac{\partial}{\partial \tau_{\rm e}} + c_2\frac{\partial}{\partial t_{\rm e}} $$ (in case $\rm e\in C^1_{\rm c}(\mathcal G)$) or $$ v_{\rm e} = c\frac{\partial}{\partial \tau_{\rm e}} $$ (in case $\rm e\in C^1_{\rm o}(\mathcal G)$). Here $c,c_1,c_2 \in \R$. \item The following norm is finite. \begin{equation}\label{normformjulamulti} \aligned &\Vert (s,\vec v)\Vert^2_{L^2_{m+1,\delta}} \\= &\sum_{k=0}^{m+1} \int_{K_{\rm v}} \vert \nabla^k s\vert^2 \text{\rm vol}_{\Sigma_{\rm v}^{\rho}}+ \sum_{{\rm e} \text{: edges of $\rm v$}}\Vert v_{\rm e}\Vert^2\\ &+ \sum_{k=0}^{m+1}\sum_{{\rm e} \text{: edges of $\rm v$}} \int_{\text{e-th end}} e_{\rm v,\delta}\vert \nabla^k(s - \text{\rm Pal}(v_{\rm e}))\vert^2 \text{\rm vol}_{\Sigma_{\rm v}^{\rho}}. \endaligned \end{equation} Here $\text{\rm Pal}$ is defined by the canonical trivialization of the tangent bundle on the neck region. \end{enumerate} \end{defn} In case ${\rm v}\in C^0_{\rm s}(\mathcal G_{\frak x})$ we use the function space $L^2_{m+1,\delta}(\Sigma_{{\rm v}}^{\rho,(2)}; (u_{\rm v}^{\rho})^*T\Sigma_{\rm v}^{\rho',(1)})$ in place of $L^2_{m+1,\delta}((\Sigma_{\rm v}^{\rho,(2)},\partial \Sigma_{\rm v}^{\rho,(2)});(u_{\rm v}^{\rho})^*T\Sigma_{\rm v}^{\rho',(1)},(u_{\rm v}^{\rho})^*T\partial \Sigma_{\rm v}^{\rho',(1)})$.
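The content of the norm (\ref{normformjulamulti}) can be restated as follows; this is only a heuristic reading of the formula. \begin{rem} The first two terms of (\ref{normformjulamulti}) control $s$ on the core $K_{\rm v}$ and the constant vectors $v_{\rm e}$, while the third term controls the weighted difference $s - \text{\rm Pal}(v_{\rm e})$ on the ends. Thus an element $(s,\vec v)$ of finite norm is a section $s$ which is asymptotic, on the $\rm e$-th end, to the parallel translation of $v_{\rm e}$, at a rate controlled by the weight $e_{\rm v,\delta}$. \end{rem}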
We do not assume any condition similar to Definition \ref{Lhattaato1} and put \begin{equation}\label{consthomo0thcomplex} \aligned &L^2_{m+1,\delta}((\Sigma^{\rho,(2)},\partial \Sigma^{\rho,(2)}) ;(u_{\rm v}^{\rho})^*T\Sigma^{\rho',(1)},(u^{\rho})^*T\partial \Sigma^{\rho',(1)})\\ = &\bigoplus_{{\rm v}\in C^0_{\rm d}(\mathcal G_{\frak x})} L^2_{m+1,\delta}((\Sigma_{{\rm v}}^{\rho,(2)},\partial \Sigma_{\rm v}^{\rho,(2)});(u_{\rm v}^{\rho})^*T\Sigma_{\rm v}^{\rho',(1)}, (u_{\rm v}^{\rho})^*T\partial \Sigma_{\rm v}^{\rho',(1)})\\ &\oplus \bigoplus_{{\rm v}\in C^0_{\rm s}(\mathcal G_{\frak x})} L^2_{m+1,\delta}(\Sigma_{{\rm v}}^{\rho,(2)};(u_{\rm v}^{\rho})^*T\Sigma_{\rm v}^{\rho',(1)}). \endaligned \end{equation} The sum of (\ref{363form}) over $\rm v$ is denoted by $$ L^2_{m,\delta}(\Sigma^{\rho,(2)};(u^{\rho})^*T\Sigma^{\rho',(1)}\otimes \Lambda^{01}). $$ \par We next define weighted Sobolev norms for the sections of various bundles on $\Sigma_{\vec T,\vec \theta}^{\rho,(2)}$. Here $\Sigma_{\vec T,\vec \theta}^{\rho,(2)}$ was denoted by $\Sigma_{\vec T,\vec \theta}^{\rho}$ in Subsection \ref{glueing}. Let $$ u' : (\Sigma_{\vec T,\vec \theta}^{\rho,(2)},\partial \Sigma_{\vec T,\vec \theta}^{\rho,(2)}) \to (\Sigma_{\vec T',\vec \theta'}^{\rho',(1)},\partial \Sigma_{\vec T',\vec \theta'}^{\rho',(1)}) $$ be a diffeomorphism that sends each neck region of the source to the corresponding neck region of the target. We first consider the case when all $T_{\rm e} \ne \infty$. In this case $\Sigma_{\vec T,\vec \theta}^{\rho,(j)}$ is compact. We consider an element $$ s \in L^2_{m+1}((\Sigma_{\vec T,\vec \theta}^{\rho,(2)},\partial \Sigma_{\vec T,\vec \theta}^{\rho,(2)});(u')^*T\Sigma_{\vec T',\vec \theta'}^{\rho',(1)}, (u')^*T\partial \Sigma_{\vec T',\vec \theta'}^{\rho',(1)}). $$ Since we take $m$ large the section $s$ is continuous. We take a point $(0,1/2)_{\rm e}$ in the $\rm e$-th neck. 
So $s((0,1/2)_{\rm e}) \in T_{u'((0,1/2)_{\rm e})}\Sigma_{\vec T',\vec \theta'}^{\rho',(1)}$ is well-defined. \par We use a canonical trivialization of the tangent bundle in the neck regions to define $\text{\rm Pal}$ below. We put \begin{equation}\label{3655multi} \aligned \Vert s\Vert^2_{L^2_{m+1,\delta}} = &\sum_{k=0}^{m+1}\sum_{\rm v} \int_{K_{\rm v}} \vert \nabla^k s\vert^2 \text{\rm vol}_{\Sigma_{\rm v}^{\rho}}\\ &+ \sum_{k=0}^{m+1}\sum_{\rm e} \int_{\text{e-th neck}} e_{\vec T,\delta}\vert \nabla^k(s - \text{\rm Pal}(s((0,1/2)_{\rm e})))\vert^2 dt_{\rm e}d\tau_{\rm e} \\&+ \sum_{\rm e}\Vert s((0,1/2)_{\rm e})\Vert^2. \endaligned \end{equation} For a section $ s \in L^2_{m}(\Sigma_{\vec T,\vec \theta}^{\rho,(2)};(u')^*T\Sigma_{\vec T',\vec \theta'}^{\rho',(1)}\otimes \Lambda^{01}) $ we define \begin{equation}\label{366} \Vert s\Vert^2_{L^2_{m,\delta}} = \sum_{k=0}^{m} \int_{\Sigma_T} e_{T,\delta}\vert \nabla^k s\vert^2 \text{\rm vol}_{\Sigma_T}. \end{equation} \par We next consider the case when some of the edges $\rm e$ have infinite length, namely $T_{\rm e} = \infty$. Let $C^{1,\rm{inf}}_{\rm o}(\mathcal G_{\frak x},\vec T)$ (resp. $C^{1,\rm{inf}}_{\rm c}(\mathcal G_{\frak x},\vec T)$) be the set of elements $\rm e$ in $C^{1}_{\rm o}(\mathcal G_{\frak x})$ (resp. $C^{1}_{\rm c}(\mathcal G_{\frak x})$) with $T_{\rm e} = \infty$ and $C^{1,\rm{fin}}_{\rm o}(\mathcal G_{\frak x},\vec T)$ (resp. $C^{1,\rm{fin}}_{\rm c}(\mathcal G_{\frak x},\vec T)$) be the set of elements of $C^{1}_{\rm o}(\mathcal G_{\frak x})$ (resp. $C^{1}_{\rm c}(\mathcal G_{\frak x})$) with $T_{\rm e} \ne \infty$. Note that the ends of $\Sigma_{\vec T,\vec \theta}^{\rho}$ correspond two to one to elements of $C^{1,\rm{inf}}_{\rm o}(\mathcal G_{\frak x},\vec T) \cup C^{1,\rm{inf}}_{\rm c}(\mathcal G_{\frak x},\vec T)$.
The ends that correspond to an element of $C^{1,\rm{inf}}_{\rm o}(\mathcal G_{\frak x},\vec T)$ are $([-5T_{\rm e},\infty) \times [0,1]) \cup ((-\infty,5T_{\rm e}] \times [0,1])$ and the ends that correspond to $C^{1,\rm{inf}}_{\rm c}(\mathcal G_{\frak p},\vec T)$ are $([-5T_{\rm e},\infty) \times S^1) \cup ((-\infty,5T_{\rm e}] \times S^1)$. We have a weight function $e_{\rm v,\delta}(\tau_{\rm e},t_{\rm e})$ on each of these ends. \begin{defn}\label{Def3.32} An element of $$ L^2_{m+1,\delta}((\Sigma_{\vec T,\vec \theta}^{\rho,(2)},\partial \Sigma_{\vec T,\vec \theta}^{\rho,(2)}); (u')^*T\Sigma_{\vec T',\vec \theta'}^{\rho',(1)},(u')^*T\partial\Sigma_{\vec T',\vec \theta'}^{\rho',(1)}) $$ is a pair $(s,\vec v)$ such that: \begin{enumerate} \item $s$ is a section of $(u')^*T\Sigma_{\vec T',\vec \theta'}^{\rho',(1)}$ on $\Sigma_{\vec T,\vec \theta}^{\rho,(2)}$ minus the singular points $z_{\rm e}$ corresponding to the edges $\rm e$ with $T_{\rm e} = \infty$. \item $s$ is locally of $L^2_{m+1}$ class. \item On $\partial \Sigma_{\vec T,\vec \theta}^{\rho,(2)}$ the restriction of $s$ is in $(u')^*T\partial\Sigma_{\vec T',\vec \theta'}^{\rho',(1)}$. \item $\vec v = (v_{\rm e})$ where ${\rm e}$ runs over $C^{1,{\rm inf}}({\mathcal G}_{\frak p},{\vec T})$ and $v_{\rm e}$ is as in Definition \ref{Sobolev26322} (1). \item For each $\rm e$ with $T_{\rm e} = \infty$ the integral \begin{equation}\label{intatinfedge356} \aligned &\sum_{k=0}^{m+1}\int_{0}^{\infty} \int_{t_{\rm e}} e_{\rm v,\delta}(\tau_{\rm e},t_{\rm e})\vert\nabla^k(s(\tau_{\rm e},t_{\rm e}) - {\rm Pal}(v_{\rm e}))\vert^2 d\tau_{\rm e} dt_{\rm e}\\ &+ \sum_{k=0}^{m+1}\int^{0}_{-\infty} \int_{t_{\rm e}} e_{\rm v,\delta}(\tau_{\rm e},t_{\rm e})\vert\nabla^k(s(\tau_{\rm e},t_{\rm e}) - {\rm Pal}(v_{\rm e}))\vert^2 d\tau_{\rm e} dt_{\rm e} \endaligned \end{equation} is finite. (Here we integrate over ${t_{\rm e}\in [0,1]}$ (resp. ${t_{\rm e}\in S^1}$) if ${\rm e }\in C_{\rm o}^{1,{\rm inf}}({\mathcal G}_{\frak p},\vec T)$ (resp.
${\rm e} \in C_{\rm c}^{1,{\rm inf}}({\mathcal G}_{\frak p},\vec T)$). \item The section $s$ vanishes at each marked point. \end{enumerate} We define \begin{equation}\label{368} \Vert (s,\vec v)\Vert^2_{L^2_{m+1,\delta}} = (\ref{3655multi}) + \sum_{{\rm e} \in C^{1,{\rm inf}}({\mathcal G}_{\frak p},\vec T)}(\ref{intatinfedge356}) + \sum_{{\rm e} \in C^{1,{\rm inf}}(\mathcal G_{\frak p},\vec T)}\Vert v_{{\rm e}}\Vert^2. \end{equation} An element of $$ L^2_{m,\delta}(\Sigma_{\vec T,\vec \theta}^{\rho,(2)};(u')^*T\Sigma_{\vec T',\vec \theta'}^{\rho',(1)}\otimes \Lambda^{01}) $$ is a section $s$ of the bundle $(u')^*T\Sigma_{\vec T',\vec \theta'}^{\rho',(1)}\otimes \Lambda^{01}$ such that it is locally of $L^2_{m}$-class and \begin{equation}\label{369} \aligned &\sum_{k=0}^{m}\int_{0}^{\infty} \int_{t_{\rm e}} e_{\rm v,\delta}\vert\nabla^k s(\tau_{\rm e},t_{\rm e})\vert^2 d\tau_{\rm e} dt_{\rm e}\\ &+ \sum_{k=0}^{m}\int^{0}_{-\infty} \int_{t_{\rm e}} e_{\rm v,\delta}\vert\nabla^k s(\tau_{\rm e},t_{\rm e})\vert^2 d\tau_{\rm e} dt_{\rm e} \endaligned \end{equation} is finite. We define \begin{equation}\label{370} \Vert s\Vert^2_{L^2_{m,\delta}} = (\ref{366}) + \sum_{{\rm e} \in C^{1,{\rm inf}}({\mathcal G}_{\frak p},\vec T)}(\ref{369}). \end{equation} \end{defn} \par For a subset $W$ of $\Sigma_{{\rm v}}^{\rho,(2)}$ or $\Sigma_{\vec T,\vec \theta}^{\rho,(2)}$ we define $ \Vert s\Vert_{L^2_{m,\delta}(W\subset \Sigma_{{\rm v}}^{\rho,(2)})} $, $ \Vert s\Vert_{L^2_{m,\delta}(W\subset \Sigma_{\vec T,\vec \theta}^{\rho,(2)})} $ by restricting the domain of integration in (\ref{3655multi}), (\ref{366}), (\ref{368}) or (\ref{370}) to $W$. \par We consider the maps $u_{\rm v}^{\rho} : (\Sigma_{\rm v}^{\rho,(2)},\partial \Sigma_{\rm v}^{\rho,(2)}) \to (\Sigma_{\rm v}^{\rho,(1)},\partial \Sigma_{\rm v}^{\rho,(1)})$ in (\ref{3362}), for all $\rm v$. We write $u^{\rho} = (u_{\rm v}^{\rho})$.
\par We next define a vector space that corresponds to a fiber of the `obstruction bundle' in our situation. Let $ u' : (\Sigma_{\rm v}^{\rho,(2)},\partial \Sigma^{\rho,(2)}_{\rm v}) \to (\Sigma_{\rm v}^{\rho',(1)},\partial \Sigma_{\rm v}^{\rho',(1)}) $ be a diffeomorphism that sends each neck region of the source to the corresponding neck region of the target. We define $$ E_{\rm v}^{\rho}(u') \subset \Gamma_0(K^{\rho,(2)}_{\rm v}, (u')^*T\Sigma^{\rho',(1)}_{\rm v}\otimes \Lambda^{01}) $$ as follows. \par We may identify $\frak V(\frak x_{\rm v})$ with an open subset of a certain Euclidean space. Let $\frak e_{\rm v} \in T_{\rho'_{\rm v}}\frak V(\frak x_{\rm v})$. We define \begin{equation}\label{fromdefto01} \frak I^{\rho'}_{\rm v}(u',\frak e_{\rm v}) = \left.\frac{d}{dt}(\overline\partial^{\rho,\rho' + t\frak e_{\rm v}}u')\right\vert_{t=0}. \end{equation} Here $\overline\partial^{\rho,\rho' + t\frak e_{\rm v}}$ is the $\overline\partial$ operator with respect to the complex structure $j^{(1)}_{\rho'+ t\frak e_{\rm v}}$ (on the target) and $j^{(2)}_{\rho}$ (on the source). We thus obtain a map: \begin{equation}\label{cokisdetan} \frak I^{\rho'}_{\rm v}(u',\cdot) : T_{\rho'_{\rm v}}\frak V(\frak x_{\rm v}) \to L^2_{m,\delta}(\Sigma_{\rm v}^{\rho,(2)};(u')^*T\Sigma^{\rho',(1)}_{\rm v}\otimes \Lambda^{01}). \end{equation} Since the complex structure is independent of $\rho$ on the neck region, the image of (\ref{cokisdetan}) is contained in $\Gamma_0(K^{\rho,(2)}_{\rm v}, (u')^*T\Sigma^{\rho',(1)}_{\rm v}\otimes \Lambda^{01})$, that is, the set of smooth sections supported in the interior of the core. \begin{defn} We denote by $E^{\rho}_{\rm v}(u')$ the image of (\ref{cokisdetan}).
\end{defn} We consider the linearization of the Cauchy-Riemann equation associated to the map $u'$, that is, \begin{equation}\label{CRatuuu} \aligned D_{u'}\overline \partial: &L^2_{m+1,\delta}((\Sigma^{\rho,(2)}_{\rm v},\partial \Sigma^{\rho,(2)}_{\rm v}) ;(u')^*T\Sigma_{\rm v}^{\rho',(1)},(u')^*T\partial \Sigma^{\rho',(1)}_{\rm v})\\ &\to L^2_{m,\delta}(\Sigma_{\rm v}^{\rho,(2)};(u')^*T\Sigma^{\rho',(1)}_{\rm v}\otimes \Lambda^{01}). \endaligned\end{equation} \begin{lem}\label{lem3511} If $u'$ is sufficiently close to $u^{\rho}_{\rm v}$ then the kernel of (\ref{CRatuuu}) is zero and we have \begin{equation}\label{cokistang} {\rm Im}(D_{u'}\overline \partial) \oplus E^{\rho}_{\rm v}(u') = L^2_{m,\delta}(\Sigma_{\rm v}^{\rho,(2)};(u')^*T\Sigma^{\rho',(1)}_{\rm v}\otimes \Lambda^{01}). \end{equation} \end{lem} \begin{proof} We first consider the case $u' = u^{\rho}_{\rm v}$, which is a biholomorphic map. Then the kernel is identified with the set of holomorphic vector fields on $\Sigma^{\rho,(2)}$ that vanish at the singular points and marked points. Such a vector field is necessarily zero by stability. \par By the standard result of deformation theory, the cokernel is identified with the deformation space of the complex structures, since $u^{\rho}_{\rm v}$ is biholomorphic. Therefore (\ref{cokistang}) holds. \par We then find that the conclusion holds if $u'$ is sufficiently close to $u^{\rho}_{\rm v}$, so that $D_{u'}\overline \partial$ is close to $D_{u^{\rho}_{\rm v}}\overline \partial$ in operator norm and $E^{\rho}_{\rm v}(u')$ is close to $E^{\rho}_{\rm v}(u^{\rho}_{\rm v})$, in the sense that we can choose orthonormal bases of them that are close to each other. \end{proof} \begin{rem} `Sufficiently close' is a somewhat imprecise way to state the lemma. In the case where we apply the lemma, we can easily check that the last part of the proof works.
\end{rem} We next take a map \begin{equation}\label{appendEdef} {\rm E} : \{(z,v) \in T\Sigma^{(1)} \mid \vert v\vert \le \epsilon\} \to \Sigma^{(1)} \end{equation} such that \begin{enumerate} \item ${\rm E}(z,0) = z$ and $$ \left.\frac{d}{dt} {\rm E}(z,tv)\right\vert_{t=0} = v. $$ \item If $(z,v) \in T\partial \Sigma^{(1)}$ then ${\rm E}(z,v) \in \partial \Sigma^{(1)}$. \item ${\rm E}(z,v) = z+v$ on the neck region. \end{enumerate} \par\medskip Now we start the gluing construction. Let $(\vec T,\vec\theta) \in (\vec T^{\rm o}_0,\infty] \times ((\vec T^{\rm c}_0,\infty] \times \vec S^1)$. For $\kappa =0,1,2,\dots$, we will define a series of maps \begin{eqnarray} u_{\vec T,\vec \theta,(\kappa)}^{\rho} &:& (\Sigma_{\vec T,\vec \theta}^{\rho,(2)},\partial\Sigma_{\vec T,\vec \theta}^{\rho,(2)}) \to (\Sigma_{\vec T^{(\kappa)},\vec \theta^{(\kappa)}}^{\rho_{(\kappa)},(1)},\partial\Sigma_{\vec T^{(\kappa)},\vec \theta^{(\kappa)}}^{\rho_{(\kappa)},(1)}) \\ \hat u_{{\rm v},\vec T,\vec \theta,(\kappa)}^{\rho} &:& (\Sigma_{\rm v}^{\rho,(2)},\partial \Sigma_{\rm v}^{\rho,(2)}) \to (\Sigma_{\rm v}^{\rho_{(\kappa)},(1)},\partial \Sigma_{\rm v}^{\rho_{(\kappa)},(1)}), \end{eqnarray} (we will explain $\rho_{(\kappa)}$, $\vec T^{(\kappa)}$ and $\vec \theta^{(\kappa)}$ below) and elements \begin{eqnarray} \frak e^{\rho}_{{\rm v},\vec T,\vec \theta,(\kappa)} &\in& E_{\rm v}(\hat u_{{\rm v},\vec T,\vec \theta,(\kappa)}^{\rho}) \\ {\rm Err}^{\rho}_{{\rm v},\vec T,\vec \theta,(\kappa)} &\in& L^2_{m,\delta}(\Sigma_{\rm v}^{\rho,(2)};(\hat u_{{\rm v},\vec T,\vec \theta,(\kappa)}^{\rho})^*T\Sigma_{\rm v}^{\rho_{(\kappa)},(1)}\otimes \Lambda^{01}). 
\end{eqnarray} \par Moreover we will define $V^{\rho}_{\vec T,\vec \theta,{\rm v},(\kappa)}$ for ${\rm v} \in C^0(\mathcal G)$, $\Delta T^{\rho}_{\vec T,\vec \theta,(\kappa),{\rm v},{\rm e}} \in \R$ for ${\rm e} \in C^1(\mathcal G)$ and $\Delta \theta^{\rho}_{\vec T,\vec \theta,(\kappa),{\rm v},{\rm e}} \in \R$ for ${\rm e} \in C^1_{\rm c}(\mathcal G)$. We put $$ \aligned &v^{\rho}_{\vec T,\vec \theta,(\kappa),{\rm v},{\rm e}} = \Delta T^{\rho}_{\vec T,\vec \theta,(\kappa),{\rm v},{\rm e}}\frac{\partial}{\partial \tau_{\rm e}}, \qquad \text{for ${\rm e} \in C^1_{\rm o}(\mathcal G)$}, \\ &v^{\rho}_{\vec T,\vec \theta,(\kappa),{\rm v},{\rm e}} = \Delta T^{\rho}_{\vec T,\vec \theta,(\kappa),{\rm v},{\rm e}}\frac{\partial}{\partial \tau_{\rm e}} + \Delta \theta^{\rho}_{\vec T,\vec \theta,(\kappa),{\rm v},{\rm e}}\frac{\partial}{\partial t_{\rm e}} \qquad \text{for ${\rm e} \in C^1_{\rm c}(\mathcal G)$}. \endaligned$$ The pair $((V^{\rho}_{\vec T,\vec \theta,{\rm v},(\kappa)}),(v^{\rho}_{\vec T,\vec \theta, (\kappa),{\rm v},{\rm e}}))$ becomes an element of $$ L^2_{m+1,\delta}((\Sigma_{\rm v}^{\rho,(2)},\partial \Sigma_{\rm v}^{\rho,(2)});(\hat u_{{\rm v},\vec T,\vec \theta,(\kappa-1)}^{\rho})^*T\Sigma_{\vec T^{(\kappa)},\vec \theta^{(\kappa)}}^{\rho_{(\kappa)},(1)},(\hat u_{{\rm v},\vec T,\vec \theta,(\kappa-1)}^{\rho})^*T\partial \Sigma_{\vec T^{(\kappa)},\vec \theta^{(\kappa)}}^{\rho_{(\kappa)},(1)}). $$ \par The vectors $\vec T^{(\kappa)}$ and $\vec \theta^{(\kappa)}$ are determined by $\Delta T^{\rho}_{\vec T,\vec \theta,(1),{\rm v},{\rm e}}, \dots, \Delta T^{\rho}_{\vec T,\vec \theta,(\kappa-1),{\rm v},{\rm e}}$ and $\Delta \theta^{\rho}_{\vec T,\vec \theta,(1),{\rm v},{\rm e}}, \dots, \Delta \theta^{\rho}_{\vec T,\vec \theta,(\kappa-1),{\rm v},{\rm e}}$ as follows. For each $\rm e$ let ${\rm v}_{\leftarrow}({\rm e})$ and ${\rm v}_{\rightarrow}({\rm e})$ be the vertices for which $\rm e$ is an outgoing (resp. incoming) edge.
We put: \begin{equation}\label{3407aa} 10T^{(\kappa)}_{\rm e} = 10T_{\rm e} - \sum_{a=0}^{\kappa}\Delta T^{\rho}_{\vec T,\vec \theta,(a),{\rm v}_{\leftarrow}({\rm e}),{\rm e}} + \sum_{a=0}^{\kappa}\Delta T^{\rho}_{\vec T,\vec \theta,(a),{\rm v}_{\rightarrow}({\rm e}),{\rm e}} \end{equation} \begin{equation}\label{3408aa} \theta^{(\kappa)}_{\rm e} = \theta_{\rm e} + \sum_{a=0}^{\kappa}\Delta \theta^{\rho}_{\vec T,\vec \theta,(a),{\rm v}_{\leftarrow}({\rm e}),{\rm e}} - \sum_{a=0}^{\kappa}\Delta \theta^{\rho}_{\vec T,\vec \theta,(a),{\rm v}_{\rightarrow}({\rm e}),{\rm e}}. \end{equation} \begin{rem} As the induction proceeds, we will modify the length of the neck region slightly from $T_{\rm e}$ to $T^{(\kappa)}_{\rm e}$. We also modify $\theta_{\rm e}$ (that is, the parameter specifying how much we twist in the $S^1$ direction when we glue the pieces to obtain our curve) to $\theta^{(\kappa)}_{\rm e}$. \end{rem} The element $\rho_{(\kappa)} = (\rho_{{\rm v},(\kappa)})$ is defined inductively from $\frak e^{\rho}_{{\rm v},\vec T,\vec \theta,(\kappa)}$ as follows. \par \begin{equation}\label{3383} \frak I^{\rho_{(\kappa-1)}}_{\rm v}(\hat u_{{\rm v},\vec T,\vec \theta,(\kappa-1)}^{\rho},\rho_{{\rm v},(\kappa)}-\rho_{{\rm v},(\kappa-1)}) = \frak e^{\rho}_{{\rm v},\vec T,\vec \theta,(\kappa)}. \end{equation} So $T^{(\kappa)}_{\rm e}, \theta^{(\kappa)}_{\rm e}$ and $\rho_{{\rm v},(\kappa)}$ {\it depend} on $\rho, \vec T,\vec \theta$. \begin{rem} The construction of these objects is very similar to that of Subsection \ref{glueing}. Note that $(\Sigma^{(1)},\partial\Sigma^{(1)})$ plays the role of $(X,L)$ here. (In fact $\partial\Sigma^{(1)}$ is a Lagrangian submanifold of $\Sigma^{(1)}$.) However the construction here differs from the one in Subsection \ref{glueing} in the following two points. \begin{enumerate} \item We will construct a map $u$ that not only satisfies $\overline\partial u \equiv 0 \mod E^{\rho}_{\rm v}$ but is also a genuine holomorphic map.
The linearized equation (\ref{CRatuuu}) is {\it not} surjective. We will kill the cokernel by deforming the complex structure of the target. Namely $\rho \ne \rho_{(\kappa)}$ in general. \item We do {\it not} require $\Delta T^{\rho}_{\vec T,\vec \theta,(\kappa),{\rm v}_{\leftarrow}({\rm e}),{\rm e}} = \Delta T^{\rho}_{\vec T,\vec \theta,(\kappa),{\rm v}_{\rightarrow}({\rm e}),{\rm e}}$ or $\Delta \theta^{\rho}_{\vec T,\vec \theta,(\kappa),{\rm v}_{\leftarrow}({\rm e}),{\rm e}} = \Delta \theta^{\rho}_{\vec T,\vec \theta,(\kappa),{\rm v}_{\rightarrow}({\rm e}),{\rm e}}$. This condition corresponds to the condition $D{\rm ev}_{\mathcal G_{\frak p}}(V,\Delta p) =0$ which we imposed in Definition \ref{Lhattaato1}. Here we did not put a similar condition in (\ref{consthomo0thcomplex}). Instead we deform the complex structure of the target again. Namely $T^{(\kappa)}_{\rm e} \ne T_{\rm e}$, $\theta^{(\kappa)}_{\rm e} \ne \theta_{\rm e}$ in general. \end{enumerate} \end{rem} Now we start the construction of the above objects by induction on $\kappa$. \par\medskip \noindent{\bf Pregluing}: Since $u^{\rho}_{\rm v} : \Sigma_{\rm v}^{\rho,(2)} \to \Sigma_{\rm v}^{\rho,(1)}$ is biholomorphic and sends the neck region to the corresponding neck region, there exists $\Delta T^{\rho}_{\vec T,\vec \theta,(0),{\rm v},{\rm e}} \in \R$ for ${\rm e} \in C^1(\mathcal G)$ and $\Delta \theta^{\rho}_{\vec T,\vec \theta,(0),{\rm v},{\rm e}} \in \R$ for ${\rm e} \in C^1_{\rm c}(\mathcal G)$ such that \begin{equation} \vert u^{\rho}_{\rm v}(\tau_{\rm e},t_{\rm e}) - (\tau_{\rm e}+\Delta T^{\rho}_{\vec T,\vec \theta,(0),{\rm v},{\rm e}},t_{\rm e} + \Delta \theta^{\rho}_{\vec T,\vec \theta,(0),{\rm v},{\rm e}}) \vert \le C_1e^{-\delta_1 \vert\tau_{\rm e}\vert}. \end{equation} Note that in case ${\rm e} \in C^1_{\rm o}(\mathcal G)$ we put $\Delta \theta^{\rho}_{\vec T,\vec \theta,(0),{\rm v},{\rm e}} = 0$.
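The pregluing estimate above reflects the standard asymptotics of a holomorphic map on a half-infinite cylinder. A hedged sketch of where it comes from (the Fourier-mode expansion below is the usual textbook argument, not quoted from the text):

```latex
% Write w = \tau_{\rm e} + i t_{\rm e} on an end [0,\infty)\times S^1.
% If u is holomorphic there and u(w) - w is bounded and periodic in t_e,
% then u(w) - w expands in the decaying Fourier modes
u(w) - w \;=\; a_0 + \sum_{k\ge 1} a_k\, e^{-2\pi k (\tau_{\rm e} + i t_{\rm e})},
\qquad a_0 = \Delta T + i\,\Delta\theta ,
% and estimating the tail of the series gives the displayed bound
\vert u(\tau_{\rm e},t_{\rm e})
 - (\tau_{\rm e}+\Delta T,\, t_{\rm e}+\Delta\theta)\vert
\;\le\; C_1 e^{-\delta_1 \tau_{\rm e}},
\qquad 0 < \delta_1 \le 2\pi .
```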
\par We identify the $\rm e$-th neck region of $\Sigma_{\vec T^{(\kappa)},\vec \theta^{(\kappa)}}^{\rho_{(\kappa)},(2)}$ with $$ [-5T_{\rm e} + \frak s\Delta T_{{\rm e},(\kappa)}^{\leftarrow}, 5T_{\rm e} + \frak s\Delta T_{{\rm e},(\kappa)} ^{\rightarrow} ] \times [0,1] \,\,\text{\rm or $S^1$}, $$ where $$ \aligned \frak s\Delta T_{{\rm e},(\kappa)}^{\leftarrow} &= \sum_{a=0}^{\kappa}\Delta T^{\rho}_{\vec T,\vec \theta,(a),{\rm v}_{\leftarrow}({\rm e}),{\rm e}},\\ \frak s\Delta T_{{\rm e},(\kappa)}^{\rightarrow} &= \sum_{a=0}^{\kappa}\Delta T^{\rho}_{\vec T,\vec \theta,(a),{\rm v}_{\rightarrow}({\rm e}),{\rm e}}. \endaligned $$ We also denote $$ \aligned \frak s\Delta \theta_{{\rm e},(\kappa)} ^{\leftarrow} &= \sum_{a=0}^{\kappa}\Delta \theta^{\rho}_{\vec T,\vec \theta,(a),{\rm v}_{\leftarrow}({\rm e}),{\rm e}}, \\ \frak s\Delta \theta_{{\rm e},(\kappa)} ^{\rightarrow} &= \sum_{a=0}^{\kappa}\Delta \theta^{\rho}_{\vec T,\vec \theta,(a),{\rm v}_{\rightarrow}({\rm e}),{\rm e}}. \endaligned $$ We use the symbol $\tau_{\rm e}^{(\kappa)}$ as the coordinate of the first factor. The symbol $t_{\rm e}^{(\kappa)}$ denotes the coordinate of the second factor that is given by $$ t_{\rm e}^{(\kappa)} = t_{\rm e} + \frak s\Delta \theta_{{\rm e},(\kappa)} ^{\leftarrow} $$ in case ${\rm e} \in C^1_{\rm c}(\mathcal G_{\frak x})$. Here $t_{\rm e}$ is the canonical coordinate of $S^1$. In case ${\rm e} \in C^1_{\rm o}(\mathcal G_{\frak x})$, $t_{\rm e}^{(\kappa)} = t_{\rm e}$. \par We have \begin{equation}\label{3394tt} \tau_{\rm e}^{(\kappa)} = \tau'_{\rm e} - 5T_{\rm e} + \frak s\Delta T_{{\rm e},(\kappa)}^{\leftarrow} =\tau''_{\rm e} + 5T_{\rm e} + \frak s\Delta T_{{\rm e},(\kappa)}^{\rightarrow} . \end{equation} (Hence $\tau'_{\rm e} = \tau''_{\rm e} + 10T_{\rm e} - \frak s\Delta T^{\leftarrow}_{{\rm e},(\kappa)} + \frak s\Delta T^{\rightarrow}_{{\rm e},(\kappa)} = \tau''_{\rm e} + 10T_{{\rm e}}^{(\kappa)}$. See (\ref{3407aa}).)
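The sign bookkeeping in these sums and in (\ref{3407aa})-(\ref{3408aa}) is elementary but easy to get wrong; a minimal numeric sketch, with all increment values invented for illustration (they stand in for the quantities produced at each stage of the alternating method):

```python
# Bookkeeping of the corrected neck parameters, cf. (3407aa)-(3408aa):
#   10*T_e^(kappa)  = 10*T_e - sum_a dT[v_left(e)] + sum_a dT[v_right(e)],
#   theta_e^(kappa) = theta_e + sum_a dth[v_left(e)] - sum_a dth[v_right(e)].

def T_corrected(T_e, dT_left, dT_right):
    """Corrected neck length T_e^(kappa) from the two increment sequences."""
    return T_e + (sum(dT_right) - sum(dT_left)) / 10.0

def theta_corrected(theta_e, dtheta_left, dtheta_right):
    """Corrected twist parameter theta_e^(kappa)."""
    return theta_e + sum(dtheta_left) - sum(dtheta_right)

mu = 0.5                                      # hypothetical geometric decay rate
dT_left  = [0.02 * mu**a for a in range(4)]   # a = 0,...,kappa with kappa = 3
dT_right = [0.01 * mu**a for a in range(4)]

# Consistency with tau'_e - tau''_e = 10*T_e^(kappa), cf. (3394tt):
sdL, sdR = sum(dT_left), sum(dT_right)
assert abs((10 * 10.0 - sdL + sdR) - 10 * T_corrected(10.0, dT_left, dT_right)) < 1e-12
print(T_corrected(10.0, dT_left, dT_right))   # slightly below the original T_e = 10
```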
\par In case ${\rm e} \in C^1_{\rm c}(\mathcal G_{\frak x})$ we also have \begin{equation}\label{3395tt} t_{\rm e}^{(\kappa)} = t'_{\rm e} + \frak s\Delta \theta_{{\rm e},(\kappa)}^{\leftarrow} =t''_{\rm e} -\theta_{\rm e} + \frak s\Delta \theta_{{\rm e},(\kappa)}^{\rightarrow}. \end{equation} (Hence $t'_{\rm e} = t''_{\rm e} - \theta_{\rm e}^{(\kappa)}$. See (\ref{3408aa}).) \par We define the map $ {\rm id}^{\rho,\vec T,\vec \theta}_{{\rm e},(\kappa)} $ from the $\rm e$-th neck of $\Sigma_{\vec T,\vec \theta}^{\rho,(2)}$ to the $\rm e$-th neck of $\Sigma_{\vec T^{(\kappa)},\vec \theta^{(\kappa)}}^{\rho_{(\kappa)},(1)}$ by \begin{equation} {\rm id}^{\rho,\vec T,\vec \theta}_{{\rm e},(\kappa)} : (\tau_{\rm e},t_{\rm e}) \mapsto (\tau_{\rm e}^{(\kappa)} ,t_{\rm e}^{(\kappa)} ) = (\tau_{\rm e},t_{\rm e}). \end{equation} \par We now put: \begin{equation}\label{2219aa} u_{\vec T,\vec \theta,(0)}^{\rho} = \begin{cases} \chi_{{\rm e},\mathcal B}^{\leftarrow} (u_{{\rm v}_{\leftarrow}({\rm e})}^{\rho} - {\rm id}^{\rho,\vec T,\vec \theta}_{{\rm e},(0)}) + \chi_{{\rm e},\mathcal A}^{\rightarrow} (u_{{\rm v}_{\rightarrow}({\rm e})}^{\rho} - {\rm id}^{\rho,\vec T,\vec \theta}_{{\rm e},(0)}) + {\rm id}^{\rho,\vec T,\vec \theta}_{{\rm e},(0)} & \text{on the ${\rm e}$-th neck} \\ u_{\rm v}^{\rho} & \text{on $K_{\rm v}$}. \end{cases} \end{equation} \par\medskip \noindent{\bf Step 0-4}: We next define \begin{equation}\label{222233} {\rm Err}^{\rho}_{{\rm v},\vec T,\vec \theta,(0)} = \begin{cases} \chi_{{\rm e},\mathcal X}^{\leftarrow} \overline\partial u_{\vec T,\vec \theta,(0)}^{\rho} & \text{on the ${\rm e}$-th neck if $\rm e$ is outgoing} \\ \chi_{{\rm e},\mathcal X}^{\rightarrow} \overline\partial u_{\vec T,\vec \theta,(0)}^{\rho} & \text{on the ${\rm e}$-th neck if $\rm e$ is incoming} \\ 0 & \text{on $K_{\rm v}$} . 
\end{cases} \end{equation} \par\medskip \noindent{\bf Step 1-1}: Let ${\rm id}_{{\rm v},{\rm e}}$ be the identity map from the neck region of $\Sigma^{(2)}_{\rm v}$ to the neck region of $\Sigma^{(1)}_{\rm v}$. (It does not coincide with $u^{\rho}_{\rm v}$ there.) We set: \begin{equation} \Delta^{{\rm v}_{\leftarrow}({\rm e}),{\rm e}}_{\vec T,\vec \theta,(0)} = (\frak s\Delta T_{{\rm e},(0)}^{\leftarrow}, \frak s\Delta \theta_{{\rm e},(0)}^{\leftarrow}), \quad \Delta^{{\rm v}_{\rightarrow}({\rm e}),{\rm e}}_{\vec T,\vec \theta,(0)} = (\frak s\Delta T_{{\rm e},(0)}^{\rightarrow}, \frak s\Delta \theta_{{\rm e},(0)}^{\rightarrow}). \end{equation} (In case ${\rm e} \in C^1_{\rm o}(\mathcal G_{\frak x})$ we set $\frak s\Delta \theta_{{\rm e},(0)}^{\leftarrow} = \frak s\Delta \theta_{{\rm e},(0)}^{\rightarrow}=0$.) We then define \begin{equation} {\rm id}_{{\rm v},{\rm e}}^{\vec T,\vec \theta,(0)} = {\rm id}_{{\rm v},{\rm e}} + \Delta^{{\rm v},{\rm e}}_{\vec T,\vec \theta,(0)}. \end{equation} Now, we put \begin{equation} \aligned &\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)}(z) \\ &= \begin{cases} \chi_{{\rm e},\mathcal B}^{\leftarrow}(\tau_{\rm e}-T_{\rm e},t_{\rm e}) &\!\!\!\!\!\!u^{\rho}_{\vec T,\vec \theta,(0)}(\tau_{\rm e},t_{\rm e}) + \chi_{{\rm e},\mathcal B}^{\rightarrow}(\tau_{\rm e}-T_{\rm e},t_{\rm e}){\rm id}_{{\rm v},{\rm e}}^{\vec T,\vec \theta,(0)} \\ &\text{if $z = (\tau_{\rm e},t_{\rm e})$ is on the $\rm e$-th neck that is outgoing} \\ \chi_{{\rm e},\mathcal A}^{\rightarrow}(\tau_{\rm e}-T_{\rm e},t_{\rm e}) &\!\!\!\!\!\!u^{\rho}_{\vec T,\vec \theta,(0)}(\tau_{\rm e},t_{\rm e}) + \chi_{{\rm e},\mathcal A}^{\leftarrow}(\tau_{\rm e}-T_{\rm e},t_{\rm e}){\rm id}_{{\rm v},{\rm e}}^{\vec T,\vec \theta,(0)}\\ &\text{if $z = (\tau_{\rm e},t_{\rm e})$ is on the $\rm e$-th neck that is incoming} \\ u^{\rho}_{\vec T,\vec \theta,(0)}(z) &\text{if $z \in K_{\rm v}$.} \end{cases} \endaligned \end{equation} \begin{defn} We define $V^{\rho}_{\vec T,\vec \theta,{\rm v},(1)}$
for ${\rm v} \in C^0(\mathcal G_{\frak p})$ and real numbers $\Delta T^{\rho}_{\vec T,\vec \theta,(1),{\rm v}_{\leftarrow}({\rm e}),{\rm e}}$, $\Delta T^{\rho}_{\vec T,\vec \theta,(1),{\rm v}_{\rightarrow}({\rm e}),{\rm e}}$ for ${\rm e} \in C^1(\mathcal G_{\frak p})$ and $\Delta \theta^{\rho}_{\vec T,\vec \theta,(1),{\rm v}_{\leftarrow}({\rm e}),{\rm e}}$, $\Delta \theta^{\rho}_{\vec T,\vec \theta,(1),{\rm v}_{\rightarrow}({\rm e}),{\rm e}}$ for ${\rm e} \in C^1_{\rm c}(\mathcal G_{\frak p})$ so that the following conditions are satisfied. \begin{equation} D_{\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(0)}}\overline{\partial}(V^{\rho}_{\vec T,\vec \theta,{\rm v},(1)}) - {\rm Err}^{\rho}_{{\rm v},\vec T,\vec \theta,(0)} = \frak e^{\rho}_{{\rm v},\vec T,\vec \theta,(1)} \in E_{\rm v}(\hat u_{{\rm v},\vec T,\vec \theta,(0)}^{\rho}) \end{equation} and \begin{equation} \aligned &\lim_{\tau_{\rm e} \to \infty} \left(V^{\rho}_{\vec T,\vec \theta,{\rm v}_{\leftarrow}({\rm e}),(1)}(\tau_{\rm e},t_{\rm e}) - \Delta T^{\rho}_{\vec T,\vec \theta,(1),{\rm v}_{\leftarrow}({\rm e}),{\rm e}}\frac{\partial}{\partial \tau_{\rm e}}\right) = 0,\\ &\lim_{\tau_{\rm e} \to -\infty} \left(V^{\rho}_{\vec T,\vec \theta,{\rm v}_{\rightarrow}({\rm e}),(1)}(\tau_{\rm e},t_{\rm e}) - \Delta T^{\rho}_{\vec T,\vec \theta,(1),{\rm v}_{\rightarrow}({\rm e}),{\rm e}}\frac{\partial}{\partial \tau_{\rm e}}\right) = 0, \endaligned \end{equation} if ${\rm e} \in C_{\rm o}^1(\mathcal G_{\frak p})$, \begin{equation} \aligned &\lim_{\tau_{\rm e} \to \infty} \left(V^{\rho}_{\vec T,\vec \theta,{\rm v}_{\leftarrow}({\rm e}),(1)}(\tau_{\rm e},t_{\rm e}) - \Delta T^{\rho}_{\vec T,\vec \theta,(1),{\rm v}_{\leftarrow}({\rm e}),{\rm e}}\frac{\partial}{\partial \tau_{\rm e}} - \Delta \theta^{\rho}_{\vec T,\vec \theta,(1),{\rm v}_{\leftarrow}({\rm e}),{\rm e}}\frac{\partial}{\partial t_{\rm e}}\right) = 0,\\ &\lim_{\tau_{\rm e} \to -\infty} \left(V^{\rho}_{\vec T,\vec \theta,{\rm v}_{\rightarrow}({\rm
e}),(1)}(\tau_{\rm e},t_{\rm e}) - \Delta T^{\rho}_{\vec T,\vec \theta,(1),{\rm v}_{\rightarrow}({\rm e}),{\rm e}}\frac{\partial}{\partial \tau_{\rm e}} - \Delta \theta^{\rho}_{\vec T,\vec \theta,(1),{\rm v}_{\rightarrow}({\rm e}),{\rm e}}\frac{\partial}{\partial t_{\rm e}}\right) = 0, \endaligned \end{equation} if ${\rm e} \in C^1_{\rm c}(\mathcal G_{\frak p})$. \end{defn} The unique existence of such objects is a consequence of Lemma \ref{lem3511}. \par We define $\rho_{(1)}$ by (\ref{3383}). \par\medskip \noindent{\bf Step 1-2}: \begin{defn} We define $u_{\vec T,\vec \theta,(1)}^{\rho}(z)$ as follows. (Here $\rm E$ is as in (\ref{appendEdef}).) \begin{enumerate} \item If $z \in K_{\rm v}$ we put \begin{equation}\label{3419} u_{\vec T,\vec \theta,(1)}^{\rho}(z) = {\rm E} (u_{\vec T,\vec \theta,(0)}^{\rho}(z),V^{\rho}_{\vec T,\vec \theta,{\rm v},(1)}(z)). \end{equation} \item If $z = (\tau_{\rm e},t_{\rm e}) \in [-5T_{\rm e},5T_{\rm e}]\times [0,1]$ or $S^1$, we put \begin{equation}\label{3420} \aligned &u_{\vec T,\vec \theta,(1)}^{\rho}(\tau_{\rm e},t_{\rm e}) \\ &=\chi_{{\rm v}_{\leftarrow}({\rm e}),\mathcal B}^{\leftarrow}(\tau_{\rm e},t_{\rm e}) (V^{\rho}_{\vec T,\vec \theta,{\rm v}_{\leftarrow}({\rm e}),(1)} (\tau_{\rm e},t_{\rm e}) - (\Delta T_{{\rm e},(1)}^{\leftarrow}, \Delta \theta_{{\rm e},(1)}^{\leftarrow}))\\ &+\chi_{{\rm v}_{\rightarrow}({\rm e}),\mathcal A}^{\rightarrow}(\tau_{\rm e},t_{\rm e})(V^{\rho}_{\vec T,\vec \theta,{\rm v}_{\rightarrow}({\rm e}),(1)}(\tau_{\rm e},t_{\rm e}) - (\Delta T_{{\rm e},(1)}^{\rightarrow}, \Delta \theta_{{\rm e},(1)}^{\rightarrow}))\\ &+u_{\vec T,\vec \theta,(0)}^{\rho}(\tau_{\rm e},t_{\rm e}). \endaligned \end{equation} \end{enumerate} Here we use the coordinate $(\tau_{\rm e}^{(1)},t_{\rm e}^{(1)})$ given in (\ref{3394tt}) and (\ref{3395tt}) for the {\it target}. \end{defn} \par We remark that $\tau^{(0)}_{\rm e} = \tau^{(1)}_{\rm e} - \Delta T^{\leftarrow}_{{\rm e},(1)}$.
Therefore, in a neighborhood of $\{-5T_{\rm e}\} \times [0,1]$ (or $\{-5T_{\rm e}\} \times S^1$), (\ref{3419}) and (\ref{3420}) are consistent. \par\medskip \noindent{\bf Step 1-3}: We recall that $\rho_{{\rm v},(1)}$ is defined by \begin{equation}\label{338322} \frak I^{\rho_{(0)}}_{\rm v}(\hat u_{{\rm v},\vec T,\vec \theta,(0)}^{\rho},\rho_{{\rm v},(1)}-\rho_{{\rm v},(0)}) = \frak e^{\rho}_{{\rm v},\vec T,\vec \theta,(1)}. \end{equation} (Note $\rho_{{\rm v},(0)} = 0$.) \par\medskip \noindent{\bf Step 1-4}: \begin{defn} We put \begin{equation} {\rm Err}^{\rho}_{{\rm v},\vec T,\vec \theta,(1)} = \begin{cases} \chi_{{\rm e},\mathcal X}^{\leftarrow} \overline\partial u_{\vec T,\vec \theta,(1)}^{\rho} & \text{on ${\rm e}$-th neck if $\rm e$ is outgoing} \\ \chi_{{\rm e},\mathcal X}^{\rightarrow} \overline\partial u_{\vec T,\vec \theta,(1)}^{\rho} & \text{on ${\rm e}$-th neck if $\rm e$ is incoming} \\ \overline\partial u_{\vec T,\vec \theta,(1)}^{\rho} & \text{on $K_{\rm v}$} . \end{cases} \end{equation} We extend them by $0$ outside a compact set and regard them as elements of the function space $L^2_{m,\delta}(\Sigma^{\rho,(2)}_{\rm v};(\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(1)})^{*}T\Sigma_{\vec T^{(1)},\vec \theta^{(1)}}^{\rho_{(1)},(1)} \otimes \Lambda^{01})$, where $\hat u^{\rho}_{{\rm v},\vec T,\vec \theta,(1)}$ will be defined in the next step. \end{defn} We then repeat the process, starting from Step 2-1, and continue inductively. We obtain the following estimates by induction on $\kappa$. We put $R_{\rm e} = 5T_{\rm e} + 1$. \begin{lem}\label{expesgen1sec3} There exist $T_m, C_{2,m},\dots, C_{8,m}, \epsilon_{1,m} > 0$ and $0<\mu<1$ such that the following inequalities hold if $T_{\rm e}>T_m$ for all $\rm e$. We put $T_{\rm min} = \min\{ T_{\rm e}\mid {\rm e} \in C^1(\mathcal G_{\frak p})\}$.
\begin{eqnarray} \left\Vert \left((V^{\rho}_{\vec T,\vec \theta,{\rm v},(\kappa)}),(v^{\rho}_{\vec T,\vec \theta,{\rm v},{\rm e},(\kappa)})\right)\right\Vert_{L^2_{m+1,\delta} (\Sigma^{\rho,{(2)}}_{\rm v})} &<& C_{2,m}\mu^{\kappa-1}e^{-\delta T_{\rm min}}, \label{form0182vv3} \\ \left\Vert (v^{\rho}_{\vec T,\vec \theta,{\rm v},{\rm e},(\kappa)})\right\Vert &<& C_{3,m}\mu^{\kappa-1}e^{-\delta T_{\rm min}}, \label{form0183vv3} \\ \left\Vert u_{\vec T,\vec \theta,(\kappa)}^{\rho}- u_{\vec T,\vec \theta,(0)}^{\rho} \right\Vert_{L^2_{m+1,\delta}((K_{\rm v}^{(2)})^{+\vec R})} &<& C_{4,m}e^{-\delta T_{\rm min}}, \label{form0184vv3} \\ \left\Vert{\rm Err}^{\rho}_{{\rm v},\vec T,\vec \theta,(\kappa)} \right\Vert_{L^2_{m,\delta}(\Sigma^{\rho,{(2)}}_{\rm v})} &<& C_{5,m}\epsilon_{1,m}\mu^{\kappa}e^{-\delta T_{\rm min}}, \label{form0185vv3} \\ \left\Vert \frak e^{\rho} _{\vec T,\vec \theta,(\kappa)}\right\Vert_{L^2_{m}((K_{\rm v}^{(2)})^{+\vec R})} &<& C_{6,m}\mu^{\kappa-1}e^{-\delta T_{\rm min}}, \label{form0186vv3vv3} \\ \left\Vert \Delta T^{\rho}_{\vec T,\vec \theta,(\kappa),{\rm v},{\rm e}} \right\Vert &<& C_{7,m}\mu^{\kappa-1}e^{-\delta T_{\rm min}}, \label{Tconverges3}\\ \left\Vert \Delta \theta^{\rho}_{\vec T,\vec \theta,(\kappa),{\rm v},{\rm e}}\right\Vert &<& C_{8,m}\mu^{\kappa-1}e^{-\delta T_{\rm min}} \label{rhoconverges3}. \end{eqnarray} \end{lem} The proof is the same as the proof of Proposition \ref{expesgen1} and so is omitted. We note that (\ref{form0186vv3vv3}) and (\ref{3383}) imply \begin{equation}\label{rhoconverges} \left\Vert \rho_{(\kappa)} - \rho \right\Vert < C_{9,m}\mu^{\kappa-1}e^{-\delta T_{\rm min}}. \end{equation} Therefore the limit $$ \lim_{\kappa \to \infty} \rho_{(\kappa)} = \rho'(\rho,\vec T,\vec \theta) $$ exists. 
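All the bounds in the lemma have the common shape $C\mu^{\kappa-1}e^{-\delta T_{\rm min}}$ with $0<\mu<1$, so the corrections accumulated over all stages are controlled by a geometric series of total size $O(e^{-\delta T_{\rm min}})$. A minimal numeric sketch of this summation, with all constants invented for illustration:

```python
# Numeric check (invented constants) that bounds of the common shape
# C * mu**(kappa-1) * exp(-delta*T_min) sum over kappa to a total of size
# O(exp(-delta*T_min)); this is why rho_(kappa), T^(kappa), theta^(kappa)
# converge as kappa -> infinity.
import math

def stage_bound(C, mu, delta, T_min, kappa):
    """Bound on the correction made at stage kappa of the induction."""
    return C * mu**(kappa - 1) * math.exp(-delta * T_min)

def total_bound(C, mu, delta, T_min, n_stages=500):
    """Accumulated bound over the first n_stages stages."""
    return sum(stage_bound(C, mu, delta, T_min, k) for k in range(1, n_stages + 1))

C, mu, delta, T_min = 2.0, 0.5, 0.1, 30.0
geometric_limit = C * math.exp(-delta * T_min) / (1.0 - mu)   # closed-form limit
print(total_bound(C, mu, delta, T_min) <= geometric_limit + 1e-15)
```

The closed-form limit is just the geometric series $\sum_{\kappa\ge 1} C\mu^{\kappa-1}e^{-\delta T_{\rm min}} = C e^{-\delta T_{\rm min}}/(1-\mu)$; the same summation underlies the convergence statements that follow.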
(\ref{Tconverges3}) and (\ref{rhoconverges3}) imply that the limits $$ \lim_{\kappa \to \infty} \frak s\Delta T^{\rho}_{\vec T,\vec \theta,(\kappa),{\rm v},{\rm e}} = \frak s\Delta T^{\rho}_{\vec T,\vec \theta,(\infty),{\rm v},{\rm e}} $$ and $$ \lim_{\kappa \to \infty} \frak s\Delta \theta^{\rho}_{\vec T,\vec \theta,(\kappa),{\rm v},{\rm e}} = \frak s\Delta \theta^{\rho}_{\vec T,\vec \theta,(\infty),{\rm v},{\rm e}} $$ exist. We put $$ \vec T'(\rho,\vec T,\vec \theta) = \vec T + \frak s\Delta \vec T^{\rho}_{\vec T,\vec \theta,(\infty)}, \quad \vec \theta'(\rho,\vec T,\vec \theta) = \vec \theta + \frak s\Delta \vec \theta^{\rho}_{\vec T,\vec \theta,(\infty)}. $$ Then (\ref{form0184vv3}) implies that $$ \lim_{\kappa \to \infty} u_{\vec T,\vec \theta,(\kappa)}^{\rho} $$ converges to a map $$ u_{\vec T,\vec \theta,(\infty)}^{\rho} : (\Sigma_{\vec T,\vec \theta}^{\rho,(2)},\partial\Sigma_{\vec T,\vec \theta}^{\rho,(2)}) \to (\Sigma_{\vec T'(\rho,\vec T,\vec \theta) ,\vec \theta'(\rho,\vec T,\vec \theta)}^{\rho'(\rho,\vec T,\vec \theta),(1)}, \partial\Sigma_{\vec T'(\rho,\vec T,\vec \theta) ,\vec \theta'(\rho,\vec T,\vec \theta)}^{\rho'(\rho,\vec T,\vec \theta),(1)}) $$ in the $L^2_{m+1}$ topology. (Note that the union of $(K_{\rm v}^{(2)})^{+\vec R}$ for the various $\rm v$ covers $\Sigma_{\vec T,\vec \theta}^{\rho_{(\kappa)},(2)}$.) The formula (\ref{form0185vv3}) then implies that $u_{\vec T,\vec \theta,(\infty)}^{\rho}$ is a biholomorphic map. \par Therefore, using the notation of Proposition \ref{changeinfcoorprop}, we have \begin{equation}\label{form408} {\overline{\Phi}}_{12}(\rho,\vec T,\vec\theta) = (\rho'(\rho,\vec T,\vec \theta),\vec T'(\rho,\vec T,\vec \theta) ,\vec \theta'(\rho,\vec T,\vec \theta)). \end{equation} Using the notation of Proposition \ref{reparaexpest}, we have \begin{equation}\label{3409} \frak v_{(\rho,\vec T,\vec\theta)} = u_{\vec T,\vec \theta,(\infty)}^{\rho}. \end{equation} The $T_{\rm e}$ etc.
derivatives of the objects we constructed enjoy the following estimates. \begin{lem}\label{expesgen2Tdevss} There exist $T_m, C_{10,m}, \dots, C_{16,m}, \epsilon_{2,m} > 0$ and $0<\mu<1$ such that the following inequalities hold if $T_{\rm e}>T_m$ for all $\rm e$. \par Let ${\rm e}_0 \in C^1(\mathcal G_{\frak p})$. Then for each $\vec k_{T}$, $\vec k_{\theta}$ we have \begin{equation} \aligned &\left\Vert \nabla_{\rho}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}} \frac{\partial}{\partial T_{{\rm e}_0}} \left((V^{\rho}_{\vec T,\vec \theta,{\rm v},(\kappa)}),(v^{\rho}_{\vec T,\vec \theta,{\rm v},{\rm e},(\kappa)})\right) \right\Vert_{L^2_{m+1-\vert{\vec k_{T}}\vert - \vert{\vec k_{\theta}}\vert-n-1,\delta}(\Sigma_{\rm v}^{\rho,(2)})}\\ &< C_{10,m}\mu^{\kappa-1}e^{-\delta T_{{\rm e}_0}}, \label{form0182vv233} \endaligned \end{equation} \begin{equation} \displaystyle\left\Vert \nabla_{\rho}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}} \frac{\partial}{\partial T_{{\rm e}_0}} (v^{\rho}_{\vec T,\vec \theta,{\rm v},{\rm e},(\kappa)})\right\Vert < C_{11,m}\mu^{\kappa-1}e^{-\delta T_{{\rm e}_0}}, \label{form0183v2v33} \end{equation} \begin{equation} \left\Vert \nabla_{\rho}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}} \frac{\partial}{\partial T_{{\rm e}_0}} u_{\vec T,\vec \theta,(\kappa)}^{\rho} \right\Vert_{L^2_{m+1-\vert{\vec k_{T}}\vert - \vert{\vec k_{\theta}}\vert -n-1,\delta}((K_{\rm v}^{(2)})^{+\vec R})} < C_{12,m}e^{-\delta T_{{\rm e}_0}}, \label{form0184233} \end{equation} \begin{equation} \aligned &\left\Vert \nabla_{\rho}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}}
\frac{\partial}{\partial T_{{\rm e}_0}} {\rm Err}^{\rho}_{{\rm v},\vec T,\vec \theta,(\kappa)} \right\Vert_{L^2_{m-\vert{\vec k_{T}}\vert - \vert{\vec k_{\theta}}\vert-n-1,\delta} (\Sigma_{\rm v}^{\rho,{(2)}})} \\ &< C_{13,m}\epsilon_{2,m}\mu^{\kappa}e^{-\delta T_{{\rm e}_0}}, \label{form0185vv233} \endaligned \end{equation} \begin{equation} \left\Vert \nabla_{\rho}^n\frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}} \frac{\partial}{\partial T_{{\rm e}_0}} \frak e^{\rho} _{\vec T,\vec \theta,(\kappa)}\right\Vert_{L^2_{m-\vert{\vec k_{T}}\vert - \vert{\vec k_{\theta}}\vert-n-1}(K_{\rm v}^{(2)})} < C_{14,m}\mu^{\kappa-1}e^{-\delta T_{{\rm e}_0}}, \label{form0186vv233} \end{equation} \begin{equation}\label{415} \left\Vert \nabla_{\rho}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}} \frac{\partial}{\partial T_{{\rm e}_0}} \Delta T^{\rho}_{\vec T,\vec \theta,(\kappa),{\rm v},{\rm e}} \right\Vert < C_{15,m}\mu^{\kappa-1}e^{-\delta T_{{\rm e}_0}}, \end{equation} \begin{equation}\label{416} \left\Vert \nabla_{\rho}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}} \frac{\partial}{\partial T_{{\rm e}_0}} \Delta \theta^{\rho}_{\vec T,\vec \theta,(\kappa),{\rm v},{\rm e}}\right\Vert < C_{16,m}\mu^{\kappa-1}e^{-\delta T_{{\rm e}_0}} \end{equation} for $\vert{\vec k_{T}}\vert + \vert{\vec k_{\theta}}\vert + n < m -11$. \par Let ${\rm e}_{0} \in C^1_{\rm c}(\mathcal G_{\frak p})$. Then the same inequalities as above hold if we replace $\frac{\partial}{\partial T_{{\rm e}_0}}$ by $\frac{\partial}{\partial \theta_{{\rm e}_0}}$. \end{lem} The proof is mostly the same as that of Proposition \ref{expesgen2Tdev}. The difference is the following point only. 
We remark that in (\ref{form0182vv233}), (\ref{form0183v2v33}), (\ref{form0184233}), (\ref{form0185vv233}) the norm is the $L^2_{m+1-\vert{\vec k_{T}}\vert - \vert{\vec k_{\theta}}\vert-n-1,\delta}$ norm. On the other hand, in (\ref{form0182vv2}), (\ref{form0183v2v}), (\ref{form01842}), (\ref{form0185vv2}), the norm was the $L^2_{m+1-\vert{\vec k_{T}}\vert - \vert{\vec k_{\theta}}\vert-1,\delta}$ norm. The reason is as follows. We remark that in our case $$ T^{(\kappa)}_{\rm e} = T_{\rm e} - \frac{1}{10}\sum_{a=0}^{\kappa}\Delta T^{\rho}_{\vec T,\vec \theta,(a),{\rm v}_{\leftarrow}({\rm e}),{\rm e}} + \frac{1}{10}\sum_{a=0}^{\kappa}\Delta T^{\rho}_{\vec T,\vec \theta,(a),{\rm v}_{\rightarrow}({\rm e}),{\rm e}} $$ is $\rho$ dependent. When we study the $\rho$ derivative in the inductive steps, we need to take the $\rho$ derivative of $$ \hat u^{\rho}_{{\rm v}',\vec T,\vec \theta,{(\kappa)}}(\tau'_{\rm e}-10T^{(\kappa)}_{\rm e},t'_{\rm e}+\theta^{(\kappa)}_{\rm e}) $$ etc. Then there will be a term including the $\tau''_{\rm e}$ or $t''_{\rm e}$ derivative of $\hat u^{\rho}_{{\rm v}',\vec T,\vec \theta,{(\kappa)}}$. \par Except for this point, the proof of Lemma \ref{expesgen2Tdevss} is the same as the proof of Proposition \ref{expesgen2Tdev} and so is omitted. \begin{proof}[Proof of Proposition \ref{changeinfcoorprop}] We note that (\ref{form0186vv233}) and (\ref{3383}) imply \begin{equation}\label{rhoconvergesTTT} \left\Vert \nabla_{\rho}^n \frac{\partial^{\vert \vec k_{T}\vert}}{\partial T^{\vec k_{T}}}\frac{\partial^{\vert \vec k_{\theta}\vert}}{\partial \theta^{\vec k_{\theta}}} \frac{\partial}{\partial T_{{\rm e}_0}} (\rho_{(\kappa)} - \rho) \right\Vert < C_{17,m}\mu^{\kappa-1}e^{-\delta T_{{\rm e}_0}} \end{equation} and the same formula with $\frac{\partial}{\partial T_{{\rm e}_0}}$ replaced by $\frac{\partial}{\partial \theta_{{\rm e}_0}}$ if ${\rm e}_{0} \in C^1_{\rm c}(\mathcal G_{\frak p})$. (\ref{form408}), (\ref{rhoconvergesTTT}), (\ref{415}) and (\ref{416}) imply (\ref{2144}).
\end{proof} \begin{proof}[Proof of Proposition \ref{reparaexpest}] This is an immediate consequence of (\ref{3409}) and (\ref{form0184233}). \end{proof} \begin{proof}[Proof of Lemma \ref{changeinfcoorproppara}] This is a parametrized version and the proof is the same as above. \end{proof} \section{Appendix: From $C^m$ structure to $C^{\infty}$ structure} \label{toCinfty} In this section we will prove that the Kuranishi structure of $C^{m}$-class, which we obtained in Section \ref{generalcase}, is actually of $C^{\infty}$-class. \par We consider the embedding $\frak F^{(1)}$ (see the formula (\ref{evaluandTfac})) which we constructed in the proof of Lemma \ref{2120lem}. Here we fix $m$. \begin{lem}\label{lem3143} The image of $\frak F^{(1)}$ is a $C^{\infty}$ submanifold. \end{lem} \begin{proof} We first note several obvious facts. Let $\frak M$ be a Banach manifold and $X \subset \frak M$ be a subset. Then the statement that $X$ is a $C^{m'}$-submanifold of finite dimension is well-defined, and the $C^{m'}$-structure of $X$ as a submanifold is unique if it exists. Here $m'$ is one of $0,1,\dots,\infty$. Moreover $X$ is a $C^{\infty}$-submanifold if and only if for each $p \in X$ and each $m'$ there exists a neighborhood $U$ of $p$ such that $U \cap X$ is a submanifold of $C^{m'}$-class. \par Now we prove the lemma. Let $\frak q$ be in the image of $\frak F^{(1)}$ and take any $m'$. Let $\frak w_{\frak p}$ be the stabilization data at $\frak p$ that we used to define $\frak F^{(1)}$. We take the stabilization data $\frak w_{\frak q}$ on $\frak q$ that is induced by $\frak w_{\frak p}$. We define ${\rm Glue}$ at $\frak q$ using the stabilization data $\frak w_{\frak q}$.
Then, as in the proof of Lemma \ref{coodinatedifferentp}, we obtain \begin{equation}\label{evaluandTfacjj22} \aligned \frak F^{(2)} : \hat V&(\frak q,\frak w_{\frak q};(\frak o',\mathcal T';\frak A))\\ \to &\prod_{{\rm v}\in C^0(\mathcal G_{\frak q})} C^{m'}((K_{\rm v}^{+\vec R},K_{\rm v}^{+\vec R} \cap \partial \Sigma_{\frak q,{\rm v}}),(X,L))\\ &\times \prod_{{\rm v}\in C^0(\mathcal G_{\frak p})}\frak V((\frak x_{\frak p} \cup \vec w_{\frak q})_{\rm v})\times ((\vec{\mathcal T}',\infty] \times (\vec{\mathcal T}',\infty] \times \vec S^1). \endaligned \end{equation} Let us denote the target of $\frak F^{(j)}$ by $\frak X(j)$. The map $\frak F^{(2)}$ is a $C^{m'}$-embedding. We define $\pi_{m,m'} : \frak X(2) \to \frak X(1)$ so that it is the identity map for the second factor and the inclusion map $$ C^{m'}((K_{\rm v}^{+\vec R},K_{\rm v}^{+\vec R} \cap \partial \Sigma_{\frak q,{\rm v}}),(X,L)) \to C^{m}((K_{\rm v}^{+\vec R},K_{\rm v}^{+\vec R} \cap \partial \Sigma_{\frak q,{\rm v}}),(X,L)) $$ for the first factor. This map is of $C^{\infty}$ class. We note that $$ \pi_{m,m'} \circ \frak F^{(2)} = \frak F^{(1)} \circ \phi_{12}, $$ since we use the induced stabilization data for $\frak q$. We already proved that $\phi_{12}$ is a diffeomorphism of $C^m$-class to an open subset. Moreover $\frak F^{(2)}$ is an embedding of $C^{m'}$-class. Therefore a neighborhood of $\frak q$ in the image of $\frak F^{(1)}$ is a submanifold of $C^{m'}$-class. The proof of Lemma \ref{lem3143} is complete. \end{proof} We define a $C^{\infty}$ structure on the Kuranishi neighborhood so that $\frak F^{(1)}$ is a diffeomorphism to its image. \begin{lem} The coordinate change $\phi_{12}$ we defined is a diffeomorphism of $C^{\infty}$-class. \end{lem} \begin{proof} We prove the case of $\phi_{12}$ in Lemma \ref{2120lem}. We consider the following commutative diagram.
\begin{equation}\label{cd347333} \begin{CD} \hat V(\frak p,\frak w^{(1)}_{\frak p};(\frak o^{(1)},\mathcal T^{(1)});\frak A) _{\epsilon_{0},\vec{\mathcal T}^{(1)}} @ > {\frak F^{(1)}_{m'}} >> \frak X^{(1)}_{m'} @>>{\pi_{m,m'}}> \frak X^{(1)}_{m}\\ @ AA{\subset}A @ AA\frak H_{12}A @ AA\frak H_{12}A\\ \hat V(\frak p,\frak w^{(2)}_{\frak p};(\frak o^{(2)},\mathcal T^{(2)});\frak A) _{\epsilon_{0},\vec{\mathcal T}^{(2)}} @ > {\frak F^{(2)}_{2m'}}>> \frak X^{(2)}_{2m'} @>>{\pi_{2m,2m'}}> \frak X^{(2)}_{2m} \end{CD} \end{equation} Here \begin{equation} \aligned \frak X^{(2)}_{2m'} : =&\prod_{{\rm v}\in C^0(\mathcal G_{\frak p})} C^{2m'}((K_{\rm v}^{+\vec R},K_{\rm v}^{+\vec R}\cap \partial \Sigma_{\frak p,{\rm v}}),(X,L))\\ &\times \prod_{{\rm v}\in C^0(\mathcal G_{\frak p})}\frak V((\frak x_{\frak p} \cup \vec w_{\frak p})_{\rm v})\times ((\vec{\mathcal T}^{(2)},\infty] \times (\vec{\mathcal T}^{(2)},\infty] \times \vec S^1) \endaligned \end{equation} is the space appearing in (\ref{evaluandTfac}), (\ref{evaluandTfac2}) and the map $\frak F^{(2)}_{2m'}$ is defined as in (\ref{evaluandTfacjj}). (We include $2m'$ in the notation to specify the function space we use.) The space $\frak X^{(1)}_{m'}$ and the map $\frak F^{(1)}_{m'}$ are similarly defined. The two maps $\frak H_{12}$ in the vertical arrows are given by $$ \frak H_{12}(u,(\rho,\vec T,\vec \theta)) = (u\circ \frak v_{(\rho,\vec T,\vec \theta)},\overline{\Phi}_{12}(\rho,\vec T,\vec \theta)). $$ The maps in the horizontal lines are of $C^{\infty}$ class by definition. The map $\frak H_{12}$ in the second vertical line is of $C^{m'}$ class by Sublemma \ref{sublem1}. The map $\frak H_{12}$ in the third vertical line is the one used in the proof of Lemma \ref{2120lem}. Therefore $\phi_{12}$ is of $C^{m'}$-class at $\frak p$.
Note that we can start at an arbitrary point $\frak q$ in the image of $\frak F^{(2)}$ and prove that $\phi_{12}$ is of $C^{m'}$-class for any $m'$ at any such point $\frak q$, by using the proof of Lemma \ref{lem3143}. This implies the lemma in the case of $\phi_{12}$ in Lemma \ref{2120lem}. \par In the other cases, the proof of the smoothness of the coordinate change is similar. \end{proof} We have thus proved that the Kuranishi structure we obtained is of $C^{\infty}$-class. \par\medskip \section{Appendix: Proof of Lemma \ref{transbetweenEs}} \label{Lemma49} \begin{proof}[Proof of Lemma \ref{transbetweenEs}] \begin{sublem} There exists a finite dimensional smooth and compact family $\frak M$ of pairs $(\Sigma,u')$ such that each element of $\mathcal M_{k+1,\ell}(\beta)$ appears as its member. \end{sublem} \begin{proof} We run the gluing argument of Section \ref{glueing} at each point $\frak p \in \mathcal M_{k+1,\ell}(\beta)$ using an obstruction bundle data given at that point. We then obtain a neighborhood of each $\frak p$ in a finite dimensional manifold. We can take finitely many of them to cover $\mathcal M_{k+1,\ell}(\beta)$ by compactness. \end{proof} We take a finite number of $\frak p_c$ so that (\ref{mpccoverM}) is satisfied. For each $c$ and $N\in \Z_+$ we take $E_{c,N} \subset \bigoplus_{\rm v}\Gamma_0({\rm Int}\,K^{\rm obst}_{\rm v}; u_{\frak p_c}^*TX \otimes \Lambda^{01})$ that is isomorphic to $N$ copies of $E_c$ as a $\Gamma_{\frak p_c}$-vector space, with $E_c \subset E_{c,N}$. \par We consider the space of $\Gamma_{\frak p_c}$-equivariant embeddings $\sigma_c : E_c \to E_{c,N}$ in a neighborhood of the original embedding. Each $\sigma_c$ determines a perturbed $E_c$ which we write $E^{\sigma_c}_c$.
\par The condition that $E^{\sigma_c}_c(\frak q) \cap E^{\sigma_{c'}}_{c'} (\frak q) \ne \{0\}$ for some $\frak q \in \frak M$ such that $\frak q \cup \vec w'_{c}$ is $\epsilon_{\frak p_c}$ close to $\frak p_c$ defines a subspace of the set of $(\sigma_c)_{c\in \frak C}$'s whose codimension depends on the number of $c$'s, the dimensions of $E_c$ and $\frak M$, and on $N$. By taking $N$ sufficiently large, we may assume that the set of such $(\sigma_c)_{c\in \frak C}$ is nowhere dense. Namely the conclusion holds after perturbing $E_c$ by an arbitrarily small amount in $E_{c,N}$. \end{proof} \par \newpage \part{$S^1$ equivariant Kuranishi structure and Floer's moduli space} \label{S1equivariant} In Part \ref{S1equivariant}, we explain the Kuranishi structure on the space of connecting orbits in Floer theory for periodic Hamiltonian systems and use it to calculate the Floer homology of a periodic Hamiltonian system. This was done in \cite{FOn}, but here we give a more detailed proof than the one in \cite{FOn}. \par Section \ref{abstractS1equiv} contains the abstract theory of $S^1$-equivariant Kuranishi structures. We define the notion of a Kuranishi structure which admits a locally free $S^1$ action and its good coordinate system. We explain how the construction in Part \ref{Part2} (that is basically the same as \cite{FOn} and \cite[Section A1]{fooo:book1}) can be modified so that all the constructions are $S^1$-equivariant. \par In Section \ref{Floersequation} we review the moduli space of solutions of Floer's equation (the Cauchy-Riemann equation perturbed by a Hamiltonian vector field). \par In Section \ref{construS1equiv} we study the case of a time independent Hamiltonian and prove in detail that Floer's moduli space has an $S^1$ equivariant Kuranishi structure in that case. \par In Section \ref{calcu}, we prove in detail that the Floer homology of a periodic Hamiltonian system is isomorphic to the singular homology. Namely it provides the details of the proof of \cite[Theorem 20.5]{FOn2}.
\par\medskip We also remark that, as of 2012 (when this article was written), there are two proofs of the isomorphism between the Floer homology of a periodic Hamiltonian system and the ordinary homology of a symplectic manifold $X$. One is in \cite{FOn} and uses an identification with the Morse complex in the case where the Hamiltonian is small and time independent. This proof is the same as the one taken in this article. (Namely the proof given in this article coincides with the one in \cite{FOn} except for some technical details.) The other uses Bott-Morse theory and de Rham theory and is in \cite[Section 26]{fooospectr}. (Several other proofs were also given in 1996, by Ruan \cite{Rua99} and Liu-Tian \cite{LiuTi98}.) This second proof has its origin in (the proof of) \cite[Theorem 1.2]{Fuk96II}. \par On the other hand, there is a third method using the Lagrangian Floer cohomology of the diagonal. In this third method we do not need to study $S^1$ equivariant Kuranishi structures at all. See Remark \ref{lagrangeproof}. \section{Definition of $S^1$ equivariant Kuranishi structure, its good coordinate system and perturbation}\label{abstractS1equiv} We define an $S^1$ equivariant Kuranishi structure below. A notion of $T^n$ equivariant Kuranishi structure (in a strong sense) is defined in \cite[Definition B.4]{fooo:toric1}. However, that definition applies to the case when $T^n$ acts on the target. The $S^1$ equivariant Kuranishi structure we use to study time independent Hamiltonians differs from it, since our $S^1$ action comes from automorphisms of the source. Definition \ref{chertS1act} gives a definition in the current case. Let $X$ be a Hausdorff metrizable space on which $S^1$ acts. We assume that the isotropy group of every element is finite.
\begin{defn}\label{chertS1act} Let $(V_p, E_p, \Gamma_p, \psi_p, s_p)$ be a Kuranishi neighborhood of $p\in X$ as in Definition \ref{Definition A1.1}, except that we replace the assumption $\gamma o_p = o_p$ for $\gamma \in \Gamma_p$ by Condition (8) below.\footnote{$o_p \in V_p$ is a point such that $\psi_p([o_p]) = p$.} We define a {\it locally free $S^1$ action} on this chart as follows. \begin{enumerate} \item There exists a group $G_p$ acting effectively on $V_p$ and $E_p$. \item $G_p \supset \Gamma_p$ and the $\Gamma_p$ action extends to the $G_p$ action. \item The identity component $G_{p,0}$ of $G_p$ is isomorphic to $S^1$. We fix an isomorphism $\frak h_p : S^1 \to G_{p,0}$. \item $G_p$ is generated by $\Gamma_p$ and $G_{p,0}$. \item $G_{p,0}$ commutes with the action of $\Gamma_p$. \item The isotropy group of the $G_p$ action on $V_p$ at every point is finite. \item $s_p$ and $\psi_p$ are $G_p$ equivariant. \item The $G_{p,0}$ orbit of $o_p$ is invariant under the $G_p$ action. \end{enumerate} \end{defn} \begin{rem} Note that Conditions (4) and (5) imply that $G_p$ is isomorphic to the direct product $\Gamma_p \times S^1$. \end{rem} The next example shows a reason why we remove the assumption $\gamma o_p = o_p$. \begin{exm}\label{zeifert} We take $V_p = S^1 \times D^2$ and $\Gamma_p = \Z_2$ such that the nontrivial element of $\Gamma_p$ acts by $(t,z) \mapsto (t+1/2,-z)$. (Here $S^1 = \R/\Z$.) $G_{p,0} = S^1$ acts on $V_p$ by rotating the first factor $S^1$. The action of $\Gamma_p$ is free. The quotient space $V_p/\Gamma_p$ is a manifold. The induced $S^1$ action on $V_p/\Gamma_p$ is locally free but is not free. The quotient space $V_p/G_p$ is an orbifold $D^2/\Z_2$. See Example \ref{zeifertgeo}. \end{exm} \begin{defn}\label{chengeS1act} Let $(V_p, E_p, \Gamma_p, \psi_p, s_p)$ and $(V_q, E_q, \Gamma_q, \psi_q, s_q)$ be Kuranishi neighborhoods of $p \in X$ and $q \in \psi_p(s_p^{-1}(0)/\Gamma_p)$, respectively. We assume that they carry locally free $S^1$ actions (with groups $G_p$ and $G_q$, respectively).
Let a triple $(\hat\phi_{pq},\phi_{pq},h_{pq})$ be a coordinate change in the sense of Definition \ref{Definition A1.3}. We say it is {\it $S^1$ equivariant} if the following hold. \begin{enumerate} \item $h_{pq}$ extends to a group homomorphism $G_q \to G_p$, which we denote by $\frak h_{pq}$. \item $V_{pq}$ is $G_q$ invariant. \item $\phi_{pq} : V_{pq} \to V_p$ is $\frak h_{pq}$-equivariant. \item $\hat\phi_{pq}$ is $\frak h_{pq}$-equivariant. \item $ \frak h_{pq}\circ \frak h_{q} = \frak h_{p}. $ \end{enumerate} \end{defn} \begin{defn}\label{Definition A1.5S} Let $(V_p, E_p, \Gamma_p, \psi_p, s_p)$ and $(\hat\phi_{pq},\phi_{pq},h_{pq})$ define a Kuranishi structure in the sense of Definition \ref{Definition A1.5} on $X$. A {\it locally free $S^1$ action} on $X$ is an assignment of a locally free $S^1$ action in the sense of Definition \ref{chertS1act} to each chart $(V_p, E_p, \Gamma_p, \psi_p, s_p)$ such that the coordinate changes are $S^1$ equivariant. \end{defn} \begin{rem} Let $r \in \psi_q((V_{pq}\cap s_q^{-1}(0))/\Gamma_q)$, $q \in \psi_p(s_p^{-1}(0)/\Gamma_p)$. There exists $\gamma_{pqr}^{\alpha} \in \Gamma_p$ for each connected component $(\phi_{qr}^{-1}(V_{pq}) \cap V_{qr} \cap V_{pr})_\alpha$ of $\phi_{qr}^{-1}(V_{pq}) \cap V_{qr} \cap V_{pr}$ by Definition \ref{Definition A1.5} (2). We automatically have \begin{equation}\label{addiionalcond} \frak h_{pq} \circ \frak h_{qr} = \gamma_{pqr}^{\alpha} \cdot \frak h_{pr} \cdot (\gamma_{pqr}^{\alpha})^{-1}, \end{equation} because $S^1$ lies in the center and this formula is already assumed for $\Gamma_r$. \end{rem} \begin{lemdef}\label{lemdef1} If $X$ has a Kuranishi structure with a locally free $S^1$ action then $X/S^1$ has an induced Kuranishi structure. \end{lemdef} \begin{proof} Let $p\in X$. We take $o_p \in V_p$ and choose a local transversal $\overline V_p$ to the $S^1$ orbit $G_{p,0}o_p$. We put $$ \Gamma^+_p = \{ \gamma \in G_p \mid \gamma o_p = o_p\}.
$$ By Definition \ref{chertS1act} (6), $\Gamma^+_p$ is a finite group. We may choose $\overline V_p$ so that it is invariant under $\Gamma^+_p$. We restrict $E_p$ to $\overline V_p$ to obtain $\overline E_p$. The Kuranishi map $s_p$ induces $\overline s_p$. \par We may shrink our Kuranishi neighborhood and may assume that \begin{equation}\label{equiv1} V_p = G_{p,0}\cdot\overline V_p \end{equation} for all $p$. \par For $x \in \overline V_p$ satisfying $s_p(x) = 0$, we define $\overline\psi_p(x)$ to be the equivalence class of $\psi_p(x)$ in $X/S^1$. It is easy to see that $(\overline V_p,\Gamma^+_p,\overline E_p,\overline s_p,\overline\psi_p)$ is a Kuranishi chart of $X/S^1$ at $[p]$. \par Let $[q] = \overline\psi_p(\tilde q)$, where $\tilde q \in \overline V_p$. Choose $q \in X$ such that $q = \psi_p(\tilde q)$. We have a coordinate transformation $(V_{pq},\phi_{pq},\hat\phi_{pq})$ and a group homomorphism $\frak h_{pq} : G_q\to G_p$ such that $$ \tilde q = g_{pq}(o_q) \cdot \phi_{pq}(o_q) $$ holds for some $g_{pq}(o_q) \in G_{p,0}$. Moreover there exists a smooth map $$ g_{pq} : \overline V_{pq} \to G_{p,0} $$ such that it coincides with $g_{pq}(o_q)$ at $o_q$ and $$ g_{pq}(x) \cdot \phi_{pq}(x) \in \overline V_{p}. $$ Here $\overline V_{pq}$ is a neighborhood of $o_q$ in $\overline V_q$. We define $$ \overline{\phi}_{pq}(x) = g_{pq}(x) \cdot \phi_{pq}(x). $$ We shrink $V_{pq}$ and may assume \begin{equation}\label{equiV2} V_{pq} = G_{q,0}\cdot\overline V_{pq}. \end{equation} \par By definition $$ \Gamma^+_{q} = \{ \gamma \in G_q \mid \gamma o_q = o_q\}. $$ Using the fact that $$ \{\gamma \in \Gamma_{p}\mid \gamma\cdot \phi_{pq}(o_q) = \phi_{pq}(o_q) \} = h_{pq}(\Gamma_{q}) $$ and $G_{p,0}$ is contained in the center, we find that $$ \{ \gamma \in G_p \mid \gamma \overline{\phi}_{pq}(o_q) = \overline{\phi}_{pq}(o_q)\} = \frak h_{pq}(\Gamma^+_{q}) \subset \Gamma_p^+. $$ We denote by $\overline h_{pq}$ the restriction of $\frak h_{pq}$ to $\Gamma^+_{q}$.
It is easy to see that $\overline{\phi}_{pq}$ is $\overline h_{pq}$ equivariant. We can lift $\overline{\phi}_{pq}$ to $\hat{\overline{\phi}}_{pq}$ using ${\phi}_{pq}$ and the $G_p$ action on $E_p$. \par We have thus constructed a coordinate change of our Kuranishi structure on $X/S^1$. It is straightforward to check the compatibility among the coordinate changes. \end{proof} We next define a good coordinate system. We note that in Part \ref{Part2} we defined a chart of a good coordinate system as an orbifold that is not necessarily a global quotient. So we define a notion of a locally free $S^1$ action on an orbifold. \begin{defn} Let $U$ be an orbifold on which $S^1$ acts effectively as a topological group. We assume that the isotropy group of this $S^1$ action is always finite. We say that the action is a {\it smooth action on an orbifold} if the following holds for each $p \in U$. \par There exist an $S^1$ equivariant neighborhood $U_p$ of $p$ in $U$ and a manifold $V_p$ on which a group $G_p$ acts, such that $(V_p,\Gamma_p,\psi_p)$ is a chart of $U$ as an orbifold. The conditions (1)-(6) in Definition \ref{chertS1act} hold and $\psi_p$ is $G_p$ equivariant. Moreover the $S^1$ action on $V_p/\Gamma_p \subset U$ induced by $\frak h_p : S^1 \to G_{p,0}$ coincides with the given $S^1$ action. \end{defn} Let $S^1$ act effectively on $X$ and assume that its isotropy groups are finite. \begin{defn}\label{goodcoordinatesystemS} Suppose $X$ has a locally free $S^1$ equivariant Kuranishi structure. An \emph{$S^1$ equivariant good coordinate system} on it is $(U_{\frak p}, E_{\frak p}, \psi_{\frak p}, s_{\frak p})$, $(U_{\frak p \frak q},\hat\phi_{\frak p \frak q},\phi_{\frak p\frak q})$ as in Definition \ref{goodcoordinatesystem}. We require the following in addition. \begin{enumerate} \item There exists a smooth $S^1$ action on $U_{\frak p}$ and $E_{\frak p}$. \item $\psi_{\frak p}$, $s_{\frak p}$ are $S^1$ equivariant.
\item $U_{\frak p \frak q}$ is $S^1$ invariant and $\hat\phi_{\frak p \frak q},\phi_{\frak p\frak q}$ are $S^1$ equivariant. \end{enumerate} \end{defn} Note that the notions of $S^1$-equivariance of maps and subsets are defined set-theoretically. \begin{lem} If $(U_{\frak p}, E_{\frak p}, \psi_{\frak p}, s_{\frak p})$, $(U_{\frak p \frak q},\hat\phi_{\frak p \frak q},\phi_{\frak p\frak q})$ is an $S^1$ equivariant good coordinate system then it induces a good coordinate system of $X/S^1$, that is $(\overline U_{\frak p}, \overline E_{\frak p}, \overline \psi_{\frak p}, \overline s_{\frak p})$, $(\overline U_{\frak p \frak q},\hat{\overline \phi}_{\frak p \frak q},\overline \phi_{\frak p\frak q})$, where $\overline U_{\frak p} = U_{\frak p}/S^1$ etc. \end{lem} \begin{proof} Apply the construction of Lemma-Definition \ref{lemdef1} locally. \end{proof} \begin{prop}\label{S1goodcoordinate} For any locally free $S^1$ equivariant Kuranishi structure we can find an $S^1$ equivariant good coordinate system. \end{prop} \begin{proof} The proof uses the construction of a good coordinate system in Section \ref{sec:existenceofGCS}. We defined and used the notions of pure and mixed orbifold neighborhoods there, and constructed them for a Kuranishi structure. We will use pure and mixed orbifold neighborhoods of the Kuranishi structure on $X/S^1$ and extend them to ones on $X$. The details follow. \par We stratify $\overline X = X/S^1 = \bigcup_{\frak d}\overline X(\frak d)$ where $[p]\in \overline X(\frak d)$ if $\dim \overline U_{[p]} = \frak d$. So $X(\frak d+1)/S^1 = \overline X(\frak d)$. Let $\overline{\mathcal K}_*$ be a compact subset of $\overline X(\frak d)$. Let ${\mathcal K}_*$ be an $S^1$-invariant compact subset of $X(\frak d+1)$ such that $\overline{\mathcal K}_* = {\mathcal K}_*/S^1$. In Proposition \ref{purecover} we constructed a pure orbifold neighborhood $\overline U_*$ of ${\mathcal K}_*/S^1$.
\begin{lem}\label{S1tuki44} There exists a pure orbifold neighborhood $U_*$ of ${\mathcal K}_*$ on which $S^1$ acts such that $U_*/S^1 = \overline U_*$. \end{lem} \begin{rem} This lemma is somewhat loosely stated, since we did not define the notion of an $S^1$ action on a pure orbifold neighborhood. The definition is: $U_{*}$ has a locally free effective smooth $S^1$ action and all the structure maps commute with the $S^1$ action. \end{rem} \begin{proof} We can prove this lemma by examining the proof of Proposition \ref{purecover}. Namely $\overline U_*$ is obtained by gluing various Kuranishi charts and restricting them to suitable open subsets. We take the inverse images of those charts under $U_{\frak p} \to \overline{U}_{\frak p}$. We can then glue and restrict them in the same way to obtain $U_*$. We omit the details. \end{proof} In Section \ref{sec:existenceofGCS} we then proceed to define a mixed orbifold neighborhood of $\overline X(D)$ for an ideal $D \subset \frak D$. For such an ideal we put $D^{+1} = \{\frak d+1 \mid \frak d \in D\}$. \begin{lem}\label{S1tuki442} We assume that $\{\overline{\mathcal U}_{\frak d}\}$ together with other data provide the mixed orbifold neighborhood of $\overline X(D)$ obtained in Proposition \ref{existmixed}. \par Then we can take an $S^1$ equivariant mixed orbifold neighborhood $\{{\mathcal U}_{\frak d+1}\}$ (plus other data) on $X(D^{+1})$ such that $\overline{\mathcal U}_{\frak d} = {\mathcal U}_{\frak d+1}/S^1$. \end{lem} \begin{proof} This is proved again by examining the proof of Proposition \ref{existmixed} and checking that the gluing process there can be lifted. This is actually fairly obvious. \end{proof} We note that the chart of the good coordinate system on $\overline X$ constructed in Part \ref{Part2} is $\overline{\mathcal U}_{\frak d}$ and the other data of the good coordinate system are obtained from the structure maps etc. of the mixed orbifold neighborhood.
Therefore $ {\mathcal U}_{\frak d+1}$ becomes the required $S^1$ equivariant good coordinate system of $X$. The proof of Proposition \ref{S1goodcoordinate} is complete. \end{proof} \begin{lem}\label{lem214} If the dimension of $X/S^1$ in the sense of the Kuranishi structure is $-1$ then there exists an $S^1$ equivariant multisection on the good coordinate system $\mathcal U_{\frak p}$ of $X$ whose zero set is empty. \end{lem} \begin{proof} It suffices to define an appropriate notion of pull back of multisections of $\overline{\mathcal U}_{\frak p}$ to ones on $\mathcal U_{\frak p}$. This is routine. \end{proof} \section{Floer's equation and its moduli space} \label{Floersequation} In this section we are concerned with the moduli space of solutions of Floer's perturbed Cauchy-Riemann equation. Such moduli spaces appear in proofs of various versions of the Arnold conjecture. In the next section, we prove the existence of an $S^1$ equivariant Kuranishi structure on such a moduli space in the case when the Hamiltonian is time independent. Let $H : X \times S^1 \to \R$ be a smooth function on a symplectic manifold $X$. We put $H_t(x) = H(x,t)$ where $t \in S^1$ and $x \in X$. The function $H_t$ generates the Hamiltonian vector field $\frak X_{H_t}$ by $$ i_{\frak X_{H_t}}\omega = d{H_t}. $$ We denote by $\frak P(H)$ the set of all 1-periodic orbits of the time dependent vector field $\frak X_{H_t}$. We put $$ \tilde{\frak P}(H) = \{(\gamma,w) \mid \gamma \in {\frak P}(H),\,\, w : D^2 \to X,\,\, w(e^{2\pi it}) = \gamma(t)\}/\sim, $$ where $(\gamma,w) \sim (\gamma',w')$ if and only if $\gamma = \gamma'$ and $$ \omega([w] - [w']) = 0,\quad c_1([w] - [w']) = 0. $$ Here $\omega$ is the symplectic form and $c_1$ is the first Chern class of $X$. \begin{assump}\label{nondeg1} All the 1-periodic orbits of the time dependent vector field $\frak X_{H_t}$ are non-degenerate.
\end{assump} Following \cite{Flo89I}, we consider the maps $h : \R \times S^1 \to X$ that satisfy \begin{equation}\label{Fleq} \frac{\partial h}{\partial \tau} + J \left( \frac{\partial h}{\partial t} - \frak X_{H_t} \right) = 0. \end{equation} Here $\tau$ and $t$ are the coordinates of $\R$ and $S^1 = \R/\Z$, respectively. For $\tilde{\gamma}^{\pm} = ({\gamma}^{\pm},w^{\pm}) \in \tilde{\frak P}(H)$ we consider the boundary condition \begin{equation}\label{bdlatinf} \lim_{\tau \to \pm \infty}h(\tau,t) = \gamma^{\pm}(t). \end{equation} The following result due to Floer \cite{Flo89I} is by now well established. \begin{prop} We assume Assumption \ref{nondeg1}. Then for any solution $h$ of (\ref{Fleq}) with $$ \int_{\R \times S^1} \left\Vert\frac{\partial h}{\partial \tau}\right\Vert^2 d\tau dt < \infty $$ there exists ${\gamma}^{\pm} \in {\frak P}(H)$ such that (\ref{bdlatinf}) is satisfied. \end{prop} Let $\tilde{\gamma}^{\pm} = ({\gamma}^{\pm},w^{\pm}) \in \tilde{\frak P}(H)$. \begin{defn}\label{tildeMreg} We denote by $\widetilde{\mathcal M}^{\text{\rm reg}}(X,H;\tilde{\gamma}^{-},\tilde{\gamma}^{+})$ the set of all maps $h : \R \times S^1 \to X$ that satisfy (\ref{Fleq}), (\ref{bdlatinf}) and $$ w^- \# h \sim w^+. $$ Here $\#$ is an obvious concatenation. \par The translation along $\tau \in \R$ defines an $\R$ action on $\widetilde{\mathcal M}^{\text{\rm reg}}(X,H;\tilde{\gamma}^{-},\tilde{\gamma}^{+})$. This $\R$ action is free unless $\tilde{\gamma}^{-} = \tilde{\gamma}^{+}$. We denote by ${\mathcal M}^{\text{\rm reg}}(X,H;\tilde{\gamma}^{-},\tilde{\gamma}^{+})$ the quotient space of this action. \end{defn} \begin{thm}{\rm (\cite[Theorem 19.14]{FOn})}\label{connectingcompactka} We assume Assumption \ref{nondeg1}. \begin{enumerate} \item The space ${\mathcal M}^{\text{\rm reg}}(X,H;\tilde{\gamma}^{-},\tilde{\gamma}^{+})$ has a compactification ${\mathcal M}(X,H;\tilde{\gamma}^{-},\tilde{\gamma}^{+})$. 
\item The compact space ${\mathcal M}(X,H;\tilde{\gamma}^{-},\tilde{\gamma}^{+})$ has an oriented Kuranishi structure with corners. \item The codimension $k$ corner of ${\mathcal M}(X,H;\tilde{\gamma}^{-},\tilde{\gamma}^{+})$ is identified with the union of $$ \prod_{i=0}^{k} {\mathcal M}(X,H;\tilde{\gamma}_{i},\tilde{\gamma}_{i+1}) $$ over the $(k+2)$-tuples $(\tilde{\gamma}_{0},\dots,\tilde{\gamma}_{k+1})$ such that $\tilde{\gamma}_{0} = \tilde{\gamma}^{-}$, $\tilde{\gamma}_{k+1}=\tilde{\gamma}^{+}$ and $\tilde{\gamma}_{i} \in \tilde{\frak P}(H)$. \end{enumerate} \end{thm} The proof is in Section \ref{calcu}. \par The main purpose of this section is to explain the proof of the next result. We consider the case when $H$ is time independent. In this case, Assumption \ref{nondeg1} implies that $H : X \to \R$ is a Morse function. We also assume the following: \begin{assump}\label{nondeg2} \begin{enumerate} \item The gradient vector field of $H$ satisfies the Morse-Smale condition. \item Any 1-periodic orbit of $\frak X_H$ is a constant loop. (Namely it corresponds to a critical point of $H$.) \end{enumerate} \end{assump} Condition (1) is satisfied for generic $H$. We can replace $H$ by $\epsilon H$ for small $\epsilon$ so that (2) is also satisfied. \par By assumption, elements of ${\frak P}(H)$ are constant loops. We write $\frak z \in X$ for such an element. We put $$ \Pi = \frac{\text{\rm Im}~ (\pi_2(X) \to H_2(X;\Z))}{\text{Ker}(c_1) \cap \text{Ker}(\omega) \cap \text{\rm Im}~(\pi_2(X) \to H_2(X;\Z))}. $$ Here we regard $c_1 : H_2(X;\Z) \to \Z$, $\omega : H_2(X;\Z) \to \R$. An element of $\tilde{\frak P}(H)$ is regarded as a pair $(\frak z,\alpha)$, where $\frak z$ is a critical point of $H$ and $\alpha\in \Pi$. \par We put $$ {\mathcal M}^{\text{\rm reg}}(X,H;\frak z^{-},\frak z^{+};\alpha) = {\mathcal M}^{\text{\rm reg}}(X,H;(\frak z^-,\alpha^{-}),(\frak z^+,\alpha^{-}+\alpha)). $$ It is easy to see that the right hand side is independent of $\alpha^{-} \in \Pi$.
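To see this independence explicitly, represent $\beta \in \Pi$ by a sphere and write $w \# \beta$ for the disc obtained by attaching that sphere to a disc $w$. (We spell out this routine verification for the reader's convenience.) The condition $w^{-} \# h \sim w^{+}$ in Definition \ref{tildeMreg} holds if and only if $(w^{-}\#\beta) \# h \sim w^{+}\#\beta$, since
$$
\omega\big([(w^{-}\#\beta) \# h] - [w^{+}\#\beta]\big) = \omega\big([w^{-}\# h] - [w^{+}]\big), \qquad
c_1\big([(w^{-}\#\beta) \# h] - [w^{+}\#\beta]\big) = c_1\big([w^{-}\# h] - [w^{+}]\big).
$$
Hence $h \mapsto h$ identifies the moduli spaces defined using $\alpha^{-}$ and $\alpha^{-} + \beta$.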
\par Let ${\mathcal M}(X,H;\frak z^-,\frak z^+;\alpha)$ be its compactification as in Theorem \ref{connectingcompactka}. \par Let ${\mathcal M}^{\text{\rm reg}}(X,H;\frak z^{-},\frak z^{+};\alpha)^{S^1}$ be the fixed point set of the $S^1$ action obtained by $t_0h(\tau,t) = h(\tau,t+t_0)$. It is easy to see that this set is empty unless $\alpha = 0$, and in the case $\alpha = 0$ the fixed point set ${\mathcal M}^{\text{\rm reg}}(X,H;\frak z^{-},\frak z^{+};0)^{S^1}$ can be identified with the set of gradient lines of $H$ joining $\frak z^{-}$ to $\frak z^{+}$. This identification can be extended to their compactifications. \begin{assump}\label{nondeg3} \begin{enumerate} \item ${\mathcal M}(X,H;\frak z^{-},\frak z^{+};0)^{S^1}$ is an open subset of ${\mathcal M}(X,H;\frak z^{-},\frak z^{+};0)$. Namely any solution of (\ref{Fleq}) which is sufficiently close to an $S^1$ equivariant solution is $S^1$ equivariant. \item The moduli space ${\mathcal M}(X,H;\frak z^{-},\frak z^{+};0)$ is Fredholm regular at each point of ${\mathcal M}(X,H;\frak z^{-},\frak z^{+};0)^{S^1}$. \end{enumerate} \end{assump} \begin{lem}\label{identifiedwithgrad} Assumption \ref{nondeg3} is satisfied if we replace $H$ by $\epsilon H$ for a sufficiently small $\epsilon$. \end{lem} \begin{proof} (2) is proved in \cite[page 1038]{FOn}. More precisely, it is proved there that for sufficiently small $\epsilon$ the following holds. Let $\ell$ be a gradient line joining $\frak z^{-}$ to $\frak z^{+}$ and let $h_{\ell}$ be the corresponding element of ${\mathcal M}(X,\epsilon H;\frak z^{-},\frak z^{+};0)^{S^1}$. We consider the deformation complexes of the gradient line equation at $\ell$ and of the equation (\ref{Fleq}) at $h_{\ell}$. The kernel and the cokernel of the former are contained in the kernel and the cokernel of the latter, respectively. It is proved in \cite[page 1038]{FOn} that they actually coincide if $\epsilon$ is sufficiently small.
\par Since $H$ satisfies the Morse-Smale condition, the element $\ell$ is Fredholm regular in the moduli space of gradient lines. Therefore by the above mentioned result the moduli space ${\mathcal M}^{\text{\rm reg}}(X,\epsilon H;\frak z^{-},\frak z^{+};0)$ is Fredholm regular at $h_{\ell}$. This implies (2). (1) is a consequence of the same result and the implicit function theorem. (We note that we can prove the same result at the points of ${\mathcal M}(X,\epsilon H;\frak z^{-},\frak z^{+};\alpha)^{S^1} \setminus {\mathcal M}^{\text{\rm reg}}(X,\epsilon H;\frak z^{-},\frak z^{+};\alpha)^{S^1}$ in the same way.) \end{proof} Thus, replacing $H$ by $\epsilon H$ if necessary, we may assume that $H$ satisfies Assumption \ref{nondeg3}. We put \begin{equation}\label{M0000} {\mathcal M}_0(X,H;{\frak z}^{-},{\frak z}^{+};0) = {\mathcal M}(X,H;{\frak z}^{-},{\frak z}^{+};0) \setminus {\mathcal M}(X,H;{\frak z}^{-},{\frak z}^{+};0)^{S^1}. \end{equation} Lemma \ref{identifiedwithgrad} implies that ${\mathcal M}_0(X,H;{\frak z}^{-},{\frak z}^{+};0)$ is open and closed in ${\mathcal M}(X,H;{\frak z}^{-},{\frak z}^{+};0)$. \begin{thm}{\rm (\cite[page 1036]{FOn})}\label{existKuran} If we assume Assumptions \ref{nondeg1}, \ref{nondeg2} and \ref{nondeg3}, then the following holds. \begin{enumerate} \item In case $\alpha \ne 0$ the Kuranishi structure on ${\mathcal M}(X,H;{\frak z}^{-},{\frak z}^{+};\alpha)$ can be taken to be $S^1$ equivariant. \item In case $\alpha = 0$ the same conclusion holds for ${\mathcal M}_0(X,H;{\frak z}^{-},{\frak z}^{+};0)$. \end{enumerate} \end{thm} \section{$S^1$ equivariant Kuranishi structure for the Floer homology of time independent Hamiltonian} \label{construS1equiv} In this section we prove Theorem \ref{existKuran} in detail. We begin by describing the compactification ${\mathcal M}(X,H;{\frak z}^{-},{\frak z}^{+};\alpha)$ of the moduli space ${\mathcal M}^{\rm reg}(X,H;\frak z^-,\frak z^+;\alpha)$.
Here we include the case $\alpha = 0$ and the $S^1$ fixed points, since they will appear in the fiber product factors of the compactification. \par We consider $(\Sigma,z_-,z_+)$, a genus zero semistable curve with two marked points. \begin{defn}\label{defn41} Let $\Sigma_0$ be the union of the irreducible components of $\Sigma$ such that \begin{enumerate} \item $z_-,z_+ \in \Sigma_0$. \item $\Sigma_0$ is connected. \item $\Sigma_0$ is smallest among those satisfying (1),(2) above. \end{enumerate} We call $\Sigma_0$ the {\it mainstream} of $(\Sigma,z_-,z_+)$, or simply, of $\Sigma$. An irreducible component of $\Sigma$ that is not contained in $\Sigma_0$ is called a {\it bubble component}. \par Let $\Sigma_a \subset \Sigma$ be an irreducible component of the mainstream. If $z_- \notin \Sigma_a$ then there exists a unique singular point $z_{a,-}$ of $\Sigma$ contained in $\Sigma_a$ such that \begin{enumerate} \item $z_-$ and $\Sigma_a \setminus\{z_{a,-}\}$ belong to different connected components of $\Sigma \setminus\{z_{a,-}\}$. \item $z_+$ and $\Sigma_a \setminus\{z_{a,-}\}$ belong to the same connected component of $\Sigma \setminus\{z_{a,-}\}$. \end{enumerate} In case $z_- \in \Sigma_a$ we set $z_- = z_{a,-}$. \par We define $z_{a,+}$ in the same way. \par A {\it parametrization of the mainstream} of $(\Sigma,z_-,z_+)$ is $\varphi = \{\varphi_a\}$, where $\varphi_a : \R \times S^1 \to \Sigma_a$ for each irreducible component $\Sigma_a$ of the mainstream, such that: \begin{enumerate} \item $\varphi_a$ is a biholomorphic map $\varphi_a : \R \times S^1 \cong \Sigma_a \setminus \{z_{a,-},z_{a,+}\}$. \item $\lim_{\tau \to \pm \infty}\varphi_a(\tau,t) = z_{a,\pm}$. \end{enumerate} \end{defn} \begin{defn}\label{defn210} We denote by $\widehat{\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha)$ the set of triples $((\Sigma,z_-,z_+),u,\varphi)$ satisfying the following conditions: \begin{enumerate} \item $(\Sigma,z_-,z_+)$ is a genus zero semistable curve with two marked points.
\item $\varphi$ is a parametrization of the mainstream. \item $u : \Sigma \to X$ is a continuous map. \item If $\Sigma_a$ is an irreducible component of the mainstream and $\varphi_a : \R \times S^1 \to \Sigma_a$ is as above then the composition $h_a = u \circ \varphi_a$ satisfies the equation (\ref{Fleq}). \item If $\Sigma_a$ is a bubble component then $u$ is pseudo-holomorphic on it. \item $u(z_-) = \frak z^-$, $u(z_+) = \frak z^+$. \item $u_*[\Sigma] = \alpha$. Here $\alpha \in \Pi$. \end{enumerate} \end{defn} \begin{defn}\label{3equivrel} On the set $\widehat{\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha)$ we define three equivalence relations $\sim_1$, $\sim_2$, $\sim_3$ as follows. \par $((\Sigma,z_-,z_+),u,\varphi) \sim_1 ((\Sigma',z'_-,z'_+),u',\varphi')$ if and only if there exists a biholomorphic map $v : \Sigma \to \Sigma'$ with the following properties: \begin{enumerate} \item $u = u'\circ v$. \item $v(z_{-}) = z'_{-}$ and $v(z_{+}) = z'_{+}$. In particular $v$ sends the mainstream of $\Sigma$ to the mainstream of $\Sigma'$. \item If $\Sigma_a$ is an irreducible component of the mainstream of $\Sigma$ and $v(\Sigma_a) = \Sigma'_a$, then we have \begin{equation}\label{preserveparame} v \circ \varphi_a = \varphi'_a. \end{equation} \end{enumerate} \par The equivalence relation $\sim_2$ is defined by replacing (\ref{preserveparame}) by the existence of $\tau_a$ such that \begin{equation}\label{preserveparame2} (v \circ \varphi_a)(\tau,t) = \varphi'_a(\tau+\tau_a,t). \end{equation} \par The equivalence relation $\sim_3$ is defined by requiring only (1), (2) above. (Namely by removing condition (3).) \end{defn} \begin{rem} After taking the $\sim_3$ equivalence class, the datum $\varphi$ is forgotten. Namely $((\Sigma,z_-,z_+),u,\varphi) \sim_3 ((\Sigma,z_-,z_+),u,\varphi')$ for any $\varphi, \varphi'$.
\end{rem} \begin{defn}\label{defn45} We put $$\aligned \widetilde{\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha) &= \widehat{\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha)/\sim_1, \\ {\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha) &= \widehat{\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha)/\sim_2, \\ \overline{\overline{\mathcal M}}(X,H;\frak z^-,\frak z^+,\alpha) &= \widehat{\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha)/\sim_3. \endaligned$$ We use $\frak p$ etc. to denote an element of ${\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha)$ and denote by $[\frak p]$ its equivalence class in $\overline{\overline{\mathcal M}}(X,H;\frak z^-,\frak z^+,\alpha)$. \par Let $((\Sigma,z_-,z_+),u,\varphi)$ be an element of $\widehat{\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha)$. Suppose the mainstream of $\Sigma$ has $k$ irreducible components. We add each bubble tree to the irreducible component of the mainstream where it is rooted. We thus have obtained a decomposition \begin{equation}\label{stedec} \Sigma = \sum_{i=1}^k \Sigma_{i}. \end{equation} Here $z_- \in \Sigma_1$, $z_+ \in \Sigma_k$ and $\# (\Sigma_i \cap \Sigma_{i+1}) = 1$. We call each summand of (\ref{stedec}) a {\it mainstream component}. We put $z_{i+1} = \Sigma_i \cap \Sigma_{i+1}$ and call it the $(i+1)$-th {\it transit point}. We put $\frak z_i = u(z_i)$ and call it a {\it transit image}. \par Let $\frak p = ((\Sigma,z_-,z_+),u,\varphi) \in \widehat{\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha)$ and let $\Sigma_i$ be one of its mainstream components. We restrict $u$, $\varphi$ to $\Sigma_i$ and obtain $\frak p_i$. We say that $\Sigma_i$ is a {\it gradient line component} if $\frak p_i$ is a fixed point of the $S^1$ action. \par We next forget the map $u$ of an element of $\widehat{\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha)$ and remember only the homology class of $u\vert_{\Sigma_{\rm v}}$ on each irreducible component and the images $u(z_i)$ of the transit points (which are critical points of $H$).
We then obtain a decorated moduli space of domain curves denoted by $\widehat{\mathcal M}(\frak z^-,\frak z^+,\alpha)$. We define equivalence relations $\sim_j$ ($j=1,2,3$) on it in the same way and obtain $\widetilde{\mathcal M}(\frak z^-,\frak z^+,\alpha)$, ${\mathcal M}(\frak z^-,\frak z^+,\alpha)$, and $\overline{\overline{\mathcal M}}(\frak z^-,\frak z^+,\alpha)$. For each element $\frak p$ of $\widehat{\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha)$ etc., we denote by $\frak x_{\frak p}$ the element of $\widehat{\mathcal M}(\frak z^-,\frak z^+,\alpha)$ etc. obtained by forgetting $u$ as above. \par For each $\frak p \in \widehat{\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha)$ etc. or $\frak x \in \widehat{\mathcal M}(\frak z^-,\frak z^+,\alpha)$ etc., we define a graph $\mathcal G_{\frak p}$ or $\mathcal G_{\frak x}$ in the same way as in Section \ref{Graph}. We include the data of the homology class of each component and the images of the transit points in $\mathcal G_{\frak p}$ (resp. $\mathcal G_{\frak x}$). We call $\mathcal G_{\frak p}$ (resp. $\mathcal G_{\frak x}$) the combinatorial type of $\frak p$ (resp. $\frak x$). We denote by $\widehat{\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha;\mathcal G)$ etc. or $\widehat{\mathcal M}(\frak z^-,\frak z^+,\alpha;\mathcal G)$ etc. the subset of the objects with combinatorial type $\mathcal G$. \end{defn} We consider the subset $\widehat{\mathcal M}^{\text{\rm reg}}(X,H;\frak z^-,\frak z^+,\alpha)$ of $\widehat{\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha)$ consisting of all the elements $((\Sigma,z_-,z_+),u,\varphi)$ such that $\Sigma = S^2$. Let $\widetilde{\mathcal M}^{\text{\rm reg}}(X,H;\frak z^-,\frak z^+,\alpha)$, ${\mathcal M}^{\text{\rm reg}}(X,H;\frak z^-,\frak z^+,\alpha)$, $\overline{\overline{\mathcal M}}^{\text{\rm reg}}(X,H;\frak z^-,\frak z^+,\alpha)$ be the $\sim_1$, $\sim_2$, $\sim_3$ equivalence classes of $\widehat{\mathcal M}^{\text{\rm reg}}(X,H;\frak z^-,\frak z^+,\alpha)$, respectively. 
\par It is easy to see that $\widetilde{\mathcal M}^{\text{\rm reg}}(X,H;\frak z^-,\frak z^+,\alpha)$, ${\mathcal M}^{\text{\rm reg}}(X,H;\frak z^-,\frak z^+,\alpha)$ coincide with the ones in Definition \ref{tildeMreg}. In particular \begin{equation}\label{Rquotient} {\mathcal M}^{\text{\rm reg}}(X,H;\frak z^-,\frak z^+,\alpha) \cong \widetilde{\mathcal M}^{\text{\rm reg}}(X,H;\frak z^-,\frak z^+,\alpha)/\R. \end{equation} Here the $\R$ action is obtained by the translation along the $\R$ direction of the source and is free. \par Moreover we have \begin{equation} \overline{\overline{\mathcal M}}^{\text{\rm reg}}(X,H;\frak z^-,\frak z^+,\alpha) \cong {\mathcal M}^{\text{\rm reg}}(X,H;\frak z^-,\frak z^+,\alpha)/S^1. \end{equation} Here the $S^1$ action is induced by the $S^1$ action on the source. However we note that the fiber of the canonical map \begin{equation}\label{C*action} {\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha) \to \overline{\overline{\mathcal M}}(X,H;\frak z^-,\frak z^+,\alpha) \end{equation} between the compactified moduli spaces may be bigger than $S^1$. In fact, if $(\Sigma,z_-,z_+)$ has $k$ mainstream components, the fiber of $[(\Sigma,z_-,z_+),u,\varphi]$ in ${\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha)$ is $(S^1)^k$ at generic points. \par On the other hand there exists an $S^1$ action on ${\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha)$ obtained by $$ t_0\cdot [(\Sigma,z_-,z_+),u,\varphi] = [(\Sigma,z_-,z_+),u,t_0\cdot\varphi] $$ where $$ (t_0\cdot \varphi)_a(\tau,t) = \varphi_a(\tau,t+t_0). $$ \begin{defn} $$ \overline{\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha) = {\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha)/S^1. $$ \end{defn} The map (\ref{C*action}) factors through $\overline{\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha)$. \par We can prove that $\overline{\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha)$ is compact in the same way as \cite[Theorem 11.1]{FOn}.
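\begin{rem} To see why the fiber of (\ref{C*action}) can be bigger than $S^1$, note that on an element with $k$ mainstream components the parametrizations can be rotated independently: $(t_1,\dots,t_k) \in (S^1)^k$ sends $\varphi_i(\tau,t)$ to $\varphi_i(\tau,t+t_i)$. For generic elements this changes the $\sim_2$ equivalence class, but it never changes the $\sim_3$ class, since $\sim_3$ forgets $\varphi$. Moreover the $S^1$ action just defined is the diagonal of this $(S^1)^k$ action. Hence, at generic points, the induced map
$$
\overline{\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha) \to \overline{\overline{\mathcal M}}(X,H;\frak z^-,\frak z^+,\alpha)
$$
has fiber $(S^1)^k/S^1 \cong (S^1)^{k-1}$.
\end{rem}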
\par In place of taking the quotient by the $\R$ action in (\ref{Rquotient}) we can require the following balancing condition. (In other words we can take a global section of this $\R$ action.) \begin{defn}\label{balancedcond} Let $((\Sigma,z_-,z_+),u,\varphi) \in \widetilde{\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha)$. Suppose that it has only one mainstream component. We define a function $\mathcal A : \R \setminus \text{a finite set} \to \R$ as follows. \par Let $\tau_0 \in \R$. We assume $\varphi(\{\tau_0\} \times S^1)$ does not contain a root of a bubble tree. (This is how the finite set is removed from the domain of $\mathcal A$.) Let $\Sigma_{{\rm v}_i}$, $i=1,\dots,m$, be the irreducible components that lie in a bubble tree rooted on $\R_{\le \tau_0}\times S^1$. We define \begin{equation} \mathcal A(\tau_0) = \sum_{i=1}^m \int_{\Sigma_{{\rm v}_i}} u^*\omega + \int_{\tau=-\infty}^{\tau_0}\int_{t\in S^1} (u\circ\varphi)^*\omega + \int_{t\in S^1} H(u(\varphi(\tau_0,t))) dt. \end{equation} This is a nondecreasing function and satisfies $$ \lim_{\tau\to-\infty} \mathcal A(\tau) = H(\frak z^-), \qquad \lim_{\tau\to+\infty} \mathcal A(\tau) = H(\frak z^+) + \alpha \cap \omega. $$ We say $\varphi$ satisfies the {\it balancing condition} if \begin{equation} \lim_{\tau < 0 \atop \tau\to 0} \mathcal A(\tau) \le \frac{1}{2} \left( H(\frak z^-) + H(\frak z^+) + \alpha \cap \omega \right) \le \lim_{\tau > 0 \atop \tau\to 0} \mathcal A(\tau). \end{equation} For a general element $((\Sigma,z_-,z_+),u,\varphi) \in \widetilde{\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha)$ we require the balancing condition mainstream-component-wise. \end{defn} \begin{rem}\label{rem48} We remark that there exists a unique $\varphi$ satisfying the balancing condition in each of the $\R$ orbits, except in the following case: $u$ is constant on the (unique) irreducible component in the mainstream. (In this case a nontrivial bubble must occur.)
The uniqueness breaks down in the case when in addition there exists $\tau_0$ such that $$ \sum_{i=1}^m \int_{\Sigma_{{\rm v}_i}} u^*\omega = \frac{1}{2} \omega \cap \alpha. $$ (Here $\{{\rm v}_i\mid i=1,\dots,m\}$ are the bubble components associated to $\tau_0$ as in Definition \ref{balancedcond}.) In such a case we replace $\mathcal A$ by the following regularized version $$ \mathcal A'(\tau_0) = \frac{2}{\sqrt{\pi}}\int_{\R} e^{-(\tau-\tau_0)^2} \mathcal A(\tau) d\tau. $$ The first derivative of $\mathcal A'$ is always strictly positive: differentiating under the integral sign and integrating by parts gives $$ \frac{d\mathcal A'}{d\tau_0}(\tau_0) = \frac{2}{\sqrt{\pi}}\int_{\R} e^{-(\tau-\tau_0)^2}\, d\mathcal A(\tau) > 0, $$ since $\mathcal A$ is nondecreasing and nonconstant. So there exists a unique $\varphi$ satisfying the modified balancing condition (using $\mathcal A'$) in each $\R$ orbit. \par Note, however, that the balancing condition will mainly be used later to define the canonical marked point. In the case where there is a sphere bubble we will not take a canonical marked point. So this remark is only for consistency of the terminology. \end{rem} We have thus defined a compactification of ${\mathcal M}^{\text{reg}}(X,H;\frak z^-,\frak z^+,\alpha)$. We will construct a Kuranishi structure with corners on it. The construction is mostly the same as the proof of Theorem \ref{existsKura}, which is a detailed version of the proof of \cite[Theorem 7.10]{FOn}. Here we explain this proof in more detail than \cite{FOn}. \par We first remark that $\overline{\overline{\mathcal M}}(X,H;\frak z^-,\frak z^+,\alpha)$ does {\it not} have a Kuranishi structure in general. (Even in the case $\alpha \ne 0$.) This is because there is an element in this moduli space whose isotropy group is of positive dimension. Namely if $\Sigma_i$ is a gradient line component, then the biholomorphic $S^1$ action on the component $\Sigma_i$ is contained in the group of automorphisms of this element of $\overline{\overline{\mathcal M}}(X,H;\frak z^-,\frak z^+,\alpha)$. So a neighborhood of this element may not be a manifold with corners.
\par On the other hand, the $S^1$ action on ${\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha)$ is always locally free (namely its isotropy group is a finite group) in case $\alpha \ne 0$. In the case of $\alpha =0$ the $S^1$ action on ${\mathcal M}_0(X,H;\frak z^-,\frak z^+,0)$ is locally free. \par To define an obstruction bundle on the compactification we need to take an obstruction bundle data in the same way as in Definition \ref{obbundeldata}. To keep consistency with the fiber product description of the boundary and corners of $\overline{\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha)$ we will define it in a way invariant not only under the $S^1$ action but also under the $(S^1)^k$ action on the part where there are $k$ irreducible components in the mainstream. \par We also need to consider the case of gradient line components at the same time. Note we assumed that the map $u$ is an immersion at the additional marked points in Definition \ref{symstabili} (3) (the definition of symmetric stabilization). In the case of a gradient line component, there is no such point. However, since we assumed that the gradient flow of our Hamiltonian $H$ is Morse-Smale and that Assumption \ref{nondeg3} (2) holds, we actually do {\it not} need to perturb the equation on such a component. So our obstruction bundle there is, by definition, a trivial bundle. (And we do not need to stabilize such a component to define an obstruction bundle.) \par Taking this into account we define an obstruction bundle data in our situation as in the following Definition \ref{obbundeldata1}. \par \begin{defn}\label{symstab} A {\it symmetric stabilization} of $((\Sigma,z_-,z_+),u,\varphi)$ is $\vec w$ such that $\vec w \cap \Sigma_i$ is a symmetric stabilization of $\Sigma_i$ in the sense of Definition \ref{symstabili} if $\Sigma_i$ is not a gradient line component, and $\vec w \cap \Sigma_i = \emptyset$ if $\Sigma_i$ is a gradient line component.
\end{defn} \begin{defn}\label{canmarked} Let $\frak p = ((\Sigma,z_-,z_+),u,\varphi)$ be as above. We assume $\Sigma_i$ is a gradient line component. Note that then $\ell(\tau) = u(\varphi_i(\tau,t))$, which is independent of $t$, is a gradient line joining the transit images $\frak z_i$ and $\frak z_{i+1}$. There exists a unique $\tau_0$ such that $$ H(\ell(\tau_0)) = \frac{1}{2}\left( H(\frak z_i) + H(\frak z_{i+1}) \right). $$ (Uniqueness follows since $H \circ \ell$ is strictly monotone on a nonconstant gradient line.) We put $w_i = \varphi_i(\tau_0,0)$, which we call the {\it canonical marked point} of the gradient line component. \end{defn} \begin{rem} We note that the pair $(\frak p,w_i)$ where $w_i$ is the canonical marked point depends only on the $\sim_3$ equivalence class of $\frak p$ in the following sense. Suppose $\frak p \sim_3 \frak p'$. We define $w'_i$ in the $i$-th mainstream component of $\Sigma_{\frak p'}$ as above. Then there exists an isomorphism $v : \Sigma_{\frak p} \to \Sigma_{\frak p'}$ satisfying Definition \ref{3equivrel} (1), (2) and such that $v(w_i) =w'_i$. This is because $u$ is $S^1$ invariant on this irreducible component. \par On the other hand, if the homology class $u_*([\Sigma_i])$ is nonzero, the pair $(\frak p,w_i)$ is not $\sim_3$ equivalent to $(\frak p,w'_i)$ in the above sense, where $w'_i = \varphi_i(\tau_0,t_0)$ with $t_0 \ne 0$. \end{rem} We note that, if the $i$-th mainstream component $\Sigma_i$ consists of a gradient line and sphere bubbles, then we put $\vec w$ only on the sphere bubbles. Note that the irreducible component that is the intersection of this mainstream component and the mainstream is (source) stable, since the root of the bubble serves as a third marked point. \begin{defn}\label{obbundeldata1} An {\it obstruction bundle data $\frak E_{\frak p}$ centered at} $$ [\frak p] = [(\Sigma,z_-,z_+),u,\varphi] \in \overline{\overline{\mathcal M}}(X,H;\frak z^-,\frak z^+,\alpha) $$ is the data satisfying the conditions described below. We put $\frak x = (\Sigma,z_-,z_+)$. Let $\frak x_i$ be the $i$-th mainstream component. (It has two marked points.)
We put $\alpha_i = u_*[\Sigma_i] \in \Pi$. \begin{enumerate} \item A symmetric stabilization $\vec w$ of $\frak p$. We put $\vec w^{(i)} = \vec w \cap \frak x_i$. \item The same as Definition \ref{obbundeldata} (2). \item A universal family with coordinate at infinity of $\frak x_{\frak p} \cup \vec w \cup \vec w^{\rm can}$. Here we put the canonical marked point (Definition \ref{canmarked}) on each gradient line component and denote these points by $\vec w^{\rm can}$. We require an additional condition (Condition \ref{condcoord} below) for the coordinate at infinity. \item The same as Definition \ref{obbundeldata} (4). Namely compact subsets $K^{\rm obst}_{\rm v}$ of $\Sigma_{\rm v}$. (The support of the obstruction bundle.) We do not put $K^{\rm obst}_{\rm v}$ on the gradient line components. In case the $i$-th mainstream component $\Sigma_i$ consists of a gradient line and sphere bubbles, we put $K^{\rm obst}_{\rm v}$ only on the bubbles. \item The same as Definition \ref{obbundeldata} (5). Namely, finite dimensional complex linear subspaces $E_{\frak p,{\rm v}}(\frak y,u)$. We do not put them on the gradient line components. In case the $i$-th mainstream component $\Sigma_i$ consists of a gradient line and sphere bubbles, we put them only on the bubbles. \item The same as Definition \ref{obbundeldata} (6) except that the differential operator there \begin{equation}\label{ducomponents} \aligned \overline D_{u} \overline\partial : &L^2_{m+1,\delta}((\Sigma_{\frak y_{\rm v}},\partial \Sigma_{\frak y_{\rm v}}); u^*TX, u^*TL) \\ &\to L^2_{m,\delta}(\Sigma_{\frak y_{\rm v}}; u^*TX \otimes \Lambda^{0,1})/E_{\frak p,{\rm v}}(\frak y,u) \endaligned \end{equation} is replaced by the linearization of the equation (\ref{Fleq}). \item The same as Definition \ref{obbundeldata} (7). \item We take a codimension 2 submanifold $\mathcal D_j$ for each $w_j \in \vec w$ in the same way as in Definition \ref{obbundeldata} (8).
We note that we do {\it not} take such submanifolds for the canonical marked points $\in \vec w^{\rm can}$. (In fact, since $u$ is not an immersion at the canonical marked points, we cannot choose such submanifolds.) \end{enumerate} \par We require that the data $K^{\rm obst}_{\rm v}$, $E_{\frak p,{\rm v}}(\frak y,u)$ depend only on the mainstream component $\frak p_i = [(\Sigma_i,z_{i},z_{i+1}),u,\varphi]$ (where $z_{i+1}$ is the $(i+1)$-th transit point) that contains the $\rm v$-th irreducible component. We call this condition {\it mainstream-component-wise}. \end{defn} The additional condition we assume in Item (3) above is as follows. \begin{conds}\label{condcoord} \begin{enumerate} \item Let $z_{i+1}$ be the $(i+1)$-th transit point, which is contained in $\Sigma_i$ and $\Sigma_{i+1}$. Then the coordinate at infinity near $z_{i+1}$ coincides with the parametrization $\varphi_i$ or $\varphi_{i+1}$ up to the $\R \times S^1$ action. Namely it is $(\tau,t) \mapsto \varphi_{i}(\tau+\tau_0,t+t_0)$ (resp. $\varphi_{i+1}(\tau+\tau_0,t+t_0)$) for some $\tau_0$ and $t_0$. \item Let $z$ be a singular point that is not a transit point and let $\Sigma_{\rm v}$ be an irreducible component containing $z$. Since $\Sigma_{\rm v}$ is a sphere there exists a biholomorphic map $$ \phi : \Sigma_{\rm v} \cong \C \cup \{\infty\} $$ such that $\phi(z) = 0$. \par Then the coordinate at infinity around $z$ is given as $$ (\tau,t) \mapsto \phi^{-1}(e^{\pm 2\pi(\tau+\sqrt{-1}t)}), $$ for some choice of $\phi$. Here $\pm$ depends on the orientation of the edge corresponding to $z$. \end{enumerate} \end{conds} Note that the choice of coordinate at infinity satisfying the above condition is not unique. \begin{rem} In (2) above we make full use of the fact that our curve is of genus $0$. The construction developed in Part \ref{secsimple} and Part \ref{generalcase} is designed so that it works in the case of arbitrary genus without change.
So we did not impose this condition in Part \ref{secsimple} or Part \ref{generalcase}. Condition \ref{condcoord} (2) will be used to simplify the discussion on how to handle the Hamiltonian perturbation in the gluing analysis. See Lemma \ref{gluehamexpondec}. \end{rem} \par We can prove the existence of an obstruction bundle data in the same way as in Lemma \ref{existobbundledata}. For example, we can choose the marked points $\vec w$ as follows: We note that the restriction of $u$ to an irreducible component $\Sigma_{\rm v}$ is not homologous to zero except in the following two cases. Outside these cases we can therefore find a point of $\Sigma_{\rm v}$ at which $u$ is an immersion and take it as an additional marked point. \begin{enumerate} \item $\Sigma_{\rm v}$ is in the mainstream and is not a root of a sphere bubble. \item $\Sigma_{\rm v}$ is in the mainstream and is a root of a sphere bubble. \end{enumerate} In Case (1), we take only the canonical marked point on this component. In Case (2), this irreducible component is stable. So we do not take additional marked points on this component. Thus we can define $\vec w$. \par\smallskip We take and fix an obstruction bundle data for each element of $\overline{\overline{\mathcal M}}(X,H;\frak z^-,\frak z^+,\alpha)$. \par We defined the moduli spaces $\widehat{\mathcal M}(\frak z^-,\frak z^+,\alpha)$, ${\mathcal M}(\frak z^-,\frak z^+,\alpha)$, and $\overline{\overline{\mathcal M}}(\frak z^-,\frak z^+,\alpha)$ in Definition \ref{defn45}. We add $\ell$ additional marked points and denote the moduli spaces of such objects by $\widehat{\mathcal M}_{\ell}(\frak z^-,\frak z^+,\alpha)$, ${\mathcal M}_{\ell}(\frak z^-,\frak z^+,\alpha)$, and $\overline{\overline{\mathcal M}}_{\ell}(\frak z^-,\frak z^+,\alpha)$.
We denote by $\widehat{\mathcal M}_{\ell}(\frak z^-,\frak z^+,\alpha;\mathcal G)$, ${\mathcal M}_{\ell}(\frak z^-,\frak z^+,\alpha;\mathcal G)$, and $\overline{\overline{\mathcal M}}_{\ell}(\frak z^-,\frak z^+,\alpha;\mathcal G)$ the subsets consisting of the objects whose combinatorial type is $\mathcal G$. (We include in $\mathcal G$ the datum of how the additional marked points $\vec w$ are distributed over the irreducible components.) \par Let $((\Sigma,z_-,z_+),u,\varphi) \cup \vec w\cup \vec w^{\rm can} = \frak p \cup \vec w\cup \vec w^{\rm can}$ be as in Definition \ref{symstab} with decomposition (\ref{stedec}). We put $\ell = \#(\vec w\cup \vec w^{\rm can})$ and denote by $\overline{\overline{\frak V}}(\frak p \cup \vec w\cup \vec w^{\rm can})$ a neighborhood of $\frak x_{\frak p} \cup \vec w\cup \vec w^{\rm can}$ in $\overline{\overline{\mathcal M}}_{\ell}(\frak z^-,\frak z^+,\alpha;\mathcal G_{\frak p \cup \vec w\cup \vec w^{\rm can}})$. \par In the same way as in Definition \ref{def214} we define a map \begin{equation}\label{doubleoverlinePhi} \overline{\overline{\Phi}} : \overline{\overline{\frak V}}(\frak p \cup \vec w\cup \vec w^{\rm can}) \times ((\vec T,\infty] \times S^1) \to \overline{\overline{\mathcal M}}_{\ell}(\frak z^-,\frak z^+,\alpha), \end{equation} that is an isomorphism onto an open neighborhood of $[\frak x_{\frak p} \cup \vec w\cup \vec w^{\rm can}]$. Here the notation $((\vec T,\infty] \times S^1)$ is similar to that in Definition \ref{def29}. \par We next add the parametrization $\varphi$ of the mainstream to the map (\ref{doubleoverlinePhi}) and define its ${{\mathcal M}}_{\ell}(\frak z^-,\frak z^+,\alpha)$-version below. \par Now we define a manifold with corners $\widetilde D(k;\vec T_0)$ as follows. We put \begin{equation}\label{216} \widetilde {\overset{\circ}D}(k;\vec T_0) = \{(T_1,\dots,T_k) \in \R^{k} \mid T_{i+1} - T_i \ge T_{0,i} \,\, (1 \le i \le k-1)\}.
\end{equation} We (partially) compactify $\widetilde {\overset{\circ}D}(k;\vec T_0)$ to $\widetilde {D}(k;\vec T_0)$ by admitting $T_{i+1} - T_i = \infty$ as follows. We put $s'_i = 1/(T_{i+1}-T_i)$; then $T_1$ and $s'_1,\dots,s'_{k-1}$ form another set of parameters. So (\ref{216}) is identified with $\R \times \prod_{i=1}^{k-1}(0,1/T_{0,i}]$. We (partially) compactify it to $\R \times \prod_{i=1}^{k-1}[0,1/T_{0,i}]$. By taking the quotients of $\widetilde {\overset{\circ}D}(k;\vec T_0)$ and $\widetilde {D}(k;\vec T_0)$ by the $\R$ action $T(T_1,\dots,T_k) = (T_1+T,\dots,T_k+T)$, we obtain ${\overset{\circ}D}(k;\vec T_0)$ and $D(k;\vec T_0)$ respectively. \par Let ${{\frak V}}(\frak p \cup \vec w\cup \vec w^{\rm can}) \subset {\mathcal M}_{\ell}(\frak z^-,\frak z^+,\alpha;\mathcal G _{\frak p \cup \vec w\cup \vec w^{\rm can}})$ be the inverse image of $\overline{\overline{\frak V}}(\frak p \cup \vec w\cup \vec w^{\rm can})$ under the projection \begin{equation}\label{projectiontovarvar} {\mathcal M}_{\ell}(\frak z^-,\frak z^+,\alpha;\mathcal G _{\frak p \cup \vec w\cup \vec w^{\rm can}}) \to \overline{\overline{\mathcal M}}_{\ell}(\frak z^-,\frak z^+,\alpha). \end{equation} \begin{rem} Note that for an element $((\Sigma',z'_-,z'_+),\varphi') \cup \vec w'$, the marked points $w'_i$ that correspond to the canonical marked points $\in \vec w^{\rm can}$ may not be canonical. (Namely they may not be of the form $\varphi'(\tau_0,0)$ where $\tau_0$ is as in Definition \ref{canmarked}.) \end{rem} The space ${\mathcal M}_{\ell}(\frak z^-,\frak z^+,\alpha;\mathcal G _{\frak p \cup \vec w\cup \vec w^{\rm can}})$ carries an $(S^1)^k$ action given by $$ (t_1,\dots,t_k)(((\Sigma,z_-,z_+),u,\varphi) \cup \vec w\cup \vec w^{\rm can}) = ((\Sigma,z_-,z_+),u,\varphi') \cup \vec w\cup \vec w^{\rm can} $$ where $\varphi = (\varphi_i)_{i=1}^k$ and $\varphi' = (\varphi'_i)_{i=1}^k$ such that $$ \varphi'_i(\tau,t) = \varphi_i(\tau,t+t_i).
$$ This action is locally free and the map (\ref{projectiontovarvar}) can be identified with the canonical projection: $$ {\mathcal M}_{\ell}(\frak z^-,\frak z^+,\alpha;\mathcal G _{\frak p \cup \vec w\cup \vec w^{\rm can}}) \to {\mathcal M}_{\ell}(\frak z^-,\frak z^+,\alpha;\mathcal G _{\frak p \cup \vec w\cup \vec w^{\rm can}})/(S^1)^k. $$ \par It follows from this fact that ${{\frak V}}(\frak p \cup \vec w\cup \vec w^{\rm can})$ is an open neighborhood of the inverse image of $[\frak p]$ in ${\mathcal M}_{\ell}(\frak z^-,\frak z^+,\alpha;\mathcal G _{\frak p \cup \vec w\cup \vec w^{\rm can}})$. \par We now define: \begin{equation}\label{418} \overline{\Phi} : {{\frak V}}(\frak p \cup \vec w\cup \vec w^{\rm can}) \times {D}(k;\vec T_0) \times (\prod_{j=1}^m((T_{0,j},\infty] \times S^1)/\sim) \to {\mathcal M}_{\ell}(\frak z^-,\frak z^+,\alpha) \end{equation} that is a homeomorphism onto an open neighborhood of the inverse image of $[\frak p]$ in ${\mathcal M}_{\ell}(\frak z^-,\frak z^+,\alpha)$. Here $\sim$ is as in Remark \ref{rem:161}. \par Let $\frak y \in {{\frak V}}(\frak p \cup \vec w\cup \vec w^{\rm can})$ and $(\vec T,\vec \theta) \in \widetilde D(k;\vec T_0) \times (\prod_{j=1}^m((T_{0,j},\infty] \times S^1)/\sim)$. Note that ${\frak V}(\frak p \cup \vec w \cup \vec w^{\rm can})$ is the quotient space by the $\R$ action. We represent the quotient by the slice obtained by requiring the balancing condition (Definition \ref{balancedcond}). \par The element $\frak y$ comes with coordinates around the $m$ singular points of $\Sigma$ that are not transit points. (This is a part of the stabilization data centered at $\frak p$.) We use the parameters in $(\prod_{j=1}^m(T_{0,j},\infty] \times S^1)/\sim$ to resolve those singular points. \par The rest of the parameters, $\vec T' = (T_1,\dots,T_k) \in D(k;\vec T_0)$, is used to resolve the transit points as follows. We consider the case where this parameter $\vec T'$ is in $\overset{\circ}D(k;\vec T_0)$.
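\par Before carrying out this resolution, it may help to illustrate the parameter space in the simplest case $k=2$. (This is only a sketch, based on the identifications introduced above.) We have $$ \widetilde{\overset{\circ}D}(2;\vec T_0) = \{(T_1,T_2) \in \R^{2} \mid T_2 - T_1 \ge T_{0,1}\} \cong \R \times (0,1/T_{0,1}], \qquad (T_1,T_2) \mapsto (T_1,s'_1), $$ where $s'_1 = 1/(T_2-T_1)$. After the partial compactification and the quotient by the diagonal $\R$ action we obtain $D(2;\vec T_0) \cong [0,1/T_{0,1}]$, and the boundary point $s'_1 = 0$ (that is, $T_2 - T_1 = \infty$) corresponds to the configuration in which the transit point is not resolved.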
Let us consider $$ [-5T_i,5T_i]_i \times S^1_i $$ and regard it as a subset of the domain of $\varphi_i : \R \times S^1 \to \Sigma_i$. We define $$ \varphi_0 : \bigcup_i ([-5T_i,5T_i]_i \times S^1_i) \to \R \times S^1 $$ as follows. If $(\tau,t) \in [-5T_i,5T_i]_i \times S^1_i$ then $$ \varphi_0(\tau,t) = (\tau+10 T_i,t). $$ We use $\varphi_0\circ\varphi_i^{-1}$ to identify (a part of) $\Sigma_i$ with a subset of $\R \times S^1$. We then use this identification to transport the (glued) bubble components and the marked points to $\R \times S^1$. Together with $\R \times S^1$ this gives a marked Riemann surface. We thus obtain ${\frak Y}$. The image of $\varphi_0$ is in the {\it core} of ${\frak Y}$ and the complement of the core in the mainstream is the neck region. \par By taking the quotient with respect to the $\R$ action, we obtain: $$ {\frak Y} = \overline{\Phi}(\frak y,\vec T,\vec \theta) \in {\mathcal M}_{\ell}(\frak z^-,\frak z^+,\alpha). $$ We have thus defined (\ref{418}). \par\smallskip We next consider $((\Sigma',z'_-,z'_+),u',\varphi')\cup \vec w'$ and define what it means for it to be $\epsilon$-close to $[\frak p \cup \vec w\cup \vec w^{\rm can}] = [((\Sigma,z_-,z_+),u,\varphi) \cup \vec w\cup \vec w^{\rm can}]$. Here $((\Sigma',z'_-,z'_+),u',\varphi')$ is assumed to satisfy Definition \ref{defn210} (1)(2)(3)(6)(7) and $\vec w'$ is the set of $\ell$ additional marked points. We decompose \begin{equation}\label{stedec2} \Sigma' = \sum_{j=1}^{k'} \Sigma'_{j} \end{equation} into the mainstream components. Let $z'_j$ be the $j$-th transit point and $\alpha'_j = u'_*([\Sigma'_{j}])$. We assume there exists a map $i: \{1,\dots,k'\} \to \{1,\dots,k\}$ such that \begin{enumerate} \item[(a)] $u(z_{i(j)}) = u'(z'_j)$. \item[(b)] $\sum_{i=i(j)}^{i(j+1)-1}\alpha_i = \alpha'_j$. \end{enumerate} Here $z_i$ is the $i$-th transit point of $\frak p$ and $\alpha_i = u_*([\Sigma_{i}])$.
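\par For example (this is a sketch, with the convention $i(k'+1) = k+1$), suppose $k = 3$ and $k' = 2$ with $i(1) = 1$ and $i(2) = 3$. Then (b) reads $$ \alpha'_1 = \alpha_1 + \alpha_2, \qquad \alpha'_2 = \alpha_3. $$ Namely $\Sigma'_{1}$ is obtained from $\Sigma_{1} \cup \Sigma_{2}$ by resolving the transit point joining them, while by (a) the transit point $z_3$ of $\frak p$ corresponds to the transit point $z'_2$ of $\Sigma'$.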
\par \begin{defn}\label{epsiclose} We say $((\Sigma',z'_-,z'_+),u',\varphi') \cup \vec w'$ is {\it $\epsilon$-close} to $[\frak p \cup \vec w\cup \vec w^{\rm can}]$ if the following holds. \begin{enumerate} \item \begin{equation}\label{Phioverline2} ((\Sigma',z'_-,z'_+),u') \cup \vec w' = \overline{\overline{\Phi}}(\overline{\frak y},\vec T,\vec \theta) \end{equation} where $\overline{\frak y} \in \overline{\overline{\frak V}}(\frak p \cup \vec w\cup \vec w^{\rm can})$, and Definition \ref{epsiloncloseto} (1) holds. \item Definition \ref{epsiloncloseto} (2) holds. \item Definition \ref{epsiloncloseto} (3) holds. \item Definition \ref{epsiloncloseto} (4) holds. (Namely each of the components of $\vec T$ is $> 1/\epsilon$.) \item If $\Sigma_i$, $i=i(j),\dots,i(j+1)-1$ are all gradient line components, then $\Sigma'_{j}$ is also a gradient line component that is $\epsilon$-close to the union of the gradient lines $u\vert_{\Sigma_{i}}$, $i=i(j),\dots,i(j+1)-1$. We also require that $\vec w' \cap \Sigma'_{j}$ consists of $i(j+1) - i(j)$ points $w'_{i(j)+1},\dots,w'_{i(j+1)}$ such that $$ \left\vert H(u'(w'_i)) - \frac{1}{2}\left( H(\frak z_i) + H(\frak z_{i+1}) \right)\right\vert < \epsilon $$ for $i(j)+1 \le i \le i(j+1)$. \end{enumerate} \end{defn} If $((\Sigma',z'_-,z'_+),u',\varphi') \cup \vec w'$ is $\epsilon$-close to $[\frak p \cup \vec w\cup \vec w^{\rm can}]$ and $\epsilon$ is sufficiently small, we can define an obstruction bundle for $((\Sigma',z'_-,z'_+),u',\varphi')$ in a way similar to Definition \ref{Emovevvv} as follows. \begin{defn}\label{obstructionbundle} We consider the decomposition (\ref{stedec2}) of $\Sigma'$ into the mainstream components and define $i(j)$ as in (a),(b) there. We will define an obstruction bundle supported on each of $\Sigma'_j$. \par If $\Sigma'_j$ is a gradient line component, we set the obstruction bundle to be trivial on $\Sigma'_j$. \par Suppose $\Sigma'_j$ is not a gradient line component.
We remove all the marked points in $\vec w' \cap \Sigma'_j$ that correspond to $\vec w^{\rm can}$. We denote by $\vec w'_{0,j} \subseteq \vec w' \cap \Sigma'_j$ the remaining marked points on $\Sigma'_j$. It is easy to see that $(\Sigma'_j;z'_j,z'_{j+1})\cup \vec w'_{0,j}$ is stable. Let $\vec w_{0,j}\subseteq \vec w$ be the set of the marked points on $\Sigma$ corresponding to the marked points $\vec w'_{0,j}$. \par In the union $\bigcup_{i=i(j)}^{i(j+1) -1}\Sigma_i$, we shrink each of the gradient line components $\Sigma_i$ to a point. Let $\Sigma_{0,j}$ be the resulting semi-stable curve. Then $(\Sigma_{0,j};z_{i(j)},z_{i(j+1)}) \cup \vec w_{0,j}$ is stable. It has a coordinate at infinity induced by the one given in Definition \ref{obbundeldata} (3). We remark that the union of the supports of the obstruction bundles in $\bigcup_{i=i(j)}^{i(j+1) -1}\Sigma_i$ may be regarded as a subset of $\Sigma_{0,j}$. \par We observe that $(\Sigma'_j;z'_j,z'_{j+1})\cup \vec w'_{0,j}$ is obtained from $(\Sigma_{0,j};z_{i(j)},z_{i(j+1)}) \cup \vec w_{0,j}$ by resolving singular points. Therefore, using the above-mentioned coordinate at infinity, we obtain a diffeomorphism from the supports of the obstruction bundles in $\bigcup_{i=i(j)}^{i(j+1) -1}\Sigma_i$ onto open subsets of $\Sigma'_j$; combining it with parallel transport, we define an obstruction bundle on $\Sigma'_j$ in the same way as in Definition \ref{Emovevvv}. \end{defn} \begin{rem}\label{remakcannorole} We remark that by construction the obstruction bundle that we defined on $((\Sigma',z'_-,z'_+),u',\varphi') \cup \vec w'$ is independent of the marked points $\in \vec w'$ corresponding to the canonical marked points $\vec w^{\rm can}$. This is important for showing that our construction is $S^1$ equivariant. \par In other words, the canonical marked points $\vec w^{\rm can}$ and the corresponding marked points in $\vec w'$ do {\it not} play an important role in our construction.
We introduce them so that our terminology is as close to the one in Parts \ref{secsimple}--\ref{generalcase} as possible. \end{rem} In the same way as Corollary \ref{vindependenEcor}, we can show that this obstruction bundle is independent of the equivalence relations $\sim_i$ ($i=1,2,3$) that are defined in the same way as Definition \ref{3equivrel}. We can define Fredholm regularity and evaluation-map-transversality of such an obstruction bundle in the same way as Definitions \ref{fredreg} and \ref{def:1720}, respectively. Then an obvious analogue of Proposition \ref{linearMV} is proved in the same way. \begin{defn}\label{constrainttt} Suppose $((\Sigma',z'_-,z'_+),u',\varphi') \cup \vec w'$ is as in Definition \ref{epsiclose}. We say that it satisfies the {\it transversal constraint} if the following holds. \begin{enumerate} \item Let $w'_i$ be one of the elements of $\vec w'$. If the corresponding $w_i \in \vec w\cup \vec w^{\rm can}$ is contained in $\vec w$ then $u'(w'_i)$ is contained in the codimension 2 submanifold $\mathcal D_i$ that is given as a part of Definition \ref{obbundeldata1} (8). \item Suppose $\Sigma'_j$ is a gradient line component and $w'_{i(j)+1},\dots,w'_{i(j+1)} = \vec w' \cap \Sigma'_{j}$. Then $$ H(u'(w'_i)) = \frac{1}{2}\left( H(\frak z_i) + H(\frak z_{i+1}) \right) $$ for $i(j)+1 \le i \le i(j+1)$. \item Let $w'_1,\dots,w'_n$ be the points in $\vec w'$ corresponding to $\vec w^{\rm can}$. We may assume that they are all contained in the mainstream (by taking $\epsilon$ small). Then we require that the $S^1$ coordinates thereof are all $[0] \in \R/\Z = S^1$. \end{enumerate} \end{defn} Then for each $\frak p \in \overline{\overline{\mathcal M}}(X,H;\frak z^-,\frak z^+,\alpha)$ we fix an obstruction bundle data $\frak E_{\frak p}$ centered at $[\frak p]$. In particular we have $\vec w_{\frak p}$. We choose $\epsilon_{\frak p}$ so that the conclusion of Lemma \ref{transpermutelem} holds.
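\par The following elementary computation explains why the constraint in Definition \ref{constrainttt} (2) pins down the marked points on a gradient line component. (This is a sketch; we assume here the negative gradient flow convention $d\chi/d\tau = -{\rm grad}\, H(\chi)$ for gradient lines.) Along a nonconstant gradient line $\chi$ from $\frak z_i$ to $\frak z_{i+1}$ we have $$ \frac{d}{d\tau} H(\chi(\tau)) = -\left\vert {\rm grad}\, H(\chi(\tau))\right\vert^2 < 0, $$ so $H\circ\chi$ is strictly decreasing and attains the value $\frac{1}{2}(H(\frak z_i) + H(\frak z_{i+1}))$ at a unique $\tau$. Hence Definition \ref{constrainttt} (2) determines the $\R$ coordinate of such a marked point uniquely, and Definition \ref{constrainttt} (3) determines its $S^1$ coordinate.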
\par For each $[\frak p] \in \overline{\overline{\mathcal M}}(X,H;\frak z^-,\frak z^+,\alpha)$ we denote by $\frak W_{\frak p}$ the subset of $\overline{\overline{\mathcal M}}(X,H;\frak z^-,\frak z^+,\alpha)$ consisting of $[\frak p']$ satisfying the following condition: there exists $\vec w'$ such that $\frak p' \cup \vec w'$ is $\epsilon_{\frak p}$-close to $[\frak p \cup \vec w_{\frak p}\cup \vec w^{\rm can}]$ and $\vec w'$ satisfies the transversal constraint. \par We then find a finite set $\frak C = \{\frak p_c\}$ such that $$ \bigcup_{c} {\rm Int}\,\frak W_{\frak p_c} = \overline{\overline{\mathcal M}}(X,H;\frak z^-,\frak z^+,\alpha), $$ where $\frak W_{\frak p_c}$ consists of elements $\frak p$ with the following property: there exists $\vec w_{\frak p}$ such that $\frak p \cup \vec w_{\frak p}$ is $\epsilon_{\frak p_c}$-close to $\frak p_c \cup \vec w_{\frak p_c}\cup \vec w^{\rm can}$ and $\vec w_{\frak p}$ satisfies the transversal constraint. \par We can construct such $\frak C = \frak C(\alpha)$ mainstream-component-wise in the following sense. We decompose $\frak p = \cup \frak p_i$ into its mainstream components. Then $\frak p \in \frak C(\alpha)$ if and only if $\frak p_i \in \frak C(\alpha_i)$\footnote{For the component $\alpha_i = 0$ we take a sufficiently dense subset of the moduli space of gradient lines and use it.} for each $i$. As long as we consider a finite number of $\alpha$'s, we can construct such $\frak C(\alpha)$ inductively over the energy of $\alpha$. \par For each $[\frak p] \in \overline{\overline{\mathcal M}}(X,H;\frak z^-,\frak z^+,\alpha)$ we define the notion of stabilization data in the same way as Definition \ref{stabdata}. (In other words, we require Definition \ref{obbundeldata} (1)(2)(3)(8).) \par We put \begin{equation} \frak C_{[\frak p]} = \{ c\in \frak C(\alpha) \mid [\frak p] \in {\rm Int}\,\frak W_{\frak p_c}\}.
\end{equation} We note that if $c \in \frak C_{[\frak p]}$ there exists $\vec w^{\frak p}_c$ such that $\frak p\cup \vec w^{\frak p}_c$ is $\epsilon_c$-close to $\frak p_c \cup \vec w_c \cup \vec w^{\rm can}$ and $\vec w^{\frak p}_c$ satisfies the transversal constraint. We take such $\vec w^{\frak p}_c$ and fix it. \par Let $$ [\frak p] = [((\Sigma_{\frak p},z_{\frak p,-},z_{\frak p, +}),u_{\frak p},\varphi_{\frak p})] \in \overline{\overline{\mathcal M}}(X,H;\frak z^-,\frak z^+,\alpha). $$ \begin{defn}\label{defthickenmoduli} We define a {\it thickened moduli space} \begin{equation}\label{thicenmoduli} {\mathcal M}_{(\ell_{\frak p},(\ell_c))}(X,H;\frak z^-,\frak z^+,\alpha;\frak p;\frak A;\frak B)_{\epsilon_0,\vec T_0} \end{equation} to be the set of $\sim_2$ equivalence classes of $((\frak Y,u',\varphi'),\vec w'_{\frak p},(\vec w'_c))$ with the following properties. (Here $\frak A \subset \frak B \subset \frak C_{[\frak p]}$.) \begin{enumerate} \item $(\frak Y,u',\varphi') \cup \vec w'_{\frak p}$ is $\epsilon_0$-close to $\frak p\cup \vec w_{\frak p} \cup \vec w^{\rm can}$. Here $\ell_{\frak p} = \#\vec w'_{\frak p}$. \item $(\frak Y,u',\varphi') \cup \vec w'_{c}$ is $\epsilon_0$-close to $\frak p\cup \vec w_c^{\frak p}$. Here $\ell_{c} = \#\vec w'_{c}$. \item On the bubbles we have $$ \overline{\partial} u' \equiv 0 \mod \mathcal E_{\frak B}. $$ Here $\mathcal E_{\frak B}$ is the obstruction bundle defined by Definition \ref{obstructionbundle} in the same way as Definition \ref{def:187}. \item On the $i$-th irreducible component of the mainstream we consider $h'_i = u'\circ \varphi'_i$. Then it satisfies $$ \frac{\partial h'_i}{\partial \tau} + J\left( \frac{\partial h'_i}{\partial t} - \frak X_{H_t} \right) \equiv 0 \mod \mathcal E_{\frak B}. $$ Here $\mathcal E_{\frak B}$ is as in (3).
\end{enumerate} \end{defn} The next lemma says that ${\mathcal M}_{(\ell_{\frak p},(\ell_c))}(X,H;\frak z^-,\frak z^+,\alpha;\frak p;\frak A;\frak B)_{\epsilon_0,\vec T_0} $ carries the following $S^1$ action. \begin{lem}\label{S1invarianceMM} Suppose that $((\frak Y,u',\varphi'),\vec w'_{\frak p},(\vec w'_c))$ is an element of the moduli space ${\mathcal M}_{(\ell_{\frak p},(\ell_c))}(X,H;\frak z^-,\frak z^+,\alpha;\frak p;\frak A;\frak B)_{\epsilon_0,\vec T_0} $ and $t_0 \in S^1$. Then $((\frak Y,u',t_0\cdot\varphi'),t_0\vec w'_{\frak p},(t_0\vec w'_c))$ is an element of ${\mathcal M}_{(\ell_{\frak p},(\ell_c))}(X,H;\frak z^-,\frak z^+,\alpha;\frak p;\frak A;\frak B)_{\epsilon_0,\vec T_0}$. Here $ (t_0\cdot \varphi')(\tau,t) = \varphi'(\tau,t+t_0) $ and $t_0\vec w'_c$ is defined as follows ($t_0\vec w'_{\frak p}$ is defined in the same way). $(t_0\vec w'_c)_i = (\vec w'_c)_i$ if it corresponds to a marked point in $\vec w_{\frak p_c}$. If $(\vec w'_c)_i$ corresponds to a canonical marked point then $(t_0\vec w'_c)_i = ((t_0\cdot \varphi') \circ(\varphi')^{-1})((\vec w'_c)_i)$. \end{lem} \begin{proof} The only part of our construction which potentially breaks the $S^1$ symmetry is Definition \ref{constrainttt} (3). However, as we remarked in Remark \ref{remakcannorole}, the marked points that correspond to the canonical marked points do not affect the obstruction bundle. Therefore the $S^1$ symmetry is not broken. \end{proof} \begin{defn} We denote by $V_{(\ell_{\frak p},(\ell_c))}(X,H;\frak z^-,\frak z^+,\alpha;\frak p;\frak A;\frak B)_{\epsilon_0}$ the subset of ${\mathcal M}_{(\ell_{\frak p},(\ell_c))}(X,H;\frak z^-,\frak z^+,\alpha;\frak p;\frak A;\frak B)_{\epsilon_0,\vec T_0} $ consisting of the elements with the same combinatorial type as $\frak p$. (Compare \eqref{2193}.) \end{defn} In the same way as Lemma \ref{lem:191}, $V_{(\ell_{\frak p},(\ell_c))}(X,H;\frak z^-,\frak z^+,\alpha;\frak p;\frak A;\frak B)_{\epsilon_0}$ is a smooth manifold.
In the same way as Lemma \ref{S1invarianceMM} we can show that our space $$V_{(\ell_{\frak p},(\ell_c))}(X,H;\frak z^-,\frak z^+,\alpha;\frak p;\frak A;\frak B)_{\epsilon_0}$$ has an $(S^1)^k$ action. Here $k$ is the number of mainstream components of $\frak p$. Let $m$ be the number of singular points of $\Sigma_{\frak p}$ which are not transit points. \par Now we have the following analogue of Theorem \ref{gluethm3}. \begin{prop}\label{gluingpropham} There exists a map $$ \aligned \text{\rm Glue} : &V_{(\ell_{\frak p},(\ell_c))}(X,H;\frak z^-,\frak z^+,\alpha;\frak p;\frak A;\frak B)_{\epsilon_0} \times \left(\prod_{i=1}^m ((T_{0,i},\infty] \times S^1)\right)/\sim \times D(k;\vec T_0)\\ &\to {\mathcal M}_{(\ell_{\frak p},(\ell_c))}(X,H;\frak z^-,\frak z^+,\alpha;\frak p;\frak A;\frak B)_{\epsilon_2}. \endaligned $$ Its image contains ${\mathcal M}_{(\ell_{\frak p},(\ell_c))}(X,H;\frak z^-,\frak z^+,\alpha;\frak p;\frak A;\frak B)_{\epsilon_3}$ for sufficiently small $\epsilon_3$. \par An estimate similar to Theorem \ref{exdecayT33} also holds. \end{prop} (The notation ${\mathcal M}_{(\ell_{\frak p},(\ell_c))}(X,H;\frak z^-,\frak z^+,\alpha;\frak p;\frak A;\frak B)_{\epsilon}$ is similar to the one used in Theorem \ref{gluethm3}.) \begin{proof} The proof is mostly the same as the proofs of Theorems \ref{gluethm3}, \ref{exdecayT33}. The only new point we need to discuss is the following. \par Our equation is the pseudo-holomorphic curve equation ((3) in Definition \ref{defthickenmoduli}) on the bubbles but involves the Hamiltonian vector field ((4) in Definition \ref{defthickenmoduli}) on the mainstream. When we resolve the singular points, the bubbles become part of the mainstream. So we need to estimate the contribution of the (pull back by appropriate diffeomorphisms of the) Hamiltonian vector field on such a part. We need to do so by using coordinates similar to those we used in the proofs of Theorems \ref{gluethm3}, \ref{exdecayT33}.
We will use Lemma \ref{gluehamexpondec} below for this purpose. \par We put $$ \Sigma_{\frak p} = \bigcup_{i=1}^k\Sigma_{i} \cup \bigcup_{\rm v} \Sigma_{\rm v} $$ where $\Sigma_i$ are in the mainstream and $\Sigma_{\rm v}$ are the bubbles. We note that each $\Sigma_{\rm v}$ is a two sphere $S^2$. Let $z$ be a singular point contained in $\Sigma_{\rm v}$. According to Condition \ref{condcoord}, we have a disk $D_{z,\rm v} \subset \Sigma_{\rm v}$ centered at $z$ on which a coordinate at infinity is defined. In case $z$ lies on a mainstream component $\Sigma_{i}$, we also have a disk $D_{z,i} \subset \Sigma_{i}$ on which a coordinate at infinity is defined. \par We also have \begin{equation}\label{necktransti} \varphi_i (((-\infty,5T_i] \cup [5T_{i+1},\infty))\times S^1) \subset \Sigma_i. \end{equation} The union of the images of (\ref{necktransti}) and the disks $D_{z,\rm v}$ and $D_{z,i}$ forms the neck regions. Its complement is called the core. We write $K_{\rm v}$ or $K_i$ for the part of the core in the component $\Sigma_{\rm v}$ or $\Sigma_{i}$, respectively. \par Using the coordinate at infinity, we have an embedding \begin{equation}\label{embeddingii} i_{\rm v} : K_{\rm v} \to \Sigma', \qquad i_{i} : K_{i} \to \Sigma'. \end{equation} Lemma \ref{gluehamexpondec} provides an estimate of the maps (\ref{embeddingii}). We put the metric on $K_{\rm v}$ by regarding them as subsets of the sphere. We put the metric on $K_i$ by regarding them as subsets of $\R \times S^1$. (They are compact. So actually the choice of the metric does not matter.) \par We fix $\vec T$ (the lengths of the neck regions when we glue and obtain $\Sigma'$). For each $\rm v$ we define $T_{\rm v}$ as follows. Take a shortest path joining our irreducible component $\Sigma_{\rm v}$ to the mainstream. Let $z_1,\dots,z_r$ be the singular points contained in this path. Let $10T_{z_i}$ be the length of the neck region corresponding to the singular point $z_i$. We put $T_{\rm v} = \sum T_{z_i}$.
\par We observe that $i_{\rm v}(K_{\rm v})$ is in the mainstream if and only if $T_{\rm v}$ is finite. \begin{lem}\label{gluehamexpondec} There exist $C_{\ell}, c_{\ell} > 0$ with the following properties. \begin{enumerate} \item Suppose that $i_{\rm v}(K_{\rm v})$ is in the mainstream. We then regard $$ i_{\rm v} : K_{\rm v} \to \R \times S^1. $$ The $C^{\ell}$ norm of this map is smaller than $C_{\ell}e^{-c_{\ell}T_{\rm v}}$. In particular the diameter of its image is smaller than $C_{1}e^{-c_{1}T_{\rm v}}$. \item We remark that the image of $i_i$ is in a certain mainstream component. So we may regard $$ i_i : K_{i} \to \R \times S^1. $$ Note $K_i \subset \R \times S^1$. Then $i_i$ extends to a biholomorphic map of the form $ (\tau,t) \mapsto (\tau+\tau_0,t+t_0). $ \end{enumerate} \end{lem} \begin{proof} For simplicity of the notation we consider the case $$ \Sigma_{\frak p} = (\R \times S^1) \cup S^2 = \Sigma_1 \cup \Sigma_{\rm v}. $$ Let $T$ be the length of the neck region of $\Sigma'$. Then we have a canonical isomorphism \begin{equation}\label{sigmaprimeidentified} \Sigma' = ((\R \times S^1) \setminus D^2_0) \cup ([-5T,5T]\times S^1) \cup (\C \setminus D^2). \end{equation} Here $D^2_0$ is the image of a small disk in $\R \times \R$ under the projection $\R \times \R \to \R \times S^1$ and the disk $D^2$ in the third term is the disk of radius 1 centered at the origin. The identification (\ref{sigmaprimeidentified}) is a consequence of Condition \ref{condcoord}. \par We have a biholomorphic map $$ I : \Sigma' \to \R \times S^1 $$ that preserves the coordinates at the two ends. \par We can take $I$ as follows. Let $I_0 : D^2 \to D^2_0$ be the isomorphism that lifts to a homothetic embedding $D^2 \to \R^2$. \begin{enumerate} \item On $(\R \times S^1) \setminus D^2_0$, the map $I$ is the identity map. \item If $z \in \C \setminus D^2$ then $$ I(z) = I_0(e^{-20\pi T}/z).
$$ \item If $z = (\tau,t) \in [-5T,5T]\times S^1$ then $$ I(z) = I_0(e^{-2\pi((\tau+5T)+\sqrt{-1}t)}). $$ \end{enumerate} Lemma \ref{gluehamexpondec} is immediate from this description. \par The general case can be proved by iterating a similar process. \end{proof} \begin{rem}\label{neckremark} On the part of the neck region $[-4T,4T]\times S^1$ where we perform the gluing construction, we can prove an estimate similar to Lemma \ref{gluehamexpondec} (1). \end{rem} Now we go back to the proof of Proposition \ref{gluingpropham}. Lemma \ref{gluehamexpondec} (2) implies that on the mainstream the equation Definition \ref{defthickenmoduli} (4) is preserved by gluing. Lemma \ref{gluehamexpondec} (1) and Remark \ref{neckremark} imply that the effect of the Hamiltonian vector field is exponentially small on the other parts. Therefore the presence of the Hamiltonian term does not affect the proof of Theorems \ref{gluethm3}, \ref{exdecayT33} and we can prove Proposition \ref{gluingpropham} in the same way as in Section \ref{glueing}. \end{proof} We are now in the position to complete the proof of Theorem \ref{existKuran}. We have defined the thickened moduli space and described it by the gluing map. The rest of the construction of the Kuranishi structure is mostly the same as the construction in Sections \ref{cutting}-- \ref{kstructure}. We mention two points below. Except for these there is nothing to modify. \par\smallskip \noindent(1) We consider the process to impose the (transversal) constraint and forget the marked points. This was done in Section \ref{cutting} to cut down the thickened moduli space to an orbifold of the correct dimension, which will become our Kuranishi neighborhood. Here we use the constraint defined in Definition \ref{constrainttt}. In the case when the marked point $w'_i$ corresponds to one of $\vec w_{\frak p}$ or $\vec w_{\frak p_c}$ that is not a canonical marked point, this process is exactly the same as in Section \ref{cutting}.
\par In the case where the marked point $w'_i$ corresponds to a canonical marked point of $\frak p$ or $\frak p_c$ we use Definition \ref{constrainttt} (2),(3). Note that these conditions determine the position of $w'_i$ on $\Sigma'$ uniquely. On the other hand, as we remarked in Remark \ref{remakcannorole}, the marked point $w'_i$ does not affect the obstruction bundle and hence the equations defining our thickened moduli space. So the discussion of the process to impose the constraint and forget such a marked point is rather trivial. \par\smallskip \noindent(2) Our thickened moduli space has an $S^1$ action. The gluing map we constructed in Proposition \ref{gluingpropham} is obviously $S^1$ equivariant. (The obstruction bundle is invariant under the $S^1$ action as we remarked before.) Therefore all the construction of the Kuranishi structure is done in an $S^1$ equivariant way. Note that we define the $S^1$ action on the thickened moduli space. The smoothness of this action is fairly obvious. \par We note that the group $\Gamma_{\frak p}$ for $\frak p$ (Definition \ref{chertS1act}) in our case of $\frak p = ((\Sigma,z_-,z_+),u,\varphi)$ consists of maps $v : \Sigma \to \Sigma$ that satisfy Definition \ref{3equivrel} (1)(2) for $((\Sigma',z'_-,z'_+),u',\varphi') = ((\Sigma,z_-,z_+),u,\varphi)$ and $$ v \circ \varphi_a(\tau,t) = \varphi_a(\tau,t+t_0) $$ for some $t_0 \in S^1$. The groups $\Gamma_{\frak p}$ and $S^1$ generate the group $G_{\frak p}$. \par\medskip The proof of Theorem \ref{existKuran} is now complete. \qed \begin{exm}\label{zeifertgeo} Let $h : \R \times S^1 \to X$ be a solution of the equation (\ref{Fleq}) (without bubble). We assume that $h$ is injective. We put $$ h_k(\tau,t) = h(k\tau,kt) : \R \times S^1 \to X. $$ We define $\varphi : \R \times S^1 \to S^2 \setminus \{z_-,z_+\}$ by $\varphi(\tau,t) = e^{2\pi(\tau+\sqrt{-1}t)}$. We put $u_k = h_k \circ \varphi^{-1}$.
Then $((S^2,z_-,z_+),u_k,\varphi)$ is an element of ${\mathcal M}(X,H;\frak z^-,\frak z^+,\alpha)$. Let $t_0\varphi(\tau,t) = \varphi(\tau,t+t_0)$. It is easy to see that $$ ((S^2,z_-,z_+),u_k,\varphi) \sim_2 ((S^2,z_-,z_+),u_k,t_0\varphi) $$ if and only if $t_0 = m/k$, $m\in \Z$. \par We can take the Kuranishi neighborhood of $\frak p = ((S^2,z_-,z_+),u_k,\varphi) $ of the form $V = S^1 \times V'$, on which the generator of the group $\Gamma_{\frak p} = \Z_k$ acts by $(t,v) \mapsto (t+1/k,\psi(v))$ where $\psi : V' \to V'$ is not the identity map. The $S^1$ action is by rotation of the first factor. Thus the quotient $V/\Gamma_{\frak p}$ is a manifold and $V/(\Gamma_{\frak p} \times S^1)$ is an orbifold. See Example \ref{zeifert}. \end{exm} \begin{rem} In this section we studied the moduli space ${{\mathcal M}}(X,H;\frak z^-,\frak z^+,\alpha)$ in the case when $H$ is a time independent Morse function. In an alternative approach (such as that of \cite[Section 26]{fooospectr}), we studied the case $H\equiv 0$ using Bott-Morse gluing. Actually the discussion corresponding to this section is easier in the case of $H\equiv 0$. In fact, in the case $H \equiv 0$ we do not need to study the moduli space of gradient lines. (Some argument was necessary to discuss the moduli space of gradient lines since its elements have $S^1$ as an isotropy group. The main part of this section is devoted to this point.) \par In \cite{FOn} we used the case where $H$ is a time independent Morse function rather than studying the case $H\equiv 0$ by Bott-Morse theory. The reason is that the chain level argument that we need to use for the case $H\equiv 0$ was not written in detail at the time when \cite{FOn} was written in 1996. Now the full details of the chain level argument are given in \cite{fooo:book1}.
So at the stage of 2012 (16 years after \cite{FOn} was written), using the case $H\equiv 0$ to calculate the Floer homology of a periodic Hamiltonian system is somewhat simpler to write up in detail than using the case when $H$ is a time independent Morse function. In this section, however, we focused on the case of a time independent Morse function and have written up as much detail as possible to convince the readers. \end{rem} \section{Calculation of Floer homology of periodic Hamiltonian system} \label{calcu} We first prove Theorem \ref{connectingcompactka}. The proof is similar to the proof of Theorem \ref{existKuran}. We indicate below the points where the proofs are different. \par Let $(\Sigma,z_-,z_+)$ be as in Definition \ref{defn41}. We define the notion of mainstream, mainstream component, and transit point in the same way as in Section \ref{construS1equiv}. \par Let $\tilde{\gamma}^{\pm} = ({\gamma}^{\pm},w^{\pm}) \in \tilde{\frak P}(H)$. (Here $H$ is a time {\it dependent} periodic Hamiltonian.) Let $((\Sigma,z_-,z_+),u,\varphi)$ be as in Definition \ref{defn210} such that it satisfies (1)(2)(4)(5) of Definition \ref{defn210} and the following three conditions: \begin{enumerate} \item[(3)$'$] $u : \Sigma \setminus \{\text{transit points}\} \to X$ is a continuous map. \item[(6)$'$] There exist $\tilde\gamma_i = (\gamma_i,w_i) \in \tilde{\frak P}(H)$ for $i=1,\dots,k+1$ with $\tilde\gamma_1 = \tilde\gamma^-$, $\tilde\gamma_{k+1} = \tilde\gamma^+$ such that $$ \lim_{\tau\to -\infty} u(\varphi_i(\tau,t)) = \gamma_i(t), \qquad \lim_{\tau\to +\infty} u(\varphi_i(\tau,t)) = \gamma_{i+1}(t). $$ Here $\varphi_i : \R \times S^1 \to \Sigma_i$ is the $i$-th component of $\varphi$. \item[(7)$'$] $w_i \# u(\Sigma_i) \sim w_{i+1}$. \end{enumerate} We denote by $\widehat{\mathcal M}(X,H;\tilde\gamma^-,\tilde\gamma^+)$ the set of such $((\Sigma,z_-,z_+),u,\varphi)$. We define equivalence relations $\sim_1$ and $\sim_2$ on it in the same way as Definition \ref{3equivrel}.
(We do not use $\sim_3$ here.) We then put $$\aligned \widetilde{\mathcal M}(X,H;\tilde\gamma^-,\tilde\gamma^+) &= \widehat{\mathcal M}(X,H;\tilde\gamma^-,\tilde\gamma^+)/\sim_1, \\ {\mathcal M}(X,H;\tilde\gamma^-,\tilde\gamma^+) &= \widehat{\mathcal M}(X,H;\tilde\gamma^-,\tilde\gamma^+)/\sim_2. \endaligned$$ \par\smallskip We next define a balancing condition. Let $((\Sigma,z_-,z_+),u,\varphi) \in \widehat{\mathcal M}(X,H;\tilde\gamma^-,\tilde\gamma^+)$ be an element with one mainstream component. We define a function $\mathcal A : \R \setminus \text{a finite set} \to \R$ as follows. \par Let $\tau_0 \in \R$. We assume $\varphi(\{\tau_0\} \times S^1)$ does not contain a root of the bubble tree. (This is how we remove a finite set from the domain of $\mathcal A$.) Let $\Sigma_{{\rm v}_i}$, $i=1,\dots,m$ be the irreducible components that are in a bubble tree rooted on $\R_{\le \tau_0}\times S^1$. We define \begin{equation} \aligned \mathcal A(\tau_0) = \sum_{i=1}^m \int_{\Sigma_{{\rm v}_i}} u^*\omega &+ \int_{\tau=-\infty}^{\tau_0}\int_{t\in S^1} (u\circ\varphi)^*\omega\\ &+ \int_{t\in S^1} H(t,u(\varphi(\tau_0,t))) dt + \int_{D^2} (w^-)^*\omega. \endaligned \end{equation} Note that the action functional $\mathcal A_H$ is defined by $$ \mathcal A_H(\tilde\gamma) = \int_{t\in S^1} H(t,\gamma(t)) dt + \int_{D^2} (w)^*\omega. $$ The function $\mathcal A$ is nondecreasing and satisfies $$ \lim_{\tau\to-\infty} \mathcal A(\tau) = \mathcal A_H(\tilde\gamma^-), \qquad \lim_{\tau\to+\infty} \mathcal A(\tau) = \mathcal A_H(\tilde\gamma^+). $$ We say $\varphi$ satisfies the {\it balancing condition} if \begin{equation} \lim_{\tau < 0 \atop \tau\to 0} \mathcal A(\tau) \le \frac{1}{2} \left( \mathcal A_H(\tilde\gamma^-) + \mathcal A_H(\tilde\gamma^+) \right) \le \lim_{\tau > 0 \atop \tau\to 0} \mathcal A(\tau).
\end{equation} For a general $((\Sigma,z_-,z_+),u,\varphi) \in \widehat{\mathcal M}(X,H;\tilde\gamma^-,\tilde\gamma^+)$ we impose the balancing condition mainstream-component-wise. \par In the case where there is a mainstream component $\Sigma_i$ such that $\partial (u\circ \varphi_i)/\partial \tau = 0$, we can apply the method of Remark \ref{rem48}. \par\smallskip We next define the notion of canonical marked point. Let $\frak p = ((\Sigma,z_-,z_+),u,\varphi) \in {\mathcal M}(X,H;\tilde\gamma^-,\tilde\gamma^+)$. Let $\Sigma_i$ be its mainstream component. We assume that there is no sphere bubble rooted on it. We are given a biholomorphic map $\varphi_i : \R\times S^1 \to \Sigma_i \setminus \{z_i,z_{i+1}\}$, where $z_i,z_{i+1}$ are the transit points on $\Sigma_i$. We require $\varphi_i$ to satisfy the balancing condition. Now we define the canonical marked point $w^{\rm can}_i$ on $\Sigma_i$ by $ w^{\rm can}_i = \varphi_i(0,0). $ Let $\vec w^{\rm can}$ be the totality of all the canonical marked points on $\Sigma$. \par A symmetric stabilization of $\frak p = ((\Sigma,z_-,z_+),u,\varphi) \in {\mathcal M}(X,H;\tilde\gamma^-,\tilde\gamma^+)$ is $\vec w$ such that $\vec w \cap \Sigma_0 = \emptyset$, where $\Sigma_0$ is the mainstream of $\Sigma$, and such that $\vec w \cup\vec w^{\rm can}$ is a symmetric stabilization of $(\Sigma,z_-,z_+)$. \begin{defn}\label{obbundeldata2} An {\it obstruction bundle data $\frak E_{\frak p}$ centered at} $$ \frak p = ((\Sigma,z_-,z_+),u,\varphi) \in {\mathcal M}(X,H;\tilde\gamma^-,\tilde\gamma^+) $$ is the data satisfying the conditions described below. We put $\frak x = (\Sigma,z_-,z_+)$. Let $\frak x_i$ be the $i$-th mainstream component. (It has two marked points.) \begin{enumerate} \item A symmetric stabilization $\vec w $ of $\frak p$. We put $\vec w^{(i)} = \vec w \cap \frak x_i$. \item The same as Definition \ref{obbundeldata} (2). \item A universal family with coordinate at infinity of $\frak x_{\frak p} \cup \vec w \cup \vec w^{\rm can}$.
Here $\vec w^{\rm can}$ is the set of canonical marked points as above. We require Condition \ref{condcoord} for the coordinate at infinity. \item The same as Definition \ref{obbundeldata} (4). Namely, compact subsets $K^{\rm obst}_{\rm v}$ of $\Sigma_{\rm v}$. (We put $K^{\rm obst}_{\rm v}$ also on the mainstream.) \item The same as Definition \ref{obbundeldata} (5). Namely, finite dimensional complex linear subspaces $E_{\frak p,{\rm v}}(\frak y,u)$. (We put $E_{\frak p,{\rm v}}(\frak y,u)$ also on the mainstream.) \item The same as Definition \ref{obbundeldata} (6), except that the differential operator there, \begin{equation}\label{ducomponents22} \aligned \overline D_{u} \overline\partial : &L^2_{m+1,\delta}((\Sigma_{\frak y_{\rm v}},\partial \Sigma_{\frak y_{\rm v}}); u^*TX, u^*TL) \\ &\to L^2_{m,\delta}(\Sigma_{\frak y_{\rm v}}; u^*TX \otimes \Lambda^{0,1})/E_{\frak p,{\rm v}}(\frak y,u) \endaligned \end{equation} is replaced by the linearization of the equation (\ref{Fleq}). \item The same as Definition \ref{obbundeldata} (7). \item We take a codimension 2 submanifold $\mathcal D_j$ for each $w_j \in \vec w$ in the same way as Definition \ref{obbundeldata} (8). \end{enumerate} \par We require that the data $K^{\rm obst}_{\rm v}$, $E_{\frak p,{\rm v}}(\frak y,u)$ depend only on the $\frak p_i = [(\Sigma_i,z_{i-1},z_i),u,\varphi]$ (where $z_i$ is the $i$-th transit point) that contains the $\rm v$-th irreducible component. We call this condition {\it mainstream-component-wise}. \end{defn} We define ${\mathcal M}_{\ell}(\tilde\gamma^-,\tilde\gamma^+)$ in the same way as ${\mathcal M}_{\ell}(\frak z^-,\frak z^+,\alpha)$. (Namely its element is $((\Sigma_i,z_{i},z_{i+1}),\varphi)$ together with $\ell$ additional marked points on $\Sigma$, elements $\tilde\gamma_i \in \tilde{\frak P}(H)$ assigned to each of the transit points, and a homology class for each of the bubbles.)
We denote by ${\mathcal M}_{\ell}(\tilde\gamma^-,\tilde\gamma^+;\mathcal G)$ its subset consisting of elements with given combinatorial type $\mathcal G$. \par Let $ \frak p = ((\Sigma,z_-,z_+),u,\varphi) \in {\mathcal M}(X,H;\tilde\gamma^-,\tilde\gamma^+) $ and $\vec w \cup \vec w^{\rm can}$ be its symmetric stabilization. We denote by $\frak V(\frak p \cup \vec w \cup \vec w^{\rm can})$ a neighborhood of $\frak p \cup \vec w \cup \vec w^{\rm can}$ in ${\mathcal M}_{\ell}(\tilde\gamma^-,\tilde\gamma^+;\mathcal G_{\frak p \cup \vec w \cup \vec w^{\rm can}})$. \par We can define \begin{equation}\label{4182} \overline{\Phi} : {{\frak V}}(\frak p \cup \vec w\cup \vec w^{\rm can}) \times D(k;\vec T_0) \times (\prod_{j=1}^m(T_{0,j},\infty] \times S^1)/\sim) \to {\mathcal M}_{\ell}(\tilde\gamma^-,\tilde\gamma^+) \end{equation} in the same way as (\ref{418}). \par We say $((\Sigma',z'_-,z'_+),u',\varphi') \cup \vec w'$ is {\it $\epsilon$-close} to $\frak p \cup \vec w\cup \vec w^{\rm can}$, if (1)(2)(3)(4) of Definition \ref{epsiclose} hold and if \begin{enumerate} \item[(5)$'$] Let $w'_j = \varphi'_j(\tau_j,t_j) \in \vec w'$ be the marked point corresponding to the canonical marked point $\in \vec w^{\rm can} \cap \Sigma_i$. Then $$ \left\vert \mathcal A(\tau_j) - \frac{1}{2} \left( \mathcal A_H(\tilde\gamma_i) + \mathcal A_H(\tilde\gamma_{i+1}) \right) \right\vert < \epsilon $$ and $ \vert t_j - 0\vert < \epsilon. $ \end{enumerate} \par If $((\Sigma',z'_-,z'_+),u',\varphi') \cup \vec w'$ is $\epsilon$-close to $\frak p \cup \vec w\cup \vec w^{\rm can}$ and the obstruction bundle data at $\frak p$ is given, then they induce an obstruction bundle at $((\Sigma',z'_-,z'_+),u',\varphi') \cup \vec w'$ in the same way as Definition \ref{obstructionbundle}. \begin{defn}\label{constrainttt2} We say that $((\Sigma',z'_-,z'_+),u',\varphi') \cup \vec w'$ satisfies the {\it transversal constraint} if the following holds. 
\begin{enumerate} \item The same as Definition \ref{constrainttt} (1). \item Let $w'_j = \varphi'_j(\tau_j,t_j) \in \vec w'$ be the marked point corresponding to the canonical marked point $\in \vec w^{\rm can} \cap \Sigma_i$. Then $$ \mathcal A(\tau_j) = \frac{1}{2} \left( \mathcal A_H(\tilde\gamma_i) + \mathcal A_H(\tilde\gamma_{i+1}) \right). $$ \item In the situation of (2) we have $ t_j = [0]. $ \end{enumerate} \end{defn} For each $\frak p \in {\mathcal M}(X,H;\tilde\gamma^-,\tilde\gamma^+)$ we take a stabilization data centered at $\frak p$. We also take $\epsilon_{\frak p}$ so that if $((\Sigma',z'_-,z'_+),u',\varphi') \cup \vec w'$ is $\epsilon_{\frak p}$-close to $\frak p \cup \vec w^{\frak p}\cup \vec w^{\rm can}$ then Fredholm regularity (see Definition \ref{fredreg}) and evaluation map transversality (see Definition \ref{def:1720}) hold for $((\Sigma',z'_-,z'_+),u',\varphi') \cup \vec w'$. \par Let $\frak W_{\frak p}$ be the set of elements $\frak p'$ with the following property: there exists $\vec w'$ such that $\frak p' \cup \vec w'$ is $\epsilon_{\frak p}$-close to $\frak p \cup \vec w^{\frak p} \cup \vec w^{\rm can}$ and $\vec w'$ satisfies the transversal constraint in the sense of Definition \ref{constrainttt2}. \par We use it to find a finite set $\frak C = \{\frak p_c\}$ such that $$ \bigcup_{c} {\rm Int}\,\frak W_{\frak p_c} = {\mathcal M}(X,H;\tilde\gamma^-,\tilde\gamma^+). $$ We then define $$ \frak C_{\frak p} = \{ c \in \frak C \mid \frak p \in \frak W_{\frak p_c}\}. $$ \begin{defn}\label{defthickenmoduli2} We define a {\it thickened moduli space} \begin{equation}\label{thicenmoduli22} {\mathcal M}_{(\ell_{\frak p},(\ell_c))}(X,H;\tilde\gamma^-,\tilde\gamma^+;\frak p;\frak A;\frak B)_{\epsilon_0,\vec T_0} \end{equation} to be the set of $\sim_2$ equivalence classes of $((\frak Y,u',\varphi'),\vec w'_{\frak p},(\vec w'_c))$ with the following properties. (Here $\frak A \subset \frak B \subset \frak C_{\frak p}$.)
\begin{enumerate} \item $(\frak Y,u',\varphi') \cup \vec w'_{\frak p}$ is $\epsilon_0$-close to $\frak p\cup \vec w_{\frak p} \cup \vec w^{\rm can}$. Here $\ell_{\frak p} = \#\vec w'_{\frak p}$. \item $(\frak Y,u',\varphi') \cup \vec w'_{c}$ is $\epsilon_0$-close to $\frak p\cup \vec w_c^{\frak p}$. Here $\ell_{c} = \#\vec w'_{c}$. \item On the bubble we have $$ \overline{\partial} u' \equiv 0 \mod \mathcal E_{\frak B}. $$ Here $\mathcal E_{\frak B}$ is the obstruction bundle defined by Definition \ref{obstructionbundle} in the same way as Definition \ref{def:187}. \item On the $i$-th irreducible component of the mainstream we consider $h'_i = u'\circ \varphi'_i$. Then it satisfies $$ \frac{\partial h'_i}{\partial \tau} + J\left( \frac{\partial h'_i}{\partial t} - \frak X_{H_t} \right) \equiv 0 \mod \mathcal E_{\frak B}. $$ Here $\mathcal E_{\frak B}$ is as in (3). \end{enumerate} \end{defn} \begin{defn} We denote by $V_{(\ell_{\frak p},(\ell_c))}(X,H;\tilde\gamma^-,\tilde\gamma^+;\frak p;\frak A;\frak B)_{\epsilon_0}$ the subset of the thickened moduli space ${\mathcal M}_{(\ell_{\frak p},(\ell_c))}(X,H;\tilde\gamma^-,\tilde\gamma^+;\frak p;\frak A;\frak B)_{\epsilon_0,\vec T_0} $ consisting of elements with the same combinatorial type as $\frak p, \vec w, \vec w_c$. (Compare \eqref{2193}.) \end{defn} The Fredholm regularity and evaluation map transversality imply that the space $V_{(\ell_{\frak p},(\ell_c))}(X,H;\tilde\gamma^-,\tilde\gamma^+;\frak p;\frak A;\frak B)_{\epsilon_0}$ is a smooth manifold. \begin{prop}\label{gluingpropham2} There exists a map $$ \aligned \text{\rm Glue} : &V_{(\ell_{\frak p},(\ell_c))}(X,H;\tilde\gamma^-,\tilde\gamma^+;\frak p;\frak A;\frak B)_{\epsilon_0} \times \left(\prod_{i=1}^m (T_{0,i},\infty] \times S^1)\right)/\sim \times D(k;\vec T'_0)\\ &\to {\mathcal M}_{(\ell_{\frak p},(\ell_c))}(X,H;\tilde\gamma^-,\tilde\gamma^+;\frak p;\frak A;\frak B)_{\epsilon_2}. 
\endaligned $$ Its image contains ${\mathcal M}_{(\ell_{\frak p},(\ell_c))}(X,H;\tilde\gamma^-,\tilde\gamma^+;\frak A;\frak B)_{\epsilon_3}$ for sufficiently small $\epsilon_3$. \par An estimate similar to Theorem \ref{exdecayT33} also holds. \end{prop} The proof is the same as the proof of Proposition \ref{gluingpropham}. Using Proposition \ref{gluingpropham2} we can prove Theorem \ref{connectingcompactka} in the same way as the last step of the proof of Theorem \ref{existKuran}. \qed \begin{rem} In case $H$ in Theorem \ref{connectingcompactka} happens to be time independent, the Kuranishi structure obtained by Theorem \ref{connectingcompactka} is different from the one obtained by Theorem \ref{existKuran}. In fact, during the proof of Theorem \ref{existKuran} we chose a (sufficiently dense finite) subset of $\overline{\overline{\mathcal M}}(X,H;\frak z^-,\frak z^+,\alpha)$ to define the obstruction bundle. During the proof of Theorem \ref{connectingcompactka} we chose a (sufficiently dense finite) subset of ${\mathcal M}(X,H;\tilde\gamma^-,\tilde\gamma^+)$ for the same purpose. The elements of the moduli space $\overline{\overline{\mathcal M}}(X,H;\frak z^-,\frak z^+,\alpha)$ are $\sim_3$ equivalence classes and the elements of the moduli space ${\mathcal M}(X,H;\tilde\gamma^-,\tilde\gamma^+)$ are $\sim_2$ equivalence classes. \end{rem} Using Theorem \ref{connectingcompactka} we can define Floer homology $HF(X,H)$ for time dependent 1-periodic Hamiltonian $H$ satisfying Assumption \ref{nondeg1}. This construction (going back to Floer \cite{Flo88IV,Flo89I}, see also \cite{HoSa95,Ono95}) is well-established. We sketch the construction here for completeness. We use the universal Novikov ring $$ \Lambda_0 = \left\{\left. \sum_{i=1}^{\infty} a_iT^{\lambda_i} ~\right\vert~ a_i \in \Q, \,\, \lambda_i \in \R, \,\, \lim_{i\to \infty} \lambda_i = + \infty \right\}. $$ Let $CF(X,H)$ be the free $\Lambda_0 $ module whose basis is identified with the set $\frak P(H)$. 
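For concreteness, here is a small illustrative computation of how truncation in the universal Novikov ring works; the element $x$ and the value $E=2$ are our own example, not taken from the text.

```latex
% Illustrative example (ours): truncation in \Lambda_0 / T^E \Lambda_0, with E = 2.
% Take the element
\[
x \;=\; 1 + 3T^{1/2} + 5T^{2} + 7T^{3} \;\in\; \Lambda_0 .
\]
% Every term whose exponent of T is \ge E vanishes in the quotient, hence
\[
x \;\equiv\; 1 + 3T^{1/2} \pmod{T^{2}\Lambda_0} .
\]
```

Since exponents of $T$ add under multiplication, operators whose matrix coefficients have nonnegative exponents (as will be the case for $\partial^E$, whose exponents are action differences) compose within $\Lambda_0$, and relations among them can be verified modulo $T^E\Lambda_0$.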
\par We take $E > 0$. By Theorem \ref{connectingcompactka}, we obtain a system of Kuranishi structures on ${\mathcal M}(X,H;\tilde\gamma^-,\tilde\gamma^+)$ for each pair $\tilde\gamma^-,\tilde\gamma^+$ with $\mathcal A_H(\tilde\gamma^+) - \mathcal A_H(\tilde\gamma^-) < E$. We take a system of multisections $\frak s$ on them that is compatible with the description of the boundary and corner as in Theorem \ref{connectingcompactka} (3). Here we use the fact that the obstruction bundle is defined mainstream-component-wise. Note our Kuranishi structure is oriented. We define \begin{equation} \partial^E [\gamma^-] = \sum_{\tilde\gamma^+, \mu(\tilde\gamma^+)-\mu(\tilde\gamma^- )= 1 \atop \mathcal A_H(\tilde\gamma^+) - \mathcal A_H(\tilde\gamma^-) < E} \#{\mathcal M}(X,H;\tilde\gamma^-,\tilde\gamma^+)^{\frak s} T^{\mathcal A_H(\tilde\gamma^+) - \mathcal A_H(\tilde\gamma^-)}[\gamma^+]. \end{equation} Here we take a lift $\tilde\gamma^-$ of $\gamma^-$ to define the right hand side. However we can show that the right hand side is independent of the choice of the lift. The number $\mu(\tilde\gamma^+)$ is the Maslov index. We have $$ \dim {\mathcal M}(X,H;\tilde\gamma^-,\tilde\gamma^+) = \mu(\tilde\gamma^+)-\mu(\tilde\gamma^-) - 1. $$ Using the moduli spaces ${\mathcal M}(X,H;\tilde\gamma^-,\tilde\gamma^+)$ with $\mu(\tilde\gamma^+)-\mu(\tilde\gamma^-) = 2$, that is, with $\dim {\mathcal M}(X,H;\tilde\gamma^-,\tilde\gamma^+) = 1$, we can prove \begin{equation} \partial^E \circ \partial^E \equiv 0 \mod T^E \Lambda_0 \end{equation} in a well-established way. Thus we define $$ HF(X;H;\Lambda_0/T^E\Lambda_0) = H(CF(X,H) \otimes_{\Lambda_0}{\Lambda_0/T^E\Lambda_0},\partial^E). $$ We can prove that $HF(X;H;\Lambda_0/T^E\Lambda_0)$ is independent of the choice of Kuranishi structure and its multisection. (See the proof of Theorem \ref{theorem10} later.) We thus define \begin{defn} $$ HF(X;H;\Lambda_0) = \lim_{\longleftarrow} HF(X;H;\Lambda_0/T^E\Lambda_0).
$$ We also define $$ HF(X;H) = HF(X;H;\Lambda_0) \otimes_{\Lambda_0}\Lambda $$ where $\Lambda$ is the field of fractions of $\Lambda_0$. \end{defn} In fact, using the next lemma we can find a boundary operator $\partial$ on the full module $CF(X,H)$ so that its homology is $HF(X;H;\Lambda_0)$. \begin{lem}\label{homotopyextension} Let $C$ be a finitely generated free $\Lambda_0$ module and $E < E'$. Suppose we are given $\partial_E : C \otimes_{\Lambda_0} \Lambda_0/T^E\Lambda_0 \to C \otimes_{\Lambda_0} \Lambda_0/T^E\Lambda_0$ and $\partial_{E'} : C \otimes_{\Lambda_0} \Lambda_0/T^{E'}\Lambda_0 \to C \otimes_{\Lambda_0} \Lambda_0/T^{E'}\Lambda_0$ with $\partial_E \circ \partial_E = 0$ and $\partial_{E'} \circ \partial_{E'} = 0$. Moreover we assume $(C \otimes_{\Lambda_0} \Lambda_0/T^E\Lambda_0, \partial_{E'} \mod T^E\Lambda_0)$ is chain homotopy equivalent to $(C \otimes_{\Lambda_0} \Lambda_0/T^E\Lambda_0, \partial_{E})$. Then we can lift $\partial_E$ to $\partial'_{E'} : C \otimes_{\Lambda_0} \Lambda_0/T^{E'}\Lambda_0 \to C \otimes_{\Lambda_0} \Lambda_0/T^{E'}\Lambda_0$ such that $(C \otimes_{\Lambda_0} \Lambda_0/T^{E'}\Lambda_0, \partial_{E'})$ is chain homotopy equivalent to $(C \otimes_{\Lambda_0} \Lambda_0/T^{E'}\Lambda_0, \partial'_{E'})$. \end{lem} We omit the proof. \begin{rem} The method for taking the projective limit $E \to \infty$ that we explained above is a baby version of the one employed in \cite[Section 7]{fooo:book1}. (In \cite{fooo:book1} the filtered $A_{\infty}$ structure is defined by using a similar method.) In \cite{Ono95}, a slightly different way of taking the projective limit was used. \par For the main application, that is, to estimate the order of $\frak P(H)$ from below by the sum of the Betti numbers, we actually do not need to go to the projective limit. See Remark \ref{limitremark}. \end{rem} Now we use the $S^1$ equivariant Kuranishi structure in Theorem \ref{existKuran} to prove the next theorem.
\begin{thm}\label{theorem10} For any time dependent 1-periodic Hamiltonian $H$ on a compact manifold $X$ satisfying Assumption \ref{nondeg1}, we have $$ HF(X,H) \cong H(X;\Lambda) $$ where the right hand side is the singular homology group with $\Lambda$ coefficients. \end{thm} \begin{proof} Let $H'$ be a time {\it independent} Hamiltonian satisfying Assumptions \ref{nondeg2}, \ref{nondeg3}. We regard $H'$ as a Morse function and let ${\rm Crit}(H')$ be the set of the critical points of $H'$. We denote by $CF(X,H';\Lambda_0)$ the free $\Lambda_0$ module with basis identified with ${\rm Crit}(H')$. Let $\mu : {\rm Crit}(H') \to \Z$ be the Morse index. For $\frak x^+,\frak x^- \in {\rm Crit}(H')$ with $\mu(\frak x^+) - \mu(\frak x^-) = 1$ we define \begin{equation} \langle \partial \frak x^-,\frak x^+\rangle = T^{H'(\frak x^+) - H'(\frak x^-)}\# \mathcal M(X,H',\frak x^-,\frak x^+;0), \end{equation} where $\# \mathcal M(X,H',\frak x^-,\frak x^+;0)$ is the number counted with orientation. (Here $0$ denotes the equivalence class of zero in $\Pi$. By Assumption \ref{nondeg2} this moduli space is smooth.) It induces $\partial : CF(X,H';\Lambda_0) \to CF(X,H';\Lambda_0)$. It is by now well established that $\partial\circ \partial = 0$. We put $$ HF(X,H';\Lambda_0) = \frac{\text{\rm Ker} \partial}{\text{\rm Im} \partial}. $$ It is also standard by now that $$ HF(X,H') = HF(X,H';\Lambda_0) \otimes_{\Lambda_0} \Lambda $$ is isomorphic to the singular homology $H(X;\Lambda)$ with $\Lambda$ coefficients. \par We will construct a chain map from $CF(X,H';\Lambda_0)$ to $CF(X,H;\Lambda_0)$. Let $\mathcal H : \R \times S^1 \times X \to \R$ be a smooth map such that \begin{equation} \mathcal H(\tau,t,x) = \begin{cases} H'(x) &\text{if $\tau \le -1$}, \\ H(t,x) &\text{if $\tau \ge 1$}.
\end{cases} \end{equation} For a map $h : \R \times S^1 \to X$ we consider the equation \begin{equation}\label{homomoeq} \frac{\partial h}{\partial \tau} + J\left( \frac{\partial h}{\partial t} - \frak X_{\mathcal H_{\tau,t}} \right) = 0, \end{equation} where $\mathcal H_{\tau,t}(x) = \mathcal H(\tau,t,x)$. \par Given $\frak x \in \text{\rm Crit}(H')$ and $\tilde\gamma = (\gamma,w) \in \tilde{\frak P}(H)$ we consider the set of the maps $h$ satisfying (\ref{homomoeq}) together with the following boundary conditions. \begin{enumerate} \item $ \lim_{\tau \to -\infty} h(\tau,t) = \frak x. $ \item $ \lim_{\tau \to +\infty} h(\tau,t) = \gamma(t). $ \item $[h] \sim [w]$. Here $h$ is regarded as a map from $D^2$ by identifying $\{-\infty\} \cup ((-\infty,+\infty] \times S^1)$ with $D^2$. \end{enumerate} We denote the totality of such $h$ by ${\mathcal M}^{\rm reg}(X,\mathcal H;\frak x,\tilde\gamma)$. \begin{thm}\label{theorem51} There exists a compactification ${\mathcal M}(X,\mathcal H;\frak x,\tilde\gamma)$ of ${\mathcal M}^{\rm reg}(X,\mathcal H;\frak x,\tilde\gamma)$, which is Hausdorff. \par For each $E>0$ there exists a system of oriented Kuranishi structures with corners on ${\mathcal M}(X,\mathcal H;\frak x,\tilde\gamma)$ for $\mathcal A_H(\tilde \gamma) \le E$. Its boundary is identified with the union of the following spaces, together with its Kuranishi structures. \begin{enumerate} \item $$ {\mathcal M}(X,H';\frak x,\frak x',\alpha) \times {\mathcal M}(X,\mathcal H;\frak x',\tilde\gamma-\alpha) $$ where $\frak x' \in {\rm Crit}(H')$, $\alpha \in \Pi$ and $\tilde\gamma- \alpha = (\gamma,w-\alpha)$. \item $$ {\mathcal M}(X,\mathcal H;\frak x,\tilde\gamma') \times {\mathcal M}(X,H,\tilde\gamma',\tilde\gamma) $$ where $\tilde\gamma' \in \tilde{\frak P}(H)$. \end{enumerate} \end{thm} \begin{proof} The proof of Theorem \ref{theorem51} is mostly the same as the proof of Theorems \ref{connectingcompactka} and \ref{existKuran}. 
So we mainly discuss the points where the proof differs. \par Let $(\Sigma,z_-,z_+)$ be a genus zero semi-stable curve with two marked points. We fix one of the irreducible components in the mainstream and denote it by $\Sigma^0_0$. We decompose \begin{equation} \Sigma = \Sigma^- \cup \Sigma^0 \cup \Sigma^+ \end{equation} as follows. $\Sigma^0$ is the mainstream component containing $\Sigma^0_0$. $\Sigma^-$ (resp. $\Sigma^+$) is the connected component of $\Sigma \setminus \Sigma^0$ containing $z_-$ (resp. $z_+$). We remark that $\Sigma^-$ and/or $\Sigma^+$ may be empty. \par We consider $((\Sigma,z_-,z_+), \Sigma^0,u,\varphi)$ such that Definition \ref{defn210} (1)(2)(4)(5) are satisfied. We assume moreover the following conditions. \begin{enumerate} \item[(3.1)$'$] $u\vert_{\Sigma^-} : \Sigma^- \to X$ is continuous. \item[(3.2)$'$] We put either $\{ z_{0,-} \} = \Sigma^0 \cap \Sigma^-$ or $z_{0,-} = z_-$. (Here we put $z_{0,-} = z_-$ if $z_- \in \Sigma^0$.) Then $u\vert_{\Sigma^0 \setminus \{z_{0,-}\}} : \Sigma^0 \setminus \{z_{0,-}\}\to X$ is continuous. \item[(3.3)$'$] The same condition as the condition (3)$'$ stated at the beginning of Section \ref{calcu} holds for $\Sigma^+$. \item[(4.1)$'$] $u\circ \varphi_i$ satisfies the equation (\ref{Fleq}) for $H'$ if $\Sigma_i \subset \Sigma^-$. \item[(4.2)$'$] $u\circ \varphi_0$ satisfies the equation (\ref{homomoeq}). Here $\varphi_0$ is the part of $\varphi$ that parametrizes $\Sigma_0^0$. \item[(4.3)$'$] $u\circ \varphi_i$ satisfies the equation (\ref{Fleq}) for $H$ if $\Sigma_i \subset \Sigma^+$. \item[(6.1)$'$] Definition \ref{defn210} (6) is satisfied on $\Sigma^-$. \item[(6.2)$'$] Let $k-1$ be the number of the mainstream components in $\Sigma^+$. Then there exist $\tilde\gamma_j = (\gamma_j,w_j) \in \tilde{\frak P}(H)$ for $j=1,\dots,k-1$. We have $$ \lim_{\tau\to \infty} u(\varphi_0(\tau,t)) = \gamma_1(t). $$ Here $\varphi_0$ is as in (4.2)$'$.
\item[(6.3)$'$] Let $\Sigma^+_j$ ($j=1,\dots,k-1$) be the irreducible components in the mainstream of $\Sigma^+$, and let $\varphi^+_j$ be the part of $\varphi$ that parametrizes $\Sigma^+_j$. Then we have $$ \lim_{\tau\to -\infty} u(\varphi^+_j(\tau,t)) = \gamma_j(t), \qquad \lim_{\tau\to \infty} u(\varphi^+_j(\tau,t)) = \gamma_{j+1}(t). $$ Here $\gamma_{k} = \gamma$. \item[(7)$'$] $u_*([\Sigma]) = [w]$. \end{enumerate} We denote by $\widehat{\mathcal M}(X,\mathcal H;\frak x,\tilde\gamma)$ the set of all such $((\Sigma,z_-,z_+), \Sigma^0,u,\varphi)$. \par We define three equivalence relations $\sim_1$, $\sim_2$, $\sim_3$ on $\widehat{\mathcal M}(X,\mathcal H;\frak x,\tilde\gamma)$ as follows. \par The definition of $\sim_1$ is the same as Definition \ref{3equivrel}. \par We apply $\sim_2$ of Definition \ref{3equivrel} on $\Sigma^+$ and $\Sigma^-$ and $\sim_1$ of Definition \ref{3equivrel} on $\Sigma^0$. This is $\sim_2$ here. \par We apply $\sim_2$ of Definition \ref{3equivrel} on $\Sigma^+$, $\sim_3$ of Definition \ref{3equivrel} on $\Sigma^-$ and $\sim_1$ of Definition \ref{3equivrel} on $\Sigma^0$. This is $\sim_3$ here. We put $$\aligned \widetilde{\mathcal M}(X,\mathcal H;\frak x,\tilde\gamma) &= \widehat{\mathcal M}(X,\mathcal H;\frak x,\tilde\gamma)/\sim_1, \\ {\mathcal M}(X,\mathcal H;\frak x,\tilde\gamma) &= \widehat{\mathcal M}(X,\mathcal H;\frak x,\tilde\gamma)/\sim_2, \\ \overline{\overline{\mathcal M}}(X,\mathcal H;\frak x,\tilde\gamma) &= \widehat{\mathcal M}(X,\mathcal H;\frak x,\tilde\gamma)/\sim_3. \endaligned$$ \par We define the notion of balancing condition on the mainstream components in $\Sigma^-$ and $\Sigma^+$ in the same way as before. (We do not define such a notion for $\Sigma^0_0$ since the equation (\ref{homomoeq}) is not invariant under the $\R$ action.) \par We next define the notion of canonical marked points. For the mainstream components in $\Sigma^+$ or in $\Sigma^-$, the definition is the same as before.
If $\Sigma^0$ contains a sphere bubble we do not define canonical marked points on it. Otherwise the canonical marked point on this mainstream component is $\varphi_0(0,0)$. \par Using this notion of canonical marked points we can define the notion of obstruction bundle data for $[\frak p]\in \overline{\overline{\mathcal M}}(X,\mathcal H;\frak x,\tilde\gamma)$ in the same way as before. (We put an obstruction bundle also on $\Sigma^0_0$.) We take and fix an obstruction bundle data for each $[\frak p]\in \overline{\overline{\mathcal M}}(X,\mathcal H;\frak x,\tilde\gamma)$. \par We can use it to define maps similar to $\overline{\Phi}$ and $\overline{\overline{\Phi}}$ in the same way. \par We then use them to define the notion of $\epsilon$-closeness in the same way. \par We next define the transversal constraint. Let $((\Sigma',z'_-,z'_+), \Sigma^{\prime 0},u',\varphi') \cup \vec w'$ be $\epsilon$-close to $((\Sigma,z_-,z_+), \Sigma^0,u,\varphi) \cup \vec w \cup \vec w^{\rm can}$. We consider $w'_j \in \vec w'$. If $w'_j$ is in $\Sigma^{\prime -}$ or in $\Sigma^{\prime +}$, then the definition of the transversal constraint is the same as Definition \ref{constrainttt} or Definition \ref{constrainttt2}, respectively. \par Suppose $w'_j \in \Sigma^{\prime 0}$. If $w'_j$ corresponds to a marked point in $\vec w$ then the definition of the transversal constraint is the same as Definition \ref{constrainttt}. We consider the case where $w'_j$ corresponds to a canonical marked point $w_i$. There are three cases. \begin{enumerate} \item $w_i \in \Sigma^0$. In this case the transversal constraint is $w'_j = \varphi'_0(0,0)$. \item $w_i \in \Sigma^-$. Let $\Sigma_{-,i}$ be the mainstream component containing $w_i$. (It is irreducible since $w_i$ is a canonical marked point.) The transversal constraint first requires $w'_j = \varphi'_0(\tau_0,0)$ with $\tau_0 \le -1$.
Moreover it requires $$ \aligned &\int_{\Sigma^{\prime -}} (u')^*\omega + \int_{\tau=-\infty}^{\tau_0} (u'\circ\varphi'_0)^*\omega + \int_{t\in S^1} H'(u'(\varphi'_0(\tau_0,t)))\, dt \\ &= \frac{1}{2} \left( {H'}(u(z_i)) + {H'}(u(z_{i+1})) \right) + \int_{\Sigma_{\tau\le \tau_i}} u^*\omega. \endaligned $$ Here $z_i$ and $z_{i+1}$ are the transit points contained in $\Sigma_{-,i}$ and $\Sigma_{\tau\le \tau_i}$ is defined as follows. Let $w_i = \varphi_i(\tau_i,0)$. We consider $\Sigma \setminus \{ \varphi_i(\tau_i,t) \mid t \in S^1\}$. Then $\Sigma_{\tau\le \tau_i}$ is the connected component of it containing $z_-$. \item $w_i \in \Sigma^+$. Let $\Sigma_{+,i}$ be the mainstream component containing $w_i$. (It is irreducible since $w_i$ is a canonical marked point.) The transversal constraint first requires $w'_j = \varphi'_0(\tau_0,0)$ with $\tau_0 \ge +1$. Moreover it requires $$ \aligned &\int_{\Sigma^{\prime -}} (u')^*\omega + \int_{\tau=-\infty}^{\tau_0} (u'\circ\varphi'_0)^*\omega + \int_{t\in S^1}H(t,u'(\varphi'_0(\tau_0,t)))dt \\ &= \frac{1}{2} \left( \mathcal A_{H}(\tilde{\gamma}_i) + \mathcal A_{H}(\tilde{\gamma}_{i+1}) \right). \endaligned $$ Here the restriction of $u$ to $\Sigma_{+,i}$ gives an element of $ {\mathcal M}(X,H;\tilde\gamma_i,\tilde\gamma_{i+1})$. \end{enumerate} For a point $[\frak p]\in \overline{\overline{\mathcal M}}(X,\mathcal H;\frak x,\tilde\gamma)$ we define the notion of stabilization data in the same way as before. \par Now using this notion of transversal constraint and $\epsilon$-closeness, we define $\frak W_{[\frak p]} \subset \overline{\overline{\mathcal M}}(X,\mathcal H;\frak x,\tilde\gamma)$ for each $[\frak p]\in \overline{\overline{\mathcal M}}(X,\mathcal H;\frak x,\tilde\gamma)$. We then define a finite set $\frak C$ such that $$ \bigcup_{[\frak p_c] \in \frak C}\frak W_{[\frak p_c]} = \overline{\overline{\mathcal M}}(X,\mathcal H;\frak x,\tilde\gamma). $$ We may assume that this choice is mainstream-component-wise in the same sense as before.
\par Using the choice of $\frak C$ together with the obstruction bundle data, we can define an obstruction bundle in the same way as before. We use it to define a thickened moduli space. The rest of the proof is the same as the proof of Theorems \ref{connectingcompactka} and \ref{existKuran}. \end{proof} \begin{lem}\label{512} There exists a constant $\frak E^-(\mathcal H)$ depending only on $\mathcal H$ with the following property. If ${{\mathcal M}}(X,\mathcal H;\frak x,\tilde\gamma)$ is nonempty then \begin{equation} \mathcal A_{H}(\tilde\gamma) \ge H'(\frak x) -\frak E^-(\mathcal H). \end{equation} \end{lem} This lemma is classical. See \cite[(2.14)]{Ono95}. (It was written also in \cite[Lemma 9]{Floerhofer}.) \begin{rem} The optimal estimate for $\frak E^-(\mathcal H)$ is $\frak E^-(\mathcal H) = \mathcal E^-(H-H')$, where $$ \mathcal E^+(F) = \int_0^1 \max_{p\in X} F(t,p)dt, \quad \mathcal E^-(F) = -\int_0^1 \inf_{p\in X} F(t,p)dt. $$ See \cite[Proposition 3.2]{Oh}. (See also \cite[Proposition 2.1]{usher:depth}.) A Lagrangian version of a similar optimal estimate was obtained in \cite{Chek96}. \par We do not need this optimal estimate for the purpose of this note, but it becomes important in the study of spectral invariants. \end{rem} We now define $$ \Phi_E : CF(X,H';\Lambda_0) \to CF(X,H;\Lambda_0) $$ as follows. We consider $\frak x$ and $\tilde{\gamma}$ so that: \begin{enumerate} \item[(a)] The (virtual) dimension of ${{\mathcal M}}(X,\mathcal H;\frak x,\tilde\gamma)$ is $0$. \item[(b)] $ \mathcal A_{H}(\tilde\gamma) \le H'(\frak x) + E. $ \end{enumerate} We then put $$ \langle\Phi_E(\frak x),\tilde\gamma\rangle = T^{\mathcal A_{H}(\tilde\gamma) - H'(\frak x) + \frak E^-(\mathcal H)}\#{{\mathcal M}}(X,\mathcal H;\frak x,\tilde\gamma)^{\frak s}.
$$ Here we take and fix a system of multisections $\frak s$ of the moduli spaces ${{\mathcal M}}(X,\mathcal H;\frak x,\tilde\gamma)$ satisfying the inequality (b) above, which is transversal to zero and compatible with the description of the boundary. We use it to define the right hand side. \par We note that the exponent of $T$ in the right hand side is nonnegative because of Lemma \ref{512}. \begin{lem}\label{513} $$ \partial_E \circ \Phi_E - \Phi_E \circ \partial \equiv 0 \mod T^E\Lambda_0. $$ \end{lem} \begin{proof} We use the moduli spaces ${{\mathcal M}}(X,\mathcal H;\frak x,\tilde\gamma)$ satisfying (b) above whose virtual dimension is $1$. The boundary of such a moduli space is described as in Theorem \ref{theorem51}. The case (2) there, counted with sign, gives $\partial_E \circ \Phi_E$. \par We consider the case (1) there. We need to consider the case when the virtual dimension of ${\mathcal M}(X,H';\frak x,\frak x',\alpha)$ is zero. Using the $S^1$ equivariance of our Kuranishi structure and multisection, and Lemma \ref{lem214}, we find that such ${\mathcal M}(X,H';\frak x,\frak x',\alpha)$ is an empty set after perturbation, unless $\alpha = 0$. In the case $\alpha =0$ we can prove, in the same way as above, that $$ {\mathcal M}(X,H';\frak x,\frak x',0) = {\mathcal M}(X,H';\frak x,\frak x',0)^{S^1} $$ after perturbation. Therefore case (1) gives $\Phi_E \circ \partial$. The proof of Lemma \ref{513} is complete. \end{proof} We have thus defined a chain map \begin{equation} \Phi_E : CF(X,H';\Lambda_0/T^E\Lambda_0) \to CF(X,H;\Lambda_0/T^E\Lambda_0). \end{equation} We next put $ \mathcal H'(\tau,t,x) = \mathcal H(-\tau,t,x). $ We use it in the same way to define the moduli space ${{\mathcal M}}(X,\mathcal H';\tilde\gamma,\frak x)$. This moduli space has an oriented Kuranishi structure with corners and its boundary is described in a similar way as in Theorem \ref{theorem51}. (The proof of this fact is the same as the proof of Theorem \ref{theorem51}.)
If it is nonempty then we have \begin{equation} H'(\frak x) \ge \mathcal A_{H}(\tilde\gamma) - \frak E^+(\mathcal H'). \end{equation} \begin{rem} Here $\frak E^+(\mathcal H')$ is a certain constant depending only on $\mathcal H'$. The optimal value is $\mathcal E^+(H-H')$. \end{rem} We put $$ \langle\Psi_E(\tilde\gamma),\frak x\rangle = T^{H'(\frak x) - \mathcal A_{H}(\tilde\gamma) + \frak E^+(\mathcal H')}\#{{\mathcal M}}(X,\mathcal H';\tilde\gamma,\frak x)^{\frak s} $$ and obtain a chain map \begin{equation} \Psi_E : CF(X,H;\Lambda_0/T^E\Lambda_0) \to CF(X,H';\Lambda_0/T^E\Lambda_0) . \end{equation} \begin{lem}\label{CC;-} We may choose $\frak E^+(\mathcal H')$ and $\frak E^-(\mathcal H)$ such that the following holds for $\frak E = \frak E^+(\mathcal H')+\frak E^-(\mathcal H)$. \begin{enumerate} \item $\Psi_E\circ \Phi_E$ is chain homotopic to $[\frak x] \mapsto T^{\frak E}[\frak x]$. \item $\Phi_E\circ \Psi_E$ is chain homotopic to $[\tilde{\gamma}] \mapsto T^{\frak E}[\tilde{\gamma}]$. \end{enumerate} \end{lem} \begin{proof} For $S>0$ we define a smooth function $\rho_S : \R \to [0,1]$ such that $$ \rho_S(\tau) = \begin{cases} 1 &\text{if $\vert\tau\vert < S-1$} \\ 0 &\text{if $\vert\tau\vert \ge S$}, \end{cases} $$ and put $$ \mathcal H^S(\tau,t,x) = \rho_S(\tau) H(t,x) + (1-\rho_S(\tau)) H'(x). $$ For $\frak x_{\pm} \in \text{\rm Crit}(H')$ and $\alpha \in \Pi$, we use the perturbation of the Cauchy-Riemann equation by the Hamiltonian vector field of $\mathcal H^S_{\tau,t}$ to obtain a moduli space ${{\mathcal M}}(X,\mathcal H^S;\frak x_-,\frak x_+;\alpha)$ in the same way. Its union for $S \in [0,S_0]$ also has a Kuranishi structure whose boundary is given as in Theorem \ref{theorem51} (1), (2) and ${{\mathcal M}}(X,\mathcal H^S;\frak x_-,\frak x_+;\alpha)$ with $S=0,S_0$. \par We consider the case $S=0$. In this case the equation for ${{\mathcal M}}(X,\mathcal H^S;\frak x_-,\frak x_+;\alpha)$ is $S^1$ equivariant. Therefore it has an $S^1$ equivariant Kuranishi structure, and the $S^1$ action is free for $\alpha \ne 0$.
For $\alpha = 0$ we obtain an $S^1$ equivariant Kuranishi structure on ${{\mathcal M}}_0(X,\mathcal H^{S=0};\frak x_-,\frak x_+;0)$. Therefore, by counting the moduli spaces of virtual dimension $0$, we obtain the identity map. (It becomes $[\frak x] \mapsto T^{\frak E}[\frak x]$ because of the choice of the exponent in the definition.) \par The case $S = S_0$ with $S_0$ sufficiently large gives the composition $\Psi_E\circ \Phi_E$. \par (1) now follows from a cobordism argument. \par The proof of (2) is similar. \end{proof} Using \cite[Proposition 6.3.14]{fooo:book1} we have $$ HF(X,H';\Lambda_0) \cong (\Lambda_0)^{\oplus b'} \oplus \bigoplus_{i=1}^{m'} \frac{\Lambda_0}{T^{\lambda'_i}\Lambda_0}, $$ where $\lambda'_i$, $i=1,\dots,m'$, are positive real numbers. It implies $$ HF(X,H';\Lambda) \cong (\Lambda)^{\oplus b'}. $$ We remark that $HF(X,H';\Lambda) \cong H(X;\Lambda)$, where the right hand side is the singular homology. (Note $H'$ is a time independent Hamiltonian that is a Morse function on $X$.) \par Similarly we have $$ HF(X,H;\Lambda_0) \cong (\Lambda_0)^{\oplus b} \oplus \bigoplus_{i=1}^{m} \frac{\Lambda_0}{T^{\lambda_i}\Lambda_0} $$ and $$ HF(X,H;\Lambda) \cong (\Lambda)^{\oplus b}. $$ We take $E$ sufficiently large compared to $\frak E$ and the $\lambda_i$, $\lambda'_i$. Then we can use Lemma \ref{CC;-} to show $b = b'$. (See \cite[Subsection 6.2]{bidisk} for a more detailed proof of more precise results in a related situation.) The proof of Theorem \ref{theorem51} is now complete. \end{proof} \begin{rem}\label{limitremark} \begin{enumerate} \item The argument of the last part of the proof of Theorem \ref{theorem51} shows that, to prove the inequality (the homology version of Arnold's conjecture) \begin{equation}\label{arnold} \#\frak P(H) \ge \sum_k \text{\rm rank}\, H_k(X;\mathbb{Q}) \end{equation} for a periodic Hamiltonian system with non-degenerate closed orbits, we do not need to use a projective limit. We can use $HF(X,H;\Lambda_0/T^E\Lambda_0)$ for a sufficiently large but fixed $E$.
(Such $E$ depends on $H$ and $H'$.) \item During the above proof of the isomorphism $HF(X,H;\Lambda) \cong H(X;\Lambda)$ we did not construct an isomorphism between them but only showed the coincidence of their ranks. Actually we can construct the following diagram \begin{equation}\label{Diag} \CD CF(X,H';\Lambda_0/T^{E'}\Lambda_0) @>{\Phi_{E'}}>>CF(X,H;\Lambda_0/T^{E'}\Lambda_0) \\ @V{}VV @V VV \\ CF(X,H';\Lambda_0/T^{E}\Lambda_0) @>{\Phi_{E}}>>CF(X,H;\Lambda_0/T^{E}\Lambda_0)\endCD \end{equation} for $E<E'$. Here the vertical arrows are compositions of reduction modulo $T^E$ and chain homotopy equivalences. (Note this map is not the reduction modulo $T^{E}$. In fact the two chain complexes that are the target and the domain of the vertical arrows are constructed by using different Kuranishi structures and multisections.) We can prove that Diagram (\ref{Diag}) commutes up to chain homotopy. Then, using a lemma similar to Lemma \ref{homotopyextension}, we can extend $\Phi_E$ to a chain map $CF(X,H';\Lambda_0) \to CF(X,H;\Lambda_0)$. We can then prove that it is a chain homotopy equivalence. This argument is a baby version of the one developed in \cite[Section 7.2]{fooo:book1}. \end{enumerate} \end{rem} \begin{rem}\label{lagrangeproof} As we mentioned at the beginning of Part \ref{S1equivariant}, there is an alternative (third) proof of (\ref{arnold}) which does {\it not} use an $S^1$ equivariant Kuranishi structure, and which works for an arbitrary compact symplectic manifold $X$. Let $H$ be a time dependent Hamiltonian whose 1-periodic orbits are all non-degenerate. Let $\varphi : X \to X$ be the symplectic diffeomorphism that is the time one map of the time dependent Hamiltonian vector field associated to $H$. We consider the symplectic manifold $(X\times X,\omega \oplus -\omega)$. The graph $$ L(\varphi) = \{(x,\varphi(x)) \mid x \in X\} $$ is a Lagrangian submanifold of $X\times X$.
Since the inclusion induces an injective homomorphism $H(L(\varphi)) \cong H(X) \to H(X\times X)$ on homology groups, \cite[Theorem C, Theorem 3.8.41]{fooo:book1} implies that the Lagrangian Floer cohomology of $L(\varphi)$ with itself is defined (after an appropriate bulk deformation). Again, since $L(\varphi) \to X\times X$ induces an injective homomorphism on homology, the spectral sequence in \cite[Theorem D (D.3)]{fooo:book1} degenerates at the $E_2$ stage and implies $$ HF((L(\varphi),b,\frak b),(L(\varphi),b,\frak b);\Lambda) \cong H(L(\varphi);\Lambda) = H(X;\Lambda). $$ (Here $b$ is an appropriate bounding cochain and $\frak b$ is an appropriate bulk class.) Since $L(\varphi)$ is Hamiltonian isotopic to the diagonal $X$, \cite[Theorem 4.1.5]{fooo:book1} implies $$ HF((L(\varphi),b,\frak b),(L(\varphi),b,\frak b);\Lambda) \cong HF((L(\varphi),b,\frak b),(X,b',\frak b);\Lambda). $$ Note $L(\varphi) \cap X \cong \frak P(H)$ and the intersection is transversal. (This is a consequence of the nondegeneracy of the periodic orbits.) Therefore the rank of the Floer cohomology $HF((L(\varphi),b,\frak b),(X,b',\frak b);\Lambda)$ is not greater than the order of $\frak P(H)$. The formula (\ref{arnold}) follows. \par Note that in the above proof we use the injectivity of $H(L(\varphi)) \to H(X\times X)$ to show that $L(\varphi)$ (which is Hamiltonian isotopic to the diagonal) is unobstructed (after bulk deformation). Alternatively, we can use the involution $X \times X \to X\times X$, $(x,y) \mapsto (y,x)$, to prove the unobstructedness of the diagonal. (See \cite{fooo:inv}.) \end{rem} \par\newpage \part{An origin of this article and some related matters} \label{origin} \section{Background of this article} This article is more than 200 pages long and claims {\it no} new results. Since this is unusual for an article of this length posted on the e-print server, in this part we explain why such an article was written.
\par This article concerns the foundation of the virtual fundamental chain/cycle technique, especially of the one represented by the Kuranishi structure introduced by the first and the fourth named authors in \cite{FuOn99I, FOn} and slightly rectified by the present authors in \cite[Appendix A.1]{fooo:book1}. There are various articles on the electronic network (including the e-print server) that express negative opinions on the soundness of the foundation of the virtual fundamental chain/cycle technique, not just complaints about a lack of detail in the proofs. We also have information that several people have expressed negative opinions on the soundness of the foundation of the virtual fundamental chain/cycle technique on various occasions, such as lectures or talks at conferences or workshops. \par However, the authors of the present article were not directly asked about, or pointed to, explicit gaps or objections in our writings before such negative opinions were expressed in articles for public display. In addition, such negative opinions on the virtual fundamental chain/cycle technique have been presented in lectures or talks at which the authors of the present article were not present. \par We point out that the foundation of the virtual fundamental chain/cycle technique was established in published and refereed journal papers (\cite{FOn,LiTi98,LiuTi98,Rua99,Si}) by various authors. Most of these articles were written in the year 1996, that is, 16 years ago. Various articles (\cite{ChiMo,CMRS,operad,fooo:book1,HWZ,joyce,joyce2,Liu,LiuTi98,LuT,McDuff07,Sie96}) on the foundation of the virtual fundamental chain/cycle technique have also been written and/or published. \par During these 16 years, the virtual fundamental chain/cycle technique has been applied successfully for numerous purposes of different nature and by many authors.
Moreover, in the course of generating these applications, all the technical incompleteness or inconsistency in its foundation has been exposed and corrected through the subsequent underpinning and systematic usage of virtual fundamental chain/cycle techniques in many different ways. \par Nevertheless, the fact that various negative opinions on the soundness of the foundation of the virtual fundamental chain/cycle technique have spread out in public has caused significant reservations among various mathematicians, especially those of the younger generation, about using this technique for their purposes. We are afraid that this will cause a certain delay in the development of the relevant mathematics as a whole. \par On the other hand, when these negative opinions on the foundation of the virtual fundamental chain/cycle technique were expressed, most of them were not written in a way that explicitly specifies and/or pins down the points of concern. This fact, and the fact that we were not directly asked questions or given objections,\footnote{When the book \cite{fooo:book1} was published, we corrected, or supplied more detail on, all the points concerning Kuranishi structures that we were directly told of or found ourselves. After \cite{fooo:book1} was published, we had not directly heard of problems or objections to our writings until March 2012. There are a few exceptions: (1) A few people asked us to write more about analytic issues (more than we wrote in \cite[Section A1.4]{fooo:book1}). We know indirectly that such demands exist among other people too. (2) The first named author had some e-mail discussions with D. Joyce. (3) D. McDuff also asked some questions of us.} made it very difficult for us to try to eliminate such obstructions to the application of the virtual fundamental chain/cycle technique.
\par Finally, an occasion was set up for a group of invited mathematicians to discuss the soundness of the foundation of the virtual fundamental chain/cycle technique in the google group named `Kuranishi' (its administrator is H. Hofer), through which questions were asked directly of the present authors. In particular, K. Wehrheim sent us on March 13, 2012 a list of questions on points which she thought were problematic in the existing literature. We took this opportunity and did our best to change the above mentioned situation so that other mathematicians can also freely use the virtual fundamental chain/cycle technique. \par For this purpose, we temporarily halted most of our other on-going joint projects and concentrated on preparing answers to Wehrheim's questions in the google group `Kuranishi' in as great and complete detail as possible, and also on replying to all the questions asked further there. The pdf files \cite{Fu1}, \cite{FOn2}, \cite{fooo:ans3}, \cite{fooo:ans34}, \cite{fooo:ans5} were uploaded to the google group `Kuranishi' for this purpose during this discussion. They form the main parts of this article (after minor modifications\footnote{There are certain corrections, especially to \cite{FOn2}, which became Part \ref{Part2} of this article. We mention them in Remarks \ref{rem520}, \ref{thankmac}.}). \par After we had uploaded those files, which we thought answered all the questions raised by Wehrheim, an article \cite{MW1} of McDuff and Wehrheim was posted on the arXiv. The article contains some objections or negative comments on the soundness of the foundation of the virtual fundamental chain/cycle technique, especially of the Kuranishi structure laid out in \cite{FOn}, \cite{fooo:book1}. Unfortunately, the article does not even mention the presence of our replies \cite{Fu1}, \cite{FOn2}, \cite{fooo:ans3}, \cite{fooo:ans34}, \cite{fooo:ans5} to those objections and criticisms.
As far as we are concerned, those objections had already been answered and refuted in our files \cite{Fu1}, \cite{FOn2}, \cite{fooo:ans3}, \cite{fooo:ans34}, \cite{fooo:ans5}. We will explain these points more explicitly in Section \ref{HowMWiswrong}. \section{Our summary of the discussion at the google group `Kuranishi'} In this section we present a summary of our discussion in the google group `Kuranishi' {\it from our point of view}. There were several discussions at the beginning of this google group, but we skip them since we were not directly involved in them. The discussions we were directly involved in are related to the questions of K. Wehrheim. On March 13, 2012, she sent a series of questions about Kuranishi structures. The first of them is: \par\medskip {\bf 1.)} Please clarify, with all details, the definition of a Kuranishi structure. And could you confirm that a special case of your work proves the following? \begin{enumerate} \item The Gromov-Witten moduli space $\mathcal M_1(J,A)$ of $J$-holomorphic curves of genus 0, fixed homology class $A$, with 1 marked point has a Kuranishi structure. \item For any compact space $X$ with Kuranishi structure and continuous map $f:X\to M$ to a manifold $M$ (which suitably extends to the Kuranishi structure), there is a well defined $f_*[X]^{\rm vir}\in H_*(M)$. \end{enumerate} \par\medskip We replied in \cite{Fu1} as follows: \par\medskip {\bf Question 1} (1) Yes. (2) We need to assume $f$ to be strongly continuous and the Kuranishi structure to have a tangent bundle\footnote{We need to take the version of \cite{fooo:book1}, not of \cite{FOn}, for the definition of the existence of a tangent bundle.} and an orientation. \par\medskip This question is rather formal, so there is nothing more to explain here. The next two questions are: \par\medskip {\bf 2.)} The following seeks to clarify certain parts in the definition of Kuranishi structures and the construction of a cycle.
\begin{enumerate} \item What is the precise definition of a germ of coordinate change? \item What is the precise compatibility condition for this germ with respect to different choices of representatives of the germs of Kuranishi charts? \item What is the precise meaning of the cocycle condition? \item What is the precise definition of a good coordinate system? \item How is it constructed from a given Kuranishi structure? \item Why does this construction satisfy the cocycle condition? \end{enumerate} \par {\bf 3.)} Let $X$ be a compact space with Kuranishi structure and good coordinate system. Suppose that in each chart the isotropy group $\Gamma_p=\{{\rm id}\}$ is trivial and $s^\nu_p:U_p\to E_p$ is a transverse section. What further conditions on the $s^\nu_p$ do you need (and how do you achieve them) in order to ensure that the perturbed zero set $X^\nu=\cup_p (s^\nu_p)^{-1}(0) / \sim$ carries a global triangulation, in particular \begin{enumerate} \item $X^\nu$ is compact, \item $X^\nu$ is Hausdorff, \item $X^\nu$ is closed, i.e.\ if $X^\nu=\bigcup_n \Delta_n$ is a triangulation then $\sum_n f(\partial \Delta_n) = \emptyset$. \end{enumerate} \par Some parts of Question 2 concern the notion of a germ of Kuranishi neighborhood. We explain this issue in Subsection \ref{gernkuranishi}. As we explain there, we do not use the notion of a germ of Kuranishi neighborhood in \cite{fooo:book1}. Definition \ref{Definition A1.5} in this article is the same as \cite[Definition A1.5]{fooo:book1}. So at the time this question was asked, the point related to the notion of `germ of Kuranishi coordinate' had already been corrected in \cite{fooo:book1}. \par The most important statement in our answer \cite{Fu1} to Question 3 is: \par\medskip No further condition on $s^{\nu}_p$ is necessary if it is close enough to the original Kuranishi map and if we shrink the $U_p$'s during the construction.
\par\medskip In \cite{FOn2}, we explained the construction of the good coordinate system and virtual fundamental chains based on the definition of \cite{fooo:book1}, and confirmed the above statement in detail. This provides an answer to Questions 2) and 3). (An outline of that construction is given in \cite{Fu1}.) A few typos were pointed out and were immediately corrected. Then the discussion at the google group on this point stopped for a while.\footnote{We remark that Hausdorffness was a point mentioned by several people in the talks by McDuff and Wehrheim at the Institute for Advanced Study in March and April 2012.} \par In the meantime, we continued posting the other parts \cite{fooo:ans3}, \cite{fooo:ans34}, \cite{fooo:ans5} of our answer. After we had finished posting the last answer file \cite{fooo:ans5}, which we thought had answered all the questions raised by K. Wehrheim, a version of the manuscript by McDuff and Wehrheim was posted to the google group `Kuranishi', which soon appeared in the e-print arXiv as the article \cite{MW1}. Then discussions on Questions 2), 3) were restarted in the google group `Kuranishi'. McDuff asked several questions and then pointed out an error in \cite{FOn2}. We acknowledged that it was an error and at the same time posted its correction. This correction is used in Section \ref{sec:existenceofGCS} of this article. We thank McDuff for pointing out this error in Remark \ref{thankmac}. In the presence of Example 6.1.14 given in \cite{MW1}, we wrote the proofs of Hausdorffness and compactness, which appeared in Sections 2 and 4 of \cite{FOn2}, more carefully. They then became Sections \ref{defgoodcoordsec}, \ref{sec:existenceofGCS} and \ref{gentoplem} of this article. Thus Part \ref{Part2} of the present article answers Questions 2 and 3 in great detail.
Namely, the answer to Question 2 (1) is Definition \ref{Definition A1.3}, the answer to Question 2 (2) is explained in Subsection \ref{gernkuranishi}, the answer to Question 2 (3) is Definition \ref{Definition A1.5}, the answer to Question 2 (4) is Definition \ref{goodcoordinatesystem}, and the answers to Question 2 (5), (6) are given in Section \ref{sec:existenceofGCS}. The answer to Question 3 (1) is Lemma \ref{cuttedmodulilem}, the answer to Question 3 (2) is Corollary \ref{corollary521}, and the answer to Question 3 (3) is Lemma \ref{cycleproperties}. As we mentioned in the introduction, in our opinion the points appearing in Questions 2 and 3 are of a technical nature. \par\medskip The next question was: \par\medskip {\bf 4.)} For the Gromov-Witten moduli space $\mathcal M_1(J,A)$ of $J$-holomorphic curves of genus 0 with 1 marked point, suppose that $A\in H_2(M)$ is primitive so that $\mathcal M_1(J,A)$ contains no nodal or multiply covered curves. \begin{enumerate} \item Given two Kuranishi charts $(U_p, E_p, \Gamma_p=\{{\rm id}\}, \ldots)$ and $(U_q, E_q, \Gamma_q=\{{\rm id}\},\ldots)$ with overlap at $[r]\in\mathcal M_1(J,A)$, how exactly is a sum chart $(U_r,E_r,\ldots)$ with $E_r \simeq E_p\times E_q$ constructed? \item How are the embeddings $U_p \supset U_{pr} \hookrightarrow U_r$ and $U_q \supset U_{qr} \hookrightarrow U_r$ constructed? \item How is the cocycle condition proven for triples of such embeddings? \end{enumerate} This question addresses only a very special case in the standard practice of common research in the field.\footnote{In fact we do not need to use the virtual fundamental cycle in the case appearing in this question.
Since the pseudo-holomorphic curve appearing in such a moduli space is automatically somewhere injective, the transversality can be achieved by taking a generic $J$.} It appears to us that the only point to mention is how we associate the obstruction space $E(u')$ to an unknown map $u'$ (equipped with various other data). Namely, the Kuranishi neighborhood is the set of the solutions of the equation $ \overline\partial u' \equiv 0 \mod E(u'). $ \par The answer, which was in the first named author's google post \cite{Fu1}, was as follows. (We slightly change the notation below so that it is consistent with the argument in Part \ref{generalcase} of this article.) \par\medskip We cover the given moduli space by a finite number of sufficiently small closed subsets $\frak M_{\frak p_c}$ of the mapping space, each of which is `centered at' a point $\frak p_c \in \frak M_{\frak p_c}$ that is represented by a stable map $((\Sigma_c,\vec z_c),u_c)$. (Here $\Sigma_c$ is a (bordered) Riemann surface, $\vec z_c = (z_{c,1}, \dots,z_{c,m})$ are (interior or boundary) marked points, and $u_c : \Sigma_c \to X$ is a pseudo-holomorphic map.) We fix a subspace $E_c$ of $\Gamma(\Sigma_c;u_c^*TX\otimes \Lambda^{0,1})$ as in (12.7) on page 979 of \cite{FOn}. For $\frak p = ((\Sigma_{\frak p},\vec z_{\frak p}),u_{\frak p})$ we collect $E_c$ for all $c$ with ${\frak p} \in \frak M_{\frak p_c}$, and their sum is $E_{\frak p}$. The Kuranishi neighborhood of ${\frak p}$ is the set of solutions of \begin{equation}\label{definingeq} \overline{\partial} u' \equiv 0 \mod E_{\frak p}. \end{equation} We will discuss how we identify $E_c$ with a subspace of $\Gamma(\Sigma';(u')^*TX\otimes \Lambda^{0,1})$ in the case that $((\Sigma',\vec z'),u')$ is close to $((\Sigma_{\frak p},\vec z_{\frak p}),u_{\frak p})$.
When we fix $E_c$ we also fix finitely many additional marked points $\vec w_{c} = (w_{cj})$, where $w_{cj} \in \Sigma_c$, $j=1,\dots,k_c$, at the same time, and take transversals $\mathcal D_{cj}$ as in \cite[Appendix]{FOn}. We take sufficiently many marked points so that after adding those marked points $(\Sigma_c,\vec z_c\cup \vec w_{c})$ becomes stable. We consider $((\Sigma',\vec z'),u')$. For each $c$ we add marked points $\vec w'_c = (w'_{cj})$, $w'_{cj} \in \Sigma'$, $j=1,\dots,k_c$, to $(\Sigma',\vec z')$ so that $w'_{cj}$ lies on $\mathcal D_{cj}$. We thus obtain $(\Sigma',\vec z'\cup \vec w'_c)$, which becomes stable, for each $c$. We require that it is close to $(\Sigma_c,\vec z_c \cup \vec w_{c})$ in the Deligne-Mumford moduli space (or its bordered version). Then we obtain a diffeomorphism (outside the neck region) from $\Sigma_c$ to $\Sigma'$ which sends $\vec z_c\cup \vec w_{c}$ to $\vec z' \cup \vec w'_c$, preserving the enumeration. (See \cite[Lemma 13.18 and Appendix]{FOn}.) Using this diffeomorphism and the (complex linear part of the) parallel transport on $X$ (the symplectic manifold) with respect to the Levi-Civita connection along the minimal geodesic joining $u'(w'_{cj})$ with $u_c(w_{cj})$ (where $w_{cj} \in \Sigma_c$ is identified with $w'_{cj} \in \Sigma'$ by the above mentioned diffeomorphism), we send $E_c$ to a set of sections of $(u')^*TX\otimes \Lambda^{0,1}$ on $\Sigma'$. We do this for each $c$. (In other words the stabilization we use {\it depends} on $c$.) Thus each $E_c$ is identified with a subspace of $\Gamma(\Sigma';(u')^*TX\otimes \Lambda^{0,1})$. We take their sum and that is $E_{\frak p}$ at $(\Sigma',\vec z',(\vec w'_c))$. \par We have thus made sense out of (\ref{definingeq}). \par An important point here is that the subspace $E_c \subset \Gamma(\Sigma';(u')^*TX\otimes \Lambda^{0,1})$ at $(\Sigma',\vec z',(\vec w'_c))$ is {\it independent} of ${\frak p}$ as long as $(\Sigma',\vec z')$ is close to ${\frak p}$.
The data we use for the stabilization are chosen at ${\frak p}_c$ (not at ${\frak p}$) once and for all. This is essential for the cocycle condition to hold\footnote{Since Equation (\ref{definingeq}) makes sense in a way independent of $\frak p$, it seems possible to simply take the union of its solution spaces to obtain some Hausdorff metrizable space. That space can play the role of a metric space containing all the Kuranishi neighborhoods. However, we insist that we should {\it not} build the general theory of Kuranishi structures under the assumption of the existence of such an ambient space, since it would spoil the flexibility of the general theory we have.}. \par (2), (3): Once (1) is understood, the coordinate change $\underline\phi_{{\frak q}{\frak p}}$ is just a map which sends an element $((\Sigma',\vec z'),u')$ to the same element. So the cocycle condition is fairly obvious. { [End of the answer in \cite{Fu1}]} \par\medskip We were then asked to provide further details and responded to these requests with the posts \cite{fooo:ans3} and \cite{fooo:ans34}. Those answers discuss a much more general case than the one asked about in Question 4. We provided this general answer because there was also a demand for more details of the gluing analysis. \par The case asked about in Question 4 is of course contained in \cite{fooo:ans3} and \cite{fooo:ans34} as a special case. More specifically, our answers to the questions in Question 4 are given as follows: \par \begin{enumerate} \item \cite[Proposition 2.125]{fooo:ans34} = Proposition \ref{chartprop}. (It is based on \cite[Definition 2.60]{fooo:ans34} + \cite[Definition 2.63]{fooo:ans34} + \cite[Definition 2.119]{fooo:ans34} = Definitions \ref{defEc}, \ref{defthickened}, \ref{defVVVVV}.) \item The proof is completed in \cite[Subsection 2.10 (2.385)]{fooo:ans34} = Section \ref{kstructure} (\ref{eq23777}), based on the technical lemma \cite[Lemma 2.163]{fooo:ans34} = Lemma \ref{lem2143}.
The main part of the construction is \cite[Propositions 2.131 and 2.152]{fooo:ans34} = Propositions \ref{prop2117} and \ref{prop21333}. \item The proof is completed in \cite[Subsection 2.10 (page 119)]{fooo:ans34} = Section \ref{kstructure}. The main part of the proof is \cite[Lemma 2.145 and Proposition 2.158]{fooo:ans34} = Lemma \ref{lem125} and Proposition \ref{compaticoochamain}. \end{enumerate} The following additional question was asked by Wehrheim on March 23. \par\medskip Q4: ``Assuming that I understand your construction of a single Kuranishi chart in [FOn],\footnote{[FOn] is \cite{FOn} in the reference list of this article.} I would like to see all the details of constructing this simplest sum chart (which in [FOn] are rather scattered, as you said). In particular, I would like to see the very explicit Fredholm setup for the set of solutions of (0.5) ... i.e. what is the Banach manifold? What is the Fredholm operator? Why is it smooth / transverse? Or ... if you work with gluing maps and infinitesimal local slices ... maybe you could give the construction in a series of lemmas without proofs. Then I can ask about specific proofs. In fact, in that case I could ask directly to get a proof of injectivity / surjectivity / continuity of inverse for the map $\psi$ from the "complement of tangent space to automorphism group in the domain of gluing map" to the moduli space. I couldn't see much of an explanation in [FOn] in this case without Deligne-Mumford parameters.'' \par\medskip Our reply to it (which was sent at the same time as \cite{fooo:ans34}) was as follows: \par\medskip The Banach manifold we use to define the smooth (or $C^m$) structure is that of $C^m$ maps from a compact subset of the Riemann surface to $X$. (See the proof of \cite[Lemma 2.133]{fooo:ans34} = Lemma \ref{2120lem} and \cite[Section 3.2]{fooo:ans34} = Section \ref{toCinfty}.)
\footnote{More precisely we take a product with an appropriate Deligne-Mumford moduli space (that includes the gluing parameter).} We may also use the space of $L^2_m$ maps. Because of elliptic regularity there is no difference which one we use. The Fredholm operator appearing in the gluing construction is \cite[(2.247)]{fooo:ans34} = (\ref{lineeqstep0vvv}). This is not exactly the linearization operator of the equation. We still have extra coordinates while solving the equation. The correct moduli space is obtained by cutting the solution space down by imposing some constraints. The injectivity / surjectivity / continuity of the inverse of the map $\psi$ is proved in \cite[Proposition 2.102]{fooo:ans34} = Proposition \ref{charthomeo}. (More precisely, injectivity is proved in \cite[Lemma 2.106]{fooo:ans34} = Lemma \ref{injectivitypp}, surjectivity in \cite[Lemma 2.105]{fooo:ans34} = Lemma \ref{setisopen}, and continuity in \cite[Lemma 2.108]{fooo:ans34} = Lemma \ref{injectivitypp3}, respectively.) Its proof uses the injectivity / surjectivity of the thickened moduli space, which is a part of \cite[Theorem 2.70]{fooo:ans34} = Theorem \ref{gluethm3}. (The proof of that injectivity / surjectivity is given in \cite[Subsection 1.5]{fooo:ans34} = Section \ref{surjinj}.) The automorphism group of the domain is killed by adding marked points to the source. \par\medskip After the discussion at the google group resumed, we were asked by Wehrheim and McDuff to explain the following two points. (There were a few other points, but the two points below were, we think, the major points of their concern at that time.) When we set up the equation (\ref{definingeq}), we use the additional marked points $w'_{cj}$ on the source $\Sigma'$ of the map $u'$ there. Therefore the thickened moduli space also involves the added marked points and has dimension bigger than the given virtual dimension by $2 \times \#\{w'_{cj}\}$.
To kill off this extra dimension, we put the constraint \begin{equation}\label{transversalconstr} u'(w'_{cj}) \in \mathcal D_{cj} \end{equation} on the choice of $w'_{cj}$. \par\medskip \begin{enumerate} \item[(A)] The question was whether this constraint equation is transversal or not. \end{enumerate} \par\medskip We replied that the transversality of this equation had been proved in the course of the proof of Lemma \ref{transstratasmf} etc. \par The right hand side $E_{\frak p}$ of the equation (\ref{definingeq}) actually depends on $u'$. Let us write it as $E_{\frak p}(u')$. \begin{enumerate} \item[(B)] The question was whether this forms a smooth vector bundle when we vary $u'$ and the associated parameters (in an appropriate function space). \end{enumerate} \par\medskip Our answer (Aug. 2) was as follows: \par\medskip This point is remarked on in \cite[footnote 17 p. 77]{fooo:ans34} = footnote 23 in Section \ref{glueing} and also in \cite[Remark 1.39]{fooo:ans3} = Remark \ref{remark127}. Also, during the proof of the gluing analysis, we need to take the (second) derivative of the projection to the obstruction bundle. The projection to the obstruction bundle is calculated in Formula \cite[(1.39)]{fooo:ans3} = (\ref{form118}). We take its derivative in \cite[(1.40)]{fooo:ans3} = (\ref{DEidef}). The existence of this (and higher) derivatives is obvious from the explicit formula \cite[(1.39)]{fooo:ans3} = (\ref{form118}). In the proof of \cite[Lemma 1.22]{fooo:ans3} = Lemma \ref{mainestimatestep13}, which is the key estimate for Newton's method to work in the gluing analysis, the estimate of the second derivative of the projection to the obstruction bundle becomes necessary. More explicitly, it appears in \cite[(1.58)]{fooo:ans3} = (\ref{155ff}). \par However, since we were specifically asked to provide more details, we replied further. We reproduce our reply in Subsection \ref{smoothness}.
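To illustrate the kind of explicit formula from which such differentiability is evident, here is a schematic sketch in our own notation (the basis $\{e_a\}$, the transport map $P_{u'}$ and the Gram matrix $(g_{ab})$ below are illustrative notation of ours, not the notation of the cited formulas). If $\{e_a\}$ is a basis of $E_c$ and $P_{u'} : E_c \to \Gamma(\Sigma';(u')^*TX\otimes \Lambda^{0,1})$ denotes the identification given by the parallel transport described before, then the $L^2$ orthogonal projection to the transported subspace can be written as
$$
\Pi_{E_c(u')}(s) = \sum_{a,b} g^{ab}(u')\, \langle s, P_{u'}(e_a)\rangle_{L^2}\, P_{u'}(e_b),
\qquad
g_{ab}(u') = \langle P_{u'}(e_a), P_{u'}(e_b)\rangle_{L^2},
$$
where $(g^{ab})$ is the inverse matrix of $(g_{ab})$. Since $u' \mapsto P_{u'}(e_a)$ is given by parallel transport along minimal geodesics and hence depends smoothly on $u'$, the right hand side can be differentiated in $u'$ as many times as required. It is from an explicit formula of this kind that the existence of the (second) derivatives used in the gluing estimates follows.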
\par\medskip The fifth question of Wehrheim was: \par\medskip {\bf 5.)} How is equality of Floer and Morse differential for the Arnold conjecture proven? \begin{enumerate} \item Is there an abstract construction along the following lines: Given a compact topological space $X$ with continuous, proper, free $S^1$-action, and a Kuranishi structure for $X/S^1$ of virtual dimension $-1$, there is a Kuranishi structure for $X$ with $[X]^{\rm vir}=0$. \item How would such an abstract construction proceed? \item Let $X$ be a space of Hamiltonian Floer trajectories between critical points of index difference $1$, in which breaking occurs (due to lack of transversality). How is a Kuranishi structure for $X/S^1$ constructed? \item If the Floer differential is constructed by these means, why is it chain homotopy equivalent to the Floer differential for a non-autonomous Hamiltonian? \end{enumerate} The reply in \cite{Fu1} was as follows: \par\medskip (1)(2) I do not think it is possible in a completely abstract setting. At least I do not know how to do it. In a geometric setting such as the one appearing on page 1036 of \cite{FOn}, a Kuranishi structure is obtained by specifying the choice of the obstruction space $E_p$ for each $p$. We can take $E_p$ in an $S^1$ equivariant way, so the Kuranishi structure on the quotient $X/S^1$ is obtained, and it is the quotient of the one on $X$. An $S^1$ equivariant multisection can be constructed in an abstract setting, so if the quotient has virtual dimension $-1$ the zero set is empty. (3) We can take a direct sum of the obstruction bundles, the supports of which are disjoint from the points where two trajectories are glued. In the situation of (1) the obstruction bundle is $S^1\times S^1$ equivariant. The symmetry is compatible with the diagonal $S^1$ action nearby. (4) It is \cite[Theorem 20.5]{FOn}. \par\medskip We were then requested to explain in more detail. We sent \cite{fooo:ans5} to the google group, which contains the required detail.
It becomes Part \ref{S1equivariant} of this article. \par There were no further questions or objections to the points concerning Question 5) so far in the google group. \par\medskip At the time of writing this article (Sep. 10) all the questions or objections asked in the google group `Kuranishi' were answered, supplemented or confuted by us. \section{Explanation of various specific points in the theory} In this section we mention some of the points we were asked about concerning the foundation of the virtual fundamental chain/cycle technique and explain why they do not affect the rigor of the virtual fundamental chain/cycle technique. We have already discussed most of them in the main body of this article. The discussion in this section also mentions other methods for the problem and sometimes discusses special cases to clarify the idea. \subsection{A note on the germ of the Kuranishi structure} \label{gernkuranishi} In \cite{FOn} the notion of `germ of Kuranishi neighborhood' is defined as follows. Let $(V_p, E_p, \Gamma_p, \psi_p, s_p)$, $(V'_p, E'_p, \Gamma'_p, \psi'_p, s'_p)$ be Kuranishi neighborhoods centered at $p \in X$ as in Definition \ref{Definition A1.1}. We say that they are equivalent if there is a third Kuranishi neighborhood $(V''_p, E''_p, \Gamma''_p, \psi''_p, s''_p)$ together with: \begin{enumerate} \item Isomorphisms $$ \Gamma_p \cong \Gamma''_p \cong \Gamma'_p. $$ \item Equivariant open embeddings $$ V''_p \to V_p, \qquad V''_p \to V'_p. $$ \item Bundle maps $$ V''_p \times E''_p \to V_p \times E_p, \quad V''_p \times E''_p \to V'_p \times E'_p $$ which are fiberwise isomorphisms, cover the open embeddings in (2), and are equivariant. \item They are compatible with $\psi_p$, $s_p$ in an obvious sense. \end{enumerate} The equivalence class is called a germ of Kuranishi neighborhood. A similar germ version of coordinate change was also used.
Together with a compatibility condition, which is a `germ version' of Definition \ref{Definition A1.5} (2), the definition of Kuranishi structure was given in \cite{FOn}. \par The main trouble with this definition is as follows.\footnote{This point was mentioned by the first named author on 19th March 2012 in his post to the google group Kuranishi.} Using the open embeddings of (2) above we obtain a diffeomorphism between neighborhoods of $o_p$ in $V_p$ and in $V'_p$. However such a diffeomorphism is {\it not} unique. The germ of such diffeomorphisms at $o_p$ is not unique either. (The compatibility with $\psi_p$ implies that its restriction to the zero set of the Kuranishi map $s_p$ is unique.) \par We then consider the coordinate change. Suppose we have a coordinate change $(\hat\phi_{pq},\phi_{pq},h_{pq})$ from a Kuranishi neighborhood $(V_q, E_q, \Gamma_q, \psi_q, s_q)$ centered at $q$ to a Kuranishi neighborhood $(V_p, E_p, \Gamma_p, \psi_p, s_p)$ centered at $p$. (Here $q \in \text{\rm Im}\psi_p$.) \par We next take another pair of Kuranishi neighborhoods $(V'_q, E'_q, \Gamma'_q, \psi'_q, s'_q)$ and $(V'_p, E'_p, \Gamma'_p, \psi'_p, s'_p)$ of $q$ and $p$ respectively. \par We assume that $(V'_q, E'_q, \Gamma'_q, \psi'_q, s'_q)$ and $(V'_p, E'_p, \Gamma'_p, \psi'_p, s'_p)$ are equivalent to $(V_q, E_q, \Gamma_q, \psi_q, s_q)$ and $(V_p, E_p, \Gamma_p, \psi_p, s_p)$, respectively. \par Then if we choose diffeomorphisms as in item (2) above between \begin{enumerate} \item[(a)] A neighborhood of $o_p$ in $V_p$ and a neighborhood of $o_p$ in $V'_p$, \item[(b)] A neighborhood of $o_q$ in $V_q$ and a neighborhood of $o_q$ in $V'_q$, \end{enumerate} we can use them together with $(\hat\phi_{pq},\phi_{pq},h_{pq})$ to find a coordinate change $(\hat\phi'_{pq},\phi'_{pq},h'_{pq})$ from $(V'_q, E'_q, \Gamma'_q, \psi'_q, s'_q)$ to $(V'_p, E'_p, \Gamma'_p, \psi'_p, s'_p)$.
\par The problem lies in the fact that this induced coordinate change $(\hat\phi'_{pq},\phi'_{pq},h'_{pq})$ {\it does} depend on the choice of the diffeomorphisms (a), (b) above. It depends on this choice even if we consider a small neighborhood of $o_q$. \par As a consequence, it is hard to state the compatibility condition between two different coordinate changes in a way independent of the choice of representatives of the germs of Kuranishi neighborhoods. \par {So the meaning of the statement that coordinate changes are compatible, when phrased in the language of germs, is ambiguous as written in \cite{FOn}. This is indeed a mathematical error of \cite{FOn}.} {\begin{rem} At the time of our writing of \cite[Appendix]{fooo:book1}, however, we had become aware of the danger of the notion of germs of Kuranishi neighborhoods etc. This was the reason why we rewrote the definition of the Kuranishi structure therein so that it does not involve the notion of germs. However our understanding of the above mentioned problem at that time was not as complete as now. That is the reason why we did not mention it in \cite{fooo:book1}. \end{rem}} \par A correct way of clarifying this point, which was adopted in \cite{fooo:book1} and in this article, is as follows. We take and fix representatives of Kuranishi neighborhoods for each $p$. We also fix a choice of coordinate changes between the Kuranishi neighborhoods, which are representatives of coordinate changes between the Kuranishi neighborhoods of each $p$ and $q \in \psi_p(s_p^{-1}(0))$. Here we fix a coordinate change, {\it not} its germ. Then the compatibility between coordinate changes has a definite meaning without ambiguity. \par Note that the representatives of Kuranishi neighborhoods and coordinate changes were taken and fixed in the proof of the existence of the good coordinate system in \cite{FOn}, as mentioned explicitly in \cite[(6.14.2), (6.19.2), (6.19.4)]{FOn}.
Also the Kuranishi neighborhoods and coordinate changes between them appearing in the good coordinate system are the representatives but not the germs. (This point is emphasized in \cite[Remark 6.2]{FOn}.) For this reason the proof of existence of the good coordinate system in \cite[Lemma 6.3]{FOn} is correct notwithstanding the error mentioned above, if we take the definition of Kuranishi structure from \cite{fooo:book1}. The proof of the existence of a good coordinate system (Theorem \ref{goodcoordinateexists}), given in Section \ref{sec:existenceofGCS}, uses the same idea as the proof of \cite[Lemma 6.3]{FOn}, but contains more detail. \par\medskip Another issue was mentioned by D. Joyce on March 19 in the google group `Kuranishi' and also during the E-mail discussion with the first named author. \par Let us consider a germ of coordinate change from a Kuranishi neighborhood of $q$ to one of $p$. The issue is that we can't regard $p$ and $q$ as fixed: since we can always make $V_p$ smaller in the germ, for fixed $p$ and $q$ you can always make $V_p$, $V_q$ smaller in their germs so that they do not overlap, and there is nothing to transform.\footnote{This sentence is copied from D. Joyce's post on March 19 to the google group Kuranishi.} \par It seems that we can get around this problem by carefully choosing the way of defining the notion of the germ of coordinate change etc. However, since we have already modified the definition and eliminated the notion of germ because of the first point, we do not discuss this second issue any further. \par\medskip We remark that in his theory \cite{joyce2}, Joyce pushed the sheaf-theoretic point of view to its limit and, we believe, his approach gives more thorough and systematic answers to this problem.
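\par\medskip The non-uniqueness of the diffeomorphism in item (2) of the definition of equivalence above can be seen already in the following one-dimensional toy model, which we include only as an illustration (it does not appear in \cite{FOn}). Take $V_p = V'_p = V''_p = \mathbb{R}$, trivial group $\Gamma_p$, the trivial line bundle as obstruction bundle, and $s_p(x) = x^2$, so that $s_p^{-1}(0) = \{0\}$. Both
$$
\phi(x) = x \qquad \text{and} \qquad \phi'(x) = x + x^3
$$
are open embeddings near $0$. The bundle map $(x,e) \mapsto (\phi'(x), (1+x^2)^2 e)$ covers $\phi'$ and is compatible with the Kuranishi map, since $s_p(\phi'(x)) = (x+x^3)^2 = (1+x^2)^2 s_p(x)$. The two embeddings agree on the zero set $\{0\}$ but have different germs at $0$, and composing a fixed coordinate change with $\phi$ or with $\phi'$ yields different induced coordinate changes.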
On the other hand, our purpose here is to provide the shortest rigorous way of constructing virtual fundamental cycles/chains in the situation where we apply the moduli spaces of pseudo-holomorphic curves etc. to symplectic geometry, mirror symmetry, and so on. The way taken in \cite{fooo:book1} and in this article lies rather at the opposite end of \cite{joyce2} in this respect. We restrict our attention to effective orbifolds and then restrict maps between them to embeddings. In this way, we can study orbifolds and maps between them in a way as close as possible to the way we study manifolds and smooth maps between them. In \cite{joyce2} the most general case of orbifolds and morphisms between them is included. To handle such a general case, the language of 2-category is systematically used in \cite{joyce2}. \footnote{As we mentioned before this seems to be the correct way to study Kuranishi space, {\it as its own}, which is {\it not} our purpose here.} Part \ref{Part2} of this article contains the construction of virtual fundamental chains/cycles and the proof of their basic properties. The proof we provide is, in our opinion, strictly more detailed than is required in standard research articles. Nevertheless it is much shorter than other articles discussing similar points using different approaches. \subsection{Construction of the Kuranishi structure on the moduli space of pseudo-holomorphic curves} \label{subsec342} One of the objections to the virtual fundamental chain/cycle technique, which we (mainly indirectly) heard of before 2009 and sometimes after that, is about the construction (or existence) of the (smooth) Kuranishi structure on the moduli space of pseudo-holomorphic curves in various situations. \par Most of this problem is of an analytic nature.
The main point we heard of is about the gluing (or stretching the neck) construction and the smoothness of the resulting coordinate changes between the Kuranishi neighborhoods obtained by the gluing (or stretching the neck) construction. \par We mentioned this point in the introduction and have provided one analytical way of constructing a \emph{smooth} Kuranishi structure with corners in detail in Parts \ref{secsimple} and \ref{generalcase} of the present article. \par Here we discuss this point again, at the same time mentioning another, more geometric way of constructing such a structure. \par There is a well-established technique of extracting the moduli space as a manifold with boundary in certain circumstances. It was used by Donaldson in gauge theory (in his first paper \cite{Don83}, to show that the 1-instanton moduli space of ASD connections on a 4-manifold $M$ with $b_2^+ = 0$ has $M$ as a boundary). According to this method one takes a certain parameter (that is, the degree of concentration of the curvature in the case of the ASD equation, and the parameter $T$ in the situation of Part \ref{secsimple}). We consider the locus where that parameter $T$ equals a sufficiently large value, say $T_0$. We throw away the part where $T > T_0$. Then the slice $T=T_0$ becomes the boundary of the `moduli space' we obtain. This was carried out in more detail in the book by Freed and Uhlenbeck \cite{freedUhlen} in the gauge theory case. Abouzaid used this technique in his paper \cite{Abexotic} about exotic spheres in $T^*S^n$, including the case of corners. At least as far as the results in \cite{FOn} are concerned we can use this technique and prove all the results in \cite{FOn}, since we have only to study moduli spaces of virtual dimension 0 and 1. In other words we can use something like Theorem \ref{gluethm1} for a large fixed $T$, but do not need to estimate the $T$-derivative or study the behavior of the moduli space at $T=\infty$. \par {The reason is as follows.
Suppose that the moduli space in consideration carries a Kuranishi structure of virtual dimension $1$ or $0$. Then, when we consider the corners of codimension $2$ or higher, the restriction of the moduli space to that corner has negative virtual dimension. So after a generic multivalued perturbation the zero set on the corner becomes empty. So all we need is to extend the multivalued perturbation to a neighborhood of the corner. We remark that a $C^0$ extension is enough for this purpose.} \par For the case of a moduli space of virtual dimension 1, after generic perturbation we have isolated zeros of the perturbed Kuranishi map on the boundary. So, for large $T_0$, Theorem \ref{gluethm1} or its analogue implies that the set of zeros on the `boundary' $T=T_0$ is in one-to-one correspondence with that of the actual boundary ($T=\infty$). Because of this, we do not need to carefully examine what happens in a neighborhood of the set $T=\infty$. All we need is to extend this given perturbation at $T=T_0$ to the interior. \par We also remark that it is unnecessary to show differentiability of the coordinate change at this boundary. This is because the zero set at the boundary consists of isolated points. \par This argument is good enough to establish all the results in \cite{FOn}. \par As we mentioned explicitly in \cite[page 978 line 13]{FOn}, our argument there, on the analytic points, is basically the same as in \cite{McSa94}. (Let us remark, however, that the proof of `surjectivity' written in \cite[Section 14]{FOn} is slightly different from the one in \cite{McSa94}.) So the novelty of \cite{FOn} does {\it not} lie in the analytic points but in its general strategy, that is \begin{enumerate} \item Define some general notion of `spaces' that contain various moduli spaces of pseudo-holomorphic curves as examples and work out the transversality issue in that abstract setting. \item Use multivalued abstract perturbation, which we call multisection.
\end{enumerate} \par\medskip When we go beyond that and prove results such as those we had proved in \cite{fooo:book1}, we need to study moduli spaces of higher virtual dimension and study chain level intersection theory. In that case we are not sure whether the above-mentioned technique is enough. (It may work. But we did not think enough about it.) This is not the way we took in \cite{fooo:book1}. \par Our method in \cite{fooo:book1} was to use the exponential decay estimate (\cite[Lemma A1.59]{fooo:book1}) and to use $s = 1/T$ as the coordinate in the normal direction to the stratum to define smooth coordinates of the Kuranishi structure. {Here $T$ is the gluing parameter arising from the given analytic coordinate $z$ centered at the puncture associated to the marked point. More specifically, we have $T = -\ln |z|$ (and hence $s = -1/\ln|z|$).} We refer to \cite[Subsection A1.4]{fooo:book1} and \cite[Subsection 7.1.2]{fooo:book1}, where this construction is written. \par In Parts \ref{secsimple} and \ref{generalcase}, we provide more details of how to use the alternating method to construct a smooth chart at infinity, following the argument in \cite[Subsection A1.4]{fooo:book1}. \subsection{Comparison with the method of \cite[Sections 12-15]{FOn} and of \cite[Appendix]{FOn}} \label{comparizon} \par During the construction of the Kuranishi neighborhood of each point in the moduli space of pseudo-holomorphic curves, we need to kill the automorphisms of the source curve and fix its parametrization. \footnote{This subsection is mostly a copy of our post to the google group Kuranishi on August 3.} Let us call this process normalization in this subsection. \par In \cite{FOn} we provided two different normalizations. One is written in Sections 12-15 and the other is in the Appendix. Here is some explanation of the difference between the two approaches. There are two steps for this normalization process.
\smallskip \par {\bf Step (1) } To fix a coordinate or parametrization of the source. {\bf Step (2) } To kill the ambiguity of the parametrization in Step (1). \smallskip \par The difference between the two techniques mainly lies in Step (2). The technique of the appendix is certainly {\it not} our invention. It was used by many people, as we quoted in Remark \ref{rem161}. We believe it was already a standard method when we wrote it in 1996. In both techniques, we need to fix a parametrization of the source curve. (Step (1).) In case we construct a Kuranishi chart locally, this is not a serious matter because we fix a parametrization of the source at the center of the chart and use it to fix a parametrization nearby. Transferring obstruction bundles centered at some points to another point (on which we want to define an obstruction bundle) is a more serious issue. This point is discussed in \cite[Section 15]{FOn}. In both techniques we stabilize the curve by {\it adding marked points}. In \cite[Sections 12-15]{FOn} it is written on page 989. There we add marked points to reduce the isotropy group to a finite group. (It is the isomorphism in (13.19).) In the appendix it is written on page 1047. \par The difference between the two techniques lies in the way we kill the ambiguity (Step (2) above). Namely, there are more parameters than the expected dimension of the Kuranishi neighborhood. \par In \cite[Sections 12-15]{FOn} it is done by {\it requiring the local minimality of the function} `meandist' (15.5). Namely, we require the map $u'$ to be as close as possible to $u$ in the given parametrization. (Here $u$ is the map part of the object that is the center of the chart and $u'$ is the map part of the object that is a general element of the Kuranishi neighborhood.) This is a version of the technique called the {\it center of mass technique} which was discovered by K. Grove and H. Karcher in Riemannian geometry \cite{GrKa}.
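\par\medskip Schematically (we do not reproduce the precise formula (15.5) here), the center of mass technique singles out the parametrization by a minimization of the form
$$
g_0 = \operatorname{arg\,min}_{g} \int_{\Sigma} d\bigl(u'(g(z)), u(z)\bigr)^2 \, \mathrm{vol}_{\Sigma},
$$
where $g$ runs over the reparametrizations of the source: among all parametrizations of $u'$ we take the one locally minimizing the mean squared distance to $u$. The local minimality condition plays the role that the transversal conditions play in the technique of the appendix.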
Probably using this technique in this situation was new and was not so standard. \par On the other hand, in \cite[Appendix]{FOn}, the ambiguity is killed by {\it putting codimension $2$ submanifolds and requiring that the marked points land on those submanifolds}. (As we mentioned above, this was already a standard technique by 1996.) \par Both techniques work. But later we mainly used the technique given in the appendix. For example, in \cite[p424]{fooo:book1} we wrote as follows. \par\bigskip For the case where $(\Sigma, \vec{z})$ is unstable, we use Theorem 7.1.44 and proceed as follows. (The argument below is a copy of that in Appendix [FuOn99II]\footnote{The reference [FuOn99II] in \cite{fooo:book1} is just \cite{FOn} in the current article.}. There is an alternative argument which will be similar to Section 15 [FuOn99II].) We add some interior marked points $\vec{z}^+$ so that $(\Sigma, \vec{z}, \vec{z}^+)$ becomes stable. We may also assume the following (7.1.48.1) Any point $z_i^+$ lie on a component where $w$ is non-trivial. (7.1.48.2) $w$ is an immersion at each point of $\vec{z}^+$. By the same reason as in the appendix [FuOn99II], we can make this assumption without loss of generality. We choose $Q_i^{2n-2} \subset M$ (a submanifold of codimension 2) for each $z_i^+ \in \vec{a}^+$ such that $Q_i^{2n-2}$ intersect with $w(\Sigma)$ transversally at $w(z_i^+)$. ..... \par\medskip There is a similar sentence on page 566 of the year-2006 preprint version of our book. \par Let us add a few words about the use of `slice' in \cite[Section 12]{FOn}. There we first consider the space of maps with the parameter of the source fixed. Then we obtain a family of solutions (of the nonlinear Cauchy-Riemann equation modulo the obstruction bundle) parametrized by a finite dimensional space. (That is $V^+_{\sigma}$, which appears three lines above \cite[Theorem 12.9]{FOn}.) This part is \cite[Proposition 12.23]{FOn}.
This space is not the correct Kuranishi neighborhood since it has too many parameters. (The extra parameters correspond to the automorphism group of the source, which is {\it finite dimensional}, though.) We take a slice at this stage. Namely we use $V'_{\sigma}$ in place of $V^+_{\sigma}$. (This is \cite[Lemma 12.24]{FOn}, which comes after \cite[Proposition 12.23]{FOn}.) So when we take a slice, the space in question is {\it already} a solution space of the elliptic PDE and so consists of smooth maps. The reparametrization etc. is obviously smooth. From this point of view, the situation is essentially different from gauge theory. When we consider the set of solutions of the ASD equation (without gauge fixing condition), we will get some {\it infinite dimensional space} and its elements {\it may not be smooth} because of the lack of ellipticity. Dividing it by the infinite dimensional gauge transformation group is indeed a nontrivial analytic problem. So usually people study the process of gauge fixing and solving the ASD equation at the same time. Then analysis is certainly an issue at that stage. \par\medskip Note that the action of the group of diffeomorphisms is mentioned at the beginning of \cite[Section 15]{FOn}. If one reads this part conscientiously, one finds that it occurs during the heuristic explanation of why some approach (especially the same approach as gauge theory) has trouble and is {\it not} taken in \cite[Section 15]{FOn}. The infinite dimensional group of diffeomorphisms appears only here in \cite{FOn} and so it does not appear in the part where the actual proof is performed. It is written in \cite{FOn}, lines 17--13 from the bottom of page 999: \par\bigskip \dots one may probably be able to prove a slice theorem \dots. However, because of the trouble we mentioned above, we do not use this infinite dimensional space and work more directly without using infinite dimensional manifold.
\par\bigskip Here it is written clearly that we do not use a slice theorem in \cite[Sections 12-15]{FOn}. \footnote{We are informed that several people discuss a problem of the foundation of the virtual fundamental chain/cycle technique based on the fact that the action of the group of diffeomorphisms ${\rm Diff}(\Sigma)$ on $L^p_1(\Sigma,X)$ is not differentiable. Sometimes it is said that one cannot take a slice of this group action because of the lack of differentiability. As we mentioned here, we {\it never} take a slice of this group action in our approach via the Kuranishi structure.} Since in this article we provided the details using the technique that kills the ambiguity by transversals and not by the center of mass, and since writing both techniques in such detail ($>$100 pages) would be too much, we do not think it necessary, for our purpose, to discuss this comparison any longer. Our purpose is to clarify the soundness of the virtual fundamental cycle or chain technique based on Kuranishi structure and multisection. \subsection{Smoothness of family of obstruction spaces $E(u')$} \label{smoothness} In the google group Kuranishi, we were asked by McDuff about the smoothness of the obstruction bundle $E_{\frak p}(u')$ which appears in the right hand side of (\ref{definingeq}), for example. (Here $u'$ runs in an appropriate $L^2_{m}$ space of maps.) The discussion in the google group ended with the agreement that this statement (that is, that $u' \mapsto E_{\frak p}(u')$ is a smooth family of vector subspaces) is correct. Here we reproduce the pdf files that we uploaded to the google group Kuranishi on August 10 and 12. They explain the proof in detail in the particular case of the moduli space $\mathcal M_1(A)$ of holomorphic maps $u : S^2 \to X$ of homology class $A$, where $A$ is primitive. (Namely there are no nonconstant holomorphic maps $u_1, u_2 : S^2 \to X$ such that $u_{1*}[S^2] + u_{2*}[S^2] = A$.) Here the subscript $1$ means that we consider one marked point ($= 0$).
In other words, we identify two maps $u,u'$ if there exists a biholomorphic map $v : S^2 \to S^2$ such that $u\circ v = u'$ and $v(0) =0$. In this case there is neither a bubble nor a nontrivial automorphism. (\emph{This is the case to which \cite{MW1} restrict themselves}.) We first explain the construction of the Kuranishi chart in this special case. {It is our opinion that these two posts essentially take care of all the cases in applications that the content of \cite{MW1} can handle.} \par\medskip {\bf [The post on August 10]} \par Suppose one has $u_{c(1)} : S^2 \to X$ and $u_{c(2)} : S^2 \to X$, for which we take obstruction bundles $E_1$ and $E_2$. \footnote{$E_1$ is a finite dimensional vector space of smooth sections of $u_{c(1)}^*TX \otimes \Lambda^{01}$. We assume the supports of its elements are away from singular or marked points.} Also we take $D_{i,1}$, $D_{i,-1}$, which are codimension 2 submanifolds of $X$ that are transversal to $u_{c(i)}$ at $1 \in S^2$ and $-1 \in S^2$, respectively. (We assume $u_{c(i)}$ is an immersion at $\pm 1$.) Let $u : S^2 \to X$ be a third map to which we want to transfer $E_1$ and $E_2$. (In other words $u$ is a pseudo-holomorphic map which will be the center of the Kuranishi chart we are constructing.) By assumption, $u g_1$ is $C^0$ close to $u_{c(1)}$ and $u g_2$ is $C^0$ close to $u_{c(2)}$. Here $g_1,g_2 \in Aut(S^2,0)$. (We are studying $\mathcal M_1(A)$ and $0 \in S^2$ is the marked point. So $g_i(0) = 0$.) But $g_1$ and $g_2$ can be very far away from each other. We require $u(g_i(1)) \in D_{i,1}$, $u(g_i(-1)) \in D_{i,-1}$. This condition (together with the $C^0$ closeness) determines $g_1$, $g_2$ uniquely. (Actually $u g_i$ is $C^1$ close to $u_{c(i)}$ since they are pseudo-holomorphic. We use this $C^1$ closeness to show the uniqueness. This point was already discussed before.) The map $u$ is $C^0$ close to both $u_{c(1)} g_1^{-1}$ and $u_{c(2)} g_2^{-1}$.
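\par\medskip The dimension count behind this uniqueness is worth recording (this bookkeeping is only implicit in the post above). The group $Aut(S^2,0)$ of biholomorphic maps of $S^2$ fixing $0$ has real dimension $4$: the M\"obius group has real dimension $6$, and fixing one point cuts it down by $2$. The conditions
$$
u(g_i(1)) \in D_{i,1}, \qquad u(g_i(-1)) \in D_{i,-1}
$$
are $2 + 2 = 4$ independent real equations by the transversality of $D_{i,\pm 1}$, matching $\dim Aut(S^2,0) = 4$. So these conditions determine $g_i$ rigidly, and the $C^1$ closeness then gives the uniqueness stated above.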
The obstruction spaces $E_i$ are vector spaces of smooth sections of $u_{c(i)}^* TX \otimes \Lambda^{01}$. $g_i^{*}$ transforms them to a vector space of sections of $(u_{c(i)} g_i^{-1})^* TX \otimes \Lambda^{01}$, which we write $g_i^{*}E_i$. We use parallel transport to send $g_i^{*}E_i$ to a vector subspace of sections of $u^*TX \otimes \Lambda^{01}$. In case we want to construct a Kuranishi chart centered at $u$ we proceed as follows. We choose $D_{2}, D_{-2}$, codimension 2 submanifolds of $X$ which intersect transversally with $u$ at $u(2)$ and $u(-2)$ respectively. We consider $u'$, a smooth map $S^2 \to X$ that is $C^{10}$ close to $u$ (namely $\vert u-u'\vert_{C^{10}} < \epsilon_0$). We do not assume $u'$ is pseudo-holomorphic and will transfer the obstruction bundles to $(u')^*TX \otimes \Lambda^{01}$. We use four more marked points $w_{c(i),1}$, $w_{c(i),-1}$. We assume $w_{c(i),1} \in S^2$ is $\epsilon_{0}$-close to $g_i(1)$ and $w_{c(i),-1} \in S^2$ is $\epsilon_{0}$-close to $g_i(-1)$. \footnote{ $w_{c(1),1} = w_{c(2),1}$ may occur. But it does not cause a problem. See the paragraph starting at the end of page 55. (The paragraph starts with `A technical point to take care of is ...'.)} ($\epsilon_0$ is a small number depending on $u$.) There exists $g_i' \in Aut(S^2,0)$ such that $g'_i(1) = w_{c(i),1}$, $g'_i(-1) = w_{c(i),-1}$. Such $g_i'$ is unique and $g'_i - g_i$ is small. (More precisely it is estimated by $o(\epsilon_0\vert \epsilon_{c(i)})$. Here $o(\epsilon_0\vert \epsilon_{c(i)})$ depends on $\epsilon_0, \epsilon_{c(i)}$ and goes to zero as $\epsilon_0$ goes to zero for each fixed $\epsilon_{c(i)}$.) We use $g'_1$, $g'_2$ (which are determined by $u'$, $w_{c(i),\pm 1}$ in the above way) to obtain $E_i(u';\vec w)$. (Here $\vec w$ is the 4 points $(w_{c(i),\pm 1})$.) Namely we use parallel transport to send $(g'_i)^{*}E_i$ to a space of sections of $(u')^*TX \otimes \Lambda^{01}$, which is possible by the $C^0$ closeness.
\footnote{$u$ is $\epsilon_{c(i)}$ close to $u_{c(i)}g_i^{-1}$. $u'$ is $\epsilon_0$ close to $u$, and $g'_i - g_i$ is $o(\epsilon_0\vert \epsilon_{c(i)})$. All in the $C^0$ sense. We first choose $\epsilon_{c(i)}$ small and then choose $\epsilon_0$ so that the sum $\epsilon_{c(i)} + \epsilon_0 + o(\epsilon_0\vert \epsilon_{c(i)})$ is small enough.} We may assume $E_1(u';\vec w) \cap E_2(u';\vec w) = \{0\}$, since we may assume $$ E_1(u;(g_i(\pm1))) \cap E_2(u;(g_i(\pm1))) = \{0\} $$ by Lemma \ref{transbetweenEs} and $E_1(u';\vec w) \cap E_2(u';\vec w) = \{0\}$ is an open condition (with respect to the $C^0$ topology). (Note in case $u' = u$ and $\vec w = (g_i(\pm1))$ we have $g'_i = g_i$.) Now we apply the implicit function theorem to the equation \begin{equation}\label{mainaquation34} \overline{\partial} u' \equiv 0 \mod E_1(u';\vec w) \oplus E_2(u';\vec w) \end{equation} \noindent to obtain the thickened moduli space. (It is the set of $(u',\vec w)$ satisfying this equation.) It is a smooth manifold. \footnote{The surjectivity of the linearized equation is OK, since it is OK at $u$ and we may choose $\epsilon_0$ depending on $u$.} \footnote{If $[u] \in \mathcal M_1(A)$ is a stable map with a nontrivial automorphism, then it is an orbifold.} The dimension of the thickened moduli space is greater than the correct dimension of the Kuranishi neighborhood. The difference is 12. ($4 = \dim Aut(S^2,0)$ and each $w_{i,\pm 1}$ gives 2 extra dimensions.)\footnote{We correct a misprint in our google post here.} We cut down the dimension of the thickened moduli space by imposing the constraints \begin{equation}\label{trans} \aligned &u'(2) \in D_2, \quad u'(-2) \in D_{-2},\\ &u'(w_{i,1}) \in D_{i,1}, \quad u'(w_{i,-1}) \in D_{i,-1}.
\endaligned \end{equation} The condition in the first line kills the $ Aut(S^2,0)$ ambiguity of $u'$ and the condition in the second line kills the ambiguity of the choice of $\vec w$.\footnote{We correct a misprint in our google post here.} These are 12 independent equations. ($12 = 6 \times \text{codim}\, D$.) After cutting down the dimension of the thickened moduli space by (\ref{trans}) we obtain the required Kuranishi neighborhood. (Transversality of the equation (\ref{trans}) is OK by choosing $\epsilon_0$ small, since it is OK at $u$.) \par\medskip This PDF file was an answer to a question of K. Wehrheim. Then McDuff asked a question about the smoothness of the right hand side of (\ref{mainaquation34}). The next part is a reproduction of our answer to it. \par\medskip {\bf [The post on August 12]} \par The equation $$ \overline{\partial} u' \equiv 0 \mod E_1(u';\vec w) \oplus E_2(u';\vec w) $$ is an elliptic PDE. More precisely it is a family of elliptic PDEs with parameter $\vec w$ and $m = \dim E_1 + \dim E_2$ extra parameters $a_i$, which we explain below. We rewrite the equation as $$ \overline{\partial} u' + \sum a_i e_i(u',\vec w) = 0 $$ where $\{e_i(u',\vec w)\}$ is a basis of $E_1(u';\vec w) \oplus E_2(u';\vec w)$. (This is an equation for $u'$. Its parameters are $\vec w$ and $a_i$.) The coefficients of this elliptic PDE depend smoothly on the parameters $\vec w$, $a_i$. (See the argument below which shows that $e_i(u',\vec w)$ is smooth both in $u'$ and $\vec w$.) So the solution space with $\vec w$ (and $a_i$) moving is a smooth manifold (if the surjectivity of its linearized equation is OK). Moreover the projection to the parameter space (especially $\vec w$) and all the evaluation maps are smooth on the solution space. This is a standard fact in the theory of elliptic PDE. \par\medskip Finally let us explain the smoothness of $e_i(u',\vec w)$ with respect to $u'$ and $\vec w$. Note $e_i(u',\vec w)$ is a member of a basis of $E_j(u';\vec w)$ for $j=1,2$.
The section $e_i(u',\vec w)$ is defined from a basis $e_i$ of $E_j$ in a way explained before. We explain it again below in a slightly different way so that the smoothness of $e_i(u',\vec w)$ becomes obvious. Hereafter we consider the case $j=1$. We put $v = u_{c(1)}g_1^{-1} : S^2 \to X$. (Note $u'$ is close to $v$ in the $C^0$ sense.) Let us take an open set $\mathcal V$ of $S^2 \times X$ that is $$ \mathcal V = \{ (z,x) \mid d(v(z),x) < 2\epsilon_0 + 2\epsilon_{c(1)}\}. $$ We define a vector bundle on $\mathcal V$ by $$ \frak E = \pi_2^*TX \otimes \pi_1^* \Lambda^{01}. $$ ($\pi_1$, $\pi_2$ are the projections from $S^2 \times X$ to the first and second factors.) Let $G$ be a small neighborhood of $g_1$ in $Aut(S^2,0)$. We pull back the bundle $\frak E$ to $\mathcal V \times G$ by the projection $\mathcal V \times G \to \mathcal V$. We denote it by $\frak E \times G$. Let us define a smooth section $\hat e_i$ of it as follows. Let $g'_1 \in G$. We consider the composition $u_{c(1)} \circ (g'_1)^{-1}$ and define $$ (u_{c(1)} \circ (g'_1)^{-1})^+: S^2 \to \mathcal V $$ by $$ z \mapsto (z,(u_{c(1)} \circ (g'_1)^{-1})(z)). $$ We identify $S^2$ with the image $(u_{c(1)} \circ (g'_1)^{-1})^+(S^2)$. The restriction of $\frak E \times G$ to this $S^2$ is $(u_{c(1)} \circ (g'_1)^{-1})^*TX \otimes \Lambda^{01}$. To this bundle we transform the section $e_i$ of $u_{c(1)}^*TX \otimes \Lambda^{01}$ by $(g'_1)^{*}$. (This is what we did before.) We then extend it to a section on $\mathcal V \times \{g'_1\}$ by parallel transport in the $X$ direction along geodesics with respect to an appropriate connection on $X$. \par Thus for each $g'_1$ we have a section of $\frak E\times \{g'_1\}$ on $\mathcal V\times \{g'_1\}$. Moving $g'_1$, we have a section of the bundle $\frak E \times G$ on $\mathcal V \times G$. We denote it by $\hat e_i$. It is obvious that $\hat e_i$ is smooth. Let $W$ be the parameter space of $\vec w$, which is a finite dimensional manifold.
Let $g'_1(\vec w)$ be the biholomorphic map sending $\pm 1$ to $w_{1,\pm 1}$. It depends smoothly on $\vec w$. We define $$ \hat u' : S^2 \times W \to \mathcal V \times G $$ by $$ (z,\vec w) \mapsto (z,u'(z),g'_1(\vec w)). $$ \par $e_i(u',\vec w)$ coincides with the composition $\hat e_i \circ \hat u'$. So it is smooth in $u'$ and $\vec w$. \subsection{A note about \cite[Section 12]{FOn}} \label{sec12FO} \par This is a note related to \cite[Remark 4.1.3]{MW1}. We think this remark is related to \cite[Lemma 12.24 and Proposition 12.25]{FOn}. \par\medskip Following \cite{MW1} we restrict our explanation to the case of $\mathcal M_{1}(A)$, the moduli space of pseudo-holomorphic spheres with one marked point and of homology class $A$, such that there are no nonconstant pseudo-holomorphic spheres $u_1,u_2$ with $u_{1*}([S^2]) + u_{2*}([S^2]) = A$. (Therefore all the elements of $\mathcal M_{1}(A)$ are somewhere injective.) \par Let us describe the point which we think is the concern of \cite[Remark 4.1.3]{MW1}. We put $G = \text{Aut}(S^2,0)$, the group of biholomorphic maps $v : S^2 \to S^2$ such that $v(0) = 0$. Let $u : S^2 \to X$ be a pseudo-holomorphic map of homology class $A$. We are going to construct a Kuranishi neighborhood centered at $[u] \in \mathcal M_{1}(A)$. \par We take a finite dimensional space $E$ of smooth sections of $u^*TX \otimes \Lambda^{01}$. We assume that the image of $$ D_u\overline\partial : \Gamma(S^2;u^*TX) \to \Gamma(S^2;u^*TX\otimes \Lambda^{01}) $$ together with $E$ spans $\Gamma(S^2;u^*TX\otimes \Lambda^{01})$. \par We consider the operator \begin{equation}\label{01} \overline {D_u}\overline\partial : \Gamma(S^2;u^*TX) \to \Gamma(S^2;u^*TX\otimes \Lambda^{01})/E. \end{equation} \par Let $u' : S^2 \to X$ be a map which is $C^{10}$ close to $u$. We define $E(u') \subset \Gamma(S^2;(u')^*TX\otimes \Lambda^{01})$ by parallel transport from $u$.
(More precisely we use parallel transport of the tangent bundle $TX$ along the geodesic joining $u(z)$ and $u'(z)$ for each $z \in S^2$.) \par This map $u' \mapsto E(u')$ is {\it not} invariant under the $G$ action. \par Let $V$ be the set of solutions of \begin{equation}\label{Enonequiveqqq} \overline\partial u' \equiv 0 \mod E(u'), \end{equation} such that $u'$ is $\epsilon$-close to $u$ in the $C^{10}$ norm. The implicit function theorem and the assumption above imply that $V$ is a smooth manifold. \par On $V$ we have a vector bundle whose fiber at $u'$ is $E(u')$. This is a smooth vector bundle. We have a section $s$ of it such that $s(u') = \overline\partial u'$. \par $s^{-1}(0)$ maps to $\mathcal M_{1}(A)$. However this map is not injective since the $G$ action is not killed. \par Let us identify $T_uV$ with the kernel of (\ref{01}). \par We consider the Lie algebra $T_eG$. Since $u\circ g \in V$ for all $g$, we can embed $T_eG \subset T_uV$. Let $V'$ be a submanifold of $V$ such that $u\in V'$ and \begin{equation}\label{3502} T_uV' \oplus T_eG = T_uV. \end{equation} This $V'$ is the slice appearing in \cite[Section 12]{FOn}. (Note this slice is taken after we obtained a finite dimensional space.) We prove the following: \begin{prop}\label{prof11aaa} $u' \mapsto [u']$ induces a homeomorphism between $ (V' \cap B_{\epsilon}V) \cap s^{-1}(0) $ and a neighborhood of $[u]$ in $\mathcal M_{1}(A)$ for sufficiently small $\epsilon$. \end{prop} This proposition is a special case of \cite[Lemma 12.24 and Proposition 12.25]{FOn}. \begin{proof} Let $G_0$ be a small neighborhood of the identity in $G$. (We will describe how small it is later.) Let $L^2_m(S^2,X)$ be the space of $L^2_m$ maps from $S^2$ to $X$. We take $m$ huge. $L^2_m(S^2,X)$ is a Hilbert manifold and $V$ is a smooth finite dimensional submanifold of it. Let $N_VL^2_m(S^2,X)$ be a tubular neighborhood of $V$ in $L^2_m(S^2,X)$ and $\Pi : N_V(L^2_m(S^2,X)) \to V$ the projection. Let $V_0$ be a relatively compact neighborhood of $u$ in $V$.
We take $G_0$ small enough that if $u'\in V_0$ and $g \in G_0$ then $u'\circ g \in N_VL^2_m(S^2,X)$. We define $$ F : V_0 \times G_0 \to V $$ by $$ F(u',g) = \Pi(u'\circ g). $$ We remark that if $s(u') = 0$ then $F(u',g) = u'\circ g$. $F$ is a smooth map since $V_0$ consists of smooth maps. In fact the map $$ V_0 \times G_0 \to L^2_m(S^2,X) $$ defined by $(u',g)\mapsto u'\circ g$ is smooth. \begin{lem}\label{lemma2adddd} There exists a neighborhood $V_2$ of $u$ in $V$ with the following property. If $u' \in V_2$ then there exists $g \in G_0$ such that $F(u',g) \in V'$. \end{lem} \begin{lem}\label{lemma3adddd} There exist $\epsilon$ and $G_0$ such that the following holds. If $u' \in V' \cap B_{\epsilon}V$ and $F(u',g) \in V' $, then $g = 1$. \end{lem} \begin{proof}[Proof of Lemmas \ref{lemma2adddd} and \ref{lemma3adddd}] For each $u' \in V_0$ we put $$ G_0u' = \{ F(u',g) \mid g \in G_0\}. $$ (Note $(u',g) \mapsto F(u',g)$ is not a group action. So $G_0u'$ is not an orbit.) \par We may replace $G_0$ by a smaller neighborhood of the identity and take a small neighborhood $V_{00}$ of $u$ so that $g \mapsto F(u',g)$ is a smooth embedding of $G_0$ if $u' \in V_{00}$. By assumption (\ref{3502}) the submanifold $G_0u$ intersects $V'$ transversally at $u$ in $V$. So we may replace $G_0$ by a smaller neighborhood of the identity again such that $G_0u \cap V' = \{u\}$. \par Now since $u' \mapsto G_0u'$ is a smooth family of smooth submanifolds, $G_0u'$ intersects $V'$ transversally at one point if $u'$ is sufficiently close to $u$. This implies Lemmas \ref{lemma2adddd} and \ref{lemma3adddd}. \end{proof} If $[u'']$ is in a small neighborhood of $[u]$ in $\mathcal M_1(A)$ then by definition we may replace $u''$ and assume $u'' \in V_2$. Then there exists $g\in G_0$ such that $u' = F(u'',g)$ is in $V'$ by Lemma \ref{lemma2adddd}. Since $u''$ is pseudo-holomorphic, $u' = u''\circ g$. Namely $[u'] = [u'']$. Thus $ (V' \cap B_{\epsilon}V) \cap s^{-1}(0) $ goes to a neighborhood of $[u]$.
\par The injectivity of this map is immediate from Lemma \ref{lemma3adddd}. \par It is easy to see that this map from $(V' \cap B_{\epsilon}V) \cap s^{-1}(0)$ to an open set of $\mathcal M_1(A)$ is continuous. The proof that it is an open mapping is the same as the proof of the fact that the image of $ (V' \cap B_{\epsilon}V) \cap s^{-1}(0) $ is a neighborhood of $[u]$. \end{proof} Note that the reason why Proposition \ref{prof11aaa} is not completely trivial lies in the fact that equation (\ref{Enonequiveqqq}) is not $G$ invariant. In \cite[Section 15]{FOn}, we made a different choice of $E(u')$ using the center of mass technique so that $u\mapsto E(u)$ is $G$ equivariant. For that choice of $E(u')$, Proposition \ref{prof11aaa} is trivial to prove. \section{Confutations against the criticisms made in \cite{MW1} on the foundation of the virtual fundamental chain/cycle technique} \label{HowMWiswrong} In the article \cite{MW1} there are several criticisms of the earlier references on the foundation of the virtual fundamental chain/cycle technique. This section provides our confutations against those criticisms. \par We think such confutations are necessary for the following reason. There are various research projects in progress, by various people, based on the virtual fundamental chain/cycle technique. The authors of \cite{MW1} mention a plan to write a replacement of a part of the results in the existing literature. However, \cite{MW1} provides only the very beginning of their plan, and \cite{MW1} concerns only the case where the pseudo-holomorphic curves discussed are automatically somewhere injective, which is not applicable to any of the on-going research projects we mentioned above. Based on their writing, it appears that the authors of \cite{MW1} do not plan to study the chain level argument based on Kuranishi structures. See \cite[Page 5 Line 12-14 from bottom]{MW1}. Since the chain level argument is essential in many of the on-going research projects, we need something more than their planned `replacement'.
Therefore leaving the criticisms of \cite{MW1} unrefuted would cause serious confusion among the researchers in the relevant fields. Hence we have decided to provide our confutations against the criticisms displayed in \cite{MW1} as publicly as possible, to the same degree as the article \cite{MW1} itself. \par It is our understanding that an important and basic agreement of mathematical research is that researchers are free to use the results of published research papers (with appropriate citations) unless an explicit and specific gap or problem was pointed out to the author and the author failed to provide a reasonable answer or correction on that particular point. This agreement is a part of the foundation of the refereeing system of mathematical publications, on which the whole mathematical community depends. \par For this reason we explain in detail where the misunderstanding behind the criticisms of \cite{MW1} lies, and then make our confutations against them word by word. We do so only for the criticisms of \cite{MW1} directed against the papers written by the present authors. In \cite{MW1} there are also criticisms of other versions of the virtual fundamental cycle/chain technique. We found several problems there as well. However we restrict our confutations to the criticisms directed at the papers of the present authors. This is because the present authors do not have thorough knowledge of the other versions of the virtual fundamental cycle/chain technique, compared to that of their own. The page numbers etc. below are those of the version of \cite{MW1} that appeared as arXiv:1208.1340v1 on Aug. 7, 2012. We note that the date Aug. 7 was after we had posted all our detailed answers \cite{Fu1}, \cite{FOn2}, \cite{fooo:ans3}, \cite{fooo:ans34} and \cite{fooo:ans5} to K. Wehrheim's questions. Small type is used for quotations from \cite{MW1}. \par Note that similar criticisms appear repeatedly in \cite{MW1}. In such cases we repeat the same answer.
Although some of the quotations below may not be direct criticisms, we supplement them in order to clarify our mathematical points. \begin{enumerate} \item\label{MW1} Page 2 Line 13-14 \par {\small while some topological and analytic issues were not resolved} \par We will clarify below that all such issues have been resolved. \item\label{MW2} Page 2 Line 11-14 from the bottom \par {\small The main analytic issue in each regularization approach is in the construction of transition maps for a given moduli space, where one has to deal with the lack of differentiability of the reparametrization action on infinite dimensional function spaces discussed in Section 3.} \par Such an issue does not cause any problem in our proof, as we will explain below. See items (\ref{MW3}),(\ref{MW13}),(\ref{MW18}),(\ref{MW21})-(\ref{MW28}). \item\label{MW3} Page 4 Line 16-20 from the bottom \par {\small The issue here is the lack of differentiability of the reparametrization action, which enters if constructions are transferred between different infinite dimensional local slices of the action, or if a differentiable Banach manifold structure on a quotient space of maps by reparametrization is assumed. } \par Such an issue is irrelevant to, and not present in, our approach, since an infinite dimensional slice is never used. We explained this point in the second half of Subsection \ref{comparizon}. \item\label{MW4} Page 4 line 10 from below: \par {\small However, in making these constructions explicit, we needed to resolve ambiguities in the definition of Kuranishi structure, concerning the precise meaning of germ of coordinate changes and the cocycle condition discussed in Section 2.5.} \par This point had already been corrected in \cite{fooo:book1}. We explained this point in Subsection \ref{gernkuranishi}.
\item\label{MW5} Page 4 last 4 lines: \par {\small One issue that we will touch on only briefly in Section 2.2 is the lack of smoothness of the standard gluing constructions, which affects the smoothness of the Kuranishi charts near the nodal or broken curves. } \par This point had already been discussed in \cite{fooo:book1}. More detail had been given in \cite{fooo:ans3} and \cite{fooo:ans34}. They are basically the same as Parts \ref{secsimple} and \ref{generalcase} of this article. \item\label{MW6} Page 4 last line - Page 5 4th line: \par {\small A more fundamental topological issue is the necessity to ensure that the zero set of a transversal perturbation is not only smooth, but also remains compact as well as Hausdorff, and does not acquire boundary in the regularization. These properties nowhere addressed in the literature as far as we are aware, are crucial for obtaining a global triangulation and thus a fundamental homological class. } \par These points had been addressed in \cite{Fu1,FOn2}. They had been sent to the authors of \cite{MW1} as a reply to the question raised by the very person who wrote: `These properties nowhere addressed in the literature as far as we are aware'. \par In the opinion of the present authors, this point is not a fundamental issue but only a technical point. We leave each reader to see our proof given in Part \ref{Part2} and form his/her own opinion about it. In any case the correctness of the construction of the virtual fundamental chain/cycle is not affected at all, regardless of whether this point (which had already been resolved) is fundamental or not. \item\label{MW7} Page 5 Lines 4-6: \par {\small Another topological issue is the necessity of refining the cover by Kuranishi charts to a `good cover' in which the unidirectional transition maps allow for an iterative construction of perturbation.} \par This was proved in \cite[Lemma 6.3]{FuOn99I}.
Responding to the request of an author of \cite{MW1}, more detail of its proof had been provided in \cite{Fu1,FOn2}. \item\label{MW78} Page 5 Line 20-30 \par {\small The case of moduli spaces with boundary given as the fiber product of other moduli spaces, as required for the construction of $A_{\infty}$-structures, is beyond the scope of our project. It has to solve the additional task of constructing regularizations that respect the fiber product structure on the boundary. This issue, also known as constructing coherent perturbations, has to be addressed separately in each special geometric setting, and requires a hierarchy of moduli spaces which permits one to construct the regularizations iteratively. In the construction of the Floer differential on a finitely generated complex, such an iteration can be performed using an energy filtration thanks to the algebraically simple gluing operation. In more `nonlinear' algebraically settings, such as $A_{\infty}$ structures, one needs to artificially deform the gluing operation, e.g. when dealing with homotopies of data [Se].} \par This paragraph might be intended to cast a negative view on the existing literature which constructed $A_{\infty}$ structures, especially on \cite{fooo:book1}. However, since no mathematical problem or difficulty in the existing literature is mentioned, it is impossible for us to do anything other than ignore this paragraph. \item\label{MW8} Page 5 the last paragraph - Page 6 the first paragraph: \par {\small Another fundamental issue surfaced when we tried to understand how Floer's proof of the Arnold conjecture is extended to general symplectic manifolds using abstract regularization techniques. In the language of Kuranishi structures, it argues that a Kuranishi space $X$ of virtual dimension $0$, on which $S^1$ acts such that the fixed points $F\subset X$ are isolated solutions, allows for a perturbation whose zero set is given by the fixed points.
At that abstract level, the argument in both [FO] and [LiuT]\footnote{They are \cite{FOn} and \cite{LiuTi98} in the references of this article.} is that a Kuranishi structure on $(X\setminus F)/S^1$ (which has virtual dimension $-1$) can be pulled back to a Kuranishi structure on $X\setminus F$ with trivial fundamental cycle. However, they give no details of this construction.} \par Such detail, \cite{fooo:ans5} = Part \ref{S1equivariant} of this article, had been given to the authors of \cite{MW1}. \cite{fooo:ans5} is a reply to the question raised by the very person who wrote: `However, they give no details of this construction'. \item\label{MW9} Page 9 line 8-9: \par {\small The gluing analysis is a highly nontrivial Newton iteration scheme and should have an abstract framework that does not seem to be available at present. } \par We do not understand why an {\it abstract framework} {\bf should} be used. Our gluing analysis is based on the study of the concrete geometric situation where we perform our gluing construction. Finding an abstract formulation of the gluing construction could be an interesting research topic. However it is {\it not required} in order to confirm the correctness of the gluing constructions in various particular geometric cases. Such gluing constructions have been used successfully in gauge theory and in pseudo-holomorphic curve theory by many people over the last 30 years. We wonder whether the authors of \cite{MW1} question the soundness of all those well established results. If not, the authors of \cite{MW1} should explain why an abstract framework should be used in this particular case and not in the other cases. \item\label{MW10} Page 9 line 9-10: \par {\small In particular, it requires surjective linearized operators, and so only applies after perturbation.
} \par In the construction of a Kuranishi neighborhood we modify the (nonlinear Cauchy-Riemann) equation $ \overline\partial u' = 0 $ slightly and use $ \overline\partial u' \equiv 0 \mod E(u'). $ In other words the surjective linearized operators are obtained by introducing the obstruction space $E(u')$ and {\it not} by perturbation. That is, surjectivity of the linearized operators is used {\it before} perturbation. \par The perturbation via a generic choice of multisections starts {\it after} the finite dimensional reduction. \item Page 9\label{item12} line 11-12: \par {\small Moreover, gluing of noncompact spaces requires uniform quadratic estimates, which do not hold in general. } \par The proof of uniform quadratic estimates is certainly necessary in all situations where gluing (stretching-the-neck) arguments are used. (Both in gauge theory and in the study of pseudo-holomorphic curves, for example.) The way to handle it had been well established more than 20 years ago.\footnote{For example in \cite[Blowing up the metric; page 121-127]{freedUhlen} an estimate of the $L^4$ norm in terms of the $L^2_1$ norm is discussed, in the case of one-forms on noncompact $4$-manifolds with cylindrical ends.} It is proved in our situation (in the same way as in many of the other situations) as follows. The domain curve $\Sigma_T$ (we use the notation of Part \ref{secsimple}) is a union of the core and the neck region. The core consists of a compact family of compact spaces and is independent of the gluing parameter $T$. Therefore the uniform quadratic estimate is obvious there. On the other hand, the length of the neck region is unbounded, which, we suppose, is the noncompactness that concerns \cite{MW1}. However, on the neck region we have an exponential decay estimate (see \cite[Lemma 11.2]{FOn} = Proposition \ref{neckaprioridecay} of this article) and the neck region is of cylindrical type. Therefore even though the length of the neck region is unbounded, we have a {\it uniform} quadratic estimate.
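\par For the reader's convenience, the schematic shape of such a uniform quadratic estimate may be displayed as follows. (This display is only an illustrative sketch, not a verbatim formula from \cite{FOn}; the symbols $\xi$, $\eta$, $\mathcal F$ and the exponential map $\exp_{u}$ are generic notation introduced here, and the precise norms are the weighted Sobolev norms of (\ref{normformjula5}).) Writing the nonlinear equation in a local chart as $\mathcal F(\xi) = \overline{\partial}(\exp_{u}\xi)$, the estimate reads
$$
\left\Vert \mathcal F(\xi) - \mathcal F(\eta) - D_{u}\overline{\partial}\,(\xi - \eta) \right\Vert
\le C \left( \Vert \xi \Vert + \Vert \eta \Vert \right) \Vert \xi - \eta \Vert,
$$
with a constant $C$ independent of the gluing parameter $T$. The point of the discussion above is precisely this $T$-independence of $C$: it is obvious on the core, and on the neck it follows from the cylindrical structure together with the exponential decay estimate.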
We also remark that the weighted Sobolev norm we introduced in (\ref{normformjula5}) and \cite[Subsection 7.1.2]{fooo:book1} is designed so that the norm of the right inverse of the linearized operator becomes uniformly bounded. (Namely it is bounded by a constant independent of $T$.) \item\label{MW11} Page 9 line 12-13: \par {\small Finally, injectivity of the gluing map does not follow from the Newton iteration and needs to be checked in each geometric setting. } \par The classical proof of injectivity had been reviewed in \cite{fooo:ans3} = Part \ref{secsimple}. See Section \ref{surjinj}. Maybe this proof is more popular in the gauge theory community than in the symplectic geometry community. (See \cite{Don86I} for example.) \item\label{MW12} Page 9 line 3-7: \par {\small Each setting requires a different, precise definition of a Banach space of perturbations. Note in particular that spaces of maps with compact support in a given open set are not compact. The proof of transversality of the universal section is very sensitive to the specific geometric setting, and in the case of varying $J$ requires each holomorphic map to have suitable injectivity properties.} \par It seems that the `Banach space of perturbations' they allude to here is the obstruction space $E_{\frak p}(u')$. We always choose it to be a {\it finite} dimensional space of smooth sections. (See for example \cite[(12.7.4)]{FOn} and Definition \ref{obbundeldata} (5) of the present article.) It is a finite dimensional space and so is a `Banach space'. (We do not think it is a good idea to introduce an infinite dimensional space here, because such a space is harder to control.) The rest of this passage is very hard to understand, at least for us. The word `universal section' is not defined, for example. \item\label{MW13} Page 11 lines 2-5 from the bottom.
\par {\small This differentiability issue was not apparent in [FO, LiT]\footnote{[FO] is \cite{FOn} and [LiT] is \cite{LiTi98} in the references of this article.}, but we encounter the same obstacle in the construction of sum charts; see Section 4.2. In this setting, it can be overcome by working with special obstruction bundles, as we outline in Section 4.3.} \par This `obstacle' seems to be related to the following point: To associate an obstruction space $E_{\frak p}(u')$ to a map $u' : \Sigma' \to X$ we use the vector spaces $E_c$ for various $c$, where $E_c$ is a space of sections of $u_c^*TX \otimes \Lambda^{01}$ with $u_c : \Sigma_c \to X$. So we need to use a diffeomorphism between the support $K_c$ of the elements of $E_c$ and a subset of $\Sigma'$. Namely we transform $E_c$ by using this diffeomorphism and a parallel transport on $X$. \par Note that in \cite{FOn}, as well as in all our articles, the subspace $E_c$ is a finite dimensional space of smooth sections. So clearly no issue arises in transforming them by a diffeomorphism. It seems that `working with special obstruction bundles' is nothing but this choice of \cite[(12.7.4)]{FOn} etc., that is, `a finite dimensional space of smooth sections'. \item\label{MW14} Page 12 lines 16-20. \par {\small In principle, the construction of a continuous gluing map should always be possible along the lines of \cite{McSa94}, though establishing the quadratic estimates is nontrivial in each setting. However, additional arguments specific to each setting are needed to prove surjectivity, injectivity, and openness of the gluing map.} \par The method of \cite{McSa94} was used in \cite{FOn}.\footnote{It is quoted in \cite[page 984]{FOn} as follows: \par The proof is again a copy of McDuff-Salamon's in [47] with some minor modifications to handle the existence of the obstruction and moduli parameter.
\par [47] is \cite{McSa94} in the references of this article.} The quadratic estimate was classical, as we already mentioned in Item (\ref{item12}). Because of our choice of `special obstruction bundles' (Item (\ref{MW13})), no new point arises from the addition of the obstruction bundles.\footnote{We choose our obstruction bundle so that the supports of its elements are away from the nodes. So the obstruction bundle affects our equation only at the core, where noncompactness of the source does not appear. At the neck region the equation is the genuine pseudo-holomorphic curve equation and is the same as the one studied in \cite{McSa94}.} The proof of surjectivity is in \cite[Section 14]{FOn}. More detail is in Section \ref{surjinj} of this article = \cite[Section 1.5]{fooo:ans3}, together with the proof of injectivity (which is similar to that of surjectivity) and the openness of the gluing map (which follows from the surjectivity proven in Section \ref{surjinj}, as shown in Section \ref{cutting} (Lemma \ref{setisopen} = \cite[Lemma 2.105]{fooo:ans34})). Those proofs were already classical in 1996. \item\label{MW15} Page 12 lines 20-23. \par {\small Moreover, while homeomorphisms to their image suffice for the geometric regularization approach, the virtual regularization approaches all require stronger differentiability of the gluing map; e.g. smoothness in [FO,FOOO,J]\footnote{[FO,FOOO,J] are \cite{FOn}, \cite{fooo:book1}, \cite{joyce} in the references of this article.} } \par For the purpose of \cite{FOn}, differentiability of the gluing map is required only in its version with $T$ (the gluing parameter) fixed, as we explained in Subsection \ref{subsec342}. \par The `stronger differentiability of the gluing map' had been proved in \cite[Proposition A1.56]{fooo:book1}.
More detail of this proof, \cite{fooo:ans3} and \cite{fooo:ans34} (= Parts \ref{secsimple} and \ref{generalcase} of this article), had been written and sent to the members of the google group `Kuranishi', which includes the authors of \cite{MW1}. \item\label{MW16} Page 12 Remark 2.2.3 \par {\small None of [LiT, LiuT, FO, FOOO]\footnote{They are \cite{LiTi98,LiuTi98,FOn,fooo:book1} in the references of this article.} give all details for the construction of a gluing map. In particular, [FO, FOOO]\footnote{They are \cite{FOn,fooo:book1} in the references of this article.} construct gluing maps with image in a space of maps, but give few details on the induced map to the quotient space, see Remark 4.1.3. For closed nodal curves, [McS, Chapter 10] constructs continuous gluing maps in full detail, but (at least in the first edition) does not claim that the glued curves depend differentiably on the gluing parameter $a\in \C$ as $a \to 0$. By rescaling $\vert a \vert$, it is possible to establish more differentiability for $a \to 0$. } \par The analytic detail of the gluing map had been given in \cite[Sections 7.1 and A1.4]{fooo:book1}. Even more detail of the same argument (\cite{fooo:ans3} and \cite{fooo:ans34}, that is, Parts \ref{secsimple} and \ref{generalcase} of this article) was sent to the members of the google group `Kuranishi', which includes the authors of \cite{MW1}. \par In \cite[Remark 4.1.3]{MW1}, the authors of \cite{MW1} explained why {\bf their} approach fails in the setting of \cite[Sections 12-15]{FOn}. Therefore it is irrelevant to {\bf our} approach. See Items (\ref{MW29})-(\ref{MW293}) and Subsection \ref{sec12FO}. \par The book \cite{McSa94}, which we quote in \cite{FOn}, indeed does not claim differentiability in the gluing parameter. As we explained in Subsection \ref{subsec342}, the differentiability of the gluing map with the gluing parameter fixed is enough to establish all the results of \cite{FOn}.
This is because we only need to study the moduli spaces of virtual dimensions 1 and 0 for the purpose of \cite{FOn}. We remark that this had been mentioned already in \cite[page 782 line 6-8 from below]{fooo:book1} as follows. \par\medskip We remark that smoothness of coordinate change was not used in [FuOn99II],\footnote{This is \cite{FOn} in the references of this article.} since only 0 and 1 dimensional moduli spaces was used there. In other words, in Situation 7.2.2 mentioned in \S 7.2, we do not need it. \par\medskip \item\label{MW17} Page 16 - 19, Beginning of Subsection 2.5 \par There are discussions there about germs of Kuranishi neighborhoods and their coordinate changes. As we explained in Subsection \ref{gernkuranishi}, this point had already been corrected in \cite{fooo:book1}. \item\label{MW18} Page 19 lines 13-18. \par {\small The first nontrivial step is to make sure that these representatives were chosen sufficiently small for coordinate changes between them to exist in the given germs of coordinate changes. The second crucial step is to make specific choices of representatives of the coordinate changes such that the cocycle condition is satisfied. However, [FO, (6.19.4)] does not address the need to choose specific, rather than just sufficiently small, representatives. } \par This point is related to the notion of `germs of Kuranishi structure'. So it had already been corrected in \cite{fooo:book1}. The definition of Kuranishi structure in \cite{fooo:book1} includes the choice of representatives of the coordinate changes such that the cocycle condition is satisfied. \item\label{MW19} Page 20 lines 18-21 \par {\small The basic issues in any regularization are that we need to make sense of the equivalence relation and ensure that the zero set of a transverse perturbation is not just locally smooth (and hence can be triangulated locally), but also that the transition data glues these local charts to a compact Hausdorff space without boundary.
} \par See Item (\ref{MW6}). \item\label{MW20} Page 25 last two lines - Page 26 first line \par {\small It has been the common understanding that by stabilizing the domain or working in finite dimensional reductions one can overcome this differentiability failure in more general situations. } \par This common understanding is absolutely correct. (See Items (\ref{MW21}) - (\ref{MW26}).) Indeed we had done so. \item\label{MW21} Page 27 last paragraph - Page 28 second line. \par {\small It has been the common understanding that virtual regularization techniques deal with the differentiability failure of the reparametrization action by working in finite dimensional reductions, in which the action is smooth. We will explain below for the global obstruction bundle approach, and in Section 4.2 for the Kuranishi structure approach, that the action on infinite dimensional spaces nevertheless needs to be dealt with in establishing compatibility of the local finite dimensional reductions. In fact, as we show in Section 4, the existence of a consistent set of such finite dimensional reductions with finite isotropy groups for a Fredholm section that is equivariant under a nondifferentiable group action is highly nontrivial. For most holomorphic curve moduli spaces, even the existence of not necessarily compatible reductions relies heavily on the fact that, despite the differentiability failure, the action of the reparametrization groups generally do have local slices. However, these do not result from a general slice construction for Lie group actions on a Banach manifold, but from an explicit geometric construction using transverse slicing conditions.} \par The `highly nontrivial' problem mentioned in the above quote (`... that is equivariant under a nondifferentiable group action is highly nontrivial') seems to be related to the issue described in Item (\ref{MW13}). It had been resolved in the way explained there.
\par The discussion of \cite[Section 4.2]{MW1} seems to be very similar to a special case of our discussion in Part \ref{generalcase} of this article (= \cite{fooo:ans34}). (See Item (\ref{MW27}).) So it seems to us that \cite[Section 4.2]{MW1} also {\it supports} the common understanding that by `stabilizing the domain or working in finite dimensional reductions one can overcome this differentiability failure in more general situations', as Part \ref{generalcase} of this article does. \par As we explained in Subsection \ref{comparizon}, we never used a slice theorem for such a `general slice construction for Lie group actions on a Banach manifold' but instead used an `explicit geometric construction using transverse slicing conditions'. \par {So the description of this part of \cite{MW1} is based on their misunderstanding of the Kuranishi structure approach and on presumptions arising from their experience with other on-going projects}. We emphasize that the action on infinite dimensional spaces {\it never} enters in our approach. \item\label{MW22} Page 32 Lines 16-17 from the bottom. \par {\small Using such slices, the differentiability issue of reparametrizations still appears in many guises:} \par Let us explain below (Items (\ref{MW23}) - (\ref{MW26})) why {\it none} of those guises appears in our construction. \item\label{MW23} {\small (i) The transition maps between different local slices - arising from different choices of fixed marked points or auxiliary hypersurfaces are reparametrizations by biholomorphisms that vary with the marked points or the maps. The same holds for local slices arising from different reference surfaces, unless the two families of diffeomorphisms to the reference surface are related by a fixed diffeomorphism, and thus fit into a single slice.} \par The family of diffeomorphisms appearing here is applied (as reparametrization) to the set of solutions of an elliptic PDE.
(Namely, after solving the equation $\overline{\partial}u' \equiv 0 \mod E_{\frak p}(u')$.) By elliptic regularity they are smooth families of smooth maps. So reparametrization does not cause any problem. \item\label{MW24} { \small (ii) A local chart for $\frak R$ near a nodal domain is constructed by gluing the components of the nodal domain to obtain regular domains. Transferring maps from the nodal domain to the nearby regular domains involves reparametrizations of the maps that vary with the gluing parameters.} \par Near a nodal domain the smoothness of the coordinate changes is more nontrivial than in case (i) above. \cite[Lemma A1.59]{fooo:book1} (which is generalized to Propositions \ref{changeinfcoorprop} and \ref{reparaexpest} of this article = \cite[Propositions 2.19,2.23]{fooo:ans34}) had been prepared for this purpose and had been used to resolve this point. See the proof of Lemma \ref{2120lem} for example. \item\label{MW25} {\small (iii) The transition map between a local chart near a nodal domain and a local slice of regular domains is given by varying reparametrizations. This happens because the local chart produces a family of Riemann surfaces that varies with gluing parameters, whereas the local slice has a fixed reference surface. } \par The same answer as in Item (\ref{MW24}) above applies. \item\label{MW26} { \small (iv) Infinite automorphism groups act on unstable components of nodal domains. } \par This is the reason why we need to add marked points to such components. \item\label{MW27} Page 33 Lines 6-8 \par {\small We show in Remark 3.1.5 and Section 4.2 that these issues are highly nontrivial to deal with in abstract regularization approaches.} \par It is not so clear to us what `abstract regularization approaches' means. It might be related to taking a slice of an infinite dimensional group action. We never do this, as explained in Subsection \ref{comparizon}.
\par What is written in Section 4.2 of \cite{MW1} is similar to (a special case of) what we had written in \cite[Appendix]{FOn}, \cite[page 8-9, the answer to Question 4]{Fu1}, \cite{fooo:ans34} (= Part \ref{generalcase} of this article), as we show by examples below. Therefore, as far as the correctness of the mathematical statements appearing here is concerned, the opinions of the authors of \cite{MW1} are likely to coincide with ours. \begin{enumerate} \item The conditions given on page 39, Lines 5-12 from the bottom. \par These are similar to the special case of Definition \ref{obbundeldata} of obstruction bundle data. \item The discussion of the last part of page 42 of \cite{MW1} (Item 1 there). \par The discussion there seems to be related to the smoothness of $u' \mapsto E(u')$ that was explained in the post of Aug. 12, which is reproduced in Section \ref{smoothness}. \item Item 2 on page 43. \par The first part of this discussion is related to Lemma \ref{transbetweenEs}. The rest seems to be similar to the construction of the obstruction bundle we described in `the post of Aug. 10', which we reproduced in Section \ref{smoothness}. \footnote{ We remark that those posts on Aug. 10 and Aug. 12 are extracts (or adaptations to a special case) of our earlier posts \cite{fooo:ans34}. We sent them to the google group `Kuranishi' at the request of the authors of \cite{MW1}.} \item Page 44, Lines 6-8. \par {\small This construction is so canonical that coordinate changes between different sum charts exist essentially automatically, and satisfy the weak cocycle condition.} \par This sentence is very similar to the following, which appears as a part of \cite{Fu1}. So there seems to be an agreement concerning this point. \par\medskip Once (1) is understood the coordinate change $\phi_{{\frak q}{\frak p}}$ is just a map which sends an element $((\Sigma,\vec z),u)$ to the same element. So the cocycle condition is fairly obvious. \item Page 46 \cite[(4.3.3)]{MW1}.
\par The choice of the obstruction space in \cite[(4.3.3)]{MW1} is the same as the one which appeared in \cite[(12.7.4)]{FOn}. \end{enumerate} \item\label{MW29} Page 38-39, Remark 4.1.3 \par In \cite[Remark 4.1.3]{MW1}, the authors of \cite{MW1} explained why {\it their} approach fails in the setting of \cite{FOn}, Sections 12-15. Therefore it is irrelevant to {\it our} approach. We show this by several examples below. \item\label{MW291} Page 38, Lines 16-17. \par {\small The above proof translates the construction of basic Kuranishi charts in [FO]\footnote{This is \cite{FOn} in the reference of this article.} in the absence of nodes and Deligne-Mumford parameters into a formal setup. } \par The authors of \cite{FOn} do not agree that this is a translation of the construction of \cite{FOn}. \item\label{MW292} Page 38 last 4 lines and Page 39 first line. \par {\small [FO] construct the maps $\hat s$ and $\hat{\psi}$ on a ``{\it thickened} Kuranishi domain'' analogous to $\hat W_{f}$ and thus need to make the same restriction to an ``infinitesimal local slice'' as in Lemma 4.1.2. Again, the argument for injectivity of $f$ given in Lemma 4.1.2 does not apply due to the differentiability failure of the reparametrization action of $G = G_{\infty}$ discussed in Section 3.1.} \par The authors of \cite{MW1} explain here why the proof of \cite[Lemma 4.1.2]{MW1} {\it they gave} fails in the situation of \cite[Section 12]{FOn}. This claim has no relation to our proof. See Subsection \ref{sec12FO} for the correct proof of \cite[Lemma 12.24]{FOn} in the situation of \cite{MW1}. \item\label{MW2921} Page 39, Lines 6-8. \par {\small The claim that $\psi_f$ has open image in $\sigma^{-1}(0)/G$ is analogous to [FO, 12.25], which seems to assume that $\hat U_f$ is invariant under $G_{\infty}$ to assert ``$\hat\psi(\hat s^{-1}(0) \cap \exp_f(W_f)) = \hat{\psi}(\hat s^{-1}(0))$''.
} \par When the first and fourth named authors wrote \cite{FOn}, they of course were aware of the fact that the choice of the obstruction bundle $E$ in \cite[Section 12]{FOn} is not invariant under the action of the automorphism group. This is {\it the} reason why \cite[Section 15 and appendix]{FOn} was written. \footnote{ We remark that the automorphism group (that is written $G$ in the above quote from \cite{MW1}) acts on the zero set of the Kuranishi map (that is written as $\hat s^{-1}(0)$ in the above quote). Therefore the set $V'_{\sigma} \cap s_{\sigma}^{-1}(0)/\text{\rm Aut} (\sigma)$ (that is, $\hat\psi(\hat s^{-1}(0))$ in the above quote), which appears in \cite[Lemma 12.24 and Proposition 12.25]{FOn}, is well defined. (The group $\text{\rm Aut}(\sigma)$ is the finite group of automorphisms of the stable {\it map}.) } More explicitly, it is written in \cite[Page 1001, Lines 18-19 from the bottom]{FOn} that \par\medskip The trouble here is that $E_{\tau_i}$ is {\it not} invariant by the ``action'' of $Lie(Aut(\Sigma_{\tau_i}))_0$. \par\medskip (Note `not' was italic in \cite{FOn}.) The proof of \cite[Proposition 12.25]{FOn} in the situation of \cite{MW1} without using the $G$ invariance of the obstruction bundle is given in Subsection \ref{sec12FO}. \item\label{MW293} Page 39, Lines 8-10. \par {\small However, $G_{\infty}$-invariance of $\hat U_f$ requires $G_{\infty}$-equivariance of $\hat E$, i.e. an equivariant extension of $E_f$ to the infinite dimensional domain $\hat{\mathcal V}$. A general construction of such extensions does not exist due to the differentiability failure of the $G_{\infty}$-action. } \par It is not clear to us what `general construction' means. It might mean `a construction in some abstract setting without using the properties of the explicit geometric setting'. We never tried to find such a `general construction'. Two explicit geometric constructions of such extensions are given in \cite{FOn}: one in Section 15, the other in the Appendix.
\item\label{MW28} Page 39, Lines 14-17 from the bottom. \par {\small The differentiability issues in the above abstract construction of Kuranishi charts can be resolved, by using a geometrically explicit local slice $\mathcal B_f \subset \widehat{\mathcal B}^{k,p}$ as in (3.1.3). (This is mentioned in various places throughout the literature, e.g. [FO, Appendix], but we could not find the analytic details discussed here.) } \par Such details had been written in \cite{fooo:ans34} and posted to the google group `Kuranishi' of which the authors of \cite{MW1} are members. \cite{fooo:ans34} is a reply to a question raised by the very person who wrote `we could not find the analytic details'. \item\label{MW30} Page 53, Lines 14-17 from the bottom. \par {\small Note that we crucially use the triviality of the isotropy groups, in particular in the proof of the cocycle condition. Nontrivial isotropy groups cause additional indeterminacy, which has to be dealt with in the abstract notion of Kuranishi structures.} \par The existence of a Kuranishi structure on the moduli space {\it without using triviality of the isotropy groups} is explained in detail in Part \ref{generalcase} of this article. \item\label{MW31} Page 54 \par {\small (iii) In view of Sum Condition II′ and the previous remark, one cannot expect any two given basic Kuranishi charts to have summable obstruction bundles and hence be compatible. This requires a perturbation of the basic Kuranishi charts, which is possible only when dealing with a compactified solution space, since each perturbation may shrink the image of a chart. } \par This might be related to Lemma \ref{transbetweenEs}. (The proof of this lemma is easy.) \item\label{MW32} Page 54 \par {\small (iv) This discussion also shows that even a simple moduli space such as $\mathcal M_1(A,J)$ does not have a canonical Kuranishi structure.
Hence the construction of invariants from this space also involves constructing a Kuranishi structure on the product cobordism $\mathcal M_1(A,J) \times [0,1]$ intertwining any two Kuranishi structures for $\mathcal M_1(A,J)$ arising from different choices of basic charts and transition data.} \par The cobordism of Kuranishi structures and its application to the well-definedness of the virtual fundamental class had been discussed in \cite[Lemmas 17.8,17.9]{FOn2} etc. \item\label{MW33} Page 75, Lines 13-15. \par {\small However, in order to obtain a VMC from a Kuranishi structure, we either need to require the strong cocycle condition, or make an additional subtle shrinking construction as in (i) that crucially uses the additivity condition. } \par A detailed explanation of the construction of a VMC from a Kuranishi structure {\it without using the additivity condition} is given in Part \ref{Part2} of this article. \item\label{MW34} Page 75, Lines 20-22 from the bottom. \par {\small The proof of existence in [FO, Lemma 6.3] is still based on notions of germs and addresses neither the relation to overlaps nor the cocycle condition. } \par It is explained in Subsection \ref{gernkuranishi} why the proof of [FO, Lemma 6.3] is {\it not} based on the notion of germs. The details of this proof are given in Section \ref{sec:existenceofGCS}. \end{enumerate}
https://arxiv.org/abs/1202.3184
Asymptotic Behavior of the Maximum and Minimum Singular Value of Random Vandermonde Matrices
This work examines various statistical distributions in connection with random Vandermonde matrices and their extension to $d$--dimensional phase distributions. Upper and lower bound asymptotics for the maximum singular value are found to be $O(\log^{1/2}{N^{d}})$ and $\Omega((\log N^{d} /(\log \log N^d))^{1/2})$ respectively, where $N$ is the dimension of the matrix, generalizing the results in \cite{TW}. We further study the behavior of the minimum singular value of these random matrices. In particular, we prove that the minimum singular value is at most $N\exp(-C\sqrt{N})$ with high probability, where $C$ is a constant independent of $N$. Furthermore, the value of the constant $C$ is determined explicitly. The main result is obtained in two different ways. One approach uses techniques from stochastic processes and, in particular, a construction related to the Brownian bridge. The other is a more direct analytical approach involving combinatorics and complex analysis. As a consequence, we obtain a lower bound for the maximum absolute value of a random complex polynomial on the unit circle, which may be of independent mathematical interest. Lastly, for each sequence of positive integers $\{k_p\}_{p=1}^{\infty}$ we present a generalized version of the previously discussed matrices. The classical random Vandermonde matrix corresponds to the sequence $k_{p}=p-1$. We find a combinatorial formula for their moments and we show that the limit eigenvalue distribution converges to a probability measure supported on $[0,\infty)$. Finally, we show that for the sequence $k_p=2^{p}$ the limit eigenvalue distribution is the famous Marchenko--Pastur distribution.
\section{Introduction}\label{intro}% Large dimensional random matrices are of much interest in statistics, where they play a role in multivariate analysis. In his seminal paper, Wigner \cite{wigner} proved that the spectral measure of a wide class of symmetric random matrices of dimension $N$ converges, as $N\to\infty$, to the semicircle law. Much work has since been done on related random matrix ensembles, either composed of (nearly) independent entries, or drawn according to weighted Haar measures on classical groups (e.g., orthogonal, unitary, symplectic). The limiting behavior of the spectrum of such matrices is of considerable interest for mathematical physics and information theory. In addition, such random matrices play an important role in operator algebra studies initiated by Voiculescu, now known as free (non--commutative) probability theory (see \cite{Voi1} and \cite{Voi2} and the many references therein). The study of large random matrices is also related to interesting questions in combinatorics, geometry, algebra and number theory. More recently, large random matrix ensembles with additional structure have been considered. For instance, the properties of the spectral measures of random Hankel, Markov and Toeplitz matrices with independent entries have been studied in \cite{dembo}. In this paper we study several aspects of random Vandermonde matrices with unit magnitude complex entries and their generalizations. An $N\times L$ matrix ${\bf V}$ with unit complex entries is a {\em Vandermonde matrix} if there exist values $\theta_1,\ldots,\theta_L \in [0,1]$ such that \begin{equation} \label{eqn_Vandermondedefn} {\bf V} := \frac{1}{\sqrt{N}}\, \left( \begin{array}{lcl} 1 & \ldots & 1 \\ e^{2\pi i\theta_1} & \ldots & e^{2\pi i\theta_L} \\ \vdots & \ddots & \vdots \\ e^{2\pi i(N-1)\theta_1} & \ldots & e^{2\pi i(N-1)\theta_L} \end{array} \right) \end{equation} (see \cite{GC02} or \cite{TW} for more details).
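As a quick numerical illustration of (\ref{eqn_Vandermondedefn}), the following sketch builds such a matrix with randomly drawn phases. Uniform phases are used as one concrete choice; the function name is ours.

```python
import numpy as np

def random_vandermonde(N, L, rng=None):
    """N x L random Vandermonde matrix: V[k, q] = exp(2*pi*i*k*theta_q)/sqrt(N),
    with phases theta_q drawn i.i.d. uniformly on [0, 1]."""
    rng = np.random.default_rng(rng)
    theta = rng.uniform(0.0, 1.0, size=L)   # the phase vector
    k = np.arange(N).reshape(-1, 1)         # row exponents 0, ..., N-1
    return np.exp(2j * np.pi * k * theta) / np.sqrt(N)

V = random_vandermonde(N=8, L=5, rng=0)
# Every entry has modulus 1/sqrt(N), so every column has unit Euclidean norm.
print(np.allclose(np.linalg.norm(V, axis=0), 1.0))
```

Note that the columns are unit vectors by construction, which is the normalization that makes the moments of ${\bf V}^{*}{\bf V}$ well behaved as $N\to\infty$.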
A random Vandermonde matrix is produced if the entries of the phase vector $\theta:= \left( \theta_1,\ldots, \theta_L \right) \in [0,1]^L$ are random variables. For the purposes of this paper we assume that the phase vector has i.i.d. components with an absolutely continuous distribution $\nu$. \par Generalized Vandermonde matrices, also called $d$--fold Vandermonde matrices, were defined in \cite{SP}. The case $d=1$ recovers the matrices in (\ref{eqn_Vandermondedefn}). For $d\geq 2$, these matrices are defined by selecting $L$ random vectors $x_q$ independently in the $d$--dimensional hypercube $[0,1]^{d}$. These vectors are called the vectors of phases. Given a scale parameter $N$, consider the function $\gamma: \left\{ 0,1,\ldots,N-1 \right\}^d \to \left\{ 0,1,\ldots,N^{d}-1 \right\}$ defined, for every vector of integers $\ell = (\ell_1,\ell_2,\ldots,\ell_d) \in \left\{ 0,1,\ldots,N-1 \right\}^d$, by $$ \gamma(\ell) := \sum_{j=1}^{d}{N^{j-1}\ell_{j}}. $$ It is easy to see that this function is a bijection onto the set $\left\{ 0,1,\ldots,N^{d}-1 \right\}$. Now we define the $N^{d}\times L$ matrix ${\bf V}^{(d)}$ as \begin{equation} {\bf V}^{(d)}_{(\gamma(\ell),q)} := \frac{1}{N^{d/2}}\, \mathrm{exp}\Big(2\pi i \langle \ell,x_q\rangle\Big). \label{eqn_vandergendef} \end{equation} For the case $d=1$ we drop the upper index and denote this matrix by ${\bf V}$; for $d\geq 2$ we use ${\bf V}^{(d)}$. Random Vandermonde matrices and their extended versions are a natural construction with a wide range of applications in fields as diverse as finance \cite{Norberg}, signal processing \cite{SP}, wireless communications \cite{Porst}, statistical analysis \cite{Anderson}, security \cite{Sampaio} and biology \cite{Strohmer}. This stems from the close relationship that unit magnitude complex Vandermonde matrices have with the discrete Fourier transform.
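The $d$--fold construction (\ref{eqn_vandergendef}) can be sketched numerically as follows; $\gamma$ is implemented as the base-$N$ digit map, and the function names are ours.

```python
import numpy as np

def gamma(ell, N):
    """Bijection {0,...,N-1}^d -> {0,...,N^d - 1}: gamma(ell) = sum_j N^j * ell_{j+1},
    i.e. ell read as the base-N digits of the row index."""
    return sum(N**j * lj for j, lj in enumerate(ell))

def random_vandermonde_dfold(N, L, d, rng=None):
    """N^d x L matrix: row gamma(ell), column q holds
    exp(2*pi*i*<ell, x_q>) / N^(d/2), with x_q i.i.d. uniform on [0,1]^d."""
    rng = np.random.default_rng(rng)
    X = rng.uniform(size=(L, d))            # the L phase vectors x_q
    V = np.empty((N**d, L), dtype=complex)
    for ell in np.ndindex(*([N] * d)):      # all multi-indices in {0,...,N-1}^d
        V[gamma(ell, N)] = np.exp(2j * np.pi * (X @ np.array(ell))) / N**(d / 2)
    return V
```

For $d=1$ this reduces to the classical construction, since $\gamma$ is then the identity on $\{0,\ldots,N-1\}$.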
Among these, there is an important recent application for signal reconstruction using noisy samples (see \cite{SP}) where an asymptotic estimate is obtained for the mean squared error. In particular, and as was shown in \cite{SP}, generalized Vandermonde matrices play an important role in the minimum mean squared error estimation of vector fields, as might be measured in a sensor network. In such networks, the parameter $d$ is the dimension of the field being measured, $L$ is the number of sensors and $N$ can be taken as the approximate bandwidth of the measured signal per dimension. This asymptotic can be calculated as a random eigenvalue expectation, whose limit distribution depends on the signal dimension $d$. In the case $d=1$ the limit is via random Vandermonde matrices. As $d\rightarrow \infty$ the Marchenko--Pastur limit distribution is shown to apply. Further applications were treated in \cite{GC02} including source identification and wavelength estimation. \par One is typically interested in studying the behavior of these matrices as both $N$ and $L$ go to infinity at a given ratio, $\lim_{L,N\to\infty}\frac{L}{N^{d}}=\beta$. In \cite{GC02}, important results were obtained for the case $d=1$. In particular, the limit of the moments of ${\bf V^{*}}{\bf V}$ was derived and a combinatorial formula for the asymptotic moments was given under the hypothesis of continuous density. In \cite{TW}, these results were extended to more general densities and it was also proved that these moments arise as the moments of a probability measure $\mu_{\nu,\beta}$ supported on $[0,\infty)$. This measure depends on the measure $\nu$, the distribution of the phases, and on the value of $\beta$. \par In \cite{TW}, the behavior of the maximum eigenvalue was studied and tight upper and lower bounds were found. Here we extend these results and study the maximum eigenvalue of the $d$--fold extended Vandermonde matrix. 
More specifically, we study the asymptotic behavior of the maximum eigenvalue of the matrix ${{\bf V}^{(d)}}^{*}{\bf V}^{(d)}$ and derive upper and lower bounds. \par A natural question is how the smallest singular value behaves as $N\to\infty$, and this paper is one of the first to address this question. Here we restrict to the case $d=1$. The matrix $\bf{V}^{*}\bf{V}$ is an $L\times L$ positive definite random matrix with eigenvalues $$ 0\leq \lambda_{1}(N)\leq \ldots \leq \lambda_{L}(N). $$ The singular values of $\bf{V}$ are by definition the eigenvalues of $\sqrt{\bf{V}^{*}\bf{V}}$. Therefore, $s_{i}(N)=\sqrt{\lambda_i(N)}$. On the one hand, it is clear that if $L>N$ the matrix ${\bf V^{*}}{\bf V}$ is of size $L\times L$ and rank $N$. Therefore, if $\beta>1$ the asymptotic limit measure has an atom at zero of size at least $1-1/\beta$. On the other hand, if $L=N$, the random matrix ${\bf V^{*}}{\bf V}$ has determinant \begin{equation} \det({\bf V^{*}}{\bf V})=|\det({\bf V})|^{2}=\frac{1}{N^{N}}\cdot\prod_{1\leq p<q\leq N}{|e^{2\pi i\theta_{p}}-e^{2\pi i\theta_{q}}|^2}. \end{equation} This determinant is zero if and only if there exist distinct $p$ and $q$ such that $\theta_{p}=\theta_{q}$. This is an event of zero probability if the probability measure has a density. Therefore, the minimum eigenvalue $\lambda_1(N)$ is positive with probability 1 and converges to 0 as $N$ increases. In this work, we show that with high probability $\lambda_{1}(N)\leq N^2\exp(-C\sqrt{N})$. As a consequence of our argument we show that with high probability \begin{equation} \max \Bigg\{\prod_{i=1}^{N}|z-z_i|^2 \,\,:\,\,|z|=1\Bigg\}\geq \exp(C\sqrt{N}) \end{equation} where $z_k=e^{2\pi i\theta_k}$ and $\{\theta_{1},\ldots,\theta_{N}\}$ are i.i.d. on $[0,1]$. Moreover, we explicitly determine the constant $C$. We believe that this may prove to be of independent mathematical interest.
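The determinant identity for the square case above is easy to check numerically for small $N$; a minimal sketch (uniform phases assumed as the concrete distribution):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
N = 6
theta = rng.uniform(size=N)
z = np.exp(2j * np.pi * theta)

# Square random Vandermonde matrix: V[k, q] = z_q^k / sqrt(N), k = 0..N-1.
V = (z[None, :] ** np.arange(N)[:, None]) / np.sqrt(N)

# det(V* V) = |det V|^2 = N^{-N} * prod_{p<q} |z_p - z_q|^2.
lhs = np.linalg.det(V.conj().T @ V).real
rhs = np.prod([abs(z[p] - z[q]) ** 2
               for p, q in combinations(range(N), 2)]) / N**N
print(np.isclose(lhs, rhs))
```

In particular the determinant is strictly positive whenever the $z_q$ are distinct, consistent with $\lambda_1(N)>0$ almost surely.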
Additionally, we show the absence of finite moments for the matrix $({\bf V^{*}}{\bf V})^{-1}.$ \par Finally, we present a generalized version of the previously discussed random Vandermonde matrices. More specifically, consider an increasing sequence of integers $\{k_{p}\}_{p=1}^{\infty}$ and let $\{\theta_{1},\ldots,\theta_{N}\}$ be i.i.d. random variables uniformly distributed on the unit interval $[0,1]$. Let ${\bf V}$ be the $N\times N$ random matrix defined as \begin{equation} V(p,q):=\frac{1}{\sqrt{N}}z_{q}^{k_p} \end{equation} where $z_{q}:=e^{2\pi i\theta_{q}}$. Note that if we consider the sequence $k_p=p-1$ then the matrix ${\bf V}$ is the usual random Vandermonde matrix defined in (\ref{eqn_Vandermondedefn}). We study the limit eigenvalue distribution of the matrix ${\bf X}:={\bf VV}^{*}$ and in particular its asymptotic moments. We also find a combinatorial formula for its moments and show that for every sequence there exists a unique probability measure on $[0,\infty)$ with these moments. Finally, we show that for the sequence $k_p=2^p$ the limit eigenvalue distribution is the famous Marchenko--Pastur distribution. \par The rest of the paper proceeds as follows. In Section \ref{sec:rand_mat_ess}, we present some preliminaries in random matrix theory, the Littlewood--Offord theory (\cite{taovu}), set up some notation and terminology, and review some known results for random Vandermonde matrices. In Section \ref{tracelog}, we derive a formula for the trace log and log determinant of the random Vandermonde matrices. We also prove the absence of finite moments for the matrix ${\bf M}^{*}{\bf M}$ where ${\bf M}={\bf V}^{-1}$. In Section \ref{maxeig}, we present upper and lower bounds for the behavior of the maximum singular value of ${\bf V}^{(d)}$ in the general case. In Section \ref{sec_mineig}, we study the behavior of the minimum singular value of ${\bf V}$.
In Section \ref{sec_numer}, we present some numerical results that suggest the absence of an atom at zero in the limit eigenvalue distribution in the square case. In the last Section, we analyze the moments and limit eigenvalue distributions of the generalized version of the random Vandermonde matrices as described before. \section{Preliminaries}\label{sec:rand_mat_ess} \subsection{Random Matrix Theory} Throughout the paper we denote by ${\bf A}^{*}$ the complex conjugate transpose of the matrix ${\bf A}$ and by ${\bf I}_{N}$ the $N\times N$ identity matrix. We let $\mathrm{Tr}({\bf A}):=\sum_{i=1}^{N}{a_{ii}}$ be the non--normalized trace, where $a_{ii}$ are the diagonal elements of the matrix ${\bf A}$. We also let $\mathrm{tr}_{N}({\bf A})=\frac{1}{N}\mathrm{Tr}({\bf A})$ be the normalized trace. Let ${\bf A}_{N}=(a_{ij}(\omega))_{i,j=1}^{N}$ be a random matrix where the entries $a_{ij}$ are random variables on some probability space. We say that the random matrices ${\bf A}_{N}$ converge to a random variable $A$ in distribution if the moments of ${\bf A}_{N}$ converge to the moments of the random variable $A$, and denote this by ${\bf A}_{N}\to A$. \par Note that for a Hermitian $N\times N$ matrix ${\bf A} = {\bf A}^{*}$, the collection of moments corresponds to a probability measure $\mu_{{\bf A}}$ on the real line, determined by $\mathrm{tr}_{N}({\bf A}^{k})=\int_{\mathbb{R}}{t^{k}\,d\mu_{{\bf A}}(t)}$. This measure is given by the eigenvalue distribution of ${\bf A}$, i.e., it puts mass $\frac{1}{N}$ on each of the eigenvalues of ${\bf A}$ (counted with multiplicity): \begin{equation} \mu_{{\bf A}}=\frac{1}{N}\sum_{i=1}^{N}{\delta_{\lambda_{i}}} \end{equation} where $\lambda_{1},\ldots,\lambda_{N}$ are the eigenvalues of ${\bf A}$. In the same way, for a random matrix ${\bf A}$, $\mu_{{\bf A}}$ is given by the averaged eigenvalue distribution of ${\bf A}$.
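The identity $\mathrm{tr}_{N}({\bf A}^{k})=\int t^{k}\,d\mu_{{\bf A}}(t)$ can be checked numerically for any Hermitian matrix; a small sketch (the test matrix is an arbitrary real symmetric example):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.normal(size=(5, 5))
A = B + B.T                       # a Hermitian (here real symmetric) matrix
eigvals = np.linalg.eigvalsh(A)   # the lambda_i, i.e. the support of mu_A

# tr_N(A^k) equals the k-th moment of the empirical eigenvalue distribution.
for k in range(1, 5):
    trace_moment = np.trace(np.linalg.matrix_power(A, k)) / 5
    spectral_moment = np.mean(eigvals ** k)
    assert np.isclose(trace_moment, spectral_moment)
print("moments agree")
```

This is just the spectral theorem in disguise: $\mathrm{Tr}({\bf A}^k)=\sum_i \lambda_i^k$, so dividing by $N$ gives the mean of $\lambda_i^k$, i.e. the $k$-th moment of $\mu_{{\bf A}}$.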
Thus, moments of random matrices with respect to the averaged trace contain exactly the type of information in which one is usually interested when dealing with random matrices. \par Consider an $N\times L$ random Vandermonde matrix $\bf V$ with unit complex entries, as given in (\ref{eqn_Vandermondedefn}). The variables $\theta_{\ell}$ are called the phases and $\nu$ their probability distribution. It was proved in \cite{GC02} that if $d\nu=f(x)\,dx$ for $f(x)$ continuous in $[0,1]$, then the matrices ${\bf V^{*}\bf V}$ have finite asymptotic moments. In other words, the limit \begin{equation} m_{r}^{(\beta)}=\lim_{N\to\infty}{\mathbb{E}\Big[\mathrm{tr}_{L}\Big(({\bf V^{*}V})^r \Big)\Big]} \end{equation} exists for all $r\geq 0$. Moreover, \begin{equation} m_{r}^{(\beta)}=\sum_{\rho\in\mathcal{P}(r)}{K_{\rho,\nu}\beta^{|\rho|-1}} \end{equation} where $K_{\rho,\nu}$ are positive numbers indexed by the partition set. We call these numbers {\it Vandermonde expansion coefficients}. The fact that all the moments exist is not enough to guarantee the existence of a limit probability measure having these moments. However, it was proved in \cite{TW} that the eigenvalues of ${\bf V^{*}V}$ converge in distribution to a probability measure $\mu_{\beta,\nu}$ supported on $[0,\infty)$ where $\beta=\lim_{N\to\infty}{\frac{L}{N}}$. More precisely, $$ m_{r}^{(\beta)}=\int_{0}^{\infty}{t^{r}\,d\mu_{\beta,\nu}(t)}. $$ In \cite{TW}, the class of functions for which the limit eigenvalue distribution exists was enlarged to include unbounded densities, and lower and upper bounds for the maximum eigenvalue were found. We suggest that the interested reader look at the articles \cite{GC02} and \cite{TW} for more properties of the Vandermonde expansion coefficients as well as methods and formulas to compute them. \subsection{Littlewood--Offord Theory} Let $v_1,\ldots,v_n$ be $n$ vectors in $\mathbb{R}^d$, which we normalise to all have length at least $1$.
For any given radius $\Delta > 0$, we consider the small ball probability $$ \displaystyle p(v_1,\ldots,v_n,\Delta) := \sup_B {\mathbb{P}}( \eta_1 v_1 + \ldots + \eta_n v_n \in B ) $$ where $\eta_1,\ldots,\eta_n$ are i.i.d. Bernoulli signs (i.e., taking the values $+1$ and $-1$ independently with probability $1/2$ each), and $B$ ranges over all (closed) balls of radius $\Delta$. The Littlewood--Offord problem is to compute the quantity $$ \displaystyle p_d(n,\Delta) := \sup_{v_1,\ldots,v_n} p(v_1,\ldots,v_n,\Delta) $$ where $v_1,\ldots,v_n$ range over all vectors in $\mathbb{R}^d$ of length at least $1$. Informally, this number measures the extent to which a random walk of length $n$ (with all steps of size at least $1$) can concentrate into a ball of radius $\Delta$. The one--dimensional case of this problem was solved by Erd\H{o}s. First, one observes that one can normalise all the $v_i$ to be at least $+1$ (replacing $v_i$ and $\eta_i$ by $-v_i$ and $-\eta_i$ where necessary). In the model case when $\Delta < 1$, he proved that $$ \displaystyle p_1(n,\Delta) = \frac{1}{2^n}\binom{n}{\lfloor n/2\rfloor} = \frac{\sqrt{\frac{2}{\pi}}+o(1)}{\sqrt{n}} $$ when $0 \leq \Delta < 1$ (the bound is attained in the extreme case $v_1=\ldots=v_n=1$). A similar argument works for higher values of $\Delta$, using Dilworth's Theorem instead of Sperner's Theorem, and gives the exact value \begin{equation}\label{LOT} p_1(n,\Delta) = \frac{1}{2^n}\sum_{j=1}^s \binom{n}{m_j} = \frac{s\sqrt{\frac{2}{\pi}}+o(1)}{\sqrt{n}} \end{equation} whenever $n \geq s$ and $s-1 \leq \Delta < s$ for some natural number $s$, where $\binom{n}{m_1},\ldots,\binom{n}{m_s}$ are the $s$ largest binomial coefficients of $\binom{n}{1}, \ldots, \binom{n}{n}$. See \cite{taovu} for more details on the Littlewood--Offord Theory.
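For small $n$, the one--dimensional quantity can be computed by brute force over all $2^n$ sign patterns, which illustrates the Erd\H{o}s bound (\ref{LOT}) in the case $s=1$; a sketch (function name ours):

```python
from itertools import product
from math import comb

def p1(v, delta):
    """Brute-force small-ball probability for the walk sum(eta_i * v_i),
    eta_i = +/-1 each with probability 1/2.  An optimal closed interval of
    radius delta may be assumed to have its left endpoint at a walk value."""
    sums = sorted(sum(e * vi for e, vi in zip(signs, v))
                  for signs in product((-1, 1), repeat=len(v)))
    best = max(sum(1 for t in sums if s <= t <= s + 2 * delta) for s in sums)
    return best / 2 ** len(v)

n = 8
# The extremal case v_1 = ... = v_n = 1 attains the Erdos bound for delta < 1:
# the walk values are spaced 2 apart, so an interval of length 2*delta < 2
# captures at most one value, and the most likely value has probability
# binom(n, n//2) / 2^n.
print(p1([1.0] * n, 0.5) == comb(n, n // 2) / 2 ** n)
```

Perturbing the step sizes away from $1$ spreads the walk values out, and the resulting small-ball probability can only drop below the central binomial bound, as the theorem asserts.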
\section{Trace Logarithm Formula and the Inverse of a Vandermonde Matrix}\label{tracelog} \subsection{Inverse of a Vandermonde Matrix} Given a vector $x$ in $\mathbb C^N$, we define $\sigma^m_r(x)$ to be the sum of all $r$--fold products of the components of $x$ not involving the $m$--th coordinate. In other words, $$ \sigma^m_r(x) = \sum_{\rho_r^m} \prod_{k\in \rho_r^m} x_k $$ where the sum runs over all subsets $\rho_r^m$ of the index set $\left\{ 1,\ldots,m-1,m+1,\ldots,N \right\}$ of cardinality $r$. The following Theorem was proved in \cite{GEpaper}. \begin{theorem} \label{GEthm} Let ${\bf V}$ be a square $N\times N$ matrix given by \begin{equation} \label{eqn_Vandefn} {\bf V} := \left( \begin{array}{llll} 1 & 1 & \ldots & 1 \\ x_1 & x_2 &\ldots & x_N \\ \vdots & \vdots & \ddots & \vdots \\ x_1^{N-1} & x_{2}^{N-1} & \ldots & x_N^{N-1} \end{array} \right) \end{equation} with pairwise distinct values $x_1,\ldots,x_N$. Then its inverse ${\bf M}:={\bf V}^{-1}$ is the matrix with entries $$ \mathbf{M}(m,n) = \frac{(-1)^{N-n} \sigma_{N-n}^m(x)}{\prod_{j \neq m} \left( x_m - x_j \right)} $$ with $m,n \in \left\{ 1,2,\ldots,N \right\}$. \end{theorem} \begin{remark} Let $\nu_1 \leq \nu_2 \leq \ldots \leq \nu_N$ be the eigenvalues of ${\bf M}^* {\bf M}$ and let $\lambda_1 \leq \lambda_2 \leq \ldots \leq \lambda_N$ be the corresponding eigenvalues of ${\bf V}^*{\bf V}$, which are the same as for ${\bf V}{\bf V}^*$. Note that $$ \nu_k = \lambda^{-1}_{N-(k-1)} $$ and in particular $\nu_N = \lambda_1^{-1}$. Therefore, to understand the behavior of $\lambda_{1}$ it is enough to understand the behavior of $\nu_{N}$. \end{remark} Here we prove a Theorem about the trace log formula for random Vandermonde matrices and a Theorem about the non--existence of the moments of ${\bf M}^* {\bf M}$, but first we need the following Lemma. \begin{lemma}\label{lemma1} Let ${\bf M}$ be an invertible $N\times N$ matrix with columns $X_{1},\ldots,X_{N}$ and let $R_{1},\ldots,R_{N}$ be the rows of ${\bf M}^{-1}$.
Let ${\mathcal V}_i$ be the subspace generated by all the column vectors except $X_{i}$, i.e., $$ {\mathcal V}_i=\mathrm{span}\{X_{1},X_{2},\ldots,X_{i-1},X_{i+1},\ldots,X_{N}\}. $$ Then the distance between the vector $X_{i}$ and the subspace ${\mathcal V}_i$ is $$ \mathrm{dist}(X_{i},{\mathcal V}_i)=\frac{1}{\norm{R_{i}}}. $$ Moreover, $$ \sum_{i=1}^{N}{\lambda_{i}({\bf M}^* {\bf M})^{-1}}=\sum_{i=1}^{N}{\mathrm{dist}(X_{i},{\mathcal V}_i)^{-2}}. $$ \end{lemma} \begin{proof} By definition of the inverse, $\langle R_k,X_{\ell}\rangle= \delta_{k,\ell}$, so $R_k$ is orthogonal to ${\mathcal V}_k$ and $\langle R_k,X_{k}\rangle=1$. Since ${\mathcal V}_k$ is a hyperplane with unit normal $R_k/\euclidnorm{R_k}$, the distance from $X_k$ to ${\mathcal V}_k$ equals the length of the projection of $X_k$ onto this normal. Hence, $$ d(X_k, {\mathcal V}_k) = \frac{|\langle R_k,X_{k}\rangle|}{\euclidnorm{R_k}} = \frac{1}{\euclidnorm{R_k}}. $$ Let $\lambda_{k}({\bf M}^* {\bf M})$ be the eigenvalues of ${\bf M}^* {\bf M}$. Then, $$ \mathrm{Tr} \left( \left( {\bf M}^{-1} \right)^* {\bf M}^{-1} \right) = \sum_{i=1}^{N}{\lambda_{i}({\bf M}^* {\bf M})^{-1}}. $$ On the other hand, $$ \mathrm{Tr} \left( \left( {\bf M}^{-1} \right)^* {\bf M}^{-1} \right) = \sum_{1\leq k,\ell\leq N} \vert \left( {\bf M}^{-1} \right)_{k,\ell} \vert^2 = \sum_{k=1}^{N} \euclidnorm{R_k}^2 $$ completing the proof. \qed \end{proof} \begin{theorem} Let ${\bf V}$ be a square random Vandermonde matrix of dimension $N$ with i.i.d. phases distributed according to a measure $\nu$ with continuous density $f(x)$ over $[0,1]$. Then \begin{equation} \mathbb{E}\Big( \mathrm{tr}_{N}\log ({\bf{V}^{*}\bf{V}})\Big) = (N-1)\,\mathbb{E}\big(\log |1-e^{2\pi i\theta}|\big)-\log(N). \end{equation} \end{theorem} \begin{proof} Let $0\leq \lambda_{1}\leq \lambda_{2}\leq \ldots \leq \lambda_{N}$ be the eigenvalues of ${\bf V}^{*}{\bf V}$. Note that $\lambda_{1}>0$ with probability one. It is clear that \begin{equation} \log\det({\bf V}^{*}{\bf V})=\sum_{i=1}^{N}{\log(\lambda_{i})}.
\end{equation} On the other hand, $$ \det({\bf V}^{*}{\bf V})=\frac{1}{N^{N}}\prod_{1\leq p<q\leq N}{|e^{2\pi i\theta_{p}}-e^{2\pi i\theta_{q}}|^2} $$ hence \begin{equation} \log\det({\bf V}^{*}{\bf V})=\sum_{p<q}{2\log(|e^{2\pi i\theta_{p}}-e^{2\pi i\theta_{q}}|)}-N\log(N). \end{equation} Since the phases are identically distributed, the expectation $\mathbb{E}\big[\log\det({\bf V}^{*}{\bf V})\big]$ equals \begin{equation}\label{eq1} \frac{N(N-1)}{2}\,\,\mathbb{E}\Big(2\log |e^{2\pi i\theta_{p}}-e^{2\pi i\theta_{q}}| \Big)-N\log(N) \end{equation} which, since $|e^{2\pi i\theta_{p}}-e^{2\pi i\theta_{q}}|=|1-e^{2\pi i(\theta_{q}-\theta_{p})}|$, is \begin{equation} N(N-1)\,\mathbb{E}\Big(\log |1-e^{2\pi i\theta}| \Big)-N\log(N). \end{equation} Since for every positive definite Hermitian matrix ${\bf A}$ we have that $\mathrm{Tr}\log({\bf A})=\log\det({\bf A})$, in particular we have that \begin{equation}\label{eq2} \mathrm{tr}_{N}\log({\bf V}^{*}{\bf V})=\frac{1}{N}\log\det({\bf V}^{*}{\bf V}). \end{equation} Combining (\ref{eq1}) and (\ref{eq2}) we see that \begin{equation} \mathbb{E}\Big(\mathrm{tr}_{N}\log({\bf V}^{*}{\bf V})\Big)=(N-1)\,\mathbb{E}\big(\log |1-e^{2\pi i\theta}| \big)-\log(N). \end{equation} \qed \end{proof} \begin{theorem}\label{momentsinv} Let ${\bf V}$ be a square $N\times N$ random Vandermonde matrix. Then for every $N\geq 2$, the matrix ${\bf V^{*}}{\bf V}$ is invertible with probability $1$ and $$ \mathbb{E}\big(\mathrm{tr}_{N}\big(({\bf V^{*}}{\bf V})^{-p}\big)\big)=\infty $$ for every $p\geq 1$. \end{theorem} \begin{proof} It is enough to prove the case $p=1$ since the other cases follow from this one. As we mentioned in the introduction, the matrix ${\bf V^{*}}{\bf V}$ is invertible with probability one since its determinant is non--zero with probability one. From the trace identity in the proof of Lemma \ref{lemma1} we see that $$ \mathbb{E}\big(\mathrm{tr}_{N}\big(({\bf V^{*}}{\bf V})^{-1}\big)\big) = \frac{1}{N}\sum_{m,n=1}^{N}{\mathbb{E}\big( |\mathbf{M}(m,n)|^2\big)} $$ where $\mathbf{M}=\mathbf{V}^{-1}$.
In particular, $$ \mathbb{E}\big(\mathrm{tr}_{N}\big(({\bf V^{*}}{\bf V})^{-1}\big)\big) \geq \frac{1}{N}\mathbb{E}\big(|\mathbf{M}(1,N)|^2\big). $$ Now using Theorem \ref{GEthm} with $n=N$, for which $\sigma^1_{0}=1$, we see that $$ \mathbb{E}\big(|\mathbf{M}(1,N)|^2\big)=\Bigg( \frac{1}{2\pi}\int_{0}^{2\pi}{\frac{d\theta}{|1-e^{i\theta}|^2}}\Bigg)^{N-1}=\infty. $$ \qed \end{proof} \section{Maximum Eigenvalue}\label{maxeig} Let ${\bf X}$ be the $L\times L$ matrix defined as ${\bf X} := {{\bf V}^{(2)}}^{*} {\bf V}^{(2)}$. Suppose that the phases $(\theta_k,\psi_k)$ are selected i.i.d. on $[0,1]^2$ with a continuous density. We further suppose, as in \cite{TW}, that $(\theta_k,\psi_k) - (\theta_1,\psi_1)$ given $\theta_1,\psi_1$ has a conditional density (which exists for all $(\theta_1,\psi_1)$ if it exists for one) and denote this density by $f$, ignoring its dependence on $(\theta_1,\psi_1)$, as it only appears through $\maxnorm{f}$. It can be shown that ${\bf X}$ has the same eigenvalues as the matrix ${\bf A}$ whose entries are \begin{equation}\label{eqn_Amat} A(k,m) := D_N\Big(2\pi(\theta_k - \theta_m)\Big) D_N\Big(2\pi(\psi_k - \psi_m)\Big) \end{equation} where $$ D_N(x) := \frac{\sin(\frac{N}{2}x)}{N\sin(\frac{x}{2})} $$ is the Dirichlet kernel (see e.g. \cite{TW}). Similarly, in the case $d\geq 3$ we obtain a product of $d$ Dirichlet kernels. Subsequently, ${\bf A}$ is used to construct upper and lower bounds for the maximum eigenvalue $\lambda_L$. \par We now proceed to obtain asymptotic upper and lower bounds for $d$--fold Vandermonde matrices. We first focus on the case $d=2$ and retain the notation of Section \ref{sec:rand_mat_ess}. In what follows we prove the following Theorem. \begin{theorem} Let ${\bf V}^{(d)}$ be the $N^{d}\times L$ $d$--fold random Vandermonde matrix defined in (\ref{eqn_vandergendef}). Let $\lambda_L$ be the maximum eigenvalue of the matrix $({\bf V}^{(d)})^{*} ({\bf V}^{(d)})$.
If $\lim_{N,L\to\infty}\frac{L}{N^{d}}=\beta \in (0,\infty)$, there exists a constant $C_e$ such that for all $C > C_e$ \begin{equation}\label{eqn_maxupperbnd} \mathbb{P} \Big(\lambda_L \geq C\log^{d}(N) + u \Big) \leq e^{-u} \frac{N^d}{N^{\delta \log^{d-1}(N)}} \end{equation} where $\delta = C - C_e$. \end{theorem} \begin{proof} The line of argument follows that of \cite{TW}. We begin with the following upper bound for the Dirichlet function proved in \cite{TW}, \begin{equation} \label{eq_char} \babs{\frac{\sin(\frac{N}{2}\abs{x})}{N\sin(\frac{\abs{x}}{2})}}\leq\sum_{k=1}^{N/2}{\,\frac{1}{k}\,\mathbf{1}_{\big[\frac{2\pi(k-1)}{N},\frac{2\pi k}{N}\big)}(\abs{x})} \end{equation} where $\mathbf{1}_{B}$ is the indicator function on the Borel set $B$. To apply the bound, let $p_{a,b}$, with $a,b\in\mathbb{Z}$, be the probability that $$ \theta_k - \theta_1\in [(a-1)/N,a/N] $$ and $$ \psi_k - \psi_1\in [(b-1)/N,b/N]. $$ Define $$ q_{a,b} := p_{a,b} + p_{-a,b} + p_{a,-b} + p_{-a,-b}. $$ Then, it is easy to see from the union bound that \begin{equation} q_{a,b}\leq \frac{4}{N^{2}} \norm{f}_\infty = \frac{C_q}{N^2} \label{eqn_qbnd} \end{equation} where the function $f$ is the probability density over the unit square $[0,1]^{2}$ and $C_{q}$ a constant. Next, for the magnitude of a term in the first row, corresponding to $\theta_1,\psi_1$, we find that \begin{equation} \mathbb{E} \big(\exp(Xt)\mid \theta_1, \psi_1\big) \leq \sum_{a=1}^{N/2} \sum_{b=1}^{N/2} q_{a,b} e^{\frac{t}{ab}} \leq 1 + (e-1) \sum_{a=1}^{N/2} \sum_{b=1}^{N/2} q_{a,b} \frac{t}{ab} \end{equation} where $X$ is defined as $$ X := \Big| D_N(2\pi(\theta_k - \theta_1)) D_N(2\pi(\psi_k - \psi_1)) \Big| $$ for $k\neq 1$; here the second inequality uses $e^x\leq 1+(e-1)x$ for $0\leq x\leq 1$ (valid for $0\leq t\leq 1$) together with $\sum_{a,b} q_{a,b}\leq 1$, and the upper bound does not depend on $k$ or $(\theta_1,\psi_1)$.
If $R$ is any row sum of the magnitudes of the entries of the matrix ${\bf A}$, it follows that \begin{equation} \mathbb{E}\big(\exp(R)\big) \leq e \Bigg( 1 + C_q \frac{(e-1)}{N^2} H^2_{N/2} \Bigg)^{L-1} \leq e^{\beta_N C_q (e-1) H^2_{N/2}} \label{eqn_Rbnd} \end{equation} by taking $t=1$ and using the facts that $\left( 1 + x/y \right)^y \leq \exp(x)$ and $\beta_N \rightarrow \beta$ as $N \rightarrow \infty$, where $H_p:=\sum_{k=1}^p 1/k$. Then $$ H_N = \log N + \gamma + \frac{1}{2N} + O(N^{-2}) $$ where $\gamma$ is the Euler--Mascheroni constant. It follows that \begin{equation} \mathbb{E} \big( \exp(R)\big) \leq e^{C_e \left( \log N \right)^2} = N^{C_e \log N} \end{equation} for some suitable constant $C_e > 0$. Applying the union bound to the maximum row sum $Y$ and then Markov's inequality with $C \geq C_e + \delta$, we observe that \begin{equation} \mathbb{P}\Big(Y \geq C \log^{2}(N) + u\Big) \leq e^{-u} \frac{N^2}{N^{C\log N}} N^{C_e \log N} = e^{-u} \frac{N^2}{N^{\delta\log N}}. \label{eqn_unionbnd} \end{equation} Since the maximum eigenvalue is upper bounded by the maximum row sum of magnitudes, we have $\lambda_L \leq Y$. This concludes the proof for the case $d=2$. \par For the case $d > 2$, one obtains a $d$--fold product of Dirichlet functions, so that the exponent of the harmonic number $H_{N/2}$ is $d$ instead of $2$ in (\ref{eqn_Rbnd}), and $N^2$ is replaced by $N^d$ in (\ref{eqn_qbnd}). The constant terms are modified accordingly, and with these changes carried through to (\ref{eqn_unionbnd}), where $2$ is again replaced with $d$, the result follows as before. \qed \end{proof} The following Corollary is stated without proof. \begin{cor} There exists a positive constant $B$ such that $$ \mathbb{E}(\lambda_L) \leq B \log^{d} (N) + o(1). $$ \end{cor} \subsection{Lower Bound} The purpose of this subsection is to present the following lower bound for the maximum eigenvalue $\lambda_L$, in which we suppose that the phases are distributed according to a joint continuous density $f$ bounded away from zero.
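Before stating the lower bound, the logarithmic (in $N$) scale of $\lambda_L$ is easy to probe numerically. The following sketch is our own illustration, not part of the proofs: it assumes the entrywise definition $V^{(2)}\big((p,q),k\big) = e^{2\pi i(p\theta_k+q\psi_k)}/N$ (so that each column is a unit vector), and the helper name \texttt{vandermonde\_2fold} is ours.

```python
import numpy as np

def vandermonde_2fold(N, L, rng):
    """d = 2 random Vandermonde matrix: row index (p, q), column k has
    entry exp(2*pi*1j*(p*theta_k + q*psi_k)) / N, so columns are unit vectors."""
    theta, psi = rng.random(L), rng.random(L)
    p = np.arange(N)
    A = np.exp(2j * np.pi * np.outer(p, theta))   # N x L, powers of theta
    B = np.exp(2j * np.pi * np.outer(p, psi))     # N x L, powers of psi
    return (A[:, None, :] * B[None, :, :]).reshape(N * N, L) / N

rng = np.random.default_rng(0)
N = 12
L = N * N                                  # L / N^2 = beta = 1
V = vandermonde_2fold(N, L, rng)
lam = np.linalg.eigvalsh(V.conj().T @ V)   # eigenvalues of X = V* V, ascending
# trace(V* V) = L, so the mean eigenvalue is 1; the largest exceeds 1
# but stays on a (poly)logarithmic scale rather than growing like a power of N.
print(f"lambda_max = {lam[-1]:.3f}, mean eigenvalue = {lam.mean():.3f}")
```

Since $\mathrm{tr}({\bf X}) = L$, the mean eigenvalue is exactly $1$, which gives a quick consistency check on the construction.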
\begin{theorem} Let ${\bf V}^{(d)}$ be the $N^{d}\times L$ $d$--fold random Vandermonde matrix and let $\lambda_L$ be its maximum eigenvalue. If $N,L \rightarrow \infty$ such that $\lim_{L,N\to\infty}\frac{L}{N^{d}}=\beta\in (0,\infty)$, there exists a constant $K$ such that \begin{equation} \mathbb{P}\Big( \lambda_L \geq K\frac{ \log N^d}{ \log \log N^{d}}\Big) = 1 - o(1). \label{eqn_maxlowbnd} \end{equation} \end{theorem} \begin{proof} We rely on the equivalent matrix given in (\ref{eqn_Amat}), which is an $L \times L$ matrix. For notational simplicity, we specialize to the case $d=2$ since the case $d\geq 3$ follows a similar argument. Throughout this proof $\lambda_1(\cdot)$ denotes the largest eigenvalue of a matrix. To construct our lower bound by analogy with the $1$--dimensional case, we want to obtain a large number of points $(\theta, \psi)$ that lie close together, that is, points such that $\vert \theta_k - \theta_m \vert$ and $\vert \psi_k - \psi_m \vert$ are both small. If this is the case for some large set of indices $\Gamma$, then it follows that $A(k,m) \approx 1$ for all $k$ and $m$ in $\Gamma$. In addition, the matrix ${\bf A}$ is symmetric with diagonal values 1 and so it follows that the eigenvalues of ${\bf A}$ interlace with the eigenvalues of any principal sub--matrix (see \cite{Wilkinson}). In particular, if we define the matrix ${\bf A}_\Gamma$ as $$ {\bf A}_\Gamma(k,m):= A(k,m) $$ for $k,m\in \Gamma$ then $\lambda_1({\bf A}_\Gamma) \leq \lambda_1({\bf A})$. Hence, if $\abs{\theta_k - \theta_m}$ and $\abs{\psi_k-\psi_m}$ are smaller than $\epsilon/N$, for some $\epsilon>0$, for all $k$ and $m$ in $\Gamma$, then testing ${\bf A}_\Gamma$ against the normalized all--ones vector gives $$ D_N^2(2 \pi \epsilon/N) \times \vert \Gamma \vert \leq \lambda_1({\bf A}_\Gamma) \leq \lambda_1({\bf A}). $$ Divide the unit square $[0,1] \times [0,1]$ into $N^2_\epsilon$ equal squares with sides of length $\epsilon/N$, so that $N_\epsilon = N/\epsilon$. Take $\Gamma$ to be the set of indices corresponding to the square containing the largest number of points. By hypothesis, the joint measure has a continuous density $f$ bounded away from 0 throughout $[0,1]^2$.
Therefore, it follows that $$ \eta:=\min \,\{f(\theta,\psi)\,:\,(\theta,\psi)\in [0,1]^2\}>0. $$ The $L$ phase points $(\theta,\psi) \in [0,1]\times[0,1]$ are i.i.d. with density at least $\eta$, so each square receives at least $\epsilon^2 L N^{-2} \eta$ points on the average, which is $O(1)$ since $L/N^2 \rightarrow \beta$ as $L,N \rightarrow \infty$. This is an occupancy model and we are interested in the square with the maximum number of points. The number of such points is at least \begin{eqnarray*} k(N^2) & = & \alpha \frac{ \log N_\epsilon^2}{ \log \log N_\epsilon^2} \\ & \geq & \alpha (1 - o(1))\frac{\log N}{\log \log N} \end{eqnarray*} with probability $1- o(1)$ for any $\alpha \in (0,1)$, independently of the mean number of points per square (see e.g. \cite{Raab}). Since $\epsilon > 0$ and $\alpha \in (0,1)$ are both arbitrary, the lower bound on $\lambda_1({\bf A}_\Gamma)$, and hence on $\lambda_1({\bf A})$, follows with $K$ any constant in $(0,1)$. The proof is complete. \qed \end{proof} \section{Minimum Eigenvalue}\label{sec_mineig} In this Section we focus on the behavior of the minimum eigenvalue $\lambda_{1}$ for the case $d=1$. Consider the matrix ${\bf A}$, the analogue of (\ref{eqn_Amat}) for $d=1$ (a single Dirichlet kernel), and all its $2\times 2$ principal sub--matrices. These matrices are symmetric and the minimum eigenvalue for the sub--matrix determined by phases $\theta_k$ and $\theta_\ell$ is denoted $\lambda_{k,\ell}$. In other words, $\lambda_{k,\ell}$ is the smaller root of the equation $$ \left( 1 - \lambda_{k,\ell} \right)^2 = D_N(2\pi(\theta_k - \theta_\ell))^2. $$ Taking square roots gives $\lambda_{k,\ell} = 1 - \abs{D_N(2\pi(\theta_k - \theta_\ell))}$, and applying again the interlacing Theorem (see \cite{Wilkinson} for a reference), together with $\abs{D_N}\geq D_N$, we obtain $$ \lambda_1(N) \leq \min_{k\neq\ell} \Big( 1 - D_N(2\pi(\theta_k-\theta_\ell)) \Big). $$ Let $\omega$ and $\alpha$ be defined as $\omega := N^2 \min_{k\neq\ell} \vert \theta_k - \theta_\ell \vert$ and $$ \alpha := \min_{k\neq\ell}\Big(1 -D_N(2\pi(\theta_k-\theta_\ell))\Big).
$$ From a result of de Finetti (see \cite{Feller_Vol2} for a reference), $$ \mathbb{P} \Big(\min_{k\neq\ell} \vert \theta_k - \theta_\ell \vert > \delta\, \Big) = \left( 1 - N \delta \right)_+^{N-1} $$ where $ \left( x \right)_+ := \max \left\{ x , 0 \right\}$. Substituting $\eta/N^2$ for $\delta$ and taking the limit as $N\to\infty$, we obtain that $$ \lim_{N\to\infty} \mathbb{P} \Big( \min_{k\neq\ell} \vert \theta_k - \theta_\ell \vert > \frac{\eta}{N^2} \,\Big)=e^{-\eta}. $$ On taking the Taylor expansion of $D_N$ to second order around the origin we obtain $$ \min_{k\neq\ell} \Big(1 - D_N(2\pi(\theta_k-\theta_\ell))\Big) = \frac{\pi^2\omega^2}{6 N^2} + o(N^{-2}). $$ Therefore, for every fixed $\eta>0$ the following limit holds, $$ \lim_{N\to\infty}\mathbb{P}\Big( \alpha \leq \frac{\pi^2\eta^2}{6 N^2} \Big) = 1 - e^{-\eta}. $$ It follows that $\lambda_1(N) = O(N^{-2})$ in probability as $N \rightarrow \infty$. As we now show, the approach to zero is in fact much more rapid. To obtain better estimates for $\lambda_1(N)$ as $N \rightarrow \infty$, we now consider the maximum eigenvalue of the inverse matrix. Given an $N\times N$ matrix ${\bf M}$, we have the following matrix norms $\spnorm{{\bf M}}:= \sup_{j} \big( \sum_{k=1}^N \vert M_{k,j} \vert \big)$ and $\opnorm{{\bf M}} := \sqrt{\lambda_N({\bf M}^* {\bf M})}$. The following inequality is well known (see \cite{HornJohnson} for more details), $$ \frac{1}{\sqrt{N}} \spnorm{{\bf M}} \leq \opnorm{{\bf M}} \leq \spnorm{{\bf M}}. $$ We now prove the following Lemma which is used later. \begin{lemma}\label{lemma_poly} Let $A(z) = a_0 z^n + a_1 z^{n-1} + \ldots + a_n$ be a complex polynomial and let $A^* := \max_{\lvert z \rvert = 1} \lvert A(z) \rvert$ be its maximum on the unit circle. Then \begin{equation} \frac{\lvert a_n \rvert + \ldots + \lvert a_0 \rvert}{n+1} \leq A^* \leq \lvert a_n \rvert + \ldots + \lvert a_0 \rvert.
\label{eqn_sstar} \end{equation} \end{lemma} \begin{proof} The second inequality follows immediately from the triangle inequality, so we concentrate on the first one. It is enough to show that \begin{equation} \lvert a_k \rvert \leq A^* \label{eqn_aineq} \end{equation} for all $k$. By Cauchy's integral Theorem, using the fact that $$ \int_{\lvert z \rvert = 1}{z^{-i}\,dz} = 0 $$ for all $i\neq 1$, we obtain that \begin{equation} a_k = \frac{1}{2\pi i} \int_{\lvert z \rvert = 1} \frac{A(z)}{z^{n-k+1}} dz \end{equation} for all $k$. Therefore, \begin{equation} \lvert a_k \rvert \leq \frac{1}{2\pi} \int_{0}^{2\pi} \lvert A(z(\theta)) \rvert d\theta \leq A^* \nonumber \label{eqn_polyineq} \end{equation} where the first inequality follows by bounding the line integral with respect to $\theta$ around the unit circle, and the second inequality follows from the definition of $A^*$. Summing the inequality over $k$ we obtain the required lower bound. \qed \end{proof} In the following steps we find a bound on $\lambda_1(N)$ in terms of the maximum of a polynomial with roots on the unit circle. We begin with some definitions. Let $z_k = e^{2\pi i \theta_k}$ be the values determining the random Vandermonde matrix as in (\ref{eqn_Vandermondedefn}). Let $P(z)$ be the polynomial defined as $$ P(z) := \prod_{k=1}^N \left( z - z_k \right). $$ We further denote $$ P_p(z) := \frac{P(z)}{z - z_p}. $$ Let ${\bf M}={\bf V}^{-1}$ be the inverse of the random Vandermonde matrix and let $M(p,q)$ denote its entries. Define \begin{equation} \beta_p := \sum_{q=1}^{N}{|M(p,q)|}. \end{equation} By Theorem \ref{GEthm}, we know that \begin{equation} \label{eqn_dstar} \beta_p = \frac{\sqrt{N}}{\prod_{q\neq p}{|z_p-z_q|}} \Big(|\sigma_0^p| + \ldots + |\sigma^p_{N-1}| \Big). \end{equation} In addition, let \begin{equation} T_p (z):= \frac{P_p(z)}{|P_p(z_p)|} = \prod_{q\neq p}{\frac{\left( z - z_q \right) }{|z_p - z_q|}}.
\label{eqn_Tpdefn} \end{equation} It follows from (\ref{eqn_dstar}) and (\ref{eqn_sstar}) that $$ \frac{\beta_p}{N} \leq \max_{|z|=1} \Big( \sqrt{N} |T_p(z)| \Big) \leq \beta_p $$ for all $p=1,\ldots,N$. The following Lemma is a direct consequence of Hadamard's inequality (see \cite{HornJohnson}). \begin{lemma} \label{lem_hadbnd} Let $z_1,\ldots,z_N$ be distinct points on the unit complex circle. Then there exists $p_0 \in \left\{ 1, \ldots, N \right\}$ such that, \begin{equation} \prod_{q\neq p_0} \lvert z_{p_0} - z_q \rvert \leq N. \end{equation} \end{lemma} \begin{proof} Assume this is not true. Then for every $p$ \begin{equation} \prod_{q\neq p} \lvert z_{p} - z_q \rvert > N. \end{equation} Hence, $\sum_{q\neq p}{\log|z_p-z_q|}> \log N$ for every $p$. Let ${\bf G}$ be the Vandermonde matrix whose entries are $G(i,j)=z_{j}^{i-1}$. Then $|\det({\bf G})|=\prod_{1\leq p <q \leq N}{|z_p-z_q|}$ and $$ \log|\det({\bf G})|=\sum_{1\leq p<q\leq N}{\log|z_p-z_q|}=\frac{1}{2}\sum_{p=1}^{N}\sum_{q\neq p}{\log|z_p-z_q|}>\frac{N\log N}{2}. $$ Thus, $|\det({\bf G})|> N^{N/2}$, which violates Hadamard's inequality: each column of ${\bf G}$ has Euclidean norm $\sqrt{N}$, so $|\det({\bf G})|\leq N^{N/2}$. \qed \end{proof} We are now in a position to prove the following Lemma, which provides upper and lower bounds on the minimum eigenvalue of a Vandermonde matrix in terms of the polynomial $T_p$ defined in (\ref{eqn_Tpdefn}). \begin{lemma}\label{lem_polyineq} Let $\lambda_1(N)$ be the minimum eigenvalue of the random matrix ${\bf V}^*{\bf V}$ and $T_p(z)$ be as defined above. Then, \begin{equation} \frac{1}{N^3 \max_p \left\{ \max_{\vert z \vert = 1} \vert T_p(z) \vert^2 \right\}}\leq \lambda_1(N) \leq \frac{1}{\max_p \left\{ \max_{\vert z \vert =1} \lvert T_p(z) \rvert^2 \right\}}. \end{equation} Moreover, \begin{equation} \lambda_1(N) \leq \frac{N^2}{\max_{\lvert z \rvert = 1} \prod_{q\neq p_0} \lvert z - z_q \rvert^2} \leq \frac{4N^2}{\max_{\lvert z \rvert = 1} \prod_{q} \lvert z - z_q \rvert^2}.
\label{eqn_upperbnd} \end{equation} \end{lemma} \begin{proof} Since $\spnorm{{\bf M}} = \max \left\{ \beta_p : p=1,\ldots,N \right\}$, it follows that $$ \frac{\spnorm{{\bf M}}}{N} \leq \sqrt{N} \max_{p} \left\{ \max_{|z|= 1} \,|T_p(z)|\right\} \leq \spnorm{{\bf M}}. $$ On the other hand, $$ \frac{1}{\spnorm{{\bf M}}^2} \leq \lambda_1(N) \leq \frac{N}{\spnorm{{\bf M}}^2} $$ from which we can deduce that \begin{equation} \frac{1}{N^3 \max_p \left\{ \max_{|z|= 1} |T_p(z)|^2 \right\}} \leq \lambda_1(N) \leq\frac{1}{\max_p \left\{ \max_{\vert z \vert =1} \lvert T_p(z) \rvert^2 \right\}}. \end{equation} Using Lemma \ref{lem_hadbnd}, we know that there exists $p_0$ such that $\prod_{q \neq p_0} \lvert z_{p_0} - z_q \rvert \leq N$. We thus obtain that \begin{equation} \max_{\lvert z \rvert = 1} \,|T_{p_0}(z)| \geq \max_{|z|=1} \left( \frac{1}{N} \prod_{q \neq p_0} \lvert z - z_q \rvert \right). \label{eqn_Tineq} \end{equation} Therefore, using the fact that $$ \max_{p} \max_{|z|=1} \,|T_{p}(z)| \geq \max_{\lvert z \rvert = 1} \,|T_{p_0}(z)| $$ and the first part of the Lemma, we see that \begin{equation} \lambda_1(N) \leq \frac{N^2}{\max_{\lvert z \rvert = 1} \prod_{q\neq p_0} \lvert z - z_q \rvert^2} \leq \frac{4N^2}{\max_{\lvert z \rvert = 1} \prod_{q} \lvert z - z_q \rvert^2} \end{equation} where the last inequality follows from the fact that $|z-z_{p_0}|\leq 2$. \qed \end{proof} \subsection{Stochastic Construction} Before stating our upper bound for the minimum eigenvalue, we introduce the following definitions. First, we define a random sequence via a realization of the Brownian bridge $W^o$ on $[0,2\pi]$, which satisfies $W^o(0) = W^o(2\pi) = 0$ (see \cite{Billingsley} for details on the Brownian bridge construction).
A shift $\varphi$ of the Brownian bridge is defined by \begin{equation*} W^o_\varphi(\theta) := \begin{cases} W^o(\varphi+\theta) - W^o(\varphi), & \text{if \,\,\,} 0\leq \theta< 2\pi -\varphi,\\ W^o(\varphi+\theta-2\pi) - W^o(\varphi), & \text{if \,\,\,} 2\pi - \varphi \leq \theta < 2 \pi. \end{cases} \end{equation*} Further, define the infinite sequence $\Phi:=\{\varphi_r\}_{r\geq 0}$ to be the sequence of dyadic phases on $[0,2\pi]$ (i.e., the points $2\pi k/2^{j}$, enumerated in some fixed order). Given a realization of the Brownian bridge, define the following function, $$ I_\varphi := \int_0^{2\pi} W^o_\varphi(\theta) \frac{\sin \theta}{1 - \cos \theta} d \theta $$ for $\varphi\in[0,2\pi]$. Note that it is not immediately clear that the above integral is well defined, since the factor $\frac{\sin \theta}{1 - \cos \theta}=\cot(\theta/2)$ behaves like $2\theta^{-1}$ near $0$, with a corresponding singularity near $2\pi$. We address this matter shortly. In Figure \ref{fig_brownrlse}, we show a realization of the Brownian bridge and a shifted version with $\varphi=3\pi/2$. In Figure \ref{fig_integ}, we show $I_\varphi$ for the same realization. \begin{figure}[!Ht] \begin{center} \includegraphics[width=10cm]{bbridge.eps} \caption{Realization of the Brownian bridge and a shifted version. In red we have the plot of the function $\frac{\sin\theta}{1-\cos\theta}$, in blue $W^o$ and in green $W^o_{\varphi}$ with $\varphi=\frac{3\pi}{2}$.} \label{fig_brownrlse} \end{center} \end{figure} \begin{figure}[!Ht] \begin{center} \includegraphics[width=10cm]{Itheta.eps} \caption{$I_\varphi$ for the previous Brownian bridge realization.} \label{fig_integ} \end{center} \end{figure} Using the sequence $\Phi$ and the same realization of $W^o$ we construct a sequence of random variables ${\bf I}=\{I_{r}\}_{r\geq 0}$ as $I_r := I_{\varphi_r}$. We show below that $I_\varphi$ is continuous on the interval $[0,2\pi]$, so there exists a value $\varphi^*$ at which $I_\varphi$ attains its maximum value, which we denote as $I^*$.
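The construction above is easy to simulate. The following sketch is our own discretization (the grid size, seed, truncation parameter \texttt{eps}, and helper names \texttt{shift} and \texttt{I\_phi} are arbitrary choices): it draws a bridge realization on a uniform grid, forms the shifted bridge $W^o_\varphi$, and evaluates a truncated version of $I_\varphi$, mirroring Figures \ref{fig_brownrlse} and \ref{fig_integ}.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 4000                                   # grid points on [0, 2*pi]
t = np.linspace(0.0, 2 * np.pi, M + 1)
steps = rng.normal(0.0, np.sqrt(2 * np.pi / M), M)
B = np.concatenate([[0.0], np.cumsum(steps)])
W = B - (t / (2 * np.pi)) * B[-1]          # Brownian bridge: W[0] = W[-1] = 0

def shift(W, k):
    """W^o_phi for phi = t[k]: wrap the path around 2*pi and re-anchor at phi."""
    return np.concatenate([W[k:], W[1:k + 1]]) - W[k]

def I_phi(W, k, eps=0.05):
    """Truncated I_phi: integrate W^o_phi(theta) * cot(theta/2) over [eps, 2*pi - eps]."""
    Ws = shift(W, k)
    mask = (t > eps) & (t < 2 * np.pi - eps)
    weight = np.sin(t[mask]) / (1.0 - np.cos(t[mask]))
    return np.sum(Ws[mask] * weight) * (t[1] - t[0])   # Riemann sum

k = 3 * M // 4                             # phi = 3*pi/2
print(I_phi(W, k))
```

The shifted path again vanishes at both endpoints, which is the property the wrap-around definition is designed to preserve.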
Since $\Phi$ is dense on the unit circle, it follows that \begin{equation}\label{eqqmax} I^{*} = \sup \left\{ I_r : r \in {\mathbb N} \right\} \end{equation} and its distribution is determined via the infinite sequence $I_r$. We now show that the above random function is well defined and that the integrals are a.s. finite. \begin{lemma} Given a realization of the Brownian bridge $W^o$, a.s. the following integrals exist for all $\varphi \in [0, 2\pi)$ $$ |I_\varphi|=\Bigg|\int_0^{2\pi} W^o_\varphi(\psi) \frac{\sin \psi}{1 - \cos \psi} d \psi \Bigg| < \infty. $$ In addition, the function $\varphi\mapsto I_\varphi$ is continuous. \end{lemma} \begin{proof} When $\varphi=0$ we write the above integral as $I$. The Levy global modulus of continuity tells us that standard Brownian motion $B$ on $[0,2\pi)$ satisfies almost surely $$ \limsup_{\delta\to 0}\, \sup_{0 \leq t \leq 2 \pi - \delta} \frac{\abs{B(t+\delta) - B(t)}}{w(\delta)} = 1 $$ where $w(\delta) = \sqrt{2 \delta \log \frac{1}{\delta}}$ (see \cite{RogersWilliams} for a proof of this fact). Since $W^o$ is by definition, $$ W^o(\psi) = B(\psi) - \frac{\psi}{2 \pi} B(2 \pi) $$ our argument is the same no matter which value of $\varphi$ is chosen because the Levy modulus applies to the entire sample path. We therefore set $\varphi = 0$. By definition of the Levy modulus, almost surely there exists $\delta_2 > 0$ such that $$ \sup_{0 \leq t \leq 2\pi-\delta}\frac{\abs{B(t+\delta) - B(t)}}{w(\delta)} \leq 2 $$ for all $0<\delta\leq\delta_2$. Therefore, \begin{equation} a(\delta):=\sup_{\psi}\,\abs{W^o(\psi+\delta) - W^o(\psi)} \leq 2 w(\delta) + \frac{|B(2\pi)|}{2\pi} \delta. \end{equation} We may therefore split the integral as, \begin{eqnarray*} I & = & \int_{\delta_2}^{2\pi- \delta_2} W^o(\psi) \frac{\sin \psi}{1 - \cos \psi} d \psi + \int_0^{\delta_2} W^o(\psi)\frac{\sin \psi}{1 - \cos \psi} d \psi \\ & + & \int_{2 \pi - \delta_2}^{2\pi} W^o(\psi) \frac{\sin \psi}{1 - \cos \psi} d \psi.
\end{eqnarray*} The first integral is finite, being the integral of a continuous function over the interval $[\delta_2, 2\pi - \delta_2]$. We may further suppose that $\delta_2$ has been chosen such that $\abs{\psi \frac{\sin \psi}{1 - \cos \psi}} \leq 4$ for $0 < \psi < \delta_2$ with a corresponding inequality in a similar neighbourhood of $2 \pi$. By choice of $\delta_2$, we obtain that $$ \Big| \int_0^{\delta_2} W^o(\psi) \frac{\sin \psi}{1 - \cos \psi} d \psi \Big| \leq 8 \int_0^{\delta_2} \frac{a(\psi)}{\psi} d \psi = O(\delta_2^{1/3}) $$ for sufficiently small $\delta_2$. The same argument applies to the last integral. Since $w(\delta_2)$ gives a uniform bound, the result holds for all $\varphi \in [0, 2 \pi)$. Continuity in $\varphi$ follows by a similar argument, \begin{eqnarray} \vert I_\varphi - I_{\tilde{\varphi}} \vert & \leq & \Big| \int_\delta^{2\pi- \delta} \left( W^o_\varphi - W^o_{\tilde{\varphi}} \right) \frac{\sin \psi}{1 - \cos \psi} d \psi \Big| \\ & + & \int_0^{\delta} \abs{W^o_\varphi(\psi)} \Big|\frac{\sin \psi}{1 - \cos \psi}\Big| d \psi \nonumber \\ & + & \int_{2\pi - \delta}^{2\pi} \abs{W^o_\varphi(\psi)} \Big|\frac{\sin \psi}{1 - \cos \psi}\Big| d\psi \nonumber \\ & + & \int_0^{\delta} \abs{W^o_{\tilde{\varphi}}(\psi)} \Big|\frac{\sin \psi}{1 - \cos \psi}\Big| d\psi \nonumber \\ & + & \int_{2\pi - \delta}^{2\pi} \abs{W^o_{\tilde{\varphi}}(\psi)} \Big|\frac{\sin \psi}{1 - \cos \psi}\Big| d\psi. \nonumber \end{eqnarray} Provided that $0 < \delta < \delta_2$, the tail integrals are all $O(\delta^{1/3})$ as before.
We bound the first integral by two positive integrals, to obtain \begin{eqnarray*} \Big|\int_\delta^{2\pi- \delta} \left( W^o_\varphi - W^o_{\tilde{\varphi}} \right) \frac{\sin \psi}{1 - \cos \psi} d \psi \Big| & \leq & 2 \sup \,\abs{W^o_\varphi(\psi) - W^o_{\tilde{\varphi}}(\psi)} \big[ \log(1 - \cos\psi) \big]_\delta^\pi \\ & \leq & 6\,a(\delta) \left( \log 2 - \log (1 - \cos \delta) \right) \end{eqnarray*} provided $\abs{\varphi - \tilde{\varphi}} < \delta$. Finally, $a(\delta) \left( \log 2 - \log (1 - \cos \delta) \right) \to 0$ as $\delta \to 0$, which implies continuity. \qed \end{proof} It therefore follows that $I^*$ is well defined. Let $T_{N}(\varphi)$ be defined as \begin{equation} T_{N}(\varphi) := \frac{1}{\sqrt{N}}\log|P(e^{i\varphi})|^2 = \frac{1}{\sqrt{N}} \sum_{q=1}^N \log \Big(2(1 - \cos(\varphi-\theta_q))\Big) \label{eqn_TNdefn} \end{equation} where $P(z)$ is a random polynomial on the unit circle with roots $\{e^{i\theta_{q}}\}_{q=1}^{N}$ as before. Furthermore, define the infinite sequence of random variables $T_N(\varphi_r)$ by evaluating the previous expression at the phases of $\Phi$. Note that $T_N$ cannot be defined as a random function in either $C[0,2\pi]$ or in $D[0,2\pi]$, as its discontinuities are not of the first kind. We remark that since there are only countably many $\varphi_r$ and the phases $\theta_q$ are i.i.d. and uniformly distributed, no $\varphi_r$ coincides with any $\theta_q$ almost surely, so that the sum exists. We further observe that $$ \int_0^{2 \pi}{\log \big(2(1 - \cos\psi)\big)\,d\psi} = 0 $$ and $$ \int_0^{2\pi}{\log^2\big(2(1 - \cos\psi)\big)\,d\psi} < \infty. $$ Hence each summand in (\ref{eqn_TNdefn}) is centered with finite variance, and the sequence $T_N(\varphi_r)$ satisfies the central limit Theorem as a function of $N$ for every $r \in {\mathbb Z}_+$.
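For uniform phases the summands $\log\big(2(1-\cos(\varphi-\theta_q))\big)=2\log|1-e^{i(\varphi-\theta_q)}|$ have mean zero and variance $\frac{1}{2\pi}\int_0^{2\pi}\log^2\big(2(1-\cos\psi)\big)\,d\psi = \pi^2/3$ (via the Fourier expansion $\log|1-e^{i\psi}| = -\sum_{k\geq 1}\cos(k\psi)/k$), so the central limit behaviour of $T_N(\varphi)$ is easy to check by simulation. A minimal sketch of our own (sample sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
N, trials = 200, 20000
theta = rng.random((trials, N))            # i.i.d. uniform phases on [0, 1)
# T_N(0) = N^{-1/2} * sum_q log(2 * (1 - cos(2*pi*theta_q)))
T = np.log(2.0 * (1.0 - np.cos(2.0 * np.pi * theta))).sum(axis=1) / np.sqrt(N)
print(f"mean = {T.mean():+.4f}, var = {T.var():.4f}, "
      f"predicted var = {np.pi**2 / 3:.4f}")
```

The sample mean should be near $0$ and the sample variance near $\pi^2/3 \approx 3.29$, consistent with the central limit Theorem for each fixed $\varphi_r$.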
We consider the sequence ${\bf T}_N:=\{T_N(\varphi_r)\}_{r\geq 0}$ in the sequence space ${\mathbb R}^\infty$ with metric $$ \rho_0({\bf x}, {\bf y}) = \sum_{\ell=0}^\infty \frac{ \abs{x_\ell - y_\ell}}{1 + \abs{x_\ell - y_\ell}} 2^{-\ell} $$ and using the ordering stated earlier. It is well known that this forms a Polish space (\cite{Billingsley68}). We now derive one more Lemma for use later on. \begin{lemma} \label{lem_Dctsmap} Let $Y$ be a function in $D[0,2\pi]$. Then $Y$ is Lebesgue measurable, and its integral exists, \begin{equation} \int_0^{2\pi} Y(s) ds < \infty. \end{equation} Furthermore, let $Y_n$ be a sequence of functions in $D[0,2\pi]$ such that $Y_n \rightarrow Y$ in $D$ (i.e., with respect to the Skorohod topology). Then \begin{equation} \int_0^{2\pi} Y_n(s) ds \rightarrow \int_0^{2\pi} Y(s)ds. \end{equation} \end{lemma} \begin{proof} The existence of the integral follows from Lemma 1, page 110 of \cite{Billingsley68} and the subsequent discussion, which show that functions in $D$ on a closed bounded interval are both Lebesgue measurable and bounded; both properties follow from the fact that such functions may be uniformly approximated by simple functions, a direct consequence of Lemma 1. Convergence follows from the Lebesgue dominated convergence theorem. This applies since, first, the sequence $Y_n$ is uniformly bounded by a constant, so the sequence is dominated; second, $Y$ is continuous a.e., and pointwise convergence holds at points of continuity as a consequence of convergence in $D$ (see \cite{Billingsley68}). \qed \end{proof} We now proceed to prove the following Theorem. \begin{theorem}\label{thm_weakconvergence} With the topology induced by the previous metric in ${\mathbb R}^\infty$, we have that the sequence ${\bf T}_{N}$ converges in distribution to the sequence ${\bf I}$ \begin{equation}\label{eqn_thmfin} {\bf T}_N \Rightarrow {\bf I} \end{equation} as $N\to\infty$.
\end{theorem} \begin{proof} We use Theorem 4.2 of \cite{Billingsley68}. Suppose that there is a metric space ${\mathcal S}$ with metric $\rho_0$ and sequences ${\bf T}_{N, \epsilon}$, ${\bf I}_\epsilon$ and ${\bf T}_N$ all lying in ${\mathcal S}$ such that the following conditions hold, \begin{eqnarray}\label{eqn_weakcond} {\bf T}_{N, \epsilon} & \Rightarrow & {\bf I}_\epsilon \quad (\text{as } N\to\infty, \text{ for each fixed } \epsilon) \\ {\bf I}_\epsilon & \Rightarrow & {\bf I} \quad (\text{as } \epsilon\to 0) \nonumber \end{eqnarray} together with the further condition that given arbitrary $\eta > 0$, \begin{equation}\label{eqn_probmetric} \lim_{\epsilon \rightarrow 0} \limsup_{N \rightarrow \infty} \prob{ \rho_0({\bf T}_{N, \epsilon}, {\bf T}_N) \geq \eta } = 0. \end{equation} Then it holds that ${\bf T}_N \Rightarrow {\bf I}$. First, we define ${\bf I}_\epsilon$ using a realisation of the Brownian bridge, $$ I_{r,\epsilon} := \big[ W^o_{\varphi_r}(\psi) \log 2(1- \cos \psi) \big]^{2\pi - \epsilon}_{\epsilon} - \int_\epsilon^{2\pi-\epsilon} W^o_{\varphi_r}(\psi) \frac{\sin \psi}{1 - \cos \psi} d \psi. $$ The definition of the other sequence is more involved and so we defer it for a moment. We have shown that the limit integrals exist a.s. and so we only need to show that the first term converges to 0. Since $\log \big[2 \left( 1 - \cos \psi \right)\big] = O(\log \epsilon)$ when $\epsilon$ is small and $\psi$ lies within $\epsilon$ of $0$ or $2\pi$, we may invoke the Levy modulus of continuity, wrapped around at $2\pi$, to obtain that this term is $O( a(\epsilon)\log \epsilon)$. Hence, coordinate convergence of the integrals holds so that $$ \Big| I_{r,\epsilon} + \int_\epsilon^{2\pi-\epsilon} W^o_{\varphi_r}(\psi) \frac{\sin \psi}{1 - \cos \psi} d \psi \Big| \Rightarrow 0 $$ and it follows that ${\bf I}_\epsilon \Rightarrow {\bf I}$ as $\epsilon\to 0$, since the sign of the integral is immaterial. We have thus demonstrated the second condition of (\ref{eqn_weakcond}).
Next we proceed by rewriting $T_N(\varphi_r)$ in terms of the empirical distribution function $F_N:[0,2\pi]\to [0,1]$ determined by $$ F_N(\psi) := \frac{\# \left\{ \theta_q : 0\leq \theta_q \leq \psi \right\}}{N}. $$ By the definition of $F_{N}$ and the Lebesgue--Stieltjes integral \begin{eqnarray*} T_N(\varphi_r) & = & \sqrt{N} \int_0^{2\pi} \log \big(2(1 - \cos(\varphi_r-\psi))\big) dF_N(\psi) \\ & = & \sqrt{N} \int_0^{2\pi} \log \big(2(1 - \cos \tilde{\psi})\big) dF_{N,\varphi_r}(\tilde{\psi}) \end{eqnarray*} where the change of variables $\tilde{\psi} = \psi - \varphi_r$ has been made. For $\psi \in [0,2\pi)$ we define $F_{N,\varphi}(\psi)$ as the ``cycled'' empirical distribution function of $F_{N}$ by \begin{equation*} F_{N,\varphi}(\psi) := \begin{cases} \frac{\#\left\{ \varphi \leq \theta_q < \varphi+\psi \right\} }{N}, & \text{if } 0 \leq \psi < 2\pi-\varphi, \\ \frac{\#\left\{ \varphi \leq \theta_q < 2\pi \right\}}{N} + \frac{\#\left\{ 0 \leq \theta_q \leq \psi-2\pi+\varphi \right\}}{N},& \text{if } 2\pi-\varphi \leq \psi < 2\pi. \end{cases} \end{equation*} To define the sequence $T_{N,\epsilon}(\varphi_r)$, we split the integral into two parts as in $\int^{2\pi - \epsilon}_\epsilon$ and $\int_0^\epsilon + \int_{2\pi - \epsilon}^{2\pi}$ and then use integration by parts on the first part, which yields the expression, \begin{eqnarray} T_{N,\epsilon}(\varphi_r) & := & \sqrt{N} \times \left( \left[ \left( F_{N,\varphi_r}(\psi) - \frac{\psi}{2\pi} \right) \log 2(1 - \cos \psi) \right]_\epsilon^{2\pi-\epsilon} \right) \\ \label{eqn_TNepsdefn} & - & \sqrt{N} \int_\epsilon^{2\pi - \epsilon} \left( F_{N,\varphi_r}(\psi) - \frac{\psi}{2\pi} \right)\frac{\sin \psi}{(1 - \cos \psi)} d\psi \nonumber . \end{eqnarray} For later use, we define $$ W_{N,\varphi}(\psi) := \sqrt{N} \left( F_{N,\varphi}(\psi) - \frac{\psi}{2\pi} \right).
$$ This is not quite equal to the original sum: although $$ \int_0^{2\pi} \log \big(2 \left( 1 - \cos \psi \right)\big) d \psi = 0, $$ the truncated $\psi$ terms do not give 0 but rather cancel with $\mu_\epsilon$, defined in a moment. We express the remainder as a sum, noting that we must include the mean, which is by symmetry, \begin{equation} \mu_\epsilon := \frac{2}{2\pi} \int_0^\epsilon \log \big(2\left( 1 - \cos \psi \right)\big) d \psi = \frac{2}{\pi} \left( \epsilon \log \epsilon - \epsilon +o(\epsilon) \right). \end{equation} Define $S_\epsilon(\varphi) := \left\{ \theta_q: \theta_q \in [\varphi-\epsilon,\varphi+\epsilon] \right\}$ so that the sum may be written as \begin{equation} Z_{N,\epsilon}(\varphi_r) := \frac{1}{\sqrt{N}} \sum_{\theta_q \in S_\epsilon(\varphi_r)} \log \big( 2(1 - \cos(\varphi_r-\theta_q))\big) - \sqrt{N} \mu_\epsilon. \label{eqn_Zvarepsdef} \end{equation} Denote the corresponding sequence as ${\bf Z}_{N,\epsilon}$. Taking expectations, we find that $$ \expect{Z_{N,\epsilon}(\varphi_r)} = \frac{N}{\sqrt{N}} \,\frac{1}{2\pi}\int_{-\epsilon}^\epsilon \log \big( 2 (1-\cos\psi) \big) d\psi - \sqrt{N} \mu_\epsilon = 0, $$ so that $\{Z_{N,\epsilon}(\varphi_r)\}_{r\geq 0}$ is a sequence of random variables with 0 mean. We finally write, \begin{equation} {\bf T}_N = {\bf T}_{N,\epsilon} + {\bf Z}_{N,\epsilon}. \label{eqn_seqdiff} \end{equation} We now proceed to demonstrate the first condition of (\ref{eqn_weakcond}), namely that, ${\bf T}_{N, \epsilon} \Rightarrow {\bf I}_\epsilon$. The random variable $T_{N, \epsilon}(\varphi_r)$ is a functional of an empirical distribution and therefore of a process lying in $D[0,2\pi]$. Define the random sequence $J_\epsilon$ for $f \in D[0,2\pi]$ and $f(0) = f(2\pi) = 0$ with the component term, \begin{equation} J_{\epsilon,r}(f) = \Big[ f_{\varphi_r}(\psi) \log \big[2(1 - \cos \psi)\big] \Big]_\epsilon^{2\pi-\epsilon} - \int_\epsilon^{2\pi - \epsilon} f_{\varphi_r}(\psi) \frac{\sin \psi}{\left( 1 - \cos \psi \right)} d \psi.
\label{eqn_Imap} \end{equation} It is well known that $W_{N,0} \Rightarrow W^o$ in $D[0,2\pi]$, which implies that $W_{N,\varphi_r} \Rightarrow W^o_{\varphi_r}$ as $N\to\infty$ for all $r$. The result follows by showing that $J_\epsilon$ defines a measurable mapping $J_\epsilon:D[0,2\pi] \rightarrow {\mathbb R}^\infty$. Since $$ J_{\epsilon,r}(W_N) = T_{N,\epsilon}(\varphi_r), $$ we may therefore apply Theorem 5.1 and Corollary 1 of \cite{Billingsley68}, which state that if $W_N \Rightarrow W^o$ then $J_\epsilon(W_N) \Rightarrow J_\epsilon(W^o)$ (and hence ${\bf T}_{N,\epsilon} \Rightarrow {\bf I}_\epsilon$), provided that we verify \begin{equation} \prob{ W^o \in D_{J_\epsilon}} = 0, \label{eqn_probdiscontinuity} \end{equation} where $D_{J_\epsilon}$ denotes the set of discontinuities of $J_\epsilon$. To deal with the measurability question, we first observe that the coordinate maps are measurable and since $\sin \psi/(1-\cos \psi)$ is continuous on $[\epsilon, 2\pi - \epsilon]$, it follows by Lemma \ref{lem_Dctsmap} that $J_{\epsilon,r}$ is measurable for each $r$ and hence so is the sequence mapping $J_\epsilon$. Again by Lemma \ref{lem_Dctsmap} the sequence of integrals converges with respect to $\rho_0$. This leaves only the final term. However, since the limit $W^o$ is almost surely continuous it follows almost surely that \begin{eqnarray*} f_{\varphi_r}(\epsilon) & \rightarrow & W^o_{\varphi_r}(\epsilon) \\ f_{\varphi_r}(2\pi - \epsilon) & \rightarrow & W^o_{\varphi_r}(2\pi-\epsilon) \end{eqnarray*} for each $r$ if $f \rightarrow W^o$ in $D[0,2\pi]$. Thus the corresponding sequence converges with respect to $\rho_0$ also and so (\ref{eqn_probdiscontinuity}) holds. The proof of the first condition is concluded. It remains to demonstrate (\ref{eqn_probmetric}). Here we use the union bound and Chebyshev's inequality. This is because the various $Z_{N,\epsilon}(\varphi_r)$ in the sequences are dependent, as they are determined via the same $\theta_q$. Nevertheless they are of course themselves the sum of i.i.d.
random variables. In determining the variance, we may work with $\varphi_r=0$ without loss of generality. The variance of one of the i.i.d. summands in (\ref{eqn_Zvarepsdef}) is \begin{equation} \sigma^2_\epsilon := \frac{2}{2\pi} \int_0^\epsilon \log^2 \big(2(1 - \cos\psi)\big) d\psi - \mu_\epsilon^2 < \infty. \end{equation} Since $\log \big(2(1 - \cos\psi)\big) = 2\log \psi + O(\psi^2)$ as $\psi \to 0^+$, and the antiderivative of $\log^2 x$ is $x \log^2 x - 2x \log x + 2x$, we have $\sigma^2_\epsilon = O(\epsilon \log^2 \epsilon)$; in particular $\sigma^2_\epsilon \rightarrow 0$ as $\epsilon \rightarrow 0$. Now fix $\eta >0$. By definition of $\rho_0$ and from (\ref{eqn_seqdiff}), we obtain $$ \rho_0({\bf T}_{N,\epsilon} , {\bf T}_N ) = \sum_{r=0}^\infty \frac{\abs{Z_{N,\epsilon}(\varphi_r)}}{1 + \abs{Z_{N,\epsilon}(\varphi_r)}} 2^{-r}. $$ Let $R_\eta$ be such that $\sum_{r=R_\eta+1}^\infty 2^{-r} < \eta/2$. Now we apply the union bound to the remaining $R_\eta+1$ summands, using $\abs{Z}/(1+\abs{Z}) \leq \abs{Z}$, to obtain \begin{eqnarray}\label{eqn_chebunionbnd} \prob{\sum_{r=0}^{R_\eta} \frac{\abs{ Z_{N,\epsilon}(\varphi_r)}}{1 + \abs{Z_{N,\epsilon}(\varphi_r)}} 2^{-r} \geq \eta/2 } &\leq &\sum_{r=0}^{R_\eta} \prob{\abs{Z_{N,\epsilon}(\varphi_r)} 2^{-r} \geq \frac{\eta}{2 \left( R_\eta +1 \right)}} \nonumber \\ & \leq & \sum_{r=0}^{R_\eta} \sigma^2_\epsilon \frac{4 \left( R_\eta +1 \right) ^2}{\eta^2 2^{2r}} \nonumber \\ & \leq & \sigma^2_\epsilon \frac{16 \left( R_\eta +1 \right) ^2}{3\eta^2}. \end{eqnarray} Hence, $$ \limsup_{N\to\infty} \prob{\rho_0({\bf T}_{N,\epsilon} , {\bf T}_N ) > \eta } \leq \frac{16(R_\eta + 1)^2 \sigma^2_\epsilon}{3\eta^2} $$ and the RHS goes to 0 as $\epsilon\to 0$, for each $\eta > 0$. Thus we obtain (\ref{eqn_probmetric}) as required. We have therefore verified all conditions, and Theorem \ref{thm_weakconvergence} is proved.
\qed \end{proof} We are now in a position to state our main result for an upper bound on the minimum eigenvalue of a random Vandermonde matrix. \begin{theorem} \label{thm_minbridge} Let $\lambda_1(N)$ be the minimum eigenvalue of the square $N\times N$ matrix ${\bf V}^{*}{\bf V}$. We further assume that the phases $\theta_{1},\ldots,\theta_{N}$ are i.i.d. and drawn according to the uniform distribution. Then \begin{equation} \lambda_1(N) \leq 4N^{2}\exp\big(-\sqrt{N}T_N^{*}\big) \label{eqn_minupbnd} \end{equation} where $T_N^* := \limsup_{r\to\infty} T_N(\varphi_r)$. Moreover, given $a > 0$, \begin{equation} \liminf_{N\to\infty} \prob{\lambda_1(N) \leq 4N^2 \exp\big(-\sqrt{N}a\big)} \geq \prob{I^* > a}. \end{equation} \end{theorem} \begin{proof} Using the definition of $T_N(\varphi_r)$ in (\ref{eqn_TNdefn}) we obtain $$ \lambda_1(N) \leq 4N^2 \exp\big(-\sqrt{N}T_N(\varphi_r)\big). $$ Since this inequality holds for every $r$ it follows that \begin{equation} \lambda_1(N) \leq 4N^2 \exp \big(-\sqrt{N}\limsup_{r\to\infty} T_N(\varphi_r)\big). \end{equation} Since this holds for all $N \in {\mathbb N}$, we obtain (\ref{eqn_minupbnd}), which is the first part of the Theorem. Now define the random variable $$ L_N := -\frac{1}{\sqrt{N}} \log \frac{\lambda_1(N)}{4N^2} $$ and further, for any given $R$, define $T_{N,R} := \max_{r \leq R} \left\{ T_N(\varphi_r) \right\} $, and similarly define $I^*_R$. Then for any fixed $R$ and $a > 0$ $$ \prob{L_N > a} \geq \prob{T_{N,R} > a} $$ by definition of $T_{N,R}$. Since the set $(a,\infty)$ is open, weak convergence together with Theorem 2.1 of \cite{Billingsley68} yields $$ \liminf_{N\to\infty} \prob{L_N > a } \geq \liminf_{N\to\infty} \prob{T_{N,R} > a} \geq \prob{ I^*_R > a} $$ as a consequence of Theorem \ref{thm_weakconvergence}.
Finally, by almost sure continuity it holds that $I_{R}^{*} \to I^{*}$ almost surely, and so by the monotone convergence theorem we see that $\mathbb{P}(I^* > a)= \lim_{R\to\infty} \mathbb{P}(I^*_R > a)$, which implies our result. \qed \end{proof} \subsection{Analytical and Combinatorial Construction} \par In this Section, we present an analytical and elementary argument for the upper bound of the minimum eigenvalue. Let $z_1,z_2,\ldots,z_N$ be complex numbers on the unit circle and let $P(z)=\prod_{i=1}^{N}{(z-z_i)}$ be the polynomial with these roots. We want to estimate $\max_{|z|=1}{|P(z)|}$ when the roots $\{z_{i}\}_{i=1}^{N}$ are i.i.d. uniformly distributed random variables on the unit circle. \begin{lemma}\label{lemma11} Given $P(z)$ as before there exists $|w|=1$ such that $|P(w)P(-w)|=1$. \end{lemma} \begin{proof} Consider the function $\Psi(z)=\log|P(z)|+\log|P(-z)|$. This function is continuous except at the values $\{z_1,-z_1,\ldots,z_N,-z_N\}$ where it has a vertical asymptote going to $-\infty$. Therefore, we can consider this function as a continuous function from the unit circle to $[-\infty,\infty)$ with the usual topology. On the other hand, $\int_{0}^{2\pi}{\Psi(e^{i\theta})}\,d\theta=0$; indeed, since all the roots lie on the unit circle, Jensen's formula gives $\frac{1}{2\pi}\int_{0}^{2\pi}{\log|P(e^{i\theta})|}\,d\theta = \sum_{i=1}^{N}\log\max(1,|z_i|) = 0$. Therefore, there exists $w$ such that $\Psi(w)=0$ and hence $|P(w)||P(-w)|=1$. \qed \end{proof} \par Consider the following construction. We first randomly choose the points $\{z_{i}\}_{i=1}^{N}$ and consider the set of pairs $\mathcal{P}:=\{(z_1,-z_1),\ldots,(z_N,-z_N)\}$. Note that changing $z_i$ to $-z_i$ does not affect the value of the point $w$ in the previous Lemma. Hence the set $\mathcal{P}$ determines the point $w$. Now we fix this point and consider $\alpha_{i}:=|w-z_i|$ and $\beta_{i}:=|w+z_i|$. Since $|P(w)P(-w)|=1$, we see that $\prod_{i=1}^{N}{\alpha_i \beta_i}=1$. It is also clear that $\beta_i = \sqrt{4-\alpha_i^{2}}$.
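Although not needed for the proofs, Lemma \ref{lemma11} is easy to illustrate numerically: $\Psi$ is continuous away from the roots and integrates to zero over the circle, so a sign change can be located on a grid and refined by bisection. The following sketch (the number of roots, grid size and random seed are arbitrary choices) is illustrative only.

```python
import cmath
import math
import random

random.seed(7)
N = 8
roots = [cmath.exp(2j * math.pi * random.random()) for _ in range(N)]

def P(z):
    """P(z) = prod_i (z - z_i) with all roots z_i on the unit circle."""
    out = 1.0 + 0.0j
    for zi in roots:
        out *= (z - zi)
    return out

def Psi(theta):
    """Psi at w = e^{i theta}: log|P(w)| + log|P(-w)|."""
    w = cmath.exp(1j * theta)
    return math.log(abs(P(w))) + math.log(abs(P(-w)))

# Locate a sign change of Psi on a grid, then refine by bisection.
M = 4000
grid = [2 * math.pi * k / M for k in range(M + 1)]
vals = [Psi(t) for t in grid]
lo = hi = None
for t0, t1, v0, v1 in zip(grid, grid[1:], vals, vals[1:]):
    if v0 * v1 <= 0:
        lo, hi = t0, t1
        break
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if Psi(lo) * Psi(mid) <= 0:
        hi = mid
    else:
        lo = mid
w = cmath.exp(1j * 0.5 * (lo + hi))
# |P(w) P(-w)| should now be numerically close to 1
```

A sign change is found with high probability for random roots; near the located $w$ the product $|P(w)P(-w)|$ agrees with $1$ up to bisection accuracy.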
\par Let $y:\{1,-1\}^{N}\to\ensuremath{\mathbb{R}}$ be the random variable defined by $$ y(v_1,v_2,\ldots,v_N) = \sum_{i=1}^{N}{v_{i}\log(\alpha_i/\beta_i)} $$ with the signs taken i.i.d. at random with probability $1/2$. It is not difficult to see that $\mathbb{E}(y)=0$, where the average is taken over the set $\{1,-1\}^{N}$. Note that \begin{eqnarray*} y(v_1,\ldots,v_N) & = & \sum_{i=1}^{N}{v_{i}(\log(\alpha_i)-\log(\beta_i))}\\ & = &\log|P_{(v_1,\ldots,v_N)}(w)|-\log|P_{(v_1,\ldots,v_N)}(-w)| \end{eqnarray*} where $P_{(v_1,\ldots,v_N)}(z)$ is the polynomial with roots $v_{i}z_{i}$ \begin{equation} P_{(v_1,\ldots,v_N)}(z) = \prod_{i=1}^{N}{(z-v_{i}z_{i})} \end{equation} and $w$ is as in Lemma \ref{lemma11}. Since $|P_{(v_1,\ldots,v_N)}(-w)|=|P_{(v_1,\ldots,v_N)}(w)|^{-1}$ we see that \begin{equation}\label{fund} y(v_1,\ldots,v_N) = \log|P_{(v_1,\ldots,v_N)}(w)|^{2}. \end{equation} \begin{theorem} Let $\gamma:=\log\Big(\frac{\cos(\pi/8)}{\sin(\pi/8)}\Big)$. For every $\epsilon>0$ the following holds $$ \mathbb{P}\Big(|y(v_1,\ldots,v_N)|\geq \gamma\sqrt{\pi}\epsilon\sqrt{N}/2\Big)\geq 1-\epsilon. $$ \end{theorem} \begin{proof} By changing $z_i$ to $-z_i$ if necessary, we can assume without loss of generality that $\alpha_i\geq\beta_i$. The point $w$ is equal to $w=e^{i\theta}$ for some phase $\theta$ in $[0,2\pi)$. Let $\mathcal{A}$ be the set $$ \mathcal{A}:=\Big\{e^{i\phi}\,\,:\,\,\phi\in [\theta+\pi/4,\theta+3\pi/4]\cup [\theta-3\pi/4,\theta-\pi/4]\Big\}. $$ The total length of the set $\mathcal{A}$ is $\pi$ and hence the probability that the random point $z_{i}$ belongs to $\mathcal{A}$ equals $1/2$. On the other hand, if $z_i$ belongs to the complement of $\mathcal{A}$ then, since by assumption $\alpha_i\geq \beta_i$, we see that $$ \log(\alpha_i/\beta_i)\geq \log\Bigg(\frac{\cos(\pi/8)}{\sin(\pi/8)}\Bigg)=:\gamma \approx 0.8814. $$ Let us order the values of $\log(\alpha_i/\beta_i)$ in increasing order.
Up to a renumbering we may assume that $$ 0 \leq\ldots\leq \log(\alpha_{m}/\beta_{m})\leq \gamma \leq \log(\alpha_{m+1}/\beta_{m+1})\leq\ldots\leq \log(\alpha_N/\beta_N). $$ The value of $m$ is a random variable and $m/N$ converges almost surely to $1/2$ as $N\to\infty$. For notational simplicity we take $m=N/2$; as the argument shows, this is not strictly necessary. Let $\epsilon>0$ and let us consider the value $\sup_{I}{\mathbb{P}(y(v_1,\ldots,v_N)\in I)}$ where $I$ ranges over all the closed intervals of length $\gamma\sqrt{\pi}\epsilon\sqrt{N}$ in the real line. By applying the Littlewood--Offord Theorem, discussed in the preliminaries Section, we see that \begin{equation}\label{eqq1} \sup_{I}\Big\{\mathbb{P}\Big(y(v)\in I\,\,|\,\,\text{conditioning on $v_{i}$ for $i\leq \lfloor N/2 \rfloor$}\Big)\Big\} \leq \epsilon \end{equation} for $N$ sufficiently large. On the other hand, $$ \sup_{I}\Big\{\mathbb{P}\big(y(v_1,\ldots,v_N)\in I\big)\Big\} $$ is bounded above by $$ \frac{1}{2^{N/2}}\sum_{(v_{1},\ldots,v_{\lfloor N/2 \rfloor})}{\sup_{I}\Big\{\mathbb{P}\Big(y(v)\in I\,\,|\,\,\text{cond. on the first $v_{i}$}\Big)\Big\}}\leq\epsilon $$ where the last inequality follows from (\ref{eqq1}). In particular, taking $I$ to be the interval $I =[-\gamma\sqrt{\pi}\epsilon\sqrt{N}/2, \gamma\sqrt{\pi}\epsilon\sqrt{N}/2]$ we conclude that $$ \mathbb{P}\Big(|y(v_1,\ldots,v_N)|\geq \gamma \sqrt{\pi}\epsilon\sqrt{N}/2\Big)\geq 1-\epsilon. $$ \qed \end{proof} \begin{theorem}\label{randpoly} Given $\epsilon>0$ we have that \begin{equation} \mathbb{P}\,\Big(\max_{|z|=1}|P(z)|^2\geq \exp(\gamma\sqrt{\pi}\epsilon\sqrt{N}/2)\Big)\geq 1-\epsilon \end{equation} for $N$ sufficiently large. \end{theorem} \begin{proof} As done before, we start by randomly generating $N$ pairs of diametrically opposite points $(z_{i},-z_{i})$ and find $w$ as in Lemma \ref{lemma11}.
Finally, fix the $z_i$ by the $N$ independent coin flips $(v_{1},\ldots,v_{N})$ and condition on this event. We observed before that $|P_{(v_1,\ldots,v_N)}(-w)|=|P_{(v_1,\ldots,v_N)}(w)|^{-1}$. Therefore, $$ \log|P_{(v_1,\ldots,v_N)}(w)|=-\log|P_{(v_1,\ldots,v_N)}(-w)|. $$ Let $a(w)$ be \begin{eqnarray*} a(w) & = & \log|P_{(v_1,\ldots,v_N)}(w)|-\log|P_{(v_1,\ldots,v_N)}(-w)|\\ & = & \log|P_{(v_1,\ldots,v_N)}(w)|^2. \end{eqnarray*} Then by the previous Theorem and (\ref{fund}) we see that $$ \mathbb{P}\Big(|a(w)| \geq \gamma\sqrt{\pi}\epsilon\sqrt{N}/2\Big)\geq 1-\epsilon. $$ Since $a(-w)=-a(w)$ we clearly see that $$ \mathbb{P}\big(\max\big(a(w),a(-w)\big)\geq \gamma\sqrt{\pi}\epsilon\sqrt{N}/2\big)\geq 1-\epsilon. $$ Therefore, \begin{equation} \mathbb{P}\,\Big(\max_{|z|=1}{\max \{\log|P_{(v_1,\ldots,v_N)}(z)|^2,\log|P_{(v_1,\ldots,v_N)}(-z)|^2 \} } \geq \gamma\sqrt{\pi}\epsilon\sqrt{N}/2\Big) \geq 1-\epsilon. \end{equation} Now removing the conditioning on the pairs $\{z_1,-z_1,\ldots,z_N,-z_N\}$ we see that, \begin{equation} \mathbb{P}\,\Bigg(\max_{|z|=1}|P(z)|^2\geq \exp\big(\gamma\sqrt{\pi}\epsilon\sqrt{N}/2\big)\Bigg)\geq 1-\epsilon \end{equation} for $N$ sufficiently large. \qed \end{proof} Since we already saw in Lemma \ref{lem_polyineq} that $$ \lambda_{1}(N)\leq \frac{4N^2}{\max_{|z|=1}|P(z)|^2} $$ the following Theorem follows immediately. \begin{theorem}\label{main_comb} Given $\epsilon>0$ we have that \begin{equation} \mathbb{P}\,\Big(\lambda_1(N)\leq 4N^2\exp(-\gamma\sqrt{\pi}\epsilon\sqrt{N}/2)\Big)\geq 1-\epsilon \end{equation} for $N$ sufficiently large. \end{theorem} \section{Numerical Results}\label{sec_numer} In this Section we present some numerical results for the behavior near the origin of the limit probability distribution of ${\bf V}^{*}{\bf V}$, and for the minimum eigenvalue $\lambda_1$. Let ${\bf V}$ be a square $N\times N$ random Vandermonde matrix with phases $\theta_{1},\theta_{2},\ldots, \theta_{N}$, which are i.i.d.
random variables uniformly distributed on $[0,1]$. We know that the empirical eigenvalue distribution of ${\bf V}^{*}{\bf V}$ converges as $N\to\infty$ to a probability measure $\mu$. One question that we would like to address is: does the measure $\mu$ have an atom at zero? Let $\{\lambda_{i}\}_{i=1}^{N}$ be the eigenvalues of ${\bf V}^{*}{\bf V}$. Given $\epsilon>0$ let us denote by $G_{N}(\epsilon)$ the average proportion of eigenvalues smaller than $\epsilon$, i.e., $$ G_{N}(\epsilon):=\frac{1}{N} \mathbb{E}\Big( \Big| \big\{\lambda_{i}<\epsilon\,:\,i=1,\ldots,N\big\} \Big| \Big). $$ Therefore, if there is an atom at zero for the measure $\mu$ with mass $\mu\{0\}=\beta$, the following holds \begin{equation} \inf_{\epsilon>0}\,\liminf_{N\to\infty}\,G_{N}(\epsilon)=\beta. \end{equation} \begin{figure}[!Ht] \begin{center} \includegraphics[width=10cm]{atom_N=1000.eps} \caption{Graph of the average proportion of eigenvalues smaller than $10^{-p}$ as a function of $p$ for $N=1000$.} \label{atom} \end{center} \end{figure} In Figure \ref{atom}, we plot $G_{N}(10^{-p})$ as a function of $p$ for $N=1000$. The graph suggests that if there is an atom, its mass has to be relatively small. Further simulations suggest the absence of an atom at zero. However, at the moment, we are unable to prove this result. \begin{figure}[!Ht] \begin{center} \includegraphics[width=10cm]{mN_plot200_R1000.eps} \caption{Graphs of the average of $2\log \max_{|z|=1}|P(z)|$ (blue) and $\gamma\sqrt{\pi}\epsilon\sqrt{N}/2$ (red) as a function of $N$, where $2\log \max_{|z|=1}|P(z)|$ was averaged over $1000$ realizations.} \label{ff1} \end{center} \end{figure} Finally, we present some numerical results for the behavior of the maximum of a random polynomial on the unit circle in the context of Theorem \ref{randpoly}. In Figure \ref{ff1}, we show the graphs of $2\log \max_{|z|=1}|P(z)|$ and $\gamma\sqrt{\pi}\epsilon\sqrt{N}/2$ as a function of $N$.
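For reproducibility, $G_N(\epsilon)$ is straightforward to estimate by direct simulation; the sketch below (the choices of $N$, $\epsilon$ and the number of trials are arbitrary and much smaller than those used for the figures) builds ${\bf V}$ for the classical sequence $k_p = p-1$ and counts small eigenvalues of ${\bf V}^{*}{\bf V}$.

```python
import numpy as np

rng = np.random.default_rng(0)

def G_estimate(N, eps, trials):
    """Monte Carlo estimate of G_N(eps) for V(p, q) = z_q^(p-1) / sqrt(N)."""
    count = 0
    for _ in range(trials):
        z = np.exp(2j * np.pi * rng.uniform(0.0, 1.0, size=N))
        # rows indexed by the power p-1 = 0, ..., N-1; columns by the node z_q
        V = z[np.newaxis, :] ** np.arange(N)[:, np.newaxis] / np.sqrt(N)
        lam = np.linalg.eigvalsh(V.conj().T @ V)  # real spectrum of V* V
        count += int(np.count_nonzero(lam < eps))
    return count / (N * trials)

G = G_estimate(N=100, eps=1e-3, trials=5)
# G is the estimated average proportion of eigenvalues below eps
```

Since the minimum eigenvalue is exponentially small in $\sqrt{N}$, the estimate is typically nonzero already for moderate $N$, consistent with Figure \ref{atom}.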
This graph suggests that Theorem \ref{randpoly} could be slightly improved. \section{Generalized Random Vandermonde Matrix} In this Section we present a generalized version of the previously discussed random Vandermonde matrices. More specifically, consider an increasing sequence of integers $\{k_{p}\}_{p=1}^{\infty}$ and let $\{\theta_{1},\ldots,\theta_{N}\}$ be i.i.d. random variables uniformly distributed on the unit interval $[0,1]$. Let $\bf{V}$ be the $N\times N$ random matrix defined as \begin{equation} V(p,q):=\frac{1}{\sqrt{N}}z_{q}^{k_p} \end{equation} where $z_{q}:=e^{2\pi i\theta_{q}}$. Note that if we consider the sequence $k_p=p-1$ then the matrix $\bf{V}$ is the usual random Vandermonde matrix defined in (\ref{eqn_Vandermondedefn}). We are interested in understanding the limit eigenvalue distribution for the matrices $\bf{X}:=\bf{V}\bf{V}^{*}$ and in particular their asymptotic moments. Let $r\geq 0$ and let us define the $r$--th asymptotic moment as \begin{equation} m_{r}:=\lim_{N\to\infty} \mathbb{E}\Big( \mathrm{tr}_{N}({\bf X}^{r})\Big). \end{equation} These moments, as well as the limit eigenvalue distribution, depend on the sequence $\{k_{p}\}_{p=1}^{\infty}$. \begin{remark} It is a straightforward calculation to see that $m_0=m_{1}=1$, $m_{2}=2$ and $m_3=5$ regardless of the sequence $\{k_{p}\}_{p=1}^{\infty}$. The first interesting case happens when $r$ is equal to 4. This is because $r=4$ is the first positive integer for which there is a crossing partition, namely the partition $\rho=\{\{1,3\},\{2,4\}\}$. \end{remark} The next Theorem gives a combinatorial expression for the moments as well as the existence of the limit eigenvalue distribution. \begin{theorem} Let $\{k_{p}\}_{p=1}^{\infty}$ be an increasing sequence of positive integers.
Then \begin{equation} m_r = \sum_{\rho\in \mathcal{P}(r)}{K_{\rho}} \end{equation} where $\mathcal{P}(r)$ is the set of partitions of the set $\{1,2,\ldots,r\}$ and \begin{equation} K_{\rho}:=\lim_{N\to\infty}\frac{|S_{\rho,N}|}{N^{r+1-|\rho|}} \end{equation} where $|\rho|$ is the number of blocks of $\rho$ and \begin{equation} S_{\rho,N}:=\Big\{(p_1,\ldots,p_r)\in\{1,2,\ldots,N\}^{r}\,:\,\sum_{i\in B_{j}}{k_{p_i}}=\sum_{i\in B_{j}}{k_{p_{i+1}}} \text{ for every block } B_{j} \text{ of } \rho \Big\}, \end{equation} with indices taken cyclically, i.e., $p_{r+1}:=p_{1}$. Moreover, there exists a unique probability measure $\mu$ supported in $[0,\infty)$ with these moments. \end{theorem} \begin{proof} Given $r\geq 0$ then \begin{eqnarray*} \mathrm{tr}_{N}({\bf X}^{r}) & = & \frac{1}{N}\sum_{(p_1,\ldots,p_r)}{X(p_1,p_2)X(p_2,p_3)\ldots X(p_r,p_1)} \\ & = & \frac{1}{N^{r+1}}\sum_{(p_1,\ldots,p_r)} \sum_{(i_1,\ldots,i_r)} z_{i_1}^{(k_{p_1}-k_{p_2})}z_{i_2}^{(k_{p_2}-k_{p_3})}\ldots z_{i_r}^{(k_{p_r}-k_{p_1})}. \end{eqnarray*} The sequence $(i_1,i_2,\ldots,i_r)\in \{1,2,\ldots,N\}^{r}$ uniquely defines a partition $\rho$ of the set $\{1,2,\ldots,r\}$ (we denote this by $(i_1,\ldots,i_r)\mapsto \rho$) where each block $B_{j}$ consists of the positions which are equal, i.e., $$ B_{j}=\{w_{j_1},\ldots,w_{j_{|B_j|}}\} $$ where $i_{w_{j_1}}=i_{w_{j_2}}=\ldots=i_{w_{j_{|B_j|}}}$. Denote this common value by $W_{j}$. Then \begin{equation}\label{genn} \mathrm{tr}_{N}({\bf X}^{r}) = \frac{1}{N^{r+1}} \sum_{(i_1,\ldots,i_r)\mapsto \rho}\,\,\,\sum_{(p_1,\ldots,p_r)} \prod_{k=1}^{|\rho|}{z_{W_k}^{\sum_{i\in B_k}{(k_{p_i}-k_{p_{i+1}})}}}. \end{equation} Taking expectation on both sides we observe that $$ \mathbb{E}\Bigg(z_{W_k}^{\sum_{i\in B_k}{(k_{p_i}-k_{p_{i+1}})}}\Bigg)\neq 0 $$ if and only if $\sum_{i\in B_k}{(k_{p_i}-k_{p_{i+1}})}=0$.
Let $S_{\rho,N}$ be the $r$-tuples $(p_1,\ldots,p_r)$ which solve the equations \begin{equation}\label{ttt} \sum_{i\in B_k}{k_{p_i}}=\sum_{i\in B_k}{k_{p_{i+1}}} \end{equation} for all the blocks $k=1,2,\ldots,|\rho|$ and let $|S_{\rho,N}|$ be its cardinality. Let $K_{\rho}$ be defined as $$ K_{\rho}=\lim_{N\to\infty}\frac{|S_{\rho,N}|}{N^{r+1-|\rho|}}. $$ Then it follows from (\ref{genn}) that $$ m_{r}=\sum_{\rho\in \mathcal{P}(r)}K_{\rho}. $$ It is straightforward to see that the set of solutions of (\ref{ttt}) has $r+1-|\rho|$ free variables, since one of the equations is redundant (the sum of all the equations is 0). Therefore, for every partition $\rho$ the value of $K_{\rho}$ satisfies $0\leq K_{\rho}\leq 1$. Then the moments are bounded by the Bell numbers $B_{r}=|\mathcal{P}(r)|$. Define, $$ \beta_{r}:=\inf_{k\geq r}{m_{k}^{\frac{1}{2k}}}\leq \inf_{k\geq r}{B_{k}^{\frac{1}{2k}}}\leq \inf_{k\geq r}{\sqrt{k}}=\sqrt{r}, $$ where we used that $B_{k}\leq k^{k}$. Hence, $\beta_{r}^{-1}\geq r^{-1/2}$ and therefore $$ \sum_{r=1}^{+\infty}{\beta_{r}^{-1}}=+\infty. $$ Therefore, by Carleman's Theorem \cite{Carleman} there exists a unique probability measure $\mu$ supported on $[0,+\infty)$ such that $$ m_{r}=\int_{0}^{+\infty}{t^{r}\,d\mu (t)}. $$ In other words, the sequence $m_{r}$ is {\em distribution determining}. \qed \end{proof} \begin{prop} Let $\rho\in\mathcal{P}(r)$ then $K_{\rho}=1$ if and only if the partition $\rho$ is non--crossing. \end{prop} The proof of this result follows similarly to the one presented in \cite{GC02} for the sequence $k_{p}=p-1$, and we leave it as an exercise for the reader. \begin{example} Let $r=4$ and let $\rho=\{\{1,3\},\{2,4\}\}$. Then $$ K_{\rho}=\lim_{N\to\infty}\frac{|S_{\rho,N}|}{N^3} $$ where $$ S_{\rho,N}=\big\{(p_1,p_2,p_3,p_4)\in \{1,2,\ldots,N\}^{4}\,:\,k_{p_1}+k_{p_3}=k_{p_2}+k_{p_4}\big\}. $$ For the case $k_{p}=p-1$ it was observed in \cite{GC02} that $K_{\rho}=2/3$.
As a matter of fact, it is not difficult to see that $K_{\rho}$ is the volume of the polytope $$ K_{\rho}=\mathrm{vol}\big(\{(x,y,z)\in [0,1]^3\,:\ 0\leq x+y-z\leq 1\}\big). $$ This polytope is shown in Figure \ref{poly}. \begin{figure}[!Ht] \begin{center} \includegraphics[width=6cm]{p.eps} \caption{The polytope $(x,y,z)\in [0,1]^{3}$ such that $0\leq x+y-z\leq 1$.} \label{poly} \end{center} \end{figure} For the case $k_{p}=2^{p}$ we see that $$ S_{\rho,N}=\big\{(p_1,p_2,p_3,p_4)\in \{1,2,\ldots,N\}^{4}\,:\,2^{p_1}+2^{p_3}=2^{p_2}+2^{p_4}\big\}. $$ For positive integers $a,b,c,d$ the equation $2^{a}+2^{b}=2^{c}+2^{d}$ holds if and only if $\{a,b\}=\{c,d\}$ as multisets. Therefore, $|S_{\rho,N}|=2N^{2}-N$ and hence $K_{\rho}=0$. \end{example} The next Theorem shows that if $k_p=2^p$ then the limit eigenvalue distribution is the famous Marchenko--Pastur distribution. \begin{theorem} Let $k_{p}=2^{p}$ then for every $r$ and $\rho\in\mathcal{P}(r)$ the coefficient $K_{\rho}=0$ if the partition is crossing. Hence $$ m_{r}=|NC(r)|, $$ the number of non--crossing partitions, and $\mu$ is the Marchenko--Pastur distribution $$ d\mu(x)=\frac{1}{2\pi}\sqrt{\frac{4-x}{x}}\,\mathbf{1}_{[0,4]}(x)\,dx. $$ \end{theorem} \begin{proof} We already observed that $K_{\rho}=1$ iff the partition is non--crossing. Therefore, we need to show that for every crossing partition $K_{\rho}=0$. Let $\rho\in\mathcal{P}(r)$ be a crossing partition with blocks $\{B_1,B_2,\ldots,B_{|\rho|}\}$. Let $I$ be the set of indices such that for $i\in I$ the block $B_{i}$ does not cross any other block $B_{j}$. Then we can decompose $\rho$ as $\rho = \rho_1\cup \rho_{2}$ where $\rho_{2}=\cup_{i\in I}{B_{i}}$ is the union of all the non--crossing blocks. Then by the definition of $K_{\rho}$ we see that $K_{\rho}=K_{\rho_1}K_{\rho_2}$. Now we need to show that $K_{\rho_1}=0$. Up to a relabeling, if necessary, we see that $\rho_1\in\mathcal{P}(s)$ where $s\leq r$.
By definition every block of $\rho_1$ crosses at least another block. For all $n$--tuples of positive integers $(a_1,\ldots,a_n)$ and $(b_1,\ldots,b_n)$ the equation $$ 2^{a_1}+2^{a_2}+\ldots+2^{a_n}=2^{b_1}+2^{b_2}+\ldots+2^{b_n} $$ implies that $\{a_1,\ldots,a_n\}=\{b_1,\ldots,b_n\}$. Hence, every equation in $S_{\rho_1,N}$ eliminates at least two variables and therefore $|S_{\rho_1,N}|=O(N^{s+1-2|\rho_1|})$. This implies that $$ K_{\rho_1}=\lim_{N\to\infty}\frac{|S_{\rho_1,N}|}{N^{s+1-|\rho_1|}}=0 $$ finishing the proof. \qed \end{proof} In Figure \ref{MP}, we see the histogram of the eigenvalues of the matrix $\bf{V}\bf{V}^{*}$ for the sequence $k_p=2^{p}$ and $N=100$ over $1000$ trials in comparison with the Marchenko--Pastur distribution. As can be appreciated, even for $N$ as small as 100 the two are not far apart. In Figure \ref{classic}, we see the histogram of the eigenvalues of $\bf{V}\bf{V}^{*}$ for $k_p=p-1$ and $N=100$. \begin{figure}[!Ht] \begin{center} \includegraphics[width=10cm]{Generalized_2p_N100_5000trials.eps} \caption{The blue graph is the histogram of eigenvalues of the matrix $\bf{V}\bf{V}^{*}$ for the sequence $k_p=2^{p}$ and $N=100$ over $1000$ trials. The red curve is the Marchenko--Pastur distribution.} \label{MP} \end{center} \end{figure} \begin{figure}[!Ht] \begin{center} \includegraphics[width=10cm]{Calssical_N100_5000trials.eps} \caption{Histogram of the eigenvalues of matrix $\bf{V}\bf{V}^{*}$ for the sequence $k_p=p-1$ and $N=100$ over $1000$ trials.} \label{classic} \end{center} \end{figure} The case $k_{p}=p^2$ is an interesting one (as well as the cases $k_{p}=p^{a}$). At the moment we do not know the limit eigenvalue distribution for this sequence. For instance, is it true that $K_{\rho}=0$ for every crossing partition? Is it true that $K_{\rho}=0$ for the partition $\rho=\{\{1,3\},\{2,4\}\}$? In a private communication, Prof. Carl Pomerance indicated to us that $|S_{\rho,N}|$ is of order $O(N^2\log N)$.
However, we are not providing a proof of this fact. In Figure \ref{ffg}, we show the values of $|S_{\rho,N}|/N^3$ as a function of $N$ and we compare it with the case $k_{p}=2^{p}$. \begin{figure}[!Ht] \begin{center} \includegraphics[width=10cm]{graph.eps} \caption{This figure shows $|S_{\rho,N}|/N^3$ for the sequence $k_{p}=p^2$ (red) and $k_{p}=2^{p}$ (magenta).} \label{ffg} \end{center} \end{figure}
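The counts behind Figure \ref{ffg} are easy to reproduce for moderate $N$ by tabulating pair-sum multiplicities; the sketch below (the choice of $N$ is arbitrary, and the $k_{p}=p^{2}$ behavior is only the reported conjecture) compares the three sequences discussed above for $\rho=\{\{1,3\},\{2,4\}\}$.

```python
from collections import Counter

def pair_sum_ratio(k, N):
    """|S_{rho,N}| / N^3 for rho = {{1,3},{2,4}}: the number of quadruples
    with k_{p1} + k_{p3} = k_{p2} + k_{p4}, counted as sum_s r(s)^2 where
    r(s) = #{(a, b) in [1, N]^2 : k(a) + k(b) = s}."""
    r = Counter(k(a) + k(b) for a in range(1, N + 1) for b in range(1, N + 1))
    return sum(c * c for c in r.values()) / N ** 3

N = 60
classical = pair_sum_ratio(lambda p: p - 1, N)  # tends to 2/3
powers = pair_sum_ratio(lambda p: 2 ** p, N)    # |S| = 2N^2 - N, ratio -> 0
squares = pair_sum_ratio(lambda p: p * p, N)    # reportedly |S| = O(N^2 log N)
```

Already at $N=60$ the classical ratio is within $10^{-3}$ of $2/3$, the $2^{p}$ count matches $2N^{2}-N$ exactly, and the $p^{2}$ ratio sits strictly between the two.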
https://arxiv.org/abs/1202.3184
Asymptotic Behavior of the Maximum and Minimum Singular Value of Random Vandermonde Matrices
This work examines various statistical distributions in connection with random Vandermonde matrices and their extension to $d$--dimensional phase distributions. Upper and lower bound asymptotics for the maximum singular value are found to be $O(\log^{1/2}{N^{d}})$ and $\Omega((\log N^{d} /(\log \log N^d))^{1/2})$ respectively, where $N$ is the dimension of the matrix, generalizing the results in \cite{TW}. We further study the behavior of the minimum singular value of these random matrices. In particular, we prove that the minimum singular value is at most $N\exp(-C\sqrt{N})$ with high probability, where $C$ is a constant independent of $N$. Furthermore, the value of the constant $C$ is determined explicitly. The main result is obtained in two different ways. One approach uses techniques from stochastic processes and in particular, a construction related to the Brownian bridge. The other one is a more direct analytical approach involving combinatorics and complex analysis. As a consequence, we obtain a lower bound for the maximum absolute value of a random complex polynomial on the unit circle, which may be of independent mathematical interest. Lastly, for each sequence of positive integers $\{k_p\}_{p=1}^{\infty}$ we present a generalized version of the previously discussed matrices. The classical random Vandermonde matrix corresponds to the sequence $k_{p}=p-1$. We find a combinatorial formula for their moments and we show that the limit eigenvalue distribution converges to a probability measure supported on $[0,\infty)$. Finally, we show that for the sequence $k_p=2^{p}$ the limit eigenvalue distribution is the famous Marchenko--Pastur distribution.
https://arxiv.org/abs/1708.04901
Convex sequences may have thin additive bases
For a fixed $c > 0$ we construct an arbitrarily large set $B$ of size $n$ such that its sum set $B+B$ contains a convex sequence of size $cn^2$, answering a question of Hegarty.
\section*{Notation} The following notation is used throughout the paper. The expressions $X \gg Y$, $Y \ll X$, $Y = O(X)$, $X = \Omega(Y)$ all have the same meaning that there is an absolute constant $c$ such that $|Y| \leq c|X|$. If $X$ is a set then $|X|$ denotes its cardinality. For sets of numbers $A$ and $B$ the \emph{sumset} $A + B$ is the set of all pairwise sums $$ \{ a + b: a \in A, b \in B \}. $$ \section{Introduction} Let $A = \{a_i\}_{i=1}^{n}$ be a set\footnote{Sometimes we use the word \emph{sequence} to emphasize the ordering. } of real numbers ordered so that $a_1 \leq a_2 \leq \ldots \leq a_n$. Recall that $A$ is called \emph{convex} if the gaps between consecutive elements of $A$ are strictly increasing, that is $$ a_2 - a_1 < a_3 - a_2 < \ldots < a_n - a_{n-1}. $$ Studies of convex sets were initiated by Erd\H{o}s who conjectured that any convex set must grow with respect to addition, so that the size of the set of sums $A+A := \{a_1 + a_2 : a_1, a_2 \in A \}$ is significantly larger than the size of $A$. The first non-trivial bound confirming the conjecture of Erd\H{o}s was obtained by Hegyv{\'a}ri \cite{MR858397}, and the state of the art bound is due to Schoen and Shkredov \cite{MR2825592}, who proved that an arbitrary convex set $A$ satisfies $$ |A+A| \geq C|A|^{14/9}\log^{-2/3} |A| $$ for some absolute constant $C > 0$. It is conjectured that in fact $$ |A+A| \geq C(\epsilon)|A|^{2-\epsilon} $$ holds for any $\epsilon > 0$. In general, it is believed that convex sets cannot be additively structured. In particular, a few years ago Hegarty asked\footnote{The original MathOverflow question is contrapositive to our reformulation, which is technically slightly more convenient to state. } \cite{HegartyQuestion} whether there is a constant $c>0$ with the property that there is a set $B$ of arbitrarily large size $n$ such that $B+B$ contains a convex set of size $cn^2$.
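As a throwaway illustration of the definition above, convexity is a purely local condition on consecutive gaps and can be checked mechanically; the snippet below tests it on the convex set of squares and on an arithmetic progression (which is not convex).

```python
def is_convex(a):
    """True iff the gaps between consecutive elements strictly increase."""
    gaps = [b - a0 for a0, b in zip(a, a[1:])]
    return all(g0 < g1 for g0, g1 in zip(gaps, gaps[1:]))

squares = [i * i for i in range(1, 50)]  # gaps 3, 5, 7, ... strictly increase
progression = list(range(1, 50))         # constant gaps, hence not convex
```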
Recall that $B$ is a \emph{basis} (of order two) for a set $A$ if $A \subset B + B$. In other words, Hegarty asked if a convex set of size $n$ can have a thin additive basis (of order two) of size as small as $O(n^{1/2})$, which is clearly the smallest possible size up to a constant. Perhaps contrary to the intuition that convex sets lack additive structure, we present a construction which answers Hegarty's question in the affirmative. Our main result is as follows. \begin{theorem} \label{thm:main} There is $c>0$ such that for any $m$ there is a set $B$ of size $n > m$ such that $B+B$ contains a convex set of size $cn^2$. \end{theorem} \section{Construction} Assume $n$ is fixed and large. We will construct a set $B$ of size $O(n)$ such that $B+B$ contains a convex set of size $\Omega(n^2)$. Theorem \ref{thm:main} will clearly follow. The following constants will be used throughout the proof: $$ \alpha := \frac{1}{n^2}, \,\,\,\,\, \gamma := \frac{1}{1000n^3}, \,\,\,\,\, \epsilon := 0.1. $$ Define \be x_i &=& i + (\alpha + \gamma)i^2 \\ y_j &=& j - \alpha j^2. \ee Next, we define $$ B_k = \{x_i + y_j : i + j = k \}, $$ where $i$ and $j$ are allowed to be negative. Let $k \in [.999n, n]$ so that $\alpha k^2 \in [.99, 1]$. For such an integer $k$, writing $j = k - i$, we have that the $i$th element of $B_k$ is given by \beq \label{eq:block} b^{(k)}_i = k + (\alpha + \gamma)i^2 - \alpha(k-i)^2 = (k - \alpha k^2) + \gamma i^2 + 2 i k \alpha. \eeq Now assume that $i$ ranges in $[-n, 2n]$. The consecutive differences $b^{(k)}_{i+1} - b^{(k)}_i$ are then given by $$ \Delta^{(k)}_i := \gamma(2i+1) + 2k\alpha. $$ Observe that the $\Delta^{(k)}_i$ are positive (the term $2k\alpha \geq 2\cdot 0.999/n$ dominates $\gamma(2n-1) < 1/(500n^2)$) and increasing, thus the block $B_k := \{ b^{(k)}_i \}^{2n}_{-n}$ is convex.
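The convexity of a single block is easy to verify exactly in code: the gaps involve the tiny constant $\gamma = 1/(1000n^3)$, so exact rational arithmetic is preferable to floating point. The sketch below (with an arbitrary small $n$ and the single admissible block $k = n$) is illustrative only and does not carry out the gluing step.

```python
from fractions import Fraction

n = 50
alpha = Fraction(1, n ** 2)
gamma = Fraction(1, 1000 * n ** 3)

def x(i):
    # x_i = i + (alpha + gamma) * i^2
    return i + (alpha + gamma) * i * i

def y(j):
    # y_j = j - alpha * j^2
    return j - alpha * j * j

k = n  # block index with alpha * k^2 = 1
block = [x(i) + y(k - i) for i in range(-n, 2 * n + 1)]  # b_i = x_i + y_{k-i}

gaps = [b1 - b0 for b0, b1 in zip(block, block[1:])]
# each gap equals gamma*(2i+1) + 2*k*alpha: positive and strictly increasing
```

By construction every element of the block is a sum $x_i + y_j$, i.e., lies in $B+B$, and the exact gaps confirm convexity of the block.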
Further, by (\ref{eq:block}) for sufficiently large $n$ we have \ben b^{(k)}_{-n} &=& k - \alpha k^2 + \gamma n^2 - 2nk \alpha \in [k - 3, k - 2.9] \label{eq:lowerbound} \\ b^{(k)}_{2n} &=& k - \alpha k^2 + \gamma (2n)^2 + 4nk \alpha \in [k+2.9, k + 3.1], \label{eq:upperbound} \een so $B_k \subset [k-3, k+3] + [-\epsilon, \epsilon]$. Now we are going to build a large convex sequence out of blocks $B_k$ with $4 | k$. Since each $B_k$ is already convex, it remains to show how to glue together $B_k$ and $B_{k+4}$ so that the resulting set is again convex. We proceed with the following simple lemma. \begin{lemma} \label{lm:glueing} Let $X = \{x_i \}^N_{i=0}$ and $Y = \{ y_j \}^M_{j=0}$ be two convex sequences and assume there are indices $u$ and $v$ such that $$ [x_u, x_{u+1}] \subset [y_v, y_{v+1}]. $$ Then $$ Z := \{x_i \}^u_{i=0} \cup \{ y_j \}^M_{j=v+1} $$ is a convex sequence. \end{lemma} \begin{proof} Since $[x_u, x_{u+1}] \subset [y_v, y_{v+1}]$ we have that $$ x_{u} - x_{u-1} < x_{u+1} - x_u < y_{v+1} - x_u. $$ On the other hand, $$ y_{v+1} - x_u < y_{v+1} - y_v < y_{v+2} - y_{v+1}. $$ Hence the consecutive gaps of $Z$ are strictly increasing across the junction, and $Z$ is convex. \end{proof} By Lemma \ref{lm:glueing}, in order to merge $B_k$ and $B_{k+4}$ it suffices to find two consecutive elements $b^{(k)}_i, b^{(k)}_{i+1} \in B_k$ in between two consecutive elements $b^{(k+4)}_j, b^{(k+4)}_{j+1} \in B_{k+4}$. Define \be \delta &:=& \max_{i \in [-n, 2n]} \Delta^{(k)}_i \\ \Delta &:=& \min_{i \in [-n, 2n]} \Delta^{(k+4)}_i \ee We have \ben \delta = \gamma(4n+1) + 2k\alpha < \frac{2.1}{n} \label{eq:delta1} \\ \Delta - \delta > 8\alpha - 10n\gamma > \frac{6}{n^2} \label{eq:delta2}. \een Let $b^{(k)}_v$ be the least element in $B_k$ greater than $b^{(k+4)}_{-n}$ (such an element exists by (\ref{eq:upperbound})). We claim that with $m := \lceil n/2 \rceil + 1$ holds $b^{(k+4)}_{-n + m} > b^{(k)}_{v+m}$, which in turn by the pigeonhole principle guarantees the arrangement of elements required by Lemma \ref{lm:glueing}.
Indeed, by our choice of $v$ \beq \label{eq:startdist} 0 \leq d := b^{(k)}_v - b^{(k+4)}_{-n} \leq \delta \eeq But by (\ref{eq:delta1}), (\ref{eq:delta2}) \beq b^{(k+4)}_{-n+m} - b^{(k)}_{v+m} > -d + m(\Delta-\delta) > \frac{3}{n} - \delta > 0, \eeq so the claim follows. It remains to note that $$ b^{(k)}_{v+m} < b^{(k+4)}_{-n} + m\Delta \leq (k+1+\epsilon) + \Big(1 + \frac{4}{n}\Big)^2 < k + 2.2 $$ and thus $v + m < 2n$ by (\ref{eq:upperbound}). This verifies that $b^{(k)}_v, b^{(k)}_{v+m} \in B_k$. \section{Putting everything together} Applying the procedure described in the previous section, we can glue together consecutive blocks $B_{4l}$ with $4l := k \in [0.999n, n]$. Let $A$ be the resulting convex sequence. First, observe that there are $\Omega(n)$ blocks being merged. Moreover, each interval $[4l-1+\epsilon, 4l +1 -\epsilon]$ is covered only by the block $B_{4l}$ and, by (\ref{eq:lowerbound}), (\ref{eq:upperbound}), (\ref{eq:delta1}), contains $\Omega(n)$ elements from $B_{4l}$, so $|A| = \Omega(n^2)$. On the other hand, by our construction, $A$ is contained in the sumset $B+B$ of $B := \{ x_i\}^{2n}_{-2n} \cup \{ y_j\}^{2n}_{-2n}$ of size $O(n)$. \begin{remark} It follows from our construction that there are arbitrarily large convex sets $A$ such that the equation $$ a_1 - a_2 = x :\,\,\, a_1, a_2 \in A $$ has $\Omega(|A|^{1/2})$ solutions $(a_1, a_2)$ for $\Omega(|A|^{1/2})$ values of $x$. \end{remark} \section{Acknowledgments} The first author is supported by ERC-AdG 321104 and Hungarian National Research Development and Innovation Funds K 109789, NK 104183 and K 119528. \noindent The second author is supported by the Knut and Alice Wallenberg postdoctoral fellowship. \noindent The work on this paper was partially carried out while the second author was visiting the R\'enyi Institute of Mathematics by invitation of Endre Szemer\'edi, whose hospitality and support are gratefully acknowledged.
We also thank Peter Hegarty and Ilya Shkredov for useful discussions. \bibliographystyle{plain}
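The gluing procedure itself can also be checked numerically for a modest $n$ (a sketch under the parameters above; the helper names `block`, `glue` and `is_convex` are ours, not the paper's):

```python
# Sketch verification of the gluing step (not part of the paper): merge the
# blocks B_k with 4 | k, k in [0.999 n, n], via the glueing lemma and check
# that the result is a convex sequence with Omega(n) points per merged block.
from bisect import bisect_left
from fractions import Fraction

n = 8000
alpha = Fraction(1, n ** 2)
gamma = Fraction(1, 1000 * n ** 3)

def block(k):
    # b^{(k)}_i = (k - alpha*k^2) + gamma*i^2 + 2*i*k*alpha, i in [-n, 2n]
    base = k - alpha * k ** 2
    return [base + gamma * i ** 2 + 2 * i * k * alpha
            for i in range(-n, 2 * n + 1)]

def glue(Z, Y):
    """Merge the finer convex sequence Z with the coarser block Y: find
    indices u, v with Y[v] <= Z[u] < Z[u+1] <= Y[v+1] (the containment of
    the glueing lemma) and return Z[:u+1] + Y[v+1:]."""
    for v in range(len(Y) - 1):
        u = bisect_left(Z, Y[v])             # least index with Z[u] >= Y[v]
        if u + 1 < len(Z) and Z[u + 1] <= Y[v + 1]:
            return Z[:u + 1] + Y[v + 1:]
    raise AssertionError("no admissible pair of consecutive elements")

def is_convex(A):
    d = [A[i + 1] - A[i] for i in range(len(A) - 1)]
    return all(x > 0 for x in d) and \
        all(d[i] <= d[i + 1] for i in range(len(d) - 1))

start = -(-int(0.999 * n) // 4) * 4          # first multiple of 4 >= .999 n
ks = list(range(start, n + 1, 4))            # here: 7992, 7996, 8000
A = block(ks[0])
for k in ks[1:]:
    A = glue(A, block(k))

assert is_convex(A) and len(A) > len(ks) * n
```

The range $[0.999n, n]$ contains only $\Omega(n)$ multiples of $4$, so a run with a fixed $n$ exhibits just a few blocks; the pigeonhole argument guarantees that `glue` always finds an admissible pair within the first $\lceil n/2\rceil + 1$ gaps it inspects.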
https://arxiv.org/abs/0908.4205
Detecting Hilbert manifolds among isometrically homogeneous metric spaces
We detect Hilbert manifolds among isometrically homogeneous metric spaces and apply the obtained results to recognizing Hilbert manifolds among homogeneous spaces of the form G/H where G is a metrizable topological group and H is a closed balanced subgroup of G.
\section{Introduction} The problem of detecting topological groups that are locally homeomorphic to (finite or infinite)-dimensional Hilbert spaces traces its history back to the fifth problem of David Hilbert concerning the recognition of Lie groups in the class of topological groups. This problem was resolved by the combined efforts of A.~Gleason \cite{Gle}, D.~Montgomery, L.~Zippin \cite{MZ}, and K.~Hofmann \cite{Hofmann}. According to their results, a topological group $G$ is a Lie group if and only if $G$ is locally compact and locally contractible. In this case $G$ is an Euclidean manifold, that is, a manifold modeled on an Euclidean space $\mathbb R^n$. The next step was made in 1981 by T.~Dobrowolski and H.~Toru\'nczyk \cite{DT}. They proved that a topological group $G$ is a manifold modeled on a separable Hilbert space if and only if $G$ is a locally Polish ANR. A topological space $X$ is called {\em locally Polish} if each point $x\in X$ has a Polish (i.e. separable completely metrizable) neighborhood. Most recently, T.~Banakh and I.~Zarichnyy \cite{BZ} proved in 2008 that a topological group $G$ is a manifold modeled on an infinite-dimensional Hilbert space if and only if $G$ is a completely metrizable ANR with LFAP. A topological space $X$ is said to have the {\em Locally Finite Approximation Property} (abbreviated LFAP) if for every open cover $\mathcal U$ there are maps $f_n:X\to X$, $n\in\omega$, such that each $f_n$ is $\mathcal U$-near to the identity map and the indexed family $\{f_n(X)\}_{n\in\omega}$ is locally finite in $X$. This property was crucial in Toru\'nczyk's characterization \cite{Tor81} of non-separable Hilbert manifolds. \medskip By the Birkhoff-Kakutani Metrization Theorem \cite[2.5]{Tk}, the topology of any first countable topological group $G$ is generated by a left-invariant metric. This metric turns $G$ into an isometrically homogeneous metric space.
We define a metric space $X$ to be {\em isometrically homogeneous} if for any two points $x,y\in X$ there is a bijective isometry $f:X\to X$ such that $f(x)=y$. This notion is a metric analogue of the well-known notion of a topologically homogeneous space. We recall that a topological space $X$ is called {\em topologically homogeneous} if for any two points $x,y\in X$ there is a homeomorphism $f:X\to X$ such that $f(x)=y$. In light of the mentioned results the following open problem arises naturally: \begin{problem} \label{prob1} How can one detect Euclidean and Hilbert manifolds among isometrically homogeneous metric spaces? \end{problem} For the Euclidean case of this problem we have the following answer which will be derived in Section~\ref{tbg} from a result of J.~Szenthe \cite{Szenthe}. \begin{theorem}\label{t1} An isometrically homogeneous metric space $X$ is an Euclidean manifold if and only if $X$ is locally compact and locally contractible. \end{theorem} The Hilbert case of Problem~\ref{prob1} is more difficult. We shall answer this problem under the additional assumption that the isometrically homogeneous space is $\mathbb I^{<\omega}{\sim}$homogeneous. The class of such spaces includes all metric groups (that is, topological groups endowed with an admissible left-invariant metric) and also quotient spaces $G/H$ of metric groups $G$ by closed balanced subgroups $H\subset G$ (cf. Corollary~\ref{cor1}). To introduce $\mathbb I^{<\omega}{\sim}$homogeneous metric spaces, let us first observe that a metric space $X$ is isometrically homogeneous if and only if the action of the isometry group $\mathrm{Iso}(X)$ on $X$ is transitive. This is equivalent to saying that for each point $\theta\in X$ the map $$\alpha_\theta:\mathrm{Iso}(X)\to X,\;\;\alpha_\theta:f\mapsto f(\theta),$$ is surjective.
It is well-known (and easy to check) that the isometry group $\mathrm{Iso}(X)$ of a metric space $X$ is a topological group with respect to the topology of pointwise convergence (that is, the topology inherited from the Tychonov power $X^X$). Moreover, the natural action $$\alpha:\mathrm{Iso}(X)\times X\to X,\;\alpha:(f,x)\mapsto f(x),$$ of $\mathrm{Iso}(X)$ on $X$ is continuous. Let $T$ be a topological space. We define a map $q:X\to Y$ between topological spaces to be \begin{itemize} \item {\em $T{-}$invertible} if for each continuous map $f:T\to Y$ there is a continuous map $g:T\to X$ such that $q\circ g=f$; \item {\em $T{\sim}$invertible} if for each continuous map $f:T\to Y$ and an open cover $\mathcal U$ of $Y$ there is a continuous map $g:T\to X$ such that $q\circ g$ is $\mathcal U$-near to $f$ (in the sense that for each $t\in T$ there is $U\in\mathcal U$ with $\{f(t),q\circ g(t)\}\subset U$). \end{itemize} Observe that a map $q:X\to Y$ is $\mathbb I^0{-}$invertible if and only if $q(X)=Y$ and $q$ is $\mathbb I^0{\sim}$invertible if and only if $q(X)$ is dense in $Y$ (here $\mathbb I^0$ is a singleton). We define a metric space $X$ to be {\em $T{-}$homogeneous} (resp. {\em $T{\sim}$homogeneous}), where $T$ is a topological space, if for some point $\theta\in X$ the map $\alpha_\theta:\mathrm{Iso}(X)\to X$ is $T$-invertible (resp. $T{\sim}$invertible). Let us observe that each metric group $G$ (that is, a topological group endowed with an admissible left-invariant metric) is a $G$-homogeneous metric space. This follows from the fact that for the neutral element $\theta$ of $G$ the map $\alpha_\theta:\mathrm{Iso}(G)\to G$ admits a continuous section $s:G\to \mathrm{Iso}(G)$ defined by $s:g\mapsto l_g$, where $l_g:x\mapsto gx$ is the left shift. We shall be interested in the $T{-}$ and $T{\sim}$homogeneity in case $T$ is a (finite- or infinite-dimensional) cube $\mathbb I^n$.
Observe that a metric space $X$ is $\mathbb I^0{-}$homogeneous if and only if $X$ is isometrically homogeneous, and $X$ is $\mathbb I^0{\sim}$homogeneous if and only if some point $\theta\in X$ has a dense orbit under the action of the isometry group $\mathrm{Iso}(X)$. On the other hand, a metric space $X$ is $\mathbb I^n{-}$homogeneous for all $n\in\omega$ if and only if $X$ is $\mathbb I^{<\omega}{-}$homogeneous for the topological sum $\mathbb I^{<\omega}=\oplus_{n\in\omega}\mathbb I^n$ of finite-dimensional cubes. A metric space $X$ is $\mathbb I^{<\omega}{\sim}$homogeneous if and only if it is $\mathbb I^\omega{\sim}$homogeneous. \smallskip For each metric space $X$ those homogeneity properties relate as follows: \smallskip \begin{picture}(300,160)(-30,-28) \put(9,110){metric group} \put(170,110){topologically homogeneous} \put(225,95){$\Uparrow$} \put(35,94){$\Downarrow$} \put(0,80){$X{-}$homogeneous} \put(35,64){$\Downarrow$} \put(170,80){isometrically homogeneous} \put(225,64){$\Updownarrow$} \put(0,50){$\mathbb I^\omega{-}$homogeneous $\Rightarrow$ $\mathbb I^{<\omega}{-}$homogeneous $\Rightarrow$ $\mathbb I^{0}{-}$homogeneous} \put(35,34){$\Downarrow$} \put(0,20){$\mathbb I^\omega{\sim}$homogeneous $\Leftrightarrow$ $\mathbb I^{<\omega}{\sim}$homogeneous $\Rightarrow$ $\mathbb I^{0}{\sim}$homogeneous} \put(140,34){$\Downarrow$} \put(225,34){$\Downarrow$} \put(175,-13){\vector(-1,0){90}} \put(38,13){\vector(0,-1){17}} \put(41,4){\tiny +} \put(50,10){\tiny{\it isometrically}} \put(50,4){\tiny{\it homogeneous}} \put(50,-3){\tiny{\it Polish ANR}} \put(228,13){\vector(0,-1){17}} \put(240,8){\tiny{\it locally compact and}} \put(232,4){\tiny +} \put(241,0){\tiny{\it locally contractible}} \put(2,-15){Hilbert manifold} \put(185,-15){Euclidean manifold} \end{picture} The last two implications in the diagram hold under additional assumptions on the local structure of $X$ and are established in the following theorem that recognizes Hilbert manifolds among $\mathbb
I^{<\omega}{\sim}$homogeneous metric spaces (and will be proved in Section~\ref{pf-main}). \begin{theorem}\label{main} An isometrically homogeneous $\mathbb I^{<\omega}{\sim}$homogeneous metric space $X$ is a manifold modeled on \begin{enumerate} \item an Euclidean space if and only if $X$ is locally precompact, locally Polish, and locally contractible; \item a separable Hilbert space if and only if $X$ is a locally Polish ANR; \item an infinite-dimensional Hilbert space if and only if $X$ is completely-metrizable ANR with LFAP. \end{enumerate} \end{theorem} We explain some of the notions appearing in this theorem. A metric space is said to be ({\em locally}) {\em precompact} if its completion is (locally) compact. A topological space $X$ is called {\em locally Polish} if each point of $X$ has a Polish (= separable completely-metrizable) neighborhood; $X$ is said to be {\em completely-metrizable} if its topology is generated by a complete metric. ANR is the standard abbreviation for the absolute neighborhood retracts in the class of metrizable spaces. \medskip \section{Detecting Hilbert manifolds among quotient spaces of topological groups} In this section we shall apply Theorem~\ref{main} to detecting Hilbert manifolds among homogeneous spaces of the form $G/H=\{xH:x\in G\}$ where $H$ is a closed subgroup of a topological group $G$ and $G/H$ is endowed with the quotient topology. We define a subgroup $H$ of a topological group $G$ to be {\em balanced} if for every neighborhood $U\subset G$ of the neutral element $e\in G$ there is a neighborhood $V\subset G$ of $e$ such that $HV\subset UH$. \begin{corollary}\label{cor1} Let $H\subset G$ be a balanced closed subgroup of a metrizable topological group $G$ such that the quotient map $q:G\to G/H$ is $\mathbb I^{<\omega}{\sim}$invertible. 
The space $G/H$ is a manifold modeled on \begin{enumerate} \item an Euclidean space if and only if $G/H$ is locally compact and locally contractible; \item a separable Hilbert space if and only if $G/H$ is a locally Polish ANR; \item an infinite-dimensional Hilbert space if and only if $G/H$ is a completely-metrizable ANR with LFAP. \end{enumerate} \end{corollary} \begin{proof} By the Birkhoff-Kakutani Theorem \cite[2.5]{Tk}, the topology of $G$ is generated by a bounded left-invariant metric $d$. This metric induces the Hausdorff metric $$d_H(A,B)=\max\{\sup_{a\in A}d(a,B),\sup_{b\in B}d(b,A)\}$$on the hyperspace $2^G$ of all non-empty closed subsets of $G$. Endow the quotient space $G/H=\{xH:x\in G\}$ with the Hausdorff metric $d_H$ and observe that for each $g\in G$ the left shift $l_g:G/H\to G/H$, $l_g:xH\mapsto gxH$, is an isometry of $G/H$. Therefore, the Hausdorff metric turns $G/H$ into an isometrically homogeneous metric space. We claim that this metric generates the quotient topology on $G/H$. Because of the homogeneity, it suffices to check that $d_H$ generates the quotient topology at the distinguished element $H$ of $G/H$. Fix a basic neighborhood $U\cdot H=\{uH:u\in U\}\subset G/H$ of $H$, where $U\subset G$ is a neighborhood of the neutral element $e$ in $G$. Since $H$ is balanced, there is a neighborhood $V\subset G$ of $e$ such that $HV\subset UH$. Find $\varepsilon>0$ such that $B_\varepsilon\subset V$ where $B_\varepsilon=\{x\in G:d(x,e)<\varepsilon\}$ is the $\varepsilon$-ball centered at $e$. Then for each coset $xH\in G/H$ with $d_H(xH,H)<\varepsilon$ the left-invariance of $d$ yields $xH\subset HB_\varepsilon\subset HV\subset UH$. This shows that the topology on $G/H$ generated by the Hausdorff metric $d_H$ is stronger than the quotient topology. Next, given any $\varepsilon>0$, use the balanced property of $H$ to find a neighborhood $V=V^{-1}\subset G$ of $e$ such that $HV\subset B_\varepsilon H$. Then $VH=(HV)^{-1}\subset (B_\varepsilon H)^{-1}\subset HB_\varepsilon$.
Consequently, for every $v\in V$ we get $vH\subset HB_{\varepsilon}$. Since $v^{-1}\in V$ we also get $v^{-1}H\subset HB_{\varepsilon}$ and $H\subset vHB_\varepsilon$. The inclusions $vH\subset HB_{\varepsilon}$ and $H\subset vHB_\varepsilon$ imply that $d_H(H,vH)\le\varepsilon$. Consequently, $V\cdot H\subset\{xH\in G/H:d_H(xH,H)\le\varepsilon\}$, which shows that the quotient topology on $G/H$ is stronger than the topology generated by the Hausdorff metric $d_H$ on $G/H$. \smallskip The $\mathbb I^{<\omega}{\sim}$invertibility of the quotient map $q:G\to G/H$ implies the $\mathbb I^{<\omega}{\sim}$homo\-geneity of the isometrically homogeneous metric space $(G/H,d_H)$. Now the statements (1)--(3) follow immediately from Theorem~\ref{main}. \end{proof} A topological space $X$ is defined to be $\LC[<\omega]$ if for each point $x\in X$, each neighborhood $U\subset X$ of $x$, and every $k<\omega$ there is a neighborhood $V\subset U$ of $x$ such that each map $f:S^k\to V$ is null homotopic in $U$. \begin{corollary}\label{cor2} Let $H\subset G$ be a completely-metrizable balanced $\LC[<\omega]$-subgroup of a metrizable topological group $G$. The space $G/H$ is a manifold modeled on \begin{enumerate} \item an Euclidean space if and only if $G/H$ is locally compact and locally contractible; \item a separable Hilbert space if and only if $G/H$ is a locally Polish ANR; \item an infinite-dimensional Hilbert space if and only if $G/H$ is a completely-metrizable ANR with LFAP. \end{enumerate} \end{corollary} \begin{proof} This corollary will follow from Corollary~\ref{cor1} as soon as we check that the quotient map $q:G\to G/H$ is $\mathbb I^{<\omega}{-}$invertible. For this we shall apply the Finite-Dimensional Selection Theorem of E.~Michael \cite{Mi}. Let $d$ be a left-invariant metric generating the topology of the group $G$. This metric induces an admissible metric $\rho(x,y)=d(x,y)+d(x^{-1},y^{-1})$ on $G$.
It is well-known that the completion $\bar G$ of $G$ by the metric $\rho$ has the structure of a topological group. The subgroup $H\subset G\subset\bar G$, being completely-metrizable, is closed in $\bar G$. The $\mathbb I^{<\omega}{-}$invertibility of the quotient map $q:G\to G/H$ will follow from the Michael Selection Theorem \cite{Mi} as soon as we check that the family $\{xH:x\in G\}$ is equi-$\LC[n]$ for every $n\in\omega$. The latter means that for every $x_0\in G$ and a neighborhood $U(x_0)\subset G$ of $x_0$ there is another neighborhood $V(x_0)\subset U(x_0)$ of $x_0$ such that each map $f:S^n\to xH\cap V(x_0)$ from the $n$-dimensional sphere into a coset $xH\in G/H$, $x\in G$, is null homotopic in $xH\cap U(x_0)$. Find a neighborhood $U\subset G$ of the neutral element $e$ of $G$ such that $x_0U^2\subset U(x_0)$. Since $H$ is $\LC[<\omega]$, there is a neighborhood $W\subset G$ of $e$ such that each map $f:S^n\to H\cap W$ is null homotopic in $U\cap H$. Find a neighborhood $V\subset U$ of $e$ such that $x_0^{-1}V^{-1}Vx_0\subset W$. We claim that the neighborhood $V(x_0)=Vx_0\cap x_0V$ has the desired property. Indeed, fix any map $f:S^n\to xH\cap V(x_0)$ where $x\in V(x_0)$. Consider the left shift $l_{x^{-1}}:g\mapsto x^{-1}g$, and observe that $$l_{x^{-1}}\circ f(S^n)\subset H\cap x^{-1}V(x_0)\subset x_0^{-1}V^{-1}Vx_0\subset W.$$ Now the choice of $W$ ensures that the map $l_{x^{-1}}\circ f$ is null-homotopic in $H\cap U$ and hence $f$ is null-homotopic in $$xH\cap xU\subset xH\cap x_0VU\subset xH\cap x_0U^2\subset xH\cap U(x_0).$$ \end{proof} Since the trivial subgroup is balanced, Corollary~\ref{cor2} implies the following three results due to K.~Hofmann \cite{Hofmann}, T.~Dobrowolski, H.~Toru\'nczyk \cite{DT}, and T.~Banakh, I.~Zarichnyy \cite{BZ}, respectively.
\begin{corollary}\label{cor3} A topological group $G$ is a manifold modeled on \begin{enumerate} \item an Euclidean space if and only if $G$ is locally compact and locally contractible; \item a separable Hilbert space if and only if $G$ is a locally Polish ANR; \item an infinite-dimensional Hilbert space if and only if $G$ is a completely-metrizable ANR with LFAP. \end{enumerate} \end{corollary} It should be mentioned that the requirement on the subgroup $H\subset G$ to be balanced is essential in Corollaries~\ref{cor1} and \ref{cor2}. \begin{example} By \cite{Fer}, the homeomorphism group $\mathcal H(\mathbb I^\omega)$ of the Hilbert cube $\mathbb I^\omega$ is a Polish ANR. Moreover, by Corollary 4.12 of \cite{Fer}, for any point $\theta\in \mathbb I^\omega$ the closed subgroup $\mathcal H_\theta(\mathbb I^\omega)=\{h\in\mathcal H(\mathbb I^\omega):h(\theta)=\theta\}\subset\mathcal H(\mathbb I^\omega)$ is an ANR as well. This subgroup is not balanced because otherwise the Hilbert cube $\mathbb I^\omega=\mathcal H(\mathbb I^\omega)/\mathcal H_\theta(\mathbb I^\omega)$ would be an Euclidean manifold by Corollary~\ref{cor2}(1). Next, we show that the quotient map $q:\mathcal H(\mathbb I^\omega)\to \mathcal H(\mathbb I^\omega)/\mathcal H_\theta(\mathbb I^\omega)$ is $\mathbb I^\omega{\sim}$invertible but not $\mathbb I^\omega{-}$invertible. The $\mathbb I^\omega{\sim}$invertibility of $q$ follows from the $\LC[<\omega]$-property of the subgroup $\mathcal H_\theta(\mathbb I^\omega)$ and the Finite-Dimensional Michael Selection Theorem \cite{Mi}. On the other hand, the fixed point property of $\mathbb I^\omega$ implies that the quotient map $q$ is not $\mathbb I^\omega$-invertible.
Indeed, assuming that $q$ has a section $s:\mathbb I^\omega\to \mathcal H(\mathbb I^\omega)$, $s:x\mapsto s_x\in \mathcal H(\mathbb I^\omega)$, and taking any homeomorphism $g\in \mathcal H(\mathbb I^\omega)\setminus \mathcal H_\theta(\mathbb I^\omega)$, we would get a continuous map $f:\mathbb I^\omega\to \mathbb I^\omega$, $f:x\mapsto q(s_x\circ g)$, without a fixed point. Indeed, assuming that $f(x)=x$ for some $x\in\mathbb I^\omega$, we would get $$x=f(x)=q(s_x\circ g)=s_x\circ g(\theta).$$ Since $x=q(s_x)=s_x(\theta)$, this would imply that $g(\theta)=\theta$ and hence $g\in\mathcal H_\theta(\mathbb I^\omega)$, which contradicts the choice of the homeomorphism $g$. \end{example} \begin{problem} Let $H$ be a closed subgroup of a Polish ANR-group $G$ such that the quotient map $q:G\to G/H$ is a locally trivial bundle. Is the quotient space $G/H$ a Hilbert manifold? \end{problem} Another related problem was posed in \cite{HR}: \begin{problem} Let $H$ be a closed ANR-subgroup of a Polish ANR-group $G$. Is $G/H$ a manifold modeled on a Hilbert space or the Hilbert cube? \end{problem} \section{Locally precompact isometrically homogeneous metric spaces}\label{tbg} In this section we shall study locally precompact isometrically homogeneous metric spaces. We recall that a metric space is locally precompact if its completion is locally compact. The following theorem implies Theorem~\ref{t1} announced in the introduction. \begin{theorem}\label{szenthe} An isometrically homogeneous metric space $X$ is an Euclidean manifold if and only if $X$ is locally precompact, locally Polish, and locally contractible. \end{theorem} \begin{proof} If $X$ is an Euclidean manifold, then $X$ is locally compact and locally contractible. The local precompactness of $X$ will follow as soon as we show that $X$ is complete. Take any point $x_0$ in the completion $\bar X$ of the metric space $X$.
Fix any point $\theta\in X$ and by the local compactness of $X$, find an $\varepsilon>0$ such that the closed $\varepsilon$-ball $B(\theta,\varepsilon)=\{x\in X:\mathrm{dist}(x,\theta)\le\varepsilon\}$ is compact. Since $X$ is isometrically homogeneous, there is a bijective isometry $f:X\to X$ such that $\mathrm{dist}(x_0,f(\theta))<\varepsilon/2$. It follows that the $\varepsilon$-ball $B(f(\theta),\varepsilon)=\{x\in X:\mathrm{dist}(x,f(\theta))\le\varepsilon\}$ contains the point $x_0$ in its closure in $\bar X$. Since $B(f(\theta),\varepsilon)=f(B(\theta,\varepsilon))$ is compact, $x_0\in B(f(\theta),\varepsilon)\subset X$. Thus $X=\bar X$ is a complete metric space. Being locally compact, this space is locally precompact. \smallskip Now assume that $X$ is locally precompact, locally Polish, and locally contractible. We need to show that $X$ is an Euclidean manifold. It suffices to check that each connected component of $X$ is an Euclidean manifold. Since connected components of $X$ are isometrically homogeneous, we lose no generality assuming that $X$ is connected. The completion $\bar X$ of the locally precompact space $X$ is locally compact. By \cite{DW} (cf. also \cite[Th.I.4.7]{KN}), the isometry group $\mathrm{Iso}(\bar X)$ of $\bar X$ is locally compact, metrizable, and separable. Moreover, for every point $\theta\in X$ the map $$\alpha_\theta:\mathrm{Iso}(\bar X)\to \bar X$$ is proper in the sense that it is closed and the stabilizer $\mathrm{Iso}(\bar X,\theta)=\{f\in\mathrm{Iso}(\bar X):f(\theta)=\theta\}$ is compact (cf. \cite{MS}). We are going to prove that the metric space $X$ is complete. For this consider the subgroup $$\mathrm{Iso}(X)=\{f\in\mathrm{Iso}(\bar X):f(X)=X\}\subset\mathrm{Iso}(\bar X)$$ in the group $\mathrm{Iso}(\bar X)$. Let us show that this subgroup is coanalytic in $\mathrm{Iso}(\bar X)$. The subspace $X$ of $\bar X$, being (locally) Polish, is a $G_\delta$-set in $\bar X$.
Consequently, its complement $\bar X\setminus X$ can be written as the countable union $\bar X\setminus X=\bigcup_{n\in\omega}K_n$ of non-empty compact sets. Observe that $$\mathrm{Iso}(X)=\bigcap_{n\in\omega}\{f\in\mathrm{Iso}(\bar X):f(K_n)\cup f^{-1}(K_n)\subset \bar X\setminus X\}.$$ Let $\exp(\bar X)$ be the space of non-empty compact subsets of $\bar X$ endowed with the Hausdorff metric. For every $n\in\omega$ consider the continuous map $$\xi_n:\mathrm{Iso}(\bar X)\to\exp(\bar X),\;\;\;\xi_n:f\mapsto f(K_n)\cup f^{-1}(K_n).$$ By \cite[33.B]{Ke}, the subspace $\exp(\bar X\setminus X)=\{K\in\exp(\bar X):K\subset\bar X\setminus X\}$ is coanalytic and so is its preimage $\xi_n^{-1}(\exp(\bar X\setminus X))$ for $n\in\omega$. Since $$\mathrm{Iso}(X)=\bigcap_{n\in\omega}\xi_n^{-1}(\exp(\bar X\setminus X)),$$ we see that the subgroup $\mathrm{Iso}(X)$ is coanalytic in $\mathrm{Iso}(\bar X)$. Then $\mathrm{Iso}(X)$ has the Baire property in $\mathrm{Iso}(\bar X)$ and hence either is meager or is closed in $\mathrm{Iso}(\bar X)$ according to \cite[9.9]{Ke}. If $\mathrm{Iso}(X)$ is closed in $\mathrm{Iso}(\bar X)$, then $X=\alpha_\theta(\mathrm{Iso}(X))=\bar X$ (because the map $\alpha_\theta:\mathrm{Iso}(\bar X)\to\bar X$ is closed) and we are done. It remains to prove that the assumption that $\mathrm{Iso}(X)$ is meager leads to a contradiction. Let $G$ be the closure of the subgroup $\mathrm{Iso}(X)$ in $\mathrm{Iso}(\bar X)$. Taking into account that the map $\alpha_\theta:\mathrm{Iso}(\bar X)\to \bar X$ is closed and $X=\alpha_\theta(\mathrm{Iso}(X))$ is dense in $\bar X$, we conclude that $\bar X =\alpha_\theta(G)$. It follows from the local compactness of $G$ and the properness of the action $\alpha_\theta:G\to\bar X$ that the map $\alpha_\theta$ is open. Then the image $\alpha_\theta(\mathrm{Iso}(X))=X$ of the meager subgroup $\mathrm{Iso}(X)$ of $G$ is a meager subset of $\bar X$, which is not possible as $X$ is a dense $G_\delta$-set in $\bar X$.
This contradiction completes the proof of the completeness of $X$. Now we see that the connected, locally compact, and locally contractible space $X=\bar X$ admits an effective transitive action of the locally compact group $\mathrm{Iso}(X)$. By Theorem~3 of J.~Szenthe \cite{Szenthe}, $\mathrm{Iso}(X)$ is a Lie group and $X$ is an Euclidean manifold. \end{proof} \section{Characterizing the topology of Hilbert manifolds} In order to prove Theorem~\ref{main}(2,3) we shall apply Toru\'nczyk's celebrated characterization of the topology of infinite-dimensional Hilbert manifolds. The key ingredient of this characterization is the $\kappa$-discrete $m$-cells property defined for cardinals $\kappa$ and $m$ as follows. We say that a topological space $X$ satisfies the {\em $\kappa$-discrete $m$-cells property} if for every map $f:\kappa\times \mathbb I^m\to X$ and every open cover $\mathcal U$ of $X$ there is a map $g:\kappa\times \mathbb I^m\to X$ such that $g$ is $\mathcal U$-near to $f$ and the family $\big\{g(\{\alpha\}\times \mathbb I^m)\big\}_{\alpha\in\kappa}$ is discrete in $X$ (here we identify the cardinal $\kappa$ with the discrete space of all ordinals $<\kappa$). The following characterization theorem is due to H.~Toru\'nczyk \cite{Tor81}. \begin{theorem}[Toru\'nczyk]\label{tor1} A metrizable space $X$ is a manifold modeled on an infinite-dimensional Hilbert space $l_2(\kappa)$ of density $\kappa\ge\omega$ if and only if $X$ has the following properties: \begin{enumerate} \item $X$ is a completely metrizable ANR; \item each connected component of $X$ has density $\le\kappa$; \item $X$ has the $\kappa$-discrete $m$-cells property for all $m<\omega$; and \item $X$ has LFAP.
\end{enumerate} \end{theorem} For manifolds modeled on the separable Hilbert space $l_2$ this characterization can be simplified as follows: \begin{theorem}[Toru\'nczyk]\label{tor2} A metrizable space $X$ is an $l_2$-manifold if and only if $X$ is a locally Polish ANR with the $\omega$-discrete $\omega$-cells property. \end{theorem} Thus the problem of recognition of Hilbert manifolds reduces to detecting the $\kappa$-discrete $m$-cells property. For spaces with LFAP the latter problem can be reduced to cardinals $\kappa$ with uncountable cofinality. The following lemma is proved in \cite{BZ}. \begin{lemma}\label{l1} A paracompact space $X$ with $\omega$-LFAP has $\kappa$-discrete $m$-cells property for a cardinal $\kappa$ if and only if $X$ has the $\lambda$-discrete $m$-cells property for all cardinals $\lambda\le\kappa$ of uncountable cofinality. \end{lemma} In fact, the $\kappa$-discrete $m$-cells property follows from its metric counterpart called the $\kappa$-separated $m$-cells property. Following \cite{BZ}, we define a metric space $(X,\rho)$ to have the {\em $\kappa$-separated $m$-cells property} if for every $\varepsilon>0$ there is $\delta>0$ such that for every map $f:\kappa\times \mathbb I^m\to X$ there is a map $g:\kappa\times \mathbb I^m\to X$ that is $\varepsilon$-homotopic to $f$ and such that $$\mathrm{dist}\big(g(\{\alpha\}\times \mathbb I^m),g(\{\beta\}\times \mathbb I^m)\big)\ge \delta$$ for all ordinals $\alpha<\beta<\kappa$. The following lemma was proved in \cite{BZ} by the method of the proof of Lemma 1 in \cite{DT}. \begin{lemma}\label{l2} Each metric space $X$ with the $\kappa$-separated $m$-cells property has the $\kappa$-discrete $m$-cells property. \end{lemma} According to Lemma~6 of \cite{BZ}, the $\kappa$-separated $m$-cells property can be characterized as follows: \begin{lemma}\label{l3} Let $m\le \omega\le\kappa$ be two cardinals. 
A metric space $X$ has the $\kappa$-separated $m$-cells property if and only if for every $\varepsilon>0$ there is $\delta>0$ such that for every subset $A\subset X$ of cardinality $|A|<\kappa$, and every map $f:\mathbb I^d\to X$ of a cube of finite dimension $d\le m$ there is a map $g:\mathbb I^d\to X$ that is $\varepsilon$-homotopic to $f$ and has $\mathrm{dist}(g(\mathbb I^d),A)\ge\delta$. \end{lemma} \section{The $\kappa$-separated $m$-cells property in metric spaces} In this section we shall establish the $\kappa$-separated $m$-cells property in $\mathbb I^m{\sim}$homo\-geneous metric spaces. A subset $S$ of a metric space $X$ is called {\em separated} if it is {\em $\varepsilon$-separated} for some $\varepsilon>0$. The latter means that $\mathrm{dist}(x,y)\ge\varepsilon$ for any distinct points $x,y\in S$. \begin{lemma}\label{l4} Let $m\le \omega\le\kappa$ be two cardinals. An $\mathbb I^m{\sim}$homogeneous metric $\LC[<\omega]$-space $X$ has the $\kappa$-separated $m$-cells property if each non-empty open subset of $X$ contains a separated subset of cardinality $\kappa$. \end{lemma} \begin{proof} Assume that each non-empty open subset of $X$ contains a separated subset of cardinality $\kappa$. According to Lemma~\ref{l3}, the $\kappa$-separated $m$-cells property of $X$ will follow as soon as given $\varepsilon>0$ we find $\delta>0$ such that for every subset $A\subset X$ of cardinality $|A|<\kappa$ and every map $f:\mathbb I^d\to X$ of a cube of finite dimension $d\le m$ there is a map $\tilde f:\mathbb I^d\to X$ which is $2\varepsilon$-homotopic to $f$ and such that $\mathrm{dist}(\tilde f(\mathbb I^d),A)\ge\delta$. Being $\mathbb I^m{\sim}$homogeneous, the space $X$ contains a point $\theta\in X$ such that the map $$\alpha_\theta:\mathrm{Iso}(X)\to X,\;\;\alpha_\theta:f\mapsto f(\theta),$$is $\mathbb I^m{\sim}$invertible. Being $\LC[<\omega]$, the space $X$ is locally path-connected at $\theta$.
Consequently, there is $\delta_1>0$ such that each point $y\in B(\theta,\delta_1)\subset X$ can be linked with $\theta$ by a path of diameter $<\varepsilon$. By our hypothesis, the $\delta_1$-ball $B(\theta,\delta_1)$ contains a separated subset $S\subset B(\theta,\delta_1)$ of size $|S|=\kappa$. Since $S$ is separated, the number $$\delta=\frac13\inf\{\mathrm{dist}(s,t):s,t\in S,\; s\ne t\}$$is strictly positive. We claim that the number $\delta$ satisfies our requirements. Indeed, take any subset $A\subset X$ of cardinality $|A|<\kappa$ and fix any map $f:\mathbb I^d\to X$ from a cube of finite dimension $d\le m$. Since $X$ is $\LC[<\omega]$, there is $\varepsilon'>0$ such that any map $f':\mathbb I^d\to X$ that is $\varepsilon'$-near to $f$ is $\varepsilon$-homotopic to $f$ (cf. \cite[V.5.1]{Hu}). By our hypothesis, the map $\alpha_\theta$ is $\mathbb I^m{\sim}$invertible. Therefore there is a map $g:\mathbb I^d\to \mathrm{Iso}(X)$ such that the composition $f'=\alpha_\theta\circ g$ is $\varepsilon'$-near to $f$. By the choice of $\varepsilon'$, the map $f'$ is $\varepsilon$-homotopic to $f$. We recall that $$\alpha:\mathrm{Iso}(X)\times X\to X,\;\;\alpha:(f,x)\mapsto f(x),$$denotes the action of the isometry group on $X$. \begin{claim}\label{claim1} There is a point $s\in S$ such that $\mathrm{dist}(\alpha(g(\mathbb I^d)\times\{s\}),A)\ge\delta$. \end{claim} The proof depends on the value of the cardinal $\kappa$. If $\kappa$ is uncountable, then we can fix a dense subset $Q\subset g(\mathbb I^d)\times A$ of cardinality $|Q|\le\mathrm{dens}(g(\mathbb I^d)\times A)\le\max\{\omega,|A|\}<\kappa$. Assuming that Claim~\ref{claim1} is false, we could find for every $s\in S$ a pair $(q_s,a_s)\in Q$ such that $\mathrm{dist}(\alpha(q_s,s),a_s)<\delta$. The strict inequality $|Q|<\kappa\le|S|$ implies the existence of two distinct points $s,t\in S$ with $(q_s,a_s)=(q_t,a_t)$. 
Let $x=q_s=q_t$ and observe that $$\begin{aligned} 3\delta\le&\mathrm{dist}(s,t)=\mathrm{dist}(\alpha(x,s),\alpha(x,t))\le\\ \le&\mathrm{dist}(\alpha(q_s,s),a_s)+\mathrm{dist}(a_t,\alpha(q_t,t))<2\delta,\end{aligned}$$ which is a contradiction, proving Claim~\ref{claim1} for an uncountable $\kappa$. \smallskip In case of a countable $\kappa$ the argument is a bit different. In this case the set $A$ is finite. We claim that for every $a\in A$ the set $$K_a=\{x\in X:\exists y\in g(\mathbb I^d)\mbox{ with }\alpha(y,x)=a\}$$ is compact. This will follow as soon as we check that each sequence $(x_n)_{n\in\omega}\subset K_a$ has a cluster point $x_\infty\in K_a$. For every $n\in\omega$ find an isometry $y_n\in g(\mathbb I^d)\subset\mathrm{Iso}(X)$ such that $\alpha(y_n,x_n)=a$. By the compactness of $g(\mathbb I^d)$, the sequence $(y_n)$ has a cluster point $y_\infty\in g(\mathbb I^d)$. Observe that the point $x_\infty=y_\infty^{-1}(a)$ belongs to $K_a$. We claim that $x_\infty$ is a cluster point of the sequence $(x_n)$. Given any $\eta>0$ and $n\in\omega$, we need to find $p\ge n$ such that $\mathrm{dist}(x_p,x_\infty)<\eta$. Since $y_\infty$ is a cluster point of $(y_i)$, there is a number $p\ge n$ such that $\mathrm{dist}(y_\infty(x_\infty),y_p(x_\infty))<\eta$. Then $$\mathrm{dist}(x_p,x_\infty)=\mathrm{dist}(y_p(x_p),y_p(x_\infty))=\mathrm{dist}(a,y_p(x_\infty))=\mathrm{dist}(y_\infty(x_\infty),y_p(x_\infty))<\eta,$$witnessing that $x_\infty$ is a cluster point of $(x_n)$ and hence that the sets $K_a$, $a\in A$, are compact. Since the union $K=\bigcup_{a\in A}K_a\subset X$ is compact and the set $S$ is $3\delta$-separated, there is an $s\in S$ such that $\mathrm{dist}(s,K)\ge \delta$. We claim that $\mathrm{dist}(\alpha(g(\mathbb I^d)\times \{s\}),A)\ge\delta$. Assuming the converse, we would find an isometry $y\in g(\mathbb I^d)$ such that $\mathrm{dist}(y(s),a)<\delta$ for some $a\in A$.
Let $x=y^{-1}(a)$ and observe that $x\in K_a$ and hence $$\mathrm{dist}(s,K)\le \mathrm{dist}(s,x)=\mathrm{dist}(y(s),y(x))=\mathrm{dist}(y(s),a)<\delta,$$ which contradicts the choice of $s$. This completes the proof of Claim~\ref{claim1}. \medskip Define a map $\tilde f:\mathbb I^d\to X$ by letting $\tilde f(x)=\alpha(g(x),s)$ for $x\in \mathbb I^d$. The choice of $s$ ensures that $\mathrm{dist}(\tilde f(\mathbb I^d),A)\ge\delta$. By the choice of $\delta_1$ the point $s\in S\subset B(\theta,\delta_1)$ can be linked with $\theta$ by a path $\gamma:[0,1]\to X$ with $\gamma(0)=\theta$, $\gamma(1)=s$ and $\mathrm{diam}(\gamma[0,1])<\varepsilon$. This path allows us to define an $\varepsilon$-homotopy $$h:\mathbb I^d\times[0,1]\to X,\; h:(x,t)\mapsto\alpha(g(x),\gamma(t))$$linking the maps $f'=h_0$ and $\tilde f=h_1$. Since the map $f'$ is $\varepsilon$-homotopic to $f$, we conclude that $\tilde f:\mathbb I^d\to X$ is the required map: it is $2\varepsilon$-homotopic to $f$ and has the property $\mathrm{dist}(\tilde f(\mathbb I^d),A)\ge\delta$. \end{proof} \section{Proof of Theorem~\ref{main}}\label{pf-main} Let $X$ be an isometrically homogeneous $\mathbb I^{<\omega}{\sim}$homogeneous metric space. Since each connected component of $X$ is isometrically homogeneous and $\mathbb I^{<\omega}{\sim}$homoge\-neous, we lose no generality by assuming that $X$ is connected. \smallskip (1) The first statement of Theorem~\ref{main} follows from Theorem~\ref{szenthe}. \smallskip (2) Assume that $X$ is a locally Polish ANR-space. We need to prove that $X$ is a manifold modeled on a separable Hilbert space. If the completion $\bar X$ is locally compact, then $X=\bar X$ is a Euclidean manifold according to Theorem~\ref{szenthe}. Therefore we assume that $\bar X$ is not locally compact. We claim that the space $X$ has the $\omega$-separated $\omega$-cells property.
This will follow from Lemma~\ref{l4} as soon as we check that each non-empty open subset $U\subset X$ contains an infinite separated subset. Fix any point $x_0\in U$ and find $\varepsilon>0$ such that $B(x_0,2\varepsilon)\subset U$. Since the complete metric space $\bar X$ is not locally compact, there is a point $x_1\in\bar X$ having no totally bounded neighborhood. Take any point $x_2\in X$ with $\mathrm{dist}(x_2,x_1)<\varepsilon$. Since the space $X$ is isometrically homogeneous, there is an isometry $f:X\to X$ such that $f(x_0)=x_2$. This isometry can be extended to an isometry $\bar f:\bar X\to\bar X$. Since the point $x_1$ has no totally bounded neighborhood, the ball $\bar B(x_1,\varepsilon)=\{x\in \bar X:\mathrm{dist}(x,x_1)\le\varepsilon\}$ contains an infinite separated subset $S\subset X\cap \bar B(x_1,\varepsilon)$. Since $\mathrm{dist}(x_1,f(x_0))<\varepsilon$, the set $S$ lies in the ball $B(f(x_0),2\varepsilon)$. Then $f^{-1}(S)$ is an infinite separated subset of the ball $B(x_0,2\varepsilon)\subset U$. By Lemma~\ref{l2}, the space $X$ has the $\omega$-discrete $\omega$-cells property and by Toru\'nczyk Theorem~\ref{tor2}, $X$ is an $l_2$-manifold. \smallskip (3) Assume that $X$ is a completely-metrizable ANR with LFAP. The isometric homogeneity of $X$ implies that any two connected components of $X$ are isometric. Let $\kappa$ be the density of any connected component of $X$. The homogeneity of $X$ implies that each non-empty open subset $U\subset X$ has density $\mathrm{dens}(U)\ge\kappa$ (cf. the proof of Corollary 3 in \cite{BZ}). Repeating the proof of Lemma 9 from \cite{BZ} we can also show that for each cardinal $\lambda\le\kappa$ of uncountable cofinality, each non-empty open subset $U\subset X$ contains a separated subset $S\subset U$ of cardinality $|S|\ge\lambda$. 
Applying Lemmata~\ref{l2} and \ref{l4}, we conclude that the space $X$ has the $\lambda$-discrete $\omega$-cells property for every cardinal $\lambda\le\kappa$ of uncountable cofinality. Since $X$ has LFAP, $X$ has the $\kappa$-discrete $\omega$-cells approximation property by Lemma~\ref{l1}. Finally, applying the Toru\'nczyk Characterization Theorem~\ref{tor1}, we conclude that $X$ is an $l_2(\kappa)$-manifold. \section{Acknowledgements} This research was supported by the Slovenian Research Agency grants P1-0292-0101, J1-9643-0101, and J1-2057-0101. We thank the referee for the comments and suggestions.
https://arxiv.org/abs/0908.4205
Detecting Hilbert manifolds among isometrically homogeneous metric spaces
We detect Hilbert manifolds among isometrically homogeneous metric spaces and apply the obtained results to recognizing Hilbert manifolds among homogeneous spaces of the form G/H where G is a metrizable topological group and H is a closed balanced subgroup of G.
https://arxiv.org/abs/2111.04884
On Trace Zero Matrices and Commutators
Given any commutative ring $R$, a commutator of two $n\times n$ matrices over $R$ has trace $0$. In this paper, we study the converse: whether every $n \times n$ trace $0$ matrix is a commutator. We show that if $R$ is a Bézout domain with algebraically closed quotient field, then every $n\times n$ trace $0$ matrix is a commutator. We also show that if $R$ is a regular ring with large enough Krull dimension relative to $n$, then there exists an $n\times n$ trace $0$ matrix that is not a commutator. This improves on a result of Lissner by increasing the size of the matrix allowed for a fixed $R$. We also give an example of a Noetherian dimension $1$ commutative domain $R$ that admits an $n\times n$ trace $0$ non-commutator for any $n\ge 2$.
\section{Introduction} Let $R$ be a commutative ring. Given two $n\times n$ matrices $A$ and $B$ in $\Mat_n(R)$, recall that the commutator of $A$ and $B$ is denoted by $[A,B]:=AB-BA$. It is a standard fact that $\tr(AB) = \tr(BA)$, and since the trace is additive, it follows that $\tr([A,B]) = \tr(AB-BA)=0$. So it is natural to wonder if the converse is also true. That is, given an $n\times n$ trace $0$ matrix $C$, do there exist two $n \times n$ matrices $A$ and $B$ such that $C=[A,B]$? The answer to the above question depends on the underlying commutative ring $R$ where the entries lie and the size $n$ of the matrix. If $R$ is a field, then every $n\times n$ trace $0$ matrix is a commutator. This was proven by Shoda in \cite{Shoda37} for a characteristic $0$ field and by Albert and Muckenhoupt in \cite{AM57} for a field with any characteristic. More recently, Laffey and Reams showed that for every $n\ge 1$, any $n\times n$ trace $0$ matrix is a commutator over $R=\Z$ in \cite{LR94}, and Stasinski generalised it to an arbitrary principal ideal ring in \cite{Stasinski16}. Stasinski subsequently showed in \cite{Stasinski18} that over principal ideal rings, every trace $0$ matrix is a commutator of trace $0$ matrices as well. In this paper, we give a new class of rings where every trace $0$ matrix is a commutator. \begin{restatable*}{thm}{bezout} \label{thm:bezoutcommutator} Let $R$ be a B\'{e}zout domain with algebraically closed quotient field. Then every trace $0$ matrix in $\Mat_n(R)$ is a commutator for any $n\ge 1$. \end{restatable*} In \Cref{sec:hollow}, we also discuss hollow matrices and nilpotent matrices which are both special cases of trace $0$ matrices, and prove the following theorem. \begin{restatable*}{thm}{dedekind} \label{thm:dedekindalgebra} Let $R$ be a Pr\"{u}fer domain that is also a $k$-algebra over an infinite field $k$. Then for any $n\ge 1$, every nilpotent matrix in $\Mat_n(R)$ is a commutator. 
\end{restatable*} This is partial progress towards answering whether every $n\times n$ trace $0$ matrix over a Dedekind domain is a commutator for any $n \ge 1$. No results for Dedekind domains were previously known for the case when $n\ge 3$ and $R$ is not a principal ideal domain. On the other hand, there are rings where there is a trace $0$ matrix that is not a commutator. Lissner in \cite{Lissner61} showed that if $R$ is a polynomial ring in $m\ge 3$ variables, then there is an $n\times n$ trace $0$ non-commutator for any $2\le n\le \tfrac{m+1}{2}$. Mesyan in \cite{Mesyan06} used a similar idea to create a $2\times 2$ trace $0$ non-commutator over a ring with a maximal ideal satisfying certain properties. In \Cref{sec:two} and \Cref{sec:combinatorics}, we extend the above works of Lissner and Mesyan to construct a trace $0$ non-commutator over a more general class of rings. \begin{restatable*}{thm}{main} \label{thm:main} Let $R$ be a commutative ring with a maximal ideal $\fm\subset R$ such that $R_\fm$ is a regular local ring of dimension $m\ge 3$. Suppose further that $n\ge 2$ is an integer such that $n \le \tfrac{m^2+2m+5}{8}$. Then there exists a trace $0$ non-commutator in $\Mat_n(R)$. \end{restatable*} In particular, we improve the bound on the size of the matrix where we can produce a trace $0$ non-commutator for a fixed ring: if $m$ is the dimension of the ring and $n$ is the size of the matrix, then Lissner requires $n\le \tfrac{m+1}{2}$, so we have improved the bound on the size of the matrix from linear to quadratic in $m$. This improvement comes from solving a certain combinatorial problem which can be thought of either as a packing problem or a graph theory problem. The construction of the matrix will be described in \Cref{sec:two}, while the combinatorics will be explained in \Cref{sec:combinatorics}. In \Cref{sec:ring}, we give an example of a Noetherian dimension $1$ ring where there is an $n\times n$ trace $0$ non-commutator for any $n\ge 2$.
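Before turning to the constructions, the elementary forward direction is worth keeping in view: since $\tr(AB)=\tr(BA)$, every commutator has trace $0$. The following minimal Python sketch (the helper functions are ours, written for illustration only) checks this over $\Z$ and exhibits the standard example $\operatorname{diag}(1,-1)=[E_{12},E_{21}]$ of a trace $0$ matrix that is a commutator.

```python
# Integer matrix helpers (our own, for illustration): matrices are lists
# of rows.  Every commutator [A,B] = AB - BA has trace 0 over any
# commutative ring; we check this over the integers.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def commutator(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    n = len(A)
    return [[AB[i][j] - BA[i][j] for j in range(n)] for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

# diag(1,-1) is the commutator of the elementary matrices E_12 and E_21.
E12 = [[0, 1], [0, 0]]
E21 = [[0, 0], [1, 0]]
C = commutator(E12, E21)
assert C == [[1, 0], [0, -1]]
assert trace(C) == 0
```

The converse question studied in this paper is exactly which rings $R$ and sizes $n$ allow every trace $0$ matrix to be realised in this way.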
\begin{restatable*}{thm}{unbounded} \label{thm:unbounded} There exists a Noetherian commutative domain $\Lambda$ with dimension $1$ such that for every $n\ge 2$, there exists a trace $0$ non-commutator in $\Mat_n(\Lambda)$. \end{restatable*} Finally, in \Cref{sec:2x2}, we discuss $2\times 2$ matrices. The $2\times 2$ matrices are special since every $2\times 2$ trace $0$ matrix being a commutator is equivalent to every vector in $R^3$ being a cross product. This will be discussed further in \Cref{sec:2x2} along with \Cref{thm:characterisation} which characterises rings where every $2\times 2$ trace $0$ matrix is a commutator. After the characterisation, we prove the following theorem. \begin{restatable*}{thm}{fpalgebra} \label{cor:fpalgebraisop} Let $R$ be a regular finitely generated $\widebar{\F}_p$-algebra of dimension $2$. Then $R$ is an OP-ring and every trace $0$ matrix in $\Mat_2(R)$ is a commutator. \end{restatable*} The analogue of the above theorem for $\widebar{\Q}$-algebras is also true if the Bloch-Beilinson conjecture holds (see \Cref{thm:bbopring}). \section{B\'{e}zout Domains and Pr\"{u}fer Domains}\label{sec:hollow} Recall that a \emph{B\'{e}zout domain} is a domain where every finitely generated ideal is principal and a \emph{Pr\"{u}fer domain} is a domain where every finitely generated ideal is invertible. They are non-Noetherian analogues of a PID and a Dedekind domain respectively. A Noetherian B\'{e}zout domain is a PID and a Noetherian Pr\"{u}fer domain is a Dedekind domain. Note that a B\'{e}zout domain is a Pr\"{u}fer domain since principal ideals are always invertible. For a reference on B\'{e}zout domains and Pr\"{u}fer domains see \cite[Chapter III]{fl01}. The main idea in this section is about finding an appropriate basis of $R^n$ which puts a given matrix $A$ in a form that makes it easier to show that $A$ is a commutator. 
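Once such a basis is found, the resulting upper triangular trace $0$ matrix is an explicit commutator; this is \Cref{lem:triangularcommutator} below, whose proof defines $B$ by the recursion $b_{1j}=0$, $b_{2j}=a_{1j}$, $b_{ij}=a_{i-1,j}+b_{i-1,j-1}$ and takes $X$ to be the matrix with $1$'s on the superdiagonal. A small Python sketch (helper names are ours; the test matrix is an arbitrary choice) carrying out that construction over $\Z$:

```python
# Sketch of the explicit commutator presentation of an upper triangular
# trace-0 matrix A: with X the superdiagonal shift matrix, build B by the
# recursion b_{1j} = 0, b_{2j} = a_{1j}, b_{ij} = a_{i-1,j} + b_{i-1,j-1};
# then [X, B] = A.  Helper names are ours.

def build_X(n):
    # Matrix with 1's on the superdiagonal, 0 elsewhere.
    return [[1 if j == i + 1 else 0 for j in range(n)] for i in range(n)]

def build_B(A):
    n = len(A)
    B = [[0] * n for _ in range(n)]          # row 1 (index 0) is zero
    for i in range(1, n):                    # rows 2..n, top to bottom
        for j in range(n):
            B[i][j] = A[i - 1][j] + (B[i - 1][j - 1] if j > 0 else 0)
    return B

def commutator(X, B):
    n = len(X)
    XB = [[sum(X[i][k] * B[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    BX = [[sum(B[i][k] * X[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    return [[XB[i][j] - BX[i][j] for j in range(n)] for i in range(n)]

# An arbitrary upper triangular trace-0 integer matrix.
A = [[1, 2, 3], [0, 4, 5], [0, 0, -5]]
assert commutator(build_X(len(A)), build_B(A)) == A
```

The last row of $[X,B]$ is where both hypotheses enter: upper triangularity kills the sub-diagonal contributions, and the trace condition makes the $(n,n)$-entry come out right.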
We consider $A$ as a matrix over the quotient field $K$ of $R$ to find an appropriate filtration of $K^n$ and then bring down the filtration to $R^n$ to find the new basis of $R^n$. We first prove some lemmas related to bringing down a $K$-vector space filtration to an $R$-module filtration. Recall that given an $R$-module $M$, there is a natural map $M\to M\tensor_R K$ sending $m\in M$ to $m\tensor 1\in M\tensor_R K$. If $M$ is torsion-free, then the map is an injection, so we may consider $M\subset M\tensor_R K$ in such a case. \begin{lemma}\label{lem:torsionfree} Let $R$ be a domain, $K$ be the quotient field of $R$ and $M$ be a torsion-free $R$-module. If $U\subset V\subset M\tensor_R K$ are $K$-subspaces, then $(V\cap M)/(U\cap M)$ is a torsion-free $R$-module. \end{lemma} \begin{proof} Let $v\in V\cap M$ and $r\in R$ non-zero such that $rv\in U\cap M$. Then $v\in U$ since $r\ne 0$, and so $v\in U\cap M$. Hence $(V\cap M)/(U\cap M)$ is torsion-free. \end{proof} \begin{lemma}\label{lem:finitemodules} Let $R$ be a Pr\"{u}fer domain and $K$ be its quotient field. Suppose that $M$ is a finitely generated torsion-free $R$-module and $U\subset V:=M\tensor_R K$ is a $K$-subspace. Then $U\cap M\subset V$ is a finitely generated $R$-module. \end{lemma} \begin{proof} We will prove that $U\cap M$ is finitely generated by inducting on $n:=\dim_KV$. If $n=1$, then $U=\set{0}$ or $K$, so $U\cap M=\set{0}$ or $M$, and hence $U\cap M$ is finitely generated. Suppose it is true for $n-1$. If $U\cap M=\set{0}$, then we are done, so suppose $U\cap M\ne \set{0}$. Then there exists a non-zero $v\in U\cap M$. Consider the quotient map \[q:V\to V/Kv.\] Since $M$ is finitely generated, $q(M)\iso M/(Kv\cap M)$ is also finitely generated, and by \Cref{lem:torsionfree}, it is torsion-free as well. Moreover, $M\tensor_RK=V$, so $q(M)\tensor_RK = q(V)\iso K^{n-1}$. 
We claim that \[q(U)\cap q(M)=q(U\cap M).\] Since $q(U)\cap q(M)\supset q(U\cap M)$ is clear, we only show that $q(U)\cap q(M)\subset q(U\cap M)$. So let $x\in q(U)\cap q(M)$. Then there exist $u\in U$ and $m\in M$ such that $x=q(u)=q(m)$. Now $m-u\in \ker q=Kv \subset U$, so $m= u + (m-u) \in U$ as well, hence $x=q(m)\in q(U\cap M)$. So we have a finitely generated torsion-free module $q(M)$ with $\dim_Kq(V)=n-1$, and so by the inductive hypothesis, $q(M)\cap q(U)=q(U\cap M)$ is a finitely generated $R$-module. Hence to show that $U\cap M$ is finitely generated, we only need to show that $\ker q\cap (U\cap M) = Kv\cap M$ is finitely generated. Since $R$ is a Pr\"{u}fer domain, by \cite[Theorem V.2.7]{fl01}, there exist finitely generated ideals $I_1,\dots,I_r\subset R$ such that \[M\iso I_1\oplus \dots\oplus I_r.\] Suppose $(v_1,\dots,v_r)\in \oplus_{i=1}^r I_i$ is the image of $v\in M$ under the isomorphism. Then \[ Kv\cap M \iso \set{a\in K}{av\in M}= \set{a\in K}{av_i\in I_i\; \forall i=1,\dots r} = \bigcap_{i=1}^r (I_i: v_i), \] where $(I_i:v_i):=\set{a\in K}{av_i\in I_i}$ is the ideal quotient. If $v_i\ne 0$, then $(I_i:v_i)=v_i^{-1}I_i$, so $(I_i:v_i)$ is finitely generated since $I_i$ is finitely generated. If $v_i=0$, then $(I_i:v_i)=K$. We assumed $v\ne 0$, so at least one $v_i\ne 0$, and so the intersection \[\bigcap_{i=1}^r (I_i: v_i) = \bigcap^r_{\substack{i=1 \\ v_i\ne 0}} (I_i: v_i) \] is non-trivial. Hence $Kv\cap M$ is isomorphic to a finite intersection of finitely generated fractional ideals of a Pr\"{u}fer domain, so it is finitely generated by \cite[Ex. III.1.1]{fl01} and so we are done. \end{proof} \begin{theorem}\label{thm:bezouttriangularisation} Let $R$ be a B\'{e}zout domain and $K$ be its quotient field. If the characteristic polynomial of a matrix $A\in\Mat_n(R)$ splits completely over $K$, then there exists a basis of $R^n$ such that $A$ is upper triangular with respect to this new basis. 
\end{theorem} \begin{proof} If we consider $A\in\Mat_n(K)$, then since the characteristic polynomial splits completely, there exists a filtration of $K$-vector spaces \[0=V_0\subset V_1\subset\cdots\subset V_n=K^n, \] such that $\dim_KV_i=i$ and $AV_i\subset V_i$ for all $i=1,\dots, n$ (see e.g. Jordan canonical form in \cite[12.3]{DS04}). If we let $W_i:=V_i\cap R^n$ for $i=0,1,\dots, n$, then we obtain the following filtration of $R$-modules \[ 0=W_0\subset W_1\subset\cdots \subset W_n=R^n,\] such that $AW_i\subset W_i$ for all $i=1,\dots, n$. By \Cref{lem:finitemodules}, $W_i$'s are all finitely generated, hence $W_i/W_{i-1}$'s are also finitely generated. Now by \Cref{lem:torsionfree}, $W_i/W_{i-1}$ is also torsion-free, and so it is a finitely generated torsion-free module over a B\'{e}zout domain, hence $W_i/W_{i-1}$ is free by \cite[Corollary V.2.8]{fl01}. So the following exact sequence of $R$-modules splits \[ 0\to W_{i-1} \to W_i\to W_i/W_{i-1} \to 0, \] and there exists a free rank $1$ $R$-module $M_i$ such that $W_i=W_{i-1}\oplus M_i$. Hence we obtain a decomposition $R^n = \oplus_{i=1}^n M_i$ such that for all $i=1,\dots,n$, $AM_i\subset \oplus_{j\le i}M_j$. Now $M_i$ is a free rank $1$ $R$-module, and so the above decomposition gives an $R$-basis of $R^n$ such that with respect to this new basis, $A$ is an upper triangular matrix. \end{proof} \begin{lemma}\label{lem:triangularcommutator} Let $R$ be a ring and $A\in\Mat_n(R)$ be an upper triangular trace $0$ matrix. Then $A$ is a commutator. 
\end{lemma} \begin{proof} Let $X=(x_{ij})\in\Mat_n(R)$ be the matrix with $1$'s on the superdiagonal and $0$ everywhere else, that is, \[X= \begin{pmatrix} 0 & 1 & & 0 & 0\\ 0 & 0 & \ddots & 0 & 0\\ \vdots&\vdots & \ddots &\ddots & \\ 0 & 0 & \cdots& 0 & 1\\ 0 & 0 & \cdots& 0 & 0 \end{pmatrix} .\] If $A=(a_{ij})$, then we define $B=(b_{ij})\in\Mat_n(R)$ sequentially from the top to the bottom as follows: \[ b_{ij}= \begin{cases} 0 & \text{if }i=1,\\ a_{1,j}&\text{if }i=2,\\ a_{i-1,j}+b_{i-1,j-1}&\text{if }3\le i\le n. \end{cases} \] To ease the notation, we also let $b_{ij}:=0$ if $i=n+1$ or $j\le 0$. Now for any $1\le i,j\le n$, we have \[ [X,B]_{ij} = b_{i+1,j}-b_{i,j-1}.\] Now for $i=1$, \[ [X,B]_{1,j} = b_{2,j}-b_{1,j-1} = a_{1,j},\] and so $[X,B]_{1,j}=A_{1,j}$. For $2\le i\le n-1$, the defining recursion gives $b_{i+1,j}=a_{i,j}+b_{i,j-1}$, so \[ [X,B]_{i,j} = b_{i+1,j}-b_{i,j-1} = a_{i,j}+b_{i,j-1}-b_{i,j-1}=a_{i,j},\] and so $[X,B]_{i,j}=A_{i,j}$. Finally, for $i=n$ we have $[X,B]_{n,j}=b_{n+1,j}-b_{n,j-1}=-b_{n,j-1}$, and unfolding the recursion yields \[ b_{n,j-1}=\sum_{t=0}^{n-2}a_{n-1-t,\,j-1-t},\] where terms with non-positive column index are zero. If $j<n$, then every summand has row index strictly greater than its column index, so it vanishes because $A$ is upper triangular, and hence $[X,B]_{n,j}=0=a_{n,j}$. If $j=n$, then $b_{n,n-1}=a_{1,1}+\dots+a_{n-1,n-1}=-a_{n,n}$ because $\tr(A)=0$, and hence $[X,B]_{n,n}=a_{n,n}$. Hence $[X,B]=A$. \end{proof} We now combine \Cref{thm:bezouttriangularisation} and \Cref{lem:triangularcommutator} to show one of the main theorems of this section. \bezout \begin{proof} Since the quotient field of $R$ is algebraically closed, the characteristic polynomial of $A$ splits completely over it, so by \Cref{thm:bezouttriangularisation} we can triangularise $A$. Note that for any $A\in\Mat_n(R)$ and $g\in \GL_n(R)$, $A$ is a commutator if and only if $gAg^{-1}$ is a commutator since for any $B,C\in\Mat_n(R)$, \[ A=[B,C] \iff gAg^{-1} = [gBg^{-1},gCg^{-1}].\] Then \Cref{lem:triangularcommutator} implies that $A$ is a commutator. \end{proof} \begin{remark} The only class of rings for which it was previously known that every trace $0$ matrix is a commutator is the class of principal ideal rings, and this is due to Stasinski in \cite[Theorem 6.3]{Stasinski16}. So \Cref{thm:bezoutcommutator} gives a new class of rings where every trace $0$ matrix is a commutator. Note that \Cref{thm:bezoutcommutator} does not imply Stasinski's result since we assume that the B\'{e}zout domain has algebraically closed quotient field. We also note that in our proof, we take a different approach to Stasinski's proof.
\end{remark} We now discuss examples of B\'{e}zout domains with algebraically closed quotient field. \begin{theorem}[{\cite[Theorem 102]{Kaplansky74}}]\label{thm:kapbezout} Let $R$ be a Dedekind domain and $K$ be its quotient field. If for all finite extensions $L/K$, the integral closure $S$ of $R$ in $L$ has torsion Picard group $\Pic(S)$, then the integral closure $\widebar{R}$ of $R$ in an algebraic closure of $K$ is a B\'{e}zout domain. \end{theorem} \begin{example} Let $R:=\Z$ or $R:=k[t]$, with $k$ a finite field, and $K$ be the quotient field of $R$. Then for any finite extension $L/K$, the integral closure $S$ of $R$ in $L$ has finite Picard group. Hence by \Cref{thm:kapbezout}, the integral closure $\widebar{R}$ of $R$ in an algebraic closure of $K$ is a B\'{e}zout domain. Moreover, the quotient field of $\widebar{R}$ is algebraically closed, so by \Cref{thm:bezoutcommutator}, every trace $0$ matrix is a commutator over $\widebar{R}$. \end{example} \begin{corollary}\label{cor:dedekindextension} Let $R$ be a Dedekind domain and $K$ be its quotient field. If for all finite extensions $L/K$, the integral closure $S$ of $R$ in $L$ has torsion Picard group $\Pic(S)$, then for all trace $0$ matrices $A\in\Mat_n(R)$, there exists an $R$-algebra $T$ such that $T$ is finitely generated as an $R$-module and $A$ is a commutator in $\Mat_n(T)$. \end{corollary} \begin{proof} By \Cref{thm:kapbezout}, the integral closure $\widebar{R}$ of $R$ in an algebraic closure of $K$ is a B\'{e}zout domain. Hence by \Cref{thm:bezoutcommutator}, $A$ is a commutator over $\widebar{R}$, so $A=[B,C]$ for some $B,C\in \Mat_n(\widebar{R})$. If we take $T:=R[ b_{ij}, c_{ij} : 1\le i, j\le n]\subset\widebar{R}$, then $B,C\in \Mat_n(T)$, so $A$ is a commutator in $\Mat_n(T)$. Moreover, $T$ is finitely generated as an $R$-module, since it is generated as an $R$-algebra by the finitely many elements $b_{ij},c_{ij}$, each of which is integral over $R$. \end{proof} \begin{remark} In \Cref{cor:dedekindextension}, we could also take the integral closure $T'$ of $R$ in the quotient field $F$ of $T$.
Then $T'$ will be a Dedekind domain, but not necessarily a finitely generated $R$-module, unless $F/K$ is separable or $R$ is finitely generated over a field. \end{remark} \begin{remark} A related notion to the assumption in \Cref{thm:kapbezout} is that of pictorsion rings, defined in \cite[Definition 0.3]{GLL15}. A commutative ring $R$ is called \emph{pictorsion} if $\Pic(S)$ is torsion for any finite ring map $R\to S$. As noted in \cite{GLL15} before Lemma 8.10, a pictorsion Dedekind domain satisfies the assumption of \Cref{thm:kapbezout} (the assumption is called Condition (T)(a) in \cite[Definition 0.2]{GLL15}). While it is not known whether every Dedekind domain $R$ satisfying the assumption of \Cref{thm:kapbezout} is pictorsion, if we assume that the residue fields at all maximal ideals of $R$ are algebraic extensions of finite fields, then $R$ is pictorsion \cite[Lemma 8.10 (b)]{GLL15}. \end{remark} We now discuss the case of hollow endomorphisms of $R$-modules for an arbitrary ring $R$, and then specialise it to nilpotent matrices over a Pr\"{u}fer domain in \Cref{thm:dedekindalgebra}. Recall that an $n\times n$ matrix is \emph{hollow} if all of its diagonal entries are zero (see e.g. \cite[Section 3.1]{Gentle} for a reference on hollow matrices). In the following definition, we generalise the concept of a hollow matrix to an endomorphism of an arbitrary $R$-module with a decomposition. \begin{definition} Let $R$ be a ring and $M$ be an $R$-module. Suppose we have a decomposition $M=\oplus_{k=1}^nM_k$ where $M_k\subset M$ are $R$-submodules. Then for any $A\in\End_R(M)$ and $1\le i,j\le n$, define $a_{ij}:= \pi_iA\iota_j \in \Hom_R(M_j,M_i)$, where $\pi_i\in\Hom_R(\oplus_{k=1}^nM_k,M_i)$ is the projection and $\iota_j\in\Hom_R( M_j,\oplus_{k=1}^nM_k)$ is the inclusion. We can think of $(a_{ij})$ as an $n\times n$ matrix where the $(i,j)$-th entry $a_{ij}$ lies in $\Hom_R(M_j,M_i)$ instead of $R$.
We say that $A\in\End_R(M)$ is \emph{hollow with respect to a decomposition} $M = \oplus_{k=1}^nM_k$ if $a_{kk}=0$ for all $k=1,\dots,n$. \end{definition} Note that a matrix $A\in\Mat_n(R)$ being hollow is the same as $A\in\End_R(R^n)$ being a hollow endomorphism with respect to the decomposition coming from the standard basis. Let $R$ be a commutative ring with $C:=\set{r_1,\dots,r_n}\subset R^\times$ such that $r_i-r_j\in R^\times$ for all $i\ne j$. Such a set $C$ is called a \emph{clique of exceptional units}. This term is usually used in the context of number rings, and the cliques were used by Lenstra to construct Euclidean fields in \cite{lenstra76}. \begin{example}\label{ex:algebraclique} Let $k$ be a field and $R$ be a $k$-algebra. Then any $C\subset k^\times \subset R^\times$ is a clique of exceptional units since $x-y\in k^\times\subset R^\times$ for all distinct $x,y\in C$. \end{example} \begin{theorem}\label{thm:hollow} Let $R$ be a commutative ring with a clique of exceptional units $\set{r_1,\dots,r_n}$ of size $n\ge 1$. Let $M$ be a finitely generated $R$-module with a decomposition $M=\oplus_{k=0}^nM_k$, and suppose that $A\in\End_R(M)$ is hollow with respect to that decomposition. Then \[A = [X,A'] \] for some $X,A'\in\End_R(M)$. In particular, every hollow matrix in $\Mat_{n+1}(R)$ is a commutator. \end{theorem} \begin{proof} Let $X$ be the diagonal matrix with $r_0:=0,r_1,\dots,r_n$ on the diagonal. We slightly abuse notation and denote by $r_i$ both the element of $R$ and the endomorphism given by multiplication by $r_i$ on $M_i$, so that we can view $X$ as an element of $\End_R(M)$. If $A=(a_{ij})$, then define $A':=(a'_{ij})\in \End_R(\oplus_{k=0}^nM_k)$ as follows: \[a'_{ij} := \begin{cases} 0 &\text{if }i=j\\ (r_i-r_j)^{-1}a_{ij} &\text{otherwise} \end{cases}. \] Note that $r_1,\dots,r_n$ are units, so $r_0-r_i=-r_i$ is a unit for all $1\le i\le n$.
Then \[[X,A']_{ij} = x_{ii}a'_{ij}-a'_{ij}x_{jj} = r_{i}a'_{ij}-a'_{ij}r_j=(r_i-r_j)a'_{ij}=a_{ij},\] so $A=[X,A']$. The last part follows since a hollow matrix $A\in\Mat_{n+1}(R)$ is hollow with respect to the decomposition of $R^{n+1}$ coming from the standard basis. \end{proof} \begin{corollary} Let $k$ be a field and $R$ be a $k$-algebra. Then every hollow matrix in $\Mat_n(R)$ is a commutator for any $1\le n\le \# k$. In particular, if $k$ is a field of infinite cardinality, then every $n\times n$ hollow matrix is a commutator for any $n \ge 1$. \end{corollary} \begin{proof} From \Cref{ex:algebraclique}, we have a clique of exceptional units $C\subset k^\times\subset R^\times$ of any size up to $\# k^\times$. Since every hollow matrix in $\Mat_{\# C+1}(R)$ is a commutator by \Cref{thm:hollow}, every $n\times n$ hollow matrix is a commutator for any $1\le n\le \# k^\times+1=\# k$. \end{proof} We now consider nilpotent matrices over a Pr\"{u}fer domain $R$. We generalise the procedure described by Appleby in \cite[Section 3]{Appleby98} from Dedekind domains to Pr\"{u}fer domains. This allows us to find a decomposition of a finitely generated torsion-free $R$-module $M$ that makes $A$ strictly upper triangular for any fixed nilpotent endomorphism $A\in\End_R(M)$. \begin{lemma}\label{lem:hollowdecomp} Let $R$ be a Pr\"{u}fer domain, $M$ be a finitely generated torsion-free $R$-module and $A\in\End_R(M)$ be a nilpotent endomorphism. Then there exists a decomposition $M=\oplus_{i=1}^nM_i$ where $n=\rk M$, such that the endomorphism $A$ is strictly upper triangular with respect to this decomposition. \end{lemma} \begin{proof} Let $K$ be the quotient field of $R$ and $V:=M\tensor_R K$. Then $M\subset V$ since $M$ is torsion-free, and we can consider $A\in\End_K(V)$. Since $A$ is nilpotent, we can find a filtration of $K$-vector spaces \[ 0=V_0\subset V_1\subset \dots \subset V_n=V,\] such that $AV_i\subset V_{i-1}$ for all $i=1,\dots, n$, and $\dim_KV_i=i$. 
If we let $W_i := M\cap V_i$ for $i=0,1,\dots, n$, then we obtain the following filtration of $R$-modules \[ 0=W_0\subset W_1\subset \dots \subset W_n = M, \] such that $AW_i\subset W_{i-1}$ for all $i=1,\dots,n$. Now, $W_i$ is a finitely generated $R$-module by \Cref{lem:finitemodules}, and so $W_i/W_{i-1}$ is also finitely generated. Now by \Cref{lem:torsionfree}, $W_i/W_{i-1}$ is torsion-free as well, and so it is a finitely generated torsion-free module over a Pr\"{u}fer domain, hence projective by \cite[Theorem V.2.7]{fl01}. So the following exact sequence splits, \[ 0\to W_{i-1}\to W_{i}\to W_{i}/W_{i-1}\to 0, \] and we have $W_i = W_{i-1} \oplus M_i$ for some $R$-module $M_i$. So we obtain a decomposition $M = \oplus_{i=1}^n M_i$. Now consider the matrix representation $A=(a_{ij})$ with respect to this decomposition. Then $a_{ij}=0$ for all $1\le j\le i\le n$ since $AM_j\subset W_{j-1}$ and $\pi_i$ vanishes on $W_{j-1}=\oplus_{k<j}M_k$ for $i\ge j$. Hence $A$ is strictly upper triangular with respect to the decomposition $M=\oplus_{i=1}^nM_i$. \end{proof} \begin{remark} For a commutative Noetherian ring $R$, Yohe showed that every nilpotent matrix being similar to a strictly upper triangular matrix is equivalent to $R$ being a principal ideal ring \cite[Theorem 1]{Yohe67}. So for a non-PID Dedekind domain $R$, there are nilpotent matrices that are not similar to a strictly upper triangular matrix. Nevertheless, in \Cref{lem:hollowdecomp}, we show that by allowing the matrix entries to lie in $\Hom_R(M_j,M_i)$ instead of $R$, we can put any nilpotent matrix over $R$ in a strictly upper triangular form. \end{remark} \begin{corollary}\label{cor:nilpotentdedekind} Let $R$ be a Pr\"{u}fer domain with a clique of exceptional units $C:=\set{r_1,\dots,r_n}$ of size $n\ge 1$, and $M$ be a finitely generated projective $R$-module of rank $m\le n+1$. Then every nilpotent endomorphism $A\in\End_R(M)$ is a commutator.
In particular, every nilpotent matrix in $\Mat_m(R)$ is a commutator for all $1\le m\le n+1$. \end{corollary} \begin{proof} By \Cref{lem:hollowdecomp}, we can construct a decomposition $M=\oplus_{k=1}^m M_k$ such that $A$ is strictly upper triangular, and in particular hollow, with respect to this decomposition. Since any subset of a clique of exceptional units is still a clique, we can take a subset of $C$ of size $m-1$ and apply \Cref{thm:hollow} to obtain $X,A'\in\End_R(M)$ such that $A=[X,A']$. \end{proof} \dedekind \begin{proof} From \Cref{ex:algebraclique}, we have a clique of exceptional units $k^\times\subset R^\times$. So by \Cref{cor:nilpotentdedekind}, every nilpotent endomorphism in $\End_R(M)$ is a commutator if $\rk M \le \# k^\times + 1 = \# k$. \end{proof} \Cref{thm:dedekindalgebra} provides partial progress towards answering the following open question. \begin{question}[{\cite[Section 7]{Stasinski16}}]\label{que:dedekindcommutator} Let $R$ be a Dedekind domain and $n\ge 3$. Is every $n\times n$ trace $0$ matrix in $\Mat_n(R)$ a commutator? \end{question} Note that a Dedekind domain is a Pr\"{u}fer domain. Lissner answered \Cref{que:dedekindcommutator} affirmatively in \cite[Appendix]{Lissner65} for $n=2$, but no progress has been made since for $n\ge 3$. \begin{example} For a fixed $n$, we can provide examples of non-PID number rings $R$ where every nilpotent matrix in $\Mat_m(R)$ is a commutator for all $1\le m\le n$. We construct a large clique of exceptional units in $\Z[\zeta_p]$ for a prime $p$ where $\zeta_p= e^{2\pi i/p}$ based on an example in \cite[Section 3]{lenstra76}. Let $\omega_i:=\frac{\zeta_p^i-1}{\zeta_p-1}\in\Z[\zeta_p]$ for any $i=1,\dots, p-1$. Then the absolute norm of $\omega_i$ is 1, so $\omega_i\in \Z[\zeta_p]^\times$ for all $i=1,\dots, p-1$. Now given $1\le i<j\le p-1$, \[\omega_j-\omega_i = \zeta_p^i\omega_{j-i}\in\Z[\zeta_p]^\times, \] so $\set{\omega_1,\dots,\omega_{p-1}}$ is a clique of exceptional units.
Hence by \Cref{cor:nilpotentdedekind}, every nilpotent matrix in $\Mat_m(\Z[\zeta_p])$ is a commutator for all $1\le m\le p$. Moreover, $\Z[\zeta_p]$ has class number greater than $1$ for all primes $p\ge 23$ (see \cite[Theorem 11.1]{washington97}), so $\Z[\zeta_p]$ will not be a PID for those primes. Hence for any prime $p\ge 23$, $\Z[\zeta_p]$ gives a non-trivial example which does not follow from \cite[Theorem 6.3]{Stasinski16}. \end{example} \section{Construction of Trace Zero Non-commutators}\label{sec:two} In this section, we construct a trace $0$ non-commutator assuming we can solve a certain combinatorial problem which is stated in \Cref{q:mainWithd} in the next section. We will first give a couple of definitions that will be used in the problem and the main theorem \Cref{thm:counterexample}. \begin{definition} Let $m\ge 1$ and $d\ge 0$ be integers. We will call a set $S\subset \Z_{\ge 0}^m$ \emph{$d$-separated} if for all distinct $s=(s_i), s'=(s'_i)\in S$, \[ |s-s'|_1:=\sum_{i=1}^{m}|s_i-s_i'| > d. \] \end{definition} \begin{definition} Let $n,r\in\Z_{\ge 0}$. Then the \emph{discrete $n$-simplex of length $r$} is \[ \Delta(n,r):=\set{v=(v_i)\in\Z_{\ge 0}^{n+1}}{\sum_{i=1}^{n+1} v_i = r}.\] \end{definition} We now give the construction of the trace $0$ non-commutator. Let $R$ be a commutative ring. We use the following notation in the theorem: given a point $s=(s_i)\in\Z^m_{\ge 0}$ and elements $x_1,\dots,x_m\in R$, we let $x^s:=\prod_{i=1}^mx_i^{s_i}\in R$. As usual, we set $r^0:=1$ for any $r\in R$. 
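These two definitions are straightforward to experiment with computationally. A minimal sketch (hypothetical helper names, not part of the paper) that enumerates the discrete simplex $\Delta(n,r)$ and tests $d$-separation:

```python
from itertools import combinations

def simplex(n, r):
    """All points of the discrete n-simplex Delta(n, r):
    vectors in Z_{>=0}^{n+1} with coordinate sum r."""
    if n == 0:
        return [(r,)]
    return [(v,) + rest for v in range(r + 1) for rest in simplex(n - 1, r - v)]

def l1(s, t):
    """L1 distance |s - t|_1."""
    return sum(abs(a - b) for a, b in zip(s, t))

def is_d_separated(points, d):
    """True if all distinct points are strictly more than d apart in L1."""
    return all(l1(s, t) > d for s, t in combinations(points, 2))

# Delta(2, 3) has C(5, 2) = 10 points; the three "corner" points
# (3,0,0), (0,3,0), (0,0,3) are pairwise L1-distance 6 apart, so 2-separated.
corners = [(3, 0, 0), (0, 3, 0), (0, 0, 3)]
assert len(simplex(2, 3)) == 10
assert all(c in simplex(2, 3) for c in corners)
assert is_d_separated(corners, 2)
```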
\begin{theorem} \label{thm:counterexample} Let $R$ be a commutative ring with an ideal $I\subset R$ with the following property: there exist non-zero $x_1,\dots,x_m\in I$ with $m\ge 3$, and $d\ge 0$ such that for all $0\le k\le 3d+1$, \[I^k/I^{k+1} = \bigoplus_{\substack{t\in\Z_{\ge 0}^m \\ |t|_1=k}}(R/I)(x^t + I^{k+1}).\] In other words, $I^k/I^{k+1}$ is a free $R/I$-module and the degree $k$ monomials in the $x_i$'s form an $R/I$-basis of $I^k/I^{k+1}$ for all $0\le k\le 3d+1$. Suppose further that there exists a $2d$-separated set $S:=\set{s_1,\dots,s_{2n-1}}\subset{\Delta(m-1,2d+1)}\subset \Z_{\ge 0}^m$ with $n\ge 2$. Then \[ X := \begin{pmatrix} x^{s_1} & x^{s_2} & \cdots& x^{s_{n-1}} & x^{s_n}\\ x^{s_{n+1}} & 0 & \cdots& 0 & 0\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ x^{s_{2n-2}} & 0 & \cdots &0& 0\\ x^{s_{2n-1}} & 0 &\cdots&0 & -x^{s_1} \end{pmatrix} \] is a trace $0$ non-commutator in $\Mat_n(R)$. \end{theorem} \begin{proof} Since $\tr(X)= x^{s_1}-x^{s_1}=0$, it remains to show that $X$ is not a commutator. Write $X=(a_{ij})$. Suppose by contradiction that $X=[B,C]$ for some $B=(b_{ij}),C=(c_{ij})\in \Mat_n(R)$. Then \[[B-b_{nn}I_n,C-c_{nn}I_n] = [B,C] = X,\] so we may assume $b_{nn}=0=c_{nn}$ by replacing $B$ and $C$ with $B-b_{nn}I_n$ and $C-c_{nn}I_n$. Now suppose $b_{i1},b_{1i},c_{i1},c_{1i}\in I^{d+1}$ for all $i=1,\dots, n$. Then from $X=[B,C]$, we have \[a_{11} = \sum_{i=1}^n (b_{1i}c_{i1} - c_{1i}b_{i1})\in I^{2d+2}.\] But $a_{11}=x^{s_1}\in I^{2d+1}$ is part of the $R/I$-basis of $I^{2d+1}/I^{2d+2}$, so $a_{11}\notin I^{2d+2}$, which is a contradiction. Hence to show that $X$ is a non-commutator, we only need to show that $b_{i1},b_{1i},c_{i1},c_{1i}\in I^{d+1}$ for all $i$. We will first show that $b_{i1},b_{1i}\in I^{d+1}$ for all $i$. So let $k$ be the minimum integer such that $b_{i1}\in I^k\setminus I^{k+1}$ or $b_{1i}\in I^k\setminus I^{k+1}$ for some $i$. If $k\ge d+1$ or no such $k$ exists, then we are done, so assume $k\le d$.
Without loss of generality, assume that $b_{j1}\in I^k\setminus I^{k+1}$ for some $j$. Then writing $b_{j1}$ in terms of the monomial basis of $I^k/I^{k+1}$, we have \[b_{j1} \equiv \sum_{t\in\Delta(m-1,k)}r_tx^{t} \mod I^{k+1},\] with $r_t\in R$ for all $t\in\Delta(m-1,k)\subset\Z^{m}_{\ge 0}$ such that $r_{t_0}\notin I$ for some $t_0\in \Delta(m-1,k)$. Now from $X=[B,C]$, we have \[\tr(BX) = \tr(B[B,C]) = \tr(BBC - BCB) = \tr([B,BC]) = 0. \] Since $b_{nn}=0$, we have \begin{equation} 0 =\tr(BX) = \sum_{i=1}^n\sum_{l=1}^n b_{il}a_{li} = a_{11}b_{11} + \sum_{i=2}^n b_{i1}a_{1i} + \sum_{i=2}^n b_{1i}a_{i1}\label{eq:1}. \end{equation} Now, $a_{1j}=x^{s_j}$ with $|s_j|_1=2d+1$, so \[a_{1j}b_{j1} = x^{s_j}b_{j1} \equiv \sum_{t\in\Delta(m-1,k)}r_tx^{s_j+t} \mod I^{k+2d+2}.\] Now $r_{t_0}\in R\setminus I$, and so by considering \cref{eq:1} mod $I^{k+2d+2}$, we see that a term with the monomial $x^{s_j+t_0}$ must also appear in one of the other products $b_{1i}a_{i1}$ or $b_{i1}a_{1i}$ mod $I^{k+2d+2}$ for some $i$. So suppose $b_{i1}a_{1i}$ contains such a term (the case of $b_{1i}a_{i1}$ is analogous). Then we have $x^{s_j+t_0} \equiv x^ux^{s_i} \mod I^{k+2d+2}$, where $x^u$ comes from $b_{i1}$ and $|u|_1=k$. So we have $s_j+t_0 = u+s_i$, which implies \[|s_i - s_j|_1 = |t_0-u|_1 \le |t_0|_1 + |u|_1 = 2k\le 2d.\] This is a contradiction since $s_i\ne s_j$ and $S$ is $2d$-separated. Hence $k\ge d+1$, and so $b_{1i},b_{i1}\in I^{d+1}$ for all $i$. By a similar argument, $c_{1i},c_{i1}\in I^{d+1}$ as well. Hence no such matrices $B$ and $C$ can exist, and $X$ is a non-commutator. \end{proof} The following corollary generalises the result of Mesyan \cite[Prop. 20]{Mesyan06}. \begin{corollary}\label{cor:d0} Let $R$ be a ring and suppose that $I:=(x_1,\dots,x_m)$ is an ideal such that $I/I^2$ is a free $R/I$-module with the classes of $x_1,\dots,x_m$ in $I/I^2$ forming an $R/I$-basis.
Then for any $2\le n\le \tfrac{m+1}{2}$, \[X:=\begin{pmatrix} x_1 & x_2 & \cdots & x_{n-1} & x_n\\ x_{n+1} & 0 & \cdots & 0 & 0\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ x_{2n-2} & 0 & \cdots & 0 & 0\\ x_{2n-1} & 0 & \cdots & 0 & -x_1 \end{pmatrix}\] is not a commutator in $\Mat_n(R)$. In particular, if a Noetherian ring $R$ has Krull dimension $m\ge 3$, then there exists a trace $0$ non-commutator in $\Mat_n(R)$ for any $2\le n\le \tfrac{m+1}{2}$. \end{corollary} \begin{proof} We will use \Cref{thm:counterexample} to construct the trace $0$ non-commutator. If we take $d=0$, then the basis condition in the theorem is equivalent to $I/I^2$ being a free $R/I$-module. For the set $S$ in the theorem, if $d=0$, then any set $S\subset\Delta(m-1,1)$ is automatically $2d$-separated. Hence we can take $S$ to be $\Delta(m-1,1)=\set{e_1,\dots,e_m}\subset\Z_{\ge0}^m$, where the $e_i$'s are the standard basis vectors of $\Z^m$. Now if $n\le \tfrac{m+1}{2}$, then we may take $2n-1$ points inside $\Delta(m-1,1)$. Hence by \Cref{thm:counterexample}, $X$ is a non-commutator. The second part follows since if we take $I\subset R$ to be a maximal ideal with maximum height, then $\dim_{R/I}I/I^2\ge \dim R=m$ (see \cite[Theorem 13.5]{Matsumura}). \end{proof} \begin{remark} Let $R$ be a ring and $\fm\subset R$ be a maximal ideal such that $\dim_{R/\fm}\fm/\fm^2\ge 3$. Then by \Cref{cor:d0}, there exists a trace $0$ non-commutator in $\Mat_2(R)$. This also follows from \cite[Prop. 20]{Mesyan06}. Such a maximal ideal exists if $R$ is a Noetherian integrally closed domain of dimension $2$ that is not regular. Indeed, by Serre's criterion \cite[Theorem 23.8]{Matsumura}, $R$ being integrally closed implies $R_\fp$ is regular for any prime ideal $\fp\subset R$ with $\height(\fp)\le 1$. But $R$ is not regular, so $R_\fm$ must be singular for some maximal ideal $\fm\subset R$ with $\height(\fm)=2$.
At such a maximal ideal $\fm$, $\dim_{R/\fm}\fm/\fm^2 > \dim R_\fm=\height(\fm) = 2$, and so there exists a trace $0$ non-commutator in $\Mat_2(R)$. \end{remark} The following result of Lissner is a special case of \Cref{cor:d0}. \begin{corollary}[{\cite[Theorem 5.4]{Lissner61}}]\label{thm:lissner} Let $K$ be a field, $n\ge 2$ be an integer and $R=K[x_1,\dots, x_m]$, with $m\ge 2n-1$. Then there exists a trace $0$ non-commutator in $\Mat_n(R)$. \end{corollary} We now demonstrate that the condition in \Cref{thm:counterexample} about the structure of $I^k/I^{k+1}$ is relatively easy to satisfy. The following lemma gives a large class of rings and ideals satisfying the condition. Recall that given a commutative ring $R$ and an ideal $I\subset R$, we can construct the \emph{associated graded ring} $\gr_I(R) := \oplus_{i\ge 0}I^{i}/I^{i+1}$. Then given an element $r\in R$, we can associate an element $\initial{r}\in\gr_I(R)$, called the \emph{initial form}, defined as follows. Let $j\ge 0$ be such that $r\in I^j\setminus I^{j+1}$. If no such $j$ exists, then $r\in\cap_iI^i$, and we take $\initial{r} := 0\in \gr_I(R)$. Otherwise take $\initial{r}:=r+I^{j+1}\in I^j/I^{j+1}\subset\gr_I(R)$. \begin{lemma} \label{lem:regular} Let $R$ be a ring and $\fm\subset R$ be a maximal ideal such that $R_\fm$ is a regular local ring (see \cite[Section 14]{Matsumura} for the definition of a regular local ring). Then \[\gr_\fm(R) \iso k_\fm[x_1,\dots,x_m], \] where $k_\fm:=R/\fm$ is the residue field, $m$ is the dimension of $R_\fm$ (which is finite since $R_\fm$ is a Noetherian local ring) and $k_\fm[x_1,\dots,x_m]$ is a polynomial ring in $m$ variables. Moreover, there exist $y_1,\dots,y_m\in \fm$ such that $\initial{y_1},\dots,\initial{y_m}$ map to $x_1,\dots,x_m$ under the above isomorphism and the degree $k$ monomials in the $y_i$'s form a $k_\fm$-basis of $\fm^k/\fm^{k+1}$ for all $k\ge 0$.
\end{lemma} \begin{proof} Since $R_\fm$ is a regular local ring, \[\gr_{\fm R_\fm}(R_\fm)\iso k_\fm[x_1,\dots,x_m],\] as graded rings by \cite[Theorem 14.4]{Matsumura}. Now $\fm$ is maximal, so $\fm^i/\fm^{i+1}\iso (\fm R_\fm)^i/(\fm R_\fm)^{i+1}$ for all $i\ge 0$, and hence $\gr_\fm(R)\iso\gr_{\fm R_\fm}(R_\fm)$. So we have \[ \gr_\fm(R)\iso \gr_{\fm R_\fm}(R_\fm) \iso k_\fm[x_1,\dots,x_m].\] They are isomorphic as graded rings, so there exist $y_1,\dots,y_m\in \fm$ such that $\initial{y_i}$ maps to $x_i$ for each $i$. The above isomorphism also implies that the degree $k$ monomials in the $y_i$'s form a $k_\fm$-basis of $\fm^k/\fm^{k+1}$ for all $k\ge 0$, since this is true for the polynomial ring on the right-hand side. \end{proof} We now state the main result combining \Cref{thm:counterexample} and the combinatorial result \Cref{thm:quadraticS}. \main \begin{proof} We will use \Cref{thm:counterexample} to construct the trace $0$ non-commutator, so we need to check the basis condition and construct the set $S$ in the assumption of the theorem. By \Cref{lem:regular}, the basis condition in \Cref{thm:counterexample} is satisfied for all $d$, and by \Cref{thm:quadraticS}, if $m$ is odd, then there exists an $S$ with $\# S= \tfrac{(m+1)^2}{4}$. So if $2n-1\le \tfrac{(m+1)^2}{4}$, then the assumption of \Cref{thm:counterexample} is satisfied and there exists a trace $0$ non-commutator in $\Mat_n(R)$. If we rearrange $2n-1\le \tfrac{(m+1)^2}{4}$, we obtain $n \le \tfrac{m^2+2m+5}{8}$. For $m$ even, \Cref{thm:quadraticS} gives an $S$ with $\# S = \tfrac{m(m+2)}{4}$, so we need $2n-1\le \tfrac{m(m+2)}{4}$, which rearranges to $n \le \tfrac{m^2+2m+4}{8}$. But if $m$ is even and $n$ is an integer, then $n \le \tfrac{m^2+2m+4}{8}$ is equivalent to $n \le \tfrac{m^2+2m+5}{8}$, so we have the stated inequality for $m$ even as well. \end{proof} For an ideal $I$ of a ring $R$, let $\nu(I)$ be the minimal number of generators of $I$.
In the case of a Noetherian local ring $R$ and its maximal ideal $\fm$, we have $\nu(\fm) = \dim_{R/\fm}(\fm/\fm^2)$ by Nakayama's Lemma \cite[Theorem 2.3]{Matsumura}. \begin{corollary}\label{cor:numax} Let $R$ be a Noetherian ring and let $n\ge 2$ be such that every trace $0$ matrix in $\Mat_n(R)$ is a commutator. Then for all maximal ideals $\fm$ of $R$, \[\nu(\fm) < \begin{cases} 2\sqrt{2n-1} & \text{if }R_\fm\text{ is regular},\\ 2n-1 & \text{otherwise}. \end{cases} \] \end{corollary} \begin{proof} Suppose $R_\fm$ is regular. Since every $n\times n$ trace $0$ matrix is a commutator, $n >\frac{m^2+2m+5}{8}$ where $m=\dim R_\fm = \dim_{R/\fm}(\fm/\fm^2) = \nu(\fm R_\fm)$ by \Cref{thm:main}. Hence $2\sqrt{2n-1}-1> m=\nu(\fm R_\fm)$. Now by \cite[Theorem 1]{dg77}, $\nu(\fm R_\fm) +1 \ge \nu(\fm)$ if $R_\fm$ is regular, so \[ 2\sqrt{2n-1}> \nu(\fm).\] Hence the stated inequality follows. For $R_\fm$ not regular, we apply \Cref{cor:d0} to obtain $n > \frac{m+1}{2}$ where $m=\dim_{R/\fm}(\fm/\fm^2) = \nu(\fm R_\fm)$. Hence \[ 2n-1 > m=\nu(\fm R_\fm).\] Now by \cite[Theorem 1]{dg77}, $\nu(\fm R_\fm ) = \nu(\fm)$ if $R_\fm$ is not regular, so the stated inequality follows. \end{proof} \begin{remark} If $R$ is a Noetherian ring such that every $2\times 2$ trace $0$ matrix is a commutator, then \Cref{cor:numax} implies \[\nu(\fm) \le \begin{cases} 3 &\text{if }R_\fm\text{ is regular},\\ 2 &\text{otherwise}, \end{cases} \] for any maximal ideal $\fm\subset R$. However, it is known that $\nu(\fm)\le 2$ for any maximal ideal $\fm$ if every $2\times 2$ trace $0$ matrix is a commutator (see \cite[(3) Lemma, (4) Lemma]{BDG80}). Note that we do not know whether every $2\times 2$ trace $0$ matrix is a commutator if $\nu(\fm)\le 2$ for all maximal ideals $\fm$. See, however, \Cref{thm:local} and \Cref{cor:generatedby2} for cases where this is true. \end{remark} In view of \Cref{thm:main}, it is natural to ask the following question.
\begin{question} Let $R$ be a Noetherian ring such that there exists $c>0$ with $\dim_{R/\fm}\fm/\fm^2\le c$ for all maximal ideals $\fm\subset R$. Does there exist an $n_0=n_0(R)\ge 1$ such that for all $n\ge n_0$, every trace $0$ matrix in $\Mat_n(R)$ is a commutator? \end{question} The example produced in \Cref{thm:unbounded} shows that the answer to this question is negative if the assumption on the existence of such $c$ is omitted. The simplest rings for which the question is open are regular local rings of dimension $2$ and Dedekind domains. For both of those types of rings, it is not known whether every $n\times n$ trace $0$ matrix is a commutator for any $n\ge 3$. Let us now summarise below the largest trace $0$ non-commutators we can construct for a fixed ring and ideal using \Cref{thm:counterexample}. \begin{theorem}\label{thm:largestn} Let $R$ be a polynomial ring in $m\ge 3$ variables over a field and $d \ge 0$ or, more generally, let $R$ be a ring with an ideal $I\subset R$ satisfying the basis condition from \Cref{thm:counterexample}. That is, there exist some non-zero $y_1,\dots,y_m\in I$ with $m\ge 3$ such that the degree $k$ monomials in the $y_i$'s form an $R/I$-basis of $I^k/I^{k+1}$ for all $0\le k\le 3d+1$. Then the following list describes the sizes $n$ for which we can construct a trace $0$ non-commutator in $\Mat_n(R)$. \begin{enumerate} \item If $d=0$, then any $2\le n\le\tfrac{m+1}{2}$. \item If $d=1$ and $m\not\equiv 5\pmod{6}$, then any $2\le n\le\tfrac{1}{2}(\floor{\frac{m}{3}\floor{\frac{m-1}{2}}}+ m + 1)$. \item If $d=1$ and $m\equiv 5\pmod{6}$, then any $2\le n\le\tfrac{1}{2}(\floor{\frac{m}{3}\floor{\frac{m-1}{2}}}+ m)$. \item If $m=3$, then $n=2$. \item If $m\ge 4$ and $d\ge m-1$, then any $2\le n\le \frac{m^2+2m+5}{8}$. \item For certain specific $m$ and $d$, the corresponding entry in the table below is the size of the largest non-commutator matrix that \Cref{thm:counterexample} can construct.
When the entry is in bold, it is bigger than the upper bound provided by (5). \begin{table}[H] \centering \begin{tabular}{|>{\bfseries}c| *{11}{c}|} \hline \diagbox[width=2.5em]{m}{\kern-1em d} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} & \textbf{7} & \textbf{8} & \textbf{9} & \textbf{10} & \textbf{11} &\textbf{12} \\ \hline 4& 3 & 3 & \textbf{4} & \textbf{4} & \textbf{4} & \textbf{4} & \textbf{4} & \textbf{4} & \textbf{4} & \textbf{4} & \textbf{4}\\ 5& 5 & 5 & 5 & \textbf{6} & \textbf{6} & \textbf{6} & \textbf{6} & \textbf{6} & \textbf{7} & & \\ 6& 6 & 7 & \textbf{8} & & & & & & & &\\ 7& 9 & & & & & & & & & &\\ 8& 12 & & & & & & & & & &\\\hline \end{tabular} \caption{Size of the largest non-commutator matrix that \Cref{thm:counterexample} can construct} \end{table} \end{enumerate} \end{theorem} \begin{proof} We will use \Cref{thm:counterexample} to construct the trace $0$ non-commutator. For (1), the proof is in \Cref{cor:d0}. Note that the assumptions of \Cref{thm:counterexample} can be simplified to the assumptions in \Cref{cor:d0} (see the proof of \Cref{cor:d0} for more details on the assumptions). For (2) to (6), we will first describe the set $S$ used in \Cref{thm:counterexample}. For (2) and (3), we will use \Cref{thm:binary}. Given $d=1$ and $m\ge 3$, take $S$ to consist of the corner points of $\Delta(m-1,3)$ together with a maximum set of binary vectors of length $m$ of constant weight $3$ with pairwise Hamming distance at least $4$. For (4), \Cref{prop:smallR} describes how to obtain the set $S$ for $m=3$ and for any $d$. For (5), \Cref{thm:quadraticS} describes how to obtain the set $S$ for any $m\ge 1$ and $d\ge m-1$. For (6), \Cref{prop:dataSize} describes the largest $S$ one can obtain for the listed $m$ and $d$. Once the $S$ needed for \Cref{thm:counterexample} is obtained, we can construct an $n\times n$ trace $0$ non-commutator if $2n-1\le \# S$. Note that for (5), we have $\# S = \tfrac{m(m+2)}{4}$ when $m$ is even.
This inequality can be rearranged to $2\le n\le \frac{m^2+2m+4}{8}$, but this is equivalent to $2\le n\le \frac{m^2+2m+5}{8}$ since $m$ is even. Hence we obtain the inequality stated in (5) for both even and odd $m$. \end{proof} \begin{remark} The $n$'s given in the table in \Cref{thm:largestn} are the largest sizes for which we can construct a trace $0$ non-commutator using \Cref{thm:counterexample}. On the other hand, (5) in the list can be improved if we can construct a bigger set $S$ for \Cref{thm:counterexample}. For example, if $m=4$, (5) gives $2\le n\le 3$, but we see that in the table in (6), we can take $n=4$ if $d\ge 4$. \end{remark} While the upper bound on $n$ in \Cref{thm:main} could be improved, the size of the non-commutator we can construct using \Cref{thm:counterexample} is bounded for a fixed ring $R$ and ideal $I\subset R$, as we now show. \begin{prop}\label{prop:upperbound} Let $R$ be a commutative ring and let $I\subset R$ be an ideal such that $I/I^2$ is a free $R/I$-module with rank $m\ge 3$. Then the size $n$ of a trace $0$ non-commutator one can construct using \Cref{thm:counterexample} is bounded above by $2^{2m-3}$. \end{prop} \begin{proof} The size of the set $S=\set{s_1,\dots,s_{2n-1}}$ required in \Cref{thm:counterexample} is bounded above by $4^{m-1}$ by \Cref{cor:upperbound}. So we have $2n-1\le 4^{m-1}$, which implies $n\le 2^{2m-3}$. Hence the largest trace $0$ non-commutator one can construct using \Cref{thm:counterexample} is bounded above by $2^{2m-3}$. \end{proof} \begin{remark} \Cref{prop:upperbound} gives an upper bound on the size of the non-commutator one can construct using \Cref{thm:counterexample} for a fixed $R$ and $I$. However, if we only fix $R$, then the size of the non-commutator can be arbitrarily large in general. We will later construct a ring with such a property in \Cref{thm:unbounded}.
\end{remark} \section{Combinatorics}\label{sec:combinatorics} We will now discuss the set $S$ required in \Cref{thm:counterexample}. Namely, we would like to answer the following questions. \begin{question} \label{q:mainWithd} Given $m\ge 1$ and $d\ge 0$, how large can a set $S\subset \Z^m_{\ge 0}$ be if it is $2d$-separated and contained in $\Delta(m-1,2d+1)$? \end{question} \begin{question} \label{q:main} Given $m\ge 1$, how large can a set $S$ be if it is $2d$-separated and contained in $\Delta(m-1,2d+1)\subset \Z^m_{\ge 0}$ for some $d\ge 0$? \end{question} An answer to \Cref{q:mainWithd} will give the size of the largest matrix we can construct using \Cref{thm:counterexample} for a fixed $m$ and $d$, while an answer to \Cref{q:main} will be useful in the situation of \Cref{thm:main}, where we are allowed to take arbitrarily large $d$. To answer the above questions, we will first use the following lemma to simplify the problem. \begin{lemma} \label{lem:corner} Let $m\ge 1$ and $d\ge 0$ be integers, and suppose that a set $S$ is $2d$-separated and contained in $\Delta(m-1,2d+1)$. Then there exists a set $S'$ such that $S'$ is $2d$-separated, contained in $\Delta(m-1,2d+1)$, $\#S'\ge \#S$ and $v_1:=(2d+1,0,\dots,0),\dots,v_m:=(0,\dots,0,2d+1)\in S'$. \end{lemma} \begin{proof} For each $i$, we will either add $v_i$ to $S$ or replace one of the points in $S$ with $v_i$ to construct $S'$. So let $1\le i\le m$. If $v_i\in S$, then we do not need to modify $S$. Assume now that $v_i\notin S$. If $v_i$ is $2d$-separated from all the points in $S$, then we may take $S' = S\cup \set{v_i}$. Otherwise there is a point $s=(s_j)\in S$ such that $|s-v_i|_1\le 2d$. We will show that such an $s$ is unique, and so we may replace $s$ with $v_i$ to obtain $S'$. So suppose we have $t=(t_j)\in S$ with $|t-v_i|_1\le 2d$, and without loss of generality, assume $s_i\ge t_i$. Now \[2d \ge |s-v_i|_1 = \sum_{j\ne i} s_j + 2d+1-s_i=4d+2 - 2s_i,\] so $s_i\ge d+1$, and similarly, $t_i\ge d+1$.
Hence \begin{align*} |s-t|_1 &= s_i-t_i + \sum_{j\ne i}|s_j-t_j| \\ &\le s_i-t_i + \sum_{j\ne i}s_j + \sum_{j\ne i}t_j\\ &\le s_i-t_i + (2d+1-s_i)+(2d+1-t_i) \\ &= 4d +2-2t_i\\ &\le 2d. \end{align*} Since $S$ is $2d$-separated, this implies that $s=t$. Hence there is a unique point $s\in S$ such that $|s-v_i|_1\le 2d$, and so if we take $S' := (S\setminus\set{s}) \cup \set{v_i}$, then $S'$ will be $2d$-separated. \end{proof} We will call the $v_i$'s the \emph{corner points} since they are on the corners of the simplex $\Delta(m-1,2d+1)$. We now give a necessary and sufficient condition for a point $s\in\Delta(m-1,2d+1)$ to be $2d$-separated from all the corner points. \begin{lemma}\label{lem:boundedcoordinate} Let $d\ge 0$ and $m\ge 1$ be integers and $s=(s_j)\in\Delta(m-1,2d+1)\subset\Z_{\ge0}^m$. Then $s$ is $2d$-separated from all the corner points if and only if $s_j\le d$ for all $j=1,\dots,m$. \end{lemma} \begin{proof} First, note that if $s=(s_j)\in\Delta(m-1,2d+1)$, then for all $i=1,\dots,m$, \[|s-v_i|_1 = 2d+1-s_i + \sum_{j\ne i}s_j = 2d+1-2s_i+ \sum_{j=1}^ms_j = 2(d-s_i) + 2d+2.\] So $s$ is $2d$-separated from $v_i$ if and only if $2(d-s_i)+2d+2 > 2d$, which can be rearranged to \[d+1 > s_i. \] Since $d$ and $s_i$ are integers, this inequality is equivalent to $d\ge s_i$. Hence $s$ is $2d$-separated from all the corner points if and only if $d\ge s_i$ for all $i$. \end{proof} If we are constructing a maximum $2d$-separated set $S\subset\Delta(m-1,2d+1)$, then by \Cref{lem:corner}, we can assume that the set $S$ contains all the corner points. Now consider the rest of the points $T:=S\setminus\set{v_1,\dots,v_m}$. For $S$ to be $2d$-separated, $T$ also needs to be $2d$-separated, and in addition, by \Cref{lem:boundedcoordinate}, for all $t=(t_j)\in T$, we must have $t_j\le d$ for all $j$. So in other words, \Cref{q:mainWithd} about the maximum size of $S$ is equivalent to the following question.
Also note that the sizes of the largest $S$ in \Cref{q:mainWithd} and the largest $T$ in the following question are related by $\# S=m+\#T$. \begin{question}\label{q:t} Given $m$ and $d$, how large can a set $T$ be if it is $2d$-separated, contained in $\Delta(m-1,2d+1)$, and for all $t=(t_i)\in T$, $t_i\le d$ for all $i=1,\dots,m$? \end{question} The $d=1$ case in \Cref{q:t} can be reformulated in terms of binary vectors as follows. Recall that a binary vector of length $m$ is any vector in $\set{0,1}^m$. The weight of such a vector is the sum of all the entries, and for two binary vectors $v,v'$, the Hamming distance between $v$ and $v'$ is $|v-v'|_1$. Now if $d=1$, then every entry in $t\in T$ is either $0$ or $1$, so we can consider $t$ as a binary vector of length $m$. Moreover, since $t\in\Delta(m-1,3)$, the weight of $t$ is $3$. In other words, the $d=1$ case of \Cref{q:t} asks for a largest set of binary vectors of length $m$ and weight $3$ whose pairwise Hamming distances are greater than $2$. The following theorem answers how large such a set of binary vectors can be. We state the theorem as in \cite{BSSS90}. For the proof, see \cite[Theorem 3]{Schoenheim66}. Note that the Hamming distance between two binary vectors of the same weight must be even, so pairwise Hamming distance greater than $2$ is equivalent to pairwise Hamming distance at least $4$. \begin{theorem}[{\cite[Theorem 4]{BSSS90}}]\label{thm:binary} Let $A(m,d,w)$ be the maximal possible number of binary vectors of length $m$, Hamming distance at least $d$ apart and constant weight $w$.
Then \[A(m,4,3) = \begin{cases} \floor{\frac{m}{3}\floor{\frac{m-1}{2}}}, & \text{if }m\not\equiv 5\pmod{6},\\ \floor{\frac{m}{3}\floor{\frac{m-1}{2}}} - 1, & \text{if }m\equiv 5\pmod{6}. \end{cases} \] \end{theorem} So for all $m\ge 1$ and $d=1$, the largest $T$ in \Cref{q:t} has size $A(m,4,3)$, while the largest $S$ in \Cref{q:mainWithd} has size $A(m,4,3)+m$, answering \Cref{q:t} and \Cref{q:mainWithd} respectively. For small $m$'s we can answer \Cref{q:main} fully. \begin{prop}\label{prop:smallR} Given $m=1,2$ or $3$, the largest set $S$ that is $2d$-separated and contained in $\Delta(m-1,2d+1)\subset \Z_{\ge 0}^m$ has size $\# S=1,2$ and $4$ respectively. \end{prop} \begin{proof} By \Cref{lem:corner}, we may assume that the corner points are already in $S$. Then for $m=1$ and $m=2$, there are no points in $\Delta(m-1,2d+1)$ that are $2d$-separated from the corner points. So $\# S=m$ is the largest possible set. For $m=3$, suppose there exist two distinct points $s,s'\in\Delta(2,2d+1)$ that are $2d$-separated from the corner points. If $s=(a,b,c)$ and $s'=(a',b',c')$, then we have $0\le a,b,c,a',b',c' \le d$ by \Cref{lem:boundedcoordinate}. Since $|s|_1=2d+1=|s'|_1$, we may assume without loss of generality that $a\ge a'$, $b\le b'$ and $c\le c'$. Then \begin{align*} |s-s'|_1 &= |a-a'| + |b-b'| + |c-c'| \\ &= a-a' + b'-b + c'-c\\ &= a-a' + (2d+1-a') - (2d+1-a)\\ &= 2(a-a')\\ &\le 2d, \end{align*} where the last inequality follows from $a\le d$ and $a'\ge 0$. So $s$ and $s'$ cannot be $2d$-separated. Hence $S$ can only contain one more point apart from the corner points, so $\# S=4$ is the largest. \end{proof} For other small $m$'s, we can use a computer program to determine explicitly how large $S$ can be for small $d$'s. But we will first convert that question into a graph theory problem.
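As a small illustration of such a computation, the following brute-force sketch (hypothetical helper names, not part of the paper, and independent of the graph reformulation developed next) confirms the $m=3$ case of \Cref{prop:smallR} for small $d$: a $2d$-separated subset of $\Delta(2,2d+1)$ of size $4$ exists, but none of size $5$.

```python
from itertools import combinations

def simplex3(r):
    """All points of Delta(2, r) in Z_{>=0}^3."""
    return [(a, b, r - a - b) for a in range(r + 1) for b in range(r + 1 - a)]

def l1(s, t):
    """L1 distance |s - t|_1."""
    return sum(abs(x - y) for x, y in zip(s, t))

def has_separated_subset(points, d, k):
    """True if some k-subset of points is pairwise more than d apart in L1
    (exhaustive search; fine for these tiny instances)."""
    return any(all(l1(s, t) > d for s, t in combinations(sub, 2))
               for sub in combinations(points, k))

# The largest 2d-separated subset of Delta(2, 2d+1) has size exactly 4:
# e.g. for d = 1 the corner points together with (1, 1, 1).
for d in range(1, 3):
    pts = simplex3(2 * d + 1)
    assert has_separated_subset(pts, 2 * d, 4)      # size 4 is achievable
    assert not has_separated_subset(pts, 2 * d, 5)  # size 5 is not
```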
Given $m\ge 1$ and $d\ge 0$ integers, let $G(m,d):=(V,E)$ be the graph with vertex set $V$ and edge set $E$ defined as follows: the vertices are the points in $\Delta(m-1, 2d+1)$ with entries at most $d$, and an edge exists between distinct $s, s' \in V$ if \[ |s-s'|_1=\sum_{i=1}^{m}|s_i-s_i'| \le 2d.\] Recall that for a graph, an \emph{independent set} is a set of vertices with no edges between them, and an independent set with the largest size is called a \emph{maximum independent set}. The cardinality of such a maximum independent set is called the \emph{independence number} of the graph. The set $T$ in \Cref{q:t} is an independent set of $G(m,d)$, so we would like to know the independence number of $G(m,d)$ for a given $m\ge 1$ and $d\ge 0$. \begin{prop}\label{prop:dataSize} For the following $m$ and $d$, the size of the largest $2d$-separated set ${S\subset \Z_{\ge0}^m}$ contained in $\Delta(m-1,2d+1)$ is given in the table below. \begin{table}[H] \centering \begin{tabular}{|>{\bfseries}c| *{12}{c}|} \hline \diagbox[width=2.5em]{m}{\kern-1em d}& \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} & \textbf{5} & \textbf{6} & \textbf{7} & \textbf{8} & \textbf{9} & \textbf{10} & \textbf{11} &\textbf{12} \\ \hline 4& 5 & 6 & 6 & 7 & 7 & 7 & 7 & 7 & 8 & 8 & 8 & 8\\ 5& 7 & 10& 10& 10& 11& 11& 12& 12& 12& 13 & & \\ 6& 10& 12& 14& 15& & & & & & & &\\ 7& 14& 18& & & & & & & & & &\\ 8& 16& 24& & & & & & & & & &\\\hline \end{tabular} \caption{Size of the largest $2d$-separated set ${S\subset \Z_{\ge0}^m}$ contained in $\Delta(m-1,2d+1)$} \end{table} \end{prop} \begin{proof} As stated before the proposition, we would like to know the independence number of $G(m,d)$ to answer \Cref{q:t}, and so we used the function \texttt{IndependenceNumber} in Magma~\cite{Magma} to compute it. As noted in the paragraph before \Cref{q:t}, the size of $T$ in \Cref{q:t} is related to the size of $S$ in \Cref{q:mainWithd} by $\# S = \# T + m$.
So we added $m$ to the output from \texttt{IndependenceNumber} to obtain the size of the largest set $S\subset \Delta(m-1,2d+1)$ that is $2d$-separated, thus answering \Cref{q:mainWithd} completely for these values of $m$ and $d$. The missing entries in the table are due to the computations taking too long for those parameters. \end{proof} \begin{remark} The independence number of a graph is a well-known invariant and has been well studied. In particular, there are inequalities which involve the independence number and other invariants of a graph. See \cite[Chapter 3]{Willis11} for a list of such inequalities. Unfortunately, the inequalities listed were either not strong enough or involved an invariant that was difficult to compute, and we could not obtain any useful bound for our graphs. \end{remark} For any $m\ge 1$, we have the following construction of a set $S$ suitable for use in \Cref{thm:counterexample}. \begin{theorem}\label{thm:quadraticS} Let $m\in\Z_{>0}$. Then for any $d\ge m-1$, there exists a $2d$-separated set $S\subset {\Delta(m-1,2d+1)}\subset \Z_{\ge 0}^m$ with \[\#S = \begin{cases} \frac{m(m+2)}{4} & \text{if $m$ is even},\\ \frac{(m+1)^2}{4} & \text{if $m$ is odd}. \end{cases} \] \end{theorem} \begin{proof} We will explicitly construct such an $S$. For $1\le j\le \tfrac{m}{2}$, $1\le k\le m-2j$ and $r:=\ord_2(j)$, let \[v_{j,k}:=(\underbrace{0,\dots,0}_{k-1},d-r-k,\underbrace{0,\dots,0}_{j-1},d,\underbrace{0,\dots,0}_{j-1},r+k+1,\underbrace{0,\dots,0}_{m-2j-k})\in\Delta(m-1,2d+1).\] To check that $v_{j,k}\in\Delta(m-1,2d+1)$, we first verify that $d-r-k=d-\ord_2(j)-k\ge 0$. \begin{align*} d-\ord_2(j) - k &\ge d-\ord_2(j) - (m-2j)\\ &=d-m -\ord_2(j)+2j\\ &\ge d - m + 2\\ &\ge 0 \end{align*} where the last inequality follows from the assumption $d\ge m-1$. Looking at the length, we have $|v_{j,k}|_1 = d-r-k+d+r+k+1=2d+1$, so it has the correct length. Hence $v_{j,k}$ is a valid point in $\Delta(m-1,2d+1)$.
We take \[S:=\set{\text{corner points of }\Delta(m-1,2d+1)}\cup \set{v_{j,k}}{1\le j\le \tfrac{m}{2}, 1\le k\le m-2j}.\] We first check that all the entries of $v_{j,k}$ are at most $d$, which by \Cref{lem:boundedcoordinate} verifies that the $v_{j,k}$'s are $2d$-separated from the corner points. Since $d-r-k\le d$, the only other value we need to check is $r+k+1$, and \[r+k+1=\ord_2(j) + k + 1\le \ord_2(j) + m-2j + 1\le m - 2 + 1=m-1\le d.\] Hence $v_{j,k}$ is $2d$-separated from the corner points. Now we check that the $v_{j,k}$'s are pairwise $2d$-separated. So fix two distinct $v_{j,k}$ and $v_{j',k'}$, and let \[K:=\set{1\le i\le m}{(v_{j,k})_i\ne 0} \quad\text{and}\quad K':=\set{1\le i\le m}{(v_{j',k'})_i\ne 0}\] be the supports of $v_{j,k}$ and $v_{j',k'}$, respectively. Note that $\# (K\cap K')\le 2$, since $\# K=3=\# K'$ and $K=K'$ if and only if $v_{j,k}=v_{j',k'}$. Then \begin{align*} |v_{j,k}-v_{j',k'}|_1 =& \sum_{i\in K\cap K'}|(v_{j,k})_i-(v_{j',k'})_i| + \sum_{i\notin K\cap K'}(v_{j,k})_i + \sum_{i\notin K\cap K'}(v_{j',k'})_i \\ \ge& \sum_{i\notin K\cap K'}(v_{j,k})_i + \sum_{i\notin K\cap K'}(v_{j',k'})_i. \end{align*} Now suppose $\# (K\cap K')\le 1$. Since the entries of $v_{j,k}$ are bounded by $d$, we have $\sum_{i\notin K\cap K'}(v_{j,k})_i\ge 2d+1-d = d+1$, and similarly for $v_{j',k'}$. Hence \[ |v_{j,k}-v_{j',k'}|_1 \ge \sum_{i\notin K\cap K'}(v_{j,k})_i + \sum_{i\notin K\cap K'}(v_{j',k'})_i\ge (d+1)+(d+1) > 2d,\] and so $v_{j,k}$ and $v_{j',k'}$ are $2d$-separated. Now we consider the case when $\#(K\cap K') = 2$. Swapping $v_{j,k}$ and $v_{j',k'}$ if necessary, there are three possible cases: \begin{enumerate} \item $j'=j$ and $k' = k+j$ (where the $k+j$-th position and the $k+2j$-th position agree), \item $j'=2j$ and $k'=k$ (where the $k$-th position and the $k+2j$-th position agree), or \item $j'=2j$ and $k'=k-j'$ (where the $k$-th position and the $k+2j$-th position again agree). \end{enumerate} Note that if $j'\ne j$ and $j'\ne 2j$, then the spacing between the non-zero entries does not match, so the supports can intersect in at most one position.
We first consider the case when $j'=j$ and $k'=k+j$. In this case, we have \begin{align*} &|v_{j,k}-v_{j,k'}|_1 \\ =& |(\overbrace{\underbrace{0,\dots,0}_{k-1},d-r-k,\underbrace{0,\dots,0}_{j-1}}^{k'-1},r+k',\underbrace{0,\dots,0}_{j-1},r+k+1-d,\underbrace{\overbrace{0,\dots,0}^{j-1}, -r-k'-1,\overbrace{0,\dots,0}^{m-2j-k'}}_{m-2j-k})|_1\\ =&d-r-k+r+k'+d-(r+k+1)+r+k'+1\\ =&2d+2k'-2k\\ =&2d+2j\\ >&2d \end{align*} so they are $2d$-separated. If $j'=2j$ and $k'=k$ then we have \begin{align*} &|v_{j,k}-v_{j',k'}|_1\\ =& |(\overbrace{\underbrace{0,\dots,0}_{k-1}}^{k'-1},-r+r',\overbrace{\underbrace{0,\dots,0}_{j-1},d,\underbrace{0,\dots,0}_{j-1}}^{j'-1},r+k+1-d,\underbrace{\overbrace{0,\dots,0}^{j'-1}, -r'-k'-1,\overbrace{0,\dots,0}^{m-2j'-k'}}_{m-2j-k})|_1\\ =& -r+r' + d + d-(r+k+1) + r'+k'+1\\ =&2d + 2(r'-r) \\ =& 2d + 2(\ord_2(2j) - \ord_2(j))\\ =& 2d+2\\ >& 2d \end{align*} so these points are $2d$-separated. Finally if $j'=2j$ and $k'=k-j'$, then we have \begin{align*} &|v_{j,k}-v_{j',k'}|_1\\ =& |(\underbrace{\overbrace{0,\dots,0}^{k'-1},r'+k'-d,\overbrace{0,\dots,0}^{j'-1} }_{k-1},-r-k,\overbrace{\underbrace{0,\dots,0}_{j-1},d,\underbrace{0,\dots,0}_{j-1}}^{j'-1},r+k-r'-k',\underbrace{\overbrace{0,\dots,0}^{m-2j'-k'}}_{m-2j-k})|_1\\ =& d-r'-k'+r+k+d+r+k-r'-k'\\ =& 2d + 2(k-k') + 2(r-r')\\ =& 2d + 4j - 2\\ >& 2d, \end{align*} where in the second-to-last equality we used $k-k'=j'=2j$ and $r'-r=\ord_2(2j)-\ord_2(j)=1$; hence those points are also $2d$-separated. Finally we count how many points we get from this construction. If $m$ is even, this gives a total of \[m+\sum_{j=1}^{m/2}(m-2j) = \frac{m(m+2)}{4} \] points, and if $m$ is odd, \[m+\sum_{j=1}^{(m-1)/2}(m-2j) = \frac{(m+1)^2}{4} \] points. \end{proof} \begin{remark} One could add more points to the set $S$ given in \Cref{thm:quadraticS} by considering points with larger support. If a point has a support of size $5$ and has all its non-zero entries taking values around $\tfrac{2d+1}{5}$, then it will be $2d$-separated from every point in $S$.
More generally, if $u'$ is the size of the support of the previous point added, then one could add a point with support of size $u>2u'$ with all its non-zero entries taking values around $\tfrac{2d+1}{u}$. While this does increase the size of $S$, it is only useful for large $m$, and it does not appear to improve the asymptotic growth of the size of $S$. \end{remark} We conclude the section with an upper bound on the size of a $2d$-separated set $S$ contained in $\Delta(m-1,2d+1)$. We first give an equivalent formulation of \Cref{q:main} that treats all the $d$'s at once; this formulation will be useful for proving the upper bound. \begin{question} \label{q:real} Let $m\in \Z_+$. Consider the regular $(m-1)$-simplex \[\Delta_{m-1} := \set{v\in\R^m}{v_i\ge 0, \sum_{i=1}^mv_i = 1}.\] What is the maximum size of a set $S\subset\Delta_{m-1}$ that is $1$-separated? \end{question} \begin{remark}\label{rem:convertQs} We can convert between the $S$'s in \Cref{q:main} and \Cref{q:real} in the following way. If a set $S\subset\Delta(m-1,2d+1)$ is $2d$-separated, then $S':=\set{\tfrac{1}{2d+1}s}{s\in S}\subset \Q_{\ge 0}^m$ is contained in $\Delta_{m-1}$. Now if $s,t\in S$ are distinct points, then $|s-t|_1>2d$. But since $|s|_1=|t|_1$ and $s,t\in\Z^m_{\ge 0}$, the distance $|s-t|_1$ must be even, so $|s-t|_1\ge 2d+2$. Hence $S'$ is $1$-separated, since $\tfrac{2d+2}{2d+1}>1$, and satisfies the conditions in \Cref{q:real}. On the other hand, suppose $S\subset\Delta_{m-1}$ is $1$-separated. For each $s\in S$, take a nearby point $s':=(s'_i)\in\Delta_{m-1}\cap \Q^m$ such that the denominators of the $s'_i$'s are odd. Let $S'$ be the set of all these $s'$'s. Since being $1$-separated is an open condition, if we take each $s'$ to be close enough to $s$, then $S'$ will be $1$-separated. Now let $2d+1$ be the LCM of the denominators appearing in the entries of all $s'\in S'$ (this LCM is odd, since all the denominators are odd). Then $S'':=\set{ (2d+1)s'}{s'\in S'}\subset \Z^m_{\ge 0}$ is contained in $\Delta(m-1,2d+1)$ and is $2d$-separated.
Hence $S''$ satisfies the conditions in \Cref{q:main}. \end{remark} \begin{prop}\label{prop:realUpperbound} Let $m\in\Z_{>0}$. Then for any $S\subset\Delta_{m-1}$ that is $1$-separated, we have \[ \# S \le 4^{m-1}.\] \end{prop} \begin{proof} Suppose we have such an $S$. Then around each point $s\in S$ we have an $\ell_1$-ball $B(s,\tfrac{1}{2})$ of radius $\tfrac{1}{2}$, and these balls are pairwise disjoint. So we have \[ \sum_{s\in S} \vol\left(B(s,\tfrac{1}{2})\cap \Delta_{m-1}\right) \le \vol(\Delta_{m-1}).\] The volume of $B(s,\tfrac{1}{2})\cap\Delta_{m-1}$ is smallest when $s$ is a corner point of $\Delta_{m-1}$, since $\Delta_{m-1}$ is a convex polytope. For a corner point $v_1:=(1,0,\dots,0)$, \[ B(v_1,\tfrac{1}{2})\cap\Delta_{m-1} =\set{(u_i)\in\R^m}{ \frac{3}{4}\le u_1\le 1, 0\le u_i\le \frac{1}{4}\; \forall\; 2\le i\le m,\sum_{i=1}^mu_i=1},\] which is an $(m-1)$-simplex with vertices $(1,0,\dots,0)$, $(\tfrac{3}{4},\tfrac{1}{4},0,\dots,0),\dots, (\tfrac{3}{4},0,\dots,0,\tfrac{1}{4})$. This simplex has side length $\tfrac{1}{4}$ times the side length of the simplex $\Delta_{m-1}$, so \[\#S\left(\tfrac{1}{4}\right)^{m-1}\vol(\Delta_{m-1})\le \sum_{s\in S} \vol\left(B(s,\tfrac{1}{2})\cap \Delta_{m-1}\right) \le \vol(\Delta_{m-1}). \] Hence $\# S \le 4^{m-1}$. \end{proof} We can translate the upper bound for \Cref{q:real} to an upper bound for \Cref{q:main}. \begin{corollary} \label{cor:upperbound} Let $m\ge 1$ be an integer. Suppose $S\subset\Delta(m-1,2d+1)\subset\Z_{\ge 0}^m$ is a $2d$-separated set for some $d\ge 0$. Then $\# S \le 4^{m-1}$. \end{corollary} \begin{proof} By \Cref{rem:convertQs}, any set $S\subset\Delta(m-1,2d+1)$ that is $2d$-separated can be converted to a set $S'\subset \Delta_{m-1}$ that is $1$-separated. By \Cref{prop:realUpperbound}, $\# S'\le 4^{m-1}$, so $\# S= \# S' \le 4^{m-1}$.
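Both the construction of \Cref{thm:quadraticS} and this upper bound can be checked by brute force for small parameters. The following Python sketch (ours, not part of the argument; the helper names are our own) rebuilds the set $S$ with $d=m-1$ and verifies the separation, the size formula, and the $4^{m-1}$ bound.

```python
# Numerical sanity check (not part of the paper): for small m and d = m - 1,
# build the set S of the theorem (corner points plus the v_{j,k}) and verify
# that S is 2d-separated, that #S matches the stated formula, and that #S
# respects the 4^(m-1) upper bound.
from itertools import combinations

def ord2(j):
    # 2-adic valuation of a positive integer j
    r = 0
    while j % 2 == 0:
        j //= 2
        r += 1
    return r

def build_S(m, d):
    n = 2 * d + 1
    # corner points (2d+1) * e_i of Delta(m-1, 2d+1)
    S = [tuple(n if i == c else 0 for i in range(m)) for c in range(m)]
    for j in range(1, m // 2 + 1):
        r = ord2(j)
        for k in range(1, m - 2 * j + 1):
            v = [0] * m
            v[k - 1] = d - r - k           # position k (1-indexed)
            v[k - 1 + j] = d               # position k + j
            v[k - 1 + 2 * j] = r + k + 1   # position k + 2j
            S.append(tuple(v))
    return S

def is_separated(S, d):
    # pairwise l1-distance strictly greater than 2d
    return all(sum(abs(a - b) for a, b in zip(u, v)) > 2 * d
               for u, v in combinations(S, 2))

for m in range(2, 9):
    d = m - 1
    S = build_S(m, d)
    assert all(sum(v) == 2 * d + 1 for v in S)  # points lie in Delta(m-1, 2d+1)
    assert is_separated(S, d)                   # S is 2d-separated
    expected = m * (m + 2) // 4 if m % 2 == 0 else (m + 1) ** 2 // 4
    assert len(S) == expected                   # size formula of the theorem
    assert len(S) <= 4 ** (m - 1)               # upper bound of the corollary
```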
\end{proof} \section{A Ring with Trace Zero Non-commutators of Arbitrarily Large Size}\label{sec:ring} In this section, we construct a Noetherian domain $\Lambda$ of dimension $1$ that admits an $n\times n$ trace $0$ non-commutator matrix over $\Lambda$ for all $n\ge 2$. We use the following theorem of Heinzer and Levy to construct the ring, and then apply \Cref{cor:d0} with varying maximal ideals to show that there is a trace $0$ non-commutator of arbitrarily large size. \begin{theorem}[{\cite[Theorem 2.1]{HL07}}]\label{thm:LH} Let $K$ be a field and $I$ a nonempty set. For each $i\in I$, let $(\Lambda_i,\fn(i))$ be a Noetherian local integral domain of dimension $1$ with maximal ideal $\fn(i)$ such that $\Lambda_i$ is a subring of $K$ and the quotient field $Q(\Lambda_i)$ equals $K$. Suppose: \begin{enumerate} \item Every non-zero element of $K$ is a unit in all but finitely many $\Lambda_i$; and \item For every pair of distinct indices $j\ne h$ there exist elements $x_j\in\Lambda_j$ and $x_h\in\Lambda_h$ such that: \begin{enumerate} \item $x_j$ and $x_h$ are non-units in $\Lambda_j$ and $\Lambda_h$ respectively; \item $x_j$ is a unit in $\Lambda_i$ when $i\ne j$, and $x_h$ is a unit in $\Lambda_i$ when $i\ne h$; \item $x_j+x_h$ is a unit in $\Lambda_i$ for every $i$. \end{enumerate} \end{enumerate} Then the ring $\Lambda:=\bigcap_i\Lambda_i$ is Noetherian of dimension $1$, its distinct maximal ideals are $\fm(i):=\fn(i)\cap \Lambda$, and $\Lambda_i=\Lambda_{\fm(i)}$ for each $i\in I$. \end{theorem} \unbounded \begin{proof} We will start with the construction of the field $K$ needed to apply \Cref{thm:LH}. Let $k$ be an algebraically closed field and $S:=k[X_{ij}:i\ge 2, 1\le j\le i]$, a polynomial ring with infinitely many variables. Let $J\subset S$ be the ideal generated by the following elements: \[ J:= ( X^2_{i1}-X^{p_j}_{ij}: i\ge 2, 2\le j\le i),\] where $p_j$ is the $j$-th prime. \begin{claim}\label{claim:domain} $R:=S/J$ is an integral domain.
\end{claim} \begin{proof}[Proof of \Cref{claim:domain}] We show that $R$ is an integral domain by proving that $J$ is a prime ideal. So let $g,h\in S$ be such that $gh\in J$. Then \[gh = \sum_{i\ge 2}\sum_{j=2}^i a_{ij}(X^{2}_{i1}-X^{p_j}_{ij}), \] for some $a_{ij}\in S$ with only finitely many of the $a_{ij}$'s being non-zero. Then there are only finitely many variables $X_{ij}$ appearing in the equation, so we may restrict our ring $S$ to \[S_n := k[X_{ij}: 2\le i\le n, 1\le j\le i],\] and the ideal $J$ to an ideal \[J_n := ( X^{2}_{i1}-X^{p_j}_{ij}: 2\le i\le n, 2\le j\le i)\subset S_n,\] for some $n$ chosen large enough that $g$, $h$ and all the non-zero $a_{ij}$'s lie in $S_n$. Then $g\in J_n$ or $h\in J_n$ will imply $g\in J$ or $h\in J$, so it is sufficient to show that $J_n$ is a prime ideal. We will prove that $S_n/J_n$ is a domain in order to show that $J_n$ is a prime ideal. Now, \[S_n/J_n \iso k[X_{21},X_{22}]/(X^2_{21}-X_{22}^3) \tensor_k\dots\tensor_k k[X_{n1},\dots,X_{nn}]/(X^2_{n1}-X_{n2}^3,\dots,X^{2}_{n1}-X^{p_n}_{nn}). \] A tensor product of finitely generated $k$-algebras that are domains is a domain if $k$ is algebraically closed (see proof of \cite[\href{https://stacks.math.columbia.edu/tag/05P3}{Tag 05P3}]{stacks-project}). So if \[T_t:=k[X_{1},\dots,X_{t}]/(X^2_{1}-X_{2}^3,\dots,X^{2}_{1}-X^{p_t}_{t}),\] is a domain for all $2\le t\le n$, then $S_n/J_n$ is also a domain. Now we will prove that $T_t$ is a domain by induction on $t$, along with the fact that $[Q(T_t):k(X_1)]=p_2p_3\cdots p_t$ where $Q(T_t)$ is the quotient field of $T_t$. If $t=2$, then $T_2=k[X_1,X_2]/(X^2_1-X^3_2)$. Now $X_1^2-X_2^3$ is an irreducible polynomial in $k[X_1,X_2]$, and $k[X_1,X_2]$ is a UFD, so $X_1^2-X_2^3$ is a prime element. Hence $T_2$ is a domain, and $[Q(T_2):k(X_1)]=3$. Now suppose that $T_{t-1}$ is a domain and $[Q(T_{t-1}):k(X_1)] = p_2p_3\cdots p_{t-1}$. Since \[ T_t \iso T_{t-1}[X_t]/(X_{t}^{p_{t}}-X_1^2),\] we only need to show that $f_t:=X^{p_t}_t-X_1^2$ is prime in $T_{t-1}[X_t]$ to show that $T_t$ is a domain.
We first show that $f_t$ is irreducible in $k(X_1)[X_t]$. Now $f_t=X_t^{p_t}-X_1^2$ has a prime degree and $k(X_1)$ contains all the $p_t$-th roots of unity, so by \cite[Theorem VI.9.1]{Lang}, $f_t$ is irreducible in $k(X_1)[X_{t}]$ if and only if it has no roots in $k(X_1)$. It is clear that $f_t$ has no roots in $k(X_1)$, so $f_t$ is irreducible in $k(X_1)[X_t]$. Now $\deg f_t=p_t$ is a prime and does not divide $[Q(T_{t-1}):k(X_1)]=p_2p_3\cdots p_{t-1}$, so $f_t$ is still irreducible in $Q(T_{t-1})[X_t]$. Hence $f_t$ is a prime in $Q(T_{t-1})[X_t]$ since $Q(T_{t-1})[X_t]$ is a UFD. Now we will use the fact that $f_t$ is a prime in $Q(T_{t-1})[X_t]$ to show that $f_t$ is a prime in $T_{t-1}[X_t]$. So suppose $f_t\mid gh$ for some $g,h\in T_{t-1}[X_t]$. Since $f_t$ is a prime in $Q(T_{t-1})[X_t]$, we may assume that $f_tg' = g$ for some $g':=\sum_{i=0}^{n'}g'_iX_t^i\in Q(T_{t-1})[X_t]$ without loss of generality. Suppose $g'\notin T_{t-1}[X_t]$. Then there exists a maximal index $j$ such that $g'_j\notin T_{t-1}$. If $g=\sum_{i=0}^ng_iX_t^i$, then looking at the degree $j+p_t$ coefficients of the equation $f_tg'=g$, we have \[ g_{j+p_t} = g'_j - g'_{j+p_t}X_1^2.\] We have $g'_{j+p_t}\in T_{t-1}$ since $j$ was the maximal index such that $g'_j\notin T_{t-1}$. Hence $g'_j=g_{j+p_t}+g'_{j+p_t}X_1^2\in T_{t-1}$, which is a contradiction, so $g'\in T_{t-1}[X_t]$. Hence we have $f_t\mid g$ in $T_{t-1}[X_t]$ and so $f_t$ is a prime in $T_{t-1}[X_t]$. This implies that $T_t=T_{t-1}[X_t]/(f_t)$ is a domain, and now $Q(T_t)=Q(T_{t-1})[X_t]/(f_t)$, so \[[Q(T_t):k(X_1)] = [Q(T_t):Q(T_{t-1})][Q(T_{t-1}):k(X_1)]= p_tp_2p_3\cdots p_{t-1},\] and we are done. Hence $T_t$, $S_n/J_n$ and $R=S/J$ are all domains and we have proved \Cref{claim:domain}. \end{proof} Since $R$ is a domain, we may take $K:=Q(R)$, the quotient field of $R$. This will be the $K$ we take for applying \Cref{thm:LH}. We now define the $\Lambda_n$'s needed to apply \Cref{thm:LH}. We take the index set $I$ to be $\set{n\in\Z}{n\ge 2}$.
Let $Y_{ij}\in R\subset K$ be the residue class of $X_{ij}$. For $n\ge 2$, define a subfield \[F_n := k(Y_{ij}: i\ne n, 1\le j\le i)\subset K,\] and a subring \[R_n:= F_n[Y_{n,1},\dots,Y_{n,n}]\subset K.\] Let $P_n:=F_n[Z_1,\dots,Z_n]$ be the polynomial ring in $n$ variables. \begin{claim}\label{claim:iso} \begin{align*} \varphi_n\lmaps[l]{P_n/(Z^{2}_1-Z^3_2,\dots, Z^{2}_1-Z^{p_n}_n)}{R_n=F_n[Y_{n,1},\dots,Y_{n,n}]}\\ \widebar{Z}_i&\longmapsto Y_{n,i}, \end{align*} is an isomorphism, where $\widebar{Z}_i$ is the residue class of $Z_i$. \end{claim} \begin{proof}[Proof of \Cref{claim:iso}] It is clear that $\varphi_n$ is surjective, so let us show that it is injective. Consider the lift of $\varphi_n$, \begin{align*} \tilde{\varphi}_n\lmaps[l]{P_n}{R_n=F_n[Y_{n,1},\dots,Y_{n,n}]}\\ Z_i&\longmapsto Y_{n,i}. \end{align*} We see that $\ker\tilde{\varphi}_n\supset (Z^{2}_1-Z^3_2,\dots, Z^{2}_1-Z^{p_n}_n)$, so to conclude that $\varphi_n$ is injective, we only need to show that \[\ker\tilde{\varphi}_n\subset (Z^{2}_1-Z^3_2,\dots, Z^{2}_1-Z^{p_n}_n).\] So suppose that $f\in \ker\tilde{\varphi}_n$.
If $c\in F^\times_n$ is the product of the denominators of the coefficients of $f$, then \[cf\in k[Y_{ij}:i\ne n, 1\le j\le i][Z_1,\dots,Z_n]=:P_n'\subset P_n.\] Moreover, $cf\in (Z^{2}_1-Z^3_2,\dots, Z^{2}_1-Z^{p_n}_n)$ implies $f\in(Z^{2}_1-Z^3_2,\dots, Z^{2}_1-Z^{p_n}_n)$, so it is sufficient to check that \[ \ker\left(\tilde{\varphi}_n|_{P'_n} \right)\subset (Z^{2}_1-Z^3_2,\dots, Z^{2}_1-Z^{p_n}_n)P_n'\subset P_n'.\] We have the following commutative diagram \[\begin{tikzcd} S=k[X_{ij}:i\ge 2, 1\le j\le i] \arrow[d, "\psi_n", two heads] \arrow[rd, "q", two heads] & \\ {P_n'=k[Y_{ij}:i\ne n, 1\le j\le i][Z_1,\dots,Z_n]} \arrow[r, "\tilde{\varphi}_n|_{P'_n}", two heads] & \tilde{\varphi}_n(P_n')=k[Y_{ij}:i\ge 2, 1\le j\le i]=S/J \end{tikzcd},\] where $q$ is the quotient map and \begin{align*} \psi_n\lmaps[l]{S}{P_n'}\\ X_{ij} &\longmapsto \begin{cases} Y_{ij} &\text{if }i\ne n,\\ Z_j &\text{if }i=n. \end{cases} \end{align*} Since $\psi_n$ is surjective, $\ker\tilde{\varphi}_n|_{P_n'}=\psi_n(\ker q) = \psi_n(J)$, and \[ \psi_n(J) = ( \psi_n(X_{i1})^2-\psi_n(X_{ij})^{p_j}: i\ge 2, 2\le j\le i) = (Z^{2}_1-Z^3_2,\dots, Z^{2}_1-Z^{p_n}_n) \subset P_n'. \] Hence \[ \ker\tilde{\varphi}_n|_{P_n'}= (Z^{2}_1-Z^3_2,\dots, Z^{2}_1-Z^{p_n}_n)\subset P_n',\] and $\varphi_n$ is an isomorphism. \end{proof} Since $P_n/(Z^{2}_1-Z^3_2,\dots, Z^{2}_1-Z^{p_n}_n)$ is easier to work with than $R_n$, we will use the former presentation to prove properties about $R_n$ to verify the assumptions of \Cref{thm:LH}. The ideal $(Y_{n,1},\dots,Y_{n,n})\subset R_n$ is maximal since it corresponds to $(\widebar{Z}_1,\dots,\widebar{Z}_n)$ so we may localise $R_n$ at $(Y_{n,1},\dots,Y_{n,n})$ to obtain a local domain \[\Lambda_n := (R_n)_{(Y_{n,1},\dots,Y_{n,n})}\subset K,\] with a maximal ideal $\fn(n):=(Y_{n,1},\dots,Y_{n,n})\subset \Lambda_n$. Since $\Lambda_n$ is a localisation of a finitely generated $F_n$-algebra, it is Noetherian. Now we show that $\dim \Lambda_n=1$. 
We have an isomorphism \[\Lambda_n\iso \left(P_n/(Z^{2}_1-Z^3_2,\dots, Z^{2}_1-Z^{p_n}_n)\right)_{(\widebar{Z}_1,\dots,\widebar{Z}_n)}, \] through $\varphi_n$. We may interchange localisation and quotient so \[\Lambda_n\iso (P_n)_{(Z_1,\dots,Z_n)}/(Z^{2}_1-Z^3_2,\dots, Z^{2}_1-Z^{p_n}_n),\] as well. Now \[\left(Z_1,Z^{2}_1-Z^3_2,\dots, Z^{2}_1-Z^{p_n}_n\right) =\left(Z_1, Z_2^3,\dots, Z_n^{p_n}\right) \subset (P_n)_{(Z_1,\dots,Z_n)} \] is a $(Z_1,\dots,Z_n)$-primary ideal, so $\{Z_1,Z^{2}_1-Z^3_2,\dots, Z^{2}_1-Z^{p_n}_n\}$ is a system of parameters of $(P_n)_{(Z_1,\dots,Z_n)}$ (see the start of \cite[Ch. 14]{Matsumura} for the definition of a system of parameters). Hence by \cite[Theorem 14.1]{Matsumura}, \[\dim \Lambda_n=\dim (P_n)_{(Z_1,\dots,Z_n)}/(Z^{2}_1-Z^3_2,\dots, Z^{2}_1-Z^{p_n}_n) = n-(n-1) = 1.\] Finally note that $\Lambda_n$ contains all $Y_{ij}$'s, so $R\subset \Lambda_n\subset K=Q(R)$. Hence to apply \Cref{thm:LH}, we only need to verify conditions (1) and (2). For (1), if $f\in K$, then $f$ can be written as a rational function in the $Y_{ij}$'s. If $f$ is non-zero, then only finitely many $Y_{ij}$'s appear in such a representation of $f$. If $n\ge 2$ is such that $Y_{nj}$ does not appear in such a representation for all $1\le j\le n$, then $f\in F_n$, so $f\in\Lambda_n$ is a unit. Hence any non-zero element in $K$ is a unit in all but finitely many $\Lambda_n$'s. For (2), if $j \ne h$, then take $x_j:= Y_{j,1}$ and $x_h:=Y_{h,1}$. These are non-units in $\Lambda_j$ and $\Lambda_h$ respectively, and $x_j=Y_{j,1}\in F_i$ is a unit in $\Lambda_i$ for every $i\ne j$ (and similarly $x_h$ is a unit in $\Lambda_i$ for every $i\ne h$). Moreover, if $i\ne j$ and $i\ne h$, then $x_j+x_h\in F_i\subset R_i$ is a non-zero element of the field $F_i$, so $x_j+x_h$ is a unit in $\Lambda_i$. If $i=j$, then $x_h$ is a unit in the local ring $\Lambda_j$ while $x_j\in\fn(j)$, so $x_j+x_h$ is again a unit in $\Lambda_j$; the case $i=h$ is symmetric. Hence $\Lambda:=\bigcap_i \Lambda_i$ is a Noetherian domain of dimension $1$ by \Cref{thm:LH}. Finally, we show that the embedding dimension of $\fm(i):=\fn(i)\cap\Lambda$ is $i$.
This will prove the theorem since for all $n\ge 2$, we can find $i$ such that $n\le \tfrac{i+1}{2}$, and we can apply \Cref{cor:d0} with $R=\Lambda$ and $\fm = \fm(i)$ to construct an $n\times n$ trace $0$ non-commutator. We now compute the embedding dimension of the $\fm(i)$'s. Since $\Lambda_i=\Lambda_{\fm(i)}$ by \Cref{thm:LH}, we can check the embedding dimension of the local ring $\Lambda_i$ instead. We have \[\Lambda_i\iso (P_i)_{(Z_1,\dots,Z_i)}/(Z^{2}_1-Z^3_2,\dots, Z^{2}_1-Z^{p_i}_i),\] so \[ \fn(i)/\fn(i)^2\iso (\widebar{Z}_1,\dots,\widebar{Z}_i)/(\widebar{Z}_1,\dots,\widebar{Z}_i)^2.\] Now we show that $\widebar{Z}_1,\dots,\widebar{Z}_i$ form an $F_i$-basis of $(\widebar{Z}_1,\dots,\widebar{Z}_i)/(\widebar{Z}_1,\dots,\widebar{Z}_i)^2$. So suppose \begin{equation} a_1\widebar{Z}_1+\dots+a_i\widebar{Z}_i \in (\widebar{Z}_1,\dots,\widebar{Z}_i)^2,\label{eq:6} \end{equation} for some $a_1,\dots,a_i\in F_i$. Now \[(Z^{2}_1-Z^3_2,\dots, Z^{2}_1-Z^{p_i}_i) \subset (Z_1,\dots,Z_i)^2, \] so we can lift \cref{eq:6} to $(P_i)_{(Z_1,\dots,Z_i)}$ to obtain \[ a_1Z_1+\dots+a_iZ_i \in (Z_1,\dots,Z_i)^2.\] But then $a_j=0$ for all $j$ since all the terms on the left-hand side have degree $1$. Hence \[\dim_{F_i}(\widebar{Z}_1,\dots,\widebar{Z}_i)/(\widebar{Z}_1,\dots,\widebar{Z}_i)^2=i, \] so the embedding dimension of $\fm(i)$ is $i$, as claimed. \end{proof} \section{Trace Zero $2\times 2$ Matrices} \label{sec:2x2} In this section, we summarise the current progress on the case of $2\times 2$ matrices. We start by recalling various facts and definitions needed to state \Cref{thm:characterisation}, where we put together known results from the 1970s and 1980s. We then review facts and definitions on the K-theory of affine surfaces to be able to state new results such as \Cref{cor:fpalgebraisop}, \Cref{cor:gradedopring} and \Cref{thm:bbopring}. For this section, we assume that $\Spec(R)$ is connected for any ring $R$.
This is not a strong assumption since we can always reduce the question about commutators to the case where $\Spec(R)$ is connected if $R$ is Noetherian. Indeed, if $R$ is a Noetherian ring, then $\Spec(R)$ has only finitely many connected components, and $R=\prod_{i=1}^r R_i$ where $\Spec(R_i)$ is a connected component of $\Spec(R)$. A matrix $M\in \Mat_n(\prod_{i=1}^rR_i)$ is a commutator if and only if all $M_i$'s are commutators where $M=(M_i)\in \prod_{i=1}^r\Mat_n(R_i)\iso\Mat_n(\prod_{i=1}^rR_i)$. Lissner showed by elementary means that $\begin{pmatrix}c_1&c_2\\c_3&-c_1 \end{pmatrix}\in \Mat_2(R)$ is a commutator except possibly when $c_i\notin (c_j,c_k)\subset R$ for any $(i\; j\; k)$ which is a permutation of $(1\; 2\; 3)$ (see \cite[Lemma 3.1 and 3.3]{Lissner61}). The main tool in the $2\times 2$ case is the following lemma connecting a commutator with an exterior power. Recall that $v\in \bigwedge^pR^n$ is \emph{decomposable} if there exist $v_1,\dots,v_p\in R^n$ such that $v=v_1\wedge \dots \wedge v_p$. \begin{lemma}\label{lem:2x2decomposable} Let $ M:= \begin{pmatrix} a & b \\ c & -a \end{pmatrix}\in\Mat_2(R)$. Then $M$ is a commutator if and only if $ ae_2\wedge e_3+be_3\wedge e_1+ce_1\wedge e_2\in\bigwedge^2 R^3$ is decomposable. In particular, every $2\times 2$ trace $0$ matrix is a commutator if and only if every vector in $\bigwedge^2R^3$ is decomposable. \end{lemma} \begin{proof} Omitted. \end{proof} \begin{definition} Given $n\ge 1$, we say that \emph{$R$ is $T_n^{n+1}$} if every vector in $\bigwedge^nR^{n+1}$ is decomposable. We call a ring $R$ an \emph{OP-ring} if it is $T_n^{n+1}$ for all $n\ge 1$. \end{definition} This definition was made by David Lissner in \cite{Lissner65} and OP stands for \emph{outer product}. \begin{remark} \Cref{lem:2x2decomposable} can be rephrased as stating that every trace $0$ matrix in $\Mat_2(R)$ is a commutator if and only if $R$ is $T_2^3$. 
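As a quick numerical illustration of this correspondence (our sketch, not part of the paper): subtracting scalar matrices does not change a commutator, so every commutator $[X,Y]$ arises from matrices whose bottom-right entries are $0$, and for such $X,Y$ the entries of $[X,Y]$ are exactly the coordinates of a wedge of two vectors under the usual identification of $\bigwedge^2\Z^3$ with $\Z^3$ (a cross product).

```python
# Sketch over Z (illustration only): take X = [[p, q], [r, 0]] and Y = [[s, t], [w, 0]].
# Replacing X by X - x22*I and Y by Y - y22*I leaves [X, Y] unchanged, so this
# normal form loses nothing.  Then [X, Y] = [[a, b], [c, -a]] where (a, b, c) is
# the cross product of (s, w, t) and (p, r, q), i.e.
# a*e2^e3 + b*e3^e1 + c*e1^e2 is decomposable.
import random

def commutator(X, Y):
    def mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    XY, YX = mul(X, Y), mul(Y, X)
    return [[XY[i][j] - YX[i][j] for j in range(2)] for i in range(2)]

def cross(u, v):
    # coordinates of u ^ v under e2^e3, e3^e1, e1^e2
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

random.seed(0)
for _ in range(100):
    p, q, r, s, t, w = (random.randint(-9, 9) for _ in range(6))
    C = commutator([[p, q], [r, 0]], [[s, t], [w, 0]])
    a, b, c = C[0][0], C[0][1], C[1][0]
    assert C[1][1] == -a                              # commutators have trace 0
    assert (a, b, c) == cross((s, w, t), (p, r, q))   # ...and are decomposable
```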
\end{remark} From here on in this section, we will focus on the case when $R$ is Noetherian. For non-Noetherian OP rings, see \cite{JW17}. Given a property P, we say that a ring $R$ is \emph{locally P} if every localisation of $R$ at a prime ideal has the property P. For the properties $T_n^{n+1}$ and OP, holding at every localisation at a prime ideal is equivalent to holding at every localisation at a maximal ideal. \begin{prop}\label{prop:locallyop} A Noetherian ring $R$ is locally $T_n^{n+1}$ or locally OP if and only if every localisation of $R$ at a maximal ideal is $T_n^{n+1}$ or OP, respectively. \end{prop} \begin{proof} If $R$ is locally $T_n^{n+1}$, then every localisation of $R$ at a maximal ideal is $T_n^{n+1}$ by definition. On the other hand, suppose every localisation of $R$ at a maximal ideal is $T_n^{n+1}$, and let $\fp\subset R$ be a prime ideal. Then there exists a maximal ideal $\fm\supset \fp$, and $R_\fp$ is a localisation of $R_\fm$ at $\fp R_\fm$. Now let $u\in\bigwedge^nR_\fp^{n+1}$, and $f:R_\fm^{n+1}\to R_\fp^{n+1}$ be the $R_\fm$-module homomorphism induced by the localisation at $\fp R_\fm$. Then there exists $r\in R_\fm\setminus \fp R_\fm$ such that $ru \in \im \bigwedge^nf$, so let $v\in \bigwedge^nR_\fm^{n+1}$ be such that $(\bigwedge^{n}f)(v) = ru$. Since $R_\fm$ is $T_n^{n+1}$, $v$ is decomposable, so \[ v = v_1\wedge \dots \wedge v_n \] for some $v_1,\dots,v_n\in R_\fm^{n+1}$. Now \[ u = \frac{1}{r}({\textstyle\bigwedge^n}f)(v_1\wedge\dots\wedge v_n)=\frac{1}{r} f(v_1)\wedge \dots\wedge f(v_n)\in {\textstyle\bigwedge^n}R^{n+1}_\fp \] so $u$ is decomposable. Hence $R_\fp$ is $T_n^{n+1}$. The equivalence for OP follows from the equivalence for $T_n^{n+1}$ for all $n\ge 1$. \end{proof} For a local ring, the OP property can be detected by the minimal number of generators for its maximal ideal. \begin{theorem}[{\cite[Theorem]{Towber70}}]\label{thm:local} Let $R$ be a Noetherian local ring with maximal ideal $\fm$.
Then $R$ is an OP-ring if and only if $\fm$ is generated by 2 elements. \end{theorem} \begin{corollary}\label{cor:regularlocalop} Let $R$ be a regular ring of dimension $d \le 2$. Then $R$ is locally OP. \end{corollary} \begin{proof} Let $\fm\subset R$ be a maximal ideal. Then $R_\fm$ is a regular local ring of dimension $\le d$, so $\fm R_\fm$ is generated by at most 2 elements. Hence $R_\fm$ is an OP ring by \Cref{thm:local}, and so $R$ is locally OP. \end{proof} Recall that a \emph{semilocal ring} is a ring with finitely many maximal ideals. For semilocal rings, the OP property can be detected locally. \begin{theorem}[{\cite[Theorem]{Hinohara72}}]\label{thm:semilocal} Let $R$ be a Noetherian semilocal ring. Then $R$ is an OP-ring if and only if $R$ is locally OP. \end{theorem} Lissner first used the term OP-ring in \cite{Lissner65} and showed that a Dedekind domain is an OP-ring in \cite[Appendix]{Lissner65}. Towber subsequently showed that if $R$ is a Dedekind domain then $R[x]$ is an OP-ring in \cite[Theorem 1.2]{Towber68}. Estes and Matijevic then proved the following characterisation of OP-rings. Note that in the equivalence stated below, instead of the condition that $R$ is locally OP, Estes and Matijevic required that $R_\fm$ be OP for every maximal ideal $\fm\subset R$; by \Cref{prop:locallyop}, the two conditions are equivalent. Recall that an $R$-module $M$ is \emph{$R$-oriented} if $\bigwedge^nM\iso R$ for some $n\ge 1$. \begin{theorem}[{\cite[Theorem 1, Corollary 1, Corollary 2]{EM80}}]\label{thm:general} Let $R$ be a Noetherian ring satisfying one of the following properties: \begin{enumerate}[label=(\alph*)] \item $R$ is reduced, \item every minimal ideal of $R$ is principal, or \item $R$ has only finitely many maximal ideals with non-regular localisation. \end{enumerate} Then $R$ is an OP-ring if and only if $R$ is locally OP and every finitely generated $R$-oriented module is free.
\end{theorem} \begin{remark} Estes and Matijevic note in \cite[Page 1356]{EM80} that the three conditions on $R$ are probably not necessary, and suspect that every $R$-oriented module being free and $R$ being locally OP are the only necessary conditions for $R$ to be an OP-ring. \end{remark} \begin{corollary}\label{cor:orientedisprojective} Let $R$ be a reduced Noetherian locally OP ring. Then any finitely generated $R$-oriented module is projective. \end{corollary} \begin{proof} Let $M$ be a finitely generated $R$-oriented module, so that $\bigwedge^n M \iso R$ for some $n\ge 1$. We will show that $M$ is projective by showing that it is locally free. So let $\fp\in \Spec(R)$ be a prime ideal. Then \[ R_\fp \iso {\textstyle\left(\bigwedge^n M\right)} \tensor_R R_\fp \iso \textstyle{\bigwedge^n} M_\fp.\] Now $R$ is locally OP and so $R_\fp$ is an OP-ring, and $M_\fp$ is a finitely generated $R_\fp$-oriented module, so $M_\fp$ is free by \Cref{thm:general}. Hence $M$ is projective. \end{proof} Recall that an $R$-module $M$ is \emph{stably-free} if $M\oplus R^m\iso R^n$ for some $n$ and $m$. The \emph{Grothendieck group} of a ring $R$, $K_0(R)$, is the group completion of the monoid of isomorphism classes of finitely generated projective $R$-modules, under direct sum, and $SK_0(R)$ is the subgroup of $K_0(R)$ consisting of the classes $[P]-[R^m]$, where $P$ is an $R$-oriented projective module of constant rank $m$ (cf. \cite[Definition II.2.6.1]{Weibel13}). \begin{theorem}[Boraty{\'n}ski--Davis--Geramita \cite{BDG80}, Estes--Matijevic \cite{EM80}]\label{thm:characterisation} Let $R$ be a Noetherian ring satisfying one of the conditions (a), (b), or (c) in \Cref{thm:general}.
Then the following are equivalent: \begin{enumerate} \item Every trace $0$ matrix in $\Mat_2(R)$ is a commutator, \item $R$ is $T_2^3$, \item $R$ is an OP-ring, \item $R$ is locally OP and every finitely generated projective $R$-oriented module is free, \item $R$ is locally OP, $SK_0(R)=0$ and every finitely generated stably-free projective $R$-module is free, and \item Every maximal ideal of $R$ is generated by two elements and every finitely generated stably-free projective $R$-module is free. \end{enumerate} \end{theorem} \begin{proof} (1) is equivalent to (2) by \Cref{lem:2x2decomposable}. (3) implies (2) follows from the definition of an OP-ring. (4) implies (3) is \Cref{thm:general} with \Cref{cor:orientedisprojective}. For (5) implies (4), suppose $M$ is a finitely generated $R$-oriented module. Then for every prime ideal $\fp$, $M_\fp$ is an $R_\fp$-oriented module, and since $R_\fp$ is OP, $M_\fp$ is free. Hence $M$ is an $R$-oriented projective module, so it is stably-free projective by \cite[(6) Lemma]{BDG80}, and hence free by (5). For (6) implies (5), if every maximal ideal of $R$ is generated by 2 elements, then every maximal ideal is locally generated by at most two elements. Hence every localisation is OP by \Cref{thm:local}. Now for every maximal ideal $\fm$, $\fm R_\fm$ is generated by at most 2 elements, so $\dim R_\fm\le 2$ by Krull's Height theorem \cite[Theorem 13.5]{Matsumura}. Hence $\dim R\le 2$, and so by \cite[(5) Proposition]{BDG80}, $SK_0(R) = 0$. (2) implies (6) follows from \cite[(3) Lemma, (4) Lemma]{BDG80}. \end{proof} \begin{corollary}\label{cor:lowdim} Let $R$ be a locally OP Noetherian ring. Then \begin{enumerate} \item $\dim R\le 2$, \item If $\dim R=0$ then $R$ is an OP-ring, \item If $\dim R=1$ and if $R$ satisfies one of the conditions (a), (b), or (c) in \Cref{thm:general}, then $R$ is an OP-ring.
\end{enumerate} \end{corollary} \begin{proof} If $R$ is a locally OP Noetherian ring, then by \Cref{thm:local}, $\fm R_\fm$ is generated by 2 elements for every maximal ideal $\fm$ of $R$. Hence $\dim R_\fm \le 2$ by Krull's Height theorem \cite[Theorem 13.5]{Matsumura}, and so $\dim R\le 2$. (2) follows from \Cref{thm:semilocal} since if $R$ is Noetherian of dimension $0$, then it is a semilocal ring. For (3), let $M$ be an $R$-oriented module, that is, $\bigwedge^nM\iso R$ for some $n\ge 1$. Then by \Cref{cor:orientedisprojective}, $M$ is projective, and so by the Bass cancellation theorem \cite[Theorem I.2.3]{Weibel13}, $M\iso P\oplus R^{n-1}$ for some projective module $P$ of rank 1. Now \[R = {\textstyle\bigwedge^nM= \bigwedge^n}\left(P\oplus R^{n-1}\right)= \bigoplus_{k=0}^n {\textstyle\bigwedge^kP} \tensor {\textstyle\bigwedge^{n-k}R^{n-1}} = P\tensor R = P,\] so $M$ is free. Hence every finitely generated $R$-oriented module is free, so $R$ is an OP-ring by \Cref{thm:general}. \end{proof} \begin{example} Let $A$ be a PID with quotient field $F$ and $K/F$ be a finite field extension. An $A$-order $R$ in $K$ is an $A$-subalgebra of $K$ which is finitely generated as an $A$-module and such that $F\tensor_AR=K$. In \cite[Theorem 3.4]{Clark18}, Clark showed that, writing $N:=[K:F]$, there exists an $A$-order $R$ in $K$ admitting a maximal ideal $\fm\subset R$ whose minimal number of generators is $N$. If $N\ge 3$, then by \Cref{thm:characterisation} (6), $R$ is not an OP ring. In particular, if $K/\Q$ is a number field of degree at least $3$, then $K$ admits a $\Z$-order which is not an OP ring. Note that if $R$ is an $A$-order in a quadratic extension $K/F$, then every ideal of $R$ is a free $A$-module of rank $2$, so every maximal ideal of $R$ is generated by at most $2$ elements. Hence for any maximal ideal $\fm\subset R$, $\fm R_\fm$ is generated by at most $2$ elements, so by \Cref{thm:local}, $R$ is locally OP.
Moreover, $R$ is also a dimension $1$ domain, so by \Cref{cor:lowdim} (3) (using condition (a), as $R$ is reduced), $R$ is an OP ring. \end{example} From here on, we will work with the case where the ring $R$ is a $k$-algebra for some field $k$. As we will see later, this will allow us to utilise the geometry of $\Spec(R)$ as a $k$-variety. The next theorem gives examples of $k$-algebras $R$ where every stably-free projective module is free. \begin{theorem}\label{thm:stablyfreeisfree} Let $k$ be a ring, and $R$ be a finitely generated $2$-dimensional $k$-algebra. Then every finitely generated stably-free projective $R$-module is free if any of the following is satisfied: \begin{enumerate} \item $k$ is an algebraically closed field \cite[Theorem 1]{MS76}. \item $k$ is an infinite perfect field with $\ch k\ne 2$ and the cohomological dimension of $k$ is at most $1$ \cite[Remark 4.2]{Bhatwadekar03}. \item $k$ is a real closed field and all $k$-points on $\Spec(R)$ lie on a closed subscheme of dimension $\le 1$ \cite[Theorem 3.1]{MS76}. \item $k=\Z$ or $k=\F_q$ for any prime power $q$ \cite[Corollary 2.5]{KMR88}. \end{enumerate} \end{theorem} \begin{corollary}\label{cor:generatedby2} Let $k$ be a ring and $R$ be a Noetherian reduced finitely generated $2$-dimensional $k$-algebra satisfying one of the conditions (1)-(4) in \Cref{thm:stablyfreeisfree}. Then every trace $0$ matrix in $\Mat_2(R)$ is a commutator if and only if every maximal ideal of $R$ can be generated by two elements. \end{corollary} \begin{proof} Follows from applying \Cref{thm:stablyfreeisfree} to the equivalence of (1) and (6) in \Cref{thm:characterisation}. \end{proof} We now state an application of \Cref{thm:stablyfreeisfree} (1) which will be used in \Cref{ex:hypersurface} and \Cref{ex:godeaux}. Recall that for a variety $X/k$, the Chow group $A_0(X)$ is the group of zero cycles of degree $0$ modulo rational equivalence.
For a projective variety $X/k$, we have the \emph{Albanese map} \[ AJ_X \maps{A_0(X)}{\Alb_{X/k}(k)}, \] where $\Alb_{X/k}$ is the Albanese variety of $X/k$ (see \cite[Section 3]{ss03} for more details about the Albanese map). We denote the kernel by $SA_0(X) := \ker AJ_X$; it is also denoted by $T(X)$ or $F^2CH_0(X)$ in parts of the literature. \begin{theorem}\label{cor:sa} Let $k$ be an algebraically closed field and suppose that $R$ is a $2$-dimensional regular domain that is a finitely generated $k$-algebra. If $\Spec(R)$ is an open affine subvariety of a regular projective surface $X/k$ with finite $SA_0(X)$, then $R$ is an OP-ring, and every trace $0$ matrix in $\Mat_2(R)$ is a commutator. \end{theorem} \begin{proof} By \Cref{cor:regularlocalop}, $R$ is a locally OP ring, and by \cite[Theorem 3]{MS76}, $SA_0(X)$ finite implies $SK_0(R)=0$. Every stably-free projective $R$-module is free by \Cref{thm:stablyfreeisfree} (1), so $R$ satisfies \Cref{thm:characterisation} (5). Hence $R$ is an OP-ring and every trace $0$ matrix in $\Mat_2(R)$ is a commutator. \end{proof} When $k=\widebar{\F}_p$, we have the following theorem. \begin{theorem}\label{cor:positivecharop} Let $R$ be a locally OP, dimension $2$, finitely generated $\widebar{\F}_p$-algebra that satisfies one of the conditions (a), (b), or (c) in \Cref{thm:general}. Then $R$ is an OP-ring and every trace $0$ matrix in $\Mat_2(R)$ is a commutator. \end{theorem} \begin{proof} By \Cref{thm:general}, we only need to show that every finitely generated oriented $R$-module is free. Let $P$ be an oriented module of rank $n$. If $n > 2$, then by the Bass cancellation theorem \cite[Theorem I.2.3]{Weibel13}, $P\iso Q\oplus R^{n-2}$ where $Q$ is a projective module of rank $2$. By \cite[Theorem 6.4.1]{KS07}, any projective $R$-module of rank $2$ has a non-zero free direct summand, so $Q\iso Q'\oplus R$ for some finitely generated projective $R$-module $Q'$.
Now \[R = {\textstyle\bigwedge^nP= \bigwedge^n}\left(Q'\oplus R^{n-1}\right)= \bigoplus_{k=0}^n {\textstyle\bigwedge^k}Q' \tensor {\textstyle\bigwedge^{n-k}}R^{n-1} = Q'\tensor R = Q'.\] Hence $P$ is free. \end{proof} \fpalgebra \begin{proof} $R$ is locally OP by \Cref{cor:regularlocalop}, and $R$ is regular so it is reduced. Hence $R$ is an OP ring by \Cref{cor:positivecharop}, and so every trace $0$ matrix in $\Mat_2(R)$ is a commutator. \end{proof} For an algebra over a characteristic $0$ field, we have the following result for a graded $\widebar{\Q}$-algebra. \begin{theorem}\label{cor:gradedopring} Let $R=\oplus_{n\ge 0}R_n$ be a $2$-dimensional regular graded domain that is an associative algebra over $R_0=\widebar{\Q}$. Then $R$ is an OP ring, and every trace $0$ matrix in $\Mat_2(R)$ is a commutator. \end{theorem} \begin{proof} Since $R$ is a regular $2$-dimensional ring, $R$ is locally OP by \Cref{cor:regularlocalop}. By \cite[Theorem 1.2]{KS02}, every finitely generated projective $R$-module is free, so every $R$-oriented module is free. Hence $R$ is an OP ring by \Cref{thm:general}, and every trace $0$ matrix in $\Mat_2(R)$ is a commutator. \end{proof} Recall the following conjecture (see \cite[Page 267 and Theorem 6.2.1]{KS07} for a discussion about the conjecture). \begin{conjecture}[Bloch--Beilinson Conjecture]\label{bbconjecture} If $X/\widebar{\Q}$ is an irreducible regular projective surface, then $SA_0(X)=0$. \end{conjecture} \begin{theorem}\label{thm:bbopring} Assume the Bloch--Beilinson conjecture. Then every finitely generated dimension $2$ regular $\widebar{\Q}$-algebra $R$ is an OP-ring. In particular, every trace $0$ matrix in $\Mat_2(R)$ is a commutator. \end{theorem} \begin{proof} For any $R$ as in the theorem, there exists an irreducible regular projective surface $X/\widebar{\Q}$ birational to $\Spec(R)$. By \Cref{bbconjecture}, $SA_0(X)=0$, so by \cite[Theorem 3]{MS76}, $SK_0(R)=0$. 
By \Cref{thm:stablyfreeisfree} (1), every stably-free projective module is free. Finally, $R$ is locally OP since $R$ is regular of dimension $2$. Hence $R$ satisfies \Cref{thm:characterisation} (5), so $R$ is an OP-ring, and every trace $0$ matrix in $\Mat_2(R)$ is a commutator. \end{proof} In contrast to the case of $\widebar{\Q}$-algebras, the following theorem gives examples of dimension $2$ regular $\C$-algebras which are not OP-rings. \begin{theorem}\label{cor:infchownotop} Let $X/\C$ be an irreducible regular proper surface, with $H^0(X, \Omega^2_{X/\C})\ne 0$, and let $\Spec(R)$ be a non-empty open affine subvariety of $X$. Then $R$ is not an OP ring and there is a trace $0$ non-commutator in $\Mat_2(R)$. \end{theorem} \begin{proof} By \cite[Corollary 1]{KS10}, $A_0(\Spec R)\ne 0$. Since $\Spec(R)$ is a regular affine surface, $A_0(\Spec R)=SK_0(R)$ (see \cite[Theorem 4.2 (d)]{MS76}) and so it does not satisfy \Cref{thm:characterisation} (5). Hence $R$ is not an OP ring and there exists a trace $0$ non-commutator in $\Mat_2(R)$. \end{proof} \begin{example}\label{ex:hypersurface} For any $d\ge 1$, let $R_d:=\C[x,y,z]/(x^d+y^d+z^d-1)$. Then $\Spec(R_d)$ is an open subvariety of $X_d:= \Proj(\C[x_0,x_1,x_2,x_3]/(x_0^d+x_1^d+x_2^d+x_3^d))$. If $d=1,2,3$, $X_d$ is rational \cite[Example II.8.20.3]{hartshorne}, and so $A_0(X_d)=0$ \cite[Prop. 7.1]{bloch10}. Hence $SA_0(X_d)=0$ and by \Cref{cor:sa}, $R_d$ is an OP-ring and every trace $0$ matrix in $\Mat_2(R_d)$ is a commutator. If $d\ge 4$, then we have $\Omega^2_{X_d/\C}=\cO_{X_d}(d-4)$ (see \cite[Example II.8.20.3]{hartshorne}), so $H^0(X_d, \Omega^2_{X_d/\C}) = H^0(X_d,\cO_{X_d}(d-4))\ne 0$. Hence by \Cref{cor:infchownotop}, there is a trace $0$ non-commutator in $\Mat_2(R_d)$. Note that if the Bloch--Beilinson conjecture is true, then $\widebar{\Q}[x,y,z]/(x^d+y^d+z^d-1)$ is an OP-ring by \Cref{thm:bbopring} even if $d\ge 4$. 
\end{example} Recall the following conjecture of Bloch (see Conjecture 1.8 and Proposition 1.11 in \cite{bloch10} for details about the conjecture). \begin{conj}[Bloch Conjecture]\label{conj:bloch} Let $X/\C$ be a regular projective surface. If $H^0(X,\Omega^2_{X/\C})=0$ then $SA_0(X)=0$. \end{conj} Bloch's conjecture has been verified in many cases, including for any surface that is not of general type \cite[Proposition 4]{BKL76} and for some surfaces of general type, see e.g. \cite{bcgp12,IM79,Barlow85rational,Voisin14,Bauer14,BF15,PW16}. Thus in these cases, we can apply \Cref{cor:sa} to obtain examples of OP-rings. For example, the following is an OP-ring coming from a surface of general type called a Godeaux surface. \begin{example}\label{ex:godeaux} Let $Y$ be the quintic complex surface in $\P^3_\C$ defined by the equation $x_0^5+x_1^5+x_2^5+x_3^5=0$. Let \[\sigma(x_0:x_1:x_2:x_3) = (x_0: \zeta_5x_1:\zeta_5^2x_2:\zeta_5^3x_3)\] be an automorphism of $Y$, where $\zeta_5=e^{2\pi i/5}$. The quotient surface $X:=Y/\langle\sigma\rangle$ is called a Godeaux surface, with $A_0(X)=0$ (see \cite[Theorem 1]{IM79}). So for an open affine subvariety $\Spec(R)$ of $X$, we have that $R$ is an OP-ring by \Cref{cor:sa}. For example, consider \[S:= \C[x,y,z]/(x^5+y^5+z^5+1),\] where $x:=x_1/x_0$, $y:=x_2/x_0$ and $z:=x_3/x_0$, so that $\Spec(S)$ is an open subscheme of $Y$. Then $\sigma$ acts on $\Spec(S)$ as well by $\sigma(f(x,y,z))= f(\zeta_5 x, \zeta_5^2 y, \zeta_5^3 z)$, so $\Spec(S^{\langle \sigma\rangle}) \iso \Spec(S)/\langle\sigma\rangle$ is an open affine subvariety of $X$. Hence for \[R:=\left(\C[x,y,z]/(x^5+y^5+z^5+1) \right)^{\langle\sigma\rangle}, \] every trace $0$ matrix in $\Mat_2(R)$ is a commutator. \end{example} We conclude this section with the case when $\Spec(R)$ is an affine quadric hypersurface in $\A^3_k$. We do not assume that $k$ is algebraically closed, and use the theory of quadratic forms to determine whether $R$ is an OP ring. 
Given $k$ a field with $\ch k\ne 2$ and a homogeneous degree $2$ polynomial $q(x,y,z)\in k[x,y,z]$, define \[R(k,q):=k[x,y,z]/(q-1).\] Recall that $q$ can be written as $q=(x \; y\; z)Q(x\; y\; z)^t$ where $Q\in\Mat_3(k)$ is a symmetric matrix. The \emph{discriminant} of $q$ is $\Delta(q):=\det(Q)$ and $q$ is called \emph{non-degenerate} if $\Delta(q)\ne 0$. If $q$ is non-degenerate, then $R(k,q)$ is a regular $2$-dimensional algebra. Finally, recall that $q$ is \emph{isotropic over $k$} if there exist $x,y,z\in k$ not all $0$ such that $q(x,y,z)=0$. \begin{theorem}\label{cor:quadratic1} If $q$ is isotropic or $\sqrt{-\Delta(q)}\in k$, then $R(k,q)$ is an OP-ring. Hence every trace $0$ matrix in $\Mat_2(R(k,q))$ is a commutator. \end{theorem} \begin{proof} Let $R:=R(k,q)$. Since $R$ is a regular dimension $2$ ring, it is a locally OP ring by \Cref{cor:regularlocalop}. Now suppose $P$ is an $R$-oriented projective module with $\bigwedge^n P =R$. Then by \cite[Theorem 16.1]{swan87}, $P=R^{n-1}\oplus Q$ with $\rk Q=1$. So \[R={\textstyle\bigwedge^n P = \bigwedge^n}(R^{n-1}\oplus Q) = R\tensor Q = Q.\] Hence $P$ is a free module, and by \Cref{thm:characterisation}, $R$ is an OP ring and every trace $0$ matrix in $\Mat_2(R)$ is a commutator. \end{proof} We also have a partial converse of \Cref{cor:quadratic1}. \begin{theorem}\label{cor:quadratic2} If $q$ is anisotropic, $\sqrt{-\Delta(q)}\notin k$ and $q$ represents 1, then $R(k,q)$ is not an OP-ring and there is a trace $0$ matrix in $\Mat_2(R(k,q))$ that is not a commutator. \end{theorem} \begin{proof} Let $R:=R(k,q)$. Since \[SK_0(R) = \ker(\tilde{K}_0(R)\to \Pic(R)),\] $SK_0(R)=\Z/2\Z$ by \cite[Theorem 9.2 (b), Lemma 11.5]{swan87}. Hence by \Cref{thm:characterisation}, $R$ is not an OP ring and there is a trace $0$ matrix in $\Mat_2(R)$ that is not a commutator. 
\end{proof} \begin{remark} Over $R:=\R[x,y,z]/(x^2+y^2+z^2-1)$, there is a well-known example of a $2\times 2$ non-commutator $A:=\begin{pmatrix} x & y \\ z & -x \end{pmatrix}\in \Mat_2(R)$ (see \cite[Section 3]{RR00} for the proof). By taking $q:=x^2+y^2+z^2$ as the quadratic form, we see that $R=R(\R, q)$. Since $\sqrt{-\Delta(q)}=\sqrt{-1}\notin \R$, \Cref{cor:quadratic2} implies that there is a non-commutator in $\Mat_2(R)$. However we cannot conclude from \Cref{cor:quadratic2} that this particular matrix $A$ is a non-commutator. Note that for $R\tensor_\R\C = R(\C, q)= \C[x,y,z]/(x^2+y^2+z^2-1)$, \Cref{cor:quadratic1} applies since $\sqrt{-\Delta(q)}=\sqrt{-1}\in \C$, so $A$ is a commutator in $\Mat_2(R(\C,q))$. For example, we can write $A$ as \[ A = \left[ \begin{pmatrix} 1+ix(ix-y) & -xz \\ x(ix-y) & 0 \end{pmatrix}, \begin{pmatrix} -iz & ix+y\\ -z & 0 \end{pmatrix} \right]. \] \end{remark} \section*{Acknowledgements} The author would like to thank his advisor, Dino Lorenzini, for posing the problem and providing detailed feedback on the manuscript. This work was completed as a part of the author's doctoral dissertation at University of Georgia. \printbibliography \end{document}
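As a numerical sanity check of the commutator identity displayed in the final remark (this check is ours, not part of the paper), one can evaluate both sides at any point of $\C^3$ satisfying $x^2+y^2+z^2=1$, since the identity holds modulo that relation. The Python snippet below does so at one such point; the chosen coordinates are arbitrary.

```python
import numpy as np

# Numerical spot-check (not part of the paper) of the displayed
# identity A = [B, C] in Mat_2(C[x,y,z]/(x^2+y^2+z^2-1)): evaluate
# both sides at a point of C^3 with x^2 + y^2 + z^2 = 1.
x, y = 0.3, 0.4
z = np.sqrt(1.0 - x**2 - y**2)   # so x^2 + y^2 + z^2 = 1
i = 1j

A = np.array([[x, y], [z, -x]])
B = np.array([[1 + i*x*(i*x - y), -x*z],
              [x*(i*x - y), 0.0]])
C = np.array([[-i*z, i*x + y],
              [-z, 0.0]])

comm = B @ C - C @ B
print(np.allclose(comm, A))  # the identity holds on the quadric
```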
https://arxiv.org/abs/2111.04884
On Trace Zero Matrices and Commutators
Given any commutative ring $R$, a commutator of two $n\times n$ matrices over $R$ has trace $0$. In this paper, we study the converse: whether every $n\times n$ trace $0$ matrix is a commutator. We show that if $R$ is a Bézout domain with algebraically closed quotient field, then every $n\times n$ trace $0$ matrix is a commutator. We also show that if $R$ is a regular ring with large enough Krull dimension relative to $n$, then there exists an $n\times n$ trace $0$ matrix that is not a commutator. This improves on a result of Lissner by increasing the size of the matrix allowed for a fixed $R$. We also give an example of a Noetherian dimension $1$ commutative domain $R$ that admits an $n\times n$ trace $0$ non-commutator for any $n\ge 2$.
https://arxiv.org/abs/1909.05060
Incremental proximal gradient scheme with penalization for constrained composite convex optimization problems
We consider the problem of minimizing a finite sum of convex functions subject to the set of minimizers of a convex differentiable function. In order to solve the problem, an algorithm combining the incremental proximal gradient method with a smooth penalization technique is proposed. We show the convergence of the generated sequence of iterates to an optimal solution of the optimization problem, provided that a condition expressed via the Fenchel conjugate of the constraint function is fulfilled. Finally, the functionality of the method is illustrated by some numerical experiments addressing image inpainting problems and generalized Heron problems with least squares constraints.
\section{Introduction} Let $F_i:\R^n\to\R$ be a function of the form $$F_i(x):=f_i(x)+h_i(x)$$ for all $i=1,\ldots,m$, where $f_i:\R^n\to\R$ is a convex function and $h_i:\R^n\to\mathbb{R}$ is a convex differentiable function such that $\nabla h_i$ is $L_i$-Lipschitz continuous. Let $g:\R^n\to\R$ be a convex differentiable function such that $\nabla g$ is $L_g$-Lipschitz continuous. In this work, we focus on the problem \begin{eqnarray}\label{MP}% \begin{array}{ll} \textrm{minimize}\indent \sum_{i=1}^m F_i(x)\\ \textrm{subject to}\indent x\in \arg\min g. \end{array}% \end{eqnarray} Let $\mathcal{S}$ denote the solution set of this problem and assume that $\mathcal{S}$ is nonempty. In addition, we may assume without loss of generality that $\min g=0$. It is well known that the minimization of a sum of composite functions has many applications in classification and regression models in machine learning. In these applications, a key feature is to deal with a very large number of component (typically, convex and Lipschitz continuous) loss functions, for which the evaluation of the proximal operators and/or gradients of the whole objective function is very costly or even impossible; see \cite{BCN16,BL03,SLB17}. Apart from the aforementioned classification and regression problems, problems with additive structure also arise in sensor, wireless and peer-to-peer networks, in which there is no central node that facilitates computation and communication. Moreover, allocating all the cost components $F_i$ at one node is sometimes not possible due to limited memory, limited computational power, or the privacy of information. For further discussion concerning sensor networks, see \cite{RN05, BHG08}. One promising class of algorithms for this kind of problem structure is the so-called incremental-type methods. 
Its key idea is to take steps sequentially along the proximal operators and/or gradients of the component functions $F_i$ and to update the current iterate after processing each $F_i$. To be precise, let us recall the classical incremental gradient method (IGM) for solving the minimization problem, that is, \begin{eqnarray}\label{MP-smooth}% \begin{array}{ll} \textrm{minimize}\indent \sum_{i=1}^m f_i(x)\\ \textrm{subject to}\indent x\in X, \end{array}% \end{eqnarray} where $f_i:\R^n\to\mathbb{R}$ is a convex differentiable function, for all $i=1,\ldots,m$, and $X\subset\R^n$ is a nonempty closed convex set. The method is given as follows: if $x_k$ is the vector obtained after $k$ cycles, then the vector $x_{k+1}$ is obtained by setting $\varphi_{1,k}:=x_k$, computing \begin{eqnarray}\label{IGM}\varphi_{i+1,k}:=\varphi_{i,k}-\alpha_k\nabla f_i(\varphi_{i,k}),\indent i=1,\ldots,m, \end{eqnarray} and finally generating $x_{k+1}$ after one cycle of $m$ steps as $$x_{k+1}:=\mathrm{proj}_{X}(\varphi_{m+1,k}),$$ where $\alpha_k$ is a positive scalar parameter, and $\mathrm{proj}_{X}$ is the projection operator onto $X$. The advantage of IGM compared with the classical gradient descent method, which has been proved analytically and even observed experimentally, is that it can attain better asymptotic convergence to a solution of (\ref{MP-smooth}); see \cite{B12} for more details on this topic. Apart from gradient-based methods, there are many situations in which the objective functions may not be smooth enough to apply IGM; in this case, we can consider the so-called incremental proximal method instead. Consider the (nonsmooth) minimization problem (\ref{MP-smooth}) where the component function $f_i:\R^n\to\R$ is convex for all $i=1,\ldots,m$. 
The incremental proximal method, which was initially proposed by Bertsekas \cite{B11}, is given as follows: if $x_k$ is the vector obtained after $k$ cycles, then the vector $x_{k+1}$ is updated in a similar fashion to IGM, except that the gradient step (\ref{IGM}) is replaced by the proximal step $$\varphi_{i+1,k}:=\prox_{\alpha_k f_i}\left(\varphi_{i,k}\right),\indent i=1,\ldots,m.$$ For further discussion of convergence results, see \cite{B11,B12,B15}. On the other hand, the constrained problem (\ref{MP-smooth}) can be reformulated into the form (\ref{MP}) via a penalty function associated with the constraint, chosen so that the set of all minimizers of the constructed penalty function is exactly the constraint set. Attouch and Czarnecki \cite{AC10} initially investigated a qualitative analysis of the optimal solutions of (\ref{MP}) from the perspective of a penalty-based dynamical system. This starting point has stimulated great interest in the research community in designing and developing numerical algorithms for solving the minimization problem (\ref{MP}); see \cite{AC10, ACP11,ACP11-2,BB15,BC14,BCN17,BCN17-2,BN18,NP13,NP18, NP18-2,P12} for more insights into this research topic. It is worth noting that the common key feature of these iterative methods is the penalization strategy: if the function $g$ is smooth, then the penalization term is evaluated via its gradient \cite{NP13,NP18,P12}. Motivated by all the results mentioned above, we propose an iterative scheme, which combines the incremental proximal gradient method with a penalization strategy, for solving the constrained minimization problem (\ref{MP}). For the convergence analysis, we show that the generated sequence converges to an optimal solution of (\ref{MP}) by using the quasi-Fej\'er monotonicity technique. 
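To make the incremental steps above concrete, here is a minimal Python sketch of one incremental proximal gradient cycle on a toy one-dimensional problem with composite terms $F_i(x)=|x-a_i|+\frac{c}{2}x^2$: each inner step takes a gradient step on the smooth part $h_i$ followed by a proximal step on the nonsmooth part $f_i$. The data $a_i$, the weight $c$, and the step-size rule $\alpha_k=1/(k+1)$ are illustrative choices, not taken from the paper.

```python
# Minimal sketch (illustrative, not the paper's experiments): cycles of an
# incremental proximal gradient method on the toy 1-D problem
#   minimize  sum_i F_i(x),  F_i(x) = |x - a_i| + (c/2) x^2,
# with f_i(x) = |x - a_i| handled by its proximal operator and
# h_i(x) = (c/2) x^2 handled by a gradient step.

def prox_abs(v, center, alpha):
    """Proximal operator of alpha * |. - center| (soft-thresholding)."""
    d = v - center
    if d > alpha:
        return v - alpha
    if d < -alpha:
        return v + alpha
    return center

def incremental_prox_grad(a, c, cycles):
    x = 0.0
    for k in range(1, cycles + 1):
        alpha = 1.0 / (k + 1)           # satisfies (H2): sum = inf, sum of squares < inf
        v = x
        for ai in a:
            v -= alpha * c * v          # gradient step on h_i
            v = prox_abs(v, ai, alpha)  # proximal step on f_i
        x = v                           # x_{k+1} after one cycle of m steps
    return x

if __name__ == "__main__":
    a = [1.0, 2.0, 3.0, 4.0]
    x = incremental_prox_grad(a, c=0.1, cycles=3000)
    # For this data the minimizer of sum_i F_i is x* = 2 (a kink point).
    print(round(x, 3))
```

With diminishing step sizes the iterates settle into a shrinking neighborhood of the minimizer, mirroring the behavior the convergence analysis formalizes.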
To illustrate the theoretical results, we also present some numerical experiments addressing the image reconstruction problem and the generalized Heron location problem. In the remainder of this section we recall some necessary tools of convex analysis. The reader may consult \cite{BC11,BV10,Z02} for further details. For a convex function $f:\R^n\rightarrow\R$ we let $f^*$ denote the (Fenchel) conjugate function of $f$, that is, the function $f^*:\R^n\rightarrow (-\infty,+\infty]$ such that $$f^*(u):=\sup_{x\in \R^n}\{\langle u,x\rangle-f(x)\}$$ for all $u\in \R^n$. The subdifferential of $f$ at $x\in \R^n$ is the set $$\partial f(x):=\{v\in \R^n:f(y)\geq f(x)+\langle v,y-x\rangle \ \forall y\in \R^n\}.$$ We also let $\min f := \inf_{x \in \R^n} f(x)$ denote the optimal objective value of the function $f$ and let $\arg\min f :=\{x \in \R^n: f(x) = \min f \}$ denote its set of global minima. For $r>0$ and $x\in \R^n$, we let $\prox_{rf}(x)$ denote the proximal point of parameter $r$ of $f$ at $x$, which is the unique optimal solution of the (strongly convex) optimization problem $$\min_{u\in \R^n}f(u)+\frac{1}{2r}\|u-x\|^2.$$ Note that $\prox_{rf}=(I+r\partial f)^{-1}$ and it is a single-valued operator. Let $X\subseteq \R^n$ be a nonempty set. The indicator function of $X$ is the function $\delta_X:\R^n\rightarrow (-\infty,+\infty]$ which takes the value $0$ on $X$ and $+\infty$ otherwise. The subdifferential of the indicator function is the normal cone of $X$, that is, $$N_X(x)=\{u\in \R^n:\langle u,y-x\rangle\leq 0 \ \forall y\in X\}$$ if $x\in X$ and $N_X(x)=\emptyset$ for $x\notin X$. 
For all $x\in X$, it holds that $u\in N_X(x)$ if and only if $\sigma_X(u)=\langle u,x\rangle$, where $\sigma_X : \R^n \rightarrow (-\infty,+\infty]$ is the support function of $X$ defined by $$ \sigma_X(u)=\sup_{y\in X}\langle y,u\rangle.$$ Moreover, we let $\ran(N_X)$ denote the range of the normal cone $N_X$, that is, we have $p \in \ran(N_X)$ if and only if there exists $x \in X$ such that $p \in N_X(x)$. \section{Algorithm and Convergence Result} In this section, we consider the convergence analysis of the incremental proximal gradient method with a smooth penalty term for solving (\ref{MP}). Firstly, we propose our main algorithm, shown in Algorithm \ref{algorithm-ergodic-smooth}. \vspace{0.3cm} \begin{algorithm}[H]\label{algorithm-ergodic-smooth} \SetAlgoLined \textbf{Initialization}: The positive sequences $(\alpha_k)_{k\geq1}$, $(\beta_k)_{k\geq1}$, and an arbitrary $x_1\in \R^n$. \\ \textbf{Iterative Step}: For a given current iterate $x_{k}\in \R^n$ ($k\geq 1$), set $$\varphi_{1,k}:=x_{k}-\alpha_k\beta_k\nabla g(x_k),$$ and define $$\varphi_{i+1,k}:=\prox_{\alpha_k f_i}\left(\varphi_{i,k}-\alpha_k\nabla h_i(\varphi_{i,k})\right),\hspace{1cm} i=1,\ldots,m,$$ and $$x_{k+1}=\varphi_{m+1,k}.$$ \caption{IPGM with penalty term} \end{algorithm} \begin{remark}Algorithm \ref{algorithm-ergodic-smooth} is different from \cite[Algorithm 3.1]{NP18-2}. In fact, in \cite{NP18-2}, the authors considered the problem (\ref{MP}) with $h=\sum_{i=1}^mh_i$ and evaluated the gradient $\nabla h(x_k)$ at each iteration $k$. However, the iterative scheme proposed here allows us to evaluate the gradient $\nabla h_i(\varphi_{i,k})$ at each sub-iteration $i$. Moreover, it is worth noting that Algorithm \ref{algorithm-ergodic-smooth} is well suited to decentralized settings, which appear in many situations, for instance, decentralized network systems or support vector machine learning problems; see \cite{NO09,RNV12}. 
\end{remark} \vspace{0.3cm} For the convergence result, the following hypotheses are assumed throughout this work: \begin{equation*}\label{H} \left\{ \begin{array}{ll} \text{(H1) The subdifferential sum formula } \partial\left(\sum_{i=1}^m f_i+\delta_{\arg\min g}\right)=\sum_{i=1}^m\partial f_i+N_{\arg\min g} \text{ holds};\\ \text{(H2) The sequence } (\alpha_k)_{k\geq1} \text{ satisfies } \sum_{k=1}^{\infty}\alpha_k=+\infty \text{ and } \sum_{k=1}^{\infty}\alpha_k^2<+\infty;\\ \text{(H3) } 0<\liminf_{k\to+\infty}\alpha_k\beta_k\leq\limsup_{k\to+\infty}\alpha_k\beta_k<\frac{2}{L_g}; \\ \text{(H4) For all } p\in\mathrm{ran}(N_{\arg\min g}), \sum_{k=1}^\infty\alpha_k\beta_k\left[g^*\left(\frac{p}{\beta_k}\right)-\sigma_{\arg\min g}\left(\frac{p}{\beta_k}\right)\right]<+\infty. \end{array} \right. \end{equation*}% Some remarks relating to our assumptions are as follows. \begin{remark}\label{key-remark} \begin{itemize} \item[(i)] For conditions which guarantee the exact subdifferential sum formula in (H1), the reader may consult the book of Bauschke and Combettes \cite{BC11}. \item[(ii)] Note that the hypothesis (H3) is a relaxation of Assumption 4.1 (S3) in \cite{NP18-2}. In fact, the limit superior in \cite{NP18-2} is bounded above by $\frac{1}{L_g}$, but in this work it can be extended to $\frac{2}{L_g}$. This allows us to consider larger parameters $(\alpha_k)_{k\geq1}$ and $(\beta_k)_{k\geq1}$. An example of sequences $(\alpha_k)_{k\geq1}$ and $(\beta_k)_{k\geq1}$ satisfying the conditions (H2) and (H3) is given by $\alpha_k\sim\frac{1}{k}$ and $\beta_k\sim bk$ for every $k\geq1$, where $0<b<\frac{2}{L_g}$. \item[(iii)] The condition (H4) originated with Attouch and Czarnecki \cite{AC10}. 
For example, for a function $g:\R^n\to\R$ with $\min g=0$, it holds that $g\leq \delta_{\arg\min g}$ and so $g^*\geq (\delta_{\arg\min g})^*=\sigma_{\arg\min g}$, which yields $$g^*-\sigma_{\arg\min g}\geq 0.$$ Note that if the function $g$ satisfies $$g \geq \frac{a}{2}\mathrm{dist}^2(\cdot,\arg\min g),$$ where $a>0$, then we have $g^*(x)-\sigma_{\arg\min g}(x)\leq\frac{1}{2a}\|x\|^2$ for all $x \in \R^n$. Thus, for every $k \geq 1$, and all $p\in\mathrm{ran}(N_{\arg\min g})$, we have $$\alpha_k\beta_k\left[g^*\left(\frac{p}{\beta_k}\right)-\sigma_{\arg\min g}\left(\frac{p}{\beta_k}\right)\right]\leq \frac{\alpha_k}{2a\beta_k}\|p\|^2.$$ Note that if $\sum_{k=1}^\infty\frac{1}{\beta_k^2}<+\infty$, then it follows that $$\sum_{k=1}^\infty\alpha_k\beta_k\left[g^*\left(\frac{p}{\beta_k}\right)-\sigma_{\arg\min g}\left(\frac{p}{\beta_k}\right)\right]<+\infty.$$ This inequality also holds for sequences satisfying the hypotheses (H2) and (H3). \end{itemize} \end{remark} \vspace{0.3cm} The following theorem describes the convergence of the iterates. \vspace{0.3cm} \begin{theorem}\label{thm-main-2} The sequence $(x_k)_{k\geq1}$ converges to a point in $\mathcal{S}$. \end{theorem} \vspace{0.3cm} In order to prove Theorem \ref{thm-main-2}, we need to recall the concept of quasi-Fej\'{e}r monotonicity. Let $C$ be a nonempty subset of $\R^n$. We say that a sequence $(x_k)_{k\geq1}\subset \R^n$ is \textit{quasi-Fej\'{e}r monotone} relative to $C$ if for each $c\in C$, there exist a sequence $(\delta_k)_{k\geq1}\subset [0,+\infty)$ with $\sum_{k=1}^{\infty}\delta_k<+\infty$ and $k_0\in\N$ such that $$\|x_{k+1}-c\|^2\leq\|x_k-c\|^2+\delta_k,\indent\forall k\geq k_0.$$ The following proposition provides an essential property of quasi-Fej\'{e}r monotone sequences; see Combettes \cite{C01} for further information. \begin{proposition}\label{fejer} \cite[Theorem 3.11]{C01} Let $(x_k)_{k\geq1}$ be a quasi-Fej\'{e}r monotone sequence relative to a nonempty subset $C\subset \R^n$. 
If at least one sequential cluster point of $(x_k)_{k\geq1}$ lies in $C$, then $(x_k)_{k\geq1}$ converges to a point in $C$. \end{proposition} The following additional key tool is known as the Silverman-Toeplitz theorem \cite{K04}. \begin{proposition}\label{Silverman-Toeplitz} Let $(\alpha_k)_{k\geq1}$ be a positive real sequence with $\sum_{k=1}^{\infty}\alpha_k=+\infty$. If $(u_k)_{k\geq1}\subset\mathbb{R}^n$ is a sequence such that $\lim_{k\to+\infty}u_k=u\in\mathbb{R}^n$, then $\lim_{l\to+\infty}\frac{\sum_{k=1}^l\alpha_ku_k}{\sum_{k=1}^{l}\alpha_k}=u$. \end{proposition} We are in a position to prove Theorem \ref{thm-main-2}, which is our main theorem. \begin{proof} Let $u\in\mathcal{S}$ and $k\geq1$ be fixed. For each $i=1,\ldots,m$, we have from the subdifferential inequality of $f_i$ that \begin{eqnarray*} \<\varphi_{i,k}-\varphi_{i+1,k}-\alpha_k\nabla h_i(\varphi_{i,k}),u-\varphi_{i+1,k}\>\leq \alpha_k(f_i(u)-f_i(\varphi_{i,k})). \end{eqnarray*} Moreover, by the convexity of $h_i$, we have \begin{eqnarray}\label{thm-eqn1} &&2\<\varphi_{i,k}-\varphi_{i+1,k},u-\varphi_{i+1,k}\>\nonumber\\ &\leq& 2\alpha_k(f_i(u)-f_i(\varphi_{i,k}))+2\alpha_k\<\nabla h_i(\varphi_{i,k}),u-\varphi_{i+1,k}\>\nonumber\\ &=&2\alpha_k(f_i(u)-f_i(\varphi_{i,k}))+2\alpha_k\<\nabla h_i(\varphi_{i,k}),u-\varphi_{i,k}\>+2\alpha_k\<\nabla h_i(\varphi_{i,k}),\varphi_{i,k}-\varphi_{i+1,k}\>\nonumber\\ &\leq&2\alpha_k(f_i(u)-f_i(\varphi_{i,k}))+2\alpha_k(h_i(u)-h_i(\varphi_{i,k}))+2\alpha_k\<\nabla h_i(\varphi_{i,k}),\varphi_{i,k}-\varphi_{i+1,k}\>\nonumber\\ &\leq&2\alpha_k(F_i(u)-F_i(\varphi_{1,k}))+2\alpha_k(F_i(\varphi_{1,k})-F_i(\varphi_{i,k}))\nonumber\\ &&+\alpha_k^2\|\nabla h_i(\varphi_{i,k})\|^2+\|\varphi_{i,k}-\varphi_{i+1,k}\|^2\nonumber\\ &\leq&2\alpha_k(F_i(u)-F_i(\varphi_{1,k}))+2\alpha_k(F_i(\varphi_{1,k})-F_i(\varphi_{i,k}))\nonumber\\ &&+2\alpha_k^2\|\nabla h_i(\varphi_{i,k})-\nabla h_i(u)\|^2+2\alpha_k^2\|\nabla h_i(u)\|^2+\|\varphi_{i,k}-\varphi_{i+1,k}\|^2. 
\end{eqnarray} Note that \begin{eqnarray*}\|\varphi_{i+1,k}-u\|^2-\|\varphi_{i,k}-u\|^2+\|\varphi_{i+1,k}-\varphi_{i,k}\|^2=2\<\varphi_{i,k}-\varphi_{i+1,k},u-\varphi_{i+1,k}\>. \end{eqnarray*} Combining this equality with (\ref{thm-eqn1}), we obtain \begin{eqnarray*}\|\varphi_{i+1,k}-u\|^2-\|\varphi_{i,k}-u\|^2 &\leq&2\alpha_k(F_i(u)-F_i(\varphi_{1,k}))\\ &&+2\alpha_k(F_i(\varphi_{1,k})-F_i(\varphi_{i,k}))\\ &&+2\alpha_k^2\|\nabla h_i(\varphi_{i,k})-\nabla h_i(u)\|^2+2\alpha_k^2\|\nabla h_i(u)\|^2. \end{eqnarray*} Summing up this inequality for all $i=1,\ldots,m$, we have \begin{eqnarray*}\|\varphi_{m+1,k}-u\|^2-\|\varphi_{1,k}-u\|^2 &\leq&2\alpha_k(F(u)-F(\varphi_{1,k}))\\ &&+2\alpha_k\left(F(\varphi_{1,k})-\sum_{i=1}^mF_i(\varphi_{i,k})\right)\\ &&+2\alpha_k^2\sum_{i=1}^m\|\nabla h_i(\varphi_{i,k})-\nabla h_i(u)\|^2+2\alpha_k^2\sum_{i=1}^m\|\nabla h_i(u)\|^2. \end{eqnarray*} Since $(\varphi_{i,k})_{k\geq1}$ is bounded for all $i=1,\ldots,m$, there exists $M>0$ such that \begin{eqnarray}\label{thm-eqn-M}\max\left\{\sum_{i=1}^m\|\nabla h_i(\varphi_{i,k})-\nabla h_i(u)\|^2,\max_{1\leq i\leq m}\|\nabla h_i(\varphi_{i,k})\|\right\}\leq M, \end{eqnarray} and so \begin{eqnarray}\label{thm-eqn2}\|\varphi_{m+1,k}-u\|^2-\|\varphi_{1,k}-u\|^2 &\leq&2\alpha_k(F(u)-F(\varphi_{1,k}))\nonumber\\ &&+2\alpha_k\left(F(\varphi_{1,k})-\sum_{i=1}^mF_i(\varphi_{i,k})\right)\nonumber\\ &&+2\alpha_k^2M+2\alpha_k^2\sum_{i=1}^m\|\nabla h_i(u)\|^2. 
\end{eqnarray} On the other hand, since $\partial f_i$ maps a bounded subset into a bounded nonempty subset of $\R^n$ (see \cite[Proposition 16.20 (iii)]{BC11}), we have from the definition of $\partial f_i$ that for each $i=1,\ldots,m$, there exists $K_i>0$ such that \begin{eqnarray*}\|\varphi_{i,k}-\varphi_{i+1,k}-\alpha_k\nabla h_i(\varphi_{i,k})\|\leq\alpha_kK_i, \end{eqnarray*} and so \begin{eqnarray*}\|\varphi_{i,k}-\varphi_{i+1,k}\|\leq\alpha_kK_i+\alpha_k\|\nabla h_i(\varphi_{i,k})\|\leq\alpha_kK, \end{eqnarray*} where \begin{eqnarray*}\label{thm-eqn2-K}K:=M+\max_{1\leq i \leq m}K_i. \end{eqnarray*} Note that \begin{eqnarray*}\|\varphi_{i,k}-\varphi_{1,k}\|\leq\sum_{j=1}^{i-1}\|\varphi_{j,k}-\varphi_{j+1,k}\|\leq\alpha_kiK. \end{eqnarray*} Moreover, since $\bigcup_{i=1}^m\{\varphi_{i,k}:k\geq 1\}$ is bounded, by using \cite[Proposition 16.20 (ii)]{BC11} again, we know that the functions $f_i$ and $h_i$ are Lipschitz continuous on all bounded sets. For all $i=1,\ldots,m$, there exists the Lipschitz constant $c_i>0$ such that \begin{eqnarray*}F_i(\varphi_{1,k})-F_i(\varphi_{i,k})\leq c_i\|\varphi_{1,k}-\varphi_{i,k}\|\leq c\|\varphi_{1,k}-\varphi_{i,k}\|\leq\alpha_kicK, \end{eqnarray*} where $c:=\max_{1\leq i\leq m}c_i$. This yields \begin{eqnarray*}F(\varphi_{1,k})-\sum_{i=1}^mF_i(\varphi_{i,k})\leq \alpha_kcK\sum_{i=1}^mi=\alpha_kcK\frac{m(m+1)}{2}. \end{eqnarray*} Combining this relation with (\ref{thm-eqn2}), we obtain \begin{eqnarray*}\|\varphi_{m+1,k}-u\|^2-\|\varphi_{1,k}-u\|^2 &\leq&2\alpha_k(F(u)-F(\varphi_{1,k}))+\alpha_k^2cKm(m+1)\\ &&+2\alpha_k^2M+2\alpha_k^2\sum_{i=1}^m\|\nabla h_i(u)\|^2. \end{eqnarray*} On the other hand, by the definition of $\varphi_{1,k}$, we note that \begin{eqnarray}\label{lemma-eqn-12-4-1} \|\varphi_{1,k}-u\|^2&=&\|\varphi_{1,k}-x_k\|^2+\|x_k-u\|^2+2\<\varphi_{1,k}-x_k,x_k-u\>\nonumber\\ &=&\alpha_k^2\beta_k^2\|\nabla g(x_k)\|^2+\|x_k-u\|^2-2\alpha_k\beta_k\<\nabla g(x_k),x_k-u\>. 
\end{eqnarray} Thanks to the Baillon-Haddad theorem \cite[Corollary 18.16]{BC11}, we know that $\nabla g$ is $\frac{1}{L_g}$-cocoercive. By using $\nabla g(u)=0$, we have \begin{eqnarray*} 2\alpha_k\beta_k\<\nabla g(x_k),x_k-u\> &=&2\alpha_k\beta_k\<\nabla g(x_k)-\nabla g(u),x_k-u\>\nonumber\\ &\geq&\frac{2\alpha_k\beta_k}{L_g}\|\nabla g(x_k)-\nabla g(u)\|^2=\frac{2\alpha_k\beta_k}{L_g}\|\nabla g(x_k)\|^2, \end{eqnarray*} which implies that \begin{eqnarray}\label{lemma-eqn-12-5-1} -2\alpha_k\beta_k\<\nabla g(x_k),x_k-u\> &\leq&-\frac{2\alpha_k\beta_k}{L_g}\|\nabla g(x_k)\|^2. \end{eqnarray} Thus, the inequalities (\ref{lemma-eqn-12-4-1}) and (\ref{lemma-eqn-12-5-1}) imply that \begin{eqnarray*}\|\varphi_{1,k}-u\|^2\leq\|x_k-u\|^2+\alpha_k\beta_k\left(\alpha_k\beta_k-\frac{2}{L_g}\right)\|\nabla g(x_k)\|^2. \end{eqnarray*} Using the last two inequalities and the assumption (H3), it follows that there exists $k_0\in\N$ such that \begin{eqnarray*}\|x_{k+1}-u\|^2-\|x_k-u\|^2 &\leq&2\alpha_k(F(u)-F(\varphi_{1,k}))+\alpha_k^2cKm(m+1)\\ &&+2\alpha_k^2M+2\alpha_k^2\sum_{i=1}^m\|\nabla h_i(u)\|^2, \indent\forall k\geq k_0. \end{eqnarray*} Now, since $(x_k)_{k\geq1}$ is bounded, we let $z\in\R^n$ be its sequential cluster point and a subsequence $(x_{k_j})_{j\geq1}$ of $(x_k)_{k\geq1}$ be such that $x_{k_j}\to z$. By Lemma \ref{key-lemma4-2} (iii) and (iv) (see, Appendix A), we have $\varphi_{1,k_j}\to z$ and $z\in\argmin g$. Thus, for every $k_j\geq k_0$, we have \begin{eqnarray*} 2\alpha_{k_j}(F(\varphi_{1,k_j})-F(u)) &\leq&\|x_{k_j}-u\|^2-\|x_{{k_j}+1}-u\|^2\\ &&+\alpha_{k_j}^2\left(cKm(m+1)+2M+2\sum_{i=1}^m\|\nabla h_i(u)\|^2\right), \end{eqnarray*} which yields \begin{eqnarray*} \sum_{k=k_0}^{k_j}\alpha_{k}(F(\varphi_{1,k})-F(u)) &\leq&\frac{\|x_{1}-u\|^2}{2}-\frac{\|x_{{k_j}+1}-u\|^2}{2}+M'\sum_{k=k_0}^{k_j}\alpha_{k}^2, \end{eqnarray*} where $M':=\frac{cKm(m+1)}{2}+M+\sum_{i=1}^m\|\nabla h_i(u)\|^2$. 
Hence, we have \begin{eqnarray*} \frac{\sum_{k=k_0}^{k_j}\alpha_{k}(F(\varphi_{1,k})-F(u))}{\sum_{k=k_0}^{k_j}\alpha_{k}} &\leq&\frac{\|x_{1}-u\|^2}{2\sum_{k=k_0}^{k_j}\alpha_{k}}+M'\frac{\sum_{k=k_0}^{k_j}\alpha_{k}^2}{\sum_{k=k_0}^{k_j}\alpha_{k}}. \end{eqnarray*} Consequently, by the assumption (H2) and Proposition \ref{Silverman-Toeplitz}, we obtain \begin{eqnarray*} \liminf_{j\to+\infty} \frac{\sum_{k=k_0}^{k_j}\alpha_{k}(F(\varphi_{1,k})-F(u))}{\sum_{k=k_0}^{k_j}\alpha_{k}} \leq0. \end{eqnarray*} The convexity of $F$ together with Proposition \ref{Silverman-Toeplitz} also yields \begin{eqnarray*} F(z)\leq\liminf_{j\to+\infty} F\left(\frac{\sum_{k=k_0}^{k_j}\alpha_{k}\varphi_{1,k}}{\sum_{k=k_0}^{k_j}\alpha_{k}}\right)\leq F(u). \end{eqnarray*} Since $u\in\mathcal{S}$ is arbitrary, we have $z\in\mathcal{S}$. Therefore, by Proposition \ref{fejer}, the sequence $(x_k)_{k\geq1}$ converges to a point in $\mathcal{S}$. \end{proof} Some remarks relating to Theorem \ref{thm-main-2} are as follows. \begin{remark}\label{key-remark-2} \begin{itemize} \item[(i)] One can obtain a convergence result as in Theorem \ref{thm-main-2} in the more general setting of proper convex lower semicontinuous objective functions $f_i:\R^n\to(-\infty,+\infty]$, provided that one imposes the Lipschitz continuity of the functions $f_i$ and $h_i$ relative to all bounded subsets of $\R^n$ and the requirement that the subdifferential of $f_i$ maps bounded subsets of $\R^n$ into bounded nonempty subsets of $\R^n$. The property that the functions $f_i, h_i$ are Lipschitz continuous on all bounded subsets of $\R^n$ is typically assumed in order to guarantee the non-ergodic convergence of incremental proximal type schemes; see, for instance, \cite{B11,B12}. In fact, there are several loss functions in machine learning which satisfy the Lipschitz continuity property, for instance, the hinge, logistic, and Huber loss functions; see \cite{RDCPV04} for further discussion. 
\item[(ii)] One can also obtain a weak ergodic convergence of the sequence $(x_k)_{k\geq1}$ in the general setting of a real Hilbert space by slightly modifying the proofs of Theorem 3.1 and Corollary 4.1 in \cite{NP18-2}. \end{itemize} \end{remark} \section{Numerical Examples} In this section, we demonstrate the effectiveness of the proposed algorithm by applying it to the image inpainting problem and to the generalized Heron problem. All the experiments were performed under MATLAB 9.6 (R2019a) running on a MacBook Pro 13-inch, 2019 with a 2.4 GHz Intel Core i5 processor and 8 GB 2133 MHz LPDDR3 memory. \subsection{Image Inpainting} Let $n:=\ell_1\times\ell_2$ and $X\in\R^{\ell_1\times \ell_2}$ be an ideal complete image. Let $x\in\R^n$ represent the vector generated by vectorizing the image $X$. Let $\mathbf{b}\in\R^n$ be the marked image and $\mathbf{B}\in\R^{n\times n}$ be the diagonal matrix where $\mathbf{B}_{i,i}=0$ if the pixel $i$ in the marked image $\mathbf{b}$ is missing (in our experiments, we set it to be black) and $\mathbf{B}_{i,i}=1$ otherwise, for $i=1,\ldots,n$. Traditionally, the image inpainting problem aims to reconstruct the clean image $x$ from the marked image $\mathbf{b}$ by solving the unconstrained nonsmooth optimization problem \begin{eqnarray}\label{inpaint-pb-trad}% \begin{array}{ll} \textrm{minimize }\indent \lambda_1\|Wx\|_1+\frac{\lambda_2}{2}\|x\|^2+\frac{1}{2}\|\mathbf{B}x-\mathbf{b}\|^2\\ \textrm{subject to}\indent x\in\R^n,\\ \end{array}% \end{eqnarray} where $\lambda_1,\lambda_2>0$ are the penalization parameters, and $W$ is the inverse discrete Haar wavelet transform. The term $\|Wx\|_1$ promotes the sparsity of the image under the wavelet transformation, and the term $\frac{1}{2}\|x\|^2$ guarantees the uniqueness of the solution. Note that $W^\top W=WW^\top=I$ and so $\|W^\top W\|=1$. For more details of wavelet-based inpainting, see \cite{SMF10}.
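To make the data-fit term concrete, the following minimal NumPy sketch (variable names hypothetical, not the MATLAB code used in our experiments) stores the diagonal matrix $\mathbf{B}$ as a 0/1 vector; since $\mathbf{B}^\top\mathbf{B}=\mathbf{B}$, the gradient of $g(x)=\frac{1}{2}\|\mathbf{B}x-\mathbf{b}\|^2$ is simply $\mathbf{B}(\mathbf{B}x-\mathbf{b})$:

```python
import numpy as np

# Sketch: the diagonal mask matrix B is kept as a 0/1 vector `mask`.
# Since B^T B = B, the data-fit term g(x) = 0.5*||Bx - b||^2 has gradient
#   grad g(x) = B(Bx - b) = mask * (mask * x - b).
rng = np.random.default_rng(0)
n = 16
mask = (rng.random(n) > 0.6).astype(float)   # ~60% of pixels missing (mask = 0)
x_true = rng.random(n)
b = mask * x_true                            # marked image: missing pixels set to black (0)

def g(x):
    return 0.5 * np.sum((mask * x - b) ** 2)

def grad_g(x):
    return mask * (mask * x - b)

# Any image agreeing with b on the observed pixels is a minimizer of g:
assert np.isclose(g(b), 0.0) and np.allclose(grad_g(b), 0.0)
```

The set $\argmin g$ thus consists of all images that agree with the marked image on the observed pixels, which is exactly the constraint set used below.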
Meanwhile, in our experiment, we consider the basic structure of the ill-conditioned linear inverse problem $\mathbf{B}x=\mathbf{b}$ and make use of regularization tools within the framework of the nonsmooth convex constrained minimization problem \begin{eqnarray}\label{inpaint-pb}% \begin{array}{ll} \textrm{minimize }\indent \lambda_1\|Wx\|_1+\frac{\lambda_2}{2}\|x\|^2\\ \textrm{subject to}\indent x\in\argmin \frac{1}{2}\|\mathbf{B}\cdot-\mathbf{b}\|^2.\\ \end{array}% \end{eqnarray} Note that the problem (\ref{inpaint-pb}) fits into the setting of the problem (\ref{MP}) where $m=1$, $f_1=\lambda_1\|W(\cdot)\|_1$, $h_1=\frac{\lambda_2}{2}\|\cdot\|^2$, and $g=\frac{1}{2}\|\mathbf{B}\cdot-\mathbf{b}\|^2$. To show the performance of the proposed method, we solve the image inpainting problem (\ref{inpaint-pb}) using Algorithm \ref{algorithm-ergodic-smooth}. First, we test the method by presenting the ISNR values for different parameter combinations when reconstructing the 384$\times$512 peppers image, whose marked version is obtained by randomly setting 60\% of all pixels to black. The quality of the reconstructed images is measured by means of the improvement in signal-to-noise ratio (ISNR) in decibels (dB), that is, $$\mathrm{ISNR}(k)=10\log_{10}\left( \frac{\|x-\mathbf{b}\|^2}{\|x-x_k\|^2}\right),$$ where $x, \mathbf{b}$, and $x_k$ denote the original clean image, the noisy image with missing pixels, and the reconstructed image at iteration $k$, respectively. We run the algorithm for 20 iterations; the resulting ISNR values are presented in Tables \ref{tb-isnr} and \ref{tb2-isnr}.
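The ISNR formula above is straightforward to implement; a minimal sketch (function and variable names are illustrative only):

```python
import numpy as np

def isnr(x_clean, x_noisy, x_k):
    """Improvement in signal-to-noise ratio (in dB) of the reconstruction x_k,
    relative to the noisy/marked image x_noisy."""
    return 10.0 * np.log10(np.sum((x_clean - x_noisy) ** 2)
                           / np.sum((x_clean - x_k) ** 2))

# A reconstruction ten times closer (in Euclidean norm) to the clean image
# than the marked image corresponds to an ISNR of 20 dB:
x = np.ones(4)
noisy = x + 0.1
recon = x + 0.01
# isnr(x, noisy, recon) == 20.0 (up to floating point)
```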
\begin{table}[H] \centering \setlength{\tabcolsep}{6pt} {\scriptsize \caption{\label{tb-isnr} ISNR values after 20 iterations for different choices of penalization parameters $\lambda_1$ and $\lambda_2$ with the step size $\alpha_k=1/k$ and the penalization parameter $\beta_k=k$.} \begin{tabular}{@{}l r r r r r r r r r r r r r r r r r r r@{}} \toprule $\lambda_1$ $\rightarrow$ & \multirow{2}{*}{$0.1$} & \multirow{2}{*}{$0.2$} & \multirow{2}{*}{$0.3$} & \multirow{2}{*}{$0.4$} & \multirow{2}{*}{$0.5$} & \multirow{2}{*}{$0.6$} & \multirow{2}{*}{$0.7$} & \multirow{2}{*}{$0.8$} & \multirow{2}{*}{$0.9$} & \multirow{2}{*}{$1$} & \multirow{2}{*}{$1.5$} & \multirow{2}{*}{$2$} \\ $\lambda_2$ $\downarrow$ \\ \midrule
$10^{-8}$ &3.1701 &5.9704 &8.6229 &11.1305 &13.2470 &14.6711 &15.4206 &15.7265 &15.8401 &15.8574 &15.4991 &14.9656\\
$10^{-5}$ &3.1700 &5.9703 &8.6227 &11.1304 &13.2468 &14.6709 &15.4205 &15.7264 &15.8400 &15.8573 &15.4990 &14.9656\\
$10^{-4}$ &3.1696 &5.9695 &8.6215 &11.1288 &13.2452 &14.6696 &15.4195 &15.7258 &15.8395 &15.8569 &15.4987 &14.9653\\
$10^{-3}$ &3.1654 &5.9616 &8.6096 &11.1135 &13.2288 &14.6561 &15.4100 &15.7189 &15.8341 &15.8523 &15.4957 &14.9625\\
0.005 &3.1466 &5.9267 &8.5569 &11.0458 &13.1562 &14.5957 &15.3671 &15.6880 &15.8097 &15.8319 &15.4820 &14.9502\\
0.01 &3.1233 &5.8835 &8.4916 &10.9615 &13.0654 &14.5192 &15.3122 &15.6484 &15.7786 &15.8058 &15.4646 &14.9346\\
0.05 &2.9436 &5.5502 &7.9894 &10.3072 &12.3390 &13.8716 &14.8185 &15.2887 &15.4972 &15.5707 &15.3129 &14.8020\\
0.1 &2.7353 &5.1635 &7.4109 &9.5430 &11.4549 &13.0110 &14.0903 &14.7313 &15.0632 &15.2124 &15.0952 &14.6188 \\ \bottomrule \end{tabular} } \end{table} In Table \ref{tb-isnr}, we list the ISNR values after 20 iterations of Algorithm \ref{algorithm-ergodic-smooth} for different choices of the penalization parameters $\lambda_1,\lambda_2> 0$. In this case, we set the step size sequence $\alpha_k=1/k$ and the penalization sequence $\beta_k=k$.
We observe that combining $\lambda_1=1$ with each parameter $\lambda_2\in(0,10^{-4}]$ leads to a large ISNR value of approximately 15.86 dB. Moreover, one can see that the ISNR value tends to increase when $\lambda_1\in[0.1,1]$ increases, while it tends to decrease when $\lambda_2$ increases. \begin{table}[H] \centering \setlength{\tabcolsep}{6pt} {\tiny \caption{\label{tb2-isnr} ISNR values after 20 iterations for different choices of step size $\alpha_k=a/k$ and penalization parameter $\beta_k=bk$ when $\lambda_1=1$ and $\lambda_2=10^{-4}$.} \begin{tabular}{@{}l r r r r r r r r r r r r r r r r r r r r r r r r r r r r@{}} \toprule $a$ $\rightarrow$ & \multirow{2}{*}{$0.8$} & \multirow{2}{*}{$0.9$} & \multirow{2}{*}{$1$} & \multirow{2}{*}{$1.1$} & \multirow{2}{*}{$1.2$} & \multirow{2}{*}{$1.3$} & \multirow{2}{*}{$1.4$} & \multirow{2}{*}{$1.5$} & \multirow{2}{*}{$1.6$} & \multirow{2}{*}{$1.7$} & \multirow{2}{*}{$1.8$} & \multirow{2}{*}{$1.9$} & \multirow{2}{*}{$2$} \\ $b$ $\downarrow$ \\ \midrule 0.5 &12.0819 &12.9435 &13.5459 &13.9563 &14.2388 &14.4386 &14.5835 &14.6916 &14.7741 &14.8385 &14.8897 &14.9313 &14.9656\\ 0.6 &13.2543 &13.9796 &14.4414 &14.7384 &14.9360 &15.0727 &15.1705 &15.2430 &15.2981 &15.3409 &15.3748 &15.4022 &15.4245\\ 0.7 &14.0487 &14.6485 &15.0080 &15.2330 &15.3802 &15.4807 &15.5519 &15.6042 &15.6438 &15.6744 &15.6984 &15.7172 &15.7323\\ 0.8 &14.6041 &15.1009 &15.3895 &15.5675 &15.6822 &15.7594 &15.8139 &15.8537 &15.8832 &15.9056 &15.9228 &15.9361 &15.9465\\ 0.9 &14.9997 &15.4189 &15.6594 &15.8054 &15.8984 &15.9597 &16.0030 &16.0341 &16.0569 &16.0737 &16.0861 &16.0951 &16.1017\\ 1 &15.2883 &15.6511 &15.8574 &15.9807 &16.0579 &16.1084 &16.1437 &16.1686 &16.1861 &16.1986 &16.2074 &16.2136 &-\\ 1.1 &15.5002 &15.8237 &16.0060 &16.1133 &16.1792 &16.2221 &16.2516 &16.2719 &16.2860 &16.2956 &16.3019 &- &-\\ 1.2 &15.6600 &15.9543 &16.1182 &16.2135 &16.2715 &16.3090 &16.3345 &16.3518 &16.3636 &- &- &- &-\\ 1.3 &15.7859 &16.0569 &16.2063 &16.2922 &16.3439 
&16.3775 &16.3998 &16.4147 &- &- &- &- &-\\ 1.4 &15.8859 &16.1386 &16.2766 &16.3551 &16.4019 &16.4320 &16.4519 &- &- &- &- &- &-\\ 1.5 &15.9625 &16.2011 &16.3305 &16.4032 &16.4467 &16.4749 &- &- &- &- &- &- &-\\ 1.6 &16.0223 &16.2499 &16.3724 &16.4406 &16.4819 &- &- &- &- &- &- &- &-\\ 1.7 &16.0697 &16.2885 &16.4062 &16.4713 &- &- &- &- &- &- &- &- &-\\ 1.8 &16.1088 &16.3210 &16.4349 &16.4981 &- &- &- &- &- &- &- &- &-\\ 1.9 &16.1418 &16.3481 &16.4595 &- &- &- &- &- &- &- &- &- &-\\ 2 &16.1690 &16.3710 &- &- &- &- &- &- &- &- &- &- &- \\ \bottomrule \end{tabular} } \end{table} In Table \ref{tb2-isnr}, we present the ISNR values after 20 iterations of Algorithm \ref{algorithm-ergodic-smooth} for different choices of the positive square summable step size sequence $\alpha_k=a/k$, where $a\in[0.8,2]$, and the positive penalization sequence $\beta_k=bk$, where $b\in[0.5,2]$. Note that the results for the combinations violating the condition (H3) are not presented in the table. Observe that combining $\alpha_k=1.1/k$ with $\beta_k=1.8k$ leads to the largest ISNR value of 16.4981 dB. Furthermore, we notice that the ISNR value tends to increase when both $a$ and $b$ increase. Next, we compare, via the ISNR values, the performance of Algorithm \ref{algorithm-ergodic-smooth} when solving the inpainting problem in our setting (\ref{MP}) with that of two other well-known iterative methods, namely, the proximal gradient method (PGM) (see \cite[Theorem 25.8]{BC11}) and the fast iterative shrinkage-thresholding algorithm (FISTA) \cite{BT09}. These methods are suited for solving the classical setting (\ref{inpaint-pb-trad}) when putting $f=\lambda_1\|W(\cdot)\|_1$ and $h=\frac{\lambda_2}{2}\|\cdot\|^2+\frac{1}{2}\|\mathbf{B}\cdot-\mathbf{b}\|^2$ with $\nabla h$ being $(\lambda_2+1)$-Lipschitz continuous.
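The proximal step that PGM and FISTA need for $f=\lambda_1\|W(\cdot)\|_1$ has a closed form because $W$ is orthonormal: $\mathrm{prox}_{t\|W\cdot\|_1}(x)=W^\top\mathrm{soft}(Wx,t)$, i.e., transform, soft-threshold the coefficients, and transform back. A minimal sketch with a generic orthonormal matrix $W$ (the identity in the sanity check; in the paper $W$ is the discrete Haar wavelet transform):

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the prox of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_l1_analysis(x, W, t):
    """prox of t * ||Wx||_1 when W^T W = W W^T = I:
    transform, shrink the coefficients, transform back."""
    return W.T @ soft(W @ x, t)

# Sanity check with the simplest orthonormal matrix W = I,
# where the prox reduces to plain soft-thresholding:
x = np.array([3.0, -0.5, 1.2])
p = prox_l1_analysis(x, np.eye(3), 1.0)
# p is approximately [2.0, 0.0, 0.2]
```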
For fair comparison, we manually choose the best possible parameter combinations for each method (see Tables \ref{tb-isnr-fista} - \ref{tb-isnr-pgm-2} in Appendix B for several parameter combinations of PGM and FISTA) as follows: \begin{itemize} \item Algorithm \ref{algorithm-ergodic-smooth}: $\lambda_1=1, \lambda_2=10^{-4}, \alpha_k=1.1/k$, and $\beta_k=1.8k$; \item PGM \cite[Theorem 25.8]{BC11}: $\lambda_1=0.1, \lambda_2=10^{-8}$, and $\gamma=1.9/(\lambda_2+1)$; \item FISTA \cite{BT09}: $\lambda_1=0.05$ and $\lambda_2=10^{-4}$. \end{itemize} We terminate the methods using the relative change $$\max\left\{\frac{\|x_{k+1}-x_k\|}{\|x_k\|+1},\frac{|A(x_{k+1})-A(x_{k})|}{|A(x_{k})|+1},\frac{|B(x_{k+1})-B(x_{k})|}{|B(x_{k})|+1}\right\}\leq \epsilon,$$ where $A(\cdot):=\lambda_1\|W(\cdot)\|_1+\frac{\lambda_2}{2}\|\cdot\|^2$, $B(\cdot):=\frac{1}{2}\|\mathbf{B}\cdot-\mathbf{b}\|^2$, and $\epsilon$ is an optimality tolerance. We use the optimality tolerance $10^{-6}$ for obtaining the ISNR values. Moreover, we show the ISNR curves of the reconstructions produced by these three methods over 50 iterations. We test the methods on three images, and the results are shown in Figures \ref{PP} - \ref{LH}.
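The stopping rule above can be sketched as a small helper (names hypothetical); a method stops at the first iteration whose relative change falls below the tolerance $\epsilon$:

```python
import numpy as np

def relative_change(x_new, x_old, A_new, A_old, B_new, B_old):
    """Maximum relative change of the iterate and of the two objective parts
    A(.) and B(.) between consecutive iterations."""
    return max(np.linalg.norm(x_new - x_old) / (np.linalg.norm(x_old) + 1.0),
               abs(A_new - A_old) / (abs(A_old) + 1.0),
               abs(B_new - B_old) / (abs(B_old) + 1.0))

# Stop as soon as relative_change(...) <= eps, e.g. eps = 1e-6.
eps = 1e-6
r = relative_change(np.array([1.0, 2.0]), np.array([1.0, 2.0]), 5.0, 5.0, 3.0, 3.0)
# identical consecutive iterates and objective values give r == 0.0
```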
\begin{figure}[H] \begin{minipage}[t]{0.22\textwidth} \centering \resizebox*{3.5cm}{!}{\includegraphics{PP_noisy.png}}\\ {\scriptsize (a) 60\% missing pixels \\$384\times 512$ peppers image} \end{minipage} \begin{minipage}[t]{0.22\textwidth} \centering \resizebox*{3.5cm}{!}{\includegraphics{PP_IPGM.png}}\\ {\scriptsize (b) Algorithm 1, \\ISNR = 16.5561 dB} \end{minipage} \begin{minipage}[t]{0.22\textwidth} \centering \resizebox*{3.5cm}{!}{\includegraphics{PP_FISTA.png}}\\ {\scriptsize (c) FISTA, \\ISNR = 16.1556 dB} \end{minipage} \begin{minipage}[t]{0.22\textwidth} \centering \resizebox*{3.5cm}{!}{\includegraphics{PP_PGM.png}}\\ {\scriptsize (d) PGM, \\ ISNR = 15.1692 dB} \end{minipage} \vspace{0.3cm} \begin{minipage}[t]{0.4\textwidth} \centering \resizebox*{6cm}{!}{\includegraphics{PP_error.png}}\\ {\scriptsize (e) ISNR values versus optimality tolerance} \end{minipage} \begin{minipage}[t]{0.4\textwidth} \centering \resizebox*{6cm}{!}{\includegraphics{PP_ISNR.png}}\\ {\scriptsize (f) ISNR values versus iterations} \end{minipage} \caption{\label{PP} Image Inpainting. Figure (a) shows the 384$\times$512 peppers image with 60\% of all pixels randomly masked to black. Figures (b) - (d) show the reconstructed images produced by Algorithm \ref{algorithm-ergodic-smooth}, FISTA, and PGM, respectively, for the optimality tolerance $10^{-6}$.
Figure (e) shows ISNR values when the iterates reach various optimality tolerances, and Figure (f) shows ISNR values when performing 50 iterations.} \end{figure} \begin{figure}[H] \begin{minipage}[t]{0.2\textwidth} \centering \resizebox*{3cm}{!}{\includegraphics{LE_noisy.png}}\\ {\scriptsize (a) 60\% missing pixels \\$512\times 512$ Lena image} \end{minipage} \begin{minipage}[t]{0.2\textwidth} \centering \resizebox*{3cm}{!}{\includegraphics{LE_IPGM.png}}\\ {\scriptsize (b) Algorithm 1, \\ISNR = 18.4371 dB} \end{minipage} \begin{minipage}[t]{0.2\textwidth} \centering \resizebox*{3cm}{!}{\includegraphics{LE_FISTA.png}}\\ {\scriptsize (c) FISTA, \\ISNR = 18.1711 dB} \end{minipage} \begin{minipage}[t]{0.2\textwidth} \centering \resizebox*{3cm}{!}{\includegraphics{LE_PGM.png}}\\ {\scriptsize (d) PGM, \\ ISNR = 17.2952 dB} \end{minipage} \vspace{0.3cm} \begin{minipage}[t]{0.4\textwidth} \centering \resizebox*{6cm}{!}{\includegraphics{LE_error.png}}\\ {\scriptsize (e) ISNR values versus optimality tolerance} \end{minipage} \begin{minipage}[t]{0.4\textwidth} \centering \resizebox*{6cm}{!}{\includegraphics{LE_ISNR.png}}\\ {\scriptsize (f) ISNR values versus iterations} \end{minipage} \caption{\label{LE} Image Inpainting. Figure (a) shows the 512$\times$512 Lena image with 60\% of all pixels randomly masked to black. Figures (b) - (d) show the reconstructed images produced by Algorithm \ref{algorithm-ergodic-smooth}, FISTA, and PGM, respectively, for the optimality tolerance $10^{-6}$.
Figure (e) shows ISNR values when the iterates reach various optimality tolerances, and Figure (f) shows ISNR values when performing 50 iterations.} \end{figure} \begin{figure}[H] \begin{minipage}[t]{0.2\textwidth} \centering \resizebox*{3cm}{!}{\includegraphics{LH_noisy.png}}\\ {\scriptsize (a) 60\% missing pixels \\$640\times 480$ lighthouse image} \end{minipage} \begin{minipage}[t]{0.2\textwidth} \centering \resizebox*{3cm}{!}{\includegraphics{LH_IPGM.png}}\\ {\scriptsize (b) Algorithm 1, \\ISNR = 17.9481 dB} \end{minipage} \begin{minipage}[t]{0.2\textwidth} \centering \resizebox*{3cm}{!}{\includegraphics{LH_FISTA.png}}\\ {\scriptsize (c) FISTA, \\ISNR = 17.9159 dB} \end{minipage} \begin{minipage}[t]{0.2\textwidth} \centering \resizebox*{3cm}{!}{\includegraphics{LH_PGM.png}}\\ {\scriptsize (d) PGM, \\ ISNR = 17.4031 dB} \end{minipage} \vspace{0.3cm} \begin{minipage}[t]{0.4\textwidth} \centering \resizebox*{6cm}{!}{\includegraphics{LH_error.png}}\\ {\scriptsize (e) ISNR values versus optimality tolerance} \end{minipage} \begin{minipage}[t]{0.4\textwidth} \centering \resizebox*{6cm}{!}{\includegraphics{LH_ISNR.png}}\\ {\scriptsize (f) ISNR values versus iterations} \end{minipage} \caption{\label{LH} Image Inpainting. Figure (a) shows the 640$\times$480 lighthouse image with 60\% of all pixels randomly masked to black. Figures (b) - (d) show the reconstructed images produced by Algorithm \ref{algorithm-ergodic-smooth}, FISTA, and PGM, respectively, for the optimality tolerance $10^{-6}$.
Figure (e) shows ISNR values when the iterates reach various optimality tolerances, and Figure (f) shows ISNR values when performing 50 iterations.} \end{figure} From all the above results, we observe that the proposed method (Algorithm \ref{algorithm-ergodic-smooth}) outperforms PGM and FISTA in terms of the improvement in signal-to-noise ratio (ISNR), both under the tolerance-based stopping criterion and under a fixed number of iterations, which may be attributed to the hierarchical setting (\ref{inpaint-pb}) considered in this work. \subsection{Generalized Heron Problem} The traditional Heron problem is to find a point on a straight line in the plane that minimizes the sum of its distances to two given points. Several generalizations of the classical Heron problem of finding a point that minimizes the sum of the distances to given closed convex sets over a nonempty closed convex set have been investigated by many authors, for instance, \cite{BCH15,MNS12,MNS12-2}. However, it is very challenging to solve the generalized Heron problem when the constraint set is given by a system of linear equations $\mathbf{A}x=\mathbf{b}$, which typically has no solution, so that computing a metric projection onto this feasible set is impossible. In addition, none of the methods mentioned in the above references can be applied in this case.
This motivates us to consider the generalized Heron problem of finding a point that minimizes the sum of the distances to given closed convex sets over the set of least squares solutions to a system of linear equations, that is, \begin{eqnarray}\label{heron} \begin{array}{ll} \textrm{minimize}\indent \sum_{i=1}^m\dist(x,C_i)+\frac{1}{2}\|x\|^2\\ \textrm{subject to}\indent x\in\argmin \frac{1}{2}\|\mathbf{A}x-\mathbf{b}\|^2, \end{array}% \end{eqnarray} where $C_i \subset \mathbb{R}^n$ are nonempty closed convex subsets, for all $i=1,...,m$, $\mathbf{A}\in \mathbb{R}^{r\times n}$ is a matrix, and $\mathbf{b}\in \mathbb{R}^{r}$ is a vector. We observe that (\ref{heron}) fits into the setting of the problem (\ref{MP}) when setting $f_i(x)=\dist(x,C_i)$, $h_i(x)=\frac{1}{2m}\|x\|^2$, for all $i=1,...,m$, and $g(x)=\frac{1}{2}\|\mathbf{A}x-\mathbf{b}\|^2$ for all $x\in\R^n$. In this setting, the solution set of the system of linear equations may be empty. Moreover, it is worth noting that when performing our proposed method (Algorithm \ref{algorithm-ergodic-smooth}), we only need to compute the gradient of $g$ and never the inverse of any matrix. Note also that the squared $\ell_2$-norm term guarantees the uniqueness of a solution to the problem. We will perform our experiments by considering the closed convex target sets $C_i \subset \mathbb{R}^n$, for all $i=1,\ldots,m$, which are balls of radius $0.2$ whose centers are created randomly in the interval $(-n^2,n^2)$. We put $r=m^2$ and generate all entries of the matrix $\mathbf{A}$ randomly from the interval $(-n^2,n^2)$. Our experiments will be divided into two cases, namely, the case of consistent constraint where $\mathbf{b}=\mathbf{0}_{\mathbb{R}^{m^2}}$, and the case of inconsistent constraint where $\mathbf{b}$ is not a zero vector.
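To illustrate the ingredients the method needs on this problem, the following sketch combines a gradient step on the penalized constraint $g$ with incremental proximal gradient steps through the pairs $(f_i,h_i)$; the exact form of the iteration is reconstructed from the update relations used in the convergence analysis (Appendix A) and is an assumption here, not a transcript of our MATLAB code. The prox of $t\,\dist(\cdot,C)$ only requires the metric projection onto $C$: it moves toward the projection by at most $t$, landing on $C$ once the distance is at most $t$.

```python
import numpy as np

def proj_ball(x, c, r):
    """Metric projection onto the ball C = B(c, r)."""
    d = np.linalg.norm(x - c)
    return x if d <= r else c + (r / d) * (x - c)

def prox_dist(x, c, r, t):
    """prox of t * dist(., C) for the ball C = B(c, r)."""
    p = proj_ball(x, c, r)
    d = np.linalg.norm(x - p)
    return p if d <= t else x + (t / d) * (p - x)

def heron_step(x, A, b, balls, alpha, beta):
    """One sketched iteration: a penalty gradient step on g = 0.5*||Ax-b||^2,
    then incremental prox-gradient steps with f_i = dist(., C_i) and
    h_i = ||.||^2 / (2m)."""
    m = len(balls)
    phi = x - alpha * beta * (A.T @ (A @ x - b))        # gradient step on the penalty
    for (c, r) in balls:
        phi = prox_dist(phi - alpha * phi / m, c, r, alpha)  # grad of h_i, then prox of f_i
    return phi

# Small consistent-constraint instance with m = 3 targets in dimension n = 2,
# r = m^2 = 9 rows, step sizes alpha_k = 1/k and beta_k = k / ||A||^2.
rng = np.random.default_rng(1)
A = rng.standard_normal((9, 2))
b = np.zeros(9)
balls = [(rng.uniform(-1.0, 1.0, 2), 0.2) for _ in range(3)]
x = rng.standard_normal(2)
L_g = np.linalg.norm(A, 2) ** 2
for k in range(1, 300):
    x = heron_step(x, A, b, balls, alpha=1.0 / k, beta=k / L_g)
```

With this choice, $\alpha_k\beta_k=1/\|\mathbf{A}\|^2\leq 2/L_g$, matching the step size restriction appearing in the analysis.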
In all experiments, we measure the performance of Algorithm \ref{algorithm-ergodic-smooth} by the relative change between two consecutive iterations, i.e., $$\max\left\{\frac{\|x_{k+1}-x_k\|}{\|x_k\|+1},\frac{|F(x_{k+1})-F(x_{k})|}{|F(x_{k})|+1},\frac{|g(x_{k+1})-g(x_{k})|}{|g(x_{k})|+1}\right\}\leq \epsilon,$$ where $F:=\sum_{i=1}^mF_i$ and $F_i:=f_i+h_i$ for all $i=1,...,m$. The runtimes of Algorithm \ref{algorithm-ergodic-smooth} are clocked in seconds. Two examples of iterates $x_k$ generated by Algorithm \ref{algorithm-ergodic-smooth} are illustrated in Figure \ref{cons}. \begin{figure}[H] \begin{minipage}[c]{0.5\textwidth} \centering \resizebox*{4.5cm}{!}{\includegraphics{cons_3.png}} \\ {\scriptsize (a) consistent constraint} \end{minipage} \begin{minipage}[c]{0.4\textwidth} \centering \resizebox*{4.5cm}{!}{\includegraphics{incons_2.png}} \\ {\scriptsize (b) inconsistent constraint} \end{minipage} \caption{\label{cons}Behavior of $x_k$ for several optimality tolerances $\epsilon$ on the generalized Heron problems with consistent constraint (Figure (a)) and inconsistent constraint (Figure (b)).} \end{figure} Next, we consider the influence of the parameters $\alpha_k$ and $\beta_k$ in Algorithm \ref{algorithm-ergodic-smooth} on the generalized Heron problem with the consistent constraint, that is, $\mathbf{b}=\mathbf{0}_{\mathbb{R}^{m^2}}$. We use the number of target sets $m=5$ and the dimension $n=2$, and perform $10$ samplings with randomly chosen matrix $\mathbf{A}\in \mathbb{R}^{25\times 2}$, balls $C_1,\ldots,C_5\subset\mathbb{R}^2$, and starting point $x_1\in\mathbb{R}^2$. We terminate Algorithm \ref{algorithm-ergodic-smooth} when the relative changes are less than $10^{-6}$, and then report the averaged runtimes in Table \ref{tb-cons}.
\begin{table}[H] \centering \setlength{\tabcolsep}{6pt} {\scriptsize \caption{\label{tb-cons} Comparison of runtime for different choices of step size $\alpha_k = a/k$ and penalization parameter $\beta_k=bk/\|\mathbf{A}\|^2$ in the case of consistent constraint.} \begin{tabular}{@{}l r r r r r r r r r r r r r r r r r r r@{}} \toprule $a$ $\rightarrow$ & \multirow{2}{*}{$0.1$} & \multirow{2}{*}{$0.2$} & \multirow{2}{*}{$0.3$} & \multirow{2}{*}{$0.4$} & \multirow{2}{*}{$0.5$} & \multirow{2}{*}{$0.6$} & \multirow{2}{*}{$0.7$} & \multirow{2}{*}{$0.8$} & \multirow{2}{*}{$0.9$} & \multirow{2}{*}{$1$} \\ $b$ $\downarrow$ \\ \midrule 0.1& 0.1152& 0.0988& 0.0977& 0.0800& 0.0967& 0.0732& 0.0848& 0.0756& 0.0830& 0.0813\\ 0.2& 0.0637& 0.0663& 0.0634& 0.0545& 0.0562& 0.0648& 0.0552& 0.0505& 0.0575& 0.0633\\ 0.3& 0.0524& 0.0465& 0.0461& 0.0483& 0.0609& 0.0528& 0.0427& 0.0481& 0.0485& 0.0542\\ 0.4& 0.0503& 0.0430& 0.0458& 0.0485& 0.0454& 0.0500& 0.0454& 0.0487& 0.0361& 0.0392\\ 0.5& 0.0413& 0.0393& 0.0441& 0.0409& 0.0436& 0.0383& 0.0371& 0.0385& 0.0354& 0.0437\\ 0.6& 0.0317& 0.0360& 0.0409& 0.0386& 0.0342& 0.0403& 0.0379& 0.0358& 0.0332& 0.0301\\ 0.7& 0.0337& 0.0330& 0.0338& 0.0352& 0.0345& 0.0280& 0.0333& 0.0330& 0.0366& 0.0358\\ 0.8& 0.0307& 0.0290& 0.0285& 0.0296& 0.0352& 0.0327& 0.0350& 0.0332& 0.0301& 0.0296\\ 0.9& 0.0347& 0.0308& 0.0293& 0.0330& 0.0324& 0.0364& 0.0297& 0.0300& 0.0297& 0.0297\\ 1.0& 0.0293& 0.0283& 0.0322& 0.0278& 0.0315& 0.0306& 0.0337& 0.0292& 0.0283& 0.0337\\ 1.1& 0.0298& 0.0254& 0.0301& 0.0290& 0.0354& 0.0314& 0.0270& 0.0264& 0.0252& 0.0287\\ 1.2& 0.0284& 0.0301& 0.0296& 0.0263& 0.0276& 0.0259& 0.0299& 0.0223& 0.0281& 0.0303\\ 1.3& 0.0238& 0.0267& 0.0259& 0.0286& 0.0292& 0.0244& 0.0263& 0.0260& 0.0268& 0.0302\\ 1.4& 0.0270& 0.0261& 0.0246& 0.0248& 0.0404& 0.0220& 0.0292& 0.0280& 0.0244& 0.0291\\ 1.5& 0.0252& 0.0279& 0.0261& 0.0233& 0.0329& 0.0244& 0.0253& 0.0270& 0.0268& 0.0280\\ 1.6& 0.0264& 0.0239& 0.0242& 0.0261& 0.0264& 0.0242& 0.0210& 0.0233& 
0.0246& 0.0279\\ 1.7& 0.0209& 0.0244& 0.0242& 0.0247& 0.0225& 0.0215& 0.0227& 0.0260& 0.0225& 0.0229\\ 1.8& 0.0233& 0.0229& 0.0281& 0.0217& 0.0216& 0.0222& 0.0220& 0.0240& 0.0217& 0.0262\\ 1.9& 0.0242& 0.0202& 0.0275& 0.0235& 0.0255& {\bf 0.0198}& 0.0227& 0.0221& 0.0238& 0.0257 \\ \bottomrule \end{tabular} } \end{table} In Table \ref{tb-cons}, we present the influences of the positive square summable step size $\alpha_k=a/k$, where $a\in[0.1,1]$, and the positive penalization parameter $\beta_k=bk/\|\mathbf{A}\|^2$, where $b\in[0.1,1.9]$. We notice that almost all computational runtimes are between 0.02 and 0.04 seconds. Moreover, we observe that for each choice of $\alpha_k$, a larger penalization parameter $\beta_k$ gives a better result, and in this experiment, the best result is obtained from the combination of $\alpha_k=0.6/k$ and $\beta_k=1.9k/\|\mathbf{A}\|^2$. In Table \ref{error--tb}, we show the average computational runtime and the average number of iterations over 10 samplings for several numbers of target sets $m$ and several dimensions $n$. For the sake of completeness, we also present the average value of the norm $\|\mathbf{A}\|$. We terminate Algorithm \ref{algorithm-ergodic-smooth} at the optimality tolerance $\epsilon=10^{-6}$.
\begin{table}[H] \centering \setlength{\tabcolsep}{12pt} \caption{\label{error--tb}Behavior of Algorithm \ref{algorithm-ergodic-smooth} on the generalized Heron problems with consistent constraints.} {\scriptsize \begin{tabular}{@{}l l r r r r r r r r r @{}} \hline\noalign{\smallskip} $m$ & $n$ & $\|\mathbf{A}\|$ & Time & \#(Iters) \\ \midrule 5 &2 &12.7429 &0.0292 &1210\\ &3 &30.4313 &0.0331 &1367\\ &5 &92.6700 &0.0642 &2925\\ &10 &423.3893 &0.2367 &8822\\ &20 &1978.0400 &1.4004 &39735\\ &50 &16519.4145& 26.9685 &444919\\ \midrule 10 & 2 &24.2202 &0.0485 &1305\\ &3 &55.5478 &0.0879 &2252\\ &5 &167.3363 &0.1785 &4097\\ &10 &721.7128 &0.8301 &13761\\ &20 &3269.1938 &4.7381 &42965\\ &50 &24146.8185& 40.2545 &196313\\ \midrule 20 &2 &47.3754 &0.1301 &2078\\ &3 &108.9700 &0.2621 &3831\\ &5 &308.9152 &0.6069 &8240\\ &10 &1299.6429 &2.0165 &22944\\ &20 &5502.7016 &8.7097 &62866\\ &50 &38131.4998& 94.9971 &239289\\ \midrule 50 &2 &116.4996 &0.7457 &4857\\ &3 &264.7944 &1.4213 &8629\\ &5 &743.7431 &4.2728 &19139\\ &10 &3037.0681 &14.8371& 51719\\ &20 &12424.2950 &64.2897& 127730\\ &50 &81465.8917& 450.8698& 439601\\ \midrule 100 &2 &232.7786 &3.4001 &8879 \\ &3 &524.9095 &7.2561 &18363 \\ &5 &1466.0786 &17.5919 &37689 \\ &10 &5907.6284 &60.5538& 100652 \\ &20 &24024.5749 &322.0257& 251252\\ &50 & 153858.3795 &2918.1447 &741560 \\ \hline\noalign{\smallskip} \end{tabular} } \end{table} As shown in Table \ref{error--tb}, for the same number of target sets $m$, a higher dimension $n$ requires a longer runtime and a larger number of iterations. Similarly, we notice that for the same dimension $n$, a larger number of target sets takes considerably more computational time than a smaller one. In particular, we also observe that the computational time for the case $m=100$ is almost a hundred times that of the smaller case $m=10$.
In the next experiments, we consider the generalized Heron problem with inconsistent constraint, that is, $\mathbf{b}$ is not the zero vector $\mathbf{0}_{\mathbb{R}^{m^2}}$. We also use the number of target sets $m=5$ and the dimension $n=2$, and perform $10$ samplings with randomly chosen matrix $\mathbf{A}\in \mathbb{R}^{25\times 2}$, balls $C_1,\ldots, C_5\subset\mathbb{R}^2$, starting point $x_1\in\mathbb{R}^2$, and vector $\mathbf{b}$ with entries in the interval $(0,1)$. To study the influence of the parameters $\alpha_k$ and $\beta_k$, we terminate Algorithm \ref{algorithm-ergodic-smooth} at the optimality tolerance $10^{-6}$ and report the averaged runtimes in Table \ref{tb-incons}. \begin{table}[H] \centering \setlength{\tabcolsep}{6pt} {\scriptsize \caption{\label{tb-incons} Comparison of algorithm runtime for different choices of step sizes $\alpha_k = a/k$ and penalization parameter $\beta_k=bk/\|\mathbf{A}\|^2$ in the case of inconsistent constraint.} \begin{tabular}{@{}l r r r r r r r r r r r r r r r r r r r@{}} \toprule $a$ $\rightarrow$ & \multirow{2}{*}{$0.1$} & \multirow{2}{*}{$0.2$} & \multirow{2}{*}{$0.3$} & \multirow{2}{*}{$0.4$} & \multirow{2}{*}{$0.5$} & \multirow{2}{*}{$0.6$} & \multirow{2}{*}{$0.7$} & \multirow{2}{*}{$0.8$} & \multirow{2}{*}{$0.9$} & \multirow{2}{*}{$1$} \\ $b$ $\downarrow$ \\ \midrule 0.1& 0.0975& 0.0953& 0.0861& 0.0863& 0.0725& 0.0862& 0.0692& 0.0908& 0.0906& 0.0759\\ 0.2& 0.0632& 0.0558& 0.0564& 0.0614& 0.0463& 0.0523& 0.0616& 0.0567& 0.0500& 0.0588\\ 0.3& 0.0510& 0.0563& 0.0492& 0.0570& 0.0422& 0.0493& 0.0389& 0.0582& 0.0455& 0.0444\\ 0.4& 0.0417& 0.0393& 0.0415& 0.0395& 0.0463& 0.0443& 0.0464& 0.0478& 0.0403& 0.0379\\ 0.5& 0.0392& 0.0410& 0.0314& 0.0415& 0.0403& 0.0446& 0.0395& 0.0388& 0.0394& 0.0510\\ 0.6& 0.0381& 0.0373& 0.0395& 0.0311& 0.0383& 0.0315& 0.0355& 0.0361& 0.0369& 0.0307\\ 0.7& 0.0368& 0.0343& 0.0313& 0.0339& 0.0361&
0.0313& 0.0346& 0.0341& 0.0286& 0.0390\\ 0.8& 0.0353& 0.0293& 0.0304& 0.0318& 0.0300& 0.0326& 0.0306& 0.0331& 0.0318& 0.0267\\ 0.9& 0.0296& 0.0280& 0.0298& 0.0319& 0.0306& 0.0279& 0.0320& 0.0274& 0.0275& 0.0287\\ 1.0& 0.0282& 0.0276& 0.0305& 0.0282& 0.0323& 0.0292& 0.0302& 0.0303& 0.0270& 0.0254\\ 1.1& 0.0269& 0.0273& 0.0320& 0.0277& 0.0271& 0.0252& 0.0344& 0.0257& 0.0264& 0.0297\\ 1.2& 0.0284& 0.0203& 0.0278& 0.0246& 0.0199& 0.0274& 0.0272& 0.0219& 0.0273& 0.0237\\ 1.3& 0.0242& 0.0233& 0.0275& 0.0283& 0.0267& 0.0249& 0.0262& 0.0273& 0.0267& 0.0252\\ 1.4& 0.0294& 0.0257& 0.0215& 0.0251& 0.0237& 0.0214& 0.0299& 0.0240& 0.0241& 0.0233\\ 1.5& 0.0245& 0.0275& 0.0227& 0.0250& 0.0254& 0.0228& 0.0248& 0.0228& 0.0192& 0.0252\\ 1.6& 0.0241& 0.0234& 0.0225& 0.0191& 0.0235& 0.0215& 0.0254& 0.0210& 0.0248& 0.0213\\ 1.7& 0.0233& 0.0248& 0.0220& 0.0261& 0.0199& 0.0211& 0.0235& 0.0219& 0.0208& 0.0233\\ 1.8& 0.0224& 0.0214& 0.0227& 0.0200& 0.0207& 0.0255& 0.0238& 0.0210& 0.0178& {\bf 0.0175}\\ 1.9& 0.0251& 0.0210& 0.0201& 0.0194& 0.0212& 0.0234& 0.0254& 0.0205& 0.0224& 0.0188 \\ \bottomrule \end{tabular} } \end{table} We present in Table \ref{tb-incons} the influences of the positive square summable step size $\alpha_k=a/k$, where $a\in[0.1,1]$, and the penalization parameter $\beta_k=bk/\|\mathbf{A}\|^2$, where $b\in[0.1,1.9]$. As in the consistent case, we notice that for all choices of the step size $\alpha_k$, a smaller penalization parameter $\beta_k$ requires considerably more computational runtime, and in this experiment, the best result is obtained from the combination of $\alpha_k=1/k$ and $\beta_k=1.8k/\|\mathbf{A}\|^2$. As in the consistent case, we show the average computational runtime and the average number of iterations over 10 samplings for several numbers of target sets $m$ and several dimensions $n$ for the inconsistent case in Table \ref{error--tb-incons}.
In this experiment, we also terminate Algorithm \ref{algorithm-ergodic-smooth} at the optimality tolerance $\epsilon=10^{-6}$. \begin{table}[H] \centering \setlength{\tabcolsep}{12pt} \caption{\label{error--tb-incons}Behavior of Algorithm \ref{algorithm-ergodic-smooth} on generalized Heron problems with inconsistent constraint.} {\scriptsize \begin{tabular}{@{}l l r r r r r r r r r @{}} \hline\noalign{\smallskip} $m$ & $n$ & $\|\mathbf{A}\|$ & Time & \#(Iters) \\ \midrule 5 &2 &12.4861 &0.0295 &1112\\ &3 &30.5263 &0.0323 &1333\\ &5 &98.5510 &0.0402 &1605\\ &10 &430.1475 &0.1857 &6707\\ &20 &1989.9481 &1.3728 &37874\\ &50 &16358.4264 &27.6430 &446870\\\midrule 10 &2 &24.5911 &0.0486 &1319\\ &3 &55.6295 &0.0556 &1379\\ &5 &163.4317 &0.0792 &1759\\ &10 &725.6254 &0.3125 &4874\\ &20 &3173.0835 &1.8285 &17425\\ &50 &23579.2386 &19.7460 &94644\\ \midrule 20 &2 &46.6140 &0.0770 &1197\\ &3 &107.6329 &0.1053 &1567\\ &5 &309.4062 &0.1414 &1918\\ &10 &1304.4604 &0.4817 &5267\\ &20 &5521.4899 &1.9743 &14588\\ &50 &38699.0252 &26.7447 &67902\\ \midrule 50&2 &116.5993 &0.3056 &2017\\ &3 &263.8920 &0.2797 &1713\\ &5 &742.5804 &0.6016 &2612\\ &10 &3024.8978 &2.1776 &7343\\ &20 &12455.7174 &8.2004 &16805\\ &50 &81713.0491 &65.7634 &64529\\ \midrule 100 &2 &232.2942 &0.8234 &2064\\ &3&524.2420 &0.9983 &2367\\ &5&1466.3736 &1.5053 &2991\\ &10&5918.4690 &5.5408 &8583\\ &20&23967.7064 &24.1894&21119\\ &50& 153872.0238& 251.9113 &74525\\ \hline\noalign{\smallskip} \end{tabular} } \end{table} In Table \ref{error--tb-incons}, we notice that for the same number of target sets $m$, almost all cases with a higher dimension $n$ need a longer runtime and a larger number of iterations. Moreover, we notice that for the same number of target sets and dimension, the generalized Heron problem with inconsistent constraint requires fewer iterations and less computational runtime than the consistent case.
\section{Conclusions} We consider a splitting method, called the incremental proximal gradient method with a penalty term, for minimizing the sum of a finite number of convex functions over the set of minimizers of a convex differentiable function. The advantage of our method is that it allows us not only to compute the proximal operator or the gradient of each function separately but also to handle the constraint set in a general form. Under some suitable assumptions, we show the convergence of the iterates to an optimal solution. Finally, we present numerical experiments on the image inpainting problem and the generalized Heron problem. \section*{Acknowledgement} The authors are thankful to two anonymous referees and the Associate Editor for comments and remarks that improved the quality and presentation of the paper. The authors are also thankful to Professor Radu Ioan Bo\c{t} for his suggestion on the image inpainting problem. This work is supported by the Thailand Research Fund under the Project RAP61K0012. \section*{Appendix} \subsection*{A. Key Tool Lemmas}\label{proof} This subsection is dedicated to the proofs of the important lemmas relating to the sequence generated by Algorithm \ref{algorithm-ergodic-smooth}. \begin{lemma}\label{IPGP-lemma-12} Let $u\in\mathcal{S}$ and $p\in N_{\arg\min g}(u)$ be such that $0=p+\sum_{i=1}^mv_i+\sum_{i=1}^m\nabla h_i(u)$, where $v_i\in\partial f_i(u)$ for all $i=1,\ldots,m$.
Then for every $k\geq1$ and $\eta>0$, we have \begin{eqnarray*} &&\|x_{k+1}-u\|^2-\|x_{k}-u\|^2+\frac{\eta}{1+\eta}\alpha_k\beta_k g(x_{k})+\left(1-\frac{\eta}{1+\eta}\right)\sum_{i=1}^m\|\varphi_{i+1,k}-\varphi_{i,k}\|^2\\ &\leq&\alpha_k\left(\frac{2(1+\eta)}{\eta}\alpha_k-\frac{2}{\max_{1\leq i\leq m}L_i}\right)\sum_{i=1}^m\|\nabla h_i(\varphi_{i,k})-\nabla h_i(u)\|^2\\ &&+\left(\left(1+\frac{\eta}{2(1+\eta)}\right)\alpha_k\beta_k-\frac{2}{L_g(1+\eta)}\right)\alpha_k\beta_k\|\nabla g(x_k)\|^2\\ &&+\frac{2m(m+1)(1+\eta)}{\eta}\alpha_k^2\sum_{i=1}^m\|\nabla h_i(u)+v_i\|^2\\ &&+\frac{\eta}{1+\eta}\alpha_k\beta_k\left[g^*\left(\frac{2p}{\frac{\eta}{1+\eta}\beta_k}\right)-\sigma_{\arg\min g}\left(\frac{2p}{\frac{\eta}{1+\eta}\beta_k}\right)\right]. \end{eqnarray*} \end{lemma} \begin{proof} Let $k\geq1$ be fixed. For all $i=1,\ldots,m$, it follows from the definition of the proximity operator of $\alpha_k f_i$ that $\varphi_{i,k}-\alpha_k\nabla h_i(\varphi_{i,k})-\varphi_{i+1,k}\in\alpha_k\partial f_i(\varphi_{i+1,k})$. Since $v_i\in\partial f_i(u)$, the monotonicity of $\partial f_i$ implies that \begin{eqnarray*}\<\varphi_{i,k}-\alpha_k\nabla h_i(\varphi_{i,k})-\varphi_{i+1,k}-\alpha_k v_i,\varphi_{i+1,k}-u\>\geq0, \end{eqnarray*} or equivalently, \begin{eqnarray*}\<\varphi_{i,k}-\varphi_{i+1,k},u-\varphi_{i+1,k}\>\leq \alpha_k\<\nabla h_i(\varphi_{i,k})+v_i,u-\varphi_{i+1,k}\>. \end{eqnarray*} This implies that \begin{eqnarray}\label{lemma-eqn-1}\|\varphi_{i+1,k}-u\|^2-\|\varphi_{i,k}-u\|^2+\|\varphi_{i+1,k}-\varphi_{i,k}\|^2\leq 2\alpha_k\<\nabla h_i(\varphi_{i,k})+ v_i,u-\varphi_{i+1,k}\>.
\end{eqnarray} Summing up the inequalities (\ref{lemma-eqn-1}) for all $i=1,\ldots,m$, we obtain \begin{eqnarray}\label{lemma-eqn-2}\|\varphi_{m+1,k}-u\|^2-\|\varphi_{1,k}-u\|^2&+&\sum_{i=1}^m\|\varphi_{i+1,k}-\varphi_{i,k}\|^2\nonumber\\ &\leq& 2\alpha_k\sum_{i=1}^m\<\nabla h_i(\varphi_{i,k})+ v_i,u-\varphi_{i+1,k}\>\nonumber\\ &=& 2\alpha_k\sum_{i=1}^m\<\nabla h_i(\varphi_{i,k})- \nabla h_i(u),u-\varphi_{i,k}\>\nonumber\\ &&+ 2\alpha_k\sum_{i=1}^m\<\nabla h_i(\varphi_{i,k})- \nabla h_i(u),\varphi_{i,k}-\varphi_{i+1,k}\>\nonumber\\ && + 2\alpha_k\sum_{i=1}^m\<\nabla h_i(u)+v_i,u-\varphi_{i+1,k}\>. \end{eqnarray} Let us consider the first term on the right-hand side of (\ref{lemma-eqn-2}). For each $i=1,\ldots,m$, we note that $\nabla h_i$ is $\frac{1}{L_i}$-cocoercive, that is, \begin{eqnarray*} \<\nabla h_i(\varphi_{i,k})- \nabla h_i(u),\varphi_{i,k}-u\>\geq\frac{1}{L_i}\|\nabla h_i(\varphi_{i,k})-\nabla h_i(u)\|^2, \end{eqnarray*} and so \begin{eqnarray}\label{lemma-eqn-3} 2\alpha_k\sum_{i=1}^m\<\nabla h_i(\varphi_{i,k})- \nabla h_i(u),u-\varphi_{i,k}\> &\leq&-2\alpha_k \sum_{i=1}^m\frac{1}{L_i}\|\nabla h_i(\varphi_{i,k})-\nabla h_i(u)\|^2\nonumber\\ &\leq& \frac{-2\alpha_k}{\max_{1\leq i\leq m}L_i}\sum_{i=1}^m\|\nabla h_i(\varphi_{i,k})-\nabla h_i(u)\|^2. \end{eqnarray} For the second term on the right-hand side of (\ref{lemma-eqn-2}), for each $i=1,\ldots,m$, we note that \begin{eqnarray*} 2\alpha_k\<\nabla h_i(\varphi_{i,k})- \nabla h_i(u),\varphi_{i,k}-\varphi_{i+1,k}\>&\leq& \frac{2(1+\eta)}{\eta}\alpha_k^2\|\nabla h_i(\varphi_{i,k})-\nabla h_i(u)\|^2\\ &&+\frac{\eta}{2(1+\eta)}\|\varphi_{i,k}-\varphi_{i+1,k}\|^2, \end{eqnarray*} which yields \begin{eqnarray}\label{lemma-eqn-4} 2\alpha_k\sum_{i=1}^m\<\nabla h_i(\varphi_{i,k})- \nabla h_i(u),\varphi_{i,k}-\varphi_{i+1,k}\>&\leq& \frac{2(1+\eta)}{\eta}\alpha_k^2\sum_{i=1}^m\|\nabla h_i(\varphi_{i,k})-\nabla h_i(u)\|^2\nonumber\\ &&+\frac{\eta}{2(1+\eta)}\sum_{i=1}^m\|\varphi_{i,k}-\varphi_{i+1,k}\|^2.
\end{eqnarray} Substituting (\ref{lemma-eqn-3}) and (\ref{lemma-eqn-4}) in (\ref{lemma-eqn-2}), we obtain \begin{eqnarray}\label{lemma-eqn-4-2}\|\varphi_{m+1,k}-u\|^2-\|\varphi_{1,k}-u\|^2&+&\left(1-\frac{\eta}{2(1+\eta)}\right)\sum_{i=1}^m\|\varphi_{i+1,k}-\varphi_{i,k}\|^2\nonumber\\ &\leq& \alpha_k\left(\frac{2(1+\eta)}{\eta}\alpha_k-\frac{2}{\max_{1\leq i\leq m}L_i}\right)\sum_{i=1}^m\|\nabla h_i(\varphi_{i,k})-\nabla h_i(u)\|^2\nonumber\\ && + 2\alpha_k\sum_{i=1}^m\<\nabla h_i(u)+v_i,u-\varphi_{i+1,k}\>. \end{eqnarray} To estimate the last term on the right-hand side of (\ref{lemma-eqn-4-2}), we first note that \begin{eqnarray}\label{lemma-eqn-5} 2\alpha_k\sum_{i=1}^m\<\nabla h_i(u)+v_i,x_{k}-\varphi_{i+1,k}\>&\leq& \frac{2m(m+1)(1+\eta)}{\eta}\alpha_k^2\sum_{i=1}^m\|\nabla h_i(u)+v_i\|^2\nonumber\\ &&+\frac{\eta}{2m(m+1)(1+\eta)}\sum_{i=1}^m\|x_{k}-\varphi_{i+1,k}\|^2. \end{eqnarray} Now, for each $i=1,\ldots,m$, the triangle inequality yields \begin{eqnarray}\label{lemma-eqn-5-2} \|x_{k}-\varphi_{i+1,k}\|&\leq&\|x_{k}-\varphi_{1,k}\|+\sum_{j=1}^{i}\|\varphi_{j+1,k}-\varphi_{j,k}\|\\ &\leq&\|x_{k}-\varphi_{1,k}\|+\sum_{j=1}^m\|\varphi_{j+1,k}-\varphi_{j,k}\|,\nonumber \end{eqnarray} and so \begin{eqnarray*} \|x_{k}-\varphi_{i+1,k}\|^2&\leq&\left(\|x_{k}-\varphi_{1,k}\|+\sum_{j=1}^m\|\varphi_{j+1,k}-\varphi_{j,k}\|\right)^2\\ &\leq&(m+1)\left(\|x_{k}-\varphi_{1,k}\|^2+\sum_{j=1}^m\|\varphi_{j+1,k}-\varphi_{j,k}\|^2\right). \end{eqnarray*} Summing up the above inequalities for all $i=1,\ldots,m$, we have \begin{eqnarray*} \sum_{i=1}^m\|x_{k}-\varphi_{i+1,k}\|^2 &\leq&m(m+1)\left(\|x_{k}-\varphi_{1,k}\|^2+\sum_{i=1}^m\|\varphi_{i+1,k}-\varphi_{i,k}\|^2\right).
\end{eqnarray*} Multiplying this inequality by $\frac{\eta}{2m(m+1)(1+\eta)}$, the inequality (\ref{lemma-eqn-5}) becomes \begin{eqnarray}\label{lemma-eqn-6} 2\alpha_k\sum_{i=1}^m\<\nabla h_i(u)+v_i,x_{k}-\varphi_{i+1,k}\>&\leq& \frac{2m(m+1)(1+\eta)}{\eta}\alpha_k^2\sum_{i=1}^m\|\nabla h_i(u)+v_i\|^2\nonumber\\ &&+\frac{\eta}{2(1+\eta)}\|x_{k}-\varphi_{1,k}\|^2+\frac{\eta}{2(1+\eta)}\sum_{i=1}^m\|\varphi_{i+1,k}-\varphi_{i,k}\|^2,\nonumber\\ \end{eqnarray} and then \begin{eqnarray}\label{lemma-eqn-12-2} 2\alpha_k\sum_{i=1}^m\<\nabla h_i(u)+v_i,u-\varphi_{i+1,k}\> &\leq& 2\alpha_k\sum_{i=1}^m\<\nabla h_i(u)+v_i,u-x_k\>\nonumber\\ &&+2\alpha_k\sum_{i=1}^m\<\nabla h_i(u)+v_i,x_k-\varphi_{i+1,k}\>\nonumber\\ &\leq& 2\alpha_k\sum_{i=1}^m\<\nabla h_i(u)+v_i,u-x_k\>\nonumber\\ &&+\frac{2m(m+1)(1+\eta)}{\eta}\alpha_k^2\sum_{i=1}^m\|\nabla h_i(u)+v_i\|^2 \nonumber\\ &&+\frac{\eta}{2(1+\eta)}\|x_{k}-\varphi_{1,k}\|^2 +\frac{\eta}{2(1+\eta)}\sum_{i=1}^m\|\varphi_{i+1,k}-\varphi_{i,k}\|^2.\nonumber \end{eqnarray} Hence, using (\ref{lemma-eqn-12-2}) together with the fact that $\|x_{k}-\varphi_{1,k}\|=\alpha_k\beta_k\|\nabla g(x_k)\|$, the inequality (\ref{lemma-eqn-4-2}) becomes \begin{eqnarray}\label{lemma-eqn-12-3}&&\|\varphi_{m+1,k}-u\|^2-\|\varphi_{1,k}-u\|^2+\left(1-\frac{\eta}{(1+\eta)}\right)\sum_{i=1}^m\|\varphi_{i+1,k}-\varphi_{i,k}\|^2\nonumber\\ &\leq& \alpha_k\left(\frac{2(1+\eta)}{\eta}\alpha_k-\frac{2}{\max_{1\leq i\leq m}L_i}\right)\sum_{i=1}^m\|\nabla h_i(\varphi_{i,k})-\nabla h_i(u)\|^2\nonumber\\ && + \frac{2m(m+1)(1+\eta)}{\eta}\alpha_k^2\sum_{i=1}^m\|\nabla h_i(u)+v_i\|^2+ \frac{\eta}{2(1+\eta)}\alpha_k^2\beta_k^2\|\nabla g(x_k)\|^2\nonumber\\ &&+2\alpha_k\sum_{i=1}^m\<\nabla h_i(u)+v_i,u-x_{k}\>. \end{eqnarray} On the other hand, from the definition of $\varphi_{1,k}$, we note that \begin{eqnarray}\label{lemma-eqn-12-4} \|\varphi_{1,k}-u\|^2&=&\|\varphi_{1,k}-x_k\|^2+\|x_k-u\|^2+2\<\varphi_{1,k}-x_k,x_k-u\>\nonumber\\ &=&\alpha_k^2\beta_k^2\|\nabla g(x_k)\|^2+\|x_k-u\|^2-2\alpha_k\beta_k\<\nabla g(x_k),x_k-u\>.
\end{eqnarray} Furthermore, since $\nabla g$ is $\frac{1}{L_g}$-cocoercive and $\nabla g(u)=0$, we have \begin{eqnarray*} 2\alpha_k\beta_k\<\nabla g(x_k),x_k-u\> &=&2\alpha_k\beta_k\<\nabla g(x_k)-\nabla g(u),x_k-u\>\nonumber\\ &\geq&\frac{2\alpha_k\beta_k}{L_g}\|\nabla g(x_k)-\nabla g(u)\|^2=\frac{2\alpha_k\beta_k}{L_g}\|\nabla g(x_k)\|^2, \end{eqnarray*} which implies that \begin{eqnarray}\label{lemma-eqn-12-5} -2\alpha_k\beta_k\<\nabla g(x_k),x_k-u\> &\leq&-\frac{2\alpha_k\beta_k}{L_g}\|\nabla g(x_k)\|^2. \end{eqnarray} Moreover, since $g$ is convex and $g(u)=0$, we have \begin{eqnarray}\label{lemma-eqn-12-6} -2\alpha_k\beta_k\<\nabla g(x_k),x_k-u\> &=& 2\alpha_k\beta_k\<\nabla g(x_k),u-x_k\>\nonumber\\ &\leq& 2\alpha_k\beta_k\left(g(u)-g(x_k)\right)=-2\alpha_k\beta_kg(x_k). \end{eqnarray} Combining (\ref{lemma-eqn-12-5}) and (\ref{lemma-eqn-12-6}) with weights $\frac{1}{1+\eta}$ and $\frac{\eta}{1+\eta}$, respectively, we obtain \begin{eqnarray*} -2\alpha_k\beta_k\<\nabla g(x_k),x_k-u\> &\leq& -\frac{2\alpha_k\beta_k}{L_g(1+\eta)}\|\nabla g(x_k)\|^2-\frac{2\eta}{1+\eta}\alpha_k\beta_kg(x_k). \end{eqnarray*} From this inequality, together with the inequality (\ref{lemma-eqn-12-4}) and the definition of $x_{k+1}$, it follows that \begin{eqnarray}\label{lemma-eqn-12-7}&&\|x_{k+1}-u\|^2-\|x_k-u\|^2+\left(1-\frac{\eta}{(1+\eta)}\right)\sum_{i=1}^m\|\varphi_{i+1,k}-\varphi_{i,k}\|^2+\frac{\eta}{1+\eta}\alpha_k\beta_kg(x_k)\nonumber\\ &\leq& \alpha_k\left(\frac{2(1+\eta)}{\eta}\alpha_k-\frac{2}{\max_{1\leq i\leq m}L_i}\right)\sum_{i=1}^m\|\nabla h_i(\varphi_{i,k})-\nabla h_i(u)\|^2\nonumber\\ &&+\left(\left(1+\frac{\eta}{2(1+\eta)}\right)\alpha_k\beta_k-\frac{2}{L_g(1+\eta)}\right)\alpha_k\beta_k\|\nabla g(x_k)\|^2\nonumber\\ && + \frac{2m(m+1)(1+\eta)}{\eta}\alpha_k^2\sum_{i=1}^m\|\nabla h_i(u)+v_i\|^2\nonumber\\ &&+2\alpha_k\sum_{i=1}^m\<\nabla h_i(u)+v_i,u-x_{k}\>-\frac{\eta}{1+\eta}\alpha_k\beta_kg(x_k).
\end{eqnarray} In particular, the assumption $0=p+\sum_{i=1}^mv_i+\sum_{i=1}^m\nabla h_i(u)$ gives $\sum_{i=1}^m(\nabla h_i(u)+v_i)=-p$, and $p\in N_{\arg\min g}(u)$ implies $\<x,u\>=\sigma_{\arg\min g}(x)$ for every positive multiple $x$ of $p$. Hence, we have {\small \begin{eqnarray*} 2\alpha_k\sum_{i=1}^m\<\nabla h_i(u)+v_i,u-x_{k}\>-\frac{\eta}{1+\eta}\alpha_k\beta_kg(x_k) &=&-2\alpha_k\<p,u-x_{k}\>-\frac{\eta}{1+\eta}\alpha_k\beta_kg(x_{k})\\ &=&2\alpha_k\<p,x_{k}\>-\frac{\eta}{1+\eta}\alpha_k\beta_kg(x_{k})-2\alpha_k\<p,u\>\\ &=&\frac{\eta}{1+\eta}\alpha_k\beta_k\left[\left\langle\frac{2p}{\frac{\eta}{1+\eta}\beta_k},x_{k}\right\rangle-g(x_{k})-\left\langle\frac{2p}{\frac{\eta}{1+\eta}\beta_k},u\right\rangle\right]\\ &\leq&\frac{\eta}{1+\eta}\alpha_k\beta_k\left[g^*\left(\frac{2p}{\frac{\eta}{1+\eta}\beta_k}\right)-\left\langle\frac{2p}{\frac{\eta}{1+\eta}\beta_k},u\right\rangle\right]\\ &=&\frac{\eta}{1+\eta}\alpha_k\beta_k\left[g^*\left(\frac{2p}{\frac{\eta}{1+\eta}\beta_k}\right)-\sigma_{\arg\min g}\left(\frac{2p}{\frac{\eta}{1+\eta}\beta_k}\right)\right]. \end{eqnarray*} } Combining this relation and (\ref{lemma-eqn-12-7}), the required inequality is finally obtained. \end{proof} The following proposition also plays an essential role in convergence analysis. \begin{proposition}\label{lemma-alvarez} \cite{P87} Let $(a_k)_{k\geq1}$, $(b_k)_{k\geq1}$, and $(c_k)_{k\geq1}$ be real sequences. Assume that $(a_k)_{k\geq1}$ is bounded from below, $(b_k)_{k\geq1}$ is nonnegative, $\sum_{k=1}^\infty c_k<+\infty$, and $$a_{k+1}-a_k+b_k\leq c_k \indent \forall k \geq 1.$$ Then the sequence $(a_k)_{k\geq1}$ converges and $\sum_{k=1}^\infty b_k<+\infty$. \end{proposition} The following lemma is a collection of some convergence properties of the sequences involved in our analysis. \begin{lemma}\label{key-lemma4-2} The following statements hold: (i) The sequence $(x_k)_{k\geq1}$ is quasi-Fej\'{e}r monotone relative to $\mathcal{S}$. (ii) For each $u\in \mathcal{S}$, the limit $\lim_{k\to+\infty}\|x_k-u\|$ exists.
Moreover, we have $\sum_{k=1}^\infty\alpha_k\beta_kg(x_{k})<+\infty$, $\sum_{k=1}^\infty\alpha_k\beta_k\|\nabla g(x_{k})\|^2<+\infty$, and $\sum_{k=1}^\infty\sum_{i=1}^m\|\varphi_{i+1,k}-\varphi_{i,k}\|^2<+\infty$. (iii) $\lim_{k\to+\infty}g(x_{k})=\lim_{k\to+\infty}\|\nabla g(x_{k})\|=\lim_{k\to+\infty}\sum_{i=1}^m\|\varphi_{i+1,k}-\varphi_{i,k}\|^2=\lim_{k\to+\infty}\|x_k-\varphi_{1,k}\|=0$. (iv) Every sequential cluster point of the sequence $(x_k)_{k\geq1}$ lies in $\argmin g$. \end{lemma} \begin{proof} (i) Since $\limsup_{k\to+\infty}\alpha_k\beta_k<\frac{2}{L_g}$, there exists $k_0\in\N$ such that $\alpha_k\beta_k<\frac{2}{L_g}$ for all $k\geq k_0$. Now, by picking $\eta_0\in\left(0,\frac{2\left(\frac{2}{L_g}-\limsup_{k\to+\infty}\alpha_k\beta_k\right)}{3\limsup_{k\to+\infty}\alpha_k\beta_k}\right)$, we have $\alpha_k\beta_k<\frac{4}{L_g(2+3\eta_0)}$ for all $k\geq k_0$. Hence, there must exist $M>0$ such that for all $k\geq k_0$, we have $$\alpha_k\beta_k < M < \frac{4}{L_g(2+3\eta_0)}=\frac{2}{L_g(1+\eta_0)\left(1+\frac{\eta_0}{2(1+\eta_0)}\right)}.$$ On the other hand, since $\alpha_k\to 0$, there exists $k_1\in\N$ such that $$\frac{2(1+\eta_0)}{\eta_0}\alpha_k-\frac{2}{\max_{1\leq i\leq m}L_i}<0, \indent \forall k\geq k_1.$$ Let $u\in \mathcal{S}$ be given. For every $k\geq \max\{k_0,k_1\}$, we have from Lemma \ref{IPGP-lemma-12} that \begin{eqnarray}\label{eqn-vip} &&\|x_{k+1}-u\|^2-\|x_{k}-u\|^2+\frac{\eta_0}{1+\eta_0}\alpha_k\beta_k g(x_{k})+\left(1-\frac{\eta_0}{1+\eta_0}\right)\sum_{i=1}^m\|\varphi_{i+1,k}-\varphi_{i,k}\|^2\nonumber\\ &&+\left(\frac{2}{L_g(1+\eta_0)}-\left(1+\frac{\eta_0}{2(1+\eta_0)}\right)M\right)\alpha_k\beta_k\|\nabla g(x_k)\|^2\nonumber\\ &\leq&\frac{2m(m+1)(1+\eta_0)}{\eta_0}\alpha_k^2\sum_{i=1}^m\|\nabla h_i(u)+v_i\|^2\nonumber\\ &&+\frac{\eta_0}{1+\eta_0}\alpha_k\beta_k\left[g^*\left(\frac{2p}{\frac{\eta_0}{1+\eta_0}\beta_k}\right)-\sigma_{\arg\min g}\left(\frac{2p}{\frac{\eta_0}{1+\eta_0}\beta_k}\right)\right]. 
\end{eqnarray} Since the right-hand side is summable and the last three terms of the left-hand side are nonnegative, the sequence $(x_k)_{k\geq1}$ is quasi-Fej\'{e}r monotone relative to $\mathcal{S}$. (ii) Since the right-hand side of (\ref{eqn-vip}) is summable, the statement in (ii) follows immediately from Proposition \ref{lemma-alvarez}. (iii) By (ii), it is obvious that $\lim_{k\to+\infty}\|x_{k}-\varphi_{1,k}\|^2=\lim_{k\to+\infty}\sum_{i=1}^m\|\varphi_{i+1,k}-\varphi_{i,k}\|^2=0$. Moreover, by the assumption (H3), we also have $\lim_{k\to+\infty}\|\nabla g(x_{k})\|=\lim_{k\to+\infty}g(x_{k})=0$. (iv) Let $w$ be a sequential cluster point of $(x_k)_{k\geq1}$ and $(x_{k_j})_{j\geq1}$ be a subsequence of $(x_k)_{k\geq1}$ such that $x_{k_j}\to w$. By the lower semicontinuity of $g$, we obtain $$g(w)\leq \liminf_{j\to+\infty}g(x_{k_j})=\lim_{k\to+\infty}g(x_{k})=0,$$ and so $w\in\arg\min g$. \end{proof} \subsection*{B. Parameter Combinations for PGM and FISTA} In this subsection, we present the ISNR values achieved by the classical proximal-gradient method (PGM) and FISTA for various parameter combinations.
\begin{table}[H] \centering \setlength{\tabcolsep}{6pt} {\scriptsize \caption{\label{tb-isnr-fista} ISNR values achieved by FISTA after 20 iterations for different choices of parameters $\lambda_1$ and $\lambda_2$.} \begin{tabular}{@{}l r r r r r r r r r r r r r r r r r r r@{}} \toprule $\lambda_1$ $\rightarrow$ & \multirow{2}{*}{$0.001$} & \multirow{2}{*}{$0.005$} & \multirow{2}{*}{$0.01$} & \multirow{2}{*}{$0.05$} & \multirow{2}{*}{$0.1$} & \multirow{2}{*}{$0.2$} & \multirow{2}{*}{$0.3$} & \multirow{2}{*}{$0.4$} & \multirow{2}{*}{$0.5$} \\ $\lambda_2$ $\downarrow$ \\ \midrule $10^{-8}$ & 1.1003 &5.0197 &9.3646 &15.9509 &15.1522 &13.1999 &11.6012 &10.2787 &9.1532\\ $10^{-7}$ & 1.1003 &5.0197 &9.3646 &15.9509 &15.1522 &13.1999 &11.6012 &10.2787 &9.1532\\ $10^{-6}$ & 1.1003 &5.0196 &9.3643 &15.9509 &15.1521 &13.1998 &11.6011 &10.2786 &9.1531\\ $10^{-5}$& 1.0999 &5.0182 &9.3623 &15.9509 &15.1517 &13.1994 &11.6007 &10.2783 &9.1528\\ $10^{-4}$& 1.0966 &5.0049 &9.3414 &15.9511 &15.1479 &13.1950 &11.5966 &10.2745 &9.1495\\ $10^{-3}$& 1.0639 &4.8724 &9.1253 &15.9440 &15.1076 &13.1512 &11.5551 &10.2372 &9.1162\\ 0.005 & 0.9296 &4.3075 &8.1022 &15.7625 &14.9024 &12.9478 & 11.3680 &10.0706 &8.9685\\ 0.01 & 0.7854 &3.6751 &6.9042 &15.3534 &14.5746 &12.6772 & 11.1297 &9.8619 &8.7847 \\ \bottomrule \end{tabular} } \end{table} In Table \ref{tb-isnr-fista}, we observe that the combination $\lambda_1=0.05$ with $\lambda_2=10^{-4}$ leads to the largest ISNR value of 15.9511 dB.
\begin{table}[H] \centering \setlength{\tabcolsep}{6pt} {\scriptsize \caption{\label{tb-isnr-pgm-1} ISNR values achieved by PGM after 20 iterations for different choices of parameters $\lambda_1$ and $\lambda_2$ with $\gamma=1/(\lambda_2+1)$.} \begin{tabular}{@{}l r r r r r r r r r r r r r r r r r r r@{}} \toprule $\lambda_1$ $\rightarrow$ & \multirow{2}{*}{$0.001$} & \multirow{2}{*}{$0.005$} & \multirow{2}{*}{$0.01$} & \multirow{2}{*}{$0.05$} & \multirow{2}{*}{$0.1$} & \multirow{2}{*}{$0.2$} & \multirow{2}{*}{$0.3$} & \multirow{2}{*}{$0.4$} & \multirow{2}{*}{$0.5$} \\ $\lambda_2$ $\downarrow$ \\ \midrule $10^{-8}$ & 0.1896 &0.9280 &1.8158 &7.9777 &13.3422& 13.1484 &11.6114 &10.2972& 9.1721\\ $10^{-7}$ &0.1896 &0.9280 &1.8158 &7.9777 &13.3422& 13.1484 &11.6114 &10.2972& 9.1721\\ $10^{-6}$ & 0.1896 &0.9280 &1.8158 &7.9776 &13.3421& 13.1483 &11.6113 &10.2971& 9.1721\\ $10^{-5}$ & 0.1896 &0.9279 &1.8156 &7.9769 &13.3412& 13.1479 &11.6109 &10.2967& 9.1717\\ $10^{-4}$ & 0.1894 &0.9270 &1.8140 &7.9695 &13.3318& 13.1433 &11.6068 &10.2930& 9.1684\\ $10^{-3}$ & 0.1876 &0.9184 &1.7974 &7.8963& 13.2378& 13.0972& 11.5653& 10.2556& 9.1349\\ 0.005 & 0.1799 &0.8816 &1.7262 &7.5816 &12.8122 &12.8849 &11.3779 &10.0885 &8.9863\\ 0.01 & 0.1707 &0.8381 &1.6423 &7.2120& 12.2729 &12.6055 &11.1393 &9.8788 &8.8013 \\ \bottomrule \end{tabular} } \end{table} In Table \ref{tb-isnr-pgm-1}, we observe that the combinations of $\lambda_1=0.1$ with $\lambda_2=10^{-7}, 10^{-8}$ lead to the largest ISNR value of 13.3422 dB.
\begin{table}[H] \centering \setlength{\tabcolsep}{6pt} {\scriptsize \caption{\label{tb-isnr-pgm-2} ISNR values achieved by PGM after 20 iterations for different choices of parameters $\lambda_2$ and $\gamma$ with $\lambda_1=0.1$.} \begin{tabular}{@{}l r r r r r r r r r r r r r r r r r r r@{}} \toprule $\gamma$ $\rightarrow$ & \multirow{2}{*}{$\frac{1}{(\lambda_2+1)}$} & \multirow{2}{*}{$\frac{1.1}{(\lambda_2+1)}$} & \multirow{2}{*}{$\frac{1.2}{(\lambda_2+1)}$} & \multirow{2}{*}{$\frac{1.3}{(\lambda_2+1)}$} & \multirow{2}{*}{$\frac{1.4}{(\lambda_2+1)}$} & \multirow{2}{*}{$\frac{1.5}{(\lambda_2+1)}$} & \multirow{2}{*}{$\frac{1.6}{(\lambda_2+1)}$} & \multirow{2}{*}{$\frac{1.7}{(\lambda_2+1)}$} & \multirow{2}{*}{$\frac{1.8}{(\lambda_2+1)}$} & \multirow{2}{*}{$\frac{1.9}{(\lambda_2+1)}$} \\ $\lambda_2$ $\downarrow$ \\ \midrule $10^{-8}$ &13.3422 &13.9381 &14.3418 &14.6012 &14.7664 &14.8744 &14.9489 &15.0031 &15.0458 &15.0842\\ $10^{-7}$ &13.3422 &13.9381 &14.3418 &14.6012 &14.7664 &14.8744 &14.9489 &15.0031 &15.0458 &15.0842\\ $10^{-6}$ &13.3421 &13.9380 &14.3417 &14.6012 &14.7663 &14.8744 &14.9489 &15.0030 &15.0458 &15.0842\\ $10^{-5}$ &13.3412 &13.9371 &14.3409 &14.6005 &14.7657 &14.8738 &14.9483 &15.0025 &15.0453 &15.0837\\ $10^{-4}$ &13.3318 &13.9284 &14.3331 &14.5936 &14.7595 &14.8680 &14.9429 &14.9973 &15.0402 &15.0787\\ $10^{-3}$ &13.2378 &13.8399 &14.2537 &14.5231 &14.6960 &14.8091 &14.8871 &14.9437 &14.9883 &15.0278\\ $0.005$ &12.8122 &13.4286 &13.8748 &14.1807 &14.3841 &14.5192 &14.6117 &14.6783 &14.7302 &14.7738\\ $0.01$ &12.2729 &12.8882 &13.3562 &13.6962 &13.9347 &14.0987 &14.2119 &14.2927 &14.3537 &14.4025 \\ \bottomrule \end{tabular} } \end{table} In Table \ref{tb-isnr-pgm-2}, we observe that the combinations of $\lambda_2=10^{-6}, 10^{-7}, 10^{-8}$ with $\gamma=1.9/(\lambda_2+1)$ lead to the largest ISNR value of 15.0842 dB.
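For completeness, the ISNR values reported in the tables above are computed in the standard way as $\mathrm{ISNR}=10\log_{10}\left(\|x-y\|^2/\|x-\hat{x}\|^2\right)$, where $x$ is the original image, $y$ the degraded observation, and $\hat{x}$ the restored image. The following is a minimal sketch of this metric (the evaluation code actually used for the tables is not shown in the paper):

```python
import numpy as np

def isnr(original, degraded, restored):
    """Improvement in signal-to-noise ratio, in dB: compares the squared
    error of the degraded input against that of the restored output."""
    num = np.sum((np.asarray(original) - np.asarray(degraded)) ** 2)
    den = np.sum((np.asarray(original) - np.asarray(restored)) ** 2)
    return 10.0 * np.log10(num / den)
```

Reducing the squared reconstruction error by a factor of $100$ relative to the degradation corresponds to an improvement of $20$ dB.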
https://arxiv.org/abs/2106.05644
Optimal Non-Convex Exact Recovery in Stochastic Block Model via Projected Power Method
In this paper, we study the problem of exact community recovery in the symmetric stochastic block model, where a graph of $n$ vertices is randomly generated by partitioning the vertices into $K \ge 2$ equal-sized communities and then connecting each pair of vertices with probability that depends on their community memberships. Although the maximum-likelihood formulation of this problem is discrete and non-convex, we propose to tackle it directly using projected power iterations with an initialization that satisfies a partial recovery condition. Such an initialization can be obtained by a host of existing methods. We show that in the logarithmic degree regime of the considered problem, the proposed method can exactly recover the underlying communities at the information-theoretic limit. Moreover, with a qualified initialization, it runs in $\mathcal{O}(n\log^2n/\log\log n)$ time, which is competitive with existing state-of-the-art methods. We also present numerical results of the proposed method to support and complement our theoretical development.
\section{Introduction}\label{sec:intro} Community detection is a fundamental task in network analysis and has found wide applications in diverse fields, such as social science \citep{girvan2002community}, physics \citep{newman2004finding}, and machine learning \citep{shi2000normalized}, just to name a few. As the study of community detection grows, a large variety of theories and algorithms have been proposed in the past decades for addressing different tasks under different settings. To better validate and compare these theories and algorithms, the stochastic block model (SBM), which tends to generate graphs containing underlying community structures, is widely used as a canonical model for studying community detection. In particular, substantial advances have been made in recent years on understanding the fundamental limits of community detection and developing algorithms for tackling different recovery tasks in the SBM; see, e.g., \citet{abbe2017community} and the references therein. In this work, we consider the problem of exactly recovering the communities in the symmetric SBM. Specifically, given $n$ nodes that are partitioned into $K \ge 2$ unknown communities of equal size, a random graph is generated by independently connecting each pair of vertices with probability $p$ if they belong to the same community and with probability $q$ otherwise. The goal is to recover the underlying communities exactly by only observing one realization of the graph. In the logarithmic degree regime of the considered SBM, i.e., $p=\alpha\log n/n$ and $q=\beta\log n/n$ for some $\alpha,\beta>0$, this problem exhibits a sharp information-theoretic threshold: it is possible to achieve exact recovery if $\sqrt{\alpha}-\sqrt{\beta} > \sqrt{K}$ and is impossible if $\sqrt{\alpha}-\sqrt{\beta} < \sqrt{K}$ \citep{abbe2015community}. 
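The generative model just described is straightforward to sample from, which is convenient for experimenting with recovery algorithms. The following is a minimal sketch (the parameter values used in the test are illustrative, not taken from the paper):

```python
import numpy as np

def sample_ssbm(n, K, alpha, beta, rng):
    """Sample an adjacency matrix from the symmetric SBM in the logarithmic
    degree regime: p = alpha*log(n)/n within communities, q = beta*log(n)/n
    across communities."""
    assert n % K == 0, "communities must have equal size"
    p = alpha * np.log(n) / n
    q = beta * np.log(n) / n
    labels = np.repeat(np.arange(K), n // K)          # planted communities
    same = labels[:, None] == labels[None, :]
    probs = np.where(same, p, q)
    upper = np.triu(rng.random((n, n)) < probs, k=1)  # independent edges
    A = (upper | upper.T).astype(int)                 # symmetric, no loops
    return A, labels
```

When $\alpha$ is well above $\beta$, the sampled graph has a visibly denser within-community block structure.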
Then, it is of interest to design computationally tractable methods that can achieve exact recovery under a condition on $\alpha$ and $\beta$ that meets the information-theoretic limit. In the past years, many algorithms have been proposed to achieve this task, such as spectral clustering \citep{mcsherry2001spectral,su2019strong,yun2014accurate, yun2016optimal}, SDP-based approaches \citep{amini2018semidefinite,fei2018exponential,fei2020achieving,li2018convex}, and likelihood-based approaches \citep{amini2013pseudo,gao2017achieving,zhang2016minimax,zhou2020rate}. However, most of these algorithms have a time complexity that is at least quadratic in $n$ and thus do not scale well to large-scale problems. In the symmetric SBM, the maximum likelihood (ML) estimation problem is formulated as \begin{align}\label{MLE} \max\ \left\{ \langle \bm{A}\bm{H}, \bm{H} \rangle: \bm{H} \in \mathcal{H} \right\}, \tag{\textsf{MLE}} \end{align} where $\bm{A}$ is the adjacency matrix of the observed graph, \begin{align}\label{set-H} \mathcal{H}=\left\{\bm{H} \in \mathbb{R}^{n\times K}:\ \bm{H}\bm{1}_K = \bm{1}_n,\ \bm{H}^T\bm{1}_n = m\bm{1}_K, \right. \\ \left. \bm{H} \in \{0,1\}^{n\times K}\right\} \ \quad\qquad\qquad \notag \end{align} is the discrete feasible set, $\bm{1}_n$ is the all-one vector of dimension $n$, and $m=n/K$. It is known that an ML estimator achieves exact recovery at the information-theoretic limit, but solving Problem \eqref{MLE} is NP-hard in the worst case. Recently, in independent lines of research, many non-convex formulations that arise in a variety of applications have been shown to be solvable, in the sense of average-case performance, by simple and scalable iterative methods. This includes phase retrieval \citep{bendory2017non, chen2019gradient}, group synchronization \citep{ling2020improved,liu2017estimation,liu2020unified,zhong2018near}, low-rank matrix recovery \citep{chi2019nonconvex}, and two-block community detection \citep{wang2020nearly}.
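To make the objective in \eqref{MLE} concrete: for $\bm{H}\in\mathcal{H}$, the value $\langle \bm{A}\bm{H},\bm{H}\rangle=\mathrm{tr}(\bm{H}^T\bm{A}\bm{H})$ counts twice the number of edges whose endpoints lie in the same community, so the planted partition should score higher than a random balanced relabeling whenever the within-community edge probability dominates. A minimal numerical sketch follows (the SBM parameters below are illustrative assumptions):

```python
import numpy as np

def ml_objective(A, labels, K):
    """<A H, H> = trace(H^T A H): twice the number of edges whose endpoints
    lie in the same community under the partition encoded by `labels`."""
    H = np.eye(K)[labels]            # n-by-K one-hot membership matrix
    return float(np.sum((A @ H) * H))

# Illustrative check on a sampled symmetric SBM: the planted partition
# should score higher than a random balanced relabeling.
rng = np.random.default_rng(0)
n, K, alpha, beta = 200, 2, 9.0, 1.0
p, q = alpha * np.log(n) / n, beta * np.log(n) / n
labels = np.repeat(np.arange(K), n // K)
same = labels[:, None] == labels[None, :]
upper = np.triu(rng.random((n, n)) < np.where(same, p, q), k=1).astype(int)
A = upper + upper.T
shuffled = rng.permutation(labels)   # random balanced partition
```

Here `ml_objective(A, labels, K)` exceeds `ml_objective(A, shuffled, K)` with overwhelming probability, reflecting that the ML objective favors the planted partition.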
The success of these simple iterative methods naturally motivates the question of whether one can apply a similar simple and scalable method to the discrete optimization problem \eqref{MLE}. In this work, we answer this question in the affirmative by showing that a projected power method provably works for solving Problem \eqref{MLE}. As a consequence, we obtain a simple and scalable method that achieves exact recovery under the optimal condition on $\alpha$ and $\beta$. \subsection{Related Works} In the context of the SBM, \emph{exact recovery}, also named \emph{strong consistency}, requires all the communities to be identified correctly up to a permutation of labels. More precisely, exact recovery is achieved if there exists an algorithm that takes one realization of the graph as input and outputs the true partition with high probability. In the logarithmic degree regime of the binary symmetric SBM, i.e., the symmetric SBM with $K=2$, \citet{abbe2016exact} and \citet{mossel2014consistency} independently showed that it is possible to achieve exact recovery if $\sqrt{\alpha}-\sqrt{\beta} > \sqrt{2}$ and is not possible if $\sqrt{\alpha}-\sqrt{\beta} < \sqrt{2}$, thereby establishing the information-theoretic limit for exact recovery. Later, \citet{abbe2015community} generalized this result to the case of $K \ge 2$ and showed that the information-theoretic limit is $\sqrt{\alpha}-\sqrt{\beta} > \sqrt{K}$. \emph{Almost exact recovery}, also named \emph{weak consistency}, requires the recovery of all but a vanishing fraction of vertices. In \emph{partial recovery}, only a constant fraction of vertices needs to be identified correctly. It is obvious that the requirement of partial recovery is much milder than that of almost exact recovery. We refer the reader to \citet{abbe2017community} for the formal definitions of these recovery tasks and more results on the corresponding fundamental limits in the SBM.
Over the past years, many algorithms have been proposed to tackle the problem of exact recovery in the symmetric SBM. One popular approach is spectral clustering. For example, \citet{mcsherry2001spectral} proposed a spectral partition method, which first randomly partitions the vertex set into two parts, then calls the combinatorial projection subroutine, and finally clusters the vertices by distances on the projected points. They showed that in the symmetric SBM, the proposed method achieves exact recovery if $(p-q)/\sqrt{p}\gtrsim \sqrt{\log n/n}$ and $np \gtrsim \log^6n$. Later, \citet{yun2014accurate, yun2016optimal} presented a spectral partition method, which proceeds by applying spectral decomposition to a trimmed adjacency matrix for generating an initial partition, followed by an additional procedure for local improvement. In the considered SBM, this method achieves exact recovery down to the information-theoretic threshold in $\mathcal{O}(n \mathbf{poly}\log n)$ time. Recently, \citet{su2019strong} showed that the standard spectral clustering, which first computes the leading $K$ eigenvectors of the graph Laplacian matrix and then applies the {\em k-means} algorithm for clustering, achieves exact recovery under some weak conditions. These conditions can be simplified as $\sqrt{\alpha}-\sqrt{\beta} \ge c > \sqrt{K}$ for some positive constant $c$ in the symmetric SBM. In general, these spectral clustering methods run in polynomial time. Another popular approach is convex relaxation of the ML estimation problem. In the setting of $K=2$, \citet{bandeira2018random} and \citet{hajek2016achieving,hajek2016achieving_ex} respectively showed that semidefinite programming (SDP) relaxation of the ML formulation of the binary symmetric SBM achieves exact recovery at the information-theoretic limit.
In the setting of $K \ge 2$, \citet{guedon2016community} proposed an SDP relaxation of Problem \eqref{MLE} and showed a recovery error bound, which decays polynomially in the signal-to-noise ratio, for the solution to their considered SDP. Such an error bound only implies that their proposed SDP achieves almost exact recovery in our considered SBM. Following this work, \citet{fei2018exponential, fei2020achieving} proposed a new SDP relaxation of Problem \eqref{MLE} and established a more refined recovery error bound, which decays exponentially in the signal-to-noise ratio. This error bound implies that their proposed SDP achieves exact recovery provided that $(\alpha-\beta)^2\ge c\left(\alpha+(K-1)\beta\right)$ for a positive constant $c$. Besides, \citet{amini2018semidefinite} proposed another SDP relaxation of Problem \eqref{MLE} and showed that this SDP exactly recovers the communities with high probability if $(\alpha-\beta)^2 \ge cK(\alpha+K\beta)$ for a positive constant $c$. Despite the nice property of SDP-based approaches that they do not require any initial estimate of the partition or local refinement, solving the SDP problem is usually computationally prohibitive for large-scale data sets. We refer the reader to a survey by \citet{li2018convex} for more results on convex relaxation methods for community detection.
\vskip -0.1in \begin{table}[t] \caption{Comparison of recovery conditions and time complexities of the surveyed methods for exact recovery in the SBM ($K\ge 2$).} \label{table-0} \begin{center} \begin{footnotesize} \begin{tabular}{ccccc} \toprule {\bf References} & {\bf Conditions} & {\bf Complexities} \\ \midrule \citet{mcsherry2001spectral} & Not optimal & Polynomial \\ \citet{yun2016optimal} & Optimal & $\mathcal{O}(n\mathbf{poly}\log n)$ \\ \citet{su2019strong} & Not optimal & Polynomial \\ \midrule \citet{amini2018semidefinite} & Not optimal & Polynomial\\ \citet{fei2018exponential} & Not optimal & Polynomial \\ \midrule \citet{abbe2015community} & Optimal & $\mathcal{O}(n^{1+1/\log\log n})$ \\ \citet{gao2017achieving} & Optimal & Polynomial \\ \midrule {\bf Ours} & {\bf Optimal} & $\mathcal{O}\left(\frac{n\log^2n}{\log\log n}\right)$\\ \bottomrule \end{tabular} \end{footnotesize} \end{center} \vskip -0.2in \end{table} We would also like to mention some algorithms for the considered problem that use other techniques. \citet{abbe2015community} developed a two-stage algorithm that consists of the Sphere-comparison sub-routine for detecting communities almost exactly and the Degree-profiling sub-routine for identifying the communities exactly. Moreover, it recovers the communities exactly with high probability all the way down to the information-theoretic threshold in $\mathcal{O}(n^{1+1/\log\log n})$ time in the considered SBM. Besides, \citet{gao2017achieving} proposed a two-stage algorithm that needs a weakly consistent initialization and refines it by optimizing the local penalized maximum likelihood function for each node separately. In the considered SBM, their proposed method achieves exact recovery at the information-theoretic limit in polynomial time. We refer the reader to \citet{amini2013pseudo,zhang2016minimax,zhou2020rate} for more likelihood-based approaches.
Recently, \citet{wang2020nearly} proposed a non-convex approach that involves initializing a generalized power method with a power method for solving a regularized ML formulation of the binary symmetric SBM. Their method runs in nearly-linear time and is among the most efficient methods in the literature that achieve exact recovery at the information-theoretic limit. There are still many other interesting methods for the considered problem, such as the mean field method in \citet{zhang2020theoretical}, a variant of Lloyd's algorithm in \citet{lu2016statistical}, and the modularity-based method in \citet{cohen2020power}. Due to space limitations, we shall not discuss them further here. \subsection{Our Contribution}\label{sec:oc} In this work, we propose a simple and scalable method that can achieve the optimal exact recovery threshold in the symmetric SBM. Our strategy is simply to apply the projected power method to tackle Problem \eqref{MLE} directly. Specifically, it starts with an initial point that satisfies a certain partial recovery condition and then applies projected power iterations to refine the iterates successively. In the logarithmic degree regime of the symmetric SBM, we prove that the proposed method achieves exact recovery at the information-theoretic limit. Moreover, we show that it takes $\mathcal{O}(\log n/\log\log n)$ projected power iterations to obtain the underlying communities. Besides, we demonstrate that each projected power iteration is equivalent to a minimum-cost assignment problem (MCAP), which can be solved in $\mathcal{O}(n\log n)$ time. Together, these results imply that the proposed method runs in $\mathcal{O}\left(n\log^2n/\log\log n\right)$ time with a qualified initialization. This is competitive with the most efficient algorithms in the literature for the considered problem.
It is worth noting that despite the simplicity of the proposed method, it only requires a partial recovery condition for the initial point, which is generally milder than the almost exact recovery conditions needed by most existing two-stage algorithms; see, e.g., \citet[Algorithm 2]{gao2017achieving}, \citet[Algorithm 2]{yun2014accurate}, and \citet[Sphere-comparison algorithm]{abbe2015community}. Our work also contributes to the emerging area of provable non-convex methods. In particular, our result indicates that the ML formulation of the symmetric SBM, albeit non-convex and discrete, can be solved via a carefully designed, yet simple, iterative procedure. Prior to our work, such discrete optimization problems have usually been handled either by SDP relaxation (see, e.g., \citet{amini2018semidefinite,fei2018exponential}) or by non-convex but continuous relaxation (see, e.g., \citet{bandeira2016low}). We believe that the proposed non-convex approach can be extended to other structured discrete optimization problems; cf. \citet{liu2017discrete}. The rest of this paper is organized as follows. In Section \ref{sec:preli}, we introduce the proposed method for exact community recovery and present the main results of this paper. In Section \ref{sec:pf-main}, we prove these main results. We then report some numerical results in Section \ref{sec:num} and conclude in Section \ref{sec:con}. \emph{Notation.} Let $\mathbb{R}^n$ be the $n$-dimensional Euclidean space and $\|\cdot\|$ be the Euclidean norm. We write matrices in capital bold letters like $\bm{A}$, vectors in bold lower case like $\bm{a}$, and scalars as plain letters. Given a matrix $\bm{A}$, we use $\|\bm{A}\|$ to denote its spectral norm, $\|\bm{A}\|_F$ its Frobenius norm, and $a_{ij}$ its $(i,j)$-th element. Given a positive integer $n$, we denote by $[n]$ the set $\{1,\dots,n\}$. Given a discrete set $S$, we denote by $|S|$ the cardinality of $S$.
We use $\bm{1}_n$ and $\bm{E}_n$ to denote the $n$-dimensional all-one vector and $n\times n$ all-one matrix, respectively. We use $\Pi_K$ to denote the collection of all $K\times K$ permutation matrices. We use $\mathbf{Bern}(p)$ to denote the Bernoulli random variable with mean $p$. \section{Preliminaries and Main Results}\label{sec:preli} In this section, we formally set up the considered problem in the SBM, present the proposed algorithm, and give a summary of our main results. To proceed, we introduce clustering matrices for representing community structures and the symmetric stochastic block model (SBM) for generating observed graphs. \begin{defi}\label{memship-matrix} We say that $\bm{H} \in \mathbb{R}^{n\times K}$ is a clustering matrix if it takes the form of \begin{align}\label{form-H} h_{ik} = \begin{cases} 1,\quad \text{if}\ i \in \mathcal{I}_k, \\ 0,\quad \text{otherwise} \end{cases} \end{align} \vskip -0.1in for some $\mathcal{I}_1,\dots,\mathcal{I}_K$ such that ${\cup}_{k=1}^K\mathcal{I}_k = [n]$ and $\mathcal{I}_k \cap \mathcal{I}_\ell = \emptyset$ for all $1 \le k \neq \ell \le K$. Moreover, we say that $\bm{H} \in \mathbb{R}^{n\times K}$ is a balanced clustering matrix if it satisfies the above requirement with $|\mathcal{I}_k|=m=n/K$ for all $k\in [K]$. For simplicity, we use $\mathbb{M}_{n,K}$ and $\mathbb{H}_{n,K}$ to denote the collections of all such clustering and balanced clustering matrices, respectively. \end{defi} Intuitively, a family of sets $\mathcal{I}_1,\dots,\mathcal{I}_K$ represents a partition of $n$ nodes into $K$ communities such that $h_{ik}=1$ if node $i$ belongs to the community encoded by $\mathcal{I}_k$ and $h_{ik}=0$ otherwise. Given a fixed $\bm{H} \in \mathbb{M}_{n,K}$, $\bm{H}\bm{Q}$ for any $\bm{Q} \in \Pi_K$ represents the same community structure as $\bm{H}$ up to a permutation of the labels.
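As a concrete illustration of Definition \ref{memship-matrix}, the following minimal sketch (in Python with \textsf{numpy}; our experiments use MATLAB, and all names here are ours) builds a clustering matrix from a label vector and checks the label-permutation invariance:

```python
import numpy as np

def clustering_matrix(labels, K):
    """Build H with h_{ik} = 1 iff node i belongs to community k."""
    n = len(labels)
    H = np.zeros((n, K))
    H[np.arange(n), labels] = 1.0
    return H

# Four nodes split into K = 2 balanced communities (m = n/K = 2 nodes each).
H = clustering_matrix(np.array([0, 1, 0, 1]), K=2)
assert (H.sum(axis=1) == 1).all()   # each row selects exactly one community
assert (H.sum(axis=0) == 2).all()   # balanced: |I_k| = m for every k

# H @ Q for a permutation matrix Q encodes the same partition with relabeled
# communities: here Q swaps the two community labels.
Q = np.array([[0.0, 1.0], [1.0, 0.0]])
HQ = H @ Q
assert (HQ == clustering_matrix(np.array([1, 0, 1, 0]), K=2)).all()
```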
\begin{defi}[Symmetric SBM]\label{SBM} Let $n \ge 2$ be the number of vertices, $K \ge 2$ be the number of communities, and $p,q\in [0,1]$ be the connectivity probabilities. Furthermore, let $\bm{H}^* \in \mathbb{H}_{n,K}$ represent an unknown partition of $n$ vertices into $K$ equal-sized communities. We say that a random graph $G$ is generated according to the symmetric SBM with parameters $(n,K,p,q)$ and $\bm{H}^*$ if $G$ has a vertex set $V=[n]$ and the elements $\{a_{ij}\}_{1\le i \le j \le n}$ of its adjacency matrix $\bm{A}$ are generated independently by \begin{align} a_{ij} \sim \left\{ \begin{aligned} \mathbf{Bern}(p),\quad & \text{if}\ \ \bm{h}_i^{*^T}\bm{h}^*_j = 1,\\ \mathbf{Bern}(q),\quad & \text{if}\ \ \bm{h}_i^{*^T}\bm{h}^*_j = 0, \end{aligned} \right. \label{eq:SBM} \end{align} where $\bm{h}_i^{*^T}$ is the $i$-th row of $\bm{H}^*$. \end{defi} Intuitively, this model states that given a true partition of $n$ vertices into $K$ unknown communities of equal size, a random graph $G$ is generated by independently connecting each pair of vertices with probability $p$ if they belong to the same community and with probability $q$ otherwise. Given one observation of such $G$, our goal is to develop a simple and scalable algorithm that outputs the true partition, i.e., $\bm{H}^*\bm{Q}$ for some $\bm{Q} \in \Pi_K$, with high probability. Since exact recovery requires the node degree to be at least logarithmic (see, e.g., \citet[Section 2.5]{abbe2017community}), we focus on the logarithmic sparsity regime of the symmetric SBM in this work, i.e., \begin{align}\label{p-q} p=\alpha\frac{\log n}{n}\quad \text{and}\quad q=\beta\frac{\log n}{n}, \end{align} where $\alpha, \beta$ are positive constants. The main ingredient in our approach is to apply the projected power method for solving Problem \eqref{MLE}.
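As an aside, sampling a graph from Definition \ref{SBM} in the regime \eqref{p-q} is straightforward; a minimal prototype (in Python with \textsf{numpy}, illustrative only and not part of our MATLAB implementation) reads:

```python
import numpy as np

def sample_sbm(labels, p, q, rng):
    """Draw a symmetric 0/1 adjacency matrix with no self-loops:
    edge probability p within a community and q across communities."""
    n = len(labels)
    same = labels[:, None] == labels[None, :]          # 1 iff h_i^{*T} h_j^* = 1
    probs = np.where(same, p, q)
    upper = np.triu(rng.random((n, n)) < probs, k=1)   # independent Bernoulli, i < j
    return (upper + upper.T).astype(float)

rng = np.random.default_rng(0)
n, K, alpha, beta = 300, 3, 10.0, 2.0
p, q = alpha * np.log(n) / n, beta * np.log(n) / n    # logarithmic degree regime
labels = np.repeat(np.arange(K), n // K)              # planted balanced partition
A = sample_sbm(labels, p, q, rng)
assert (A == A.T).all() and (np.diag(A) == 0).all()
```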
Specifically, the projected power step takes the form of \begin{align}\label{update-H} \bm{H}^{k+1} \in \mathcal{T}(\bm{A}\bm{H}^k),\ \text{for all}\ k\ge 1, \end{align} where $\mathcal{T}:\mathbb{R}^{n\times K} \rightrightarrows \mathbb{R}^{n\times K}$ denotes the projection operator onto $\mathcal{H}$; i.e., for any $\bm{C} \in \mathbb{R}^{n\times K}$, \begin{align}\label{project-H} \mathcal{T}(\bm{C}) = \mathop{\mathrm{arg\,min}}\left\{ \|\bm{H}-\bm{C}\|_F:\ \bm{H} \in \mathcal{H} \right\}. \end{align} Note that Problem \eqref{MLE} can be interpreted as a principal component analysis (PCA) problem with some structural constraints. This motivates us to propose a variant of the power iteration as in \eqref{update-H} for solving it. Indeed, many algorithms of a similar flavor for solving PCA problems with other structural constraints have appeared in the literature; see, e.g., \citet{boumal2016nonconvex,chen2018projected,deshpande2014cone,journee2010generalized}. One important step towards guaranteeing rapid convergence of the projected power method for solving Problem \eqref{MLE} is to identify a proper initial point $\bm{H}^0$, which constitutes another ingredient in our approach. Specifically, the initial point $\bm{H}^0$ is required to satisfy the following condition: \begin{align}\label{init-partial} \bm{H}^0 \in \mathbb{M}_{n,K}\quad \mbox{s.t.}\ \min_{\bm{Q} \in \Pi_K}\|\bm{H}^0 - \bm{H}^*\bm{Q} \|_F \le \theta\sqrt{n}, \end{align} where $\theta$ is a constant that will be specified later. We remark that the condition \eqref{init-partial} is equivalent to requiring that $\bm{H}^0$ satisfies a partial recovery condition; see, e.g., \citet[Definition 4]{abbe2017community}. We now summarize the proposed method for solving Problem \eqref{MLE} in Algorithm \ref{alg:PGD}. It starts with an initial point $\bm{H}^0$ satisfying \eqref{init-partial} and projects $\bm{H}^0$ onto $\mathcal{H}$ to make the partition balanced.
Then, it refines the iterates via projected power iterations $N$ times, where $N$ is an input parameter of the algorithm, and outputs $\bm{H}^{N+1}$. \begin{algorithm}[!htbp] \caption{Projected Power Method for Solving Problem \eqref{MLE}} \begin{algorithmic}[1] \STATE \textbf{Input:} adjacency matrix $\bm{A}$, positive integer $N$ \STATE \textbf{Initialize} an $\bm{H}^0$ satisfying \eqref{init-partial} \STATE set $\bm{H}^1 \leftarrow \mathcal{T}(\bm{H}^0)$ \FOR{$k=1,2,\dots,N$} \STATE set $\bm{H}^{k+1}\leftarrow \mathcal{T}(\bm{A}\bm{H}^k)$ \ENDFOR \STATE \textbf{Output} $\bm{H}^{N+1}$ \end{algorithmic} \label{alg:PGD} \end{algorithm} We next present the main theorem of this paper, which shows that Algorithm \ref{alg:PGD} achieves exact recovery down to the information-theoretic threshold and also provides its explicit iteration complexity bound. \begin{thm}\label{thm-1} Let $\bm{A}$ be the adjacency matrix of a realization of the random graph generated according to the symmetric SBM with parameters $(n,K,p,q)$ and a planted partition $\bm{H}^* \in \mathbb{H}_{n,K}$. Suppose that $p,q$ satisfy \eqref{p-q} with $\sqrt{\alpha}-\sqrt{\beta} > \sqrt{K}$ and that $n$ is sufficiently large. Then, there exists a constant $\gamma>0$, whose value depends only on $\alpha$, $\beta$, and $K$, such that the following statement holds with probability at least $1-n^{-\Omega(1)}$: If the initial point satisfies the partial recovery condition in \eqref{init-partial} with \begin{align}\label{theta} \theta = \frac{1}{4}\min\left\{\frac{1}{\sqrt{K}},\frac{\gamma\sqrt{K}}{16(\alpha-\beta)}\right\}, \end{align} then Algorithm \ref{alg:PGD} outputs a true partition in $\lceil 2\log\log n \rceil+\left\lceil \frac{2\log n}{\log\log n} \right\rceil+2$ projected power iterations. \end{thm} Before we proceed, some remarks are in order. First, an $\bm{H}^0$ satisfying \eqref{init-partial} can be found by a host of initialization procedures in existing methods.
For example, \citet[Algorithm 2]{gao2017achieving} and \citet[Algorithm 2]{yun2014accurate} proposed spectral-clustering-based initialization procedures that can obtain an $\bm{H}^0$ satisfying \begin{align}\label{init-weak} \bm{H}^0 \in \mathbb{M}_{n,K}\ \mbox{s.t.}\ \min_{\bm{Q} \in \Pi_K}\|\bm{H}^0 - \bm{H}^*\bm{Q} \|_F \lesssim \sqrt{\frac{n}{\log n}}. \end{align} These initializations are cheap to compute and automatically fulfill the partial recovery requirement in \eqref{init-partial} when $n$ is sufficiently large. Note that compared to our projected power method, the refinement procedures in \citet[Algorithm 1]{gao2017achieving} and \citet[Algorithm 1]{yun2014accurate} are rather complicated. Besides, we remark that \eqref{init-weak} is a condition of almost exact recovery (see, e.g., \citet[Definition 4]{abbe2017community}). It is much more stringent than \eqref{init-partial}, which is merely a condition of partial recovery. Second, as we show in Proposition \ref{prop:MCAP}, the projection in \eqref{project-H} is equivalent to a minimum-cost assignment problem (MCAP), which is a special linear programming (LP) problem and can be solved very efficiently; see \citet{tokuyama1995geometric}. We refer the reader to Section A.1 of the appendix for the formal definition of the MCAP. \begin{prop}\label{prop:MCAP} Problem \eqref{project-H} is equivalent to a minimum-cost assignment problem, which can be solved in $\mathcal{O}(K^2n\log n)$ time. \end{prop} This, together with the time complexity of computing the matrix product $\bm{A}\bm{H}$ for some $\bm{H} \in \mathbb{R}^{n\times K}$ and Theorem \ref{thm-1}, immediately implies the time complexity of Algorithm \ref{alg:PGD} with a qualified initialization. \begin{coro}\label{coro:time} Consider the setting of Theorem \ref{thm-1}.
If Algorithm \ref{alg:PGD} uses an initial point that satisfies the partial recovery condition in \eqref{init-partial} with $\theta$ in \eqref{theta}, then it outputs a true partition in $$\mathcal{O}\left(\left(K^2+3\alpha+3(K-1)\beta\right)\frac{n\log^2n}{\log\log n}\right)$$ time with probability at least $1-n^{-\Omega(1)}$. \end{coro} Finally, it is worth noting that the proposed method in Algorithm \ref{alg:PGD} can be viewed as an extension of that in \citet{wang2020nearly}, both of which are essentially the projected gradient method applied to the corresponding ML formulation. In particular, when $K=2$, the projection operators in these two works both admit a closed-form solution, which can be computed via partial sorting. Moreover, our method can perform community detection in the setting of multiple communities, i.e., $K \ge 2$, while that in \citet{wang2020nearly} only works when $K=2$. Besides, the method in \citet{wang2020nearly} requires a spectral initialization to satisfy a condition of almost exact recovery. By contrast, any point satisfying the partial recovery condition in \eqref{init-partial}, including some spectral initializations, is a qualified initialization for Algorithm \ref{alg:PGD}. \section{Proofs of Main Results}\label{sec:pf-main} In this section, we provide the proofs of our main results in Section \ref{sec:preli}. The complete proofs of the theorem, propositions, and lemmas can be found in Sections B and C of the appendix. \subsection{Analysis of the Projected Power Iteration}\label{sub-sec:pgd} In this subsection, we study the convergence behavior of the projected power iterations in Algorithm \ref{alg:PGD}. Our main idea is to show the contraction property of the projection operator $\mathcal{T}$ in the symmetric SBM.
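Before proceeding, we sketch how the projection \eqref{project-H} and the iterations of Algorithm \ref{alg:PGD} can be prototyped (in Python with \textsf{numpy}/\textsf{scipy}; this is an illustration, not the implementation used in our experiments). For simplicity, the $\mathcal{O}(K^2n\log n)$ MCAP solver of Proposition \ref{prop:MCAP} is replaced by a generic assignment solver, which is equivalent on the balanced problem but slower:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def project(C):
    """One element of T(C): maximize <C, H> over balanced clustering matrices.

    Replicating each of the K columns of C exactly m = n/K times turns the
    projection into a standard n-by-n linear assignment problem."""
    n, K = C.shape
    m = n // K
    cost = np.repeat(C, m, axis=1)  # columns k*m, ..., (k+1)*m - 1 copy column k
    rows, cols = linear_sum_assignment(cost, maximize=True)
    H = np.zeros((n, K))
    H[rows, cols // m] = 1.0        # map column slot back to its community index
    return H

def ppm(A, H0, N):
    """Projected power method: H^1 = T(H^0), then H^{k+1} in T(A H^k)."""
    H = project(H0)                 # balance the initial partition
    for _ in range(N):
        H = project(A @ H)
    return H

# Toy check: two disjoint 4-cliques with one node mislabeled on each side is
# corrected in a single projected power iteration.
A = np.kron(np.eye(2), np.ones((4, 4))) - np.eye(8)
H0 = np.zeros((8, 2))
H0[[0, 1, 2, 7], 0] = 1.0
H0[[3, 4, 5, 6], 1] = 1.0
H = ppm(A, H0, N=1)
```

In this toy run, one iteration returns the planted partition, mirroring the one-step convergence behavior analyzed below.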
Let $$\mathcal{P} = \{\bm{H}\in\mathbb{R}^{n\times K}: \bm{H}\bm{1}_K = \bm{1}_n, \bm{H}^T\bm{1}_n=m\bm{1}_K, \bm{H} \ge 0 \}.$$ To begin, we present a lemma that establishes an equivalence among the set of extreme points of this polytope, the discrete set $\mathcal{H}$, and the collection of all balanced clustering matrices $\mathbb{H}_{n,K}$. \begin{lemma}\label{lem:form-H} The following statements are equivalent: \\ (i) $\bm{H} \in \mathcal{H}$.\qquad\qquad (ii) $\bm{H}$ is an extreme point of $\mathcal{P}$. \\ (iii) $\bm{H} \in \mathbb{H}_{n,K}$. \end{lemma} It is worth noting that the proof of this lemma builds on the total unimodularity (see, e.g., \citet{heller1956extension,hoffman2010integral}) of the equality constraint matrix of the polytope $\mathcal{P}$. Equipped with this lemma, we can show that Problem \eqref{project-H} is equivalent to an LP. \begin{prop}\label{prop:LP} For any $\bm{C} \in \mathbb{R}^{n\times K}$, Problem \eqref{project-H} is equivalent to the following LP: \begin{align}\label{LP-H} \mathcal{T}(\bm{C}) = \mathop{\mathrm{arg\,max}}\left\{ \langle \bm{C},\bm{H} \rangle:\ \bm{H} \in \mathcal{P} \right\}. \end{align} \end{prop} Next, we characterize the optimal solutions of the LP in \eqref{LP-H} explicitly by exploiting the structure of the polytope $\mathcal{P}$.
\begin{lemma}\label{lem:closed-form-LP} For a matrix $\bm{C} \in \mathbb{R}^{n\times K}$, it holds that $\bm{H} \in \mathcal{T}(\bm{C})$ if and only if \begin{align*} h_{ik} = \begin{cases} 1,\ \text{if}\ i \in \mathcal{I}_k, \\ 0,\ \text{otherwise}, \end{cases} \end{align*} where $\mathcal{I}_1,\dots,\mathcal{I}_K$ satisfy (i) ${\cup}_{k=1}^K\mathcal{I}_k = [n]$, $\mathcal{I}_k \cap \mathcal{I}_\ell = \emptyset$, and $|\mathcal{I}_k| = m$ for all $1\le k \neq \ell \le K$, and (ii) there exists $\bm{w} \in \mathbb{R}^K$ such that \begin{align}\label{form-C} c_{ik} - c_{i\ell} \ge w_k - w_\ell \ge c_{jk} - c_{j\ell} \end{align} for all $i \in \mathcal{I}_k$, $j\in \mathcal{I}_\ell$, and $1\le k \neq \ell \le K$. \end{lemma} When $K=2$, let $\bm{c}_1$ and $\bm{c}_2$ denote the first and second columns of $\bm{C} \in \mathbb{R}^{n\times 2}$, respectively. In this scenario, Lemma \ref{lem:closed-form-LP} implies that solving the LP in \eqref{LP-H} boils down to finding the indices that correspond to the $n/2$ largest entries of the vector $\bm{c}_1-\bm{c}_2$, which can be done efficiently via median finding. Based on the above lemma, we can show that the projection operator $\mathcal{T}$ in \eqref{project-H} possesses a Lipschitz-like property in spite of the fact that $\mathcal{H}$ is a discrete set. \begin{lemma}\label{lem:LP-cont} Let $\delta > 0$ and $\bm{C} \in \mathbb{R}^{n\times K}$ be arbitrary, and let $m=n/K$. Suppose that there exists a family of index sets $\mathcal{I}_1,\dots,\mathcal{I}_K$ satisfying ${\cup}_{k=1}^K\mathcal{I}_k = [n]$, $\mathcal{I}_k \cap \mathcal{I}_\ell = \emptyset$, and $|\mathcal{I}_k| = m$ such that $\bm{C}$ satisfies \begin{align}\label{eq:LP-cont-cond} c_{ik} - c_{i\ell} \ge \delta \end{align} for all $i \in \mathcal{I}_k$ and $1 \le k \neq \ell \le K$.
Then, for any $\bm{V} \in \mathcal{T}(\bm{C})$, $\bm{C}^\prime \in \mathbb{R}^{n\times K}$, and $\bm{V}^\prime \in \mathcal{T}(\bm{C}^\prime)$, it holds that \begin{align}\label{rst:LP-cont} \|\bm{V} - \bm{V}^\prime\|_F \le \frac{2\|\bm{C}-\bm{C}^\prime\|_F}{\delta}. \end{align} \end{lemma} Next, we show an inequality that is useful in establishing the contraction property of the projected power iterations. \begin{lemma}\label{lem:contra} Let $\Delta =\bm{A} - \mathbb{E}[\bm{A}]$. Suppose that $\varepsilon \in (0,1/\sqrt{K})$ and $\bm{H} \in \mathcal{H}$ such that $\|\bm{H}-\bm{H}^*\bm{Q}\|_F \le \varepsilon\sqrt{n}$ for some $\bm{Q} \in \Pi_K$. Then, it holds that \begin{align*} \|\bm{A}(\bm{H}-\bm{H}^*\bm{Q})\|_F \le \left(\frac{4\varepsilon n}{\sqrt{K}}(p-q) + \|\Delta\| \right) \|\bm{H}-\bm{H}^*\bm{Q}\|_F. \end{align*} \end{lemma} Then, we present some probabilistic results that will be used for establishing the contraction property of the projected power iterations. \begin{lemma}\label{lem:spectral-norm-Delta} Let $\Delta =\bm{A} - \mathbb{E}[\bm{A}]$. There exists a constant $c_1>0$, whose value only depends on $\alpha$ and $\beta$, such that \begin{align}\label{eq:spectral-norm-Delta} \|\Delta\| \le c_1\sqrt{\log n} \end{align} holds with probability at least $1-n^{-3}$. \end{lemma} This lemma provides a spectral bound on the deviation of $\bm{A}$ from its mean. It is a direct consequence of \citet[Theorem 5.2]{lei2015consistency} and thus we omit its proof. \begin{lemma}\label{lem:tail-Bino} Let $m=n/K$ and $\alpha > \beta >0$ be constants. Suppose that $\{W_i\}_{i=1}^m$ are i.i.d.~$\mathbf{Bern}(\alpha\log n/n)$ and $\{Z_i\}_{i=1}^m$ are i.i.d.~$\mathbf{Bern}(\beta\log n/n)$ that are independent of $\{W_i\}_{i=1}^m$. Then, for any $\gamma \in \mathbb{R}$, it holds that \begin{align*} \mathbb{P}\left( \sum_{i=1}^m W_i - \sum_{i=1}^m Z_i \le \gamma \log n \right) \le n^{-\frac{(\sqrt{\alpha}-\sqrt{\beta})^2}{K}+\frac{\gamma \log(\alpha/\beta)}{2}}.
\end{align*} \end{lemma} This lemma is proved in \citet[Lemma 8]{abbe2020entrywise}. Based on this lemma, we can show that the entries of $\bm{A}\bm{H}^*$ satisfy the requirement of \eqref{eq:LP-cont-cond} in Lemma \ref{lem:LP-cont} with high probability. \begin{lemma}\label{lem:block-gap} Suppose that $\alpha > \beta > 0$ and $\bm{C}=\bm{A}\bm{H}^*$. Let $\mathcal{I}_k=\{i \in [n]: h_{ik}^* =1\}$ for all $k\in [K]$. If $\sqrt{\alpha}-\sqrt{\beta} > \sqrt{K}$, there exists a constant $\gamma> 0$, whose value depends only on $\alpha$, $\beta$, and $K$, such that for all $i \in \mathcal{I}_k$ and $1\le k \neq \ell \le K$, \begin{align}\label{rst-1:lem-block-gap} c_{ik} - c_{i\ell} \ge \gamma\log n \end{align} holds with probability at least $1-n^{-\Omega(1)}$. \end{lemma} Armed with the above results, we are now ready to show that the projected power iteration possesses a contraction property in a certain neighborhood of $\bm{H}^*\bm{Q}$ for some $\bm{Q} \in \Pi_K$. \begin{prop}\label{prop:contra-PGM} Suppose that the constants $\alpha, \beta > 0$ satisfy $\sqrt{\alpha}-\sqrt{\beta} > \sqrt{K}$ and $n > \exp(16c_1^2/\gamma^2)$. Then, the following event happens with probability at least $1-n^{-\Omega(1)}$: For all $\bm{H} \in \mathcal{H}$ and $\varepsilon \in \left(0,\min\left\{\frac{1}{\sqrt{K}},\frac{\gamma\sqrt{K}}{16(\alpha-\beta)}\right\}\right)$ such that $\|\bm{H}-\bm{H}^*\bm{Q}\|_F \le \varepsilon \sqrt{n}$ for some $\bm{Q} \in \Pi_K$, it holds that \begin{align}\label{rst:prop-PGM} \|\bm{V} - \bm{H}^*\bm{Q}\|_F \le \kappa \|\bm{H}-\bm{H}^*\bm{Q}\|_F \end{align} for any $\bm{V} \in \mathcal{T}(\bm{A}\bm{H})$, where \begin{align}\label{conv-rate} & \kappa = 4\max\left\{ \frac{4\varepsilon (\alpha-\beta)}{\gamma\sqrt{K}}, \frac{c_1}{\gamma\sqrt{\log n}} \right\} \in (0,1) \end{align} and $c_1, \gamma$ are the constants in Lemmas \ref{lem:spectral-norm-Delta} and \ref{lem:block-gap}, respectively.
\end{prop} Observe that the contraction rate $\kappa$ decreases to a quantity on the order of $1/\sqrt{\log n}$ as the iterates approach a ground truth. This implies that the better the initialization, the fewer iterations the proposed method requires to find a ground truth. The following lemma indicates that the projected power iterations exhibit one-step convergence to a ground truth. This implies the finite termination of the proposed algorithm. \begin{lemma}\label{lem:one-step-conv} Suppose that the constants $\alpha > \beta > 0$ satisfy $\sqrt{\alpha}-\sqrt{\beta} > \sqrt{K}$. Then, the following statement holds with probability at least $1-n^{-\Omega(1)}$: For all $\bm{H} \in \mathcal{H}$ such that $\|\bm{H}-\bm{H}^*\bm{Q}\|_F < \sqrt{\gamma\log n}$ for some $\bm{Q} \in \Pi_K$, it holds that \begin{align}\label{rst:lem-one-step-conv} \mathcal{T}(\bm{A}\bm{H}) = \{\bm{H}^*\bm{Q}\}, \end{align} where $\gamma > 0$ is the constant in Lemma \ref{lem:block-gap}. \end{lemma} \begin{figure*}[!htbp] \begin{minipage}[b]{0.245\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{fig-PGM-1.eps}} \centerline{(a) PPM}\medskip \end{minipage} \begin{minipage}[b]{0.245\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{fig-SDP-1.eps}} \centerline{(b) SDP}\medskip \end{minipage} \begin{minipage}[b]{0.245\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{fig-SC-1.eps}} \centerline{(c) SC}\medskip \end{minipage} \begin{minipage}[b]{0.245\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{fig-MLE-1.eps}} \centerline{(d) PMLE}\medskip \end{minipage} \vskip -0.1in \caption{Phase transition in the setting of $n=300,K=3$: The $x$-axis is $\beta$, the $y$-axis is $\alpha$, and darker pixels represent lower empirical probability of success.
The red curve is the information-theoretic threshold $\sqrt{\alpha}-\sqrt{\beta}=\sqrt{3}$.} \label{fig-1} \end{figure*} \begin{figure*}[!htbp] \begin{minipage}[b]{0.245\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{fig-PGM-2.eps}} \centerline{(a) PPM}\medskip \end{minipage} \begin{minipage}[b]{0.245\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{fig-SDP-2.eps}} \centerline{(b) SDP}\medskip \end{minipage} \begin{minipage}[b]{0.245\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{fig-SC-2.eps}} \centerline{(c) SC}\medskip \end{minipage} \begin{minipage}[b]{0.245\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{fig-MLE-2.eps}} \centerline{(d) PMLE}\medskip \end{minipage} \vskip -0.1in \caption{Phase transition in the setting of $n=600,K=6$: The $x$-axis is $\beta$, the $y$-axis is $\alpha$, and darker pixels represent lower empirical probability of success. The red curve is the information-theoretic threshold $\sqrt{\alpha}-\sqrt{\beta}=\sqrt{6}$.} \label{fig-2} \end{figure*} \subsection{Proof of Theorem \ref{thm-1}} Now, we are ready to derive the iteration complexity bound of Algorithm \ref{alg:PGD} equipped with the results in Section \ref{sub-sec:pgd}. We first provide a formal version of Theorem \ref{thm-1} and then sketch its proof. The full proof can be found in Section C of the appendix. Recall that $\theta$, $c_1$, and $\gamma$ are the constants in Theorem \ref{thm-1}, Lemma \ref{lem:spectral-norm-Delta}, and Lemma \ref{lem:block-gap}, respectively. To simplify the notations in the sequel, let \begin{align}\label{phi} \phi = \frac{c_1\sqrt{K}}{16(\alpha-\beta)}. \end{align} \begin{thm}\label{thm-2} Consider the setting of Theorem \ref{thm-1}. Suppose that \begin{align}\label{n} n > \exp\left(\max\left\{ \frac{64c_1^2}{\gamma^2}, \frac{\gamma^2}{c_1^2}, \frac{4\sqrt{2}\phi}{\sqrt{\gamma}}, \frac{256c_1^4}{\gamma^4} \right\}\right). 
\end{align} Then, the following statement holds with probability at least $1-n^{-\Omega(1)}$: If the initial point $\bm{H}^0\in \mathbb{M}_{n,K}$ satisfies \begin{align}\label{partial-recov} \|\bm{H}^0-\bm{H}^*\bm{Q}\|_F \le \theta\sqrt{n} \end{align} for some $\bm{Q} \in \Pi_K$, where $\theta$ is defined in \eqref{theta}, then Algorithm \ref{alg:PGD} outputs $\bm{H}^*\bm{Q}$ within $\lceil 2\log\log n \rceil+\left\lceil \frac{2\log n}{\log\log n} \right\rceil+2$ projected power iterations. \end{thm} \begin{proof} Suppose that the statements in Proposition \ref{prop:contra-PGM} and Lemma \ref{lem:one-step-conv} hold, which happens with probability at least $1-n^{-\Omega(1)}$ by the union bound. We first show that for all $k \ge 2$, $\bm{H}^k \in \mathbb{H}_{n,K}$ satisfies $\|\bm{H}^k-\bm{H}^*\bm{Q}\|_F \le 2\theta\sqrt{n}$ and \begin{align*} \|\bm{H}^k - \bm{H}^*\bm{Q}\|_F \le \frac{1}{2} \|\bm{H}^{k-1}-\bm{H}^*\bm{Q}\|_F, \end{align*} and it holds for $N_1=\lceil 2\log\log n \rceil + 1$ that \begin{align*} \|\bm{H}^{N_1} - \bm{H}^*\bm{Q}\|_F \le 2\phi\sqrt{\frac{n}{\log n}}. \end{align*} Next, we show that for all $k \ge 1$, $\bm{H}^{N_1+k} \in \mathbb{H}_{n,K}$ satisfies $\|\bm{H}^{N_1+k}-\bm{H}^*\bm{Q}\|_F \le 2\phi\sqrt{n/\log n}$ and \begin{align*} \|\bm{H}^{N_1+k} - \bm{H}^*\bm{Q}\|_F \le \frac{4c_1}{\gamma\sqrt{\log n}} \|\bm{H}^{N_1+k-1}-\bm{H}^*\bm{Q}\|_F, \end{align*} and it holds for $N_2=\left\lceil \frac{2\log n}{\log\log n} \right\rceil$ that \begin{align*} \|\bm{H}^{N_1+N_2} - \bm{H}^*\bm{Q}\|_F < \sqrt{\gamma\log n}. \end{align*} Once this holds, we have $\bm{H}^{N_1+N_2+1}=\bm{H}^*\bm{Q}$ by Lemma \ref{lem:one-step-conv}. Then, the desired result is established.
\end{proof} \section{Experimental Results}\label{sec:num} \begin{figure*}[t] \begin{center} \begin{minipage}[b]{0.33\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{conv-1.eps}} \centerline{(a) {\footnotesize $(\alpha,\beta,K)=(18,4,4)$}}\medskip \end{minipage} \begin{minipage}[b]{0.33\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{conv-2.eps}} \centerline{(b) {\footnotesize $(\alpha,\beta,K)=(36,8,8)$}} \medskip \end{minipage} \begin{minipage}[b]{0.33\linewidth} \centering \centerline{\includegraphics[width=\linewidth]{conv-3.eps}} \centerline{(c) {\footnotesize $(\alpha,\beta,K)=(54,12,12)$}}\medskip \end{minipage} \end{center} \vskip -0.1in \caption{Convergence performance of PPM: The $x$-axis is the number of iterations and the $y$-axis is the distance from an iterate to a ground truth, i.e., $\min_{\bm{Q} \in \Pi_K}\|\bm{H}^k-\bm{H}^*\bm{Q}\|_F$, where $\bm{H}^k$ is the $k$-th iterate generated by PPM.} \label{fig-3} \end{figure*} In this section, we report the recovery performance and numerical efficiency of our proposed method for recovering communities on both synthetic and real data sets. We also compare our method with three existing methods, namely, the SDP-based method in \citet{amini2018semidefinite}, the spectral clustering (SC) method in \citet{su2019strong}, and the local penalized ML estimation (PMLE) method in \citet{gao2017achieving}. In the implementation, unless otherwise specified, we employ \citet[Algorithm 2]{gao2017achieving} to compute the initial point $\bm{H}^0$ in Algorithm \ref{alg:PGD}.
Moreover, we use the alternating direction method of multipliers (ADMM) for solving the SDP as suggested in \citet{amini2018semidefinite},\footnote{The code can be downloaded at \url{https://github.com/aaamini/SBM-SDP}.} the MATLAB function \textsf{eigs} for computing the eigenvectors that are needed in the SC method and the first stage of the PMLE method, and the MATLAB function \textsf{kmeans} for computing the partition in the SC method. For ease of reference, we denote our method simply by PPM. All of our simulations are implemented in MATLAB R2020a on a PC running Windows 10 with 16GB memory and Intel(R) Core(TM) i5-8600 3.10GHz CPU. Our code is available at \url{https://github.com/peng8wang/ICML2021-PPM-SBM}. \vskip 0.25in \subsection{Phase Transition and Computational Time}\label{sub-sec:phase} We first conduct experiments to examine the phase transition property and running time of the aforementioned methods for recovering communities in graphs that are generated by the symmetric SBM in Definition \ref{SBM}. We have two sets of simulations. We choose $n=300,K=3$ (resp. $n=600,K=6$), and let the parameter $\alpha$ in \eqref{p-q} vary from $0$ to $30$ (resp. $60$) with increments of $0.5$ (resp. $1$) and the parameter $\beta$ in \eqref{p-q} vary from $0$ to $10$ (resp. $20$) with increments of $0.4$ (resp. $0.8$). For every pair of $\alpha$ and $\beta$, we generate $40$ instances and calculate the ratio of exactly recovering the communities for all the tested methods. The phase transition results are reported in Figures \ref{fig-1} and \ref{fig-2}. According to these figures, we can observe that all the methods exhibit a phase transition phenomenon and the recovery performance of PPM is slightly better than that of the other three methods. Moreover, Figures \ref{fig-1}(a) and \ref{fig-2}(a) indicate that PPM achieves the optimal recovery threshold, which supports the result in Theorem \ref{thm-1}.
Besides, we record the total CPU time consumed by each method for completing the phase transition experiments in Table \ref{table-1}. It can be observed that PPM is slightly better than PMLE and substantially faster than SC and SDP. \begin{table}[!htbp] \caption{Total CPU times (in seconds) of the methods in the phase transition experiments.} \vskip 0.1in \label{table-1} \begin{center} \begin{tabular}{ccccc} \toprule $\quad$ Time (s) & PPM & SDP & SC & PMLE \\ \midrule $n=300,\ K=3$ & {\bf 401} & 25887 & 1438 & 572 \\ $n=600,\ K=6$ & {\bf 1824} & 82426 & 3669 & 2661\\ \bottomrule \end{tabular} \end{center} \vskip -0.3in \end{table} \subsection{Convergence Performance}\label{sub-sec:conv} We next conduct experiments to study the convergence performance of PPM for recovering the communities in graphs generated by the symmetric SBM in Definition \ref{SBM}. In the simulations, we choose three different sets of $(\alpha,\beta, K)$ such that $\sqrt{\alpha} - \sqrt{\beta} > \sqrt{K}$ and generate graphs of dimension $n=6000$. Moreover, we generate the initial point $\bm{H}^0$ in Algorithm \ref{alg:PGD} via $\bm{H}^0 \in \mathcal{T}(\bm{G})$, where each entry of $\bm{G} \in \mathbb{R}^{n\times K}$ is drawn independently from the standard normal distribution. Let $\bm{H}^k$ denote the $k$-th iterate of PPM. In each graph, we run PPM $10$ times from different initial points and then plot the distances of the iterates to the ground truth, i.e., $\min_{\bm{Q} \in \Pi_K}\|\bm{H}^k-\bm{H}^*\bm{Q}\|_F$, against the iteration number in Figure \ref{fig-3}. It can be observed that PPM exhibits a finite termination phenomenon and converges to the ground truth within $20$ iterations even when it starts from a randomly generated initial point. This also corroborates the one-step convergence result in Lemma \ref{lem:one-step-conv} and the iteration complexity in Theorem \ref{thm-1}.
\subsection{Recovery Efficiency and Accuracy} Finally, we conduct experiments to compare the recovery efficiency and accuracy of our method with those of SDP, SC, and PMLE on real data sets. We use the data sets \emph{polbooks}, \emph{polblogs}, and \emph{football} downloaded from the SuiteSparse Matrix Collection \citep{davis2011university}.\footnote{\url{https://sparse.tamu.edu/}} For the set \emph{football}, we remove the communities whose sizes are less than $10$. Since these real networks have unbalanced communities, we modify the second constraint in \eqref{set-H} to $\bm{H}^T\bm{1}_K=\bm{\pi}$, where $\pi_k$ denotes the size of the $k$-th community for all $k\in [K]$, and then apply PPM to the resulting formulation as in Algorithm \ref{alg:PGD}. The stopping criteria for the tested methods are set as follows. For PPM, we terminate it when there exists an iterate $k\ge 6$ such that $\|\bm{H}^k-\bm{H}^{l}\|_F \le 10^{-3}$ for some $k-5\le l \le k-1$; for ADMM, we terminate it when the norm of the difference of two consecutive iterates is less than $10^{-3}$. No stopping criterion is needed for SC or PMLE, since SC employs the MATLAB function \textsf{kmeans} to do the clustering and PMLE directly assigns each vertex to the corresponding community based on the initial partition. In addition, we generate an initial point for PPM as in Section \ref{sub-sec:conv}. Then, we run each algorithm $10$ times and select the best solution (in terms of function value) as its recovery solution. Moreover, we set the maximum iteration number for PPM and ADMM to $1000$. To compare the recovery efficiency and accuracy of the tested methods, we report the total CPU time over all runs and the number of misclassified vertices (MVs) of each method in Table \ref{table-2}. 
These results, together with those in Table \ref{table-1}, demonstrate that our proposed method is comparable to these state-of-the-art methods in terms of recovery efficiency and accuracy on both synthetic and real data sets. \begin{table}[!htbp] \caption{Total CPU times (in seconds) and the number of misclassified vertices (MVs) of the methods on real data sets.} \label{table-2} \begin{center} \begin{tabular}{lcccc} \toprule Time (s) & PPM & SDP & SC & PMLE \\ \midrule \emph{polbooks} & {\bf 0.28} & 10.26 & 0.30 & 19.67\\ \emph{polblogs} & {\bf 0.02} & 2348 & 0.41 & 1.39\\ \emph{football} & {\bf 0.21} & 0.83 & 0.42 & 0.40 \\ \midrule num. of MVs & PPM & SDP & SC & PMLE \\ \midrule \emph{polbooks} & {\bf 18} & 24 & {\bf 18} & 19\\ \emph{polblogs} & {\bf 52} & 238 & 215 & 279\\ \emph{football} & 4 & {\bf 2} & {\bf 2} & 13\\ \bottomrule \end{tabular} \end{center} \end{table} \section{Concluding Remarks}\label{sec:con} In this work, we proposed a projected power method for solving the ML formulation of the symmetric SBM. We showed that, provided an initial point satisfying a mild partial recovery condition, this method achieves exact recovery down to the information-theoretic threshold and runs in $\mathcal{O}(n\log n/\log\log n)$ time in the logarithmic degree regime. This is also demonstrated by our numerical results. Moreover, the numerical results show that the proposed method remains effective even with a random initialization. One natural future direction is thus to study the convergence behavior of the proposed method under random initialization. Another direction is to extend our proposed method to other variants of the basic SBM, such as degree-corrected block models (see, e.g., \citet{gao2018community, karrer2011stochastic}), labelled SBMs (see, e.g., \citet{heimlicher2012community, yun2016optimal}), and overlapping SBMs (see, e.g., \citet{airoldi2008mixed,gopalan2013efficient}). 
\section*{Acknowledgements} This work is supported in part by CUHK Research Sustainability of Major RGC Funding Schemes project 3133236. \bibliographystyle{icml2021}
% Source: https://arxiv.org/abs/2005.04180
\title{Convex lattice polygons with all lattice points visible}
\begin{abstract}
Two lattice points are visible to one another if there exist no other lattice points on the line segment connecting them. In this paper we study convex lattice polygons that contain a lattice point such that all other lattice points in the polygon are visible from it. We completely classify such polygons, show that there are finitely many of lattice width greater than $2$, and computationally enumerate them. As an application of this classification, we prove new obstructions to graphs arising as skeleta of tropical plane curves.
\end{abstract}
\section{Introduction} A lattice point in $\mathbb{R}^2$ is any point with integer coordinates, and a lattice polygon is any polygon whose vertices are lattice points. We say that two distinct lattice points $p=(a,b)$ and $q=(c,d)$ are visible to one another if the line segment $\overline{pq}$ contains no lattice points besides $p$ and $q$, or equivalently if $\gcd(a-c,b-d)=1$; by convention we say that any $p$ is visible from itself. Points visible from the origin $O=(0,0)$ are called visible points, with all other points being called invisible. The properties of visible and invisible points have been subject to a great deal of study over the past century, as surveyed in \cite[\S 10.4]{research_book}. The question of which structures can appear among visible points, invisible points, or some prescribed combination thereof was studied in \cite{herzog-stewart}, where it was proved that one can find a copy of any convex lattice polygon (indeed, any arrangement of finitely many lattice points) consisting entirely of invisible points. \begin{figure}[hbt] \centering \includegraphics{several_panoptigons.pdf} \caption{Three panoptigons, with a panoptigon point circled and lines of sight illustrated; the middle polygon has a second panoptigon point, namely the bottom vertex} \label{figure:several_panoptigons} \end{figure} In this paper we pose and answer a somewhat complementary question: which convex lattice polygons including the origin contain only {visible} lattice points? We define a \emph{panoptigon}\footnote{This name is modeled off of \emph{panopticon}, an architectural design that allows for one position to observe all others. It comes from the Greek word \emph{panoptes}, meaning ``all seeing''.} to be a convex lattice polygon $P$ containing a lattice point $p$ such that all other lattice points in $P$ are visible from $p$. We call such a $p$ a \emph{panoptigon point} for $P$. 
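Both visibility and the panoptigon property are straightforward to verify by brute force for small polygons: visibility is a single gcd computation, and a panoptigon point can be sought by exhaustive search over the lattice points of $P$. A Python sketch (the containment test assumes the vertices are listed in counterclockwise order):

```python
from math import gcd

def visible(p, q):
    """Lattice points p and q are mutually visible iff the segment pq
    contains no other lattice points, i.e. gcd(|a-c|, |b-d|) = 1."""
    return gcd(abs(q[0] - p[0]), abs(q[1] - p[1])) == 1

def _in_hull(p, vertices):
    # p lies in the convex polygon iff it is on or to the left of
    # every edge, with vertices listed counterclockwise
    n = len(vertices)
    for i in range(n):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
        if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) < 0:
            return False
    return True

def lattice_points(vertices):
    """All lattice points of the convex polygon with the given vertices
    (brute force over the bounding box; fine for small examples)."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    return [(x, y)
            for x in range(min(xs), max(xs) + 1)
            for y in range(min(ys), max(ys) + 1)
            if _in_hull((x, y), vertices)]

def is_panoptigon(vertices):
    """Check whether some lattice point sees all others."""
    pts = lattice_points(vertices)
    return any(all(visible(p, q) for q in pts if q != p) for p in pts)
```

For example, the triangle with vertices $(0,0)$, $(1,0)$, $(5,1)$ from the text is a panoptigon, while the check confirms the pattern of Lemma \ref{lemma:panoptigon_g0} that the trapezoid $T_{a,b}$ fails to be one once $a\geq 3$.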
Thus up to translation, a panoptigon is a convex lattice polygon containing the origin such that every point in $P\cap\mathbb{Z}^2$ is a visible point. Three panoptigons are pictured in Figure \ref{figure:several_panoptigons}, each with a panoptigon point and its lines of sight highlighted; note that the panoptigon point need not be unique. One can quickly see that there exist infinitely many panoptigons; for instance, the triangles with vertices at $(0,0)$, $(1,0)$, and $(a,1)$ are panoptigons for any value of $a$. However, this is not an interesting family of examples since any two of these triangles are \emph{equivalent}, a notion made precise below. \begin{defn} A \emph{unimodular transformation} is an integer affine map $t:\mathbb{R}^2\rightarrow\mathbb{R}^2$ that preserves the integer lattice $\mathbb{Z}^2$; any such map is of the form $t(p)=Ap+b$, where $A$ is a $2\times 2$ integer matrix with determinant $\pm1$ and $b\in\mathbb{Z}^2$ is a translation vector. We say that two lattice polygons $P$ and $Q$ are \emph{equivalent} if there exists a unimodular transformation $t$ such that $t(P)=Q$. \end{defn} It turns out that there are infinitely many panoptigons even up to equivalence: note that the triangle with vertices at $(0,0)$, $(0,-1)$, and $(b,-1)$ is a panoptigon for every positive integer $b$, and any two such triangles are pairwise inequivalent since they have different areas. We can obtain nicer results if we stratify polygons according to the \emph{lattice width} of a polygon $P$, the minimum integer $w$ such that there exists a polygon $P'$ equivalent to $P$ in the horizontal strip $\mathbb{R}\times [0,w]$. Although there are infinitely many panoptigons of lattice widths $1$ and $2$, we can still classify them completely, as presented in Lemmas \ref{lemma:panoptigon_g0} and \ref{lemma:panoptigon_g1}. Once we reach lattice width $3$ or more, we obtain the following powerful result. 
\begin{theorem}\label{theorem:at_most_13} Let $P$ be a panoptigon with lattice width $\textrm{lw}(P)\geq 3$. Then $|P\cap\mathbb{Z}^2|\leq 13$. \end{theorem} Since there are only finitely many lattice polygons with a fixed number of lattice points up to equivalence \cite[Theorem 2]{lz}, it follows that there are only finitely many panoptigons $P$ with $\textrm{lw}(P)\geq 3$. In Appendix \ref{section:appendix} we detail computations to enumerate all such lattice polygons. This allows us to determine that there are exactly $73$ panoptigons of lattice width $3$ or more. One is the triangle of degree $3$, which has a single interior lattice point; and the other $72$ are non-hyperelliptic, meaning that the convex hull of their interior lattice points is two-dimensional. As an application of our classification of panoptigons, we prove new results about {tropically planar graphs} \cite{small2017_tpg}. These are $3$-regular, connected, planar graphs that arise as skeletonized versions of dual graphs of regular, unimodular triangulations of lattice polygons. We often stratify tropically planar graphs by their first Betti number, also called their genus. If $G$ is a tropically planar graph arising from a triangulation of a lattice polygon $P$, then the genus of $G$ is equal to the number of interior lattice points of $P$. We prove a new criterion for ruling out certain graphs from being tropically planar, notable in that the graphs it applies to are $2$-edge-connected, unlike those ruled out by most existing criteria; this resolves an open question posed in \cite[\S 5]{small2017_tpg}. We say that a planar graph $G$ is a \emph{big face graph} if for every planar embedding of $G$, there is a bounded face sharing an edge with all other bounded faces. \begin{theorem}\label{theorem:big_face_graphs} If $G$ is a big face graph of genus $g\geq 14$, then $G$ is not tropically planar. \end{theorem} The idea behind the proof of this theorem is as follows. 
If a big face graph $G$ is tropically planar, then it is dual to a regular unimodular triangulation of a lattice polygon $P$. One of the interior lattice points $p$ of $P$ must be connected to all the other interior lattice points, so that the bounded face dual to $p$ can share an edge with all other bounded faces. Thus, the convex hull of the interior lattice points of $P$ must be a panoptigon. If that panoptigon has lattice width $3$ or more, then it can have at most $13$ lattice points, and so $G$ cannot have $g\geq 14$. For the case that the lattice width of the interior panoptigon is smaller, we need an understanding of which polygons of lattice width $1$ or $2$ can appear as the interior lattice points of another lattice polygon. We obtain this in Propositions \ref{prop:lattice_width_3_maximal} and \ref{prop:lattice_width_4_maximal}, and can once again bound the genus of $G$. In fact, if we are willing to rely on our computational enumeration of all panoptigons with lattice width at least $3$, then we can improve this result to say that big face graphs of genus $g\geq 12$ are not tropically planar. We will see that this bound is sharp. Our paper is organized as follows. In Section \ref{section:lattice_polygons} we present background on lattice polygons, including a description of all polygons of lattice width at most $2$. In Section \ref{section:panoptigons} we classify all panoptigons. In Section \ref{section:lw3_and_4} we classify all maximal polygons of lattice width $3$ or $4$. Finally, in Section \ref{section:big_face_graphs} we prove Theorem \ref{theorem:big_face_graphs}. Our computational results are then summarized in Appendix \ref{section:appendix}. \medskip \noindent \textbf{Acknowledgements.} The authors thank Michael Joswig for many helpful conversations on tropically planar graphs, and for comments on an earlier draft of this paper, as well as two anonymous reviewers for many helpful suggestions and corrections. 
We would also like to thank Benjamin Lorenz, Andreas Paffenholz and Lars Kastner for helping with uploading our results to polyDB. Ralph Morrison was supported by the Max Planck Institute for Mathematics in the Sciences, and by the Williams College Class of 1945 World Travel Fellowship. Ayush Kumar Tewari was supported by the Deutsche Forschungsgemeinschaft (SFB-TRR 195 “Symbolic Tools in Mathematics and their Application”). \section{Lattice polygons}\label{section:lattice_polygons} In this section we recall important terminology and results regarding lattice polygons. This includes the notion of maximal polygons, and of lattice width. Throughout we will assume that $P$ is a two-dimensional convex lattice polygon, unless otherwise stated. The \emph{genus} of a polygon $P$ is the number of lattice points interior to $P$. A key fact is that for fixed $g\geq 1$, there are only finitely many lattice polygons of genus $g$, up to equivalence \cite[Theorem 9]{Castryck2012}. We refer to the convex hull of the $g$ interior points of $P$ as the \emph{interior polygon of $P$}, denoted $P_\textrm{int}$. If $\dim(P_\textrm{int})=2$, we call $P$ \emph{non-hyperelliptic}; if $\dim(P_\textrm{int})\leq 1$, we call $P$ \emph{hyperelliptic}. This terminology, due to \cite{Castryck2012}, is inspired by hyperelliptic curves, whose Newton polygons have all interior points collinear. We say a lattice polygon $P$ is a \emph{maximal polygon} if it is maximal with respect to containment among all lattice polygons containing the same set of interior lattice points. In the case that $P$ is non-hyperelliptic, there is a strong relationship between $P$ and $P_\textrm{int}$. Let $\tau_1,\ldots,\tau_n$ be the one-dimensional faces of a (two-dimensional) lattice polygon $Q$. 
Then $Q$ can be defined as an intersection of half-planes: \[Q=\bigcap_{i=1}^n\mathcal{H}_{\tau_i},\] where $\mathcal{H}_{\tau}=\{(x,y)\in\mathbb{R}^2\,|\,a_\tau x+b_\tau y\leq c_\tau\}$ is the set of all points on the same side of the line containing $\tau$ as $Q$. Without loss of generality, we assume that $a_\tau,b_\tau,c_\tau\in\mathbb{Z}$ with $\gcd(a_\tau,b_\tau)=1$. With this convention, we define \[\mathcal{H}^{(-1)}_{\tau}=\{(x,y)\in\mathbb{R}^2\,:\,a_\tau x+b_\tau y\leq c_\tau+1\},\] and from there define the \emph{relaxed polygon} of $Q$ as \[Q^{(-1)}:=\bigcap_{i=1}^n\mathcal{H}^{(-1)}_{\tau_i}.\] We can think of $Q^{(-1)}$ as the polygon we would get by ``moving out'' the edges of $Q$. It is worth remarking that $Q^{(-1)}$ need not be a lattice polygon. We denote by $\tau_i^{(-1)}$ the intersection of $Q^{(-1)}$ with the boundary line $a_{\tau_i} x+b_{\tau_i} y= c_{\tau_i}+1$ of $\mathcal{H}_{\tau_i}^{(-1)}$. It is not necessarily the case that $\tau_i^{(-1)}$ is a one-dimensional face of $Q^{(-1)}$; however, if $Q^{(-1)}$ is a lattice polygon, then $\tau_i^{(-1)}$ must contain at least one lattice point, as proved in \cite[Lemma 2.2]{small2017_dim}. Examples where $Q^{(-1)}$ is not a lattice polygon, and where $Q^{(-1)}$ is a lattice polygon but an edge has collapsed, are illustrated in Figure \ref{figure:examples_relaxed_polygons}. There is an important case in which $Q^{(-1)}$ is guaranteed to be a lattice polygon, namely when $Q=P_\textrm{int}$ for some non-hyperelliptic lattice polygon $P$. \begin{figure}[hbt] \centering \includegraphics{examples_relaxed_polygons.pdf} \caption{Two lattice polygons, one with a relaxed polygon with a non-lattice vertex marked; and one with a collapsed edge in the relaxed (lattice) polygon} \label{figure:examples_relaxed_polygons} \end{figure} \begin{prop}[\cite{Koelman}, \S 2.2]\label{prop:interior_maximal} Let $P$ be a non-hyperelliptic lattice polygon, with interior polygon $P_\textrm{int}$. 
Then $P_\textrm{int}^{(-1)}$ is a lattice polygon containing $P$ whose interior polygon is also $P_\textrm{int}$. In particular, $P_\textrm{int}^{(-1)}$ is the unique maximal polygon with interior polygon $P_\textrm{int}$. \end{prop} If we are given a polygon $Q$ and we wish to know if there exists a lattice polygon $P$ with $P_\textrm{int}=Q$, it therefore suffices to compute the relaxed polygon $Q^{(-1)}$, and to check whether its vertices have integral coordinates. This might fail because two adjacent edges $\tau_i$ and $\tau_{i+1}$ of $Q$ are relaxed to intersect at a non-integral vertex of $Q^{(-1)}$; we also might have that some $\tau_i^{(-1)}$ is completely lost, which cannot happen when $Q^{(-1)}$ is a lattice polygon by \cite[Lemma 2.2]{small2017_dim}. Careful consideration of these obstructions will be helpful in classifying the maximal polygons of lattice widths $3$ and $4$ in Section \ref{section:lw3_and_4}. An important tool in studying lattice polygons is the notion of \emph{lattice width}. Let $P$ be a non-empty lattice polygon, and let $v=\langle a,b\rangle$ be a lattice direction with $\gcd(a,b)=1$. The \emph{width of $P$ with respect to $v$} is the smallest integer $d$ for which there exists $m\in \mathbb{Z}$ such that the strip \[m\leq ay-bx\leq m+d\] contains $P$. We denote this $d$ as $w(P,v)$. The \emph{lattice width of $P$} is the minimal width over all possible choices of $v$: \[\textrm{lw}(P)=\min_vw(P,v).\] Any $v$ which achieves this minimum is called a \emph{lattice width direction for $P$}. Equivalently, $\textrm{lw}(P)$ is the smallest $d$ such that there exists a lattice polygon $P'$ equivalent to $P$ with $P'\subset \mathbb{R}\times[0,d]$. We recall the following result connecting the lattice widths of a polygon and its interior polygon. Let $T_d=\textrm{conv}((0,0),(d,0),(0,d))$ denote the standard triangle of degree $d$. 
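For small polygons these definitions can be checked directly: $w(P,v)$ for $v=\langle a,b\rangle$ is the range of the linear form $ay-bx$ over the vertices of $P$, and $\textrm{lw}(P)$ is then a minimum over primitive directions. The following Python sketch searches directions in a small box, which suffices for the examples in this paper but is a heuristic cutoff rather than a certified bound:

```python
from math import gcd

def width(vertices, v):
    """Width of a lattice polygon with respect to a primitive lattice
    direction v = (a, b): the range of the form a*y - b*x over the
    vertices, i.e. the smallest d with m <= a*y - b*x <= m + d."""
    a, b = v
    vals = [a * y - b * x for (x, y) in vertices]
    return max(vals) - min(vals)

def lattice_width(vertices, bound=10):
    """Brute-force lattice width: minimize width(P, v) over primitive
    directions with |a| <= bound and 0 <= b <= bound.  Directions
    (a, b) and (-a, -b) give equal widths, so b >= 0 suffices.  The
    search radius `bound` is a heuristic, adequate for small polygons."""
    best = None
    for a in range(-bound, bound + 1):
        for b in range(0, bound + 1):
            if gcd(abs(a), abs(b)) == 1:
                w = width(vertices, (a, b))
                best = w if best is None else min(best, w)
    return best
```

As a sanity check against the surrounding text, the standard triangle $T_d$ has lattice width $d$, and the trapezoids $T_{a,b}$ of the next theorem have lattice width $1$.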
\begin{lemma}[Theorem 4 in \cite{castryckcools-gonalities}]\label{lemma:lw_facts} For a lattice polygon $P$ we have $\textrm{lw}(P)=\textrm{lw}(P_{\textrm{int}})+2$, unless $P$ is equivalent to $T_d$ for some $d\geq 2$, in which case $\textrm{lw}(P)=\textrm{lw}(P_{\textrm{int}})+3=d$. \end{lemma} The following result tells us precisely which polygons have lattice width $1$ or $2$. It is a slight reworking of a result due to \cite{Koelman}, also presented in \cite[Theorem 10]{Castryck2012}. \begin{theorem}\label{theorem:lw_012} Let $P$ be a two-dimensional lattice polygon. If $\textrm{lw}(P)=1$, then $P$ is equivalent to \[T_{a,b}:=\textrm{conv}((0,0),(0,1),(a,1),(b,0))\] for some $a,b\in\mathbb{Z}$ with $0\leq a\leq b$ and $b\geq 1$. If $\textrm{lw}(P)=2$, then up to equivalence either $P=T_2$; or $g(P)=1$ and $P\neq T_3$ (all such polygons are illustrated in Figure \ref{figure:g1_lw2}); or $g(P)\geq 2$. In the latter case we have $\frac{1}{6}(g+3)(2g^2+15g+16)$ polygons, sorted into three types: \begin{itemize} \item Type 1: \[\includegraphics{hyp_type_1}\] where $g\leq i\leq 2g$. \item Type 2: \[\includegraphics{hyp_type_2}\] where $0\leq i\leq g$ and $0\leq j\leq i$; or $g<i\leq 2g+1$ and $0\leq j\leq 2g-i+1$. \item Type 3: \[\includegraphics{hyp_type_3}\] where $0\leq k\leq g+1$ and $0\leq i\leq g+1-k$ and $0\leq j\leq i$; or $0\leq k\leq g+1$ and $g+1-k<i\leq 2g+2-2k$ and $0\leq j\leq 2g-i-2k+1$. \end{itemize} \begin{figure}[hbt] \centering \includegraphics{g1_lw2.pdf} \caption{The $14$ genus $1$ polygons with lattice width $2$} \label{figure:g1_lw2} \end{figure} \end{theorem} \begin{proof}The classification proved in \cite{Koelman} was similar, except with polygons sorted by genus ($g=0$, $g=1$, and $g\geq 2$ with all interior lattice points collinear) rather than by lattice width. We can translate their work into the desired result as follows. 
For $\textrm{lw}(P)=1$, we know $P$ has no interior lattice points, so $g=0$; all polygons of genus $0$ besides $T_2$ have lattice width $1$. By \cite{Koelman} all genus $0$ polygons besides $T_2$ are equivalent to $T_{a,b}$ for some $a,b\in\mathbb{Z}$ with $0\leq a\leq b$ and $b\geq 1$. For $\textrm{lw}(P)=2$, we deal with the three cases of $g=0$, $g=1$, and $g\geq 2$. If $g=0$, then the only polygon of lattice width $2$ is $T_2$. If $P$ is a polygon with genus $g=1$, then by Lemma \ref{lemma:lw_facts} we know that $\textrm{lw}(P)=\textrm{lw}(P_\textrm{int})+2=0+2=2$ unless $P$ is equivalent to $T_d$ for some $d$. The only value of $d$ such that $T_d$ has genus $1$ is $d=3$, so every genus $1$ polygon except $T_3$ has lattice width $2$. Finally, suppose $P$ is a polygon of lattice width $2$ and genus $g\geq 2$. Since $\textrm{lw}(T_d)=d$ and $g(T_2)=0$, we know $P\neq T_d$ for any $d$, and so $\textrm{lw}(P_\textrm{int})=\textrm{lw}(P)-2=2-2=0$. It follows that all the $g$ interior lattice points of $P$ must be collinear, and so $P$ is hyperelliptic. Conversely, if $P$ is a hyperelliptic polygon of genus $g\geq 2$, by definition the interior polygon $P_\textrm{int}$ has lattice width $0$. Since no triangle $T_d$ has genus $g\geq 2$ with all its interior points collinear we may apply Lemma \ref{lemma:lw_facts} to conclude that $\textrm{lw}(P)=\textrm{lw}(P_\textrm{int})+2=2$. This means that for polygons of genus $g\geq 2$, being hyperelliptic is equivalent to having lattice width $2$. Combined with the classification of hyperelliptic polygons in \cite{Koelman}, this completes the proof. \end{proof} A counterpart of lattice width is \emph{lattice diameter}. 
Following \cite{bf}, the lattice diameter $\ell(P)$ is the length of the longest lattice line segment contained in the polygon $P$: \[\ell(P)=\max\{|L\cap P\cap\mathbb{Z}^2|-1\,:\,\textrm{$L$ is a line}\}.\] We define a \emph{lattice diameter direction} $\left<a,b\right>$ to be one such that there exists a line $L$ with slope vector $\left<a,b\right>$ with $|L\cap P\cap\mathbb{Z}^2|-1=\ell(P)$. We remark that there exist other works where lattice diameter is defined as the largest number of collinear lattice points in the polygon $P$ \cite{alarcon}; this is simply one more than the convention we set above. The following result relates $\ell(P)$ to $\textrm{lw}(P)$. \begin{theorem}[\cite{bf}, Theorem 3] We have $\textrm{lw}(P)\leq \lfloor\frac{4}{3}\ell(P)\rfloor+1$. \end{theorem} We now present background material on triangulations and tropical curves. Assume for the remainder of the section that $P$ is a lattice polygon of genus $g\geq 2$. A \emph{unimodular triangulation} of $P$ is a subdivision of $P$ into lattice triangles of area $\frac{1}{2}$ each. Such a triangulation $\Delta$ is called \emph{regular} if there exists a height function $\omega:P\cap\mathbb{Z}^2\rightarrow \mathbb{R}$ inducing $\Delta$. This means that $\Delta$ is the projection of the lower convex hull of the image of $\omega$ back onto $P$. See \cite{triangulations} for details on regular triangulations. Given a regular unimodular triangulation $\Delta$ of a lattice polygon $P$, we can consider the \emph{weak dual graph} of $\Delta$, which consists of $1$ vertex for each elementary triangle, with two vertices connected if and only if the corresponding triangles share an edge. Each vertex in this graph has degree $1$, $2$, or $3$, depending on how many edges the corresponding triangle has on the boundary of $P$. We transform the weak dual graph of $\Delta$ into a $3$-regular graph as follows: first, iteratively delete any $1$-valent vertices and their attached edges. 
This will yield a graph with all vertices of degree $2$ or $3$. Remove each degree $2$ vertex by concatenating the two edges incident to it. Since we have assumed that $g(P)\geq 2$, the end result is a $3$-regular graph $G$ (with loops and parallel edges allowed). We call $G$ the \emph{skeleton} associated to $\Delta$. Any $G$ that arises from such a procedure is called a \emph{tropically planar graph}. An example of a regular unimodular triangulation, the weak dual graph, and the tropically planar skeleton are pictured in Figure \ref{figure:tropically_planar_full}. Note that there is a one-to-one correspondence between the interior lattice points of $P$ and the bounded faces of $G$ in this embedding, where two faces of $G$ share an edge if and only if the corresponding interior lattice points are connected by an edge in $\Delta$. \begin{figure}[hbt] \centering \includegraphics[scale=0.6]{tropically_planar_full.pdf} \caption{A regular unimodular triangulation of a polygon, the weak dual graph of the triangulation, and the corresponding tropically planar skeleton} \label{figure:tropically_planar_full} \end{figure} It is worth remarking that we could still construct a graph $G$ from a non-regular triangulation. The reason that we insist that $\Delta$ is regular is so that the graph $G$ appears as a subset of a smooth tropical plane curve, which is a balanced $1$-dimensional polyhedral complex that is dual to a regular unimodular triangulation of a lattice polygon; see \cite{ms}. (Indeed, the regularity is necessary if we wish to endow a skeleton with the structure of a \emph{metric graph}, with lengths assigned to its edges, as explored in \cite{BJMS} and \cite{small2017_dim}.) Most of the results that we prove in this paper, and that we recall for the remainder of this section, also hold if we expand to graphs that arise as dual skeleta of \emph{any} unimodular triangulation of a lattice polygon. 
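The skeletonization procedure just described is purely combinatorial and easy to implement: prune $1$-valent vertices iteratively, then suppress $2$-valent vertices by concatenating their incident edges. A Python sketch operating on an edge list (loops and parallel edges allowed in the output; we assume the input has no isolated-loop components):

```python
def skeletonize(edges):
    """Skeletonize a weak dual graph, following the two-step procedure
    in the text: (1) iteratively delete 1-valent vertices with their
    edges, (2) suppress each 2-valent vertex by concatenating its two
    incident edges.  `edges` is a list of unordered vertex pairs."""
    edges = [tuple(e) for e in edges]

    def incident(v):
        return [i for i, e in enumerate(edges) if v in e]

    def degree(v):
        # a loop at v contributes 2 to its degree
        return sum(2 if edges[i][0] == edges[i][1] else 1
                   for i in incident(v))

    # Step 1: prune leaves until none remain.
    pruning = True
    while pruning:
        pruning = False
        for v in {u for e in edges for u in e}:
            if degree(v) == 1:
                edges = [e for e in edges if v not in e]
                pruning = True
                break

    # Step 2: suppress 2-valent vertices (two distinct non-loop edges).
    smoothing = True
    while smoothing:
        smoothing = False
        for v in {u for e in edges for u in e}:
            inc = incident(v)
            if degree(v) == 2 and len(inc) == 2:
                e1, e2 = edges[inc[0]], edges[inc[1]]
                a = e1[0] if e1[1] == v else e1[1]
                b = e2[0] if e2[1] == v else e2[1]
                edges = [e for i, e in enumerate(edges) if i not in inc]
                edges.append((a, b))
                smoothing = True
                break
    return edges
```

For instance, a dumbbell with a subdivided bar and a pendant leaf collapses to the two loops joined by a single edge, a $3$-regular multigraph of genus $2$.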
The first Betti number of a tropically planar graph, also known as its genus\footnote{This terminology comes from \cite{bn} and is motivated by algebraic geometry; it is unrelated to the notion of graph genus defined in terms of embeddings on surfaces. The first Betti number of a graph is also sometimes called its \emph{cyclomatic number}.}, is equal to the number of interior lattice points of the lattice polygon $P$ giving rise to it. It is also equal to the number of bounded faces in any planar embedding of the graph. A systematic method of computing all tropically planar graphs of genus $g$ was designed and implemented in \cite{BJMS} for $g\leq 5$. The algorithm is brute-force, and works by considering all maximal lattice polygons of genus $g$, finding all regular unimodular triangulations of them, and computing the dual skeleta. These computations were pushed up to $g= 7$ in \cite{small2017_tpg}. In general there is no known method of checking whether an arbitrary graph is tropically planar short of this massive computation. A fruitful direction in the study of tropically planar graphs has been finding properties or patterns that are forbidden in such graphs, so as to quickly rule out particular examples. Since the graph before skeletonization is dual to a unimodular triangulation of a polygon, any tropically planar graph is $3$-regular, connected, and planar. Several additional constraints are summarized in the following result. \begin{theorem}[\cite{cdmy}, Proposition 4.1; \cite{small2017_tpg}, Theorem 3.4; \cite{joswigtewari}, Theorems 10 and 14] Suppose that $G$ is a $3$-regular graph of genus $g$ of one of the forms illustrated in Figure \ref{figure:forbidden_patterns}, where each gray box represents a subgraph of genus at least $1$. If $G$ is tropically planar, then it must be of either the third or the fourth form, with $g=4$ for the third form and $g\leq 5$ for the fourth form. In particular, if $g\geq 6$, then $G$ is not tropically planar. 
\end{theorem} \begin{figure}[hbt] \centering \includegraphics[scale=0.7]{forbidden_patterns.pdf} \caption{Forbidden patterns in tropically planar graphs of genus $g\geq 6$} \label{figure:forbidden_patterns} \end{figure} The proofs of these results all use the following observation: any cut-edge in a tropically planar graph must arise from a split in the dual unimodular triangulation that divides the polygon into two polygons of positive genus. From there, one argues that collections of such splits cannot appear in lattice polygons in ways that would give rise to graphs of the pictured forms. For planar graphs that are $2$-edge-connected and thus have no cut-edges, the only known general criterion to rule out tropical planarity is the notion of crowdedness \cite{morrisonhyp}. However, crowded graphs are ones that cannot be dual to \emph{any} triangulation of \emph{any} point set in $\mathbb{R}^2$, regardless of whether or not the point set comes from a convex lattice polygon; thus it is not especially interesting that crowded graphs are not tropically planar. In Section \ref{section:big_face_graphs} we will find a family of $2$-edge-connected, $3$-regular planar graphs that are not crowded but are still not tropically planar, the first known such examples. \section{A classification of all panoptigons}\label{section:panoptigons} Let $P$ be a convex lattice polygon. Recall from the introduction that $P$ is a {panoptigon} if there is a lattice point $p\in P\cap\mathbb{Z}^2$ such that every other point in $P\cap\mathbb{Z}^2$ is visible from $p$. In this section we will classify all panoptigons, stratified by a combination of genus and lattice width. We begin with the panoptigons of genus $0$. \begin{lemma}\label{lemma:panoptigon_g0} Let $P$ be a panoptigon of genus $0$. Then $P$ is one of the following polygons, up to lattice equivalence: \[\includegraphics[]{panoptigons_g0}\] where $0\leq a\leq\min\{2,b\}$. 
\end{lemma} \begin{proof} By \cite{Koelman}, any genus $0$ polygon is equivalent either to the triangle $T_2$, or to the (possibly degenerate) trapezoid $T_{a,b}$ where $0\leq a\leq b$ and $1\leq b$. The triangle of degree $2$ is a panoptigon, as any non-vertex lattice point can see every other lattice point. For $T_{a,b}$, we note that if $a\geq 3$ then the polygon is not a panoptigon: each lattice point $p$ is on a row with at least $3$ other lattice points, not all of which can be visible from $p$ since the $4$ (or more) points in that row are collinear. However, if $a\leq 2$, then a point $p$ can be chosen on the top row that can see the other $a$ points on the top row, as well as all points on the bottom row. Thus $T_{a,b}$ is a panoptigon if and only if $a\leq 2$. \end{proof} For polygons with exactly one interior lattice point, there is no obstruction to being a panoptigon. \begin{lemma}\label{lemma:panoptigon_g1} If $P$ is a polygon of genus $1$, then $P$ is a panoptigon. \end{lemma} \begin{proof} Let $p$ be the unique interior lattice point of $P$, and let $q$ be any other lattice point of $P$. Since $g(P)=1$, the point $q$ must be on the boundary. By convexity, the line segment $\overline{pq}$ must have its relative interior contained in the interior of the polygon, and so the line segment does not intersect $\partial P$ outside of $q$. Since $p$ is the only interior lattice point, we have that the only lattice points of $\overline{pq}$ are its endpoints. It follows that $q$ is visible from $p$ for all $q\in P\cap\mathbb{Z}^2-\{p\}$. We conclude that $P$ is a panoptigon with panoptigon point $p$. \end{proof} We now consider hyperelliptic polygons of genus $g\geq 2$. We will characterize precisely which of these are panoptigons based on the classification of them in Theorem \ref{theorem:lw_012} into Types 1, 2, and 3. 
Any hyperelliptic polygon can be put into one of these forms in the horizontal strip $\mathbb{R}\times[0,2]$; thus we may say a lattice point $(a,b)$ of such a polygon is at height $b$, where every point is either at height $0$, height $1$, or height $2$. \begin{lemma}\label{lemma:hyperelliptic_panoptigon} Let $P$ be a hyperelliptic polygon of genus $g\geq 2$, transformed so that it is of one of the forms presented in Theorem \ref{theorem:lw_012}. Then $P$ is a panoptigon if and only if \begin{itemize} \item $P$ is of Type 1, with $g\leq 3$; or \item $P$ is of Type 2, either with $g\leq 2$, or with $j=0$ and $0\leq i\leq 1$; or \item $P$ is of Type 3, either with $j=0$ and $i\leq 2$, with $k$ odd if $i=0$ and $k$ even if $i=2$; or with $i=0$ and $j\leq 2$, and $k$ odd if $j=0$ and $k$ even if $j=2$. \end{itemize} \end{lemma} For the reader's convenience we recall the polygons of Types 1, 2, and 3 in Figure \ref{figure:hyp_all_types}. \begin{figure}[hbt] \centering \includegraphics[scale=0.8]{hyp_all_types.pdf} \caption{Hyperelliptic polygons of Types 1, 2, and 3} \label{figure:hyp_all_types} \end{figure} \begin{proof} We start by making the following observations. If $p=(a,b)$ is a panoptigon point for a hyperelliptic polygon $P$, then there must be at most $3$ points at height $b$; and if there are exactly $3$, then $p$ must be the middle such point. We also make several remarks in the case that $b\in\{0,2\}$. There are no obstructions to a point at height $b$ seeing a point at height $1$, so we will not concern ourselves with this. Choose $b'\in\{0,2\}$ distinct from $b$, and suppose height $b'$ has $2$ or more lattice points; then two of those points have the form $q=(a',b')$ and $q'=(a'+1,b')$. We claim that $p$ cannot view both $q$ and $q'$. The midpoints of the line segments $\overline{pq}$ and $\overline{pq'}$ have coordinates $\left(\frac{a+a'}{2},1\right)$ and $\left(\frac{a+a'+1}{2},1\right)$, respectively.
Exactly one of $\frac{a+a'}{2}$ and $\frac{a+a'+1}{2}$ is an integer, meaning that either $q$ or $q'$ is not visible from $p$. So, if $p=(a,b)$ is a panoptigon point at height $b\in\{0,2\}$, there must be exactly one lattice point $q=(a',b')$ at height $b'\in \{0,2\}$ with $b'\neq b$; moreover, we must have that $a-a'$ is odd. We are ready to determine the possibilities for a hyperelliptic panoptigon $P$ of genus $g\geq 2$, sorted by type. \begin{itemize} \item Let $P$ be a hyperelliptic polygon of Type 1. If $g\leq 3$, then we may choose $p=(a,1)$ that can see every other point at height $1$, as well as all points at heights $0$ and $2$; in this case $P$ is a panoptigon. If $g\geq 4$, then there are at least $4$ points at height $1$. Moreover, the number of points at height $0$ is $i+1$ where $g\leq i\leq 2g$, and we have $i+1\geq 5$ since $g\geq 4$. Thus it is impossible to have at most $3$ points at one height and $1$ at another. This means that for $g\geq 4$, $P$ cannot be a panoptigon. \item Let $P$ be a hyperelliptic polygon of Type 2. If $g=2$, then $P$ has exactly three points at height $1$, and we can choose the middle point as a panoptigon point. Now assume $g\geq 3$; we cannot choose a panoptigon point at height $1$, since there are $g+1\geq 4$ points at that height. To avoid having $4$ points on both the top and bottom rows we need $0\leq i\leq g$ and $0\leq j\leq i$; and one of $i$ and $j$ must be $0$, so we need $j=0$ since $j\leq i$. From there we need at most $3$ lattice points on the bottom row, so $0\leq i\leq 2$. If $i=2$, then the only possible panoptigon point is the middle one on the bottom row, namely $(1,0)$; but this point cannot see $(1,2)$, a contradiction. Thus $0\leq i\leq 1$; note that in either case $(0,0)$ can serve as a panoptigon point. \item Finally, let $P$ be a hyperelliptic polygon of Type 3. We cannot have a panoptigon point at height $1$, since there are at least $g+2\geq 4$ points at that height.
If there is a panoptigon point at height $0$, then we must have at most $3$ points at height $0$ and exactly one point at height $2$; that is, we must have $j=0$ and $i\leq 2$. Moreover, we need to verify that we may choose a panoptigon point at height $0$ that can see the unique point at height $2$; this can always be done if $i=1$, but if $i=0$ then we need $k$ odd (the only possible panoptigon point is then $(0,0)$), and if $i=2$ we need $k$ even (the only possible panoptigon point is then $(1,0)$). A similar argument shows that we can choose a panoptigon point at height $2$ if and only if $i=0$ and $j\leq 2$, with $k$ odd if $j=0$ and $k$ even if $j=2$. \end{itemize} \end{proof} As with the lattice width $1$ panoptigons, we find infinitely many lattice width $2$ panoptigons, namely those of Type 2 with $j=0$ and $0\leq i\leq 1$, and those of Type 3 satisfying the conditions of Lemma \ref{lemma:hyperelliptic_panoptigon}. We have now classified all hyperelliptic panoptigons, and have found that there are infinitely many of lattice width $1$ and infinitely many of lattice width $2$. Our last step is to understand non-hyperelliptic panoptigons; with the exception of the triangle $T_3$, this is equivalent to panoptigons of lattice width $3$ or more. We are now ready to prove that the total number of lattice points of such a panoptigon is at most $13$. \begin{proof}[Proof of Theorem \ref{theorem:at_most_13}] Let us consider the lattice diameter $\ell(P)$ of $P$. We know by \cite[Theorem 1]{alarcon} that $|P\cap\mathbb{Z}^2|\leq (\ell(P)+1)^2$, so if $\ell(P)\leq 2$ we have $|P\cap\mathbb{Z}^2|\leq 9$. Thus we may assume $\ell(P)\geq 3$. Perform an $\textrm{SL}_2(\mathbb{Z})$ transformation so that $\langle 1,0\rangle$ is a lattice diameter direction for $P$, and translate the polygon so that the origin $O=(0,0)$ is a panoptigon point. Thus $P\cap\mathbb{Z}^2$ consists of $O$ and a collection of visible points.
Since $\ell(P)\geq3$ and $\langle 1,0\rangle$ is a lattice diameter direction, we know that the polygon $P$ must contain $4$ lattice points of the form $(a,b)$, $(a+1,b)$, $(a+2,b)$, and $(a+3,b)$. We claim that $b\in\{-1,1\}$. Certainly $b\neq 0$, since there are only three such points allowed in $P$: $(0,0)$ and $(\pm 1, 0)$. We also know that $b$ cannot be even: any set $\mathbb{Z}\times \{2k\}$ has every second point invisible from the origin. Suppose for the sake of contradiction that the points $(a,b)$, $(a+1,b)$, $(a+2,b)$, and $(a+3,b)$ are in $P$ with $b$ odd and $b\geq 3$ (a symmetric argument will hold for $b\leq -3$). Consider the triangle $T=\textrm{conv}(O,(a,b),\ldots,(a+3,b))$. By convexity, $T\subset P$. Consider the line segment $T\cap L$, where $L$ is the line defined by $y=b-1$. By similar triangles, the length of this line segment is $3-\frac{3}{b}$, and since $b\geq 3$ this is at least $2$. Any line segment of length at least $2$ at height $b-1$ will intersect at least two lattice points. But since $b-1$ is even and $b-1\geq 2$, at least one of these lattice points is not visible from $O$. Such a lattice point must be contained in $T$, and therefore in $P$, a contradiction. Thus we have that $b=\pm 1$. Rotating our polygon $180^\circ$ if necessary, we may assume that $b=-1$, so that the points $(a,-1),\ldots,(a+3,-1)$ are contained in $P$. It is possible that the number $k$ of lattice points on the line defined by $y=-1$ is more than $4$; up to relabelling, we may assume that $(a,-1),\ldots,(a+k-1,-1)$ are lattice points in $P$ while $(a-1,-1)$ and $(a+k,-1)$ are not, where $k\geq 4$. Applying a shearing transformation $\left(\begin{matrix}1&a+1\\0&1\end{matrix}\right)$, we may further assume that the points at height $-1$ are precisely $(-1,-1),\ldots,(k-2,-1)$. We will now make a series of arguments that rule out many lattice points from being contained in $P$.
The end result of these constraints is pictured in Figure \ref{figure:ruling_out_points}, with points labelled by the argument that rules them out. \begin{itemize} \item[(i)] The polygon $P$ has (regular) width at least $3$ at height $-1$, and width strictly smaller than $2$ at heights $2$ and $-2$, since it cannot contain two consecutive lattice points at those heights. It follows from convexity that the width of the polygon is strictly smaller than $1$ at height $-3$, and that the polygon cannot have any lattice points at all at height $-4$. It also follows that the polygon has negative width at height $8$, and so contains no lattice points at height $8$ or above. Thus every lattice point $(x,y)$ in the polygon satisfies $-3\leq y\leq 7$. \item[(ii)] We can further restrict the possible heights by showing that there can be no lattice points at height $-3$. Suppose there were such a point $(x,-3)$ in $P$. Consider the triangle $\textrm{conv}((x,-3),(-1,-1),(2,-1))$. This triangle has area $3$, so by Pick's Theorem \cite{pick} satisfies $3=g+\frac{b}{2}-1$, or $4=g+\frac{b}{2}$, where $g$ and $b$ are the number of interior lattice points and boundary lattice points of the triangle, respectively. The $4$ lattice points at height $-1$ contribute $2$ to this sum, and the one lattice point at height $-3$ contributes $\frac{1}{2}$ to this sum, meaning that the lattice points at height $-2$ contribute $\frac{3}{2}$ to this sum. It follows that there must be at least two lattice points at height $-2$; but this is a contradiction, since at least one of these points will be invisible from $O$. We conclude that $P$ cannot contain a lattice point of the form $(x,-3)$, and thus $y\geq -2$ for all lattice points $(x,y)\in P$. \item[(iii)] We know that the lattice point $(-2,0)$ is not in $P$ since it is not visible from $O$. If there is any lattice point of the form $(x,y)$ with $y\geq 1$ and $y\leq -x-2$, then the triangle $\textrm{conv}(O,(-1,-1),(x,y))$ will contain $(-2,0)$.
Thus no such lattice point $(x,y)$ can exist in $P$. \item[(iv)] No point of the form $(x,y)$ with $x\geq 2$ and $y\geq 0$ may appear in $P$: this would force the point $(2,0)$ to appear, as it would lie in the triangle $\textrm{conv}(O,(2,-1),(x,y))$. \item[(v)] There are now only finitely many allowed lattice points $(x,y)$ with $y\geq 1$, namely those with $-y-1\leq x\leq 1$ and $1\leq y\leq 7$. For each such point, we consider the triangle $\textrm{conv}((x,y),(-1,-1),(-1,3))$. We claim that only the $13$ choices of $(x,y)$ pictured in Figure \ref{figure:ruling_out_points} do not introduce a forbidden point. To see this, we note that the points $(0,2)$, $(-2,2)$ and $(-2,4)$ are all forbidden. The point $(0,2)$ rules out $(x,y)$ with $x=1$ and $y\geq 5$; with $x=0$ and $y\geq 2$; with $x=-1$ and $y\geq 4$; and with $x=-2$ and $y\geq 5$. For $x=-2$, the points $(-2,2)$ and $(-2,4)$ are already ruled out. For all remaining points with $x\leq -3$, every point besides $(-3,2)$, $(-4,3)$, and $(-5,4)$ introduces the point $(-2,2)$ or $(-2,4)$ or both. This establishes our claim. \item[(vi)] By assumption, we know there are no lattice points of the form $(x,-1)$ where $x\leq -2$. It follows that there are also no lattice points of the form $(x,-2)$ where $x\leq -4$, since $(-2,-1)$ would lie in the convex hull of such a point with $O$ and $(2,-1)$. \item[(vii)] We will now use the fact that we have assumed that $P$ satisfies $\textrm{lw}(P)\geq 3$. We cannot have that $P$ is contained in the strip $-2\leq y\leq 0$, so there must be at least one point $(x,y)$ with $y\geq 1$. If there is a point of the form $(x',-1)$ with $x'\geq 6$, then we would have that $\textrm{conv}((x,y),(x',-1),(-1,-1))$ contains the point $(2,0)$, which is invisible. Thus we can only have points $(x',-1)$ if $-1\leq x'\leq 5$. A similar argument shows that $P$ can only contain a point $(x,-2)$ if $x$ is odd with $-3\leq x\leq 9$.
\end{itemize} \begin{figure}[hbt] \centering \includegraphics{ruling_out_points.pdf} \caption{Possible lattice points in $P$, with impossible points labelled by the argument ruling them out} \label{figure:ruling_out_points} \end{figure} We have now narrowed the possible lattice points in our polygon down to the $30$ lattice points in Figure \ref{figure:ruling_out_points}, five of which we know appear in $P$. For every such point $(x,y)$, there does indeed exist a polygon $P$ with $\textrm{lw}(P)\geq 3$ containing $(x,y)$ as well as the five prescribed points such that $P\cap\mathbb{Z}^2$ is a subset of the $30$ allowed points, so we cannot narrow down any further. One way to finish the proof is by use of a computer to determine all possible subsets of the $25$ points that can be added to our initial $5$ points to yield a polygon of lattice width at least $3$; we would then simply check the largest number of lattice points. We have carried out this computation, and present the results in Appendix \ref{section:appendix}. We also present the following argument, which will complete our proof without needing to rely on a computer. First we split into four cases, depending on the number $k$ of lattice points at height $-1$: $4$, $5$, $6$, or $7$. When there are more than $4$, we can eliminate more of the candidate points $(x,y)$ with $y\geq 1$ or $y=-2$; the sets of allowable points in these four cases are illustrated in Figure \ref{figure:four_cases_allowed_points}. In each case we will argue that our polygon $P$ has at most $13$ lattice points. \begin{figure}[hbt] \centering \includegraphics[scale=0.8]{four_cases_allowed_points.pdf} \caption{Narrowing down possible points depending on the number of points at height $-1$} \label{figure:four_cases_allowed_points} \end{figure} \begin{itemize} \item Suppose $k=4$. 
There are $20$ possible points at height $-1$ or above; since there is at most one point at height $-2$, it suffices to show that we can fit no more than $12$ lattice points at height $-1$ or above into a lattice polygon. First suppose the point $(-5,4)$ is in $P$. This eliminates $9$ possible points from appearing in $P$, yielding at most $20-9+1=12$ lattice points total in $P$. Leaving out $(-5,4)$ but including $(-4,3)$ similarly eliminates $9$ possible points. Including $(-2,3)$ eliminates $8$; including $(-1,3)$ and leaving out $(-2,3)$ eliminates $8$; including $(1,4)$ eliminates $9$; and including $(1,3)$ and leaving out $(1,4)$ eliminates $9$. In all these cases, we can conclude that $P$ has at most $13$ lattice points in total. The only remaining case is that all lattice points of $P$ have heights between $-2$ and $2$. The polygon can have at most one lattice point at height $-2$, at most one lattice point at height $2$, and some assortment of the $11$ total points with heights between $-1$ and $1$. Once again, $P$ can have at most $13$ lattice points. \item Suppose $k=5$. If $P$ includes the point $(-4,3)$, then it cannot include $(-2,3)$, $(-1,2)$, or $(0,1)$. Combined with the fact that $P$ can only have one lattice point at height $-2$, this leaves $P$ with at most $13$ total lattice points. A similar argument holds if $P$ includes the point $(-2,3)$. If $P$ contains neither $(-4,3)$ nor $(-2,3)$, then it has at most $1$ point at height $3$, at most one point at height $-2$, and some collection of the $11$ points between. Thus $P$ has at most $13$ lattice points. \item Suppose $k=6$. Since $P$ has at most one lattice point at height $-2$, and only $12$ points are allowed outside of that height, $P$ has at most $13$ lattice points total. \item Suppose $k=7$. Since $P$ has at most one lattice point at height $-2$, and only $11$ points are allowed outside of that height, $P$ has at most $12$ lattice points total. 
\end{itemize} We conclude that $|P\cap\mathbb{Z}^2|\leq 13$. \end{proof} As detailed in Appendix \ref{section:appendix}, we enumerated all non-hyperelliptic polygons containing the five prescribed points from the previous proof, along with some subset of the other $25$ permissible points. The end result was $69$ non-hyperelliptic panoptigons of lattice diameter $3$ or more, up to equivalence. In the same appendix we show that there are $3$ non-hyperelliptic panoptigons with lattice diameter at most $2$, yielding a grand total of $72$ non-hyperelliptic panoptigons. If we instead wish to count panoptigons of lattice width at least $3$, this count becomes $73$ due to the inclusion of $T_3$. We remark that it is possible to give a much shorter proof that there are only finitely many non-hyperelliptic panoptigons. Suppose that $P$ is a panoptigon of lattice diameter $\ell(P)\geq 7$. By the same argument that started our previous proof, we may assume without loss of generality that $P$ has $(0,0)$ as a panoptigon point as well as eight or more lattice points at height $-1$. If $P$ contains a point of the form $(x,y)$ where $y\geq 2$, then the line segment $P\cap L$ where $L$ is the $x$-axis must have length at least $7\left(1-\frac{1}{y+1}\right)\geq 7\left(1-\frac{1}{2+1}\right)=\frac{14}{3}>4$. As such $P$ must contain at least $4$ points at height $0$, impossible since there are only $3$ visible points at this height. Similarly $P$ can have no lattice points at height $1$: these would force the inclusion of either $(2,0)$ or $(-2,0)$. Finally, if $P$ contains a point of the form $(x,y)$ where $y\leq -3$, then the line segment $P\cap L'$ where $L'$ is the horizontal line at height $-2$ must have length at least $7\left(1-\frac{1}{|y|-1}\right)\geq 7\left(1-\frac{1}{3-1}\right)=\frac{7}{2}>3$. As such we know that $P$ must contain at least $3$ lattice points at height $-2$, impossible since no two consecutive points at that height are both visible.
Thus we know that $P$ only has lattice points at heights $0$, $-1$, and $-2$, and so is a hyperelliptic polygon. This means that if $P$ is a non-hyperelliptic panoptigon, it must have $\ell(P)\leq 6$. Since $|P\cap\mathbb{Z}^2|\leq (\ell(P)+1)^2$, it follows that if $P$ is a non-hyperelliptic panoptigon then it must have at most $(6+1)^2=49$ lattice points; there are only finitely many such polygons. In principle one could enumerate all such polygons with at most $49$ lattice points as in \cite{Castryck2012} and check which are panoptigons; this would be much less efficient than the computation arising from our longer proof. \section{Characterizing all maximal polygons of lattice width $3$ or $4$}\label{section:lw3_and_4} In this section we will characterize all maximal polygons of lattice width $3$ or $4$. By Lemma \ref{lemma:lw_facts}, this will allow us to determine which polygons of lattice width $1$ or $2$ can be the interior polygon of some lattice polygon. This will be helpful in Section \ref{section:big_face_graphs}, when we will need to know which of the infinitely many panoptigons of lattice width at most $2$ can be an interior polygon. For lattice width $3$, we do have the triangle $T_3$ as an exceptional case; all other polygons with lattice width $3$ must have an interior polygon of lattice width $1$. \begin{prop}\label{prop:lattice_width_3_maximal} Let $P$ be a maximal polygon. Then $P$ has lattice width $3$ if and only if up to equivalence we either have $P=T_3$, or $P=T_{a,b}^{(-1)}$ where $a\geq \frac{1}{2}b-1$, $0\leq a\leq b$, and $b\geq 1$, and where $T_{a,b}\neq T_1$. \end{prop} \begin{proof} If $P$ is equivalent to $T_3$, then it has lattice width $3$ as desired. If $P$ is equivalent to some other $T_d$, then $P$ has lattice width $d\neq 3$, and so need not be considered. Now assume $P$ is not equivalent to $T_d$ for any $d$, so that $P$ has lattice width $3$ if and only if $P_\textrm{int}$ has lattice width $1$ by Lemma \ref{lemma:lw_facts}.
This is the case if and only if $P_\textrm{int}$ is equivalent to $T_{a,b}$ for some $a,b\in\mathbb{Z}$ where $0\leq a\leq b$ and $b\geq 1$ (where $T_{a,b}\neq T_1$) by Theorem \ref{theorem:lw_012}. Thus to prove our claim, it suffices by Proposition \ref{prop:interior_maximal} to show that $T_{a,b}^{(-1)}$ is a lattice polygon if and only if $a\geq \frac{1}{2}b-1$. We set the following notation to describe $T_{a,b}$. Starting with the face connecting $(0,0)$ and $(0,1)$ and moving counterclockwise, label the faces of $T_{a,b}$ as $\tau_1$, $\tau_2$, $\tau_3$, and $\tau_4$ (where $\tau_4$ does not appear if $a=0$). Pushing out the faces, we find that $\tau_1^{(-1)}$ lies on the line $x=-1$, $\tau_2^{(-1)}$ on the line $y=-1$, $\tau_3^{(-1)}$ on the line $x+(b-a)y= b+1$, and $\tau_4^{(-1)}$ on the line $y=2$. Note that working cyclically, we have $\tau_i^{(-1)}\cap\tau_{i+1}^{(-1)}$ is a lattice point: we get the points $(-1,-1)$, $(2b-a+1,-1)$, $(2a-b+1,2)$, and $(-1,2)$. Thus if these are the vertices of $T_{a,b}^{(-1)}$, then $T_{a,b}^{(-1)}$ is a lattice polygon. Certainly $(-1,-1)$ and $(2b-a+1,-1)$ appear in $T_{a,b}^{(-1)}$. The points $(2a-b+1,2)$ and $(-1,2)$ will appear as (not necessarily distinct) vertices of $T_{a,b}^{(-1)}$ if and only if $2a-b+1\geq -1$; that is, if and only if $a\geq\frac{1}{2}b-1$. Thus in the case that $a\geq\frac{1}{2}b-1$, we have that $T_{a,b}^{(-1)}$ is a lattice polygon with vertices at $(-1,-1)$, $(2b-a+1,-1)$, $(2a-b+1,2)$, and $(-1,2)$. If on the other hand $a<\frac{1}{2}b-1$, then $\tau_4^{(-1)}$ is not a face of $T_{a,b}^{(-1)}$, and so one of the vertices of $T_{a,b}^{(-1)}$ is $\tau_1^{(-1)}\cap\tau_3^{(-1)}$. These faces intersect at the point $\left(-1,\frac{b+2}{b-a}\right)$, where we may divide by $b-a$ since $a<\frac{1}{2}b-1$ and so $a\neq b$. Note that $b-a> b-\frac{1}{2}b+1=\frac{1}{2}(b+2)$.
It follows that $\frac{b+2}{b-a}<2$, and certainly $\frac{b+2}{b-a}>1$, so $\left(-1,\frac{b+2}{b-a}\right)$ is not a lattice point. We conclude that $T_{a,b}^{(-1)}$ is a lattice polygon if and only if $a\geq \frac{1}{2}b-1$, thus completing our proof. \end{proof} The explicitness of this result, combined with the fact that $g\left(T_{a,b}^{(-1)}\right)=a+b+2$, allows us to count the number of maximal polygons $P$ of genus $g$ with lattice width $3$. First, note that there are $\left\lfloor\frac{g}{2}\right\rfloor$ choices of $T_{a,b}$ with $g$ lattice points, that is, with $a+b=g-2$: with our assumption that $a\leq b$, we can choose $a$ to be any number from $0$ up to $\left\lfloor\frac{g-2}{2}\right\rfloor$, and $b=g-2-a$ is determined from there. Next, we will exclude those choices of $a$ that yield $a<\frac{1}{2}b-1$, or equivalently $a\leq \frac{1}{2}b-\frac{3}{2}$ since $a,b\in\mathbb{Z}$. Given that $a+b=g-2$, this is equivalent to $a\leq\frac{1}{2}(g-2-a)-\frac{3}{2}$, or $\frac{3}{2}a\leq\frac{1}{2}g-\frac{5}{2}$, or $a\leq \frac{g-5}{3}$. Thus the number of polygons we must exclude from the total count $\left\lfloor\frac{g}{2}\right\rfloor$ is $\left\lfloor \frac{g-5}{3}\right\rfloor+1=\left\lfloor\frac{g-2}{3}\right\rfloor$. We conclude that the number of maximal polygons of genus $g$ with lattice width $3$ is \[\left\lfloor\frac{g}{2}\right\rfloor-\left\lfloor\frac{g-2}{3}\right\rfloor\] when $g\geq 4$ (which allows us to ignore $T_3$). We now wish to classify maximal polygons $P$ of lattice width $4$. One possibility is that $P$ is $T_4$. Other than this example, the interior polygon $P_{\textrm{int}}$ must have lattice width $2$. Note that if $g(P_{\textrm{int}})=0$, then $P_{\textrm{int}}=T_2$; this has relaxed polygon $T_5$, which has lattice width $5$ and so is not under consideration. If $g(P_{\textrm{int}})=1$, then $P_\textrm{int}$ is one of the polygons in Figure \ref{figure:g1_lw2}.
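Lattice width computations like these can be sanity-checked by brute force, since $\textrm{lw}(P)$ is the minimum over primitive directions $u$ of $\max_v\langle u,v\rangle-\min_v\langle u,v\rangle$ taken over the vertices $v$ of $P$. The sketch below is our own illustration (not code from the paper), with an ad hoc bound on the direction entries that is ample for polygons of this size:

```python
from itertools import product
from math import gcd

def lattice_width(vertices, search=10):
    # minimize the directional width over primitive directions u with
    # |u_i| <= search; a small bound suffices for small polygons
    best = None
    for u in product(range(-search, search + 1), repeat=2):
        if u == (0, 0) or gcd(abs(u[0]), abs(u[1])) != 1:
            continue
        vals = [u[0] * x + u[1] * y for (x, y) in vertices]
        width = max(vals) - min(vals)
        best = width if best is None else min(best, width)
    return best
```

For instance, $T_3$ and the square of side length $3$ both have lattice width $3$, while the trapezoid $T_{2,5}$ has lattice width $1$.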
It turns out that all of these can be relaxed to a lattice polygon, each of which has lattice width $4$; these polygons are illustrated in Figure \ref{figure:g1_relaxed}. \begin{figure}[hbt] \centering \includegraphics[scale=0.8]{g1_relaxed.pdf} \caption{The lattice width $4$ polygons with exactly one doubly interior point} \label{figure:g1_relaxed} \end{figure} Now we deal with the most general case of polygons with $\textrm{lw}(P)=4$, namely those where $P_{\textrm{int}}$ has lattice width $2$ and genus $g'\geq 2$. Thus $P_\textrm{int}$ must be one of the $\frac{1}{6}(g+3)(2g^2+15g+16)$ hyperelliptic polygons presented in Theorem \ref{theorem:lw_012}. We must now determine which of these hyperelliptic polygons $Q$ have a relaxed polygon $Q^{(-1)}$ that has lattice points for vertices. We do this over three lemmas, which consider the polygons of Type 1, Type 2, and Type 3 separately. \begin{lemma}\label{lemma:type1} If $Q$ is of Type 1, then the relaxed polygon $Q^{(-1)}$ is a lattice polygon if and only if $i\leq \frac{3g+1}{2}$. \end{lemma} \begin{proof} Let $\tau_1$, $\tau_2$, $\tau_3$, and $\tau_4$ denote the four one-dimensional faces of $Q$, proceeding counterclockwise starting from the face connecting $(0,0)$ and $(1,2)$ (note that $\tau_4$ does not appear as a one-dimensional face if $i=2g$). Consider the relaxed faces $\tau_1^{(-1)}$, $\tau_2^{(-1)}$, $\tau_3^{(-1)}$, and $\tau_4^{(-1)}$. These lie on the lines $-2x+y=1$, $y=-1$, $2x+(2i-2g-1)y=2i+1$, and $y=3$. Proceeding cyclically, the intersection points $\tau_m^{(-1)}\cap \tau_{m+1}^{(-1)}$ of these relaxed faces are $(-1,-1)$, $(2i-g,-1)$, $(3g-2i+2,3)$, and $(1,3)$. All these points are lattice points, so if they are indeed the vertices of $Q^{(-1)}$ then $Q^{(-1)}$ is a lattice polygon.
The one situation in which our relaxed polygon will not have all lattice points is if $\tau_1^{(-1)}$ and $\tau_3^{(-1)}$ intersect at a height strictly below $3$, cutting off the face $\tau_4^{(-1)}$ and yielding a vertex with $y$-coordinate strictly between $2$ and $3$. These faces intersect at $\left(\frac{g+1}{2(i-g)},\frac{i+1}{i-g}\right)$, which has $y$-coordinate strictly smaller than $3$ if and only if $\frac{i+1}{i-g}<3$, which can be rewritten as $i+1<3i-3g$, or as $\frac{3g+1}{2}<i$. Thus when $i\leq \frac{3g+1}{2}$, our relaxed polygon is a lattice polygon; and when $i>\frac{3g+1}{2}$, it is not. \end{proof} \begin{lemma}\label{lemma:type2} If $Q$ is of Type 2, then the relaxed polygon $Q^{(-1)}$ is a lattice polygon if and only if $i\geq \frac{g}{2}+1$ and $j\geq\frac{g-1}{2}$. \end{lemma} \begin{proof} Label the faces of $Q$ cyclically as $\tau_1$, $\tau_2$, $\tau_3$, $\tau_4$, and $\tau_5$. Due to the form of the slopes of these faces, the relaxed face $\tau_m^{(-1)}$ will intersect the relaxed face $\tau_{m+1}^{(-1)}$ at a lattice point; this is true for $\tau_1$ with $\tau_2$ and $\tau_5$ by computation, and for any horizontal line with a face of slope $1/k$ for some integer $k$. Similarly, the intersection of $\tau_3^{(-1)}$ and $\tau_4^{(-1)}$ poses no problem: these will always intersect at the lattice point $(g+2,1)$. Thus the only way the relaxed polygon will fail to have lattice vertices is if certain edges are lost while pushing out. Considering the normal fan of $Q$, this leads to two possible cases for $Q^{(-1)}$ to not be integral: if the face $\tau_2^{(-1)}$ is lost, and if the face $\tau_5^{(-1)}$ is lost. First we consider the case that $\tau_2^{(-1)}$ is lost due to $\tau_1^{(-1)}$ and $\tau_3^{(-1)}$ intersecting at a point with $y$-coordinate strictly between $0$ and $-1$; note that this can only happen when $i<g$. The face $\tau_1^{(-1)}$ is on the line $-2x+y=1$, and $\tau_3^{(-1)}$ is on the line $x-(g+1-i)y=i+1$.
These intersect at $\left(-\frac{g+2}{2g-2i+1},-\frac{2i+3}{2g-2i+1}\right)$. Note that $-\frac{2i+3}{2g-2i+1}>-1$ is equivalent to $\frac{2i+3}{2g-2i+1}<1$, which in turn is equivalent to $2i+3<2g-2i+1$. This simplifies to $i<\frac{g}{2}+1$. Thus we have a collapse of $\tau_2^{(-1)}$ that introduces a non-lattice vertex if and only if $i<\frac{g}{2}+1$. Now we consider the case that $\tau_5^{(-1)}$ is lost due to $\tau_1^{(-1)}$ and $\tau_4^{(-1)}$ intersecting at a point with $y$-coordinate strictly between $2$ and $3$. The face $\tau_4^{(-1)}$ lies on the line with equation $x+(g-j)y=2g-j+2$. This intersects $\tau_1^{(-1)}$ at $\left(\frac{g+2}{2g-2j+1},\frac{4g-2j+5}{2g-2j+1}\right)$. Having $\frac{4g-2j+5}{2g-2j+1}<3$ is equivalent to $4g-2j+5< 6g-6j+3$, which can be rewritten as $4j<2g-2$, or $j< \frac{g-1}{2}$. Thus we have a collapse of $\tau_5^{(-1)}$ that introduces a non-lattice vertex if and only if $j< \frac{g-1}{2}$. We conclude that $Q^{(-1)}$ is a lattice polygon if and only if $i\geq \frac{g}{2}+1$ and $j\geq\frac{g-1}{2}$. \end{proof} \begin{lemma}\label{lemma:type3} If $Q$ is of Type 3, then the relaxed polygon $Q^{(-1)}$ is a lattice polygon if and only if $i\geq g/2$ and $j\geq g/2$. \end{lemma} \begin{proof} Label the faces of $Q$ cyclically as $\tau_1,\ldots,\tau_6$, where $\tau_1$ is the face containing the lattice points $(k,2)$ and $(0,1)$ (with the understanding that some faces might not appear if one or more of $i$, $j$ and $k$ are equal to $0$). If the faces $\tau_1^{(-1)},\ldots,\tau_6^{(-1)}$ are all present in the polygon $Q^{(-1)}$, then they intersect at lattice points by the arguments from the previous proof. Thus we need only be concerned with the following cases: where $\tau_3^{(-1)}$ collapses due to $\tau_2^{(-1)}$ and $\tau_4^{(-1)}$ intersecting at a point $(x,y)$ with $0>y>-1$; and where $\tau_6^{(-1)}$ collapses due to $\tau_5^{(-1)}$ and $\tau_1^{(-1)}$ intersecting at a point $(x,y)$ with $2<y<3$.
First we consider $\tau_2^{(-1)}$ and $\tau_4^{(-1)}$. We have that $\tau_2^{(-1)}$ lies on the line defined by $x=-1$, and that $\tau_4^{(-1)}$ lies on the line defined by $x-(g+1-i)y=i+1$. These lines intersect at $(-1,-\frac{i+2}{g+1-i})$. The $y$-coordinate is strictly greater than $-1$ when $\frac{i+2}{g+1-i}<1$, i.e. when $i+1<g+1-i$, which can be rewritten as $i<\frac{g}{2}$. Thus we lose $\tau_3^{(-1)}$ to a non-lattice vertex precisely when $i<\frac{g}{2}$. Now we consider $\tau_5^{(-1)}$ and $\tau_1^{(-1)}$. We have that $\tau_1^{(-1)}$ lies on the line $x-ky=-k+1$, unless $k=0$ in which case it lies on the line $x=-1$; and that $\tau_5^{(-1)}$ lies on the line $x+(g+1-k-j)y=2g+2-k-j$. In the event that $k\neq 0$, these intersect at $\left(\frac{gk+g-j+1}{g-j+1},\frac{2g-j+1}{g-j+1}\right)$, which has $y$-coordinate strictly smaller than $3$ when $\frac{2g-j+1}{g-j+1}<3$, or equivalently if $2g-j+1<3g-3j+1$, or equivalently if $j<\frac{g}{2}$. For the $k=0$ case, the intersection point becomes $\left(-1,\frac{2g-j+3}{g-j+1}\right)$, which has $y$-coordinate strictly smaller than $3$ when $\frac{2g-j+3}{g-j+1}<3$, or equivalently when $2g-j+3<3g-3j+3$, or equivalently when $j<\frac{g}{2}$. Thus we have a non-lattice vertex due to $\tau_5^{(-1)}$ collapsing precisely when $j<\frac{g}{2}$. We conclude that $Q^{(-1)}$ is a lattice polygon if and only if $i\geq g/2$ and $j\geq g/2$. \end{proof} Combining Lemmas \ref{lemma:type1}, \ref{lemma:type2}, and \ref{lemma:type3} and the preceding discussion, we have the following classification of maximal polygons with lattice width $4$. \begin{prop}\label{prop:lattice_width_4_maximal} Let $P$ be a maximal polygon of lattice width $4$. Then up to lattice equivalence, $P$ is either $T_4$; one of the $14$ polygons in Figure \ref{figure:g1_relaxed}; or $Q^{(-1)}$, where $Q$ is a hyperelliptic polygon satisfying the conditions of Lemma \ref{lemma:type1}, \ref{lemma:type2}, or \ref{lemma:type3}.
\end{prop} The most important consequence of Propositions \ref{prop:lattice_width_3_maximal} and \ref{prop:lattice_width_4_maximal} is that we can determine which panoptigons of lattice width $1$ or lattice width $2$ are interior polygons of some lattice polygon. We summarize this with the following result. \begin{cor}\label{corollary:lw12_panoptigon_point_bound} Let $Q$ be a panoptigon with $\textrm{lw}(Q)\leq 2$ such that $Q^{(-1)}$ is a lattice polygon. Then $|Q\cap\mathbb{Z}^2|\leq 11$. \end{cor} \begin{proof} If $\textrm{lw}(Q)= 1$ with $Q^{(-1)}$ a lattice polygon, then $Q$ must be the trapezoid $T_{a,b}$ with $0\leq a\leq b$, $b\geq 1$, and $a\geq \frac{b}{2}-1$ by Proposition \ref{prop:lattice_width_3_maximal}. In order for $T_{a,b}$ to be a panoptigon, we need $a\leq 2$ by Lemma \ref{lemma:panoptigon_g0}, so $2\geq \frac{b}{2}-1$, implying $b\leq 6$. It follows that $|Q\cap\mathbb{Z}^2|=a+b+2\le 2+6+2=10$. Now assume $\textrm{lw}(Q)= 2$ with $Q^{(-1)}$ a lattice polygon. If $Q$ has genus $0$ then it is $T_2$, and has $6$ lattice points. If $Q$ has genus $1$ then it is one of the polygons in Figure \ref{figure:g1_lw2}, and so has at most $9$ lattice points. Outside of these situations, we know that $Q$ is a hyperelliptic panoptigon of genus $g\geq 2$ as characterized in Lemma \ref{lemma:hyperelliptic_panoptigon}. We deal with two cases: where $Q$ has a panoptigon point at height $1$, and where it does not. In the first case, we either have $g=2$ with $Q$ of Type 1 or Type 2, or $g=3$ with $Q$ of Type 1. A hyperelliptic polygon of Type 1 has $(i+1)+(1+2g-i)=2g+2$ boundary points. A hyperelliptic polygon of Type 2 has $i+j+3$ boundary points. If $Q$ is of Type 1, then it has in total $3g+2\leq 11$ lattice points. If $Q$ is of Type 2, then $i+j\leq 2g+1=2\cdot 2+1=5$, implying that $Q$ has a total of $i+j+3+g\leq 5+3+2=10$ lattice points.
In the second case, we know that $Q$ must have at most $3$ points at height $0$ or $2$, and exactly $1$ point at the other height. First we claim that $Q$ cannot be of Type 1: there are $2g+2\geq 6$ boundary points, all at height $0$ or $2$, and $Q$ can have at most $4$ points total at those heights. For Types 2 and 3, we know by Lemmas \ref{lemma:type2} and \ref{lemma:type3} that either $i\geq\frac{g}{2}+1$ and $j\geq \frac{g-1}{2}$, or $i\geq\frac{g}{2}$ and $j\geq \frac{g}{2}$. At least one of $i$ and $j$ must equal $0$ to allow for a single point at height $0$ or height $2$, so these inequalities are impossible for $g\geq 2$. Thus $Q$ cannot be of Type 2 or Type 3 either, and this case never occurs. We conclude that if $Q$ is a panoptigon of lattice width $1$ or $2$ such that $Q^{(-1)}$ is a lattice polygon, then $|Q\cap\mathbb{Z}^2|\leq 11$. \end{proof} \section{Big face graphs are not tropically planar}\label{section:big_face_graphs} Let $G$ be a planar graph. Recall that we say that $G$ is a \emph{big face graph} if for any planar embedding of $G$, there exists a bounded face that shares an edge with every other bounded face. Our main examples of big face graphs will come from the following construction. First we recall the construction of a \emph{chain} of genus $g$ from \cite[\S 6]{BJMS}. Start with $g$ cycles in a row, connected at $g-1$ $4$-valent vertices. We will resolve each of these $4$-valent vertices to result in two $3$-valent vertices in one of two ways. Let $v$ be a vertex, incident to the edges $e_1,e_2,f_1,f_2$ where $e_1$ and $e_2$ are part of one cycle and $f_1$ and $f_2$ are part of another. We will remove $v$ and replace it with two connected vertices $v_1$ and $v_2$, and we will either connect $v_1$ to $e_1$ and $f_1$ and $v_2$ to $e_2$ and $f_2$; or we will connect $v_1$ to $e_1$ and $e_2$ and $v_2$ to $f_1$ and $f_2$. Any graph obtained from making such a choice at each vertex is then called a chain.
Figure \ref{figure:chain_construction} illustrates, for $g=3$, the starting $4$-regular graph; the two ways to resolve a $4$-valent vertex; and the resulting chains of genus $3$. We remark that although there are $2\times 2 = 4$ ways to choose the vertex resolutions, two of them yield isomorphic graphs, giving us $3$ chains of genus $3$ up to isomorphism. Note that for every genus, there is exactly one chain that is bridge-less, i.e. $2$-edge-connected. \begin{figure}[hbt] \centering \includegraphics{chain_construction} \caption{The starting $4$-regular graph in the chain construction; the two choices for resolving a $4$-valent vertex; and the three chains of genus $3$, up to isomorphism} \label{figure:chain_construction} \end{figure} Given a chain of genus $g$, we construct a \emph{looped chain} of genus $g+1$ by adding an edge from the first cycle to the last one. The looped chains of genus $4$ corresponding to the chains of genus $3$ are illustrated in Figure \ref{figure:chains_and_looped_chains}. For larger genus, we remark that two non-isomorphic chains can give rise to isomorphic looped chains. \begin{figure}[hbt] \centering \includegraphics{chains_and_looped_chains.pdf} \caption{The looped chains of genus $4$ } \label{figure:chains_and_looped_chains} \end{figure} In order to argue that any looped chain is a big face graph, we recall the following useful result. By a special case of Whitney's $2$-switching theorem \cite[Theorem 2.6.8]{graphs_on_surfaces}, if $G$ is a $2$-connected graph, then any other planar embedding can be reached, up to weak equivalence\footnote{Weak equivalence means two graph embeddings have the same facial structure, although possibly with different unbounded faces.}, from the standard embedding by a sequence of \emph{flippings}. 
A flipping of a planar embedding finds a cycle $C$ with only two vertices $v$ and $w$ incident to edges exterior to $C$, and then reverses the orientation of $C$ and all vertices and edges interior to $C$ to obtain a new embedding. This process is illustrated in Figure \ref{figure:2-flip}, where $C$ is the highlighted cycle $v-a-b-w-d-v$. \begin{figure}[hbt] \centering \includegraphics{2-flip} \caption{Two embeddings of a planar graph related by a flipping} \label{figure:2-flip} \end{figure} \begin{lemma}\label{lemma:looped_chain} Any looped chain is a big face graph. \end{lemma} \begin{proof} In the standard embedding of a looped chain as in Figure \ref{figure:chains_and_looped_chains}, there are (at least) two faces that share an edge with all other faces: one bounded and one unbounded. Since any looped chain is $2$-connected, any other embedding can be reached, up to weak equivalence, by a sequence of flippings. It thus suffices to show that the standard embedding of a looped chain is invariant under flipping. Consider the standard embedding of a looped chain $G$, and assume that $C$ is a cycle in $G$ that has exactly two vertices $v$ and $w$ incident to edges exterior to $C$. Let $\overline{C}$ denote the set of all vertices in or interior to $C$. Since $G$ is trivalent and $C$ is $2$-regular, we know that $v$ and $w$ are each incident to exactly one edge, say $e$ for $v$ and $f$ for $w$, that is exterior to $C$. We now deal with two possibilities: that $\overline{C}=V(G)$, and that $\overline{C}\subsetneq V(G)$. If $\overline{C}=V(G)$, then $e=f$, and the only possibility is that $v$ and $w$ are the vertices added to a chain $H$ to build the looped chain $G$; that $H$ is the bridge-less chain; and that $C$ is the outside boundary of $H$ in its standard embedding. Flipping with respect to $C$ does not change the embedding of this graph.
\begin{figure}[hbt] \centering \includegraphics{looped_chain_possible_cycles} \caption{The structure of a looped chain, where the bridge-less chains $G_i$ have solid edges and the edges $e_i$ are dotted; the boundaries of the $G_i$ are bold, and are the only possible choices of $C$ for a flipping} \label{figure:looped_chain_possible_cycles} \end{figure} If $\overline{C}\subsetneq V(G)$, then $\{e,f\}$ forms a $2$-edge-cut for $G$, separating it into $\overline{C}$ and $\overline{C}^C$. Consider the structure of $G$: it is a collection of $2$-edge-connected graphs $G_1,\ldots,G_k$, namely a collection of bridge-less chains, connected in a loop by edges $e_1,\ldots,e_k$, where $e_i$ connects $G_i$ and $G_{i+1}$, working modulo $k$; see Figure \ref{figure:looped_chain_possible_cycles} for this labelling scheme. We claim that $e,f\in\{e_1,\ldots,e_k\}$. If not, then without loss of generality $e$ is in some bridge-less chain $G_i$. If $f\in E(G_j)$ for $j\neq i$, then deleting $e$ and $f$ leaves the graph connected; the same is true if $f\in\{e_1,\ldots,e_k\}$. So we would need $f$ to also be in $G_i$. By the structure of the looped chain, we would need the removal of $e$ and $f$ to disconnect $G_i$ into multiple components, at least one of which is not incident to $e_i$ or $e_{i+1}$; however, this is impossible based on the structure of a bridge-less chain. It follows that $e$ and $f$ must be among $e_1,\ldots,e_k$. The only way to choose a pair $\{e,f\}$ from among $e_1,\ldots,e_k$ so that they are the only exterior edges incident to the boundary of a cycle $C$ is if they are incident to the same bridge-less chain $G_i$; that is, if up to relabelling we have $e=e_i$ and $f=e_{i+1}$ for some $i$. Thus $C$ and its interior constitute one of the bridge-less chains $G_i$. But flipping a bridge-less chain does not change the embedding of our (unlabelled) graph, completing the proof.
\end{proof} \begin{figure}[hbt] \centering \includegraphics{loop_of_loops.pdf} \caption{The loop of loops $L_g$ for $3\leq g\leq 6$} \label{figure:loop_of_loops} \end{figure} We summarize the connection between big face graphs and panoptigons in the following lemma. \begin{lemma}\label{lemma:big_face_panoptigon_lemma} Suppose that $G$ is a tropically planar big face graph arising from a polygon $P$. Then $P_\textrm{int}$ is a panoptigon. \end{lemma} This is not an if-and-only-if statement, since not all triangulations of $P$ connect a point of $P_\textrm{int}$ to all other points of $P_\textrm{int}$; for instance, the chain of genus $3$ with two bridges is not a big face graph, but by \cite[\S 5]{BJMS} it arises from $T_4$ whose interior polygon is a panoptigon. \begin{proof} Let $\Delta$ be a regular unimodular triangulation of $P$ such that $G$ is the skeleton of the weak dual graph of $\Delta$. The embedding of $G$ arising from this construction must have a bounded face $F$ bordering all other faces. By duality, we know that $F$ corresponds to an interior lattice point $p$ of $P$. Since $F$ shares an edge with all other bounded faces, dually $p$ is connected to each other interior point of $P$ by a primitive edge in $\Delta$. Thus $P_\textrm{int}$ is a panoptigon, with $p$ a panoptigon point for it. \end{proof} One common example of a looped chain of genus $g$ is the \emph{loop of loops} $L_g$, obtained by connecting $g-1$ bi-edges in a loop. This is illustrated in Figure \ref{figure:loop_of_loops} for $g$ from $3$ to $6$. For low genus, the loop of loops is tropically planar. 
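As a sanity check on this construction (an illustration of ours, not part of the paper, with our own vertex labels), one can assemble $L_g$ as an edge list and confirm that it is $3$-regular with first Betti number $g$:

```python
from collections import Counter

def loop_of_loops(g):
    """Build the loop of loops L_g: g-1 bi-edges (pairs of parallel
    edges) joined in a cycle by single edges.  Returns an edge list on
    vertices 0, ..., 2(g-1)-1; parallel edges appear twice."""
    assert g >= 3
    n = g - 1                       # number of bi-edges
    edges = []
    for i in range(n):
        a, b = 2 * i, 2 * i + 1     # endpoints of the i-th bi-edge
        edges += [(a, b), (a, b)]   # the two parallel edges
        # single edge connecting this bi-edge to the next one
        edges.append((b, (2 * i + 2) % (2 * n)))
    return edges

def degrees(edges):
    c = Counter()
    for u, v in edges:
        c[u] += 1
        c[v] += 1
    return c

edges = loop_of_loops(5)
deg = degrees(edges)
V, E = len(deg), len(edges)
assert all(d == 3 for d in deg.values())   # 3-regular
assert E - V + 1 == 5                      # first Betti number = genus
```

In general $L_g$ has $2(g-1)$ vertices and $3(g-1)$ edges, so its first Betti number is $3(g-1)-2(g-1)+1=g$, matching the count above.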
Figure \ref{figure:triangulations_for_lol} illustrates polygons of genus $g$ for $3\leq g\leq 10$ along with collections of edges emanating from an interior point; when completed to a regular unimodular triangulation\footnote{One way to see that this can be accomplished is to use a placing triangulation \cite[\S 3.2.1]{triangulations}, where the highlighted panoptigon point is placed first and the other lattice points are placed in any order.}, they will yield $L_g$ as the dual tropical skeleton. Thus $L_g$ is tropically planar for $g\leq 10$. Another example of a tropically planar looped chain, this one of genus $11$, is pictured in Figure \ref{figure:g11_big_face}, along with a regular unimodular triangulation of a polygon giving rise to it. Since the theta graph of genus $2$ is also tropically planar \cite[Example 2.5]{BJMS} and is a big face graph, there exists at least one tropically planar big face graph of genus $g$ for $2\leq g\leq 11$. We are now ready to prove that this does not hold for $g\geq 14$. \begin{figure}[hbt] \centering \includegraphics[scale=0.8]{triangulations_for_lol.pdf} \caption{Starts of triangulations that will yield the loop of loops as the dual tropical skeleton } \label{figure:triangulations_for_lol} \end{figure} \begin{figure}[hbt] \centering \includegraphics{g11_big_face.pdf} \caption{A tropically planar big face graph of genus $11$, with a regular unimodular triangulation giving rise to it } \label{figure:g11_big_face} \end{figure} \begin{proof}[Proof of Theorem \ref{theorem:big_face_graphs}] Let $G$ be a tropically planar big face graph, and let $P$ be a lattice polygon giving rise to it. By Lemma \ref{lemma:big_face_panoptigon_lemma}, $P_\textrm{int}$ is a panoptigon. If $\textrm{lw}(P_\textrm{int})\leq 2$, then $g=|P_\textrm{int}\cap\mathbb{Z}^2|\leq 11$ by Corollary \ref{corollary:lw12_panoptigon_point_bound}. 
If $\textrm{lw}(P_\textrm{int})\geq 3$, then $g=|P_\textrm{int}\cap\mathbb{Z}^2|\leq 13$ by Theorem \ref{theorem:at_most_13}. Either way, we may conclude that the genus of $G$ is at most $13$. \end{proof} It follows, for instance, that no looped chain of genus $g\geq 14$ is tropically planar. If we are willing to rely on our computational enumeration of all non-hyperelliptic panoptigons, we can push this further: there does not exist a tropically planar big face graph for $g\geq 12$, and this bound is sharp. We have already seen in Figure \ref{figure:g11_big_face} that there exists a tropically planar big face graph of genus $11$. To see that none have higher genus, first note that if $P_\textrm{int}$ is a panoptigon with $12$ or $13$ lattice points, then $P_\textrm{int}$ must be non-hyperelliptic by Corollary \ref{corollary:lw12_panoptigon_point_bound}. Thus $P_\textrm{int}$ must be one of the $15$ non-hyperelliptic panoptigons with $12$ lattice points, or one of the $8$ non-hyperelliptic panoptigons with $13$ lattice points, as presented in Appendix \ref{section:appendix}. However, for each of these polygons $Q$, we have verified computationally that $Q^{(-1)}$ is not a lattice polygon; see Figure \ref{figure:panoptigons_12_or_13}. Thus no lattice polygon of genus $g\geq 12$ has an interior polygon that is also a panoptigon. It follows from Lemma \ref{lemma:big_face_panoptigon_lemma} that no big face graph of genus larger than $11$ is tropically planar. We close with several possible directions for future research. \begin{itemize} \item For any lattice point $p$, let $\textrm{vis}(p)$ denote the set of all lattice points visible to $p$ (including $p$ itself). 
Given a convex lattice polygon $P$, define its \emph{visibility number} to be the minimum number of lattice points in $P$ needed so that we can see every lattice point from one of them: \[V(P)=\min\left\{|S|\,:\, S\subset P\cap\mathbb{Z}^2\textrm{ and } P\cap\mathbb{Z}^2\subset \bigcup_{p\in S}\textrm{vis}(p) \right\}.\] Thus $P$ is a panoptigon if and only if $V(P)=1$. Classifying polygons of fixed visibility number $V(P)$, or finding relationships between $V(P)$ and such properties as genus and lattice width, could be interesting in its own right, and could provide new criteria for determining whether graphs are tropically planar; for instance, the prism graph $P_n=K_2\times C_n$ can only arise from a polygon $P$ with $V(P)\leq 2$. This question is in some sense a lattice point version of the art gallery problem. \item We can generalize from two-dimensional panoptigons to $n$-dimensional \emph{panoptitopes}, which we define to be convex lattice polytopes containing a lattice point $p$ from which all the polytope's other lattice points are visible. A few of our results generalize immediately; for instance, the proof of Lemma \ref{lemma:panoptigon_g1} works in $n$-dimensions, so any polytope with exactly one interior lattice point is a panoptitope. A complete classification of $n$-dimensional panoptitopes for $n\geq 3$ will be more difficult than it was in two-dimensions, especially since it is no longer the case that there are finitely many polytopes with a fixed number of lattice points. Results about panoptitopes would also have applications in tropical geometry; for instance, an understanding of three-dimensional panoptitopes would have implications for the structure of tropical surfaces in $\mathbb{R}^3$. \item To any lattice polygon we can associate a toric surface \cite{toric_varieties}. 
An interesting question for future research would be to investigate those toric surfaces that are associated to panoptigons, or more generally toric varieties associated to panoptitopes. \end{itemize} \section{Introduction} A lattice point in $\mathbb{R}^2$ is any point with integer coordinates, and a lattice polygon is any polygon whose vertices are lattice points. We say that two distinct lattice points $p=(a,b)$ and $q=(c,d)$ are visible to one another if the line segment $\overline{pq}$ contains no lattice points besides $p$ and $q$, or equivalently if $\gcd(a-c,b-d)=1$; by convention we say that any $p$ is visible from itself. Points visible from the origin $O=(0,0)$ are called visible points, with all other points being called invisible. The properties of visible and invisible points have been subject to a great deal of study over the past century, as surveyed in \cite[\S 10.4]{research_book}. The question of which structures can appear among visible points, invisible points, or some prescribed combination thereof was studied in \cite{herzog-stewart}, where it was proved that one can find a copy of any convex lattice polygon (indeed, any arrangement of finitely many lattice points) consisting entirely of invisible points. \begin{figure}[hbt] \centering \includegraphics{several_panoptigons.pdf} \caption{Three panoptigons, with a panoptigon point circled and lines of sight illustrated; the middle polygon has a second panoptigon point, namely the bottom vertex} \label{figure:several_panoptigons} \end{figure} In this paper we pose and answer a somewhat complementary question: which convex lattice polygons including the origin contain only \emph{visible} lattice points? We define a \emph{panoptigon}\footnote{This name is modeled off of \emph{panopticon}, an architectural design that allows for one position to observe all others.
It comes from the Greek word \emph{panoptes}, meaning ``all seeing''.} to be a convex lattice polygon $P$ containing a lattice point $p$ such that all other lattice points in $P$ are visible from $p$. We call such a $p$ a \emph{panoptigon point} for $P$. Thus up to translation, a panoptigon is a convex lattice polygon containing the origin such that every point in $P\cap\mathbb{Z}^2$ is a visible point. Three panoptigons are pictured in Figure \ref{figure:several_panoptigons}, each with a panoptigon point and its lines of sight highlighted; note that the panoptigon point need not be unique. One can quickly see that there exist infinitely many panoptigons; for instance, the triangles with vertices at $(0,0)$, $(1,0)$, and $(a,1)$ are panoptigons for any value of $a$. However, this is not an interesting family of examples since any two of these triangles are \emph{equivalent}, a notion made precise below. \begin{defn} A \emph{unimodular transformation} is an integer affine map $t:\mathbb{R}^2\rightarrow\mathbb{R}^2$ that preserves the integer lattice $\mathbb{Z}^2$; any such map is of the form $t(p)=Ap+b$, where $A$ is a $2\times 2$ integer matrix with determinant $\pm1$ and $b\in\mathbb{Z}^2$ is a translation vector. We say that two lattice polygons $P$ and $Q$ are \emph{equivalent} if there exists a unimodular transformation $t$ such that $t(P)=Q$. \end{defn} It turns out that there are infinitely many panoptigons even up to equivalence: note that the triangle with vertices at $(0,0)$, $(0,-1)$, and $(b,-1)$ is a panoptigon for every positive integer $b$, and any two such triangles are pairwise inequivalent since they have different areas. We can obtain nicer results if we stratify polygons according to the \emph{lattice width} of a polygon $P$, the minimum integer $w$ such that there exists a polygon $P'$ equivalent to $P$ in the horizontal strip $\mathbb{R}\times [0,w]$.
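Since visibility of lattice points is a gcd condition, the panoptigon property can be checked by brute force for any explicit polygon. The following Python sketch is our own illustration (not part of the paper's computations); it verifies the triangle family above and tests two standard triangles:

```python
from math import gcd

def visible(p, q):
    """Two lattice points see each other iff the gcd of their
    coordinate differences is 1 (or the points coincide)."""
    if p == q:
        return True
    return gcd(abs(p[0] - q[0]), abs(p[1] - q[1])) == 1

def is_panoptigon(points):
    """Given all lattice points of a convex lattice polygon, decide
    whether some point sees every other one."""
    return any(all(visible(p, q) for q in points) for p in points)

# Lattice points of the triangle conv((0,0),(0,-1),(b,-1)) for b = 4:
# (0,0) together with (0,-1), ..., (4,-1).
tri = [(0, 0)] + [(x, -1) for x in range(5)]
assert is_panoptigon(tri)          # (0,0) sees every (x,-1)

# T_2 = conv((0,0),(2,0),(0,2)) is a panoptigon (e.g. via (1,1))...
t2 = [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (0, 2)]
assert is_panoptigon(t2)
# ...but T_4, with 15 lattice points, is not: every lattice point of
# T_4 fails to see some other one.
t4 = [(x, y) for x in range(5) for y in range(5) if x + y <= 4]
assert not is_panoptigon(t4)
```

The failure of $T_4$ is consistent with the bound of Theorem \ref{theorem:at_most_13}, since $T_4$ has lattice width $4$ but $15>13$ lattice points.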
Although there are infinitely many panoptigons of lattice widths $1$ and $2$, we can still classify them completely, as presented in Lemmas \ref{lemma:panoptigon_g0} and \ref{lemma:panoptigon_g1}. Once we reach lattice width $3$ or more, we obtain the following powerful result. \begin{theorem}\label{theorem:at_most_13} Let $P$ be a panoptigon with lattice width $\textrm{lw}(P)\geq 3$. Then $|P\cap\mathbb{Z}^2|\leq 13$. \end{theorem} Since there are only finitely many lattice polygons with a fixed number of lattice points up to equivalence \cite[Theorem 2]{lz}, it follows that there are only finitely many panoptigons $P$ with $\textrm{lw}(P)\geq 3$. In Appendix \ref{section:appendix} we detail computations to enumerate all such lattice polygons. This allows us to determine that there are exactly $73$ panoptigons of lattice width $3$ or more. One is the triangle of degree $3$, which has a single interior lattice point; and the other $72$ are non-hyperelliptic, meaning that the convex hull of their interior lattice points is two-dimensional. As an application of our classification of panoptigons, we prove new results about \emph{tropically planar graphs} \cite{small2017_tpg}. These are $3$-regular, connected, planar graphs that arise as skeletonized versions of dual graphs of regular, unimodular triangulations of lattice polygons. We often stratify tropically planar graphs by their first Betti number, also called their genus. If $G$ is a tropically planar graph arising from a triangulation of a lattice polygon $P$, then the genus of $G$ is equal to the number of interior lattice points of $P$. We prove a new criterion for ruling out certain graphs from being tropically planar, notable in that the graphs it applies to are $2$-edge-connected, unlike those ruled out by most existing criteria; this resolves an open question posed in \cite[\S 5]{small2017_tpg}.
We say that a planar graph $G$ is a \emph{big face graph} if for every planar embedding of $G$, there is a bounded face sharing an edge with all other bounded faces. \begin{theorem}\label{theorem:big_face_graphs} If $G$ is a big face graph of genus $g\geq 14$, then $G$ is not tropically planar. \end{theorem} The idea behind the proof of this theorem is as follows. If a big face graph $G$ is tropically planar, then it is dual to a regular unimodular triangulation of a lattice polygon $P$. One of the interior lattice points $p$ of $P$ must be connected to all the other interior lattice points, so that the bounded face dual to $p$ can share an edge with all other bounded faces. Thus, the convex hull of the interior lattice points of $P$ must be a panoptigon. If that panoptigon has lattice width $3$ or more, then it can have at most $13$ lattice points, and so $G$ cannot have $g\geq 14$. For the case that the lattice width of the interior panoptigon is smaller, we need an understanding of which polygons of lattice width $1$ or $2$ can appear as the interior lattice points of another lattice polygon. We obtain this in Propositions \ref{prop:lattice_width_3_maximal} and \ref{prop:lattice_width_4_maximal}, and can once again bound the genus of $G$. In fact, if we are willing to rely on our computational enumeration of all panoptigons with lattice width at least $3$, then we can improve this result to say that big face graphs of genus $g\geq 12$ are not tropically planar. We will see that this bound is sharp. Our paper is organized as follows. In Section \ref{section:lattice_polygons} we present background on lattice polygons, including a description of all polygons of lattice width at most $2$. In Section \ref{section:panoptigons} we classify all panoptigons. In Section \ref{section:lw3_and_4} we classify all maximal polygons of lattice width $3$ or $4$. Finally, in Section \ref{section:big_face_graphs} we prove Theorem \ref{theorem:big_face_graphs}. 
Our computational results are then summarized in Appendix \ref{section:appendix}. \medskip \noindent \textbf{Acknowledgements.} The authors thank Michael Joswig for many helpful conversations on tropically planar graphs, and for comments on an earlier draft of this paper, as well as two anonymous reviewers for many helpful suggestions and corrections. We would also like to thank Benjamin Lorenz, Andreas Paffenholz and Lars Kastner for helping with uploading our results to polyDB. Ralph Morrison was supported by the Max Planck Institute for Mathematics in the Sciences, and by the Williams College Class of 1945 World Travel Fellowship. Ayush Kumar Tewari was supported by the Deutsche Forschungsgemeinschaft (SFB-TRR 195 “Symbolic Tools in Mathematics and their Application”). \section{Lattice polygons}\label{section:lattice_polygons} In this section we recall important terminology and results regarding lattice polygons. This includes the notion of maximal polygons, and of lattice width. Throughout we will assume that $P$ is a two-dimensional convex lattice polygon, unless otherwise stated. The \emph{genus} of a polygon $P$ is the number of lattice points interior to $P$. A key fact is that for fixed $g\geq 1$, there are only finitely many lattice polygons of genus $g$, up to equivalence \cite[Theorem 9]{Castryck2012}. We refer to the convex hull of the $g$ interior points of $P$ as the \emph{interior polygon of $P$}, denoted $P_\textrm{int}$. If $\dim(P_\textrm{int})=2$, we call $P$ \emph{non-hyperelliptic}; if $\dim(P_\textrm{int})\leq 1$, we call $P$ \emph{hyperelliptic}. This terminology, due to \cite{Castryck2012}, is inspired by hyperelliptic curves, whose Newton polygons have all interior points collinear. We say a lattice polygon $P$ is a \emph{maximal polygon} if it is maximal with respect to containment among all lattice polygons containing the same set of interior lattice points. 
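For explicit polygons, the genus and the hyperellipticity just defined can be checked directly. The following Python sketch is our own illustration (not from the paper); it uses a brute-force interior-point test over the bounding box and assumes the vertices are listed in counterclockwise order:

```python
from itertools import product

def interior_lattice_points(vertices):
    """Lattice points strictly inside a convex polygon whose vertices
    are listed in counterclockwise order: a point is interior iff it
    lies strictly to the left of every directed edge (positive cross
    product)."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    edges = list(zip(vertices, vertices[1:] + vertices[:1]))
    return [p for p in product(range(min(xs), max(xs) + 1),
                               range(min(ys), max(ys) + 1))
            if all((b[0]-a[0])*(p[1]-a[1]) - (b[1]-a[1])*(p[0]-a[0]) > 0
                   for a, b in edges)]

def genus(vertices):
    return len(interior_lattice_points(vertices))

def is_hyperelliptic(vertices):
    """A polygon of genus >= 2 is hyperelliptic iff its interior
    lattice points are collinear."""
    pts = interior_lattice_points(vertices)
    if len(pts) <= 2:
        return True
    (x0, y0), (x1, y1) = pts[0], pts[1]
    return all((x1-x0)*(y-y0) == (y1-y0)*(x-x0) for x, y in pts)

T = lambda d: [(0, 0), (d, 0), (0, d)]   # standard triangle T_d
assert genus(T(4)) == 3                  # (d-1)(d-2)/2 for T_d
assert not is_hyperelliptic(T(4))        # interior is two-dimensional
# a rectangle whose interior points all lie on the line y = 1:
assert is_hyperelliptic([(0, 0), (5, 0), (5, 2), (0, 2)])
```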
In the case that $P$ is non-hyperelliptic, there is a strong relationship between $P$ and $P_\textrm{int}$. Let $\tau_1,\ldots,\tau_n$ be the one-dimensional faces of a (two-dimensional) lattice polygon $Q$. Then $Q$ can be defined as an intersection of half-planes: \[Q=\bigcap_{i=1}^n\mathcal{H}_{\tau_i},\] where $\mathcal{H}_{\tau}=\{(x,y)\in\mathbb{R}^2\,:\,a_\tau x+b_\tau y\leq c_\tau\}$ is the set of all points on the same side of the line containing $\tau$ as $Q$. Without loss of generality, we assume that $a_\tau,b_\tau,c_\tau\in\mathbb{Z}$ with $\gcd(a_\tau,b_\tau)=1$. With this convention, we define \[\mathcal{H}^{(-1)}_{\tau}=\{(x,y)\in\mathbb{R}^2\,:\,a_\tau x+b_\tau y\leq c_\tau+1\},\] and from there define the \emph{relaxed polygon} of $Q$ as \[Q^{(-1)}:=\bigcap_{i=1}^n\mathcal{H}^{(-1)}_{\tau_i}.\] We can think of $Q^{(-1)}$ as the polygon we would get by ``moving out'' the edges of $Q$. It is worth remarking that $Q^{(-1)}$ need not be a lattice polygon. We denote by $\tau_i^{(-1)}$ the intersection of $Q^{(-1)}$ with the line $a_{\tau_i} x+b_{\tau_i} y= c_{\tau_i}+1$ bounding $\mathcal{H}_{\tau_i}^{(-1)}$. It is not necessarily the case that $\tau_i^{(-1)}$ is a one-dimensional face of $Q^{(-1)}$; however, if $Q^{(-1)}$ is a lattice polygon, then each $\tau_i^{(-1)}$ must contain at least one lattice point, as proved in \cite[Lemma 2.2]{small2017_dim}. Examples where $Q^{(-1)}$ is not a lattice polygon, and where $Q^{(-1)}$ is a lattice polygon but an edge has collapsed, are illustrated in Figure \ref{figure:examples_relaxed_polygons}. There is a very important case when we are guaranteed to have that $Q^{(-1)}$ is a lattice polygon, namely when $Q=P_\textrm{int}$ for some non-hyperelliptic lattice polygon $P$.
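The relaxed polygon is easy to compute for explicit examples by intersecting consecutive relaxed edge lines. The following Python sketch is our own illustration (not the paper's computations); it assumes the edges are given as primitive inequalities $ax+by\leq c$ in cyclic order and that no edge of $Q^{(-1)}$ collapses:

```python
from fractions import Fraction

def relax_vertices(halfplanes):
    """Given the edges of a polygon Q as primitive inequalities
    a*x + b*y <= c listed in cyclic order, intersect consecutive
    relaxed lines a*x + b*y = c + 1 (via Cramer's rule) to get the
    candidate vertices of Q^(-1).  This simple sketch assumes no edge
    of Q^(-1) collapses, so consecutive lines really meet in a vertex."""
    verts = []
    n = len(halfplanes)
    for i in range(n):
        a1, b1, c1 = halfplanes[i]
        a2, b2, c2 = halfplanes[(i + 1) % n]
        det = a1 * b2 - a2 * b1
        x = Fraction((c1 + 1) * b2 - (c2 + 1) * b1, det)
        y = Fraction(a1 * (c2 + 1) - a2 * (c1 + 1), det)
        verts.append((x, y))
    return verts

def is_lattice(verts):
    return all(x.denominator == 1 and y.denominator == 1 for x, y in verts)

# Interior polygon of T_4: conv((1,1),(2,1),(1,2)), with edges
# -x <= -1, -y <= -1, x + y <= 3.  Relaxing recovers T_4 itself.
Q = [(-1, 0, -1), (0, -1, -1), (1, 1, 3)]
V = relax_vertices(Q)
assert is_lattice(V)
assert set((int(x), int(y)) for x, y in V) == {(0, 0), (4, 0), (0, 4)}
```

The last two assertions reflect Proposition \ref{prop:interior_maximal}: relaxing the interior polygon of a non-hyperelliptic polygon recovers the unique maximal polygon with that interior.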
\begin{figure}[hbt] \centering \includegraphics{examples_relaxed_polygons.pdf} \caption{Two lattice polygons, one with a relaxed polygon with a non-lattice vertex marked; and one with a collapsed edge in the relaxed (lattice) polygon} \label{figure:examples_relaxed_polygons} \end{figure} \begin{prop}[\cite{Koelman}, \S 2.2]\label{prop:interior_maximal} Let $P$ be a non-hyperelliptic lattice polygon, with interior polygon $P_\textrm{int}$. Then $P_\textrm{int}^{(-1)}$ is a lattice polygon containing $P$ whose interior polygon is also $P_\textrm{int}$. In particular, $P_\textrm{int}^{(-1)}$ is the unique maximal polygon with interior polygon $P_\textrm{int}$. \end{prop} If we are given a polygon $Q$ and we wish to know if there exists a lattice polygon $P$ with $P_\textrm{int}=Q$, it therefore suffices to compute the relaxed polygon $Q^{(-1)}$, and to check whether its vertices have integral coordinates. This might fail because two adjacent edges $\tau_i$ and $\tau_{i+1}$ of $Q$ are relaxed to intersect at a non-integral vertex of $Q^{(-1)}$; we also might have that some $\tau_i^{(-1)}$ is completely lost, which cannot happen when $Q^{(-1)}$ is a lattice polygon by \cite[Lemma 2.2]{small2017_dim}. Careful consideration of these obstructions will be helpful in classifying the maximal polygons of lattice widths $3$ and $4$ in Section \ref{section:lw3_and_4}. An important tool in studying lattice polygons is the notion of \emph{lattice width}. Let $P$ be a non-empty lattice polygon, and let $v=\langle a,b\rangle$ be a lattice direction with $\gcd(a,b)=1$. The \emph{width of $P$ with respect to $v$} is the smallest integer $d$ for which there exists $m\in \mathbb{Z}$ such that the strip \[m\leq ay-bx\leq m+d\] contains $P$. We denote this $d$ as $w(P,v)$. The \emph{lattice width of $P$} is the minimal width over all possible choices of $v$: \[\textrm{lw}(P)=\min_vw(P,v).\] Any $v$ which achieves this minimum is called a \emph{lattice width direction for $P$}. 
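For small explicit polygons, both $w(P,v)$ and $\textrm{lw}(P)$ can be computed by brute force. In the following Python sketch (our own illustration, not from the paper) we simply scan primitive directions with bounded entries, which suffices for the examples used here:

```python
from math import gcd

def width(vertices, v):
    """w(P, v) for v = <a, b>: the spread of the linear functional
    a*y - b*x over the polygon, computed on its vertices."""
    a, b = v
    vals = [a * y - b * x for x, y in vertices]
    return max(vals) - min(vals)

def lattice_width(vertices, bound=10):
    """Minimum of w(P, v) over primitive directions with entries of
    absolute value at most `bound` (enough for the small examples
    here; bounding the search in general takes more care).  Only
    b >= 0 is scanned, since v and -v give the same width."""
    dirs = [(a, b) for a in range(-bound, bound + 1)
            for b in range(bound + 1)
            if (a, b) != (0, 0) and gcd(abs(a), b) == 1]
    return min(width(vertices, v) for v in dirs)

assert lattice_width([(0, 0), (2, 0), (0, 2)]) == 2          # lw(T_2) = 2
assert lattice_width([(0, 0), (4, 0), (0, 4)]) == 4          # lw(T_d) = d
assert lattice_width([(0, 0), (0, 1), (3, 1), (5, 0)]) == 1  # trapezoid T_{3,5}
```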
Equivalently, $\textrm{lw}(P)$ is the smallest $d$ such that there exists a lattice polygon $P'$ equivalent to $P$ with $P'\subset \mathbb{R}\times[0,d]$. We recall the following result connecting the lattice widths of a polygon and its interior polygon. Let $T_d=\textrm{conv}((0,0),(d,0),(0,d))$ denote the standard triangle of degree $d$. \begin{lemma}[Theorem 4 in \cite{castryckcools-gonalities}]\label{lemma:lw_facts} For a lattice polygon $P$ we have $\textrm{lw}(P)=\textrm{lw}(P_{\textrm{int}})+2$, unless $P$ is equivalent to $T_d$ for some $d\geq 2$, in which case $\textrm{lw}(P)=\textrm{lw}(P_{\textrm{int}})+3=d$. \end{lemma} The following result tells us precisely which polygons have lattice width $1$ or $2$. It is a slight reworking of a result due to \cite{Koelman}, also presented in \cite[Theorem 10]{Castryck2012}. \begin{theorem}\label{theorem:lw_012} Let $P$ be a two-dimensional lattice polygon. If $\textrm{lw}(P)=1$, then $P$ is equivalent to \[T_{a,b}:=\textrm{conv}((0,0),(0,1),(a,1),(b,0))\] for some $a,b\in\mathbb{Z}$ with $0\leq a\leq b$ and $b\geq 1$. If $\textrm{lw}(P)=2$, then up to equivalence either $P=T_2$; or $g(P)=1$ and $P\neq T_3$ (all such polygons are illustrated in Figure \ref{figure:g1_lw2}); or $g(P)\geq 2$. In the latter case we have $\frac{1}{6}(g+3)(2g^2+15g+16)$ polygons, sorted into three types: \begin{itemize} \item Type 1: \[\includegraphics{hyp_type_1}\] where $g\leq i\leq 2g$.
\item Type 2: \[\includegraphics{hyp_type_2}\] where $0\leq i\leq g$ and $0\leq j\leq i$; or $g<i\leq 2g+1$ and $0\leq j\leq 2g-i+1$. \item Type 3: \[\includegraphics{hyp_type_3}\] where $0\leq k\leq g+1$ and $0\leq i\leq g+1-k$ and $0\leq j\leq i$; or $0\leq k\leq g+1$ and $g+1-k<i\leq 2g+2-2k$ and $0\leq j\leq 2g-i-2k+1$. \end{itemize} \begin{figure}[hbt] \centering \includegraphics{g1_lw2.pdf} \caption{The $14$ genus $1$ polygons with lattice width $2$} \label{figure:g1_lw2} \end{figure} \end{theorem} \begin{proof}The classification proved in \cite{Koelman} was similar, except with polygons sorted by genus ($g=0$, $g=1$, and $g\geq 2$ with all interior lattice points collinear) rather than by lattice width. We can translate their work into the desired result as follows. For $\textrm{lw}(P)=1$, we know $P$ has no interior lattice points, so $g=0$; all polygons of genus $0$ besides $T_2$ have lattice width $1$. By \cite{Koelman} all genus $0$ polygons besides $T_2$ are equivalent to $T_{a,b}$ for some $a,b\in\mathbb{Z}$ with $0\leq a\leq b$ and $b\geq 1$. For $\textrm{lw}(P)=2$, we deal with the three cases of $g=0$, $g=1$, and $g\geq 2$. If $g=0$, then the only polygon of lattice width $2$ is $T_2$. If $P$ is a polygon with genus $g=1$, then by Lemma \ref{lemma:lw_facts} we know that $\textrm{lw}(P)=\textrm{lw}(P_\textrm{int})+2=0+2=2$ unless $P$ is equivalent to $T_d$ for some $d$. The only value of $d$ such that $T_d$ has genus $1$ is $d=3$, so every genus $1$ polygon except $T_3$ has lattice width $2$. Finally, suppose $P$ is a polygon of lattice width $2$ and genus $g\geq 2$. Since $\textrm{lw}(T_d)=d$ and $g(T_2)=0$, we know $P\neq T_d$ for any $d$, and so $\textrm{lw}(P_\textrm{int})=\textrm{lw}(P)-2=2-2=0$. It follows that all the $g$ interior lattice points of $P$ must be collinear, and so $P$ is hyperelliptic. Conversely, if $P$ is a hyperelliptic polygon of genus $g\geq 2$, by definition the interior polygon $P_\textrm{int}$ has lattice width $0$.
Since no triangle $T_d$ has genus $g\geq 2$ with all its interior points collinear we may apply Lemma \ref{lemma:lw_facts} to conclude that $\textrm{lw}(P)=\textrm{lw}(P_\textrm{int})+2=2$. This means that for polygons of genus $g\geq 2$, being hyperelliptic is equivalent to having lattice width $2$. Combined with the classification of hyperelliptic polygons in \cite{Koelman}, this completes the proof. \end{proof} A counterpart of lattice width is \emph{lattice diameter}. Following \cite{bf}, the lattice diameter $\ell(P)$ is the length of the longest lattice line segment contained in the polygon $P$: \[\ell(P)=\max\{|L\cap P\cap\mathbb{Z}^2|-1\,:\,\textrm{$L$ is a line}\}.\] We define a \emph{lattice diameter direction} $\left<a,b\right>$ to be one such that there exists a line $L$ with slope vector $\left<a,b\right>$ with $|L\cap P\cap\mathbb{Z}^2|-1=\ell(P)$. We remark that there exist other works where lattice diameter is defined as the largest number of collinear lattice points in the polygon $P$ \cite{alarcon}; this is simply one more than the convention we set above. The following result relates $\ell(P)$ to $\textrm{lw}(P)$. \begin{theorem}[\cite{bf}, Theorem 3] We have $\textrm{lw}(P)\leq \lfloor\frac{4}{3}\ell(P)\rfloor+1$. \end{theorem} We now present background material on triangulations and tropical curves. Assume for the remainder of the section that $P$ is a lattice polygon of genus $g\geq 2$. A \emph{unimodular triangulation} of $P$ is a subdivision of $P$ into lattice triangles of area $\frac{1}{2}$ each. Such a triangulation $\Delta$ is called \emph{regular} if there exists a height function $\omega:P\cap\mathbb{Z}^2\rightarrow \mathbb{R}$ inducing $\Delta$. This means that $\Delta$ is the projection of the lower convex hull of the image of $\omega$ back onto $P$. See \cite{triangulations} for details on regular triangulations. 
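As a toy illustration of regularity (ours, not the paper's): on the unit square, a height function on the four corners selects a triangulation via the lower convex hull, namely the diagonal over whose midpoint the lifted surface is lower.

```python
def square_diagonal(h00, h10, h01, h11):
    """For the unit square with heights at (0,0), (1,0), (0,1), (1,1),
    the lower convex hull of the lifted corners contains the diagonal
    whose endpoint heights have the smaller sum (the lift over the
    square's center is lower along that diagonal).  Equal sums mean
    the four lifted points are coplanar, so no diagonal is induced."""
    s_main, s_anti = h00 + h11, h10 + h01
    if s_main == s_anti:
        return None                      # heights induce no triangulation
    return "(0,0)-(1,1)" if s_main < s_anti else "(1,0)-(0,1)"

# Pulling one corner up forces the diagonal that avoids that corner:
assert square_diagonal(0, 0, 0, 1) == "(1,0)-(0,1)"
assert square_diagonal(1, 0, 0, 0) == "(1,0)-(0,1)"
assert square_diagonal(0, 1, 0, 0) == "(0,0)-(1,1)"
```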
Given a regular unimodular triangulation $\Delta$ of a lattice polygon $P$, we can consider the \emph{weak dual graph} of $\Delta$, which consists of $1$ vertex for each elementary triangle, with two vertices connected if and only if the corresponding triangles share an edge. Each vertex in this graph has degree $1$, $2$, or $3$, depending on how many edges the corresponding triangle has on the boundary of $P$. We transform the weak dual graph of $\Delta$ into a $3$-regular graph as follows: first, iteratively delete any $1$-valent vertices and their attached edges. This will yield a graph with all vertices of degree $2$ or $3$. Remove each degree $2$ vertex by concatenating the two edges incident to it. Since we have assumed that $g(P)\geq 2$, the end result is a $3$-regular graph $G$ (with loops and parallel edges allowed). We call $G$ the \emph{skeleton} associated to $\Delta$. Any $G$ that arises from such a procedure is called a \emph{tropically planar graph}. An example of a regular unimodular triangulation, the weak dual graph, and the tropically planar skeleton are pictured in Figure \ref{figure:tropically_planar_full}. Note that there is a one-to-one correspondence between the interior lattice points of $P$ and the bounded faces of $G$ in this embedding, where two faces of $G$ share an edge if and only if the corresponding interior lattice points are connected by an edge in $\Delta$. \begin{figure}[hbt] \centering \includegraphics[scale=0.6]{tropically_planar_full.pdf} \caption{A regular unimodular triangulation of a polygon, the weak dual graph of the triangulation, and the corresponding tropically planar skeleton} \label{figure:tropically_planar_full} \end{figure} It is worth remarking that we could still construct a graph $G$ from a non-regular triangulation. 
The reason that we insist that $\Delta$ is regular is so that the graph $G$ appears as a subset of a smooth tropical plane curve, which is a balanced $1$-dimensional polyhedral complex that is dual to a regular unimodular triangulation of a lattice polygon; see \cite{ms}. (Indeed, the regularity is necessary if we wish to endow a skeleton with the structure of a \emph{metric graph}, with lengths assigned to its edges, as explored in \cite{BJMS} and \cite{small2017_dim}.) Most of the results that we prove in this paper, and that we recall for the remainder of this section, also hold if we expand to graphs that arise as dual skeleta of \emph{any} unimodular triangulation of a lattice polygon. The first Betti number of a tropically planar graph, also known as its genus\footnote{This terminology comes from \cite{bn} and is motivated by algebraic geometry; it is unrelated to the notion of graph genus defined in terms of embeddings on surfaces. The first Betti number of a graph is also sometimes called its \emph{cyclomatic number}.}, is equal to the number of interior lattice points of the lattice polygon $P$ giving rise to it. It is also equal to the number of bounded faces in any planar embedding of the graph. A systematic method of computing all tropically planar graphs of genus $g$ was designed and implemented in \cite{BJMS} for $g\leq 5$. The algorithm is brute-force, and works by considering all maximal lattice polygons of genus $g$, finding all regular unimodular triangulations of them, and computing the dual skeleta. These computations were pushed up to $g= 7$ in \cite{small2017_tpg}. In general there is no known method of checking whether an arbitrary graph is tropically planar short of this massive computation. A fruitful direction in the study of tropically planar graphs has been finding properties or patterns that are forbidden in such graphs, so as to quickly rule out particular examples. 
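Before discussing these patterns, we record an elementary numerical check. A tropically planar graph $G$ of genus $g$ is connected and $3$-regular, so its vertex count $V$ and edge count $E$ are forced: the first Betti number of a connected graph gives $E-V+1=g$, while $3$-regularity gives $2E=3V$, and solving yields \[V=2g-2\qquad\textrm{and}\qquad E=3g-3.\] For instance, every tropically planar graph of genus $3$ has $4$ vertices and $6$ edges.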
Since the graph before skeletonization is dual to a unimodular triangulation of a polygon, any tropically planar graph is $3$-regular, connected, and planar. Several additional constraints are summarized in the following result. \begin{theorem}[\cite{cdmy}, Proposition 4.1; \cite{small2017_tpg}, Theorem 3.4; \cite{joswigtewari}, Theorems 10 and 14] Suppose that $G$ is a $3$-regular graph of genus $g$ of one of the forms illustrated in Figure \ref{figure:forbidden_patterns}, where each gray box represents a subgraph of genus at least $1$. If $G$ is tropically planar, then it must be of the third or fourth form, with $g=4$ for the third form and $g\leq 5$ for the fourth. In particular, if $g\geq 6$, then $G$ is not tropically planar. \end{theorem} \begin{figure}[hbt] \centering \includegraphics[scale=0.7]{forbidden_patterns.pdf} \caption{Forbidden patterns in tropically planar graphs of genus $g\geq 6$} \label{figure:forbidden_patterns} \end{figure} The proofs of these results all use the following observation: any cut-edge in a tropically planar graph must arise from a split in the dual unimodular triangulation that divides the polygon into two polygons of positive genus. From there, one argues that collections of such splits cannot appear in lattice polygons in ways that would give rise to graphs of the pictured forms. For planar graphs that are $2$-edge-connected and thus have no cut-edges, the only known general criterion to rule out tropical planarity is the notion of crowdedness \cite{morrisonhyp}. However, crowded graphs are ones that cannot be dual to \emph{any} triangulation of \emph{any} point set in $\mathbb{R}^2$, regardless of whether or not the point set comes from a convex lattice polygon; thus it is not especially interesting that crowded graphs are not tropically planar.
In Section \ref{section:big_face_graphs} we will find a family of $2$-edge-connected, $3$-regular planar graphs that are not crowded but are still not tropically planar, the first known such examples. \section{A classification of all panoptigons}\label{section:panoptigons} Let $P$ be a convex lattice polygon. Recall from the introduction that $P$ is a {panoptigon} if there is a lattice point $p\in P\cap\mathbb{Z}^2$ such that every other point in $P\cap\mathbb{Z}^2$ is visible from $p$. In this section we will classify all panoptigons, stratified by a combination of genus and lattice width. We begin with the panoptigons of genus $0$. \begin{lemma}\label{lemma:panoptigon_g0} Let $P$ be a panoptigon of genus $0$. Then $P$ is one of the following polygons, up to lattice equivalence: \[\includegraphics[]{panoptigons_g0}\] where $0\leq a\leq\min\{2,b\}$. \end{lemma} \begin{proof} By \cite{Koelman}, any genus $0$ polygon is equivalent either to the triangle $T_2$, or to the (possibly degenerate) trapezoid $T_{a,b}$ where $0\leq a\leq b$ and $1\leq b$. The triangle of degree $2$ is a panoptigon, as any non-vertex lattice point can see every other lattice point. For $T_{a,b}$, we note that if $a\geq 3$ then the polygon is not a panoptigon: each lattice point $p$ is on a row with at least $3$ other lattice points, not all of which can be visible from $p$ since the $4$ (or more) points in that row are collinear. However, if $a\leq 2$, then a point $p$ can be chosen on the top row that can see the other $a$ points on the top row, as well as all points on the bottom row. Thus $T_{a,b}$ is a panoptigon if and only if $a\leq 2$. \end{proof} For polygons with exactly one interior lattice point, there is no obstruction to being a panoptigon. \begin{lemma}\label{lemma:panoptigon_g1} If $P$ is a polygon of genus $1$, then $P$ is a panoptigon. \end{lemma} \begin{proof} Let $p$ be the unique interior lattice point of $P$, and let $q$ be any other lattice point of $P$.
Since $g(P)=1$, the point $q$ must be on the boundary. By convexity, the line segment $\overline{pq}$ must have its relative interior contained in the interior of the polygon, and so the line segment does not intersect $\partial P$ outside of $q$. Since $p$ is the only interior lattice point, we have that the only lattice points of $\overline{pq}$ are its endpoints. It follows that $q$ is visible from $p$ for all $q\in P\cap\mathbb{Z}^2-\{p\}$. We conclude that $P$ is a panoptigon with panoptigon point $p$. \end{proof} We now consider hyperelliptic polygons of genus $g\geq 2$. We will characterize precisely which of these are panoptigons based on the classification of them in Theorem \ref{theorem:lw_012} into Types 1, 2, and 3. Any hyperelliptic polygon can be put into one of these forms in the horizontal strip $\mathbb{R}\times[0,2]$; thus we may say a lattice point $(a,b)$ of such a polygon is at height $b$, where every point is either at height $0$, height $1$, or height $2$. \begin{lemma}\label{lemma:hyperelliptic_panoptigon} Let $P$ be a hyperelliptic polygon of genus $g\geq 2$, transformed so that it is of one of the forms presented in Theorem \ref{theorem:lw_012}. Then $P$ is a panoptigon if and only if \begin{itemize} \item $P$ is of Type 1, with $g\leq 3$; or \item $P$ is of Type 2, either with $g\leq 2$, or with $j=0$ and $0\leq i\leq 1$; or \item $P$ is of Type 3, either with $j=0$ and $i\leq 2$, with $k$ odd if $i=0$ and $k$ even if $i=2$; or with $i=0$ and $j\leq 2$, and $k$ odd if $j=0$ and $k$ even if $j=2$. \end{itemize} \end{lemma} For the reader's convenience we recall the polygons of Types 1, 2, and 3 in Figure \ref{figure:hyp_all_types}. \begin{figure}[hbt] \centering \includegraphics[scale=0.8]{hyp_all_types.pdf} \caption{Hyperelliptic polygons of Types 1, 2, and 3} \label{figure:hyp_all_types} \end{figure} \begin{proof} We start by making the following observations.
If $p=(a,b)$ is a panoptigon point for a hyperelliptic polygon $P$, then there must be at most $3$ points at height $b$; and if there are exactly $3$, then $p$ must be the middle such point. We also make several remarks in the case that $b\in\{0,2\}$. There are no obstructions to a point at height $b$ seeing a point at height $1$, so we will not concern ourselves with this. Choose $b'\in\{0,2\}$ distinct from $b$, and suppose height $b'$ has $2$ or more lattice points; then two of those points have the form $q=(a',b')$ and $q'=(a'+1,b')$. We claim that $p$ cannot view both $q$ and $q'$. Writing $p=(a,b)$, the midpoints of the line segments $\overline{pq}$ and $\overline{pq'}$ have coordinates $\left(\frac{a+a'}{2},1\right)$ and $\left(\frac{a+a'+1}{2},1\right)$, respectively. Exactly one of $\frac{a+a'}{2}$ and $\frac{a+a'+1}{2}$ is an integer, meaning that either $q$ or $q'$ is not visible from $p$. So, if $p=(a,b)$ is a panoptigon point at height $b\in\{0,2\}$, there must be exactly one lattice point $q=(a',b')$ at height $b'\in \{0,2\}$ with $b'\neq b$; moreover, we must have that $a-a'$ is odd. We are ready to determine the possibilities for a hyperelliptic panoptigon $P$ of genus $g\geq 2$, sorted by type. \begin{itemize} \item Let $P$ be a hyperelliptic polygon of Type 1. If $g\leq 3$, then we may choose $p=(a,1)$ that can see every other point at height $1$, as well as all points at heights $0$ and $2$; in this case $P$ is a panoptigon. If $g\geq 4$, then there are at least $4$ points at height $1$. Moreover, the number of points at height $0$ is $i+1$ where $g\leq i\leq 2g$, and we have $i+1\geq 5$ since $g\geq 4$. Thus no height can contain a panoptigon point: heights $0$ and $1$ each contain at least $4$ points, and a panoptigon point at height $2$ would require height $0$ to contain exactly one point. This means that for $g\geq 4$, $P$ cannot be a panoptigon. \item Let $P$ be a hyperelliptic polygon of Type 2. If $g=2$, then $P$ has exactly three points at height $1$, and we can choose the middle point as a panoptigon point.
Now assume $g\geq 3$; we cannot choose a panoptigon point at height $1$, since there are $g+1\geq 4$ points at that height. To avoid having $4$ points on both the top and bottom rows we need $0\leq i\leq g$ and $0\leq j\leq i$; and one of $i$ and $j$ must be $0$, so we need $j=0$ since $j\leq i$. From there we need at most $3$ lattice points on the bottom row, so $0\leq i\leq 2$. If $i=2$, then the only possible panoptigon point is the middle one on the bottom row, namely $(1,0)$; but this point cannot see $(1,2)$, a contradiction. Thus $0\leq i\leq 1$; note that in either case $(0,0)$ can serve as a panoptigon point. \item Finally, let $P$ be a hyperelliptic polygon of Type 3. We cannot have a panoptigon point at height $1$, since there are at least $g+2\geq 4$ points at that height. If there is a panoptigon point at height $0$, then we must have at most $3$ points at height $0$ and exactly one point at height $2$; that is, we must have $j=0$ and $i\leq 2$. Moreover, we need to verify that we may choose a panoptigon point at height $0$ that can see the unique point at height $2$; this can always be done if $i=1$, but if $i=0$ then we need $k$ odd (the only possible panoptigon point is then $(0,0)$), and if $i=2$ we need $k$ even (the only possible panoptigon point is then $(1,0)$). A similar argument shows that we can choose a panoptigon point at height $2$ if and only if $i=0$ and $j\leq 2$, with $k$ odd if $j=0$ and $k$ even if $j=2$. \end{itemize} \end{proof} As with the lattice width $1$ panoptigons, we find infinitely many lattice width $2$ panoptigons, namely those of Type 2 with $j=0$ and $0\leq i\leq 1$, and those of Type 3 satisfying the conditions of Lemma \ref{lemma:hyperelliptic_panoptigon}. We have now classified all hyperelliptic panoptigons, and have found that there are infinitely many of lattice width $1$ and infinitely many of lattice width $2$. Our last step is to understand non-hyperelliptic panoptigons; with the exception of the triangle $T_3$, this is equivalent to panoptigons of lattice width $3$ or more.
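Throughout the upcoming arguments we freely use the following elementary criterion for visibility between lattice points: a lattice point $q$ is visible from a lattice point $p$ if and only if the coordinates of $q-p$ are relatively prime. Indeed, if $d$ is the greatest common divisor of the coordinates of $q-p$, then the lattice points on the segment $\overline{pq}$ are precisely $p+\frac{k}{d}(q-p)$ for $0\leq k\leq d$. For example, $(3,2)$ is visible from the origin, while $(4,2)$ is not, since $(2,1)$ lies between them.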
We are now ready to prove that the total number of lattice points of such a panoptigon is at most $13$. \begin{proof}[Proof of Theorem \ref{theorem:at_most_13}] Let us consider the lattice diameter $\ell(P)$ of $P$. We know by \cite[Theorem 1]{alarcon} that $|P\cap\mathbb{Z}^2|\leq (\ell(P)+1)^2$, so if $\ell(P)\leq 2$ we have $|P\cap\mathbb{Z}^2|\leq 9$. Thus we may assume $\ell(P)\geq 3$. Perform an $\textrm{SL}_2(\mathbb{Z})$ transformation so that $\langle 1,0\rangle$ is a lattice diameter direction for $P$, and translate the polygon so that the origin $O=(0,0)$ is a panoptigon point. Thus $P\cap\mathbb{Z}^2$ consists of $O$ and a collection of visible points. Since $\ell(P)\geq 3$ and $\langle 1,0\rangle$ is a lattice diameter direction, we know that the polygon $P$ must contain $4$ lattice points of the form $(a,b)$, $(a+1,b)$, $(a+2,b)$, and $(a+3,b)$. We claim that $b\in\{-1,1\}$. Certainly $b\neq 0$, since there are only three such points allowed in $P$: $(0,0)$ and $(\pm 1, 0)$. We also know that $b$ cannot be even: any set $\mathbb{Z}\times \{2k\}$ has every second point invisible from the origin. Suppose for the sake of contradiction that the points $(a,b)$, $(a+1,b)$, $(a+2,b)$, and $(a+3,b)$ are in $P$ with $b$ odd and $b\geq 3$ (a symmetric argument will hold for $b\leq -3$). Consider the triangle $T=\textrm{conv}(O,(a,b),\ldots,(a+3,b))$. By convexity, $T\subset P$. Consider the line segment $T\cap L$, where $L$ is the line defined by $y=b-1$. The length of this line segment is $3-\frac{3}{b}$, and since $b\geq 3$ this is at least $2$. Any line segment of length $2$ at height $b-1$ will intersect at least two lattice points. But since $b-1$ is even and $b-1\geq 2$, at least one of these lattice points is not visible from $O$. Such a lattice point must be contained in $T$, and therefore in $P$, a contradiction. Thus we have that $b=\pm 1$.
Rotating our polygon $180^\circ$ if necessary, we may assume that $b=-1$, so that the points $(a,-1),\ldots,(a+3,-1)$ are contained in $P$. It is possible that the number $k$ of lattice points on the line defined by $y=-1$ is more than $4$; up to relabelling, we may assume that $(a,-1),\ldots,(a+k-1,-1)$ are lattice points in $P$ while $(a-1,-1)$ and $(a+k,-1)$ are not, where $k\geq 4$. Applying a shearing transformation $\left(\begin{matrix}1&a+1\\0&1\end{matrix}\right)$, we may further assume that the points at height $-1$ are precisely $(-1,-1),\ldots,(k-2,-1)$. We will now make a series of arguments that rule out many lattice points from being contained in $P$. The end result of these constraints is pictured in Figure \ref{figure:ruling_out_points}, with points labelled by the argument that rules them out. \begin{itemize} \item[(i)] The polygon $P$ has (regular) width at least $3$ at height $-1$, and width strictly smaller than $2$ at heights $2$ and $-2$, since it cannot contain two consecutive lattice points at those heights. It follows from convexity that the width of the polygon is strictly smaller than $1$ at height $-3$, and that the polygon cannot have any lattice points at all at height $-4$. It also follows that the polygon cannot have a nonnegative width at height $8$. Thus every lattice point $(x,y)$ in the polygon satisfies $-3\leq y\leq 7$. \item[(ii)] We can further restrict the possible heights by showing that there can be no lattice points at height $-3$. Suppose there were such a point $(x,-3)$ in $P$. Consider the triangle $\textrm{conv}((x,-3),(-1,-1),(2,-1))$. This triangle has area $3$, so by Pick's Theorem \cite{pick} satisfies $3=g+\frac{b}{2}-1$, or $4=g+\frac{b}{2}$, where $g$ and $b$ are the number of interior lattice points and boundary lattice points of the triangle, respectively.
The $4$ lattice points at height $-1$ contribute $2$ to this sum, and the one lattice point at height $-3$ contributes $\frac{1}{2}$ to this sum, meaning that the lattice points at height $-2$ contribute $\frac{3}{2}$ to this sum. It follows that there must be at least two lattice points at height $-2$; but this is a contradiction, since at least one of these points will be invisible from $O$. We conclude that $P$ cannot contain a lattice point of the form $(x,-3)$, and thus $y\geq -2$ for all lattice points $(x,y)\in P$. \item[(iii)] We know that the lattice point $(-2,0)$ is not in $P$ since it is not visible from $O$. If there is any lattice point of the form $(x,y)$ with $y\geq 1$ and $y\leq -x-2$, then the triangle $\textrm{conv}(O,(-1,-1),(x,y))$ will contain $(-2,0)$. Thus no such lattice point $(x,y)$ can exist in $P$. \item[(iv)] No point of the form $(x,y)$ with $x\geq 2$ and $y\geq 0$ may appear in $P$: this would force the point $(2,0)$ to appear, as it would lie in the triangle $\textrm{conv}(O,(2,-1),(x,y))$. \item[(v)] There are now only finitely many allowed lattice points $(x,y)$ with $y\geq 1$, namely those with $-y-1\leq x\leq 1$ and $1\leq y\leq 7$. For each such point, we consider the triangle $\textrm{conv}((x,y),(-1,-1),(2,-1))$. We claim that only the $13$ choices of $(x,y)$ pictured in Figure \ref{figure:ruling_out_points} do not introduce a forbidden point. To see this, we note that the points $(0,2)$, $(-2,2)$ and $(-2,4)$ are all forbidden. The point $(0,2)$ rules out $(x,y)$ with $x=1$ and $y\geq 5$; with $x=0$ and $y\geq 2$; with $x=-1$ and $y\geq 4$; and with $x=-2$ and $y\geq 5$. For $x=-2$, the points $(-2,2)$ and $(-2,4)$ are already ruled out. For all remaining points with $x\leq -3$, every point besides $(-3,2)$, $(-4,3)$, and $(-5,4)$ introduces the point $(-2,2)$ or $(-2,4)$ or both. This establishes our claim. \item[(vi)] By assumption, we know there are no lattice points of the form $(x,-1)$ where $x\leq -2$.
It follows that there are also no lattice points of the form $(x,-2)$ where $x\leq -4$, since $(-2,-1)$ would lie in the convex hull of such a point with $O$ and $(2,-1)$. \item[(vii)] We will now use the fact that we have assumed that $P$ satisfies $\textrm{lw}(P)\geq 3$. We cannot have that $P$ is contained in the strip $-2\leq y\leq 0$, so there must be at least one point $(x,y)$ with $y\geq 1$. If there is a point of the form $(x',-1)$ with $x'\geq 6$, then we would have that $\textrm{conv}((x,y),(x',-1),(-1,-1))$ contains the point $(2,0)$, which is invisible. Thus we can only have points $(x',-1)$ if $-1\leq x'\leq 5$. A similar argument shows that $P$ can only contain a point $(x,-2)$ if $x$ is odd with $-3\leq x\leq 9$. \end{itemize} \begin{figure}[hbt] \centering \includegraphics{ruling_out_points.pdf} \caption{Possible lattice points in $P$, with impossible points labelled by the argument ruling them out} \label{figure:ruling_out_points} \end{figure} We have now narrowed the possible lattice points in our polygon down to the $30$ lattice points in Figure \ref{figure:ruling_out_points}, five of which we know appear in $P$. For every such point $(x,y)$, there does indeed exist a polygon $P$ with $\textrm{lw}(P)\geq 3$ containing $(x,y)$ as well as the five prescribed points such that $P\cap\mathbb{Z}^2$ is a subset of the $30$ allowed points, so we cannot narrow down any further. One way to finish the proof is by use of a computer to determine all possible subsets of the $25$ points that can be added to our initial $5$ points to yield a polygon of lattice width at least $3$; we would then simply check the largest number of lattice points. We have carried out this computation, and present the results in Appendix \ref{section:appendix}. We also present the following argument, which will complete our proof without needing to rely on a computer. First we split into four cases, depending on the number $k$ of lattice points at height $-1$: $4$, $5$, $6$, or $7$.
When there are more than $4$, we can eliminate more of the candidate points $(x,y)$ with $y\geq 1$ or $y=-2$; the sets of allowable points in these four cases are illustrated in Figure \ref{figure:four_cases_allowed_points}. In each case we will argue that our polygon $P$ has at most $13$ lattice points. \begin{figure}[hbt] \centering \includegraphics[scale=0.8]{four_cases_allowed_points.pdf} \caption{Narrowing down possible points depending on the number of points at height $-1$} \label{figure:four_cases_allowed_points} \end{figure} \begin{itemize} \item Suppose $k=4$. There are $20$ possible points at height $-1$ or above; since there is at most one point at height $-2$, it suffices to show that we can fit no more than $12$ lattice points at height $-1$ or above into a lattice polygon. First suppose the point $(-5,4)$ is in $P$. This eliminates $9$ possible points from appearing in $P$, yielding at most $20-9+1=12$ lattice points total in $P$. Leaving out $(-5,4)$ but including $(-4,3)$ similarly eliminates $9$ possible points. Including $(-2,3)$ eliminates $8$; including $(-1,3)$ and leaving out $(-2,3)$ eliminates $8$; including $(1,4)$ eliminates $9$; and including $(1,3)$ and leaving out $(1,4)$ eliminates $9$. In all these cases, we can conclude that $P$ has at most $13$ lattice points in total. The only remaining case is that all lattice points of $P$ have heights between $-2$ and $2$. The polygon can have at most one lattice point at height $-2$, at most one lattice point at height $2$, and some assortment of the $11$ total points with heights between $-1$ and $1$. Once again, $P$ can have at most $13$ lattice points. \item Suppose $k=5$. If $P$ includes the point $(-4,3)$, then it cannot include $(-2,3)$, $(-1,2)$, or $(0,1)$. Combined with the fact that $P$ can only have one lattice point at height $-2$, this leaves $P$ with at most $13$ total lattice points. A similar argument holds if $P$ includes the point $(-2,3)$. 
If $P$ contains neither $(-4,3)$ nor $(-2,3)$, then it has at most $1$ point at height $3$, at most one point at height $-2$, and some collection of the $11$ points between. Thus $P$ has at most $13$ lattice points. \item Suppose $k=6$. Since $P$ has at most one lattice point at height $-2$, and only $12$ points are allowed outside of that height, $P$ has at most $13$ lattice points total. \item Suppose $k=7$. Since $P$ has at most one lattice point at height $-2$, and only $11$ points are allowed outside of that height, $P$ has at most $12$ lattice points total. \end{itemize} We conclude that $|P\cap\mathbb{Z}^2|\leq 13$. \end{proof} As detailed in Appendix \ref{section:appendix}, we enumerated all non-hyperelliptic polygons containing the five prescribed points from the previous proof, along with some subset of the other $25$ permissible points. The end result was $69$ non-hyperelliptic panoptigons of lattice diameter $3$ or more, up to equivalence. In the same appendix we show that there are $3$ non-hyperelliptic panoptigons with lattice diameter at most $2$, yielding a grand total of $72$ non-hyperelliptic panoptigons. If we instead wish to count panoptigons of lattice width at least $3$, this count becomes $73$ due to the inclusion of $T_3$. We remark that it is possible to give a much shorter proof that there are only finitely many non-hyperelliptic panoptigons. Suppose that $P$ is a panoptigon of lattice diameter $\ell(P)\geq 7$. By the same argument that started our previous proof, we may assume without loss of generality that $P$ has $(0,0)$ as a panoptigon point as well as eight or more lattice points at height $-1$. If $P$ contains a point of the form $(x,y)$ where $y\geq 2$, then the line segment $P\cap L$ where $L$ is the $x$-axis must have length at least $7\left(1-\frac{1}{y+1}\right)\geq 7\left(1-\frac{1}{2+1}\right)=\frac{14}{3}>4$. 
As such $P$ must contain at least $4$ points at height $0$, impossible since there are only $3$ visible points at this height. Similarly $P$ can have no lattice points at height $1$: these would force the inclusion of either $(2,0)$ or $(-2,0)$. Finally, if $P$ contains a point of the form $(x,y)$ where $y\leq -3$, then the line segment $P\cap L'$ where $L'$ is the horizontal line at height $-2$ must have width at least $7\left(1-\frac{1}{|y|-1}\right)\geq 7\left(1-\frac{1}{3-1}\right)=\frac{7}{2}>3$. As such we know that $P$ must contain at least $3$ lattice points at height $-2$, impossible since no two consecutive points at that height are both visible. Thus we know that $P$ only has lattice points at heights $0$, $-1$, and $-2$, and so is a hyperelliptic polygon. This means that if $P$ is a non-hyperelliptic panoptigon, it must have $\ell(P)\leq 6$. Since $|P\cap\mathbb{Z}^2|\leq (\ell(P)+1)^2$, it follows that if $P$ is a non-hyperelliptic panoptigon then it must have at most $(6+1)^2=49$ lattice points; there are only finitely many such polygons. In principle one could enumerate all such polygons with at most $49$ lattice points as in \cite{Castryck2012} and check which are panoptigons; this would be much less efficient than the computation arising from our longer proof. \section{Characterizing all maximal polygons of lattice width $3$ or $4$}\label{section:lw3_and_4} In this section we will characterize all maximal polygons of lattice width $3$ or $4$. By Lemma \ref{lemma:lw_facts}, this will allow us to determine which polygons of lattice width $1$ or $2$ can be the interior polygon of some lattice polygon. This will be helpful in Section \ref{section:big_face_graphs}, when we will need to know which of the infinitely many panoptigons of lattice width at most $2$ can be an interior polygon. For lattice width $3$, we do have the triangle $T_3$ as an exceptional case; all other polygons with lattice width $3$ must have an interior polygon of lattice width $1$.
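As a small example for orientation (using only Lemma \ref{lemma:lw_facts}): the square with vertices $(-1,-1)$, $(2,-1)$, $(2,2)$, and $(-1,2)$ has interior polygon the unit square $T_{1,1}$, which has lattice width $1$; since the square is not equivalent to any $T_d$, its lattice width is $1+2=3$.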
\begin{prop}\label{prop:lattice_width_3_maximal} Let $P$ be a maximal polygon. Then $P$ has lattice width $3$ if and only if up to equivalence we either have $P=T_3$, or $P=T_{a,b}^{(-1)}$ where $a\geq \frac{1}{2}b-1$, $0\leq a\leq b$, and $b\geq 1$, and where $T_{a,b}\neq T_1$. \end{prop} \begin{proof} If $P$ is equivalent to $T_3$, then it has lattice width $3$ as desired. If $P$ is equivalent to some other $T_d$, then $P$ has lattice width $d\neq 3$, and so need not be considered. Now assume $P$ is not equivalent to $T_d$ for any $d$, so that $P$ has lattice width $3$ if and only if $P_\textrm{int}$ has lattice width $1$ by Lemma \ref{lemma:lw_facts}. This is the case if and only if $P_\textrm{int}$ is equivalent to $T_{a,b}$ for some $a,b\in\mathbb{Z}$ where $0\leq a\leq b$ and $b\geq 1$ (where $T_{a,b}\neq T_1$) by Theorem \ref{theorem:lw_012}. Thus to prove our claim, it suffices by Proposition \ref{prop:interior_maximal} to show that $T_{a,b}^{(-1)}$ is a lattice polygon if and only if $a\geq \frac{1}{2}b-1$. We set the following notation to describe $T_{a,b}$. Starting with the face connecting $(0,0)$ and $(0,1)$ and moving counterclockwise, label the faces of $T_{a,b}$ as $\tau_1$, $\tau_2$, $\tau_3$, and $\tau_4$ (where $\tau_4$ does not appear if $a=0$). Pushing out the faces, we find that $\tau_1^{(-1)}$ lies on the line $x=-1$, $\tau_2^{(-1)}$ on the line $y=-1$, $\tau_3^{(-1)}$ on the line $x+(b-a)y= b+1$, and $\tau_4^{(-1)}$ on the line $y=2$. Note that working cyclically, we have $\tau_i^{(-1)}\cap\tau_{i+1}^{(-1)}$ is a lattice point: we get the points $(-1,-1)$, $(2b-a+1,-1)$, $(2a-b+1,2)$, and $(-1,2)$. Thus if these are the vertices of $T_{a,b}^{(-1)}$, then $T_{a,b}^{(-1)}$ is a lattice polygon. Certainly $(-1,-1)$ and $(2b-a+1,-1)$ appear in $T_{a,b}^{(-1)}$. The points $(2a-b+1,2)$ and $(-1,2)$ will appear as (not necessarily distinct) vertices of $T_{a,b}^{(-1)}$ if and only if $2a-b+1\geq -1$; that is, if and only if $a\geq\frac{1}{2}b-1$.
Thus in the case that $a\geq\frac{1}{2}b-1$, we have that $T_{a,b}^{(-1)}$ is a lattice polygon with vertices at $(-1,-1)$, $(2b-a+1,-1)$, $(2a-b+1,2)$, and $(-1,2)$. If on the other hand $a<\frac{1}{2}b-1$, then $\tau_4^{(-1)}$ is not a face of $T_{a,b}^{(-1)}$, and so one of the vertices of $T_{a,b}^{(-1)}$ is $\tau_1^{(-1)}\cap\tau_3^{(-1)}$. These faces intersect at the point $\left(-1,\frac{b+2}{b-a}\right)$, where we may divide by $b-a$ since $a<\frac{1}{2}b-1$ and so $a\neq b$. Note that $b-a> b-\frac{1}{2}b+1=\frac{1}{2}(b+2)$. It follows that $\frac{b+2}{b-a}<2$, and certainly $\frac{b+2}{b-a}>1$, so $\left(-1,\frac{b+2}{b-a}\right)$ is not a lattice point. We conclude that $T_{a,b}^{(-1)}$ is a lattice polygon if and only if $a\geq \frac{1}{2}b-1$, thus completing our proof. \end{proof} The explicitness of this result, combined with the fact that $g\left(T_{a,b}^{(-1)}\right)=a+b+2$, allows us to count the number of maximal polygons $P$ of genus $g$ with lattice width $3$. First, note that there are $\left\lfloor\frac{g-2}{2}\right\rfloor+1$ choices of $T_{a,b}$ with $g$ lattice points: with our assumption that $a\leq b$, we can choose $a$ to be any number from $0$ up to $\left\lfloor\frac{g-2}{2}\right\rfloor$, and $b=g-2-a$ is determined from there. Next, we will exclude those choices of $a$ that yield $a<\frac{1}{2}b-1$, or equivalently $a\leq \frac{1}{2}b-\frac{3}{2}$ since $a,b\in\mathbb{Z}$. Given that $a+b=g-2$, this is equivalent to $a\leq\frac{1}{2}(g-2-a)-\frac{3}{2}$, or $\frac{3}{2}a\leq\frac{1}{2}g-\frac{5}{2}$, or $a\leq \frac{g-5}{3}$. Thus the number of choices we must exclude from the total count $\left\lfloor\frac{g-2}{2}\right\rfloor+1$ is $\left\lfloor \frac{g-5}{3}\right\rfloor+1$ (which equals $0$ when $g=4$). We conclude that the number of maximal polygons of genus $g$ with lattice width $3$ is \[\left\lfloor\frac{g-2}{2}\right\rfloor-\left\lfloor\frac{g-5}{3}\right\rfloor\] when $g\geq 4$ (which allows us to ignore $T_3$).
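As a small worked example of this count, take $g=5$. The trapezoids $T_{a,b}$ with $a+b+2=5$ and $0\leq a\leq b$ are $T_{0,3}$ and $T_{1,2}$, and only $T_{1,2}$ satisfies $a\geq \frac{1}{2}b-1$. Thus there is exactly one maximal polygon of genus $5$ and lattice width $3$, namely $T_{1,2}^{(-1)}$, which has vertices $(-1,-1)$, $(4,-1)$, $(1,2)$, and $(-1,2)$.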
We now wish to classify maximal polygons $P$ of lattice width $4$. One possibility is that $P$ is $T_4$. Other than this example, the interior polygon $P_{\textrm{int}}$ must have lattice width $2$. Note that if $g(P_{\textrm{int}})=0$, then $P_{\textrm{int}}=T_2$; this has relaxed polygon $T_5$, which has lattice width $5$ and so is not under consideration. If $g(P_{\textrm{int}})=1$, then $P_\textrm{int}$ is one of the polygons in Figure \ref{figure:g1_lw2}. It turns out that all of these can be relaxed to a lattice polygon, each of which has lattice width $4$; these polygons are illustrated in Figure \ref{figure:g1_relaxed}. \begin{figure}[hbt] \centering \includegraphics[scale=0.8]{g1_relaxed.pdf} \caption{The lattice width $4$ polygons with exactly one doubly interior point} \label{figure:g1_relaxed} \end{figure} Now we deal with the most general case of polygons with $\textrm{lw}(P)=4$, namely those where $P_{\textrm{int}}$ has lattice width $2$ and genus $g'\geq 2$. Thus $P_\textrm{int}$ must be one of the $\frac{1}{6}(g'+3)(2g'^2+15g'+16)$ hyperelliptic polygons presented in Theorem \ref{theorem:lw_012}. We must now determine which of these hyperelliptic polygons $Q$ have a relaxed polygon $Q^{(-1)}$ that has lattice points for vertices. We do this over three lemmas, which consider the polygons of Type 1, Type 2, and Type 3 separately. \begin{lemma}\label{lemma:type1} If $Q$ is of Type 1, then the relaxed polygon $Q^{(-1)}$ is a lattice polygon if and only if $i\leq \frac{3g+1}{2}$. \end{lemma} \begin{proof} Let $\tau_1$, $\tau_2$, $\tau_3$, and $\tau_4$ denote the four one-dimensional faces of $Q$, proceeding counterclockwise starting from the face connecting $(0,0)$ and $(1,2)$ (note that $\tau_4$ does not appear as a one-dimensional face if $i=2g$). Consider the relaxed faces $\tau_1^{(-1)}$, $\tau_2^{(-1)}$, $\tau_3^{(-1)}$, and $\tau_4^{(-1)}$. These lie on the lines $-2x+y=1$, $y=-1$, $2x+(2i-2g-1)y=2i+1$, and $y=3$.
Proceeding cyclically, the intersection points $\tau_i^{(-1)}\cap \tau_{i+1}^{(-1)}$ of these relaxed faces are $(-1,-1)$, $(2i-g,-1)$, $(3g-2i+2,3)$, and $(1,3)$. All these points are lattice points, so if they are indeed the vertices of $Q^{(-1)}$ then $Q^{(-1)}$ is a lattice polygon. The one situation in which our relaxed polygon will not have all lattice points is if $\tau_1^{(-1)}$ and $\tau_3^{(-1)}$ intersect at a height strictly below $3$, cutting off the face $\tau_4^{(-1)}$ and yielding a vertex with $y$-coordinate strictly between $2$ and $3$. These faces intersect at $\left(\frac{g+1}{2(i-g)},\frac{i+1}{i-g}\right)$, which has $y$-coordinate strictly smaller than $3$ if and only if $\frac{i+1}{i-g}<3$, which can be rewritten as $i+1<3i-3g$, or as $\frac{3g+1}{2}<i$. Thus when $i\leq \frac{3g+1}{2}$, our relaxed polygon is a lattice polygon; and when $i>\frac{3g+1}{2}$, it is not. \end{proof} \begin{lemma}\label{lemma:type2} If $Q$ is of Type 2, then the relaxed polygon $Q^{(-1)}$ is a lattice polygon if and only if $i\geq \frac{g}{2}+1$ and $j\geq\frac{g-1}{2}$. \end{lemma} \begin{proof} Label the faces of $Q$ cyclically as $\tau_1$, $\tau_2$, $\tau_3$, $\tau_4$, and $\tau_5$. Due to the form of the slopes of these faces, the relaxed face $\tau_i^{(-1)}$ will intersect the relaxed face $\tau_{i+1}^{(-1)}$ at a lattice point; this is true for $\tau_1$ with $\tau_2$ and $\tau_5$ by computation, and for any horizontal line with a face of slope $1/k$ for some integer $k$. Similarly, we are fine with the intersections of $\tau_3^{(-1)}$ and $\tau_4^{(-1)}$: these will always intersect at the lattice point $(g+2,1)$. Thus the only way the relaxed polygon will fail to have lattice vertices is if certain edges are lost while pushing out. Considering the normal fan of $Q$, this leads to two possible cases for $Q$ to not be integral: if the face $\tau_2^{(-1)}$ is lost, and if the face $\tau_5^{(-1)}$ is lost. 
First we consider the case that $\tau_2^{(-1)}$ is lost due to $\tau_1^{(-1)}$ and $\tau_3^{(-1)}$ intersecting at a point with $y$-coordinate strictly between $0$ and $-1$; note that this can only happen when $i<g$. The face $\tau_1^{(-1)}$ is on the line $-2x+y=1$, and $\tau_3^{(-1)}$ is on the line $x-(g+1-i)y=i+1$. These intersect at $\left(-\frac{g+2}{2g-2i+1},-\frac{2i+3}{2g-2i+1}\right)$. Note that $-\frac{2i+3}{2g-2i+1}>-1$ is equivalent to $\frac{2i+3}{2g-2i+1}<1$, which in turn is equivalent to $2i+3<2g-2i+1$. This simplifies to $i<\frac{g}{2}+1$. Thus we have a collapse of $\tau_2^{(-1)}$ that introduces a non-lattice vertex if and only if $i<\frac{g}{2}+1$. Now we consider the case that $\tau_5^{(-1)}$ is lost due to $\tau_1^{(-1)}$ and $\tau_4^{(-1)}$ intersecting at a point with $y$-coordinate strictly between $2$ and $3$. The face $\tau_4^{(-1)}$ lies on the line with equation $x+(g-j)y=2g-j+2$. This intersects $\tau_1^{(-1)}$ at $\left(\frac{g+2}{2g-2j+1},\frac{4g-2j+5}{2g-2j+1}\right)$. Having $\frac{4g-2j+5}{2g-2j+1}<3$ is equivalent to $4g-2j+5< 6g-6j+3$, which can be rewritten as $4j<2g-2$, or $j< \frac{g-1}{2}$. Thus we have a collapse of $\tau_5^{(-1)}$ that introduces a non-lattice vertex if and only if $j< \frac{g-1}{2}$. We conclude that $Q^{(-1)}$ is a lattice polygon if and only if $i\geq \frac{g}{2}+1$ and $j\geq\frac{g-1}{2}$. \end{proof} \begin{lemma}\label{lemma:type3} If $Q$ is of Type 3, then the relaxed polygon $Q^{(-1)}$ is a lattice polygon if and only if $i\geq g/2$ and $j\geq g/2$. \end{lemma} \begin{proof} Label the faces of $Q$ cyclically as $\tau_1,\ldots,\tau_6$, where $\tau_1$ is the face containing the lattice points $(k,2)$ and $(0,1)$ (with the understanding that some faces might not appear if one or more of $i$, $j$ and $k$ are equal to $0$). 
If the faces $\tau_1^{(-1)},\ldots,\tau_6^{(-1)}$ are all present in the polygon $P^{(-1)}$, then they intersect at lattice points by the arguments from the previous proof. Thus we need only be concerned with the following cases: where $\tau_3^{(-1)}$ collapses due to $\tau_2^{(-1)}$ and $\tau_4^{(-1)}$ intersecting at a point $(x,y)$ with $0>y>-1$; and where $\tau_6^{(-1)}$ collapses due to $\tau_5^{(-1)}$ and $\tau_1^{(-1)}$ intersecting at a point $(x,y)$ with $2<y<3$. First we consider $\tau_2^{(-1)}$ and $\tau_4^{(-1)}$. We have that $\tau_2^{(-1)}$ lies on the line defined by $x=-1$, and that $\tau_4^{(-1)}$ lies on the line defined by $x-(g+1-i)y=i+1$. These lines intersect at $(-1,-\frac{i+2}{g+1-i})$. The $y$-coordinate is strictly greater than $-1$ when $\frac{i+2}{g+1-i}<1$, i.e. when $i+1<g+1-i$, which can be rewritten as $i<\frac{g}{2}$. Thus we lose $\tau_3^{(-1)}$ to a non-lattice vertex precisely when $i<\frac{g}{2}$. Now we consider $\tau_5^{(-1)}$ and $\tau_1^{(-1)}$. We have that $\tau_1^{(-1)}$ lies on the line $x-ky=-k+1$, unless $k=0$ in which case it lies on the line $x=-1$; and that $\tau_5^{(-1)}$ lies on the line $x+(g+1-k-j)y=2g+2-k-j$. In the event that $k\neq 0$, these intersect at $\left(\frac{gk+g-j+1}{g-j+1},\frac{2g-j+1}{g-j+1}\right)$, which has $y$-coordinate strictly smaller than $3$ when $\frac{2g-j+1}{g-j+1}<3$, or equivalently if $2g-j+1<3g-3j+1$, or equivalently if $j<\frac{g}{2}$. For the $k=0$ case, the intersection point becomes $\left(-1,\frac{2g-j+3}{g-j+1}\right)$, which has $y$-coordinate strictly smaller than $3$ when $\frac{2g-j+3}{g-j+1}<3$, or equivalently when $2g-j+3<3g-3j-3k+3$, or equivalently when $j<\frac{g}{2}$. Thus we have a non-lattice vertex due to $\tau_5^{(-1)}$ collapsing precisely when $j<\frac{g}{2}$. We conclude that $Q^{(-1)}$ is a lattice polygon if and only if $i\geq g/2$ and $j\geq g/2$. 
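The intersection points computed above can be double-checked with exact rational arithmetic; a small sketch over sample parameter values ($g$, $i$, $j$, $k$ as for the Type 3 polygons):

```python
from fractions import Fraction as F

# Check the two intersection points derived above, over a grid of parameters.
for g in range(2, 12):
    for i in range(0, g + 1):
        # tau_2^(-1): x = -1 and tau_4^(-1): x - (g+1-i)y = i+1
        # meet at (-1, -(i+2)/(g+1-i)); here g+1-i >= 1, so no division by 0.
        x, y = F(-1), F(-(i + 2), g + 1 - i)
        assert x - (g + 1 - i) * y == i + 1
    for j in range(0, g):
        for k in range(1, g):
            # tau_1^(-1): x - ky = -k+1 and tau_5^(-1): x + (g+1-k-j)y = 2g+2-k-j
            # meet at the claimed point ((gk+g-j+1)/(g-j+1), (2g-j+1)/(g-j+1)).
            x = F(g * k + g - j + 1, g - j + 1)
            y = F(2 * g - j + 1, g - j + 1)
            assert x - k * y == -k + 1
            assert x + (g + 1 - k - j) * y == 2 * g + 2 - k - j
print("intersection points check out")
```
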
\end{proof} Combining Lemmas \ref{lemma:type1}, \ref{lemma:type2}, and \ref{lemma:type3} and the preceding discussion, we have the following classification of maximal polygons with lattice width $4$. \begin{prop}\label{prop:lattice_width_4_maximal} Let $P$ be a maximal polygon of lattice width $4$. Then up to lattice equivalence, $P$ is either $T_4$; one of the $14$ polygons in Figure \ref{figure:g1_relaxed}; or $Q^{(-1)}$, where $Q$ is a hyperelliptic polygon satisfying the conditions of Lemma \ref{lemma:type1}, \ref{lemma:type2}, or \ref{lemma:type3}. \end{prop} The most important consequence of Propositions \ref{prop:lattice_width_3_maximal} and \ref{prop:lattice_width_4_maximal} is that we can determine which panoptigons of lattice width $1$ or lattice width $2$ are interior polygons of some lattice polygon. We summarize this with the following result. \begin{cor}\label{corollary:lw12_panoptigon_point_bound} Let $Q$ be a panoptigon with $\textrm{lw}(Q)\leq 2$ such that $Q^{(-1)}$ is a lattice polygon. Then $|Q\cap\mathbb{Z}^2|\leq 11$. \end{cor} \begin{proof} If $\textrm{lw}(Q)= 1$ with $Q^{(-1)}$ a lattice polygon, then $Q$ must be the trapezoid $T_{a,b}$ with $0\leq a\leq b$, $b\geq 1$, and $a\geq \frac{b}{2}-1$ by Proposition \ref{prop:lattice_width_3_maximal}. In order for $T_{a,b}$ to be a panoptigon, we need $a\leq 2$ by Lemma \ref{lemma:panoptigon_g0}, so $2\geq \frac{b}{2}-1$, implying $b\leq 6$. It follows that $|Q\cap\mathbb{Z}^2|=a+b+2\le 2+6+2=10$. Now assume $\textrm{lw}(Q)= 2$ with $Q^{(-1)}$ a lattice polygon. If $Q$ has genus $0$ then it is $T_2$, and has $6$ lattice points. If $Q$ has genus $1$ then it is one of the polygons in Figure \ref{figure:g1_lw2}, and so has at most $9$ lattice points. Outside of these situations, we know that $Q$ is a hyperelliptic panoptigon of genus $g\geq 2$ as characterized in Lemma \ref{lemma:hyperelliptic_panoptigon}. We deal with two cases: where $Q$ has a panoptigon point at height $1$, and where it does not. 
In the first case, we either have $g=2$ with $Q$ of Type 1 or Type 2, or $g=3$ with $Q$ of Type 1. A hyperelliptic polygon of Type 1 has $(i+1)+(1+2g-i)=2g+2$ boundary points. A hyperelliptic polygon of Type 2 has $i+j+3$ boundary points. If $Q$ is of Type 1, then it has in total $3g+2\leq 11$ lattice points. If $Q$ is of Type 2, then $i+j\leq 2g+1=2\cdot 2+1=5$, implying that $Q$ has a total of $i+j+3+g\leq 5+3+2=10$ lattice points. In the second case, we know that $Q$ must have at most $3$ points at height $0$ or $2$, and exactly $1$ point at the other height. First we claim that $Q$ cannot be of Type 1: there are $2g+2\geq 6$ boundary points, all at height $0$ or $2$, and $Q$ can have at most $4$ points total at those heights. For Types 2 and 3, we know by Lemmas \ref{lemma:type2} and \ref{lemma:type3} that either $i\geq\frac{g}{2}+1$ and $j\geq \frac{g-1}{2}$, or $i\geq\frac{g}{2}$ and $j\geq \frac{g}{2}$. At least one of $i$ and $j$ must equal $0$ to allow for a single point at height $0$ or height $2$, so these inequalities are impossible for $g\geq 2$. Thus $Q$ cannot be of Type 2 or Type 3 either, and this case never occurs. We conclude that if $Q$ is a panoptigon of lattice width $1$ or $2$ such that $Q^{(-1)}$ is a lattice polygon, then $|Q\cap\mathbb{Z}^2|\leq 11$. \end{proof} \section{Big face graphs are not tropically planar}\label{section:big_face_graphs} Let $G$ be a planar graph. Recall that we say that $G$ is a \emph{big face graph} if for any planar embedding of $G$, there exists a bounded face that shares an edge with every other bounded face. Our main examples of big face graphs will come from the following construction. First we recall the construction of a \emph{chain} of genus $g$ from \cite[\S 6]{BJMS}. Start with $g$ cycles in a row, connected at $g-1$ $4$-valent vertices. We will resolve each of these $4$-valent vertices to result in two $3$-valent vertices in one of two ways. 
Let $v$ be a vertex, incident to the edges $e_1,e_2,f_1,f_2$ where $e_1$ and $e_2$ are part of one cycle and $f_1$ and $f_2$ are part of another. We will remove $v$ and replace it with two connected vertices $v_1$ and $v_2$, and we will either connect $v_1$ to $e_1$ and $f_1$ and $v_2$ to $e_2$ and $f_2$; or we will connect $v_1$ to $e_1$ and $e_2$ and $v_2$ to $f_1$ and $f_2$. Any graph obtained from making such a choice at each vertex is then called a chain. Figure \ref{figure:chain_construction} illustrates, for $g=3$, the starting $4$-regular graph; the two ways to resolve a $4$-valent vertex; and the resulting chains of genus $3$. We remark that although there are $2\times 2 = 4$ ways to choose the vertex resolutions, two of them yield isomorphic graphs, giving us $3$ chains of genus $3$ up to isomorphism. Note that for every genus, there is exactly one chain that is bridge-less, i.e. $2$-edge-connected. \begin{figure}[hbt] \centering \includegraphics{chain_construction} \caption{The starting $4$-regular graph in the chain construction; the two choices for resolving a $4$-valent vertex; and the three chains of genus $3$, up to isomorphism} \label{figure:chain_construction} \end{figure} Given a chain of genus $g$, we construct a \emph{looped chain} of genus $g+1$ by adding an edge from the first cycle to the last one. The looped chains of genus $4$ corresponding to the chains of genus $3$ are illustrated in Figure \ref{figure:chains_and_looped_chains}. For larger genus, we remark that two non-isomorphic chains can give rise to isomorphic looped chains. \begin{figure}[hbt] \centering \includegraphics{chains_and_looped_chains.pdf} \caption{The looped chains of genus $4$ } \label{figure:chains_and_looped_chains} \end{figure} In order to argue that any looped chain is a big face graph, we recall the following useful result. 
By a special case of Whitney's $2$-switching theorem \cite[Theorem 2.6.8]{graphs_on_surfaces}, if $G$ is a $2$-connected graph, then any other planar embedding can be reached, up to weak equivalence\footnote{Weak equivalence means two graph embeddings have the same facial structure, although possibly with different unbounded faces.}, from the standard embedding by a sequence of \emph{flippings}. A flipping of a planar embedding finds a cycle $C$ with only two vertices $v$ and $w$ incident to edges exterior to $C$, and then reverses the orientation of $C$ and all vertices and edges interior to $C$ to obtain a new embedding. This process is illustrated in Figure \ref{figure:2-flip}, where $C$ is the highlighted cycle $v-a-b-w-d-v$. \begin{figure}[hbt] \centering \includegraphics{2-flip} \caption{Two embeddings of a planar graph related by a flipping} \label{figure:2-flip} \end{figure} \begin{lemma}\label{lemma:looped_chain} Any looped chain is a big face graph. \end{lemma} \begin{proof} In the standard embedding of a looped chain as in Figure \ref{figure:chains_and_looped_chains}, there are (at least) two faces that share an edge with all other faces: one bounded and one unbounded. Since any looped chain is $2$-connected, any other embedding can be reached, up to weak equivalence, by a sequence of flippings. It thus suffices to show that the standard embedding of a looped chain is invariant under flipping. Consider the standard embedding of a looped chain $G$, and assume that $C$ is a cycle in $G$ that has exactly two vertices $v$ and $w$ incident to edges exterior to $C$. Let $\overline{C}$ denote the set of all vertices in or interior to $C$. Since $G$ is trivalent and $C$ is $2$-regular, we know that $v$ and $w$ are each incident to exactly one edge, say $e$ for $v$ and $f$ for $w$, that is exterior to $C$. We now deal with two possibilities: that $\overline{C}=V(G)$, and that $\overline{C}\subsetneq V(G)$. 
If $\overline{C}=V(G)$, then $e=f$, and the only possibility is that $v$ and $w$ are the vertices added to a chain $H$ to build the looped chain $G$; that $H$ is the bridge-less chain; and that $C$ is the outside boundary of $H$ in its standard embedding. Flipping with respect to $C$ does not change the embedding of this graph. \begin{figure}[hbt] \centering \includegraphics{looped_chain_possible_cycles} \caption{The structure of a looped chain, where the bridge-less chains $G_i$ have solid edges and the edges $e_i$ are dotted; the boundaries of the $G_i$ are bold, and are the only possible choices of $C$ for a flipping} \label{figure:looped_chain_possible_cycles} \end{figure} If $\overline{C}\subsetneq V(G)$, then $\{e,f\}$ forms a $2$-edge-cut for $G$, separating it into $\overline{C}$ and $\overline{C}^C$. Consider the structure of $G$: it is a collection of $2$-edge-connected graphs $G_1,\ldots,G_k$, namely a collection of bridge-less chains, connected in a loop by edges $e_1,\ldots,e_k$, where $e_i$ connects $G_i$ and $G_{i+1}$, working modulo $k$; see Figure \ref{figure:looped_chain_possible_cycles} for this labelling scheme. We claim that $e,f\in\{e_1,\ldots,e_k\}$. If not, then without loss of generality $e$ is in some bridge-less chain $G_i$. If $f\in E(G_j)$ for $j\neq i$, then the graph remains connected; the same is true if $f\in\{e_1,\ldots,e_k\}$. So we would need $f$ to also be in $G_i$. By the structure of the looped chain, we would need the removal of $e$ and $f$ to disconnect $G_i$ into multiple components, at least one of which is not incident to $e_i$ or $e_{i+1}$; however, this is impossible based on the structure of a bridge-less chain. It follows that $e$ and $f$ must be among $e_1,\ldots,e_k$. 
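The claim that $e$ and $f$ must be connector edges can be illustrated concretely on a small example; a sketch for a looped chain of genus $4$, hard-coded below as three double edges joined in a cycle by three connector edges:

```python
# A looped chain of genus 4: blocks {0,1}, {2,3}, {4,5}, each carrying a
# double edge, joined in a loop by connector edges (1,2), (3,4), (5,0).
# Parallel edges are kept as separate entries in the edge list.
double_edges = [(0, 1), (0, 1), (2, 3), (2, 3), (4, 5), (4, 5)]
connectors = [(1, 2), (3, 4), (5, 0)]
edges = double_edges + connectors

def connected(edge_list, n=6):
    """Depth-first search over a multigraph given as an edge list."""
    adj = {v: [] for v in range(n)}
    for u, v in edge_list:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == n

def remove_once(edge_list, e):
    out = list(edge_list)
    out.remove(e)  # removes a single copy of a (possibly parallel) edge
    return out

# Removing any two distinct connector edges disconnects the graph...
for i in range(3):
    for j in range(i + 1, 3):
        rest = remove_once(remove_once(edges, connectors[i]), connectors[j])
        assert not connected(rest)

# ...but removing one copy of a double edge together with a connector does not.
for d in [(0, 1), (2, 3), (4, 5)]:
    for c in connectors:
        assert connected(remove_once(remove_once(edges, d), c))
print("only pairs of connector edges form 2-edge-cuts in this looped chain")
```
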
The only way to choose a pair $\{e,f\}$ from among $e_1,\ldots,e_k$ so that they are the only exterior edges incident to the boundary of a cycle $C$ is if they are incident to the same bridge-less chain $G_i$; that is, if up to relabelling we have $e=e_i$ and $f=e_{i+1}$ for some $i$. Thus $C$ and its interior constitute one of the bridge-less chains $G_i$. But flipping a bridge-less chain does not change the embedding of our (unlabelled) graph, completing the proof. \end{proof} \begin{figure}[hbt] \centering \includegraphics{loop_of_loops.pdf} \caption{The loop of loops $L_g$ for $3\leq g\leq 6$} \label{figure:loop_of_loops} \end{figure} We summarize the connection between big face graphs and panoptigons in the following lemma. \begin{lemma}\label{lemma:big_face_panoptigon_lemma} Suppose that $G$ is a tropically planar big face graph arising from a polygon $P$. Then $P_\textrm{int}$ is a panoptigon. \end{lemma} This is not an if-and-only-if statement, since not all triangulations of $P$ connect a point of $P_\textrm{int}$ to all other points of $P_\textrm{int}$; for instance, the chain of genus $3$ with two bridges is not a big face graph, but by \cite[\S 5]{BJMS} it arises from $T_4$ whose interior polygon is a panoptigon. \begin{proof} Let $\Delta$ be a regular unimodular triangulation of $P$ such that $G$ is the skeleton of the weak dual graph of $\Delta$. The embedding of $G$ arising from this construction must have a bounded face $F$ bordering all other faces. By duality, we know that $F$ corresponds to an interior lattice point $p$ of $P$. Since $F$ shares an edge with all other bounded faces, dually $p$ is connected to each other interior point of $P$ by a primitive edge in $\Delta$. Thus $P_\textrm{int}$ is a panoptigon, with $p$ a panoptigon point for it. \end{proof} One common example of a looped chain of genus $g$ is the \emph{loop of loops} $L_g$, obtained by connecting $g-1$ bi-edges in a loop. 
This is illustrated in Figure \ref{figure:loop_of_loops} for $g$ from $3$ to $6$. For low genus, the loop of loops is tropically planar. Figure \ref{figure:triangulations_for_lol} illustrates polygons of genus $g$ for $3\leq g\leq 10$ along with collections of edges emanating from an interior point; when completed to a regular unimodular triangulation\footnote{One way to see that this can be accomplished is to use a placing triangulation \cite[\S 3.2.1]{triangulations}, where the highlighted panoptigon point is placed first and the other lattice points are placed in any order.}, they will yield $L_g$ as the dual tropical skeleton. Thus $L_g$ is tropically planar for $g\leq 10$. Another example of a tropically planar looped chain, this one of genus $11$, is pictured in Figure \ref{figure:g11_big_face}, along with a regular unimodular triangulation of a polygon giving rise to it. Since the theta graph of genus $2$ is also tropically planar \cite[Example 2.5]{BJMS} and is a big face graph, there exists at least one tropically planar big face graph of genus $g$ for $2\leq g\leq 11$. We are now ready to prove that this does not hold for $g\geq 14$. \begin{figure}[hbt] \centering \includegraphics[scale=0.8]{triangulations_for_lol.pdf} \caption{Starts of triangulations that will yield the loop of loops as the dual tropical skeleton } \label{figure:triangulations_for_lol} \end{figure} \begin{figure}[hbt] \centering \includegraphics{g11_big_face.pdf} \caption{A tropically planar big face graph of genus $11$, with a regular unimodular triangulation giving rise to it } \label{figure:g11_big_face} \end{figure} \begin{proof}[Proof of Theorem \ref{theorem:big_face_graphs}] Let $G$ be a tropically planar big face graph, and let $P$ be a lattice polygon giving rise to it. By Lemma \ref{lemma:big_face_panoptigon_lemma}, $P_\textrm{int}$ is a panoptigon. 
If $\textrm{lw}(P_\textrm{int})\leq 2$, then $g=|P_\textrm{int}\cap\mathbb{Z}^2|\leq 11$ by Corollary \ref{corollary:lw12_panoptigon_point_bound}. If $\textrm{lw}(P_\textrm{int})\geq 3$, then $g=|P_\textrm{int}\cap\mathbb{Z}^2|\leq 13$ by Theorem \ref{theorem:at_most_13}. Either way, we may conclude that the genus of $G$ is at most $13$. \end{proof} It follows, for instance, that no looped chain of genus $g\geq 14$ is tropically planar. If we are willing to rely on our computational enumeration of all non-hyperelliptic panoptigons, we can push this further: there does not exist a tropically planar big face graph for $g\geq 12$, and this bound is sharp. We have already seen in Figure \ref{figure:g11_big_face} that there exists a tropically planar big face graph of genus $11$. To see that none have higher genus, first note that if $P_\textrm{int}$ is a panoptigon with $12$ or $13$ lattice points, then $P_\textrm{int}$ must be non-hyperelliptic by Corollary \ref{corollary:lw12_panoptigon_point_bound}. Thus $P_\textrm{int}$ must be one of the $15$ non-hyperelliptic panoptigons with $12$ lattice points, or one of the $8$ non-hyperelliptic panoptigons with $13$ lattice points, as presented in Appendix \ref{section:appendix}. However, for each of these polygons $Q$, we have verified computationally that $Q^{(-1)}$ is not a lattice polygon; see Figure \ref{figure:panoptigons_12_or_13}. Thus no lattice polygon of genus $g\geq 12$ has an interior polygon that is also a panoptigon. It follows from Lemma \ref{lemma:big_face_panoptigon_lemma} that no big face graph of genus larger than $11$ is tropically planar. We close with several possible directions for future research. \begin{itemize} \item For any lattice point $p$, let $\textrm{vis}(p)$ denote the set of all lattice points visible to $p$ (including $p$ itself). 
Given a convex lattice polygon $P$, define its \emph{visibility number} to be the minimum number of lattice points in $P$ needed so that we can see every lattice point from one of them: \[V(P)=\min\left\{|S|\,:\, S\subset P\cap\mathbb{Z}^2\textrm{ and } P\cap\mathbb{Z}^2\subset \bigcup_{p\in S}\textrm{vis}(p) \right\}.\] Thus $P$ is a panoptigon if and only if $V(P)=1$. Classifying polygons of fixed visibility number $V(P)$, or finding relationships between $V(P)$ and such properties as genus and lattice width, could be interesting in its own right, and could provide new criteria for determining whether graphs are tropically planar; for instance, the prism graph $P_n=K_2\times C_n$ can only arise from a polygon $P$ with $V(P)\leq 2$. This question is in some sense a lattice point version of the art gallery problem. \item We can generalize from two-dimensional panoptigons to $n$-dimensional \emph{panoptitopes}, which we define to be convex lattice polytopes containing a lattice point $p$ from which all the polytope's other lattice points are visible. A few of our results generalize immediately; for instance, the proof of Lemma \ref{lemma:panoptigon_g1} works in $n$ dimensions, so any polytope with exactly one interior lattice point is a panoptitope. A complete classification of $n$-dimensional panoptitopes for $n\geq 3$ will be more difficult than it was in two dimensions, especially since it is no longer the case that there are finitely many polytopes with a fixed number of lattice points. Results about panoptitopes would also have applications in tropical geometry; for instance, an understanding of three-dimensional panoptitopes would have implications for the structure of tropical surfaces in $\mathbb{R}^3$. \item To any lattice polygon we can associate a toric surface \cite{toric_varieties}. 
An interesting question for future research would be to investigate those toric surfaces that are associated to panoptigons, or more generally toric varieties associated to panoptitopes. \end{itemize}
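The visibility number $V(P)$ defined in the first item above is straightforward to compute by brute force for small polygons, using the standard fact that two lattice points see each other exactly when the gcd of their coordinate differences is at most $1$; a sketch (the $3\times 1$ rectangle of lattice points is our own illustrative example, not one from the text):

```python
from itertools import combinations
from math import gcd

def visible(p, q):
    """Two lattice points are visible to one another iff no lattice point lies
    strictly between them, i.e. gcd of the coordinate differences is <= 1."""
    return gcd(abs(p[0] - q[0]), abs(p[1] - q[1])) <= 1  # gcd(0,0)=0 covers p=q

def visibility_number(points):
    """Smallest S within the point set such that every point is visible from
    some member of S (brute force over subsets; fine for small polygons)."""
    pts = list(points)
    for size in range(1, len(pts) + 1):
        for S in combinations(pts, size):
            if all(any(visible(p, s) for s in S) for p in pts):
                return size

# T_2 = conv{(0,0),(2,0),(0,2)} is a panoptigon: V = 1.
t2 = [(x, y) for x in range(3) for y in range(3) if x + y <= 2]
assert visibility_number(t2) == 1

# A 3x1 rectangle of lattice points needs two guards: V = 2.
rect = [(x, y) for x in range(4) for y in range(2)]
assert visibility_number(rect) == 2
print("V(T_2) = 1, V(3x1 rectangle) = 2")
```
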
% https://arxiv.org/abs/1106.1601
\title{Roth's theorem in many variables}
\begin{abstract}
We prove, in particular, that if a subset $A$ of $\{1, 2,\dots, N\}$ has no nontrivial solution to the equation $x_1+x_2+x_3+x_4+x_5=5y$, then the cardinality of $A$ is at most $Ne^{-c(\log N)^{1/7-\eps}}$, where $\eps>0$ is an arbitrary number, and $c>0$ is an absolute constant. In view of the well-known Behrend construction this estimate is close to best possible.
\end{abstract}
\section{Introduction} \label{sec:introduction} The celebrated theorem of Roth \cite{Roth_AP3} asserts that every subset of $\{1,\dots, N\}$ that does not contain any three term arithmetic progression has size $O(N/\log\log N)$. There are numerous refinements of Roth's result \cite{Bu,Bourgain_AP2007,a_H-B,Sz_inf}. Currently the best known upper bound $O( N/(\log N)^{1-o(1)})$ is due to Sanders \cite{Sanders_Roth_1-eps}. The comprehensive history of the subject can be found in \cite{Tao_Vu_book}. It turns out that Roth's method gives a similar upper bound for the size of sets having no nontrivial solutions to an invariant linear equation \begin{equation}\label{e} a_1x_1+\dots+a_kx_k=0, \end{equation} i.e. $a_1+\dots+a_k=0,\, k\gs 3$ (three term arithmetic progressions correspond to the equation $x+y=2z$). On the other hand, the well-known construction of Behrend \cite{Behrend,Elkin,Koester,Moser,Ra,Salem1,Salem2} provides large sets having no solution to a certain kind of invariant equation. He showed that there are subsets of $\{1,\dots,N\}$ of size $Ne^{-C_{b,k}\sqrt{\log N}}$ without a solution to the invariant equation \begin{equation}\label{eq: Behrend} a_1x_1+\dots +a_kx_k=by, \end{equation} where $a_1+\dots+a_k=b,~a_i>0.$ The aim of this paper is to establish a new upper bound for subsets of $\{1,\dots,N\}$ having no solution to an invariant equation in at least $6$ variables. \bigskip \Th {\it Let $N$ and $k\gs 6$ be positive integers. Let $A \subseteq \{1,\dots,N\}$ be a set having no solution to the equation (\ref{e}), where all $x_1,\dots,x_k,y$ are distinct integers. Then \begin{equation}\label{est:cardinality_A_5y''_intr} |A|\ll {\exp\Big (-c\Big (\frac{ \log N}{\log \log N}\Big )^{1/6} \Big )}N \,, \end{equation} where $c=c(a_1,\dots,a_k).$} \label{t:Roth_Schoen_Gr''_intr} \bigskip Observe that Theorem \ref{t:Roth_Schoen_Gr''_intr} together with Behrend's example gives reasonable estimates for all equations of the type (\ref{eq: Behrend}). 
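The compatibility of the two bounds can be made concrete by comparing exponents: the upper bound decays like $\exp(-c(\log N/\log\log N)^{1/6})$, while Behrend's construction gives sets of size $N\exp(-C_{b,k}\sqrt{\log N})$; a small numeric sketch (with all constants set to $1$ purely for illustration):

```python
from math import log, sqrt

# For the two bounds to be compatible, the exponent in the upper bound,
# (log N / log log N)^(1/6), must not exceed the exponent sqrt(log N) in
# Behrend's lower-bound construction.  This indeed holds for every N here
# (constants ignored).
for e in range(2, 100):
    N = 10 ** e
    upper_exp = (log(N) / log(log(N))) ** (1 / 6)
    lower_exp = sqrt(log(N))
    assert upper_exp < lower_exp
print("upper-bound exponent stays below Behrend's sqrt(log N) exponent")
```
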
Let us also formulate an immediate corollary to Theorem \ref{t:Roth_Schoen_Gr''_intr} for the equation \begin{equation}\label{f} x_1+x_2+x_3+x_4+x_5=5y \end{equation} which is very close to the most intriguing case $x+y=2z$. \bigskip \Cor {\it Suppose that $A\sbeq \{1,\dots,N\}$ has no solution to the equation (\ref{f}) with distinct integers. Then there exists a constant $c>0$ such that \begin{equation*} |A|\ll {\exp\Big (-c\Big (\frac{ \log N}{\log \log N}\Big )^{1/6} \Big)}N \,. \end{equation*} } \label{c:roth5x} Our argument relies heavily on recent work on the Polynomial Freiman-Ruzsa Conjecture by Sanders \cite{Sanders_2A-2A_new_optimal} (see also \cite{sch}). A fundamental tool in our approach is a version of the Bogolyubov-Ruzsa Lemma proved in \cite{Sanders_2A-2A_new_optimal}. We also use the density increment method introduced by Roth, though in a different way. The density increment is not deduced from the existence of a large Fourier coefficient of a set $A,\, |A|=\a N,$ having no solution to an equation (\ref{e}) (which is always the case). Rather, we will be interested in finding a translate of a large Bohr set in $a_1\cdot A+a_2\cdot A+a_3\cdot A+a_4\cdot A,$ from which one can easily deduce a density increment of $A$ by a constant factor on some large Bohr set. By Sanders' theorem, the dimension of the Bohr set increases by $O(\log^4(1/\a))$ in each iteration step, which makes the argument very effective. The paper is organized as follows. We start by proving analogues of Theorem \ref{t:Roth_Schoen_Gr''_intr} and Corollary \ref{c:roth5x} for finite fields in section \ref{sec:F_p^n}. The argument is especially simple and transparent in this case. Theorem \ref{t:Roth_Schoen_Gr''_intr} is proved in the next three sections. In section \ref{sec:Bohr_sets} we recall some basic properties of Bohr sets in abelian groups. In section \ref{sec:Sanders_Bohr} we prove a local version of Sanders' result. 
The next section contains the proof of Theorem \ref{t:Roth_Schoen_Gr''_intr}. We conclude the paper with a discussion concerning consequences of the Polynomial Freiman-Ruzsa Conjecture for sets having no solutions to an invariant linear equation with distinct integers. \section{Notation} \label{sec:notation} Let $\Gr=(\Gr,+)$ be a finite Abelian group with additive group operation $+$, and let $N=|\Gr|$. By $\F{\Gr}$ we denote the Pontryagin dual of $\Gr$, i.e. the space of homomorphisms $\gamma$ from $\Gr$ to $S^1$. It is well known that $\F{\Gr}$ is an additive group which is isomorphic to $\Gr$. The Fourier coefficients of $f: \Gr\rightarrow \C$ are defined by $$\F{f}(\g)=\sum_{x\in \Gr}f(x)\ov{\g}(x).$$ By the convolution of two functions $f,g:\Gr\rightarrow \C$ we mean $$(f*g)(y)=\sum_{x\in \Gr}f(x)g(y-x).$$ It is easy to see that $\F{f*g}(\g)=\F{f}(\g) \F{g}(\g).$ If $X$ is a nonempty set, then by $\mu_X$ we denote the uniform probability measure on $X$ and let $$ \Spec_\epsilon (\mu_X) := \{ \gamma \in \F{\Gr} ~:~ |\F{X} (\gamma)| \gs \epsilon |X| \}.$$ Let $\Z_p = \Z/p\Z$, and $\f_p^* = \Z_p \setminus \{ 0 \}$. If $A$ is a set, then we write $A(x)$ for its characteristic function, i.e. $A(x) = 1$ if $x\in A$ and $A(x)=0$ otherwise. All logarithms are to base $2.$ The signs $\ll$ and $\gg$ are the usual Vinogradov symbols. \section{Finite fields model} \label{sec:F_p^n} \bigskip In this section we present proofs of Corollary \ref{c:roth5x} and Theorem \ref{t:Roth_Schoen_Gr''_intr} in the finite field setting. Here we assume that $a_1,\dots,a_k\in \f^*_p.$ The case of $\f^n_p,$ in view of its linear space structure over $\f_p$, is considerably simpler than the case of $\Z.$ Even the simplest version of Roth's argument yields an estimate $O_p(p^n/n^{k-2})$ for the size of sets free of solutions to (\ref{e}) (see \cite{Meshulam,LS09}, \cite{Sanders_Z_4^n},\cite{Sanders_log}). 
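The convolution identity $\F{f*g}(\g)=\F{f}(\g)\F{g}(\g)$ stated above is easy to check numerically in the model case $\Gr=\Z_N$, where the characters are $\g_k(x)=e^{2\pi i kx/N}$; a short sketch:

```python
import cmath
import random

N = 16

def fourier(f, k):
    # hat f(gamma_k) = sum_x f(x) * conj(gamma_k(x)), gamma_k(x) = e^{2 pi i kx/N}
    return sum(f[x] * cmath.exp(-2j * cmath.pi * k * x / N) for x in range(N))

def convolve(f, g):
    # (f*g)(y) = sum_x f(x) g(y - x), with indices taken mod N
    return [sum(f[x] * g[(y - x) % N] for x in range(N)) for y in range(N)]

random.seed(0)
f = [random.randint(-3, 3) for _ in range(N)]
g = [random.randint(-3, 3) for _ in range(N)]
h = convolve(f, g)
for k in range(N):
    assert abs(fourier(h, k) - fourier(f, k) * fourier(g, k)) < 1e-6
print("hat(f*g) = hat(f) * hat(g) verified on Z_16")
```
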
Our main tool is the following finite fields version of Sanders' theorem \cite{Sanders_2A-2A_new_optimal}. \bigskip \Lemma {\em Suppose that $A,S\subseteq \f_p^n$ are finite non--empty sets such that $|A+S| \le K \min \{ |A|, |S| \}$. Then $A-A+S-S$ contains a subspace $V$ of codimension at most $O_p(\log^{4} K)$. } \label{l:Sanders_F_p^n} \bigskip The proof of the next theorem illustrates the main idea of our approach. \bigskip \Th {\em Suppose that $A\sbeq \f_p^n,\, p\not=5,$ and $A$ has no nontrivial solution to (\ref{f}) with $x_i\not=y$ for some $i$. Then $$ |A|\ls p^n \cdot \exp(-c_p(\log p^n)^{1/5}) \, $$ for some positive constant $c_p.$} \label{t:5y_F_p^n} \proof Suppose that $A\sbeq \f_p^n$ has density $\a$ and contains no solution to $(\ref{f}).$ We split $A$ into two disjoint sets $A_1,A_2$ of equal size. Clearly, there exists $z\in \f^n_p$ such that $$|A_1\cap (z-A_2)|\gg \a^2p^n\,.$$ Let us put $B=A_1\cap (z-A_2)$. By Lemma \ref{l:Sanders_F_p^n}, there exists a subspace $V$ of codimension at most $O_p(\log^{4} (1/\a))$ such that $V\sbeq 2B-2B$, so that $$ 2z+V \sbeq 2A_1+2A_2 \,. $$ Therefore, in view of $A_1\cap A_2=\emptyset$, we have $5y-x\not\in 2z+V$ for all $x,y\in A,$ hence $A$ intersects at most half of the cosets of $V,$ which implies $$|A\cap (v+V)|\ge 2\a|V|,$$ for some $v.$ Thus, $(A-v)\cap V$ is free of solutions to (\ref{f}) and has density at least $2\a$ on $V.$ After $t$ iterations we obtain a subspace of codimension at most $O_p(t \cdot \log^{4} (1/\a))$ such that $$ |(A-v_t)\cap V_t|\ge 2^t\a|V_t| \,, $$ for some $v_t.$ Since the density is always at most one we can iterate this procedure at most $\log(1/\a)+1$ times. Hence $$ (\log(1/\a)+1) \cdot \log^{4} (1/\a) \gg_p n \,, $$ so that $$ \a\ls \exp(-c_pn^{1/5}) \,$$ for some positive constant $c_p.\hfill\Box$ \bigskip To prove the main result of the section we will need the following consequence of Lemma \ref{l:Sanders_F_p^n}. 
We sketch its proof here; the interested reader will find all details in Section \ref{sec:Sanders_Bohr}. \bigskip \Lemma {\em Let $A_1,\dots, A_k\sbeq \f^n_p$ be sets of density at least $\a$. Then $A_1-A_1+\dots +A_k-A_k$ contains a subspace $V$ of codimension at most $O_p(k^{-3}\log^{4} (1/\a))$.} \label{l:trick_small_sumsets} \medskip \proof We have $$|A_1|\le |A_1+A_2|\le\dots \le |A_1+\dots +A_k|\le \a^{-1}|A_1|, $$ so that, by the pigeonhole principle applied to the $k-1$ consecutive ratios, there exists $2\le i\le k$ such that $$|A_1+\dots +A_i|\le \a^{-1/(k-1)}|A_1+\dots +A_{i-1}|.$$ Thus, setting $A=A_1+\dots+A_{i-1},\, S=T=A_i$, we have $|A+S|\ls \a^{-1/(k-1)}|A|$, $|S+T|\ls \a^{-1}|S|.$ Then applying Lemma \ref{l:Sanders_F_p^n} and Theorem \ref{t:Sanders_2A-2A_reformulation} (see Section \ref{sec:Sanders_Bohr}) we infer that there is a subspace $V$ of codimension $O_p(\log^3(\a^{-1/(k-1)})\cdot \log (1/\a))=O_p(k^{-3}\log^{4} (1/\a))$ such that $$ v+V\sbeq A_1-A_1+\dots +A_i-A_i \sbeq A_1-A_1+\dots +A_k-A_k \,, $$ and the assertion follows. $\hfill\Box$ \bigskip \Th {\em Suppose that $A\sbeq \f_p^n$ has no solution with distinct elements to an invariant equation \begin{equation}\label{equation:F_p^n} a_1x_1+\dots+a_kx_k=0, \end{equation} where $a_1,\dots,a_k\in \f_p^*$ and $k\gs 6.$ Then $$ |A|\ls k p^n \cdot \exp(-c_p(k^3\log p^n)^{1/5}) \, $$ for a positive constant $c_p$.} \label{t:5y_general_F_p^n} \medskip \proof Suppose $A\sbeq \f_p^n$ has no solution with distinct elements to (\ref{equation:F_p^n}) and $|A|=\a p^n.$ Let $A_1,\dots, A_{2l},\, l=\lf (k-2)/2 \rf$ be arbitrary disjoint subsets of $A$ of size $\lf |A|/(5k)\rf$ and put $A'=A\setminus \bigcup A_i.$ Clearly, there are $z_1,\dots,z_{l}$ such that $$ |(a_{2i-1}\cdot A_{2i-1})\cap(z_i-a_{2i}\cdot A_{2i})|\gg (\a/k)^{2}p^n $$ and let $B_i,\, 1\ls i\ls l,$ be the sets on the left hand side in the above inequalities, respectively.
By Lemma \ref{l:trick_small_sumsets}, applied for $B_1,\dots, B_l$ and $K=O((k/\a)^2)$ there is a subspace $V$ of codimension $d=O_p(k^{-3}\log^{4} (k/{\a}))$ such that $$ V\sbeq B_1-B_1 +\dots+B_l-B_l \,, $$ so that $$v+V\sbeq a_1\cdot A_1+\dots+a_{k-2}\cdot A_{k-2}$$ for some $v.$ Since $A$ does not contain any solution to (\ref{equation:F_p^n}) with distinct elements it follows that $$a_{k-1}x+a_ky\notin v+V,$$ for all $x,y\in A', \, x \not=y.$ Hence, if for some $w$ the coset $w+V$ contains at least $2$ elements of $A',$ then $-a^{-1}_k(a_{k-1}w-v)+V$ is disjoint from $A'$. The number of cosets of $V$ sharing exactly $1$ element with $A$ is trivially at most $ p^d.$ Thus, there exists $w'$ such that $|A'\cap (w'+V)|\gs \frac{(4/5)\a p^n-p^d}{p^d/2},$ which is at least $(3/2)\a |V|,$ provided that \begin{equation}\label{d} p^{n-d}\gg \a^{-1}. \end{equation} After $t$ iterations of this argument we obtain a subspace $V_t$ of codimension $O_p(t k^{-3}\log^{4} (k/{\a}))$ such that $$|(A-v_t)\cap V_t|\gs (3/2)^t\a|V_t|.$$ Since $(3/2)^t\a\ls 1$ it follows that $t \ls 2\log (1/\a)$. Thus, (\ref{d}) must be violated after at most $2\log(1/\a)$ steps, in particular $p^{n-2d\log(1/\a)} \ll \a^{-1},$ so that $$ k^{-3}\log(1/\a) \log^{4} ({k}/{\a}) \gg_p n/2 \,. $$ Hence $\a \ls k \exp(-c_p(k^3\log p^n)^{1/5}) \,.\hfill \Box$ \section{Basic properties of Bohr sets} \label{sec:Bohr_sets} \bigskip Bohr sets were introduced to additive number theory by Ruzsa \cite{Ruzsa_freiman}. Bourgain \cite{Bu} was the first to use Fourier analysis on Bohr sets to improve the estimate in Roth's theorem. Sanders \cite{Sanders_2A-2A_new_optimal} further developed the theory of Bohr sets, proving many important theorems, see for example Lemma \ref{l:Chang+large_Bohr_exp_sums} below. Let $\G$ be a subset of $\F{\Gr}$, $|\G| = d$, and $\eps = (\eps_1,\dots,\eps_d) \in (0,1]^d$.
\bigskip \Def Define the Bohr set $B = B(\G, \eps)$ by setting \[ B(\G, \eps) = \{ n \in \Gr ~:~ \| \g_j(n)\| < \eps_{j} \mbox{ for all } \g_j \in \G \} \,, \] where $\|x\|=|\arg x|/2\pi.$ \bigskip The number $d$ is called the {\it dimension} of $B$ and is denoted by $\dim B$. If $M = B + n$, $n\in \Gr,$ is a translation of a Bohr set $B$, we put $\dim M = \dim B$. The {\it intersection} $B\wedge B'$ of two Bohr sets $B = B(\G,\eps)$ and $B' = B(\G',\eps')$ is the Bohr set with the generating set $\G\cup \G'$ and the new vector $\t{\eps}$ given by $\t{\eps}_j=\min (\eps_j,\eps'_j)$. We write ${B}' \ls {B}$ for two Bohr sets ${B} = B({\G},{\eps})$, ${B}' = B({\G}',{\eps}')$ if ${\G} \subseteq {\G}'$ and ${\eps}'_j \ls {\eps}_j$, $j\in [\dim B]$. Thus ${B}' \le {B}$ implies that ${B}' \subseteq {B}$ and always $B\wedge B' \le B,B'$. Furthermore, if $B=B(\G,\eps)$ and $\rho>0$ then by $B_\rho$ we mean $B(\G,\rho \eps).$ \bigskip \Def A Bohr set $B = B (\G, \eps)$ is called {\it regular}, if for every $\eta,\, d|\eta|\ls 1/100$ we have \begin{equation}\label{cond:reg_size} (1-100d|\eta|)|B_1| < { |B_{1+\eta}| } < (1+100d|\eta|)|B_1| \,. \end{equation} \bigskip We formulate a sequence of basic properties of Bohr sets (see \cite{Bu}), which will be used later. \bigskip \Lemma {\it Let $B (\G,\eps)$ be a Bohr set. Then there exists $\eps_1$ such that $ \frac{\eps}{2} < \eps_1 < \eps $ and $B (\G,\eps_1)$ is regular. \label{l:Reg_B} } \bigskip \Lemma {\it Let $B (\G,\eps)$ be a Bohr set. Then \[ |B (\G,\eps)| \ge \frac{N}{2} \prod_{j=1}^d \eps_j \,. \] } \label{l:Bohr_est} \bigskip \Lemma {\it Let $B (\G,\eps)$ be a Bohr set. Then $$ |B(\G,\eps)| \ls 8^{|\G|+1} |B(\G,\eps/2)| \,. $$ } \label{l:entropy_Bohr} \bigskip \Lemma {\it Suppose that $B^{(1)}, \dots, B^{(k)}$ is a sequence of Bohr sets. Then $$ \mu_{\Gr} (\bigwedge_{i=1}^k B^{(i)}) \ge \prod_{i=1}^k \mu_{\Gr} (B^{(i)}_{1/2}) \,. $$ } \label{l:Bohr_intersection_Sanders} The next lemma is due to Bourgain \cite{Bu}.
It shows the fundamental property of regular Bohr sets. We recall his argument for the sake of completeness. \bigskip \Lemma {\it Let $B = B (\G, \eps)$ be a regular Bohr set. Then for every Bohr set $B' = B (\G, \eps')$ such that $\eps'\ls \k \eps/ (100 d)$ we have: \\ $1)~$ the number of $n$'s such that $( B * B' ) (n) > 0$ does not exceed $|B|(1+\kappa)$,\\ $2)~$ the number of $n$'s such that $( B * B') (n) = |B'|$ is greater than $|B|(1-\kappa)$ and \begin{equation}\label{2k} \l\| (\mu_B * \mu_{B'})(n) - \mu_B (n) \r\|_1 < 2\kappa \,. \end{equation} \label{l:L_pm} } \proof If $(B * B^{'}) (n) > 0,$ then there exists $m$ such that for any $\g_j \in \G$, we have \begin{equation*}\label{tM2} \| \g_j \cdot m \| < \frac{\k}{100d} \eps_j , \quad \| \g_j \cdot (n-m) \| < \eps_j \,, \end{equation*} so that \begin{equation*}\label{} \| \g_j \cdot n \| < \Big( 1 + \frac{\k}{100d} \Big) \eps_j \,, \end{equation*} for all $\g_j \in \G$. Therefore $ n\in B^{+} := B \left( \G, \left( 1 + \frac{\k}{100d} \right) \eps \right) \, $ and by regularity of $B$, we have $|B^{+}| \ls (1+\k) |B|$. On the other hand, if \begin{equation*}\label{L-} n\in B^{-} := B \left( \G, \left( 1 - \frac{\k}{100d} \right) \eps \right)\,, \end{equation*} then $(B * B^{'}) (n) = |B'|$. Using regularity of $B$ again, we obtain $|B^{-}| \ge (1-\k) |B|$. To prove (\ref{2k}) observe that $$ \l\| (\mu_B * \mu_{B'})(n) - \mu_B (n) \r\|_1 = \l\| (\mu_B * \mu_{B'})(n) - \mu_B (n) \r\|_{ l^1 ( B^{+} \setminus B^{-}) } \ls \frac{|B^{+}| - |B^{-}|}{|B|} < 2 \k $$ as required. $\hfill\Box$ \bigskip \Cor {\it With the assumptions of Lemma \ref{l:L_pm} we have $|B| \ls |B + B'| \ls |B^+|\ls (1+\k) |B|$. } \bigskip Notice that for every $\g\in \Z^*_p$ and a Bohr set $B(\G,\eps)$ we have $\g \cdot B(\G,\eps) = B(\g^{-1}\cdot \G, \eps)$. Thus, if $B(\G,\eps)$ is regular, then $\g \cdot B(\G,\eps)$ is regular as well.
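A standard one-dimensional example (included for orientation; it is not used in the sequel) may be helpful. In $\Gr=\Z_N$, take a single character $\g(n)=e^{2\pi i n/N}$, so that $\|\g(n)\|=\|n/N\|$ and

```latex
B(\{\g\},\eps) \;=\; \{\, n\in\Z_N \;:\; \|n/N\| < \eps \,\}
\;=\; \{\, n \;:\; n \equiv m \!\!\pmod N,\ |m| < \eps N \,\},
\qquad
|B(\{\g\},\eps)| \;\gs\; \eps N \;\gs\; \frac{N}{2}\,\eps\,,
```

in agreement with the lower bound of Lemma \ref{l:Bohr_est} for $d=1$; here $B_\rho$ is simply the shorter interval of half-length $\rho\eps N$.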
\section{A variant of Sanders' theorem} \label{sec:Sanders_Bohr} Very recently Sanders \cite{Sanders_2A-2A_new_optimal} proved the following remarkable result. \bigskip \Th {\it Suppose that $\Gr$ is an abelian group and $A,S\subseteq \Gr$ are finite non--empty sets such that $|A+S| \ls K \min \{ |A|, |S| \}$. Then $(A-A)+(S-S)$ contains a proper symmetric $d(K)$--dimensional coset progression $M$ of size $\exp (-h(K)) |A+S|$. Moreover, we may take $d(K) = O(\log^6 K)$, and $h(K) = O(\log^6 K \log \log K)$. } \label{t:Sanders_2A-2A_new_optimal} \bigskip The aim of this section is to show the following modification of Sanders' theorem which is crucial for our argument. \bigskip \Th {\it Let $\eps,\delta \in (0,1]$ be real numbers. Let $A,A'$ be subsets of a regular Bohr set $B$ and let $S,S'$ be subsets of a regular Bohr set $B_\eps$, where $\eps \ls 1/(100d)$ and $d=\dim B$. Suppose that $\mu_B (A), \mu_{B} (A'), \mu_{B_\eps} (S), \mu_{B_\eps} (S') \gs \a$. Then the set $(A-A')+(S-S')$ contains a translation of a regular Bohr set $z+\t{B}$ such that $\dim \t{B} = d + O(\log^4 (1/\a))$ and \begin{equation}\label{f:t(B)_size} |\t{B}| \gs \exp (-O(d\log d + d\log (1/\eps) + \log^4 (1/\a)\log d +\log^5 (1/\a)+d\log(1/\a))) |B| \,. \end{equation} } \label{t:Sanders_2A-2A_reformulation} \bigskip Observe that the statement above with $O(d^4 + \log^4 (1/\a))$ instead of $d + O(\log^4 (1/\a))$ is a direct consequence of Theorem \ref{t:Sanders_2A-2A_new_optimal} (see the beginning of the proof of Theorem \ref{t:Sanders_2A-2A_reformulation}). Next we will formulate two results, which will be used in the course of the proof of Theorem \ref{t:Sanders_2A-2A_reformulation}. The first lemma, proved by Sanders \cite{Sanders_2A-2A_new_optimal}, is a version of the Croot--Sisask theorem \cite{Croot_Sisask_convolutions}. \bigskip \Lemma {\it Suppose that $\Gr$ is a group, $A,S,T \subseteq \Gr$ are finite non--empty sets such that $|AS| \ls K|A|$ and $|TS|\ls L|S|$.
Let $\epsilon \in (0,1]$ and let $h$ be a positive integer. Then there is $t\in T$ and a set $X\subseteq T-t$, with $$ |X| \gs \exp (-O(\epsilon^{-2} h^2 \log K \log L)) |T| $$ such that $$ | \mu_{A^{-1}} * AS * \mu_{S^{-1}}(x) - 1| \ls \epsilon \quad \mbox{ for all } \quad x \in X^h \,. $$ } \label{pr:Periodic_convolution_3_sets} \bigskip The next lemma is a special case of Lemma 5.3 from \cite{Sanders_2A-2A_new_optimal}. This is a local version of Chang's spectral lemma \cite{Chang}, which is another important result recently proved in additive combinatorics. \bigskip \Lemma {\it Let $\epsilon, \nu, \rho $ be positive real numbers. Suppose that $B$ is a regular Bohr set and let $X\subseteq B.$ Then there is a set $\L$ of size $O(\epsilon^{-2} \log (2\mu^{-1/2}_B (X)))$ such that for any $\gamma\in \Spec_\epsilon (\mu_X)$ we have $$ |1-\gamma(x)| = O(|\L| (\nu + \rho \dim^2 (B)) ) \quad \mbox{ for all} \quad x\in B_{\rho} \wedge B'_\nu \,, $$ where $B' = B(\L,1/2)$. } \label{l:Chang+large_Bohr_exp_sums} \bigskip {\it Proof of Theorem \ref{t:Sanders_2A-2A_reformulation}} Let $\epsilon$ be a small positive constant to be specified later, and put $h = \lceil \log ( K/\epsilon) \rceil$ and $l=O(\epsilon^{-4} h^2 \log^2 K)$. Applying Lemma \ref{pr:Periodic_convolution_3_sets} with $A,\, S$ and $T=B_{\d},\, \d=\eps/(100d)$ and $K=L=O(1/\a)$, we find a set $X\subseteq B_\d-t$ such that \begin{equation}\label{tmp:18.02.2011_1} |X| \gs \exp (-O(\epsilon^{-2} h^2 \log^2 K)) |B_{\d}| \,, \end{equation} and \begin{equation}\label{tmp:18.02.2011_-1} | \mu_{-A} * (A+S) * \mu_{-S}(x) - 1| \ls \epsilon/3 \quad \mbox{ for all } \quad x \in hX \,. \end{equation} We may assume that $B_\d$ is regular.
Applying Lemma \ref{l:Chang+large_Bohr_exp_sums} for $X+t\sbeq B_\d$ with parameters $\nu = O(\epsilon / (l K^{1/2}))$, $\rho = O( \epsilon / (l d^2 K^{1/2}))$, we obtain \begin{equation}\label{tmp:18.02.2011_2} |1-\gamma(x)| \ls \epsilon /(3 K^{1/2}) \quad \mbox{ for all } \quad x\in B_{\d \rho} \wedge B'_\nu \quad \mbox{ and } \quad \gamma \in \Spec_\epsilon (\mu_X) \,. \end{equation} We have $\dim (B_{\d \rho} \wedge B'_\nu)= d +O( \log^4 (1/\a))$. By the same argument, applied for sets $A',S'$ there are sets $X'$, $\L'$ of cardinality $l$ and a Bohr set $B^*_{\nu}$ that satisfies inequalities (\ref{tmp:18.02.2011_1}) and (\ref{tmp:18.02.2011_2}), respectively. Finally, we set $$B''= B_{\d\rho}\wedge B'_\nu \wedge B^*_{\nu}.$$ Clearly, $d''=\dim B''=d+O( \log^4 (1/\a))$ and by Lemma \ref{l:Bohr_est}, Lemma \ref{l:entropy_Bohr}, Lemma \ref{l:Bohr_intersection_Sanders} and $\epsilon=\Omega ( 1)$ we have \begin{equation}\label{in:bohr size} |\t{B}| \gs \exp (-O(d\log d + d\log (1/\eps) + \log^4 (1/\a)\log d +\log^5 (1/\a)+d\log(1/\a))) |B|. \end{equation} In view of the inequality $$\sum_{\gamma} |\F{(A+S)}(\gamma)\F{\mu}_A(\gamma)\F{\mu}_S(\gamma)|\ls \frac{(|A+S||A|)^{1/2}}{|S|}\ls K^{1/2},$$ which follows from the Cauchy--Schwarz inequality and Parseval's formula, we may proceed in the same way as in the proof of Lemma 9.2 in \cite{Sanders_2A-2A_new_optimal} and conclude that for any probability measure $\mu$ supported on $B''$ we have \begin{equation}\label{tmp:18.02.2011_3} \| (A+S) * \mu \|_{\infty} \gs 1-\epsilon \quad \mbox{ and } \quad \| (A'+S') * \mu \|_{\infty} \gs 1-\epsilon \,. \end{equation} Let $\eta = 1/(4 d'')$. We show that $(A-A')+(S-S')$ contains a translation of $\t{B} := B''_\eta$. Indeed, note that $$ B''_{1/2} \subseteq B''_{1/2+\eta} \subseteq \dots \subseteq B''_{1/2+2d''\eta} = B'' \,, $$ so that by the pigeonhole principle, there is some $i\le 2d''$ such that $|B''_{1/2+i\eta}| \ls \sqrt{2} |B''_{1/2+(i-1)\eta}|$.
We apply (\ref{tmp:18.02.2011_3}) for $$ \mu = \frac{B''_{1/2+i\eta}+B''_{1/2+(i-1)\eta}}{|B''_{1/2+i\eta}| + |B''_{1/2+(i-1)\eta}|} \,. $$ Thus, there is $x$ such that $$ |(x+A+S) \cap B''_{1/2+i\eta}| + |(x+A+S) \cap B''_{1/2+(i-1)\eta}| \gs (1-\epsilon) \l( |B''_{1/2+i\eta}| + |B''_{1/2+(i-1)\eta}| \r) \,. $$ Taking $\epsilon$ sufficiently small (see \cite{Sanders_2A-2A_new_optimal} for details), we get $$ |(x+A+S) \cap B''_{1/2+i\eta}| \gs \frac{3}{4} |B''_{1/2+(i-1)\eta}| \,, \quad |(x+A+S) \cap B''_{1/2+(i-1)\eta}| \gs \frac{3}{4} |B''_{1/2+(i-1)\eta}| \,. $$ Analogously, for some $y$, we obtain $$ |(y+A'+S') \cap B''_{1/2+i\eta}| \gs \frac{3}{4} |B''_{1/2+(i-1)\eta}| \,, \quad |(y+A'+S') \cap B''_{1/2+(i-1)\eta}| \gs \frac{3}{4} |B''_{1/2+(i-1)\eta}| \,. $$ Hence for each $b\in \t{B}$, we have \begin{eqnarray*} (A+S) * (-A'-S') (b+y-x) &=& (x+A+S) * (-y-A'-S') (b)\\ &\gs& ((x+A+S) \cap B''_{1/2+i\eta}) * ((-y-A'-S') \cap B''_{1/2+(i-1)\eta}) (b)\\ & \gs& |(x+A+S) \cap B''_{1/2+i\eta}| + |(y+A'+S') \cap B''_{1/2+(i-1)\eta}|\\ &-& |((x+A+S) \cap B''_{1/2+i\eta}) \cap ((-y-A'-S') \cap B''_{1/2+i\eta})|\\ &\gs& \frac{3}{2} |B''_{1/2+(i-1)\eta}| - |B''_{1/2+i\eta}| > 0 \,. \end{eqnarray*} Therefore, $(A-A')+(S-S')$ contains a translation of $\t{B}$. Finally, by Lemma \ref{l:Reg_B}, there is $1/2\ls \sigma \ls 1$ such that $\t B_{\sigma}$ is regular. By (\ref{in:bohr size}) and Lemma \ref{l:entropy_Bohr} $\t B_\sigma$ also satisfies (\ref{f:t(B)_size}). This completes the proof. $\hfill\Box$ \section{Proof of the main result} \label{sec:proof} Let $A\sbeq \{1,\dots,N\}$ be a set having no solution to (\ref{e}). As usual, we embed $A$ in $\Z_p$ with $p$ between $(\sum|a_i|)N$ and $2(\sum|a_i|)N$, so $A$ has no solution to (\ref{e}) in $\Z_p$. All sets considered below are subsets of $\Z_p.$ We start with the following simple observation. \bigskip \Lemma {\it Let $B$ be a regular Bohr set of dimension $d$, $B' \le B_\rho$ be a Bohr set and $\rho\ls \a/(1600d)$.
Suppose that $\mu_B (A), \mu_B (A')\gs \a$. Then there exists $x\in B$ such that \begin{equation}\label{f:l_A_and_-A_pm} (\mu_{B'} * A) (x),\, (\mu_{B'} * A') (-x) \gs \a/4 \end{equation} or \begin{equation}\label{f:l_A_and_-A_inc} \| \mu_{B'} * A \|_\infty \gs 1.5 \a \text{~~or~~} \| \mu_{B'} * A' \|_\infty \gs 1.5 \a \end{equation} } \label{l:A_and_-A} \Proof By regularity of $B$ we have $$ \a \ls \sum_{x\in B} \mu_{B} (x) A(x) \ls \a/8 + \sum_{x\in B} (\mu_{B} * \mu_{{B'}}) (x) A(x) \ls \a/8+ \frac{1}{|B|} \sum_{x\in B} (\mu_{{B'}} * A) (x)$$ and $$\a \ls \a/8+\frac{1}{|B|} \sum_{x\in B} (\mu_{{B'}} * A') (x). $$ Hence $$ \sum_{x\in B} \l( (\mu_{B'} * A) (x) + (\mu_{B'} * A') (-x) \r) \gs (7\a/4)|B| $$ and the result follows. $\hfill\Box$ \bigskip Theorem \ref{t:Roth_Schoen_Gr''_intr} is a consequence of the next lemma. \bigskip \Lemma {\it Suppose that $B$ is a regular Bohr set of dimension $d$ and $A\sbeq B$ with $\mu_B (A) = \a$ has no solution with distinct elements to ({\ref{e}}). Assume that \begin{equation}\label{l:large bohr} |B|\gs \exp(O(d\log d+\log^5(1/\a)+d\log(1/\a) +\log d \log^4 (1/\a)+d\log k)). \end{equation} Then there exists a regular Bohr set $B'$, such that \begin{equation}\label{f:Roth_Schoen_Gr_density'} \| \mu_{B'} * A \|_\infty \gs (1+1/(16k)) \a \,, \end{equation} $\dim B' = d + O(\log^4 (1/\a))$, and \begin{equation}\label{f:Roth_Schoen_Gr_eps'} |B'| \gs \exp (-O(d\log d+ \log^5 (1/\a)+d\log(1/\a) +\log d \log^4 (1/\a))) |B| \,. \end{equation} } \label{l:Roth_Schoen_Gr'} \proof We start with mimicking the argument used by Sanders in \cite{Sanders_Roth_1-eps}. Let $\eps=c\a/(100Mdk^2)$, where $c>0$ is a small constant and $M=\prod |a_i|$, be chosen so that $B_{\eps}$ is a regular Bohr set, and put $B^i=(\prod_{j\not=i} a_j)\cdot B_\eps$. By Lemma \ref{l:L_pm}, we have \begin{equation}\label{f:multiplication_trick} \| k \cdot (A* \mu_B) -\sum_{i=1}^k A * \mu_B * \mu_{B^i}\|_\infty \ls 2kc\a \,.
\end{equation} Thus, for $\e=1/(16k),\,$ either we have $\| \mu_{B^i} * A \|_\infty \gs (1+\e) \a$ for some $1\ls i \ls k,$ or there is $w\in B$ such that $\mu_{B^i}(A+w)=\mu_{B'}(a_i\cdot (A+w)) \gs (1-k\e)\a$ for every $i,$ where $B'=(\prod a_j)\cdot B_\eps$. In the first case we are done, so assume that the last inequalities hold. Since (\ref{e}) is an invariant equation we may translate our set and assume that $\mu_{B'} (a_i\cdot A) \gs (1-k\e)\a$ for all $1\ls i\ls k$. Let $B'_{\eps/2}\sbeq B'' \subseteq B'_\eps$ and $B''_{\eps/2}\sbeq B''' \subseteq B''_\eps$ be regular Bohr sets. By regularity of $B'$ and Lemma \ref{l:A_and_-A} either (\ref{f:l_A_and_-A_inc}) holds, and we are done, or there exists $x\in B'$ with $\mu_{B''+x}(a_1\cdot A)\gs \a/8$ and $\mu_{B''-x}(a_2\cdot A)\gs \a/8.$ We show that there are disjoint subsets $A_1,A_2$ of $A$ such that $\a/32\ls \mu_{B''+x}(a_1\cdot A_1)\ls \a/16$ and $\a/32\ls \mu_{B''-x}(a_2\cdot A_2)\ls \a/16$. Indeed, let $Q_1 = \{ q\in A ~:~ a_1 \cdot q \in B''+x\}$, $Q_2 = \{ q\in A ~:~ a_2 \cdot q \in B''-x\}$. Note that $|Q_1|, |Q_2| \ge \a|B''|/8$. If $|Q_1 \cap Q_2| > \a|B''|/16$ then split $Q_1 \cap Q_2$ into two parts $A_1$, $A_2$ whose sizes differ by at most one. Otherwise, we put $A_1 = Q_1 \setminus Q_2$ and $A_2 = Q_2 \setminus Q_1$. Put $A'=A\setminus (A_1\cup A_2)$; then $\mu_{B'} (a_i\cdot A') \gs 3\a/4$ for $i \ge 3$. Again applying Lemma \ref{l:A_and_-A} for $B'''$ and the arguments above, we find $y\in B'$ and disjoint sets $A_3,A_4\sbeq A'$ such that $\mu_{B'''+y}(a_3\cdot A_3)\gs \a/16$ and $\mu_{B'''-y}(a_4\cdot A_4)\gs \a/16.$ Assume that $k$ is even. Let $l = (k-6)/2 \ge 0$.
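For bookkeeping (an elementary check, stated here for the reader's convenience): the four sets already constructed, together with the $2l$ sets produced in the next step, account for

```latex
4 + 2l \;=\; 4 + (k-6) \;=\; k-2
```

of the $k$ variables of (\ref{e}), so exactly the two variables $x_{k-1}$ and $x_k$ remain free; they are treated at the end of the proof.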
Using the arguments as before, we infer that there are disjoint sets $A_5,\dots,A_{k-2}$ and elements $y_1,\dots,y_l$ such that $$ a_5 \cdot A_5 - y_1 \sbeq B'',~ -a_6 \cdot A_6 - y_1 \sbeq B'',~ \dots ,~ a_{k-3} \cdot A_{k-3} - y_{l} \sbeq B'',~ -a_{k-2} \cdot A_{k-2} - y_l \sbeq B'' $$ and $$ \mu_{B''+y_1} (a_5 \cdot A_5),\, \mu_{B''-y_1} (a_6 \cdot A_6)\,, \dots \,, \mu_{B''+y_l} (a_{k-3} \cdot A_{k-3}),\, \mu_{B''-y_l} (a_{k-2} \cdot A_{k-2}) \ge \frac{\a}{16 k} \,. $$ Finally, by Theorem \ref{t:Sanders_2A-2A_reformulation} applied to sets $$a_1\cdot A_1-x\sbeq B'',~ -a_2\cdot A_2-x\sbeq B'',~ a_3\cdot A_3-y\sbeq B''',~ -a_4\cdot A_4-y\sbeq B''',$$ there exists a Bohr set $\t{B}\le B'$ and $z$ such that \begin{equation}\label{tmp:02.05.2011_1} \t{B} + z \subseteq a_1\cdot A_1 +a_2\cdot A_2 + a_3\cdot A_3 + a_4\cdot A_4 + \sum_{j=5}^{k-2} a_j \cdot A_j \,, \end{equation} $\t{d}=\dim \t{B} = d + O(\log^4 (1/\a))$ and \begin{equation}\label{tmp:22.02.2011_1'} |\t{B}| \gs \exp (-O(d\log d+ \log^5 (1/\a)+d\log(1/\a) +\log d \log^4 (1/\a))) |B| \,. \end{equation} The sum over $j$ in (\ref{tmp:02.05.2011_1}) can be empty; in that case we interpret it as zero. Notice that $z\in 4B'' + (k-6)B'''\sbeq kB''.$ Since $A_1,\dots, A_{k-2}$ are disjoint it follows that \begin{equation}\label{tmp:13_02_2011_1'} a_{k-1}x_{k-1}+a_kx_k \notin \t{B} - z \end{equation} for all distinct $ x_{k-1},x_k \in A\setminus \cup_{j=1}^{k-2} A_j.$ By Lemma \ref{l:Reg_B} we find $1/(400k\t{d})\ls \d\ls 1/(200k\t{d})$ such that $\t{B}_\d$ is regular. Obviously $\t{B}_{\d}$ satisfies (\ref{tmp:22.02.2011_1'}). Write $$ E_i:= \{ x \in B' ~:~ (\mu_{\t{B}_{\d}} * (a_i\cdot A)) (x) \gs k/|\t{B}_{\d}| \} \,.
$$ Observe that if $-z\in E_{k-1}+E_k,$ then one can find a solution to (\ref{e}) with distinct $ x_1,\dots,x_k \in A.$ Therefore $E_{k-1}\sbeq B'\setminus (-E_k-z),$ so that $$|E_{k-1}|\ls |B'\setminus (-E_k-z)| = |B'| - |B'\cap (E_k+z)| \ls |B'| - |E_k| + 100 \eps M dk|B'| \,.$$ Finally $$|E_{k-1}|+|E_k|\ls (3/2)|B'|,$$ so that $|E_i|\ls (3/4)|{B'}|$ for some $i$. Thus $$|A|=\|\mu_{\t{B}_{\d}} * (a_i\cdot A)\|_1\ls \|\mu_{\t{B}_{\d}} * (a_i\cdot A)\|_\infty |E_i|+(k/|\t{B}_{\d}|)|B'_{1+\d}|\,.$$ By (\ref{l:large bohr}) $$|{\t{B}_{\d}}|\gs \exp(-O(d\log d+\log^5(1/\a)+d\log(1/\a) +\log d \log^4 (1/\a)))|B|\gs 10\cdot 8^{d+1}k/\a\,,$$ and since Lemma \ref{l:entropy_Bohr} implies $|B'_{1+\d}|\ls 2|B'|,$ we obtain $$\|\mu_{\t{B}_{\d}} * (a_i\cdot A)\|_\infty|E_i||\t{B}_{\d}|\gs 0.9 |\t{B}_{\d}||A|.$$ Hence $$\|\mu_{B^*} * A\|_\infty\gs 1.1\a,$$ where $B^*=a_i^{-1}\cdot {\t{B}_{\d}},$ and the assertion follows. Now suppose that $k$ is odd. Only the first part of the proof needs to be slightly modified. Certainly, we may assume that $a_5 = 1$. By regularity of $B$ we have \begin{equation}\label{f:multiplication_trick'} \| k \cdot (A* \mu_B) - A * \mu_B * \mu_{B''} - \sum_{i\not=5}^{} A * \mu_B * \mu_{B^i}\|_\infty \ls 2kc\a \,, \end{equation} where $B^i$ and $B''$ are defined as before. Put $l=(k-7)/2 \ge 0$. By Lemma \ref{l:A_and_-A} there are disjoint sets $A_1,\dots, A_{k-2}$ and elements $x,y,y_1,\dots,y_l$ such that (\ref{tmp:02.05.2011_1})--(\ref{tmp:13_02_2011_1'}) hold. However, $A_5\sbeq B'',$ so that $z\in kB''$. One can finish the proof in exactly the same way as before. $\hfill\Box$ \bigskip {\it Proof of Theorem \ref{t:Roth_Schoen_Gr''_intr}} Let $A\sbeq B^0=\Z_p, |A|\gs \a p.$ We apply iteratively Lemma \ref{l:Roth_Schoen_Gr'}.
After $t$ steps we obtain a regular Bohr set $B^t$ and $x_t\in \Z_p$ such that $|A\cap (B^t+x_t)|\gs (1+1/(16k))^t\a |B^t|,\, \dim B^t\ll t\log^4(1/\a),$ and $$|B^t|\gs \exp(-O(t\log^4 (1/\a)\log \log (1/\a)+\log^5(1/\a)))|B^{t-1}|.$$ Since the density is always at most $1$ we may apply Lemma \ref{l:Roth_Schoen_Gr'} at most $O(\log(1/\a))$ times. Therefore, after $t=O(\log (1/\a))$ iterations the assumption of Lemma \ref{l:Roth_Schoen_Gr'} is violated, so that $$\exp(-O(\log^6 (1/\a)\log \log (1/\a)))p \ls |B^t|\ls \exp(O(\log^5(1/\a))),$$ which yields $$\a\ll \exp(-c(\log p/\log \log p)^{1/6}),$$ and the assertion follows. $\hfill\Box$ \section{Polynomial Freiman--Ruzsa Conjecture and linear equation} \label{sec:PFRC} The Polynomial Freiman--Ruzsa Conjecture can be formulated in the following way. \bigskip \con\label{con1} {\it Let $A\sbeq \z_N$, $|A|=\a N$. Then there exists a Bohr set $B(\G,\eps)\sbeq 2A-2A$ such that $|\G|=d\ll \log(1/\a)$ and $\eps\gg 1/\log (1/\a).$} \bigskip We have $$|B(\G,\eps)|\gs \frac12\eps^{d}N,$$ so that it would give a nontrivial result provided that $\a\gg N^{-c/\log\log N}.$ However, it was proved in \cite{Shkredov1} and \cite{Shkredov2} that in Chang's lemma (see Section \ref{sec:Sanders_Bohr}) one can take much larger $\eps.$ This gives a (little) support for the following version of the above conjecture for sparse sets. \bigskip \con\label{con2} {\it Let $A, A'\sbeq \z_N, |A|, |A'|\gs N^{1-c},$ then there exists a $\d_c \log N-$dimensional Bohr set $B\sbeq A-A +A'-A'$ such that $|B|\gg N^{1-c'}$ and $\d_c \rightarrow 0,c'\rightarrow 0$ with $c\rightarrow 0.$ Furthermore, each $b\in B$ has $\gg |A|^2|A'|^2/N$ representations in the form $x-y+x'-y',\, x,y\in A,\, x',y'\in A'.$ } \bigskip We shall give here an application of Conjecture \ref{con2}. First we recall some definitions from \cite{Ruzsa_equations}. Let \begin{equation}\label{equation} a_1x_1+\dots+a_kx_k=0 \end{equation} be an invariant linear equation.
We say that the solution $x_1,\dots,x_k$ of (\ref{equation}) is trivial if there is a partition $\{1,\dots,k\}=\T_1\cup\dots \cup \T_{l}$ into nonempty and disjoint sets $\T _j$ such that $x_u=x_v$ if and only if $u,v\in \T_j$ for some $j$ and $$\sum_{i\in \T_j}a_i=0,$$ for every $1\ls j\ls l.$ The {\it genus} of (\ref{equation}) is the largest ${\s}$ such that there is a partition $\{1,\dots,k\}=\T _1\cup\dots \cup \T_{\s}$ into nonempty and disjoint sets $\T _j$ such that $$\sum_{i\in \T_j}a_i=0,$$ for every $1\ls j\ls \s.$ Let $r(N)$ be the maximum size of a set $A\sbeq \{1,\dots ,N\}$ having no nontrivial solution to (\ref{equation}) with $x_i\in A$ and let $R(N)$ be the analogous maximum over sets $A$ such that the equation (\ref{equation}) has no solution with distinct $x_i\in A$. It is not hard to prove that $r(N)\ll N^{1/\s}.$ Much less is known about the behavior of $R(N).$ Bukh \cite{Bukh} showed that we always have $R(N)\ll N^{1/2-\eps}$ for the symmetric equations $$a_1x_1+\dots+a_lx_l=a_1y_1+\dots+a_ly_l.$$ Our result is the following. \Th {\it Assuming Conjecture \ref{con2} we have $$R(N)\ll N^{1-c},$$ for every invariant equation (\ref{equation}) with $a_1=-a_2,~a_3=-a_4,$ where $c=c(a_1,\dots,a_k).$ } \proof Suppose that $A$ has no solution with distinct elements to an equation (\ref{equation}) with $a_1=-a_2,~a_3=-a_4,$ and assume that $|A|\gg N^{1-c},\, c>0.$ We embed $A$ in $\Z_M$ with $M=SN,$ where $S=\sum|a_i|,$ so that any solution to (\ref{equation}) in $\Z_M$ is a genuine solution in $\Z.$ Let $A=A_1\cup A_2$ be a partition of $A$ into roughly equal parts.
If Conjecture \ref{con2} holds, then there is a Bohr set $$B\sbeq a_1\cdot A_1-a_1\cdot A_1+a_3\cdot A_1-a_3\cdot A_1$$ of dimension at most $\d_c \log N$ and size $\gg N^{1-c'}.$ Put $B'=B_{1/S}.$ We show that for every $t\in \Z_M$ we have $$|(t+B')\cap A_2|\ls k-4.$$ Indeed, if there are distinct $x_5,\dots,x_k\in (t+B')\cap A_2$, then $$\sum_{i=5}^k a_ix_i\in \Big (\sum_{i=5}^k a_it\Big)+B=B.$$ However, each element in $B$ has $\gg |A|^4/M$ representations in the form $a_1x-a_1y+a_3z-a_3w,\, x,y,z,w\in A_1$, so one can choose a representation in which $x,y,z,w$ are pairwise distinct and distinct from $x_5,\dots,x_k$. This would give a solution to (\ref{equation}) with distinct integers. Hence, $$|B'||A_2|=\sum_t |(t+B')\cap A_2|\ls kM$$ so $$|A|\ls 2kSN/|B'|.$$ Now, by Lemma \ref{l:entropy_Bohr} it follows that $|B'|\gg S^{-4d}|B|\gg N^{1-c'-4\d_c \log S}.$ This leads to a contradiction, provided $c$ is small enough. $ \hfill\Box$ \bigskip {\bf Acknowledgement } We wish to thank Tom Sanders for stimulating discussions.
{ "timestamp": "2011-06-09T02:02:55", "yymm": "1106", "arxiv_id": "1106.1601", "language": "en", "url": "https://arxiv.org/abs/1106.1601", "abstract": "We prove, in particular, that if a subset A of {1, 2,..., N} has no nontrivial solution to the equation x_1+x_2+x_3+x_4+x_5=5y then the cardinality of A is at most N e^{-c(log N)^{1/7-eps}}, where eps>0 is an arbitrary number, and c>0 is an absolute constant. In view of the well-known Behrend construction this estimate is close to best possible.", "subjects": "Number Theory (math.NT)", "title": "Roth's theorem in many variables" }
https://arxiv.org/abs/1111.5340
On the Expected Complexity of Random Convex Hulls
In this paper we present several results on the expected complexity of a convex hull of $n$ points chosen uniformly and independently from a convex shape. (i) We show that the expected number of vertices of the convex hull of $n$ points, chosen uniformly and independently from a disk is $O(n^{1/3})$, and $O(k \log{n})$ for the case of a convex polygon with $k$ sides. Those results are well known (see \cite{rs-udkhv-63,r-slcdn-70,ps-cgi-85}), but we believe that the elementary proofs given here are simpler and more intuitive. (ii) Let $\D$ be a set of directions in the plane; we define a generalized notion of convexity induced by $\D$, which extends both rectilinear convexity and standard convexity. We prove that the expected complexity of the $\D$-convex hull of a set of $n$ points, chosen uniformly and independently from a disk, is $O(n^{1/3} + \sqrt{n\alpha(\D)})$, where $\alpha(\D)$ is the largest angle between two consecutive vectors in $\D$. This result extends the known bounds for the cases of rectilinear and standard convexity. (iii) Let $\B$ be an axis parallel hypercube in $\Re^d$. We prove that the expected number of points on the boundary of the quadrant hull of a set $S$ of $n$ points, chosen uniformly and independently from $\B$ is $O(\log^{d-1}n)$. The quadrant hull of a set of points is an extension of rectilinear convexity to higher dimensions. In particular, this number is larger than the number of maxima in $S$, and is also larger than the number of points of $S$ that are vertices of the convex hull of $S$. Those bounds are known \cite{bkst-anmsv-78}, but we believe the new proof is simpler.
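The $n^{1/3}$ phenomenon from item (i) is easy to observe empirically. The sketch below (an illustration, not part of the paper) samples points from the unit disk by rejection and counts hull vertices with Andrew's monotone chain:

```python
import random

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices in counter-clockwise order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        # z-component of (a - o) x (b - o); positive for a left turn
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def random_disk_point(rng):
    # rejection sampling from the unit disk
    while True:
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x*x + y*y <= 1:
            return (x, y)

rng = random.Random(0)
sizes = {}
for n in (100, 1000, 10000):
    pts = [random_disk_point(rng) for _ in range(n)]
    sizes[n] = len(convex_hull(pts))
```

With the fixed seed above, the hull size grows far more slowly than $n$, consistent with the $O(n^{1/3})$ bound proved in Section 2.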
\section[Introduction]{Introduction} Let $C$ be a fixed compact convex shape, and let $X_n$ be a random sample of $n$ points chosen uniformly and independently from $C$. Let $Z_n$ denote the number of vertices of the convex hull of $X_n$. R{\'e}nyi and Sulanke \cite{rs-udkhv-63} showed that $E[Z_n] = O(k \log{n})$, when $C$ is a convex polygon with $k$ vertices in the plane. Raynaud \cite{r-slcdn-70} showed that the expected number of facets of the convex hull is $O(n^{(d-1)/(d+1)})$, where $C$ is a ball in $\Re^d$, so $E[Z_n] = O(n^{1/3})$ when $C$ is a disk in the plane. Raynaud \cite{r-slcdn-70} also showed that the expected number of facets of ${\mathop{\mathrm{CH}}}(X_n) = ConvexHull(X_n)$ is $O \pth{(\log(n))^{(d-1)/2}}$, where the points are chosen from $\Re^d$ by a $d$-dimensional normal distribution. See \cite{ww-sg-93} for a survey of related results. All these bounds are essentially derived by computing or estimating integrals that quantify the probability that two specific points of $X_n$ form an edge of the convex hull (multiplying this probability by $\binom{n}{2}$ gives $E[Z_n]$). Those integrals are fairly complicated to analyze, and the resulting proofs are rather long, counter-intuitive and not elementary. Efron \cite{e-chrsp-65} showed that instead of arguing about the expected number of vertices directly, one can argue about the expected area/volume of the convex hull, and this in turn implies a bound on the expected number of vertices of the convex hull. In this paper, we present a new argument on the expected area/volume of the convex hull (this method can be interpreted as a discrete approximation to the integral methods). The argument goes as follows: Decompose $C$ into smaller shapes (called tiles).
Using the topology of the tiling and the underlying type of convexity, we argue about the expected number of tiles that are exposed by the random convex hull, where a tile is exposed if it does not lie completely in the interior of the random convex hull. This results in a lower bound on the area/volume of the random convex hull. We apply this technique to the standard case, and also to more exotic types of convexity. In Section \ref{sec:random:ch}, we give rather simple and elementary proofs of the aforementioned bounds $E[Z_n]=O(n^{1/3})$ for $C$ a disk, and $E[Z_n]=O \pth{ k \log{n}}$ for $C$ a convex $k$-gon. We believe that these new elementary proofs are indeed simpler and more intuitive\footnote{Preparata and Shamos \cite[pp. 152]{ps-cgi-85} comment on the older proof for the case of a disk: ``Because the circle has no corners, the expected number of hull vertices is comparatively high, although we know of no elementary explanation of the $n^{1/3}$ phenomenon in the planar case.'' It is the author's belief that the proof given here remedies this situation.} than the previous integral-based proofs. The question on the expected complexity of the convex hull remains valid, even if we change our type of convexity. In Section \ref{sec:genrelized:convex}, we define a generalized notion of convexity induced by ${\cal D}$, a given set of directions. This extends both rectilinear convexity and standard convexity. We prove that the expected complexity of the ${\cal D}$-convex hull of a set of $n$ points, chosen uniformly and independently from a disk, is $O \pth {n^{1/3} + \sqrt{n\alpha({\cal D})}}$, where $\alpha({\cal D})$ is the largest angle between two consecutive vectors in ${\cal D}$. This result extends the known bounds for the cases of rectilinear and standard convexity.
Finally, in Section \ref{sec:hcube}, we deal with another type of convexity, which is an extension of the generalized convexity mentioned above to higher dimensions, where the set of directions is the standard orthonormal basis of $\Re^d$. We prove that the expected number of points that lie on the boundary of the quadrant hull of $n$ points, chosen uniformly and independently from the axis-parallel unit hypercube in $\Re^d$, is $O(\log^{d-1} n)$. This readily implies an $O(\log^{d-1} n)$ bound on the expected number of maxima and the expected number of vertices of the convex hull of such a point set. Those bounds are known \cite{bkst-anmsv-78}, but we believe the new proof is simpler and more intuitive. \section[On the Complexity of the Convex Hull of a Random Point Set]{On the Complexity of the Convex Hull of a Random Point Set} \label{sec:random:ch} In this section, we show that the expected number of vertices of the convex hull of $n$ points, chosen uniformly and independently from a disk, is $O(n^{1/3})$. Applying the same technique to a convex polygon with $k$ sides, we prove that the expected number of vertices of the convex hull is $O( k \log{n})$.\footnote{As already noted, these results are well known (\cite{rs-udkhv-63,r-slcdn-70,ps-cgi-85}), but we believe that the elementary proofs given here are simpler and more intuitive.} The following lemma shows that the larger the expected area outside the random convex hull, the larger the expected number of vertices of the convex hull. \begin{lemma} Let $C$ be a bounded convex set in the plane, such that the expected area of the convex hull of $n$ points, chosen uniformly and independently from $C$, is at least $\pth{1-f(n)}Area(C)$, where $1 \geq f(n) \geq 0$, for $n \geq 0$. Then the expected number of vertices of the convex hull is $\leq n f(n/2)$. \label{lemma:area:to:vertices} \end{lemma} \begin{proof} Let $N$ be a random sample of $n$ points, chosen uniformly and independently from $C$.
Let $N_1$ (resp. $N_2$) denote the set of the first (resp. last) $n/2$ points of $N$. Let $V_1$ (resp. $V_2$) denote the number of vertices of $H = {\mathop{CH}}(N_1 \cup N_2)$ that belong to $N_1$ (resp. $N_2$), where ${\mathop{CH}}(N_1 \cup N_2) = {\mathrm{ConvexHull}}( N_1 \cup N_2 )$. Clearly, the expected number of vertices of $H$ is $E[V_1] + E[V_2]$. On the other hand, \[ E \pbrc{V_1 \sep{N_2}} \leq \frac{n}{2} \pth{\frac{Area(C) - Area({\mathop{CH}}(N_2))}{Area(C)}}, \] since $E[V_1|N_2]$ is bounded by the expected number of points of $N_1$ falling outside ${\mathop{CH}}(N_2)$. We have \begin{eqnarray*} E[V_1] &=& E_{N_2} \pbrc{ E[V_1|N_2] } \leq E \pbrc{\frac{n}{2} \pth{\frac{Area(C) - Area({\mathop{CH}}(N_2))}{Area(C)}}}\\ &\leq& \frac{n}{2}f(n/2), \end{eqnarray*} since $E[X]=E_Y[E[X|Y]]$ for any two random variables $X,Y$. Thus, the expected number of vertices of $H$ is $E[V_1] + E[V_2] \leq n f(n/2)$. \end{proof} \begin{remark} Lemma \ref{lemma:area:to:vertices} is known as {\em Efron's Theorem}. See \cite{e-chrsp-65}. \end{remark} \begin{theorem} The expected number of vertices of the convex hull of $n$ points, chosen uniformly and independently from the unit disk, is $O(n^{1/3})$. \label{theorem:area} \end{theorem} \begin{proof} We claim that the expected area of the convex hull of $n$ points, chosen uniformly and independently from the unit disk, is at least $\pi - O \pth{ n^{-2/3}}$. Indeed, let $D$ denote the unit disk, and assume without loss of generality, that $n=m^3$, where $m$ is a positive integer. Partition $D$ into $m$ sectors, ${\cal S}_1, \ldots, {\cal S}_m$, by placing $m$ equally spaced points on the boundary of $D$ and connecting them to the origin. Let $D_1, \ldots, D_{m^2}$ denote the $m^2$ disks centered at the origin, such that (i) $D_{1} = D$, and (ii) $Area(D_{i-1}) - Area(D_{i}) = \pi/m^2$, for $i=2, \ldots, m^2$. Let $r_i$ denote the radius of $D_i$, for $i=1, \ldots, m^2$.
Let $S_{i,j} = (D_{i} \setminus D_{i+1}) \cap {\cal S}_j$, and $S_{m^2,j}=D_{m^2} \cap {\cal S}_j$, for $i=1, \ldots, m^2-1$, $j=1, \ldots, m$. The set $S_{i,j}$ is called the $i$-th {\em tile} of the sector ${\cal S}_j$, and its area is $\pi/n$, for $i=1,\ldots,m^2$, $j=1, \ldots,m$. Let $N$ be a random sample of $n$ points chosen uniformly and independently from $D$. Let $X_j$ denote the first index $i$ such that $N \cap S_{i, j} \ne \emptyset$, for $j=1, \ldots, m$. For a fixed $j \in \brc{1,\ldots,m}$, the probability that $X_j = k$ is upper-bounded by the probability that the tiles $S_{1, j} , \ldots, S_{(k-1),j}$ do not contain any point of $N$; namely, by $\pth{1-\frac{k-1}{n}}^{n}$. Thus, $P[X_j = k] \leq \pth{1-\frac{k-1}{n}}^{n} \leq e^{-(k-1)}$, since $1-x\leq e^{-x}$, for $x \geq 0$. Thus, \[ E \pbrc{ X_j } = \sum_{k=1}^{m^2} k P[X_j = k ] \leq \sum_{k=1}^{m^2} k e^{-(k-1)} = O(1), \] for $j=1, \ldots, m$. Let $K_o$ denote the convex hull of $N \cup \brc{o}$, where $o$ is the origin. The tile $S_{i,j}$ is {\em exposed} by a set $K$, if $S_{i,j} \setminus K \ne \emptyset$. We claim that at most $X_{j-1} + X_{j+1} + O(1)$ tiles are exposed by $K_o$ in the sector ${\cal S}_j$, for $j=1,\ldots, m$ (where we put $X_0 = X_m$, $X_{m+1} = X_1$). \begin{figure} \centerline{ \Ipe{figs/expose.ipe} } \caption{Illustrating the proof that bounds the number of tiles exposed by $T$ inside ${\cal S}_j$} \label{fig:slice} \end{figure} Indeed, let $w=w(N,j) = \max(X_{j-1},X_{j+1})$, and let $p,q$ be the two points in $S_{w,j-1}, S_{w,j+1}$, respectively, such that the number of tiles exposed by the triangle $T = \triangle{opq}$, in the sector ${\cal S}_j$, is maximal. Both $p$ and $q$ lie on ${\partial}{D_{w+1}}$ and on the external radii bounding ${\cal S}_{j-1}$ and ${\cal S}_{j+1}$, as shown in Figure \ref{fig:slice}. Clearly, any tile which is exposed in ${\cal S}_j$ by $K_o$ is also exposed by $T$.
Let $s$ denote the segment connecting the middle of the base of $T$ to its closest point on ${\partial}{D_w}$. The number of tiles in ${\cal S}_j$ exposed by $T$ is bounded by $\max \pth{X_{j-1}, X_{j+1}}$, plus the number of tiles intersecting the segment $s$. The length of $s$ is \[ |oq| - |oq| \cos \pth{ \frac{3}{2} \cdot \frac{2\pi}{m}} \leq 1 - \cos \pth{ \frac{3}{2} \cdot \frac{2\pi}{m}} \leq \frac{1}{2} \pth{ \frac{3\pi}{m}}^2 = \frac{4.5\pi^2}{m^2}, \] since $\cos(x) \geq 1-x^2/2$, for $x \geq 0$. On the other hand, $r_{i} - r_{i+1} \geq r_{i-1} - r_{i} \geq 1/(2m^2)$, for $i=2,\ldots,m^2-1$. Thus, the segment $s$ intersects at most $\ceil{||s||/(1/(2m^2))} = \ceil{9\pi^2} = 89$ tiles, and we have that the number of tiles exposed in the sector ${\cal S}_j$ by $K_o$ is at most $\max \pth{X_{j-1}, X_{j+1}} + 89 \leq X_{j-1} + X_{j+1} + 89$, for $j=1, \ldots, m$. Thus, the expected number of tiles exposed by $K_o$ is at most \[ E \pbrc{ \sum_{j=1}^{m} \pth{ X_{j-1} + X_{j+1} + 89 } } = O(m). \] The area of $K ={\mathop{CH}}(N)$ is bounded from below by the area of the tiles which are not exposed by $K$. The probability that $K \subsetneq K_o$ (namely, the origin is not inside $K$, or, equivalently, all points of $N$ lie in some semidisk) is at most $2\pi/2^n$, as easily verified. Hence, \[ E[ Area( K ) ] \geq E[ Area( K_o ) ] - P \pbrc{ K \ne K_o } \pi = \pi - O(m)\frac{\pi}{n} - \frac{2\pi}{2^n}\pi = \pi -O \pth{n^{-2/3}}. \] The assertion of the theorem now follows from Lemma \ref{lemma:area:to:vertices}. \end{proof} \begin{lemma} The expected number of vertices of the convex hull of $n$ points, chosen uniformly and independently from the unit square, is $O(\log{n})$.
\label{lemma:vertices:square} \end{lemma} \begin{figure} \centerline{ \Ipe{figs/sq_expose.ipe} } \caption{Illustrating the proof that bounds the number of tiles exposed by ${\mathop{CH}}(N)$ inside the $j$-th column, by using a non-uniform tiling of the strips to the left and to the right of the $j$-th column. The area of such a larger tile is at least $1/n$.} \label{fig:column:ch} \end{figure} \begin{proof} We claim that the expected area of the convex hull of $n$ points, chosen uniformly and independently from the unit square, is at least $1 - O \pth{ \log(n)/ n}$. Let $S$ denote the unit square. Partition $S$ into $n$ rows and $n$ columns, resulting in a partition of $S$ into $n^2$ identical squares. Let $S_{i,j} = [(i-1)/n,i/n] \times [(j-1)/n,j/n]$ denote the $j$-th square in the $i$-th column, for $1 \leq i,j \leq n$. Let ${\cal S}_i = \cup_{j=1}^{n} S_{i,j}$ denote the $i$-th column of $S$, for $i=1,\ldots,n$, and let ${\cal S}(l,k) = \cup_{i=l}^{k} {\cal S}_i$, for $1 \leq l \leq k \leq n$. Let $N$ be a random sample of $n$ points chosen uniformly and independently from $S$. Let $X_j$ denote the first index $i$ such that $N \cap (\cup_{l=1}^{j-1} S_{l, i}) \ne \emptyset$, for $j=2, \ldots, n-1$; namely, $X_j$ is the index of the first row in ${\cal S}(1,j-1)$ that contains a point from $N$. Symmetrically, let $X_{j}'$ be the index of the first row in ${\cal S}(j+1,n)$ that contains a point of $N$. Clearly, $E[X_j] = E[X_{n-j+1}']$, for $j=2,\ldots, n-1$. Let $Z_j$ denote the number of squares at the bottom of the $j$-th column that are exposed by ${\mathop{CH}}(N)$, for $j=2,\ldots, n-1$. Arguing as in the proof of Theorem \ref{theorem:area}, we have that $Z_j \leq \max ( X_j, X_j' ) \leq X_j + X_j'$. Thus, in order to bound $E[Z_j]$, we first bound $E[X_j]$ by covering the strips ${\cal S}(1,j-1), {\cal S}(j+1,n)$ by tiles of area $\geq 1/n$.
In particular, let $h(l) = \ceil{n/(l-1)}$, and let $R_j(m) = [0, (j-1)/n] \times [h(j)(m-1)/n, h(j)m/n ]$, and let $R_j'(m) = [(j+1)/n, 1] \times [h(n-j+1)(m-1)/n, h(n-j+1)m/n ]$, for $j=2, \ldots, n-1$. See Figure \ref{fig:column:ch}. Let $Y_j$ denote the minimal index $i$ such that $R_j(i) \cap N \ne \emptyset$. The area of $R_{j}(i)$ is at least $1/n$, for any $i$ and $j$. Arguing as in the proof of Theorem \ref{theorem:area}, it follows that $E[ Y_j ] = O(1)$. On the other hand, $E[ X_j] \leq h(j) E[Y_j] = O( n/(j-1))$. Symmetrically, $E[X_j'] = O(n/(n-j))$. Thus, by applying the above argument to the four directions (top, bottom, left, right), we have that the expected number of squares $S_{i,j}$ exposed by ${\mathop{CH}}(N)$ is bounded by \[ 4n - 4 + 4 { \sum_{j=2}^{n-1} E[Z_j]} < 4n + 4 { \sum_{j=2}^{n-1} (E[X_j] + E[X_j'])} = 4n + 8 { \sum_{j=2}^{n-1} O\pth{\frac{n}{j-1}} } = O(n\log{n}), \] where $4n - 4$ is the number of squares adjacent to the boundary of $S$. Since the area of each square is $1/n^2$, it follows that the expected area of ${\mathop{CH}}(N)$ is at least $1 - O(\log(n)/n)$. By Lemma \ref{lemma:area:to:vertices}, the expected number of vertices of the convex hull is $O(\log n)$. \end{proof} \begin{lemma} The expected number of vertices of the convex hull of $n$ points, chosen uniformly and independently from a triangle, is $O(\log{n})$. \label{lemma:triangle} \end{lemma} \begin{proof} We claim that the expected area of the convex hull of $n$ points, chosen uniformly and independently from a triangle $T$, is at least $(1 - O \pth{ \log(n)/ n}) Area(T)$. We adapt the tiling used in Lemma \ref{lemma:vertices:square} to a triangle. Namely, we partition $T$ into $n$ equal-area triangles, by segments emanating from a fixed vertex, each of which is then partitioned into $n$ equal-area trapezoids by segments parallel to the opposite side, such that each resulting trapezoid has area $Area(T)/n^2$. See Figure \ref{fig:triangle:tiling}.
Notice that this tiling has identical topology to the tiling used in Lemma \ref{lemma:vertices:square}. Thus, the proof of Lemma \ref{lemma:vertices:square} can be applied directly to this case, repeating the tiling process three times, once for each vertex of $T$. This readily implies the asserted bound. \end{proof} \begin{figure} \centerline{ \Ipe{figs/tri_expose.ipe} } \caption{Illustrating the proof of Lemma \ref{lemma:vertices:square} for the case of a triangle.} \label{fig:triangle:tiling} \end{figure} \begin{theorem} The expected number of vertices of the convex hull of $n$ points, chosen uniformly and independently from a polygon $P$ having $k$ sides, is $O(k \log{n})$. \end{theorem} \begin{proof} We triangulate $P$ in an arbitrary manner into $k$ triangles $T_1, \ldots, T_k$. Let $N$ be a random sample of $n$ points, chosen uniformly and independently from $P$. Let $Y_i = |T_i \cap N|$, $N_i = T_i \cap N$, and $Z_i = |{\mathop{CH}}(N_i)|$, for $i=1, \ldots, k$. Notice that the distribution of the points of $N_i$ inside $T_i$ is identical to the distribution of $Y_i$ points chosen uniformly and independently from $T_i$. In particular, $E[Z_i | Y_i] = O( \log{Y_i})$, by Lemma \ref{lemma:triangle}, and $E[ Z_i ] = E_{Y_i}[ E[ Z_i | Y_i ] ] = O( \log{n } )$, for $i=1, \ldots, k$. Thus, $E[ |{\mathop{CH}}(N)| ] \leq E \pbrc{ \sum_{i=1}^{k} |{\mathop{CH}}(N_i)| } \leq \sum_{i=1}^{k} E[ Z_i] = O( k \log{n} )$. \end{proof} \section{On the Expected Complexity of a Generalized Convex Hull Inside a Disk} \label{sec:genrelized:convex} In this section, we derive a bound on the expected complexity of a generalized convex hull of a set of points, chosen uniformly and independently from the unit disk. The new bound matches the known bounds for the cases of standard convexity and maxima. The bound follows by extending the proof of Theorem \ref{theorem:area}. We begin with some terminology and some initial observations, most of them taken or adapted from \cite{mp-ofsch-97}.
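Before turning to the generalized setting, the bounds of Section \ref{sec:random:ch} are easy to check empirically. The following sketch is not part of the paper's argument; it uses a standard monotone-chain convex hull, and the sampler and trial count are arbitrary choices made for illustration:

```python
import random

def convex_hull(points):
    """Andrew's monotone chain; returns the hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def turn(o, a, b):
        # cross product of (a - o) and (b - o)
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:                       # build lower hull left to right
        while len(lower) >= 2 and turn(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull right to left
        while len(upper) >= 2 and turn(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # endpoints shared, drop duplicates

def sample_disk():
    """Rejection-sample a uniform point from the unit disk."""
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x*x + y*y <= 1:
            return (x, y)

def mean_hull_size(n, sampler, trials=20):
    """Monte Carlo estimate of E[Z_n] for the given sampler."""
    return sum(len(convex_hull([sampler() for _ in range(n)]))
               for _ in range(trials)) / trials
```

Comparing, say, $n=1000$ with $n=8000$ points in the disk, the sample mean should grow roughly by a factor of $2 \approx 8^{1/3}$, while for points sampled uniformly from the unit square it grows only slightly, consistent with the $O(n^{1/3})$ and $O(\log n)$ bounds.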
A set ${\cal D}$ of vectors in the plane is a {\em set of directions}, if the length of all the vectors in ${\cal D}$ is $1$, and if $v \in {\cal D}$ then $-v \in {\cal D}$. Let ${\cal D}_{\Re}$ denote the set of all possible directions. A set $C$ is {\em ${\cal D}$-convex} if the intersection of $C$ with any line with a direction in ${\cal D}$ is connected. By definition, a set $C$ is convex (in the standard sense), if and only if it is ${\cal D}_{\Re}$-convex. For a set $C$ in the plane, we denote by ${\mathop{\cal{CH}}}_{{\cal D}}(C)$ the {\em ${\cal D}$-convex hull} of $C$; that is, the smallest ${\cal D}$-convex set that contains $C$. While this seems like a reasonable extension of the regular notion of convexity, its behavior is counterintuitive. For example, let ${\cal D}_Q$ denote the set of all rational directions (the slopes of the directions are rational numbers). Since ${\cal D}_Q$ is dense in ${\cal D}_{\Re}$, one would expect that ${\mathop{\cal{CH}}}_{{\cal D}_Q}(C) = {\mathop{\cal{CH}}}_{{\cal D}_\Re}(C) = {\mathop{\mathrm{CH}}}(C)$. However, if $C$ is a set of points such that the slope of any line connecting a pair of points of $C$ is irrational, then ${\mathop{\cal{CH}}}_{{\cal D}_Q}(C) = C$. See \cite{osw-dcrch-85, rw-cgro-88, rw-ocfoc-87} for further discussion of this type of convexity. \begin{definition} Let $f$ be a real function defined on a ${\cal D}$-convex set $C$. We say that $f$ is {\em ${\cal D}$-convex} if, for any $x \in C$ and any $v \in {\cal D}$, the function $g(t)=f(x+tv)$ is a convex function of the real variable $t$. (The domain of $g$ is an interval in $\Re$, as $C$ is assumed to be ${\cal D}$-convex.) Clearly, any convex function, in the standard sense, defined over the whole plane satisfies this condition. \end{definition} \begin{definition} Let $C \subseteq \Re^2$.
The set ${\mathop{\cal{CH}}}^{\cal D}(C)$, called the {\em functional ${\cal D}$-convex hull of $C$}, is defined as \[ {\mathop{\cal{CH}}}^{\cal D}(C) = \brc{ x\in \Re^2 \sep{ f(x) \leq \sup_{y\in C}f(y) \text{ for all ${\cal D}$-convex } f:\Re^2 \rightarrow \Re}}. \] A set $C$ is {\em functionally ${\cal D}$-convex} if $C={\mathop{\cal{CH}}}^{{\cal D}}(C)$. \end{definition} \begin{definition} Let ${\cal D}$ be a set of directions. A pair of vectors $v_1,v_2 \in {\cal D}$, is a {\em ${\D\text{-pair}}$}, if $v_2$ is counterclockwise from $v_1$, and there is no vector in ${\cal D}$ between $v_1$ and $v_2$. Let $\DPAIRS{{\cal D}}$ denote the set of all ${\D\text{-pair}}$s. Let $\mathop{pspan}(u_1,u_2)$ denote the portion of the plane that can be represented as a {\em positive} linear combination of $u_1, u_2 \in {\cal D}$. Thus $\mathop{pspan}( u_1,u_2)$ is the {\em open} wedge bounded by the rays emanating from the origin in directions $u_1, u_2$. We define $(v_1,v_2)_L = \mathop{pspan}( -v_1, v_2)$ and $(v_1,v_2)_R = \mathop{pspan}( v_1, -v_2)$: these are two of the four quadrants of the plane induced by the lines containing $v_1$ and $v_2$. Similarly, for $v \in {\cal D}$ we denote by $\HL{v}$ and $\HR{v}$ the two open half-planes defined by the line passing through $v$. Let \[ {\cal{Q}}({\cal D}) = \brc{ \HL{v}, \HR{v} \sep{ v \in {\cal D}}} \cup \brc{ (v_1,v_2)_R, (v_1,v_2)_L \sep{ (v_1, v_2) \in \DPAIRS{{\cal D}} }}. \] \end{definition} \begin{definition} For a set $S \subseteq \Re^2$ we denote by $T(S)$ the set of translations of $S$ in the plane, that is $T(S) = \brc{ S + p \sep{ p \in \Re^2 }}$. Given a set of directions ${\cal D}$, let ${\cal T}({\cal D}) = \bigcup_{Q \in {\cal{Q}}({\cal D})} T(Q)$. \end{definition} For ${\cal D}_\Re$, the set ${\cal T}({\cal D}_\Re)$ is the set of all open half-planes.
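For a finite set of directions, the ${\D\text{-pair}}$s are simply consecutive vectors in angular order. The following sketch is not from the paper; it assumes the directions are given as angles in $[0,2\pi)$, with the symmetry $v \in {\cal D} \Rightarrow -v \in {\cal D}$ already enforced by the caller, and also reports the largest angular gap between consecutive directions (the density $\alpha({\cal D})$ defined below):

```python
import math

def d_pairs(angles):
    """Return the D-pairs (consecutive directions in CCW order, as
    angle pairs) and the largest angular gap between consecutive
    directions.  `angles` is assumed symmetric: t and t+pi both occur."""
    a = sorted(set(t % (2 * math.pi) for t in angles))
    pairs, largest = [], 0.0
    for i, x in enumerate(a):
        y = a[(i + 1) % len(a)]          # wraps around at 2*pi
        gap = (y - x) % (2 * math.pi)
        if len(a) == 1:                   # degenerate: a single direction
            gap = 2 * math.pi
        pairs.append((x, y))
        largest = max(largest, gap)
    return pairs, largest
```

For the rectilinear case ${\cal D}_{xy}$, this reports four ${\D\text{-pair}}$s, each spanning an angle of $\pi/2$.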
The standard convex hull of a planar point set $S$ can be defined as follows: start from the whole plane, and remove from it all the open half-planes $H^+$ such that $H^+ \cap S = \emptyset$. We extend this definition to handle ${\cal D}$-convexity for an arbitrary set of directions ${\cal D}$, as follows: \[ \DCH{{\cal D}}(S) = \Re^2 \setminus \pth{ \bigcup_{I \in {\cal T}({\cal D}), I \cap S = \emptyset} I }; \] that is, we remove from the plane all the translations of quadrants and halfplanes in ${\cal{Q}}({\cal D})$ that do not contain a point of $S$. See Figures \ref{fig:dch:example}, \ref{fig:dch:example:ext}. \begin{figure} \centerline{ \Ipe{figs/set-of-dir.ipe} } \caption{(a) A set of directions ${\cal D}$, (b) the set of quadrants ${\cal{Q}}({\cal D})$ induced by ${\cal D}$, and (c) the $\DCH{{\cal D}}$ of three points.} \label{fig:dch:example} \end{figure} \begin{figure} \centerline{ \Ipe{figs/set-of-dir-discon.ipe} } \caption{(a) A set of directions ${\cal D}$, such that $\alpha({\cal D}) > \pi/2$, (b) the set of quadrants ${\cal{Q}}({\cal D})$ induced by ${\cal D}$, and (c) the $\DCH{{\cal D}}$ of a set of points which is not connected.} \label{fig:dch:example:ext} \end{figure} For the case ${\cal D}_{xy} = \brc{ (0,1), (1,0), (0,-1), (-1,0) }$, Matou{\v s}ek{} and Plech{\' a}{\v c}{} \cite{mp-ofsch-97} showed that if $\DCH{{\cal D}_{xy}}(S)$ is connected, then ${\mathop{\cal{CH}}}^{{\cal D}_{xy}}(S) = \DCH{{\cal D}_{xy}}(S)$. \begin{definition} For a set of directions ${\cal D}$, we define the {\em density} of ${\cal D}$ to be \[ \alpha({\cal D}) = \max_{(v_1,v_2) \in \DPAIRS{{\cal D}}} \alpha(v_1,v_2), \] where $\alpha(v_1,v_2)$ denotes the counterclockwise angle from $v_1$ to $v_2$. \end{definition} See Figure \ref{fig:dch:example:ext}, for an example of a set of directions with density larger than $\pi/2$. \begin{corollary} Let ${\cal D}$ be a set of directions in the plane. 
Then: \begin{itemize} \item The set $\DCH{{\cal D}}(A)$ is ${\cal D}$-convex, for any $A \subseteq \Re^2$. \item For any $A \subseteq B \subseteq \Re^2$, one has $\DCH{{\cal D}}(A) \subseteq \DCH{{\cal D}}(B)$. \item For two sets of directions ${\cal D}_1 \subseteq {\cal D}_2$ we have $\DCH{{\cal D}_1}(S) \subseteq \DCH{{\cal D}_2}(S)$, for any $S \subseteq \Re^2$. \item Let $S$ be a bounded set in the plane, and let ${\cal D}_1 \subseteq {\cal D}_2 \subseteq {\cal D}_3 \cdots$ be a sequence of sets of directions, such that $\lim_{i\rightarrow \infty} \alpha({\cal D}_i) = 0$. Then, $\mathop{int}{{\mathop{\mathrm{CH}}}(S)} \subseteq \lim_{i \rightarrow \infty} \DCH{{\cal D}_i}(S) \subseteq {\mathop{\mathrm{CH}}}(S)$. \end{itemize} \end{corollary} \begin{lemma} Let ${\cal D}$ be a set of directions, and let $S$ be a finite set of points in the plane. Then $C = \DCH{{\cal D}}(S)$ is a polygonal set whose complexity is $O(|S \cap {\partial}{C}|)$. \label{lemma:on:boundary} \end{lemma} \begin{proof} It is easy to show that $C$ is polygonal. We charge each vertex of $C$ to some point of $S' = S \cap{\partial}{C}$. Let $C'$ be a connected component of $C$. If $C'$ is a single point, then this is a point of $S'$. Otherwise, let $e$ be an edge of $C'$, and let $I$ be a set in ${\cal T}({\cal D})$ such that $e \subseteq {\partial}{I}$, and $I \cap S = \emptyset$. Since $e$ is an edge of $C'$, there is no $q \in \Re^2$ such that $e \subseteq q + I$, and $(q+I)\cap S = \emptyset$. This implies that there must be a point $p$ of $S$ on ${\partial}{I} \cap l_e$, where $l_e$ is the line passing through $e$.
However, $C$ is a ${\cal D}$-convex set, and the direction of $e$ belongs to ${\cal D}$. It follows that $l_e$ intersects $C$ along a connected set (i.e., the segment $e$), and $p \in l_e \cap C = e$. We charge the edge $e$ to $p$. We claim that a point $p$ of $S'$ can be charged at most 4 times. Indeed, for each edge $e'$ of $C$ incident to $p$, there is a supporting set in ${\cal T}({\cal D})$, such that $p$ and $e'$ lie on its boundary. Only two of those sets can have angle less than $\pi/2$ at $p$ (because such a set corresponds to a ${\D\text{-pair}}$ $(v_1,v_2)$ with $\alpha(v_1,v_2) > \pi/2$). Thus, a point of $S'$ is charged at most $\max( 2\pi/(\pi/2), \pi/(\pi/2) + 2) = 4$ times. \end{proof} \begin{lemma} Let ${\cal D}$ be a set of directions, and let $K$ be a bounded convex body in the plane, such that the expected area of $\DCH{{\cal D}}(N)$ of a set $N$ of $n$ points, chosen uniformly and independently from $K$, is at least $\pth{1-f(n)}Area(K)$, where $1 \geq f(n) \geq 0$, for $n \geq 1$. Then, the expected number of vertices of $C = \DCH{{\cal D}}(N)$ is $O(n f(n/2))$. \label{lemma:area:to:vertices:ext} \end{lemma} \begin{proof} By Lemma \ref{lemma:on:boundary}, the complexity of $C$ is proportional to the number of points of $N$ on the boundary of $C$. Using this observation, it is easy to verify that the proof of Lemma \ref{lemma:area:to:vertices} can be extended to this case. \end{proof} We would like to apply the proof of Theorem \ref{theorem:area} to bound the expected complexity of a random ${\cal D}$-convex hull inside a disk. Unfortunately, if we try to concentrate only on three consecutive sectors (as in Figure \ref{fig:slice}) it might be that there is a quadrant $I$ of ${\cal T}({\cal D})$ that intersects the middle sector from the side (i.e., through the two adjacent sectors). This, of course, cannot happen when working with the regular convexity.
Thus, we first would like to decompose the unit disk into ``safe'' regions, where we can apply a similar analysis as in the regular case, and ``unsafe'' areas. To do so, we will first show that, with high probability, the $\DCH{{\cal D}}$ of a random point set inside a disk contains a ``large'' disk in its interior. Next, we argue that this implies that the random $\DCH{{\cal D}}$ covers almost the whole disk, and the desired bound will readily follow from Lemma \ref{lemma:area:to:vertices:ext}. \begin{definition} For $r \geq 0$, let $B_r$ denote the disk of radius $r$ centered at the origin. \end{definition} \begin{lemma} Let ${\cal D}$ be a set of directions, such that $0 \leq \alpha({\cal D}) \leq \pi / 2$. Let $N$ be a set of $n$ points chosen uniformly and independently from the unit disk. Then, with probability at least $1-n^{-10}$ the set $\DCH{{\cal D}}(N)$ contains $B_r$ in its interior, where $r = 1 - c \sqrt{\log{n}/n}$, for an appropriate constant $c$. \label{lemma:big:disk} \end{lemma} \begin{proof} Let $r'=1 - c \sqrt{(\log{n})/n}$, where $c$ is a constant to be specified shortly. Let $q$ be any point of $B_{r'}$. We bound the probability that $q$ lies outside $C = \DCH{{\cal D}}(N)$ as follows: Draw $8$ rays around $q$, such that the angle between any two consecutive rays is $\pi/4$. This partitions $q + B_{r''}$, where $r'' = c \sqrt{(\log{n})/n}$, into eight portions $R_1, \ldots, R_8$, each having area $\pi c^2 \log{n}/(8n)$. Moreover, $R_i \subseteq q + B_{r''} \subseteq B_1$, for $i=1, \ldots, 8$. The probability that a point of $N$ lies outside $R_i$ is $1 - c^2 \log{n}/(8n)$. Thus, the probability that all the points of $N$ lie outside $R_i$ is \[ P \pbrc{ N \cap R_i = \emptyset } \leq \pth{ 1 - \frac{ c^2 \log{n}}{8n}}^n \leq e^{-(c^2\log{n})/8} = n^{-c^2/8}, \] since $1-x \leq e^{-x}$, for $x \geq 0$. Thus, the probability that one of the $R_i$'s does not contain a point of $N$ is bounded by $8 n^{-c^2/8}$.
We claim that if $R_i \cap N \ne \emptyset$, for every $i=1,\ldots, 8$, then $q \in C$. Indeed, if $q \notin C$ then there exists a set $Q \in {\cal{Q}}({\cal D})$, such that $(q + Q) \cap N = \emptyset$. Since $\alpha({\cal D}) \leq \pi/2$ there exists an $i$, $1 \leq i \leq 8$, such that $R_i \subseteq q + Q$; see Figure \ref{fig:quadrants}. This is a contradiction, since $R_i \cap N \ne \emptyset$. Thus, the probability that $q$ lies outside $C$ is $\leq 8n^{-c^2/8}$. \begin{figure} \centerline{ \Ipe{figs/quadrants.ipe} } \caption{Since $\alpha({\cal D}) \leq \pi/2$, any quadrant $Q \in {\cal{Q}}({\cal D})$, when translated by $q$, must contain one of the $R_i$'s.} \label{fig:quadrants} \end{figure} Let $N'$ denote a set of $n^{10}$ points spread uniformly on the boundary of $B_{r'}$. By the above analysis, all the points of $N'$ lie inside $C$ with probability at least $1-8n^{10-c^2/8}$. Furthermore, arguing as above, we conclude that $B_{r} \subseteq \DCH{{\cal D}}(N')$, where $r = 1 - 2 c \sqrt{(\log{n})/n}$. Hence, with probability at least $1 - 8n^{10-c^2/8}$, $\DCH{{\cal D}}(N)$ contains $B_r$. The lemma now follows by setting $c=20$, say. \end{proof} Since the set of directions may contain large gaps, there are points in $B_1 \setminus B_r$ that are ``unsafe'', in the following sense: \begin{definition} Let ${\cal D}$ be a set of directions with $0 \leq \alpha({\cal D}) \leq \pi / 2$, and let $0 \leq r \leq 1$ be a prescribed constant. A point $p$ in $B_1$ is {\em safe}, relative to $B_r$, if $op \subseteq \DCH{{\cal D}}(B_r \cup \brc{p})$. \end{definition} See Figure \ref{fig:unsafe} for an example of how the unsafe area looks. The behavior of the $\DCH{{\cal D}}$ inside the unsafe areas is somewhat unpredictable. Fortunately, those areas are relatively small.
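The argument above (if $q \notin \DCH{{\cal D}}(N)$ then some translated quadrant of ${\cal{Q}}({\cal D})$ containing $q$ avoids $N$) doubles as a membership test for finite direction sets. The following sketch is not from the paper; it only checks translates whose apex or boundary sits at the query point, which suffices for these open cones and half-planes, and it ignores boundary degeneracies (points exactly on the boundary of the hull may be classified either way):

```python
import math

def cross(u, v):
    return u[0] * v[1] - u[1] * v[0]

def in_pspan(w, u1, u2, eps=1e-12):
    """Is w a strictly positive combination of u1 and u2?"""
    d = cross(u1, u2)
    if abs(d) < eps:
        return False              # u1, u2 (nearly) parallel; skip wedge
    return cross(w, u2) / d > eps and cross(u1, w) / d > eps

def quadrants(dir_angles):
    """Enumerate Q(D): two open half-planes per direction, plus the two
    wedges (v1,v2)_R and (v1,v2)_L per pair of consecutive directions.
    dir_angles is assumed symmetric (v in D implies -v in D)."""
    a = sorted(set(t % (2 * math.pi) for t in dir_angles))
    vecs = [(math.cos(t), math.sin(t)) for t in a]
    Q = []
    for v in vecs:
        Q.append(('half', v, 1.0))    # {x : cross(v, x) > 0}
        Q.append(('half', v, -1.0))   # {x : cross(v, x) < 0}
    for i in range(len(vecs)):
        v1, v2 = vecs[i], vecs[(i + 1) % len(vecs)]
        Q.append(('wedge', v1, (-v2[0], -v2[1])))   # (v1,v2)_R
        Q.append(('wedge', (-v1[0], -v1[1]), v2))   # (v1,v2)_L
    return Q

def in_dch(q, pts, dir_angles):
    """q lies outside DCH_D(pts) iff some quadrant of Q(D), translated
    so that its apex/boundary sits at q, contains no point of pts."""
    for kind, u, v in quadrants(dir_angles):
        hit = False
        for p in pts:
            w = (p[0] - q[0], p[1] - q[1])
            if kind == 'half':
                inside = v * cross(u, w) > 1e-12
            else:
                inside = in_pspan(w, u, v)
            if inside:
                hit = True
                break
        if not hit:
            return False
    return True
```

For ${\cal D}_{xy}$ and the four corners of the unit square, interior points of the square are reported inside and points well outside it are reported outside, matching the rectilinear hull.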
\begin{figure} \centerline{ \Ipe{figs/unsafe1.ipe} } \caption{The dark areas are the unsafe areas for a consecutive pair of directions $v_1, v_2 \in {\cal D}$.} \label{fig:unsafe} \end{figure} \begin{lemma} Let ${\cal D}$ be a set of directions, such that $0 \leq \alpha({\cal D}) \leq \pi / 2$, and let $r= 1 - O \pth{ \sqrt{(\log{n})/n}}$. The unsafe area in $B_1$, relative to $B_r$, can be covered by a union of $O(1)$ caps. Furthermore, the length of the base of such a cap is $O(((\log{n})/n)^{1/4})$, and its height is $O\pth{\sqrt{(\log{n})/n}}$. \label{lamma:bad:bad:bad:caps} \end{lemma} \begin{proof} Let $p$ be an unsafe point of $B_1$. Let $\overrightarrow{v_1},\overrightarrow{v_2}$ be the consecutive pair of vectors in ${\cal D}$, such that the vector $\overrightarrow{po}$ lies between them. If $\mathop{ray}(p,\overrightarrow{v_1}) \cap B_r \ne \emptyset$, and $\mathop{ray}(p,\overrightarrow{v_2}) \cap B_r \ne \emptyset$ then $po \subseteq {\mathop{\mathrm{CH}}} \pth{ \brc{ p, o, p_1, p_2 } } \subseteq \DCH{{\cal D}}(B_r \cup \brc{p})$, for any pair of points $p_1 \in B_r \cap \mathop{ray}(p,\overrightarrow{v_1}), p_2 \in B_r \cap \mathop{ray}(p,\overrightarrow{v_2})$. Thus, $p$ is unsafe only if one of those two rays misses $B_r$. Since $p$ is close to $B_r$, the angle between the two tangents to $B_r$ emanating from $p$ is close to $\pi$. This implies that the angle between $\overrightarrow{v_1}$ and $\overrightarrow{v_2}$ is at least $\pi/4$ (provided $n$ is at least some sufficiently large constant), and the number of such pairs is at most $8$. The area in the plane that sees $o$ in a direction between $\overrightarrow{v_1}$ and $\overrightarrow{v_2}$ is a quadrant $Q$ of the plane. The portion of $Q$ which is safe is a parallelogram $T$. Thus, the unsafe area in $B_1$ that is induced by the pair $\overrightarrow{v_1}$ and $\overrightarrow{v_2}$ is $(B_1 \cap Q) \setminus T$.
Since $\alpha({\cal D}) \leq \pi/2$, this set can be covered with two caps of $B_1$ with their base lying on the boundary of $B_r$. See Figure \ref{fig:unsafe}. The height of such a cap is $1-r = O\pth{\sqrt{\frac{\log{n}}{n(\pi - \alpha)}}}$, and the length of the base of such a cap is $2\sqrt{ 1- r^2 } = O\pth{ \pth{\frac{\log{n}}{n(\pi-\alpha)}}^{1/4} }$. \end{proof} The proof of Lemma \ref{lamma:bad:bad:bad:caps} is where our assumption that $\alpha({\cal D}) \leq \pi/2$ plays a critical role. Indeed, if $\alpha({\cal D}) > \pi/2$, then the unsafe areas in $B_1 \setminus B_r$ become much larger, as indicated by the proof. \begin{theorem} Let ${\cal D}$ be a set of directions, such that $0 \leq \alpha({\cal D}) \leq \pi / 2$. The expected number of vertices of $\DCH{{\cal D}}(N)$, where $N$ is a set of $n$ points, chosen uniformly and independently from the unit disk, is $O\pth{n^{1/3} + \sqrt{n\alpha({\cal D})}}$. \label{theorem:area:x} \end{theorem} \begin{proof} We claim that the expected area of $\DCH{{\cal D}}(N)$ is at least $\pi - O \pth{ n^{-2/3} + \sqrt{\alpha/n} }$, where $\alpha = \alpha({\cal D})$. The theorem will then follow from Lemma \ref{lemma:area:to:vertices:ext}. Indeed, let $m$ be an integer to be specified later, and assume, without loss of generality, that $m$ divides $n$. Partition $B_1$ into $m$ congruent sectors, ${\cal S}_1, \ldots, {\cal S}_m$. Let $B^1, \ldots, B^{\mu}$ denote the $\mu = n/m$ disks centered at the origin, such that (i) $B^{1} = B_1$, and (ii) $Area(B^{i-1}) - Area(B^{i}) = \pi/\mu$, for $i=2, \ldots, \mu$. Let $r_i$ denote the radius of $B^i$, for $i=1, \ldots, \mu$. Note\footnote{ $Area(B^1) - Area(B^2) = \pi(1-r_2^2) = \pi/\mu$, thus $r_2^2 = 1 - 1/\mu$. We have $r_2 \leq 1 -1/(2\mu)$, and $r_1 - r_2 \geq 1 - (1-1/(2\mu)) = 1/(2\mu)$.}, that $r_{i} - r_{i+1} \geq r_{i-1} - r_{i}\geq 1/(2\mu)$, for $i=2, \ldots, \mu-1$.
Let $r= 1 - O\pth{\sqrt{(\log{n})/n}}$, and let $U$ be the set of sectors that either intersect an unsafe area of $B_1$ relative to $B_r$, or whose neighboring sectors intersect the unsafe area of $B_1$. By Lemma \ref{lamma:bad:bad:bad:caps}, the number of sectors in $U$ is $O(1) \cdot O \pth{\frac{ ((\log{n})/n)^{1/4}} {(2\pi/m)}} = O(m((\log{n})/n)^{1/4})$. Let $S_{i,j} = (B^{i} \setminus B^{i+1}) \cap {\cal S}_j$, and $S_{\mu,j}=B^{\mu} \cap {\cal S}_j$, for $i=1, \ldots, \mu-1$, and $j=1, \ldots, m$. The set $S_{i,j}$ is called the $i$-th {\em tile} of the sector ${\cal S}_j$, and its area is $\pi/n$, for $i=1,\ldots,\mu$, and $j=1, \ldots,m$. Let $X_j$ denote the first index $i$ such that $N \cap S_{i, j} \ne \emptyset$, for $j=1, \ldots, m$. The probability that $X_j = k$ is upper-bounded by the probability that the tiles $S_{1, j}, \ldots, S_{(k-1),j}$ do not contain any point of $N$; namely, by $\pth{1-\frac{k-1}{n}}^{n}$. Thus, $P[X_j = k] \leq \pth{1-\frac{k-1}{n}}^{n} \leq e^{-(k-1)}$. Thus, \[ E \pbrc{ X_j } = \sum_{k=1}^{\mu} k P[X_j = k ] \leq \sum_{k=1}^{\mu} k e^{-(k-1)} = O(1), \] for $j=1, \ldots, m$. Let $C$ denote the set $\DCH{{\cal D}}(N \cup B_r)$. The tile $S_{i,j}$ is {\em exposed} by a set $K$, if $S_{i,j} \setminus K \ne \emptyset$. We claim that the expected number of tiles exposed by $C$ in a sector ${\cal S}_j \notin U$ is at most $X_{j-1} + X_{j+1} + O(\mu/m^2 + \alpha\mu/m)$, for $j=1,\ldots, m$ (where we put $X_0 = X_m$, $X_{m+1} = X_1$). Indeed, let $w=\max(X_{j-1},X_{j+1})$, and let $p,q$ be the two points in $S_{w,j-1}, S_{w,j+1}$, respectively, such that the number of tiles exposed by the triangle $T = \triangle{opq}$, in the sector ${\cal S}_j$, is maximal. Both $p$ and $q$ lie on ${\partial}{B^{w+1}}$ and on the external radii bounding ${\cal S}_{j-1}$ and ${\cal S}_{j+1}$, as shown in Figure \ref{fig:slice}. Let $s$ denote the segment connecting the midpoint $\rho$ of the base of $T$ to its closest point on ${\partial}{B^w}$.
The number of tiles in ${\cal S}_j$ exposed by $T$ is bounded by $w$, plus the number of tiles intersecting the segment $s$. The length of $s$ is \[ |oq| - |oq| \cos \pth{ \frac{3}{2} \cdot \frac{2\pi}{m}} \leq 1 - \cos \pth{ \frac{3\pi}{m}} \leq \frac{1}{2} \pth{ \frac{3\pi}{m}}^2 = \frac{4.5\pi^2}{m^2}, \] since $\cos{x} \geq 1-x^2/2$, for $x \geq 0$. On the other hand, the segment $s$ intersects at most $\ceil{||s||/(1/(2\mu))} = O(\mu/m^2)$ tiles, and we have that the number of tiles exposed in the sector ${\cal S}_j$ by $T$ is at most $w + O(\mu/m^2)$, for $j=1, \ldots, m$. Since ${\cal S}_j \notin U$, the points $p,q$ are safe, and $op, oq \subseteq C$. This implies that the only additional tiles that might be exposed in ${\cal S}_j$ by $C$, are exposed by the portion of the boundary of $C$ between $p$ and $q$ that lies inside $T$. Let $V$ be the circular cap consisting of the points in $T$ lying between $pq$ and a circular arc $\gamma \subseteq T$, connecting $p$ to $q$, such that for any point $p' \in \gamma$ one has $\angle{pp'q} = \pi - \alpha$. See Figure \ref{fig:circ:arc}. \begin{figure} \centerline{ \Ipe{figs/attack.ipe} } \caption{The portion of $T$ that can be removed by a quadrant $Q$ of ${\cal T}({\cal D})$, is covered by the darkly-shaded circular cap, such that any point on its bounding arc creates an angle $\pi-\alpha$ with $p$ and $q$.} \label{fig:circ:arc} \end{figure} Let $Q \in {\cal T}({\cal D})$ be any quadrant of the plane induced by ${\cal D}$, such that $Q \cap N = \emptyset$ (i.e., $C\cap Q = \emptyset$), and $Q \cap T \ne \emptyset$. Then, $Q \cap op = \emptyset, Q \cap oq =\emptyset$, since $p$ and $q$ are safe. Moreover, the angle of $Q$ is at least $\pi - \alpha$, which implies that $Q \cap T \subseteq V$. See Figure \ref{fig:circ:arc}. Let $s'$ be the segment $o\rho \cap V$, where $\rho$ is, as above, the midpoint of $pq$.
The length of $s'$ is \[ |s'| \leq \sin \pth{ \frac{3}{2} \cdot \frac{2\pi}{m}} \tan{ \frac{\alpha}{2} } \leq \frac{3\pi}{m} \frac{\sqrt{2}\alpha}{2} \leq \frac{3\pi\alpha}{m}, \] since $\sin{x} \leq x$, for $x \geq 0$, and $1/\sqrt{2} \leq \cos{(\alpha/2)}$ (because $0 \leq \alpha \leq \pi/2$). Thus, the expected number of tiles exposed by $C$, in a sector ${\cal S}_j \notin U$, is bounded by \[ X_{j-1} + X_{j+1} + O \pth{ \frac{\mu}{m^2}} + O \pth{ \frac{3\pi\alpha/m}{1/(2\mu)} } = X_{j-1} + X_{j+1} + O \pth{ \frac{\mu}{m^2}} + O \pth{ \frac{\alpha \mu}{m}}. \] Thus, the expected number of tiles exposed by $C$, in sectors that do not belong to $U$, is at most \[ E \pbrc{ \sum_{j=1}^{m} \pth{ X_{j-1} + X_{j+1} + O \pth{ \frac{\mu}{m^2}} + O \pth{ \frac{\alpha \mu}{m}} } } = O \pth{ m + \frac{\mu}{m} + \alpha \mu}. \] Adding all the tiles that lie outside $B_r$ in the sectors that belong to $U$, it follows that the expected number of tiles exposed by $C$ is at most \begin{eqnarray*} O&&\hspace{-0.75cm} \pth{ m + \frac{\mu}{m} + \alpha \mu + |U|\cdot \frac{1-r}{1/(2\mu)} }= O \pth{ m + \frac{\mu}{m} + \alpha \mu + m \pth{\frac{\log{n}}{n}}^{1/4} \cdot \mu \sqrt{\pth{\frac{\log{n}}{n}}}} \\ &=& O \pth{ m + \frac{n}{m^2} + \frac{\alpha n}{m} + n\pth{\frac{\log{n}}{n}}^{3/4} } = O \pth{ m + \frac{n}{m^2} + \frac{\alpha n}{m} + n^{1/4}\log^{3/4}{n} }. \end{eqnarray*} Setting $m=\max{\pth{n^{1/3}, \sqrt{n\alpha}}}$, we conclude that the expected number of tiles exposed by $C$ is $O\pth{n^{1/3} + \sqrt{n\alpha}}$. The area of $C' =\DCH{{\cal D}}(N)$ is bounded from below by the area of the tiles which are not exposed by $C'$. The probability that $C' \ne C$ (namely, that the disk $B_r$ is not inside $C'$) is at most $n^{-10}$, by Lemma \ref{lemma:big:disk}. Hence the expected area of $C'$ is at least \[ E[ Area( C ) ] - Prob \pbrc{ C \ne C' } \pi = \pi - O\pth{ n^{1/3} + \sqrt{n\alpha}}\frac{\pi}{n} - n^{-10}\pi = \pi -O \pth{n^{-2/3} + \sqrt{\frac{\alpha}{n}} \;}.
\] The assertion of the theorem now follows from Lemma \ref{lemma:area:to:vertices:ext}. \end{proof} The expected complexity of the $\DCH{{\cal D}_{xy}}$ of $n$ points, chosen uniformly and independently from the unit square, is $O(\log{n})$ (Lemma \ref{lemma:vertices:square}). Unfortunately, this is a degenerate case for a set of directions with $\alpha({\cal D}) = \pi/2$, as the following corollary testifies: \begin{corollary} Let ${\cal D}_{xy}'$ be the set of directions resulting from rotating ${\cal D}_{xy}$ by 45 degrees. Let $N$ be a set of $n$ points, chosen independently and uniformly from the unit square ${S'}$. The expected complexity of $\DCH{{\cal D}_{xy}'}(N)$ is $\Omega \pth{\sqrt{n}}$. \end{corollary} \begin{proof} Without loss of generality, assume that $n=m^2$ for some integer $m$. Tile ${S'}$ with $n$ translated copies of a square of area $1/n$. Let ${\cal S}_1, \ldots, {\cal S}_m$ denote the squares in the top row of this tiling, from left to right. Let $A_j$ denote the event that ${\cal S}_j$ contains a point of $N$, and neither of the two adjacent squares ${\cal S}_{j-1}, {\cal S}_{j+1}$ contains a point of $N$, for $j=2, \ldots, m-1$. We have \[ Prob\pbrc{A_j} = Prob \pbrc{ {\cal S}_{j+1} \cap N = \emptyset \text{ and }{\cal S}_{j-1} \cap N = \emptyset } - Prob \pbrc{ ({\cal S}_{j-1} \cup {\cal S}_j \cup {\cal S}_{j+1}) \cap N = \emptyset }, \] for $j=2, \ldots, m-1$. Hence, \[ \lim_{n\rightarrow \infty} Prob\pbrc{A_j} = \lim_{n\rightarrow \infty } \pth{ \pth{1 - \frac{2}{n}}^{n} - \pth{1 - \frac{3}{n}}^{n} } = e^{-2} - e^{-3} \approx 0.0855. \] \begin{figure} \centerline{ \Ipe{figs/squares.ipe} } \caption{If $A_j$ happens, then the squares ${\cal S}_{j-1}, {\cal S}_{j+1}$ do not contain a point of $N$. Thus, if $q$ is the highest point in ${\cal S}_j$, then $q+Q_{top}$ can not contain a point of $N$, and $q$ is a vertex of $\DCH{{\cal D}_{xy}'}(N)$.} \label{fig:squares} \end{figure} This implies that, for $n$ large enough, $Prob \pbrc{A_j} \geq 0.01$.
Thus, if $Y$ denotes the number of events $A_j$, for $j=2,\ldots, m-1$, that occur, then the expected value of $Y$ is $\Omega(m) = \Omega \pth{ \sqrt{n}}$. However, if $A_j$ occurs, then $C =\DCH{{\cal D}_{xy}'}(N)$ must have a vertex in ${\cal S}_j$. Indeed, let $Q_{top}$ denote the quadrant of ${\cal{Q}}({\cal D}_{xy}')$ that contains the positive $y$-axis. If we translate $Q_{top}$ to the highest point in ${\cal S}_j \cap N$, then it does not contain a point of $N$ in its interior, implying that this point is a vertex of $C$, see Figure \ref{fig:squares}. This implies that the expected complexity of $\DCH{{\cal D}_{xy}'}(N)$ is $\Omega \pth{\sqrt{n}}$. \end{proof} \section[Result]{On the Expected Number of Points on the Boundary of the Quadrant Hull Inside a Hypercube} \label{sec:hcube} In this section, we show that the expected number of points on the boundary of the quadrant hull of a set $S$ of $n$ points, chosen uniformly and independently from the unit cube, is $O(\log^{d-1}n)$. Those bounds are known \cite{bkst-anmsv-78}, but we believe the new proof is simpler. \begin{definition}[\cite{mp-ofsch-97}] Let ${\cal Q}$ be a family of subsets of $\Re^d$. For a set $A \subseteq \Re^d$, we define the ${\cal Q}$-hull of $A$ as \[ \QHull{{\cal Q}}(A) = \bigcap\brc{ Q \in {\cal Q} \sep{ A \subseteq Q }}. \] \end{definition} \begin{definition}[\cite{mp-ofsch-97}] For a sign vector $s \in \brc{-1, +1}^d$, define \[ q_s = \brc{ x \in \Re^d \sep{ \mathop{\mathrm{sign}}(x_i ) = s_i, \text{ for } i=1, \ldots, d }}, \] and for $a \in \Re^d$, let $q_s(a) =q_s + a$. We set ${\cal Q}_{sc} = \brc{ \Re^d \setminus q_s(a) \sep{ a \in \Re^d, s \in \brc{-1,+1}^d}}$. We shall refer to $\QHull{{\cal Q}_{sc}}(A)$ as the {\em quadrant hull} of $A$. These are all the points which cannot be separated from $A$ by any open orthant in space (i.e., quadrant in the plane).
\end{definition} \begin{definition} Given a set of points $S \subseteq \Re^d$, a point $p \in \Re^d$ is {\em ${\cal Q}_{sc}$-exposed}, if there is $s \in \brc{-1,+1}^d$, such that $q_s(p) \cap S = \emptyset$. A set $C$ is {\em ${\cal Q}_{sc}$-exposed}, if there exists a point $p\in C$ which is ${\cal Q}_{sc}$-exposed. \end{definition} \begin{definition} For a set $S \subseteq \Re^d$, let $n_{sc}(S)$ denote the number of points of $S$ on the boundary of $\QHull{{\cal Q}_{sc}}(S)$. \end{definition} \begin{theorem} Let ${\cal C}$ be a unit axis parallel hypercube in $\Re^d$, and let $S$ be a set of $n$ points chosen uniformly and independently from ${\cal C}$. Then, the expected number of points of $S$ on the boundary of $H = \QHull{{\cal Q}_{sc}}(S)$ is $O(\log^{d-1}(n))$. \label{theorem:main} \end{theorem} \begin{proof} We partition ${\cal C}$ into equal size tiles, of volume $1/n^d$; that is $C(i_1, i_2, \ldots, i_d) = [(i_1-1)/n, i_1/n] \times \cdots \times [(i_d - 1)/n, i_d/n]$, for $1 \leq i_1, i_2, \ldots, i_d \leq n$. We claim that the expected number of tiles in our partition of ${\cal C}$ which are exposed by $S$ is $O(n^{d-1} \log^{d-1}n)$. Indeed, let $q = q_{(-1, -1, \ldots, -1)}$ be the ``negative'' quadrant of $\Re^d$. Let $X(i_2, \ldots, i_d)$ be the maximal integer $k$, for which $C(k,i_2, \ldots, i_d)$ is exposed by $q$. The probability that $X(i_2, \ldots, i_d) \geq k$ is bounded by the probability that the cubes $C(l_1, l_2, \ldots, l_d)$ do not contain a point of $S$, where $l_1 < k, l_2< i_2, \ldots, l_d < i_d$. Thus, \begin{eqnarray*} \Pr \pbrc{ X(i_2, \ldots, i_d) \geq k} &\leq& \pth{ 1 - \frac{(k-1)(i_2 -1) \cdots (i_d-1)}{n^d}}^n \\ &\leq& \exp \pth{ -\frac{{(k-1)(i_2 - 1) \cdots (i_d-1)}}{n^{d-1}}}, \end{eqnarray*} since $1-x \leq e^{-x}$, for $x \geq 0$. Hence, $\Pr\pbrc{X(i_2, \ldots, i_d) \geq i\cdot m + 1} \leq e^{-i}$, where\\ $m = \ceil{\frac{n^{d-1}}{(i_2-1) \cdots (i_d-1)}}$.
Thus, \begin{eqnarray*} E \pbrc{ X(i_2, \ldots, i_d) } &=& \sum_{i=1}^{\infty} i \Pr\pbrc{X(i_2, \ldots, i_d) = i } = \sum_{i=0}^{\infty} \sum_{j=im+1}^{(i+1)m} j \Pr\pbrc{X(i_2, \ldots, i_d) = j }\\ &\leq& \sum_{i=0}^{\infty} (i+1)m \Pr \pbrc{ X(i_2, \ldots, i_d) \geq im + 1} \leq \sum_{i=0}^{\infty} (i+1)me^{-i} = O(m). \end{eqnarray*} Let $r$ denote the expected number of tiles exposed by $q$ in ${\cal C}$. If $C(i_1, \ldots, i_d)$ is exposed by $q$, then $X(i_2, \ldots, i_d) \geq i_1$. Thus, one can bound $r$ by the number of tiles on the boundary of ${\cal C}$, plus the sum of the expectations of the variables $X(i_2, \ldots, i_d)$. We have \begin{eqnarray*} r &=& O(n^{d-1}) + \sum_{i_2=2}^{n-1} \sum_{i_3=2}^{n-1} \cdots \sum_{i_d=2}^{n-1} O \pth{ \frac{n^{d-1}}{(i_2 - 1)(i_3 -1 ) \cdots(i_d - 1)}} \\ &=& O \pth{ n^{d-1}} \sum_{i_2=2}^{n-1} \frac{1}{i_2-1}\sum_{i_3=2}^{n-1} \frac{1}{i_3-1} \cdots \sum_{i_d=2}^{n-1} \frac{1}{i_d-1} = O \pth{ n^{d-1} \log^{d-1}{n}}. \end{eqnarray*} The set ${\cal Q}_{sc}$ contains translations of $2^d$ different quadrants. This implies, by symmetry, that the expected number of tiles exposed in ${\cal C}$ by $S$ is $O \pth{ 2^dn^{d-1} \log^{d-1}{n}} = O \pth{n^{d-1} \log^{d-1}{n}}$. However, if a tile is not exposed by any $q_s$, for $s \in \brc{-1,+1}^d$, then it lies in the interior of $H$, implying that the expected volume of $H$ is at least \[ \frac{n^d - O\pth{n^{d-1} \log^{d-1}{n}}}{n^d} = 1 - O\pth{\frac{\log^{d-1} n}{n}}. \] We now apply an argument similar to the one used in Lemma \ref{lemma:area:to:vertices} (Efron's Theorem), and the theorem follows. \end{proof} \remove{ We are now in position to apply an argument similar to the one used in Lemma \ref{lemma:area:to:vertices}: Partition $S$ into two equal size sets $S_1, S_2$.
We have that $n_{sc}(S)$ is bounded by the number of points of $S_1$ falling outside $H_2 = \QHull{{\cal Q}_{sc}}(S_2)$, plus the number of points of $S_2$ falling outside $H_1 = \QHull{{\cal Q}_{sc}}(S_1)$, where $n_{sc}(S)$ is the number of points of $S$ on the boundary of $\QHull{{\cal Q}_{sc}}(S)$. We conclude, \begin{eqnarray*} E[ n_{sc}(S) ] &\leq& E \pbrc{ |S_1 \setminus H_2 | } + E \pbrc{ |S_2 \setminus H_1|} = E\pbrc{ \rule{0cm}{0.5cm} E \pbrc{ |S_1 \setminus H_2| \sep{ S_2 } }} + E \pbrc{ \rule{0cm}{0.5cm} E \pbrc{ |S_2 \setminus H_1| \sep{ S_1 } } }\\ &=& E\pbrc{ \frac{n}{2}\pth{1 - \mathop{\mathrm{Vol}}(H_2)} + \frac{n}{2}\pth{1 - \mathop{\mathrm{Vol}}(H_1)}} = O \pth{ n \cdot \frac{\log^{d-1} n}{n}} = O( \log^{d-1} n), \end{eqnarray*} and the theorem follows. } \begin{remark} A point $p$ of $S$ is {\em maximal}, if there is no point $p'$ in $S$, such that $p_i \leq p'_i$, for $i=1,\ldots, d$. Clearly, a maximal point is also on the boundary of $\QHull{{\cal Q}_{sc}}(S)$. By Theorem \ref{theorem:main}, the expected number of maxima in a set of $n$ points chosen independently and uniformly from the unit hypercube in $\Re^d$ is $O(\log^{d-1} n)$. This was also proved in \cite{bkst-anmsv-78}, but we believe that our new proof is simpler. Also, as noted in \cite{bkst-anmsv-78}, a vertex of the convex hull of $S$ is a point of $S$ lying on the boundary of $\QHull{{\cal Q}_{sc}}(S)$. Hence, the expected number of vertices of the convex hull of a set of $n$ points chosen uniformly and independently from a hypercube in $\Re^d$ is $O(\log^{d-1} n)$. \end{remark} \subsection*{Acknowledgments} I wish to thank my thesis advisor, Micha Sharir, for his help in preparing this manuscript. I also wish to thank Pankaj Agarwal and Imre B{\'a}r{\'a}ny for helpful discussions concerning this and related problems. \bibliographystyle{salpha}
https://arxiv.org/abs/1507.02015
Lower Bounds by Birkhoff Interpolation
In this paper we give lower bounds for the representation of real univariate polynomials as sums of powers of degree 1 polynomials. We present two families of polynomials of degree $d$ such that the number of powers that are required in such a representation must be at least of order $d$. This is clearly optimal up to a constant factor. Previous lower bounds for this problem were only of order $\Omega(\sqrt{d})$, and were obtained from arguments based on Wronskian determinants and ``shifted derivatives.'' We obtain this improvement thanks to a new lower bound method based on Birkhoff interpolation (also known as ``lacunary polynomial interpolation'').
\section{Introduction} In this paper we obtain lower bounds for the representation of a univariate polynomial $f \in \rr[X]$ of degree $d$ under the form: \begin{equation} \label{model} f(x)=\sum_{i=1}^l \beta_i (x+y_i)^{e_i} \end{equation} where the $y_i$ are real constants and the exponents $e_i$ nonnegative integers. We give two families of polynomials such that the number $l$ of terms required in such a representation must be at least of order $d$. This is clearly optimal up to a constant factor. Previous lower bounds for this problem~\cite{KKPS} were only of order $\Omega(\sqrt{d})$. The polynomials in our first family are of the form $H_1(x)=\sum_{i=1}^k \alpha_i (x+x_i)^d$ with all $\alpha_i$ nonzero and the $x_i$'s distinct. We show that they require at least $k$ terms ($l \geq k$) whenever $k \leq (d+2)/4$. In particular, for $k=(d+2)/4$ we obtain $l=k=(d+2)/4$ as a lower bound. The polynomials in our second family are of the form $H_2(x)=(x+1)^{d+1}-x^{d+1}$ and we show that they require more than $(d-1)/2$ terms. This improves the lower bound for $H_1$ by a factor of 2, but this second lower bound applies only when the exponents $e_i$ are required to be bounded by $d$ (obviously, if larger exponents are allowed we only need two terms to represent $H_2$). It is easily shown that every polynomial of degree $d$ can be represented with $\lceil(d+1)/2 \rceil$ terms. This implies that of all polynomials of degree $d$, $H_2$ is essentially (up to a small additive constant) the hardest one. Our lower bound results are specific to polynomials with real coefficients. It would be interesting to obtain similar lower bounds for other fields, e.g., finite fields or the field of complex numbers. As an intermediate step toward our lower bound theorems, we obtain a result on the linear independence of polynomials which may be of independent interest.
\begin{theorem} \label{independence} Let $f_1,\ldots,f_k \in \rr[X]$ be $k$ distinct polynomials of the form $f_i(x)=(x+a_i)^{e_i}$. Let us denote by $n_j$ the number of polynomials of degree less than $j$ in this family. If $n_1 \leq 1$ and $n_j+n_{j-1} \leq j$ for all $j$, the family $(f_i)$ is linearly independent. \end{theorem} We will see later (in Section~\ref{lb_section}, Remark~\ref{opt}) that this theorem is optimal up to a small additive constant when $d$ is even, and exactly optimal when $d$ is odd. \subsection*{Motivation and connection to previous work} Lower bounds for the representation of univariate polynomials as sums of powers of {\em low degree} polynomials were recently obtained in~\cite{KKPS}. We continue this line of work by focusing on powers of {\em degree one} polynomials. This problem is still challenging because the exponents $e_i$ may be different from $d=\deg(f)$, and may be possibly larger than $d$. The lower bounds obtained in~\cite{KKPS} are of order $\Omega(\sqrt{d})$. We obtain $\Omega(d)$ lower bounds with a new method based on polynomial interpolation (more on this below). The work in~\cite{KKPS} and in the present paper is motivated by recent progress in arithmetic circuit complexity. It was shown that strong enough lower bounds for circuits of depth four~\cite{AgraVinay08,Koi12,Tavenas} or even depth three~\cite{GuptaKKS13,Tavenas} would yield a separation of Valiant's~\cite{Valiant79} algebraic complexity classes VP and VNP. Moreover, lower bounds for such circuits were obtained thanks to the introduction by Neeraj Kayal of the method of {\em shifted partial derivatives}, see e.g.~\cite{Kayal12,fournier2014lower,GKKS13,kayal2014exponential,kayal2014super,kumar2014power,kumar2014limits}. Some of these lower bounds seem to come close to separating VP from VNP, but there is evidence that the method of shifted derivatives by itself will not be sufficient to achieve this goal. 
It is therefore desirable to develop new lower bound methods. We view the models studied in~\cite{KKPS} and in the present paper as ``test beds'' for the development of such methods in a fairly simple setting. We note also that (as explained above) strong lower bounds in slightly more general models would imply a separation of VP from VNP. Indeed, if the affine functions $x+y_i$ in~(\ref{model}) are replaced by multivariate affine functions we obtain the model of ``depth 3 powering arithmetic circuits''. In general depth 3 arithmetic circuits, instead of powers of affine functions we have products of (possibly distinct) affine functions. We note that the depth reduction result of~\cite{GuptaKKS13} yields circuits where the number of factors in such products can be much larger than the degree of the polynomial represented by the circuit. It is therefore quite natural to allow exponents $e_i > d$ in~(\ref{model}). Likewise, the model studied in~\cite{KKPS} is close to depth 4 arithmetic circuits, see~\cite{KKPS} for details. \subsection*{Birkhoff interpolation} As mentioned above, our results are based on polynomial interpolation and more precisely on Birkhoff interpolation (also known as ``lacunary interpolation''). The most basic form of polynomial interpolation is Lagrange interpolation. In a typical Lagrange interpolation problem, one may have to find a polynomial $g$ of degree at most 2 satisfying the 3 constraints $g(-1)=1$, $g(0)=4$, $g(1)=3$. At a slightly higher level of generality we find Hermite interpolation, where at each point we must interpolate not only values of $g$ but also the values of its first few derivatives. As an example, we may have to find a polynomial $g$ of degree 3 satisfying the 4 constraints $g(0)=1$, $g(1)=0, g'(1)=-1, g''(1)=2$. Birkhoff interpolation is even more general as there may be ``holes'' in the sequence of derivatives to be interpolated at each point. An example of such a problem is: $g(0)=0$, $g'(1)=0$, $g(2)=g''(2)=0$.
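As a computational aside (ours, not part of the paper), deciding whether such a homogeneous Birkhoff problem has a nonzero solution of degree at most $d$ amounts to computing the rank of the linear system in the coefficients of $g$. A minimal Python sketch, with all helper names ours, checks the example above in ${\rr}_3[X]$:

```python
import numpy as np
from math import factorial

def constraint_row(d, k, x):
    # row of the linear system expressing g^(k)(x) = 0,
    # where g(t) = a_0 + a_1 t + ... + a_d t^d
    row = np.zeros(d + 1)
    for j in range(k, d + 1):
        row[j] = factorial(j) // factorial(j - k) * x ** (j - k)
    return row

d = 3
# the Birkhoff problem g(0) = 0, g'(1) = 0, g(2) = g''(2) = 0
A = np.array([constraint_row(d, 0, 0.0),
              constraint_row(d, 1, 1.0),
              constraint_row(d, 0, 2.0),
              constraint_row(d, 2, 2.0)])
nullity = (d + 1) - np.linalg.matrix_rank(A)
print(nullity)  # 0
```

Here the nullity is 0: only the zero polynomial of degree at most 3 satisfies all four constraints, so this particular system admits only the trivial solution.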
We have set the right hand side of all constraints to 0 because the interpolation problems that we need to handle in this paper all turn out to be of that form (in general, one may naturally allow nonzero values). Our interest is in the existence of a nonzero polynomial of degree at most $d$ satisfying the constraints, and more generally in the dimension of the solution space. In fact, we need to know whether it has the dimension that one would expect by naively counting the number of constraints. Contrary to Lagrange or Hermite interpolation in one variable, where the existence of a nonzero solution can be easily decided by comparing the number of constraints to $d+1$ (the number of coefficients of~$g$), this is a nontrivial problem and a rich theory was developed to address it~\cite{lorentz84}. Results of the real (as opposed to complex) theory of Birkhoff interpolation turn out to be very well suited to our lower bound problems. This is the reason why we work with real polynomials in this paper. \subsection*{The Waring problem} Any homogeneous (multivariate) polynomial $f$ can be written as a sum of powers of linear forms. In the Waring problem for polynomials one attempts to determine the smallest possible number of powers in such a representation. This number is called the {\em Waring rank} of $f$. Obtaining lower bounds from results in polynomial interpolation seems to be a new method in complexity theory, but it may not come as a surprise to experts on the Waring problem. Indeed, a major result in this area, the Alexander-Hirschowitz theorem (\cite{alexander95}, see~\cite{brambilla08} for a survey), is usually stated as a result on (multivariate, Hermite) polynomial interpolation. Classical work on the Waring problem was focused on the Waring rank of {\em generic polynomials}, and this question was completely answered by Alexander and Hirschowitz.
The focus on generic polynomials is in sharp contrast with complexity theory, where a main goal is to prove lower bounds on the complexity of {\em explicit} polynomials (or of explicit Boolean functions in Boolean complexity). A few recent papers~\cite{landsberg2010,carlini2012} have begun to investigate the Waring rank of specific (or explicit, in computer science parlance) polynomials such as monomials, sums of coprime monomials, the permanent and the determinant. We expect that more connections between lower bounds in algebraic complexity, polynomial interpolation and the Waring problem will be uncovered in the future. \subsection*{Organization of the paper} In Section~\ref{translation} we begin a study of the linear independence of polynomials of the form $(x+y_i)^{e_i}$. We show that this problem can be translated into a problem of Birkhoff interpolation, and in fact we show that Birkhoff interpolation and linear independence are dual problems. In Section~\ref{matrices} we present the notions and results on Birkhoff interpolation that are needed for this paper, and we use them to prove Theorem~\ref{independence}. We build on this result to prove our lower bound results in Section~\ref{lb_section}, and we discuss their optimality. The lower bound problem studied in this paper is over the field of real numbers. In Section~\ref{fields} we briefly discuss the situation in other fields and in particular the field of complex numbers. Finally, we give an illustration of our methods in the appendix by completely working out a small example. \section{From linear independence to polynomial interpolation} \label{translation} There is a clear connection between lower bounds for representations of polynomials under form~(\ref{model}) and linear independence. 
Indeed, proving a lower bound for a polynomial $f$ amounts to showing that $f$ is linearly independent from $(x+y_1)^{e_1},\ldots,(x+y_l)^{e_l}$ for some $l$ and for any sequence of $l$ pairs $(y_1,e_1),\ldots,(y_l,e_l)$. Moreover, if the ``hard polynomial'' $f$ is itself presented as a sum of powers of degree 1 polynomials (which is the case in this paper), we can obtain a lower bound for $f$ from linear independence results for such powers. This motivates the following study. Let us denote by $\rd[X]$ the linear subspace of $\rr[X]$ made of polynomials of degree at most $d$, and by $g^{(k)}$ the $k$-th order derivative of a polynomial $g$. \begin{proposition} \label{dual} Let $f_1,\ldots,f_k \in \rd[X]$ be $k$ distinct polynomials of the form $f_i(x)=(x+a_i)^{e_i}$. The family $(f_i)_{1 \leq i \leq k}$ is linearly independent if and only if $$\dim \{g \in \rd[X];\ g^{(d-e_i)}(a_i) =0 \text{ \rm for all } i\} = d+1-k.$$ \end{proposition} Let $V$ be the subspace of $\rd[X]$ spanned by the $f_i$. The orthogonal $V^{\perp}$ of $V$ is the space of linear forms $\phi \in \rd[X]^*$ such that $\langle \phi,f \rangle = 0$ for all $f \in V$. We will use the fact that $\dim V^{\perp} = d+1 - \dim V$. We will identify $\rd[X]$ with its dual $\rd[X]^*$ via the symmetric bilinear form $$\langle g,f \rangle = \sum_{k=0}^d \frac{f_k g_{d-k}}{{d \choose k}}.$$ This is reminiscent of Weyl's unitarily invariant inner product (see e.g. chapter 16 of~\cite{burgisser2013} for a recent exposition) but we provide here a self-contained treatment. Proposition~\ref{dual} follows immediately from the next lemma: \begin{lemma} The orthogonal $f_i^{\perp}$ of $f_i$ is equal to $ \{g \in \rd[X];\ g^{(d-e_i)}(a_i) =0\}$. \end{lemma} \begin{proof} We begin with the case $e_i=d$. We need to show that for a polynomial $f(x)=(x+a)^d$, $\langle g,f \rangle =0$ iff $g(a)=0$.
This follows from the definition of $\langle g,f \rangle$ since by expanding $(x+a)^d$ in powers of $x$ we have \begin{equation} \label{lagrange} \langle g,(x+a)^d \rangle = \sum_{k=0}^d g_{d-k}a^{d-k}=g(a). \end{equation} Consider now the general case $f(x)=(x+a)^{d-k}$ where $k \geq 0$. We will show that \begin{equation} \label{birkhoff} g^{(k)}(a)= \frac{d!}{(d-k)!} \langle g,f \rangle, \end{equation} thereby completing the proof of the lemma. In order to obtain~(\ref{birkhoff}) from~(\ref{lagrange}) we introduce a new variable $\epsilon$ and expand in two different ways $\langle g,(x+a+\epsilon)^d \rangle$ in powers of $\epsilon$. From~(\ref{lagrange}) we have \begin{equation} \label{expansion1} \langle g,(x+a+\epsilon)^d \rangle = g(a+\epsilon) = \sum_{k=0}^d \frac{g^{(k)}(a)}{k!} \epsilon^k. \end{equation} On the other hand, since $(x+a+\epsilon)^d = \sum_{k=0}^d {d \choose k}\epsilon^k (x+a)^{d-k}$ we have from bilinearity \begin{equation} \label{expansion2} \langle g,(x+a+\epsilon)^d \rangle = \sum_{k=0}^d {d \choose k} \langle g,(x+a)^{d-k} \rangle \epsilon^k. \end{equation} Comparing~(\ref{expansion1}) and~(\ref{expansion2}) shows that $\frac{g^{(k)}(a)}{k!} = {d \choose k} \langle g,(x+a)^{d-k} \rangle$. \end{proof} Since $\rd[X]$ has dimension $d+1$ we must have $k \leq d+1$ for the $f_i$ to be linearly independent. More generally, let $n_j$ be the number of $f_i$'s which are of degree less than $j$. Again, for the $f_i$ to be linearly independent we must have $n_j \leq j$ for all $j=1,\ldots,d+1$. The polynomial identity $(x+1)^2-(x-1)^2-4x=0$ shows that the converse is not true, but Theorem~\ref{independence} from the introduction shows that a weak converse holds true. We will use Proposition~\ref{dual} to prove this theorem at the end of the next section. 
\section{Interpolation matrices} \label{matrices} In Birkhoff interpolation we look for a polynomial $g \in \rd[X]$ satisfying a system of linear equations of the form \begin{equation} \label{birkint} g^{(k)}(x_i)=c_{i,k}. \end{equation} The system may be lacunary, i.e., we may not have an equation in the system for every value of $i$ and $k$. We set $e_{i,k}=1$ if such an equation appears, and $e_{i,k}=0$ otherwise. We arrange this combinatorial data in an {\em interpolation matrix} $E=(e_{i,k})_{1 \leq i \leq m, 0 \leq k \leq n}$. We assume that the {\em knots} $x_1,\ldots,x_m$ are distinct. It is usually assumed~\cite{lorentz84} that $|E|=\sum_{i,k} e_{i,k}$, the number of 1's in $E$, is equal to $d+1$ (the number of coefficients of $g$). Here we will only assume that $|E| \leq d+1$. We can also assume that $n \leq d$ since $g^{(k)}=0$ for $k>d$. In the sequel we will in fact assume that $n=d$: this condition can be enforced by adding empty columns to $E$ if necessary. Let $X=\{x_1,\ldots,x_m\}$ be the set of knots. When $|E|=d+1$, the pair $(E,X)$ is said to be {\em regular} if~(\ref{birkint}) has a unique solution for any choice of the $c_{i,k}$. Finding necessary or sufficient conditions for regularity has been a major topic in Birkhoff interpolation~\cite{lorentz84}. For $|E| \leq d+1$, we may expect~(\ref{birkint}) to have a set of solutions of dimension $d+1-|E|$. We therefore extend the definition of regularity to this case as follows. \begin{definition} The pair $(E,X)$ is regular if for any choice of the $c_{i,k}$ the set of solutions of~(\ref{birkint}) is an affine subspace of dimension $d+1-|E|$. \end{definition} Note in particular that the set of solutions is nonempty since $|E| \leq d+1$. Basic linear algebra provides a link between regularity for different values of $|E|$. \begin{proposition} \label{subset} Let $E$ be an $m \times (d+1)$ interpolation matrix. 
For an interpolation matrix $F$ of the same format, we write $F \subseteq E$ if $e_{i,k}=0$ implies $f_{i,k}=0$ (i.e., the set of 1's of $F$ is included in the set of 1's of $E$). If the pair $(E,X)$ is regular and $F \subseteq E$ then $(F,X)$ is regular as well. \begin{proof} Consider the interpolation problem: $$g^{(k)}(x_i)=c_{i,k} \text{ for } f_{i,k}=1.$$ The set of solutions ${\cal F} \subseteq \rd[X]$ is an affine subspace which is either empty or of dimension at least $d+1-|F|$. It cannot be empty since by adding $|E|-|F|$ constraints we can obtain an interpolation problem with a solution space of dimension $d+1-|E| \geq 0$. For the same reason, it cannot be of dimension $d+2-|F|$ or more. In this case, by adding $|E|-|F|$ constraints we would obtain an interpolation problem with a solution space of dimension at least $(d+2-|F|)-(|E|-|F|) = d+2-|E|$. This is impossible since $(E,X)$ is regular. \end{proof} Another somewhat more succinct way of phrasing the above proof is to consider the matrix of the linear system defining the affine subset~$\cal F$. Anticipating on Section~\ref{lb_section}, let us denote this matrix by $A(E,X)$. The pair $(E,X)$ is regular iff $A(E,X)$ is of full row rank. The rows of $A(F,X)$ are also rows of $A(E,X)$, so $A(F,X)$ must be of full row rank if $A(E,X)$ is. For an interpolation matrix, the notions of {\em regularity} and {\em order regularity} are classically defined~\cite{lorentz84} in the case $|E| = d+1$, but the extension to the general case $|E| \leq d+1$ is straightforward: \begin{definition} The interpolation matrix $E$ is regular if $(E,X)$ is regular for any choice of $m$ knots $x_1,\ldots,x_m$. It is order regular if $(E,X)$ is regular for any choice of $m$ ordered knots $x_1 < x_2 < \cdots < x_m$. \end{definition} As an immediate corollary of Proposition~\ref{subset} we have: \begin{corollary} \label{subsetcor} Let $E,F$ be two interpolation matrices with $F \subseteq E$.
If $E$ is regular (respectively, order regular) then $F$ is also regular (respectively, order regular). \end{corollary} We will give in Theorem~\ref{order} a sufficient condition for order regularity, but we first need some additional definitions. Say that an interpolation matrix $E$ satisfies the {\em upper P\'olya condition} if for $r=1,\ldots,d+1$ there are at most~$r$ 1's in the last $r$ columns of $E$. If $|E|=d+1$ this is equivalent to the {\em P\'olya condition:} there are at least $r$ 1's in the first $r$ columns of $E$ for $r=1,\ldots,d+1$. Consider a row of an interpolation matrix $E$. By {\em sequence} we mean a maximal sequence of consecutive 1's in this row. A sequence containing an odd number of 1's is naturally called an {\em odd sequence}. A sequence of the $i$th row is {\em supported} if there are 1's in $E$ both to the northwest and southwest of the first element of the sequence. More precisely, if $(i,k)$ is the position of the first 1 of the sequence, $E$ should contain 1's in positions $(i_1,k_1)$ and $(i_2,k_2)$ where $i_1 < i < i_2$, $k_1 < k$ and $k_2 < k$. The following important result (Theorem~1.5 in~\cite{lorentz84}) is due to Atkinson and Sharma~\cite{atkinson69}. \begin{theorem} \label{order} Let $E$ be an $m \times (d+1)$ interpolation matrix with $|E|=d+1$. If $E$ satisfies the P\'olya condition and contains no odd supported sequence then $E$ is order regular. \end{theorem} As an example, the interpolation problem corresponding to the polynomial identity $(x+1)^2-(x-1)^2-4x=0$ is: $$g(-1)=0,\ g'(0)=0,\ g(1)=0$$ where $g \in {\rr}_2[X]$. It admits $g(x)=x^2-1$ as a nontrivial solution. The corresponding interpolation matrix $$\left(\begin{array}{ccc} 1 & 0 & 0\\ 0 & 1 & 0\\ 1 & 0 & 0 \end{array}\right)$$ satisfies the P\'olya condition but contains an odd supported sequence in its second row. \begin{corollary} \label{ordercor} Let $E$ be an $m \times (d+1)$ interpolation matrix with $|E|=d+1$. 
If $E$ satisfies the P\'olya condition, then: \begin{itemize} \item[(i)] if every odd sequence of $E$ belongs to the first row, to the last row, or begins in the first column then $E$ is order regular. \item[(ii)] if every odd sequence of $E$ begins in the first column then $E$ is regular. \end{itemize} \end{corollary} \begin{proof} Part (i) follows from the fact that a sequence which belongs to the first row, to the last row, or begins in the first column cannot be supported. For part (ii), assume that every odd sequence of $E$ begins in the first column and fix $m$ distinct nodes $x_1,\ldots,x_m$. By reordering the $x_i$'s we obtain an increasing sequence $x'_1 < x'_2 < \cdots < x'_m$. Applying the same permutation on the rows of $E$, we obtain an interpolation matrix $E'$; clearly, the pair $(E,X)$ is regular if and only if $(E',X')$ is. The latter pair is regular because $X'$ is ordered and (by part (i)) $E'$ is order regular. \end{proof} We can now prove the main result of this section. \begin{theorem} \label{regular} Let $F$ be an $m \times (d+1)$ interpolation matrix. We denote by $N_r$ the number of 1's in the last $r$ columns of $F$. If $N_1 \leq 1$ and $N_r + N_{r-1} \leq r$ for $r=2,\ldots, d+1$ then $F$ is regular. \end{theorem} Note that the conditions on $N_r$ are a strengthening of the upper P\'olya condition $N_r \leq r$. \begin{proof} We will add 1's to $F$ so as to obtain a matrix $E$ satisfying the hypothesis of Corollary~\ref{ordercor}, part~(ii). Corollary~\ref{subsetcor} will then imply that $F$ is regular. In order to construct $E$ we proceed as follows. First, for every odd sequence of $F$ which does not begin in the first column we add a 1 in the cell immediately to the left of its first 1. All odd sequences of the resulting matrix $F'$ begin in the first column. 
Moreover, we have added at most $N_{r-1}$ 1's in the last $r$ columns of $F$ (note that we can add exactly $N_{r-1}$ 1's when the last $r-1$ columns contain $N_{r-1}$ sequences of length 1). Since $N_1 \leq 1$ and $N_r + N_{r-1} \leq r$, $F'$ satisfies the upper P\'olya condition. If $|F'|=d+1$ we set $E=F'$. This matrix satisfies the P\'olya condition and its odd sequences all begin in the first column, so we can indeed apply Corollary~\ref{ordercor} to get that $E$ is regular. Since $F \subseteq E$, by Corollary~\ref{subsetcor} we conclude that $F$ is also regular. If $|F'| < d+1$ we need to add more 1's. It suffices to add $d + 1 - |F'|$ new rows to $F'$ with a $1$ in the first column and $0$'s everywhere else. Denoting by $E$ the resulting matrix we clearly have that $E$ satisfies the P\'olya condition, $|E| = d+1$ and its odd sequences begin in the first column, so Corollary~\ref{ordercor} and Corollary~\ref{subsetcor} apply here to conclude that $F$ is regular. Note that $E$ and $F$ do not have the same format since $E$ has more rows, but we can apply Corollary~\ref{subsetcor} if we first expand $F$ with $d+1-|F'|$ empty rows. \end{proof} \subsection*{Proof of Theorem~\ref{independence}} At this point we have enough knowledge of Birkhoff interpolation to prove Theorem~\ref{independence}. In view of Proposition~\ref{dual} we need to show that the interpolation problem $$g^{(d-e_i)}(a_i) =0 \text{ \rm for } i=1,\ldots,k$$ has a solution space of dimension $d+1-k$. Let $F$ be the corresponding interpolation matrix. This matrix contains $k$ 1's and is of size $m \times (d+1)$ for some $m \leq k$ (we have $m=k$ only when the $a_i$'s are all distinct). The hypothesis on the $n_j$'s implies that $F$ satisfies the hypothesis of Theorem~\ref{regular}, and the result follows from the regularity of $F$. \section{Lower bounds} \label{lb_section} System~(\ref{birkint}) is a linear system of equations in the coefficients of $g$. 
Following~\cite{lorentz84}, to set up this system it is convenient to work in the basis $(x^j/j!)_{0 \leq j \leq d}$ instead of the standard basis $(x^j)_{0 \leq j \leq d}$. We denote by $A(E,X)$ the matrix of the system in that basis, where as in the previous section $E$ denotes the corresponding interpolation matrix and $X$ the set of knots. As already pointed out after Proposition~\ref{subset}, the pair $(E,X)$ is regular if and only if $A(E,X)$ is of rank $|E|$. In our chosen basis, an interpolation constraint of the form~(\ref{birkint}) reads: $$\sum_{j=0}^d \frac{x_i^{j-k}}{(j-k)!} g_j = c_{i,k}$$ where the coefficients $g_0,\ldots,g_d$ are the unknowns and we choose as in~\cite{lorentz84} to interpret $1/r!$ as 0 for $r<0$. \begin{proposition} \label{split} Consider a pair $(E,X)$ where $E$ is an interpolation matrix of size $m \times (d+1)$ and $X$ a set of $m$ knots. Let $E_1$ be the matrix formed of the first $r+1$ columns of $E$ and $E_2$ the matrix formed of the remaining $d-r$ columns. Suppose that $E_1$ contains at most $r+1$ 1's and $E_2$ at most $d-r$ 1's. If both pairs $(E_1,X)$, $(E_2,X)$ are regular then $(E,X)$ is regular. \end{proposition} \begin{proof} The case where $|E_1|=r+1$ and $|E_2|=d-r$ is treated in Theorem~1.4 of~\cite{lorentz84}. Their argument extends to the general case. Indeed, as shown in~\cite{lorentz84} the rank of $A(E,X)$ is at least equal to the sum of the ranks of $A(E_1,X)$ and $A(E_2,X)$. For the reader's convenience, we recall from~\cite{lorentz84} that this inequality is due to the fact that $A(E,X)$ can be transformed by a permutation of rows into a matrix of the form\footnote{We give an example in the appendix.} $$\left(\begin{array}{cc} A(E_1,X) & * \\ 0 & A(E_2,X) \end{array}\right).$$ The two matrices $A(E_1,X)$, $A(E_2,X)$ are respectively of rank $|E_1|$ and $|E_2|$ since the corresponding pairs are assumed to be regular. Thus, $A(E,X)$ is of rank at least $|E|=|E_1|+|E_2|$. 
This matrix must in fact be of rank exactly $|E|$ since it has $|E|$ rows, and we conclude that the pair $(E,X)$ is regular. \end{proof} \begin{lemma} \label{slope} For any finite sequence $(u_i)_{0 \leq i \leq n}$ of real numbers with $n \geq 1$ there is an index $s \in \{0,\ldots,n-1\}$ such that $(u_{s+t}- u_s)/t \leq (u_n - u_0)/n$ for every $t=1,\ldots,n-s$. \end{lemma} \begin{proof} Let $\alpha = \min_{0 \leq i \leq n-1} (u_n - u_i) / (n-i)$ and let $s$ be an index where the minimum is achieved. We have $(u_n - u_s) / (n-s) = \alpha \leq (u_n - u_0)/n$, which settles the case $t=n-s$. For every $t=1,\ldots,n-s-1$ we also have $(u_n -u_s) / (n-s) \leq (u_n - u_{s+t})/(n-s-t)$ by the choice of $s$, which implies $(u_{s+t}-u_s)/t \leq (u_n - u_s)/(n-s) \leq (u_n - u_0) /n$. \end{proof} Our lower bound results are easily derived from the following theorem. \begin{theorem} \label{lbtool} Consider a polynomial identity of the form: \begin{equation} \label{identity} \sum_{i=1}^k \alpha_i (x+x_i)^d = \sum_{i=1}^l \beta_i (x+y_i)^{e_i} \end{equation} where the $x_i$ are distinct real constants, the constants $\alpha_i$ are not all zero, the $\beta_i$ and $y_i$ are arbitrary real constants, and $e_i < d$ for every $i$. Then we must have $k+l > (d+2)/2$. \end{theorem} \begin{proof} We assume without loss of generality that the $l$ polynomials $(x+y_i)^{e_i}$ are linearly independent. Indeed, the right-hand side of~(\ref{identity}) could otherwise be rewritten as a linear combination of an independent subfamily, and this would only decrease $l$. Let us also assume that $k+l \leq (d+2)/2$. Then we will show that the $k+l$ polynomials $(x+x_i)^d$, $(x+y_i)^{e_i}$ must be linearly independent. This is clearly in contradiction with~(\ref{identity}). In view of Proposition~\ref{dual}, to show that our $k+l$ polynomials are linearly independent we need to show that the corresponding interpolation problem has a solution space of dimension $d+1-k-l$. 
Let $E$ be the corresponding interpolation matrix and $X$ the set of knots: $|X|=m$ where $m$ is the number of distinct points in $x_1,\ldots,x_k,y_1,\ldots,y_l$; moreover, $E$ is a matrix of size $m \times (d+1)$ which contains $k+l$ 1's. We need to show that the pair $(E,X)$ is regular. Let $N_t$ be the number of 1's in the last $t$ columns of $E$. We must have $N_1 \leq 1$, or else the independent family $(x+y_i)^{e_i}$ would contain more than one constant polynomial. We can now complete the proof of the theorem in the special case where $E$ satisfies the conditions $N_t+N_{t-1} \leq t$ for every $t=2,\ldots,d+1$. Indeed, in this case $E$ is regular by Theorem~\ref{regular} (remember that this is how we proved Theorem~\ref{independence}, our main linear independence result). For the general case, the idea of the proof is to: \begin{itemize} \item[(i)] Split $E$ vertically into two matrices $E_1,E_2$. \item[(ii)] Apply the same argument (Theorem~\ref{regular}) to $E_1$. \item[(iii)] Obtain the regularity of the pair $(E_2,X)$ from the linear independence of the $(x+y_i)^{e_i}$. \item[(iv)] Conclude from Proposition~\ref{split} that the pair $(E,X)$ is regular. \end{itemize} We now explain how to carry out these four steps. For the first one, note that $N_{d+1}=|E|=k+l \leq (d+2)/2$. Let us apply Lemma~\ref{slope} to the sequence $(N_i)_{0 \leq i \leq d+1}$ beginning with $N_0=0$. The lemma shows the existence of an index $s \in \{0,\ldots,d\}$ such that $$\frac{N_{s+t} - N_s}{t} \leq \frac{N_{d+1}}{d+1} \leq \frac{d+2}{2(d+1)} = \frac{1}{2} + \frac{1}{2(d+1)} \leq \frac{1}{2} + \frac{1}{2t}$$ for every $t=1,\ldots,d+1-s$. Let $E_1$ be the matrix formed of the first $r+1$ columns of $E$, where $r=d-s$. The number of 1's in the last $t$ columns of $E_1$ is $N'_t = N_{s+t} - N_s \leq (t+1)/2$. In particular, $N'_1 \leq 1$ and, since $N'_t, N'_{t-1}$ are integers, we get that $N'_t + N'_{t-1} \leq \lfloor (2t + 1)/2 \rfloor = t$ for all $t \in \{2,\ldots,r+1\}$. 
This matrix therefore satisfies the hypotheses of Theorem~\ref{regular}, and we conclude that $E_1$ is regular. This completes step~(ii). For step~(iii), we note that since $E_2$ has $s<d+1$ columns the Birkhoff interpolation problem corresponding to the polynomials $(x+y_i)^{e_i}$ with $e_i \leq s-1$ admits $(E_2,X)$ as its pair. Since these polynomials are assumed to be linearly independent, $E_2$ must contain at most $s$ 1's and $(E_2,X)$ must indeed be a regular pair. Finally, we conclude from Proposition~\ref{split} that $(E,X)$ is regular as well. \end{proof} \begin{theorem}[First lower bound] \label{lb1} Consider a polynomial of the form \begin{equation} \label{hardpoly1} H_1(x)=\sum_{i=1}^k \alpha_i (x+x_i)^d \end{equation} where the $x_i$ are distinct real constants, the $\alpha_i$ are nonzero real constants, and $k \leq (d+2)/4$. If $H_1$ is written under the form \begin{equation} \label{easy1} H_1(x)=\sum_{i=1}^l \beta_i (x+y_i)^{e_i} \end{equation} with $e_i \leq d$ for every $i$ then we must have $l \geq k$. \end{theorem} \begin{proof} Assume first that $e_i < d$ for all $i$. By Theorem~\ref{lbtool} we must have $k+l > (d+2)/2$, so $l > (d+2)/2 - k \geq k$. Consider now the general case and assume that $l < k$. We reduce to the previous case by moving to the side of~(\ref{hardpoly1}) the $k'$ polynomials in~(\ref{easy1}) of degree $e_i=d$. On the other side there remains a sum of $l-k'$ terms of degree less than $d$, while on the side of~(\ref{hardpoly1}) we have a sum of terms of degree $d$. Taking possible cancellations into account, the number of such terms is at least $k-k'>0$, and at most $k+k'$. We must therefore have $(k+k')+(l-k') > (d+2)/2$, so $l > (d+2)/2 -k \geq k$ after all. \end{proof} In other words, writing $H_1$ under form~(\ref{hardpoly1}) is exactly optimal when $k \leq (d+2)/4$. We can give another lower bound of order $d$ (with an improved constant) for a polynomial of a different form. 
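Before turning to this second bound, Theorem~\ref{lb1} can be sanity-checked numerically in a smallest nontrivial case. The sketch below (Python, purely illustrative; exact arithmetic via the standard \texttt{fractions} module) takes $d=6$, $k=2$ and $H_1(x)=x^6+(x+1)^6$. A single-term representation $\beta(x+y)^e$ with $e \leq 6$ is forced by the two leading coefficients to have $e=6$, $\beta=2$ and $y=1/2$; the script checks that this forced candidate fails on the lower coefficients, so $l \geq 2 = k$ as the theorem asserts:

```python
from fractions import Fraction
from math import comb

def power_coeffs(y, e, d):
    """Coefficients of (x+y)**e as a degree-d list, constant term first."""
    return [comb(e, e - i) * Fraction(y) ** (e - i) if i <= e else Fraction(0)
            for i in range(d + 1)]

d = 6
# H_1(x) = x^6 + (x+1)^6: a sum of k = 2 <= (d+2)/4 shifted d-th powers.
h1 = [a + b for a, b in zip(power_coeffs(0, d, d), power_coeffs(1, d, d))]

# The unique single-term candidate: e = 6 and beta = 2 (leading coefficient),
# then y = 1/2 (coefficient of x^5).  Its remaining coefficients do not match:
beta, y = Fraction(2), Fraction(1, 2)
single = [beta * c for c in power_coeffs(y, d, d)]
assert single != h1  # no single power beta*(x+y)^e represents H_1, so l >= 2
```

Here the mismatch already occurs at the coefficient of $x^4$: it is $15$ in $H_1$ but $15/2$ in $2(x+1/2)^6$.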
\begin{theorem}[Second lower bound] \label{lb2} Let $H_2 \in \rd[X]$ be the polynomial $H_2(x)=(x+1)^{d+1}-x^{d+1}$. If $H_2$ is written under the form $$H_2(x)=\sum_{i=1}^l \beta_i (x+y_i)^{e_i}$$ with $e_i \leq d$ for every $i$ then we must have $l > (d-1)/2$. \end{theorem} \begin{proof} This follows directly from Theorem~\ref{lbtool} after replacing $d$ by $d+1$ in~(\ref{identity}). Since $k=2$ we must have $2+l>(d+3)/2$, i.e., $l > (d-1)/2$. \end{proof} This result shows that allowing exponents $e_i >d$ can drastically decrease the ``complexity'' of a polynomial since $H_2$ can be expressed as the difference of only two $(d+1)$-st powers. Such savings cannot be obtained for all polynomials. Indeed, the next result, which subsumes Theorem~\ref{lb1}, shows that no improvement is possible for $H_1$ even if arbitrarily large powers are allowed. \begin{theorem}[Third lower bound] Consider a polynomial of the form \begin{equation} H_1(x)=\sum_{i=1}^k \alpha_i (x+x_i)^d \end{equation} where the $x_i$ are distinct real constants, the $\alpha_i$ are nonzero real constants, and $k \leq (d+2)/4$. If $H_1$ is written under the form \begin{equation} \label{easy3} H_1(x)=\sum_{i=1}^l \beta_i (x+y_i)^{e_i} \end{equation} then we must have $l \geq k$. \end{theorem} Note that the exponents $e_i$ may be arbitrarily large. \begin{proof} Let $n$ be the largest exponent $e_i$ which occurs with a coefficient $\beta_i \neq 0$. The case $n \leq d$ is covered by Theorem~\ref{lb1}, so we assume here that $n>d$. In equation~(\ref{easy3}), let us move all the $n$-th powers from the right hand side to the left hand side, and the $k$ degree-$d$ terms of $H_1$ from the left hand side to the right hand side. Applying Theorem~\ref{lbtool} to this identity shows that $k+l>(n+2)/2$. Hence $l > (n+2)/2 - k \geq k$. \end{proof} \begin{remark} \label{opt} The lower bound for $H_2$ in Theorem~\ref{lb2} is essentially optimal. 
More concretely, it is optimal up to one unit when $d$ is even, and exactly optimal when $d$ is odd. Note indeed that by a change of variable, representing $H_2$ is equivalent to representing the polynomial $H_3(x)=(x+1)^{d+1}-(x-1)^{d+1}$. If we expand the two binomials in $H_3$ the monomials of degree $d+1-j$ with even $j$ cancel, and we obtain a sum of $\lceil \frac{d+1}{2} \rceil$ monomials. See Proposition~\ref{dependenceincomplex} for a generalization of this observation. In fact, with the same number of terms we can represent not only $H_2$ but all polynomials of degree $d$: see Proposition~\ref{upper} below. The consideration of $H_3$ also shows that Theorem~\ref{independence} is optimal up to one unit when $d$ is even, and exactly optimal when $d$ is odd. Indeed, we have just observed that there is a linear dependence between the $2+\lceil \frac{d+1}{2} \rceil$ polynomials $(x+1)^{d+1},(x-1)^{d+1},x^d,x^{d-2},x^{d-4},\ldots$. If $d$ is odd, the number of polynomials of degree less than $j$ in this sequence is $n_j=\lfloor j/2 \rfloor$ for $j \leq d+1$; moreover, $n_{d+2}=2+(d+1)/2=(d+5)/2$. Hence $n_j+n_{j+1}=j$ for $j \leq d$; moreover, $n_{d+1}+n_{d+2}=d+3$. If $d$ is even, the number of polynomials of degree less than $j$ in this sequence is $n_j=\lceil j/2 \rceil$ for $j \leq d+1$; moreover, $n_{d+2}=2+(d+2)/2=(d+6)/2$. Hence $n_j+n_{j+1}=j+1$ for $j \leq d$; moreover, $n_{d+1}+n_{d+2}=d+4$. \end{remark} A simple construction shows that all polynomials of degree $d$ can be written as a linear combination of $\lceil(d+1)/2 \rceil$ powers. \begin{proposition} \label{upper} Every polynomial of degree $d$ can be expressed as $\sum_{i = 1}^l \beta_i (x + y_i)^{e_i}$ with $l \leq \lceil(d+1)/2 \rceil$. \end{proposition} \begin{proof} We use induction on $d$. Since the result is obvious for $d = 0,1$ we consider a polynomial $f = \sum_{i = 0}^d a_i x^i$ of degree $d \geq 2$, and we assume that the Proposition holds for polynomials of degree at most $d-2$. 
We observe that $g := f - a_d \left(x + a_{d-1}/(d a_d)\right)^d$ has degree $\leq d-2$. Applying the induction hypothesis to $g$ we get that $g = \sum_{i = 1}^{l'} \beta_i (x + y_i)^{e_i}$, with $l' \leq \lceil(d-1)/2 \rceil$. Hence, setting $l = l'+1$, $\beta_l = a_d$, $y_l = a_{d-1}/(d a_d)$ and $e_l = d$, we conclude that $f = \sum_{i = 1}^l \beta_i (x + y_i)^{e_i}$ and $l \leq 1 + \lceil(d-1)/2 \rceil = \lceil(d+1)/2 \rceil$. \end{proof} Theorem~\ref{lb2} therefore shows that of all polynomials of degree $d$, $H_2$ is essentially (up to a small additive constant) the hardest one. \section{Changing Fields} \label{fields} Some of the proof techniques used in this paper, and even the results themselves, are specific to the field of real numbers. This is due to the fact that certain linear dependence relations which cannot occur over $\rr$ may occur if we change the base field. For instance, over a field of characteristic $p>0$ we have $(X+1)^{p^k}-X^{p^k}-1=0$ for any $k$ (compare with Theorem~\ref{independence} for the real case). The remainder of this section is devoted to a discussion of the complex case. We begin with an identity which generalizes the identity $(x+1)^2-(x-1)^2-4x=0$. \begin{proposition}\label{dependenceincomplex} Take $k \in \mathbb{Z}^+$ and let $\xi$ be a primitive $k$-th root of unity. Then, for all $d \in \mathbb{Z}^+$ and all $\mu \in \mathbb{C}$ the following equality holds: $$\sum_{j = 1}^{k} \xi^j (x + \xi^j \mu)^d = \sum_{i \equiv -1\, ({\rm mod}\ k) \atop 0 \leq i \leq d} k \binom{d}{i} \mu^i x^{d-i}.$$ \end{proposition} \begin{proof} We observe that $$\sum_{j = 1}^{k} \xi^j (x + \xi^j \mu)^d = \sum_{i = 0}^d \binom{d}{i} \mu^i x^{d-i} \left(\sum_{j = 1}^{k} \xi^{ji + j}\right).$$ To deduce the result it suffices to prove that $\sum_{j = 1}^{k} \xi^{ji + j}$ equals $k$ if $i \equiv -1 \ ({\rm mod} \ k)$, or $0$ otherwise. Whenever $i \equiv -1 \ ({\rm mod}\ k)$ we have that $\xi^{ji + j} = (\xi^{i+1})^j = 1$ for all $j \in \{1,\ldots,k\}$. 
For $i \not\equiv -1\ ({\rm mod}\ k)$, the summation of the geometric series shows that $$\sum_{j =1}^{k} \xi^{j(i + 1)} = \xi^{i + 1}\cdot\frac{\xi^{k(i + 1)}-1}{\xi^{i + 1}-1}=0,$$ since $\xi^{k(i+1)}=(\xi^k)^{i+1}=1$ while $\xi^{i+1} \neq 1$. \end{proof} For any $d, k \in \mathbb{Z}^+$, Proposition~\ref{dependenceincomplex} yields an identity of the form \begin{equation} \label{complexid} \sum_{j = 1}^{k} \alpha_j (x + x_j)^d = \sum_{j = 1}^l \beta_j x^{e_j} \end{equation} where the $x_j$ are distinct complex constants, the $\alpha_j, \beta_j$ are nonzero complex numbers, $l=\big\lfloor \frac{d+1}{k}\big\rfloor$ and $e_j < d$ for all $j$. Note the sharp contrast with theorems~\ref{lbtool} and~\ref{lb1}. In particular, Theorem~\ref{lb1} gives an $\Omega(d)$ lower bound for polynomials of the form $\sum_{i = 1}^{k} \alpha_i (x + x_i)^d$ over the field of real numbers (the implied constant in the $\Omega$ notation is equal to 1/4). But in~(\ref{complexid}) we have $kl \leq d+1$, so $k \leq \sqrt{d+1}$ or $l \leq \sqrt{d+1}$. We conclude that no better lower bound than $\Omega(\sqrt{d})$ can possibly hold over $\mathbb{C}$ for the same family of polynomials, at least for arbitrary distinct $x_i$'s and arbitrary nonzero $\alpha_i$. Such an $\Omega(\sqrt{d})$ lower bound was recently obtained for the more general model of sums of powers of bounded-degree polynomials: see Theorem~2 in~\cite{KKPS}. We leave it as an open problem to close this quadratic gap between lower bounds over $\rr$ and $\mathbb{C}$: find an explicit polynomial $f \in \mathbb{C}[X]$ of degree $d$ which requires at least $k=\Omega(d)$ terms to be represented under the form $$f(x)=\sum_{i=1}^k \alpha_i (x+x_i)^{e_i}.$$ With the additional requirements $e_i \leq d$ for all $i$, the ``target polynomial'' $H_2(x)=(x+1)^{d+1}-x^{d+1}$ from Theorem~\ref{lb2} looks like a plausible candidate. {\small \section*{Acknowledgments} P.K. acknowledges useful discussions with members of the Aric research group in the initial stages of this work.
https://arxiv.org/abs/1908.11192
An improvement of Prouhet's 1851 result on multigrade chains
In 1851 Prouhet showed that when $N=j^{k+1}$ where $j$ and $k$ are positive integers, $j \geq 2$, the first $N$ consecutive positive integers can be separated into $j$ sets, each set containing $j^k$ integers, such that the sum of the $r$-th powers of the members of each set is the same for $r=1,\,2,\,\ldots,\,k$. In this paper we show that even when $N$ has the much smaller value $2j^k$, the first $N$ consecutive positive integers can be separated into $j$ sets, each set containing $2j^{k-1}$ integers, such that the integers of each set have equal sums of $r$-th powers for $r=1,\,2,\,\ldots,\,k$. Moreover, we show that this can be done in at least $\{(j-1)!\}^{k-1}$ ways. We also show that there are infinitely many other positive integers $N=js$ such that the first $N$ consecutive positive integers can similarly be separated into $j$ sets of integers, each set containing $s$ integers, with equal sums of $r$-th powers for $r=1,\,2,\,\ldots,\,k$, with the value of $k$ depending on the integer $N$.
\section{Introduction} If there exist integers $a_{uv},\;u=1,\,2,\,\ldots,\,s,\;v=1,\,2,\,\ldots,\,j$ ($j$ and $s$ being positive integers $ \geq 2$), such that the relations \begin{equation} \sum_{u=1}^sa_{u1}^r=\sum_{u=1}^sa_{u2}^r=\cdots =\sum_{u=1}^sa_{uj}^r, \label{basicchn} \end{equation} are satisfied when $r=1,\,2,\,\dots,\,k$, we write, \begin{equation} a_{11},\,a_{21},\ldots,\,a_{s1} \stackrel{k}{=} a_{12},\,a_{22},\ldots,\,a_{s2} \\ \stackrel{k}{=} \ldots \stackrel{k}{=} a_{1j},\,a_{2j},\ldots,\,a_{sj}. \label{basicchnnot1} \end{equation} A solution of \eqref{basicchn} is said to be nontrivial if the $j$ sets $\{a_{uv},\,u=1,\,2,\,\ldots,\,s\}$, $v=1,\,2,\,\ldots,\,j$, are distinct. The least value of $s$ for which there exists a nontrivial solution of \eqref{basicchn} is denoted by $P(k,\,j)$. Relations of type \eqref{basicchn} are known as multigrade chains. The first example of multigrade chains was obtained in 1851 by Prouhet \cite[p. 449]{HW} who gave a rule to separate the first $j^{k+1}$ positive integers into $j$ sets that provide a multigrade chain \eqref{basicchnnot1} with $s=j^k$. Relevant excerpts from Prouhet's original note are given in \cite[pp. 999-1000]{BP}. As a numerical example, Prouhet noted that the integers $1,\,2,\,\ldots,\,27$ can be separated into three sets satisfying the relations, \begin{equation} \begin{aligned} 1,\,6,\,8,\,12,\,14,\,16,\,20,\,22,\,27 &\stackrel{2}{=}2,\,4,\,9,\,10,\,15,\,17,\,21,\,23,\,25\\ &\stackrel{2}{=}3,\,5,\,7,\,11,\,13,\,18,\,19,\,24,\,26. \end{aligned} \label{chn27ex1} \end{equation} While Prouhet himself did not give a proof, his result has subsequently been proved by several authors in various ways \cite{Le, Ng, Ro1, Wr2, Wr3}. It has been proved by Wright \cite{Wr1} that $P(k,\,j) \leq (k^2+k+2)/2$ when $k$ is even and $P(k,\,j) \leq (k^2+3)/2$ when $k$ is odd. 
However, Wright's method proves only the existence of solutions of \eqref{basicchn} and cannot be used to construct actual examples of multigrade chains. When $j=2$, it has been shown that $P(k,\,2)=k+1$ when $k \leq 9$ \cite[p. 440, \;p. 449]{HW} and also when $k=11$ \cite{CW}. Further, it has been shown that $P(k,\,j)=k+1$ for $k=2,\,3$ and $5$ and for all values of $j$ \cite[p. 437]{HW}. Numerous papers have been published on Prouhet's problem, especially concerning the particular case of equations \eqref{basicchn} when $j=2$; this special case is now referred to as the Prouhet-Tarry-Escott problem. Gloden has written an entire book on multigrade equations and multigrade chains \cite{Gl}, and the problem has been the subject of two survey articles \cite{Bo, RN}, both of which contain extensive bibliographies. Further, Prouhet's problem has been linked to various other problems \cite{AMZ, BP, BOR, Ce, GGG}. However, despite the passage of time since the publication of Prouhet's note in 1851 and the attention bestowed on the problem, until now Prouhet's original result has not been improved. A remarkable feature of Prouhet's solution of the equations \eqref{basicchn} is that the integers $a_{uv},\;u=1,\,2,\,\ldots,\,s,\;v=1,\,2,\,\ldots,\,j$, are a permutation of the first $N$ consecutive positive integers where $N=j^{k+1}$. The problem of separating $N$ consecutive integers into sets with equal power sums has been considered in two articles \cite{Ro2, Ro3} by Roberts, who has shown that ``if $q$ is a factorization of $n$ whose factors have least common multiple $L_q$ then the first $n$ nonnegative integers can be split into $L_q$ classes with equal $t$-th power sums for all $t$ satisfying \[ 0 \leq t < q^*-\max_{0 < s < L_q} \nu_s, \] where $q^*$ is the number of factors in $q$ and $\nu_s$ is the number of them that divide $s$". The maximum possible value of $t$ is relatively small and is the smallest exponent in the canonical prime factorization of $n$. 
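Prouhet's rule has a well-known concrete form: place each integer $n \in \{1,\,2,\,\ldots,\,j^{k+1}\}$ in the class indexed by the sum of the base-$j$ digits of $n-1$, reduced modulo $j$ (for $j=2$ this is the Thue-Morse splitting). The short sketch below (Python, illustrative only; the function name is ours) reproduces the three sets of \eqref{chn27ex1} for $j=3$, $k=2$ and checks the equal power sums for $r=1,\,2$:

```python
def prouhet_classes(j, k):
    """Split 1..j**(k+1) into j classes by the base-j digit sum of n-1, mod j."""
    classes = [[] for _ in range(j)]
    for n in range(1, j ** (k + 1) + 1):
        m, s = n - 1, 0
        while m:
            m, r = divmod(m, j)
            s += r
        classes[s % j].append(n)
    return classes

classes = prouhet_classes(3, 2)
# Each class contains j**k = 9 integers ...
assert all(len(c) == 9 for c in classes)
# ... the first class is the first set of Prouhet's 1851 example ...
assert classes[0] == [1, 6, 8, 12, 14, 16, 20, 22, 27]
# ... and the sums of r-th powers agree across classes for r = 1, 2.
for r in (1, 2):
    assert len({sum(x ** r for x in c) for c in classes}) == 1
```

The same function with $j=2$ yields the classical Thue-Morse separation of $1,\,2,\,\ldots,\,2^{k+1}$ into two sets with equal power sums up to exponent $k$.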
In this paper we will show that the consecutive positive integers $1,\,2,\,\ldots,\,2j^k$ can be separated into $j$ sets of $2j^{k-1}$ members satisfying the relations \eqref{basicchnnot1}. In fact, we show that this can, in general, be done in at least $\{(j-1)!\}^{k-1}$ ways. For $j > 2$, the integer $2j^{k}$ is much smaller than $j^{k+1}$ and the result is thus a significant improvement over Prouhet's solution of \eqref{basicchnnot1}. We also show that there exist infinitely many other positive integers $N=js$ such that the positive integers $1,\,2,\,\ldots,\,N$ can be separated into $j$ sets, each set containing $s$ integers, such that the $j$ sets provide a solution of \eqref{basicchnnot1} and, in general, this can be done in several ways. The theorems in this paper give considerably stronger results than those obtained by Roberts \cite{Ro2, Ro3}. \section{Some preliminary lemmas} \begin{lem} If there exist integers $a_{uv},\;u=1,\,2,\,\ldots,\,s,\;v=1,\,2,\,\ldots,\,j$ such that \begin{equation} a_{11},\,a_{21},\ldots,\,a_{s1} \stackrel{k}{=} a_{12},\,a_{22},\ldots,\,a_{s2} \\ \stackrel{k}{=} \ldots \stackrel{k}{=} a_{1j},\,a_{2j},\ldots,\,a_{sj}, \end{equation} then \begin{equation} \begin{aligned} &Ma_{11}+K,\,Ma_{21}+K,\ldots,\,Ma_{s1}+K \\ & \quad \stackrel{k}{=} Ma_{12}+K,\,Ma_{22}+K,\ldots,\,Ma_{s2}+K \\ &\quad\stackrel{k}{=} \ldots \\ & \quad \stackrel{k}{=} Ma_{1j}+K,\,Ma_{2j}+K,\ldots,\,Ma_{sj}+K, \end{aligned} \end{equation} where $M$ and $K$ are arbitrary integers. \end{lem} \begin{proof} When $j=2$, this is a simple consequence of the binomial theorem and is a well-known lemma \cite{Do}. When $j > 2$, the lemma likewise follows immediately from the binomial theorem. \end{proof} \begin{lem}For any arbitrary positive integer $j> 1 $, the first $2j$ consecutive positive integers can be separated into $j$ sets, each set containing two integers, such that the sum of the integers in each set is the same. 
\end{lem} \begin{proof} The $j$ sets $\{u,\,2j+1-u\},\;u=1,\,2,\,\ldots,\,j$, have the same sum $2j+1$. Since the integers in these $j$ sets are the first $2j$ consecutive positive integers, the lemma is proved. \end{proof} \begin{lem} For any arbitrary positive integers $m$ and $j> 1 $, the first $2mj$ consecutive positive integers can be separated into $j$ sets, each set containing $2m$ integers, such that the sum of the integers in each set is the same. \end{lem} \begin{proof} This is a straightforward generalisation of Lemma 2. We first divide the consecutive integers $1,\,2,\,\ldots,\,2mj$ into $2j$ blocks, each block consisting of $m$ consecutive integers -- the first block being $1,\,2,\,\ldots,\,m$. Next for each integer $u, \; 1 \leq u \leq j$, we construct a set consisting of the $m$ integers of the $u^{\rm th}$ block and the $m$ integers of the $(2j+1-u)^{\rm th}$ block. We thus get $j$ sets, each set consisting of $2m$ integers, such that the sum of the integers in each set is $m(2mj+1)$. This proves the lemma. \end{proof} \begin{lem} For any arbitrary positive integer $j> 1 $, the first $j^2$ consecutive positive integers can be separated into $j$ sets, each set containing $j$ integers, such that the sum of the integers in each set is the same. \end{lem} \begin{proof} If we separate the first $j^2$ consecutive positive integers into the $j$ sets, \begin{align*} &\{1,&&j+2,&& 2j+3,&& 3j+4,&& \ldots,& (j-1)j+j\},\\ &\{j+1,&&2j+2,&& 3j+3,&& 4j+4,&& \ldots,& j\},\\ &\{2j+1,&&3j+2,&& 4j+3,&& 5j+4,&& \ldots,& j+j\},\\ \vdots \\ &\{(j-1)j+1,&&2,&& j+3,&& 2j+4,&& \ldots,& (j-2)j+j\}, \end{align*} it would be observed that each of the numbers $u,\; u=1,\,\ldots,\,j$, occurs as a summand in one and only one member of each set and the same is true for each of the numbers $uj,\; u=1,\,\ldots,\,j-1$. It follows that the sum of the members in each set is the same, the common sum being $j(j^2+1)/2$. 
Further, each set contains $j$ integers and it is readily seen that the integers in all the $j$ sets put together are just a permutation of the first $j^2$ consecutive positive integers. Thus the lemma is proved. \end{proof} \begin{lem} Any solution of the multigrade chain \eqref{basicchnnot1} yields a solution of the multigrade chain \begin{equation} b_{11},\,b_{21},\ldots,\,b_{t1} \stackrel{k+1}{=} b_{12},\,b_{22},\ldots,\,b_{t2} \\ \stackrel{k+1}{=} \ldots \stackrel{k+1}{=} b_{1j},\,b_{2j},\ldots,\,b_{tj} \label{basicchnnot1b} \end{equation} where $t=js$. \end{lem} \begin{proof} Let $h_1,\,h_2,$ $\ldots,\,h_j$ be an arbitrary set of $j$ distinct integers. We take the integers $b_{u1},\;u=1,\,2,\,\ldots,\,t$, as follows: \begin{equation} \begin{aligned} &a_{11}+h_1,\;a_{21}+h_1,\,\ldots,\,a_{s1}+h_1, \\ &a_{12}+h_2,\;a_{22}+h_2,\,\ldots,\,a_{s2}+h_2,\\ &\vdots \\ &a_{1j}+h_j,\;a_{2j}+h_j,\,\ldots,\,a_{sj}+h_j. \end{aligned} \label{setb1} \end{equation} For any given integer $v$ where $2 \leq v \leq j$, we replace $h_1,\,h_2,$ $\ldots,\,h_j$ in the set of integers \eqref{setb1} by $h_v,\,h_{v+1},\,\ldots,\,h_{v+j-1}$ respectively, where we take $h_m=h_{m-j}$ when $m > j$, and the resulting integers are taken to be the integers $b_{uv},\;u=1,\,2,\,\ldots,\,t$. We will now show that, with these values of $b_{uv}$, the relations \eqref{basicchnnot1b} are satisfied. The proof is by the binomial theorem. In view of the relations \eqref{basicchnnot1}, it is readily seen that the relations \eqref{basicchnnot1b} are true for exponents $1,\,2,\,\ldots,\,k$. 
Further, when we consider the relation \eqref{basicchnnot1b} for the exponent $k+1$, on expanding the terms of the first set, that is, $b_{u1}^{k+1},\;u=1,\,\ldots,\,t$, and adding only the terms involving $h_1^r,\,h_2^r,\,\ldots,\,h_j^r$ where $1 \leq r \leq k+1$, we get \[ \begin{aligned} &\sum_{u=1}^s \binom{k+1}{r}a_{u1}^{k+1-r}h_1^r+\sum_{u=1}^s \binom{k+1}{r}a_{u2}^{k+1-r}h_2^r+\cdots+\sum_{u=1}^s \binom{k+1}{r}a_{uj}^{k+1-r}h_j^r\\ & \quad =(h_1^r+h_2^r+\cdots+h_j^r)\sum_{u=1}^s \binom{k+1}{r}a_{u1}^{k+1-r}. \end{aligned} \] It is now easy to see that the terms involving $h_i^r,\;i=1,\,2,\,\ldots,\,j$, where $1 \leq r \leq k+1$, add up to the same common sum in each set. Further, the terms independent of $h_i$ add up to $\sum_{u=1}^s\sum_{v=1}^ja_{uv}^{k+1}$ in each set. It is thus seen that the relations \eqref{basicchnnot1b} are also true for the exponent $k+1$. This proves the lemma. \end{proof} \section{Multigrade chains consisting only of the first $N$ consecutive positive integers} In Section 3.1 we give three theorems which show that there exist infinitely many integers $N=js$ such that the consecutive positive integers $1,\,2,\,\ldots,\,N$ can be separated into $j$ sets, each set consisting of $s$ integers, such that the $j$ sets provide a solution of \eqref{basicchnnot1} for a certain value of $k$. In Section 3.2 we give some numerical examples of such multigrade chains. \subsection{} \begin{thm} If $N=2j^k$ where $j \geq 2$ and $k \geq 1$, the first $N$ consecutive positive integers can be separated into $j$ sets in at least $\{(j-1)!\}^{k-1}$ ways, each set consisting of $2j^{k-1}$ integers, such that the $j$ sets provide a solution of the multigrade chain \eqref{basicchnnot1}. \end{thm} \begin{proof} The proof is by induction. It follows from Lemma 2 that the result is true when $k=1$. 
We now assume that the result is true when $k=n$, that is, we assume that there exist integers $a_{uv},\;u=1,\ldots,\,s,\;v=1,\ldots,\,j,$ where $s=2j^{n-1}$ such that \begin{equation} a_{11},\,a_{21},\ldots,\,a_{s1} \stackrel{n}{=} a_{12},\,a_{22},\ldots,\,a_{s2} \\ \stackrel{n}{=} \ldots \stackrel{n}{=} a_{1j},\,a_{2j},\ldots,\,a_{sj}, \label{mchn1} \end{equation} and the integers $a_{ij}$ are a permutation of the first $2j^n$ positive integers. On applying Lemma 1 with $M=j,\;K=-j$ to the relations \eqref{mchn1}, we get the multigrade chain, \begin{equation} b_{11},\,b_{21},\ldots,\,b_{s1} \stackrel{n}{=} b_{12},\,b_{22},\ldots,\,b_{s2} \\ \stackrel{n}{=} \ldots \stackrel{n}{=} b_{1j},\,b_{2j},\ldots,\,b_{sj}, \label{mchn2} \end{equation} where the integers $b_{ij}$ are a permutation of the integers $0,\,j,\,2j,\,\ldots,\,2j^{n+1}-j$. We now apply Lemma 5 to the relations \eqref{mchn2} taking the integers $h_1,\,h_2,$ $\ldots,\,h_j$, as the integers $1,\,2,\,\ldots,\,j$, and we get the multigrade chain, \begin{equation} c_{11},\,c_{21},\ldots,\,c_{t1} \stackrel{n+1}{=} c_{12},\,c_{22},\ldots,\,c_{t2} \\ \stackrel{n+1}{=} \ldots \stackrel{n+1}{=} c_{1j},\,c_{2j},\ldots,\,c_{tj}, \label{mchn3} \end{equation} where $t=2j^n$ and the integers $c_{uv},\;u=1,\ldots,\,t,\;v=1,\ldots,\,j$, are obtained by adding each of the integers $1,\,2,\,\ldots,\,j$ to each of the integers $0,\,j,\,2j,\,\ldots,\,2j^{n+1}-j$. It follows that the integers $c_{uv}$ are the consecutive integers $1,\,2,\,\ldots,\,2j^{n+1}$. Thus, the first $2j^{n+1}$ positive integers have been separated into $j$ sets, each set consisting of $2j^n$ integers, such that the $j$ sets provide a solution of the multigrade chain \eqref{basicchnnot1} with $k=n+1$. In fact, we may take the integers $h_1,\,h_2,\,\ldots,\,h_j$ to be any permutation of the integers $1,\,2,\,\ldots,\,j$, and we still get a multigrade chain of type \eqref{mchn3} consisting of the consecutive integers $1,\,2,\,\ldots,\,2j^{n+1}$. 
For getting distinct multigrade chains of type \eqref{mchn3}, we may keep $h_1=1$ fixed while permuting the remaining $j-1$ integers in $(j-1)!$ ways. Thus, starting from the multigrade chain \eqref{mchn1}, we get $(j-1)!$ distinct multigrade chains \eqref{mchn3} consisting of the consecutive integers $1,\,2,\,\ldots,\,2j^{n+1}$. The theorem now follows by induction. \end{proof} \begin{thm} If $N=2mj^k$ where $m \geq 1$, $j \geq 2$ and $k \geq 1$, the first $N$ consecutive positive integers can be separated into $j$ sets in at least $\{(j-1)!\}^{k-1}$ ways, each set consisting of $2mj^{k-1}$ integers, such that the $j$ sets provide a solution of the multigrade chain \eqref{basicchnnot1}. \end{thm} \begin{proof} By Lemma 3, the result is true for $k=1$. The remaining proof is similar to that of Theorem 6 and is accordingly omitted. \end{proof} \begin{thm} If $N=j^{k+1}$ where $j \geq 2$ and $k \geq 1$, the first $N$ consecutive positive integers can be separated into $j$ sets in at least $\{(j-1)!\}^{k-1}$ ways, each set consisting of $j^k$ integers, such that the $j$ sets provide a solution of the multigrade chain \eqref{basicchnnot1}. \end{thm} \begin{proof} By Lemma 4, the result is true for $k=1$. As in the case of Theorem 7, the remaining proof is similar to the proof of Theorem 6 and is omitted. This gives yet another proof of Prouhet's result. \end{proof} \subsection{} We now give a few numerical examples. Since $18=2\cdot 3^2$, in view of Theorem 6, the consecutive integers $1,\,2,\,\ldots,\,18$ can be separated into 3 sets -- each set consisting of 6 integers -- to yield two multigrade chains valid for the exponents 1 and 2. These two multigrade chains are as follows: \begin{equation} 1, 5, 9, 12, 14, 16 \stackrel{2}{=} 2, 6, 7, 10, 15, 17 \stackrel{2}{=}3, 4, 8, 11, 13, 18, \label{chn18ex1} \end{equation} and \begin{equation} 1, 6, 8, 11, 15, 16 \stackrel{2}{=} 3, 5, 7, 10, 14, 18 \stackrel{2}{=}2, 4, 9, 12, 13, 17.
\label{chn18ex2} \end{equation} We note that the smallest exponent in the canonical prime factorization of 18 is 1, and hence the method described by Roberts \cite{Ro2, Ro3} does not generate the above multigrade chains. As a second example, in view of Theorem 8, the first 27 consecutive positive integers can be separated into three sets -- each set having 9 integers -- to yield two multigrade chains. These two multigrade chains are as follows: \begin{equation} \begin{aligned} 1,\, 6,\, 8,\, 11,\, 13,\, 18,\, 21,\, 23,\, 25&\stackrel{2}{=}2,\, 4,\, 9,\, 12,\, 14,\, 16,\, 19,\, 24,\, 26\\ &\stackrel{2}{=}3,\, 5,\, 7,\, 10,\, 15,\, 17,\, 20,\, 22,\, 27. \end{aligned} \label{chn27ex2} \end{equation} and \begin{equation} \begin{aligned} 1,\, 5,\, 9,\, 12,\, 13,\, 17,\, 20,\, 24,\, 25&\stackrel{2}{=} 2,\, 6,\, 7,\, 10,\, 14,\, 18,\, 21,\, 22,\, 26\\ &\stackrel{2}{=}3,\, 4,\, 8,\, 11,\, 15,\, 16,\, 19,\, 23,\, 27. \end{aligned} \label{chn27ex3} \end{equation} It is interesting to observe that both of the above multigrade chains are distinct from the one given by Prouhet. In fact, there is a fourth multigrade chain comprising the first 27 positive integers. It is as follows: \begin{equation} \begin{aligned} 1,\, 5,\, 9,\, 11,\, 15,\, 16,\, 21,\, 22,\, 26 &\stackrel{2}{=} 2,\, 6,\, 7,\, 12,\, 13,\, 17,\, 19,\, 23,\, 27\\ &\stackrel{2}{=}3,\, 4,\, 8,\, 10,\, 14,\, 18,\, 20,\, 24,\, 25. \end{aligned} \label{chn27ex4} \end{equation} \section{An open problem} It follows from Theorems 6, 7 and 8 that, for any given positive integers $k \geq 1$ and $j \geq 2$, there exist infinitely many integers $N$ such that the first $N$ consecutive positive integers can be separated into $j$ sets that provide a solution of the multigrade chain \eqref{basicchnnot1}. Accordingly for $k \geq 1$ and $j \geq 2$, we define $N(k,\,j)$ to be the least positive integer $N$ with this property. An immediate consequence of Theorem 6 is that $ N(k,\,j) \leq 2j^k$.
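The bound $N(k,\,j)\leq 2j^k$ and the small exact values discussed below can be checked by exhaustive search. The following Python sketch is our own illustration (the function names are not from the paper); it tests the candidates $N=js$ in increasing order, pruning any $N$ for which some power sum of $1,\,\ldots,\,N$ is not divisible by $j$:

```python
from itertools import combinations

def power_sums(xs, k):
    """Tuple of the r-th power sums of xs for r = 1, ..., k."""
    return tuple(sum(x ** r for x in xs) for r in range(1, k + 1))

def splits_into_j_sets(nums, j, k, goal=None):
    """Can nums be split into j equal-size sets, all with power sums `goal`?"""
    if goal is None:
        target = power_sums(nums, k)
        # each set must carry exactly 1/j of every power sum
        if any(t % j for t in target):
            return False
        goal = tuple(t // j for t in target)
    if j == 1:
        return power_sums(nums, k) == goal
    s = len(nums) // j
    first = min(nums)  # fix the smallest element to break symmetry
    rest = [x for x in nums if x != first]
    for extra in combinations(rest, s - 1):
        part = (first,) + extra
        if power_sums(part, k) == goal:
            remaining = [x for x in nums if x not in part]
            if splits_into_j_sets(remaining, j - 1, k, goal):
                return True
    return False

def least_N(k, j, limit=60):
    """Least N = j*s such that 1, ..., N splits as required (brute force)."""
    for N in range(j, limit + 1, j):
        if splits_into_j_sets(list(range(1, N + 1)), j, k):
            return N
    return None
```

The divisibility pruning makes the small searches relevant here essentially instantaneous.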
It would be of interest to determine the integer $N(k,\,j)$. It is readily proved that $N(1,\,j)=2j$, $N(2,\,2)=8$ and $N(2,\,3)=18$. Thus, in these cases $N(k,\,j)=2j^k$. In fact, it appears that $N(k,\,j)=2j^k$ for arbitrary positive integers $k \geq 1$ and $j \geq 2$ but this remains to be proved. \begin{center} \Large Acknowledgments \end{center} I wish to thank the Harish-Chandra Research Institute, Prayagraj for providing me with all necessary facilities that have helped me to pursue my research work in mathematics.
https://arxiv.org/abs/1911.09533
\title{Uniform chain decompositions and applications}
\begin{abstract}
The Boolean lattice $2^{[n]}$ is the family of all subsets of $[n]=\{1,\dots,n\}$ ordered by inclusion, and a chain is a family of pairwise comparable elements of $2^{[n]}$. Let $s=2^{n}/\binom{n}{\lfloor n/2\rfloor}$, which is the average size of a chain in a minimal chain decomposition of $2^{[n]}$. We prove that $2^{[n]}$ can be partitioned into $\binom{n}{\lfloor n/2\rfloor}$ chains such that all but at most $o(1)$ proportion of the chains have size $s(1+o(1))$. This asymptotically proves a conjecture of Füredi from 1985. Our proof is based on probabilistic arguments. To analyze our random partition we develop a weighted variant of the graph container method. Using this result, we also answer a Kalai-type question raised recently by Das, Lamaison and Tran. What is the minimum number of forbidden comparable pairs forcing that the largest subfamily of $2^{[n]}$ not containing any of them has size at most $\binom{n}{\lfloor n/2\rfloor}$? We show that the answer is $(\sqrt{\frac{\pi}{8}}+o(1))2^{n}\sqrt{n}$. Finally, we discuss how these uniform chain decompositions can be used to optimize and simplify various results in extremal set theory.
\end{abstract}
\section{Introduction} The \emph{Boolean lattice $2^{[n]}$} is the family of all subsets of $[n]=\{1,\dots,n\}$, ordered by inclusion. A \emph{chain} in $2^{[n]}$ is a family $\{x_{1},\dots,x_{k}\}\subset 2^{[n]}$ such that $x_{1}\subset \dots\subset x_{k}$, and an \emph{antichain} is a family $A\subset 2^{[n]}$ such that no two elements of $A$ are comparable. A cornerstone result in extremal set theory is the theorem of Sperner~\cite{S28} which states that the size of the largest antichain in $2^{[n]}$ is $\binom{n}{\lfloor n/2\rfloor}$, which by Dilworth's theorem~\cite{D50} is equivalent to the statement that the minimum number of chains $2^{[n]}$ can be partitioned into is also $\binom{n}{\lfloor n/2\rfloor}$. While the maximum-sized antichain is more or less unique (if $n$ is odd, there are two maximum antichains, otherwise it is unique), there are many different ways to partition $2^{[n]}$ into the minimum number of chains. In general, chain decompositions of the Boolean lattice into the minimum number of chains are extensively studied, see e.g. \cite{BTK51,DJMS19,F85,GK76,G88,HLST02,HLST03, T15, T16}. One minimal chain decomposition of particular interest is the so-called \emph{symmetric chain decomposition}. A chain with elements $x_{0}\subset\dots\subset x_{k}$ is \emph{symmetric} in $2^{[n]}$, if $|x_{i}|=\frac{n-k}{2}+i$ for $i=0,\dots,k$. It was proved by de Bruijn, Tengbergen and Kruyswijk~\cite{BTK51} that the Boolean lattice can be partitioned into symmetric chains. Note that in such a chain decomposition, there are exactly $\binom{n}{k}-\binom{n}{k-1}$ chains of size $n-2k+1$ for $k=0,\dots,\lfloor n/2\rfloor$. Therefore, in a symmetric chain decomposition the sizes of the chains are distributed very non-uniformly; in fact, it is the most non-uniform chain decomposition in a certain sense, see the discussion in Section \ref{sect:remarks}.
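As an illustration of ours (not part of the argument in this paper), a symmetric chain decomposition can be built for small $n$ by the classical bracketing rule often attributed to Greene and Kleitman: reading a characteristic vector with 1 as an opening and 0 as a closing bracket, the matched positions are frozen along a chain and only the unmatched positions vary. A Python sketch:

```python
from math import comb

def chain_key(bits):
    """Bracketing rule: read 1 as '(' and 0 as ')'.
    Matched positions are fixed; the chain 'bottom' sets every
    unmatched position to 0 and identifies the chain of `bits`."""
    stack, matched = [], set()
    for i, b in enumerate(bits):
        if b:
            stack.append(i)
        elif stack:
            matched.add(stack.pop())
            matched.add(i)
    return tuple(b if i in matched else 0 for i, b in enumerate(bits))

def symmetric_chain_decomposition(n):
    """Group the 2^n characteristic vectors by their chain bottom."""
    chains = {}
    for mask in range(2 ** n):
        bits = tuple((mask >> i) & 1 for i in range(n))
        chains.setdefault(chain_key(bits), []).append(bits)
    return list(chains.values())
```

Running this for a small $n$ reproduces the size distribution described above: $\binom{n}{k}-\binom{n}{k-1}$ chains of size $n-2k+1$.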
Perhaps motivated by this observation, F\"uredi~\cite{F85} asked whether there exists a chain decomposition of $2^{[n]}$ into the minimum number of chains such that any two chains have roughly the same size. \begin{conjecture}[F\"{u}redi~\cite{F85}]\label{conj:mainconj} Let $n$ be a positive integer and let $s=2^{n}/\binom{n}{\lfloor n/2\rfloor}.$ Then $2^{[n]}$ can be partitioned into $\binom{n}{\lfloor n/2\rfloor}$ chains such that the size of each chain is either $\lfloor s\rfloor$ or $\lceil s\rceil$. \end{conjecture} Here, we have $s=(\sqrt{\frac{\pi}{2}}+o(1))\sqrt{n}\approx 1.25\sqrt{n}$. Hsu, Logan, Shahriari and Towse~\cite{HLST02,HLST03} proved the existence of a chain decomposition into the minimum number of chains such that the size of each chain is between $\frac{1}{2}\sqrt{n}+O(1)$ and $O(\sqrt{n\log n})$. The second author of this paper~\cite{T15,T16} improved the lower and upper bounds to $0.8\sqrt{n}$ and $26\sqrt{n}$, respectively, and proved certain generalizations of his result to other partially ordered sets. The main result of our paper is that Conjecture~\ref{conj:mainconj} holds asymptotically. \begin{theorem}\label{thm:mainthm} Let $n$ be a positive integer and $s=2^{n}/\binom{n}{\lfloor n/2\rfloor}$. The Boolean lattice can be partitioned into $\binom{n}{\lfloor n/2\rfloor}$ chains such that all but at most $n^{-\frac{1}{8}+o(1)}$ proportion of the chains have size ${s(1+O(n^{-\frac{1}{16}}))}$. \end{theorem} \noindent We made no serious attempt to optimize the error terms in this result. Let us remark that we will show in Section 2.7 that the chain decomposition provided by Theorem~\ref{thm:mainthm} has the following additional property. \begin{corollary} \label{remark} Let $\mathcal{C}$ be a chain decomposition of $2^{[n]}$ provided by Theorem~\ref{thm:mainthm}. Then the chains of size ${s(1+O(n^{-\frac{1}{16}}))}$ in $\mathcal{C}$ cover $1-n^{-\frac{1}{8}+o(1)}$ proportion of $2^{[n]}$.
\end{corollary} Our main theorem has the following interesting application. The well-known theorem of Mantel states that if a graph $G$ with $n$ vertices does not contain a triangle, then it has at most $\lfloor \frac{n^{2}}{4}\rfloor$ edges, and this bound is sharp for every $n$. Kalai (see~\cite{DLT19}) proposed the following question: what is the size of the smallest set $T$ of triples in an $n$-element vertex set $V$ such that any graph on $V$ with $\lfloor \frac{n^{2}}{4}\rfloor+1$ edges contains a triangle spanned by a triple in $T$? Das, Lamaison and Tran~\cite{DLT19} proved that the answer is $(\frac{1}{2}+o(1))\binom{n}{3}$, where the upper bound also follows from earlier work of Allen, B\"ottcher, Hladk\'y and Piguet~\cite{ABHP13}. The authors also propose to study Kalai-type questions for other well-known extremal problems. Motivated by Sperner's theorem they asked for the minimum number of forbidden comparable pairs forcing that the largest subfamily of $2^{[n]}$ not containing any of them has size at most $\binom{n}{\lfloor n/2\rfloor}$. Let $B_{n}$ denote the comparability graph of $2^{[n]}$, that is, $V(B_{n})=2^{[n]}$ and $x,y\in 2^{[n]}$ are joined by an edge if $x\subset y$ or $y\subset x$. It is a nice exercise to show that $B_{n}$ has $3^{n}-2^{n}$ edges. Sperner's theorem is equivalent to the statement that the size of the largest independent set of $B_{n}$ is $\binom{n}{\lfloor n/2\rfloor}$. In this setting, the question of Das, Lamaison and Tran can be reformulated as follows. What is the least number of edges of a subgraph $G$ of $B_{n}$ with $V(G)=2^{[n]}$ such that $G$ has no independent set larger than $\binom{n}{\lfloor n/2\rfloor}$? Using Theorem~\ref{thm:mainthm}, we answer this question asymptotically. \begin{theorem}\label{thm:sperner} Let $G$ be a subgraph of $B_{n}$ with the minimum number of edges such that $V(G)=2^{[n]}$ and $G$ has no independent set larger than $\binom{n}{\lfloor n/2\rfloor}$.
Then $|E(G)|=(\sqrt{\frac{\pi}{8}}+o(1))2^{n}\sqrt{n}$. \end{theorem} Finally, we show that the uniform chain decomposition provided by Theorem~\ref{thm:mainthm} can be applied to various extremal set theory problems, generalizing ideas of the second author~\cite{T19}. A typical question in extremal set theory asks how large a family $H\subset 2^{[n]}$ avoiding a certain forbidden configuration can be. One way to attack such a problem is as follows. A \emph{$d$-dimensional grid} is a $d$-term Cartesian product of the form $[k_1]\times\dots\times[k_d]$. We fix some $d$ and partition $2^{[n]}$ into $d$-dimensional grids of roughly the same size. Then, we bound the size of the intersection of each of these grids with the family $H$ avoiding the forbidden configuration. The advantage of this approach is that the problem of the largest subset of the grid avoiding a given forbidden configuration is equivalent to an (ordered) hypergraph Tur\'an problem, for which a good bound is sometimes already available. In order to find a partition into $d$-dimensional grids, we write $2^{[n]}$ as the Cartesian product $2^{[n_{1}]}\times\dots\times 2^{[n_{d}]}$, where $n_{i}\approx \frac{n}{d}$, and find a uniform chain decomposition $\mathcal{C}_{i}$ of $2^{[n_{i}]}$. Then the Cartesian products $C_{1}\times\dots\times C_{d}$, where $C_{1}\in\mathcal{C}_{1},\dots,C_{d}\in\mathcal{C}_{d}$, partition $2^{[n]}$ in the desired manner. We will illustrate how to apply this idea in the case when the forbidden configuration is two sets and their union, a copy of some poset $P$, or a full Boolean algebra. Our paper is organized as follows. In Section~\ref{sect:mainproof}, we prove Theorem~\ref{thm:mainthm} and Corollary~\ref{remark}. In Section~\ref{sect:sperner}, we prove Theorem~\ref{thm:sperner}. In Section~\ref{sect:extremal}, we discuss further possible applications of our main result in extremal set theory.
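As a quick sanity check of the exercise mentioned in the introduction, the edge count $|E(B_{n})|=3^{n}-2^{n}$ can be verified by brute force for small $n$; a minimal Python sketch of ours:

```python
def comparability_edges(n):
    """Number of edges of B_n: ordered pairs (x, y) of subsets of [n]
    with x a proper subset of y; each comparable pair is counted once,
    since exactly one of the two orientations holds."""
    count = 0
    for x in range(2 ** n):
        for y in range(2 ** n):
            if x != y and (x & y) == x:  # bitmask test for x being a subset of y
                count += 1
    return count
```

This agrees with the closed form $\sum_{y}(2^{|y|}-1)=3^{n}-2^{n}$.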
\section{Decomposition into chains of uniform size}\label{sect:mainproof} \subsection{Preliminaries} We use the following standard graph theoretic notation. If $G$ is a graph and $x\in V(G)$, then $\deg_{G}(x)$ denotes the degree of $x$ in $G$. Also, if $U\subset V(G)$, then $N_{G}(U)=\{y\in V(G)\setminus U:\exists x\in U, xy\in E(G)\}$ is the \emph{external neighborhood} of $U$ in $G$, and if $U=\{x\}$, we write $N_{G}(x)$ instead of $N_{G}(\{x\})$. Also, we use the following set theoretic notation. If $0\leq l\leq n$, then $[n]^{(l)}=\{x\in 2^{[n]}:|x|=l\}$ and $[n]^{(\geq l)}=\{x\in 2^{[n]}:|x|\geq l\}$. We define $[n]^{(\leq l)}$ similarly. Also, a \emph{level} of $2^{[n]}$ refers to one of the families $[n]^{(l)}$ for $l=0,\dots,n$. The proof of our main theorem uses probabilistic tools, see the book of Alon and Spencer~\cite{AS04} for a general reference about the probabilistic method. In particular, we need the following variants of Chernoff's inequality, see e.g. Theorem 2.8 in \cite{JLR00}. \begin{claim}\label{claim:chernoff}(Chernoff's inequality) Let $X_1,\dots,X_n$ be independent random variables such that $\mathbb{P}(X_{i}=1)=p_{i}$ and $\mathbb{P}(X_{i}=0)=1-p_{i}$, and let $X=\sum_{i=1}^{n}X_i$. Then for $\delta>0$, we have $$\mathbb{P}(X\geq (1+\delta)\mathbb{E}(X))\leq \begin{cases} e^{-\frac{\delta^{2}}{3}\mathbb{E}(X)} &\mbox{ if }\delta\leq 1,\\ e^{-\frac{\delta}{3}\mathbb{E}(X)} &\mbox{ if }\delta>1. \end{cases}$$ Also, if $p_{1}=\dots=p_{n}=\frac{1}{2}$ and $t>0$, then $$\mathbb{P}\left(X\geq \frac{n}{2}+t\right)\leq e^{-\frac{2t^{2}}{n}}.$$ \end{claim} Our proof of Theorem~\ref{thm:mainthm} depends quite delicately on the distribution of the sizes of the levels of $2^{[n]}$. Next, we collect some estimates on the binomial coefficients we use in this paper. \begin{claim}\label{claim:binomial} Let $n$ be a positive integer, $m=\lceil \frac{n}{2}\rceil$ and $M=\binom{n}{m}$. 
\begin{enumerate} \item $M=\left(\sqrt{\frac{2}{\pi}}+o(1)\right)\frac{2^{n}}{\sqrt{n}}.$~\cite{S14} \item For $l=o(n^{2/3})$, $\binom{n}{m+l}=(1+o(1))Me^{-2l^{2}/n}.$~\cite{S14} \item For $0<l$, $\sum_{i>m+l}\binom{n}{i}\leq 2^{n}e^{-2l^{2}/n}.$ (Chernoff's inequality) \item For $0<l<\sqrt{n}$, $M\left(1-\frac{2l^{2}}{n}\right)\leq \binom{n}{m+l}<M\left(1-\frac{l^{2}}{4n}\right)$. \item For $0\leq l<10\sqrt{n}$, $\binom{n}{m+l}-\binom{n}{m+l+1}=\Theta(l2^{n}n^{-3/2}).$ \item For $\sqrt{n}\leq l=o(n^{2/3})$, $\sum_{i\geq m+l}\binom{n}{i}\geq (e^{-7}+o(1))2^ne^{-2l^2/n}\frac{\sqrt{n}}{l}$. \end{enumerate} \begin{proof} See the Appendix. \end{proof} \subsection{Overview of the proof} The proof of Theorem~\ref{thm:mainthm} is somewhat technical at certain stages, so let us roughly outline our strategy. Let $k=\lceil s/2\rceil$. First of all, we only consider the upper half of $2^{[n]}$, $B=[n]^{(\geq \lceil n/2\rceil)}$: if we manage to partition $B$ into chains of size approximately $k$, then we can easily turn this into a chain partition of $2^{[n]}$ with the desired properties. We start with the $k+1$ largest levels. We cut the remaining levels $[n]^{(l)}$, $l> \lceil n/2\rceil+k$, into small pieces and glue these small pieces onto the levels $[n]^{(\lceil n/2\rceil)},\dots,[n]^{(\lceil n/2\rceil+k)}$ such that every level of the resulting new poset has size roughly $\binom{n}{\lfloor n/2\rfloor}$. Since this new poset has exactly $k+1$ levels, one can hope to find a chain partition of it into $\binom{n}{\lfloor n/2\rfloor}$ chains, each of size $\approx k$. Indeed, we show that if we cut the levels $[n]^{(l)}$ for $l> \lceil n/2\rceil+k$ randomly, then such a chain partition exists with high probability. \subsection{Setting up} Throughout this section, we assume that $n$ is sufficiently large for our arguments to work. Let $m=\lceil \frac{n}{2}\rceil$, $M=\binom{n}{m}$, $A_{i}=[n]^{(m+i)}$ for $i=0,\dots,n-m$, and $B=[n]^{(\geq m)}$.
Then $|B|=2^{n-1}$ if $n$ is odd, and $|B|=2^{n-1}+\frac{M}{2}$ if $n$ is even. We remind the reader that $s=\frac{2^{n}}{M}$, and define $k=\lceil \frac{s}{2}\rceil$. Note that $(k-1)M<|B|<(k+1)M$. Also, as $s=(1+o(1))\sqrt{\pi/2}\sqrt{n}$, we have $|A_{k}|=Me^{-\pi/4+o(1)}$. In particular, $0.45 M< |A_{k}|<0.46 M$. Consider the subposet $P_{0}$ of $B$ induced by the levels $A_{0},\dots,A_{k}$. Next, we would like to ``fill up'' $P_{0}$ with the elements of $[n]^{(>k+m)}$, that is, we want to add elements of $[n]^{(>k+m)}$ to the levels $A_{1},\dots,A_{k}$ such that the size of each level becomes roughly $M$. We do this as follows: imagine a $(k+1)\times M$ sized rectangle partitioned into $(k+1)M$ unit squares indexed by $(a,b)\in \{0,\dots,k\}\times [M]$, where we fill some of the unit squares with the elements of $B$. We want to do this in a way such that each row corresponds to an expanded level $A_{i}'$. First, for $a=0,\dots,k$, fill the unit squares $(a,1),\dots, (a,|A_{a}|)$ with the elements of $A_{a}$. Then, we will fill the rest of the unit squares as follows. For $1\leq a\leq b\leq k$, let $X_{a,b}=\{b\}\times \{|A_{a}|+1,\dots,|A_{a-1}|\}$, and for $l=0,\dots,k$, let the $l$-th diagonal be the union $\bigcup_{1\leq a\leq k-l} X_{a,a+l}$. Note that $|X_{a,b}|=|A_{a-1}|-|A_{a}|$ and the size of the $l$-th diagonal is $M-|A_{k-l}|$. Order the elements of $[n]^{(>k+m)}$ in increasing order of size, and among sets of the same size, choose a random ordering. Start filling up the $0$-th diagonal using the elements of $[n]^{(>k+m)}$ with respect to this order. Then if the $l$-th diagonal is already filled up, we move to the $(l+1)$-th diagonal. Also, we fill up each diagonal from right to left. We do this until we run out of elements in $[n]^{(>k+m)}$.
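The constants above are easy to probe numerically. The following Python sketch of ours computes $s$, $k$ and $|A_{k}|/M$ exactly via integer binomials; note that at moderate $n$ the rounding in $k=\lceil s/2\rceil$ still shifts $|A_{k}|/M$ visibly below its limit $e^{-\pi/4}\approx 0.456$, so the window $(0.45,0.46)$ is only reached for rather large $n$:

```python
from math import comb, ceil, exp, pi, sqrt

def chain_constants(n):
    """m, M, s, k as defined in this section (names follow the paper)."""
    m = (n + 1) // 2       # m = ceil(n/2)
    M = comb(n, m)
    s = 2 ** n / M         # exact big-integer division, rounded to float
    k = ceil(s / 2)
    return m, M, s, k

n = 2000
m, M, s, k = chain_constants(n)
ratio = comb(n, m + k) / M  # |A_k| / M, computed from exact binomials
```

For $n=2000$ this gives $s\approx 56.06$, $k=29$ and a ratio close to the Gaussian prediction $e^{-2k^{2}/n}$.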
In the end, the $i$-th row of the rectangle becomes the level $A_{i}'$, and we get a poset $P$ with levels $A_{0}',\dots,A_{k}'$ in which $x\leq_{P} y$ if $x$ and $y$ are in different levels and $x\subset y$. Then $P$ is a subposet of $B$ of height $k+1$ such that every level of $P$ has size roughly $M$. Our goal (more or less) is to show that $P$ can be partitioned into $M$ chains. In the rest of the proof, we shall not work directly with the poset $P$, but for a better understanding of our proof, it is worth seeing this underlying structure. See Figure \ref{figure:chains} for an illustration. For the sake of clarity, let us define our sets $X_{a,b}$ formally. Let $C_{0}=\left\lceil \sqrt{\frac{1}{3}n \log n}\right\rceil$. Let $T=\bigcup_{k+1\leq i\leq C_{0}}A_{i}$ and $Z=[n]^{(>m+C_{0})}$. Then $|Z|\leq n^{-2/3}2^{n}$ by Claim~\ref{claim:binomial}, (3). For $ k+1\leq i\leq C_{0}$, let $\prec_{i}$ be a random total ordering on $A_{i}$ (chosen uniformly among all the total orders), and define the total ordering $\prec$ on $T$ such that for $x\in A_{a}$ and $y\in A_{b}$, we have $x\prec y$ if $a<b$, or $a=b$ and $x\prec_{a} y$. In other words, we randomly order the elements of the levels from $A_{k+1}$ to $A_{C_{0}}$, and then we lay out these levels next to each other; this is the total order $(T,\prec)$. Each set $X_{a,b}$ will be an interval in $T$ with respect to the total order $\prec$. Let $I^{*}=\{(a,b):1\leq a\leq b\leq k\}$, which will serve as the set of possible indices of these intervals. Order the elements of $I^{*}$ by $\prec'$ such that $(a,b)\prec' (a',b')$ if $b-a<b'-a'$, or $b-a=b'-a'$ and $a<a'$; then $\prec'$ will be the order of our desired intervals. Cut $T$ into intervals $X_{a,b}$, where $(a,b)\in I^{*}$, with the following procedure. Let $(1,1)=(a_{1},b_{1})\prec'\dots\prec' (a_{|I^{*}|},b_{|I^{*}|})$ be the elements of $I^{*}$, and let $X_{a_{1},b_{1}}$ be the initial segment of $T$ of size $|A_{0}|-|A_{1}|$.
Now if $X_{a_{l},b_{l}}$ is already defined for $l\geq 1$, and there are still at least $|A_{a_{l+1}-1}|-|A_{a_{l+1}}|$ elements of $T$ larger than $X_{a_{l},b_{l}}$ with respect to $\prec$, then let $X_{a_{l+1},b_{l+1}}$ be the $|A_{a_{l+1}-1}|-|A_{a_{l+1}}|$ smallest elements of $T$ larger than $X_{a_{l},b_{l}}$. Otherwise, stop, and set $I=\{(a_{j},b_{j}):1\leq j\leq l\}$. \begin{figure} \begin{tikzpicture} \draw[dashed] (0,0) rectangle (10,2.1) ; \draw (0,0) rectangle (10,0.3) ; \node at (5,0.15) {\tiny $A_{0}$} ; \draw (0,0.3) rectangle (9.6,0.6) ; \node at (4.8,0.45) {\tiny $A_{1}$} ; \draw (0,0.6) rectangle (9,0.9) ; \draw (0,0.9) rectangle (8.2,1.2) ; \draw (0,1.2) rectangle (7,1.5) ; \draw (0,1.5) rectangle (5.7,1.8) ; \draw (0,1.8) rectangle (4.5,2.1) ; \node at (2.25,1.95) {\tiny $A_k$} ; \draw[fill=black!10!white] (-2.1,3.5) rectangle (1.5,3.8) ; \node at (-0.3,3.97) {\tiny $A_{k+1}$} ; \draw[fill=black!20!white] (1.5,3.5) rectangle (4.5,3.8) ; \node at (3,3.97) {\tiny $A_{k+2}$} ; \draw[fill=black!30!white] (4.5,3.5) rectangle (6.9,3.8) ; \draw[fill=black!40!white] (6.9,3.5) rectangle (8.9,3.8) ; \draw[fill=black!50!white] (8.9,3.5) rectangle (10.6,3.8) ; \draw[fill=black!60!white] (10.6,3.5) rectangle (12.1,3.8) ; \node at (11.35,3.97) {\tiny $A_{C_{0}}$} ; \draw[decoration={brace,mirror,raise=5pt},decorate] (-2.1,3.5) -- node[below=6pt] {\small $T$} (12.1,3.5); \draw[->,very thick] (6.5,3) -- (7,2.4) ; \draw[fill=black!10!white] (9.6,0.3) rectangle (10,0.6) ; \node at (10.5,0.45) {\tiny $X_{1,1}$} ; \draw[->] (10.2,0.45) -- (9.8,0.45) ; \draw[fill=black!10!white] (9,0.6) rectangle (9.6,0.9) ; \node at (10.5,0.75) {\tiny $X_{1,2}$} ; \draw[fill=black!10!white] (8.2,0.9) rectangle (9,1.2) ; \draw[fill=black!10!white] (7,1.2) rectangle (8.2,1.5) ; \draw[fill=black!20!white] (5.7,1.5) rectangle (6.4,1.8) ; \draw[fill=black!10!white] (6.4,1.5) rectangle (7,1.8) ; \draw[fill=black!20!white] (4.5,1.8) rectangle (5.7,2.1) ; \node at (5.1,1.95) {\tiny 
$X_{k,k}$} ; \draw[fill=black!20!white] (9.6,0.6) rectangle (10,0.9) ; \draw[fill=black!20!white] (9,0.9) rectangle (9.6,1.2) ; \draw[->] (10.2,0.75) -- (9.8,0.75) ; \draw[fill=black!30!white] (8.2,1.2) rectangle (8.9,1.5) ; \draw[fill=black!20!white] (8.9,1.2) rectangle (9,1.5) ; \draw[fill=black!30!white] (7,1.5) rectangle (8.2,1.8) ; \draw[fill=black!40!white] (5.7,1.8) rectangle (6.5,2.1) ; \draw [fill=black!30!white](6.5,1.8) rectangle (7,2.1) ; \draw[fill=black!40!white] (9.6,0.9) rectangle (10,1.2) ; \draw[fill=black!40!white] (9,1.2) rectangle (9.6,1.5) ; \draw[fill=black!50!white] (8.2,1.5) rectangle (8.8,1.8) ; \draw[fill=black!40!white] (8.8,1.5) rectangle (9,1.8) ; \draw[fill=black!50!white] (7.1,1.8) rectangle (8.2,2.1) ; \draw[fill=black!60!white] (7,1.8) rectangle (7.1,2.1) ; \draw[fill=black!60!white] (9.6,1.2) rectangle (10,1.5) ; \draw[fill=black!60!white] (9,1.5) rectangle (9.6,1.8) ; \draw[fill=black!60!white] (8.6,1.8) rectangle (9,2.1) ; \draw[very thick] (9.6,0.3) rectangle (10,1.5) ; \draw[very thick] (9,0.6) rectangle (9.6,1.8) ; \draw[very thick] (7,1.2) rectangle (8.2,2.1) ; \draw[very thick] (5.7,1.5) rectangle (7,2.1) ; \draw[very thick] (4.5,1.8) rectangle (5.7,2.1) ; \draw[very thick] (8.2,0.9) -- (9,0.9) -- (9,2.1) -- (8.6,2.1) -- (8.6,1.8)-- (8.2,1.8) -- (8.2,0.9) ; \draw (0,0.3) rectangle (9.6,0.6) ; \draw (0,0.6) rectangle (9,0.9) ; \draw (0,0.9) rectangle (8.2,1.2) ;
Finally, we find a chain decomposition $\mathcal{D}_{0}$ of $A_{0}\cup\dots\cup A_{k}$ into $M$ chains, and attach the chains in $\mathcal{C}_{a}$ to those chains of $\mathcal{D}_{0}$ that end in $A_{a-1}$.} \label{figure:chains} \end{figure} As a reminder, for $l=0,\dots,k-1$, the \emph{$l$-th diagonal} is the union $\bigcup_{a:(a,a+l)\in I}X_{a,a+l}$. Say that a diagonal is \emph{complete} if $(k-l,k)\in I$. Let $\mu\leq k$ be the largest number such that the $(k-\mu)$-th diagonal is not complete. Then for every $1\leq a\leq k$, the number of indices $b$ such that $(a,b)\in I$ is at least $k+1-a-\mu$ (note that this number might be negative). Let us estimate $\mu$. \begin{claim}\label{claim:missingblocks} $\mu=O(n^{1/3}).$ \end{claim} \begin{proof} If the $l$-th diagonal is complete, then it contains $M-|A_{k-l}|$ elements. Consider the inequality $(k-1)M<|B|$. We have $|B|=\sum_{i=0}^{k}|A_{i}|+|T|+|Z|$, so this inequality can be rewritten as $|T|+|Z|>-2M+\sum_{i=1}^{k}(M-|A_{i}|)$. Since the $(k-\mu)$-th diagonal is not complete, we have $|T|\leq \sum_{l=0}^{k-\mu}(M-|A_{k-l}|)$, which then implies $|Z|\geq -2M+\sum_{i=1}^{\mu-1} (M-|A_{i}|)$. By Claim~\ref{claim:binomial},~(4), we have $|A_{i}|\leq M\left(1-\frac{i^{2}}{4n}\right)$. Therefore, $$|Z|\geq -2M+M\sum_{i=1}^{\mu-1}\frac{i^{2}}{4n}\geq \frac{(\mu-1)^{3}M}{12n}-2M.$$ From this, and using that $|Z|\leq n^{-2/3}2^n<M$, we conclude that $\mu=O(n^{1/3}).$ \end{proof} For $(a,b)\in I$, let $\phi(a,b)$ be the set of indices $r$ such that $A_{r}\cap X_{a,b}\neq\emptyset$. Say that the index $(a,b)\in I$ is \emph{whole} if $|\phi(a,b)|=1$, and say that $(a,b)$ is \emph{shattered} otherwise. In other words, $(a,b)$ is whole if $X_{a,b}$ is completely contained in a level, and shattered otherwise. 
Clearly, the number of shattered indices in $I$ is at most $C_{0}$ as $X_{a,b}$ is shattered if there exists $r$ such that $X_{a,b}$ contains the last point of $A_{r}$ and the first point of $A_{r+1}$ with respect to $\prec$. The proof of the following claim is rather technical and does not add much to the reader's understanding of the paper, hence we have moved it to the Appendix. \begin{claim}\label{claim:calc} Let $1\leq a\leq k$ and $a\leq b<b'\leq k$. Then $\phi(a,b)$ and $\phi(a,b')$ are disjoint. \end{claim} \emph{Remark.} This claim is quite important for our proof to work, and it seems more of a coincidence that it is actually true, rather than having some combinatorial reason behind it. To prove the claim, we do delicate calculations with binomial coefficients, which the interested reader can find in the Appendix. For $a=1,\dots,k$, let $$K_{a}=\bigcup_{\substack{b : (a,b)\in I\\ (a,b)\mbox{\footnotesize\ is whole}}} X_{a,b}.$$ Then $K_{a}$ is the union of $|A_{a-1}|-|A_{a}|$ sized random subsets of distinct levels, where the fact that these levels are distinct follows from Claim \ref{claim:calc}. In what follows, we would like to partition $K_{a}$ into roughly $|A_{a-1}|-|A_{a}|$ chains, most of them of size $\approx k-a$. In order to do this, it is enough to show that the size of the largest antichain of $K_{a}$ is not much larger than $|A_{a-1}|-|A_{a}|$. To bound the size of this largest antichain, we use the celebrated container method. The \emph{graph} container method, which we will use in the present work, dates back to works of Kleitman and Winston~\cite{kw1,kw2} from more than 30 years ago; for more recent applications see~\cite{btw,wojteksurvey}. We will use a multi-stage version of the method; this idea first appeared in~\cite{BSperner}.
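The mass-based arguments of the next subsection are driven by the same mechanism as the classical LYM inequality: every antichain $\mathcal{A}\subset 2^{[n]}$ satisfies $\sum_{x\in \mathcal{A}}\binom{n}{|x|}^{-1}\leq 1$. As background (our illustration, not part of the proof), the following Python sketch verifies by brute force that for tiny $n$ this weighted sum, maximized over antichains, is exactly $1$:

```python
from itertools import combinations
from math import comb

def lubell_mass(family, n):
    """Sum of 1 / C(n, |x|) over the family (subsets given as bitmasks)."""
    return sum(1 / comb(n, bin(x).count("1")) for x in family)

def is_antichain(family):
    # no member may contain another: neither x subset of y nor y subset of x
    return all((x & y) != x and (x & y) != y
               for x, y in combinations(family, 2))

def max_antichain_mass(n):
    """Brute-force maximum Lubell-mass of an antichain in 2^[n] (tiny n only)."""
    best = 0.0
    subsets = range(2 ** n)
    for r in range(1, 2 ** n + 1):
        for fam in combinations(subsets, r):
            if is_antichain(fam):
                best = max(best, lubell_mass(fam, n))
    return best
```

The maximum is attained by any full level (and already by the singleton $\{\emptyset\}$), matching the LYM bound.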
\subsection{Containers} In this section, we construct a small family $\mathcal{C}$ of subsets of $T$, which we shall refer to as \emph{containers}, such that every antichain of $T$ is contained in some element of $\mathcal{C}$, and each $C\in\mathcal{C}$ has small mass, where we use the following notion of mass. If $\mathcal{F}\subset 2^{[n]}$, the \emph{Lubell-mass} of $\mathcal{F}$ is $$\ell(\mathcal{F})=\sum_{x\in \mathcal{F}}\frac{1}{\binom{n}{|x|}}.$$ Next, we show that any family of large Lubell-mass must contain an element that is comparable to many other elements. \begin{claim}\label{claim:maxdeg} Let $\delta>0$, let $r$ be a positive integer, and let $\mathcal{F}\subset B$ be such that $\ell(\mathcal{F})=r+\delta$. Then there exists $x\in \mathcal{F}$ such that $x$ is comparable with at least $\frac{\delta}{(r+\delta)r!}\cdot(\frac{n}{2})^{r}$ elements of $\mathcal{F}$. \end{claim} \begin{proof} For each $x\in \mathcal{F}$, consider the number of elements of $\mathcal{F}$ comparable with $x$, and let $\Delta$ be the maximum of these numbers. Let $C$ be a maximal chain in $2^{[n]}$ chosen randomly from the uniform distribution. Note that $\mathbb{E}(|C\cap\mathcal{F}|)=\ell(\mathcal{F})=r+\delta$. Let $N$ be the number of pairs $(x,y)$ in $C\cap \mathcal{F}$ such that $x\subset y$ and $|y|-|x|\geq r$. On one hand, we have $N\geq |C\cap \mathcal{F}|-r$, hence $\mathbb{E}(N)\geq \delta$. On the other hand, if $x,y\in \mathcal{F}$ such that $x\subset y$ and $|y|-|x|\geq r$, then $$\mathbb{P}(x,y\in C)=\frac{|x|!(|y|-|x|)!(n-|y|)!}{n!}=\frac{1}{\binom{n}{|y|}}\cdot\frac{1}{\binom{|y|}{|x|}}\leq \frac{1}{\binom{n}{|y|}}\cdot\frac{1}{\binom{m+r}{r}}\leq \frac{r!}{\binom{n}{|y|}}\left(\frac{2}{n}\right)^{r},$$ noting that $|y|\geq m+r\geq \frac{n}{2}+r$. For $y\in \mathcal{F}$, let $D(y)=\{x\in \mathcal{F}:x\subset y, |y|-|x|\geq r\}$.
Then we can write \begin{align*} \mathbb{E}(N)&=\sum_{y\in\mathcal{F}}\sum_{x\in D(y)} \mathbb{P}(x,y\in C)\leq \sum_{y\in\mathcal{F}}|D(y)|\frac{r!}{\binom{n}{|y|}}\left(\frac{2}{n}\right)^{r}\\&\leq \Delta r!\left(\frac{2}{n}\right)^{r}\ell(\mathcal{F})=\Delta r!\left(\frac{2}{n}\right)^{r}(r+\delta). \end{align*} Comparing the right hand side with the lower bound $\delta\leq \mathbb{E}(N)$, we get the desired bound $\Delta\geq \frac{\delta}{(r+\delta)r!}\cdot(\frac{n}{2})^{r}$. \end{proof} Now we are ready to establish our container lemma. In the proof we will use the above claim only for $r=1,2$. \begin{lemma}\label{lemma:container} There exists a family $\mathcal{C}$ of subsets of $T$ such that \begin{enumerate} \item $|\mathcal{C}|\leq 2^{2^{n}n^{-3/2+o(1)}}$, \item for every $C\in\mathcal{C}$, we have $\ell(C)\leq 1+n^{-1/3+o(1)}$, \item if $I$ is an antichain in $T$, then there exists $C\in\mathcal{C}$ such that $I\subset C$. \end{enumerate} \end{lemma} \begin{proof} Let $G$ be the comparability graph of $T$, and let $<$ be an arbitrary total ordering on $T$. Let $I$ be an antichain of $T$. We build a container containing $I$ with the help of the following algorithm. \begin{description} \item[Step 0] Set $S_{0}:=\emptyset$ and $G_{0}:=G$. \item[Step $i$] Let $v_{i}$ be the smallest vertex (with respect to $<$) of $G_{i-1}$ with maximum degree. If $\ell(V(G_{i-1}))\geq 1+n^{-1/2}$, then consider two cases. \begin{itemize} \item if $v_{i}\not\in I$, then let $G_{i}:=G_{i-1}\setminus\{v_{i}\}$, $S_{i}:=S_{i-1}$ and proceed to step $i+1$, \item if $v_{i}\in I$, then let $S_{i}:=S_{i-1}\cup \{v_{i}\}$ and $G_{i}:=G_{i-1}\setminus (\{v_{i}\}\cup N_{G_{i-1}}(v_{i}))$, and proceed to step $i+1$. \end{itemize} On the other hand, if $\ell(V(G_{i-1}))<1+n^{-1/2}$, then set $S=S_{i-1}$, $f(S)=V(G_{i-1})$ and terminate the algorithm. \end{description} Call the set $S$ a \emph{fingerprint}.
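For illustration (this is not part of the proof), the branching procedure above is easy to simulate. The Python sketch below, with our own naming and with the Lubell-mass threshold replaced by a plain count of surviving vertices, also exhibits the key property used next: the final container is reconstructible from the fingerprint $S$ alone.

```python
from itertools import combinations

def build_container(G, I, stop):
    """Greedy container algorithm on a comparability graph G
    (dict: vertex -> set of comparable vertices).  I is an antichain,
    i.e. an independent set of G; 'stop' is the termination threshold
    on the number of surviving vertices (a stand-in for the
    Lubell-mass condition of the paper)."""
    V, S = set(G), set()
    while len(V) > stop:
        # vertex of maximum degree, ties broken by a fixed total order
        v = min(V, key=lambda u: (-len(G[u] & V), sorted(u)))
        if v in I:
            S.add(v)
            V -= {v} | G[v]   # delete v together with its neighbourhood
        else:
            V.discard(v)      # delete v only
    return S, V               # fingerprint S and f(S) = V

# comparability graph of 2^[3]
subs = [frozenset(c) for k in range(4) for c in combinations(range(3), k)]
G = {x: {y for y in subs if y != x and (x < y or y < x)} for x in subs}

level1 = {x for x in subs if len(x) == 1}       # an antichain
S, V = build_container(G, level1, stop=4)
assert level1 <= S | V                 # the container S | f(S) covers I
assert build_container(G, S, stop=4) == (S, V)  # determined by S alone
```

The second assertion is the analogue of the observation that $V(G_{i-1})$ at termination depends only on $S$: replaying the algorithm while testing membership in $S$ instead of $I$ makes exactly the same choices.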
Note that $V(G_{i-1})$ only depends on $S$, so the function $f$ is properly defined on the set of fingerprints. Finally, set $C=S\cup f(S)$, then $C$ contains $I$. Let $\mathcal{C}$ be the family of the sets $C$ obtained as $I$ ranges over all antichains of $T$. Now let us estimate the size of $S$. We study our algorithm by dividing the steps into phases depending on $\ell(V(G_{i-1}))$. \begin{description} \item[Phase -1] This phase consists of those steps $i$ for which $\ell(V(G_{i-1}))\geq 3$, and let $i'$ be the last step in this phase. In every such step, the maximum degree of $G_{i-1}$ is at least $\frac{n^{2}}{24}$ by Claim~\ref{claim:maxdeg} (with $r=2$ and $\delta=1$). If we added $v_{i}$ to $S_{i-1}$, then we have $|V(G_{i})|\leq |V(G_{i-1})|-\frac{n^{2}}{24}$, which means that $|V(G_{i'})|\leq 2^{n}-\frac{|S_{i'}|n^{2}}{24}$. Therefore, $|S_{i'}|\leq \frac{24\cdot 2^{n}}{n^{2}}.$ Let $T_{-1}=S_{i'}$. \item[Phase 0] This phase consists of those steps $i$ for which $ 3>\ell(V(G_{i-1}))\geq 2$, and let $i_{0}$ be the last step of this phase. Also, let $T_{0}=S_{i_{0}}\setminus T_{-1}$, the set of elements we added to $S$ during this phase. In this phase, we have $|V(G_{i-1})|\leq 3M$ and by Claim~\ref{claim:maxdeg} (with $r=1$ and $\delta=1$), the maximum degree of $G_{i-1}$ is at least $\frac{n}{4}$. If $v_{i}\in I$, then we have $|V(G_{i})|\leq |V(G_{i-1})|-\frac{n}{4}$, which means that $|V(G_{i_{0}})|\leq 3M-|T_{0}|\frac{n}{4}$. Therefore, $|T_{0}|\leq \frac{12M}{n}<12\cdot 2^{n}n^{-3/2}.$ \item[Phase r] For $r=1,\dots,\frac{1}{2}\log_{2} n$, phase $r$ consists of those steps $i$ for which $1+\frac{1}{2^{r-1}}>\ell(V(G_{i-1}))\geq 1+\frac{1}{2^{r}}$. Let $i_{r}$ be the last step of phase $r$ and let $T_{r}=S_{i_{r}}\setminus S_{i_{r-1}}$, the set of elements we added to $S$ during phase $r$. By Claim~\ref{claim:maxdeg} (with $r=1$ and $\delta=\frac{1}{2^{r}}$), the maximum degree of $G_{i-1}$ is at least $\frac{n}{2^{r+2}}$.
Also, $\ell(V(G_{i_{r-1}})\setminus V(G_{i_{r}}))\leq \frac{1}{2^{r}}$, so $|V(G_{i_{r-1}})\setminus V(G_{i_{r}})|\leq \frac{M}{2^{r}}$. Moreover, $|V(G_{i_{r}})|\leq |V(G_{i_{r-1}})|-|T_{r}|\frac{n}{2^{r+2}}$, which gives $$|T_{r}|\leq \frac{4M}{n}\leq \frac{4\cdot 2^{n}}{n^{3/2}}.$$ \end{description} Therefore, at the end of the process, we get $$|S|=\sum_{r=-1}^{\frac{1}{2}\log_{2} n} |T_{r}|\leq \frac{3\cdot 2^{n}\log_{2} n}{n^{3/2}}.$$ Hence, there are at most $$\binom{2^{n}}{3\cdot 2^{n}n^{-3/2}\log_{2} n}=2^{2^{n}n^{-3/2+o(1)}}$$ fingerprints, which is also an upper bound for $|\mathcal{C}|$. It only remains to bound $\ell(C)$. Recall that $T$ contains only sets of size at most $m+C_0$, $\binom{n}{m+C_{0}}=(1+o(1))n^{-2/3} M$ and $M \leq O(2^n/\sqrt{n})$. Thus we have $$\ell(C)\leq\ell(f(S))+\ell(S)\leq 1+n^{-1/2}+\frac{|S|}{\binom{n}{m+C_{0}}}\leq 1+n^{-1/2}+(1+o(1))n^{2/3}\frac{|S|}{M}.$$ Here, $n^{2/3}\frac{|S|}{M}=O(n^{-1/3}\log n)$, so $\ell(C)\leq 1+O(n^{-1/3}\log n)$. \end{proof} \subsection{Antichains} The aim of this section is to bound the size of the maximal antichain in $K_{a}$. Recall that for $(a,b)\in I$, $\phi(a,b)$ is the set of indices $r$ such that $A_{r}\cap X_{a,b}\neq\emptyset$, and $$K_{a}=\bigcup_{\substack{b: (a,b)\in I\\(a,b)\mbox{\footnotesize\ is whole}}} X_{a,b}.$$ \begin{lemma}\label{lemma:antichain} Let $a\geq n^{1/10}$. With probability at least $1-2^{-n^2}$, the size of the maximal antichain of $K_{a}$ is at most $$\left(1+\frac{n^{o(1)}}{\sqrt{a}}\right)(|A_{a-1}|-|A_{a}|).$$ \end{lemma} \begin{proof} Let $A=|A_{a-1}|-|A_{a}|$, then by Claim~\ref{claim:binomial} (5), we have $A=\Theta(a2^{n}n^{-3/2})$. Let $E$ be the set of indices $b$ such that $(a,b)\in I$ and $(a,b)$ is whole. If $b\in E$, let $r_b$ be the unique index such that $X_{a,b}\subset A_{r_{b}}$. Then $X_{a,b}$ is an $A$-element subset of $A_{r_b}$, chosen from the uniform distribution on all $A$-element subsets.
Also, as the sets $\phi(a,b)$ for $b\in E$ are pairwise disjoint by Claim \ref{claim:calc}, the system of random variables $\{X_{a,b}:b\in E\}$ is independent. Instead of $X_{a,b}$, it is more convenient to work with the set $Y_{a,b}$ which we get by selecting each element of $A_{r_b}$ independently with probability $p_b=\frac{A}{|A_{r_b}|}$. Indeed, $X_{a,b}$ has the same distribution as $Y_{a,b}$ conditioned on the event $|Y_{a,b}|=A$, and $$\mathbb{P}(|Y_{a,b}|=A)=p_b^A(1-p_b)^{|A_{r_b}|-A}\binom{|A_{r_b}|}{A}>\frac{1}{|A_{r_b}|}>2^{-n},$$where the second to last inequality can be seen by observing that the function $f(x)=\mathbb{P}(|Y_{a,b}| = x)$ is increasing for $x\leq A$ and decreasing for $x\geq A$. Let $D=\bigcup_{b\in E}Y_{a,b}$ and $U=\bigcup_{b\in E}A_{r_b}$. Let $\mathcal{C}$ be the family of containers of $T$ given by Lemma~\ref{lemma:container}. Let $\delta$ be a real number such that $n^{-1/3+1/20}<\delta<1$, let $C\in\mathcal{C}$ and consider the probability that $W=|C\cap D|$ is larger than $A(1+\delta)$. First of all, we have $$\mathbb{E}(W)=\sum_{b\in E}\frac{A|C\cap A_{r_b}|}{|A_{r_b}|}=A\sum_{b\in E}\frac{|C\cap A_{r_{b}}|}{\binom{n}{m+r_{b}}}=A\,\ell(C\cap U)\leq A(1+n^{-1/3+o(1)}).$$ Now let us estimate the probability that $W\geq (1+\delta)A$. Let $\delta'=(1+\delta)\frac{A}{\mathbb{E}(W)}-1$, then $(1+\delta)A=(1+\delta')\mathbb{E}(W)$. Using the property that $\delta>n^{-1/3+1/20}$, we have $\delta'\geq \delta \frac{A}{2\mathbb{E}(W)}$. But $W$ is the sum of Bernoulli random variables, so we can apply Chernoff's inequality (Claim~\ref{claim:chernoff}).
Consider two cases: if $\delta'\leq 1$, then $$\mathbb{P}(W\geq (1+\delta')\mathbb{E}(W))\leq e^{-\frac{(\delta')^2}{3}\mathbb{E}(W)}\leq e^{-\delta^2 \frac{A^2}{12\mathbb{E}(W)}}\leq e^{-\frac{\delta^2 A}{24}},$$ and if $\delta'>1$, then $$\mathbb{P}(W\geq (1+\delta')\mathbb{E}(W))\leq e^{-\frac{\delta'}{3}\mathbb{E}(W)}\leq e^{-\delta \frac{A}{6}}< e^{-\frac{\delta^2 A}{24}}.$$ Choose $\delta$ such that $e^{-\frac{\delta^2 A}{24}}|\mathcal{C}|=2^{-2n^{2}}$. Since $A=\Theta(a2^{n}n^{-3/2})$ and $|\mathcal{C}|\leq 2^{2^{n}n^{-3/2+o(1)}}$, we have $\delta=\frac{n^{o(1)}}{\sqrt{a}}$. Note that $n^{-1/3+1/20}<\delta<1$ holds, so the previous calculations are valid for this choice of $\delta$. By the union bound, the probability that there exists $C\in \mathcal{C}$ such that $|C\cap D|\geq (1+\delta)A$ is at most $|\mathcal{C}| e^{-\frac{\delta^2 A}{24}}=2^{-2n^{2}}.$ But every independent set of $U$ is contained in some $C\in\mathcal{C}$, so the probability $q'$ that $D$ contains an independent set of size larger than $(1+\delta)A$ is at most $2^{-2n^{2}}$. Finally, let $q$ be the probability that $K_{a}$ has an independent set larger than $(1+\delta)A$. Then $q$ is equal to the probability that $D$ has an independent set of size larger than $(1+\delta)A$, conditioned on the event that $|Y_{a,b}|=A$ for every $b\in E$. But the probability of this event is at least $2^{-n|E|}> 2^{-n^{2}}$, since $|E|\leq k=O(\sqrt{n})$, so $q\leq q'2^{n^{2}}\leq 2^{-n^{2}}$. \end{proof} \subsection{Matchings} By the previous lemma and by Dilworth's theorem \cite{D50}, we know that $K_{a}$ can be partitioned into slightly more than $(|A_{a-1}|-|A_{a}|)$ chains. We would like to attach most of these chains to a chain decomposition of the union of the levels $A_{0}\cup\dots\cup A_{k-1}$. This section is devoted to the following lemma, which deals with this problem.
For $a=1,\dots,k$, let $B_a$ be the bipartite graph with vertex classes $A_{a-1}$ and $A_{a}\cup X_{a,a}$, where the edges between the two vertex classes are the comparable pairs. Note that $B_{a}$ is a \emph{balanced} bipartite graph, that is, $|A_{a-1}|=|A_{a}\cup X_{a,a}|$. \begin{lemma}\label{lemma:matching} If $(a,a)$ is whole, then with probability at least $1-2^{-n}$, there exists a matching $M_a$ in $B_a$ such that $M_a$ covers every element of $A_a$, and $M_{a}$ covers all but $O(\frac{2^{n}}{n^{5/4}})$ elements of $X_{a,a}$. \end{lemma} We prepare the proof of this lemma with a number of simple claims, the first one of which is a form of the LYM inequality. \begin{claim}\label{claim:normalizedmatching} Let $i,j\in \{0,\dots,n-m\}$, $i\neq j$. Let $G$ be the bipartite graph with vertex classes $A_i$ and $A_j$ such that the edges of $G$ are the comparable pairs. Then for every $X\subset A_i$, we have $$\frac{|X|}{|A_{i}|}\leq \frac{|N_{G}(X)|}{|A_j|}.$$ \end{claim} \begin{proof} Suppose that $i<j$, the other case can be handled in a similar manner. Let $e$ denote the number of edges between $X$ and $N_{G}(X)$. Counting $e$ from the vertices in $X$, we get $e=|X|\binom{n-m-i}{j-i}$. Counting the edges by the vertices in $N_{G}(X)$, we get $e\leq |N_{G}(X)|\binom{j+m}{j-i}$. Therefore, $|X|\binom{n-m-i}{j-i}\leq |N_{G}(X)|\binom{j+m}{j-i}$, which is equivalent to $\frac{|X|}{|A_{i}|}\leq \frac{|N_{G}(X)|}{|A_j|}.$ \end{proof} \begin{claim}\label{claim:hall}(Defect version of Hall's theorem~\cite{H35}, see also~\cite{ADH98}) Let $G$ be a bipartite graph with vertex classes $A$ and $B$, and let $\Delta$ be a positive integer. Suppose that for every $X\subset A$, we have $|N_{G}(X)|\geq |X|-\Delta$. Then $G$ contains a matching of size at least $|A|-\Delta$. \end{claim} \begin{claim}\label{claim:levels} Let $0\leq i<n-m$ and let $G$ be the bipartite graph with vertex classes $A_{i}$ and $A_{i+1}$ in which the edges are the comparable pairs. 
Then there exists a complete matching from $A_{i+1}$ to $A_{i}$. \end{claim} \begin{proof} This follows easily from Claim~\ref{claim:normalizedmatching} and Hall's theorem (Claim~\ref{claim:hall} with $\Delta=0$). Indeed, for every $X\subset A_{i+1}$, we have $|N_{G}(X)|\geq \frac{|A_{i}|}{|A_{i+1}|}|X|\geq |X|$, so Hall's condition is satisfied. Therefore, there exists a matching of size $|A_{i+1}|$. \end{proof} \begin{corollary}\label{cor:leftover} Let $T'=[n]^{(\geq m+k)}$. Then $T'$ can be partitioned into $|A_{k}|$ chains. \end{corollary} \begin{proof} For $i=k,\dots,n-m-1$, let $M_{i}$ be a complete matching from $A_{i+1}$ to $A_{i}$. For $x\in A_{k}$, let $C_{x}$ be the chain with elements $x=x_{0}\subset\dots\subset x_{l}$, where $x_{j}$ is matched to $x_{j+1}$ in $M_{k+j}$ for $j=0,\dots,l-1$, and $x_{l}$ is not covered by the matching $M_{k+l}$. Then $\{C_{x}\}_{x\in A_{k}}$ is a chain partition of $T'$ into $|A_{k}|$ chains. \end{proof} \begin{claim}\label{claim:maxmatching}(see for example~\cite{ADH98}) Let $G$ be a bipartite graph and $M$ be a matching in $G$. Then there exists a maximum matching $M'$ in $G$ such that $V(M)\subset V(M')$. \end{claim} \begin{claim}\label{claim:k+1-k+2} $X_{a,a}\subset (A_{k+1}\cup A_{k+2})$, and in particular, $X_{k-1,k-1},X_{k,k}\subset A_{k+2}$. \end{claim} \begin{proof} By numerical calculations, we have $0.4M< |A_{k+2}|<|A_{k+1}|< |A_{k}|<0.46 M$, so the inequalities $|A_{k+1}|<M-|A_{k}|<|A_{k+1}|+|A_{k+2}|$ hold. Here, $M-|A_{k}|$ is the size of the $0$-th diagonal, which contains $X_{1,1},\dots,X_{k,k}$. The inequalities show that this diagonal contains $A_{k+1}$ and a constant proportion of $A_{k+2}$. But then as $X_{k-1,k-1},X_{k,k}$ are the last elements of this diagonal, we have $X_{k-1,k-1},X_{k,k}\subset A_{k+2}$. \end{proof} Now we are ready to prove the main lemma of this section.
\begin{proof}[Proof of Lemma~\ref{lemma:matching}] As $(a,a)$ is whole, there exists an index $r$ such that $X_{a,a}\subset A_{r}$, and let $A=|A_{a-1}|-|A_a|$. By Claim \ref{claim:k+1-k+2}, we have $r\in \{k+1,k+2\}$, and in particular, $r=k+2$ if $a\in \{k-1,k\}$. Similarly as before, instead of working with the random set $X_{a,a}$, we will work with the set $Y$ we get by selecting each element of $A_r$ independently with probability $p=\frac{A}{|A_r|}$. Indeed, $X_{a,a}$ has the same distribution as $Y|(|Y|=A)$, and $\mathbb{P}(|Y|=A)\geq \frac{1}{|A_r|}\geq 2^{-n}$. Let $E$ be the bipartite graph with vertex classes $A_{a-1}$ and $A_{a}\cup Y$, where the edges are given by the comparable pairs, and let $E'$ be the subgraph of $E$ induced on $A_{a-1}\cup Y$. Consider the degrees in $E'$ of the vertices in $A_{a-1}$. Every $x\in A_{a-1}$ is comparable with exactly $d=\binom{n-m-a+1}{r-(a-1)}$ elements of $A_r$, so for every $x\in A_{a-1}$, $\mathbb{E}(\deg_{E'}(x))=pd$. Next, let us bound $pd$. Consider two cases. \begin{description} \item[$a=k$] We have $r-(a-1)=3$, so $$d=\binom{n-m-a+1}{r-(a-1)}\geq \binom{n/3}{3}=\Omega(n^3).$$ Also, we have $p=\frac{A}{|A_r|}=\Omega(n^{-1/2})$ by the following estimates: $A=\Theta(a2^{n}n^{-3/2})=\Omega(2^{n}n^{-1})$ by Claim \ref{claim:binomial}, (5), and $|A_{r}|=\Theta(2^{n}n^{-1/2})$ by Claim \ref{claim:binomial}, (1)-(2). Therefore, $pd=\Omega(n^{5/2})$. \item[$a<k$] We have $r-(a-1)\geq 4$. Indeed, if $a=k-1$, then $r=k+2$, and if $a\leq k-2$, then $r\geq k+1$. But then $$d=\binom{n-m-a+1}{r-(a-1)}\geq \binom{n/3}{4}=\Omega(n^4).$$ Also $p=\frac{A}{|A_r|}=\Omega(n^{-1})$ as $A=\Theta(a2^{n}n^{-3/2})=\Omega(2^{n}n^{-3/2})$ by Claim \ref{claim:binomial}, (5), and $|A_{r}|=\Theta(2^{n}n^{-1/2})$ by Claim \ref{claim:binomial}, (1)-(2). Therefore, we get $pd=\Omega(n^{3})=\Omega(n^{5/2})$. \end{description} Now consider the degree of $x$ in $E'$.
As $\deg_{E'}(x)$ is the sum of independent Bernoulli random variables, we can apply Chernoff's inequality with $\delta<1$ (Claim~\ref{claim:chernoff}) to get $$\mathbb{P}(\deg_{E'}(x)\geq (1+\delta)pd)\leq e^{-\frac{\delta^{2}pd}{3}}.$$ Choose $\delta$ such that $\delta^{2}pd=12n$; then $\delta=O(n^{-3/4})$. Let $\mathcal{E}$ be the event that there exists $x\in A_{a-1}$ such that $\deg_{E'}(x)\geq (1+\delta)pd$. By the union bound, $\mathbb{P}(\mathcal{E})\leq |A_{a-1}|e^{-4n}<2^{-2n}.$ Moreover, $\mathbb{P}(\mathcal{E}|(|Y|=A))\leq \frac{\mathbb{P}(\mathcal{E})}{\mathbb{P}(|Y|=A)}\leq 2^{-n}$. To finish the proof, it is enough to show that if $\overline{\mathcal{E}}\cap (|Y|=A)$ happens, then the desired matching exists. Let $d'=\binom{m+r}{r-(a-1)}$, then the degree of every vertex in $Y$ is $d'$. Let $U\subset Y$, $V=N_{E}(U)$ and let $f$ be the number of edges between $U$ and $V$. We have $$d'|U|=f\leq (1+\delta)pd|V|,$$ which implies $|V|\geq \frac{d'}{d(1+\delta)p}|U|.$ Note that $\frac{d'}{pd}=\frac{1}{p}\cdot \frac{|A_{a-1}|}{|A_{r}|}=\frac{|A_{a-1}|}{A}$ and $\frac{|U|}{A}\leq 1$, so $$|V|\geq |U|\frac{|A_{a-1}|}{A(1+\delta)}\geq|U|\frac{|A_{a-1}|}{A}(1-\delta)\geq |U|\frac{|A_{a-1}|}{A}-\delta|A_{a-1}|.$$ Also, by Claim~\ref{claim:normalizedmatching}, for every $U'\subset A_{a}$, we have $|N_{E}(U')|\geq |U'|\frac{|A_{a-1}|}{|A_{a}|}$. Now we show that Hall's condition holds in $E$ with defect $\Delta=\delta|A_{a-1}|=O(\frac{2^{n}}{n^{5/4}})$, that is, for every $U_{0}\subset A_{a}\cup Y$, we have $|N_{E}(U_{0})|\geq |U_{0}|-\Delta$. If this is true, then Claim~\ref{claim:hall} implies that there exists a matching of size at least $(1-\delta)|A_{a-1}|$ in $E$. But there exists a complete matching $M$ in $E$ from $A_{a}$ to $A_{a-1}$ by Claim~\ref{claim:levels}, so there exists a maximum matching $M'$ that covers every element of $A_{a}$ by Claim~\ref{claim:maxmatching}. Then $M'$ satisfies the desired properties.
Let $U_{0}\subset A_{a}\cup Y$, $U=U_{0}\cap Y$ and $U'=U_{0}\cap A_{a}$. Then \begin{align*} |N_{E}(U_{0})|&\geq \max\{|N_{E}(U)|,|N_{E}(U')|\}\\ &\geq |A_{a-1}|\max\left\{\frac{|U|}{A}-\delta,\frac{|U'|}{|A_{a}|}\right\}\\ &\geq |A_{a-1}|\max\left\{\frac{|U|}{A},\frac{|U'|}{|A_{a}|}\right\}-\Delta. \end{align*} Let $\alpha=\frac{|U|}{A}$ and $\beta=\frac{|U'|}{|A_{a}|}$. If $\alpha\geq \beta$, then $|U'|\leq \alpha |A_{a}|$ and $|U_{0}|\leq \alpha(|A_{a}|+A)=\alpha |A_{a-1}|$. Therefore, $|N_{E}(U_{0})|\geq |A_{a-1}|\alpha-\Delta \geq |U_{0}|-\Delta$. We can proceed similarly if $\alpha<\beta$. This finishes the proof. \end{proof} \subsection{The proof of Theorem \ref{thm:mainthm}} For $x\in 2^{[n]}$, let $x^c=[n]\setminus x$, and for $F\subset 2^{[n]}$, let $$\overline{F}=\{x^c: x\in F\}.$$ Fix $\lambda=n^{-1/16}$. It is enough to partition $B$ into $\binom{n}{\lfloor n/2\rfloor}$ chains such that all but at most $O(Mn^{-1/8})$ of the chains have size between $k-3\lambda k$ and $k+3\lambda k$. Indeed, let $\mathcal{D}$ be such a chain partition, and for $x\in A_{0}$, let $D_{x}\in \mathcal{D}$ be the chain containing $x$. If $n$ is even, let $D_{x}^+=D_{x}\cup \overline{D_{x^c}}$. If $n$ is odd, let $\tau: A_{0}\rightarrow [n]^{(\frac{n-1}{2})}$ be an arbitrary bijection such that $\tau(x)\subset x$ for every $x\in A_0$, and set $D^+_{x}=D_x\cup \overline{D_{\tau(x)^c}}$. Then $\mathcal{D}^+=\{D_x^+:x\in A_{0}\}$ is a chain partition of $2^{[n]}$ into $\binom{n}{\lfloor n/2\rfloor}$ chains with the desired properties. In the rest of this section, we prove that there exists a chain partition of $B$ with the properties above.
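For comparison, the classical bracketing rule of Greene and Kleitman already partitions $2^{[n]}$ into exactly $\binom{n}{\lfloor n/2\rfloor}$ symmetric chains, though without the near-uniform length guarantee of Theorem \ref{thm:mainthm}. A minimal Python sketch of that classical decomposition (illustrative only; the function names are ours):

```python
from itertools import combinations
from math import comb

def symmetric_chain(x, n):
    """Chain through x in the Greene--Kleitman bracketing decomposition
    of 2^[n]: read position i as '(' if i is missing from x and ')' if
    it is present, match brackets, and let the chain vary exactly the
    unmatched positions (unmatched ')' precede unmatched '(')."""
    stack, kept, free = [], [], []
    for i in range(n):
        if i not in x:
            stack.append(i)        # an opening bracket
        elif stack:
            stack.pop()            # i is matched: stays in every member
            kept.append(i)
        else:
            free.append(i)         # unmatched ')' -- a flippable position
    free += stack                  # then the unmatched '(' positions
    base = frozenset(kept)
    return tuple(base | frozenset(free[:j]) for j in range(len(free) + 1))

n = 5
power_set = [frozenset(c) for k in range(n + 1) for c in combinations(range(n), k)]
chains = {symmetric_chain(x, n) for x in power_set}
assert len(chains) == comb(n, n // 2)            # number of chains
assert sum(len(C) for C in chains) == 2 ** n     # they partition 2^[n]
assert all(len(C[0]) + len(C[-1]) == n for C in chains)  # symmetric
```

In this classical decomposition the chain lengths are spread out over many scales; the content of Theorem \ref{thm:mainthm} is that the lengths can instead be concentrated around $2^{n}/\binom{n}{\lfloor n/2\rfloor}$.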
By Lemma~\ref{lemma:antichain} and Lemma~\ref{lemma:matching}, there is a choice for the sets $X_{a,b}$, $(a,b)\in I$ such that for $n^{1/10}<a\leq k$, the size of the maximal antichain in $K_{a}$ is at most $(1+\frac{n^{o(1)}}{\sqrt{a}})(|A_{a-1}|-|A_{a}|)$, and there is a matching $M_{a}$ in $B_{a}$ that covers every element of $A_{a}$, and covers all but at most $O(2^{n}n^{-5/4})$ elements of $X_{a,a}$. First, we shall cover most elements of $B$ by chains, most of whose size is between $k(1-3\lambda)$ and $k+1$, while collecting certain elements of $B$ which are not covered into a set $\mathcal{L}$. We refer to the elements of $\mathcal{L}$ as \emph{leftovers}. First of all, put every element $x\in B$ satisfying $|x|\geq m+C_{0}$ into $\mathcal{L}$. Then we added at most $2^{n}n^{-2/3}$ elements to $\mathcal{L}$. Also, we put every element of $X_{a,b}$ for $a\leq n^{1/10}$ and $a<b\leq k$ in $\mathcal{L}$. Then, by Claim \ref{claim:binomial}, (5), we put at most $$\sum_{a\leq n^{1/10}}\sum_{b=a}^{k} |X_{a,b}|\leq k\sum_{a\leq n^{1/10}} (|A_{a-1}|-|A_{a}|)=O(2^{n}n^{-4/5})$$ elements in $\mathcal{L}$. So far $$|\mathcal{L}|=O(2^{n}n^{-2/3}).$$ For $a=1,\dots,k$, say that $a$ is shattered if the number of indices $b$ such that $(a,b)\in I$ and $(a,b)$ is shattered is at least $\lambda k$. If $a$ is shattered, then put every element of $\bigcup_{b:(a,b)\in I} X_{a,b}$ into $\mathcal{L}$. In total, there are fewer than $C_{0}$ shattered pairs $(a,b)$, so the number of shattered indices $a$ is at most $\frac{C_{0}}{\lambda k}$. The size of the set $\bigcup_{b:(a,b)\in I} X_{a,b}$ is $$\sum_{b: (a,b)\in I}|X_{a,b}|=O(ka2^{n}n^{-3/2})=O(2^{n}n^{-1/2}),$$ so we added at most $O(\frac{C_{0}}{\lambda k}2^{n}n^{-1/2})=2^{n}n^{-7/16+o(1)}$ elements to $\mathcal{L}$. Also, for every $(a,b)\in I$, if $(a,b)$ is shattered, put every element of $X_{a,b}$ into $\mathcal{L}$.
The number of shattered pairs is less than $C_{0}$, and $|X_{a,b}|=O(a2^{n}n^{-3/2})=O(2^{n}n^{-1})$, so we added at most $O(C_{0}2^{n}n^{-1})=2^{n}n^{-1/2+o(1)}$ elements to $\mathcal{L}$. So far $$|\mathcal{L}|\leq 2^{n}n^{-7/16+o(1)}.$$ Now let $n^{1/10}<a\leq k-1$ be such that $a$ is not shattered. Let $A=|A_{a-1}|-|A_{a}|=\Theta(a2^{n}n^{-3/2})$, and let $r$ be the size of the set $\{b:(a,b)\in I,|\phi(a,b)|=1\}$. Then $k+1-a\geq r\geq k+1-a-\lambda k-\mu>k+1-a-2\lambda k$, where $\mu=O(n^{1/3})<\lambda k$ by Claim~\ref{claim:missingblocks}. Also, we have $$|K_{a}|=rA.$$ By the well-known theorem of Dilworth~\cite{D50}, $K_{a}$ can be partitioned into at most $(1+\frac{n^{o(1)}}{\sqrt{a}})A$ chains; let $\mathcal{C}_{a}$ denote the collection of chains in such a chain decomposition. Say that a chain $L\in \mathcal{C}_{a}$ is \emph{short} if $|L|\leq r-\lambda k$, and let $N_{\mbox{\tiny short}}$ denote the number of short chains. The size of every chain in $K_{a}$ is at most $r$, so $$rA=|K_{a}|\leq (r-\lambda k)N_{\mbox{\tiny short}}+(|\mathcal{C}_{a}|-N_{\mbox{\tiny short}})r,$$ which implies $$N_{\mbox{\tiny short}}<\frac{n^{o(1)}r}{\lambda k\sqrt{a}}A\leq \frac{\sqrt{a}}{\lambda}2^{n}n^{-3/2+o(1)}\leq 2^{n}n^{-19/16+o(1)}.$$ Say that a chain in $\mathcal{C}_{a}$ is \emph{irrelevant} if its minimum is not in $X_{a,a}$. Then the number of irrelevant chains is $$N_{\mbox{\tiny irr}}=|\mathcal{C}_{a}|-|X_{a,a}|<\frac{n^{o(1)}}{\sqrt{a}}A=\sqrt{a}2^{n}n^{-3/2+o(1)}=O(2^{n}n^{-5/4}).$$ Finally, say that a chain $L\in \mathcal{C}_{a}$ is \emph{sad} if its minimum $z$ is in $X_{a,a}$, but $z$ is not covered by the matching $M_{a}$. Then the number of sad chains is $$N_{\mbox{\tiny sad}}=O(2^{n}n^{-5/4}).$$ Let $\mathcal{C}_{a}^{*}$ be the set of chains in $\mathcal{C}_{a}$ that are neither short, irrelevant, nor sad, and let $L_{a}\subset K_{a}$ be the set of elements that are not covered by any chain in $\mathcal{C}_{a}^{*}$.
Then $$|L_{a}|\leq k(N_{\mbox{\tiny short}}+N_{\mbox{\tiny irr}}+N_{\mbox{\tiny sad}})\leq k2^{n}n^{-19/16+o(1)}\leq 2^{n}n^{-11/16+o(1)}.$$ Add every element of $L_{a}$ for $n^{1/10}<a\leq k$ to the set of leftovers $\mathcal{L}$. In total, we added at most $k2^{n}n^{-11/16+o(1)}=2^{n}n^{-3/16+o(1)}$ elements to $\mathcal{L}$. At this point, we have $|\mathcal{L}|=2^{n}n^{-3/16+o(1)}$, and we do not add any more elements to $\mathcal{L}$. Construct the family of chains $\mathcal{D}$ as follows. First, using the matchings $M_{1},\dots,M_{k}$, construct a chain decomposition $\mathcal{D}_{0}$ of $\bigcup_{i=0}^{k}A_{i}$. For $x\in A_{0}$, let $D_{x}$ be the chain $x=x_{0}\subset\dots\subset x_{l}$, where $x_{i-1}$ is matched to $x_{i}$ in $M_{i}$ for $i=1,\dots,l$, and either $l=k$, or $x_{l}$ is not matched to any element of $A_{l+1}$ in $M_{l+1}$. Then $\mathcal{D}_{0}=\{D_{x}:x\in A_{0}\}$ is a chain decomposition of $\bigcup_{i=0}^{k}A_{i}$ into $M$ chains such that if a chain has maximum element in $A_{l}$, then the size of the chain is exactly $l+1$. Now consider some $D\in \mathcal{D}_{0}$. If $y\in A_{a-1}$ is the maximum element of $D$, $y$ is matched to some $z\in X_{a,a}$ in $M_{a}$, and there exists $C\in\mathcal{C}^{*}_{a}$ such that $z$ is the minimal element of $C$, then let $D^{+}=D\cup C$ and say that $D$ is \emph{compatible}. Noting that $|D|=a$ and $k+1-a\geq |C|\geq k+1-a-3\lambda k$, we have $k+1\geq |D^{+}|\geq k+1-3\lambda k$. Also, if $a=k+1$, then set $D^{+}=D$ and say that $D$ is also compatible. In this case, $|D|=k+1$. Otherwise, if $a\leq k$, and $y$ is not matched to some $z\in X_{a,a}$, or $z$ is not the minimal element of a chain in $\mathcal{C}^{*}_{a}$, then let $D^{+}=D$, and say that $D$ is \emph{incompatible}. Set $\mathcal{D}=\{D^{+}:D\in\mathcal{D}_{0}\}$.
The number of incompatible chains with maximum element in $A_{a-1}$ is at most the number of short and sad chains in $\mathcal{C}_{a}$, which is at most $$N_{\mbox{\tiny short}}+N_{\mbox{\tiny sad}}\leq 2^{n}n^{-19/16+o(1)}.$$ Therefore, the total number of incompatible chains over all $a$ is at most $k2^{n}n^{-19/16+o(1)}=2^{n}n^{-11/16+o(1)}.$ To summarize our progress so far, we constructed a family $\mathcal{D}$ of $M$ chains such that $\mathcal{D}$ partitions $B\setminus\mathcal{L}$, and all but at most $2^{n}n^{-11/16+o(1)}$ chains in $\mathcal{D}$ have size between $k+1-3\lambda k$ and $k+1$. It only remains to partition $\mathcal{L}$ into a few chains such that each of these chains can be attached to an element of $\mathcal{D}$. This guarantees that the number of chains remains $M$ and only a few of the chains get longer. Let $\mathcal{S}$ be a family of $|A_k|$ chains that partition $[n]^{(\geq m+k)}$, see Corollary \ref{cor:leftover}. Then $\mathcal{S}'=\{S\cap \mathcal{L}: S\in\mathcal{S}\}$ forms a chain partition of the leftover elements. We form our final chain partition by gluing the chains of $\mathcal{S}'$ to certain chains of $\mathcal{D}$. For $x\in A_k$, let $S_x$ be the unique chain containing $x$. For $D\in\mathcal{D}$, let $D^{*}=D\cup (S_x\cap \mathcal{L})$ if the maximum element of $D$ is in $A_k$, and this maximum element is $x$. Otherwise, let $D^{*}=D$. Then $\mathcal{D}^{*}=\{D^{*}:D\in\mathcal{D}\}$ is a chain partition of $B$ into $M$ chains. We show that $\mathcal{D}^{*}$ satisfies the desired properties. Let us count the number of chains $D\in\mathcal{D}$ such that either $|D^{*}|\leq k+1-3\lambda k$, or $|D^*|\geq k+1+\lambda k$. If $|D^{*}|\leq k+1-3\lambda k$, then $D^{*}$ is an incompatible chain in $\mathcal{D}_{0}$, so the number of such chains is at most $2^{n}n^{-11/16+o(1)}=Mn^{-3/16+o(1)}$.
On the other hand, if $|D^*|\geq k+1+\lambda k$, then $|D|=k+1$ and there exists $x\in A_k$ such that $D^*=D\cup (S_x \cap \mathcal{L})$. But then $|S_x\cap \mathcal{L}|\geq \lambda k$, so the number of such chains is at most $\frac{|\mathcal{L}|}{\lambda k}=2^{n}n^{-5/8+o(1)}=M n^{-1/8+o(1)}$.\hfill$\Box$ \begin{proof}[Proof of Corollary \ref{remark}] Let $\mathcal{C}_{0}$ be the family of chains $C\in \mathcal{C}$ such that $||C|-s|\geq n^{\frac{1}{2}-\frac{1}{20}}$. By Theorem \ref{thm:mainthm}, $|\mathcal{C}_{0}| \leq M n^{-1/8+o(1)}$. Also, let $\mathcal{C}_{1}\subset \mathcal{C}_{0}$ be the family of chains $C$ such that $|C|\geq \sqrt{n}\log n$, and let $\mathcal{C}_{2}=\mathcal{C}_{0}\setminus\mathcal{C}_{1}$. First, note that every chain of size at least $\sqrt{n}\log n$ must contain a set of size either at least $\frac{n+\sqrt{n}\log n}{2}$, or at most $\frac{n-\sqrt{n}\log n}{2}$. But by Claim~\ref{claim:binomial}, (3), we have $$\left|[n]^{(\leq \frac{n-\sqrt{n}\log n}{2})}\right|=\left|[n]^{(\geq \frac{n+\sqrt{n}\log n}{2})}\right|\leq 2^{n}e^{-(\log n)^{2}/2},$$ so $|\mathcal{C}_{1}|\leq 2^{n+1}e^{-(\log n)^{2}/2}$. Therefore, $$\sum_{C\in\mathcal{C}_{1}}|C|\leq 2^{n+1}e^{-(\log n)^{2}/2}n=O\left(\frac{2^{n}}{n}\right).$$ Second, since $|\mathcal{C}_{2}|<Mn^{-\frac{1}{8}+o(1)}$, we can write $$\sum_{C\in\mathcal{C}_{2}}|C|<Mn^{-\frac{1}{8}+o(1)}\sqrt{n}\log n=2^{n}n^{-\frac{1}{8}+o(1)}.$$ Thus, $\sum_{C\in\mathcal{C}_{0}}|C|\leq 2^{n}n^{-\frac{1}{8}+o(1)}.$ \end{proof} \section{Applications}\label{sect:applications} \subsection{Minimal Sperner graphs--Proof of Theorem~\ref{thm:sperner}}\label{sect:sperner} Let $M=\binom{n}{\lfloor n/2\rfloor}$. The lower bound follows from Tur\'an's theorem~\cite{T41}.
Indeed, for any graph $G$, if $\alpha(G)$ denotes the independence number of $G$, then $|E(G)|\geq \frac{|V(G)|^{2}}{2\alpha(G)}-\frac{|V(G)|}{2}.$ Plugging $|V(G)|=2^{n}$ and $\alpha(G)=M$ into this formula, we get $$|E(G)|\geq \frac{2^{2n}}{2M}-\frac{2^{n}}{2}=\left(\sqrt{\frac{\pi}{8}}+o(1)\right)2^{n}\sqrt{n}.$$ It only remains to prove the upper bound. Let $s=\frac{2^{n}}{M}=(\sqrt{\frac{\pi}{2}}+o(1))\sqrt{n}$. Let $\mathcal{C}$ be a family of $M$ chains partitioning $2^{[n]}$ such that all but at most an $n^{-\frac{1}{8}+o(1)}$ proportion of the chains in $\mathcal{C}$ have size $(1+O(n^{-1/16}))s$. Such a chain decomposition exists by Theorem~\ref{thm:mainthm}. Let $G$ be the graph on $2^{[n]}$ in which $x$ and $y$ are joined by an edge if $x$ and $y$ belong to the same chain of $\mathcal{C}$. Note that if $I\subset V(G)$ is an independent set, then $|I\cap C|\leq 1$ for $C\in\mathcal{C}$, so $|I|\leq M$; on the other hand, the middle level $[n]^{(\lfloor n/2\rfloor)}$ is an independent set of size $M$. Therefore, $\alpha(G)=M$. It only remains to bound the number of edges of $G$. We are going to proceed similarly as in the proof of Corollary \ref{remark}. By the construction of $G$, we have $$|E(G)|=\sum_{C\in\mathcal{C}}\binom{|C|}{2}\leq \frac{1}{2}\sum_{C\in \mathcal{C}}|C|^{2}.$$ Let $\mathcal{C}_{1}\subset \mathcal{C}$ be the family of chains $C$ such that $|C|\geq \sqrt{n}\log n$, and let $\mathcal{C}_{2}\subset \mathcal{C}$ be the family of chains $C$ such that $s+n^{1/2-1/20}<|C|< \sqrt{n}\log n$. Also, let $\mathcal{C}_{3}=\mathcal{C}\setminus (\mathcal{C}_{1}\cup\mathcal{C}_{2})$. First, note that every chain of size at least $\sqrt{n}\log n$ must contain a set of size either at least $\frac{n+\sqrt{n}\log n}{2}$, or at most $\frac{n-\sqrt{n}\log n}{2}$. But by Claim~\ref{claim:binomial}, (3), we have $$\left|[n]^{(\leq \frac{n-\sqrt{n}\log n}{2})}\right|=\left|[n]^{(\geq \frac{n+\sqrt{n}\log n}{2})}\right|\leq 2^{n}e^{-(\log n)^{2}/2},$$ so $|\mathcal{C}_{1}|\leq 2^{n+1}e^{-(\log n)^{2}/2}$.
Therefore, $$\sum_{C\in\mathcal{C}_{1}}|C|^{2}\leq 2^{n+1}e^{-(\log n)^{2}/2}n^{2}=o(2^{n}).$$ Since, by Theorem \ref{thm:mainthm}, $|\mathcal{C}_{2}|<Mn^{-\frac{1}{8}+o(1)}$, we can write $$\sum_{C\in\mathcal{C}_{2}}|C|^{2}<Mn^{-\frac{1}{8}+o(1)}n(\log n)^{2}=o(2^{n}\sqrt{n}).$$ Finally, $$\sum_{C\in\mathcal{C}_{3}}|C|^{2}\leq M(s+n^{1/2-1/20})^{2}=Ms^{2}(1+o(1))=\left(\sqrt{\frac{\pi}{2}}+o(1)\right)2^{n}\sqrt{n}.$$ Therefore, $$|E(G)|\leq \frac{1}{2}\sum_{C\in\mathcal{C}_{1}}|C|^{2}+\frac{1}{2}\sum_{C\in\mathcal{C}_{2}}|C|^{2}+\frac{1}{2}\sum_{C\in\mathcal{C}_{3}}|C|^{2}\leq \left(\sqrt{\frac{\pi}{8}}+o(1)\right)2^{n}\sqrt{n},$$ finishing the proof. \subsection{Applications to extremal problems}\label{sect:extremal} Several problems in extremal set theory are instances of the following general question. Say that a formula is \emph{affine} if it is built from variables, the operators $\cap$ and $\cup$, and parentheses (complementation and constants are not allowed; e.g. $x\cap \{1,2,3\}$ and $x\setminus y$ are \emph{not} affine formulas). Also, an \emph{affine statement} is a statement of the form $f\subset g$ or $f=g$, where $f$ and $g$ are affine formulas. Finally, an \emph{affine configuration} is a Boolean expression which uses the symbols $\lor, \land, \neg$ and whose variables are replaced with affine statements. Given an affine configuration $C$ with $k$ variables, a family $H\subset 2^{[n]}$ \emph{contains $C$} if there exist $k$ distinct elements of $H$ that satisfy $C$; otherwise, say that \emph{$H$ avoids $C$}. Let $\ex(n,C)$ denote the size of the largest family $H\subset 2^{[n]}$ such that $H$ avoids $C$. Say that an affine configuration $C$ is \emph{satisfiable} if there exists a family of sets satisfying $C$. Here are some examples of well-known questions which ask to determine the order of magnitude of $\ex(n,C)$ for some specific affine configuration $C$.
\textbf{Sperner's theorem.} An antichain is exactly a family not containing the affine configuration $C\equiv(x\subset y)$. Hence, Sperner's theorem~\cite{S28} is equivalent to the statement $\mbox{ex}(n,C)=\binom{n}{\lfloor n/2\rfloor}$. \textbf{Union-free families.} A family $H\subset 2^{[n]}$ is \emph{union-free} if it does not contain three distinct sets $x,y,z$ such that $z=x\cup y$. But $H$ is union-free if and only if it does not contain the affine configuration $(z=x\cup y)$. The size of the largest union-free family was investigated by Kleitman~\cite{K76}, who proved that the size of such a family is at most $(1+o(1))\binom{n}{\lfloor n/2\rfloor}$. \textbf{Forbidden subposets.} Let $P$ be a poset, and let $\prec$ be the partial ordering on $P$. The following questions are extensively studied \cite{BJ12,B09,GL09, KT83, MP17, T19}: what is the maximum size of a family in $2^{[n]}$ that does not contain $P$ as a weak/induced subposet? For each $p\in P$, introduce the variable $x_{p}$. Then, forbidding $P$ as a weak subposet is equivalent to forbidding the affine configuration $$C_{P}\equiv\bigwedge_{\substack{p,q\in P\\ p\prec q}}(x_{p}\subset x_{q}),$$ while $P$ as an induced subposet corresponds to the affine configuration $$C'_{P}\equiv\bigwedge_{\substack{p,q\in P\\ p\prec q}}(x_{p}\subset x_{q})\wedge\bigwedge_{\substack{p,q\in P\\ p\not\prec q,q\not\prec p}}(\neg(x_{p}\subset x_{q})\wedge \neg(x_{q}\subset x_{p})).$$ Let $e(P)$ denote the maximum number $k$ such that the union of the $k$ middle levels of $2^{[n]}$ does not contain $C_{P}$, and define $e'(P)$ similarly for $C'_{P}$. It is commonly believed that $\ex(n,C_{P})=(e(P)+o(1))\binom{n}{\lfloor n/2\rfloor}$ and $\ex(n,C'_{P})=(e'(P)+o(1))\binom{n}{\lfloor n/2\rfloor}$. This conjecture has only been verified for posets with certain special structures, for example when the Hasse diagram of $P$ is a tree \cite{BJ12,B09}, so in general it is wide open. 
Also, while it is clear that $\ex(n,C_{P})\leq (|P|-1)\binom{n}{\lfloor n/2\rfloor}$ (as a chain of size $|P|$ satisfies $C_{P}$), it is already not obvious that $\ex(n,C'_{P})=O(\binom{n}{\lfloor n/2\rfloor})$. This was verified by Methuku and P\'alv\"olgyi \cite{MP17}. Finally, it is not even known whether the limit $\lim_{n\rightarrow\infty}\ex(n,C_{P})/\binom{n}{\lfloor n/2\rfloor}$ exists, see~\cite{GL09}. \textbf{Boolean algebras.} The \emph{$d$-dimensional Boolean algebra} is a set of the form $$\left\{x_{0}\cup\bigcup_{i\in I}x_{i}:I\subset [d]\right\},$$ where $x_{0},\dots,x_{d}$ are pairwise disjoint sets and $x_{1},\dots,x_{d}$ are nonempty. Let $b(n,d)$ denote the size of the largest family $H\subset 2^{[n]}$ that does not contain a $d$-dimensional Boolean algebra. It was proved by Erd\H{o}s and Kleitman~\cite{EK71} that $b(n,2)=\Theta(2^{n}n^{-1/4})$, where the constants hidden by the $\Theta(.)$ notation are unspecified and difficult to compute. Also, this was extended by Gunderson, R\"{o}dl and Sidorenko~\cite{GRS99}, who proved that $b(n,d)=O(2^{n}n^{-1/2^{d}})$, where the constant hidden by the $O(.)$ notation depends on $d$. Finally, this was strengthened by Johnston, Lu and Milans \cite{JLM15} to $b(n,d)\leq 22\cdot 2^{n}n^{-1/2^{d}}$. Note that a Boolean algebra is equivalent to the following affine configuration: for $I\subset [d]$, let $x_{I}$ be a variable; then the corresponding affine configuration is $$\bigwedge_{1\leq i<j\leq d}(x_{\emptyset}=x_{\{i\}}\cap x_{\{j\}})\wedge\bigwedge_{I\subset [d], I\neq\emptyset} (x_{I}=\bigcup_{i\in I}x_{\{i\}}).$$ Moreover, the above results on Boolean algebras also show that for any formula $C$, if it is satisfiable, then there exists $\alpha>0$ such that $\ex(n,C)=O(2^{n}n^{-\alpha})$. Indeed, if $C$ is satisfiable, then there exists $d$ such that $2^{[d]}$ contains $C$, but then every $d$-dimensional Boolean algebra also contains $C$. \bigskip Here, we provide a unified framework to handle such problems. 
First, let us consider a more general problem. A \emph{$d$-dimensional grid} is a $d$-term Cartesian product of the form $[k_1]\times\dots\times[k_d]$, endowed with the following coordinatewise ordering $\subset$: $(a_{1},\dots,a_{d})\subset (b_{1},\dots,b_{d})$ if $a_{i}\leq b_{i}$ for $i=1,\dots,d$ (with a slight abuse of notation, we also use $\subset$ to denote comparability in the grid, for reasons that should become clear later). Also, define the operations $\cap$ and $\cup$ by $(a_{1},\dots,a_{d})\cap (b_{1},\dots,b_{d})=(\min\{a_{1},b_{1}\},\dots,\min\{a_{d},b_{d}\})$ and $(a_{1},\dots,a_{d})\cup (b_{1},\dots,b_{d})=(\max\{a_{1},b_{1}\},\dots,\max\{a_{d},b_{d}\})$. Under the natural isomorphism between the Boolean lattice $2^{[n]}$ and the grid $[2]^{n}$, these definitions of $\subset$, $\cap$ and $\cup$ extend their usual meaning. Thus, we can talk about affine configurations in the grid as well. If $F$ is a grid, say that a subset $H\subset F$ \emph{contains} the affine configuration $C$ with $k$ variables if there exist $k$ distinct elements of $H$ that satisfy $C$; otherwise, say that \emph{$H$ avoids $C$}. Let $\ex(F,C)$ denote the size of the largest subset of $F$ which does not contain $C$, and write $\ex(k,d,C)$ instead of $\ex([k]^d,C)$. Our aim is to show that one can derive bounds for $\ex(n,C)$ using the function $f(k)=\ex(k,d,C)$, where $d$ is some fixed integer. Indeed, by considering a chain decomposition of $2^{[n/d]}$ into chains of almost equal size, one can partition $2^{[n]}$ into $d$-dimensional grids of almost equal dimensions. Then, given a family $H\subset 2^{[n]}$ avoiding $C$, we bound the intersection of $H$ with each of these grids (using the function $\ex(k,d,C)$), which then turns into a bound on $\ex(n,C)$. 
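The grid-partition mechanism just described can be checked concretely on a tiny case. The following sketch (an illustration, not from the paper, with a hard-coded symmetric chain decomposition of the subsets of a $2$-element set) partitions $2^{[4]}$ into $2$-dimensional grids and verifies that $\cup$ and $\cap$ become the coordinatewise $\max$ and $\min$:

```python
# A toy instance of the grid-partition step: n = 4, d = 2, n1 = n2 = 2.
# The two chains below form a symmetric chain decomposition of the subsets
# of {1,2} and of {3,4}, respectively.
from itertools import product

chains_1 = [[frozenset(), frozenset({1}), frozenset({1, 2})], [frozenset({2})]]
chains_2 = [[frozenset(), frozenset({3}), frozenset({3, 4})], [frozenset({4})]]

# Each product D1 x D2 plays the role of the grid [|D1|] x [|D2|] under
# phi(d1 | d2) = (i, j), where d1 is the i-th set of D1 and d2 the j-th of D2.
grids = [[(d1 | d2) for d1, d2 in product(D1, D2)]
         for D1, D2 in product(chains_1, chains_2)]

# The grids partition 2^[4]: 16 = 3*3 + 3*1 + 1*3 + 1*1 sets in total.
all_sets = [s for g in grids for s in g]
assert len(all_sets) == 16 and len(set(all_sets)) == 16

# Within a grid, union and intersection of sets are the coordinatewise
# max and min of their phi-images -- the isomorphism used in the proof below.
D1, D2 = chains_1[0], chains_2[0]
for (i1, j1), (i2, j2) in product(product(range(3), repeat=2), repeat=2):
    x, y = D1[i1] | D2[j1], D1[i2] | D2[j2]
    assert x | y == D1[max(i1, i2)] | D2[max(j1, j2)]
    assert x & y == D1[min(i1, i2)] | D2[min(j1, j2)]
```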
The reason why we would like to work with $\ex(k,d,C)$ instead of $\ex(n,C)$ is that for many affine configurations $C$, estimating $\ex(k,d,C)$ is equivalent to an (ordered) hypergraph Tur\'an problem, which is sometimes easier to handle or already has good upper bounds. Similar ideas were already present in~\cite{EK71,GRS99,MP17}, but executed in a somewhat suboptimal way. The following theorem is the main result of this section. \begin{theorem}\label{thm:grid} Let $d$ be a positive integer, and let $c,\alpha>0$ be such that $\mbox{ex}(k,d,C)\leq ck^{d-\alpha}$ holds for every sufficiently large $k\in\mathbb{Z}^{+}$. Then $$\ex(n,C)\leq (1+o(1))c\left(\frac{2d}{\pi n}\right)^{\frac{\alpha}{2}}2^{n}.$$ \end{theorem} Before we can prove this theorem, let us see how $\ex(F,C)$ and $\ex(k,d,C)$ are related. \begin{claim}\label{claim:assymetric} Let $k\leq k_{1}\leq\dots\leq k_d$ and $F=[k_1]\times\dots\times[k_d]$. Then $$\frac{\ex(F,C)}{k_1\dots k_d}\leq \frac{\ex(k,d,C)}{k^d}.$$ \end{claim} \begin{proof} Let $H\subset F$ be such that $H$ does not contain a copy of $C$ and $|H|=\ex(F,C)$. For $i=1,\dots,d$, let $X_i$ be a random $k$-element subset of $[k_i]$, chosen from the uniform distribution, and let $F'=X_1\times\dots\times X_d$. Let $N=|F'\cap H|.$ Clearly, for every $v\in H$, we have $$\mathbb{P}(v\in F')=\frac{k^d}{k_1\dots k_d},$$ so $\mathbb{E}(N)=|H|\frac{k^d}{k_1\dots k_d}.$ Therefore, there exists a choice for $X_1,\dots,X_d$ such that $N\geq |H|\frac{k^d}{k_1\dots k_d}$. As $F'$ is isomorphic to the grid $[k]^d$ and $F'\cap H$ does not contain a copy of $C$, we get $$\ex(k,d,C)\geq \frac{k^d}{k_1\dots k_d}\ex(F,C).$$ \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:grid}] In this proof, we consider $d$ as a constant, so the notation $O(.)$ hides a constant which might depend on $d$. Let $H\subset 2^{[n]}$ be a subset of size $\mbox{ex}(n,C)$ not containing a copy of $C$. 
Write $n=n_1+\dots+n_d$, where $n_i\in\{\lfloor n/d\rfloor,\lceil n/d\rceil\}$ for $i=1,\dots,d$. Let $\mathcal{C}_i$ be a chain decomposition of $2^{[n_i]}$ given by Theorem~\ref{thm:mainthm}, that is, all but at most an $n^{-\frac{1}{8}+o(1)}$ proportion of the chains in $\mathcal{C}_i$ have size $s(1+O(n^{-\frac{1}{16}}))$, where $$s=\left(\sqrt{\frac{\pi}{2}}+o(1)\right)\sqrt{\frac{n}{d}}.$$ If a chain $D\in \mathcal{C}_{i}$ is longer than $s(1+n^{-\frac{1}{20}})$, cut it into $\lceil\frac{|D|}{s}\rceil$ smaller chains such that the size of all but at most one of them is $s$. Let $\mathcal{D}_{i}$ be the resulting chain partition. After this cutting, every chain in $\mathcal{D}_{i}$ has size at most $s(1+n^{-\frac{1}{20}})$; moreover, as the number of chains of size more than $s(1+n^{-\frac{1}{20}})$ was at most $2^{n_{i}}n^{-\frac{5}{8}+o(1)}$, the number of chains of size less than $s(1-n^{-\frac{1}{20}})$ is at most $2^{n_{i}}n^{-\frac{5}{8}+o(1)}$. Let $\mathcal{D}=\{D_{1}\times\dots\times D_{d}:D_{1}\in\mathcal{D}_{1},\dots,D_{d}\in\mathcal{D}_{d}\}.$ Then $2^{[n]}$ is the disjoint union of the elements of $\mathcal{D}$. Here, $D=D_1\times\dots\times D_{d}\in\mathcal{D}$ behaves exactly like the $d$-dimensional grid $F=[|D_1|]\times\dots\times[|D_d|]$. More precisely, let $\phi_i: D_i\rightarrow [|D_i|]$ be the bijection defined as $\phi_i(x_j)=j$, where $x_1\subset\dots\subset x_{|D_i|}$ are the elements of $D_i$. Setting $\phi=(\phi_1,\dots,\phi_d)$, $\phi$ is a bijection between $D$ and $F$ such that for any $x,y,z\in D$, \begin{itemize} \item $x\subset y$ if and only if $\phi(x)\subset \phi(y),$ \item $x\cup y=z$ if and only if $\phi(x)\cup\phi(y)=\phi(z),$ \item $x\cap y=z$ if and only if $\phi(x)\cap \phi(y)=\phi(z).$ \end{itemize} But this means that a subset $H\cap D$ contains $C$ if and only if $\phi(H\cap D)$ contains $C$. Therefore, $|H\cap D|\leq \mbox{ex}(F,C)$. Let $k=\min\{|D_{1}|,\dots,|D_{d}|\}$. 
Then by Claim~\ref{claim:assymetric}, we have \begin{equation}\label{equ:exFC}\ex(F,C)\leq \frac{|D_{1}|\dots |D_{d}|}{k^{d}}\ex(k,d,C)\leq c|F|k^{-\alpha}\leq c(1+o(1))s^{d-\alpha}. \end{equation} Let $$\mathcal{D}'=\{D_{1}\times\dots\times D_{d}\in \mathcal{D}:|D_{i}|\leq s(1-n^{-\frac{1}{20}})\mbox{ for some }i\in [d]\}.$$ Then $|\mathcal{D}'|=o(2^{n}n^{-d/2}).$ Hence, $$\sum_{D\in\mathcal{D}'}|D\cap H|\leq o(2^{n}n^{-\frac{d}{2}}s^{d-\alpha})=o(2^{n}n^{-\frac{\alpha}{2}}).$$ Also, by the second inequality in (\ref{equ:exFC}), we have $$\sum_{D\in\mathcal{D}\setminus \mathcal{D}'}|D\cap H|\leq \sum_{D\in\mathcal{D}\setminus \mathcal{D}'} (1+o(1))c|D|s^{-\alpha}\leq (1+o(1))c2^{n}s^{-\alpha}.$$ Therefore, $$|H|=\sum_{D\in\mathcal{D}}|D\cap H|\leq (1+o(1))c\left(\frac{2d}{\pi n}\right)^{\frac{\alpha}{2}}2^{n}.$$ \end{proof} Let us see some quick applications. Note that most of these applications were already covered in \cite{T19} with slightly worse constants. \textbf{Sperner's theorem.} As an easy exercise, let us recover the asymptotic version of Sperner's theorem from Theorem \ref{thm:grid}. Indeed, let $C\equiv (x\subset y)$; then trivially $\ex(k,1,C)=1$. Therefore, $\ex(n,C)\leq (1+o(1))\sqrt{\frac{2}{n\pi}}2^{n}$. \textbf{Union-free families.} Let $C\equiv (z=x\cup y)$. Consider the case $d=2$; then the affine configuration $C$ in $[k]^{2}$ corresponds to three points of the grid which form a corner, i.e., $(a,b), (c,b), (c,d)$ such that $a<c$ and $d<b$. It is not difficult to see that $\ex(k,2,C) \leq 2k$. Indeed, suppose $Q$ is a subset of the grid of size at least $2k + 1$. On every horizontal line delete the leftmost point and on every vertical line delete the lowest point which is in $Q$. Since we delete at most $2k$ points, some point $(c,b) \in Q$ must remain. Then, by the choice of $(c,b)$, there are points $(a, b)$ and $(c,d)$ with $a < c$ and $d < b$ which are also in $Q$, so $Q$ contains a corner. 
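For small $k$ this corner bound can be checked by exhaustive search. The following sketch (an illustration only, not part of the paper) verifies for $k=3$ that every corner-free subset of $[k]\times[k]$ has size at most $2k$, and that size $2k-1$ is attained by the first column together with the bottom row:

```python
# Exhaustive check of ex(k,2,C) <= 2k for the corner configuration, k = 3
# (the grid has only 9 points, so there are 2^9 subsets to test).
from itertools import combinations

k = 3
points = [(a, b) for a in range(1, k + 1) for b in range(1, k + 1)]

def corner_free(Q):
    """True if Q contains no corner (a,b), (c,b), (c,d) with a < c and d < b."""
    Qs = set(Q)
    return not any((a, b) in Qs and (c, d) in Qs
                   for (c, b) in Qs for a in range(1, c) for d in range(1, b))

best = max(len(Q) for r in range(len(points) + 1)
           for Q in combinations(points, r) if corner_free(Q))
assert best <= 2 * k  # the deletion argument above

# The first column plus the bottom row is corner-free of size 2k - 1.
example = {(1, b) for b in range(1, k + 1)} | {(a, 1) for a in range(1, k + 1)}
assert corner_free(example) and len(example) == 2 * k - 1
```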
Thus by Theorem \ref{thm:grid} (with $d=2$ and $\alpha=1$), we get $$\ex(n,C)\leq (1+o(1))2\sqrt{\frac{4}{\pi n}}2^{n}=(1+o(1))2\sqrt{2}\binom{n}{\lfloor n/2\rfloor},$$ which is only slightly worse than the bound of Kleitman \cite{K76}. \textbf{Forbidden subposets.} Let $P$ be a poset that is not an antichain, and consider the corresponding affine configurations $C_{P}$ and $C'_{P}$. Let $d_{0}$ be the Dushnik--Miller dimension of $P$, that is, $d_{0}$ is the smallest $d$ such that $[k]^{d}$ contains the affine configuration $C'_{P}$ for some $k$. It was proved by Tomon \cite{T19} that there exists a constant $\alpha(P)$ such that if $d\geq d_{0}$, then $\ex(k,d,C'_{P})\leq \alpha(P)w$, where $w$ is the size of the largest antichain in $[k]^{d}$. We remark that $w=(1+o(1))\sqrt{\frac{6}{\pi}}\cdot\frac{k^{d-1}}{\sqrt{d}}$ as $\min\{k,d\}\rightarrow\infty$, see p.~63--68 in \cite{A87}. Let $$\beta(d,P)=\limsup_{k\rightarrow\infty} \frac{\sqrt{d}}{k^{d-1}}\ex(k,d,C_{P}),$$ and $$\beta'(d,P)=\limsup_{k\rightarrow\infty} \frac{\sqrt{d}}{k^{d-1}}\ex(k,d,C'_{P}).$$ Then $\beta(d,P)\leq \beta'(d,P)<\infty.$ Applying Theorem \ref{thm:grid}, we get $$\ex(n,C_{P})\leq (1+o(1))\beta(d,P)\binom{n}{\lfloor n/2\rfloor},$$ and $$\ex(n,C'_{P})\leq (1+o(1))\beta'(d,P)\binom{n}{\lfloor n/2\rfloor}.$$ This tells us that one can derive bounds on $\ex(n,C_{P})$ and $\ex(n,C'_{P})$ by considering the behavior of the functions $\ex(k,d,C_{P})$ and $\ex(k,d,C'_{P})$ for some fixed $d$. However, finding the values of these functions is equivalent to a forbidden $d$-dimensional matrix pattern problem (see e.g. \cite{KM06} for a description of this problem, and \cite{MP17,T19} for the connection of posets and matrix patterns), which provides us with new tools to estimate $\ex(n,C_{P})$ and $\ex(n,C'_{P})$. \textbf{Boolean algebras.} Finally, let us consider Boolean algebras, in particular the case $d=2$. 
If $C$ is the affine configuration corresponding to the $2$-dimensional Boolean algebra, then a set $H\subset [k]\times [l]$ avoids $C$ if and only if $H$ does not contain four distinct points $(a,b),(a',b),(a,b'),(a',b')$, which is equivalent to forbidding a cycle of length four in the appropriate bipartite graph. But then by the K\H{o}v\'ari-S\'os-Tur\'an theorem \cite{KST54}, we have $$|H|\leq kl^{1/2}+O(k+l),$$ so $\ex(k,2,C)\leq (1+o(1))k^{3/2}.$ Hence, by Theorem \ref{thm:grid} (with $d=2$ and $\alpha=1/2$), we get $$b(n,2)\leq (1+o(1))\left(\frac{4}{\pi n}\right)^{1/4}2^{n}.$$ One can get an even better bound by slightly modifying the proof of Theorem \ref{thm:grid}: instead of choosing $n_{1}=\lfloor n/2\rfloor$ and $n_{2}=\lceil n/2\rceil$, set $n_{1}=\lfloor n^{2/3}\rfloor$ and $n_{2}=n-n_{1}$, and write $\ex(F,C)\leq |D_{1}||D_{2}|^{1/2}+O(|D_{1}|+|D_{2}|)$. Then, after repeating the same calculations, we get $$b(n,2)\leq (1+o(1))\left(\frac{2}{\pi n}\right)^{1/4}2^{n}\approx 0.89\cdot 2^{n}n^{-\frac{1}{4}}.$$ We omit the details. \section{Concluding remarks}\label{sect:remarks} Let $M=\binom{n}{\lfloor n/2\rfloor}$, and let $\sigma_{1}\geq\dots\geq\sigma_{M}$ be the sizes of the chains in a symmetric chain decomposition of $2^{[n]}$. Let $D_{1},\dots,D_{M}$ be a chain decomposition of $2^{[n]}$ such that $|D_{1}|\geq \dots\geq |D_{M}|$. Then it is easy to show that the sequence $\sigma_{1},\dots,\sigma_{M}$ \emph{dominates} $|D_{1}|,\dots,|D_{M}|$, that is, $$\sum_{i=1}^{k}\sigma_{i}\geq \sum_{i=1}^{k} |D_{i}|$$ for $k=1,\dots,M$. Griggs \cite{G88} proposed the following conjecture. \begin{conjecture}\label{conj:griggs} Let $s_{1}\geq \dots\geq s_{M}$ be a sequence of positive integers dominated by $\sigma_{1},\dots,\sigma_{M}$ such that $\sum_{i=1}^{M}s_{i}=2^{n}$. Then there exists a chain decomposition $D_{1},\dots,D_{M}$ of $2^{[n]}$ such that $|D_{i}|=s_{i}$. 
\end{conjecture} Note that Conjecture \ref{conj:mainconj} is a special case of this conjecture, possibly the most challenging one. One might consider a similar question for the upper half of $2^{[n]}$, that is, for the family $B=[n]^{(\geq n/2)}$. Then a conjecture akin to Conjecture \ref{conj:griggs} would be as follows. For $i=1,\dots,M$, let $\sigma_{i}'=\lceil \frac{\sigma_{i}}{2}\rceil$; then $\sigma_{1}',\dots,\sigma_{M}'$ are the sizes of the chains in a symmetric chain decomposition of $2^{[n]}$ restricted to $B$. \begin{conjecture}\label{conj:B} Let $s_{1}\geq \dots\geq s_{M}$ be a sequence of positive integers dominated by $\sigma_{1}',\dots,\sigma_{M}'$ such that $\sum_{i=1}^{M}s_{i}=|B|$. Then there exists a chain decomposition $D_{1},\dots,D_{M}$ of $B$ such that $|D_{i}|=s_{i}$. \end{conjecture} It is plausible that one can use a modification of our approach to prove an asymptotic version of this conjecture, that is, that there exists a chain decomposition $D_{1},\dots,D_{M}$ of $B$ such that for all but at most $o(M)$ indices $i\in [M]$, we have $|D_{i}|=(1+o(1))s_{i}$. However, such a result would not immediately yield an asymptotic version of Conjecture \ref{conj:griggs}, for the following reason: we might be able to partition the lower and the upper half of $2^{[n]}$ into chains of the desired lengths, but when we try to match the chains in the lower and upper half, we are unable to guarantee that chains of the right lengths get joined.
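For concreteness, the symmetric chain sizes $\sigma_{1}\geq\dots\geq\sigma_{M}$ above can be generated for small $n$ by the classical bracketing construction of a symmetric chain decomposition (due to de Bruijn, Tengbergen and Kruyswijk, and Greene--Kleitman); the following sketch (not part of the paper) builds it for $n=8$ and re-checks the edge-count lower bound $|E(G)|\geq \frac{2^{2n}}{2M}-\frac{2^{n}}{2}$ used earlier:

```python
# Symmetric chain decomposition of 2^[n] via bracket matching: read bit i of
# a mask as '(' if i is not in the set and ')' if it is; two sets lie in the
# same chain iff they have the same matched pairs.
from math import comb

def symmetric_chains(n):
    chains = {}
    for mask in range(1 << n):
        stack, matched = [], []
        for i in range(n):
            if not (mask >> i) & 1:
                stack.append(i)          # '(' : position not in the set
            elif stack:
                matched.append((stack.pop(), i))  # match with a later ')'
        key = tuple(sorted(matched))
        chains.setdefault(key, []).append(mask)
    # Sort each chain by cardinality; the members are then nested.
    return [sorted(c, key=lambda m: bin(m).count("1")) for c in chains.values()]

n = 8
chains = symmetric_chains(n)
M = comb(n, n // 2)
assert len(chains) == M                        # minimal number of chains
assert sum(len(c) for c in chains) == 2 ** n   # the chains partition 2^[n]
for c in chains:                               # each chain is nested
    assert all(c[i] & c[i + 1] == c[i] for i in range(len(c) - 1))

# Edge count of the "same chain" graph, as in the lower bound above.
edges = sum(len(c) * (len(c) - 1) // 2 for c in chains)
assert edges >= 2 ** (2 * n) // (2 * M) - 2 ** n // 2
```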
https://arxiv.org/abs/1906.12054
On the equational graphs over finite fields
In this paper, we generalize the notion of functional graph. Specifically, given an equation $E(X,Y) = 0$ with variables $X$ and $Y$ over a finite field $\mathbb{F}_q$ of odd characteristic, we define a digraph by choosing the elements in $\mathbb{F}_q$ as vertices and drawing an edge from $x$ to $y$ if and only if $E(x,y)=0$. We call this graph an equational graph. In this paper, we study the equational graphs obtained by choosing $E(X,Y) = (Y^2 - f(X))(\lambda Y^2 - f(X))$ with $f(X)$ a polynomial over $\mathbb{F}_q$ and $\lambda$ a non-square element in $\mathbb{F}_q$. We show that if $f$ is a permutation polynomial over $\mathbb{F}_q$, then every connected component of the graph has a Hamiltonian cycle. Moreover, these Hamiltonian cycles can be used to construct balancing binary sequences. Computations for permutation polynomials $f$ of low degree suggest that almost all these graphs are strongly connected, and that there are many Hamiltonian cycles in such a graph if it is connected.
\section{Introduction} Let $\F_q$ be the finite field of $q$ elements, where $q$ is a power of some odd prime $p$. Let $\F_q^*$ be the set of non-zero elements in $\F_q$. For any polynomial $f \in \F_q[X]$, we define the \textit{functional graph} of $f$ as a digraph on $q$ vertices labelled by the elements of $\F_q$, where there is an edge from $x$ to $y$ if and only if $f(x) = y$. These graphs have been extensively studied in recent years; see \cite{BuSch,FlGar,HS,KLMMSS,MSSS,OstSha,VaSha} and the references therein. The motivation for studying these graphs comes from several sources, such as the Lucas-Lehmer primality test for Mersenne numbers, Pollard's rho algorithm for integer factorization, pseudo-random number generators, and arithmetic dynamics. In the above construction, we in fact use the equation $Y - f(X) = 0$. Then, there is an edge from $x$ to $y$ if and only if $y-f(x)=0$. Hence, more generally, for any equation over $\F_q$: $$ E(X,Y) = 0 $$ with variables $X$ and $Y$, we define a digraph by choosing the elements in $\F_q$ as vertices and drawing an edge from $x$ to $y$ if and only if $E(x,y) = 0$. We call this graph an \textit{equational graph} of the above equation. However, it might happen that this equation has no solution over $\F_q$. We remark that functional graphs and equational graphs can clearly be defined similarly over finite fields of even characteristic, but in this paper we only consider finite fields of odd characteristic. In this paper, we consider equational graphs generated by equations of the form \begin{equation} \label{eq:equation} (Y^2 - f(X))(\lambda Y^2 - f(X)) = 0 \end{equation} with variables $X$ and $Y$, where $f(X)$ is a fixed polynomial over $\F_q$ and $\lambda$ is a fixed non-square element in $\F_q$. Then, there is an edge from $x$ to $y$ if and only if $(y^2 - f(x))(\lambda y^2 - f(x)) = 0$. This yields an equational graph over $\F_q$, denoted by $\cG(\lambda,f)$; see Figure~\ref{fig:linear} for a simple example. 
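As a concrete illustration (a sketch, not part of the paper), the graph $\cG(3,X+1)$ over $\F_7$ of Figure~\ref{fig:linear} can be generated directly from the defining equation:

```python
# Building the equational graph G(lambda, f) from its definition, for q = 7,
# lambda = 3 (a non-square mod 7) and f(X) = X + 1: there is an edge x -> y
# iff (y^2 - f(x)) * (lambda * y^2 - f(x)) = 0 in F_7.
q, lam = 7, 3
f = lambda x: (x + 1) % q

edges = {x: [y for y in range(q)
             if (y * y - f(x)) % q == 0 or (lam * y * y - f(x)) % q == 0]
         for x in range(q)}

# The only vertex with f(x) = 0 is x = 6, whose sole successor is 0;
# every other vertex has out-degree 2, since y and -y occur in pairs.
assert edges[6] == [0]
assert all(len(edges[x]) == 2 for x in range(q) if x != 6)
```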
Since $\lambda$ is non-square, the out-degree of each vertex $x$ is positive, which in fact equals $2$ if $f(x) \ne 0$ (because if there is an edge from $x$ to $y$, then there is also an edge from $x$ to $-y$). However, the in-degree of $x$ can range from zero to the degree of $f$. Note that we allow the graph $\cG(\lambda,f)$ to have loops. \begin{figure}[!htbp] \begin{center} \setlength{\unitlength}{1cm} \begin{picture}(20,3.5) \multiput(2.5,0)(2,0){2}{\circle{0.7}} \multiput(2.5,3.2)(2,0){2}{\circle{0.7}} \multiput(6.5,1.6)(2,0){3}{\circle{0.7}} \put(2.35, 2.875){\vector(0,-1){2.55}} \put(2.65, 0.325){\vector(0,1){2.55}} \put(2.85, 0){\vector(1,0){1.3}} \put(4.15, 3.2){\vector(-1,0){1.3}} \put(4.5, 2.85){\vector(0,-1){2.5}} \put(4.75, 0.25){\vector(3,2){1.57}} \put(6.3, 1.9){\vector(-3,2){1.55}} \put(6.835, 1.5){\vector(1,0){1.33}} \put(8.165, 1.7){\vector(-1,0){1.33}} \put(8.835, 1.5){\vector(1,0){1.33}} \put(10.165, 1.7){\vector(-1,0){1.33}} \put(4.85, 0){\vector(4,1){5.4}} \put(10.3, 1.9){\vector(-4,1){5.45}} \put(2.4,-0.15){$0$} \put(4.4,-0.15){$1$} \put(2.4,3.05){$6$} \put(4.4,3.05){$2$} \put(6.4,1.45){$4$} \put(8.4,1.45){$5$} \put(10.4,1.45){$3$} \end{picture} \end{center} \caption{The equational graph $\cG(3,X+1)$ over $\F_7$} \label{fig:linear} \end{figure} We show that if $f$ is a permutation polynomial over $\F_q$, then every (weakly) connected component of the graph $\cG(\lambda,f)$ is strongly connected (see Proposition~\ref{prop:perm-conn}) and has a Hamiltonian cycle (see Theorem~\ref{thm:perm-Ha}). By distinguishing the edges according to the subequations ($Y^2 - f(X)=0$ or $\lambda Y^2 - f(X)=0$) they come from, we classify these Hamiltonian cycles (see Definition~\ref{def:type}) and show that there is no Hamiltonian cycle of Type 1 for many such graphs (see Theorem~\ref{thm:HT1}, Corollary~\ref{cor:linear-HT1} and Theorem~\ref{thm:perm-HT1}). 
Moreover, we prove that these Hamiltonian cycles can be used to construct balancing binary sequences by associating weights to the edges (see Theorem~\ref{thm:perm-balance}). Computations for permutation polynomials $f$ of low degree suggest that almost all these graphs are strongly connected, and that each such connected graph has many Hamiltonian cycles. That is, using these graphs we can frequently obtain balancing periodic sequences of period $q$. Additionally, we also investigate the graphs $\cG(\lambda,f)$ with polynomials $f$ of low degree in more detail. We remark that by construction, any graph $\cG(\lambda,f)$ with permutation polynomial $f$ is quite close to being a 2-regular digraph. The result in \cite[Theorem 5.1]{FF} implies that almost every 2-regular digraph is strongly connected, and the result in \cite[Theorem 1]{CF} suggests that almost every 2-regular digraph has a Hamiltonian cycle. We can thus view such graphs $\cG(\lambda,f)$ as typical examples for this. The paper is organized as follows: Section~\ref{sec:perm} deals with the case when $f$ is a permutation polynomial over $\F_q$, and the algorithms for computing its connectedness and Hamiltonian cycles are presented in Section~\ref{sec:alg}. We then study the cases when $f$ is of degree 1, 2 and 3 in Sections~\ref{sec:linear}, \ref{sec:quad} and \ref{sec:cubic} respectively. Finally, we make some comments on further study. \section{The case of permutation polynomials} \label{sec:perm} Here, in the graph $\cG(\lambda,f)$ we choose $f$ to be a permutation polynomial over $\F_q$. That is, the map $x \mapsto f(x)$ is a bijection from $\F_q$ to itself. We refer to \cite[Chapter 7]{LN} for an extensive introduction to permutation polynomials. Recall that $q$ is odd, and $\lambda$ is a non-square element in $\F_q$. \subsection{Basic properties} First, it is easy to determine the in-degrees and out-degrees of the vertices in the graph $\cG(\lambda,f)$. 
\begin{proposition} \label{prop:perm-inout} Let $f$ be a permutation polynomial over $\F_q$. If $f(0) \ne 0$, then in the graph $\cG(\lambda,f)$, the vertex $0$ has in-degree $1$ and out-degree $2$, the vertex $f^{-1}(0)$ has in-degree $2$ and out-degree $1$, and any other vertex $x$ $(x \ne 0$ and $x \ne f^{-1}(0))$ has in-degree $2$ and out-degree $2$. Otherwise, if $f(0) = 0$, then in the graph $\cG(\lambda,f)$, the vertex $0$ has in-degree $1$ and out-degree $1$, and any other vertex $x$ $(x \ne 0)$ has in-degree $2$ and out-degree $2$. \end{proposition} \begin{proof} The proof is quite straightforward. We only need to note that the map $x \mapsto f(x)$ gives a bijection from $\F_q$ to itself. \end{proof} Moreover, each (weakly) connected component of the graph $\cG(\lambda,f)$ is strongly connected. \begin{proposition} \label{prop:perm-conn} In each graph $\cG(\lambda,f)$ with permutation polynomial $f$, every vertex lies in a cycle, and also every edge lies in a cycle. In particular, every connected component is strongly connected. \end{proposition} \begin{proof} Let $C$ be an arbitrary connected component of the graph $\cG(\lambda,f)$. For our purpose, it suffices to show that given an edge from $x$ to $y$ in $C$, there is a (directed) path from $y$ to $x$. Notice that by Proposition~\ref{prop:perm-inout} the in-degree and out-degree of each vertex in the graph $\cG(\lambda,f)$ are both positive. Then, starting from $x$, we draw the predecessors of $x$, and the predecessors of the predecessors of $x$, and so on; this gives a subgraph, say $G_1$. Meanwhile, starting from $y$, we draw the successors of $y$, and the successors of the successors of $y$, and so on; this gives another subgraph, say $G_2$. If $G_1$ and $G_2$ have a common vertex, then we are done. So, we only need to prove that these two subgraphs indeed have a common vertex. Now, by contradiction, suppose that $G_1$ and $G_2$ have no common vertex. 
Then, the edge from $x$ to $y$ is not in $G_1$ and also not in $G_2$. Without loss of generality, we can assume that $G_2$ does not contain the vertex $0$. So, noticing $y \ne 0$ and $y \ne f^{-1}(0)$ and using Proposition~\ref{prop:perm-inout}, in $G_2$ the vertex $y$ has out-degree $2$ and in-degree at most $1$, and any other vertex in $G_2$ has out-degree $2$ and in-degree at most $2$. Thus, the sum of out-degrees in $G_2$ is greater than the sum of in-degrees. But in fact they must be equal. Hence, $G_1$ and $G_2$ indeed have a common vertex. \end{proof} The following proposition suggests that the graph $\cG(\lambda,f)$ can be complicated. \begin{proposition} \label{prop:perm-bi} Each graph $\cG(\lambda,f)$ with permutation polynomial $f$ is not a bipartite graph. \end{proposition} \begin{proof} By contradiction, assume that the graph $\cG(\lambda,f)$ over $\F_q$ is a bipartite graph. Then, the vertex set can be separated into two subsets, say $S_1$ and $S_2$, such that there are no edges among the vertices in $S_1$ and also there are no edges among the vertices in $S_2$. So, there are no loops in $\cG(\lambda,f)$, which implies $f(0) \ne 0$. Without loss of generality, we assume that $f^{-1}(0) \in S_1$ and $0 \in S_2$. Let $m=|S_1|$ and $n=|S_2|$. Then, by Proposition~\ref{prop:perm-inout}, the sum of out-degrees of the vertices in $S_1$ is equal to $2(m-1)+1$, and the sum of in-degrees of the vertices in $S_2$ is equal to $2(n-1)+1$. By assumption, we must have $$ 2(m-1)+1 = 2(n-1)+1, $$ which implies that $m=n$, and so $m+n$ is an even integer. However, $m+n =q$, and $q$ is odd. This leads to a contradiction. So, $\cG(\lambda,f)$ is not a bipartite graph. 
\end{proof} \subsection{Existence of Hamiltonian cycles} For each graph $\cG(\lambda,f)$ over $\F_q$ with permutation polynomial $f$, by Propositions~\ref{prop:perm-inout} and \ref{prop:perm-conn} its connected components are close to being strongly connected $2$-regular digraphs except at the vertex $0$ when $f(0)=0$. Note that not every strongly connected $2$-regular digraph (even without loops) has a Hamiltonian cycle; see, for example, \cite[Corollary 3.8.2]{GR}. However, for the graph $\cG(\lambda,f)$, its connected components all have Hamiltonian cycles. \begin{theorem} \label{thm:perm-Ha} In each graph $\cG(\lambda,f)$ over $\F_q$ with permutation polynomial $f$, every connected component has a Hamiltonian cycle. \end{theorem} \begin{proof} Let $C$ be an arbitrary connected component of the graph $\cG(\lambda,f)$. By contradiction, suppose that $C$ has no Hamiltonian cycle. Then, by Proposition~\ref{prop:perm-conn}, we choose a maximal cycle, say $M$, in $C$ such that if the vertex $0$ lies in $C$, then $0$ also lies in $M$. Note that by the maximality assumption, the cycle $M$ cannot be enlarged. First, since $C$ has no Hamiltonian cycle, the cycle $M$ does not go through all the vertices of $C$. Then, at least one of the vertices in $M$ has a successor not in $M$; see Figure~\ref{fig:perm-Ha1}. In Figure~\ref{fig:perm-Ha1}, the vertex $y_0$ is a successor of the vertex $x_0$ and is outside of the cycle $M$. Note that if the vertex $0$ lies in $C$, then it also lies in $M$. So, we have $y_0 \ne 0$, and thus the in-degree of $y_0$ is $2$. By construction and noticing that the out-degree of each vertex is at most two, the vertex $y_0$ must have a predecessor not in $M$, say $z_0$ (because there is an edge from $z_0$ to $-y_0$). In fact, either $x_0=f^{-1}(y_0^2), z_0 = f^{-1}(\lambda y_0^2) $, or $x_0=f^{-1}(\lambda y_0^2), z_0 = f^{-1}(y_0^2)$. 
\begin{figure}[!htbp] \begin{center} \setlength{\unitlength}{1cm} \begin{picture}(20,3) \multiput(2.5,1)(1.3,0){3}{$\bullet$} \multiput(2.75, 1.1)(1.3,0){3}{\vector(1,0){1}} \multiput(6.55,1)(0.65,0){3}{$\cdots$} \multiput(8.5,1)(1.3,0){2}{$\bullet$} \put(8.75, 1.1){\vector(1,0){1}} \put(6.25,1.25){\oval(7.3,3.3)[t]} \put(6.25, 2.9){\vector(-1,0){0}} \multiput(2.5,-0.3)(1.3,0){2}{$\bullet$} \put(2.6, 0.95){\vector(0,-1){1}} \put(3.9, -0.05){\vector(0,1){1}} \put(3.75, -0.2){\vector(-1,0){1}} \put(2.1,1){$x_0$} \put(3.5,1.35){$-y_0$} \put(2.1,-0.3){$y_0$} \put(4.05,-0.3){$z_0$} \end{picture} \end{center} \caption{The cycle $M$} \label{fig:perm-Ha1} \end{figure} Now, in Figure~\ref{fig:perm-Ha1}, by Proposition~\ref{prop:perm-conn}, the edge from $z_0$ to $y_0$ lies in a cycle, say $C_1$. If this cycle does not intersect with $M$, then we can merge the cycles $M$ and $C_1$ by dropping the edge from $x_0$ to $-y_0$ and the edge from $z_0$ to $y_0$. This gives a larger cycle, which contradicts the maximality assumption on $M$. So, the cycle $C_1$ must intersect with $M$. Then, there must exist a vertex, say $x_1$, in $C_1$ and also in $M$ such that along the cycle $C_1$ the path from $x_1$ to $y_0$ does not intersect with $M$ except at the vertex $x_1$; see Figure~\ref{fig:perm-Ha2} for an example. As before, $y_1 \ne 0$, and the vertex $z_1$ does not lie in $M$. 
\begin{figure}[!htbp] \begin{center} \setlength{\unitlength}{1cm} \begin{picture}(20,3) \multiput(2.5,1)(1.3,0){2}{$\bullet$} \multiput(2.75, 1.1)(1.3,0){2}{\vector(1,0){1}} \multiput(5.2,1)(0.65,0){2}{$\cdots$} \multiput(6.5,1)(1.3,0){2}{$\bullet$} \put(6.75, 1.1){\vector(1,0){1}} \put(8.05, 1.1){\vector(1,0){1}} \multiput(9.25,1)(0.65,0){2}{$\cdots$} \multiput(10.5,1)(1.3,0){2}{$\bullet$} \put(10.75, 1.1){\vector(1,0){1}} \put(7.25,1.25){\oval(9.3,3.3)[t]} \put(7.25, 2.9){\vector(-1,0){0}} \multiput(2.5,-0.3)(1.3,0){2}{$\bullet$} \put(2.6, 0.95){\vector(0,-1){1}} \put(3.9, -0.05){\vector(0,1){1}} \put(3.75, -0.2){\vector(-1,0){1}} \put(2.1,1){$x_0$} \put(3.5,1.35){$-y_0$} \put(2.1,-0.3){$y_0$} \put(4.05,0){$z_0$} \multiput(6.5,-0.3)(1.3,0){2}{$\bullet$} \put(6.6, 0.95){\vector(0,-1){1}} \put(7.9, -0.05){\vector(0,1){1}} \put(7.75, -0.2){\vector(-1,0){1}} \put(6.45, -0.2){\vector(-1,0){1}} \multiput(4.15,-0.3)(0.65,0){2}{$\cdots$} \put(6.5,1.35){$x_1$} \put(7.5,1.35){$-y_1$} \put(6.75,0){$y_1$} \put(8.05,0){$z_1$} \end{picture} \end{center} \caption{Going through the procedure} \label{fig:perm-Ha2} \end{figure} We then repeat the above procedure again and again, and thus obtain an infinite sequence of vertices in $M$: $x_0, x_1, x_2, \ldots$. For example, in Figure~\ref{fig:perm-Ha2}, by Proposition~\ref{prop:perm-conn}, the edge from $z_1$ to $y_1$ lies in a cycle, say $C_2$. As above, the cycle $C_2$ must intersect $M$, and there must exist a vertex, say $x_2$, lying in both $C_2$ and $M$ such that along the cycle $C_2$ the path from $x_2$ to $y_1$ does not intersect $M$ except at the vertex $x_2$. Then, we draw the vertices $y_2,-y_2,z_2$ and the edges among them as before (note that $y_2 \ne 0$, and $z_2$ does not lie in $M$).
\begin{figure}[!htbp] \begin{center} \setlength{\unitlength}{1cm} \begin{picture}(20,3.5) \multiput(2.5,1.8)(1.3,0){2}{$\bullet$} \multiput(2.75, 1.9)(1.3,0){2}{\vector(1,0){1}} \multiput(5.2,1.8)(0.65,0){2}{$\cdots$} \multiput(6.5,1.8)(1.3,0){2}{$\bullet$} \put(6.75, 1.9){\vector(1,0){1}} \put(8.05, 1.9){\vector(1,0){1}} \multiput(9.25,1.8)(0.65,0){2}{$\cdots$} \multiput(10.5,1.8)(1.3,0){2}{$\bullet$} \put(10.75, 1.9){\vector(1,0){1}} \put(7.25,2.05){\oval(9.3,2.1)[t]} \put(7.25, 3.1){\vector(-1,0){0}} \multiput(2.5,0.5)(1.3,0){2}{$\bullet$} \put(2.6, 1.75){\vector(0,-1){1}} \put(3.9, 0.75){\vector(0,1){1}} \put(3.75, 0.6){\vector(-1,0){1}} \put(1.1,1.8){$x_2=x_0$} \put(3.5,2.15){$-y_0$} \put(2.1,0.5){$y_0$} \put(4.05,0.8){$z_0$} \multiput(6.5,0.5)(1.3,0){2}{$\bullet$} \put(6.6, 1.75){\vector(0,-1){1}} \put(7.9, 0.75){\vector(0,1){1}} \put(7.75, 0.6){\vector(-1,0){1}} \put(6.45, 0.6){\vector(-1,0){1}} \multiput(4.15,0.5)(0.65,0){2}{$\cdots$} \put(6.5,2.15){$x_1$} \put(7.5,2.15){$-y_1$} \put(6.75,0.8){$y_1$} \put(8.05,0.8){$z_1$} \put(5.25,0.45){\oval(5.3,1.5)[b]} \put(5.25, -0.3){\vector(1,0){0}} \end{picture} \end{center} \caption{The case $x_2=x_0$} \label{fig:perm-Ha3} \end{figure} Since $M$ is a finite cycle, we must have $x_i = x_j$ for some integers $j > i \ge 0$. Without loss of generality, we assume $x_2=x_0$. Then, the situation is as in Figure~\ref{fig:perm-Ha3}. By going through the edges from $x_1$ to $y_1$, to $z_0$, to $y_0$, to $z_1$ and then to $-y_1$, we can enlarge the cycle $M$. This contradicts the maximality of $M$. Therefore, the connected component $C$ indeed has a Hamiltonian cycle. \end{proof} We remark that the result in Theorem~\ref{thm:perm-Ha} can also hold for some non-permutation polynomials, such as $f(X)=X^2$.
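As a sanity check on Theorem~\ref{thm:perm-Ha}, the claim can be verified by brute force over a small field. The following Python sketch is purely illustrative and not part of the computations described later; the choices $q=7$, $\lambda=3$ (a non-square modulo $7$) and $f(X)=X+1$ are ours. It builds $\cG(3,X+1)$ over $\F_7$ and tests every connected component for a directed Hamiltonian cycle:

```python
from itertools import permutations

q, lam = 7, 3                 # F_7; 3 is a non-square mod 7 (squares: {1, 2, 4})
f = lambda x: (x + 1) % q     # a permutation polynomial of F_7

# Edge x -> y whenever y^2 = f(x) or lam * y^2 = f(x) in F_q.
edges = {x: [y for y in range(q)
             if (y * y - f(x)) % q == 0 or (lam * y * y - f(x)) % q == 0]
         for x in range(q)}

def components():
    """Connected components of the underlying undirected graph."""
    seen, comps = set(), []
    for v in range(q):
        if v in seen:
            continue
        comp, stack = set(), [v]
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(edges[u])                              # out-neighbours
            stack.extend(w for w in range(q) if u in edges[w])  # in-neighbours
        seen |= comp
        comps.append(sorted(comp))
    return comps

def has_ham_cycle(comp):
    """Brute-force test for a directed Hamiltonian cycle in one component."""
    if len(comp) == 1:
        return comp[0] in edges[comp[0]]   # a loop closes a one-vertex cycle
    start = comp[0]
    return any(
        all(cyc[(i + 1) % len(cyc)] in edges[cyc[i]] for i in range(len(cyc)))
        for cyc in ((start,) + p for p in permutations(comp[1:])))

print([(comp, has_ham_cycle(comp)) for comp in components()])
```

The brute force is only feasible for very small $q$; for the actual experiments, the backtracking search of Section~\ref{sec:alg} is used instead.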
\begin{remark} For a connected graph $\cG(\lambda,f)$ over $\F_q$, by Theorem~\ref{thm:perm-Ha} there is a Hamiltonian cycle travelling through all the $q$ vertices, and outputting the vertices along the Hamiltonian cycle can give a pseudo-random number generator. \end{remark} \subsection{Classification of Hamiltonian cycles} Recall that the edges of a graph $\cG(\lambda,f)$ come from either $Y^2=f(X)$ or $\lambda Y^2 = f(X)$. Using this, we can classify the Hamiltonian cycles of the connected components of $\cG(\lambda,f)$. We first associate weights to the edges in $\cG(\lambda,f)$. \begin{definition} \label{def:weight} For any edge $(x, y)$ in $\cG(\lambda,f)$, if the edge comes from the relation $y^2 = f(x)$, then its weight is $0$, and otherwise its weight is $1$. In particular, the edge going to the vertex $0$ has weight $0$. \end{definition} We can now classify the (directed) paths and Hamiltonian cycles in $\cG(\lambda,f)$. \begin{definition} \label{def:type} A trail in $\cG(\lambda,f)$ is a path whose edges all have the same weight. A path in $\cG(\lambda,f)$ is said to be of Type $n$ (where $n$ is a positive integer) if it contains a trail of length $n$ but no trail of length greater than $n$. Then, a Hamiltonian cycle $H$ of a connected component in $\cG(\lambda,f)$ is said to be of Type $n$ if $H \setminus \{0\}$ is a path of Type $n$. \end{definition} In Definition \ref{def:type}, we exclude the edge going to the vertex $0$, because it can be viewed as coming from both $Y^2=f(X)$ and $\lambda Y^2= f(X)$. For any polynomial $f \in \F_q[X]$, denote by $\cV(f)$ the value set of $f$, that is, $$ \cV(f) = \{f(a): \, a\in \F_q\}. $$ We now want to find a large class of connected graphs $\cG(\lambda,f)$ which have no Hamiltonian cycle of Type 1. \begin{theorem} \label{thm:HT1} Let $f \in \F_q[X]$ be a permutation polynomial.
Suppose that the graph $\cG(\lambda,f)$ is connected, $$ |\{(f^{-1}(a^2))^2:\, a \in \F_q\}| \ne \frac{q-1}{2} $$ and $$ |\{(f^{-1}(\lambda a^2))^2:\, a \in \F_q\}| \ne \frac{q-1}{2}. $$ Then, the graph $\cG(\lambda,f)$ has no Hamiltonian cycle of Type 1. \end{theorem} \begin{proof} Since $f \in \F_q[X]$ is a permutation polynomial, there is a permutation polynomial $g \in \F_q[X]$ such that both $f(g(X))$ and $g(f(X))$ induce the identity map from $\F_q$ to itself. By assumption, \begin{equation} \label{eq:Vg1} |\cV (g(X^2)^2)| \ne \frac{q-1}{2}, \qquad |\cV(g(\lambda X^2)^2)| \ne \frac{q-1}{2}. \end{equation} Since the graph $\cG(\lambda,f)$ is connected, by Theorem~\ref{thm:perm-Ha} it has a Hamiltonian cycle. Let $H$ be an arbitrary Hamiltonian cycle of $\cG(\lambda,f)$. We prove the desired result by contradiction. Suppose that $H$ is of Type 1. Then, along the cycle $H$, we obtain a path $P$ of Type 1 containing all $q$ vertices and going from the vertex $0$ to the vertex $g(0)$. So, no two consecutive edges in $P$ have the same weight. Clearly, $|\cV (g(X^2)^2)| \le (q+1)/2$ and $|\cV (g(\lambda X^2)^2)| \le (q+1)/2$. If either $|\cV (g(X^2)^2)| = (q+1)/2$ or $|\cV (g(\lambda X^2)^2)| = (q+1)/2$, then in view of $\cV (g(X^2)) \cap \cV (g(\lambda X^2))=\{g(0)\}$ and $\cV (g(X^2)) \cup \cV (g(\lambda X^2))=\F_q$, we must have $g(0)=0$. This contradicts the fact that $g(0) \ne 0$. So, noticing \eqref{eq:Vg1}, we must have \begin{equation} \label{eq:Vg2} |\cV (g(X^2)^2)| \le \frac{q-3}{2}, \qquad |\cV(g(\lambda X^2)^2)| \le \frac{q-3}{2}. \end{equation} Note that by reversing the directions of all the edges in the graph $\cG(\lambda,f)$, we exactly obtain the equational graph, say $\cG^\prime (\lambda,g)$, generated by the equation $$ (Y-g(X^2))(Y-g(\lambda X^2))=0. $$ Then, the path $P$ in $\cG(\lambda,f)$ corresponds to a path, say $P^\prime$, in $\cG^\prime (\lambda,g)$.
So, $P^\prime$ goes from the vertex $g(0)$ to the vertex $0$, and $P^\prime$ also has $q$ vertices. Now, let $x=g(0)$, and let $(x, y)$ be the first edge in the path $P^\prime$. Clearly, there are two cases, depending on whether $y=g(x^2)$ or $y=g(\lambda x^2)$. We first consider the case $y=g(x^2)$. Define the polynomial $h(X) = g(\lambda g(X^2)^2)$. Then, the path $P^\prime$ is of the form shown in Figure \ref{fig:perm-P1}, where $x=g(0)$. So, from Figure \ref{fig:perm-P1}, in this case the number of vertices in $P^\prime$ is at most $$ 2|\cV(h)| + 2 \le q-1, $$ where the inequality follows from $|\cV(h)|=|\cV (g(X^2)^2)|$ and \eqref{eq:Vg2}. This contradicts the fact that $P^\prime$ has $q$ vertices. \begin{figure}[!htbp] \begin{center} \setlength{\unitlength}{1cm} \begin{picture}(20,1) \multiput(1.5,0.5)(1.8,0){5}{$\bullet$} \multiput(1.75, 0.6)(1.8,0){5}{\vector(1,0){1.5}} \multiput(10.6,0.5)(0.65,0){2}{$\cdots$} \put(1.5,0){$x$} \put(3,0){$g(x^2)$} \put(4.9,0){$h(x)$} \put(6.2,0){$g(h(x)^2)$} \put(8.2,0){$h(h(x))$} \end{picture} \end{center} \caption{The first case of $P^\prime$} \label{fig:perm-P1} \end{figure} Similarly, in the other case $y=g(\lambda x^2)$, we define the polynomial $u(X) = g( g(\lambda X^2)^2)$. Then, the path $P^\prime$ is of the form shown in Figure \ref{fig:perm-P2}, where $x=g(0)$. Thus, from Figure \ref{fig:perm-P2}, in this case the number of vertices in $P^\prime$ is at most $$ 2|\cV(u)| + 2 \le q-1, $$ where the inequality follows from $|\cV(u)|=|\cV (g(\lambda X^2)^2)|$ and \eqref{eq:Vg2}. This again contradicts the fact that $P^\prime$ has $q$ vertices.
\begin{figure}[!htbp] \begin{center} \setlength{\unitlength}{1cm} \begin{picture}(20,1) \multiput(1.5,0.5)(1.8,0){5}{$\bullet$} \multiput(1.75, 0.6)(1.8,0){5}{\vector(1,0){1.5}} \multiput(10.6,0.5)(0.65,0){2}{$\cdots$} \put(1.5,0){$x$} \put(2.9,0){$g(\lambda x^2)$} \put(4.9,0){$u(x)$} \put(6.1,0){$g(\lambda u(x)^2)$} \put(8.2,0){$u(u(x))$} \end{picture} \end{center} \caption{The second case of $P^\prime$} \label{fig:perm-P2} \end{figure} Therefore, there is no such Hamiltonian cycle of Type 1. \end{proof} When the polynomial $f$ is of degree one, we can achieve more. \begin{theorem} \label{thm:linear-P} Let $f \in \F_q[X]$ be a polynomial of degree one with non-zero constant term. Then, any path of Type 1 in the graph $\cG(\lambda,f)$ contains at most $\lfloor \frac{3}{4}q + \frac{17}{4} \rfloor$ vertices. \end{theorem} \begin{proof} From Proposition~\ref{prop:linear1} below, we can assume that $f(X)=X+a, a\in \F_q^*$. Let $P$ be any path of Type $1$ in the graph $\cG(\lambda,f)$. Let $N$ be the number of vertices in $P$. Following the arguments and the notation in the proof of Theorem~\ref{thm:HT1}, in the case here we have $g(X) = X - a$, $$ h(X) = g(\lambda g(X^2)^2) = \lambda (X^2-a)^2-a, $$ and $$ u(X) = g( g(\lambda X^2)^2) = (\lambda X^2-a)^2-a. $$ Then, either $N \le 2 |\cV(h)| + 2$ or $N \le 2 |\cV(u)| + 2$. Note that $\cV(h)=\cV((X^2-a)^2)=\cV(X^4-2aX^2)$. Then, noticing $a \in \F_q^*$ and using \cite[Theorem 10]{CGM}, we obtain \begin{equation} \label{eq:Vh} \begin{split} |\cV(h)| & \le \frac{q-1}{2\gcd(4,q-1)} + \frac{q+1}{2\gcd(4,q+1)} + 1 \\ & = \frac{3}{8}q - \frac{1}{2\gcd(4,q-1)} + \frac{1}{2\gcd(4,q+1)} + 1 \\ & \le \frac{3}{8}q + \frac{9}{8}. \end{split} \end{equation} Similarly, we obtain $$ |\cV(u)| \le \frac{3}{8}q + \frac{9}{8}. $$ Finally, collecting the above estimates we have $$ N \le \frac{3}{4}q + \frac{17}{4}. $$ This completes the proof. 
\end{proof} We remark that the result in Theorem~\ref{thm:linear-P} does not always hold if $f$ has zero constant term. For example, the graph $\cG(2,X)$ over $\F_{19}$ has a path of Type 1 having 18 vertices. \begin{corollary} \label{cor:linear-HT1} Assume $q > 17$. Let $f \in \F_q[X]$ be a polynomial of degree one. Suppose that the graph $\cG(\lambda,f)$ is connected. Then, the graph $\cG(\lambda,f)$ has no Hamiltonian cycle of Type 1. \end{corollary} \begin{proof} Since $\cG(\lambda,f)$ is connected, we know that $f$ has non-zero constant term (otherwise the vertex 0 itself forms a connected component). By Theorem~\ref{thm:linear-P}, if $\frac{3}{4}q + \frac{17}{4} < q$, then the graph $\cG(\lambda,f)$ has no Hamiltonian cycle of Type 1. Since $q > 17$, this automatically holds. \end{proof} In fact, we can obtain similar results for more permutation polynomials over $\F_q$. \begin{theorem} \label{thm:perm-HT1} Let $f \in \F_q[X]$ be a permutation polynomial of the form $Xw(X^2) + a, w \in \F_q[X], a \in \F_q^*$. Then, the results in Theorem~\ref{thm:linear-P} and Corollary~\ref{cor:linear-HT1} still hold for the graph $\cG(\lambda,f)$. \end{theorem} \begin{proof} Since $f=Xw(X^2) + a \in \F_q[X]$ is a permutation polynomial over $\F_q$, there is a permutation polynomial $g \in \F_q[X]$ such that both $f(g(X))$ and $g(f(X))$ induce the identity map from $\F_q$ to itself. As in the proof of Theorem~\ref{thm:linear-P}, it suffices to prove $$ |\cV (g(X^2)^2)| \le \frac{3}{8}q + \frac{9}{8}, \qquad |\cV(g(\lambda X^2)^2)| \le \frac{3}{8}q + \frac{9}{8}. $$ Denote $n=(q+1)/2$. Note that there are exactly $n$ squares in $\F_q$ (including 0). Let $a_1, \ldots, a_n \in \F_q$ be all the elements such that $a_i + a$ is a square for each $1 \le i \le n$. Then, let $b_1, \ldots, b_n \in \F_q$ be such that $b_i w(b_i^2) = a_i$ for each $1 \le i \le n$ (here one should note that $Xw(X^2)$ is also a permutation polynomial over $\F_q$). 
Note that if we have $y = g(x^2)$ for some $x,y \in \F_q$, then $x^2 = f(y) = yw(y^2) +a$, and so $yw(y^2) = a_i$ for some $1 \le i \le n$, and thus $y = b_i$. So, we have $$ \cV (g(X^2)) = \{b_1, \ldots, b_n\}, $$ which implies $$ \cV (g(X^2)^2) = \{b_1^2, \ldots, b_n^2\}. $$ Since $\{a_1, \ldots, a_n\} = \cV(X^2-a)$ by construction, as in \eqref{eq:Vh} we obtain $$ |\{a_1^2, \ldots, a_n^2\}| = |\cV((X^2-a)^2)| \le \frac{3}{8}q + \frac{9}{8}. $$ Clearly, for any $1 \le i,j \le n, i \ne j$, we have $a_i = - a_j$ if and only if $b_i = -b_j$ (because $Xw(X^2)$ is a permutation polynomial). Hence, we have $$ |\{b_1^2, \ldots, b_n^2\}| = |\{a_1^2, \ldots, a_n^2\}|, $$ which implies $$ |\cV (g(X^2)^2)| \le \frac{3}{8}q + \frac{9}{8}. $$ Similarly, we obtain $$ |\cV(g(\lambda X^2)^2)| \le \frac{3}{8}q + \frac{9}{8}. $$ This completes the proof. \end{proof} Note that when $3 \nmid q-1$, $X^3 +a \in \F_q[X]$ is a permutation polynomial, so we immediately obtain the following result from Theorem~\ref{thm:perm-HT1}. \begin{corollary} \label{cor:cubic-HT1} Assume that $q > 17$ and $3 \nmid q-1$. Let $f = X^3 + a \in \F_q[X]$. Suppose that the graph $\cG(\lambda,f)$ is connected. Then, the graph $\cG(\lambda,f)$ has no Hamiltonian cycle of Type 1. \end{corollary} In Sections~\ref{sec:linear} and \ref{sec:cubic}, we will make computations about Hamiltonian cycles of Type 2 and Type 3 for the graphs $\cG(\lambda,X+a)$ and $\cG(\lambda,X^3+a)$, respectively. The computations suggest that these graphs can have many types of Hamiltonian cycles. \subsection{Binary sequences derived from Hamiltonian cycles} Travelling through a cycle in $\cG(\lambda,f)$, we can get a binary sequence by recording the weights of the edges (see Definition~\ref{def:weight}) in the cycle. In particular, we can get a balancing sequence along any Hamiltonian cycle of a connected component.
The word ``balancing" means that the difference between the number of $0$'s and the number of $1$'s in the sequence is at most $1$. \begin{theorem} \label{thm:perm-balance} For each graph $\cG(\lambda,f)$ with permutation polynomial $f$, along any Hamiltonian cycle of any connected component, we can get a balancing binary sequence. \end{theorem} \begin{proof} Let $H$ be a Hamiltonian cycle of a connected component $C$ in the graph $\cG(\lambda,f)$. The existence of $H$ is guaranteed by Theorem~\ref{thm:perm-Ha}. We can assume that $C$ has a vertex not equal to $0$. As mentioned above, we can get a binary sequence by going through the cycle $H$ and recording the weights of the edges in $H$. So, it remains to prove that this sequence is balancing. Let $y$ be any non-zero vertex in $C$. Then, $-y$ is also in $C$. In fact, by Proposition~\ref{prop:perm-inout} they form a rectangle in $C$ with two possible weight assignments; see Figure~\ref{fig:perm-rec}. Notice that the cycle $H$ passes through every vertex in $C$, in particular through the vertices $x, \pm y, z$ in Figure~\ref{fig:perm-rec}. Then, it is easy to see that the cycle $H$ goes through the edge from $x$ to $y$ with weight $0$ (respectively, $1$) if and only if it goes through the edge from $z$ to $-y$ with weight $1$ (respectively, $0$); also, the cycle $H$ goes through the edge from $x$ to $-y$ with weight $0$ (respectively, $1$) if and only if it goes through the edge from $z$ to $y$ with weight $1$ (respectively, $0$).
\begin{figure}[!htbp] \begin{center} \setlength{\unitlength}{1cm} \begin{picture}(20,2) \multiput(2.5,1.5)(1.8,0){2}{$\bullet$} \put(2.75, 1.6){\vector(1,0){1.5}} \multiput(2.5,-0.3)(1.8,0){2}{$\bullet$} \put(2.6, 1.45){\vector(0,-1){1.5}} \put(4.4, -0.05){\vector(0,1){1.5}} \put(4.25, -0.2){\vector(-1,0){1.5}} \put(2.2,1.5){$x$} \put(4.6,1.5){$-y$} \put(2.2,-0.3){$y$} \put(4.55,-0.3){$z$} \put(3.35,1.75){$0$} \put(2.3,0.6){$0$} \put(4.5,0.6){$1$} \put(3.4,-0.05){$1$} \multiput(7.5,1.5)(1.8,0){2}{$\bullet$} \put(7.75, 1.6){\vector(1,0){1.5}} \multiput(7.5,-0.3)(1.8,0){2}{$\bullet$} \put(7.6, 1.45){\vector(0,-1){1.5}} \put(9.4, -0.05){\vector(0,1){1.5}} \put(9.25, -0.2){\vector(-1,0){1.5}} \put(7.2,1.5){$x$} \put(9.6,1.5){$-y$} \put(7.2,-0.3){$y$} \put(9.55,-0.3){$z$} \put(8.35,1.75){$1$} \put(7.3,0.6){$1$} \put(9.5,0.6){$0$} \put(8.4,-0.05){$0$} \end{picture} \end{center} \caption{The rectangle related to $\pm y$} \label{fig:perm-rec} \end{figure} Hence, if the vertex $0$ is not in $C$, then we have the same numbers of $0$'s and $1$'s in the sequence. Otherwise, if $0$ is a vertex in $C$, then by Definition~\ref{def:weight} the weight of the edge from $f^{-1}(0)$ to $0$ is $0$, and thus there is exactly one more $0$ than $1$ in the sequence. This completes the proof. \end{proof} \section{Algorithms} \label{sec:alg} In this section, we briefly describe the algorithms we use for counting connected components and searching Hamiltonian cycles in a graph $\cG(\lambda,f)$. We then use them to make computations for the cases $f=X+a$ and $f=X^3+a$ in Sections~\ref{sec:linear} and \ref{sec:cubic} respectively. \subsection{Counting connected components} In order to count connected components in a graph $\cG(\lambda,f)$, we first build the edge table, that is, the table containing all the edges $(x,y)$ such that $(x,y)$ satisfies either $y^2 = f(x)$ or $\lambda y^2 = f(x)$.
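As an illustration of this step, once square roots in $\F_q$ are tabulated, the edge table can be built in a single pass over $\F_q$. The following Python sketch is our own illustration (the parameters $q=17$, $\lambda=3$ and $f=X^3+1$ are example choices, not taken from the paper's experiments); edges $x \to y$ are taken as in the definition of $\cG(\lambda,f)$, that is, $y^2=f(x)$ with weight $0$ or $\lambda y^2 = f(x)$ with weight $1$:

```python
q, lam, a = 17, 3, 1               # 3 is a non-square mod 17; f = X^3 + 1
f = lambda x: (x * x * x + a) % q  # cubic case; a permutation since 3 does not divide q - 1

# Tabulate square roots once: sqrt_tab[s] = all y in F_q with y^2 = s.
sqrt_tab = {s: [] for s in range(q)}
for y in range(q):
    sqrt_tab[(y * y) % q].append(y)

inv_lam = pow(lam, q - 2, q)       # lam^{-1} via Fermat's little theorem

# Edge table with weights: lam * y^2 = f(x) is the same as y^2 = inv_lam * f(x).
# The edge into the vertex 0 (when f(x) = 0) is recorded once, with weight 0.
edge_table = {
    x: [(y, 0) for y in sqrt_tab[f(x)]]
       + [(y, 1) for y in sqrt_tab[(inv_lam * f(x)) % q] if y != 0]
    for x in range(q)
}

def num_components(table):
    """Standard depth-first search on the underlying undirected graph."""
    adj = {v: set() for v in table}
    for x, outs in table.items():
        for y, _ in outs:
            adj[x].add(y)
            adj[y].add(x)
    seen, count = set(), 0
    for v in table:
        if v in seen:
            continue
        count += 1
        stack = [v]
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend(adj[u])
    return count

print(num_components(edge_table))
```

Since $f$ is a permutation here, every vertex gets out-degree $2$ except the single $x$ with $f(x)=0$, so the table holds $2q-1$ directed edges in total.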
Once the edge table is constructed, we perform a standard depth-first search to find the number of connected components in $\cG(\lambda,f)$. Note that once the edge table is built, the same code works for any polynomial; here we focus on the linear and cubic cases. The linear case $f=X+a$ is straightforward. For the cubic case $f=X^3+a$, to speed up the process, we precompute the values of $x^3$ for all $x \in \F_q$ as well as the values of square roots in $\F_q$. Therefore, for each $a$, we only need to perform $q$ additions to construct the edge table. \subsection{Searching Hamiltonian cycles} To enumerate all the Hamiltonian cycles, we use a backtracking algorithm, adapted to the Type sought. For example, to search for Hamiltonian cycles of Type 2, we run the backtracking while discarding any partial path that contains a trail of length greater than $2$; this is much faster, since the proportion of such paths decreases significantly as $q$ increases. \section{Linear case} \label{sec:linear} Since for any $a \in \F_q^*$ and $b\in \F_q$ the polynomial $aX+b$ is a permutation polynomial, all the results in Section~\ref{sec:perm} automatically hold for the graph $\cG(\lambda,aX+b)$. Here, we want to investigate these graphs in more detail. Recall that $q$ is odd and $\lambda$ is a non-square element in $\F_q$. \subsection{Isomorphism classes} It is easy to find some isomorphism classes of the graphs $\cG(\lambda,aX+b)$ over $\F_q$. \begin{proposition} \label{prop:linear1} For any $a \in \F_q^*$ and $b\in \F_q$, the graph $\cG(\lambda,aX+b)$ is isomorphic to the graph $\cG(\lambda,X+a^{-2}b)$. \end{proposition} \begin{proof} Let $\psi$ be the bijection from $\F_q$ to itself defined by $\psi(x)=a^{-1}x$. Automatically, $\psi$ is a bijection between the vertices of $\cG(\lambda,aX+b)$ and the vertices of $\cG(\lambda,X+a^{-2}b)$.
To prove the isomorphism, it suffices to show that there is an edge from $x$ to $y$ in $\cG(\lambda,aX+b)$ if and only if there is an edge from $\psi(x)$ to $\psi(y)$ in $\cG(\lambda,X+a^{-2}b)$. This can be done by direct computation. \end{proof} \begin{proposition} \label{prop:linear2} For any $a\in \F_q$, the graph $\cG(\lambda, X+a)$ is isomorphic to the graph $\cG(\lambda^{-1}, X+\lambda a)$. \end{proposition} \begin{proof} Note that the isomorphism is induced by the bijection map $\psi$ from $\F_q$ to itself defined by $\psi(x)=\lambda x$. \end{proof} From Propositions~\ref{prop:linear1} and \ref{prop:linear2}, we know that to investigate the linear case it suffices to consider the graphs $\cG(\lambda,X+a)$ when $\lambda$ runs over half of the non-square elements of $\F_q$ and $a$ runs over $\F_q$. We remark that by reversing the directions of the edges in the graph $\cG(\lambda,X+a)$, we exactly obtain the equational graph generated by the equation $(Y-X^2+a)(Y- \lambda X^2 + a)=0$. Note that the equational graph generated by the equation $Y-X^2+a = 0$ in fact has been studied extensively; see \cite{KLMMSS,MSSS,VaSha}. \subsection{Fixed vertices} In a graph, we say a vertex is a \textit{fixed vertex} if there is an edge from the vertex to itself. For any $a \in \F_q$, it is easy to see that the graph $\cG(\lambda,X+a)$ has a fixed vertex if and only if either $a+ \frac{1}{4}$ is a square or $\lambda a+ \frac{1}{4}$ is a square. So, one graph $\cG(\lambda,X+a)$ can have zero, one, two, three or four fixed vertices. We first recall some classical results for character sums with polynomial arguments, which are special cases of \cite[Theorems 5.41 and 5.48]{LN}. \begin{theorem} \label{thm:Weil} Let $\chi$ be the multiplicative quadratic character of $\F_q$, and let $f\in\F_q[X]$ be a polynomial of positive degree that is not, up to a multiplicative constant, a square of any polynomial. 
Let $d$ be the number of distinct roots of $f$ in its splitting field over $\F_q$. Under these conditions, the following inequality holds: $$ \left| \sum_{x\in\F_q}\chi(f(x))\right|\le (d-1)q^{1/2}. $$ Moreover, if $f= aX^2 + bX +c$ with $a \ne 0$ and $b^2-4ac \ne 0$, then $$ \sum_{x\in\F_q}\chi(f(x)) = - \chi(a). $$ \end{theorem} We now want to count how many graphs $\cG(\lambda,X+a)$ have a fixed vertex. \begin{proposition} \label{prop:linear-fix} Define the set $$ S_\lambda = \{a \in \F_q: \, \textrm{$\cG(\lambda,X+a)$ has a fixed vertex} \}. $$ Then, we have $$ |S_\lambda| = \frac{1}{4} \big(3q + 1 + \chi(\lambda - 1) - \chi(1-\lambda) \big), $$ where $\chi$ is the multiplicative quadratic character of $\F_q$. In particular, we have $|S_\lambda| = \frac{1}{4}(3q + 1)$ if $-1$ is a square in $\F_q$, and otherwise $|S_\lambda| = \frac{1}{4}(3q - 1)$ or $\frac{1}{4}(3q + 3)$. \end{proposition} \begin{proof} We first define the set $$ T_\lambda = \{a \in \F_q: \, \textrm{$\cG(\lambda,X+a)$ has no fixed vertex} \}. $$ Since $S_\lambda = \F_q \setminus T_\lambda$, it is equivalent to show that $$ |T_\lambda| = \frac{1}{4} \big(q - 1 - \chi(\lambda - 1) + \chi(1-\lambda) \big). $$ Note that by convention, $\chi(0)=0$. For any $a \in \F_q$, we have that $a \in T_\lambda$ if and only if both $a+ \frac{1}{4}$ and $\lambda a+ \frac{1}{4}$ are not squares, that is, $$ \chi(a+ \frac{1}{4}) = \chi(\lambda a+ \frac{1}{4}) = -1. $$ So, we obtain \begin{align*} |T_\lambda| = & \frac{1}{4} \sum_{a \in \F_q} \big(1-\chi(a+ \frac{1}{4})\big) \big(1-\chi(\lambda a+ \frac{1}{4})\big) \\ & - \frac{1}{4} (1-\chi(1-\lambda)) - \frac{1}{4} (1+\chi(\lambda -1)), \end{align*} where the last two terms come from the two cases when $a+ \frac{1}{4} = 0$ or $\lambda a+ \frac{1}{4} = 0$. 
Then, expanding the brackets we further have \begin{align*} |T_\lambda| = \frac{q}{4} - \frac{1}{2} - \frac{1}{4} \chi(\lambda - 1) + \frac{1}{4} \chi(1- \lambda) + \frac{1}{4} \sum_{a \in \F_q} \chi\big((a+ \frac{1}{4})(\lambda a+ \frac{1}{4})\big), \end{align*} where we use the fact $\sum_{a \in \F_q} \chi(a) =0$. Using Theorem~\ref{thm:Weil} and noticing that $\lambda$ is a non-square element, we have \begin{align*} \sum_{a \in \F_q} \chi\big((a+ \frac{1}{4})(\lambda a+ \frac{1}{4})\big) & = \sum_{a \in \F_q} \chi\big(\lambda a^2 + \frac{1}{4}(\lambda + 1)a + \frac{1}{16}\big) \\ & = - \chi(\lambda) = 1. \end{align*} Hence, we obtain $$ |T_\lambda| = \frac{1}{4} \big(q - 1 - \chi(\lambda - 1) + \chi(1-\lambda) \big). $$ This completes the proof. \end{proof} \subsection{Small connected components} \label{sec:linear-conn} Here we want to determine the small connected components of the graphs $\cG(\lambda,X+a)$. This yields certain families of unconnected graphs, and the later computations suggest that these families cover almost all the unconnected graphs in the linear case. \begin{proposition} \label{prop:linear-com2} For any $a \in \F_q^*$, the graph $\cG(\lambda,X+a)$ has a connected component with two vertices if and only if $\lambda \ne -1$ and $a = 2(\lambda +1)/(\lambda -1)^2$. In particular, if $\lambda \ne -1$ and $a = 2(\lambda +1)/(\lambda -1)^2$, then the vertices $2/(1-\lambda), 2/(\lambda -1)$ form a connected component in $\cG(\lambda,X+a)$. \end{proposition} \begin{proof} First, suppose that the graph $\cG(\lambda,X+a)$ has a connected component, say $C$, with two vertices. By construction and using Proposition~\ref{prop:perm-inout}, both vertices in $C$ have a loop, and they form a cycle. Moreover, if one vertex is $x \in \F_q$, then the other must be $-x \in \F_q$. Note that $x \ne 0$. Since the vertices $x$ and $-x$ form a cycle, without loss of generality we can assume that \begin{equation} \label{eq:linear-com2} x^2=x+a, \qquad \lambda x^2 = -x +a.
\end{equation} If $\lambda = -1$, we have $a=0$, which contradicts $a \in \F_q^*$. So, we must have $\lambda \ne -1$. From \eqref{eq:linear-com2}, we deduce that $$ x = \frac{2}{1-\lambda}, \qquad a = \frac{2(\lambda +1)}{(\lambda -1)^2}. $$ Conversely, if $\lambda \ne -1$ and $a = 2(\lambda +1)/(\lambda -1)^2$, then the vertex $x = 2/(1-\lambda)$ satisfies \eqref{eq:linear-com2}, and thus the vertices $x$ and $-x$ form a connected component in the graph $\cG(\lambda,X+a)$. \end{proof} We remark that if $-1$ is a non-square element in $\F_q$, then the graph $\cG(\lambda,X)$ has a connected component with two vertices if and only if $\lambda = -1$ (in fact these two vertices are $1, -1$). \begin{proposition} \label{prop:linear-com3} For any $a \in \F_q$, the graph $\cG(\lambda,X+a)$ has a connected component with three vertices if and only if either $\lambda = 2, a = 1$, or $\lambda = 1/2, a = 2$. In particular, if $2$ is a non-square element in $\F_q$, then the vertices $0, 1$ and $-1$ form a connected component in $\cG(2,X+1)$, and the vertices $0, 2$ and $-2$ form a connected component in $\cG(1/2,X+2)$. \end{proposition} \begin{proof} It is easy to check that if $\lambda = 2$ and $a = 1$ (by the assumption on $\lambda$, $2$ is a non-square element in $\F_q$), the vertices $0, 1$ and $-1$ form a connected component in $\cG(2,X+1)$. Moreover, if $\lambda = 1/2$ and $a = 2$ ($2$ is a non-square element in $\F_q$), the vertices $0, 2$ and $-2$ form a connected component in $\cG(1/2,X+2)$. This shows the sufficiency. It remains to show the necessity. Now, suppose that the graph $\cG(\lambda,X+a)$ has a connected component, say $C$, with three vertices. Note that by construction if $x \in \F_q$ is a vertex in $C$, then so is $-x$. So, the vertex $0$ must be in $C$. Since there is an edge from $-a$ to $0$, by construction we have that the three vertices of $C$ are $0, a$ and $-a$ (so $a \ne 0$), and there are edges from $0$ to $a$ and $-a$.
Moreover, there is an edge from $a$ to $-a$. These give either (noticing $a \ne 0$) $$ a^2 = a, \qquad \lambda a^2 = a +a, $$ or $$ \lambda a^2 = a, \qquad a^2 = a +a. $$ Hence, we deduce that either $\lambda = 2, a = 1$, or $\lambda = 1/2, a = 2$. \end{proof} We remark that by Proposition~\ref{prop:linear2}, the graph $\cG(2,X+1)$ is isomorphic to the graph $\cG(1/2,X+2)$. By Proposition~\ref{prop:linear-com3} we also know that if $2$ is a square element in $\F_q$, then connected components with three vertices cannot occur in the graphs $\cG(\lambda,X+a)$ over $\F_q$ (note that $\lambda$ is set to be a non-square element in $\F_q$ throughout the paper). However, connected components with four vertices essentially do not occur, as the following shows. \begin{proposition} \label{prop:linear-com4} For any $a \in \F_q^*$, the graph $\cG(\lambda,X+a)$ has no connected component with four vertices. \end{proposition} \begin{proof} By contradiction, we assume that the graph $\cG(\lambda,X+a)$ has a connected component, say $C$, having four vertices. By Proposition~\ref{prop:perm-inout}, it is easy to see that only two configurations are possible for $C$; see Figure~\ref{fig:linear-4}.
\begin{figure}[!htbp] \begin{center} \setlength{\unitlength}{1cm} \begin{picture}(20,2) \multiput(2.5,1.5)(1.8,0){2}{$\bullet$} \put(2.75, 1.7){\vector(1,0){1.5}} \put(4.25, 1.5){\vector(-1,0){1.5}} \multiput(2.5,-0.3)(1.8,0){2}{$\bullet$} \put(2.5, 1.45){\vector(0,-1){1.5}} \put(2.7, -0.05){\vector(0,1){1.5}} \put(4.5, -0.05){\vector(0,1){1.5}} \put(4.3, 1.45){\vector(0,-1){1.5}} \put(4.25, -0.3){\vector(-1,0){1.5}} \put(2.75, -0.1){\vector(1,0){1.5}} \put(2.2,1.5){$x$} \put(4.6,1.5){$-y$} \put(2.2,-0.3){$y$} \put(4.55,-0.3){$-x$} \multiput(7.5,1.5)(1.8,0){2}{$\bullet$} \multiput(7.5,-0.3)(1.8,0){2}{$\bullet$} \put(7.6, 1.45){\vector(0,-1){1.5}} \put(9.4, 1.45){\vector(0,-1){1.5}} \put(9.25, -0.3){\vector(-1,0){1.5}} \put(9.25, 0){\vector(-1,1){1.5}} \put(7.75, -0.1){\vector(1,0){1.5}} \put(7.75, 0){\vector(1,1){1.5}} \put(7.65,1.7){\oval(0.3,0.8)[t]} \put(7.8, 1.7){\vector(0,-1){0}} \put(9.35,1.7){\oval(0.3,0.8)[t]} \put(9.2, 1.7){\vector(0,-1){0}} \put(7.2,1.5){$x$} \put(9.6,1.5){$y$} \put(6.8,-0.3){$-x$} \put(9.55,-0.3){$-y$} \end{picture} \end{center} \caption{Two cases of a component with four vertices} \label{fig:linear-4} \end{figure} We now consider the first case, in which there is no fixed vertex. Without loss of generality, we can assume that $$ y^2 = x + a, \qquad \lambda y^2 = -x+a. $$ Moreover, either $$ x^2 = y + a, \qquad \lambda x^2 = -y+a, $$ or $$ x^2 = -y + a, \qquad \lambda x^2 = y+a. $$ Then, noticing $x \ne \pm y$, we obtain $\lambda = -1, a = 0, x^3 =1$ and $x \ne 1$. This contradicts $a \ne 0$. So, the first case cannot happen.
For the second case in Figure~\ref{fig:linear-4}, noticing $x \ne \pm y$, we only need to consider the following four subcases: \begin{itemize} \item[(1)] $x^2 = x + a, \qquad y^2 = -x+a, \qquad \lambda y^2 = y + a, \qquad \lambda x^2 = -y + a$; \item[(2)] $x^2 = x + a, \qquad \lambda y^2 = -x+a, \qquad y^2 = y + a, \qquad \lambda x^2 = -y + a$; \item[(3)] $\lambda x^2 = x + a, \qquad y^2 = -x+a, \qquad \lambda y^2 = y + a, \qquad x^2 = -y + a$; \item[(4)] $\lambda x^2 = x + a, \qquad \lambda y^2 = -x+a, \qquad y^2 = y + a, \qquad x^2 = -y + a$. \end{itemize} By direct calculations, from Cases (2) and (3) we obtain $\lambda = 1$, which contradicts the assumption that $\lambda$ is non-square; Case (1) gives $\lambda^2 = -1, a = 0, x=1, y= -\lambda$; and Case (4) gives $\lambda^2 = -1, a = 0, x = -\lambda, y =1$. Hence, the second case also cannot happen (since $\lambda$ is non-square and $a \ne 0$). Therefore, there is no connected component with four vertices in the graph $\cG(\lambda,X+a)$ with $a \in \F_q^*$. \end{proof} In the proof of Proposition~\ref{prop:linear-com4}, we in fact obtained the following result about the graph $\cG(\lambda,X)$. \begin{proposition} If $-1$ is a non-square element in $\F_q$, then the graph $\cG(\lambda,X)$ has a connected component with four vertices if and only if $3 \mid q-1$ and $\lambda = -1$ (in fact these four vertices are $x, -x, 1/x, -1/x$, where $x^3=1$ and $x \ne 1$, corresponding to the first case in Figure~\ref{fig:linear-4}). Otherwise, if $-1$ is a square in $\F_q$, then the graph $\cG(\lambda,X)$ has a connected component with four vertices if and only if $4 \mid q-1$ and $\lambda^2 = -1$ (in fact these four vertices are $1, -1, \lambda, -\lambda$, corresponding to the second case in Figure~\ref{fig:linear-4}). \end{proposition} We remark that connected components with five vertices do exist.
For example, in the graph $\cG(3,X+2)$ over $\F_7$ the vertices $0,2,3,4,5$ form a connected component, and the graph $\cG(10,X+12)$ over $\F_{17}$ has a connected component with 5 vertices (that is, $0, 4, 5, 12, 13$). Moreover, the graph $\cG(5,X+8)$ over $\F_{17}$ has a connected component with 6 vertices (that is, $1, 3, 7, 10, 14, 16$), and the graph $\cG(3,X+13)$ over $\F_{31}$ has a connected component with 6 vertices (that is, $3, 4, 14, 17, 27, 28$).
\subsection{Computations concerning connectedness} \label{sec:linear-com}
Recall that $p$ is an odd prime. Here, we want to make some computations for the graphs $\cG(\lambda,X+a)$ over $\F_p$ concerning their connectedness. From \cite[Section 2]{MSSS}, the numerical results suggest that almost all the functional graphs generated by polynomials $f(X)=X^2 +a$ ($a$ runs over $\F_p^*$) are weakly unconnected. However, our computations suggest that almost all the graphs $\cG(\lambda,X+a)$ are connected (in fact, strongly connected by Proposition~\ref{prop:perm-conn}). By Proposition~\ref{prop:linear2} we do not need to consider all the non-square elements $\lambda$. We first identify the elements of $\F_p$ with the set $\{0,1, \ldots, p-1\}$, and then define $\N_p$ to be the subset of non-square elements in $\F_p$ such that for any non-square element $\lambda$, only the smaller of $\lambda$ and $\lambda^{-1}$ is contained in $\N_p$. Clearly, for the size of $\N_p$, we have \begin{equation} \label{eq:Np} |\N_p| = \left\{\begin{array}{ll} (p-1)/4 & \text{if $p \equiv 1 \pmod{4}$,}\\ (p+1)/4 & \text{if $p \equiv 3 \pmod{4}$,}\\ \end{array}\right. \end{equation} where we use the fact that $-1$ is non-square modulo $p$ if and only if $p \equiv 3 \pmod{4}$. So, here we make computations for the graphs $\cG(\lambda,X+a)$ when $\lambda$ runs over $\N_p$ and $a$ runs over $\F_p^*$.
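Before turning to the statistics, the explicit small components listed above can be checked by brute force. The following Python sketch is our own helper code (the names \texttt{graph} and \texttt{component} are ours, not from the paper): it builds the equational graph $\cG(\lambda,f)$ over $\F_p$, with an edge from $x$ to $y$ whenever $y^2=f(x)$ or $\lambda y^2=f(x)$, and recovers the component of $\cG(3,X+2)$ over $\F_7$ and the six-vertex component of $\cG(5,X+8)$ over $\F_{17}$.

```python
def graph(p, lam, f):
    """Successor sets of the equational graph G(lam, f) over F_p:
    an edge x -> y whenever y^2 = f(x) or lam*y^2 = f(x)."""
    succ = {x: set() for x in range(p)}
    for x in range(p):
        fx = f(x) % p
        for y in range(p):
            if (y * y - fx) % p == 0 or (lam * y * y - fx) % p == 0:
                succ[x].add(y)
    return succ

def component(succ, v):
    """(Weakly) connected component containing the vertex v."""
    pred = {y: {x for x in succ if y in succ[x]} for y in succ}
    comp, stack = {v}, [v]
    while stack:
        x = stack.pop()
        for y in succ[x] | pred[x]:
            if y not in comp:
                comp.add(y)
                stack.append(y)
    return comp

print(sorted(component(graph(7, 3, lambda x: x + 2), 0)))    # [0, 2, 3, 4, 5]
print(sorted(component(graph(17, 5, lambda x: x + 8), 1)))   # [1, 3, 7, 10, 14, 16]
```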
By Propositions \ref{prop:linear1} and \ref{prop:linear2}, this indeed includes all the linear cases over $\F_p$ except the case $\cG(\lambda,X)$. Let $C_1(p)$ (respectively, $U_1(p)$) be the number of connected (respectively, unconnected) graphs among all the graphs $\cG(\lambda,X+a)$ when $\lambda$ runs over $\N_p$ and $a$ runs over $\F_p^*$, and let $R_1(p)$ be the ratio of connected graphs, that is $$ R_1(p) = \frac{C_1(p)}{C_1(p) + U_1(p)}. $$ In Table~\ref{tab:linear-conn1} we record the first five decimal digits of the ratio $R_1(p)$, and we will do the same for all the other ratios and the average numbers. Further, let $M_1(p)$ be the maximum number of connected components of an unconnected graph counted in $U_1(p)$. Define \begin{equation} \label{eq:Lp} L(p) = \left\{\begin{array}{ll} (p-1)/4 & \text{if $p \equiv 1 \pmod{8}$,}\\ (p+1)/4 & \text{if $p \equiv 3 \pmod{8}$,}\\ (p+3)/4 & \text{if $p \equiv 5 \pmod{8}$,}\\ (p-3)/4 & \text{if $p \equiv 7 \pmod{8}$.} \end{array}\right. \end{equation} It is well-known that $2$ is a non-square element modulo $p$ if and only if $p \equiv 3,5 \pmod{8}$. In view of the construction of $\N_p$, we always have $1/2 \pmod{p} \not\in \N_p$. So, the number of the unconnected graphs counted in $U_1(p)$ and described as in Propositions~\ref{prop:linear-com2} and \ref{prop:linear-com3} is equal to $L(p)$ when $p > 5$ (using also \eqref{eq:Np}). Hence, we have \begin{equation} U_1(p) \ge L(p), \quad p > 5. \end{equation} Notice that when $q=5$, the graphs in Propositions~\ref{prop:linear-com2} and \ref{prop:linear-com3} coincide. From Table~\ref{tab:linear-conn1}, we can see that almost all the graphs $\cG(\lambda,X+a)$ are connected, and $U_1(p)$ is quite close to $L(p)$. Moreover, the data suggest that each unconnected graph $\cG(\lambda,X+a)$ with $a \ne 0$ has exactly two connected components, and its small connected component usually has exactly two vertices. 
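The first row of Table~\ref{tab:linear-conn1} can be reproduced directly. The Python sketch below is ours (the helper names are hypothetical, not from the paper); it tests weak connectivity, which by Proposition~\ref{prop:perm-conn} agrees with strong connectivity for these graphs, and should recover $|\N_{31}|=8$, $C_1(31)=232$, $U_1(31)=8$ and $M_1(31)=2$.

```python
def nonsquares(p):
    squares = {x * x % p for x in range(1, p)}
    return [x for x in range(1, p) if x not in squares]

def N(p):
    """The set N_p: of each pair {lam, lam^{-1}} of non-squares keep the smaller."""
    return [l for l in nonsquares(p) if l <= pow(l, -1, p)]

def num_components(p, lam, a):
    """Number of (weakly) connected components of G(lam, X + a) over F_p."""
    adj = {x: set() for x in range(p)}
    for x in range(p):
        for y in range(p):
            if (y * y - x - a) % p == 0 or (lam * y * y - x - a) % p == 0:
                adj[x].add(y)
                adj[y].add(x)
    seen, count = set(), 0
    for v in range(p):
        if v not in seen:
            count += 1
            stack = [v]
            while stack:
                u = stack.pop()
                if u not in seen:
                    seen.add(u)
                    stack.extend(adj[u])
    return count

p = 31
counts = [num_components(p, lam, a) for lam in N(p) for a in range(1, p)]
C1 = sum(c == 1 for c in counts)
U1 = len(counts) - C1
print(len(N(p)), C1, U1, max(counts))
```

Note that the three-argument form `pow(l, -1, p)` (modular inverse) requires Python 3.8 or later.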
\begin{table}[!htbp] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline $p$ & $C_1(p)$ & $U_1(p)$ & $L(p)$ & $M_1(p)$ & $R_1(p)$ \\ \hline 31 & 232 & 8 & 7 & 2 & 0.96666 \\ \hline 107 & 2835 & 27 & 27 & 2 & 0.99056 \\ \hline 523 & 68251 & 131 & 131 & 2 & 0.99808 \\ \hline 1009 & 253764 & 252 & 252 & 2 & 0.99900 \\ \hline 1511 & 570403 & 377 & 377 & 2 & 0.99933 \\ \hline 2029 & 1027688 & 508 & 508 & 2 & 0.99950 \\ \hline 2521 & 1586970 & 630 & 630 & 2 & 0.99960 \\ \hline 3037 & 2303564 & 760 & 760 & 2 & 0.99967 \\ \hline 4049 & 4095564 & 1012 & 1012 & 2 & 0.99975 \\ \hline 5003 & 6256251 & 1251 & 1251 & 2 & 0.99980 \\ \hline \end{tabular} \bigskip \caption{Counting connected graphs in linear case} \label{tab:linear-conn1} \end{table} \begin{conjecture} \label{conj:linear-conn0} Almost all the graphs $\cG(\lambda,X+a)$ over $\F_p$ are connected when $p$ goes to infinity. \end{conjecture} We in fact make computations for all primes $p \le 3041$ and find that $U_1(p) = L(p)$ for any $31< p \le 3041$, and for any $5 \le p \le 3041$ each unconnected graph $\cG(\lambda,X+a)$ with $a \ne 0$ over $\F_p$ has exactly two connected components. Hence, we make the following conjectures. \begin{conjecture} \label{conj:linear-conn1} For any prime $p>31$, $U_1(p) = L(p)$. \end{conjecture} \begin{conjecture} \label{conj:linear-conn2} For any prime $p \ge 5$, each unconnected graph $\cG(\lambda,X+a)$ with $a \ne 0$ over $\F_p$ has exactly two connected components. \end{conjecture} Conjecture~\ref{conj:linear-conn1} suggests that when $p>31$, if a graph $\cG(\lambda,X+a)$ over $\F_p$ with $a \ne 0$ does not belong to the cases described in Propositions~\ref{prop:linear-com2} and \ref{prop:linear-com3}, then it is a connected graph. \subsection{Hamiltonian cycles} For each graph $\cG(\lambda,X+a)$, Theorem~\ref{thm:perm-Ha} has confirmed that all its connected components have a Hamiltonian cycle. 
Here, we only make computations on Hamiltonian cycles of $\cG(\lambda,X+a)$ over $\F_p$ when it is a connected graph. Due to the complexity, we can only test small primes $p$. Let $H_{11}(p)$ (respectively, $H_{12}(p)$) be the minimal (respectively, maximal) number of Hamiltonian cycles in a connected graph of the form $\cG(\lambda,X+a)$ ($\lambda$ runs over $\N_p$ and $a$ runs over $\F_p^*$). Then, let $H_1(p)$ be the average number of Hamiltonian cycles in these connected graphs. By Corollary~\ref{cor:linear-HT1}, a connected graph $\cG(\lambda,X+a)$ over $\F_p$ has no Hamiltonian cycle of Type $1$ when $p> 17$. Let $R_{12}(p)$ (respectively, $R_{13}(p)$) be the ratio of such connected graphs having Hamiltonian cycles of Type $2$ (respectively, Type $3$) over all the connected graphs (when $\lambda$ runs over $\N_p$ and $a$ runs over $\F_p^*$). Let $A_{12}(p)$ (respectively, $A_{13}(p)$) be the average number of Hamiltonian cycles of Type $2$ (respectively, Type $3$) over all such connected graphs having Hamiltonian cycles of Type $2$ (respectively, Type $3$). From Table~\ref{tab:linear-hamilton}, we can see that there are many Hamiltonian cycles in a connected graph $\cG(\lambda,X+a)$ over $\F_p$, and their number grows rapidly with $p$.
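The smallest case can be reproduced by brute force: the sketch below (our code; it assumes Hamiltonian cycles are counted as directed cycles, each counted once by rooting them at the vertex $0$) recovers the entries $H_{11}(17)=1$ and $H_{12}(17)=72$ of Table~\ref{tab:linear-hamilton}.

```python
def successors(p, lam, a):
    """Successor lists of G(lam, X + a) over F_p."""
    succ = {x: [] for x in range(p)}
    for x in range(p):
        for y in range(p):
            if (y * y - x - a) % p == 0 or (lam * y * y - x - a) % p == 0:
                succ[x].append(y)
    return succ

def is_connected(succ):
    """Weak connectivity (= strong connectivity here, by Proposition perm-conn)."""
    adj = {x: set(s) for x, s in succ.items()}
    for x, s in succ.items():
        for y in s:
            adj[y].add(x)
    seen, stack = {0}, [0]
    while stack:
        for y in adj[stack.pop()]:
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return len(seen) == len(succ)

def count_hamiltonian(succ):
    """Count directed Hamiltonian cycles, each once (rooted at the vertex 0)."""
    n, count = len(succ), 0
    def dfs(v, visited, depth):
        nonlocal count
        for w in succ[v]:
            if w == 0 and depth == n:
                count += 1
            elif w not in visited:
                visited.add(w)
                dfs(w, visited, depth + 1)
                visited.remove(w)
    dfs(0, {0}, 1)
    return count

p = 17
squares = {x * x % p for x in range(1, p)}
lams = [l for l in range(1, p) if l not in squares and l <= pow(l, -1, p)]
ham = [count_hamiltonian(s)
       for lam in lams for a in range(1, p)
       if is_connected(s := successors(p, lam, a))]
print(min(ham), max(ham))
```

The search is cheap because every vertex has at most two successors, so the backtracking explores very few paths.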
\begin{table}[!htbp] \centering \begin{tabular}{|c|c|c|c|}
\hline $p$ & $H_{11}(p)$ & $H_{12}(p)$ & $H_1(p)$ \\
\hline 17 & 1 & 72 & 18.31034 \\
\hline 19 & 4 & 148 & 34.03529 \\
\hline 23 & 5 & 423 & 93.70078 \\
\hline 29 & 34 & 2840 & 666.13829 \\
\hline 31 & 30 & 5410 & 1206.08620 \\
\hline 37 & 448 & 45546 & 7906.61783 \\
\hline 41 & 1223 & 175428 & 28473.26666 \\
\hline 43 & 2222 & 255558 & 53999.07760 \\
\hline 47 & 6576 & 1273729 & 195723.05914 \\
\hline 53 & 63363 & 6795031 & 1297781.68277 \\
\hline \end{tabular} \bigskip \caption{Counting Hamiltonian cycles in linear case} \label{tab:linear-hamilton} \end{table}
Tables~\ref{tab:linear-HT2} and \ref{tab:linear-HT3} suggest that although many connected graphs have Hamiltonian cycles of Types 2 and 3, these two types of Hamiltonian cycles account for only a small proportion when $p$ is large. This means that each connected graph $\cG(\lambda,X+a)$ over $\F_p$ is likely to have many types of Hamiltonian cycles.
\begin{table}[!htbp] \centering \begin{tabular}{|c|c|c|c|}
\hline $p$ & $A_{12}(p)$ & $A_{12}(p)/H_1(p)$ & $R_{12}(p)$ \\
\hline 17 & 2.40909 & 0.13156 & 0.75862 \\
\hline 19 & 3.30158 & 0.09700 & 0.74117 \\
\hline 23 & 3.79452 & 0.04049 & 0.57480 \\
\hline 29 & 8.30434 & 0.01246 & 0.61170 \\
\hline 31 & 8.25954 & 0.00684 & 0.56465 \\
\hline 37 & 17.72580 & 0.00224 & 0.59235 \\
\hline 41 & 39.12643 & 0.00137 & 0.44615 \\
\hline 43 & 32.31088 & 0.00059 & 0.42793 \\
\hline 47 & 60.58173 & 0.00030 & 0.38447 \\
\hline 53 & 107.67010 & 0.00008 & 0.43957\\
\hline \end{tabular} \bigskip \caption{Hamiltonian cycles of Type 2 in linear case} \label{tab:linear-HT2} \end{table}
\begin{table}[!htbp] \centering \begin{tabular}{|c|c|c|c|}
\hline $p$ & $A_{13}(p)$ & $A_{13}(p)/H_1(p)$ & $R_{13}(p)$ \\
\hline 17 & 7.42857 & 0.40570 & 0.96551 \\
\hline 19 & 11.38095 & 0.33438 & 0.98823 \\
\hline 23 & 25.72580 & 0.27455 & 0.97637 \\
\hline 29 & 125.45161 & 0.18832 & 0.98936 \\
\hline 31 & 190.37117 & 0.15784 & 0.98706 \\
\hline 37 & 754.65594 & 0.09544 & 0.99044 \\
\hline 41 & 2036.97927 & 0.07154 & 0.98974 \\
\hline 43 & 3296.68456 & 0.06105 & 0.99113 \\
\hline 47 & 7935.52918 & 0.04054 & 0.95009 \\
\hline 53 & 35505.00762 & 0.02735 & 0.99093 \\
\hline \end{tabular} \bigskip \caption{Hamiltonian cycles of Type 3 in linear case} \label{tab:linear-HT3} \end{table}
\section{Quadratic case} \label{sec:quad}
For the quadratic case we only establish some general properties without making computations. These suggest that this case might not be very attractive. The main reason is that quadratic polynomials are not permutation polynomials. Recall that $q$ is odd, and $\lambda$ is a non-square element in $\F_q$. \begin{proposition} \label{prop:quad-iso1} For any $a\ne 0,b\in \F_q$, the graph $\cG(\lambda,X^2+aX+b)$ is isomorphic to the graph $\cG(\lambda,X^2+X+a^{-2}b)$. \end{proposition} \begin{proof} Note that the isomorphism is induced by the bijection map $\psi$ from $\F_q$ to itself defined by $\psi(x)=a^{-1} x$. \end{proof} \begin{proposition} \label{prop:quad-iso2} For any $a \ne 0, b \ne 0$, if $b/a$ is a square element in $\F_q$, then the graph $\cG(\lambda,X^2+a)$ is isomorphic to the graph $\cG(\lambda,X^2+b)$. \end{proposition} \begin{proof} Write $b/a = c^2, c \in \F_q$. Note that the isomorphism is induced by the bijection map $\psi$ from $\F_q$ to itself defined by $\psi(x)=cx$. \end{proof} From Proposition~\ref{prop:quad-iso2}, we know that for a fixed $\lambda$, there are at most two graphs up to isomorphism among the graphs $\cG(\lambda,X^2+a)$ when $a$ runs over $\F_q^*$. We remark that in the graph $\cG(\lambda,X^2+X+a)$, if $-a + 1/4$ is not a square element, then the in-degree of the vertex $0$ is zero. Similarly, in the graph $\cG(\lambda,X^2+a)$, if $-a$ is not a square element in $\F_q$, then the in-degree of the vertex $0$ is zero. In fact, there could be many vertices with zero in-degree.
\begin{proposition} \label{prop:quad-indeg1} For each graph $\cG(\lambda, X^2+X+a)$ over $\F_q$ with $a \ne 1/4$, there are at least $\lfloor \frac{1}{4} \big( q - 3\sqrt{q} \big) -1 \rfloor$ vertices having zero in-degree. \end{proposition} \begin{proof} Denote by $Z(\lambda, a)$ the number of vertices having zero in-degree in the graph $\cG(\lambda, X^2+X+a)$. Let $\chi$ be the multiplicative quadratic character of $\F_q$. By convention, $\chi(0)=0$. If a vertex $y \in \F_q$ has zero in-degree, then both $y^2 -a +1/4$ and $\lambda y^2 -a +1/4$ are non-square elements in $\F_q$, that is, $$ \chi(y^2 -a +1/4) = \chi(\lambda y^2 -a +1/4) = -1. $$ So, we have $$ Z(\lambda, a) \ge \sum_{y \in \F_q} \frac{1-\chi(y^2 -a +1/4)}{2} \cdot \frac{1-\chi(\lambda y^2 -a +1/4)}{2} - 1, $$ where the term ``$- 1$" comes from one of the two cases when either $y^2 -a +1/4=0$ or $\lambda y^2 -a +1/4=0$ (since $a \ne 1/4$ and $\lambda$ is a non-square element in $\F_q$, these two cases cannot both happen). Then, using Theorem~\ref{thm:Weil} and noticing $a \ne 1/4$, we deduce that \begin{align*} Z(\lambda, a) & \ge \sum_{y \in \F_q} \frac{1-\chi(y^2 -a +1/4)}{2} \cdot \frac{1-\chi(\lambda y^2 -a +1/4)}{2} -1 \\ & = \frac{1}{4} \Big( q - \sum_{y \in \F_q} \chi(y^2 -a +1/4) - \sum_{y \in \F_q} \chi(\lambda y^2 -a +1/4) \\ & \qquad\qquad + \sum_{y \in \F_q} \chi((y^2 -a +1/4)(\lambda y^2 -a +1/4))\Big) - 1 \\ & \ge \frac{1}{4} \Big( q + \chi(1) + \chi(\lambda) - 3\sqrt{q} \Big) -1 \\ & = \frac{1}{4} \big( q - 3\sqrt{q} \big) -1. \end{align*} This completes the proof. \end{proof} Similarly, we obtain: \begin{proposition} \label{prop:quad-indeg2} For each graph $\cG(\lambda, X^2+a)$ over $\F_q$ with $a \ne 0$, there are at least $\lfloor \frac{1}{4} \big( q - 3\sqrt{q} \big) -1 \rfloor$ vertices having zero in-degree. \end{proposition} When $q \ge 23$, we have $ \frac{1}{4} \big( q - 3\sqrt{q} \big) -1 \ge 1$.
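For instance, the bound of Proposition~\ref{prop:quad-indeg1} can be checked exhaustively for $q=p=23$, where $\lfloor \frac{1}{4}(q-3\sqrt{q})-1 \rfloor = 1$. The following Python sketch is our own verification code, not part of the paper:

```python
import math

p = 23
squares = {x * x % p for x in range(1, p)}
quarter = pow(4, -1, p)  # the element 1/4 of F_23
# floor((q - 3*sqrt(q))/4 - 1), the lower bound from the proposition
bound = math.floor((p - 3 * math.sqrt(p)) / 4 - 1)

def zero_indegree(p, lam, a):
    """Number of vertices with zero in-degree in G(lam, X^2 + X + a)."""
    indeg = [0] * p
    for x in range(p):
        fx = (x * x + x + a) % p
        for y in range(p):
            if (y * y - fx) % p == 0 or (lam * y * y - fx) % p == 0:
                indeg[y] += 1
    return indeg.count(0)

min_zero = min(zero_indegree(p, lam, a)
               for lam in range(1, p) if lam not in squares
               for a in range(p) if a != quarter)
print(bound, min_zero)
```

Every graph $\cG(\lambda, X^2+X+a)$ with $a \ne 1/4$ over $\F_{23}$ indeed has at least one vertex of zero in-degree.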
Propositions~\ref{prop:quad-indeg1} and \ref{prop:quad-indeg2} imply that strongly connected graphs are rare in the quadratic case. We can also say something about Hamiltonian cycles. \begin{proposition} \label{prop:quad-H1} For each graph $\cG(\lambda,X^2+X+a)$, if $a \ne 1/4$, then its connected component containing the vertex $0$ does not have a Hamiltonian cycle. \end{proposition} \begin{proof} Notice that in the graph $\cG(\lambda,X^2+X+a)$, if $a \ne 1/4$, then the vertex $0$ either has zero in-degree or has in-degree $2$. If the vertex $0$ has zero in-degree, then the component automatically has no Hamiltonian cycle. If the vertex $0$ has in-degree $2$, let $x_1$ and $x_2$ be the two predecessors of $0$. Note that the vertex $0$ is the only successor of $x_1$ and $x_2$. So, there is no cycle going through both $x_1$ and $x_2$. This completes the proof. \end{proof} Similarly, we have: \begin{proposition} \label{prop:quad-H2} For each graph $\cG(\lambda,X^2+a)$, if $a \ne 0$, then its connected component containing the vertex $0$ does not have a Hamiltonian cycle. \end{proposition} We remark that in Proposition~\ref{prop:quad-H1}, if $a= 1/4$, then the graph $\cG(\lambda,X^2+X+1/4)$ is in fact generated by the equation $(Y - X - 1/2)(Y + X + 1/2) = 0$.
\section{Cubic case} \label{sec:cubic}
For each polynomial $X^3 + aX +b$ over $\F_q$, if $4a^3+27b^2 \ne 0$, then the equation $Y^2 =X^3 + aX + b$ defines an elliptic curve over $\F_q$. In this section, we consider the graphs $\cG(\lambda,X^3+ aX + b)$ over $\F_q$. Recall that $q$ is odd, and $\lambda$ is a non-square element in $\F_q$.
\subsection{Basic properties}
As before, we can easily find some isomorphism classes among the graphs $\cG(\lambda,X^3+aX+b)$ when $\lambda$ runs over the non-square elements and $a,b$ run over $\F_q$.
\begin{proposition} \label{prop:cubic1} For any $a,b\in \F_q$, the graph $\cG(\lambda,X^3+aX+b)$ is isomorphic to the graph $\cG(\lambda^{-1},X^3+ \lambda^{-2}aX + \lambda^{-3}b)$. \end{proposition} \begin{proof} Note that the isomorphism is induced by the bijection map $\psi$ from $\F_q$ to itself defined by $\psi(x)=\lambda^{-1} x$. \end{proof} When the characteristic of $\F_q$ (that is, $p$) is greater than 3, it is not hard to show that a polynomial $X^3+aX+b$ is a permutation polynomial over $\F_q$ if and only if $3 \nmid q-1$ and $a=0$; see \cite[Theorem 2.2]{MW}. So, in the sequel, we only consider the graph $\cG(\lambda,X^3+a)$. Note that if $3 \nmid q-1$, then each polynomial $X^3+a$ is a permutation polynomial over $\F_q$, and thus all the results in Section~\ref{sec:perm} automatically hold for the graph $\cG(\lambda,X^3+a)$. We know that each vertex in the graph $\cG(\lambda, X^3+a)$ has positive out-degree. However, this is not always true for the in-degree. If $3 \nmid q-1$, by Proposition~\ref{prop:perm-inout} each vertex in the graph $\cG(\lambda, X^3+a)$ has positive in-degree. However, when $3 \mid q-1$, there are many vertices having zero in-degree; more precisely, we can obtain a result similar to Proposition~\ref{prop:quad-indeg2}, in a stronger form and by a different approach. \begin{proposition} \label{prop:cubic-indeg} If $3 \mid q-1$, then for each graph $\cG(\lambda, X^3+a)$ over $\F_q$, there are at least $N$ vertices having zero in-degree, where \begin{equation*} N = \left\{\begin{array}{ll} (q-1)/3 & \text{if $-a$ is a cubic element in $\F_q$,}\\ (q-7)/3 & \text{otherwise.}\\ \end{array}\right. \end{equation*} In particular, there exists at least one vertex in $\cG(\lambda, X^3+a)$ having zero in-degree. \end{proposition} \begin{proof} Let $Q = \{x^3: \, x\in \F_q \}$. Since $3 \mid q-1$, we have $$ |Q| = \frac{q-1}{3} + 1. $$ Let $R = \F_q \setminus Q$, and let $S$ be the set of equivalence classes of $\F_q$ modulo $\pm 1$.
Clearly, we have \begin{equation} \label{eq:RS} |R| = q - \left(\frac{q-1}{3} + 1 \right) = \frac{2(q-1)}{3}, \qquad |S| = \frac{q+1}{2}. \end{equation} Define the map $\varphi$ from $R$ to $S$ by $\varphi(x) = \{\pm y\}$ if either $y^2=x+a$ or $\lambda y^2=x +a$. If $x_1,x_2 \in R$ with $x_1 \ne x_2$ such that $\varphi(x_1)=\varphi(x_2)=\{\pm y_0\}$ for some $y_0 \in \F_q$, then either $y_0^2=x_1+a, \lambda y_0^2 = x_2+a$, or $\lambda y_0^2=x_1+a, y_0^2 = x_2+a$. This implies that there is no $x_3 \in R$ with $x_3 \ne x_1$ and $x_3 \ne x_2$ such that $\varphi(x_3)=\{\pm y_0\}$. By the definition of $\varphi$, both $y_0^2-a$ and $\lambda y_0^2-a$ are not in $Q$, and thus the vertices $\pm y_0$ have zero in-degree in the graph $\cG(\lambda, X^3+a)$. Now, define the set $$ T = \{x\in R: \, \exists \ x^\prime \in R, x^\prime \ne x, \varphi(x^\prime)=\varphi(x) \}. $$ By the above discussion, we know that $|T|$ is even, and the number of vertices having zero in-degree is at least $|T|$. So, it suffices to get a lower bound for $|T|$. Considering the size of $\varphi(R)$ and noticing $|\varphi(T)| =|T|/2$, we have \begin{equation} \label{eq:RST} |\varphi(R)| = |R| - |T|/2 \le |S|, \end{equation} which, together with \eqref{eq:RS}, implies that $$ |T| \ge \frac{q-7}{3}. $$ Moreover, if $-a \in Q$ (that is, $-a$ is a cubic element in $\F_q$), then there is no $x \in R$ such that $\varphi(x) = \{0\}$. So, the inequality in \eqref{eq:RST} becomes $$ |\varphi(R)| = |R| - |T|/2 \le |S| - 1, $$ which gives $$ |T| \ge \frac{q-1}{3}. $$ This completes the proof for the choice of $N$. For the final claim, by the choice of $N$, we only need to consider the case $q=7$. By direct computation, if $q=7$, indeed there exists at least one vertex in each graph $\cG(\lambda, X^3+a)$ having zero in-degree.
\end{proof} As in Proposition~\ref{prop:quad-H2}, we have: \begin{proposition} If $3 \mid q-1$, then for each graph $\cG(\lambda,X^3+a)$ with $a \ne 0$, its connected component containing the vertex $0$ does not have a Hamiltonian cycle. \end{proposition} \subsection{Small connected components} \label{sec:cubic-conn} As in Section~\ref{sec:linear-conn}, we determine small connected components for the graphs $\cG(\lambda,X^3+a)$ over $\F_q$ when $3 \nmid q-1$. \begin{proposition} \label{prop:cubic-com2} Assume $3 \nmid q-1$. For any $a \in \F_q^*$, the graph $\cG(\lambda,X^3+a)$ has a connected component with two vertices if and only if $\lambda \ne -1$ and $a = (\lambda +1)(\lambda -1)^2/8$. In particular, if $\lambda \ne -1$ and $a = (\lambda +1)(\lambda -1)^2/8$, then the vertices $(1-\lambda)/2, (\lambda -1)/2$ form a connected component in $\cG(\lambda,X^3+a)$. \end{proposition} \begin{proof} As before, if the graph $\cG(\lambda,X^3+a)$ has a connected component with two vertices, then these two vertices are $\pm x$ for some $x \in \F_q^*$; and so without loss of generality, we can assume $$ x^2=x^3 +a, \qquad \lambda x^2 = -x^3 +a, $$ which gives $$ \lambda \ne -1, \qquad a = (\lambda +1)(\lambda -1)^2/8, \qquad x = (1-\lambda)/2. $$ The rest is straightforward. \end{proof} We remark that if $-1$ is a non-square element in $\F_q$, then the graph $\cG(\lambda,X^3)$ has a connected component with two vertices if and only if $\lambda = -1$ (in fact these two vertices are $1, -1$). \begin{proposition} \label{prop:cubic-com3} Assume $3 \nmid q-1$. For any $a \in \F_q$, the graph $\cG(\lambda,X^3+a)$ has a connected component with three vertices if and only if either $\lambda = 2, a = 1$, or $\lambda = 1/2, a = 1/8$. In particular, if $2$ is a non-square element in $\F_q$, then the vertices $0, 1$ and $-1$ form a connected component in $\cG(2,X^3+1)$, and the vertices $0, 1/2$ and $-1/2$ form a connected component in $\cG(1/2,X^3+1/8)$. 
\end{proposition} \begin{proof} We only prove the necessity. As before, if the graph $\cG(\lambda,X^3+a)$ has a connected component with three vertices, then these three vertices are $0, \pm b$, where $b^3=a$. So, there must be an edge from $0$ to $b$ and an edge from $b$ to $-b$ (by Proposition~\ref{prop:perm-inout}). This gives either (noticing $b \ne 0$) $$ b^2 = a, \qquad \lambda b^2 = b^3 +a, $$ or $$ \lambda b^2 = a, \qquad b^2 = b^3 +a. $$ Hence, we obtain either $\lambda = 2, a = 1, b=1$, or $\lambda = 1/2, a = 1/8, b=1/2$. \end{proof} We remark that by Proposition~\ref{prop:cubic1}, the graph $\cG(2,X^3+1)$ is isomorphic to the graph $\cG(1/2,X^3+1/8)$. By Proposition~\ref{prop:cubic-com3} we also know that if $2$ is a square in $\F_q$, then connected components with three vertices cannot occur in the graphs $\cG(\lambda,X^3+a)$ over $\F_q$ (note that $\lambda$ is set to be a non-square element in $\F_q$ throughout the paper). \begin{proposition} \label{prop:cubic-com4} Assume $3 \nmid q-1$. For any $a \in \F_q^*$, the graph $\cG(\lambda,X^3+a)$ has no connected component with four vertices. \end{proposition} \begin{proof} As before, if the graph $\cG(\lambda,X^3 +a)$ has a connected component with four vertices, then there are only two possible cases as in Figure~\ref{fig:linear-4}. For the first case when there is no fixed vertex in Figure~\ref{fig:linear-4}, we consider either $$ y^2 = x^3 + a, \quad \lambda y^2 = -x^3+a, \quad x^2 = y^3 + a, \quad \lambda x^2 = -y^3+a, $$ or $$ y^2 = x^3 + a, \quad \lambda y^2 = -x^3 +a, \quad x^2 = -y^3 + a, \quad \lambda x^2 = y^3 +a. $$ Then, noticing $x \ne \pm y$, we obtain $\lambda = -1, a = 0, x^5 =1$ and $x \ne 1$, $y=1/x$. This contradicts $a \ne 0$. So, the first case cannot happen.
For the second case in Figure~\ref{fig:linear-4}, noticing $ (x / y)^3 \ne \pm 1$ (due to $x \ne \pm y$ and $3 \nmid q-1$), we only need to consider the following four subcases: \begin{itemize} \item[(1)] $x^2 = x^3 + a, \quad y^2 = -x^3 +a, \quad \lambda y^2 = y^3 + a, \quad \lambda x^2 = -y^3 + a$; \item[(2)] $x^2 = x^3 + a, \quad \lambda y^2 = -x^3 +a, \quad y^2 = y^3 + a, \quad \lambda x^2 = -y^3 + a$; \item[(3)] $\lambda x^2 = x^3 + a, \quad y^2 = -x^3 +a, \quad \lambda y^2 = y^3 + a, \quad x^2 = -y^3 + a$; \item[(4)] $\lambda x^2 = x^3 + a, \quad \lambda y^2 = -x^3 +a, \quad y^2 = y^3 + a, \quad x^2 = -y^3 + a$. \end{itemize} By direct calculations, from Cases (2) and (3) we obtain $\lambda = 1$, which contradicts the assumption that $\lambda$ is non-square; Case (1) gives $\lambda^2 = -1, a = 0, x=1, y= \lambda$; and Case (4) gives $\lambda^2 = -1, a = 0, x = \lambda, y =1$. Hence, the second case also cannot happen (since $\lambda$ is non-square and $a \ne 0$). Therefore, there is no connected component with four vertices in the graph $\cG(\lambda,X^3 +a)$ with $a \in \F_q^*$. \end{proof} From the above proof, we directly obtain: \begin{proposition} Assume $3 \nmid q-1$. If $-1$ is a non-square element in $\F_q$, then the graph $\cG(\lambda,X^3)$ has a connected component with four vertices if and only if $5 \mid q-1$ and $\lambda = -1$ \textup{(}in fact these four vertices are of the form $x, -x, 1/x, -1/x$, where $x^5=1$ and $x \ne 1$, corresponding to the first case in Figure~\ref{fig:linear-4}\textup{)}. Otherwise, if $-1$ is a square in $\F_q$, then the graph $\cG(\lambda,X^3)$ has a connected component with four vertices if and only if $4 \mid q-1$ and $\lambda^2 = -1$ \textup{(}in fact these four vertices are $1, -1, \lambda, -\lambda$, corresponding to the second case in Figure~\ref{fig:linear-4}\textup{)}. \end{proposition}
\subsection{Computations concerning connectedness} \label{sec:cubic-com}
Recall that $p$ is an odd prime.
Here, we want to make some computations for the graphs $\cG(\lambda,X^3+a)$ over $\F_p$ concerning their connectedness when $3 \nmid p-1$. From Proposition~\ref{prop:cubic1}, we only need to consider the non-square elements $\lambda$ in $\N_p$, where $\N_p$ has been defined in \eqref{eq:Np}. Our computations suggest that almost all the graphs $\cG(\lambda,X^3+a)$ ($\lambda$ runs over $\N_p$ and $a$ runs over $\F_p^*$) are connected when $3 \nmid p-1$. Let $C_3(p)$ (respectively, $U_3(p)$) be the number of connected (respectively, unconnected) graphs among all the graphs $\cG(\lambda,X^3+a)$ when $\lambda$ runs over $\N_p$ and $a$ runs over $\F_p^*$, and let $R_3(p)$ be the ratio of connected graphs, that is,
$$ R_3(p) = \frac{C_3(p)}{C_3(p) + U_3(p)}. $$
Further, let $M_3(p)$ be the maximum number of connected components of an unconnected graph counted in $U_3(p)$. As in the linear case, by Propositions~\ref{prop:cubic-com2} and \ref{prop:cubic-com3} and \eqref{eq:Np}, we have \begin{equation*} U_3(p) \ge L(p), \quad p > 5, \quad 3 \nmid p-1, \end{equation*} where $L(p)$ has been defined in \eqref{eq:Lp}. From Table~\ref{tab:cubic-conn1}, we can see that almost all the graphs $\cG(\lambda,X^3+a)$ are connected, and $U_3(p)$ is quite close to $L(p)$. Moreover, the data suggest that each unconnected graph $\cG(\lambda,X^3+a)$ with $a \ne 0$ has exactly two connected components, and its small connected component usually has exactly two vertices. \begin{conjecture} \label{conj:cubic-conn0} Almost all the graphs $\cG(\lambda,X^3+a)$ over $\F_p$ are connected as $p$, running over the primes with $3 \nmid p-1$, goes to infinity. \end{conjecture} Our computations for all primes $p \le 3041$ satisfying $3 \nmid p-1$ show that $U_3(p) = L(p)$ for any such prime $p \in [31,3041]$, and for any $5 \le p \le 3041$ each unconnected graph $\cG(\lambda,X^3+a)$ with $a \ne 0$ over $\F_p$ has exactly two connected components. Hence, we make the following conjectures.
\begin{conjecture} \label{conj:cubic-conn1} For any prime $p>31$ satisfying $3 \nmid p-1$, $U_3(p) = L(p)$. \end{conjecture} \begin{conjecture} \label{conj:cubic-conn2} For any prime $p \ge 5$ satisfying $3 \nmid p-1$, each unconnected graph $\cG(\lambda,X^3+a)$ with $a \ne 0$ over $\F_p$ has exactly two connected components. \end{conjecture} Conjecture~\ref{conj:cubic-conn1} suggests that for $p>31$ with $3 \nmid p-1$, if a graph $\cG(\lambda,X^3+a)$ over $\F_p$ with $a \ne 0$ does not belong to the cases described in Propositions~\ref{prop:cubic-com2} and \ref{prop:cubic-com3}, then it is a connected graph.
\begin{table}[!htbp] \centering \begin{tabular}{|c|c|c|c|c|c|}
\hline $p$ & $C_3(p)$ & $U_3(p)$ & $L(p)$ & $M_3(p)$ & $R_3(p)$ \\
\hline 41 & 188 & 8 & 8 & 2 & 0.95918 \\
\hline 107 & 2835 & 27 & 27 & 2 & 0.99056 \\
\hline 521 & 67470 & 130 & 130 & 2 & 0.99807 \\
\hline 1013 & 255782 & 254 & 254 & 2 & 0.99900 \\
\hline 1511 & 570403 & 377 & 377 & 2 & 0.99933 \\
\hline 2027 & 1026675 & 507 & 507 & 2 & 0.99950 \\
\hline 2531 & 1600857 & 633 & 633 & 2 & 0.99960 \\
\hline 3041 & 2309640 & 760 & 760 & 2 & 0.99967 \\
\hline \end{tabular} \bigskip \caption{Counting connected graphs in cubic case} \label{tab:cubic-conn1} \end{table}
\subsection{Hamiltonian cycles}
In light of Propositions~\ref{prop:perm-inout} and \ref{prop:cubic-indeg}, here we only consider the case when $3 \nmid p-1$. Theorem~\ref{thm:perm-Ha} has confirmed the existence of Hamiltonian cycles in connected components of the graph $\cG(\lambda,X^3+a)$ over $\F_p$ when $3 \nmid p-1$. Here, we make some computations on counting Hamiltonian cycles of connected graphs. Let $H_{31}(p)$ (respectively, $H_{32}(p)$) be the minimal (respectively, maximal) number of Hamiltonian cycles in a connected graph of the form $\cG(\lambda,X^3+a)$ over $\F_p$ ($\lambda$ runs over $\N_p$ and $a$ runs over $\F_p^*$). Then, let $H_3(p)$ be the average number of Hamiltonian cycles in these connected graphs.
By Corollary~\ref{cor:cubic-HT1}, a connected graph $\cG(\lambda,X^3+a)$ over $\F_p$ has no Hamiltonian cycle of Type $1$ when $p> 17$. Let $R_{32}(p)$ (respectively, $R_{33}(p)$) be the ratio of such connected graphs having Hamiltonian cycles of Type $2$ (respectively, Type $3$) over all the connected graphs (when $\lambda$ runs over $\N_p$ and $a$ runs over $\F_p^*$). Let $A_{32}(p)$ (respectively, $A_{33}(p)$) be the average number of Hamiltonian cycles of Type $2$ (respectively, Type $3$) over all such connected graphs having Hamiltonian cycles of Type $2$ (respectively, Type $3$). From Table~\ref{tab:cubic-hamilton}, we can see that there are many Hamiltonian cycles in a connected graph $\cG(\lambda,X^3+a)$ over $\F_p$, and their number grows rapidly with $p$.
\begin{table}[!htbp] \centering \begin{tabular}{|c|c|c|c|}
\hline $p$ & $H_{31}(p)$ & $H_{32}(p)$ & $H_3(p)$ \\
\hline 11 & 1 & 11 & 4.18518 \\
\hline 17 & 1 & 64 & 19.20689 \\
\hline 23 & 3 & 330 & 91.65079 \\
\hline 29 & 24 & 2574 & 585.11702 \\
\hline 41 & 602 & 127573 & 26075.50256 \\
\hline 47 & 4444 & 923740 & 187351.93160 \\
\hline 53 & 35169 & 6920444 & 1262002.85498 \\
\hline \end{tabular} \bigskip \caption{Counting Hamiltonian cycles in cubic case} \label{tab:cubic-hamilton} \end{table}
Tables~\ref{tab:cubic-HT2} and \ref{tab:cubic-HT3} suggest that although many connected graphs have Hamiltonian cycles of Types 2 and 3, these two types of Hamiltonian cycles account for only a small proportion when $p$ is large. This means that each connected graph $\cG(\lambda,X^3+a)$ over $\F_p$ is likely to have many types of Hamiltonian cycles.
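As in the linear case, the smallest row of Table~\ref{tab:cubic-hamilton} can be reproduced by brute force. The Python sketch below is ours (again counting directed Hamiltonian cycles, each once, rooted at the vertex $0$); it should recover $H_{31}(11)=1$ and $H_{32}(11)=11$.

```python
def successors(p, lam, f):
    """Successor lists of G(lam, f) over F_p."""
    succ = {x: [] for x in range(p)}
    for x in range(p):
        fx = f(x) % p
        for y in range(p):
            if (y * y - fx) % p == 0 or (lam * y * y - fx) % p == 0:
                succ[x].append(y)
    return succ

def is_connected(succ):
    """Weak connectivity of the digraph."""
    adj = {x: set(s) for x, s in succ.items()}
    for x, s in succ.items():
        for y in s:
            adj[y].add(x)
    seen, stack = {0}, [0]
    while stack:
        for y in adj[stack.pop()]:
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return len(seen) == len(succ)

def count_hamiltonian(succ):
    """Count directed Hamiltonian cycles, each once (rooted at the vertex 0)."""
    n, count = len(succ), 0
    def dfs(v, visited, depth):
        nonlocal count
        for w in succ[v]:
            if w == 0 and depth == n:
                count += 1
            elif w not in visited:
                visited.add(w)
                dfs(w, visited, depth + 1)
                visited.remove(w)
    dfs(0, {0}, 1)
    return count

p = 11  # note 3 does not divide p - 1, so X^3 + a is a permutation polynomial
squares = {x * x % p for x in range(1, p)}
lams = [l for l in range(1, p) if l not in squares and l <= pow(l, -1, p)]
ham = []
for lam in lams:
    for a in range(1, p):
        succ = successors(p, lam, lambda x, a=a: x ** 3 + a)
        if is_connected(succ):
            ham.append(count_hamiltonian(succ))
print(min(ham), max(ham))
```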
\begin{table}[!htbp] \centering \begin{tabular}{|c|c|c|c|}
\hline $p$ & $A_{32}(p)$ & $A_{32}(p)/H_3(p)$ & $R_{32}(p)$ \\
\hline 11 & 1.41666 & 0.33849 & 0.88888 \\
\hline 17 & 2.65853 & 0.13841 & 0.70689 \\
\hline 23 & 4.32394 & 0.04717 & 0.56349 \\
\hline 29 & 7.24647 & 0.01238 & 0.75531 \\
\hline 41 & 27.42268 & 0.00105 & 0.49743 \\
\hline 47 & 58.89082 & 0.00031 & 0.42329 \\
\hline 53 & 105.13149 & 0.00008 & 0.49395 \\
\hline \end{tabular} \bigskip \caption{Hamiltonian cycles of Type 2 in cubic case} \label{tab:cubic-HT2} \end{table}
\begin{table}[!htbp] \centering \begin{tabular}{|c|c|c|c|}
\hline $p$ & $A_{33}(p)$ & $A_{33}(p)/H_3(p)$ & $R_{33}(p)$ \\
\hline 11 & 2.50000 & 0.59734 & 0.81481 \\
\hline 17 & 7.69642 & 0.40071 & 0.96551 \\
\hline 23 & 31.16260 & 0.34001 & 0.97619 \\
\hline 29 & 119.66486 & 0.20451 & 0.98404 \\
\hline 41 & 1878.58549 & 0.07204 & 0.98974 \\
\hline 47 & 7503.64606 & 0.04005 & 0.98706 \\
\hline 53 & 37585.84218 & 0.02978 & 0.99546 \\
\hline \end{tabular} \bigskip \caption{Hamiltonian cycles of Type 3 in cubic case} \label{tab:cubic-HT3} \end{table}
\section{Comments}
Assume that $f$ is a permutation polynomial over $\F_q$. In Theorem~\ref{thm:perm-balance} we have shown that in the graph $\cG(\lambda,f)$, along any Hamiltonian cycle of any connected component, we can get a balancing binary sequence. Our computations in Sections~\ref{sec:linear-com} and \ref{sec:cubic-com} suggest that the graph $\cG(\lambda,f)$ is usually connected. That is, in this way we can frequently obtain a balancing binary periodic sequence of period $q$. Note that the balance property is one of the three randomness postulates about a binary sequence suggested by Golomb; see \cite[Chapter 5]{Gong}. The other two postulates are called the run property and the correlation property.
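The balance property can be illustrated on a small example. The Python sketch below is our own illustration (not the paper's construction verbatim): it finds one Hamiltonian cycle in the connected graph $\cG(3,X+1)$ over $\F_7$, labels the edge $(x,y)$ with $0$ when $y^2=f(x)$ and with $1$ when only $\lambda y^2=f(x)$ holds, and checks that the resulting binary sequence of odd length $7$ has numbers of zeros and ones differing by exactly one.

```python
p, lam, a = 7, 3, 1
f = lambda x: (x + a) % p  # a permutation polynomial over F_7

# successor lists of G(3, X + 1) over F_7
succ = {x: [y for y in range(p)
            if (y * y - f(x)) % p == 0 or (lam * y * y - f(x)) % p == 0]
        for x in range(p)}

def hamiltonian_cycle(succ):
    """Return one Hamiltonian cycle as a vertex list starting and ending at 0."""
    n = len(succ)
    def dfs(path, visited):
        for w in succ[path[-1]]:
            if w == 0 and len(path) == n:
                return path + [0]
            if w not in visited:
                found = dfs(path + [w], visited | {w})
                if found:
                    return found
        return None
    return dfs([0], {0})

cycle = hamiltonian_cycle(succ)
# label 0 when the first branch y^2 = f(x) is used, and 1 otherwise;
# the single edge with f(x) = 0 satisfies both branches and is labelled 0 here
bits = [0 if (y * y - f(x)) % p == 0 else 1 for x, y in zip(cycle, cycle[1:])]
print(cycle)
print(bits, bits.count(0), bits.count(1))
```

Since $f$ is a permutation, $f(x)$ runs over all of $\F_7$ along the cycle: three values are nonzero squares (label $0$), three are non-squares (label $1$), and one is $0$, so the counts differ by one whichever convention is used for that edge.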
If the types of graphs studied in this paper frequently yield a balancing sequence which also has good run and correlation properties, then this would give a good way to construct pseudorandom number generators. In addition, our computations suggest that the graph $\cG(\lambda,f)$ is likely to be connected. It would be interesting, and also challenging, to confirm this theoretically, for instance by proving that $\cG(\lambda,f)$ is connected for an infinite family of permutation polynomials. When $Y^2=X^3+aX+b$ defines an elliptic curve over $\F_q$, it is also interesting to investigate the relation between properties of the graph $\cG(\lambda,X^3+aX+b)$ and the arithmetic of the corresponding elliptic curve. In fact, the graphs studied in this paper arise from quadratic twists (see \eqref{eq:equation}). One can generalize them to higher twists. For example, if $k \mid q -1$ and $\mu$ is not a $k$-th power in $\F_q$, one can study the graph generated by the equation $$ (Y^k - f(X)) (\mu Y^k - f(X)) \cdots (\mu^{k-1} Y^k - f(X)) = 0. $$ More generally, for any $k$ polynomials $f_0, f_1, \ldots, f_{k-1} \in \F_q[X]$ and any positive integer $n$, one can study the graph generated by the equation $$ (Y^n - f_0(X)) (Y^n - f_1(X)) \cdots ( Y^n - f_{k-1}(X)) = 0. $$ Moreover, in this graph an edge $(x,y)$ has weight $i$ if $y^n = f_i(x)$.
\section*{Acknowledgements}
The authors want to thank the referees for their valuable comments. They also would like to thank Igor Shparlinski for stimulating discussions and useful comments. For the research, Bernard Mans was partially supported by the Australian Research Council Grants DP140100118 and DP170102794, and Min Sha by a Macquarie University Research Fellowship and the Australian Research Council Grant DE190100888.
https://arxiv.org/abs/1704.08070
Generator matrix for two-dimensional cyclic codes of arbitrary length
Two-dimensional cyclic codes of length $n=\ell s$ over the finite field $\mathbb{F}$ are ideals of the polynomial ring $\frac{\mathbb{F}[x,y]}{< x^{s}-1,y^{\ell}-1 >}$. The aim of this paper is to present a novel method to study the algebraic structure of two-dimensional cyclic codes of any length $n=\ell s$ over the finite field $\mathbb{F}$. By using this method, we find the generator polynomials for ideals of $\frac{\mathbb{F}[x,y]}{< x^{s}-1,y^{\ell}-1 >}$ corresponding to two-dimensional cyclic codes. These polynomials will be applied to obtain the generator matrix for two-dimensional cyclic codes.
\section{Introduction} One of the important generalizations of the cyclic code is the two-dimensional cyclic (TDC) code. \begin{definition} Suppose that $C$ is a linear code over $\mathbb{F}$ of length $s\ell$ whose codewords are viewed as $s\times\ell$ arrays. That is, every codeword $c$ in $C$ has the following form \begin{equation*} c= \begin{pmatrix} c_{0,0} & \dots & c_{0,\ell-1}\\ c_{1,0} & \dots & c_{1,\ell-1}\\ \vdots & & \vdots\\ c_{s-1,0} & \dots & c_{s-1,\ell-1} \end{pmatrix}. \end{equation*} If $C$ is closed under row shift and column shift of codewords, then we call $C$ a $\mathrm{TDC}$ code of size $s\ell$ over $\mathbb{F}$. \end{definition} It is well known that TDC codes of length $n=s\ell$ over the finite field $\mathbb{F}$ are ideals of the polynomial ring $\mathbb{F}[x,y]/{<x^s -1,y^\ell-1>}.$ The first characterization of TDC codes was presented by Ikai et al. in \cite{ik}. Since their method was purely theoretical, it did not help with decoding these codes. Later, Imai introduced basic theories for binary TDC codes \cite{im}. The structure of some two-dimensional cyclic codes corresponding to the ideals of $\mathbb{F}[x, y]/<x^s-1,y^{2^k}-1>$ was characterized by the present author in \cite{zahra}. The aim of this paper is to find the generator matrix for TDC codes of arbitrary length $n=s\ell$ over the finite field $\mathbb{F}$. To achieve this aim, we present a new method to characterize ideals of the ring $\mathbb{F}[x, y]/<x^s-1,y^\ell-1>$ corresponding to TDC codes, and find generator polynomials for these ideals. Finally, we use these polynomials to obtain the generator matrix for the corresponding TDC codes. \begin{remark} For simplicity of notation, we write $g(x)$ instead of $g(x)+<\frak{a}>$ for elements of $\mathbb{F}[x]/<\frak{a}>$. Similarly, we write $g(x,y)$ instead of $g(x,y)+<\frak{a},\frak{b}>$ for elements of $\mathbb{F}[x,y]/<\frak{a},\frak{b}>$. 
\end{remark} \section{Generator polynomials} Set $R:=\mathbb{F}[x,y]/<x^s-1, y^{\ell}-1>$ and $S:=\mathbb{F}[x]/<x^s-1>$. Suppose that $I$ is an ideal of $R$. In this section, we construct ideals $I_i$ of $S$ ($i=0,\dots, \ell-1$) and prove that the monic generator polynomials of $I_i$ provide a generating set for $I$. Since $$\mathbb{F}[x,y]/<x^s-1,y^{\ell}-1>\cong (\mathbb{F}[x]/<x^s-1>)[y]/<y^{\ell}-1>,$$ an arbitrary element $f(x,y)$ of $I$ can be written uniquely as $f(x,y)=\sum_{i=0}^{\ell-1}f_i(x)y^i$, where $f_i(x) \in S$ for $i=0,\dots, \ell-1$. Put \begin{align*} I_0=\{g_0(x) \in S: \mathrm{there \ exists \ } & g(x,y)\in I \ \mathrm{such \ that}\ g(x,y)=\sum_{i=0}^{\ell-1}g_i(x)y^i\}. \end{align*} First, we prove that $I_0$ is an ideal of the ring $S$. Assume that $g_0(x)$ is an arbitrary element of $I_0$. According to the definition of $I_0$, there exists $g(x,y)\in I$ such that $g(x,y)=\sum_{i=0}^{\ell-1}g_i(x)y^i$. Now, $xg_0(x)\in I_0$ since $I$ is an ideal of $R$ and $xg(x,y)=\sum_{i=0}^{\ell-1}xg_i(x)y^i$ is an element of $I$. Besides, $I_0$ is closed under addition and so $I_0$ is an ideal of $S$. It is well-known that $S$ is a principal ideal ring. Therefore, there exists a unique monic polynomial $p_0^{0}(x)$ in $S$ such that $I_0=<p_0^{0}(x)>$ and $p_0^{0}(x)$ is a divisor of $x^s-1$. So there exists a polynomial $p'_0(x)$ in $\mathbb{F}[x]$ such that $x^s-1=p'_0(x)p_0^{0}(x)$. Now, consider the following equations \begin{align*} f(x,y)&=f_0(x)+f_1(x)y+\dots+ f_{\ell-1}(x)y^{\ell-1}\\ yf(x,y)&=f_0(x)y+f_1(x)y^2+\dots+ f_{\ell-1}(x)y^{\ell}\\& =f_{\ell-1}(x)+f_0(x)y+f_1(x)y^2+\dots+f_{\ell-2}(x)y^{\ell-1}. \ \ \ \ \ (y^\ell=1 \ \mathrm{in} \ R ) \end{align*} Since $I$ is an ideal of $R$, $yf(x, y) \in I$. So according to the definition of $I_0$, $f_{\ell-1}(x) \in I_0$. A similar method can be applied to prove that $f_i(x) \in I_0=<p_0^{0}(x)>$ for $i=1,\dots,\ell-2$. 
So \begin{align}\label{1} f_i(x)=p_0^{0}(x) q_i(x) \end{align} for some $q_i (x) \in S$. Now, $p_0^{0}(x) \in I_0$ so according to the definition of $I_0$, there exists $\frak{p}_0(x,y) \in I$ such that $$\frak{p}_0 (x,y)=\sum_{i=0}^{\ell-1} p_i^{0}(x)y^i.$$ Again since $I$ is an ideal of $R$, $y^{\ell-i}\frak{p}_0(x, y) \in I$ for $i=1,\dots,\ell-1$, and the $y^0$-coefficient of $y^{\ell-i}\frak{p}_0(x, y)$ is $p_i^{0}(x)$. So according to the definition of $I_0$, $p^0_i(x) \in I_0=<p_0^{0}(x)>$. Therefore, \begin{align*} p^0_i(x)=p_0^{0}(x) t^0_i(x) \end{align*} for some $t^0_i (x) \in S$, and so $$\frak{p}_0 (x,y)=p_0^{0}(x)+\sum_{i=1}^{\ell-1} p_0^{0}(x) t^0_i(x)y^i.$$ Set \begin{align*} h_1(x,y):&=f(x,y)-\frak{p}_0(x,y)q_0(x)=\sum_{i=0}^{\ell-1}f_i(x)y^i-q_0(x)\sum_{i=0}^{\ell-1} p_i^{0}(x)y^i\\&= f_0(x)+\sum_{i=1}^{\ell-1}f_i(x)y^i-p_0^{0}(x) q_0(x)-q_0(x)\sum_{i=1}^{\ell-1} p_i^{0}(x)y^i\\&=\sum_{i=1}^{\ell-1}f_i(x)y^i-q_0(x)\sum_{i=1}^{\ell-1} p_i^{0}(x)y^i. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \mathrm{(by\ equation\ \ref{1})} \end{align*} Since $f(x,y)$ and $\frak{p}_0(x,y)$ are in $I$ and $I$ is an ideal of $R$, $h_1(x,y)$ is a polynomial in $I$. Also note that $h_1(x,y)$ is of the form $h_1(x,y)=\sum_{i=1}^{\ell-1} h_i^{1}(x)y^i$ for some $h_i^{1}(x) \in S$. Now, put \begin{align*} I_1=\{g_1(x) \in S:& \ \mathrm{there \ exists} \ g(x,y)\in I \ \mathrm{such \ that \ } g(x,y)=\sum_{i=1}^{\ell-1}g_i(x)y^i\}. \end{align*} By the same method as applied to $I_0$, it can be proved that $I_1$ is an ideal of $S$. Thus, there exists a unique monic polynomial $p_1^1(x)$ in $S$ such that $I_1=<p_1^{1}(x)>$ and $p_1^{1}(x)$ is a divisor of $x^s-1$. Therefore, there exists a polynomial $p'_1(x)$ in $\mathbb{F}[x]$ such that $x^s-1=p'_1(x)p_1^{1}(x)$. Now, $h_1(x,y) \in I$ so according to the definition of $I_1$, $h_1^{1}(x) \in I_1=<p_1^{1}(x)>$, and so \begin{align}\label{2} h_1^{1}(x)=p_1^{1}(x) q_1(x) \end{align} for some $ q_1(x) \in S$. 
And now, $p_1^1(x) \in I_1$ so according to the definition of $I_1$, there exists $\frak{p}_1(x,y) \in I$ such that $$\frak{p}_1(x,y)=\sum_{i=1}^{\ell-1} p_i^1(x)y^i.$$ Again since $I$ is an ideal of $R$, $y^{\ell-i}\frak{p}_1(x, y) \in I$, and its $y^0$-coefficient is $p_i^1(x)$. So according to the definition of $I_0$, $p^1_i(x) \in I_0=<p_0^{0}(x)>$ for $i=1,\dots,\ell-1$. Therefore, \begin{align*} p^1_i(x)=p_0^{0}(x) t^1_i(x) \end{align*} for some $t^1_i(x) \in S$, and so $$\frak{p}_1 (x,y)=\sum_{i=1}^{\ell-1} p_0^{0}(x) t^1_i(x)y^i.$$ Set \begin{align*} h_2(x,y):&=h_1(x,y)-\frak{p}_1(x,y)q_1(x)=\sum_{i=1}^{\ell-1} h_i^{1}(x)y^i-q_1(x)\sum_{i=1}^{\ell-1} p_i^1(x)y^i\\&=h_1^{1}(x)y+\sum_{i=2}^{\ell-1} h_i^{1}(x)y^i-p_1^{1}(x) q_1(x)y-q_1(x)\sum_{i=2}^{\ell-1} p_i^1(x)y^i\\&=\sum_{i=2}^{\ell-1} h_i^{1}(x)y^i-q_1(x)\sum_{i=2}^{\ell-1} p_i^1(x)y^i.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \mathrm{(by\ equation\ \ref{2})} \end{align*} Since $h_1(x,y)$ and $\frak{p}_1(x,y)$ are in $I$ and $I$ is an ideal of $R$, $h_2(x,y)$ is a polynomial in $I$ of the form $h_2(x,y)=\sum_{i=2}^{\ell-1}h_i^{2}(x)y^i$ for some $h_i^{2}(x) \in S$. Put \begin{align*} I_2=\{g_2(x) \in S: & \ \mathrm{there \ exists \ } g(x,y)\in I \ \mathrm{such \ that \ } g(x,y)= \sum_{i=2}^{\ell-1} g_i(x)y^i\}. \end{align*} Again $I_2$ is an ideal of $S$, and so there exists a unique monic polynomial $p_2^{2}(x)$ in $S$ such that $I_2=<p_2^{2}(x)>$. Also $p_2^{2}(x)$ is a divisor of $x^s-1$, and so there exists a polynomial $p'_2(x)$ in $\mathbb{F}[x]$ such that $x^s-1=p'_2(x)p_2^{2}(x)$. Now, $h_2(x,y) \in I$ so according to the definition of $I_2$, $h_2^{2}(x) \in I_2=<p_2^{2}(x)>$. So \begin{align}\label{3} h_2^{2}(x)=p_2^{2}(x)q_2(x) \end{align} for some $q_2(x)\in S$. Besides, $p_2^2(x) \in I_2$ so by definition of $I_2$, there exists $\frak{p}_2(x,y) \in I$ such that $$\frak{p}_2(x,y)=\sum_{i=2}^{\ell-1}p_i^2(x)y^i.$$ Again since $I$ is an ideal of $R$, $y^{\ell-i}\frak{p}_2(x, y) \in I$, and its $y^0$-coefficient is $p_i^2(x)$. 
So according to the definition of $I_0$, $p^2_i(x) \in I_0=<p_0^{0}(x)>$. Therefore, \begin{align*} p^2_i(x)=p_0^{0}(x) t^2_i(x) \end{align*} for some $t^2_i(x) \in S$, and so $$\frak{p}_2 (x,y)=\sum_{i=2}^{\ell-1} p_0^{0}(x) t^2_i(x)y^i.$$ Set \begin{align*} h_3(x,y):&=h_2(x,y)-\frak{p}_2(x,y)q_2(x)=\sum_{i=2}^{\ell-1} h_i^{2}(x)y^i-q_2(x)\sum_{i=2}^{\ell-1} p_i^2(x)y^i\\&=h_2^{2}(x)y^2+\sum_{i=3}^{\ell-1} h_i^{2}(x)y^i-p_2^{2}(x) q_2(x)y^2-q_2(x)\sum_{i=3}^{\ell-1} p_i^2(x)y^i\\&=\sum_{i=3}^{\ell-1} h_i^{2}(x)y^i-q_2(x)\sum_{i=3}^{\ell-1} p_i^2(x)y^i.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \mathrm{(by\ equation \ \ref{3})} \end{align*} Therefore, $h_3(x,y)$ is a polynomial in $I$ in the form of $h_3(x,y)=\sum_{i=3}^{\ell-1}h_i^3(x)y^i$ for some $h_i^{3}(x) \in S$. In the next step, we put \begin{align*} I_3=\{&g_3(x) \in S: \mathrm{there \ exists \ } g(x,y)\in I \ \mathrm{such \ that \ } g(x,y)=\sum_{i=3}^{\ell-1}g_i(x)y^i\}. \end{align*} The same procedure is applied to obtain polynomials $$h_4(x,y),\dots,h_{\ell-2}(x,y),\frak{p}_3(x,y),\dots,\frak{p}_{\ell-2}(x,y)$$ in $I$ and polynomials $q_3(x),\dots,q_{\ell-2}(x)$ in $S$ and construct ideals $I_4,\dots,I_{\ell-2}$. Finally, we set $$h_{\ell-1}(x,y):=h_{\ell-2}(x,y)-\frak{p}_{\ell-2}(x,y)q_{\ell-2}(x).$$ Thus, $h_{\ell-1}(x,y)$ is a polynomial in $I$ in the form of $h_{\ell-1}(x,y)=h_{\ell-1}^{\ell-1}(x)y^{\ell-1}$. Set \begin{align*} I_{\ell-1}=\{ g_{\ell-1}(x) \in S: & \ \mathrm{there \ exists \ } g(x,y)\in I \ \mathrm{such \ that \ } g(x,y)=g_{\ell-1}(x)y^{\ell-1}\}. \end{align*} Clearly $I_{\ell-1}$ is an ideal of $S$. Thus, there exists a unique monic polynomial $p_{\ell-1}^{\ell-1}(x)$ in $S$ such that $I_{\ell-1}=<p_{\ell-1}^{\ell-1}(x)>$ and $p_{\ell-1}^{\ell-1}(x)$ is a divisor of $x^s-1$ (there exists $p'_{\ell-1}(x)$ in $\mathbb{F}[x]$ such that $x^s-1=p'_{\ell-1}(x)p_{\ell-1}^{\ell-1}(x)$). 
Now, $h_{\ell-1}(x,y) \in I$ so according to the definition of $I_{\ell-1}$, $h_{\ell-1}^{\ell-1}(x) \in I_{\ell-1}=<p_{\ell-1}^{\ell-1}(x)>$. So \begin{align}\label{tip} h_{\ell-1}^{\ell-1}(x)=q_{\ell-1}(x)p_{\ell-1}^{\ell-1}(x) \end{align} for some $q_{\ell-1}(x) \in S$. And now, $p_{\ell-1}^{\ell-1}(x) \in I_{\ell-1}$ so according to the definition of $I_{\ell-1}$, there exists $\frak{p}_{\ell-1}(x,y) \in I$ such that $\frak{p}_{\ell-1}(x,y)=p_{\ell-1}^{\ell-1}(x)y^{\ell-1}.$ Therefore, by equation \ref{tip} $$h_{\ell-1}(x,y)=h_{\ell-1}^{\ell-1}(x)y^{\ell-1}=q_{\ell-1}(x)p_{\ell-1}^{\ell-1}(x)y^{\ell-1}=q_{\ell-1}(x)\frak{p}_{\ell-1}(x,y).$$ Again since $I$ is an ideal of $R$, $y\frak{p}_{\ell-1}(x, y) \in I$, and its $y^0$-coefficient is $p_{\ell-1}^{\ell-1}(x)$. So according to the definition of $I_0$, $p^{\ell-1}_{\ell-1}(x) \in I_0=<p_0^{0}(x)>$. Thus, \begin{align*} p^{\ell-1}_{\ell-1}(x)=p_0^{0}(x) t^{\ell-1}_{\ell-1}(x) \end{align*} for some $t^{\ell-1}_{\ell-1}(x) \in S$, and so $$\frak{p}_{\ell-1} (x,y)=p_0^{0}(x) t^{\ell-1}_{\ell-1}(x)y^{\ell-1}.$$ Therefore, for an arbitrary element $f(x,y) \in I$ we have shown that \begin{align*} & h_1(x,y):=f(x,y)-\frak{p}_0(x,y)q_0(x)\\& h_2(x,y):=h_1(x,y)-\frak{p}_1(x,y)q_1(x)\\& h_3(x,y):=h_2(x,y)-\frak{p}_2(x,y)q_2(x)\\& \dots\\& h_{\ell-1}(x,y):=h_{\ell-2}(x,y)-\frak{p}_{\ell-2}(x,y)q_{\ell-2}(x)\\& h_{\ell-1}(x,y)=q_{\ell-1}(x)\frak{p}_{\ell-1}(x,y). \end{align*} So \begin{align*} f(x,y)&=\frak{p}_0(x,y)q_0(x)+\frak{p}_1(x,y)q_1(x)+\frak{p}_2(x,y)q_2(x)\\&+ \dots+\frak{p}_{\ell-2}(x,y)q_{\ell-2}(x)+\frak{p}_{\ell-1}(x,y)q_{\ell-1}(x). \end{align*} Since $\frak{p}_{i}(x,y)\in I$ for $i=0,\dots,\ell-1$ and $f(x,y)$ is an arbitrary element of $I$ and $I$ is an ideal of $R$, we conclude that $$I=<\frak{p}_0(x,y),\dots,\frak{p}_{\ell-1}(x,y)>,$$ where $\frak{p}_j (x,y)=\sum_{i=j}^{\ell-1} p_0^{0}(x) t^j_i(x)y^i$. So $\{\frak{p}_0(x,y), \frak{p}_1(x,y),\dots,\frak{p}_{\ell-1}(x,y)\}$ is a set of generating polynomials for $I$. 
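As a concrete toy illustration of the ideal $I_0$ (not taken from the paper; the example ideal is an assumption), the following Python sketch enumerates the ideal $I$ generated by $(1+x)(1+y)$ in $\mathbb{F}_2[x,y]/<x^3-1,y^2-1>$ (so $s=3$, $\ell=2$), collects the $y^0$-coefficients of its elements, and checks that these coefficients form an ideal of $S=\mathbb{F}_2[x]/<x^3-1>$, here the cyclic code generated by $p_0^0(x)=1+x$.

```python
from itertools import product

s, l = 3, 2  # toy parameters: R = F2[x,y]/<x^3-1, y^2-1>; elements stored as l x s arrays

def x_shift(c):  # multiplication by x: cyclically shift each row (coefficients of x^a)
    return tuple((row[-1],) + row[:-1] for row in c)

def y_shift(c):  # multiplication by y: cyclically shift the rows (coefficients of y^b)
    return (c[-1],) + c[:-1]

def add(c, d):   # addition in R (coefficientwise mod 2)
    return tuple(tuple((a + b) % 2 for a, b in zip(rc, rd)) for rc, rd in zip(c, d))

g = ((1, 1, 0), (1, 1, 0))  # (1+x)(1+y) = 1 + x + y + xy

# all monomial multiples x^a y^b g, then their F2-span: the ideal I = <g>
shifts, c = [], g
for _ in range(l):
    r = c
    for _ in range(s):
        shifts.append(r)
        r = x_shift(r)
    c = y_shift(c)

I = set()
for coeffs in product((0, 1), repeat=len(shifts)):
    acc = ((0,) * s,) * l
    for eps, sh in zip(coeffs, shifts):
        if eps:
            acc = add(acc, sh)
    I.add(acc)

I0 = {c[0] for c in I}  # the y^0-coefficients of the elements of I
```

Here $I$ has four elements and $I_0=\{0,\,1+x,\,x+x^2,\,1+x^2\}$, which is indeed closed under addition and under multiplication by $x$.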
In the next theorem, we introduce the generator matrix for TDC codes. \begin{theorem} Suppose that $I$ is an ideal of $\mathbb{F}[x,y]/<x^s-1,y^{\ell}-1>$ and is generated by $\{\frak{p}_0(x,y),\dots,\frak{p}_{\ell-1}(x,y)\}$, obtained by the above method. Then the set \begin{align*} \{&\frak{p}_0(x,y),x\frak{p}_0(x,y),\dots,x^{s-a_0-1}\frak{p}_0(x,y), \\ & \frak{p}_1(x,y),x\frak{p}_1(x,y),\dots,x^{s-a_1-1}\frak{p}_1(x,y), \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \vdots \\ & \frak{p}_{\ell-1}(x,y),x\frak{p}_{\ell-1}(x,y),\dots,x^{s-a_{\ell-1}-1}\frak{p}_{\ell-1}(x,y)\} \end{align*} forms an $\mathbb{F}$-basis for $I$, where $a_i=\mathrm{deg}(p_{i}^i(x))$. \end{theorem} \begin{proof} Assume that $l_0(x),\dots, l_{\ell-1}(x)$ are polynomials in $\mathbb{F}[x]$ such that $\mathrm{deg}(l_i(x))< s-a_i$ and $l_0(x) \frak{p}_0(x,y)+\dots+l_{\ell-1}(x)\frak{p}_{\ell-1}(x,y)=0$. This implies the following equation in $S$: $$l_0(x) p_0^0(x)=0.$$ Therefore, $l_0(x) p_0^0(x)=s(x) (x^s-1)$ for some $s(x) \in \mathbb{F}[x]$. Now, if $s(x)\neq 0$ then the degree in $x$ of the right-hand side of this equation is at least $s$, but since $\mathrm{deg}(p_0^0(x))=a_0$ and $\mathrm{deg}(l_0(x))< s-a_0$, the degree in $x$ of the left-hand side is at most $s-1$. Hence $s(x)=0$, and so we get $l_0(x)=0$. Similar arguments yield $l_i(x)=0$ for $i=1,\dots,\ell-1$. \end{proof} \section{Conclusion} In this paper, we present a novel method for studying the structure of TDC codes of length $n=s\ell$. This leads to studying the structure of ideals of the ring $\mathbb{F}[x,y]/<x^s-1,y^{\ell}-1>$. By using the novel method, we obtain generating sets of polynomials and the generator matrix for TDC codes.
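To make the basis of the theorem concrete, here is a small assumed example (continuing the toy parameters $\mathbb{F}=\mathbb{F}_2$, $s=3$, $\ell=2$; it is not from the paper). For the ideal generated by $\frak{p}_0(x,y)=(1+x)(1+y)$ one has $p_0^0(x)=1+x$, so $a_0=1$ and the basis consists of $\frak{p}_0$ and $x\frak{p}_0$; in this example $I_1=\{0\}$, which we read as contributing no rows. Flattening the coefficient array of each basis element gives the rows of a generator matrix.

```python
def flatten(c):
    # read an (l x s) coefficient array row by row into one length-(l*s) codeword
    return tuple(v for row in c for v in row)

p0 = ((1, 1, 0), (1, 1, 0))   # (1+x)(1+y): rows are the y^0 and y^1 coefficients
xp0 = ((0, 1, 1), (0, 1, 1))  # x * (1+x)(1+y)

G = [flatten(p0), flatten(xp0)]  # generator matrix rows over F2
```

The two rows are linearly independent over $\mathbb{F}_2$ and span a code with $2^2=4$ codewords, matching the four elements of the toy ideal.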
https://arxiv.org/abs/1607.06639
Vector lattices and $f$-algebras: the classical inequalities
We prove an identity for sesquilinear maps from the Cartesian square of a vector space to a geometric mean closed Archimedean (real or complex) vector lattice, from which the Cauchy-Schwarz inequality follows. A reformulation of this result for sesquilinear maps with a geometric mean closed semiprime Archimedean (real or complex) $f$-algebra as codomain is also given. In addition, a sufficient and necessary condition for equality is presented. We also prove the Hölder inequality for weighted geometric mean closed Archimedean (real or complex) $\Phi$-algebras, improving results by Boulabiar and Toumi. As a consequence, the Minkowski inequality for weighted geometric mean closed Archimedean (real or complex) $\Phi$-algebras is obtained.
\title[The Classical Inequalities]{ Vector Lattices and $f$-Algebras: The Classical Inequalities} \author{G. Buskes} \address{Department of Mathematics, University of Mississippi, University, MS 38677, USA} \email{mmbuskes@olemiss.edu} \author{C. Schwanke} \address{Unit for BMI, North-West University, Private Bag X6001, Potchefstroom, 2520, South Africa} \email{schwankc326@gmail.com} \date{\today} \subjclass[2010]{46A40} \keywords{vector lattice, $f$-algebra, Cauchy-Schwarz inequality, H\"older inequality, Minkowski inequality} \begin{abstract} We present some of the classical inequalities in analysis in the context of Archimedean (real or complex) vector lattices and $f$-algebras. In particular, we prove an identity for sesquilinear maps from the Cartesian square of a vector space to a geometric mean closed Archimedean vector lattice, from which a Cauchy-Schwarz inequality follows. A reformulation of this result for sesquilinear maps with a geometric mean closed semiprime Archimedean $f$-algebra as codomain is also given. In addition, a sufficient and necessary condition for equality is presented. We also prove a H\"older inequality for weighted geometric mean closed Archimedean $\Phi$-algebras, substantially improving results by Boulabiar and Toumi. As a consequence, a Minkowski inequality for weighted geometric mean closed Archimedean $\Phi$-algebras is obtained. \end{abstract} \maketitle \section{Introduction}\label{S:intro} Rich connections between the theory of Archimedean vector lattices and the classical inequalities in analysis, though hitherto little explored, were implied in \cite{BerHui} and the subsequent developments in \cite{Bo2,BusvR4,Kus2,Tou}. 
In particular, \cite{BusvR4} presents a relationship between the Cauchy-Schwarz inequality and the theory of multilinear maps on vector lattices, built on the analogy between disjointness in vector lattices and orthogonality in inner product spaces. The ideas in \cite{BusvR4} led to the construction of powers of vector lattices (see \cite{BoBus,BusvR2}), a theory that was recently extended to the complex vector lattice environment in \cite{BusSch2}. This paper follows the complex theme of \cite{BusSch2} and in fact contains results that are valid for both real vector lattices and complex vector lattices. We also conjoin the Cauchy-Schwarz, H\"older, and Minkowski inequalities with the theory of geometric mean closed Archimedean vector lattices, as found in \cite{Az,AzBoBus}. We first discuss these results more closely. In \cite[Corollary 4]{BusvR4}, the first author and van Rooij extend the classical Cauchy-Schwarz inequality as follows. If $V$ is a real vector space and $A$ is an Archimedean almost $f$-algebra then for every bilinear map $T\colon V\times V\rightarrow A$ such that \begin{itemize} \item[(1)] $T(v,v)\geq 0\ (v\in V)$, and \item[(2)] $T(u,v)=T(v,u)\ (u,v\in V)$ \end{itemize} we have \[ T(u,v)^2\leq T(u,u)T(v,v)\ (u,v\in V). \] This is the classical Cauchy-Schwarz inequality when $A=\mathbb{R}$, which is equivalent to \begin{equation}\label{(1)} |T(u,v)|\leq\bigl(T(u,u)T(v,v)\bigr)^{1/2}=2^{-1}\inf\{\theta T(u,u)+\theta^{-1}T(v,v):\theta\in(0,\infty)\} \end{equation} for $u,v\in V$. The proof of \cite[Corollary 4]{BusvR4} easily adapts to a natural complex analogue for sesquilinear maps, though the condition for equality in the classical Cauchy-Schwarz inequality (see e.g. \cite[page 3]{Con}) does not hold in this more general context (see Example~\ref{E:noclaeqco}). 
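As a quick numerical sanity check of \eqref{(1)} (an illustrative sketch, not part of the paper), one can take $T$ to be the ordinary dot product on $\mathbb{R}^3$ and approximate the infimum over $\theta\in(0,\infty)$ on a logarithmic grid; the vectors $u,v$ below are arbitrary choices.

```python
import math

def T(u, v):
    # ordinary dot product on R^n: a positive semidefinite, symmetric bilinear map
    return sum(a * b for a, b in zip(u, v))

u, v = [1.0, 2.0, -1.0], [3.0, 0.5, 2.0]

lhs = abs(T(u, v))
rhs_classic = math.sqrt(T(u, u) * T(v, v))

# approximate 2^{-1} inf{ theta*T(u,u) + theta^{-1}*T(v,v) : theta > 0 } on a log grid
thetas = [10 ** (k / 1000) for k in range(-3000, 3001)]
rhs_inf = 0.5 * min(t * T(u, u) + T(v, v) / t for t in thetas)
```

Calculus gives $\inf_{\theta>0}(\theta a+\theta^{-1}b)=2\sqrt{ab}$ at $\theta=\sqrt{b/a}$, so the grid approximation `rhs_inf` should agree closely with the classical bound $\sqrt{T(u,u)T(v,v)}$.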
In Theorem~\ref{T:CSI} of this paper, we extend both the real and complex versions of the classical Cauchy-Schwarz inequality by replacing the codomain of the sesquilinear maps with an Archimedean (real or complex) vector lattice that is closed under the infimum in \eqref{(1)} above. We also prove a convenient formula for the difference between the two sides of the Cauchy-Schwarz inequality and use it to generalize the known condition for equality in the classical case. In Corollary~\ref{C:CSI}, we obtain a Cauchy-Schwarz inequality as well as a condition for equality for sesquilinear maps with values in a semiprime Archimedean $f$-algebra that is closed under the infimum in \eqref{(1)}. Theorem~\ref{T:HI} of this paper proves a H\"older inequality for positive linear maps between Archimedean $\Phi$-algebras that are closed under certain weighted renditions of \eqref{(1)}. Our H\"older inequality generalizes \cite[Theorem 5, Corollary 6]{Bo2} and \cite[Theorem 3.12]{Tou} by (1) weakening the assumption of uniform completeness, (2) including irrational exponents via explicit formulas without restricting the codomain to the real numbers, (3) providing a result for several variables, and (4) enabling the domain and codomain of the positive linear maps in question to be either both real $\Phi$-algebras or both complex $\Phi$-algebras. We remark that Theorem~\ref{T:HI} is itself a consequence of Proposition~\ref{P:Mali&Me}, which in turn generalizes a reformulation of the classical H\"older inequality by Maligranda \cite[(HI$_{1}$)]{Mali}. Indeed, Proposition~\ref{P:Mali&Me} is a reinterpretation of (HI$_{1}$) for Archimedean vector lattices that simultaneously extends (HI$_{1}$) to several variables. We add that Kusraev \cite[Theorem 4.2]{Kus2} independently developed his own version of (HI$_{1}$) in the setting of uniformly complete Archimedean vector lattices. Our Proposition~\ref{P:Mali&Me} contains \cite[Theorem 4.2]{Kus2}. 
Noting that Proposition~\ref{P:Mali&Me} relies primarily on the Archimedean vector lattice functional calculus, it (contrary to \cite[Theorem 4.2]{Kus2}) depends only on (at most) the countable Axiom of Choice. Finally, we employ the H\"older inequality of Theorem~\ref{T:HI} to prove a Minkowski inequality in Theorem~\ref{T:MI}. We proceed with some preliminaries. \section{Preliminaries}\label{S:prelims} We refer the reader to \cite{AB,LuxZan1,Zan2} for any unexplained terminology regarding vector lattices and $f$-algebras. Throughout, $\mathbb{R}$ is used for the real numbers, $\mathbb{C}$ denotes the complex numbers, $\mathbb{K}$ stands for either $\mathbb{R}$ or $\mathbb{C}$, and the symbol for the set of strictly positive integers is $\mathbb{N}$. An Archimedean real vector lattice $E$ is said to be \textit{square mean closed} (see \cite[page 482]{AzBoBus}) if $\sup\{ (\cos\theta)f+(\sin\theta)g:\theta\in[0,2\pi]\}$ exists in $E$ for every $f,g\in E$, and in this case we write \[ f\boxplus g=\sup\{ (\cos\theta)f+(\sin\theta)g:\theta\in[0,2\pi]\}\ (f,g\in E). \] The notion of square mean closedness in vector lattices dates back to a 1973 paper by de Schipper, under the term \textit{property (E)} (see \cite[page 356]{dS}). We adopt de Schipper's definition of an Archimedean complex vector lattice but use the terminology found in \cite{AzBoBus}. Throughout, $V+iV$ denotes the commonly used vector space complexification of a real vector space $V$. An \textit{Archimedean complex vector lattice} is a complex vector space of the form $E+iE$, where $E$ is a square mean closed Archimedean real vector lattice \cite[pages 356--357]{dS}. An Archimedean real vector lattice will also be called an \textit{Archimedean vector lattice over $\mathbb{R}$}, and an Archimedean complex vector lattice will additionally be referred to as an \textit{Archimedean vector lattice over $\mathbb{C}$}. 
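The square mean introduced above can be checked numerically in the simplest case $E=\mathbb{R}$ (an illustrative sketch, not from the paper): for $f+ig\in\mathbb{C}$ one expects $f\boxplus g=\sup\{(\cos\theta)f+(\sin\theta)g:\theta\in[0,2\pi]\}=\sqrt{f^2+g^2}$, the ordinary complex modulus.

```python
import math

f, g = 3.0, -4.0  # the element f + ig with E = R; its modulus should be 5

# approximate f "box-plus" g = sup{ cos(t)*f + sin(t)*g : t in [0, 2*pi] } on a grid
ts = [2 * math.pi * k / 100000 for k in range(100000)]
box_plus = max(math.cos(t) * f + math.sin(t) * g for t in ts)
```

The maximum of $f\cos t+g\sin t$ is $\sqrt{f^2+g^2}$, attained where $(\cos t,\sin t)$ points in the direction of $(f,g)$.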
An \textit{Archimedean vector lattice over} $\mathbb{K}$ is a vector space that is either an Archimedean vector lattice over $\mathbb{R}$ or an Archimedean vector lattice over $\mathbb{C}$. Equivalently, an Archimedean vector lattice over $\mathbb{K}$ is a vector space over $\mathbb{K}$ that is equipped with an Archimedean modulus, as defined axiomatically by Mittelmeyer and Wolff in \cite[Definition 1.1]{MW}. Given an Archimedean vector lattice $E+iE$ over $\mathbb{C}$, we write $\text{Re}(f+ig)=f$ and $\text{Im}(f+ig)=g\ (f,g\in E)$. For convenience, we write $\text{Re}(f)=f$ and $\text{Im}(f)=0\ (f\in E)$ when $E$ is an Archimedean vector lattice over $\mathbb{R}$. Lemma 1.2, Corollary 1.4, Proposition 1.5, and Theorem 2.2 of \cite{MW} together imply that the Archimedean modulus on an Archimedean vector lattice $E$ over $\mathbb{K}$ is given by the formula \begin{align*} |f|=\sup\{\text{Re}(\lambda f):\lambda\in\mathbb{K},|\lambda|=1\}\ (f\in E). \end{align*} In particular, for an Archimedean vector lattice $E$ over $\mathbb{R}$ we have $|f|=f\vee(-f)\ (f\in E)$, while $|f+ig|=f\boxplus g\ (f,g\in E)$ holds in any Archimedean vector lattice $E+iE$ over $\mathbb{C}$. For an Archimedean vector lattice $E$ over $\mathbb{K}$, we define the \textit{positive cone} $E^{+}$ of $E$ by $E^{+}=\{ f\in E:|f|=f\}$, while the real vector lattice $E_{\rho}=\{ f-g:f,g\in E^{+}\}$ is called the \textit{real part} of $E$. With this notation, $E=E_{\rho}$ for every Archimedean vector lattice $E$ over $\mathbb{R}$, whereas $E=E_{\rho}+iE_{\rho}$ whenever $E$ is an Archimedean vector lattice over $\mathbb{C}$. We say that an Archimedean vector lattice $E$ over $\mathbb{K}$ is \textit{uniformly complete} (respectively, \textit{Dedekind complete}) if $E_{\rho}$ is uniformly complete (respectively, Dedekind complete). We denote the Dedekind completion of an Archimedean vector lattice $E$ over $\mathbb{R}$ by $E^{\delta}$. 
Given an Archimedean vector lattice $E$ over $\mathbb{C}$, we define $E^{\delta}=(E_{\rho})^{\delta}+i(E_{\rho})^{\delta}$ and note that $E^{\delta}$ is also an Archimedean vector lattice over $\mathbb{C}$. Indeed, every Dedekind complete Archimedean vector lattice over $\mathbb{R}$ is uniformly complete \cite[Lemma 39.2, Theorem 39.4]{LuxZan1}, and every uniformly complete Archimedean vector lattice over $\mathbb{R}$ is square mean closed \cite[Section 2]{BeuHuidP}. An Archimedean vector lattice $E$ over $\mathbb{R}$ is said to be \textit{geometric mean closed} (see \cite[page 486]{AzBoBus}) if $\inf\{\theta f+\theta^{-1}g:\theta\in(0,\infty)\}$ exists in $E$ for every $f,g\in E^{+}$, and in this case we write \[ f\boxtimes g=2^{-1}\inf\{\theta f+\theta^{-1}g:\theta\in(0,\infty)\}\ (f,g\in E^{+}). \] We define an Archimedean vector lattice over $\mathbb{K}$ to be \textit{geometric mean closed} (respectively, \textit{square mean closed}) if $A_{\rho}$ is geometric mean closed (respectively, square mean closed). Thus every Archimedean vector lattice over $\mathbb{C}$ is square mean closed. We next provide some basic information regarding Archimedean $f$-algebras that will be needed throughout this paper. The multiplication on an Archimedean $f$-algebra $A$ canonically extends to a multiplication on $A+iA$. We call an Archimedean vector lattice $A$ over $\mathbb{K}$ an \textit{Archimedean $f$-algebra over $\mathbb{K}$} if $A_{\rho}$ is an $f$-algebra. If in addition $A$ has a multiplicative identity then we say that $A$ is an \textit{Archimedean $\Phi$-algebra over $\mathbb{K}$}. It was proved in \cite[Corollary 10.4]{dP} (also see \cite[Theorem 142.5]{Zan2}) that every Archimedean $\Phi$-algebra over $\mathbb{R}$ (and therefore every Archimedean $\Phi$-algebra over $\mathbb{K}$) is semiprime. The multiplication on an Archimedean $f$-algebra over $\mathbb{K}$ will be denoted by juxtaposition throughout. 
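For intuition about $\boxtimes$, consider the assumed illustrative setting $F=\mathbb{R}^n$ with the coordinatewise order (a geometric mean closed vector lattice, and a semiprime $f$-algebra under coordinatewise multiplication). There the lattice infimum of $\{\theta f+\theta^{-1}g:\theta\in(0,\infty)\}$ is computed coordinatewise, so $f\boxtimes g$ should be the coordinatewise geometric mean $\sqrt{f_i g_i}$.

```python
import math

# F = R^n with the coordinatewise order (assumed illustrative setting); the lattice
# infimum of the set { theta*f + theta^{-1}*g : theta > 0 } is taken coordinatewise,
# and each coordinate may attain its infimum at a different theta
f = [1.0, 4.0, 9.0]
g = [9.0, 1.0, 4.0]

thetas = [10 ** (k / 1000) for k in range(-3000, 3001)]
box_times = [0.5 * min(t * fi + gi / t for t in thetas) for fi, gi in zip(f, g)]
```

For the vectors above this gives approximately $(3, 2, 6)$, matching $\sqrt{f_i g_i}$ in each coordinate.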
For Archimedean $f$-algebras $A$ and $B$ over $\mathbb{K}$, we say that a map $T\colon A\rightarrow B$ is \textit{multiplicative} if $T(ab)=T(a)T(b)\ (a,b\in A)$. Let $A$ be an Archimedean $f$-algebra over $\mathbb{K}$, and let $n\in\mathbb{N}$ and $a\in A^{+}$. If there exists a unique element $r$ of $A^{+}$ such that $r^{n}=a$, we write $r=a^{1/n}$ and say that $a^{1/n}$ exists. If $A$ is an Archimedean semiprime $f$-algebra and $a,r\in A^{+}$ satisfy $r^{n}=a$ then $r=a^{1/n}$ \cite[Proposition 2(ii)]{BeuHui}. Given $m,n\in\mathbb{N}$, we write $a^{m/n}=(a^{m})^{1/n}$, provided $(a^{m})^{1/n}$ exists. Every uniformly complete semiprime Archimedean $f$-algebra $A$ over $\mathbb{R}$ is geometric mean closed (see \cite[Theorem 2.21]{Az}) and \begin{equation}\label{(2)} f\boxtimes g=(fg)^{1/2}\ (f,g\in A^{+}). \end{equation} The formula \eqref{(2)} also holds under the weaker assumption that $A$ is geometric mean closed. In fact, the proof of \eqref{(2)} in \cite[Theorem 2.21]{Az} does not require uniform completeness, as illustrated in the next proposition. Since \cite{Az} is not widely accessible, we reproduce the proof of \cite[Theorem 2.21]{Az}, while (trivially) extending this theorem to include complex vector lattices. \begin{proposition}\label{P:gmfa} Let $A$ be a semiprime Archimedean $f$-algebra over $\mathbb{K}$. If $A$ is geometric mean closed then $f\boxtimes g=(fg)^{1/2}\ (f,g\in A^{+})$. \end{proposition} \begin{proof} Evidently, $A_{\rho}$ is a semiprime Archimedean $f$-algebra over $\mathbb{R}$. Let $f,g\in A^{+}$, and let $C$ be the $f$-subalgebra of $A_{\rho}$ generated by the elements $f,g,f\boxtimes g$. Suppose that $\phi\colon C\rightarrow\mathbb{R}$ is a nonzero multiplicative vector lattice homomorphism. Using \cite[Proposition 2.20]{Az} or \cite[Corollary 3.13]{BusSch} (first equality), we obtain \begin{align*} \phi(f\boxtimes g)=\phi(f)\boxtimes\phi(g)=\bigl(\phi(f)\phi(g)\bigr)^{1/2}=\bigl(\phi(fg)\bigr)^{1/2}. 
\end{align*} Therefore, \begin{align*} \phi\bigl((f\boxtimes g)^{2}\bigr)=\bigl(\phi(f\boxtimes g)\bigr)^{2}=\phi(fg). \end{align*} Since the set of all nonzero multiplicative vector lattice homomorphisms from $C$ into $\mathbb{R}$ separates the points of $C$ (see \cite[Corollary 2.7]{BusdPvR}), we have $(f\boxtimes g)^{2}=fg$. \end{proof} We conclude this section with some basic terminology regarding Archimedean vector lattices over $\mathbb{K}$. Given an Archimedean vector lattice $E$ over $\mathbb{C}$, we define the complex conjugate \begin{align*} \overline{f+ig}=f-ig\ (f,g\in E_{\rho}). \end{align*} Since every Archimedean vector lattice over $\mathbb{R}$ canonically embeds into an Archimedean vector lattice over $\mathbb{C}$ (see \cite[Theorem 3.3]{BusSch2}), the previous definition also makes sense in Archimedean vector lattices over $\mathbb{R}$ (via such an embedding). If $E$ is an Archimedean vector lattice over $\mathbb{K}$ then the familiar identities $\text{Re}(f)=2^{-1}(f+\bar{f})$ and $\text{Im}(f)=(2i)^{-1}(f-\bar{f})$ are valid for every $f\in E$. Let $V$ be a vector space over $\mathbb{K}$, and suppose that $F$ is an Archimedean vector lattice over $\mathbb{K}$. A map $T\colon V\times V\rightarrow F$ is called \textit{positive semidefinite} if $T(v,v)\geq 0$ for every $v\in V$. If $T(u,v)=\overline{T(v,u)}$ for each $u,v\in V$ then $T$ is said to be \textit{conjugate symmetric}. We say that $T$ is \textit{sesquilinear} if \begin{itemize} \item[(1)] $T(\alpha u_{1}+\beta u_{2},v)=\alpha T(u_{1},v)+\beta T(u_{2},v)\ (\alpha,\beta\in\mathbb{K}, u_{1},u_{2},v\in V)$, and \item[(2)] $T(u,\alpha v_{1}+\beta v_{2})=\bar{\alpha}T(u,v_{1})+\bar{\beta}T(u,v_{2})\ (\alpha,\beta\in\mathbb{K}, u,v_{1},v_{2}\in V)$. 
\end{itemize} \section{A Cauchy-Schwarz Inequality}\label{S:CSI} We prove a Cauchy-Schwarz inequality for sesquilinear maps with a geometric mean closed Archimedean vector lattice over $\mathbb{K}$ as codomain (Theorem~\ref{T:CSI}) and with a geometric mean closed semiprime Archimedean $f$-algebra over $\mathbb{K}$ as codomain (Corollary~\ref{C:CSI}). A necessary and sufficient condition for equality in Theorem~\ref{T:CSI} and Corollary~\ref{C:CSI} is given via an explicit formula for the difference between the two sides of the Cauchy-Schwarz inequality. However, Example~\ref{E:noclaeqco} illustrates that the condition for equality in the classical Cauchy-Schwarz inequality fails in Theorem~\ref{T:CSI} and Corollary~\ref{C:CSI}. We proceed to the main result of this section. \begin{theorem}\label{T:CSI} \textnormal{\textbf{(Cauchy-Schwarz Inequality)}} Let $V$ be a vector space over $\mathbb{K}$, and suppose that $F$ is a geometric mean closed Archimedean vector lattice over $\mathbb{K}$. If a map $T\colon V\times V\rightarrow F$ is positive semidefinite, conjugate symmetric, and sesquilinear then \begin{itemize} \item[(1)] $\underset{z\in\mathbb{K}\setminus\{ 0\}}{\inf}\{|z|^{-1}T(zu-v,zu-v)\}$ exists in $F\ (u,v\in V)$, \item[(2)] $|T(u,v)|=T(u,u)\boxtimes T(v,v)-2^{-1}\underset{z\in\mathbb{K}\setminus\{ 0\}}{\inf}\{|z|^{-1}T(zu-v,zu-v)\}\ (u,v\in V)$, \item[(3)] $|T(u,v)|\leq T(u,u)\boxtimes T(v,v)\ (u,v\in V)$, and \item[(4)] $|T(u,v)|=T(u,u)\boxtimes T(v,v)$ if and only if $\underset{z\in\mathbb{K}\setminus\{ 0\}}{\inf}\{|z|^{-1}T(zu-v,zu-v)\}=0$. \end{itemize} \end{theorem} \begin{proof} Suppose that $T\colon V\times V\rightarrow F$ is a positive semidefinite, conjugate symmetric, sesquilinear map. Consider $T$ as a map from $V\times V$ to $F^{\delta}$. Let $u,v\in V$, and fix $\theta\in(0,\infty)$.
Using the sesquilinearity of $T$ and the identity $\text{Re}(f)=2^{-1}(f+\bar{f})$ for $f\in F^{\delta}$, we obtain \begin{align*} T(\theta u-v,u-\theta^{-1}v)&=T(\theta u,u)+T(v,\theta^{-1}v)-T(\theta u,\theta^{-1}v)-T(v,u)\\ &=\theta T(u,u)+\theta^{-1}T(v,v)-2\text{Re}\bigl(T(u,v)\bigr). \end{align*} Therefore, \begin{align*} \text{Re}\bigl(T(u,v)\bigr)=2^{-1}\bigl(\theta T(u,u)+\theta^{-1}T(v,v)\bigr)-(2\theta)^{-1}T(\theta u-v,\theta u-v). \end{align*} In particular, for every $\lambda\in S=\{\lambda\in\mathbb{K}:|\lambda|=1\}$ we have \begin{align*} \text{Re}\bigl(\lambda T(u,v)\bigr)=\text{Re}\bigl(T(\lambda u,v)\bigr)=2^{-1}\bigl(\theta T(u,u)+\theta^{-1}T(v,v)\bigr)-(2\theta)^{-1}T(\theta\lambda u-v,\theta\lambda u-v). \end{align*} Thus we obtain \begin{align*} |T(u,v)|&=\underset{\lambda\in S}{\sup}\bigl\{\text{Re}\bigl(\lambda T(u,v)\bigr)\bigr\}\\ &=2^{-1}\underset{\lambda\in S}{\sup}\bigl\{\theta T(u,u)+\theta^{-1}T(v,v)-\theta^{-1}T(\theta\lambda u-v,\theta\lambda u-v)\bigr\}, \end{align*} where the suprema above are in $F^{\delta}$. Since $-\theta^{-1}T(\theta\lambda u-v,\theta\lambda u-v)\leq 0\ (\lambda\in S)$, it follows that $\underset{\lambda\in S}{\sup}\bigl\{-\theta^{-1}T(\theta\lambda u-v,\theta\lambda u-v)\bigr\}$ exists in $F^{\delta}$. Moreover, \begin{align*} \underset{\lambda\in S}{\sup}&\bigl\{\theta T(u,u)+\theta^{-1}T(v,v)-\theta^{-1}T(\theta\lambda u-v,\theta\lambda u-v)\bigr\}\\ &=\theta T(u,u)+\theta^{-1}T(v,v)+\underset{\lambda\in S}{\sup}\bigl\{-\theta^{-1}T(\theta\lambda u-v,\theta\lambda u-v)\bigr\}\\ &=\theta T(u,u)+\theta^{-1}T(v,v)-\underset{\lambda\in S}{\inf}\bigl\{ \theta^{-1}T(\theta\lambda u-v,\theta\lambda u-v)\bigr\}. \end{align*} Therefore, \begin{align*} 2^{-1}\bigl(\theta T(u,u)+\theta^{-1}T(v,v)\bigr)=|T(u,v)|+2^{-1}\underset{\lambda\in S}{\inf}\bigl\{\theta^{-1}T(\theta\lambda u-v,\theta\lambda u-v)\bigr\}. 
\end{align*} Since $\underset{\lambda\in S}{\inf}\bigl\{\theta^{-1}T(\theta\lambda u-v,\theta\lambda u-v)\bigr\}\geq 0\ (\theta\in(0,\infty))$, it follows that \begin{align*} \underset{\theta\in(0,\infty)}{\inf}\Bigl\{\underset{\lambda\in S}{\inf}\{\theta^{-1}T(\theta\lambda u-v,\theta\lambda u-v)\}\Bigr\} \end{align*} exists in $F^{\delta}$. Then with the infima still in $F^{\delta}$, \begin{align*} T(u,u)\boxtimes T(v,v)&=\underset{\theta\in(0,\infty)}{\inf}\Bigl\{|T(u,v)|+2^{-1}\underset{\lambda\in S}{\inf}\{\theta^{-1}T(\theta\lambda u-v,\theta\lambda u-v)\}\Bigr\}\\ &=|T(u,v)|+2^{-1}\underset{\theta\in(0,\infty)}{\inf}\Bigl\{\underset{\lambda\in S}{\inf}\{\theta^{-1}T(\theta\lambda u-v,\theta\lambda u-v)\}\Bigr\}\\ &=|T(u,v)|+2^{-1}\underset{\lambda\in S,\theta\in(0,\infty)}{\inf}\{\theta^{-1}T(\theta\lambda u-v,\theta\lambda u-v)\}\\ &=|T(u,v)|+2^{-1}\underset{z\in\mathbb{K}\setminus\{ 0\}}{\inf}\{|z|^{-1}T(zu-v,zu-v)\}. \end{align*} Thus, now in $F$, the equality \begin{align*} 2^{-1}\underset{z\in\mathbb{K}\setminus\{ 0\}}{\inf}\{|z|^{-1}T(zu-v,zu-v)\}=\bigl(T(u,u)\boxtimes T(v,v)-|T(u,v)|\bigr) \end{align*} establishes statement (1) of the theorem and implies (in $F$) \begin{align*} |T(u,v)|=T(u,u)\boxtimes T(v,v)-2^{-1}\underset{z\in\mathbb{K}\setminus\{ 0\}}{\inf}\{|z|^{-1}T(zu-v,zu-v)\}. \end{align*} Therefore, part (2) is verified. Statements (3) and (4) immediately follow from (2). \end{proof} As a consequence of Theorem~\ref{T:CSI} and Proposition~\ref{P:gmfa}, we obtain the following. \begin{corollary}\label{C:CSI} Let $V$ be a vector space over $\mathbb{K}$, and suppose that $A$ is a geometric mean closed semiprime Archimedean $f$-algebra over $\mathbb{K}$.
If $T\colon V\times V\rightarrow A$ is a positive semidefinite, conjugate symmetric, sesquilinear map then \begin{itemize} \item[(1)] $\underset{z\in\mathbb{K}\setminus\{ 0\}}{\inf}\{|z|^{-1}T(zu-v,zu-v)\}$ exists in $A\ (u,v\in V)$, \item[(2)] $|T(u,v)|^{2}=\Bigl(\bigl(T(u,u)T(v,v)\bigr)^{1/2}-2^{-1}\underset{z\in\mathbb{K}\setminus\{ 0\}}{\inf}\{|z|^{-1}T(zu-v,zu-v)\}\Bigr)^{2}\ (u,v\in V)$, \item[(3)] $|T(u,v)|^{2}\leq T(u,u)T(v,v)\ (u,v\in V)$, and \item[(4)] $|T(u,v)|^{2}=T(u,u)T(v,v)$ if and only if $\underset{z\in\mathbb{K}\setminus\{ 0\}}{\inf}\{|z|^{-1}T(zu-v,zu-v)\}=0$. \end{itemize} \end{corollary} Statement (3) in Corollary~\ref{C:CSI} follows from part (2) of the corollary as well as \cite[Proposition 2(iii)]{BeuHui}. For $\mathbb{K}=\mathbb{R}$, this statement is contained in \cite[Corollary 4]{BusvR4}. Similarly, the given statement is contained in the complex analogue of \cite[Corollary 4]{BusvR4} (mentioned in the introduction) when $\mathbb{K}=\mathbb{C}$. Part (2) of Corollary~\ref{C:CSI}, however, depends on the uniqueness of square roots, which implies the semiprime property in Archimedean almost $f$-algebras. Indeed, let $A$ be an Archimedean almost $f$-algebra in which $a^{2}=b^{2}$ implies $a=b\ (a,b\in A^{+})$. Let $a$ be a nilpotent element of $A$. Then $a^{3}=0$ (see \cite[Theorem 3.2]{BerHui2}), and thus $a^{4}=(a^{2})^{2}=0$. Since squares in an Archimedean almost $f$-algebra are positive and $a^{2}=|a|^{2}$ (as $a^{+}a^{-}=0$), two applications of the uniqueness of square roots yield first $a^{2}=0$ and then $|a|=0$, that is, $a=0$. Finally, every semiprime almost $f$-algebra is automatically an $f$-algebra (\cite[Theorem 1.11(i)]{BerHui2}). The special case where $A=\mathbb{K}$ in the inequality of Corollary~\ref{C:CSI} is the classical Cauchy-Schwarz inequality. Thus we know in this special case that $|T(u,v)|^{2}=T(u,u)T(v,v)$ if and only if there exist $\alpha,\beta\in\mathbb{K}$, not both zero, such that $T(\beta u+\alpha v,\beta u+\alpha v)=0$ (see, e.g., \cite[page 3]{Con}). This criterion holds for neither Theorem~\ref{T:CSI} nor Corollary~\ref{C:CSI}.
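Corollary~\ref{C:CSI} can be made concrete for $\mathbb{K}=\mathbb{R}$ in the lattice $A=\mathbb{R}^{2}$ with coordinatewise operations, where $f\boxtimes g=(fg)^{1/2}$ coordinatewise by Proposition~\ref{P:gmfa} and infima are computed coordinatewise. The sketch below is an illustration only: the map $T$, built from one weighted dot product per coordinate, is our own choice and does not appear in the text, and the infimum over $z$ is approximated on a finite grid.

```python
import math

# Sanity check of Theorem T:CSI / Corollary C:CSI for K = R in F = R^2 with
# coordinatewise order, so that f boxtimes g = sqrt(f*g) coordinatewise
# (Proposition P:gmfa).  The bilinear map T below is an illustrative choice.

def T(u, v, weights):
    # each row of weights defines one positive semidefinite bilinear form
    return tuple(sum(w * x * y for w, x, y in zip(row, u, v)) for row in weights)

weights = ((1.0, 2.0, 0.5), (3.0, 0.1, 1.0))   # nonnegative => T(v, v) >= 0
u, v = (1.0, -2.0, 0.7), (0.3, 1.5, -1.1)

Tuu, Tvv, Tuv = T(u, u, weights), T(v, v, weights), T(u, v, weights)
box = tuple(math.sqrt(a * b) for a, b in zip(Tuu, Tvv))  # T(u,u) boxtimes T(v,v)

# part (3): |T(u,v)| <= T(u,u) boxtimes T(v,v), coordinatewise
assert all(abs(c) <= bx + 1e-12 for c, bx in zip(Tuv, box))

# part (2): the gap equals 2^{-1} inf_{z != 0} |z|^{-1} T(zu - v, zu - v),
# the infimum in R^2 being coordinatewise; approximate it on a grid of z
def q(z):
    w = tuple(z * x - y for x, y in zip(u, v))
    return tuple(t / abs(z) for t in T(w, w, weights))

grid = [s * 0.001 * k for k in range(1, 20001) for s in (1, -1)]
vals = [q(z) for z in grid]
inf_approx = tuple(min(val[j] for val in vals) for j in range(2))
gap = tuple(bx - abs(c) for bx, c in zip(box, Tuv))
for g, i in zip(gap, inf_approx):
    assert abs(g - 0.5 * i) < 1e-3
```

For each coordinate the infimum reduces to $\inf_{z\neq 0}|z|^{-1}(z^{2}a-2zc+b)=2\sqrt{ab}-2|c|$, which is what the grid approximation recovers.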
\begin{example}\label{E:noclaeqco} Define $T\colon \mathbb{K}^{2}\times\mathbb{K}^{2}\rightarrow\mathbb{K}^{2}$ by \begin{align*} T\bigl((z_{1},z_{2}),(w_{1},w_{2})\bigr)=(z_{1}\bar{w_{1}},z_{2}\bar{w_{2}})\ \bigl((z_{1},z_{2}),(w_{1},w_{2})\in\mathbb{K}^{2}\bigr). \end{align*} Since $\mathbb{C}^{2}=\mathbb{R}^{2}+i\mathbb{R}^{2}$, we see that $\mathbb{K}^{2}$ is a geometric mean closed semiprime Archimedean $f$-algebra over $\mathbb{K}$ with respect to the coordinatewise vector space operations, coordinatewise ordering, and coordinatewise multiplication. Also, $T$ is a positive semidefinite, conjugate symmetric, sesquilinear map. Note that \[ |T\bigl((1,0),(0,1)\bigr)|^{2}=T\bigl((1,0),(1,0)\bigr)T\bigl((0,1),(0,1)\bigr). \] Suppose there exist $\alpha,\beta\in\mathbb{K}$, not both zero, for which \[ T\bigl(\beta(1,0)+\alpha(0,1),\beta(1,0)+\alpha(0,1)\bigr)=0. \] Then $(|\beta|^{2},|\alpha|^{2})=(0,0)$, a contradiction. \end{example} \section{A H\"older Inequality}\label{S:HI} We prove a H\"older inequality for positive linear maps between weighted geometric mean closed Archimedean $\Phi$-algebras over $\mathbb{K}$ in this section, extending \cite[Theorem 5, Corollary 6]{Bo2} by Boulabiar and \cite[Theorem 3.12]{Tou} by Toumi. We begin with some definitions. Let $A$ be an Archimedean $f$-algebra over $\mathbb{K}$, and suppose that $n\in\mathbb{N}$. As usual, we write $\prod\limits_{k=1}^{n}a_{k}=a_{1}\cdots a_{n}$ for $a_{1},\dots,a_{n}\in A$. For every $r_{1},\dots,r_{n}\in(0,1)$ such that $\sum\limits_{k=1}^{n}r_{k}=1$, we define a \textit{weighted geometric mean} $\gamma_{r_{1},\dots,r_{n}}:\mathbb{R}^{n}\rightarrow\mathbb{R}$ by \begin{align*} \gamma_{r_{1},\dots,r_{n}}(x_{1},\dots,x_{n})=\prod\limits_{k=1}^{n}|x_{k}|^{r_{k}}\ (x_{1},\dots,x_{n}\in\mathbb{R}). \end{align*} The weighted geometric means are concave on $(\mathbb{R}^{+})^{n}$ as well as continuous and positively homogeneous on $\mathbb{R}^{n}$. 
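The stated concavity and positive homogeneity of the weighted geometric means are classical; as a quick numerical illustration (not part of the text), both can be checked for one choice of weights by sampling random points in the positive orthant.

```python
import math
import random

# Numerical illustration of two stated properties of the weighted geometric
# mean gamma_{r_1,...,r_n}(x) = prod |x_k|^{r_k}: concavity on the positive
# orthant and positive homogeneity.  Weights and sample ranges are arbitrary.

def gamma(r, x):
    return math.prod(abs(xi) ** ri for ri, xi in zip(r, x))

random.seed(0)
r = (0.2, 0.5, 0.3)                         # weights in (0,1) summing to 1
for _ in range(1000):
    x = [random.uniform(0.01, 10) for _ in range(3)]
    y = [random.uniform(0.01, 10) for _ in range(3)]
    t = random.random()
    mix = [t * xi + (1 - t) * yi for xi, yi in zip(x, y)]
    # concavity: gamma(t x + (1-t) y) >= t gamma(x) + (1-t) gamma(y)
    assert gamma(r, mix) >= t * gamma(r, x) + (1 - t) * gamma(r, y) - 1e-9
    # positive homogeneity: gamma(c x) = c gamma(x) for c > 0
    c = random.uniform(0.1, 5)
    assert abs(gamma(r, [c * xi for xi in x]) - c * gamma(r, x)) < 1e-9
```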
Moreover, for each $r_{1},\dots,r_{n}\in(0,1)$ with $\sum\limits_{k=1}^{n}r_{k}=1$, it follows from \cite[Lemma 3.6(iii)]{BusSch} that \begin{align*} \gamma_{r_{1},\dots,r_{n}}(x_{1},\dots,x_{n})=\inf\Bigl\{\sum\limits_{k=1}^{n}r_{k}\theta_{k}x_{k}:\theta_{k}\in(0,\infty),\ \prod\limits_{k=1}^{n}\theta_{k}^{r_{k}}=1\Bigr\}\ (x_{1},\dots,x_{n}\in\mathbb{R}^{+}). \end{align*} An Archimedean vector lattice $E$ over $\mathbb{K}$ is said to be \textit{weighted geometric mean closed} if $\inf\Bigl\{\sum\limits_{k=1}^{n}r_{k}\theta_{k}|f_{k}|:\theta_{k}\in(0,\infty),\ \prod\limits_{k=1}^{n}\theta_{k}^{r_{k}}=1\Bigr\}$ exists in $E$ for every $f_{1},\dots,f_{n}\in E$ and every $r_{1},\dots,r_{n}\in(0,1)$ with $\sum\limits_{k=1}^{n}r_{k}=1$. In this case, we write \begin{align*} \bigtriangleup\limits_{k=1}^{n}(f_{k},r_{k})=\inf\Bigl\{\sum\limits_{k=1}^{n}r_{k}\theta_{k}|f_{k}|:\theta_{k}\in(0,\infty),\ \prod\limits_{k=1}^{n}\theta_{k}^{r_{k}}=1\Bigr\}\ (f_{1},\dots,f_{n}\in E). \end{align*} Let $(X,\mathcal{M},\mu)$ be a measure space, and let $p\in(1,\infty)$. It follows from \cite[(HI$_{1}$)]{Mali} by Maligranda that $|f|^{1/p}|g|^{1-1/p}\in L_{1}(X,\mu)$ for $f,g\in L_{1}(X,\mu)$, and \begin{align*} ||\ |f|^{1/p}|g|^{1-1/p}\ ||_{1}\leq||f||_{1}^{1/p}||g||_{1}^{1-1/p}. \end{align*} Following Maligranda's proof, we redevelop and extend \cite[(HI$_{1}$)]{Mali} to a multivariate version in the setting of positive operators between vector lattices. \begin{proposition}\label{P:Mali&Me} Let $E$ and $F$ be weighted geometric mean closed Archimedean vector lattices over $\mathbb{K}$, and suppose that $r_{1},\dots,r_{n}\in(0,1)$ satisfy $\sum\limits_{k=1}^{n}r_{k}=1$. \begin{itemize} \item[(1)] For each positive linear map $T\colon E\rightarrow F$, \begin{align*} T\Bigl(\bigtriangleup\limits_{k=1}^{n}(f_{k},r_{k})\Bigr)\leq\bigtriangleup\limits_{k=1}^{n}\bigl(T(|f_{k}|),r_{k}\bigr)\ (f_{1},\dots,f_{n}\in E). 
\end{align*} \item[(2)] If $T\colon E\rightarrow F$ is a linear map then $T$ is a vector lattice homomorphism if and only if \begin{align*} T\Bigl(\bigtriangleup\limits_{k=1}^{n}(f_{k},r_{k})\Bigr)=\bigtriangleup\limits_{k=1}^{n}\bigl(T(f_{k}),r_{k}\bigr)\ (f_{1},\dots,f_{n}\in E). \end{align*} \item[(3)] If $G$ is a (not necessarily weighted geometric mean closed) vector sublattice of $E$, $T\colon G\rightarrow F$ is a vector lattice homomorphism, and $f_{1},\dots,f_{n},\bigtriangleup\limits_{k=1}^{n}(f_{k},r_{k})\in G$ then \[ T\Bigl(\bigtriangleup\limits_{k=1}^{n}(f_{k},r_{k})\Bigr)=\bigtriangleup\limits_{k=1}^{n}\bigl(T(f_{k}),r_{k}\bigr). \] \end{itemize} \end{proposition} \begin{proof} (1) Assume $T\colon E\rightarrow F$ is a positive linear map. Let $f_{1},\dots,f_{n}\in E$, and suppose $\theta_{1},\dots,\theta_{n}\in(0,\infty)$ are such that $\prod\limits_{k=1}^{n}\theta_{k}^{r_{k}}=1$. From the positivity and linearity of $T$ we have \begin{align*} T\Bigl(\bigtriangleup\limits_{k=1}^{n}(f_{k},r_{k})\Bigr)\leq T\Bigl(\sum\limits_{k=1}^{n}r_{k}\theta_{k}|f_{k}|\Bigr)=\sum\limits_{k=1}^{n}r_{k}\theta_{k}T(|f_{k}|). \end{align*} Then \begin{align*} T\Bigl(\bigtriangleup\limits_{k=1}^{n}(f_{k},r_{k})\Bigr)\leq\inf\Bigl\{\sum\limits_{k=1}^{n}r_{k}\theta_{k}T(|f_{k}|):\theta_{k}\in(0,\infty),\ \prod\limits_{k=1}^{n}\theta_{k}^{r_{k}}=1\Bigr\}. \end{align*} (2) Suppose $T\colon E\rightarrow F$ is a vector lattice homomorphism. It follows from \cite[Theorem 3.7(2)]{BusSch} that $\gamma_{r_{1},\dots,r_{n}}(f_{1},\dots,f_{n})$, which is defined via functional calculus \cite[Definition 3.1]{BusdPvR}, exists in $E$ for every $f_{1},\dots,f_{n}\in E^{+}$ and \[ \gamma_{r_{1},\dots,r_{n}}(f_{1},\dots,f_{n})=\bigtriangleup\limits_{k=1}^{n}(f_{k},r_{k})\ (f_{1},\dots,f_{n}\in E^{+}). 
\] It is readily checked using \cite[Definition 3.1]{BusdPvR} and the identity \[ \gamma_{r_{1},\dots,r_{n}}(x_{1},\dots,x_{n})=\gamma_{r_{1},\dots,r_{n}}(|x_{1}|,\dots,|x_{n}|)\ (x_{1},\dots,x_{n}\in\mathbb{R}) \] that $\gamma_{r_{1},\dots,r_{n}}(f_{1},\dots,f_{n})$ exists in $E$ for all $f_{1},\dots,f_{n}\in E$ and \[ \gamma_{r_{1},\dots,r_{n}}(f_{1},\dots,f_{n})=\gamma_{r_{1},\dots,r_{n}}(|f_{1}|,\dots,|f_{n}|)\ (f_{1},\dots,f_{n}\in E). \] It follows that \[ \gamma_{r_{1},\dots,r_{n}}(f_{1},\dots,f_{n})=\bigtriangleup\limits_{k=1}^{n}(f_{k},r_{k})\ (f_{1},\dots,f_{n}\in E). \] By \cite[Theorem 3.11]{BusSch} (second equality), we have for all $f_{1},\dots,f_{n}\in E$, \begin{align*} T\bigl(\bigtriangleup\limits_{k=1}^{n}(f_{k},r_{k})\bigr)&=T\bigl(\gamma_{r_{1},\dots,r_{n}}(f_{1},\dots,f_{n})\bigr)\\ &=\gamma_{r_{1},\dots,r_{n}}\bigl(T(f_{1}),\dots,T(f_{n})\bigr)=\bigtriangleup\limits_{k=1}^{n}\bigl(T(f_{k}),r_{k}\bigr). \end{align*} On the other hand, assume $T\colon E\rightarrow F$ is a linear map and \begin{align*} T\Bigl(\bigtriangleup\limits_{k=1}^{n}(f_{k},r_{k})\Bigr)=\bigtriangleup\limits_{k=1}^{n}\bigl(T(f_{k}),r_{k}\bigr)\ (f_{1},\dots,f_{n}\in E). \end{align*} From \cite[Theorem 3.11]{BusSch}, we conclude that \begin{align*} T\bigl(\gamma_{r_{1},\dots,r_{n}}(f_{1},\dots,f_{n})\bigr)=\gamma_{r_{1},\dots,r_{n}}\bigl(T(f_{1}),\dots,T(f_{n})\bigr)\ (f_{1},\dots,f_{n}\in E), \end{align*} and (since $\gamma_{r_{1},\dots,r_{n}}(x,\dots,x)=|x|\ (x\in\mathbb{R})$) that $T$ is a vector lattice homomorphism. (3) Let $G$ be a vector sublattice of $E$, and let $T:G\rightarrow F$ be a vector lattice homomorphism. Let $\mathcal{D}$ be the collection of all weighted geometric means. That is, let \[ \mathcal{D}=\left\{\gamma_{r_{1},\dots,r_{n}}:r_{1},\dots,r_{n}\in(0,1)\ \text{and}\ \sum\limits_{k=1}^{n}r_{k}=1\right\}. \] It follows from \cite[Theorem 3.7(2)]{BusSch} that $F$ is $\mathcal{D}$-complete (see \cite[Definition 3.2]{BusSch}). 
By \cite[Theorem 3.17]{BusSch}, $T$ uniquely extends to a vector lattice homomorphism $T^{\mathcal{D}}:G^{\mathcal{D}}\rightarrow F$, where $G^{\mathcal{D}}$ denotes the $\mathcal{D}$-completion of $G$ (see \cite[Definition 3.10]{BusSch}). Note that $G^{\mathcal{D}}$ is $\mathcal{D}$-complete by definition. Thus \cite[Theorem 3.7(2)]{BusSch} implies that $G^{\mathcal{D}}$ is weighted geometric mean closed. An appeal to (2) now verifies (3). \end{proof} Let $A$ be an Archimedean $\Phi$-algebra over $\mathbb{K}$. If $A$ is uniformly complete then $a^{1/n}$ exists in $A$ for every $a\in A^{+}$ and every $n\in\mathbb{N}$ (see \cite[Corollary 6]{BeuHui}). It follows that $a^{q}$ exists in $A$ for every $a\in A^{+}$ and every $q\in\mathbb{Q}\cap(0,\infty)$. The assumption of uniform completeness in \cite[Corollary 6]{BeuHui} can be weakened, which is the content of our next lemma. \begin{lemma}\label{L:a^q} Let $A$ be a weighted geometric mean closed Archimedean $\Phi$-algebra over $\mathbb{K}$. Then $a^{q}$ exists in $A$ for all $a\in A^{+}$ and $q\in\mathbb{Q}\cap(0,\infty)$. Furthermore, if $q_{1},\dots,q_{n}\in\mathbb{Q}\cap(0,1)$ are such that $\sum\limits_{k=1}^{n}q_{k}=1$ then for every $a_{1},\dots,a_{n}\in A^{+}$, \begin{align*} \prod\limits_{k=1}^{n}a_{k}^{q_{k}}=\bigtriangleup\limits_{k=1}^{n}(a_{k},q_{k})\in A. \end{align*} \end{lemma} \begin{proof} Denote the unit element of $A$ by $e$, and let $a\in A^{+}$. In order to prove that $a^{q}$ exists in $A$ for every $q\in\mathbb{Q}\cap(0,\infty)$, it suffices to verify that $a^{1/n}$ exists in $A$ for all $n\in\mathbb{N}$. To this end, let $n\in\mathbb{N}\setminus\{1\}$. Let $C$ be the Archimedean $\Phi$-subalgebra of $A_{\rho}$ generated by $a$ and the element \[ b=\inf\{n^{-1}\theta_{1}a+(1-n^{-1})\theta_{2} e:\theta_{1},\theta_{2}\in(0,\infty),\ \theta_{1}^{1/n}\theta_{2}^{1-1/n}=1\}\in A^{+}.
\] Suppose that $\omega\colon C\rightarrow\mathbb{R}$ is a nonzero multiplicative vector lattice homomorphism. It follows that $\omega(e)=1$. Using Proposition~\ref{P:Mali&Me}(3) (third equality), we obtain \begin{align*} \omega(b^{n})&=\omega(b)^{n}\\ &=\bigl(\omega(\inf\{n^{-1}\theta_{1}a+(1-n^{-1})\theta_{2} e:\theta_{1},\theta_{2}\in(0,\infty),\ \theta_{1}^{1/n}\theta_{2}^{1-1/n}=1\})\bigr)^{n}\\ &=\bigl(\inf\{n^{-1}\theta_{1}\omega(a)+(1-n^{-1})\theta_{2} :\theta_{1},\theta_{2}\in(0,\infty),\ \theta_{1}^{1/n}\theta_{2}^{1-1/n}=1\}\bigr)^{n}\\ &=\Bigl(\gamma_{\frac{1}{n},\frac{n-1}{n}}\bigl(\omega(a),1\bigr)\Bigr)^{n}=\Bigl(\bigl(\omega(a)\bigr)^{1/n}\Bigr)^{n}=\omega(a). \end{align*} Since the set of all nonzero multiplicative vector lattice homomorphisms $\omega\colon C\rightarrow\mathbb{R}$ separates the points of $C$ (see \cite[Corollary 2.7]{BusdPvR}), we have in $A_{\rho}$ that $b^{n}=a$. But then $a^{1/n}=b$. Finally, a similar proof verifies that \begin{align*} \prod\limits_{k=1}^{n}a_{k}^{q_{k}}=\bigtriangleup\limits_{k=1}^{n}(a_{k},q_{k}) \end{align*} for every $a_{1},\dots,a_{n}\in A^{+}$ and all $q_{1},\dots,q_{n}\in\mathbb{Q}\cap(0,1)$ such that $\sum_{k=1}^{n}q_{k}=1$. \end{proof} We next use the proof of Lemma~\ref{L:a^q} as a guide to define strictly positive irrational powers of positive elements in weighted geometric mean closed Archimedean $\Phi$-algebras in an intrinsic manner that avoids representation theory, which relies on more than the countable Axiom of Choice. For $r\in(0,\infty)$, define \begin{align*} \lfloor r\rfloor=\max\{n\in\mathbb{N}\cup\{0\}:n\leq r\}\ \text{and}\ \tilde{r}=r-\lfloor r\rfloor. \end{align*} \begin{definition}\label{D:a^r} Suppose that $A$ is a weighted geometric mean closed Archimedean $\Phi$-algebra over $\mathbb{K}$. Let $e$ be the unit element of $A$.
For $a\in A^{+}$ and $r\in(0,\infty)$, define \begin{align*} a^{r}=a^{\lfloor r\rfloor}\inf\{\tilde{r}\theta_{1}a+(1-\tilde{r})\theta_{2} e:\theta_{1},\theta_{2}\in(0,\infty),\ \theta_{1}^{\tilde{r}}\theta_{2}^{1-\tilde{r}}=1\}, \end{align*} where $a^{\lfloor r\rfloor}$ is taken to equal $e$ in the case where $\lfloor r\rfloor=0$. \end{definition} By Lemma~\ref{L:a^q}, the above definition of strictly positive real exponents extends the natural definition of strictly positive rational exponents previously discussed. We next give an easy corollary of Proposition~\ref{P:Mali&Me}(3). \begin{corollary}\label{C:T(a^r)=T(a)^r} Let $A$ and $B$ be weighted geometric mean closed Archimedean $\Phi$-algebras over $\mathbb{K}$ with unit elements $e$ and $e'$, respectively. Suppose $C$ is a $\Phi$-subalgebra of $A$ and that $T\colon C\rightarrow B$ is a multiplicative vector lattice homomorphism such that $T(e)=e'$. Let $a\in A^{+}$ and $r\in(0,\infty)$. If $a,a^{r}\in C$ then $T(a^{r})=\bigl(T(a)\bigr)^{r}$. \end{corollary} The following lemma verifies some familiar arithmetical rules (needed for Theorems~\ref{T:HI} and \ref{T:MI}) for positive real exponents in weighted geometric mean closed Archimedean $\Phi$-algebras over $\mathbb{K}$. \begin{lemma}\label{L:powerrules} Let $p,q\in(0,\infty)$, and let $A$ be a weighted geometric mean closed Archimedean $\Phi$-algebra over $\mathbb{K}$. For each $a\in A^{+}$, the following hold. \begin{itemize} \item[(1)] $(a^{p})^{q}=a^{pq}$. \item[(2)] $a^{p}a^{q}=a^{p+q}$. \end{itemize} \end{lemma} \begin{proof} We prove (1), leaving the similar proof of (2) to the reader. To this end, let $a\in A^{+}$. Let $C$ be the real $\Phi$-subalgebra of $A_{\rho}$ generated by $a, a^{\tilde{p}}, (a^{p})^{\tilde{q}}$, and $a^{\widetilde{pq}}$, and note that $a, a^{p}, (a^{p})^{q}, a^{pq}\in C$. If $\omega\colon C\rightarrow\mathbb{R}$ is a nonzero multiplicative vector lattice homomorphism then $\omega$ is surjective and thus $\omega(e)=1$.
Using Corollary~\ref{C:T(a^r)=T(a)^r}, we obtain \begin{align*} \omega\bigl((a^{p})^{q}\bigr)&=\bigl(\omega(a^{p})\bigr)^{q}=\bigl(\omega(a)\bigr)^{pq}=\omega(a^{pq}). \end{align*} Since the nonzero multiplicative vector lattice homomorphisms separate the points of $C$ (see \cite[Corollary 2.7]{BusdPvR}), we conclude that $(a^{p})^{q}=a^{pq}$. \end{proof} In light of Definition~\ref{D:a^r}, the second part of Lemma~\ref{L:a^q} can now be improved to include irrational exponents. The proof of Lemma~\ref{L:a^r} uses real-valued multiplicative vector lattice homomorphisms, similar to what is found in the proofs of Lemmas \ref{L:a^q} and \ref{L:powerrules}. Therefore, the proof is omitted. \begin{lemma}\label{L:a^r} Let $A$ be a weighted geometric mean closed Archimedean $\Phi$-algebra over $\mathbb{K}$. If $r_{1},\dots,r_{n}\in(0,1)$ are such that $\sum\limits_{k=1}^{n}r_{k}=1$ then for every $a_{1},\dots,a_{n}\in A^{+}$, \begin{align*} \prod\limits_{k=1}^{n}a_{k}^{r_{k}}=\bigtriangleup\limits_{k=1}^{n}(a_{k},r_{k}). \end{align*} \end{lemma} We proceed with the main theorem of this section. \begin{theorem}\label{T:HI} \textnormal{\textbf{(H\"older Inequality)}} Let $p_{1},\dots,p_{n}\in(1,\infty)$ with $\sum\limits_{k=1}^{n}p_{k}^{-1}=1$. Assume $A$ is a weighted geometric mean closed Archimedean $\Phi$-algebra over $\mathbb{K}$. \begin{itemize} \item[(1)] If $B$ is also a weighted geometric mean closed Archimedean $\Phi$-algebra over $\mathbb{K}$ and $T\colon A\rightarrow B$ is a positive linear map then \begin{align*} T\Bigl(\prod\limits_{k=1}^{n}|a_{k}|\Bigr)\leq\prod\limits_{k=1}^{n}\bigl(T(|a_{k}|^{p_{k}})\bigr)^{1/p_{k}}\ (a_{1},\dots,a_{n}\in A). 
\end{align*} \item[(2)] If $B$ is a weighted geometric mean closed Archimedean vector lattice over $\mathbb{K}$ and $T\colon A\rightarrow B$ is a positive linear map then \begin{align*} T\Bigl(\prod\limits_{k=1}^{n}|a_{k}|\Bigr)\leq\bigtriangleup\limits_{k=1}^{n}\bigl(T(|a_{k}|^{p_{k}}),1/p_{k}\bigr)\ (a_{1},\dots,a_{n}\in A). \end{align*} \end{itemize} \end{theorem} \begin{proof} We only prove (1) since the proof of (2) is similar. To this end, let $B$ be a weighted geometric mean closed Archimedean $\Phi$-algebra over $\mathbb{K}$, and suppose $T\colon A\rightarrow B$ is a positive linear map. Using Lemma~\ref{L:powerrules}(1) (first equality), Lemma~\ref{L:a^r} (second equality and last equality), and Proposition~\ref{P:Mali&Me}(1) (for the inequality), we have \begin{align*} T\Bigl(\prod\limits_{k=1}^{n}|a_{k}|\Bigr)&=T\Bigl(\prod\limits_{k=1}^{n}(|a_{k}|^{p_{k}})^{1/p_{k}}\Bigr)=T\Bigl(\bigtriangleup\limits_{k=1}^{n}(|a_{k}|^{p_{k}},1/p_{k})\Bigr)\\ &\leq\bigtriangleup\limits_{k=1}^{n}\bigl(T(|a_{k}|^{p_{k}}),1/p_{k}\bigr)=\prod\limits_{k=1}^{n}\bigl(T(|a_{k}|^{p_{k}})\bigr)^{1/p_{k}}. \end{align*} \end{proof} \section{A Minkowski Inequality}\label{S:MI} We employ the H\"older inequality in Theorem~\ref{T:HI}(1) to prove a Minkowski inequality in the setting of Archimedean $\Phi$-algebras over $\mathbb{K}$ in this section. \begin{theorem}\label{T:MI} \textnormal{\textbf{(Minkowski Inequality)}} Let $p\in(1,\infty)$. Suppose that $A$ and $B$ are both weighted geometric mean closed Archimedean $\Phi$-algebras over $\mathbb{K}$. For every positive linear map $T\colon A\rightarrow B$, we have \begin{align*} \Bigl(T\Bigl(\bigl|\sum\limits_{k=1}^{n}a_{k}\bigr|^{p}\Bigr)\Bigr)^{1/p}\leq\sum\limits_{k=1}^{n}\bigl(T(|a_{k}|^{p})\bigr)^{1/p}\ (a_{1},\dots,a_{n}\in A). \end{align*} \end{theorem} \begin{proof} We prove the result for $n=2$ and note that the rest of the proof follows from a standard induction argument. 
To this end, let $T\colon A\rightarrow B$ be a positive linear map, and assume that $a,b\in A$. Let $q\in(1,\infty)$ satisfy $q^{-1}+p^{-1}=1$. By Lemma~\ref{L:powerrules}(2) (first equality), Theorem~\ref{T:HI}(1) (second inequality), and Lemma~\ref{L:powerrules}(1) (third equality), \begin{align*} T(|a+b|^{p})&=T(|a+b|^{p-1}|a+b|)\\ &\leq T\bigl(|a+b|^{p-1}(|a|+|b|)\bigr)=T(|a+b|^{p-1}|a|)+T(|a+b|^{p-1}|b|)\\ &\leq T\bigl((|a+b|^{p-1})^{q}\bigr)^{1/q}T(|a|^{p})^{1/p}+T\bigl((|a+b|^{p-1})^{q}\bigr)^{1/q}T(|b|^{p})^{1/p}\\ &=T(|a+b|^{p})^{1/q}T(|a|^{p})^{1/p}+T(|a+b|^{p})^{1/q}T(|b|^{p})^{1/p}\\ &=T(|a+b|^{p})^{1/q}\bigl(T(|a|^{p})^{1/p}+T(|b|^{p})^{1/p}\bigr). \end{align*} Setting $f=T(|a+b|^{p})$ and $g=T(|a|^{p})^{1/p}+T(|b|^{p})^{1/p}$, we have $f\leq f^{1/q}g$. Next let $C$ be the $\Phi$-subalgebra of $B$ generated by $f,f^{1/p},f^{1/q}$, and $g$. Let $\omega\colon C\rightarrow\mathbb{R}$ be a nonzero multiplicative vector lattice homomorphism, so that $\omega(e)=1$, where $e$ is the unit element of $B$. Using Corollary~\ref{C:T(a^r)=T(a)^r}, we have \[ \omega(f)\leq\omega(f^{1/q}g)=\omega(f^{1/q})\omega(g)=\omega(f)^{1/q}\omega(g). \] Thus if $\omega(f)\neq 0$ then $\omega(f)^{1/p}\leq\omega(g)$. Of course, $\omega(f)^{1/p}\leq\omega(g)$ also holds in the case that $\omega(f)=0$, since $g\in B^{+}$. By Corollary~\ref{C:T(a^r)=T(a)^r} again, $\omega(f^{1/p})\leq\omega(g)$. Since the collection of all nonzero multiplicative vector lattice homomorphisms separates the points of $C$ (see \cite[Corollary 2.7]{BusdPvR}), we conclude that $f^{1/p}\leq g$. Therefore, we obtain \[ T(|a+b|^{p})^{1/p}\leq T(|a|^{p})^{1/p}+T(|b|^{p})^{1/p}. \] \end{proof}
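As a concrete illustration of Theorems~\ref{T:HI} and \ref{T:MI} (not part of the text): taking $A=\mathbb{R}^{m}$ and $B=\mathbb{R}^{d}$ with coordinatewise operations, both are weighted geometric mean closed Archimedean $\Phi$-algebras, a positive linear map is simply a matrix with nonnegative entries, and the two theorems reduce to the classical weighted H\"older and Minkowski inequalities in each output coordinate. The matrix and vectors below are arbitrary choices.

```python
import random

# Numerical illustration: A = R^m, B = R^d with coordinatewise operations,
# and T given by a matrix with nonnegative entries (a positive linear map).
# Theorem T:HI(1) with n = 2 and Theorem T:MI then reduce to the classical
# weighted Holder and Minkowski inequalities per coordinate of B.

random.seed(1)
m, d = 5, 3
M = [[random.uniform(0.0, 2.0) for _ in range(m)] for _ in range(d)]

def T(x):
    return [sum(c * xi for c, xi in zip(row, x)) for row in M]

a = [random.uniform(-3, 3) for _ in range(m)]
b = [random.uniform(-3, 3) for _ in range(m)]
p, q = 3.0, 1.5                                  # 1/p + 1/q = 1

# Holder: T(|a||b|) <= T(|a|^p)^{1/p} T(|b|^q)^{1/q}, coordinatewise
holder_lhs = T([abs(x) * abs(y) for x, y in zip(a, b)])
holder_rhs = [fa ** (1 / p) * fb ** (1 / q)
              for fa, fb in zip(T([abs(x) ** p for x in a]),
                                T([abs(y) ** q for y in b]))]
assert all(l <= r + 1e-9 for l, r in zip(holder_lhs, holder_rhs))

# Minkowski: T(|a+b|^p)^{1/p} <= T(|a|^p)^{1/p} + T(|b|^p)^{1/p}
mink_lhs = [t ** (1 / p) for t in T([abs(x + y) ** p for x, y in zip(a, b)])]
mink_rhs = [fa ** (1 / p) + fb ** (1 / p)
            for fa, fb in zip(T([abs(x) ** p for x in a]),
                              T([abs(y) ** p for y in b]))]
assert all(l <= r + 1e-9 for l, r in zip(mink_lhs, mink_rhs))
```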
https://arxiv.org/abs/2108.02528
A Determinantal Identity for the Permanent of a Rank 2 Matrix
We prove an identity relating the permanent of a rank $2$ matrix and the determinants of its Hadamard powers. When viewed in the right way, the resulting formula looks strikingly similar to an identity of Carlitz and Levine, suggesting the possibility that these are actually special cases of some more general identity (or class of identities) connecting permanents and determinants. The proof combines some basic facts from the theory of symmetric functions with an application of a famous theorem of Binet and Cauchy in linear algebra.
\section{Introduction}\label{sec:intro} Understanding the relationship between the {\em determinant} function, which maps a square matrix $A$ to \[ \mydet{A} = \sum_{\sigma \in S_n} (-1)^{|\sigma|} \prod_{i=1}^n A(i, \sigma(i)), \] and the {\em permanent} function, which maps a square matrix $B$ to \[ \perm{B} = \sum_{\sigma \in S_n}\prod_{i=1}^n B(i, \sigma(i)), \] is an important open problem in complexity theory. Despite the similarity of their forms, the determinant can be computed efficiently (due to, among other things, its predictable behavior with respect to Gaussian elimination), while the permanent is thought to be significantly harder to compute. One of the more successful approaches to relating these two functions, known as {\em geometric complexity theory}, is to look for formulas of the type \begin{equation}\label{eq:perm-det} \perm{M} = \mydet{M'}, \end{equation} where $M$ is a square matrix and $M'$ is a (typically much larger) square matrix formed from affine combinations of the entries of $M$. (The size of the matrix $M'$ that is needed for \eqref{eq:perm-det} to hold can then be related to other measures of complexity --- we refer the reader to \cite{burg} for more details.) For matrices of a specific form, however, it is sometimes possible to find a formula that expresses the permanent of a matrix as the determinant of other matrices {\em of the same size}. One notable example of this is a result of Borchardt \cite{borchardt} that was later generalized by Cayley \cite{cayley} and then further generalized by Carlitz and Levine \cite{carlitz}. For a matrix $M \in \mathbb{R}^{n \times n}$ with entries $m(i, j)$ and integer $p$, let $M_p$ be the matrix with \[ M_p(i, j) = m(i, j)^p. \] In this notation, the main result of \cite{carlitz} is the following theorem: \begin{theorem}[Carlitz--Levine]\label{thm:cl} For a rank 2 matrix $M$ with no zero entries, \begin{equation} \label{eq:them} \mydet{M_{-2}} = \mydet{M_{-1}} \perm{M_{-1}}.
\end{equation} \end{theorem} The proof in \cite{carlitz} is elementary, using little more than the definitions and some facts concerning the cycle structure of permutations. The goal of this article is to prove a formula that has an intriguingly similar form to \eqref{eq:them} but for seemingly quite different reasons. Our main result, Theorem~\ref{thm:main}, states that for matrices $M$ with rank at most $2$, \begin{equation}\label{eq:us} (n!)^2 \mydet{M_{n}} = (n^n) \mydet{M_{n-1}}\perm{M_1}. \end{equation} The proof will use a combination of tools from the theory of polynomials and a theorem of Binet and Cauchy on the minors of a product of matrices. \subsection{Motivation.}\label{sec:motivation} While the restriction to matrices of rank at most 2 may seem overly simplistic, it should be noted that the permanents of such matrices appear naturally in the context of binary operations that are symmetric in both arguments (or BOSBAs). The author came across them, for example, during an investigation of the characteristic polynomials of random matrices. More specifically, let $A, B \in \mathbb{R}^{n \times n}$ be Hermitian matrices, and consider the (random) polynomial \[ r(x) = \mydet{x I - A - Q^T B Q}, \] where $Q$ is an orthogonal matrix drawn uniformly from $\mathcal{O}_n$ (the group of $n \times n$ orthogonal matrices) with respect to the Haar measure. It is not hard to see that the value of $r(x)$ is independent of the eigenvectors of $B$ and therefore must be a symmetric function of the eigenvalues of $B$. Conjugating by $Q$, one can see that the same is true for $A$ as well. That is, when viewed as a function of the eigenvalues of $A$ (as the first argument) and the eigenvalues of $B$ (as the second argument), $r(x)$ is a BOSBA. 
Furthermore, it can be shown that the expected value of this polynomial is the permanent of a rank 2 matrix: \[ \mathbb{E}_{Q} \left\{ \mydet{x I - A - Q^T B Q} \right\} = \frac{1}{n!}\perm{ \{ x - \lambda_i(A) - \lambda_j(B) \}_{i, j=1}^n }, \] where $\lambda_i(A)$ denotes the eigenvalues of $A$ (and similarly for $B$). Expected characteristic polynomials of this type play an important role in the field of {\em finite free probability}, which has contributed to a number of recent theoretical \cite{gorin, if3} and algorithmic \cite{if4, xie_xu} advances. Since permanents are, by nature, often more difficult to work with than determinants (they are not multiplicative, for example), the hope is that a formula like \eqref{eq:us} could have uses (though, admittedly, the author has not found any yet --- see Section~\ref{sec:conclusion}). \section{Preliminaries} We will use the customary notation that $[n] = \{ 1, 2, \dots, n \}$ and that $\binom{[n]}{k}$ denotes the collection of subsets of $[n]$ of size $k$. For a permutation $\sigma$, we write $|\sigma|$ to denote the minimum number of transpositions whose product is $\sigma$, so that $(-1)^{|\sigma|}$ is the sign of $\sigma$. For a vector $\vec{x} \in \mathbb{R}^n$, a permutation $\sigma \in S_n$, and a set $S \subseteq [n]$, we will write \[ \vec{x}^S := \prod_{i \in S} x_i \quad\text{and}\quad \sigma(S) = \{ \sigma(i) : i \in S \}. \] In particular, given a matrix $M$ and sets $I, J$, we will write $M(I, J)$ to denote the submatrix of $M$ formed by the rows in $I$ and columns in $J$. \subsection{Symmetric and alternating polynomials} A polynomial $p \in \mathbb{R}[x_1, \dots, x_n]$ is said to be {\em symmetric} if \[ p(x_1, \dots, x_i, x_{i+1}, \dots, x_n) = p(x_1, \dots, x_{i+1}, x_{i}, \dots, x_n) \] and {\em alternating} if \[ p(x_1, \dots, x_i, x_{i+1}, \dots, x_n) = -p(x_1, \dots, x_{i+1}, x_{i}, \dots, x_n) \] for all transpositions $(i, i+1)$.
Since the set of transpositions generates the symmetric group, equivalent definitions are (for symmetric polynomials) \[ p(x_1, \dots, x_n) = p(x_{\pi(1)}, \dots, x_{\pi(n)}) \] and (for alternating polynomials) \begin{equation} \label{eq:alternate} p(x_1, \dots, x_n) = (-1)^{|\pi|} p(x_{\pi(1)}, \dots, x_{\pi(n)}) \end{equation} for all $\pi \in S_n$. In particular, (\ref{eq:alternate}) implies that any alternating polynomial $p$ must be $0$ whenever $x_i = x_j$ for some $i \neq j$. Examples of symmetric polynomials are the {\em elementary symmetric polynomials} \[ e_k(x_1, \dots, x_n) = \begin{cases} 1 & \text{for $k = 0$,}\\ \sum\limits_{S \in \binom{[n]}{k}} \vec{x}^S & \text{for $1 \leq k \leq n$,}\\ 0 & \text{otherwise} \end{cases} \] and the {\em power sum polynomials} \[ p_k(x_1, \dots, x_n) = \sum_{i=1}^n x_i^k. \] One example of an alternating polynomial is the {\em Vandermonde polynomial} \begin{equation} \label{eq:vand} \Delta(x_1, \dots, x_n) = \prod_{i < j} (x_j - x_i). \end{equation} Furthermore, it is easy to see that the Vandermonde polynomial is an essential part of any alternating polynomial: \begin{lemma} \label{lem:alternate} For all alternating polynomials $f(x_1, \dots, x_n)$, there exists a symmetric polynomial $t(x_1, \dots, x_n)$ such that \[ f(x_1, \dots, x_n) = \Delta(x_1, \dots, x_n)t(x_1, \dots, x_n). \] \end{lemma} \begin{proof} For distinct $y_2, \dots, y_n \in \mathbb{R}$, (\ref{eq:alternate}) implies that the univariate polynomial \[ g(x) = f(x, y_2, \dots, y_n) \in \mathbb{R}[x] \] satisfies $g(y_k) = 0$ for each $k = 2, \dots, n$. Hence $(x - y_k)$ must be a factor of $g$ and so $(x_1 - x_k)$ must be a factor of $f$. Since this is true for all $k$ and all $i$ (not just $i=1$), every polynomial of the form $(x_i - x_k)$ must be a factor of $f$, and so \begin{equation}\label{eq:decomposition} f(x_1, \dots, x_n) = \Delta(x_1, \dots, x_n)t(x_1, \dots, x_n) \end{equation} for some polynomial $t$. 
It remains to show that $t$ is symmetric. To do so, for each $\sigma \in S_n$, we can apply \eqref{eq:decomposition} to \eqref{eq:alternate} to get \[ \Delta(x_1, \dots, x_n)t(x_1, \dots, x_n) = (-1)^{|\sigma|}\Delta(x_{\sigma(1)}, \dots, x_{\sigma(n)})t(x_{\sigma(1)}, \dots, x_{\sigma(n)}). \] Since $\Delta$ is, itself, an alternating polynomial, we have \[ \Delta(x_1, \dots, x_n) = (-1)^{|\sigma|}\Delta(x_{\sigma(1)}, \dots, x_{\sigma(n)}) \] and so canceling the common factor of $\Delta(x_1, \dots, x_n)$ on both sides results in \[ t(x_1, \dots, x_n) = t(x_{\sigma(1)}, \dots, x_{\sigma(n)}) \] as needed. \end{proof} \subsection{Linear algebra} The only tool we will need from linear algebra is a famous theorem of Binet and Cauchy regarding the minors of products of matrices. The {\em $(I, J)$-minor} of a matrix $M$ is defined as \[ \minor{M}_{I, J} = \mydet{ M(I, J) }. \] The following theorem is attributed to Binet and Cauchy (independently) \cite{cb}. \begin{theorem}[Cauchy--Binet]\label{thm:cb} Let $m, n, p$ and $k$ be positive integers for which $k \leq \min\{m, n, p\}$. Then for all $m \times n$ matrices $A$ and $n \times p$ matrices $B$ and all sets $I \subseteq [m]$ and $J \subseteq [p]$ with $|I| = |J| = k$, we have \begin{equation}\label{eq:cb} \minor{ A B }_{I, J} = \sum_{K \in \binom{[n]}{k}} \minor{A}_{I, K} \minor{B}_{K, J}. \end{equation} \end{theorem} For those unfamiliar with Theorem~\ref{thm:cb}, we hope to convey some appreciation for it by pointing out that \eqref{eq:cb} simultaneously generalizes two fundamental formulas from linear algebra that (a priori) have no obvious relation to each other: the formula for matrix multiplication (the case when $k = 1$), and the product formula for determinants (the case when $m=n=p=k$). \section{The main theorem} Our approach will be to first prove (\ref{eq:us}) in a special case (Corollary~\ref{cor:main}) and to then extend that result to the full theorem.
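The Cauchy--Binet formula is easy to check numerically. The following sketch (ours, not part of the paper) verifies it for a pair of small integer matrices; using $2 \times 2$ minors keeps all arithmetic exact in Python integers, so the comparison is an exact equality rather than a floating-point one.

```python
from itertools import combinations

# Arbitrary integer test matrices: A is 4x5, B is 5x3.
A = [[1, -2, 0, 3, 1],
     [2, 1, 1, 0, -1],
     [0, 3, -1, 1, 2],
     [1, 0, 2, -2, 1]]
B = [[1, 0, 2],
     [-1, 1, 0],
     [2, 1, -1],
     [0, -2, 1],
     [1, 1, 3]]
m, n, p, k = 4, 5, 3, 2

def matmul(X, Y):
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def minor2(M, rows, cols):
    # determinant of the 2x2 submatrix with the given rows and columns
    (r1, r2), (c1, c2) = rows, cols
    return M[r1][c1] * M[r2][c2] - M[r1][c2] * M[r2][c1]

AB = matmul(A, B)
I, J = (0, 2), (1, 2)   # 0-based index sets with |I| = |J| = k = 2
lhs = minor2(AB, I, J)
rhs = sum(minor2(A, I, K) * minor2(B, K, J) for K in combinations(range(n), k))
print(lhs == rhs)  # True
```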
We start by finding an expansion for the permanent of certain matrices in terms of the elementary symmetric polynomials: \begin{lemma}\label{lem:perm} For vectors $\vec{u}, \vec{v} \in \mathbb{R}^n$, if $A$ is the $n \times n$ matrix with \[ A(i, j) = 1 + u_i v_j, \] then \[ \perm{A} = \sum_{k=0}^n k! (n-k)! e_k(\vec{u})e_k(\vec{v}). \] \end{lemma} \begin{proof} By definition, we have \[ \perm{A} = \sum_{\sigma \in S_n} \prod_{i=1}^n (1 + u_iv_{\sigma(i)}) \] where for each $\sigma$, we have \[ \prod_{i=1}^n (1 + u_iv_{\sigma(i)}) = \sum_{S \subseteq [n]} \vec{u}^S \vec{v}^{\,\sigma(S)}. \] For fixed $S$ with $|S| = k$, as $\sigma$ ranges over all permutations, $\sigma(S)$ will range over all sets $T \in \binom{[n]}{k}$ and any $\sigma'$ for which $\sigma'(S) = \sigma(S)$ will give the same term. As there are a total of $k!(n-k)!$ such permutations, we have \[ \perm{A} = \sum_{k=0}^n k! (n-k)! \sum_{S \in \binom{[n]}{k}}\sum_{T \in \binom{[n]}{k}} \vec{u}^S \vec{v}^{T} = \sum_{k=0}^n k! (n-k)! e_k(\vec{u})e_k(\vec{v}) \] as claimed. \end{proof} For a vector $\vec{x} \in \mathbb{R}^n$, let $\{ Q_{\vec{x}}^k \}_{k=0}^n$ be the collection of $n \times n$ matrices with entries \begin{equation}\label{eq:jac} \Qmat{\vec{x}}{k}(i, j) = \begin{cases} x_j^{i} & \text{if $i > k$,} \\ x_j^{i-1} & \text{otherwise}. \end{cases} \end{equation} For example, when $\vec{x} = (a, b, c)$, we have \[ \Qmat{\vec{x}}{0} = \begin{bmatrix} a & b & c \\ a^2 & b^2 & c^2 \\ a^3 & b^3 & c^3 \end{bmatrix} ,~ \Qmat{\vec{x}}{1} = \begin{bmatrix} 1 & 1 & 1 \\ a^2 & b^2 & c^2 \\ a^3 & b^3 & c^3 \end{bmatrix} ,~\dots~,~ \Qmat{\vec{x}}{3} = \begin{bmatrix} 1 & 1 & 1 \\ a & b & c \\ a^2 & b^2 & c^2 \end{bmatrix}. \] \begin{lemma}\label{lem:Q} For any vector $\vec{x} \in \mathbb{R}^n$, we have \[ \mydet{ \Qmat{\vec{x}}{k} } = e_{n-k}(\vec{x}) \Delta(\vec{x}). 
\] \end{lemma} \begin{proof} It should be clear from the properties of determinants that each function \[ f_k(\vec{x}) := \mydet{ \Qmat{\vec{x}}{k} } \] is an alternating polynomial. Hence, by Lemma~\ref{lem:alternate}, we have that \[ f_k(\vec{x}) = t_k(\vec{x}) \Delta(\vec{x}) \] for some symmetric function $t_k$, so it remains to show that $t_k = e_{n-k}$. We start by considering degrees. Note that each $f_k(\vec{x})$ is a homogeneous polynomial of degree $\binom{n+1}{2} - k$ and that $\Delta(\vec{x})$ is a homogeneous polynomial of degree $\binom{n}{2}$. Hence $t_k(\vec{x})$ must be a homogeneous polynomial of degree $n-k$. Now note that the degree of each $x_i$ in $f_k(\vec{x})$ is at most $n$, whereas the degree of each $x_i$ in $\Delta(\vec{x})$ is $n-1$. Hence the degree of each $x_i$ in $t_k(\vec{x})$ is at most $1$. The only symmetric polynomials that satisfy these constraints are multiples of the elementary symmetric polynomials, and so we must have \[ f_k(\vec{x}) = w_k e_{n-k}(\vec{x}) \Delta(\vec{x}) \] for some constant $w_k$. However, one can check that we must have $w_k = 1$ by comparing the coefficients of the monomial formed by the product of the diagonal entries in $\Qmat{\vec{x}}{k}$: it appears with coefficient $1$ on both sides. \end{proof} We will now use Lemma~\ref{lem:Q} to prove two corollaries. The first corollary appears in the solution to Problem 293(a) in \cite{mir} (and we suspect the authors were aware of Lemma~\ref{lem:Q} though they did not explicitly state it). \begin{corollary}\label{cor:fn-1} For vectors $\vec{u}, \vec{v} \in \mathbb{R}^n$, if $B$ is the $n \times n$ matrix with entries \[ B(i,j) = (1 + u_i v_j)^{n-1}, \] then \[ \mydet{B} = \Delta(\vec{u})\Delta(\vec{v}) \left(\prod_{j=0}^{n-1} \binom{n-1}{j} \right). \] \end{corollary} \begin{proof} Let $Y$ be the $n \times n$ diagonal matrix with entries $Y(i, i) = \binom{n-1}{i-1}$.
It is easy to check that \[ B = \Qmat{\vec{u}}{n}^{\intercal} Y \Qmat{\vec{v}}{n} \] and so by the product rule for the determinant, we have \[ \mydet{B} = \mydet{\Qmat{\vec{u}}{n}}\mydet{Y}\mydet{ \Qmat{\vec{v}}{n}}. \] Lemma~\ref{lem:Q} then completes the proof. \end{proof} The second corollary is similar in spirit to the first one, but requires the added machinery of Theorem~\ref{thm:cb}. \begin{corollary}\label{cor:fn} For vectors $\vec{u}, \vec{v} \in \mathbb{R}^n$, if $C$ is the $n \times n$ matrix with entries \[ C(i,j) = (1 + u_i v_j)^{n}, \] then \[ \mydet{C} = \Delta(\vec{u})\Delta(\vec{v}) \left(\prod_{j=0}^{n} \binom{n}{j} \right) \sum_{k=0}^n \frac{e_{k}(\vec{u}) e_{k}(\vec{v})}{\binom{n}{k}} . \] \end{corollary} \begin{proof} Let $\hat{Q}_{\vec{x}}$ be the $n \times (n+1)$ matrix with entries \[ \hat{Q}_{\vec{x}}(i, j) = x_i^{j-1} \] and let $Z$ be the $(n+1) \times (n+1)$ diagonal matrix with $Z(i, i) = \binom{n}{i-1}$. Then we have $C = \hat{Q}_{\vec{u}} Z \hat{Q}_{\vec{v}}^\intercal$ and so we can use Cauchy--Binet (twice) to expand $\mydet{C}$: \begin{align} \mydet{C} &= \minor{\hat{Q}_{\vec{u}} Z \hat{Q}_{\vec{v}}^\intercal}_{[n], [n]} = \sum_{K \in \binom{[n+1]}{n}} \minor{\hat{Q}_{\vec{u}}}_{[n], K} \minor{Z \hat{Q}_{\vec{v}}^\intercal}_{K, [n]} \notag \\&= \sum_{K, L \in \binom{[n+1]}{n}} \minor{\hat{Q}_{\vec{u}}}_{[n], K} \minor{Z}_{K, L} \minor{\hat{Q}_{\vec{v}}^\intercal}_{L, [n]}. \label{eq:expand} \end{align} To simplify \eqref{eq:expand}, we note that since $Z$ is diagonal, $\minor{Z}_{K, L} = 0$ unless $K = L$. Furthermore, note that the complement of each set $K$ contains a single element. When that element is $p \in [n+1]$, the missing column of $\hat{Q}_{\vec{x}}$ is the one with entries $x_i^{p-1}$, and so \[ \minor{\hat{Q}_{\vec{x}}}_{[n], K} = \mydet{\Qmat{\vec{x}}{p-1}} \quad\text{and}\quad \minor{Z}_{K, K} = \frac{\mydet{Z}}{\binom{n}{p-1}}. \] The result then follows from Lemma~\ref{lem:Q}.
\end{proof} Putting these results together gives us the following: \begin{corollary}\label{cor:main} For vectors $\vec{u}, \vec{v} \in \mathbb{R}^n$, if $A, B, C$ are $n \times n$ matrices with entries \[ A(i, j) = 1 + u_i v_j, \quad B(i, j) = (1 + u_i v_j)^{n-1}, \quad\text{and}\quad C(i, j) = (1 + u_i v_j)^{n}, \] then \[ (n!)^2 \mydet{C} = (n^n) \mydet{B} \perm{A}. \] \end{corollary} \begin{proof} Combining Lemma~\ref{lem:perm} and Corollary~\ref{cor:fn} gives \[ n! \mydet{C} = \Delta(\vec{u})\Delta(\vec{v}) \left(\prod_{j=0}^{n} \binom{n}{j} \right) \perm{A}, \] where we have \[ \prod_{j=0}^n \binom{n}{j} = \prod_{j=1}^n \frac{n}{j}\binom{n-1}{j-1} = \frac{n^n}{n!}\prod_{j=1}^n \binom{n-1}{j-1} = \frac{n^n}{n!} \prod_{j=0}^{n-1} \binom{n-1}{j}. \] Plugging in Corollary~\ref{cor:fn-1} gives the result. \end{proof} Finally, we extend Corollary~\ref{cor:main} to the case of any matrix of rank at most $2$. \begin{theorem}\label{thm:main} Let $X \in \mathbb{R}^{n \times n}$ be any matrix of rank at most $2$ and let $X_{n-1}, X_n \in \mathbb{R}^{n \times n}$ be the matrices with \[ X_{n-1}(i, j) = X(i, j)^{n-1} \quad\text{and}\quad X_{n}(i, j) = X(i, j)^n. \] Then \[ (n!)^2 \mydet{X_n} = (n^n) \mydet{X_{n-1}} \perm{X}. \] \end{theorem} \begin{proof} First note that if we let $\vec{1} \in \mathbb{R}^n$ denote the vector with $\vec{1}(k) = 1$ for all $k$, then Corollary~\ref{cor:main} proves the theorem in the case that \[ X = \vec{u} \otimes \vec{v} + \vec{1} \otimes \vec{1}. \] Now let $Y = \vec{u} \otimes \vec{v} + \vec{w} \otimes \vec{x}$ for general $\vec{w}, \vec{x}$. The expansion of $\mydet{Y_n}$ in terms of monomials has the form \begin{equation} \label{eq:Y1} \mydet{Y_n} = \sum_{i_1, \dots, i_n, j_1, \dots, j_n} c_{i_1, \dots, i_n, j_1, \dots, j_n} \prod_k u_k^{i_k}w_k^{n - i_k}v_k^{j_k}x_k^{n - j_k} \end{equation} where the $c_{i_1, \dots, i_n, j_1, \dots, j_n}$ are constants.
Similarly, $\mydet{Y_{n-1}} \perm{Y}$ has an expansion \begin{equation} \label{eq:Y2} \mydet{Y_{n-1}} \perm{Y} = \sum_{i_1, \dots, i_n, j_1, \dots, j_n} \hat{c}_{i_1, \dots, i_n, j_1, \dots, j_n} \prod_k u_k^{i_k}w_k^{n - i_k}v_k^{j_k}x_k^{n - j_k} \end{equation} for some constants $\hat{c}_{i_1, \dots, i_n, j_1, \dots, j_n}$. Plugging in $\vec{x} = \vec{1}$ and $\vec{w} = \vec{1}$, however, does not cause any of the coefficients to combine, since the exponents $i_1, \dots, i_n, j_1, \dots, j_n$ can still be read off from the resulting monomial $\prod_k u_k^{i_k} v_k^{j_k}$. That is, \[ \mydet{X_n} = \sum_{i_1, \dots, i_n, j_1, \dots, j_n} c_{i_1, \dots, i_n, j_1, \dots, j_n} \prod_k u_k^{i_k}v_k^{j_k} \] and \[ \mydet{X_{n-1}} \perm{X} = \sum_{i_1, \dots, i_n, j_1, \dots, j_n} \hat{c}_{i_1, \dots, i_n, j_1, \dots, j_n} \prod_k u_k^{i_k}v_k^{j_k}, \] and so by Corollary~\ref{cor:main}, we have \[ (n!)^2 c_{i_1, \dots, i_n, j_1, \dots, j_n} = (n^n) \hat{c}_{i_1, \dots, i_n, j_1, \dots, j_n} \] for all indices $i_1, \dots, i_n$ and $j_1, \dots, j_n$. Plugging this into (\ref{eq:Y1}) and (\ref{eq:Y2}) implies the corresponding equality for $Y$. \end{proof} \section{Conclusion}\label{sec:conclusion} It is worth noting that Theorem~\ref{thm:main} does not seem to be useful algorithmically. For the purpose of computing permanents, it tends to be slower than the algorithm of Barvinok \cite{barvinok} and also runs into stability issues whenever the two determinants approach $0$ (in particular, when the original matrix $X$ is actually rank 1). Returning to the motivation discussed in Section~\ref{sec:motivation}, the author discovered Theorem~\ref{thm:main} in a (failed) attempt to prove the following conjecture, which would have useful implications in the field of finite free probability: \begin{conjecture} For any $n \times n$ matrix $T$ with rank at most $2$, \[ \perm{ \begin{matrix} T & T \\ T & T \end{matrix} } \leq \binom{2n}{n} \perm{T}^2. \] \end{conjecture} \noindent A proof is known in the case that the entries of $T$ are all positive, but the general case seems harder.
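Although the identity does not seem algorithmically useful, it is easy to check. The following sketch (ours, not part of the paper) verifies Theorem~\ref{thm:main} for an arbitrary rank-$2$ integer matrix with $n = 4$, computing the determinants and permanent directly from their definitions so that all arithmetic is exact.

```python
from itertools import permutations
from math import factorial, prod

def sgn(p):
    # sign of a permutation via its inversion count
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def det(M):
    n = len(M)
    return sum(sgn(p) * prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def per(M):
    n = len(M)
    return sum(prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

n = 4
u, v = [1, 2, 3, 4], [1, -1, 2, 0]     # arbitrary integer test vectors
w, z = [2, 0, 1, 1], [1, 1, 1, 2]
# X = u (x) v + w (x) z has rank at most 2.
X = [[u[i] * v[j] + w[i] * z[j] for j in range(n)] for i in range(n)]
Xpow = lambda k: [[X[i][j] ** k for j in range(n)] for i in range(n)]  # Hadamard power

lhs = factorial(n) ** 2 * det(Xpow(n))
rhs = n ** n * det(Xpow(n - 1)) * per(X)
print(lhs == rhs)  # True
```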
\subsection{Further research} There are two obvious directions for extending Theorem~\ref{thm:main}, both of which would be quite interesting. First, the structural similarity between \eqref{eq:them} and \eqref{eq:us} suggests that a larger class of formulas of the type \[ c(n) \mydet{M_i} = \mydet{M_j} \perm{M_k} \] could exist for matrices $M \in \mathbb{R}^{n\times n}$ of a certain type (in our case, rank 2). Second, one can ask if there is a determinantal formula similar to \eqref{eq:us} that is capable of computing the permanent of a rank $3$ matrix. In this regard, it is worth mentioning that the author's original proof of Theorem~\ref{thm:main} used a formula of Jacobi that relates the determinants of the $\Qmat{\vec{x}}{k}$ matrices defined in \eqref{eq:jac} to {\em Schur polynomials} \cite{jacobi}. The author opted for the current presentation, which was suggested by an anonymous referee, due to its elegance. For the purposes of extension, however, we mention the connection to Schur polynomials as a possible approach. A final (but far more speculative) research direction lies in the fundamental relationship between the permanent and determinant functions. In particular, one can ask whether different models for geometric complexity theory could lead to new and interesting results. Equations \eqref{eq:them} and \eqref{eq:us} suggest two possible generalizations. First, one could attempt to satisfy \eqref{eq:perm-det} with a matrix $M'$ whose entries are {\em polynomials} in the entries of $M$ (instead of just affine combinations). Second, one could try to replace \eqref{eq:perm-det} with an equation of the form \[ \mydet{M''}\perm{M} = \mydet{M'}, \] where both $M''$ and $M'$ are affine (or polynomial) combinations of the entries of $M$.
It is unclear what the immediate implications of either generalization would be, but given the central importance of complexity theory in computer science, one would expect that any new results of this type would be quite interesting.
https://arxiv.org/abs/2207.12347
\begin{document} \title{Degrees of maps and multiscale geometry} \author[A.~Berdnikov]{Aleksandr Berdnikov} \address[A.~Berdnikov]{Institute for Advanced Study, Princeton, NJ, United States} \email{beerdoss@mail.ru} \author[L.~Guth]{Larry Guth} \address[L.~Guth]{Department of Mathematics, MIT, Cambridge, MA, United States} \email{lguth@math.mit.edu} \author[F.~Manin]{Fedor Manin} \address[F.~Manin]{Department of Mathematics, UCSB, Santa Barbara, CA, United States} \email{manin@math.ucsb.edu} \begin{abstract} We study the degree of an $L$-Lipschitz map between Riemannian manifolds, proving new upper bounds and constructing new examples. For instance, if $X_k$ is the connected sum of $k$ copies of $\mathbb CP^2$ for $k \ge 4$, then we prove that the maximum degree of an $L$-Lipschitz self-map of $X_k$ is between $C_1 L^4 (\log L)^{-4}$ and $C_2 L^4 (\log L)^{-1/2}$. More generally, we divide simply connected manifolds into three topological types with three different behaviors. Each type is defined by purely topological criteria. For scalable simply connected $n$-manifolds, the maximal degree is $\sim L^n$. For formal but non-scalable simply connected $n$-manifolds, the maximal degree grows roughly like $L^n (\log L)^{-\theta(1)}$. And for non-formal simply connected $n$-manifolds, the maximal degree is bounded by $L^\alpha$ for some $\alpha < n$. \end{abstract} \maketitle \section{Introduction} \subsection{Background} Given an oriented Riemannian manifold $M$, how does the Lipschitz constant of a map $M \to M$ control its degree? In all cases, if $M$ is an $n$-manifold, an $L$-Lipschitz map $M \to M$ multiplies $n$-dimensional volumes by at most $L^n$, and so its degree is at most $L^n$. In \cite[Ch.~2]{GrMS}, Gromov studied the extent to which this estimate is sharp. For example, he showed that if $M$ admits a sequence of self-maps $f_k$ with \[\deg(f_k) \ge (1 - o(1)) \Lip(f_k)^n,\] then $M$ must be flat \cite[2.32]{GrMS}.
He also asked: for what $M$ are there $f_k$ with unbounded degree such that the ratio $\Lip(f_k)^n/\deg(f_k)$ is bounded \cite[2.40(c)]{GrMS}? The answer to this modified question only depends on the topology of $M$. Gromov constructed such maps when $M$ is a sphere or a product of spheres. He singled out $(S^2 \times S^2) \# (S^2 \times S^2)$ as a case in which he did not know whether such maps exist. We now know that the answer for connected sums of copies of $S^2 \times S^2$ or of $\mathbb{C} P^2$ is rather subtle. (The behavior is similar for both families.) Consider the family of manifolds $X_k = \#_k \mathbb CP^2$. Volume considerations show that an $L$-Lipschitz self-map of any $4$-manifold has degree at most $L^4$. It's not difficult to construct an $L$-Lipschitz self-map of $\mathbb CP^2$ with degree $\sim L^4$. When $k=2$ or $3$, \cite{scal} shows that there are also $L$-Lipschitz self-maps of $X_k$ with degree $\sim L^4$. But when $k \ge 4$, \cite{scal} shows that an $L$-Lipschitz self-map of $X_k$ has degree $o(L^4)$. Before this paper, the most efficient known maps had degree $\sim L^3$. One of our goals in this paper is to give sharper quantitative estimates for the case $k \ge 4$. We will show that the maximal degree $p$ lies in the range \[L^4(\log L)^{-4} \lesssim p \lesssim L^4(\log L)^{-1/2}.\] This phase transition between $k=3$ and $k=4$ is an example of a broader phenomenon. Our second goal in the paper is to develop the general theory of this phenomenon. For a given $M$, the maximally efficient relationship $\Lip f \sim (\deg f)^{1/n}$ may not be achievable for several reasons. For example, $M$ may be \emph{inflexible}, meaning that it does not have self-maps of degree $>1$. (Examples of inflexible simply connected manifolds are given in \cite{ArLu,CLoh,CV,Am}.)
Or it may be the case that any self-map of $M$ of degree $D$ multiplies some $k$-dimensional homology class by a factor greater than $D^{k/n}$, giving a stronger bound on the Lipschitz constant. A compact manifold $M$ is \emph{formal} if it has a self-map $M \to M$ which, for some $p$, induces multiplication by $p^k$ on $H_k(M;\mathbb R)$, for every $k \geq 1$. This notion, first defined by Sullivan and coauthors in terms of rational homotopy theory, has played a role in many other geometric applications, starting with \cite{DGMS}. If $M$ is a formal $n$-manifold, then obstructions to obtaining an $L$-Lipschitz map $M \to M$ of degree $L^n$ cannot come from measuring volumes of cycles. However, in \cite{scal} it was shown that more subtle obstructions may exist. This motivates the definition of a \emph{scalable} manifold to be one which has $O(L)$-Lipschitz self-maps of degree $L^n$. The paper \cite{scal} shows that scalability is equivalent to several other conditions; most importantly, a manifold $M$ (perhaps with boundary) is scalable if and only if there is a ring homomorphism $H^*(M;\mathbb{R}) \to \Omega^*(M)$ which realizes cohomology classes as differential forms representing them. \subsection{Main results} For non-scalable formal spaces, \cite{scal} proves that any $L$-Lipschitz self-map has degree $o(L^n)$. Before this paper, the examples that had been constructed had degree $O(L^{n-1})$. In this paper, we gain a sharper quantitative understanding: \begin{thmA} \label{main} Let $M$ be a formal, simply connected closed $n$-manifold which is not scalable. Then the maximal degree $p$ of an $L$-Lipschitz map $M \to M$ satisfies \[L^n(\log L)^{-\beta(M)} \lesssim p \lesssim L^n(\log L)^{-\alpha(M)},\] where $\beta(M) \geq \alpha(M)>0$ are constants depending only on the real cohomology ring of $M$. \end{thmA} For example, in the case of $M=\#_k \mathbb{C} P^2$, $\beta(M)=4$ and $\alpha(M)=1/2$. 
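To get a concrete feel for the polylogarithmic gap in Theorem~\ref{main} in the case $M = \#_k \mathbb{C} P^2$ (where $\beta = 4$ and $\alpha = 1/2$), here is a small numerical illustration (ours, not from the paper; the suppressed constants in $\lesssim$ are ignored):

```python
import math

# Compare the two sides of  L^4 (log L)^(-4)  <~  p  <~  L^4 (log L)^(-1/2)
# at a few values of L; the ratio between them is (log L)^(7/2).
for L in [1e3, 1e6, 1e12]:
    lower = L**4 * math.log(L)**-4
    upper = L**4 * math.log(L)**-0.5
    print(f"L = {L:.0e}:  lower/L^4 = {lower / L**4:.2e},  "
          f"upper/L^4 = {upper / L**4:.2e},  upper/lower = {upper / lower:.2e}")
```

Both bounds are within a polylogarithmic factor of the volume bound $L^4$, but the window between them still grows without bound as $L \to \infty$.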
The lower bound of Theorem \ref{main} generalizes to compact manifolds with boundary with a slightly more complicated statement (see Theorem \ref{self-maps}). We obtain a similar result for sizes of nullhomotopies of $L$-Lipschitz maps to a non-scalable formal space: \begin{thmA} \label{main:homotopies} Let $Y$ be a formal, simply connected compact Riemannian $n$-manifold (perhaps with boundary). Then for any finite simplicial complex $X$, any nullhomotopic $L$-Lipschitz map $f:X \to Y$ is $O(L(\log L)^{n-2})$-Lipschitz nullhomotopic. \end{thmA} For scalable spaces, a linear bound was proved in \cite{scal}; thus this result is interesting mainly for non-scalable formal spaces. In contrast, in non-formal spaces it is often impossible to do better than a bound of the form $L^\alpha$ for some $\alpha>1$. As mentioned above, one of the main theorems of \cite{scal} says that a manifold $M$ is scalable if and only if there is a ring homomorphism from $H^*(M; \mathbb{R})$ to $\Omega^*(M)$ which takes each cohomology class to a differential form in that class. Because $\Omega^*(M)$ is infinite-dimensional, this condition is not so easy to check. In the case of closed manifolds, we verify the conjecture given in \cite{scal}, which states that scalability is equivalent to a simple homological criterion: \begin{thmA} \label{optconj} Let $M$ be a formal, simply connected closed $n$-manifold. Then $M$ is scalable if and only if there is an injective ring homomorphism $H^*(M;\mathbb R) \to \Lambda^*\mathbb R^n$. \end{thmA} \begin{ex} If $M$ is an $(n-1)$-connected $2n$-manifold, then its real cohomology ring is completely described by the signature $(k,\ell)$ of the bilinear form \[\smile:H^n(M;\mathbb R) \times H^n(M;\mathbb R) \to H^{2n}(M;\mathbb R).\] Then $M$ is scalable if and only if $k$ and $\ell$ are both at most ${2n \choose n}/2$. \end{ex} Theorem \ref{optconj} is closely related to another idea studied by Gromov in \cite[2.41]{GrMS}.
For a closed $n$-manifold $M$, say a map $f:\mathbb R^n \to M$ has \emph{positive asymptotic degree} if \[\limsup_{R \to \infty} \frac{\int_{B_R(0)} f^*d\vol_M}{R^n}=\delta>0.\] Intuitively, if you zoom in on an efficient self-map $M \to M$ of high degree, you will see a map of positive asymptotic degree on a large ball. This intuition can be made precise for formal spaces: \begin{thmCprime} \label{elliptic} Let $M$ be a formal, simply connected closed $n$-manifold. Then there is a $1$-Lipschitz map $f:\mathbb R^n \to M$ of positive asymptotic degree if and only if $M$ is scalable. \end{thmCprime} \begin{rmk} Gromov refers to manifolds with this property as \emph{elliptic}. However, there is not a close connection between scalability and elliptic spaces as studied in rational homotopy theory. \end{rmk} \begin{question} Can a non-formal simply connected manifold be Gromov-elliptic? \end{question} Finally, we explore the behavior of non-formal manifolds: \begin{thmA} \label{main:NF} Let $M$ be a closed simply connected $n$-manifold which is not formal. Then either $M$ is inflexible (has no self-maps of degree $>1$) or the maximal degree of an $L$-Lipschitz map $M \to M$ is bounded by $L^\alpha$ for some real number $\alpha<n$. \end{thmA} To see how the latter situation arises, consider the simplest example of a non-formal simply connected manifold, given in \cite[p.~94]{FOT}. This is the total space $M$ of a fiber bundle $S^3 \to M \to S^2 \times S^2$ obtained by pulling back the Hopf fibration $S^3 \to S^7 \to S^4$ along the degree $1$ map $S^2 \times S^2 \to S^4$. A self-map of $M$ is determined by its action on $H^2(M) \cong \mathbb Z^2$. This is because the generators of $H^5(M)$ can be obtained from the generators of $H^2(M)$ by taking Massey products of order~3. An $L$-Lipschitz self-map takes the generators of $H^5(M)$ to vectors of length $O(L^5)$, and therefore it takes the generators of $H^2(M)$ to vectors of length $O(L^{5/3})$. 
This means the degree of such a map is $O(L^{20/3}) \prec L^7$. In fact, an alternative characterization of formality is that a formal space has no nontrivial higher-order rational cohomology operations. \subsection{Proof ideas} \label{discus} In \cite{scal}, the $o(L^n)$ upper bound for the degree of an $L$-Lipschitz map $M \to M$ is obtained by looking at the induced pullbacks of differential forms representing cohomology classes of $M$ and taking flat limits. To get the sharper upper bound of Theorem \ref{main}, we analyze the same pullback forms using Fourier analysis, namely Littlewood--Paley theory. These pullback forms can be decomposed into summands concentrated in different frequency ranges. To start to get an idea how the proof works, first imagine that all the pullback forms are concentrated in a single frequency range. If the frequency range is high, then we get a lot of cancellation when we integrate the forms, leading to a non-trivial bound for the degree. If the frequency range is low, then we use the fact that $M$ is not scalable to get a non-trivial bound for the degree -- roughly speaking, if all the relevant forms were large and low frequency, we could use them to build a ring homomorphism from $H^*(M; \mathbb{R})$ to $\Omega^*(M)$. In general, the pullback forms have contributions from many frequency ranges. We carefully break up the integral for the degree into pieces involving different frequency ranges, and we use the two ideas above to bound the pieces. It turns out that the interaction of different frequency ranges is important in this estimate. In the worst case, the forms have roughly equal contributions in every frequency range. Indeed, a self-map of $M$ which comes close to the upper bound must have pieces in a wide range of frequencies (see Proposition \ref{allfreq} for a precise statement). Let's see what such a self-map might look like in the case of $M=\#_k \mathbb{C} P^2$.
We think of $M$ as a CW complex with one $0$-cell, $k$ $2$-cells, and one $4$-cell. We construct self-maps $r_\ell:M \to M$ which have degree $2^{4\ell}$ on the top cell. We would like to arrange that $r_\ell$ has Lipschitz constant at most $C \ell \cdot 2^{\ell}$. A naive way to build a map $r_\ell$ of the right degree is to start with some $r_1$ and iterate it $\ell$ times to get $r_\ell$. In this case, $\Lip(r_\ell) \le \Lip(r_1)^\ell$. However, $\Lip(r_1)$ is strictly bigger than 2 (by \cite[2.32]{GrMS}, the Lipschitz constant could only be 2 if $M = \#_k \mathbb{C} P^2$ had a flat metric). Therefore, the bound $\Lip(r_1)^\ell$ is too big. By performing some optimization each time we iterate, we can bring $\Lip(r_\ell)$ down to the target value. \begin{figure} \centering \begin{tikzpicture}[scale=0.8] \filldraw[very thick,fill=gray!20] (-2,-2) rectangle (2,2); \foreach \x/\y in {-1.5/-1.5, -1.5/0.5, 0.5/-1.5, 0.5/0.5} { \filldraw[thick,fill=gray!50] (\x,\y) rectangle (\x+1,\y+1); \draw (\x+0.5,\y+0.5) node {$r_{\ell-1}$}; } \draw (0,0) node {to $2$-skeleton}; \draw[line width=3pt,->] (2.2,0) -- (3.8,0); \filldraw[very thick,fill=gray!20] (4,-2) rectangle (8,2); \foreach \x/\y in {4.2/-1.8, 4.2/0.2, 6.2/-1.8, 6.2/0.2} { \filldraw[thick,fill=gray!50] (\x,\y) rectangle (\x+1.6,\y+1.6); } \foreach \x in {4.3,5.1,6.3,7.1} { \foreach \y in {-1.7,-0.9,0.3,1.1} { \fill[gray] (\x,\y) rectangle (\x+0.6,\y+0.6); } } \end{tikzpicture} \caption{Rescaling the ``layers'' of the iterated map.} \label{blocks} \end{figure} We may build $r_1$, which has degree $16$, as follows: the top cell $e_4$ contains 16 cubical regions that each map homeomorphically, even homothetically, to the whole cell, whereas the area outside those cubical regions maps to the $2$-skeleton. To try to make this map efficient, we can arrange the cubical regions in a $2 \times 2 \times 2 \times 2$ grid. 
But when we iterate this map many times, the regions that map homothetically to the $4$-cell become tiny, and most of the $4$-cell maps to the $2$-skeleton. The main idea of the construction is that we can actually expand the homothetic regions so that they take up a much larger part of the cell, while compressing the parts that map to the $2$-skeleton to a thin layer. This has to do with the fact that self-maps of $S^2$ of high degree are easy to produce and modify. In the end, each of the $\ell$ iterations contributes a layer of roughly the same thickness, leading to an estimate of $O(\ell \cdot 2^\ell)$ for the Lipschitz constant, or $O(d^{1/4}\log d)$ in terms of the degree $d=2^{4\ell}$. See Figure \ref{blocks} for a rough illustration. The proof of the lower bound of Theorem \ref{main} is a straightforward generalization of this idea. To end this introduction, we consider the Littlewood--Paley pieces of the differential forms from this map and from other maps we have discussed. For simplicity, let us first discuss a self-map $S^2 \to S^2$ with degree $2^{2p}$ and Lipschitz constant $2^p$. The pullback of the volume form is very repetitive, so that after averaging on scale $2^{-p}$ it becomes essentially constant. Therefore, the Littlewood--Paley pieces of the pullback are large at the highest frequency scale $2^p$ and at frequency 1, but they can be very small at all the in-between frequencies. The maps between scalable spaces constructed in \cite{scal} have a similar Littlewood--Paley profile. These maps are highly regular ``rescalings''. In fact, we prove Theorem \ref{optconj} by building maps which are modeled on constant forms---the lowest possible frequency. Such maps are built on each cell and patched together using previous results from quantitative homotopy theory. The patching introduces high frequency pieces, but there don't need to be any contributions from the intermediate frequencies.
The Littlewood--Paley decomposition for the self-map of $\#_k \mathbb{C} P^2$ sketched above is very different. The outermost layer is dominated by very low-frequency terms (at scale around the diameter of the space) and very high-frequency terms (at scale $\sim 2^{-\ell}$). Similarly, the $j$th layer, which looks like the outermost layer but on a different scale, is dominated by terms at scale $2^{-j}$ and $2^{-\ell}$. Overall, the map has pieces at every frequency range, as suggested by its fractal-like self-similarity. \subsection{Structure of the paper} Section \ref{S:Fourier} contains the Fourier-analytic proof of the upper bound of Theorem \ref{main}; it is independent of the remainder of the paper. Section \ref{S:lower} discusses the corresponding lower bound, and is likewise largely self-contained. Section \ref{S:RHT} introduces some necessary results from rational and quantitative homotopy theory. In Section \ref{S:optconj}, we use this machinery to prove Theorems \ref{optconj} and \ref{elliptic}, and in Section \ref{S:homotopies}, we use it to prove Theorem \ref{main:homotopies}. Finally, in Section \ref{S:NF}, we discuss what our techniques can say about non-formal spaces, proving Theorem \ref{main:NF} as well as some complementary bounds. \subsection{Acknowledgements} The second author is supported by a Simons Investigator Award. The third author was partially supported by NSF individual grant DMS-2001042. \section{Upper bounds on degree using Fourier analysis} \label{S:Fourier} In this section, we show the upper bound of Theorem \ref{main}. To introduce the method, we first handle the case of a connected sum of $\mathbb{C} P^2$s: \begin{thm} \label{cp2k} Let $X_k = \#_k \mathbb{C} P^2$. Fix a metric $g$ on $X_k$. Suppose that $f: X_k \rightarrow X_k$ is $L$-Lipschitz.
If $k \ge 4$, then \[\deg (f) \le C(k, g) L^4 (\log L)^{-1/2}.\] \end{thm} We then use the same method to prove the general result: \begin{thm} \label{gendegbound} Suppose that $M$ is a closed connected oriented $n$-manifold such that $H^*(M; \mathbb{R})$ does not embed into $\Lambda^* \mathbb{R}^n$, and $N$ is any closed oriented $n$-manifold. Then there exists $\alpha(M) > 0$ so that for any metric $g$ on $M$ and $g'$ on $N$ and any $L$-Lipschitz map $f: N \rightarrow M$, \[\deg(f) \le C(M,g,N,g') L^n (\log L)^{- \alpha(M)}.\] \end{thm} Note that by Theorem \ref{optconj}, proved later in the paper, if $M$ is simply connected and formal, then this condition holds if and only if $M$ is not scalable. However, the theorem also holds for non-formal manifolds as well as those with nontrivial fundamental group. A similar result also holds for many non-closed domain manifolds. We give the proof for a unit ball, although it extends easily to any compact manifold with boundary: \begin{thm} \label{balldegbound} Suppose that $M$ is a closed connected oriented $n$-manifold such that $H^*(M; \mathbb{R})$ does not embed into $\Lambda^* \mathbb{R}^n$, and let $\alpha(M)>0$ be as in the statement of Theorem \ref{gendegbound}. Let $B^n \subseteq \mathbb{R}^n$ be the unit ball. Then for any metric $g$ on $M$ and any $L$-Lipschitz map $f:B^n \to M$, \[\int_{B^n} f^*d\vol_M \leq C(M,g)L^n(\log L)^{-\alpha(M)}.\] \end{thm} As discussed in the introduction, we prove these results by using Littlewood--Paley theory to divide the forms into pieces at different frequency ranges. In the first subsection, we review the tools from Littlewood--Paley theory that we need. In the second part, we prove Theorem \ref{cp2k}. In the third part, we introduce the modifications needed to prove the more general estimate in Theorem \ref{gendegbound}. \subsection{Littlewood--Paley theory} If $a$ denotes a differential form on $\mathbb{R}^d$, then we can define its Fourier transform term by term. 
In other words, if $I$ is a multi-index and $ a = \sum_I a_I(x) dx^I$, then \[\hat a := \sum_I \hat a_I dx^I.\] To set up Littlewood--Paley theory, pick a partition of unity on Fourier space: \[\sum_{k \in \mathbb{Z}} \eta_k (\xi) := 1,\] where $\eta_k$ is supported in the annulus $\Ann_k := \{ \xi: 2^{k-1} \le |\xi| \le 2^{k+1} \}$. We can also arrange that $0 \le \eta_k \le 1$, and that $\eta_k$ are smooth with appropriate bounds on their derivatives. Then define \[P_k a := ( \eta_k \hat a)^{\vee},\] where $\vee$ denotes the inverse Fourier transform. We have $a = \sum_{k \in \mathbb{Z}} P_k a$, and we know that $\widehat{P_k a} = \eta_k \hat a$ is supported in $\Ann_k$. We also write $P_{\le k} a = \sum_{k' \le k} P_{k'} a$, and $\eta_{\le k} = \sum_{k' \le k} \eta_{k'}$, so $P_{\le k} a = (\eta_{\le k} \hat a)^\vee$. We say that a form $a = \sum_I a_I(x) dx^I$ is Schwartz if each function $a_I(x)$ is Schwartz. A form $a$ is Schwartz if and only if $\hat a$ is Schwartz. Therefore, if $a$ is Schwartz, then $P_k a$ and $P_{\le k} a$ are also Schwartz. In this section, we review some estimates related to the $P_k a$. These results are proven using some inequalities about the inverse Fourier transform of smooth bump functions. \begin{lem} \label{etaveebound1} Suppose that $\eta(\omega)$ is a smooth function supported on a ball $B\subset \mathbb{R}^d$ of radius~1 such that \begin{itemize} \item $| \eta(\omega) | \le A$ for all $\omega$. \item $| \partial_J \eta(\omega) | \le A_N$ for all multi-indices $J$ with $|J| \le N$. \end{itemize} Then \begin{align*} \lvert\eta^\vee(x)\rvert &\lesssim_d A \qquad\text{for every } x \in \mathbb{R}^d. \\ \lvert \eta^\vee(x) \rvert &\lesssim_d A_N \lvert x \rvert^{-N} \qquad \text{for every } x \in \mathbb{R}^d.
\end{align*} Therefore, if $N > d$, \[ \lVert \eta^\vee \rVert_{L^1} \lesssim_d A + A_N.\] \end{lem} \begin{proof} For the first bound, we write \[|\eta^\vee(x)| = | {\textstyle\int \eta(\omega) e^{2 \pi i \omega x} d \omega} | \le {\textstyle\int} | \eta| \le |B| A.\] For the second bound, we integrate by parts $N$ times. For a given $x \in \mathbb{R}^d$, we choose a multi-index $J$ with $|J| =N$ and $|x|^N \sim x^J$. Then \[|\eta^\vee(x)| = \bigl\lvert {\textstyle\int \eta(\omega) e^{2 \pi i \omega x} d \omega} \bigr\rvert = \bigl\lvert{\textstyle\int \partial_J \eta (2 \pi i)^{-N} x^{-J} e^{2 \pi i \omega x} d \omega} \bigr\rvert \lesssim |x|^{-N} {\textstyle\int |\partial_J \eta|} \le \lvert x \rvert^{-N} \lvert B \rvert A_N.\] To bound $\int |\eta^\vee(x)| dx$, we use the first bound when $|x| \le 1$ and the second bound when $|x| \ge 1$. \end{proof} \begin{lem} \label{etaveeboundR} Suppose that $\eta(\omega)$ is a smooth function supported on a ball $B\subset \mathbb{R}^d$ of radius $R$ such that \begin{itemize} \item $| \eta(\omega) | \le A$ for all $\omega$. \item $| \partial_J \eta(\omega) | \le A_N R^{-|J|}$ for all multi-indices $J$ with $|J| \le N$. \end{itemize} Then \begin{align*} | \eta^\vee(x) | &\lesssim_d A R^d \qquad\textrm{ for every } x \in \mathbb{R}^d. \\ | \eta^\vee(x) | &\lesssim_d A_N R^d \lvert Rx \rvert^{-N} \qquad\textrm{ for every } x \in \mathbb{R}^d. \end{align*} Therefore, if $N > d$, \[\| \eta^\vee \|_{L^1} \lesssim_d A + A_N.\] \end{lem} \begin{proof} The first two bounds follow from Lemma \ref{etaveebound1} by a change of variables. Alternatively, one can use the same method as in Lemma \ref{etaveebound1}. To bound $\int |\eta^\vee(x)| dx$, we use the first bound when $|x| \le 1/R$ and the second bound when $|x| \ge 1/R$.
\end{proof} \begin{lem} \label{multiplierbound} Suppose that $\eta(\omega)$ is a smooth function supported on a ball $B\subset \mathbb{R}^d$ of radius $R$ such that \begin{itemize} \item $| \eta(\omega) | \le A$ for all $\omega$. \item $| \partial_J \eta(\omega) | \le A_N R^{-|J|}$ for all multi-indices $J$ with $|J| \le N$. \end{itemize} Write $M f = \bigl( \eta \hat f \bigr)^\vee$. Then if $N>d$, \[\| M f \|_{L^p} \lesssim_d (A + A_N) \| f \|_{L^p}\text{ for every }1 \le p \le \infty.\] \end{lem} \begin{proof} We have $Mf = f * \eta^\vee$. So $\| M f \|_{L^p} \le \| f \|_{L^p} \| \eta^\vee \|_{L^1}$. Now apply the bound for $\| \eta^\vee \|_{L^1}$ from Lemma \ref{etaveeboundR}. \end{proof} We apply these bounds to study the Littlewood--Paley projections $P_k$. \begin{lem} \label{etaveebound} $\| \eta_k^\vee \|_{L^1} \lesssim 1$ uniformly in $k$. $\| d \eta_k^\vee \|_{L^1} \lesssim 2^k$ uniformly in $k$. \end{lem} \begin{proof} We can first arrange that $\eta_k(\omega) = \eta_0(2^{-k} \omega)$. Then the function $\eta_k$ obeys the hypotheses of Lemma \ref{etaveeboundR} with $R = 2^k$, with bounds that are uniform in $k$. Then Lemma \ref{etaveeboundR} gives the estimate $\| \eta_k^\vee \|_{L^1} \lesssim_d 1$. Next, we will show that $\| \partial_j \eta_k^\vee \|_{L^1} \lesssim_d 2^k$. This will imply $\| d \eta_k^\vee \|_{L^1} \lesssim_d 2^k$ as desired. The Fourier transform of $\partial_j \eta_k^\vee$ is $2 \pi i \omega_j \eta_k(\omega)$. Notice that $|\omega_j| \lesssim 2^k$ on $\Ann_k$. We write \[2 \pi i \omega_j \eta_k = 2^k \cdot \underbrace{ 2 \pi i \frac{\omega_j}{2^k} \eta_k }_{\psi}.\] The function $\psi$ obeys the hypotheses of Lemma \ref{etaveeboundR}. Therefore, $\| \psi^\vee \|_{L^1} \lesssim_d 1.$ And so \[\| \partial_j \eta_k^\vee \|_{L^1} = 2^k \| \psi^\vee \|_{L^1} \lesssim 2^k. \qedhere\] \end{proof} \begin{lem} \label{Pkbound} $\| P_k a \|_{L^p} \le C \| a \|_{L^p}$, for all $k$ and all $1 \le p \le \infty$ with a uniform constant $C$.
\end{lem} \begin{proof} $\|P_k a \|_{L^p} = \| \eta_k^\vee * a \|_{L^p} \le \| \eta_k^\vee \|_{L^1} \| a \|_{L^p}$. Now $\| \eta_k^\vee \|_{L^1}$ is bounded uniformly in $k$ by Lemma \ref{etaveebound}. \end{proof} \begin{lem} \label{projcomm} The projection operator $P_k$ commutes with the exterior derivative $d$: \[d (P_k a) = P_k (da).\] \end{lem} \begin{proof} We can see this by taking the Fourier transform on both sides. The exterior derivative $d$ becomes pointwise multiplication by a matrix on the Fourier side. The projection operator $P_k$ becomes pointwise multiplication by the scalar $\eta_k$. These commute. \end{proof} \begin{lem} \label{prim} Suppose that $a$ is a Schwartz form on $\mathbb{R}^d$ with $da = 0$ and with $\hat a$ supported in $\Ann_k:= \{ \xi: 2^{k-1} \le |\xi| \le 2^{k+1} \}$. Then $a$ has a primitive, which we denote $\Prim(a)$, so that \begin{itemize} \item $ d \Prim(a) = a$. (This is what the word `primitive' means.) \item $\Prim(a)$ is a Schwartz form. \item $ \lVert\Prim(a)\rVert_{L^p} \le C 2^{-k} \lVert a \rVert_{L^p}$ for all $1 \le p \le \infty$, with a uniform constant $C$. \end{itemize} \end{lem} This is really the key property of frequency-localized forms. The intuition is that $\Prim(a)$ is defined by integrating $a$, and the integral cancels at length scales larger than $2^{-k}$. Before starting the proof, we make a quick remark about top-dimensional forms. If $a$ is a $d$-form on $\mathbb{R}^d$, then the condition $da=0$ is automatic. In order for $a$ to have a Schwartz primitive, we need to know that $\int_{\mathbb{R}^d} a = 0$. This fact is implied by our assumption that $\hat a$ is supported in $\Ann_k$, because $\int_{\mathbb{R}^d} a = \hat a(0) = 0$.
Let $\psi_B$ be a partition of unity: $\sum_B \psi_B = 1$ on $\Ann_k$ and $\psi_B$ is supported in $B$. Decompose $a = \sum_B a_B$ where \[\hat a_B = \psi_B \hat a.\] The form $\hat a_B$ is smooth and supported in $\Ann_k \cup \Ann_{k-1} \cup \Ann_{k+1}$. Just as in the proof of Lemma \ref{projcomm}, it follows that $d a_B = 0$. Using Lemma \ref{multiplierbound}, $ \| a_B \|_{L^p} \le C \| a \|_{L^p}$ for all $1 \le p \le \infty$. We will construct a primitive $\Prim(a_B)$ for each form $a_B$ such that \begin{itemize} \item $ d \Prim(a_B) = a_B$. \item $\Prim(a_B)$ is a Schwartz form. \item $ \lVert \Prim(a_B) \rVert_{L^p} \le C 2^{-k} \lVert a_B \rVert_{L^p}$ for all $1 \le p \le \infty$, with a uniform constant $C$. \end{itemize} Finally, we define $\Prim(a) = \sum_B \Prim(a_B)$. Since $\Prim(a_B)$ has the desired properties, it follows that $\Prim(a)$ does also. Now we have to construct $\Prim(a_B)$. For ease of notation, we will abbreviate $a_B$ by $a$. We know that $\hat a$ is supported on $B$. We can choose coordinates so that $\omega_1 \sim 2^k$ on $B$. We write the form $a$ as \[\sum_I a_I(x) dx_I = \sum_{I = 1 \cup J} a_I(x) dx_1 \wedge dx_J + \sum_{1 \notin I} a_I dx_I.\] We define the antiderivative $\int a_I dx_1$ via the Fourier transform by the formula: \begin{equation} \label{defantider} \widehat {\textstyle\int a_I dx_1} (\omega) = \frac{1}{2 \pi i \omega_1} \hat a_I(\omega). \end{equation} Since $\omega_1 > 0$ on $B$, and $\hat a_I(\omega)$ is supported in $B$, the right-hand side is a smooth compactly supported function on Fourier space. Therefore, $\int a_I dx_1$ is a Schwartz function on $\mathbb{R}^d$. 
From \eqref{defantider}, we can also check that \[\frac{\partial}{\partial x_1} \left( {\textstyle\int a_I dx_1} \right) = a_I.\] (We can also define $\int a_I dx_1$ using definite integrals: \[\int a_I dx_1 (x_1, x_2, ..., x_d) = \int_{- \infty}^{x_1} a_I(\tilde x_1, x_2, ..., x_d) d \tilde x_1.\] This definite integral formula is equivalent to \eqref{defantider}. From the definite integral formula, it takes a little work to check that $\int a_I dx_1$ is in fact a Schwartz function on $\mathbb{R}^d$, although it's not that difficult. In our proof we will only need \eqref{defantider}.) We now define \[\Prim(a) = \sum_{I = 1 \cup J} ({\textstyle\int} a_I dx_1) dx_J.\] This is a standard construction for primitives of forms which appears in the proof of the Poincar\'e lemma, cf. \cite[p.~38]{BottTu}. We will check that $d \Prim(a) = a$, following the same general method as in \cite{BottTu}. We first compute $d( \int a_I dx_1)$: \[d({\textstyle\int} a_I dx_1) = \partial_1 ({\textstyle\int} a_I dx_1) dx_1 + \sum_{j=2}^d \partial_j ({\textstyle\int} a_I dx_1) dx_j = a_I dx_1 + \sum_{j=2}^d ({\textstyle\int} \partial_j a_I dx_1) dx_j.\] Now \[d \Prim(a) = \sum_{I = 1 \cup J} d ({\textstyle\int a_I dx_1}) \wedge dx_J = \sum_{I = 1 \cup J} a_I dx_1 \wedge dx_J + \sum_{I = 1 \cup J} \sum_{j=2}^d ({\textstyle\int \partial_j a_I dx_1}) dx_j \wedge dx_J.\] The first term is $\sum_{I = 1 \cup J} a_I dx_I$. So we have to check that the second term is the rest of $a$. In other words, we want to show that \begin{equation} \label{havetocheck} \sum_{I = 1 \cup J} \sum_{j=2}^d ({\textstyle\int} \partial_j a_I dx_1) dx_j \wedge dx_J = \sum_{1 \notin I'} a_{I'} dx_{I'}. \end{equation} Since both forms are Schwartz, it suffices to check that $\partial_1$ of both sides are equal: \begin{equation} \sum_{I = 1 \cup J} \sum_{j=2}^d \partial_j a_I dx_j \wedge dx_J = \sum_{1 \notin I'} \partial_1 a_{I'} dx_{I'}.
\end{equation} Since there is no 1 in $J$ or $j$ or $I'$, it suffices to check that $dx_1$ wedged with both sides are equal: \begin{equation} \sum_{I = 1 \cup J} \sum_{j=2}^d \partial_j a_I dx_1 \wedge dx_j \wedge dx_J = \sum_{1 \notin I'} \partial_1 a_{I'} dx_1 \wedge dx_{I'}. \end{equation} This in turn follows from $da = 0$. To bound $\Prim(a)$, the main point is that $| \frac{1}{2 \pi i \omega_1}| \sim 2^{-k}$ on the ball $B$. Choose a function $\eta_B$ with $\eta_B = 1$ on $B$, $0 \le \eta_B \le 1$, and $\eta_B$ supported in a slightly larger ball $\tilde B = 1.01 B$. We can assume that $\omega_1 \sim 2^k$ on $\tilde B$. Then \[\frac{1}{2 \pi i \omega_1} \hat a_I(\omega) = 2^{-k} \underbrace{\frac{1}{2 \pi i} \frac{2^k}{\omega_1} \eta_B}_{\tilde \eta_B} \hat a_I (\omega).\] The function $\tilde \eta_B$ is supported on $\tilde B$, and it obeys the bounds from Lemma \ref{multiplierbound}. The lemma tells us that \[\| {\textstyle\int} a_I dx_1 \|_{L^p} = 2^{-k} \lVert\left( \tilde \eta_B \hat a_I \right)^\vee \rVert_{L^p} \le C 2^{-k} \lVert a_I \rVert_{L^p}.\] Therefore $\lVert\Prim(a)\rVert_{L^p} \le C 2^{-k} \lVert a \rVert_{L^p}$ as desired. \end{proof} \begin{lem} \label{orth} For any function $f$, \[\sum_{k \in \mathbb{Z}} \| P_k f \|_{L^2}^2 \sim \| f \|_{L^2}^2.\] Similarly, for any form $a$, \[\sum_{k \in \mathbb{Z}} \| P_k a \|_{L^2}^2 \sim \| a \|_{L^2}^2.\] \end{lem} \begin{proof} By the Plancherel theorem, \[\sum_{k \in \mathbb{Z}} \| P_k f \|_{L^2}^2 = \sum_{k \in \mathbb{Z}} \int_{\mathbb{R}^d} \bigl\lvert \widehat{P_k f} \bigr\rvert^2 = \sum_{k \in \mathbb{Z}} \int_{\mathbb{R}^d} \lvert\eta_k(\omega)\rvert^2 \lvert\hat f(\omega)\rvert^2 d \omega.\] Now for every $\omega$, $(1/10) \le \sum_{k \in \mathbb{Z}} \eta_k(\omega)^2 \le 1$. This holds because $\sum_{k \in \mathbb{Z}} \eta_k(\omega) = 1$ and each $\eta_k (\omega) \ge 0$, and each $\omega$ lies in the support of $\eta_k$ for at most 5 values of $k$.
Therefore, \[\sum_{k \in \mathbb{Z}} \| P_k f \|_{L^2}^2 = \int_{\mathbb{R}^d} \biggl( \sum_{k \in \mathbb{Z}} \eta_k(\omega)^2 \biggr) |\hat f(\omega)|^2 d \omega \sim \int_{\mathbb{R}^d} |\hat f (\omega)|^2 d \omega = \int_{\mathbb{R}^d} |f(x)|^2 dx.\] For a form $a = \sum_{I} a_I(x) dx_I$, $P_k(a) = \sum_I P_k a_I(x) dx_I$ and $\| a \|_{L^2}^2 := \sum_I \int |a_I(x)|^2 dx$. So the case of forms follows from the case of functions. \end{proof} \begin{lem} \label{foursupp} The Fourier support of $P_{\le k} a_1 \wedge P_{\le k} a_2$ is contained in the ball of radius $2^{k+2}$ around 0. Therefore, \[P_{\le k+3} \left( P_{\le k} a_1 \wedge P_{\le k} a_2 \right) = P_{\le k} a_1 \wedge P_{\le k} a_2.\] \end{lem} \begin{proof} The Fourier support of $P_{\le k} a$ is contained in the ball $B(2^{k+1}, 0)$. For any functions $f$ and $g$, the Fourier transform of $fg$ is given by \[\widehat{fg}(\omega) = \hat f * \hat g(\omega) = \int \hat f(\tilde \omega) \hat g( \omega - \tilde \omega) d \tilde \omega.\] If $\hat f$ and $\hat g$ are supported in $B(2^{k+1}, 0)$, then $\widehat{fg}$ is supported in $B( 2\cdot 2^{k+1}, 0)$. This argument also applies to wedge products of forms instead of products of functions, just by writing out the components of the forms. This shows that the Fourier transform of $P_{\le k} a_1 \wedge P_{\le k} a_2$ is supported in $B(2^{k+2}, 0)$. Now $\eta_{\le k+3}(\omega)$ is identically 1 on this ball, and so \[P_{\le k+3} \left( P_{\le k} a_1 \wedge P_{\le k} a_2 \right) = P_{\le k} a_1 \wedge P_{\le k} a_2. \qedhere\] \end{proof} \subsection{Bounds for connected sums of $\mathbb{C} P^2$s} \subsubsection{Setup} In this section, we will prove Theorem \ref{cp2k}. We recall the statement. \begin{thm*} Let $X_k = \#_k \mathbb{C} P^2$. Fix a metric $g$ on $X_k$. Suppose that $f: X_k \rightarrow X_k$ is $L$-Lipschitz.
If $k \ge 4$, then \[\deg (f) \le C(k, g) L^4 (\log L)^{-1/2}.\] \end{thm*} \begin{proof} Let $u_i \in H^2(X_k; \mathbb{R})$ be a cohomology class dual to the $i$th copy of $\mathbb{C} P^1$ in $X_k$, for $i = 1, \ldots, k$. Let $\alpha_i$ be a 2-form in the cohomology class $u_i$. We can assume that the $\alpha_i$ have disjoint supports. For any $i$, we can write \begin{equation} \label{degform} \deg (f) = \int_{X_k} f^* \alpha_i \wedge f^* \alpha_i. \end{equation} We will use Littlewood--Paley theory to estimate the right-hand side. Because Littlewood--Paley theory is by far nicest on $\mathbb{R}^d$, we first switch to charts. Fix an atlas of charts for $X_k$: suppose that $X_k = \bigcup U'$, and $\phi_U: U \rightarrow U'$ are parametrizations. Suppose that $\sum_{U'} \psi_{U'} = 1$ is a partition of unity on $X_k$ subordinate to these charts. Define $\psi_U:\mathbb{R}^4 \to \mathbb{R}$ by \[\psi_U(x) = \begin{cases} \phi_U^{*} \psi_{U'}(x) & x \in U \\ 0 & x \notin U. \end{cases}\] Now we can extend $\phi_U|_{\supp(\psi_U)}$ to a smooth map $\tilde\phi_U:\mathbb R^4 \to X_k$, and we can do it so that $\tilde\phi_U$ sends the complement of a compact set to a single point. Then define differential forms $a_i$ on $\mathbb{R}^4$ by \begin{equation} \label{defai} a_i = \tilde\phi_U^* f^*\alpha_i. \end{equation} (The forms $a_i$ also implicitly depend on $U$.) Plugging this definition into \eqref{degform}, we get \begin{equation} \label{degform2} \deg (f) = \sum_U \int_{\mathbb{R}^4} \psi_U a_i \wedge a_i. \end{equation} We will bound each of these integrals. Before going on, we discuss properties of the $a_i$. We made sure these forms are defined on all of $\mathbb{R}^4$ so that we can apply Littlewood--Paley theory. We have $\| a_i \|_{L^\infty} \lesssim L^2$. We also know that $d a_i = 0$. The form $a_i$ is supported on a fixed ball, and so for every $1 \le p \le \infty$, we also have $\| a_i \|_{L^p} \lesssim \| a_i \|_{L^\infty} \lesssim L^2$.
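For completeness, here is the standard pullback computation behind the bounds $\| a_i \|_{L^\infty} \lesssim L^2$ and $d a_i = 0$ (here $C$ denotes a Lipschitz bound for $\tilde\phi_U$, which depends only on the atlas, not on $f$; if $f$ is merely Lipschitz, the computation is interpreted almost everywhere via Rademacher's theorem). The composition $F = f \circ \tilde\phi_U$ is $CL$-Lipschitz, so for unit vectors $v, w$ at any point of $\mathbb{R}^4$,
\[|a_i(v,w)| = |\alpha_i(dF\, v, dF\, w)| \lesssim \|\alpha_i\|_{L^\infty}\, |dF\, v|\, |dF\, w| \le \|\alpha_i\|_{L^\infty} (CL)^2 \lesssim L^2,\]
and $d a_i = F^*(d\alpha_i) = 0$ because the forms $\alpha_i$ are closed.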
\subsubsection{Using that $k$ is large} \label{seckge4} In this section, we prove a lemma that takes advantage of the fact that $k \ge 4$. This lemma is similar to a lemma in \cite{scal}. \begin{lem} \label{kge4} Suppose that $k \ge 4$ and that $b_1, \ldots, b_k$ are 2-forms on $\mathbb{R}^4$. Then at each point $x$, we have \[| b_1 \wedge b_1 (x) | \le C \sum_{i \not= j} \bigl( | b_i \wedge b_i - b_j \wedge b_j| + | b_i \wedge b_j| \bigr).\] \end{lem} \begin{proof} Suppose not: then no constant $C$ works, and there is a sequence of counterexamples on which the ratio of the left-hand side to the right-hand side tends to infinity. By scaling, we can assume in each counterexample that $b_1 \wedge b_1(x) = dx_1 \wedge \cdots \wedge dx_4$. Then, far along the sequence, $b_j \wedge b_j(x)$ must be almost $dx_1 \wedge \cdots \wedge dx_4$ for every $j$, and $b_i \wedge b_j(x)$ must be almost zero for every $i \neq j$. Next we will get a contradiction by considering the wedge product. Let $W: \Lambda^2 \mathbb{R}^4 \times \Lambda^2 \mathbb{R}^4 \rightarrow \Lambda^4 \mathbb{R}^4$ be the quadratic form given by the wedge product. It has signature (3,3). Now let $B \subset \Lambda^2 \mathbb{R}^4$ be the subspace spanned by $b_1, \ldots, b_k$. When we restrict $W$ to the subspace $B$, we will check that it has signature $(k,0)$. Since $k \ge 4$ and a quadratic form of signature $(3,3)$ has no positive definite subspace of dimension greater than 3, this gives the desired contradiction. It remains to compute the signature of the quadratic form $W$ restricted to $B$. This is isomorphic to the quadratic form $(c_1, \ldots, c_k) \mapsto (\sum c_i b_i(x)) \wedge (\sum c_i b_i(x))$. Expanding out the right-hand side, we get \[\sum_{i,j} c_i c_j b_i \wedge b_j.\] Since $b_i \wedge b_j$ is almost 0 for every $i \neq j$ and $b_i \wedge b_i$ is almost $dx_1 \wedge \cdots \wedge dx_4$ for every $i$, we see that this form is almost \[(c_1, \ldots, c_k) \mapsto (c_1^2 + \cdots + c_k^2) dx_1 \wedge \cdots \wedge dx_4.\] In particular, the form has signature $(k,0)$. \end{proof} \subsubsection{Relations in cohomology and low-frequency bounds} Let $u_i \in H^2(X_k; \mathbb{R})$ be a cohomology class dual to the $i$th copy of $\mathbb{C} P^1$ in $X_k$, for $i = 1, \ldots, k$.
Let $\alpha_i$ be a 2-form in the cohomology class $u_i$. We know that $u_i \smile u_i - u_j \smile u_j = 0$ in $H^4(X_k; \mathbb{R})$. Therefore, the corresponding differential forms $\alpha_i \wedge \alpha_i - \alpha_j \wedge \alpha_j $ are exact. Similarly, for $i \neq j$, $u_i \smile u_j = 0$, and so the forms $\alpha_i \wedge \alpha_j$ are exact. Let $\gamma_r$ be primitives for these forms. We have $2{k \choose 2}$ exact forms total, and so $r$ goes from $1$ to $2{k \choose 2}$. Define $g_r = \tilde\phi_U^* f^* \gamma_r$. Since $\gamma_r$ is a 3-form, \begin{equation} \label{grbound} \|g_r \|_{L^\infty} \lesssim L^3. \end{equation} Depending on $r$, we have $dg_r = a_i \wedge a_i - a_j \wedge a_j$ or $dg_r = a_i \wedge a_j$ with $i \neq j$. The bound $\| g_r \|_{L^\infty} \lesssim L^3$ gives extra information about $a_i \wedge a_j$. In particular, we get bounds on the low frequency parts of $a_i \wedge a_j$. \begin{lem} \label{lowfreqaiaj} If $i \neq j$, then \begin{align*} \lVert P_{k} ( a_i \wedge a_j) \rVert_{L^\infty} &\lesssim 2^k L^3 \\ \lVert P_{k} ( a_i \wedge a_i - a_j \wedge a_j) \rVert_{L^\infty} &\lesssim 2^k L^3. \end{align*} The same bounds hold with $P_{\le k}$ in place of $P_k$. \end{lem} Notice that $\| a_i \|_{L^\infty} \lesssim L^2$, and so we have $\| a_i \wedge a_j \|_{L^\infty} \lesssim L^4$. But the low frequency part of $a_i \wedge a_j$ obeys a much stronger bound.
\begin{proof} We write \[\left\lvert P_{k} (a_i \wedge a_j) (x) \right\rvert = \left\lvert \int \eta_{k}^\vee (y) a_i \wedge a_j (x-y) dy \right\rvert.\] We now substitute in $a_i \wedge a_j = d g_r$ and then integrate by parts: \[\left\lvert \int \eta_{k}^\vee (y) d g_r (x-y) dy \right\rvert = \left\lvert \int d \eta_{k}^\vee (y) g_r(x-y) dy \right\rvert.\] Since $\| g_r \|_{L^\infty} \lesssim L^3$, and $\int |d \eta_k^\vee| \lesssim 2^k$ by Lemma \ref{etaveebound}, our expression is bounded by \[\lesssim L^3 \int | d \eta_k^\vee| \lesssim 2^k L^3.\] The same proof applies to $ \| P_{k} ( a_i \wedge a_i - a_j \wedge a_j) \|_{L^\infty}$ and with $P_{\le k}$ in place of $P_k$. \end{proof} \subsubsection{Toy case: all forms are low frequency} To illustrate how the tools we have developed work together, we now do a toy case of our main theorem: the case where all forms have low frequency. Suppose that the forms $a_i$ are all low-frequency: $P_{\le 1} a_i = a_i$ for every $i$. It follows that the wedge products are also fairly low frequency: $P_{\le 4} (a_i \wedge a_j) = a_i \wedge a_j$ for every $i, j$, by Lemma \ref{foursupp}. We can now bound $\int \psi_U a_1 \wedge a_1$ using the tools we have developed. First, Lemma \ref{kge4} tells us that \[\int \psi_U a_1 \wedge a_1 \le \int \psi_U |a_1 \wedge a_1| \lesssim \sum_{i \neq j} \Bigl( \int \psi_U | a_i \wedge a_j| + \int \psi_U |a_i \wedge a_i - a_j \wedge a_j| \Bigr).\] We are discussing the low frequency special case, where $|a_i \wedge a_j| = | P_{\le 4} (a_i \wedge a_j) |$. By Lemma \ref{lowfreqaiaj}, we have \[|a_i \wedge a_j| = | P_{\le 4} (a_i \wedge a_j) | \lesssim L^3.\] Similarly, \[|a_i \wedge a_i - a_j \wedge a_j| = | P_{\le 4} (a_i \wedge a_i - a_j \wedge a_j) | \lesssim L^3.\] Therefore, $\int \psi_U a_1 \wedge a_1 \lesssim L^3$, and so finally we have $\deg f \lesssim L^3$. If we have a weaker low frequency assumption that $P_{\le \bar \ell} a_i = a_i$ for every $i$, then the same argument shows that $\deg f \lesssim 2^{\bar \ell} L^3$.
As long as the frequency range $2^{\bar \ell}$ is significantly less than $L$, we get a strong estimate. For instance, if $2^{\bar \ell} = L^{.9}$, then $\deg f \lesssim L^{3.9}$. \subsubsection{Bounding high-frequency contributions} We use the Littlewood--Paley decomposition to write \[\int_{\mathbb{R}^d} \psi_U a_i \wedge a_i = \int_{\mathbb{R}^d} \psi_U \sum_{k \in \mathbb{Z}} P_k a_i \wedge \sum_{\ell \in \mathbb{Z}} P_\ell a_i.\] We can bound each term on the right-hand side by using our primitive estimate, Lemma \ref{prim}, and integration by parts: \begin{align*} \left\lvert\int_{\mathbb{R}^d} \psi_U P_k a_i \wedge P_\ell a_i \right\rvert &= \left\lvert\int_{\mathbb{R}^d} \psi_U P_k a_i \wedge d( \Prim(P_\ell a_i) )\right\rvert \\ &= \left\lvert\int d \psi_U \wedge P_k a_i \wedge \Prim( P_\ell a_i)\right\rvert \\ &\le \int \lvert d\psi_U \rvert \lvert P_k a_i \rvert \lvert\Prim (P_\ell a_i)\rvert. \end{align*} Now $d \psi_U$ is a fixed $C^\infty_{comp}$ form, and we have $|P_k a_i| \lesssim L^2$ and $| \Prim( P_\ell a_i)| \lesssim 2^{-\ell} L^2$. All together, we get the bound \begin{equation} \label{highfreqsmall} \left\lvert\int_{\mathbb{R}^d} \psi_U P_k a_i \wedge P_\ell a_i\right\rvert \lesssim 2^{-\ell} L^4. \end{equation} This shows that the high-frequency parts of $a_i$ contribute little to the integral for the degree. By summing this geometric series of error terms, we see that \begin{lem} \label{highfreqsmall2} For any frequency cutoff $\bar \ell$, \[\left\lvert\int_{\mathbb{R}^d} \psi_U a_i \wedge a_i\right\rvert \lesssim \left\lvert\int \psi_U P_{\le \bar \ell} a_i \wedge P_{\le \bar \ell} a_i \right\rvert + O(2^{-\bar \ell} L^4).\] \end{lem} In particular, Lemma \ref{highfreqsmall2} allows us to resolve another toy case of our problem. If every form $a_i$ is purely high-frequency, in the sense that $P_{\le \bar \ell} a_i = 0$, then Lemma \ref{highfreqsmall2} gives the bound $\deg f \lesssim 2^{- \bar \ell} L^4$.
For instance, if $2^{\bar \ell}$ is at least $L^{1/10}$, then we get a strong estimate: $\deg f \lesssim L^{3.9}$. We now have strong bounds in two toy cases: the pure low frequency case and the pure high frequency case. We will prove bounds in the general case by combining these tools. However, combining the tools is not completely straightforward. Based on the discussion above, it initially sounds like we might get a bound of the form $\deg f \lesssim L^{4 - \beta}$ for some $\beta > 0$. But there are maps $f$ with Lipschitz constant $L$ and degree at least $L^4 (\log L)^{-C}$ for some constant $C$. The forms coming from these maps crucially have significant contributions at all frequency levels. \subsubsection{Bounds in the general case} We begin by applying Lemma \ref{highfreqsmall2}. For any frequency cutoff $\bar \ell$, the lemma tells us that \begin{equation} \label{breakoffhigh} \left\lvert\int_{\mathbb{R}^d} \psi_U a_1 \wedge a_1\right\rvert \lesssim \int \psi_U \left\lvert P_{\le \bar \ell} a_1 \wedge P_{\le \bar \ell} a_1\right\rvert + 2^{-\bar \ell} L^4. \end{equation} We will choose $\bar \ell$ later, in the range $2^{\bar \ell} \ge L^{1/10}$. This guarantees that the last term is $\lesssim L^{3.9}$, which is much smaller than our goal. To control the first term, we apply Lemma \ref{kge4} with $b_i = P_{\le \bar \ell} a_i(x)$ at each point $x$.
Lemma \ref{kge4} tells us that at each point \[\left\lvert P_{\le \bar \ell} a_1 \wedge P_{\le \bar \ell} a_1 \right\rvert \lesssim \sum_{i \neq j} \Bigl( \left\lvert P_{\le \bar \ell} a_i \wedge P_{\le \bar \ell} a_j \right\rvert + \left\lvert P_{\le \bar \ell} a_i \wedge P_{\le \bar \ell} a_i - P_{\le \bar \ell} a_j \wedge P_{\le \bar \ell} a_j\right\rvert \Bigr).\] Plugging into the integral, we get \[\int \psi_U | P_{\le \bar \ell} a_1 \wedge P_{\le \bar \ell} a_1 | \lesssim \sum_{i \neq j} \underbrace{\int \psi_U |P_{\le \bar \ell} a_i \wedge P_{\le \bar \ell} a_j|}_{I} + \underbrace{\int \psi_U |P_{\le \bar \ell} a_i \wedge P_{\le \bar \ell} a_i - P_{\le \bar \ell} a_j \wedge P_{\le \bar \ell} a_j|}_{II}.\] The two terms are similar to each other. We focus on the terms of type I first. The same arguments apply to type II. The form $P_{\le \bar \ell} a_i \wedge P_{\le \bar \ell} a_j$ looks a little bit like $P_{\le \bar \ell} (a_i \wedge a_j)$, which has strong bounds coming from Lemma \ref{lowfreqaiaj}. However, these forms are not equal to each other. We will examine the situation more carefully and find that \begin{equation} \label{lowfreqcompare} P_{\le \bar \ell} a_i \wedge P_{\le \bar \ell} a_j = P_{\le \bar \ell + 3} (a_i \wedge a_j) + \text{ additional terms}. \end{equation} The additional terms are crucial to our story---they actually make the largest contribution in our bound for the degree of $f$. To work out the details of \eqref{lowfreqcompare}, we begin by doing the Littlewood--Paley expansion of $a_i$ and $a_j$: \[a_i \wedge a_j = \sum_{k_1, k_2 \in \mathbb{Z}} P_{k_1} a_i \wedge P_{k_2} a_j.\] Grouping the terms according to whether $k_1$ or $k_2$ is bigger, we get \begin{equation} \label{expandaiaj} a_i \wedge a_j = P_{\le \bar \ell} a_i \wedge P_{\le \bar \ell} a_j + \sum_{k_1 = \bar \ell+1}^\infty P_{k_1} a_i \wedge P_{\le k_1} a_j + \sum_{k_2 = \bar \ell+1}^\infty P_{< k_2} a_i \wedge P_{k_2} a_j.
\end{equation} Note that the Fourier transform of $P_{\le \bar \ell} a_i \wedge P_{\le \bar \ell} a_j$ is supported in $|\omega| \le 4 \cdot 2^{\bar \ell}$ (cf. Lemma \ref{foursupp}). Therefore \[P_{\le \bar \ell + 3} ( P_{\le \bar \ell} a_i \wedge P_{\le \bar \ell} a_j ) = P_{\le \bar \ell} a_i \wedge P_{\le \bar \ell} a_j .\] We apply $P_{\le \bar \ell + 3}$ to both sides of \eqref{expandaiaj} to get \begin{equation} \label{expandaiaj2} \begin{aligned} P_{\le \bar \ell +3} ( a_i \wedge a_j ) &= P_{\le \bar \ell} a_i \wedge P_{\le \bar \ell} a_j \\ & \qquad {}+ \sum_{k_1 = \bar \ell+1}^\infty P_{\le \bar \ell +3} (P_{k_1} a_i \wedge P_{\le k_1} a_j) + \sum_{k_2 = \bar \ell+1}^\infty P_{\le \bar \ell + 3} (P_{< k_2} a_i \wedge P_{k_2} a_j). \end{aligned} \end{equation} This gives us our fleshed-out version of \eqref{lowfreqcompare}: \begin{equation} \label{lowfreqcompare2} \begin{aligned} P_{\le \bar \ell} a_i \wedge P_{\le \bar \ell} a_j &= \underbrace{P_{\le \bar \ell + 3} (a_i \wedge a_j)}_{\textrm{Term 1}} \\ & \qquad {} - \underbrace{ \sum_{k_1 = \bar \ell+1}^\infty P_{\le \bar \ell +3} (P_{k_1} a_i \wedge P_{\le k_1} a_j) }_{\textrm{Term 2.1}}- \underbrace{\sum_{k_2 = \bar \ell+1}^\infty P_{\le \bar \ell + 3} (P_{< k_2} a_i \wedge P_{k_2} a_j)}_{\textrm{Term 2.2}}. \end{aligned} \end{equation} We want to bound $\int \psi_U | P_{\le \bar \ell} a_i \wedge P_{\le \bar \ell} a_j |$. We plug in \eqref{lowfreqcompare2}, and then we have to bound the contributions of Terms 1, 2.1, and 2.2. The contribution of Term 1 is bounded using Lemma \ref{lowfreqaiaj}: \begin{equation} \label{term1bound} \int \psi_U | P_{\le \bar \ell + 3} (a_i \wedge a_j)| \lesssim 2^{\bar \ell} L^3. \end{equation} We will choose $\bar \ell$ in the range $2^{\bar \ell} \le L^{9/10}$, and so the right-hand side is $\lesssim L^{3.9}$, much smaller than our goal. Terms 2.1 and 2.2 are similar, so we just explain Term 2.1.
The contribution of Term 2.1 is at most \begin{equation} \label{term21} \sum_{k_1 = \bar \ell+1}^\infty \int \psi_U \lvert P_{\le \bar \ell +3} (P_{k_1} a_i P_{\le k_1} a_j) \rvert \le \sum_{k_1 = \bar \ell + 1}^\infty \lVert P_{\le \bar \ell +3} (P_{k_1} a_i P_{\le k_1} a_j) \rVert_{L^1}. \end{equation} We start with a direct bound for this $L^1$ norm. Lemma \ref{Pkbound} gives \[\| P_{\le \bar \ell +3} (P_{k_1} a_i P_{\le k_1} a_j) \|_{L^1} \le \| P_{k_1} a_i P_{\le k_1} a_j \|_{L^1} \le \| P_{k_1} a_i \|_{L^1} \| P_{\le k_1} a_j \|_{L^\infty}.\] Now Lemma \ref{Pkbound} again gives $\| P_{\le k_1} a_j \|_{L^\infty} \lesssim \| a_j \|_{L^\infty} \lesssim L^2$. All together this gives \begin{equation} \label{direct} \| P_{\le \bar \ell +3} (P_{k_1} a_i P_{\le k_1} a_j) \|_{L^1} \lesssim L^2 \| P_{k_1} a_i \|_{L^1}. \end{equation} If $k_1$ is close to $\bar \ell$, this is the best bound we know. But if $k_1$ is much larger than $\bar \ell$, then we can get a better estimate by using the primitive of $P_{k_1} a_i$ and integrating by parts. \[ P_{\le \bar \ell +3} (P_{k_1} a_i P_{\le k_1} a_j) = \eta_{\le \bar \ell + 3}^\vee * \left[ \bigl( d \Prim ( P_{k_1} a_i) \bigr) P_{\le k_1} a_j \right].\] Writing out what this means and integrating by parts, we get: \begin{align*} \left\lvert P_{\le \bar \ell +3} (P_{k_1} a_i P_{\le k_1} a_j) (x)\right\rvert &= \left\lvert \int \eta_{\le \bar \ell + 3}^\vee(y) ( d \Prim ( P_{k_1} a_i))(x-y) P_{\le k_1} a_j (x-y) dy \right\rvert \\ &= \left\lvert \int d \eta_{\le \bar \ell + 3}^\vee(y) ( \Prim ( P_{k_1} a_i))(x-y) P_{\le k_1} a_j (x-y) dy \right\rvert.
\end{align*} Therefore, we have a pointwise bound \[\left\lvert P_{\le \bar \ell +3} (P_{k_1} a_i P_{\le k_1} a_j) \right\rvert \le \left\lvert d \eta_{\le \bar \ell + 3}^\vee * \left[ \Prim (P_{k_1} a_i) \cdot P_{\le k_1} a_j \right]\right\rvert.\] Taking $L^1$ norms, we get \[\lVert P_{\le \bar \ell +3} (P_{k_1} a_i P_{\le k_1} a_j) \rVert_{L^1} \le \lVert d \eta_{\le \bar \ell + 3}^\vee \rVert_{L^1} \lVert \Prim P_{k_1} a_i \rVert_{L^1} \lVert P_{\le k_1} a_j \rVert_{L^\infty}.\] Now Lemma \ref{etaveebound} gives $ \| d \eta_{\le \bar \ell + 3}^\vee \|_{L^1} \lesssim 2^{\bar \ell}$ and Lemma \ref{prim} gives $ \lVert\Prim P_{k_1} a_i \rVert_{L^1} \lesssim 2^{-k_1} \lVert P_{k_1} a_i \rVert_{L^1}$. We also know by Lemma \ref{Pkbound} that $\lVert P_{\le k_1} a_j \rVert_{L^\infty} \lesssim \lVert a_j \rVert_{L^\infty} \lesssim L^2$. Putting these bounds together, we see that \begin{equation} \label{intpartsbound} \lVert P_{\le \bar \ell +3} (P_{k_1} a_i P_{\le k_1} a_j) \rVert_{L^1} \lesssim 2^{\bar \ell - k_1} L^2 \lVert P_{k_1} a_i \rVert_{L^1}. \end{equation} Returning to the contribution of Term 2.1 in \eqref{term21}, we have the bound \begin{equation} \label{term21bound} \sum_{k_1 = \bar \ell+1}^\infty \int \psi_U \lvert P_{\le \bar \ell +3} (P_{k_1} a_i P_{\le k_1} a_j) \rvert \le \sum_{k_1 = \bar \ell + 1}^\infty 2^{\bar \ell - k_1} L^2 \lVert P_{k_1} a_i \rVert_{L^1}. \end{equation} Putting together our bounds for all the different terms, we get the following estimate for any choice of scale $\bar \ell$: \begin{equation} \label{finalbound} \left\lvert\int_{\mathbb{R}^d} \psi_U a_1 \wedge a_1\right\rvert \lesssim 2^{- \bar \ell} L^4 + 2^{\bar \ell} L^3 + \sum_{k_1 = \bar \ell + 1}^\infty 2^{\bar \ell - k_1} L^2 \| P_{k_1} a_i \|_{L^1}. \end{equation} (On the right-hand side, the first term comes from high frequency pieces, the next term comes from Term 1 and is bounded using the low frequency method, and the final term comes from Terms 2.1 and 2.2.
The fact that $k \ge 4$ is used in the bound for Term 1.) Let us pause to digest this bound. To begin, note that the first two terms, $2^{- \bar \ell} L^4 + 2^{\bar \ell} L^3$, can be made much smaller than $L^4$. For instance, we can choose $\bar \ell$ so that $2^{\bar \ell} = L^{1/2}$, and then these first two terms give $L^{3.5}$. The final term is often the most important. Now let us try to get some intuition about the last term. Because of the exponentially decaying factor $2^{\bar \ell - k_1}$, the final term comes mainly from $k_1$ close to $\bar \ell$. If $\| P_{k_1} a_i \|_{L^1}$ is very small for a range of $k_1$, then it is strategic for us to choose $\bar \ell$ at the start of this range. This scenario could lead to a bound which is much stronger than $L^4 (\log L)^{-1/2}$---see Proposition \ref{allfreq} below. On the other hand, it may happen that the $\| P_{k_1} a_i \|_{L^1}$ are all roughly equal. This is actually the worst scenario from the point of view of Theorem \ref{cp2k}. In this case we can improve on the bound $\| P_{k_1} a_i \|_{L^1} \lesssim \| a_i \|_{L^1} \lesssim L^2$ by using the orthogonality of the $P_{k_1} a_i$. By Cauchy--Schwarz, $\| P_{k_1} a_i \|_{L^1} \lesssim \|P_{k_1} a_i \|_{L^2}$, and $\sum_{k_1} \| P_{k_1} a_i \|_{L^2}^2 \lesssim \| a_i \|_{L^2}^2 \lesssim L^4$. If the $\|P_{k_1} a_i \|_{L^1}$ are all roughly equal, then we can compute $\| P_{k_1} a_i \|_{L^1} \lesssim L^2 (\log L)^{-1/2}$. Plugging this into the last term, and summing the geometric series, the last term contributes $L^4 (\log L)^{-1/2}$. We now finish the formal proof of Theorem \ref{cp2k}. We will choose $\bar \ell$ in the range $L^{1/10} \le 2^{\bar \ell} \le L^{9/10}$. The number of different $\bar \ell$ in this range is $\sim \log L$.
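To spell out this heuristic: suppose (for the sake of intuition only) that $\| P_{k_1} a_i \|_{L^1} = A$ for each of the $N \sim \log L$ scales $k_1$ in the range $L^{1/10} \le 2^{k_1} \le L^{9/10}$. Then

```latex
% Equal mass at N ~ log L scales, combined with Cauchy--Schwarz and orthogonality:
N A^2 \lesssim \sum_{k_1} \lVert P_{k_1} a_i \rVert_{L^2}^2
      \lesssim \lVert a_i \rVert_{L^2}^2 \lesssim L^4,
\qquad \text{so} \qquad
A \lesssim L^2 N^{-1/2} \sim L^2 (\log L)^{-1/2},
% and, since \sum_{k_1 > \bar\ell} 2^{\bar\ell - k_1} \le 1, the final term of
% the estimate above contributes
\sum_{k_1 = \bar \ell + 1}^{\infty} 2^{\bar \ell - k_1} L^2 A
      \lesssim L^2 \cdot L^2 (\log L)^{-1/2} = L^4 (\log L)^{-1/2}.
```

This borderline behavior is exactly what the averaging over scales $\bar \ell$ below is designed to handle.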
For each $\bar \ell$ in this range, (\ref{finalbound}) gives: \[ \left\lvert\int_{\mathbb{R}^d} \psi_U a_1 \wedge a_1\right\rvert \lesssim L^{3.9} + \sum_{k_1 = \bar \ell + 1}^\infty 2^{\bar \ell - k_1} L^2 \| P_{k_1} a_i \|_{L^1}.\] Adding together all the $\bar \ell$ in this range, we get \begin{equation} \label{addscales} \log L \left\lvert\int_{\mathbb{R}^d} \psi_U a_1 \wedge a_1\right\rvert \lesssim L^{3.91} + \sum_{L^{1/10} \le 2^{\bar \ell} \le L^{9/10}} \sum_{k_1 = \bar \ell + 1}^\infty 2^{\bar \ell - k_1} L^2 \| P_{k_1} a_i \|_{L^1}. \end{equation} In this sum, the terms with $2^{k_1} > L$ can be bounded by $L^{3.9}$ and absorbed into the first term. The remaining terms are \[ \sum_{L^{1/10} \le 2^{k_1} \le L} \sum_{L^{1/10} \le 2^{\bar \ell} \le 2^{k_1 - 1}} 2^{\bar \ell - k_1} L^2 \| P_{k_1} a_i \|_{L^1} \lesssim \sum_{L^{1/10} \le 2^{k_1} \le L} L^2 \|P_{k_1} a_i \|_{L^1}.\] Next we want to use orthogonality from Lemma \ref{orth}: $\| a_i \|_{L^2}^2 \sim \sum_k \| P_k a_i \|_{L^2}^2$. To get these $L^2$ norms into play we apply Cauchy--Schwarz. Since the $a_i$ are supported in a fixed ball, and since the $P_{k_1} a_i$ are rapidly decaying away from that ball, we have $\| P_{k_1} a_i \|_{L^1} \lesssim \| P_{k_1} a_i \|_{L^2}$. Since there are $\sim \log L$ values of $k_1$ in the range $L^{1/10} \le 2^{k_1} \le L$, we have \begin{align*} \sum_{L^{1/10} \le 2^{k_1} \le L} L^2 \|P_{k_1} a_i \|_{L^1} &\lesssim (\log L)^{1/2} L^2 \left( \sum_{L^{1/10} \le 2^{k_1} \le L} \|P_{k_1} a_i \|_{L^2}^2 \right)^{1/2} \\ &\lesssim (\log L)^{1/2} L^2 \| a_i \|_{L^2} \lesssim (\log L)^{1/2} L^4.
\end{align*} Plugging this back into \eqref{addscales}, we see that \[\log L \left\lvert\int_{\mathbb{R}^d} \psi_U a_1 \wedge a_1\right\rvert \lesssim L^{3.91} + (\log L)^{1/2} L^4 \] and so \[\left\lvert\int_{\mathbb{R}^d} \psi_U a_1 \wedge a_1\right\rvert \lesssim (\log L)^{-1/2} L^4.\] But the degree of $f$ is given by \eqref{degform2}: \[\deg (f) = \sum_U \int \psi_U a_1 \wedge a_1 \lesssim (\log L)^{-1/2} L^4.\] This finishes the proof of Theorem \ref{cp2k}. \end{proof} The bound (\ref{finalbound}) contains somewhat more information than Theorem \ref{cp2k}. It also tells us that if the degree of $f$ is close to $L^4 (\log L)^{-1/2}$, then the forms $a_i$ must have contributions from essentially all frequency ranges. We make this precise in the following proposition. \begin{prop} \label{allfreq} Suppose that $k \ge 4$. Suppose $f: X_k \rightarrow X_k$ is $L$-Lipschitz. Let the forms $a_i$ be as in (\ref{defai}), and fix $0 < \beta_1 < \beta_2 < 1$ and $\gamma > 0$. Suppose that for every chart and every $i$, and every $k_1$ in the range $L^{\beta_1} < 2^{k_1} < L^{\beta_2}$, \begin{equation} \label{freqrangebound} \lVert P_{k_1} a_i \rVert_{L^1} \le L^{2 - \gamma}. \end{equation} Then the degree of $f$ is bounded by $C(g) L^{4 - \eta}$, where \[\eta = \min( \beta_1, \beta_2 - \beta_1, \gamma).\] \end{prop} \begin{proof} Recall that $\| P_{k_1} a_i \|_{L^1} \lesssim \| a_i \|_{L^1} \lesssim L^2$. The hypothesis (\ref{freqrangebound}) says that we have a stronger bound on $\| P_{k_1} a_i \|_{L^1}$ when $2^{k_1}$ lies in the range $[L^{\beta_1}, L^{\beta_2}]$. To prove the bound, we plug all our hypotheses into the bound (\ref{finalbound}). That shows that the degree is bounded by \[L^{4 - \beta_1} + L^{3 + \beta_1} + \sum_{L^{\beta_1} \le 2^{k_1} \le L^{\beta_2}} L^{\beta_1} 2^{-k_1} L^2 L^{2 - \gamma} + \sum_{2^{k_1} \ge L^{\beta_2}} L^{\beta_1} 2^{-k_1} L^4.\] Carrying out the geometric series and grouping terms finishes the proof.
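In detail, taking $2^{\bar \ell} \sim L^{\beta_1}$ in \eqref{finalbound}, the two sums are (a sketch):

```latex
% Middle range: the hypothesis \|P_{k_1} a_i\|_{L^1} \le L^{2-\gamma} applies,
% and the geometric series is dominated by its first term 2^{k_1} ~ L^{\beta_1}:
\sum_{L^{\beta_1} \le 2^{k_1} \le L^{\beta_2}} L^{\beta_1} 2^{-k_1} L^2 \cdot L^{2-\gamma}
      \lesssim L^{4-\gamma}.
% High range: only the trivial bound \|P_{k_1} a_i\|_{L^1} \lesssim L^2 is available:
\sum_{2^{k_1} \ge L^{\beta_2}} L^{\beta_1} 2^{-k_1} L^2 \cdot L^2
      \lesssim L^{\beta_1 - \beta_2} L^4 = L^{4 - (\beta_2 - \beta_1)}.
```

Since $\beta_2 < 1$ gives $\eta \le \beta_2 - \beta_1 < 1 - \beta_1$, the remaining terms $L^{4-\beta_1}$ and $L^{3+\beta_1} = L^{4-(1-\beta_1)}$ are also $\lesssim L^{4-\eta}$.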
\end{proof} \subsection{General estimate} In this section, we prove Theorem \ref{gendegbound}. We recall the statement. \begin{thm*} Suppose that $M$ is a closed connected oriented $n$-manifold such that $H^*(M; \mathbb{R})$ does not embed into $\Lambda^* \mathbb{R}^n$, and $N$ is any closed oriented $n$-manifold. Then there exists $\alpha(M) > 0$ so that for any metric $g$ on $M$ and $g'$ on $N$ and any map $f: N \rightarrow M$ with $\Lip(f) = L$, \[\deg(f) \le C(M,g,N,g') L^n (\log L)^{- \alpha(M)}.\] \end{thm*} \begin{rmk} The constant $\alpha(M)$ depends only on the real cohomology algebra of $M$, $H^*(M; \mathbb{R})$. \end{rmk} \begin{rmk} Because the constant $C(M,g,N,g')$ depends on the metrics, it suffices to prove the estimate for any one choice of metrics. \end{rmk} The main difference between the general situation in Theorem \ref{gendegbound} and the special case $X_k = (\mathbb{C} P^2)^{\# k}$ in Theorem \ref{cp2k} is to find the right analogue of Lemma \ref{kge4}. Lemma \ref{kge4} takes advantage of the hypothesis that $k \ge 4$ for $X_k$. Similarly, the following lemma takes advantage of the hypothesis that $H^*(M; \mathbb{R})$ does not embed into $\Lambda^* \mathbb{R}^n$. \begin{lem} \label{topvsrel} Suppose that $M$ is a closed connected oriented $n$-manifold such that $H^*(M; \mathbb{R})$ does not embed into $\Lambda^* \mathbb{R}^n$. Then there exists an integer $m(M)$ so that the following holds. Let $u_j \in H^{d_j} (M; \mathbb{R})$ be a set of generators for the cohomology algebra of $M$, including a generator $u_{\topp} \in H^n(M; \mathbb{R})$. Suppose that the relations of the cohomology algebra are given by $R_r(u_1, \ldots, u_J) = 0$. Fix $\beta_j \in \Lambda^{d_j} \mathbb{R}^n$ for each $j =1, \ldots, J$ such that $|\beta_j| \le 1$ for each $j$ and $| R_r(\vec \beta)| \le \epsilon$ for each $r$. Then $| \beta_{\topp} | \le C_M \epsilon^{\frac{1}{2m}}$.
\end{lem} \begin{proof} The tuple $(\beta_1, \ldots, \beta_J)$ belongs to the space $\prod_{j=1}^J \Lambda^{d_j} \mathbb{R}^n$, which is isomorphic to $\mathbb{R}^N$. We can think of (each component of) $\beta_j$ as a coordinate on this space, and we can think of $R_r$ as a polynomial on this space. We let $V(R_1, \ldots, R_k)$ be the set of $\vec \beta$ where all the polynomials $R_r$ vanish. Each $(\beta_1, \ldots, \beta_J) \in V(R_1, \ldots, R_k)$ corresponds to a homomorphism $\phi: H^*(M; \mathbb{R}) \rightarrow \Lambda^* \mathbb{R}^n$ with $\beta_j = \phi(u_j)$. By hypothesis, each such homomorphism is non-injective. By Poincar\'e duality, each such homomorphism sends $u_{\topp}$ to $0$: if $u \neq 0$ lies in the kernel of $\phi$, then Poincar\'e duality provides a class $v$ with $u \cup v = u_{\topp}$, and so $\phi(u_{\topp}) = \phi(u) \wedge \phi(v) = 0$. Therefore, $\beta_{\topp} = 0$ on $V(R_1, \ldots, R_k)$. For any set $X \subset \mathbb{R}^N$, we let $I(X)$ denote the ideal of polynomials $f \in \mathbb{R}[\beta]$ that vanish on $X$. So we see that $\beta_{\topp} \in I (V (R_1, \ldots, R_k))$. The structure of $I( V( R_1, \ldots, R_k))$ is described by the real Nullstellensatz---cf. \cite[\S2.3]{Marshall}: \begin{thm} [Real Nullstellensatz] A polynomial $f \in \mathbb{R}[\beta]$ lies in $I (V (R_1, \ldots, R_k))$ if and only if there is an integer $m \ge 1$ and polynomials $g_i, h_r \in \mathbb{R}[\beta]$ so that \[f^{2m} + g_1^2 + \ldots + g_s^2 = \sum_{r=1}^k h_r R_r.\] \end{thm} By the real Nullstellensatz, we see that there is some integer $m$ such that \[\beta_{\topp}^{2m} + g_1(\beta)^2 + \ldots + g_s(\beta)^2 = \sum_r h_r(\beta) R_r(\beta).\] If we also know that $|\beta_j| \le 1$ for every $j$ and $|R_r(\beta)| \le \epsilon$ for every $r$, then we see that \[\beta_{\topp}^{2m} \le C_M \epsilon.\] Therefore, $| \beta_{\topp} | \le C_M \epsilon^{\frac{1}{2m}}$. \end{proof} With this lemma, we can start the proof of the theorem. The ideas are the same. We just have to carry them out in a more general situation, with a little more notation.
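A toy case may help to illustrate how a Nullstellensatz certificate converts an approximate relation into a quantitative bound (this example does not arise from a manifold): take a single variable $\beta$ and the single relation $R(\beta) = \beta^3$. Then $V(R) = \{0\}$, so $\beta \in I(V(R))$, with certificate

```latex
% Certificate of the form f^{2m} + \sum g_i^2 = \sum h_r R_r,
% here with f = \beta, 2m = 4, h = \beta, and no squares g_i^2 needed:
\beta^{4} = \beta \cdot R(\beta).
```

If $|\beta| \le 1$ and $|R(\beta)| \le \epsilon$, the certificate gives $\beta^4 \le \epsilon$, hence $|\beta| \le \epsilon^{1/4}$, matching the exponent $\frac{1}{2m}$ in the lemma.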
Recall that $u_j \in H^{d_j} (M; \mathbb{R})$ is a set of generators for the cohomology of $M$, with $u_{\topp}$ the generator of $H^n(M; \mathbb{R})$. Suppose that the relations of the cohomology algebra are given by $R_r(u_1, \ldots, u_J) = 0$. Choose $\alpha_j$ to be a closed form on $M$ in the cohomology class $u_j$. The cohomology class of $R_r(\vec \alpha)$ is zero, so $R_r(\vec \alpha)$ is exact. Choose a primitive: \[d \gamma_r = R_r(\vec\alpha).\] Next suppose that $f: N \rightarrow M$ is an $L$-Lipschitz map. Cover $N$ with charts $U'$, and let $1 = \sum_{U'} \psi_{U'}$ be a partition of unity subordinate to the cover. Let $\phi: U \rightarrow U'$ be a parametrization of $U'$, where $U \subset \mathbb{R}^n$, which extends to a smooth map $\phi: \mathbb{R}^n \rightarrow N$ sending the complement of a large ball in $\mathbb{R}^n$ to a single point in $N$. Define a smooth compactly supported function \[\psi_U(x) = \begin{cases} \phi^{*} \psi_{U'}(x) & x \in U \\ 0 & x \notin U. \end{cases}\] Define forms on $\mathbb{R}^n$ which correspond to the $\alpha_j$ as follows: \[a_j := \frac{1}{L^{d_j}} \phi^* f^* \alpha_j.\] With this normalization, $\| a_j \|_{L^\infty} \lesssim 1$ and the $a_j$ are smooth compactly supported differential forms. Then \begin{equation} \label{degintegral} \deg(f) = L^n \sum_{U} \int_{\mathbb{R}^n} \psi_U a_{\topp}. \end{equation} Define forms on $\mathbb{R}^n$ which correspond to the $\gamma_r$ as follows. If $\gamma_r$ is a form of degree $d(\gamma_r)$, then \[g_r := \frac{1}{L^{d(\gamma_r)+1}} \phi^* f^* \gamma_r.\] The forms $g_r$ are also smooth compactly supported differential forms. The power of $L$ is chosen so that \[d g_r = R_r(a).\] The power of $L$ works out to make the forms $g_r$ very small: \[\| g_r \|_{L^\infty} \lesssim L^{-1}.\] This allows us to show that the low-frequency parts of the forms $R_r(a)$ are small. \begin{lem} \label{lowfreqrel} $\| P_{\le k} R_r(a) \|_{L^\infty} \lesssim 2^k L^{-1}$.
\end{lem} \begin{proof} We start by computing \[P_{\le k} R_r(a) (x) = \int_{\mathbb{R}^n} \eta_{\le k}^\vee(y) R_r(a) (x-y) dy = \int_{\mathbb{R}^n} \eta_{\le k}^\vee(y) dg_r (x-y) dy.\] Now we can integrate by parts to get \[\int_{\mathbb{R}^n} \eta_{\le k}^\vee(y) dg_r (x-y) dy = \int_{\mathbb{R}^n} d\eta_{\le k}^\vee(y) g_r (x-y) dy.\] Taking norms and using $\| g_r \|_{L^\infty} \lesssim L^{-1}$, we see that \[| P_{\le k} R_r(a) (x) | \lesssim L^{-1} \int | d \eta_{\le k}^\vee| \lesssim 2^k L^{-1}. \qedhere\] \end{proof} We want to bound $\int \psi_U a_{\topp}$. We break this up into a low frequency and high frequency part at a frequency cutoff $k$ which we will choose later. (Eventually we will average over many $k$.) \begin{equation} \label{lowhigh} \int \psi_U a_{\topp} = \underbrace{\int \psi_U P_{\le k} a_{\topp}}_{\textrm{low}} + \underbrace{\sum_{\ell > k} \int \psi_U P_\ell a_{\topp}}_{\textrm{high}}. \end{equation} For the high frequency pieces in \eqref{lowhigh}, we will find a small primitive and then integrate by parts. Lemma \ref{prim} tells us that $P_{\ell} a_{\topp}$ has a primitive with \[\lVert \Prim ( P_\ell a_{\topp}) \rVert_{L^\infty} \lesssim 2^{- \ell} \lVert P_{\ell} a_{\topp} \rVert_{L^\infty} \lesssim 2^{- \ell} \lVert a_{\topp} \rVert_{L^\infty} \lesssim 2^{- \ell}.\] Then we can bound $\int \psi_U P_{\ell} a_{\topp}$ by \[\int \psi_U P_{\ell} a_{\topp} = \int d \psi_U \Prim (P_\ell a_{\topp}) \lesssim 2^{-\ell}.\] We will choose $k$ with $2^k \ge L^{1/10}$, and so the contribution of all the high frequency parts is bounded by $L^{-1/10}$, which is much smaller than the bound we are aiming for. For the low-frequency piece in \eqref{lowhigh}, we apply Lemma \ref{topvsrel} to the forms $P_{\le k} a_j$.
Since all these forms have norm $\lesssim 1$ pointwise, the lemma gives us a pointwise bound \[|P_{\le k} a_{\topp}(x)| \lesssim \sum_r | R_r( P_{\le k} a)|^{\frac{1}{2m}}.\] Integrating and using the H\"older inequality, we get the bound \begin{equation} \label{notepoint} \int \psi_U P_{\le k} a_{\topp} \lesssim \sum_r \int \psi_U | R_r(P_{\le k} a)|^{\frac{1}{2m}} \lesssim \sum_r \left( \int \psi_U |R_r(P_{\le k} a)| \right)^{\frac{1}{2m}}. \end{equation} In detail, the H\"older step reads \begin{align*} \int \psi_U | R_r(P_{\le k} a)|^{\frac{1}{2m}} &= \int \psi_U^{\frac{2m-1}{2m}} \cdot \psi_U^{\frac{1}{2m}} |R_r(P_{\le k} a)|^{\frac{1}{2m}} \\ &\le \underbrace{\left( \int \psi_U \right)^{\frac{2m-1}{2m}}}_{\lesssim 1} \left( \int \psi_U |R_r(P_{\le k} a)| \right)^{\frac{1}{2m}}. \end{align*} Now we have to bound each integral $\int \psi_U |R_r( P_{\le k} a)|$. Since $\| a \|_{L^\infty} \lesssim 1$, we get a bound $\int \psi_U |R_r( P_{\le k} a)|\lesssim 1$, and to prove our theorem we need to beat this bound by a power of $\log L$, at least for some choice of $k$. The key input is the bound on the low-frequency part of $R_r(a)$: Lemma \ref{lowfreqrel} tells us that $\| P_{\le k} R_r(a) \|_{L^\infty} \lesssim 2^{k} L^{-1}$. Next we have to relate $R_r (P_{\le k} a)$ with $P_{\le k} R_r(a)$. Remember that each $R_r$ is a polynomial in the $a_j$. Each $R_r(a)$ is a sum of terms of the form $c a_{j_1} \wedge \cdots \wedge a_{j_P}$. If we do a Littlewood--Paley decomposition of each $a_j$, we see that \begin{equation} \label{lpexpand} a_{j_1} \wedge \cdots \wedge a_{j_P} = \sum_{k_1, \ldots, k_P} P_{k_1} a_{j_1} \wedge \cdots \wedge P_{k_P} a_{j_P}. \end{equation} For each choice of $k_1, \ldots, k_P$, we write $k_{\max} = \max_p k_p$. We let $p_{\max}$ be the value of $p$ that maximizes $k_p$. If there is a tie, we let $p_{\max}$ be the smallest $p$ so that $k_p = k_{\max}$.
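Since the $k_{\max}, p_{\max}$ bookkeeping is easy to garble, here is a small numerical check of the grouping about to be carried out, in the scalar case $P = 2$, with finitely supported sequences standing in for Littlewood--Paley pieces and ordinary products for wedge products (a hypothetical illustration, not part of the proof):

```python
import random

def full_sum(x, y):
    # all pairs (k1, k2), as in the double Littlewood-Paley expansion
    return sum(xi * yj for xi in x for yj in y)

def regrouped_sum(x, y):
    # group by m = k_max; ties k1 = k2 = m are assigned to p_max = 1,
    # mirroring the P_{k_max} / P_{<k_max} / P_{<=k_max} pattern
    total = 0.0
    for m in range(len(x)):
        total += x[m] * sum(y[: m + 1])   # p_max = 1: k1 = m, k2 <= m
        total += sum(x[:m]) * y[m]        # p_max = 2: k1 < m, k2 = m
    return total

random.seed(0)
x = [random.uniform(-1, 1) for _ in range(12)]
y = [random.uniform(-1, 1) for _ in range(12)]
assert abs(full_sum(x, y) - regrouped_sum(x, y)) < 1e-9
```

Each pair $(k_1, k_2)$ is counted exactly once, which is the content of the grouping used next.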
We can now organize the sum on the right-hand side of \eqref{lpexpand} according to the value of $k_{\max}$ and $p_{\max}$: \begin{multline*} \sum_{k_1, \ldots, k_P} P_{k_1} a_{j_1} \wedge \cdots \wedge P_{k_P} a_{j_P} = \sum_{k_{\max}} \sum_{p_{\max} = 1}^P P_{< k_{\max}} a_{j_1} \wedge \cdots \wedge \\ {} \wedge P_{< k_{\max} } a_{j_{p_{\max} - 1}} \wedge P_{k_{\max}} a_{j_{p_{\max}}} \wedge P_{\le k_{\max}} a_{j_{p_{\max}+1}} \wedge \cdots \wedge P_{\le k_{\max}} a_{j_P}. \end{multline*} Similarly, \[P_{\le k} a_{j_1} \wedge \cdots \wedge P_{\le k} a_{j_P}= \sum_{k_{\max} \le k } \sum_{p_{\max} = 1}^P P_{< k_{\max}} a_{j_1} \wedge \cdots \wedge P_{k_{\max}} a_{j_{p_{\max}}} \wedge \cdots \wedge P_{\le k_{\max}} a_{j_P}.\] Therefore, \begin{multline*} P_{\le k} a_{j_1} \wedge \cdots \wedge P_{\le k} a_{j_P} \\ = a_{j_1} \wedge \cdots \wedge a_{j_P} - \sum_{k_{\max} > k } \sum_{p_{\max} = 1}^P P_{< k_{\max}} a_{j_1} \wedge \cdots \wedge P_{k_{\max}} a_{j_{p_{\max}}} \wedge \cdots \wedge P_{\le k_{\max}} a_{j_P}. \end{multline*} This discussion applies to each monomial of $R_r$. Therefore, $R_r(a)$ is equal to $R_r( P_{\le k} a)$ plus a finite linear combination of terms of the form \begin{equation} \label{termform} \sum_{k_{\max} > k } \sum_{p_{\max} = 1}^P P_{< k_{\max}} a_{j_1} \wedge \cdots \wedge P_{k_{\max}} a_{j_{p_{\max}}} \wedge \cdots \wedge P_{\le k_{\max}} a_{j_P}. \end{equation} Now for a large constant $c$, we have $P_{\le k+c} R_r( P_{\le k} a) = R_r( P_{\le k} a)$. Therefore, $ R_r( P_{\le k} a)$ is equal to $P_{\le k+c} R_r(a) $ plus a finite linear combination of terms of the form \begin{equation} \label{termform2} \sum_{k_{\max} > k } \sum_{p_{\max} = 1}^P P_{\le k + c} \left( P_{< k_{\max}} a_{j_1} \wedge \cdots \wedge P_{k_{\max}} a_{j_{p_{\max}}} \wedge \cdots \wedge P_{\le k_{\max}} a_{j_P}\right) .
\end{equation} In summary, \begin{equation} \label{strucsummary} R_r(P_{\le k} a) = P_{\le k+ c} R_r(a) + \text{ terms of the form } \eqref{termform2}. \end{equation} The first term in \eqref{strucsummary} is controlled by Lemma \ref{lowfreqrel}: $\| P_{\le k+ c} R_r(a) \|_{L^\infty} \lesssim 2^{k+c} L^{-1} \lesssim 2^k L^{-1}$. We will choose $k$ so that $2^k \le L^{9/10}$, so this term is bounded by $L^{-1/10}$, which is much smaller than our goal. For each remaining term of type \eqref{termform2}, we will again take a primitive and integrate by parts. We apply Lemma \ref{prim} to get a good primitive: $P_{k_{\max}} a_{j_{p_{\max}}} = d \Prim ( P_{k_{\max}} a_{j_{p_{\max}}} )$, where $\lVert \Prim ( P_{k_{\max}} a_{j_{p_{\max}}} ) \rVert_{L^p} \lesssim 2^{-k_{\max}} \lVert P_{k_{\max}} a_{j_{p_{\max}}} \rVert_{L^p}$ for every $1 \le p \le \infty$. For each fixed choice of $k_{\max}$ and $p_{\max}$, we write \begin{multline*} P_{\le k + c} \left( P_{< k_{\max}} a_{j_1} \wedge \cdots \wedge P_{k_{\max}} a_{j_{p_{\max}}} \wedge \cdots \wedge P_{\le k_{\max}} a_{j_P}\right) \\ = \eta_{\le k + c}^\vee * \left( P_{< k_{\max}} a_{j_1} \wedge \cdots \wedge d \Prim( P_{k_{\max}} a_{j_{p_{\max}}}) \wedge \cdots \wedge P_{\le k_{\max}} a_{j_P}\right) \\ = d \eta_{\le k + c}^\vee * \left( P_{< k_{\max}} a_{j_1} \wedge \cdots \wedge \Prim( P_{k_{\max}} a_{j_{p_{\max}}}) \wedge \cdots \wedge P_{\le k_{\max}} a_{j_P}\right). \end{multline*} We now take the $L^1$ norm of our term. 
Since $\| a_j\|_{L^\infty}$ and $\| P_{< k_{\max}} a_j \|_{L^\infty}$ are all $\lesssim 1$, we see that \begin{multline*} \bigl\lVert d \eta_{\le k + c}^\vee * \left( P_{< k_{\max}} a_{j_1} \wedge \cdots \wedge \Prim( P_{k_{\max}} a_{j_{p_{\max}}}) \wedge \cdots \wedge P_{\le k_{\max}} a_{j_P}\right)\bigr\rVert_{L^1} \\ \lesssim \bigl\lVert d \eta_{\le k+ c}^\vee \bigr\rVert_{L^1} \bigl\lVert \Prim ( P_{k_{\max}} a_{j_{p_{\max}}}) \bigr\rVert_{L^1} \lesssim 2^{k+c} 2^{-k_{\max}} \bigl\lVert P_{k_{\max}} a \bigr\rVert_{L^1}. \end{multline*} To summarize, we have proved the following bound on each summand of \eqref{termform2}: \begin{equation} \label{termbound} \| P_{\le k + c} \left( P_{< k_{\max}} a_{j_1} \wedge \cdots \wedge P_{k_{\max}} a_{j_{p_{\max}}} \wedge \cdots \wedge P_{\le k_{\max}} a_{j_P}\right) \|_{L^1} \lesssim 2^{k+c} 2^{-k_{\max}} \| P_{k_{\max}} a \|_{L^1}. \end{equation} Now the $L^1$ norm of each term of form \eqref{termform2} is bounded as follows: \begin{multline*} \biggl\lVert \sum_{k_{\max} > k } \sum_{p_{\max} = 1}^P P_{\le k + c} \left( P_{< k_{\max}} a_{j_1} \wedge \cdots \wedge P_{k_{\max}} a_{j_{p_{\max}}} \wedge \cdots \wedge P_{\le k_{\max}} a_{j_P}\right) \biggr\rVert_{L^1} \\ \lesssim \sum_{k_{\max} > k} 2^{k - k_{\max}} \| P_{k_{\max}} a \|_{L^1}. \end{multline*} We now have our bounds on all the terms and we just have to put them together. Recall \eqref{notepoint} tells us that \begin{equation} \Bigl(\int \psi_U P_{\le k} a_{\topp}\Bigr)^{2m} \lesssim \sum_r \int \psi_U |R_r(P_{\le k} a)|. \end{equation} By \eqref{strucsummary}, we can break up $R_r(P_{\le k} a)$ into pieces: \begin{equation*} R_r(P_{\le k} a) = P_{\le k+ c} R_r(a) + \text{ terms of the form }\eqref{termform2}. \end{equation*} We have now bounded each term on the right-hand side.
Combining our bounds, we see that \[\Bigl(\int \psi_U P_{\le k} a_{\topp}\Bigr)^{2m} \lesssim \sum_r \int \psi_U |R_r(P_{\le k} a)| \lesssim 2^k L^{-1} + \sum_{k_{\max} > k} 2^{k-k_{\max}} \| P_{k_{\max}} a \|_{L^1}.\] Let us pause to digest this bound. The first term $2^k L^{-1}$ is very small as long as $2^k \le L^{9/10}$. In the second term, there is exponential decay for $k_{\max} > k$. Therefore, the main contribution on the right-hand side comes from $k_{\max}$ close to $k$, which gives roughly $\| P_k a \|_{L^1}$. For comparison, it would be straightforward to get an upper bound of $\| a \|_{L^1} \lesssim 1$. The upper bound $\| P_k a \|_{L^1}$ is an improvement because it includes only one Littlewood--Paley piece of $a$. We can then take advantage of this improvement by averaging over $k$ and using orthogonality: $\sum_k \| P_k a \|_{L^2}^2 \sim \| a \|_{L^2}^2$. Now we turn to the details of this estimate. We will sum over $k$ in the range $L^{1/10} \le 2^k \le L^{9/10}$. There are $\sim \log L$ different $k$ in this range. \[\sum_{L^{1/10} \le 2^k \le L^{9/10}} \Bigl(\int \psi_U P_{\le k} a_{\topp}\Bigr)^{2m} \lesssim L^{-1/10} + \sum_{L^{1/10} \le 2^k \le L^{9/10}} \sum_{2^k < 2^{k_{\max}} < L} 2^{k-k_{\max}} \| P_{k_{\max}} a \|_{L^1}.\] (Here, the terms with $2^{k_{\max}} > L$ are absorbed into the $L^{-1/10}$ term.) Now the last term is bounded by \[\sum_{L^{1/10} \le 2^k \le L^{9/10}} \sum_{2^k < 2^{k_{\max}} < L} 2^{k-k_{\max}} \| P_{k_{\max}} a \|_{L^1} \lesssim \sum_{L^{1/10} \le 2^{k_{\max}} \le L} \| P_{k_{\max}} a \|_{L^1}.\] The number of terms in this last sum is $\sim \log L$.
Therefore, we can use the Cauchy--Schwarz inequality to get \[\sum_{L^{1/10} \le 2^{k_{\max}} \le L} \| P_{k_{\max}} a \|_{L^1} \le (\log L)^{1/2} \biggl( \sum_{L^{1/10} \le 2^{k_{\max}} \le L} \| P_{k_{\max}} a \|_{L^1}^2 \biggr)^{1/2}.\] Since $a$ is supported on a fixed compact set, and $P_{k_{\max}} a$ is essentially supported on that set, Cauchy--Schwarz gives $\| P_{k_{\max}} a \|_{L^1} \lesssim \| P_{k_{\max}} a \|_{L^2}$. Plugging this into the last term above gives \[\lesssim (\log L)^{1/2} \biggl( \sum_{L^{1/10} \le 2^{k_{\max}} \le L} \| P_{k_{\max}} a \|_{L^2}^2 \biggr)^{1/2} \lesssim (\log L)^{1/2} \| a \|_{L^2}.\] All together, we now have \[\sum_{L^{1/10} \le 2^k \le L^{9/10}} \Bigl(\int \psi_U P_{\le k} a_{\topp} \Bigr)^{2m} \lesssim (\log L)^{1/2} \| a \|_{L^2}.\] Since there are $\sim \log L$ terms on the left-hand side, we can choose $k$ in the range $L^{1/10} \le 2^k \le L^{9/10}$ so that \[\Bigl(\int \psi_U P_{\le k} a_{\topp}\Bigr)^{2m} \lesssim (\log L)^{-1/2} \| a \|_{L^2} \lesssim (\log L)^{-1/2}.\] Taking roots, we get $\int \psi_U P_{\le k} a_{\topp} \lesssim (\log L)^{-\frac{1}{4m}}$. Recall that we broke up $\int \psi_U a_{\topp}$ into low frequency and high frequency pieces in \eqref{lowhigh}: \[\int \psi_U a_{\topp} = \underbrace{\int \psi_U P_{\le k} a_{\topp}}_{\textrm{low}} + \underbrace{\sum_{\ell > k} \int \psi_U P_\ell a_{\topp}}_{\textrm{high}}.\] We showed that the high frequency pieces are bounded by $\lesssim 2^{-k}$. We just found $k$ with $L^{1/10} \le 2^k \le L^{9/10}$ where the low frequency piece has the bound $\lesssim (\log L)^{- \frac{1}{4m}}$. Therefore, the total is bounded: \[\int \psi_U a_{\topp} \lesssim (\log L)^{- \frac{1}{4m}}.\] Recall from \eqref{degintegral} that $\deg f = L^n \sum_U \int \psi_U a_{\topp}$, and so \[\deg f \lesssim L^n (\log L)^{- \frac{1}{4m}}.\] This proves the theorem, with $\alpha(M) = \frac{1}{4m}$.
The integer $m$ came from the real Nullstellensatz, and it depended only on the cohomology ring $H^*(M; \mathbb{R})$. \subsubsection{Proof of Theorem \ref{balldegbound}} Finally, we describe the modifications needed to prove the result on the ball, which we restate here: \begin{thm*} Suppose that $M$ is a closed connected oriented $n$-manifold such that $H^*(M; \mathbb{R})$ does not embed into $\Lambda^* \mathbb{R}^n$, and let $\alpha(M)>0$ be as in the statement of Theorem \ref{gendegbound}. Let $B^n \subseteq \mathbb{R}^n$ be the unit ball. Then for any metric $g$ on $M$ and any $L$-Lipschitz map $f:B^n \to M$, \[\int_{B^n} f^*d\vol_M \leq C(M,g)L^n(\log L)^{-\alpha(M)}.\] \end{thm*} \begin{proof} Our argument above already studies forms defined on a ball. The only difference is that above we study $\int_{B^n} \psi f^*d\vol_M$, where $\psi:B^n \to \mathbb{R}$ is some function which decays to $0$ at the boundary, whereas we now want to understand $\int_{B^n} f^*d\vol_M$. To bridge the gap, we expand the domain. Define a function $\tilde f:B_2(0) \to M$ on the ball of radius $2$ by \[\tilde f(x)=\begin{cases} f(x) & \lVert x \rVert \leq 1 \\ f(x/\lVert x \rVert) & \lVert x \rVert>1. \end{cases}\] If $f$ is $L$-Lipschitz, this function is $2L$-Lipschitz. Moreover, since $\tilde f$ has rank at most $n-1$ outside the ball of radius $1$, $\tilde f^*d\vol_M=0$ outside that ball. Therefore, for any $\psi:\mathbb{R}^n \to \mathbb{R}$ such that $\psi|_{B^n} \equiv 1$, we have \[\int_{B_2(0)} \psi \tilde f^*d\vol_M=\int_{B^n} f^*d\vol_M.\] The argument in the proof of Theorem \ref{gendegbound} bounds the left side as desired. \end{proof} \section{Explicit construction of efficient self-maps} \label{S:lower} In this section, we discuss the lower bound of Theorem \ref{main}, which follows from the following result: \begin{thm} \label{self-maps} Let $Y$ be a formal compact Riemannian manifold such that $H_n(Y;\mathbb{Q})$ is nonzero for $d$ different values of $n>0$.
Then there are integers $a>0,p>1$ such that for every $\ell \in \mathbb{N}$ and $q=ap^\ell$, there is an $O(\ell^{d-1}p^\ell)$-Lipschitz map $r_q:Y \to Y$ which induces multiplication by $q^n$ on $H_n(Y;\mathbb{Q})$. \end{thm} For the purpose of this section, a simply connected finite CW complex $Y$ is \emph{formal} if and only if for some $q>1$, there is a map $r_q:Y \to Y$ such that \begin{equation} \label{eq:self} (r_q)^*=q^n:H^n(Y;\mathbb{Q}) \to H^n(Y;\mathbb{Q}). \end{equation} Clearly, if such a map exists for some $q$, then it exists for $q^\ell$ for every $\ell$. This is not the original definition of formality due to Sullivan, which is based on the rationalization of $Y$ \cite{DGMS,SulLong}; the equivalence of our definition in the case of finite complexes was first stated in \cite{Shiga}. To see that Theorem \ref{self-maps} indeed implies the lower bound of Theorem \ref{main}, suppose that $Y$ is an $n$-manifold. Let $K(\ell)=C\ell^{d-1}p^\ell$ be the bound on the Lipschitz constant of $r_{ap^\ell}:Y \to Y$ given by the theorem, and notice that for $\ell \geq 2$, \[K(\ell)/K(\ell-1)=p \cdot \frac{\ell^{d-1}}{(\ell-1)^{d-1}} \leq 2^{d-1}p.\] Then for $L \gg 0$, somewhere between $L/2^{d-1}p$ and $L$ is a value of $K(\ell)$ for some $\ell$. This means that for $q=ap^\ell$, \[L=O(q(\log q)^{d-1})\] and therefore there is an $O(L)$-Lipschitz map $f:Y \to Y$ such that \[\deg f=q^n=\Omega(L^n(\log L)^{-n(d-1)}).\] \subsection{Warmup example} We start by proving Theorem \ref{self-maps} in the simple case of connected sums of $\mathbb CP^2$, before moving on to the general case. \begin{thm} \label{warmup} Let $M=\#_k \mathbb{C} P^2$. Then there is a constant $C$ such that for each $\ell>0$, there is a self-map $r_{2^\ell}:M \to M$ of degree $2^{4\ell}$ and Lipschitz constant bounded by $C\ell \cdot 2^\ell$. \end{thm} As discussed in the introduction, the strategy is to build $r_{2^\ell}$ inductively by gluing together several copies of $r_{2^{\ell-1}}$ without adding too much stuff in between.
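Before the proof, a quick numerical sanity check (a hypothetical script with all constants set to $1$; not part of the argument) that a family of maps with Lipschitz constant $\sim \ell \, 2^\ell$ and degree $2^{4\ell}$ does give the advertised lower bound $\deg \gtrsim L^4 (\log L)^{-4}$; for $M = \#_k \mathbb{C}P^2$ we have $d = 2$, so the exponent $n(d-1)$ equals $4$:

```python
import math

# For each ell, the warmup construction (constants dropped) gives a map of
# Lipschitz constant L ~ ell * 2^ell and degree 2^(4 ell).  The ratio
# deg * (log2 L)^4 / L^4 should then be bounded above and below.
for ell in range(2, 61):
    L = ell * 2.0 ** ell
    deg = (2.0 ** ell) ** 4
    ratio = deg * math.log2(L) ** 4 / L ** 4
    assert 1.0 <= ratio <= 6.0, (ell, ratio)
```

The ratio simplifies to $(1 + \log_2(\ell)/\ell)^4$, which is bounded between $1$ and a small constant, as the assertion checks.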
Before giving the detailed proof, we start with a lemma about self-maps of spheres which will also be useful for the general case of Theorem \ref{self-maps}. \begin{lem} \label{SntoSn} For every $d$, there is a map $f_d:S^n \to S^n$ of degree $d^n$ whose Lipschitz constant is $C(n)d$. Moreover, for each $p>1$ there is a $C'(n,p)d$-Lipschitz homotopy $H_p:S^n \times [0,1] \to S^n$ between $f_{pd}$ and $f_d \circ f_p$. \end{lem} \begin{proof} Give $S^n$ the metric of $\partial [0,1]^{n+1}$, which is bilipschitz to the round metric, and divide one of the faces into $d^n$ identical sub-cubes, $d$ to a side. We map all other faces to a base point, and the sub-cubes to the sphere by a rescaling of a standard degree 1 map \[f_0:([0,1]^n,\partial[0,1]^n) \to (S^n,\text{pt}).\] The resulting map has degree $d^n$ and Lipschitz constant $d \Lip f_0 \leq (2 \Lip f_0)d$. Now consider the map $f_d \circ f_p$. Like $f_{pd}$, it consists of $(pd)^n$ cubical preimages of $S^n$, with the rest of the sphere mapped to the basepoint. However, instead of one cluster of preimages filling a whole face of $\partial [0,1]^{n+1}$, there are $p^n$ clusters of slightly smaller preimages. We homotope $f_d \circ f_p$ to $f_{pd}$ by linearly expanding these preimages to fill the whole face. It is easy to see that this homotopy is $C'(n,p)d$-Lipschitz. \end{proof} \begin{proof}[Proof of Theorem \ref{warmup}] We fix a cell structure for $M=\#_k \mathbb{C} P^2$ consisting of one $0$-cell, $k$ $2$-cells, and a $4$-cell. Let $\iota:[0,1]^4 \to M$ be the inclusion map of the $4$-cell, and let \[\partial=\iota|_{\partial[0,1]^4}:S^3 \to M^{(2)}=\bigvee_{i=1}^k S^2\] be its attaching map. The projection of $\partial$ to each $S^2$ summand has Hopf invariant one. Notice that a map $\bigvee_{i=1}^k S^2 \to \bigvee_{i=1}^k S^2$ which sends each $S^2$ to itself with degree $d$ extends to a map $M \to M$ of degree $d^2$. We prove the theorem by induction on $\ell$.
For the base of the induction we take $r_2:M \to M$ to be any map whose restriction to each $2$-cell is the map $f_2:S^2 \to S^2$ from Lemma \ref{SntoSn}. For the inductive step, assume that we have constructed a $C(\ell-1) \cdot 2^{\ell-1}$-Lipschitz map $r_{2^{\ell-1}}:M \to M$ whose restriction to each $2$-cell is $f_{2^{\ell-1}}$. To build $r_{2^\ell}$, we take a $2 \times 2 \times 2 \times 2$ grid of sub-cubes inside $[0,1]^4$, each of side length $\frac{1}{2}\cdot\frac{\ell-1}{\ell}$, and send each of them to $M$ via a homothetic rescaling of $r_{2^{\ell-1}}$. Then the Lipschitz constant on each sub-cube is $C\ell \cdot 2^\ell$. \begin{figure} \centering \begin{tikzpicture} \filldraw[very thick,fill=gray!20] (-2,0) circle(2); \filldraw[very thick,fill=white] (-2,0) circle(1); \draw (-2,-2) node[anchor=north] {$S^3 \times [0,1]$}; \draw (-2,-1.5) node {$h$}; \filldraw[very thick,fill=gray!20] (4,-2) rectangle (8,2); \foreach \x/\y in {4.4/-1.6, 4.4/0.4, 6.4/-1.6, 6.4/0.4} { \filldraw[thick,fill=gray!50] (\x,\y) rectangle (\x+1.2,\y+1.2); \draw (\x+0.6,\y+0.6) node {$r_{2^{\ell-1}}$}; } \draw[thick] (4.4,0.4) -- (5.6,0) -- (4.4,-0.4); \draw[thick] (7.6,0.4) -- (6.4,0) -- (7.6,-0.4); \draw[thick] (5.6,1.6) -- (6,0.4) -- (6.4,1.6); \draw[thick] (5.6,-1.6) -- (6,-0.4) -- (6.4,-1.6); \draw (6,-2) node[anchor=north] {$[0,1]^4$}; \draw[stealth-] (-0.5,0) .. controls (0.5,0.5) and (3.5,0.5) .. node[midway,anchor=south,align=center] {$C_1\ell$-Lipschitz\\map} (4.2,0.5); \draw[stealth-] (4,-1) .. controls (3.5,-1) and (2.75,-1.25) .. (2.5,-1.75) node[anchor=north] {$f_{2^\ell} \circ \partial$}; \draw[stealth-] (4.7,-0.3) .. controls (4.2,0) and (2.5,0) .. (2,-0.5) node[anchor=north] {$f_{2^{\ell-1}} \circ \partial \circ g$}; \end{tikzpicture} \caption{Inductively assembling the map $r_{2^\ell}$. The light gray regions map to $M^{(2)}$ and the dark gray regions map to the $4$-cell.
Some regions are labeled with the restriction of $r_{2^\ell}$ to that region.} \label{interstitial} \end{figure} We must now extend the map to the rest of $[0,1]^4$, filling the space in between with the same Lipschitz constant. These gaps have width on the order of $1/\ell$. As seen in Figure \ref{interstitial}, we can build the extension by composing a $C_1\ell$-Lipschitz map from this region to $S^3 \times [0,1]$ with a $C_2 \cdot 2^\ell$-Lipschitz homotopy $h:S^3 \times [0,1] \to M^{(2)}$ between the maps $f_{2^\ell} \circ \partial$ and $f_{2^{\ell-1}} \circ \partial \circ g$, where $g:S^3 \to S^3$ is a map of degree 16. This homotopy can be done in two steps: \begin{itemize} \item First homotope $f_{2^\ell} \circ \partial$ to $f_{2^{\ell-1}} \circ f_2 \circ \partial$. This homotopy can be made $C \cdot 2^\ell$-Lipschitz by Lemma \ref{SntoSn}. \item Then homotope $f_{2^{\ell-1}} \circ f_2 \circ \partial$ to $f_{2^{\ell-1}} \circ \partial \circ g$. To build this homotopy, take a fixed homotopy from $f_2 \circ \partial$ to $\partial \circ g$ (whose Lipschitz constant is independent of $\ell$) and compose it with $f_{2^{\ell-1}}$ (with Lipschitz constant $C \cdot 2^{\ell-1}$). \end{itemize} This completes the inductive step and the proof. \end{proof} \subsection{Building efficient self-maps} We give a mostly elementary proof of Theorem \ref{self-maps}, building maps $r_q$ ``by hand''. The definition of formality gives us a self-map $r_p:Y \to Y$ of degree $p^n$; the proof consists of homotoping the iterates $(r_p)^\ell$ to maps $r_{p^\ell}$ with a controlled Lipschitz constant. Although we have no control over the Lipschitz constant of the original $r_p$, this only affects the multiplicative constant. First, we assume that $Y$ is a finite CW complex of a particular form. We construct $r_{p^\ell}$ by induction on skeleta, extending along one cell at a time. 
Each $n$-cell maps to itself with degree $p^{\ell n}$, and contains a grid of homeomorphic preimages of its interior, $p^\ell$ to a side. The tricky part, and the source of the polylog factor, is filling in the area between these preimages. This is done by induction on $\ell$: we take $p^n$ copies of $r_{p^{\ell-1}}$, arranged in a grid, and glue them together using a homotopy built in the course of the $(n-1)$-dimensional construction. The Lipschitz constant of this homotopy is proportional to the Lipschitz constant obtained for self-maps of $Y^{(n-1)}$; since there are $\ell$ nested layers (and $\ell$ is comparable to the logarithm of the degree), we gain a factor of $\ell$ in moving from $Y^{(n-1)}$ to $Y^{(n)}$. In passing from self-maps of the CW complex to those of our original manifold, we gain an additional factor of $a$ in the degree. We now give the details of this argument. This is the heart of the proof of Theorem \ref{self-maps}, although it only covers a special case. The remainder of the section after this proof is devoted to showing that this is sufficient to prove the general case. \begin{lem} \label{detailed} Let $Z$ be a finite CW complex with the following properties: \begin{itemize} \item $H^i(Z)$ is nontrivial in $d$ different dimensions (not including $i=0$). \item The cellular chain complex has zero differential. (In other words, the cells are in bijection with a basis for $H^*(Z)$.) \item The attaching maps of $Z$ are Lipschitz maps $D^n \to Z^{(n-1)}$. \end{itemize} Let $r_p:Z \to Z$ be a map which induces multiplication by $p^i$ on $H^i(Z;\mathbb Q)$ for every $i>0$. Then there is a metric on $Z$ such that every iterate $(r_p)^\ell$ of $r_p$ is homotopic to a $C(r_p,Z)\ell^{d-1}p^\ell$-Lipschitz map $r_{p^\ell}:Z \to Z$. Moreover, $r_{p^\ell}$ is homotopic to $r_{p^{\ell-1}} \circ r_p$ via a $C'(r_p,Z)\ell^{d-1}p^\ell$-Lipschitz homotopy $H_\ell:Z \times [0,1] \to Z$. \end{lem} The homotopy $H_\ell$ is needed for the inductive step, in order to prove the lemma one dimension higher.
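Before giving the proof, we record a heuristic (ours, not part of the argument) explaining the shape of the bound. Writing $K_d(\ell)$ for the Lipschitz constant achievable for $r_{p^\ell}$ on a complex whose cohomology is nontrivial in $d$ positive degrees, the gluing step yields a recursion of the form
\[K_d(\ell) \lesssim p\,K_d(\ell-1)+\ell^{d-2}p^\ell,\]
where the second term is the cost of the interstitial homotopy coming from the $(n-1)$-skeleton, where only $d-1$ degrees are involved. Unrolling the recursion gives
\[K_d(\ell) \lesssim p^\ell \sum_{j=1}^{\ell} j^{d-2} \lesssim \ell^{d-1}p^\ell,\]
which is the bound asserted in the lemma.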
\begin{proof} First suppose that $d=1$, and let $n=\dim Z$. Then $Z$ is a wedge of $n$-spheres, so the base of the induction is provided by Lemma \ref{SntoSn}. Now suppose that we have proved the lemma for spaces with cells in $d-1$ dimensions, in particular for $Z^{(n-1)}$ where $\dim Z=n \geq 3$. We start by building a metric on $Z$ as follows. First, homothetically shrink $Z^{(n-1)}$ until the attaching maps of $n$-cells can be given by $1$-Lipschitz maps from $\partial[0,1]^n$. Then give $Z$ the \emph{nearly Euclidean metric} (as defined further down in \S\ref{subS:metric}) derived from attaching cells isometric to $[0,1]^n$. By Proposition \ref{htpy-to-lip}, proved further down, we can also assume that $r_p:Z \to Z$ is cellular and Lipschitz. By applying a homotopy which is constant on the $(n-1)$-skeleton, we can also ensure that $r_p$ has the following property: \begin{quote} For every open $n$-cell $e$ of $Z$, $\overline{r_p^{-1}(e)}$ is a disjoint union of $p^n$ subcubes of $(0,1)^n$, arranged in a grid inside $e$, whose interiors map homothetically to $e$. \end{quote} Such a homotopy can be performed in several steps. First, ensure that $r_p$ is smooth on the preimages of the ``middle halves'' of $n$-cells, and that the centers of the cells are regular values. Then, by composing with a homotopy that expands a small neighborhood of the center to cover the whole cell, ensure that the preimage of each open $n$-cell is a disjoint union of homeomorphic copies. Then, since $Z$ is simply connected and $n \geq 3$, it is possible to cancel out copies of opposite orientations. The details of this purely topological argument can be found, for example, in \cite[Lemma 5.3]{GrMo} or \cite{White}. Finally, we can deform this map to obtain the desired geometry. We now construct $r_{p^\ell}$ and $H_\ell$ by induction on $\ell$. 
Suppose we have constructed a map $r_{p^{\ell-1}}$ that is $C(r_p,Z)(\ell-1)^{d-1}p^{\ell-1}$-Lipschitz and is an extension of $r_{p^{\ell-1}}^{(n-1)}$ to the $n$-cells of $Z$. We will homotope $r_{p^{\ell-1}} \circ r_p$ to the desired $C(r_p,Z)\ell^{d-1}p^\ell$-Lipschitz map $r_{p^\ell}$. We first apply the homotopy $H_\ell^{(n-1)}$ to $Z^{(n-1)}$. We extend this homotopy to an $n$-cell $e$ as follows. Equip $e$ with polar coordinates $(\theta,s)$, with $\theta \in S^{n-1}$ and $s \in [0,1]$, and let $\partial_e:S^{n-1} \to Z^{(n-1)}$ denote the attaching map of $e$. Then we let \[\tilde H(\theta,s,t)=\begin{cases} H_\ell^{(n-1)}(\partial_e(\theta),t+2(s-1)), & s \geq 1-t/2, \\ r_{p^{\ell-1}} \circ r_p(\theta,(1-t/2)^{-1}s), & s \leq 1-t/2. \end{cases}\] From this formula we see that: \begin{itemize} \item When $s=1$, $\tilde H(\theta,s,t)$ agrees with $H_\ell^{(n-1)}$. \item At $s=1-t/2$, $\tilde H$ is continuous since \[H_\ell^{(n-1)}(\partial_e(\theta),t+2(s-1))=H_\ell^{(n-1)}(\partial_e(\theta),0)=r_{p^{\ell-1}} \circ r_p(\theta,1).\] \end{itemize} \begin{figure} \centering \begin{minipage}{.33\textwidth} \centering \subfloat[\centering $\tilde H|_{t=0}=r_{p^{\ell-1}} \circ r_p$] {\includegraphics[width=4cm]{t1.pdf}} \label{ft1} \end{minipage}% \begin{minipage}{.33\textwidth} \centering \subfloat[\centering $\tilde H|_{t=1}$: $L_i$'s differ] {\includegraphics[width=4cm]{t2.pdf}} \label{ft2} \end{minipage} \begin{minipage}{.33\textwidth} \centering \subfloat[\centering $J|_{t=1}$: $L_i$'s equalized] {\includegraphics[width=4cm]{t3.pdf}} \label{ft3} \end{minipage} \caption{Stages of the homotopy $H_\ell$, the concatenation of $\tilde H$ and $J$.} \end{figure} At this point, $\tilde H|_{e \times \{1\}}$ has different Lipschitz constants on different regions of $e$, which we bound by induction on $\ell$ and $d$: \begin{enumerate}[(i)] \item On the outer half of the disk, the Lipschitz constant is \[L_1=2\Lip H_\ell^{(n-1)} \leq 2C'(r_p,Z^{(n-1)})\ell^{d-2}p^\ell.\]
\item On the inner half, but outside $\frac{1}{2}r_p^{-1}(e)$ (here $\frac{1}{2}$ refers to the homothety $(s,\theta) \mapsto (\frac{s}{2},\theta)$), the Lipschitz constant is \[L_2=2\Lip\bigl(r_{p^{\ell-1}} \circ r_p\bigr) \leq \Lip(r_p) \cdot 2C(r_p,Z^{(n-1)})(\ell-1)^{d-2}p^{\ell-1}.\] This bound holds because on this subdomain, the image of $r_p(\theta,s/2)$ lies in $Z^{(n-1)}$. \item In $\frac{1}{2}r_p^{-1}(e)$, the Lipschitz constant is \[L_3=D^{-1}\Lip r_{p^{\ell-1}} \leq D^{-1}C(r_p,Z)(\ell-1)^{d-1}p^{\ell-1},\] where $D$ is the side length of one of the subcubes comprising $\frac{1}{2}r_p^{-1}(e)$. \end{enumerate} In the second stage $J:Z \times [0,1] \to Z$ of the homotopy, which is constant on $Z^{(n-1)}$, we expand and shrink these three regions via a product of piecewise linear homotopies of $[0,1]$ so as to equalize the Lipschitz constants. At time $1$, $e$ is nearly covered by a $p \times \cdots \times p$ grid of subcubes which each map to $Z$ via $r_{p^{\ell-1}}|_e$ composed with a homothety; the outer half of $\tilde H|_{e \times \{1\}}$ is relegated to a thin shell on the outside of the cube. We can imagine expanding every part of the domain until the Lipschitz constant is $1$ on each relevant subinterval, and then shrinking the whole domain proportionally. This shows that the resulting map $J|_{t=1}$ has Lipschitz constant bounded above by \begin{multline*} pDL_3+\left(\frac{1}{2}-pD\right)L_2+\frac{1}{2}L_1 \\ \leq pC(r_p,Z)(\ell-1)^{d-1}p^{\ell-1} + \Lip(r_p)C(r_p,Z^{(n-1)})(\ell-1)^{d-2}p^{\ell-1} + C'(r_p,Z^{(n-1)})\ell^{d-2}p^\ell \\ \leq C(r_p,Z)\ell^{d-1}p^\ell, \end{multline*} where the second inequality holds as long as \[C(r_p,Z) \geq p^{-1}\Lip(r_p)C(r_p,Z^{(n-1)})+C'(r_p,Z^{(n-1)}).\] Then we set $r_{p^\ell}=J|_{t=1}$ and $H_\ell$ to be the concatenation of $\tilde H$ and $J$. 
By computing derivatives of $\tilde H$ and $J$ in the space and time directions, we see that \[\Lip(H_\ell)=\max\{L_1,L_2,L_3\},\] and therefore we can set $C'(r_p,Z) \leq \max\{2,(pD)^{-1}\}C(r_p,Z)$. \end{proof} \subsection{Lipschitz homotopy equivalence} \label{subS:metric} To show that Lemma \ref{detailed} implies Theorem \ref{self-maps}, we need to introduce some geometric and topological facts. We start with the geometry, discussing metrics on CW complexes: we would like to show that the ``special'' metric we imposed on the complex $Z$ in Lemma \ref{detailed} is not too special to be useful. The relevant ideas date back to Gromov, see e.g.~\cite[\S7.20]{GrMS}, and are developed more systematically in \cite{LLY}. The basic idea is that if two homotopy equivalent metric spaces are compact and sufficiently locally nice, then they are Lipschitz homotopy equivalent (in the obvious sense). The importance of this is that asymptotic results about Lipschitz constants are preserved under Lipschitz homotopy equivalence. That is, for metric spaces $X$ and $Y$, define the Lipschitz norm of a homotopy class $\alpha \in [X,Y]$ to be \[\lVert\alpha\rVert_{\Lip}=\min \{\Lip(f) : f \in \alpha\}.\] Suppose now that $f:X' \to X$ and $g:Y \to Y'$ are Lipschitz homotopy equivalences. Then there are constants $C,K>0$ depending on $f$ and $g$ (but not $\alpha$) such that \[\frac{1}{C}\lVert\alpha\rVert_{\Lip}-K \leq \lVert g \circ \alpha \circ f \rVert_{\Lip} \leq C\lVert\alpha\rVert_{\Lip}+K.\] Therefore, asymptotics such as those in Theorem \ref{main} are invariant under Lipschitz homotopy equivalence. \begin{defn} A \emph{nearly Euclidean CW complex} is a CW complex $X$ equipped with a metric constructed inductively as follows. The 1-skeleton is a metric graph. 
Once we have constructed a metric on $X^{(n-1)}$, we also fix a metric $d_i$ on $D^n$ for every $n$-cell $e_i$, such that $d_i$ is bilipschitz to the standard Euclidean metric and the attaching map $f_i:S^{n-1} \to X^{(n-1)}$ is Lipschitz with respect to the induced metric on $S^{n-1}=\partial D^n$. Then the metric on $X^{(n)}$ is the quotient metric with respect to this gluing. \end{defn} In particular, notice that if $L=\max_i \Lip(f_i)$, then for points $x,y \in X^{(n-1)}$, \[\frac{1}{L}d_{X^{(n-1)}}(x,y) \leq d_{X^{(n)}}(x,y) \leq d_{X^{(n-1)}}(x,y).\] For example, every compact Riemannian manifold is smoothly triangulable; with any such triangulation it is a nearly Euclidean CW complex. More generally, every simplicial complex with a simplexwise Riemannian metric is an example. \begin{prop} \label{he-to-lhe} Suppose that $X$ and $Y$ are homotopy equivalent nearly Euclidean finite CW complexes. Then they are Lipschitz homotopy equivalent. \end{prop} In particular, the metric we constructed on $Z$ in the proof of Lemma \ref{detailed} is nearly Euclidean, and so $Z$ is Lipschitz homotopy equivalent to, for example, any homotopy equivalent compact Riemannian manifold. This follows immediately from the following more general statement: \begin{prop} \label{htpy-to-lip} Let $X$ and $Y$ be nearly Euclidean finite CW complexes, and $A \subset X$ a subcomplex. Let $f:X \to Y$ be a map such that $f|_A$ is Lipschitz. Then $f$ is homotopic rel $A$ to a Lipschitz map. Moreover, if the original map is cellular, then so is the new map. \end{prop} There is another useful consequence of this fact: \begin{cor} \label{CWRiem} Given a finite CW complex $X$, we can always find a homotopy equivalent complex with a nearly Euclidean metric. \end{cor} \begin{proof} We use induction on skeleta. Suppose we have constructed a complex $Y^{(k)}$ with a nearly Euclidean metric and a homotopy equivalence $f:X^{(k)} \to Y^{(k)}$.
Then for every $(k+1)$-cell of $X$ with attaching map $g:S^k \to X^{(k)}$, $f \circ g$ is homotopic to a Lipschitz map $\tilde g:S^k \to Y^{(k)}$. Then we can attach a $(k+1)$-cell along $\tilde g$ and extend $f$ to the $(k+1)$-cell by combining $\tilde g$ and the homotopy. \end{proof} \begin{proof}[{Proof of Prop.~\ref{htpy-to-lip}.}] We start by proving a lemma: \begin{lem} $Y$ is \emph{locally Lipschitz contractible}, that is, for every $y \in Y$, there is a neighborhood $N_y \ni y$ which admits a Lipschitz deformation retraction to a point. In particular, for every $n$, every Lipschitz map $S^n \to N_y$ extends to $D^{n+1}$ (as a Lipschitz map). \end{lem} \begin{proof} We build such a neighborhood by induction on skeleta, using the standard construction for a contractible neighborhood inside a CW complex. Let $y \in Y$, and let $k$ be such that $y$ is contained in an open $k$-cell. Then we can take a ball in that $k$-cell which is Lipschitz contractible in $Y^{(k)}$. Now suppose we have constructed a contractible neighborhood $N(n)$ of $y$ in $Y^{(n)}$, and consider an $(n+1)$-cell with attaching map $f:S^n \to Y$. Then, thinking of the cell as the cone on $S^n$, we can add $f^{-1}(N(n)) \times [0,\varepsilon)$ to our neighborhood. Doing this for every cell gives us a neighborhood in $Y^{(n+1)}$ with an obvious deformation retraction to $N(n)$, which is Lipschitz since the metric on the cell is bilipschitz to the Euclidean metric. \end{proof} We now make $f$ Lipschitz, also by induction on skeleta. Clearly $f|_{X^{(0)}}$ is Lipschitz to begin with. Now suppose that $f|_{X^{(k)}}$ is Lipschitz (notice that this is true with respect to the metric induced from $X^{(k+1)}$ as well as that on $X^{(k)}$) and consider a $(k+1)$-cell not in $A$ with an inclusion map $e:D^{k+1} \to X$. Now take a triangulation of $D^{k+1}$ at a small enough scale that $f \circ e$ takes every simplex into a Lipschitz contractible neighborhood. 
By induction on the skeleta of this triangulation, we deform $f \circ e$ to a Lipschitz map, while leaving it constant on $\partial D^{k+1}$. If $f$ is cellular, then we can construct the $(k+1)$st stage of the homotopy as a map to $Y^{(k+1)}$, rather than to $Y$. Then the resulting map is still cellular. \end{proof} \subsection{Properties of formal spaces} \label{formal} Finally, we need to show that the topological properties of $Z$ are also not too special to be useful. This requires some discussion of properties of formal spaces. One property, which follows from \cite[Proposition 3.1]{Papa}, is that a map between formal spaces which induces isomorphisms on rational cohomology is rationally invertible: \begin{prop} \label{back-and-forth} If $Y$ is formal and $f:Z \to Y$ is a map between simply connected complexes inducing an isomorphism on rational cohomology, then $Z$ is formal, and there is a map $g:Y \to Z$ such that $g \circ f$ induces multiplication by $q^n$ on $H^n(Y;\mathbb{Q})$, for some $q$. \end{prop} Now, given $Y$, we build a rationally equivalent $Z$ which satisfies the topological hypotheses of Lemma \ref{detailed}: \begin{prop} \label{cells} Let $Y$ be a simply connected space with finite-dimensional rational homology, and fix a basis for $H_n(Y;\mathbb{Q})$ for every $n$. Then there is a CW complex $Z$ and a map $f:Z \to Y$ which induces isomorphisms on rational cohomology such that: \begin{enumerate}[(i)] \item The rational cellular chain complex of $Z$ has zero differential; that is, rational cellular chains on $Z$ are in bijection with $H_*(Z;\mathbb Q)$. \item The induced isomorphism $f_*:H_n(Z;\mathbb Q) \to H_n(Y;\mathbb Q)$ maps each cell to a multiple of a basis element. \end{enumerate} \end{prop} \begin{proof} We construct $Z$ and $f$ by induction on skeleta. We set $Z^{(0)}=Z^{(1)}=*$. Now suppose we have built $Z^{(n)}$ and a map $f_n:Z^{(n)} \to Y$ which induces an isomorphism on $H_k({-};\mathbb{Q})$, $k \leq n$. 
By the rational relative Hurewicz theorem, the Hurewicz map induces an isomorphism \[\pi_{n+1}(Y,Z^{(n)}) \otimes \mathbb{Q} \to H_{n+1}(Y,Z^{(n)};\mathbb{Q}) \cong H_{n+1}(Y;\mathbb{Q}).\] So choose elements $\alpha_1,\ldots,\alpha_r \in \pi_{n+1}(Y,Z^{(n)})$ forming a basis for $\pi_{n+1}(Y,Z^{(n)}) \otimes \mathbb{Q}$. We build $Z^{(n+1)}$ by attaching an $(n+1)$-cell $e_i$ along each $\partial\alpha_i$, $i=1,\ldots,r$, and extend $f_n$ to $f_{n+1}:Z^{(n+1)} \to Y$ by mapping each $e_i$ to $Y$ via a representative of $\alpha_i$. Since $(f_n)_*:H_n(Z^{(n)};\mathbb{Q}) \to H_n(Y;\mathbb{Q})$ is an isomorphism, by the long exact sequence of the pair $(Y,Z^{(n)})$, the Hurewicz image of each $\partial\alpha_i$ is zero. Therefore the map \[H_{n+1}(Z^{(n+1)};\mathbb{Q}) \to H_{n+1}(Z^{(n+1)},Z^{(n)};\mathbb{Q})\] is an isomorphism; in other words, the cells of $Z^{(n+1)}$ form a basis for $H_{n+1}(Z^{(n+1)};\mathbb{Q})$. Moreover, by the definition of the extension $f_{n+1}$, the map \[(f_{n+1})_*:\pi_{n+1}(Z^{(n+1)},Z^{(n)}) \otimes \mathbb{Q} \to \pi_{n+1}(Y,Z^{(n)}) \otimes \mathbb{Q}\] is an isomorphism. But these groups are naturally isomorphic to $H_{n+1}(Z^{(n+1)};\mathbb{Q})$ and $H_{n+1}(Y;\mathbb{Q})$, respectively. This shows that $f_{n+1}$ induces an isomorphism on $H_{n+1}({-};\mathbb{Q})$ as well. Once we have done this in every dimension in which $H_*(Y;\mathbb{Q}) \neq 0$, we have constructed the desired $Z$. To satisfy condition (ii), notice that we can always pick the $\alpha_i$ to be integer multiples of the elements of our chosen basis. \end{proof} Now we conclude the section: \begin{proof}[Proof of Theorem \ref{self-maps}] Let $Y$ be a simply connected formal compact Riemannian manifold. Using Proposition \ref{cells}, we can find a complex $Z$ such that the cellular chain complex of $Z$ has zero differential and a rational equivalence $g:Z \to Y$.
Moreover, by Proposition \ref{back-and-forth}, there is a rational equivalence $f:Y \to Z$ such that $f \circ g$ induces multiplication by $a^n$ on $H^n(Y;\mathbb Q)$, for some $a>0$. By Corollary \ref{CWRiem}, we can put a nearly Euclidean metric on $Z$, and by Proposition \ref{he-to-lhe} we can assume $f$ and $g$ are Lipschitz. Now let $r_p:Z \to Z$ be a map that induces multiplication by $p^n$ on $H^n(Z;\mathbb Q)$. By Lemma \ref{detailed} and Proposition \ref{he-to-lhe}, for any nearly Euclidean metric on $Z$, and for every $\ell$, there are $O(p^\ell\ell^{d-1})$-Lipschitz maps $r_{p^\ell}$ homotopic to $r_p^\ell$. Then the maps $f \circ r_{p^\ell} \circ g$ are again $O(p^\ell\ell^{d-1})$-Lipschitz and induce multiplication by $(ap^\ell)^n$ on $H^n(Y;\mathbb R)$. \end{proof} \section{Rational homotopy theory} \label{S:RHT} The remainder of the paper will require machinery from rational homotopy theory. We will assume familiarity with Sullivan's theory of minimal models, referring the reader to \cite{GrMo,FHT} for the general background and \cite{PCDF,scal} for treatments geared towards quantitative topology. Given a simply connected space $Y$ of finite type, we write its minimal DGA over $\mathbb R$ as $\mathcal M_Y^*$. (Note that we will not need to worry about the difference between rational and real homotopy theory.) There is a non-canonical isomorphism \[\mathcal M_Y^*=\bigwedge_{n=2}^\infty V_n, \qquad \text{where }V_n \cong \Hom(\pi_n(Y),\mathbb R).\] The elements of the $V_n$ are called \emph{indecomposables}. If $Y$ is a manifold or simplicial complex, then it has a (non-unique) \emph{minimal model}, a quasi-isomorphism $m_Y:\mathcal M_Y^* \to \Omega^*(Y)$ realizing the generators of the minimal DGA as differential forms. The codomain of the minimal model may also be the algebra $\Omega^*_\flat(Y)$ of flat forms in the sense of Whitney; see \cite[\S2 and 6.1]{scal}. 
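As a standard illustration of these notions (a textbook example, not specific to this paper): for $Y=S^2$,
\[\mathcal M_{S^2}^*=\bigwedge(x,y), \qquad \deg x=2,\quad \deg y=3, \qquad dx=0,\quad dy=x^2,\]
so that $V_2=\langle x \rangle$, $V_3=\langle y \rangle$, and the generator $y$ kills the class of $x^2$, giving $H^*(\mathcal M_{S^2}^*) \cong H^*(S^2;\mathbb R)$. A minimal model $m_{S^2}$ sends $x$ to an area form $\omega$ of total integral $1$; since $\omega \wedge \omega=0$ on a surface, one may take $m_{S^2}(y)=0$.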
When we want to be noncommittal about whether we are using smooth or flat forms, we write $\Omega^*_{(\flat)}(Y)$. We will frequently leave the map $m_Y$ implicit when we speak of the \emph{rationalization} of a map $f:Y \to Z$, which is a map $\rho$ which completes the commutative square \[\xymatrix{ \mathcal M_Z \ar[r]^\rho \ar[d]_{m_Z} & \mathcal M_Y \ar[d]_{m_Y} \\ \Omega^*_{(\flat)}Z \ar[r]^{f^*} & \Omega^*_{(\flat)}Y }\] up to homotopy. This is unique up to DGA homotopy. In the rest of this section, we introduce some prior results in quantitative homotopy theory as well as some information about formal spaces. \subsection{The shadowing principle} The main technical result of \cite{PCDF} shows a kind of coarse density of genuine maps in the space of ``formal'' rational-homotopic maps between spaces $X$ and $Y$. That is, given a homomorphism $\mathcal{M}_Y^* \to \Omega_{(\flat)}^*(X)$, one can produce a nearby genuine map $X \to Y$ whose Lipschitz constant depends on geometric properties of the homomorphism. To state this precisely, we first introduce some definitions. Let $X$ and $Y$ be finite simplicial complexes or compact Riemannian manifolds such that $Y$ is simply connected and has a minimal model $m_Y:\mathcal{M}_Y^*\to\Omega_\flat^*Y$. Fix norms on the finite-dimensional vector spaces $V_k$ of degree $k$ indecomposables of $\mathcal{M}_Y^*$; then for homomorphisms $\varphi:\mathcal{M}_Y^* \to \Omega_\flat^*(X)$ we define the formal dilatation \[\Dil(\varphi)=\max_{2 \leq k \leq \dim X} \lVert\varphi|_{V_k}\rVert_{\mathrm{op}}^{1/k},\] where we use the $L^\infty$ norm on $\Omega_\flat^*(X)$. Notice that if $f:X \to Y$ is an $L$-Lipschitz map, then $\Dil(f^*m_Y) \leq CL$, where the exact constant depends on the dimension of $X$, the minimal model on $Y$, and the norms. Thus the dilatation is an algebraic analogue of the Lipschitz constant. 
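The last assertion can be verified directly (a routine computation which we spell out, though it is not in the original): pullback by an $L$-Lipschitz map $f$ multiplies the $L^\infty$ norm of a $k$-form by at most $L^k$, so for $v \in V_k$,
\[\lVert f^*m_Y(v)\rVert_\infty \leq L^k \lVert m_Y(v)\rVert_\infty \leq L^k \lVert m_Y|_{V_k}\rVert_{\mathrm{op}} \lVert v\rVert,\]
and therefore
\[\Dil(f^*m_Y) \leq \max_{2 \leq k \leq \dim X}\bigl(L^k \lVert m_Y|_{V_k}\rVert_{\mathrm{op}}\bigr)^{1/k}=L \max_k \lVert m_Y|_{V_k}\rVert_{\mathrm{op}}^{1/k},\]
which is the claimed bound $CL$.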
Given a formal homotopy \[\Phi:\mathcal{M}_Y^* \to \Omega_\flat^*(X \times [0,T]),\] we can define the dilatation $\Dil_T(\Phi)$ in a similar way. The subscript indicates that we can always rescale $\Phi$ to spread over a smaller or larger interval, changing the dilatation; this is a formal analogue of defining separate Lipschitz constants in the time and space direction, although in the DGA world they are not so easily separable. Now we can state some results from \cite{PCDF}. They are stated in that paper in terms of smooth forms; for the argument that they can be adapted to flat forms, see \cite[\S6]{scal}. \begin{thm}[{A special case of the shadowing principle, \cite[Thm.~4--1]{PCDF}}] \label{shadow} Let $X$ be a Riemannian manifold or simplicial complex of locally bounded geometry, and let $Y$ be a simply connected compact Riemannian manifold or simplicial complex. Let $\varphi:\mathcal{M}_Y^* \to \Omega_{(\flat)}^*(X)$ be a homomorphism with $\Dil(\varphi) \leq L$ which is formally homotopic to $f^*m_Y$ for some $f:X \to Y$. Then $f$ is homotopic to a $g:X \to Y$ which is $C(X,Y)(L+1)$-Lipschitz and such that $g^*m_Y$ is homotopic to $\varphi$ via a homotopy $\Phi$ with $\Dil_{1/L}(\Phi) \leq C(X,Y)(L+1)$. \end{thm} In other words, one can produce a genuine map by a small formal deformation of $\varphi$. Note that in the above result, $X$ does not have to be compact. In fact, the constants depend only on the bounds on the local geometry of $X$. We also present one relative version of this result: \begin{thm}[{Cf.~\cite[Thm.~5--7]{PCDF}}] \label{relshadow} Let $X$ and $Y$ be finite simplicial complexes or compact Riemannian manifolds, with $Y$ simply connected. Let $f,g:X \to Y$ be two nullhomotopic $L$-Lipschitz maps and suppose that $f^*m_Y$ and $g^*m_Y$ are formally homotopic via a homotopy $\Phi:\mathcal{M}_Y^* \to \Omega_{(\flat)}^*(X \times [0,T])$ with $\Dil_T(\Phi) \leq L$.
Then there is a $C(X,Y)(L+1)$-Lipschitz homotopy $F:X \times [0,T] \to Y$ between $f$ and $g$. \end{thm} It is important for this result that the maps be nullhomotopic, rather than just in the same homotopy class. This is because we did not require our formal homotopy to be in the relative homotopy class of a genuine homotopy. In the zero homotopy class, one can always remedy this by a small modification, but in general the minimal size of the modification may depend in an opaque way on the homotopy class. \subsection{Formal spaces, again} In \S\ref{formal}, we introduced formal spaces as spaces which admit self-maps of a certain type. However, the original definition comes from rational homotopy theory. A space $Y$ is \emph{formal} if $\Omega^*Y$ is quasi-isomorphic to the cohomology ring $H^*(Y;\mathbb{R})$, viewed as a DGA with zero differential. In other words, there is a map $\mathcal{M}_Y^* \to H^*(Y;\mathbb{R})$ which is a quasi-isomorphism. (By \cite[Thm.~12.1]{SulLong}, the definition using any other ground field $\mathbb{F} \supseteq \mathbb{Q}$ is equivalent.) More generally, we say a DGA is \emph{formal} if it is quasi-isomorphic to its cohomology ring. It follows that, while many rational homotopy types may have the same cohomology ring, exactly one of these is formal, and its minimal DGA can be constructed ``formally'' from the cohomology ring: at stage $k$, one adds generators that kill the relative $(k+1)$st cohomology of the map $\mathcal{M}_Y^*(k-1)(\mathbb{F}) \to H^*(Y;\mathbb{F})$. This is the genesis of the term. Another way of saying this from the point of view of minimal models is this. Define the \emph{depth filtration} \[0 \subseteq U_0 \subseteq U_1 \subseteq U_2 \subseteq \cdots\] of $\mathcal M_Y^*$ inductively as follows: \begin{itemize} \item $U_0$ is generated by all indecomposables with zero differential. \item The product respects the filtration: if $u_1 \in U_i$ and $u_2 \in U_j$, then $u_1u_2 \in U_{i+j}$. 
\item $U_i$ contains all indecomposables whose differentials are in $U_{i-1}$. \end{itemize} This filtration is canonical once one fixes the vector spaces of indecomposables. Then formal spaces are those whose cohomology is a quotient of $U_0$. In other words, a minimal DGA is \emph{non-}formal if and only if it has a cohomology class with no representative in $U_0$. Halperin and Stasheff showed \cite[\S3]{HaSt} that a space is formal if and only if one can choose the vector space of indecomposable generators of the minimal model so that the depth filtration $\{U_i\}$ can be refined, non-canonically, to a bigrading $\mathcal{M}_Y^*=\bigwedge_i W_i$, where \[(U_i \cap \text{indecomposables})=W_i \oplus (U_{i-1} \cap \text{indecomposables}).\] In particular, if $w \in W_i$, then $dw \in W_{i-1}$. For every bigrading $\mathcal M_Y^*=\bigwedge_i W_i$ of this form, the map $\hat\rho_t$ sending $w \in W_i \cap V_n$ to $t^{n+i}w$ is a DGA map lifting multiplication by $t^n$ on $H^n(Y;\mathbb F)$. In fact, if $Y$ is a finite complex and $\mathbb F=\mathbb Q$, then any family of lifts of this form can be realized by genuine maps: \begin{thm}[{\cite[Theorem A]{PWSM}}] \label{thm:PWSM} Let $Y$ be a formal, simply connected finite CW complex and let $\hat\rho_t:\mathcal M_Y \to \mathcal M_Y$ be the map \[\hat\rho_t(w)=t^{n+i}w,\qquad w \in W_i \cap V_n,\] for some bigrading $\mathcal M_Y=\bigwedge_i W_i$. Then there is an integer $t_0 \geq 1$ such that for every $z \in \mathbb{Z}$, there is a genuine map $r_z:Y \to Y$ whose rationalization is $\hat\rho_{zt_0}$. \end{thm} The same paper also gives a stronger version of Proposition \ref{back-and-forth}: \begin{thm}[{\cite[Theorem B]{PWSM}}] \label{thm:PWSM-B} Let $Y$ be a formal, simply connected finite CW complex and let $\hat\rho_t:\mathcal M_Y \to \mathcal M_Y$ be the map \[\hat\rho_t(w)=t^{n+i}w,\qquad w \in W_i \cap V_n,\] for some bigrading $\mathcal M_Y=\bigwedge_i W_i$.
Suppose that $Z$ is another simply connected complex and $f:Z \to Y$ is a map inducing an isomorphism on rational cohomology. Then for some $p$, there is a map $g:Y \to Z$ such that the rationalization of $f \circ g$ is $\hat\rho_p$. \end{thm} We then get the following upgraded statement of Theorem \ref{self-maps}: \begin{thm} \label{self-maps+} Let $Y$ be a formal, simply connected finite CW complex whose rational homology is nontrivial in $d$ positive degrees, and let $\hat\rho_t:\mathcal M_Y \to \mathcal M_Y$ be the map \[\hat\rho_t(w)=t^{n+i}w,\qquad w \in W_i \cap V_n,\] for some bigrading $\mathcal M_Y=\bigwedge_i W_i$. Then there is a constant $C>0$, depending on the choice of $\hat\rho_t$ as well as $Y$, such that for every homotopy class in $[Y,Y]$ whose rationalization is $\hat\rho_t$ there is a $(Ct(\log t)^{d-1}+C)$-Lipschitz representative $f:Y \to Y$. \end{thm} \begin{proof} Using Theorems \ref{thm:PWSM} and \ref{thm:PWSM-B}, we obtain topological control over the maps $f$, $g$, and $r_p$ used in the proof of Theorem \ref{self-maps}. Then we see that there are $a$ and $p$ such that for every $q=ap^\ell$ there is a $C_0(q(\log q)^{d-1}+1)$-Lipschitz map $f_q:Y \to Y$ whose rationalization is $\hat\rho_q$, where $C_0$ depends on the family $\hat\rho_t$. Now suppose that $g:Y \to Y$ is some map whose rationalization is $\hat\rho_t$, and let $m_Y:\mathcal M_Y \to \Omega^*Y$ be a minimal model of $Y$. Let $q=ap^\ell$ satisfy $ap^{\ell-1}<t \leq ap^\ell$. Then the map \[f_q^*m_Y\hat\rho_{t/q}:\mathcal M_Y \to \Omega^*Y\] is algebraically homotopic to $g^*m_Y$. Notice also that, with an appropriate norm on indecomposables, the operator norm of $\hat\rho_{t/q}$ is $t/q$. Therefore, by the shadowing principle, there is a $C_1(Y)((t/q)\Lip f_q+1)$-Lipschitz map in the homotopy class of $g$. Then we are done because \[\Lip f_q \leq C_0(pt(\log (pt))^{d-1}+1).
\qedhere\] \end{proof} \section{A finite criterion for scalability} \label{S:optconj} In this section we prove Theorems \ref{optconj} and \ref{elliptic}. In \cite{scal}, it was shown that the following conditions are equivalent for a finite simplicial complex or compact manifold $Y$ which is formal and simply connected: \begin{enumerate}[(i)] \item There is a homomorphism $H^*(Y) \to \Omega_\flat^*Y$ of differential graded algebras which sends each cohomology class to a representative of that class. Here $\Omega_\flat^*Y$ denotes the flat forms, an algebra of not-necessarily-smooth differential forms studied by Whitney. \item There is a constant $C(Y)$ and infinitely many $p \in \mathbb{N}$ such that there is a $C(Y)(p+1)$-Lipschitz self-map which induces multiplication by $p^n$ on $H^n(Y;\mathbb{R})$. \item For all finite simplicial complexes $X$, nullhomotopic $L$-Lipschitz maps $X \to Y$ have $C(X,Y)(L+1)$-Lipschitz nullhomotopies. \item For all $n<\dim Y$, homotopic $L$-Lipschitz maps $S^n \to Y$ have $C(Y)(L+1)$-Lipschitz homotopies. \end{enumerate} Spaces satisfying these conditions are called \emph{scalable}. Now we will show: \begin{thm} \label{thm:finite} If $Y$ is a closed $n$-manifold (or, more generally, satisfies Poincar\'e duality over the reals), the following conditions are also equivalent to those above: \begin{enumerate}[(i),resume] \item There is an injective $\mathbb R$-algebra homomorphism $h:H^*(Y;\mathbb R) \to \bigwedge^*\mathbb R^n$. \item There is a $1$-Lipschitz map $f:\mathbb R^n \to Y$ of positive asymptotic degree. \end{enumerate} \end{thm} \begin{rmk} Condition (v) can also be thought of as saying that there is an injective homomorphism $H^*(Y;\mathbb R) \to H^*(T^n;\mathbb R)$. When is this homomorphism induced by a genuine map $T^n \to Y$ of positive degree? A necessary condition is that the homomorphism can also be realized over the rationals. In fact, this condition is also sufficient.
A homomorphism $H^*(Y; \mathbb Q) \to H^*(T^n;\mathbb Q)$ lifts (non-uniquely) to a homomorphism of minimal models. By \cite[Proposition 3.1]{Papa}, after composing with a self-map $\mathcal M_Y \to \mathcal M_Y$ that induces multiplication by $p^n$ on $H^n(Y;\mathbb Q)$ for some $p$, this homomorphism becomes the rationalization of a genuine map $T^n \to Y$. Such a rational homomorphism, however, does not always exist. For example, take the real Poincar\'e duality space \[Y=(S^2 \times S^2)/(x,*) \sim (*,x) \mathbin{\#} 2\mathbb CP^2 \mathbin{\#} 3 \overline{\mathbb CP^2},\] where $*$ is a basepoint. The cup product $H^2(Y) \times H^2(Y) \to H^4(Y)$ is the quadratic form $\langle 2,1,1,-1,-1,-1 \rangle$, which has discriminant $-2$, and therefore is not rationally equivalent to the quadratic form induced by the cup product $H^2(T^4) \times H^2(T^4) \to H^4(T^4)$. However, $Y$ is scalable, since over the reals, the two quadratic forms are equivalent. To get a manifold counterexample, embed $Y$ in $\mathbb R^{10}$ and let $M$ be the boundary of a thickening $W$ of this embedding. Using Alexander duality and the Mayer--Vietoris sequence, we see that the inclusion $M \to W$ induces an isomorphism \[H^*(Y;\mathbb{Q}) \cong H^*(W;\mathbb{Q}) \xrightarrow{\simeq} H^{\leq 4}(M;\mathbb{Q}),\] and the higher-dimensional classes in $H^*(M;\mathbb{Q})$ are Poincar\'e duals of those coming from $W$. This determines the rational and therefore the real cohomology ring, which is easily seen to be scalable. On the other hand, if there were an injection $H^*(M;\mathbb Q) \to H^*(T^9;\mathbb Q)$, this would induce an injection $H^*(Y;\mathbb Q) \to H^*(T^4;\mathbb Q)$, which we already showed cannot exist. Thus one can distinguish a class of ``rationally scalable'' manifolds within the larger class of scalable spaces. It would be interesting to know what other properties distinguish these two classes.
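For the reader's convenience, here is the comparison of the two quadratic forms used above (a routine check with standard invariants of quadratic forms). Taking discriminants modulo squares, \[\operatorname{disc}\langle 2,1,1,-1,-1,-1\rangle=2\cdot 1\cdot 1\cdot(-1)^3=-2,\] while in the basis $\{dx_i \wedge dx_j\}_{i<j}$ of $H^2(T^4)$ the cup product pairs complementary index sets, so that form splits as three hyperbolic planes and has \[\operatorname{disc}\bigl(3\langle 1,-1\rangle\bigr)=(-1)^3=-1.\] Since $-2/-1=2$ is not a rational square, the two forms are not equivalent over $\mathbb Q$; since both have signature $(3,3)$, they are equivalent over $\mathbb R$.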
\end{rmk} \begin{rmk} When $Y$ is not a closed manifold, there is a similar condition which is necessary for $Y$ to be scalable: \begin{enumerate} \item[(v$'$)] For some $n_1,\ldots,n_N$, there is an injective $\mathbb{R}$-algebra homomorphism \[h:H^*(Y;\mathbb R) \to \bigoplus_{i=1}^N \Lambda^*\mathbb{R}^{n_i}.\] \end{enumerate} We suspect that this condition may be sufficient, but our proof strategy fails to generalize to this situation because of the difference between real and rational coefficients. \end{rmk} \begin{proof}[Proof of Theorem \ref{thm:finite}] We will prove that (i) implies (v$'$) for all simply connected finite complexes (which is straightforward) and that (v) implies (ii) for all simply connected finite complexes (which is an application of the shadowing principle). Then we will show that scalable closed $n$-manifolds satisfy (vi) and, conversely, (vi) implies (v) for any closed $n$-manifold. To see that (i) implies (v$'$), choose a basis $u_1,\ldots,u_N$ for $H^*(Y;\mathbb{Q})$ and let $\omega_1,\ldots,\omega_N$ be the corresponding flat differential forms. Then for each $i$, there is a set of positive measure on which $\omega_i \neq 0$. Since the homomorphism $H^*(Y) \to \Omega_\flat^*(Y)$ is multiplicative almost everywhere, we can choose a point $x_i \in Y$ such that $u_j \mapsto \omega_j|_{x_i}$ is a homomorphism $h_i:H^*(Y;\mathbb{R}) \to \bigwedge^*\mathbb{R}^{n_i}$ such that $h_i(u_i) \neq 0$. Then we can take \[h=(h_1,\ldots,h_N):H^*(Y;\mathbb R) \to \bigoplus_{i=1}^N {\textstyle\bigwedge^*}\mathbb R^{n_i}.\] If Poincar\'e duality is satisfied, then (v$'$) implies (v) since we can project $h$ to some $\bigwedge^*\mathbb R^{n_i}$ under which the image of the fundamental class is nonzero. This projection $h_i$ is still injective: if $h_i(a)=0$ for some nonzero $a$, then by Poincar\'e duality we could choose $b$ with $ab$ equal to the fundamental cohomology class, giving the contradiction $h_i(ab)=h_i(a)h_i(b)=0$. Now we will prove that (v) implies (ii).
Since we know that scalability is invariant with respect to rational homotopy type, by Proposition \ref{cells} we may replace $Y$ with a rationally equivalent complex $Z$ whose rational cellular chain complex has zero differential; in other words, the cells of $Z$ form a basis for $H_*(Z;\mathbb R)$. We equip $Z$ with a nearly Euclidean metric. To show that $Y$ satisfies (ii), it suffices to show that $Z$ does. Fix a bigrading $\mathcal M_Z^* \cong \bigwedge_i W_i$ as described in the previous section. We get a quasi-isomorphism $\varphi:\mathcal M_Z^* \to H^*(Z;\mathbb R)$ by taking a further quotient of the projection $\mathcal M_Z^* \to \bigwedge W_0$, and an automorphism $\rho_t:\mathcal M_Z^* \to \mathcal M_Z^*$ which takes $w \in W_i \cap V_n$ to $t^{n+i}w$; then $\varphi \circ \rho_t=t^{\deg}\varphi$. Moreover, by Theorem \ref{thm:PWSM}, for some $p>1$ there is a genuine self-map $r_p:Z \to Z$ whose rationalization is $\rho_p$, and in particular induces multiplication by $p^n$ on $H^n(Z;\mathbb R)$. We will show that $Z$ satisfies (ii) by induction on skeleta. If $Z^{(n)}$ is scalable, then for every $\ell>0$, the iterate $r_p^\ell|_{Z^{(n)}}$ is homotopic to an $O(p^\ell)$-Lipschitz map. Moreover, for each $(n+1)$-cell, condition (v) lets us build an $O(p^\ell)$-Lipschitz map from an $(n+1)$-cell to $Z^{(n+1)}$ whose degree over that cell is $p^{\ell(n+1)}$. We construct self-maps of $Z^{(n+1)}$ satisfying (ii) by patching these together; this shows that $Z^{(n+1)}$ is also scalable. Now we give the details. Let $\mathbf Z$ and its submanifold $\mathbf Z^{(n)}$ be compact Riemannian manifolds with boundary homotopy equivalent to $Z$ and $Z^{(n)}$. Suppose, by induction, that $Z^{(n)}$ is scalable.
By condition (i), there is an injective homomorphism $H^*(Z^{(n)}) \to \Omega_\flat^*(\mathbf Z^{(n)})$ which sends each class to a representative; composing with $\varphi$ gives a map $\mathcal M_Z^* \to \Omega_\flat^*(\mathbf Z^{(n)})$, and by a Poincar\'e lemma argument this extends to a minimal model $m_{Z,n}:\mathcal M_Z^* \to \Omega_\flat^*(\mathbf Z)$ whose projection to $\Omega_\flat^*(\mathbf Z^{(n)})$ factors through $\varphi$. Then $(r_p^\ell)^*m_{Z,n}$ is formally homotopic to $m_{Z,n}\rho_p^\ell$. By the shadowing principle \ref{shadow} and the Lipschitz homotopy equivalence between $\mathbf Z$ and $Z$, this lets us homotope $r_p^\ell$ to a map $r_{p^\ell,n}:Z \to Z$ which is $O(p^\ell)$-Lipschitz on $Z^{(n)}$. Consider an $(n+1)$-cell with Lipschitz inclusion map $i:[0,1]^{n+1} \to Z$, which is dual to a class $u \in H^{n+1}(Z;\mathbb R)$. By restricting and perhaps rescaling the homomorphism $h$ in the hypothesis, we get a homomorphism \[h_u:H^*(Z;\mathbb R) \to {\textstyle\bigwedge^*}\mathbb R^{n+1}\] such that $h_u(u)=d\vol$ and $h_u(v)=0$ for all $v \in H^{n+1}(Z;\mathbb R)$ dual to other $(n+1)$-cells of $Z$. So consider the map $\tilde\varphi:\mathcal M_Z^* \to \Omega^*(I^{n+1})$ where, independently of $x \in I^{n+1}$, \[\tilde\varphi(a)|_{T_xI^{n+1}}=p^{\ell k}h_u \circ \varphi(a),\qquad a \in \mathcal M_Z^k.\] Applying the shadowing principle, we get an $O(p^\ell)$-Lipschitz map $f:I^{n+1} \to Z$ such that $f^*\varphi$ is related to $\tilde\varphi$ by a formal homotopy \[\Phi:\mathcal M_Z^* \to \Omega^*([0,1]^{n+1} \times [0,p^{-\ell}])\] satisfying $\Dil(\Phi)=O(p^{\ell})$. We can moreover assume that $f$ is cellular, since this can be achieved by a short genuine homotopy which we can append to $\Phi$. Thus it makes sense to discuss the degree of $f$ over an $(n+1)$-cell $e$ of $Z$, which we write $\deg_e(f)$. Let $v \in \mathcal M_Z^*$ be an element which represents the cohomology class dual to $e$.
By Stokes' theorem, \[\deg_e(f)=\int_{[0,1]^{n+1} \times \{0\}} \Phi(v)+\int_{\partial [0,1]^{n+1} \times [0,p^{-\ell}]} \Phi(v)= \begin{cases} p^{\ell(n+1)}+O(p^{\ell n}) & \text{if }v=u \\ O(p^{\ell n}) & \text{otherwise.} \end{cases}\] Now for each $e$, fix a map $i_e:([0,1]^{n+1},\partial [0,1]^{n+1}) \to (Z,Z^{(n)})$ which maps to $e$ with degree 1 and sends all but one of the faces of $[0,1]^{n+1}$ to a single basepoint. We can use these as ``building blocks'' for an $O(p^\ell)$-Lipschitz map $g:[0,1]^n \times [0,p^{-\ell}] \to Z$ whose degree over each cell is the ``error'' of $f$. In other words, by joining $f$ and $g$, we get an $O(p^\ell)$-Lipschitz cellular map $\tilde f:[0,1]^{n+1} \to Z$ whose homotopy class in $\pi_{n+1}(Z,Z^{(n)})$ is $p^{\ell(n+1)}[i]$. In particular, since $\tilde f$ and $r_{p^\ell,n} \circ i$ are in the same class in $\pi_{n+1}(Z,Z^{(n)})$, their restrictions to the boundary are homotopic in $\pi_n(Z^{(n)})$. Therefore, since $Z^{(n)}$ is scalable, using condition (iii) we can construct an $O(p^\ell)$-Lipschitz homotopy between $\tilde f|_{\partial [0,1]^{n+1}}$ and $r_{p^\ell,n} \circ i|_{\partial [0,1]^{n+1}}$. We then extend $r_{p^\ell,n}|_{Z^{(n)}}$ to our $(n+1)$-cell in an $O(p^\ell)$-Lipschitz way using this homotopy on the outer part of the cell and $\tilde f$ on the inner part. After we do this for every $(n+1)$-cell, we get an $O(p^\ell)$-Lipschitz map $Z^{(n+1)} \to Z^{(n+1)}$ that induces the right action on homology. Although this map may not be homotopic to $r_p^\ell|_{Z^{(n+1)}}$, this is sufficient to prove condition (ii) and therefore the inductive step. To see that scalable closed $n$-manifolds satisfy (vi), we use a variant of an argument already used above. Since $Y$ is formal, there is a quasi-isomorphism $q:\mathcal M_Y^* \to H^*(Y;\mathbb{R})$. 
Composing this with the homomorphism $h:H^*(Y;\mathbb{R}) \to \Lambda^*\mathbb{R}^n$, we get a homomorphism \[\varphi:\mathcal M_Y^* \to \Omega^*\mathbb{R}^n\] whose image consists of constant forms, and such that the image of the fundamental class is nonzero. Since $\mathbb{R}^n$ is contractible and has locally bounded geometry, we can apply the shadowing principle to $\varphi$ to produce an $O(1)$-Lipschitz map; by the argument two paragraphs up, this map has positive asymptotic degree. We can turn it into a $1$-Lipschitz map by rescaling. Now we argue that (vi) implies (v). One way to see this is by a direct application of Theorem \ref{balldegbound}, which shows that (vi) implies (v) for any closed $n$-manifold, as well as giving a quantitative result describing how fast the degree goes to $0$ asymptotically. If $Y$ is formal, we can instead use a softer, less technical argument from \cite{scal}. Suppose there is a $1$-Lipschitz map $f:\mathbb{R}^n \to Y$ of positive asymptotic degree. Then we can take the sequence of homomorphisms \[\varphi_t=(f/t)^*m_Y\hat\rho_{1/t}:\mathcal M_Y^* \to \Omega^*_\flat(\mathbb{R}^n),\] where $\hat\rho_u:\mathcal M_Y^* \to \mathcal M_Y^*$ is given by \[\hat\rho_u(w)=u^{n+i}w, \qquad w \in W_i \cap V_n,\] for some bigrading $\mathcal M_Y^*=\bigwedge_i W_i$ such that $W_0$ consists of the indecomposables in $H^*(Y;\mathbb{R})$. These homomorphisms are bounded in the flat norm and therefore have a subsequence that converges to some \[\varphi_\infty:\mathcal M_Y^* \to \Omega^*_\flat(\mathbb{R}^n).\] Then $\varphi_\infty$ factors through the cohomology of $Y$. To see this, notice that for $w \in W_i$, \[\lVert\varphi_t(w)\rVert_\infty \leq t^{-n-i}\Lip(f/t)^n\lVert m_Y(w) \rVert_\infty \leq t^{-i}\lVert m_Y(w) \rVert_\infty,\] and therefore $\varphi_\infty(w)=0$ for $i \geq 1$. Therefore $\varphi_\infty$ factors through $\bigwedge W_0/d\bigwedge_i W_i \cong H^*(Y;\mathbb{R})$. This proves (v). 
\end{proof} \section{Efficient nullhomotopies} \label{S:homotopies} Now we prove Theorem \ref{main:homotopies}, which we restate here: \begin{thm} \label{homotopies} Let $Y$ be a finite formal CW complex with a piecewise Riemannian metric and Lipschitz attaching maps such that $H_n(Y;\mathbb{Q})$ is nonzero for $d$ different values of $n>0$. Then for any finite simplicial complex $X$, any nullhomotopic $L$-Lipschitz map $f:X \to Y$ is $O(L(\log L)^{d-1})$-Lipschitz nullhomotopic. \end{thm} We will use Theorem \ref{self-maps} to prove Theorem \ref{homotopies}. The argument is similar to the proof of (ii)$\Rightarrow$(iii) of the main theorem of \cite{scal}. \begin{proof} Let $X$ be a finite simplicial complex and $f:X \to Y$ a nullhomotopic $L$-Lipschitz map. Fix a minimal model $m_Y:\mathcal M_Y \to \Omega^*Y$ and a family of automorphisms $\rho_t:\mathcal M_Y \to \mathcal M_Y$ which induce the grading automorphisms on cohomology. By Theorem \ref{thm:PWSM}, there is a $p>1$ and a self-map $r_p:Y \to Y$ whose rationalization is $\rho_p$. Moreover, by Theorem \ref{self-maps+}, there is a sequence of $O(\ell^{d-1}p^\ell)$-Lipschitz maps $r_{p^\ell}$ homotopic to the $\ell$th iterate $r_p^\ell$. We will define a nullhomotopy of $f$ by homotoping through a series of maps which are more and more ``locally organized''. Specifically, for $1 \leq \ell \leq s=\lceil\log_pL\rceil$, we look at the map $\rho_{p^{-\ell}}$, which multiplies each generator of degree $n$ by $p^{-\ell k}$ for some $k \geq n$. Thus applying the shadowing principle \ref{shadow} to the map \[f^*m_Y\rho_{p^{-\ell}}:\mathcal{M}_Y^* \to \Omega^*X\] gives a $C(X,Y)(L/p^\ell+1)$-Lipschitz map $f_\ell:X \to Y$.
Similarly, we get a $C(Y)(s^{d-1}p^\ell+1)$-Lipschitz self-map $g_\ell:Y \to Y$ homotopic to $r_{p^\ell}$ by applying the shadowing principle to the map \[r_{p^s}^*m_Y\rho_{p^{\ell-s}}:\mathcal M_Y^* \to \Omega^*Y.\] We will build a nullhomotopy of $f$ through the sequence of maps \[\xymatrix{ f \ar@{-}[rd] & g_1 \circ f_1 \ar@{-}[rd] & g_2 \circ f_2 \ar@{-}[rd] & \ldots \ar@{-}[rd] & r_{p^s} \circ f_s \ar@{-}[r] & \text{const}. \\ & r_p \circ f_1 \ar@{-}[u] & g_1 \circ r_p \circ f_2 \ar@{-}[u] & \ldots & g_{s-1} \circ r_p \circ f_s \ar@{-}[u] }\] As we go right, the \emph{length} (Lipschitz constant in the time direction) of the $\ell$th intermediate homotopy increases---it is $O(s^{d-1} p^\ell)$---while the \emph{thickness} (Lipschitz constant in the space direction) stays a constant $O(s^{d-1}L)$. Thus all together, these homotopies can be glued into an $O(s^{d-1}p^s)$-Lipschitz nullhomotopy of $f$. Informally, the intermediate maps $g_\ell \circ f_\ell$ look at scale $p^\ell/L$ like thickness-$p^\ell$ ``bundles'' or ``cables'' of identical standard maps at scale $1/L$. This structure makes them essentially as easy to nullhomotope as $L/p^\ell$-Lipschitz maps. We now build the aforementioned homotopies: \begin{lem} \label{lem:rp} There is a homotopy $G_\ell:Y \times [0,1] \to Y$ between $g_\ell$ and $g_{\ell-1} \circ r_p$ which has constant length and thickness $O(s^{d-1} p^\ell)$. \end{lem} \begin{lem} \label{lem:fk} There is a homotopy $F_\ell:X \times [0,1] \to Y$ between $f_\ell$ and $r_p \circ f_{\ell+1}$ which has constant length and thickness $O(p^\ell)$.
\end{lem} This induces homotopies of thickness $O(s^{d-1}p^s)$ and length $O(s^{d-1} p^\ell)$: \begin{itemize} \item $G_\ell \circ (f_\ell \times \id)$ from $g_{\ell-1} \circ r_p \circ f_\ell$ to $g_\ell \circ f_\ell$ of thickness $O(s^{d-1}p^s)$ and length $O(p^\ell)$; \item $g_\ell \circ F_\ell$ from $g_\ell \circ f_\ell$ to $g_\ell \circ r_p \circ f_{\ell+1}$ of thickness $O(s^{d-1}p^s)$ and length $O(s^{d-1}p^\ell)$. \end{itemize} It remains to build a homotopy from $r_p$ to the $C(Y)(s^{d-1}p+1)$-Lipschitz map $g_1$. By \cite[Theorem 5--6]{PCDF}, such a homotopy $\tilde G: Y \times [0,1] \to Y$ can be chosen to have thickness $O(s^{d-1})$ and length $O(s^{d(d-1)})$. Thus the homotopy $\tilde G \circ (f_1 \times \id)$ has thickness $O(s^{d-1}p^s)$ and length $O(s^{d(d-1)})$. Finally, the map $f_s$ is $C(X,Y)$-Lipschitz and therefore has a short homotopy to one of a finite set of nullhomotopic simplicial maps $X \to Y$. For each map in this finite set, we can pick a fixed nullhomotopy, giving a constant bound for the Lipschitz constant of a nullhomotopy of $f_s$ and therefore a linear one for $r_{p^s} \circ f_s$. The lengths of these homotopies are bounded above by a geometric series which sums to $O(L(\log L)^{d-1})$, completing the proof of the theorem modulo the two lemmas above. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:rp}.] We use the fact that the maps $g_\ell$ were built using the shadowing principle. Thus, there are formal homotopies $\Psi_i$ of length $C(Y)$ between $r_{p^s}^*m_Y\rho_{p^{i-s}}$ and $g_i^*m_Y$. There is also a formal homotopy $\Upsilon$ between $r_p^*m_Y$ and $m_Y\rho_p$.
This allows us to construct the following formal homotopies: \begin{itemize} \item $\Psi_\ell$, time-reversed, between $g_\ell^*m_Y$ and $r_{p^s}^*m_Y\rho_{p^{\ell-s}}$, of length $C(Y)$; \item $\Psi_{\ell-1}\rho_p$ between $r_{p^s}^*m_Y\rho_{p^{\ell-s}}$ and $g_{\ell-1}^*m_Y\rho_p$, of length $C(Y)p$; \item and $(g_{\ell-1}^* \otimes \id)\Upsilon$ between $g_{\ell-1}^*m_Y\rho_p$ and $g_{\ell-1}^*r_p^*m_Y$, of length $C(Y)$. \end{itemize} Concatenating these three homotopies and applying the relative shadowing principle \ref{relshadow} to the resulting map $\mathcal{M}^*_Y \to \Omega^*(Y \times [0,1])$ rel ends, we get a linear thickness homotopy of length $O(p)$ between the two maps. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:fk}.] We use the fact that the maps $f_\ell$ and $f_{\ell+1}$ were built using the shadowing principle. Thus there are formal homotopies $\Phi_i$ of length $C(X,Y)$ between $f^*m_Y\rho_{p^{-i}}$ and $f_i^*m_Y$. This allows us to construct the following formal homotopies: \begin{itemize} \item $\Phi_\ell$, time-reversed, between $f_\ell^*m_Y$ and $f^*m_Y\rho_{p^{-\ell}}$, of length $C(X,Y)$; \item $\Phi_{\ell+1}\rho_p$ between $f^*m_Y\rho_{p^{-\ell}}$ and $f_{\ell+1}^*m_Y\rho_p$, of length $C(X,Y)p$; \item and $(f_{\ell+1}^* \otimes \id)\Upsilon$ between $f_{\ell+1}^*m_Y\rho_p$ and $f_{\ell+1}^*r_p^*m_Y$, of length $C(X,Y)$. \end{itemize} Concatenating these three homotopies and applying the relative shadowing principle \ref{relshadow} to the resulting map $\mathcal{M}^*_Y \to \Omega^*(X \times [0,1])$ rel ends, we get a linear thickness homotopy of length $O(p)$ between the two maps. \end{proof} \section{Non-formal spaces} \label{S:NF} In this section, we discuss the relationship between the degree and Lipschitz constants of self-maps of non-formal manifolds. First, we note that such manifolds may have no self-maps of degree $>1$ at all. Such manifolds are called \emph{inflexible}; examples of this phenomenon are given in \cite{ArLu,CLoh,CV,Am}.
Manifolds which have self-maps of high degree are called \emph{flexible}. Among flexible manifolds, a distinguished class are those with positive weights. A space $Y$ has \emph{positive weights} if its minimal model $\mathcal M_Y$ has a one-parameter family of ``rescaling'' automorphisms, i.e.\ there is a basis $\{v_i\}$ for the indecomposables and integers $n_i$ such that the map $\lambda_t:\mathcal M_Y \to \mathcal M_Y$ sending $v_i \mapsto t^{n_i}v_i$ is a DGA automorphism for any $t \in (0,\infty)$. This can be thought of as a generalization of formality: a formal space has such automorphisms with $n_i=\dim v_i$. \begin{ex} \label{ex:NF} One non-formal manifold with positive weights is the example given in the introduction, the total space $M$ of the bundle $S^3 \to M \to S^2 \times S^2$ obtained by pulling back the Hopf fibration along a degree 1 map $S^2 \times S^2 \to S^4$. According to \cite[p.~95]{FOT}, its minimal model is given by \[\mathcal M_M=\bigl(\Lambda(a_1^{(2)},a_2^{(2)},b_{11}^{(3)},b_{12}^{(3)},b_{22}^{(3)}) \mid da_i=0,db_{ij}=a_ia_j\bigr)\] and therefore, for any $t$, it has an automorphism which takes $a_i \mapsto ta_i$ and $b_{ij} \mapsto t^2b_{ij}$. Now, \begin{align*} H^5(M;\mathbb Q) &\cong \langle b_{11}a_2-a_1b_{12}, b_{12}a_2-a_1b_{22} \rangle \\ H^7(M;\mathbb Q) &\cong \langle b_{11}a_2^2-a_1a_2b_{12} \sim a_1^2b_{22}-a_1a_2b_{12} \rangle, \end{align*} and therefore this automorphism multiplies elements of $H^5(M;\mathbb Q)$ by $t^3$ and elements of $H^7(M;\mathbb Q)$ by $t^4$. \end{ex} A priori, automorphisms of the minimal model need not be realized by genuine maps of finite complexes. But manifolds with positive weights have self-maps of arbitrarily high degree \cite[Theorem 3.2]{CMV}. In fact, for any family of scaling automorphisms $\lambda_t$, there is some $t_0>0$ such that for every $z \in \mathbb Z$, $\lambda_{zt_0}$ is the rationalization of a genuine map $Y \to Y$ \cite[Theorem A]{PWSM}. 
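As a sanity check (a routine computation, recorded here for convenience), one can verify directly that the maps in Example \ref{ex:NF} are DGA automorphisms and act on cohomology as claimed. Writing $\lambda_t(a_i)=ta_i$ and $\lambda_t(b_{ij})=t^2b_{ij}$, compatibility with the differential follows from \[d(\lambda_t b_{ij})=t^2\,db_{ij}=t^2a_ia_j=\lambda_t(a_i)\lambda_t(a_j)=\lambda_t(db_{ij}),\] and on the representative cocycles \[\lambda_t(b_{11}a_2-a_1b_{12})=t^3(b_{11}a_2-a_1b_{12}),\qquad \lambda_t(b_{11}a_2^2-a_1a_2b_{12})=t^4(b_{11}a_2^2-a_1a_2b_{12}),\] so $H^5(M;\mathbb Q)$ is scaled by $t^3$ and $H^7(M;\mathbb Q)$ by $t^4$, as stated. Note that the weights here are $1$ and $2$, rather than the degrees $2$ and $3$ of the generators.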
Of course, not every flexible manifold has positive weights. For example, if $M$ is inflexible and $N$ has positive weights, then $M \times N$ is flexible, but does not have positive weights. \subsection{Upper bounds on degree} Having introduced the main actors, we prove Theorem \ref{main:NF}, which we restate here for convenience: \begin{thm*} Let $M$ be a closed simply connected $n$-manifold which is not formal. Then either $M$ is inflexible (has no self-maps of degree $>1$) or the maximal degree of an $L$-Lipschitz map $M \to M$ is bounded by $L^\alpha$ for some rational $\alpha<n$. \end{thm*} \begin{ex} As stated in the introduction, for the $7$-manifold $M$ described in Example \ref{ex:NF}, we get $\alpha=20/3<7$. To see this, consider an automorphism $\rho:\mathcal M_M \to \mathcal M_M$ of the minimal model of $M$. Such an automorphism is determined by the images \begin{align*} \rho(a_1) &= t_{11}a_1+t_{12}a_2 \\ \rho(a_2) &= t_{21}a_1+t_{22}a_2, \end{align*} and a computation determines that \[\deg \rho=\rho([M])=(\det T)^2 [M]\] where $T=\begin{pmatrix} t_{11}&t_{12} \\ t_{21}&t_{22} \end{pmatrix}$. Let $\lambda_1,\lambda_2$ be the eigenvalues of $T$ with $\lvert\lambda_1\rvert \leq \lvert\lambda_2\rvert$. Then the action of $\rho$ on $H^5(M;\mathbb R)$ has eigenvalues $\lvert\lambda_1\rvert^2\overline{\lambda_2}$ and $\overline{\lambda_1}\lvert\lambda_2\rvert^2$, and therefore, by Lemma \ref{iterates} below, for any self-map $f:M \to M$ whose rationalization is $\rho$, \[\Lip f \geq \lvert \lambda_1\lambda_2^2 \rvert^{1/5} \geq \lvert\det T\rvert^{3/10}=\lvert\deg f\rvert^{3/20}.\] \end{ex} \begin{proof}[Proof of Theorem \ref{main:NF}] We prove the contrapositive. Suppose that there is a sequence of maps $f_i:M \to M$ with strictly increasing degrees such that for every $\alpha<n$, $\deg f_i$ eventually grows faster than $(\Lip f_i)^\alpha$. We will show that $M$ must be formal. 
This requires a lemma: \begin{lem} \label{iterates} Let $f:M \to M$, and suppose the induced map $f_*:H^k(M;\mathbb{C}) \to H^k(M;\mathbb{C})$ has an eigenvalue $\lambda$. Then \[\Lip f \geq \lvert\lambda\rvert^{1/k}.\] \end{lem} \begin{proof} The eigenvalue $\lambda$ is either real or one of a conjugate pair of complex eigenvalues. If it is real, choose a $\lVert{\cdot}\rVert_\infty$-minimizing form $\omega \in \Omega^k_\flat(M)$ among those which represent an eigenvector $a \in H^k(M;\mathbb{R})$. Then \[\lvert\lambda\rvert\cdot\lVert\omega\rVert_\infty \leq \lVert f^*\omega \rVert_\infty \leq (\Lip f)^k \lVert\omega\rVert_\infty.\] If $\lambda$ is not real, choose an invariant two-dimensional subspace of $H^k(M;\mathbb{R})$ whose complexification contains eigenvectors for $\lambda$ and $\overline\lambda$, and within this, an $f^*/\lvert\lambda\rvert$-invariant ellipse $E$. Let $\omega \in \Omega^k_\flat(M)$ be a $\lVert{\cdot}\rVert_\infty$-minimizing form among those representing elements of $E$. Then once again \[\lvert\lambda\rvert\cdot\lVert\omega\rVert_\infty \leq \lVert f^*\omega \rVert_\infty \leq (\Lip f)^k \lVert\omega\rVert_\infty. \qedhere\] \end{proof} Now suppose $f:M \to M$ is of degree $d$ and $f_*:H^k(M;\mathbb{C}) \to H^k(M;\mathbb{C})$ has some eigenvalue $\lambda$ such that $\lvert\lambda\rvert \neq d^{k/n}$. Then either $\lvert\lambda\rvert>d^{k/n}$, or by Poincar\'e duality the induced map on $H^{n-k}(M;\mathbb{C})$ has an eigenvalue $\mu$ with $\lvert\mu\rvert>d^{\frac{n-k}{n}}$. Now consider the automorphisms $\varphi_i:\mathcal L_M(\mathbb{C}) \to \mathcal L_M(\mathbb{C})$ induced by the $f_i$. Here $\mathcal L_M(\mathbb{C})$ is the complexified \emph{Lie minimal model} of $M$, a free differential graded Lie algebra whose indecomposables in degree $k$ are $L_k \cong H_*(M;\mathbb{C})$. The Lie minimal model is in many ways dual to the Sullivan minimal model; see \cite[Part IV]{FHT} for the detailed theory.
The endomorphisms of $\mathcal L_M$ form an affine variety in the vector space of linear maps $H_*(M;\mathbb{C}) \to H_*(M;\mathbb{C})$, and the automorphisms $\Aut(\mathcal L_M(\mathbb{C}))$ form a linear algebraic group which is Zariski open inside that variety. Moreover, the Zariski closure of $\Aut(\mathcal L_M(\mathbb{C}))$, which is the same as its metric closure, is contained in the endomorphism variety. By our hypotheses and Lemma \ref{iterates}, as $i \to \infty$, the absolute values of eigenvalues of $(f_i)_*:H_k(M;\mathbb{C}) \to H_k(M;\mathbb{C})$ (and therefore of $\varphi_i|_{L_k}$) uniformly approach $(\deg f_i)^{k/n}$. That is, for any such eigenvalue $\lambda$, \[k/n-C_i \leq \log_{\deg f_i} \lvert\lambda\rvert \leq k/n+C_i, \qquad \text{where }\lim_{i \to \infty} C_i=0.\] We now apply the theory of linear algebraic groups; see e.g.\ \cite[Ch.~IV, \S11]{Borel}. Choose a Borel subgroup $G \subseteq \Aut(\mathcal L_M(\mathbb{C}))$, say the subgroup of matrices which are upper triangular with respect to some basis of $H_*(M;\mathbb{C})$. Every $\varphi_i$ is conjugate to some $\varphi_i' \in G$. Moreover, since $G$ is abelian-by-unipotent, for every $\varphi_i'$ it also contains the diagonal matrix $\varphi_i''$ obtained by zeroing out the off-diagonal entries of $\varphi_i'$. Then the sequence $\psi_i=(\varphi_i'')^{\log_{\deg f_i} 2^n}$ has a subsequence which converges to some $\psi_\infty:\mathcal L_M(\mathbb{C}) \to \mathcal L_M(\mathbb{C})$ whose eigenvalues on $L_k$ have absolute value $2^k$. Now we use the diagonalizability of $\psi_\infty$. Notice that if $a \in L_k$ is an eigenvector of $\psi_\infty$, then so is $\partial a \in \mathcal L_M(\mathbb{C})_{k-1}$. Eigenspaces in $\mathcal L_M(\mathbb{C})_{k-1}$ are spanned by iterated brackets of eigenvectors in various $L_j$; therefore $\partial a$ is a sum of brackets which are all eigenvectors for the same eigenvalue.
In other words, if we replace each eigenvalue with its absolute value, then the resulting linear map, which sends every element $a \in L_k$ to $2^k a$, is still an automorphism of $\mathcal L_M(\mathbb{C})$. This automorphism descends to $\mathcal L_M(\mathbb{Q})$. Since the automorphisms of a rational minimal model are the same as those of the rationalized space $M_{(0)}$, this shows that $M$ is formal. \end{proof} \subsection{Lower bounds on degree} Using the techniques of \S\ref{S:lower}, we can give lower bounds on the maximal degree of an $L$-Lipschitz self-map of a manifold with positive weights that complement the upper bound of Theorem \ref{main:NF}: \begin{thm} \label{lower:NF} Let $Y$ be a space with positive weights and $\rho_t:\mathcal M_Y \to \mathcal M_Y$ a scaling automorphism of its minimal model. Let $\{z_i\}$ be a basis for the rational homology of $Y$ such that $\rho_t$ induces the map $z_i \mapsto t^{n_i}z_i$, let \[\alpha=\max_{i \geq 2} \frac{n_i}{\dim z_i},\] and let $d$ be the number of values of $\dim z_i$ for which this maximum is attained. Then there are integers $a>0$ and $p>1$ such that for every $q=ap^\ell$ there is an $O(q^{\alpha}(\log q)^{d-1})$-Lipschitz map whose rationalization is $\rho_q$. \end{thm} \begin{ex} In particular, this shows that the $7$-manifold $M$ described in Example \ref{ex:NF} has $L$-Lipschitz self-maps of degree $\sim L^{20/3}$: the bound of Theorem \ref{main:NF} is asymptotically sharp in this case. This is because for the automorphism $\rho_t:\mathcal M_M \to \mathcal M_M$ defined by \[a_i \mapsto ta_i, \qquad b_{ij} \mapsto t^2b_{ij},\] we get $n_i/\dim z_i=1/2$ when $z_i$ is any $2$-cycle, $3/5$ when $z_i$ is any $5$-cycle, and $4/7$ when $z_i$ is any 7-cycle. Thus the maximum is only attained in dimension 5, and therefore the number $d$ defined in the statement of Theorem \ref{lower:NF} is $1$ in this case.
For a map $f:M \to M$ whose rationalization is $\rho_t$, we have $\deg f=t^4$; by Theorem \ref{lower:NF}, there are such maps which are $O(t^{3/5})$-Lipschitz. \end{ex} \begin{proof}[Proof of Theorem \ref{lower:NF}] The proof is almost identical to that of Theorem \ref{self-maps}, so we give an outline and indicate the main differences. As with Theorem \ref{self-maps}, we first reduce to the case of a nearly Euclidean cell complex $Z$ whose cells are in bijection with the basis for $H_*(Z;\mathbb Q) \cong H_*(Y;\mathbb Q)$ specified in the positive weight decomposition. Such a complex exists by Proposition \ref{cells}. The reduction is exactly the same as before, but requires a generalization of Proposition \ref{back-and-forth}: \begin{prop}[{\cite[Thm.~B]{PWSM}; see also the slightly weaker \cite[Thm.~3.4]{BCC}}] \label{back-and-forth+} Let $Y$ be a space with positive weights, and let $\rho_t:\mathcal M_Y \to \mathcal M_Y$ be a one-parameter family of automorphisms. If $f:Z \to Y$ is a map between simply connected complexes inducing an isomorphism on rational cohomology, then it is a rational equivalence, and there is a $t$ and a map $g:Y \to Z$ such that the rationalization of $f \circ g$ is $\rho_t$. \end{prop} Now, by \cite[Theorem A]{PWSM}, there is a $p>1$ and a map $r_p:Z \to Z$ whose rationalization is $\rho_p$. As in Lemma \ref{detailed}, we construct maps $r_{p^\ell}$ homotopic to the iterates $r_p^\ell$, bounding the Lipschitz constant by induction on both $\ell$ and the dimension. We also construct controlled homotopies $H_\ell$ from $r_{p^{\ell-1}} \circ r_p$ to $r_{p^\ell}$. There are two main points on which the proof differs from that of Lemma \ref{detailed}. First, as in Lemma \ref{detailed}, we assume that $r_p$ has a nice geometric form. Specifically, we assume that for every $n$-cell $e_i$, $\overline{r_p^{-1}(e_i)}$ is a grid inside $e_i$ of homothetic preimages of $e_i$.
Rather than $p$ to a side, this grid has $p^{n_i/n}$ subcubes to a side, where $n_i$ is the ``weight'' of the homology class $[e_i]$. For this to make sense, $p^{n_i/\dim z_i}$ must be an integer; we can make sure this is true for every $i$ by iterating $r_p$ at most $(\dim Z)!$ times. The other main difference is in the Lipschitz constant estimate. As before, we set \begin{align*} L_1 &= 2\Lip(H_\ell|_{Z^{(n-1)}}) \\ L_2 &= 2\Lip(r_p)\Lip(r_{p^{\ell-1}}|_{Z^{(n-1)}}) \\ L_3 &= D^{-1}\Lip(r_{p^{\ell-1}}), \end{align*} where $D$ is the side length of a subcube. Then the Lipschitz constant of $r_{p^\ell}$ on a cell $e_i$ is bounded by \[p^{n_i/n}DL_3+\left(\frac{1}{2}-p^{\alpha_n}D\right)L_2+\frac{1}{2}L_1.\] Now the proof splits into cases. Suppose, by induction, that \begin{align*} \Lip(r_{p^\ell}|_{Z^{(n-1)}}) &\leq C(n-1)\ell^{d_{n-1}}p^{\alpha_{n-1}\ell} \\ \Lip(H_\ell|_{Z^{(n-1)}}) &\leq C'(n-1)\ell^{d_{n-1}}p^{\alpha_{n-1}\ell}. \end{align*} If $\alpha_{n-1}=n_i/n$, then the proof is exactly as before and \begin{align*} \Lip(r_{p^\ell}|_e) &\leq C(n)\ell^{d_{n-1}+1}p^{\alpha_{n-1}\ell} \\ \Lip(H_\ell|_e) &\leq C'(n)\ell^{d_{n-1}+1}p^{\alpha_{n-1}\ell} \end{align*} for sufficiently large $C(n)$ and $C'(n)$ depending on $Z$ and $r_p$. If $\alpha_{n-1}<n_i/n$, then the estimate for the Lipschitz constant is dominated by the $L_3$ term. After substituting the expression for the bound on $\Lip(r_{p^{\ell-1}})$ and summing a geometric series, we see that \[\Lip(r_{p^\ell}|_e) \leq C(n)p^{(n_i/n)\ell}\] for sufficiently large $C(n)$. Finally, if $\alpha_{n-1}>n_i/n$, then the estimate for the Lipschitz constant is dominated by the $L_1$ and $L_2$ terms, and therefore, for sufficiently large $C(n)$, \[\Lip(r_{p^\ell}|_e) \leq C(n)\ell^{d_{n-1}}p^{\alpha_{n-1}\ell}.\] Similar estimates hold for the Lipschitz constant of $H_\ell$. 
This gives the estimate in the theorem: the polynomial power in the Lipschitz constant is governed by the largest possible value of $n_i/n$, and the power of the polylogarithm is governed by the number of $n$ for which that value is attained. \end{proof} \begin{rmk} The methods of this theorem do not extend to manifolds without positive weights because Proposition \ref{back-and-forth+} fails. For example, suppose $M$ is rationally equivalent to $N=P \times Q$, where $P$ has positive weights and $Q$ does not. Then if $f:P \to P$ is a map of degree $>1$, so is $f \times \id_Q:N \to N$, and Theorem \ref{lower:NF} lets us find efficient maps homotopic to $f^\ell$ for $\ell \geq 1$. However, this does not automatically tell us whether $M$ has self-maps of positive degree or, if it does, anything about the Lipschitz constants of these maps. It would be interesting either to show that these properties are rationally invariant or to find examples in which they are not. \end{rmk} \bibliographystyle{amsalpha}
https://arxiv.org/abs/2207.12347
Degrees of maps and multiscale geometry
We study the degree of an $L$-Lipschitz map between Riemannian manifolds, proving new upper bounds and constructing new examples. For instance, if $X_k$ is the connected sum of $k$ copies of $\mathbb{CP}^2$ for $k \ge 4$, then we prove that the maximum degree of an $L$-Lipschitz self-map of $X_k$ is between $C_1 L^4 (\log L)^{-4}$ and $C_2 L^4 (\log L)^{-1/2}$. More generally, we divide simply connected manifolds into three topological types with three different behaviors. Each type is defined by purely topological criteria. For scalable simply connected $n$-manifolds, the maximal degree is $\sim L^n$. For formal but non-scalable simply connected $n$-manifolds, the maximal degree grows roughly like $L^n (\log L)^{\theta(1)}$. And for non-formal simply connected $n$-manifolds, the maximal degree is bounded by $L^\alpha$ for some $\alpha < n$.
https://arxiv.org/abs/1807.02055
Solving isomorphism problems about 2-designs from disjoint difference families
Recently, two new constructions of $(v,k,k-1)$ disjoint difference families in Galois rings were presented by Davis, Huczynska, and Mullen and Momihara. Both were motivated by a well-known construction of difference families from cyclotomy in finite fields by Wilson. It is obvious that the difference families in the Galois ring and the difference families in the finite field are not equivalent. A related question which is in general harder to answer is whether the associated designs are isomorphic or not. In our case, this problem was raised by Davis, Huczynska and Mullen. In this paper we show that the $2$-$(v,k,k-1)$ designs arising from the difference families in Galois rings and those arising from the difference families in finite fields are nonisomorphic by comparing their block intersection numbers.
\section{Introduction} \label{sec:introduction} Various types of difference families have long been studied in the combinatorial literature \cite{abel2006,beth1999,chang2006,furino1991,wilson1972}. They have applications in coding theory, communications, and information security \cite{ng2016}, and they have connections to many other combinatorial objects, in particular to combinatorial designs. When a new construction of difference families is presented, as recently by \textcite{davis2017} and by \textcite{momihara2017}, it is natural to ask whether the new construction also leads to new difference families. In this paper, we compare two infinite families of difference families in Galois rings \cite{davis2017,momihara2017} with a well-known infinite family of difference families in finite fields that was introduced by \textcite{wilson1972} in 1972. Since the difference families we compare live in different groups, they cannot be equivalent. Hence, we will examine whether the associated combinatorial designs are isomorphic or not. This question is of particular interest because the authors of both \cite{davis2017} and \cite{momihara2017} mention that they were inspired by the construction of \textcite{wilson1972}, and the authors of \cite{davis2017} state explicitly that it is not known in general whether the associated designs are nonisomorphic.\par We start by defining the relevant objects we examine in this paper. First, we need the following notation: Let $G$ be an abelian group, $A,B \subseteq G$ and $g \in G$. We define the multisets \begin{align*} \Delta A &:= \{ a - a' : a, a' \in A, a \ne a'\},\\ \Delta_+ A &:= \{ a + a' : a, a' \in A, a \ne -a'\},\\ A-B &:= \{a-b : a \in A, b \in B, a \ne b\},\\ A + B &:= \{a+b: a \in A, b \in B, a \ne -b\},\\ A + g &:= \{a + g : a \in A\}. \end{align*} In the course of this paper we will sometimes use these notations to denote sets, not multisets. 
It will be clear from the context if the multiset or the respective set is meant. \begin{definition} \label{def:DF} Let $G$ be an abelian group of order $v$, and let $D_1, D_2, \dots, D_b$ be $k$-subsets of $G$. The collection $D = \{D_1, D_2, \dots, D_b\}$ of these subsets is called a \emph{difference family in G with parameters $(v,k,\lambda)$} if each nonzero element of $G$ occurs exactly $\lambda$ times in the multiset union \[ \bigcup_{i=1}^b \Delta D_i. \] If the subsets $D_1, D_2, \dots, D_b$ are mutually disjoint, they form a \emph{disjoint difference family}. If $b = 1$, one speaks of a \emph{$(v,k,\lambda)$ difference set}. We call $D$ \emph{complete} (or \emph{near-complete}) if the $D_i$ partition $G$ (or $G \setminus \{0\}$, respectively). \end{definition} In this paper we will focus on near-complete $(v,k,k-1)$ disjoint difference families. These objects are closely related to so-called external difference families: \begin{definition} \label{def:EDF} Let $G$ be an abelian group of order $v$, and let $D_1, D_2, \dots, D_b$ be mutually disjoint $k$-subsets of $G$. The collection $D = \{D_1, D_2, \dots, D_b\}$ of these subsets is called a \emph{$(v,k,\lambda)$ external difference family} if each nonzero element of $G$ occurs exactly $\lambda$ times in the multiset union \[ \bigcup_{\substack{1 \le i,j \le b \\ i \ne j}} \left(D_i - D_j\right). \] Analogously to \autoref{def:DF} we call an external difference family \emph{complete} (or \emph{near-complete}) if the $D_i$ partition the (nonzero) elements of $G$. \end{definition} The following \autoref{prop:EDF_DDF} shows that under certain conditions a disjoint difference family is also an external difference family. This result was observed by \textcite{momihara2017}, and, for near-complete disjoint difference families, it was also mentioned by \textcite{chang2006} and \textcite{davis2017}. We will add a short proof. 
\begin{proposition} \label{prop:EDF_DDF} Let $G$ be an abelian group of order $v$, and let $D = \{D_1, D_2, \dots, D_b\}$ be a $(v,k,\lambda)$ disjoint difference family in $G$. The collection $D$ forms an external difference family in $G$ if and only if the union $\bigcup_{i=1}^{b} D_i$ of the $D_i$ is a $(v,bk,\lambda')$ difference set in $G$ for some integer $\lambda'>\lambda$. As an external difference family, $D$ has parameters $(v,k,\lambda'-\lambda)$. \end{proposition} \begin{proof} Let $G$ be an abelian group of order $v$, and let $D = \{D_1, D_2, \dots, D_b\}$ be a collection of mutually disjoint $k$-subsets whose union $\bigcup_{i=1}^{b} D_i$ is a $(v,bk,\lambda')$ difference set for some integer $\lambda'>\lambda$. We can split all the differences in $\bigcup_{i=1}^{b} D_i$ in the following way into the \enquote{internal} and the \enquote{external} differences of the $D_i$: \[ \Delta \left(\bigcup_{i=1}^{b} D_i\right) = \bigcup_{i=1}^b \Delta D_i \cup \bigcup_{\substack{1 \le i,j \le b \\ i \ne j}} (D_i-D_j). \] Since $\bigcup_{i=1}^{b} D_i$ is a difference set, each element $g \in G \setminus \{0\}$ is represented by $\lambda'$ differences in $\Delta(\bigcup_{i=1}^{b} D_i)$. It follows that each nonzero element of $G$ is represented by $\lambda$ differences in $\bigcup_{i=1}^b \Delta D_i$ (meaning $D$ is a $(v,k,\lambda)$ disjoint difference family) if and only if it is represented $\lambda'-\lambda$ times in $\bigcup_{1 \le i,j \le b,\, i \ne j} (D_i-D_j)$ (meaning $D$ is a $(v,k,\lambda'-\lambda)$ external difference family). \end{proof} From \autoref{prop:EDF_DDF} it follows that every near-complete $(v,k,k-1)$ disjoint difference family is also a near-complete $(v,k,v-k-1)$ external difference family because the $D_i$ partition $G\setminus\{0\}$ and $G\setminus\{0\}$ is a $(v,v-1,v-2)$ difference set in $G$. 
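The relation between internal and external differences can be checked computationally on a small example. The following Python sketch (our illustration, not part of the original text) uses the two cosets of the squares in $\Z_{11}^*$, which form a near-complete $(11,5,4)$ disjoint difference family and hence, by \autoref{prop:EDF_DDF}, an $(11,5,5)$ external difference family:

```python
from collections import Counter
from itertools import product

v = 11
# The two cosets of the squares in Z_11^*: quadratic residues and non-residues.
D1 = sorted({(x * x) % v for x in range(1, v)})   # [1, 3, 4, 5, 9]
D2 = sorted(set(range(1, v)) - set(D1))           # [2, 6, 7, 8, 10]
blocks = [D1, D2]
k = len(D1)

# Internal differences: each nonzero g occurs k - 1 = 4 times,
# so {D1, D2} is a near-complete (11, 5, 4) disjoint difference family.
internal = Counter((a - b) % v for D in blocks
                   for a, b in product(D, D) if a != b)
assert all(internal[g] == k - 1 for g in range(1, v))

# External differences: each nonzero g occurs v - k - 1 = 5 times,
# matching lambda' - lambda = (v - 2) - (k - 1) for the (11, 10, 9)
# difference set Z_11 \ {0}.
external = Counter((a - b) % v
                   for Di, Dj in product(blocks, blocks) if Di is not Dj
                   for a, b in product(Di, Dj))
assert all(external[g] == v - k - 1 for g in range(1, v))
```

Note that the internal and external counts add up to $\lambda' = v-2 = 9$ for every nonzero element, exactly as in the proof above.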
For extended background on $(v,k,k-1)$ disjoint difference families the reader is referred to \textcite{buratti2017}, who gives an overview of these difference families and summarizes several constructions, including the one by \textcite{davis2017} (see \autoref{sec:EDF_GR} for some additional information). As mentioned above, every difference family gives rise to a combinatorial design. \begin{definition} \label{def:design} Let $P$ be a set with $v$ elements (\emph{points}). A \emph{$t$-$(v,k,\lambda)$~design} (or \emph{$t$-design}, in brief) is a collection of $k$-subsets (\emph{blocks}) of $P$ such that each $t$-subset of $P$ is contained in exactly $\lambda$ blocks. \end{definition} The designs coming from difference families are $2$-designs, which are often referred to as \emph{balanced incomplete block designs (BIBD)}. They can be constructed from difference families in the following way. \begin{definition} Let $G$ be an abelian group, and let $D = \{D_1, D_2, \dots, D_b\}$ be a family of subsets of $G$. The \emph{development $dev(D)$ of $D$} is the collection \[ \left\lbrace D_i + g : D_i \in D, g \in G\right\rbrace \] of all the translates of the subsets contained in $D$. The sets $D_1, D_2, \dots, D_b$ are called the \emph{base blocks} of $dev(D)$. \end{definition} In other words: The development $dev(D)$ of $D$ contains the orbits of the sets $D_i \in D$ under the action of $G$. If all the orbits have full length, $dev(D)$ consists of $vb$ blocks. The following \autoref{prop:dev_design} is well known. We will add a proof for completeness. \begin{proposition} \label{prop:dev_design} Let $D$ be a $(v,k,\lambda)$ difference family in an abelian group $G$. The development $dev(D)$ of $D$ forms a $2$-$(v,k,\lambda)$ design with point set $G$. \end{proposition} \begin{proof} Let $D = \{D_1, D_2, \dots, D_b\}$ be a $(v,k,\lambda)$ difference family in an abelian group $G$. Take an arbitrary $2$-subset $\{t_1, t_2\}$ of $G$. 
We need to show that $\{t_1, t_2\}$ is contained in $\lambda$~blocks of $dev(D)$. Let $d = t_1-t_2$. Since $d \ne 0$, $d$ is represented $\lambda$ times as a difference $d = d' - d''$, where $d', d'' \in D_i, 1\le i \le b$. Obviously, the differences in a base block $D_i$ and in all its translates $D_i+g, g \in G$, are the same, in short: $\Delta D_i = \Delta (D_i + g)$ for all $1 \le i \le b$ and $g \in G$. Hence, for each of the $\lambda$ pairs $d', d''$ we choose $g$ such that $d' + g = t_1$. Then $d'' + g = t_2$ and consequently $\{t_1, t_2\} \subseteq D_i + g$. \end{proof} \section{Disjoint difference families from cyclotomy in finite fields} \label{sec:DDF_Fq} First, we present the well-known construction of disjoint difference families in finite fields by \textcite{wilson1972}. It makes use of the cyclotomy of the $e$-th powers in a finite field. Let $q$ be a power of a prime $p$. We denote by $\Fq$ the finite field with $q$ elements and by $\alpha$ a generator of the multiplicative group $\Fq^*$ of $\Fq$. \begin{theorem} \label{th:Fq_DDF} Let $e,f$ be integers satisfying $ef = q-1$, $e,f \geq 2$, and let \begin{align*} \label{eq:C_i} C_i = \{ \alpha^{t}\ | \ t \equiv i \pmod e \}, i = 0, 1, \dots, e-1, \end{align*} be the cosets of the unique subgroup $C_0$ of index $e$ and order $f$ formed by the $e$-th powers of~$\alpha$ in $\Fq^*$. Then, the family $C = \{C_0, C_1, \dots, C_{e-1}\}$ of all these cosets forms a $(q, f, f-1)$ near-complete disjoint difference family in the additive group $(\Fq,+)$. \end{theorem} \begin{proof} This proof is similar to the proof by \textcite[Theorem~2.1]{davis2017} where the authors prove that $C$ is a near-complete external difference family (see \autoref{prop:EDF_DDF}). Let $x,y \in \Fq^*$, and let $z = yx^{-1}$. Suppose $x = c - c'$ for $c, c' \in C_i, 0 \le i \le e-1$. Then $y = zx = zc - zc'$ and, obviously, $zc, zc'$ are in the same coset $C_j$. 
Hence, we have found a representation of $y$ as the difference of two distinct elements from the same set $C_j$. Conversely, every such difference representation of $y$ yields one of $x$. Consequently, every element of $\Fq^*$ has the same number of such difference representations. So, all we need to do is to count the total number of differences and divide it by the number of elements in $\Fq^*$: There are $e$~sets~$C_i$, and in each $C_i$ we can calculate $f(f-1)$ differences, giving us a total of $ef(f-1)$ differences. In $\Fq^*$ there are $q-1 =ef$ elements. Hence, each element $x \in \Fq^*$ will have \[ \frac{ef(f-1)}{ef} = f-1 \] differences $x = c -c'$ where $c$ and $c'$ come from the same set $C_i$. \end{proof} We remark that this theorem can also be proved using the results of \textcite[Theorem~3.3, Corollary~3.5]{furino1991}. Employing the construction of a $2$-design mentioned above, the development $dev(C)$ of $C$ is a $2$-$(q, f, f-1)$~design. \section{Galois rings} \label{sec:Galois_rings} In this section, we will give a short introduction to Galois rings; see the work by \textcite{wan2003} for extended general background on this topic. Let $p$ be a prime, and let $f(x) \in \Z_{p^m}[x]$ be a monic basic irreducible polynomial of degree~$r$. The factor ring $\Z_{p^m}[x]/ \langle f(x)\rangle$ is called a \emph{Galois ring} of characteristic $p^m$ and extension degree $r$. It is denoted by $\GR(p^m, r)$, and its order is $p^{mr}$. Since any two Galois rings of the same characteristic and the same order are isomorphic, we will speak of \emph{the} Galois ring $\GR(p^m, r)$.\par Galois rings are local commutative rings. The unique maximal ideal of the ring $R := \GR(p^m, r)$ is $\I = pR := \{pa : a \in R\}$. The factor ring $R / \I$ is isomorphic to the finite field $\F_{p^r}$ with $p^r$ elements. 
As a system of representatives of $R/\I$ we take the \emph{Teichmüller set} $\T = \{0, 1, \xi, \dots, \xi^{p^r-2}\}$, where $\xi$ denotes a root of order $p^r-1$ of $f(x)$. It is convenient to choose the \emph{generalized Conway polynomial}, i.\,e. the Hensel lift from $\F_p[x]$ to $\mathbb{Z}_{p^m}[x]$ of the Conway polynomial, as our polynomial $f(x)$ since then $x +(f)$ is a generator of the Teichmüller group, and we set $\xi = x + (f)$ (see \autocite[Section~1.3]{zwanzger2011} for more information on the generalized Conway polynomial and its construction). An arbitrary element $a$ of $R$ has a unique \emph{$p$-adic representation} $a = \alpha_0 + p\alpha_1 + \dots + p^{m-1}\alpha_{m-1}$, where $\alpha_0, \alpha_1, \dots, \alpha_{m-1} \in \T$.\par The elements of $R \setminus \I$ are all the units of $R$; this \emph{unit group} is denoted by $R^*$. It has order $p^{mr} - p^{(m-1)r} = p^{(m-1)r}(p^r-1)$ and is the direct product of the cyclic \emph{Teichmüller group $\T^* = \T \setminus \{0\} = \{ 1, \xi, \dots, \xi^{p^r-2}\}$} of order $p^r-1$ and the \emph{group of principal units $\mathbb{P} := 1 + \I$} of order $p^{(m-1)r}$. If $p$ is odd or if $p = 2$ and $m \leq 2$, then $\mathbb{P}$ is a direct product of $r$ cyclic groups of order $p^{m-1}$. If $p = 2$ and $m \geq 3$, then $\mathbb{P}$ is a direct product of a cyclic group of order $2$, a cyclic group of order $2^{m-2}$ and $r-1$ cyclic groups of order~$2^{m-1}$. So we have $R^* = \T^* \times \mathbb{P}$. In this paper, we will only consider Galois rings of characteristic~$p^2$. In this case, $(1+p\alpha)(1+p\beta) = 1+p(\alpha + \beta)$ for any $\alpha, \beta \in \T$, and every unit $u \in \GR(p^2, r)^*$ has a unique representation $u = \alpha_0 (1 + p\alpha_1), \alpha_0, \alpha_1 \in \T, \alpha_0 \ne 0$. Moreover, if $m=2$, the group of principal units $\mathbb{P}$ is a direct product of $r$ cyclic groups of order $p$ and thus has the structure of an elementary abelian group of order $p^r$. 
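As a concrete illustration of these notions (our own example, not from the original text), the smallest Galois ring of characteristic $p^2 = 4$ and extension degree $r = 2$ is $\GR(4,2) = \Z_4[x]/\langle x^2+x+1 \rangle$; here $x^2+x+1$ divides $x^3-1$ over $\Z_4$, so it is a Hensel lift of the Conway polynomial of $\F_4$, and its root $\xi$ has order $p^r-1 = 3$. The following Python sketch represents $a + b\xi$ as a pair $(a,b)$ and checks the structure described above:

```python
from itertools import product

M = 4  # characteristic p^2 with p = 2, extension degree r = 2

def add(u, v):
    return ((u[0] + v[0]) % M, (u[1] + v[1]) % M)

def mul(u, v):
    # (a + b*xi)(c + d*xi) with xi^2 = -xi - 1 = 3*xi + 3 (mod 4)
    a, b = u
    c, d = v
    return ((a*c + 3*b*d) % M, (a*d + b*c + 3*b*d) % M)

one, xi = (1, 0), (0, 1)

# xi generates the Teichmueller group of order p^r - 1 = 3
xi2 = mul(xi, xi)
assert mul(xi, xi2) == one

# p-adic representation: every element is a0 + 2*a1 with a0, a1 Teichmueller
T = [(0, 0), one, xi, xi2]
reps = {add(a0, mul((2, 0), a1)) for a0, a1 in product(T, T)}
assert len(reps) == 16  # all of GR(4, 2)

# units = elements outside the maximal ideal 2R; |R*| = 2^2 (2^2 - 1) = 12
ideal = {mul((2, 0), r) for r in reps}
units = reps - ideal
assert len(units) == 12
```

The last two assertions verify the unique $p$-adic representation and the order of the unit group for this smallest case.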
\section{Disjoint difference families in Galois rings I} \label{sec:DDF_GR} We will now present the new construction of disjoint difference families in Galois rings $\GR(p^2, 2n)$ with characteristic $p^2$ and even degree $r = 2n, n \in \mathbb{N}$, that was introduced by \textcite{momihara2017}: Let $p$ be a prime. Let $R_{2n}$ denote the Galois ring $\GR(p^2, 2n) = \Z_{p^2}[x] / \langle f(x) \rangle$, where $f(x)$ is a monic basic irreducible polynomial of degree $2n$, and let $\xi$ be a root of order $p^{2n}-1$ of $f(x)$. Let $\I_{2n}$ be the maximal ideal and let $\mathbb{P}_{2n} = 1 + \I_{2n}$ be the group of principal units of $R_{2n}$. Moreover, we have the Teichmüller set \[ \T_{2n} = \{ 0, 1, \xi, \dots, \xi^{p^{2n}-2}\}, \] and each element of $R_{2n}$ has a unique $p$-adic representation \[ a_0 + pa_1, \quad a_0, a_1 \in \T_{2n}. \] The Galois ring $R_{2n}$ contains a unique Galois ring $R_n = \GR(p^2, n)$ of characteristic $p^2$ and degree $n$ as its subring \autocite[Theorem~14.24]{wan2003}. It can be constructed in the following way \autocite[Corollary~14.28]{wan2003}: Obviously, $\xi^{p^n+1}$ has multiplicative order $p^n-1$. It follows that \[ \T_n = \{ 0, 1, \xi^{p^n+1}, \xi^{2(p^n+1)}, \dots, \xi^{(p^n-2)(p^n+1)}\} \] is the Teichmüller set of $R_n$ and $R_n = \{a_0 + pa_1: a_0, a_1 \in \T_n \}$. Then, \[ R_n^* = \{ \alpha_0(1+p\alpha_1) :\alpha_0,\alpha_1 \in \T_n, \alpha_0 \ne 0\} \] is the unit group of the subring $R_n$. Analogously to $R_{2n}$, let $\I_n$ denote the maximal ideal and $\mathbb{P}_n$ the group of principal units of $R_n$. So we have $R_n^* = \T_n^* \times \mathbb{P}_n$. Now, let $pS$ be a system of representatives of $\I_{2n}/\I_n$, which means that $1 + pS$ will be a system of representatives of $\mathbb{P}_{2n} / \mathbb{P}_n$. Each element of $1+ pS$ can be written as $1 + px$ for some $x \in \T_{2n}$, i.\,e. $1 + pS = \{1+px : x \in S\}$ for some subset $S$ of $\T_{2n}$. Write $S = \{x_0, x_1, \dots, x_{p^n-1}\}$. 
Finally, define a coset $P$ of the maximal ideal $\I_n$ of $R_n$ as \[ P = \{ p\xi^{p^n}, p\xi^{(p^n+1)+p^n}, p\xi^{2(p^n+1)+p^n}, \dots, p\xi^{(p^n-2)(p^n+1)+p^n} \}. \] \begin{theorem}[{\autocite[Theorem~1]{momihara2017}}] \label{th:momihara_3.1} Using the notation from above, define subsets \begin{equation*} D_i = \xi^i \left( P \cup \left( \bigcup_{j = 0}^{p^n-1} \xi^j (1+px_j)R_n^*\right)\right), i = 0, 1, \dots, p^n, \end{equation*} of $R_{2n}$. The family $D = \{D_0, D_1, \dots, D_{p^n}\}$ forms a near-complete disjoint difference family in $(R_{2n}, +)$ with parameters $\left(p^{4n}, (p^{2n}+1)(p^n-1), (p^{2n}+1)(p^n-1) - 1 \right)$. \end{theorem} For the extensive and technical proof of this theorem the reader is referred to \textcite{momihara2017}. The author of \cite{momihara2017} mentions that, since $(p^{2n}+1)(p^n-1)$ divides $p^{4n}-1$, we can use \autoref{th:Fq_DDF} to construct a disjoint difference family $C = \{C_0, C_1, \dots, C_{p^n}\}$ with the same parameters as in \autoref{th:momihara_3.1} in the additive group of the finite field $\Fqfour$. According to \autoref{prop:dev_design} the developments $dev(C)$ and $dev(D)$ of the difference families form $2$\nobreakdash-designs with parameters $(v,k,\lambda) = \left(p^{4n}, (p^{2n}+1)(p^n-1), (p^{2n}+1)(p^n-1) - 1\right)$. The comparison of these two $2$-designs leads to our first main theorem: \begin{theorem} \label{th:designs_noniso} Let $C$ be a $\left(p^{4n}, (p^{2n}+1)(p^n-1), (p^{2n}+1)(p^n-1) - 1 \right)$ disjoint difference family in the additive group of the finite field $\F_{p^{4n}}$ constructed with \autoref{th:Fq_DDF}, and let $D$ be a disjoint difference family with the same parameters in the additive group of the Galois ring $\GR(p^2,2n)$ constructed with \autoref{th:momihara_3.1}. The $2$-$\left(v,k,k-1\right)$~designs $dev(C)$ and $dev(D)$ with parameters $v = p^{4n}$ and $k = (p^{2n}+1)(p^n-1)$ are nonisomorphic. 
\end{theorem} There are various ways of isomorphism testing for combinatorial designs. One popular approach is to study the ranks of their incidence matrices. However, in our case this approach turned out to be inconclusive. We were more successful in examining the so-called block intersection numbers of both $2$-designs. We will prove \autoref{th:designs_noniso} by showing that the block intersection numbers of $dev(C)$ and $dev(D)$ differ. The \emph{block intersection numbers} of a $t$-design are the cardinalities $\left| B_i \cap B_j \right|$ of the intersections of two distinct blocks $B_i, B_j$ of the design. \begin{remark} \label{remark:intersectionNumbers_calc} Block intersection numbers can be easily calculated in the following way: Let $M$ denote the incidence matrix of a $t$-design with the rows of $M$ corresponding to the points and the columns of $M$ corresponding to the blocks of the design. The entry $(i,j)$ of the matrix $M^T M$ is exactly $|B_i \cap B_j|$. \end{remark} Block intersection numbers of combinatorial designs are invariant under isomorphism. So, to prove that our designs are nonisomorphic it is sufficient to show that $dev(D)$ has one block intersection number different from the block intersection numbers of $dev(C)$. We will first calculate all the intersection numbers of $dev(C)$. These are given as the so-called cyclotomic numbers: Analogously to \autoref{sec:DDF_Fq}, let $C_0, C_1, \dots, C_{e-1}$ be the cosets of the subgroup $C_0$ of the $e$-th powers in $\Fq^*$. For fixed non-negative integers $i,j \le e-1$ the \emph{cyclotomic number $(i,j)_e$ of order $e$} is defined as \begin{equation*} \label{eq:cyclo_number} (i,j)_e = |(C_i + 1) \cap C_j|. \end{equation*} In general, it is a hard number-theoretic problem to calculate these cyclotomic numbers. 
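Both notions can be illustrated computationally on a small example (our sketch, not part of the original text). For simplicity we take the prime field $\F_{13}$ with $e = 3$ and $f = 4$: the cosets of the cubes form a $(13,4,3)$ disjoint difference family as in \autoref{th:Fq_DDF}, and in this small case the set of block intersection numbers of $dev(C)$, computed from the incidence matrix as in \autoref{remark:intersectionNumbers_calc}, coincides with the set of cyclotomic numbers of order $3$:

```python
from collections import Counter
from itertools import product

q, e = 13, 3                 # prime field for simplicity; f = (q-1)/e = 4
g = 2                        # 2 generates the multiplicative group of F_13
f = (q - 1) // e
C = [[pow(g, t, q) for t in range(i, q - 1, e)] for i in range(e)]

# the cosets form a (13, 4, 3) near-complete disjoint difference family
internal = Counter((a - b) % q for Ci in C
                   for a, b in product(Ci, Ci) if a != b)
assert all(internal[x] == f - 1 for x in range(1, q))

# cyclotomic numbers (i, j)_e = |(C_i + 1) ∩ C_j|
cyc = {(i, j): len({(c + 1) % q for c in C[i]} & set(C[j]))
       for i in range(e) for j in range(e)}

# block intersection numbers of dev(C) via the incidence matrix M (13 x 39):
# entry (i, j) of M^T M is |B_i ∩ B_j|
dev = [frozenset((c + t) % q for c in Ci) for Ci in C for t in range(q)]
M = [[int(pt in B) for B in dev] for pt in range(q)]
inter = {sum(M[pt][i] * M[pt][j] for pt in range(q))
         for i in range(len(dev)) for j in range(len(dev)) if dev[i] != dev[j]}
assert inter == set(cyc.values())
```

Note that $q=13$, $e=3$ does not satisfy the uniform-cyclotomy condition discussed next, so here the cyclotomic numbers take the values $0$, $1$, and $2$ in an irregular pattern.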
However, \textcite{baumert1982} proved that in special cases they are easy to calculate: \begin{proposition}[{\cite[Theorems~1 and~4]{baumert1982}}] \label{prop:uniform_cyclotomy} Let $p$ be a prime, and let $e \ge 3$ be a divisor of $p^m-1$ for a positive integer $m$. If $-1$ is a power of $p$ modulo $e$, then either $p = 2$ or $f = (p^m-1)/e$ is even, $p^m = s^2$ and $s \equiv 1 \pmod e$, and the cyclotomic numbers of order $e$ are given as \begin{align} \label{eq:uniform_cyclotomy} (0,0)_e &= \eta^2 - (e-3)\eta - 1, && \nonumber\\ (0,i)_e = (i,0)_e = (i,i)_e &= \eta^2 + \eta &&\textrm{for } i \ne 0,\\ (i,j)_e &= \eta^2 &&\textrm{for } i \ne j \textrm{ and } i,j \ne 0, \nonumber \end{align} where $\eta = (s-1)/e$. \end{proposition} Because there exist only three distinct cyclotomic numbers in the described case, the authors of \cite{baumert1982} speak of \emph{uniform} cyclotomic numbers. Applying \autoref{prop:uniform_cyclotomy} to $dev(C)$ leads to \begin{corollary} \label{cor:intersection_numbers_devC} Let $C$ be a $\left(p^{4n}, (p^{2n}+1)(p^n-1), (p^{2n}+1)(p^n-1) - 1 \right)$ disjoint difference family in the additive group of the finite field $\F_{p^{4n}}$ constructed with \autoref{th:Fq_DDF}. The $2$-$(p^{4n}, (p^{2n}+1)(p^n-1), (p^{2n}+1)(p^n-1) - 1)$ design $dev(C)$ has exactly three block intersection numbers, namely $p^n-2$, $p^n(p^n-1)$ and $(p^n-1)^2$. \end{corollary} \begin{proof} We show that $C$ meets the conditions of \autoref{prop:uniform_cyclotomy}: In our case $e = p^n + 1$. So $-1$ is the $n$-th power of $p$ modulo $e$. Moreover, from $p^{4n} = s^2$ it follows that $s = p^{2n} \equiv 1 \pmod e$, so $\eta = (p^{2n}-1)/(p^n+1) = p^n-1$. 
By \cref{eq:uniform_cyclotomy} we obtain the cyclotomic numbers \begin{align*} (0,0)_{p^n+1} &= p^n-2, \nonumber\\ (0,i)_{p^n+1} = (i,0)_{p^n+1} = (i,i)_{p^n+1} &= p^n(p^n-1) &&\textrm{for } i \ne 0,\\ (i,j)_{p^n+1} &= (p^n-1)^2 &&\textrm{for } i \ne j \textrm{ and } i,j \ne 0, \nonumber \end{align*} that occur as the intersection numbers of $dev(C)$. \end{proof} The next step will be to show that there is a block intersection number in $dev(D)$ that does not occur in $dev(C)$. \begin{lemma} \label{lem:intersection_numbers_devD} Let $D$ be a $\left(p^{4n}, (p^{2n}+1)(p^n-1), (p^{2n}+1)(p^n-1) - 1 \right)$ disjoint difference family in the additive group of $\GR(p^2,2n)$ constructed with \autoref{th:momihara_3.1}. The $2$-$(p^{4n}, (p^{2n}+1)(p^n-1), (p^{2n}+1)(p^n-1) - 1)$ design $dev(D)$ has a block intersection number $(2p^n-1)(p^n-2)$. \end{lemma} \begin{proof} This proof has a similar structure to the proofs by \textcite[Lemmata~4--7]{momihara2017}, but unlike \textcite{momihara2017}, we will not consider all the sets $D_0, D_1, \dots, D_b$ of the difference family, but only $D_0$. This requires a more detailed analysis of the intersection relations. As above, let $\xi$ be a generator of the Teichmüller group, let $x_j \in S$, where $1+pS$ is a system of representatives of $\mathbb{P}_{2n}/\mathbb{P}_n$, and let $R_n^*$ denote the unit group of the subring $R_n = \GR(p^2, n)$. Furthermore, define subsets $U$ and $V$ of $R_{2n}^*$ as \begin{align*} U = \bigcup_{j=0}^{p^n-1} \xi^j (1+px_j)R_n^* \qquad \mbox{and} \qquad V = \bigcup_{j=0}^{p^n-1} (1+px_j)R_n^*. \end{align*} Note that $D_i = \xi^i(P \cup U)$ and that $V = \T_n^* \times \mathbb{P}_{2n}$ and $\bigcup_{j=0}^{p^n} \xi^j V = R_{2n}^*$. We will prove \autoref{lem:intersection_numbers_devD} by showing that the block intersection number $| (D_0+u) \cap D_0 |$ of the block $D_0$ and its translate $D_0+u$ equals $(2p^n-1)(p^n-2)$ for all $u \in U$. 
The above statement is equivalent to: An arbitrary element $u \in U$ occurs exactly $(2p^n-1)(p^n-2)$ times in the multiset $\Delta D_0$. Looking at the structure of \[ D_0 = P \cup U = P \cup \left( \bigcup_{j = 0}^{p^n-1} \xi^j (1+px_j)R_n^*\right), \] we can split all the differences in this multiset into four different types. Let $0 \le s,t \le p^n-1$ and $s \ne t$. We have differences of \begin{enumerate}[label=\emph{Type \arabic*:}, ref=type~\arabic*, leftmargin=*, labelsep=2em] \item $\xi^s (1+px_s)R_n^* - \xi^t (1+px_t)R_n^*$, \label{enum:1} \item $\Delta\left(\xi^s (1+px_s)R_n^*\right)$, \label{enum:2} \item $\xi^s (1+px_s)R_n^* - P$, \label{enum:3} \item $\Delta P$. \label{enum:4} \end{enumerate} Now, let $u$ be a fixed element of $U$. We will count the number of occurrences of $u$ in $\Delta D_0$ by counting its occurrences in each of the four multisets defined above. Before we start, we state the following useful lemma, which shows that it does not matter whether we look at the differences or the sums in \ref{enum:1}--\ref{enum:4}: \begin{lemma} \label{lem:-1_GR} Consider the Galois ring $\GR(p^m, r)$. If $p$ is odd, then $-1$ is an element of the Teichmüller group $\T^*$. If $p = 2$, then $-1$ is a principal unit. \end{lemma} \begin{proof} Let $p$ be odd. Then, the group of principal units $\mathbb{P}$ is a direct product of $r$ cyclic groups, each of odd order $p^{m-1}$, and the Teichmüller group $\T^*$ has even order $p^r-1$. Consequently, there are only two square roots of unity in $\GR(p^m,r)$: $1$ and $-1$. Hence, $-1 = \xi^{(p^{r}-1)/2} \in \T^*$. If $p=2$, all the even integers of $\Z_{p^m} \subseteq \GR(p^m,r)$ are elements of the maximal ideal $\I = 2R$. Since $-1$ is odd, it follows that $-1 \in \mathbb{P} = 1 + \I$. 
\end{proof} In our case, where $m=2$ and $r = 2n$, it follows from \autoref{lem:-1_GR} that if $p$ is odd, $-1 = \xi^{(p^{2n}-1)/2} = \xi^{(p^n-1)(p^n+1)/2}$, and thus $-1$ is included in the Teichmüller group $\T_n^* = \{\xi^{i(p^n+1)} : i = 0,1,\dots,p^n-2\}$ of the subring $R_n$. Because $\T_n^* \subseteq R_n^*$ and $P = p \xi^{p^n} \T_n^*$, we have \begin{align} \label{eq:-R=R_-P=P} R_n^* = -R_n^*, \qquad\textrm{and}\qquad P = -P. \end{align} If $p = 2$, we have $-1 = 3 = 1\cdot(1+2\cdot1) \in R_n^*$, since clearly $1 \in \T_n^*$. Furthermore, since $-2 = 2$, all the elements of $\I_{2n} = 2R_{2n}$ (and consequently of $P \subseteq \I_{2n}$) are self-inverse. Thus, \cref{eq:-R=R_-P=P} holds in this case as well. Now we start our proof by analyzing differences of \ref{enum:1}, and we first state a helpful lemma \cite{momihara2017}: \begin{lemma}[{\cite[Lemma~3]{momihara2017}}] \label{lem:momihara_3.3} Let $a$ be an integer, $0 \le a \le p^n-2$, let $b$ be an element of $R_{2n}$, and let $V$ be as defined above. If $\xi^a(1+pb) \notin R_n^*$ and $\xi^a \notin \T_n$, then \begin{equation*} R_n^* + \xi^a(1+pb)R_n^* = R_{2n}^* \setminus \left( V \cup \xi^a V\right). \end{equation*} \end{lemma} Now, let $0 \le s,t \le p^n-1, s \ne t$ be fixed. By \cref{eq:-R=R_-P=P} we have \begin{align*} \xi^s (1+px_s)R_n^* - \xi^t (1+px_t)R_n^* &= \xi^s (1+px_s)R_n^* + \xi^t (1+px_t)R_n^*. \end{align*} We factor out $\xi^s(1+px_s)$ and obtain \begin{align*} \xi^s(1+px_s) \left(R_n^* + \xi^{t-s} \left(1+p(x_t-x_s)\right)R_n^*\right). \end{align*} Applying \autoref{lem:momihara_3.3}, this equals \begin{align*} \xi^s(1+px_s) \left(R_{2n}^* \setminus \left( V \cup \xi^{t-s} V\right)\right). \end{align*} Since $(1+px)V = V$ for any $x \in R_{2n}$, the factor $(1+px_s)$ can be omitted, and we obtain the result \begin{align} \label{eq:type1} \xi^s (1+px_s)R_n^* - \xi^t (1+px_t)R_n^* = R_{2n}^* \setminus \left( \xi^s V \cup \xi^t V \right). 
\end{align} Now, we are able to count differences: The set $D_0$ contains $p^n$ distinct subsets of the type $\xi^s(1+px_s)R_n^*$. Consequently, we have $p^n(p^n-1)$ multisets of \ref{enum:1}. Since $0 \le s,t \le p^n-1$, the elements of $\xi^{p^n} V$ are, according to \cref{eq:type1}, contained in each of these multisets, whereas the elements of $\bigcup_{j=0}^{p^n-1} \xi^j V = R_{2n}^* \setminus \xi^{p^n}V$ occur in only $(p^n-2)(p^n-1)$ of them. Since $U$ is a subset of $R_{2n}^* \setminus \xi^{p^n}V$, we count, so far, $(p^n-2)(p^n-1)$ occurrences of our element $u$. In the next step, we will address differences of \ref{enum:2}. We will argue similarly to \textcite[Lemma~6]{momihara2017}. Let $s$ be a fixed integer, $0 \le s \le p^n-1$. We consider the set \begin{align*} \Delta\left(\xi^s (1+px_s)R_n^*\right) = \left\lbrace \xi^s(1+px_s)a - \xi^s(1+px_s)b : a,b \in R_n^*, a \ne b \right\rbrace. \end{align*} Since $a,b \in R_n^*$, we write $a = \xi^{a_1}(1+pa_2)$ and $b = \xi^{b_1}(1+pb_2)$, where $a_1, b_1 \in \{j(p^n+1) : j = 0,1,\dots,p^n-2\}$, $a_2,b_2 \in \T_n$, and $(a_1,a_2) \ne (b_1, b_2)$. We will consider two cases. First, if $a_1 = b_1$, we have \begin{align} \label{eq:a1=b1_1} \xi^s(1+px_s)\xi^{a_1}(1+pa_2) - \xi^s(1+px_s)\xi^{a_1}(1+pb_2). \end{align} We factor out $\xi^{s+a_1}$ and multiply the elements of $\mathbb{P}_n$: \begin{align*} \xi^{s+a_1} \left(1+p \left(x_s + a_2\right) - \left(1+p \left(x_s+b_2\right)\right)\right). \end{align*} Simplifying this expression, we obtain \begin{align*} \xi^{s+a_1}p(a_2-b_2), \end{align*} which is clearly an element of the maximal ideal $\I_{2n}$. Thus, the case $a_1 = b_1$ leads to no additional differences representing the unit $u$. Now, let $a_1 \ne b_1$. Instead of \cref{eq:a1=b1_1}, we now have \begin{align*} \xi^s(1+px_s)\xi^{a_1}(1+pa_2) - \xi^s(1+px_s)\xi^{b_1}(1+pb_2). 
\end{align*} Factoring out $\xi^s(1+px_s)$ yields \begin{align*} \xi^s(1+px_s) \left(\xi^{a_1}(1+pa_2) - \xi^{b_1}(1+pb_2)\right), \end{align*} which equals \begin{align*} \xi^s(1+px_s) \left(\xi^{a_1}-\xi^{b_1} + p(\xi^{a_1}a_2 - \xi^{b_1}b_2)\right). \end{align*} Because $\xi^{a_1}-\xi^{b_1} \ne 0$ and $\xi^{a_1}, \xi^{b_1}, a_2, b_2 \in R_n$, the terms $\xi^{a_1}-\xi^{b_1} + p(\xi^{a_1}a_2 - \xi^{b_1}b_2)$ represent the elements of $R_n^*=R_n \setminus \I_n$. The unit group $R_n^*$ contains $p^n(p^n-1)$ elements, and we can choose $a_1, a_2, b_1, b_2$ in $p^{2n}(p^n-1)(p^n-2)$ different ways. The differences are evenly distributed, which means that our element $u$ has $p^n(p^n-2)$ distinct representations of \ref{enum:2}. By adding our numbers of \ref{enum:1} and \ref{enum:2} difference representations of $u$ we have already reached the number $(2p^n-1)(p^n-2)$ stated in \autoref{lem:intersection_numbers_devD}. So, for the remaining types, we need to show that $u$ does not occur in multisets of \ref{enum:3} and \ref{enum:4}. We examine differences of \ref{enum:3}: $\xi^s (1+px_s)R_n^* - P$. First, we take arbitrary elements $\xi^{k(p^n+1)}(1+pa)$ from $R_n^*$ and $-p\xi^{\ell(p^n+1)+p^n}$ from $P$ (recall that $P = -P$), where $k,\ell \in \{0,1,\dots,p^n-2\}$ and $a \in \T_n$. So, we are interested in differences of the form \begin{align*} \xi^s (1+px_s) \xi^{k(p^n+1)} (1+pa) + p\xi^{\ell(p^n+1)+p^n}, \end{align*} where $s \in \{0,1, \dots, p^n-1\}$ and $x_s \in S$ are fixed. We factor out $\xi^{s+k(p^n+1)}$, simplify, and obtain \begin{align} \label{eq:type3_3} \xi^{s+k(p^n+1)} \left( 1 + px_s \right) \left( 1 + pa \right) \left( 1 + p\xi^{(\ell-k)(p^n+1)+p^n-s} \right). \end{align} We write \cref{eq:type3_3} with respect to all $0 \le k, \ell \le p^n-2$ and all $a \in \T_n$: \begin{align} \label{eq:type3_4} \xi^s (1+px_s) \left(\T_n^* \mathbb{P}_n \left( 1+p\xi^{p^n-s}\T_n^*\right) \right). 
\end{align} Since $\xi^{p^n-s} \notin \T_n$, it is clear that $\mathbb{P}_n$ and $1+p\xi^{p^n-s}\T_n^*$ are disjoint. We show that each of the $p^n-1$ other cosets in $\mathbb{P}_{2n} / \mathbb{P}_n$ is represented exactly once by $1+p\xi^{p^n-s}\T_n^*$. This is equivalent to showing that $p\xi^{p^n-s}\T_n^*$ represents every coset of $\I_{2n}/\I_n$ except $\I_n = p\T_n$ itself. If two elements of $p\xi^{p^n-s}\T_n^*$ came from the same coset, their difference would be in $\I_n$. However, for two distinct integers $k,\ell$, the difference \begin{align*} p\xi^{p^n-s+k(p^n+1)} - \left(p\xi^{p^n-s+\ell(p^n+1)}\right) = \xi^{p^n-s}\left(p\xi^{k(p^n+1)}-p\xi^{\ell(p^n+1)}\right) \end{align*} is not in $\I_n$ since $\xi^{p^n-s} \notin \T_n$ and $\left(p\xi^{k(p^n+1)}-p\xi^{\ell(p^n+1)}\right) \in \I_n$. It follows that \cref{eq:type3_4} equals \begin{align*} \xi^s (1+px_s) \left(\T_n^* \left(\mathbb{P}_{2n} \setminus \mathbb{P}_n \right) \right). \end{align*} With the definition of $V = \T_n^* \times \mathbb{P}_{2n}$ from above, we have \begin{align*} \xi^s (1+px_s)R_n^* - P = \xi^s \left(V \setminus (1+px_s)R_n^*\right). \end{align*} Recall that $U = \bigcup_{j=0}^{p^n-1} \xi^j (1+px_j)R_n^*$. Hence $u \in U$ does not occur in multisets of type \ref{enum:3}. We finish our proof by examining differences of type \ref{enum:4}: Since $P$ is a subset of the maximal ideal $\I_{2n}$, the set of all the differences of distinct elements of $P$ is also a subset of $\I_{2n}$. Thus, $\Delta P$ yields no representations of $u$. \end{proof} For $p^n > 2$, the intersection number $(2p^n-1)(p^n-2)$ of $dev(D)$ does not equal any of the cyclotomic numbers $p^n-2$, $p^n(p^n-1)$ and $(p^n-1)^2$ of $dev(C)$. Hence, \autoref{cor:intersection_numbers_devC} in combination with \autoref{lem:intersection_numbers_devD} proves \autoref{th:designs_noniso} for $p^n > 2$. In the case $p^n=2$, i.\,e., $p=2$ and $n=1$, however, the intersection numbers of $dev(C)$ match those from $dev(D)$.
We complete the proof of \autoref{th:designs_noniso} by using the computer algebra system \texttt{Magma}~\cite{magma} to compute the full automorphism groups of $dev(C)$ and $dev(D)$. We see that, for $p^n=2$, the automorphism group of $dev(C)$ has order~$960$, whereas for $dev(D)$ it is only of order~$192$. Thus, these $2$-$(16,5,4)$~designs are nonisomorphic as well. \begin{remark} \label{remark:automorphism_group} The full automorphism group of $dev(C)$ has order $4np^{4n}(p^{4n}-1)$; it is generated by the additive group $(\Fqfour, +)$ of order $p^{4n}$, the multiplicative group $\Fqfour^*$ of order $p^{4n}-1$ and the Galois group $\mathrm{Gal}(\Fqfour / \F_p)$ of order $4n$. The full automorphism group of $dev(D)$ has order $2p^{5n}(p^{2n}-1)$; it is generated by the additive group $(R_{2n}, +)$ of order $p^{4n}$, the Teichmüller group $\T_{2n}$ of $R_{2n}$ of order $p^{2n}-1$, the group of principal units $\mathbb{P}_n$ of the subring $R_n$ of order $p^n$ and an interesting automorphism of order~$2$ of the additive group $(R_{2n}, +)$. In the case $p^n = 2$, the latter is defined by $1 \mapsto 1 + 2\xi, \xi \mapsto 1 + 3\xi$. \end{remark} \section{Disjoint difference families in Galois rings II} \label{sec:EDF_GR} \textcite{davis2017} found a new cyclotomic construction of near-complete $(v,k,k-1)$ external difference families in Galois rings $\GR(p^2,r)$ of characteristic $p^2$. This construction had, in a more general form, already been given by \textcite{furino1991} in 1991 for arbitrary commutative rings with an identity. \textcite{furino1991} used the approach to create near-complete disjoint difference families, and we know from \autoref{prop:EDF_DDF} that every near-complete disjoint difference family is also a near-complete external difference family.
Furthermore, we remark that the construction by \textcite{furino1991} was generalized in the case $(v,k,k-1)$ by \textcite{buratti2017} to so-called \emph{Ferrero pairs}~$(G,A)$, where $A$ is a non-trivial group of automorphisms of $G$ acting semiregularly on $G\setminus\{0\}$. Before we state the result by \textcite{davis2017}, we need the following two useful lemmas about differences in Galois rings. As before, let $\T$ denote the Teichmüller set, let $\T^* = \T \setminus \{0\}$ denote the cyclic Teichmüller group having order $p^r-1$, and let $\I = p\GR(p^2,r)$ denote the maximal ideal of the Galois ring $\GR(p^2,r)$. \begin{lemma} \label{lem:unit_difference_unit} In $\GR(p^2, r)$ the difference $u-u'$ of two distinct units $u = \alpha_0(1+p\alpha_1), u' = \alpha_0'(1+p\alpha_1')$, where $\alpha_0, \alpha_0' \in \T^*, \alpha_1, \alpha_1' \in \T$, is a unit if and only if $\alpha_0 \ne \alpha_0'$. \end{lemma} \begin{proof} Let $u = \alpha_0(1+p\alpha_1), u' = \alpha_0'(1+p\alpha_1')$, where $\alpha_0, \alpha_0' \in \T^*, \alpha_1, \alpha_1' \in \T$. If $\alpha_0 = \alpha_0' = \alpha$, we have \begin{align*} u-u' = \alpha(1+p\alpha_1) - \alpha(1+p\alpha_1') = p\alpha(\alpha_1-\alpha_1'), \end{align*} which is an element of $\I$. If $\alpha_0 \ne \alpha_0'$, we have \begin{align*} u-u' = \alpha_0(1+p\alpha_1) - \alpha_0'(1+p\alpha_1') = \alpha_0 - \alpha_0' + p(\alpha_0\alpha_1-\alpha_0'\alpha_1'), \end{align*} which is clearly a unit: since $\T$ is a system of representatives of $\GR(p^2,r) / \I$, we have $\alpha_0 - \alpha_0' \notin \I$. \end{proof} \begin{lemma} \label{lem:differences_in_T_cosets} \begin{enumerate} \item The multiset $\Delta \T^*$ contains only units of $\GR(p^2, r)$. \item Let $d \in (1+p\beta)\T^*$ for some $\beta \in \T^*$. If $d$ is contained in $\Delta \T^*$ then the whole coset $(1+p\beta)\T^*$ is a subset of $\Delta \T^*$. \end{enumerate} \end{lemma} \begin{proof} The first statement follows immediately from \autoref{lem:unit_difference_unit}.
The second statement can be proved as follows: Let $d \in \Delta \T^*$. Then $d\T^*$ is a multiplicative coset of $\T^*$, and there exists $\beta \in \T^*$ such that $(1+p\beta)\T^* = d\T^*$. Since $d \in \Delta \T^*$, there are distinct elements $\alpha, \alpha' \in \T^*$ such that $d = \alpha-\alpha'$. Since for every $\gamma \in \T^*$ the elements $\alpha\gamma, \alpha'\gamma$ are contained in $\T^*$, the set $d\T^* = \{d\gamma : \gamma \in \T^*\}$ is a subset of $\Delta \T^*$. \end{proof} Let us now present the construction of $(v,k,k-1)$ disjoint difference families by \textcite{davis2017}. Since the authors of \cite{davis2017} proved the result in terms of external difference families, we will include a short proof that is analogous to the proof of \autoref{th:Fq_DDF}. We remark that the theorem can also be proved using the results by \textcite{furino1991} or the Ferrero pairs by \textcite{buratti2017}. \begin{theorem}[{\cite[Theorem~4.1]{davis2017}}] \label{th:Davis_EDF} Let $\T$ be the Teichmüller set of the Galois ring $\GR(p^2,r)$, and let $\T^* = \T \setminus \{0\}$. The collection \begin{equation*} \label{eq:Davis_EDF} E = \left \lbrace (1+p\alpha)\T^* : \alpha \in \T \right \rbrace \cup \left \lbrace p\T^* \right \rbrace \end{equation*} forms a near-complete $(p^{2r}, p^r-1, p^r-2)$ disjoint difference family in the additive group of $\GR(p^2, r)$. \end{theorem} \begin{proof} We will first count the number of differences for the units of $\GR(p^2,r)$ and next for the non-invertible elements. Let $x,y \in \GR(p^2,r)^*$. Assume $x = u-u'$, where $u,u'$ are elements of the same coset $(1+p\alpha)\T^*$ of the Teichmüller group $\T^*$ for some fixed element $\alpha \in \T$. There exists a unit $z \in \GR(p^2,r)^*$ such that $y = zx$. Hence, $y = zu-zu'$ and $zu, zu' \in (1+p\alpha')\T^*$ for some $\alpha' \in \T$, and we have found a representation of $y$ as the difference of two distinct elements from the same set $(1+p\alpha')\T^*$.
Since every difference for $y$ will also give us a difference for $x$, it follows that every unit will have the same number of differences. In each of the $p^r$ sets $(1+p\alpha)\T^*$ we have $(p^r-1)(p^r-2)$ differences, giving us a total of $p^r(p^r-1)(p^r-2)$ differences. From \autoref{lem:differences_in_T_cosets}, we know that all these differences are units. Since there are $p^r(p^r-1)$ units in $\GR(p^2,r)$, each unit has \begin{align*} \frac{p^r(p^r-1)(p^r-2)}{p^r(p^r-1)} = p^r-2 \end{align*} representations as a difference of two distinct elements of the sets $(1+p\alpha)\T^*, \alpha \in \T$.\par We now consider the non-invertible non-zero elements of $\GR(p^2,r)$, i.\,e., the elements of the set $p\T^* = \I \setminus \{0\}$. Since $\I$ is a group under addition, $\I \setminus \{0\}$ is a trivial $(p^r,p^r-1,p^r-2)$ difference set in $\I$. Hence, $\Delta p\T^* = (p^r-2) (\I \setminus \{0\})$. Combining these two results, we see that every non-zero element of $\GR(p^2,r)$ has $p^r-2$ differences in the sets of $E$. \end{proof} \textcite{davis2017} remark that, since $p^r+1$ divides $p^{2r}-1$, there is also a disjoint difference family with the same parameters in $(\F_{p^{2r}},+)$ which can be constructed using \autoref{th:Fq_DDF}. The authors ask whether the associated designs, i.\,e., the developments of these two disjoint difference families, are isomorphic. We will answer this question by showing that the designs are nonisomorphic in all but one case. This is our second main theorem. \begin{theorem} \label{th:designs_noniso_Davis} Let $C$ be a $(p^{2r}, p^r-1, p^r-2)$ disjoint difference family in the additive group of the finite field $(\F_{p^{2r}},+)$ constructed with \autoref{th:Fq_DDF}, and let $E$ be a disjoint difference family with the same parameters in the additive group of the Galois ring $(\GR(p^2,r),+)$ constructed with \autoref{th:Davis_EDF}.
The $2$-$(p^{2r}, p^r-1, p^r-2)$ designs $dev(C)$ and $dev(E)$ are isomorphic if $p=3$ and $r=1$, and they are nonisomorphic in every other case. \end{theorem} To prove \autoref{th:designs_noniso_Davis} we will consider four cases: First, we will examine the case $p=3$ and $r=1$; second, we will consider $p=2$ and $r=2$; third, we look at the case $p=2$ and $r \ge 3$; and last, we will consider $p \ge 3$ and arbitrary $r$ (except $p=3$ and $r=1$). If $p=3$ and $r=1$, the $2$-designs $dev(C)$ and $dev(E)$ are isomorphic. In this case $\GR(9,1) \cong \Z_9$ and $(\F_9,+) \cong \Z_3 \times \Z_3$. An isomorphism between $dev(E)$ and $dev(C)$ computed by \texttt{Magma}~\cite{magma} is the map $f:\Z_9 \to \Z_3 \times \Z_3$ on the point set of $dev(E)$ with \begin{align*} 0 &\mapsto (0,0),&& 1 \mapsto (0,1),&& 2 \mapsto (1,2),&& 3 \mapsto (1,1), && 4 \mapsto (2,2),\\ 5 &\mapsto (2,0),&& 6 \mapsto (1,0),&& 7 \mapsto (2,1),&& 8 \mapsto (0,2). \end{align*} If $p=2$ and $r=2$, the designs $dev(E)$ and $dev(C)$ share the same block intersection numbers (together with their multiplicities), so comparing block intersection numbers, the strategy we use in the proof of the remaining two cases, cannot distinguish the designs here. For both designs the block intersection numbers are $0$~($1600$~times), $1$~($1440$ times) and $2$ ($120$ times). Hence, we solve this case by computing the automorphism groups of the designs with the help of \texttt{Magma}~\cite{magma}: The automorphism group of $dev(E)$ has order $384$ while the automorphism group of $dev(C)$ is of order $5760$. If $dev(E)$ and $dev(C)$ were isomorphic, their automorphism groups would be isomorphic. Hence, the two designs are nonisomorphic. For the case $p=2$ and $r \ge 3$ and the case $p\ge 3$ (except $p=3$ and $r=1$) we will prove \autoref{th:designs_noniso_Davis} similarly to \autoref{th:designs_noniso} by showing that the block intersection numbers of $dev(C)$ and $dev(E)$ differ.
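For $p=3$ and $r=1$ the isomorphism can also be verified by a direct computation. The following Python sketch develops both difference families explicitly and checks the map $f$ above; the model of $\F_9$ as $\F_3[x]/(x^2+1)$ with primitive element $1+x$ is our own choice and is not taken from the text.

```python
from itertools import product

# dev(E): develop E = {(1+3a)T* : a in T} together with 3T* over Z_9,
# where T = {0, 1, 8} is the Teichmueller set of GR(9,1) isomorphic to Z_9.
T_star = [1, 8]
E = [frozenset((1 + 3*a) * t % 9 for t in T_star) for a in [0, 1, 8]]
E.append(frozenset(3*t % 9 for t in T_star))
dev_E = {frozenset((x + d) % 9 for x in B) for B in E for d in range(9)}

# dev(C): develop the index-4 cyclotomic classes of F_9 = F_3[x]/(x^2+1)
# over (F_9,+) = Z_3 x Z_3; alpha = 1 + x is primitive (here x^2 = -1).
def mul(u, v):                       # multiplication in F_9, (a,b) = a + b*x
    a, b = u
    c, d = v
    return ((a*c - b*d) % 3, (a*d + b*c) % 3)

alpha, powers = (1, 1), [(1, 0)]
for _ in range(7):
    powers.append(mul(powers[-1], alpha))
C = [frozenset({powers[i], powers[i + 4]}) for i in range(4)]  # alpha^i * C_0
dev_C = {frozenset(((a + d1) % 3, (b + d2) % 3) for (a, b) in B)
         for B in C for d1, d2 in product(range(3), repeat=2)}

# the map f from the text sends blocks of dev(E) to blocks of dev(C)
f = {0: (0, 0), 1: (0, 1), 2: (1, 2), 3: (1, 1), 4: (2, 2),
     5: (2, 0), 6: (1, 0), 7: (2, 1), 8: (0, 2)}
assert {frozenset(f[x] for x in B) for B in dev_E} == dev_C
print(len(dev_E), "blocks mapped")
```

For these small parameters both developments consist of all $\binom{9}{2} = 36$ two-element subsets of the point set, i.e., both are the complete $2$-$(9,2,1)$ design, which is why this is the exceptional isomorphic case.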
We start by calculating the block intersection numbers of $dev(C)$, which are the cyclotomic numbers of order $p^r+1$ in $\F_{p^{2r}}$. As before, these cyclotomic numbers are uniform (see \autoref{prop:uniform_cyclotomy}): \begin{corollary} \label{cor:intersection_numbers_devC_Davis} Let $C$ be a $(p^{2r}, p^r-1, p^r-2)$ disjoint difference family in the additive group of the finite field $\F_{p^{2r}}$ constructed with \autoref{th:Fq_DDF}. The block intersection numbers of the $2$-$(p^{2r}, p^r-1, p^r-2)$ design $dev(C)$ are exactly $p^r-2$, $0$ and $1$. \end{corollary} \begin{proof} Since $e = p^r+1$, we have $-1 \equiv p^r \pmod e$, and we may apply \autoref{prop:uniform_cyclotomy}. From $s^2 = p^{2r}$ and $s \equiv 1 \pmod e$ it follows that $s = -p^r$. Thus, $\eta = (-p^r-1)/(p^r+1) = -1$, and we get the cyclotomic numbers \begin{align*} (0,0)_{p^r+1} &= p^r-2, \nonumber\\ (0,i)_{p^r+1} = (i,0)_{p^r+1} = (i,i)_{p^r+1} &= 0 &&\textrm{for } i \ne 0,\\ (i,j)_{p^r+1} &= 1 &&\textrm{for } i \ne j \textrm{ and } i,j \ne 0, \nonumber \end{align*} which occur as the intersection numbers of $dev(C)$. \end{proof} \begin{remark} \label{rem:Fq_subfield} Since $r$ divides $2r$, the finite field $\F_{p^{2r}}$ contains a unique subfield $\F_{p^r}$. Thus, the subgroup $C_0 = \{ \alpha^{i(p^r+1)} : i = 0, 1, \dots, p^r-2\}$ of order $p^r-1$ of $\F_{p^{2r}}^*$ that generates the disjoint difference family $C$ is the multiplicative group $\F_{p^r}^*$ of $\F_{p^r}$. Hence, it is clear that $\Delta C_0 = (p^r-2)C_0$. \end{remark} We now focus on the intersection numbers of $dev(E)$ in the Galois ring $\GR(p^2, r)$. The following \autoref{lem:int_numbers_Davis} in combination with \autoref{cor:intersection_numbers_devC_Davis} will finish the proof of \autoref{th:designs_noniso_Davis}.
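The cyclotomic numbers in \autoref{cor:intersection_numbers_devC_Davis} can be confirmed computationally for small parameters. A sketch for $p=2$, $r=2$, i.e., the cyclotomic numbers of order $e=5$ in $\F_{16}$; the model $\F_{16} = \F_2[x]/(x^4+x+1)$ with primitive element $x$ is our own choice.

```python
# F_16 = F_2[x]/(x^4 + x + 1); elements are 4-bit polynomials, x = 0b0010.
def mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= 0b10011            # reduce modulo x^4 + x + 1
        b >>= 1
    return r

powers = [1]                        # powers of the primitive element x
for _ in range(14):
    powers.append(mul(powers[-1], 2))
assert len(set(powers)) == 15       # x generates F_16^*

e = 5
cls = {powers[k]: k % e for k in range(15)}       # cyclotomic class of each unit
cyc = {(i, j): 0 for i in range(e) for j in range(e)}
for v in range(1, 16):
    w = v ^ 1                                      # v + 1 in characteristic 2
    if w:
        cyc[(cls[v], cls[w])] += 1

# expected pattern: (0,0) = p^r - 2 = 2, (0,i) = (i,0) = (i,i) = 0 for i != 0,
# and (i,j) = 1 for distinct nonzero i, j
for (i, j), val in cyc.items():
    expected = 2 if i == j == 0 else (0 if 0 in (i, j) or i == j else 1)
    assert val == expected
print("cyclotomic numbers of order 5 in F_16 match the corollary")
```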
\begin{lemma} \label{lem:int_numbers_Davis} Let $E$ be a $(p^{2r}, p^r-1, p^r-2)$ disjoint difference family in the additive group of the Galois ring $\GR(p^2,r)$ constructed with \autoref{th:Davis_EDF}. If $p=2$ and $r \ge 2$, the $2$-$(2^{2r}, 2^r-1, 2^r-2)$ design $dev(E)$ has a block intersection number $2$. If $p\ge 3$ (except the case $p=3$ and $r=1$), the $2$-$(p^{2r}, p^r-1, p^r-2)$ design $dev(E)$ has a block intersection number $N$ with $1 < N < p^r-2$. \end{lemma} We prove \autoref{lem:int_numbers_Davis} by a series of lemmas. We start by considering the Galois ring $\GR(4,r)$, $r \ge 2$, i.\,e., we set $p=2$, and show that, in this case, $2$ is a block intersection number of $dev(E)$ if $r \ge 2$. Recall that in the case $p=2$, according to \autoref{lem:-1_GR}, $-1$ is a principal unit. Looking at the construction of our disjoint difference family $E$ we notice that for every block $(1+2\alpha)\T^*, \alpha \in \T$, of $E$ its additive inverse $-(1+2\alpha)\T^*$ is also a block of $E$, and the two sets are disjoint. We also need the following result: \begin{lemma} In the Galois ring $\GR(4,r)$, $r \ge 2$, the Teichmüller set $\T$ is the set of all squares. \end{lemma} \begin{proof} Let $x \in \GR(4,r)$. If $x \in \I$, say $x = p\alpha$ for some $\alpha \in \T$, then $x^2 = p^2 \alpha^2 = 0$. If $x$ is a unit, say $x = \alpha_0(1+p\alpha_1), \alpha_0 \in \T^*, \alpha_1 \in \T$, then $x^2 = \alpha_0^2 (1+p\alpha_1)^2 = \alpha_0^2$ because each principal unit has order $2$. Since $\T^*$ is a cyclic group of odd order $2^r-1$, squaring is a bijection on $\T^*$, so the squares of the units are exactly the elements of $\T^*$. \end{proof} To examine the block intersection numbers of $dev(E)$ we use the well-known result that in a Galois ring of characteristic~$4$ the Teichmüller set is a relative difference set. We add a proof in \autoref{prop:TeichmuellerSet_PDS_even}. Let $G$ be a group of order $mn$ that contains a normal subgroup $N$ of order $n$.
A $k$-subset $D$ of $G$ is called an \emph{$(m,n,k,\lambda)$-relative difference set} in $G$ relative to $N$ if each element of $G\setminus N$ occurs exactly $\lambda$ times in $\Delta D$ and no element of $N$ occurs in $\Delta D$. \begin{proposition} \label{prop:TeichmuellerSet_PDS_even} In $\GR(4,r)$, $r \ge 2$, the Teichmüller set $\T = \{0,1, \xi, \dots, \xi^{2^r-2}\}$ is a $(2^r,2^r,2^r,1)$-relative difference set in the additive group of $\GR(4,r)$ relative to the maximal ideal $\I = 2\GR(4,r)$. \end{proposition} \begin{proof} This proof is similar to the one by \textcite[Lemmas~2 and~3]{bonnecaze1997}. From \autoref{lem:unit_difference_unit} we know that the difference $\beta-\beta'$ of two Teichmüller elements $\beta, \beta' \in \T$ is a unit if and only if $\beta \ne \beta'$. Additionally, we know that each element $x$ of $\GR(4,r)$ has a unique $2$-adic representation $x = \alpha_0 + 2\alpha_1,\ \alpha_0, \alpha_1 \in \T$. We consider the equation \begin{align} \label{eq:PDS_1} \alpha_0 + 2\alpha_1 = \beta - \beta'. \end{align} We choose $\beta$ and $\beta'$ and thereby fix $\alpha_0$ and $\alpha_1$. Hence, \cref{eq:PDS_1} has $4^r$ solutions $(\alpha_0, \alpha_1, \beta, \beta')$. In the next step we consider the system of equations \begin{align*} \alpha_0^2+2\alpha_0\alpha_1 + \alpha_1^2 &= \alpha_0\beta\\ \alpha_1^2 &= \alpha_0 \beta'\\ \beta &= \beta',\qquad \text{if } \alpha_0 = 0. \end{align*} This system of equations also has $4^r$ solutions $(\alpha_0, \alpha_1, \beta, \beta')$: It has $2^r$ solutions if $\alpha_0 = 0$, namely $(0,0,\beta, \beta)$ for arbitrary $\beta$, and it has $2^r(2^r-1)$ solutions if $\alpha_0 \ne 0$: In this case, we can choose $\alpha_0 \in \T^*, \alpha_1 \in \T$ arbitrarily and obtain unique solutions for $\beta, \beta'$, namely \begin{align} \label{eq:beta} \beta &= \alpha_0^{-1}(\alpha_0 + \alpha_1)^2,\\ \label{eq:beta'} \beta' &= \alpha_0^{-1}\alpha_1^2.
\end{align} It is easy to check that the solutions to the system of equations also solve \cref{eq:PDS_1}. Hence, each unit $\alpha_0 + 2\alpha_1, \alpha_0 \ne 0,$ can be uniquely represented as the difference of two Teichmüller elements $\beta, \beta'$ as described in \cref{eq:beta} and \cref{eq:beta'}. \end{proof} \autoref{prop:TeichmuellerSet_PDS_even} implies \begin{corollary} \label{cor:diff_Teichmueller_even} In $\GR(4, r)$, $r \ge 2$, we have for each $\alpha \in \T$ that \begin{align*} \Delta(1+2\alpha)\T^* = \GR(4,r) \setminus (\I \cup (1+2\alpha)\T^* \cup -(1+2\alpha)\T^*), \end{align*} and each element of $\GR(4,r) \setminus (\I \cup (1+2\alpha)\T^* \cup -(1+2\alpha)\T^*)$ has multiplicity $1$ in this multiset. \end{corollary} \begin{proof} According to \autoref{prop:TeichmuellerSet_PDS_even} the Teichmüller set $\T$ is a $(2^r, 2^r, 2^r, 1)$ relative difference set in the additive group of $\GR(4,r)$ relative to $\I$. Hence, by removing $0$ from $\T$ to obtain $\T^*$ we remove differences of the type $\T^*-0 = \T^*$ and $0-\T^* = -\T^*$ from our set of differences. Multiplying by the unit $1+2\alpha$, which permutes $\GR(4,r)$ and fixes $\I$ setwise, then yields the claim for every $\alpha \in \T$. \end{proof} These results enable us to determine an intersection number of $dev(E)$. For our purpose the following \autoref{lem:sum_Teichmueller_even_atmost2} suffices. A more detailed analysis of the differences and sums in the Teichmüller set in $\GR(4,r)$, which includes the following result in a slightly different way, is given in \cite[Section~III-C]{hammons1994} and by \textcite[Theorem~1]{bonnecaze1997}. Arguments of this type have also been used by \textcite{ghinelli2003,deresmini2002} in the theory of difference sets to obtain ovals in the development of a difference set and by \textcite{pottzhou2017} to construct Cayley graphs. \begin{lemma} \label{lem:sum_Teichmueller_even_atmost2} In $\GR(4,r)$, $r \ge 2$, an element $s$ of the multiset $\Delta_+(1+2\alpha)\T^*$ has multiplicity~$2$ if $s$ is a unit and multiplicity $1$ if $s \in \I \setminus \{0\}$.
\end{lemma} \begin{proof} Let $\beta, \gamma$ be two distinct elements of the Teichmüller group $\T^*$. Since $2\T = \I$, it follows that $\beta+\beta = 2\beta \ne 2\gamma = \gamma+\gamma$. Hence, the elements of $\I \setminus \{0\}$ are represented exactly once as the sum of two elements of $\T^*$. We now consider sums of the type $\beta + \gamma, \beta \ne \gamma$. It is clear that each sum $s = \beta + \gamma$ has at least two representations: $\beta + \gamma$ and $\gamma + \beta$. To prove that there are no more than those two representations we assume that there exist elements $\beta', \gamma' \in \T^*,\ \beta', \gamma' \notin \{\beta,\gamma\}$ such that $\beta+\gamma = \beta' + \gamma'$. This is equivalent to $\beta-\beta' = \gamma'-\gamma$. However, according to \autoref{cor:diff_Teichmueller_even} all the differences of two distinct elements of $\T^*$ are distinct, a contradiction. Consequently, $\beta + \gamma \ne \beta' + \gamma'$, and the statement for a general block $(1+2\alpha)\T^*$ follows by multiplying by the unit $1+2\alpha$. \end{proof} \autoref{cor:diff_Teichmueller_even} and \autoref{lem:sum_Teichmueller_even_atmost2} lead us to the following statement about the block intersection numbers in $dev(E)$: \begin{lemma} In $\GR(4,r)$, $r \ge 2$, let $E_\alpha = (1+2\alpha)\T^*, \alpha \in \T,$ and let $d \in \GR(4,r) \setminus \{0\}$. Then \begin{align*} \left|(E_{\alpha} + d) \cap E_{\alpha}\right| &= \begin{cases} 1, & \textrm{if } d \in \Delta E_{\alpha},\\ 0, & \textrm{in any other case,} \end{cases}\\ \left|(E_{\alpha} + d) \cap -E_{\alpha}\right| &= \begin{cases} 2, & \textrm{if } d \in \Delta_+\left(-E_{\alpha}\right) \setminus \I,\\ 1, & \textrm{if } d \in \I \setminus \{0\},\\ 0, & \textrm{in any other case.} \end{cases} \end{align*} \end{lemma} Since $(1+2\alpha)\T^* + d, \alpha \in \T, d \in \GR(4,r),$ and $-(1+2\alpha)\T^*$ are blocks of the design $dev(E)$, it follows that $dev(E)$ contains blocks that intersect in two elements.
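The chain of statements above can be checked directly in the smallest case. The following sketch works in $\GR(4,2)$, modelled as $\Z_4[x]/(x^2+x+1)$ (this polynomial model is our own choice); it verifies \autoref{prop:TeichmuellerSet_PDS_even} and exhibits a translate of the block $\T^*$ meeting the block $-\T^*$ in exactly two points.

```python
from collections import Counter

# GR(4,2) = Z_4[x]/(x^2 + x + 1); the pair (a, b) stands for a + b*x.
def mul(u, v):
    a, b = u
    c, d = v
    return ((a*c + 3*b*d) % 4, (a*d + b*c + 3*b*d) % 4)  # uses x^2 = 3x + 3

def add(u, v):
    return ((u[0] + v[0]) % 4, (u[1] + v[1]) % 4)

def sub(u, v):
    return ((u[0] - v[0]) % 4, (u[1] - v[1]) % 4)

xi = (0, 1)                                    # xi has multiplicative order 3
T = [(0, 0), (1, 0), xi, mul(xi, xi)]          # Teichmueller set {0, 1, xi, xi^2}
I = {(a, b) for a in (0, 2) for b in (0, 2)}   # maximal ideal 2*GR(4,2)

# T is a (4,4,4,1)-relative difference set relative to I:
diffs = Counter(sub(s, t) for s in T for t in T if s != t)
assert set(diffs) == {(a, b) for a in range(4) for b in range(4)} - I
assert all(m == 1 for m in diffs.values())

# a translate of the block T* meets the block -T* in exactly two points
T_star = T[1:]
neg = {sub((0, 0), t) for t in T_star}
sizes = [len({add(t, d) for t in T_star} & neg)
         for a in range(4) for b in range(4) for d in [(a, b)]]
assert max(sizes) == 2
print("GR(4,2): relative difference set verified; intersection number 2 attained")
```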
According to \autoref{cor:intersection_numbers_devC_Davis} however, the design $dev(C)$ has block intersection numbers $2^r-2, 0$ and $1$, and $2^r-2 > 2$ if $r \ge 3$. We conclude that the combinatorial designs $dev(C)$ and $dev(E)$ are nonisomorphic if $p=2$ and $r \ge 3$. Now, let $p$ be an odd prime. \begin{lemma} \label{lem:T=-T} In $\GR(p^2,r)$, $p$ odd, we have $\T^* = -\T^*$. \end{lemma} \begin{proof} According to \autoref{lem:-1_GR}, $-1 \in \T^*$ if $p$ is odd. \end{proof} Hence, the Teichmüller group consists of pairs of an element and its additive inverse, and we can write \begin{equation*} \T^* = \left\lbrace 1, \xi, \xi^2, \dots, \xi^{(p^r-3)/2}, -1, -\xi, -\xi^2, \dots, -\xi^{(p^r-3)/2} \right\rbrace. \end{equation*} From this notation we can deduce the following lemma. \begin{lemma} \label{lem:pairwise_differences} In $\GR(p^2,r)$, $p$ odd, a difference $d$ of two distinct elements of $\T^*$ occurs at least twice in $\Delta \T^*$, except if $d \in 2\T^*$, in which case $d$ occurs at least once in $\Delta \T^*$. Consequently, a difference $d$ has odd multiplicity in $\Delta \T^*$ if and only if $d \in 2\T^*$. \end{lemma} \begin{proof} Let $d = \alpha - \alpha' \in \Delta \T^*$ be the difference of two arbitrary elements of the Teichmüller group $\T^*$. According to \autoref{lem:T=-T}, the elements $-\alpha, -\alpha'$ are also in $\T^*$. Hence, $-\alpha'- (-\alpha) = d$ is another representation of $d$ as the difference of two elements from $\T^*$. However, those two representations are the same if $\alpha' = -\alpha$. Then $d=2\alpha$, and it is not guaranteed that $d$ has more than this single representation. It could, however, happen that there exist more distinct pairs $(\beta_1,\beta_1'), \dots, (\beta_\ell,\beta_\ell'), \beta_1, \beta_1', \dots, \beta_\ell, \beta_\ell' \in \T^* \setminus \{\alpha, \alpha'\}$ with $\beta_i - \beta_i' = -\beta_i' - (-\beta_i) = d$.
Then, $d$ has $2\ell+2$ representations as a difference if $d \notin 2\T^*$ and $2\ell+1$ representations if $d \in 2\T^*$. \end{proof} \begin{corollary} \label{cor:int_num_>1} In $\GR(p^2,r)$, $p$ odd (except $p=3$ and $r=1$), let $d \in (\Delta \T^* \setminus 2\T^*)$. Then, the block intersection number $|\T^* \cap (\T^* + d)| \ge 2$. \end{corollary} \begin{proof} The statement follows from \autoref{lem:pairwise_differences}: Let $d = \alpha-\alpha'$ for some distinct $\alpha, \alpha' \in \T^*, \alpha' \ne -\alpha$. Then $d$ occurs at least twice in $\Delta \T^*$. In the case $p=3$ and $r=1$, the Teichmüller group contains only two elements, namely $1$ and $-1$. Hence, there are no distinct $\alpha, \alpha' \in \T^*$ with $\alpha' \ne -\alpha$. So, we need to exclude this case. \end{proof} To finish our proof of \autoref{lem:int_numbers_Davis} we need to show that the block intersection numbers greater than $1$ from \autoref{cor:int_num_>1} are less than $p^r-2$. \begin{lemma} \label{lem:d_less_p^r-2} In $\GR(p^2,r)$, $p$ odd (except $p=3$ and $r=1$), there is no difference $d \in \Delta \T^*$ with multiplicity $p^r-2$ in $\Delta \T^*$. \end{lemma} \begin{proof} Assume there is an element $d \in \Delta \T^*$ with multiplicity $p^r-2$. We know from \autoref{lem:differences_in_T_cosets} that $\Delta \T^*$ is the union of whole cosets of $\T^*$ and that the elements of the same coset have the same multiplicity. Counting multiplicities, $\Delta \T^*$ contains $(p^r-1)(p^r-2)$ elements. Hence, if $d$ has multiplicity $p^r-2$, every $d'$ from the same coset as $d$ will also have multiplicity $p^r-2$. Since $\T^*$ and its cosets contain $p^r-1$ elements, this means that $\Delta \T^*$ consists of $p^r-2$ copies of a single coset. Obviously, $p^r-2$ is odd. From \autoref{lem:pairwise_differences} we know that an element $d \in \Delta\T^*$ has odd multiplicity only if $d \in 2\T^*$. Thus, we have $\Delta \T^* = (p^r-2)(2\T^*)$. Now, let $\alpha$ be an arbitrary element of $\T^* \setminus \{\pm 1\}$.
Then $\alpha-1 \in \Delta \T^*$, and there is an element $\beta \in \T^*$ such that $\alpha-1 = 2\beta$. However, since $-1 \in \T^*$, the element $\alpha+1$ is also in $\Delta \T^*$, and we have $\alpha+1 = 2\beta +2 = 2(\beta + 1)$. Since $p\ge 3$, the element $2$ is a unit, and thus $\beta + 1 \in \T^*$. Consequently, we have $\Delta \T^* = (p^r-2)\T^*$.\par In other words, $\T^*$ needs to be an (additive) $\left(p^r, p^r-1, p^r-2\right)$ difference set in $\T = \T^* \cup \{0\}$. This is only the case if $\T$ forms an additive group. It is clear, however, that this is not the case: Since $1 \in \T$, this would mean that $p\in \T$; but $p$ is neither $0$ nor a unit, so $p \notin \T$. Hence, there is no $d \in \Delta\T^*$ with multiplicity $p^r-2$. \end{proof} \autoref{lem:d_less_p^r-2} immediately gives us the following result. \begin{corollary} \label{cor:int_num_<p^r-2} In $\GR(p^2,r)$, $p$ odd (except $p=3$ and $r=1$), the block intersection number $|\T^* \cap (\T^* + d)| < p^r-2$ for all $d \in \Delta \T^*$. \end{corollary} By combining \autoref{cor:int_num_>1} and \autoref{cor:int_num_<p^r-2} we see that for odd $p$ (except $p=3$ and $r=1$) there exists an element $d \in \GR(p^2,r)$ such that the intersection number of the blocks $\T^*$ and $\T^* + d$ of $dev(E)$ is greater than $1$ and less than $p^r-2$. This concludes the proof of \autoref{lem:int_numbers_Davis}. \section{Conclusion} In this paper, we solve the isomorphism problem for two pairs of near-complete $(v,k,k-1)$ disjoint difference families in Galois rings and finite fields. However, there exist many more constructions of difference families, and it is a natural question to ask whether their associated designs are nonisomorphic. Hence, we leave to future work the task of solving the isomorphism problem for more disjoint difference families.\par Moreover, it would be nice to have a general powerful construction of difference families for which one can show that (almost) all of the associated designs are pairwise nonisomorphic.
For parameters $(v,k,k-1)$, the construction presented by \textcite{buratti2017} seems to be powerful, and it would be interesting to check whether this construction leads to nonisomorphic designs. \printbibliography \end{document}
https://arxiv.org/abs/1103.3938
On the Number of Facets of Polytopes Representing Comparative Probability Orders
Fine and Gill (1973) introduced the geometric representation for those comparative probability orders on n atoms that have an underlying probability measure. In this representation every such comparative probability order is represented by a region of a certain hyperplane arrangement. Maclagan (1999) asked how many facets a polytope, which is the closure of such a region, might have. We prove that the maximal number of facets is at least F_{n+1}, where F_n is the nth Fibonacci number. We conjecture that this lower bound is sharp. Our proof is combinatorial and makes use of the concept of flippable pairs introduced by Maclagan. We also obtain an upper bound which is not too far from the lower bound.
\section{Introduction} Considering comparative probability orders from the combinatorial viewpoint, Maclagan \cite{DM} introduced the concept of a flippable pair of subsets. This concept appears to be central to the theory, as it has nice algebraic and geometric characterisations. Algebraically, comparisons of subsets in flippable pairs correspond to irreducible vectors in the discrete cone associated with the comparative probability order, i.e., those vectors that cannot be split into the sum of two other vectors of the cone \cite{PF1,CCS}. Geometrically, a representable comparative probability order corresponds to a polytope in a certain arrangement of hyperplanes, and flippable pairs correspond (with one exception) to those facets of the polytope which are also facets of one of the neighboring polytopes. Christian et al.\ \cite{CCS} showed that in any minimal set of comparisons that define a representable comparative probability order all pairs of subsets in those comparisons are flippable. Maclagan formulated a number of very interesting questions (see \cite[p.~295]{DM}). In particular, she asked how many flippable pairs a comparative probability order on $n$ atoms may have. In this paper we show that a representable comparative probability order may have as many as $F_{n+1}$ flippable pairs, where $F_{n+1}$ is the $(n+1)$th Fibonacci number. We conjecture that this lower bound on the maximal number of flippable pairs is sharp. This conjecture was put forward by one of us and we call it Searles' conjecture. We provide an upper bound on the maximal number of flippable pairs in a representable comparative probability order that is not too far from $F_{n+1}$. Section 2 contains preliminary results and formulates Maclagan's problem. In Sections 3 and 4 we discuss Searles' conjecture in relation to Maclagan's problem and prove the aforementioned lower and upper bounds. Section 5 concludes by stating several open problems. \section{Preliminaries} \label{prels} {\bf 2.1.
Comparative Probability Orders.} Given a (weak) order, that is, a reflexive, complete and transitive binary relation, $\preceq$ on a set $A$, the symbols $\prec$ and $\sim$ will, as usual, denote the corresponding (strict) linear order and indifference relation, respectively. \begin{definition} Let $X$ be a finite set. A linear order $\preceq$ on $2^X$ is called a {\em comparative probability order} on $X$ if $\emptyset\prec A$ for every nonempty subset $A$ of $X$, and $\preceq$ satisfies de Finetti's axiom, namely \begin{equation} \label{deFeq} A\preceq B \ \Longleftrightarrow \ A\cup C\,\preceq\, B\cup C , \end{equation} for all $A,B,C\in 2^X$ such that $(A\cup B)\cap C=\emptyset$. \end{definition} As in \cite{PF1,PF2}, at this stage of investigation we preclude indifferences between sets. For convenience, we will further suppose that $X=[n]=\{1,2,\ldots, n\}$ and denote the set of all comparative probability orders on $2^{[n]}$ by $\mbox{$\cal P$}_n$. If we have a probability measure ${\bf p}=(\row pn)$ on $X$, where $p_i$ is the probability of $i$, then we know the probability $p(A)$ of every event $A$, which is given by $p(A)=\sum_{i\in A}p_i$. We may now define an order $\preceq_{\bf p}$ on $2^X$ by \[ A\preceq_{\bf p} B \quad \mbox{if and only if}\quad p(A)\le p(B). \] If the probabilities of all events are different, then $\preceq_{\bf p}$ is a comparative probability order on $X$. Any such order is called {\em (additively) representable}. The set of representable orders is denoted by $\mbox{$\cal L$}_n$. It is known \cite{KPS} that $\mbox{$\cal L$}_n$ is strictly contained in $\mbox{$\cal P$}_n$ for all $n\ge 5$. Since a representable comparative probability order does not have a unique probability measure representing it but a class of them, any representable comparative probability order can be viewed as a credal set (a closed and convex set of probability measures, see, e.g., \cite{L1}) of a very special type.
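These definitions are easy to experiment with. The following Python sketch (our own illustration, not part of the development) builds the representable order $\preceq_{\bf u}$ on $2^{[3]}$ from the integer utility vector ${\bf u} = (1,2,4)$ and checks de Finetti's axiom exhaustively.

```python
from itertools import combinations

# u = (1, 2, 4) gives pairwise distinct utilities u(A), hence a linear order
u = {1: 1, 2: 2, 3: 4}
subsets = [frozenset(c) for k in range(4) for c in combinations([1, 2, 3], k)]
util = {A: sum(u[i] for i in A) for A in subsets}

# the representable order, listed from smallest to largest event
order = sorted(subsets, key=lambda A: util[A])
print([sorted(A) for A in order])
# → [[], [1], [2], [1, 2], [3], [1, 3], [2, 3], [1, 2, 3]]

# de Finetti's axiom: A ⪯ B  iff  A∪C ⪯ B∪C  whenever (A∪B) ∩ C = ∅
for A in subsets:
    for B in subsets:
        for C in subsets:
            if not (A | B) & C:
                assert (util[A] <= util[B]) == (util[A | C] <= util[B | C])
```

Since the utility of a disjoint union is the sum of the utilities, the axiom holds automatically for every representable order; the check above merely illustrates this.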
We will return to this interpretation slightly later.\par As in \cite{PF1,PF2}, it is often convenient to assume that $ 1\prec 2\prec \ldots \prec n. $ This reduces the number of possible orders under consideration by a factor of $n!$. The set of all comparative probability orders on $[n]$ that satisfy this condition will be denoted by $\mbox{$\cal P$}_n^*$, and the set of all such representable comparative probability orders on $[n]$ will be denoted by~$\mbox{$\cal L$}_n^*$.\par We can also define a representable comparative probability order by any vector of positive utilities ${\bf u}=(\row un)$ by \[ A\preceq_{\bf u} B \quad \mbox{if and only if}\quad \sum_{i\in A}u_i\le \sum_{i\in B}u_i. \] We do not get anything new since this will be the order $\preceq_{\bf p}$ for the measure ${\bf p}=\frac{1}{S}{\bf u}$, where $S=\sum_{i=1}^nu_i$. However, sometimes it is convenient to have the coordinates of ${\bf u}$ integers. In this case we will call $u(A)=\sum_{i\in A}u_i$ the {\em utility} of $A$. Kraft et al \cite{KPS} gave necessary and sufficient conditions for a comparative probability order to be representable. They are not so easy to formulate and they have appeared in the literature in various forms (see, e.g., \cite{DS,PF1}). The easiest way to formulate them is through the concept of a {\em trading transform} introduced in \cite{TZ}. \begin{definition} A sequence of subsets $(\row Ak; \row Bk)$ of $[n]$ of even length $2k$ is said to be a trading transform of length $k$ if for every $i\in [n]$ $$ \left|\{j\mid i\in A_j\}\right|=\left|\{j\mid i\in B_j\}\right|. $$ In other words, sets $\row Ak$ can be converted into $\row Bk$ by rearranging their elements. \end{definition} Now the result of \cite{KPS} can be reformulated as follows. 
\begin{theorem}[Kraft--Pratt--Seidenberg] \label{KPStheorem} A comparative probability order $\preceq $ is representable if and only if there is no $k$ for which there exist pairs $A_i\prec B_i$, $i=1,2,\ldots, k$, such that $ (\row Ak; \row Bk) $ is a trading transform of length~$k$. \end{theorem} \par \medskip {\bf 2.2. Discrete Cones.} To every linear order $\preceq\,\in \mbox{$\cal P$}_n^*$, there corresponds a {\em discrete cone} $C(\preceq)$ in $T^n$, where $T=\{-1,0,1\}$ (as defined in \cite{AK,PF1}). \begin{definition} A subset ${\cal C}\subseteq T^n$ is said to be a discrete cone if the following properties hold: \begin{enumerate} \item[{\rm D1.}] $\{ {\bf e}_1, {\bf e}_2,\ldots, {\bf e}_{n} \}\subseteq {\cal C}$, where $\{{\bf e}_1,\ldots,{\bf e}_n\}$ is the standard basis of $\mathbb{R}^n$, \item[{\rm D2.}] for every ${\bf x}\in T^n$, exactly one vector of the set $\{-{\bf x},{\bf x}\}$ belongs to $\cal C$, \item[{\rm D3.}] ${\bf x}+{\bf y}\in {\cal C}$ whenever ${\bf x},{\bf y}\in {\cal C}$ and ${\bf x}+{\bf y}\in T^n$. \end{enumerate} \end{definition} We note that in \cite{PF1} Fishburn requires ${\bf 0}\notin {\cal C}$ because his orders are anti-reflexive. In our case, condition D2 implies ${\bf 0}\in {\cal C}$. \medskip For each subset $A\subseteq X$ we define the characteristic vector $\chi_A$ of this subset by setting $\chi_A(i)=1$, if $ i\in A$, and $\chi_A(i)=0$, if $ i\notin A$. Given a comparative probability order $\preceq $ on $X$, we define the characteristic vector $\chi(A,B)=\chi_B -\chi_A\in T^n$ for every possible pair $(A,B) $ such that $A\preceq B$. The set of all characteristic vectors $\chi(A,B)$ is denoted by $C(\preceq)$. The two axioms of comparative probability guarantee that $C(\preceq)$ is a discrete cone (see \cite[Lemma~2.1]{PF1}). \iffalse We can define a restricted sum for vectors in a discrete cone $\cal C$. Let ${\bf u},{\bf v}\in {\cal C}$.
Then \[ {\bf u}\oplus {\bf v}= \left\{ \begin{array}{cl} {\bf u}+{\bf v}& \text{if ${\bf u}+{\bf v}\in T^n $}, \\ \text{undefined} & \text{if ${\bf u}+{\bf v}\notin T^n $}. \end{array} \right. \] This is equivalent to transitivity of the respective comparative probability order \cite{CCS}. \begin{definition} We say that the cone ${\cal C}$ is {\em weakly generated} by vectors $\brow vk$ if every nonzero vector ${\bf c}\in {\cal C}$ can be expressed as a restricted sum of $\brow vk$, in which each generating vector can be used as many times as needed. We denote this by ${\cal C}={<}\brow vk{>_w}$. \end{definition} \fi \medskip {\bf 2.3. Critical and Flippable Pairs.} Not all relations $A\prec B$ for pairs of subsets $(A,B)$ in a comparative probability order are equally informative. Some of these may be implied by others through transitivity or de Finetti's axiom. This is certainly true for any pair consisting of two nonadjacent sets or two sets with nonempty intersection. \begin{definition} Let $A$ and $B$ be disjoint subsets of $[n]$. The pair $(A,B)$ is said to be {\em critical} for $\preceq $ if $A\prec B$ and the pair is adjacent, i.e., there is no $C\subseteq [n]$ for which $A\prec C\prec B$. \end{definition} In the above definition we follow Fishburn \cite{PF3}, while Maclagan \cite{DM} calls such pairs {\em primitive}. It is known \cite{DM} that, if $\preceq$ and $\preceq'$ are distinct comparative probability orders, then there exists a critical pair $(A,B)$ for $\preceq$ such that $B \prec' A$. This shows that critical pairs are of interest because they completely determine the order. But there are even more interesting pairs. \begin{definition} A critical pair $(A,B)$ is said to be {\em flippable\/} for $\preceq $ if for every $D\subseteq [n]$, disjoint from $A\cup B$, the pair $(A\cup D,B\cup D)$ is adjacent in $\preceq$.
\end{definition} We note that the set of flippable pairs is not empty, since the central pair of any comparative probability order is flippable \cite{KPS}. Indeed, this consists of a certain set $A$ and its complement $A^c=X\setminus A$, and there is no $D$ which has empty intersection with both of these sets. It is not known whether this can be the {\em only\/} flippable pair of the order.\par \medskip Suppose now that a pair $(A,B)$ is flippable for a comparative probability order $\preceq $, and $A\ne \emptyset$. Then reversing each comparison $A\cup D \prec B\cup D$ (to $B\cup D \prec A\cup D$), we will obtain a new comparative probability order $\preceq '$, since the de Finetti axiom (\ref{deFeq}) will still be satisfied. We say that $\preceq ' $ is obtained from $\preceq $ by {\em flipping over\/} $A\prec B$. The orders $\preceq $ and $\preceq '$ are called {\em flip-related}. This flip relation turns $\mbox{$\cal P$}_n$ into a graph which we will denote ${\cal G}_n$. \begin{definition} An element ${\bf w}$ of the cone ${\cal C}$ is said to be {\em reducible\/} if there exist two other vectors ${\bf u}, {\bf v}\in {\cal C}$ such that ${\bf w}={\bf u} + {\bf v}$, and {\em irreducible\/} otherwise. The set of all irreducible elements of ${\cal C}$ will be denoted as $\text{Irr}({\cal C})$. \end{definition} \begin{theorem}[\cite{DM,CCS}] \label{flippable-irreducible} A pair $(A,B)$ of disjoint subsets is flippable for $\preceq $ if and only if the corresponding characteristic vector $\chi (A,B)$ is irreducible in ${\cal C}(\preceq )$. So the cardinality $|\text{Irr}({\cal C}(\preceq))|$ is the total number of flippable pairs in $\preceq$. \end{theorem} Flippable pairs uniquely define a representable order but this does not hold for nonrepresentable orders \cite{CCS}. \medskip As we know, the flip relation turns $\mbox{$\cal P$}_n$ into a graph ${\cal G}_n$. 
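Theorem~\ref{flippable-irreducible} is easy to verify computationally in small cases. The sketch below (an illustration only; atoms indexed from $0$, and irreducibility tested among the nonzero cone vectors, since the cone formally contains ${\bf 0}$) counts flippable pairs straight from the definition and compares the count with the number of irreducible vectors of the discrete cone, for the lexicographic order on three atoms:

```python
from itertools import combinations, product

U = (1, 2, 4)                        # utilities inducing the lexicographic order on 2^{[3]}
n = len(U)
subsets = [frozenset(A) for k in range(n + 1) for A in combinations(range(n), k)]
rank = {A: i for i, A in enumerate(sorted(subsets, key=lambda A: sum(U[i] for i in A)))}

def adjacent(A, B):                  # B is the immediate successor of A in the order
    return rank[B] - rank[A] == 1

# Flippable pairs, straight from the definition.
flippable = []
for A, B in product(subsets, repeat=2):
    if A & B or not adjacent(A, B):
        continue                     # must be a critical (disjoint and adjacent) pair
    rest = frozenset(range(n)) - (A | B)
    if all(adjacent(A | frozenset(D), B | frozenset(D))
           for k in range(len(rest) + 1) for D in combinations(rest, k)):
        flippable.append((A, B))

# Irreducible nonzero vectors of the discrete cone C(<=): chi(A,B) = chi_B - chi_A.
cone = {tuple(int(i in B) - int(i in A) for i in range(n))
        for A, B in product(subsets, repeat=2) if rank[A] <= rank[B]}
nonzero = cone - {(0,) * n}
irreducible = [w for w in nonzero
               if not any(tuple(x + y for x, y in zip(u, v)) == w
                          for u in nonzero for v in nonzero)]

print(len(flippable), len(irreducible))   # both counts agree, as the theorem predicts
```

For this order the three flippable pairs are $(\emptyset,\{1\})$, $(\{1\},\{2\})$ and $(\{1,2\},\{3\})$ (in the $0$-indexed encoding of the code), matching the three irreducible cone vectors.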
Let $\preceq $ and $\preceq '$ be two comparative probability orders which are connected by an edge in this graph (and so are flip-related). We say that $\preceq $ and $\preceq '$ are {\em friendly\/} if they are either both representable or both nonrepresentable. \par\medskip {\bf 2.4. Geometric Representation of Representable Orders and Maclagan's Problem.} Let $A,B\subseteq [n]$ be disjoint subsets, of which at least one is nonempty. Let $H(A,B)$ be a hyperplane consisting of all points ${\bf x}\in \mathbb{R}^n$ satisfying the equation \begin{equation*} \sum_{a\in A}x_a-\sum_{b\in B}x_b=0. \end{equation*} We denote the corresponding hyperplane arrangement by ${\cal A}_n$. Also let $J$ be the hyperplane $x_1+x_2+\ldots +x_n=1,$ and let ${\cal H}_n={\cal A}_n^J$ be the induced hyperplane arrangement. Fine and Gill \cite{FG} showed that the regions of ${\cal H}_n$ in the positive orthant $\mathbb{R}^n_+$ of $\mathbb{R}^n$ correspond to representable orders from ${\mbox{$\cal P$}}_n$. Now we can see what is special in the credal sets that correspond to comparative probability orders. They are not only convex, as credal sets must be, but they are in fact interiors of polytopes. When in the future we refer to a region of this hyperplane arrangement we will refer to the polytope which is the closure of that region. This will invite no confusion. \begin{problem}[Maclagan \cite{DM}] \label{prob1} What is an upper bound for the number of representable neighbors for a representable comparative probability order on $n$ atoms? In other words, how many facets can regions of ${\cal H}_n$ have? \end{problem} The maximal number of facets of regions of ${\cal H}_n$ we will call the $n$th {\em Maclagan number} and denote $M(n)$, while the maximal number of flippable pairs for a representable order on $n$ atoms will be denoted $m(n)$. In this paper we provide bounds on these functions, some of which we suspect to be sharp. 
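As a concrete illustration of the region count (not used in the sequel): for $n=3$, once $1\prec 2\prec 3$ is fixed, the only comparison not forced by the axioms is $\{3\}$ against $\{1,2\}$, so there are exactly two orders in $\mbox{$\cal L$}_3^*$, i.e., two corresponding regions of ${\cal H}_3$. Random sampling of utility vectors finds both (Python sketch, atoms indexed from $0$):

```python
import random
from itertools import combinations

def order_from_utilities(u):
    """The representable comparative probability order on 2^{[3]} induced by u,
    encoded as the tuple of subsets listed from smallest to largest."""
    subsets = [A for k in range(4) for A in combinations(range(3), k)]
    return tuple(sorted(subsets, key=lambda A: sum(u[i] for i in A)))

random.seed(1)
regions = set()
for _ in range(1000):
    u = sorted(random.random() for _ in range(3))   # enforce 1 < 2 < 3
    regions.add(order_from_utilities(u))
print(len(regions))   # 2: the region is decided by whether u3 < u1 + u2 or u3 > u1 + u2
```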
It is clearly sufficient to solve Maclagan's problem (Problem~\ref{prob1}) for comparative probability orders in $\mbox{$\cal L$}_n^{*}$. \par\medskip The main combinatorial tool for calculating or estimating $M(n)$ is the following semi-obvious proposition. \begin{proposition}[\cite{DM,CCS}] \label{mainprop} Let $\preceq $ be a representable comparative probability order, and let $P$ be the corresponding convex polytope, which is a region of the hyperplane arrangement ${\cal H}_n$. Then the number of facets of $P$ equals the number of representable comparative probability orders that are flip-related to $\preceq$ (plus one if the pair $\emptyset \prec 1$ is flippable). \end{proposition} \begin{corollary} \label{Mandm} $M(n)\le m(n)$. \end{corollary} \begin{proof} From the proposition it follows that $M(n)$ cannot be greater than $m(n)$. However, it could in principle be smaller, since not all flips of a representable comparative probability order with the maximal number of flips need be friendly. \end{proof} It is worth noting that the minimal number of facets of a region in ${\cal H}_n$ is known and equal to $n$ \cite{CS,CCS}. \section{The Lower Bound} \iffalse Let us summarise what is known about the cardinality of $|\text{Irr}({\cal C})|$ in the following. For $n=5$ and $n=6$ a solution was found computationally, using Proposition~\ref{mainprop}. \begin{theorem}[\cite{CS,CCS}] Let $\preceq $ be a comparative probability order on $2^X$ with $|X|=n$, and ${\cal C}$ be the corresponding discrete cone. Then \begin{itemize} \item if $\preceq $ is representable, then the set of all irreducible elements ${\rm Irr}({\cal C})$ weakly generates $\cal C$ and $|{\rm Irr}({\cal C})|\ge n$, while \item if $\preceq $ is nonrepresentable, then the set of all irreducible elements ${\rm Irr}({\cal C})$ may not generate $\cal C$ and it may be that $|{\rm Irr}({\cal C})|< n$. \end{itemize} \end{theorem} \fi It is known that $M(3)=m(3)=3$ and $M(4)=m(4)=5$ \cite{CCS}.
Computations in {\sc Magma\/} {\cite{CCS}} show that $5\le |{\rm Irr}({\cal C})|\le 8$ for $n=5$ and $5\le |{\rm Irr}({\cal C})|\le 13$ for $n=6$ with all intermediate values being attainable for both values of $n$. It was also observed that for $n=5$ and $n=6$, all comparative probability orders with the largest possible number of flips (namely 8 for $n=5$, and 13 for $n=6$) are representable, and all of their flips are friendly. This means that $M(5)=m(5)=8$ and $M(6)=m(6)=13$. Searles noticed that the four known values are Fibonacci numbers, i.e., belong to the sequence defined by $F_1=F_2=1$ and $F_{n+2}=F_{n+1}+F_n$. He conjectured the following. \par \medskip \noindent {\bf Conjecture (Searles, 2007)} The maximal number of facets of regions of $\mbox{$\cal H$}_n$ is equal to the maximal cardinality of $\text{Irr}({\cal C(\preceq)})$ for $\preceq \in \mbox{$\cal L$}^{*}_n$, and equal to the Fibonacci number $F_{n+1}$ or, alternatively, $M(n)=m(n)=F_{n+1}$.\par\medskip The first part of this conjecture will be proved if we show that for some representable comparative probability order $\preceq $, for which $|\text{Irr}({\cal C(\preceq)})|$ is maximal, all flips of $\preceq $ are friendly. The existence of such an order was checked for all $n\le 12$. \par\medskip In this section we prove that $M(n)\ge F_{n+1}$. To this end we prove \begin{theorem} \label{Dtheorem} In ${\cal P}_n$ there exists a representable comparative probability order which (a) has $ F_{n+1}$ flippable pairs and (b) whose flips are all friendly. \end{theorem} The proof will be split into several observations. Let us introduce the following notation first. Let ${\bf u}=(\row un)$ be a vector such that $0<u_1<\ldots<u_n$ and let $q>0$ be a number such that $u_j<q<u_{j+1}$ for some $j=0,1,2,\ldots,n$ (we assume that $u_0 = 0$ and $u_{n+1}=\infty$). In this case we set $({\bf u},q)$ to be the vector of $\mathbb{R}^{n+1}$ such that \[ ({\bf u},q)=(u_1,\ldots,u_j,q,u_{j+1},\ldots, u_n).
\] We also denote ${\bf \ell}_n=(1,2,4,\ldots,2^{n-1})$ and $2{\bf \ell}_n=(2,4,8,\ldots,2^{n})$. We start with an easy and well-known observation. \begin{proposition} \label{pr2} $\preceq_{{\bf\ell}_n}$ is the lexicographic order on $2^{[n]}$. The utilities of subsets from $2^{[n]}$ cover the whole range of integers between 0 and $2^n-1$ and the utilities of any two consecutive subsets in it differ by~$1$. \end{proposition} \begin{proof} This is equivalent to every natural number possessing a unique binary representation. We leave the verification to the reader.\end{proof} \begin{proposition} \label{pr3} Let $q$ be an odd positive integer smaller than $2^{n}$ and ${\bf m}=(2{\bf\ell}_n,q)$. Consider the order $\preceq_{\bf m}$ on $2^{[n+1]}$. Then the difference between the utilities of any two consecutive subsets in this order is not greater than~$2$. \end{proposition} \begin{proof} Suppose $2^{j-1} \leq q<2^{j}$, that is, $q$ is the utility of $j$ in $\preceq_{\bf m}$. By Proposition~\ref{pr2} the utilities of the subsets from $[n+1]\setminus \{j\}$ cover the range of even values from 0 to $2^{n+1}-2$. Suppose $B$ is a subset in $\preceq_{\bf m}$, where $B \neq \emptyset$. If $u(B) \leq 2^{n+1}-2$, then by Proposition~\ref{pr2} there exists a subset $A$ such that $0 < u(B) - u(A) \leq 2$. If $u(B)> 2^{n+1}-2$, then we must have $j \in B$, and since $u(j) < 2^n$, $B'=B \setminus \{j\} \neq \emptyset$. As $j \notin B'$ we have $u(B') \leq 2^{n+1}-2$, and so by Proposition~\ref{pr2} there exists $A' \subseteq [n+1] \setminus \{j\}$ such that $A' \prec_{\bf m} B'$ and $u(B')-u(A') =2$. Then adding $j$ to both subsets we obtain $u(B) - u(A) = 2$ for $A = A' \cup \{j\}$. Therefore, for any nonempty $B$ in $\preceq_{\bf m}$, there exists a subset $A$ such that $0 < u(B) - u(A) \leq 2$, and so for any adjacent pair $(C,D)$ of subsets, we have $0 < u(D) - u(C) \leq 2$.
\end{proof} Let us denote by $\mbox{$\cal S$}_{n+1}$ the class of orders on $X=\{1,2,\ldots,n+1\}$ of type $\preceq_{\bf m}$, where ${\bf m}=(2{\bf\ell}_n,q)$ for some odd $0 < q< 2^{n}$. And let $j$ denote the number such that $2^{j-1} \leq q<2^{j}$. Obviously, $j<n+1$. By $\row u{n+1}$ we will denote the respective utilities of elements of $X$, that is ${\bf m}=(\row u{n+1})$. \begin{proposition} \label{pr4} From the position at which the subset $\{j\}$ appears in the order $\preceq_{\bf m}$ until the position after which all subsets contain $j$, subsets not containing $j$ alternate with those containing $j$, with the difference in utilities for any two consecutive terms being $1$. \end{proposition} \begin{proof} All subsets not containing $j$ have even utility and all those containing $j$ have odd utility. If we consider these two sequences separately, by Proposition~\ref{pr2} the difference of utilities of neighboring terms in each sequence will be equal to 2. Hence they have to alternate in $\preceq_{\bf m}$. \end{proof} \begin{lemma} \label{lemma_abc} Let $\preceq_{\bf m}$ be an order from the class $\mbox{$\cal S$}_{n+1}$ and let $(A,B)$ be a critical pair for $\preceq_{\bf m}$. Then the following conditions are equivalent: \begin{enumerate} \item[\rm (a)] $(A,B)$ is flippable; \item[\rm (b)] either $A$ or $B$ contains $j$ but not both; \item[\rm (c)] $u(B)-u(A)=1$. \end{enumerate} \end{lemma} \begin{proof} (a) $\Longrightarrow $ (b): Suppose $(A,B)$ is flippable. As $(A,B)$ is critical, it is impossible for $A$ and $B$ each to contain $j$ as $A\cap B=\emptyset$. We only have to prove that it is impossible for both of them not to contain $j$. If $j\notin A$ and $j\notin B$, then $u(A)+2=u(B)$. Since the pair is critical, by Proposition~\ref{pr4} both $A$ and $B$ appear in the order earlier than $\{j\}$. Hence $u(A)+2=u(B) < u(j)$. Then, in particular, $u(A)<u(B)<u(n+1)=2^n$, hence neither $A$ nor $B$ contains $n+1$. 
But then for $A'=A\cup \{n+1\}$ and $B'=B\cup \{n+1\}$ we have $u(j)<u(A')<u(B')$. Neither $A'$ nor $B'$ contains $j$, hence they are in the alternating part of the order, and since $u(B')-u(A')=2$, they cannot be consecutive terms. As $(A,B)$ is flippable, this is impossible, which proves that either $A$ or $B$ contains~$j$. (b) $\Longrightarrow $ (c): This follows from Proposition~\ref{pr4}. (c) $\Longrightarrow $ (a): This is true not only for orders from our class, but also for all orders defined by integer utility vectors. Indeed, if $u(B)-u(A)=1$, then for any $C$ with $C\cap (A\cup B)=\emptyset$ we have $u(B\cup C)-u(A\cup C)=1$, and so $A\cup C$ and $B\cup C$ are consecutive. \end{proof} Up to now, the values of $q$ and $j$ have not mattered. Now we will try to maximise the number of flippable pairs in $\preceq_{\bf m}$, so we will need to choose them carefully. It should come as no surprise that the optimal choice of $j$ and $q$ will depend on $n$, so we will talk about $j_n$ and $q_n$ from now on. For the rest of the proof we will set \begin{equation} \label{numberq} j_n=n-1,\qquad q_n=\frac{(-1)^{n+1}+2^n}{3}. \end{equation} An equivalent way of defining $q_n$ would be by the recurrence relation \begin{equation} \label{recrel} q_n=q_{n-1}+2q_{n-2} \end{equation} with the initial values $q_3=3$, $q_4=5$. We also note: \begin{proposition} \label{pr5} $ q_n\equiv 2+(-1)^{n+1} \pmod4. $ \end{proposition} \begin{proof} Easy induction using (\ref{recrel}). \end{proof} Let us now consider a flippable pair $(A,B)$ for $\preceq_{\bf m}$, where ${\bf m}=(2{\bf\ell}_n,q_n)$. Since $j_n=n-1$, we have either $A=A'\cup\{n-1\}$ or $B=B'\cup\{n-1\}$ but not both. In the first case, $(A',B)$ is a pair of nonintersecting subsets from the lexicographic order induced by $2{\bf\ell}_n$ on $ [n+1]\setminus \{n-1\}$ with $u(B)-u(A')=q_n+1$. In the second, $(B',A)$ is a pair of nonintersecting subsets from the same lexicographic order with $u(A)-u(B')=q_n-1$.
As $ [n+1]\setminus \{n-1\}$ can be identified with $[n]$, we let $g_n$ be the number of pairs $(A,B)$ in the lexicographic order $\preceq_{2{\bf \ell}_n}$ on $n$ atoms with $u(B)-u(A)=q_n+1$, and let $h_n$ be the number of pairs $(A,B)$ in the same order with $u(B)-u(A)=q_n-1$. What we have proved is the following: \begin{lemma} \label{g_n+h_n} Let ${\bf m}=(2{\bf\ell}_n,q_n)$. Then the number of flippable pairs in $\preceq_{\bf m}$ is $g_n+h_n$. \end{lemma} This reduces our calculations to a rather understandable lexicographic order $\preceq_{2{\bf \ell}_n}$.\par\medskip For convenience we will denote $q_n^+=q_n+1$ and $q_n^-=q_n-1$. We note that Proposition~\ref{pr5} implies \begin{proposition} \label{pr6} $ q_n^-\equiv 1+(-1)^{n+1} \pmod4$, and $q_n^+\equiv 3+(-1)^{n+1} \pmod4. $ In particular, if $n$ is even, $q^-_n\equiv 0 \pmod4$ and $q^+_n\equiv 2 \pmod4$ and if $n$ is odd, $q^-_n\equiv 2 \pmod4$ and $q^+_n\equiv 0 \pmod4$. \end{proposition} A direct calculation also shows that the following equations hold: \begin{proposition} \label{pr7} \begin{eqnarray} \label{-2-} q_{n+1}^-&=&2q^-_n \qquad\ \ \text{for all odd $n\ge 3$},\\ \label{-3-} q_{n+1}^-&=&2q^-_n+2 \quad \text{for all even $n\ge 4$},\\ \label{-4-} q_{n+1}^+&=&2q^+_n-2 \quad \text{for all odd $n\ge 3$},\\ \label{-5-} q_{n+1}^+&=&2q^+_n \qquad\ \ \text{for all even $n\ge 4$}. \end{eqnarray} \end{proposition} \begin{lemma} \label{ghrecrel} The following recurrence relations hold: for any odd $n\ge 3$ \begin{eqnarray*} g_{n+1}=g_n+h_n,\qquad h_{n+1}=h_n, \end{eqnarray*} and for any even $n\ge 4$ \begin{eqnarray*} g_{n+1}=g_n,\qquad h_{n+1}=g_n+h_n. \end{eqnarray*} \end{lemma} \begin{proof} Firstly we assume that $n$ is odd. Then $n+1$ is even. We know from (\ref{-2-}) that $q_{n+1}^-=2q^-_n$. 
Given any nonintersecting pair $(A, B)$ of subsets in $[n]$, with $A \prec_{2{\bf \ell}_n} B$ and $u(B)-u(A)=q^-_n$, we may shift both subsets to the right, replacing each element $i$ in them with the element $i+1$, to obtain a nonintersecting pair $(\overline{A},\overline{B})$ of subsets in $[n+1]$, where $\overline{A}$ precedes $\overline{B}$ in $\preceq_{2{\bf \ell}_{n+1}}$. This procedure of shifting doubles the difference in utilities, so $u(\overline{B})-u(\overline{A})=2q^{-}_n=q_{n+1}^{-}$. This proves $h_{n+1}\ge h_n$. Moreover, by (\ref{-2-}) and Proposition~\ref{pr6}, $q^-_{n+1}\equiv 0 \pmod4$, hence no nonintersecting pair $(C, D)$ in $\preceq_{2{\bf \ell}_{n+1}}$ with difference of utilities $q^-_{n+1}$ can include~$1$, either in $C$ or in $D$, as $u_1 = 2$. Therefore $C=\overline{A}$ and $D=\overline{B}$ for some nonintersecting pair $(A, B)$ in $[n]$ with $u(B)-u(A)=q^-_n$. This shows $h_{n+1}= h_n$. Let $(A,B)$ be one of the $h_n = h_{n+1}$ nonintersecting pairs of subsets of $[n+1]$ with $u(B)-u(A)=q^-_{n+1}$ as above. As before, since $q^-_{n+1}\equiv 0 \pmod4$, neither of the sets contains~$1$. We can use these pairs to construct the same number of nonintersecting pairs of $\preceq_{2{\bf \ell}_{n+1}}$ with utility difference $q^+_{n+1}=q^-_{n+1}+2$. Indeed, adding~$1$ to $B$ will create a pair $(A,B\cup \{1\})$ with utility difference $q^-_{n+1}+2=q^+_{n+1}$. We can also use (\ref{-4-}) and a shifting technique to create another $g_n$ nonintersecting pairs with utility difference $q^+_{n+1}$. Indeed, if $(A,B)$ is one of the $g_n$ nonintersecting pairs in $\preceq_{2{\bf \ell}_{n}}$ with utility difference $q^+_{n}$, then the pair $(\{1\}\cup \overline{A}, \overline{B})$ will be nonintersecting in $\preceq_{2{\bf \ell}_{n+1}}$ with utility difference $2q^+_n-2=q^+_{n+1}$.
We observe that the $h_{n+1}$ pairs $(C,D)$ constructed in the first method all have $1\in D$ while the $g_n$ pairs $(C,D)$ constructed in the second method all have $1 \in C$, and so the two methods never construct the same pair. Thus $g_{n+1}\ge g_n+h_n$. Now, let $(C,D)$ be any nonintersecting pair in $\preceq_{2{\bf \ell}_{n+1}}$ with utility difference $u(D)-u(C)=q^+_{n+1}$. As $n+1$ is even, Proposition~\ref{pr6} gives $q^+_{n+1}\equiv 2\pmod4$. This implies that either $1\in C$ or $1\in D$. Now as above, we can show that $(C,D)$ can be obtained as $(\{1\}\cup \overline{A},\overline{B})$ or $(A,\{1\}\cup B)$ by the second or the first method, respectively. Thus $g_{n+1}= g_n+h_n$. For even $n$, the statement can be proved similarly, using the other two equations in Proposition~\ref{pr7} and congruences in Proposition~\ref{pr6}. \end{proof} \noindent{\it Proof of Theorem~\ref{Dtheorem} (a).} Let us consider the case $n=3$. We have $q_3=3$, so $q_3^-=2$ and $q_3^+=4$. We have three nonintersecting pairs in $\preceq_{2{\bf \ell}_3}$ with utility difference two, namely $\emptyset \preceq_{2{\bf \ell}_3} \{1\}$, $\{1\}\preceq_{2{\bf \ell}_3} \{2\}$, and $\{1,2\}\preceq_{2{\bf \ell}_3} \{3\}$, and two nonintersecting pairs with utility difference four, namely, $\emptyset \preceq_{2{\bf \ell}_3} \{2\}$ and $\{2\}\preceq_{2{\bf \ell}_3} \{3\}$. Thus $g_3=2$ and $h_3=3$. Alternatively, we may say that $(g_3, h_3)=(F_3, F_4)$. It is also easy to check that $(g_4, h_4)=(5,3)=(F_5, F_4)$. A simple induction argument with the use of Lemma~\ref{ghrecrel} now shows that $(g_n,h_n)=(F_n,F_{n+1})$ for odd $n$ and $(g_n,h_n)=(F_{n+1},F_n)$ for even $n$. By Lemma~\ref{g_n+h_n} we find that the number of flippable pairs of $\preceq_{\bf m}$ is \[ g_n+h_n=F_{n+1}+F_n=F_{n+2}. \] It remains to notice that $\preceq_{\bf m}$ is in ${\mathcal G}_{n+1}$. \par\medskip \noindent{\it Proof of Theorem~\ref{Dtheorem} (b).} Let $\preceq$ be obtained from $\preceq_{\bf m}$ by a flip. 
Assume it was the pair $B \prec_{\bf m} A$ in $\preceq_{\bf m}$ which was flipped, so in $\preceq$ we have $A \prec B$. Assume $\preceq$ is not representable. By Theorem~\ref{KPStheorem} there must exist a trading transform $(\row Ak; \row Bk)$ such that $A_i\prec B_i$ for $i=1,\ldots, k$. For each $i$ we may assume that $A_i \cap B_i = \emptyset$ since otherwise we could remove the intersection for each pair and obtain another trading transform with empty intersections. Since each element of $[n]$ appears in the sequence $\row Ak$ exactly as many times as in $\row Bk$, for the utility function $u$ of $\preceq_{\bf m}$, we must have $\sum_{i=1}^k u(A_i) = \sum_{i=1}^k u(B_i)$. However the only nonintersecting pair $C \prec D$ in $\preceq$ with $u(C) \geq u(D)$ is the flipped pair $A \prec B$, and furthermore we know from Lemma~\ref{lemma_abc} that $u(A) - u(B) =1$ and for every other pair $(A_i,B_i)$ different from $(A,B)$, $u(A_i) - u(B_i) \leq -1$. Hence for $\sum_{i=1}^k u(A_i) = \sum_{i=1}^k u(B_i)$ to hold at least half of the pairs $A_i \prec B_i$ must be the pair $A \prec B$. Without loss of generality assume that $A_i \prec B_i$ is the pair $A \prec B$ for $i = 1,2,\ldots,r$ with $r \geq \frac{k}{2}$. Let $j \in A$ be any element of $A$. Then $j$ appears $r$ times in the sequence $\row Ar$ and does not appear in the sequence $\row Br$. Hence it must appear $r$ times in $(B_{r+1},\ldots, B_k)$, but $r \geq \frac{k}{2}$ and $j$ can appear at most once in each $B_i$ and so we must have $r=\frac{k}{2}$, $j \in B_i$ and $u(A_i)-u(B_i) = -1$ for $i=r+1,\ldots,k$. But $j$ was an arbitrary element of $A$, so $A \subseteq B_i$ for $i=r+1,\ldots,k$. The same argument shows that $B \subseteq A_i$ for $i=r+1,\ldots,k$. But if $A_i = B \cup C_i$ and $B_i = A \cup D_i$ with $C_i$, $D_i$, $A$ and $B$ all disjoint for $i=r+1,\ldots,k$ then \[ u(A_i)-u(B_i) =u(B \cup C_i) - u(A \cup D_i) = u(B) + u(C_i) - u(A) - u(D_i) = -1. \] But $u(A) - u(B) = 1$ and so $u(C_i) = u(D_i)$.
Since $\preceq_{\bf m}$ is a linear order, this implies $C_i = D_i = \emptyset$ and so $B \prec A$ which gives the desired contradiction. \section{The Upper Bound} We now present a result giving an upper bound on the number of flippable pairs in any comparative probability order, representable or not. This will give us an upper bound for $m(n)$ and hence for $M(n)$. The basic result is in the following lemma which estimates the number of flippable pairs from above. \begin{lemma} \label{ub} Let $\preceq$ be a comparative probability order on $n$ atoms. If $s$ is any positive integer such that $\sum_{i=0}^s2^{i}\binom{n}{i}\geq2^{n}-1$, then $|\text{Irr}({\cal C(\preceq)})| \le \sum_{i=0}^s\binom{n}{i}$. \end{lemma} \begin{proof} We first prove that if $A\prec B$ and $E\prec F$ are two distinct flippable pairs then $A\cup B\neq E\cup F$. Let $A\prec B$ be a flippable pair and consider $\preceq$ restricted to the subsets of $D=A\cup B$ and call this order $\preceq^{\prime}$. Clearly $\preceq^{\prime}$ is a comparative probability order: \[ \emptyset=D_{1}\prec^{\prime}D_{2}\prec^{\prime}D_{3}\prec^{\prime}\ldots\prec^{\prime}D_{2^{r}-1}\prec^{\prime}D_{2^{r}}=D \] where $r=|D|$, $D_{i}\subseteq D$ and $D_{i}\prec^{\prime}D_{j}\Longleftrightarrow D_{i}\prec D_{j}$. Because $A$ and $B$ were adjacent in $\preceq$, they will also be adjacent in $\preceq^{\prime}$, and since $A$ and $B$ are complements in $D$, they must be the central pair of $\preceq^{\prime}$, i.e. $A=D_{2^{r-1}}$ and $B=D_{2^{r-1}+1}$. However if $E\prec F$ was also a flippable pair with $E\cup F=D$, then it must also be the central pair of $\preceq^{\prime}$, and hence $A=E$ and $B=F$. We now look at $\preceq$: \[ \emptyset=A_{1}\prec A_{2}\prec A_{3}\prec\ldots\prec A_{2^{n}-1}\prec A_{2^{n}}=[n]. \] Call the gap between two adjacent subsets an {\em adjacency}. There are a total of $2^{n}-1$ adjacencies, one for each $\prec$ sign in the order above. 
Consider a flippable pair $A\prec B$ in $\preceq$ and let $r=|(A\cup B)^c|$, which is the size of the complement of $A\cup B$. From the definition of flippable pairs, every pair of the form $A\cup C\prec B\cup C$, where $C\subseteq (A\cup B)^c$, is adjacent. Let $C_{1},C_{2},\ldots,C_{2^{r}}$ be the subsets of $(A\cup B)^c$. Every pair $A\cup C_{i}\prec B\cup C_{i}$ will take up an adjacency, and hence the flippable pair $A\prec B$ will take up exactly $2^{r}$ adjacencies. Hence we know that for every $r$ at most $\binom{n}{r}$ flippable pairs take up exactly $2^{r}$ adjacencies. This is because if there were more than $\binom{n}{r}$ such flippable pairs then by the pigeonhole principle two of the flippable pairs $A_{1}\prec B_{1}$ and $A_{2}\prec B_{2}$ would have $(A_{1}\cup B_{1})^c=(A_{2}\cup B_{2})^c$ and so would be the same pair. Hence there can be at most $\binom{n}{0}=1$ flippable pair that takes up $1$ adjacency, at most $\binom{n}{1}$ flippable pairs that take up $2^1$ adjacencies, at most $\binom{n}{2}$ flippable pairs that take up $2^2$ adjacencies, etc. But we have only $2^{n}-1$ adjacencies, so if we choose $s$ such that $\sum_{i=0}^s2^{i}\binom{n}{i}\geq2^{n}-1$ then the number of flippable pairs cannot exceed $\sum_{i=0}^s \binom{n}{i}$. \end{proof} While the result is true for any such $s$, to maximize the strength of the upper bound we clearly wish to take the smallest value of $s$ possible. We will further need the binary entropy function ${H(\lambda)=-\lambda\log\lambda-(1-\lambda)\log(1-\lambda)}$ where the logarithms are of base 2. Using known approximations to the binomial coefficient we obtain the following: \begin{corollary} \label{c1} Let $\lambda$ be the solution to the equation $\lambda+H(\lambda)=1$ and let $\lambda<c<\frac{1}{2}$. Then $m(n)\leq2^{H(c)n}$ for sufficiently large $n$. In particular, for any $\lambda<c<\frac{1}{2}$ we have $m(n)=O\left(2^{H(c)n}\right)$.
\end{corollary} \begin{proof} Let $\lambda<c<\frac{1}{2}$ and $\lambda < c' < c$ (e.g. $c' = \frac{\lambda + c}{2}$). Then $c'+H(c')>1$ since $H(x)$ is strictly increasing for $0<x<\frac{1}{2}$, and for sufficiently large $n$ it holds that $\lfloor cn \rfloor > \lceil c'n \rceil$. Hence \[ \underset{i=0}{\overset{\lfloor cn \rfloor}{\sum}}2^{i}\binom{n}{i} > 2^{\lceil c'n \rceil}\binom{n}{\lceil c'n \rceil} \geq 2^{c'n}\frac{\sqrt{\pi}}{2}\frac{1}{\sqrt{2\pi nc'(1-c')}} 2^{H(c')n} > 2^n \] where the second inequality is obtained from \cite[p. 466]{PW} and the last inequality holds for sufficiently large $n$. So by Lemma~\ref{ub}, we have for sufficiently large $n$ \[ m(n)\leq\underset{i=0}{\overset{\lfloor cn \rfloor}{\sum}}\binom{n}{i}\leq c^{-cn}(1-c)^{-(1-c)n}=2^{H(c)n} \] where the second inequality is also obtained from \cite[p. 468]{PW}. \end{proof} \begin{example} \label{upeg} Take $c=0.25$ and consider $s=\lfloor{cn}\rfloor$. It can be checked that for $n \geq 102$ and $c' = c - \frac{1}{102} \leq \frac{s}{n}$ we have $\lceil c'n \rceil \leq s$ and the following inequalities hold \[ \underset{i=0}{\overset{s}{\sum}}2^{i}\binom{n}{i} > 2^{\lceil c'n \rceil}\binom{n}{\lceil c'n \rceil} \geq 2^{c'n}\frac{\sqrt{\pi}}{2}\frac{1}{\sqrt{2\pi n c' (1-c')}}2^{H(c')n} > 2^n. \] Hence by Lemma~\ref{ub} it holds that \[ m(n)\leq\underset{i=0}{\overset{\lfloor cn \rfloor}{\sum}}\binom{n}{i}\leq 2^{H(c)n}. \] Here $2^{H(c)} < 1.7548$. Along with Theorem~\ref{Dtheorem} and standard bounds on the Fibonacci sequence, we have the following bounds for $n \geq 102$: \[ F_{n+1} = \left[ \frac{\phi^{n+1}}{\sqrt{5}}\right] \leq m(n) \leq 1.7548^n, \] where $\phi \approx 1.6180$ is the golden ratio and $[x]$ is the closest integer to $x$. \end{example} Clearly in this example the exponent of 2 in the upper bound of $m(n)$ can be brought arbitrarily close to $H(\lambda)$ for sufficiently large $n$. 
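The threshold constant is easy to compute numerically; a small bisection sketch (an aside, not part of the proof), using the fact that $\lambda+H(\lambda)$ is strictly increasing on $(0,\tfrac12)$:

```python
from math import log2

def H(x):
    """Binary entropy, logarithms base 2."""
    return -x * log2(x) - (1 - x) * log2(1 - x)

# lam + H(lam) = 1 has a unique root in (0, 1/2), since the left-hand side
# increases from 0 to 1.5 on that interval; locate it by bisection.
lo, hi = 1e-12, 0.5
for _ in range(80):
    mid = (lo + hi) / 2
    if mid + H(mid) < 1:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2
print(round(lam, 4), round(2 ** H(lam), 4))   # approximately 0.2271 and 1.7087
```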
As $2^{H(\lambda)} \approx 1.7087$, this gives the rough bounds $1.6180^n < M(n) \leq m(n) < 1.7087^n$ up to constant factors. \begin{corollary} For sufficiently large $n$ \[ 1.6180^n < M(n) < 1.7087^n \] up to constant factors. \end{corollary} \section{Further Research} We would like to know, of course, if Searles' conjecture is true. Or at least, we would like to reduce the gap between the current bounds for $M(n)$ further. There are some questions that, while not directly related to Searles' conjecture, are nevertheless interesting. One of them is the question of connectedness of ${\cal G}_n$. Since the subgraph of representable comparative probability orders is clearly connected, the fact that this question remains open shows that we do not really understand much about nonrepresentable orders. In particular, this question would be answered in the affirmative if we could show that any order is connected to a representable order by a series of flips. A similar question is to find the minimum value of $|{\rm Irr}({\cal C})|$ in ${\cal G}_n$. For representable orders this minimum is $n$, but for nonrepresentable orders we cannot even say if it is possible that the central pair is the only flippable pair in the order. Maclagan also emphasised these questions \cite{DM}.
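The constant $2^{H(\lambda)}$ quoted above can be recomputed by solving $\lambda+H(\lambda)=1$ numerically; since $x+H(x)-1$ is strictly increasing on $(0,\tfrac12)$, negative near $0$ and positive at $\tfrac12$, bisection applies. A sketch (ours, purely illustrative):

```python
from math import log2

def H(x):
    """Binary entropy function."""
    return -x * log2(x) - (1 - x) * log2(1 - x)

def solve_lambda(tol=1e-12):
    """Bisection for the root of x + H(x) = 1 on (0, 1/2);
    x + H(x) - 1 is increasing, negative near 0, positive at 1/2."""
    lo, hi = 1e-9, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid + H(mid) < 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

lam = solve_lambda()
print(lam, 2 ** H(lam))  # lambda ~ 0.2271 and 2^{H(lambda)} ~ 1.7087
```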
https://arxiv.org/abs/1011.2962
Short loop decompositions of surfaces and the geometry of Jacobians
Given a Riemannian surface, we consider a naturally embedded graph which captures part of the topology and geometry of the surface. By studying this graph, we obtain results in three different directions. First, we find bounds on the lengths of homologically independent curves on closed Riemannian surfaces. As a consequence, we show that for any $\lambda \in (0,1)$ there exists a constant $C_\lambda$ such that every closed Riemannian surface of genus $g$ whose area is normalized at $4\pi(g-1)$ has at least $[\lambda g]$ homologically independent loops of length at most $C_\lambda \log(g)$. This result extends Gromov's asymptotic $\log(g)$ bound on the homological systole of genus $g$ surfaces. We construct hyperbolic surfaces showing that our general result is sharp. We also extend the upper bound obtained by P. Buser and P. Sarnak on the minimal norm of nonzero period lattice vectors of Riemann surfaces in their geometric approach of the Schottky problem to almost $g$ homologically independent vectors. Then, we consider the lengths of pants decompositions on complete Riemannian surfaces in connection with Bers' constant and its generalizations. In particular, we show that a complete noncompact Riemannian surface of genus $g$ with $n$ ends and area normalized to $4\pi (g+\frac{n}{2}-1)$ admits a pants decomposition whose total length (sum of the lengths) does not exceed $C_g \, n \log (n+1)$ for some constant $C_g$ depending only on the genus. Finally, we obtain a lower bound on the systolic area of finitely presentable nontrivial groups with no free factor isomorphic to ${\mathbb Z}$ in terms of their first Betti number. The asymptotic behavior of this lower bound is optimal.
\section{Introduction} Consider a surface of genus $g$ with $n$ marked points and consider the different complete Riemannian metrics of finite area one can put on it. These include complete hyperbolic metrics of genus $g$ with $n$ cusps, but also complete Riemannian metrics with $n$ ends which we normalize to the area of their hyperbolic counterparts. We are interested in describing what surfaces with large genus and/or a large number of ends can look like. More specifically, we are interested in the lengths of certain curves that help describe the geometry of the surface. \ In the first part of this article, we generalize the following results regarding the shortest length of a homologically nontrivial loop on a closed Riemannian surface (i.e., the homological systole) and on the Jacobian of a Riemann surface. The homological systole of a closed Riemannian surface of genus $g$ with normalized area, that is, with area $4 \pi (g-1)$, is at most $\sim \log(g)$. This result, due to M.~Gromov \cite[2.C]{gro96}, is optimal. Indeed, there exist families of hyperbolic surfaces, one in each genus, whose homological systoles grow like $\sim \log(g)$. The first of these were constructed by P.~Buser and P.~Sarnak in their seminal article~\cite{BS94}, and there have been other constructions since by R.~Brooks \cite{BR99} and M.~Katz, M.~Schaps and U.~Vishne \cite{KSV07}. By showing that the shortest homologically nontrivial loop on a hyperbolic surface lies in a ``thick'' embedded cylinder, P.~Buser and P.~Sarnak also derived new bounds on the minimal norm of nonzero period lattice vectors of Riemann surfaces. This result paved the way for a geometric approach of the Schottky problem which consists in characterizing Jacobians (or period lattices of Riemann surfaces) among abelian varieties. \ Bounds on the lengths of curves in a homology basis have also been studied by P.~Buser and M.~Sepp\"al\"a~\cite{BuS02,BS03} for closed hyperbolic surfaces.
Note however that without a lower bound on the homological systole, the length of the $(g+1)$-st shortest homologically independent loop cannot be bounded by any function of the genus. Indeed, consider a hyperbolic surface with $g$ very short homologically independent (and thus disjoint) loops. Every loop homologically independent from these short curves must cross one of them, and via the collar lemma, can be made arbitrarily large by pinching our initial $g$ curves. \ As a preliminary result, we obtain new bounds on the lengths of a short homology basis for closed Riemannian surfaces with homological systole bounded from below. \begin{theorem} \label{theo:0} Let $M$ be a closed orientable Riemannian surface of genus~$g$ with homological systole at least~$\ell$ and area equal to $4\pi(g-1)$. Then there exist $2g$ loops $\alpha_{1},\ldots,\alpha_{2g}$ on~$M$ which induce a basis of~$H_{1}(M;{\mathbb Z})$ such that \begin{equation} {\rm length}(\alpha_{k}) \leq C_{0} \, \frac{\log(2g-k+2)}{2g-k+1} \, g, \end{equation} where $C_0=\frac{2^{16}}{\min\{1,\ell\}}$. \end{theorem} On the other hand, without assuming any lower bound on the homological systole, M.~Gromov~\cite[1.2.D']{gro83} proved that on every closed Riemannian surface of genus~$g$ with area normalized to $4 \pi (g-1)$, the length of the $g$ shortest homologically independent loops is at most $\sim \sqrt{g}$. Furthermore, Buser's so-called hairy torus example~\cite{bus81,bus92} with hair tips pairwise glued together shows that this bound is optimal, even for hyperbolic surfaces. \ A natural question is then to find out for how many homologically independent curves Gromov's $\log(g)$ bound holds. The only result in this direction we are aware of is due to B.~Muetzel~\cite{mue}, who recently proved that on every genus~$g$ hyperbolic surface there exist at least two homologically independent loops of lengths at most $\sim \log(g)$.
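To get a feel for the $k$-dependence of the bound in Theorem~\ref{theo:0}, one can simply tabulate the factor $\frac{\log(2g-k+2)}{2g-k+1}\,g$ (the constant $C_0$ is omitted, and we use the natural logarithm; this numerical sketch is ours, purely illustrative): the bound is of order $\log(g)$ both for $k=1$ and at the median index $k=g$, and only becomes linear in $g$ for the last few indices.

```python
from math import log

def bound(g, k):
    """The k-dependent factor log(2g - k + 2)/(2g - k + 1) * g
    from the theorem, with the constant C_0 omitted."""
    return log(2 * g - k + 2) / (2 * g - k + 1) * g

g = 10 ** 6
first = bound(g, 1)       # about log(2g + 1)/2: order log(g)
median = bound(g, g)      # about log(g + 2): still order log(g)
last = bound(g, 2 * g)    # log(2) * g: linear in g
print(first, median, last)
```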
We show that on every closed Riemannian surface of genus~$g$ with normalized area there exist almost $g$ homologically independent loops of lengths at most $\sim \log(g)$. More precisely, we prove the following. \begin{theorem} \label{theo:A} Let $\eta:{\mathbb N} \to {\mathbb N}$ be a function such that $$ \displaystyle \lambda := \sup_g \frac{\eta(g)}{g} < 1. $$ Then there exists a constant~$C_\lambda$ such that for every closed Riemannian surface~$M$ of genus~$g$ there are at least $\eta(g)$ homologically independent loops $\alpha_{1},\ldots,\alpha_{\eta(g)}$ which satisfy \begin{equation*} {\rm length}(\alpha_{i}) \leq C_\lambda \, \frac{\log(g+1)}{\sqrt{g}} \, \sqrt{{\rm area}(M)} \end{equation*} for every $i \in \{1,\ldots,\eta(g) \}$. \end{theorem} Typically, this result applies to $\eta(g)=[\lambda g]$ where $\lambda \in (0,1)$. \ Thus, the previous theorem generalizes Gromov's $\log(g)$ bound on the homological systole, {\it cf.}~\cite[2.C]{gro96}, to the lengths of almost $g$ homologically independent loops. Note that its proof differs from other systolic inequality proofs. Specifically, it directly yields a $\log(g)$ bound on the homological systole without considering the homotopical systole (that is, the shortest length of a homotopically nontrivial loop). Initially, M.~Gromov obtained his bound from a similar bound on the homotopical systole using surgery, {\it cf.}~\cite[2.C]{gro96}. However the original proof of the $\log(g)$ bound on the homotopical systole, {\it cf.}~\cite[6.4.D']{gro83} and \cite{gro96}, as well as the alternative proofs available, {\it cf.}~\cite{bal04,KS05}, do not directly apply to the homological systole. \medskip One can ask how far from being optimal our result on the number of short (homologically independent) loops is. Of course, in light of the Buser-Sarnak examples, one can not hope to do (roughly) better than a logarithmic bound on their lengths, but the question on the number of such curves remains. 
Now, because of Buser's hairy torus example, we know that the $g$ shortest homologically independent loops of a hyperbolic surface of genus~$g$ can grow like $\sim \sqrt{g}$ and that the result of Theorem~\ref{theo:A} cannot be extended to $\eta(g)=g$. Still, one can ask for $g-1$ homologically independent loops of lengths at most $\sim \log(g)$, or for a number of homologically independent loops of lengths at most $\sim \log(g)$ which grows asymptotically like~$g$. Note that the surface constructed from Buser's hairy torus does not provide a counterexample in any of these cases. \medskip Our next theorem shows this is impossible, which proves that the result of Theorem~\ref{theo:A} on the number of homologically independent loops whose lengths satisfy a $\log(g)$ bound is optimal. Before stating this theorem, it is convenient to introduce the following definition. \begin{definition} \label{def:ksys} Given $k \in {\mathbb N}^*$, the \emph{$k$-th homological systole} of a closed Riemannian manifold~$M$, denoted by~${\rm sys}_{k}(M)$, is defined as the smallest real~$L \geq 0$ such that there exist $k$ homologically independent loops on~$M$ of length at most~$L$. \end{definition} With this definition, under the assumption of Theorem~\ref{theo:A}, every closed Riemannian surface of genus~$g$ with area $4 \pi (g-1)$ satisfies $$ {\rm sys}_{\eta(g)}(M) \leq C_\lambda \, \log(g+1) $$ for some constant~$C_\lambda$ depending only on~$\lambda$. Furthermore, still under the assumption of Theorem~\ref{theo:A}, Gromov's sharp estimate, {\it cf.}~\cite[1.2.D']{gro83}, with this notation becomes $$ {\rm sys}_g(M) \leq C \, \sqrt{g} $$ where $C$ is a universal constant. \ We can now state our second main result. \begin{theorem} \label{theo:B} Let $\eta:{\mathbb N} \to {\mathbb N}$ be a function such that $$ \displaystyle \lim_{g \to \infty} \frac{\eta(g)}{g} = 1. 
$$ Then there exists a sequence of genus~$g_{k}$ hyperbolic surfaces~$M_{g_{k}}$ with $g_{k}$ tending to infinity such that $$ \lim_{k \to \infty} \frac{{\rm sys}_{\eta(g_{k})}(M_{g_{k}})}{\log(g_{k})} = \infty. $$ \end{theorem} In their geometric approach of the Schottky problem, P.~Buser and P.~Sarnak~\cite{BS94} also proved that the homological systole of the Jacobian of a Riemann surface~$M$ of genus~$g$ is at most $\sim \sqrt{\log(g)}$ and this bound is optimal. In other words, there is a nonzero lattice vector in~$H^1(M;{\mathbb Z})$ whose $L^2$-norm satisfies a $\sqrt{\log(g)}$ upper bound (see Section~\ref{sec:jacobian} for a precise definition). We extend their result by showing that there exist almost $g$ linearly independent lattice vectors whose norms satisfy a similar upper bound. More precisely, we have the following. \begin{corollary} \label{coro:C} Let $\eta:{\mathbb N} \to {\mathbb N}$ be a function such that $$ \displaystyle \lambda := \sup_g \frac{\eta(g)}{g} < 1. $$ Then there exists a constant~$C_\lambda$ such that for every closed Riemann surface~$M$ of genus~$g$ there are at least $\eta(g)$ linearly independent lattice vectors $\Omega_{1},\ldots,\Omega_{\eta(g)} \in H^1(M;{\mathbb Z})$ which satisfy \begin{equation} |\Omega_i|_{L^2}^2 \leq C_\lambda \, \log(g+1) \end{equation} for~every $i \in \{1,\ldots,\eta(g)\}$. \end{corollary} Contrary to Theorem~\ref{theo:A}, we do not know whether the result of Corollary~\ref{coro:C} is sharp regarding the number of independent lattice vectors of norm at most $\sim \sqrt{\log(g)}$. \medskip To prove Theorem~\ref{theo:0} we consider naturally embedded graphs which capture a part of the topology and geometry of the surface, and study these graphs carefully. Then, we derive Theorem~\ref{theo:A} in the absence of lower bound on the homological systole. 
We actually present the proof of Theorem~\ref{theo:A}, first in the hyperbolic case (restricting ourselves to hyperbolic metrics in our constructions), then in the Riemannian case (a more general framework which allows us to make use of more flexible constructions). To derive Corollary~\ref{coro:C}, we show that on closed hyperbolic surfaces the loops given by Theorem~\ref{theo:A} have embedded collars of uniform width. To prove Theorem~\ref{theo:B}, we adapt known constructions of surfaces with large homological systole to obtain closed hyperbolic surfaces of large genus which asymptotically approach the limit case.\\ In the second part of this article, we study an invariant related to pants decompositions of surfaces. A pants decomposition is a collection of nontrivial disjoint simple loops on a surface so that the complementary region is a set of three-holed spheres (so-called pants). L.~Bers~\cite{bers1,bers2} showed that on a hyperbolic surface one can always find such a collection with all curves of length bounded by a constant which only depends on the genus and number of cusps. These constants are generally called Bers' constants. This result was quantified and generalized to closed Riemannian surfaces of genus~$g$ by P.~Buser \cite{bus81,bus92}, and P.~Buser and M.~Sepp\"al\"a \cite{BS92}, who showed that the constants behave at least like $\sim \sqrt{g}$ and at most like $\sim g$. The correct behavior remains unknown, except in the cases of punctured spheres and hyperelliptic surfaces \cite{BP09}. \medskip We apply the same graph embedding technique as developed in the first part to obtain results on the minimal total length of such pants decompositions of surfaces. For a complete Riemannian surface of genus $g$ with $n$ ends whose area is normalized at $4\pi(g+\frac{n}{2}-1)$, P.~Buser's bounds mentioned before imply that one can always find a pants decomposition whose sum of lengths is bounded from above by $\sim (g+n)^2$. Our main result is the following.
\begin{theorem} Let $M$ be a complete noncompact Riemannian surface of genus $g$ with $n$ ends whose area is equal to $4\pi(g+\frac{n}{2}-1)$. Then $M$ admits a pants decomposition whose sum of the lengths is bounded from above by $$ C_g\, n \log (n+1), $$ where $C_g$ is an explicit genus-dependent constant. \end{theorem} In the case of a punctured sphere, it suffices to study the embedded graph mentioned above. In the more general case, this requires a bound on the usual Bers' constant which relies on F.~Balacheff and S.~Sabourau's bounds on the diastole, {\it cf.}~\cite{BS10}. Specifically, in Proposition \ref{prop:bers} we show that the diastole is an upper bound on lengths of pants decompositions, and thus, if one is not concerned with the multiplicative constants, the result of~\cite{BS10} provides an optimal square-root upper bound on Bers' constants for puncture growth, and an alternative proof of Buser's linear bounds for genus growth. \medskip As a corollary to the above, we show that a hyperelliptic surface of genus $g$ admits a pants decomposition of total length at most $\sim g\log g$. This is in strong contrast with the general case according to a result of L.~Guth, H.~Parlier and R.~Young~\cite{GPY}: ``random'' hyperbolic surfaces have all their pants decompositions of total length at least $\sim g^{\frac{7}{6}-\varepsilon}$ for any $\varepsilon>0$. \\ In the last part of this article, we consider the systolic area of finitely presentable groups, {\it cf.}~Definition~\ref{def:SG}. From~\cite[6.7.A]{gro83}, the systolic area of an unfree group is bounded away from zero, see also \cite{KRS06}, \cite{RS08}, \cite{KKSSW} and \cite{BB} for simpler proofs and extensions. The converse is also true: a finitely presentable group with positive systolic area is not free, {\it cf.}~\cite{KRS06}.
The systolic finiteness result of~\cite{RS08} regarding finitely presentable groups and their structure provides a lower bound on the systolic area of these groups in terms of their first Betti number when the groups have no free factor isomorphic to~${\mathbb Z}$. In particular, the systolic area of such a group $G$ goes to infinity with its first Betti number~$b_1(G)$. Specifically, $$ {\mathfrak S}(G) \geq C \, (\log(b_1(G)+1))^{\frac{1}{3}} $$ where ${\mathfrak S}(G)$ is the systolic area of~$G$ and $C$ is some positive universal constant. \medskip In this article, we improve this lower bound. \begin{theorem} \label{theo:G} Let $G$ be a finitely presentable nontrivial group with no free factor isomorphic to~${\mathbb Z}$. Then \begin{equation} \label{eq:GG} {\mathfrak S}(G)\geq C \, \frac{b_1(G)+1}{(\log(b_1(G)+2))^2} \end{equation} for some positive universal constant $C$. \end{theorem} In Example \ref{ex:GG}, we show that the order of the bound in inequality~\eqref{eq:GG} cannot be improved. The proof of Theorem~\ref{theo:G} follows the proof of Theorem~\ref{theo:0} in this somewhat different context. \section{Independent loops on surfaces with homological systole bounded from below} \label{sec:ell} Here we show the following theorem which allows us to bound the lengths of an integer homology basis in terms of the genus and the homological systole of the surface. \begin{theorem} \label{theo:ell} Let $M$ be a closed orientable Riemannian surface of genus~$g$ with homological systole at least~$\ell$ and area equal to $4\pi(g-1)$. 
Then there exist $2g$ loops $\alpha_{1},\ldots,\alpha_{2g}$ on~$M$ which induce a basis of~$H_{1}(M;{\mathbb Z})$ such that \begin{equation} \label{eq:gen} {\rm length}(\alpha_{k}) \leq C_{0} \, \frac{\log(2g-k+2)}{2g-k+1} \, g, \end{equation} where $C_0=\frac{2^{16}}{\min\{1,\ell\}}$.\\ \noindent In particular: \begin{enumerate} \item the lengths of the $\alpha_{i}$ are bounded by~$C_{0} \, g$; \label{item1} \item the median length of the $\alpha_{i}$ is bounded by~$C_{0} \, \log(g+1)$. \label{item2} \end{enumerate} \end{theorem} \begin{remark} The linear upper bound in the genus of item~\eqref{item1} already appeared in~\cite{BS03} for hyperbolic surfaces, where the authors obtained a similar bound for the length of so-called canonical homology basis. They also constructed a genus~$g$ hyperbolic surface all of whose homology bases have a loop of length at least~$C \, g$ for some positive constant~$C$. This shows that the linear upper bound in~\eqref{item1} is roughly optimal. However, the general bound~\eqref{eq:gen} on the length of the loops of a short homology basis, and in particular the item~\eqref{item2}, cannot be derived from the arguments of~\cite{BS03} even in the hyperbolic case. The bound obtained in~\eqref{item2} is also roughly optimal. Indeed, the Buser-Sarnak surfaces~\cite{BS94} have their homological systole greater or equal to $\frac{4}{3} \log(g)$ minus a constant. \end{remark} \begin{remark} In a different direction, the homological systole of a ``typical" hyperbolic surface, where we take R.~Brooks and E.~Makover's definition of a random surface \cite{BM04}, is bounded away from zero. Therefore, the conclusion of the theorem holds for these typical hyperbolic surfaces with $\ell$ constant. However, their diameter is bounded by $C \log(g)$ for some constant~$C$. This shows that there exists a homology basis on these surfaces formed of loops of length at most $2C \log(g)$ (see Remark \ref{rem:diameter}). 
Thus, the upper bound in~\eqref{item1} is not optimal for these surfaces. \end{remark} \begin{remark}\label{rem:ell} A non-orientable version of this theorem also holds. Recall that a closed non-orientable surface of genus $g$ is a surface homeomorphic to the connected sum of $g$ copies of the projective plane. Let $M$ be a closed non-orientable Riemannian surface of genus~$g$ with homological systole at least~$\ell$. Then there exist $g$ loops $\alpha_{1},\ldots,\alpha_{g}$ on~$M$ which induce a basis of~$H_{1}(M;{\mathbb Z})$ such that $$ {\rm length}(\alpha_{k}) \leq C_{0} \, \frac{\log(g-k+2)}{g-k+1} \, g, $$ where $C_0=\frac{C}{\min\{1,\ell\}}$ for some positive constant $C$. \end{remark} Let us introduce some definitions and results, which will be used several times in this article. \begin{definition} \label{def:min} Let $(\gamma_i)_i$ be a collection of loops on a compact Riemannian surface~$M$ of genus~$g$ (possibly with boundary components). The loops~$(\gamma_i)_i$ form a \emph{minimal homology basis} of~$M$ if \begin{enumerate} \item their homology classes form a basis of $H_1(M;{\mathbb Z})$; \item for every $k=1,\ldots,2g$ and every collection of $k$ homologically independent loops~$\gamma'_1,\ldots,\gamma'_k$, there exist $k$ loops $\gamma_{i_1},\ldots,\gamma_{i_k}$ among the~$(\gamma_i)_i$, each of length at most $$ \sup_{1 \leq j \leq k} {\rm length}(\gamma'_j). $$ \label{2} \end{enumerate} In this definition, one could replace the condition~\eqref{2} with the following condition: \begin{enumerate} \item[(2')] for every collection of loops~$(\gamma'_i)_i$ whose homology classes form a basis of $H_1(M;{\mathbb Z})$, we have $$ \sum_{i=1}^{2g} {\rm length}(\gamma_i) \leq \sum_{i=1}^{2g} {\rm length}(\gamma'_i). $$ \end{enumerate} \end{definition} \begin{lemma} \label{lem:cut} Let $M$ be a closed orientable Riemannian surface.
Let $\alpha$ be a homologically trivial loop such that the distance between any pair of points of~$\alpha$ is equal to the length of the shortest arc of~$\alpha$ between these two points. Then no curve of a minimal homology basis of~$M$ crosses~$\alpha$. \end{lemma} \begin{proof} Let~$\gamma$ be a loop of~$M$ which crosses~$\alpha$. Then $\gamma$ must cross~$\alpha$ at least twice. Consider an arc $c$ of $\gamma$ that leaves from $\alpha$ and then returns. Let $d$ be the shortest arc of~$\alpha$ connecting the two endpoints of~$c$. By assumption, $d$ is no longer than $c$ and $\gamma \setminus c$. Thus, both $c \cup d$ and $(\gamma \setminus c) \cup d$ are homotopic to loops which are shorter than~$\gamma$. These loops are obtained by smoothing out $c \cup d$ and $(\gamma \setminus c) \cup d$. Since $\gamma$ is homologous to the sum of these loops, with proper orientations, the curve~$\gamma$ does not lie in a minimal homology basis. \end{proof} We continue with some more notations and definitions. We denote by $\ell_{M}(\gamma)$ the infimal length of the loops of~$M$ freely homotopic to~$\gamma$. \begin{definition} \label{def:mls} Consider two metrics on the same surface and denote by $M$ and $M'$ the two metric spaces. The marked length spectrum of $M$ is said to be greater or equal to the marked length spectrum of~$M'$ if \begin{equation} \label{eq:mls} \ell_M(\gamma) \geq \ell_{M'}(\gamma) \end{equation} for every loop~$\gamma$ on the surface. Similarly, the two marked length spectra of $M$ and $M'$ are equal if equality holds in~\eqref{eq:mls} for every loop~$\gamma$ on the surface. \end{definition} \begin{definition} \label{def:sys} Let $M$ be a compact nonsimply connected Riemannian manifold. The \emph{homotopical systole} of~$M$, denoted by ${\rm sys}_{\pi}(M)$, is the length of a shortest noncontractible loop of~$M$. A homotopical systolic loop of~$M$ is a noncontractible loop of~$M$ of least length.
Similarly, the \emph{homological systole} of~$M$, denoted by ${\rm sys}_{H}(M)$, is the length of a shortest homologically nontrivial loop of~$M$. A homological systolic loop of~$M$ is a homologically nontrivial loop of~$M$ of least length. \end{definition} Note that ${\rm sys}_H(M) = {\rm sys}_1(M)$, {\it cf.}~Definition~\ref{def:ksys}. \ In order to prove Theorem \ref{theo:ell}, we will need the following result from~\cite[5.6.C'']{gro83}. \begin{lemma} \label{lem:regmet} Let $M_{0}$ be a closed Riemannian surface and $0<R \leq \frac{1}{2} \, {\rm sys}_{\pi}(M_{0})$. Then there exists a closed Riemannian surface $M$ conformal to $M_0$ such that \begin{enumerate} \item $M$ and $M_{0}$ have the same area; \item the marked length spectrum of $M$ is greater or equal to that of $M_0$; \item the area of every disk of radius $R$ in $M$ is greater or equal to~$R^{2}/2$. \end{enumerate} \end{lemma} We can now proceed to the proof of Theorem~\ref{theo:ell}. \begin{proof}[Proof of Theorem~\ref{theo:ell}] \mbox{ } \medskip Without loss of generality, we can suppose that $\ell$ is at most~$1$. \medskip \noindent {\it Step 1.} Let us show first that we can assume that the homotopical systole of~$M$, and not merely the homological systole, is bounded from below by~$\ell$. Suppose that there exists a homotopical systolic loop~$\alpha$ of~$M$ of length less than~$\ell$ which is homologically trivial. Clearly, the distance between any pair of points of~$\alpha$ is equal to the length of the shortest arc of~$\alpha$ connecting this pair of points. (Note that both the homotopical and homological systolic loops have this property for closed surfaces.) We split the surface along the simple loop~$\alpha$ and attach two round hemispheres along the boundary components of the connected components of the surface. We obtain two closed surfaces $M'$ and~$M''$ of genus less than~$g$. The sum of their areas is at most ${\rm area}(M)+\frac{\ell^2}{\pi}$.
Collapsing $\alpha$ to a point induces an isomorphism $H_1(M;{\mathbb Z}) \to H_1(M' \vee M'';{\mathbb Z}) \simeq H_1(M' \coprod M'';{\mathbb Z})$. From Lemma~\ref{lem:cut}, the loops of a minimal homology basis of~$M$ do not cross~$\alpha$. Therefore, they also lie in the disjoint union $M' \coprod M''$. Conversely, every loop of $M' \coprod M''$ can be deformed without increasing its length into a loop which does not go through the two round hemispheres (that we previously attached), and therefore also lies in~$M$. This shows that the minimal homology bases of~$M$ and of~$M' \coprod M''$ have the same lengths. We repeat this splitting process on the new surfaces as many times as possible. After at most~$g$ steps ({\it i.e.}, $g$ cuts), this process stops. By construction, we obtain a closed Riemannian surface~$N$ with several connected components such that \begin{enumerate} \item the homotopical systole of~$N$ is at least~$\ell$; \item $H_1(N;{\mathbb Z})$ is naturally isomorphic to $H_1(M;{\mathbb Z})$; \item the minimal homology bases of~$M$ and of~$N$ have the same length. \end{enumerate} Furthermore, we have \begin{eqnarray*} {\rm area}(N) & \leq & {\rm area}(M) + g \, \frac{\ell^2}{\pi} \\ & \leq & 2^4 (1+ \ell^2) \, g. \end{eqnarray*} Thus, it is enough to show that the conclusion of Theorem~\ref{theo:ell} holds for every (not necessarily connected) closed Riemannian surface~$M$ of genus~$g$ with homotopical systole at least~$\ell$ and area at most $2^4 (1+ \ell^2) \, g$. \\ \noindent{\it Step 2.} Assume now that $M$ is such a surface. Fix $r_0 < \ell/8$, say $r_0=\ell/16$. By Lemma \ref{lem:regmet}, we can suppose that any disk of radius~$r_{0}$ of~$M$ has area at least~$r_{0}^2/2$. Consider a maximal system of disjoint disks $\{D_i\}_{i \in I}$ of radius~$r_{0}$. Since each disk~$D_i$ has area at least $r_{0}^2/2$, the system admits at most $2\, {\rm area}(M)/r_{0}^2$ disks.
That is, \begin{eqnarray} |I| & \leq & \frac{2^5}{r_{0}^2} \, (1+ \ell^2) \, g \nonumber \\ & \leq & 2^{13} \, (1 + \frac{1}{\ell^2}) \, g. \label{eq:I} \end{eqnarray} As this system is maximal, the disks $2D_{i}$ of radius~$2r_{0}$ with the same centers~$x_{i}$ as~$D_{i}$ cover~$M$. \\ \noindent Let $2D_{i}+\varepsilon$ be the disks centered at~$x_{i}$ with radius~$2r_{0}+\varepsilon$, where $\varepsilon>0$ satisfies $4r_{0}+2\varepsilon<\ell/2 \leq {\rm sys}_{\pi}(M)/2$. Consider the $1$-skeleton~$\Gamma$ of the nerve of the covering of~$M$ by the disks~$2D_{i}+\varepsilon$. In other words, $\Gamma$ is a graph with vertices~$\{v_{i}\}_{i \in I}$ corresponding to the centers~$\{x_{i}\}_{i \in I}$, where two vertices $v_{i}$ and~$v_{j}$ are connected by an edge if and only if $2D_{i}+\varepsilon$ and~$2D_{j}+\varepsilon$ intersect each other. Denote by $v$, $e$ and $b$ its number of vertices, its number of edges and its first Betti number. We have the relation $b \geq e-v+1$ (with equality if the graph is connected). \\ \noindent Endow the graph $\Gamma$ with the metric such that each edge has length $\ell/2$. Consider the map \mbox{$\varphi:\Gamma \to M$} which takes each edge with endpoints $v_{i}$ and~$v_{j}$ to a geodesic segment connecting $x_{i}$ and~$x_{j}$. This segment is not necessarily unique, but we can choose one. Since the points~$x_{i}$ and~$x_{j}$ are at distance at most $4r_{0}+2\varepsilon<\ell/2$ from each other, the map $\varphi$ is distance nonincreasing. \begin{lemma} \label{lem:epi} The map $\varphi$ induces an epimorphism $\pi_{1}(\Gamma) \to \pi_{1}(M)$. In particular, it induces an epimorphism in integral homology. \end{lemma} \begin{proof}[Proof of Lemma \ref{lem:epi}] Let $c$ be a geodesic loop of~$M$. Divide $c$ into segments $c_{1}, \ldots, c_{m}$ of length less than~$\varepsilon$. Denote by $p_{k}$ and~$p_{k+1}$ the endpoints of~$c_{k}$ with $p_{m+1}=p_{1}$.
Since the disks~$2D_{i}$ cover~$M$, every point~$p_{k}$ is at distance at most~$2r_{0}$ from a point~$q_{k}$ among the centers~$x_{i}$. Consider the loop $$ \alpha_k= c_k \cup [p_{k+1},q_{k+1}] \cup [q_{k+1},q_k] \cup [q_k,p_k], $$ where $[x,y]$ denotes a segment joining $x$ to~$y$. Then $$ {\rm length}(\alpha_k) \leq 2 \, (4 r_{0} + \varepsilon) < {\rm sys}_{\pi}(M). $$ Thus, the loops $\alpha_k$ are contractible. Therefore, the loop~$c$ is homotopic to a piecewise geodesic loop~$c'= (q_1, \dots, q_{m})$. Since the distance between the centers $q_{k}=x_{i_{k}}$ and $q_{k+1}=x_{i_{k+1}}$ is at most $4r_{0}+\varepsilon$, the vertices $v_{i_{k}}$ and~$v_{i_{k+1}}$ are connected by an edge in~$\Gamma$. The union of these edges forms a loop~$(v_{i_{1}}, \ldots, v_{i_{m}})$ in~$\Gamma$ whose image by~$\varphi$ agrees with~$c'$ and is homotopic to the initial loop~$c$. Hence the result. \end{proof} \noindent Let $\Bbbk$ be a field. Consider a subgraph~$\Gamma_{1}$ of~$\Gamma$ with a minimal number of edges such that the restriction of~$\varphi$ to~$\Gamma_{1}$ still induces an epimorphism in homology with coefficients in~$\Bbbk$. The graph~$\Gamma_{1}$ inherits the simplicial structure of~$\Gamma$. \begin{lemma} \label{lem:isom} The epimorphism $\varphi_{*}:H_{1}(\Gamma_{1};\Bbbk) \to H_{1}(M;\Bbbk)$ induced by~$\varphi$ is an isomorphism. In particular, the first Betti number~$b_{1}$ of~$\Gamma_{1}$ is equal to~$2g$. \end{lemma} \forget \begin{proof}[Proof of Lemma \ref{lem:isom}] Suppose the contrary and remove an edge from a simple cycle~$c$ representing a nontrivial element of the kernel of~$\varphi_{*}$. The resulting graph~$\Gamma'_{1}$ has fewer edges than~$\Gamma_{1}$. Adding a suitable integral multiple of~$c$ to any cycle~$\sigma$ of~$\Gamma_{1}$ yields a new cycle~$\sigma'$ lying in~$\Gamma'_{1}$. The cycles $\sigma$ and~$\sigma'$ are sent to the same homology class by~$\varphi$. 
Thus, the restriction of~$\varphi$ to~$\Gamma'_{1}$ still induces an epimorphism in integral homology, which is absurd by definition of~$\Gamma_{1}$. \end{proof} \forgotten \begin{proof}[Proof of Lemma \ref{lem:isom}] The graph~$\Gamma_1$ is homotopy equivalent to a union of bouquets of circles $c_1,\cdots,c_m$ (simply identify a maximal tree in each connected component of~$\Gamma_1$ to a point). The homology classes $[c_1],\cdots,[c_m]$ of these circles form a basis in homology with coefficients in~$\Bbbk$. If the image by~$\varphi_*$ of one of these homology classes~$[c_{i_0}]$ lies in the vector space spanned by the images of the others, we remove the edge of~$\Gamma_1$ corresponding to the circle~$c_{i_0}$. The resulting graph~$\Gamma'_{1}$ has fewer edges than~$\Gamma_1$ and the restriction of~$\varphi$ to~$\Gamma'_{1}$ still induces an epimorphism in homology with coefficients in~$\Bbbk$, which is absurd by definition of~$\Gamma_{1}$. \end{proof} \noindent At least $b-b_{1}$ edges were removed from $\Gamma$ to obtain~$\Gamma_{1}$. As the length of every edge of~$\Gamma$ is equal to~$\ell/2$, we have \begin{eqnarray} {\rm length}(\Gamma_{1}) & \leq & {\rm length}(\Gamma) - (b-b_{1}) \, \frac{\ell}{2} \nonumber \\ & \leq & (e-b+2g) \, \frac{\ell}{2} \nonumber \\ & \leq & (v-1+2g) \, \frac{\ell}{2}. \label{eq:l1} \end{eqnarray} \noindent Let us construct by induction $n$ graphs~$\Gamma_{n} \subset \cdots \subset \Gamma_{1}$ and $n$ (simple) loops~$\displaystyle \{ \gamma_{k} \}_{k=1}^{n}$ with $n \leq 2g$ such that \begin{enumerate} \item $\gamma_{k}$ is a systolic loop of~$\Gamma_{k}$; \item the loops~$(\gamma_{k})_{k=1}^{n-1}$ induce a basis of a supplementary subspace of~$H_{1}(\Gamma_{n};\Bbbk)$ in~$H_{1}(\Gamma_{1};\Bbbk)$. \end{enumerate} For $n=1$, the result clearly holds. \\ \noindent Suppose we have constructed $n$ graphs~$\{\Gamma_{k}\}_{k=1}^{n}$ and $n$ loops~$\{\gamma_{k}\}_{k=1}^{n}$ satisfying these properties.
Remove an edge of~$\Gamma_{n}$ through which $\gamma_{n}$ passes. We obtain a graph~$\Gamma_{n+1} \subset \Gamma_{n}$ such that $H_{1}(\Gamma_{n};\Bbbk)$ decomposes into the direct sum of $H_{1}(\Gamma_{n+1};\Bbbk)$ and~$\Bbbk \, [\gamma_{n}]$, where $[\gamma_{n}]$ is the homology class of~$\gamma_{n}$ (recall that $\gamma_{n}$ generates a nontrivial class in homology). That is, \begin{equation} \label{eq:decomp} H_{1}(\Gamma_{n};\Bbbk) = H_{1}(\Gamma_{n+1};\Bbbk) \oplus \Bbbk \, [\gamma_{n}]. \end{equation} Let $\gamma_{n+1}$ be a systolic loop of~$\Gamma_{n+1}$. The condition on~$\{ \gamma_{k} \}_{k=1}^{n}$ along with the decomposition~\eqref{eq:decomp} shows that the loops~$(\gamma_{k})_{k=1}^{n}$ induce a basis of a supplementary subspace of~$H_{1}(\Gamma_{n+1};\Bbbk)$ in~$H_{1}(\Gamma_{1};\Bbbk)$. This concludes the construction by induction. \\ \noindent The graph~$\Gamma_{k}$ has $k-1$ fewer edges than~$\Gamma_{1}$. Hence, from~\eqref{eq:l1}, we deduce that \begin{eqnarray} {\rm length}(\Gamma_{k}) & \leq & {\rm length}(\Gamma_{1}) - (k-1) \, \frac{\ell}{2} \nonumber \\ & \leq & (v+2g - k) \, \frac{\ell}{2}. \label{eq:vg} \end{eqnarray} \noindent By construction, the first Betti number~$b_{k}$ of~$\Gamma_{k}$ is equal to $b_{1}-k+1=2g-k+1$. Thus, Bollob\'as-Szemer\'edi-Thomason's systolic inequality on graphs~\cite{BT97,BS02} along with the bounds \eqref{eq:vg} and~\eqref{eq:I} implies that \begin{eqnarray*} {\rm length}(\gamma_{k}) = {\rm sys}(\Gamma_{k}) & \leq & 4 \, \frac{\log(1+b_{k})}{b_{k}} \, {\rm length}(\Gamma_{k}) \nonumber \\ & \leq & 2^{15} \, \left( \ell + \frac{1}{\ell} \right) \, \frac{\log(2g-k+2)}{2g-k+1} \, g. \end{eqnarray*} Hence, $$ {\rm length}(\gamma_{k}) \leq C_{0} \, \frac{\log(2g-k+2)}{2g-k+1} \, g, $$ where $\displaystyle C_{0} = \frac{2^{16}}{\min\{1,\ell\}}$ (recall we can assume that $\ell \leq 1$). 
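\\ \noindent For the record, this value of~$C_{0}$ simply absorbs the constants of the previous estimate: for $0 < \ell \leq 1$, $$ 2^{15} \left( \ell + \frac{1}{\ell} \right) \leq 2^{15} \cdot \frac{2}{\ell} = \frac{2^{16}}{\ell} = \frac{2^{16}}{\min\{1,\ell\}}. $$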
Since the map $\varphi$ is distance nonincreasing and induces an isomorphism in homology, the images of the loops~$\gamma_{k}$ by~$\varphi$ yield the desired curves~$\alpha_k$ on~$M$. \\ Now, recall that $$ {\rm length}(\gamma_{1}) \leq \cdots \leq {\rm length}(\gamma_{2g}). $$ We deduce that the curves~$\alpha_k$ are of length at most $$ {\rm length}(\gamma_{2g}) \leq C_0 \, g $$ and that the median length is bounded from above by $$ {\rm length}(\gamma_{g+1}) \leq C_0 \, \log(g+1). $$ This concludes the proof of Theorem \ref{theo:ell}. \end{proof} \forget \begin{remark} \label{rem:C0} When the metric is hyperbolic, the disks~$D_i$ are embedded and have area \mbox{$2 \pi \, (\cosh(r_0)-1)$}. In this case, we can take $C_0= \frac{8}{s_0} \, (\ell+\frac{1}{\ell})$, where $s_0 = \cosh(r_0)-1$. \end{remark} \bigskip We conclude this section with some consequences concerning hyperbolic surfaces. \\ The existence of hyperbolic surfaces whose homological systole satisfies a near-optimal asymptotic behaviour in~$\log(g)$ follows from the arithmetic construction of Buser-Sarnak~\cite{BS94}, extended in Katz-Schaps-Vishne~\cite{KSV07}. More specifically, these constructions yield hyperbolic surfaces of genus~$g$ (for certain values of the genus) whose homological systole is bounded from below by $\frac{4}{3} \log(g)-c$, where $c$ is a constant. \\ Keeping in mind this result, the previous theorem leads to the following bound on the length of short homology bases on these hyperbolic surfaces with large homological systole. \begin{corollary} \label{prop:BS} Given a real $c$, every closed hyperbolic orientable surface of genus~$g$ whose homological systole is bounded from below by $\frac{4}{3} \log(g)-c$ admits a homology basis consisting of loops of length at most $$ C_{1} \, g^{\frac{5}{6}} \, \log g, $$ where $C_1$ is a constant depending only on~$c$. \end{corollary} \begin{proof} The estimate follows from the bound on the length of~$\gamma_{2g}$ in Theorem~\ref{theo:ell}.\eqref{item1}.
Simply use the formula for~$C_0$ given in Remark~\ref{rem:C0} and plug in $\ell = \frac{4}{3} \log(g)-c$ with $r_0$ arbitrarily close to~$\ell/8$. This yields $C_0 \simeq \frac{\log(g)}{g^{1/6}}$. \end{proof} In the hyperbolic case, one can also show that the minimum degree of the graph~$\Gamma$ introduced in the proof of Theorem~\ref{theo:ell} is at least three and that its maximum degree is bounded from above by a constant depending only on~$\ell$ (by the Bishop-Gromov inequality). Therefore, using the result of~\cite{BV96} in the proof of Theorem~\ref{theo:ell} instead of the systolic inequality on graphs of~\cite{BT97,BS02}, one can prove the following result on the existence of short homologically independent disjoint cycles. \begin{theorem} On every closed hyperbolic orientable surface of genus~$g$ with homological systole at least~$\ell$, there exist at least $\displaystyle \left[C_{2} \, \frac{g}{\log(g+1)} \right]+1$ homologically independent disjoint loops~$\alpha_{k}$ such that $$ {\rm length}(\alpha_{k}) \leq C_{3} \, \log (g+1), $$ where the functions $C_2=C_2(\ell)$ and $C_3=C_3(\ell)$ depend only on~$\ell$. \end{theorem} \forgotten \section{Short loops and the Jacobian of hyperbolic surfaces} This section is dedicated to generalizing the results of P. Buser and P. Sarnak~\cite{BS94} in the following way. We begin by extending the $\log(g)$ upper bound on the length of the shortest homologically non-trivial loop to almost $g$ loops, and then use these bounds and the methods developed in~\cite{BS94} to obtain information on the geometry of Jacobians. \subsection{Short homologically independent loops on hyperbolic surfaces} We begin by showing that one can extend the usual $\log(g)$ bound on the homological systole of a hyperbolic surface to a set of almost $g$ homologically independent loops.
In the case where the homological systole is bounded below by a constant, this is a consequence of Theorem~\ref{theo:ell}, so here we show how to deal with surfaces with small curves. More precisely, our result is the following: \begin{theorem} \label{theo:lng} Let $\eta:{\mathbb N} \to {\mathbb N}$ be a function such that $$ \displaystyle \lambda := \sup_g \frac{\eta(g)}{g} < 1. $$ Then there exists a constant~$C_\lambda=\frac{2^{17}}{1-\lambda}$ such that for every closed hyperbolic orientable surface~$M$ of genus~$g$ there are at least $\eta(g)$ homologically independent loops $\alpha_{1},\ldots,\alpha_{\eta(g)}$ which satisfy \begin{equation} \label{eq:upper1} {\rm length}(\alpha_{i}) \leq C_\lambda \, \log(g+1) \end{equation} for~every $i \in \{1,\ldots,\eta(g)\}$ and admit a collar of width at least $$ w_0={1\over 2}{\,\rm arcsinh}(1) $$ around each of them. \end{theorem} The previous theorem applies with $\eta(g)=[\lambda g]$, where $\lambda \in (0,1)$. \begin{remark}\label{rem:ln} The non-orientable version of this theorem is the following: if $\eta:{\mathbb N} \to {\mathbb N}$ is a function such that $$ \displaystyle \lambda := \sup_g \frac{\eta(g)}{g} < \frac{1}{2}, $$ then there exists a constant~$C$ such that for every closed hyperbolic non-orientable surface~$M$ of genus~$g$ (homeomorphic to the connected sum of $g$ copies of the projective plane) there are at least $\eta(g)$ homologically independent loops $\alpha_{1},\ldots,\alpha_{\eta(g)}$ which satisfy $$ {\rm length}(\alpha_{i}) \leq \frac{C}{1-2\lambda} \, \log(g+1) $$ for~every $i \in \{1,\ldots,\eta(g)\}$. \end{remark} The following lemma about minimal homology bases, {\it cf.}~Definition~\ref{def:min}, is proved in \cite[Section~5]{gro83}, see also~\cite[Lemma~2.2]{guth}. Note it applies to Riemannian surfaces, not only hyperbolic ones. \begin{lemma} \label{lem:simple} Let $M$ be a compact Riemannian surface with geodesic boundary components.
Then every minimal homology basis~$(\gamma_i)_i$ of $M$ is formed of simple closed geodesics such that for every~$i$, the distance between every pair of points of~$\gamma_i$ is equal to the length of the shortest arc of~$\gamma_i$ between these two points. Furthermore, two different loops of~$(\gamma_i)_i$ intersect each other at most once. \end{lemma} \begin{remark} \label{rem:straight} The homotopical systolic loops of~$M$ also satisfy the conclusion of Lemma~\ref{lem:simple}. \end{remark} \begin{remark}\label{rem:diameter} In particular, Lemma~\ref{lem:simple} tells us that one can always find a homology basis made of loops of length at most twice the diameter of the surface. In \cite{GPY}, this is shown to be false for lengths of pants decompositions. Indeed, in a pants decomposition of a ``random'' surface (where for instance random is taken in the sense of R.~Brooks and E.~Makover) there is a curve of length at least $g^{\frac{1}{6}-\varepsilon}$, while the diameter behaves roughly like $\log(g)$. \end{remark} In the next lemma, we construct particular one-holed hyperbolic tori, which we call {\it fat tori} for future reference. \begin{lemma}\label{lem:fat} Let $\varepsilon \in (0,2 {\,\rm arcsinh}(1)]$. There exists a hyperbolic one-holed torus~$T$ such that \begin{enumerate} \item the length of its boundary~$\partial T$ is equal to~$\varepsilon$; \item the length of every homotopically nontrivial loop of~$T$ is at least~$\varepsilon$; \label{item22} \item every arc with endpoints in~$\partial T$ representing a nontrivial class in~$\pi_{1}(T,\partial T)$ is longer than any homologically nontrivial loop of~$T$. \end{enumerate} \end{lemma} \begin{proof} We construct $T$ as follows. We begin by constructing, for any $\varepsilon \in (0,2 {\,\rm arcsinh}(1)]$, the unique right-angled pentagon $P$ with one side of length $\frac{\varepsilon}{4}$, and the two sides not adjacent to this side of equal length, say $a$.
By the pentagon formula, $$ a = {\,\rm arcsinh}\sqrt{\cosh\frac{\varepsilon}{4}}. $$ A simple calculation shows that $2a> \varepsilon$ for $\varepsilon \leq 2 {\,\rm arcsinh} (1)$: indeed, $\cosh\frac{\varepsilon}{4}>1$ yields $a > {\,\rm arcsinh}(1) \geq \varepsilon/2$. For future reference, denote by~$h$ the length of the two edges of~$P$ adjacent to the side of length~$\frac{\varepsilon}{4}$. Now, we glue four copies of $P$ along the edges of length $h$ to obtain a square with a hole, {\it cf.}~Figure~\ref{fig0}. \begin{figure}[h] \leavevmode \SetLabels \L(.21*.65) $a$\\ \L(.27*.8) $a$\\ \L(.36*.67) $h$\\ \L(.255*.42) $h$\\ \L(.31*.55) $\frac{\varepsilon}{4}$\\ \endSetLabels \begin{center} \AffixLabels{\centerline{\epsfig{file=fattorus.pdf,width=10cm,angle=0}}} \end{center} \caption{The construction of a fat torus} \label{fig0} \end{figure} We glue the opposite sides of this square to obtain a one-holed torus $T$ with boundary length $\varepsilon$. Note that the sides of the square project onto two simple closed geodesics $\gamma_1$ and~$\gamma_2$ of length~$2a$. The distance between the two connected boundary components of $T \setminus \{\gamma_i\}$ arising from~$\gamma_i$ is at least~$2a$. Therefore, as every noncontractible simple loop of~$T$ not homotopic to the boundary~$\partial T$ has a nonzero intersection number with $\gamma_1$ or~$\gamma_2$, its length is at least $2a>\varepsilon$. We immediately deduce the point~\eqref{item22}. This also shows that $\gamma_1$ and~$\gamma_2$ form a minimal homology basis of~$T$. In particular, the homological systole of~$T$ is equal to~$2a$. Now, in the one-holed square, the distance between the boundary circle of length~$\varepsilon$ and each of the sides of the square is clearly equal to~$h$. Thus, every arc of~$T$ with endpoints in~$\partial T$ representing a nontrivial class in~$\pi_{1}(T,\partial T)$ has length at least~$2h$. By the pentagon formula, $$ \sinh h \, \sinh \frac{\varepsilon}{2} = \cosh a.
$$ As $\varepsilon \leq 2 {\,\rm arcsinh}(1)$, we have $\sinh\frac{\varepsilon}{2} \leq 1$, hence $\sinh h \geq \cosh a > \sinh a$, and we conclude that $2h > 2a$. This proves the lemma. \end{proof} \forget Let $T$ be the hyperbolic pair of pants with three boundary components of length~$\varepsilon$. The three minimizing arcs, of length~$a$, joining the boundary components of the punctured torus decompose~$T$ into two isometric right-angled hexagons. Each of these hexagons can be further decomposed into two isometric right-angled pentagons along a minimizing arc, of length~$b$, joining two opposite sides. The lengths of the sides of these pentagons are $\varepsilon/4$, $a$, $\varepsilon/2$, $a/2$ and $b$ (in cyclic order). By the pentagon formula, {\it cf.}~\cite{bus92}, we have \begin{eqnarray*} \cosh a & = & \coth(\varepsilon/4) \, \coth(\varepsilon/4) \\ \cosh b & = & \sinh(a) \, \sinh(\varepsilon/2). \end{eqnarray*} Since $\varepsilon \leq 2 {\,\rm arcsinh}(1)$, we derive, after some computation, that $\varepsilon \leq a \leq 2b$, which yields the desired result. \forgotten We can now proceed to the main proof of the section. \begin{proof}[Proof of Theorem \ref{theo:lng}] \mbox{ } \medskip \noindent{\it Step 1.} We know this theorem to be true if the homological systole ${\rm sys}_H(M)$ is at least $2 {\,\rm arcsinh}(1)$, so suppose that this does not hold.\\ \noindent Let $\alpha_1,\ldots,\alpha_{n}$ be a maximal collection of homologically independent closed geodesics of $M$ of length less than $2 {\,\rm arcsinh}(1)$. Note that by the collar lemma, these curves are simple and disjoint, and there is a collar of width~$1>w_0$ around each of them. \\ \noindent If $n\geq \eta(g)$ then the theorem holds with $C_\lambda = 2 {\,\rm arcsinh}(1)/\log(2)$, so let us suppose that $n < \eta(g)$. Let us consider $N= M \setminus \{\alpha_1,\ldots,\alpha_n\}$, a surface of signature $(g-n,2n)$. The homological systole of~$N$ is at least $2 {\,\rm arcsinh}(1)$.
Using \cite{parlier}, we can now deform $N$ into a new hyperbolic surface~$N'$ which satisfies the following properties: \begin{enumerate} \item the boundary components of $N'$ are geodesics of length exactly $2 {\,\rm arcsinh}(1)$ in $N'$; \item for any simple loop $\gamma$, we have $$ \ell_{N'}(\gamma) \geq \ell_{N}(\gamma). $$ In particular, $$ {\rm sys}_H(N') \geq {\rm sys}_H(N) \geq 2 {\,\rm arcsinh}(1). $$ \end{enumerate} Recall that $\ell_{N}(\gamma)$ is the length of the shortest loop of~$N$ homotopic to~$\gamma$. \\ \noindent We define a new surface $S$ by gluing $2n$ hyperbolic fat tori $T_1,\hdots,T_{2n}$ (from Lemma \ref{lem:fat} with $\varepsilon = 2 {\,\rm arcsinh}(1)$) to the boundary geodesics of $N'$. This new surface is of genus $g+n$ and its homological systole is at least $2 {\,\rm arcsinh}(1)$. As such, Theorem~\ref{theo:ell} implies that the first $\eta(g)+3n$ homologically independent curves $\gamma_1,\hdots,\gamma_{\eta(g)+3n}$ of a minimal homology basis of $S$ satisfy \begin{eqnarray} \ell_{S}(\gamma_k) & \leq & C_0 \frac{\log (2g+2)}{2g-\eta(g)-n} \, (g+n) \nonumber\\ & \leq & C_0 \, \frac{g}{g-\eta(g)} \, \log (2 g+2) \nonumber \\ & \leq & C_\lambda \, \log(g+1) \label{eq:Clambda} \end{eqnarray} for~$C_\lambda = \frac{2C_{0}}{1-\lambda} > 2 {\,\rm arcsinh}(1)/\log(2)$, from the assumption on~$\eta$ and since $n < \eta(g)$. \\ \noindent Although the surfaces $M$ and $S$ are possibly not homeomorphic, it should be clear what homotopy class we mean by $\alpha_k$ on $S$. By construction of~$S$, the curves $\alpha_1,\ldots,\alpha_n$ are homologically trivial (separating) homotopical systolic loops of~$S$. Furthermore, the loops $\gamma_k$ lie in a minimal homology basis of $S$. Thus, from Lemma~\ref{lem:cut} and Remark~\ref{rem:straight}, the curves $\gamma_k$ are disjoint from the curves $\alpha_{1},\ldots,\alpha_n$. In particular, $\gamma_{k}$ lies either in~$N'$ or in a fat torus~$T_{i}$.
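\\ \noindent For the record, the last two inequalities in~\eqref{eq:Clambda} follow from elementary estimates: since $n < \eta(g) \leq \lambda g$, we have $g+n \leq 2g$ and $2g-\eta(g)-n \geq 2\,(g-\eta(g))$, so that $$ \frac{g+n}{2g-\eta(g)-n} \leq \frac{g}{g-\eta(g)} \leq \frac{1}{1-\lambda}, $$ while $\log(2g+2) \leq 2 \log(g+1)$ for every $g \geq 1$.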
\\ \noindent Among the loops $\gamma_1,\hdots,\gamma_{\eta(g)+3n}$, some may lie in the fat tori $T_k$. There are $2n$ fat tori and at most two homologically independent loops lie inside each torus. Therefore, there are at least $\eta(g)-n$ curves among the $\gamma_k$ which lie in $N'$. Renumbering the indices if necessary, we can assume that the loops $\gamma_{1},\ldots,\gamma_{\eta(g)-n}$ lie in~$N'$. These $\eta(g)-n$ loops~$\gamma_{i}$ induce $\eta(g)-n$ loops in~$M$, still denoted by $\gamma_{i}$, through the inclusion of~$N'$ into~$M$. Combined with the curves $\alpha_{1},\ldots,\alpha_{n}$ of~$M$, we obtain $\eta(g)$ loops $\alpha_{1},\ldots,\alpha_{n},\gamma_{1},\ldots,\gamma_{\eta(g)-n}$ in~$M$. \\ \noindent Let us show that these loops are homologically independent in~$M$. Consider an integral cycle \begin{equation} \label{eq:cycle} \sum_{i=1}^n a_i \, \alpha_i + \sum_{j=1}^{\eta(g)-n} b_j \, \gamma_j \end{equation} homologous to zero in~$M$. Since the curves $\alpha_i$ lie in the boundary $\partial N$ of $N$, the second sum of this cycle represents a trivial homology class in $H_1(N,\partial N;{\mathbb Z})$, and so in $H_1(S;{\mathbb Z})$. Now, since the $\gamma_j$ are homologically independent in~$S$, this implies that all the $b_j$ equal zero. Thus, the first sum in~\eqref{eq:cycle} is homologically trivial in~$M$. As the $\alpha_i$ are homologically independent in~$M$, we conclude that all the $a_i$ equal zero too. \\ \noindent The curves $\alpha_k$ have their lengths bounded from above by $2 {\,\rm arcsinh}(1)$ and clearly satisfy the estimate~\eqref{eq:upper1}. Now, since the simple curves $\gamma_k$ do not cross any of the $\alpha_i$, we have $$ \ell_M(\gamma_k) = \ell_{N}(\gamma_{k}) \leq \ell_{N'}(\gamma_k) = \ell_{S}(\gamma_k).
$$ Therefore, the lengths of the $\eta(g)$ homologically independent geodesic loops $\alpha_{1},\ldots,\alpha_{n},\gamma_{1},\ldots,\gamma_{\eta(g)-n}$ of~$M$ are bounded from above by $C_\lambda \, \log(g+1)$, with $C_{\lambda} = \frac{2^{17}}{1-\lambda}$. \\ \noindent{\it Step 2.} To complete the proof of the theorem, we need a lower bound on the width of the collars of these $\eta(g)$ closed geodesics in~$M$. We already know that the curves $\alpha_1,\ldots,\alpha_n$ admit a collar of width~$w_0$ around each of them. Without loss of generality, we can assume that the geodesics $\gamma_{1},\ldots,\gamma_{\eta(g)-n}$, which lie in~$N$, are part of a minimal homology basis of~$N$. \\ \noindent Let $\gamma$ be one of these simple closed geodesics. Recall that $\gamma$ is disjoint from the $\alpha_i$. Suppose that the width~$w$ of its maximal collar is less than~$w_0$. Then there exists a non-selfintersecting geodesic arc~$c$ of length~$2w$ intersecting $\gamma$ only at its endpoints. Let $d_1$ be the shortest arc of~$\gamma$ connecting the endpoints of~$c$ and $d_2$ the longest one. From Lemma~\ref{lem:simple} applied to~$N$, the arc~$d_1$ is no longer than~$c$. Therefore, $$ {\rm length}(c \cup d_1) \leq 4w < 2 \, {\,\rm arcsinh}(1). $$ By definition of the $\alpha_i$, the simple closed geodesic $\delta_1$ homotopic to the loop~$c \cup d_1$, of length less than $2 {\,\rm arcsinh}(1)$, is homologous to an integral combination of the~$\alpha_i$. Thus, the intersection number of $\gamma$ and~$\delta_1$ is zero. As these two curves intersect at most once, they must be disjoint. Denoting by~$\delta_2$ the simple closed geodesic homotopic to the loop~$c \cup d_2$, we deduce that $\gamma$, $\delta_1$ and $\delta_2$ bound a pair of pants. The curve~$c$ along with the three minimizing arcs joining $\gamma$ to $\delta_1$, $\gamma$ to $\delta_2$ and $\delta_1$ to~$\delta_2$ decompose this pair of pants into four right-angled pentagons.
From the pentagon formula, $$ \cosh \left( \frac{1}{2} {\rm length}(\delta_i) \right) = \sinh(w) \, \sinh \left( \frac{1}{2} {\rm length}(d_i) \right). $$ Since $w \leq {\,\rm arcsinh}(1)$, we have $\sinh(w) \leq 1$, and we deduce that $\delta_i$ is shorter than~$d_i$. Hence, $$ {\rm length}(\delta_1)+{\rm length}(\delta_2)\leq{\rm length}(\gamma). $$ This yields a contradiction. \end{proof} \forget As ${\rm length}(h) = w \leq {\,\rm arcsinh}(1)$, This pair of pants is naturally divided into two isometric right-angled hexagons which in turn split into four right-angled pentagons obtained by cutting each hexagon along the height $h$ between $\gamma$ and its opposite side. As ${\rm length}(h) = w \leq {\,\rm arcsinh}(1)$, elementary hyperbolic trigonometry considerations in the right-angled pentagons involved lead to the inequality \noindent{\it Step 2.} To complete the proof of the theorem, we need a lower bound on the width of the collars of these $\eta(g)$ closed geodesics in~$M$. We already know that the curves $\alpha_1,\ldots,\alpha_n$ admit a collar of width~$w_0$ around each of them. Without loss of generality, we can assume that the geodesics $\gamma_{1},\ldots,\gamma_{\eta(g)-n}$, which lie in~$N$, are part of a minimal basis of~$N$. \\ \noindent Let $\gamma$ be one of these geodesics. Suppose that the width~$w$ of its maximal collar is less than~$w_0$. There exists a non-selfintersecting geodesic arc~$c$ of length~$2w$ intersecting $\gamma$ only at its endpoints. Let $d$ be the shortest arc of~$\gamma$ connecting the endpoints of~$c$. From Lemma~\ref{lem:simple} applied to~$N$, the arc~$d$ is no longer than~$c$. Therefore, $$ {\rm length}(c \cup d) \leq 4w < 2 \, {\,\rm arcsinh}(1). $$ Hence, the simple loop~$c \cup d$ is homotopic to some noncontractible closed geodesic~$\alpha$ of length less than $2 {\,\rm arcsinh}(1)$. \\ \noindent Consider now a collar~$U$ of~$\alpha$ of width $w(\alpha)={\,\rm arcsinh}(1/\sinh(\frac{1}{2} {\rm length}(\alpha)))$.
The length~$\ell$ of the boundary components of~$U$ is less than~$2\sqrt{2} {\,\rm arcsinh}(1)$, {\it cf.}~\cite[\S~4.1]{bus92}. (Recall indeed that the length of~$\alpha$ is less than~$2 {\,\rm arcsinh}(1)$.) Let $\delta$ be the distance from $c \cup d$ to~$M \setminus U$. Remark that the injectivity radius of~$M$ at every point of~$c \cup d$ is less or equal to~$2w$. Hence, from the collar theorem, {\it cf.}~\cite[Theorem~4.1.6]{bus92}, we derive \begin{eqnarray*} \sinh(2w) & \geq & \cosh \left( \frac{1}{2} {\rm length}(\alpha) \right) \, \cosh(\delta) - \sinh(\delta) \\ & \geq & \cosh(\delta) - \sinh(\delta) = e^{-\delta}. \end{eqnarray*} Thus, since $w < \frac{1}{2} {\,\rm arcsinh}(e^{-\sqrt{2} {\,\rm arcsinh}(1)}) \leq \frac{1}{2} {\,\rm arcsinh}(e^{-\ell/2})$, the loop~$c \cup d$ lies in~$U$ at distance greater than~$\ell/2$ from~$\partial U$. In this case, the arc of~$\gamma \cap U$ containing~$d$ with endpoints in~$\partial U$, of length greater than~$\ell$, can be homotoped to a shorter arc of~$\partial U$ while keeping its endpoints fixed (recall that $\gamma$ does not intersect~$\alpha$). This yields a contradiction because $\gamma$ is length minimizing in its homotopy class. \forgotten \subsection{Jacobians of Riemann surfaces} \label{sec:jacobian} In this section, we present an application of the results of the previous section to the geometry of Jacobians of Riemann surfaces, extending the work~\cite{BS94} of P.~Buser and P.~Sarnak. \\ Consider a closed Riemann surface~$M$ of genus~$g$. We define the $L^2$-norm $|.|_{L^2}$, simply noted $|.|$, on $H^1(M;{\mathbb R}) \simeq {\mathbb R}^{2g}$ by setting \begin{equation} \label{eq:inf} |\Omega|^2 = \inf_{\omega \in \Omega} \int_M \omega \wedge *\omega \end{equation} where $*$ is the Hodge star operator and the infimum is taken over all the closed one-forms~$\omega$ on~$M$ representing the cohomology class~$\Omega$. 
The infimum in~\eqref{eq:inf} is attained by the unique closed harmonic one-form in the cohomology class~$\Omega$. The space $H^1(M;{\mathbb Z})$ of the closed one-forms on~$M$ with integral periods (that is, whose integrals over the cycles of~$M$ are integers) is a lattice of~$H^1(M;{\mathbb R})$. The Jacobian~$J$ of~$M$ is a principally polarized abelian variety isometric to the flat torus $$ {\mathbb T}^{2g} \simeq H^1(M;{\mathbb R})/H^1(M;{\mathbb Z}) $$ endowed with the metric induced by~$|.|$. \\ In their seminal article~\cite{BS94}, P.~Buser and P.~Sarnak show that the homological systole of the Jacobian of~$M$ is bounded from above by $\sqrt{\log(g)}$ up to a multiplicative constant. In other words, there is a nonzero lattice vector in~$H^1(M;{\mathbb Z})$ whose $L^2$-norm satisfies a $\sqrt{\log(g)}$ upper bound. We extend their result by showing that there exist almost $g$ linearly independent lattice vectors whose norms satisfy a similar upper bound. More precisely, we have the following. \begin{theorem} Let $\eta:{\mathbb N} \to {\mathbb N}$ be a function such that $$ \displaystyle \lambda := \sup_g \frac{\eta(g)}{g} < 1. $$ Then there exists a constant~$C_\lambda=\frac{2^{17}}{1-\lambda}$ such that for every closed Riemann surface~$M$ of genus~$g$ there are at least $\eta(g)$ linearly independent lattice vectors $\Omega_{1},\ldots,\Omega_{\eta(g)} \in H^1(M;{\mathbb Z})$ which satisfy \begin{equation} \label{eq:upper2} |\Omega_i|^2 \leq C_\lambda \, \log(g+1) \end{equation} for~every $i \in \{1,\ldots,\eta(g)\}$. \end{theorem} \begin{proof} Let $\alpha_{1},\ldots,\alpha_{\eta(g)}$ be the homologically independent loops of~$M$ given by Theorem~\ref{theo:lng}. Following~\cite{BS94}, for every~$i$, we consider a collar~$U_i$ of width $w_0=\frac{1}{2} {\,\rm arcsinh}(1)$ around~$\alpha_i$. Let $F_i$ be a smooth function defined on~$U_i$ which takes the value $0$ on one connected component of~$\partial U_i$ and the value~$1$ on the other. 
We define a one-form~$\omega_i$ on~$M$ with integral periods by setting $\omega_i=dF_i$ on~$U_i$ and $\omega_i=0$ outside~$U_i$. Let $\Omega_i$ be the cohomology class of~$\omega_i$, that is, $\Omega_i=[\omega_i]$. Clearly, we have $$ |\Omega_i|^2 \leq \inf_{F_i} \int_{U_i} dF_i \wedge *dF_i $$ where the infimum is taken over all the functions~$F_i$ as above. This infimum agrees with the capacity of the collar~$U_i$. Now, by~\cite[(3.4)]{BS94}, we have $$ |\Omega_i|^2 \leq \frac{{\rm length}(\alpha_i)}{\pi-2 \theta_0} $$ where $\theta_0 = \arcsin(1/\cosh(w_0))$. Since the homology class of~$\alpha_i$ is the Poincar\'e dual of the cohomology class~$\Omega_i$ of~$\omega_i$, the cohomology classes~$\Omega_i$ are linearly independent. The result follows from Theorem~\ref{theo:lng}. \end{proof} \section{Short homologically independent loops on Riemannian surfaces} This section is devoted to the proof of the Riemannian version of Theorem \ref{theo:lng}. More precisely, we show the following. \begin{theorem} \label{theo:lngr} Let $\eta:{\mathbb N} \to {\mathbb N}$ be a function such that $$ \displaystyle \lambda := \sup_g \frac{\eta(g)}{g} < 1. $$ Then there exists a constant~$C_\lambda=\frac{2^{18}}{1-\lambda}$ such that for every closed orientable Riemannian surface~$M$ of genus~$g$ there are at least $\eta(g)$ homologically independent loops $\alpha_{1},\ldots,\alpha_{\eta(g)}$ which satisfy \begin{equation} \label{eq:upperr} {\rm length}(\alpha_{i}) \leq C_\lambda \, \frac{\log(g+1)}{\sqrt{g}} \, \sqrt{{\rm area}(M)} \end{equation} for every $i \in \{1,\ldots,\eta(g) \}$. \end{theorem} \begin{remark} In \cite[Theorem 5.3.E]{gro83}, M.~Gromov also proposes an extension of a systolic inequality to families of curves. Note however that neither the growth rate (roughly $g^{1-\varepsilon}$ for any positive~$\varepsilon$) nor the number of curves is close to being optimal.
Furthermore, even an adaptation of the proof taking into account the stronger systolic inequality \cite[Theorem 2.6]{gro96} gives a bound of order $\sqrt{g^{c \log g}} \cdot \log g$ for the first $g/4$ curves, where $c$ is some positive constant. \end{remark} \begin{remark} A non-orientable version of this statement also holds, {\it cf.}~Remark~\ref{rem:ln}. \end{remark} \begin{proof}[Proof of Theorem~\ref{theo:lngr}] Multiplying the metric by a constant if necessary, we can assume that the area of~$M$ is equal to~$g$. The strategy of the proof is to deform the surface~$M$ into another surface to which we can apply Theorem~\ref{theo:ell}. Through this deformation, we want to make the homological systole uniformly bounded from below while controlling the area and the length of some minimal homology basis. \\ \noindent{\it Step 1.} Fix $\varepsilon=1$. Consider a maximal collection of simple closed geodesics $\alpha_{1},\ldots,\alpha_{n}$ of length at most~$\varepsilon$ which are pairwise disjoint and homologically independent in~$M$. We have $n \leq g$. Let us cut the surface open along the loops~$\alpha_{i}$. We obtain a compact surface~$N$ of signature $(g-n,2n)$. Each loop~$\alpha_i$ gives rise to two boundary components $\alpha_i^+$ and~$\alpha_i^-$. By construction, the simple closed geodesics of~$N$ of length at most~$\varepsilon$ are separating. If $n \geq \eta(g)$ then the theorem clearly holds. We will therefore assume that $n < \eta(g)$. \\ \noindent{\it Step 2.} Set $\alpha=\alpha_1^+$. Consider a loop~$c$ homotopic to~$\alpha$ of length at most~$\varepsilon$ on~$N$ which bounds a domain~$C$ of maximal area with~$\alpha$. By the Ascoli theorem, such a loop exists. The domain~$C$ may not be homeomorphic to a cylinder because $\alpha$ may have (nontransverse) self-intersections. However, if we remove~$C$ from~$N$ and glue a cylinder along~$c$, we get a surface homeomorphic to~$N$. The domain~$C$ of~$N$ does not cover the whole surface~$N$.
Otherwise, every $1$-dimensional homology class of~$N$ could be represented by a loop lying in the support of~$c$. Since some $1$-dimensional homology classes of~$N$ cannot be represented by the sum of loops of length at most~$\varepsilon$, the length of~$c$ would be greater than~$\varepsilon$, which is impossible. Thus, the loop $c$ decomposes~$N$ into two domains with nonempty interiors: $C$ and its complement. This implies that the length of~$c$ is equal to~$\varepsilon$, otherwise we could push the loop~$c$ away from~$\alpha$ in a neighborhood of some point of~$c$. Now, we replace the domain~$C$ of~$N$ with the flat cylinder~$S_{\varepsilon} \times [0,\varepsilon/2]$, where $S_{\varepsilon}$ is the circle of length~$\varepsilon$. The resulting surface~$N_1$ has the same signature as~$N$ and its area is at most ${\rm area}(M)+\varepsilon^2/2$. Furthermore, no loop of~$N_1$ of length less than~$\varepsilon$ is homotopic to the boundary component of~$N_1$ corresponding to~$\alpha$, otherwise we could derive a contradiction with the definition of~$c$. In addition, the marked length spectrum of~$N_{1}$ is clearly greater than or equal to that of~$N$, {\it cf.}~Definition~\ref{def:mls}. \\ \noindent We successively repeat this process with the remaining $2n-1$ curves $\alpha_{1}^-,\alpha_2^{\pm},\ldots,\alpha_{n}^{\pm}$. At the end, we obtain a surface~$N'=N_{2n}$, homeomorphic to~$N$, of area at most ${\rm area}(N)+n \varepsilon^{2}$ whose marked length spectrum is greater than or equal to that of~$N$. By construction, the loops of~$N'$ of length less than~$\varepsilon$ are separating and nonhomotopic to the boundary components of~$N'$. \\ \noindent{\it Step 3.} We can now follow the proof of Theorem~\ref{theo:lng}. As the $2n$ boundary components of~$N'$ are isometric to~$S_{\varepsilon}$, we can attach to each of them a fat torus, {\it cf.}~Lemma~\ref{lem:fat}. This yields a genus $g+n$ surface~$S$ with homological systole at least~$\varepsilon$.
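Note that, by the Gauss--Bonnet theorem, each fat torus~$T_i$, being a hyperbolic one-holed torus with geodesic boundary, has area $$ {\rm area}(T_i) = -2\pi \, \chi(T_i) = 2\pi, $$ which accounts for the term $2n \, 2 \pi$ in the area estimate below.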
From Theorem~\ref{theo:ell} applied to~$S$, the first $\eta(g)+3n$ homologically independent loops $\gamma_{1},\ldots,\gamma_{\eta(g)+3n}$ of~$S$ satisfy the following inequality (compare with~\eqref{eq:Clambda}) \begin{equation} {\rm length}(\gamma_{i}) \leq 2 \, C_\lambda \, \log(g+1) \label{eq:logg+2} \end{equation} for $C_{\lambda}=\frac{2C_{0}}{1-\lambda}$, where $C_{0}=2^{17}$, since $n < \eta(g) < g$ and ${\rm area}(S) = g+4 \pi n + n \varepsilon^2 \leq (2+4\pi) g$. \\ \noindent Now, from Lemma~\ref{lem:cut}, the loops~$\gamma_{i}$ do not cross the curves~$\alpha_k$ and therefore lie either in a fat torus or in $N'$. Arguing as in the proof of Theorem~\ref{theo:lng}, we obtain $\eta(g)-n$ loops among the~$\gamma_{i}$, say $\gamma_1,\ldots,\gamma_{\eta(g)-n}$, which form with $\alpha_1,\ldots,\alpha_n$ a family of $\eta(g)$ homologically independent loops of~$M$. The lengths of the $\alpha_k$ are bounded from above by~$\varepsilon$ and satisfy the upper bound~\eqref{eq:upperr}. Since the curves $\gamma_k$ do not cross any of the $\alpha_i$, we also have $$ \ell_M(\gamma_k) = \ell_N(\gamma_k) \leq \ell_{N'}(\gamma_k) = \ell_{S}(\gamma_k). $$ Therefore, the lengths of the $\eta(g)$ loops $\alpha_{1},\ldots,\alpha_n, \gamma_{1},\ldots,\gamma_{\eta(g)-n}$ are bounded from above by $C_\lambda \, \log(g)$, with $C_{\lambda} = \frac{2^{18}}{1-\lambda}$. \end{proof} Theorem~\ref{theo:lngr} also leads to the following upper bound on the sum of the lengths of the $g$ shortest homologically independent loops of a genus~$g$ Riemannian surface. \begin{corollary} There exists a universal constant~$C$ such that for every closed Riemannian surface~$M$ of genus~$g$, the sum of the lengths of the $g$ shortest homologically independent loops is bounded from above by $$ C \, g^{3/4} \sqrt{\log(g)} \, \sqrt{{\rm area}(M)}. $$ \end{corollary} \begin{proof} Let $\lambda \in (0,1)$.
From Theorem~\ref{theo:lngr}, the lengths of the first $[\lambda g]$ homologically independent loops are bounded by $\displaystyle \frac{C'}{1-\lambda} \, \frac{\log(g+1)}{\sqrt{g}} \, \sqrt{{\rm area}(M)}$, while from~\cite[1.2.D']{gro83}, the lengths of the next $g-[\lambda g]$ others are bounded by $\displaystyle C'' \, \sqrt{{\rm area}(M)}$, for some universal constants $C'$ and $C''$. Thus, the sum of the $g$ shortest homologically independent loops is bounded from above by $$ C' \, \frac{[\lambda g]}{1-\lambda} \, \frac{\log(g+1)}{\sqrt{g}} \, \sqrt{{\rm area}(M)} + C'' \, (g-[\lambda g]) \, \sqrt{{\rm area}(M)}. $$ The result follows by taking $1-\lambda = g^{-1/4} \sqrt{\log(g+1)}$, which lies in $(0,1)$ for $g$ large enough and makes both terms of order $g^{3/4} \sqrt{\log(g)} \, \sqrt{{\rm area}(M)}$. \end{proof} \forget We can assume that the loops $\alpha_{1},\ldots,\alpha_{\eta(g)+3k}$ are length-minimizing representatives of a minimal homology basis on~$S$. As in Lemma~\ref{lem:disj}, the loops~$\alpha_{i}$ lie either in a fat torus or in $N'_{m'} \setminus \{\gamma'_{1},\ldots,\gamma'_{m'}\}$. Since at most two homologically independent loops lie in each fat torus, there are at least $\eta(g)-k$ curves among the~$\alpha_{i}$ which lie in $N'_{m'} \setminus \{\gamma'_{1},\ldots,\gamma'_{m'}\}$. Furthermore, we can assume that these $\eta(g)-k$ loops are homologically independent in $H_{1}(N'_{m'},\partial N'_{m'}) \simeq H_{1}(N,\partial N)$. From Steps 2 and 3, the homotopy classes of these $\eta(g)-k$ curves can be represented in $N' \simeq N$, and so in $M$, by $\eta(g)-k$ loops satisfying the relation~\eqref{eq:logg+2}. Since these $\eta(g)-k$ loops are homologically independent with the $k$ loops $\gamma_{1},\ldots,\gamma_{k}$ of~$M$ of length at most~$\varepsilon$, we obtain the desired result. \forgotten \forget Let $(M,{\mathcal G}_0)$ be a closed Riemannian surface of genus $g$ with area less than $g$ and systole less than $\epsilon$. Let $c$ be a closed geodesic homologically non trivial with length less than $\epsilon$.
We cut $M$ along $c$ and obtain a new surface $M'$ with two boundary geodesics $\alpha$ and $\beta$. Let $C^\epsilon_\alpha$ denote the completion of the set consisting of all simple loops of $M'$ in the homotopy class of $\alpha$ with length less than $\epsilon$. This set is not empty as $\alpha \in C^\epsilon_\alpha$ and we can define a partial order on it as follows. Two elements $\alpha_1$ and $\alpha_2$ of $C^\epsilon_\alpha$ satisfies the relation $\alpha_1 \leq \alpha_2$ if the cylinder of $M'$ bounded by both $\alpha$ and $\alpha_2$ contained $\alpha_1$. So $(C^\epsilon_\alpha,\leq)$ is a partially ordered set. Furthermore every chain $L$, {\it i.e.} a totally ordered subset, of $C^\epsilon_\alpha$ admits an upperbound. For this consider the union of all the cylinder bounded by both $\alpha$ and an element of $L$. This is a cylinder whose boundary is an element of $C^\epsilon_\alpha$. By Zorn's lemma the set $C^\epsilon_\alpha$ admits a maximal element $\alpha'$. We remove the cylinder bounded by $\alpha$ and $\alpha'$ and define $C^\epsilon_\beta$ in the new surface obtained. We can also find a maximal element $\beta'$ in $C^\epsilon_\beta$. By removing the cylinder bounded by $\beta$ and $\beta'$, we obtain a surface denoted by $M''$. Either the two boundary curves $\alpha'$ and $\beta'$ have length $\epsilon$, or their union is a graph $\Gamma$ of length less than $2\epsilon$ such that $$ p : \pi_1(\Gamma) \to \pi_1(M'',\partial M'') $$ is injective. In this last case we can find $[g/2]$ homogically independent curves of length less than $\epsilon$ and the proof is finished. If they do not coincide, then $\alpha'$ and $\beta'$ have length $\epsilon$. Then we glue a cylinder of width $\epsilon$ on $M''$ by identifying $\alpha'$ with one side of the cylinder and $\beta'$ with the other side. 
We then obtain a surface denoted by $M_1$ with $$ {\rm area}(M_1) \leq {\rm area}(M)+\epsilon^2; $$ such that every loop in the homotopy class of $c$ or intersecting the homotopy class of $c$ has length at least $\epsilon$. \forgotten \section{Asymptotically optimal hyperbolic surfaces} \label{sec:cex} In this section, we show that the bound obtained in Theorem~\ref{theo:lngr} on the number of homologically independent loops of length at most $\sim \log(g)$ on a genus~$g$ hyperbolic surface is optimal. Namely, we construct hyperbolic surfaces whose number of homologically independent loops of length at most $\sim \log(g)$ does not grow asymptotically like~$g$. Specifically, we prove the following (we refer to Definition~\ref{def:ksys} for the definition of the $k$-th homological systole). \begin{theorem} \label{theo:cex} Let $\eta:{\mathbb N} \to {\mathbb N}$ be a function such that $\displaystyle \lim_{g \to \infty} \frac{\eta(g)}{g} = 1$. Then there exists a sequence of genus~$g_{k}$ hyperbolic surfaces~$M_{g_{k}}$ with $g_{k}$ tending to infinity such that $$ \lim_{k \to \infty} \frac{{\rm sys}_{\eta(g_{k})}(M_{g_{k}})}{\log(g_{k})} = \infty. $$ \end{theorem} \medskip Before proceeding to the proof of this theorem, we will need the following constructions. \\ All hyperbolic polygons will be geodesic and convex. We will say that a hyperbolic polygon is symmetric if it admits an axial symmetry. \\ Fix $\ell > 0$ such that $\cosh(\ell) > 7$ and let $L,L'>0$ be large enough (to be determined later). \\ \noindent{\it Construction of a symmetric hexagon.} With our choice of~$\ell$, there is no hyperbolic triangle with a basis of length~$\ell$ making an angle greater or equal to~$\pi/6$ with the other two sides. Therefore, there exists a symmetric hyperbolic hexagon $H_{\pi/6,L}$ (resp.
$H_{\pi/3,L}$) with a basis of length~$\ell$ forming an angle of~$\pi/6$ (resp.~$\pi/3$) with its adjacent sides such that all its other angles are right and the side opposite to the basis is of length~$L$, {\it cf.}~Figure~\ref{fig2}. Note that the lengths of the two sides adjacent to the side of length~$L$ go to zero when $L$ tends to infinity. \\ \begin{figure}[h] \leavevmode \SetLabels \L(.5*.78) $L$\\ \L(.5*.09) $\ell$\\ \endSetLabels \begin{center} \AffixLabels{\centerline{\epsfig{file =hexagon.pdf,width=10cm,angle=0}}} \end{center} \caption{The schematics for $H_{\pi/6,L}$ and $H_{\pi/3,L}$}\label{fig2} \end{figure} \noindent{\it Construction of a $3$-holed triangle.} Consider the hyperbolic right-angled hexagon~$H_{L}$ with three non-adjacent sides of length~$L$. Note that the lengths of the other three sides go to zero when $L$ tends to infinity. By gluing three copies of~$H_{\pi/6,L}$ to~$H_{L}$ along the sides of length~$L$, we obtain a three-holed triangle~$X_{3}$ with angles measuring~$\pi/3$ and sides of length~$\ell$, where the three geodesic boundary components can be made arbitrarily short by taking $L$ large enough, {\it cf.}~Figure~\ref{fig3}. We will assume that the geodesic boundary components of~$X_{3}$ are short enough to ensure that the widths of their collars are greater than~$e^{g}$, with $g$ to be determined later. \\ \begin{figure}[h] \leavevmode \SetLabels \L(.445*.4) $L$\\ \L(.53*.4) $L$\\ \L(.49*.23) $L$\\ \L(.377*.48) $\ell$\\ \L(.5*.07) $\ell$\\ \L(.61*.48) $\ell$\\ \endSetLabels \begin{center} \AffixLabels{\centerline{\epsfig{file =threeholedtriangle.pdf,width=10cm,angle=0}}} \end{center} \caption{The $3$-holed triangle} \label{fig3} \end{figure} \noindent{\it Construction of a $7$-holed heptagon.} There exists a symmetric hyperbolic pentagon~$P_{L'}$ with all its angles right except for one measuring~$2\pi/7$ such that the length of the side opposite to the non-right angle is equal to~$L'$.
As previously, the lengths of the two sides adjacent to the side of length~$L'$ go to zero when $L'$ tends to infinity. By gluing seven copies of~$P_{L'}$ around their vertex with a non-right angle, we obtain a hyperbolic right-angled $14$-gon, {\it cf.}~Figure~\ref{fig4}. Now, we paste seven copies of~$H_{\pi/3,L'}$ along the sides of length~$L'$. We obtain a $7$-holed heptagon~$X_{7}$ with angles measuring~$2\pi/3$ and sides of length~$\ell$, where the seven geodesic boundary components can be made arbitrarily short by taking $L'$ large enough, {\it cf.}~Figure~\ref{fig4}. We will assume that the geodesic boundary components of~$X_{7}$ are as short as the geodesic boundary components of~$X_{3}$, which guarantees that the widths of their collars are also greater than~$e^{g}$, with $g$ to be determined later. \\ \begin{figure}[h] \leavevmode \SetLabels \L(.42*.88) $\ell$\\ \L(.44*.74) $L'$\\ \L(.485*.25) $P_{L'}$\\ \endSetLabels \begin{center} \AffixLabels{\centerline{\epsfig{file =heptagon.pdf,width=8cm,angle=0}}} \end{center} \caption{The $7$-holed heptagon} \label{fig4} \end{figure} \noindent{\it Hurwitz surfaces with large systole.} Generalizing the work of Buser-Sarnak~\cite{BS94}, Katz-Schaps-Vishne~\cite{KSV07} constructed a family of hyperbolic surfaces~$N$ defined as a principal congruence tower of Hurwitz surfaces with growing genus~$h$ and with homological systole at least $\frac{4}{3} \log(h)$. Since Hurwitz surfaces are $(2,3,7)$-triangle surfaces, they admit a triangulation~${\mathcal T}$ made of copies of the hyperbolic equilateral triangle with angles~$2\pi/7$. The area of this triangle equals~$2\pi/7$. Therefore, the triangulation~${\mathcal T}$ of~$N$ is formed of $14(h-1)$ triangles and has $6(h-1)$ vertices. Remark that not every integer~$h$ can be attained as the genus of a Hurwitz surface with homological systole at least~$\frac{4}{3} \log(h)$. Still, this is true for infinitely many $h$'s.
\\ \noindent{\it Adding handles.} In order to describe our construction, it is more convenient to replace the previous hyperbolic equilateral triangles of~${\mathcal T}$ with Euclidean equilateral triangles of unit side length, which gives rise to a piecewise flat surface $N_0 \simeq N$. For each of these Euclidean triangles~$\Delta$, consider a subdivision of each of its sides into $m$ segments of equal length. The lines of~$\Delta$ parallel to the sides of the triangle and passing through the points of the subdivision decompose~$\Delta$ into~$m^{2}$ Euclidean equilateral triangles of size~$\frac{1}{m}$. These small triangles define a new (finer) triangulation~${\mathcal T}'$ of~$N$ with exactly seven triangles around each vertex of the original triangulation~${\mathcal T}$ and exactly six triangles around the new vertices. Note that the new triangulation~${\mathcal T}'$ is formed of $14(h-1)m^2$ triangles. Now, replace each heptagon (with a conical singularity in its center) formed of the seven small triangles of~${\mathcal T}'$ around the vertices of the original triangulation~${\mathcal T}$ by a copy of the hyperbolic $7$-holed heptagon~$X_{7}$ (of side length~$\ell$). Replace also the other small triangles of~${\mathcal T}'$ by a copy of the hyperbolic three-holed triangle~$X_{3}$ (of side length~$\ell$). The conditions on the angles of $X_{3}$ and~$X_{7}$ imply that the resulting surface~$M'$ is a compact hyperbolic surface with geodesic boundary components, of signature~$(h,42(h-1)(m^{2}-2))$. Note that the lengths of the nonboundary closed geodesics of~$M'$ are bounded away from zero by a positive constant $\kappa=\kappa(\ell)$ depending only on~$\ell$. By gluing the geodesic boundary components pairwise, which all have the same length, we obtain a closed hyperbolic surface~$M$ of genus~$g=h+21(h-1)(m^{2}-2)$.
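For the reader's convenience, let us record the count behind the signature of~$M'$ and the genus of~$M$. Among the $14(h-1)m^{2}$ small triangles of~${\mathcal T}'$, the heptagons around the $6(h-1)$ vertices of~${\mathcal T}$ use $7 \cdot 6(h-1) = 42(h-1)$ of them, so the construction involves $6(h-1)$ copies of~$X_{7}$ and $14(h-1)(m^{2}-3)$ copies of~$X_{3}$. Since each copy of~$X_{3}$ has three boundary components and each copy of~$X_{7}$ has seven, the surface~$M'$ has $$ 3 \cdot 14(h-1)(m^{2}-3) + 7 \cdot 6(h-1) = 42(h-1)(m^{2}-2) $$ boundary components, in agreement with its signature. Gluing these boundary components pairwise adds one handle per pair, whence $$ g = h + \frac{1}{2} \cdot 42(h-1)(m^{2}-2) = h+21(h-1)(m^{2}-2). $$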
\begin{remark} \label{rem:punc} It is not possible to use single-punctured hyperbolic polygons in our construction and still have the required conditions on their angles and the lengths of their sides to obtain a smooth closed hyperbolic surface at the end. Note also that the combinatorial structure of Hurwitz surfaces, and more generally triangle surfaces, makes the description of our surfaces simple. \end{remark} The following lemma features the main property of our surfaces. \begin{lemma} \label{lem:logh} Let $k=21(h-1)(m^{2}-2)$. The $(k+1)$-th homological systole of~$M$ is large. More precisely, there exists a universal constant~$K \in (0,1)$ such that $$ {\rm sys}_{k+1}(M) \geq \frac{4}{3} \, Km \, \log(h). $$ \end{lemma} \begin{proof} Let us start with some distance estimates. No matter how small the geodesic boundary components of~$X_3$ are, the distance between two points of its non-geodesic boundary component~$\partial_0 X_3$ is greater or equal to $K$ times the distance between the corresponding two points in the boundary~$\partial \Delta$ of a Euclidean triangle of unit side length. That is, $$ {\rm dist}_{\partial_0 X_3 \times \partial_0 X_3} \geq K \, {\rm dist}_{\partial \Delta \times \partial \Delta}, $$ where $K$ is a universal constant. If, instead of a Euclidean triangle of unit side length, we consider a small Euclidean triangle of size~$\frac{1}{m}$, we have to change $K$ into~$Km$ in the previous bound. The same inequality holds, albeit with a different value of~$K$, if one switches $X_3$ for~$X_7$. Here, of course, we should replace the small Euclidean triangle with the singular Euclidean heptagon that $X_7$ replaces in the construction of~$M$. \\ Now, let us estimate the $(k+1)$-th homological systole of~$M$. By construction, there are $k$ short disjoint closed geodesics $\alpha_1,\ldots,\alpha_k$ of the same length which admit a collar in~$M$ of width at least~$e^g$. Furthermore, the loops~$\alpha_i$ are homologically independent in~$M$.
Let $\gamma$ be a geodesic loop in~$M$ homologically independent from the~$\alpha_i$. We can suppose that $\gamma$ does not intersect the loops~$\alpha_i$, otherwise its length would be at least~$e^g$ and we would be done. Thus, if the trajectory of~$\gamma$ enters into a copy of $X_3$ or~$X_7$ (through a side of length~$\ell$), it will leave it through a side of length~$\ell$. Therefore, using the previous distance estimates, the curve~$\gamma$ induces a homotopically nontrivial loop in~$N_0 \simeq N$ of length at most $$ \frac{1}{Km} \, {\rm length}(\gamma). $$ Since $N$ and $N_0$ are bilipschitz equivalent, it does not matter with respect to which metric we measure the lengths; the only effect might be on the constant~$K$. Now, since the homological systole of~$N$ is at least~$\frac{4}{3} \log(h)$ by construction, we conclude that $$ {\rm length}(\gamma) \geq \frac{4}{3} \, Km \, \log(h). $$ \end{proof} With this construction, we can prove Theorem~\ref{theo:cex}. \begin{proof}[Proof of Theorem~\ref{theo:cex}] Given $\eta$ as in the theorem, let us show that for every $C>1$, there exist infinitely many hyperbolic surfaces~$M_{g}$ with pairwise distinct genus~$g$ such that \begin{equation} \label{eq:C} \frac{{\rm sys}_{\eta(g)}(M_{g})}{\log(g)} \geq C. \end{equation} The surfaces~$M_g$ have already been constructed in the discussion following Theorem~\ref{theo:cex}. We will simply set the parameters $h$ and $m$ on which they depend so that the inequality~\eqref{eq:C} holds. \\ Let $\varepsilon > 0$ be such that $\varepsilon \leq \frac{1}{100(\frac{3C}{2K}+1)^{2}}$ and $m = E(\frac{1}{10 \sqrt{\varepsilon}})$. Note that $\varepsilon \leq \frac{1}{100}$, $m \geq \frac{3C}{2K}$ and $\varepsilon m^2 \leq \frac{1}{100}$. By assumption on~$\eta$, there exists an integer~$g_{0} \geq 100$ such that for every $g \geq g_{0}$, we have $\eta(g) \geq (1-\varepsilon)g$.
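Let us verify the stated properties of the parameters. Since $\varepsilon \leq \frac{1}{100(\frac{3C}{2K}+1)^{2}}$, we have $\frac{1}{10 \sqrt{\varepsilon}} \geq \frac{3C}{2K}+1$, so that $$ m = E \left( \frac{1}{10 \sqrt{\varepsilon}} \right) \geq \frac{1}{10 \sqrt{\varepsilon}} - 1 \geq \frac{3C}{2K} \quad \mbox{and} \quad \varepsilon m^{2} \leq \varepsilon \left( \frac{1}{10 \sqrt{\varepsilon}} \right)^{2} = \frac{1}{100}. $$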
Now, we can choose $h \geq \max \{ 21(m^{2}-2),g_{0} \}$ such that there exists a genus~$h$ Hurwitz surface~$N$ with homological systole at least $\frac{4}{3} \log(h)$. Remark that there are infinitely many choices for~$h$. From the construction following Theorem~\ref{theo:cex}, we obtain infinitely many hyperbolic surfaces~$M_{g}$ with pairwise distinct genus~$g=h+21(h-1)(m^{2}-2)$. Now, from our choice of parameters, we have $$ \eta(g) \geq (1-\varepsilon) g \geq 21(h-1)(m^{2}-2)+1. $$ Combined with Lemma~\ref{lem:logh}, this implies that $$ {\rm sys}_{\eta(g)}(M_{g}) \geq \frac{4}{3} \, Km \, \log(h). $$ From the expression of~$g$ and our choice of parameters, we derive $$ \log(g) \leq \log(h) + \log(21(m^{2}-2)) \leq 2 \, \log(h) \leq \frac{4}{3} \, \frac{Km}{C} \, \log(h). $$ Therefore, $$ {\rm sys}_{\eta(g)}(M_{g}) \geq C \, \log(g). $$ Hence the theorem. \end{proof} \begin{remark} Note that the surfaces constructed in the previous theorem are hyperbolic, not merely Riemannian. The construction of {\it nonhyperbolic} Riemannian surfaces satisfying the conclusion of Theorem~\ref{theo:cex} is easier to carry out as there is no constraint imposed by the hyperbolic structure anymore. We can start with a Buser-Sarnak sequence of surfaces and attach long thin handles to these surfaces. Then we can argue as in the proof of the theorem. \end{remark} \forget \begin{proposition} For any $C>0$, the quantity \begin{equation}\label{eqn:contra} C \log(g) \sqrt{\frac{{\rm area}(S)}{g}} \end{equation} cannot be a universal upper bound on the lengths of $f(g)$ homologically distinct curves where $f$ is a function of $g$ with $\lim_{g\to \infty} \frac{f(g)}{g}=1$. \end{proposition} \begin{proof} Suppose by contradiction that this was the case, i.e., that there exists a bound given by equation \ref{eqn:contra} on the length of $f(g)$ homologically independent curves with $\lim_{g\to \infty} \frac{f(g)}{g}=1$.
In particular, this implies that for any $\varepsilon>0$, and sufficienly high $g$, equation \ref{eqn:contra} holds for at least $ag=(1-\varepsilon)g$ curves. We begin with a Buser-Sarnak sequence of surfaces. More precisely, a family of hyperbolic surfaces with growing genus $h$ and with systole growth $\frac{4}{3}\log(h)$. We begin by cutting out $2k$ disjoint small disks of circumference $\varepsilon$ (for some very small $\varepsilon$). We group the $\varepsilon$ boundary curves into pairs, and we attach $k$ thin Euclidean cylinders of boundary length $\varepsilon$ and of length $L$. We choose $L$ so that any curve that goes through a cylinder is extremely long, say length $e^h$. We choose $\varepsilon$ so that the total area of the cylinders is less than some constant, say $4\pi$. As such, the surface obtained has total area at most $4\pi(h-1) + 4\pi=4\pi h$, and genus $g=h+k$. Notice that there are $k$ short homotopically and homologically distinct curves on the surface $\sigma_1,\hdots,\sigma_k$, those given by the parallel curves to the cylinders, and the remaining curves must be of length greater or equal to the systole of the original surface, i.e., of length at least $\frac{4}{3}\log(h) = \frac{4}{3}\log(g-k)$. As such we have for any curve $\gamma$ homologically independent from $\sigma_1,\hdots,\sigma_k$: $$ \ell(\gamma)\geq \frac{4}{3}\log(g-k). $$ Because of what precedes, we know we need to consider $k =(1- \lambda) g$ for $\lambda \in ]0,\frac{1}{2}[$. Thus $$ \ell(\gamma)\geq \frac{4}{3}\log(\lambda g). $$ Consider $\lambda$ such that $\lambda< \frac{4}{9\pi C^2}$ (we should be thinking of large $C$ so this quantity is less than $1$). Thus: $$ \frac{4}{3}-2\sqrt{\pi}C \sqrt{\lambda}>0 $$ and for sufficiently large $g$, we have $$ \left(\frac{4}{3}-2\sqrt{\pi}C \sqrt{\lambda}\right)\log(g)>\frac{4}{3}\log(\lambda^{-1}). 
$$ Now, as the area of our surface is at most $4\pi h = 4\pi \lambda g$, we can conclude that $$ \ell(\gamma)\geq \frac{4}{3}\log(\lambda g)>2\sqrt{\pi}C \log(g) \sqrt{\frac{\lambda g}{g}}=C \log(g) \sqrt{\frac{4\pi\lambda g}{g}}>C\log(g) \sqrt{\frac{{\rm area}(S)}{g}}. $$ Remark that if the inequality holds for $a g$ curves then $$ a\leq 1- \frac{4}{9\pi C^2}. $$ \end{proof} Remark that we have not found a contradiction to the possibility that for any $a<1$, there exists a $C$ sufficiently large such that the desired inequality would hold for $a g$ curves. Also remark that the above examples are not hyperbolic. By adapting the hairy torus argument to ``hairy" Buser-Sarnak surfaces, the above examples can be mirrored by hyperbolic surfaces, but they aren't as easy to construct, and do not achieve better results. \forgotten \section{Pants decompositions} \label{sec:pants1} In this section, we establish bounds on the length of short pants decompositions of genus~$g$ Riemannian surfaces with $n$ marked points. Specifically, we establish bounds using two different measures of length. Classically, one measures the length of a pants decomposition by considering the length of the longest curve in the pants decomposition. This is the point of view taken by P.~Buser in his quantification of Bers' constants for instance. One can also measure the total length of a pants decomposition. We establish new bounds for both measures but our primary goal is to establish bounds on sums of lengths of pants decompositions for surfaces with marked points and hyperelliptic surfaces. \subsection{Some preliminary bounds} Recall that the {\it Bers constant} of a Riemannian surface~$M$, denoted by~${\mathcal B}(M)$, is the length of a shortest pants decomposition of~$M$, where the length of a pants decomposition~$\mathcal P$ of $M$ is defined as \begin{equation} \label{eq:pants1} {\rm length}(\mathcal P)=\max_{\gamma \in \mathcal P} \, {\rm length}(\gamma).
\end{equation} In Section \ref{sec:pants2}, we will also consider the sum of the lengths of a pants decomposition~$\mathcal P$ (or total length of~$\mathcal P$) defined as \begin{equation} \label{eq:pants2} \sum_{\gamma \in \mathcal P} {\rm length}(\gamma). \end{equation} We begin by observing that the two subjects of finding homologically non-trivial short curves, and finding short pants decompositions, are related. \begin{lemma}\label{lem:bershomology} Every pants decomposition of a genus $g$ surface contains $g$ homologically independent disjoint loops. \end{lemma} \begin{proof} Consider a pants decomposition. Clearly it contains a curve such that once removed, the genus is $g-1$. The remaining curves form a pants decomposition of the surface with the curve removed, so they contain a curve such that once removed the genus is $g-2$, and so forth until the remaining genus is $0$. The $g$ curves are clearly homologically independent. \end{proof} One could try to find a short pants decomposition by considering disjoint homologically non-trivial loops, and then completing them into a pants decomposition. And indeed, in the case where there are $g$ homologically independent disjoint loops of length at most~$C \, \log(g)$, we can derive a near-optimal bound on the Bers constant of the surface. \begin{proposition} \label{theo:ell'''} Let $M$ be a closed orientable hyperbolic surface of genus~$g$ which admits $g$ homologically independent disjoint loops of length at most $C \, \log(g)$. Then $$ {\mathcal B}(M)\leq C' \, \sqrt{g} \, \log(g) $$ for $C'=46 \, C$. In particular such surfaces satisfy $$ {\mathcal B}(M)\leq C' \, g^{\frac{1}{2}+\varepsilon} $$ for any positive $\varepsilon$ and large enough genus. \end{proposition} Recall that there exist examples of closed hyperbolic surfaces of genus $g$ with Bers' constant at least $\sqrt{6g}-2$, and that it is conjectured that it cannot be substantially improved (see \cite{bus92}).
\begin{proof} Cut $M$ open along the $g$ homologically independent disjoint loops of length at most $C \, \log(g)$. The resulting surface $M'$ is a sphere of area $4\pi (g-1)$ with $2g$ boundary components of length at most $C \, \log(g)$. By~\cite[Theorem~5]{BP09}, there exists a pants decomposition of $M'$ of length bounded by $$ 46\, \sqrt{2\pi(2g-2)}\sqrt{\left(\frac{C\, \log(g)}{2\pi}\right)^2+1} \leq 46 \, C \, \sqrt{g} \, \log(g). $$ The same result can also be derived from~\cite{BS10}, albeit with a worse multiplicative constant. \end{proof} Unfortunately, in light of the examples of families of surfaces with Cheeger constant uniformly bounded away from zero, {\it cf.}~\cite{BR99}, one cannot hope for $\log(g)$ bounds on the length of too many disjoint loops in general. So this strategy would need to be adapted to say anything new about the lengths of pants decompositions in general. We now derive some results which we will need in the next section to estimate the sum of the lengths of short pants decompositions, {\it cf.}~Theorem~\ref{thm:sumgenusg}. Our first estimate relies on the diastolic inequality for surfaces, {\it cf.}~\cite{BS10}. \begin{proposition}\label{prop:bers} Let $M$ be a closed Riemannian surface of genus $g$ with $n$ marked points. Then $M$ admits a pants decomposition with respect to the marked points of length at most $$ C \, \sqrt{g} \, \sqrt{{\rm area}(M)} $$ for an explicit universal constant~$C$. \end{proposition} \begin{proof} Let $f:M \to {\mathbb R}$ be a topological Morse function. We suppose that there is only one critical point on each critical level set and that the marked points of~$M$ are regular points lying on different level sets. Such an assumption is generically satisfied.
\\ \noindent The function~$f$ factors through the Reeb space of~$f$ defined as the quotient $$ G = {\rm Reeb}(f) = M / \sim, $$ where the equivalence $x \sim y$ holds if and only if $f(x)=f(y)$ and $x$ and $y$ lie in the same connected component of the level set $f^{-1}(f(x))$. More precisely, we have $$ f = j \circ \overline{f} $$ where $\overline{f}: M \to G$ and $j:G \to {\mathbb R}$ are the natural factor maps induced by $f$ and the equivalence relation~$\sim$. Since $f$ is a topological Morse function, its Reeb space is a finite graph and the factor map $\overline{f} : M \to G$ is a trivial $S^1$-bundle over the interior of each edge of~$G$. \\ \noindent Now, we subdivide~$G$ so that $\overline{f}$ takes the marked points of~$M$ to vertices of~$G$. From Morse theory, the disjoint simple loops formed by the preimages of the midpoints of the edges of~$G$ decompose~$M$ into pants, disks and cylinders. Therefore, there exists a pants decomposition of~$M$ with respect to the marked points of length at most $$ \sup_{t \in G} {\rm length} \, \overline{f}^{-1}(t) $$ and, in particular, of length at most \begin{equation} \label{eq:ft} \sup_{t \in {\mathbb R}} {\rm length} \, f^{-1}(t). \end{equation} \noindent On the other hand, from~\cite{BS10}, there exists a function $f:M \to {\mathbb R}$ as above with \begin{equation} \label{eq:bs} \sup_{t \in {\mathbb R}} {\rm length} \, f^{-1}(t) \leq C \, \sqrt{g} \, \sqrt{{\rm area}(M)} \end{equation} for an explicit universal constant~$C$. Strictly speaking, the main result of~\cite{BS10} is stated differently (using the notion of diastole), however the proof leads to the above estimate. \\ \noindent Combining the estimates~\eqref{eq:ft} and \eqref{eq:bs}, we derive the desired bound. \end{proof} \subsection{Sums of lengths of pants decompositions} \label{sec:pants2} One of our motivations is the study of hyperbolic surfaces with cusps, or possibly cone points. 
Our techniques allow us to treat the more general case of Riemannian surfaces where marked points provide the appropriate replacement for cusps (see the proof of Corollary \ref{coro:sumgenusg} to relate the two notions). We thus require the following extension of the notion of systole. \forget \begin{definition} \label{def:ms} The \emph{marked homotopical systole} of a compact Riemannian surface~$M$ with boundary and $n$ marked points $x_1,\ldots,x_n$ is defined as the supremum of the reals~$\ell$ such that every simple loop of~$M' = M \setminus \{x_1,\ldots,x_n\}$ of length less than~$\ell$ is homotopic to a point in $M'$, a connected component of~$\partial M$ or a small circle around some marked point. \end{definition} \forgotten \begin{definition} \label{def:ms} Let $M$ be a compact Riemannian surface with (possibly empty) boundary and $n$ marked points $x_1,\ldots,x_n$. A loop of~$M$ is \emph{admissible} if it lies in $M' = M \setminus \{x_1,\ldots,x_n\}$ and is not homotopic to a point in $M'$, a connected component of~$\partial M$ or a multiple of some small circle around some marked point~$x_i$. The \emph{marked homotopical systole} of~$M$ is the infimal length of the admissible loops of~$M$. \end{definition} We will implicitly assume that the topology of~$M$ is such that admissible loops exist, otherwise there is nothing to prove. Let us first establish the following result similar to Lemma~\ref{lem:regmet}. \begin{lemma} \label{lem:regmetm} Let $M_{0}$ be a closed Riemannian surface with $n$ marked points and marked homotopical systole greater or equal to~$\ell$. Fix $0<R \leq \frac{\ell}{4}$.
Then there exists a closed Riemannian surface $M$ conformal to~$M_0$ such that \begin{enumerate} \item the area of~$M$ is less or equal to the area of~$M_{0}$; \item $M \setminus \{x_1,\ldots,x_n\}$ and $M_0 \setminus \{x_1,\ldots,x_n\}$ have the same marked length spectrum; \label{eq:reg2} \item the area of every disk of radius $R$ in $M$ is greater or equal to~$R^{2}/2$. \end{enumerate} \end{lemma} The proof of this lemma closely follows the arguments of~\cite[5.5.C']{gro83} based on the height function. In our case, we will need a modified version of it. \begin{definition} The \emph{tension} of an admissible loop~$\gamma$ of~$M$, denoted by ${\rm tens}(\gamma)$, is defined as $$ {\rm tens}(\gamma) = {\rm length}(\gamma) - \ell_{M'}(\gamma). $$ Recall that $\ell_{M'}(\gamma)$ is the infimal length of the loops of~$M'$ freely homotopic to~$\gamma$. The \emph{modified height function} of~$M$ is defined for every $x \in M'$ as the infimal tension of the admissible loops of~$M$ based at~$x$. It is denoted by~$h'(x)$. \end{definition} The following estimate is a slight extension of~\cite[5.1.B]{gro83}. \begin{lemma} \label{lem:lowerbound} Let $M$ be a closed Riemannian surface with $n$ marked points and marked homotopical systole greater or equal to~$\ell$. Then $$ {\rm area} \, D(x,R) \geq \left( R-\frac{h'(x)}{2} \right)^2 $$ for every $x \in M$ and every $R$ such that $\frac{1}{2} h'(x) \leq R \leq \frac{\ell}{4}$. \end{lemma} \begin{proof} We argue as in~\cite[5.1.B]{gro83}. Note first that the assumption clearly implies that the marked points are at distance at least~$\ell/2$ from each other. If $R < \ell/4$, the disk $D=D(x,R)$ is contractible in~$M$ and contains at most one marked point of~$M$. Thus, every admissible loop~$\gamma$ based at~$x$ contains an arc~$\alpha$ passing through~$x$ with endpoints in~$\partial D$. 
This arc~$\alpha$ can be deformed into an arc~$\alpha'$ of~$\partial D$ in~$M'$ while keeping its endpoints fixed (indeed, $D$ contains at most one marked point). The tension of~$\gamma$ satisfies \begin{eqnarray*} {\rm tens}(\gamma) & \geq & {\rm length}(\alpha) - {\rm length}(\alpha') \\ & \geq & 2R - {\rm length}(\partial D). \end{eqnarray*} Hence, ${\rm length}(\partial D) \geq 2R - h'(x)$. By the coarea formula, integrating this inequality over the radius yields $$ {\rm area} \, D(x,R) \geq \int_{\frac{1}{2}h'(x)}^{R} \left( 2r - h'(x) \right) dr = \left( R-\frac{h'(x)}{2} \right)^2, $$ which is the desired lower bound on the area of~$D$. \end{proof} Our first preliminary result, namely Lemma~\ref{lem:regmetm}, can be derived from this estimate. \begin{proof}[Proof of Lemma~\ref{lem:regmetm}] The proof of~\cite[5.5.C'']{gro83} (see~\cite[Lemma~4.2]{RS08} for further details) applies with the modified height function and shows that there exists a closed Riemannian surface~$M$ conformal to~$M_0$ with a conformal factor less or equal to~$1$ which satisfies~\eqref{eq:reg2} and has its modified height function arbitrarily small. The result follows from Lemma~\ref{lem:lowerbound}. \end{proof} Now, we can state the following estimate. \begin{proposition}\label{prop:S} Let $S$ be a Riemannian sphere with $n$ marked points $x_1,\ldots,x_n$. Suppose that the marked homotopical systole of~$S$ is greater or equal to~$\ell$. Then the sphere~$S$ admits a pants decomposition with respect to its marked points whose sum of the lengths is bounded from above by $$ C\, \log(n) \frac{{\rm area} \, S}{\ell}, $$ where $C=2^{10}$. \end{proposition} \begin{proof} We can suppose that $n \geq 4$, otherwise there is nothing to prove. As noticed before, the assumption implies that the marked points $x_1,\ldots,x_n$ are at distance at least $\ell/2$ from each other.
By Lemma~\ref{lem:regmetm}, we can further assume that the area of every $r_{0}$-disk on~$S$ is greater or equal to~$r_{0}^{2}/2$, with $r_{0}=\ell/4$. \begin{lemma}\label{lem:path} There exists a nonselfintersecting path~$\Gamma$ on~$S$ connecting all the marked points~$x_i$ with \begin{equation} \label{eq:G} {\rm length} \, \Gamma \, \leq C' \, \frac{{\rm area} \, S}{\ell} \end{equation} where $C'=2^{7}$. \end{lemma} \begin{proof}[Proof of Lemma \ref{lem:path}] We consider the family of disjoint $r_{0}$-disks $D_1,\ldots,D_n$ centered at the marked points $x_1,\ldots,x_n$ of~$S$ with $r_0=\ell/4$. We complete this family into a maximal family of disjoint $r_{0}$-disks~$\{D_i\}_{i \in I}$, denoting by $x_{i}$ the center of~$D_{i}$ for every $i \in I$. Since the area of these disks is greater or equal to~$r_{0}^{2}/2$, we have \begin{equation} \label{eq:II} |I|\leq 2 \, \frac{{\rm area} \, S}{r_{0}^{2}}. \end{equation} \bigskip \noindent We decompose the sphere~$S$ into Voronoi cells $V_i=\{p \in S \mid d(p,x_i)\leq d(p,x_j) \text{ for all } j\neq i\}$ around the points~$x_i$, with $i \in I$. Each Voronoi cell is a polygon centered at some point~$x_{i}$. Remark that a pair of adjacent Voronoi cells ({\it i.e.}, meeting along an edge) corresponds to a pair of centers at distance at most~$4 r_0$ from each other. To see this, consider a point~$p$ on the common boundary of two adjacent cells. It lies at the same distance~$\delta$ from both cell centers and is closer to these two centers than to any other. Now $\delta \leq 2 r_0$, otherwise there would exist a disk of radius~$r_0$ around $p$ in~$S$ disjoint from all the disks~$\{D_i\}_{i \in I}$ and the system of disks would not be maximal.\\ \noindent We connect the center of every Voronoi cell~$V_{i}$ to the midpoints of its edges through length-minimizing arcs of~$V_{i}$. (The length-minimizing arc connecting a pair of points is not necessarily unique, but we choose one.) As a result, we obtain a connected embedded graph~$G$ in~$S$ with vertices~$\{x_{i}\}_{i \in I}$.
We already noticed that the lengths of the edges of~$G$ are at most~$4 r_0$. \\ \noindent The graph $\cup_i \partial V_i$ of~$S$ given by the Voronoi cell decomposition has the same number of edges~$e$ as~$G$. Since the number of vertices in the Voronoi cell decomposition is less or equal to~$\frac{2e}{3}$, the Euler characteristic formula shows that the number of edges in the Voronoi cell decomposition and in~$G$ is at most $3|I|-6$. Thus, \begin{equation} \label{eq:lG} {\rm length} \, G \leq 12 \, (|I|-2) \, r_0. \end{equation} \medskip \noindent By considering the boundary~$\partial U$ of a small enough $\rho$-tubular neighborhood~$U$ of the minimal spanning tree~$T$ of~$G$, we obtain a loop surrounding~$T$ of length less than \begin{equation} \label{eq:2T} 2 \, {\rm length} \, T + \varepsilon \end{equation} for any given $\varepsilon >0$. We then construct a nonselfintersecting path~$\Gamma$ connecting all the marked points~$x_1,\ldots,x_{n}$ with $$ {\rm length} \, \Gamma \leq 2 \, {\rm length} \, T + \varepsilon + 2n \rho. $$ It suffices to take~$\Gamma$ lying in~$\partial U$ and modify it in the neighborhood of each point~$x_i$ by connecting~$\Gamma$ to~$x_i$ through two rays arising from~$x_{i}$. \\ \noindent The result follows from \eqref{eq:lG} and~\eqref{eq:II} by taking $\varepsilon$ and $\rho$ small enough. \end{proof} \medskip \noindent Let us resume the proof of Proposition~\ref{prop:S}. Let $\Gamma \subset S$ be as in the previous lemma. Without loss of generality, we can suppose that $\Gamma$ is a piecewise geodesic path connecting the marked points $x_1,\ldots,x_n$ in this order. \\ \noindent We split the piecewise geodesic path $\Gamma=(x_1,\ldots,x_n)$ into two paths $\Gamma_1=(x_1,\ldots,x_m)$ and $\Gamma_2=(x_{m+1},\ldots,x_n)$ with $m=n/2$ if $n$ is even, and $m=(n+1)/2$ otherwise. Now, we consider a loop~$\gamma$ surrounding~$\Gamma$ in~$S$, that is, the boundary of a small tubular neighborhood~$U$ of~$\Gamma$ in~$S$. 
We also consider two loops $\gamma_1$ and~$\gamma_2$ surrounding $\Gamma_1$ and $\Gamma_2$ in~$U$. Then we repeat this process with $\Gamma_1$ and $\Gamma_2$, and so forth, {\it cf.}~Figure~\ref{fig1}. (When a path is reduced to a single marked point, we cannot split it any further. So we take the same loop surrounding this marked point for $\gamma$, $\gamma_1$ and~$\gamma_2$.) We stop the process at the step $$ \kappa= [\log_2 n] +1 $$ because after this step all the new loops surround a single marked point. \\ \begin{figure}[h] \begin{center} \ovalbox{\ovalbox{\ovalbox{$\bullet \quad \bullet$} \ovalbox{$\bullet \quad \bullet$}} \ovalbox{\ovalbox{$\bullet \quad \bullet$} $\bullet$}} \end{center} \caption{} \label{fig1} \end{figure} \noindent It is clear from our construction that some subfamily of the loops that arise gives a (nongeodesic) pants decomposition of the sphere~$S$ with respect to its marked points. \\ \noindent Since each segment of~$\Gamma$ is surrounded by at most~$\kappa$ loops, the sum of the lengths of the pants decomposition loops is bounded from above by \begin{equation} \label{eq:kappa} 2 \kappa \, {\rm length} \, \Gamma + \varepsilon, \end{equation} where $\varepsilon$ depends on the width of~$U$ and goes to zero with it. The desired upper bound follows from~\eqref{eq:G} by letting $\varepsilon$ go to zero. \end{proof} We now state our main theorem in this direction: for any fixed genus $g$, the total length of a pants decomposition of a surface of area $\sim g+n$ can be controlled by a quantity growing like $n \log(n)$, where $n$ is the number of marked points. \begin{theorem}\label{thm:sumgenusg} Let $M$ be a closed Riemannian surface of genus $g$ with $n \geq 1$ marked points whose area is equal to $4\pi(g+\frac{n}{2}-1)$.
Then $M$ admits a pants decomposition with respect to the marked points whose sum of the lengths is bounded from above by $$ C_g\, n \log (n+1), $$ where $C_g$ is an explicit genus dependent constant. \end{theorem} Before proving the theorem, we note the following corollary. \begin{corollary} \label{coro:sumgenusg} Let $M$ be a noncompact hyperbolic surface of genus $g$ with $n$ cusps. Then $M$ admits a pants decomposition with the sum of its lengths bounded above by $$ C_g\, n \log (n+1), $$ where $C_g$ is an explicit genus dependent constant. \end{corollary} \begin{proof}[Proof of Corollary~\ref{coro:sumgenusg}] Let us cut off the cusps of the hyperbolic surface~$M$ and replace the tips with small round hemispheres to obtain a closed Riemannian surface~$N$ with $n$ marked points corresponding to the summits of the hemispheres. The area of~$N$ can be made arbitrarily close to the original area by choosing the cut off tips of area arbitrarily small. The hemispheres will then be of arbitrarily small area as well. To avoid burdening the argument by epsilontics, we will assume that $M$ and $N$ have the same area. We remark that short simple closed geodesics (on either surface) do not approach the tips. Indeed, every simple loop passing through a small enough tip can be deformed into a shorter loop. Therefore, the geodesic behavior that we are concerned with is identical on both surfaces. We conclude by applying Theorem~\ref{thm:sumgenusg} to~$N$, which yields the desired pants decomposition on~$M$. \end{proof} \begin{remark} The proof of Corollary~\ref{coro:sumgenusg} applies to noncompact complete Riemannian surfaces of genus~$g$ with $n$ ends whose area is normalized at $4\pi(g+\frac{n}{2}-1)$. \end{remark} Now, we can proceed to the proof of Theorem~\ref{thm:sumgenusg}, which relies on Propositions~\ref{prop:bers} and~\ref{prop:S}. 
\begin{proof}[Proof of Theorem~\ref{thm:sumgenusg}] We will prove a more general result. Namely, Theorem~\ref{thm:sumgenusg} holds true for compact Riemannian surfaces~$M$ of signature $(g;p,q)$ ({\it i.e.}, of genus $g$ with $p$ marked points $x_1,\ldots,x_p$ and $q$ boundary components) with boundary components of length at most~$\ell$, where $\ell:=1$. In this case, $n=p+q$ represents the total number of marked points and boundary components of~$M$. \\ \noindent It is enough to show this result when the marked homotopical systole of $M$ is greater or equal to~$\ell$, {\it cf.}~Definition~\ref{def:ms}, otherwise we split the surface along a simple loop of $M \setminus \{ x_1,\ldots,x_p \}$ of length less than~$\ell$ nonhomotopic to a point, a boundary component or a small circle around a marked point. Then we deal with the resulting surfaces.
Indeed, by splitting the surface~$M$ of signature $(g;p,q)$, we obtain one of the following: \begin{enumerate} \item a surface of signature $(g-1;p,q+2)$; \item two surfaces of signature $(g_i;p_i,q_i)$ with $0<g_i < g$ and $p_i + q_i \leq p + q +2$ for $i=1,2$; \item two surfaces of signature $(g_i;p_i,q_i)$ with $g_1=0$, $p_1 + q_1 \leq p + q +1$, $g_2=g$ and $p_2 + q_2 < p + q$; \item or, in case $g=0$, two surfaces of signature $(0;p_i,q_i)$ with $p_i + q_i < p + q$ for $i=1,2$. \end{enumerate} In all cases, we can conclude by induction on both $g$ and $n=p+q$. \\ \noindent First, we attach a cap ({\it i.e.}, a round hemisphere) along each boundary component $c_1,\ldots,c_q$ of~$M$ to obtain a closed Riemannian surface $N$ with $n=p+q$ marked points corresponding to the $p$ marked points of~$M$ and the $q$ summits of the caps of~$N$. As with~$M$, the marked homotopical systole of $N$ is greater or equal to $\ell$. By construction, we have \begin{eqnarray*} {\rm area} \, N & \leq & {\rm area} \, M + \sum_{i=1}^q \frac{1}{2\pi} {\rm length}(c_i)^2 \\ & \leq & 4 \pi \left( g+\frac{n}{2}-1 \right) + \frac{q}{2 \pi} \, \ell^2 \end{eqnarray*} where $n=p+q$. \\ \noindent Now, we consider $g$ homologically independent disjoint loops $\gamma_1,\hdots,\gamma_g$ in a pants decomposition of~$N$ in which the marked points are ignored, with $$ {\rm length} \,\gamma_{k} \leq A_g \, \sqrt{n}, $$ for every $k=1,\ldots,g$, where $A_g$ is some constant depending only on~$g$, {\it cf.}~Proposition \ref{prop:bers} and Lemma~\ref{lem:bershomology}. Sliding these curves through a length-nonincreasing deformation if necessary, we can assume that they do not cross the caps of~$N$. We can also assume that they do not pass through the marked points of~$N$ by slightly perturbing them and changing the value of~$A_g$. \\ \noindent We cut $N$ along $\gamma_1,\hdots,\gamma_g$ and attach caps along the $2g$ boundary components thus obtained.
As a result, we obtain a Riemannian sphere~$S$ with $2g+n$ marked points corresponding to the $n$ marked points of $N$ and the $2g$ summits of the caps we glued on~$N$. By construction, the marked homotopical systole of~$S$ is greater or equal to~$\ell$ and $$ {\rm area} \, S \leq {\rm area} \, N + 2g \, \frac{A_{g}^{2} \, n}{2 \pi} \leq B_{g} \, n $$ for some constant~$B_{g}$. \\ \noindent By applying Proposition~\ref{prop:S} to~$S$, we obtain a pants decomposition~$\mathcal{P}_{S}$ of~$S$ with respect to its marked points whose total length does not exceed $$ \frac{C}{\ell} \, B_{g} \, n \, \log(2g+n). $$ \noindent As previously, we can push a curve away from the caps of $S$ without increasing its length. Therefore, we can assume that the pants decomposition loops of~$\mathcal{P}_{S}$ stay away from the caps. This shows that the loops of~$\mathcal{P}_{S}$ also lie in~$M$, and form with $\gamma_1,\hdots,\gamma_g$ and the connected components of $\partial M$ a pants decomposition of~$M$. By construction, the total length of this pants decomposition of~$M$ is bounded from above by $$ \frac{C}{\ell} \, B_{g} \, n \, \log(2g+n) + g \, A_g \, \sqrt{n} + q \, \ell \leq C_{g} \, n \, \log (n+1) $$ for some constant~$C_{g}$. \end{proof} We conclude this section with a consequence of the above result for hyperelliptic Riemannian surfaces, i.e., surfaces with an orientation-preserving isometric involution where the quotient surface is a sphere. \begin{theorem} Every closed hyperelliptic Riemannian surface of genus $g$ and area $4\pi (g - 1)$ admits a pants decomposition whose sum of the lengths is bounded above by $$ C \, g \log (g) $$ for a universal constant $C$. \end{theorem} \begin{proof} We begin by taking the quotient of the hyperelliptic surface $M$ by its hyperelliptic involution $\sigma$ to obtain a sphere $S$ with $2g+2$ cone points of angle $\pi$. We denote these points $x_1,\hdots,x_{2g+2}$.
\\ \noindent Using Theorem \ref{thm:sumgenusg}, there exists a pants decomposition of $S$ with marked points $x_1,\hdots,x_{2g+2}$ of total length which does not exceed $C_0 (2g+2) \log(2g+2)$ for some universal constant $C_0$. We proceed to lift this pants decomposition via $\sigma$ to the surface $M$, and we obtain a multicurve $\mu \subset M$ of total length which does not exceed $2 C_0 (2g+2) \log(2g+2)$. This multicurve is not a pants decomposition, but it is not too difficult to see that by cutting along it, one obtains a collection of cylinders, pairs of pants or four-holed spheres. This is explicitly shown in \cite[Lemma 6]{BP09}. \\ \noindent To complete the multicurve into a full pants decomposition, we must add curves which lie in the four-holed spheres. Consider the set of four-holed spheres $\{F_k\}_{k=1}^{n_0}$ which arise as connected components of $M\setminus \mu$. Note that there are at most $g-1$ of them. For each four-holed sphere $F_k$, we consider an interior curve $\gamma_k$ that cuts it into two pairs of pants. We claim that these curves can be chosen such that the sum of their lengths is bounded above by $C_2 \, g \log(g)$ for a universal constant $C_2$. \\ \noindent To show this, consider for each $k$, the lengths $\{\ell_{k,i}\}_{i=1}^4$ of the four boundary curves of $F_k$. To each~$F_k$, we glue four round hemispheres of boundary lengths $\{\ell_{k,i}\}_{i=1}^4$ and we mark the four summits of the hemispheres to obtain a marked sphere $\tilde{F}_k$. By Proposition~\ref{prop:bers}, the four-holed sphere~$\tilde{F}_k$ admits a pants decomposition (which here is reduced to a single curve) $\gamma_k$ of length at most $C_1 \sqrt{{\rm area}(\tilde{F}_k)}$. Sliding these curves away from the marked points of the hemispheres without increasing their lengths, we can assume that each curve~$\gamma_k$ lies in~$F_k$. 
If we denote $A_k={\rm area}(F_k)$, we have that \begin{eqnarray*} \sum_{k=1}^{n_0} {\rm length}(\gamma_k) &\leq& C_1 \sum_{k=1}^{n_0} \sqrt{{\rm area}(\tilde{F}_k)}\\ &\leq& C_1 \sum_{k=1}^{n_0} \sqrt{A_k + \sum_{i=1}^4 \frac{\ell_{k,i}^2}{2\pi} }\\ &\leq& C_1 \sum_{k=1}^{n_0} \left(\sqrt{A_k}+ \sqrt{\sum_{i=1}^4 \frac{\ell_{k,i}^2}{2\pi} }\right) \end{eqnarray*} Now, $\sqrt{x} \leq \max(1,x) \leq 1+x$ for every $x \geq 0$. Hence $\sqrt{A_k} \leq 1 + A_k$. Denoting $L_k = \max_{i} \ell_{k,i}$, we obtain \begin{eqnarray*} \sum_{k=1}^{n_0} {\rm length}(\gamma_k) &\leq& C_1 \left( n_0 + \sum_{k=1}^{n_0} \left(A_k + \sqrt{ \frac{2}{\pi} L^2_k} \right) \right)\\ & \leq & C_1 \, n_0 + C_1 \sum_{k=1}^{n_0} A_k + C_1 \sum_{k=1}^{n_0} \sqrt{\frac{2}{\pi}} L_k. \end{eqnarray*} Note that $\sum_{k=1}^{n_0} L_k \leq 4 \sum_{k,i} \ell_{k,i} \leq 16 C_0 \, (2g+2) \log(2g+2)$ because each curve of $\mu$ can be a boundary of at most two four-holed spheres. Now, since $\sum_{k=1}^{n_0} A_k \leq {\rm area}(M) = 4\pi (g-1)$ and $n_0 \leq g-1$, we conclude that the sum of the lengths of the $\gamma_k$ is bounded above by $C_2 \, g \log(g)$. Hence the claim. \\ \noindent In conclusion, the multicurve $\mu \cup \{\gamma_k\}_{k=1}^{n_0}$ contains a pants decomposition of total length not exceeding $$2 C_0 \, (2g+2) \log(2g+2) + C_2 \, g \log(g) < C \, g \log(g)$$ for some universal constant $C$. \end{proof} \section{Systolic area and first Betti number of groups} In this section, we use the approach developed in Section~\ref{sec:ell} to evaluate the systolic area of a finitely presentable group $G$ in terms of its first Betti number.
\\ \begin{definition} \label{def:SG} The \emph{systolic area} of~$G$ is defined as \begin{equation*} {\mathfrak S}(G)=\inf_X \frac{{\rm area}(X)}{{\rm sys}_\pi(X)^2}, \end{equation*} where the infimum is taken over all piecewise flat $2$-complexes $X$ with fundamental group isomorphic to $G$ and ${\rm sys}_\pi$ denotes the homotopical systole, {\it cf.}~Definition~\ref{def:sys}. One can also take the infimum over piecewise Riemannian $2$-complexes since Riemannian metrics can be approximated as closely as desired by piecewise flat metrics. Recall also that the first Betti number of~$G$ is defined as the dimension of its first real homology group $$ H_1(G,{\mathbb R}):=H_1(K(G,1),{\mathbb R}), $$ where $K(G,1)$ denotes the Eilenberg-MacLane space associated to $G$. \end{definition} \begin{theorem} Let $G$ be a finitely presentable nontrivial group with no free factor isomorphic to~${\mathbb Z}$. Then $$ {\mathfrak S}(G)\geq C \, \frac{b_1(G)+1}{(\log(b_1(G)+2))^2} $$ for some positive universal constant $C$. \end{theorem} \begin{remark} Consider the free product $G_n=F_n * G$, where $F_n$ is the free group with $n$ generators and $G$ is a finitely presentable nontrivial group. The first Betti number of~$G_n$ goes to infinity with~$n$, while its systolic area remains bounded by the systolic area of~$G$. This example shows that a restriction on the free factors is needed in the previous theorem. \end{remark} \begin{proof} Let $X$ be a piecewise flat $2$-complex with $\pi_1(X)=G$. We can apply the metric regularization process of~\cite[Lemma~4.2]{RS08} to~$X$ and replace the metric of~$X$ with a piecewise flat metric with a better systolic area. Thus, we can now assume from \cite[Theorem~3.5]{RS08} that the area of every disk~$D$ of radius~$\frac{1}{8} {\rm sys}_\pi(X)$ in~$X$ satisfies $$ {\rm area}(D) \geq \frac{1}{128} \, {\rm sys}_\pi(X)^2. $$ We can also normalize the area of $X$ to be equal to $b_1(G)$.
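The area normalization is harmless because the systolic ratio in the definition above is invariant under rescaling of the metric. Writing $tX$ for $X$ with its metric rescaled so that all distances are multiplied by $t>0$ (hence areas by $t^{2}$), one has:

```latex
% Lengths scale by t, areas by t^2, so the systolic ratio is unchanged:
\frac{{\rm area}(tX)}{{\rm sys}_\pi(tX)^{2}}
  \;=\; \frac{t^{2}\,{\rm area}(X)}{\bigl(t\,{\rm sys}_\pi(X)\bigr)^{2}}
  \;=\; \frac{{\rm area}(X)}{{\rm sys}_\pi(X)^{2}}.
```

Thus one may rescale so that ${\rm area}(X)=b_1(G)$ without affecting the infimum defining ${\mathfrak S}(G)$.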
If the homotopical systole of~$X$ is bounded by $1$, there is nothing to prove. Thus, we can assume that it is greater than $\ell:=1$. Now, set $r_0=\frac{1}{8} {\rm sys}_\pi(X)$.\\ 1) Since each disk of radius~$r_{0}$ has area at least~$r_{0}^{2}/2$, the maximal system of disjoint \mbox{$r_{0}$-disks} $\{D_{i}\}_{i \in I}$ admits at most $\frac{2 \, b_1(G)}{r_{0}^{2}}$ disks, that is, $$ |I| \leq \frac{2 \, b_1(G)}{r_{0}^{2}}. $$ 2) As in the proof of Theorem~\ref{theo:ell} (Step~2), consider the $1$-skeleton~$\Gamma$ of the nerve of the covering of~$X$ by the disks $2D_i+\varepsilon$ with $\varepsilon$ positive small enough. The graph~$\Gamma$ is endowed with the metric for which each edge has length~$\ell/2$. The map $\varphi : \Gamma \to X$, which takes each edge with endpoints $v_{i}$ and~$v_{j}$ to a segment connecting $x_{i}$ and~$x_{j}$, induces an epimorphism $\pi_1(\Gamma) \to \pi_1(X) \simeq G$, {\it cf.}~Lemma~\ref{lem:epi} (whose proof works with complexes too). \\ 3) Consider a connected subgraph $\Gamma_1$ of $\Gamma$ with a minimal number of edges such that the restriction of $\varphi$ to $\Gamma_1$ still induces an epimorphism in real homology. By Lemma~\ref{lem:isom} (whose proof works with complexes too), the homomorphism induced in real homology by the restriction of~$\varphi$ to~$\Gamma_1$ is an isomorphism (observe that $H_1(G,{\mathbb R}):=H_1(K(G,1),{\mathbb R})\simeq H_1(X,{\mathbb R})$). Arguing as in the proof of Theorem~\ref{theo:ell}, we can show that the length of~$\Gamma_1$ is at most $C' \, b_1(G)$ and that $$ {\rm sys}_\pi(\Gamma_1) \leq C'' \, \log(b_1(G)+1), $$ where $C'=C'(\ell)$ and $C''=C''(\ell)$ are universal constants (recall that $\ell$ is fixed equal to~$1$). Note that a homotopical systolic loop of~$\Gamma_1$ induces a nontrivial class in real homology. Since $\varphi$ is distance nonincreasing and its restriction to~$\Gamma_1$ induces an isomorphism in real homology, the same upper bound holds for~${\rm sys}_{\pi}(X)$.
Hence the result. \end{proof} The order of the bound in the previous theorem is asymptotically optimal, as shown by the following family of examples. \begin{example}\label{ex:GG} {\it Even case.} Let $g\geq 2$ be an integer and $G_{2g}$ be the fundamental group of a closed orientable surface of genus $g$. It is a finitely presentable group with no free factor isomorphic to~${\mathbb Z}$ and with first Betti number $2g$. The Buser-Sarnak hyperbolic surfaces~\cite{BS94} show that $$ {\mathfrak S}(G_{b}) \leq c_0 \, \frac{b}{\log(b)^2}, $$ where $b=2g$, for some positive universal constant~$c_0$. \\ {\it Odd case.} Now let $G_{2g+1}$ be the fundamental group of the connected sum of a closed orientable surface of genus $g$ and a Klein bottle. It is a finitely presentable group with no free factor isomorphic to~${\mathbb Z}$ and with first Betti number $2g+1$. Consider on the one hand a Buser-Sarnak hyperbolic surface $M$ of genus $g$ with homotopical systole greater than~$c \, \log(g)$ for some positive constant~$c$. Consider on the other hand a flat rectangle $[0,\frac{L}{2}] \times [0,L]$ with $L = \frac{c}{2} \log(g)$, and glue the opposite sides of length~$L$ of this rectangle to obtain a flat Moebius band~$\mathcal{M}$ with boundary length~$L$. We can find two disjoint minimizing arcs on~$M$ of length~$\frac{L}{2}=\frac{c}{4} \log(g)$. Now, we cut the surface~$M$ open along these arcs and attach two Moebius bands~$\mathcal{M}$ along the boundary components of the surface. We obtain a closed nonorientable surface~$M_{2g+1}$ with fundamental group isomorphic to~$G_{2g+1}$. By construction, $$ {\rm area}(M_{2g+1}) = 4 \pi (g-1) + \frac{c^2}{4} \log(g)^2 $$ and $$ {\rm sys}_{\pi}(M_{2g+1}) = \frac{L}{2} = \frac{c}{4} \log(g). $$ Thus, $$ {\mathfrak S}(G_{b}) \leq c_0 \, \frac{b}{\log(b)^2}, $$ where $b=2g+1$, for some positive universal constant $c_0$. \end{example} \bibliographystyle{plain}
https://arxiv.org/abs/1011.2962
Short loop decompositions of surfaces and the geometry of Jacobians
Given a Riemannian surface, we consider a naturally embedded graph which captures part of the topology and geometry of the surface. By studying this graph, we obtain results in three different directions. First, we find bounds on the lengths of homologically independent curves on closed Riemannian surfaces. As a consequence, we show that for any $\lambda \in (0,1)$ there exists a constant $C_\lambda$ such that every closed Riemannian surface of genus $g$ whose area is normalized at $4\pi(g-1)$ has at least $[\lambda g]$ homologically independent loops of length at most $C_\lambda \log(g)$. This result extends Gromov's asymptotic $\log(g)$ bound on the homological systole of genus $g$ surfaces. We construct hyperbolic surfaces showing that our general result is sharp. We also extend the upper bound obtained by P. Buser and P. Sarnak on the minimal norm of nonzero period lattice vectors of Riemann surfaces in their geometric approach of the Schottky problem to almost $g$ homologically independent vectors. Then, we consider the lengths of pants decompositions on complete Riemannian surfaces in connexion with Bers' constant and its generalizations. In particular, we show that a complete noncompact Riemannian surface of genus $g$ with $n$ ends and area normalized to $4\pi (g+\frac{n}{2}-1)$ admits a pants decomposition whose total length (sum of the lengths) does not exceed $C_g \, n \log (n+1)$ for some constant $C_g$ depending only on the genus. Finally, we obtain a lower bound on the systolic area of finitely presentable nontrivial groups with no free factor isomorphic to ${\mathbb Z}$ in terms of its first Betti number. The asymptotic behavior of this lower bound is optimal.
https://arxiv.org/abs/1511.08916
Numerical Ranges of 4-by-4 Nilpotent Matrices: Flat Portions on the Boundary
In their 2008 paper Gau and Wu conjectured that the numerical range of a 4-by-4 nilpotent matrix has at most two flat portions on its boundary. We prove this conjecture, establishing along the way some additional facts of independent interest. In particular, a full description of the case in which these two portions indeed materialize and are parallel to each other is included.
\section{Introduction} We consider the space $\mathbb{C}^n$ endowed with the standard scalar product $\scal{.,.}$ and the norm $\norm{.}$ associated with it. Elements $x\in\mathbb{C}^n$ are $n$-columns; however, to simplify the notation we will write them as $(x_1,\ldots,x_n)$. For an $n$-by-$n$ matrix $A$, its {\em numerical range}, also known as the {\em field of values}, is defined as \[ F(A)=\{ \scal{Ax,x}\colon x\in\mathbb{C}^n, \ \norm{x}=1\}. \] The classical Toeplitz-Hausdorff theorem claims that the set $F(A)$ is always convex; this and other well-known properties of the numerical range are discussed in detail, e.g., in monographs \cite{GusRa,HJ1}. As was observed by Kippenhahn (\cite{Ki}, see also the English translation \cite{Ki08}), $F(A)$ can be described in terms of the homogeneous polynomial \eq{pa} p_A(u,v,w)=\det(uH+vK+wI), \en where \[ H=\frac{A+A^*}{2}:=\operatorname{Re} A, \quad K=\frac{A-A^*}{2i}:=\operatorname{Im} A.\] Namely, $F(A)$ is the convex hull of the curve $C(A)$ dual (in projective coordinates) to $p_A(u,v,w)=0$. So, it is not surprising that, starting with $n=3$, the boundary $\partial F(A)$ of $F(A)$ may contain line segments (sometimes also called {\em flat portions}), even when the polynomial $p_A$ is irreducible. For a unitarily irreducible $3$-by-$3$ matrix there is at most one such flat portion, as was first observed in the same paper \cite{Ki}, with constructive tests for its presence provided in \cite{KRS,RS05}. The phenomenon of flat portions in higher dimensions was further studied in \cite{BS041}. For convenience of reference, we restate here Theorem~37 from \cite{BS041}. \begin{theorem}\label{4x4} {\em [Brown-Spitkovsky]} Any $4$-by-$4$ matrix has at most $4$ flat portions on the boundary of its numerical range ($3$, if it is unitarily irreducible). \end{theorem} Of course, the upper bounds may be lower if an additional structure is imposed on $A$. In this paper, we will tackle the case of nilpotent matrices. 
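As a concrete illustration of these definitions, the boundary of $F(A)$ can be sampled numerically: for each angle $\theta$, the largest eigenvalue of $\operatorname{Re}(e^{-i\theta}A)$ equals the value of the support function of $F(A)$ in the direction $e^{i\theta}$, since $\operatorname{Re}(e^{-i\theta}\scal{Ax,x})=\scal{\operatorname{Re}(e^{-i\theta}A)x,x}$. The following sketch (a NumPy illustration of ours, not part of the paper; the helper name `support` is arbitrary) verifies the classical fact that for the $2$-by-$2$ nilpotent Jordan block the numerical range is the closed disk of radius $1/2$ centered at the origin.

```python
import numpy as np

def support(A, theta):
    # Support function of F(A) in direction e^{i*theta}:
    # the largest eigenvalue of Re(e^{-i*theta} A).
    M = np.exp(-1j * theta) * A
    H = (M + M.conj().T) / 2           # Re(e^{-i*theta} A), Hermitian
    return np.linalg.eigvalsh(H)[-1]   # eigvalsh sorts eigenvalues ascending

# 2-by-2 nilpotent Jordan block: F(A) is the closed disk of radius 1/2
A = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)
vals = [support(A, t) for t in np.linspace(0.0, 2.0 * np.pi, 64)]
assert max(vals) - min(vals) < 1e-12   # constant support function: F(A) is a disk
assert abs(vals[0] - 0.5) < 1e-12      # centered at the origin, of radius 1/2
```

In the same convention, a flat portion of $\partial F(A)$ at angle $\theta$ corresponds to a multiple extreme eigenvalue of $\operatorname{Re}(e^{-i\theta}A)$ whose eigenspace compresses $A$ to a non-scalar matrix; this mirrors the criterion recalled in Section~\ref{wk}, stated there for the minimal eigenvalue.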
For reducible $4$-by-$4$ nilpotent matrices it is easy to see that the maximum possible number of flat portions is one; for the sake of completeness, this result is stated with a proof in Section~\ref{wk} (Proposition~\ref{p:red}). Unitarily irreducible $4$-by-$4$ nilpotent matrices were considered by Gau and Wu in \cite{GauWu081}, where in particular examples of such matrices $A$ with two flat portions on $\partial F(A)$ were given and it was also conjectured that three flat portions do not materialize. This conjecture was supported there by the following theorem, which is a special case of their result for $n$-by-$n$ matrices \cite[Theorem~3.4]{GauWu081}. \begin{theorem}\label{GauWuCircular} {\em [Gau-Wu]} If $A$ is a $4$-by-$4$ nilpotent matrix which has a $3$-by-$3$ submatrix $B$ with $F(B)$ a circular disk centered at the origin, then there are at most two flat portions on the boundary of $F(A)$. \end{theorem} In this paper we prove the Gau-Wu conjecture. This is done in Section~\ref{s:proof}. As a natural preliminary step, necessary and sufficient conditions for a $4$-by-$4$ nilpotent matrix to have at least one flat portion on the boundary of its numerical range are derived in Section~\ref{wk}. A special family of nilpotent matrices that is important for the proof of the main theorem is analyzed in Section~\ref{s:alt}. Section~\ref{s:two} contains necessary and sufficient conditions for a nilpotent matrix to have two parallel flat portions on the boundary of its numerical range. In addition, we show there that for a nilpotent $4$-by-$4$ matrix $A$ with two non-parallel flat portions on the boundary of $F(A)$ that are on lines equidistant from the origin, these are the only flat portions. The latter result is also used in Section~\ref{s:proof}, where in Theorem~\ref{twoflat} it is shown that if $A$ is a $4$-by-$4$ nilpotent matrix, then $ \partial F(A)$ contains at most 2 flat portions.
The proof follows from an analysis of the locations of the singularities of the boundary generating curve \eqref{pa}. In the final Section~\ref{s:5x5} we use Theorem~\ref{twoflat} to tackle the case of $5$-by-$5$ unitarily reducible matrices. \section{Matrices with a flat portion on the boundary of their numerical range}\label{wk} We start with an easy case of unitarily reducible matrices. \begin{prop}\label{p:red} Let $A$ be a $4$-by-$4$ unitarily reducible nilpotent matrix. Then its numerical range $F(A)$ has at most one flat portion on the boundary. \end{prop} \begin{proof} Let $A$ be unitarily similar to a direct sum $B_1\oplus\cdots\oplus B_k$ of unitarily irreducible blocks, with $k>1$. The blocks $B_j$ are of course also nilpotent, and the following cases are possible. {\sl Case 1.} $k=2$. If $B_1$ is a $3$-by-$3$ unitarily irreducible nilpotent matrix and $B_2=[0]$, then $F(A)=F(B_1)$. According to \cite[Theorem~4.1]{KRS}, $F(B_1)$ either has no flat portions on the boundary or exactly one such portion. Thus, so does $F(A)$. If there are two nilpotent $2$-by-$2$ blocks, then the numerical range of each block is a circular disk centered at the origin, and $F(A)$ is the largest of these disks and hence has no flat portion on its boundary. {\sl Case 2.} $k=3$, that is, $B_1$ is a $2$-by-$2$ unitarily irreducible nilpotent matrix, while $B_2=B_3=[0]$. Then $F(A)=F(B_1)$ is again a circular disk centered at the origin, and there are no flat portions on its boundary. {\sl Case 3.} $k=4$, implying that $A=0$, and $F(A)=\{0\}$. Hence there are no flat portions. \end{proof} If $A$ is not supposed to be unitarily reducible, the situation becomes more complicated. Let us establish the criterion for at least one flat portion to exist on $\partial F(A)$. To this end, some background terminology and information is useful. First, recall the notion of an {\em exceptional supporting line} of $F(A)$ which for an arbitrary matrix $A$ was introduced in \cite{LLS13}.
Namely, let $\ell_\theta$ be the supporting line of $F(A)$ having slope $-\cot\theta$ and such that $e^{-i\theta}F(A)$ lies to the right of the vertical line $e^{-i\theta}\ell_\theta$. Then this supporting line is exceptional (and, respectively, $\theta$ is an {\em exceptional angle}) if at least one $z\in\ell_\theta\cap F(A)$ is {\em multiply generated}, that is, there exist at least two linearly independent unit vectors $x_j$ for which $\scal{Ax_j,x_j}=z$. For a given $A$, the angle $\theta$ is exceptional if and only if the hermitian matrix $\operatorname{Re} (e^{-i\theta}A)$ has a multiple minimal eigenvalue \cite[Theorem 2.1]{LLS13}; denote by $\mathcal L$ the respective eigenspace. The above-mentioned value $z$ is unique if and only if the compression of $\operatorname{Im} (e^{-i\theta}A)$ (equivalently: $A$) onto $\mathcal L$ is a scalar multiple of the identity; $z$ is then called a {\em multiply generated round boundary point} of $F(A)$. On the other hand, all points in the relative interior of a flat portion on the boundary of $F(A)$ are multiply generated. So, flat portions occur only on exceptional supporting lines, and for them to materialize it is necessary and sufficient that the compression $A|{\mathcal L}$ of $A$ onto $\mathcal L$ is {\em not} a scalar multiple of the identity. In our setting we will have to deal with 2-dimensional $\mathcal L$. The following test is useful in this regard. \begin{prop}\label{quadformtest} Let $A$ be such that for some $\theta \in [0, 2 \pi)$ there exist two linearly independent vectors $y_1,y_2$ corresponding to the same eigenvalue $\lambda$ of $\operatorname{Re} (e^{-i\theta}A)$, and let ${\mathcal L}=\operatorname{Span}\{y_1,y_2\}$. Then the compression of $A$ onto $\mathcal L$ is a scalar multiple of the identity if and only if \eq{eqn1evec} \scal{Ay_1,y_1}\norm{y_2}^2=\scal{Ay_2,y_2}\norm{y_1}^2 \en and \eq{eqn2evec} \scal{Ay_2,y_1}\norm{y_1}^2=\scal{y_2,y_1}\scal{Ay_1,y_1}.
\en \end{prop} \begin{proof} Without loss of generality we may normalize the vectors $y_1,y_2$, dividing each of them by its length, and thus rewrite \eqref{eqn1evec}, \eqref{eqn2evec} in a slightly simpler form \eq{1evec} \scal{Ay_1,y_1}=\scal{Ay_2,y_2}, \en \eq{2evec} \scal{Ay_2,y_1}=\scal{y_2,y_1}\scal{Ay_1,y_1}. \en Observe also that \eqref{2evec} means exactly that \eq{3evec} \scal{A\widetilde{y}_2,y_1}=0,\en where $\widetilde{y}_2$ is a unit vector in $\mathcal L$ orthogonal to $y_1$. So, we just need to show that $A|{\mathcal L}$ is a scalar multiple of the identity if and only if \eqref{1evec} and \eqref{3evec} hold. The {\sl necessity} of \eqref{1evec}, \eqref{3evec} is trivial, and even holds for an arbitrary subspace $\mathcal L$, not necessarily consisting of eigenvectors of $\operatorname{Re} (e^{-i\theta}A)$. As for their {\sl sufficiency}, note that $\operatorname{Re} (e^{-i\theta}A)|{\mathcal L}$, being a scalar multiple of the identity, commutes with $\operatorname{Im} (e^{-i\theta}A)|{\mathcal L}$. Thus, $A|{\mathcal L}$ is normal. As such, condition \eqref{3evec} implies that the matrix of $A|{\mathcal L}$ with respect to the orthonormal basis $\{y_1,\widetilde{y}_2\}$ is diagonal. Consequently, $y_1$ is an eigenvector of $A|{\mathcal L}$ corresponding to its eigenvalue $\mu=\scal{Ay_1,y_1}$, and the latter is an endpoint of $F(A|{\mathcal L})$. On the other hand, \eqref{1evec} shows that this value is attained at two linearly independent unit vectors, $y_1$ and $y_2$. This is only possible if $A|{\mathcal L}=\mu I$. \end{proof} We now return to the nilpotent matrix setting. Let us first establish the criterion for an exceptional supporting line to exist, independently of whether or not it contains a proper flat portion. \begin{theorem}\label{th:except}Let $A$ be a $4$-by-$4$ nilpotent matrix.
Then $F(A)$ has an exceptional supporting line if and only if $A$ is unitarily similar to \eq{Ae} \alpha\begin{bmatrix}0 & a_1 & a_2 & a_3 \\ & 0 & a_4 & a_5 \\ & & 0 & a_6 \\ & & & 0 \end{bmatrix}, \en where \eq{aj} \alpha\in\mathbb{C}, \quad \abs{a_j}\leq 1 \text{ for } j=1,2,3,\en \begin{align}\label{condex1} \abs{a_4-\overline{a_1}a_2}^2 & = (1-\abs{a_1}^2)(1-\abs{a_2}^2),\\ \label{condex2} \abs{a_5-\overline{a_1}a_3}^2 & = (1-\abs{a_1}^2)(1-\abs{a_3}^2),\\ \abs{a_6-\overline{a_2}a_3}^2 & = (1-\abs{a_2}^2)(1-\abs{a_3}^2), \label{condex3} \end{align} and \eq{arg} \arg(a_6-\overline{a_2}a_3)=\arg(a_5-\overline{a_1}a_3)-\arg(a_4-\overline{a_1}a_2) \mod 2\pi. \en \end{theorem} Note that all three arguments in \eqref{arg} are defined only if the inequalities in \eqref{aj} are strict. If this is not the case, we agree by convention that condition \eqref{arg} is vacuous. \begin{proof}The result obviously holds for $A=0$. Indeed, then $A$ is in the form \eqref{Ae}, and every supporting line of $F(A)=\{0\}$ is exceptional. So, in what follows we will suppose that $A\neq 0$. {\sl Necessity.} Suppose $A\neq 0$ is nilpotent, and (at least) one of the supporting lines of $F(A)$ is exceptional. Multiplying $A$ by a unimodular scalar if needed, we may without loss of generality suppose in addition that the exceptional supporting line is vertical. Let $d\ (\geq 0)$ be its distance from the origin. Then $\operatorname{Re} A+dI$ is a positive semidefinite matrix with rank at most~2. If $d=0$, then $\operatorname{Re} A$ is positive semidefinite with zero trace, and thus zero diagonal. This is only possible if $\operatorname{Re} A=0$. But then $\operatorname{Im} A$ differs from $A$ by a scalar multiple only, and is therefore nilpotent along with $A$. Being hermitian, it is also zero. We arrive at a contradiction with $A$ being non-zero, implying that $d>0$.
Multiplying $A$ by another scalar, this time positive, we may without loss of generality suppose that $d=1/2$, that is, $A+A^*+I$ is positive semidefinite of rank at most~2. We will show that for such matrices the statement holds with $\alpha=1$. To this end, use unitary similarity to put $A$ in upper triangular form \eqref{Ae} with $\alpha=1$, and observe that then \eq{AA*} A+A^*+I=\begin{bmatrix} 1 & a_1 & a_2 & a_3 \\ \overline{a_1} & 1 & a_4 & a_5 \\ \overline{a_2}& \overline{a_4} & 1 & a_6 \\ \overline{a_3} & \overline{a_5} & \overline{a_6} & 1 \end{bmatrix}. \en The matrix on the right-hand side of \eqref{AA*} is congruent to $[1]\oplus G$, where \eq{G} G= \begin{bmatrix} 1-\abs{a_1}^2 & a_4-\overline{a_1}a_2 & a_5-\overline{a_1}a_3 \\ \overline{a_4}-a_1\overline{a_2} & 1-\abs{a_2}^2 & a_6-\overline{a_2}a_3 \\ \overline{a_5}-a_1\overline{a_3} & \overline{a_6}-a_2\overline{a_3} & 1-\abs{a_3}^2\end{bmatrix}. \en So, $G$ must be positive semidefinite of rank at most 1. The former property implies the inequalities in \eqref{aj}, while the latter implies that the three $2$-by-$2$ principal minors of $G$ are equal to zero. This is equivalent to \eqref{condex1}--\eqref{condex3}. In turn, if $\abs{a_1}<1$, then due to \eqref{condex1}, \eqref{condex2} $G$ is congruent to \eq{H3} [1-\abs{a_1}^2]\oplus \begin{bmatrix} 0 & w \\ \overline{w} & 0\end{bmatrix}, \en where \[ w= a_6-\overline{a_2}a_3-\frac{(a_5-\overline{a_1}a_3)(\overline{a_4}-a_1\overline{a_2})}{1-\abs{a_1}^2}. \] So, in this case \[ \operatorname{rank} G = \begin{cases} 1 & \text{ if } w=0, \\ 3 & \text{ otherwise},\end{cases} \] which implies \eqref{arg}. {\sl Sufficiency.} Without loss of generality, let $A$ be given by \eqref{Ae} with $\alpha=1$. Then \eqref{AA*} and \eqref{G} hold.
If $\abs{a_1}=1$, then \eqref{condex1}, \eqref{condex2} imply \eq{H2} G=[0]\oplus\begin{bmatrix} 1-\abs{a_2}^2 & a_6-\overline{a_2}a_3 \\ \overline{a_6}-a_2\overline{a_3} & 1-\abs{a_3}^2\end{bmatrix}, \en and the second summand in \eqref{H2} is singular due to \eqref{condex3}. So, the minimal eigenvalue $-1/2$ of $\operatorname{Re} A$ has multiplicity 2. For the case $\abs{a_1}<1$, the same conclusion follows from the congruence of $G$ and \eqref{H3}, since \eqref{condex1}--\eqref{arg} imply $w=0$. \end{proof} Note that the exceptional supporting line $\ell$, the existence of which is established by Theorem~\ref{th:except}, is the vertical line $x=-1/2$ scaled by $\alpha$. Consequently, $\ell$ is at the distance $\abs{\alpha}/2$ from the origin, and has the slope $-\cot\arg\alpha$. Also, conditions \eqref{condex1}--\eqref{condex3} can be rewritten in an equivalent form \[ a_4= \overline{a_1}a_2+r_1r_2e^{i\theta_1},\quad a_5= \overline{a_1}a_3+r_1r_3e^{i\theta_2},\quad a_6= \overline{a_2}a_3+r_2r_3e^{i\theta_3}, \] where \[ r_j=\sqrt{1-\abs{a_j}^2}, \quad j=1,2,3,\] and \eq{theta} \theta_1=\arg(a_4-\overline{a_1}a_2), \quad \theta_2=\arg(a_5-\overline{a_1}a_3), \quad \theta_3=\arg(a_6-\overline{a_2}a_3). \en Consequently, the matrix \eqref{Ae} that satisfies the conditions in Theorem \ref{th:except} can be represented more explicitly as \eq{Ae1} \alpha\begin{bmatrix}0 & a_1 & a_2 & a_3 \\ & 0 & \overline{a_1}a_2+r_1r_2e^{i\theta_1} & \overline{a_1}a_3+r_1r_3e^{i\theta_2} \\ & & 0 & \overline{a_2}a_3+r_2r_3e^{i\theta_3} \\ & & & 0 \end{bmatrix}, \en where $\theta_3=\theta_2-\theta_1.$ We will now use Proposition~\ref{quadformtest} to establish the additional conditions on $A$ under which a flat portion of $F(A)$ on $\ell$ actually materializes.
\begin{theorem}\label{oneflat}Let $A$ be unitarily similar to \eqref{Ae1}. In the notation introduced above, $F(A)\cap\ell$ is a proper line segment unless one of the following four conditions holds.
\smallskip (i) $r_1 r_2 r_3 \ne 0$ and $\tau_1=\tau_2=0$, where \eq{bdryval} \begin{array}{l c l} \tau_1= r_3(r_1^2+r_2^2-r_1^2r_2^2)(\overline{a}_1 a_3 e^{-i \theta_2}-a_1 \overline{a}_3 e^{i \theta_2})&+&r_2(r_1^2+r_3^2-r_1^2r_3^2)(a_1 \overline{a}_2e^{i \theta_1}-\overline{a}_1 a_2 e^{-i \theta_1}) \\ &+&r_1r_2 r_3 |a_1|^2 (a_2 \overline{a}_3 e^{i (\theta_2-\theta_1)} -\overline{a}_2 a_3 e^{-i (\theta_2-\theta_1)}), \end{array} \en and \eq{mixedterm} \begin{array}{l c l} \tau_2=\overline{a}_2a_3r_1(r_1^2+2r_2^2-2r_1^2r_2^2)-a_1\overline{a}_2^2a_3r_1^2r_2e^{i \theta_1}+\overline{a}_1 a_3r_2(-r_2^2-r_1^2+r_1^2r_2^2)e^{-i \theta_1}\\+a_1 \overline{a}_2r_1^2r_3|a_2|^2e^{i \theta_2} +r_1r_2r_3(1-2|a_1|^2 |a_2|^2)e^{i(\theta_2-\theta_1)}+\overline{a}_1a_2|a_1|^2r_2^2r_3e^{i(\theta_2-2\theta_1)}.\end{array} \en \smallskip (ii) $r_1=0$, $r_2=r_3 \ne 0$, and $\arg(a_3)=\arg(a_2)+ \theta_3.$ \smallskip (iii) $r_2=0$, $r_1=r_3 \ne 0$, and $\arg(a_3)=\arg(a_1)+ \theta_2+\pi.$ \smallskip (iv) $r_3=0$, $r_1=r_2 \ne 0$, and $\arg(a_2)=\arg(a_1)+ \theta_1.$ \end{theorem} \begin{proof} Without loss of generality we may suppose that $A$ is in the form \eqref{Ae1}, not just unitarily similar to it, and that $\alpha=1$. Then $$\operatorname{Re} A=\frac{1}{2}\left(\begin{array}{cccc}0 & a_1 & a_2 & a_3 \\\overline{a}_1 & 0 & \overline{a}_1a_2 +r_1r_2e^{i \theta_1}& \overline{a}_1a_3 +r_1r_3e^{i \theta_2} \\\overline{a}_2 & a_1\overline{a}_2+r_1r_2e^{-i \theta_1} & 0 & \overline{a}_2a_3 +r_2r_3e^{i \theta_3}\\\overline{a}_3 & a_1 \overline{a}_3 +r_1r_3e^{-i \theta_2}& a_2 \overline{a}_3+r_2r_3e^{-i\theta_3} & 0\end{array}\right).$$ \smallskip {\sl Case 1.} Let $r_1r_2r_3 \ne 0$. It is straightforward to check that the minimal eigenvalue $\lambda=-\frac{1}{2}$ of $\operatorname{Re} A$ has multiplicity 2, and \[y_1=(-a_2r_1+a_1r_2e^{i \theta_1},-r_2e^{i \theta_1},r_1,0), \quad y_2=(-a_3r_1+a_1r_3e^{i \theta_2},-r_3e^{i \theta_2},0,r_1)\] form a basis of the respective eigenspace. 
Moreover, $$Ay_1=(-a_1r_2e^{i \theta_1}+a_2r_1,r_1\overline{a}_1a_2+r_1^2r_2e^{i \theta_1},0,0)$$ and $$Ay_2=(-a_1r_3e^{i \theta_2}+a_3r_1,r_1\overline{a}_1a_3+r_1^2r_3e^{i \theta_2},r_1\overline{a}_2a_3+r_1r_2r_3e^{i (\theta_2-\theta_1)},0).$$ Therefore \begin{align*} \|y_1 \|^2&=2r_1^2+2r_2^2-2r_1^2r_2^2-a_1\overline{a}_2r_1r_2e^{i \theta_1}-\overline{a}_1a_2r_1r_2e^{-i \theta_1}, \\ \|y_2 \|^2&=2r_1^2+2r_3^2-2r_1^2r_3^2-a_1\overline{a}_3r_1r_3e^{i \theta_2}-\overline{a}_1a_3r_1r_3e^{-i \theta_2}, \\ \langle Ay_1, y_1 \rangle &=-r_1^2-r_2^2+r_1^2r_2^2+a_1\overline{a}_2r_1r_2e^{i \theta_1}, \\ \langle A y_2,y_2 \rangle & =-r_1^2-r_3^2+r_1^2r_3^2+a_1\overline{a}_3r_1r_3e^{i \theta_2}. \end{align*} After some simplification, $\|y_2 \|^2 \langle Ay_1, y_1 \rangle-\|y_1 \|^2 \langle Ay_2, y_2 \rangle$ becomes $r_1\tau_1$ with $\tau_1$ defined by \eqref{bdryval}. Hence condition \eqref{eqn1evec} is satisfied if and only if $\tau_1=0$. Next note that $$\langle y_2, y_1 \rangle = r_1^2a_3 \overline{a}_2-\overline{a}_2a_1r_1r_3e^{i\theta_2}-a_3 \overline{a}_1 r_1r_2e^{-i \theta_1}+(1+|a_1|^2)r_2r_3 e^{i(\theta_2-\theta_1)},$$ and $$\langle Ay_2, y_1 \rangle =a_1r_3e^{i \theta_2} (\overline{a}_2r_1-\overline{a}_1 r_2e^{-i \theta_1}).$$ Now $\|y_1 \|^2 \langle Ay_2, y_1 \rangle-\langle y_2, y_1 \rangle \langle A y_1, y_1 \rangle$ simplifies to $r_1\tau_2$, so \eqref{eqn2evec} holds exactly when $\tau_2=0$.
Therefore, by Proposition~\ref{quadformtest}, the line $x=-\frac{1}{2}$ will contain a proper line segment of $F(A)$ if and only if at least one of $\tau_1$ or $\tau_2$ is nonzero. This agrees with the statement of the theorem. {\sl Case 2.} At least two of $r_j$ are equal to zero, $j=1,2,3$. To be consistent with the statement of the theorem, we need to show that $F(A)\cap\ell$ is a proper line segment. But this is indeed so. For example, if $r_1=r_2=0$, then it immediately follows that the vectors $y_1=(-a_1, 1, 0,0)$ and $y_2=(-a_2, 0, 1, 0)$ are linearly independent eigenvectors of $\operatorname{Re} A$ corresponding to $-\frac{1}{2}$. Since $A y_1=(a_1, 0,0,0)$ and $A y_2=(a_2,0,0,0)$, both sides of \eqref{eqn1evec} equal $-2$. However, \eqref{eqn2evec} is not satisfied because $$\langle A y_2, y_1 \rangle \|y_1 \|^2=-2 a_2 \overline{a}_1 \ne -a_2 \overline{a}_1=\langle y_2, y_1 \rangle \langle A y_1, y_1 \rangle.$$ Therefore, the flat portion will exist. All other cases where at least two $r_j$ values are zero are treated in the same manner. {\sl Case 3.} Exactly one of $r_j$ is equal to zero. For the sake of definiteness, let $r_1=0$, $r_2r_3\neq 0$. Then $y_1=(-a_1, 1,0,0)$ and $y_2=(a_2r_3e^{i\theta_3}-r_2 a_3,0,-r_3e^{i\theta_3},r_2)$ are linearly independent eigenvectors of $\operatorname{Re} A$ corresponding to $-\frac{1}{2}$. It still holds that $\|y_1 \|^2=2$ and $\langle A y_1, y_1 \rangle=-1$.
Now we also have $$\langle y_2, y_1 \rangle=r_2 a_3 \overline{a}_1-a_2 \overline{a}_1 r_3 e^{i\theta_3},$$ $$\|y_2 \|^2=2-\overline{a}_2 a_3 r_2 r_3 e^{-i\theta_3}-a_2 \overline{a}_3 r_2 r_3 e^{i\theta_3}-2|a_2|^2 |a_3|^2,$$ and $$A y_2=(-a_2r_3e^{i\theta_3}+r_2a_3,-\overline{a}_1a_2r_3e^{i\theta_3}+\overline{a}_1a_3r_2, r_2\overline{a}_2a_3+r_2^2r_3e^{i\theta_3},0).$$ Therefore, we can compute the remaining quantities from Proposition~\ref{quadformtest}: $$ \langle A y_2, y_2 \rangle=a_2 \overline{a}_3 r_2 r_3 e^{i\theta_3}-1+|a_2|^2|a_3|^2 $$ and $$ \langle A y_2, y_1 \rangle=0.$$ Equation \eqref{eqn1evec} holds if and only if $-\|y_2 \|^2=2 \langle A y_2, y_2 \rangle$. Substituting into the latter equation yields $$ -2+\overline{a}_2 a_3 r_2 r_3 e^{-i\theta_3}+a_2 \overline{a}_3 r_2 r_3 e^{i\theta_3}+2|a_2|^2 |a_3|^2=2a_2 \overline{a}_3 r_2 r_3 e^{i\theta_3}-2+2|a_2|^2|a_3|^2,$$ which simplifies to \begin{equation}\label{r1zeroeq1} \overline{a}_2 a_3 r_2 r_3 e^{-i\theta_3}-a_2 \overline{a}_3 r_2 r_3 e^{i\theta_3}=0. \end{equation} Since $a_1 \ne 0$, equation \eqref{eqn2evec} holds if and only if \begin{equation}\label{r1zeroeq2} r_2 a_3 -a_2 r_3 e^{i\theta_3}=0. \end{equation} If equation \eqref{r1zeroeq2} holds, then taking absolute values gives $r_2\abs{a_3}=\abs{a_2}r_3$, which (since $\abs{a_j}^2=1-r_j^2$) forces $r_3=r_2$, and hence $\arg(a_3)=\arg(a_2)+ \theta_3.$ This proves the necessity of the conditions in (ii) in order for $F(A)$ to fail to have a flat portion on $x=-\frac{1}{2}$. Conversely, if the conditions in (ii) hold, then clearly \eqref{r1zeroeq1} and \eqref{r1zeroeq2} are true, which results in no flat portion by Proposition~\ref{quadformtest}. This agrees with (ii) in the statement of the theorem. The situations when $r_2=0$, $r_1r_3\neq 0$ and $r_3=0, r_1r_2\neq 0$ can be treated similarly.
The only difference will be in the specific choice of the eigenvectors, namely, \[ y_1=(-a_2, 0,1,0), \quad y_2= (-a_3 r_1+r_3a_1e^{i \theta_2},-r_3e^{i \theta_2},0,r_1) \] in the former, and \[ y_1=(-a_3, 0,0,1), \quad y_2=(-a_2r_1+a_1r_2e^{i \theta_1},-r_2e^{i \theta_1}, r_1,0) \] in the latter. Direct computations show that a flat portion does not materialize if and only if, respectively, (iii) or (iv) holds. \end{proof} {Conditions (i)--(iv) of Theorem~\ref{oneflat} simplify somewhat if the entries $a_j$ in \eqref{Ae} are real. Indeed, then $\theta_j\equiv 0 \pmod{\pi}$, and the argument conditions in (ii)--(iv) boil down to \[ a_2a_3(a_6-a_2a_3)\geq 0, \quad a_1a_3(a_5-a_1a_3)\leq 0, \text{ and } a_1a_2(a_4-a_1a_2)\geq 0, \] respectively. In turn, $\tau_1$ in (i) is zero automatically, while \eq{realtau2}\begin{array}{lcl} \tau_2= r_1a_2a_3(r_1^2+2r_2^2-2r_1^2r_2^2)- a_1r_2a_3(r_2^2+2r_1^2-2r_1^2r_2^2)e^{i\theta_1} \\ + a_1a_2r_3(r_1^2+r_2^2-2r_1^2r_2^2)e^{i\theta_2}+ r_1r_2r_3(1-2a_1^2a_2^2)e^{i\theta_3}.\end{array}\en \begin{example} Let $A$ be of the form \eqref{Ae} with $\alpha=1$ and $a_1=\frac{\sqrt{2+\sqrt{3}}}{2}$, $a_2=\frac{1}{2}$, $a_3=\frac{\sqrt{2}}{2}$. Choosing $a_4,a_5,a_6$ in such a way that \eqref{condex1}--\eqref{condex3} hold with $\theta_j=0$ in \eqref{theta}, $j=1,2,3$, yields $a_4=\frac{\sqrt{2}}{2}$, $a_5=\frac{\sqrt{3}}{2}$, and $a_6=\frac{\sqrt{2+\sqrt{3}}}{2}$. A direct substitution into \eqref{realtau2} reveals that $\tau_2=0$. So, $F(A)$ has no flat portion on the supporting line $x=-\frac{1}{2}$ by Theorem~\ref{oneflat}, even though this line is exceptional by Theorem~\ref{th:except}; see Figure~\ref{noflat}.
\end{example} \begin{figure}[h] \centering \includegraphics[scale=.6]{TsaiExnoflatpdf} \caption{$F(A)$ with exceptional supporting line but no flat portion}\label{noflat} \end{figure} \section{Special case: An alternative approach} \label{s:alt} {As can be seen from the discussion above, Theorem~\ref{oneflat} provides a convenient tool for constructing specific examples of nilpotent matrices $A$ with a prescribed exceptional supporting line $\ell$, with $F(A)\cap\ell$ being just one point or a proper line segment. For another example, by setting $a_1=1$, $a_2=\frac{1}{2}$, $a_3=\frac{\sqrt{3}}{2}$, and $\theta_3=0$ in Theorem~\ref{th:except} we immediately obtain a matrix \begin{equation*} A=\left(\begin{array}{cccc}0 & 1 & \frac{1}{2} & \frac{\sqrt{3}}{2} \\0 & 0 & \frac{1}{2} & \frac{\sqrt{3}}{2} \\0 & 0 & 0 & \frac{\sqrt{3}}{2} \\0 & 0 & 0 & 0\end{array}\right) \end{equation*} that fails to satisfy any of the conditions (i)--(iv) in Theorem~\ref{oneflat} and hence has a flat portion, as shown in Figure~\ref{withflat}. \begin{figure}[h] \centering \includegraphics[scale=.6]{TsaiExwithflatpdf} \caption{$F(A)$ with flat portion}\label{withflat} \end{figure} However, the conditions in Theorem~\ref{oneflat} are not as useful when considering a given (even triangular) matrix with the number and orientation of flat portions not known a priori. We now present one such case, both to illustrate this point and because it plays an important role in Section~\ref{s:proof}.} \begin{lemma}\label{realA} Let \[ A=\begin{bmatrix}0 & a_{1} & a_2 & a_{3} \\ 0 & 0 & a_{3} & a_2 \\ 0 & 0 & 0 & a_{1} \\ 0 & 0 & 0 & 0\end{bmatrix}, \] where $a_1$, $a_2$, and $a_3$ are real and $a_1 \ne 0$. The boundary of $F(A)$ contains a vertical flat portion if and only if $|a_1|=|a_3|$ and $|a_2| \geq |a_1|$. In this case, this is the only flat portion on $\partial F(A)$.
\end{lemma} \begin{proof} As is well-known (see, for example, \cite{GauWu08}), the boundary $\partial F(A)$ of the numerical range of $A$ contains a portion of the line $\cos(\theta) x+ \sin(\theta) y=d$ if and only if $d$ is the maximum (or minimum) eigenvalue of $\mathrm{Re} \, e^{-i \theta} A$ and there are two eigenvectors $y_1$ and $y_2$ associated with $d$ such that either condition \eqref{eqn1evec} or \eqref{eqn2evec} in Proposition~\ref{quadformtest} fails to hold. A straightforward calculation shows that the characteristic polynomial of $\mathrm{Re} \, e^{-i \theta} A$ is \begin{equation*} q_{\theta}(\lambda)=\lambda^4-\frac{1}{2}(a_1^2+a_2^2+a_3^2) \lambda^2-(a_1a_2a_3 \cos\theta ) \lambda+\frac{1}{16}(a_1^4+a_2^4+a_3^4-2a_1^2a_2^2-2a_2^2a_3^2-2a_1^2a_3^2 \cos2 \theta ). \end{equation*} This polynomial factors as \begin{equation*} q_{\theta}(\lambda)=\left(\lambda^2+a_2 \lambda+ \frac{a_1a_3\cos\theta}{2}+\frac{a_2^2}{4}-\frac{a_1^2}{4}-\frac{a_3^2}{4}\right)\left(\lambda^2-a_2 \lambda- \frac{a_1a_3\cos\theta}{2}+\frac{a_2^2}{4}-\frac{a_1^2}{4}-\frac{a_3^2}{4}\right) \end{equation*} Therefore the roots of $q_{\theta}$, which are the eigenvalues of $\mathrm{Re} \, e^{-i \theta} A$, can be explicitly calculated as \begin{equation}\label{evalsrotatedrealA} \begin{split} \lambda_1(\theta) & =\frac{1}{2}\left(-a_2-\sqrt{a_1^2+a_3^2-2a_1a_3 \cos\theta}\right),\\ \lambda_2(\theta) & =\frac{1}{2}\left(a_2-\sqrt{a_1^2+a_3^2+2a_1a_3 \cos\theta}\right),\\ \lambda_3 (\theta)& =\frac{1}{2}\left(-a_2+\sqrt{a_1^2+a_3^2-2a_1a_3 \cos\theta}\right),\\ \lambda_4(\theta) & =\frac{1}{2}\left(a_2+\sqrt{a_1^2+a_3^2+2a_1a_3 \cos\theta}\right). 
\end{split}\end{equation} When $\theta=0$, the eigenvalues of $\mathrm{Re}A$ simplify to \begin{equation*}\label{evalsrealA} \begin{split} \lambda_1& =\frac{1}{2}\left(-a_2-a_1+a_3 \right),\\ \lambda_2 & =\frac{1}{2}\left(a_2-a_1-a_3 \right),\\ \lambda_3& =\frac{1}{2}\left(-a_2+a_1-a_3\right),\\ \lambda_4 & =\frac{1}{2}\left(a_2+a_1+a_3\right), \end{split}\end{equation*} where relabeling may have occurred based on absolute values. Furthermore, in this case it is straightforward to verify that corresponding eigenvectors of $\mathrm{Re} A$ are $ y_1=(1,-1,-1,1)$, $y_2=(-1,1,-1,1)$, $y_3=(-1,-1,1,1)$, and $y_4=(1,1,1,1).$ Notice that \begin{equation*}\label{Ay} \begin{split} A y_1& =(-a_1-a_2+a_3,-a_3+a_2,a_1,0),\\ A y_2&=(a_1-a_2+a_3,a_2-a_3,a_1,0),\\ A y_3&=(-a_1+a_2+a_3,a_3+a_2,a_1,0),\\ A y_4&=(a_1+a_2+a_3,a_2+a_3,a_1,0). \end{split}\end{equation*} Direct computation yields $\| y_j \|^2=4$ and $\langle A y_j, y_j \rangle=4 \lambda_j$ for $j=1,2,3,4.$ Therefore for all $1 \leq j, k \leq 4$, if there is a repeated eigenvalue $\lambda_j=\lambda_k$ (extremal or not), we have $$ \langle A y_j, y_j \rangle \|y_k \|^2=16 \lambda_j=16 \lambda_k= \langle A y_k, y_k \rangle \|y_j \|^2.$$ Hence \eqref{eqn1evec} will always hold, and a vertical flat portion will exist exactly when there is an extremal eigenvalue where \eqref{eqn2evec} fails. When $j \ne k$, the eigenvectors $y_j$ and $y_k$ are orthogonal. Accordingly, \eqref{eqn2evec} fails to hold for a case where $\lambda_j=\lambda_k$ if and only if \begin{equation}\label{eqn3evec} \langle A y_j, y_k \rangle \ne 0. \end{equation} All possible cases to consider can be studied with the following inner products.
\begin{align} \label{IPcase1} \langle A y_1, y_2 \rangle & =2a_2-2a_3,\\ \label{IPcase2} \langle A y_3, y_4 \rangle & =2a_2+2a_3,\\ \label{IPcase3} \langle A y_1, y_3 \rangle&=\langle A y_2, y_4 \rangle =2a_1,\\ \label{IPcase4} \langle A y_1, y_4 \rangle&=\langle A y_2, y_3 \rangle =0. \end{align} Now, assume the boundary of $F(A)$ has a vertical flat portion. Thus the maximal or minimal eigenvalue of $\mathrm{Re} A$ is repeated and the corresponding eigenvectors satisfy \eqref{eqn3evec}. The equality $\lambda_1=\lambda_2$ implies $a_2=a_3$, so equation \eqref{IPcase1} shows condition \eqref{eqn3evec} fails in this case. Similarly, by equation \eqref{IPcase2}, condition \eqref{eqn3evec} fails when $\lambda_3 =\lambda_4$ and $a_2=-a_3$. Clearly, by \eqref{IPcase4}, no combination with $\lambda_1=\lambda_4$ or $\lambda_2=\lambda_3$ is possible with a vertical flat portion. Therefore it must hold that either $\lambda_1=\lambda_3$ or $\lambda_2=\lambda_4$. In the former case, we have $a_1=a_3$; in order for this repeated $\lambda_1$ to be extremal, we must have $|a_2| \geq |a_1|$. Similarly, if $\lambda_2=\lambda_4$, then $a_1=-a_3$ and to be extremal $|a_2| \geq |a_1|$. Thus a vertical flat portion implies the conditions $|a_1|=|a_3|$ and $|a_2| \geq |a_1|$. Conversely, if $a_1 \ne 0$, $a_1=a_3$ and $|a_2| \geq |a_1|$, then $\lambda_1=\lambda_3$ is either the maximal or minimal eigenvalue of $\mathrm{Re}A$ and $\langle A y_1, y_3 \rangle \ne 0$ by \eqref{IPcase3}. Likewise, if $a_3=-a_1 \ne 0$ and $|a_2| \geq |a_1|$, then $\lambda_2=\lambda_4$ is an extreme eigenvalue and $\langle A y_2, y_4 \rangle \ne 0$, again by \eqref{IPcase3}. Hence in both cases Proposition~\ref{quadformtest} shows that $\partial F(A)$ has a vertical flat portion.
There are never two vertical flat portions on $\partial F(A)$ because that would require both a repeated maximal and a repeated minimal eigenvalue of $\mathrm{Re} A$, which would imply both $a_1=a_3$ and $a_1=-a_3$, contradicting the assumption $a_1 \ne 0$. To see that a vertical flat portion never coexists with any other flat portion, assume there is a vertical flat portion and there is also a flat portion on the line $\cos(\theta) x+ \sin(\theta) y=d$ for some real $d$ and $\theta \in (0, 2 \pi)$ with $\theta \ne \pi$. It suffices to show that there is not a repeated maximal eigenvalue at $\theta$, because if there were a repeated minimal eigenvalue at $\theta$ there would also be a repeated maximal eigenvalue at $\theta \pm \pi$. In the list of the eigenvalues of $\mathrm{Re} \, e^{-i \theta} A$ in \eqref{evalsrotatedrealA}, $\lambda_2(\theta)<\lambda_4(\theta)$ and $\lambda_1(\theta)<\lambda_3(\theta)$. Therefore the only possibility of a repeated maximal eigenvalue is when $\lambda_3(\theta)=\lambda_4(\theta)$. Setting $|a_1|=|a_3|$ (say, with $a_1a_3=a_1^2$; the case $a_1a_3=-a_1^2$ differs only by an interchange of the two radicals) in this equality yields $$2a_2=\sqrt{2a_1^2-2a_1^2 \cos\theta}-\sqrt{2a_1^2+2a_1^2\cos\theta}=2|a_1| \left( \sin\left(\frac{\theta}{2}\right)-\vert \cos\left(\frac{\theta}{2}\right) \vert \right).$$ The extreme values of $ \sin(\frac{\theta}{2})-\vert \cos(\frac{\theta}{2}) \vert $ on $[0, 2 \pi)$ are $1$ (attained only when $\theta=\pi$) and $-1$ (attained only when $\theta=0$). Thus $\lambda_3(\theta)=\lambda_4(\theta)$ cannot hold at any other value of $\theta$, since otherwise we would have $|a_2|<|a_1|$, contradicting $|a_2| \geq |a_1|$. \end{proof} A sketch of the numerical range of $A$ when $a_1=1$, $a_3=-1$, and $a_2=2$ is shown in Figure~\ref{verticalflat}.
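For this choice of parameters, Lemma~\ref{realA} indeed applies: $|a_1|=|a_3|=1$ and $|a_2|=2\geq |a_1|$, and the eigenvalues of $\operatorname{Re} A$ listed above become \[ \lambda_1=\frac{1}{2}(-2-1-1)=-2, \quad \lambda_2=\frac{1}{2}(2-1+1)=1, \quad \lambda_3=\frac{1}{2}(-2+1+1)=0, \quad \lambda_4=\frac{1}{2}(2+1-1)=1. \] So $\lambda_2=\lambda_4=1$ is the repeated maximal eigenvalue of $\operatorname{Re} A$, and the vertical flat portion lies on the supporting line $x=1$.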
\begin{figure}[h] \centering \includegraphics[scale=.6]{exampleverticalflatpdf} \caption{$F(A)$ with one vertical flat portion}\label{verticalflat} \end{figure} \section{Matrices with two flat portions}\label{s:two} \begin{prop}\label{p:par} A $4$-by-$4$ matrix $A$ is nilpotent, with two parallel flat portions on the boundary of its numerical range, if and only if it is a product of a scalar $\alpha\in\mathbb{C}\setminus\{0\}$ and a matrix unitarily similar to \eq{paralcanon} \begin{bmatrix}0 & a_1 & a_2 & a_3 \\ 0 & 0 & a_3 & -a_2 \\ 0 & 0 & 0 & a_1 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \en with $a_1,a_3>0$ and $a_2\in\mathbb{R}$. \end{prop} \begin{proof} {\sl Necessity.} Using Schur's lemma, we may without loss of generality suppose that the matrix $A$ is upper triangular. Being nilpotent, it thus can be written as \eq{tri} A=\begin{bmatrix}0 & a_{12} & a_{13} & a_{14} \\ 0 & 0 & a_{23} & a_{24} \\ 0 & 0 & 0 & a_{34} \\ 0 & 0 & 0 & 0 \end{bmatrix}. \en Note that the general form of $A$ has entries with double subscripts, but we will gradually simplify $A$ so that it has the form \eqref{paralcanon}. Multiplying matrix \eqref{tri} by an appropriate scalar (of absolute value one, if desired), we may also suppose that the parallel flat portions on the boundary of $F(A)$ are horizontal. Yet another unitary similarity, this time via a diagonal matrix, allows us to adjust the arguments of the entries $a_{i,i+1}$ any way we wish ($i=1,2,3$) without changing the absolute values. Let us agree therefore to choose these entries real and non-negative. Having agreed on the above, we now introduce $H=\operatorname{Re} A$ and $K=\operatorname{Im} A$ as the hermitian matrices from the representation $A=H+iK$, so in particular \eq{K} K=\frac{1}{2i}\begin{bmatrix}0 & a_{12} & a_{13} & a_{14} \\ -a_{12} & 0 & a_{23} & a_{24} \\ -\overline{a_{13}} & -a_{23} & 0 & a_{34} \\ -\overline{a_{14}} & -\overline{a_{24}} & -a_{34} & 0 \end{bmatrix}. 
\en For a matrix $B$ of any size, two horizontal flat portions on the boundary of $F(B)$ can materialize only if the maximal and minimal eigenvalues of $\operatorname{Im} B$ both have multiplicity at least 2. For our $4$-by-$4$ matrix $A$ it means that $K$ has two distinct eigenvalues, say $\lambda_1$ and $\lambda_2$, each of multiplicity two. In addition, $\operatorname{Tr} K=0$, and so $\lambda_1=-\lambda_2 (:=\lambda)$. Consequently, \[ K^2=\lambda^2 I.\] In particular, the diagonal entries of $K^2$ are all equal. From here and \eqref{K}: \[ a_{12}^2+\abs{a_{13}}^2+\abs{a_{14}}^2=a_{12}^2+a_{23}^2+\abs{a_{24}}^2=a_{34}^2+\abs{a_{13}}^2+a_{23}^2=a_{34}^2+\abs{a_{14}}^2+\abs{a_{24}}^2. \] Equivalently (and taking into account non-negativity of $a_{12},a_{23},a_{34}$): \eq{fromdiag} a_{12}=a_{34}:=a_1, \quad \abs{a_{14}}=a_{23}:=a_3,\quad \abs{a_{13}}=\abs{a_{24}}.\en Taking \eqref{fromdiag} into consideration, the fact that off diagonal entries of $K^2$ are all equal to zero boils down to \eq{fromoffdiag} a_1(a_3-a_{14})=0, \quad a_1(a_{13}+a_{24})=0,\quad a_3a_{13}+a_{14}\overline{a_{24}}=0,\quad a_3a_{24}+\overline{a_{13}}a_{14}=0. \en On the other hand, from \eqref{tri} and \eqref{fromdiag}: \[ A^2= \begin{bmatrix} 0 & 0 & a_1a_3 & a_1(a_{13}+a_{24}) \\ 0 & 0 & 0 & a_1a_3 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \] which, when combined with the second equation in \eqref{fromoffdiag}, yields \[ A^2= a_1a_3\begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}. \] But $A^2\neq 0$, since otherwise $F(A)$ would be a circular disk (see e.g. \cite{TsoWu}) exhibiting no flat portions on the boundary. So, $a_1,a_3>0$. The solution to \eqref{fromoffdiag} is then given by \[ a_{14}=a_3, \quad a_{13}=-a_{24}\ (:=a_2)\in\mathbb{R}. \] So, $A$ is indeed in the form \eqref{paralcanon} up to unitary similarity and scaling. 
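Note that, conversely, for $A$ of the form \eqref{paralcanon} a direct computation gives \[ (A-A^{T})^{2}=-(a_1^2+a_2^2+a_3^2)I, \] so $K=\operatorname{Im} A=\frac{1}{2i}(A-A^{T})$ automatically satisfies $K^{2}=\lambda^{2}I$ with $\lambda=\sqrt{a_1^2+a_2^2+a_3^2}/2$, and its eigenvalues are $\pm\lambda$, each of multiplicity two.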
{\sl Sufficiency.} The scalar multiple $\alpha$ is inconsequential, so without loss of generality $A$ is given by \eqref{paralcanon}. The eigenvalues of $K=\operatorname{Im} A$ are then $\pm\lambda$, each of multiplicity two, where $\lambda=\sqrt{a_1^2+a_2^2+a_3^2}/2$. Let us apply a unitary similarity, putting $K$ in the form $\begin{bmatrix}\lambda I & 0 \\ 0 & -\lambda I\end{bmatrix}$, and denote by $\begin{bmatrix}H_1 & Z \\ Z^* & H_2\end{bmatrix}$ the result of applying the same unitary similarity to $\operatorname{Re} A$. Since $A$ is real, its numerical range $F(A)$ is symmetric with respect to the $x$-axis, and thus there are either two or no horizontal flat portions on its boundary. But the latter is possible only if $H_1$ and $H_2$ are scalar multiples of the identity, say $H_1=h_1I$ and $H_2=h_2I$. If this were the case, applying yet another (block diagonal) unitary similarity, we could reduce $A$ to \eq{red} \begin{bmatrix}h_1+i \lambda & 0 & \sigma_1 & 0 \\ 0 &h_1+i \lambda & 0 & \sigma_2 \\ \sigma_1 & 0 &h_2-i \lambda & 0 \\ 0 & \sigma_2 & 0 & h_2-i \lambda \end{bmatrix}, \en where $\sigma_1,\sigma_2$ are the singular values ($s$-numbers) of $Z$. The matrix \eqref{red} is unitarily (and even permutationally) reducible, which is in contradiction with the fact that $A$ has just one Jordan block (of size four, since $(A^3)_{14}=a_1^2a_3\neq 0$). So, there are indeed two parallel flat portions on the boundary of $F(A)$. \end{proof} {\sl Remark.} From the proof of Proposition~\ref{p:par} it is clear that the parallel flat portions of $\partial F(A)$ are on lines that are at an equal distance $\abs{\alpha}\sqrt{a_1^2+a_2^2+a_3^2}/2$ from the origin, forming the angle $\arg\alpha$ with the positive direction of the $x$-axis. Of course, $\abs{\alpha}$ can be changed arbitrarily by absorbing it (or part thereof) into the $a_j$ entries in \eqref{paralcanon}. \begin{corollary}\label{co:onlytwo}If $A$ is a $4$-by-$4$ nilpotent matrix with two parallel flat portions on the boundary of its numerical range, then these are the only flat portions of the boundary.
\end{corollary} \begin{proof} Without loss of generality, $A$ is in the form \eqref{paralcanon}, and thus there are two horizontal flat portions on the boundary of $F(A)$. Since there are two flat portions, $A$ is unitarily irreducible by Proposition~\ref{p:red}. Any other flat portion $\ell$ of $\partial F(A)$, if it exists, cannot be horizontal. Suppose it is not vertical either. Then, due to the symmetry of $F(A)$ with respect to the $x$-axis, $\partial F(A)$ would have to contain the complex conjugate of $\ell$ as well, bringing the number of flat portions to (at least) four. But any $4$-by-$4$ matrix with four flat portions on the boundary of its numerical range is unitarily reducible, as stated in Theorem~\ref{4x4}, while $A$ is not. It remains to consider the case of vertical $\ell$. In order for it to exist, the matrix $H=\operatorname{Re} A$ should have a multiple eigenvalue. By the Cauchy interlacing theorem, this eigenvalue would then have to be common to the $3$-by-$3$ principal submatrices of $H$. A direct computation shows, however, that, up to constant multiples, the characteristic polynomials of these submatrices equal \[ -4\lambda^3+\lambda (a_1^2+a_2^2+a_3^2)+a_1a_2a_3 \text{ and } -4\lambda^3+\lambda (a_1^2+a_2^2+a_3^2)-a_1a_2a_3. \] So, a common eigenvalue occurs only if $a_2=0$ (recall that $a_1$ and $a_3$ are strictly positive). On the other hand, if $a_2=0$, then the matrix $iA$ under the diagonal unitary similarity via $\operatorname{diag}[1,i,1,i]$ turns into \[ \begin{bmatrix} 0 & a_1 & 0 & a_3 \\ 0 & 0 & -a_3 & 0 \\ 0 & 0 & 0 & a_1 \\ 0 & 0 & 0 & 0\end{bmatrix}.\] So, $F(iA)$ is symmetric with respect to the $x$-axis, that is, $F(A)$ in this case is symmetric with respect to the $y$-axis as well. Thus, a vertical flat portion $\ell$ would have its counterpart $-\ell$ on $\partial F(A)$, which again would bring the number of flat portions up to four, in contradiction with the unitary irreducibility of $A$.
\end{proof} \begin{prop}Let $A$ be a $4$-by-$4$ nilpotent unitarily irreducible matrix with two non-parallel flat portions on the boundary of its numerical range $F(A)$, the supporting lines of which are equidistant from the origin. Then these are the only flat portions of $\partial F(A)$. \label{p:nonpa} \end{prop} \begin{proof} Multiplying $A$ by a non-zero scalar, we may without loss of generality suppose that the given flat portions of $\partial F(A)$ lie on lines intersecting at some point on the negative real half-axis and that the distance from each line to the origin equals $1/2$. Then for some unimodular $\omega=\xi+i\eta$ ($\xi \neq 0,\ \eta >0$) the imaginary parts of both $\omega A$ and $-\overline{\omega}A$ will have the multiple eigenvalue $-1/2$. With this notation, these lines will be $\eta x \pm \xi y=-\frac{1}{2};$ they intersect at $(-\frac{1}{2 \eta}, 0)$. By an appropriate unitary similarity, $A$ can be put in the upper triangular form \eqref{tri} and, moreover, the elements $a_{12}, a_{23}, a_{34}$ of its first superdiagonal can all be made real.
The above-mentioned condition on the eigenvalues of $\operatorname{Im} (\omega A)$ and $\operatorname{Im} (-\overline{\omega} A)$ then implies that the matrices \eq{ima} \begin{bmatrix} 1 & -i\omega a_{12} & -i\omega a_{13} & -i\omega a_{14}\\ i\overline{\omega}\overline{a_{12}} & 1 & -i\omega a_{23} & -i\omega a_{24} \\ i\overline{\omega}\overline{a_{13}} & i\overline{\omega}\overline{a_{23}} & 1 & -i\omega a_{34}\\ i\overline{\omega}\overline{a_{14}} & i\overline{\omega}\overline{a_{24}} & i\overline{\omega}\overline{a_{34}} & 1\end{bmatrix} \text{ and } \begin{bmatrix} 1 & i \overline{\omega} a_{12} & i \overline{\omega} a_{13} & i \overline{\omega} a_{14} \\ -i{\omega}\overline{a_{12}} & 1 & i \overline{\omega} a_{23} & i \overline{\omega} a_{24} \\ -i{\omega}\overline{a_{13}} & -i{\omega}\overline{a_{23}} & 1 & i \overline{\omega} a_{34}\\ -i{\omega}\overline{a_{14}} & -i{\omega}\overline{a_{24}} & -i{\omega}\overline{a_{34}} & 1\end{bmatrix}\en have rank at most 2. Equating the upper left $3$-by-$3$ minors of \eqref{ima} to zero, we see that \[ 1-(\abs{a_{13}}^2+\abs{a_{12}}^2+\abs{a_{23}}^2)+2\operatorname{Im}(\omega a_{12}\overline{a_{13}}a_{23})=0 \] and \[ 1-(\abs{a_{13}}^2+\abs{a_{12}}^2+\abs{a_{23}}^2)-2\operatorname{Im}(\overline{\omega}a_{12}\overline{a_{13}}a_{23})=0. \] Equivalently, $a_{12}\overline{a_{13}}a_{23}$ is real, and \eq{1} 1-(\abs{a_{13}}^2+\abs{a_{12}}^2+\abs{a_{23}}^2)+2\eta a_{12}\overline{a_{13}}a_{23}=0.\en If $a_{12}=0$ or $a_{23}=0$, then \eqref{1} implies that for the $3$-by-$3$ matrix $B$ located in the upper left corner of $A$ the numerical range $F(B)$ is the circular disk $\{z\colon\abs{z}\leq 1/2\}$ (see \cite{Marc} or \cite[Theorem 4.1]{KRS}). Since this case is covered by Theorem~\ref{GauWuCircular}, we may suppose that $a_{12},a_{23}\neq 0$.
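As a side remark (not part of the proof), the minor identity just used can be confirmed with a computer algebra system; in the following SymPy sketch, the unimodular $\omega$ and the entries $a_{12},a_{13},a_{23}$ are arbitrary sample values of our own choosing:

```python
import sympy as sp

# Arbitrary sample data: a unimodular omega and complex entries a12, a13, a23.
omega = sp.Rational(3, 5) + sp.Rational(4, 5)*sp.I   # |omega| = 1
a12, a13, a23 = 1 + 2*sp.I, 2 - sp.I, -1 + sp.I

# Upper left 3-by-3 block of the first matrix in (ima).
M = sp.Matrix([
    [1, -sp.I*omega*a12, -sp.I*omega*a13],
    [sp.I*sp.conjugate(omega*a12), 1, -sp.I*omega*a23],
    [sp.I*sp.conjugate(omega*a13), sp.I*sp.conjugate(omega*a23), 1],
])

lhs = sp.expand(M.det())
rhs = 1 - (abs(a13)**2 + abs(a12)**2 + abs(a23)**2) \
      + 2*sp.im(omega * a12 * sp.conjugate(a13) * a23)
assert sp.simplify(lhs - rhs) == 0
```

Analogous checks apply to the remaining $3$-by-$3$ minors.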
Then $a_{13}$ is necessarily real along with $a_{12},a_{23}$, and \eqref{1} can be rewritten as \eq{2} 1-(a_{13}^2+a_{12}^2+a_{23}^2)+2\eta a_{12}a_{13}a_{23}=0.\en Repeating this reasoning for the three other principal $3$-by-$3$ minors in \eqref{ima}, we see that, without loss of generality, $A$ is real and of the form \eqref{tri}, satisfying, in addition to \eqref{2}: \begin{equation}\label{123} \begin{split} 1-(a_{12}^2+a_{14}^2+a_{24}^2)+2\eta a_{12}{a_{14}}a_{24} & =0,\\ 1-(a_{13}^2+a_{14}^2+a_{34}^2)+2\eta a_{13}{a_{14}}a_{34} & =0,\\ 1-(a_{23}^2+a_{24}^2+a_{34}^2)+2\eta a_{23}{a_{24}}a_{34} & =0. \end{split}\end{equation} Since $A$ is real, the third flat portion of $\partial F(A)$, if it exists, must be vertical. Indeed, otherwise its reflection with respect to the real axis would be the fourth flat portion of $\partial F(A)$, implying by Theorem~\ref{4x4} the unitary reducibility of $A$. But this would contradict Proposition~\ref{p:red}. So, suppose now that a vertical flat portion of $\partial F(A)$ is indeed present. Then its abscissa, which it is convenient to denote by $-x/2$, is a multiple eigenvalue of $\operatorname{Re} A$. Equivalently, the matrix \[ \begin{bmatrix} x & a_{12} & a_{13} & a_{14} \\ a_{12} & x & a_{23} & a_{24} \\ a_{13} & a_{23} & x & a_{34} \\ a_{14} & a_{24} & a_{34} & x \end{bmatrix} \] (which is nothing but $2\operatorname{Re}A+xI$) has rank at most two. Consequently, \begin{equation}\label{456} \begin{split} x^3-x({a_{13}}^2+{a_{12}}^2+{a_{23}}^2)+2a_{12}{a_{13}}a_{23} & =0,\\ x^3-x({a_{12}}^2+{a_{14}}^2+{a_{24}}^2)+2a_{12}{a_{14}}a_{24} & =0,\\ x^3-x({a_{13}}^2+{a_{14}}^2+{a_{34}}^2)+2a_{13}{a_{14}}a_{34} & =0,\\ x^3-x({a_{23}}^2+{a_{24}}^2+{a_{34}}^2)+2a_{23}{a_{24}}a_{34} & =0. \end{split}\end{equation} Comparing the respective lines in \eqref{2}--\eqref{123} and \eqref{456}, we conclude that $x^3-x+2C(1-x\eta)=0$ holds when any of the products $a_{12}{a_{13}}a_{23},a_{12}{a_{14}}a_{24},a_{13}{a_{14}}a_{34},a_{23}{a_{24}}a_{34}$ is substituted for $C$.
If $1- x \eta =0$, then $-x/2=-1/(2\eta)$, the abscissa of the point on the negative $x$-axis where the supporting lines containing the flat portions of $\partial F(A)$ intersect. This point would then be a corner point of $F(A)$, thus implying unitary reducibility of $A$. So, $1- x \eta \ne 0$, and the above-mentioned permissible values of $C$ are all equal: \eq{epro} a_{12}{a_{13}}a_{23}=a_{12}{a_{14}}a_{24}=a_{13}{a_{14}}a_{34}=a_{23}{a_{24}}a_{34}. \en From here we conclude that the coefficients of $x$ in all the equations \eqref{456} are also equal: \eq{equa} {a_{13}}^2+{a_{12}}^2+{a_{23}}^2={a_{12}}^2+{a_{14}}^2+{a_{24}}^2={a_{13}}^2+{a_{14}}^2+{a_{34}}^2={a_{23}}^2+{a_{24}}^2+{a_{34}}^2.\en From \eqref{epro}--\eqref{equa} we conclude that \eq{sol} a_{34}=\epsilon_1 a_{12}, \ a_{14}=\epsilon_2 a_{23},\ a_{24}=\epsilon_3 a_{13}, \quad \epsilon_j=\pm 1, \ j=1,2,3, \en and either $a_{12}a_{13}a_{23}=0$ or $\epsilon_1=\epsilon_2=\epsilon_3$. Since the former case is covered by Theorem~\ref{GauWuCircular}, we may concentrate on the latter. Moreover, applying a unitary similarity via the diagonal matrix $\operatorname{diag}[1,1,1,-1]$, we may change the signs of all $\epsilon_j$ simultaneously, and thus suppose that they are all equal to $1$. In other words, it remains to consider \eq{Ar1} A=\begin{bmatrix}0 & a_{1} & a_2 & a_{3} \\ 0 & 0 & a_{3} & a_2 \\ 0 & 0 & 0 & a_{1} \\ 0 & 0 & 0 & 0\end{bmatrix} \en with $a_j\in\mathbb{R}$ and $a_1\neq 0$. By Lemma~\ref{realA}, the numerical range of a matrix of this form can have a vertical flat portion on its boundary only if that is its only flat portion. Therefore the original matrix has only the two flat portions, lying on lines equidistant from the origin. \end{proof} \section{Proof of the main result}\label{s:proof} In this section, we show that there are at most two flat portions on the boundary of the numerical range of a $4$-by-$4$ unitarily irreducible nilpotent matrix.
This result follows from an analysis of the singularities of the polynomial $p_A$ defined in \eqref{pa}, so some facts about algebraic curves are reviewed next for convenience. Define the complex projective plane $\mathbb{C} \mathbb{P}^2$ to be the set of all equivalence classes of points in $\mathbb{C}^3 \setminus \left\{ (0,0,0) \right\}$ determined by the equivalence relation $\sim$, where $(x,y,z) \sim (a,b,c)$ if and only if $(x,y,z)=\lambda (a,b,c)$ for some nonzero complex number $\lambda$. The complex plane can be considered a subset of $\mathbb{C} \mathbb{P}^2$ if the point $(x,y,1)$ for $(x,y) \in \mathbb{R}^2$ is identified with $x+iy$. An {\it algebraic curve} $C$ is defined to be the zero set of a homogeneous polynomial $f(x,y,z)$ in $\mathbb{C} \mathbb{P}^2$. If $f(x,y,z)$ has real coefficients, the real affine part of the curve $C$ is defined to be all $(x,y) \in \mathbb{R}^2$ such that $f(x,y,1)=0$. There is a one-to-one correspondence between points $(u,v,w) \in \mathbb{C} \mathbb{P}^2$ and lines $L$ in $\mathbb{C} \mathbb{P}^2$ given by the mapping that sends $(u,v,w)$ to the line $$\left\{ (x,y,z) \in \mathbb{C} \mathbb{P}^2 \, : \ ux+vy +wz=0 \right\}.$$ Therefore, $(u,v,w)$ could denote either a point or a line. In the latter case, the coordinates are called {\it line coordinates}. Let $f(x,y,z)$ be a homogeneous polynomial. Let $C$ be the algebraic curve defined by $f$ in point coordinates. That is, $$C=\left\{ (x,y,z) \in \mathbb{C} \mathbb{P}^2 \, : \, f(x,y,z)=0 \right\}.$$ The curve $C'$, which is {\it dual} to $C$, is obtained by considering $f$ in line coordinates. That is, $$C'=\left\{ (u,v,w) \in \mathbb{C} \mathbb{P}^2 \, : \, ux+vy+wz=0 \text{ is a tangent line to $C$} \right\}.$$ The dual curve is an algebraic curve \cite{Bries,Walk}, except in the special case where the original curve is a line and the dual is a point; we will assume this is not the case.
Therefore there exists a homogeneous polynomial $p(u,v,w)$ such that $(u,v,w) \in C'$ if and only if $p(u,v,w)=0$. The references just noted also show that the dual curve to the dual curve is the original curve. Therefore a point on $C$ corresponds to a tangent line to $C'$ and vice versa. A {\it singular point} of a homogeneous polynomial $p$ is a point $(u_0,v_0,w_0)$ on the curve $C'$ defined by $p=0$ such that $$(p_u(u_0,v_0,w_0),p_v(u_0,v_0,w_0), p_w(u_0,v_0,w_0))=(0,0,0).$$ If $(u_0,v_0,w_0)$ is not a singular point, then the curve $C'$ has a well-defined tangent line at $(u_0,v_0,w_0)$ with line coordinates $(p_u(u_0,v_0,w_0),p_v(u_0,v_0,w_0),p_w(u_0,v_0,w_0))$. If $(u_0,v_0,1)$ is a singular point of $C'$, then the line $u=u_0+\lambda t$, $v=v_0+ \mu t$ for $t \in \mathbb{C}$ intersects the curve $C'$ at $t=0$. If the second order partial derivatives of $p$ are not all zero at $(u_0,v_0,1)$, then Taylor expansion shows that $$p(u_0+\lambda t, v_0+\mu t,1)=\frac{1}{2}\left(p_{uu}(u_0,v_0,1) \lambda^2+ 2p_{uv}(u_0,v_0,1) \lambda \mu + p_{vv}(u_0,v_0,1) \mu^2 \right) t^2 + \ldots ,$$ and in this case the curve $C'$ has two tangent lines (counting multiplicity) at $(u_0,v_0,1)$, determined by the directions $(\lambda, \mu)$ such that $p_{uu}(u_0,v_0,1) \lambda^2+ 2p_{uv}(u_0,v_0,1) \lambda \mu + p_{vv}(u_0,v_0,1) \mu^2=0$. If the minimal order $r$ for which not all of the order-$r$ partial derivatives of $p$ vanish at $(u_0, v_0, 1)$ satisfies $r>2$, we similarly obtain $r$ tangent lines to $C'$ at $(u_0, v_0,1)$, counting multiplicities. Conversely, any point at which $p=0$ has two (or more) tangent lines is a singular point of $p$. Since $p_A(u,v,w)$ defined in \eqref{pa} is a homogeneous polynomial, its zero set is an algebraic curve in $\mathbb{C} \mathbb{P}^2$. Recall that Kippenhahn \cite{Ki} showed that the convex hull of the real affine part of the curve $C(A)$ which is dual to $p_A(u,v,w)=0$ is the numerical range of $A$.
In terms of the description above, Kippenhahn showed that $p_A(u,v,w)$ is the polynomial $p$ above, while the curve $C(A)$ is given by $C$ above. He called $C(A)$ the {\it boundary generating curve} of $F(A)$. In the proof below, $C'(A)$ is the curve given by $p_A(u,v,w)$ in point coordinates. \begin{lemma}\label{singatflat} Let $A$ be an $n$-by-$n$ matrix. If the line $$\left\{ (x,y) \in \mathbb{R}^2 \, : \, u_0x+v_0y+1=0 \right\}$$ contains a flat portion on the boundary of $F(A)$, then the homogeneous polynomial $p_A(u,v,w)$ defined by equation \eqref{pa} has a singularity at $(u_0,v_0,1)$. \end{lemma} \begin{proof} Any flat portion on the boundary of $F(A)$ lies on a line $L$ defined by real numbers $u_0$, $v_0$ such that $p_A(u_0, v_0, 1)=0$. Furthermore, $L$ is tangent to $C(A)$ at two or more points. Since the dual to the dual is the original curve, these points of tangency correspond to two or more tangent lines to the dual curve $C'(A)$ at $(u_0,v_0,1)$. Therefore $(u_0,v_0,1)$ is a singular point of $p_A$, since the tangent line there is not unique. \end{proof} Therefore the singularities of $p_A$ help determine how many flat portions are possible on the boundary of $F(A)$. In order to study the flat portions on the boundary of a general nilpotent $4 \times 4$ matrix, we will show that the associated polynomial $p_A$ has a special form where many of the coefficients are either zero or equal to each other. The points at which singularities occur correspond to equations in a system of linear equations in the coefficients. Note that if $z=\frac{u+iv}{2}$ and $p_A$ is given by \eqref{pa}, then $p_A(u,v,w)=\det(zA^*+\overline{z} A -(-w)I)$. The latter expression is $q(-w)$, where $q(w)$ is the characteristic polynomial of $zA^*+\overline{z} A$. Applying Newton's identities to this matrix yields the following lemma.
\begin{lemma}\label{nilpotentbgc} Let $A$ be a $4 \times 4$ nilpotent matrix. The boundary generating curve for $A$ is defined by \begin{equation*}\begin{split} p_A(u,v,w)&=c_1 u^4+c_2 u^3 v+c_3 u^3w+ (c_1+c_4) u^2 v^2 +c_5 u^2 w^2\\+&c_6 u^2vw+c_2 uv^3+c_3uv^2w+c_4v^4+c_6v^3w+c_5 v^2w^2+w^4,\end{split}\end{equation*} where the coefficients $c_j$ are given below. \begin{align*} c_1&=-\frac{1}{16} \left( \operatorname{Tr}(A^3A^*)+\operatorname{Tr}(\left[A^*\right]^3 A)+ \operatorname{Tr}(A^2 \left[A^*\right]^2)+\frac{1}{2} \operatorname{Tr}(A^* A A^*A)-\frac{1}{2} \left[ \operatorname{Tr}(A A^*) \right]^2 \right).\\ \notag c_2&= \frac{i}{8} \left(\operatorname{Tr}(A^3 A^*)- \operatorname{Tr}(\left[A^*\right]^3 A) \right). \\ \notag c_3&= \frac{1}{8} \left(\operatorname{Tr}(\left[A^*\right]^2 A)+ \operatorname{Tr}(A^2 A^*) \right) \\ \notag c_4 &= \frac{1}{16} \left( \operatorname{Tr}(A^3 A^*) + \operatorname{Tr}(\left[A^*\right]^3 A) -\operatorname{Tr}(A^2 \left[A^*\right]^2) -\frac{1}{2} \operatorname{Tr}(A^* A A^* A) +\frac{1}{2}\left[ \operatorname{Tr}(A A^*) \right]^2 \right). \\ \notag c_5 &= -\frac{1}{4} \operatorname{Tr}(A A^*) \\ \notag c_6 &= \frac{i}{8} \left( \operatorname{Tr}( \left[A^*\right]^2 A) -\operatorname{Tr}(A^2 A^*)\right) .
\end{align*} \end{lemma} Proof of Lemma~\ref{nilpotentbgc}: Let $M$ be an $n \times n$ matrix with characteristic polynomial $$q(w)= \sum_{j=0}^n (-1)^j q_j w^{n-j}.$$ By Newton's Identities (see \cite{Bake}), if $q_0=1$, then the remaining coefficients ($m=1, \ldots, n$) satisfy \[ (-1)^m q_m=-\left( \frac{1}{m} \right) \sum_{j=0}^{m-1} (-1)^j \operatorname{Tr} \left( M^{m-j} \right) q_j. \] Applying these identities to $M=zA^*+\overline{z} A$ will yield the coefficients of the polynomial $$q(w)=q_0 w^4-q_1 w^3+q_2 w^2-q_3 w+q_4$$ where each $q_j$ will be a polynomial in $u$ and $v$. The polynomial $p_A$ will then be defined by $p_A(u,v,w)=q(-w)=q_0 w^4+q_1 w^3+q_2 w^2+q_3 w+q_4.$ Note that since $A$ is nilpotent, $\operatorname{Tr}(A^k)=\operatorname{Tr}([A^*]^k)=0$ for $k=1, 2, 3, 4$. The calculations below are also simplified with the identity $\operatorname{Tr} (M N)=\operatorname{Tr}(N M)$ for all $n \times n$ matrices $M$ and $N$. Thus $q_0=1$ and $q_1=\operatorname{Tr}(z A^* + \overline{z} A) q_0=0$. Next, \begin{equation*} q_2=-\frac{1}{2} \left( \operatorname{Tr}([z A^* + \overline{z} A]^2) q_0-\operatorname{Tr}[z A^* + \overline{z} A] q_1\right) =- \frac{1}{2} 2|z|^2 \operatorname{Tr}(A A^*)= -\frac{1}{4} (u^2 +v^2) \operatorname{Tr}(A A^*) . 
\end{equation*} \begin{align*} q_3&=\frac{1}{3} \left( \operatorname{Tr}([z A^* + \overline{z} A]^3) q_0-\operatorname{Tr}([z A^* + \overline{z} A]^2) q_1+ \operatorname{Tr}(z A^* + \overline{z} A) q_2 \right) \\ \notag &=\frac{1}{3}\left( \operatorname{Tr}([z A^* + \overline{z} A]^3) -\operatorname{Tr}([z A^* + \overline{z} A]^2) 0 + 0 q_2 \right) \\ \notag &=\frac{1}{3} \operatorname{Tr}([z A^* + \overline{z} A]^3) \\ \notag &=\frac{1}{3} \left( z^3 \operatorname{Tr}[A^*]^3+ z^2 \overline{z} \operatorname{Tr}([A^*]^2A+ A [A^*]^2 + A^* A A^*)\right) \\ \notag & +\frac{1}{3} \left( \overline{z}^2 z \operatorname{Tr}( A A^* A + A^2 A^* + A^* A^2) + \overline{z}^3 \operatorname{Tr}(A^3)\right) \\ \notag &=z^2 \overline{z} \operatorname{Tr}([A^*]^2A)+ \overline{z}^2 z \operatorname{Tr}( A^2 A^*) \\ \notag &=\left(\frac{u^3+iu^2v +uv^2+iv^3}{8}\right) \operatorname{Tr}([A^*]^2A)+\left(\frac{u^3-iu^2v+uv^2-iv^3}{8} \right) \operatorname{Tr}( A^2 A^*) \\ \notag &=\frac{u^3}{8} (\operatorname{Tr}([A^*]^2A)+\operatorname{Tr}( A^2 A^*)) + \frac{u^2v}{8} (i \operatorname{Tr}([A^*]^2A)-i \operatorname{Tr}( A^2 A^*) ) \\ \notag &+ \frac{uv^2}{8} (\operatorname{Tr}([A^*]^2A)+\operatorname{Tr}( A^2 A^*))+ \frac{v^3}{8} (i \operatorname{Tr}([A^*]^2A)-i \operatorname{Tr}( A^2 A^*) ). 
\notag \end{align*} Finally, \begin{align*} q_4&=-\frac{1}{4} \left\{ \operatorname{Tr}([z A^* + \overline{z} A]^4 )q_0-\operatorname{Tr}([z A^* + \overline{z} A]^3) q_1+ \operatorname{Tr}([z A^* + \overline{z} A]^2) q_2 -\operatorname{Tr}(z A^* + \overline{z} A) q_3 \right\} \\ \notag &=-\frac{1}{4} \left\{ \operatorname{Tr}([z A^* + \overline{z} A]^4) + \operatorname{Tr}([z A^* + \overline{z} A]^2) q_2 \right\} \\ \notag &=-\frac{1}{4} \left\{ \operatorname{Tr}([z A^* + \overline{z} A]^4) + 2|z|^2 \operatorname{Tr}(A A^*) (- |z|^2 \operatorname{Tr}(A A^*) ) \right\} \\ \notag &=-\frac{1}{4} \left\{ \operatorname{Tr}([z A^* + \overline{z} A]^4 - 2 |z|^4 \left[ \operatorname{Tr}(A A^*) \right]^2 \right\} \\ \notag &=-\frac{1}{4} \left\{ z^4 \operatorname{Tr}([A^*]^4) + z^3 \overline{z} \ 4 \operatorname{Tr}([A^*]^3 A) + |z|^4 ( 4 \operatorname{Tr}([A^*]^2 A^2) + 2 \operatorname{Tr}(A A^* A A^*) -2 [\operatorname{Tr}(A A^*)]^2) \right. \\ \notag &+ \left. \overline{z}^3 z \ 4 \operatorname{Tr}(A^3 A^*) + \overline{z}^4 \operatorname{Tr}(A^4)\right\}\\ \notag &= -\frac{1}{64} \left\{ (u+iv)^3 (u-iv) 4 \operatorname{Tr}([A^*]^3 A) +(u-iv)^3(u+iv) 4 \operatorname{Tr}(A^3 A^*) \right.\\ \notag &+ \left.(u^2+v^2)^2 ( 4 \operatorname{Tr}([A^*]^2 A^2) + 2 \operatorname{Tr}(A A^* A A^*) -2 [\operatorname{Tr}(A A^*)]^2) \right\} \\ \notag &= -\frac{1}{64} \left\{ (u^4+2 i u^3 v+ 2iu v^3-v^4) 4 \operatorname{Tr}([A^*]^3 A) +(u^4-2 i u^3 v- 2iu v^3-v^4) 4 \operatorname{Tr}(A^3 A^*) \right. \\ \notag &+ \left. 
(u^4+ 2 u^2v^2 +v^4)( 4 \operatorname{Tr}([A^*]^2 A^2) + 2 \operatorname{Tr}(A A^* A A^*) -2 [\operatorname{Tr}(A A^*)]^2)\right\} \\ \notag \end{align*} \begin{align*} &=u^4 \left(\frac{- 2\operatorname{Tr}([A^*]^3 A)- 2 \operatorname{Tr}(A^3 A^*)-2\operatorname{Tr}([A^*]^2 A^2) -\operatorname{Tr}(A A^* A A^*)+[\operatorname{Tr}(A A^*)]^2}{32}\right) \\ \notag &+ u^3 v\left(\frac{-i \operatorname{Tr}([A^*]^3 A)+i\operatorname{Tr}(A^3 A^*) }{8}\right)+ u^2 v^2 \left(\frac{-2\operatorname{Tr}([A^*]^2 A^2) -\operatorname{Tr}(A A^* A A^*)+[\operatorname{Tr}(A A^*)]^2}{16}\right) \\ \notag &+ uv^3 \left(\frac{i \operatorname{Tr}(A^3 A^*)-i\operatorname{Tr}([A^*]^3 A) }{8}\right) \\ \notag &+ v^4 \left(\frac{2\operatorname{Tr}([A^*]^3 A)+2 \operatorname{Tr}(A^3 A^*) -2\operatorname{Tr}([A^*]^2 A^2) -\operatorname{Tr}(A A^* A A^*)+[\operatorname{Tr}(A A^*)]^2 }{32}\right). \notag \end{align*} Now $p_A(u,v,w)=w^4+0\cdot w^3+q_2 w^2+ q_3 w +q_4$, and from this expression we can identify the coefficients of each of the degree 4 homogeneous terms in $u$, $v$, and $w$ as stated in the lemma. The $w^4$ term has coefficient 1 and all of the terms containing $w^3$ have coefficient 0. The terms containing $w^2$ are obtained from $q_2$ and clearly the $u^2 w^2$ and $v^2w^2$ coefficients are both $c_5=-\operatorname{Tr}(A A^*)/4$, while there is no $u v w^2$ term. The terms containing $w$ are obtained from $q_3$. Note that the coefficients of $u^3 w$ and $uv^2 w$ are equal to each other with the value $c_3$, while the coefficients of $u^2vw$ and $v^3w$ are equal to each other with the value $c_6$. For the terms without $w$, note that $c_1$ is the coefficient of $u^4$ in $q_4$ and $c_4$ is the coefficient of $v^4$ in $q_4$. In addition, the coefficient of $u^2 v^2$ in $q_4$ is exactly $c_1+c_4$. Finally, the coefficients of $u^3v$ and $uv^3$ are both equal to $c_2$. $\square$ Now we consider the condition where $p_A$ has a singularity.
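As an independent sanity check, the coefficient formulas of Lemma~\ref{nilpotentbgc} can be verified symbolically; in the following SymPy sketch, the strictly upper triangular (hence nilpotent) matrix is an arbitrary sample of our own choosing:

```python
import sympy as sp

u, v, w = sp.symbols('u v w', real=True)

# An arbitrary strictly upper triangular (hence nilpotent) 4-by-4 matrix.
A = sp.Matrix([
    [0, 1 + sp.I, 2,            3 - sp.I],
    [0, 0,        -1 + 2*sp.I,  sp.I    ],
    [0, 0,        0,            2 - sp.I],
    [0, 0,        0,            0       ],
])
As = A.H                      # conjugate transpose
tr = lambda M: M.trace()

# p_A(u, v, w) = det(z A* + conj(z) A + w I) with z = (u + iv)/2.
z = (u + sp.I*v) / 2
pA = sp.expand((z*As + sp.conjugate(z)*A + w*sp.eye(4)).det())

TA, TAs = tr(A**3 * As), tr(As**3 * A)
c1 = -sp.Rational(1, 16)*(TA + TAs + tr(A**2 * As**2)
                          + tr(As*A*As*A)/2 - tr(A*As)**2/2)
c2 = sp.I/8 * (TA - TAs)
c3 = sp.Rational(1, 8)*(tr(As**2 * A) + tr(A**2 * As))
c4 = sp.Rational(1, 16)*(TA + TAs - tr(A**2 * As**2)
                         - tr(As*A*As*A)/2 + tr(A*As)**2/2)
c5 = -tr(A*As)/4
c6 = sp.I/8 * (tr(As**2 * A) - tr(A**2 * As))

predicted = (c1*u**4 + c2*u**3*v + c3*u**3*w + (c1 + c4)*u**2*v**2
             + c5*u**2*w**2 + c6*u**2*v*w + c2*u*v**3 + c3*u*v**2*w
             + c4*v**4 + c6*v**3*w + c5*v**2*w**2 + w**4)
assert sp.expand(pA - predicted) == 0
```

The same comparison succeeds for other strictly upper triangular choices of $A$.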
\begin{lemma}\label{singularity} The homogeneous polynomial $p_A$ from Lemma~\ref{nilpotentbgc} has a singularity at $(u,v,w)$ if and only if \begin{align}\label{gensings} (4u^3+2 uv^2) c_1+ (3 u^2v+v^3) c_2+ (3u^2w+ v^2w) c_3+ (2 uv^2) c_4+ (2uw^2) c_5+ (2 uvw) c_6 &=0, \\ \notag (2u^2v) c_1+(u^3+ 3uv^2) c_2+ (2uvw )c_3+ (2u^2v+4v^3) c_4+ (2vw^2) c_5 +(u^2w+3v^2w)c_6&=0, \\ \notag (u^3+uv^2) c_3+ (2u^2w+2v^2w)c_5+(u^2v+v^3) c_6+4w^3&=0.\notag \end{align} \end{lemma} \begin{proof} By the definition of a singular point, the polynomial $p_A$ has a singularity at a point $(u,v,w)$ if and only if $$\frac{\partial}{\partial u} p_A(u,v,w)=\frac{\partial}{\partial v} p_A(u,v,w)=\frac{\partial}{\partial w} p_A(u,v,w)=0.$$ Using the form of $p_A$ from Lemma~\ref{nilpotentbgc} yields the equations in \eqref{gensings}. \end{proof} When $w=1$, the system \eqref{gensings} becomes the non-homogeneous system \begin{align}\label{affinesystem} (4u^3+2 uv^2) c_1+ (3 u^2v+v^3) c_2+ (3u^2+v^2) c_3+ (2 uv^2) c_4+ (2u) c_5+ (2 uv) c_6 &=0, \\ \notag (2u^2v) c_1+(u^3+ 3uv^2) c_2+ (2uv )c_3+ (2u^2v+4v^3) c_4+ (2v) c_5 +(u^2+3v^2)c_6&=0, \\ \notag (u^3+uv^2) c_3+ (2u^2+2v^2)c_5+(u^2v+v^3) c_6&=-4,\notag \end{align} from which the following special case is immediate. \begin{lemma}\label{singularityspec} The polynomial $p_A$ has a singularity at $(2,0,1)$ if and only if \begin{align*} 8c_1+ 3c_3+c_5 &=0, \\ 2c_2+c_6&=0, \\ 8c_3+8c_5&=-4. \end{align*} \end{lemma} The above condition is necessary for $F(A)$ to have a flat portion on the line $x=-1/2$. This system can be rewritten as \begin{equation}\label{cs} \left\{ \begin{aligned} c_3 &=-4c_1+\frac{1}{4}, \\ c_6&=-2c_2, \\ c_5&=4c_1-\frac{3}{4}. \end{aligned} \right. \end{equation} We can use this necessary condition to eliminate certain possibilities involving other flat portions. \begin{theorem}\label{twoflat} If $A$ is a 4-by-4 unitarily irreducible nilpotent matrix, then $\partial F(A)$ has at most two flat portions.
\end{theorem} \begin{proof} Assume $A$ is a 4-by-4 unitarily irreducible nilpotent matrix. The associated polynomial $p_A$ thus has the form given by Lemma~\ref{nilpotentbgc}. If there is at least one flat portion on the boundary of $F(A)$, we may rotate and scale $A$ so that there is a flat portion on the line $x=-\frac{1}{2}$. This flat portion corresponds to a singularity at $(2,0,1)$, so the system \eqref{cs} is satisfied. Thus only the variables $c_1$, $c_2$, and $c_4$ are free. Assume there is another flat portion on the line $ux+vy+1=0$. By Lemma~\ref{singatflat}, there is a singularity at this $(u,v,1)$, where $(u,v) \ne (2,0)$. For any such singularity, we can eliminate $c_3$, $c_5$, and $c_6$ from the necessary equations \eqref{affinesystem} using \eqref{cs} to obtain the system below. \begin{equation}\label{gensecondsing} \left\{ \begin{aligned} (4u^3+2 uv^2+8u-12u^2-4v^2) c_1+ (3 u^2v+v^3-4uv) c_2+ (2 uv^2) c_4 &=-\frac{1}{4}v^2+\frac{3}{2}u-\frac{3}{4}u^2, \\ (2u^2v-8uv+8v) c_1+ (u^3+ 3uv^2-2u^2-6v^2) c_2+ (2u^2v+4v^3) c_4&=-\frac{1}{2}uv+\frac{3}{2}v, \\ 4(2-u)(u^2+v^2) c_1-2v (u^2+v^2)c_2&=-4+\left(\frac{6-u}{4}\right)(u^2+v^2). \end{aligned} \right. \end{equation} If $v=0$ and $u \ne 2$ for the singular point $(u,v,1)$, then the corresponding flat portion is on the vertical line $x=-\frac{1}{u}$, and there are two parallel flat portions, which must be the only flat portions by Corollary~\ref{co:onlytwo}. If $u=0$ at the singularity, then the system above is consistent if and only if $v= \pm 2$. The point $(0, 2)$ could only correspond to a flat portion on the line $y=- \frac{1}{2}$ and the point $(0,-2)$ could only correspond to a flat portion on $y=\frac{1}{2}$. Each of these supporting lines is at a distance of $\frac{1}{2}$ from the origin. Therefore Proposition~\ref{p:nonpa} shows that if there are flat portions both on $x=-\frac{1}{2}$ and on either $y=\frac{1}{2}$ or $y=-\frac{1}{2}$, then there will only be these two flat portions.
Therefore in the remainder of the argument, we will assume that any singular points satisfy $u \ne 0$ and $v \ne 0$. To simplify row reductions in \eqref{gensecondsing}, put $c_4$ in the first column and $c_1$ in the third column. If the resulting matrix is row reduced using only the extra assumption that neither $u$ nor $v$ is zero, then we get the matrix $$\left( \begin{array}{cccc} 2 u v^2 & v^3+u (3 u-4) v & 2 (u-2) \left(v^2+2 (u-1) u\right) & -\frac{v^2}{4}-\frac{3}{4} (u-2) u \\ 0 & -2 v & 8-4 u & -\frac{u}{4}+\frac{3}{2}-\frac{4}{u^2+v^2} \\ 0 & -2v(u^2+v^2-u) & 4(2-u)(u^2+v^2-u) & \frac{3}{4}u^2+\frac{1}{2} v^2-\frac{3}{2}u \\ \end{array} \right).$$ If $u^2+v^2-u=0$, the system described above is inconsistent unless $\frac{3}{4}u^2+\frac{1}{2} v^2-\frac{3}{2}u=0$; the combination of those equations implies $u=0$ or $u=4$, where the former has already been ruled out and the latter is impossible since it would force $v^2=u-u^2<0$. Therefore we may assume $u^2+v^2-u \ne 0$ and thus obtain the row-equivalent matrix \begin{equation}\label{threebyfour}\left( \begin{array}{cccc} 2 u v^2 & v^3+u (3 u-4) v & 2 (u-2) \left(v^2+2 (u-1) u\right) & -\frac{v^2}{4}-\frac{3}{4} (u-2) u \\ 0 & v & 2(u-2) & \frac{u}{8}-\frac{3}{4}+\frac{2}{u^2+v^2} \\ 0 & 0 & 0 & \frac{\left(u^2+v^2-4\right) \left(u (u-2)^2+(u-4) v^2\right)}{4 \left(u^2+v^2\right)} \\ \end{array} \right). \end{equation} The matrix \eqref{threebyfour} corresponds to an inconsistent system unless either $u^2+v^2=4$ or $v^2=\frac{u(u-2)^2}{4-u}$. Any point $(u,v)$ corresponding to a flat portion must satisfy at least one of these conditions; in particular, if there are two flat portions besides the one on $x=-\frac{1}{2}$, each of the corresponding points must do so. When $u^2+v^2=4$, the line $ux+vy+1=0$ is at distance $\frac{1}{2}$ from the origin, the same as for the line $x=-\frac{1}{2}$ containing the original flat portion. Consequently, if there is a singularity with $u^2+v^2=4$, then the corresponding line contains the only other possible flat portion by Proposition~\ref{p:nonpa}.
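As a sanity check of the preceding elimination (not part of the proof), the following SymPy sketch verifies that the coefficient matrix of \eqref{gensecondsing} in $c_1,c_2,c_4$ is identically singular, that the system is consistent at a sample point of the circle $u^2+v^2=4$, and that it is inconsistent at a generic point violating both conditions; the sample points are our own choices:

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)

def augmented(pu, pv):
    # Augmented matrix of the system, columns ordered c1, c2, c4 | rhs.
    rows = [
        [4*u**3 + 2*u*v**2 + 8*u - 12*u**2 - 4*v**2,
         3*u**2*v + v**3 - 4*u*v,
         2*u*v**2,
         -v**2/4 + 3*u/2 - 3*u**2/4],
        [2*u**2*v - 8*u*v + 8*v,
         u**3 + 3*u*v**2 - 2*u**2 - 6*v**2,
         2*u**2*v + 4*v**3,
         -u*v/2 + 3*v/2],
        [4*(2 - u)*(u**2 + v**2),
         -2*v*(u**2 + v**2),
         0,
         -4 + (6 - u)*(u**2 + v**2)/4],
    ]
    return sp.Matrix(rows).subs({u: pu, v: pv})

# The coefficient matrix is singular for all u, v (rank at most 2):
assert sp.expand(augmented(u, v)[:, :3].det()) == 0

# Consistent at a sample point of the circle u^2 + v^2 = 4:
on_circle = augmented(sp.Rational(6, 5), sp.Rational(8, 5))
assert on_circle[:, :3].rank() == 2 and on_circle.rank() == 2

# Inconsistent at a generic point satisfying neither condition:
generic = augmented(1, 1)
assert generic[:, :3].rank() == 2 and generic.rank() == 3
```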
Therefore, there could only be three flat portions if there are two different pairs $(u_1, v_1)$ and $(u_2, v_2)$, each satisfying $v^2=\frac{u(u-2)^2}{4-u}$, for which the resulting $6 \times 4$ augmented matrix corresponds to a consistent system. For a given singularity $(u,v,1)$ with $v^2=\frac{u(u-2)^2}{4-u}$, lengthy calculations show that the matrix \eqref{threebyfour} is row equivalent to $$\left( \begin{array}{cccc} 1 & 0 &\frac{u-4}{u} & \frac{(u-6)(u-4)}{8u^2} \\ 0 & 1 & \frac{2 (u-2)}{v} & \frac{(u-2)(u-8)}{8 u v} \\ 0 & 0 &0 &0\\ \end{array} \right).$$ Therefore, if there are three flat portions on $\partial F(A)$, then there exist points $(u_1,v_1)$ and $(u_2, v_2)$ with neither $u_i=0$ nor $v_i=0$ for $i=1,2$ such that the matrix $M$ below corresponds to a consistent system, and consequently satisfies $\det(M)=0$. $$M=\left( \begin{array}{cccc} 1 & 0 &\frac{u_1-4}{u_1} & \frac{(u_1-6)(u_1-4)}{8u_1^2} \\ 0 & 1 & \frac{2 (u_1-2)}{v_1} & \frac{(u_1-2)(u_1-8)}{8 u_1 v_1} \\ 1 & 0 &\frac{u_2-4}{u_2} & \frac{(u_2-6)(u_2-4)}{8u_2^2} \\ 0 & 1 & \frac{2 (u_2-2)}{v_2} & \frac{(u_2-2)(u_2-8)}{8 u_2 v_2} \\ \end{array} \right).$$ Note that $M$ has the form $$M=\left( \begin{array}{cccc} 1 & 0 &a_1 & a_1c_1 \\ 0 & 1 & b_1 & b_1 d_1 \\ 1 & 0 &a_2 & a_2c_2 \\ 0 & 1 & b_2 & b_2d_2 \\ \end{array} \right),$$ from which it follows that $$\det(M)=b_2\left( (a_2-a_1)d_2+a_1c_1-a_2c_2\right)+b_1\left( (a_1-a_2)d_1+a_2c_2-a_1c_1\right).$$ Therefore, \begin{align*} \det(M)&= \frac{2 (u_2-2)}{v_2} \left(\left(\frac{u_2-4}{u_2}-\frac{u_1-4}{u_1}\right) \frac{(u_2-8)}{16 u_2 } + \frac{(u_1-6)(u_1-4)}{8u_1^2}- \frac{(u_2-6)(u_2-4)}{8u_2^2}\right)\\ &+ \frac{2 (u_1-2)}{v_1}\left( \left(\frac{u_1-4}{u_1}-\frac{u_2-4}{u_2}\right) \frac{(u_1-8)}{16 u_1 } + \frac{(u_2-6)(u_2-4)}{8u_2^2}- \frac{(u_1-6)(u_1-4)}{8u_1^2}\right).
\end{align*} Simplifying and extracting the common factor $\frac{(u_1-u_2)}{4 u_1^2 u_2^2}$ from both terms in parentheses shows that \begin{align*} \det(M)&=\frac{(u_1-u_2)}{4 u_1^2 u_2^2} \left[ \frac{2 (u_2-2)}{v_2} \left(-u_1(u_2-8)-12u_1-12u_2+5u_1u_2\right) \right.\\ &+\left. \frac{2 (u_1-2)}{v_1}\left(u_2(u_1-8)+12u_1+12u_2-5u_1u_2\right)\right]. \end{align*} If $u_1=u_2$, then $v_1^2=\frac{u_1(u_1-2)^2}{4-u_1}=\frac{u_2(u_2-2)^2}{4-u_2}=v_2^2$ and hence the singular points $(u_1, v_1)$ and $(u_1, v_2)$ result in flat portions that are the same distance $1/\sqrt{u_1^2+v_1^2}$ from the origin. Therefore these two flat portions cannot coexist with the original flat portion on $x=-1/2$ by Proposition~\ref{p:nonpa}. So the only remaining case that could lead to three flat portions on the boundary of $F(A)$ is that $\det(M)=0$ because \begin{equation}\label{deteq} \frac{(u_2-2)}{v_2}\left(u_1u_2-u_1-3u_2\right)= \frac{ (u_1-2)}{v_1}\left(u_1u_2-u_2-3u_1\right). \end{equation} Squaring both sides of \eqref{deteq} and replacing $ \frac{(u_j-2)^2}{v_j^2}$ with $\frac{4-u_j}{u_j}$ for $j=1,2$ results in $$\frac{(4-u_2)}{u_2}\left(u_1u_2-u_1-3u_2\right)^2= \frac{ (4-u_1)}{u_1}\left(u_1u_2-u_2-3u_1\right)^2,$$ and this implies that $$u_1(4-u_2)(u_1u_2-u_1-3u_2)^2-u_2(4-u_1)(u_1u_2-u_2-3u_1)^2=0.$$ However, the left side of the expression above is $4(u_1-u_2)^3$, and as mentioned previously, $u_1=u_2$ contradicts Proposition~\ref{p:nonpa}. \end{proof} \section{Case of unitarily reducible 5-by-5 matrices} \label{s:5x5} With Theorem~\ref{twoflat} at our disposal, it is not difficult to describe completely the situation with the flat portions on the boundary of $F(A)$ for nilpotent $5$-by-$5$ matrices $A$, provided that they are unitarily reducible. \begin{theorem}\label{th:5x5red} Let a $5$-by-$5$ matrix $A$ be nilpotent and unitarily reducible. Then there are at most two flat portions on the boundary of its numerical range.
Moreover, any number from $0$ to $2$ is actually attained by some such matrices $A$. \end{theorem} \begin{proof} Suppose first that $\ker A\cap\ker A^*\neq \{0\}$. Then $A$ is unitarily similar to $A_1\oplus [0]$, where $A_1$ is also nilpotent. Consequently, $F(A)=F(A_1)$, and the statement follows from Proposition~\ref{p:red} if $A_1$ is in its turn unitarily reducible, and from Theorem~\ref{twoflat} otherwise. Note that all three possibilities (no flat portions, one, or two flat portions on $\partial F(A)$) already materialize in this case. Suppose now that $\ker A\cap\ker A^*=\{0\}$. Then the only possible structure of matrices unitarily similar to $A$ is $A_1\oplus A_2$, with one $2$-by-$2$ and one $3$-by-$3$ block. Multiplying $A$ by an appropriate scalar and applying yet another unitary similarity if needed, we may without loss of generality suppose that \[ A_1= \begin{pmatrix} 0 & r \\ 0 & 0 \end{pmatrix}, \quad A_2= \begin{pmatrix} 0 & r_1 & r_2 \\ 0 & 0 & r_3 \\ 0 & 0 & 0 \end{pmatrix},\] where $r,r_1,r_3>0$, $r_2\geq 0$. The numerical range of $A_1$ is the circular disk of radius $r/2$ centered at the origin. If $r_2=0$, then $F(A_2)$ also is a circular disk centered at the origin \cite[Theorem~4.1]{KRS}, and $F(A)$, being the larger of the two disks, has no flat portions on its boundary. So, it remains to consider the case when all $r_j$ are positive. The distance from the origin to the supporting line of $F(A_2)$ forming the angle $\theta$ with the vertical axis equals the maximal eigenvalue of $\operatorname{Re} (e^{i\theta}A_2)$, that is, half of the largest root of the polynomial \eq{f} f_\theta(\lambda)=\lambda^3-\lambda (r_1^2+r_2^2+r_3^2)-2r_1r_2r_3\cos\theta.
\en Since $f_\theta$ is a monotonically increasing function of $\lambda$ for $\lambda\geq\lambda_0=\left(\frac{r_1^2+r_2^2+r_3^2}{3}\right)^{1/2}$, and since $f_\theta(\lambda_{0})\leq 0$ due to the inequality between the arithmetic and geometric means of $r_j^2$, the maximal root $\lambda(\theta)$ of $f_\theta$ is not smaller than $\lambda_0$. If $\cos\theta_1<\cos\theta_2$, then $f_{\theta_1}(\lambda(\theta_2))>0$, and so $\lambda(\theta_1)<\lambda(\theta_2)$. In other words, the maximal root of $f_\theta$ is a strictly monotonic function of $\theta$ both on $[0,\pi]$ and $[-\pi,0]$. So, the disk $F(A_1)$ will have exactly two common supporting lines with $F(A_2)$ when $r/2$ lies strictly between the minimal and maximal distance from the points of $\partial F(A_2)$ to the origin, and none otherwise. Further reasoning depends on whether or not the parameters $r_j$ are all equal. {\sl Case 1.} Among the $r_j$, at least two are distinct. According to the already cited Theorem~4.1 from \cite{KRS}, $F(A_2)$ has the so-called ``ovular shape''; in particular, there are no flat portions on its boundary. Then the flat portions on the boundary of $F(A)$ are exactly those lying on common supporting lines of $F(A_1)$ and $F(A_2)$, and so there are either two or none of them.
To be more specific, the distance from the origin to the supporting line at $\theta$ discussed above is (using Vi\`ete's formula) \begin{equation*} \lambda(\theta)/2=\sqrt{s/3} \cos \left( \frac{1}{3} \arccos \left( \frac{3 \sqrt{3} t \cos\theta}{s^{3/2}} \right) \right), \end{equation*} where $s=r_1^2+r_2^2+r_3^2$ and $t=r_1r_2r_3.$ Since the distance from the origin to the tangent line of the disk $F(A_1)$ is a constant $r/2$, there will be two values of $\theta$ (opposite to each other) for which these tangent lines coincide with supporting lines of $F(A_2)$ if and only if \begin{equation*} \sqrt{s/3} \cos \left( \frac{1}{3} \arccos \left( -\frac{3 \sqrt{3} t }{s^{3/2}} \right) \right)<\frac{r}{2}<\sqrt{s/3} \cos \left( \frac{1}{3} \arccos \left( \frac{3 \sqrt{3} t }{s^{3/2}} \right) \right), \end{equation*} and none otherwise. {\sl Case 2.} All $r_j$ are equal. The boundary generating curve $C(A_2)$ (see Section~\ref{s:proof} for the definition) is then a cardioid, appropriately shifted and scaled, as shown (yet again) in \cite[Theorem~4.1]{KRS}. Consequently, $\partial F(A_2)$ itself has a (vertical) flat portion, and we need to go into more details. To this end, suppose (without loss of generality) that $r_1=r_2=r_3=3$, and invoke the formula on p.~130 of \cite{KRS}, according to which $C(A_2)$ is given by the parametric equations \eq{car} x(\theta)=2\cos\theta+\cos 2\theta, \quad y(\theta) = 2\sin\theta+\sin 2\theta, \qquad \theta\in [-\pi,\pi]. \en The boundary of $F(A_2)$ is the union of the arc $\gamma$ of \eqref{car} corresponding to $\theta\in [-2\pi/3,2\pi/3]$ with the vertical line segment $\ell$ connecting its endpoints. The remaining portion of the curve \eqref{car} lies inside $F(A_2)$. Observe also that $\abs{x(\theta)+iy(\theta)}=\sqrt{5+4\cos\theta}$ is an even function of $\theta$ monotonically decreasing on $[0,\pi]$. Putting these pieces together yields the following: For $r\leq 3$, the disk $F(A_1)$ lies inside $F(A_2)$.
Thus, $F(A)=F(A_2)$ has one flat portion on the boundary. For $3<r\leq 2\sqrt{3}$ the circle $\partial F(A_1)$ intersects $\partial F(A_2)$ at two points of $\ell$. This results in two flat portions on $\partial F(A)$. For $2\sqrt{3}<r<6$ the circle $\partial F(A_1)$ intersects $\partial F(A_2)$ at two points of $\gamma$, while $\ell$ lies inside $F(A_1)$. This again results in two flat portions on $\partial F(A)$. Finally, if $r\geq 6$, then $F(A_2)$ lies in $F(A_1)$, so $F(A)=F(A_1)$ is a circular disk, and there are no flat portions on its boundary. \end{proof} The case where $r_1=r_2=r_3=3$ and $r=3.3$, which results in two flat portions caused by intersections between the circular disk and the numerical range of the $3$-by-$3$ nilpotent matrix, is shown in Figure~\ref{5by5twoflat}. The case where $r_1=r_2=r_3=r=3.3$, which results in one flat portion from the numerical range of the $3$-by-$3$ block, with the circular disk tangent from inside, is shown in Figure~\ref{5by5oneflat}. In both cases the cardioid boundary generating curve and the boundary of the numerical range of the $2$-by-$2$ matrix are included. \begin{figure}[h] \centering \includegraphics[scale=.4]{twoflatoneinsidepng} \caption{Numerical range of reducible $5$-by-$5$ matrix with two flat portions}\label{5by5twoflat} \end{figure} \begin{figure}[h] \centering \includegraphics[scale=.4]{oneflatpng} \caption{Numerical range of reducible $5$-by-$5$ matrix with one flat portion}\label{5by5oneflat} \end{figure} \clearpage
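Both the Vi\`ete-type expression for $\lambda(\theta)$ and the distances behind the Case 2 thresholds $r=3$, $r=2\sqrt3$, $r=6$ can be verified numerically. The sketch below is our own illustration; the sample parameters $r_1=1$, $r_2=2$, $r_3=3$ for the closed form are an arbitrary choice.

```python
import math

def lam_closed_form(theta, r1, r2, r3):
    # largest root of lam^3 - lam*s - 2*t*cos(theta), via the trigonometric
    # (Viete) solution of the depressed cubic; twice the supporting-line distance
    s = r1 ** 2 + r2 ** 2 + r3 ** 2
    t = r1 * r2 * r3
    return 2 * math.sqrt(s / 3) * math.cos(
        math.acos(3 * math.sqrt(3) * t * math.cos(theta) / s ** 1.5) / 3)

def cardioid_point(theta):
    # parametrization (car) of the boundary generating curve C(A_2)
    # for r1 = r2 = r3 = 3
    return (2 * math.cos(theta) + math.cos(2 * theta),
            2 * math.sin(theta) + math.sin(2 * theta))
```

In particular, the endpoints of the arc $\gamma$ at $\theta=\pm 2\pi/3$ have abscissa $-3/2$ (so the segment $\ell$ is at distance $3/2$ from the origin, giving the threshold $r=3$) and lie at distance $\sqrt3$ from the origin (threshold $r=2\sqrt3$), while the maximal distance $3$ at $\theta=0$ gives the threshold $r=6$.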
https://arxiv.org/abs/1511.08916
Numerical Ranges of 4-by-4 Nilpotent Matrices: Flat Portions on the Boundary
In their 2008 paper Gau and Wu conjectured that the numerical range of a 4-by-4 nilpotent matrix has at most two flat portions on its boundary. We prove this conjecture, establishing along the way some additional facts of independent interest. In particular, a full description of the case in which these two portions indeed materialize and are parallel to each other is included.
https://arxiv.org/abs/2202.13846
Improved bounds for acyclic coloring parameters
The acyclic chromatic number of a graph is the least number of colors needed to properly color its vertices so that none of its cycles has only two colors. The acyclic chromatic index is the analogous graph parameter for edge colorings. We first show that the acyclic chromatic index is at most $2\Delta-1$, where $\Delta$ is the maximum degree of the graph. We then show that for all $\epsilon >0$ and for $\Delta$ large enough (depending on $\epsilon$), the acyclic chromatic number of the graph is at most $\lceil(2^{-1/3} +\epsilon) {\Delta}^{4/3} \rceil +\Delta+ 1$. Both results improve long chains of previous successive advances. Both are algorithmic, in the sense that the colorings are generated by randomized algorithms. However, in contrast with extant approaches, where the randomized algorithms assume the availability of enough colors to guarantee properness deterministically, and use additional colors for randomization in dealing with the bichromatic cycles, our algorithms may initially generate colorings that are not necessarily proper; they only aim at avoiding cycles where all pairs of edges, or vertices, that are one edge, or vertex, apart in a traversal of the cycle are homochromatic (of the same color). When this goal is reached, they check for properness and if necessary they repeat until properness is attained.
\section{Introduction}\label{sec:intro} Let $\chi(G)$ denote the {\em chromatic number} of a graph, i.e., the least number of colors needed to color the vertices of $G$ in a way that no adjacent vertices are homochromatic. The {\em acyclic chromatic number} of a graph $G$, a notion introduced back in 1973 by Gr\"{u}nbaum \cite{grunbaum1973acyclic} and denoted here by $\chi_{a}(G)$, is the least number of colors needed to properly color the vertices of $G$ so that no cycle of even length is {\em bichromatic} (has only two colors). Notice that in any properly colored graph, no cycle of odd length can be bichromatic. The literature on the acyclic chromatic number for general graphs with arbitrary maximum degree $\Delta$ includes: \begin{itemize} \item Alon et al. \cite{alon1991acyclic} proved that $\chi_{a}(G) \leq \lceil 50 {\Delta}^{4/3} \rceil$. They also showed that there are graphs for which $\chi_{a}(G) = \Omega \Big( \frac{{\Delta}^{4/3}}{(\log {\Delta})^{1/3} }\Big)$. \item Ndreca et al. \cite{ndreca2012improved} proved that $ \chi_{a}(G) \leq \lceil 6.59 {\Delta}^{4/3} + 3.3 \Delta \rceil$. \item Sereni and Volec \cite{sereni2013note} proved that $$\chi_{a}(G) \leq (9/2^{5/3}) {\Delta}^{4/3} + \Delta < 2.835 \Delta^{4/3} + \Delta. $$ \item Finally, Gon\c{c}alves et al. \cite{gonccalves2014entropy} provided the best previous bound, namely that for $\Delta \geq 24,$ $$\chi_{a}(G) \leq (3/2) {\Delta}^{4/3} + \min\left(5\Delta -14, \Delta +\frac{8\Delta^{4/3}}{\Delta^{2/3} - 4}+1\right). $$ \end{itemize} There is also an extensive literature on special cases of graphs.
Here we show that for all $\epsilon >0$ and for $\Delta$ large enough (depending on $\epsilon$), $$\chi_{a}(G) \leq \lceil(2^{-1/3} +\epsilon) {\Delta}^{4/3} \rceil +\Delta+ 1.$$ With respect to edge coloring, the {\em chromatic index} of $G$, often denoted by $\chi'(G)$, is the least number of colors needed to properly color the edges of $G$, i.e., to color them so that no adjacent edges get the same color. It is known that the chromatic index of any graph is either $\Delta$ or $\Delta+1$ (Vizing \cite{vizing1965critical}). Nevertheless, observe that generating a proper edge coloring by successively coloring the edges, at each step choosing a color arbitrarily so that properness is not destroyed, and never changing a color once assigned, necessitates a palette of at least $2\Delta -1$ colors, because $2\Delta -2$ edges might be coincident with any given edge. The {\em acyclic chromatic index} of $G$, often denoted by $\chi'_{a}(G)$, is the least number of colors needed to properly color the edges of $G$ so that no cycle of even length is bichromatic. Notice again that in any properly edge-colored graph, no cycle of odd length can be bichromatic. It has been conjectured (J. Fiam\v{c}ik \cite{fiam} and Alon et al. \cite{alon2001acyclic}) that the acyclic chromatic index of any graph with maximum degree $\Delta$ is at most $\Delta +2$. Besides the numerous publications for special cases of graphs, the literature on the acyclic chromatic index for general graphs with maximum degree $\Delta$ includes: \begin{itemize} \item Alon et al. \cite{alon1991acyclic} proved $\chi'_{\alpha}(G) \leq 64\Delta$, Molloy and Reed improved this to $\chi'_{\alpha}(G) \leq 16\Delta$, and then Ndreca et al. \cite{ndreca2012improved} showed $\chi'_{\alpha}(G) \leq \lceil9.62(\Delta-1)\rceil$. Subsequently, \item Esperet and Parreau \cite{DBLP:journals/ejc/EsperetP13} proved that $\chi'_{\alpha}(G) \leq 4( \Delta -1)$.
\item The latter bound was improved to $\lceil3.74(\Delta -1)\rceil +1$ by Giotis et al. \cite{giotis2017acyclic}. Also, an improvement of the $4( \Delta -1)$ bound was announced by Gutowski et al. \cite{Gutowski2018} (the specific coefficient for $\Delta$ is not given in the abstract of the announcement). \item Finally, the best bound until now was given by Fialho et al. \cite{fialho2019new}, who proved that $\chi'_{\alpha}(G) \leq \lceil3.569(\Delta -1)\rceil +1$. \end{itemize} Here we show that $$\chi'_{\alpha}(G) \leq 2\Delta -1.$$ The most recent results from both groups above are based on the algorithmic proofs of the Lov\'{a}sz Local Lemma (LLL) by Moser \cite{moser2009constructive} and Moser and Tardos \cite{moser2010constructive}, which use an approach that has become known as the {\em entropy compression method}. The main difficulty in this approach is to prove the eventual halting of a randomized algorithm that successively and randomly assigns colors to the vertices, or edges in the case of edge coloring, unassigning some colors when a violation of the desired properties arises. Towards proving the eventual halting (actually proving that the expected duration of the process is constant), a structure called a {\em witness forest} is associated with the process, so that at every step, the history of the random choices made can be reconstructed from the current witness forest and the current coloring; the key observation is that the number of such forests (entropy) is not compatible, probabilistically, with the number of random choices made if the process lasted too long. For nice expositions, see Tao \cite{TT} and Spencer \cite{Sp}. It should be kept in mind that as the algorithm develops, dependencies are introduced between the colors of the vertices.
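The $2\Delta-1$ count for the naive greedy procedure recalled earlier (color the edges one by one, never recoloring) can be illustrated with a short first-fit sketch; this is our own illustration, not one of the paper's algorithms.

```python
from itertools import combinations

def greedy_edge_coloring(edges, K):
    # first-fit proper edge coloring: when an edge is colored, at most
    # 2*(Delta-1) coincident edges forbid colors, so a palette of
    # K = 2*Delta - 1 colors always leaves at least one admissible choice
    color = {}
    for u, v in edges:
        used = {c for (a, b), c in color.items() if {a, b} & {u, v}}
        color[(u, v)] = next(c for c in range(K) if c not in used)
    return color
```

For example, on the complete graph $K_5$ (where $\Delta=4$), the greedy procedure with $2\Delta-1=7$ colors always terminates with a proper edge coloring.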
Very roughly and in qualitative terms, to get our improvements on acyclic colorings, we again design a Moser-type algorithm, but {\it not} caring for properness, and only with the aim to avoid badly colored cycles, i.e., cycles where all pairs of vertices, or edges for edge coloring, that are an odd number of vertices, or edges, apart in a traversal of the cycle are homochromatic. Parenthetically, observe that in the case of properly colored graphs the notions of being badly colored and bichromatic for a cycle are equivalent, whereas if non-properness is allowed, being badly colored is the natural formulation of being bichromatic. With our approach, we avoid the need for a number of colors that guarantees that choices can be made in a way that does not destroy properness, plus additional colors to leave sufficient leeway for randomness. If the coloring generated by the Moser-type algorithm of our approach is not proper, we just repeat the process until properness is achieved. Of course, this rough sketch sweeps many things under the rug: for example, that we need to show that the probability that the coloring generated by the Moser-type algorithm is not proper is bounded away from 1. To prove this, we approach the correctness proof of the Moser-type algorithms not via the entropy compression method, but via a direct probabilistic argument, first introduced in Giotis et al. \cite{giotisanalco}. Also, during its Moser-type phases our algorithm ignores not only properness, but a stronger property that has to do with the colorings of 4-cycles. This is because the Moser-type algorithm does not work well for 4-cycles. This stronger properness notion is defined differently for the cases of vertex and edge colorings. It turns out that with the number of colors we assume to have, the coloring generated during a Moser-type phase has a positive probability to have the strong properness property.
\subsection{Notation and terminology} In the sequel, we give some general notions, and introduce the notation and terminology we use. Throughout this paper, $G$ is a simple graph with $l$ vertices and $m$ edges, and these parameters are considered {\it constant}. On the other hand, we denote by $n$ the number of steps an algorithm takes, and it is only with reference to $n$ that we make asymptotic considerations. The \emph{maximum degree} of $G$ is denoted by $\Delta$ and we assume, to avoid trivialities, that it is $>1$. A (simple) $k$-path is a succession $u_1, \ldots, u_k, u_{k+1}$ of $k+1 \geq 2$ distinct vertices any two consecutive of which are connected by an edge. A $k$-cycle is a succession of $k\geq 3$ distinct vertices $u_1, \ldots, u_k$ any two consecutive of which, as well as the pair $u_1, u_k$, are connected by an edge. A path (respectively, cycle) is a $k$-path (respectively, $k$-cycle) for some $k$. Vertices of a cycle or a path separated by an odd number of other vertices are said to have {\em equal parity}. Analogously, we define equal parity edges of a cycle. A vertex coloring of $G$ is an assignment of colors to its vertices selected from a given palette of colors. A vertex coloring is \emph{proper} if no neighboring vertices have the same color. We define analogously edge colorings and proper edge colorings (no coincident pair of edges is homochromatic). A path or a cycle of a properly vertex colored graph is called {\em bichromatic} if the vertices of the path or the cycle are colored by only two colors. Analogously for edge colorings. A proper coloring is {\em $k$-acyclic}, for some $k\geq 3$, if there are no bichromatic $k'$-cycles for any $k' \geq k$. A proper coloring is called acyclic if there are no bichromatic cycles of any length. Note that for a cycle to be bichromatic in a proper coloring, its length must be even.
The \emph{acyclic chromatic number} of $G$, denoted by $\chi_a(G)$, is the least number of colors needed to produce a proper, acyclic vertex coloring of $G$. Analogously, we define the \emph{acyclic chromatic index} of $G$, denoted by $\chi_a'(G)$, for edge colorings. In the algorithms of this paper, colorings that are not necessarily proper are constructed by independently selecting one color from a palette of $K$ colors, for suitable values of $K$, uniformly at random (u.a.r.). Thus, for any vertex $v\in V$, or edge $e \in E$ for the case of edge coloring, and any color $i\in\{1,\ldots, K\}$, \begin{equation}\label{eq:prob}\Pr[v \text{ (or } e) \text{ receives color }i]=\frac{1}{K}.\end{equation} We call such colorings {\em random colorings} (they are not necessarily proper). In all that follows, we assume the existence of some arbitrary (total, strict) ordering among all vertices, paths and cycles of the given graph, to be denoted by $\prec$. Of the two possible traversals of a cycle, we arbitrarily select one and call it {\it positive}. Given a vertex $v$ and a $2k$-cycle $C$ containing it, we define $C(v):=\{v=v_1^C,\ldots,v_{2k}^C\}$ to be the set of vertices of $C$ in the positive traversal starting from $v$. The two disjoint and equal cardinality subsets of $C(v)$ comprised of vertices of the same parity that are at even (odd, respectively) distance from $v$ are denoted by $C^0(v)$ ($C^1(v)$, respectively). We define analogously $C(e):=\{e=e_1^C,\ldots,e_{2k}^C\}, C^0(e)$ and $C^1(e)$, for the case of edge colorings and an edge $e$ of $C$. We call a cycle {\em badly colored} if each of its sets of equal parity vertices (or edges, for edge colorings) is monochromatic (has a single color). Notice that in the case of non-proper colorings, a badly colored cycle might have all its vertices (edges) of the same color. Also, bichromaticity of a cycle does not imply that its coloring is bad.
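In code, the badly colored condition for an even cycle is simply that each parity class is monochromatic; a minimal helper (our own illustration) operating on the colors listed along a traversal:

```python
def is_badly_colored(colors):
    # colors of the vertices (or edges) of an even cycle, listed along a
    # traversal; badly colored iff both parity classes are monochromatic
    return len(set(colors[0::2])) == 1 and len(set(colors[1::2])) == 1
```

Note that an all-equal color list also qualifies, illustrating the remark that in a non-proper coloring a badly colored cycle may be monochromatic.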
We define the {\em scope} of $C(v)$ to be the set $\{v=v_1^C,\ldots,v_{2k-2}^C\}$, i.e., all but the last two of the vertices of $C$ in a traversal starting from $v$. Analogously, we define the scope of $C(e)$ to be the set of all but the last two edges in a traversal starting from $e$. Roughly, the reason we introduce this notion is that if we recolor the scope of a badly colored cycle, all information about its being badly colored is lost, and thus the randomness of the colors present before discovering that the cycle is badly colored is re-established. In the following sections, edge and vertex colorings will be investigated separately. We start with edge coloring, because we consider, perhaps quite subjectively, that the corresponding result is more interesting. \paragraph*{Probabilistic considerations.} In all algorithms of this work, except where we explicitly take another probabilistic approach, we assume that we are given a sequence $\rho$ of color choices (irrespective of the edges, or vertices, those colors will be assigned to) that are selected independently and u.a.r. from the palette; then each examined algorithm assigns colors to those edges, or vertices, that are selected by its commands, following successively the color choices of $\rho$. We always assume that $\rho$ is long enough to carry out the execution of the algorithm until it halts, assuming it ever halts. Probabilistic considerations are made in relation to the space of such sequences (except when another space is explicitly considered). \section{Acyclic edge colorings}\label{sec:aec} In this section, the term ``coloring'' refers to edge coloring, even in the absence of the specification ``edge''. We assume that we have $K=\lceil(2+\epsilon)(\Delta-1)\rceil$ colors at our disposal, where $\epsilon> 0$ is an arbitrarily small constant. We show that this number of colors suffices to algorithmically construct, with positive probability, a proper, acyclic edge coloring for $G$.
Therefore, since for any $\Delta$, there exists a constant $\epsilon>0$ such that $\lceil(2+\epsilon)(\Delta-1)\rceil\leq 2(\Delta-1)+1=2\Delta-1$, it follows that $\chi_a'(G)\leq 2\Delta-1$. We now give a cornerstone result proven by Esperet and Parreau~\cite{DBLP:journals/ejc/EsperetP13}: \begin{lemma}[Esperet and Parreau~\cite{DBLP:journals/ejc/EsperetP13}]\label{lem:sufficientcolors} At any step of any successive coloring of the edges of a graph, there are at most $2 (\Delta-1)$ colors that should be avoided in order to produce a proper 4-acyclic coloring. \end{lemma} \begin{proof}[Proof Sketch] Notice that for each edge $e$, one has to avoid the colors of all edges adjacent to $e$, and moreover for each pair of homochromatic edges $e_1, e_2$ adjacent to $e$ at different endpoints (which contribute one to the count of colors to be avoided), one has also to avoid the color of the at most one edge $e_3$ that together with $e,e_1, e_2$ defines a cycle of length 4. Thus the total count of colors to be avoided does not exceed the number of edges adjacent to $e$, which is at most $2(\Delta-1)$.\end{proof} We now give the following definition: \begin{definition}\label{def:strongp} We call a coloring {\em strongly proper} if it is proper and 4-acyclic. \end{definition} So, by the above lemma, $2\Delta-1$ colors are sufficient to produce a strongly proper coloring, by choosing colors successively for all edges in any way such that at each step strong properness is not destroyed. \subsection{\textsc{EdgeColor}}\label{ssec:color} We start by presenting below the algorithm \textsc{EdgeColor}. \begin{algorithm}[!hb]\label{alg:ec} \caption{\textsc{EdgeColor}} \vspace{0.1cm} \begin{algorithmic}[1] \For{each $e\in E$}\label{ec:for} \State Choose a color for $e$ from the palette, independently for each $e$, and u.a.r.
\Statex \hspace{2.5em}(not caring for properness)\label{ec:color} \EndFor\label{ec:endfor} \While{there is an edge contained in a badly colored cycle of even length $\geq 6,$ \Statex \hspace{3.6em} let $e$ be the least such edge and $C$ be the least such cycle and}\label{ec:while} \State \textsc{Recolor}($e,C$)\label{ec:recolor} \EndWhile\label{ec:endwhile} \State \textbf{return} the current coloring \end{algorithmic} \begin{algorithmic}[1] \vspace{0.1cm} \Statex \underline{\textsc{Recolor}($e,C$)}, where $C = C(e)=\{e=e_1^C,\ldots,e_{2k}^C\}$, $k\geq 3$. \For{$i = 1,\ldots,2k-2 $}\label{recolor:for} \State Choose a color for $e_i^C$ independently and u.a.r. (not caring for properness)\label{recolor:color} \EndFor\label{recolor:endfor} \While{there is an edge in $\mathrm{sc}(C(e)) = \{e_1^C,\ldots,e_{2k-2}^C\}$ contained in a badly \Statex \hspace{3.6em} colored cycle of even length $\geq 6,$ let $e'$ be the least such edge and \Statex \hspace{3.6em} $C'$ the least such cycle and}\label{recolor:while} \State \textsc{Recolor}($e',C'$)\label{recolor:recolor} \EndWhile\label{recolor:endwhile} \end{algorithmic} \end{algorithm} Each iteration of a call of the \textsc{Recolor}\ procedure from line \ref{ec:recolor} of the algorithm \textsc{EdgeColor}, which entails coloring all but two edges of a cycle, is called a {\em phase}. Phases are nested. The number of steps of a phase is at most $m-2$ (recall that $m$ denotes the number of edges). In the sequel, we count phases rather than color assignments. Because the number $m$ of edges of the graph is constant, this does not affect the form of the asymptotics of the number of steps. \begin{importantrmk*} Notice that \textsc{EdgeColor}\ introduces dependencies between the colors, since choosing the least edge and cycle means that all previous edges, with respect to the assumed ordering, do not belong to a badly colored cycle.
We will deal with this problem by introducing an algorithm which, instead of choosing cycles, takes as input a succession of cycles possibly generated by \textsc{EdgeColor}\ and validates that this could indeed be the sequence of cycles generated by \textsc{EdgeColor}\ (see Algorithm \textsc{EdgeValidation}\ in Subsection \ref{ssec:vale}). \end{importantrmk*} Notice also that \textsc{EdgeColor}\ may not halt, and perhaps worse, even if it stops, it may generate a coloring that is not strongly proper. However, it is obvious, because of the {\bf while}-loops in the main part of \textsc{EdgeColor}\ and in the procedure \textsc{Recolor}, that if the algorithm halts, then it outputs a coloring with no badly colored cycles of even length $\geq 6$. So in the \textsc{MainAlgorithm-Edges}\ that follows, we repeat \textsc{EdgeColor}\ until the desired coloring is obtained. \begin{algorithm}[ht]\label{alg:main} \caption{\textsc{MainAlgorithm-Edges}} \vspace{0.1cm} \begin{algorithmic}[1] \State Execute \textsc{EdgeColor}\ and if it stops, let $\mathcal{C}$ be the coloring it generates. \While{$\mathcal{C}$ is not strongly proper} \State Execute \textsc{EdgeColor}\ anew and if it halts, set $\mathcal{C}$ to be the newly \Statex \hspace{2em} generated coloring \EndWhile \end{algorithmic} \end{algorithm} Obviously \textsc{MainAlgorithm-Edges}, {\it if and when} it stops, outputs a proper acyclic coloring. The rest of the paper is devoted to computing the probability distribution of the number of steps it takes. We prove the following progression lemma, which shows that every time a \textsc{Recolor}($e,C$) procedure terminates, some progress has indeed been made, which is then preserved in subsequent phases.
\begin{lemma}\label{lem:progr} Consider an arbitrary call of \textsc{Recolor}($e,C$) and let $\mathcal{E}$ be the set of edges that at the beginning of the call are not contained in a cycle of even length $\geq 6$ with homochromatic edges of the same parity. Then, if and when that call terminates, no such edge in $\mathcal{E}\cup \{e\}$ exists. \end{lemma} \begin{proof} Suppose that \textsc{Recolor}($e,C$) terminates and there is an edge $e'\in\mathcal{E}\cup\{e\}$ contained in a cycle of even length $\geq 6$ whose edges of the same parity are homochromatic. If $e'=e$, then by line \ref{recolor:while}, \textsc{Recolor}($e,C$) could not have terminated. Thus, $e'\in\mathcal{E}$. Since at the beginning of the call $e'$ was not contained in a cycle with homochromatic edges of the same parity, it must be the case that at some point during this call, some cycle with $e'$ among its edges turned into one whose edges of the same parity are homochromatic, because of some call of \textsc{Recolor}. Consider the last time this happened and let \textsc{Recolor}($e^*,C^*$)\ be the causing call. Then, there is some cycle $C'$ of even length $\geq 6$ and with $e'\in C'$, such that the recoloring of the edges of $C^*$ resulted in $C'$ having all edges of the same parity homochromatic and staying such until the end of the \textsc{Recolor}($e,C$) call. Then there is at least one edge $e''$ contained in both $C^*$ and $C'$ that was recolored by \textsc{Recolor}($e^*,C^*$). By line \ref{recolor:while} of \textsc{Recolor}($e^*,C^*$), this procedure could not terminate, and thus neither could \textsc{Recolor}($e,C$), a contradiction. \end{proof} By Lemma \ref{lem:progr}, we get: \begin{lemma}\label{lem:root} There are at most $m$ (the number of edges of $G$, i.e., a constant) repetitions of the {\bf while}-loop of the main part of \textsc{EdgeColor}. \end{lemma} However, a {\bf while}-loop of \textsc{Recolor}\ or \textsc{MainAlgorithm-Edges}\ could last infinitely long.
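As a toy illustration of the resampling mechanism (ours, not part of the paper's formal development), consider the degenerate case where the graph consists of a single even cycle: the only cycle to watch is the cycle itself, and \textsc{Recolor}\ amounts to resampling its scope until it is no longer badly colored.

```python
import random

def is_badly_colored(colors):
    # both parity classes monochromatic
    return len(set(colors[0::2])) == 1 and len(set(colors[1::2])) == 1

def edge_color_one_cycle(two_k, K, seed=0):
    # toy instance of EdgeColor on a single 2k-cycle: color all edges
    # u.a.r., then, while the cycle is badly colored, resample its scope
    # (all but the last two edges of the traversal)
    rng = random.Random(seed)
    colors = [rng.randrange(K) for _ in range(two_k)]
    phases = 0
    while is_badly_colored(colors):
        for i in range(two_k - 2):
            colors[i] = rng.randrange(K)
        phases += 1
    return colors, phases
```

Since a resampled scope is badly colored again with probability only $1/K^{2k-2}$, the loop terminates after very few phases in practice, in line with the inverse-exponential tail proved below for the general algorithm.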
In the next subsection we analyze the distribution of the number of steps they take. \subsection{Analysis of the algorithms}\label{sec:analysis} In this subsection we will prove the following two facts: \begin{fact} \label{factt1} The probability that \textsc{EdgeColor}\ lasts at least $n$ phases is inverse exponential in $n$, i.e., there exist a constant integer $n_0$ and a constant $c\in (0,1)$ such that if $n \geq n_0$, this probability is at most $c^n$.\end{fact} \begin{fact} \label{factt2} The probability that the {\bf while}-loop of \textsc{MainAlgorithm-Edges}\ is repeated at least $n$ times is inverse exponential in $n$.\end{fact} From the above two facts, {\it yet to be proved}, and because $\epsilon$ in the number of colors $\lceil(2+\epsilon)(\Delta-1)\rceil$ of the palette is an arbitrary positive constant, we get Theorem \ref{maintheorem} below and its corollary, Corollary \ref{maincorollary}, the main results of this section. \begin{theorem}\label{maintheorem} Assume that the number of edges $m$ and the number of vertices $l$ of a graph are considered constants and that there are $2\Delta-1$ colors available, where $\Delta $ is the maximum degree of the graph. Then the probability that \textsc{MainAlgorithm-Edges}\ lasts at least $n^2$ steps is inverse exponential in $n$. \end{theorem} \begin{proof} By Fact \ref{factt1}, the probability that one of the first $n$ repetitions of the {\bf while}-loop of \textsc{MainAlgorithm-Edges}\ entails an execution of \textsc{EdgeColor}\ with $n$ or more phases is inverse exponential in $n$. The result now follows by Fact \ref{factt2}. \end{proof} Therefore: \begin{corollary}\label{maincorollary} $2\Delta -1$ colors suffice to properly and acyclically color the edges of a graph. \end{corollary} The possible successions of edges that are colored by \textsc{EdgeColor}\ are depicted by graph structures called {\em feasible forests\/}. These structures are described next.
\subsection{Feasible forests} \label{ssec:forests} We will depict an execution of \textsc{EdgeColor}\ organized in phases with a rooted forest, that is, an acyclic graph whose connected components (trees) all have a designated vertex as their root. We label the vertices of such forests with pairs $(e,C)$, where $e$ is an edge and $C$ a $2k$-cycle containing $e$, for some $k\geq 3$. If a vertex $u$ of a forest $\mathcal{F}$ is labeled by $(e,C)$, we will sometimes say that $e$ is the \emph{edge-label} and $C$ the \emph{cycle-label} of $u$. The number of nodes of a forest $\mathcal{F}$ is denoted by $|\mathcal{F}|$. \begin{definition}\label{def:forest} A labeled rooted forest $\mathcal{F}$ is called \emph{feasible}, if the following two conditions hold: \begin{itemize} \item[i.] Let $e$ and $e'$ be the edge-labels of two distinct vertices $u$ and $v$ of $\mathcal{F}$. Then, if $u,v$ are both roots of $\mathcal{F}$ or are siblings (i.e., they have a common parent) in $\mathcal{F}$, then $e$ and $e'$ are distinct. \item[ii.] If $(e,C)$ is the label of a vertex $u$ that is not a leaf, where $C$ has half-length $k\geq 3$, and $e'$ is the edge-label of a child $v$ of $u$, then $e'\in \mathrm{sc}(C(e)) = \{e_1^C,\ldots,e_{2k-2}^C\}$. \end{itemize} \end{definition} Notice that because the edge-labels of the roots of the trees are distinct, a feasible forest has at most as many trees as the number $m$ of edges. Given an execution of \textsc{EdgeColor}\ with at least $n$ phases, we construct a feasible forest with $n$ nodes by creating one node $u$ labeled by $(e,C)$ for each phase. We structure these nodes according to the order their labels appear in the recursive stack implementing \textsc{EdgeColor}: the children of a node $u$ labeled by $(e,C)$ correspond to the recursive calls of \textsc{Recolor}\ made by line \ref{recolor:recolor} of \textsc{Recolor}($e,C$), with the leftmost child corresponding to the first such call and so on.
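The phase order encoded in such a forest can be read off by a depth-first traversal that respects the order of roots and siblings; a minimal sketch (with a hypothetical nested-tuple node representation of our own choosing):

```python
def label_sequence(forest):
    # forest: list of root nodes in root order; each node is a pair
    # (label, children), with children listed left to right
    out = []

    def visit(node):
        label, children = node
        out.append(label)        # record the node's (edge, cycle) label
        for child in children:
            visit(child)         # then descend into its subtrees in order
    for root in forest:
        visit(root)
    return out
```

Running this on a two-tree forest reproduces the order in which the corresponding phases were opened by the recursive stack.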
We order the roots and the siblings of the forest according to the order they were examined by \textsc{EdgeColor}. By traversing $\mathcal{F}$ in a depth-first fashion, respecting the ordering of roots and siblings, we obtain the \emph{label sequence} $\mathcal{L}(\mathcal{F})=(e_1,C_1),\ldots,(e_{|\mathcal{F}|},C_{|\mathcal{F}|})$ of $\mathcal{F}$. \begin{definition}\label{def:fr} Given a finite sequence $\rho$ of independent and u.a.r. color-choices, let $\mathcal{F}_{\rho}$ be the uniquely defined feasible forest generated by \textsc{EdgeColor}\ if it follows the random choices $\rho$, assuming it halts. \end{definition} Let $P_n$ be the probability that \textsc{EdgeColor}\ lasts at least $n$ phases, and let $Q_n$ be the probability that \textsc{EdgeColor}\ lasts for fewer than $n$ phases and the coloring generated is not strongly proper. In the next subsection, we will compute upper bounds for these probabilities. \subsection{Validation Algorithm}\label{ssec:vale} We now give the validation algorithm: \begin{algorithm}[H]\label{alg:vale} \caption{\textsc{EdgeValidation}($\mathcal{F}$)} \vspace{0.1cm} \begin{algorithmic}[1] \Statex \underline{Input:} $\mathcal{L}(\mathcal{F})=(e_1,C_1),\ldots,(e_{|\mathcal{F}|},C_{|\mathcal{F}|}): \ C_i(e_i)=\{e_i=e_1^{C_i},\ldots,e_{2k_i}^{C_i}\}$. \State Color the edges of $G$, independently and selecting for each a color u.a.r. from \Statex \hspace{1em} $\{1,\ldots,K\}$.\label{val:ass} \For{$i=1,\ldots,|\mathcal{F}|$} \label{val:for} \If{$C_i$ is badly colored} \State Recolor the edges in $\mathrm{sc}(C_i(e_i)) = \{e_1^{C_i},\ldots,e_{2k_i-2}^{C_i}\}$ by selecting colors \Statex \hspace{3.6em} independently and u.a.r.
\Else \State \textbf{return} {\tt failure} and \textbf{exit} \EndIf \EndFor \label{val:endfor} \State \textbf{return} {\tt success} \end{algorithmic} \end{algorithm} We call each iteration of the \textbf{for}-loop of lines \ref{val:for}--\ref{val:endfor} a {\em phase} of \textsc{EdgeValidation}($\mathcal{F}$). These phases, unlike those of \textsc{EdgeColor}, are not nested. Given a sequence $\rho$ of color choices (made independently from the palette u.a.r.) and a feasible $\mathcal{F}$, we say that \textsc{EdgeValidation}$(\mathcal{F})$, if executed following $\rho$, is {\em successful} if it goes through all cycles of $\mathcal{L}(\mathcal{F})$ without reporting {\tt failure}. Let $V_{\mathcal{F}}$ be the event comprised of sequences of color choices $\rho$ such that \textsc{EdgeValidation}$(\mathcal{F})$, if executed following $\rho$, is successful. \begin{lemma}\label{lem:random} The following hold: \begin{itemize} \item[a.] For every feasible $\mathcal{F}$, the coloring generated at the end of a successful phase of the execution of \textsc{EdgeValidation}$(\mathcal{F})$ is random, i.e. every coloring can be equiprobably generated. Formally, if $c$ denotes a coloring of the graph, then for any phase of the execution of $$\textsc{EdgeValidation}(\mathcal{F}),$$ the probability of the event $E(c;\mathcal{F})$ that at the end of the phase the coloring $c$ is generated, conditional on the phase being successful, is the same for all colorings $c$. \item[b.] For every feasible $\mathcal{F}$, if $$\mathcal{L}(\mathcal{F})=(e_1,C_1),\ldots,(e_{|\mathcal{F}|},C_{|\mathcal{F}|})$$ and if $C_i$ has half-length $k_i\geq 3$, $i=1,\ldots,|\mathcal{F}|$, then $$\Pr[V_{\mathcal{F}}]=\prod_{i=1}^{|\mathcal{F}|}\Bigg(\frac{1}{K^{(2k_i-2)}}\Bigg).$$ \item[c.] Given any finite collection $\frak{F}= \{\mathcal{F}_1, \ldots, \mathcal{F}_k\}$ of feasible forests, consider it as a uniformly distributed probability space.
Then consider the probability space $\Omega_{\frak{F}}$ comprised of pairs $(\rho, \mathcal{F})$, $\mathcal{F} \in \frak{F}$, such that \textsc{EdgeValidation}$(\mathcal{F})$ is successful when $\rho$ is followed. Then, over $\Omega_{\frak{F}}$, the coloring $c$ generated when \textsc{EdgeValidation}$(\mathcal{F})$\ is executed for some $\mathcal{F} \in \frak{F}$ is random. Formally, the probability of the event $E(c; \frak{F})$ comprised of pairs $(\rho, \mathcal{F}) \in \Omega_{\frak{F}}$ such that the coloring $c$ is generated if \textsc{EdgeValidation}$(\mathcal{F})$ is executed following $\rho$ is the same for all $c$. \end{itemize} \end{lemma} \begin{proof} For the first statement, observe that at each successful phase of \textsc{EdgeValidation}$(\mathcal{F})$, all colors in the scope of the corresponding cycle are replaced by random choices of colors. The second statement is an immediate corollary of the first and the fact that for a cycle of half-length $k$, the probability that it is badly colored under a random coloring is $\frac{1}{K^{(2k-2)}}$. The third statement is again a corollary of the first statement, since it is true for every fixed $\mathcal{F} \in \frak{F}$. We had to consider a space whose elements are pairs $(\rho, \mathcal{F})$, because the same coloring can be obtained by executing \textsc{EdgeValidation}\ with input two different forests. \end{proof} \begin{remark*} A random coloring can be generated by choosing a color from the palette u.a.r. for each edge. \end{remark*} Consider the probability space $\Omega_n$ of pairs $(\rho, \mathcal{F})$, where $|\mathcal{F}| <n$ and \textsc{EdgeValidation}$(\mathcal{F})$ is successful when $\rho$ is followed, and let $\hat{{\rm Q}}_n$ be the event over $\Omega_n$ that \textsc{EdgeValidation}$(\mathcal{F})$ generates a coloring that is not strongly proper.
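To make the above concrete, here is a toy Python rendering of \textsc{EdgeValidation} (the data representation is our own: a cycle is a tuple of its $2k$ edge identifiers in traversal order from the pivot edge, and a coloring is a dictionary), together with an exhaustive check of the factor $\frac{1}{K^{(2k-2)}}$ of part b.\ of Lemma \ref{lem:random} for a single cycle of half-length $k=3$:

```python
import random
from itertools import product

def badly_colored(cycle, coloring):
    # both alternating (equal parity) edge classes of the cycle are monochromatic
    return (len({coloring[e] for e in cycle[0::2]}) == 1
            and len({coloring[e] for e in cycle[1::2]}) == 1)

def edge_validation(label_seq, edges, K, rng):
    """Succeed only if every cycle in the label sequence is badly colored
    when examined; if so, resample its scope (all but its last two edges)."""
    coloring = {e: rng.randrange(K) for e in edges}
    for cycle in label_seq:
        if badly_colored(cycle, coloring):
            for e in cycle[:-2]:          # sc(C(e)): the first 2k-2 edges
                coloring[e] = rng.randrange(K)
        else:
            return None                   # failure
    return coloring                       # success

# Exact count of bad colorings for one 6-edge cycle (k = 3):
K, cyc = 3, tuple(range(6))
bad = sum(badly_colored(cyc, dict(enumerate(cols)))
          for cols in product(range(K), repeat=6))
# bad == K**2: each parity class is monochromatic in K ways, so Pr[V_F]
# for this one-node forest is K**2 / K**6 = 1 / K**(2k-2).
```

A Monte Carlo run of `edge_validation` on this one-node forest should succeed with frequency close to $1/K^4 = 1/81$.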
\begin{lemma} \label{lem:boundQn} We have that: $$Q_n \leq \hat{Q}_n \leq 1 - \left( 1 -\left( \frac{2}{2+\epsilon}\right) \right)^m.$$ \end{lemma} \begin{proof} For the first inequality, first observe that the expressions in the lhs and in the middle above are probabilities over different spaces. The expression in the lhs is a probability over the space of sequences $\rho$ of independent and u.a.r. color choices, whereas the expression in the middle is a probability over the space $\Omega_n$ defined above. To show the inequality, given $\rho$ such that \textsc{EdgeColor}\ halts in less than $n$ steps and generates a coloring that is not strongly proper, consider $(\rho, \mathcal{F}_{\rho}) \in \Omega_n$, where $\mathcal{F}_{\rho}$ is the forest of Definition \ref{def:fr}. The required follows because \textsc{EdgeValidation}$(\mathcal{F}_{\rho})$ is successful when executed following $\rho$, and the coloring it then generates is not strongly proper.

For the second inequality, first observe that from the cornerstone result of Esperet and Parreau given in Lemma~\ref{lem:sufficientcolors}, we have that the probability that a random coloring is proper and does not contain a bichromatic 4-cycle is at least $\left( 1 -\left( \frac{2}{2+\epsilon}\right)\right)^m$. {\color{black} Recall now that by the last part of Lemma~\ref{lem:random}, the coloring generated when \textsc{EdgeValidation}$(\mathcal{F})$\ is successfully executed on input some $\mathcal{F}$ of length less than $n$ is random. Thus, the probability in $\Omega_n$ that the coloring generated is not strongly proper is at most $1-\left( 1 -\left( \frac{2}{2+\epsilon}\right)\right)^m$.} Observe that the bound of $\hat{Q}_n$ (and therefore of $Q_n$ as well) is independent of $n$. \end{proof}

Now, Fact \ref{factt2} follows from Lemma \ref{lem:boundQn}, since $Q_n$ is bounded away from 1 by an amount independent of $n$. To prove Fact \ref{factt1}, we first let $\hat{P}_n$ be the probability that \textsc{EdgeValidation}($\mathcal{F}$) succeeds for at least one $\mathcal{F}$ with $n$ nodes. \begin{lemma}\label{lem:boundPn} We have that $P_n \leq \hat{P}_n \leq \sum_{|\mathcal{F}|=n}\Pr[V_{\mathcal{F}}]$. \end{lemma} \begin{proof} For the first inequality, consider an execution of \textsc{EdgeColor}\ with sequence of random choices $\rho$ that lasts at least $n$ phases, and let $\mathcal{F}_{\rho}$ be the corresponding feasible forest. Execute now \textsc{EdgeValidation}($\mathcal{F}_{\rho}$) making the random choices in $\rho$; every cycle of $\mathcal{L}(\mathcal{F}_{\rho})$ is badly colored when examined, so this execution is successful. The second inequality is immediate from the union bound.\end{proof}

So all that remains to be proved to complete the proof of Fact \ref{factt1} is to show that $\sum_{|\mathcal{F}|=n}\Pr[V_{\mathcal{F}}]$ is inverse exponential in $n$. We do this in the next subsection, expressing the sum as a recurrence. \subsection{The recurrence}\label{ssec:rec} We will estimate $\sum_{|\mathcal{F}|=n}\Pr[V_{\mathcal{F}}]$ by purely combinatorial arguments. Towards this end, we first define the weight of a forest, denoted by $\|\mathcal{F}\|$, to be the number $$\prod_{i=1}^{|\mathcal{F}|}\Bigg(\frac{1}{K^{(2k_i-2)}}\Bigg),$$ where $k_i$ is the half-length of the cycle-label of the $i$-th node of $\mathcal{F}$ (recall that $|\mathcal{F}|$ denotes the number of nodes of $\mathcal{F}$), and observe that by Lemma~\ref{lem:random} \begin{equation}\label{eq:norm}\sum_{|\mathcal{F}|=n}\Pr[V_{\mathcal{F}}] = \sum_{|\mathcal{F}|=n}\|\mathcal{F}\|.\end{equation} From the definition of a feasible forest, such a forest is comprised of at most as many trees as the number $m$ of edges. For $j=1, \ldots, m$, let $\mathcal{T}_j$ be the set of all possible feasible trees whose root has as edge-label the edge $e_j$, together with the empty tree. Assume that the weight of the empty tree is one, i.e.
$\|\emptyset \| = 1$ (of course, the number of nodes of the empty tree is 0, i.e. $|\emptyset| =0$). Let also $\mathcal{T}$ be the collection of all $m$-ary sequences $(T_1,\ldots,T_m)$ with $T_j \in \mathcal{T}_j$. Now, obviously:\begin{align}\label{eq:trees} \sum_{|\mathcal{F}|=n}\|\mathcal{F}\|& =\sum_{\substack{(T_1,\ldots,T_m)\in\mathcal{T} \\ |T_1|+ \cdots +|T_m| =n}}\|T_1\|\cdots\|T_m\| \nonumber\\ & = \sum_{\substack{n_1+\cdots+n_m=n\\ n_1,\ldots,n_m\geq 0}}\Bigg(\Big(\sum_{\substack{T_1\in\mathcal{T}_1:\\|T_1|=n_1}}\|T_1\|\Big)\cdots\Big(\sum_{\substack{T_m\in\mathcal{T}_m:\\|T_m|=n_m}}\|T_m\|\Big)\Bigg).\end{align} We will now obtain a recurrence for each factor of the rhs of \eqref{eq:trees}. Let:\begin{equation}\label{eq:q} q= \frac{\Delta-1}{K} = \frac{\Delta-1}{\lceil(2+\epsilon)(\Delta-1)\rceil}. \end{equation} \begin{lemma}\label{lem:rrec} Let $\mathcal{T}^e$ be any one of the $\mathcal{T}_j$. Then:\begin{equation}\label{Te} \sum_{\substack{T\in\mathcal{T}^e \\ |T| =n}}\|T\|\leq R_n, \end{equation} where $R_n$ is defined as follows:\begin{equation}\label{Qn} R_n:=\sum_{k\geq 3}q^{2k-2}\Bigg(\sum_{\substack{n_1+\cdots+n_{2k-2}=n-1\\ n_1,\ldots,n_{2k-2}\geq 0}}R_{n_1}\ldots R_{n_{2k-2}}\Bigg) \end{equation} and $R_0=1$. \end{lemma} \begin{proof} Indeed, the result is obvious if $n=0$, because the only possible $T$ is the empty tree, which has weight 1. Now if $n>0$, observe that there are at most $(\Delta-1)^{2k-2}$ possible cycles with $2k$ edges, for some $k \geq 3$, that can be the cycle-label of the root of a tree $T\in \mathcal{T}^e$ with $|T| >0$. Since each node whose cycle-label has half-length $k$ contributes a factor $\left(\frac{1}{K}\right)^{2k-2}$ to the weight, so that the root contributes at most $(\Delta-1)^{2k-2}\left(\frac{1}{K}\right)^{2k-2}=q^{2k-2}$ in total, the lemma follows by induction on $n$. \end{proof} \subsection{The solution of the recurrence}\label{ssec:analysisrec} For the solution we will follow the technique presented by Flajolet and Sedgewick in \cite[Proposition IV.5, p. 278]{Flajolet:2009:AC:1506267}.
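Before solving the recurrence analytically, \eqref{Qn} can be evaluated numerically (a sketch of our own; the sum over $k$ is truncated at a level {\tt KMAX}, which is harmless since $q<1/2$, and the inner sum over compositions is computed as a coefficient of a truncated power of the series $\sum_j R_j z^j$):

```python
q = 0.4                      # sample value; the text only needs q < 1/2
N, KMAX = 6, 40              # compute R_0..R_N, truncating the sum over k

def conv(a, b, trunc):
    """Truncated product of two power series given as coefficient lists."""
    c = [0.0] * (trunc + 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            if i + j <= trunc:
                c[i + j] += x * y
    return c

R = [1.0]                    # R_0 = 1
for n in range(1, N + 1):
    s = n - 1                # the 2k-2 subtree sizes must sum to n - 1
    A = (R + [0.0] * (s + 1))[:s + 1]   # series sum_j R_j z^j, truncated
    power, j, total = [1.0] + [0.0] * s, 0, 0.0
    for k in range(3, KMAX + 1):
        while j < 2 * k - 2:            # raise power to A**(2k-2)
            power = conv(power, A, s)
            j += 1
        # power[s] is the sum over compositions of s into 2k-2 parts
        total += q ** (2 * k - 2) * power[s]
    R.append(total)
```

By hand, \eqref{Qn} gives $R_1=\sum_{k\geq 3}q^{2k-2}=\frac{q^4}{1-q^2}$ and, since a composition of $1$ into $2k-2$ parts just places the $1$ in one of $2k-2$ positions, $R_2=R_1\sum_{k\geq 3}(2k-2)q^{2k-2}$; the loop reproduces both up to the (negligible) truncation error.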
Towards this, we will first find the Ordinary Generating Function (OGF) of the sequence $R_n$. For technical reasons (simpler computations), we will find the OGF $R(z)$ where for $n=0,\ldots, \infty$ the coefficient of $z^{n+1}$ is $R_{n}$ and the constant term is 0. Since $R_0=1$, the coefficient of $z$ in $R(z)$ is 1. Multiply both sides of \eqref{Qn} by $z^{n+1}$ and sum for $n= 1, \dots, \infty$ to get \begin{equation}\label{Q} R(z) -z = \sum_{k\geq 3}\Bigg(q^{2k-2} zR(z)^{2k-2}\Bigg). \end{equation} Letting $R := R(z)$ we get: \begin{equation}\label{W2} R= z\left(\sum_{k\geq 2}\left(q^{2k}R^{2k}\right)+1\right) =z\left(\frac{(qR)^4}{1-(qR)^2}+1\right). \end{equation} Now, set: \begin{equation}\label{eq:phi} \phi(x) =\frac{(qx)^4}{1-(qx)^2} +1, \end{equation} to get from \eqref{W2}: \begin{equation}\label{Wphi} R= z\phi(R). \end{equation} Observe now that: \begin{itemize} \item $\phi$ is a function analytic at 0 with nonnegative Taylor coefficients (with respect to $x$), \item $\phi(0) \neq 0$, \item the radius of convergence $r$ of the series representing $\phi$ at 0 is $1/q$ and $\lim_{x \rightarrow r^-} \frac{x\phi'(x)}{\phi(x)}= +\infty$,\end{itemize} so all the hypotheses to apply \cite[Proposition IV.5, p. 278]{Flajolet:2009:AC:1506267} are satisfied. Therefore, $[z^n]R \bowtie {\rho}^n$, i.e. $\limsup \ ([z^n]R)^{1/n} = \rho$, where $\rho=\phi'(\tau)$, and $\tau$ is the (necessarily unique) solution of the {\em characteristic equation}: \begin{equation} \label{eq:char} \frac{\tau\phi'(\tau)}{\phi(\tau)} =1 \end{equation} within $(0,r) = (0, 1/q)$ (for the asymptotic notation ``$\bowtie$'' see \cite[IV.3.2, p. 243]{Flajolet:2009:AC:1506267}). Within the interval $(0,r) = (0, 1/q)$, the characteristic equation given in \eqref{eq:char} reduces to a quadratic one with unique solution $\tau = \frac{\sqrt{5} -1}{2q}$. The value of $\phi'(\tau)$ is easily computed to be $2q$, which, since $q<1/2$, is $<1$. Therefore, by \cite[Proposition IV.5, p.
278]{Flajolet:2009:AC:1506267}, the sequence $[z^n]R$ is inverse exponential in $n$. By the above, and since there are at most $n^m$ sequences $n_1,\ldots,n_m$ of nonnegative integers that add up to $n$ and since $m$ is considered to be constant, we get, using first the second inequality of Lemma \ref{lem:boundPn}, then Equations \eqref{eq:norm} and \eqref{eq:trees}, and finally Lemma~\ref{lem:rrec}, that: \begin{equation}\label{finall} \hat{{\rm P}}_n \bowtie \rho^n, \end{equation} where $\rho$ is a positive constant $<1$. By Equation \eqref{finall} and the first inequality of Lemma \ref{lem:boundPn}, we get Fact \ref{factt1}, and thus the proofs of Theorem \ref{maintheorem} and its corollary, Corollary \ref{maincorollary}, our main results of this section, are completed. \section{Acyclic vertex coloring}\label{sec:avc} In this section, since we only deal with vertex colorings, we often avoid the specification ``vertex'' for colorings (unless the context is ambiguous). It is known that the acyclic chromatic number is $O(\Delta^{4/3})$ and that this result is optimal within a logarithmic factor (see Introduction). Intuitively, the reason we need asymptotically more colors for acyclic vertex coloring than for acyclic edge coloring is essentially the fact that given a vertex, we have a choice of $k-1$ other vertices to form a $k$-cycle, whereas given an edge we have $k-2$ choices of other edges (the last edge is uniquely determined), so the probability of selecting a vertex should be smaller to offset the larger number of choices. We will prove the following (the main result of this section): \begin{theorem}\label{thm:mainv} For all $\alpha >2^{-1/3}$ and for $\Delta$ large enough (depending on $\alpha$), the acyclic chromatic number of the graph is at most $K := \lceil \alpha{\Delta}^{4/3} \rceil +\Delta+ 1.$ \end{theorem} Again the main idea is to design a Moser-type algorithm that ignores properness (actually a stronger notion, as we will see immediately below).
Since several theorems, and their proofs, are very analogous to the edge coloring case, we will often give only an outline of, or even omit, proofs when they are fully analogous to their edge coloring counterparts. As in the case of edge coloring, the stronger properness condition involves 4-cycles. However, the stronger properness condition will not force all 4-cycles not to be bichromatic, as in edge coloring (recall Definition \ref{def:strongp}). Rather, besides properness, it will require that any two vertices $u$ and $v$ that are opposite vertices of many 4-cycles are differently colored. In the next subsection we formalize this notion. \subsection{Special pairs}\label{ssec:special} We define the notion of special pairs originally introduced by Alon et al. \cite{alon1991acyclic}. Gon\c{c}alves et al. \cite{gonccalves2014entropy} generalized this notion and proved about it results of which we make strong use. The notion of special pairs is useful for us for two reasons: on one hand, 4-cycles through a given vertex that forms a special pair with its opposing vertex can be handled directly with respect to bichromaticity; on the other hand, although 4-cycles through a given vertex that does not form a special pair with its opposing vertex are commoner, their number (see Lemma \ref{lem:4-cycles}) allows them to be handled with respect to bichromaticity by a Moser-type algorithm. Below, we follow the notation and terminology of Gon\c{c}alves et al., slightly adjusted to our needs. We give in detail the relevant definitions and proofs. Given a vertex $u$, let $N(u)$ and $N^2(u)$, respectively, denote the set of vertices at distance one and two, respectively, from $u$.
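For concreteness, with graphs given as adjacency lists (a representation of our own choosing, for illustration only), these neighborhoods and the common-neighbor counts $|N(u)\cap N(v)|$ used below can be computed as:

```python
def neighborhoods(adj, u):
    """N(u): vertices at distance exactly one from u;
    N2(u): vertices at distance exactly two from u."""
    n1 = set(adj[u])
    n2 = {w for v in n1 for w in adj[v]} - n1 - {u}
    return n1, n2

def common_neighbors(adj, u, v):
    # |N(u) & N(v)|, the quantity on which the order among N2(u) is based
    return len(set(adj[u]) & set(adj[v]))
```

On a 4-cycle $0$--$1$--$2$--$3$, for instance, $N(0)=\{1,3\}$, $N^2(0)=\{2\}$, and $|N(0)\cap N(2)|=2$.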
Among the vertices in $N^2(u)$ define a strict total order $\prec_u$ as follows: $v_1 \prec_u v_2$ if either $|N(u)\cap N(v_1)| < |N(u)\cap N(v_2)|$, or alternatively $|N(u)\cap N(v_1)| = |N(u)\cap N(v_2)|$ and $v_1$ precedes $v_2$ in the ordering $\prec$ between vertices we assumed to exist. \begin{definition}[Gon\c{c}alves et al. \cite{gonccalves2014entropy}]\label{def:special} A pair $(u,v)$ of vertices such that $v \in N^2(u)$ is called an $\alpha$-special pair if $v$ belongs to the at most $\lceil\alpha \Delta^{4/3}\rceil$ highest, in the sense of $\prec_u$, elements of $N^2(u)$. The set of vertices $v$ for which $(u, v)$ forms an $\alpha$-special pair is denoted by $S_{\alpha}(u)$. Also, $N^2(u)\setminus S_{\alpha}(u)$ is denoted by $\overline{S_{\alpha}(u)}$. \end{definition} It is possible that $v \in S_{\alpha}(u)$ but $u \in \overline{S_{\alpha}(v)}$. Also by definition, \begin{equation}\label{eq:S}|S_{\alpha}(u)| = \min(\lceil\alpha\Delta ^{4/3}\rceil, |N^2(u)|). \end{equation} We now give the proof of the following, which is essentially the proof presented by Gon\c{c}alves et al. \cite{gonccalves2014entropy}.\begin{lemma}[Gon\c{c}alves et al. {\cite[Claim 11]{gonccalves2014entropy}}]\label{lem:4-cycles}\label{lem:basicgoncalves} For all vertices $u$, there are at most $\frac{\Delta ^{8/3}}{8\alpha}$ 4-cycles that contain $u$ but contain no vertex $v \in S_{\alpha}(u)$. \end{lemma} \begin{proof}Let $d$ be an integer such that \begin{align} \mbox{if } v \in S_{\alpha}(u) \mbox{ then } |N(u)\cap N(v)| &\geq d \mbox{ and } \label{eq:more}\\ \mbox{if } v \in \overline{S_{\alpha}(u)} \mbox{ then } |N(u)\cap N(v)| &\leq d.
\label{eq:less} \end{align} Now, because 4-cycles that contain $u$ and a given $v \not\in S_{\alpha}(u)$ are in one-to-one correspondence with a subset of the at most $\binom{|N(u) \cap N(v)|}{2}$ pairs of distinct edges from $u$ to $N(u) \cap N(v)$, and because of Equation \eqref{eq:less}, we conclude that the 4-cycles through $u$ whose opposing vertex is not in $S_{\alpha}(u)$ are at most $\sum_{v \in \overline{S_{\alpha}(u)} } \binom{|N(u) \cap N(v)|}{2}\leq (1/2)d \sum_{v \in \overline{S_{\alpha}(u)}} |N(u) \cap N(v)|.$ Assume now that $\lceil\alpha \Delta ^{4/3} \rceil \leq |N^2(u)|$, and therefore, by Equation \eqref{eq:S}, that $|S_{\alpha}(u)| = \lceil\alpha\Delta ^{4/3}\rceil$ (otherwise all vertices in $N^2(u)$ are special and so there is nothing to prove). Observe that because there are at most $\Delta^2$ edges between $N(u)$ and $N^2(u)$, and because of Equation \eqref{eq:more} above, $$\sum_{v \in \overline{S_{\alpha}(u)}} |N(u) \cap N(v)| \leq \Delta^2 - d |S_{\alpha}(u)| \leq \Delta^2 - d \alpha \Delta ^{4/3},$$ and therefore the number of 4-cycles through $u$ whose opposing vertex $v \not\in S_{\alpha}(u)$ is at most $(1/2)d(\Delta^2 - d \alpha \Delta ^{4/3})$, a quadratic in $d$ whose maximum is $\frac{\Delta^{8/3}}{8\alpha}.$ \end{proof}

We now give the following definition. \begin{definition}\label{def:special2} We call a coloring $\alpha$-specially proper, if for any two vertices $u,v$ such that $v$ is a neighbor of $u$ or $v \in S_{\alpha}(u)$, $u$ and $v$ are differently colored. \end{definition} We have that: \begin{lemma} \label{lem:mainrandom} For any graph with maximum degree $\Delta$ and any positive $\alpha$, a random coloring from a palette with $ \lceil \alpha{\Delta}^{4/3}\rceil + \Delta+ 1$ colors is $\alpha$-specially proper with positive probability.
\end{lemma} \begin{proof} Given a vertex $u$, there are at most $\lceil \alpha \Delta^{4/3} \rceil$ vertices forming an $\alpha$-special pair with $u$ and also at most $\Delta$ neighbors of $u$. Therefore a palette with at least $\lceil\alpha{\Delta}^{4/3}\rceil + \Delta + 1$ colors suffices for a random coloring to have positive probability of avoiding, for $u$, all the colors of its neighbors and of the vertices that form a special pair with $u$. Since vertices are assigned their colors independently, the probability that a random coloring is $\alpha$-specially proper is positive (recall at this point that all parameters except the number of steps of the algorithms are considered constant). \end{proof} \subsection{The Moser part of the proof}\label{sec:moser} In this section we will show that, for any $\alpha>2^{-1/3}$, $\lceil \alpha{\Delta}^{4/3}\rceil + \Delta+ 1$ colors suffice to color the vertices of a graph in a way that, although it may produce a coloring that is not $\alpha$-specially proper (or not even proper), nevertheless succeeds in the following: for every vertex $u$, for all 4-cycles that contain $u$ such that the vertex opposing $u$ does not form an $\alpha$-special pair with $u$ (these are the commoner 4-cycles), as well as for all cycles of length at least 6 that contain $u$, not both equal parity sets are monochromatic, i.e. not all equal parity pairs of vertices are homochromatic. For this we will need to assume that the maximum degree of the graph is at least as large as an integer depending on $\alpha$ (but not depending on the graph). In what follows, assume we have a palette of $\lceil \alpha{\Delta}^{4/3}\rceil + \Delta +1$ colors, where $\alpha>2^{-1/3}$. Let also $\mathcal{B}$ be the set comprised (i) of all $4$-cycles whose opposing vertices do not form $\alpha$-special pairs and (ii) of all $5$-paths, that is, paths containing five edges and six vertices. Recall that the elements of $\mathcal{B}$ are ordered according to $\prec$.
Given a set $B\in\mathcal{B}$, a {\em pivot} vertex $u$ of $B$ is any vertex in $B$ if $B$ is a 4-cycle, or any of $B$'s endpoints if $B$ is a 5-path. In the former case, let $B(u):=\{u=u_1^B,\ldots,u_4^B\}$ be the set of consecutive vertices of $B$ in its positive traversal beginning from $u$, while in the latter case let $B(u):=\{u=u_1^B,\ldots,u_6^B\}$ be the set of consecutive vertices of $B$ starting from $u$. Also let $|B| = 4 \mbox{ or } 6$ be the number of vertices of $B$. Given a pivot vertex $u$ of a set $B\in\mathcal{B}$, we define the \emph{scope} of $B(u)$ to be the set $\mathrm{sc}(B(u)):=\{u_1^B,\ldots,u_{k-2}^B\}$, where $k=4$ or $6$. In the sequel, we call {\em badly colored\/} the sets in $\mathcal{B}$ both of whose equal parity sets are monochromatic. Consider now $\textsc{VertexColor}$, Algorithm \ref{alg:vc} defined below. \begin{algorithm}[ht] \caption{\textsc{VertexColor}}\label{alg:vc} \vspace{0.1cm} \begin{algorithmic}[1] \For{each $u\in V$} \State Choose a color from the palette, independently for each $u$, and u.a.r. \EndFor \While{there is a pivot vertex of a badly colored set in $\mathcal{B}$, let $u$ be the least such \Statex \hspace{3.6em} vertex and $B$ be the least such set and}\label{main:loop} \State \hspace{1.4em}\textsc{Recolor}($u,B$)\label{main:rec} \EndWhile \State \textbf{return} the current coloring \end{algorithmic} \begin{algorithmic}[1] \vspace{0.1cm} \Statex \underline{\textsc{Recolor}($u,B$)}, $B(u)=\{u=u_1^B,\ldots,u_k^B\}$, $k=4$ or $k=6$. \State Choose a color independently for each $v\in\mathrm{sc}(B(u))$, and u.a.r.
\While{there is a vertex in $\mathrm{sc}(B(u))$ which is a pivot vertex of a badly colored \Statex \hspace{3.6em} set in $\mathcal{B}$, let $u'$ be the least such vertex and $B'$ be the least such \Statex \hspace{3.6em} set and}\label{rec:loop} \State \hspace{1.4em}\textsc{Recolor}($u',B'$)\label{rec:rec} \EndWhile \end{algorithmic} \end{algorithm} \begin{remark} As in edge coloring, note that \textsc{VertexColor}\ introduces dependencies between the colors, since choosing the least vertex $u$ and set $B$ means that all previous, with respect to the assumed ordering, vertices are not pivot vertices of badly colored sets of $\mathcal{B}$. As in the case of edge coloring, we deal with this problem by introducing a validation algorithm. \end{remark} Because the colorings generated by \textsc{VertexColor}\ need not be $\alpha$-specially proper, we introduce: \begin{algorithm}[ht] \caption{\textsc{MainAlgorithm-Vertices}}\label{alg:mainv} \vspace{0.1cm} \begin{algorithmic}[1] \State Execute \textsc{VertexColor}\ and if it stops, let $c$ be the coloring it generates. \While{$c$ is not $\alpha$-specially proper}\label{ln:mainwhile} \State Execute \textsc{VertexColor}\ anew and if it halts, set $c$ to be the newly \Statex \hspace{2em} generated coloring \EndWhile \end{algorithmic} \end{algorithm} The following two lemmas are analogous to Lemmas \ref{lem:progr} and \ref{lem:root}, respectively. We do not prove them, nor do we comment on them, as both their proofs and their role should be clear by analogy. \begin{lemma}\label{lem:progrr} Let $\mathcal{V}$ be the set of vertices that, at the beginning of some call of \textsc{Recolor}($u,B$), are not pivot vertices of any badly colored set in $\mathcal{B}$. Then, if and when that call terminates, no vertex in $\mathcal{V}\cup \{u\}$ is a pivot vertex of a badly colored set in $\mathcal{B}$. \end{lemma} \begin{lemma}\label{lem:roott} There are at most $l$, the number of vertices of $G$, i.e.
a constant, repetitions of the {\bf while}-loop of line \ref{main:loop} of the main part of \textsc{VertexColor}. \end{lemma} Let $P_n$ be the probability that \textsc{VertexColor}\ lasts at least $n$ phases, and $Q_n$ be the probability that \textsc{VertexColor}\ lasts for less than $n$ phases and the coloring generated is not $\alpha$-specially proper. We will now show that for any $\alpha> 2^{-1/3}$, there is an integer $\Delta_{\alpha}$ such that for any graph whose maximum degree is at least $\Delta_{\alpha}$, the following two facts hold: \begin{fact}\label{fact3} The probability ${\rm P}_n$ is inverse exponential in $n$. \end{fact} \begin{fact}\label{fact4}The probability that the {\bf while}-loop of \textsc{MainAlgorithm-Vertices}\ is repeated at least $n$ times is inverse exponential in $n$. \end{fact} From the above two facts, {\it yet to be proved}, the proof of Theorem \ref{thm:mainv} follows. As in edge coloring, we depict the phases of an execution of the algorithm $\textsc{VertexColor}$ with a labeled rooted forest, the witness structure. The labels are pairs $(u,B)$, where $u$ is a pivot vertex of $B\in\mathcal{B}$. We call $u$ the vertex-label and $B$ the set-label of the node of the tree. \begin{definition}\label{def:forest2} A labeled rooted forest $\mathcal{F}$ is called feasible, if the following conditions hold: \begin{itemize} \item[i.] Let $u$ and $v$ be the vertex-labels of two distinct nodes $x$ and $y$ of $\mathcal{F}$. If $x$ and $y$ are both either roots of $\mathcal{F}$ or siblings in $\mathcal{F}$, then $u$ and $v$ are distinct. \item[ii.] If $(u,B)$ is the label of an internal node $x$ of the forest, the vertex-labels of the children of $x$ comprise the set $\mathrm{sc}(B(u))$.
\end{itemize} \end{definition} As in edge coloring, we order the roots and the siblings of a feasible forest and, traversing it depth-first, we set: $$\mathcal{L}(\mathcal{F}):=((u_1,B_1),\ldots,(u_{|\mathcal{F}|},B_{|\mathcal{F}|})).$$ We also associate a feasible forest with every execution of \textsc{VertexColor}\ (the forest may differ for different executions). We now give $\textsc{VertexValidation}$, Algorithm \ref{alg:vv} below. \begin{algorithm}[!ht] \caption{\textsc{VertexValidation}($\mathcal{F}$)}\label{alg:vv} \vspace{0.1cm} \begin{algorithmic}[1] \Statex \underline{Input:} Feasible forest $\mathcal{F}$, where $\mathcal{L}(\mathcal{F})=(u_1,B_1),\ldots,(u_{|\mathcal{F}|},B_{|\mathcal{F}|})$. \State Color the vertices of $G$, independently and selecting for each a color u.a.r. \Statex \hspace{1em} from the palette. \For{$i=1,\ldots,|\mathcal{F}|$}\label{vv:for} \If{$B_i$ is badly colored} \State Recolor the vertices in $\mathrm{sc}(B_i(u_i))$ independently by selecting for each a \Statex \hspace{3.6em} color u.a.r. from the palette.\label{vv:recolor} \Else \State \textbf{return} {\tt failure} and \textbf{exit} \EndIf \EndFor \label{vv:endfor} \State \textbf{return} {\tt success} \end{algorithmic} \end{algorithm} Let $V_{\mathcal{F}}$ be the event comprised of sequences of color choices $\rho$ such that \textsc{VertexValidation}$(\mathcal{F})$, if executed following $\rho$, is successful. The following three Lemmas \ref{lem:vrandom}, \ref{lem:vboundQn} and \ref{lem:vboundPn} are analogous to the corresponding ones for edge coloring, namely Lemmas \ref{lem:random}, \ref{lem:boundQn} and \ref{lem:boundPn}, respectively, so we only sketch the proofs where they deviate from the ones about edge coloring. \begin{lemma}\label{lem:vrandom} The following hold: \begin{itemize} \item[a.] For every feasible $\mathcal{F}$, the coloring generated at the end of every successful phase of the execution of \textsc{VertexValidation}$(\mathcal{F})$ is random. \item[b.]
\label{ite:sec} For every feasible $\mathcal{F}$, if $$\mathcal{L}(\mathcal{F})=(u_1,B_1),\ldots,(u_{|\mathcal{F}|},B_{|\mathcal{F}|}),$$ then $$\Pr[V_{\mathcal{F}}]=\prod_{i=1}^{|\mathcal{F}|}\Bigg(\frac{1}{K^{(|B_i|-2)}}\Bigg).$$ \item[c.] Given any finite collection $\frak{F}= \{\mathcal{F}_1, \ldots, \mathcal{F}_k\}$ of feasible forests, the coloring $c$ generated when \textsc{VertexValidation}$(\mathcal{F})$ is successfully executed for some $\mathcal{F} \in \frak{F}$ is random. \end{itemize} \end{lemma}
Consider the probability space $\Omega_n$ of pairs $(\rho, \mathcal{F})$, where $|\mathcal{F}| <n$, and let $\hat{Q}_n$ be the probability of the event in $\Omega_n$ that \textsc{VertexValidation}$(\mathcal{F})$ is successful if $\rho$ is followed and the generated coloring is not strongly proper.
\begin{lemma} \label{lem:vboundQn} We have that: $$Q_n \leq \hat{Q}_n $$ and moreover $\hat{Q}_n$ is bounded away from 1 by an amount independent of $n$. \end{lemma}
\begin{proof} We only show the second statement, namely that $\hat{Q}_n$ is bounded away from 1 by an amount independent of $n$. Observe that, by Lemma \ref{lem:vrandom}, the coloring produced at the end of an execution of $\textsc{VertexValidation}$ that succeeds for at least one input $\mathcal{F}$ is random. Also, by Lemma \ref{lem:mainrandom}, the probability that a random coloring is $\alpha$-specially proper is positive. This means that $\hat{Q}_n$ and, by the first statement, $Q_n$ too, are less than $1$ by an amount independent of $n$. \end{proof}
From Lemma \ref{lem:vboundQn} we get Fact \ref{fact4}. Now let $\hat{P}_n$ be the probability that \textsc{VertexValidation}($\mathcal{F}$) succeeds for at least one $\mathcal{F}$ with exactly $n$ nodes.
\begin{lemma}\label{lem:vboundPn} We have that $P_n \leq \hat{P}_n \leq \sum_{|\mathcal{F}|=n}\Pr[V_{\mathcal{F}}]$. \end{lemma}
To show Fact \ref{fact3}, we will bound $\sum_{|\mathcal{F}|=n}\Pr[V_{\mathcal{F}}]$.
Let: \begin{equation}\label{q} q:= \frac{1}{\alpha{\Delta}^{4/3}} > \frac{1}{\lceil \alpha{\Delta}^{4/3}\rceil + \Delta +1}. \end{equation}
We define the weight $\|\mathcal{F}\|$ of a feasible forest $\mathcal{F}$ by taking the product of weights assigned to its nodes as follows: for each node with set-label $B$, if $B$ is a $4$-cycle, assign weight $q^2$; if $B$ is a $5$-path, assign weight $q^4$. We get by the second item of Lemma \ref{lem:vrandom}: \begin{equation}\label{norm} \sum_{|\mathcal{F}|=n}\Pr[V_{\mathcal{F}}] \leq \sum_{\mathcal{F}:|\mathcal{F}|=n}\|\mathcal{F}\|. \end{equation}
We will bound $\sum_{\mathcal{F}:|\mathcal{F}|=n}\|\mathcal{F}\|$ by purely combinatorial arguments. Let $\mathcal{T}_j$ be the set of all possible feasible trees whose root has as vertex-label the vertex $u_j$, together with the empty tree (for the latter, we assume $|\emptyset|= 0$ and $\|\emptyset\| =1$). Let also $\mathcal{T}$ be the collection of all $l$-ary sequences $(T_1,\ldots,T_l)$ with $T_j \in \mathcal{T}_j$. Now, obviously:
\begin{align}\label{trees} \sum_{\mathcal{F}: |\mathcal{F}|=n}\|\mathcal{F}\| & =\sum_{\substack{(T_1,\ldots,T_l)\in\mathcal{T} \\ |T_1|+ \cdots +|T_l| =n}}\|T_1\|\cdots\|T_l\| \nonumber\\ & = \sum_{\substack{n_1+\cdots+n_l=n\\ n_1,\ldots,n_l\geq 0}}\Bigg(\Big(\sum_{\substack{T_1\in\mathcal{T}_1:\\|T_1|=n_1}}\|T_1\|\Big)\cdots\Big(\sum_{\substack{T_l\in\mathcal{T}_l:\\|T_l|=n_l}}\|T_l\|\Big)\Bigg).\end{align}
We now obtain a recurrence for each factor of \eqref{trees}.
\begin{lemma}\label{lem:rec} Let $\mathcal{T}^u$ be any one of the $\mathcal{T}_j$.
Then: \begin{equation}\label{Rn} \sum_{\substack{T\in\mathcal{T}^u\\|T|=n}}\|T\|\leq R_n, \end{equation} where $R_n$ is defined by $R_0=1$ and, for $n \geq 1$,
\begin{align}\label{rec} R_n:= & \frac{\Delta^{8/3}}{8\alpha}\, q^2\sum_{\substack{n_1+n_2=n-1\\ n_1,n_2\geq 0}}R_{n_1}R_{n_2} + \Delta^5 q^4\sum_{\substack{n_1+n_2+n_3+n_4=n-1\\ n_1,n_2,n_3,n_4\geq 0}}R_{n_1}R_{n_2}R_{n_3}R_{n_4}\nonumber\\ = & \frac{1}{8\alpha^3}\sum_{\substack{n_1+n_2=n-1\\ n_1,n_2\geq 0}}R_{n_1}R_{n_2} + \frac{1}{\Delta^{1/3}\alpha^4}\sum_{\substack{n_1+n_2+n_3+n_4=n-1\\ n_1,n_2,n_3,n_4\geq 0}}R_{n_1}R_{n_2}R_{n_3}R_{n_4}. \end{align} \end{lemma}
\begin{proof} The result is obvious for $n=0$, because the empty tree has weight 1. For $n>0$, we have two cases for the set-label $B$ of the root of $T\in\mathcal{T}^u$. If it is one of the, by Lemma \ref{lem:4-cycles}, at most $\frac{\Delta^{8/3}}{8\alpha}$ $4$-cycles whose vertices opposite to $u$ do not form special pairs, then the root $u$ has weight $q^2$ and two children. Otherwise, observe that there are at most $\Delta^5$ $5$-paths beginning from $u$. In this case, $u$ has weight $q^4$ and four children. \end{proof}
To estimate the asymptotic behavior of the sequence $R_n$, we will find the Ordinary Generating Function (OGF) of $R_n$ and apply \cite[Proposition IV.5, p. 278]{Flajolet:2009:AC:1506267}. For technical reasons, we find the OGF $R(z)$ for $R_n$, with $R_n, n\geq 0$ being the coefficient of $z^{n+1}$ instead of $z^n$, and the constant term being 0. Multiply both sides of Eq.
\eqref{rec} by $z^{n+1}$ and sum for $n=1,\ldots,+\infty$, to get:\begin{align}\label{OGF} R(z)-zR_0= & \frac{1}{8\alpha^3} zR(z)^2+\frac{1}{\Delta^{1/3}\alpha^4}zR(z)^4\Rightarrow\nonumber\\ R(z)= & z\Bigg(\frac{1}{8\alpha^3}R(z)^2+\frac{1}{\Delta^{1/3}\alpha^4}R(z)^4\Bigg)+z \end{align} Set $R:=R(z)$ and observe that for: \begin{equation}\label{phix} \phi(x)=\frac{x^4}{\Delta^{1/3}\alpha^4}+\frac{x^2}{8\alpha^3}+1, \end{equation} we have that $R=z\phi(R)$. Therefore, following \cite[Proposition IV.5, p. 278]{Flajolet:2009:AC:1506267}, we consider the characteristic equation:\begin{equation}\label{characteristic} x{\phi}'(x)-{\phi}(x)=0 \Leftrightarrow \frac{3x^4}{\alpha^4\Delta^{1/3}}+\frac{x^2}{8\alpha^3}-1=0, \end{equation} and we let $\tau$ be its unique positive solution. It only remains to find the range of $\alpha$ for which ${\phi}'(\tau) <1$. Towards this, we consider instead the equation $\frac{x^2}{8\alpha^3}-1=0$, which has the unique positive solution $\sqrt{8\alpha^3}.$ We first observe that:
\begin{lemma}\label{lem:hat} The limit of the unique positive solution of the characteristic equation \eqref{characteristic} for $\Delta\rightarrow+\infty$ is equal to $\sqrt{8\alpha^3}$. \end{lemma}
\begin{proof} It is easy to check that the unique positive solution of Eq. \eqref{characteristic} is: \begin{equation}\label{withD} \Big(\frac{\alpha\Delta^{1/6}}{48}\Big(\sqrt{\Delta^{1/3}+768\alpha^2}-\Delta^{1/6}\Big)\Big)^{1/2}, \end{equation} which, by taking the limit for $\Delta$ going to infinity, is $(8\alpha^3)^{1/2}$. \end{proof}
So, the range of $\alpha$ for which ${\phi}'(\tau) <1$ is computed as follows: \begin{align}\label{final} 4\frac{{\tau}^3}{\alpha^4\Delta^{1/3}}+\frac{{\tau}}{4\alpha^3} & <1\stackrel{\Delta\rightarrow+\infty}{\Longleftrightarrow}\nonumber\\ 0+\frac{\sqrt{8\alpha^3}}{4\alpha^3} & <1\Leftrightarrow\nonumber\\ \frac{1}{2^{1/2}\alpha^{3/2}} & <1\Leftrightarrow\nonumber\\ 2^{-1/3} & <\alpha.
\end{align} It follows that, for every $\alpha>2^{-1/3}$, there is a $\Delta_{\alpha}$ (depending on $\alpha$), such that if the maximum degree $\Delta$ of the graph is at least $\Delta_{\alpha}$, then with $\lceil\alpha{\Delta}^{4/3}\rceil + \Delta+ 1$ colors, $P_n$ is exponentially small, so the proof of Fact \ref{fact3}, and therefore of Theorem \ref{thm:mainv}, our main result in this section, is complete.
\begin{remark} Our technique does not lead to the conclusion that for large enough maximum degree $\Delta$, the chromatic number is at most $\lceil2^{-1/3} {\Delta}^{4/3} \rceil +\Delta+ 1$, because we cannot exclude that $\Delta_{\alpha}$ approaches $+\infty$ as $\alpha$ approaches $2^{-1/3}$. \end{remark}
\section{Acknowledgements} We are indebted to the anonymous referees who pointed out mistakes in previous versions of this work. For the same reason, we are grateful to Fotis Iliopoulos, Aldo Procacci and Benedetto Scoppola.
https://arxiv.org/abs/2202.13846
Improved bounds for acyclic coloring parameters
https://arxiv.org/abs/2006.16117
A Multilevel Spectral Indicator Method for Eigenvalues of Large Non-Hermitian Matrices
Recently a novel family of eigensolvers, called spectral indicator methods (SIMs), was proposed. Given a region on the complex plane, SIMs first compute an indicator by the spectral projection. The indicator is used to test if the region contains eigenvalue(s). Then the region containing eigenvalue(s) is subdivided and tested. The procedure is repeated until the eigenvalues are identified within a specified precision. In this paper, using Cayley transformation and Krylov subspaces, a memory efficient multilevel eigensolver is proposed. The method uses less memory compared with the early versions of SIMs and is particularly suitable to compute many eigenvalues of large sparse (non-Hermitian) matrices. Several examples are presented for demonstration.
\section{Introduction} Consider the generalized eigenvalue problem \begin{equation} \label{AxLambdaBx} A x= \lambda B x, \end{equation} where $A, B$ are $n \times n$ large sparse non-Hermitian matrices. In particular, we are interested in the computation of all eigenvalues in a region $R \subset \mathbb C$, which contains $p$ eigenvalues such that $1 \ll p \ll n$ or $1 \ll p \sim n$. Many efficient eigensolvers have been proposed in the literature for large sparse Hermitian (or symmetric) matrices (see, e.g., \cite{Saad2003}). In contrast, for non-Hermitian matrices, there exist far fewer methods, including the Arnoldi method and the Jacobi-Davidson method \cite{arpack, Templates2000}. Unfortunately, these methods are still far from satisfactory as pointed out in \cite{Saad2011}: ``{\it In essence what differentiates the Hermitian from the non-Hermitian eigenvalue problem is that in the first case we can always manage to compute an approximation whereas there are non-symmetric problems that can be arbitrarily difficult to solve and can essentially make any algorithm fail}." Recently, a family of eigensolvers, called the spectral indicator methods (SIMs), was proposed \cite{SunZhou2016, Huang2016JCP,Huang2018NLA}. The idea of SIMs is different from that of the classical eigensolvers. In brief, given a region $R \subset \mathbb C$ whose boundary $\partial R$ is a simple closed curve, an indicator $I_R$ is defined and then used to decide if $R$ contains eigenvalue(s). When the answer is positive, $R$ is divided into sub-regions and indicators for these sub-regions are computed. The procedure continues until the size of the sub-region(s) is smaller than the specified precision, e.g., $10^{-6}$. The indicator $I_R$ is defined using the spectral projection $P$, i.e., the Cauchy contour integral of the resolvent of the matrix pencil $(A, B)$ on $\partial R$ \cite{Kato1966}. In particular, one can construct $I_R$ based on the spectral projection of a random vector ${\boldsymbol f}$.
It is well-known that $P$ projects ${\boldsymbol f}$ onto the generalized eigenspace associated with the eigenvalues enclosed by $\partial R$ \cite{Kato1966}. $P{\boldsymbol f}$ is zero if there are no eigenvalues inside $R$, and nonzero otherwise. Hence $P{\boldsymbol f}$ can be used to decide if $R$ contains eigenvalue(s) or not. Evaluation of $P{\boldsymbol f}$ requires solving linear systems at quadrature points on $\partial R$. In general, it is believed that computing eigenvalues is more difficult than solving linear systems of equations \cite{HernandezEtal2005ACMTOMS}. The proposed method converts the eigenvalue problem to solving a number of related linear systems. Spectral projection is a classical tool in functional analysis to study, e.g., the spectrum of operators \cite{Kato1966} and the finite element convergence theory for eigenvalue problems of partial differential equations \cite{SunZhou2016}. It has been used to compute matrix eigenvalue problems in the method by Sakurai-Sugiura \cite{SakuraiSugiura2003CAM} and FEAST by Polizzi \cite{Polizzi2009PRB}. For example, FEAST uses spectral projection to build subspaces and thus can be viewed as a subspace method \cite{TangPolizzi2014SIAMMAA}. In contrast, SIMs use the spectral projection to define indicators and combine the idea of bisection to locate eigenvalues. Note that the use of other tools such as the condition number to define the indicator is possible. In this paper, we propose a new SIM, called SIM-M. Firstly, by proposing a new indicator, the memory requirement is significantly reduced and thus the computation of many eigenvalues of large matrices becomes realistic. Secondly, a new strategy to speed up the computation of the indicators is presented. Thirdly, other than the recursive calls in the first two members of SIMs \cite{Huang2016JCP,Huang2018NLA}, a multilevel technique is used to further improve the efficiency. Moreover, a subroutine is added to find the multiplicities of the eigenvalues.
The rest of the paper is organized as follows. Section \ref{CT} presents the basic idea of SIMs and two early members of SIMs. In Section \ref{MEI}, we propose a new eigensolver SIM-M with the above features. The algorithm and the implementation details are discussed as well. The proposed method is tested on various matrices in Section \ref{NE}. Finally, in Section~\ref{Con}, we draw some conclusions and discuss some future work. \section{Spectral Indicator Methods}\label{CT} In this section, we give an introduction to SIMs and refer the readers to \cite{SunZhou2016, Huang2016JCP,Huang2018NLA} for more details. For simplicity, assume that $R$ is a square and $\Gamma:=\partial R$ lies in the resolvent set $\mathcal{R}$ of $(A, B)$, i.e., the set of $z \in \mathbb C$ such that $(A - zB)$ is invertible. The key idea of SIMs is to find an indicator that can be used to decide if $R$ contains eigenvalue(s). One way to define the indicator is to use the spectral projection, a classical tool in functional analysis \cite{Kato1966}. Specifically, the matrix $P$ defined by \begin{equation}\label{P} P=\dfrac{1}{2\pi i}\int_{\Gamma}(A-zB)^{-1}dz \end{equation} is the spectral projection onto the algebraic eigenspace associated with the eigenvalues of \eqref{AxLambdaBx} inside $\Gamma$; applied to a vector ${\boldsymbol f}$, it yields the component of ${\boldsymbol f}$ in that eigenspace. If there are no eigenvalues inside $\Gamma$, then $P = 0$, and hence $P{\boldsymbol f} = {\boldsymbol 0}$ for all ${\boldsymbol f} \in \mathbb C^n$. If $\Gamma$ does enclose one or more eigenvalues, then $P{\boldsymbol f} \ne {\boldsymbol 0}$ with probability $1$ for a random vector ${\boldsymbol f}$. To improve robustness, in RIM (recursive integral method) \cite{Huang2016JCP}, the first member of SIMs, the indicator is defined as \begin{equation}\label{indicatorRIM} I_R:=\left \| P \left( \frac{P {\boldsymbol f}}{\|P {\boldsymbol f}\|}\right)\right\|. \end{equation} Analytically, $I_R = 1$ if there exists at least one eigenvalue inside $\Gamma$.
Note that when a quadrature rule is applied, $I_R \ne 1$ in general. The RIM algorithm is very simple and is listed as follows \cite{Huang2016JCP}. \begin{itemize} \item[] {\bf RIM}$(A, B, R, h_0, \delta_0, {\boldsymbol f})$ \item[]{\bf Input:} matrices $A, B$, region $R$, precision $h_0$, threshold $\delta_0$, random vector ${\boldsymbol f}$. \item[]{\bf Output:} generalized eigenvalue(s) $\lambda$ inside $R$ \item[1.] Compute ${I_R}$. \item[2.] If $I_R < \delta_0$, exit (no eigenvalues in $R$). \item[3.] Otherwise, compute the diameter $h$ of $R$. \begin{itemize} \item[-] If $h > h_0 $, partition $R$ into subregions $R_j, j=1, \ldots N$. \begin{itemize} \item[] for $j=1$ to $N$ \item[] $\qquad${\bf RIM}$(A, B, R_j, h_0, \delta_0, {\boldsymbol f})$. \item[] end \end{itemize} \item[-] else, \begin{itemize} \item[] set $\lambda$ to be the center of $R$. output $\lambda$ and exit. \end{itemize} \end{itemize} \end{itemize} The major task of RIM is to compute the indicator $I_R$ defined in \eqref{indicatorRIM}. Let the approximation to $P{\boldsymbol f}$ be given by \begin{equation}\label{XLXf} P{\boldsymbol f} \approx \dfrac{1}{2 \pi i} \sum_{j=1}^{n_0} \omega_j {\boldsymbol x}_j, \end{equation} where the $\omega_j$'s are quadrature weights and the ${\boldsymbol x}_j$'s are the solutions of the linear systems \begin{equation}\label{linearsys} (A- z_jB){\boldsymbol x}_j = {\boldsymbol f}, \quad j = 1, \ldots, n_0. \end{equation} Here the $z_j$'s are quadrature points on $\Gamma$. The total number of the linear systems \eqref{linearsys} for RIM to solve is at most \begin{equation}\label{complexityRIM} 2n_0 \lceil \log_2(h/h_0)\rceil p, \end{equation} where $p$ is the number of eigenvalues in $R$, $n_0$ is the number of the quadrature points, $h$ is the size of $R$, $h_0$ is the required precision, and $\lceil \cdot \rceil$ denotes the ceiling function. Given $R$, $p$ is a fixed number.
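The quadrature approximation \eqref{XLXf} and the indicator \eqref{indicatorRIM} can be sketched in a few lines of NumPy. This is only a toy illustration with dense solves on a circular contour; the function names, the circle parameters, and the default $n_0=16$ are our assumptions, not part of the RIM implementation in \cite{Huang2016JCP}.

```python
import numpy as np

def spectral_projection(A, B, f, center, radius, n0=16):
    """Approximate P f = (1/(2 pi i)) * contour integral of (A - zB)^{-1} f dz
    by the trapezoidal rule on the circle |z - center| = radius."""
    thetas = 2.0 * np.pi * np.arange(n0) / n0
    zs = center + radius * np.exp(1j * thetas)
    # With z(t) = center + r e^{it}, dz = i r e^{it} dt, so the 1/(2 pi i)
    # prefactor and the i cancel; each node gets weight (z_j - center) / n0.
    Pf = np.zeros(len(f), dtype=complex)
    for z in zs:
        x = np.linalg.solve(A - z * B, f)  # one linear solve per quadrature point
        Pf += (z - center) * x
    return Pf / n0

def rim_indicator(A, B, f, center, radius, n0=16):
    """RIM indicator: norm of P applied to the normalized projection P f."""
    Pf = spectral_projection(A, B, f, center, radius, n0)
    u = Pf / np.linalg.norm(Pf)
    return np.linalg.norm(spectral_projection(A, B, u, center, radius, n0))
```

On a small diagonal pencil, the indicator is close to $1$ when the circle encloses an eigenvalue and negligible when it does not, which is exactly the dichotomy the threshold $\delta_0$ exploits.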
The complexity of RIM is proportional to the complexity of solving the linear system \eqref{linearsys}. The computational cost of RIM mainly comes from solving the linear systems \eqref{linearsys} to approximate the spectral projection $P{\boldsymbol f}$. It is clear that the cost will be greatly reduced if one can take advantage of the fact that the parametrized linear systems share the same structure. In \cite{Huang2018NLA}, a new member RIM-C (recursive integral method using Cayley transformation) is proposed. The idea is to construct some Krylov subspaces and use them to solve \eqref{linearsys} for all quadrature points $z_j$'s. Since the method we shall propose is based on RIM-C, we include a description of RIM-C as follows. Let $M$ be an $n \times n$ matrix, ${\boldsymbol b} \in \mathbb C^n$ be a vector, and $m$ be a non-negative integer. The Krylov subspace is defined as \begin{align} K_{m}(M; {\boldsymbol b}):=\text{span} \{{\boldsymbol b}, M {\boldsymbol b}, \ldots, M^{m-1}{\boldsymbol b} \}. \end{align} It has the shift-invariant property \begin{align} \label{eq:5} K_{m}(\gamma_1 M+ \gamma_2 I; {\boldsymbol b})=K_{m}(M; {\boldsymbol b}), \end{align} where $\gamma_1 \neq 0$ and $\gamma_2$ are two scalars. Consider a family of linear systems \begin{equation} \label{AzBxf} (A-zB) {\boldsymbol x} ={\boldsymbol f}, \end{equation} where $z$ is a complex number. Assume that $\sigma$ is not a generalized eigenvalue and $\sigma \neq z$. Using the Cayley transformation, i.e., multiplying both sides of \eqref{AzBxf} by $(A- \sigma B)^{-1}$, we have that \begin{eqnarray*} \label{ABsigmaz}(A- \sigma B)^{-1}{\boldsymbol f}&=&(A- \sigma B)^{-1}(A-z B){\boldsymbol x} \\ \nonumber &=&(A- \sigma B)^{-1}(A-\sigma B+(\sigma -z)B) {\boldsymbol x} \label{eq:Cayley} \\ \nonumber &=&(I+(\sigma - z)(A- \sigma B)^{-1}B) {\boldsymbol x}. \end{eqnarray*} Let $M=(A- \sigma B)^{-1}B$ and ${\boldsymbol b}=(A- \sigma B)^{-1}{\boldsymbol f}$.
Then \eqref{AzBxf} becomes \begin{align} \label{IMxb} (I+(\sigma -z)M) {\boldsymbol x} = {\boldsymbol b}. \end{align} From \eqref{eq:5}, the Krylov subspace $K_{m}(I+(\sigma -z)M; {\boldsymbol b})$ is the same as $K_{m}(M; {\boldsymbol b})$. We shall use $K^\sigma_{m}(M; {\boldsymbol b})$ when it is necessary to indicate its dependence on the shift $\sigma$. Arnoldi's method is used by RIM-C to solve the linear systems. First, consider the orthogonal projection method for \[ M {\boldsymbol x}={\boldsymbol b}. \] Let the initial guess be ${\boldsymbol x}_0={\boldsymbol 0}$. One seeks an approximate solution ${\boldsymbol x}_m$ in $K_m(M; {\boldsymbol b})$ by imposing the Galerkin condition \cite{Saad2003} \begin{equation} \label{eq:Krylov} ({\boldsymbol b}-M {\boldsymbol x}_m) \perp K_m(M; {\boldsymbol b}). \end{equation} Arnoldi's method (Algorithm 6.1 of \cite{Saad2011}) is as follows. \begin{itemize} \item[1.] Choose a vector ${\boldsymbol v}_1$ of norm $1$ (${\boldsymbol v}_1={\boldsymbol b}/\|{\boldsymbol b}\|_2$). \item[2.] for $j=1,2, \ldots, m$ \begin{itemize} \item $h_{ij} = (M{\boldsymbol v}_j, {\boldsymbol v}_i), \quad i=1,2, \ldots,j$. \item ${\boldsymbol w}_j = M {\boldsymbol v}_j - \sum_{i=1}^j h_{ij} {\boldsymbol v}_i$. \item $h_{j+1,j}=\|{\boldsymbol w}_j\|_2$. If $h_{j+1, j}=0$, stop. \item ${\boldsymbol v}_{j+1} = {\boldsymbol w}_j/h_{j+1, j}$. \end{itemize} \end{itemize} Let $V_m$ be the $n \times m$ matrix with orthonormal columns ${\boldsymbol v}_1, \ldots, {\boldsymbol v}_m$ and $H_m$ be the $m \times m$ Hessenberg matrix whose nonzero entries are the $h_{i,j}$. Proposition 6.5 of \cite{Saad2011} implies that \begin{equation} \label{eq:arnoldi} M V_m=V_m H_m + {\boldsymbol v}_{m+1} {h}_{m+1,m}{\boldsymbol e}^{T}_m \end{equation} and \[ \text{span}\{\mathrm{col}(V_m)\}=K_{m}(M; {\boldsymbol b}).
\] Let ${\boldsymbol x}_m=V_m \tilde{\boldsymbol y}$ be such that the Galerkin condition \eqref{eq:Krylov} holds, i.e., \begin{equation} V^{T}_m {\boldsymbol b}-V^{T}_m M V_m \tilde{\boldsymbol y} ={\boldsymbol 0}. \end{equation} Using \eqref{eq:arnoldi}, the residual is given by \begin{equation} \label{eq:err1} \|{\boldsymbol b}-M {\boldsymbol x}_m\|_2 = {h}_{m+1,m}|{\boldsymbol e}_m^{T} \tilde{\boldsymbol y}|. \end{equation} Next, we consider the linear system \eqref{IMxb}. For $I+(\sigma -z)M$, due to the shift-invariant property, one has that \begin{equation} \label{shiftedLS} \{I+(\sigma -z)M \} V_m =V_m( I+(\sigma -z)H_m) +(\sigma - z){\boldsymbol v}_{m+1} {h}_{m+1,m}{\boldsymbol e}^{T}_m. \end{equation} The Galerkin condition \eqref{eq:Krylov} becomes \begin{equation} V^{T}_m {\boldsymbol b}-V^{T}_m \{I+(\sigma -z)M \} V_m {\boldsymbol y} =0. \end{equation} It implies that \begin{equation} \label{eq:many} \{I+(\sigma -z)H_m \} {\boldsymbol y} =\beta {\boldsymbol e}_1, \end{equation} where $\beta = \|{\boldsymbol b}\|_2$. Combination of \eqref{shiftedLS} and \eqref{eq:many} gives the residual \begin{equation}\label{eq:error} \|{\boldsymbol b}-\{I+(\sigma - z)M \} {\boldsymbol x}_m\|_2 =|\sigma -z|\,{h}_{m+1,m}|{\boldsymbol e}_m^{T} {\boldsymbol y}|. \end{equation} Let $z_j$ be a quadrature point; one needs to solve \begin{align} (I+(\sigma -z_j)M) {\boldsymbol x}_j={\boldsymbol b}, \end{align} where $M=(A-\sigma B)^{-1} B$ and ${\boldsymbol b}=(A- \sigma B )^{-1} {\boldsymbol f}$. From \eqref{XLXf} and \eqref{eq:many}, \begin{eqnarray} \label{eq:y} {\boldsymbol y}_j&=&\beta (I+ (\sigma-z_j) H_m)^{-1}{\boldsymbol e}_1, \\ \nonumber {\boldsymbol x}_j &\approx& V_m {\boldsymbol y}_j, \\ \label{eq:reduced} P{\boldsymbol f} &\approx& \dfrac{1}{2 \pi i} \sum w_j V_m {\boldsymbol y}_j. \end{eqnarray} The idea of RIM-C is to use the Krylov subspace for $M=(A-\sigma B)^{-1} B$ to solve \eqref{linearsys} for as many $z_j$'s as possible.
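The two-step construction above (one Arnoldi factorization \eqref{eq:arnoldi} for $M=(A-\sigma B)^{-1}B$, then the small $m\times m$ shifted systems \eqref{eq:many} for every quadrature point) can be sketched as follows. This is a hedged NumPy illustration in which a dense solve stands in for a precomputed sparse factorization of $A-\sigma B$; the function names are ours.

```python
import numpy as np

def arnoldi(apply_M, b, m):
    """Arnoldi factorization M V_m = V_m H_m + h_{m+1,m} v_{m+1} e_m^T,
    built with modified Gram-Schmidt."""
    n = len(b)
    V = np.zeros((n, m + 1), dtype=complex)
    H = np.zeros((m + 1, m), dtype=complex)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = apply_M(V[:, j])
        for i in range(j + 1):
            H[i, j] = np.vdot(V[:, i], w)
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] == 0:  # invariant subspace reached
            break
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def solve_shifted(A, B, f, sigma, zs, m):
    """Approximately solve (A - z B) x = f for every z in zs from ONE Krylov
    subspace for M = (A - sigma B)^{-1} B, via (I + (sigma - z) H_m) y = beta e_1."""
    solve_sigma = lambda v: np.linalg.solve(A - sigma * B, v)  # stand-in for an LU factorization
    b = solve_sigma(f.astype(complex))
    V, H = arnoldi(lambda v: solve_sigma(B @ v), b, m)
    beta = np.linalg.norm(b)
    Hm, Vm = H[:m, :m], V[:, :m]
    rhs = np.zeros(m, dtype=complex)
    rhs[0] = beta
    return [Vm @ np.linalg.solve(np.eye(m) + (sigma - z) * Hm, rhs) for z in zs]
```

With $m=n$ the Krylov subspace is (generically) the whole space and the computed ${\boldsymbol x}_j$ solve $(A-z_jB){\boldsymbol x}_j={\boldsymbol f}$ exactly; in practice $m\ll n$ and the residual \eqref{eq:error} is monitored instead.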
The residual can be monitored with a little extra cost using \eqref{eq:error}. Since the Krylov subspace method is used, the indicator defined in \eqref{indicatorRIM} is not appropriate since it projects ${\boldsymbol f}$ twice. RIM-C defines an indicator different from \eqref{indicatorRIM}. Let $P{\boldsymbol f}|_{n_0}$ be the approximation of $ P {\boldsymbol f} $ with $n_0$ quadrature points for the circle circumscribing $R$. It is well-known that the trapezoidal quadrature of a periodic function converges exponentially \cite[Section 4.6.5]{davis1984methods}, i.e., \begin{align*} \left \|P{\boldsymbol f}- P{\boldsymbol f}|_{n_0}\right\| = O(e^{-C n_0}), \end{align*} where $C$ is a constant. For a large enough $n_0$, one has that \begin{equation}\label{IndicatorPf} \dfrac{ \left \| P {\boldsymbol f}|_{2n_0}\right\|}{ \left \| P {\boldsymbol f}|_{n_0} \right\|}=\begin{cases} \dfrac{\|P {\boldsymbol f}\| + O(e^{-C 2n_0})}{\|P {\boldsymbol f}\| + O(e^{-C n_0})} & \text{if there are eigenvalues inside } R, \\ \dfrac{ O(e^{-C 2n_0})}{ O(e^{-C n_0})}=O(e^{-C n_0}) & \text{if there is no eigenvalue inside } R. \end{cases} \end{equation} The indicator is then defined as \begin{equation}\label{ISPf} I_R := \frac{\|P {\boldsymbol f}|_{2n_0}\|}{\|P {\boldsymbol f}|_{n_0}\|} \approx \frac{\quad \left \|\sum_{j=1}^{2n_0} w_j V_m {\boldsymbol y}_j \right\| \quad}{\left\| \sum_{j=1}^{n_0} w_j V_m {\boldsymbol y}_j\right\|}. \end{equation} \section{Multilevel Memory Efficient Method}\label{MEI} In this section, we make several improvements to RIM-C and propose a multilevel memory efficient method, called SIM-M. \subsection{A New Memory Efficient Indicator} In view of \eqref{ISPf}, the computation of the indicator requires storing $V_m$. When $R$ contains a lot of eigenvalues, the method can become memory intensive.
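As a toy illustration of the dichotomy \eqref{IndicatorPf} behind the ratio indicator \eqref{ISPf}, the two quadrature levels $n_0$ and $2n_0$ can be evaluated directly; here plain dense solves replace the Krylov machinery, and the circular contour and defaults are our assumptions for demonstration only.

```python
import numpy as np

def proj_norm(A, B, f, center, radius, n0):
    """Norm of P f approximated with n0 trapezoid points on |z - center| = radius."""
    zs = center + radius * np.exp(2j * np.pi * np.arange(n0) / n0)
    # weight (z - center)/n0 per node, as in the parametrized contour integral
    Pf = sum((z - center) * np.linalg.solve(A - z * B, f) for z in zs) / n0
    return np.linalg.norm(Pf)

def ratio_indicator(A, B, f, center, radius, n0=8):
    """I_R = ||P f|_{2 n0}|| / ||P f|_{n0}||: close to 1 if the circle encloses
    eigenvalues, exponentially small otherwise."""
    return (proj_norm(A, B, f, center, radius, 2 * n0)
            / proj_norm(A, B, f, center, radius, n0))
```

Doubling the number of quadrature points barely changes $\|P{\boldsymbol f}\|$ when eigenvalues are enclosed, but it squares the (already exponentially small) quadrature error when none are, which is what makes the ratio a sharp test against the threshold $\delta_0$.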
\begin{definition} A (square) region $R$ is resolvable with respect to $(\sigma, \epsilon_0)$ if the linear systems \eqref{linearsys} associated with all the quadrature points can be solved up to the given residual $\epsilon_0$ using the Krylov subspace related to a shift $\sigma$. \end{definition} Assume that $R$ is resolvable with respect to $(\sigma, \epsilon_0)$. From \eqref{ISPf}, one has that \begin{equation} I_R \approx \frac{ \| \sum_{j=1}^{2n_0} w_j V_m {\boldsymbol y} _j \| }{\| \sum_{j=1}^{n_0} w_j V_m {\boldsymbol y} _j \|} = \frac{ \| V_m \sum_{j=1}^{2n_0} w_j {\boldsymbol y}_j \| }{\| V_m \sum_{j=1}^{n_0} w_j {\boldsymbol y} _j \|}. \end{equation} Note that \begin{equation} \left \| V_m \sum_{j=1}^{n_0} w_j {\boldsymbol y}_j \right \|^2 = \left(\sum_{j=1}^{n_0} w_j {\boldsymbol y}_j\right)^{T} V_m^{T} V_m \sum_{j=1}^{n_0} w_j {\boldsymbol y} _j = \left \| \sum_{j=1}^{n_0} w_j {\boldsymbol y}_j \right \|^2 \end{equation} since $V_m^{T}V_m $ is the identity matrix. Dropping $V_m$ in \eqref{eq:reduced}, we define a new indicator \begin{equation}\label{DeltaSm} \tilde{I}_R = \frac{\|\sum_{j=1}^{2n_0} w_j {\boldsymbol y}_j \|}{\|\sum_{j=1}^{n_0} w_j {\boldsymbol y}_j \|}. \end{equation} As a consequence, there is no need to store the $V_m$'s ($n \times m$ matrices); only the much smaller $m \times m$ ($m = O(1)$) matrices $H_m$'s are stored. As before, we use a threshold to decide whether or not eigenvalues exist in $R$. From \eqref{IndicatorPf}, if there are no eigenvalues in $R$, the indicator satisfies $I_R = O(e^{-C n_0})$. In the experiments, we take $n_0=4$. Assuming that $C=1$, we would have $I_R \approx 0.018$. It is therefore reasonable to take $\delta_0 = 1/20$ as the threshold. The choice is ad hoc; nonetheless, the numerical examples show that it is robust. \begin{definition} A (square) region $R$ is admissible if $I_R > \delta_0$.
\end{definition} \begin{remark} In practice, a region which is smaller than $h_0$ and not resolvable with respect to $(\sigma, \epsilon_0)$ is taken to be admissible. \end{remark} \subsection{Speeding up the Computation of Indicators} To check if a linear system \eqref{linearsys} can be solved effectively using a Krylov subspace $K^\sigma_{m}(M; {\boldsymbol b})$, one needs to compute the residual \eqref{eq:error} for many $z_j$'s. In the following, we propose a fast method. First, rewrite \eqref{eq:many} as \begin{equation}\label{zjHm} \left(\frac{1}{\sigma -z_j}I+H_m \right) {\boldsymbol y}_j = \frac{\beta}{\sigma -z_j} {\boldsymbol e}_1. \end{equation} Assume that $H_m$ has the eigen-decomposition $H_m = PDP^{-1}$, where \[ D=\text{diag}\{ \lambda_1, \lambda_2, \ldots, \lambda_m \}. \] Then \eqref{zjHm} can be written as \[ P\left(\frac{1}{\sigma -z_j}I+D \right)P^{-1} {\boldsymbol y}_j = \frac{\beta}{\sigma -z_j} {\boldsymbol e}_1, \] whose solution is simply \begin{eqnarray*} {\boldsymbol y}_j &=& P \left(\frac{1}{\sigma-z_j}I+D\right)^{-1}P^{-1} \frac{\beta}{\sigma- z_j}{\boldsymbol e}_1 \\ &=& \beta P\left(I + (\sigma-z_j)D\right)^{-1} P^{-1} {\boldsymbol e}_1. \end{eqnarray*} Hence \begin{eqnarray}\nonumber {\boldsymbol e}_m^{T} {\boldsymbol y}_j &=& \beta {\boldsymbol e}_m^{T} P\left(I + (\sigma-z_j)D\right)^{-1} P^{-1} {\boldsymbol e}_1 \\ \label{rmLc1} &=& \beta {\boldsymbol r}_m \Lambda {\boldsymbol c}_1, \end{eqnarray} where ${\boldsymbol r}_m$ is the last row of $P$, ${\boldsymbol c}_1$ is the first column of $P^{-1}$, and \[ \Lambda = \text{diag}\left\{\frac{1}{1+(\sigma-z_j) \lambda_1}, \frac{1}{1+(\sigma-z_j) \lambda_2},\ldots, \frac{1}{1+(\sigma-z_j) \lambda_m}\right\}. \] In fact, this further reduces the memory requirement, since only three $m\times 1$ vectors (${\boldsymbol r}_m$, ${\boldsymbol c}_1$, and the diagonal of $D$, from which $\Lambda$ is formed for each $z_j$) are stored for each shift $\sigma$.
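The shortcut above can be sketched as follows: one eigen-decomposition of $H_m$ replaces an $m\times m$ solve per shift, and for each $z_j$ it recovers both ${\boldsymbol y}_j$ and the scalar ${\boldsymbol e}_m^{T}{\boldsymbol y}_j$ entering the residual \eqref{eq:error}. A NumPy sketch with hypothetical names; it assumes $H_m$ is diagonalizable and keeps the factor $\beta=\|{\boldsymbol b}\|_2$ from \eqref{eq:many}.

```python
import numpy as np

def hessenberg_coords(Hm, beta, sigma, zs):
    """For each shift z in zs, return y = beta (I + (sigma - z) H_m)^{-1} e_1
    and the scalar e_m^T y = beta r_m Lambda c_1, using a single
    eigen-decomposition H_m = P D P^{-1}."""
    m = Hm.shape[0]
    d, P = np.linalg.eig(Hm)            # columns of P: eigenvectors; d: diagonal of D
    c1 = beta * np.linalg.inv(P)[:, 0]  # beta times the first column of P^{-1}
    rm = P[m - 1, :]                    # last row of P
    ys, last = [], []
    for z in zs:
        lam = 1.0 / (1.0 + (sigma - z) * d)  # diagonal of (I + (sigma - z) D)^{-1}
        ys.append(P @ (lam * c1))
        last.append(rm @ (lam * c1))         # e_m^T y_j, for the residual check
    return ys, last
```

After the one-time $O(m^3)$ eigen-decomposition, each additional quadrature point costs only $O(m)$ work for the residual scalar (or $O(m^2)$ if the full ${\boldsymbol y}_j$ is needed), which is what makes scanning many $z_j$'s cheap.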
\subsection{Multilevel Technique}\label{MT} Now we propose a multilevel technique, which is more efficient and suitable for parallelization. In SIM-M, the following strategy is employed. At level $1$, $R$ is divided uniformly into smaller squares $R^1_j, j=1, \ldots, N^1$. Collect all quadrature points $z^1_j$'s and solve the linear systems \eqref{linearsys} accordingly. The indicators of the resolvable squares are computed, and the squares containing eigenvalues are subdivided into smaller squares. Squares that are not resolvable are also subdivided into smaller squares. These squares are left to the next level. At level $2$, the same operation is carried out. The process stops at level $K$, when the size of the squares is smaller than the given precision $h_0$. \subsection{Multiplicities of Eigenvalues}\label{ME} The first two members of SIMs only output the eigenvalues. A function to find the multiplicities of the eigenvalues can be integrated into SIM-M. \begin{definition} An eigenvalue $\lambda$ is said to be resolved by a shift $\sigma$ if the small square at level $K$ containing $\lambda$ is resolvable using the Krylov subspace $K_m^\sigma$. \end{definition} When the eigenvalues are computed, a mapping from the set of eigenvalues $\Lambda$ to the set of shifts $\Sigma$ is also established. Hence, for a shift $\sigma$, one can find the set of all eigenvalues that are resolved by $\sigma$, denoted by \[ \Lambda_\sigma = \{ \lambda_1, \ldots, \lambda_n\}. \] For $k$ random vectors ${\boldsymbol f}_1, \ldots, {\boldsymbol f}_k$, generate $k$ Krylov subspaces $K_m^\sigma(M, {\boldsymbol b}_i), i=1, \ldots, k$. For each $\lambda \in \Lambda_\sigma $, compute the spectral projections of ${\boldsymbol f}_1, \ldots, {\boldsymbol f}_k$ using the above Krylov subspaces.
Then the number of significant singular values of the matrix $[P{\boldsymbol f}_1, \ldots, P{\boldsymbol f}_k]$ is the multiplicity of $\lambda$. \begin{remark} In fact, the associated eigenvectors can be obtained with little extra cost by adding more quadrature points. However, it can be expected that finding the multiplicities needs a lot more time and memory, since more Krylov subspaces are generated. \end{remark} \subsection{Algorithm for SIM-M} Now we are ready to present the new algorithm SIM-M. \begin{itemize} \item[] \text{SIM-M}$(A, B, R, {\boldsymbol f}, h_0, \epsilon, \delta_0, m, n_0)$ \item[] {\bf Input:} \begin{itemize} \item $A, B$: $n \times n$ matrices \item $R$: search region in $\mathbb C$ \item ${\boldsymbol f}$: a random vector \item $h_0$: precision \item $\epsilon$: residual tolerance \item $\delta_0$: indicator threshold \item $m$: size of Krylov subspace \item $n_0$: number of quadrature points \end{itemize} \item[] {\bf Output:} \begin{itemize} \item generalized eigenvalues $\lambda$'s inside $R$ \end{itemize} \item[1.] use the center of $R$ as the first shift and generate the associated Krylov subspaces. \item[2.] pre-divide $R$ into small squares of size $h_0$: $R_j, j=1, \ldots, J$ (these are selected squares at the initial level). \item[3.] for $j=1:J$ do \begin{itemize} \item For all quadrature points for $R_j$, check if the related linear systems can be solved using any one of the existing Krylov subspaces up to the given residual $\epsilon_0$. If yes, associate $R_j$ with that Krylov subspace. Otherwise, set the shift to be the center of $R_j$ and construct a Krylov subspace. \end{itemize} \item[4.] calculate the number of levels, denoted by $K$, needed to reach the precision $h_0$. \item[5.] for $k=1:K$ \begin{itemize} \item for each selected square $R_j^k$ at level $k$, check if $R_j^k$ is resolvable.
\begin{itemize} \item if $R_j^k$ is resolvable, compute the indicator for $R_j^k$ and mark it when the indicator is larger than $\delta_0$, i.e., $R_j^k$ contains eigenvalues. \item if $R_j^k$ is not resolvable, mark $R_j^k$ and leave it to the next level. \end{itemize} \item divide the marked squares into four squares uniformly and move to the next level. \end{itemize} \item[6.] post-process the marked squares at level $K$: merge eigenvalues when necessary and show warnings if there exist unresolvable squares. \item[7.] output eigenvalues. \end{itemize} In the implementation, we choose $m=50$. Similar values such as $m=30$ do not change the performance significantly. The indicator threshold is set to be $\delta_0 = 1/20$ as discussed in Section 3.1. The number of quadrature points is $n_0 = 8$, which is effective for the examples. The choices of these parameters affect the efficiency and robustness of the algorithm in a subtle way and deserve further study for different problems. \section{Numerical Examples}\label{NE} We present several examples for SIM-M. All the test matrices are from the University of Florida Sparse Matrix Collection \cite{DavisHu2011ACMTOMS} except the last example. The computations are done using MATLAB R2017a on a MacBook Pro with 16 GB memory and a 3-GHz Intel Core i7 CPU. \subsection{Directed Weighted Graphs} The first group contains four non-symmetric matrices, HB/gre\_115, HB/gre\_343, HB/gre\_512, HB/gre\_1107. These matrices represent directed weighted graphs. \begin{table}[h!] \caption{Time (in seconds) used for all eigenvalues by SIM-M.} \label{gre} \centering \begin{tabular}{l|r|r|r|r} \hline N (size of the matrix) & 115& 343& 512& 1107\\ \hline T (time in seconds) & 3.4141s & 10.2917s & 14.7461s& 40.2252s\\ \hline T/N &0.0297 & 0.0300 & 0.0288 & 0.0363 \\ \hline \hline \end{tabular} \end{table} Table~\ref{gre} reports the computation of all eigenvalues by SIM-M. The first row gives the sizes of the four matrices.
The second row shows the CPU times (in seconds) used by SIM-M. The numbers in the third row are the ratios of the time used by SIM-M to the sizes of the matrices, i.e., the average time to compute one eigenvalue. The ratio appears to be stable for matrices of different sizes. In Fig.~\ref{grePlots}, we show the eigenvalues computed by SIM-M and by Matlab {\it eig}, which coincide with each other. \begin{figure}[h!] \begin{minipage}{0.50\linewidth} \includegraphics[angle=0, width=\textwidth]{gre115.eps} \vspace{-0.5cm} \begin{center} (a) \end{center} \end{minipage} \begin{minipage}{0.50\linewidth} \includegraphics[angle=0, width=\textwidth]{gre343.eps} \vspace{-1cm} \begin{center} (b) \end{center} \end{minipage} \begin{minipage}{0.50\linewidth} \includegraphics[angle=0, width=\textwidth]{gre512.eps} \vspace{-1cm} \begin{center} (c) \end{center} \end{minipage} \begin{minipage}{0.50\linewidth} \includegraphics[angle=0, width=\textwidth]{gre1107.eps} \vspace{-1cm} \begin{center} (d) \end{center} \end{minipage} \caption{Eigenvalues computed by SIM-M and Matlab {\it eig} coincide. (a): HB/gre\_115. (b): HB/gre\_343. (c): HB/gre\_512. (d): HB/gre\_1107.} \label{grePlots} \end{figure} \subsection{A Quantum Chemistry Problem} The second example, Bai/qc2534, is a sparse $2534 \times 2534$ matrix from modeling $\mathrm{H}_2^+$ in an electromagnetic field. The full spectrum, computed by Matlab {\it eig}, is shown in Fig.~\ref{QC2534}(a), in which the red rectangle is $R_1=[-0.1, 0] \times [-0.125, 0.025]$. In Fig.~\ref{QC2534}(b), the eigenvalues computed by SIM-M in $R_1$ are shown, which coincide with those computed by Matlab {\it eig}. The red rectangle in Fig.~\ref{QC2534}(b) is $R_2=[-0.04, 0]\times [-0.04, 0]$. Eigenvalues in $R_2$ computed by SIM-M are shown in Fig.~\ref{QC2534}(c). The rectangle in Fig.~\ref{QC2534}(c) is $R_3=[-0.02, 0]\times [-0.03, -0.02]$. Eigenvalues in $R_3$ computed by SIM-M are shown in Fig.~\ref{QC2534}(d). \begin{figure}[h!]
\begin{minipage}{0.50\linewidth} \includegraphics[angle=0, width=\textwidth]{QC2534Full.eps} \vspace{-0.5cm} \begin{center} (a) \end{center} \end{minipage} \begin{minipage}{0.50\linewidth} \includegraphics[angle=0, width=\textwidth]{QC2534s.eps} \vspace{-1cm} \begin{center} (b) \end{center} \end{minipage} \begin{minipage}{0.50\linewidth} \includegraphics[angle=0, width=\textwidth]{QC2534sr.eps} \vspace{-1cm} \begin{center} (c) \end{center} \end{minipage} \begin{minipage}{0.50\linewidth} \includegraphics[angle=0, width=\textwidth]{QC2534srr.eps} \vspace{-1cm} \begin{center} (d) \end{center} \end{minipage} \caption{QC2534. (a): Full spectrum by Matlab {\it eig} (the rectangle is $R_1$). (b): Eigenvalues by SIM-M in $R_1$ (the rectangle is $R_2$). (c): Eigenvalues by SIM-M in $R_2$ (the rectangle is $R_3$). (d): Eigenvalues by SIM-M in $R_3$. } \label{QC2534} \end{figure} The second row of Table~\ref{T2534} shows that there are $88$, $23$ and $7$ eigenvalues in $R_1, R_2$ and $R_3$, respectively. The third row shows the time used by SIM-M to compute all eigenvalues in $R_1, R_2$ and $R_3$. The fourth row shows the average time to compute one eigenvalue, which appears to be consistent. \begin{table}[h!] \caption{Time (in seconds) used by SIM-M for different regions.} \label{T2534} \centering \begin{tabular}{l|r|r|r} \hline & $R_1$& $R_2$& $R_3$\\ \hline N (\# of eigenvalues) & 88 & 23& 7\\ \hline T (time in seconds) & 14.7445s & 3.7005s& 0.54645s\\ \hline T/N & 0.1676 & 0.1609 & 0.0781\\ \hline \end{tabular} \end{table} \subsection{DNA Electrophoresis} The third example is a $39,082 \times 39,082$ matrix, vanHeukelum/cage11, arising from DNA electrophoresis. We consider a series of nested domains \begin{eqnarray*} &&R_1=[0.230, 0.270] \times [-0.0005, 0.0005],\\ &&R_2=[0.250, 0.270] \times [-0.0005, 0.0005],\\ &&R_3=[0.250, 0.260] \times [-0.0005, 0.0005],\\ &&R_4=[0.254, 0.256] \times [-0.0005, 0.0005].
\end{eqnarray*} In Table~\ref{cage11}, the time and the number of eigenvalues found in each domain are shown. Again, the average time to compute one eigenvalue is stable. \begin{table}[h!] \caption{Time (in seconds) used by SIM-M for different regions.} \label{cage11} \centering \begin{tabular}{l|c|c|c|c} \hline & $R_1$& $R_2$& $R_3 $& $R_4$\\ \hline\hline N (\# of eigenvalues) &$105$& $31$& $31$& $8$\\ \hline T (time in seconds) & 588.3552s& 299.4242s & 214.0637s& 47.8098s\\ \hline T/N & 5.6034 & 9.6588 & 6.9053 & 5.9762\\ \hline \end{tabular} \end{table} \begin{remark} Note that it is not possible to use Matlab {\it eig} to find all eigenvalues due to the memory constraint. However, SIM-M does not have this limitation. In fact, the numerical results in the above two subsections indicate that a parallel version of SIM-M has the potential to be faster than the classical methods. \end{remark} \subsection{Quantum States in Disordered Media} The test matrices are sparse and symmetric, arising from localized quantum states in random or disordered media \cite{Arnold2016}. We would like to use this example to show that the method can treat rather large problems on a laptop. The matrices $A$ and $B$ are of size $1,966,080 \times 1,966,080$. We consider three nested domains given by \begin{eqnarray*} && R_1=[0.00, 0.60] \times [-0.05, 0.05],\\ && R_2=[0.00, 0.50] \times [-0.05, 0.05], \\ && R_3=[0.00, 0.40] \times [-0.05, 0.05]. \end{eqnarray*} In Table~\ref{AB65536}, the time and the number of eigenvalues in each domain are shown. Again, we observe that the average time to compute one eigenvalue is stable. \begin{table}[h!]
\caption{Time (in seconds) used by SIM-M for different regions.} \label{AB65536} \centering \begin{tabular}{l|c|c|c} \hline & $R_1$& $R_2$& $R_3$\\ \hline \hline N (\# of eigenvalues) & $36$& $7$& $3$\\ \hline T (time in seconds) & 573.1088s&112.1876s & 58.9957s\\ \hline T/N & 15.9197 & 16.0268 &19.6652 \\ \hline \end{tabular} \end{table} \section{Conclusions and Future Work}\label{Con} Given a region on the complex plane, SIMs first compute an indicator, which is used to test if the region contains eigenvalues. Then the region is subdivided and tested until all the eigenvalues are isolated within a specified precision. Hence SIMs can be viewed as a bisection technique. We propose an improved version, SIM-M, to compute many eigenvalues of large matrices. Several examples are presented for demonstration. However, to make the method practically competitive, a parallel implementation on supercomputers is necessary. Currently, SIMs use the spectral projection to compute the indicators. Other ways to define the indicators should be investigated in the future.
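To make the bisection viewpoint concrete, the following is a minimal Python sketch of the subdivision-with-indicator idea, not SIM-M itself: the spectral-projection indicator is evaluated by a dense linear solve and an $n_0$-point trapezoid rule on the circle circumscribing each square, while the Cayley-transformed Krylov subspaces, the resolvability test and the shift reuse are omitted. The function names and the circumscribed-circle contour are our own illustrative choices, not taken from the method description above.

```python
import numpy as np

def indicator(A, B, f, center, radius, n0=8):
    """Approximate ||P f|| / ||f||, where P is the spectral projection of the
    pencil (A, B) for eigenvalues inside |z - center| = radius, via an
    n0-point trapezoid rule for (1/(2 pi i)) * contour int (zB - A)^{-1} B f dz."""
    Pf = np.zeros(len(f), dtype=complex)
    for j in range(n0):
        w = radius * np.exp(2j * np.pi * j / n0)   # z_j - center
        Pf += w * np.linalg.solve((center + w) * B - A, B @ f)
    return np.linalg.norm(Pf / n0) / np.linalg.norm(f)

def sim(A, B, f, center, half_width, h0, delta0=0.05, n0=8):
    """Recursively subdivide squares whose indicator exceeds delta0; centers
    of retained squares with side length <= h0 are reported as eigenvalues."""
    # the circle of radius half_width*sqrt(2) circumscribes the square
    if indicator(A, B, f, center, half_width * np.sqrt(2), n0) < delta0:
        return []                      # square judged eigenvalue-free
    if 2 * half_width <= h0:
        return [center]                # precision reached
    eigs, h = [], half_width / 2
    for dx in (-h, h):                 # split into four subsquares
        for dy in (-h, h):
            eigs += sim(A, B, f, center + complex(dx, dy), h, h0, delta0, n0)
    return eigs
```

Since the circumscribed circles of neighboring squares overlap, an eigenvalue near a shared edge may be reported by several leaf squares; this corresponds to the merging of eigenvalues in the post-processing step of SIM-M.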
https://arxiv.org/abs/2006.16117
A Multilevel Spectral Indicator Method for Eigenvalues of Large Non-Hermitian Matrices
Recently a novel family of eigensolvers, called spectral indicator methods (SIMs), was proposed. Given a region on the complex plane, SIMs first compute an indicator by the spectral projection. The indicator is used to test if the region contains eigenvalue(s). Then the region containing eigenvalue(s) is subdivided and tested. The procedure is repeated until the eigenvalues are identified within a specified precision. In this paper, using Cayley transformation and Krylov subspaces, a memory efficient multilevel eigensolver is proposed. The method uses less memory compared with the early versions of SIMs and is particularly suitable to compute many eigenvalues of large sparse (non-Hermitian) matrices. Several examples are presented for demonstration.
https://arxiv.org/abs/2102.06541
Upper functions for sample paths of Lévy(-type) processes
We study the small-time asymptotics of sample paths of Lévy processes and Lévy-type processes. Namely, we investigate under which conditions the limit $$\limsup_{t \to 0} \frac{1}{f(t)} |X_t-X_0|$$ is finite resp.\ infinite with probability $1$. We establish integral criteria in terms of the infinitesimal characteristics and the symbol of the process. Our results apply to a wide class of processes, including solutions to Lévy-driven SDEs and stable-like processes. For the particular case of Lévy processes, we recover and extend earlier results from the literature. Moreover, we present a new maximal inequality for Lévy-type processes, which is of independent interest.
\section{Introduction} \label{intro} A mapping $f:[0,1] \to [0,\infty)$ is called an upper function for a stochastic process $(X_t)_{t \geq 0}$ if \begin{equation*} \limsup_{t \to 0} \frac{1}{f(t)} |X_t-X_0| \leq 1 \quad \text{almost surely}, \end{equation*} i.e.\ a typical sample path $t \mapsto X_t(\omega)$ grows asymptotically at most as fast as $f(t)$. In this article, we are interested in upper functions for L\'evy and L\'evy-type processes. Our aim is to establish integral criteria in terms of the characteristics and the symbol of the process -- see Section~\ref{def} for definitions -- which characterize whether $f$ is an upper function. \par For L\'evy processes, the study of upper functions was initiated by Khintchine \cite{khin39}. He showed that any one-dimensional L\'evy process satisfies the following law of iterated logarithm (LIL): \begin{equation*} - \liminf_{t \to 0} \frac{X_t}{\sqrt{2t \log \log \frac{1}{t}}} = \limsup_{t \to 0} \frac{X_t}{\sqrt{2 t \log \log \frac{1}{t}}} = \sigma \quad \text{a.s.}, \end{equation*} where $\sigma \geq 0$ is the diffusion coefficient. In consequence, the small-time asymptotics of a L\'evy process is governed by the Gaussian part if $\sigma \neq 0$. For this reason our focus is on processes with vanishing diffusion part. Khintchine \cite{khin39} also showed -- under some mild assumptions -- that $f$ is an upper function for a L\'evy process $(X_t)_{t \geq 0}$ if \begin{equation*} \int_0^1 \frac{1}{t} \mbb{P}(|X_t| \geq c f(t)) \, dt < \infty \end{equation*} for a suitable constant $c>0$. In practice, it is often difficult to check whether the latter integral is finite. There is a more tractable criterion in terms of the L\'evy measure $\nu$. 
Namely, it holds for a wide class of functions $f$ that \begin{equation} \limsup_{t \to 0} \frac{1}{f(t)} |X_t| = \begin{rcases} \begin{dcases} 0 \\ \infty \end{dcases} \end{rcases}\, \, \text{a.s.} \iff \int_{0}^1 \nu(\{y \in \mbb{R}^d; |y| \geq f(t)\}) \, dt \,\, \begin{rcases} \begin{dcases}< \infty \\ = \infty \end{dcases} \end{rcases}; \label{intro-eq1} \end{equation} this characterization is classical for stable L\'evy processes, see e.g.\ \cite{fristedt}, and has been extended to general one-dimensional L\'evy processes by Wee \& Kim \cite{wee88}. For some processes, \eqref{intro-eq1} breaks down, and it may happen that $\limsup_{t \to 0} \frac{1}{f(t)} |X_t| \in (0,\infty)$ almost surely, see \cite{bertoin08,savov09,wee88} for details. A number of further characterizations for upper functions are collected in Theorem~\ref{main-3}. We require only mild assumptions on the L\'evy process $(X_t)_{t \geq 0}$ and the mapping $f$; thus generalizing earlier results in the literature. For power functions $f(t)=t^{\kappa}$, there is a close connection to the Blumenthal--Getoor index, cf.\ Corollary~\ref{main-47}. \par The second part of our results is about the small-time asymptotics of L\'evy-type processes. Intuitively, a L\'evy-type process behaves locally like a L\'evy process but the L\'evy triplet depends on the current position of the process, see Section~\ref{def} below for the precise definition. Important examples include solutions to L\'evy-driven stochastic differential equations (SDEs), processes of variable order and random time changes of L\'evy processes, just to mention a few. Studying the sample path behaviour of L\'evy-type processes is much more delicate than in the L\'evy case because the processes are no longer homogeneous in space, see \cite[Chapter 5]{ltp} for a survey on results for the closely related class of Feller processes. 
Schilling \cite{rs-growth} introduced a generalized Blumenthal--Getoor index and obtained a criterion for a power function $f(t)=t^{\kappa}$ to be an upper function of a L\'evy-type process, see also \cite[Theorem 5.16]{ltp}. A recent paper by Reker \cite{reker20} studies the small-time asymptotics of solutions to SDEs driven by jump processes. Moreover, there are LIL-type results for L\'evy-type processes and other classes of jump processes available in the literature, see \cite{kim21,kim17,knop14} and the references therein. Our contribution in this paper is two-fold. Firstly, we establish sufficient conditions in terms of the characteristics and the symbol, which ensure that a given mapping $f$ is an upper function for a L\'evy-type process, cf.\ Theorem~\ref{main-5}. On the way, we obtain new results on upper functions for Markov processes, cf.\ Section~\ref{up}. Secondly, we prove a criterion for a given function $f$ \emph{not} to be an upper function, cf.\ Theorem~\ref{main-9}. The key ingredients for the proofs are a new maximal inequality for L\'evy-type processes, cf.\ Section~\ref{max}, and a conditional Borel--Cantelli lemma for backward filtrations. \section{Main results} \label{main} This section is divided into two parts: first, we present our results for L\'evy processes; then, in the second part, we state the results which apply to the wider class of L\'evy-type processes. See Section~\ref{def} below for definitions and notation. The following is our first main result. \begin{theorem} \label{main-3} Let $(X_t)_{t \geq 0}$ be a L\'evy process with L\'evy triplet $(b,0,\nu)$ and characteristic exponent $\psi$ satisfying the sector condition, i.e.\ $|\im \psi(\xi)| \leq C \re \psi(\xi)$, $\xi \in \mbb{R}^d$, for some constant $C>0$. Let $f: [0,1] \to (0,\infty)$ be non-decreasing, and assume that one of the following conditions holds.
\begin{enumerate}[label*=\upshape (A\arabic*),ref=\upshape A\arabic*] \item\label{A1} The L\'evy measure $\nu$ satisfies \begin{equation*} \limsup_{r \to 0} \frac{\int_{|y| \leq r} |y|^2 \, \nu(dy)}{r^2 \nu(\{|y| > r\})} < \infty. \end{equation*} \item\label{A2} There is a constant $M>0$ such that \begin{equation*} \int_{r<f(t)} \frac{1}{f(t)^2} \, dt \leq M \frac{f^{-1}(r)}{r^2}, \qquad r \in (0,1). \end{equation*} \end{enumerate} The following statements are equivalent. \begin{enumerate}[label*=\upshape (L\arabic*),ref=\text{\upshape (L\arabic*)}] \item\label{main-3-i} $\int_0^1 \nu(\{|y| \geq cf(t)\}) \, dt < \infty$ for some $c>0$, \item\label{main-3-ii} $\int_0^1 \sup_{|\xi| \leq 1/(\eps f(t))} |\psi(\xi)| \, dt<\infty$ for some (all) $\eps>0$, \item\label{main-3-iii} $\int_0^1 \mbb{P} \left( \sup_{s \leq t} |X_s| \geq \eps f(t)\right) \frac{1}{t} \, dt < \infty$ for all $\eps>0$, \item\label{main-3-iv} $\int_0^1 \mbb{P}(|X_t| \geq \eps f(t)) \frac{1}{t} \, dt < \infty$ for all $\eps>0$, \item\label{main-3-v} $\limsup_{t \to 0} \frac{1}{f(t)} \sup_{s \leq t} |X_s|=0$ almost surely, \item\label{main-3-vi} $\limsup_{t \to 0} \frac{1}{f(t)} |X_t|=0$ almost surely, \item\label{main-3-vii} $\limsup_{t \to 0} \frac{1}{f(t)} |X_t|<\infty$ almost surely. \end{enumerate} In particular, \begin{equation*} \limsup_{t \to 0} \frac{1}{f(t)} |X_t| \in \{0,\infty\} \quad\text{a.s.} \end{equation*} In \ref{main-3-v}--\ref{main-3-vii} we may replace ``almost surely'' by ``with positive probability''. \end{theorem} Theorem~\ref{main-3} generalizes \cite[Corollary 11.3]{fristedt} for stable processes and the results from \cite[Section III.4]{bertoin} for subordinators. Savov \cite{savov09} proved the equivalence of \ref{main-3-i} and \ref{main-3-vi} under (a slightly stronger condition than) \eqref{A2} and the assumption that $(X_t)_{t \geq 0}$ has paths of unbounded variation.
The equivalence of \ref{main-3-iv} and \ref{main-3-vi} goes back to Khintchine \cite{khin39}, see also \cite[Appendix, Theorem 2]{skorohod91}. The proof of Theorem~\ref{main-3} will be presented in Section~\ref{p}. \begin{remark} \label{main-4} \begin{enumerate}[wide, labelwidth=!, labelindent=0pt] \item\label{main-4-ii} The proof of Theorem~\ref{main-3} shows that the implications \begin{equation*} \ref{main-3-ii}\implies\ref{main-3-iii}\implies\ref{main-3-iv}\implies\ref{main-3-v}\implies\ref{main-3-vi}\implies\ref{main-3-vii}\implies\ref{main-3-i} \end{equation*} hold without the sector condition. The sector condition is only needed to relate the integrals in \ref{main-3-i} and \ref{main-3-ii} to each other. In fact, the key for the proof of \ref{main-3-i} $\implies$\ref{main-3-ii} is the implication \begin{equation} \exists c>0\::\: \int_0^1 \nu(\{|y| \geq cf(t)\}) \, dt < \infty \implies \forall \eps>0\::\: \int_0^1 \sup_{|\xi| \leq 1/(\eps f(t))} \re \psi(\xi) \, dt<\infty, \label{main-eq7} \end{equation} which does not require the sector condition, see Lemma~\ref{p-1} below; the sector condition is then used to replace $\re \psi$ by $|\psi|$ in the integral on the right-hand side. \item\label{main-4-iii} For the equivalences to hold, it is crucial that one of the assumptions \eqref{A1}, \eqref{A2} is satisfied; if both assumptions are violated, then the equivalences break down in general and it may happen that \begin{equation*} 0< \limsup_{t \to 0} \frac{1}{f(t)} |X_t| < \infty \quad \text{a.s.}, \end{equation*} see \cite{bertoin08,savov09,wee88} and Example~\ref{main-48} below. \item\label{main-4-i} Condition \eqref{A2} holds for any continuous function $f:[0,1] \to [0,\infty)$ satisfying $\frac{f(t)}{t} \uparrow \infty$ as $t \downarrow 0$ and $\frac{f(t)}{t^{\alpha}} \downarrow 0$ as $t \downarrow 0$ for some $\alpha>\tfrac{1}{2}$, cf.\ \cite[Proof of Corollary 2.1]{savov09}. 
While this criterion is useful in many cases, it is too restrictive in some situations. For instance, if $(X_t)_{t \geq 0}$ is an isotropic $\alpha$-stable L\'evy process, then $f(t)=t^{1/(\alpha-\eps)}$ is an upper function, cf.\ Example~\ref{main-45} below, but clearly $\frac{f(t)}{t} \uparrow \infty$ as $t \to 0$ fails to hold if $\alpha<1$. On the other hand, a straightforward calculation shows that the L\'evy measure of the isotropic $\alpha$-stable L\'evy process satisfies \eqref{A1}, and therefore Theorem~\ref{main-3} applies in this case without any additional growth assumptions on $f$. For further comments on \eqref{A1} and equivalent formulations, we refer to Remark~\ref{p-3}. \end{enumerate} \end{remark} Let us illustrate Theorem~\ref{main-3} with an example. \begin{example} \label{main-45} Let $(X_t)_{t \geq 0}$ be an $\alpha$-stable pure-jump L\'evy process, $\alpha \in (0,2)$, that is, a L\'evy process with L\'evy triplet $(0,0,\nu)$ where the L\'evy measure $\nu$ is of the form \begin{equation*} \nu(A) = \int_0^{\infty} \int_{\mbb{S}^{d-1}} \I_A(r \theta) \frac{1}{r^{1+\alpha}} \, \mu(d\theta) \, dr \end{equation*} for a measure $\mu$ on the sphere $\mbb{S}^{d-1}$ satisfying $\mu(\mbb{S}^{d-1})>0$, see \cite{sato} for a thorough discussion of stable processes. Theorem~\ref{main-3} shows that \begin{equation} \limsup_{t \to 0} \frac{1}{f(t)} |X_t|= \begin{rcases} \begin{dcases} 0 \\ \infty \end{dcases} \end{rcases} \, \, \text{a.s.} \iff \int_0^1 |f(t)|^{-\alpha} \, dt \,\, \begin{rcases} \begin{dcases} < \infty \\ = \infty \end{dcases} \end{rcases} \label{main-eq9} \end{equation} for any non-decreasing function $f:[0,1] \to [0,\infty)$, and so we recover the classical characterization for upper functions of sample paths of stable L\'evy processes, see e.g.\ \cite[Corollary 11.3]{fristedt}. 
\end{example} For power functions $f(t)=t^{\kappa}$, the finiteness of $\limsup_{t \to 0} \frac{1}{f(t)} |X_t|$ can be characterized in terms of the Blumenthal--Getoor index \begin{equation*} \beta := \inf\left\{ \alpha>0; \int_{|y|<1} |y|^{\alpha} \, \nu(dy)< \infty \right\} \in [0,2], \end{equation*} which was introduced in \cite{bg61}. The following result is immediate from Theorem~\ref{main-3}. \begin{corollary} \label{main-47} Let $(X_t)_{t \geq 0}$ be a L\'evy process with L\'evy triplet $(b,0,\nu)$ and assume that the characteristic exponent satisfies the sector condition. Then \begin{equation} \limsup_{t \to 0} \frac{1}{t^{\kappa}} |X_t| = \begin{rcases} \begin{dcases}0 \\ \infty \end{dcases} \end{rcases}\, \, \text{a.s.} \iff \int_{|y|<1} |y|^{1/\kappa} \, \nu(dy) \begin{rcases} \begin{dcases}<\infty \\ = \infty \end{dcases} \end{rcases} \label{main-eq4} \end{equation} for every $\kappa \in (\tfrac{1}{2},\infty)$, and \begin{equation} \limsup_{t \to 0} \frac{1}{t^{\kappa}} |X_t| = \begin{rcases} \begin{dcases}0 \\ \infty \end{dcases} \end{rcases} \, \, \text{a.s.} \quad \text{according as} \quad \begin{rcases} \begin{dcases}\kappa < 1/\beta \\ \kappa>1/\beta \end{dcases} \end{rcases} \label{main-eq5} \end{equation} for all $\kappa>0$. If \eqref{A1} from Theorem~\ref{main-3} is satisfied, then \begin{equation*} \limsup_{t \to 0} \frac{1}{\sqrt{t}} |X_t| = 0 \quad \text{a.s.} \end{equation*} \end{corollary} The characterization \eqref{main-eq5} goes back to Pruitt \cite{pruitt81} and Blumenthal \& Getoor \cite{bg61}, see also \cite[Proposition 47.24]{sato}. Note that the critical case $\kappa=1/\beta$ is excluded in \eqref{main-eq5}; one has to check by hand whether the integral $\int_{|y|<1}|y|^{\beta} \, \nu(dy)$ is finite. 
In \eqref{main-eq4} the critical case is $\kappa=\frac{1}{2}$; this is due to the fact that $\int_{|y| < 1} |y|^2 \, \nu(dy) < \infty$ is always satisfied but at the same time there are pure-jump L\'evy processes with \begin{equation*} 0< \limsup_{t \to 0} \frac{1}{\sqrt{t}} |X_t| < \infty \quad \text{a.s.} \end{equation*} cf.\ \cite{bertoin08,wee88}. Consequently, \eqref{main-eq4} fails, in general, for $\kappa=\tfrac{1}{2}$. Let us give an example of such a process and explain why this is not a contradiction to Theorem~\ref{main-3}. \begin{example} \label{main-48} Let $(X_t)_{t \geq 0}$ be a one-dimensional L\'evy process with L\'evy triplet $(0,0,\nu)$ and L\'evy measure \begin{equation*} \nu(dy) = \frac{1}{2} \frac{1}{|y|^2} \varphi'(|y|) \I_{(0,1/e^e)}(|y|) \, dy \end{equation*} for $\varphi(r)=1/\log \log \frac{1}{r}$. Note that $\nu$ is indeed a L\'evy measure, i.e.\ $\int \min\{|y|^2,1\} \, \nu(dy)<\infty$. As \begin{equation} \int_{|y| \leq r} |y|^2 \, \nu(dy) = \varphi(r), \label{main-eq6} \end{equation} it follows from \cite[Theorem 2.2]{bertoin08} that \begin{equation*} \limsup_{t \to 0} \frac{1}{\sqrt{t}} |X_t| = \sqrt{2} \quad \text{a.s.} \end{equation*} In particular, \eqref{main-eq4} breaks down for $\kappa=\tfrac{1}{2}$ and the equivalences in Theorem~\ref{main-3} fail to hold for $f(t)=\sqrt{t}$. This is not, however, a contradiction to Theorem~\ref{main-3} because the assumptions \eqref{A1} and \eqref{A2} in Theorem~\ref{main-3} are both violated. 
It is straightforward to check that \eqref{A2} fails for $f(t)=\sqrt{t}$; to see that \eqref{A1} fails we note that, by Karamata's Tauberian theorem, see e.g.\ \cite{bingham}, \begin{equation*} \nu(\{|y| > r\}) = \int_r^{1/e^e} \frac{1}{y^2} \varphi'(y) \, dy \approx \frac{1}{2} r^{-2} \frac{1}{\log \frac{1}{r} \left(\log \log \frac{1}{r}\right)^2} \quad \text{as $r \to 0$}, \end{equation*} and thus, by \eqref{main-eq6} and the definition of $\varphi$, \begin{equation*} \lim_{r \to 0} \frac{\int_{|y| \leq r} |y|^2 \, \nu(dy)}{r^2 \nu(\{|y|>r\})} = \infty. \end{equation*} \end{example} Next we present our results for the wider class of L\'evy-type processes, see Section~\ref{def} below for the definition. The following theorem gives sufficient conditions for an increasing function $f:[0,1] \to [0,\infty)$ to be an upper function of a L\'evy-type process. \begin{theorem} \label{main-5} Let $(X_t)_{t \geq 0}$ be a L\'evy-type process with characteristics $(b(x),0,\nu(x,dy))$ and symbol $q$ satisfying the sector condition. Let $x \in \mbb{R}^d$ and $R>0$ be such that \begin{equation*} M:=\limsup_{r \to 0} \sup_{|z-x| \leq R} \frac{\int_{|y| \leq r} |y|^2 \, \nu(z,dy)}{r^2 \nu(z,\{|y| > r\})} < \infty.
\tag{A1'} \label{A1'} \end{equation*} Then the implications \begin{equation*} \ref{main-5-i} \implies \ref{main-5-ii} \implies\ref{main-5-iii} \implies \ref{main-5-iv} \end{equation*} hold for any non-decreasing function $f:[0,1] \to (0,\infty)$, where \begin{enumerate}[label*=\upshape (LTP\arabic*),ref=\text{\upshape (LTP\arabic*)}] \item\label{main-5-i} $\int_0^1 \sup_{|z-x| \leq f(t)} \nu(z,\{|y| \geq c f(t)\}) \, dt < \infty$ for some $c>0$, \item\label{main-5-ii} $\int_0^1 \sup_{|z-x| \leq f(t)} \sup_{|\xi| \leq 1/(\eps f(t))} |q(z,\xi)| \, dt < \infty$ for some (all) $\eps>0$, \item\label{main-5-iii} $\int_0^1 \sup_{|z-x| \leq f(t)} \mbb{P}^z \left( \sup_{s \leq t} |X_s-z| \geq \eps f(t) \right) \, \frac{1}{t} \, dt < \infty$ for all $\eps>0$, \item\label{main-5-iv} $\limsup_{t \to 0} \frac{1}{f(t)} \sup_{s \leq t} |X_s-x|=0$ $\mbb{P}^x$-almost surely. \end{enumerate} \end{theorem} \begin{remark} \label{main-6} \begin{enumerate}[wide, labelwidth=!, labelindent=0pt] \item\label{main-6-i} The implications $\ref{main-5-ii} \implies \ref{main-5-iii}\implies\ref{main-5-iv}$ hold for \emph{any} L\'evy-type process, i.e.\ all the additional assumptions are only needed for the proof of the implication $\ref{main-5-i}\implies\ref{main-5-ii}$. \item\label{main-6-ii} The implication $\ref{main-5-iii} \implies \ref{main-5-iv}$ holds for any strong Markov process, see Theorem~\ref{up-4}. \item\label{main-6-iii} In Theorem~\ref{main-5} we assume that the symbol $q$ satisfies the sector condition, cf.\ \eqref{def-eq14}. A close look at the proof shows that we actually only need a local sector condition, in the sense that, for fixed $x \in \mbb{R}^d$, there are constants $r>0$ and $C>0$ such that \begin{equation*} |\im q(z,\xi)| \leq C \re q(z,\xi) \fa \xi \in \mbb{R}^d, |z-x| \leq r. \end{equation*} The same is true for Proposition~\ref{main-8} and Theorem~\ref{main-9} below.
\item\label{main-6-iv} As already mentioned, assumption \eqref{A1'} is crucial for the proof of the implication $\ref{main-5-i}\implies\ref{main-5-ii}$. One might ask whether this implication also holds under assumption \eqref{A2} from Theorem~\ref{main-3}. We did not manage to prove this, but there is the following slightly weaker result. Consider \begin{equation} \int_0^1 \sup_{|z-x| \leq R} \nu(z,\{|y| \geq c f(t)\}) \, dt < \infty \quad \text{for some $c>0$, $R>0$.} \tag{LTP1'} \label{LTP1'} \end{equation} Note that \eqref{LTP1'} is a bit stronger than \ref{main-5-i}. If \eqref{A2} holds and \begin{equation} \forall r \in (0,R) \, \exists x_0 \in \overline{B(x,r)} \, \, \forall \xi \in \mbb{R}^d,|\xi| \geq 1\::\: \sup_{|z-x| \leq r} |q(z,\xi)| \leq |q(x_0,\xi)|, \label{main-eq15} \end{equation} then $\eqref{LTP1'} \implies \ref{main-5-ii}$. Because of the majorization condition \eqref{main-eq15}, the proof of this assertion is analogous to the case of L\'evy processes, see the proof of Theorem~\ref{main-3} and Lemma~\ref{p-1}. \end{enumerate} \end{remark} For the particular case that $(X_t)_{t \geq 0}$ is a L\'evy process, Theorem~\ref{main-5} yields the implications $\ref{main-3-i}\implies\ref{main-3-ii}\implies\ref{main-3-iii}\implies\ref{main-3-v}$ from Theorem~\ref{main-3}. In this sense, Theorem~\ref{main-5} is a natural extension of Theorem~\ref{main-3}. Unlike in the L\'evy case, it is not to be expected that the conditions \ref{main-5-i}-\ref{main-5-iv} in Theorem~\ref{main-5} are equivalent for a general L\'evy-type process. However, there is the following partial converse. \begin{proposition}\label{main-7} Let $(X_t)_{t \geq 0}$ be a L\'evy-type process with characteristics $(b(x),0,\nu(x,dy))$, and let $f:[0,1] \to [0,\infty)$ be non-decreasing. 
\begin{enumerate} \item\label{main-7-i} If $x \in \mbb{R}^d$ is such that \begin{equation*} \int_0^1 \sup_{|z-x| \leq f(t)} \mbb{P}^z \left( \sup_{s \leq t} |X_s-z| \geq f(t) \right) \frac{1}{t} \, dt < \infty, \end{equation*} then \begin{equation*} \int_0^1 \inf_{|z-x| \leq 10f(t)} \nu(z, \{|y| > 10 f(t)\}) \, dt < \infty. \end{equation*} \item\label{main-7-ii} Assume that \eqref{A1'} from Theorem~\ref{main-5} holds for some $R>0$ and $x \in \mbb{R}^d$. If \begin{equation*} \int_0^1 \inf_{|z-x| \leq C f(t)} \nu(z, \{|y| > C f(t)\}) \, dt < \infty \end{equation*} for a constant $C>0$, then \begin{equation*} \int_0^1 \inf_{|z-x| \leq C f(t)} \sup_{|\xi| \leq c/f(t)} \re q(z,\xi) \, dt < \infty \end{equation*} for all $c>0$. \end{enumerate} \end{proposition} The next result gives a lower bound for the growth of the sample paths of a L\'evy-type process. \begin{proposition} \label{main-8} Let $(X_t)_{t \geq 0}$ be a L\'evy-type process with symbol $q$ satisfying the sector condition. Let $x \in \mbb{R}^d$. If $f:[0,1] \to (0,\infty)$ is a function such that \begin{equation} \limsup_{t \to 0} t \cdot \inf_{|z-x| \leq Rf(t)} \sup_{|\xi| \leq 1/(Cf(t))} \re q(z,\xi) = \infty, \label{main-eq17} \end{equation} for every $R\geq 1$ and some constant $C=C(R)>0$, then \begin{equation} \limsup_{t \to 0} \frac{1}{f(t)} \sup_{s \leq t} |X_s-x| =\infty \quad \text{$\mbb{P}^x$-a.s.} \label{main-eq18} \end{equation} Moreover: \begin{enumerate} \item\label{main-8-i} If additionally $f$ is non-decreasing, then \begin{equation*} \limsup_{t \to 0} \frac{1}{f(t)} |X_t-x| =\infty \quad \text{$\mbb{P}^x$-a.s.} \end{equation*} \item\label{main-8-ii} If $f$ is regularly varying at zero, then \eqref{main-eq18} holds under the milder assumption that \eqref{main-eq17} is satisfied for $R=1$.
\end{enumerate} \end{proposition} In \cite[Theorem 4.3]{rs-growth}, this result was shown for power functions $f(t)=t^{\kappa}$ but in fact the proof goes through for arbitrary functions $f$, see Section~\ref{conv}. The following statement is an immediate consequence of Proposition~\ref{main-8}: If $f$ is a non-negative function and $q$ the symbol of a L\'evy-type process $(X_t)_{t \geq 0}$ such that \begin{equation*} \limsup_{t \to 0} t^{-\beta/\alpha} f(t)<\infty \quad \text{and} \quad \inf_{|z-x| \leq r} \re q(z,\xi) \geq c |\xi|^{\alpha} \quad \text{for all} \, \, |\xi| \gg 1 \end{equation*} for some $x \in \mbb{R}^d$, $r>0$, $\alpha>0$, $c>0$ and $\beta>1$, then \begin{equation*} \limsup_{t \to 0} \frac{1}{f(t)} \sup_{s \leq t} |X_s-x| = \infty \quad \text{$\mbb{P}^x$-a.s.} \end{equation*} Our final main result gives an integral criterion for a function $f$ \emph{not} to be an upper function of a L\'evy-type process. \begin{theorem} \label{main-9} Let $(X_t)_{t \geq 0}$ be a L\'evy-type process with characteristics $(b(x),0,\nu(x,dy))$ and symbol $q$, and let $f:[0,1] \to (0,\infty)$ be non-decreasing. Let $x \in \mbb{R}^d$ be such that one of the following conditions holds. \begin{enumerate}[label*=\upshape (C\arabic*),ref=\text{\upshape (C\arabic*)}] \item\label{C1} $q$ satisfies the sector condition and there is $\kappa \in [0,1)$ such that \begin{equation} \sup_{|z-x| \leq f(t)} \sup_{|\xi| \leq 1/f(t)} |q(z,\xi)| \leq c t^{-\kappa} \inf_{|z-x| \leq R f(t)} \sup_{|\xi| \leq 1/f(t)} |q(z,\xi)|, \qquad t \in (0,1), \label{main-eq185} \end{equation} for every $R \geq 1$ and some constant $c=c(R)>0$. \item\label{C2} There are constants $\alpha \in (0,2]$, $r>0$ and $c>0$ such that \begin{equation} \sup_{|z-x| \leq r} |q(z,\xi)| \leq c(1+|\xi|^{\alpha}), \qquad |\xi| \gg 1, \label{main-eq19} \end{equation} and $\liminf_{t \to 0} t^{-2/\alpha} f(t)=\infty$. 
\end{enumerate} Then: \begin{enumerate} \item\label{main-9-i} If \begin{equation} \int_{0}^1 \inf_{|z-x| \leq C f(t)} \nu(z,\{|y| \geq C f(t)\}) \, dt = \infty \label{main-eq20} \end{equation} for some constant $C>0$, then \begin{equation*} \limsup_{t \to 0} \frac{1}{f(t)} |X_t-x| \geq \frac{C}{5} \quad \text{$\mbb{P}^x$-a.s.} \end{equation*} \item\label{main-9-ii} Assume that \eqref{A1'} from Theorem~\ref{main-5} is satisfied for some $R>0$. If \begin{equation} \int_0^1 \inf_{|z-x| \leq C f(t)} \sup_{|\xi| \leq 1/f(t)} |q(z,\xi)| \, dt = \infty \label{main-eq21} \end{equation} for some constant $C>0$, then \begin{equation*} \limsup_{t \to 0} \frac{1}{f(t)} |X_t-x| \geq \frac{C}{5} \quad \text{$\mbb{P}^x$-a.s.} \end{equation*} \end{enumerate} \end{theorem} \begin{remark}\label{main-10}\begin{enumerate}[wide, labelwidth=!, labelindent=0pt] \item\label{main-10-iii} If the constant $C$ in \eqref{main-eq20} resp.\ \eqref{main-eq21} can be chosen arbitrarily large, then \begin{equation*} \limsup_{t \to 0} \frac{1}{f(t)} |X_t-x|=\infty \quad \text{$\mbb{P}^x$-a.s.} \end{equation*} \item\label{main-10-i} The growth condition \eqref{main-eq19} on the symbol holds automatically for $\alpha=2$, cf.\ \eqref{def-eq13}. \item\label{main-10-ii} If $f$ is regularly varying, then it suffices to check \eqref{main-eq185} in \ref{C1} for $R=1$. Moreover, we note that \eqref{main-eq185} is trivially satisfied if the symbol $q$ does not depend on $z$, i.e.\ if $(X_t)_{t \geq 0}$ is a L\'evy process. \item\label{main-10-iv} For the particular case of L\'evy processes, Theorem~\ref{main-9} is known -- see Theorem~\ref{main-3} and the references below it -- but Theorem~\ref{main-9} seems to be the first result in this direction which applies for the much wider class of L\'evy-type processes. Let us comment on the differences in the proofs. 
For L\'evy processes, the standard approach to prove an assertion of the form \begin{equation*} \limsup_{t \to 0} \frac{1}{f(t)} |X_t| \geq C \quad \text{a.s.} \end{equation*} is to construct a suitable sequence $(A_n)_{n \in \mbb{N}}$ of sets which involves only the increments of $(X_t)_{t \geq 0}$ and which satisfies \begin{equation*} \limsup_{n \to \infty} A_n \subseteq \left\{ \limsup_{t \to 0} \frac{1}{f(t)} |X_t| \geq C \right\}, \end{equation*} and to use (the difficult direction of) the Borel--Cantelli lemma to deduce from $\sum_{n \in \mbb{N}} \mbb{P}(A_n)=\infty$ that $\mbb{P}(\limsup_n A_n)=1$. This approach relies heavily on the independence of the increments -- ensuring that the sets $(A_n)_{n \in \mbb{N}}$ are independent -- and so it fails to work in the more general framework of L\'evy-type processes. We fix this issue by using a conditional Borel--Cantelli lemma for backward filtrations, cf.\ Proposition~\ref{appendix-1}. Moreover, our proof uses a new maximal inequality for L\'evy-type processes which is of independent interest, cf.\ Section~\ref{max}. \end{enumerate} \end{remark} We close this section with some illustrative examples. \begin{example}[Process of variable order] \label{main-11} Let $(X_t)_{t \geq 0}$ be a L\'evy-type process with symbol $q(x,\xi)=|\xi|^{\alpha(x)}$ for $\alpha: \mbb{R}^d \to (0,2)$ continuous; a sufficient condition for the existence of such a process is that $\alpha$ is H\"older continuous and bounded away from zero, see e.g.\ \cite{bass,kol,matters} for details. Let us mention that Negoro \cite{negoro94} was one of the first to study the small-time asymptotics of processes of variable order.
If we set $\alpha^*(x,r):=\sup_{|z-x| \leq r} \alpha(z)$ and $\alpha_*(x,r) := \inf_{|z-x| \leq r} \alpha(z)$, then our results show that \begin{equation} \int_0^1 |f(t)|^{-\alpha^*(x,r)} \, dt < \infty \, \, \text{for some $r>0$} \implies \limsup_{t \to 0} \frac{1}{f(t)} \sup_{s \leq t} |X_s-x| =0 \quad \text{$\mbb{P}^x$-a.s.} \label{main-eq25} \end{equation} and \begin{equation} \int_0^1 |f(t)|^{-\alpha_*(x,r)} \, dt = \infty \, \, \text{for some $r>0$} \implies \limsup_{t \to 0} \frac{1}{f(t)} |X_t-x| =\infty \quad \text{$\mbb{P}^x$-a.s.} \label{main-eq26} \end{equation} for any $f:[0,1] \to [0,\infty)$ non-decreasing. By the continuity of $\alpha$, this entails that \begin{equation*} \int_0^1 |f(t)|^{-\beta} \, dt < \infty \, \, \text{for some $\beta>\alpha(x)$} \implies \limsup_{t \to 0} \frac{1}{f(t)} \sup_{s \leq t} |X_s-x| =0 \quad \text{$\mbb{P}^x$-a.s.} \end{equation*} and \begin{equation*} \int_0^1 |f(t)|^{-\beta} \, dt = \infty \, \, \text{for some $\beta<\alpha(x)$} \implies \limsup_{t \to 0} \frac{1}{f(t)} |X_t-x| =\infty \quad \text{$\mbb{P}^x$-a.s.} \end{equation*} In particular, this generalizes \cite[Theorem 2.1]{negoro94}, which is about the particular case that $f$ is a power function. If $\alpha$ has a local maximum at $x$, then \eqref{main-eq25} yields \begin{equation*} \int_0^1 |f(t)|^{-\alpha(x)} \, dt < \infty \implies \limsup_{t \to 0} \frac{1}{f(t)} \sup_{s \leq t} |X_s-x| =0 \quad \text{$\mbb{P}^x$-a.s.} \end{equation*} This holds, in particular, if $\alpha(x)=\alpha$ is constant, i.e.\ if $(X_t)_{t \geq 0}$ is an isotropic $\alpha$-stable L\'evy process. An analogous consideration works for \eqref{main-eq26} if $\alpha$ has a local minimum at $x$. In particular, we recover the classical criterion for isotropic $\alpha$-stable L\'evy processes, cf.\ Example~\ref{main-45}. 
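As a concrete sanity check (a worked computation of ours, not taken from the cited results), consider the power function $f(t)=t^{1/\beta}$ with $\beta>0$:

```latex
% Worked check of \eqref{main-eq25} and \eqref{main-eq26} for f(t)=t^{1/\beta}:
\int_0^1 |f(t)|^{-\alpha^*(x,r)} \, dt
    = \int_0^1 t^{-\alpha^*(x,r)/\beta} \, dt < \infty
    \iff \beta > \alpha^*(x,r),
\qquad
\int_0^1 |f(t)|^{-\alpha_*(x,r)} \, dt = \infty
    \iff \beta \leq \alpha_*(x,r).
```

Thus $t^{1/\beta}$ is an upper function at $x$ whenever $\beta > \alpha^*(x,r)$ for some $r>0$, and fails to be one whenever $\beta < \alpha_*(x,r)$; letting $r \to 0$ and using the continuity of $\alpha$ recovers the dichotomy $\beta \gtrless \alpha(x)$ stated above.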
\end{example} \begin{example}[Stable-type process] \label{main-12} Consider a L\'evy-type process $(X_t)_{t \geq 0}$ with characteristics $(0,0,\nu(x,dy))$, where \begin{equation*} \nu(x,dy) = \kappa(x,y) \frac{1}{|y|^{d+\alpha}} \,dy \end{equation*} for some $\alpha \in (0,2)$ and a mapping $\kappa: \mbb{R}^d \times \mbb{R}^d \to (0,\infty)$ which is symmetric in the $y$-variable and satisfies $0<\inf_{x,y} \kappa(x,y)\leq\sup_{x,y} \kappa(x,y) < \infty$, see e.g.\ \cite{bass09,matters} for the existence of such processes. Since \begin{equation*} \frac{1}{M} r^{-\alpha} \leq \nu(x,\{|y| \geq r\}) \leq M r^{-\alpha}, \qquad r>0, \end{equation*} for a constant $M>0$ not depending on $x \in \mbb{R}^d$, it follows from Theorem~\ref{main-5} and Theorem~\ref{main-9} that \begin{equation*} \limsup_{t \to 0} \frac{1}{f(t)} |X_t-x| = \begin{rcases} \begin{dcases}0 \\ \infty \end{dcases} \end{rcases} \, \, \text{$\mbb{P}^x$-a.s.} \quad \text{according as} \quad \int_0^1 |f(t)|^{-\alpha} \, dt \begin{rcases} \begin{dcases}< \infty \\ = \infty \end{dcases} \end{rcases} \end{equation*} for any non-decreasing function $f:[0,1] \to [0,\infty)$. \end{example} \begin{example}[{L\'evy-driven SDE}] \label{main-13} Let $(L_t)_{t \geq 0}$ be a pure-jump L\'evy process with characteristic exponent $\psi$ satisfying the sector condition, and assume that the L\'evy measure $\nu_L$ satisfies \eqref{A1} from Theorem~\ref{main-3}. Let $(X_t)_{t \geq 0}$ be the unique weak solution to the SDE \begin{equation*} dX_t = \sigma(X_{t-}) \, dL_t, \qquad X_0 = x, \end{equation*} for a bounded continuous function $\sigma: \mbb{R} \to \mbb{R}$. Then $(X_t)_{t \geq 0}$ is a L\'evy-type process with symbol $q(x,\xi) := \psi(\sigma(x) \xi)$, cf.\ \cite{kurtz,sde,schnurr10}. Fix $x \in \mbb{R}$ such that $\sigma(x) \neq 0$. By Theorem~\ref{main-5} and Theorem~\ref{main-9}, the following statements hold for any non-decreasing function $f:[0,1] \to [0,\infty)$.
\begin{enumerate} \item\label{main-13-i} If there exists a constant $c>0$ such that \begin{equation*} \int_0^1 \nu_L(\{|y| \geq cf(t)\}) \, dt < \infty, \end{equation*} then \begin{equation*} \limsup_{t \to 0} \frac{1}{f(t)} \sup_{s \leq t} |X_s-x| = 0 \quad \text{and} \quad \limsup_{t \to 0} \frac{1}{f(t)} \sup_{s \leq t} |L_s| = 0. \end{equation*} \item\label{main-13-ii} Assume that $\psi^*(r):=\sup_{|\xi| \leq r} |\psi(\xi)|$ satisfies the following weak scaling condition (at zero): There are constants $\alpha>0$ and $C>0$ such that \begin{equation*} \psi^*(\lambda r) \geq C \lambda^{\alpha} \psi^*(r) \fa r>0,\;\lambda \in (0,1). \end{equation*} If \begin{equation*} \int_0^1 \nu_L(\{|y| \geq c f(t)\}) \, dt = \infty \end{equation*} for some constant $c>0$, then \begin{equation*} \limsup_{t \to 0} \frac{1}{f(t)} \sup_{s \leq t} |X_s-x| > 0 \quad \text{and} \quad \limsup_{t \to 0} \frac{1}{f(t)} \sup_{s \leq t} |L_s| > 0. \end{equation*} \end{enumerate} \end{example} The remainder of the article is organized as follows. After introducing basic definitions and notation in Section~\ref{def}, we establish a new maximal inequality for L\'evy-type processes in Section~\ref{max} and study some of its consequences. In Section~\ref{up} we obtain integral criteria for upper functions of Markov processes. They are the key for the proofs of Theorem~\ref{main-3} and Theorem~\ref{main-5}, which are presented in Section~\ref{p}. Finally, in Section~\ref{conv}, we give the proofs of Proposition~\ref{main-7}, Proposition~\ref{main-8} and Theorem~\ref{main-9}. \section{Basic definitions and notation} \label{def} We consider the Euclidean space $\mbb{R}^d$ with the canonical scalar product $x \cdot y := \sum_{j=1}^d x_j y_j$ and the Borel $\sigma$-algebra $\mc{B}(\mbb{R}^d)$ generated by the open balls $B(x,r) := \{y \in \mbb{R}^d; |y-x|<r\}$. For a real-valued function $f$, we denote by $\nabla f$ the gradient and by $\nabla^2 f$ the Hessian of $f$. 
If $\nu$ is a measure, say on $\mbb{R}^d$, we use the short-hand $\nu(\{|y|>r\})$ for $\nu(\{y \in \mbb{R}^d; |y|>r\})$. \par An operator $A$ defined on the space $C_c^{\infty}(\mbb{R}^d)$ of compactly supported smooth functions is a \emph{L\'evy-type operator} if it has a representation of the form \begin{align} \label{def-eq3} \begin{aligned} Af(x) &= b(x) \cdot \nabla f(x) + \frac{1}{2} \tr(Q(x) \cdot \nabla^2 f(x)) \\ &\quad + \int_{y \neq 0} (f(x+y)-f(x)-y \cdot \nabla f(x) \I_{(0,1)}(|y|)) \, \nu(x,dy), \qquad f\in C_c^{\infty}(\mbb{R}^d), \end{aligned} \end{align} where $b(x) \in \mbb{R}^d$ is a vector, $Q(x) \in \mbb{R}^{d \times d}$ is a positive semi-definite matrix and $\nu(x,dy)$ is a measure on $\mbb{R}^d \setminus \{0\}$ satisfying $\int_{y \neq 0} \min\{1,|y|^2\} \, \nu(x,dy) < \infty$ for each $x \in \mbb{R}^d$. The family $(b(x),Q(x),\nu(x,dy))$, $x \in \mbb{R}^d$, is the \emph{(infinitesimal) characteristics} of $A$. Equivalently, $A$ can be written as a pseudo-differential operator \begin{equation*} Af(x) = - \int_{\mbb{R}^d} q(x,\xi) e^{ix \cdot \xi} \hat{f}(\xi) \, d\xi, \qquad f \in C_c^{\infty}(\mbb{R}^d),\; x \in \mbb{R}^d, \end{equation*} with \emph{symbol} \begin{equation*} q(x,\xi) := -i b(x) \cdot \xi + \frac{1}{2} \xi \cdot Q(x) \xi + \int_{y \neq 0} \left(1-e^{iy \cdot \xi}+iy \cdot \xi \I_{(0,1)}(|y|) \right) \, \nu(x,dy), \qquad x,\xi \in \mbb{R}^d. \end{equation*} For each fixed $x \in \mbb{R}^d$, the mapping $\xi \mapsto q(x,\xi)$ is continuous and negative definite (in the sense of Schoenberg). In consequence, $\xi \mapsto \sqrt{|q(x,\xi)|}$ is subadditive, i.e. \begin{equation} \sqrt{|q(x,\xi+\eta)|} \leq \sqrt{|q(x,\xi)|} + \sqrt{|q(x,\eta)|}, \qquad x,\xi,\eta \in \mbb{R}^d, \label{def-eq11} \end{equation} which implies \begin{equation*} |q(x,\xi)| \leq 2 \sup_{|\eta| \leq 1} |q(x,\eta)| (1+|\xi|^2), \qquad x,\xi \in \mbb{R}^d, \end{equation*} see e.g.\ \cite[Theorem 6.2]{barca}. 
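To make \eqref{def-eq11} and the resulting quadratic bound concrete, here is a small numerical sketch of ours (not part of the text): we take the fixed negative definite function $q(x,\xi)=|\xi|^{\alpha}$ with the arbitrary choice $\alpha=3/2$, for which $\sup_{|\eta| \leq 1}|q(x,\eta)|=1$.

```python
import numpy as np

# Illustration (ours, not from the text): for the negative definite function
# psi(xi) = |xi|^alpha, alpha in (0,2], check the subadditivity (def-eq11)
#   sqrt|psi(xi+eta)| <= sqrt|psi(xi)| + sqrt|psi(eta)|
# and the resulting quadratic bound
#   |psi(xi)| <= 2 * sup_{|eta|<=1} |psi(eta)| * (1 + |xi|^2).
alpha = 1.5
psi = lambda xi: np.abs(xi) ** alpha

xi = np.linspace(-50.0, 50.0, 1001)
eta = np.linspace(-50.0, 50.0, 1001)
XI, ETA = np.meshgrid(xi, eta)

# subadditivity of sqrt(psi), checked on a grid (up to rounding error)
assert np.all(np.sqrt(psi(XI + ETA)) <= np.sqrt(psi(XI)) + np.sqrt(psi(ETA)) + 1e-12)

# quadratic growth bound; sup_{|eta|<=1} psi(eta) = 1 here
sup_ball = 1.0
assert np.all(psi(xi) <= 2 * sup_ball * (1 + xi**2))
print("subadditivity and quadratic bound verified on the grid")
```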
In particular, $(x,\xi) \mapsto q(x,\xi)$ is locally bounded if, and only if, there is for every compact set $K \subseteq \mbb{R}^d$ some constant $C>0$ such that \begin{equation} |q(x,\xi)| \leq C(1+|\xi|^2), \qquad \xi \in \mbb{R}^d,\; x \in K. \label{def-eq13} \end{equation} The local boundedness of $q$ can also be characterized in terms of the characteristics; namely, $q$ is locally bounded if, and only if, \begin{equation*} \forall K \subseteq \mbb{R}^d \, \, \text{compact}\::\: \sup_{x \in K} \left( |b(x)|+|Q(x)| +\int_{y \neq 0} \min\{1,|y|^2\} \nu(x,dy) \right)<\infty. \end{equation*} Applying Taylor's formula in \eqref{def-eq3} shows that the local boundedness of the symbol $q$ of the L\'evy-type operator $A$ implies $\|Af\|_{\infty}<\infty$ for every $f \in C_c^{\infty}(\mbb{R}^d)$. We say that $q$ satisfies the \emph{sector condition} if there is a constant $C>0$ such that \begin{equation} |\im q(x,\xi)| \leq C \re q(x,\xi) \fa x,\xi \in \mbb{R}^d. \label{def-eq14} \end{equation} Next we introduce the probabilistic objects. Let $A$ be a L\'evy-type operator and $(\Omega,\mc{A},\mbb{P})$ a probability space. A stochastic process $X_t: \Omega \to \mbb{R}^d$, $t \geq 0$, with \cadlag sample paths is a \emph{solution to the $(A,C_c^{\infty}(\mbb{R}^d))$-martingale problem with initial distribution $\mu$} if $\mbb{P}(X_0 \in \cdot)=\mu(\cdot)$ and \begin{equation*} M_t^f := f(X_t)-f(X_0)- \int_0^t Af(X_s) \, ds, \qquad t \geq 0, \end{equation*} is a martingale with respect to the canonical filtration $\mc{F}_t := \sigma(X_s; s \leq t)$ for every $f \in C_c^{\infty}(\mbb{R}^d)$. 
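As a small numerical aside of ours on the sector condition \eqref{def-eq14}: for compound Poisson symbols with unit jumps, symmetry of the jump measure decides whether \eqref{def-eq14} can hold. The following sketch (with our own choice of jump measures) illustrates this.

```python
import numpy as np

# Illustration (ours, not from the text): compound Poisson symbols with
#   one-sided jumps:  nu = delta_1              =>  q(xi) = 1 - exp(i*xi)
#   symmetric jumps:  nu = (delta_1+delta_{-1})/2  =>  q(xi) = 1 - cos(xi)
# (jump modulus 1, so the compensator indicator 1_{(0,1)}(|y|) vanishes).
xi = np.linspace(1e-3, 1.0, 1000)

q_onesided = 1 - np.exp(1j * xi)
# |Im q| / Re q = |cot(xi/2)|, which blows up as xi -> 0:
# no constant C in (def-eq14) can work, so the sector condition fails.
ratio = np.abs(q_onesided.imag) / q_onesided.real
print("max |Im q|/Re q (one-sided jumps):", ratio.max())

# The symmetric symbol is real, so the sector condition holds trivially.
q_symmetric = 1 - np.cos(xi)
print("Im q (symmetric jumps) is identically 0")
```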
A tuple $(X_t, t \geq 0; \mbb{P}^x, x \in \mbb{R}^d)$ consisting of a family of probability measures $\mbb{P}^x$, $x \in \mbb{R}^d$, on a measurable space $(\Omega,\mc{A})$ and a stochastic process $X_t: \Omega \to \mbb{R}^d$ with \cadlag sample paths is called a \emph{L\'evy-type process with symbol $q$} if \begin{enumerate} \item\label{ltp-1} $(X_t,t \geq 0; \mbb{P}^x, x \in \mbb{R}^d)$ is a strong Markov process; \item\label{ltp-2} $(X_t)_{t \geq 0}$ solves the $(A,C_c^{\infty}(\mbb{R}^d))$-martingale problem for the L\'evy-type operator $A$ with symbol $q$. More precisely, for each $x \in \mbb{R}^d$, the stochastic process $(X_t)_{t \geq 0}$ considered on the probability space $(\Omega,\mc{A},\mbb{P}^x)$ is a solution to the $(A,C_c^{\infty}(\mbb{R}^d))$-martingale problem with initial distribution $\mu=\delta_x$; \item\label{ltp-3} $q$ is locally bounded. \end{enumerate} Note that \ref{ltp-3} entails that $Af$ is bounded for every $f \in C_c^{\infty}(\mbb{R}^d)$, and so the integral $\int_0^t Af(X_s) \, ds$ appearing in the definition of the martingale problem is well-defined. If the $(A,C_c^{\infty}(\mbb{R}^d))$-martingale problem is \emph{well-posed}, i.e.\ there exists a unique solution to the martingale problem for any initial distribution $\mu$, then the strong Markov property \ref{ltp-1} is automatically satisfied, cf.\ \cite[Theorem 4.4.2]{ethier}. Well-posedness is, however, not necessary for the existence of strongly Markovian solutions to martingale problems; one can use so-called Markovian selections to construct such solutions, see \cite[Section 4.5]{ethier} and \cite{markovian}. For a thorough discussion of martingale problems associated with L\'evy-type operators, we refer to \cite{ltp,hoh,jacob1}. 
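The defining martingale property can be illustrated by simulation. The following sketch of ours uses Brownian motion, for which $Af = \tfrac{1}{2}f''$, and the test function $f(x)=\sin x$ (bounded and smooth, though not compactly supported as in the definition).

```python
import numpy as np

# Illustration (ours): for Brownian motion, A f = f''/2, and
#   M_t = f(X_t) - f(X_0) - int_0^t A f(X_s) ds
# should be a mean-zero martingale. We check E[M_1] ~ 0 by Euler
# discretization with f = sin, so that A f(x) = -sin(x)/2.
rng = np.random.default_rng(0)
n_paths, n_steps, T = 20_000, 1_000, 1.0
dt = T / n_steps

X = np.zeros(n_paths)
integral = np.zeros(n_paths)            # left-point Riemann sum for the integral
for _ in range(n_steps):
    integral += 0.5 * (-np.sin(X)) * dt
    X += np.sqrt(dt) * rng.standard_normal(n_paths)

M = np.sin(X) - np.sin(0.0) - integral
print("E[M_1] ≈", M.mean())             # close to 0, up to Monte Carlo error
```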
The following classes of stochastic processes are examples of L\'evy-type processes: \begin{itemize} \item L\'evy processes: A \emph{L\'evy process} is a stochastic process $(X_t)_{t \geq 0}$ with stationary and independent increments and \cadlag sample paths. It is uniquely determined (in distribution) by its L\'evy triplet $(b,Q,\nu)$ and its characteristic exponent $\psi$, cf.\ \cite{sato,barca}. Any L\'evy process is a L\'evy-type process in the sense of the above definition; the corresponding operator $A$ is the pseudo-differential operator with symbol $q(x,\xi):=\psi(\xi)$ and characteristics $(b,Q,\nu)$. \item Feller processes: If $(X_t)_{t \geq 0}$ is a Feller process whose infinitesimal generator $(A,\mc{D}(A))$ satisfies $C_c^{\infty}(\mbb{R}^d) \subseteq \mc{D}(A)$, then $(X_t)_{t \geq 0}$ is a L\'evy-type process; this follows from a result by Courr\`ege \& von Waldenfels, see \cite{ltp,jacob1,mp-feller} for further information. \item Solutions to L\'evy-driven SDEs: Let $(L_t)_{t \geq 0}$ be a L\'evy process with characteristic exponent $\psi$. If $\sigma$ is a bounded continuous function and the stochastic differential equation (SDE) \begin{equation*} dX_t = \sigma(X_{t-}) dL_t, \qquad X_0 =x, \end{equation*} has a unique weak solution $(X_t)_{t \geq 0}$, then $(X_t)_{t \geq 0}$ is a L\'evy-type process with symbol $q(x,\xi)=\psi(\sigma(x)^T \xi)$, cf.\ \cite{ethier,schnurr10}. The assumptions on $\sigma$ can be relaxed, cf.\ \cite{sde,markovian}. \end{itemize} \section{A maximal inequality for L\'evy-type processes} \label{max} In this section, we establish a new maximal inequality for L\'evy-type processes and present some consequences of this inequality. Before we start, we recall the following maximal inequality, which will be used frequently in this paper. \begin{proposition} \label{max-0} Let $(X_t)_{t \geq 0}$ be a L\'evy-type process with symbol $q$.
There is an absolute constant $c>0$ such that \begin{equation} \mbb{P}^x \left( \sup_{s \leq t} |X_s-x| \geq r \right) \leq ct \sup_{|z-x| \leq r} \sup_{|\xi| \leq 1/r} |q(z,\xi)|, \qquad x \in \mbb{R}^d,\;t>0,\;r>0. \label{max-eq0} \end{equation} \end{proposition} This maximal inequality goes back to Schilling \cite{rs-growth}, see also \cite[Theorem 5.1]{ltp} and \cite[Proposition 2.8]{markovian}. Let us mention two variants of this inequality: a version for random times, cf.\ \cite[Lemma 1.29]{matters}, and a localized version, cf.\ \cite[Lemma 4.1]{ihke}. If we denote by $\tau_r^x = \inf\{t \geq0; |X_t-x| \geq r\}$ the first exit time of $(X_t)_{t \geq 0}$ from the open ball $B(x,r)$, then \eqref{max-eq0} can be equivalently formulated as follows: \begin{equation*} \mbb{P}^x(\tau_r^x \leq t) \leq c t \sup_{|z-x| \leq r} \sup_{|\xi| \leq 1/r} |q(z,\xi)|, \qquad x \in \mbb{R}^d,\;t>0,\;r>0. \end{equation*} For the proofs of our main results, we need an upper bound for the probability \begin{equation*} \mbb{P}^x \left( \sup_{s \leq t} |X_s-x| < r \right). \end{equation*} The following maximal inequality allows us to derive suitable bounds and is of independent interest. \begin{proposition} \label{max-1} Let $(X_t)_{t \geq 0}$ be a L\'evy-type process with characteristics $(b(x),Q(x),\nu(x,dy))$, $x \in \mbb{R}^d$, and denote by $\tau_r^x$ the first exit time from $B(x,r)$. Then \begin{equation*} \mbb{P}^x(\tau_r^x \geq t) \leq \frac{1}{1+t G(x,2r)} \quad \text{with} \quad G(x,r) := \inf_{|z-x| \leq r} \nu(z, \{|y| > r\}) \end{equation*} for all $x \in \mbb{R}^d$ and $r>0$. \end{proposition} Note that Proposition~\ref{max-1} implies that \begin{equation*} \mbb{P}^x \left( \sup_{s \leq t} |X_s-x| < r \right) \leq \frac{1}{1+tG(x,2r)}, \qquad x \in \mbb{R}^d,\; r>0. \end{equation*} Intuitively, $G(x,r) = \inf_{|z-x| \leq r} \nu(z,\{|y| > r\})$ quantifies the likelihood of a jump of modulus $> r$ while the process is close to its starting point $x$. 
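The bound of Proposition~\ref{max-1} can be checked by simulation in a case where everything is explicit. The following sketch of ours uses a compound Poisson process with rate $\lambda=2$ and jumps $\pm 1$; every jump has modulus $1>2r$ for $r=0.4$, so $\mbb{P}^x(\tau_r^x \geq t)=e^{-\lambda t}$, which indeed lies below $1/(1+t\lambda)$.

```python
import numpy as np

# Illustration (ours): Proposition max-1 for a compound Poisson process with
# rate lam and jumps of size +-1. For r = 0.4 every jump has modulus
# 1 > 2r, so nu({|y| > 2r}) = lam = G(x, 2r), and the process exits
# B(x, r) exactly at the first jump. Hence P^x(tau_r >= t) = exp(-lam*t),
# which we compare against the bound 1/(1 + t*G).
rng = np.random.default_rng(1)
lam, r, t = 2.0, 0.4, 0.5

first_jump = rng.exponential(1 / lam, size=200_000)   # jump times ~ Exp(lam)
p_empirical = np.mean(first_jump >= t)                # estimates P(tau_r >= t)

bound = 1 / (1 + t * lam)                             # Proposition max-1
print(f"empirical {p_empirical:.4f}  vs  exact {np.exp(-lam*t):.4f}  <=  bound {bound:.4f}")
assert p_empirical <= bound
```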
The idea behind our estimate is that the process immediately leaves the ball $B(x,r)$ if a jump of modulus $>2r$ occurs. Other means of leaving the ball, e.g.\ due to a drift or diffusion part, are not taken into account. In consequence, Proposition~\ref{max-1} performs well for pure-jump processes but less so e.g.\ for processes with a non-vanishing diffusion part. There is a related estimate in \cite[Theorem 5.5]{ltp}, see also \cite[Lemma 6.3]{rs-growth}, giving an upper bound for $\mbb{P}^x(\tau_r^x \geq t)$ in terms of the symbol\footnote{Beware that there is a typo in the definition of $k(x,r)$ in \cite[Theorem 5.5]{ltp}; the two suprema should be infima.}; for the particular case that $q$ satisfies the sector condition \eqref{def-eq14}, it reads \begin{equation} \mbb{P}^x(\tau_r^x \geq t) \leq c \frac{1}{1+t h(x,r)} \quad \text{with} \quad h(x,r) := \sup_{|\xi| \leq 1/(2r)} \inf_{|z-x| \leq r} \re q(z,\xi) \label{max-eq4} \end{equation} for some constant $c>0$. In some situations, \eqref{max-eq4} gives better estimates than Proposition~\ref{max-1} -- e.g.\ if there is a diffusion part -- but our result has its advantages e.g.\ if the sector condition is not satisfied. For instance, for $q(x,\xi) = i \xi + \sqrt{|\xi|}$, we get $\mbb{P}^x(\tau_r^x \geq t) \leq 1/(1+ct r^{-1/2}) \leq c' r^{1/2} t^{-1}$ while \cite[Theorem 5.5]{ltp} gives only $\mbb{P}^x(\tau_r^x \geq t) \leq c'' r^{1/3}t^{-1}$; note that the estimates are of interest only if the right-hand sides are less than or equal to $1$, i.e.\ for $r>0$ small. \begin{proof}[Proof of Proposition~\ref{max-1}] For fixed $x \in \mbb{R}^d$, $\eps>0$ and $r>0$, pick $\chi \in C_c^{\infty}(\mbb{R}^d)$ such that $\I_{B(x,r)} \leq \chi \leq \I_{B(x,r+\eps)}$.
As $\chi(X_t)=1$ on $\{t<\tau_r^x\}$, it follows from Dynkin's formula that \begin{equation} \mbb{P}^x(\tau_r^x>t) = \mbb{E}^x(\chi(X_t) \I_{\{\tau_r^x>t\}}) \leq \mbb{E}^x(\chi(X_{t \wedge \tau_r^x})) = 1 + \mbb{E}^x \left( \int_{(0,t \wedge \tau_r^x)} A \chi(X_s) \, ds \right), \label{max-eq5} \end{equation} where $A$ is the L\'evy-type operator associated with the family of triplets $(b(x),Q(x),\nu(x,dy))$, see \eqref{def-eq3}. For $z \in B(x,r)$, we have $\chi(z)=1$, $\nabla \chi(z)=0$ and $\nabla^2 \chi(z)=0$, and so \begin{equation*} A\chi(z) = \int_{y \neq 0} (\chi(z+y)-1) \, \nu(z,dy). \end{equation*} Using that $0 \leq \chi \leq 1$ on $\mbb{R}^d$ and $\chi=0$ outside $B(x,r+\eps)$, we find that \begin{equation*} A \chi(z) \leq \int_{|(z+y)-x| \geq r+\eps} (\chi(z+y)-1) \, \nu(z,dy) \leq - \int_{|y| \geq 2r+\eps} \nu(z,dy) \end{equation*} for all $z \in B(x,r)$. Since $X_s \in B(x,r)$ for $s<\tau_r^x$, it follows from \eqref{max-eq5} that \begin{equation*} \mbb{P}^x(\tau_r^x>t) \leq 1- \mbb{E}^x \left( \int_{(0,t \wedge \tau_r^x)} \nu(X_s,\{|y| \geq 2r+\eps\}) \, ds \right) \end{equation*} for all $\eps>0$. Letting $\eps \downarrow 0$ using the dominated convergence theorem, we arrive at \begin{align} \mbb{P}^x(\tau_r^x>t) &\leq 1- \mbb{E}^x \left( \int_{(0,t \wedge \tau_r^x)} \nu(X_s,\{|y| >2r \}) \, ds \right) \notag\\ &\leq 1- \mbb{E}^x(\tau_r^x \wedge t) \inf_{|z-x| \leq r} \int_{|y| > 2r} \nu(z,dy). \label{max-eq7} \end{align} The elementary estimate $\mbb{E}^x(\tau_r^x \wedge t) \geq t \mbb{P}^x(\tau_r^x>t)$ now gives \begin{equation*} \mbb{P}^x(\tau_r^x>t) \leq 1- t \mbb{P}^x(\tau_r^x>t) \inf_{|z-x| \leq r} \int_{|y|>2r} \nu(z,dy), \end{equation*} i.e. \begin{align*} \mbb{P}^x(\tau_r^x>t) \leq \frac{1}{1+t\inf_{|z-x| \leq r} \nu(z,\{|y|>2r\})}. \end{align*} Thus, \begin{equation*} \mbb{P}^x(\tau_r^x \geq t) = \lim_{\eps \to 0} \mbb{P}^x(\tau_r^x > t-\eps) \leq \frac{1}{1+t\inf_{|z-x| \leq r} \nu(z,\{|y|>2r\})}. 
\qedhere \end{equation*} \end{proof} \begin{corollary} \label{max-3} Let $(X_t)_{t \geq 0}$ be a L\'evy-type process with characteristics $(b(x),Q(x),\nu(x,dy))$, and denote by $\tau_r^x$ the exit time from the ball $B(x,r)$. Then \begin{equation} \mbb{E}^x \tau_r^x \leq \frac{1}{G(x,2r)} \label{max-eq11} \end{equation} and \begin{equation} \mbb{P}^x(\tau_r^x \geq t) \leq C_0 \exp (-C_1 t G(x,2r)) \label{max-eq12} \end{equation} for all $x \in \mbb{R}^d$, $t>0$ and $r>0$, where $C_0,C_1 <\infty$ are uniform constants and $G(x,r)$ is the mapping defined in Proposition~\ref{max-1}. \end{corollary} \begin{proof} By \eqref{max-eq7}, \begin{equation*} \mbb{E}^x(\tau_r^x \wedge t) \leq \frac{1-\mbb{P}^x(\tau_r^x>t)}{G(x,2r)} \leq \frac{1}{G(x,2r)}. \end{equation*} Letting $t \to \infty$ and using Fatou's lemma proves the first assertion. The second inequality is obtained from Proposition~\ref{max-1} by an iteration argument using the Markov property; it is the same reasoning as in \cite[Proof of Theorem 5.9]{ltp}. \end{proof} \begin{remark} \begin{enumerate}[wide, labelwidth=!, labelindent=0pt] \item For the particular case that $(X_t)_{t \geq 0}$ is a L\'evy process, we know that $N_t := \sharp\{s \leq t; |\Delta X_s| > 2r\}$ is a Poisson process with intensity $\lambda=\nu(\{|y|>2r\})$, where $\nu$ is the L\'evy measure; since a jump of modulus $>2r$ forces the process to leave $B(x,r)$, we have $\{\tau_r^x > t-\eps\} \subseteq \{N_{t-\eps}=0\}$, and so \begin{equation*} \mbb{P}^x(\tau_r^x \geq t) = \lim_{\eps \to 0} \mbb{P}^x(\tau_r^x > t-\eps) \leq \lim_{\eps \to 0} \mbb{P}^x(N_{t-\eps}=0) = e^{-\lambda t} = e^{-t\nu(\{|y|>2r\})}, \end{equation*} which is \eqref{max-eq12} with $C_0=C_1=1$. If $(X_t)_{t \geq 0}$ is a general L\'evy-type process, then $(N_t)_{t \geq 0}$ is no longer a Poisson process, but our result shows that we can still get an analogous estimate in terms of the jump intensity $G(x,2r)$. This fits well with the intuition that a L\'evy-type process behaves locally like a L\'evy process.
\item The estimate \eqref{max-eq12} is optimal for a wide family of jump processes. However, our approach incorporates only the tails of the L\'evy measures and therefore some information may be lost, leading to non-optimal estimates for certain processes. This is best seen for the particular case of stable L\'evy processes, for which Taylor \cite{taylor67} derived upper and lower bounds for $\mbb{P}(\tau_r \geq t)$ (i.e.\ $x=0$). For small $r>0$, he shows that \begin{align*} \mbb{P}(\tau_r\geq t) &\asymp e^{-ct r^{-\alpha}} \qquad \text{for stable processes of type A} \\ \mbb{P}(\tau_r \geq t) &\asymp e^{- ct r^{-\alpha/(1-\alpha)}} \qquad \text{for stable processes of type B, $\alpha \in (0,1)$}, \end{align*} where the constants $c$ in the lower and upper bound may differ. Here, `type B' means essentially that the process has a projection which is a subordinator -- formally, the L\'evy measure is concentrated on a hemisphere $\{y \in \mbb{R}^d; y_j \geq 0\}$ for some $j \in \{1,\ldots,d\}$ -- and all other stable processes are of type A. While our estimate \eqref{max-eq12} yields the correct upper bound for stable processes of type A, we only get the (sub-optimal) upper bound $e^{-ctr^{-\alpha}}$ for processes of type B. \end{enumerate} \end{remark} As a direct consequence of Proposition~\ref{max-1}, we also obtain the following corollary. \begin{corollary} \label{max-5} Let $(X_t)_{t \geq 0}$ be a L\'evy-type process with characteristics $(b(x),Q(x),\nu(x,dy))$, and let $c \in [0,1]$. If $x \in \mbb{R}^d$, $t>0$ and $r>0$ are such that \begin{equation*} \mbb{P}^x \left( \sup_{s \leq t} |X_s-x| > r \right) \leq c, \end{equation*} then \begin{equation*} \mbb{P}^x \left( \sup_{s \leq t} |X_s-x| > r \right) \geq (1-c) t G(x,2r) \end{equation*} for $G(x,r)$ defined in Proposition~\ref{max-1}.
\end{corollary} As an immediate consequence, we see that \begin{equation*} \limsup_{t \to 0} \mbb{P}^x \left( \sup_{s \leq t} |X_s-x| > r(t) \right) < 1 \end{equation*} implies \begin{equation*} \mbb{P}^x \left( \sup_{s \leq t} |X_s-x|>r(t) \right) \geq C t G(x,2r(t)) \end{equation*} for small $t>0$ and some constant $C>0$, which will be useful later on. \begin{proof}[Proof of Corollary~\ref{max-5}] By Proposition~\ref{max-1}, \begin{equation*} \mbb{P}^x \left( \sup_{s \leq t} |X_s-x| \leq r\right) \leq \frac{1}{1+tG(x,2r)}, \end{equation*} which is equivalent to \begin{equation*} \mbb{P}^x \left( \sup_{s \leq t} |X_s-x| \leq r\right) \leq 1- t G(x,2r) \mbb{P}^x \left( \sup_{s \leq t} |X_s-x| \leq r\right). \end{equation*} Hence, \begin{align*} \mbb{P}^x \left( \sup_{s \leq t} |X_s-x|>r \right) &\geq t G(x,2r) \mbb{P}^x \left( \sup_{s \leq t} |X_s-x| \leq r\right) \\ &= t G(x,2r) \left[ 1- \mbb{P}^x \left( \sup_{s \leq t} |X_s-x|>r \right) \right], \end{align*} which proves the assertion. \end{proof} Let us illustrate the results from this section with an example. \begin{example} \label{max-7} Let $(X_t)_{t \geq 0}$ be a process of variable order, i.e.\ a L\'evy-type process with symbol $q(x,\xi) = |\xi|^{\alpha(x)}$ for a continuous mapping $\alpha: \mbb{R}^d \to (0,2]$. Denote by $\tau_r^x$ the first exit time of $(X_t)_{t \geq 0}$ from the ball $B(x,r)$ and set $\alpha_*(x,r):=\inf_{|z-x| \leq r} \alpha(z)$. The following estimates hold for uniform constants $c_0,\ldots,c_4 \in (0,\infty)$: \begin{enumerate} \item $\mbb{P}^x(\tau_r^x \geq t) \leq 1/(1+ c_0 t r^{-\alpha_*(x,r)})$ and $\mbb{P}^x(\tau_r^x \geq t) \leq c_1 \exp(-c_2 t r^{-\alpha_*(x,r)})$, \item $\mbb{E}^x(\tau_r^x) \leq c_3 r^{\alpha_*(x,r)}$, \item $\mbb{P}^x \left( \sup_{s \leq t} |X_s-x| \geq r\right) \geq c_4 t r^{-\alpha_*(x,r)}$ for $t=t(r)>0$ small.
\end{enumerate} \end{example} \section{Integral criteria for upper functions} \label{up} Let $(X_t)_{t \geq 0}$ be a Markov process and $f:[0,1] \to [0,\infty)$ a non-decreasing function. The aim of this section is to derive sufficient conditions for \begin{equation} \limsup_{t \to 0} \frac{1}{f(t)} \sup_{s \leq t} |X_s-x| \leq c \quad \text{$\mbb{P}^x$-a.s.} \label{up-eq1} \end{equation} in terms of certain integrals. Our first main result is the following theorem. \begin{theorem} \label{up-4} Let $(X_t)_{t \geq 0}$ be a Markov process with \cadlag sample paths and $f:[0,1] \to [0,\infty)$ a non-decreasing function. If \begin{equation} \int_0^1 \frac{1}{t} \sup_{|z-x| \leq f(t)} \mbb{P}^z \left( \sup_{s \leq t} |X_s-z| \geq f(t) \right) \, dt < \infty \label{up-eq3} \end{equation} for some $x \in \mbb{R}^d$, then \begin{equation*} \limsup_{t \to 0} \frac{1}{f(t)} \sup_{s \leq t} |X_s-x| \leq 4 \quad \text{$\mbb{P}^x$-a.s.} \end{equation*} \end{theorem} \begin{proof} \primolist Claim: \begin{equation} \mbb{P}^x \left( \sup_{s \leq 2t} |X_s-x| \geq 2r \right) \leq 3 \sup_{|z-x| \leq r} \mbb{P}^z \left( \sup_{s \leq t} |X_s-z| \geq r \right), \qquad x \in \mbb{R}^d,\;r>0,\;t>0. \label{up-eq4} \end{equation} To prove this, we note that \begin{equation*} \mbb{P}^x \left( \sup_{s \leq 2t} |X_s-x| \geq 2r \right) \leq \mbb{P}^x \left( \sup_{s \leq t} |X_s-x| \geq 2r \right) + \mbb{P}^x \left( \sup_{s \leq t} |X_{s+t}-x| \geq 2r \right), \end{equation*} and, by the Markov property, \begin{align*} \mbb{P}^x \left( \sup_{s \leq t} |X_{s+t}-x| \geq 2r \right) &= \mbb{E}^x \left( \mbb{P}^z \left[ \sup_{s \leq t} |X_s-x| \geq 2r \right] \bigg|_{z=X_t} \right) \\ &\leq \mbb{P}^x(|X_t-x| \geq r) + \sup_{|z-x| \leq r} \mbb{P}^z \left( \sup_{s \leq t} |X_s-z| \geq r \right). \end{align*} \secundolist This part of the proof uses an idea from Khintchine \cite{khin39}. Fix $x \in \mbb{R}^d$ such that \eqref{up-eq3} holds. 
Since $f$ is monotone, we have \begin{equation*} p_n:=\mbb{P}^x \left( \sup_{2^{-(n+1)} \leq s \leq 2^{-n}} \frac{1}{f(s)} \sup_{r \leq s} |X_r-x| \geq 4 \right) \leq \mbb{P}^x \left( \sup_{s \leq 2^{-n}} |X_s-x| \geq 4 f(2^{-(n+1)}) \right) \end{equation*} for every $n \in \mbb{N}$. Take any $\theta_n \in [2^{-n},2^{-(n-1)}]$; then $\theta_n/2 \leq 2^{-n}$, and using \primo\ and the monotonicity of $f$, we get \begin{align*} p_n &\leq \mbb{P}^x \left( \sup_{s \leq \theta_n} |X_s-x| \geq 4 f(\theta_n/4) \right) \leq 9 \sup_{|z-x| \leq 3f(\theta_n/4)} \mbb{P}^z \left( \sup_{s \leq \theta_n/4} |X_s-z| \geq f(\theta_n/4) \right). \end{align*} Writing $\theta_n = 2^{-u}$ for $u \in [n-1,n]$ and integrating with respect to $u \in [n-1,n]$, it follows that \begin{align*} p_n \leq 9 \int_{n-1}^n \sup_{|z-x| \leq 3f(2^{-u-2})} \mbb{P}^z \left( \sup_{s \leq 2^{-u-2}} |X_s-z| \geq f(2^{-u-2}) \right) \, du. \end{align*} By a change of variables ($t=2^{-u-2}$), \begin{align*} p_n \leq \frac{9}{|\log 2|} \int_{2^{-(n+2)}}^{2^{-(n+1)}} \frac{1}{t} \sup_{|z-x| \leq 3 f(t)} \mbb{P}^z \left( \sup_{r \leq t} |X_r-z| \geq f(t) \right) \, dt, \end{align*} and so \eqref{up-eq3} yields $\sum_{n \in \mbb{N}} p_n < \infty$. Applying the Borel--Cantelli lemma, we conclude that \begin{equation*} \limsup_{n \to \infty} \sup_{2^{-(n+1)} \leq s \leq 2^{-n}} \frac{1}{f(s)} \sup_{r \leq s} |X_r-x| \leq 4 \quad \text{$\mbb{P}^x$-a.s.} \qedhere \end{equation*} \end{proof} It is natural to ask whether the two suprema in \eqref{up-eq3} are needed, i.e.\ if upper functions can also be characterized in terms of the integral $\int_0^1 \frac{1}{t} \mbb{P}^x(|X_t-x| \geq C f(t)) \, dt$. Our next result shows that this is possible under some additional assumptions. \begin{proposition} \label{up-1} Let $(X_t)_{t \geq 0}$ be a strong Markov process with \cadlag sample paths.
Let $f:[0,1] \to [0,\infty)$ be a non-decreasing function such that\footnote{Here, essinf denotes the essential infimum with respect to Lebesgue measure.} \begin{equation} C := \essinf \left\{ \limsup_{n \to \infty} \frac{f(s^n)}{f(s^{n+1})}; s \in (0,1) \right\}<\infty. \label{up-eq5} \end{equation} Assume that the following conditions are satisfied for some constants $\varrho,\kappa>0$ and a function $R:[0,1] \to (0,\infty]$: \begin{align} \limsup_{t \to 0} \sup_{|z-x| \leq R(t)} \mbb{P}^z(|X_t-z| \geq \varrho f(t)) &< 1 \label{up-eq6} \\ \sum_{n \geq 1} \mbb{P}^x \left( \sup_{u \leq s^n} |X_u-x| > R(s^n) \right) &< \infty \quad \text{for a.e.\ $s \in (0,1)$} \label{up-eq65} \\ \int_0^1\frac{1}{t} \mbb{P}^x(|X_t-x|> \kappa f(t)) \, dt &< \infty. \label{up-eq7} \end{align} Then \begin{equation*} \limsup_{t \to 0} \frac{1}{f(t)} \sup_{s \leq t} |X_s-x| \leq C(\varrho+\kappa) \quad \text{$\mbb{P}^x$-a.s.} \end{equation*} \end{proposition} \begin{remark} \label{up-2} \begin{enumerate}[wide, labelwidth=!, labelindent=0pt] \item\label{up-2-i} Since $f$ is non-decreasing, the constant $C$ in \eqref{up-eq5} is greater than or equal to $1$. If $f$ is regularly varying at zero, i.e.\ if the limit \begin{equation*} L(a) := \lim_{t \to 0} \frac{f(at)}{f(t)} \end{equation*} exists for all $a>0$, then $C=1$; this follows from the fact that, by Karamata's characterization theorem, see e.g.\ \cite{bingham}, the limit $L$ is of the form $L(a)=a^{\varrho}$ for some $\varrho \geq 0$. \item\label{up-2-ii} There is a trade-off between \eqref{up-eq6} and \eqref{up-eq65} regarding the choice of $R$; e.g.\ for $R\equiv \infty$, condition \eqref{up-eq65} is trivially satisfied but a uniform bound for $z \in \mbb{R}^d$ is needed in \eqref{up-eq6}. \item If $(X_t)_{t \geq 0}$ is a L\'evy-type process, then the maximal inequality \eqref{max-eq0} shows that \eqref{up-eq65} is automatically satisfied for $R(t)\equiv R$ constant.
\item\label{up-2-iii} It is not hard to check that \begin{equation} \int_{(0,1)} \frac{1}{t} \mbb{P}^x \left( \sup_{s \leq t} |X_s-x| > \kappa f(t) \right) \, dt < \infty \label{up-eq8} \end{equation} implies that \eqref{up-eq65} and \eqref{up-eq7} hold with $R(t)=\kappa f(t)$. \end{enumerate} \end{remark} For the proof of Proposition~\ref{up-1} we use the following Ottaviani-type inequality; for $R=\infty$ this is the classical Ottaviani inequality for Markov processes, see e.g.\ \cite[p.~420]{gikh1} or \cite[p.~125]{ito}. \begin{lemma} \label{up-3} Let $(X_t)_{t \geq 0}$ be a strong Markov process with \cadlag sample paths. Then \begin{equation*} \mbb{P}^x \left( \sup_{s \leq t} |X_s-x| > u+v \right) \leq \frac{1}{1-\alpha_{R,x}(t,u)} \left[ \mbb{P}^x(|X_t-x| > v) + \mbb{P}^x \left( \sup_{s \leq t} |X_s-x| >R \right) \right] \end{equation*} for all $x \in \mbb{R}^d$, $u,v>0$ and $R \in (0,\infty]$, where \begin{equation*} \alpha_{R,x}(t,u) := \sup_{s \leq t} \sup_{|z-x| \leq R} \mbb{P}^z(|X_s-z| \geq u). \end{equation*} \end{lemma} \begin{proof} Denote by $\tau_r^x$ the first exit time of $(X_t)_{t \geq 0}$ from the closed ball $\overline{B(x,r)}$ and set $\sigma := \tau_{u+v}^x$ for fixed $u,v>0$. We have \begin{align*} \mbb{P}^x \left( \sup_{s \leq t} |X_s-x| > u + v \right) &= \mbb{P}^x(\sigma \leq t) \\ &\leq \mbb{P}^x(|X_t-x| > v) + \mbb{P}^x \left( |X_t-x| \leq v, \sigma \leq t, \tau_R^x>t \right) + \mbb{P}^x(\tau_R^x \leq t). 
\end{align*} By the strong Markov property, \begin{align*} \mbb{P}^x \left( |X_t-x| \leq v, \sigma \leq t, \tau_R^x>t \right) &\leq \mbb{P}^x \left( |X_t-X_{\sigma}| \geq u, \sigma \leq t, |X_{\sigma}-x| \leq R \right) \\ &= \mbb{E}^x \left[ \I_{\{\sigma \leq t\}} \I_{\{|X_{\sigma}-x| \leq R\}} \mbb{P}^z(|X_{t-s}-z| \geq u) \big|_{z=X_{\sigma},s=\sigma} \right] \\ &\leq \alpha_{R,x}(t,u) \mbb{P}^x(\sigma \leq t), \end{align*} and so \begin{align*} \mbb{P}^x \left( \sup_{s \leq t} |X_s-x| > u + v \right) (1-\alpha_{R,x}(t,u)) &\leq \mbb{P}^x(|X_t-x| > v) + \mbb{P}^x(\tau_R^x \leq t) \\ &= \mbb{P}^x(|X_t-x| > v) + \mbb{P}^x \left( \sup_{s \leq t} |X_s-x|>R \right). \qedhere \end{align*} \end{proof} \begin{proof}[Proof of Proposition~\ref{up-1}] \primolist Claim: \begin{equation} \sum_{n \in \mbb{N}} \int_0^1 \mbb{P}^x(|X_{s^n}-x| > \kappa f(s^n)) \log \frac{1}{s} \, ds < \infty. \label{up-st1} \end{equation} By a change of variables, $t=s^n$, $dt=n t^{(n-1)/n} \, ds$, we find that \begin{align*} \int_0^1 \mbb{P}^x(|X_{s^n}-x| > \kappa f(s^n)) \log \frac{1}{s} \, ds = \int_0^1 \frac{1}{n^2} t^{1/n} \log \frac{1}{t} \mbb{P}^x(|X_t-x|> \kappa f(t)) \frac{1}{t} \, dt. \end{align*} As \begin{equation*} \sum_{n \in \mbb{N}} \frac{1}{n^2} t^{1/n} \log \frac{1}{t} \leq 2, \qquad t \in (0,1), \end{equation*} cf.\ Lemma~\ref{appendix-2}, the monotone convergence theorem yields \begin{equation*} \sum_{n \in \mbb{N}} \int_0^1 \mbb{P}^x(|X_{s^n}-x| > \kappa f(s^n)) \log \frac{1}{s} \, ds \leq 2 \int_0^1 \mbb{P}^x(|X_t-x|> \kappa f(t)) \frac{1}{t} \, dt, \end{equation*} and the latter integral is finite by \eqref{up-eq7}. This proves \eqref{up-st1}. In particular, there is a Lebesgue null set $N \subseteq (0,1)$ such that \begin{equation} \sum_{n \in \mbb{N}} \mbb{P}^x(|X_{s^n}-x|> \kappa f(s^n)) < \infty \fa s \in (0,1) \setminus N.
\label{up-st2} \end{equation} \secundolist Fix $\eps>0$, and take $s \in (0,1) \setminus N$ such that $\limsup_{n \to \infty} f(s^n)/f(s^{n+1}) \leq (C+\eps)$ for the constant $C$ defined in \eqref{up-eq5}. By Lemma~\ref{up-3}, we have \begin{align*} &\mbb{P}^x \left( \sup_{r \leq s^n} |X_r-x| > (\kappa+\varrho) f(s^n) \right) \\ &\quad\leq \frac{1}{1-\alpha_{R,x}(s^n,\varrho f(s^n))} \left[\mbb{P}^x(|X_{s^n}-x| > \kappa f(s^n)) + \mbb{P}^x \left( \sup_{u \leq s^n} |X_u-x| > R(s^n) \right) \right], \end{align*} where $\alpha_{R,x}(t,r) := \sup_{u \leq t} \sup_{|z-x| \leq R(t)} \mbb{P}^z(|X_u-z| \geq r)$. From \eqref{up-eq6} and the monotonicity of $f$, we see that there exists some $\delta \in (0,1)$ such that \begin{equation} \alpha_{R,x}(s^n,\varrho f(s^n)) \leq \sup_{r \leq s^n} \sup_{|z-x| \leq R(s^n)} \mbb{P}^z(|X_{r}-z| \geq \varrho f(r)) \leq 1-\delta \label{up-eq85} \end{equation} for $n \gg 1$. Thus, \begin{align*} \mbb{P}^x \left( \sup_{r \leq s^n} |X_r-x| > (\kappa+\varrho) f(s^n) \right) \leq \frac{1}{\delta} \left[\mbb{P}^x(|X_{s^n}-x| > \kappa f(s^n)) + \mbb{P}^x \left( \sup_{u \leq s^n} |X_u-x| > R(s^n) \right) \right], \end{align*} which implies, by \eqref{up-st2} and \eqref{up-eq65}, \begin{equation*} \sum_{n \in \mbb{N}} \mbb{P}^x \left( \sup_{r \leq s^n} |X_r-x| > (\kappa+\varrho) f(s^n) \right) < \infty. 
\end{equation*} Applying the Borel--Cantelli lemma gives \begin{equation*} \limsup_{n \to \infty} \frac{1}{f(s^n)} \sup_{r \leq s^n} |X_r-x| \leq \varrho+\kappa \quad \text{$\mbb{P}^x$-a.s.} \end{equation*} If $t \in [s^{n+1},s^n)$ for some $n \gg 1$, then \begin{equation*} \frac{1}{f(t)} \sup_{r \leq t} |X_r-x| \leq \frac{1}{f(s^{n+1})} \sup_{r \leq s^n} |X_r-x| = \frac{f(s^n)}{f(s^{n+1})} \frac{1}{f(s^n)} \sup_{r \leq s^n} |X_r-x|, \end{equation*} and so \begin{equation*} \limsup_{t \to 0} \frac{1}{f(t)} \sup_{r \leq t} |X_r-x| \leq \limsup_{n \to \infty} \left( \frac{f(s^n)}{f(s^{n+1})} \frac{1}{f(s^n)} \sup_{r \leq s^n} |X_r-x| \right) \leq (C+\eps)(\kappa+\varrho) \end{equation*} $\mbb{P}^x$-almost surely. Since $\eps>0$ is arbitrary, this finishes the proof. \end{proof} Combining Theorem~\ref{up-4} with the maximal inequality \eqref{max-eq0}, we get the following criterion; see \cite[Proposition 9]{knop14} for a closely related result. \begin{corollary} \label{up-9} Let $(X_t)_{t \geq 0}$ be a L\'evy-type process with symbol $q$. If $f:[0,1] \to [0,\infty)$ is a non-decreasing function such that \begin{equation} \int_0^1 \sup_{|z-x| \leq f(t)} \sup_{|\xi| \leq 1/(C f(t))} |q(z,\xi)| \, dt < \infty \label{up-eq25} \end{equation} for some constant $C>0$, then \begin{equation*} \lim_{t \to 0} \frac{1}{f(t)} \sup_{s \leq t} |X_s-x| = 0 \quad \text{$\mbb{P}^x$-a.s.} \end{equation*} \end{corollary} \begin{proof} If the integral in \eqref{up-eq25} is finite for some $C>0$, then it is finite for arbitrarily small $C>0$. \emph{Indeed:} Since $\xi \mapsto \sqrt{|q(z,\xi)|}$ is subadditive, we have \begin{align*} |q(z,2\xi)| =|q(z,\xi+\xi)| \leq \left(\sqrt{|q(z,\xi)|}+\sqrt{|q(z,\xi)|}\right)^2 = 4 |q(z,\xi)|, \end{align*} which implies that \begin{align*} \int_0^1 \sup_{|z-x| \leq f(t)} \sup_{|\xi| \leq 1/(2^{-n} C f(t))} |q(z,\xi)| \, dt \leq 4^n \int_0^1 \sup_{|z-x| \leq f(t)} \sup_{|\xi| \leq 1/(C f(t))} |q(z,\xi)| \, dt < \infty \end{align*} for every $n \in \mbb{N}$.
Applying the maximal inequality \eqref{max-eq0} and Theorem~\ref{up-4} yields \begin{align*} \limsup_{t \to 0} \frac{1}{f(t)} \sup_{s \leq t} |X_s-x| \leq 4 C 2^{-n} \quad \text{$\mbb{P}^x$-a.s.} \end{align*} Letting $n \to \infty$ proves the assertion. \end{proof} We conclude this section with the following result on the growth of sample paths of L\'evy-type processes. \begin{proposition} \label{up-11} Let $(X_t)_{t \geq 0}$ be a L\'evy-type process with symbol $q$. Then: \begin{enumerate} \item\label{up-11-i} $\limsup_{t \to 0} t^{-\kappa} \sup_{s \leq t} |X_s-x| = 0$ $\mbb{P}^x$-a.s.\ for any $\kappa < \tfrac{1}{2}$. \item\label{up-11-ii} If $x \in \mbb{R}^d$ is such that \begin{equation} \sup_{|z-x| \leq R} \sup_{|\xi| \leq r} |q(z,\xi)| \leq c \frac{r^2}{|\log r|^{1+\eps}}, \qquad r \gg 1, \label{up-eq31} \end{equation} for some constants $R>0$, $c>0$ and $\eps>0$, then \begin{equation*} \limsup_{t \to 0} \frac{1}{\sqrt{t \log |\log t|}} \sup_{s \leq t} |X_s-x| = 0 \quad \text{$\mbb{P}^x$-a.s.} \end{equation*} \end{enumerate} \end{proposition} Khintchine \cite{khin39} (see also \cite[Appendix, Theorem 4]{skorohod91}) showed that any L\'evy process without Gaussian component satisfies \begin{equation*} \limsup_{t \to 0} \frac{|X_t|}{\sqrt{t \log |\log t|}} = 0 \quad \text{a.s.} \end{equation*} One might expect that an analogous result holds for L\'evy-type processes, but this does not seem to follow from our results; note that \eqref{up-eq31} is stronger than assuming that $(X_t)_{t \geq 0}$ has no Gaussian component, cf.\ \cite[Lemma A.3]{ihke}. \begin{proof}[Proof of Proposition~\ref{up-11}] \primolist Because of the subadditivity of $\xi \mapsto \sqrt{|q(x,\xi)|}$, it holds that \begin{equation*} |q(x,\xi)| \leq \sup_{|\eta| \leq 1} |q(x,\eta)| (1+|\xi|^2), \end{equation*} cf.\ \cite[Theorem 6.2]{barca}, and so \begin{equation*} \sup_{|z-x| \leq 1} \sup_{|\xi| \leq r} |q(z,\xi)| \leq c' (1+r^2) \end{equation*} for some constant $c'>0$.
Hence, \begin{equation*} \int_0^1 \sup_{|z-x| \leq 1} \sup_{|\xi| \leq 1/(C t^{\kappa})} |q(z,\xi)| \, dt < \infty \end{equation*} for any $\kappa \in (0,\tfrac{1}{2})$ and $C>0$. By Corollary~\ref{up-9}, this proves \ref{up-11-i}. \secundolist Set $f(t) := \sqrt{t \log \log \frac{1}{t}}$; then, by \eqref{up-eq31}, \begin{equation*} \int_0^{1/e^e} \sup_{|z-x| \leq R} \sup_{|\xi| \leq 1/(C f(t))} |q(z,\xi)| \, dt \leq \frac{c}{C^2} \int_0^{1/e^e} \frac{1}{t \log \log \frac{1}{t}} \frac{1}{|\log \sqrt{C^2 t \log \log \frac{1}{t}}|^{1+\eps}} \, dt \end{equation*} for every $C>0$, and the latter integral is finite. Corollary~\ref{up-9} gives the assertion. \end{proof} In the remainder of the article, we prove the results announced in Section~\ref{main}. \section{Proofs of Theorem~\ref{main-3} and Theorem~\ref{main-5}} \label{p} For the proof of Theorem~\ref{main-3} and Theorem~\ref{main-5}, we need the following result, which links two of our integral conditions. \begin{lemma} \label{p-1} Let $\psi: \mbb{R}^d \to \mbb{C}$ be a continuous negative definite function with L\'evy triplet $(b,0,\nu)$, and set \begin{equation*} \psi^*(r) := \sup_{|\xi| \leq r} \re \psi(\xi), \qquad r>0. \end{equation*} If $f:[0,1] \to [0,\infty)$ is a non-decreasing function, then the implication \begin{equation*} \int_0^1 \nu(\{|y| \geq f(t)\}) \, dt < \infty \implies \int_0^1 \psi^* \left( \frac{1}{f(t)} \right) \, dt < \infty \end{equation*} holds in each of the following two cases. \begin{enumerate}[label*=\upshape (A\arabic*),ref=\upshape A\arabic*] \item\label{A1} The L\'evy measure $\nu$ satisfies \begin{equation*} \limsup_{r \to 0} \frac{\int_{|y| \leq r} |y|^2 \, \nu(dy)}{r^2 \nu(\{|y| > r\})} < \infty. \end{equation*} \item\label{A2} There is a constant $c>0$ such that \begin{equation*} \int_{r<f(t)} \frac{1}{f(t)^2} \, dt \leq c \frac{f^{-1}(r)}{r^2}, \qquad r \in (0,1).
\end{equation*} \end{enumerate} \end{lemma} \begin{remark} \label{p-3} \begin{enumerate}[wide, labelwidth=!, labelindent=0pt] \item\label{p-3-i} There are several equivalent formulations of condition \eqref{A1} in terms of so-called concentration functions. If we define, following \cite{pruitt81}, \begin{equation*} G(r) := \nu(\{|y| > r\}) \quad \text{and} \quad K(r) := \frac{1}{r^2} \int_{|y| \leq r} |y|^2 \, \nu(dy), \end{equation*} then \eqref{A1} can be stated equivalently in the following way: \begin{equation*} \limsup_{r \to 0} \frac{K(r)}{G(r)}<\infty. \end{equation*} Set \begin{equation*} h(r) := \int_{y \neq 0} \min\left\{1,\frac{|y|^2}{r^2}\right\} \, \nu(dy) = K(r)+G(r), \end{equation*} then we see that \begin{equation} \eqref{A1} \iff \liminf_{r \to 0} \frac{G(r)}{h(r)} >0. \label{p-eq4} \end{equation} Since \begin{equation*} \frac{1}{c} h(r) \leq \psi^* \left( \frac{1}{r} \right) \leq c h(r), \qquad r>0, \end{equation*} for some constant $c>0$, depending only on the dimension $d$, see e.g.\ \cite[Lemma 5.1 and p.~595]{rs-growth} or \cite[Lemma 4]{grz}, it follows that \begin{equation} \eqref{A1} \iff \liminf_{r \to 0} \frac{G(r)}{\psi^*(1/r)} = \liminf_{r \to 0} \frac{\nu(\{|y| > r\})}{\psi^*(1/r)}>0. \label{p-eq5} \end{equation} Moreover, there is a sufficient condition for \eqref{A1} in terms of the function \begin{equation*} I(r) := \int_{y \neq 0} \min\{r^2,|y|^2\} \, \nu(dy)=r^2 h(r); \end{equation*} namely, if \begin{equation} \liminf_{r \to 0} \frac{I(2r)}{I(r)}>1, \label{p-eq7} \end{equation} then \eqref{A1} holds. \emph{Indeed:} By Tonelli's theorem, \begin{align*} I(r) = \int_{y \neq 0} \int_0^{\min\{r^2,|y|^2\}} \, dz \, \nu(dy) &= \int_{\mbb{R}} \int_{y \neq 0} \I_{\{|z| < r^2\}} \I_{\{|z| < |y|^2\}} \, \nu(dy) \, dz \\ &= \int_0^{r^2} \nu(\{|y| > \sqrt{z}\}) \, dz. \end{align*} Thus, \begin{equation*} I(2r) = I(r) + \int_{r^2}^{4r^2} \nu(\{|y| > \sqrt{z}\}) \, dz \leq I(r) + 3r^2 \nu(\{ |y| > r\}). 
\end{equation*} Consequently, \eqref{p-eq7} implies that \begin{equation*} 1 < \liminf_{r \to 0} \frac{I(2r)}{I(r)} \leq 1+ \liminf_{r \to 0} \frac{3r^2 \nu(\{|y| > r\})}{I(r)}. \end{equation*} As $I(r)=r^2 h(r)$, this is equivalent to \eqref{p-eq4} and hence to \eqref{A1}. Let us mention that a condition similar to \eqref{p-eq7} appears in the monograph \cite{bertoin} by Bertoin in the study of upper functions for sample paths of subordinators. \item\label{p-3-ii} If $\nu$ is the L\'evy measure of a one-dimensional L\'evy process and $\nu(\{|y| \geq r\})$ grows faster than $|\log r|$ as $r \to 0$, then \eqref{A1} implies that (the law of) $X_t$ has a smooth density $p_t \in C_b^{\infty}(\mbb{R})$ for every $t>0$, see \cite[Section 5]{kallenberg} and also \cite[p.~127]{knop13}. \end{enumerate} \end{remark} \begin{proof}[Proof of Lemma~\ref{p-1}] \primolist Suppose that \eqref{A1} holds. Then there is some constant $c>0$ such that \begin{equation*} \psi^* \left( \frac{1}{r} \right) = \sup_{|\xi| \leq 1/r} \re \psi(\xi) \leq c \nu(\{|y| > r\}) \end{equation*} for $r>0$ small, cf.\ \eqref{p-eq5}. Since we may assume without loss of generality that $f(t) \to 0$ as $t \downarrow 0$, we find that \begin{equation*} \int_0^{\delta} \psi^* \left( \frac{1}{f(t)} \right) \, dt \leq c \int_0^{\delta} \nu(\{|y| > f(t)\}) \, dt \end{equation*} for some $\delta>0$. As $\psi$ is bounded on compact sets, this proves the assertion. \secundolist Suppose that \eqref{A2} holds. From \begin{equation*} \psi^*(r) = \sup_{|\xi| \leq r} \re \psi(\xi) \leq 2 \int_{y \neq 0} \min\{1,|y|^2 r^2\} \, \nu(dy), \end{equation*} we get \begin{align} \int_0^1 \psi^* \left( \frac{1}{f(t)} \right) \, dt &\leq 2 \int_0^1 \frac{1}{f(t)^2} \int_{|y| \leq f(t)} |y|^2 \, \nu(dy) \, dt + 2 \int_0^1 \nu(\{|y| > f(t)\}) \, dt.
\label{p-eq9} \end{align} The second integral on the right-hand side of \eqref{p-eq9} is finite by assumption, and so it suffices to show that the first integral \begin{equation*} J := \int_0^1 \frac{1}{f(t)^2} \int_{|y| \leq f(t)} |y|^2 \, \nu(dy) \, dt \end{equation*} is finite. By Tonelli's theorem and \eqref{A2}, we have \begin{align*} J = \int_{y \neq 0} |y|^2 \int_{f(t) \geq |y|} \frac{1}{f(t)^2} \, dt \, \nu(dy) \leq c \int_{y \neq 0} f^{-1}(|y|) \, \nu(dy). \end{align*} Since $f$ is non-decreasing, we find by another application of Tonelli's theorem that \begin{align*} J \leq c \int_{y \neq 0} \int_{t \leq f^{-1}(|y|)} \, dt \, \nu(dy) &\leq c \int_{y \neq 0} \int_{f(t) \leq |y|} \, dt \, \nu(dy) = c \int_{0}^1 \nu(\{|y| \geq f(t)\}) \, dt<\infty. \qedhere \end{align*} \end{proof} \begin{proof}[Proof of Theorem~\ref{main-3}] \ref{main-3-i} $\implies$ \ref{main-3-ii}: If $\int_0^1 \nu(\{|y| \geq c f(t)\}) \, dt < \infty$, then it follows from Lemma~\ref{p-1} and the sector condition that \begin{equation*} \int_0^1 \sup_{|\xi| \leq 1/(c f(t))} |\psi(\xi)| \, dt < \infty. \end{equation*} By the subadditivity of $\xi \mapsto \sqrt{|\psi(\xi)|}$, this implies that \begin{equation*} \int_0^1 \sup_{|\xi| \leq 1/(2^{-n} c f(t))} |\psi(\xi)| \, dt < \infty \end{equation*} for all $n \in \mbb{N}$, see the proof of Corollary~\ref{up-9}. Since the integral expression is monotone w.r.t.\ $c$, we conclude that \begin{equation*} \int_0^1 \sup_{|\xi| \leq 1/( c f(t))} |\psi(\xi)| \, dt < \infty \fa c>0. \end{equation*} \ref{main-3-ii} $\implies$ \ref{main-3-iii}: This is clear from the maximal inequality, cf.\ Proposition~\ref{max-0}. \par \ref{main-3-iii}$\iff$\ref{main-3-iv}: The implication $\ref{main-3-iii}\implies\ref{main-3-iv}$ is obvious.
The other direction is immediate from Etemadi's inequality, see e.g.\ \cite[Theorem 22.5]{billingsley} or \cite[Theorem 7.6]{gut}, which shows that \begin{equation*} \mbb{P} \left( \sup_{s \leq t} |X_s| \geq 3r \right) \leq 3 \mbb{P}(|X_t| \geq r), \qquad r>0,\;t>0. \end{equation*} \ref{main-3-iii} $\implies$ \ref{main-3-v}: This is immediate from Theorem~\ref{up-4}; note that the supremum over $z$ in \eqref{up-eq3} is redundant because L\'evy processes are homogeneous in space. \par \ref{main-3-v}$\implies$\ref{main-3-vi}$\implies$\ref{main-3-vii}: Obvious. \par \ref{main-3-vii}$\implies$\ref{main-3-i}: In dimension $d=1$, this follows from \cite[Proposition 4.2]{bertoin08}. The following reasoning works in any dimension $d \geq 1$. By Blumenthal's 0-1 law, there exists a constant $C>0$ such that \begin{equation} \limsup_{t \to 0} \frac{1}{f(t)} |X_t| \leq \frac{C}{2} \quad \text{almost surely.} \label{p-eq13} \end{equation} Suppose that $\int_0^1 \nu(\{|y| \geq 2C f(t)\}) \, dt$ is infinite. As $f$ is non-decreasing, comparing the integral with the corresponding series yields \begin{equation} \sum_{n =2}^{\infty} \nu(\{|y| \geq 2C f(1/n)\}) \left( \frac{1}{n-1}-\frac{1}{n}\right)=\infty. \label{p-eq14} \end{equation} The random variables \begin{equation*} N_{s,t}^{(r)} := \sharp \{u \in (s,t]; |\Delta X_u| \geq r\}, \qquad 0\leq s<t, \, r>0, \end{equation*} are Poisson distributed with parameter $(t-s) \nu(\{|y| \geq r\})$, and so $Y_n := N_{1/(n+1),1/n}^{(2Cf(1/n))}$ are Poisson distributed with parameter $\lambda_n := \nu(\{|y| \geq 2Cf(1/n)\}) \left( \frac{1}{n}-\frac{1}{n+1} \right)$.
Using the elementary estimate $1-e^{-x} \geq x/(1+x)$, we get \begin{equation*} \sum_{n \in \mbb{N}} \mbb{P}(Y_n \geq 1) = \sum_{n \in \mbb{N}} \left( 1- e^{-\lambda_n} \right) \geq \sum_{n \in \mbb{N}} \frac{\lambda_n}{1+\lambda_n} \geq \frac{1}{2} \sum_{n \in \mbb{N}} \min\left\{\lambda_n,1 \right\} = \infty; \end{equation*} here we use that \eqref{p-eq14} implies $\sum_{n \in \mbb{N}} \lambda_n = \infty$ because $\frac{1}{n-1}-\frac{1}{n} \approx \frac{1}{n^2} \approx \frac{1}{n}-\frac{1}{n+1}$ for $n \gg 1$. Since the random variables $Y_n$, $n \in \mbb{N}$, are independent, the Borel--Cantelli lemma shows that the event $\{Y_n \geq 1$ infinitely often$\}$ has probability $1$. Thus, with probability $1$ there are infinitely many $n \in \mbb{N}$ such that $|\Delta X_u| \geq 2C f(1/n)$ for some $u \in [\frac{1}{n+1},\frac{1}{n}]$. Since either $|X_u| \geq C f(1/n) \geq Cf(u)$ or $|X_{u-}| \geq Cf(1/n) \geq Cf(u-)$ for any such $u \in [\frac{1}{n+1},\frac{1}{n}]$, we conclude that \begin{equation*} \limsup_{t \to 0} \frac{1}{f(t)} |X_t| \geq C \quad \text{almost surely,} \end{equation*} which contradicts \eqref{p-eq13}. Hence, $\int_0^1 \nu(\{|y| \geq 2Cf(t)\}) \, dt < \infty$. See (the proof of) Theorem~\ref{main-9} for an alternative reasoning. \par The random variables $\limsup_{t \to 0} \frac{1}{f(t)} |X_t|$ and $\limsup_{t \to 0} \frac{1}{f(t)} \sup_{s \leq t} |X_s|$ are $\mathcal{F}_{0+}$-measurable, and therefore Blumenthal's 0-1 law shows that the events in \ref{main-3-v}--\ref{main-3-vii} have probability $0$ or $1$. Consequently, ``almost surely'' may be replaced by ``with positive probability'' in each of the statements. \end{proof} \begin{proof}[Proof of Theorem~\ref{main-5}] \ref{main-5-i}$\implies$\ref{main-5-ii}: Without loss of generality, $f(t) \to 0=f(0)$ as $t \downarrow 0$; otherwise the assertion is immediate from the local boundedness of the symbol, cf.\ \eqref{def-eq13}.
It follows from \eqref{A1'} that \begin{equation*} \liminf_{r \to 0} \inf_{|z-x| \leq R} \frac{\nu(z,\{|y|>r\})}{\sup_{|\xi| \leq 1/r} \re q(z,\xi)} >0, \end{equation*} see Remark~\ref{p-3}\ref{p-3-i}. Since the sector condition holds (with a constant not depending on $z \in \overline{B(x,R)}$), we find that \begin{equation*} \liminf_{r \to 0} \inf_{|z-x| \leq R} \frac{\nu(z,\{|y|>r\})}{\sup_{|\xi| \leq 1/r} |q(z,\xi)|} >0, \end{equation*} i.e.\ there are constants $K>0$ and $\delta>0$ such that \begin{equation*} \sup_{|\xi| \leq 1/r} |q(z,\xi)| \leq K \nu(z,\{|y|>r\}), \qquad z \in \overline{B(x,R)}, \end{equation*} for $r \leq \delta$. As $f(t) \to 0$ as $t \downarrow 0$, this implies \begin{equation*} \sup_{|z-x| \leq f(t)} \sup_{|\xi| \leq 1/(cf(t))} |q(z,\xi)| \leq K \sup_{|z-x| \leq f(t)} \nu(z,\{|y| > c f(t)\}) \end{equation*} for $t>0$ small. Integrating with respect to $t$ and using the local boundedness of $q$, we conclude that \begin{equation*} \int_0^1 \sup_{|z-x| \leq f(t)} \sup_{|\xi| \leq 1/(cf(t))} |q(z,\xi)| \, dt < \infty. \end{equation*} \ref{main-5-ii}$\implies$\ref{main-5-iii}: If the integral in \ref{main-5-ii} is finite for some $\eps>0$, then it is finite for all $\eps>0$; this follows from the subadditivity of $\xi \mapsto \sqrt{|q(z,\xi)|}$, see the proof of Corollary~\ref{up-9}. The implication \ref{main-5-ii}$\implies$\ref{main-5-iii} is now immediate from the maximal inequality \eqref{max-eq0}. \par \ref{main-5-iii}$\implies$\ref{main-5-iv}: cf.\ Theorem~\ref{up-4}. \end{proof} \section{Proof of the converse and the lower growth bounds} \label{conv} In this section, we present the proofs of Proposition~\ref{main-7}, Proposition~\ref{main-8} and Theorem~\ref{main-9}. 
\begin{proof}[Proof of Proposition~\ref{main-7}] \primolist If \begin{equation*} \int_0^1 \sup_{|z-x| \leq f(t)} \mbb{P}^z \left( \sup_{s \leq t} |X_s-z| \geq f(t) \right) \frac{1}{t} \, dt < \infty, \end{equation*} then Theorem~\ref{up-4} shows that $\limsup_{t \to 0} \frac{1}{f(t)} \sup_{s \leq t} |X_s-x|\leq 4$ $\mbb{P}^x$-almost surely. Consequently, $\mbb{P}^x(A_k) \to 1$ for $A_k := \{\forall t \leq 1/k\::\: \frac{1}{f(t)} \sup_{s \leq t} |X_s-x| < 5\}$. Hence, \begin{equation*} \sup_{t \leq 1/k} \mbb{P}^x \left( \sup_{s \leq t} |X_s-x|\geq 5 f(t)\right) \leq \mbb{P}^x(A_k^c) \xrightarrow[k \to \infty]{} 0. \end{equation*} By Corollary~\ref{max-5}, this implies \begin{equation*} \mbb{P}^x \left( \sup_{s \leq t} |X_s-x|\geq 5 f(t)\right) \geq \frac{1}{2} t G(x,10f(t)), \qquad t \leq \frac{1}{k}, \end{equation*} for sufficiently large $k$, where $G(x,r) := \inf_{|z-x| \leq r} \nu(z,\{|y| > r\})$. Dividing both sides by $t$ and integrating over $t \in (0,1)$ yields $\int_0^1 G(x,10f(t)) \, dt < \infty$, which proves \ref{main-7-i}. \secundolist Suppose that \begin{equation*} \int_0^1 G(x,C f(t)) \, dt < \infty \end{equation*} for some $C>0$ and $G(x,r)$ as in \primo, and assume that \eqref{A1'} holds for some $R>0$. It follows from Remark~\ref{p-3} that there is some constant $\gamma>0$ such that \begin{equation*} \liminf_{r \to 0} \inf_{|z-x| \leq R} \frac{\nu(z,\{|y| > r\})}{\sup_{|\xi| \leq 1/r} \re q(z,\xi)} \geq \gamma. \end{equation*} Thus, \begin{equation*} \inf_{|z-x| \leq Cf(t)} \sup_{|\xi| \leq 1/(Cf(t))} \re q(z,\xi) \leq \frac{1}{\gamma} G(x,C f(t)) \end{equation*} for $t>0$ small. Since the symbol $q$ is bounded on compact sets, integration with respect to $t$ gives \begin{equation*} \int_0^1 \inf_{|z-x| \leq C f(t)} \sup_{|\xi| \leq 1/(Cf(t))} \re q(z,\xi) \, dt < \infty.
\end{equation*} Because of the subadditivity of the mapping $\xi \mapsto \sqrt{\re q(z,\xi)}$, we may replace $1/(Cf(t))$ by $c/f(t)$ for any $c>0$, compare the proof of Corollary~\ref{up-9}. \end{proof} \begin{proof}[Proof of Proposition~\ref{main-8}] Let $f:[0,1] \to [0,\infty)$ be such that \begin{equation} \limsup_{t \to 0} t \inf_{|z-x| \leq R f(t)} \sup_{|\xi| \leq 1/(Cf(t))} \re q(z,\xi)= \infty, \label{conv-eq9} \end{equation} for all $R \geq 1$ and some constant $C=C(R)>0$. \primolist Claim: \eqref{conv-eq9} holds for any $C>0$. \emph{Indeed}: Clearly, it suffices to show that \eqref{conv-eq9} holds with $C$ replaced by $C 2^n$, $n \in \mbb{N}$. Because of the subadditivity of $\xi \mapsto \sqrt{\re q(z,\xi)}$, we have $\re q(z,2\xi) \leq 4 \re q(z,\xi)$ for all $\xi,z \in \mbb{R}^d$, implying \begin{equation*} \sup_{|\xi| \leq r} \re q(z,\xi) \geq \frac{1}{4} \sup_{|\xi| \leq 2r} \re q(z,\xi) \geq \ldots \geq \frac{1}{4^n} \sup_{|\xi| \leq 2^n r} \re q(z,\xi) \end{equation*} for all $r>0$. Using this estimate for $r=1/(2^n C f(t))$, we see that \eqref{conv-eq9} holds with $C$ replaced by $C2^n$. \secundolist The idea for this part of the proof is from \cite{rs-growth}. For fixed $R \geq 1$, pick $(t_k)_{k \in \mbb{N}} \subseteq (0,1)$ with $t_k \downarrow 0$ and \begin{equation*} \lim_{k \to \infty} t_k \inf_{|z-x| \leq R f(t_k)} \sup_{|\xi| \leq 1/(Rf(t_k))} \re q(z,\xi) = \infty. \end{equation*} Then the maximal inequality \eqref{max-eq4} shows that \begin{equation*} \mbb{P}^x \left( \sup_{s \leq t_k} |X_s-x| < R f(t_k) \right) \xrightarrow[k \to \infty]{} 0, \end{equation*} and so, by Fatou's lemma, \begin{align*} \mbb{P}^x \left( \limsup_{k \to \infty} \left\{ \sup_{s \leq t_k} |X_s-x| \geq R f(t_k) \right\} \right) &\geq \limsup_{k \to \infty} \mbb{P}^x \left( \sup_{s \leq t_k} |X_s-x| \geq R f(t_k) \right) \\ &= 1- \liminf_{k \to \infty} \mbb{P}^x \left( \sup_{s \leq t_k} |X_s-x|< R f(t_k) \right) \\ &= 1.
\end{align*} Consequently, there is a measurable set $\Omega_0$ with $\mbb{P}^x(\Omega_0)=1$ such that $\sup_{s \leq t_k} |X_s(\omega)-x| \geq R f(t_k)$ infinitely often for every $\omega \in \Omega_0$. In particular, \begin{equation*} \limsup_{k \to \infty} \frac{1}{f(t_k)} \sup_{s \leq t_k} |X_s(\omega)-x| \geq R, \qquad \omega \in \Omega_0. \end{equation*} \tertiolist Now assume additionally that $f$ is non-decreasing. For $\omega \in \Omega_0$, let $s_k=s_k(\omega) \in [0,t_k]$ be such that \begin{equation*} |X_{s_k}(\omega)-x| \geq \frac{1}{2} \sup_{s \leq t_k} |X_s(\omega)-x|. \end{equation*} By the monotonicity, we have $f(s_k) \leq f(t_k)$, and so \begin{equation*} \limsup_{t \to 0} \frac{1}{f(t)} |X_t(\omega)-x| \geq \limsup_{k \to \infty} \frac{1}{f(s_k)} |X_{s_k}(\omega)-x| \geq \frac{1}{2} \limsup_{k \to \infty} \frac{1}{f(t_k)} \sup_{s \leq t_k} |X_s(\omega)-x| \geq \frac{R}{2}. \end{equation*} As $R \geq 1$ is arbitrary, this proves \ref{main-8-i}. \quartolist It remains to prove \ref{main-8-ii}. To this end, we show that if $f$ is regularly varying at zero, i.e.\ \begin{equation*} \exists \beta>0 \, \, \forall \lambda>0\::\: \lim_{t \to 0} \frac{f(\lambda t)}{f(t)}=\lambda^{\beta}, \end{equation*} then \eqref{conv-eq9} for $R=1$ implies \eqref{conv-eq9} for all $R \geq 1$. The desired lower bound for the growth of the sample paths then follows from the first part of this proof. Let $C>0$ be such that \eqref{conv-eq9} holds with $R=1$. As we have seen in \primo, it follows that \eqref{conv-eq9} holds with $R=1$ for any $C>0$. Since $f$ is regularly varying at zero, there is $\lambda>0$ such that $f(\lambda t)/f(t) \geq R$ for $t>0$ small.
Thus, \begin{align*} \limsup_{t \to 0} t \inf_{|z-x| \leq R f(t)} \sup_{|\xi| \leq 1/(Cf(t))} \re q(z,\xi) &\geq \limsup_{t \to 0} t \inf_{|z-x| \leq f(\lambda t)} \sup_{|\xi| \leq 1/(Cf(t))} \re q(z,\xi) \\ &= \frac{1}{\lambda} \limsup_{t \to 0} t \inf_{|z-x| \leq f(t)} \sup_{|\xi| \leq 1/(Cf(t/\lambda))} \re q(z,\xi). \end{align*} Using once more that $f$ is regularly varying, we find that $f(t/\lambda) \geq \frac{1}{2} \frac{1}{\lambda^{\beta}} f(t)=:\gamma f(t)$ for $t>0$ small. Hence, \begin{equation*} \limsup_{t \to 0} t \inf_{|z-x| \leq R f(t)} \sup_{|\xi| \leq 1/(Cf(t))} \re q(z,\xi) \geq \frac{1}{\lambda} \limsup_{t \to 0} t \inf_{|z-x| \leq f(t)} \sup_{|\xi| \leq 1/(\gamma Cf(t))} \re q(z,\xi)=\infty. \qedhere \end{equation*} \end{proof} The key for the proof of our final main result, Theorem~\ref{main-9}, is the following proposition. \begin{proposition} \label{conv-3} Let $(X_t)_{t \geq 0}$ be a L\'evy-type process with characteristics $(b(x),0,\nu(x,dy))$ and symbol $q$. Let $f:[0,1] \to [0,\infty)$ be a non-decreasing function. 
If \begin{equation} \limsup_{n \to \infty} \frac{1}{n^2} \sup_{|z-x| \leq 3 f(1/n)} \sup_{|\xi| \leq c/f(1/n)} |q(z,\xi)|<1 \label{conv-eq11} \end{equation} and \begin{equation} \int_0^1 \inf_{|z-x| \leq 5 f(t)} \nu(z,\{|y| >5 f(t)\}) \, dt = \infty, \label{conv-eq12} \end{equation} for some $c>0$ and $x \in \mbb{R}^d$, then \begin{equation*} \limsup_{t \to 0} \frac{1}{f(t)} |X_t-x| \geq 1 \quad \text{$\mbb{P}^x$-a.s.} \end{equation*} \end{proposition} \begin{remark} \label{conv-4} \begin{enumerate}[wide, labelwidth=!, labelindent=0pt] \item\label{conv-4-i} Replacing $f$ by $C\cdot f$ for $C>0$, we obtain immediately a sufficient condition for \begin{equation*} \limsup_{t \to 0} \frac{1}{f(t)} |X_t-x| \geq C \quad \text{$\mbb{P}^x$-a.s.} \end{equation*} \item\label{conv-4-ii} By the local boundedness of $q$, there is a finite constant $c=c(R,x)$ such that $|q(z,\xi)| \leq c(1+|\xi|^2)$ for all $\xi \in \mbb{R}^d$ and $|z-x| \leq R$, cf.\ \eqref{def-eq13}. Thus, $\liminf_{t \downarrow 0} f(t)/t = \infty$ is a sufficient condition for \eqref{conv-eq11}; let us mention that this growth condition on $f$ also appears in the study of upper functions for sample paths of L\'evy processes, cf.\ \cite{savov09}. More generally, if $\sup_{|z-x| \leq R} |q(z,\xi)| \leq c (1+|\xi|^{\alpha})$ for some $\alpha \in (0,2]$, then \eqref{conv-eq11} holds for any function $f$ satisfying $\liminf_{t \downarrow 0} f(t)/t^{2/\alpha}= \infty$. \end{enumerate} \end{remark} \begin{proof}[Proof of Proposition~\ref{conv-3}] Let $x \in \mbb{R}^d$ and $c>0$ be such that \eqref{conv-eq11} and \eqref{conv-eq12} hold, and set \begin{equation*} G(x,r) := \inf_{|z-x| \leq r} \nu(z,\{|y| > r\}). \end{equation*} Using the subadditivity of $\xi \mapsto \sqrt{|q(z,\xi)|}$, we see that \eqref{conv-eq11} actually holds for \emph{any} $c>0$. 
\par \primolist By the monotonicity of $f$ and $r \mapsto G(x,r)$, it follows from \eqref{conv-eq12} that \begin{equation*} \sum_{n =2}^{\infty} \left(\frac{1}{n-1}-\frac{1}{n} \right) G(x,5f(1/n)) \geq \int_0^1 G(x,5f(t)) \, dt = \infty, \end{equation*} and so \begin{equation} \sum_{n \in \mbb{N}} \frac{1}{n^2} G(x,5f(1/n)) = \infty. \label{conv-eq14} \end{equation} Moreover, we note that \eqref{conv-eq12} implies that $f(t) \to 0$ as $t \downarrow 0$. \secundolist Denote by $(\mathcal{F}_t)_{t \geq 0}$ the canonical filtration of $(X_t)_{t \geq 0}$. We claim that \begin{equation} \sum_{n \in \mbb{N}} \mbb{E}^x(\I_{A_n} \mid \mathcal{F}_{1/(n+1)}) = \infty \quad \text{$\mbb{P}^x$-a.s.} \label{conv-eq16} \end{equation} for \begin{equation*} A_n := \bigg\{ \frac{1}{f(1/n)} \sup_{\frac{1}{n+1} \leq r < \frac{1}{n}} |X_r-x| \geq 1 \bigg\}. \end{equation*} To prove \eqref{conv-eq16} we fix $n \in \mbb{N}$ and note that, by the Markov property, \begin{equation*} \mbb{E}^x(\I_{A_n} \mid \mathcal{F}_{1/(n+1)}) = u(X_{1/(n+1)}) \end{equation*} where \begin{equation*} u(z) := \mbb{P}^z \bigg( \sup_{r \leq \frac{1}{n(n+1)}} |X_r-x| \geq f(1/n) \bigg), \qquad z \in \mbb{R}^d. \end{equation*} We need a lower bound for the mapping $u$. If $z \notin B(x,f(1/n))$, then $|X_0-x| \geq f(1/n)$ $\mbb{P}^z$-a.s., which gives $u(z)=1$. Next we consider the case $z \in B(x,f(1/n))$. By the triangle inequality, \begin{equation*} u(z) \geq \mbb{P}^z \bigg( \sup_{r \leq \frac{1}{n(n+1)}} |X_r-z| \geq 2f(1/n) \bigg)=:U(z). \end{equation*} The maximal inequality \eqref{max-eq0} shows that \begin{equation*} U(z) \leq c' \frac{1}{n(n+1)} \sup_{|z-y| \leq 2f(1/n)} \sup_{|\xi| \leq 1/(2f(1/n))} |q(y,\xi)| \end{equation*} for some absolute constant $c'>0$. Since $|z-x| \leq f(1/n)$ and $\sqrt{|q(y,\cdot)|}$ is subadditive, we get \begin{equation*} U(z) \leq 4c' \frac{1}{n (n+1)} \sup_{|y-x| \leq 3 f(1/n)} \sup_{|\xi| \leq 1/f(1/n)} |q(y,\xi)|, \end{equation*} see the proof of Corollary~\ref{up-9}.
Thus, by \eqref{conv-eq11}, $U(z) \leq 1-\eps$ for $n \gg 1$ and some $\eps \in (0,1)$. Applying Corollary~\ref{max-5} and using $|z-x| \leq f(1/n)$, we find that \begin{equation*} u(z) \geq U(z) \geq \eps \frac{1}{n(n+1)} G(z,4f(1/n)) \geq \eps \frac{1}{n(n+1)} G(x,5f(1/n)), \qquad z \in B(x,f(1/n)), \end{equation*} for $n \gg 1$. In summary, \begin{equation*} \mbb{E}^x(\I_{A_n} \mid \mathcal{F}_{1/(n+1)}) \geq \min \left\{\eps \frac{1}{n(n+1)} G(x,5f(1/n)),1 \right\} \end{equation*} for $n \gg 1$. Thus, by \eqref{conv-eq14}, $\sum_{n \in \mbb{N}} \mbb{E}^x(\I_{A_n} \mid \mc{F}_{1/(n+1)}) = \infty$ $\mbb{P}^x$-a.s. \tertiolist By the conditional Borel--Cantelli lemma for backward filtrations, cf.\ Proposition~\ref{appendix-1}, the almost sure divergence of the series implies that \begin{equation*} \mbb{P}^x \left( \limsup_{n \to \infty} A_n \right) = 1, \end{equation*} and so there is a measurable set $\tilde{\Omega}$ with $\mbb{P}^x(\tilde{\Omega})=1$ such that \begin{equation*} \forall \omega \in \tilde{\Omega} \, \forall n \gg 1 \, \exists t_n=t_n(\omega) \in \left[ \frac{1}{n+1},\frac{1}{n} \right)\::\: \frac{1}{f(1/n)} |X_{t_n}(\omega)-x| \geq 1. \end{equation*} Using the monotonicity of $f$, we conclude that \begin{equation*} \limsup_{t \to 0} \frac{1}{f(t)} |X_t(\omega)-x| \geq \limsup_{n \to \infty} \frac{1}{f(t_n)} |X_{t_n}(\omega)-x| \geq 1, \qquad \omega \in \tilde{\Omega}. \qedhere \end{equation*} \end{proof} \begin{proof}[Proof of Theorem~\ref{main-9}] First we prove \ref{main-9-i}. Let $f\geq0$ be non-decreasing and $c>0$ such that $\int_0^1 \inf_{|z-x| \leq cf(t)} \nu(z,\{|y|>cf(t)\}) \, dt = \infty$. We consider separately the cases where \ref{C1} resp.\ \ref{C2} holds. \primolist Assume that \ref{C1} holds.
If for every $R \geq 1$ the limit superior \begin{equation} \limsup_{t \to 0} t \inf_{|z-x| \leq R f(t)} \sup_{|\xi| \leq 1/(Cf(t))} \re q(z,\xi) \label{conv-eq20} \end{equation} is infinite for some constant $C=C(R)$, then Proposition~\ref{main-8} yields \begin{equation*} \limsup_{t \to 0} \frac{1}{f(t)} |X_t-x| = \infty \quad \text{$\mbb{P}^x$-a.s.} \end{equation*} On the other hand, if \eqref{conv-eq20} is finite for some $R \geq 1$ and all $C>0$, then \begin{align*} \limsup_{t \to 0} t^2 \sup_{|z-x| \leq C f(t)} \sup_{|\xi| \leq 1/f(t)} \re q(z,\xi) &\leq c' \limsup_{t \to 0} t \frac{\sup_{|z-x| \leq C f(t)} \sup_{|\xi| \leq 1/f(t)} |q(z,\xi)|}{\inf_{|z-x| \leq R f(t)} \sup_{|\xi| \leq 1/f(t)} |q(z,\xi)|} \end{align*} for some constant $c'>0$, and the latter limit is zero by \ref{C1}. Hence, \eqref{conv-eq11} holds. Applying Proposition~\ref{conv-3} proves the assertion. \secundolist If \ref{C2} holds, then the assertion is immediate from Proposition~\ref{conv-3} and Remark~\ref{conv-4}\ref{conv-4-ii}. \par It remains to show \ref{main-9-ii}. To this end, assume additionally that \eqref{A1'} holds for some $R>0$ and let $f$ be non-decreasing with $\int_0^1 \inf_{|z-x| \leq c f(t)} \sup_{|\xi| \leq 1/f(t)} |q(z,\xi)| \, dt = \infty$ for some $c>0$. Then Proposition~\ref{main-7}\ref{main-7-ii} yields $\int_0^1 \inf_{|z-x| \leq cf(t)} \nu(z;\{|y|>c f(t)\}) \, dt = \infty$, and applying \ref{main-9-i} finishes the proof. \end{proof}
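\par To illustrate part \ref{main-9-i} (a standard example, using only the well-known tail behaviour $\nu(z;\{|y|>r\}) = c_{d,\alpha} r^{-\alpha}$ of the isotropic $\alpha$-stable Lévy measure): taking $f(t) = t^{1/\alpha}$ gives \begin{equation*} \int_0^1 \inf_{|z-x| \leq cf(t)} \nu(z;\{|y|>cf(t)\}) \, dt = \frac{c_{d,\alpha}}{c^\alpha} \int_0^1 \frac{dt}{t} = \infty, \end{equation*} and so $\limsup_{t \to 0} t^{-1/\alpha}|X_t-x| = \infty$ $\mbb{P}^x$-a.s.\ for the isotropic $\alpha$-stable process, recovering a classical fact.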
https://arxiv.org/abs/2102.06541
Upper functions for sample paths of Lévy(-type) processes
We study the small-time asymptotics of sample paths of Lévy processes and Lévy-type processes. Namely, we investigate under which conditions the limit $$\limsup_{t \to 0} \frac{1}{f(t)} |X_t-X_0|$$ is finite resp.\ infinite with probability $1$. We establish integral criteria in terms of the infinitesimal characteristics and the symbol of the process. Our results apply to a wide class of processes, including solutions to Lévy-driven SDEs and stable-like processes. For the particular case of Lévy processes, we recover and extend earlier results from the literature. Moreover, we present a new maximal inequality for Lévy-type processes, which is of independent interest.
https://arxiv.org/abs/2007.10530
McKay graphs for alternating and classical groups
Let $G$ be a finite group, and $\alpha$ a nontrivial character of $G$. The McKay graph $\mathcal{M}(G,\alpha)$ has the irreducible characters of $G$ as vertices, with an edge from $\chi_1$ to $\chi_2$ if $\chi_2$ is a constituent of $\alpha\chi_1$. We study the diameters of McKay graphs for finite simple groups $G$. For alternating groups, we prove a conjecture made in [LST]: there is an absolute constant $C$ such that $\hbox{diam}\,{\mathcal M}(G,\alpha) \le C\frac{\log |\mathsf{A}_n|}{\log \alpha(1)}$ for all nontrivial irreducible characters $\alpha$ of $\mathsf{A}_n$. Also for classical groups of symplectic or orthogonal type of rank $r$, we establish a linear upper bound $Cr$ on the diameters of all nontrivial McKay graphs.
\section{Introduction} For a finite group $G$, and a (complex) character $\alpha$ of $G$, the {\it McKay graph} $\mathcal {M}(G,\alpha)$ is defined to be the directed graph with vertex set ${\rm Irr}(G)$, there being an edge from $\chi_1$ to $\chi_2$ if and only if $\chi_2$ is a constituent of $\alpha\chi_1$. A classical result of Burnside and Brauer \cite{Br} shows that $\mathcal {M}(G,\alpha)$ is connected if and only if $\alpha$ is faithful. The study of McKay graphs for finite simple groups $G$ was initiated in \cite{LST}, with a particular focus on the diameters of these graphs. Theorem 2 of \cite{LST} establishes a quadratic upper bound $\hbox{diam}\,{\mathcal M}(G,\alpha) \le Cr^2$ for any simple group $G$ of Lie type of rank $r$ and any nontrivial $\alpha \in {\rm Irr}(G)$. Notice that the smallest (resp. largest) nontrivial irreducible character degrees of $G$ are at most $q^{cr}$ (resp. at least $q^{c'r^2}$), where $c,c'$ are constants, and hence the maximal diameter of a McKay graph ${\mathcal M}(G,\alpha)$ is at least a linear function of $r$. Theorem 3 of \cite{LST} implies a linear upper bound on these diameters for the classical groups $G=\mathrm {PSL}_n^\epsilon(q)$, provided $q$ is large compared to $n$. Our first main result establishes a linear upper bound for the remaining classical groups. \begin{theorem}\label{main1} Let $G$ be a quasisimple classical group $\mathrm {Sp}_n(q)$ or $\Omega_n^\epsilon(q)$, and let $\alpha$ be a nontrivial irreducible character of $G$. Then $\hbox{diam}\,{\mathcal M}(G,\alpha) \le Cn$, where $C=16$ or $32$, respectively. \end{theorem} An obvious lower bound for $\hbox{diam}\,{\mathcal M}(G,\alpha)$ (when $\alpha(1)>1$) is given by $\frac{\log \mathsf{b}(G)}{\log \alpha(1)}$, where $\mathsf{b}(G)$ is the largest degree of an irreducible character of $G$. In \cite[Conjecture 1]{LST} we conjectured that for simple groups $G$, this bound is tight up to a multiplicative constant. 
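For completeness, the lower bound arises from a standard degree count: if $\chi \in {\rm Irr}(G)$ lies at distance $k$ from the trivial character in ${\mathcal M}(G,\alpha)$, then $\chi$ is a constituent of $\alpha^k$, so $$\chi(1) \le \alpha^k(1) = \alpha(1)^k, \quad \text{that is,} \quad k \ge \frac{\log \chi(1)}{\log \alpha(1)};$$ taking $\chi$ of degree $\mathsf{b}(G)$ gives $\hbox{diam}\,{\mathcal M}(G,\alpha) \ge \frac{\log \mathsf{b}(G)}{\log \alpha(1)}$.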
This conjecture was proved in \cite[Theorem 3]{LST} for the simple groups $\mathrm {PSL}_n^\epsilon(q)$, provided $q$ is large compared to $n$. Recently it has also been established for the symmetric groups in \cite{S}. Deducing it for the alternating groups is not entirely trivial, and this is the content of our next result. \begin{theorem}\label{main2} There is an effective absolute constant $C$ such that, for all $n \geq 5$ and for all nontrivial irreducible characters $\alpha$ of $G:=\mathsf{A}_n$, $$\hbox{diam}\,{\mathcal M}(G,\alpha) \le C\frac{\log |G|}{\log \alpha(1)}.$$ \end{theorem} In our final result, we consider covering ${\rm Irr}(G)$ by products of arbitrary irreducible characters, instead of powers of a fixed character. This idea was suggested by Gill \cite{G}, inspired by an analogous result of Rodgers and Saxl \cite{RS} for conjugacy classes in $G=\mathrm {SL}_n(q)$: this states that if a collection of conjugacy classes of $G$ satisfies the condition that the product of the class sizes is at least $|G|^{12}$, then the product of the classes is equal to $G$. As a piece of notation, for characters $\chi_1,\ldots,\chi_l$ of $G$, we write $\chi_1\chi_2\cdots \chi_l \supseteq \mathrm{Irr}(G)$ to mean that every irreducible character of $G$ appears as a constituent of $\chi_1\chi_2\cdots \chi_l$. Also, let $g: \mathbb N\to \mathbb N$ be the function appearing in \cite[Theorem 3]{LST}. \begin{theorem}\label{rodsax} \begin{itemize} \item[{\rm (i)}] Let $G$ be a simple group of Lie type of rank $r$, let $l \ge 489r^2$, and let $\chi_1,\ldots,\chi_l \in \mathrm{Irr}(G) \setminus 1_G$. Then $\chi_1\chi_2\cdots \chi_l \supseteq \mathrm{Irr}(G)$. \vspace{2mm} \item[{\rm (ii)}] Let $G = \mathrm {PSL}_n^\epsilon(q)$ with $q>g(n)$, let $l \in \mathbb N$, and let $\chi_1,\ldots,\chi_l \in \mathrm{Irr}(G)$ satisfy $\prod_{i=1}^l \chi_i(1) > |G|^{10}$. Then $\chi_1\chi_2\cdots \chi_l \supseteq \mathrm{Irr}(G)$. 
\end{itemize} \end{theorem} Gill \cite{G} has conjectured that part (ii) of the theorem holds for all simple groups (with the constant 10 possibly replaced by a different constant). As a stepping stone in the spirit of the linear bound given by Theorem \ref{main1}, let us pose the following more modest conjecture. \begin{conj}\label{rsax} There is an absolute constant $C>0$ such that the following holds. Let $G={\rm Cl}_n(q)$, a classical simple group of dimension $n$, or $\mathsf{A}_n$, an alternating group of degree $n\ge 5$. Let $l \ge Cn$, and let $\chi_1,\ldots,\chi_l \in \mathrm{Irr}(G) \setminus 1_G$. Then $\chi_1\chi_2\cdots \chi_l \supseteq \mathrm{Irr}(G)$. \end{conj} See Proposition \ref{rs2-an} for a partial result on Conjecture \ref{rsax} in the case of $\mathsf{A}_n$. The layout of the paper is as follows. Section \ref{prel1} contains a substantial amount of character theory for symplectic and orthogonal groups that is required for the proof of Theorem \ref{main1}, which is completed in Section \ref{pfth1}. The remaining sections \ref{pfth2} and \ref{pfth3} contain the proofs of Theorems \ref{main2} and \ref{rodsax}, respectively. \section{Some character theory for symplectic and orthogonal groups}\label{prel1} Let $V = \mathbb F_q^d$ be endowed with a non-degenerate form, either alternating or quadratic of type $\epsilon = \pm$, and let $G$ denote the derived subgroup of the full isometry group of the form. Assume that $G$ is quasisimple, so that $G = \mathrm {Sp}(V) = \mathrm {Sp}_d(q)$ or $\Omega(V) = \Omega^\epsilon_d(q)$. This section contains a detailed study of some specific irreducible characters $\chi$ of $G$ -- namely, the constituents of the permutation character $\mathrm{Ind}^G_{[P,P]}(1_{[P,P]})$, where $P$ is the maximal parabolic subgroup of $G$ stabilizing a singular 1-space. 
Two of the main results of the section are Propositions \ref{rat-so21} and \ref{rat-sp-so22}, which give upper bounds for the character ratios $|\chi(g)/\chi(1)|$ for $g\in G$. These will be used in Section \ref{pfth1} to prove Theorem \ref{main1}. \subsection{Reduction lemmas}\label{red} It is well known that the permutation action of $G$ on the set of singular $1$-spaces of $V$ is primitive of rank $3$, and thus its character is $\rho = 1_G + \alpha + \beta$, with $\alpha, \beta \in \mathrm{Irr}(G)$. Let $P=QL$ denote a point stabilizer (a parabolic subgroup) in this action, with $Q$ the unipotent radical and $L$ a Levi subgroup. Aside from $\alpha,\beta$, we also need to consider the remaining non-principal irreducible constituents $\bar{g}_i$ of $\mathrm{Ind}^G_{[P,P]}(1_{[P,P]})$. Let $\mathsf{St}$ denote the Steinberg character of $G$. \begin{lem}\label{mc-r1} The following statements hold. \begin{enumerate}[\rm(i)] \item Suppose that every semisimple element $s \in G$ is real. Then for any $\chi \in \mathrm{Irr}(G)$ and $k \in \mathbb N$, $\chi^{2k}$ contains $\mathsf{St}$ if and only if $(\chi\overline\chi)^k$ contains $\mathsf{St}$. \item All semisimple elements in $G$ are real, if $G = \mathrm {Sp}_{2n}(q)$, $\Omega_{2n+1}(q)$, or $\Omega^\epsilon_{4n}(q)$. \end{enumerate} \end{lem} \begin{proof} (i) Recall that $\mathsf{St}(g) = 0$ if $g \in G$ is not semisimple. Furthermore, $\chi(g) = \overline\chi(g)$ if $g \in G$ is semisimple, by hypothesis. Hence $$\begin{aligned} ~[\chi^{2k},\mathsf{St}]_G & = \frac{1}{|G|}\sum_{g \in G}\chi(g)^{2k}\overline{\mathsf{St}}(g)\\ & = \frac{1}{|G|}\sum_{g \in G,~g\mbox{ {\tiny semisimple}}}\chi(g)^{2k}\overline{\mathsf{St}}(g)\\ & = \frac{1}{|G|}\sum_{g \in G,~g\mbox{ {\tiny semisimple}}}\chi(g)^{k}\overline\chi(g)^k\overline{\mathsf{St}}(g)\\ & = \frac{1}{|G|}\sum_{g \in G}\chi(g)^k\overline\chi(g)^{k}\overline{\mathsf{St}}(g) = [(\chi\overline\chi)^k,\mathsf{St}]_G, \end{aligned}$$ and the claim follows. 
\smallskip (ii) This is well known, see e.g. \cite[Proposition 3.1]{TZ2}. \end{proof} \begin{lem}\label{mc-r2} Let $G = \mathrm {Sp}(V) = \mathrm {Sp}_{2n}(q)$ with $n \geq 3$. Suppose $C \in \mathbb N$ is such that both $\alpha^C$ and $\beta^C$ contain $\mathsf{St}$. Then for any $1_G \neq \chi \in \mathrm{Irr}(G)$, $\chi^{2C}$ contains $\mathsf{St}$. \end{lem} \begin{proof} In the aforementioned rank $3$ permutation action of $G$ with character $\rho = 1_G+\alpha+\beta$, a point stabilizer $P$ is the normalizer $\mathbf{N}_G(Z)$ of some long-root subgroup $Z$. Since $n \geq 3$, $Z$ has a nonzero fixed point on any $\mathbb C G$-module affording $\chi$ by \cite[Theorem 1.6]{T}. It follows that $\chi|_P$ is reducible, and so \begin{equation}\label{eq:mc1} 2 \leq [\chi|_P,\chi|_P]_P = [\chi\overline\chi,\mathrm{Ind}^G_P(1_P)]_G = [\chi\overline\chi,\rho]_G. \end{equation} As $[\chi\overline\chi,1_G]_G = 1$, $\chi\overline\chi$ contains either $\alpha$ or $\beta$, whence $(\chi\overline\chi)^C$ contains $\mathsf{St}$. Applying Lemma \ref{mc-r1}, we conclude that $\chi^{2C}$ contains $\mathsf{St}$. \end{proof} \begin{lem}\label{mc-r3} Let $G = \Omega(V) = \Omega^\epsilon_{n}(q)$ with $n \geq 5$. Suppose $C \in \mathbb N$ is such that both $\alpha^C$ and $\beta^C$ contain $\mathsf{St}$. Consider any $1_G \neq \chi \in \mathrm{Irr}(G)$, and suppose in addition that either $n \not\equiv 2 (\bmod\ 4)$, or $\chi = \overline\chi$. Then $\chi^{4C}$ contains $\mathsf{St}$. \end{lem} \begin{proof} Again we consider a point stabilizer $P=QL$ in the aforementioned rank $3$ permutation action of $G$ with character $\rho = 1_G+\alpha+\beta$. Note that $Q$ is elementary abelian, $[L,L] \cong \Omega^\epsilon_{n-2}(q)$, and we can identify $\mathrm{Irr}(Q)$ with the natural module $\mathbb F_q^{n-2}$ for $[L,L]$. In particular, any $[L,L]$-orbit on $\mathrm{Irr}(Q) \smallsetminus \{1_Q\}$ has length at least $2$. 
It is also clear that some irreducible constituent of $\chi|_Q$ is non-principal, since $\mathrm{Ker}(\chi) \leq \mathbf{Z}(G)$ and $Q \not\leq \mathbf{Z}(G)$. It follows that $\chi|_Q$ is reducible, and so $$2 \leq [\chi|_Q,\chi|_Q]_Q = [(\chi\overline\chi)|_Q,1_Q]_Q.$$ Since $[\chi\overline\chi,1_G]_G = 1$, at least one non-principal irreducible constituent $\theta$ of $\chi\overline\chi$ contains $1_Q$ on restriction to $Q$. But $P$ normalizes $Q$, so the latter implies that $\theta|_P$ is reducible. Thus \eqref{eq:mc1} holds for $\theta$ instead of $\chi$. Arguing as in the proof of Lemma \ref{mc-r2}, we obtain that $\theta\overline\theta$ contains either $\alpha$ or $\beta$, whence $(\chi\overline\chi)^2$ contains either $\alpha$ or $\beta$. It follows that $(\chi\overline\chi)^{2C}$ contains $\mathsf{St}$, and we are done if $\chi = \overline\chi$. Applying Lemma \ref{mc-r1}, we also have that $\chi^{4C}$ contains $\mathsf{St}$ in the case $n \not\equiv 2 (\bmod\ 4)$. \end{proof} \begin{lem}\label{mc-r4} Let $G = \Omega(V) = \Omega^\epsilon_{n}(q)$ with $n \geq 10$ and $n \equiv 2 (\bmod\ 4)$. Suppose $C \in \mathbb N$ is such that each of $\alpha^C$, $\beta^C$, and $\bar{g}_i^C$ contains $\mathsf{St}$. Then for any $\chi \in \mathrm{Irr}(G)$ with $\chi \neq \overline\chi$, $\chi^{4C}$ contains $\mathsf{St}$. \end{lem} \begin{proof} (i) As noted in the proof of Lemma \ref{mc-r3}, $Q$ is elementary abelian, $[L,L] \cong \Omega^\epsilon_{n-2}(q)$, and we can identify $\mathrm{Irr}(Q)$ with the natural module $\mathbb F_q^{n-2}$ for $[L,L]$. Since $n-2 \geq 8$, it is straightforward to check that any $[L,L]$-orbit on nonzero vectors of $\mathbb F_q^{n-2}$ contains a vector $v$ and also $-v$. Thus, any $[L,L]$-orbit on $\mathrm{Irr}(Q) \smallsetminus \{1_Q\}$ contains a character $\lambda$ and also its complex conjugate $\overline\lambda$. As noted in the proof of Lemma \ref{mc-r3}, $Q \not\leq \mathrm{Ker}(\chi)$. 
Thus we may assume that $\chi|_Q$ contains $\lambda$ and also $\overline\lambda$. It follows that $1 \leq [\chi^2|_Q,1_Q]_Q$. Since $[\chi^2,1_G]_G = [\chi,\overline\chi]_G = 0$, at least one non-principal irreducible constituent $\theta$ of $\chi^2$ contains $1_Q$ on restriction to $Q$. In particular, $\theta|_P$ is reducible, since $P$ normalizes $Q$, and \eqref{eq:mc1} holds for $\theta$ instead of $\chi$; so the argument in the proof of Lemma \ref{mc-r2} shows that $\theta\overline\theta$ contains $\alpha$ or $\beta$. If, moreover, $\theta = \overline\theta$, then we conclude that $\theta^2$ contains $\alpha$ or $\beta$. \smallskip (ii) Now consider the case $\theta \neq \overline\theta$, and let $\theta$ be afforded by a $\mathbb C G$-module $U$. As shown in (i), the $Q$-fixed point subspace $U^Q$ on $U$ is nonzero, and $L$ acts on $U^Q$. Recall that $4|(n-2)$ and $n-2 \geq 8$. Now, if $\epsilon \neq +$ or $q \not\equiv 3 (\bmod\ 4)$, then all irreducible characters of $[L,L] \cong \Omega^\epsilon_{n-2}(q)$ are real-valued, and so the $[L,L]$-module $U^Q$ contains an irreducible submodule $W \cong W^*$. Consider the case $\epsilon = +$ and $q \equiv 3 (\bmod\ 4)$, and let $P = \mathrm{Stab}_G(\langle u \rangle_{\mathbb F_q})$ for a singular vector $0 \neq u \in V$. We can consider $P$ inside $\tilde P:=\mathrm{Stab}_{\mathrm {SO}(V)}(\langle u \rangle_{\mathbb F_q})=Q\tilde L$, and find another singular vector $u' \in V$ such that $V = V_1 \oplus V_2$, with $V_1 = \langle u,u' \rangle_{\mathbb F_q}$, $V_2 = V_1^{\perp}$, and $[L,L] = \Omega(V_2)$. Since $q \equiv 3 (\bmod\ 4)$, $t:=-1_{V_1} \in \mathrm {SO}(V_1) \smallsetminus \Omega(V_1)$. Choosing some $t' \in \mathrm {SO}(V_2) \smallsetminus \Omega(V_2)$, we see that $tt' \in \tilde L \cap \Omega(V) = L$, and $L_1 := \langle [L,L],tt' \rangle \cong \mathrm {SO}^+_{n-2}(q)$. By \cite{Gow}, all irreducible characters of $L_1$ are real-valued, and so the $L_1$-module $U^Q$ contains an irreducible submodule $W \cong W^*$. 
We have shown that the $[L,L]$-module $U^Q$ contains a nonzero submodule $W \cong W^*$. We can also inflate $W$ to a nonzero self-dual module over $[P,P] = Q[L,L]$. It follows that $(U \otimes_{\mathbb C} U)|_{[P,P]}$ contains $W \otimes_{\mathbb C} W^*$, which certainly contains the trivial submodule. Thus, $\theta^2|_{[P,P]}$ contains the principal character $1_{[P,P]}$, and so \begin{equation}\label{eq:mc2} 1 \leq [\theta^2,\mathrm{Ind}^G_{[P,P]}(1_{[P,P]})]_G. \end{equation} Recall we are assuming that $0 = [\theta,\overline\theta]_G = [\theta^2,1_G]_G$. Hence \eqref{eq:mc2} implies that $\theta^2$ contains at least one of $\alpha$, $\beta$, or $\bar{g}_i$. \smallskip (iii) We have shown that, in all cases, $\theta^2$ contains at least one of $\alpha$, $\beta$, or $\bar{g}_i$. As $\chi^2$ contains $\theta$, we see that $\chi^4$ contains at least one of $\alpha$, $\beta$, or $\bar{g}_i$, and so $\chi^{4C}$ contains $\mathsf{St}$. \end{proof} \subsection{Classical groups in characteristic $2$} In this subsection we study certain characters of $\tilde G = \mathrm {Sp}(V) = \mathrm {Sp}_{2n}(q)$ and $G = \Omega(V)=\Omega^\epsilon_{2n}(q)$, where $n \geq 5$ and $2|q$. These results will be used subsequently and are also of independent interest. First we endow $V$ with a non-degenerate alternating form $(\cdot,\cdot)$, and work with its isometry group $\tilde G = \mathrm {Sp}(V)$. We will consider the following irreducible characters of $\tilde G$: $\bullet$ the $q/2+1$ {\it linear-Weil} characters: $\rho^1_n$ of degree $(q^n+1)(q^n-q)/2(q-1)$, $\rho^2_n$ of degree $(q^n-1)(q^n+q)/2(q-1)$, and $\tau^i_n$ of degree $(q^{2n}-1)/(q-1)$, $1 \leq i \leq (q-2)/2$, and $\bullet$ the $q/2+2$ {\it unitary-Weil} characters: $\alpha_n$ of degree $(q^n-1)(q^n-q)/2(q+1)$, $\beta_n$ of degree $(q^n+1)(q^n+q)/2(q+1)$, and $\zeta^i_n$ of degree $(q^{2n}-1)/(q+1)$, $1 \leq i \leq q/2$;\\ see \cite[Table 1]{GT}. 
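As a quick consistency check of these degrees, note that $$\rho^1_n(1) + \rho^2_n(1) = \frac{(q^n+1)(q^n-q)+(q^n-1)(q^n+q)}{2(q-1)} = \frac{q^{2n}-q}{q-1} = \frac{q^{2n}-1}{q-1}-1,$$ one less than the number of $1$-spaces of $V$, as required for the decomposition of the rank $3$ permutation character below.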
Then \begin{equation}\label{eq:dec11} \rho:=1_{\tilde G}+\rho^1_n+\rho^2_n \end{equation} is the rank $3$ permutation character of $\tilde G$ acting on the set of $1$-spaces of $V$. The following statement is well known, see e.g. formula (1) of \cite{FST}: \begin{lem}\label{quad1} For $\epsilon = \pm$, the character $\pi^\epsilon$ of the permutation action of $\tilde G$ on quadratic forms of type $\epsilon$ associated to $(\cdot,\cdot)$ is given as follows: $$ \pi^+ = 1_{\tilde G} + \rho^2_n + \sum^{(q-2)/2}_{i=1}\tau^i_n,~~~ \pi^- = 1_{\tilde G} + \rho^1_n + \sum^{(q-2)/2}_{i=1}\tau^i_n.$$ \end{lem} Given any $g \in \mathrm {GL}(V)$, let $$d(x,g):= \dim_{\overline{\mathbb F}_q}\mathrm{Ker}\bigl(g-x \cdot 1_{V \otimes_{\mathbb F_q}\overline{\mathbb F}_q}\bigr)$$ for any $x \in \overline{\mathbb F}_q^\times$, and define the {\it support} of $g$ to be \[ \mathsf{supp}(g) := \dim(V)-\max_{x \in \overline{\mathbb F}_q^\times}d(x,g). \] Set $$d(g):= \dim(V)-\mathsf{supp}(g).$$ \begin{prop}\label{rat-sp2} Let $\tilde G = \mathrm {Sp}_{2n}(q)$ with $n \geq 3$ and $2|q$, and let $g \in \tilde G$ have support $s=\mathsf{supp}(g)$. If $\chi \in \{\rho^1_n,\rho^2_n\}$, then $$\frac{|\chi(g)|}{\chi(1)} \leq \frac{1}{q^{s/3}}.$$ \end{prop} \begin{proof} The statement is obvious if $s=0$. Suppose $s=1$. It is easy to see that in this case $g$ is a transvection, and so $$\rho^1_n(g) = \rho^2_n(g) = \frac{q^{2n-1}-q}{2(q-1)}$$ by \cite[Corollary 7.8]{GT}, and the statement follows. From now on we may assume $s \geq 2$. Observe that $d:=\max_{x \in \mathbb F_q^\times}d(x,g) \leq d(g) = 2n-s$. 
Hence, $$0 \leq \rho(g) = \sum_{x \in \mathbb F_q^\times}\frac{q^{d(x,g)}-1}{q-1} \leq q^d-1,$$ and so \eqref{eq:dec11} implies $$|\rho^1_n(g)+\rho^2_n(g)| \leq q^d-1.$$ On the other hand, since $\pi^\pm(g) \geq 0$ and $\pi^++\pi^-$ is just the permutation character of $\tilde G$ acting on $V$, Lemma \ref{quad1} implies that $$|\rho^1_n(g)-\rho^2_n(g)| = |\pi^+(g)-\pi^-(g)| \leq \pi^+(g)+\pi^-(g) = q^{d(1,g)} \leq q^d.$$ It follows for any $i \in \{1,2\}$ that $$|\rho^i_n(g)| \leq \bigl(|\rho^1_n(g)+\rho^2_n(g)|+|\rho^1_n(g)-\rho^2_n(g)|\bigr)/2 < q^d \leq q^{2n-s}.$$ Since $n \geq 3$, we can also check that $$\rho^i_n(1) \geq \frac{(q^n+1)(q^n-q)}{2(q-1)} > q^{2n-4/3}.$$ Thus $|\chi(g)|/\chi(1) < q^{4/3-s} \leq q^{-s/3}$, as stated. \end{proof} Next we endow $V = \mathbb F_q^{2n}$ with a non-degenerate quadratic form $\mathsf{Q}$ of type $\epsilon = \pm$ associated to the alternating form $(\cdot,\cdot)$. Choose a Witt basis $(e_1,\ldots,e_n,f_1, \ldots, f_n)$ for $(\cdot,\cdot)$, such that $\mathsf{Q}(e_1)=\mathsf{Q}(f_1)=0$. We may assume that $P = \mathrm{Stab}_G(\langle e_1 \rangle_{\mathbb F_q}) = QL$, where $Q$ is elementary abelian of order $q^{2n-2}$, $L \cong \Omega^\epsilon_{2n-2}(q) \times C_{q-1}$, and $$[P,P] = \mathrm{Stab}_G(e_1)=Q \rtimes [L,L]$$ has index $(q^n-\epsilon)(q^{n-1}+\epsilon)$ in $G$. Also consider $H := \mathrm{Stab}_G(e_1+f_1)$. According to \cite[Theorem 1.3]{N}, $G$ has $q+1$ non-principal complex irreducible characters of degree at most $(q^n-\epsilon)(q^{n-1}+\epsilon)$, namely, $\alpha$ of degree $(q^n-\epsilon)(q^{n-1}+\epsilon q)/(q^2-1)$, $\beta$ of degree $(q^{2n}-q^2)/(q^2-1)$, $\bar{g}_i$ of degree $(q^n-\epsilon)(q^{n-1}+\epsilon)/(q-1)$, $1 \leq i \leq (q-2)/2$, and $\delta_j$ of degree $(q^n-\epsilon)(q^{n-1}-\epsilon)/(q+1)$, $1 \leq j \leq q/2$. \begin{prop}\label{dec-so2} Let $G = \Omega^\epsilon_{2n}(q)$ with $n \geq 5$ and $2|q$, and consider $P = \mathrm{Stab}_G(e_1)$ and $H = \mathrm{Stab}_G(e_1+f_1)$ as above. 
Then the following statements hold. \begin{enumerate}[\rm(i)] \item $\mathrm{Ind}^G_P(1_P) = 1_G + \alpha + \beta$. \item $\mathrm{Ind}^G_{[P,P]}(1_{[P,P]}) = 1_G +\alpha+\beta + 2\sum^{(q-2)/2}_{i=1}\bar{g}_i$. \item $\mathrm{Ind}^G_H(1_H) = 1_G +\beta + \sum^{(q-2)/2}_{i=1}\bar{g}_i+\sum^{q/2}_{j=1}\delta_j$. \end{enumerate} \end{prop} \begin{proof} (i) is well known. Next, $P/[P,P] \cong C_{q-1}$ has $q-1$ irreducible characters: $1_P$ and $(q-2)/2$ pairs of $\{\nu_i,\overline\nu_i\}$, $1 \leq i \leq (q-2)/2$. An application of Mackey's formula shows that $\mathrm{Ind}^G_P(\nu_i) = \mathrm{Ind}^G_P(\overline\nu_i)$ is irreducible for all $i$. Now using (i) we can write \begin{equation}\label{eq:dec1} \mathrm{Ind}^G_{[P,P]}(1_{[P,P]}) = \mathrm{Ind}^G_P\bigl( \mathrm{Ind}^P_{[P,P]}(1_{[P,P]}) \bigr) = 1_G+\alpha+\beta + 2\sum^{(q-2)/2}_{i=1}\mathrm{Ind}^G_P(\nu_i). \end{equation} On the other hand, note that $[P,P]$ has exactly $2q-1$ orbits on the set of nonzero singular vectors in $V$: $q-1$ orbits $\{xe_1\}$ with $x \in \mathbb F_q^\times$, one orbit $\{v \in e_1^\perp \smallsetminus \langle e_1 \rangle_{\mathbb F_q} \mid \mathsf{Q}(v)=0\}$, and $(q-1)$ orbits $\{yf_1 + v \mid v \in e_1^\perp, \mathsf{Q}(yf_1+v) =0\}$ with $y \in \mathbb F_q^\times$. Together with \eqref{eq:dec1}, this implies that all summands in the last decomposition in \eqref{eq:dec1} are pairwise distinct. Since $\bar{g}_i(1) = (q^n-\epsilon)(q^{n-1}+\epsilon)/(q-1) = \mathrm{Ind}^G_P(\nu_{i'})(1)$, each $\bar{g}_i$ equals some $\mathrm{Ind}^G_P(\nu_{i'})$; renumbering the $\nu_i$ if necessary, we may assume that $\mathrm{Ind}^G_P(\nu_i)=\bar{g}_i$, and (ii) follows. \smallskip For (iii), first note that $P$ has two orbits on the set $\mathcal {X} := \{ v \in V \mid \mathsf{Q}(v)=1\}$, namely, $\mathcal {X} \cap e_1^\perp$ and $\mathcal {X} \smallsetminus e_1^\perp$. 
Since $\mathrm{Ind}^G_H(1_H)$ is the character of the permutation action of $G$ on $\mathcal {X}$, we get \begin{equation}\label{eq:dec2} [\mathrm{Ind}^G_P(1_P),\mathrm{Ind}^G_H(1_H)]_G = 2. \end{equation} Next, $[P,P]$ has $q$ orbits on $\mathcal {X}$, namely, $\mathcal {X} \cap e_1^\perp$, and $\{yf_1+w \in \mathcal {X} \mid w \in e_1^\perp\}$ with $y \in \mathbb F_q^\times$. Thus \begin{equation}\label{eq:dec3} [\mathrm{Ind}^G_{[P,P]}(1_{[P,P]}),\mathrm{Ind}^G_H(1_H)]_G = q. \end{equation} Combining the results of (i), (ii), with \eqref{eq:dec2}, \eqref{eq:dec3}, and again using \cite[Theorem 1.3]{N}, we can write \begin{equation}\label{eq:dec4} \mathrm{Ind}^G_H(1_H) = 1_G + (a\alpha + b\beta) +\sum^{(q-2)/2}_{i=1}c_i\bar{g}_i + \sum^{q/2}_{j=1}d_j\delta_j, \end{equation} where $a,b,c_i,d_j \in \mathbb{Z}_{\geq 0}$, $a+b=1$, $\sum_ic_i = (q-2)/2$. \smallskip Let $\tau$ denote the character of the permutation action of $G$ on $V \smallsetminus \{0\}$, so that $$\tau = \mathrm{Ind}^G_{[P,P]}(1_{[P,P]}) + (q-1)\mathrm{Ind}^G_H(1_H).$$ Note that $G$ has $q^3+q^2-q$ orbits on $(V \smallsetminus \{0\}) \times (V \smallsetminus \{0\})$, namely, $q(q-1)$ orbits of $(u,xu)$, where $x \in \mathbb F_q^\times$ and $\mathsf{Q}(u) = y \in \mathbb F_q$, and $q^3$ orbits of $(u,v)$, where $u,v$ are linearly independent and $(\mathsf{Q}(u),(u,v),\mathsf{Q}(v)) = (x,y,z) \in \mathbb F_q^3$. In other words, $[\tau,\tau]_G = q^3+q^2-q$. Using (ii) and \eqref{eq:dec3}, we deduce that \begin{equation}\label{eq:dec5} [\mathrm{Ind}^G_H(1_H),\mathrm{Ind}^G_H(1_H)]_G = q+1. \end{equation} In particular, if $q=2$ then $\mathrm{Ind}^G_H(1_H)$ is the sum of $3$ pairwise distinct irreducible characters. By checking the degrees of $\alpha,\beta$ and $\delta_1$, (iii) immediately follows from \eqref{eq:dec4}. \smallskip Now we may assume $q=2^e \geq 4$. Let $\ell_+ = \ell(2^{ne}-1)$ denote a primitive prime divisor of $2^{ne}-1$, which exists by \cite{Zs}. 
Likewise, let $\ell_- = \ell(2^{2ne}-1)$ denote a primitive prime divisor of $2^{2ne}-1$. Then note that $\ell_\epsilon$ divides the degree of each of $\alpha$, $\bar{g}_i$, $\delta_j$, but neither $[G:H]-1$ nor $\beta(1)$. Hence \eqref{eq:dec4} implies that $(a,b)=(0,1)$. Comparing the degrees in \eqref{eq:dec4}, we also see that $\sum_jd_j = q/2$. Now $$q+1 = [\mathrm{Ind}^G_H(1_H),\mathrm{Ind}^G_H(1_H)]_G = 2 + \sum^{(q-2)/2}_{i=1}c_i^2 + \sum^{q/2}_{j=1}d_j^2 \geq 2 + \sum^{(q-2)/2}_{i=1}c_i + \sum^{q/2}_{j=1}d_j = 2 +\frac{q-2}{2}+\frac{q}{2},$$ yielding $c_i^2=c_i$, $d_j^2=d_j$, $c_i,d_j \in \{0,1\}$, and so $c_i = d_j = 1$, as desired. \end{proof} In the next statement, we embed $G = \Omega(V)$ in $\tilde G := \mathrm {Sp}(V)$ (the isometry group of the form $(\cdot,\cdot)$ on $V$). \begin{prop}\label{sp-so1} Let $n \geq 5$, $2|q$, and $\epsilon = \pm$. Then the characters $\rho^1_n$ and $\rho^2_n$ of $\mathrm {Sp}(V) \cong \mathrm {Sp}_{2n}(q)$ restrict to $G = \Omega(V) \cong \Omega^\epsilon_{2n}(q)$ as follows: $$\begin{array}{ll}(\rho^1_n)|_{\Omega^+_{2n}(q)} = \beta + \sum^{q/2}_{j=1}\delta_j, & (\rho^2_n)|_{\Omega^+_{2n}(q)} = 1+\alpha+\beta + \sum^{(q-2)/2}_{i=1}\bar{g}_i,\\ (\rho^1_n)|_{\Omega^-_{2n}(q)} = 1+\alpha+\beta + \sum^{(q-2)/2}_{i=1}\bar{g}_i, & (\rho^2_n)|_{\Omega^-_{2n}(q)} = \beta + \sum^{q/2}_{j=1}\delta_j. \end{array}$$ \end{prop} \begin{proof} Note by \eqref{eq:dec11} that $1_G + (\rho^1_n+\rho^2_n)|_G$ is just the character of the permutation action on the set of $1$-spaces of $V$. Hence, by Proposition \ref{dec-so2} we have \begin{equation}\label{eq:dec21} \bigl( \rho^1_n+\rho^2_n \bigr)|_G = \mathrm{Ind}^G_P(1_P) + \mathrm{Ind}^G_H(1_H) -1_G = 1_G +\alpha+2\beta + \sum^{(q-2)/2}_{i=1}\bar{g}_i + \sum^{q/2}_{j=1}\delta_j. 
\end{equation} Furthermore, Lemma \ref{quad1} implies by Frobenius' reciprocity that \begin{equation}\label{eq:dec22} \bigl(\rho^2_n\bigr)|_G \mbox { contains }1_G \mbox { when }\epsilon=+, \mbox{ and }\bigl(\rho^1_n\bigr)|_G \mbox { contains }1_G \mbox { when }\epsilon=-. \end{equation} \smallskip (i) First we consider the case $\epsilon = +$. If $(n,q) \neq (6,2)$, one can find a primitive prime divisor $\ell = \ell(2^{ne}-1)$, where $q = 2^e$. If $(n,q) = (6,2)$, then set $\ell = 7$. By its choice, $\ell$ divides the degrees of $\rho^2_n$, $\alpha$, $\bar{g}_i$, and $\delta_j$, but $\beta(1) \equiv \rho^1_n(1) \equiv -1 (\bmod\ \ell)$. Hence, \eqref{eq:dec21} and \eqref{eq:dec22} imply that $$\bigl(\rho^2_n\bigr)|_G = 1_G +\beta +x\alpha + \sum^{(q-2)/2}_{i=1}y_i\bar{g}_i + \sum^{q/2}_{j=1}z_j\delta_j,$$ where $x,y_i,z_j \in \{0,1\}$. Setting $y:=\sum^{(q-2)/2}_{i=1}y_i$ and $z:=\sum^{q/2}_{j=1}z_j$ and comparing the degrees, we get $$(1-x)(q^{n-1}+q)+(q^{n-1}+1)(q+1)((q-2)/2-y) = z(q^{n-1}-1)(q-1),$$ and so $q^{n-1}+1$ divides $(1-x+2z)(q-1)$. Note that $\gcd(q-1,q^{n-1}+1)=1$ and $0 \leq (1-x+2z)(q-1) \leq q^2-1 < q^{n-1}+1$. It follows that $x=1$, $z=0$, $y=(q-2)/2$, whence $y_i=1$ and $z_j=0$, as stated. \smallskip (ii) Now let $\epsilon = -$, and choose $\ell$ to be a primitive prime divisor $\ell(2^{2ne}-1)$. By its choice, $\ell$ divides the degrees of $\rho^1_n$, $\alpha$, $\bar{g}_i$, and $\delta_j$, but $\beta(1) \equiv \rho^2_n(1) \equiv -1 (\bmod\ \ell)$. Hence, \eqref{eq:dec21} and \eqref{eq:dec22} imply that $$\bigl(\rho^1_n\bigr)|_G = 1_G +\beta +x\alpha + \sum^{(q-2)/2}_{i=1}y_i\bar{g}_i + \sum^{q/2}_{j=1}z_j\delta_j,$$ where $x,y_i,z_j \in \{0,1\}$. Setting $y:=\sum^{(q-2)/2}_{i=1}y_i$ and $z:=\sum^{q/2}_{j=1}z_j$ and comparing the degrees, we get $$(1-x)(q^{n-1}-q)+(q^{n-1}-1)(q+1)((q-2)/2-y) = z(q^{n-1}+1)(q-1),$$ and so $(q^{n-1}-1)/(q-1)$ divides $1-x+2z$. 
Since $0 \leq 1-x+2z \leq q+1 < (q^{n-1}-1)/(q-1)$, it follows that $x=1$, $z=0$, $y=(q-2)/2$, whence $y_i=1$ and $z_j=0$, as stated. \end{proof} For the subsequent discussion, we recall the {\it quasi-determinant} $\kappa_\epsilon: \mathrm {O}_\epsilon \to \{-1,1\}$, where $\mathrm {O}_\epsilon:= \mathrm{GO}(V) \cong \mathrm{GO}^\epsilon_{2n}(q)$, defined via $$\kappa_\epsilon(g) := (-1)^{\dim_{\mathbb F_q}\mathrm{Ker}(g-1_V)}.$$ It is known, see e.g. \cite[Lemma 5.8(i)]{GT}, that $\kappa_\epsilon$ is a group homomorphism, with \begin{equation}\label{eq:kappa1} \mathrm{Ker}(\kappa_\epsilon) = \Omega_\epsilon:= \Omega(V) \cong \Omega^\epsilon_{2n}(q). \end{equation} Now we prove the ``unitary'' analogue of Lemma \ref{quad1}: \begin{lem}\label{quad2} For $n \geq 3$ and $2|q$, the following decompositions hold: $$\mathrm{Ind}^{\tilde G}_{\mathrm {O}_+}(\kappa_+) = \beta_n + \sum^{q/2}_{i=1}\zeta^i_n,~~~ \mathrm{Ind}^{\tilde G}_{\mathrm {O}_-}(\kappa_-) = \alpha_n + \sum^{q/2}_{i=1}\zeta^i_n.$$ \end{lem} \begin{proof} According to formulae (10) and (4)--(6) of \cite{GT}, \begin{equation}\label{eq:dec31} \mathrm{Ind}^{\tilde G}_{\mathrm {O}_+}(\kappa_+)+ \mathrm{Ind}^{\tilde G}_{\mathrm {O}_-}(\kappa_-) = \alpha_n+\beta_n + 2\sum^{q/2}_{i=1}\zeta^i_n. \end{equation} Hence we can write \begin{equation}\label{eq:dec32} \mathrm{Ind}^{\tilde G}_{\mathrm {O}_+}(\kappa_+) = x\alpha_n+y\beta_n + \sum^{q/2}_{i=1}z_i\zeta^i_n, \end{equation} where $x,y,z_i \in \mathbb{Z}_{\geq 0}$, $x,y \leq 1$ and $z_i \leq 2$. 
Note that, since $\pi^+= \mathrm{Ind}^{\tilde G}_{\mathrm {O}_+}(1_{\mathrm {O}_+})$, Lemma \ref{quad1} implies that $$|\mathrm {O}_+ \backslash \tilde G/\mathrm {O}_+| = \frac{q}{2}+1.$$ Next, by Mackey's formula we have $$[\mathrm{Ind}^{\tilde G}_{\mathrm {O}_+}(\kappa_+),\mathrm{Ind}^{\tilde G}_{\mathrm {O}_+}(\kappa_+)]_{\tilde G} = \sum_{\mathrm {O}_+t\mathrm {O}_+ \in \mathrm {O}_+ \backslash \tilde G/\mathrm {O}_+} [(\kappa_+)|_{\mathrm {O}_+ \cap t\mathrm {O}_+t^{-1}},(\kappa^t_+)|_{\mathrm {O}_+ \cap t\mathrm {O}_+t^{-1}}]_{\mathrm {O}_+ \cap t\mathrm {O}_+t^{-1}},$$ where $\kappa^t_+(x) = \kappa_+(x^t) := \kappa_+(t^{-1}xt)$ for any $x \in \mathrm {O}_+ \cap t\mathrm {O}_+t^{-1}$. For such an $x$, note that \begin{equation}\label{eq:dec321} \kappa_+(x) = 1 \Leftrightarrow 2 | \dim_{\mathbb F_q}\mathrm{Ker}(x-1_V) \Leftrightarrow 2 | \dim_{\mathbb F_q}\mathrm{Ker}(x^t-1_V) \Leftrightarrow (\kappa_+)^t(x) = 1, \end{equation} i.e. $\kappa_+(x) = \kappa^t_+(x)$. It follows that \begin{equation}\label{eq:dec33} x^2+y^2+\sum^{q/2}_{i=1}z_i^2= [\mathrm{Ind}^{\tilde G}_{\mathrm {O}_+}(\kappa_+),\mathrm{Ind}^{\tilde G}_{\mathrm {O}_+}(\kappa_+)]_{\tilde G} = |\mathrm {O}_+ \backslash \tilde G/\mathrm {O}_+| = \frac{q}{2}+1. \end{equation} On the other hand, equating the character degrees in \eqref{eq:dec32} we obtain \begin{equation}\label{eq:dec34} \frac{q^n(q^n+1)}{2} = x\frac{(q^n-1)(q^n-q)}{2(q+1)}+y\frac{(q^n+1)(q^n+q)}{2(q+1)}+\sum^{q/2}_{i=1}z_i \cdot \frac{q^{2n}-1}{q+1}. \end{equation} We claim that $x=0$. Indeed, if $(n,q) = (3,2)$, then \eqref{eq:dec34} implies that $3|x$, and so $x=0$ as $0 \leq x \leq 1$. Assume $(n,q) \neq (3,2)$. Then we can find a primitive prime divisor $\ell = \ell(2^{2ne}-1)$ for $q = 2^e$, and note from \eqref{eq:dec34} that $\ell|x$. Since $\ell > 2$ and $x \in \{0,1\}$, we again have $x=0$. Now if $y=0$, then \eqref{eq:dec34} implies that $q^n(q^n+1)/2$ is divisible by $(q^{2n}-1)/(q+1)$, a contradiction. 
Hence $y=1$, and from \eqref{eq:dec34} we obtain that $\sum^{q/2}_{i=1}z_i = q/2$. On the other hand, $\sum^{q/2}_{i=1}z_i^2 = q/2$ by \eqref{eq:dec33}. Thus $\sum^{q/2}_{i=1}(z_i-1)^2 = 0$, and so $z_i = 1$ for all $i$. Together with \eqref{eq:dec31}, this yields the two stated decompositions. \end{proof} \begin{prop}\label{sp-so2} Let $n \geq 5$, $2|q$, and $\epsilon = \pm$. Then the characters $\alpha_n$ and $\beta_n$ of $\mathrm {Sp}(V) \cong \mathrm {Sp}_{2n}(q)$ restrict to $G = \Omega(V) \cong \Omega^\epsilon_{2n}(q)$ as follows: $$\begin{array}{ll}(\alpha_n)|_{\Omega^+_{2n}(q)} = \sum^{q/2}_{j=1}\delta_j, & (\beta_n)|_{\Omega^+_{2n}(q)} = 1+\alpha + \sum^{(q-2)/2}_{i=1}\bar{g}_i,\\ (\alpha_n)|_{\Omega^-_{2n}(q)} = 1+\alpha + \sum^{(q-2)/2}_{i=1}\bar{g}_i, & (\beta_n)|_{\Omega^-_{2n}(q)} = \sum^{q/2}_{j=1}\delta_j. \end{array}$$ In particular, the following formula holds for the irreducible character $\beta$ of $G$ of degree $(q^{2n}-q^2)/(q^2-1)$: $$\bigl( (\rho^1_n+\rho^2_n)-(\alpha_n+\beta_n)\bigr)|_{\Omega^\epsilon_{2n}(q)} = 2\beta.$$ \end{prop} \begin{proof} By Mackey's formula, $$\bigl(\mathrm{Ind}^{\tilde G}_{\mathrm {O}_+}(\kappa_+)\bigr)|_G = \sum_{Gt\mathrm {O}_+\in G \backslash \tilde G/\mathrm {O}_+} \mathrm{Ind}^G_{G \cap t\mathrm {O}_+t^{-1}}\bigl((\kappa^t_+)|_{G \cap t\mathrm {O}_+t^{-1}}\bigr),$$ and similarly for $\pi^+=\mathrm{Ind}^{\tilde G}_{\mathrm {O}_+}(1_{\mathrm {O}_+})$. The argument in \eqref{eq:dec321} shows that $\kappa^t_+(x)=1$ for all $x \in G \cap t\mathrm {O}_+t^{-1}$, and so $\pi^+$ and $\mathrm{Ind}^{\tilde G}_{\mathrm {O}_+}(\kappa_+)$ agree on $G$. Similarly, $\pi^-$ and $\mathrm{Ind}^{\tilde G}_{\mathrm {O}_-}(\kappa_-)$ agree on $G$. It then follows from Lemmas \ref{quad1} and \ref{quad2} that \begin{equation}\label{eq:dec41} \bigl( \rho^2_n-\rho^1_n\bigr)|_G = \bigl( \pi^+-\pi^-\bigr)|_G = \bigl( \mathrm{Ind}^{\tilde G}_{\mathrm {O}_+}(\kappa_+)- \mathrm{Ind}^{\tilde G}_{\mathrm {O}_-}(\kappa_-) \bigr)|_G = \bigl( \beta_n-\alpha_n\bigr)|_G.
\end{equation} First assume that $\epsilon=+$. Then using Proposition \ref{sp-so1} we get $$\bigl( \beta_n-\alpha_n\bigr)|_G= 1_G+\alpha+\sum^{(q-2)/2}_{i=1}\bar{g}_i-\sum^{q/2}_{j=1}\delta_j,$$ i.e. $$\sum^{q/2}_{j=1}\delta_j+(\beta_n)|_G = 1_G +\alpha+\sum^{(q-2)/2}_{i=1}\bar{g}_i + (\alpha_n)|_G.$$ Aside from $(\alpha_n)|_G$ and $(\beta_n)|_G$, all the other characters in the above equality are irreducible and pairwise distinct. It follows that $(\alpha_n)|_G$ contains $\sum^{q/2}_{j=1}\delta_j$. Comparing the degrees, we see that $$(\alpha_n)|_G = \sum^{q/2}_{j=1}\delta_j,$$ which then implies that $$(\beta_n)|_G = 1_G +\alpha+\sum^{(q-2)/2}_{i=1}\bar{g}_i.$$ Now assume that $\epsilon=-$. Then again using Proposition \ref{sp-so1} and \eqref{eq:dec41} we get $$\bigl( \alpha_n-\beta_n\bigr)|_G= 1_G+\alpha+\sum^{(q-2)/2}_{i=1}\bar{g}_i-\sum^{q/2}_{j=1}\delta_j,$$ i.e. $$\sum^{q/2}_{j=1}\delta_j+(\alpha_n)|_G = 1_G +\alpha+\sum^{(q-2)/2}_{i=1}\bar{g}_i + (\beta_n)|_G.$$ Aside from $(\alpha_n)|_G$ and $(\beta_n)|_G$, all the other characters in the above equality are irreducible and pairwise distinct. It follows that $(\beta_n)|_G$ contains $\sum^{q/2}_{j=1}\delta_j$. Comparing the degrees, we see that $$(\beta_n)|_G = \sum^{q/2}_{j=1}\delta_j,$$ which then implies that $$(\alpha_n)|_G = 1_G +\alpha+\sum^{(q-2)/2}_{i=1}\bar{g}_i.$$ For both $\epsilon = \pm$, the last statement now follows from \eqref{eq:dec21}. \end{proof} Proposition \ref{sp-so2} leads to the following explicit formula for $\beta$, which we will show to hold for all special orthogonal groups in all characteristics and all dimensions, and which is of independent interest. In this result, we let $V = \mathbb F_q^n$ be a quadratic space, $L := \mathrm {SO}(V)$ if $2 \nmid q$, $L := \Omega(V)$ if $2|q$, and extend the action of $L$ on $V$ to $\tilde V := V \otimes_{\mathbb F_q}\mathbb F_{q^2}$, and we assume $2 \nmid q$ if $2 \nmid n$. 
Also, set $$\mu_{q-1}:= \mathbb F_q^\times,~~\mu_{q+1} := \{ x \in \mathbb F_{q^2}^\times \mid x^{q+1} = 1 \}.$$ If $2 \nmid q$, let $\chi_2^+$ be the unique linear character of order $2$ of $\mu_{q-1}$, and let $\chi_2^-$ be the unique linear character of order $2$ of $\mu_{q+1}$. \begin{thm}\label{beta-so2} Let $n \geq 10$, $\epsilon = \pm$, and let $q$ be any prime power. If $2|n$, let $\psi = \beta$ be the irreducible constituent of degree $(q^{n}-q^2)/(q^2-1)$ of the rank $3$ permutation character of $L = \Omega(V)$ when $2|q$, and of $L = \mathrm {SO}(V)$ when $2 \nmid q$, on the set of singular $1$-spaces of its natural module $V=\mathbb F_q^{n}$. If $2 \nmid qn$, let $\psi$ be the irreducible character of $L = \mathrm {SO}(V)$ of degree $(q^n-q)/(q^2-1)$ denoted by $D_{\mathsf{St}}$ in \cite[Proposition 5.7]{LBST}. Then for any $g \in L$ we have $$\psi(g) = \frac{1}{2(q-1)}\sum_{\lambda \in \mu_{q-1}}q^{\dim_{\mathbb F_q}\mathrm{Ker}(g - \lambda \cdot 1_V)} - \frac{1}{2(q+1)}\sum_{\lambda \in \mu_{q+1}}(-q)^{\dim_{\mathbb F_{q^2}}\mathrm{Ker}(g - \lambda \cdot 1_{\tilde V})} -1$$ when $2|n$, and $$\psi(g) = \frac{1}{2(q-1)}\sum_{\lambda \in \mu_{q-1}}\chi_2^+(\lambda)q^{\dim_{\mathbb F_q}\mathrm{Ker}(g - \lambda \cdot 1_V)} + \frac{1}{2(q+1)}\sum_{\lambda \in \mu_{q+1}}\chi^-_2(\lambda)(-q)^{\dim_{\mathbb F_{q^2}}\mathrm{Ker}(g - \lambda \cdot 1_{\tilde V})} $$ when $2 \nmid qn$. \end{thm} \begin{proof} In the case $2|q$, the statement follows from the last formula in Proposition \ref{sp-so2}, together with formulae (3) and (6) of \cite{GT}. Assume now that $2 \nmid q$, and set $\kappa := 1$ if $2|n$ and $\kappa := 0$ if $2 \nmid n$.
By \cite[Proposition 5.7]{LBST} (and in the notation of \cite[\S5.1]{LBST}), $$\psi(g)=\frac{1}{|\mathrm {Sp}_2(q)|}\sum_{x \in \mathrm {Sp}_2(q)}\omega_{n}(xg)\mathsf{St}(x)-\kappa,$$ where $\omega_{n}$ denotes a reducible Weil character of $\mathrm {Sp}_{2n}(q)$ and $\mathsf{St}$ denotes the Steinberg character of $S:= \mathrm {Sp}_2(q)$. If $x \in S$ is not semisimple, then $\mathsf{St}(x) = 0$. Suppose $x = \mathrm{diag}(\lambda,\lambda^{-1}) \in T_1 <S$, where $T_1 \cong C_{q-1}$ is a split torus and $\lambda \in \mu_{q-1}$. In this case, we can view $T_1$ as $\mathrm {GL}_1(q)$, embed $L$ in $\mathrm {GL}_{n}(q)$, and view $xg$ as an element $h=\lambda g$ in a Levi subgroup $\mathrm {GL}_{n}(q)$ of $\mathrm {Sp}_{2n}(q)$, with $\det(h) = \lambda^{n}$. It follows from \cite[Theorem 2.4(c)]{Ge} that $$\omega_{n}(xg) = \chi_2^+(\lambda^n) q^{\dim_{\mathbb F_q}\mathrm{Ker}(h-1)} = \chi_2^+(\lambda^n) q^{\dim_{\mathbb F_q}\mathrm{Ker}(g-\lambda^{-1})}.$$ If $\lambda \neq \pm 1$, then $|x^S| = q(q+1)$ and $\mathsf{St}(x) = 1$. If $\lambda = \pm 1$, then $|x^S| = 1$ and $\mathsf{St}(x)=q$. Note that since $g \in \mathrm{GO}(V)$, $$\dim_{\mathbb F_q}\mathrm{Ker}(g-\lambda^{-1}) = \dim_{\mathbb F_q}\mathrm{Ker}(\tw t g-\lambda^{-1}) = \dim_{\mathbb F_q}\mathrm{Ker}(g^{-1}-\lambda^{-1}) = \dim_{\mathbb F_q}\mathrm{Ker}(g-\lambda).$$ We also note that since $g \in \mathrm {SO}(V)$, \begin{equation}\label{eq:kappa2} \dim_{\mathbb F_q}\mathrm{Ker}(g-1_V) \equiv n \pmod{2},~~ \dim_{\mathbb F_q}\mathrm{Ker}(g+1_V) \equiv 0 \pmod{2}. \end{equation} (Indeed, since $\det(g)=1$, each of $\mathrm{Ker}(g_s-1_V)$ and $\mathrm{Ker}(g_s+1_V)$ is a non-degenerate subspace of $V$ if nonzero, where $g=g_sg_u$ is the Jordan decomposition; furthermore, $2|\dim_{\mathbb F_q}\mathrm{Ker}(g_s+1_V)$ and $\dim_{\mathbb F_q}\mathrm{Ker}(g_s-1_V) \equiv n \pmod{2}$. Hence the claim reduces to the unipotent case $g=g_u$.
In the latter case, the number of Jordan blocks of $g_u$ of each even size is even, see \cite[\S13.1]{Car}, and the claim follows.) Suppose $x = \mathrm{diag}(\mu,\mu^{-1}) \in T_2 <S$, where $T_2 \cong C_{q+1}$ is a non-split torus and $\mu \in \mu_{q+1}$ with $\mu \neq \pm 1$. Then $\mathsf{St}(x) = -1$ and $|x^S| = q(q-1)$. In this case, we can view $T_2$ as $\mathrm {GU}_1(q)$, embed $L$ in $\mathrm {GU}_{n}(q)$, and view $xg$ as an element $h=\mu g$ in a subgroup $\mathrm {GU}_{n}(q)$ of $\mathrm {Sp}_{2n}(q)$, with $\det(h) = \mu^{n}$. It follows from \cite[Theorem 3.3]{Ge} that $$\omega_{n}(xg) = (-1)^n\chi_2^-(\mu^n)(-q)^{\dim_{\mathbb F_{q^2}}\mathrm{Ker}(h-1)} = (-1)^n\chi_2^-(\mu^n)(-q)^{\dim_{\mathbb F_{q^2}}\mathrm{Ker}(g-\mu^{-1})}.$$ Altogether, we have shown that \begin{equation}\label{eq:dec51} \begin{aligned}\psi(g) & = \frac{1}{q^2-1}\bigl(q^{\dim_{\mathbb F_q}\mathrm{Ker}(g-1)}+\chi^+_2((-1)^n)q^{\dim_{\mathbb F_q}\mathrm{Ker}(g+1)}\bigr)\\ & +\frac{1}{2(q-1)}\sum_{\lambda \in \mu_{q-1} \smallsetminus \{\pm 1\}}\chi^+_2(\lambda^n)q^{\dim_{\mathbb F_q}\mathrm{Ker}(g -\lambda)}\\ & - \frac{(-1)^n}{2(q+1)}\sum_{\mu \in \mu_{q+1} \smallsetminus \{\pm 1\}}\chi^-_2(\mu^n)(-q)^{\dim_{\mathbb F_{q^2}}\mathrm{Ker}(g-\mu)} -\kappa,\end{aligned} \end{equation} and the statement now follows if we use \eqref{eq:kappa2}. \end{proof} \subsection{Some character estimates} \begin{prop}\label{rat-so21} Let $q$ be any prime power, $G = \Omega^\epsilon_{2n}(q)$ with $n \geq 5$, $\epsilon=\pm$, and let $g \in G$ have support $s=\mathsf{supp}(g)$. Assume that $\chi \in \{\alpha,\beta\}$ if $2 \nmid q$, and $\chi \in \{\alpha,\beta,\bar{g}_i\}$ if $2|q$. Then $$\frac{|\chi(g)|}{\chi(1)} \leq \frac{1}{q^{s/3}}.$$ \end{prop} \begin{proof} (i) First we consider the case $s \geq n \geq 5$. Then \begin{equation}\label{eq:rb1} d(x,g) \leq 2n-s \end{equation} for any $x \in \overline{\mathbb F}_q^\times$.
In particular, \begin{equation}\label{eq:rb2} 0 \leq \rho(g) \leq \sum_{x \in \mathbb F_q^\times}\frac{q^{d(x,g)}-1}{q-1} \leq q^{2n-s}-1. \end{equation} Now, when $2|q$, part (i) of the proof of Proposition \ref{dec-so2} shows that $\bar{g}_i = \mathrm{Ind}^G_P(\nu_j)$ for some linear character $\nu_j$ of $P$, and recall that $\rho = \mathrm{Ind}^G_P(1_P)$. It follows that $$|\bar{g}_i(g)| \leq |\rho(g)| \leq q^{2n-s}-1,$$ and so $|\bar{g}_i(g)/\bar{g}_i(1)| < 1/q^{s-2} \leq q^{-3s/5}$ as $\bar{g}_i(1) = [G:P] > q^{2n-2}$. Next, using Theorem \ref{beta-so2} and \eqref{eq:rb1} we also see that \begin{equation}\label{eq:rb3} |\beta(g)+1| \leq \frac{1}{2(q-1)}\sum_{x \in \mathbb F_q^\times}q^{d(x,g)} + \frac{1}{2(q+1)}\sum_{x \in \overline{\mathbb F}_q^\times,x^{q+1}=1}q^{d(x,g)} \leq q^{2n-s}. \end{equation} In particular, $|\beta(g)| \leq q^{2n-s}+1$. Since $\beta(1) = (q^{2n}-q^2)/(q^2-1)$, we deduce that $|\beta(g)/\beta(1)| < q^{-3s/5}$. Furthermore, as $\alpha(g) = \rho(g)-(\beta(g)+1)$, we obtain from \eqref{eq:rb2}--\eqref{eq:rb3} that $$|\alpha(g)| \leq 2q^{2n-s}-1.$$ If $s \geq 6$, then it follows that $|\alpha(g)/\alpha(1)| < q^{4-s} \leq q^{-s/3}$, since $\alpha(1) > q^{2n-3}$. Suppose that $s=n=5$. Then we can strengthen \eqref{eq:rb3} to $$\frac{-2q^5-(q-1)q^3}{2(q+1)} \leq \beta(g)+1 \leq q^5.$$ Together with \eqref{eq:rb2}, this implies that $$|\alpha(g)| = |\rho(g)-(\beta(g)+1)| < q^5+q^4 < \alpha(1)/q^{s/3}$$ since $\alpha(1) \geq (q^5+1)(q^4-q)/(q^2-1)$. \smallskip (ii) From now on we may assume that $s \leq n-1$. As $g \in G=\Omega^\epsilon_{2n}(q)$, it follows that $d(z,g) = 2n-s$ for a unique $z \in \{1,-1\}$. Furthermore, $2|s$. (Indeed, this has been recorded in \eqref{eq:kappa1} when $2|q$, and in \eqref{eq:kappa2} when $2 \nmid q$.) We also have that \begin{equation}\label{eq:rb4} d(x,g) \leq 2n-d(z,g) =s \end{equation} for all $x \in \overline{\mathbb F}_q^\times \smallsetminus \{z\}$. Assume in addition that $s \geq 4$.
Using \eqref{eq:rb4} we obtain \begin{equation}\label{eq:rb5} 0 \leq \rho(g) \leq \frac{q^{2n-s}-1+(q-2)(q^s-1)}{q-1}. \end{equation} As $\rho(1)=(q^n-\epsilon)(q^{n-1}+\epsilon)/(q-1)$, it follows that $|\rho(g)/\rho(1)| < q^{-3s/5}$. As above, the same bound also applies to $\chi=\bar{g}_i$ when $2|q$. Next, since $2|s$, using Theorem \ref{beta-so2} and applying \eqref{eq:rb4} to $x^{q \pm 1} = 1$ and $x \neq z$, we have that \begin{equation}\label{eq:rb6} \frac{q^{2n-s}}{q^2-1}-q^s \cdot \frac{q}{2(q+1)} \leq \beta(g)+1 \leq \frac{q^{2n-s}}{q^2-1}+ q^s \cdot \biggl( \frac{q-2}{2(q-1)} + \frac{q}{2(q+1)} \biggr); \end{equation} in particular, $$|\beta(g)| < \frac{q^{2n-s}+q^s(q^2-q-1)}{q^2-1}.$$ Since $\beta(1) = (q^{2n}-q^2)/(q^2-1)$, we obtain that $|\beta(g)/\beta(1)| < q^{-4s/5}$. Furthermore, using \eqref{eq:rb5}--\eqref{eq:rb6}, we can bound $$|\alpha(g)| = |\rho(g)-(\beta(g)+1)| < \frac{q^{2n-s+1}+q^s(3q^2-3q-4)/2}{q^2-1} <\frac{\alpha(1)}{q^{2s/5}}$$ since $\alpha(1) \geq (q^n+1)(q^{n-1}-q)/(q^2-1)$. \smallskip (iii) Since the statement is obvious for $s=0$, it remains to consider the case $s=2$, i.e. $d(1,zg) = 2n-2$. Using \cite[Lemma 4.9]{TZ1}, one can readily show that $g$ fixes an orthogonal decomposition $V = U \oplus U^\perp$, with $U \subset \mathrm{Ker}(g-z \cdot 1_V)$ being non-degenerate of dimension $2n-4$, and \begin{equation}\label{eq:rb7} \dim_{\mathbb F_q}(U^\perp)^{zg} = 2. \end{equation} First we estimate $\rho(g)$. Suppose $g(v) = tv$ for some singular $0 \neq v \in V$ and $t \in \mathbb F_q^\times$. If $t \neq z$, then $v \in U^\perp$, and \eqref{eq:rb7} implies that $g$ can fix at most $q+1$ such singular $1$-spaces $\langle v \rangle_{\mathbb F_q}$. Likewise, $g$ fixes at most $q+1$ singular $1$-spaces $\langle v \rangle_{\mathbb F_q} \subset U^\perp$ with $g(v) = zv$. Assume now that $g(v) = zv$ with $v = u+u'$, $0 \neq u \in U$ and $u' \in U^\perp$.
As $0 = \mathsf{Q}(v) = \mathsf{Q}(u)+\mathsf{Q}(u')$, the total number of such $v$ is $$N:=\sum_{x \in \mathbb F_q}|\{ 0 \neq w \in U \mid \mathsf{Q}(w) = x \}| \cdot |\{ w' \in U^\perp \mid g(w') = zw',\mathsf{Q}(w') = -x \}|.$$ Note that, since $U$ is a non-degenerate quadratic space of dimension $2n-4$, $$(q^{n-2}+1)(q^{n-3}-1) \leq |\{ 0 \neq w \in U \mid \mathsf{Q}(w) = x \}| \leq (q^{n-2}-1)(q^{n-3}+1)$$ for any $x \in \mathbb F_q$. On the other hand, \eqref{eq:rb7} implies that $$\sum_{x \in \mathbb F_q}|\{ w' \in U^\perp \mid g(w') = zw',\mathsf{Q}(w') = -x \}| = |(U^\perp)^{zg}| = q^2.$$ It follows that $$q^2(q^{n-2}+1)(q^{n-3}-1) \leq N \leq q^2(q^{n-2}-1)(q^{n-3}+1),$$ and so \begin{equation}\label{eq:rb8} \frac{q^2(q^{n-2}+1)(q^{n-3}-1)}{q-1} \leq \rho(g) \leq 2q+2+\frac{q^2(q^{n-2}-1)(q^{n-3}+1)}{q-1}. \end{equation} In particular, when $2|q$ we have $|\bar{g}_i(g)| \leq |\rho(g)| < \rho(1)/q^{4s/5}$. Next, applying \eqref{eq:rb6} to $s=2$ we have $$|\beta(g)| \leq \frac{q^{2n-2}+q^2(q^2-q-1)}{q^2-1} < \frac{\beta(1)}{q^{4s/5}}.$$ Finally, using \eqref{eq:rb6} with $s=2$ and \eqref{eq:rb8}, we obtain $$|\alpha(g)| = |\rho(g)-(\beta(g)+1)| < \frac{q^{2n-3}+q^{n+1}-q^{n-1}}{q^2-1}+(q+1) <\frac{\alpha(1)}{q^{3s/5}}.$$ \end{proof} \begin{prop}\label{rat-sp-so22} Let $q$ be any odd prime power, $n \geq 5$, and $\epsilon=\pm$. Assume that $\chi \in \mathrm{Irr}(G)$, where either $G \in \{\mathrm {Sp}_{2n}(q), \Omega_{2n+1}(q)\}$ and $\chi \in \{\alpha,\beta\}$, or $G = \Omega^\epsilon_{2n}(q)$ and $\chi \in \{\alpha,\beta,\bar{g}_i\}$. If $g \in G$ has support $s=\mathsf{supp}(g)$, then $$\frac{|\chi(g)|}{\chi(1)} \leq \frac{1}{q^{s/3}}.$$ \end{prop} \begin{proof} (i) As usual, we may assume $s \geq 1$. First we consider the case $G = \Omega^\epsilon_{2n}(q)$. Then \cite[Corollary 5.14]{NT} and \cite[Proposition 5.7]{LBST} show (in their notation) that $\alpha=D_{1}-1_G$, $\beta = D_{\mathsf{St}}-1_G$. 
Furthermore, if $\nu \neq 1_P$ is a linear character of $P$, then $\mathrm{Ind}^G_P(\nu) = D_{\chi_j}$ if $\nu$ has order $>2$, and $\mathrm{Ind}^G_P(\nu) = D_{\xi_1}+D_{\xi_2}$ if $\nu$ has order $2$. If $\chi = \alpha$ or $\beta$, then the statement is already proved in Proposition \ref{rat-so21}, whose proof also applies to the case $\chi=\bar{g}_i = D_{\chi_j}$ (using the estimate $|\mathrm{Ind}^G_P(\nu)(g)| \leq \rho(g)$). It remains to consider the case $\chi = \bar{g}_i = D_{\xi_j}$ for $j = 1,2$. Again the previous argument applied to $\nu$ of order $2$ shows that $$|D_{\xi_1}(g)+D_{\xi_2}(g)| \leq \frac{[G:P]}{q^{3s/5}} = \frac{2\chi(1)}{q^{3s/5}}.$$ On the other hand, the formula for $D_\alpha$ in \cite[Lemma 5.5]{LBST}, the character table of $\mathrm {SL}_2(q)$ \cite[Theorem 38.1]{D}, and part 1) of the proof of \cite[Proposition 5.11]{LBST} imply that \begin{equation}\label{eq:rb21} |D_{\xi_1}(g)-D_{\xi_2}(g)| \leq \frac{2(q^2-1)q^n \cdot\sqrt{q}}{q(q^2-1)} = 2q^{n-1/2}. \end{equation} If $4 \leq s \leq 2n-2$, then since $\chi(1) \geq (q^n+1)(q^{n-1}-1)/2(q-1)> q^{2n-3}(q+1)$ it follows that $$\begin{aligned}|\chi(g)| & \leq \bigl(|D_{\xi_1}(g)+D_{\xi_2}(g)|+|D_{\xi_1}(g)-D_{\xi_2}(g)|\bigr)/2 \\ & \leq \frac{\chi(1)}{q^{3s/5}}+q^{n-1/2} < \frac{\chi(1)}{q^{3s/5}} + \frac{2\chi(1)}{q^{s/3-1/6}(q+1)} < \frac{\chi(1)}{q^{s/3}}.\end{aligned}$$ If $1 \leq s \leq 3$, then $s < n$, and so $2|s$ as shown in part (ii) of the proof of Proposition \ref{rat-so21}. Hence $s=2$, and we again have $$|\chi(g)| \leq \frac{\chi(1)}{q^{3s/5}}+q^{n-1/2} < \frac{\chi(1)}{q^{3s/5}} + \frac{2\chi(1)}{q^{s/3+17/6}} < \frac{\chi(1)}{q^{s/3}}.$$ Finally, if $s=2n-1$, then $d(x,g) \leq 1$ for all $x \in \overline{\mathbb F}_q^\times$ by \eqref{eq:rb1}; moreover, $d(\pm 1,g) = 0$.
Hence, instead of \eqref{eq:rb21} we now have the stronger bound $$|D_{\xi_1}(g)-D_{\xi_2}(g)| \leq \frac{2(q^2-1) \cdot\sqrt{q}}{q(q^2-1)} = 2q^{-1/2},$$ whence $|\chi(g)| \leq \chi(1)q^{-3s/5}+q^{-1/2} < \chi(1)q^{-s/3}$. \smallskip (ii) Next we consider the case $G = \Omega_{2n+1}(q)$. Then \cite[Corollary 5.15]{NT} and \cite[Proposition 5.7]{LBST} show (in their notation) that $\alpha=D_{\xi_1}-1_G$, $\beta = D_{\xi_2}-1_G$. Again using the formula for $D_\alpha$ in \cite[Lemma 5.5]{LBST}, the character table of $\mathrm {SL}_2(q)$ \cite[Theorem 38.1]{D}, and part 1) of the proof of \cite[Proposition 5.11]{LBST}, we obtain that \begin{equation}\label{eq:rb22} |\alpha(g)-\beta(g)| = |D_{\xi_1}(g)-D_{\xi_2}(g)| \leq \frac{2(q^2-1)q^{n+1/2} \cdot\sqrt{q}}{q(q^2-1)} = 2q^n. \end{equation} Suppose in addition that $3 \leq s \leq 2n-2$. Since $d(x,g) \leq 2n+1-s$ by \eqref{eq:rb1}, we have that $$0 \leq \rho(g)=1+\alpha(g)+\beta(g) \leq \sum_{x \in \mu_{q-1}}\frac{q^{d(x,g)}-1}{q-1} \leq q^{2n+1-s}.$$ As $\chi(1) \geq (q^n+1)(q^n-q)/2(q-1)$, it follows that $$|\alpha(g)+\beta(g)| \leq q^{2n+1-s}-1 < \frac{2(1-1/q)q^{2-s}\chi(1)}{(1+1/q^n)(1-1/q^{n-1})} < \frac{2(1-1/q)\chi(1)}{q^{s/3}(1-1/q^{n-1})}.$$ On the other hand, \eqref{eq:rb22} implies that $$|\alpha(g)-\beta(g)| \leq \frac{4(1-1/q)\chi(1)}{q^{(s+4)/3}(1-1/q^{n-1})},$$ and so $$\frac{|\chi(g)|}{\chi(1)} < \frac{(1-1/q)}{q^{s/3}(1-1/q^{n-1})}+ \frac{2(1-1/q)}{q^{(s+4)/3}(1-1/q^{n-1})} < \frac{1}{q^{s/3}}.$$ If $s=2n-1$ or $2n$, then $d(x,g) \leq 2$ for all $x \in \overline{\mathbb F}_q^\times$ by \eqref{eq:rb1}. Hence, instead of \eqref{eq:rb22} we now have the stronger bound $$|\alpha(g)-\beta(g)|=|D_{\xi_1}(g)-D_{\xi_2}(g)| \leq \frac{2(q^2-1)q^2 \cdot\sqrt{q}}{q(q^2-1)} = 2q^{3/2},$$ whence $$|\chi(g)| < \frac{(1-1/q)q^{2-s}\chi(1)}{(1-1/q^{n-1})} +q^{3/2} < \chi(1)q^{-s/3}.$$ It remains to consider the case $s=1,2$, i.e. $d(1,zg) = 2n$ or $2n-1$ for some $z \in \{1,-1\}$.
Using \cite[Lemma 4.9]{TZ1}, one can readily show that $g$ fixes an orthogonal decomposition $V = U \oplus U^\perp$, with $U \subset \mathrm{Ker}(g-z \cdot 1_V)$ being non-degenerate of dimension $2n-3$, and \begin{equation}\label{eq:rb23} \dim_{\mathbb F_q}(U^\perp)^{zg} = 4-s. \end{equation} First we estimate $\rho(g)$. Suppose $g(v) = tv$ for some singular $0 \neq v \in V$ and $t \in \mathbb F_q^\times$. If $t \neq z$, then $v \in U^\perp$, and \eqref{eq:rb23} implies that $g$ can fix at most $(q^s-1)/(q-1) \leq q+1$ such singular $1$-spaces $\langle v \rangle_{\mathbb F_q}$. Likewise, $g$ fixes at most $(q+1)^2$ singular $1$-spaces $\langle v \rangle_{\mathbb F_q} \subset U^\perp$ with $g(v) = zv$, since $\dim U^\perp = 4$. Assume now that $g(v) = zv$ with $v = u+u'$, $0 \neq u \in U$ and $u' \in U^\perp$. As $0 = \mathsf{Q}(v) = \mathsf{Q}(u)+\mathsf{Q}(u')$, the total number of such $v$ is $$N:=\sum_{x \in \mathbb F_q}|\{ 0 \neq w \in U \mid \mathsf{Q}(w) = x \}| \cdot |\{ w' \in U^\perp \mid g(w') = zw',\mathsf{Q}(w') = -x \}|.$$ Note that, since $U$ is a non-degenerate quadratic space of dimension $2n-3$, $$q^{n-2}(q^{n-2}-1) \leq |\{ 0 \neq w \in U \mid \mathsf{Q}(w) = x \}| \leq q^{n-2}(q^{n-2}+1)$$ for any $x \in \mathbb F_q$. On the other hand, \eqref{eq:rb23} implies that $$\sum_{x \in \mathbb F_q}|\{ w' \in U^\perp \mid g(w') = zw',\mathsf{Q}(w') = -x \}| = |(U^\perp)^{zg}| = q^{4-s}.$$ It follows that $$q^{n+2-s}(q^{n-2}-1) \leq N \leq q^{n+2-s}(q^{n-2}+1),$$ and so $$\frac{q^{n+2-s}(q^{n-2}-1)}{q-1} \leq \rho(g)=1+\alpha(g)+\beta(g) \leq q^2+3q+2+\frac{q^{n+2-s}(q^{n-2}+1)}{q-1}.$$ Together with \eqref{eq:rb22}, this implies that $$\frac{|\chi(g)|}{\chi(1)} \leq \frac{(q^2-1)(q+2)+q^{n+2-s}(q^{n-2}+1)+2q^n(q-1)}{(q^n+1)(q^n-q)} < \frac{1}{q^{s/2}}.$$ \smallskip (iii) Finally, we consider the case $G = \mathrm {Sp}_{2n}(q)$.
In this case, arguing similarly to the proof of \cite[Proposition 5.7]{LBST}, one can show that $\{\alpha,\beta\} = \{D^\circ_{\lambda_0},D^\circ_{\lambda_1}\}$, where $S = \mathrm {O}^+_2(q) \cong D_{2(q-1)}$, with $\lambda_0$, $\lambda_1$ being the two linear characters trivial at $\mathrm {SO}^+_2(q)$, and we consider the dual pair $G \times S \to \mathrm {Sp}_{4n}(q)$. In particular, $\chi(1) \geq (q^n+1)(q^n-q)/2(q-1) > q^{2n-4/3}$. Now, the formula for $D_\alpha$ in \cite[Lemma 5.5]{LBST}, the character table of $S$, and part 1) of the proof of \cite[Proposition 5.11]{LBST} imply that \begin{equation}\label{eq:rb24} |\alpha(g)-\beta(g)| \leq q^{(d(1,g)+d(-1,g))/2} \leq q^{2n-s}. \end{equation} On the other hand, using \eqref{eq:rb1} we have $0 \leq \rho(g) = \alpha(g)+\beta(g)+1 \leq q^{2n-s}-1$. In particular, when $s \geq 2$ we have $$|\chi(g)| \leq \bigl(|\alpha(g)+\beta(g)|+|\alpha(g)-\beta(g)|\bigr)/2 \leq q^{2n-s} < \chi(1)q^{-s/3}.$$ Assume now that $s=1$. Then $g = zu$ for some $z = \pm 1$ and unipotent $u \in G$; furthermore, $\rho(g) = (q^{2n-1}-1)/(q-1)$. Applying also \eqref{eq:rb24}, we obtain $$|\chi(g)| \leq \biggl(|\alpha(g)+\beta(g)|+|\alpha(g)-\beta(g)|\biggr)/2 \leq \biggl(\frac{q^{2n-1}-q}{q-1}+q^{n-1/2}\biggr)/2 < \chi(1)q^{-4s/5},$$ and the proof is complete. \end{proof} \section{Classical groups: Proof of Theorem \ref{main1}}\label{pfth1} Let $G = \mathrm {Sp}(V)$ or $\Omega(V)$, where $V = V_n(q)$. Write $G = {\rm Cl}_n(q)$ to cover both cases. As before, for a semisimple element $g \in G$, define $\nu(g) = \mathsf{supp}(g)$, the codimension of the largest eigenspace of $g$ over $\overline{\mathbb F}_q$. For $n<10$, Theorem \ref{main1} can be easily proved by exactly the same method as the proof of \cite[Theorem 2]{LST} (improving the constant $D$ in Lemma 2.3 of \cite{LST} by using better bounds for $|G|$ and $|C_G(g)|_p$).
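To make the support parameter concrete, here is a small illustrative example of our own (not taken from \cite{LST}):

```latex
% Illustrative example (ours, not from the source). Consider a semisimple
% element with exactly one eigenvalue pair different from 1:
\[
g \;=\; \mathrm{diag}(\lambda,\lambda^{-1},1,\dots,1) \in \mathrm {Sp}_n(q),
\qquad \lambda \in \mathbb F_q^\times,\ \lambda \neq \pm 1.
\]
% Its eigenvalues over \overline{F}_q are lambda and lambda^{-1}
% (multiplicity 1 each) and 1 (multiplicity n-2), so the largest eigenspace
% is the fixed space, of dimension n-2, and hence
\[
\mathsf{supp}(g) \;=\; n - \dim_{\mathbb F_q}\mathrm{Ker}(g-1_V) \;=\; n-(n-2) \;=\; 2.
\]
```

Low-support elements of this kind have the largest centralizers and the largest character ratios, so they dominate the sums estimated in the remainder of this section.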
So assume from now on that $n\ge 10$, so that the character ratio bounds in Propositions \ref{rat-so21} and \ref{rat-sp-so22} apply. We begin with a lemma analogous to \cite[Lemma 3.2]{LST}. \begin{lem}\label{sest} For $1\le s<n$, define $$N_s(G) := \{g\in G_{\mathrm {ss}} : \nu(g)=s\}$$ and let $n_s(G):=|N_s(G)|$. \begin{itemize} \item[{\rm (i)}] If $g \in N_s(G)$ and $s<\frac{n}{2}$ then $|\mathbf{C}_G(g)|_p < q^{\frac{1}{4}((n-s)^2+s^2) - v\frac{n-1}{2}}$, where $v=0$ or $1$ according as $G$ is symplectic or orthogonal. \item[{\rm (ii)}] If $g \in N_s(G)$ and $s\ge \frac{n}{2}$ then $|\mathbf{C}_G(g)|_p < q^{\frac{1}{4}(n^2-ns)}$. \item[{\rm (iii)}] $\sum_{n-1 \geq s \geq n/2}n_s(G) < |G| < q^{\frac{1}{2}(n^2+n)-vn}$, where $v$ is as in $(i)$. \item[{\rm (iv)}] If $s < n/2$, then $n_s(G) < cq^{\frac{1}{2}s(2n-s+1)+\frac{n}{2}}$, where $c$ is an absolute constant that can be taken to be $15.2$. \end{itemize} \end{lem} \begin{proof} (i) If $\nu(g)=s<\frac{n}{2}$, then the largest eigenspace of $g$ has dimension $n-s>\frac{n}{2}$, so has eigenvalue $\pm 1$, and so $\mathbf{C}_G(g) \le {\rm Cl}_{n-s}(q) \times {\rm Cl}_s(q)$. Part (i) follows. \vspace{2mm} (ii) Now suppose $\nu(g) = s \ge \frac{n}{2}$, and let $E_\lambda$ ($\lambda \in \overline{\mathbb F}_q$) be an eigenspace of maximal dimension $n-s$. Assume first that $\lambda \ne \pm 1$. Then letting $a$ and $b$ denote the dimensions of the $+1$- and $-1$-eigenspaces, we have \begin{equation}\label{cent} \mathbf{C}_G(g) \le \prod_{i=1}^t \mathrm {GL}_{d_i}(q^{k_i}) \times {\rm Cl}_a(q) \times {\rm Cl}_b(q), \end{equation} where $n-s = d_1 \ge d_2\ge \cdots \ge d_t$ and also $d_1 \ge a\ge b$ and $2\sum_1^t k_id_i+a+b = n$. Hence $|\mathbf{C}_G(g)|_p \le q^D$, where \begin{equation}\label{expd} D = \frac{1}{2}\sum_{i=1}^t k_id_i(d_i-1) + \frac{1}{4}(a^2+b^2).
\end{equation} If $n\ge 4d_1$, this expression is maximised when $a=b=d_1$ and $(d_1,\ldots ,d_t) = (d_1,\ldots ,d_1,r)$ with $r\le d_1$ and $k_i=1$ for all $i$. Hence in this case, \[ D \le \frac{1}{2}(t-1)d_1(d_1-1) + \frac{1}{2}r(r-1) + \frac{1}{2}d_1^2 = \frac{1}{2}td_1^2-\frac{1}{2}(t-1)d_1+\frac{1}{2}r(r-1), \] and this is easily seen to be less than $\frac{1}{4}nd_1$, as required for part (ii). Similarly, if $4d_1>n\ge 3d_1$, the expression (\ref{expd}) is maximised when $t=1$, $k_1=1$, $a=d_1$ and $b=r < d_1$; and when $3d_1>n \ge 2d_1$ (note that $n\ge 2d_1 = 2(n-s)$ by our assumption that $\nu(g) = s \ge \frac{n}{2}$), the expression (\ref{expd}) is maximised when $t=1$ and $a=r< d_1$. In each case, we see that $D< \frac{1}{4}nd_1$ as above. Assume finally that $\lambda = \pm 1$. In this case the centralizer $\mathbf{C}_G(g)$ is as in (\ref{cent}), with $n-s=a \ge d_1\ge \cdots \ge d_t$ and also $a\ge b$ and $2\sum_1^t k_id_i+a+b = n$. Again we have $|\mathbf{C}_G(g)|_p \le q^D$, with $D$ as in (\ref{expd}), and we argue as above that $D < \frac{1}{4}na = \frac{1}{4}n(n-s)$. This completes the proof of (ii). \vspace{2mm} (iii) This is clear. \vspace{2mm} (iv) If $\nu(g) = s < \frac{n}{2}$ then as in (i), the largest eigenspace of $g$ has eigenvalue $\pm 1$, so we have $\mathbf{C}_G(g) \ge {\rm Cl}_{n-s}(q) \times T_s$, where $T_s$ is a maximal torus of ${\rm Cl}_s(q)$. Hence $|g^G| \le |G:{\rm Cl}_{n-s}(q)T_s| \le q^{\frac{1}{2}s(2n-s+1)}$. Also the number of conjugacy classes in $G$ is at most $15.2q^{n/2}$ by \cite{FG}, and (iv) follows. \end{proof} \begin{lem}\label{stein} Let $\chi \in \{\alpha,\beta,\bar{g}_i\}$, where $\alpha,\beta,\bar{g}_i$ are the irreducible characters of $G$ defined in Section \ref{red}. Then $\mathsf{St} \subseteq \chi^{4n}$.
\end{lem} \begin{proof} As in the proof of \cite[Lemma 2.3]{LST}, there are signs $\epsilon_g=\pm 1$ such that \begin{equation}\label{useag} \begin{array}{ll} [\chi^l,\mathsf{St}]_G & = \dfrac{1}{|G|}\sum_{g\in G_{\mathrm {ss}}} \epsilon_g \chi^l(g)|\mathbf{C}_G(g)|_p \\ & = \dfrac{\chi^l(1)}{|G|}\left(|G|_p + \sum_{1 \neq g \in G_{\mathrm {ss}}} \epsilon_g \left(\frac{\chi(g)}{\chi(1)}\right)^l|\mathbf{C}_G(g)|_p\right). \end{array} \end{equation} Hence $[\chi^l,\mathsf{St}]_G \ne 0$ provided $\Sigma_l < |G|_p$, where \[ \Sigma_l := \sum_{1 \neq g\in G_{\mathrm {ss}}} \left|\frac{\chi(g)}{\chi(1)}\right|^l|\mathbf{C}_G(g)|_p. \] By Propositions \ref{rat-so21} and \ref{rat-sp-so22}, if $s = \nu(g)$ we have \[ \frac{|\chi(g)|}{\chi(1)} \le \frac{1}{q^{s/3}}. \] Hence applying Lemma \ref{sest}, we have $\Sigma_l \le \Sigma_1+\Sigma_2$, where \[ \begin{array}{l} \Sigma_1 = \sum_{1\le s<\frac{n}{2}} cq^{\frac{1}{2}s(2n-s+1)+\frac{n}{2}} \cdot \frac{1}{q^{ls/3}} \cdot q^{\frac{1}{4}((n-s)^2+s^2) - v\frac{n-1}{2}}, \\ \Sigma_2 = \sum_{\frac{n}{2}\le s < n} q^{\frac{1}{2}(n^2+n)-vn} \cdot \frac{1}{q^{ls/3}} \cdot q^{\frac{1}{4}(n^2-ns)}. \end{array} \] For a term in $\Sigma_1$, the exponent of $q$ is \[ \frac{1}{4}n^2-v\frac{n-1}{2} + \frac{1}{2}s(n+1)+\frac{1}{2}n-\frac{ls}{3}. \] As $|G|_p \le q^{\frac{1}{4}n^2-v\frac{n-1}{2}}$, taking $l=4n$ this gives \[ \begin{array}{ll} \frac{\Sigma_1}{|G|_p} & \le \sum_{1\le s<\frac{n}{2}} cq^{\frac{1}{2}s(n+1)+\frac{n}{2}-\frac{ls}{3}} \\ & \le \sum_{1\le s<\frac{n}{2}} cq^{\frac{1}{2}n(1-\frac{5s}{3})+\frac{s}{2}}. \end{array} \] Recalling that $c=15.2$, it follows that $\frac{\Sigma_1}{|G|_p} < \frac{1}{2}$ (except for $q=2, n\le 20$, in which case we obtain the same conclusion using slightly more refined estimates instead of Lemma \ref{sest}(iv)). For a term in $\Sigma_2$, the exponent of $q$ is \[ \frac{1}{2}(n^2+n)-vn +\frac{1}{4}n(n-s) - \frac{ls}{3}, \] which similarly leads to the inequality $\frac{\Sigma_2}{|G|_p} < \frac{1}{2}$ when $l=4n$.
We conclude that $\Sigma_l < |G|_p$ for $l=4n$, proving the lemma. \end{proof} \begin{proof}[Proof of Theorem \ref{main1}] Let $1\ne \psi \in \mathrm{Irr}(G)$. By Lemma \ref{stein} together with Lemmas \ref{mc-r2} and \ref{mc-r3}, we have $\mathsf{St} \subseteq \psi^{8n}$ for $G = \mathrm {Sp}_n(q)$, and $\mathsf{St} \subseteq \psi^{16n}$ for $G = \Omega^\epsilon_n(q)$. Since $\mathsf{St}^2$ contains all irreducible characters by \cite{HSTZ}, the conclusion of Theorem \ref{main1} follows. \end{proof} \section{Alternating groups: Proof of Theorem \ref{main2}}\label{pfth2} In this section we prove Theorem \ref{main2}. \begin{lem}\label{staircase} Let $n := m(m+1)/2$ with $m \in \mathbb{Z}_{\geq 6}$, and let $\chi_m := \chi^{(m,m-1,\ldots,1)}$ be the staircase character of $\mathsf{S}_n$. Then $$\chi_m(1) \geq |\mathsf{S}_n|^{5/11}.$$ \end{lem} \begin{proof} We will proceed by induction on $m \geq 6$. The induction base $m=6,7$ can be checked directly. For the induction step going from $m$ to $m+2$, note by the hook length formula that $\chi_m(1)= n!/H_m$, where $H_m$ is the product of all the hook lengths in the Young diagram of the staircase partition $(m,m-1, \ldots,1)$. Hence it is equivalent to prove that $$(m(m+1)/2)! > H_m^{11/6}.$$ Since the statement holds for $m$ and $H_{m+2}/H_m = (2m+3)!!(2m+1)!!$, it suffices to prove that \begin{equation}\label{eq:st1} \prod^{2m+3}_{i=1}(m(m+1)/2+i) > \bigl((2m+3)!!(2m+1)!!\bigr)^{11/6} \end{equation} for any $m \geq 6$. Direct computation shows that \eqref{eq:st1} holds when $3 \leq m \leq 40$. When $m \geq 40$, note that $$\begin{array}{ll} \prod^{2m+3}_{i=1}(m(m+1)/2+i) & > \bigl(m(m+1)/2+1\bigr)^{2m+3}\\ & > \bigl((m+3)^{m+1}(m+2)^m\bigr)^{11/6}\\ & > \bigl((2m+3)!!(2m+1)!!\bigr)^{11/6},\end{array}$$ proving \eqref{eq:st1} and completing the induction step.
\end{proof} \begin{proof}[Proof of Theorem \ref{main2}] We will make use of \cite[Theorem 1.4]{S} which states that there exists an effective absolute constant $C_1 \geq 2$ such that \begin{equation}\label{eq:s} \chi^{t} \mbox{ contains }\mathrm{Irr}(\mathsf{S}_n) \mbox{ whenever }t \geq C_1n\log(n)/\log(\chi(1)) \end{equation} for every non-linear $\chi \in \mathrm{Irr}(\mathsf{S}_n)$. With this, we will prove that when $n$ is sufficiently large we have \begin{equation}\label{eq:main2} \varphi^{k} \mbox{ contains }\mathrm{Irr}(\mathsf{A}_n) \mbox{ whenever }k \geq Cn\log(n)/\log(\varphi(1)) \end{equation} for every nontrivial $\varphi \in \mathrm{Irr}(\mathsf{A}_n)$, with $C=5C_1^2$. \smallskip (i) Consider any $n \geq 5$ and any nontrivial $\varphi \in \mathrm{Irr}(\mathsf{A}_n)$. If $\varphi$ extends to $\mathsf{S}_n$, then we are done by \eqref{eq:s}. Hence we may assume that $\varphi$ lies under some $\chi^\lambda \in \mathrm{Irr}(\mathsf{S}_n)$, where $\lambda \vdash n$ is self-associate, and that $n$ is sufficiently large. By \cite[Proposition 4.3]{KST}, the latter implies that \begin{equation}\label{eq:a1} \varphi(1) \geq 2^{(n-5)/4}. \end{equation} Consider the Young diagram $Y(\lambda)$ of $\lambda$, and let $A$ denote the removable node in the last row of $Y(\lambda)$. Also let $\rho:=\chi^{\lambda \smallsetminus A} \in \mathrm{Irr}(\mathsf{S}_{n-1})$. Since $\lambda \smallsetminus A$ is not self-associate, $\rho$ is also irreducible over $\mathsf{A}_{n-1}$. Furthermore, by Frobenius' reciprocity, $$1 \leq [(\chi^\lambda)|_{\mathsf{S}_{n-1}},\rho]_{\mathsf{S}_{n-1}} = [\chi^\lambda,\mathrm{Ind}^{\mathsf{S}_n}_{\mathsf{S}_{n-1}}(\rho)]_{\mathsf{S}_n},$$ whence $2\varphi(1) = \chi^\lambda(1) \leq \mathrm{Ind}^{\mathsf{S}_n}_{\mathsf{S}_{n-1}}(\rho)(1) = n\rho(1)$, and so \begin{equation}\label{eq:a2} \rho(1) \geq (2/n)\varphi(1). 
\end{equation} It follows from \eqref{eq:a1} and \eqref{eq:a2} that when $n$ is large enough, $$\log(\rho(1)) \geq \log(\varphi(1))-\log(n/2) \geq (9/10)\log(\varphi(1)).$$ Now we consider any integer \begin{equation}\label{eq:a3} s \geq \frac{10C_1}{9} \cdot \frac{n\log(n)}{\log(\varphi(1))}. \end{equation} This ensures that $s \geq C_1(n-1)\log(n-1)/\log(\rho(1))$, and so, by \eqref{eq:s} applied to $\rho$, $\rho^s$ contains $\mathrm{Irr}(\mathsf{S}_{n-1})$. \smallskip (ii) Next, we can find a unique $m \in \mathbb{Z}_{\geq 3}$ such that \begin{equation}\label{eq:a4} n_0:=m(m+1)/2 \leq n-3 < (m+1)(m+2)/2, \end{equation} and consider the following partition \begin{equation}\label{eq:a5} \mu:= (n-1-m(m-1)/2,m-1,m-2, \ldots,2,1) \end{equation} of $n-1$. Note that $\mu$ has $m$ rows, with the first (longest) row $$\mu_1=n-1-m(m-1)/2 \geq m+2$$ by \eqref{eq:a4}. Hence, if $B$ is any addable node for the Young diagram $Y(\mu)$ of $\mu$, $Y(\mu \sqcup B)$ has at most $m+1$ rows and at least $m+2$ columns, and so is not self-associate. It follows that, for any such $B$, the character $\chi^{\mu \sqcup B}$ of $\mathsf{S}_n$ is irreducible over $\mathsf{A}_n$. \smallskip (iii) Recall that $\chi^\lambda|_{\mathsf{A}_n} = \varphi+\varphi^\star$ with $\varphi^\star$ being $\mathsf{S}_n$-conjugate to $\varphi$. It suffices to prove \eqref{eq:main2} for an $\mathsf{S}_n$-conjugate of $\varphi$. As $\chi^\lambda|_{\mathsf{S}_{n-1}}$ contains $\rho=\chi^{\lambda \smallsetminus A}$ which is irreducible over $\mathsf{A}_{n-1}$, without loss we may assume that $\varphi|_{\mathsf{A}_{n-1}}$ contains $\rho|_{\mathsf{A}_{n-1}}$. By the result of (i), $\rho^s$ contains $\chi^\mu$, with $\mu$ defined in \eqref{eq:a5}. Thus \begin{equation}\label{eq:a6} 1 \leq [\varphi^s|_{\mathsf{A}_{n-1}},(\chi^\mu)|_{\mathsf{A}_{n-1}}]_{\mathsf{A}_{n-1}}=\bigl[\varphi^s, \mathrm{Ind}^{\mathsf{A}_n}_{\mathsf{A}_{n-1}}\bigl((\chi^\mu)|_{\mathsf{A}_{n-1}}\bigr)\bigr]_{\mathsf{A}_n}.
\end{equation} Also recall that $\chi^\mu$ is an $\mathsf{S}_{n-1}$-character and $\mathsf{S}_n = \mathsf{A}_n\mathsf{S}_{n-1}$. Hence $$\mathrm{Ind}^{\mathsf{A}_n}_{\mathsf{A}_{n-1}}\bigl((\chi^\mu)|_{\mathsf{A}_{n-1}}\bigr) = \bigl(\mathrm{Ind}^{\mathsf{S}_n}_{\mathsf{S}_{n-1}}(\chi^\mu)\bigr)|_{\mathsf{A}_{n}}.$$ Next, $$\mathrm{Ind}^{\mathsf{S}_n}_{\mathsf{S}_{n-1}}(\chi^\mu) = \sum_{B~{\rm \tiny{addable}}}\chi^{\mu \sqcup B},$$ where, as shown in (ii), each such $\chi^{\mu \sqcup B}$ is irreducible over $\mathsf{A}_n$. Hence, it now follows from \eqref{eq:a6} that there is an addable node $B_0$ for $Y(\mu)$ such that $\varphi^s$ contains $\psi|_{\mathsf{A}_n}$, with $\psi:=\chi^{\mu \sqcup B_0}$. \smallskip (iv) By the choice of $B_0$, $\psi|_{\mathsf{S}_{n-1}}$ contains $\chi^\mu$, whence $\psi(1) \geq \chi^\mu(1)$. Next, by \eqref{eq:a4}, we can remove $n-1-n_0 \geq 2$ nodes from the first row to arrive at the staircase partition $(m,m-1, \ldots,1) \vdash n_0$. In particular, $\psi|_{\mathsf{S}_{n_0}}$ contains the character $\chi_m$ of $\mathsf{S}_{n_0}$. By Lemma \ref{staircase}, for $n$ sufficiently large we have \begin{equation}\label{eq:a7} \log(\psi(1)) \geq \log(\chi_m(1)) \geq (5/11)\log(n_0!) \geq (2/5)n\log(n), \end{equation} since $$n_0 = m(m+1)/2 \geq n-(m+2) \geq n-(3/2+\sqrt{2n-4})$$ by the choice \eqref{eq:a4} of $m$. Now we consider the integer $t := \lceil (5/2)C_1 \rceil \leq 3C_1$ (since $C_1 \geq 2$). Then $$C_1n\log(n)/\log(\psi(1)) \leq (5/2)C_1 \leq t$$ by \eqref{eq:a7}, and so $\psi^t$ contains $\mathrm{Irr}(\mathsf{S}_n)$ by \eqref{eq:s} applied to $\psi$. In particular, $(\psi^t)|_{\mathsf{A}_n}$ contains $\mathrm{Irr}(\mathsf{A}_n)$. Recall from (iii) that $\varphi^s$ contains the irreducible character $\psi|_{\mathsf{A}_n}$. It follows that $\varphi^{st}$ contains $(\psi^t)|_{\mathsf{A}_n}$, and so $\varphi^{st}$ contains $\mathrm{Irr}(\mathsf{A}_n)$.
\smallskip (v) Finally, consider any integer $k \geq Cn\log(n)/\log(\varphi(1))$ with $C=5C_1^2$. Then $$k/t \geq k/3C_1 \geq (5/3)C_1n\log(n)/\log(\varphi(1)).$$ As $C_1\geq 1$ and $n\log(n)/\log(\varphi(1)) \geq 2$, we have that $$(5/3-10/9)C_1n\log(n)/\log(\varphi(1)) \geq 10/9.$$ In particular, we can find an integer $s_0$ such that $$k/t \geq s_0 \geq (10/9)C_1n\log(n)/\log(\varphi(1)).$$ As $s_0$ satisfies \eqref{eq:a3}, the result of (iv) shows that $\varphi^{s_0t}$ contains $\mathrm{Irr}(\mathsf{A}_n)$. Now, given any $\gamma \in \mathrm{Irr}(\mathsf{A}_n)$, we can find an irreducible constituent $\delta$ of $\varphi^{k-s_0t}\overline\gamma$. By the previous result, $\varphi^{s_0t}$ contains $\overline\delta$. It follows that $\varphi^k$ contains $\varphi^{k-s_0t}\overline\delta$, and $$[\varphi^{k-s_0t}\overline\delta,\gamma]_{\mathsf{A}_n}= [\varphi^{k-s_0t}\overline\gamma,\delta]_{\mathsf{A}_n} \geq 1,$$ i.e. $\varphi^k$ contains $\gamma$, and the proof of \eqref{eq:main2} is completed. \end{proof} \section{Products of characters}\label{pfth3} \subsection{Products of characters in classical groups} This is very similar to the proof of Theorem 2 of \cite{LST}. Let $G = G_r(q)$ be a simple group of Lie type of rank $r$ over $\mathbb F_q$. \begin{lem}\label{stdiam} There is an absolute constant $D$ such that for any $m\ge Dr^2$ and any $\chi_1,\ldots,\chi_m \in {\rm Irr}(G)$, we have $[\prod_1^m\chi_i,\mathsf{St}]_G\ne 0$. Indeed, $D=163$ suffices. \end{lem} \begin{proof} This is proved exactly as for \cite[Lemma 2.3]{LST}, replacing the power $\chi^m$ by the product $\prod_1^m\chi_i$. \end{proof} \begin{proof}[Proof of Theorem \ref{rodsax}(i)] Take $c_1=3D$ with $D$ as in the lemma, and let $\chi_1,\ldots ,\chi_l \in {\rm Irr}(G)$ with $l=c_1r^2$. Writing $m=l/3 = Dr^2$, Lemma \ref{stdiam} shows that each of the products $\prod_1^m\chi_i$, $\prod_{m+1}^{2m}\chi_i$ and $\prod_{2m+1}^{3m}\chi_i$ contains $\mathsf{St}$.
Hence $\prod_1^l\chi_i$ contains $\mathsf{St}^3$, and this contains ${\rm Irr}(G)$ by \cite[Prop. 2.1]{LST}. This completes the proof. \end{proof} \subsection{Products of characters in linear and unitary groups} This is similar to the proof of Theorem 3 of \cite{LST}. Let $G = \mathrm {PSL}_n^\epsilon(q)$. We shall need \cite[Theorem 3.1]{LST}, which states that there is a function $f:\mathbb N\to \mathbb N$ such that for any $g \in G_{\mathrm {ss}}$ with $s = \nu(g)$, and any $\chi \in {\rm Irr}(G)$, we have \begin{equation}\label{31lst} |\chi(g)| < f(n)\chi(1)^{1-\frac{s}{n}}. \end{equation} Again we begin with a lemma involving the Steinberg character. \begin{lem}\label{ste} Let $m\in \mathbb N$ and let $\chi_1,\ldots,\chi_m \in {\rm Irr}(G)$. Set $c=44.1$, and define \[ \begin{array}{l} \Delta_{1m} = cf(n)^m \sum_{1\le s <n/2} q^{ns+\frac{3n}{2}-1}\left(\prod_1^m\chi_i(1)\right)^{-s/n},\\ \Delta_{2m} = f(n)^m \sum_{n/2\le s<n}q^{n^2-\frac{1}{2}n(s-1)-1}\left(\prod_1^m\chi_i(1)\right)^{-s/n}. \end{array} \] If $\Delta_{1m}+\Delta_{2m}<1$, then $[\prod_1^m\chi_i,\,\mathsf{St}]_G \ne 0$. \end{lem} \begin{proof} Arguing as in the proof of \cite[Lemma 3.3]{LST}, we see that $[\prod_1^m\chi_i,\,\mathsf{St}]_G \ne 0$ provided $\Delta_m <1$, where \[ \Delta_m := \sum_{1 \leq s < n/2} cq^{ns+\frac{3n}{2}-1}\left|\prod_1^m\frac{\chi_i(g_{i,s})}{\chi_i(1)}\right| + \sum_{n/2 \leq s < n} q^{n^2-\frac{1}{2}n(s-1)-1}\left|\prod_1^m\frac{\chi_i(g_{i,s})}{\chi_i(1)}\right|, \] where $g_{i,s} \in G_{\mathrm {ss}}$ is chosen such that $\nu(g_{i,s})=s$ and $|\chi_i(g_{i,s})|$ is maximal. Now application of (\ref{31lst}) gives the conclusion. \end{proof} \begin{lem}\label{better} There is a function $g:\mathbb N\to \mathbb N$ such that the following holds. Suppose that $\chi_1,\ldots,\chi_m \in {\rm Irr}(G)$ satisfy $\prod_1^m \chi_i(1) > |G|^3$. Then provided $q>g(n)$, we have $[\prod_1^m\chi_i,\,\mathsf{St}]_G \ne 0$.
\end{lem} \begin{proof} We have $|G|>\frac{1}{2}q^{n^2-2}$, so for $s<n$, \[ \left(\prod_1^m\chi_i(1)\right)^{-s/n} < 8q^{-3ns+\frac{6s}{n}}. \] Hence \[ \Delta_{1m} \le 8cf(n)^m \sum_{1\le s <n/2} q^{-2ns+\frac{3n}{2}+2}, \] and \[ \begin{array}{ll} \Delta_{2m} & \le 8f(n)^m \sum_{n/2\le s<n}q^{n^2-\frac{1}{2}n(s-1)-1} q^{-3ns+6} \\ & \le 8f(n)^m \sum_{n/2\le s<n}q^{-\frac{3n^2}{4}+\frac{1}{2}n+5}. \end{array} \] Now the conclusion follows from Lemma \ref{ste} (using some slight refinements of the above inequalities for $n\le 4$). \end{proof} \begin{proof}[Proof of Theorem \ref{rodsax}(ii)] Assume $\chi_1,\ldots,\chi_l \in {\rm Irr}(G)$ satisfy $\prod_1^l \chi_i(1) > |G|^{10}$. Since $\chi_i(1) < |G|^{1/2}$ for all $i$, there are disjoint subsets $I_1,I_2,I_3$ of $\{1,\ldots ,l\}$ such that $\prod_{i\in I_k} \chi_i(1) > |G|^3$ for $k=1,2,3$. Then $\prod_{i\in I_k} \chi_i$ contains $\mathsf{St}$ for each $k$, by Lemma \ref{better}, and so $\prod_1^l\chi_i$ contains $\mathsf{St}^3$, hence contains ${\rm Irr}(G)$, completing the proof. \end{proof} \subsection{Products of characters in symmetric and alternating groups} \begin{prop}\label{rs2-an} Let $G \in \{\mathsf{S}_n,\mathsf{A}_n\}$, $l \in \mathbb{Z}_{\geq 1}$, and let $\chi_1,\chi_2, \ldots,\chi_l \in \mathrm{Irr}(G)$ with $\chi_i(1) > 1$ for all $i$. \begin{enumerate}[\rm(i)] \item If $l \geq 8n-11$, then $\bigl(\prod^l_{i=1}\chi_i\bigr)^{2}$ contains $\mathrm{Irr}(G)$. \item Suppose that, for each $1 \leq i \leq l$, there exists some $j \neq i$ such that $\chi_j = \chi_i$. If $l \geq 24n-33$ then $\prod^l_{i=1}\chi_i$ contains $\mathrm{Irr}(G)$. \end{enumerate} \end{prop} \begin{proof} (i) Let $\chi^\lambda$ denote the irreducible character of $\mathsf{S}_n$ labeled by the partition $\lambda \vdash n$.
A key result established in the proof of \cite[Theorem 5]{LST} is that, for any $i$ there exists $$\alpha_i \in \left\{\chi^{(n-1,1)},\chi^{(n-2,2)},\chi^{(n-2,1^2)},\chi^{(n-3,3)}\right\}$$ such that $\chi_i^2$ contains $(\alpha_i)|_G$. Since $l \geq 8n-11$, there must be some $$\beta \in \left\{\chi^{(n-1,1)},\chi^{(n-2,2)},\chi^{(n-2,1^2)},\chi^{(n-3,3)}\right\}$$ such that $\beta=\alpha_i$ for at least $2n-2$ distinct values of $i$. It follows that $\bigl(\prod^l_{i=1}\chi_i\bigr)^{2}$ contains $\bar{g}\delta$, where $\bar{g} := \beta^{2n-2}|_G$, and $\delta$ is a character of $G$. By \cite[Theorem 5]{LST}, $\beta^{2n-2}$ contains $\mathrm{Irr}(\mathsf{S}_n)$, whence $\bar{g}$ contains $\mathrm{Irr}(G)$. Now the arguments in the last paragraph of the proof of Theorem \ref{main2} show that $\bar{g}\delta$ contains $\mathrm{Irr}(G)$ as well. \smallskip (ii) Note that the assumptions imply, after a suitable relabeling, that $\prod^l_{i=1}\chi_i$ contains $\sigma\lambda$, where $\lambda$ is a character of $G$ and $$\sigma= \prod^{8n-11}_{i=1}\chi_i^2.$$ (Indeed, any subproduct $\chi_{i_1}\ldots\chi_{i_t}$ with $t> 1$ and $\chi_{i_1}=\ldots =\chi_{i_t}$ yields a term $(\chi_{i_1}^2)^{\lfloor t/2 \rfloor}$ in $\sigma$.) By (i), $\sigma$ contains $\mathrm{Irr}(G)$, and so we are done as above. \end{proof}
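The subset extraction in the proof of Theorem \ref{rodsax}(ii) is a simple greedy argument: each degree lies below $|G|^{1/2}$ while the total product exceeds $|G|^{10}$, so filling three groups in turn until each crosses $|G|^3$ must succeed. A sketch in Python, working with logarithms of degrees (the function and variable names are ours, not from \cite{LST}):

```python
def three_heavy_groups(log_degrees, L):
    """Split indices into three disjoint groups, each with log-degree sum > 3*L.
    Preconditions (from the proof): every entry is < L/2, and the total is > 10*L.
    Groups 1 and 2 are filled greedily until they cross 3*L; each then weighs
    at most 3*L + L/2, so the remainder still exceeds 10*L - 7*L = 3*L."""
    groups, current, weight = [], [], 0.0
    for i, x in enumerate(log_degrees):
        current.append(i)
        weight += x
        if weight > 3 * L and len(groups) < 2:
            groups.append(current)
            current, weight = [], 0.0
    groups.append(current)  # group 3: everything left over
    return groups

# toy check: 30 "degrees" of log-size 0.4*L each (< L/2), total 12*L > 10*L
L = 1.0
logs = [0.4] * 30
for g in three_heavy_groups(logs, L):
    assert sum(logs[i] for i in g) > 3 * L
```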
https://arxiv.org/abs/2007.10530
McKay graphs for alternating and classical groups
Let $G$ be a finite group, and $\alpha$ a nontrivial character of $G$. The McKay graph $\mathcal{M}(G,\alpha)$ has the irreducible characters of $G$ as vertices, with an edge from $\chi_1$ to $\chi_2$ if $\chi_2$ is a constituent of $\alpha\chi_1$. We study the diameters of McKay graphs for finite simple groups $G$. For alternating groups, we prove a conjecture made in [LST]: there is an absolute constant $C$ such that $\hbox{diam}\,{\mathcal M}(G,\alpha) \le C\frac{\log |\mathsf{A}_n|}{\log \alpha(1)}$ for all nontrivial irreducible characters $\alpha$ of $\mathsf{A}_n$. Also for classical groups of symplectic or orthogonal type of rank $r$, we establish a linear upper bound $Cr$ on the diameters of all nontrivial McKay graphs.
https://arxiv.org/abs/1405.0984
Differential geometry via infinitesimal displacements
We present a new formulation of some basic differential geometric notions on a smooth manifold M, in the setting of nonstandard analysis. In place of classical vector fields, for which one needs to construct the tangent bundle of M, we define a prevector field, which is an internal map from *M to itself, implementing the intuitive notion of vectors as infinitesimal displacements. We introduce regularity conditions for prevector fields, defined by finite differences, thus purely combinatorial conditions involving no analysis. These conditions replace the more elaborate analytic regularity conditions appearing in previous similar approaches, e.g. by Stroyan and Luxemburg or Lutz and Goze. We define the flow of a prevector field by hyperfinite iteration of the given prevector field, in the spirit of Euler's method. We define the Lie bracket of two prevector fields by appropriate iteration of their commutator. We study the properties of flows and Lie brackets, particularly in relation with our proposed regularity conditions. We present several simple applications to the classical setting, such as bounds related to the flow of vector fields, analysis of small oscillations of a pendulum, and an instance of Frobenius' Theorem regarding the complete integrability of independent vector fields.
\section{Introduction}\label{intro} We develop foundations for differential geometry on smooth manifolds, based on infinitesimals, where vectors and vector fields are represented by infinitesimal displacements in the manifold itself, as they were thought of historically. Such an approach was previously introduced e.g. by Stroyan and Luxemburg in \cite{stl}, and by Lutz and Goze in \cite{lg}. For such an approach to work, one needs to assume some regularity condition on the infinitesimal displacement maps used to represent vector fields, in place of the smoothness properties appearing in the classical setting. The various regularity conditions chosen in existing sources seem rather elaborate, and overly tied to classical analytic notions. In the present work we introduce natural and easily verifiable regularity conditions, defined by finite differences. We show that these weak regularity conditions are sufficient for defining notions such as the flow and Lie bracket of vector fields. In more detail, we would like to study vectors and vector fields in a smooth manifold $M$ while bypassing its tangent bundle. We would like to think of a vector based at a point $a$ in $M$ as a little arrow in $M$ itself, whose tail is $a$ and whose head is a nearby point $x$ in $M$. The notions ``little'' and ``nearby'' can be formalized in the hyperreal framework, i.e. nonstandard analysis, where \emph{infinitesimal} quantities are available. We aim to demonstrate that various differential geometric concepts and various proofs become simpler and more transparent in this framework. For good introductions to nonstandard analysis see e.g. Albeverio et al \cite{a}, Goldblatt \cite{g}, Gordon, Kusraev and Kutateladze \cite{GKK}, Keisler \cite{k}, Loeb and Wolff \cite{lw}, V\"{a}th \cite{v}. For an advanced study of axiomatic treatments see Kanovei and Reeken \cite{kr}. For a historical perspective see Bascelli et al \cite{bb}.
For additional previous applications of nonstandard analysis to differential geometry see Almeida, Neves and Stroyan \cite{ans} and references therein. For application of nonstandard analysis to the solution of Hilbert's fifth problem see Tao \cite{tao} and references therein. Nonstandard analysis was initiated by Robinson~\cite{rob}. Infinitesimal quantities may themselves be infinitely large or infinitely small compared to one another, so the key to our application of infinitesimal quantities in differential geometry is to fix a positive infinitesimal hyperreal number ${\lambda}$ once and for all, which will fix the scale of our constructions. We then define a \emph{prevector} based at a nearstandard point $a$ of ${}^* \! M$ to be a pair of points $(a,x)$ in ${}^* \! M$ for which the distance between $a$ and $x$ is not infinitely large compared to ${\lambda}$. Two prevectors $(a,x)$, $(a,y)$ based at $a$ are termed equivalent if the distance between $x$ and $y$ is infinitely small compared to ${\lambda}$. Note that the notion of the distance in ${}^* \! M$ being infinitely large or infinitely small compared to ${\lambda}$ does not require a metric on $M$; it is intrinsic to the differentiable structure of $M$. We next define a prevector \emph{field} to be an internal map $F:{}^* \! M \to {}^* \! M$ such that for every nearstandard point $a$ in ${}^* \! M$, the pair $(a,F(a))$ is a prevector at $a$. The requirement that the map $F$ be internal is crucial e.g. for hyperfinite iteration, internal induction, and the internal definition principle. Two prevector fields $F,G$ are equivalent if for every nearstandard $a$ in ${}^* \! M$, the pairs $(a,F(a))$ and $(a,G(a))$ are equivalent prevectors. As already mentioned, in place of the hierarchy $C^k$ of smoothness appearing in the classical setting of vector fields, we introduce a hierarchy $D^k$ of weaker regularity conditions for prevector fields, defined by finite differences. 
We show that these weaker regularity conditions are sufficient for defining notions such as the flow of a prevector field and the Lie bracket of two prevector fields. We use our flow to show that a canonical representative can be chosen from every equivalence class of local prevector fields, and that every classical vector field on $M$ can be realized by a prevector field on ${}^* \! M$ whose values on the nearstandard part of ${}^* \! M$ are canonically prescribed. For this last statement, in case the manifold is non compact, we will need to assume that our nonstandard extension is countably saturated. (The notion of countable saturation will be explained in Section~\protect\ref{rl}.) The framework we propose suggests various possibilities for further investigation. For example, one could seek to formulate notions corresponding to the Poincar\'{e}-Hopf Theorem regarding indices of zeros of vector fields, in terms of infinitesimal displacements. Another example is proving Frobenius' Theorem in the spirit of our proof of Theorem~\protect\ref{comm} and Classical~Corollary~\protect\ref{cc4}, characterizing when $k$ vector fields are the first $k$ coordinate vector fields for some choice of coordinates. This can be thought of as an instance of Frobenius' Theorem, with stronger assumption and stronger conclusion. A very different alternative approach to the foundations of differential geometry is that of Synthetic Differential Geometry, introduced by Lawvere and others, see e.g. Kock \cite{ko}. It relies on category-theoretic concepts and intuitionistic logic, which are not needed for our approach. To the extent that our hierarchy $D^k$ of regularity classes is formulated in terms of finite differences and thus avoids classical \emph{analytic} notions, our approach can also be characterized as \emph{synthetic} differential geometry. The structure of the paper is as follows. 
In Section~\protect\ref{tv} we define prevectors, and explain their relation to classical tangent vectors. We define such notions as the action of a prevector on a smooth function, and the differential of a smooth map from one smooth manifold to another. In Section~\protect\ref{vf} we define local and global prevector fields. We define the $D^k$ regularity property of prevector fields. The property $D^k$ is defined via coordinates by a finite difference condition. We show how a local classical vector field induces a local prevector field, and show that if the classical vector field is $C^k$ then the induced local prevector field is $D^k$ (Proposition~\protect\ref{6}). We then show the following, which though rather simple for $D^1$ turns out somewhat involved for $D^2$. \begin{thm} The definition of $D^1$ and $D^2$ prevector fields is independent of the choice of coordinates (Propositions \protect\ref{5}, \protect\ref{4}). \end{thm} We further show that $D^2$ implies $D^1$ (Proposition~\protect\ref{3}). In Section~\protect\ref{global} we show that a global $D^1$ prevector field is bijective on the nearstandard part of $M$ (Theorem~\protect\ref{bij}). In Section~\protect\ref{f} we define the flow of a prevector field by hyperfinite iteration of the given prevector field. It is a generalization of the Euler approximation for the flow appearing e.g. in Keisler \cite[p.~162]{k}, as well as Stroyan and Luxemburg \cite[p.~128]{stl} and Lutz and Goze \cite[p.~115]{lg}. Using straightforward internal induction we prove the following. \begin{thm} The flow of a $D^1$ prevector field remains in a bounded region for some appreciable interval of time. The growth of the distance between two points moving under the flow is bounded above and below after time $t$ by factors of the form $e^{\pm K t}$ (Theorem~\protect\ref{flow1}). The difference between the flows of different prevector fields is bounded above by a function of the form $\beta t e^{K t}$ (Theorem~\protect\ref{flow2}). 
\end{thm} We note that this implies corresponding bounds for the flow of classical vector fields (Classical~Corollary~\protect\ref{cc1}). We use the flow of prevector fields to show the following. \begin{thm} A canonical representative can be chosen from each equivalence class of local prevector fields which contains a $D^1$ (resp. $D^2$) prevector field, and this representative is itself $D^1$ (resp. $D^2$). \end{thm} This is done roughly as follows. Given a $D^1$ (resp. $D^2$) local prevector field $F$, its flow in ${}^* \! M$ induces a standard local flow in $M$, which is then extended back to ${}^* \! M$ and evaluated at time $t={\lambda}$, producing a new prevector field ${\widetilde{F}}$. Different representatives $F$ of the given equivalence class induce the same standard flow (Theorem~\protect\ref{hft}) and so ${\widetilde{F}}$ is indeed canonically chosen. We then need to show that ${\widetilde{F}}$ is in fact equivalent to the original $F$ one started with (Theorem~\protect\ref{tld}), and that if $F$ is $D^1$ or $D^2$ then the same holds for ${\widetilde{F}}$ (Propositions \protect\ref{111}, \protect\ref{112}). As an example of an application of our results on flows we analyze oscillations of a pendulum with infinitesimal amplitude (Section~\protect\ref{secpend}). In Section~\protect\ref{rl} we show the following. \begin{thm} Every \emph{global} classical $C^1$ (resp. $C^2$) vector field can be realized by a \emph{global} $D^1$ (resp. $D^2$) prevector field, whose values on the nearstandard part of ${}^* \! M$ are canonically prescribed (Theorem~\protect\ref{gl}). \end{thm} This involves techniques similar to those mentioned above in relation to the construction of ${\widetilde{F}}$, and additionally, for a vector field which does not have compact support, our assumption of countable saturation is used.
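At standard scale, the hyperfinite iteration behind these flow bounds is just Euler's method with step ${\lambda}$, and the $e^{\pm Kt}$ factors come from the per-step estimate $(1+{\lambda} K)^{t/{\lambda}} \leq e^{Kt}$ for a field with Lipschitz constant $K$. A standard-world numerical sketch (Python; the field $f(x)=\sin x$, with $K=1$, is our illustrative choice, not an example from the text):

```python
import math

def euler_flow(f, x0, step, n_steps):
    """Iterate x -> x + step*f(x): the standard-scale analogue of iterating
    a prevector field F(x) = x + lam*f(x) a hyperfinite number of times."""
    x = x0
    for _ in range(n_steps):
        x = x + step * f(x)
    return x

f = math.sin          # Lipschitz constant K = 1
step, t = 1e-4, 1.0   # step plays the role of the fixed infinitesimal lam
n = round(t / step)

a = euler_flow(f, 0.30, step, n)
b = euler_flow(f, 0.31, step, n)

# Per-step growth of |a - b| is at most a factor (1 + step*K), so after
# time t the separation is at most e^{K t} times the initial separation.
assert abs(a - b) <= math.exp(1.0 * t) * abs(0.31 - 0.30) * (1 + 1e-9)
```

The matching lower bound $e^{-Kt}$ is obtained the same way from the per-step factor $(1-{\lambda} K)$.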
In Section~\protect\ref{lbr} we define the Lie bracket of two prevector fields, and relate it to the classical Lie bracket of classical vector fields (Theorem~\protect\ref{rb}). We show the following. \begin{thm} The Lie bracket of two $D^1$ prevector fields is itself a prevector field (Theorem~\protect\ref{7}). The Lie bracket of two $D^2$ prevector fields is $D^1$ (Theorem~\protect\ref{9}). The Lie bracket is well defined on equivalence classes of $D^2$ prevector fields (Theorem~\protect\ref{ld2}), and this is not the case for prevector fields that are merely $D^1$ (Example~\protect\ref{e4}). \end{thm} We show that the Lie bracket of two $D^2$ prevector fields is equivalent to the identity prevector field if and only if their local standard flows commute (Theorem~\protect\ref{comm}). We note that this implies the classical result that the flows of two vector fields commute if and only if their Lie bracket vanishes (Classical~Corollary~\protect\ref{cc4}). We would like to thank Thomas McGaffey for guiding us to the existing literature on nonstandard analysis approaches to differential geometry. \section{Prevectors}\label{tv} For our analysis we will need to compare different infinitesimal quantities. It is helpful to introduce the following relations. \begin{dfn}\label{dd1} For $r,s \in {}^*{\mathbb R}$, we will write $r \prec s$ if $r=as$ for finite $a$, and will write $r \prec\prec s$ if $r=as$ for infinitesimal $a$. \end{dfn} Thus $r\prec 1$ means that $r$ is finite, and $r \prec\prec 1$ means that $r$ is infinitesimal. Given a finite dimensional vector space $V$ over ${\mathbb R}$, and given $v \in {}^* V$, and $s\in {}^*{\mathbb R}$, we will write $v\prec s$ if ${}^*\|v\| \prec s$ for some norm $\|\cdot\|$ on $V$. We will generally omit the $*$ from function symbols and so will simply write $\|v\| \prec s$. This condition is independent of the choice of norm since all norms on $V$ are equivalent. 
Similarly we will write $v\prec\prec s$ if $\|v\| \prec\prec s$. We will also write $v \approx w$ when $v-w \prec\prec 1$. If one chooses a basis for $V$ thus identifying it with ${\mathbb R}^n$, then $x=(x_1,\dots,x_n)\in{}^*{\mathbb R}^n$ satisfies $x\prec s$ or $x\prec\prec s$ if and only if each $x_i$ satisfies this, since this is clear in, say, the Euclidean norm on ${\mathbb R}^n$. Given $0 < s\in{}^*{\mathbb R}$ let $V^s_F=\{v\in {}^* V:v \prec s\}$ and $V^s_I=\{v\in {}^* V:v \prec\prec s\}$. Then $V^s_I \subseteq V^s_F \subseteq {}^* V$ are linear subspaces over ${\mathbb R}$, and $V^s_F / V^s_I \cong V$, since it is well known that $V^1_F /V^1_I \cong V$, and multiplication by $s$ maps $V^1_F$ onto $V^s_F$ and $V^1_I$ onto $V^s_I$. Our object of interest is a smooth manifold $M$. For $p \in M$, the halo of $p$, which we denote by ${\mathfrak{h}}(p)$, is the set of all points $x$ in the nonstandard extension ${}^* \! M$ of $M$, for which there is a coordinate neighborhood $U$ of $p$ such that $x\in{}^* U$ and $x \approx p$ in the given coordinates. In fact, the definition of ${\mathfrak{h}}(p)$ does not require coordinates, but rather depends only on the \emph{topology} of $M$. \footnote{For a topological space $X$ and $p \in X$ let $N_p$ be the set of all open neighborhoods of $p$ in $X$. Then the halo (or monad) of $p$ is defined as ${\mathfrak{h}} (p)= \bigcap_{U \in N_p} {}^* U$.} The points of $M$ are called \emph{standard}, and a point which is in ${\mathfrak{h}}(p)$ for some $p\in M$ is called \emph{nearstandard}. If $a$ is nearstandard then the standard part (or shadow) of $a$, denoted $st(a)$, is the unique $p \in M$ such that $a \in {\mathfrak{h}}(p)$. \begin{dfn}\label{dd2} For $A\subseteq M$, the halo of $A$ is ${ {}^{\mathfrak{h}} } \! A = \bigcup_{a\in A} {\mathfrak{h}}(a)$. \end{dfn} In particular, ${{}^{\mathfrak{h}} \! {M}}$ is the set of all nearstandard points in ${}^* \! M$. If $A\subseteq M$ is open in $M$ then ${ {}^{\mathfrak{h}} }\! 
A \subseteq {}^*\! A$, and if it is compact then $ {}^*\! A \subseteq { {}^{\mathfrak{h}} }\! A $. In particular, if $M$ is compact then ${{}^{\mathfrak{h}} \! {M}} = {}^* \! M$. If $M$ is noncompact then ${{}^{\mathfrak{h}} \! {M}}$ is an external set. \footnote{The converse of the statements in this paragraph also hold if we assume that our nonstandard extension satisfies countable saturation, and using the fact that $M$ has a countable basis. See e.g. Albeverio et al \cite[Section 2.1]{a}} Much of our analysis will be local, so given an open $W \subseteq {\mathbb R}^n$ and a smooth function $f:W \to {\mathbb R}$ we note some properties of the extension ${}^*\! f : {}^* W \to {}^* {\mathbb R}$, obtained by transfer. When there is no risk of confusion, we will omit the $*$ from the function symbol ${}^*\! f$ and simply write $f$ for both the original function and its extension. \begin{lemma}\label{ff} For open $W\subseteq{{\mathbb R}^n}$, let $f:W \to{\mathbb R}$ be continuous. Then $f(a)$ is finite for every $a \in { {}^{\mathfrak{h}} } W$. \end{lemma} \begin{pf} Given $a \in { {}^{\mathfrak{h}} } W$, let $U$ be a neighborhood of $st(a)$ such that $\overline{U} \subseteq W$ and $\overline{U}$ is compact. So there is $C \in {\mathbb R}$ such that $|f(x)| \leq C$ for all $x \in U$. By transfer $|f(x)| \leq C$ for all $x \in {}^* U$, in particular $|f(a)| \leq C$, so $f(a)$ is finite. (As for functions, we omit the $*$ from relation symbols, writing $\leq$ in place of ${}^*\!\!\leq$.) \end{pf} Given an open $U \subseteq {\mathbb R}^n$ and a smooth function $f:U \to {\mathbb R}$, the partial derivatives of ${}^*\! f$ are by definition the functions ${}^*\big( {{\partial} f \over {\partial} x_i}\big)$, i.e. the extensions of the partial derivatives of $f$. So, one has a row vector $D_a$ of partial derivatives at any point $a\in{}^* U$ if $f$ is $C^1$, and similarly a Hessian matrix $H_a$ of the second partial derivatives at every $a \in{}^* U$, in case $f$ is $C^2$. 
By Lemma~\protect\ref{ff} $D_a$ and $H_a$ are finite throughout ${ {}^{\mathfrak{h}} } U$. We state the following properties of $D_a$ and $H_a$ as three remarks for future reference. \begin{remark}\label{dh1} Let $a,b \in {{}^{\mathfrak{h}} {U}}$ with $a\approx b$, then the interval between $a$ and $b$ is included in ${{}^{\mathfrak{h}} {U}} \subseteq {}^* U$. By transfer of the mean value theorem, if $f$ is $C^1$ then $f(b) - f(a) = D_x(b-a)$ for some $x$ in the interval between $a$ and $b$. Since the partial derivatives are continuous, we have by the characterization of continuity via infinitesimals \footnote{$g$ is continuous at $a$ if and only if ${}^*\! g({\mathfrak{h}}(a)) \subseteq {\mathfrak{h}}(g(a))$.} that $D_x - D_y \prec\prec 1$ for any $y \approx a$ (e.g. $y=st(a)$), so writing $$f(b)-f(a) = D_y(b-a) + (D_x-D_y)(b-a),$$ we see that $f(b)-f(a) - D_y(b-a) \prec\prec \|b-a\|$. Furthermore, if the first partial derivatives are Lipschitz in some neighborhood (e.g. if $f$ is $C^2$), and we are given a constant $\beta\prec\prec 1$ such that $\|x - y \| \prec \beta$ for all $x$ in the interval between $a$ and $b$, then we have the stronger condition $f(b)-f(a) - D_y(b-a) \prec \beta\|b-a\|$. \end{remark} \begin{remark}\label{dh2} If $f$ is $C^2$ then by transfer of the Taylor approximation theorem we have $f(b)-f(a) = D_a(b-a) +{1\over 2} (b-a)^tH_x(b-a)$ for some $x$ in the interval between $a$ and $b$, and remarks similar to those we have made regarding $D_x - D_y$ apply to $H_x - H_y$. \end{remark} \begin{remark}\label{dh3} If ${\varphi} = ( {\varphi}^i ) : U \to {\mathbb R}^n$ then the $n$ rows $D^i_a$ corresponding to ${\varphi}^i$ form the Jacobian matrix $J_a$ of ${\varphi}$ at $a$. By applying the above considerations to each ${\varphi}^i$ we obtain that ${\varphi}(b)-{\varphi}(a) - J_y(b-a) \prec\prec \|b-a\|$, or if all partial derivatives are Lipschitz in some neighborhood (e.g. 
if ${\varphi}$ is $C^2$) then ${\varphi}(b)-{\varphi}(a) - J_y(b-a) \prec \beta \|b-a\|$ with $\beta$ as above. \end{remark} Now choose a positive infinitesimal ${\lambda} \in {}^*{{\mathbb R}}$, and \emph{fix it once and for all}. \begin{dfn}\label{p} Given $a \in {{}^{\mathfrak{h}} \! {M}}$, a prevector based at $a$ is a pair $(a,x)$, $x \in {{}^{\mathfrak{h}} \! {M}}$, such that for every smooth $f:M \to {\mathbb R}$, $f(x) - f(a) \prec {\lambda}$. Equivalently, given coordinates in a neighborhood $W$ of $st(a)$ in $M$, whose image is $U \subseteq {\mathbb R}^n$, and $\hat{a},\hat{x} \in {}^* U$ are the coordinates for $a$ and $x$, then $(a,x)$ is a prevector based at $a$ if $\hat{x} - \hat{a} \prec {\lambda}$, where the difference $\hat{x} - \hat{a}$ is defined in ${}^*{\mathbb R}^n \supseteq {}^* U$. \end{dfn} We show that the two definitions are indeed equivalent. Assume the first definition, and let $x_1,\dots,x_n$ be the chosen coordinate functions. Since each $x_i$ is smooth, we get $ x_i(x)- x_i(a) \prec {\lambda}$ for each $i$, i.e. $\hat{x} - \hat{a} \prec {\lambda}$. Conversely, assume the second definition holds, and let $f:M \to {\mathbb R}$ be a smooth function. Then $ f(x) - f(a) = D_c(\hat{x}-\hat{a})$ for some $c$ in the interval between $\hat{a}$ and $\hat{x}$, and so $f(x)-f(a) \prec \| \hat{x} - \hat{a}\| \prec {\lambda}$. (The components of $D_c$ are finite by Lemma~\protect\ref{ff}.) We denote by $P_a=P_a(M)$ the set of prevectors based at $a$. \begin{dfn}\label{dd3} We define an equivalence relation $\equiv$ on $P_a$ as follows: $(a,x)\equiv (a,y)$ if $ f(y) - f(x) \prec\prec{\lambda}$ for every smooth $f:M \to {\mathbb R}$, or equivalently, if in coordinates as above, $\hat{y}-\hat{x} \prec\prec{\lambda}$. \end{dfn} The equivalence of the two definitions follows by the same argument as above. Since the relation $(a,x)\equiv (a,y)$ depends only on $x,y$, we will also simply write $x \equiv y$. 
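As a concrete illustration of Definition~\protect\ref{p} and of the equivalence relation $\equiv$ (the specific points below are our own example, taken in the single chart $M={\mathbb R}$ with the identity coordinate):

```latex
% Example (ours): in M = \mathbb{R} with the identity chart, fix a standard a and set
\[
x = a + {\lambda}, \qquad y = a + {\lambda} + {\lambda}^2, \qquad z = a + \sqrt{{\lambda}}.
\]
% (a,x) and (a,y) are prevectors: \hat{x}-\hat{a} = {\lambda} \prec {\lambda} and
% \hat{y}-\hat{a} = (1+{\lambda}){\lambda} \prec {\lambda}. They are equivalent,
% since \hat{y}-\hat{x} = {\lambda}^2 = {\lambda}\cdot{\lambda} \prec\prec {\lambda}.
% By contrast, (a,z) is not a prevector: \hat{z}-\hat{a} = \sqrt{{\lambda}}
% = (1/\sqrt{{\lambda}})\,{\lambda}, and 1/\sqrt{{\lambda}} is infinite.
```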
We denote the set of equivalence classes $P_a/\!\!\equiv$ by $T_a=T_a(M)$. In the spirit of physics notation, the equivalence class of $(a,x) \in P_a$ will be denoted $\overrightarrow{ax}$. Given $a \in {{}^{\mathfrak{h}} \! {M}}$ let $W$ be a coordinate neighborhood of $st(a)$ in $M$ with image $U \subseteq{\mathbb R}^n$. We can identify $P_a$ with $({\mathbb R}^n)^{\lambda}_F$ via $(a,x) \mapsto \hat{x}-\hat{a}$. This induces an identification of $T_a = P_a/\!\!\equiv$ with ${\mathbb R}^n = ({\mathbb R}^n)^{\lambda}_F/({\mathbb R}^n)^{\lambda}_I$. Under this identification $T_a$ inherits the structure of a vector space over ${\mathbb R}$. If we choose different coordinates in a neighborhood of $st(a)$ with image $U' \subseteq {\mathbb R}^n$, and $\varphi:U \to U'$ is the change of coordinates, then by Remark~\protect\ref{dh3} $\varphi(\hat{x}) - \varphi(\hat{a}) - J_{st(a)}(\hat{x} - \hat{a}) \prec\prec \|\hat{x} - \hat{a}\|\prec {\lambda}$. This means that the map ${\mathbb R}^n \to {\mathbb R}^n$ induced by the two identifications of $T_a$ with ${\mathbb R}^n$ provided by the two coordinate maps is given by multiplication by the matrix $J_{st(a)}$, and so is linear. Thus the vector space structure induced on $T_a$ via coordinates is independent of the choice of coordinates, and so we have a well defined vector space structure on $T_a$ over ${\mathbb R}$. Note that the object $T_a$ is a mixture of standard and nonstandard notions. It is a vector space over ${\mathbb R}$ rather than ${}^*{\mathbb R}$, but defined at every $a\in {{}^{\mathfrak{h}} \! {M}}$. If $a,b \in {{}^{\mathfrak{h}} \! {M}}$ and $a\approx b$, then given coordinates in a neighborhood of $st(a)$, the identifications of $T_a$ and $T_b$ with ${\mathbb R}^n$ induced by these coordinates induce an identification between $T_a$ and $T_b$. 
Given a different choice of coordinates, the matrix $J_{st(a)}$ used in the previous paragraph is the same matrix for $a$ and $b$, and so the identification of $T_a$ with $T_b$ is well defined, independent of a choice of coordinates. Thus when $a \approx b \in {{}^{\mathfrak{h}} \! {M}}$ we may unambiguously add a vector $\overrightarrow{a \ x} \in T_a$ to a vector $\overrightarrow{b \ y} \in T_b$. \begin{dfn}\label{dd4} A prevector $(a,x) \in P_a$ acts on a smooth function $f:M \to {\mathbb R}$ as follows: $$(a,x)f = {1\over{\lambda}}( f(x)- f(a)),$$ which is finite by definition of prevector. \end{dfn} This induces a differentiation of $f:M \to {\mathbb R}$ by a vector $\overrightarrow{a \ x} \in T_a$ as follows: $\overrightarrow{a \ x} \ f = st((a,x)f)$. The action $\overrightarrow{a \ x} \ f$ is well defined by definition of the equivalence relation $\equiv$. Note our mixture again: $\overrightarrow{a \ x}$ is a nonstandard object based at the nonstandard point $a$, but it assigns a standard real number to the standard function $f$. The action $(a,x)f$ satisfies the Leibniz rule up to infinitesimals, indeed: \begin{align*} {1\over{\lambda}}\Bigl( f(x)g(x)- f(a)g(a) \Bigr) & = {1\over{\lambda}}\Bigl( f(x)g(x)-f(x)g(a)+ f(x)g(a) -f(a)g(a) \Bigr) \\ & =f(x){1\over{\lambda}}\Bigl(g(x)-g(a)\Bigr)+{1\over{\lambda}} \Bigl(f(x) -f(a)\Bigr)g(a) \\ & \approx f(a){1\over{\lambda}}\Bigl(g(x)-g(a)\Bigr)+{1\over{\lambda}} \Bigl(f(x) -f(a)\Bigr)g(a) \end{align*} where the final $\approx$ is by continuity of $f$. For the action $\overrightarrow{a \ x}f$ this implies the following, where the second equality is by continuity of $f$ and $g$. 
\begin{prop}\label{pvec} Letting $a_0=st(a)$ we have $$\overrightarrow{a \ x}(fg) = st(f(a)) \cdot \overrightarrow{a \ x} g + \overrightarrow{a \ x}f \cdot st(g(a))= f(a_0) \cdot \overrightarrow{a \ x}g + \overrightarrow{a \ x}f \cdot g(a_0).$$ \end{prop} \begin{dfn}\label{dd5} If $h:M \to N$ is a smooth map between smooth manifolds, then for $a \in {{}^{\mathfrak{h}} \! {M}}$ we define the differential of $h$, $dh_a:P_a(M) \to P_{h(a)}(N)$ by setting $$dh_a((a,x)) = ( h(a) , h(x)).$$ \end{dfn} This induces a map $dh_a:T_a(M) \to T_{h(a)}(N)$ given by $dh_a(\overrightarrow{a \ x}) = \overrightarrow{ h(a) \ h(x)}$. The relation here between $h$ and $dh$ seems more transparent than in the corresponding classical definition. Furthermore, the ``chain rule'', i.e. the fact that $d({g \circ h})_a = dg_{h(a)} \circ dh_a$, becomes immediate, for both $P_a$ and $T_a$. Namely, $dg_{h(a)} \circ dh_a((a, x)) =dg_{h(a)} (( h(a), h(x))) = ( g\circ h(a) , g \circ h(x)) = d({g \circ h})_a((a , x))$, and similarly for $\overrightarrow{a \ x} \in T_a$. \begin{remark}\label{67} For standard $a \in M$, $T_a$ is naturally identified with the classical tangent space of $M$ at $a$ as follows. Since we have done everything also in terms of coordinates, it is enough to see this for open $U \subseteq {\mathbb R}^n$, where the tangent space at any point $a$ is ${\mathbb R}^n$ itself. A vector $v \in {\mathbb R}^n$ is then identified with $\overrightarrow{a \ (a+{\lambda}\cdot v)}$. Under this identification, our definitions of $\overrightarrow{a \ x}f$ and $dh(\overrightarrow{a \ x})$ coincide with the classical ones. \end{remark} \section{Prevector fields}\label{vf} For a smooth manifold $M$, recall that ${{}^{\mathfrak{h}} \! {M}}$ denotes the set of all nearstandard points in ${}^* \! M$, and $P_a$ denotes the set of prevectors based at $a\in {{}^{\mathfrak{h}} \! {M}}$. We define a \emph{prevector field} on ${}^* \! M$ to be an \emph{internal} map $F:{}^* \! M \to {}^* \! 
M$ such that $(a,F(a)) \in P_a$ for every $a \in{{}^{\mathfrak{h}} \! {M}}$, that is, in coordinates $ F(a) - a \prec{\lambda} $ for every $a \in {{}^{\mathfrak{h}} \! {M}}$. If $F$ and $G$ are two prevector fields then we will say $F$ is equivalent to $G$ and write $F \equiv G$ if $F(a) \equiv G(a)$ for every $a\in{{}^{\mathfrak{h}} \! {M}}$ (recall Definition~\protect\ref{dd3}). A \emph{local} prevector field is an internal map $F:{}^* U \to {}^* V$ satisfying the above condition, where $U \subseteq V \subseteq M$ are open. When the distinction is needed, we will call a prevector field defined on all of ${}^* \! M$ a \emph{global} prevector field. The reason for allowing the values of a local prevector field defined on ${}^* U$ to lie in a slightly larger range ${}^* V$ is to allow a prevector field $F$ to be restricted to a smaller domain which is not invariant under $F$. For example, if $M = {\mathbb R}$ and one wants to restrict the prevector field $F$ given by $F(a)=a+{\lambda}$ to the domain ${}^* (0,1)$, then one needs to allow a slightly larger range. In the sequel we will usually not mention the larger range $V$ when describing a local prevector field, but it will always be tacitly assumed that we have such $V$ when needed. A second instance where the range may need to be slightly larger than the domain is the following natural setting for defining a local prevector field. \begin{example}\label{e} Let $p \in M$, $V$ a coordinate neighborhood of $p$ with image $V' \subseteq {\mathbb R}^n$, and $X$ a classical vector field on $V$, given in coordinates by $X' : V' \to {\mathbb R}^n$. Then there is a neighborhood $U'$ of the image of $p$, with $U' \subseteq V'$, such that we can define $F' : {}^* {U'} \to {}^* {V'}$ by $$F'(a)=a + {\lambda} \cdot X'(a),$$ e.g. one can take $U'$ such that $\overline{U'}$ is compact and $\overline{U'} \subseteq V'$. 
For the corresponding $U \subseteq V$ this induces a local prevector field $F: {}^* U \to {}^* V$ which realizes $X$ in $U$ in the sense of the following definition. (Recall Remark~\protect\ref{67}.) \end{example} \begin{dfn}\label{realize} A local prevector field $F$ on ${}^* U$ realizes the classical vector field $X$ on $U$ if for every smooth $h:U \to {\mathbb R}$ we have $Xh(a) = \overrightarrow{a \ F(a)}h$ for all $a\in U$. \end{dfn} When realizing a vector field as in Example~\protect\ref{e} it may indeed be necessary to restrict to a smaller neighborhood $U$, e.g. for $M=V=V'=(0,1)$ and $X' = 1$, one needs to take $U=(0,r)$ for some $1 > r \in {\mathbb R}$ in order for $F(a)=a+{\lambda}$ to always lie in ${}^* V$. Note that Definition~\protect\ref{realize} involves only \emph{standard} points; see however Corollary~\protect\ref{sr} for a discussion of this matter. Different coordinates for the same neighborhood $U$ will induce equivalent realizations in ${}^* U$. More precisely, we show the following. \begin{prop}\label{re} For $U \subseteq {\mathbb R}^n$, let $X:U \to {\mathbb R}^n$ be a classical vector field, ${\varphi}:U \to W \subseteq {\mathbb R}^n$ a change of coordinates, and $Y:W \to {\mathbb R}^n$ the corresponding vector field, i.e. $Y({\varphi}(a)) = J_a X(a)$, where $J_a$ is the Jacobian matrix of ${\varphi}$ at $a$. Let $F,G$ be the prevector fields given by $F(a) = a+{\lambda} X(a)$, $G(a)=a+ {\lambda} Y(a)$ as in Example~\protect\ref{e}. Then $F \equiv {\varphi}^{-1} \circ G \circ {\varphi}$, or equivalently, $ {\varphi} \circ F(a) - G \circ{\varphi}(a) \prec\prec {\lambda}$ for all $a \in { {}^{\mathfrak{h}} } U$. \end{prop} \begin{pf} Let ${\varphi}^i,Y^i,G^i$ be the $i$th component of ${\varphi},Y,G$ respectively, and let $D^i_a$ be the differential of ${\varphi}^i$ at $a$, (so $D^i_a$ is the $i$th row of $J_a$). 
Then we have \begin{align*} {\varphi}^i \circ F(a) - G^i \circ{\varphi}(a) &= {\varphi}^i(a+ {\lambda} X(a)) - {\varphi}^i(a) - {\lambda} Y^i({\varphi}(a)) \\ &= D^i_c{\lambda} X(a) - {\lambda} D^i_a X(a) = {\lambda}( D^i_c - D^i_a) X(a) \prec\prec {\lambda}, \end{align*} where $D^i_c$ is the differential of ${\varphi}^i$ at some point $c$ in the interval between $a$ and $a+ {\lambda} X(a)$. (Such $c$ exists by Remark~\protect\ref{dh1}.) Since this is true for each component $i$, we have $$ {\varphi} \circ F(a) - G \circ{\varphi}(a) \prec\prec {\lambda}.$$ \end{pf} \begin{dfn}\label{I} We define $I$ to be the \emph{identity} prevector field on ${}^* \! M$ (or on ${}^* U$ for any $U \subseteq M$ or $U\subseteq{{\mathbb R}^n}$), i.e. $I(a)=a$ for all $a$. The prevector field $I$ corresponds to the classical \emph{zero} vector field via the procedure of Example~\protect\ref{e}. \end{dfn} \subsection{Regularity conditions} If one wants to define various operations on prevector fields, such as their flow or Lie bracket, then one must assume some regularity properties. Recall that a classical vector field $X:U \to {\mathbb R}^n$ is called \emph{Lipschitz} if there is $K \in {\mathbb R}$ such that $\| X(a)-X(b) \| \leq K\|a - b\|$ for $a,b \in U$. For the local prevector field $F$ of Example~\protect\ref{e}, where $F(a)-a={\lambda} X(a)$, this translates into $$\Bigl\| \Bigl( F(a)-a \Bigr) - \Bigl( F(b)-b \Bigr) \Bigr\| \leq K{\lambda}\|a - b\|$$ for $a,b \in {}^* U$. This motivates the following definition. \begin{dfn}\label{d1} A prevector field $F$ on a smooth manifold $M$ is of class $D^1$ if whenever $a,b \in{{}^{\mathfrak{h}} \! {M}}$ and $(a, b) \in P_a$ then in coordinates $F(a)-a - F(b) + b \prec{\lambda} \|a-b\| $. \end{dfn} One can then also think of ``order $k$'' Lipschitz conditions on prevector fields, for the definition of which we will use the following Euler notation for finite differences. 
Given vector spaces $V,W$ (classical or nonstandard), and given $A \subseteq V$, $a \in A$, and $v_1,\dots,v_k \in V$ such that $a+e_1v_1+\cdots+e_kv_k \in A$ for all $(e_1,\dots,e_k)\in\{0,1\}^k$, and given a function $F:A \to W$, we define the $k$th difference ${\x^k_{v_1,\dots,v_k}} F(a)$ as follows: $${\x^k_{v_1,\dots,v_k}} F(a) = \sum_{(e_1,\dots,e_k)\in\{0,1\}^k} (-1)^{\sum e_j}F(a+e_1v_1+\cdots+e_k v_k).$$ We note that in terms of this difference notation, the $D^1$ condition can be stated as follows: $\Delta^1_{b-a}(F-I)(a)\prec{\lambda}\|a-b\|$ for any $a,b\in{{}^{\mathfrak{h}} \! {M}}$ with $a-b\prec{\lambda}$, (recall that $I$ denotes the identity prevector field, i.e. $I(a)=a$ for all $a$). Or, if we let $v=b-a$ then this can be written as $\Delta^1_v(F-I)(a)\prec{\lambda}\|v\|$. Generalizing to higher order differences, we define the $D^k$ regularity condition on a prevector field $F$ by the following condition, in coordinates in a neighborhood $U$: For any $a \in {{}^{\mathfrak{h}} {U}}$ and any $v_1,\dots,v_k \in {}^*{\mathbb R}^n$ with $v_i \prec {\lambda}$, $$ {\x^k_{v_1,\dots,v_k}} (F-I)(a) \prec{\lambda}\|v_1\| \| v_2\|\cdots\|v_k\|.$$ We note that for $k \geq 2 $, ${\x^k_{v_1,\dots,v_k}} I(a)=0$, so for $k \geq 2$ the $D^k$ condition simplifies to: $${\x^k_{v_1,\dots,v_k}} F(a) \prec{\lambda}\|v_1\| \| v_2\|\cdots\|v_k\|.$$ For $k=2$ this reads as follows. \begin{dfn}\label{d2} A prevector field $F$ on a smooth manifold $M$ is of class $D^2$ if for any $a \in {{}^{\mathfrak{h}} \! {M}}$, we have in coordinates that for any $v,w \in {}^*{\mathbb R}^n$ with $v,w \prec {\lambda}$, $$\Delta^2_{v,w}F(a)=F(a) -F(a+v) - F(a+w) + F(a+v+w) \prec{\lambda}\|v\|\|w\|.$$ \end{dfn} We will show that the definitions of $D^1$ and $D^2$ prevector fields are independent of coordinates in Propositions \protect\ref{5} and \protect\ref{4} respectively. We will show in Proposition~\protect\ref{3} that $D^2$ implies $D^1$. 
In fact, the proof of the invariance of $D^2$ will use the fact that $D^2$ implies $D^1$ in any given coordinates, which in turn relies on the technical Lemma~\protect\ref{1}. In Proposition~\protect\ref{6} we will show that the prevector field of Example~\protect\ref{e} induced by a classical vector field of class $C^k$ is a $D^k$ prevector field. This is in fact the central motivation for our definition of $D^k$, but we note that our $D^k$ is a weaker condition than $C^k$, e.g. $F$ of Example~\protect\ref{e} is $D^1$ if $X$ is (order 1) Lipschitz, which is weaker than $C^1$. We remark that a definition of $D^0$ along the above lines would simply amount to $F(a)-a \prec {\lambda}$, i.e. $F$ being a prevector field. Note, however, that in our definitions above, being a prevector field is part of the definition of $D^k$. \begin{prop}\label{6} For open $W \subseteq {\mathbb R}^n$, let $X:W \to {\mathbb R}^n$ be a classical $C^k$ vector field. Then for any $a \in { {}^{\mathfrak{h}} } W$ and any $v_1,\dots,v_k \in {}^*{\mathbb R}^n$ with $v_i \prec\prec 1$, $${\x^k_{v_1,\dots,v_k}} X(a) \prec\|v_1\| \| v_2\|\cdots\|v_k\|.$$ It follows that if $F$ is the prevector field on ${}^* W$ of Example~\protect\ref{e}, i.e. $F(a)-a={\lambda} X(a)$, then $F$ is $D^k$. \end{prop} \begin{pf} Let $U \subseteq W$ be a smaller neighborhood of $st(a)$ for which all $k$th partial derivatives of $X$ are bounded. Let $X^1,\dots,X^n$ be the components of $X$. Given $p \in U$ and $v_1,\dots,v_k \in {\mathbb R}^n$ such that $p+s_1v_1+\cdots+s_kv_k \in U$ for all $0 \leq s_1,\dots,s_k \leq 1$, let $$\psi^i(s_1,\dots,s_k) = X^i(p+s_1v_1+\cdots+s_kv_k).$$ By iterating the mean value theorem $k$ times there is $(t_1,\dots,t_k) \in [0,1]^k$ such that $$\sum_{(e_1,\dots,e_k)\in\{0,1\}^k} (-1)^{\sum e_j}\psi^i(e_1,\dots,e_k) ={{\partial}^k \over {\partial} s_1 {\partial} s_2 \cdots {\partial} s_k}\psi^i(t_1,\dots,t_k)(-1)^k$$ (For the case $k=2$ see e.g. Rudin \cite[Theorem 9.40]{r}). 
So \begin{align*} |{\x^k_{v_1,\dots,v_k}} X^i(p)|&= \Bigl|\sum_{(e_1,\dots,e_k)\in\{0,1\}^k} (-1)^{\sum e_j}X^i(p+e_1v_1+\cdots+e_kv_k) \Bigr| \\ &= \Bigl|\sum_{(e_1,\dots,e_k)\in\{0,1\}^k} (-1)^{\sum e_j}\psi^i(e_1,\dots,e_k)\Bigr| \\&=\Bigl|{{\partial}^k \over {\partial} s_1\cdots {\partial} s_k}\psi^i(t_1,\dots,t_k)\Bigr| \leq C_i\|v_1\| \| v_2\|\cdots\|v_k\| \end{align*} where $C_i$ is determined by a bound for all $k$th partial derivatives of $X^i$ in $U$. This is true for each $X^i$, $i=1,\dots,n$, and so there is a $K\in{\mathbb R}$ such that $$\|{\x^k_{v_1,\dots,v_k}} X(p) \| \leq K \|v_1\| \| v_2\|\cdots\|v_k\|$$ for every $p \in U$ and $v_1,\dots,v_k\in {\mathbb R}^n$ such that $p+s_1v_1+\cdots+s_kv_k \in U$ for all $0 \leq s_1,\dots,s_k \leq 1$, $s_i \in {\mathbb R}$. By transfer the same is true, with the same $K$, for all $p \in {}^* U$ and $v_1,\dots,v_k \in{}^*{\mathbb R}^n$ such that $p+s_1v_1+\cdots+s_kv_k \in {}^* U$ for all $0 \leq s_1,\dots,s_k \leq 1$, $s_i \in {}^*{\mathbb R}$. In particular this is true for our $a$ and all $v_1,\dots,v_k \in {}^*{\mathbb R}^n$ with $v_i \prec\prec 1$. \end{pf} \begin{remark}\label{vfcc} Proposition~\protect\ref{6} was stated for a $C^k$ vector field $X:W \to {\mathbb R}^n$, but for the first statement one can think of $X$ as any $C^k$ map, and indeed in the proof of Proposition~\protect\ref{4} below it will be used for $X={\varphi}:U \to W$ a $C^k$ change of coordinates. \end{remark} We note that a $D^1$ prevector field $F$ satisfies the following. \begin{prop}\label{0} If $F$ is $D^1$ and $a,b \in{{}^{\mathfrak{h}} \! {M}}$ with $a-b \prec {\lambda}$, then $$\|a-b\| \prec \|F(a)-F(b)\| \prec \|a-b\|.$$ In more detail, given $K\prec 1$ such that $ \|F(a)-F(b)-a+b \| \leq K{\lambda}\|a-b\|$ we have $(1-K{\lambda})\|a-b\| \leq \|F(a)-F(b)\| \leq (1+K{\lambda})\|a-b\|$. 
\end{prop} \begin{pf} We have $|\|F(a)-F(b)\|-\|a-b \| | \leq \|F(a)-F(b)-a+b \| \leq K{\lambda}\|a-b\|$, so $(1-K{\lambda})\|a-b\| \leq \|F(a)-F(b)\| \leq (1+K{\lambda})\|a-b\|$. \end{pf} \subsection{Invariance of regularity conditions} The definitions of $D^1$ and $D^2$ above are in terms of coordinates. In the present section we prove that these definitions are in fact independent of coordinates, and that every $D^2$ prevector field is also $D^1$. This section is quite technical and can be skipped on first reading. Lemma \protect\ref{1} that we prove and use in this section will be used again only in the proof of Lemma \protect\ref{10}. \begin{prop}\label{5} The definition of $D^1$ is independent of coordinates. \end{prop} \begin{pf} Let $U,W \subseteq {\mathbb R}^n$ be two coordinate charts for a neighborhood of $st(a)$, and ${\varphi}:U \to W$ the change of coordinates map. Let $F$ be a $D^1$ prevector field in $U$ and $G$ the corresponding prevector field in $W$ i.e. $G \circ {\varphi} = {\varphi} \circ F$. For $b$ with $a-b \prec{\lambda}$ we must show $$G({\varphi}(a)) - G({\varphi}(b)) - {\varphi}(a) + {\varphi}(b) \prec {\lambda}\|{\varphi}(a)-{\varphi}(b)\|.$$ Let $\phi = {\varphi}^i$ be the $i$th component of ${\varphi}$. By Remark~\protect\ref{dh1} there is a point $x$ on the interval between $F(a)$ and $F(b)$ such that $\phi(F(a)) - \phi(F(b)) = D_x(F(a)-F(b))$, where $D$ is the differential of $\phi$. There is a point $y$ on the interval between $a$ and $b$ such that $\phi(a) - \phi(b) = D_y(a-b)$. So \begin{align*}\phi(F(a)) - \phi(F(b)) - \phi(a) + \phi(b) &= D_x(F(a)-F(b)) - D_y(a-b) \\ &= D_x(F(a)-F(b)-a+b) - (D_y - D_x)(a-b) \\& \prec{\lambda}\|a-b\| \prec {\lambda}\|{\varphi}(a)-{\varphi}(b)\|,\end{align*} since 1) the entries of $D_x$ are finite, 2) $F(a)-F(b)-a+b \prec{\lambda}\|a-b\|$ by assumption, 3) $D_x-D_y \prec \|x-y\| \prec{\lambda}$ (assuming the partial derivatives of $\varphi$ are Lipschitz, e.g. 
if $\varphi$ is $C^2$), and 4) the entries of the Jacobian of ${\varphi}^{-1}$ are finite, giving $\|a-b\| \prec \|{\varphi}(a)-{\varphi}(b)\|$. This is true for all components $\phi={\varphi}^i$ of ${\varphi}$ and so it is true for ${\varphi}$, i.e. $${\varphi}(F(a)) - {\varphi}(F(b)) - {\varphi}(a) + {\varphi}(b) \prec {\lambda}\|{\varphi}(a)-{\varphi}(b)\|$$ which completes the proof since ${\varphi} \circ F = G \circ {\varphi} $. \end{pf} We would now like to show that for any given coordinates, $D^2$ implies $D^1$. We first prove the following technical lemma, which will also be used in the proof of Lemma~\protect\ref{10}. We demonstrate the content of this lemma with a simple example. Let $f,g:{}^*{\mathbb R} \to{}^*{\mathbb R}$ be $f(x)=cx$ and $g(x)=cd\sin {\pi \over 2 d}x$, with $d\prec\prec 1$, then $f(0)=g(0)=0$. We now advance by steps of size $d$ and see how $f$ and $g$ develop. The increment of $f$ and $g$ after one step is the same, $f(d)=g(d)=cd$. But after $m$ steps with $m \geq {1 \over d}$, $f(md)\geq c$ whereas $g(md)\leq cd \prec\prec c$. The increments of $f$ properly accumulate along the $m$ steps to produce a large value for $f(md)$ in comparison to $f(d)$, due to the fact that the increment $f(x+d)-f(x)$ is constant. On the other hand, $g(md)$ remains small since the increments of $g$ are not sufficiently persistent, this being reflected in the fact that the difference $\Bigl(g(x+2d)-g(x+d)\Bigr) - \Bigl(g(x+d)-g(x)\Bigr)$ between successive increments is not sufficiently small compared to the first increment $g(d)-g(0)$. This lemma will, in fact, be used in the reverse direction, namely, a bound on $f(md)-f(0)$ will be used in order to obtain a stronger bound on $f(d)-f(0)$. \begin{lemma}\label{1} Let $B \subseteq {\mathbb R}^n$ be an open ball around the origin $0$, let $a\in {\mathfrak{h}}(0)$, and let $0\neq v\in{}^*{\mathbb R}^n$ with $v\prec\prec 1$. If $G:{}^*\! 
B \to {}^* {\mathbb R}^n$ is an internal function satisfying $$\Delta^2_{v,v}G(x) \prec \|v\| \|G(a) - G(a+v) \|$$ for all $x \in { {}^{\mathfrak{h}} } \! B$, then there is $m \in {}^*{\mathbb N}$ such that $a + mv \in { {}^{\mathfrak{h}} }\! B$ and $$G(a) - G(a+v) \prec \|v \| \|G(a) - G(a+mv)\|.$$ \end{lemma} \begin{pf} Let $N={\lfloor} r/\|v\| {\rfloor}$ where $0<r \in {\mathbb R}$ is slightly smaller than the radius of $B$, and for $0 \leq x \in {}^*{\mathbb R}$, ${\lfloor} x {\rfloor} \in{}^*{\mathbb N}$ is the integer part of $x$. So for any $m \leq N$ we have $a+mv \in { {}^{\mathfrak{h}} } \! B$. Let $A=G(a)-G(a+v)$. For $0 \leq j \leq N$ let $x_j=a+jv$, then by our assumption on $G$ we have $\|G(x_j)-2G(x_{j+1})+G(x_{j+2})\| = \|\Delta^2_{v,v}G(x_j)\| =C_j \|v\| \|A \|$ with $C_j \prec 1$. Let $C$ be the maximum of $C_0,\dots, C_N$, then $C\prec 1$ and $\|G(x_j)-2G(x_{j+1})+G(x_{j+2})\| \leq C \|v\| \|A \|$ for all $0 \leq j \leq N$. Given $k \leq N$ we have \begin{multline*} \Bigl\| A-\Bigl(G(x_k) - G(x_{k+1})\Bigr) \Bigr\| = \Bigl\|\Bigl(G(x_0) - G(x_1)\Bigr)-\Bigl(G(x_k) - G(x_{k+1})\Bigr)\Bigr\| \\ \leq \sum_{j=0}^{k-1} \Bigl\|\Bigl(G(x_j) - G(x_{j+1})\Bigr) - \Bigl(G(x_{j+1}) - G(x_{j+2})\Bigr)\Bigr\| \leq C k \| v \| \|A\|. \end{multline*} So for any $m \leq N$, $$\Bigl\|mA - \Bigl(G(x_0) - G(x_m)\Bigr) \Bigr\| = \Bigl\| \sum_{k=0}^{m-1} \Bigl( A-\Bigl(G(x_k) - G(x_{k+1})\Bigr)\Bigr) \Bigr\| \leq C m^2 \|v\|\|A\|,$$ so $\Bigl\|mA - \Bigl(G(x_0) - G(x_m)\Bigr) \Bigr\| = K m^2 \|v\|\|A\|$ with $K \prec 1$. It follows that $$ m\|A\| - K m^2 \|v\|\|A\| \leq \|G(x_0) - G(x_m) \|,$$ and so, multiplying by $\|v\|$, we have $ m\|v\| \|A\|(1 - K m \|v\|) \leq \|v\| \|G(x_0) - G(x_m) \| $. Now let $m = \min \{ \ N \ , \ {\lfloor}{1 \over 2K \|v\|}{\rfloor} \ \}$, then $$ m\|v\|\|A\|/2 \leq \|v\| \|G(x_0) - G(x_m) \| .$$ By definition of $N$ and since $K\prec 1$ we have that $m\|v\|$ is appreciable, i.e. 
not infinitesimal, and so finally $A \prec \|v\| \|G(x_0) - G(x_m) \| $, that is, $G(a)-G(a+v) \prec \|v\| \|G(a) - G(a+mv) \|$. \end{pf} \begin{prop}\label{3} If $F$ is $D^2$ for some choice of coordinates in $W$, then $F$ is $D^1$. \end{prop} \begin{pf} Given $a \in { {}^{\mathfrak{h}} } W$, in the given coordinates take some ball $B$ around $st(a)$. Define $G=F-I$, i.e. $G(x)=F(x)-x$, then we must show for any $v\prec{\lambda} $ that $G(a)-G(a+v) \prec{\lambda} \|v \|$ (here $v = b-a$ in Definition~\protect\ref{d1}). If $G(a)-G(a+v) \prec\prec {\lambda} \|v \|$ then we are certainly done. Otherwise ${\lambda} \|v\| \prec\|G(a) - G(a+v)\|$ so ${\lambda} \|v\|^2 \prec\|v\| \|G(a) - G(a+v)\|$, and on the other hand $\Delta^2_{v,v}G(x)=\Delta^2_{v,v}F(x) \prec{\lambda}\|v\|^2$ for all $x$, since $\Delta^2_{v,v}I(x)=0$, and by taking $v=w$ in Definition~\protect\ref{d2}. Together we have $\Delta^2_{v,v}G(x) \prec \|v \|\|G(a) - G(a+v)\|$, so by Lemma~\protect\ref{1} there is $m\in{}^*{\mathbb N}$ such that $a+mv \in { {}^{\mathfrak{h}} }\! B$ and $$G(a) - G(a+v) \prec\|v\| \|G(a) - G(a+mv) \| \leq \|v\| \Bigl(\|G(a)\| + \|G(a+mv)\|\Bigr) \prec \|v\|{\lambda}$$ since $F$ is a prevector field and so $G(x)=F(x)-x\prec{\lambda}$ for all $x$. \end{pf} We need one more lemma before proving that $D^2$ is independent of coordinates. \begin{lemma}\label{2} In given coordinates, $F$ is $D^2$ if and only if it satisfies $${\x^2_{v,w}} F(a) \prec{\lambda} \bigl(\max \{\|v\|,\|w\|\}\bigr)^2$$ for every $a$, and every $v,w \prec{\lambda}$. \end{lemma} \begin{pf} Clearly $D^2$ implies the above condition. For the converse, say $\|v \| \leq \|w \|$. If $w \prec\|v\|$ then the two conditions are clearly equivalent. Otherwise let $n = {\lfloor}\|w\| / \| v \|{\rfloor}$, and $w'=w/n$. 
Then $\| v \| \leq \|w' \| \leq {n+1 \over n} \|v \|$, and so by the preceding remark the two conditions are equivalent for $v,w'$, and so for every $1 \leq k \leq n$, $$ C_k = {\|\Delta^2_{v,w'}F(a+(k-1) w')\| \over {\lambda}\|v\|\|w'\|} $$ is finite. Let $C$ be the maximum of $C_1,\dots, C_n$, then $C$ is finite and $$ \|\Delta^2_{v,w'}F(a+(k-1) w')\| \leq C{\lambda}\|v\|\|w'\|$$ for every $1 \leq k \leq n$. And so \begin{align*} \| {\x^2_{v,w}} & F(a)\|= \|F(a) -F(a+v) - F(a+w) + F(a+v+w) \| \\ \leq &\sum_{k=1}^n \| F(a+(k-1) w') -F(a+(k-1) w'+v) - F(a+k w') + F(a+k w'+v) \| \\ =& \sum_{k=1}^n \| \Delta^2_{v,w'}F(a+(k-1) w') \| \leq nC{\lambda}\|v\|\|w'\| = C{\lambda}\|v\|\|w\|. \end{align*} \end{pf} \begin{prop}\label{4} The definition of $D^2$ is independent of coordinates. \end{prop} \begin{pf} Let ${\varphi}:U \to W$ be a change of coordinates. Let $F$ be a $D^2$ prevector field in $U$ and $G$ the corresponding prevector field in $W$ i.e. $G \circ {\varphi} = {\varphi} \circ F$. Given $p \in {}^{\mathfrak{h}} W$, and $x,y \in{}^*{\mathbb R}^n$ with $x,y\prec{\lambda}$, and say $ \|x \|\leq \|y\|$, then by Lemma~\protect\ref{2} it is enough to show $\Delta^2_{x,y} G(p)=G(p)-G(p+x)-G(p+y)+G(p+x+y) \prec{\lambda} \|y\|^2$. Let $a,v,w$ be such that ${\varphi}(a)=p$, ${\varphi}(a+v)=p+x$, ${\varphi}(a+w)=p+y$. Then $\|w\|\prec \|y\|$ and so it is enough to show \begin{equation*}\label{x1}\tag{1} {\varphi}( F(a))-{\varphi}(F(a+v))-{\varphi}(F(a+w))+G(p+x+y) \prec{\lambda} \|w\|^2. \end{equation*} Since $F$ is $D^2$, by Proposition~\protect\ref{3} it is also $D^1$, and so by Proposition~\protect\ref{5} $G$ is $D^1$. 
So \begin{align*}\label{x2}\tag{2} G(p+x&+y) - G( {\varphi}(a+v+w))-(p+x+y) + {\varphi}(a+v+w) \\ & \prec{\lambda}\| p+x+y-{\varphi}(a+v+w)\| \\ & = {\lambda}\| -{\varphi}(a)+{\varphi}(a+v)+{\varphi}(a+w)-{\varphi}(a+v+w)\| \prec{\lambda}\|w\|^2, \end{align*} by Remark~\protect\ref{vfcc} (assuming ${\varphi}$ is $C^2$) and since $p+x+y=-{\varphi}(a)+{\varphi}(a+v)+{\varphi}(a+w)$ and $\|v\| \prec \| w\|$. In view of (\protect\ref{x2}) we see that (\protect\ref{x1}) holds if and only if \begin{multline*} {\varphi}( F(a))-{\varphi}( F(a+v))-{\varphi}( F(a+w)) +{\varphi} ( F (a+v+w)) \\ -{\varphi}(a)+{\varphi}(a+v)+{\varphi}(a+w) - {\varphi}(a+v+w) \\ = {\x^2_{v,w}}({\varphi}\circ F -{\varphi})(a) \prec{\lambda} \|w\|^2, \end{multline*} so we proceed to prove this last inequality. Let $\phi = {\varphi}^i$ be the $i$th component of ${\varphi}$. We have \begin{multline*}\label{x3}\tag{3} \phi( F(a+v)) - \phi( F(a)) = \\ D(F(a+v)-F(a))+ {1\over 2}(F(a+v)-F(a))^tH_1(F(a+v)-F(a)) \end{multline*} where $D=D_{F(a)}$ is the differential of $\phi$ at $F(a)$ and $H_1$ is the Hessian matrix of $\phi$ at some point on the interval between $F(a)$ and $F(a+v)$, (recall Remark~\protect\ref{dh2}). Similarly \begin{multline*}\label{x4}\tag{4} \phi( F(a+w)) - \phi( F(a)) =\\ D(F(a+w)-F(a))+ {1\over 2}(F(a+w)-F(a))^tH_2(F(a+w)-F(a)) \end{multline*} and \begin{multline*}\label{x5}\tag{5} \phi( F(a+v+w)) - \phi( F(a)) =\\ D(F(a+v+w)-F(a))+ {1\over 2}(F(a+v+w)-F(a))^tH_3(F(a+v+w)-F(a))\end{multline*} with $H_2,H_3$ similarly defined. We have $$\phi(a+v) - \phi(a) = D_a v+ {1\over 2}v^tH'_1v$$ where $D_a$ is the differential of $\phi$ at $a$ and $H'_1$ is the Hessian matrix of $\phi$ at some point on the interval between $a$ and $a+v$. Now let $Y_1 = H'_1 -H_1$ then $Y_1 \prec {\lambda}$ (assuming the second partial derivatives of ${\varphi}$ are Lipschitz, e.g. if ${\varphi}$ is $C^3$), and we have \begin{equation*}\label{x6}\tag{6} \phi(a+v) - \phi(a) = D_av+ {1\over 2} v^t(H_1+Y_1)v. 
\end{equation*} Similarly there are $Y_2,Y_3 \prec {\lambda}$ such that \begin{equation*}\label{x7}\tag{7} \phi(a+w) - \phi(a) = D_a w+ {1\over 2} w^t(H_2+Y_2)w \end{equation*} and \begin{equation*}\label{x8}\tag{8} \phi(a+v+w) - \phi(a) = D_a (v+w)+ {1\over 2}(v+w)^t(H_3+Y_3)(v+w) \end{equation*} Furthermore, since by Proposition~\protect\ref{3} the prevector field $F$ is $D^1$, we have $$F(a)-F(a+v)-a+(a+v) \prec{\lambda}\| v\|$$ i.e. $F(a+v)-F(a)=v+\delta_1$ with $\delta_1 \prec {\lambda} \|v\| \prec {\lambda} \|w\|$. Similarly there are $\delta_2,\delta_3 \prec {\lambda} \| w \|$ such that $F(a+w)-F(a)=w+\delta_2$, $F(a+v+w)-F(a)=v+w+\delta_3$. Substituting this into the quadratic terms of (\protect\ref{x3}),(\protect\ref{x4}),(\protect\ref{x5}) we get \begin{equation*}\label{x9}\tag{9} \phi( F(a+v)) - \phi( F(a)) = D(F(a+v)-F(a))+ {1\over 2}(v+\delta_1)^tH_1(v+\delta_1) \end{equation*} \begin{equation*}\label{x10}\tag{10} \phi( F(a+w)) - \phi( F(a)) = D(F(a+w)-F(a))+ {1\over 2}(w+\delta_2)^tH_2(w+\delta_2) \end{equation*} \begin{equation*}\label{x11}\tag{11} \phi( F(a+v+w)) - \phi( F(a)) = D(F(a+v+w)-F(a))+{1\over 2}(v+w+\delta_3)^tH_3(v+w+\delta_3) \end{equation*} Now \begin{align*} &{\x^2_{v,w}}(\phi\circ F -\phi)(a) = \\ & \phi( F(a))-\phi(F(a+v))-\phi(F(a+w)) +\phi (F (a+v+w)) \\& -\phi(a)+\phi(a+v)+\phi(a+w) - \phi(a+v+w) \\ =& - \Bigl(\phi( F(a+v)) - \phi( F(a))\Bigr) - \Bigl(\phi( F(a+w)) - \phi(F(a))\Bigr) + \Bigl(\phi( F(a+v+w)) - \phi( F(a))\Bigr) \\& +\Bigl(\phi(a+v) - \phi(a)\Bigr) +\Bigl(\phi(a+w) - \phi(a)\Bigr) -\Bigl(\phi(a+v+w) - \phi(a)\Bigr) \end{align*} Substituting (\protect\ref{x6}),(\protect\ref{x7}),(\protect\ref{x8}),(\protect\ref{x9}),(\protect\ref{x10}),(\protect\ref{x11}) for these six parenthesized summands, after all cancellations we remain with \begin{multline*} D\Bigl(F(a)-F(a+v)-F(a+w)+F(a+v+w)\Bigr) \\ - v^tH_1\delta_1 - w^tH_2\delta_2 + (v+w)^tH_3\delta_3 - {1\over 2}\delta_1^tH_1\delta_1 - {1\over 2}\delta_2^tH_2\delta_2 + 
{1\over 2}\delta_3^tH_3\delta_3 \\ +{1\over 2}v^tY_1v + {1\over 2}w^tY_2w - {1\over 2}(v+w)^tY_3(v+w) \prec {\lambda}\|w\|^2. \end{multline*} This is true for all components $\phi={\varphi}^i$ of ${\varphi}$ and so it is true for ${\varphi}$. \end{pf} \subsection{Operations on prevector fields} We now show that addition of the vectors corresponding to $D^1$ prevector fields $F,G$ is realized by their composition $F \circ G$. More precisely, we show the following. \begin{prop} Let $F$ be a $D^1$ prevector field and $G$ any prevector field. Then for every $a\in{{}^{\mathfrak{h}} \! {M}}$, $$\overrightarrow{a \ F(G(a))} = \overrightarrow{a \ F(a)}+\overrightarrow{a \ G(a)}.$$ In particular if both $F,G$ are $D^1$ then $F\circ G \equiv G \circ F$. \end{prop} \begin{pf} In coordinates $\overrightarrow{a \ F(a)}+\overrightarrow{a \ G(a)} = \overrightarrow{a \ x}$ where $x=a+ (F(a)-a) + (G(a)-a) = F(a)+G(a)-a$. So $F(G(a)) - x = F(G(a))-F(a) - G(a)+a \prec {\lambda}\|G(a)-a\| \prec\prec {\lambda}$. \end{pf} We next show that the composition of $D^1$ (resp. $D^2$) prevector fields is $D^1$ (resp. $D^2$). \begin{prop}\label{40} If $F,G$ are $D^1$ then $F\circ G$ is $D^1$. \end{prop} \begin{pf} We have \begin{align*}\|F\circ G(a) &-F\circ G(b)-a+b\| \\&\leq \|F\circ G(a)-F\circ G(b) -G(a) + G(b)\| +\|G(a) -G(b) -a+b\| \\& \prec {\lambda}\|G(a)-G(b)\| + {\lambda}\|a-b\| \prec {\lambda}\|a-b\| \end{align*} by Proposition~\protect\ref{0}. \end{pf} \begin{prop}\label{q} If $F,G$ are $D^2$ then $F\circ G$ is $D^2$. \end{prop} \begin{pf} In some coordinates let $p=G(a)$, $x=G(a+v)-G(a)$ and $y=G(a+w)-G(a)$, and so by Propositions \protect\ref{3} and \protect\ref{0} we have $x \prec \|v\|$ and $y \prec \|w\|$. 
Also by Propositions \protect\ref{3} and \protect\ref{0} we have \begin{multline*} \|F(p+x+y)-F(G(a+v+w))\| \prec \| p+x+y-G(a+v+w) \| \\ = \|-G(a)+G(a+v)+G(a+w)-G(a+v+w)\| \prec {\lambda}\|v\|\|w\|.\end{multline*} Now \begin{align*} \|&{\x^2_{v,w}} (F \circ G)(a)\| \\&= \|F \circ G (a) - F \circ G (a+v) - F \circ G ( a+w) + F \circ G (a+v+w) \| \\ &= \| F (p) - F (p+x) - F ( p+y) + F \circ G (a+v+w) \| \\ &\leq \| F (p) - F (p+x) - F ( p+y) + F(p+x+y) \| + \|F(p+x+y)- F (G (a+v+w)) \| \\ &\prec {\lambda}\|x\| \|y\| + {\lambda}\|v\|\|w\| \prec {\lambda}\|v\|\|w\|.\end{align*} \end{pf} \section{Global properties of prevector fields}\label{global} The definition of $D^1$ and $D^2$ prevector fields relates to points in ${{}^{\mathfrak{h}} \! {M}}$ which are infinitely close to each other, in fact of distance $\prec{\lambda}$. In this section we establish properties of $D^1$ and $D^2$ prevector fields valid on appreciable neighborhoods, or on the whole of ${{}^{\mathfrak{h}} \! {M}}$. \begin{prop}\label{18} Let $F$ be a prevector field. \begin{enumerate} \item If $W$ is a coordinate neighborhood with image $U \subseteq {\mathbb R}^n$ and $B \subseteq U$ is a closed ball, then there is a finite $C$ such that $\|F(a)-a\| \leq C{\lambda}$ for all $a\in {}^*\! B$. (We use $F$ to denote both the prevector field itself, and its action in coordinates.) \item If $G$ is another prevector field, then there is a finite $\beta$ such that $\|F(a)-G(a)\| \leq \beta{\lambda}$ for all $a\in {}^*\! B$. \item If furthermore $F\equiv G$ then an infinitesimal such $\beta$ exists. \end{enumerate} \end{prop} \begin{pf} The first statement is a special case of the second, by taking $G(a)=a$ for all $a$. So we prove the second statement. Let $$A= \{ n \in {}^*{\mathbb N} : \| F(a)-G(a) \| \leq n{\lambda} \hbox{\ \ for every\ \ } a \in {}^*\! 
B \}.$$ Every infinite $n\in{}^*{\mathbb N}$ is in $A$ and so by underspill \footnote{Recall that for ${}^*{\mathbb N}$, ``underspill'' is the fact that if $A \subseteq {}^*{\mathbb N}$ is an internal set, and $A$ contains all infinite $n$ then it must also contain a finite $n$.} there is a finite $C$ in $A$. For $F \equiv G$, let $$A= \{ n \in {}^*{\mathbb N} : \| F(a)-G(a) \| \leq {{\lambda}\over n} \hbox{\ \ for every\ \ } a \in {}^*\! B \}.$$ Every finite $n \in{}^*{\mathbb N}$ is in $A$ and so by overspill \footnote{If $B \subseteq {}^*{\mathbb N}$ is internal, and $B \supseteq {\mathbb N}$, then there must also be an infinite $n \in B$.} there is an infinite $n\in{}^*{\mathbb N}$ in $A$, and take $\beta = {1 \over n} \prec\prec 1$. \end{pf} \begin{prop}\label{20} Let $F$ be a $D^1$ prevector field. If $W$ is a coordinate neighborhood with image $U \subseteq {\mathbb R}^n$ and $B \subseteq U$ is a closed ball, then there is $K \in {\mathbb R}$ such that \begin{equation*} \| F(a)-a-F(b)+b\| \leq K{\lambda}\|a-b\| \tag{1} \end{equation*} for all $a,b \in {}^* \! B$. It follows that \begin{equation*} (1-K{\lambda})\|a-b\| \leq \|F(a)-F(b)\| \leq (1+K{\lambda})\|a-b\|. \tag{2} \end{equation*} \end{prop} \begin{pf} Let $N = {\lfloor} 1 / {\lambda}{\rfloor}$. Given $a,b \in {}^* \! B$, for $k=0,\dots,N$ let $a_k=a+{k \over N}(b-a)$, then $a_k - a_{k+1} \prec{\lambda}$. Let $C_{ab}$ be the maximum of $${\| F(a_k)-a_k - F(a_{k+1}) + a_{k+1} \| \over {\lambda}\|a_k-a_{k+1}\|}$$ for $k=0,\dots,N-1$, then $C_{ab}$ is finite. For every $0\leq k \leq N-1$ we have $$\| F(a_k)-a_k - F(a_{k+1}) + a_{k+1} \| \leq C_{ab}{\lambda}\|a_k-a_{k+1}\|=C_{ab}{\lambda}{\|a-b\|\over N}.$$ So $$\| F(a)-a - F(b) + b \| \leq \sum_{k=0}^{N-1} \| F(a_k)-a_k - F(a_{k+1}) + a_{k+1} \| \leq C_{ab} {\lambda}\|a-b\|.$$ Now let $$A=\{ n \in {}^*{\mathbb N} : \| F(a)-a - F(b) + b \| \leq n {\lambda}\|a-b\| \hbox{\ \ for every\ \ } a,b \in {}^*\! 
B \}.$$ Since each $C_{ab}$ is finite, every infinite $ n \in {}^*{\mathbb N}$ is in $A$, and so by underspill, there is a finite $K$ in $A$, and the first statement follows. The second statement follows from the first as in the proof of Proposition~\protect\ref{0}. \end{pf} \begin{prop}\label{202} Let $F$ be a $D^2$ prevector field. If $W$ is a coordinate neighborhood with image $U \subseteq {\mathbb R}^n$ and $B \subseteq U$ is a closed ball, then there is $K \in {\mathbb R}$ such that $$\| {\x^2_{v,w}} F(a) \| \leq K{\lambda} \|v\| \|w\|$$ for all $a \in {}^*\! B$ and $v,w \in{}^* {\mathbb R}^n$ such that $a+v,a+w,a+v+w\in{}^*\! B$. \end{prop} \begin{pf} The proof is similar to that of Proposition~\protect\ref{20}. Let $N = {\lfloor} 1 / {\lambda}{\rfloor}$. Given $a \in{}^*\! B$ and $v,w\in{}^*{\mathbb R}^n$ such that $a+v,a+w,a+v+w\in{}^*\! B$, let $a_{k,l}=a+{k \over N}v + {l \over N}w$, $0 \leq k,l \leq N$. Let $C_{avw}$ be the maximum of $${\| \Delta^2_{v/N,w/N}F(a_{k,l}) \| \over {\lambda}\|v/N\|\|w/N\|}$$ for $0\leq k,l \leq N-1$, then $C_{avw}$ is finite. For every $0 \leq k,l \leq N-1$ we have \begin{multline*} \| F(a_{k,l})-F(a_{k+1,l}) - F(a_{k,l+1}) + F(a_{k+1,l+1}) \| = \\ \| \Delta^2_{v/N,w/N}F(a_{k,l}) \|\leq C_{avw}{\lambda}\|v/N\|\|w/N\|.\end{multline*} Summing over $0 \leq k,l \leq N-1$ we get $$\| F(a)-F(a+v)-F(a+w)+F(a+v+w)\| \leq C_{avw}{\lambda} \|v\| \|w\|.$$ By underspill as in the proof of Proposition~\protect\ref{20}, there is a single finite $K$ which works for all $a,v,w$. \end{pf} When speaking about local $D^1$ or $D^2$ prevector fields, whenever needed we will assume, perhaps by passing to a smaller domain, that a constant $K$ as in Propositions \protect\ref{20}, \protect\ref{202} exists. \begin{cor}\label{inj} If $F$ is a $D^1$ prevector field then $F$ is injective on ${{}^{\mathfrak{h}} \! {M}}$. \end{cor} \begin{pf} Let $a \neq b \in {{}^{\mathfrak{h}} \! {M}}$. If $st(a)\neq st(b)$ then clearly $F(a) \neq F(b)$.
Otherwise there exists a $B$ containing $a,b$ as in Proposition~\protect\ref{20}, and (2) of that proposition implies $F(a) \neq F(b)$. \end{pf} We will now show that a $D^1$ prevector field is in fact bijective on ${{}^{\mathfrak{h}} \! {M}}$. We first prove local surjectivity, as follows. \begin{prop}\label{surj} Let $B_1 \subseteq B_2 \subseteq B_3 \subseteq {\mathbb R}^n$ be closed balls centered at the origin, of radii $r_1<r_2<r_3$. If $F:{}^*\! B_2 \to {}^*\! B_3$ is a local $D^1$ prevector field then $F({}^*\! B_2) \supseteq {}^*\! B_1$. \end{prop} \begin{pf} Fix $0<s\in{\mathbb R}$ smaller than $r_2-r_1$ and $r_3-r_2$. We will apply transfer to the following fact: For every function $f:B_2 \to B_3$, if $\|f(x)-f(y)\| \leq 2\|x-y\|$ for all $x,y \in B_2$ and $\|f(x)-x\| < s$ for all $x \in B_2$ then $f(B_2) \supseteq B_1$. This fact is indeed true since our assumptions on $f$ imply that it is continuous, and that for every $x \in {\partial} B_2$ the straight interval between $x$ and $f(x)$ is included in $B_3 -B_1$, so $f|_{{\partial} B_2}$ is homotopic in $B_3-B_1$ to the inclusion of ${\partial} B_2$. Now if some $p \in B_1$ is not in $f(B_2)$ then $f|_{{\partial} B_2}$ is null-homotopic in $B_3-\{ p\}$, and so the same is true for the inclusion of ${\partial} B_2$, a contradiction. Applying transfer we get that for every internal function $f:{}^*\! B_2 \to {}^*\! B_3$, if $\|f(x)-f(y)\| \leq 2\|x-y\|$ for all $x,y \in {}^*\! B_2$ and $\|f(x)-x\| < s$ for all $x \in {}^*\! B_2$ then $f({}^*\! B_2) \supseteq {}^*\! B_1$. In particular this is true for a $D^1$ prevector field $F:{}^*\! B_2 \to {}^*\! B_3$, by Proposition~\protect\ref{20}(2). \end{pf} The following is immediate from Corollary~\protect\ref{inj} and Proposition~\protect\ref{surj}. \begin{thm}\label{bij} If $F: {}^* \! M \to {}^* \! M$ is a $D^1$ prevector field then $F|_{{{}^{\mathfrak{h}} \! {M}}} :{{}^{\mathfrak{h}} \! {M}} \to {{}^{\mathfrak{h}} \! {M}}$ is bijective.
\end{thm} \begin{remark}\label{finv1} On all of ${}^* \! M$, a $D^1$ prevector field may be noninjective and nonsurjective, e.g. take $M=(0,1)$ and $F:{}^* \! M \to {}^* \! M$ given by $F(x)={\lambda}$ for $x \leq {\lambda}$ and $F(x)=x$ otherwise. (Recall that the definition of $D^1$ prevector field imposes no restrictions at points of ${}^* \! M - {{}^{\mathfrak{h}} \! {M}}$). \end{remark} \begin{remark}\label{finv2} For a $D^1$ prevector field $F$, the map $F|_{{}^{\mathfrak{h}} \! {M}} :{{}^{\mathfrak{h}} \! {M}} \to {{}^{\mathfrak{h}} \! {M}}$ and its inverse $(F|_{{{}^{\mathfrak{h}} \! {M}}})^{-1}$ are not internal if $M$ is noncompact, since their domain is not internal. On the other hand, for any $A \subseteq M$, $F|_{{}^*\! A}$ is internal. Furthermore, on ${}^*\! B_1$ of Proposition~\protect\ref{surj}, $F$ has an inverse $F^{-1}:{}^*\! B_1 \to {}^*\! B_2$ in the sense that $F\circ F^{-1}(a) = a$ for all $a\in {}^*\! B_1$, and $F^{-1}$ is internal. So, for a local $D^1$ prevector field $F:{}^* U \to {}^* V$ we may always assume (perhaps for slightly smaller domain) that $F^{-1} : {}^* U \to {}^* V$ also exists, in the above sense. As mentioned, we will usually not mention the range ${}^* V$ but rather speak of a local prevector field on ${}^* U$. \end{remark} \begin{prop}\label{30} If $F$ is $D^1$ then $F^{-1}$ is $D^1$. ($F^{-1}$ exists by Remark~\protect\ref{finv2}.) More in detail, if for $x=F^{-1}(a)$, $y=F^{-1}(b)$ we are given $K \prec 1$ such that $\|F(x)-F(y)-x+y\| \leq K{\lambda}\|x-y\|$, then $\|F^{-1}(a)-F^{-1}(b)-a+b\| \leq K'{\lambda}\|a-b\|$, with $K'$ only slightly larger, namely $K' = K/(1-K{\lambda})$. \end{prop} \begin{pf} Let $x=F^{-1}(a)$, $y=F^{-1}(b)$, then \begin{multline*}\|F^{-1}(a)-F^{-1}(b)-a+b\| =\\ \|x-y-F(x)+F(y)\| \leq K {\lambda}\|x-y\| \leq K' {\lambda}\|F(x)-F(y)\|=K'{\lambda}\|a-b\|\end{multline*} by Proposition~\protect\ref{0}. \end{pf} We conclude this section with the following observations.
\begin{lemma}\label{103} Let $F,G$ be $D^1$ prevector fields. If $F(a) \equiv G(a)$ for all \emph{standard} $a$, then $F(b)\equiv G(b)$ for all nearstandard $b$, i.e. $F\equiv G$. \end{lemma} \begin{pf} Given $b$, let $K$ be as in Proposition~\protect\ref{20} for both $F$ and $G$, in a ball around $a=st(b)$. Then \begin{align*}\|F(b)-G(b)\| &= \| F(b)-F(a)-b+a+F(a) - G(a) +G(a) -G(b) -a +b \| \\& \leq \|F(b)-F(a)-b+a\| + \|F(a)-G(a)\| + \|G(a)-G(b)-a+b\| \\ & \leq K{\lambda}\|a-b\| +\|F(a)-G(a)\| + K{\lambda}\|a-b\| \prec\prec{\lambda}. \end{align*} \end{pf} Recall that Definition~\protect\ref{realize}, which defines when a prevector field $F$ realizes a classical vector field $X$, involves only \emph{standard} points. It follows from Lemma~\protect\ref{103} that if $F$ is $D^1$ then this determines $F$ up to equivalence. Namely, we have the following. \begin{cor}\label{sr} Let $U \subseteq{{\mathbb R}^n}$ be open, and $X:U\to{{\mathbb R}^n}$ a classical vector field. If $F,G$ are two $D^1$ prevector fields that realize $X$ then $F\equiv G$. In particular, if $X$ is Lipschitz and $G$ is a $D^1$ prevector field that realizes $X$, then $F\equiv G$, where $F$ is the prevector field obtained from $X$ as in Example~\protect\ref{e}. \end{cor} \section{The flow of a prevector field}\label{f} In this section we define and study the flow of global and local prevector fields. In our definition of a prevector field as a map from ${}^* \! M$ to itself, we wish to view $F$ as its own flow at time ${\lambda}$. The flow for later time $t$ should thus be defined by iterating $F$ the appropriate number of times. (Thus the classical notion of a vector field being the infinitesimal generator of its flow receives literal meaning in our setting.) Thus, for a global prevector field $F: {}^* \! M \to {}^* \! 
M$ and for $0 \leq t \in{}^*{\mathbb R}$, let $n = n(t) = {\lfloor} t/{\lambda} {\rfloor}$ and define the flow $F_t$ of $F$ at time $t$ to be $F_t(a)=F^n(a)$, where $F^n$ is given by the map ${}^*{\mbox{Map}} (M) \times {}^* {\mathbb N} \to {}^*{\mbox{Map}}(M)$ which is the extension of the map $\mbox{Map}(M) \times {\mathbb N} \to \mbox{Map}(M)$ taking $(f,n)$ to $f^n=f \circ f\circ \cdots \circ f$. The flow of a local prevector field is similarly defined, only a bit of care is needed regarding its domain. So, for local prevector field $F:{}^* U \to {}^* V$, extend $F$ to $F':{}^* V \to {}^* V$ by defining $F'(a)=a$ for all $a \in {}^* V - {}^* U$, and let $Y_n = \{ a \in {}^* U \ : \ (F')^n(a) \in {}^* U \}$. We set the domain of $F_t$ to be $Y_{n(t)}$, where it is defined by $F_t(a) = F'_t(a)$. We would also like to consider $F_t$ for $t \leq 0$. For global prevector field $F$ which is bijective on ${}^* \! M$, or for local prevector field which is bijective in the sense of Remark~\protect\ref{finv2}, in particular a $D^1$ local prevector field, we define $F_t$ for $t \leq 0$ to be $(F^{-1})_{-t}$. Note that for any \emph{global} prevector field $F$, $F_t$ is defined for all $t \geq 0$, unlike the situation for the classical flow of a classical vector field. Similarly $F_t$ is defined for all $t \leq 0$ if $F$ is bijective. Directly from the definition of a flow, we may immediately notice the following. \begin{prop}\label{invfl} A prevector field $F$ is invariant under its own flow $F_t$, where the action of a map $h$ on a prevector $(a,x)$ is given, as in Definition~\protect\ref{dd5}, by $(h(a),h(x))$. \end{prop} \begin{pf} Let $n = {\lfloor} t/{\lambda} {\rfloor}$ then $F_t((a,F(a))) = F^n((a,F(a)))=(F^n(a),F^{n+1}(a)) = (b,F(b))$ where $b=F^n(a)=F_t(a)$. 
\end{pf} \subsection{Dependence on initial condition and on prevector field}\label{sf1} We now establish bounds on the distance in coordinates between two flows $F_t(a),F_t(b)$ of a given $D^1$ prevector field $F$, and between the flows $F_t(a),G_t(a)$ of two different prevector fields. These bounds can of course be combined into a bound on the distance between $F_t(a)$ and $G_t(b)$. \begin{thm}\label{flow1} Let $F$ be a local $D^1$ prevector field on ${}^* U$. Given $p \in U$ and a coordinate neighborhood of $p$ with image $W \subseteq {\mathbb R}^n$, let $B' \subseteq B \subseteq W$ be closed balls of radii $r/2,r$ around the image of $p$. Suppose $\|F(a)-a-F(b)+b\| \leq K{\lambda}\|a-b\|$ for all $a,b \in {}^*\! B$, with $K$ a finite constant (such finite $K$ exists by Proposition~\protect\ref{20}), then there is $0 < T \in {\mathbb R}$ such that $F_t(a) \in {}^*\! B$ for all $a \in {}^*\! B'$ and $-T \leq t \leq T$. Furthermore, for all $a,b \in {}^*\! B'$ and $0 \leq t \leq T$: $\|F_t(a) - F_t(b) \| \leq e^{Kt}\|a-b\|$. If we take a slightly larger constant $K'=K/(1-K{\lambda})^2$, then for all $a,b \in {}^*\! B'$ and $-T \leq t \leq T$: $$e^{-K'|t|}\|a-b\|\leq \|F_t(a) - F_t(b) \| \leq e^{K'|t|}\|a-b\|.$$ \end{thm} \begin{pf} We first prove the statement for $t\geq 0$. Let $C$ be as in Proposition~\protect\ref{18}(1). Take $T = {r \over 2C}$, then $0 < T \in {\mathbb R}$, and we have by internal induction \footnote{If $A$ is an internal subset of ${}^*{\mathbb N}$ that contains $1$ and is closed under the successor function $n \mapsto n + 1$, then $A = {}^*{\mathbb N}$. So, one can prove by induction in ${}^*{\mathbb N}$, as long as all objects under discussion are internal. This is called \emph{internal induction}.} for $a \in {}^*\! B'$, $0 \leq t \leq T$, and $n={\lfloor} t / {\lambda} {\rfloor}$, that $\|F^n(a) - a\| \leq \sum_{m=1}^n \|F^m(a) - F^{m-1}(a)\| \leq nC{\lambda} \leq {r \over 2}$, and so $F^n(a) \in {}^*\! B$. 
By Proposition~\protect\ref{20}(2) we have $(1-K{\lambda})\|a-b\| \leq \|F(a)-F(b)\| \leq (1+K{\lambda})\|a-b\|$ for all $a,b \in {}^*\! B$, and so by internal induction $$(1-K{\lambda})^n\|a-b\|\leq \|F^n(a) - F^n(b) \| \leq (1+K{\lambda})^n\|a-b\|.$$ For $t\geq 0$ we have $(1+K{\lambda})^n \leq e^{Kt}$ since $1+K{\lambda} \leq e^{K{\lambda}}$, and letting $h=\frac{1}{1-K{\lambda}}$ we have $e^{-hKt}\leq (1-K{\lambda})^n$ since $e^{-hK{\lambda}} \leq 1-hK{\lambda} + \frac{h^2 K^2{\lambda}^2}{2} \leq 1-hK{\lambda} +hK^2{\lambda}^2=1-K{\lambda}$. For $t\leq 0$ we are considering $F^{-1}$. The same constant $C$ can be used, and by Proposition~\protect\ref{30} $K$ should be replaced by $hK$, and so $hK$ is replaced by $K'=h^2 K$. \end{pf} \begin{thm}\label{flow2} Let $F$ be a $D^1$ local prevector field on ${}^* U$ and let $G$ be any local prevector field on ${}^* U$. Given a coordinate neighborhood included in $U$ with image $W \subseteq{\mathbb R}^n$, let $A'\subseteq A \subseteq {}^* W$ be internal sets. Suppose \begin{enumerate} \item $\|F(a)-G(a)\| \leq \beta{\lambda}$ for all $a\in A$, with some constant $\beta$. (If $F\equiv G$ then an infinitesimal such $\beta$ exists by Proposition~\protect\ref{18}(3)), \item $\| F(a)-a-F(b)+b\| \leq K{\lambda}\|a-b\|$ for all $a,b \in A$, with $K$ a finite constant. (Such finite $K$ exists by Proposition~\protect\ref{20}), \item $0<T\in{\mathbb R}$ is such that $F_t(a)$ and $G_t(a)$ are in $ A$ for all $a \in A'$ and $0 \leq t \leq T$. \end{enumerate} Then for all $a\in A'$ and $0 \leq t \leq T$, $$\|F_t(a) - G_t(a) \| \leq {\beta\over K} (e^{Kt}-1)\leq \beta t e^{Kt}.$$ If $G^{-1}$ exists, e.g. if $G$ is also $D^1$, and if $F_t(a)$ and $G_t(a)$ are in $A$ for all $a \in A'$ and $-T \leq t \leq T$, then $\|F_t(a) - G_t(a) \| \leq {\beta\over K} (e^{K|t|}-1) \leq \beta |t| e^{K|t|}$ for all $-T \leq t \leq T$. \end{thm} \begin{pf} Again it is enough to prove the statement for positive $t$.
We prove by internal induction that $$ \|F^n(a) - G^n(a) \| \leq {\beta\over K} \Bigl((1+K{\lambda})^n - 1\Bigr) $$ which implies the statement. By Proposition~\protect\ref{20}(2) we have \begin{align*} \|F^{n+1}(a)-G^{n+1}(a) \| &\leq \| F(F^n(a))-F(G^n(a)) \| + \| F(G^n(a))-G(G^n(a)) \| \\ & \leq (1+K{\lambda}) \|F^n(a)-G^n(a)\| + \beta{\lambda} \end{align*} from which the induction step from $n$ to $n+1$ follows. \end{pf} \begin{cor}\label{flow7} For $F$ and $T$ as in Theorem~\protect\ref{flow1}, if $a\approx b$ then $F_t(a) \approx F_t(b)$ for all $0 \leq t \leq T$. \end{cor} \begin{cor}\label{flow3} For $F$ and $G$ as in Theorem~\protect\ref{flow2}, if $F\equiv G$, then $F_t(a) \approx G_t(a)$ for all $0 \leq t \leq T$. \end{cor} \begin{pf} By Proposition~\protect\ref{18}(3) there is an infinitesimal $\beta$ for the statement of Theorem~\protect\ref{flow2}, which gives $\|F_t(a) - G_t(a) \| \leq \beta t e^{Kt}$, so $F_t(a) \approx G_t(a)$. \end{pf} Our flow $F_t$ of a prevector field $F$ induces a classical flow on $M$ as follows. \begin{dfn}\label{clflow} Let $F$ be a $D^1$ local prevector field. On a neighborhood $B'\subseteq{{\mathbb R}^n}$ and interval $[-T,T]$ as in Theorem~\protect\ref{flow1}, we define the \emph{standard} flow $h^F_t:B' \to M$ induced by $F$ as follows: $h^F_t(x) = st(F_t(x))$. \end{dfn} The following are immediate consequences of Theorem~\protect\ref{flow1} and Corollary~\protect\ref{flow3}. \begin{thm}\label{hft} Given a $D^1$ prevector field $F$ the following hold: \begin{enumerate} \item $h^F_t$ is Lipschitz continuous with constant $e^{K|t|}$. \item $h^F_t$ is injective. \item If $G$ is another $D^1$ prevector field and $F \equiv G$ then $h^F_t = h^G_t$. \end{enumerate} \end{thm} \begin{remark}\label{kf} If $F$ is obtained from a classical vector field $X$ by the procedure of Example~\protect\ref{e} then Keisler \cite[Theorem 14.1]{k} shows that our $h^F_t$ is in fact the flow of $X$ in the classical sense.
By Theorem~\protect\ref{hft}(3) this will be true for any prevector field $F$ that realizes $X$. \end{remark} The results of this subsection have the following application to the standard setting. \begin{clcor}\label{cc1} For open $U \subseteq{{\mathbb R}^n}$ let $X,Y:U\to{{\mathbb R}^n}$ be classical vector fields, where $X$ is Lipschitz with constant $K$, and $\| X(x)-Y(x) \| \leq b$ for all $x\in U$. If $x(t),x'(t)$ are integral curves of $X$ then $\|x(t)-x'(t)\| \leq e^{Kt}\|x(0) - x'(0)\|$. If $y(t)$ is an integral curve of $Y$ with $x(0)=y(0)$ then $\|x(t) - y(t) \| \leq {b\over K} (e^{Kt}-1)\leq b t e^{Kt}$. \end{clcor} \begin{pf} Define prevector fields on ${}^* U$ by $F(a)-a={\lambda} X(a)$ and $G(a)-a={\lambda} Y(a)$ as in Example~\protect\ref{e}, and apply Theorems \protect\ref{flow1}, \protect\ref{flow2}, and Remark~\protect\ref{kf}. \end{pf} To conclude this section we look at the flow of a prevector field in an infinitesimal neighborhood of a fixed point. This corresponds to a zero of a vector field in the classical setting. In a neighborhood of such zero, one often approximates the given vector field with a simpler one (e.g. the linear approximation), to obtain an approximation of the original vector field's flow. We present the following approach for prevector fields, which we apply in Section~\protect\ref{secpend} to infinitesimal oscillations of a pendulum. \begin{cor}\label{rsc} Let $F,G$ be local $D^1$ prevector fields on ${}^* U$ where $U$ is a neighborhood of $p \in {\mathbb R}^n$, and assume $F(p)=G(p)=p$. Fix an infinitesimal $a>0$ and let $Q=\{x\in{}^* U : x-p\prec a\}$ (an external set). \begin{enumerate} \item If $F(x)-G(x) \prec\prec {\lambda} a$ for all $x\in Q$ then $F_t(x)-G_t(x)\prec\prec a$ for all $x\in Q$, and the appropriate range of $t$, by which we mean finite $t$ for which $F_s(x) \in Q$ for all $0\leq s \leq t$. 
\item If $F(x)-G(x) \prec\prec \|F(x)-x\|$ for all $x\in Q$, then $F_t(x)-G_t(x) \prec\prec \|F_t(x)-p\|$ for all $ x\in Q$, and an appropriate range of $t$ as above. \end{enumerate} \end{cor} \begin{pf} For convenience assume $p=0$ so we have $F(0)=G(0)=0$ and $x\in Q$ simply means $x\prec a$. We first note that $F(x)-x =F(x)-x-F(0)+0 \prec {\lambda}\|x\|$ by $D^1$ and Proposition~\protect\ref{20}. Now let $F'(x)=\frac{1}{a}F(ax)$, then $F'$ is defined for all $x\prec 1$ (i.e. $x\in { {}^{\mathfrak{h}} }{\mathbb R}^n$). For $x\prec 1$, $F'(x)-x=\frac{1}{a}\big(F(ax)-ax\big) \prec\frac{1}{a}{\lambda}\|ax\|={\lambda}\|x\|\prec{\lambda}$ so $F'$ is a prevector field. We have $F'(x)-x-F'(y)+y= \frac{1}{a}\big(F(ax)-ax-F(ay)+ay\big)\prec \frac{1}{a}{\lambda}\|ax-ay\|={\lambda}\|x-y\|$ so $F'$ is $D^1$. Similarly define $G'$. For (1) we have $F'(x)-G'(x)=\frac{1}{a}\big(F(ax)-G(ax)\big)\prec\prec\frac{1}{a}{\lambda} a={\lambda}$ so $F'\equiv G'$. We thus get by Corollary~\protect\ref{flow3} that $F'_t(x)\approx G'_t(x)$ for $x\prec 1$ and for appropriate range of $t$. Now, for $x\prec a$ we have $\frac{1}{a}x\prec 1$ so $F_t(x)-G_t(x)=a\big(F'_t(\frac{1}{a}x)-G'_t(\frac{1}{a}x)\big)\prec\prec a$. (We remark that though our range of $t$ gives $F'_s(x) \prec 1$ for all $0\leq s \leq t$, which is an external condition, in fact there is a finite ball $B$ such that $F'_s(x) \in {}^* B$ for all $0\leq s \leq t$. This can be seen e.g. by underspill as in the proof of Proposition~\protect\ref{18}.) For (2), the statement holds for $x=0$ since $0\prec\prec 0$. Given a fixed $0 \neq x\prec a$ let $b=\|x\|$. Then for all $y\prec b$ we have $F(y)-G(y) \prec\prec \| F(y)-y\| \prec{\lambda}\|y\| \prec {\lambda} b$, so by (1) applied to $b$ we have $F_t(x)-G_t(x)\prec\prec b$ for appropriate range of $t$. By Theorem~\protect\ref{flow1} we have $b= \|x-0\| \prec \|F_t(x)-F_t(0)\|=\|F_t(x)\|$, so together $F_t(x)-G_t(x)\prec\prec \|F_t(x)\|$. 
\end{pf} \begin{remark}\label{sc} We can slightly weaken the assumptions in Corollary~\protect\ref{rsc} by replacing the assumption that $G$ is $D^1$ by the weaker assumption that all $x\prec a$ satisfy $G(x)-x \prec {\lambda}\|x\|$. The proof remains unchanged. \end{remark} \begin{example}\label{sc2} In Corollary~\protect\ref{rsc} we take $p=0\in\mathbb C$. We further assume that 0 is the only fixed point of $F,G$ in $Q$ (this corresponds to an isolated zero in the classical setting). For $0\neq a,b\in{}^*\mathbb C$ we will say that $a$ and $b$ are \emph{adequal} if $\frac{a}{b}\approx 1$. Then Corollary~\protect\ref{rsc}(2) tells us that if $F(x)-x$ and $G(x)-x$ are adequal for all $0\neq x\in Q$ then $F_t(x)$ and $G_t(x)$ are adequal for all $0\neq x\in Q$, and an appropriate range of $t$ as in Corollary~\protect\ref{rsc}. \end{example} \subsection{The canonical representative prevector field} Once we have the standard function $h^F_t$, we can extend it to the nonstandard domain as usual, and use it to define a new prevector field ${\widetilde{F}}$ as follows. \begin{dfn}\label{can} ${\widetilde{F}}=h^F_{\lambda}$. \end{dfn} The map ${\widetilde{F}}$ is indeed a prevector field, i.e. ${\widetilde{F}}(a)-a \prec{\lambda}$ for all $a$. Indeed, for $C \in{\mathbb R}$ given by Proposition~\protect\ref{18}(1) we have $\|F^n(a) - a\| \leq \sum_{m=1}^n \|F^m(a) - F^{m-1}(a)\| \leq nC{\lambda} $, which implies $\| h^F_t(a) -a\| \leq Ct$, which by transfer implies $\|{\widetilde{F}}(a)-a\| \leq C{\lambda}$. By Theorem~\protect\ref{hft}(3), if $F \equiv G$ then ${\widetilde{F}} = {\widetilde{G}}$. We will show in Theorem~\protect\ref{tld} that ${\widetilde{F}} \equiv F$, and so ${\widetilde{F}}$ is a canonical choice of a representative from the equivalence class of $F$. (Perhaps in a smaller neighborhood of a given point, as required by Theorem~\protect\ref{flow1}.) We will show in Propositions \protect\ref{111}, \protect\ref{112} that if $F$ is $D^1$ (resp. 
$D^2$) then ${\widetilde{F}}$ is $D^1$ (resp. $D^2$). That is, if a given equivalence class contains some member which is $D^1$ (resp. $D^2$) then the canonical representative ${\widetilde{F}}$ of that equivalence class is also $D^1$ (resp. $D^2$). We note that indeed not all members of the given class are $D^1$ (resp. $D^2$), for example for ${}^*{\mathbb R}$ take $F(x)=x$ for all $x\in{}^*{\mathbb R}$, and $G(x)=x$ for all $x\neq 0$ and $G(0)={\lambda}^2$. Then $F$ is $D^2$, $F\equiv G$, but $G$ is not even $D^1$, as is seen by taking $a=0,b={\lambda}^2$. \begin{lemma}\label{ek} Let $F$ be a local $D^1$ prevector field defined on ${}^* U$. Assume $$\|F(a)-F(b)-a+b\| \leq K{\lambda}\|a-b\|$$ for all $a,b \in {}^* U$. Then the flow of $F$ satisfies: $$\| F^n(a)-F^n(b)-a+b \| \leq K {\lambda} n e^{K{\lambda} n} \|a-b\|.$$ \end{lemma} \begin{pf} We have \begin{align*} \| F^n(a)-F^n(b)-a+b \| &\leq \sum_{i=1}^n \| F^i(a) - F^i(b) -F^{i-1}(a) +F^{i-1}(b) \| \\ &\leq \sum_{i=1}^n K{\lambda} \|F^{i-1}(a) - F^{i-1}(b) \| \\& \leq \sum_{i=1}^n K{\lambda} (1+K{\lambda})^{i-1} \|a - b \| \leq K {\lambda} n e^{K{\lambda} n} \|a-b\|. \end{align*} The third inequality is by internal induction as in the proof of Theorem~\protect\ref{flow1}. \end{pf} \begin{prop}\label{111} If $F$ is $D^1$ then ${\widetilde{F}}$ is $D^1$. \end{prop} \begin{pf} Assume $\|F(a)-F(b)-a+b \| \leq K{\lambda}\|a-b\|$ for all $a,b$ in some ${}^*\! B$ as in Proposition~\protect\ref{20}. Then for $n = {\lfloor} t / {\lambda} {\rfloor}$ we have by Lemma~\protect\ref{ek} $ \| F^n(a)-F^n(b)-a+b \| \leq K {\lambda} n e^{K{\lambda} n} \|a-b\| \leq Kte^{Kt}\|a-b\|.$ So for standard $a,b$ we have $\|h^F_t(a)-h^F_t(b)-a+b\| \leq Kte^{Kt}\|a-b\|$. Extending back to the nonstandard domain and evaluating at $t={\lambda}$ we get, by transfer, $\| {\widetilde{F}}(a) - {\widetilde{F}}(b) -a +b \| \leq K{\lambda} e^{K{\lambda}}\|a-b\|$. \end{pf} \begin{prop}\label{112} If $F$ is $D^2$ then ${\widetilde{F}}$ is $D^2$. 
\end{prop} \begin{pf} Assume $\|{\x^2_{v,w}} F(a)\| \leq K{\lambda}\|v\| \|w\|$ for all $a,v,w$ in some ${}^*\! B$ as in Proposition~\protect\ref{202}. We prove by internal induction that $$\| {\x^2_{v,w}} F^n (a) \|=\| F^n(a)-F^n(a+v)-F^n(a+w)+F^n(a+v+w)\| \leq K{\lambda}\sum_{i=n-1}^{2n-2} (1+K{\lambda})^i \|v\| \|w\|.$$ Let $p=F^n(a)$, $x=F^n(a+v)-F^n(a)$, $y=F^n(a+w)-F^n(a)$. Then $\|x\| \leq (1+K{\lambda})^n \|v\|$, $\|y\| \leq (1+K{\lambda})^n \|w\|$, and \begin{align*} \|F(p+x+y) - & F^{n+1}(a+v+w)\| \\&\leq (1+K{\lambda})\|p+x+y - F^n(a+v+w)\| \\& = (1+K{\lambda}) \|-F^n(a)+F^n(a+v)+F^n(a+w)-F^n(a+v+w)\| \\& \leq(1+K{\lambda})K{\lambda}\sum_{i=n-1}^{2n-2} (1+K{\lambda})^i \|v\| \|w\| = K{\lambda}\sum_{i=n}^{2n-1} (1+K{\lambda})^i \|v\| \|w\|, \end{align*} by the induction hypothesis. Now \begin{align*} \|& F^{n+1}(a)-F^{n+1}(a+v)-F^{n+1}(a+w)+F^{n+1}(a+v+w)\| \\ \leq& \|F(p) - F(p+x) - F(p+y) + F(p+x+y)\| +\|F(p+x+y)-F^{n+1}(a+v+w)\| \\ \leq& K{\lambda}\|x\|\|y\| + K{\lambda}\sum_{i=n}^{2n-1} (1+K{\lambda})^i \|v\| \|w\| \\ \leq& K{\lambda}(1+K{\lambda})^{2n} \|v\| \|w\| + K{\lambda}\sum_{i=n}^{2n-1} (1+K{\lambda})^i \|v\| \|w\| =K{\lambda}\sum_{i=n}^{2n} (1+K{\lambda})^i \|v\| \|w\|, \end{align*} which completes the induction. So for $n={\lfloor} t/{\lambda} {\rfloor}$ we have \begin{multline*} \| F^n(a)-F^n(a+v)-F^n(a+w)+F^n(a+v+w)\| \\ \leq K{\lambda}\sum_{i=n-1}^{2n-2} (1+K{\lambda})^i \|v\| \|w\| \leq K{\lambda} n e^{2Kt} \|v\| \|w\| \leq Kt e^{2Kt} \|v \| \|w\|. \end{multline*} Thus for standard $a,v,w$ we have $$\| h^F_t(a)-h^F_t(a+v)-h^F_t(a+w)+h^F_t(a+v+w)\| \leq Kt e^{2Kt} \|v \| \|w\|.$$ Extending back to the nonstandard domain and evaluating at $t={\lambda}$ we get: $$\| {\widetilde{F}}(a)-{\widetilde{F}}(a+v)-{\widetilde{F}}(a+w)+{\widetilde{F}}(a+v+w)\| \leq K{\lambda} e^{2K{\lambda}} \|v \| \|w\|.$$ \end{pf} Next we would like to prove that ${\widetilde{F}} \equiv F$. We first need two lemmas.
\begin{lemma}\label{101} Let $F$ be a local prevector field defined on ${}^* U$. Assume $\|F(a)-a\| \leq C{\lambda}$ and $\|F(a)-a-F(b)+b\| \leq K{\lambda}\|a-b\|$ for all $a,b \in {}^* U$. Then the flow of $F$ satisfies: $\|F^n(a)-a - n(F(a)-a)\| \leq KC n^2 {\lambda}^2$. \end{lemma} \begin{pf} We have \begin{align*} \|F^n(a)-a &- n(F(a)-a)\| = \|\sum_{i=1}^n \Bigl(F^i(a)-F^{i-1}(a) - (F(a)-a)\Bigr)\| \\ &\leq \sum_{i=1}^n \| F(F^{i-1}(a)) - F^{i-1}(a) - F(a)+a \| \leq \sum_{i=1}^n K{\lambda} \|F^{i-1}(a) - a \| \\& \leq \sum_{1 \leq j < i \leq n} K{\lambda} \|F^j(a) - F^{j-1}(a) \| \leq n^2 K{\lambda} C{\lambda}. \end{align*} \end{pf} \begin{lemma}\label{102} Let $v\in{}^* {\mathbb R}^n$ with $v \prec {\lambda}$, let $g:[0,\infty)\to{\mathbb R}^n$ be the standard function $g(t) = st({\lfloor} t / {\lambda} {\rfloor} v )$, and let $V = st(v/{\lambda})$. Then $g(t)=tV$ and the extension of $g$ back to the nonstandard domain satisfies $g( {\lambda}) \equiv v$. \end{lemma} \begin{pf} Let $n = {\lfloor} t / {\lambda} {\rfloor} $. Then $$\|tV -nv\| \leq \|tV -t(v/{\lambda})\| + \|t(v/{\lambda}) - n{\lambda}(v/{\lambda}) \| = t\|V-(v/{\lambda})\| + | t - n{\lambda} | \|v/{\lambda}\| \prec\prec 1.$$ This shows that $g(t)=tV$. So $g({\lambda}) = {\lambda} V$, and we have $\|{\lambda} V -v\| = {\lambda} \|V- v/{\lambda}\| \prec\prec {\lambda}$. \end{pf} We are now ready to prove the following. \begin{thm}\label{tld} If $F$ is a local $D^1$ prevector field then ${\widetilde{F}} \equiv F$. \end{thm} \begin{pf} By Proposition \protect\ref{111} and Lemma~\protect\ref{103} it is enough to show that ${\widetilde{F}}(a) \equiv F(a)$ for all \emph{standard} $a$. So, for standard $a$ let $g_t(a)=st\Bigl(a+{\lfloor} t / {\lambda} {\rfloor} (F(a)-a) \Bigr)$. Letting $n={\lfloor} t / {\lambda} {\rfloor}$ we have $$\| h^F_t(a)- g_t(a) \| =\|st(F^n(a)) - st\Bigl(a+n (F(a)-a) \Bigr)\| = st\|F^n(a) - a-n (F(a)-a)\| \leq At^2$$ for some $A \in {\mathbb R}$, by Lemma~\protect\ref{101}. 
Extending and evaluating at $t={\lambda}$ gives $\|{\widetilde{F}}(a)-g_{\lambda}(a)\| \leq A{\lambda}^2 \prec\prec{\lambda}$, i.e. ${\widetilde{F}}(a) \equiv g_{\lambda}(a)$. Now $g_t(a)-a= st\Bigl({\lfloor} t / {\lambda} {\rfloor} (F(a)-a) \Bigr)$ so by Lemma~\protect\ref{102} we have $g_{\lambda}(a)-a \equiv F(a)-a$, so $g_{\lambda}(a) \equiv F(a)$, and together we get ${\widetilde{F}}(a)\equiv F(a)$. \end{pf} To conclude, ${\widetilde{F}}$ is a canonically chosen representative from the equivalence class of $F$ (perhaps in a smaller neighborhood of a given point), and if $F$ is $D^1$ (resp. $D^2$) then ${\widetilde{F}}$ is $D^1$ (resp. $D^2$). \subsection{Infinitesimal oscillations of a pendulum}\label{secpend} We now demonstrate and discuss some of the concepts and results of this section in relation to a concrete physical problem, that of small oscillations of a pendulum. (Compare Stroyan \cite{str}.) Let $x$ denote the angle between a pendulum and the downward vertical direction. By considering the projection of the force of gravitation in the direction of motion, one obtains the equation of motion $$m\ell\ddot{x} = -mg\sin x$$ where $m$ is the mass of the bob of the pendulum, $\ell$ is the length of its massless rod, and $g$ is the constant of gravity. Letting $\omega=\sqrt{g/\ell}$ we have $\ddot{x} = -\omega^2\sin x$. The initial condition of releasing the pendulum at angle $a$ is described by $x(0)=a$, $\dot{x}(0)=0$. We replace this single second order differential equation with the system of two first order equations $\dot{x}=\omega y$, $\dot{y}=-\omega\sin x$, and initial condition $(x,y)=(a,0)$. The classical vector field corresponding to this system is $X(x,y)=(\omega y,-\omega\sin x)$. We are interested in ``small'' oscillations in the classical setting, i.e. the limiting behavior when the parameter $a$ above tends to 0, and correspondingly, infinitesimal oscillations in the hyperreal setting, i.e. when $a$ is infinitesimal. 
To this end, if $p_a(t)$ is the classical motion with initial angle $a$, we look at the motion rescaled by the factor $a$, i.e. we look at $\frac{p_a(t)}{a}$. This is the $x$ component of the flow of the rescaled vector field $Y(x,y)=\frac{1}{a}X(ax,ay)=(\omega y, -\omega \frac{\sin ax}{a})$. The initial condition $(a,0)$ for $X$ corresponds to initial condition $(1,0)$ for $Y$. We can incorporate the parameter $a$ into our manifold and look at the vector field $Z$ on ${\mathbb R}^3$ given by $Z(x,y,a)=(\omega y, -\omega \frac{\sin ax}{a},0)$, and initial condition $(1,0,a)$. Note that $Z$ is well defined and analytic also for $a=0$ (indeed $\frac{\sin ax}{a} = x - \frac{a^2x^3}{3!} + \frac{a^4x^5}{5!} - \cdots$), and its value for $a=0$ is $Z(x,y,0)=(\omega y , -\omega x, 0)$. The classical flow for $a=0$, i.e. initial condition $(1,0,0)$, is $(\cos\omega t,-\sin\omega t,0)$, and so by Classical Corollary \protect\ref{cc1} we have $\frac{p_a(t)}{a} \to \cos\omega t$ (in fact, uniformly on finite intervals). It follows that for infinitesimal $a$, $\frac{p_a(t)}{a} \approx \cos\omega t$, for all finite $t$. The above computation was for the classical flow of a classical vector field, and was then extended to the nonstandard domain. But we may also view the flow itself as occurring in the nonstandard domain ${}^*{\mathbb R}^2$, via the prevector field $F(x,y)=(x+{\lambda}\omega y, y-{\lambda}\omega \sin x)$ with initial condition $(a,0)$. This is the prevector field obtained from our classical vector field $X$ by the procedure of Example~\protect\ref{e}. After rescaling as before, we have the prevector field $G(x,y)=(x+{\lambda}\omega y, y-{\lambda}\omega \frac{\sin ax}{a})$ and initial condition $(1,0)$. Define the prevector field $E(x,y)=(x+{\lambda}\omega y, y-{\lambda}\omega x)$, then for infinitesimal $a$ we have $G\equiv E$ since $E(x,y)-G(x,y)=(0,{\lambda}\omega (\frac{\sin ax}{ax}-1)x)$ and $\frac{\sin ax}{ax}-1\prec\prec 1$.
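The equivalence $G\equiv E$, and the resulting closeness of their flows guaranteed by Corollary~\protect\ref{flow3}, can be illustrated numerically in the standard world by iterating the two maps with a small finite step $h$ playing the role of ${\lambda}$. The following sketch is ours; the values of $h$, $\omega$, $a$ and the tolerances in the comments are ad hoc, chosen only to make the point visible.

```python
import math

# Standard-world sketch: a small finite step h plays the role of lambda,
# and a small amplitude a plays the role of an infinitesimal.
h, omega, a, t = 1e-4, 1.0, 1e-3, 1.0

def G(x, y):
    # Rescaled pendulum prevector field G(x,y) = (x + h*w*y, y - h*w*sin(ax)/a)
    return x + h * omega * y, y - h * omega * math.sin(a * x) / a

def E(x, y):
    # Its harmonic-oscillator approximation E(x,y) = (x + h*w*y, y - h*w*x)
    return x + h * omega * y, y - h * omega * x

def flow(step, x, y, n):
    """Compute the n-th iterate, the finite-step analogue of F_t with n = floor(t/h)."""
    for _ in range(n):
        x, y = step(x, y)
    return x, y

n = int(t / h)
gx, gy = flow(G, 1.0, 0.0, n)
ex, ey = flow(E, 1.0, 0.0, n)

# The two trajectories stay very close, and both track the rotation
# (cos wt, -sin wt), up to discretization error of order h.
dist = math.hypot(gx - ex, gy - ey)
print(dist, gx, gy)
```

The distance between the two trajectories at time $t=1$ is far smaller than the discretization error of either one, which is the finite-step shadow of $G_t(1,0)\approx E_t(1,0)$.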
Let us define another prevector field $H(x,y)=(x\cos{\lambda}\omega +y\sin{\lambda}\omega , -x\sin{\lambda}\omega + y\cos{\lambda}\omega )$, then $H$ is clockwise rotation of the $xy$ plane by angle ${\lambda}\omega$, so $H_t(1,0) \approx (\cos\omega t, -\sin\omega t)$. We have $\cos{\lambda}\omega-1\prec\prec{\lambda}\omega$ and $\sin{\lambda}\omega-{\lambda}\omega\prec\prec{\lambda}\omega$, so $E\equiv H$. We have $G\equiv E\equiv H$, so by Corollary~\protect\ref{flow3}, since $E$ is evidently $D^1$, $G_t(1,0) \approx (\cos\omega t, -\sin\omega t)$. (We have used arguments from the proof of Corollary~\protect\ref{rsc} rather than quoting it.) So finally, the $x$ component of $G_t(1,0)$ is $\approx \cos\omega t$ for any infinitesimal $a$, which means that the $x$ component of $\frac{F_t(a,0)}{a}$ is $\approx \cos\omega t$ for any infinitesimal $a$. We may thus say the following. \begin{cor}\label{pendcor} The motion of a pendulum with infinitesimal amplitude $a$ is practically harmonic motion, in the sense that if rescaled to appreciable size, it is infinitely close to standard harmonic motion, for all finite time. \end{cor} Equivalently, one could say that the motion itself is harmonic with the given infinitesimal amplitude $a$, with error which is infinitely smaller than $a$. \section{Realizing classical vector fields}\label{rl} Given a classical vector field on a smooth manifold $M$, we seek a prevector field realizing it. Using Example~\protect\ref{e} we can do this only locally, while by Proposition~\protect\ref{re} these local prevector fields are compatible up to equivalence. This leads to the following definition. \begin{dfn}\label{coh1} A $D^1$ (resp. $D^2$) \emph{coherent family} of local prevector fields on $M$ is a family $\{ (F_\alpha , U_\alpha) \}_{\alpha \in J}$ where $\{ U_\alpha \}_{\alpha \in J}$ is an open covering of $M$, and each $F_\alpha$ is a local $D^1$ (resp. 
$D^2$) prevector field on ${}^* U_\alpha$, such that for $\alpha,\beta\in J$, $F_\alpha|_{{}^*\! U_\alpha \cap {}^*\! U_\beta} \equiv F_\beta|_{{}^*\! U_\alpha \cap {}^*\! U_\beta}$. \end{dfn} \begin{dfn}\label{coh2} A coherent family $\{ (G_\alpha , V_\alpha) \}_{\alpha \in K}$ is said to be a \emph{refinement} of $\{ (F_\alpha , U_\alpha) \}_{\alpha \in J}$, if for each $\alpha \in K$ there is $\beta \in J$ such that $V_\alpha \subseteq U_\beta$ and $G_\alpha \equiv F_\beta|_{{}^*\! V_\alpha}$. \end{dfn} \begin{dfn}\label{coh3} A refinement $\{ (G_\alpha , V_\alpha) \}_{\alpha \in K}$ of $\{ (F_\alpha , U_\alpha) \}_{\alpha \in J}$ is said to be a \emph{flowing} refinement if there are $0 < T_\alpha \in {\mathbb R}$ for each $\alpha \in K$ such that the flow $h^{G_\alpha}_t$ is defined on $V_\alpha$ for $0 \leq t \leq T_\alpha$. \end{dfn} By Theorem~\protect\ref{flow1} any $D^1$ coherent family of prevector fields has a $D^1$ flowing refinement. By Theorem~\protect\ref{hft}(3), if $V_\alpha \cap V_\beta \neq {\varnothing}$ then $h^{G_\alpha}_t=h^{G_\beta}_t$ on $ V_\alpha \cap V_\beta$ for $0 \leq t \leq \min \{T_\alpha, T_\beta \}$. If we can choose a single $0 < T \in {\mathbb R}$ which is good for all $\alpha \in K$, then we will say that the original family $\{(F_\alpha, U_\alpha) \}_{\alpha \in J}$ is \emph{complete}. In that case we have a global well defined flow $h_t : M \to M$ for $0 \leq t \leq T$, and by iteration, for all $0\leq t \in{\mathbb R}$. Extending $h_t$ back to ${}^* \! M$, let $G = h_{\lambda}$, then $G$ is a global prevector field. By Proposition~\protect\ref{111}, $G$ is $D^1$ since $\{F_\alpha \}$ is $D^1$, and by Proposition~\protect\ref{112}, if $\{F_\alpha \}$ is $D^2$ then $G$ is $D^2$. By Theorem~\protect\ref{tld} we have $G|_{{}^* U_\alpha} \equiv F_\alpha$ for all $\alpha \in J$. We will call $G$ the \emph{globalization} of the complete coherent family $\{(F_\alpha ,U_\alpha)\}_{\alpha \in J}$. 
By Theorem~\protect\ref{hft}(3) if two complete coherent families have a common refinement, then they define the same flow $h_t : M \to M$, and so they have the same globalization. We note that if $\{(F_\alpha,U_\alpha) \}_{\alpha \in J}$ has a \emph{finite} flowing refinement, i.e. a flowing refinement $\{ (G_\alpha , V_\alpha) \}_{\alpha \in J}$ for which $J$ is finite, then $\{(F_\alpha,U_\alpha) \}_{\alpha \in J}$ is clearly complete. \begin{dfn}\label{comps} A coherent family $\{(F_\alpha, U_\alpha) \}_{\alpha \in J}$ has compact support, if there is a compact $C \subseteq M$ such that $\{(F_\alpha, U_\alpha) \}_{\alpha \in J} \cup \{(I,M-C)\}$ is coherent, (recall $I(a)=a$ for all $a$). \end{dfn} Clearly a coherent $D^1$ family with compact support has a finite flowing refinement, so the following holds. \begin{prop}\label{cscp} A coherent $D^1$ family with compact support is complete. \end{prop} Given a classical vector field $X$ on $M$ of class $C^1$ or $C^2$, we would like to realize it by a global prevector field on ${}^* \! M$ of class $D^1$ or $D^2$ respectively. In Proposition~\protect\ref{6} we have shown that this can be done locally. We now state and prove our global realization result. In the following proof we use our assumption that our nonstandard extension satisfies countable saturation. This means that for any sequence $\{A_n\}_{n\in{\mathbb N}}$ of internal sets such that $A_n \neq{\varnothing}$ and $A_{n+1} \subseteq A_n$ for all $n$, one has $\bigcap_{n\in{\mathbb N}} A_n \neq{\varnothing}$. \begin{thm}\label{gl} Let $X$ be a classical $C^1$ (resp. $C^2$) vector field on $M$. Then there is a $D^1$ (resp. $D^2$) global prevector field $F$ on ${}^* \! M$ that realizes $X$, where the value of $F$ in ${{}^{\mathfrak{h}} \! {M}}$ is canonically prescribed. If $X$ has compact support (in the classical sense), then the value of $F$ throughout ${}^* \! M$ is canonically prescribed, with $F(a)=a$ for $a \in {}^* \! M - {{}^{\mathfrak{h}} \! {M}}$. 
\end{thm} \begin{pf} Assume first that $X$ has compact support. There is a family $U_\alpha$ of coordinate neighborhoods for $M$, on each of which $X$ is realized by $F_\alpha$ as in Example~\protect\ref{e}, and by Proposition~\protect\ref{re} the family $\{ (F_\alpha,U_\alpha) \}$ is coherent. By Proposition~\protect\ref{6}, the family $\{ (F_\alpha,U_\alpha) \}$ is $D^1$ (resp. $D^2$) if $X$ is $C^1$ (resp. $C^2$). The vector field $X$ having compact support $C \subseteq M$ in the classical sense implies that $\{ (F_\alpha,U_\alpha) \}$ has compact support in the sense of Definition~\protect\ref{comps}. Thus by Proposition~\protect\ref{cscp} it is complete, and let $F$ be its globalization. We first notice that the flow $h_t:M \to M$ which defines $F$ satisfies $h_t(a)=a$ for all $a \in M-C$ and so by transfer $F(a) = h_{\lambda}(a) = a$ for all $a \in {}^* \! M - {}^* C \supseteq {}^* \! M - {{}^{\mathfrak{h}} \! {M}}$, proving the concluding statement regarding $X$ with compact support. Furthermore, by Propositions \protect\ref{111}, \protect\ref{112}, $F$ is $D^1$ (resp. $D^2$) if $\{ (F_\alpha,U_\alpha) \}$ is $D^1$ (resp. $D^2$), which, as mentioned, holds if $X$ is $C^1$ (resp. $C^2$). By Proposition~\protect\ref{re} and Theorem~\protect\ref{hft}(3) $F$ is uniquely determined by $X$. This completes the compact support case. If $X$ does not have compact support, we proceed using countable saturation of our nonstandard extension. Let $\{U_n\}_{n\in{\mathbb N}}$ be a sequence of open sets in $M$ with $\overline{U_n}$ compact, $\overline{U_n} \subseteq U_{n+1}$, and $\bigcup U_n =M$. Let $f_n:M \to [0,1]$ be a sequence of smooth functions with compact support, such that $f_n|_{U_{n+1}} = 1$. Now let $G_n$ be the realization of $f_n X$ given by the compact support case. Let $A_n = \{ F \in {}^*{\mbox{Map}} (M) : F|_{{}^* U_n} = G_n |_{{}^* U_n} \}$, then $A_n$ is nonempty for each $n$, since $G_n \in A_n$. 
We further have $A_{n+1} \subseteq A_n$ since $G_{n+1}|_{{}^* U_n} = G_n|_{{}^* U_n}$, which is true since $f_{n+1}$ and $f_n$ are both 1 on $U_{n+1} \supseteq \overline{U_n}$ and so the same flow determines $G_{n+1}|_{{}^* U_n}$ and $G_n|_{{}^* U_n}$. So, by countable saturation $\bigcap A_n \neq{\varnothing}$. An $F$ in this intersection satisfies $F \in {}^*{\mbox{Map}} (M)$, i.e. it is internal. Since $\bigcup {}^* U_n = {{}^{\mathfrak{h}} \! {M}}$, $F$ realizes $X$. The restriction $F|_{{}^{\mathfrak{h}} \! {M}}$ is uniquely determined by $X$, since $F|_{{}^* U_n} = G_n |_{{}^* U_n}$ is uniquely determined by $X$, again since $f_n$ is 1 on $U_{n+1} \supseteq \overline{U_n}$. \end{pf} In the following example we demonstrate the need for $\{ U_n \}$ and $\{ f_n \}$ in the proof of Theorem~\protect\ref{gl}, and the fact that the values of $F$ on ${}^* \! M - {{}^{\mathfrak{h}} \! {M}}$ may depend on the choice of $\{ U_n \}$, $\{ f_n \}$. \begin{example}\label{e1} Let $M={(0,1)}$, and let $X$ be the classical vector field on $M$ given by $X(x)=-1$ for all $x \in {(0,1)}$. On ${}^*{(0,1)}$ $X$ does not induce a prevector field via the procedure of Example~\protect\ref{e} since for ${\lambda} > x \in{}^*{(0,1)}$, $x-{\lambda} \not\in {}^*{(0,1)}$. However we can take the coherent family $\{(F_r , (r,1))\}_{r>0}$ where $F_r$ is always defined by $F_r(a)=a-{\lambda}$. The standard flow $h^{F_r}_t$ is defined for $0 \leq t \leq r$ and always given by $h^{F_r}_t(a)=a-t$. But this family is not complete. There is no common $T>0$ for which the flow is defined on $[0,T]$, and so there is no global flow $h_t:{(0,1)}\to{(0,1)}$ in which one can substitute $t={\lambda}$. (Note that the global prevector field that may seem to exist by naively substituting $t={\lambda}$ ignoring the problem of common domain $[0,T]$, would be $a \mapsto a-{\lambda}$, which, as noted, is not defined on ${}^*{(0,1)}$.) 
So, following the proof of Theorem~\protect\ref{gl}, let $\{a_n\}$ be a strictly decreasing sequence with $a_n \to 0$. Let $U_n = (a_n , 1-a_n)$ and let $f_n:{(0,1)} \to [0,1]$ be a smooth function such that $f_n(x)=1$ for $a_{n+1} \leq x \leq 1-a_{n+1}$, and $f_n(x)=0$ for $ 0<x \leq a_{n+2}$ and $1-a_{n+2} \leq x < 1$. To realize $f_n X$ as in Example~\protect\ref{e} we do not need a covering $\{ U_\alpha \}$ as in the general case appearing in the proof of Theorem~\protect\ref{gl}, rather we can take one $F_n$ defined on all ${(0,1)}$. For $a \in {}^* (a_{n+1}, 1-a_{n+1})$ we have $F_n(a)=a-{\lambda}$, and so for $a \in (a_n , 1-a_n)$ and $0 \leq t \leq a_n-a_{n+1}$ we have $h^{F_n}_t(a)=a-t$, and so finally for $a \in {}^* U_n = {}^* (a_n , 1-a_n)$ the realization $G_n$ of $f_n X$ satisfies $G_n(a)=a - {\lambda}$. Thus, a global $F:{}^*{(0,1)} \to{}^*{(0,1)}$ which is obtained from the sequence $G_n$ as in the proof of Theorem~\protect\ref{gl} will have $F(a)=a-{\lambda}$ for all $a \in {}^{\mathfrak{h}} {{(0,1)}} = \{ a \in {}^* {(0,1)} \ : 0 < \ st(a) < 1 \}$, and this fact is independent of all choices involved in the construction. However, the values on ${}^*{(0,1)} - {}^{\mathfrak{h}} {{(0,1)}}$ may indeed depend on our choice of $\{U_n\}$ and $\{f_n\}$, as we now demonstrate. Suppose our nonstandard extension is given by the ultrapower construction on the index set ${\mathbb N}$ with nonprincipal ultrafilter, and elements in the ultrapower are given by sequences in angle brackets $\langle x_i \rangle_{i \in{\mathbb N}}$. \footnote{Such extension always satisfies countable saturation.} Assume ${\lambda}= \langle \delta_i \rangle_{i \in{\mathbb N}}$ where $\{\delta_i\}$ is a strictly decreasing sequence with $\delta_i \to 0$. Then $G_n = h^{F_n}_{\lambda} = \langle h^{F_n}_{\delta_i} \rangle_{i\in{\mathbb N}}$. Let $F=\langle h^{F_i}_{\delta_i} \rangle_{i\in{\mathbb N}}$, and we claim that $F|_{{}^* U_n} = G_n|_{{}^* U_n}$ for all $n$, i.e. 
$F \in \bigcap A_n$. Indeed, the elements of ${}^* U_n$ are represented by sequences $\langle u_i \rangle_{i \in{\mathbb N}}$ such that $u_i \in U_n$ for all $i$, and so for $i$ sufficiently large so that $i \geq n$ and $\delta_i < a_n -a_{n+1}$ we have $h^{F_i}_{\delta_i}(u_i) = u_i -\delta_i = h^{F_n}_{\delta_i}(u_i)$. Now let $x=\langle a_{i+2} \rangle_{i\in{\mathbb N}}$ and $y=\langle a_i + \delta_i \rangle_{i\in{\mathbb N}}$, then $F(x)=x$ and $F(y)=y-{\lambda}$. If we repeat our construction with $a'_n = a_{n-2}+\delta_{n-2}$ in place of $a_n$, producing the realization $F'$, then for the same reason that $F(x)=x$ we will have $F'(y)=y \neq F(y)$, showing that $F$ indeed depends on our choices. \end{example} \section{Lie bracket}\label{lbr} Given two local prevector fields $F,G$ for which $F^{-1},G^{-1}$ exist, e.g. if $F,G$ are $D^1$ (by Remark~\protect\ref{finv2}), we define their Lie bracket $[F,G]$ as follows. Its relation to the classical Lie bracket will be clarified in Section~\protect\ref{rclb}. \begin{dfn}\label{dflie} $[F,G] = (G^{-1} \circ F^{-1} \circ G \circ F)^{{\lfloor} {1 \over {\lambda}} {\rfloor}}_{\phantom i}$. \end{dfn} Since our fixed choice of ${\lambda}$ was arbitrary, we may have chosen it as $1 \over N$ for some infinite $N \in {}^*{\mathbb N}$, and so we may assume $1\over {\lambda}$ is in fact a hyperinteger and drop the ${\lfloor} \cdot {\rfloor}$ from the above expression. In Theorem~\protect\ref{rb} below we will justify this definition, i.e. we will establish its relation to the classical Lie bracket. We will show that if $F,G$ are $D^1$ then $[F,G]$ is indeed a prevector field, and if $F,G$ are $D^2$ then $[F,G]$ is $D^1$. Furthermore, we will show that if $F,G$ are $D^2$ and $F \equiv F'$, $G \equiv G'$ then $[F,G] \equiv [F',G']$. We will give an example showing that this is not true if $F,G$ are merely $D^1$. 
We will show that the Lie bracket of two $D^2$ prevector fields is equivalent to the identity prevector field if and only if their local standard flows commute. In the present section our study will always be local, and so the quantifier ``for all $a$'' will always mean for all $a$ in ${}^* U$ where $U$ is some appropriate coordinate neighborhood, and all computations are in coordinates. \subsection{Fundamental properties of Lie bracket} \begin{thm}\label{7} If $F,G$ are local $D^1$ prevector fields then $[F,G]$ is a prevector field, that is, $[F,G](a)-a \prec{\lambda}$ for all $a$. \end{thm} \begin{pf} Substituting $x=a$ and $y=F^{-1} \circ G \circ F(a)$ in the relation $F(x)-x-F(y)+y \prec{\lambda}\|x-y\|$ gives $$F(a)-a-G \circ F(a) + F^{-1} \circ G \circ F(a) \prec {\lambda} \|a-F^{-1} \circ G \circ F(a)\| \prec {\lambda}^2.$$ Now substituting $x=F(a)$ and $y=G^{-1} \circ F^{-1} \circ G \circ F(a)$ in the relation $G(x)-x-G(y)+y \prec{\lambda}\|x-y\|$ gives $$G\circ F(a)-F(a)- F^{-1} \circ G \circ F(a)+ G^{-1} \circ F^{-1} \circ G \circ F(a) \prec {\lambda} \|F(a)-G^{-1} \circ F^{-1} \circ G \circ F(a)\| \prec {\lambda}^2.$$ Adding the above two expressions gives: $ G^{-1} \circ F^{-1} \circ G \circ F(a) - a \prec {\lambda}^2$. By underspill in an appropriate ${}^* U$ there exists $C \prec 1$ such that $\| G^{-1} \circ F^{-1} \circ G \circ F(a) - a \|\leq C {\lambda}^2$ for all $a\in{}^* U$. And so \begin{multline*}\| (G^{-1} \circ F^{-1} \circ G \circ F)^{1 \over \DD}_{\phantom i} (a) - a \| \\ \leq \sum_{k=1}^{1\over {\lambda}} \| (G^{-1} \circ F^{-1} \circ G \circ F)^k (a) - (G^{-1} \circ F^{-1} \circ G \circ F)^{k-1} (a) \| \leq C {\lambda}.\end{multline*} \end{pf} \begin{example}\label{e2} We give an example of two prevector fields $F,G$, where $F$ is $D^2$ (so also $D^1$) and $[F,G]$ is not a prevector field. Let $M={\mathbb R}^2$ and let $F(x,y)=(x+{\lambda},y)$, $G(x,y)= (x, y+{\lambda}\sin{\pi \over 2{\lambda}}x)$. 
Then $[F,G](0,0) = (0,1)$ so $[F,G](0,0)-(0,0) = (0,1) \not\prec {\lambda}$ . \end{example} To prove that if $F,G$ are $D^2$ then $[F,G]$ is $D^1$ we need the following lemma. A sum of eight terms appears in its statement, namely $$\Bigl(F(a)-a\Bigr)-\Bigl(F(b)-b\Bigr) -\Bigl(F(G(a))-G(a)\Bigr) + \Bigl(F(G(b))-G(b)\Bigr)$$ which is similar to the sum \begin{multline*} {\x^2_{v,w}} (F-I)(a)= \\ \Bigl(F(a)-a\Bigr)-\Bigl(F(a+v)-(a+v)\Bigr) -\Bigl(F(a+w)-(a+w)\Bigr) + \Bigl(F(a+v+w)-(a+v+w)\Bigr) \end{multline*} appearing in the general definition of $D^k$ applied to $k=2$. As already noticed, the four terms $a,a+v,a+w,a+v+w$ cancel, leaving the four terms appearing in Definition~\protect\ref{d2}. In the present sum the corresponding four terms $a,b,G(a),G(b)$ do not cancel, and we remain with all eight terms. We have already encountered a similar eight term sum ${\x^2_{v,w}}({\varphi}\circ F -{\varphi})(a)= {\x^2_{v,w}}(G \circ {\varphi} -{\varphi})(a)$ where no cancellation occurs, in the proof of Proposition~\protect\ref{4}. \begin{lemma}\label{8} Let $F$ be $D^2$ and $G$ be $D^1$, then for all $a,b$ with $a-b \prec {\lambda}$, $$F(a)-F(b)-F(G(a))+F(G(b))-a+b+G(a)-G(b) \prec {\lambda}^2\|a-b\|.$$ \end{lemma} \begin{pf} Let $v=b-a$ and $w=G(a)-a$. Since $F$ is $D^1$ (by Proposition~\protect\ref{3}) we have $F(a+v+w)-F(G(b))-(a+v+w)+G(b) \prec {\lambda}\|a+v+w-G(b)\|$. But $a+v+w = b+G(a)-a$ and so we have $$F(a+v+w)-F(G(b))-b-G(a)+a+G(b) \prec {\lambda} \| b+G(a)-a-G(b) \| \prec {\lambda}^2 \|a-b\|$$ since $G$ is $D^1$. So \begin{multline*} \|F(a)-F(b)-F(G(a))+F(G(b))-a+b+G(a)-G(b) \| \\ = \|F(a)-F(a+v)-F(a+w)+F(a+v+w)-F(a+v+w) + F(G(b))-a+b+G(a)-G(b) \| \\ \leq \| F(a)-F(a+v)-F(a+w)+F(a+v+w) \| + \| -F(a+v+w) + F(G(b))-a+b+G(a)-G(b) \| \\ \prec {\lambda}\|b-a\|\|G(a)-a\| + {\lambda}^2\|a-b\| \prec {\lambda}^2\|a-b\|. \end{multline*} \end{pf} \begin{thm}\label{9} If $F,G$ are $D^2$ then $[F,G]$ is $ D^1$. 
\end{thm} \begin{pf} By Propositions \protect\ref{3}, \protect\ref{30}, and \protect\ref{40}, $F^{-1} \circ G \circ F$ is $D^1$. Now in Lemma~\protect\ref{8} take $G$ to be $F^{-1} \circ G \circ F$ then we get for $a-b \prec {\lambda}$: $$F(a)-F(b)-G\circ F(a) + G\circ F(b)-a+b+F^{-1} \circ G \circ F (a) - F^{-1} \circ G \circ F (b) \prec {\lambda}^2 \|a-b\|.$$ As above $G^{-1} \circ F^{-1} \circ G$ is $D^1$ and now take in Lemma~\protect\ref{8} $a,b,F,G$ to be respectively $F(a),F(b),G,G^{-1} \circ F^{-1} \circ G$ then we get \begin{multline*} G \circ F(a)- G \circ F(b)-F^{-1} \circ G\circ F(a) + F^{-1} \circ G\circ F(b) \\ -F(a)+F(b)+G^{-1} \circ F^{-1} \circ G \circ F (a) - G^{-1} \circ F^{-1} \circ G \circ F (b) \prec {\lambda}^2 \|F(a)-F(b)\| \prec {\lambda}^2\|a-b\|\end{multline*} by Proposition~\protect\ref{0}. Adding these two inequalities we get $$G^{-1} \circ F^{-1} \circ G \circ F (a) - G^{-1} \circ F^{-1} \circ G \circ F (b) -a+b \prec {\lambda}^2\|a-b\|$$ Denote $H=G^{-1} \circ F^{-1} \circ G \circ F$ then $[F,G] = H^{1 \over \DD}_{\phantom i} $ and so we must show $H^{1 \over \DD}_{\phantom i} (a)-H^{1 \over \DD}_{\phantom i} (b)-a+b \prec {\lambda}\|a-b\|$ and we know $H(a)-H(b)-a+b \prec {\lambda}^2\|a-b\|$. By underspill in an appropriate ${}^* U$ there exists $C \prec 1$ such that $\|H(a)-H(b)-a+b \| \leq C{\lambda}^2\|a-b\|$ for all $a,b \in{}^* U$. So by Lemma~\protect\ref{ek} with $K=C{\lambda}$ and $n={1\over {\lambda}}$, we get $\|H^{1 \over \DD}_{\phantom i} (a)-H^{1 \over \DD}_{\phantom i} (b)-a+b \| \leq C{\lambda} e^{C{\lambda}}\|a-b \|$. \end{pf} \begin{example}\label{e3} We give an example of two prevector fields $F,G$, where $F$ is $D^2$, $G$ is $D^1$ and $[F,G]$ is not $D^1$. Let $M={\mathbb R}^2$ and let $F(x,y)=(x+{\lambda},y)$, $G(x,y)= (x, y+{\lambda}^2 \sin{\pi \over 2{\lambda}}x)$. 
Clearly $F$ is $D^2$, and we show $G$ is $D^1$: \begin{align*} \|G(x_1,y_1) -(x_1,y_1) - G(x_2,y_2)+(x_2,y_2)\| &= {\lambda}^2 |\sin{\pi \over 2{\lambda}}x_1 - \sin{\pi \over 2{\lambda}}x_2 | \\ &= {\lambda}{\pi \over 2}|(x_1 - x_2) \cos{\pi \over 2{\lambda}}\theta | \\& \prec {\lambda}|x_1 - x_2| \prec {\lambda}\|(x_1,y_1)-(x_2,y_2)\|,\end{align*} where $x_1 \leq \theta \leq x_2$. Finally we show $[F,G]$ is not $D^1$: $[F,G](0,0) = (0,{\lambda})$, $[F,G]({\lambda},0) = ({\lambda},-{\lambda})$, so $[F,G](0,0)-(0,0) - [F,G]({\lambda},0)+({\lambda},0) = (0,2{\lambda}) \not\prec {\lambda}\|(0,0)-({\lambda},0)\|$. \end{example} Our definition of Lie bracket involves an iteration $1\over {\lambda}$ times of the commutator $G^{-1} \circ F^{-1} \circ G \circ F$. The following Proposition compares this with multiplication by $1\over {\lambda}$ in coordinates. It will be used in the proofs of Theorems \protect\ref{ld2}, \protect\ref{rb}, \protect\ref{comm}. \begin{prop}\label{scl} Let $F,G$ be $D^2$, then $[F,G] (a)\equiv a+{1 \over {\lambda}} \Bigl(G^{-1} \circ F^{-1} \circ G \circ F(a) - a\Bigr)$ for all $a$. \end{prop} \begin{pf} Let $H=G^{-1} \circ F^{-1} \circ G \circ F$. The proof of Theorems \protect\ref{7} provides $C' \prec 1$ such that $\|H(a)-a\| \leq C' {\lambda}^2$ for all $a$. The proof of Theorem~\protect\ref{9} provides $C''\prec 1$ such that $\|H(a)-H(b)-a+b\|\leq C'' {\lambda}^2\|a-b\|$ for all $a,b$. Taking $C=C'{\lambda}$, $K=C''{\lambda}$ and $n={1\over{\lambda}}$ in Lemma~\protect\ref{101} we get $\|H^{1 \over \DD}_{\phantom i} (a)-a - {1\over{\lambda}}(H(a)-a)\| \leq C'C'' {\lambda}^2\prec\prec{\lambda}$. \end{pf} Next we would like to show that if $F,F',G,G'$ are $D^2$ and $F \equiv F'$, $G \equiv G'$ then $[F,G] \equiv [F',G']$. We will need the following two lemmas. \begin{lemma}\label{10} If $G,H$ are $D^2$ and $G \equiv H$, (i.e. 
$G(a)-H(a)\prec\prec{\lambda}$ for all $a$) then $$\Bigl(G(a)-H(a)\Bigr)-\Bigl(G(b)-H(b)\Bigr)\prec\prec{\lambda}\|a-b\|$$ for all $a,b$ with $a-b \prec {\lambda}$. \end{lemma} \begin{pf} Let $F(x)=G(x)-H(x)$ so $F(x) \prec\prec{\lambda}$ for all $x$. Assume $F(a)-F(b)$ is not $\prec\prec {\lambda}\|a-b\|$ for some $a,b$ with $a-b \prec {\lambda}$, then $ {\lambda}\|a-b\| \prec\| F(a)-F(b)\|$. Let $v=b-a$ then ${\lambda}\|v\|^2 \prec \|v \| \|F(a)-F(a+v)\|$, and since $G,H$ are $D^2$, $F$ satisfies $\Delta^2_{v,v}F(x) \prec {\lambda}\|v\|^2$ for all $x$. Together we have $\Delta^2_{v,v}F(x) \prec \|v\| \|F(a)-F(a+v)\|$, so by Lemma~\protect\ref{1} (taking some ball around $st(a)$), there is $m \in{}^*{\mathbb N}$ such that $F(a)-F(a+v)\prec \|v\| \| F(a)-F(a+mv)\| \leq \|v\| \Bigl(\| F(a)\|+\|F(a+mv)\|\Bigr) \prec\prec \|v\|{\lambda}$. \end{pf} \begin{lemma}\label{11} If $G,H$ are prevector fields with $G\equiv H$ and $G$ is $D^1$, then $G^{-1}\equiv H^{-1}$ (assuming $H^{-1}$ exists). \end{lemma} \begin{pf} Given $a$ let $x=G^{-1}(a)$ and $y=H^{-1}(a)$ then we must show $x-y \prec\prec{\lambda}$. We have $G(x)=a=H(y)$ so $\|G(x)-G(y)\| = \|H(y)-G(y)\| =\beta{\lambda}$ for some $\beta \prec\prec 1$. Since $G$ is $D^1$, $\|G(x)-G(y) - x+y\| = K{\lambda}\|x-y\|$ for some $K \prec 1$. So $$\|x-y\| \leq \|G(x)-G(y) - x+y\| + \|G(x)-G(y)\| = K{\lambda}\|x-y\| + \beta{\lambda}.$$ So $(1-K{\lambda})\|x-y\| \leq \beta{\lambda}$ or $\|x-y\| \leq {\beta \over 1-K{\lambda}}{\lambda} \prec\prec{\lambda}$. \end{pf} We are now ready to prove the following. \begin{thm}\label{ld2} If $F,E,G,H$ are $D^2$, $F \equiv E$ and $G \equiv H$ then $[F,G] \equiv [E,H]$. \end{thm} \begin{pf} We first claim that it is enough to establish the statement with $F=E$, that is, to show $[F,G] \equiv [F,H]$. 
Indeed it is clear from the definition that $[G,F]=[F,G]^{-1}$, so if we know $[F,G] \equiv [F,H]$ and similarly $[H,F] \equiv [H,E]$ then by Theorem~\protect\ref{9} and Lemma~\protect\ref{11} we have $[F,G] \equiv [F,H]=[H,F]^{-1} \equiv [H,E]^{-1}=[E,H] $. So we proceed to show $[F,G] \equiv [F,H]$. Given $x$ let $a=G( F(x))$, $b=H( F(x))$, then by assumption $a-b \prec\prec{\lambda}$. By Propositions \protect\ref{3}, \protect\ref{30}, $$F^{-1}(a)-F^{-1}(b)-a+b \prec {\lambda}\|a-b\|\prec\prec {\lambda}^2.$$ Denote $c=G^{-1} \circ F^{-1} \circ G \circ F(x)$. By Lemma~\protect\ref{10} $$F^{-1}(a)-H(c)-a+b = \Bigl(G(c) - H(c) \Bigr) - \Bigl( G(F(x)) - H(F(x)) \Bigr) \prec\prec {\lambda} \| c-F(x)\| \prec {\lambda}^2.$$ Combining the last two inequalities we get $H(c) - F^{-1}(b) \prec\prec{\lambda}^2$ and so \begin{multline*}G^{-1} \circ F^{-1} \circ G \circ F(x) - H^{-1} \circ F^{-1} \circ H \circ F(x) =\\ H^{-1}(H(c)) - H^{-1}(F^{-1}(b)) \prec \| H(c) - F^{-1}(b) \| \prec\prec {\lambda}^2\end{multline*} by Propositions \protect\ref{3}, \protect\ref{30}, \protect\ref{0}. So we have $${1\over {\lambda}}\Bigl(G^{-1} \circ F^{-1} \circ G \circ F(x)\Bigr) - {1\over {\lambda}}\Bigl(H^{-1} \circ F^{-1} \circ H \circ F(x)\Bigr) \prec\prec{\lambda}$$ and so by Proposition~\protect\ref{scl} $[F,G](x)\equiv [F,H](x)$. \end{pf} \begin{example}\label{e4} We give an example of $F,H$ which are $D^2$, $G$ is $D^1$ and $G \equiv H$, and yet $[F,G] \not\equiv [F,H]$. Let $M={\mathbb R}^2$ and let $F(x,y)=(x+{\lambda},y)$, $H(x,y)=(x,y)$, and $G(x,y) = (x, y+{\lambda}^2\sin{\pi \over 2{\lambda}}x)$. Then $[F,H](0,0)=(0,0)$ whereas $[F,G](0,0) = (0,{\lambda}) \not\equiv (0,0)$. Clearly $F,H$ are $D^2$, and it has been shown in Example~\protect\ref{e3} that $G$ is $D^1$. \end{example} \subsection{Relation to classical Lie bracket}\label{rclb} The following theorem justifies our definition of $[F,G]$, by relating it to the classical notion of Lie bracket. 
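This relation can be probed numerically before stating it precisely. For the classical fields $X(x,y)=(y,0)$ and $Y(x,y)=(0,x)$ one computes $[X,Y]_{cl}(x,y)=(-x,y)$, and iterating the commutator of their Euler realizations $1/{\lambda}$ times should recover this value. The following Python sketch is our own illustration (the names and the finite value of `lam`, standing in for ${\lambda}$, are hypothetical):

```python
lam = 1e-3

F    = lambda x, y: (x + lam * y, y)     # realizes X(x,y) = (y, 0)
Finv = lambda x, y: (x - lam * y, y)     # exact inverse of F
G    = lambda x, y: (x, y + lam * x)     # realizes Y(x,y) = (0, x)
Ginv = lambda x, y: (x, y - lam * x)     # exact inverse of G

def bracket(x, y, n):
    """Iterate the commutator G^{-1} o F^{-1} o G o F, n ~ 1/lam times."""
    for _ in range(n):
        x, y = F(x, y)
        x, y = G(x, y)
        x, y = Finv(x, y)
        x, y = Ginv(x, y)
    return x, y

p = (1.0, 2.0)
bx, by = bracket(*p, int(1 / lam))
approx = ((bx - p[0]) / lam, (by - p[1]) / lam)
print(approx)   # close to [X,Y]_cl(1,2) = (-1, 2)
```

A single commutator step moves $p$ by about ${\lambda}^2 [X,Y]_{cl}(p)$; the $1/{\lambda}$-fold iteration amplifies this to ${\lambda}[X,Y]_{cl}(p)$, which is what the division by `lam` extracts.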
\begin{thm}\label{rb} Let $X,Y$ be two classical $C^2$ vector fields and let $[X,Y]_{cl}$ denote their classical Lie bracket. Let $F,G$ be $D^2$ prevector fields that realize $X,Y$ respectively. Then $[F,G]$ realizes $[X,Y]_{cl}$. \end{thm} \begin{pf} By Remark~\protect\ref{kf}, the flows $h^F_t$, $h^G_t$ coincide with the classical flows of $X$, $Y$. It is well known that $[X,Y]_{cl}$ is related in coordinates to the classical flow as follows: $$[X,Y]_{cl}(p)=\lim_{t\to 0} {1 \over t^2} \Bigl((h^G_t)^{-1} \circ (h^F_t)^{-1} \circ h^G_t \circ h^F_t(p) - p\Bigr).$$ By the equivalent characterization of limits via infinitesimals we thus have $$[X,Y]_{cl}(p) \approx {1 \over {\lambda}^2} \Bigl({\widetilde{G}}^{-1} \circ {\widetilde{F}}^{-1} \circ {\widetilde{G}} \circ {\widetilde{F}}(p) - p\Bigr).$$ Now, if $v \approx w$ then ${\lambda} v \equiv {\lambda} w$, so by Example~\protect\ref{e}, $[X,Y]_{cl}$ can be realized by the prevector field $$A(a) = a+{1 \over {\lambda}} \Bigl({\widetilde{G}}^{-1} \circ {\widetilde{F}}^{-1} \circ {\widetilde{G}} \circ {\widetilde{F}}(a) - a\Bigr).$$ Thus it remains to show that $[F,G] \equiv A$. By Proposition~\protect\ref{112} ${\widetilde{F}},{\widetilde{G}}$ are $D^2$, and so by Proposition~\protect\ref{scl} $ [{\widetilde{F}},{\widetilde{G}}] \equiv A $. By Theorem~\protect\ref{tld} $F\equiv {\widetilde{F}}, G\equiv {\widetilde{G}}$, and so by Theorem~\protect\ref{ld2} $ [F,G] \equiv A $. \end{pf} The following theorem corresponds to the classical fact that the bracket of two vector fields vanishes if and only if their flows commute. \begin{thm}\label{comm} Let $F,G$ be two $D^2$ prevector fields. Then $[F,G]\equiv I$ (recall $I(a)=a$ for all $a$), if and only if $h^F_t \circ h^G_s = h^G_s \circ h^F_t $ for all $0 \leq t,s \leq T$ for some $0 < T \in{\mathbb R}$. \end{thm} \begin{pf} Assume first that $[F,G]\equiv I$, i.e. $[F,G](a)- a\prec\prec{\lambda}$ for all $a$.
So by Proposition~\protect\ref{scl} ${1 \over {\lambda}} \Bigl(G^{-1} \circ F^{-1} \circ G \circ F(a) - a\Bigr) \prec\prec{\lambda}$, so $G^{-1} \circ F^{-1} \circ G \circ F(a) - a \prec\prec{\lambda}^2$, which implies by Proposition~\protect\ref{0} that $G \circ F(a) - F\circ G( a) \prec\prec{\lambda}^2$ for all $a$. Now let $n = {\lfloor} t/{\lambda} {\rfloor}$ and $m = {\lfloor} s/{\lambda} {\rfloor}$, then we need to show $F^n \circ G^m(a) \approx G^m \circ F^n(a)$ for all $a$. This involves $nm$ interchanges of $F$ and $G$, where a typical move is from $F^k \circ G^r \circ F \circ G^{m-r} \circ F^{n-k-1}$ to $F^k \circ G^{r+1} \circ F \circ G^{m-r-1} \circ F^{n-k-1}$. Applying $F \circ G(p) - G\circ F( p) \prec\prec{\lambda}^2$ to $p=G^{m-r-1} \circ F^{n-k-1}(a)$ we get $$F \circ G^{m-r} \circ F^{n-k-1}(a) -G \circ F \circ G^{m-r-1} \circ F^{n-k-1}(a) \prec\prec{\lambda}^2.$$ By Propositions \protect\ref{3}, \protect\ref{20} there is $K\in{\mathbb R}$ such that $\| F(a)-a-F(b)+b\| \leq K{\lambda}\|a-b\|$ and $\| G(a)-a-G(b)+b\| \leq K{\lambda}\|a-b\|$ for all $a,b$ in an appropriate domain. Then by Theorem~\protect\ref{flow1} applied to $G^r$ and then to $F^k$, \begin{multline*} \|F^k \circ G^r \circ F \circ G^{m-r} \circ F^{n-k-1}(a)-F^k \circ G^{r+1} \circ F \circ G^{m-r-1} \circ F^{n-k-1}(a) \| \\ \leq e^{K(t+s)}\| F \circ G^{m-r} \circ F^{n-k-1}(a) -G \circ F \circ G^{m-r-1} \circ F^{n-k-1}(a)\| \prec\prec{\lambda}^2. \end{multline*} Adding the $nm$ contributions when passing from $F^n \circ G^m(a)$ to $G^m \circ F^n(a)$ we get $$F^n \circ G^m(a) - G^m \circ F^n(a) \prec\prec 1.$$ This is because among the $nm$ differences that we add, there is a maximal one, which is say $\beta {\lambda}^2$ with $\beta \prec\prec 1$, and so the sum of all $nm$ contributions is $\leq nm \beta{\lambda}^2 \leq ts\beta \prec\prec 1$. Conversely, assume $h^F_t \circ h^G_t = h^G_t \circ h^F_t $. 
Then by transfer ${\widetilde{F}} \circ {\widetilde{G}} = {\widetilde{G}} \circ {\widetilde{F}}$, so ${\widetilde{G}}^{-1} \circ {\widetilde{F}}^{-1} \circ {\widetilde{G}} \circ {\widetilde{F}} = I$, and so $ [{\widetilde{F}},{\widetilde{G}}]=I$. By Proposition~\protect\ref{112} and Theorems \protect\ref{tld}, \protect\ref{ld2} we get $[F,G]\equiv I$. \end{pf} We have the following application to the standard setting. \begin{clcor}\label{cc4} Let $X,Y$ be classical $C^2$ vector fields. Then the flows of $X$ and $Y$ commute if and only if their Lie bracket vanishes. It follows that if $X_1,\dots,X_k$ are $k$ independent vector fields with $[X_i,X_j]_{cl}=0$ (classical Lie bracket) for $1 \leq i,j \leq k$, then there are coordinates in a neighborhood of any given point such that $X_1,\dots,X_k$ are the first $k$ coordinate vector fields. \end{clcor} \begin{pf} Define prevector fields by $F(a)=a+{\lambda} X(a)$ and $G(a)=a+{\lambda} Y(a)$ as in Example~\protect\ref{e}, and apply Proposition~\protect\ref{6}, Remark~\protect\ref{kf}, and Theorems \protect\ref{rb}, \protect\ref{comm}. The final statement is a straightforward conclusion in the classical setting. \end{pf}
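The corollary can also be illustrated numerically: the Euler flows of two fields with vanishing classical bracket commute (in the example below even exactly, since each field leaves the other's coordinate fixed), while a pair with nonvanishing bracket, here $[C,D]_{cl}(x,y)=(-x,y)\neq 0$, visibly does not. The following Python sketch uses our own helper names and a finite stand-in for the infinitesimal step:

```python
def euler_flow(field, p, t, lam=1e-3):
    """Hyperfinite-style flow: iterate p -> p + lam*field(p), floor(t/lam) times."""
    x, y = p
    for _ in range(int(t / lam)):
        dx, dy = field(x, y)
        x, y = x + lam * dx, y + lam * dy
    return x, y

A = lambda x, y: (x, 0.0)   # [A,B]_cl = 0: the flows should commute
B = lambda x, y: (0.0, y)
C = lambda x, y: (y, 0.0)   # [C,D]_cl = (-x, y) != 0: the flows should not commute
D = lambda x, y: (0.0, x)

def order_gap(X, Y, p, t, s):
    """Compare h^Y_s o h^X_t with h^X_t o h^Y_s at p."""
    a = euler_flow(Y, euler_flow(X, p, t), s)
    b = euler_flow(X, euler_flow(Y, p, s), t)
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

gap_AB = order_gap(A, B, (1.0, 2.0), 0.5, 0.5)
gap_CD = order_gap(C, D, (1.0, 2.0), 0.5, 0.5)
print(gap_AB, gap_CD)   # first gap vanishes, second is appreciable
```

For the commuting pair the two orders of composition perform exactly the same arithmetic, so the gap is zero; for $C,D$ the gap is of order $ts\,\|[C,D]_{cl}\|$, matching the theorem.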
https://arxiv.org/abs/1609.05660
The Riemann minimal examples
Near the end of his life, Bernhard Riemann made the marvelous discovery of a 1-parameter family $R_{\lambda}$, $\lambda\in (0,\infty)$, of periodic properly embedded minimal surfaces in $\mathbb{R}^3$ with the property that every horizontal plane intersects each of his examples in either a circle or a straight line. Furthermore, as the parameter $\lambda\to 0$ his surfaces converge to a vertical catenoid and as $\lambda\to \infty$ his surfaces converge to a vertical helicoid. Since Riemann's minimal examples are topologically planar domains that are periodic, with the fundamental domains for the associated $\mathbb{Z}$-action being diffeomorphic to a compact annulus punctured in a single point, topologically each of these surfaces is diffeomorphic to the unique genus zero surface with two limit ends. He also described his surfaces analytically in terms of elliptic functions on rectangular elliptic curves. This article examines Riemann's original proof of the classification of minimal surfaces foliated by circles and lines in parallel planes and presents a complete outline of the recent proof that every properly embedded minimal planar domain in $\mathbb{R}^3$ is either a Riemann minimal example, a catenoid, a helicoid or a plane.
\section{Introduction.} Shortly after the death of Bernhard Riemann, a large number of unpublished handwritten papers were found in his office. Some of these papers were unfinished, but all of them were of great interest because of their profound insights and because of the deep and original mathematical results that they contained. This discovery of Riemann's handwritten unpublished manuscripts led several of his students and colleagues to rewrite these works, completing any missing arguments, and then to publish them in their completed form in the Memoirs of the Royal Society of Sciences of G\"{o}ttingen as a series of papers that began appearing in 1867. One of these papers~\cite{ri2}, written by K. Hattendorf and M. H. Weber from Riemann's original notes from the period 1860-61, was devoted to the theory of minimal surfaces in $\mathbb{R} ^3$. In one of these rewritten works, Riemann described several examples of compact surfaces with boundary that minimized their area among all surfaces with the given explicit boundary. In the last section of this manuscript Riemann tackled the problem of classifying those minimal surfaces which are bounded by a pair of circles in parallel planes, under the additional hypothesis that every intermediate plane between the planes containing the boundary circles also intersects the surface in a circle. Riemann proved that the only solutions to this problem are (up to homotheties and rigid motions) the catenoid and a previously unknown 1-parameter family of surfaces $\{R_\lambda \ | \ \lambda \in \mathbb{R} \},$ known today as the {\it Riemann minimal examples.} Later in 1869, A. Enneper~\cite{en1} demonstrated that there do not exist minimal surfaces foliated by circles in a family of nonparallel planes, thereby completing the classification of the minimal surfaces foliated by circles. The purpose of this article is threefold. 
Firstly, we will recover the original arguments by Riemann by expressing them in more modern mathematical language (Section~\ref{riemann}); more specifically, we will provide Riemann's analytic classification of minimal surfaces foliated by circles in parallel planes. We refer the reader to Figure~\ref{R-image} for an image of a Riemann minimal surface created from his mathematical formulas and produced by the graphics package in {\tt Mathematica.} Secondly, we will illustrate how the family of Riemann's minimal examples is still of great interest in the current state of minimal surface theory in $\mathbb{R}^3$. Thirdly, we will indicate the key results that have led to the recent proof that the plane, the helicoid, the catenoid and the Riemann minimal examples are the only properly embedded minimal surfaces in $\mathbb{R}^3$ with the topology of a planar domain; see Section~\ref{MPR} for a sketch of this proof. In regard to this result, the reader should note that the plane and the helicoid are conformally diffeomorphic to the complex plane, and the catenoid is conformally diffeomorphic to the complex plane punctured in a single point; in particular these three examples are surfaces of finite topology. However, the Riemann minimal examples are planar domains of infinite topology, diffeomorphic to each other and characterized topologically as being diffeomorphic to the unique surface of genus zero with two limit ends. The proof that the properly embedded minimal surfaces of infinite topology and genus zero are Riemann minimal examples is due to Meeks, P\'erez and Ros~\cite{mpr6}, and it uses sophisticated ingredients from many branches of mathematics. Two essential ingredients in their proof of this classification result are Colding-Minicozzi theory, which concerns the geometry of embedded minimal surfaces of finite genus, and the theory of integrable systems associated to the Korteweg-de Vries equation. In 1956 M. 
Shiffman~\cite{sh1} generalized Riemann's classification theorem in some respects; Shiffman's main result shows that a compact minimal annulus that is bounded by two circles in parallel planes must be foliated by circles in the intermediate planes. Riemann's result is more concerned with classifying analytically such minimal surfaces and his proof that we give in Section~\ref{riemann} is simple and self-contained; in Sections~\ref{subsec:shiffman} and~\ref{MPR} we will explore some aspects of the arguments by Shiffman. After the preliminaries of Section~\ref{preli}, we will include in Section~\ref{enneper} the aforementioned reduction by Enneper from the general case in which the surface is foliated by circles to the case where the circles lie in parallel planes. In Section~\ref{secgraphics} we will introduce a more modern viewpoint to study the Riemann minimal examples, which moreover will allow us to produce graphics of these surfaces using the software package \textsf{Mathematica}. The analytic tool for obtaining this graphical representation will be the Weierstrass representation of a minimal surface, which is briefly described in Section~\ref{subsec:weierstrass}. In general, minimal surfaces are geometrical objects that adapt well to rendering software, partly because, thanks to their analytic properties, the Schwarz reflection principle for harmonic functions can be applied. This reflection principle, which is also explained in the preliminaries section, allows one to represent relatively simple pieces of a minimal surface (where the computer graphics can achieve high resolution), and then to generate the remainder of the surface by simple reflections or $180^\circ$-rotations around lines contained in the boundaries of the simple fundamental piece. 
On the other hand, as rendering software often represents a surface in space by producing the image under the immersion of a mesh of points in the parameter domain, it is especially important that we use parameterizations whose associated meshes have natural properties, such as preserving angles through the use of isothermal coordinates. \begin{remark} {\rm There is an interesting dichotomy between the situation in $\mathbb{R}^3$ and the one in $\mathbb{R}^n$, $n\geq 4$. Regarding the natural $n$-dimensional generalization of the problem tackled by Riemann, of producing a family of minimal hypersurfaces foliated by $(n-2)$-dimensional spheres, W. C. Jagy~\cite{ja2} proved that if $n \geq 4$, then a minimal hypersurface in $\mathbb{R}^n$ foliated by hyperspheres in parallel planes must be rotationally symmetric. Along this line of thought, we could say that the Riemann minimal examples do not have a counterpart in higher dimensions. } \end{remark} \noindent {\bf Acknowledgements} The authors would like to express their gratitude to Francisco Mart\'\i n for giving his permission to incorporate much of the material from a previous joint paper~\cite{mape1} by him and the second author into the present manuscript. In particular, the parts of our manuscript concerning the classical arguments of Riemann and Weierstrass are largely rewritten translations of the paper~\cite{mape1}. First author's financial support: This material is based upon work for the NSF under Award No. DMS-1309236. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the NSF. Second author's financial support: Research partially supported by a MEC/FEDER grant no. MTM2011-22547, and Regional J. Andaluc\'\i a grant no. P06-FQM-01642. 
\section{Preliminaries.} \label{preli} Among the several equivalent defining formulations of a minimal surface $M\subset \mathbb{R} ^3$, i.e., a surface with mean curvature identically zero, we highlight the Euler-Lagrange equation \[ (1+f_y^2)f_{xx}-2f_xf_yf_{xy}+(1+f_x^2)f_{yy}=0, \] where $M$ is expressed locally as the graph of a function $z=f(x,y)$ (in this paper we will use the abbreviated notation $f_x=\frac{\partial f}{\partial x}$, $f_{xx}=\frac{\partial ^2f}{\partial x^2}$, etc., to refer to the partial derivatives of any expression with respect to one of its variables), and the formulation in local coordinates \[ eG-2fF+Eg=0, \] where $\left( \begin{array}{cc}E & F \\ F & G \end{array}\right) , \left( \begin{array}{cc}e & f \\ f & g \end{array}\right) $ are respectively the matrices of the first and second fundamental forms of $M$ in a local parameterization. Of course, there are other ways of characterizing minimality, such as by the harmonicity of the coordinate functions or the holomorphicity of the Gauss map, but at this point it is worthwhile to remember the historical context in which the ideas that we wish to explain appeared. Riemann was one of the founders of the study of functions of one complex variable, and few things were well understood in Riemann's time concerning the relationship between minimal surfaces and holomorphic or harmonic functions. Instead, Riemann imposed minimality by expressing the surface in implicit form, i.e., as the zero set of a smooth function $F\colon O\to \mathbb{R} $ defined in an open set $O\subset \mathbb{R}^3$, namely \begin{equation} \label{eq:div} \mbox{\rm div} \left( \frac{\nabla F}{| \nabla F| }\right) =0 \quad \mbox{ in $O$,} \end{equation} where $\mbox{\rm div} $ and $\nabla $ denote divergence and gradient in $\mathbb{R} ^3$, respectively. The derivation of equation (\ref{eq:div}) is standard, but we next derive it for the sake of completeness. 
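Before carrying out this derivation, we pause for a quick illustration of the graph equation above (a classical example, recorded here only as a sanity check): for Scherk's surface
\[
f(x,y)=\log \frac{\cos x}{\cos y},\qquad |x|,|y|<\frac{\pi }{2},
\]
we have $f_x=-\tan x$, $f_y=\tan y$, $f_{xy}=0$, $f_{xx}=-\sec ^2x$ and $f_{yy}=\sec ^2y$, so that
\[
(1+f_y^2)f_{xx}-2f_xf_yf_{xy}+(1+f_x^2)f_{yy}=-\sec ^2y\, \sec ^2x+\sec ^2x\, \sec ^2y=0,
\]
and the graph of $f$ is minimal. We now return to the derivation of (\ref{eq:div}).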
First note that \[ \mbox{\rm div} \left( \frac{\nabla F}{| \nabla F| }\right) =\frac{1}{| \nabla F| } \left( \Delta F-\frac{1}{| \nabla F| }(\nabla F)(| \nabla F | ) \right) , \] where $\Delta $ is the Laplacian in $\mathbb{R} ^3$; hence (\ref{eq:div}) follows directly from the next lemma. \begin{lemma} \label{equivalencia} If $0$ is a regular value of a smooth function $F\colon O \rightarrow \mathbb{R}$, then the surface $M=F^{-1}(\{0 \})$ is minimal if and only if $| \nabla F| \Delta F=(\nabla F)(| \nabla F| )$ on $M$. \end{lemma} \begin{proof} Since the tangent plane to $M$ at a point $p\in M$ is $T_pM=\ker (dF_p)=\langle (\nabla F)_p\rangle ^{\perp },$ then $(\nabla F)|_M$ is a nowhere zero vector field normal to $M$, and $N=( \frac{\nabla F}{|\nabla F|}) |_M$ is a Gauss map for $M$. On the other hand, \begin{equation} \label{eq:lema1-1} \Delta F=\mbox{trace}(\nabla ^2F)=\sum _{i=1}^2(\nabla ^2F)(E_i,E_i)+(\nabla ^2F)(N,N) \end{equation} where $\nabla ^2F $ is the hessian of $F$ and $E_1,E_2$ is a local orthonormal frame tangent to $M$. As $(\nabla ^2F)(E_i,E_i)=\langle E_i(\nabla F) ,E_i\rangle =\langle E_i(|\nabla F| N),E_i\rangle =| \nabla F| \langle dN(E_i),E_i\rangle $, then \begin{equation} \label{eq:lema1-2} \sum _{i=1}^2(\nabla ^2F)(E_i,E_i)=| \nabla F|\mbox{ trace}(dN)=-2| \nabla F| H, \end{equation} with $H$ the mean curvature of $M$ with respect to $N$. Also, $|\nabla F|^2(\nabla ^2F)(N,N)=(\nabla ^2F)(\nabla F,\nabla F)= \frac{1}{2}(\nabla F)(| \nabla F| ^2)= | \nabla F| (\nabla F) (| \nabla F| ) $. Plugging this formula and (\ref{eq:lema1-2}) into (\ref{eq:lema1-1}), we get $| \nabla F| \Delta F=-2| \nabla F| ^2H+(\nabla F)(| \nabla F| )$, and the proof is complete. \end{proof} \subsection{Weierstrass representation.} \label{subsec:weierstrass} In the period 1860-70, Enneper and Weierstrass obtained representation formulas for minimal surfaces in $\mathbb{R}^3$ by using curvature lines as parameter lines. 
Their formulae have become fundamental in the study of orientable minimal surfaces (for nonorientable minimal surfaces there are similar formulations, although we will not describe them here). The reader can find a detailed explanation of the Weierstrass representation in treatises on minimal surfaces by Hildebrandt et al.~\cite{dhkw1}, Nitsche~\cite{ni2} and Osserman~\cite{os1}. The starting point is the well-known formula \[ \Delta X =2 \, H \, N, \] valid for an isometric immersion $X=(x_1,x_2,x_3)\colon M \to \mathbb{R} ^3$ of a Riemannian surface into Euclidean space, where $N$ is the Gauss map, $H$ is the mean curvature and $\Delta X= (\Delta x_1,\Delta x_2,\Delta x_3)$ is the Laplacian of the immersion. In particular, minimality of $M$ is equivalent to the harmonicity of the coordinate functions $x_j$, $j=1,2,3$. In the sequel, it is worth considering $M$ as a Riemann surface. We will also write $i=\sqrt{-1}$, and Re, Im will stand for the real and imaginary parts. We denote by $x_j^*$ the harmonic conjugate of $x_j$, which is locally well-defined up to additive constants. Thus, \[ \phi_j :=d x_j +i\,d x_j^\ast , \] is a holomorphic 1-form, globally defined on $M$. If we choose a base point $p_0\in M$, then the equality \begin{equation} \label{eq:WR} X(p)=X(p_0)+\mbox{Re} \int_{p_0}^p (\phi_1,\phi_2,\phi_3 ), \quad p \in M, \end{equation} recovers the initial minimal immersion, where integration in (\ref{eq:WR}) does not depend on the path in $M$ joining $p_0$ to $p$ (here we are implicitly assuming that the 1-forms $\phi _j$ have no real periods; this holds trivially when $M$ is simply connected). 
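As a simple illustration of (\ref{eq:WR}) (a standard example, recorded here for the reader's convenience), take $M=\mathbb{C} \setminus \{ 0\} $ and
\[
\phi _1=\frac{1}{2}\left( \frac{1}{z^2}-1\right) dz,\qquad \phi _2=\frac{i}{2}\left( \frac{1}{z^2}+1\right) dz,\qquad \phi _3=\frac{dz}{z},
\]
which are holomorphic on $M$ and satisfy $\phi _1^2+\phi _2^2+\phi _3^2=0$. The only nonzero period is $\oint \phi _3=2\pi i$, which is purely imaginary, so the real part of the integral in (\ref{eq:WR}) is well-defined on $M$. Writing $z=e^{u+iv}$, one obtains (up to a translation)
\[
X(u,v)=(-\cosh u\, \cos v,-\cosh u\, \sin v,u),
\]
i.e., the catenoid $x_1^2+x_2^2=\cosh ^2x_3$.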
The information encoded by $\phi _j$, $j=1,2,3$, can be expressed with only two pieces of data (this follows from the relation $\sum _{j=1}^3\phi _j^2=0$); for instance, the meromorphic function $g=\frac{\phi_3}{\phi_1-i\phi_2} $ together with $\phi _3$ produce the other two 1-forms by means of the formulas \begin{equation} \label{eq:phij} \phi_1 = \frac{1}{2} \left(\frac 1g-g \right) \phi_3, \quad \phi_2 = \frac{i}{2} \left(\frac 1g+g \right) \phi_3, \end{equation} with the added bonus that $g$ is the stereographic projection of the Gauss map $N$ from the north pole, i.e., \[ N=\left( 2 \: \frac{\mbox{Re}(g)}{1+|g|^2}, 2 \: \frac{\mbox{Im}(g)}{1+|g|^2}, \frac{|g|^2-1}{1+|g|^2} \right) . \] We finish this preliminaries section with the statement of the celebrated Schwarz reflection principle that will be useful later. In 1894, H. A. Schwarz adapted his reflection principle for real-valued harmonic functions in open sets of the plane to obtain the following result for minimal surfaces. The classical proof of this principle can be found in Lemma 7.3 of~\cite{os1}, and a different proof, based on the so-called Bj\"{o}rling problem, can be found in \S 3.4 of~\cite{dhkw1}. \begin{lemma}[Schwarz] \label{th:schwarz} Any straight line segment (resp. planar geodesic) in a minimal surface is an axis of a $180^\circ$-rotational symmetry (resp. a reflective symmetry in the plane containing the geodesic) of the surface. 
\end{lemma} \section{Enneper's reduction of the classification problem to foliations by circles in parallel planes.} \label{enneper} We will say that a surface $M \subset \mathbb{R} ^3$ is foliated by circles if it can be parameterized as \begin{equation} \label{para} X(u,v)= c(u)+r(u) \left( \cos v \, {\mathbf v}_1(u)+\sin v \, {\mathbf v}_2(u) \right), \quad u_0 <u<u_1, \; 0 \leq v<2 \pi, \end{equation} where $c\colon (u_0,u_1)\to \mathbb{R}^3$ is a curve that parameterizes the centers of the circles in the foliation, and $r\colon (u_0,u_1)\to (0,\infty )$ is the radius of the foliating circle. Here ${\mathbf v}_1(u)$, ${\mathbf v}_2(u)$ denote an orthonormal basis of the linear subspace associated to the affine plane that contains the foliating circle. The purpose of this section is to prove Enneper's reduction. \begin{proposition}[Enneper] \label{th:enneper} If a minimal surface $M \subset \mathbb{R} ^3$ is foliated by circles, then these circles are contained in parallel planes. \end{proposition} \begin{proof} Let $\{\mathcal{C}_u \ | \ u\in (u_0,u_1) \}$ be the smooth, 1-parameter family of circles that foliate $M$. Let $t\colon (u_0,u_1)\to \mathbb{S}^2(1)$ be the unit normal vector to the plane that contains $\mathcal{C}_u$. It suffices to show that $t(u)$ is constant. Arguing by contradiction, assume that $t$ is not constant. After possibly restricting to a subinterval, we can suppose that $t'(u) \neq 0$ for all $u \in (u_0,u_1)$. Take a curve $\gamma:(u_0,u_1) \to \mathbb{R} ^3$ with $\gamma'(u)=t(u)$, for all $u \in (u_0,u_1)$. The condition that $t'(u)$ vanishes nowhere implies that the curvature function $\kappa (u)$ of ${\gamma} $ is everywhere positive. Let $n(u)$, $b(u)$ be the normal and binormal (unit) vectors to ${\gamma}$, i.e., $\{ t,n,b\} $ is the Frenet trihedron of ${\gamma} $, and let $\tau $ be the torsion of ${\gamma} $. The surface $M$ can be written as in (\ref{para}) with ${\mathbf v}_1=n$ and ${\mathbf v}_2=b$. 
Our purpose is to express the minimality condition $H=0$ in terms of this parameterization. To do this, denote by \[ \begin{array}{ccc} E= |X_u|^2, & F=\langle X_u,X_v \rangle, & G=|X_v|^2, \\ \rule{0cm}{.6cm} e=\frac{\det (X_u,X_v,X_{uu})}{|X_u \wedge X_v|}, & f=\frac{\det(X_u,X_v,X_{uv})}{|X_u \wedge X_v|} , & g=\frac{\det (X_u,X_v,X_{vv})}{|X_u\wedge X_v|} \end{array} \] the coefficients of the first and second fundamental forms of $M$ in the parameterization~$X$. If $(\alpha,\beta,\delta)$ corresponds to the coordinates of the velocity vector $c'$ of the curve of centers $c(u)$ with respect to the orthonormal basis $\{t,n,b \}$, then a straightforward computation that only uses the Frenet equations for ${\gamma} $ leads to \begin{multline*} e\,G-2 \, f \, F+E \, g= \tfrac{1}{{|X_u \wedge X_v|}}\left({a_1}\,\cos (3\,v)+ {a_2}\,\sin (3\,v)+ \right. \\ \left. {a_3}\,\cos (2\,v) + {a_4}\,\sin (2\,v) +{a_5}\,\cos v+{a_6}\,\sin v+{a_7}\right), \end{multline*} where the functions $a_j$, $j=1, \ldots,7$ depend only on the parameter $u$ and are given in terms of the radius $r(u)$ of ${\cal C}_u$ and the curvature $\kappa (u)$ and torsion $\tau (u)$ of ${\gamma}$ by \begin{eqnarray*} a_1 & = & -\tfrac{1}{2} {r}^3\,\kappa \,\left( {\beta }^2 - {\delta }^2 + {r}^2\,{\kappa }^2 \right),\\ a_2 & = & -{r}^3\,\beta \,\delta \,\kappa,\\ a_3 & = & \tfrac{{r}^3}{2}\,\left( -6\,\beta \,\kappa \,r' + r\,\left( 5\,\alpha \,{\kappa }^2 + \kappa \,\beta ' - \beta \,\kappa ' \right) \right), \\ a_4 & = & \tfrac{{r}^3}{2}\,\left( r\,\kappa \,\delta ' - \delta \,\left( 6\,\kappa \,r' + r\,\kappa ' \right) \right),\\ a_5 & = & -\tfrac{{r}^2}{2} \,\left( 3\,{r}^3\,{\kappa }^3 - 4\,\alpha \,\beta \,r' + \right. \\ & & \left. 
r\,\left( 8\,{\alpha }^2\,\kappa + 3\,{\beta }^2\,\kappa + 3\,\kappa \,\left( {\delta }^2 + 2\,{(r')}^2 \right) - 2\,\beta \,\alpha ' + 2\,\alpha \,\left( \delta \,\tau + \beta ' \right) \right) + 2\,{r}^2\,\left( r'\,\kappa ' - \kappa \,r'' \right) \right) , \\ a_6 & = & {r}^2\,\left( 2\,\alpha \,\delta \,r' + {r}^2\,\kappa \,\tau \,r' + r\,\left( \delta \,\alpha ' + \alpha \,\left( \beta \,\tau - \delta ' \right) \right) \right),\\ a_7 & = & \tfrac{{r}^2}{2} \,\left( 2\,{\alpha }^3 + r\,\left( 2\,r'\,\left( -2\,\beta \,\kappa + \alpha ' \right) + r\,\left( \kappa \,\left( 2\,\delta \,\tau + \beta ' \right) - \beta \,\kappa ' \right) \right) + \right. \\ & & \left. \alpha \,\left( 2\,{\beta }^2 + 2\,{\delta }^2 + 5\,{r}^2\,{\kappa }^2 + 2\,{(r')}^2 - 2\,r\,r'' \right) \right). \end{eqnarray*} As the functions in $\{ \cos(n \, v), \sin ((n+1) \, v)\ | \ n \in \mathbb{N} \cup \{ 0 \} \} $ are linearly independent, then the condition $H=0$ is equivalent to $a_j= 0$, for each $j=1, \ldots,7$. Since $r>0$ and $\kappa >0$, then the conditions $a_1=a_2 =0$ above imply that $\beta=0$ and $\delta^2=r^2 \kappa^2$. Plugging this into $a_4=0$, we get $5 r \, r' \, \kappa^2=0$, from which $r'=0$. Substituting this into $a_3=0$ we get $5 r \, \alpha \, \kappa^2=0$, hence $\alpha=0$. Finally, plugging $\alpha=\beta= r'=0$ and $\delta^2=r^2 \kappa^2$ into $a_5=0$, we deduce that $-3 r^5\kappa^3=0$, which is a contradiction. This contradiction shows that $t(u)$ is constant and the proposition is proved. \end{proof} \section{The argument by Riemann.} \label{riemann} In this section we explain the classification by Riemann of the minimal surfaces in $\mathbb{R}^3$ that are foliated by circles. By Proposition~\ref{th:enneper}, we can assume that the foliating circles of our minimal surface $M\subset \mathbb{R}^3$ are contained in parallel planes, which after a rotation we will assume to be horizontal. 
We will take the height $z$ of the plane as a parameter of the foliation, and denote by $({\alpha} (z),z)= ({\alpha} _1(z),{\alpha} _2(z),z)$ the center of the circle $M\cap \{ x_3=z\} $ and $r(z)>0$ its radius; here we are using the identification $\mathbb{R} ^3=\mathbb{R}^2\times \mathbb{R} $. The functions ${\alpha} _1,{\alpha} _2,r$ are assumed to be of class $C^2$ in an interval $(z_0,z_1)\subset \mathbb{R}$. Consider the function $F:\mathbb{R}^2\times (z_0,z_1)\to \mathbb{R}$ given by \begin{equation} \label{eq:F} F(x,z)=| x-{\alpha} (z)| ^2-r(z)^2, \end{equation} where $x=(x_1,x_2)\in \mathbb{R}^2$. So, $M\subset F^{-1}(\{ 0\} )$ and thus, Lemma~\ref{equivalencia} gives that $M$ is minimal if and only if \[ F_z^2+| x-{\alpha} | ^2\left( 2+F_{zz}\right) +2\langle x-{\alpha} ,{\alpha} '\rangle F_z=0 \quad \mbox{ in $M$}. \] As $| x-{\alpha} | ^2=r^2$ in $M$ and $2\langle x-{\alpha} ,{\alpha} '\rangle =-\left( | x-{\alpha} | ^2\right) _z=-(F+r^2)_z=-F_z-(r^2)'$, then the minimality of $M$ can be written as \begin{equation} \label{eq:teor2-3} 2 r^2+r^2F_{zz}-(r^2)'F_z=0. \end{equation} The argument amounts to integrating (\ref{eq:teor2-3}). First we divide by $r^4$, \[ 0=\frac{2}{r^2}+\frac{r^2F_{zz}-(r^2)'F_z}{r^4}=\frac{2}{r^2}+\left( \frac{F_z}{r^2}\right) _z, \] and then we integrate with respect to $z$, obtaining \begin{equation} \label{eq:teor2-4} f(x)=2\int ^z\frac{du}{r(u)^2}+\frac{F_z(x,z)}{r(z)^2}, \end{equation} for a certain function $f$ of $x$. On the other hand, (\ref{eq:F}) implies that \[ \frac{\partial }{\partial x_j}\left( \frac{F_z}{r^2}\right) = -\frac{2{\alpha} _j'}{r^2}, \] which is a function depending only on $z$, for each $j=1,2$. Therefore, for $z$ fixed, the function $x\mapsto \frac{F_z}{r^2}$ must be affine. 
As $\int ^z\frac{du}{r(u)^2}$ only depends on $z$, we conclude from (\ref{eq:teor2-4}) that $f(x)$ is also an affine function of $x$, i.e., \begin{equation} \label{eq:teor2-5} 2\int ^z\frac{du}{r(u)^2}+\frac{F_z(x,z)}{r(z)^2}=2\langle a,x\rangle +c \end{equation} for certain $a=(a_1,a_2)\in \mathbb{R}^2$, $c\in \mathbb{R} $. Taking derivatives in (\ref{eq:teor2-5}) with respect to $x_j$, we get ${\alpha} _j'(z)=-a_jr^2(z)$; hence \[ {\alpha} (z)=-m(z)\, a \quad \mbox{ where }m(z)=\int ^zr(u)^2\, du. \] These formulas determine the center of the circle $M\cap \{ x_3=z\} $ up to a horizontal translation that is independent of the height. In order to determine the radius of the circle $M\cap \{ x_3=z\} $ as a function of $z$, we come back to equation (\ref{eq:teor2-3}). Since $F_z=-2\langle x-{\alpha} ,{\alpha} '\rangle -(r^2)'$, then $F_{zz}=2| {\alpha} '| ^2-2\langle x-{\alpha} ,{\alpha} ''\rangle -(r^2)''$. Plugging ${\alpha} '=-r^2\, a$ into these two expressions, and this one into (\ref{eq:teor2-3}), we deduce that the minimality of $M$ can be rewritten as \begin{equation} \label{eq:teor2-6} 2| a|^2r^6+((r^2)')^2+r^2(2-(r^2)'')=0, \end{equation} which is an ordinary differential equation in the function $q(z)=r(z)^2$. Solving this ODE is straightforward: first note that \[ \left( \frac{(q')^2}{q^2}\right) '=\frac{2q'}{q^3}\left[ -(q')^2+qq''\right] \stackrel{(\ref{eq:teor2-6})}{=} \frac{2q'}{q^3}\left( 2| a| ^2q^3+2q\right) =4\left( | a| ^2q'+\frac{q'}{q^2}\right). \] Hence integrating with respect to $z$ we have \begin{equation} \label{eq:teor2-7} \frac{(q')^2}{q^2}=4\left( | a| ^2q-\frac{1}{q}\right) +4{\lambda} , \end{equation} for certain ${\lambda} \in \mathbb{R} $. In particular, the right-hand side of (\ref{eq:teor2-7}) must be nonnegative. Solving for $q'(z)$ we have \[ q'=\frac{dq}{dz}=2\sqrt{| a| ^2q^3-q+{\lambda} q^2}; \] hence the height differential of $M$ is \[ dz=\frac{1}{2}\frac{dq}{\sqrt{| a| ^2q^3-q+{\lambda} q^2}}, \] where $q=r^2$. 
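As a consistency check (immediate, but worth recording), differentiation of (\ref{eq:teor2-7}) with respect to $z$ gives
\[
\frac{2q'}{q^3}\left( qq''-(q')^2\right) =4\left( | a| ^2+\frac{1}{q^2}\right) q',
\]
that is, $qq''-(q')^2=2| a| ^2q^3+2q$ wherever $q'\neq 0$, which is exactly equation (\ref{eq:teor2-6}) rewritten in terms of $q=r^2$; this confirms that ${\lambda} $ is simply the constant of integration.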
Viewing $q$ as a real variable, the third coordinate function of $M$ can be expressed as \begin{equation} \label{eq:teor2-8} z(q)=\frac{1}{2}\int ^q\frac{du}{\sqrt{| a| ^2u^3-u+{\lambda} u^2}}. \end{equation} To obtain the first two coordinate functions of $M$, recall that $M\cap \{ x_3=z\} $ is a circle centered at $({\alpha} ,z)$ with radius $r$. This means that besides the variable $q$ that gives the height $z(q)$, we need another real variable to parameterize the circle centered at $({\alpha} ,z)$ with radius $r$: \[ x(q,v)={\alpha} (z(q))+\sqrt{q}e^{iv}=-m(z(q))\, a+\sqrt{q}e^{iv}=-\int ^{z(q)}q(z)\, dz\cdot a+\sqrt{q}e^{iv} \] \[ =-\int ^{q}q\frac{dz}{dq}\, dq\cdot a+\sqrt{q}e^{iv}=-\frac{1}{2}\int ^q\frac{u \, du} {\sqrt{| a| ^2u^3-u+{\lambda} u^2}}\cdot a +\sqrt{q}e^{iv}, \] where $0\leq v<2\pi $. In summary, we have obtained the following parameterization $X=(x_1,x_2,z)=(x_1+ix_2,z)$ of $M$: \begin{equation} \label{eq:teor2-9} X(q,v)=f(q)(a,0)+\sqrt{q}(e^{iv},0)+(0,z(q)), \end{equation} where ${\displaystyle f(q)=-\frac{1}{2}\int ^q\frac{u \, du}{\sqrt{| a| ^2u^3-u+{\lambda} u^2}}}$ and $z(q)$ is given by (\ref{eq:teor2-8}). The surfaces in (\ref{eq:teor2-9}) come in a 3-parameter family depending on the values of $a_1,a_2,{\lambda} $. Nevertheless, two of these parameters correspond to rotations and homotheties of the same geometric surface, and so there is only one genuine geometric parameter. Next we will analyze further the surfaces in the family (\ref{eq:teor2-9}), in order to understand their shape and other properties. First observe that the first term in the last expression of $X(q,v)$ parameterizes the center of the circle $M\cap \{ x_3=z(q)\} $ as if it were placed at height zero, the second term parameterizes the circle itself ($q$ is positive as $q(z)=r(z)^2$), and the third one places the circle at height $z(q)$. 
To study the shape of the surface, we will analyze for which values of $q>0$ the radicand $| a| ^2q^3-q+{\lambda} q^2$ is nonnegative (this is a necessary condition, see (\ref{eq:teor2-7})). Since the radicand factors as $q(| a| ^2q^2+{\lambda} q-1)$, for $q>0$ it is nonnegative exactly when $| a| ^2q^2+{\lambda} q-1\geq 0$, i.e., when $q$ is at least the unique positive root of this polynomial. Hence the range of $q$ is of the form $[q_1,+\infty )$ for a certain $q_1>0$. Also, the positivity of the integrand in (\ref{eq:teor2-8}) implies that $z(q)$ is increasing. Since choosing a starting value of $q$ for the integral (\ref{eq:teor2-8}) amounts to vertically translating the surface, we can choose this starting value as $q_1$, which geometrically means: \begin{quote} {\it We normalize $M$ so that the circle of minimum radius in $M$ is at height zero, and $M$ is a subset of the half-space $\{ (x_1,x_2,x_3) \mid x_3\geq 0\} $.} \end{quote} This translated surface will have a lower boundary component being a circle (or in the limit case a straight line) contained in the plane $\{ z=0\} $ (in particular, $M$ is not complete). If we choose the negative branch of the square root when solving for $q'(z)$ after (\ref{eq:teor2-7}), we will obtain another surface $M'$ contained in the half-space $\{ x_3\leq 0\} $ with the same boundary as $M$ in $\{ x_3=0\} $. The union of $M$ with $M'$ is again a smooth minimal surface; this is because the tangent spaces to both $M,M'$ coincide along the common boundary. Nevertheless, $M\cup M'$ might fail to be complete; we will obtain more information about this issue of completeness when discussing the value of $a$. Analogously, picking a starting value of $q$ for the integral in (\ref{eq:teor2-9}) that gives $f$, corresponds to translating horizontally the center of the circle $M\cap \{ x_3=z(q)\} $ by a vector independent of $q$, or equivalently, translating $M$ horizontally in $\mathbb{R}^3$. Thus, we can normalize this starting value of $q$ for the integral in (\ref{eq:teor2-9}) to be the same $q_1$ as before. 
This means that we may assume: \begin{quote} {\it The circle of minimum radius in $M$ has its center at the origin of $\mathbb{R} ^3$.} \end{quote} We next discuss cases depending on whether or not $a$ vanishes. \par \vspace{.2cm} {\sc Case I: $a=(0,0)$ gives the catenoid.} \par \vspace{.2cm} \noindent In this case, (\ref{eq:teor2-9}) reads $X(q,v)=\sqrt{q}(e^{iv},0)+(0,z(q))$, where $z(q)=\frac{1}{2}\int_{q_1}^q\frac{du}{\sqrt{-u+{\lambda} u^2}}$. In particular, $M$ is a surface of revolution around the $z$-axis, so $M$ is a half-catenoid with neck circle at height zero. In order to determine $q_1$, observe that ${\lambda} $ is positive as follows from (\ref{eq:teor2-7}), and that the function $u\mapsto -u+{\lambda} u^2$ is nonnegative in $(-\infty ,0]\cup [\frac{1}{{\lambda} },\infty )$. Therefore, we must take $q_1=\frac{1}{{\lambda} }$. Furthermore, the integral that defines $z(q)$ can be explicitly solved: \[ z(q)=\frac{1}{2}\int _{1/{\lambda} }^q\frac{du}{\sqrt{-u+{\lambda} u^2}}=\frac{1} {\sqrt{{\lambda} }}\arg \sinh \, \sqrt{-1+q{\lambda} }, \] which gives the following parameterization of $M$: \begin{equation} \label{eq:catenoide1} X(q,v)=\left( \sqrt{q}e^{iv},\frac{1}{\sqrt{{\lambda} }}\arg \sinh\, \sqrt{-1+q{\lambda} }\right) . \end{equation} In this case, the surface $M'\subset \{ x_3\leq 0\} $ defined by the negative branch of the square root when solving for $q'(z)$ in (\ref{eq:teor2-7}) is the lower half of the same catenoid of which $M$ is the upper part. In particular, $M\cup M'$ is complete. \par \vspace{.2cm} {\sc Case II: $a\neq (0,0)$ gives the Riemann minimal examples.} \par \vspace{.2cm} \noindent As $f,z$ depend on $a=(a_1,a_2)$ only through $| a| ^2$, a rotation of $a\equiv a_1+ia_2$ in $\mathbb{R}^2 \equiv \mathbb{C} $ around the origin by angle ${\theta} $ will leave $f$ and $z$ invariant. 
By (\ref{eq:teor2-9}), the center of the circle $M\cap \{ x_3=z(q)\} $ will also be rotated by the same angle ${\theta}$ around the $x_3$-axis, while the second and third terms in (\ref{eq:teor2-9}) will remain the same. This says that rotating $a$ in $\mathbb{C} $ corresponds to rotating $M$ in $\mathbb{R} ^3$ around the $x_3$-axis, so without loss of generality we can assume that $a\equiv (a,0)\in \mathbb{R}^2$ with $a>0$. The radicand of equation (\ref{eq:teor2-8}) is now expressed by $q(a^2q^2-1+{\lambda} q)$, hence $a^2q^2-1+{\lambda} q\geq 0$. This occurs for $q \in \left[ \frac{1}{2a^2}(-{\lambda} -\sqrt{4a^2+{\lambda} ^2}),0\right] \cup \left[ q_1,\infty \right)$, where \[ q_1=\frac{1}{2a^2}(-{\lambda} +\sqrt{4a^2+{\lambda} ^2}). \] As $q>0$, then the correct range is $q\in [q_1,\infty )$. Now fix the starting integration values for $z(q),f(q)$ at $q_1$, and denote by $z_{a,{\lambda} },f_{a,{\lambda} }$ the functions given by (\ref{eq:teor2-8}), (\ref{eq:teor2-9}), respectively. A straightforward change of variables shows that \[ \sqrt{a}z_{a,{\lambda} }(q)=z_{1,{\lambda} /a}( aq) ,\qquad a^{3/2}f_{a,{\lambda} }(q)=f_{1,{\lambda} /a}(aq). \] Thus, the minimal immersions $X_{a,{\lambda} },X_{1,{\lambda} /a}$ are related by a homothety of ratio $\sqrt{a}$: \[ \sqrt{a}X_{a,{\lambda} }(q,v)=X_{1,{\lambda} /a}(aq,v). \] Therefore, we can assume $a=1$, i.e., our surfaces only depend on the real parameter ${\lambda} $: \begin{equation} \label{eq:Riemann1} X_{{\lambda} }(q,v)=f_{{\lambda} }(q)(1,0)+\sqrt{q}(e^{iv},0)+(0,z_{{\lambda} }(q)), \end{equation} where \[ f_{{\lambda} }(q)=-\frac{1}{2}\int _{q_1}^q\frac{u \, du}{\sqrt{u^3-u+{\lambda} u^2}}, \quad z_{{\lambda} }(q)=\frac{1}{2}\int _{q_1}^q \frac{du}{\sqrt{u^3-u+{\lambda} u^2}} \quad \mbox{and} \] \[ q_1=q_1({\lambda} )=\frac{1}{2}\left( -{\lambda} +\sqrt{4+{\lambda} ^2}\right) . 
\] Calling $M_{{\lambda} }=X_{{\lambda}}([q_1,\infty )\times [0,2\pi ))$, the center of the circle $M_{{\lambda} }\cap \{ x_3=z_{{\lambda} }(q)\} $ lies in the plane $\Pi \equiv \{ x_2=0\} $, which implies that $M_{{\lambda} }$ is symmetric with respect to $\Pi $. The increasing function $q\mapsto z_{{\lambda} }(q)$, $q\in [q_1,\infty )$, is bounded because its integrand is comparable to $u^{-3/2}$ for $u$ large, and $\displaystyle\int ^{\infty }\frac{du}{\sqrt{u^3}}$ converges. This means that $M_{{\lambda} }$ lies in a slab of the form $$S(0,\zeta )=\{ (x_1,x_2,z) \in \mathbb{R} ^3 \ | \ 0\leq z\leq \zeta \}, $$ where $\zeta =\zeta ({\lambda} )=\lim _{q\rightarrow \infty }z_{{\lambda} }(q)>0$. We next analyze $M_{{\lambda} }\cap \{ x_3=\zeta \} $. As $M_{{\lambda} }$ is symmetric with respect to $\Pi $, given $q\in [q_1,\infty )$, the circle $M_{{\lambda} }\cap \{ x_3=z(q)\} $ intersects $\Pi $ at two antipodal points $A_+(q),A_-(q)$ that are obtained by imposing the condition $\sin v=0$ in (\ref{eq:Riemann1}), i.e., \[ A_{\pm }(q)=\left( -\frac{1}{2}\int _{q_1}^q\frac{u \, du}{\sqrt{u^3-u+{\lambda} u^2}} \pm \sqrt{q},z_{{\lambda} }(q)\right) \in (\mathbb{R} \times \{ 0 \} )\times \mathbb{R} \subset \mathbb{C} \times \mathbb{R} . \] Since $q$ is not bounded, the first coordinate of $A_-(q)$ tends to $-\infty $ as $q\to \infty $, that is to say, when we approach the upper boundary plane $\{ x_3=\zeta \} $ of the slab $S(0,\zeta )$. By contrast, the first coordinate of $A_+(q)$ tends to a finite limit as $q\to \infty $, because for sufficiently large values of $q$ we have \[ \left( -\frac{1}{2}\int _{q_1}^q\frac{u \, du}{\sqrt{u^3-u+{\lambda} u^2}}+\sqrt{q}\right) \approx \left( \mbox{constant}({\lambda} )-\frac{1}{2} \int _{q_1}^q\frac{du}{\sqrt{u}}+\sqrt{q}\right) =\mbox{constant}({\lambda} ). \] Therefore, $A_+(q)$ converges as $q\to \infty $ to a point $A\in \{ x_3=\zeta \} \cap \Pi $. This proves that $M_{{\lambda} }\cap \{ x_3=\zeta \} \neq \mbox{\O }$. 
As $M_{{\lambda} }\cap \{ x_3=\zeta \} $ cannot be compact (because $A_-(q)$ diverges in $\mathbb{R} ^3$ as $q\to \infty $), we deduce that $M_{{\lambda} }\cap \{ x_3=\zeta \} $ is a noncompact limit of circles symmetric with respect to $\Pi $, hence it is a straight line $r$ orthogonal to $\Pi $ and passing through $A$. As the boundary of $M_{{\lambda} }$ consists of a circle in $\{ x_3=0\} $ and a straight line $r$ in $\{ x_3=\zeta \} $, the Schwarz reflection principle (Lemma~\ref{th:schwarz}) implies that $M_{{\lambda} }\cup \mbox{Rot}_r(M_{{\lambda} })$ is a minimal surface, where Rot$_r$ denotes the $180^\circ$-rotation around $r$. Clearly, $M_{{\lambda} }\cup \mbox{Rot}_r(M_{{\lambda} })\subset S(0,2 \, \zeta )$ has two horizontal boundary circles of the same radius. The same behavior holds if we choose the negative branch of the square root when solving (\ref{eq:teor2-7}) for $q'(z)$, but now for a slab of the type $S(-\zeta ,0)$. This means that we can rotate the surface by $180^\circ$ around the straight line $r'= \{ -p\ | \ p\in r\} \subset \{ x_3=-\zeta \} $, obtaining a minimal surface that lies in $S(-4\, \zeta ,2\, \zeta )$, whose boundary consists of two circles of the same radius in the boundary planes of this slab and such that the surface contains in its interior three parallel straight lines at heights $\zeta$, $-\zeta $, $-3 \, \zeta$, all orthogonal to $\Pi $. Repeating this rotation-extension process indefinitely we produce a complete embedded minimal surface $R_{{\lambda} }\subset \mathbb{R} ^3$ that contains parallel lines contained in the planes $\{ x_3=(2k+1)\zeta ({\lambda} )\} $ with $k\in \mathbb{Z} $ and orthogonal to $\Pi $. It is also clear that $R_{{\lambda} }$ is invariant under the translation $T_{{\lambda}}$ by the vector $2(A-B)$ (here $B=r'\cap \Pi $), obtained by composing the $180^\circ$-rotations around two consecutive lines. This surface $R_{{\lambda} }$ is what we call a {\it Riemann minimal example,} and it is foliated by circles and lines in parallel planes.
For each Riemann minimal example $R_{{\lambda} }$, the circles of minimum radius lie in the planes $\{ x_3=2 \, k \, \zeta \,({\lambda} )\} $, $k\in \mathbb{Z} $, and this minimum radius is \[ \sqrt{q_1({\lambda} )}=\sqrt{(-{\lambda} +\sqrt{4+{\lambda} ^2})/2}. \] This function of ${\lambda} \in \mathbb{R}$ is one-to-one (with negative derivative), from which we conclude that \begin{quote} {\it The Riemann minimal examples $\{ R_{\lambda} \ | \ {\lambda} \in \mathbb{R} \}$ form a 1-parameter family of noncongruent surfaces. } \end{quote} In Figure~\ref{fig:AB} we have represented the intersection of $R_{{\lambda} }$ for ${\lambda} =1$ with the symmetry plane $\Pi $. At each of the points $A$ and $B$ there passes a straight line contained in the surface and orthogonal to $\Pi$. \begin{figure} \begin{center} \includegraphics[height=5cm]{corteplanosim.pdf} \caption{The intersection of $R_{\lambda}$ with the symmetry plane $\Pi =\{x_2=0 \}$. The translation of vector $2 (A-B)$ leaves $R_{\lambda}$ invariant.} \label{fig:AB} \end{center} \end{figure} Viewed as a complete surface in $\mathbb{R}^3$, each Riemann minimal example $R_\lambda$ has the topology of an infinite cylinder punctured in an infinite discrete set of points, corresponding to its planar ends, and $R_\lambda$ is invariant under the translation $T_\lambda$. Furthermore, the quotient surface ${\cal R}_\lambda=R_\lambda /T_\lambda$ in the 3-dimensional flat Riemannian manifold $\mathbb{R}^3/T_\lambda$ is conformally diffeomorphic to a flat torus ${\Theta}$ punctured in two points. In addition, the Gauss map $G$ of ${\cal R}_\lambda$ has degree two and has exactly two branch points. This means that $G$ is a meromorphic function on the punctured torus that extends to a meromorphic function $\widehat{G}$ of degree two on ${\Theta}$ whose zeros and poles, both of order two, occur at the two points corresponding to the planar ends of ${\cal R}_\lambda$ in $\mathbb{R}^3/T_\lambda$.
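The monotonicity claim is elementary ($q_1'({\lambda} )<0$ follows from ${\lambda} <\sqrt{4+{\lambda} ^2}$), and it can also be confirmed by a quick numerical sketch in Python:

```python
import math

def min_radius(lam):
    # minimum circle radius sqrt(q_1(lambda)), with
    # q_1(lambda) = (-lambda + sqrt(4 + lambda^2)) / 2
    return math.sqrt(0.5 * (-lam + math.sqrt(4.0 + lam * lam)))

# Strictly decreasing on a sample grid, hence one-to-one there:
# distinct values of lambda give noncongruent Riemann minimal examples.
samples = [min_radius(-5.0 + 0.05 * k) for k in range(201)]
assert all(r0 > r1 for r0, r1 in zip(samples, samples[1:]))
assert abs(min_radius(0.0) - 1.0) < 1e-15     # q_1(0) = 1
```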
It then follows from Riemann surface theory that the degree-two meromorphic function $\widehat{G}$ is a complex multiple of the Weierstrass $\wp$-function on the underlying elliptic curve ${\Theta}$. Furthermore, the vertical plane of symmetry and the rotational symmetry around either of the two lines on the quotient surface ${\cal R}_\lambda$ imply that ${\Theta}$ is conformally $\mathbb{C}/\Lambda$, where $\Lambda$ is a rectangular lattice. \subsection{Graphics representation of the Riemann minimal examples.} \label{secgraphics} With the parameterizations of the Riemann minimal examples obtained in the preceding section we can represent these surfaces with the help of the \textsf{Mathematica} graphing package. Nevertheless, we will not use the parameterization in (\ref{eq:Riemann1}), mainly because the parameter $q$ diverges when producing the straight lines contained in $R_{{\lambda} }$; instead, we will use a conformal parameterization given by the Weierstrass representation. Recall that ${\cal R}_\lambda=R_\lambda /T_\lambda$ is a twice punctured torus, and that its Gauss map (which can be regarded as a holomorphic function on ${\cal R}_{{\lambda} }$, see Section~\ref{subsec:weierstrass}) has degree two on the compactification. In particular, the compactification of ${\cal R}_{{\lambda} }$ is conformally equivalent to the following algebraic curve: \[ \overline{M}_\sigma=\left\{ (z,w) \in \overline{\mathbb{C}} \times \overline{\mathbb{C}} \ | \ w^2 = z(z-1) (z+\sigma) \right\}, \] where $\sigma \in \mathbb{C}-\{0,-1\}$ depends on ${\lambda} $ in some way to be determined later, and we can moreover choose the degree-two function $z$ on $\overline{M}_\sigma$ so that the meromorphic extension of the Gauss map of ${\cal R}_\lambda $ to $\overline{M}_\sigma$ is $g^{\sigma}(z,w)= \rho \, z$, for a certain constant $\rho=\rho(\sigma) \in \mathbb{C}^*$.
It is worth mentioning that the way one endows $\overline{M}_\sigma$ with a complex structure is also due to Riemann, when studying multivalued functions on the complex plane (in this case, $z \mapsto \sqrt{z (z-1) (z+\sigma)}$). Since the third coordinate function of $R_\lambda $ is harmonic and extends analytically through the planar ends (because it is bounded in a neighborhood of each end), its complex differential $dx_3+i\, dx_3^\ast =\phi _3^{\sigma}$ is a holomorphic 1-form on the torus $\overline{M}_\sigma$ (without zeros). As the linear space of holomorphic 1-forms on a torus has complex dimension~1, we deduce that $\phi_3^{\sigma} \equiv \mu \frac{dz}{w}$ for some $\mu \in \mathbb{C}^*$ also to be determined. Clearly, after possibly applying a homothety to $R_{{\lambda} }$, we can assume that $|\mu |=1$. \subsubsection{Symmetries of the surface.} \label{sec4.1.1} Recall that each surface $R_\lambda$ is invariant under certain rigid motions of $\mathbb{R}^3$, which therefore induce intrinsic isometries of the surface. These intrinsic isometries are in particular conformal diffeomorphisms of the algebraic curve $\overline{M}_{\sigma }$, which may be holomorphic or anti-holomorphic depending on whether they preserve or reverse the orientation of the surface. These symmetries will be useful in determining the constants that appeared in the above two paragraphs. First consider the orientation-preserving isometry of $R_{{\lambda} }$ given by the $180^\circ$-rotation about a straight line parallel to the $x_2$-axis, that intersects $R_{{\lambda} }$ orthogonally at two points lying in a horizontal circle of minimum radius (these points would be represented by the midpoint of the segment $\overline{AB}$ in Figure~\ref{fig:AB}). This symmetry induces an order-two biholomorphism $S_1$ of $\overline{M}_\sigma$, that acts on $g^{\sigma},\phi_3^{\sigma}$ in the following way: \[ g^{\sigma} \circ S_1=-1/g^{\sigma},\qquad S_1^*\phi_3^{\sigma}=-\phi_3^{\sigma}.
\] As $S_1$ interchanges the branch values of $g^{\sigma}$, we deduce that $\rho= \frac{1}{\sqrt{\sigma}}$ and $S_1(z,w)=(-\frac{\sigma}{z},-\sigma \frac{w}{z^2})$. Another isometry of $R_{\lambda}$ is the $180^\circ$-rotation about a straight line $r$ parallel to the $x_2$-axis and contained in the surface. As this symmetry reverses the orientation of $R_{\lambda}$, it induces an order-two anti-holomorphic diffeomorphism $S_2$ of $\overline{M}_\sigma$, which acts on $g^{\sigma},\phi_3^{\sigma}$ as \[ g^{\sigma} \circ S_2= \overline{g^{\sigma}},\qquad S_2^*\phi_3^{\sigma} = -\overline{\phi_3^{\sigma}}. \] $S_2$ fixes the branch points of $g^\sigma$ (one of them lies in $r$), from which we get $\sigma \in \mathbb{R}$, $\mu \in \{\pm 1, \pm i \}$ and $S_2(z,w)=(\overline{z},\pm \overline{w})$. Furthermore, the unit normal vector to $R_{{\lambda} }$ along $r$ takes values in $\mathbb{S}^1(1)\cap \{x_2=0 \}$, which implies that $g^\sigma(1,0)\in \mathbb{R}$, and so, $\sigma>0$. The following argument shows that we can assume that $\mu =1$. Consider the Weierstrass data $\left(\overline{M}_\sigma, \; g^{\sigma}, \; \phi_3^{\sigma}\right)$, and the biholomorphism $\Sigma \colon \overline{M}_{1/\sigma} \to \overline{M}_{\sigma}$ given by $\Sigma(z,w)=(-\sigma \, z, i \, \sigma^{3/2}\, w)$. Then, we have that \[ g^{\sigma} \circ \Sigma= -g^{1/\sigma}, \quad \Sigma^* \phi_3^{\sigma}= \frac{i}{\sqrt{\sigma}} \phi_3^{1/\sigma}. \] The change of variable via $\Sigma $ gives \[ \int_{\Sigma(P_0)}^{\Sigma(P)} \frac{i}{\sqrt{\sigma}} \phi_3^{1/\sigma}=\int_{P_0}^{P} \phi_3^{\sigma}, \] and similar equations hold for the other two components $\phi_1^{1/\sigma}, \phi_2^{1/\sigma}$ of the Weierstrass form (see~(\ref{eq:phij})). This implies that we can assume, after a rigid motion and a homothety, that $\mu =1$ and that the isometry $S_2$ is \[ S_2(z,w)=(\overline{z},-\overline{w}).
\] Finally, the reflective symmetry of $R_{{\lambda} }$ with respect to the plane $\{x_2=0 \}$ induces an anti-holomorphic involution $S_3$ of $\overline{M}_{\sigma }$ which fixes the branch points of $g^{\sigma }$ (including the zeros and poles $(0,0),(\infty ,\infty )$ of $g^{\sigma }$, which correspond to the ends of ${\cal R}_{{\lambda} }$), and which preserves the third coordinate function. It is then clear that this transformation has the form $S_3(z,w)=(\overline{z},\overline{w})$. \subsubsection{The period problem.} \label{sec4.1.2} We next check that the Weierstrass data $\left(\overline{M}_\sigma, \; g^{\sigma}, \; \phi_3^{\sigma}\right)$ produces a minimal surface in $\mathbb{R}^3$ that is invariant under a translation. A homology basis of the algebraic curve $\overline{M}_\sigma$ is ${\cal B}=\{\gamma_1,\gamma_2\}$, where these closed curves are the liftings to $\overline{M}_\sigma$ through the $z$-projection of the curves in the complex plane represented in Figure~\ref{fig:curvas}. \begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{curvas.pdf} \caption{The curves $\gamma_1$ and $\gamma_2$ in the $z$-complex plane.} \label{fig:curvas} \end{center} \end{figure} The action of the symmetries $S_j$, $j=1,2,3$, on the basis $\cal B$ and on the Weierstrass data implies that \[ \mbox{Re} \int_{\gamma_1} (\phi_1^{\sigma},\phi_2^{\sigma},\phi_3^{\sigma}) =0, \qquad \mbox{Re} \int_{\gamma_2} (\phi_1^{\sigma},\phi_2^{\sigma},\phi_3^{\sigma}) \in \{x_2=0 \}. \] In particular, the Weierstrass data $\left(\overline{M}_\sigma, \; g^{\sigma}, \; \phi_3^{\sigma}\right)$ gives rise to a well-defined minimal immersion on the cyclic covering of $\overline{M}_\sigma -\{ (0,0),(\infty ,\infty )\} $ associated to the subgroup of the fundamental group of $\overline{M}_\sigma$ generated by $\gamma_2$.
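These period conditions can be probed numerically. The sketch below (plain Python, with the arbitrary sample value $\sigma =2$) assumes that, up to homology, the curves of Figure~\ref{fig:curvas} project to circles around the branch-point pairs $\{0,1\}$ and $\{-\sigma ,0\}$; it integrates the Weierstrass forms along the lifted circles, tracking a continuous branch of $w$. The loop around $0$ and $1$ turns out to have vanishing real periods, while the loop around $-\sigma $ and $0$ has its real period contained in $\{ x_2=0\} $, in agreement with the conditions displayed above:

```python
import cmath, math

SIGMA = 2.0
RS = math.sqrt(SIGMA)

def real_period(center, radius, n=4096):
    """Real part of the periods of (phi_1, phi_2, phi_3) along the lift of the
    circle |z - center| = radius, where phi_3 = dz/w, g = z/sqrt(SIGMA) and
    w^2 = z (z - 1) (z + SIGMA).  The branch of w is tracked continuously and
    the integral uses the trapezoid rule on a smooth periodic integrand."""
    pts = []
    w_prev = None
    for k in range(n + 1):
        z = center + radius * cmath.exp(2j * math.pi * k / n)
        w = cmath.sqrt(z * (z - 1.0) * (z + SIGMA))
        if w_prev is not None and abs(w - w_prev) > abs(w + w_prev):
            w = -w                      # pick the branch closest to the previous one
        w_prev = w
        pts.append((z, w))
    P = [0j, 0j, 0j]
    for k in range(n):
        (z0, w0), (z1, w1) = pts[k], pts[k + 1]
        dz = z1 - z0
        for (z, w) in ((z0, w0), (z1, w1)):
            g = z / RS
            phi = (0.5 * (1.0 / g - g) / w, 0.5j * (1.0 / g + g) / w, 1.0 / w)
            for j in range(3):
                P[j] += 0.5 * phi[j] * dz
    return [p.real for p in P]

# Loop around the branch points 0 and 1: all real periods vanish.
P01 = real_period(0.5, 0.8)
# Loop around the branch points -SIGMA and 0: real period lies in {x2 = 0}.
Pm0 = real_period(-1.0, 1.7)
```

(The residues of the $\phi _j^{\sigma }$ at the puncture over $z=0$ vanish, so enclosing that puncture does not affect the computed periods.)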
We will denote by $M_{\sigma }\subset \mathbb{R} ^3$ the image of this immersion; we will prove in Section~\ref{subsec:shiffman} that $M_{\sigma }$ is one of the Riemann minimal examples $R_{{\lambda} }$ obtained in Section~\ref{riemann}. For the moment, we will content ourselves with finding a simply connected domain of $M_{\sigma }$ bordered by symmetry lines (planar geodesics or straight lines). The reason for this is that the package \textsf{Mathematica} works with parameterizations defined in domains of the plane, which, once represented graphically, can be replicated in space by means of the Schwarz reflection principles; we will call such a simply connected domain of $M_{\sigma }$ a {\it fundamental piece.} Having in mind how the isometries $S_2$ and $S_3$ act on the $z$-complex plane, it is clear that we can reduce ourselves to representing graphically the domain of $M_{\sigma }$ that corresponds to moving $z$ in the upper half-plane. Using the symmetry $S_1$ we can reduce this domain even further, to the set $ \left\{ z \in \mathbb{C} \ | \ \left| z-\frac{1-\sigma}{2} \right| \leq \frac{1+\sigma}{2}, \; \mbox{Im} (z) \geq 0 \right\} $. As the point $(z,w)=(0,0)$ corresponds to an end of the minimal surface $M_{\sigma }$, we will remove a small neighborhood of the origin in the $z$-plane. In this way we get a planar domain $\Omega_\sigma$ as in Figure~\ref{fig:omega}.
\begin{figure} \begin{center} \includegraphics[width=0.5\textwidth]{omega.pdf} \caption{The domain $\Omega_\sigma$.} \label{fig:omega} \end{center} \end{figure} The above arguments lead to the conformal parameterization $X^{\sigma}=(x_1^{\sigma},x_2^{\sigma},x_3^{\sigma})\colon \Omega_\sigma \to \mathbb{R} ^3$ given by \begin{eqnarray*} x_1^{\sigma}(z) & = & \mbox{Re} \left( \int_1^z \frac 12 \left( \frac{\sqrt{\sigma}}{u}-\frac{u}{\sqrt{\sigma}} \right) \frac{d \, u}{\sqrt{u (u-1) (u+\sigma)}} \right) ,\\ x_2^{\sigma}(z) & = & \mbox{Re} \left( \int_1^z \frac{i}2 \left( \frac{\sqrt{\sigma}}{u}+\frac{u}{\sqrt{\sigma}} \right) \frac{d \, u}{\sqrt{u (u-1) (u+\sigma)}} \right) ,\\ x_3^{\sigma}(z) & = & \mbox{Re} \left( \int_1^z \frac{d \, u} {\sqrt{u (u-1) (u+\sigma)}} \right). \end{eqnarray*} The following properties are easy to check from the symmetries of the Weierstrass data: \begin{enumerate}[(P1)] \item The image through $X^{\sigma}$ of the boundary segment $[-\sigma,0]\cap \partial \Omega_\sigma $ corresponds to a planar geodesic of reflective symmetry of $M_{\sigma }$, contained in the plane $\{x_2=0\}$. \item The image through $X^{\sigma}$ of $[0,1]\cap \partial \Omega_\sigma$ is a straight line segment contained in the surface and parallel to the $x_2$-axis. \item The image through $X^{\sigma}$ of the outer half-circle in $\partial \Omega _{\sigma }$ is a curve in $M_{\sigma }$ which is invariant under the $180^\circ$-rotation around a straight line parallel to the $x_2$-axis, that passes through the fixed point $X^{\sigma}(i\sqrt{\sigma})$ of $S_1$. \end{enumerate} \begin{figure} \begin{center} \includegraphics[width=7.5cm]{pieza-1.pdf} \caption{The fundamental piece for $\sigma=1$.} \label{fig:pieza-1} \end{center} \end{figure} After reflecting the fundamental piece $X^{\sigma }(\Omega _{\sigma })$ across its boundary, we will obtain the complete minimal surface $M_{\sigma }\subset \mathbb{R} ^3$. 
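As a quick consistency check of these formulas, the three integrands $\phi _1^{\sigma },\phi _2^{\sigma },\phi _3^{\sigma }$ behind $X^{\sigma }$ must satisfy $(\phi _1^{\sigma })^2+(\phi _2^{\sigma })^2+(\phi _3^{\sigma })^2=0$ (conformality) without vanishing simultaneously (regularity), see Section~\ref{subsec:weierstrass}. A minimal numerical sketch in Python, with the arbitrary sample value $\sigma =2$:

```python
import cmath, random

SIGMA = 2.0
RS = SIGMA ** 0.5

def phi(z):
    """The three Weierstrass integrands of X^sigma at a regular point z,
    up to the common factor dz, with w = sqrt(z (z-1) (z+SIGMA))."""
    w = cmath.sqrt(z * (z - 1.0) * (z + SIGMA))
    return (0.5 * (RS / z - z / RS) / w,
            0.5j * (RS / z + z / RS) / w,
            1.0 / w)

random.seed(0)
for _ in range(50):
    z = complex(random.uniform(-3, 3), random.uniform(0.1, 3))
    p1, p2, p3 = phi(z)
    assert abs(p1 * p1 + p2 * p2 + p3 * p3) < 1e-9   # conformality
    assert abs(p1) + abs(p2) + abs(p3) > 0           # regularity
```

Indeed, $\phi _1^2+\phi _2^2=-\phi _3^2$ follows algebraically from $\frac14\left[ (g^{-1}-g)^2-(g^{-1}+g)^2\right] =-1$.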
\subsection{Relationships between $M_\sigma$ and $R_{\lambda}$: the Shiffman function.} \label{subsec:shiffman} Each of the minimal surfaces $M_\sigma\subset \mathbb{R}^3$, $\sigma >0$, constructed in the last section is topologically a cylinder punctured in an infinite discrete set of points, and $M_{\sigma }$ has no points with horizontal tangent plane. For a minimal surface satisfying these conditions, M. Shiffman introduced in 1956 a function that expresses the variation of the curvature of planar sections of the surface. More precisely, around a point $p$ of a minimal surface $M$ with nonhorizontal tangent plane, one can always pick a local conformal coordinate $\xi=x+i\, y$ so that the height differential is $\phi_3=d \, \xi$. The horizontal level curves $\{ x_3=c\} \cap M$ near $p$ are then parameterized as $\xi_c(y)=c+i\, y$. If we denote by $\kappa_c$ the curvature of this level section as a planar curve, then one can check that \[ \kappa_c (y)=\left[ \frac{|g|}{1+|g|^2} \mbox{ Re} \left( \frac{g'}{g} \right) \right] |_ {\xi=\xi_c(y)}, \] where $g$ is the Gauss map of $M$ in this local coordinate, see Section~\ref{subsec:weierstrass}. The Shiffman function is defined as \[ S_M=\Lambda \frac{\partial \kappa_c}{\partial y}, \] where $\Lambda >0$ is the smooth function such that the induced metric on $M$ is $ds^2=\Lambda^2 |d \xi |^2$. In particular, the vanishing of the Shiffman function is equivalent to the fact that $M$ is foliated by arcs of circles or straight lines in horizontal planes. Therefore, a way to show that the surface $M_{\sigma }$ defined in the last section coincides with one of the Riemann minimal examples $R_{{\lambda} }$ is to verify that its Shiffman function vanishes identically.
A direct computation shows that \begin{equation} \label{eq:Shiffman} S_M=\mbox{Im} \left[ \frac{3}{2} \left( \frac{g'}{g} \right)^2 - \frac{g''}{g} - \frac{1}{1+|g|^2} \left( \frac{g'}{g} \right)^2 \right]. \end{equation} For the surface $M_{\sigma }$, we have $g(z,w)=g^\sigma(z,w)=\tfrac{z}{\sqrt{\sigma}}$ and $d \xi=\phi_3^\sigma=\frac{dz}{w}= \sqrt{\sigma} \frac{g' \, d \xi}{w}$; hence $w=\sqrt{\sigma }g'$ and thus $(g')^2= g\,\left( \sqrt{\sigma} + g \right) \, \left( \sqrt{\sigma} \,g-1 \right)$. Taking derivatives of this expression we obtain $g''= -\frac{\sqrt{\sigma} }{2} + \left( \sigma -1 \right) \,g + \frac{3\,\sqrt{\sigma} \,{g}^2}{2} . $ Plugging this in (\ref{eq:Shiffman}) we get \[ S_M=\mbox{Im} \left[\frac{\left( \sigma -1 \right) \, \left( {|g|}^2-1 \right) - 4\,\sqrt{\sigma} \,\mbox{Re}(g)}{2\, \left( 1 + {|g|}^2 \right) } \right]=0, \] which implies that $M_\sigma$ is one of the Riemann minimal examples $R_{\lambda}$, but which one? We must look for an expression of $\sigma >0$ in terms of ${\lambda} \in \mathbb{R} $ (or vice versa), so that the surfaces $R_{\lambda}$ and $M_\sigma$ are congruent. Since $g^\sigma(1,0)=1/\sqrt{\sigma}$ and $g^\sigma(-\sigma,0)=-\sqrt{\sigma}$, and we know that $X^\sigma(1,0)$, $X^\sigma(-\sigma,0)$ are points where straight lines contained in $M_{\sigma }$ intersect the vertical plane of symmetry, the values of the stereographically projected Gauss map of the surface at these points will help us to find $\sigma({\lambda} )$. Recall that with the parameterization $X_{\lambda}(q,v)$ of $R_{{\lambda} }$ given in equation (\ref{eq:Riemann1}), these points are given by taking $v=0$ and $q\to \infty$. Hence we must compute the limit as $q\to \infty $ of $\frac{N_1(q,0)}{1-N_3(q,0)}$, where $N=(N_1,N_2,N_3)$ is the Gauss map associated to $X_{\lambda}$. This limit is $\frac{2}{{\lambda} -\sqrt{{\lambda}^2+4}}<0$, so we must impose that it coincides with $-\sqrt{\sigma}$.
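Before identifying $\sigma ({\lambda} )$ explicitly, note that the vanishing of the Shiffman function obtained above is easy to confirm numerically. The following Python sketch evaluates the bracket in (\ref{eq:Shiffman}) at random values of the Gauss map, using the identities for $(g')^2$ and $g''$ just derived:

```python
import random

def shiffman(g, sigma):
    """Evaluate the Shiffman function at a point of M_sigma where the Gauss
    map takes the value g, using (g')^2 = g (sqrt(sigma)+g)(sqrt(sigma) g - 1)
    and g'' = -sqrt(sigma)/2 + (sigma-1) g + (3/2) sqrt(sigma) g^2."""
    rs = sigma ** 0.5
    gp2 = g * (rs + g) * (rs * g - 1.0)            # (g')^2
    gpp = -rs / 2.0 + (sigma - 1.0) * g + 1.5 * rs * g * g
    q = gp2 / (g * g)                              # (g'/g)^2
    return (1.5 * q - gpp / g - q / (1.0 + abs(g) ** 2)).imag

random.seed(7)
for _ in range(200):
    g = complex(random.uniform(-3, 3), random.uniform(-3, 3))
    sigma = random.uniform(0.2, 5.0)
    assert abs(shiffman(g, sigma)) < 1e-8
```

The imaginary part vanishes identically, in agreement with the closed formula displayed above.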
In other words, \[ \sigma=\left(\frac{2}{\sqrt{{\lambda}^2+4}-{\lambda}} \right)^2. \] \subsubsection{Parameterizing the surface with {\sf Mathematica}.} \label{subsec:parametrizando} When using \textsf{Mathematica}, we must keep in mind that the branches of the multivalued functions that appear in the integration of the Weierstrass representation have to be chosen continuously. These choices of branches do not always coincide with the ones made by the program, but we do not want to bother the reader with these technicalities. Thus, we directly write the three coordinate functions of the parameterization (already integrated) as follows: \[ \begin{array}{lll} \tt x1[\sigma_{-}][z_{-}]&:=&\frac{1}{\sqrt{\sigma}} \left( \left( \sqrt{\frac{-1+z}z} \sqrt{z+\sigma}+ \right. \right. \\ & & \tt \frac 1{\sqrt{1+\sigma}} \left(2 \left( (-1-\sigma) EllipticE \left[ArcSin\left[\sqrt{1+\frac z{\sigma}}\right],\frac{\sigma}{1+\sigma}\right]+ \right. \right.\\ & & \tt \left. \left. \left. \left. EllipticF\left[ArcSin\left[\sqrt{1+\frac z{\sigma}}\right],\frac{\sigma}{1+\sigma}\right] \right) \right)\right) \right) \\ \rule{0cm}{.6cm} \tt x2[\sigma_{-}][z_{-}]&:=& - {\sqrt{\frac{-1 + z}{-\sigma \,z}}}\,{\sqrt{\sigma + z}} \\ \rule{0cm}{.6cm} \tt x3[\sigma_{-}][z_{-}]&:=&\tt \frac{-2}{{\sqrt{\sigma}}} EllipticF \left[ArcSin \left[\frac{{\sqrt{\sigma}}}{{\sqrt{\sigma + z}}}\right],\frac{1 + \sigma}{\sigma}\right] \end{array} \] We will translate the surface so that the point $v_0^\sigma=X^\sigma(1)$ equals the origin, defining the parameterization $\psi^\sigma(z)=X^\sigma(z)-v_0^\sigma$: \[ \begin{array}{lll} \tt v0[\sigma_{-}]&:=& \tt \left\{ -2\,Im \left[\frac{1}{\sqrt{-\left( 1 +\sigma \right) \,\sigma}} \left(\left( -1 -\sigma \right) \, EllipticE \left[ ArcSin \left[\sqrt{\frac{1 + \sigma}{\sigma}} \right] ,\frac{\sigma}{1 + \sigma}\right] + \right. \right. \right. \\ & & \tt \left. \left.
EllipticF \left[ ArcSin \left[\sqrt{\frac{1 + \sigma}{\sigma}} \right],\frac{\sigma}{1 + \sigma}\right] \right)\right],0, \\ \tt & & \tt \left. -2\,{Re}\left[\frac{1}{\sqrt{\sigma}} EllipticF \left[ ArcSin \left[{\sqrt{\frac{\sigma}{1 + \sigma}}}\right],\frac{1 + {\sigma}}{{\sigma}}\right] \right] \right\} \\ \rule{0cm}{.6cm} \tt \psi[\sigma_{-}][z_{-}]&:=& \tt Re[\{x1[\sigma][z],x2[\sigma][z],x3[\sigma][z]\}]-v0[\sigma] \end{array} \] In order to simplify our parameterization, we will use a M\"{o}bius transformation that maps the half-annulus $\{ e \leq |z| \leq 1, \; \mbox{Im}(z) \geq 0 \} $, for a certain $e\in (0,1)$, onto the domain $\Omega_\sigma$; we will also use polar coordinates in the half-annulus: \[ \begin{array}{lll} \tt f[z_{-}, a_{-},e_{-}] &:=& \tt (-a^2 (1 + e^2) (-1 + z) - 2 a (-3 + e^2) z - (1 + e^2) (1 + z) + \\ & & \tt Sqrt[4 (-1 + a)^2 e^2 + (1 + a)^2 (-1 + e^2)^2] (1 + a (-1 + z) + z)) \\ & & \tt (2 \, Sqrt[4 (-1 + a)^2 e^2 + (1 + a)^2 (-1 + e^2)^2] \\ & & \tt -2 ((1 + a) (-1 + e^2) - 2 (-1 + a) z)) \end{array} \] The graphics representation of the fundamental piece $\Omega _{\sigma }$ through the immersion $\psi ^{\sigma }$ is given by the following expression; observe that we leave as variables the parameter $e\in (0,1)$, which measures how much of the end of $\Omega _{\sigma }$ is represented (the smaller the value of $e$, the larger the size of the represented end), and the parameter $\sigma $ of the Riemann minimal example. \[ \begin{array}{lll} \tt r1[\sigma_{-}][e_{-}] &:=& \tt ParametricPlot3D[ \psi[\sigma][f[r Exp[I \;Pi \;t], \sigma, e]], \{r, e, 1 \}, \{t, 0, 1\}, \\ & & \tt PlotPoints -> \{40, 60 \}, Axes -> False]; \end{array} \] We are now ready to render the fundamental piece that appears in Figure~\ref{fig:pieza-1}, which corresponds to executing the command $\tt r1[2][0.1]$ (i.e., $\sigma =2$, $e=0.1$).
We type: \[ \tt p1=r1[2][0.1] \] \par \vspace{-.4cm} \begin{center} \includegraphics[width=0.3\textwidth]{nor1.png} \end{center} In the last figure, we have also represented a straight line parallel to the $x_2$-axis, that intersects the surface orthogonally at a point in the boundary of the fundamental piece; the command to render both graphics objects simultaneously is \[ \tt p2=Show[p1,line] \] where \[ \begin{array}{lll} \tt line&=& \tt ParametricPlot3D[\psi [2][Sqrt[2]I]+t\{ 0,1,0\} ,\{ t,-2,2\} ,\\ & & \tt Axes->False,\ Boxed->False,\ \\ & & \tt PlotRange->All,\ PlotStyle->Thickness[0.005]]; \end{array} \] produces the line $\psi ^{\sigma }(\sqrt{2}i)+\mbox{Span}(0,1,0)$. Next we will extend the fundamental piece by $180^\circ$-rotation around this line, which induces the holomorphic involution $S_1$ explained in Section~\ref{sec4.1.1}. To define this transformation of the graphic, we will use the command ${\tt GeometricTransformation[x,\{ m,w\} ]}$ that applies to a graphics object {\tt x} the affine transformation $x\mapsto mx+w$ (here $m$ is a real $3\times 3$ matrix and $w\in \mathbb{R}^3$ a translation vector). In our case, \[ m=\left( \begin{array}{ccc} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1\end{array}\right) ,\qquad w=(2c_1,0,2c_3), \] where $c=\psi ^{\sigma}(\sqrt{2}i)$. We type: \[ \begin{array}{lll} \tt p3&=& \tt Graphics3D[GeometricTransformation[p1[[1]], \\ & & \tt \{ \{ \{ -1,0,0\} ,\{ 0,1,0\} ,\{ 0,0,-1\} \} ,\{ 2c1,0,2c3\} \} ]]; \end{array} \] after having defined {\tt c1} and {\tt c3} as the first and third coordinates of $\psi ^{\sigma }(\sqrt{2}i)$. 
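As a sanity check that this affine transformation is the intended symmetry: with $m=\mathrm{diag}(-1,1,-1)$ and $w=(2c_1,0,2c_3)$, the map $x\mapsto mx+w$ fixes the line $\{ (c_1,t,c_3)\ |\ t\in \mathbb{R} \} $ pointwise and is an involution, i.e., it is the $180^\circ$-rotation around that line. A small Python sketch (the values of $c_1,c_3$ below are arbitrary stand-ins for the coordinates of $\psi ^{\sigma }(\sqrt{2}i)$):

```python
import random

C1, C3 = 0.37, -1.25          # arbitrary stand-ins for c1 and c3

def rot(p):
    """x -> m x + w with m = diag(-1, 1, -1) and w = (2 C1, 0, 2 C3)."""
    x, y, z = p
    return (-x + 2 * C1, y, -z + 2 * C3)

random.seed(0)
for _ in range(100):
    p = tuple(random.uniform(-5, 5) for _ in range(3))
    q = rot(rot(p))
    assert max(abs(a - b) for a, b in zip(p, q)) < 1e-12   # involution
    t = random.uniform(-5, 5)
    assert rot((C1, t, C3)) == (C1, t, C3)                 # fixes the axis
```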
In order to render the pieces {\tt p1} and {\tt p3} at the same time, as well as the $180^\circ$-rotation axis, we type: \[ \tt p4=Show[p1,p3,line] \] \begin{center} \includegraphics[width=0.4\textwidth]{nor2.png} \end{center} The next step consists of reflecting the last figure in the plane $\{ x_2=0\} $ (which is the plane orthogonal to the line segments contained in the boundary of the last piece and to the orientation-preserving $180^\circ$-rotation axis). \[ \begin{array}{lll} \tt p5&=& \tt Graphics3D[GeometricTransformation[p4[[1]], \\ & & \tt \{ \{ \{ 1,0,0\} ,\{ 0,-1,0\} ,\{ 0,0,1\} \} \} ]]; \\ \tt p6&=& \tt Show[p4,p5,line] \end{array} \] \begin{center} \includegraphics[width=0.4\textwidth]{nor3.png} \end{center} Next we rotate the last piece by $180^\circ$ around the $x_2$-axis, which is contained in the boundary of the surface. \[ \begin{array}{lll} \tt p7&=& \tt Graphics3D[GeometricTransformation[p6[[1]], \\ & & \tt \{ \{ \{ -1,0,0\} ,\{ 0,1,0\} ,\{ 0,0,-1\} \} \} ]]; \\ \tt p8&=& \tt Show[p6,p7] \end{array} \] \begin{center} \includegraphics[width=0.55\textwidth]{nor4.png} \end{center} $\tt p8$ represents a fundamental domain of the Riemann minimal example. The whole surface can now be obtained by translating the graphics object $\tt p8$ by multiples of the vector $2 \, t_0= 2\psi^{\sigma }(-\sigma)$. We define this translation vector $t_0$: \[ \tt t0=\psi[2][-1.99999] \] The reason why we have evaluated at a point slightly to the right of $-2$ is the aforementioned fact that we must use a continuous branch of the elliptic functions that appear in the expression of $\psi$. The numerical error that we are making is insignificant at normal screen resolution.
After this translation, we type: \[ \begin{array}{lll} \tt p9&=& \tt Graphics3D[GeometricTransformation[p8[[1]], \\ & & \tt \{ \{ \{ 1,0,0\} ,\{ 0,1,0\} ,\{ 0,0,1\} \} ,2\ t0\} ]]; \\ \tt p10&=& \tt Show[p8,p9] \end{array} \] \begin{center} \includegraphics[width=0.5\textwidth]{nor5.png} \end{center} It is desirable to have a better understanding of the surface ``at infinity''. This can be done by taking a smaller value of the parameter {\tt e} and repeating the whole process. Figure~\ref{R-image} represents the final stage {\tt p10} in the case {\tt e=0.02}. \begin{figure}[H] \begin{center} \includegraphics[width=0.6\textwidth]{nor02.png} \end{center} \caption{An image of one of the Riemann minimal examples.} \label{R-image} \end{figure} Figure~\ref{R-image} indicates that the surface becomes asymptotic to an infinite family of parallel (actually horizontal) planes, equally spaced. This justifies the term {\em planar ends.} \begin{remark} {\rm For very large or very small values of the parameter $\sigma $, one can find imperfections in the graphics, especially around $\psi^\sigma(1)$ or $\psi^\sigma(-\sigma)$. This is due to the fact that these values of $\sigma $ produce a non-homogeneous distribution of points in the mesh that the program computes when rendering the figure. This issue can be solved by substituting $\tt Exp[I \;Pi \;t]$ by $\tt Exp[I \;Pi \;t^{n}]$ in the definition of $\tt r1[\sigma][e]$, with $\tt n$ large if $\sigma$ is close to $0$, or with $\tt n$ close to zero if $\sigma$ is large. } \end{remark} \section{Uniqueness of the properly embedded minimal planar domains in $\mathbb{R}^3$.} \label{MPR} As mentioned in Section~\ref{riemann}, each Riemann minimal example $R_{{\lambda} }$ is a complete (in fact, proper) embedded minimal surface in $\mathbb{R}^3$ with the topology of a cylinder punctured in an infinite discrete set of points, which is invariant under a translation.
If we view this cylinder as a twice punctured sphere, then one deduces that $R_{{\lambda} }$ is topologically (even conformally) equivalent to a sphere minus an infinite set of points that accumulate only at two distinct points, the so-called limit ends of $R_{{\lambda} }$. In particular, $R_{{\lambda} }$ is a planar domain. One longstanding open problem in minimal surface theory has been the following: \begin{quote} {\it Problem: Classify all properly embedded minimal planar domains in $\mathbb{R}^3$. } \end{quote} Up to scaling and rigid motions, the family ${\cal P}$ of all properly embedded minimal planar domains in $\mathbb{R}^3$ comprises the plane, the catenoid, the helicoid and the 1-parameter family of Riemann minimal examples. The proof that this list is complete is a joint effort described in a series of papers by different researchers. In this section we will give an outline of this classification. \subsection{The case of finitely many ends, more than one.} \label{sec5.1} One starts by considering a surface $M\in {\cal P}$ with $k$ ends, $2\leq k<\infty $. Even without assuming genus zero, such surfaces of finite genus were proven to have finite total curvature by Collin~\cite{col1}, a case in which the asymptotic geometry is particularly well-understood: the finitely many ends are asymptotic to planes or half-catenoids, the conformal structure of the surface is that of a compact Riemann surface minus a finite number of points, and the Gauss map of the surface extends meromorphically to this compactification (Huber~\cite{hu1}, Osserman~\cite{os1}). The asymptotic behavior of the ends and the embeddedness of $M$ force the values of the extended Gauss map at the ends to be contained in a set of two antipodal points on the sphere. After a rotation in $\mathbb{R}^3$ if necessary, these values of the extension of the Gauss map of $M$ at the ends can be assumed to be $(0,0,\pm 1)$.
L\'opez and Ros~\cite{lor1} found the following argument to conclude that $M$ is either a plane or a catenoid. The genus zero assumption plays a crucial role in the well-posedness of the deformation ${\lambda} >0\mapsto M_{{\lambda} }$ of $M$ by minimal surfaces $M_{{\lambda} } \subset \mathbb{R}^3$ with the same conformal structure and the same height differential as $M$, but with the meromorphic Gauss map scaled by ${\lambda} $. A clever application of the maximum principle for minimal surfaces along this deformation implies that all surfaces $M_{{\lambda} }$ are embedded and that if $M$ is not a plane, then $M$ has no points with horizontal tangent planes and no planar ends (because either of these cases produces self-intersections in $M_{{\lambda} }$ for sufficiently large or small values of ${\lambda} >0$); in this situation, the third coordinate function of $M$ is proper without critical points, and an application of Morse theory implies that $M$ has the topology of an annulus. From here it is not difficult to prove that $M$ is a catenoid. \subsection{The case of just one end.} \label{sec5.2} The next case to consider is when $M\in {\cal P}$ has exactly one end, in particular $M$ is topologically a plane, and the goal is to show that $M$ is congruent to a plane or to a helicoid. This case was solved by Meeks and Rosenberg~\cite{mr8}, by an impressive application of a series of powerful new tools in minimal surface theory: the study of minimal laminations, the so-called Colding-Minicozzi theory and some progress in the understanding of the conformal structure of complete minimal surfaces.
The first two of these tools study the possible limits of a sequence of embedded minimal surfaces under lack of either uniform local area bounds (minimal laminations) or uniform local curvature bounds (Colding-Minicozzi theory), or even both issues happening simultaneously; this is in contrast to the classical situation for describing such limits, which requires both uniform local area and curvature bounds to obtain a classical limit minimal surface. The study of minimal laminations, carried out in~\cite{mr8} by Meeks and Rosenberg, allows one to relate completeness of embedded minimal surfaces in $\mathbb{R}^3$ to their properness, which is a stronger condition. For instance, in~\cite{mr8} Meeks and Rosenberg proved that if $M\subset \mathbb{R}^3$ is a connected, complete embedded minimal surface with finite topology and Gaussian curvature bounded on compact subdomains of $\mathbb{R}^3$, then $M$ is proper; this properness conclusion was later generalized to the case of finite genus by Meeks, P\'erez and Ros in~\cite{mpr3}, and Colding and Minicozzi~\cite{cm35} proved it even in the stronger case where one drops the local boundedness hypothesis for the Gaussian curvature. The theory of minimal laminations has led to other interesting results in its own right, see e.g., \cite{Kh1,kl1,me25,me30,mpr10,mpr11,mpr18,mr13}.
Regarding Colding-Minicozzi theory in its relation to the classification by Meeks and Rosenberg of the plane and the helicoid as the only simply connected elements in ${\cal P}$, its main result, called the {\it Limit Lamination Theorem for Disks,} describes the limit of (a subsequence of) a sequence of compact, embedded minimal disks $M_n$ whose boundaries $\partial M_n$ lie in the boundary spheres of Euclidean balls $\mathbb{B} (R_n)$ centered at the origin, with radii $R_n\to \infty $, provided that the Gaussian curvatures of the $M_n$ blow up at some sequence of points in $M_n\cap \mathbb{B} (1)$. In this situation, Colding and Minicozzi proved that the limit is a foliation ${\cal F}$ of $\mathbb{R}^3$ by parallel planes, and that the convergence of the $M_n$ to ${\cal F}$ is of class $C^{{\alpha} }$, ${\alpha} \in (0,1)$, away from a Lipschitz curve $S$ (called the singular set of convergence of the $M_n$ to ${\cal F}$) that intersects each of the planar leaves of ${\cal F}$ transversely exactly once; moreover, arbitrarily close to every point of $S$, the Gaussian curvature of the $M_n$ also blows up as $n\to \infty $, see~\cite{cm23} for further details. There is another result of fundamental importance in Colding-Minicozzi theory, used both to prove the Limit Lamination Theorem for Disks and to demonstrate the results in~\cite{cm35,mpr3,mr8} mentioned in the last paragraph: the {\it one-sided curvature estimate}~\cite{cm23}, a scale-invariant bound for the Gaussian curvature of any embedded minimal disk in a half-space. With all these ingredients in mind, we next give a rough sketch of the proof by Meeks and Rosenberg of the uniqueness of the helicoid as the only simply connected, properly embedded, nonplanar minimal surface in $\mathbb{R}^3$. Let $M\in {\cal P}$ be a simply connected surface. Consider any sequence of positive numbers $\{ \lambda_n \}_{n}$ decaying to zero, and let $\lambda_n M$ be the surface $M$ scaled by $\lambda_n$.
By the Limit Lamination Theorem for Disks, a subsequence of these surfaces converges on compact subsets of $\mathbb{R}^3$ to a minimal foliation ${\cal F}$ of $\mathbb{R}^3$ by parallel planes, with singular set of convergence $S$ being a Lipschitz curve that can be parameterized by the height over the planes in ${\cal F} $. An application of Colding-Minicozzi theory ensures that the limit foliation ${\cal F}$ is independent of the sequence $\{\lambda_n\}_n$. After a rotation of $M$ and replacement of the ${\lambda} _nM$ by a subsequence, one can suppose that the ${\lambda} _nM$ converge to the foliation ${\cal F}$ of $\mathbb{R}^3$ by horizontal planes, outside of the singular set of convergence given by a Lipschitz curve $S$ parameterized by its $x_3$-coordinate. A more careful analysis of the convergence of the ${\lambda} _nM$ to ${\cal F}$ also allows one to show that $M$ intersects transversely each of the planes in ${\cal F}$. This implies that the Gauss map of $M$ does not take vertical values, so after composing with the stereographic projection we can write this Gauss map $g \colon M \to \mathbb{C} \cup \{\infty \}$ as \begin{equation} \label{eq:ghelic} g(z)=e^{f(z)} \end{equation} for some holomorphic function $f\colon M \to \mathbb{C}$, and the height differential $\phi _3$ of $M$ has no zeros or poles. The next step in the proof is to check that the conformal structure of $M$ is $\mathbb{C} $; to see this, first observe that the nonexistence of points in $M$ with vertical normal vector implies that the intrinsic gradient of the third coordinate function $x_3\colon M\to \mathbb{R} $ has no zeros on $M$.
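This last observation can be verified with a short computation from the standard Weierstrass formulas: the induced metric is $ds=\frac{1}{2}\left( |g|+|g|^{-1}\right) |\phi _3|$, so
\[
|\nabla x_3| \;=\; \frac{2}{|g|+|g|^{-1}} \;=\; \operatorname{sech} \left( \mbox{Re}\, f\right) \;>\;0,
\]
since $g=e^f$ omits the values $0$ and $\infty $.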
In a delicate argument that uses both the above Colding-Minicozzi picture for limits under shrinkings of $M$ and a finiteness result for the number of components of a minimal graph over a possibly disconnected, proper domain in $\mathbb{R}^2$ with zero boundary values, Meeks and Rosenberg proved that none of the integral curves of $\nabla x_3$ is asymptotic to a plane in ${\cal F}$, and that every such horizontal plane intersects $M$ transversely in a single proper arc. This is enough to use the conjugate harmonic function $x_3^*$ of $x_3$ (which is well-defined on $M$ as the surface is simply connected) to show that $x_3+ix_3^*\colon M\to \mathbb{C} $ is a conformal diffeomorphism. Once one knows that $M$ is conformally $\mathbb{C} $, one can conformally reparameterize $M$ so that \[ \phi _3=dx_3 + i\, dx_3^* =dz; \] in particular, the third coordinate $x_3 \colon \mathbb{C} \to \mathbb{R}$ is $x_3(z)=\mbox{Re} (z)$. To finish the proof, it only remains to determine the Gauss map $g$ of $M$, of which we now have the description (\ref{eq:ghelic}) with $f\colon \mathbb{C} \to \mathbb{C} $ entire. If the holomorphic function $f(z)$ is a linear function of the form $az+b$, then one deduces that $M$ is an associate surface\footnote{The family of {\it associate surfaces} of a simply connected minimal surface with Weierstrass data $(g,\phi _3)$ are those with the same Gauss map and height differential $e^{i{\theta} }\phi _3$, ${\theta} \in [0,2\pi )$. In particular, the case ${\theta} =\pi /2$ is the conjugate surface. This notion can be generalized to non-simply connected surfaces, although in that case the associate surfaces may have periods.} to the helicoid; but none of the nontrivial associate surfaces to the helicoid are injective as mappings, which implies that $M$ must be the helicoid itself when $f(z)$ is linear. Thus, the proof reduces to proving that $f(z)$ is linear.
The explicit expression for the Gaussian curvature $K$ of $M$ in terms of $g,\phi _3$ is \begin{equation} \label{eq:K} K=-\left( \frac{4\left| dg/g\right| }{(|g|+ |g|^{-1})^2|\phi _3|}\right) ^2. \end{equation} Plugging (\ref{eq:ghelic}) and $\phi _3=dz$ into this expression, an application of Picard's Theorem to $f$ shows that $f(z)$ is linear if and only if $M$ has bounded Gaussian curvature. That $M$ has bounded curvature can be achieved by a clever blow-up argument, thereby finishing the sketch of the proof. For further details, see~\cite{mr8}. \subsection{Back to the Riemann minimal examples: the case of infinitely many ends.} To finish our outline of the solution of the classification problem stated at the beginning of Section~\ref{MPR}, we must explain how to prove that the Riemann minimal examples are the only properly embedded planar domains in $\mathbb{R}^3$ with infinitely many ends. Let $M\in {\cal P}$ be a surface with infinitely many ends. First we analyze the structure of the space ${\cal E}(M)$ of ends of $M$. ${\cal E}(M)$ is the quotient ${\mathcal A}/_\sim $ of the set \[ {\mathcal A}=\{ {\alpha} \colon [0,\infty )\to M\ | \ {\alpha} \mbox{ is a proper arc}\} \] (observe that ${\mathcal A}$ is nonempty as $M$ is not compact) under the following equivalence relation: given ${\alpha}_1, {\alpha}_2\in {\mathcal A}$, ${\alpha} _1\sim {\alpha} _2$ if for every compact set $C \subset M$, there exists $t_C\in [0,\infty )$ such that ${\alpha} _1(t),{\alpha} _2(t)$ lie in the same component of $M-C$ for all $t\geq t_C$. Each equivalence class in ${\mathcal E}(M)$ is called a {\it topological end} of $M$. If $e\in {\mathcal E}(M)$, ${\alpha} \in e$ is a representative proper arc and $\Omega \subset M$ is a proper subdomain with compact boundary such that $\alpha \subset \Omega $, then we say that the domain $\Omega $ {\it represents} the end $e$.
The space ${\mathcal E}(M)$ has a natural topology, which is defined in terms of a basis of open sets: for each proper domain $\Omega \subset M$ with compact boundary, we define the basis open set $B(\Omega ) \subset {\mathcal E} (M)$ to be the set of equivalence classes in ${\mathcal E}(M)$ which have representatives contained in $\Omega $. With this topology, ${\mathcal E}(M)$ is a totally disconnected compact Hausdorff space which embeds topologically as a subspace of $[0,1]\subset \mathbb{R} $ (see e.g., pages 288-289 of \cite{mpe1} for a proof of this embedding result for ${\mathcal E}(M)$). In the particular case that $M$ is a properly embedded minimal surface in $\mathbb{R}^3$ with more than one end, a fundamental result is that ${\mathcal E}(M)$ admits a geometric ordering by relative heights over a plane called the {\it limit tangent plane at infinity} of $M$. To define this reference plane, Callahan, Hoffman and Meeks~\cite{chm3} showed that in one of the closed complements of $M$ in $\mathbb{R}^3$, there exists a noncompact, properly embedded minimal surface $\Sigma $ with compact boundary and finite total curvature. The ends of $\Sigma $ are then of catenoidal or planar type, and the embeddedness of $\Sigma $ forces its ends to have parallel normal vectors at infinity. The limit tangent plane at infinity of $M$ is the plane in $\mathbb{R}^3$ passing through the origin whose normal vector equals (up to sign) the limiting normal vector at the ends of $\Sigma $. It can be proved that such a plane does not depend on the choice of the finite total curvature minimal surface $\Sigma \subset \mathbb{R}^3-M$~\cite{chm3}. With this notion in hand, the ordering theorem is stated as follows. \begin{theorem} {\bf (Ordering Theorem, Frohman, Meeks~\cite{fme2})} \label{thordering} Let $M\subset \mathbb{R}^3$ be a properly embedded minimal surface with more than one end and horizontal limit tangent plane at infinity.
Then, the space ${\mathcal E}(M)$ of ends of $M$ is linearly ordered geometrically by the relative heights of the ends over the $(x_1,x_2)$-plane, and embeds topologically in $[0,1]$ in an order-preserving way. Furthermore, this ordering has a topological nature in the following sense: if $M$ is properly isotopic to a properly embedded minimal surface $M'$ with horizontal limit tangent plane at infinity, then the associated ordering of the ends of $M'$ either agrees with or is opposite to the ordering coming from $M$. \end{theorem} Given a minimal surface $M\subset \mathbb{R}^3$ satisfying the hypotheses of Theorem~\ref{thordering}, we define the {\it top end} $e_T$ of $M$ as the unique maximal element in ${\mathcal E}(M)$ for the ordering given in this theorem (such an $e_T$ exists because ${\mathcal E}(M)\subset [0,1]$ is compact). Analogously, the {\it bottom end} $e_B$ of $M$ is the unique minimal element in ${\mathcal E}(M)$. If $e\in {\mathcal E}(M)$ is neither the top nor the bottom end of $M$, then it is called a {\it middle end} of $M$. There is another way of grouping the ends of such a surface $M$, into simple and limit ends; for the sake of simplicity, and as we are interested in discussing the classification of surfaces $M\in {\cal P}$, we will restrict in the sequel to the case of a surface $M$ of genus zero. Given $M\in {\cal P}$, an isolated point $e \in {\mathcal E} (M)$ is called a {\it simple end of $M$}; such an $e$ can be represented by a proper subdomain $\Omega \subset M$ with compact boundary which is homeomorphic to the annulus $\mathbb{S} ^1 \times [0,\infty)$. Because of this model, $e$ is also called an {\it annular end.} On the contrary, ends in ${\mathcal E} (M)$ which are not simple (i.e., which are limit points of ${\mathcal E}(M)\subset [0,1]$) are called {\it limit ends} of $M$.
In our situation of $M$ being a planar domain, its limit ends can be represented by proper subdomains $\Omega \subset M$ with compact boundary, genus zero and infinitely many ends. Since in this section $M$ is assumed to have infinitely many ends and ${\mathcal E}(M)$ is compact, $M$ must have at least one limit end. Each of the planar ends of a Riemann minimal example $R_{{\lambda} }$ is a simple annular middle end, and $R_{{\lambda} }$ has two limit ends corresponding to the limits of planar ends as the height function of $R_{{\lambda} }$ goes to $\infty $ (this is the top end of $R_{{\lambda} }$) or to $-\infty $ (bottom end). Thus, the middle ends of $R_{{\lambda} }$ correspond to simple ends, and its top and bottom ends are limit ends. Most of this behavior is in fact valid for any properly embedded minimal surface $M\subset \mathbb{R}^3$ with more than one end: \begin{theorem}[Collin, Kusner, Meeks, Rosenberg~\cite{ckmr1}] \label{thmckmr} Let $M\subset \mathbb{R}^3$ be a properly embedded minimal surface with more than one end and horizontal limit tangent plane at infinity. Then, any limit end of $M$ must be a top or bottom end of $M$. In particular, $M$ can have at most two limit ends, each middle end is simple and the number of ends of $M$ is countable. \end{theorem} In the sequel, we will assume that our surface $M\in {\cal P}$ has horizontal limit tangent plane at infinity. By Theorem~\ref{thmckmr}, $M$ has no middle limit ends, hence either it has exactly one limit end (which is then its top or its bottom end) or its top and bottom ends are its two limit ends, as in a Riemann minimal example. The next step in our description of the classification of surfaces in ${\cal P}$ is due to Meeks, P\'erez and Ros~\cite{mpr4}, who discarded the one limit end case through the following result.
\begin{theorem}[Meeks, P\'erez, Ros \cite{mpr4}] \label{thmno1limitend} If $M\subset \mathbb{R}^3$ is a properly embedded minimal surface with finite genus, then $M$ cannot have exactly one limit end. \end{theorem} The proof of Theorem~\ref{thmno1limitend} is by contradiction. One assumes that the set of ends of $M$, linearly ordered by increasing heights by the Ordering Theorem~\ref{thordering}, is ${\mathcal E}(M)= \{ e_1,e_2,\ldots ,e_{\infty }\} $ with the limit end of $M$ being its top end $e_{\infty }$. Each annular end of $M$ is a simple end and can be proven to be asymptotic to a graphical annular end $E_n$ of a vertical catenoid with negative logarithmic growth $a_n$ satisfying $a_1 \leq \ldots \leq a_n \leq \ldots < 0$ (Theorem~2 in Meeks, P\'erez and Ros~\cite{mpr3}). Then one studies the subsequential limits of homothetic shrinkings $\{ {\lambda} _nM\} _n$, where $\{ {\lambda} _n\} _n\subset \mathbb{R} ^+$ is any sequence of numbers decaying to zero; recall that this was also a crucial step in the proof of the uniqueness of the helicoid sketched in Section~\ref{sec5.2}. Nevertheless, the situation now is more delicate as the surfaces ${\lambda} _nM$ are not simply connected. Instead, it can be proved that the sequence $\{ {\lambda} _nM\} _n$ is {\it locally simply connected} in $\mathbb{R}^3-\{ \vec{0}\} $, in the sense that given any point $p\in \mathbb{R}^3-\{ \vec{0}\} $, there exists a number $r(p)\in (0,|p|)$ such that the open ball $\mathbb{B} (p,r(p))$ centered at $p$ with radius $r(p)$ intersects ${\lambda} _nM$ in compact disks whose boundaries lie on $\partial \mathbb{B} (p,r(p))$, for all $n\in \mathbb{N}$. This is a difficult technical part of the proof, where the Colding-Minicozzi theory again plays a crucial role. 
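In this and in subsequent rescaling arguments, the elementary transformation law of the Gaussian curvature under a homothety of ratio ${\lambda} >0$ is used without further comment:
\[
K_{{\lambda} M}({\lambda} p)={\lambda} ^{-2}K_M(p), \qquad p\in M,
\]
so uniform local curvature bounds for the rescaled surfaces ${\lambda} _nM$ encode curvature decay estimates for $M$ itself.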
Then one uses this locally simply connected property in $\mathbb{R}^3-\{ \vec{0}\} $ to show that the limits of subsequences of $\{ {\lambda} _nM\} _n$ consist of minimal laminations ${\mathcal L}$ of $H(*)=\{ x_3\geq 0\} -\{ \vec{0}\} \subset \mathbb{R}^3$ containing $\partial H(*)$ as a leaf, and that the singular set of convergence of the ${\lambda} _nM$ to ${\mathcal L}$ is empty; from here one deduces that \begin{quote} $(\star )$ The sequence $\{ |K_{{\lambda} _nM}|\} _n$ of absolute Gaussian curvature functions of the ${\lambda} _nM$ is locally bounded in $\mathbb{R}^3-\{ \vec{0}\} $. \end{quote} In particular, taking ${\lambda} _n=|p_n|^{-1}$ where $\{ p_n\} _n$ is any divergent sequence of points on $M$, $(\star )$ implies that the absolute Gaussian curvature of $M$ decays at least quadratically in terms of the distance $|p_n|$ to the origin. In this setting, the Quadratic Curvature Decay Theorem stated in Theorem~\ref{thm1introd} below implies that $M$ has finite total curvature; this is impossible in our situation with infinitely many ends, which finishes our sketch of the proof of Theorem~\ref{thmno1limitend}. \begin{theorem} {\bf (Quadratic Curvature Decay Theorem, \, Meeks, P\'erez, Ros~\cite{mpr10})} \label{thm1introd} Let $M\subset \mathbb{R}^3-\{ \vec{0}\} $ be an embedded minimal surface with compact boundary (possibly empty), which is complete outside the origin $\vec{0}$; i.e., all divergent paths of finite length on~$M$ limit to $\vec{0}$. Then, $M$ has quadratic decay of curvature\footnote{This means that $|K_M|R^2$ is bounded on $M$, where $R^2=x_1^2+x_2^2+x_3^2$.} if and only if its closure in $\mathbb{R}^3$ has finite total curvature. \end{theorem} Once we have discarded the case of a surface $M\in {\cal P}$ with just one limit end, it remains to prove that when $M$ has two limit ends, then $M$ is a Riemann minimal example.
The argument for proving this is also delicate, but since it makes strong use of the Shiffman function and its surprising connection to the theory of integrable systems, more precisely to the Korteweg-de Vries equation, we will include some details of it. We first need to establish a framework for $M$ which makes it possible to use the Shiffman function globally; here the word {\it globally} also refers to its {\it extension across the planar ends} of $M$, in a strong sense to be made precise soon. Recall that we have normalized $M$ so that its limit tangent plane at infinity is the $(x_1,x_2)$-plane. By Theorem~\ref{thmckmr}, the middle ends of $M$ are not limit ends, and as $M$ has genus zero, these middle ends can be represented by annuli. Since $M$ has more than one end, every annular end of $M$ has finite total curvature (by Collin's theorem~\cite{col1}, which we also used at the beginning of Section~\ref{sec5.1}), and thus such annular ends of $M$ are asymptotic to the ends of planes or catenoids. Now recall Theorem~\ref{thmckmr} above, due to Collin, Kusner, Meeks and Rosenberg. The same authors obtained in~\cite{ckmr1} the following additional information about the middle ends: \begin{theorem}[Theorem~3.5 in~\cite{ckmr1}] \label{thm5.5} Suppose a properly embedded minimal surface $\Sigma$ in $\mathbb{R}^3$ has two limit ends and horizontal limit tangent plane at infinity. Then there exists a sequence of horizontal planes $\{ P_j\} _{j\in \mathbb{N} }$ in $\mathbb{R}^3$ with increasing heights, such that $\Sigma$ intersects each $P_j$ transversely in a compact set, every middle end of $\Sigma$ has an end representative which is the closure of the intersection of $\Sigma$ with the slab bounded by $P_j\cup P_{j+1}$, and every such slab contains exactly one of these middle end representatives.
\end{theorem} Theorem~\ref{thm5.5} gives a way of separating the middle ends of $M$ into regions determined by horizontal slabs, in a similar manner as the planar ends of a Riemann minimal example can be separated by slabs bounded by horizontal planes. Furthermore, the Half-space Theorem~\cite{hm10} by Hoffman and Meeks ensures that the restriction of the third coordinate function $x_3$ to the portion $M(+)$ of $M$ above $P_0$ is not bounded from above and extends smoothly across the middle ends. Another crucial result, Theorem~3.1 in~\cite{ckmr1}, implies that $M(+)$ is conformally parabolic (in the sense that Brownian motion on $M(+)$ is recurrent); in particular, the annular simple middle ends of $M$ in $M(+)$ are conformally punctured disks. After compactification of $M(+)$ by adding its middle ends and their limit point $p_{\infty }$ corresponding to the top end of $M$, we obtain a conformal parameterization of this compactification defined on the unit disk $\mathbb{D} =\{ |z|\leq 1\} $, so that $p_{\infty }=0$, the middle ends in $M(+)$ correspond to a sequence of points $p_j\in \mathbb{D} -\{ 0\} $ converging to zero as $j\to \infty $, and \[ x_3|_{M(+)}(z)=-{\lambda} \ln|z|+c \] for some ${\lambda} ,c\in \mathbb{R} $, ${\lambda}>0$. This implies that there are no points in $M(+)$ with horizontal tangent plane. Observe that different planar ends in $M$ cannot have the same height above $P_0$ by Theorem~\ref{thm5.5}, which implies that $M(+)$ intersects every plane $P'$ above $P_0$ in a simple closed curve if the height of $P'$ does not correspond to the height of any middle end, while $P'$ intersects $M(+)$ in a proper Jordan arc when the height of $P'$ equals the height of a middle end. Similar reasoning applies to the surface $M(-)=M-[M(+)\cup P_0]$.
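The absence of points of $M(+)$ with horizontal tangent plane can be checked directly from this conformal description: since $x_3|_{M(+)}(z)=\mbox{Re} (-{\lambda} \log z)+c$, the height differential of $M(+)$ in the coordinate $z$ is
\[
\phi _3=-{\lambda}\, \frac{dz}{z},
\]
which has no zeros on $\mathbb{D} -\{ 0\} $; as a point of $M(+)$ with horizontal tangent plane would produce an interior zero of $\phi _3$, no such point exists.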
From here one deduces easily that the meromorphic extension through the planar ends of the stereographically projected Gauss map $g$ of $M$ has order-two zeros and poles at the planar ends, and no other zeros or poles in $M$. This is a sketch of the proof of the first four items of the following descriptive result, which is part of Theorem~1 in~\cite{mpr3}. \begin{theorem} \label{thm5.6} Let $M\in {\cal P}$ be a surface with infinitely many ends. Then, after a rotation and a homothety we have: \begin{enumerate} \item $M$ can be conformally parameterized by the cylinder $\mathbb{C} /\langle i\rangle $ punctured in an infinite discrete set of points $\{ p_j,q_j\} _{j\in \mathbb{Z} }$ which correspond to the planar ends of $M$. \item The stereographically projected Gauss map $g\colon (\mathbb{C} /\langle i\rangle )-\{ p_j,q_j\} _{j\in \mathbb{Z} }\to \mathbb{C} \cup \{ \infty \} $ extends through the planar ends of $M$ to a meromorphic function $g$ on $\mathbb{C} /\langle i\rangle $ which has double zeros at the points $p_j$ and double poles at the $q_j$. \item The height differential of $M$ is $\phi _3=dz$ with $z$ being the usual conformal coordinate on~$\mathbb{C} $, hence the third coordinate function of $M$ is $x_3(z)=\mbox{\rm Re} (z)$. \item The planar ends of $M$ are ordered by their heights so that $\mbox{\rm Re} (p_j)<\mbox{\rm Re} (q_j)<\mbox{\rm Re} (p_{j+1})$ for all $j$, with $\mbox{\rm Re} (p_j)\to \infty $ (resp. $\mbox{\rm Re} (p_j) \to -\infty $) when $j\to \infty $ (resp. $j\to -\infty $). \end{enumerate} \end{theorem} The description in Theorem~\ref{thm5.6} allows us to define the Shiffman function globally on any surface $M$ as in that theorem. To continue our study of the properties of such a surface, we need the notion of {\it flux}.
The flux vector along a closed curve $\gamma \subset M$ is defined as \begin{equation} \label{eq:flux} F( \gamma) = \int_\gamma \mbox{Rot}_{90^\circ }({\gamma} ') = \mbox{Im} \int _{{\gamma} }\left( \frac{1}{2}\left( \frac{1}{g}-g\right), \frac{i}{2}\left( \frac{1}{g}+g\right) ,1\right) \phi _3, \end{equation} where $(g,\phi _3)$ is the Weierstrass data of $M$ and $\mbox{Rot}_{90^\circ }$ denotes the rotation by angle $\pi /2$ in the tangent plane of $M$ at any point. It is easy to show that $F({\gamma} )$ only depends on the homology class of ${\gamma} $ in $M$, and that the flux along a closed curve that encloses a planar end of $M$ is zero. In particular, for a surface $M$ as in Theorem~\ref{thm5.6}, the only flux vector to consider is the one associated to any compact horizontal section $M\cap \{ x_3=\mbox{constant}\} $, which we will denote by $F(M)$. Note that by item~(3) of Theorem~\ref{thm5.6}, the third component of $F(M)$ is 1. In the sequel, we will assume the following normalization for $M$, after possibly a rotation in $\mathbb{R}^3$ around the $x_3$-axis: \begin{equation} \label{star} F(M)=(h,0,1) \ \mbox{ for some }h\geq 0. \end{equation} The next result collects some more subtle properties of a surface $M$ as in Theorem~\ref{thm5.6} (this is the second part of Theorem~1 in~\cite{mpr3}). \begin{theorem} \label{thm5.7} For a surface $M$ normalized as in (\ref{star}), we have: \begin{enumerate} \setcounter{enumi}{4} \item The flux vector $F(M)$ of $M$ along a compact horizontal section has nonzero horizontal component; equivalently, $F(M)=(h,0,1)$ for some $h>0$. \item The Gaussian curvature of $M$ is bounded, and the vertical spacings between consecutive planar ends are bounded from above and below by positive constants, with all these constants depending only on $h$.
\item For every divergent sequence $\{ z_k\} _k\subset \mathbb{C} /\langle i\rangle $, there exists a subsequence of the meromorphic functions $g_k(z)=g(z+z_k)$ which converges uniformly on compact subsets of $\mathbb{C} /\langle i\rangle $ to a non-constant meromorphic function $g_{\infty }\colon \mathbb{C} /\langle i\rangle \to \mathbb{C} \cup \{ \infty \} $ (we will refer to this property by saying that $g$ is {\it quasi-periodic}). In fact, $g_{\infty }$ corresponds to the Gauss map of a minimal surface $M_{\infty }$ satisfying the same properties and normalization (\ref{star}) as $M$, which is the limit of a related subsequence of translations of $M$ by vectors whose $x_3$-components are $\mbox{\rm Re} (z_k)$. \end{enumerate} \end{theorem} As said above, the proofs of properties 5, 6 and 7 are more delicate than those of the items in Theorem~\ref{thm5.6}, as they depend on Colding-Minicozzi theory. For instance, the fact that the Gaussian curvature $K_M$ of $M$ is bounded, with the bound depending only on an upper bound of the horizontal component of $F(M)$, was proven in Theorem~5 of~\cite{mpr3} in the more general case of a sequence $\{ M_k\} _k\subset {\cal P}$ as in Theorem~\ref{thm5.6} such that $F(M_k)=(h_k,0,1)$ and $\{ h_k\} _k$ is bounded from above. This proof of the existence of a uniform curvature estimate is by contradiction: the existence of a sequence of points $p_k\in M_k$ such that $|K_{M_k}|(p_k)\to \infty $ creates a nonflat blow-up limit of the $M_k$ around the $p_k$, which can be proven to be a vertical helicoid (this uses the uniqueness of the helicoid among properly embedded, simply connected, nonflat minimal surfaces, see Section~\ref{sec5.2}).
A careful application of the so-called {\it Limit Lamination Theorem for Planar Domains} (Theorem~0.9 in Colding and Minicozzi~\cite{cm25}) produces a sequence $\mu _k>0$ so that, after possibly a sequence of translations and rotations around a vertical axis, $\{ \mu _kM_k\} _k$ converges to a foliation of $\mathbb{R}^3$ by horizontal planes with singular set of convergence consisting of two vertical lines ${\Gamma} \cup {\Gamma}'$ separated by a positive distance. From here one can produce a nontrivial closed curve in each $\mu _kM_k$ such that the flux vector $F(\mu _kM_k)$ converges as $k\to \infty $ to twice the horizontal vector that joins ${\Gamma} $ and ${\Gamma} '$. Since the angle between $F(M_k)$ and its horizontal projection $(h_k,0,0)$ is invariant under translations, homotheties and rotations around the $x_3$-axis, this contradicts the fact that the $h_k$ are bounded from above. The proof that there is a lower bound for the vertical spacings between consecutive planar ends in item~6 of Theorem~\ref{thm5.7} follows from the fact that the boundedness of $K_M$ implies the existence of an embedded regular neighborhood of constant radius (Meeks and Rosenberg~\cite{mr1}). The upper bound for the same vertical spacings is again proved by contradiction, by a clever application of the L\'opez-Ros deformation argument (see Section~\ref{sec5.1}), which also gives property~5 of Theorem~\ref{thm5.7}. Finally, the proof of the compactness result in item~7 of Theorem~\ref{thm5.7} is essentially a consequence of the already proven uniform bound for the Gaussian curvatures and of the uniform local area bounds for a sequence of translations of the surface $M$ given by item~6 of the same theorem. This completes our sketch of the proof of Theorem~\ref{thm5.7}.
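As a consistency check on the normalization (\ref{star}), the third component of $F(M)$ can be computed directly from the definition (\ref{eq:flux}) and item~3 of Theorem~\ref{thm5.6}: a compact horizontal section of $M$ corresponds to a loop ${\gamma} $ in $\mathbb{C} /\langle i\rangle $ homotopic to $s\in [0,1]\mapsto z_0+is$, so that
\[
F_3(M)=\mbox{Im} \int _{{\gamma} }\phi _3=\mbox{Im} \int _0^1 i\, ds=1,
\]
in agreement with $F(M)=(h,0,1)$.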
As explained above, in our setting for $M\in {\cal P}$ satisfying the normalizations in Theorems~\ref{thm5.6} and~\ref{thm5.7}, we can consider its Shiffman function $S_M$ defined by equation (\ref{eq:Shiffman}), which is also defined on the conformal compactification of $M$ (recall that in Section~\ref{subsec:shiffman} we normalized the height differential $\phi _3$ to be $dz$, as in Theorem~\ref{thm5.6}). Recall also that the vanishing of the Shiffman function is equivalent to the fact that $M$ is a Riemann minimal example. But instead of proving directly that $S_M=0$ on $M$, Meeks, P\'erez and Ros demonstrated that $S_M$ is a linear Jacobi function; to understand this concept, we must first recall some basic facts about Jacobi functions on a minimal surface. Since minimal surfaces can be viewed as critical points of the area functional $A$, the nullity of the Hessian of $A$ at a minimal surface $M$ contains valuable information about the geometry of $M$. Normal variational fields for $M$ can be identified with functions, and the second variation of area tells us that the functions in the nullity of the Hessian of $A$ coincide with the kernel of the {\it Jacobi operator}, which is the Schr\"{o}dinger operator on $M$ given by \begin{equation} \label{eq:Jacobi} L=\Delta - 2K_M, \end{equation} where $\Delta $ denotes the intrinsic Laplacian on $M$. Any function $v\in C^{\infty }(M)$ satisfying $\Delta v - 2K_Mv=0$ on $M$ is called a {\it Jacobi function,} and corresponds to an {\it infinitesimal} deformation of $M$ by minimal surfaces. It turns out that the Shiffman function $S_M$ is a Jacobi function, i.e., $LS_M=0$ (this holds for any minimal surface whenever $S_M$ is well-defined, and follows by a direct computation from (\ref{eq:Shiffman})). One obvious way to produce Jacobi functions is to take the normal part of the variational field of the variation of $M$ obtained by moving it through a 1-parameter family of isometries of $\mathbb{R}^3$.
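That such normal parts are indeed Jacobi functions can be verified directly in the case of translations (a standard computation): the Gauss map $N$ of a minimal surface satisfies $\Delta N=2K_MN$, so for any fixed vector $a\in \mathbb{R}^3$,
\[
L\langle N,a\rangle =\langle \Delta N,a\rangle -2K_M\langle N,a\rangle =0.
\]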
For instance, the translations $M+ta$ with $a\in \mathbb{R}^3$ produce the Jacobi function $\langle N,a\rangle $ (here $N$ is the Gauss map of $M$), which is called a {\it linear Jacobi function.} One key step in the proof of Meeks, P\'erez and Ros is the following one. \begin{proposition} \label{prop5.8} Let $M\in {\cal P}$ be a surface with infinitely many ends and satisfying the normalizations in Theorems~\ref{thm5.6} and~\ref{thm5.7}. If the Shiffman Jacobi function $S_M$ of $M$ is linear, i.e., $S_M=\langle N,a\rangle $ for some $a\in \mathbb{R}^3$, then $M$ is a Riemann minimal example. \end{proposition} The proof of Proposition~\ref{prop5.8} goes as follows. We first pass from real-valued Jacobi functions to complex-valued ones by means of the {\it conjugate of a Jacobi function.} The conjugate function of a Jacobi function $u$ over a minimal surface $M$ is the (locally defined) support function $u^*=\langle (X_u)^*,N\rangle $ of the conjugate surface\footnote{The conjugate surface of a minimal surface is the one whose coordinate functions are the harmonic conjugates of the coordinate functions of the original minimal surface; the conjugate surface is only locally defined, and it is always minimal.} $(X_u)^*$ of the branched minimal surface $X_u$ associated to $u$ by the so-called Montiel-Ros correspondence~\cite{mro1}; in particular, both $X_u$ and $(X_u)^*$ have the same Gauss map $N$ as $M$, and $u^*$ also satisfies the Jacobi equation. Now suppose $M\in {\cal P}$ is as in Proposition~\ref{prop5.8}, i.e., $S_M$ is linear.
It is then easy to show that the conjugate Jacobi function $(S_M)^*$ of $S_M$ is also linear, from which $S_M+iS_M^*=\langle N,a\rangle $ for some $a\in \mathbb{C}^3$, which, again by (\ref{eq:Shiffman}), produces a second-order complex ODE for $g$, namely \begin{equation} \label{eq:gOGE} \overline{g}\left( \frac{3}{2}\frac{(g')^2}{g}-g''-B-a_3g\right) = \frac{g''}{g}-\frac{1}{2}\left( \frac{g'}{g}\right) ^2+Ag-a_3, \end{equation} for some complex constants $A,B,a_3$ that only depend on $a$. As $g$ is meromorphic and nonconstant, (\ref{eq:gOGE}) implies that both its right-hand side and the expression in parentheses on its left-hand side vanish identically. Solving for $g''$ in both equations gives \[ g''=\frac{(g')^2}{2g}-Ag^2+a_3g=\frac{3}{2}\frac{(g')^2}{g}-B-a_3g, \] and equating the last two expressions, one arrives at the following complex ODE of first order: \[ (g')^2 =g(-Ag^2 +2a_3 g+B), \] which in turn says that the Weierstrass data $(g,\phi _3=dz)$ of $M$ factorizes through the torus $\Sigma =\{ (\xi ,w)\in (\mathbb{C}\cup \{ \infty \} )^2\ | \ w^2=\xi(-A\xi^2+2a_3\xi+ B)\} $; in other words, we deduce that $M$ is in fact periodic under a translation, with quotient a twice punctured torus. In this very particular situation, one can apply the classification of periodic examples by Meeks, P\'erez and Ros in~\cite{mpr1} to conclude that $M$ is a Riemann minimal example, and the proposition is proved. In light of Proposition~\ref{prop5.8}, one way of finishing our classification problem consists of proving the following statement, which we will prove assuming that Theorem~\ref{thm5.9}, stated immediately after it, holds; the proof of Theorem~\ref{thm5.9} will be sketched later. \begin{proposition} \label{propult} For every $M\in {\cal P}$ with infinitely many ends and satisfying the normalizations in Theorems~\ref{thm5.6}, \ref{thm5.7} and in (\ref{star}), the Shiffman Jacobi function $S_M$ of $M$ is linear.
\end{proposition} \begin{theorem}[Theorem~5.14 in~\cite{mpr6}] \label{thm5.9} Given a surface $M\in {\cal P}$ with infinitely many ends and satisfying the normalizations in Theorems~\ref{thm5.6} and~\ref{thm5.7}, there exists a 1-parameter family $\{ M_t\} _t\subset {\mathcal P}$ such that $M_0=M$ and the normal part of the variational field for this variation, when restricted to each $M_t$, is the Shiffman function $S_{M_{t}}$ multiplied by the unit normal vector field to $M_t$. \end{theorem} Before proving Proposition~\ref{propult}, some explanation about the integration of the Shiffman function $S_M$ appearing in Theorem~\ref{thm5.9} is in order. As we explained in the paragraph just before the statement of Proposition~\ref{prop5.8}, $S_M$ corresponds to an {\it infinitesimal} deformation of $M$ by minimal surfaces (every Jacobi function has this property). But this is very different from the quite strong property of proving that $S_M$ can be integrated to an actual variation $t\mapsto M_t\in {\cal P}$, as stated in Theorem~\ref{thm5.9}. Even more, the parameter $t$ of this deformation can be proven to extend to be a complex number in $\mathbb{D} ({\varepsilon} )=\{ t\in \mathbb{C} \ | \ |t|<{\varepsilon} \} $ for some ${\varepsilon} >0$, and $t\in \mathbb{D} ({\varepsilon} )\mapsto M_t$ can be viewed as the real part of a complex valued {\it holomorphic} curve in a certain complex variety. This is a very special integration property for $S_M$, which we refer to by saying that the Shiffman function can be {\it holomorphically integrated} for every surface $M$ as in Theorem~\ref{thm5.9}. We next give a sketch of the proof of Proposition~\ref{propult}, assuming the validity of Theorem~\ref{thm5.9}. 
One fixes a flux vector $F=(h,0,1)$, $h>0$, considers the set \[ {\cal P}_F=\{ M\in {\cal P} \mbox{ as in Theorems~\ref{thm5.6} and \ref{thm5.7} } \ | \ F(M)=F\} \] and maximizes the spacing between the planar ends of surfaces in ${\cal P}_F$ (to do this one needs to be careful when specifying what planar ends are compared when measuring distances; we will not enter into technical details here), which can be done by the compactness property given in item~6 of Theorem~\ref{thm5.7}. Then one proves that any maximizer (not necessarily unique a priori) $M_{\max }\in {\cal P}_F$ must have linear Shiffman function; the argument for this claim has two steps: \begin{enumerate}[(S1)] \item The assumed holomorphic integration of the Shiffman function of $M_{\max }$ produces a complex holomorphic curve $t\in \mathbb{D} ({\varepsilon} )\mapsto g_t\in {\cal W}$, with $g_0=g_{\max }$ being the Gauss map of $M_{\max }$, where ${\mathcal W}$ is the complex manifold of quasi-periodic meromorphic functions on $\mathbb{C} /\langle i\rangle $ (in the sense explained in item~7 of Theorem~\ref{thm5.7}) with double zeros and double poles; ${\cal W}$ can be identified with the set of potential Weierstrass data of minimal immersions $(g,\phi _3=dz)$ with infinitely many planar ends. The fact that the period problem associated to $(g_t,dz)$ is solved for any $t$ comes from the fact that for $t=0$, this period problem is solved (as $M_{\max }$ is a genuine surface) and that the velocity vector of $t\mapsto g_t$ is the Shiffman function at any value of $t$, which lies in the kernel of the period map. A similar argument shows that not only does $(g_t,dz)$ solve the period problem, but also the flux vector $F(M_t)$ is independent of $t$, where $M_t$ is the minimal surface produced by the Weierstrass data $(g_t,dz)$ (thus $M_0=M_{\max }$). Embeddedness of $M_t$ is guaranteed by that of $M_{\max }$, by the application of the maximum principle for minimal surfaces.
Altogether, we deduce that $t\in \mathbb{D} ({\varepsilon} )\mapsto M_t$ actually lies in ${\cal P}_F$, which implies that the spacing between the planar ends of $M_t$, viewed as a function of $t$, achieves a maximum at $t=0$. \item As the spacing between the planar ends of $M_t$ can be viewed as a harmonic function of $t$ (this follows from item~4 of Theorem~\ref{thm5.6}), the maximizing property of $M_{\max }$ in the family $t\in \mathbb{D} ({\varepsilon} )\mapsto M_t$ and the maximum principle for harmonic functions give that the spacing between the planar ends of $M_t$ remains constant in $t$; from here it is not difficult to deduce that $t\mapsto g_t$ is just a translation in the cylinder $\mathbb{C} /\langle i \rangle $ of the zeros and poles of $g_t$, which corresponds geometrically to the fact that $t\mapsto M_t$ is a translation in $\mathbb{R}^3$ of $M_{\max}$. Therefore, the velocity vector of $t\mapsto M_t$ at $t=0$, which is the Shiffman function of $M_{\max }$, is linear. \end{enumerate} Once it is proven that the Shiffman function of $M_{\max }$ is linear, Proposition~\ref{prop5.8} implies that $M_{\max }$ is a Riemann minimal example. A similar reasoning can be done for a minimizer $M_{\min }\in {\cal P}_F$ of the spacing between planar ends, hence $M_{\min }$ is also a Riemann minimal example. As there is only one Riemann minimal example for each flux $F$, we deduce that the maximizer and minimizer are the same. In particular, every surface in ${\cal P}_F$ is both a maximizer and minimizer and, hence, its Shiffman function is linear. This finishes the sketch of proof of Proposition~\ref{propult}. To finish this article, we will indicate how to demonstrate that the Shiffman function of every surface $M\in {\cal P}$ satisfying the hypotheses of Theorem~\ref{thm5.9} can be holomorphically integrated. This step is where the Korteweg-de Vries equation (KdV) plays a crucial role, which we will explain now.
We recommend the interested reader to consult the excellent survey by Gesztesy and Weikard~\cite{gewe1} for an overview of the notions and properties that we will use in the sequel. First we explain the connection between the Shiffman function and the KdV equation. Let $M\in {\cal P}$ be a surface satisfying the hypotheses of Theorem~\ref{thm5.9}, and let $S_M$ be its Shiffman function, which is globally defined. Its conjugate Jacobi function $(S_M)^*$ is also globally defined; in fact, $(S_M)^*$ is given by minus the real part of the expression enclosed between brackets in (\ref{eq:Shiffman}). By the Montiel-Ros correspondence~\cite{mro1}, both $S_M$, $(S_M)^*$ can be viewed as the support functions of conjugate branched minimal immersions $X,X^*\colon M\to \mathbb{R}^3$ with the same Gauss map as $M$. The holomorphicity of $X+iX^*$ allows us to identify $S_M+iS_M^*$ with an infinitesimal deformation of the Gauss map $g$ of $M$ in the space ${\mathcal W}$ of quasi-periodic meromorphic functions on $\mathbb{C} /\langle i\rangle $ that appears in step (S1) above. In other words, $S_M+iS_M^*$ can be viewed as the derivative $\dot{g}_S=\left. \frac{d}{dt}\right| _{t=0}g_t$ of a holomorphic curve $t\in \mathbb{D} ({\varepsilon} )=\{ t\in \mathbb{C} \ | \ |t|< {\varepsilon} \} \mapsto g_t\in {\mathcal W}$ with $g_0=g$, which can be explicitly computed from (\ref{eq:Shiffman}) as \begin{equation} \label{gpuntodeShiffman} \dot{g}_S=\frac{i}{2}\left( g'''-3\frac{g'g''}{g}+\frac{3}{2}\frac{(g')^3}{g^2}\right) . \end{equation} Therefore, to integrate $S_M$ holomorphically one needs to find a holomorphic curve $t\in \mathbb{D} ({\varepsilon} ) \mapsto g_t\in {\mathcal W}$ with $g_0=g$, such that for all $t$, the pair $(g_t,\phi _3=dz)$ is the Weierstrass data of a minimal surface $M_t\in {\mathcal P}$ satisfying the conditions of Theorem~\ref{thm5.9}, and such that for every value of $t$, \[ \left. 
\frac{d}{dt}\right| _{t}g_t=\frac{i}{2}\left( g_t'''-3\frac{g_t'g_t''}{g_t}+ \frac{3}{2}\frac{(g_t')^3}{g_t^2}\right) . \] Viewing (\ref{gpuntodeShiffman}) as an evolution equation in complex time $t$, one could apply general techniques to find solutions $g_t=g_t(z)$ defined locally around a point $z_0\in (\mathbb{C} /\langle i \rangle )-g^{-1}(\{ 0,\infty \} )$ with the initial condition $g_0=g$, but such solutions are not necessarily defined on the whole cylinder, can develop essential singularities, and even if they were meromorphic on $\mathbb{C} /\langle i\rangle $, it is not clear {\it a priori} that they would have only double zeros and poles and other properties necessary to give rise, via the Weierstrass representation with height differential $\phi _3=dz$, to minimal surfaces $M_t$ in ${\mathcal P}$ with infinitely many planar ends. Fortunately, all of these problems can be solved by arguments related to the theory of the meromorphic KdV equation. The change of variables \begin{equation} \label{u} u =-\frac{3(g')^2}{4g^2}+\frac{g''}{2g} \end{equation} transforms (\ref{gpuntodeShiffman}) into the evolution equation \begin{equation} \label{kdv} \frac{\partial u}{\partial t} = -u'''-6 u u', \end{equation} which is the celebrated KdV equation\footnote{One can find different normalizations of the KdV equation in the literature, given by different coefficients for $u''', uu'$ in equation (\ref{kdv}); all of them are equivalent up to a change of variables.}. The apparently magical change of variables (\ref{u}) has a natural explanation: the change of variables $x=g'/g$ transforms the expression (\ref{gpuntodeShiffman}) for $\dot{g}_S$ into the evolution equation \[ \dot{x}=\frac{i}{2}(x'''-\frac{3}{2}x^2x'), \] which is called a {\it modified KdV equation} (mKdV).
It is well-known that mKdV equations in $x$ can be transformed into KdV equations in $u$ through the so-called {\it Miura transformations,} $x\mapsto u=ax'+bx^2$ with $a,b$ suitable constants, see for example~\cite{gewe1}, page~273. Equation~(\ref{u}) is nothing but the composition of $g\mapsto x$ and a Miura transformation. The holomorphic integration of the Shiffman function $S_M$ could be performed just in terms of the theory of the mKdV equation, but we will instead use the more standard KdV theory. Coming back to the holomorphic integration of $S_M$, this problem amounts to solving globally in $\mathbb{C} /\langle i\rangle $ the Cauchy problem for equation (\ref{kdv}), i.e., \begin{quote} {\sc Problem.} Find a meromorphic solution $u(z,t)$ of (\ref{kdv}) defined for $z\in \mathbb{C} /\langle i\rangle $ and $t\in \mathbb{D} ({\varepsilon} )$, whose initial condition is $u(z,0)=u(z)$ given by~(\ref{u}). \end{quote} It is a well-known fact in KdV theory (see for instance~\cite{gewe1} and also Segal and Wilson~\cite{SeWi}) that the above Cauchy problem can be solved globally, producing a holomorphic curve $t\mapsto u_t$ of meromorphic functions $u(z,t)=u_t(z)$ on $\mathbb{C} /\langle i\rangle $ with controlled Laurent expansions in poles of $u_t$, provided that the initial condition $u(z)$ is an {\it algebro-geometric potential} for the KdV equation (to be defined below); a different question is whether or not this family $u_t(z)$ solves our geometric problem related to minimal surfaces in ${\cal P}$.
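The claim that (\ref{u}) is the composition of $g\mapsto x=g'/g$ with a Miura transformation can be checked symbolically; in this normalization the suitable constants work out to $a=1/2$, $b=-1/4$. The following SymPy sketch is only an illustrative verification, not part of the original argument.

```python
import sympy as sp

z = sp.symbols('z')
g = sp.Function('g')(z)

x = sp.diff(g, z) / g  # the intermediate change of variables x = g'/g

# u as defined by the change of variables in the text
u_from_g = -sp.Rational(3, 4) * sp.diff(g, z)**2 / g**2 + sp.diff(g, z, 2) / (2 * g)

# Miura-type transformation u = a*x' + b*x^2 with a = 1/2, b = -1/4
u_from_x = sp.Rational(1, 2) * sp.diff(x, z) - sp.Rational(1, 4) * x**2

# the two expressions agree identically wherever g does not vanish
assert sp.simplify(u_from_g - u_from_x) == 0
```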
To understand the notion of algebro-geometric potential, one must view (\ref{kdv}) as the level $n=1$ in a sequence of evolution equations in $u$, called the {\it KdV hierarchy,} \begin{equation} \label{kdvn} \left\{ \frac{\partial u}{\partial t_n} = -\partial_z{\mathcal P}_{n+1}(u)\right\} _{n\geq 0}, \end{equation} where ${\mathcal P}_{n+1}(u)$ is a differential operator given by a polynomial expression of $u$ and its derivatives with respect to $z$ up to order $2n$. These operators, which are closely related to Lax Pairs (see Section~2.3 in~\cite{gewe1}) are defined by the recurrence law \begin{eqnarray} \label{law} \left\{ \begin{array}{l} \partial_z {\mathcal P}_{n+1}(u) = (\partial_{zzz} + 4u\,\partial_z+2u'){\mathcal P}_{n}(u), \\ \rule{0cm}{.5cm}{\mathcal P}_{0}(u)=\frac{1}{2}. \end{array}\right. \end{eqnarray} In particular, ${\mathcal P}_1(u)=u$ and ${\mathcal P}_2(u)=u''+3u^2$ (plugging ${\mathcal P}_2(u)$ in (\ref{kdvn}) one obtains the KdV equation). Hence, for each $n\in \mathbb{N} \cup \{ 0\} $, one must consider the right-hand side of the $n$-th equation in (\ref{kdvn}) as a polynomial expression of $u=u(z)$ and its derivatives with respect to $z$ up to order $2n+1$. We will call this expression a {\it flow}, denoted by $\frac{\partial u}{\partial t_n}$. A function $u(z)$ is said to be an {\it algebro-geometric potential} of the KdV equation if there exists a flow $\frac{\partial u}{\partial t_n}$ which is a linear combination of the lower order flows in the KdV hierarchy. Once we understand the notion of algebro-geometric potential for the KdV equation, we have divided our goal of proving the holomorphic integration of the Shiffman function for any surface $M$ as in Theorem~\ref{thm5.9} into two final steps. 
\begin{enumerate}[(T1)] \item For every minimal surface $M\in {\cal P}$ satisfying the hypotheses of Theorem~\ref{thm5.9}, the function $u=u(z)$ defined by equation (\ref{u}) in terms of the Gauss map $g(z)$ of $M$, is an algebro-geometric potential of the KdV equation. This step would then give a meromorphic solution $u(z,t)=u_t(z)$ of the KdV flow (\ref{kdv}) defined for $z\in \mathbb{C} /\langle i\rangle $ and $t\in \mathbb{D} ({\varepsilon} )$, with initial condition $u(z,0)=u(z)$ given by~(\ref{u}). \item With $u_t(z)$ as in (T1), it is possible to define a holomorphic curve $t\mapsto g_t\in {\cal W}$ with $g_0=g$ (recall that $g$ is the stereographic projection of the Gauss map of $M$), such that $(g_t,\phi _3=dz)$ solves the period problem and defines a minimal surface $M_t\in {\cal P}$ that satisfies the conclusions of Theorem~\ref{thm5.9}. \end{enumerate} Property (T1) follows from a combination of the following two facts: \begin{enumerate}[(a)] \item Each flow $\frac{\partial u}{\partial t_n}$ in the KdV hierarchy (\ref{kdvn}) produces a {\it bounded,} complex valued Jacobi function $v_n$ on $\mathbb{C} /\langle i\rangle $ in a similar manner to the way that the flow $\frac{\partial u}{\partial t_1}$ produces the complex Shiffman function $S_M+iS_M^*$. 
\item Since the Jacobi functions $v_n$ produced in item~(a) are bounded on $\mathbb{C} /\langle i\rangle $ and the Jacobi operator (\ref{eq:Jacobi}) is a Schr\"{o}dinger operator on $M$, the $v_n$ can be considered to lie in the kernel of a Schr\"{o}dinger operator $L_M$ on $\mathbb{C} /\langle i\rangle $ with bounded potential; namely, $L_M=(\Delta_{\mathbb{S} ^1}+\partial^2_t)+V_M$ where $\mathbb{C} /\langle i\rangle $ has been isometrically identified with $\mathbb{S}^1\times \mathbb{R}$ endowed with the usual product metric $d\theta ^2\times dt^2$, and the potential $V_M$ is the square of the norm of the differential of the Gauss map of $M$ with respect to $d\theta ^2\times dt^2$ ($V_M$ is bounded since $M$ has bounded Gaussian curvature by item~6 of Theorem~\ref{thm5.7}). Finally, the kernel of $L_M$ restricted to bounded functions is finite dimensional; this finite dimensionality was proved by Meeks, P\'erez and Ros\footnote{Following arguments by Pacard (personal communication), which in turn are inspired by a paper by Lockhart and McOwen~\cite{loMcOw1}.} in~\cite{mpr6} and also follows from a more general result by Colding, de Lellis and Minicozzi~\cite{cm39}. \end{enumerate} As for the proof of property (T2) above, the aforementioned control on the Laurent expansions in poles of $u_t$ coming from the integration of the Cauchy problem for the KdV equation is enough to prove that the corresponding meromorphic function $g_t$ associated to $u_t$ by equation (\ref{u}) has the correct behavior in poles and zeros; this property, together with the fact that both $S_M,S_M^*$ preserve infinitesimally the complex periods along any closed curve in $\mathbb{C} /\langle i \rangle $, suffices to show that the Weierstrass data $(g_t,\phi _3=dz)$ solves the period problem for every $t$ and has the same flux vector $F=(h,0,1)$ as the original $M$, thereby giving rise to a surface $M_t\in {\mathcal P}$ with the desired properties.
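The first steps of the recurrence (\ref{law}) are easy to verify symbolically. The SymPy sketch below (illustrative only) confirms that ${\mathcal P}_1(u)=u$ and ${\mathcal P}_2(u)=u''+3u^2$ satisfy the recurrence, and that plugging ${\mathcal P}_2(u)$ into (\ref{kdvn}) reproduces the KdV equation (\ref{kdv}).

```python
import sympy as sp

z = sp.symbols('z')
u = sp.Function('u')(z)

def next_P_derivative(P):
    # right-hand side of the recurrence: (d^3/dz^3 + 4u d/dz + 2u') P_n
    return sp.diff(P, z, 3) + 4 * u * sp.diff(P, z) + 2 * sp.diff(u, z) * P

P0 = sp.Rational(1, 2)
P1 = u                              # claimed P_1(u)
P2 = sp.diff(u, z, 2) + 3 * u**2    # claimed P_2(u)

# d/dz P_{n+1} = (d^3/dz^3 + 4u d/dz + 2u') P_n for n = 0, 1
assert sp.simplify(sp.diff(P1, z) - next_P_derivative(P0)) == 0
assert sp.simplify(sp.diff(P2, z) - next_P_derivative(P1)) == 0

# the level n = 1 flow u_t = -d/dz P_2(u) is the KdV equation u_t = -u''' - 6uu'
kdv_rhs = -sp.diff(u, z, 3) - 6 * u * sp.diff(u, z)
assert sp.simplify(-sp.diff(P2, z) - kdv_rhs) == 0
```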
This finishes our sketch of proof of the holomorphic integration of the Shiffman function of an arbitrary surface $M\in {\cal P}$ satisfying the hypotheses of Theorem~\ref{thm5.9}. \begin{remark} {\em While the classification problem for properly embedded minimal planar domains stated at the beginning of Section 5 has been completed, a natural and important generalization of it remains open: \begin{quote} {\it Problem: Classify all complete embedded minimal planar domains in $\mathbb{R}^3$. } \end{quote} This more general classification question would be resolved if we knew a priori that any complete embedded minimal surface $M$ of finite genus in $\mathbb{R}^3$ is proper. The conjecture that this properness property holds for such an $M$ is called the Embedded Calabi-Yau Conjecture for Finite Genus. In their groundbreaking work in~\cite{cm35}, Colding and Minicozzi solved this conjecture in the special case that the minimal surface $M$ has finite topology. More recently, Meeks, P\'erez and Ros~\cite{mpr9} proved that the conjecture holds if and only if $M$ has a countable number of ends. However, as of the writing of this manuscript, the solution of the Embedded Calabi-Yau Conjecture for Finite Genus remains unsettled. } \end{remark} \bibliographystyle{plain}
https://arxiv.org/abs/0910.4520
Distributed delays stabilize negative feedback loops
Linear scalar differential equations with distributed delays appear in the study of the local stability of nonlinear differential equations with feedback, which are common in biology and physics. Negative feedback loops tend to promote oscillation around steady states, and their stability depends on the particular shape of the delay distribution. Since in applications the mean delay is often the only reliable information available about the distribution, it is desirable to find conditions for stability that are independent from the shape of the distribution. We show here that the linear equation with distributed delays is asymptotically stable if the associated differential equation with a discrete delay of the same mean is asymptotically stable. Therefore, distributed delays stabilize negative feedback loops.
\section{Introduction} The delayed feedback system of the form \begin{align}\label{eq:nl} \dot x = F\Bigl(x, \int_0^{\infty} [d\eta(\tau)] \cdot g\bigl(x(t-\tau),\tau\bigr) \Bigr), \end{align} is a model paradigm in biology and physics \citep{monk2003, atay2003, adimy2006, rateitschak2007, eurich2005, meyer2008}. The first argument is the instantaneous part and the second one is the delayed or retarded part, which forms a feedback loop. The function $\eta$ is a cumulative distribution of delays and $F$ and $g$ are nonlinear functions satisfying $F(0,0)=0$ and $g(0,\tau)=0$. When $F:\mathbb{R}^d \times \mathbb{R}^{d \times d} \to \mathbb{R}^d$ and $g:\mathbb{R}^d \times \mathbb{R} \to \mathbb{R}^{d \times d}$ are smooth functions, the stability of $x=0$ is given by the linearized form, \begin{align} \dot x = -A x - \int_0^{\infty} [B(\tau) \cdot d \eta(\tau)] x(t-\tau). \end{align} The coefficients $A$ and $B(\tau) \in \mathbb{R}^{d \times d}$ are the Jacobian matrices of the instantaneous and the delayed parts, $\eta: [0,\infty) \to \mathbb{R}^{d \times d}$ is the distribution of delays and $(\cdot)$ is the pointwise matrix multiplication. In biological applications, discrete delays in the feedback loop are often used to account for the finite time required to perform essential steps before $x(t)$ is affected. This includes maturation and growth times needed to reach reproductive age in a population \citep{hutchinson1948,mackey1978}, signal propagation along neuronal axons \citep{campbell2007}, and post-translational protein modifications \citep{monk2003,bernard2006b}. Introduction of a discrete delay can generate complex dynamics, from limit cycles to chaos \citep{sriram2008}. Linear stability properties of scalar delayed equations are fairly well characterized. However, lumping intermediate steps into a delayed term can produce broad and atypical delay distributions, and it is not clear how that affects the stability compared to a discrete delay \citep{campbell2009}.
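In the scalar case the Jacobian coefficients of the linearization above reduce to numbers, and they can be computed symbolically. The sketch below uses a hypothetical smooth nonlinearity with $F(0,0)=0$ (an illustrative choice, not taken from the references).

```python
import sympy as sp

x, y = sp.symbols('x y')
mu, beta = sp.symbols('mu beta', positive=True)

# Hypothetical smooth scalar feedback with F(0,0) = 0: linear decay plus
# a saturating delayed term (illustrative choice only).
F = -mu * x - beta * y / (1 + y**2)

a = -sp.diff(F, x).subs({x: 0, y: 0})  # instantaneous coefficient, -D_1 F(0,0)
b = -sp.diff(F, y).subs({x: 0, y: 0})  # delayed coefficient, -D_2 F(0,0)
assert (a, b) == (mu, beta)
```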
Here, we study the stability of the zero solution of a scalar ($d=1$) differential equation with distributed delays, \begin{align}\label{eq:x} \dot x & = - a x - b \int_0^{\infty} x(t-\tau) d\eta(\tau). \end{align} The solution $x(t) \in \mathbb{R}$ is the deviation from the zero steady state of equation (\ref{eq:nl}). The coefficients are $a=-\text{D}_1F(0,0) \in \mathbb{R}$ and $b = -\text{D}_2F(0,0) \neq 0$, and the integral is taken in the Riemann-Stieltjes sense. We assume that $\eta$ is a cumulative probability distribution function, i.e. $\eta: \mathbb{R} \to [0, 1]$ is nondecreasing, piecewise continuous to the left, $\eta(\tau)=0$ for $\tau<0$ and $\eta(+\infty)=1$. Additionally, we assume that there exists $\nu>0$ such that \begin{align}\label{eq:nu} \int_0^{\infty} e^{\nu \tau} d\eta(\tau) < \infty. \end{align} This last condition implies that the mean delay value is finite, \begin{align*} E & = \int_0^{\infty} \tau d\eta(\tau) < \infty. \end{align*} The corresponding probability density function is $f(\tau)$ given by $d \eta(\tau) = f(\tau) d\tau$, where the derivative is taken in the generalized sense. The distribution can be continuous, discrete, or a mixture of continuous and discrete elements. When it is a single discrete delay (a Dirac mass), the asymptotic stability of the zero solution of equation (\ref{eq:x}) is fully determined by the following theorem, due to Hayes \citep{hayes1950}. \begin{theorem}\label{th:hayes} Let $f(\tau) = \delta(\tau-E)$ be a Dirac mass at $E$. The trivial solution of equation (\ref{eq:x}) is asymptotically stable if and only if $a > |b|$, or if $b>|a|$ and \begin{equation}\label{eq:Emax} E < \frac{\arccos(-a/b)}{\sqrt{b^2-a^2}}. \end{equation} \end{theorem} There is a Hopf point if the characteristic equation of equation (\ref{eq:x}) has a pair of imaginary roots and all other roots have negative real parts. For a discrete delay, the Hopf point occurs when equality in (\ref{eq:Emax}) is satisfied.
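At the stability boundary of Theorem~\ref{th:hayes}, the characteristic equation $\lambda + a + b e^{-\lambda E}=0$ of the discrete-delay equation has a purely imaginary root $\lambda = i\sqrt{b^2-a^2}$. This is easy to confirm numerically; the sample values below are an arbitrary illustrative choice with $b>|a|$.

```python
import numpy as np

a, b = 0.3, 1.0                      # any sample values with b > |a|
omega = np.sqrt(b**2 - a**2)         # Hopf frequency
E_star = np.arccos(-a / b) / omega   # critical mean delay from the Hayes bound

# lambda = i*omega solves lambda + a + b*exp(-lambda*E) = 0 at E = E_star
residual = 1j * omega + a + b * np.exp(-1j * omega * E_star)
assert abs(residual) < 1e-12
```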
Moreover, for any distribution $\eta$, there is a zero root along the line $-a=b$. At $-a=b=1/E$, there is a double zero root. When $a>-1/E$, all other roots have negative real parts, but when $a<-1/E$, there is one positive real root. Thus, the stability depends on $\eta$ if and only if $b>|a|$. Moreover, only a Hopf point can occur when $b>|a|$. Therefore, a distribution of delays can only destabilize equation (\ref{eq:x}) through a Hopf point, and only when $b>|a|$. This is a common situation when the feedback acts negatively on the system ($\text{D}_2F(0,0)<0$) to cause oscillations. Assuming $b>0$ and making the change of timescale $t \to bt$, we have $a \to a/b$, $b \to 1$ and $\eta(\tau) \to \eta(b\tau)$. Equation (\ref{eq:x}) can be rewritten as \begin{align}\label{eq:xx} \dot x & = - a x - \int_0^{\infty} x(t-\tau) d\eta(\tau). \end{align} The aim of this paper is to study the effect of delay distributions on the stability of the trivial solution of equation (\ref{eq:xx}); therefore, we focus on the region $|a|<1$. To emphasize the relation between the stability and the delay distribution, we will say that $\eta$ (or $f$) is stable if the trivial solution of equation (\ref{eq:xx}) is stable, and that $\eta$ (or $f$) is unstable if the trivial solution is unstable. It has been conjectured that among distributions with a given mean $E$, the discrete delay is the least stable one \citep{bernard01,atay2008}. If this were true, according to Theorem \ref{th:hayes}, all distributions would be stable provided that \begin{equation}\label{eq:sc} E < \frac{\arccos(-a)}{\sqrt{1-a^2}}. \end{equation} This conjecture has been proved for $a=0$ using Lyapunov-Razumikhin functions \citep{krisztin1990}, and for distributions that are symmetric about their means [$f(E-\tau) = f(E+\tau)$] \citep{miyazaki1997,bernard01,atay2008,kiss2009}.
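A direct simulation illustrates the discrete-delay boundary in the normalized case $a=0$, $b=1$, where (\ref{eq:sc}) gives $E<\pi/2$. The forward-Euler sketch below (illustrative only, with an arbitrary step size) shows decay for $E=1$ and growing oscillations for $E=2$.

```python
import numpy as np

def simulate(E, dt=0.01, t_end=60.0):
    """Forward Euler for x'(t) = -x(t - E), with history x = 1 on [-E, 0]."""
    m = int(round(E / dt))          # delay measured in steps
    n = int(round(t_end / dt))
    x = np.ones(n + m)              # first m entries store the history
    for k in range(m, n + m - 1):
        x[k + 1] = x[k] - dt * x[k - m]
    return x[m:]

late = slice(-500, None)            # roughly the last five time units
assert np.max(np.abs(simulate(1.0)[late])) < 1e-2   # E = 1 < pi/2: decay
assert np.max(np.abs(simulate(2.0)[late])) > 1.0    # E = 2 > pi/2: growth
```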
It has been observed that in general, a greater relative variance provides a greater stability, a property linked to geometrical features of the delay distribution \citep{anderson1991}. There are, however, counter-examples to this principle, and there is no proof that for $a \neq 0$ the least stable distribution is the single discrete delay. It is possible to lump the non-delayed term into the delay distribution using the condition found in \citep{krisztin1990}, but the resulting stability condition, $E/(1+a)<\pi/2$, is not optimal. Here, we show that if inequality (\ref{eq:sc}) holds, all distributions are asymptotically stable. That is, distributed delays stabilize negative feedback loops. In section \ref{s:pre}, we set the stage for the main stability results. In section \ref{s:d}, we show the stability for distributions of discrete delays. In section \ref{s:g}, we present the generalization to any distributions and in section \ref{s:b}, we provide illustrative examples. \section{Preliminary results}\label{s:pre} Let $\eta$ be a distribution with mean $1$. We consider the family of distributions \begin{align}\label{eq:scale} \eta_E(\tau) = \begin{cases} \eta(\tau/E), & E>0, \\ H(\tau), & E=0. \end{cases} \end{align} where $H(\tau)$ is the Heaviside step function at $0$. The distribution $\eta_E$ has mean $E \geq 0$. The characteristic equation of equation (\ref{eq:xx}), obtained by making the ansatz $x(t) = \exp(\lambda t)$, is \begin{align}\label{eq:ce} \lambda + a + \int_0^\infty{e^{-\lambda \tau} d\eta_E(\tau)} = 0. \end{align} When condition (\ref{eq:nu}) is satisfied, the distribution $\eta_E$ is asymptotically stable if and only if all roots of the characteristic equation have a negative real part $\text{Re}(\lambda)<0$ \citep{stepan1989}. Condition (\ref{eq:nu}) guarantees that there is no sequence of roots with real parts converging to a non-negative value. The leading roots of the characteristic equations are therefore well defined. When $E=0$, i.e.
when there is no delay, there is only one root, $\lambda < 0$. When $E>0$, the characteristic equation has pure imaginary roots $\lambda = \pm i \omega$ only if $0<\omega<\omega_c = \sqrt{1-a^2}$. Thus, the search for the boundary of stability can be restricted to imaginary parts $\omega \in (0, \omega_c]$ \citep{bernard01}. We define \begin{align} C(\omega) = & \int_{0}^{\infty} \cos(\omega \tau) d\eta_E(\tau), \\ S(\omega) = & \int_{0}^{\infty} \sin(\omega \tau) d\eta_E(\tau). \end{align} We use a geometric argument to bound the roots of the characteristic equation of equation (\ref{eq:xx}) by the roots of the characteristic equation with a discrete delay. More precisely, if the leading roots associated with the discrete delay are a pair of imaginary roots, then all the roots associated with the distribution of delays have negative real parts. We first state a criterion for stability: if $S(\omega)<\omega$ whenever $C(\omega)+a=0$, then $f$ is stable. The larger the value of $S(\omega)$, the more ``unstable'' the distribution is. We then show that a distribution of $n$ discrete delays $f_n$ is more stable than a certain distribution with two delays $f^*$, i.e. $S_n(\omega)\leq S^*(\omega)$. We construct $f^*$ and determine that one of the delays of this ``most unstable'' distribution $f^*$ is $\tau^*_1=0$, making it easy to determine its stability using Theorem \ref{th:hayes}. We then generalize to any distribution of delays. The next proposition provides a necessary condition for instability. It is a direct consequence of Theorem 2.19 in \citep{stepan1989}. We give a short proof for completeness. \begin{prop}\label{pr:os} If the distribution $\eta_E$ is asymptotically unstable, then there exists $\omega_s \in [0,\omega_c]$ such that $C(\omega_s) + a = 0$ and $S(\omega_s) \geq \omega_s$. \end{prop} \begin{proof} Suppose that the distribution $\eta_E$ is asymptotically unstable.
The roots of the characteristic equation depend continuously on the parameter $E$ and cannot enter the right half complex plane without crossing the imaginary axis. Thus there is a critical value $0<\rho<1$ at which $\eta_{\rho E}$ loses its stability, and this happens when the characteristic equation (\ref{eq:ce}) has a pair of imaginary roots $\lambda = \pm i\omega$ with $0 \leq \omega < \omega_c = \sqrt{1-a^2}$, i.e. through a Hopf point. Splitting the characteristic equation into real and imaginary parts, we have \begin{align*} \int_{0}^{\infty} \cos(\omega \tau) d\eta_{\rho E}(\tau) & + a = 0, \\ \int_{0}^{\infty} \sin(\omega \tau) d\eta_{\rho E}(\tau) & = \omega. \end{align*} Rewriting in terms of $\eta_E$, we obtain \begin{align*} \int_{0}^{\infty} \cos(\omega \rho \tau) d\eta_E(\tau) & + a = 0, \\ \int_{0}^{\infty} \sin(\omega \rho \tau) d\eta_E(\tau) & = \omega. \end{align*} Finally, denoting $\omega_s = \rho \omega \leq \omega < \omega_c$, we have \begin{align*} \int_{0}^{\infty} \cos(\omega_s \tau) d\eta_E(\tau) & + a = 0, \\ \int_{0}^{\infty} \sin(\omega_s \tau) d\eta_E(\tau) & = \omega = \omega_s/\rho \geq \omega_s. \end{align*} This completes the proof. \end{proof} Proposition \ref{pr:os} provides a sufficient condition for stability: \begin{cor}\label{th:cor} The distribution $\eta_E$ is asymptotically stable if (i) $C(\omega)>-a$ for all $\omega \in [0,\omega_c]$ or if (ii) $C(\omega)=-a$, $\omega \in [0,\omega_c]$, implies that $S(\omega)<\omega$. \end{cor} Proposition \ref{pr:os} suggests that the scaling $\eta_E=\eta(\tau/E)$ is appropriate for looking at the stability with respect to the mean delay. The mean delay scales linearly, and unstable distributions therefore lose their stability at smaller values of the mean delay under this scaling. The condition $S(\omega_s)<\omega_s$ is however not necessary for stability, as one can find cases where $S(\omega_s)>\omega_s$ even though the distribution is stable.
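Corollary~\ref{th:cor} is easy to apply numerically. For instance (an illustrative case, not one of the paper's examples), take $a=0$ and the uniform density on $[0,2E]$ with mean $E=2$, beyond the discrete-delay bound $\pi/2$; here $C(\omega)=\sin(2\omega E)/(2\omega E)$ and $S(\omega)=(1-\cos(2\omega E))/(2\omega E)$.

```python
import numpy as np
from scipy.optimize import brentq

a, E = 0.0, 2.0                 # mean delay beyond the discrete bound pi/2
omega_c = np.sqrt(1.0 - a**2)

C = lambda w: np.sin(2 * w * E) / (2 * w * E)
S = lambda w: (1.0 - np.cos(2 * w * E)) / (2 * w * E)

# locate every crossing C(w) = -a on (0, omega_c]
ws = np.linspace(1e-9, omega_c, 2001)
vals = C(ws) + a
roots = [brentq(lambda w: C(w) + a, ws[i], ws[i + 1])
         for i in range(len(ws) - 1) if vals[i] * vals[i + 1] < 0]

assert roots                            # C(w) = -a does occur on (0, omega_c]
assert all(S(w) < w for w in roots)     # ... always with S(w) < w: stable
```

Every crossing satisfies $S(\omega)<\omega$, so this distribution is asymptotically stable at a mean delay where the discrete delay is unstable. Conversely, as noted above, $S(\omega_s)<\omega_s$ is sufficient but not necessary for stability.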
This happens when an unstable distribution switches back to stability as $E$ is further increased (see for instance \citep{boese1989} or \citep{beretta2002} and example \ref{s:s}). \section{Stability of a distribution of discrete delays}\label{s:d} We define a density of $n$ discrete delays $\tau_i \geq 0$, and $p_i > 0$, $i=1,...,n$, $n \geq 1$, as \begin{align}\label{eq:fn} f_n(\tau) & = \sum_{i=1}^{n} p_i \delta(\tau-\tau_i) \\ \intertext{where $\delta(\tau-\tau_i)$ is a Dirac mass at $\tau_i$, and} \sum_{i=1}^{n} p_i \tau_i & = E, \text{ and } \sum_{i=1}^{n} p_i = 1. \nonumber \end{align} In this section, we show that $f_n$ is more stable than a single discrete delay. We do that by observing that among all $n$-delay distributions, $n\geq 2$, that satisfy $C_n(\omega_s)+a=0$ for a fixed value $\omega_s < \omega_c$, the distribution $f^*$ that maximizes $S_n(\omega_s)$, \begin{equation} \max_{f_n} \bigl\{ S_n(\omega_s) | C_n(\omega_s)+a=0 \bigr\} = S^*(\omega_s), \end{equation} has two delays. We show that $S^*(\omega_s)<\omega_s$, implying that all distributions are stable. The following lemma shows how to maximize $S(\omega_s)$ for distributions of two delays. \begin{lem}\label{th:S} Let $f_2$ be a delay density with mean $E$. Assume in addition that there exists $\omega_s \in (0,\omega_c)$ such that $C(\omega_s) = -a < \cos(\omega_c E)$. Then there exist $\tau_2^*$, $p_1^*$ and $p_2^*$ such that \begin{align} \tau_1^* & = 0, \\ p_1^* + p_2^* & = 1, \\ p_1^* + p_2^* \cos(\omega_s \tau_2^*) & = p_1 \cos(\omega_s \tau_1) + p_2 \cos(\omega_s \tau_2), \\ p_2^* \tau_2^* & = E. \end{align} Moreover, there are at most two solutions for $\tau_2^*$ with $\tau_2^*<\pi/\omega_s$. If $\tau_2^*$ is the smallest solution, we have that $\tau_2^* \leq \tau_2$ and \begin{align*} S^*(\omega_s) \equiv \sum_{i=1}^2 p_i^* \sin(\omega_s \tau_i^*) & \geq S(\omega_s).
\end{align*} \end{lem} \begin{proof} To see that there is always a solution, let $c>0$ be the smallest value such that the inequality $\cos(\theta) \geq 1 - c \theta$ is verified for all $\theta$. [$c = 0.725...$ by solving $c=\sin(\theta)$, with $1-\theta \sin(\theta) = \cos(\theta)$.] We have that $1-c \omega E \leq C(\omega) \leq \cos(\omega E)$. Thus, the line $1-d \omega E$ that goes through $C(\omega_s)$ satisfies $d = (1-C(\omega_s))/(\omega_s E) \leq c$, and therefore crosses the curve $\cos(\omega_s E)$ at some point. The smallest solution $\tau_2^*$ is the one such that $1 - d \omega_s \tau_2^* = \cos(\omega_s \tau_2^*)$. This way, \begin{align*} p_1^* + p_2^* \cos(\omega_s \tau_2^*) & = 1 - (1-C(\omega_s)) p_2^* \tau_2^*/E, \\ & = C(\omega_s). \end{align*} These new delay values maximize $S(\omega_s)$ under the constraints that $C(\omega_s)+a=0$ and that the mean remains $E$. That is, we will prove that \begin{align*} S^*(\omega_s) \equiv \sum_{i=1}^2 p_i^* \sin(\omega_s \tau_i^*) & \geq \sum_{i=1}^2 p_i \sin(\omega_s \tau_i), \end{align*} for all admissible $p_i$, $\tau_i$. To show this, we recast the problem in a slightly different way. Writing $u=\omega_s \tau_1$, $v=\omega_s \tau_2$ and $T = \omega_s E$, we can express the parameters $p_i$ in terms of $(u,v)$: \begin{equation*} p_1 = \frac{v-T}{v-u} \quad\text{and}\quad p_2 = \frac{T-u}{v-u}, \end{equation*} where $u<T<v$. We consider $C$ and $S$ as functions of $(u,v)$: \begin{align} C(u,v) & = \frac{v-T}{v-u} \cos(u) + \frac{T-u}{v-u} \cos(v), \label{eq:cuv} \\ S(u,v) & = \frac{v-T}{v-u} \sin(u) + \frac{T-u}{v-u} \sin(v). \label{eq:suv} \end{align} Equation (\ref{eq:suv}) is to be maximized for $(u,v)$ along the curve $h$ implicitly defined by the level curve $C(u,v)=-a$. 
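As a numerical aside (a sketch, not part of the proof), the bracketed constant $c=0.725...$ can be recomputed: at the critical $c$ the line $1-c\theta$ is tangent to $\cos\theta$, so the tangency point solves $\cos\theta+\theta\sin\theta-1=0$ and then $c=\sin\theta$. The bracketing interval $[2,2.5]$ below is an assumption checked at runtime:

```python
import math

# Bisection for the tangency point of 1 - c*theta with cos(theta):
# g(theta) = cos(theta) + theta*sin(theta) - 1 = 0, then c = sin(theta).
def g(theta):
    return math.cos(theta) + theta * math.sin(theta) - 1.0

lo, hi = 2.0, 2.5            # assumed bracket of the nontrivial root
assert g(lo) > 0 > g(hi)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
theta_star = 0.5 * (lo + hi)
c = math.sin(theta_star)

assert abs(g(theta_star)) < 1e-12
assert abs(c - 0.725) < 5e-3   # matches the value quoted in the proof
```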
The equation $C(0,v)=-a$ has either two solutions in $v$ (counting multiplicity) or none, so the curve can be parametrized in a way that $(u(\xi),v(\xi))$ satisfies $(u(0)=0,v(0)=v_{max})$ and $(u(1)=0,v(1)=v_{min})$, with $v_{min} \leq v_{max}$. We claim that $S$ is maximized for $\xi=1$, i.e. $u=0$ and $v=v_{min}$. This is the case if $S(u(\xi),v(\xi))$ is increasing in $\xi$, that is, if the curve $h$ crosses the level curves of $S$ upward. It is clear that $S$ is a decreasing function of $v$ for fixed $u$, and an increasing function of $u$ for fixed $v$. Thus the level curves $S(u,v)=k$ can be expressed as an increasing function $v_{S,k}(u)$ such that \begin{equation*} S(u,v_{S,k}(u))=k, \end{equation*} when $k$ is in the image of $S$. Likewise, equation (\ref{eq:cuv}) can be solved locally to yield $v_{C,a}(u)$ such that \begin{equation*} C(u,v_{C,a}(u))=-a, \end{equation*} whenever $-a$ is in the image of $C$. The function $v_{C,a}(u)$ can take two values on its domain of definition. Because $S$ is decreasing in $v$, we choose the lower solution branch for $v_{C,a}(u)$. If, along that lower branch, the slope of $v_{C,a}(u)$ is larger than that of $v_{S,k}(u)$, then as $v$ decreases along the curve $h$, $S$ increases. Therefore, to show that $(0,v_{C,a}(0)=v_{min})$ maximizes $S$, we need to show that \begin{equation}\label{eq:vCvS} \frac{d v_{C,a}(u)}{du} > \frac{d v_{S,k}(u)}{du} > 0. \end{equation} It is clear that $d v_{S,k}(u)/du>0$. The pointwise derivatives of the level curves at $(u,v)$ are \begin{align*} \frac{d v_{C}(u)}{du} & = \frac{v-T}{T-u} \frac{-\cos(u)+\cos(v)+(v-u)\sin(u)}{\cos(u)-\cos(v)-(v-u)\sin(v)}, \\ \frac{d v_{S}(u)}{du} & = \frac{v-T}{T-u} \frac{-\sin(u)+\sin(v)-(v-u)\cos(u)}{\sin(u)-\sin(v)+(v-u)\cos(v)}. \end{align*} Because only the lower branch of $v_C$ is considered, we restrict to $(u,v)$ such that $dv_{C}(u) / du <+\infty$. 
This is done without loss of generality since $S$ is strictly larger on the lower branch than on the upper branch. Along the lower branch, $v_C(u)<\pi$. Inequality (\ref{eq:vCvS}) then holds if \begin{multline*} (v-u) \bigl[ 2 - 2 \bigl(\cos(u)\cos(v)+\sin(u)\sin(v)\bigr) + (v-u) \bigl(\sin(u)\cos(v)-\cos(u)\sin(v) \bigr) \bigr] > 0. \end{multline*} Notice that this inequality does not depend on $T$, which cancels out, nor on $a$, since the comparison is made pointwise, for any level curve. The inequality can be simplified and rewritten in terms of $z=v-u>0$, \begin{equation*} z \bigl[ 2 - 2 \cos(z) - z \sin(z)\bigr] > 0. \end{equation*} It can be verified that this inequality is satisfied for $z \in (0,\pi]$. Therefore, $S$ is maximized when $u=0$ and $v=v_{C,a}(0)=v_{min}$. \end{proof} \begin{figure} \includegraphics[width=0.5\linewidth]{twodelaysproof.ps} \caption{How delays are replaced to obtain the maximal value of $S^*$. In this example, $a=-0.1$. Parameters are $u=0.2$, $v=2$, $p_1=0.37$, $p_2=0.63$ and $T=1.33$. The parameters maximizing $S$ are $u^*=0$, $v^*=1.76$, $p_1^*=0.24$ and $p_2^*=0.76$.}\label{f:2d} \end{figure} \begin{theorem}\label{th:n} Let $f_n$ be a density with $n \geq 1$ discrete delays and mean $E$ satisfying inequality (\ref{eq:sc}). The density $f_n$ is asymptotically stable. \end{theorem} \begin{proof} Single delay distributions ($n=1$) are asymptotically stable by Theorem \ref{th:hayes}. We first treat the case $n=2$. Consider a density $f_2$, with $\tau_1 < \tau_{2}$. Suppose $C(\omega_s)+a=0$ for a value of $\omega_s<\omega_c$ (if not, Corollary \ref{th:cor} states that $f_2$ is stable). Note that $-a=C(\omega_s)<\cos(\omega_s E)$. Indeed, from inequality (\ref{eq:sc}) and $\omega_s \leq \omega_c = \sqrt{1-a^2}$, we have $\cos(\omega_s E) \geq \cos(\omega_c E) > -a$. 
Replace the two delays by two new delays with new weights: $\tau_1^* = 0$ and $\tau_2^* \geq 0$ the smallest delay such that the following equations are satisfied: \begin{align} p_2^* \tau_2^* & = p_1 \tau_1 + p_2 \tau_2, \\ p_1^* + p_2^* \cos(\omega_s \tau_2^*) & = p_1 \cos(\omega_s \tau_1) + p_2 \cos(\omega_s \tau_2), \\ p_1^* + p_2^* & = p_1 + p_2 \quad (=1). \end{align} Lemma \ref{th:S} ensures that there always exists a solution when $C(\omega_s) \leq \cos(\omega_s E)$. Additionally, $\tau_2^* \leq \tau_2$ and \begin{align*} S^*(\omega_s) \equiv \sum_{i=1}^2 p_i^* \sin(\omega_s \tau_i^*) & \geq \sum_{i=1}^2 p_i \sin(\omega_s \tau_i). \end{align*} That is, the starred distribution maximizes the value of $S$. Therefore, if we are able to show that distributions with a zero and a nonzero delay satisfy $S(\omega_s)<\omega_s$, then by Corollary \ref{th:cor}, all distributions with two delays are stable. Consider $f(\tau) = (1-p) \delta(\tau) + p \delta(\tau-r)$. Suppose that there is $\omega_s \leq \omega_c$ such that \[ C(\omega_s) = 1-p + p \cos(\omega_s r) = -a. \] We must show that $S(\omega_s) = p \sin(\omega_s r) < \omega_s$. Summing up the squares of the cosine and the sine, we obtain \[ p^2 = (-a+p-1)^2 + S^2(\omega_s), \] so \[ S(\omega_s) = \sqrt{p^2 - (-a+p-1)^2}. \] By assumption, the mean delay satisfies inequality (\ref{eq:sc}), \[ pr < \frac{\arccos(-a)}{\sqrt{1-a^2}}. \] Thus, \[ \omega_s = \frac{\arccos \bigl( - (a+1-p) p^{-1} \bigr)}{r} > p \sqrt{1-a^2}\frac{\arccos \bigl( - (a+1-p) p^{-1} \bigr)}{\arccos(-a)}. \] Because $(a+1-p)/p \geq a$ for $p \in (0,1]$ and $a \in (-1,1)$, we have the following inequality \[ \frac{\arccos(-a)}{\sqrt{1-a^2}} \leq \frac{\arccos \bigl( - (a+1-p) p^{-1} \bigr)}{\sqrt{1-\bigl( (a+1-p) p^{-1} \bigr)^2}}. \] Thus, \[ S(\omega_s) = \sqrt{p^2 - (-a+p-1)^2} \leq p \sqrt{1-a^2} \frac{\arccos \bigl( - (a+1-p) p^{-1} \bigr)}{\arccos(-a)} < \omega_s. \] This completes the proof for the case $n=2$. 
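As a numerical aside (a sketch, not part of the proof), the chain of identities and inequalities just derived can be checked for concrete, arbitrarily chosen values of $a$, $p$ and $r$ satisfying inequality (\ref{eq:sc}):

```python
import math

# Check of the n = 2 case: f(tau) = (1-p) delta(tau) + p delta(tau - r),
# mean E = p*r below the bound arccos(-a)/sqrt(1-a^2).  Values are arbitrary.
a, p, r = -0.9, 0.95, 1.0
E = p * r
assert E < math.acos(-a) / math.sqrt(1.0 - a * a)   # hypothesis (eq:sc)

# omega_s solves C(omega_s) = 1 - p + p*cos(omega_s*r) = -a
omega_s = math.acos(-(a + 1.0 - p) / p) / r
C = 1.0 - p + p * math.cos(omega_s * r)
S = p * math.sin(omega_s * r)

assert abs(C + a) < 1e-12                                    # C(omega_s) = -a
assert abs(S - math.sqrt(p**2 - (-a + p - 1.0)**2)) < 1e-12  # squares identity
assert S < omega_s                                           # key inequality
```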
For distributions $f_n$ with $n>2$ delays, the strategy is also to find a stable distribution that keeps $C(\omega_s)$ constant and increases $S(\omega_s)$, assuming that $C(\omega_s)+a=0$. This requires two steps. In the first step, all pairs of delays $\tau_i<\tau_j$ for which the inequality \begin{align}\label{eq:belowcos} \sum_{k \in \{i,j\}} p_k \cos(\omega_s \tau_k)\leq \cos\biggl(\omega_s \sum_{k \in \{i,j\}} p_k \tau_k\biggr), \end{align} holds are iteratively replaced by new delays $\tau_i^*=0$ and $\tau_j^*<\tau_j$, as done in Lemma \ref{th:S}. This transformation preserves $E$ and $C(\omega_s)$, and increases $S(\omega_s)$. This is repeated until there remain $m<n$ delays with $\tau_i>0$, $i=2,...,m$, such that \[ \sum_{k \in \{i,j\}} p_k \cos(\omega_s \tau_k) > \cos\biggl(\omega_s \sum_{k \in \{i,j\}} p_k \tau_k\biggr), \] for $i \neq j \in \{2,...,m\}$, and $\tau_1 = 0$. (The $\tau_i$ are not the same as in the original distribution; the $*$ have been dropped for ease of reading.) The positive delays $\tau_i>0$ satisfy \begin{align*} \sum_{i=2}^m p_i \cos(\omega_s \tau_i) > \cos\Bigl( \omega_s \sum_{i=2}^m p_i \tau_i \Bigr), \intertext{while, by assumption,} \sum_{i=1}^m p_i \cos(\omega_s \tau_i) = -a < \cos(\omega_s E). \end{align*} The second step is to replace all delays $\tau_i$, $i=2,...,m$, with the single delay $\bar\tau_2 = \sum_{i=2}^m p_i \tau_i$. We now have a 2-delay distribution with $\bar \tau_1=0$ and $\bar \tau_2 > 0$, $\bar p_1 \bar \tau_1 + \bar p_2 \bar \tau_2=E$, $\bar C(\omega_s) \leq C(\omega_s)$ and $\bar S(\omega_s)\geq S(\omega_s)$. Replace $\bar \tau_2$ by the delay $\tau_2^*<\bar \tau_2$ so that $C^*(\omega_s) = -a$, while keeping $E$ constant. The existence of $\tau_2^*$ is shown using the notation from the proof of Lemma \ref{th:S}, noting that $C(0,v)$ and $S(0,v)$ are both decreasing in $v$. This change of delay has the effect of increasing $S$: $ S^*(\omega_s) \geq \bar S(\omega_s)$. 
Therefore, we have found a pair of discrete delays $(0, \tau_2^*)$ such that $C^*(\omega_s) = C(\omega_s)$ and $\omega_s > S^*(\omega_s) \geq S(\omega_s)$. By Corollary \ref{th:cor}, $f_n$ is asymptotically stable. \end{proof} \begin{figure} \includegraphics[width=0.8\linewidth]{stabilitychart.ps} \caption{Stability chart of distributions of delay in the $(a/b,bE)$ plane. The distribution-independent stability region is to the right of the blue curve. The distribution-dependent stability region is the shaded area. All stability curves emanate from the point $(a=-b,E=1/b)$. The signs of the real roots of the characteristic equation $\lambda_0, \lambda_1$ along $a=-b$ are distribution-independent.}\label{f:chart} \end{figure} \section{Stability of a general distribution of delays}\label{s:g} Passing from the stability of distributions of discrete delays to the stability of general distributions of delays is a small step. First we need to bound the roots of the characteristic equation for general distributed delays. \begin{lem}\label{th:mu} Let $\eta_E$ be a delay distribution with mean $E$ satisfying inequality (\ref{eq:sc}). There exists a sequence $\{\eta_{n,E}\}_{n \geq 1}$ with distribution $\eta_{n,E}$ having $n$ delays, such that $\eta_{n,E}$ converges weakly to $\eta_E$. Then $\lambda$ is a root of the characteristic equation if and only if there exists a sequence of roots $\lambda_n$ for $\eta_{n,E}$ such that $\lim_{n \to \infty} \lambda_n = \lambda$. Moreover, if $\{\mu_n\}_{n \geq 1}$ is a sequence of real parts of roots of the characteristic equations, then \begin{align*} \limsup_{n \to \infty} \mu_n = \mu < 0. \end{align*} \end{lem} \begin{proof} Consider $\lambda_n = \mu_n + i \omega_n$ a root of the characteristic equation for $\eta_{n,E}$. Since $E$ satisfies inequality (\ref{eq:sc}), we have $\mu_n < 0$. 
Then \begin{align*} & \Bigl\lvert \lambda_n + a + \int_0^{\infty} e^{-\lambda_n \tau} d \eta_{E}(\tau) \Bigr\rvert \\ & = \Bigl\lvert \lambda_n + a + \int_0^{\infty} e^{-\lambda_n \tau} d [\eta_{E}(\tau)-\eta_{n,E}(\tau)] + \int_0^{\infty} e^{-\lambda_n \tau} d \eta_{n,E}(\tau) \Bigr\rvert \\ & = \Bigl\lvert \int_0^{\infty} e^{-\lambda_n \tau} d [\eta_{E}(\tau)-\eta_{n,E}(\tau)] \Bigr\rvert \to 0, \end{align*} as $n \to \infty$ by weak convergence. Thus any converging subsequence of roots converges to a root for $\eta_E$. In the same way, if $\lambda$ is a root for $\eta_E$, \begin{align*} & \Bigl\lvert \lambda + a + \int_0^{\infty} e^{-\lambda \tau} d \eta_{n,E}(\tau) \Bigr\rvert \\ & = \Bigl\lvert \lambda + a + \int_0^{\infty} e^{-\lambda \tau} d [\eta_{n,E}(\tau)-\eta_{E}(\tau)] + \int_0^{\infty} e^{-\lambda \tau} d \eta_{E}(\tau) \Bigr\rvert \\ & = \Bigl\lvert \int_0^{\infty} e^{-\lambda \tau} d [\eta_{n,E}(\tau)-\eta_{E}(\tau)] \Bigr\rvert \to 0, \end{align*} as $n \to \infty$. Convergence is guaranteed by inequality (\ref{eq:nu}). Therefore, each root $\lambda$ lies close to a corresponding root $\lambda_n$. Denote $\mu = \limsup_{n \to \infty} \mu_n$. Then $\mu$ is the real part of a root of the characteristic equation associated to $\eta_E$. Since $\mu_n<0$ for all $n$, we have $\mu \leq 0$. Suppose $\mu=0$. Without loss of generality, we can assume that all other roots have negative real parts. Then $\eta_E$ is at a Hopf point, i.e. the leading roots of the characteristic equation are purely imaginary. Consider the distribution $\eta_{\bar a,\rho}(\tau) = \eta(\tau/\rho)$ and the associated real parts $\mu_{\bar a, \rho}$, where the subscript $\bar a$ is there to emphasize the dependence of the stability on $a$. Then, by continuity, there exists $(\bar a,\rho)$ in an $\varepsilon$-neighborhood of $(a,E)$, $\varepsilon>0$, for which $\eta_{\bar a, \rho}$ is unstable, i.e. $\mu_{\bar a, \rho}>0$. 
For sufficiently small $\varepsilon>0$, inequality (\ref{eq:sc}) is still satisfied: \begin{equation*} \rho < \frac{\arccos(- \bar a)}{\sqrt{1-\bar a^2}}. \end{equation*} Additionally, $\eta_{n,\bar a,\rho}$ converges weakly to $\eta_{\bar a, \rho}$. However, because $\eta_{\bar a, \rho}$ is unstable, there exists $N>1$ such that $\eta_{n,\bar a, \rho}$ is unstable for all $n>N$, contradicting Theorem \ref{th:n}. Therefore $\mu<0$. \end{proof} \begin{theorem}\label{th:main} Let $\eta_E$ be a delay distribution with mean $E$ satisfying inequality (\ref{eq:sc}). The distribution $\eta_E$ is asymptotically stable. \end{theorem} \begin{proof} Consider the sequence of distributions with $n$ delays $\{\eta_{n,E}\}_{n \geq 1}$ where $\eta_{n,E}$ converges weakly to $\eta_E$. By Lemma \ref{th:mu}, the leading roots of the characteristic equation of $\eta_E$ have negative real parts. Therefore $\eta_E$ is asymptotically stable. \end{proof} Is there a result similar to Theorem \ref{th:main} for the most stable distribution? That is, is there a mean delay value such that all distributions having a larger mean are unstable? When $a \geq 0$, the answer is no. For instance, the exponential distribution is asymptotically stable for all mean delays, a property called unconditional stability. Other distributions are also unconditionally stable for $a\geq 0$. Anderson has shown that all distributions with smooth enough convex density functions are unconditionally stable \citep{anderson1991}, but densities do not need to be convex to be unconditionally stable. For example, the non-convex density $f(\tau)=0.5 [\delta(\tau)+\delta(\tau-2E)]$ has mean $E$ but is unconditionally stable. However, no distribution is unconditionally stable for all values of $a \in [-1,0)$, although some are for $a\geq a^*$ with $a^*>-1$ (see example below). 
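The unconditional stability of the non-convex example can be verified directly (a numerical sketch): for this density $C(\omega)=\tfrac12(1+\cos(2E\omega))\geq 0$, so $C(\omega)=-a$ with $a\geq 0$ forces $a=0$ and $2E\omega=\pi$, where $S(\omega)=\tfrac12\sin(2E\omega)=0<\omega$, and condition (ii) of Corollary \ref{th:cor} applies for every mean $E$:

```python
import math

# f(tau) = 0.5*[delta(tau) + delta(tau - 2E)] has mean E.
# At the only point where C(w) = 0, namely 2*E*w = pi, check S(w) < w.
for E in [0.5, 1.0, 7.0, 100.0]:
    w = math.pi / (2.0 * E)
    C = 0.5 * (1.0 + math.cos(2.0 * E * w))
    S = 0.5 * math.sin(2.0 * E * w)
    assert abs(C) < 1e-12   # C vanishes there
    assert S < w            # condition (ii): stability for every E
```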
From the results obtained here, we have the most complete picture of the stability of equation (\ref{eq:x}) when the only information about the distribution of delays is the mean (figure \ref{f:chart}). \begin{cor}\label{th:stab} The zero solution of equation (\ref{eq:x}) is asymptotically stable if $a>-b$ and $a \geq |b|$ or if $b>|a|$ and \begin{equation*} E < \frac{\arccos(-a/b)}{\sqrt{b^2-a^2}}. \end{equation*} The zero solution of equation (\ref{eq:x}) may be asymptotically stable (depending on the particular distribution) if $b>|a|$ and \begin{equation*} E \geq \frac{\arccos(-a/b)}{\sqrt{b^2-a^2}}. \end{equation*} The zero solution of equation (\ref{eq:x}) is unstable if $a \leq -b$. \end{cor} \section{Boundary of stability}\label{s:b} The exact boundary of the stability region in the $(a,E)$ plane can be calculated by parametrizing $\bigl(a(u),E(u)\bigr)$. Consider the distribution $\eta$. Then, at the boundary of stability, \begin{align*} 0 & = i \omega + a + \int_0^\infty e^{- i \omega \tau} d \eta(\tau/E), \\ & = i \omega + a + \int_0^\infty e^{- i \omega E \tau} d \eta(\tau), \intertext{setting $u = \omega E$,} & = i \frac{u}{E} + a + \int_0^\infty e^{- i u \tau} d \eta(\tau). \end{align*} Separating the real and imaginary parts, we obtain \begin{equation}\label{eq:par} a(u) = - C(u) \quad \text {and} \quad E(u) = \frac{u}{S(u)}, \end{equation} for $u \geq 0$. The fact that $u$ depends on $E$ is not a problem: $u \to \infty$ if and only if $E \to \infty$, and $u \to 0$ if and only if $E \to 0$. Equations (\ref{eq:par}) allow a systematic exploration of the boundary of stability in the $(a,E)$ plane. \subsection{Exponential distribution} The exponential distribution $f(\tau) = e^{- \tau}$ has normalized mean 1, and \begin{align*} C(u) = \frac{1}{1+u^2} \quad \text{and} \quad S(u) = \frac{u}{1+u^2}. \end{align*} The stability boundary is given by $E = -1/a$, for $-1 \leq a<0$. Therefore the exponential distribution is not unconditionally stable for $a<0$. 
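A short computation (a sketch) confirms the boundary: combining $a(u)=-C(u)$ and $E(u)=u/S(u)$ from (\ref{eq:par}) with the closed forms above gives $a(u)=-1/(1+u^2)$ and $E(u)=1+u^2$, i.e. the hyperbola $E=-1/a$:

```python
# Parametrized boundary (eq:par) for the exponential density f(tau)=exp(-tau):
# C(u) = 1/(1+u^2), S(u) = u/(1+u^2).
for k in range(1, 100):
    u = 0.1 * k
    a = -1.0 / (1.0 + u * u)        # a(u) = -C(u)
    E = u / (u / (1.0 + u * u))     # E(u) = u / S(u) = 1 + u^2
    assert abs(E * a + 1.0) < 1e-12  # E = -1/a
    assert -1.0 <= a < 0.0           # the boundary lives in -1 <= a < 0
```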
\subsection{Discrete delays} The exponential distribution is also not the most stable distribution. The density with a zero and a positive delay is $f(\tau)=(1-p) \delta(\tau) + p \delta(\tau-r)$, $p \in (0,1]$. After lumping the zero delay into the undelayed part, the exact stability boundary becomes \begin{equation*} E = p r = \frac{\arccos\bigl(-(a+1-p)p^{-1}\bigr)}{\sqrt{1-\bigl((a+1-p)p^{-1}\bigr)^2}}. \end{equation*} This has an asymptote at $a=2p-1$, which can be located anywhere in $(-1,1]$. In general, for a distribution with $n$ delays, \begin{align*} a(u) = -\sum_{i=1}^n p_i \cos(u \tau_i) \quad \text{and} \quad E(u) = \frac{u}{\sum_{i=1}^n p_i \sin(u \tau_i)}. \end{align*} The boundary of the stability region may consist of several branches, as for the three-delay distribution in figure \ref{f:softkernel}. \subsection{Gamma distribution}\label{s:s} As the mean $E$ is increased, distributions can revert to stability. This is the case with the second order gamma distribution (also called the strong kernel) with normalized mean 1, \begin{equation}\label{eq:sk} f(\tau) = 2^2 \tau e^{-2 \tau}. \end{equation} We have \begin{align*} C(u) = \frac{1-u^2}{\bigl(1+u^2\bigr)^2} \quad \text{and} \quad S(u) = \frac{2 u}{\bigl(1+u^2\bigr)^2}. \end{align*} The boundary of stability is given by \begin{align*} \bigl(a(u),E(u)\bigr) & = \biggl(\frac{u-1}{(1+u)^2},(1+u)^2\biggr). \end{align*} There is a largest value $\hat a = 0.1216$. For large values of $E$, $a \to 0^+$. Therefore the boundary of the stability region is not monotone; for $a \in (0,\hat a)$, $f$ first becomes unstable and then reverts to stability as the mean is increased (figure \ref{f:softkernel}). \begin{figure} \includegraphics[width=0.45\linewidth]{threedelays.eps} \includegraphics[width=0.45\linewidth]{softkernel.eps} \caption{(\emph{Left}) Stability chart of the three-delay distribution with $\tau_2 = 16 \tau_1$, $\tau_3 = 96 \tau_1$, $p_1 = 0.51$, $p_2 = 0.39$, $p_3 = 0.1$. 
(\emph{Right}) Stability chart of the second order gamma distribution, equation (\ref{eq:sk}).}\label{f:softkernel} \end{figure} \subsection*{Acknowledgements} The author thanks Fabien Crauste for helpful discussion. \bibliographystyle{plain}
https://arxiv.org/abs/0906.3359
The Hardy inequality and the heat equation in twisted tubes
We show that a twist of a three-dimensional tube of uniform cross-section yields an improved decay rate for the heat semigroup associated with the Dirichlet Laplacian in the tube. The proof employs Hardy inequalities for the Dirichlet Laplacian in twisted tubes and the method of self-similar variables and weighted Sobolev spaces for the heat equation.
\section{Introduction} It has been shown recently in~\cite{EKK} that a local twist of a straight three-di\-men\-sional tube $\Omega_0:=\mathbb{R}\times\omega$ of non-circular cross-section $\omega\subset\mathbb{R}^2$ leads to an effective repulsive interaction in the Schr\"odinger equation of a quantum particle constrained to the twisted tube~$\Omega_\theta$. More precisely, there is a Hardy-type inequality for the particle Hamiltonian modelled by the Dirichlet Laplacian~$-\Delta_D^{\Omega_\theta}$ at its threshold energy~$E_1$ if, and only if, the tube is twisted (\emph{cf}~Figure~\ref{Fig.3D_twist}). That is, the inequality \begin{equation}\label{I.Hardy} -\Delta_D^{\Omega_\theta} - E_1 \geq \varrho \end{equation} holds true, in the sense of quadratic forms in $L^2(\Omega_\theta)$, with a positive function~$\varrho$ provided that the tube is twisted, while~$\varrho$ is necessarily zero for~$\Omega_0$. Here~$E_1$ coincides with the first eigenvalue of the Dirichlet Laplacian~$-\Delta_D^{\omega}$ in the cross-section~$\omega$. \begin{figure}[h!] \begin{center} \epsfig{file=3d_straight.eps,width=0.45\textwidth} \epsfig{file=3d_twist.eps,width=0.49\textwidth} \end{center} \caption{ Untwisted and twisted tubes of elliptical cross-section. } \label{Fig.3D_twist} \end{figure} The inequality~\eqref{I.Hardy} has important consequences for conductance properties of quantum waveguides. It clearly implies the absence of bound states (\emph{i.e.}, stationary solutions to the Schr\"odinger equation) below the energy~$E_1$ even if the particle is subjected to a small attractive interaction, which can be either of potential or geometric origin (\emph{cf}~\cite{EKK} for more details). At the same time, a repulsive effect of twisting on eigenvalues embedded in the essential spectrum has been demonstrated in~\cite{Kovarik-Sacchetti_2007}. Hence, roughly speaking, the twist prevents the particle from being trapped in the waveguide. 
Additional spectral properties of twisted tubes have been studied in \cite{EKov_2005,K6-erratum,Briet-Kovarik-Raikov-Soccorsi_2008}. It is natural to ask whether the repulsive effect of twisting demonstrated in~\cite{EKK} in the quantum context has its counterpart in other areas of physics, too. The present paper gives an affirmative answer to this question for systems modelled by the diffusion equation in the tube~$\Omega_\theta$: \begin{equation}\label{I.heat} u_t - \Delta u = 0 \,, \end{equation} subject to Dirichlet boundary conditions on~$\partial\Omega_\theta$. Indeed, we show that the twist is responsible for a faster convergence of the solutions of~\eqref{I.heat} to the (zero) stable equilibrium. The second objective of the paper is to give a new (simpler and more direct) proof of the Hardy inequality~\eqref{I.Hardy} under weaker conditions than those in~\cite{EKK}. \subsection{The main result} Before stating the main result about the large time behaviour of the solutions to~\eqref{I.heat}, let us make some comments on the subtleties arising in the study of the heat equation in~$\Omega_\theta$. The specific deformation~$\Omega_\theta$ of~$\Omega_0$ via twisting we consider can be visualized as follows: instead of simply translating~$\omega$ along~$\mathbb{R}$ we also allow the (non-circular) cross-section~$\omega$ to rotate by a (non-constant) angle $x_1\mapsto\theta(x_1)$. See Figure~\ref{Fig.3D_twist} (the precise definition is postponed until Section~\ref{Sec.Pre}, \emph{cf}~Definition~\ref{definition}). We assume that the deformation is local, \emph{i.e.}, \begin{equation}\label{locally} \dot\theta \ \mbox{has compact support in $\mathbb{R}$}. \end{equation} Then the straight and twisted tubes have the same spectrum (\emph{cf}~\cite[Sec.~4]{K6}): \begin{equation}\label{spectrum} \sigma(-\Delta_D^{\Omega_\theta}) = \sigma_\mathrm{ess}(-\Delta_D^{\Omega_\theta}) = [E_1,\infty) \,. 
\end{equation} The fine difference between twisted and untwisted tubes in the spectral setting is reflected in the existence of~\eqref{I.Hardy} for the former. In view of the spectral mapping theorem, the indifference~\eqref{spectrum} transfers to the following identity for the heat semigroup: \begin{equation}\label{E.decay} \forall t \geq 0 \,, \qquad \big\| e^{\Delta_D^{\Omega_\theta} t} \big\|_{L^2(\Omega_\theta)\to L^2(\Omega_\theta)} = e^{-E_1 t} \,, \end{equation} irrespective of whether the tube~$\Omega_\theta$ is twisted or not. That is, we clearly have the exponential decay \begin{equation}\label{solution.norate} \|u(t)\|_{L^2(\Omega_\theta)} \leq e^{-E_1 t} \, \|u_0\|_{L^2(\Omega_\theta)} \end{equation} for each time $t \geq 0$ and any initial datum~$u_0$ of~\eqref{I.heat}. To capture finer differences in the time-decay of solutions, it is therefore natural to consider rather the \emph{``shifted''} semigroup \begin{equation}\label{semigroup} S(t) := e^{(\Delta_D^{\Omega_\theta}+E_1)t} \end{equation} as an operator from a \emph{subspace} of $L^2(\Omega_\theta)$ to $L^2(\Omega_\theta)$. In this paper we mainly (but not exclusively) consider the subspace of initial data given by the weighted space \begin{equation}\label{weight} L^2(\Omega_\theta,K) \qquad\mbox{with}\qquad K(x) := e^{x_1^2/4} \,, \end{equation} and study the asymptotic properties of the semigroup via the \emph{decay rate} defined by \begin{equation*} \Gamma(\Omega_\theta) := \sup \Big\{ \Gamma \left| \ \exists C_\Gamma > 0, \, \forall t \geq 0, \ \|S(t)\|_{ L^2(\Omega_\theta,K) \to L^2(\Omega_\theta) } \leq C_\Gamma \, (1+t)^{-\Gamma} \Big\} \right. \!. \end{equation*} Our main result reads as follows: \begin{Theorem}\label{Thm.rate} Let $\theta\in C^1(\mathbb{R})$ satisfy~\eqref{locally}. We have $$ \Gamma(\Omega_\theta) \begin{cases} \, = 1/4 & \mbox{if $\Omega_\theta$ is untwisted}, \\ \, \geq 3/4 & \mbox{if $\Omega_\theta$ is twisted}. 
\end{cases} $$ \end{Theorem} The statement of the theorem for solutions~$u$ of~\eqref{I.heat} in~$\Omega_\theta$ can be reformulated as follows. For every $\Gamma < \Gamma(\Omega_\theta)$, there exists a positive constant~$C_\Gamma$ such that \begin{equation}\label{solution.rate} \|u(t)\|_{L^2(\Omega_\theta)} \leq C_\Gamma \, (1+t)^{-\Gamma} \, e^{-E_1 t} \, \|u_0\|_{L^2(\Omega_\theta,K)} \end{equation} for each time $t \geq 0$ and any initial datum $u_0 \in L^2(\Omega_\theta,K)$. This should be compared with the inequality~\eqref{solution.norate}, which is sharp in the sense that it does not allow for any extra polynomial-type decay rate due to~\eqref{E.decay}. On the other hand, we see that the decay rate is at least three times better in a twisted tube provided that the initial data are restricted to the weighted space. An estimate of the type~\eqref{solution.rate} can be obtained in an untwisted tube in a less restrictive weighted space (\emph{cf}~Theorem~\ref{Thm.decay.1D}). The power~$1/4$ actually reflects the quasi-one-dimensional nature of our model. Indeed, in the whole Euclidean space one has the well-known dimensional bound \begin{equation}\label{P.decay} \forall t \geq 0 \,, \qquad \big\|e^{\Delta_D^{\mathbb{R}^d} t}\big\|_{ L^2(\mathbb{R}^d,K) \to L^2(\mathbb{R}^d) } \, \leq \, (1+t)^{-d/4} \,. \end{equation} The fact that the power~$1/4$ is optimal for untwisted tubes can be established quite easily by a ``separation of variables'' (\emph{cf}~Proposition~\ref{Prop.decay.1D}). The fine effect of twisting is then reflected in the positivity of $\Gamma(\Omega_\theta)-1/4$; in view of~\eqref{P.decay}, it can be interpreted as ``enlarging the dimension'' of the tube. \subsection{The idea of the proof} The principal idea behind the main result of Theorem~\ref{Thm.rate}, \emph{i.e.}\ the better decay rate in twisted tubes, is the positivity of the function~$\varrho$ in~\eqref{I.Hardy}. 
In fact, Hardy inequalities have already been used as an essential tool to study the asymptotic behaviour of the heat equation in other situations~\cite{Cabre-Martel_1999,Vazquez-Zuazua_2000}. However, it should be stressed that Theorem~\ref{Thm.rate} does not follow as a direct consequence of~\eqref{I.Hardy} by some energy estimates (\emph{cf}~Section~\ref{Sec.failure}), but requires important further technical developments, which we explain now. Nevertheless, overall, the main result of the paper confirms that the Hardy inequalities end up enhancing the decay rate of solutions. Let us now briefly describe our proof (given in Section~\ref{Sec.similar}) of the extra decay rate in the case of a twisted tube. \smallskip \\ \textbf{I.} First, we map the twisted tube~$\Omega_\theta$ to the straight one~$\Omega_0$ by a change of variables, and consider rather the transformed (and shifted by~$E_1$) equation \begin{equation}\label{heat.straight} u_t - (\partial_1-\dot{\theta}\,\partial_\tau)^2u - \Delta' u - E_1 u = 0 \end{equation} in~$\Omega_0$ instead of~\eqref{I.heat}. Here $-\Delta' := - \partial_2^2 - \partial_3^2$ and $\partial_\tau := x_3 \partial_2 - x_2 \partial_3$, with $x=(x_1,x_2,x_3) \in \Omega_0$, denote the ``transverse'' Laplace and angular-derivative operators, respectively.% \smallskip \\ \textbf{II.} The main ingredient in the subsequent analysis is the method of self-similar solutions developed in the whole Euclidean space by Escobedo and Kavian~\cite{Escobedo-Kavian_1987}. Writing \begin{equation}\label{SST} \tilde{u}(y_1,y_2,y_3,s) = e^{s/4} u(e^{s/2}y_1,y_2,y_3,e^s-1) \,, \end{equation} the equation~\eqref{heat.straight} is transformed to \begin{equation}\label{heat.similar} \tilde{u}_s - \mbox{$\frac{1}{2}$} \, y_1 \;\! 
\partial_1 \tilde{u} - (\partial_1-\sigma_s\,\partial_\tau)^2 \tilde{u} - e^s \, \Delta' \tilde{u} - E_1 \, e^s \, \tilde{u} - \mbox{$\frac{1}{4}$} \, \tilde{u} = 0 \end{equation} in self-similarity variables $(y,s)\in\Omega_0\times(0,\infty)$, where \begin{equation}\label{sigma} \sigma_s(y_1) := e^{s/2} \dot{\theta}(e^{s/2}y_1) \,. \end{equation} Note that~\eqref{heat.similar} is a parabolic equation with \emph{time-dependent} coefficients. This non-autonomous feature is a consequence of the non-trivial geometry we deal with and thus represents the main difficulty in our study. We note that an analogous difficulty has been encountered previously for a convection-diffusion equation in the whole space but with a variable diffusion coefficient \cite{Duro-Zuazua_1999}. \smallskip \\ \textbf{III.} We reconsider~\eqref{heat.similar} in the weighted space~\eqref{weight} and show that the associated generator then has purely discrete spectrum. Now a difference with respect to the self-similarity transformation in the whole Euclidean space is that the generator is not a symmetric operator if the tube is twisted. However, this is not a significant obstacle since only the real part of the corresponding quadratic form is relevant for subsequent energy estimates (\emph{cf}~\eqref{formal}). \smallskip \\ \textbf{IV.} Finally, we look at the asymptotic behaviour of~\eqref{heat.similar} as the self-similar time~$s$ tends to infinity. Assume that the tube is twisted. The scaling coming from the self-similarity transformation is such that the function~\eqref{sigma} converges in a distributional sense to a multiple of the delta function supported at zero as $s\to\infty$. The square of~$\sigma_s$ therefore becomes extremely singular at the section $\{0\}\times\omega$ of the tube for large times. At the same time, the prefactors~$e^s$ in~\eqref{heat.similar} diverge exactly as if the cross-section of the tube shrank to zero as $s\to\infty$. 
Taking these two simultaneous limits into account, it is to be expected that~\eqref{heat.similar} will be approximated for large times by the essentially one-dimensional problem \begin{equation}\label{heat.1D} \varphi_s - \mbox{$\frac{1}{2}$} \, y_1 \;\! \varphi_{y_1} - \varphi_{y_1y_1} - \mbox{$\frac{1}{4}$} \, \varphi = 0 \,, \qquad s \in (0,\infty), \ y_1 \in \mathbb{R} \,, \end{equation} with an extra Dirichlet boundary condition at $y_1=0$. This evolution equation is explicitly solvable in $L^2(\mathbb{R},K)$ and it is easy to see that \begin{equation}\label{heat.1D.estimate} \|\varphi\|_{L^2(\mathbb{R},K)} \leq e^{-\frac{3}{4} s} \, \|\varphi_0\|_{L^2(\mathbb{R},K)} \,, \end{equation} for any initial datum~$\varphi_0$. Here the exponential decay rate transfers to a polynomial one after returning to the original time~$t$, and the number~$3/4$ gives rise to the rate in the bound of Theorem~\ref{Thm.rate} in the twisted case. On the other hand, we get just~$1/4$ in~\eqref{heat.1D.estimate} provided that the tube is untwisted (which corresponds to imposing no extra condition at $y_1=0$). \smallskip Two comments are in order. First, we do not establish a theorem asserting that solutions of~\eqref{heat.similar} can be approximated by those of~\eqref{heat.1D} as $s \to \infty$. We only show a strong-resolvent convergence for operators related to their generators (Proposition~\ref{Prop.strong}). This is, however, sufficient to prove Theorem~\ref{Thm.rate} with the help of energy estimates. Proposition~\ref{Prop.strong} is probably the most significant auxiliary result of the paper and we believe it is interesting in its own right. Second, in the proof of Proposition~\ref{Prop.strong} we essentially use the existence of the Hardy inequality~\eqref{I.Hardy} in twisted tubes. In fact, the positivity of~$\varrho$ is directly responsible for the extra Dirichlet boundary condition of~\eqref{heat.1D}.
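The decay rates $1/4$ and $3/4$ quoted above can be traced to an explicit eigenvalue computation for the generator of~\eqref{heat.1D}. The following sketch (assuming the standard Escobedo--Kavian Gaussian weight $K(y_1)=e^{y_1^2/4}$, in which case the eigenfunctions below are the usual ones) verifies the two relevant eigenvalues symbolically:

```python
import sympy as sp

y = sp.symbols('y', real=True)

def A(f):
    """Generator of (heat.1D) without the 1/4 term: A f = -f'' - (y/2) f'."""
    return -sp.diff(f, y, 2) - (y/2)*sp.diff(f, y)

# Candidate eigenfunctions in the standard Escobedo-Kavian setting:
f0 = sp.exp(-y**2/4)      # ground state; no condition at y = 0
f1 = y*sp.exp(-y**2/4)    # lowest state vanishing at y = 0 (Dirichlet)

assert sp.simplify(A(f0) - sp.Rational(1, 2)*f0) == 0  # eigenvalue 1/2
assert sp.simplify(A(f1) - f1) == 0                    # eigenvalue 1
```

Subtracting the $1/4$ term of~\eqref{heat.1D} then gives the rate $1/2-1/4=1/4$ without the extra condition and $1-1/4=3/4$ with the Dirichlet condition at $y_1=0$, matching~\eqref{heat.1D.estimate}.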
Since the Hardy inequality holds in the Hilbert space $L^2(\Omega_0)$ (no weight), Proposition~\ref{Prop.strong} is stated for operators transformed to it from~\eqref{weight} by an obvious unitary transform. In particular, the asymptotic operator~$h_D$ of Proposition~\ref{Prop.strong} acts in a different space, $L^2(\mathbb{R})$, but it is unitarily equivalent to the generator of~\eqref{heat.1D}. \subsection{The content of the paper} The organization of this paper is as follows. In the following Section~\ref{Sec.Pre} we give a precise definition of twisted tubes~$\Omega_\theta$ and the corresponding Dirichlet Laplacian $-\Delta_D^{\Omega_\theta}$. Section~\ref{Sec.Hardy} is mainly devoted to a new proof of the Hardy inequality (Theorem~\ref{Thm.Hardy}) as announced in~\cite{K6-erratum}. We mention its consequences on the stability of the spectrum of the Laplacian (Proposition~\ref{Prop.difference}) and emphasize that the Hardy weight cannot be made arbitrarily large by increasing the twisting (Proposition~\ref{Prop.limit}). Finally, we establish there a new Sobolev-type inequality in twisted tubes (Theorem~\ref{Thm.Sobolev}). The heat equation in twisted tubes is considered in Section~\ref{Sec.heat}. Using some energy-type estimates, we prove in Theorems~\ref{Thm.decay.1D} and~\ref{Thm.decay} polynomial-type decay results for the heat semigroup as a consequence of the Sobolev and Hardy inequalities, respectively. Unfortunately, Theorem~\ref{Thm.decay} does not represent any improvement upon the 1/4-decay rate of Theorem~\ref{Thm.decay.1D} which is valid in untwisted tubes as well. The main body of the paper is therefore represented by Section~\ref{Sec.similar} where we develop the method of self-similar solutions to get the improved decay rate of Theorem~\ref{Thm.rate} as described above. Furthermore, in Section~\ref{Sec.alternative} we establish an alternative version of Theorem~\ref{Thm.rate}. 
The paper is concluded in Section~\ref{Sec.end} by referring to physical interpretations of the result and to some open problems. \section{Preliminaries}\label{Sec.Pre} In this section we introduce some basic definitions and notations we shall use throughout the paper. \subsection{The geometry of a twisted tube} Given a bounded open connected set $\omega \subset\mathbb{R}^2$, let $\Omega_0:=\mathbb{R}\times\omega$ be a straight tube of cross-section~$\omega$. We impose no regularity hypotheses on~$\omega$. Let $\theta:\mathbb{R}\to\mathbb{R}$ be a $C^1$-smooth function with bounded derivative (occasionally we will denote by the same symbol~$\theta$ the function $\theta \otimes 1$ on $\Omega_0$). We introduce another tube of the same cross-section~$\omega$ as the image $$ \Omega_\theta := \mathcal{L}_\theta(\Omega_0) \,, $$ where the mapping $\mathcal{L}_\theta:\mathbb{R}^3\to\mathbb{R}^3$ is given by \begin{equation}\label{diffeomorphism} \mathcal{L}_\theta(x) := \big( x_1, x_2\cos\theta(x_1)+x_3\sin\theta(x_1), -x_2\sin\theta(x_1)+x_3\cos\theta(x_1) \big) \,. \end{equation} \begin{Definition}[Twisted and untwisted tubes]\label{definition} We say that the tube~$\Omega_\theta$ is \emph{twisted} if the following two conditions are satisfied: \begin{enumerate} \item $\theta$ is not constant, \item $\omega$ is not rotationally symmetric with respect to the origin in~$\mathbb{R}^2$. \end{enumerate} Otherwise we say that~$\Omega_\theta$ is \emph{untwisted}. \end{Definition} Here the precise meaning of~$\omega$ being ``rotationally symmetric with respect to the origin in~$\mathbb{R}^2$'' is that, for every $\vartheta\in(0,2\pi)$, $$ \omega_\vartheta := \left\{ \big( x_2\cos\vartheta+x_3\sin\vartheta, -x_2\sin\vartheta+x_3\cos\vartheta \big) \, \big| \, (x_2,x_3)\in\omega \right\} = \omega \,, $$ with the natural convention that we identify~$\omega$ and~$\omega_\vartheta$ (and other open sets) provided that they differ on a set of zero capacity.
Hence, modulo a set of zero capacity, $\omega$~is rotationally symmetric with respect to the origin in~$\mathbb{R}^2$ if, and only if, it is a disc or an annulus centered at the origin of~$\mathbb{R}^2$. In view of this convention, any untwisted $\Omega_\theta$ can be identified with the straight tube~$\Omega_0$ by an isometry of the Euclidean space. We write $x=(x_1,x_2,x_3)$ for a point/vector in~$\mathbb{R}^3$. If~$x$ is used to denote a point in~$\Omega_0$ or~$\Omega_\theta$, we refer to~$x_1$ and $x':=(x_2,x_3)$ as ``longitudinal'' and ``transverse'' variables in the tube, respectively. It is easy to check that the mapping~$\mathcal{L}_\theta$ is injective and that its Jacobian is identically equal to~$1$. Consequently, $\mathcal{L}_\theta$ induces a (global) diffeomorphism between~$\Omega_0$ and~$\Omega_\theta$. \subsection{The Dirichlet Laplacian in a twisted tube} It follows from the last result that~$\Omega_\theta$ is an open set. The corresponding Dirichlet Laplacian in~$L^2(\Omega_\theta)$ can therefore be introduced in a standard way as the self-adjoint operator $-\Delta_D^{\Omega_\theta}$ associated with the quadratic form $$ Q_D^{\Omega_\theta}[\Psi] := \|\nabla\Psi\|_{L^2(\Omega_\theta)}^2 , \qquad \Psi \in \mathfrak{D}(Q_D^{\Omega_\theta}) := H_0^1(\Omega_\theta) \,. $$ By the representation theorem, $ -\Delta_D^{\Omega_\theta}\Psi = -\Delta\Psi $ for $ \Psi \in \mathfrak{D}(-\Delta_D^{\Omega_\theta}) := \{\Psi\in H_0^1(\Omega_\theta) \, | \, \Delta\Psi \in L^2(\Omega_\theta)\} $, where the Laplacian $\Delta\Psi$ should be understood in the distributional sense. Moreover, using the diffeomorphism induced by~$\mathcal{L}_\theta$, we can ``untwist'' the tube by expressing the Laplacian $-\Delta_D^{\Omega_\theta}$ in the curvilinear coordinates determined by~\eqref{diffeomorphism}.
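The unit-Jacobian claim can be verified symbolically. The following sketch (using sympy, purely as a sanity check and not part of the argument) computes the Jacobian determinant of~$\mathcal{L}_\theta$ for an arbitrary angle~$\theta$:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
theta = sp.Function('theta')(x1)  # arbitrary C^1 twisting angle

# The mapping (diffeomorphism):
L = sp.Matrix([
    x1,
    x2*sp.cos(theta) + x3*sp.sin(theta),
    -x2*sp.sin(theta) + x3*sp.cos(theta),
])

J = L.jacobian([x1, x2, x3])
assert sp.simplify(J.det()) == 1  # the Jacobian is identically 1
```

This confirms that $\mathcal{L}_\theta$ is volume-preserving, which is what makes the change of variables below unitary.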
More precisely, let~$U_\theta$ be the unitary transformation from $L^2(\Omega_\theta)$ to $L^2(\Omega_0)$ defined by \begin{equation}\label{unitary} U_\theta \Psi:=\Psi\circ\mathcal{L}_\theta \,. \end{equation} It is easy to check that $ H_{\theta} := U_\theta(-\Delta_D^{\Omega_\theta})U_\theta^{-1} $ is the self-adjoint operator in $L^2(\Omega_0)$ associated with the quadratic form \begin{equation}\label{form} Q_\theta[\psi] := \|\partial_1\psi-\dot{\theta}\,\partial_\tau\psi\|_{L^2(\Omega_0)}^2 + \|\nabla'\psi\|_{L^2(\Omega_0)}^2 \,, \qquad \psi \in \mathfrak{D}(Q_\theta) := H_0^1(\Omega_0) \,. \end{equation} Here $\nabla':=(\partial_2,\partial_3)$ denotes the transverse gradient and~$\partial_\tau$ is a shorthand for the transverse angular-derivative operator $$ \partial_\tau := \tau\cdot\nabla' = x_3 \partial_2 - x_2 \partial_3 \,, \qquad\mbox{where}\qquad \tau(x_2,x_3) := (x_3,-x_2) \,. $$ We have the point-wise estimate \begin{equation}\label{a-estimate} |\partial_\tau \psi| \leq a \, |\nabla' \psi| \,,\qquad\mbox{where}\qquad a := \sup_{x'\in\omega}|x'| \,. \end{equation} The sesquilinear form associated with $Q_\theta[\cdot]$ will be denoted by $Q_\theta(\cdot,\cdot)$. In the distributional sense, we can write \begin{equation}\label{H.distributional} H_{\theta} \psi = - (\partial_1-\dot{\theta}\,\partial_\tau)^2\psi - \Delta'\psi \,, \end{equation} where $-\Delta' := - \partial_2^2 - \partial_3^2$ denotes the transverse Laplacian. \section{The Hardy and Sobolev inequalities}\label{Sec.Hardy} In this section we summarize basic spectral results about the Laplacian $-\Delta_D^{\Omega_\theta}$ we shall need later to study the asymptotic behaviour of the associated semigroup. \subsection{The Poincar\'e inequality} Let~$E_1$ be the first eigenvalue of the Dirichlet Laplacian in~$\omega$. 
Using the Poincar\'e-type inequality in the cross-section \begin{equation}\label{Poincare} \|\nabla f\|_{L^2(\omega)}^2 \geq E_1 \|f\|_{L^2(\omega)}^2 \,, \qquad \forall f \in H_0^1(\omega) \,, \end{equation} and Fubini's theorem, it readily follows that $ Q_\theta[\psi] \geq E_1 \|\psi\|_{L^2(\Omega_0)}^2 $ for every $\psi \in H_0^1(\Omega_0)$. Or, equivalently, \begin{equation}\label{Poincare.tube} -\Delta_D^{\Omega_\theta} \geq E_1 \end{equation} in the form sense in $L^2(\Omega_\theta)$. Consequently, the spectrum of $-\Delta_D^{\Omega_\theta}$ does not start below~$E_1$. The result~\eqref{Poincare.tube} can be interpreted as a Poincar\'e-type inequality and it holds for any tube~$\Omega_\theta$. The inequality~\eqref{Poincare.tube} is clearly sharp for an untwisted tube, since~\eqref{spectrum} holds in that case trivially by separation of variables. In general, the spectrum of $-\Delta_D^{\Omega_\theta}$ can start strictly above~$E_1$ if the twisting is effective at infinity (\emph{cf}~\cite[Corol.~6.6]{K6-erratum}). In this paper, however, we focus on tubes for which the energy~$E_1$ coincides with the spectral threshold of $-\Delta_D^{\Omega_\theta}$. This is typically the case if the twisting vanishes at infinity (\emph{cf}~\cite[Sec.~4]{K6}). More restrictively, we assume~\eqref{locally}. Under this hypothesis, \eqref{spectrum}~holds and~\eqref{Poincare.tube} is sharp in the twisted case too. \subsection{The Poincar\'e inequality in a bounded tube} For our further purposes, it is important that a better result than~\eqref{Poincare.tube} holds in bounded tubes. Given a bounded open interval $I \subset \mathbb{R}$, let~$H_\theta^I$ be the ``restriction'' of~$H_\theta$ to the tube $I \times \omega$ determined by the conditions $ \partial_1\psi-\dot\theta\partial_\tau\psi = 0 $ on the new boundary $(\partial I) \times \omega$. 
More precisely, $H_\theta^I$~is introduced as the self-adjoint operator in $L^2(I\times\omega)$ associated with the quadratic form \begin{align*} Q_\theta^I[\psi] &:= \|\partial_1\psi-\dot{\theta}\,\partial_\tau\psi\|_{L^2(I\times\omega)}^2 + \|\nabla'\psi\|_{L^2(I\times\omega)}^2 \,, \\ \psi \in \mathfrak{D}(Q_\theta^I) &:= \left\{ \psi\!\upharpoonright\!(I\times\omega) \ | \ \psi \in H_0^1(\Omega_0) \right\} \,. \end{align*} That is, we impose no additional boundary conditions in the form setting. In contrast to~$H_\theta$, $H_\theta^I$~is an operator with compact resolvent. Let $\lambda(\dot\theta,I)$ denote the lowest eigenvalue of the shifted operator $H_\theta^I-E_1$. We have the following variational characterization: \begin{equation}\label{lambda} \lambda(\dot\theta,I) = \min_{\psi \in \mathfrak{D}(Q_{\theta}^I) \setminus\{0\}} \frac{\, Q_\theta^I[\psi] - E_1 \|\psi\|_{L^2(I\times\omega)}^2} {\|\psi\|_{L^2(I\times\omega)}^2} \,. \end{equation} As in the unbounded case, \eqref{Poincare}~yields that $\lambda(\dot\theta,I)$ is non-negative (it is zero if the tube is untwisted). However, thanks to the compactness, now we have that $H_\theta^I-E_1$ is a positive operator whenever the tube is twisted. \begin{Lemma}\label{Lem.cornerstone} Let $\theta \in C^1(\mathbb{R})$. Let $I \subset \mathbb{R}$ be a bounded open interval such that $\theta\!\upharpoonright\!I$ is not constant. Assume that~$\omega$ is not rotationally symmetric with respect to the origin in~$\mathbb{R}^2$. Then $$ \lambda(\dot\theta,I) > 0 \,. $$ \end{Lemma} \begin{proof} We proceed by contradiction and assume that $\lambda(\dot\theta,I) = 0$. Then the minimum~\eqref{lambda} is attained by a (smooth) function $\psi \in \mathfrak{D}(Q_\theta^I)$ satisfying (recall~\eqref{Poincare}) \begin{equation}\label{2.ids} \|\partial_1 \psi-\dot\theta\,\partial_\tau \psi\|_{L^2(I\times\omega)}^2 = 0 \quad\ \mbox{and}\ \quad \|\nabla'\psi\|_{L^2(I\times\omega)}^2 - E_1 \|\psi\|_{L^2(I\times\omega)}^2 = 0 \,.
\end{equation} Writing $ \psi(x) = \varphi(x_1) \mathcal{J}_1(x') + \phi(x) $, where~$\mathcal{J}_1$ is the positive eigenfunction corresponding to~$E_1$ of the Dirichlet Laplacian in $L^2(\omega)$ and $(\mathcal{J}_1,\phi(x_1,\cdot))_{L^2(\omega)}=0$ for every $x_1 \in I$, we deduce from the second equality in~\eqref{2.ids} that $\phi = 0$. The first identity is then equivalent to \begin{multline*} \|\dot\varphi\|_{L^2(I)}^2 \|\mathcal{J}_1\|_{L^2(\omega)}^2 + \|\dot\theta \;\! \varphi\|_{L^2(I)}^2 \|\partial_\tau\mathcal{J}_1\|_{L^2(\omega)}^2 \\ - 2 (\mathcal{J}_1,\partial_\tau\mathcal{J}_1)_{L^2(\omega)} \Re (\dot\varphi,\dot\theta\;\!\varphi)_{L^2(I)} = 0 \,. \end{multline*} Since $(\mathcal{J}_1,\partial_\tau\mathcal{J}_1)_{L^2(\omega)}=0$ by an integration by parts, it follows that~$\varphi$ must be constant and that $$ \|\dot{\theta}\|_{L^2(I)} = 0 \qquad\mbox{or}\qquad \|\partial_\tau\mathcal{J}_1\|_{L^2(\omega)} = 0 \,. $$ However, this is impossible under the stated assumptions because $\|\dot\theta\|_{L^2(I)}$ vanishes if and only if $\theta$~is constant on~$I$, and $\partial_\tau\mathcal{J}_1 = 0$ identically in~$\omega$ if and only if $\omega$~is rotationally symmetric with respect to the origin. \end{proof} Lemma~\ref{Lem.cornerstone} was the cornerstone of the method of~\cite{K6-erratum} to establish the existence of Hardy inequalities in twisted tubes (see also the proof of Theorem~\ref{Thm.Hardy} below). \subsection{Infinitesimally thin tubes} It is clear that $ \lambda(\dot\theta,\mathbb{R}) := \inf\sigma(H_\theta) - E_1 = 0 $ whenever~\eqref{spectrum} holds (\emph{e.g.}, if~\eqref{locally} is satisfied). It turns out that the shifted spectral threshold also diminishes in the opposite asymptotic regime, \emph{i.e.}\ when the interval $I_\epsilon:=(-\epsilon,\epsilon)$ shrinks, and this irrespective of the properties of~$\omega$ and $\dot\theta$. \begin{Proposition}\label{Prop.erratum} Let $\theta \in C^1(\mathbb{R})$.
We have $$ \lim_{\epsilon \to 0} \lambda\big(\dot\theta,I_\epsilon\big) = 0 \,. $$ \end{Proposition} \begin{proof} Let $\{\omega_k\}_{k=0}^\infty$ be an exhaustion sequence of~$\omega$, \emph{i.e.}, each~$\omega_k$ is an open set with smooth boundaries satisfying $\omega_k \Subset \omega_{k+1}$ and $\cup_{k=0}^\infty \omega_k = \omega$. Let $\mathcal{J}_1^k$ denote the first eigenfunction of the Dirichlet Laplacian in $L^2(\omega_k)$; we extend it by zero to the whole~$\mathbb{R}^2$. Finally, set $ \psi^k := (1\otimes\mathcal{J}_1^k) \circ \mathcal{L}_{\theta_0} $ with $\theta_0(x_1):=\dot\theta(0)\,x_1$, \emph{i.e.}, $$ \psi^k(x) = \mathcal{J}_1^k \left( x_2\cos(\dot\theta_0 x_1)+x_3\sin(\dot\theta_0 x_1), -x_2\sin(\dot\theta_0 x_1)+x_3\cos(\dot\theta_0 x_1) \right) \,, $$ where $\dot\theta_0 := \dot\theta(0)$. For any (large) $k \in \mathbb{N}$ there exists (small) positive~$\epsilon_k$ such that $\psi^k$ belongs to $\mathfrak{D}(Q_\theta^{I_\epsilon})$ for all $\epsilon \leq \epsilon_k$. Hence it is an admissible trial function for~\eqref{lambda}. Now, fix $k \in \mathbb{N}$ and assume that $\epsilon \leq \epsilon_k$. Then we have \begin{equation*} \|\psi^k\|_{L^2(I_\epsilon\times\omega)}^2 = |I_\epsilon| \, \|\mathcal{J}_1^k\|_{L^2(\omega_k)}^2 \,, \end{equation*} where we have used the change of variables $ y = \mathcal{L}_{\theta_0}(x) $. 
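The identity $\partial_1 \psi^k - \dot\theta\,\partial_\tau\psi^k = (\dot\theta_0 - \dot\theta)\,\partial_\tau\psi^k$, which is used in the next estimate, can be checked symbolically for a generic smooth profile~$F$ standing in for~$\mathcal{J}_1^k$ (the names below are ours, for illustration only):

```python
import sympy as sp

x1, x2, x3, a = sp.symbols('x1 x2 x3 a', real=True)  # a plays the role of theta'(0)
F = sp.Function('F')          # generic smooth profile in place of J_1^k
g = sp.Function('g')(x1)      # arbitrary derivative theta'(x1)

# psi^k = F composed with the rotation by the angle a*x1:
u = x2*sp.cos(a*x1) + x3*sp.sin(a*x1)
v = -x2*sp.sin(a*x1) + x3*sp.cos(a*x1)
psi = F(u, v)

d_tau = x3*sp.diff(psi, x2) - x2*sp.diff(psi, x3)   # angular derivative
lhs = sp.diff(psi, x1) - g*d_tau
rhs = (a - g)*d_tau
assert sp.simplify(sp.expand(lhs - rhs)) == 0
```

The cancellation is purely algebraic: both $\partial_1\psi^k$ and $\partial_\tau\psi^k$ reduce to the same combination of partial derivatives of~$F$, with $\partial_1\psi^k = \dot\theta_0\,\partial_\tau\psi^k$.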
At the same time, employing consecutively the identity $ \partial_1 \psi^k -\dot\theta\,\partial_\tau \psi^k = (\dot{\theta}_0-\dot\theta) \, \partial_\tau\psi^k $, the bound~\eqref{a-estimate}, the identity $ |\nabla'\psi^k(x)| = |\nabla \mathcal{J}_1^k(y_2,y_3)| $ and the same change of variables, we get the estimate \begin{equation*} \| \partial_1 \psi^k -\dot\theta\,\partial_\tau \psi^k \|_{L^2(I_\epsilon\times\omega)}^2 \leq \| (\dot\theta_0-\dot\theta) \|_{L^\infty(I_\epsilon)}^2 \, |I_\epsilon| \, a^2 \, \| \nabla\mathcal{J}_1^k \|_{L^2(\omega_k)}^2 \,, \end{equation*} where the supremum norm clearly tends to zero as $\epsilon \to 0$. Finally, $$ \|\nabla'\psi^k\|_{L^2(I_\epsilon\times\omega)}^2 - E_1 \|\psi^k\|_{L^2(I_\epsilon\times\omega)}^2 = (E_1^k - E_1) \, |I_\epsilon| \, \|\mathcal{J}_1^k\|_{L^2(\omega_k)}^2 \,, $$ where~$E_1^k$ denotes the first eigenvalue of the Dirichlet Laplacian in $L^2(\omega_k)$. Sending~$\epsilon$ to zero, the trial-function argument therefore yields $$ \lim_{\epsilon \to 0} \lambda(\dot\theta,I_\epsilon) \leq E_1^k - E_1 \,. $$ Since~$k$ can be made arbitrarily large and $E_1^k \to E_1$ as $k\to\infty$ by standard approximation arguments (see, \emph{e.g.}, \cite{Daners}), we conclude with the desired result. \end{proof} \begin{Remark}[An erratum to \cite{K6}] The study of the infinitesimally thin tubes played a crucial role in the proof of Hardy inequalities given in~\cite{K6}. According to Lemma~6.3 in~\cite{K6}, $\lambda\big(\dot\theta,I_\epsilon\big)$, with constant~$\dot\theta$, is independent of~$\epsilon>0$ (and therefore remains positive for a twisted tube even if $\epsilon \to 0$). However, in view of Proposition~\ref{Prop.erratum}, this is false. Consequently, Lemmata~6.3 and~6.5 and Theorem~6.6 in~\cite{K6} cannot hold. The proof of Hardy inequalities presented in~\cite{K6} is incorrect. A corrected version of the paper~\cite{K6} can be found in~\cite{K6-erratum}.
\end{Remark} \subsection{The Hardy inequality} Now we come back to unbounded tubes~$\Omega_\theta$. Although~\eqref{Poincare.tube} represents a sharp Poincar\'e-type inequality both for twisted and untwisted tubes (if~\eqref{spectrum} holds), there is a fine difference in the spectral setting. Whenever the tube~$\Omega_\theta$ is non-trivially twisted (\emph{cf}~Definition~\ref{definition}), there exists a positive function~$\varrho$ (necessarily vanishing at infinity if~\eqref{spectrum} holds) such that~\eqref{Poincare.tube} is improved to~\eqref{I.Hardy}. A variant of the Hardy inequality is represented by the following theorem: \begin{Theorem}\label{Thm.Hardy} Let $\theta \in C^1(\mathbb{R})$ and suppose that $\dot\theta$ has compact support. Then for every $\Psi \in H_0^1(\Omega_\theta)$ we have \begin{equation}\label{Hardy} \|\nabla \Psi\|_{L^2(\Omega_\theta)}^2 - E_1 \;\! \|\Psi\|_{L^2(\Omega_\theta)}^2 \ \geq\ c_H \, \|\rho \Psi\|_{L^2(\Omega_\theta)}^2 \,, \end{equation} where $\rho(x):=1/\sqrt{1+x_1^2}$ and~$c_H$ is a non-negative constant depending on~$\dot\theta$ and~$\omega$. Moreover, $c_H$~is positive if, and only if, $\Omega_\theta$ is twisted. \end{Theorem} \begin{proof} It is clear that the left hand side of~\eqref{Hardy} is non-negative due to~\eqref{Poincare.tube}. The fact that $c_H=0$ if the tube is untwisted follows from the more general result included in Proposition~\ref{Prop.difference}.2 below. We divide the proof of the converse fact (\emph{i.e.}\ that twisting implies $c_H>0$) into several steps. Recall the identification of $\Psi \in L^2(\Omega_\theta)$ with $\psi:=U_\theta\Psi \in L^2(\Omega_0)$ via~\eqref{unitary}. \smallskip \noindent \emph{1.}~Let us first assume that the interval $I := (\inf\mathop{\mathrm{supp}}\nolimits\dot\theta,\sup\mathop{\mathrm{supp}}\nolimits\dot\theta)$ is symmetric with respect to the origin of~$\mathbb{R}$.
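As a quick sanity check of the classical one-dimensional Hardy inequality invoked in the next step, one can evaluate both sides on a concrete admissible function, say $\varphi(x_1) = x_1 e^{-x_1}$ on $(0,\infty)$ extended oddly to~$\mathbb{R}$ (this example is ours, for illustration only; by symmetry it suffices to integrate over the half-line):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
phi = x*sp.exp(-x)  # phi(0) = 0, so phi is admissible

lhs = sp.integrate((phi/x)**2, (x, 0, sp.oo))            # int x^{-2} |phi|^2
rhs = 4*sp.integrate(sp.diff(phi, x)**2, (x, 0, sp.oo))  # 4 int |phi'|^2

assert lhs == sp.Rational(1, 2)
assert rhs == 1
assert lhs <= rhs  # the Hardy inequality holds with room to spare
```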
\smallskip \noindent \emph{2.}~The main ingredient in the proof is the following Hardy-type inequality for a Schr\"odinger operator in $\mathbb{R}\times\omega$ with a characteristic-function potential: \begin{equation}\label{Hardy.classical} \|\rho\psi\|_{L^2(\Omega_0)}^2 \leq 16 \, \|\partial_1\psi\|_{L^2(\Omega_0)}^2 + (2+64/|I|^2) \, \|\psi\|_{L^2(I\times\omega)}^2 \end{equation} for every $\psi \in H_0^1(\Omega_0)$. This inequality is a consequence of the classical one-dimensional Hardy inequality $ \int_\mathbb{R} x_1^{-2} |\varphi(x_1)|^2 dx_1 \leq 4 \int_\mathbb{R} |\dot\varphi(x_1)|^2 dx_1 $ valid for any $\varphi\in H_0^1(\mathbb{R}\!\setminus\!\{0\})$. Indeed, following~\cite[Sec.~3.3]{EKK}, let~$\eta$ be the Lipschitz function on~$\mathbb{R}$ defined by $\eta(x_1):=2|x_1|/|I|$ for $|x_1|\leq |I|/2$ and by~$1$ elsewhere in~$\mathbb{R}$ (we shall denote by the same symbol~$\eta$ the function $\eta \otimes 1$ on $\mathbb{R}\times\omega$). For any $\psi \in C_0^\infty(\Omega_0)$, let us write $\psi =\eta\psi+(1-\eta)\psi$, so that $(\eta\psi)(\cdot,x') \in H_0^1(\mathbb{R}\!\setminus\!\{0\})$ for every $x'\in\omega$. Then, employing Fubini's theorem, we can estimate as follows: \begin{align*} \|\rho\psi\|_{L^2(\Omega_0)}^2 & \leq 2 \int_{\Omega_0} x_1^{-2} \, |(\eta\psi)(x)|^2 \, dx + 2 \, \|(1-\eta)\psi\|_{L^2(\Omega_0)}^2 \\ & \leq 8 \, \|\partial_1(\eta\psi)\|_{L^2(\Omega_0)}^2 + 2 \, \|\psi\|_{L^2(I\times\omega)}^2 \\ & \leq 16 \, \|\eta \partial_1\psi\|_{L^2(\Omega_0)}^2 + 16 \, \|(\partial_1{\eta})\psi\|_{L^2(\Omega_0)}^2 + 2 \, \|\psi\|_{L^2(I\times\omega)}^2 \\ & \leq 16 \, \|\partial_1\psi\|_{L^2(\Omega_0)}^2 + (2+64/|I|^2) \, \|\psi\|_{L^2(I\times\omega)}^2 \,. \end{align*} By density, this result extends to all $\psi\in H_0^1(\Omega_0)=\mathfrak{D}(Q_{\theta})$. \smallskip \noindent \emph{3.}~By Lemma~\ref{Lem.cornerstone}, we have \begin{equation}\label{bound1} Q_{\theta}[\psi] - E_1 \;\! \|\psi\|_{L^2(\Omega_0)}^2 \, \geq \, Q_{\theta}^{I}[\psi] - E_1 \;\!
\|\psi\|_{L^2(I\times\omega)}^2 \, \geq \, \lambda(\dot\theta,I) \, \|\psi\|_{L^2(I\times\omega)}^2 \end{equation} for every $\psi \in \mathfrak{D}(Q_{\theta})$. Here the first inequality employs the trivial fact that the restriction to $I\times\omega$ of a function from $\mathfrak{D}(Q_{\theta})$ belongs to $\mathfrak{D}(Q_{\theta}^I)$. Under the stated hypotheses, we know from Lemma~\ref{Lem.cornerstone} that $\lambda(\dot\theta,I)$ is a positive number. \smallskip \noindent \emph{4.}~At the same time, for every $\psi \in \mathfrak{D}(Q_{\theta})$, \begin{eqnarray}\label{bound3} \lefteqn{ Q_{\theta}[\psi] - E_1 \;\! \|\psi\|_{L^2(\Omega_0)}^2 } \nonumber \\ && \geq \epsilon \, \|\partial_1 \psi\|_{L^2(\Omega_0)}^2 + \int_{\Omega_0} \left\{ \left[1-\frac{\epsilon}{1-\epsilon} \, a^2 \, \dot{\theta}^2(x_1)\right] |\nabla'\psi(x)|^2 - E_1 \, |\psi(x)|^2 \right\} dx \nonumber \\ && \geq \epsilon \, \|\partial_1 \psi\|_{L^2(\Omega_0)}^2 - \frac{\epsilon}{1-\epsilon} \, a^2 E_1 \, \|\dot\theta \psi\|_{L^2(\Omega_0)}^2 \nonumber \\ && \geq \epsilon \, \|\partial_1 \psi\|_{L^2(\Omega_0)}^2 - \frac{\epsilon}{1-\epsilon} \, \|\dot\theta\|_{L^\infty(I)}^2 \, a^2 E_1 \, \|\psi\|_{L^2(I\times\omega)}^2 \end{eqnarray} for sufficiently small positive~$\epsilon$. Here the first estimate is an elementary Cauchy-type inequality employing~\eqref{a-estimate} and valid for all $\epsilon\in(0,1)$. The second inequality in~\eqref{bound3} follows from~\eqref{Poincare} with help of Fubini's theorem provided that~$\epsilon$ is sufficiently small, namely if $\epsilon < \big(1+a^2\|\dot{\theta}\|_{L^\infty(\mathbb{R})}^2\big)^{-1}$. \smallskip \noindent \emph{5.}~Interpolating between the bounds~\eqref{bound1} and~\eqref{bound3}, and using~\eqref{Hardy.classical} in the latter, we finally arrive at \begin{multline*} Q_{\theta}[\psi] - E_1 \;\! 
\|\psi\|_{L^2(\Omega_0)}^2 \geq \frac{1}{2} \frac{\epsilon}{16} \ \|\rho\psi\|_{L^2(\Omega_0)}^2 \\ + \frac{1}{2} \left[ \lambda(\dot\theta,I) - \epsilon \, \left(\frac{1}{8}+\frac{4}{|I|^2}\right) - \frac{\epsilon}{1-\epsilon} \, \|\dot\theta\|_{L^\infty(I)}^2 \, a^2 E_1 \right] \|\psi\|_{L^2(I\times\omega)}^2 \end{multline*} for every $\psi \in \mathfrak{D}(Q_{\theta})$. It is clear that the last line on the right hand side of this inequality can be made non-negative by choosing~$\epsilon$ sufficiently small. Such an~$\epsilon$ then determines the Hardy constant $c_H' := \epsilon/32$. \smallskip \noindent \emph{6.}~The previous bound can be transferred to $L^2(\Omega_\theta)$ via~\eqref{unitary}. In general, if the centre of~$I$ is an arbitrary point $x_1^0\in\mathbb{R}$, the obtained result is equivalent to $$ \forall \Psi\in\mathfrak{D}(Q_D^{\Omega_\theta}) \,, \qquad \|\nabla \Psi\|_{L^2(\Omega_\theta)}^2 - E_1 \;\! \|\Psi\|_{L^2(\Omega_\theta)}^2 \ \geq\ c_H' \, \|\rho_{x_1^0} \;\! \Psi\|_{L^2(\Omega_\theta)}^2 \,, $$ where $\rho_{x_1^0}(x):=1/\sqrt{1+(x_1-x_1^0)^2}$. This yields~\eqref{Hardy} with $$ c_H := c_H' \min_{x_1 \in \mathbb{R}} \frac{1+x_1^2}{1+(x_1-x_1^0)^2} \,, $$ where the minimum is a positive constant depending on~$x_1^0$. \end{proof} The Hardy inequality of Theorem~\ref{Thm.Hardy} was first established in~\cite{EKK} under additional hypotheses. The present version is adopted from~\cite{K6-erratum}, where other variants of the inequality can be found, too. \subsection{The spectral stability} Theorem~\ref{Thm.Hardy} provides certain stability properties of the spectrum for twisted tubes, while the untwisted case is always unstable, in the following sense: \begin{Proposition}\label{Prop.difference} Let $V$ be the multiplication operator in $L^2(\Omega_\theta)$ by a bounded non-zero non-negative function~$v$ such that $v(x) \sim |x_1|^{-2}$ as $|x_1|\to\infty$.
Then \begin{enumerate} \item if~$\Omega_\theta$ is twisted with $\theta \in C^1(\mathbb{R})$ and $\dot\theta$ has compact support, then there exists $\varepsilon_0>0$ such that for all $\varepsilon<\varepsilon_0$, $$ \inf\sigma(-\Delta_D^{\Omega_\theta}-\varepsilon V) \geq E_1 \,; $$ \item if~$\Omega_\theta$ is untwisted then, for all $\varepsilon>0$, $$ \inf\sigma(-\Delta_D^{\Omega_\theta}-\varepsilon V) < E_1 \,. $$ \end{enumerate} \end{Proposition} \begin{proof} The first statement follows readily from one part of Theorem~\ref{Thm.Hardy}. To prove the second property (and therefore the other part of Theorem~\ref{Thm.Hardy} stating that $c_H=0$ if the tube is untwisted), it is enough to consider the case $\theta=0$ and construct a test function~$\psi$ from $H_0^1(\Omega_0)$ such that \begin{equation*} P_0[\psi] := \|\nabla \psi\|_{L^2(\Omega_0)}^2 - E_1 \|\psi\|_{L^2(\Omega_0)}^2 - \varepsilon \, \big\|v^{1/2}\psi\big\|_{L^2(\Omega_0)}^2 < 0 \end{equation*} for all positive~$\varepsilon$. For every $n\geq 1$, we define \begin{equation}\label{gef} \psi_n(x) := \varphi_n(x_1) \mathcal{J}_1(x_2,x_3) \,, \end{equation} where~$\mathcal{J}_1$ is the positive eigenfunction corresponding to~$E_1$ of the Dirichlet Laplacian in the cross-section~$\omega$, normalized to~$1$ in $L^2(\omega)$, and \begin{equation}\label{mollifier} \varphi_n(x_1) := \exp{\left(-\frac{x_1^2}{n}\right)} \,. \end{equation} In view of the separation of variables and the normalization of~$\mathcal{J}_1$, we have $$ P_0[\psi_n] = \|\dot\varphi_n\|_{L^2(\mathbb{R})}^2 - \varepsilon \, \big\|v_1^{1/2}\varphi_n\big\|_{L^2(\mathbb{R})}^2 \,, $$ where $v_1(x_1) := \| v(x_1,\cdot)^{1/2} \mathcal{J}_1\|_{L^2(\omega)}^2$. By hypothesis, $v_1 \in L^1(\mathbb{R})$ and the integral $\|v_1\|_{L^1(\mathbb{R})}$ is positive. Finally, an explicit calculation yields $\|\dot\varphi_n\|_{L^2(\mathbb{R})} \sim n^{-1/4}$. 
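The explicit calculation behind $\|\dot\varphi_n\|_{L^2(\mathbb{R})} \sim n^{-1/4}$ can be reproduced symbolically (a sketch, not part of the proof):

```python
import sympy as sp

x = sp.symbols('x', real=True)
n = sp.symbols('n', positive=True)

phi = sp.exp(-x**2/n)  # the Gaussian mollifier phi_n of the proof
norm_sq = sp.integrate(sp.diff(phi, x)**2, (x, -sp.oo, sp.oo))

# norm_sq = sqrt(pi/2) / sqrt(n), hence ||phi_n'||_{L^2} ~ n^{-1/4}:
assert sp.simplify(norm_sq*sp.sqrt(n) - sp.sqrt(sp.pi/2)) == 0
```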
By the dominated convergence theorem, we therefore have $$ P_0[\psi_n] \xrightarrow[n\to\infty]{} - \varepsilon \, \|v_1\|_{L^1(\mathbb{R})} \,. $$ Consequently, taking~$n$ sufficiently large and $\varepsilon$~positive, we can make the form $P_0[\psi_n]$ negative. \end{proof} Since the potential~$V$ is bounded and vanishes at infinity, it is easy to see that the essential spectrum is not changed, \emph{i.e.}, $ \sigma_\mathrm{ess}(-\Delta_D^{\Omega_\theta}-\varepsilon V) = [E_1,\infty) $, independently of the value of~$\varepsilon$ and irrespective of whether the tube is twisted or not. As a consequence of Proposition~\ref{Prop.difference}, we have that an arbitrarily small attractive potential $-\varepsilon V$ added to the shifted operator $-\Delta_D^{\Omega_\theta}-E_1$ in the untwisted tube would generate negative discrete eigenvalues, whereas a certain critical value of~$\varepsilon$ is needed in order to generate the negative spectrum in the twisted case. In the language of~\cite{Pinchover_2007}, the operator $-\Delta_D^{\Omega_\theta}-E_1$ is therefore subcritical (respectively critical) if~$\Omega_\theta$ is twisted (respectively untwisted). \subsection{An upper bound to the Hardy constant} Now we come back to Theorem~\ref{Thm.Hardy} and show that the Hardy weight on the right hand side of~\eqref{Hardy} cannot be made arbitrarily large by increasing~$\dot\theta$ or making the cross-section~$\omega$ more eccentric. \begin{Proposition}\label{Prop.limit} Let $\theta \in C^1(\mathbb{R})$ and suppose that $\dot\theta$ has compact support. Then $$ c_H \leq 1/2 \,, $$ where~$c_H$ is the constant of Theorem~\ref{Thm.Hardy}. \end{Proposition} \begin{proof} Recall the unitary equivalence of~$-\Delta_D^{\Omega_\theta}$ and~$H_\theta$ given by~\eqref{unitary}. We proceed by contradiction and show that the operator $H_\theta - E_1 - c \rho^2$ is not non-negative if $c>1/2$, irrespective of the properties of~$\theta$ and~$\omega$.
(Recall that~$\rho$ was initially introduced in Theorem~\ref{Thm.Hardy} as a function on~$\Omega_\theta$. In this proof, with an abuse of notation, we denote by the same symbol analogous functions on~$\Omega_0$ and~$\mathbb{R}$.) It is enough to construct a test function~$\psi$ from~$\mathfrak{D}(Q_\theta)$ such that \begin{equation*} P_\theta^c[\psi] := Q_\theta[\psi] - E_1 \|\psi\|_{L^2(\Omega_0)}^2 - c \, \|\rho\psi\|_{L^2(\Omega_0)}^2 < 0 \,. \end{equation*} As in the proof of Proposition~\ref{Prop.difference}, we use the decomposition~\eqref{gef}, but now the sequence of functions $\varphi_n$ is defined as follows: $$ \varphi_n(x_1) := \begin{cases} \frac{x_1 - b_n^1}{b_n^2-b_n^1} & \mbox{if} \quad x_1 \in [b_n^1,b_n^2) \,, \\ \frac{b_n^3 - x_1}{b_n^3-b_n^2} & \mbox{if} \quad x_1 \in [b_n^2,b_n^3) \,, \\ 0 & \mbox{otherwise} \,. \end{cases} $$ Here $\{b_n^j\}_{n\in\mathbb{N}}$, with $j=1,2,3$, are numerical sequences such that $\sup\mathop{\mathrm{supp}}\nolimits\dot\theta < b_n^1 < b_n^2 < b_n^3$ for each $n\in\mathbb{N}$ and $b_n^1 \to \infty$ as $n\to\infty$; further requirements will be imposed later on. Since~$\varphi_n$ and~$\dot\theta$ have disjoint supports, and~$\mathcal{J}_1$ is supposed to be normalized to~$1$ in~$L^2(\omega)$, it easily follows that $$ P_\theta^c[\psi_n] = \|\dot\varphi_n\|_{L^2(\mathbb{R})}^2 - c \, \|\rho\varphi_n\|_{L^2(\mathbb{R})}^2 \,. $$ Note that the right hand side is independent of~$\theta$ and~$\omega$. An explicit calculation yields \begin{align*} \|\dot\varphi_n\|_{L^2(\mathbb{R})}^2 =\ & \frac{1}{b_n^2-b_n^1} + \frac{1}{b_n^3-b_n^2} \,, \\ \|\rho\varphi_n\|_{L^2(\mathbb{R})}^2 =\ & \frac{ b_n^2-b_n^1 + [(b_n^1)^2-1](\arctan b_n^2-\arctan b_n^1) -b_n^1 \log\frac{1+(b_n^2)^2}{1+(b_n^1)^2}} {(b_n^2-b_n^1)^2} \\ & + \frac{ b_n^3-b_n^2 + [(b_n^3)^2-1](\arctan b_n^3-\arctan b_n^2) -b_n^3 \log\frac{1+(b_n^3)^2}{1+(b_n^2)^2}} {(b_n^3-b_n^2)^2} \,. 
\end{align*} Specifying the numerical sequences in such a way that also the quotients $b_n^2/b_n^1$ and $b_n^3/b_n^2$ tend to infinity as $n\to\infty$, it is then straightforward to check that $$ b_n^2 \, P_\theta^c[\psi_n] \xrightarrow[n\to\infty]{} 1 - 2 c \,. $$ Since the limit is negative for $c>1/2$, it follows that $P_\theta^c[\psi_n]$ can be made negative by choosing~$n$ sufficiently large. \end{proof} The proposition shows that the effect of twisting is limited in its nature, at least if~\eqref{locally} holds. This will have important consequences for the usage of energy methods when studying the heat semigroup below. \subsection{The Sobolev inequality} Regardless of whether the tube is twisted or not, the operator $-\Delta_D^{\Omega_\theta}-E_1$ satisfies the following Sobolev-type inequality. \begin{Theorem}[Sobolev inequality]\label{Thm.Sobolev} Let $\theta \in C^1(\mathbb{R})$ and suppose that $\dot\theta$ has compact support. Then for every $\Psi \in H_0^1(\Omega_\theta) \cap L^2(\Omega_\theta,\rho^{-2})$ we have \begin{equation}\label{Sobolev} \|\nabla \Psi\|_{L^2(\Omega_\theta)}^2 - E_1 \|\Psi\|_{L^2(\Omega_\theta)}^2 \ \geq\ c_S \, \frac{\ \|\Psi\|_{L^2(\Omega_\theta)}^6}{\|\Psi\|_1^4} \,, \end{equation} where $ \|\Psi\|_1 := \sqrt{\int_\omega dx_2 dx_3 \left(\int_\mathbb{R} dx_1 |(\Psi\circ\mathcal{L}_\theta)(x)|\right)^2} $ and~$c_S$ is a positive constant depending on~$\dot\theta$ and~$\omega$. \end{Theorem} \begin{proof} Recall that $\Psi\circ\mathcal{L}_\theta = U_\theta\Psi =:\psi$ belongs to $L^2(\Omega_0)$. First of all, let us notice that $\|\Psi\|_1$ is well defined for $\Psi \in L^2(\Omega_\theta,\rho^{-2})$. Indeed, the Schwarz inequality together with Fubini's theorem yields \begin{equation}\label{1norm} \|\Psi\|_1^2 \leq \|\rho^{-1}\psi\|_{L^2(\Omega_0)}^2 \int_\mathbb{R} \frac{dx_1}{1+x_1^2} = \|\rho^{-1}\Psi\|_{L^2(\Omega_\theta)}^2 \, \pi < \infty \,. 
\end{equation} Here the equality of the norms is obvious from the facts that the mapping $\mathcal{L}_\theta$ leaves invariant the first coordinate in~$\mathbb{R}^3$ and that its Jacobian is one. We also remark that, by density, it is enough to prove the theorem for $\Psi \in C_0^\infty(\Omega_\theta)$. The inequality~\eqref{Sobolev} is a consequence of the one-dimensional inequality \begin{equation}\label{Sobolev.1D} \forall \varphi \in H^1(\mathbb{R}) \cap L^1(\mathbb{R}) \,, \qquad \|\dot\varphi\|_{L^2(\mathbb{R})}^2 \geq \frac{1}{4} \, \frac{\|\varphi\|_{L^2(\mathbb{R})}^6}{\|\varphi\|_{L^1(\mathbb{R})}^4} \,, \end{equation} which is established quite easily by combining elementary estimates $$ \|\varphi\|_{L^2(\mathbb{R})}^2 \leq \|\varphi\|_{L^1(\mathbb{R})} \|\varphi\|_{L^\infty(\mathbb{R})} \qquad\mbox{and}\qquad \|\varphi\|_{L^\infty(\mathbb{R})}^2 \leq 2 \, \|\varphi\|_{L^2(\mathbb{R})} \|\dot\varphi\|_{L^2(\mathbb{R})} \,. $$ In order to apply~\eqref{Sobolev.1D}, we need to estimate the left hand side of~\eqref{Sobolev} from below by $\|\partial_1\psi\|_{L^2(\Omega_0)}^2$. We can proceed as in the proof of Theorem~\ref{Thm.Hardy}. Interpolating between the bounds~\eqref{bound1} and~\eqref{bound3}, we get \begin{equation*} \|\nabla\Psi\|_{L^2(\Omega_\theta)}^2 - E_1 \|\Psi\|_{L^2(\Omega_\theta)}^2 \geq \frac{\epsilon}{2} \, \|\partial_1\psi\|_{L^2(\Omega_0)}^2 \,, \end{equation*} where $\epsilon=:8 \, c_S$ is a positive constant depending on~$\dot\theta$ and~$\omega$. Using now~\eqref{Sobolev.1D} with help of Fubini's theorem, we conclude the proof with $$ \|\partial_1\psi\|_{L^2(\Omega_0)}^2 \geq \frac{1}{4} \int_\omega \frac{\|\psi(\cdot,x_2,x_3)\|_{L^2(\mathbb{R})}^6} {\|\psi(\cdot,x_2,x_3)\|_{L^1(\mathbb{R})}^4} \, dx_2 dx_3 \geq \frac{1}{4} \frac{\ \|\Psi\|_{L^2(\Omega_\theta)}^6}{\|\Psi\|_1^4} \,. 
$$ Here the second inequality follows by the H\"older inequality with properly chosen conjugate exponents (recall also that $ \|\psi\|_{L^2(\Omega_0)}=\|\Psi\|_{L^2(\Omega_\theta)} $). \end{proof} \section{The energy estimates}\label{Sec.heat} \subsection{The heat equation} Having the replacement $ u(x,t) \mapsto e^{-E_1 t} \, u(x,t) $ for~\eqref{I.heat} in mind, let us consider the following $t$-time evolution problem in the tube~$\Omega_\theta$: \begin{equation}\label{heat} \left\{ \begin{aligned} u_t - \Delta u - E_1 u &= 0 &\quad\mbox{in} & \quad \Omega_\theta\times(0,\infty) \,, \\ u &= u_0 &\quad\mbox{in} & \quad \Omega_\theta\times\{0\} \,, \\ u &= 0 &\quad\mbox{in} & \quad (\partial\Omega_\theta)\times(0,\infty) \,, \end{aligned} \right. \end{equation} where $u_0 \in L^2(\Omega_\theta)$. As usual, we consider the weak formulation of the problem, \emph{i.e.}, we say a Hilbert space-valued function $ u \in L^2_\mathrm{loc}\big((0,\infty);H_0^1(\Omega_\theta)\big) $, with the weak derivative $ u' \in L^2_\mathrm{loc}\big((0,\infty);H^{-1}(\Omega_\theta)\big) $, is a (global) solution of~\eqref{heat} provided that \begin{equation}\label{heat.weak} \big\langle v,u'(t)\big\rangle + \big(\nabla v,\nabla u(t)\big)_{L^2(\Omega_\theta)} - E_1 \, \big(v,u(t)\big)_{L^2(\Omega_\theta)} = 0 \end{equation} for each $v \in H_0^1(\Omega_\theta)$ and a.e. $t\in[0,\infty)$, and $u(0)=u_0$. Here $\langle\cdot,\cdot\rangle$ denotes the pairing of $H_0^1(\Omega_\theta)$ and $H^{-1}(\Omega_\theta)$. With an abuse of notation, we denote by the same symbol~$u$ both the function on $\Omega_\theta\times(0,\infty)$ and the mapping $(0,\infty) \to H_0^1(\Omega_\theta)$. Standard semigroup theory implies that there indeed exists a unique solution of~\eqref{heat} that belongs to $C^0\big([0,\infty),L^2(\Omega_\theta)\big)$. More precisely, the solution is given by $u(t) = S(t) u_0$, where~$S(t)$ is the heat semigroup~\eqref{semigroup} associated with $-\Delta_D^{\Omega_\theta}-E_1$. 
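Before analysing the decay, it may be worth recording a quick numerical sanity check of the one-dimensional Sobolev inequality~\eqref{Sobolev.1D}, on which Theorem~\ref{Thm.Sobolev} rests. The following sketch (outside the formal development; the two test profiles are arbitrary choices) verifies $4\,\|\dot\varphi\|_{L^2}^2\,\|\varphi\|_{L^1}^4 \geq \|\varphi\|_{L^2}^6$ by the trapezoid rule:

```python
import math

def sobolev_gap(phi, dphi, a=-40.0, b=40.0, n=20001):
    """Trapezoid-rule value of 4*||phi'||_2^2 * ||phi||_1^4 - ||phi||_2^6,
    which should be non-negative if the 1D Sobolev inequality holds."""
    h = (b - a) / (n - 1)
    xs = [a + i * h for i in range(n)]
    def trap(f):
        vals = [f(x) for x in xs]
        return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    l1 = trap(lambda x: abs(phi(x)))
    l2sq = trap(lambda x: phi(x) ** 2)
    dsq = trap(lambda x: dphi(x) ** 2)
    return 4.0 * dsq * l1 ** 4 - l2sq ** 3

# Gaussian profile phi(x) = exp(-x^2/2)
gap_gauss = sobolev_gap(lambda x: math.exp(-x * x / 2),
                        lambda x: -x * math.exp(-x * x / 2))
# Cauchy-type profile phi(x) = 1/(1+x^2)
gap_cauchy = sobolev_gap(lambda x: 1.0 / (1.0 + x * x),
                         lambda x: -2.0 * x / (1.0 + x * x) ** 2)
assert gap_gauss > 0.0 and gap_cauchy > 0.0
```

Both gaps come out strictly positive, consistent with~\eqref{Sobolev.1D}; equality is of course not attained by these particular profiles.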
By the Beurling-Deny criterion, $S(t)$~is positivity-preserving for all~$t \geq 0$. Since~$E_1$ corresponds to the threshold of the spectrum of $-\Delta_D^{\Omega_\theta}$ if~\eqref{locally} holds, we cannot expect a uniform decay of solutions of~\eqref{heat} as $t\to\infty$ in this case. More precisely, the spectral mapping theorem together with~\eqref{spectrum} yields: \begin{Proposition}\label{Prop.nodecay} Let $\theta \in C^1(\mathbb{R})$ and suppose that~$\dot\theta$ has compact support. Then for each time $t \geq 0$ we have $$ \|S(t)\|_{L^2(\Omega_\theta)\to L^2(\Omega_\theta)} \, = \, 1 \,. $$ \end{Proposition} \noindent Consequently, for each $t>0$ and each $\varepsilon\in(0,1)$ we can find an initial datum $u_0 \in H_0^1(\Omega_\theta)$ such that $\|u_0\|_{L^2(\Omega_\theta)}=1$ and such that the solution of~\eqref{heat} satisfies $$ \|u(t)\|_{L^2(\Omega_\theta)} \ \geq \ 1-\varepsilon \,. $$ \subsection{The dimensional decay rate} However, if we restrict ourselves to initial data decaying sufficiently fast at the infinity of the tube, it is possible to obtain a polynomial decay rate for the solutions of~\eqref{heat}. In particular, we have the following result based on Theorem~\ref{Thm.Sobolev}: \begin{Theorem}\label{Thm.decay.1D} Let $\theta \in C^1(\mathbb{R})$ and suppose that~$\dot\theta$ has compact support. Then for each time $t \geq 0$ we have $$ \|S(t)\|_{ L^2(\Omega_\theta,\rho^{-2}) \to L^2(\Omega_\theta) } \, \leq \, \left( 1 + \frac{4 \, c_S}{\pi^2} \, t \right)^{-1/4} \,, $$ where~$c_S$ is the positive constant of Theorem~\ref{Thm.Sobolev} and~$\rho$ is introduced in Theorem~\ref{Thm.Hardy}.
\end{Theorem} \begin{proof} The statement is equivalent to the following bound for the solution~$u$ of~\eqref{heat}: \begin{equation}\label{dispersive} \forall t\in[0,\infty) \,, \qquad \|u(t)\|_{L^2(\Omega_\theta)} \ \leq \ \|\rho^{-1}u_0\|_{L^2(\Omega_\theta)} \left( 1 + \frac{4 \, c_S}{\pi^2} \, t \right)^{-1/4} \,, \end{equation} where $ u_0\in L^2(\Omega_\theta,\rho^{-2}) $ is any non-trivial datum. It is easy to see that the real and imaginary parts of the solution of~\eqref{heat} evolve separately. Furthermore, since~$S(t)$ is positivity-preserving, given a non-negative datum~$u_0$, the solution~$u(t)$ remains non-negative for all $t \geq 0$. Consequently, establishing the bound for positive and negative parts of~$u(t)$ separately, it is enough to prove~\eqref{dispersive} for non-negative (and non-trivial) initial data only. Without loss of generality, we therefore assume in the proof below that $u(t) \geq 0$ for all $t \geq 0$. Let $\{\varphi_n\}_{n=1}^\infty$ be the family of mollifiers on~$\mathbb{R}$ given by~\eqref{mollifier}; we denote by the same symbol the functions $\varphi_n \otimes 1$ on $\mathbb{R}\times\mathbb{R}^2 \supset \Omega_\theta$. Inserting the trial function $$ v_n(x;t) := \varphi_n(x_1) \, \bar{u}_n(x_2,x_3;t) \,, \qquad \bar{u}_n(x_2,x_3;t) := \big\|\varphi_n u(\cdot,x_2,x_3;t)\big\|_{L^1(\mathbb{R})} \,, $$ into~\eqref{heat.weak}, we arrive at (recall the definition of $\|\cdot\|_1$ from Theorem~\ref{Thm.Sobolev}) \begin{align*} \frac{1}{2} \frac{d}{dt} \|\varphi_n u(t)\|_1^2 &= - \|\nabla\bar{u}_n(t)\|_{L^2(\omega)}^2 + E_1 \|\bar{u}_n(t)\|_{L^2(\omega)}^2 - \big(\partial_1 v_n(t),\partial_1 u(t)\big)_{L^2(\Omega_\theta)} \\ &\leq - \big(\partial_1 v_n(t),\partial_1 u(t)\big)_{L^2(\Omega_\theta)} \\ &\leq \|\partial_1 v_n(t)\|_{L^2(\Omega_\theta)} \|\nabla u(t)\|_{L^2(\Omega_\theta)} \,. \end{align*} Here the first inequality is due to the Poincar\'e-type inequality in the cross-section~\eqref{Poincare}, and the second one to the Schwarz inequality.
We clearly have $$ \|\partial_1 v_n(t)\|_{L^2(\Omega_\theta)} = \|\dot\varphi_n\|_{L^2(\mathbb{R})} \, \|\bar{u}_n(t)\|_{L^2(\omega)} = \|\dot\varphi_n\|_{L^2(\mathbb{R})} \, \|\varphi_n u(t)\|_1 \,. $$ Integrating the differential inequality, we therefore get $$ \|\varphi_n u(t)\|_1 -\|\varphi_n u_0\|_1 \leq \|\dot\varphi_n\|_{L^2(\mathbb{R})} \int_0^t \|\nabla u(t')\|_{L^2(\Omega_\theta)} \, dt' \,. $$ Since $\|\dot\varphi_n\|_{L^2(\mathbb{R})} \to 0$ and $\{\varphi_n\}_{n=1}^\infty$ is an increasing sequence of functions converging pointwise to~$1$ as $n\to \infty$, we conclude from this inequality that \begin{equation}\label{mass} \forall t \in [0,\infty) \,, \qquad \|u(t)\|_1 \leq \|u_0\|_1 \,, \end{equation} where $\|u_0\|_1$ is finite due to~\eqref{1norm}. Now, substituting~$u$ for the trial function~$v$ in~\eqref{heat.weak}, applying Theorem~\ref{Thm.Sobolev} and using~\eqref{mass}, we get \begin{align*} \frac{1}{2} \frac{d}{dt} \|u(t)\|_{L^2(\Omega_\theta)}^2 &= - \Big(\|\nabla u(t)\|_{L^2(\Omega_\theta)}^2 - E_1 \|u(t)\|_{L^2(\Omega_\theta)}^2\Big) \\ &\leq - c_S \, \frac{\ \|u(t)\|_{L^2(\Omega_\theta)}^6}{\, \|u(t)\|_1^4} \\ &\leq - c_S \, \frac{\ \|u(t)\|_{L^2(\Omega_\theta)}^6}{\, \|u_0\|_1^4} \,. \end{align*} An integration of this differential inequality leads to \begin{align*} \forall t\in[0,\infty) \,, \qquad \|u(t)\|_{L^2(\Omega_\theta)} \ \leq \ \|u_0\|_{L^2(\Omega_\theta)} \left( 1+4 \, c_S \, \frac{\|u_0\|_{L^2(\Omega_\theta)}^4}{\|u_0\|_1^4} \ t \right)^{-1/4} .
\end{align*} Dividing the last inequality by $\|\rho^{-1} u_0\|_{L^2(\Omega_\theta)}$ and replacing $\|u_0\|_1$ with $\|\rho^{-1} u_0\|_{L^2(\Omega_\theta)}$ using~\eqref{1norm}, we get \begin{align*} \frac{\|u(t)\|_{L^2(\Omega_\theta)}}{\|\rho^{-1} u_0\|_{L^2(\Omega_\theta)}} \ \leq \ \xi \left( 1+ \frac{4 \, c_S}{\pi^2} \, \xi^4 \ t \right)^{-1/4} \leq \left( 1+ \frac{4 \, c_S}{\pi^2} \ t \right)^{-1/4} , \end{align*} where $ \xi := \|u_0\|_{L^2(\Omega_\theta)} / \|\rho^{-1} u_0\|_{L^2(\Omega_\theta)} \in (0,1) $. This establishes~\eqref{dispersive}. \end{proof} As a direct consequence of the theorem, we get: \begin{Corollary}\label{Corol.decay.1D} Under the hypotheses of Theorem~\ref{Thm.decay.1D}, $\Gamma(\Omega_\theta) \geq 1/4$. \end{Corollary} \begin{proof} It is enough to realize that $L^2(\Omega_\theta,K)$ is embedded in $L^2(\Omega_\theta,\rho^{-2})$. \end{proof} The following proposition shows that the decay rate of Theorem~\ref{Thm.decay.1D} is optimal for untwisted tubes. \begin{Proposition}\label{Prop.decay.1D} Let $\Omega_\theta$ be untwisted. Then for each time $t \geq 0$ we have $$ \|S(t)\|_{ L^2(\Omega_\theta,K) \to L^2(\Omega_\theta) } \, \geq \, \frac{1}{\sqrt{2}} \, \left( 1+t \right)^{-1/4} \,. $$ \end{Proposition} \begin{proof} Without loss of generality, we may assume $\theta = 0$. It is enough to find an initial datum $u_0 \in L^2(\Omega_0,K)$ such that the solution~$u$ of~\eqref{heat} satisfies \begin{equation}\label{norate} \forall t\in[0,\infty) \,, \qquad \frac{\|u(t)\|_{L^2(\Omega_0)}}{\|u_0\|_{L^2(\Omega_0,K)}} \ \geq \ \frac{1}{\sqrt{2}} \, \left( 1+t \right)^{-1/4} \,. \end{equation} The idea is to take $u_0 := \psi_n$, where $\{\psi_n\}_{n=1}^\infty$ is the sequence~\eqref{gef} approximating a generalized eigenfunction of $-\Delta_D^{\Omega_0}$ corresponding to the threshold energy~$E_1$. 
Using the fact that~$\Omega_0$ is a cross-product of~$\mathbb{R}$ and~$\omega$, \eqref{heat}~can be solved explicitly in terms of an expansion into the eigenbasis of the Dirichlet Laplacian in the cross-section and a partial Fourier transform in the longitudinal variable. In particular, for our initial data we get $$ \|u(t)\|_{L^2(\Omega_0)}^2 = \int_\mathbb{R} |\hat{\varphi}_n(\xi)|^2 \, \exp{(-2 \xi^2 t)} \, d\xi = \sqrt{\frac{n}{n+4t}} \sqrt{\frac{\pi n}{2}} \,, $$ where the second equality is a result of an explicit calculation enabled by the special form of~$\varphi_n$ given by~\eqref{mollifier}. At the same time, for every $n<8$, the function $\psi_n$~belongs to $L^2(\Omega_0,K)$ and an explicit calculation yields $$ \|u_0\|_{L^2(\Omega_0,K)}^2 = 2 \, \sqrt{\frac{\pi n}{8-n}} \,. $$ For the special choice $n=6$ we get that the left hand side of~\eqref{norate} actually equals the right hand side with~$t$ being replaced by $2t/3 < t$. \end{proof} The power~$1/4$ in the decay bounds of Theorem~\ref{Thm.decay.1D} and Proposition~\ref{Prop.decay.1D} reflects the quasi-one-dimensional nature of~$\Omega_\theta$ (\emph{cf}~\eqref{P.decay}), at least if the tube is untwisted. More precisely, Proposition~\ref{Prop.decay.1D} readily implies that the inequality of Corollary~\ref{Corol.decay.1D} is sharp for untwisted tubes. \begin{Corollary}\label{Corol.norate} Let $\Omega_\theta$ be untwisted. Then $\Gamma(\Omega_\theta) = 1/4$. \end{Corollary} This result establishes one part of Theorem~\ref{Thm.rate}. The much more difficult part is to show that the decay rate is improved whenever the tube is twisted. \subsection{The failure of the energy method}\label{Sec.failure} As a consequence of combining direct energy arguments with Theorem~\ref{Thm.Hardy}, we get the following result. In Remark~\ref{Rem.useless} below we explain why it is useless. \begin{Theorem}\label{Thm.decay} Let $\Omega_\theta$ be twisted with $\theta \in C^1(\mathbb{R})$.
Suppose that~$\dot\theta$ has compact support. Then for each time $t \geq 0$ we have \begin{equation}\label{decay} \|S(t)\|_{ L^2(\Omega_\theta,\rho^{-2}) \to L^2(\Omega_\theta) } \, \leq \, \left( 1+2 \, t \right)^{\!-\min\{1/2,c_H/2\}} \ \,, \end{equation} where~$c_H$ is the positive constant of Theorem~\ref{Thm.Hardy}. \end{Theorem} \begin{proof} For any positive integer~$n$ and $x\in\Omega_\theta$, let us set $ \rho_n(x) := \max\{\rho(x),n^{-1}\} $. Then $\{\rho_n^{-1}\}_{n=1}^\infty$ is a non-decreasing sequence of bounded functions converging pointwise to~$\rho^{-1}$ as $n \to \infty$. Recalling the definition of~$\rho$ from Theorem~\ref{Thm.Hardy}, it is clear that $x \mapsto \rho_n(x)$ is in fact independent of the transverse variables~$x'$. Moreover, $\rho_n^{-\gamma} u$ belongs to $H_0^1(\Omega_\theta)$ for every $\gamma\in\mathbb{R}$ provided $u \in H_0^1(\Omega_\theta)$. Choosing $v:=\rho_n^{-2} u$ in~\eqref{heat.weak} (and possibly combining with the conjugate version of the equation if we allow non-real initial data), we get the identity \begin{equation}\label{ineq1.5} \frac{1}{2} \frac{d}{dt} \|\rho_n^{-1}u(t)\|^2 = -\|\rho_n^{-1}\nabla u(t)\|^2 + E_1 \|\rho_n^{-1}u(t)\|^2 - \Re \Big(u(t)\nabla\rho_n^{-2},\nabla u(t)\Big) \,. \end{equation} Here and in the rest of the proof, $\|\cdot\|$ and $(\cdot,\cdot)$ denote the norm and inner product in $L^2(\Omega_\theta)$ (we suppress the subscripts in this proof). Since~$\rho_n$ depends on the first variable only, we clearly have $ \nabla(\rho_n^{-2})=(-2\rho_n^{-3}\partial_1\rho_n,0,0) $.
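The substitution of the explicit expression for~$\rho$ carried out next rests on the pointwise identity $(\partial_1\rho/\rho)^2 = \rho^2 - \rho^4$ for $\rho(x) = (1+x_1^2)^{-1/2}$ (the form of~$\rho$ consistent with the computation in~\eqref{1norm}). A quick numerical sketch, outside the formal development, confirming this identity:

```python
# Pointwise check of (rho'/rho)^2 == rho^2 - rho^4 for rho(x) = (1+x^2)^(-1/2).
def rho(x):
    return (1.0 + x * x) ** -0.5

def drho(x):
    # Derivative computed by hand: rho'(x) = -x*(1+x^2)^(-3/2).
    return -x * (1.0 + x * x) ** -1.5

for x in [-10.0, -1.0, -0.3, 0.0, 0.5, 2.0, 100.0]:
    lhs = (drho(x) / rho(x)) ** 2
    rhs = rho(x) ** 2 - rho(x) ** 4
    assert abs(lhs - rhs) < 1e-12 * max(1.0, abs(rhs))
```

Indeed, $\partial_1\rho/\rho = -x_1/(1+x_1^2)$, whose square equals $x_1^2/(1+x_1^2)^2 = \rho^2-\rho^4$.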
Introducing an auxiliary function $v_n(t) := \rho_n^{-1} u(t)$, one finds \begin{equation* \begin{aligned} \|\rho_n^{-1}\nabla u(t)\|^2 = \|\nabla v_n(t)\|^2 + \|(\partial_1\rho_n/\rho_n) \, v_n(t)\|^2 +2 \Re \Big( v_n(t), (\partial_1\rho_n/\rho_n) \, \partial_1 v_n(t) \Big) \,, \\ \Re\Big(u(t)\nabla\rho_n^{-2},\nabla u(t)\Big) = -2 \|(\partial_1\rho_n/\rho_n) \, v_n(t)\|^2 -2 \Re \Big( v_n(t),(\partial_1\rho_n/\rho_n) \, \partial_1 v_n(t) \Big) \,. \end{aligned} \end{equation*} Combining these two identities and substituting the explicit expression for~$\rho$, we see that the right hand side of~\eqref{ineq1.5} equals \begin{eqnarray}\label{explicit} \lefteqn{-\|\nabla v_n(t)\|^2 + E_1 \|v_n(t)\|^2 + \|(\partial_1\rho_n/\rho_n) \, v_n(t)\|^2} \nonumber \\ &&= -\|\nabla v_n(t)\|^2 + E_1 \|v_n(t)\|^2 + \|\chi_n \rho v_n(t)\|^2 - \|\chi_n \rho^2 v_n(t)\|^2 \\ &&\leq (1-c_H) \, \|\chi_n \rho v_n(t)\|^2 - \|\chi_n \rho^2 v_n(t)\|^2 \,. \nonumber \end{eqnarray} Here~$\chi_n$ denotes the characteristic function of the set $\Omega_\theta^n:=\Omega_\theta \cap \{\mathop{\mathrm{supp}}\nolimits(\partial_1\rho_n)\}$, and the inequality follows from Theorem~\ref{Thm.Hardy} and an obvious inclusion $\Omega_\theta^n \subset \Omega_\theta$. Substituting back the solution~$u(t)$, we finally arrive at \begin{align}\label{ineq2} \frac{1}{2} \frac{d}{dt} \|\rho_n^{-1}u(t)\|^2 &\leq (1-c_H) \, \|\chi_n \rho v_n(t)\|^2 - \|\chi_n \rho^2 v_n(t)\|^2 \nonumber \\ &\leq (1-c_H) \, \|\chi_n \rho v_n(t)\|^2 \,. \end{align} Now, using the monotone convergence theorem and recalling the initial data to which we restrict in the hypotheses of the theorem, the last estimate implies that $u(t)$ belongs to $L^2(\Omega_\theta,\rho^{-2})$ and that it remains true after passing to the limit $n\to\infty$, \emph{i.e.}, \begin{equation}\label{ineq2.bis} \frac{1}{2} \frac{d}{dt} \|\rho^{-1}u(t)\|^2 \leq (1-c_H) \, \|u(t)\|^2 \,. 
\end{equation} At the same time, we have \begin{align}\label{ineq1} \frac{1}{2} \frac{d}{dt} \|u(t)\|^2 &= - \Big(\|\nabla u(t)\|^2 - E_1 \|u(t)\|^2\Big) \nonumber \\ &\leq - c_H \, \|\rho u(t)\|^2 \nonumber \\ &\leq - c_H \, \frac{\|u(t)\|^4}{\, \|\rho^{-1}u(t)\|^2} \,, \end{align} where the equality follows from~\eqref{heat}, the first inequality follows from Theorem~\ref{Thm.Hardy} and the last inequality is established by means of the Schwarz inequality. Summing up, in view of~\eqref{ineq1} and~\eqref{ineq2.bis}, $a(t):=\|u(t)\|^2$ and $b(t):=\|\rho^{-1}u(t)\|^2$ satisfy the system of differential inequalities \begin{equation}\label{system} \dot{a} \leq - 2 \, c_H \, \frac{a^2}{b} \,, \qquad \dot{b} \leq 2 \, (1-c_H) \, a \,, \end{equation} with the initial conditions $a(0)=\|u_0\|^2=:a_0$ and $b(0)=\|\rho^{-1}u_0\|^2=:b_0$. We distinguish two cases: \smallskip \\ \emph{1.} \underline{$c_H \geq 1$}. In this case, it follows from the second inequality of~\eqref{system} that~$b$ is decreasing. Solving the first inequality of~\eqref{system} with~$b$ being replaced by~$b_0$, we then get $$ a(t) \leq a_0 \, \big[1+2 \, c_H \, (a_0/b_0) \, t \big]^{-1} \,. $$ Dividing this inequality by~$b_0$ and maximizing the resulting right hand side with respect to $a_0/b_0 \in (0,1)$, we finally get \begin{equation} \forall t\in[0,\infty) \,, \qquad \|u(t)\| \ \leq \ \|\rho^{-1} u_0\| \left( 1+2 \, c_H \, t \right)^{-1/2} \ \,, \end{equation} which in particular implies~\eqref{decay}. \smallskip \\ \emph{2.} \underline{$c_H \leq 1$}. We ``linearize''~\eqref{system} by replacing one factor~$a$ in the square on the right hand side of the first inequality, employing the second inequality of~\eqref{system}: $$ \frac{\dot{a}}{a} \leq - 2 \, c_H \, \frac{a}{b} \leq -\frac{c_H}{1-c_H} \, \frac{\dot{b}}{b} \,. $$ This leads to $$ a/a_0 \leq (b/b_0)^{-\frac{c_H}{1-c_H}} \,.
$$ Using this estimate in the original, non-linearized system~\eqref{system}, \emph{i.e.}\ solving the system by eliminating~$b$ from the first and~$a$ from the second inequality of~\eqref{system}, we respectively obtain $$ a(t) \leq a_0 \, \big[1+2\,(a_0/b_0)\,t\big]^{-c_H} \,, \qquad b(t) \leq b_0 \, \big[1+2\,(a_0/b_0)\,t\big]^{1-c_H} \,. $$ Dividing the first inequality by~$b_0$ and maximizing the resulting right hand side with respect to $a_0/b_0 \in (0,1)$, we finally get \begin{equation} \forall t\in[0,\infty) \,, \qquad \|u(t)\| \ \leq \ \|\rho^{-1} u_0\| \left( 1+2 \, t \right)^{-c_H/2} \ \,, \end{equation} which is equivalent to~\eqref{decay}. \end{proof} \begin{Remark} We see that the power in the polynomial decay rate of Theorem~\ref{Thm.decay} diminishes as $c_H \to 0$. Let us now argue that this cannot be improved by the present method of proof. Indeed, the first inequality of~\eqref{ineq1} is an application of the Hardy inequality of Theorem~\ref{Thm.Hardy} and the second one is sharp. The Hardy inequality is also applied in the first inequality of~\eqref{ineq2}. In the second inequality of~\eqref{ineq2}, however, we have neglected a negative term. Applying the second inequality of~\eqref{ineq1} to it instead, we conclude with an improved system of differential inequalities \begin{equation}\label{system.bis} \dot{a} \leq - 2 \, c_H \, \frac{a^2}{b} \,, \qquad \dot{b} \leq 2 \, (1-c_H) \, a - 2 \, \frac{a^2}{b} \,. \end{equation} The corresponding system of differential equations has the explicit solution \begin{equation*} \tilde{a}(t) = a_0 \left(\frac{\xi_0} {W\big[\xi_0 \exp{(\xi_0+2t)}\big]}\right)^{c_H} , \quad \tilde{b}(t) = \tilde{a}(t) \Big( 1 + W\big[\xi_0 \exp{(\xi_0+2t)}\big] \Big) , \end{equation*} where $\xi_0 := b_0/a_0-1>0$ and~$W$ denotes the Lambert W function (product log), \emph{i.e.}~the inverse function of $w \mapsto w \exp(w)$.
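The explicit solution can be verified directly: using $W(z)e^{W(z)}=z$ one finds $\dot{W} = 2W/(1+W)$ along $z(t)=\xi_0 e^{\xi_0+2t}$, from which both equations follow. The following sketch (a numerical check with sample, arbitrarily chosen data $a_0$, $b_0$, $c_H$; the Lambert W is implemented by a Newton iteration rather than a library call) confirms this by finite differences:

```python
import math

def lambert_w(z, tol=1e-14):
    """Principal-branch Lambert W for z > 0, by Newton iteration on w*e^w = z.
    The start w = log(1+z) lies above the root, so Newton converges monotonically."""
    w = math.log(1.0 + z)
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (1.0 + w))
        w -= step
        if abs(step) < tol * max(1.0, abs(w)):
            break
    return w

a0, b0, cH = 1.0, 3.0, 0.4          # sample data with b0 > a0 and 0 < cH < 1
xi0 = b0 / a0 - 1.0

def a_tilde(t):
    return a0 * (xi0 / lambert_w(xi0 * math.exp(xi0 + 2.0 * t))) ** cH

def b_tilde(t):
    return a_tilde(t) * (1.0 + lambert_w(xi0 * math.exp(xi0 + 2.0 * t)))

# Central-difference check that (a_tilde, b_tilde) solves the system
#   a' = -2*cH*a^2/b,   b' = 2*(1-cH)*a - 2*a^2/b.
h = 1e-6
for t in [0.1, 1.0, 5.0]:
    da = (a_tilde(t + h) - a_tilde(t - h)) / (2.0 * h)
    db = (b_tilde(t + h) - b_tilde(t - h)) / (2.0 * h)
    assert abs(da + 2.0 * cH * a_tilde(t) ** 2 / b_tilde(t)) < 1e-5
    assert abs(db - 2.0 * (1.0 - cH) * a_tilde(t)
               + 2.0 * a_tilde(t) ** 2 / b_tilde(t)) < 1e-5
```

At $t=0$ one has $W[\xi_0 e^{\xi_0}]=\xi_0$, so the initial conditions $\tilde{a}(0)=a_0$ and $\tilde{b}(0)=a_0(1+\xi_0)=b_0$ are matched exactly.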
Since $$ W\big[\xi_0 \exp{(\xi_0+2t)}\big] = 2\,t + o(t) \qquad\mbox{as}\qquad t \to \infty \,, $$ we see that the $t^{-c_H/2}$ decay in~\eqref{decay} for $c_H<1$ cannot be improved by replacing~\eqref{system} with~\eqref{system.bis}. \end{Remark} \begin{Remark}\label{Rem.useless} Note that the hypothesis~\eqref{locally} is not explicitly used in the proof of Theorem~\ref{Thm.decay}; it is only required that the inequality~\eqref{Hardy} holds with some positive constant~$c_H$. For tubes satisfying~\eqref{locally}, however, we know from Proposition~\ref{Prop.limit} that the constant cannot exceed the value $1/2$. Consequently, irrespectively of the strength of twisting, Theorem~\ref{Thm.decay} never represents an improvement upon Theorem~\ref{Thm.decay.1D}. This is what we mean by the failure of a direct energy argument based on the Hardy inequality of Theorem~\ref{Thm.Hardy}. \end{Remark} \section{The self-similarity transformation}\label{Sec.similar} Let us now turn to a completely different approach which leads to an improved decay rate regardless of the smallness of twisting. \subsection{Straightening of the tube}\label{Sec.straightening} First of all, we reconsider the heat equation~\eqref{heat} in an untwisted tube~$\Omega_0$ by using the change of variables defined by the mapping~$\mathcal{L}_\theta$. In view of the unitary transform~\eqref{unitary}, one can identify the Dirichlet Laplacian in $L^2(\Omega_\theta)$ with the operator~\eqref{H.distributional} in $L^2(\Omega_0)$, and it is readily seen that~\eqref{heat} is equivalent to $$ u_t + H_\theta u - E_1 u = 0 \qquad \mbox{in} \qquad \Omega_0\times(0,\infty) \,, $$ plus the Dirichlet boundary conditions on~$\partial\Omega_0$ and an initial condition at $t=0$. (We keep the same letter~$u$ for the solutions transformed to~$\Omega_0$.)
More precisely, the weak formulation~\eqref{heat.weak} is equivalent to \begin{equation}\label{heat.weak.straight} \big\langle v,u'(t)\big\rangle + Q_\theta\big(v,u(t)\big) - E_1 \big( v,u(t)\big)_{L^2(\Omega_0)} = 0 \end{equation} for each $v \in H_0^1(\Omega_0)$ and a.e.~$t\in[0,\infty)$, with $u(0) = u_0 \in L^2(\Omega_0)$. Here $\langle\cdot,\cdot\rangle$ denotes the pairing of $H_0^1(\Omega_0)$ and $H^{-1}(\Omega_0)$. We know that the transformed solution~$u$ belongs to $C^0\big([0,\infty),L^2(\Omega_0)\big)$ by the semigroup theory. \subsection{Changing the time} The main idea is to adapt the method of self-similar solutions used in the case of the heat equation in the whole Euclidean space by Escobedo and Kavian~\cite{Escobedo-Kavian_1987} to the present problem. We perform the self-similarity transformation in the first (longitudinal) space variable only, while keeping the other (transverse) space variables unchanged. More precisely, we consider a unitary transformation~$\tilde{U}$ on $L^2(\Omega_0)$ which associates to every solution $ u \in L^2_\mathrm{loc}\big((0,\infty),dt;L^2(\Omega_0,dx)\big) $ of~\eqref{heat.weak.straight} a self-similar solution~$\tilde{u}:=\tilde{U}u$ in a new $s$-time weighted space $ L^2_\mathrm{loc}\big((0,\infty),e^s ds;L^2(\Omega_0,dy)\big) $ via~\eqref{SST}. The inverse change of variables is given by $$ u(x_1,x_2,x_3,t) = (t+1)^{-1/4} \, \tilde{u}\big((t+1)^{-1/2}x_1,x_2,x_3,\log(t+1)\big) \,. $$ In this setting, $y=(y_1,y_2,y_3)$ plays the role of the space variable and~$s$ is the new time. One can check that, in the new variables, the evolution is governed by~\eqref{heat.similar}. More precisely, the weak formulation~\eqref{heat.weak.straight} transfers to \begin{equation}\label{heat.weak.similar} \big\langle \tilde{v}, \tilde{u}'(s) -\mbox{$\frac{1}{2}$} \, y_1 \;\!
\partial_1\tilde{u}(s) \big\rangle + \tilde{Q}_{s}\big(\tilde{v},\tilde{u}(s)\big) - E_1 \, e^s \, \big(\tilde{v},\tilde{u}(s)\big)_{L^2(\Omega_0)} = 0 \end{equation} for each $\tilde{v} \in H_0^1(\Omega_0)$ and a.e.~$s\in[0,\infty)$, with $\tilde{u}(0) = \tilde{u}_0 := \tilde{U} u_0 = u_0$. Here~$\tilde{Q}_{s}(\cdot,\cdot)$ denotes the sesquilinear form associated with \begin{align*} \tilde{Q}_{s}[\tilde{u}] &:= \|\partial_1\tilde{u}-\sigma_s\,\partial_\tau \tilde{u}\|_{L^2(\Omega_0)}^2 + e^s \, \|\nabla'\tilde{u}\|_{L^2(\Omega_0)}^2 - \frac{1}{4} \, \|\tilde{u}\|_{L^2(\Omega_0)}^2 \,, \\ \tilde{u} \in \mathfrak{D}(\tilde{Q}_{s}) &:= H_0^1(\Omega_0) \,, \end{align*} where~$\sigma_s$ has been introduced in~\eqref{sigma}. Note that the operator~$\tilde{H}_s$ in $L^2(\Omega_0)$ associated with the form~$\tilde{Q}_s$ has $s$-time-dependent coefficients, which makes the problem different from the whole-space case. In particular, the twisting represented by the function~\eqref{sigma} becomes more and more ``localized'' in a neighbourhood of the origin $y_1=0$ for large time~$s$. \subsection{The natural weighted space} Since~$\tilde{U}$ acts as a unitary transformation on $L^2(\Omega_0)$, it preserves the space norm of solutions of~\eqref{heat.weak.straight} and~\eqref{heat.weak.similar}, \emph{i.e.}, \begin{equation}\label{preserve} \|u(t)\|_{L^2(\Omega_0)}=\|\tilde{u}(s)\|_{L^2(\Omega_0)} \,. \end{equation} This means that we can analyse the asymptotic time behaviour of the former by studying the latter. However, the natural space to study the evolution~\eqref{heat.weak.similar} is not $L^2(\Omega_0)$ but rather the weighted space~\eqref{weight}. For $k \in \mathbb{Z}$, we define $$ \mathcal{H}_k := L^2\big(\Omega_0,K^k(y_1) \, dy_1 dy_2 dy_3\big) \,. $$ Hereafter we abuse the notation a bit by denoting by~$K$, initially introduced as a function on~$\Omega_\theta$ in~\eqref{weight}, the analogous function on~$\mathbb{R}$ too. 
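With the Gaussian choice $K(y_1)=\exp(y_1^2/4)$ (an assumption stated here for illustration; $K$ itself is fixed by~\eqref{weight}, and this form is consistent with the identity $y_1^2 K = 2\,y_1\,dK/dy_1$ used in the proof of Proposition~\ref{Prop.Lions} below), the function $K^{-1/2}(y_1)=\exp(-y_1^2/8)$ satisfies $-f''+(y_1^2/16)f=\frac{1}{4}f$, matching the $-\frac{1}{4}\|\tilde{u}\|^2$ term in~$\tilde{Q}_s$. A quick finite-difference sketch confirming this:

```python
import math

# Check that f(y) = exp(-y^2/8), i.e. K^(-1/2) with the assumed weight
# K(y) = exp(y^2/4), satisfies -f'' + (y^2/16) f = (1/4) f pointwise.
def f(y):
    return math.exp(-y * y / 8.0)

h = 1e-4
for y in [-3.0, -1.0, 0.0, 0.7, 2.5]:
    f2 = (f(y + h) - 2.0 * f(y) + f(y - h)) / (h * h)  # second derivative
    hf = -f2 + (y * y / 16.0) * f(y)                   # oscillator applied to f
    assert abs(hf - 0.25 * f(y)) < 1e-6
```

The eigenvalue $1/4$ appears here because $f'=-\frac{y}{4}f$ gives $f''=\big(\frac{y^2}{16}-\frac{1}{4}\big)f$ exactly.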
Note that~$K^{-1/2}$ is the first eigenfunction of the harmonic-oscillator Hamiltonian \begin{equation}\label{oscillator} h := -\frac{d^2}{dy_1^2} + \frac{1}{16} \, y_1^2 \qquad \mbox{in} \qquad L^2(\mathbb{R}) \end{equation} (\emph{i.e.}\ the Friedrichs extension of this operator initially defined on $C_0^\infty(\mathbb{R})$). The advantage of reformulating~\eqref{heat.weak.similar} in~$\mathcal{H}_1$ instead of $\mathcal{H}_0 = L^2(\Omega_0)$ lies in the fact that then the governing elliptic operator has compact resolvent, as we shall see below (\emph{cf}~Proposition~\ref{Prop.compact}). Let us also introduce the weighted Sobolev space $$ \mathcal{H}_k^1 := H_0^1\big(\Omega_0,K^{k}(y_1) \, dy_1 dy_2 dy_3\big) \,, $$ defined as the closure of $C_0^\infty(\Omega_0)$ with respect to the norm $ (\|\cdot\|_{\mathcal{H}_k}^2 + \|\nabla\cdot\|_{\mathcal{H}_k}^2)^{1/2} $. Finally, we denote by $\mathcal{H}_k^{-1}$ the dual space to $\mathcal{H}_k^1$. \subsection{The evolution in the weighted space} We want to reconsider~\eqref{heat.similar} as a parabolic problem posed in the weighted space~$\mathcal{H}_1$ instead of~$\mathcal{H}_0$. We begin with a formal calculation. Choosing $\tilde{v}(y) := K(y_1) v(y)$ for the test function in~\eqref{heat.weak.similar}, where $v \in C_0^\infty(\Omega_0)$ is arbitrary, we can formally cast~\eqref{heat.weak.similar} into the form \begin{equation}\label{heat.weak.weighted} \big\langle v, \tilde{u}'(s) \big\rangle + a_s\big(v,\tilde{u}(s)\big) = 0 \,. \end{equation} Here $\langle\cdot,\cdot\rangle$ denotes the pairing of $\mathcal{H}_1^1$ and $\mathcal{H}_1^{-1}$, and \begin{align*} a_s(v,\tilde{u}) := & \ \big(\partial_1 v - \sigma_s \, \partial_\tau v, \partial_1 \tilde{u} - \sigma_s \, \partial_\tau \tilde{u} \big)_{\mathcal{H}_1} + e^s \, \big(\nabla' v,\nabla'\tilde{u}\big)_{\mathcal{H}_1} \\ & \ - E_1 \, e^s \, \big(v,\tilde{u}\big)_{\mathcal{H}_1} - \frac{1}{2} \, \big(y_1 \;\! 
v, \sigma_s \, \partial_\tau \tilde{u}\big)_{\mathcal{H}_1} - \frac{1}{4} \, \big(v,\tilde{u}\big)_{\mathcal{H}_1} \,. \end{align*} Note that~$a_s$ is not a symmetric form. Of course, the formulae are meaningless in general, because the solution~$\tilde{u}(s)$ and its derivative~$\tilde{u}'(s)$ may not belong to $\mathcal{H}_1^1$ and $\mathcal{H}_1^{-1}$, respectively. We therefore proceed conversely by showing that~\eqref{heat.weak.weighted} is actually well posed in~$\mathcal{H}_1$ and that the solution solves~\eqref{heat.weak.similar} too. As for the former, we have: \begin{Proposition}\label{Prop.Lions} For any $u_0 \in \mathcal{H}_1$, there exists a unique function~$\tilde{u}$ such that $$ \tilde{u} \in L^2_\mathrm{loc}\big((0,\infty);\mathcal{H}_1^1\big) \cap C^0\big([0,\infty);\mathcal{H}_1\big) \,, \qquad \tilde{u}' \in L^2_\mathrm{loc}\big((0,\infty);\mathcal{H}_1^{-1}\big) \,, $$ and it satisfies~\eqref{heat.weak.weighted} for each $v \in \mathcal{H}_1^1$ and a.e.\ $s\in[0,\infty)$, and $\tilde{u}(0)=u_0$. \end{Proposition} \begin{proof} First of all, let us show that~$a_s$ is well-defined as a sesquilinear form with domain $\mathfrak{D}(a_s) := \mathcal{H}_1^1$ for any fixed $s \in [0,\infty)$. In view of the boundedness of~$\sigma_s$ (for every finite~$s$) and the estimate~\eqref{a-estimate}, it only remains to check that $y_1 v \in \mathcal{H}_1$ provided $v \in \mathcal{H}_1^1$. Let $v \in C_0^\infty(\Omega_0)$. Then \begin{align*} \|y_1 v\|_{\mathcal{H}_1}^2 & = 2 \int_{\Omega_0} y_1 \, |v(y)|^2 \, \frac{d K(y_1)}{d y_1} \, dy \\ & = -2 \int_{\Omega_0} \Big\{ |v(y)|^2 + 2 \, y_1 \, \Re\big[\bar{v}(y)\partial_1 v(y)\big] \Big\} \, K(y_1) \, dy \\ & \leq 4 \, \|y_1 \;\! v\|_{\mathcal{H}_1} \, \|\partial_1 v\|_{\mathcal{H}_1} \,. \end{align*} Consequently, \begin{equation}\label{embed} \|y_1 v\|_{\mathcal{H}_1} \leq 4 \, \|\partial_1 v\|_{\mathcal{H}_1} \leq 4 \, \|v\|_{\mathcal{H}_1^1} \,.
\end{equation} By density, this inequality extends to all $v \in \mathcal{H}_1^1$. Hence, $a_s(v,u)$ is well defined for all $s \geq 0$ and $v,u \in \mathcal{H}_1^1$ (we suppress the tilde over~$u$ in the rest of the proof). Then the Proposition follows from a theorem of J.~L.~Lions \cite[Thm.~X.9]{Brezis_FR} about weak solutions of parabolic equations with time-dependent coefficients. We only need to verify its hypotheses: \smallskip \\ \emph{1.~Measurability.} The function $s \mapsto a_s(v,u)$ is clearly measurable on $[0,\infty)$ for all $v,u \in \mathcal{H}_1^1$, since it is in fact continuous. \smallskip \\ \emph{2.~Boundedness.} Let~$s_0$ be an arbitrary positive number. Using the boundedness of~$\dot{\theta}$, the estimates~\eqref{a-estimate} and~\eqref{embed}, it is quite easy to show that there is a constant~$C$, depending only on~$s_0$, $\|\dot\theta\|_{L^\infty(\mathbb{R})}$ and the geometry of~$\omega$ (through~$a$ and~$E_1$), such that \begin{equation}\label{boundedness} |a_s(v,u)| \leq C \, \|v\|_{\mathcal{H}_1^1} \, \|u\|_{\mathcal{H}_1^1} \end{equation} for all $s \in [0,s_0]$ and $v,u \in \mathcal{H}_1^1$. \smallskip \\ \emph{3.~Coercivity.} Recall that~$a_s$ is not symmetric and that we consider complex functional spaces. However, since the real and imaginary parts of the solution~$\tilde{u}$ of~\eqref{heat.weak.weighted} evolve independently, one may restrict to real-valued functions~$v$ and~$\tilde{u}$ there. Alternatively, it is enough to check the coercivity of the real part of~$a_s$. We therefore need to show that there are positive constants~$\epsilon$ and~$C$ such that the inequality \begin{equation}\label{coercivity} \Re\{ a_s[v] \} \geq \epsilon \, \|v\|_{\mathcal{H}_1^1}^2 - C \, \|v\|_{\mathcal{H}_1}^2 \end{equation} holds for all $v \in \mathcal{H}_1^1$ and $s\in[0,s_0]$, where $a_s[v] := a_s(v,v)$.
We have \begin{multline}\label{begin.estimate} \Re\{ a_s[v] \} = \|\partial_1 v - \sigma_s \, \partial_\tau v \|_{\mathcal{H}_1}^2 + e^s \, \|\nabla' v\|_{\mathcal{H}_1}^2 - E_1 \, e^s \, \|v\|_{\mathcal{H}_1}^2 - \frac{1}{4} \, \|v\|_{\mathcal{H}_1}^2 \\ - \frac{1}{2} \, \Re\,(y_1 \;\! v, \sigma_s \, \partial_\tau v)_{\mathcal{H}_1} \end{multline} for all $v \in \mathcal{H}_1^1$. For every $v \in C_0^\infty(\Omega_0)$, an integration by parts shows that \begin{equation}\label{mixed.term} \Re\,(y_1 \;\! v,\sigma_s \, \partial_\tau v)_{\mathcal{H}_1} = 0 \,; \end{equation} by density, this result extends to all $v \in \mathcal{H}_1^1$. Hence, the mixed term in~\eqref{begin.estimate} vanishes. We continue by estimating the first term on the right hand side of~\eqref{begin.estimate}: \begin{align*} \|\partial_1 v - \sigma_s \, \partial_\tau v \|_{\mathcal{H}_1}^2 &\geq \epsilon \, \|\partial_1 v\|_{\mathcal{H}_1}^2 - \frac{\epsilon}{1-\epsilon} \, \|\sigma_s \, \partial_\tau v \|_{\mathcal{H}_1}^2 \\ &\geq \epsilon \, \|\partial_1 v\|_{\mathcal{H}_1}^2 - \frac{\epsilon}{1-\epsilon} \, e^{s} \, \|\dot\theta\|_{L^\infty(\mathbb{R})}^2 \, a^2 \, \|\nabla' v \|_{\mathcal{H}_1}^2 \end{align*} valid for every $\epsilon\in(0,1)$ and $v \in \mathcal{H}_1^1$. Here the second inequality follows from the definition of~$\sigma_s$ in~\eqref{sigma} and the estimate~\eqref{a-estimate}. Using~\eqref{Poincare} with the help of Fubini's theorem, we therefore have \begin{multline*} \|\partial_1 v - \sigma_s \, \partial_\tau v \|_{\mathcal{H}_1}^2 + (1-\epsilon) \, e^s \, \|\nabla' v\|_{\mathcal{H}_1}^2 \\ \geq \epsilon \, \|\partial_1 v\|_{\mathcal{H}_1}^2 + E_1 \, e^s \left( 1-\epsilon-\frac{\epsilon}{1-\epsilon} \, \|\dot\theta\|_{L^\infty(\mathbb{R})}^2 \, a^2 \right) \|v\|_{\mathcal{H}_1}^2 \end{multline*} provided that~$\epsilon$ is sufficiently small (so that the expression in the round brackets is positive).
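For completeness, let us record the elementary inequality behind the first estimate above (a standard sketch, with the abbreviations $a := \partial_1 v$ and $b := \sigma_s \, \partial_\tau v$): by the Cauchy--Schwarz and Young inequalities, for every $\delta \in (0,1)$,
$$
  \|a-b\|_{\mathcal{H}_1}^2
  = \|a\|_{\mathcal{H}_1}^2 - 2 \, \Re\,(a,b)_{\mathcal{H}_1} + \|b\|_{\mathcal{H}_1}^2
  \geq (1-\delta) \, \|a\|_{\mathcal{H}_1}^2
  + \Big(1-\frac{1}{\delta}\Big) \, \|b\|_{\mathcal{H}_1}^2 \,,
$$
and the choice $\delta := 1-\epsilon$ yields
$
  \|a-b\|_{\mathcal{H}_1}^2
  \geq \epsilon \, \|a\|_{\mathcal{H}_1}^2
  - \frac{\epsilon}{1-\epsilon} \, \|b\|_{\mathcal{H}_1}^2
$.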
Putting this inequality into~\eqref{begin.estimate}, recalling~\eqref{mixed.term} and using the trivial bounds $1 \leq e^s \leq e^{s_0}$ for $s \in [0,s_0]$, we conclude with $$ \Re\{ a_s[v] \} \geq \epsilon \, \|\nabla v\|_{\mathcal{H}_1}^2 - \left[E_1 \, e^{s_0} \left( \epsilon+\frac{\epsilon}{1-\epsilon} \, \|\dot\theta\|_{L^\infty(\mathbb{R})}^2 \, a^2 \right) + \frac{1}{4} \right] \|v\|_{\mathcal{H}_1}^2 \,, $$ valid for all sufficiently small~$\epsilon$ and all real-valued $v \in \mathcal{H}_1^1$. It is clear that the last inequality can be cast into the form~\eqref{coercivity}, with a constant~$\epsilon$ depending on~$a$ and $\|\dot\theta\|_{L^\infty(\mathbb{R})}$, and a constant~$C$ depending on $s_0$, $\|\dot\theta\|_{L^\infty(\mathbb{R})}$ and the geometry of~$\omega$ (through $a$ and $E_1$). \smallskip \\ Now it follows from~\cite[Thm.~X.9]{Brezis_FR} that the unique solution~$\tilde{u}$ of~\eqref{heat.weak.weighted} satisfies $$ \tilde{u} \in L^2\big((0,s_0);\mathcal{H}_1^1\big) \cap C^0\big([0,s_0];\mathcal{H}_1\big) \,, \qquad \tilde{u}' \in L^2\big((0,s_0);\mathcal{H}_1^{-1}\big) \,. $$ Since~$s_0$ is an arbitrary positive number here, we actually get a global continuous solution in the sense that $ \tilde{u} \in C^0\big([0,\infty);\mathcal{H}_1\big) $. \end{proof} \begin{Remark} As a consequence of~\eqref{boundedness}, \eqref{coercivity} and the Lax-Milgram theorem, it follows that the form~$a_s$ is closed on its domain $\mathcal{H}_1^1$. \end{Remark} Now we are in a position to prove a partial equivalence of the evolutions~\eqref{heat.weak.similar} and~\eqref{heat.weak.weighted}. \begin{Proposition} Let $u_0 \in \mathcal{H}_1$. Let~$\tilde{u}$ be the unique solution to~\eqref{heat.weak.weighted} for each $v \in \mathcal{H}_1^1$ and a.e.\ $s\in[0,\infty)$, subject to the initial condition $\tilde{u}(0)=u_0$, that is specified in Proposition~\ref{Prop.Lions}.
Then~$\tilde{u}$ is also the unique solution to~\eqref{heat.weak.similar} for each $\tilde{v} \in \mathcal{H}_0^1$ and a.e.\ $s\in[0,\infty)$, subject to the same initial condition. \end{Proposition} \begin{proof} Choosing $v(y) := K(y_1)^{-1} \;\! \tilde{v}(y)$ for the test function in~\eqref{heat.weak.weighted}, where $\tilde{v} \in C_0^\infty(\Omega_0)$ is arbitrary, one easily checks that~$\tilde{u}$ satisfies~\eqref{heat.weak.similar} for each $\tilde{v} \in C_0^\infty(\Omega_0)$ and a.e.\ $s\in[0,\infty)$. By density, this result extends to all $\tilde{v} \in \mathcal{H}_0^1$. \end{proof} \subsection{Reduction to a spectral problem} As a consequence of the previous subsection, reducing the space of initial data, we can focus on the asymptotic time behaviour of the solutions to~\eqref{heat.weak.weighted}. Choosing $v := \tilde{u}(s)$ in~\eqref{heat.weak.weighted} (and possibly combining with the conjugate version of the equation if we allow non-real initial data), we arrive at the identity \begin{equation}\label{formal} \frac{1}{2} \frac{d}{ds} \|\tilde{u}(s)\|_{\mathcal{H}_{1}}^2 = - J_s^{(1)}[\tilde{u}(s)] \,, \end{equation} where $J_s^{(1)}[\tilde{u}] := \Re\{a_s[\tilde{u}]\}$, $\tilde{u} \in \mathfrak{D}(J_s^{(1)}) := \mathfrak{D}(a_s) = \mathcal{H}_1^1$ (independent of~$s$). Recalling~\eqref{begin.estimate} and~\eqref{mixed.term}, we have \begin{equation*} J_s^{(1)}[\tilde{u}] = \|\partial_1\tilde{u}-\sigma_s\,\partial_\tau \tilde{u}\|_{\mathcal{H}_1}^2 + e^s \, \|\nabla'\tilde{u}\|_{\mathcal{H}_1}^2 - E_1 \, e^s \, \|\tilde{u}\|_{\mathcal{H}_1}^2 - \frac{1}{4} \, \|\tilde{u}\|_{\mathcal{H}_1}^2 \,. \end{equation*} As a consequence of~\eqref{boundedness}, \eqref{coercivity} and the Lax-Milgram theorem, we know that $J_s^{(1)}$ is closed on its domain~$\mathcal{H}_1^1$. It remains to analyse the coercivity of the form~$J_s^{(1)}$. 
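Let us already indicate the standard Gronwall-type step through which the identity~\eqref{formal} will be exploited (a sketch): given a spectral lower bound $J_s^{(1)}[\tilde{u}(s)] \geq \mu(s) \, \|\tilde{u}(s)\|_{\mathcal{H}_1}^2$ for a.e.\ $s \in [0,\infty)$, the identity~\eqref{formal} yields the differential inequality
$$
  \frac{d}{ds} \, \|\tilde{u}(s)\|_{\mathcal{H}_1}^2
  \leq - 2 \, \mu(s) \, \|\tilde{u}(s)\|_{\mathcal{H}_1}^2 \,,
$$
whence
$
  \|\tilde{u}(s)\|_{\mathcal{H}_1}^2
  \leq \|\tilde{u}(0)\|_{\mathcal{H}_1}^2 \, e^{-2\int_0^s \mu(r) \, dr}
$
after an integration.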
More precisely, as usual for energy estimates, we replace the right hand side of~\eqref{formal} by the spectral bound, valid for each fixed $s \in [0,\infty)$, \begin{equation}\label{spectral.reduction} \forall \tilde{u} \in \mathcal{H}_1^1 \;\!, \qquad J_s^{(1)}[\tilde{u}] \geq \mu(s) \, \|\tilde{u}\|_{\mathcal{H}_1}^2 \,, \end{equation} where~$\mu(s)$ denotes the lowest point in the spectrum of the self-adjoint operator~$T_s^{(1)}$ in~$\mathcal{H}_1$ associated with~$J_s^{(1)}$. Then~\eqref{formal} together with~\eqref{spectral.reduction} implies the exponential bound \begin{equation}\label{spectral.reduction.integral} \forall s \in [0,\infty) \;\!, \qquad \|\tilde{u}(s)\|_{\mathcal{H}_1} \leq \|\tilde{u}_0\|_{\mathcal{H}_1} \, e^{-\int_0^s \mu(r) dr} \,. \end{equation} In this way, the problem is reduced to a spectral analysis of the family of operators $\{T_s^{(1)}\}_{s \geq 0}$. \subsection{Removing the weight} In order to investigate the operator~$T_s^{(1)}$ in~$\mathcal{H}_1$, we first map it into a unitarily equivalent operator~$T_s^{(0)}$ in~$\mathcal{H}_0$. This can be carried out via the unitary transform $\mathcal{U}_{0}:\mathcal{H}_1\to\mathcal{H}_0$ defined by $$ (\mathcal{U}_{0}u)(y):=K^{1/2}(y_1)\,u(y) \,. $$ We define $T_s^{(0)} := \mathcal{U}_0 T_s^{(1)} \mathcal{U}_0^{-1}$, which is the self-adjoint operator associated with the quadratic form $J_s^{(0)}[v] := J_s^{(1)}[\mathcal{U}_0^{-1}v]$, $v \in \mathfrak{D}(J_s^{(0)}) := \mathcal{U}_0\,\mathfrak{D}(J_s^{(1)})$. A straightforward calculation yields \begin{equation}\label{J0.form} J_s^{(0)}[v] = \|\partial_1 v-\sigma_s\,\partial_\tau v\|_{\mathcal{H}_0}^2 + \frac{1}{16} \, \|y_1 v\|_{\mathcal{H}_0}^2 + e^s \, \|\nabla'v\|_{\mathcal{H}_0}^2 - E_1 \;\! e^s \, \|v\|_{\mathcal{H}_0}^2 \,.
\end{equation} It is easy to verify that the domain of~$J_s^{(0)}$ coincides with the closure of $C_0^\infty(\Omega_0)$ with respect to the norm $ (\|\cdot\|_{\mathcal{H}_0}^2 + \|\nabla\cdot\|_{\mathcal{H}_0}^2 +\|y_1\cdot\|_{\mathcal{H}_0}^2)^{1/2} $. In particular, $\mathfrak{D}(J_s^{(0)})$~is independent of~$s$. Moreover, since this closure is compactly embedded in~$\mathcal{H}_0$ (one can employ the well-known fact that~\eqref{oscillator} has purely discrete spectrum, which essentially uses the fact that the form domain of~$h$ is compactly embedded in $L^2(\mathbb{R})$), it follows that $T_s^{(0)}$ (and therefore $T_s^{(1)}$) is an operator with compact resolvent. In particular, we have: \begin{Proposition}\label{Prop.compact} The operators $T_s^{(1)} \simeq T_s^{(0)}$ have purely discrete spectra for all $s\in[0,\infty)$. \end{Proposition} \noindent Consequently, $\mu(s)$ is the lowest eigenvalue of $T_s^{(1)}$. \subsection{The asymptotic behaviour of the spectrum}\label{Sec.strong} In order to study the decay rate via~\eqref{spectral.reduction.integral}, we need information about the limit of the eigenvalue~$\mu(s)$ as the time~$s$ tends to infinity. Since the function~$\sigma_s$ from~\eqref{sigma} converges in the distributional sense to a multiple of the delta function supported at zero as~$s\to\infty$, it is to be expected (\emph{cf}~\eqref{J0.form}) that the operator~$T_s^{(0)}$ will converge, in a suitable sense, to the one-dimensional operator~$h$ from~\eqref{oscillator} with an extra Dirichlet boundary condition at zero. More precisely, the limiting operator, denoted by~$h_D$, is introduced as the self-adjoint operator in~$L^2(\mathbb{R})$ whose quadratic form acts in the same way as that of~$h$ but has a smaller domain $$ \mathfrak{D}(h_D^{1/2}) := \big\{ \varphi\in\mathfrak{D}(h^{1/2})\ |\ \varphi(0)=0 \big\} \,.
$$ Alternatively, the form domain $\mathfrak{D}(h_D^{1/2})$ is the closure of $C_0^\infty(\mathbb{R}\setminus\{0\})$ with respect to the norm $ (\|\cdot\|_{L^2(\mathbb{R})}^2 + \|\nabla\cdot\|_{L^2(\mathbb{R})}^2 + \|y_1\cdot\|_{L^2(\mathbb{R})}^2)^{1/2} $. To make this limit rigorous ($T_s^{(0)}$ and~$h_D$ act in different spaces), we follow~\cite{Friedlander-Solomyak_2007} and decompose the Hilbert space~$\mathcal{H}_0$ into an orthogonal sum $$ \mathcal{H}_0 = \mathfrak{H}_1 \oplus \mathfrak{H}_1^\bot \,, $$ where the subspace~$\mathfrak{H}_1$ consists of functions of the form $\psi_1(y) = \varphi(y_1)\mathcal{J}_1(y')$. Recall that~$\mathcal{J}_1$ denotes the positive eigenfunction of $-\Delta_D^\omega$ corresponding to~$E_1$, normalized to~$1$ in $L^2(\omega)$. Given any $\psi \in \mathcal{H}_0$, we have the decomposition $\psi = \psi_1 + \phi$ with $\psi_1\in\mathfrak{H}_1$ as above and $\phi \in \mathfrak{H}_1^\bot$. The mapping $\pi:\varphi\mapsto\psi_1$ is an isomorphism of $L^2(\mathbb{R})$ onto~$\mathfrak{H}_1$. Hence, with an abuse of notation, we may identify any operator~$h$ on $L^2(\mathbb{R})$ with the operator $\pi h \pi^{-1}$ acting on $\mathfrak{H}_1 \subset \mathcal{H}_0$. \begin{Proposition}\label{Prop.strong} Let $\Omega_\theta$ be twisted with $\theta \in C^1(\mathbb{R})$. Suppose that~$\dot\theta$ has compact support. Then $T_s^{(0)}$ converges to $ h_D \oplus 0^\bot $ in the strong-resolvent sense as $s \to \infty$, \emph{i.e.}, for every $F \in \mathcal{H}_0$, $$ \lim_{s \to \infty} \left\| \big(T_s^{(0)}+1\big)^{-1}F - \left[\big(h_D + 1 \big)^{-1} \oplus 0^\bot\right] F \right\|_{\mathcal{H}_0} = 0 \,. $$ Here~$0^\bot$ denotes the zero operator on the subspace $\mathfrak{H}_1^\bot \subset \mathcal{H}_0$. \end{Proposition} \begin{proof} For any fixed $F \in \mathcal{H}_0$ and sufficiently large positive number~$z$, let us set $\psi_s := (T_s^{(0)}+z)^{-1}F$.
In other words, $\psi_s$~satisfies the resolvent equation \begin{equation}\label{re} \forall v \in \mathfrak{D}(J_s^{(0)}) \,, \qquad J_s^{(0)}(v,\psi_s) + z \, (v,\psi_s)_{\mathcal{H}_0} = (v,F)_{\mathcal{H}_0} \,. \end{equation} In particular, choosing~$\psi_s$ for the test function~$v$ in~\eqref{re}, we have \begin{multline}\label{resolvent.identity} \|\partial_1 \psi_s-\sigma_s\,\partial_\tau \psi_s\|_{\mathcal{H}_0}^2 + \frac{1}{16} \, \|y_1 \psi_s\|_{\mathcal{H}_0}^2 + e^s \Big( \|\nabla'\psi_s\|_{\mathcal{H}_0}^2 - E_1 \|\psi_s\|_{\mathcal{H}_0}^2 \Big) + z \, \|\psi_s\|_{\mathcal{H}_0}^2 \\ = (\psi_s,F)_{\mathcal{H}_0} \leq \frac{1}{4} \, \|\psi_s\|_{\mathcal{H}_0}^2 + \|F\|_{\mathcal{H}_0}^2 \,. \end{multline} Henceforth we assume that $z > 1/4$. We employ the decomposition $\psi_s(y)=\varphi_s(y_1)\mathcal{J}_1(y') + \phi_s(y)$ where $\phi_s \in \mathfrak{H}_1^\bot$, \emph{i.e.}, \begin{equation}\label{orthogonality} \forall y_1 \in \mathbb{R} \,, \qquad \big(\mathcal{J}_1,\phi_s(y_1,\cdot)\big)_{L^2(\omega)} = 0 \,. \end{equation} Then, for every $\epsilon \in (0,1)$, \begin{align*} \|\nabla'\psi_s\|_{\mathcal{H}_0}^2 - E_1 \|\psi_s\|_{\mathcal{H}_0}^2 &= \epsilon \|\nabla'\phi_s\|_{\mathcal{H}_0}^2 + (1-\epsilon) \|\nabla'\phi_s\|_{\mathcal{H}_0}^2 - E_1 \|\phi_s\|_{\mathcal{H}_0}^2 \\ &\geq \epsilon \|\nabla'\phi_s\|_{\mathcal{H}_0}^2 + \big[(1-\epsilon)E_2-E_1\big] \|\phi_s\|_{\mathcal{H}_0}^2 \,, \end{align*} where~$E_2$ denotes the second eigenvalue of~$-\Delta_D^\omega$. Since~$E_1$ is (strictly) less than~$E_2$, we can choose~$\epsilon$ so small that~\eqref{resolvent.identity} implies \begin{equation}\label{ri1} \|\phi_s\|_{\mathcal{H}_0}^2 \leq C e^{-s} \qquad\mbox{and}\qquad \|\nabla'\phi_s\|_{\mathcal{H}_0}^2 \leq C e^{-s} \,, \end{equation} where~$C$ is a constant depending on~$\omega$ and $\|F\|_{\mathcal{H}_0}$.
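In more detail (a sketch of the derivation of~\eqref{ri1}): since $z > 1/4$, all the remaining terms on the left hand side of~\eqref{resolvent.identity} are non-negative, so that the last estimate gives
$$
  e^s \Big[\,
  \epsilon \, \|\nabla'\phi_s\|_{\mathcal{H}_0}^2
  + \big[(1-\epsilon)E_2-E_1\big] \|\phi_s\|_{\mathcal{H}_0}^2
  \,\Big]
  \leq \|F\|_{\mathcal{H}_0}^2 \,,
$$
and~\eqref{ri1} follows with, \emph{e.g.}, $C := \|F\|_{\mathcal{H}_0}^2 \max\big\{\epsilon^{-1},\big[(1-\epsilon)E_2-E_1\big]^{-1}\big\}$.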
At the same time, \eqref{resolvent.identity} yields \begin{equation}\label{ri2} \|\varphi_s\|_{L^2(\mathbb{R})} \leq C \,, \qquad \|y_1\varphi_s\|_{L^2(\mathbb{R})} \leq C \,, \qquad\mbox{and}\qquad \|y_1\phi_s\|_{\mathcal{H}_0} \leq C \,, \end{equation} where~$C$ is a constant depending on~$\|F\|_{\mathcal{H}_0}$. To get an estimate on the longitudinal derivative of~$\psi_s$, we handle the first three terms on the left hand side of~\eqref{resolvent.identity} as follows. Defining a new function $u_s\in\mathcal{H}_0$ by $\psi_s(y)=e^{s/4} u_s(e^{s/2} y_1,y')$ (\emph{cf}~the self-similarity transformation~\eqref{SST}) and making the change of variables $(x_1,x')=(e^{s/2} y_1,y')$, we have \begin{align}\label{unself} J_s^{(0)}[\psi_s] &= e^s \|\partial_1 u_s-\dot\theta\,\partial_\tau u_s\|_{\mathcal{H}_0}^2 + \frac{e^{-s}}{16} \, \|x_1 u_s\|_{\mathcal{H}_0}^2 + e^s \Big( \|\nabla'u_s\|_{\mathcal{H}_0}^2 - E_1 \|u_s\|_{\mathcal{H}_0}^2 \Big) \nonumber \\ &\geq e^s \left\{ \|\partial_1 u_s-\dot\theta\,\partial_\tau u_s\|_{\mathcal{H}_0}^2 + \|\nabla'u_s\|_{\mathcal{H}_0}^2 - E_1 \|u_s\|_{\mathcal{H}_0}^2 \right\} \nonumber \\ &\geq e^s \, c_H \, \|\rho \;\! u_s\|_{\mathcal{H}_0}^2 \nonumber \\ &= e^s \, c_H \, \|\rho_s\psi_s\|_{\mathcal{H}_0}^2 \,, \qquad \mbox{where} \quad \rho_s(y) := \rho(e^{s/2}y_1,y') \,. \end{align} In the second inequality we have employed the Hardy inequality of Theorem~\ref{Thm.Hardy}; the constant~$c_H$ is positive by the hypothesis. Consequently, \eqref{resolvent.identity}~yields \begin{equation}\label{ri3} \|\rho_s\psi_s\|_{\mathcal{H}_0}^2 \leq C e^{-s} \,, \end{equation} where~$C$ is a constant depending on~$\dot\theta$, $\omega$ and~$\|F\|_{\mathcal{H}_0}$.
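For the reader's convenience, let us verify the scaling identities used in~\eqref{unself} (a sketch, assuming the form $\sigma_s(y_1) = e^{s/2} \, \dot\theta(e^{s/2}y_1)$ of the function defined in~\eqref{sigma}): with $x = (e^{s/2}y_1,y')$, so that $dx = e^{s/2} \, dy$, we have
$$
  \|\rho_s\psi_s\|_{\mathcal{H}_0}^2
  = \int_{\Omega_0} e^{s/2} \, \rho(e^{s/2}y_1,y')^2 \, |u_s(e^{s/2}y_1,y')|^2 \, dy
  = \|\rho \;\! u_s\|_{\mathcal{H}_0}^2 \,,
$$
and, in the same way,
$
  \|\partial_1 \psi_s-\sigma_s\,\partial_\tau \psi_s\|_{\mathcal{H}_0}^2
  = e^s \, \|\partial_1 u_s-\dot\theta\,\partial_\tau u_s\|_{\mathcal{H}_0}^2
$
together with
$
  \|y_1 \psi_s\|_{\mathcal{H}_0}^2
  = e^{-s} \, \|x_1 u_s\|_{\mathcal{H}_0}^2
$.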
Now, proceeding as in the proof of~\eqref{bound3}, we get \begin{multline*} \|\partial_1 \psi_s-\sigma_s\,\partial_\tau \psi_s\|_{\mathcal{H}_0}^2 + e^s \Big( \|\nabla'\psi_s\|_{\mathcal{H}_0}^2 - E_1 \|\psi_s\|_{\mathcal{H}_0}^2 \Big) \\ \geq \epsilon \, \|\partial_1 \psi_s\|_{\mathcal{H}_{0}}^2 - \frac{\epsilon}{1-\epsilon} \, \|\dot\theta\|_{L^\infty(\mathbb{R})}^2 \, a^2 E_1 \, e^s \, \|\psi_s\|_{L^2(I_s\times\omega)}^2 \end{multline*} for every $\epsilon < \big(1+a^2\|\dot{\theta}\|_{L^\infty(\mathbb{R})}^2\big)^{-1}$, where $ I_s := e^{-s/2} I \equiv \{e^{-s/2} x_1 \,|\, x_1 \in I \} $ with $I := (\inf\mathop{\mathrm{supp}}\nolimits\dot\theta,\sup\mathop{\mathrm{supp}}\nolimits\dot\theta)$. Since \begin{equation}\label{exclusively} \|\psi_s\|_{L^2(I_s\times\omega)} \leq C \, \|\rho_s\psi_s\|_{\mathcal{H}_0} \,, \end{equation} where~$C$ is a constant depending exclusively on~$I$, \eqref{resolvent.identity}~together with~\eqref{ri3} implies $ \|\partial_1 \psi_s\|_{\mathcal{H}_{0}}^2 \leq C $, where~$C$ is a constant depending on~$\dot\theta$, $\omega$ and~$\|F\|_{\mathcal{H}_0}$. Recalling~\eqref{orthogonality}, we therefore get the separate bounds \begin{equation}\label{ri4} \|\partial_1\phi_s\|_{\mathcal{H}_0} \leq C \qquad\mbox{and}\qquad \|\dot\varphi_s\|_{L^2(\mathbb{R})} \leq C \,, \end{equation} with the same constant~$C$. By~\eqref{ri1}, $\phi_s$~converges strongly to zero in~$\mathcal{H}_0$ as $s \to \infty$. Moreover, it follows from~\eqref{ri1}, \eqref{ri2} and~\eqref{ri4} that $\{\phi_s\}_{s \geq 0}$ is a bounded family in $\mathfrak{D}(J_s^{(0)})$. Consequently, $\phi_s$ converges weakly to zero in $\mathfrak{D}(J_s^{(0)})$ as $s \to \infty$. At the same time, it follows from~\eqref{ri2} and~\eqref{ri4} that $\{\varphi_s\}_{s \geq 0}$ is a bounded family in $\mathfrak{D}(h^{1/2})$. Therefore it is precompact in the weak topology of $\mathfrak{D}(h^{1/2})$. 
Let~$\varphi_\infty$ be a weak limit point, \emph{i.e.}, for an increasing sequence of positive numbers $\{s_n\}_{n\in\mathbb{N}}$ such that $s_n \to \infty$ as $n \to \infty$, $\{\varphi_{s_n}\}_{n\in\mathbb{N}}$ converges weakly to~$\varphi_\infty$ in $\mathfrak{D}(h^{1/2})$. Actually, we may assume that it converges strongly in $L^2(\mathbb{R})$ because $\mathfrak{D}(h^{1/2})$ is compactly embedded in $L^2(\mathbb{R})$. Employing~\eqref{orthogonality}, \eqref{ri3}~together with~\eqref{exclusively} gives \begin{equation}\label{ri5} \|\varphi_s\|_{L^2(I_s)}^2 \leq C e^{-s} \,, \end{equation} where~$C$ is a constant depending on~$\dot\theta$, $\omega$ and~$\|F\|_{\mathcal{H}_0}$. Multiplying this inequality by $e^{s/2}$ and taking the limit $s \to \infty$, we verify that \begin{equation}\label{ri6} \varphi_\infty(0) = 0 \,. \end{equation} (We note that $\mathfrak{D}(h^{1/2}) \subset H^1(\mathbb{R})$ and that $H^1(J)$ is compactly embedded in $C^{0,\lambda}(J)$ for every $\lambda\in(0,1/2)$ and any bounded interval~$J\subset\mathbb{R}$.) Finally, let $\varphi \in C_0^\infty(\mathbb{R}\!\setminus\!\{0\})$ be arbitrary. Taking $v(x) := \varphi(x_1) \mathcal{J}_1(x')$ as the test function in~\eqref{re}, with~$s$ being replaced by~$s_n$, and sending~$n$ to infinity, we easily check that \begin{equation*} (\dot\varphi,\dot\varphi_\infty)_{L^2(\mathbb{R})} + \frac{1}{16} \, (y_1\varphi,y_1\varphi_\infty)_{L^2(\mathbb{R})} + z \, (\varphi,\varphi_\infty)_{L^2(\mathbb{R})} = (\varphi,f)_{L^2(\mathbb{R})} \,, \end{equation*} where $f(x_1) := (\mathcal{J}_1,F(x_1,\cdot))_{L^2(\omega)}$. That is, $\varphi_\infty = (h_D+z)^{-1} f$, for \emph{any} weak limit point of $\{\varphi_s\}_{s \geq 0}$. Summing up, we have shown that~$\psi_{s}$ converges strongly to $\psi_\infty$ in $\mathcal{H}_0$ as $s \to \infty$, where $ \psi_\infty(y) := \varphi_\infty(y_1)\mathcal{J}_1(y') = \big[(h_D+z)^{-1} \oplus 0^\bot \big] F $. 
\end{proof} \begin{Remark}\label{Rem.unself} The crucial step in the proof is certainly the usage of the Hardy inequality in the second inequality of~\eqref{unself}. Indeed, it enables one to control the mixed terms coming from the first term on the left hand side of~\eqref{resolvent.identity}. We would like to mention that instead of the Hardy inequality itself we could have used in~\eqref{unself} the corner-stone Lemma~\ref{Lem.cornerstone}. This would lead to the lower bound $ J_s^{(0)}[\psi_s] \geq e^s \, \lambda(\dot\theta,I) \, \|\psi_s\|_{L^2(I_s\times\omega)}^2 $, which is sufficient to conclude the proof in the same way as above. \end{Remark} \begin{Corollary}\label{Corol.strong} Let $\Omega_\theta$ be twisted with $\theta \in C^1(\mathbb{R})$. Suppose that~$\dot\theta$ has compact support. Then $$ \lim_{s\to\infty} \mu(s) = 3/4 \,. $$ \end{Corollary} \begin{proof} In general, the strong-resolvent convergence of Proposition~\ref{Prop.strong} is not enough to guarantee the convergence of spectra. However, in our case, since the spectra are purely discrete, the eigenprojections converge even in norm (\emph{cf}~\cite{Weidmann_1980}). In particular, $\mu(s)$~converges to the first eigenvalue of~$h_D$. It remains to notice that the first eigenvalue of~$h_D$ coincides (in view of the symmetry) with the second eigenvalue of~$h$, which is~$3/4$. (For the spectrum of~$h$, see any textbook dealing with the quantum harmonic oscillator, \emph{e.g.}, \cite[Sec.~2.3]{Griffiths}.) \end{proof} \subsection{The improved decay rate - Proof of Theorem~\ref{Thm.rate}}\label{Sec.improved} Now we have all the prerequisites to prove Theorem~\ref{Thm.rate}. Recall that the identity $\Gamma(\Omega_\theta)=1/4$ for untwisted tubes is already established by Corollary~\ref{Corol.norate}. Throughout this subsection we therefore assume that~$\Omega_\theta$ is twisted with~\eqref{locally} and show that there is an extra decay rate. We come back to~\eqref{spectral.reduction.integral}.
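Before doing so, let us recall the elementary computation behind the limit value~$3/4$ (a sketch, assuming the normalisation $h = -\frac{d^2}{dy_1^2} + \frac{1}{16} \, y_1^2$ suggested by the quadratic form~\eqref{J0.form}): the eigenvalues of the harmonic oscillator $-\frac{d^2}{dx^2} + \omega^2 x^2$ in $L^2(\mathbb{R})$ are $(2n+1)\,\omega$ with $n \in \mathbb{N}_0$, so that with $\omega = 1/4$,
$$
  \sigma(h) = \left\{ \tfrac{1}{4}, \tfrac{3}{4}, \tfrac{5}{4}, \dots \right\} .
$$
The associated Hermite-type eigenfunctions are alternately even and odd; the odd ones vanish at zero and therefore survive the extra Dirichlet condition, whence the lowest eigenvalue of~$h_D$ is the second eigenvalue~$3/4$ of~$h$.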
It follows from Corollary~\ref{Corol.strong} that for arbitrarily small positive number~$\varepsilon$ there exists a (large) positive time~$s_\varepsilon$ such that for all $s \geq s_\varepsilon$, we have $\mu(s) \geq 3/4 - \varepsilon$. Hence, fixing $\varepsilon>0$, for all $s \geq s_\varepsilon$, we have $$ {-\int_0^s \mu(r) \, dr} \leq {-\int_0^{s_\varepsilon} \mu(r) \, dr} {-(3/4-\varepsilon)(s-{s_\varepsilon})} \leq {(3/4-\varepsilon) s_\varepsilon} {-(3/4-\varepsilon) s} \,, $$ where the second inequality is due to the fact that~$\mu(s)$ is non-negative for all $s \geq 0$ (it is in fact greater than~$1/4$, \emph{cf}~Proposition~\ref{Prop.positivity}). At the same time, assuming $\varepsilon \leq 3/4$, we trivially have $$ {-\int_0^s \mu(r) \, dr} \leq 0 \leq {(3/4-\varepsilon) s_\varepsilon} {-(3/4-\varepsilon) s} $$ also for all $s \leq s_\varepsilon$. Summing up, \eqref{spectral.reduction.integral}~implies \begin{equation}\label{instead} \|\tilde{u}(s)\|_{\mathcal{H}_{1}} \leq C_\varepsilon \, e^{-(3/4-\varepsilon)s} \, \|\tilde{u}_0\|_{\mathcal{H}_{1}} \end{equation} for every $s \in [0,\infty)$, where $C_\varepsilon := e^{s_\varepsilon} \geq e^{(3/4-\varepsilon)s_\varepsilon}$. Returning to the variables in the straightened tube via $u = \tilde{U}^{-1} \tilde{u}$, using~\eqref{preserve} together with the point-wise estimate $1 \leq K$, and recalling that $\tilde{u}_0=u_0$, it follows that $$ \|u(t)\|_{\mathcal{H}_0} = \|\tilde{u}(s)\|_{\mathcal{H}_0} \leq \|\tilde{u}(s)\|_{\mathcal{H}_{1}} \leq C_\varepsilon \, (1+t)^{-(3/4-\varepsilon)} \, \|u_0\|_{\mathcal{H}_{1}} $$ for every $t \in [0,\infty)$. Finally, we recall that the weight~$K$ in~$\mathcal{H}_1$ depends on the longitudinal variable only, which is therefore left invariant by the mapping~$\mathcal{L}_\theta$. 
Consequently, we apply the unitary transform~\eqref{unitary} and conclude with $$ \|S(t)\|_{L^2(\Omega_\theta,K) \to L^2(\Omega_\theta)} = \sup_{u_0 \in \mathcal{H}_1\setminus\{0\}} \frac{\|u(t)\|_{\mathcal{H}_0}}{\|u_0\|_{\mathcal{H}_1}} \leq C_\varepsilon \, (1+t)^{-(3/4-\varepsilon)} $$ for every $t \in [0,\infty)$. Since~$\varepsilon$ can be made arbitrarily small, this bound implies $\Gamma(\Omega_\theta) \geq 3/4$ and thus concludes the proof of Theorem~\ref{Thm.rate}. \subsection{The improved decay rate - an alternative statement}\label{Sec.alternative} Theorem~\ref{Thm.rate} provides quite precise information about the extra polynomial decay of solutions~$u$ of~\eqref{I.heat} in a twisted tube in the sense that the decay rate~$\Gamma(\Omega_\theta)$ is at least three times better than in the untwisted case. On the other hand, we have no control over the constant~$C_\Gamma$ in~\eqref{solution.rate} (in principle it may blow up as $\Gamma \to \Gamma(\Omega_\theta)$). As an alternative result, we therefore present also the following theorem, where we get rid of the constant~$C_\Gamma$, but the price we pay is a merely qualitative knowledge of the decay rate. \begin{Theorem}\label{Thm.I} Let $\theta\in C^1(\mathbb{R})$ satisfy~\eqref{locally}. We have \begin{equation}\label{decay.similar.I} \forall t \geq 0 \,, \qquad \|S(t)\|_{ L^2(\Omega_\theta,K) \to L^2(\Omega_\theta) } \, \leq \, \left( 1+t \right)^{\!-(\gamma+1/4)} \,, \end{equation} where~$\gamma$ is a non-negative constant depending on~$\dot{\theta}$ and~$\omega$. Moreover, $\gamma$~is positive if, and only if, $\Omega_\theta$ is twisted. \end{Theorem} In order to establish Theorem~\ref{Thm.I}, the asymptotic result of Corollary~\ref{Corol.strong} needs to be supplemented with information about the values of~$\mu(s)$ at finite times~$s$.
\subsubsection{Singling out the dimensional decay rate} It follows from Theorem~\ref{Thm.decay.1D} that there is at least a $1/4$ polynomial decay rate for the solutions of the heat equation. In the setting of self-similar solutions (recall~\eqref{spectral.reduction.integral} and the relation between the initial and self-similar times~$t$ and~$s$ given by~\eqref{SST}), this is reflected in the fact that we actually have $\mu(s) \geq 1/4$, regardless of whether the tube is twisted or not. It is therefore natural to study rather the shifted operator $T_s^{(0)}-1/4$. However, it is not obvious from~\eqref{J0.form} that such an operator is non-negative. In order to introduce the shift explicitly into the structure of the operator, we therefore introduce another unitarily equivalent operator $T_s^{(-1)}:=\mathcal{U}_{-1} T_s^{(0)} (\mathcal{U}_{-1})^{-1}$ in~$\mathcal{H}_{-1}$, where the map $\mathcal{U}_{-1} : \mathcal{H}_0 \to \mathcal{H}_{-1}$ acts in the same way as~$\mathcal{U}_0$: $$ (\mathcal{U}_{-1}v)(y):=K^{1/2}(y_1)\,v(y) \,. $$ $T_s^{(-1)}$ is the self-adjoint operator associated with the quadratic form $J_s^{(-1)}[w] := J_s^{(0)}[(\mathcal{U}_{-1})^{-1}w]$, $w \in \mathfrak{D}(J_s^{(-1)}) := \mathcal{U}_{-1}\,\mathfrak{D}(J_s^{(0)})$. Again, it is straightforward to check that \begin{align*} J_s^{(-1)}[w] &= \|\partial_1 w-\sigma_s\,\partial_\tau w\|_{\mathcal{H}_{-1}}^2 + e^s \, \|\nabla'w\|_{\mathcal{H}_{-1}}^2 - E_1 \, e^s \, \|w\|_{\mathcal{H}_{-1}}^2 + \frac{1}{4} \, \|w\|_{\mathcal{H}_{-1}}^2 \,. \end{align*} Now it readily follows from the structure of the quadratic form that the shifted operator $T_s^{(-1)}-1/4$ is non-negative. Moreover, it is positive if, and only if, the tube is twisted. \begin{Proposition}\label{Prop.positivity} If $\Omega_\theta$ is twisted with $\theta \in C^1(\mathbb{R})$, then we have $$ \forall s \in [0,\infty), \qquad \mu(s) > 1/4 \,. $$ Conversely, $\mu(s) = 1/4$ for all $s \in [0,\infty)$ if $\Omega_\theta$ is untwisted.
\end{Proposition} \begin{proof} Since $J_s^{(-1)}[w] - \frac{1}{4} \, \|w\|_{\mathcal{H}_{-1}}^2 \geq 0$ for every $w \in \mathfrak{D}(J_s^{(-1)})$, we clearly have $\mu(s) \geq 1/4$, regardless of whether the tube is twisted or not. By definition, if it is untwisted, then either $\sigma_s = 0$ identically in~$\mathbb{R}$ for all $s \in [0,\infty)$ or $\partial_\tau \mathcal{J}_1 = 0$ identically in~$\omega$, where~$\mathcal{J}_1$ is the positive eigenfunction corresponding to~$E_1$ of the Dirichlet Laplacian in~$L^2(\omega)$. Consequently, choosing $w(y) = \mathcal{J}_1(y')$ as a test function for~$J_s^{(-1)}$, we also get the opposite bound $\mu(s) \leq 1/4$ in the untwisted case. To get the converse result, we can proceed exactly as in the proof of Lemma~\ref{Lem.cornerstone}: Assuming $\mu(s) = 1/4$ in the twisted case, the variational definition of the eigenvalue~$\mu(s)$ would imply $$ \|\sigma_s\|_{L^2(\mathbb{R},K^{-1})} = 0 \qquad\mbox{or}\qquad \|\partial_\tau\mathcal{J}_1\|_{L^2(\omega)} = 0 \,, $$ a contradiction. \end{proof} Now we are in a position to prove Theorem~\ref{Thm.I}. \subsubsection{Proof of Theorem~\ref{Thm.I}} Assume~\eqref{locally}. It follows from Proposition~\ref{Prop.positivity} and Corollary~\ref{Corol.strong} that the number \begin{equation} \gamma := \inf_{s \in [0,\infty)}\mu(s) - 1/4 \end{equation} is positive if, and only if, $\Omega_\theta$~is twisted. In any case, \eqref{spectral.reduction.integral}~implies $$ \|\tilde{u}(s)\|_{\mathcal{H}_{1}} \leq \|\tilde{u}_0\|_{\mathcal{H}_{1}} \, e^{-(\gamma+1/4)s} $$ for every $s \in [0,\infty)$. Using this estimate instead of~\eqref{instead}, but following the same type of arguments as in Section~\ref{Sec.improved} below~\eqref{instead}, we get $$ \|S(t)\|_{L^2(\Omega_\theta,K) \to L^2(\Omega_\theta)} \leq (1+t)^{-(\gamma+1/4)} $$ for every $t \in [0,\infty)$. This is equivalent to~\eqref{decay.similar.I} and we know that~$\gamma$ is positive if~$\Omega_\theta$ is twisted.
On the other hand, in view of Proposition~\ref{Prop.decay.1D}, estimate~\eqref{decay.similar.I} cannot hold with positive~$\gamma$ if the tube is untwisted. This concludes the proof of Theorem~\ref{Thm.I}. \section{Conclusions}\label{Sec.end} The classical interpretation of the heat equation~\eqref{I.heat} is that its solution~$u$ gives the evolution of the temperature distribution of a medium in the tube cooled down to zero on the boundary. It also represents the simplest version of the stochastic Fokker-Planck equation describing the Brownian motion in~$\Omega_\theta$ with killing boundary conditions. The results of the present paper can then be interpreted as saying that twisting implies a faster cool-down/death of the medium/Brownian particle in the tube. Many other diffusive processes in nature are governed by~\eqref{I.heat}. Our proof that there is an extra decay rate for solutions of~\eqref{I.heat} if the tube is twisted was far from being straightforward. This is a bit surprising because the result is quite to be expected from the physical interpretation, if one notices that the twist (locally) enlarges the boundary of the tube, while it (locally) keeps the volume unchanged. (By ``locally'' we mean that this is the case for bounded tubes; otherwise both quantities are of course infinite.) At the same time, the Hardy inequality~\eqref{I.Hardy} did not play a direct role in the proof of Theorems~\ref{Thm.rate} and~\ref{Thm.I} (although, combining any of the theorems with Theorem~\ref{Thm.Hardy}, we eventually know that the existence of the Hardy inequality is equivalent to the extra decay rate for the heat semigroup). It would be desirable to find a more direct proof of Theorem~\ref{Thm.rate} based on~\eqref{I.Hardy}. We conjecture that the inequality of Theorem~\ref{Thm.rate} can be replaced by equality, \emph{i.e.}, $\Gamma(\Omega_\theta)=3/4$ if the tube is twisted and~\eqref{locally} holds.
The study of the quantitative dependence of the constant~$\gamma$ from Theorem~\ref{Thm.I} on properties of~$\dot\theta$ and the geometry of~$\omega$ also constitutes an interesting open problem. Note that the two quantities are related by $\gamma+1/4 \leq \Gamma(\Omega_\theta)$. Throughout the paper we assumed~\eqref{locally}. We expect that this hypothesis can be replaced by a mere vanishing of~$\dot\theta$ at infinity to get Theorems~\ref{Thm.rate} and~\ref{Thm.I} (and also Theorem~\ref{Thm.Hardy}). This less restrictive assumption is known to be enough to ensure~\eqref{spectrum} and there exist versions of~\eqref{I.Hardy} even if~\eqref{locally} is violated (\emph{cf}~\cite{K6-erratum}). However, it is quite possible that a slower decay of~$\dot\theta$ at infinity will make the effect of twisting stronger. In particular, can~$\Gamma(\Omega_\theta)$ be strictly greater than~$3/4$ if the tube is twisted and~$\dot\theta$ decays to zero very slowly at infinity? Equally, it is not clear whether Proposition~\ref{Prop.limit} holds if~\eqref{locally} is violated. There are some further open problems related to the Hardy inequality of Theorem~\ref{Thm.Hardy}. In particular, it is frustrating that the proof of the theorem does not extend to all~$\dot\theta$ merely vanishing at infinity. In this context, it would be highly desirable to establish a more quantitative version of Lemma~\ref{Lem.cornerstone}, \emph{i.e.}~to get a positive lower bound to $\lambda(\dot{\theta},I)$ depending explicitly on~$\dot\theta$, $|I|$ and~$\omega$. On the other hand, a completely different situation will appear if one allows twisted tubes for which~$\dot\theta$ does not vanish at infinity. Then the spectrum of $-\Delta_D^{\Omega_\theta}$ can actually start strictly above~$E_1$ (\emph{cf}~\cite{EKov_2005} or \cite[Corol.~6.6]{K6}) and an extra exponential decay rate for our semigroup~$S(t)$ follows at once already in $L^2(\Omega_\theta)$. 
In such situations it is more natural to study the decay of the semigroup associated with $-\Delta_D^{\Omega_\theta}$ shifted by the lowest point in its spectrum. As a particularly interesting situation we mention the case of periodically twisted tubes, for which a systematic analysis based on the Floquet-Bloch decomposition could be developed in the spirit of \cite{Duro-Zuazua_2000,Ortega-Zuazua_2000}. We expect that the extra decay rate will be induced also in other twisted models for which Hardy inequalities have been established recently \cite{K3,KK2}. It would be also interesting to study the effect of twisting in other physical models. As one possible direction of this research, let us mention the question of the long time behaviour of the solutions to the dissipative wave equation \cite{Gallay-Raugel_1997,Gallay-Raugel_1998,Orive-Pazoto-Zuazua_2001}. Let us conclude the paper by a general conjecture. We expect that there is always an improvement of the decay rate for the heat semigroup if a Hardy inequality holds: \begin{Conjecture} Let~$\Omega$ be an open connected subset of~$\mathbb{R}^d$. Let~$H$ and $H_+$ be two self-adjoint operators in $L^2(\Omega)$ such that $ \inf\sigma(H) = \inf\sigma(H_+) = 0 $. Assume that there is a positive smooth function $\varrho:\Omega\to\mathbb{R}$ such that $H_+ \geq \varrho$, while $H-V$ is a negative operator for any non-negative non-trivial $V \in C_0^\infty(\Omega)$. Then there exists a positive function $K:\Omega\to\mathbb{R}$ such that $$ \lim_{t\to\infty} \frac{\|e^{- H_+ t}\|_{L^2(\Omega,K)\to L^2(\Omega)}} {\|e^{- H t}\|_{L^2(\Omega,K)\to L^2(\Omega)}} = 0 \,. $$ \end{Conjecture} \noindent A similar conjecture can be stated for the same type of operators in different Hilbert spaces. In this paper we proved the conjecture for the special situation where $H=H_0-E_1$ and $H_+=H_\theta-E_1$ (transformed Dirichlet Laplacians) in $L^2(\Omega)$, with $\Omega=\Omega_0$ (unbounded tube). 
In general, proving the conjecture seems to be a difficult problem. \section*{Acknowledgment} The first author would like to thank the Basque Center for Applied Mathematics in Bilbao, where part of this work was carried out, for hospitality and support. The work was partially supported by the Czech Ministry of Education, Youth and Sports within the project LC06002, and by Grant MTM2008-03541 of the MICINN (Spain). \addcontentsline{toc}{section}{References}
https://arxiv.org/abs/1911.07066
Maximal subgroup growth of a few polycyclic groups
We give here the exact maximal subgroup growth of two classes of polycyclic groups. Let $G_k = \langle x_1, x_2, ..., x_k \mid x_ix_jx_i^{-1}x_j \text{ for all } i < j \rangle$. So $G_k = \mathbb{Z} \rtimes (\mathbb{Z} \rtimes (\mathbb{Z} \rtimes ... \rtimes \mathbb{Z}))$. Then for all $k \geq 2$, we calculate $m_n(G_k)$, the number of maximal subgroups of $G_k$ of index $n$, exactly. Also, for infinitely many groups $H_k$ of the form $\mathbb{Z}^2 \rtimes G_2$, we calculate $m_n(H_k)$ exactly.
\section{Introduction} Let $G$ be a finitely generated (f.g.) group. We denote by $a_n(G)$ the number of subgroups of $G$ of index $n$ (which is necessarily finite), and we denote by $\maxsubgr(G)$ the number of maximal subgroups of $G$ of index $n$. The subgroup growth of $G$ deals with the growth rate of the function $a_n(G)$ and related functions, such as $\maxsubgr(G)$, $s_n(G) := \sum_{k = 1}^n a_k(G)$, or the function counting only the normal subgroups of $G$ of index $n$. The area of subgroup growth has had some great success. One highlight is the classification of all f.g.\ groups for which $a_n(G)$ is bounded above by a polynomial in $n$ (see chapter 5 in \cite{Lubotzky-and-Segal}). Also, Jaikin-Zapirain and Pyber made a great advance in \cite{Jaikin-Zapirain2011}, where they give a ``semi-structural characterization'' of groups $G$ for which $\maxsubgr(G)$ is bounded above by a polynomial in $n$. For the \emph{word} growth of a group with polynomial (word) growth, the degree of polynomial growth is given by a nice, simple formula. However, for subgroup growth, it is often very challenging, given a group $G$ of polynomial subgroup growth, to calculate $\deg(G)$, its degree of polynomial growth: \[ \deg(G) = \inf \{ \alpha \mid \subgr(G) \leq n^\alpha \text{ for all large $n$} \} = \limsup \frac{\log \subgr(G)}{\log n}. \] Similarly, for groups $G$ with polynomial maximal subgroup growth, it is often difficult to determine $\mdeg(G)$, where \[ \mdeg(G) = \inf \{ \alpha \mid \maxsubgr(G) \leq n^\alpha \text{ for all large $n$} \} = \limsup \frac{\log \maxsubgr(G)}{\log n}. \] But progress in both areas has been made. In \cite{Shalev_On_the_degree}, Shalev calculated $\deg(G)$ exactly for certain metabelian groups and for all virtually abelian groups. In \cite{Kelley-some metabelian groups}, the first author calculated $\mdeg(G)$ for some metabelian groups, and in \cite{Kelley-dissertation} he does so for all virtually abelian groups. 
What is even rarer than calculating $\mdeg(G)$ is to give an exact formula for $\maxsubgr(G)$ (or for $\subgr(G)$). In \cite{Gelman}, Gelman gives a beautiful, exact formula for $\subgr(\BS(a,b))$, assuming $\gcd(a,b) = 1$, where $\BS(a, b)$ is the Baumslag-Solitar group having presentation $\langle x, y \mid y^{-1}x^ay = x^b \rangle$. Gelman's argument can be easily modified to give an exact formula for $\maxsubgr(\BS(a,b))$, where again $\gcd(a,b) = 1$. (Alternatively, a different argument, which explains why $\gcd(a,b) = 1$ is such a nice assumption, is given by the first author in \cite{Kelley-Baumslag-Solitar groups}.) Since there are so few groups $G$ for which $\maxsubgr(G)$ is known exactly, this paper does so for two infinite classes of polycyclic groups. For $k \geq 2$, consider the group $G_k$ with presentation $\langle x_1, x_2, ..., x_k \mid x_ix_jx_i^{-1}x_j \text{ for all } i < j \rangle$. Then $G_k$ has the form $\Z \rtimes (\Z \rtimes (\Z \rtimes ... \rtimes \Z))$, where the $i$th $\Z$, reading from right to left, is generated by $x_i$. Note that the Hirsch length of $G_k$ is $k$, and so if $i \neq j$, then $G_i \not\cong G_j$. In Theorem~\ref{thm:m_n(G_k)}, we calculate $\maxsubgr(G_k)$ exactly for $k \geq 2$. Let $G_2$ be as above, but write $G_2=\Z \rtimes \Z$ as $\langle b \rangle \rtimes \langle a \rangle$ instead of $\langle x_2 \rangle \rtimes \langle x_1 \rangle$. For $k \in \Z$, we will define the group $H_k$, which is of the form $\Z^2 \rtimes G_2$. The generator $a$ acts (by conjugation) on $\Z^2$ by multiplication by the matrix $A = \left(\begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix}\right)$, and the generator $b$ acts (by conjugation) on $\Z^2$ by multiplication by the matrix $B_k = \left(\begin{smallmatrix} 0 & 1 \\ -1 & k \end{smallmatrix}\right)$. Then in Theorem~\ref{thm:exact maximal subgroup growth of H_k}, we calculate $\maxsubgr(H_k)$ exactly for all $k \in \Z$. 
A consequence of this theorem is that among the groups $H_k$, there are infinitely many that are pairwise non-isomorphic. Also, it is interesting that $\mdeg(H_2) = 2$, but $\mdeg(H_k) = 1$ for all $k \neq 2$. For a module $N$, we let $\maxsubmod(N)$ denote the number of maximal submodules of $N$ of index $n$. Also, $M \leq N$ denotes that $M$ is a submodule of $N$. And $\langle n_1, n_2, \dots, n_k \rangle$, where $n_i \in N$ for all $i$, denotes the submodule they generate. Recall that when $N$ is a $G$-module, a function $\delta : G \to N$ is called a derivation (or a 1-cocycle) if $\delta(gh) = \delta(g) + g\cdot \delta(h)$ for all $g, h \in G$. The set of derivations from $G$ to $N$ is denoted $\Der(G, N)$. \section{Groups of the form $\Z \rtimes (\Z \rtimes (\Z \rtimes ... \rtimes \Z))$} Let $G_k$ be as in the introduction. (And let $G_1 = \Z$.) In the following lemma, we will use the fact that if $\delta \in \Der(G, N)$, then for $g \in G$, we have $\delta(g^{-1}) = -g^{-1}\delta(g)$, which follows from the fact that $\delta(g^{-1}g) = \delta(1) = 0$. \begin{lemma} \label{lem:General case for generators x_i lemma} Let $S$ be a simple $G_k$-module. There is a one-to-one correspondence between the set $\Der(G_k, S)$ and the set $\Delta$ of all functions $\delta: \{x_1, x_2, \dots, x_k\} \to S$ satisfying \begin{equation} (1-x_j^{-1})\delta(x_i) = (-x_i-x_j^{-1})\delta(x_j) \text{\quad for all $i$, $j$ with $i<j$.} \tag{$*$} \end{equation} \end{lemma} \begin{proof} First, let $\delta \in \Der(G_k, S)$. Fix $i$ and $j$ with $i < j$. By the relations of the presentation of $G_k$, we have that $x_ix_j = x_j^{-1}x_i$. Thus, $\delta(x_ix_j) = \delta(x_j^{-1}x_i)$. 
This gives us \\ \begin{align*} \delta(x_ix_j)&=\delta(x_j^{-1}x_i)\\ \delta(x_i) + x_i\delta(x_j) &= -x_j^{-1}\delta(x_j) + x_j^{-1}\delta(x_i) \\ \delta(x_i) - x_j^{-1}\delta(x_i) &= -x_i\delta(x_j) - x_j^{-1}\delta(x_j) \\ (1-x_j^{-1})\delta(x_i) &= (-x_i - x_j^{-1})\delta(x_j) \end{align*} which is the equation in ($*$). Next, let $\delta \in \Delta$. By exercise 3(a) in \cite{Brown} (pg.\ 90) (or Lemma 2.20 from \cite{Kelley-dissertation}), there is a unique derivation $\delta: F_k \xrightarrow{} S$, where $F_k$ is the free group on $k$ generators and the action of $F_k$ on $S$ is the induced action. Fix $i$ and $j$ with $i < j$. Then $\delta(x_ix_jx_i^{-1}x_j) = 0$ because \begin{align*} \delta(x_ix_jx_i^{-1}x_j) &= \delta(x_i) + x_i\delta(x_jx_i^{-1}x_j) \\ &= \delta(x_i) + x_i(\delta(x_j) + x_j\delta(x_i^{-1}x_j)) \\ &= \delta(x_i) + x_i(\delta(x_j) + x_j(\delta(x_i^{-1}) + x_i^{-1}\delta(x_j))) \\ &= \delta(x_i) + x_i\delta(x_j) + x_ix_j\delta(x_i^{-1})+x_ix_jx_i^{-1}\delta(x_j) \\ &= \delta(x_i) + x_i\delta(x_j) -x_ix_jx_i^{-1}\delta(x_i)+x_j^{-1}\delta(x_j) \\ &= \delta(x_i) + x_i\delta(x_j) -x_j^{-1}\delta(x_i) + x_j^{-1}\delta(x_j) \\ &= (\delta(x_i) - x_j^{-1}\delta(x_i))- (-x_i\delta(x_j) - x_j^{-1}\delta(x_j)) \\ &= 0. \end{align*} Here, the last equality holds because ($*$) holds. Thus, by Lemma 2.19 from \cite{Kelley-dissertation}, which is basically exercise 4(a) in \cite{Brown} (pg.\ 90), we have a derivation $\delta$ from $G_k$ to $S$. \end{proof} \begin{lemma} \label{lem:counting derivations form G_k} Consider $\Z/p\Z$, a simple $G_k$-module, where each generator $x_i$ of $G_k$ acts on $\Z/p\Z$ by multiplication by $-1$. Then \[ \lvert\Der(G_k, \Z/p\Z)\rvert = \begin{cases} 2^k & \text{if } p=2\\ p & \text{if } p \neq 2. \end{cases}\] \end{lemma} \begin{proof} If $p=2$, then the action of $G_k$ on $\Z/2\Z$ is trivial, and so $\lvert\Der(G_k, \Z/2\Z)\rvert=\lvert\Hom(G_k,\Z/2\Z)\rvert$, which is $2^k$. Next, suppose $p\neq2$. 
The action of $(1-x_i^{-1})$ and $(-x_i-x_j^{-1})$ on $\Z/p\Z$ is multiplication by 2, which is invertible since $p \neq 2$. So ($*$) from Lemma~\ref{lem:General case for generators x_i lemma} becomes $2\delta(x_i) = 2\delta(x_j)$ for all $i < j$, which simplifies to $\delta(x_i) = \delta(x_j)$ for all $i < j$. So by Lemma \ref{lem:General case for generators x_i lemma}, we may choose a derivation by picking $\delta(x_k)$ to be any element of $\Z/p\Z$, and then letting $\delta(x_i) = \delta(x_k)$ for all $i < k$. Thus $\lvert\Der(G_k, \Z/p\Z)\rvert = |\Z/p\Z|=p$. \end{proof} \begin{theorem} \label{thm:m_n(G_k)} Let $G_k$ be as above. Then \[ \tag{$*$} \maxsubgr(G_{k}) = \begin{cases} 1 + (k-1)n & \text{if $n$ is a prime with $n>2$} \\ \sum\limits_{j=0}^{k-1} 2^j & \text{if $n=2$}\\ 0 & \text{if $n$ is not prime}. \end{cases} \] \end{theorem} \begin{proof} Consider $N$, the subgroup of $G_k$ generated by $x_k$. Then $N \cong \Z$, and $N \trianglelefteq G_k$ and $G_k/ N \cong G_{k-1}$. So $N$ is a $G_{k-1}$-module. So since $G_k \cong N \rtimes G_{k-1}$, Lemma 5 from \cite{Kelley-some metabelian groups} gives us \begin{equation*} \maxsubgr(G_k) = \maxsubgr(G_{k-1}) + \sum\limits_{N_0}\lvert \Der(G_{k-1}, N/N_0)\rvert \end{equation*} where the sum is over all maximal submodules $N_0$ of $N$ of index $n$. Of course, the maximal submodules of $N$ are precisely the subgroups of prime index. Thus if $n$ is not prime, then $\maxsubgr(G_k)=0$; this follows by induction on $k$. Fix a prime $p$. For both cases $p>2$ and $p = 2$, we proceed by induction on $k$. First, let $p > 2$, and let $k=1$. Then $m_p(G_1) = 1 = 1+(k-1)p$. Assume ($*$) holds for $k=a$. Then $m_p(G_a)=1+(a-1)p$. Consider $k=a+1$. We have \begin{equation*} m_p(G_{a+1})= m_p(G_a) + \sum\limits_{N_0} \lvert \Der(G_{a}, N/N_0)\rvert. \end{equation*} By Lemma \ref{lem:counting derivations form G_k}, $\sum\limits_{N_0} \lvert \Der(G_{a}, N/N_0) \rvert = p$. 
So $m_p(G_{a+1}) = 1+(a-1)p + p = 1+ (a+1-1)p$, the desired result. Finally, let $p=2$, and let $k=1$. Then $m_2(G_1) = 1=2^0$. Assume ($*$) holds for $k=a$. Then $m_2(G_a)= 2^0+2^1+ \cdots + 2^{a-1}$. Consider $k=a+1$. Then $\maxsubgr[2](G_{a+1})=\maxsubgr[2](G_a) + \lvert \Der(G_{a}, \Z/2\Z) \rvert$. By Lemma \ref{lem:counting derivations form G_k}, $\lvert\Der(G_{a}, \Z/2\Z)\rvert=$ $2^{a}$. Thus $m_{2}(G_{a+1})=2^0+2^1+ \cdots +2^{a-1} + 2^a$, the desired result. \end{proof} \section{Some groups of the form $\Z^2 \rtimes (\Z \rtimes \Z)$} \label{sec:the groups H_n} Next, we will define the groups $H_k$, which are of the form $\Z^2 \rtimes (\Z \rtimes \Z)$. We will write $G_2=\Z \rtimes \Z$ as $\langle b \rangle \rtimes \langle a \rangle$ instead of $\langle x_2 \rangle \rtimes \langle x_1 \rangle$. Recall that $G_2=\langle a, b | aba^{-1}b\rangle$. To form a group of the form $\Z^2 \rtimes (\Z \rtimes \Z)$, what we need is an action of $G_2$ on $\Z^2$, and so we just need to find matrices $A, B \in \GL_2(\Z)$ such that $ABA^{-1}B = I_2$, and then we can say that the action (by conjugation) of the generator $a$ on $\Z^2$ is multiplication by the matrix $A$, and the generator $b$ acts (by conjugation) on $\Z^2$ by multiplication by the matrix $B$. We will take $A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$. Let $B = \begin{pmatrix} w & x \\ y & z \end{pmatrix}$. Then $ABA^{-1}B = \begin{pmatrix} y^2 + wz & yz + xz \\ wy + xw & wz + x^2 \end{pmatrix}$, and we need this to equal $I_2$. And so we have $y^2 + wz = 1$, $wz + x^2 = 1$, $wy + xw = 0$ (equivalently, $w = 0$ or $x + y = 0$), and $yz + xz = 0$ (equivalently, $z = 0$ or $x + y = 0$). Also, since we need $\det(B) = \pm 1$, we must have $wz - xy = \pm 1$. One way to solve these equations is to let $w = 0$. Then $x, y = \pm 1$. Take $x = 1$. If we take $y = -1$, then $z$ can be any integer. Let the group $H_k$ be the group formed when we take $B$ to be $B_k = \begin{pmatrix} 0 & 1 \\ -1 & k \end{pmatrix}$. 
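The relation $AB_kA^{-1}B_k = I_2$ is easy to check by hand, and, as a sanity check (not part of the paper), also numerically; the following minimal Python sketch verifies it for a range of $k$, using the fact that $A^{-1} = A$.

```python
# Sanity check (not from the paper): verify A * B_k * A^(-1) * B_k = I_2,
# so that A and B_k define an action of G_2 = <a, b | aba^(-1)b> on Z^2.
# Note that A^(-1) = A, since A^2 = I_2.

def matmul(M, N):
    """Multiply two 2x2 integer matrices given as nested lists."""
    return [[sum(M[i][t] * N[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1], [1, 0]]
I2 = [[1, 0], [0, 1]]

for k in range(-10, 11):
    Bk = [[0, 1], [-1, k]]
    prod = matmul(matmul(matmul(A, Bk), A), Bk)  # A * Bk * A^(-1) * Bk
    assert prod == I2, (k, prod)

print("A * B_k * A^(-1) * B_k = I_2 for all k in -10..10")
```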
\begin{lemma} \label{lem:maximal submodules contain pZ^2} Let $M$ be a maximal submodule of $\Z^2$. Then $p\Z^2 \leq M$. \end{lemma} \begin{proof} First, recall that every maximal subgroup of a polycyclic group has prime power index; this follows, for example, from the proof of Result 5.4.3 (iii) in \cite{Robinson}. Since $M$ yields a maximal subgroup of $H_k$ with index equal to $[\Z^2 : M]$, we must have $[\Z^2 : M] = p^j$ for some prime $p$. Therefore, $p^j \Z^2 \leq M$. Consider the group $\Z^2 / p^j \Z^2$. Its Frattini subgroup is $p\Z^2/p^j\Z^2$, and therefore, $p\Z^2/p^j\Z^2 + M/p^j\Z^2 = M/p^j\Z^2$. And hence $p\Z^2 + M = M$, that is, that $p\Z^2 \leq M$. \end{proof} For a prime $p$, inside the module $\Z^2$, consider the submodule $M_{p, \mathbf{w}} = \left\langle \left(\begin{smallmatrix} p \\ 0 \end{smallmatrix}\right), \left( \begin{smallmatrix} 0 \\ p \end{smallmatrix}\right), \mathbf{w} \right\rangle$, where $\mathbf{w} \in \Z^2$. We will assume $\mathbf{w} \notin p\Z^2$. Then $M_{p, \mathbf{w}}$ is a proper (and hence maximal) submodule of $\Z^2$ if and only if $\mathbf{w}$ is an eigenvector of both matrices $A$ and $B_k$, considered as elements of $\GL_2(\mathbb{F}_p)$. Let $\mathbf{v} = \left(\begin{smallmatrix} 1 \\ 1 \end{smallmatrix}\right)$ and $\mathbf{u} = \left( \begin{smallmatrix} 1 \\ -1 \end{smallmatrix}\right)$, and let $M_p = M_{p, \mathbf{v}}$ and $M_{p,-1} = M_{p, \mathbf{u}}$. Of course, $\mathbf{v}$ and $\mathbf{u}$ are eigenvectors of $A \in \GL_2(\mathbb{F}_p)$ for all $p$. (And $\mathbf{v} \equiv \mathbf{u}$ (mod 2). Hence, $M_2 = M_{2,-1}$.) \begin{lemma} \label{lem:the maximal submodules of Z^2} With the above notation, we have that $M_p$ is a maximal submodule of $\Z^2$ iff $p \mid k - 2$. Also, $M_{p, -1}$ is a maximal submodule of $\Z^2$ iff $p \mid k + 2$. Further, $M_{p, \mathbf{w}}$ is not a proper submodule of $\Z^2$ unless $M_{p, \mathbf{w}} = M_p$ or $M_{p, -1}$. 
Thus, $p\Z^2$ is a maximal submodule of $\Z^2$ iff $p \nmid (k - 2)(k + 2)$. Finally there are no maximal submodules of $\Z^2$ besides (the appropriate) $M_p$, $M_{p, -1}$, and $p\Z^2$. \end{lemma} \begin{proof} Let $\mathbf{v}$ and $\mathbf{u}$ be as above, but consider them as elements of $\Z^2/p\Z^2$. We have that $B_k \mathbf{v} = \left(\begin{smallmatrix} 1 \\ k - 1 \end{smallmatrix}\right)$, and so $B_k \mathbf{v} = \lambda \mathbf{v}$ for some $\lambda \in \Z$ iff $k - 1 \equiv 1$ (mod $p$), i.e.\ iff $p \mid k - 2$. Also, $B_k \mathbf{u} = \left(\begin{smallmatrix} -1 \\ -1 - k \end{smallmatrix}\right)$, and so $B_k \mathbf{u} = \lambda \mathbf{u}$ for some $\lambda \in \Z$ iff $\lambda \equiv -1$ (mod $p$) and $-1 - k \equiv -\lambda$ (mod $p$), and this happens iff $-1 - k \equiv 1$ or $k \equiv -2$ (mod $p$), which is equivalent to $p \mid k + 2$. That no other $M_{p, \mathbf{w}}$ is a proper submodule of $\Z^2$ follows from the fact that any eigenvector of $A$ is a multiple of $\mathbf{v}$ or of $\mathbf{u}$. Next, let $p \nmid (k - 2)(k + 2)$. Since neither $M_p$ nor $M_{p, -1}$ is a maximal submodule of $\Z^2$ and neither is any other $M_{p, \mathbf{w}}$, we have that $p\Z^2$ is a maximal submodule of $\Z^2$. And if $p \mid (k - 2)(k+2)$, then $p \mid k - 2$ or $p \mid k + 2$, in which case $M_p$ or $M_{p, -1}$ is a proper submodule of $\Z^2$ that properly contains $p\Z^2$. The final statement of this lemma follows from the previous parts of this lemma, together with Lemma~\ref{lem:maximal submodules contain pZ^2}: Indeed Lemma~\ref{lem:maximal submodules contain pZ^2} implies that either $p\Z^2$ is maximal, or some module containing it is. We are done since any proper submodule of $\Z^2$ containing $p\Z^2$ is of the form $M_{p, \mathbf{w}}$. \end{proof} For a module $N$, we let $\maxsubmod(N)$ denote the number of maximal submodules of $N$ of index $n$. 
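Since the index-$p$ maximal submodules of $\Z^2$ correspond to common invariant lines of $A$ and $B_k$ over $\mathbb{F}_p$, Lemma~\ref{lem:the maximal submodules of Z^2} can be confirmed by brute force for small primes. The following Python sketch (a sanity check, not part of the paper) counts those lines and compares with the criterion $p \mid (k-2)(k+2)$.

```python
# Sanity check (not from the paper): count the lines of F_p^2 invariant
# under both A and B_k; by the lemma there is exactly one such line iff
# p | (k - 2)(k + 2), and none otherwise.  (Uses pow(a, -1, p) for the
# modular inverse, which requires Python >= 3.8.)

def spans_invariant_line(M, v, p):
    """Is the F_p-line spanned by v invariant under the 2x2 matrix M?"""
    w = ((M[0][0] * v[0] + M[0][1] * v[1]) % p,
         (M[1][0] * v[0] + M[1][1] * v[1]) % p)
    return (w[0] * v[1] - w[1] * v[0]) % p == 0  # w is a multiple of v

def common_invariant_lines(k, p):
    A = [[0, 1], [1, 0]]
    Bk = [[0, 1], [-1, k]]
    lines = set()
    for a in range(p):
        for b in range(p):
            if (a, b) == (0, 0):
                continue
            if spans_invariant_line(A, (a, b), p) and \
               spans_invariant_line(Bk, (a, b), p):
                # normalise the spanning vector so each line is counted once
                inv = pow(a if a != 0 else b, -1, p)
                lines.add(((a * inv) % p, (b * inv) % p))
    return lines

for p in [2, 3, 5, 7, 11]:
    for k in range(-6, 7):
        expected = 1 if (k - 2) * (k + 2) % p == 0 else 0
        assert len(common_invariant_lines(k, p)) == expected, (p, k)

print("index-p maximal submodule counts agree with the lemma")
```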
\begin{corollary} \label{cor:maximal submodule growth of Z^2} In what follows, $p$ stands for a prime. We have \[ \maxsubmod(\Z^2) = \begin{cases} 1 & \text{if } n = p \text{ and } p \mid (k - 2)(k + 2) \\ 1 & \text{if } n = p^2 \text{ and } p \nmid (k - 2)(k + 2) \\ 0 & \text{otherwise}. \end{cases} \] \end{corollary} \begin{proof} Note that if $p \mid k - 2$ and $p \mid k + 2$, then $k - 2 \equiv k + 2$ (mod $p$), whence $p = 2$. And using the previous notation, recall that $M_2 = M_{2,-1}$. This corollary then follows from Lemma~\ref{lem:the maximal submodules of Z^2}. \end{proof} \begin{lemma} \label{lem:derivations from G_2} Consider $G_2$ with presentation $\langle a, b \mid aba^{-1}b\rangle$, as described above. Let $S$ be a simple $G_2$-module. Then there is a one-to-one correspondence between the set $\Der(G_2, S)$ and the set of functions $\delta: \{a,b\}\xrightarrow{} S$ satisfying \begin{equation*} (1-b^{-1}) \delta(a) = (-a-b^{-1})\delta(b). \tag{$*$} \end{equation*} \end{lemma} \begin{proof} This follows from Lemma $\ref{lem:General case for generators x_i lemma}$. \end{proof} \begin{lemma} \label{lem:counting derivations from G_2} Let $S$ be a simple $G_2$-module with $S$ either $\Z^2/M_p$ or $\Z^2/M_{p, -1}$ or $\Z^2/p\Z^2$. Then \[ \lvert \Der(G_2, S) \rvert = \begin{cases} |S|^2 = p^2 & \text{if } S = \Z^2/M_p \\ |S| = p & \text{if } S = \Z^2/M_{p, -1} \text{ and } p > 2 \\ |S| = p^2 & \text{if } S = \Z^2/p\Z^2. \end{cases} \] \end{lemma} \begin{proof} The element $1 - b^{-1}$ in ($*$) from Lemma~\ref{lem:derivations from G_2} acts on $\delta(a)$ by multiplication by the matrix $I_2 - B_k^{-1} = \left( \begin{smallmatrix} 1 - k & 1 \\ -1 & 1 \end{smallmatrix} \right)$ which has determinant $2 - k$, and the element $-a - b^{-1}$ acts by multiplication by the matrix $-A - B_k^{-1} = \left( \begin{smallmatrix} -k & 0 \\ -2 & 0 \end{smallmatrix} \right)$. First, suppose $S \neq \Z^2/M_p$. 
Then $S = \Z^2/M_{p, -1}$ with $p > 2$ (since $M_2 = M_{2, -1}$) or $S = \Z^2/p\Z^2$. Either way, with $p$ thus determined by $S$, we have that $M_p$ is not a maximal submodule of $\Z^2$ since either $p\Z^2$ is maximal or $M_{p, -1}$ is and $\maxsubmod[p](\Z^2) \leq 1$. Hence $p \nmid k - 2$ by Lemma~\ref{lem:the maximal submodules of Z^2}. In this case, $I_2 - B_k^{-1}$ is invertible, considered as an element of $\GL_2(\mathbb{F}_p)$. Hence ($*$) from Lemma~\ref{lem:derivations from G_2} may be written as \[ \delta(a) = (I_2 - B_k^{-1})^{-1}(-A - B_k^{-1}) \delta(b). \] And so in this case, we are free to choose $\delta(b)$ to be any element of $S$, and then this determines what $\delta(a)$ must be. Thus we would have $\lvert \Der(G_2, S) \rvert = |S|$. Next, suppose $S = \Z^2/M_p$. Then the maximality of $M_p$ implies (by Lemma~\ref{lem:the maximal submodules of Z^2}) that $p \mid k - 2$. And so then $I_2 - B_k^{-1} \equiv \left( \begin{smallmatrix}-1 & 1 \\ -1 & 1 \end{smallmatrix} \right)$ and $-A - B_k^{-1} \equiv \left( \begin{smallmatrix} -2 & 0 \\ -2 & 0 \end{smallmatrix} \right)$ (mod $p$). We have that $\left\{\left( \begin{smallmatrix} i \\ 0 \end{smallmatrix} \right) : 0 \leq i < p \right\}$ is a complete set of representatives of $\Z^2/M_p$. Then letting $\delta(a) = \left( \begin{smallmatrix} i \\ 0 \end{smallmatrix} \right) + M_p$ and $\delta(b) = \left( \begin{smallmatrix} j \\ 0 \end{smallmatrix} \right) + M_p$, we have that equation ($*$) from Lemma~\ref{lem:derivations from G_2} holds because $(I_2 - B_k^{-1}) \left( \begin{smallmatrix} i \\ 0 \end{smallmatrix} \right) = \left( \begin{smallmatrix} -i \\ -i \end{smallmatrix} \right) \in M_p$ and $(-A - B_k^{-1}) \left( \begin{smallmatrix} j \\ 0 \end{smallmatrix} \right) = \left( \begin{smallmatrix} -2j \\-2j \end{smallmatrix} \right) \in M_p$. And so both $(1-b^{-1}) \delta(a)$ and $(-a-b^{-1})\delta(b)$ are the trivial element of $\Z^2/M_p$. 
Therefore, in this case, we have $|S|^2$ options for a derivation from $G_2$ to $S$. \end{proof} \begin{theorem} \label{thm:exact maximal subgroup growth of H_k} In what follows, $p$ stands for a prime. We have \[ \maxsubgr(H_k) = \begin{cases} n^2 + n + 1 & \text{if } n = p \text{ and } p \mid k- 2\\ 2n + 1 & \text{if } n = p \text{ and } p \mid k + 2 \text{ with } p > 2\\ n + 1 & \text{if } n = p \text{ and } p \nmid (k-2)(k+2)\\ n & \text{if } n = p^2 \text{ and } p \nmid (k -2)(k+2) \\ 0 & \text{otherwise}. \end{cases} \] \end{theorem} \begin{proof} Consider $\Z^2 \trianglelefteq \Z^2 \rtimes G_2$. By Lemma 5 from \cite{Kelley-some metabelian groups}, we have \[\tag{1} \maxsubgr(H_k) = \maxsubgr(G_2) + \sum_{N_0} \lvert \Der(G_2, \Z^2 / N_0)\rvert \] where the sum is over all maximal submodules $N_0$ of $\Z^2$ of index $n$. Also, by Theorem~\ref{thm:m_n(G_k)}, we have \[\tag{2} \maxsubgr(G_2)= \begin{cases} n+1 & \text{if } n \text{ is a prime}\\ 0 & \text{otherwise}. \end{cases} \] We have that Lemmas~\ref{lem:the maximal submodules of Z^2} and \ref{lem:counting derivations from G_2} (and Corollary~\ref{cor:maximal submodule growth of Z^2}) together imply that \[\tag{3} \sum_{N_0} \lvert \Der(G_2, \Z^2 / N_0)\rvert = \begin{cases} n^2 & \text{if } n = p \text{ and } p \mid k - 2\\ n & \text{if } n = p \text{ and } p \mid k +2 \text{ with } p > 2\\ n & \text{if } n = p^2 \text{ and } p \nmid (k - 2)(k + 2)\\ 0 & \text{otherwise}. \end{cases} \] The present theorem follows from (1) by adding (2) and (3). \end{proof} For $n \in \Z$, let $\pi(n)$ denote the set of prime numbers dividing $n$. Then a consequence of Theorem~\ref{thm:exact maximal subgroup growth of H_k} is that for $i, j \in \Z$, if $\pi(i - 2) \neq \pi(j - 2)$ or $\pi(i + 2) \neq \pi(j+ 2)$, then $H_i \not\cong H_j$. Also, note that $\mdeg(H_2) = 2$ (because $\pi(0)$ is the set of all primes), and for all $k \neq 2$, $\mdeg(H_k) = 1$.
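The derivation counts of Lemma~\ref{lem:counting derivations from G_2}, which underlie (3), can also be confirmed by brute force: over a quotient $S$ of $\Z^2/p\Z^2$ by an invariant line, a derivation amounts to a pair $(\delta(a), \delta(b))$ with $(I_2 - B_k^{-1})\,\delta(a) = (-A - B_k^{-1})\,\delta(b)$ in $S$. The following Python sketch (a sanity check, not part of the paper) counts such pairs; because the line is invariant under the module action, the condition depends only on cosets, so the count over vectors divided by $|\text{line}|^2$ gives the count over $S \times S$.

```python
# Sanity check (not from the paper): count pairs (da, db) in S x S with
# (I - B_k^{-1}) da = (-A - B_k^{-1}) db, where S = F_p^2 / <line>
# (line = None means S = F_p^2 itself).  Here, mod p,
#   I - B_k^{-1}  = [[1 - k, 1], [-1, 1]],
#   -A - B_k^{-1} = [[-k, 0], [-2, 0]].

def count_derivation_pairs(k, p, line):
    M1 = [[(1 - k) % p, 1], [p - 1, 1]]      # I - B_k^{-1}
    M2 = [[(-k) % p, 0], [(p - 2) % p, 0]]   # -A - B_k^{-1}
    if line is None:
        sub = {(0, 0)}
    else:
        sub = {((t * line[0]) % p, (t * line[1]) % p) for t in range(p)}

    def act(M, v):
        return ((M[0][0] * v[0] + M[0][1] * v[1]) % p,
                (M[1][0] * v[0] + M[1][1] * v[1]) % p)

    vectors = [(a, b) for a in range(p) for b in range(p)]
    total = sum(1 for x in vectors for y in vectors
                if tuple((u - w) % p for u, w in zip(act(M1, x), act(M2, y)))
                in sub)
    # the condition depends only on cosets, so divide by |sub|^2
    return total // (len(sub) ** 2)

# S = Z^2 / M_p with p | k - 2: the lemma predicts p^2 derivations
assert count_derivation_pairs(7, 5, (1, 1)) == 25
# S = Z^2 / M_{p,-1} with p | k + 2 and p > 2: the lemma predicts p
assert count_derivation_pairs(1, 3, (1, 2)) == 3   # (1, 2) = u mod 3
# S = Z^2 / p Z^2 with p not dividing (k - 2)(k + 2): predicts p^2
assert count_derivation_pairs(1, 5, None) == 25

print("derivation counts agree with the lemma")
```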
https://arxiv.org/abs/2012.10371
The first higher Stasheff-Tamari orders are quotients of the higher Bruhat orders
We prove the conjecture that the higher Tamari orders of Dimakis and Müller-Hoissen coincide with the first higher Stasheff--Tamari orders. To this end, we show that the higher Tamari orders may be conceived as the image of an order-preserving map from the higher Bruhat orders to the first higher Stasheff--Tamari orders. This map is defined by taking the first cross-section of a cubillage of a cyclic zonotope. We provide a new proof that this map is surjective and show further that the map is full, which entails the aforementioned conjecture. We explain how order-preserving maps which are surjective and full correspond to quotients of posets. Our results connect the first higher Stasheff--Tamari orders with the literature on the role of the higher Tamari orders in integrable systems.
\section{Introduction} Two of the best known and most widely studied partially ordered sets in mathematics are the Tamari lattice \cite{tamari} and the weak Bruhat order on the symmetric group. The Tamari lattice appears in a broad range of areas of mathematics, physics, and computer science \cite{tamari-festschrift}. It was introduced by Tamari \cite{tamari,huang-tamari} as an order on the set of bracketings of a string. It is the 1-skeleton of the associahedron, which was famously used by Stasheff in topology to define $A_{\infty}$-spaces \cite{stasheff}. The weak Bruhat order on the symmetric group was first studied by statisticians in the 1960s \cite{savage,lehmann,yo_bruhat} and is now a fundamental part of Coxeter theory---see \cite{bb_coxeter}, for example. It furthermore provides a useful framework for studying questions in the theory of social choice \cite{abello_thesis,chameni-nembua,abello_bruhat,dkk_condorcet_tiling}. Both orders also appear in the representation theory of algebras \cite{buan-krause,thomas_tamari,mizuno-preproj,irrt} and the theory of cluster algebras \cite{fz-y,rs_framework}. These posets have higher-dimensional versions, namely the first higher Stasheff--Tamari orders $\mathcal{S}(n, \delta)$ \cite{kv-poly,er} and the higher Bruhat orders $\mathcal{B}(n, \delta + 1)$ \cite{ms}. These higher posets arise as orders on (equivalence classes of) maximal chains in the original posets, and then as orders on (equivalence classes of) maximal chains in those posets, and so on \cite{ms,rambau}. For instance, the elements of the higher Bruhat order $\mathcal{B}(n,2)$ correspond to reduced expressions for the longest element in the symmetric group, while the covering relations correspond to braid moves. In this way, the higher posets encode higher-categorical data latent within the original posets. Another way of thinking of the higher-dimensional posets is geometrically. 
The Tamari lattice concerns triangulations of convex polygons, whereas the first higher Stasheff--Tamari orders concern triangulations of cyclic polytopes; the weak Bruhat order concerns complexes of edges of hypercubes, whereas the higher Bruhat orders concern complexes of faces of hypercubes, known as \emph{cubillages} or \emph{fine zonotopal tilings}. The first higher Stasheff--Tamari orders and the higher Bruhat orders have their own connections with other areas of mathematics. The first higher Stasheff--Tamari orders occur in the representation theory of algebras \cite{njw-hst} and algebraic $K$-theory \cite{poguntke}. The higher Bruhat orders were originally introduced to study hyperplane arrangements \cite{ms} and have found application in the theories of Soergel bimodules \cite{elias_bruhat}, quasi-commuting Pl\"ucker coordinates \cite{lz}, and social choice \cite{gr_bruhat}. They are also tightly connected with the quantum Yang--Baxter equation and its generalisations \cite[and references therein]{dm-h-simplex}. The relation between the Tamari lattice and the weak Bruhat order has been of significant interest. There is a classical surjection from the latter to the former, which can be realised as a map from permutations to binary trees. This map arises in many different places \cite{bw_coxeter,bw_shell_2,tonks,lr_hopf,lr_order,reading_cambrian}. Kapranov and Voevodsky extended this surjection to a map from the higher Bruhat orders to the first higher Stasheff--Tamari orders $f \colon \mathcal{B}(n, \delta) \to \mathcal{S}(n + 2, \delta + 1)$ \cite{kv-poly}, which they conjectured was a surjection as well. This remains an open problem despite some detailed studies \cite{rambau,thomas-bst}. In this paper, we consider a closely related map from the higher Bruhat orders to the first higher Stasheff--Tamari orders $g\colon\mathcal{B}(n,\delta+1) \rightarrow \mathcal{S}(n,\delta)$. 
This map was first considered as a map of posets in \cite{thomas-bst}, in its dual form, and was itself considered in \cite[Appendix B]{dkk-survey}. As a map of sets it was considered and shown to be surjective in \cite{rs-baues}, using the language of \emph{lifting triangulations}. We provide a new proof of surjectivity, and go further by showing that the map is full. We call order-preserving maps which are both surjective and full \emph{quotient maps of posets}. \begin{customthm}{A}[Theorem~\ref{thm:quot}]\label{thm:int_a} The map $g \colon\mathcal{B}(n,\delta+1) \rightarrow \mathcal{S}(n,\delta)$ is a quotient map of posets. \end{customthm} Indeed, in this paper we give a new approach to quotients of posets. The quotient of a poset by an arbitrary equivalence relation is not always a well-defined poset. Previous authors \cite{hs-char-poly,cs_cong,reading_order} have given sufficient conditions for the quotient to be well-defined which ensure that other structure is also preserved, such as lattice-theoretic properties. Because the posets we are considering are not in general lattices \cite{ziegler-bruhat,njw-hst}, we instead consider weaker conditions, which are necessary and sufficient for the quotient to be a well-defined poset. We show that quotients of posets in this sense correspond to order-preserving maps which are surjective and full. Part of the motivation for considering quotient posets in the way that we do stems from \cite{dm-h}, where Dimakis and M\"uller-Hoissen apply an equivalence relation to the higher Bruhat orders to define the ``higher Tamari orders'' in order to describe a class of soliton solutions of the KP equation. In subsequent work \cite{dm-h-simplex}, the authors make further connections with mathematical physics by using the higher Tamari orders to define \emph{polygon equations}, an infinite family of equations which generalise the pentagon equation. 
The pentagon equation appears in many different areas of physics, including the theory of angular momentum in quantum mechanics \cite{dmh_amqp}, and conformal field theory \cite{dmh_cft}, as well as several other places \cite{dmh_qha,dmh_rd,dmh_qd}. The polygon equations which generalise the pentagon equation themselves occur in category theory \cite{kv-zam,street-fusion} and as ``Pachner relations'' in 4D topological quantum field theory \cite{dmh_tqft}. Dimakis and M\"uller-Hoissen conjectured the higher Tamari orders to coincide with the first higher Stasheff--Tamari orders. We prove this conjecture by showing that the higher Tamari orders are given by the image of the map $g$, as first noted in \cite[Appendix B]{dkk-survey}. We then apply Theorem~\ref{thm:int_a}; for the two sets of orders to be equal, it is only necessary for the map $g\colon \mathcal{B}(n, \delta+1) \to \mathcal{S}(n, \delta)$ to be a quotient map of posets in our sense, rather than in any stronger sense. The upshot of our result is that two far-reaching sets of combinatorics are united. We unite the first higher Stasheff--Tamari orders, with their connections to the representation theory of algebras \cite{njw-hst}, and the higher Tamari orders, which describe classes of KP solitons \cite{dm-h} and from which arise the polygon equations \cite{dm-h-simplex}. Since the map $g$ is defined by taking a certain cross-section of a cubillage, our work shows the connection between \cite{dm-h} and the papers \cite{kk,gpw}, in which KP solitons are related to cross-sections of three-dimensional cubillages, building on \cite{kw_invent,kw_advances,huang_thesis}. See also \cite{galashin,olarte_santos}, for more work on cross-sections of cubillages. \begin{customthm}{B}[Corollary~\ref{cor:t=st}] The higher Tamari orders and the first higher Stasheff--Tamari orders coincide. 
\end{customthm} Our approach is to use the description of the higher Bruhat orders in terms of cubillages of cyclic zonotopes and the description of these objects in terms of separated collections established in \cite{gp} and studied extensively in \cite{dkk,dkk-interrelations,dkk-survey,dkk-weak,dkk-symmetric}. These tools allow us to construct cubillages which are pre-images under the map $g$, which is instrumental in the proof of Theorem~\ref{thm:int_a}. This paper is structured as follows. In Section~\ref{sect:notation}, we lay out some notation and conventions that we use in the paper. We give background on the higher Bruhat orders and cubillages of cyclic zonotopes in Section~\ref{sect-hbo} and on the higher Stasheff--Tamari orders and triangulations of cyclic polytopes in Section~\ref{sect-hst}. In Section~\ref{sect:g_interpretations} we consider the map $g \colon \mathcal{B}(n, \delta + 1) \to \mathcal{S}(n, \delta)$. We give three different characterisations of this map in Sections~\ref{sect:g_geom},~\ref{sect:g_comb}, and~\ref{sect:vis}, which correspond to the three different possible interpretations of the higher Bruhat orders. In Section~\ref{sect:quotient_framework}, we lay the necessary groundwork in the theory of quotient posets to make the statement that $g$ is a quotient map of posets precise. In Section~\ref{sect-surj} we give a new proof of the fact that the map $g$ is surjective. We prove in Section~\ref{sect-quot} that it is full, and hence a quotient map of posets. \subsection*{Acknowledgements} This paper forms part of my PhD studies. I would like to thank my supervisor Professor Sibylle Schroll for her continuing support and attention. I would also like to thank Mikhail Kapranov for a clarification, Jordan McMahon for helpful comments on an earlier version of this paper, and Hugh Thomas and Mikhail Gorsky for interesting discussions. I am supported by a studentship from the University of Leicester. 
\section{Terminology and conventions}\label{sect:notation} Here we outline some general terminology and conventions that we use throughout the paper. \subsubsection{Notation} We use $[n]$ to denote the set $\left\lbrace 1, \dots, n\right\rbrace $ and $\subs{n}{k}{}$ to denote the subsets of $[n]$ of size $k$. We sometimes refer to such subsets as $k$-subsets. Given a set $A \subseteq [n]$ such that $\# A = k + 1$, unless otherwise indicated, we shall denote the elements of $A$ by $A = \{a_{0}, \dots, a_{k}\}$, where $a_{0} < \dots < a_{k}$. The same applies to other letters of the alphabet: the upper case letter denotes the set; the lower case letter is used for the elements, which are ordered according to their index starting from 0. \subsubsection{Ordering} In this paper, it is convenient for us to consider both the linear and cyclic orderings of $[n]$. Unless stated otherwise, it should be assumed that we refer to the linear ordering on this set. We denote by $(a,b), [a,b] \subseteq [n]$ respectively the \emph{open} and \emph{closed cyclic intervals}. That is, \begin{align*} (a,b) &:= \{ i \in [n] \mid a < i < b \text{ is a cyclic ordering} \}, \\ [a,b] &:= (a,b) \cup \{a,b\}. \end{align*} The one exception to this is that we will find it convenient to set $[a, a - 1] := \emptyset$. When we have $a<b$ in the linear ordering on $[n]$, we say that $[a,b]$ and $(a,b)$ are \emph{intervals}. We call $I \subseteq [n]$ an \emph{$l$-ple interval} if it can be written as a union of $l$ intervals, but cannot be written as a union of fewer than $l$ intervals. We similarly define \emph{cyclic $l$-ple intervals}. When we refer to the elements $a_{i}$ of a subset $A \subseteq [n]$ with $\# A = d + 1$, we will sometimes write $i \in \mathbb{Z}/(d+1)\mathbb{Z}$ to indicate that one should interpret $a_{d + 1}$ as being equal to $a_{0}$. That is, if $A = \{1, 3, 5\}$, then $a_{0} = 1, a_{1} = 3, a_{2} = 5, a_{3} = 1$. 
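To illustrate the interval terminology above: in $[6]$, the open cyclic interval $(5, 2)$ is $\{6, 1\}$ and the closed cyclic interval $[5, 2]$ is $\{5, 6, 1, 2\}$. In $[5]$, the subset $\{4, 5, 1\}$ is a $2$-ple interval, being the union of the intervals $\{4, 5\}$ and $\{1\}$, but it is a single cyclic interval, namely $[4, 1]$; the subset $\{1, 2, 4\}$ is both a $2$-ple interval and a cyclic $2$-ple interval.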
Given a linearly ordered set $L$ and $S \subset L$, we say that an element $l \in L \setminus S$ is an \emph{even gap} in $S$ if $\#\{s \in S \mid s > l \}$ is even. Otherwise, it is an \emph{odd gap}. A subset $S \subset L$ is \emph{even} if every $l \in L \setminus S$ is an even gap. A subset $S \subset L$ is \emph{odd} if every $l \in L \setminus S$ is an odd gap. For example, if $L = [5]$, then the subset $\{3, 4\}$ is even, since each of $1$, $2$ and $5$ has an even number of elements of $\{3, 4\}$ greater than it, whereas $3$ is an odd gap in $\{2, 5\}$. Let $\mathcal{P}$ be a partially ordered set. We say that $q$ \emph{covers} $p$ in $\mathcal{P}$ if $p < q$ and whenever $p \leqslant r \leqslant q$ in $\mathcal{P}$, then $r = p$ or $r = q$. If $q$ covers $p$ in $\mathcal{P}$ then we write $p \lessdot q$. If $\mathcal{P}$ is a finite poset, we have that $\mathcal{P}$ is the transitive-reflexive closure of its covering relations. Hence, in this case one can define $\mathcal{P}$ by specifying its covering relations. \subsubsection{Convex geometry} Recall that a set $\Gamma \subseteq \mathbb{R}^{\delta}$ is \emph{convex} if, for any $\mathbf{x}, \mathbf{y} \in \Gamma$, the line segment $\overline{\mathbf{xy}}$ between $\mathbf{x}$ and $\mathbf{y}$ is contained in $\Gamma$. Given a set of points $\Gamma \subseteq \mathbb{R}^{\delta}$, the \emph{convex hull} $\mathrm{conv}(\Gamma)$ is defined to be the smallest convex set containing $\Gamma$ or, equivalently, the intersection of all convex sets containing $\Gamma$. A \emph{convex polytope} is the convex hull of a finite set of points in $\mathbb{R}^{\delta}$. Let $\Delta \subset \mathbb{R}^{\delta}$ be a convex polytope. A \emph{facet} of $\Delta$ is a face of codimension one. The \emph{upper} facets of $\Delta$ are those that can be seen from a very large positive $\delta$-th coordinate. The \emph{lower} facets of $\Delta$ are those that can be seen from a very large negative $\delta$-th coordinate. A \emph{$k$-face} of a polytope is a face of dimension $k$. A \emph{subcomplex} of a polytope is a union of faces of the polytope.
Recall that for $\Gamma, \Gamma' \subseteq \mathbb{R}^{\delta}$, the \emph{Minkowski sum} of $\Gamma$ and $\Gamma'$ is defined to be \[\Gamma + \Gamma' = \{\mathbf{x} + \mathbf{y} \mid \mathbf{x} \in \Gamma,\, \mathbf{y} \in \Gamma'\}.\] \section{Background}\label{sect-background} \subsection{Higher Bruhat orders}\label{sect-hbo} In this section we give the definition of the higher Bruhat orders. The fundamental definition of the higher Bruhat orders for our purposes is the description in terms of cubillages of cyclic zonotopes given in \cite{kv-poly} and formalised in \cite{thomas-bst}. After giving this definition, we give the characterisation of cubillages of cyclic zonotopes established in \cite{gp} and studied in \cite{dkk}. Finally, we explain the original definition of the higher Bruhat orders from \cite{ms}, which we will also need. \subsubsection{Cubillages} We first give the geometric description of the higher Bruhat orders due to \cite{kv-poly,thomas-bst}. Consider the \emph{Veronese curve} $\xi\colon \mathbb{R} \rightarrow \mathbb{R}^{\delta+1}$, given by $\xi_{t}=(1,t, \dots, t^{\delta})$. Let $\left\lbrace t_{1}, \dots, t_{n}\right\rbrace \subset \mathbb{R}$ with $t_{1}<\dots<t_{n}$ and $n \geqslant \delta+1$. The \emph{cyclic zonotope} $Z(n,\delta+1)$ is defined to be the Minkowski sum of the line segments \[\overline{\mathbf{0}\xi_{t_{1}}} + \dots + \overline{\mathbf{0}\xi_{t_{n}}},\] where $\mathbf{0}$ is the origin. The properties of the zonotope do not depend on the exact choice of $\{t_{1}, \dots, t_{n}\} \subset \mathbb{R}$. Hence, for ease we set $t_{i}=i$. For $k \geqslant l$ we have a canonical projection map \begin{align*} \pi_{k,l} \colon \mathbb{R}^{k} &\to \mathbb{R}^{l} \\ (x_{1}, \dots, x_{k}) &\mapsto (x_{1}, \dots, x_{l}) \end{align*} which maps $Z(n, k) \to Z(n, l)$. A \emph{cubillage} $\mathcal{Q}$ of $Z(n,\delta+1)$ is a subcomplex of $Z(n, n)$ such that $\pi_{n, \delta+1} \colon \mathcal{Q} \to Z(n, \delta +1)$ is a bijection. 
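For a small example, the cyclic zonotope $Z(3, 2)$ is a hexagon, and a cubillage of $Z(3, 2)$ consists of three $2$-dimensional faces of $Z(3, 3)$ which project bijectively onto it, tiling the hexagon by three rhombi. There are exactly two such cubillages, which are the two elements of $\mathcal{B}(3, 2)$; one of them appears as the right-hand cubillage in Figure~\ref{fig:ipie}.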
Note that $\mathcal{Q}$ therefore contains faces of $Z(n, n)$ of dimension at most $\delta+1$. We call these $(\delta + 1)$-dimensional faces of $\mathcal{Q}$ the \emph{cubes} of the cubillage. In the literature, cubillages are often called \emph{fine zonotopal tilings}---for example, in \cite{gp}. After \cite[Theorem 4.4]{kv-poly} and \cite[Theorem 2.1, Proposition 2.1]{thomas-bst} one may define the \emph{higher Bruhat poset} $\mathcal{B}(n,\delta+1)$ as follows. The elements of $\mathcal{B}(n,\delta+1)$ consist of cubillages of $Z(n,\delta+1)$. The covering relations of $\mathcal{B}(n,\delta+1)$ are given by pairs of cubillages $\mathcal{Q} \lessdot \mathcal{Q}'$ where there is a $(\delta + 2)$-face $\Gamma$ of $Z(n, n)$ such that $\mathcal{Q} \setminus \Gamma = \mathcal{Q}' \setminus \Gamma$ and $\pi_{n, \delta + 2}(\mathcal{Q})$ contains the lower facets of $\pi_{n, \delta + 2}(\Gamma)$, whereas $\pi_{n, \delta + 2}(\mathcal{Q}')$ contains the upper facets of $\pi_{n, \delta + 2}(\Gamma)$. Here we say that $\mathcal{Q}'$ is an \emph{increasing flip} of $\mathcal{Q}$. The cyclic zonotope $Z(n,\delta+1)$ possesses two canonical cubillages, one given by the subcomplex $\mathcal{Q}_{l}$ of $Z(n, n)$ such that $\pi_{n, \delta + 2}(\mathcal{Q}_{l})$ consists of the lower facets of $Z(n,\delta+2)$, which we call the \emph{lower cubillage}, and the other given by the subcomplex $\mathcal{Q}_{u}$ of $Z(n, n)$ such that $\pi_{n, \delta + 2}(\mathcal{Q}_{u})$ consists of the upper facets of $Z(n,\delta+2)$, which we call the \emph{upper cubillage}. The lower cubillage of $Z(n,\delta+1)$ gives the unique minimum of the poset $\mathcal{B}(n,\delta+1)$, and the upper cubillage gives the unique maximum. \subsubsection{Separated collections} We now explain how one may characterise cubillages as separated collections of subsets, as shown in \cite{gp}. 
The subsets $E \subseteq [n]$ are naturally identified with the corresponding points $\xi_{E}:=\sum_{e \in E}\xi_{e}$ in $Z(n, n)$, where $\xi_{\emptyset} := \mathbf{0}$. This represents each vertex of a cubillage $\mathcal{Q}$ as a subset of $[n]$. For a cubillage $\mathcal{Q}$ of $Z(n,\delta+1)$, the collection of subsets corresponding to its vertices is called the \emph{spectrum} of $\mathcal{Q}$ and is denoted by $\mathrm{Sp}(\mathcal{Q})$. Each cube in $\mathcal{Q}$ is viewed as the Minkowski sum of the line segments \[\overline{\xi_{E}\xi_{E \cup \{a_{i}\}}}\] for $a_{i} \in A$, where $A$ is some set with $\# A = \delta + 1$ and $E\subseteq [n]\setminus A$. Here we call $\xi_{E}$ the \emph{initial vertex} of the cube, $\xi_{E \cup A}$ the \emph{final vertex}, and $A$ the set of \emph{generating vectors}. We say that, given two sets $A,B \subseteq [n]$, $A$ \emph{$\delta$-interweaves} $B$ if there exist $i_{\delta+1}, i_{\delta-1}, \ldots \in B \setminus A$ and $i_{\delta}, i_{\delta-2}, \ldots \in A \setminus B$ such that \[i_{0}<i_{1}<\dots< i_{\delta+1}.\] We also say that $\{i_{\delta+1},i_{\delta-1}, \dots\}$ and $\{i_{\delta},i_{\delta-2}, \dots\}$ \emph{witness} that $A$ $\delta$-interweaves $B$. If either $A$ $\delta$-interweaves $B$ or $B$ $\delta$-interweaves $A$, then we say that $A$ and $B$ are \emph{$\delta$-interweaving}. If $A$ $\delta$-interweaves $B$ as above and $B \setminus A = \{i_{\delta+1}, i_{\delta-1}, \ldots\}$ and $A \setminus B = \{i_{\delta}, i_{\delta-2}, \ldots\}$, then we say that \emph{$A$ tightly $\delta$-interweaves $B$}, in the manner of \cite{bbge}. If $A$ and $B$ are not $\delta$-interweaving then we say that $A$ and $B$ are \emph{$\delta$-separated}, following \cite{gp,dkk}. We call a collection $\mathcal{C}\subseteq 2^{[n]}$ $\delta$-separated if it is pairwise $\delta$-separated.
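To illustrate the definition, take $\delta = 1$: then $A$ $1$-interweaves $B$ if and only if there are $i_{0}, i_{2} \in B \setminus A$ and $i_{1} \in A \setminus B$ with $i_{0} < i_{1} < i_{2}$. For instance, $\{2\}$ $1$-interweaves $\{1, 3\}$, and indeed tightly so, whereas $\{1, 2\}$ and $\{3, 4\}$ are $1$-separated, since neither set has an element lying strictly between two elements of the other.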
If $\delta = 2d$, then being $\delta$-interweaving is the same as being $(d + 1)$-interlacing in the terminology of \cite{bbge} and $(d + 1)$-intertwining in the terminology of \cite{njw-jm}. We choose new terminology because we wish to have an opposite of $\delta$-separated for $\delta$ odd as well as $\delta$ even. It follows from \cite[Theorem 2.7]{gp} that the correspondence $\mathcal{Q} \mapsto \mathrm{Sp}(\mathcal{Q})$ gives a bijection between the set of cubillages on $Z(n,\delta+1)$ and the set of $\delta$-separated collections of maximal size in $2^{[n]}$. In particular, for any cubillage $\mathcal{Q}$ of $Z(n,\delta+1)$, we have that $\# \mathrm{Sp}(\mathcal{Q}) = \sum_{i=0}^{\delta+1}\binom{n}{i}$, which is the maximal size of a $\delta$-separated collection in $2^{[n]}$. For $A \subseteq [n]$, if $\pi_{n, \delta + 1}(\xi_{A})$ is a boundary vertex of the zonotope $Z(n,\delta+1)$, then $\xi_{A}$ is a vertex of every cubillage of $Z(n, \delta + 1)$, and hence $A$ is in every $\delta$-separated collection in $2^{[n]}$ of maximal size. Moreover, the subsets $A \subseteq [n]$ such that $\pi_{n, \delta + 1}(\xi_{A})$ is a boundary vertex of the zonotope $Z(n,\delta+1)$ are precisely those subsets which are $\delta$-separated from every other subset of $[n]$. Hence the subsets of interest are those which project to the interior of the zonotope $Z(n,\delta+1)$. The vertices of the zonotope $Z(n,\delta+1)$ are known to be in bijection with the regions of the arrangement of $(\delta-1)$-spheres associated with the set of points $\Xi = \{\xi_{1}, \dots, \xi_{n}\}$ on the Veronese curve, see \cite[Proposition 2.2.2]{blswz}. Since no set of $\delta$ points of $\Xi$ lies in a linear hyperplane, the number of regions of this arrangement of $(\delta - 1)$-spheres is the maximal possible, namely \[\binom{n-1}{\delta} + \sum_{i=0}^{\delta}\binom{n}{i}.\] (For instance, see \cite[Problem 4, p.73]{comtet}.)
Hence a cubillage $\mathcal{Q}$ of $Z(n,\delta+1)$ has \[\sum_{i=0}^{\delta+1}\binom{n}{i}-\left(\binom{n-1}{\delta} + \sum_{i=0}^{\delta}\binom{n}{i}\right)=\binom{n-1}{\delta+1}\] vertices which project to the interior of $Z(n,\delta+1)$ if $n > \delta + 1$, and $0$ otherwise. We call a point $\xi_{A} \in \mathbb{R}^{n}$ an \emph{internal point in $Z(n, \delta + 1)$} if $\pi_{n, \delta + 1}(\xi_{A})$ lies in the interior of $Z(n, \delta + 1)$. We call a vertex $\xi_{A}$ of a cubillage $\mathcal{Q} \subset \mathbb{R}^{n}$ of $Z(n, \delta + 1)$ \emph{internal} if $\xi_{A}$ is an internal point in $Z(n,\delta + 1)$. Given a cubillage $\mathcal{Q}$ of $Z(n, \delta + 1)$, we define its \emph{internal spectrum} $\mathrm{ISp}(\mathcal{Q})$ to consist of the elements of $\mathrm{Sp}(\mathcal{Q})$ which correspond to internal vertices of $\mathcal{Q}$. By \cite[(2.7)]{dkk}, $\xi_{A}$ is an internal point in $Z(n,\delta+1)$ if and only if \begin{itemize} \item $\delta = 2d$ and $A$ is a cyclic $l$-ple interval for $l \geqslant d + 1$, or \item $\delta = 2d + 1$ and $A$ is an $l$-ple interval for $l \geqslant d + 2$, or a $(d + 1)$-ple interval containing neither $1$ nor $n$. \end{itemize} We will also need the following concepts from \cite{dkk-interrelations}. Given a cubillage $\mathcal{Q}$ of $Z(n, \delta +1)$ and a subcomplex $\mathcal{M}$ of $\mathcal{Q}$, we say that $\mathcal{M}$ is a \emph{membrane} in $\mathcal{Q}$ if $\mathcal{M}$ is a cubillage of $Z(n, \delta)$. We say that an edge in a cubillage $\mathcal{Q}$ from $\xi_{E}$ to $\xi_{E \cup \{i\}}$ is an edge \emph{of colour $i$}, where $E \subseteq [n] \setminus \{i\}$ is any subset. For a cubillage $\mathcal{Q}$ of $Z(n, \delta + 1)$ and $i \in [n]$, we define the \emph{$i$-pie} $\Pi_{i}(\mathcal{Q})$ to be the subcomplex of $\mathcal{Q}$ given by all the cubes which have an edge of colour $i$. In \cite[Chapter 7]{ziegler}, the $i$-pie is called the \emph{$i$-th zone}. 
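To illustrate these counts and the characterisation of internal points above, let $n = 4$ and $\delta = 1$, and consider the left-hand cubillage of $Z(4, 2)$ in Figure~\ref{fig:ipie}. Its spectrum has $\binom{4}{0} + \binom{4}{1} + \binom{4}{2} = 11$ elements, of which $\binom{3}{2} = 3$ correspond to internal vertices, namely $\{3\}$, $\{1, 3\}$ and $\{2, 3\}$. Here $\delta = 2d + 1$ with $d = 0$, and indeed each of these subsets is either a $2$-ple interval or an interval containing neither $1$ nor $4$.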
By \cite{dkk,gp}, we can obtain a cubillage $\mathcal{Q}/i$ from $\mathcal{Q}$ by contracting the edges of colour $i$ until they have length zero. The cubillage $\mathcal{Q}/i$ is known as the $i$-contraction of $\mathcal{Q}$. The image of the $n$-pie $\Pi_{n}(\mathcal{Q})$ is a membrane in $\mathcal{Q}/n$, but this is not in general true for $1 < i < n$, by \cite[(4.4)]{dkk}. An example of $4$-contraction is shown in Figure~\ref{fig:ipie}. Here the 4-pie is shown in red on the left-hand cubillage, and this is contracted to zero in the right-hand cubillage, where its image is a membrane. Note that here we are illustrating cubillages of $Z(4, 2)$ and $Z(3, 2)$ by their images under the projection maps $\pi_{4, 2}$ and $\pi_{3, 2}$ respectively. We will always illustrate cubillages in this way. \begin{figure} \caption{$4$-contraction}\label{fig:ipie} \[ \scalebox{0.8}{ \begin{tikzpicture} \begin{scope}[xscale=0.75] \coordinate(0) at (0,0); \node at (0)[left = 1mm of 0]{$\emptyset$}; \coordinate(1) at (2,-2); \node at (1)[below left = 1mm of 1]{1}; \coordinate(12) at (4,-3); \node at (12)[below = 1mm of 12]{12}; \coordinate(123) at (6,-2); \node at (123)[below right = 1mm of 123]{123}; \coordinate(1234) at (8,0); \node at (1234)[right = 1mm of 1234]{1234}; \coordinate(234) at (6,2); \node at (234)[above right = 1mm of 234]{234}; \coordinate(34) at (4,3); \node at (34)[above = 1mm of 34]{34}; \coordinate(4) at (2,2); \node at (4)[above left = 1mm of 4]{4}; \draw (0) -- (1) -- (12) -- (123) -- (1234) -- (234) -- (34) -- (4) -- (0); \coordinate(3) at (2,1); \coordinate(13) at (4,-1); \coordinate(23) at (4,0); \draw[fill=red!30,draw=none] (0) -- (4) -- (34) -- (3) -- (0); \draw[fill=red!30,draw=none] (3) -- (34) -- (234) -- (23) -- (3); \draw[fill=red!30,draw=none] (23) -- (234) -- (1234) -- (123) -- (23); \draw (0) -- (3); \draw (3) -- (34); \draw (3) -- (23); \draw (3) -- (13); \draw (1) -- (13); \draw (13) -- (123); \draw (23) -- (123); \draw (23) -- (234); \node at 
(3) [below = 1mm of 3]{3}; \node at (13) [left = 1mm of 13]{13}; \node at (23) [below = 1mm of 23]{23}; \draw[->,ultra thick] (9.75, 0) -- (10.75, 0); \coordinate(0) at (12,0); \node at (0)[left = 1mm of 0]{$\emptyset$}; \coordinate(1) at (14,-2); \node at (1)[below left = 1mm of 1]{1}; \coordinate(12) at (16,-3); \node at (12)[below = 1mm of 12]{12}; \coordinate(123) at (18,-2); \node at (123)[below right = 1mm of 123]{123}; \coordinate(3) at (14,1); \coordinate(13) at (16,-1); \coordinate(23) at (16,0); \draw (0) -- (1) -- (12) -- (123) -- (23) -- (3) -- (0); \draw (3) -- (13); \draw (1) -- (13); \draw (13) -- (123); \node at (3) [above left = 1mm of 3]{3}; \node at (13) [left = 1mm of 13]{13}; \node at (23) [right = 1mm of 23]{23}; \draw[red, ultra thick] (0) -- (3) -- (23) -- (123); \end{scope} \end{tikzpicture} } \] \end{figure} \subsubsection{Admissible orders}\label{sect:admissible_orders} The original definition of the higher Bruhat orders from \cite{ms} is as follows. Given $A \in \binom{[n]}{\delta+2}$, the set \[P(A)=\left\lbrace B \mathrel{\Big|} B \in \binom{[n]}{\delta+1}, B \subset A \right\rbrace \] is called the \emph{packet} of $A$. The set $P(A)$ is naturally ordered by the \emph{lexicographic order}, where $A\setminus a_{i} < A \setminus a_{j}$ if and only if $j < i$. An ordering $\alpha$ of $\binom{[n]}{\delta+1}$ is \emph{admissible} if the elements of any packet appear in either lexicographic or reverse-lexicographic order under $\alpha$. Two orderings $\alpha$ and $\alpha'$ of $\binom{[n]}{\delta + 1}$ are \emph{equivalent} if they differ by a sequence of interchanges of pairs of adjacent elements that do not lie in a common packet. Note that these interchanges preserve admissibility. We use $[\alpha]$ to denote the equivalence class of $\alpha$. The \emph{inversion set} $\mathrm{inv}(\alpha)$ of an admissible order $\alpha$ is the set of all $(\delta+2)$-subsets of $[n]$ whose packets appear in reverse-lexicographic order in $\alpha$.
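As an example, consider the case $\delta + 1 = 1$. An ordering of $\binom{[n]}{1}$ is a permutation of $[n]$, and every such ordering is admissible, since the packet of a $2$-subset $\{a, b\}$ consists of the two singletons $\{a\}$ and $\{b\}$, which necessarily appear in lexicographic or reverse-lexicographic order. The inversion set of a permutation is then its set of classical inversions: the pairs $\{a, b\}$ with $a < b$ such that $\{b\}$ appears before $\{a\}$. In this way $\mathcal{B}(n, 1)$ recovers the weak Bruhat order on the symmetric group \cite{ms}.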
Note that inversion sets are well-defined on equivalence classes of admissible orders. The higher Bruhat poset $\mathcal{B}(n,\delta+1)$ is the partial order on equivalence classes of admissible orders of $\binom{[n]}{\delta+1}$ where $[\alpha] \lessdot [\alpha']$ if $\mathrm{inv}(\alpha')=\mathrm{inv}(\alpha) \cup \{ A\}$ for $A \in \binom{[n]}{\delta+2}\setminus\mathrm{inv}(\alpha)$. We now explain the bijection between cubillages of $Z(n,\delta+1)$ and admissible orders on $\binom{[n]}{\delta+1}$. Let $\mathcal{Q}$ be a cubillage of $Z(n,\delta+1)$ corresponding to an equivalence class $[\alpha]$ of admissible orders on $\binom{[n]}{\delta+1}$. It follows from \cite{thomas-bst} that the cubes of $\mathcal{Q}$ are in bijection with the elements of $\binom{[n]}{\delta+1}$ via sending a cube to its set of generating vectors. A packet which can be inverted corresponds to a set of lower facets of $\pi_{n, \delta + 2}(\Gamma)$, where $\Gamma$ is a $(\delta+2)$-face of $Z(n, n)$. Inverting the packet corresponds to an increasing flip: exchanging the lower facets of $\pi_{n, \delta + 2}(\Gamma)$ for its upper facets. Hence, a cubillage $\mathcal{Q}$ of $Z(n,\delta+1)$ is determined once, for every element of $\binom{[n]}{\delta + 1}$, one knows the initial vertex of the cube with that set of generating vectors. Let $\alpha$ be an admissible order of $\binom{[n]}{\delta+1}$ corresponding to a cubillage $\mathcal{Q}$ of $Z(n,\delta+1)$ and let $\Delta$ be the cube of $\mathcal{Q}$ with set of generating vectors $I$ and initial vertex $\xi_{E}$. Then, given $e \in [n]\setminus I$, we have that $e \in E$ if and only if either \begin{itemize} \item $I \cup \{e\} \notin \mathrm{inv}(\alpha)$ and $e$ is an odd gap in $I$, or \item $I \cup \{e\} \in \mathrm{inv}(\alpha)$ and $e$ is an even gap in $I$. \end{itemize} This follows from \cite[Theorem 2.1]{thomas-bst} if one swaps the sign convention for $\delta + 1$ odd.
This makes the statement simpler and reveals connections with the paper \cite{dm-h}, as we explain in Section~\ref{sect:vis}. An analogous statement was shown for more general zonotopes in \cite[Lemma 5.13]{gpw}. Conversely, given a cubillage $\mathcal{Q}$ of $Z(n, \delta + 1)$, one can determine an equivalence class of admissible orders of $\binom{[n]}{\delta + 1}$. Define a partial order on the cubes of the cubillage $\mathcal{Q}$ by $\Delta \lessdot \Delta'$ if $\pi_{n, \delta + 1}(\Delta) \cap \pi_{n, \delta + 1}(\Delta')$ is an upper facet of $\pi_{n, \delta + 1}(\Delta)$ and a lower facet of $\pi_{n, \delta + 1}(\Delta')$. The linear extensions of this partial order then comprise the admissible orders in the equivalence class $[\alpha]$ corresponding to $\mathcal{Q}$, by \cite[Lemma 2.2]{ziegler-bruhat} and \cite{ms,thomas-bst}. \subsection{Higher Stasheff--Tamari orders}\label{sect-hst} In this section we give the definition of the first higher Stasheff--Tamari orders. These were originally defined by Kapranov and Voevodsky under the name of the \emph{higher Stasheff orders} in the context of higher category theory \cite[Definition 3.3]{kv-poly}. This was built upon by Edelman and Reiner, who introduced the \emph{first} and \emph{second higher Stasheff--Tamari orders} in \cite{er}. Thomas later proved that the first higher Stasheff--Tamari orders were the same as the higher Stasheff orders of Kapranov and Voevodsky \cite[Proposition 3.3]{thomas-bst}. The definition of the first higher Stasheff--Tamari orders is similar in style to the geometric definition of the higher Bruhat orders using cubillages. The \emph{moment curve} is defined by $p_{t}=(t, t^{2}, \dots , t^{\delta}) \in \mathbb{R}^{\delta}$ for $t \in \mathbb{R}$. Choose $t_{1}, \dots , t_{n} \in \mathbb{R}$ such that $t_{1} < t_{2} < \dots < t_{n}$ and $n \geqslant \delta + 1$. The \emph{cyclic polytope} $C(n, \delta)$ is defined to be the convex polytope $\mathrm{conv}(p_{t_{1}}, \dots , p_{t_{n}})$.
The properties of the cyclic polytope do not depend on the exact choice of $\{t_{1}, \dots, t_{n}\} \subset \mathbb{R}$. Hence, for ease we set $t_{i}=i$. A \emph{triangulation} of the cyclic polytope $C(n, \delta)$ is a subcomplex $\mathcal{T}$ of $C(n, n - 1)$ such that $\pi_{n-1, \delta} \colon \mathcal{T} \to C(n, \delta)$ is a bijection. After \cite{kv-poly, thomas-bst}, we define the \emph{first higher Stasheff--Tamari poset} $\mathcal{S}(n,\delta)$ as follows. The elements of $\mathcal{S}(n,\delta)$ are triangulations of $C(n,\delta)$. The covering relations of $\mathcal{S}(n,\delta)$ are given by pairs of triangulations $\mathcal{T} \lessdot \mathcal{T}'$ where there is a $(\delta + 1)$-face $\Sigma$ of $C(n, n - 1)$ such that $\mathcal{T}\setminus \Sigma = \mathcal{T}' \setminus \Sigma$ and $\pi_{n - 1, \delta + 1}(\mathcal{T})$ contains the lower facets of $\pi_{n - 1, \delta + 1}(\Sigma)$, whereas $\pi_{n - 1, \delta + 1}(\mathcal{T}')$ contains the upper facets of $\pi_{n - 1, \delta + 1}(\Sigma)$. Here we say that $\mathcal{T}'$ is an \emph{increasing flip} of $\mathcal{T}$. The cyclic polytope $C(n,\delta)$ possesses two canonical triangulations, one given by the subcomplex $\mathcal{T}_{l}$ of $C(n, n - 1)$ such that $\pi_{n - 1, \delta + 1}(\mathcal{T}_{l})$ consists of the lower facets of $C(n, \delta+1)$, known as the \emph{lower triangulation}, and the other given by the subcomplex $\mathcal{T}_{u}$ of $C(n, n - 1)$ such that $\pi_{n - 1, \delta + 1}(\mathcal{T}_{u})$ consists of the upper facets of $C(n, \delta+1)$, known as the \emph{upper triangulation}. The lower triangulation of $C(n,\delta)$ gives the unique minimum of the poset $\mathcal{S}(n,\delta)$ and the upper triangulation gives the unique maximum. Given a subset $A \subseteq [n]$ with $\# A = k + 1$, we write $|A| := \mathrm{conv}(p_{a_{0}}, \dots, p_{a_{k}})$ for its geometric realisation as a simplex in $\mathbb{R}^{n - 1}$.
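For example, the polygon $C(n, 2)$ has lower triangulation given by the fan at vertex $1$, with triangles $|\{1, i, i + 1\}|$ for $2 \leqslant i \leqslant n - 1$, and upper triangulation given by the fan at vertex $n$, with triangles $|\{i, i + 1, n\}|$ for $1 \leqslant i \leqslant n - 2$. The poset $\mathcal{S}(n, 2)$ is the Tamari lattice, with these two fans as its minimum and maximum.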
One may combinatorially describe the lower facets and upper facets of $C(n, \delta)$, and hence the lower and upper triangulations of $C(n, \delta - 1)$. Gale's Evenness Criterion \cite[Theorem 3]{gale}\cite[Lemma 2.3]{er} states that, for $F \subseteq [n]$ with $\# F = \delta$, we have that $\pi_{n - 1, \delta}|F|$ is an upper facet of $C(n, \delta)$ if and only if $F$ is an odd subset, and that $\pi_{n - 1, \delta}|F|$ is a lower facet of $C(n, \delta)$ if and only if $F$ is an even subset. For example, the lower facets of the quadrilateral $C(4, 2)$ are given by the even subsets $\{1, 2\}$, $\{2, 3\}$ and $\{3, 4\}$, and its unique upper facet by the odd subset $\{1, 4\}$. We remove excess brackets, so that here $\pi_{n - 1, \delta}|F| = \pi_{n - 1, \delta}(|F|)$. We call a $\floor{\delta/2}$-simplex $|A| \subset \mathbb{R}^{n - 1}$ \emph{internal in} $C(n, \delta)$ if $\pi_{n - 1, \delta}|A|$ does not lie within a facet of $C(n,\delta)$. A $\floor{\delta/2}$-simplex $|A|$ of a triangulation $\mathcal{T} \subset \mathbb{R}^{n - 1}$ of $C(n,\delta)$ is an \emph{internal $\floor{\delta/2}$-simplex} if it is internal in $C(n, \delta)$. It is clear that a triangulation of a convex polygon is determined by the arcs of the triangulation; similarly, a triangulation of $C(n,\delta)$ is determined by the internal $\floor{\delta/2}$-simplices of the triangulation, by a theorem of Dey \cite{dey}. Hence, for a triangulation $\mathcal{T}$ of $C(n,\delta)$ we define \[\mathring{e}(\mathcal{T}) := \left\lbrace A \in \binom{[n]}{\floor{\delta/2} + 1} \mathrel{\Big|} |A| \text{ is an internal }\floor{\delta/2}\text{-simplex of }\mathcal{T}\right\rbrace.\] By \cite[Lemma 2.1]{ot} and \cite[Lemma 4.2]{njw-hst}, given $A \in \binom{[n]}{\floor{\delta/2} + 1}$, we have that $|A|$ is an internal $\floor{\delta/2}$-simplex in $C(n,\delta)$ if and only if \begin{itemize} \item $\delta = 2d$ and $A$ is a cyclic $(d + 1)$-ple interval, or \item $\delta = 2d + 1$ and $A$ is a $(d + 1)$-ple interval containing neither $1$ nor $n$.
\end{itemize} \begin{observation}\label{obs:internal} Given $A \in \binom{[n]}{\floor{\delta/2} + 1}$, we have that $|A|$ is an internal $\floor{\delta/2}$-simplex in $C(n, \delta)$ if and only if $\xi_{A}$ is an internal point in $Z(n, \delta + 1)$. \end{observation} A \emph{circuit} of a cyclic polytope $C(n,\delta)$ is a pair of disjoint subsets $X, Y \subseteq [n]$ which are inclusion-minimal with the property that $\pi_{n - 1, \delta}|X| \cap \pi_{n - 1, \delta}|Y| \neq \emptyset$. If $A, B \subseteq [n]$ are such that $A \supseteq X$ and $B \supseteq Y$ where $(X, Y)$ is a circuit of $C(n, \delta)$, then we say that $\pi_{n - 1, \delta}|A|$ and $\pi_{n - 1, \delta}|B|$ intersect \emph{transversely} in $C(n, \delta)$. By \cite{breen}, the circuits of $C(n,\delta)$ are the pairs $(X, Y)$ and $(Y, X)$ such that $\#X = \floor{\delta/2}+1$, $\#Y = \ceil{\delta/2}+1$, and $X$ $\delta$-interweaves $Y$. For $\delta = 2$, this says that the circuits of the polygon $C(n, 2)$ are the pairs of crossing diagonals; for instance, $(\{1, 3\}, \{2, 4\})$ is a circuit of $C(4, 2)$. This also follows from the description of the oriented matroid given by a cyclic polytope \cite{b-lv,sturmfels,cd}. We will later use the fact that if $|A|$ and $|B|$ are simplices in the same triangulation, then there is no circuit $(X, Y)$ such that $A \supseteq X$ and $B \supseteq Y$ \cite[Proposition 2.2]{rambau}. \section{Interpretations}\label{sect:g_interpretations} In this section we study the map \[g\colon\mathcal{B}(n,\delta+1) \rightarrow \mathcal{S}(n,\delta).\] We give three different interpretations of this map, corresponding to the three different ways of defining the higher Bruhat orders. \subsection{Cubillages}\label{sect:g_geom} Here we give our principal definition of the map $g$. This definition is geometric and uses the interpretation of $\mathcal{B}(n,\delta+1)$ in terms of cubillages. This was how the map was considered in \cite[Appendix B]{dkk-survey}, where Lemma~\ref{lem:vertexFigTriang} and Proposition~\ref{prop:covRel} were both noted.
\begin{lemma}\label{lem:vertexFigTriang} If $\mathcal{Q}$ is a cubillage of $Z(n,\delta+1)$, then the vertex figure of $\mathcal{Q}$ at $\xi_{\emptyset}$ gives a triangulation of $C(n, \delta)$. \end{lemma} \begin{proof} Let $H_{k}$ denote the affine hyperplane \[H_{k}:=\{(x_{1}, \dots, x_{k}) \in \mathbb{R}^{k} \mid x_{1}=1\}.\] The vertex figure of the zonotope $Z(n,k)$ at the vertex $\xi_{\emptyset}$ can be given by the intersection $Z(n, k) \cap H_{k}$. It is clear from the definitions of $Z(n, k)$ and $C(n, k - 1)$ that this intersection is the cyclic polytope $C(n, k - 1)$, under the identification of $H_{k}$ with $\mathbb{R}^{k - 1}$. The vertex figure of the cubillage $\mathcal{Q}$ of $Z(n,\delta+1)$ at $\xi_{\emptyset}$ then induces a subcomplex $\mathcal{T} = \mathcal{Q} \cap H_{n}$ of $C(n, n - 1)$. This subcomplex $\mathcal{T}$ is a triangulation of $C(n, \delta)$ because we have that $\pi_{n, \delta + 1} \colon \mathcal{Q} \to Z(n, \delta + 1)$ is a bijection, which then restricts to a bijection from $\mathcal{Q} \cap H_{n} = \mathcal{T}$ to $Z(n, \delta + 1) \cap H_{\delta + 1} = C(n, \delta)$. \end{proof} Hence we define the map \begin{align*} g\colon \mathcal{B}(n,\delta+1)&\rightarrow\mathcal{S}(n,\delta) \\ \mathcal{Q} &\mapsto \mathcal{Q} \cap H_{n}. \end{align*} For the purposes of this paper, this is the definition of the map $g$, and the characterisations in Section~\ref{sect:g_comb} and Section~\ref{sect:vis} are simply other interpretations. \begin{remark} The intersections of cubillages with the hyperplanes given by $x_{1} = l$ for $l \in [n - 1]$ have been the objects of significant study in the literature. For three-dimensional zonotopes, such cross-sections are dual to plabic graphs \cite{galashin}, which arise in the combinatorics associated to Grassmannians \cite{post-grass,post_icm}. When the cubillage is \emph{regular}, such graphs arise in the study of KP solitons \cite{huang_thesis,kk,gpw}, and it is this connection that lies behind the definition of the higher Tamari orders in \cite{dm-h}.
The paper \cite{olarte_santos} studies \emph{hypersimplicial subdivisions} and shows that, in general, only a subset of these come from cross-sections of subdivisions of zonotopes. This means that the analogues of the map $g$ for other cross-sections of cubillages are not generally surjective. In \cite{dkk-interrelations,dkk-weak,dkk-symmetric}, rather than studying the intersection of a cubillage with these hyperplanes, the \emph{fragmentation} of a cubillage into different pieces cut by these hyperplanes is studied. \end{remark} We identify the hyperplane $H_{n}$ with the space $\mathbb{R}^{n - 1}$, so that we can consider $C(n, n - 1)$ sitting inside it as usual. In particular, we abuse notation by using $\pi_{n - 1, \delta + 1}$ to denote the restriction $\pi_{n, \delta + 2}|_{H_{n}}\colon H_{n} \to H_{\delta + 2}$. This convention is illustrated in the following proof, in which we examine how $g$ interacts with increasing flips. \begin{proposition}\label{prop:covRel} If $\mathcal{Q}, \mathcal{Q}'$ are cubillages of $Z(n, \delta + 1)$ such that $\mathcal{Q} \lessdot \mathcal{Q}'$, then either $g(\mathcal{Q}) = g(\mathcal{Q}')$ or $g(\mathcal{Q}) \lessdot g(\mathcal{Q}')$. \end{proposition} \begin{proof} Let $\mathcal{Q}$ and $\mathcal{Q}'$ be two cubillages such that $\mathcal{Q} \lessdot \mathcal{Q}'$. Let $\Gamma$ be the $(\delta + 2)$-face of $Z(n, n)$ which induces the increasing flip, and let the initial vertex of $\Gamma$ be $\xi_{E} = (x_{1}, \dots, x_{n})$. Then $\mathcal{Q}$ and $\mathcal{Q}'$ differ only in that $\pi_{n, \delta + 2}(\mathcal{Q})$ contains the lower facets of $\pi_{n, \delta + 2}(\Gamma)$ and $\pi_{n, \delta + 2}(\mathcal{Q}')$ contains the upper facets of $\pi_{n, \delta + 2}(\Gamma)$. The intersection $\Gamma \cap H_{n}$ consists of more than a single point if and only if $E = \emptyset$. This is because, given $(y_{1}, \dots, y_{n}) \in \Gamma$, we have $y_{1} \geqslant x_{1} = \# E$. 
Hence if $\# E > 1$, then $\Gamma \cap H_{n} = \emptyset$; and if $\# E = 1$, then $\Gamma \cap H_{n} = \xi_{E}$. Thus if $E \neq \emptyset$, then $\mathcal{Q}$ and $\mathcal{Q}'$ both have the same intersection with the hyperplane $H_{n}$, so that $g(\mathcal{Q})=g(\mathcal{Q}')$. If $E = \emptyset$, then $\pi_{n, \delta + 2}(\Gamma) \cap H_{\delta + 2}$ is the $(\delta+1)$-simplex $\pi_{n - 1, \delta + 1}|A|$, where $A$ is the generating set of $\Gamma$. We then have that $g(\mathcal{Q})$ and $g(\mathcal{Q}')$ differ only in that $\pi_{n - 1, \delta + 1}(g(\mathcal{Q}))$ contains the lower facets of $\pi_{n - 1, \delta + 1}|A|$, whereas $\pi_{n - 1, \delta + 1}(g(\mathcal{Q}'))$ contains the upper facets of $\pi_{n - 1, \delta + 1}|A|$. Hence $g(\mathcal{Q}) \lessdot g(\mathcal{Q}')$. \end{proof} Recall that if $(X, \leqslant)$ and $(Y, \leqslant)$ are posets, and $f \colon X \to Y$ is a map such that we have $f(x) \leqslant f(x')$ whenever $x \leqslant x'$, then $f$ is called \emph{order-preserving}. \begin{corollary}\label{cor-ord-pres} The map $g \colon \mathcal{B}(n, \delta + 1) \to \mathcal{S}(n, \delta)$ is order-preserving. \end{corollary} \begin{example}\label{ex:vertex_figures} We now give two examples of taking the vertex figure of a cubillage of $Z(n,\delta+1)$ at $\xi_{\emptyset}$. First, consider the cubillage $\mathcal{Q}_{1}$ of $Z(4,2)$ shown in Figure~\ref{fig:q1}. As we did above, we can find the vertex figure of $\mathcal{Q}_{1}$ at $\xi_{\emptyset}$ by intersecting with the hyperplane $H_{4}$, as shown. We thus obtain the triangulation $g(\mathcal{Q}_{1})=\mathcal{T}_{1}$ of $C(4,1)$ shown in Figure~\ref{fig:t1}.
\begin{figure} \caption{The cubillage $\mathcal{Q}_{1}$ of $Z(4,2)$ intersected with $H_{4}$.}\label{fig:q1} \[ \begin{tikzpicture} \coordinate(0) at (0,0); \node at (0)[left = 1mm of 0]{$\emptyset$}; \coordinate(1) at (2,-2); \node at (1)[below left = 1mm of 1]{1}; \coordinate(12) at (4,-3); \node at (12)[below = 1mm of 12]{12}; \coordinate(123) at (6,-2); \node at (123)[below right = 1mm of 123]{123}; \coordinate(1234) at (8,0); \node at (1234)[right = 1mm of 1234]{1234}; \coordinate(234) at (6,2); \node at (234)[above right = 1mm of 234]{234}; \coordinate(34) at (4,3); \node at (34)[above = 1mm of 34]{34}; \coordinate(4) at (2,2); \node at (4)[above left = 1mm of 4]{4}; \draw (0) -- (1) -- (12) -- (123) -- (1234) -- (234) -- (34) -- (4) -- (0); \coordinate(3) at (2,1); \coordinate(13) at (4,-1); \coordinate(23) at (4,0); \node at (3) [above left = 1mm of 3]{3}; \node at (13) [below = 1mm of 13]{13}; \node at (23) [right = 1mm of 23]{23}; \draw (0) -- (3); \draw (3) -- (34); \draw (3) -- (23); \draw (3) -- (13); \draw (1) -- (13); \draw (13) -- (123); \draw (23) -- (123); \draw (23) -- (234); \draw[red] (2,-3) -- (2,3); \end{tikzpicture} \] \end{figure} \begin{figure} \caption{The triangulation $g(\mathcal{Q}_{1})=\mathcal{T}_{1}$ of $C(4,1)$.}\label{fig:t1} \[ \begin{tikzpicture} \draw[red] (1,0) -- (4,0); \node(1) at (1,0) {$\bullet$}; \node at (1)[below = 1mm of 1]{1}; \node(3) at (3,0) {$\bullet$}; \node at (3)[below = 1mm of 3]{3}; \node(4) at (4,0) {$\bullet$}; \node at (4)[below = 1mm of 4]{4}; \end{tikzpicture} \] \end{figure} Secondly, consider the cubillage $\mathcal{Q}_{2}$ of $Z(4,3)$ illustrated in Figure~\ref{fig:q2}. This cubillage possesses four cubes, two of which share the face highlighted in blue. The hyperplane $H_{4}$ is shown here in red. The intersection gives the triangulation $g(\mathcal{Q}_{2})=\mathcal{T}_{2}$ of $C(4,2)$ shown in Figure~\ref{fig:t2}. 
\begin{figure} \caption{The cubillage $\mathcal{Q}_{2}$ of $Z(4,3)$.}\label{fig:q2} \[ \begin{tikzpicture} \begin{scope}[xscale=0.9] \coordinate(0) at (0,-5); \coordinate(1) at (-3,-2); \coordinate(2) at (-3,-3); \coordinate(3) at (3,-3); \coordinate(4) at (3,-2); \coordinate(12) at (-6,0); \coordinate(23) at (0,-1); \coordinate(13) at (0,0); \coordinate(14) at (0,1); \coordinate(34) at (6,0); \coordinate(123) at (-3,2); \coordinate(124) at (-3,3); \coordinate(234) at (3,2); \coordinate(134) at (3,3); \coordinate(1234) at (0,5); \draw[gray] (0) -- (4); \draw[gray] (4) -- (34); \draw[gray] (1) -- (14); \draw[gray] (4) -- (14); \draw[gray] (14) -- (124); \draw[gray] (14) -- (134); \draw[gray] (0) -- (1); \draw[gray] (1) -- (12); \draw[fill=blue!30,draw=none] (0) -- (3) -- (13) -- (1) -- (0); \draw[fill=red!30,draw=none] (1) -- (2) -- (3) -- (1); \path[name path = line 1] (1) -- (4); \path[name path = line 2] (3) -- (13); \path [name intersections={of = line 1 and line 2}]; \coordinate (a) at (intersection-1); \draw[fill=red!30,draw=none] (a) -- (4) -- (3) -- (a); \draw[dotted,red] (1) -- (2) -- (3) -- (4) -- (1); \draw[dotted,blue] (1) -- (3); \draw[dashed] (1) -- (13); \draw[dashed] (3) -- (13); \draw[dashed] (13) -- (123); \draw (0) -- (2); \draw (0) -- (3); \draw (2) -- (12); \draw (2) -- (23); \draw (3) -- (23); \draw (12) -- (123); \draw (23) -- (123); \draw (3) -- (34); \draw (23) -- (234); \draw (12) -- (124); \draw (34) -- (134); \draw (34) -- (234); \draw (123) -- (1234); \draw (124) -- (1234); \draw (234) -- (1234); \draw (134) -- (1234); \draw[dashed] (13) -- (134); \node at (0) [below = 1mm of 0]{$\emptyset$}; \node at (1) [above = 1mm of 1]{1}; \node at (2) [below left = 1mm of 2]{2}; \node at (3) [below right = 1mm of 3]{3}; \node at (4) [above = 1mm of 4]{4}; \node at (12) [left = 1mm of 12]{12}; \node at (23) [below = 1mm of 23]{23}; \node at (34) [right = 1mm of 34]{34}; \node at (14) [above = 1mm of 14]{14}; \node at (123) [below = 1mm of 123]{123}; 
\node at (234) [below = 1mm of 234]{234}; \node at (134) [above right = 1mm of 134]{134}; \node at (124) [above left = 1mm of 124]{124}; \node at (1234) [above = 1mm of 1234]{1234}; \node at (13) [above = 1mm of 13]{13}; \end{scope} \end{tikzpicture} \] \end{figure} \begin{figure} \caption{The triangulation $g(\mathcal{Q}_{2})=\mathcal{T}_{2}$ of $C(4, 2)$.}\label{fig:t2} \[ \begin{tikzpicture} \coordinate(1) at (-2,1); \coordinate(2) at (-2,-1); \coordinate(3) at (2,-1); \coordinate(4) at (2,1); \draw[red] (1) -- (2) -- (3) -- (4) -- (1); \draw[blue] (1) -- (3); \node at (1) [above left = 1mm of 1]{1}; \node at (2) [below left = 1mm of 2]{2}; \node at (3) [below right = 1mm of 3]{3}; \node at (4) [above right = 1mm of 4]{4}; \end{tikzpicture} \] \end{figure} \end{example} \begin{remark}\label{rmk:g_dual} There is a dual version of the map $g$, given by \begin{align*} \overline{g}\colon \mathcal{B}(n,\delta+1)&\rightarrow\mathcal{S}(n,\delta) \\ \mathcal{Q} &\mapsto \mathcal{Q}\cap \overline{H}_{n}, \end{align*} where $\overline{H}_{n} = \{(x_{1}, \dots, x_{n})\in \mathbb{R}^{n} \mid x_{1} = n - 1 \}$. Given a cubillage $\mathcal{Q}$ of $Z(n,\delta+1)$, the triangulation $\overline{g}(\mathcal{Q})$ is induced by taking the vertex figure of $Z(n, n)$ at the vertex $\xi_{[n]}$. This map was considered in \cite[Proposition 7.1]{thomas-bst}. The dual of Proposition~\ref{prop:covRel} gives that if $\mathcal{Q} \lessdot \mathcal{Q}'$, then either $\overline{g}(\mathcal{Q})=\overline{g}(\mathcal{Q}')$ or $\overline{g}(\mathcal{Q}) \gtrdot \overline{g}(\mathcal{Q}')$. Hence $\overline{g}$ is order-reversing. That is, if $\mathcal{Q} \leqslant \mathcal{Q}'$, then $\overline{g}(\mathcal{Q}) \geqslant \overline{g}(\mathcal{Q}')$.
\end{remark} \subsection{Separated collections}\label{sect:g_comb} Our second definition of the map uses the characterisation of cubillages in terms of separated collections and the combinatorial framework for triangulations of cyclic polytopes from \cite{ot,njw-hst}. This is the framework we use to prove that $g$ is a quotient map of posets in Section~\ref{sect-surj} and Section~\ref{sect-quot}. Given a triangulation $\mathcal{T}$ of $C(n,\delta)$, let \[\Sigma(\mathcal{T}):=\{A \subseteq [n] \mid |A| \text{ is a simplex of }\mathcal{T}\}.\] This can be viewed as the abstract simplicial complex corresponding to $\mathcal{T}$. The following lemma tells us how the value of $g(\mathcal{Q})$ is determined by $\mathrm{Sp}(\mathcal{Q})$. \begin{lemma}\label{lem-all-simps} Let $\mathcal{Q}$ be a cubillage of $Z(n,\delta+1)$ and $\mathcal{T}$ be a triangulation of $C(n,\delta)$. Then $g(\mathcal{Q})=\mathcal{T}$ if and only if $\mathrm{Sp}(\mathcal{Q})\supseteq \Sigma(\mathcal{T})$. \end{lemma} \begin{proof} Suppose that $g(\mathcal{Q})=\mathcal{T}$. Let $|A|$ be a $\delta$-simplex of $\mathcal{T}$. Then there is a cube $\Delta$ of $\mathcal{Q}$ such that $|A|=\Delta \cap H_{n}$. We must have that the initial vertex of $\Delta$ is $\xi_{\emptyset}$ and the set of generating vectors is $A$. Thus if $|B|$ is a face of $|A|$, then $\xi_{B}$ is a vertex of $\Delta$, and hence $B \in \mathrm{Sp}(\mathcal{Q})$. Since every simplex of the triangulation $\mathcal{T}$ is a face of a $\delta$-simplex, we have that $\mathrm{Sp}(\mathcal{Q}) \supseteq \Sigma(\mathcal{T})$. Conversely, suppose that $\mathrm{Sp}(\mathcal{Q}) \supseteq \Sigma(\mathcal{T})$. Let $|A|$ be a $\delta$-simplex of $\mathcal{T}$. Then $2^{A} \subseteq \Sigma(\mathcal{T}) \subseteq \mathrm{Sp}(\mathcal{Q})$. By \cite[(2.5)]{dkk}, the cube $\Delta$ with initial vertex $\emptyset$ and generating vectors $A$ is therefore a cube of $\mathcal{Q}$. 
This means that $|A|$ is a $\delta$-simplex of $g(\mathcal{Q})$, since $|A| = \Delta \cap H_{n}$. Since this is true for any $\delta$-simplex of $\mathcal{T}$, we must have $g(\mathcal{Q})=\mathcal{T}$. \end{proof} In fact, as the following proposition shows, we need only consider $\mathrm{ISp}(\mathcal{Q}) \cap \binom{[n]}{\floor{\delta/2} + 1}$ to know the value of $g(\mathcal{Q})$. \begin{proposition}\label{prop-comb-interp} Given a cubillage $\mathcal{Q} \in \mathcal{B}(n,\delta+1)$, we have that $\mathring{e}(g(\mathcal{Q}))=\mathrm{ISp}(\mathcal{Q}) \cap \binom{[n]}{\floor{\delta/2} + 1}$. \end{proposition} \begin{proof} It follows immediately from Lemma~\ref{lem-all-simps} that $\mathring{e}(g(\mathcal{Q})) \subseteq \mathrm{ISp}(\mathcal{Q}) \cap \binom{[n]}{\floor{\delta/2} + 1}$, since if $\# A = \floor{\delta/2} + 1$, then $|A|$ is an internal $\floor{\delta/2}$-simplex in $C(n,\delta)$ if and only if $\xi_{A}$ is an internal point in $Z(n,\delta+1)$, by Observation~\ref{obs:internal}. To show that $\mathring{e}(g(\mathcal{Q})) \supseteq \mathrm{ISp}(\mathcal{Q}) \cap \binom{[n]}{\floor{\delta/2} + 1}$, suppose that we have $A \in \left(\mathrm{ISp}(\mathcal{Q}) \cap \binom{[n]}{\floor{\delta/2} + 1}\right) \setminus \mathring{e}(g(\mathcal{Q}))$. Then note that $|A|$ must be an internal $\floor{\delta/2}$-simplex in $C(n, \delta)$, since $\xi_{A}$ is an internal point in $Z(n, \delta + 1)$. However, $|A|$ is not a $\floor{\delta/2}$-simplex of $g(\mathcal{Q})$, so $\pi_{n - 1, \delta}|A|$ intersects a $\ceil{\delta/2}$-simplex $\pi_{n - 1, \delta}|B|$ of $\pi_{n - 1, \delta}(g(\mathcal{Q}))$ transversely. This implies that $(A, B)$ is a circuit, and so $A$ and $B$ are $\delta$-interweaving. But this is a contradiction, since $A \in \mathrm{ISp}(\mathcal{Q})$ and $B \in \mathrm{Sp}(\mathcal{Q})$ by Lemma~\ref{lem-all-simps}, and $\mathrm{Sp}(\mathcal{Q})$ contains no $\delta$-interweaving pair. \end{proof} Proposition~\ref{prop-comb-interp} gives an interpretation of the map $g$ in terms of separated collections.
We know that a cubillage $\mathcal{Q}$ of $Z(n,\delta+1)$ is determined by $\mathrm{ISp}(\mathcal{Q})$, and likewise a triangulation $\mathcal{T}$ of $C(n,\delta)$ is determined by $\mathring{e}(\mathcal{T})$. Hence one could also define $g(\mathcal{Q})$ to be the triangulation $\mathcal{T}$ such that $\mathring{e}(\mathcal{T})=\mathrm{ISp}(\mathcal{Q}) \cap \binom{[n]}{\floor{\delta/2} + 1}$. \begin{example}\label{ex:g_comb} We illustrate how to apply the interpretation of $g$ from Proposition~\ref{prop-comb-interp} to the cubillages from Example~\ref{ex:vertex_figures}. Consider the internal spectrum of $\mathcal{Q}_{1}$, as shown in Figure~\ref{fig:q1}. We have $\mathrm{ISp}(\mathcal{Q}_{1}) = \{3,13,23\}$, so $\mathrm{ISp}(\mathcal{Q}_{1}) \cap \binom{[4]}{1} = \{3\}$. This implies that $\{3\}=\mathring{e}(g(\mathcal{Q}_{1}))=\mathring{e}(\mathcal{T}_{1})$, which is indeed the case. Note that having $\mathring{e}(\mathcal{T}_{1})=\{3\}$ defines $\mathcal{T}_{1}$. Next, consider the internal spectrum of $\mathcal{Q}_{2}$, as shown in Figure~\ref{fig:q2}. We have $\mathrm{ISp}(\mathcal{Q}_{2}) = \{13\}$, so $\mathrm{ISp}(\mathcal{Q}_{2}) \cap \binom{[4]}{2} = \{13\}$. This implies that $\{13\}=\mathring{e}(g(\mathcal{Q}_{2}))=\mathring{e}(\mathcal{T}_{2})$, which is indeed the case. Note that having $\mathring{e}(\mathcal{T}_{2})=\{13\}$ defines $\mathcal{T}_{2}$. \end{example} \begin{remark} The interpretation of $\overline{g}$ for separated collections is as follows. We have that $\overline{g}(\mathcal{Q})$ is the triangulation $\mathcal{T}$ such that \[\mathring{e}(\mathcal{T}) = \left\lbrace [n]\setminus A \mathrel{\Big|} A \in \mathrm{ISp}(\mathcal{Q})\cap \subs{n}{n-\floor{\delta/2} - 1}{}\right\rbrace.\] \end{remark} \subsection{Admissible orders}\label{sect:vis} In this section we give a way of defining the map $g$ while interpreting the elements of the higher Bruhat orders as equivalence classes of admissible orders.
We use the following notions, which were used in \cite{dm-h} to define the higher Tamari orders. Let $\alpha$ be an admissible order of $\subs{n}{\delta + 1}{}$ and $I \in \subs{n}{\delta + 1}{}$. Given $e \in [n]\setminus I$, we say that $I$ is \emph{invisible in $P(I \cup \{e\})$} if either \begin{itemize} \item $I \cup \{e\} \notin \mathrm{inv}(\alpha)$ and $e$ is an odd gap in $I$, or \item $I \cup \{e\} \in \mathrm{inv}(\alpha)$ and $e$ is an even gap in $I$. \end{itemize} Otherwise, we say that $I$ is \emph{coinvisible in $P(I \cup \{e\})$}. (We note that $I$ being invisible in $P(I \cup \{e\})$ is equivalent to $e$ being \emph{externally semi-active} with respect to $I$, in the terminology of \cite{gpw}, which applies to more general matroids.) Then: \begin{itemize} \item We say that $I$ is \emph{invisible in $\alpha$} if there is an $e \in [n]\setminus I$ such that $I$ is invisible in $P(I \cup \{e\})$. \item We say that $I$ is \emph{coinvisible in $\alpha$} if there is an $e \in [n]\setminus I$ such that $I$ is coinvisible in $P(I \cup \{e\})$. \item We say that $I$ is \emph{visible in $\alpha$} if there is no $e \in [n]\setminus I$ such that $I$ is invisible in $P(I \cup \{e\})$. (Note that this is not the same notion of visibility as in \cite[Section 9]{dkk-survey}.) \item We say that $I$ is \emph{covisible in $\alpha$} if there is no $e \in [n]\setminus I$ such that $I$ is coinvisible in $P(I \cup \{e\})$. \end{itemize} Given an admissible order $\alpha$ of $\binom{[n]}{\delta + 1}$, we use $V(\alpha)$ to denote the elements of $\binom{[n]}{\delta+1}$ which are visible in $\alpha$ and $\overline{V}(\alpha)$ to denote the elements of $\binom{[n]}{\delta+1}$ which are covisible in $\alpha$. (In \cite{dm-h-simplex}, visible elements are labelled in blue; covisible elements are labelled in red; and elements which are neither visible nor covisible are labelled in green.)
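These conditions are entirely mechanical to check. The following is a minimal computational sketch (not part of the formal development), assuming the convention that $e \in [n] \setminus I$ is an even gap in $I$ when the number of elements of $I$ greater than $e$ is even; this parity convention is our assumption, chosen to be consistent with the worked examples later in this section.

```python
from itertools import combinations

def is_even_gap(e, I):
    # Assumed convention: e is an even gap in I when the number of
    # elements of I greater than e is even.
    return len([i for i in I if i > e]) % 2 == 0

def visible_elements(n, delta, inv):
    """Return the elements of binom([n], delta+1) that are visible in an
    admissible order whose inversion set is `inv` (a set of frozensets).

    I is invisible in P(I + {e}) iff either I + {e} is not an inversion
    and e is an odd gap in I, or I + {e} is an inversion and e is an even
    gap in I; equivalently, iff membership in inv agrees with even-gap
    parity.  I is visible iff no e makes it invisible.
    """
    V = []
    for I in combinations(range(1, n + 1), delta + 1):
        Iset = set(I)
        invisible = any(
            (frozenset(Iset | {e}) in inv) == is_even_gap(e, Iset)
            for e in set(range(1, n + 1)) - Iset
        )
        if not invisible:
            V.append(I)
    return V
```

For instance, with $n = 4$, $\delta = 1$ and inversion set $\{123\}$, this returns the pairs $13$ and $34$; with $n = 4$, $\delta = 2$ and empty inversion set, it returns $123$ and $134$.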
Given an admissible order $\alpha$ of $\binom{[n]}{\delta+1}$, we write $\mathcal{Q}_{\alpha}$ for the corresponding cubillage of $Z(n,\delta+1)$. \begin{proposition}\label{prop-vis-init-vert} Let $\alpha$ be an admissible order of $\subs{n}{\delta+1}{}$ and $I \in \subs{n}{\delta+1}{}$. Then the cube in $\mathcal{Q}_{\alpha}$ with generating set $I$ has initial vertex $\xi_{E}$, where \[E = \{e \in [n]\setminus I \mid I \text{ is invisible in } P(I \cup \{e\} )\}.\] \end{proposition} \begin{proof} This follows immediately from the correspondence between admissible orders and cubillages in \cite{thomas-bst}, as described in Section~\ref{sect-hbo}. \end{proof} The following result was noted in \cite[Appendix B]{dkk-survey}. \begin{corollary}\label{cor-vis-init-vert} Let $\alpha$ be an admissible order of $\subs{n}{\delta+1}{}$ and $I \in \subs{n}{\delta+1}{}$. Then $I \in V(\alpha)$ if and only if the cube in $\mathcal{Q}_{\alpha}$ with generating set $I$ has initial vertex $\xi_{\emptyset}$. \end{corollary} This gives us yet another interpretation of the map $g$. \begin{corollary}\label{cor:g_vis_interp} Given $[\alpha] \in \mathcal{B}(n,\delta+1)$, we have that $g(\mathcal{Q}_{\alpha})$ is the triangulation with \[\{|A| \mid A \in V(\alpha)\}\] as its set of $\delta$-simplices. \end{corollary} \begin{example} We continue from Example~\ref{ex:vertex_figures} and Example~\ref{ex:g_comb} and illustrate how the map $g$ can also be characterised using visibility. We consider $\mathcal{Q}_{1}$ first. By labelling the cubes of $\mathcal{Q}_{1}$ with the elements of $\binom{[4]}{2}$, as shown in Figure~\ref{fig:q1_cubes}, it can be seen that the admissible order corresponding to $\mathcal{Q}_{1}$ is \[\alpha_{1} = \{23<13<12<14<24<34\}.\] We compute that $\mathrm{inv}(\alpha_{1})=\{123\}$. 
\begin{figure} \caption{$\mathcal{Q}_{1}$ with its cubes labelled.}\label{fig:q1_cubes} \[ \begin{tikzpicture} \coordinate(0) at (0,0); \node at (0)[left = 1mm of 0]{$\emptyset$}; \coordinate(1) at (2,-2); \coordinate(12) at (4,-3); \coordinate(123) at (6,-2); \coordinate(1234) at (8,0); \coordinate(234) at (6,2); \coordinate(34) at (4,3); \coordinate(4) at (2,2); \draw (0) -- (1) -- (12) -- (123) -- (1234) -- (234) -- (34) -- (4) -- (0); \coordinate(3) at (2,1); \coordinate(13) at (4,-1); \coordinate(23) at (4,0); \node at (2,-0.5){13}; \node at (2,1.5){34}; \node at (4,-2){23}; \node at (4,-0.5){12}; \node at (4,1.5){24}; \node at (6,0){14}; \draw (0) -- (3); \draw (3) -- (34); \draw (3) -- (23); \draw (3) -- (13); \draw (1) -- (13); \draw (13) -- (123); \draw (23) -- (123); \draw (23) -- (234); \end{tikzpicture} \] \end{figure} We can then analyse which elements of $\binom{[4]}{2}$ are visible in $\alpha_{1}$: \begin{itemize} \item 23: invisible because $123 \in \mathrm{inv}(\alpha_{1})$ and $1$ is an even gap in $23$; \item 13: visible; \item 12: invisible because $123 \in \mathrm{inv}(\alpha_{1})$ and $3$ is an even gap in $12$; \item 14: invisible because $124 \notin \mathrm{inv}(\alpha_{1})$ and $2$ is an odd gap in $14$; \item 24: invisible because $234 \notin \mathrm{inv}(\alpha_{1})$ and $3$ is an odd gap in $24$; \item 34: visible. \end{itemize} Note that, as Corollary~\ref{cor-vis-init-vert} shows, 13 and 34 are precisely the cubes with $\xi_{\emptyset}$ as their initial vertex. Furthermore, as Corollary~\ref{cor:g_vis_interp} shows, $g(\mathcal{Q}_{1})=\mathcal{T}_{1}$ is the triangulation with $1$-simplices $|13|$ and $|34|$. We now conduct the same analysis of $\mathcal{Q}_{2}$. The admissible order corresponding to $\mathcal{Q}_{2}$ is \[\alpha_{2}=\{123<124<134<234\}.\] It is easy to see that $\mathrm{inv}(\alpha_{2})=\emptyset$. 
Hence the visible elements of $\binom{[4]}{3}$ in $\alpha_{2}$ are as follows: \begin{itemize} \item 123: visible; \item 124: invisible because $1234 \notin \mathrm{inv}(\alpha_{2})$ and $3$ is an odd gap in $124$; \item 134: visible; \item 234: invisible because $1234 \notin \mathrm{inv}(\alpha_{2})$ and $1$ is an odd gap in $234$. \end{itemize} Again, it can be seen in Figure~\ref{fig:q2} that 123 and 134 are precisely the cubes with $\xi_{\emptyset}$ as their initial vertex, as shown by Corollary~\ref{cor-vis-init-vert}. Moreover, as Corollary~\ref{cor:g_vis_interp} shows, $g(\mathcal{Q}_{2})=\mathcal{T}_{2}$ is the triangulation with $2$-simplices $|123|$ and $|134|$. \end{example} The dual statements to Proposition~\ref{prop-vis-init-vert}, Corollary~\ref{cor-vis-init-vert}, and Corollary~\ref{cor:g_vis_interp} are as follows. \begin{proposition} Let $\alpha$ be an admissible order of $\subs{n}{\delta+1}{}$ and $I \in \subs{n}{\delta+1}{}$. Then the cube in $\mathcal{Q}_{\alpha}$ with generating set $I$ has final vertex $\xi_{F}$ where \[F = [n]\setminus\{e \in [n]\setminus I \mid I \text{ is coinvisible in } P(I \cup \{e\})\}.\] \end{proposition} \begin{corollary} Let $\alpha$ be an admissible order of $\subs{n}{\delta+1}{}$ and $I \in \subs{n}{\delta+1}{}$. Then $I \in \overline{V}(\alpha)$ if and only if the cube in $\mathcal{Q}_{\alpha}$ with generating set $I$ has final vertex $\xi_{[n]}$. \end{corollary} \begin{corollary} Given $[\alpha] \in \mathcal{B}(n,\delta+1)$, we have that $\overline{g}(\mathcal{Q}_{\alpha})$ is the triangulation with \[\{|A| \mid A \in \overline{V}(\alpha)\}\] as its set of $\delta$-simplices. \end{corollary} \section{Quotient maps of posets}\label{sect:quotient_framework} Dimakis and M\"uller-Hoissen use the definition of the map $g$ from Section~\ref{sect:vis} to define the \emph{higher Tamari orders}. We restate their definition in the framework of quotient posets. In this section, we explain our approach to this notion.
Given a poset $(X, \leqslant)$ subject to an equivalence relation $\sim$, the \emph{quotient} $(X/{\sim}, R)$ is defined to be the set of $\sim$-equivalence classes $[x]$ of $X$, with the binary relation $R$ defined by $[x]R[y]$ if and only if there exist $x' \in [x]$ and $y' \in [y]$ such that $x' \leqslant y'$. The quotient of a poset is in general only a reflexive binary relation, not a partial order, since the relation $R$ is not necessarily transitive or anti-symmetric. Previous authors have considered various different conditions on the equivalence relation $\sim$ which are sufficient to guarantee that the quotient $X/{\sim}$ is a poset. Stanley considers the case where $\sim$ is given by the orbits of a group of automorphisms \cite{stanley_quotient_peck,stanley-appl-alg-comb}. Two similar notions of congruence which also preserve lattice-theoretic properties are considered by Chajda and Sn\'a\v{s}el, and Reading \cite{cs_cong,reading_order}. Most recently, Hallam and Sagan \cite{hs-char-poly,hallam_applications} consider \emph{homogeneous quotients} in order to study the characteristic polynomials of lattices. Whilst these conditions are sufficient to guarantee that the quotient poset is well-defined, none of them are necessary. In this paper we are interested only in the minimal conditions which provide that the quotient poset is well-defined, and not in whether the quotient also preserves other properties. These necessary and sufficient conditions are as follows. 
\begin{proposition}\label{prop:tautology} The quotient $X/{\sim}$ is a poset if and only if \begin{enumerate} \item if there exist $x_{1} \sim x$ and $y_{1} \sim y$ such that $x_{1} \leqslant y_{1}$, and $x_{2} \sim x$ and $y_{2} \sim y$ such that $x_{2} \geqslant y_{2}$, then $x \sim y$, and\label{op:antisym} \item given $x, y, z \in X$ such that there exist $x_{1} \sim x$ and $y_{1} \sim y$ such that $x_{1} \leqslant y_{1}$, and $y_{2} \sim y$ and $z_{2} \sim z$ such that $y_{2} \leqslant z_{2}$, then there exist $x_{3} \sim x$ and $z_{3} \sim z$ such that $x_{3} \leqslant z_{3}$.\label{op:trans} \end{enumerate} \end{proposition} \begin{proof} Condition (\ref{op:antisym}) is equivalent to the relation $R$ being anti-symmetric. Condition (\ref{op:trans}) is equivalent to the relation $R$ being transitive. \end{proof} If both condition (\ref{op:antisym}) and condition (\ref{op:trans}) hold, then we write $\leqslant$ instead of $R$, to acknowledge that the relation gives us a partial order. In this case, we say that $\sim$ is a \emph{weak order congruence} on the poset $X$. Note that, in particular, order congruences \cite{reading_order,cs_cong} and the equivalence relations which give homogeneous quotients \cite{hs-char-poly,hallam_applications} are weak order congruences. If $\sim$ is a weak order congruence, so that $X/{\sim}$ is a poset, then we have a canonical order-preserving map \begin{align*} X &\rightarrow X/{\sim} \\ x &\mapsto [x]. \end{align*} Indeed, for any order-preserving map of posets $f \colon X \to Y$, one can consider the equivalence relation on $X$ defined by $x \sim x'$ if and only if $f(x) = f(x')$. We then define the \emph{image} of $f$ to be the quotient $f(X) = X/{\sim}$. We identify the $\sim$-equivalence class $[x]$ of $X$ with the element $f(x) \in Y$, so that $f(X) \subseteq Y$ and the quotient relation on $f(X)$ is a subrelation of the partial order on $Y$. 
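For finite posets, the quotient relation and the conditions of Proposition~\ref{prop:tautology} can be verified directly by enumeration. The following is a minimal computational sketch (not part of the formal development); the function and variable names are ours.

```python
from itertools import product

def quotient_relation(elements, leq, f):
    """The relation R on the image of f: a R b iff there exist x, y
    with f(x) = a, f(y) = b and x <= y."""
    image = {f(x) for x in elements}
    return {(a, b) for a, b in product(image, repeat=2)
            if any(leq(x, y)
                   for x in elements if f(x) == a
                   for y in elements if f(y) == b)}

def is_partial_order(image, R):
    """Check that R is reflexive, anti-symmetric and transitive on image;
    anti-symmetry and transitivity correspond to conditions (1) and (2)
    of the proposition above."""
    reflexive = all((a, a) in R for a in image)
    antisymmetric = all(a == b for (a, b) in R if (b, a) in R)
    transitive = all((a, c) in R
                     for (a, b) in R for (b2, c) in R if b == b2)
    return reflexive and antisymmetric and transitive
```

For example, collapsing the top two elements of the chain $0 < 1 < 2 < 3$ via $f(x) = \min(x, 2)$ yields the three-element chain, and `is_partial_order` confirms that this quotient is a poset.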
If the equivalence relation $\sim$ on $X$ is a weak order congruence, so that the image $f(X)$ is a well-defined poset, then we say that the map $f$ is \emph{photogenic}. We say that a map $f \colon X \to Y$ is \emph{full} if whenever $f(x_{1}) \leqslant f(x_{2})$ in $Y$, there exist $x_{1}', x'_{2} \in X$ such that $x'_{1} \leqslant x'_{2}$, with $f(x'_{1}) = f(x_{1})$ and $f(x'_{2}) = f(x_{2})$. (In \cite{cs_cong}, maps which are full and order-preserving are called \emph{strong}.) \begin{proposition}\label{prop:quotient_maps} Let $X$ and $Y$ be posets with $f \colon X \to Y$ an order-preserving map. Then the relation on $f(X)$ is anti-symmetric. Furthermore, if $f$ is full, then the relation on $f(X)$ is transitive, and so $f$ is photogenic. Finally, $f(X) = Y$ as posets if and only if $f$ is surjective and full. \end{proposition} \begin{proof} Suppose that $x_{1}, x_{2} \in X$ are such that $[x_{1}]R[x_{2}]$ and $[x_{2}]R[x_{1}]$. Since $f$ is order-preserving, this implies that $f(x_{1}) \leqslant f(x_{2})$ and $f(x_{2}) \leqslant f(x_{1})$. Hence $f(x_{1}) = f(x_{2})$ and so $x_{1} \sim x_{2}$. Thus $R$ is anti-symmetric. Now suppose that $f$ is full. Let $x_{1}, x_{2}, x_{3} \in X$ be such that $[x_{1}]R[x_{2}]$ and $[x_{2}]R[x_{3}]$. This implies that $f(x_{1}) \leqslant f(x_{2})$ and $f(x_{2}) \leqslant f(x_{3})$, since $f$ is order-preserving. Hence $f(x_{1}) \leqslant f(x_{3})$. Since $f$ is full, there exist $x'_{1}, x'_{3} \in X$ such that $x'_{1} \leqslant x'_{3}$, with $f(x'_{1}) = f(x_{1})$ and $f(x'_{3}) = f(x_{3})$. Hence $[x_{1}]R[x_{3}]$, and so $R$ is transitive. Finally, it is clear that $f(X) = Y$ as sets if and only if $f$ is surjective. Then $f$ being full and order-preserving is equivalent to having $[x_{1}] \leqslant [x_{2}]$ in $f(X)$ if and only if $f(x_{1}) \leqslant f(x_{2})$ in $Y$. 
\end{proof} Therefore, every quotient of a poset by a weak order congruence gives an order-preserving map which is surjective and full, and, conversely, every order-preserving map which is surjective and full gives a quotient by a weak order congruence. Hence, if an order-preserving map $f$ is surjective and full, then we say that $f$ is a \emph{quotient map of posets}. With this technical framework in mind, the \emph{higher Tamari order} $T(n,\delta+1)$ \cite{dm-h} is defined to be the image of the map $g \colon \mathcal{B}(n, \delta + 1) \to \mathcal{S}(n, \delta)$, or, explicitly, the quotient of $\mathcal{B}(n, \delta + 1)$ by the relation defined by $\mathcal{Q} \sim \mathcal{Q}'$ if and only if $g(\mathcal{Q}) = g(\mathcal{Q}')$. That this is equivalent to \cite[Definition 4.7]{dm-h} follows from Corollary~\ref{cor:g_vis_interp}. Note that it is not evident that $T(n, \delta + 1)$ is a well-defined poset, since it is not clear that the map $g$ is photogenic. However, in Section~\ref{sect-quot} we shall prove that $g$ is full, which implies that $g$ is photogenic by Proposition~\ref{prop:quotient_maps}, since we already know that $g$ is order-preserving by Corollary~\ref{cor-ord-pres}. In Section~\ref{sect-surj}, we give a new proof of the fact that $g$ is surjective, originally known from \cite[Theorem 3.5]{rs-baues}. Therefore, the results of the two subsequent sections entail the following theorem. \begin{theorem}\label{thm:quot} The map $g \colon \mathcal{B}(n, \delta + 1) \to \mathcal{S}(n, \delta)$ is a quotient map of posets. \end{theorem} Hence, we obtain by Proposition~\ref{prop:quotient_maps} that the higher Tamari orders are indeed the same posets as the first higher Stasheff--Tamari orders. \begin{corollary}\label{cor:t=st} $T(n,\delta+1) \cong \mathcal{S}(n,\delta)$. \end{corollary} \section{Surjectivity}\label{sect-surj} We now give a new construction showing that the map $g$ is a surjection. 
Our strategy is to explicitly show that $g$ is a surjection when $\delta$ is even, and then to use this to deduce the case where $\delta$ is odd. Given a triangulation $\mathcal{T}$ of $C(n,2d)$, we will construct a cubillage $\mathcal{Q}_{\mathcal{T}}$ of $Z(n,2d+1)$ such that $g(\mathcal{Q}_{\mathcal{T}})=\mathcal{T}$. We will define $\mathcal{Q}_{\mathcal{T}}$ by specifying its internal spectrum. \begin{convention} In this section and in Section~\ref{sect-quot}, we will frequently be using arithmetic modulo $n$. In particular, given a set $S \in \binom{[n]}{2d + 2}$, we interpret the difference $s_{0} - s_{2d + 1}$, which is negative, as $s_{0} - s_{2d + 1} + n \pmod{n}$, so that it is an element of $[n]$. \end{convention} For $I \subseteq [n]$, we write $I = J \sqcup J'$ if $I = J \cup J'$ and there are no $j \in J, j' \in J'$ such that $j, j'$ are cyclically consecutive. Given a cyclic $l$-ple interval $I = [i_{0}, i'_{0}] \sqcup \dots \sqcup [i_{l-1}, i'_{l-1}]$, we use the notation $\widehat{I} := \{i_{0}, \dots, i_{l-1}\}$ from \cite{njw-jm}. We claim that the collection of subsets \[U(\mathcal{T})=\left\lbrace I \subseteq [n]\mid |\widehat{I}| \text{ is a } d'\text{-simplex of }\mathcal{T} \text{ for }d' \geqslant d\right\rbrace \] defines the internal spectrum of a cubillage on $Z(n, 2d + 1)$. This is similar to the construction in \cite[Theorem 3.8]{njw-jm}. In order to show that $U(\mathcal{T})$ is the internal spectrum of a cubillage, we must show that it is $2d$-separated and that $\# U(\mathcal{T})=\binom{n-1}{2d+1}$. We begin by showing that $U(\mathcal{T})$ is $2d$-separated, for which we need the following lemma. This generalises one direction of \cite[Lemma 3.7]{njw-jm}, although the proof in \textit{op.\ cit.}\ requires only minor changes. \begin{lemma}\label{lem-int-endpts} Let $I,J \subseteq [n]$.
Then $I$ $\delta$-interweaves $J$ only if there exist subsets $X \subseteq \widehat{I}$ and $Y \subseteq \widehat{J}$ such that $\# X = \floor{\delta/2} + 1$ and $\# Y = \ceil{\delta/2} + 1$, and $X$ $\delta$-interweaves $Y$. \end{lemma} \begin{proof} We let $\delta=2d$, since the case $\delta=2d+1$ behaves similarly. Let $I = [i_{0}, i'_{0}] \sqcup \dots \sqcup [i_{r}, i'_{r}]$ and $J = [j_{0}, j'_{0}] \sqcup \dots \sqcup [j_{s}, j'_{s}]$. Suppose that $I$ $2d$-interweaves $J$, and let $A \subseteq I\setminus J$ and $B \subseteq J\setminus I$ witness this. For any $0\leqslant p < q\leqslant d$ we cannot have both $a_{p} \in [i_{t}, i'_{t}]$ and $a_{q} \in [i_{t}, i'_{t}]$, since this implies that $b_{p}, \dots, b_{q-1} \in [i_{t}, i'_{t}] \subseteq I$, which contradicts $B \cap I = \emptyset$. Hence, for all $0 \leqslant k \leqslant d$, let $t_{k}$ be such that $a_{k} \in [i_{t_{k}}, i'_{t_{k}}]$ and let $u_{k}$ be such that $b_{k} \in [j_{u_{k}}, j'_{u_{k}}]$. Moreover, since $B \cap I = \emptyset$, we have $b_{k} \in (i'_{t_{k}}, i_{t_{k+1}})$, and similarly $a_{k} \in (j'_{u_{k-1}}, j_{u_{k}})$ for $k \in \mathbb{Z}/(d + 1)\mathbb{Z}$. Then \[i_{t_{0}} \leqslant a_{0} < j_{u_{0}} \leqslant b_{0} < i_{t_{1}} \leqslant a_{1} < \dots < i_{t_{d}} \leqslant a_{d} < j_{u_{d}} \leqslant b_{d},\] and so \[ i_{t_{0}} < j_{u_{0}} < i_{t_{1}} < \dots < i_{t_{d}} < j_{u_{d}}.\] Letting $X=\{i_{t_{0}}, \dots, i_{t_{d}}\}$ and $Y=\{j_{u_{0}}, \dots, j_{u_{d}}\}$ gives us the desired result. \end{proof} \begin{lemma}\label{lem:cub_sep} The collection $U(\mathcal{T})$ is $2d$-separated. \end{lemma} \begin{proof} Suppose that there exist $I,J \in U(\mathcal{T})$ such that $I$ and $J$ are $2d$-interweaving. By Lemma~\ref{lem-int-endpts}, we have $X \subseteq \widehat{I}$ and $Y \subseteq \widehat{J}$ such that $X$ and $Y$ are $2d$-interweaving. But this implies that $\widehat{I}$ and $\widehat{J}$ each contain one half of a circuit $(X, Y)$ for $C(n, 2d)$.
This is a contradiction, since, by construction of $U(\mathcal{T})$, $|\widehat{I}|$ and $|\widehat{J}|$ are both simplices of the triangulation $\mathcal{T}$ of $C(n, 2d)$. \end{proof} We must now show that $\# U(\mathcal{T})=\binom{n-1}{2d+1}$. We use induction for this, showing that the size of $U(\mathcal{T})$ is preserved by increasing flips of $\mathcal{T}$, which requires the following lemma. \begin{lemma}\label{lem-card-flip} Let $|S|$ be a $(2d + 1)$-simplex inducing an increasing flip of a triangulation $\mathcal{T}$ of $C(n, 2d)$ and denote $S_{l} = \{s_{0}, s_{2}, \dots, s_{2d}\}$ and $S_{u} = \{s_{1}, s_{3}, \dots, s_{2d + 1}\}$. Then the following two sets have the same cardinality: \begin{align*} \mathcal{I}_{l}(S, n) &= \left\lbrace I \subseteq [n] \mid S_{l} \subseteq \widehat{I} \subset S \right\rbrace, \\ \mathcal{I}_{u}(S, n) &= \left\lbrace I \subseteq [n] \mid S_{u} \subseteq \widehat{I} \subset S \right\rbrace. \end{align*} \end{lemma} Here we use the symbol `$\subset$' to denote proper subsets. \begin{proof} Note that we may instead consider \begin{align*} \mathcal{I}'_{l}(S,n)&:=\left\lbrace I \subseteq [n] \mid S_{l} \subseteq \widehat{I} \subseteq S\right\rbrace , \\ \mathcal{I}'_{u}(S,n)&:=\left\lbrace I \subseteq [n] \mid S_{u} \subseteq \widehat{I} \subseteq S\right\rbrace . \end{align*} This is because \[\mathcal{I}'_{l}(S,n)\setminus \mathcal{I}_{l}(S,n)=\mathcal{I}'_{u}(S,n)\setminus \mathcal{I}_{u}(S,n)=\left\lbrace I \subseteq [n] \mid \widehat{I} = S\right\rbrace .\] Hence if $\# \mathcal{I}'_{l}(S, n) = \# \mathcal{I}'_{u}(S, n)$, then $\# \mathcal{I}_{l}(S, n) = \# \mathcal{I}_{u}(S, n)$. We prove the claim by explicit enumeration. 
Let \[I=[s_{0}, s'_{0}] \cup [s_{1}, s'_{1}] \cup [s_{2}, s'_{2}] \cup \dots \cup [s_{2d}, s'_{2d}] \cup [s_{2d + 1}, s'_{2d + 1}].\] Then $I \in \mathcal{I}'_{l}(S, n)$ if and only if, for all $i \in \mathbb{Z}/(d + 1)\mathbb{Z}$, \[s'_{2i} \in [s_{2i}, s_{2i + 1} - 1] \text{ and } s'_{2i + 1} \in [s_{2i + 1} - 1, s_{2i + 2} - 2].\] Recall that our convention here is that if $s'_{j} = s_{j} - 1$, then $[s_{j}, s'_{j}] = \emptyset$. Similarly, $I \in \mathcal{I}'_{u}(S, n)$ if and only if, for all $i \in \mathbb{Z}/(d + 1)\mathbb{Z}$, \[s'_{2i} \in [s_{2i} - 1, s_{2i + 1} - 2] \text{ and } s'_{2i + 1} \in [s_{2i + 1}, s_{2i + 2} - 1].\] Therefore, \begin{align*} \# \mathcal{I}'_{l}(S, n) = \# \mathcal{I}'_{u}(S, n) &= \prod_{i \in \mathbb{Z}/(d + 1)\mathbb{Z}}(s_{2i + 1} - s_{2i})(s_{2i + 2} - s_{2i + 1})\\ &= \prod_{j \in \mathbb{Z}/(2d + 2)\mathbb{Z}}(s_{j + 1} - s_{j}). \end{align*} \end{proof} This allows us to prove that our $2d$-separated collection $U(\mathcal{T})$ is the right size to be the internal spectrum of a cubillage. \begin{lemma}\label{lem:right_size} Given a triangulation $\mathcal{T}$ of $C(n,2d)$, we have that $\# U(\mathcal{T})=\binom{n-1}{2d+1}$. \end{lemma} \begin{proof} We prove the claim by induction on increasing flips of the triangulation. This is valid since every triangulation of a cyclic polytope can be reached via a sequence of increasing flips from the lower triangulation by \cite[Theorem 1.1(i)]{rambau}. For the base case, let $\mathcal{T}_{l}$ be the lower triangulation of $C(n,2d)$. By Gale's Evenness Criterion, the $2d$-simplices of $\mathcal{T}_{l}$ are given by $1$ together with $d$ disjoint pairs of consecutive numbers. Therefore, the only $d'$-simplices of $\mathcal{T}_{l}$ with $d' \geqslant d$ which have no cyclically consecutive entries are the internal $d$-simplices. Hence if $I \in U(\mathcal{T}_{l})$, then $|\widehat{I}|$ is an internal $d$-simplex of $\mathcal{T}_{l}$.
Moreover, the internal $d$-simplices of $\mathcal{T}_{l}$ are given by $(d+1)$-subsets which are cyclic $(d + 1)$-ple intervals and contain 1. By \cite[(4.2)(ii)]{dkk}, the internal spectrum of the lower cubillage of $Z(n,2d+1)$ consists of all cyclic $(d + 1)$-ple intervals which contain 1. It is then straightforward to see that $U(\mathcal{T})$ is indeed the internal spectrum of the lower cubillage of $Z(n, 2d + 1)$ when $\mathcal{T}$ is the lower triangulation of $C(n, 2d)$. Therefore, we have in this case that $\# U(\mathcal{T})=\binom{n-1}{2d+1}$. For the inductive step, we suppose that we have a triangulation $\mathcal{T}'$ obtained by performing an increasing flip induced by a $(2d+1)$-simplex $|S|$ on a triangulation $\mathcal{T}$ for which the induction hypothesis holds. Then $\mathcal{I}_{l}(S, n)$ consists precisely of the subsets $I$ such that $\pi_{n - 1, 2d + 1}|\widehat{I}|$ is contained in a lower facet of $\pi_{n - 1, 2d + 1}|S|$ but not any upper facets, by Gale's Evenness Criterion. Similarly, $\mathcal{I}_{u}(S, n)$ consists precisely of the subsets $I$ such that $\pi_{n - 1, 2d + 1}|\widehat{I}|$ is contained in an upper facet of $\pi_{n - 1, 2d + 1}|S|$ but not any lower facets. Hence \[U(\mathcal{T}') = (U(\mathcal{T})\setminus\mathcal{I}_{l}(S, n)) \cup \mathcal{I}_{u}(S, n),\] and so $\# U(\mathcal{T}) = \# U(\mathcal{T}')$ by Lemma~\ref{lem-card-flip}. The result then follows by induction. \end{proof} Hence we obtain that $g$ is a surjection in even dimensions. \begin{theorem}\label{thm-g-even} The map $g \colon \mathcal{B}(n, \delta + 1) \to \mathcal{S}(n, \delta)$ is a surjection for even $\delta$. \end{theorem} \begin{proof} Let $\delta=2d$ and let $\mathcal{T}$ be a triangulation of $C(n,2d)$.
By Lemma~\ref{lem:cub_sep}, Lemma~\ref{lem:right_size}, and the correspondence between cubillages and separated collections from \cite{gp}, we have that the collection $U(\mathcal{T})$ is the internal spectrum of a cubillage $\mathcal{Q}_{\mathcal{T}}$ of $Z(n,2d+1)$. Moreover, $g(\mathcal{Q}_{\mathcal{T}})=\mathcal{T}$ by Proposition~\ref{prop-comb-interp}, since if $\# A=d+1$, then $A \in U(\mathcal{T})$ if and only if $|A|$ is an internal $d$-simplex of $\mathcal{T}$. \end{proof} \begin{example}\label{ex:even_surj} We give an example of the construction used to prove Theorem~\ref{thm-g-even}. Consider the triangulation $\mathcal{T}$ of the hexagon $C(6,2)$ which has arcs $\mathring{e}(\mathcal{T})=\{13,15,35\}$. Then we have \begin{align*} U(\mathcal{T})=\{ &13,15,35,\\ &134,125,356,135,\\ &1345,1235,1356\}. \end{align*} Note the presence of $135 \in U(\mathcal{T})$, since $|135|$ is a 2-simplex of $\mathcal{T}$. One can check that $U(\mathcal{T})$ is 2-separated. Furthermore, $\# U(\mathcal{T}) = 10 = \binom{5}{3} = \binom{6-1}{2+1}$, as desired. We thus obtain the cubillage $\mathcal{Q}_{\mathcal{T}}$ which is defined by $\mathrm{ISp}(\mathcal{Q}_{\mathcal{T}})=U(\mathcal{T})$. It then follows from Proposition~\ref{prop-comb-interp} that $g(\mathcal{Q}_{\mathcal{T}})=\mathcal{T}$; compare Example~\ref{ex:g_comb}. Hence $\mathcal{T}$ has a pre-image under $g$. \end{example} We now use this result to show that the map $g$ must be a surjection for odd $\delta$. Following many authors, given a set $\mathcal{S}$ of subsets of $[n]$, we denote by $\mathcal{S} \ast (n+1)$ the set \[\mathcal{S} \ast (n+1) = \{ A \cup \{n+1\} \mid A \in \mathcal{S}\}.\] \begin{theorem}\label{thm-g-odd} The map $g \colon \mathcal{B}(n, \delta + 1) \to \mathcal{S}(n, \delta)$ is a surjection for odd $\delta$. \end{theorem} \begin{proof} Let $\delta=2d+1$. Let $\mathcal{T}$ be a triangulation of $C(n,2d+1)$. 
We show that there exists a cubillage $\mathcal{Q}_{\mathcal{T}}$ of $Z(n,2d+2)$ such that $\mathrm{Sp}(\mathcal{Q}_{\mathcal{T}}) \supseteq \Sigma(\mathcal{T})$. Consider the triangulation $\hat{\mathcal{T}}$ of $C(n+1,2d+2)$ defined in \cite[Definition 4.1]{rambau}. By Theorem~\ref{thm-g-even}, there is a cubillage $\mathcal{Q}'$ of $Z(n+1,2d+3)$ such that $g(\mathcal{Q}')=\hat{\mathcal{T}}$. By definition of $\hat{\mathcal{T}}$, we have that $\Sigma(\mathcal{T})\cup\Sigma(\mathcal{T})\ast(n+1) \subseteq \Sigma(\hat{\mathcal{T}}) \subseteq \mathrm{Sp}(\mathcal{Q}')$. By \cite[Lemma 5.2]{dkk}, if we take the $(n + 1)$-contraction of $\mathcal{Q}'$ then we get a membrane $\mathcal{M}$ in $\mathcal{Q}'/(n + 1)$ as the image of the $(n + 1)$-pie, and we have that $\mathrm{Sp}(\mathcal{M}) \supseteq \Sigma(\mathcal{T})$. We therefore define $\mathcal{Q}_{\mathcal{T}} = \mathcal{M}$, recalling that $\mathcal{M}$ is a cubillage of $Z(n, 2d + 2)$. By Lemma~\ref{lem-all-simps}, we must have that $g(\mathcal{Q}_{\mathcal{T}})=\mathcal{T}$. \end{proof} \begin{corollary}\label{cor:g_surj} The map $g \colon \mathcal{B}(n, \delta + 1) \to \mathcal{S}(n, \delta)$ is a surjection. \end{corollary} \begin{remark} In \cite[Theorem 4.10]{kv-poly}, Kapranov and Voevodsky gave a map $f\colon \mathcal{B}(n,\delta) \rightarrow \mathcal{S}(n+2,\delta+1)$ which they stated was a surjection. However, no proof of this statement is known. It was shown in \cite[Proposition 7.1]{thomas-bst} that there is a factorisation \[ \begin{tikzcd} \mathcal{B}(n,\delta) \ar[dr,"f"] \ar[rr,"\overline{g}"] && \mathcal{S}(n, \delta-1), \\ & \mathcal{S}(n+2,\delta+1) \ar[ru, dashed, two heads] & \end{tikzcd}\] where $\overline{g}$ is the dual map to $g$ from Remark~\ref{rmk:g_dual} and the dashed map is a surjection by \cite[Corollary 4.3]{rambau}. The map $f$ should not only be a surjection, but also a quotient map of posets, as we show is true of the map $g$ in this paper.
This was shown for $\delta = 1$ by Reading \cite{reading_cambrian}, drawing upon \cite{bw_shell_2}. However, note that $f$ cannot in general realise $\mathcal{S}(n + 2, \delta + 1)$ as a quotient of $\mathcal{B}(n, \delta)$ by an order congruence in the sense used in \cite{reading_cambrian}. This is because the equivalence classes of an order congruence must be intervals, but \cite[Section 6]{thomas-bst} shows that the fibres of the map $f$ are not always intervals. Hence $f$ can only be a quotient map of posets in a more general sense, such as that considered in this paper. \end{remark} \section{Fullness}\label{sect-quot} We now show that the map $g$ is full, and hence is a quotient map of posets. To do this, we must show that if $\mathcal{T} \leqslant \mathcal{T}'$ for triangulations $\mathcal{T}, \mathcal{T}'$ of $C(n,\delta)$, then there are cubillages $\mathcal{Q}, \mathcal{Q}'$ of $Z(n,\delta+1)$ such that $g(\mathcal{Q})=\mathcal{T}$, $g(\mathcal{Q}')=\mathcal{T}'$, and $\mathcal{Q} \leqslant \mathcal{Q}'$. We follow the approach of Section~\ref{sect-surj}, whereby we work explicitly for even-dimensional triangulations, and then use this to show the result for odd dimensions. Indeed, we show that for triangulations $\mathcal{T}, \mathcal{T}'$ of $C(n, 2d)$ with $\mathcal{T} \leqslant \mathcal{T}'$, we have $\mathcal{Q}_{\mathcal{T}} \leqslant \mathcal{Q}_{\mathcal{T}'}$. For this, it suffices to show that if $\mathcal{T} \lessdot \mathcal{T}'$, then $\mathcal{Q}_{\mathcal{T}} < \mathcal{Q}_{\mathcal{T}'}$. To do this, we find a sequence of increasing flips from $\mathcal{Q}_{\mathcal{T}}$ to $\mathcal{Q}_{\mathcal{T}'}$. We wish to continue working in the framework of separated collections, as in Section~\ref{sect-surj}. Hence, we must show what the covering relations of the higher Bruhat orders are in this framework. 
\begin{theorem}\label{thm-comb-flips} Given cubillages $\mathcal{Q}, \mathcal{Q}'$ of $Z(n, \delta + 1)$ we have that $\mathcal{Q} \lessdot \mathcal{Q}'$ if and only if $\mathrm{Sp}(\mathcal{Q}') = (\mathrm{Sp}(\mathcal{Q})\setminus \{A\})\cup \{B\}$, where $A$ $\delta$-interweaves $B$. Moreover, in this case $A$ tightly $\delta$-interweaves $B$. \end{theorem} \begin{proof} The forwards direction follows from \cite[Proposition 8.1]{dkk-survey}. Namely, if the increasing flip from $\mathcal{Q}$ to $\mathcal{Q}'$ is induced by the face $\Gamma$ of $Z(n, n)$, then $\Gamma$ has a vertex $\xi_{A}$ and a vertex $\xi_{B}$ such that $A$ tightly $\delta$-interweaves $B$, $\pi_{n, \delta + 2}(\xi_{A})$ is only contained in the lower facets of $\pi_{n, \delta + 2}(\Gamma)$, $\pi_{n, \delta + 2}(\xi_{B})$ is only contained in the upper facets of $\pi_{n, \delta + 2}(\Gamma)$, and every other vertex of $\pi_{n, \delta + 2}(\Gamma)$ is contained in at least one lower facet and at least one upper facet. Hence, $\mathrm{Sp}(\mathcal{Q}') = (\mathrm{Sp}(\mathcal{Q})\setminus \{A\})\cup \{B\}$, where $A$ tightly $\delta$-interweaves $B$. We now prove the backwards direction, supposing that $\mathrm{Sp}(\mathcal{Q}') = (\mathrm{Sp}(\mathcal{Q})\setminus \{A\})\cup \{B\}$, where $A$ $\delta$-interweaves $B$. Let $A' \subseteq A \setminus B$ and $B' \subseteq B\setminus A$ witness the fact that $A$ $\delta$-interweaves $B$. We consider first the case where $\delta=2d$. We begin by proving that $A' = A \setminus B$ and $B' = B \setminus A$, so that $A$ tightly $2d$-interweaves $B$. The vertex $\xi_{A}$ must be an internal vertex in the cubillage $\mathcal{Q}$, since subsets corresponding to boundary vertices are contained in every $2d$-separated collection. Therefore, $\xi_{A}$ must be a vertex of at least two cubes in $\mathcal{Q}$, and so must have at least $2d+2$ edges emanating from it. 
The subsets at the other end of each of these edges must be $2d$-separated from $B$, so the edges must either add elements of $B'$ or remove elements of $A'$. Since $\#(A' \cup B') = 2d + 2$, the edges emanating from $\xi_{A}$ in $\mathcal{Q}$ must be precisely the edges which remove elements of $A'$ and add elements of $B'$. Now suppose that there exists $a \in A \setminus (A' \cup B)$. Then $a \in (b'_{i-1}, b'_{i})$ for some $i \in \mathbb{Z}/(d + 1)\mathbb{Z}$. But this implies that $A \setminus \{a'_{i}\}$ $2d$-interweaves $B$, since $a$ can take the place of $a'_{i}$ in the witnessing sets; this contradicts the fact that the edge from $\xi_{A}$ to $\xi_{A \setminus \{a'_{i}\}}$ is in the cubillage $\mathcal{Q}$. Hence $A' = A \setminus B$. The argument that $B' = B \setminus A$ is similar. Therefore $\xi_{A}$ is incident to $2d+2$ edges in the cubillage, where $d+1$ of the edges add elements of $B'$ and $d+1$ of the edges remove elements of $A'$. The cubes with $\xi_{A}$ as a vertex are generated by a choice of $2d+1$ of these edges. If $\mathcal{P}$ is the set of cubes in $\mathcal{Q}$ with $\xi_{A}$ as a vertex, then $\mathcal{P}$ is a set of facets of a $(2d+2)$-face $\Gamma$ of $Z(n, n)$ which has initial vertex $\xi_{A \cap B}$ and which is generated by $A' \cup B'$. By \cite[Proposition 8.1]{dkk-survey}, $\pi_{n, \delta + 2}(\mathcal{P})$ gives the lower facets of $\pi_{n, \delta + 2}(\Gamma)$, since $\pi_{n, \delta + 2}(\mathcal{P})$ consists of all the facets of $\pi_{n, \delta + 2}(\Gamma)$ which contain $\pi_{n, \delta + 2}(\xi_{A})$. Since, likewise, the upper facets of $\pi_{n, \delta + 2}(\Gamma)$ are precisely those containing $\pi_{n, \delta + 2}(\xi_{B})$, we obtain that $\mathcal{Q}'$ is an increasing flip of $\mathcal{Q}$. For $\delta=2d+1$, the argument is similar. We deduce that $\xi_{A}$ has $2d + 3$ edges emanating from it in $\mathcal{Q}$, $d + 1$ of which remove elements of $A'$ and $d + 2$ of which add elements of $B'$.
To show that $A' = A \setminus B$ and $B' = B \setminus A$, the only extra thing to consider is the possibility that we have $a \in A \setminus (A' \cup B)$ such that $a < b'_{0}$ or $a > b'_{d+1}$. But in the first instance here, we have that $B$ $\delta$-interweaves $A \cup \{b'_{d+1}\}$, since \[a < b'_{0} < a'_{0} < b'_{1} < \dots < b'_{d} < a'_{d}.\] But this is a contradiction, since we know that the edge from $\xi_{A}$ to $\xi_{A \cup \{b'_{d+1}\}}$ is in $\mathcal{Q}$. In the second instance, we have that $B$ $\delta$-interweaves $A \cup \{b'_{0}\}$, which is likewise a contradiction, since the edge from $\xi_{A}$ to $\xi_{A \cup \{b'_{0}\}}$ is in $\mathcal{Q}$. The remainder of the case where $\delta = 2d + 1$ is similar. \end{proof} In the setting of the above theorem, we say that $(A, B)$ is the \emph{exchange pair} of the flip and that we \emph{exchange} $A$ for $B$. Using this characterisation of increasing flips, it can be seen that, in order to show that $\mathcal{Q}_{\mathcal{T}} \leqslant \mathcal{Q}_{\mathcal{T}'}$, we must show that we can gradually exchange the elements of $\mathrm{Sp}(\mathcal{Q}_{\mathcal{T}})\setminus\mathrm{Sp}(\mathcal{Q}_{\mathcal{T}'})$ for the elements of $\mathrm{Sp}(\mathcal{Q}_{\mathcal{T}'}) \setminus \mathrm{Sp}(\mathcal{Q}_{\mathcal{T}})$. If $|S|$ is the simplex inducing the increasing flip from $\mathcal{T}$ to $\mathcal{T}'$, then $\mathrm{Sp}(\mathcal{Q}_{\mathcal{T}})\setminus\mathrm{Sp}(\mathcal{Q}_{\mathcal{T}'}) = \mathcal{I}_{l}(S, n)$ and $\mathrm{Sp}(\mathcal{Q}_{\mathcal{T}'}) \setminus \mathrm{Sp}(\mathcal{Q}_{\mathcal{T}}) = \mathcal{I}_{u}(S, n)$, as in Lemma~\ref{lem:right_size}. Hence, we will define a sequence of exchanges which replaces $\mathcal{I}_{l}(S, n)$ with $\mathcal{I}_{u}(S, n)$. To show that our sequence of exchanges works, we will need the following lemma.
\begin{lemma}\label{lem:interweaving_criterion_for_flips} Let \[I = [s_{0}, s^{i}_{0}] \cup [s_{1}, s^{i}_{1}] \cup \dots \cup [s_{2d}, s^{i}_{2d}] \cup [s_{2d+1}, s^{i}_{2d+1}]\] and \[J = [s_{0}, s^{j}_{0}] \cup [s_{1}, s^{j}_{1}] \cup \dots \cup [s_{2d}, s^{j}_{2d}] \cup [s_{2d+1}, s^{j}_{2d+1}].\] Then $I$ $2d$-interweaves $J$ if and only if, for all $r$, \[s^{j}_{2r} < s^{i}_{2r} \text{ and } s^{i}_{2r+1} < s^{j}_{2r+1}.\] \end{lemma} \begin{proof} If we have that, for all $r$, $s^{j}_{2r} < s^{i}_{2r}$ and $s^{i}_{2r+1} < s^{j}_{2r+1}$, then we have that $\{s^{i}_{0}, s^{i}_{2}, \dots, s^{i}_{2d}\} \subseteq I \setminus J$ and $\{s_{1}^{j}, s_{3}^{j}, \dots, s_{2d+1}^{j}\} \subseteq J \setminus I$ with \[s_{0}^{i} < s_{1}^{j} < s_{2}^{i} < s_{3}^{j} < \dots < s_{2d}^{i} < s_{2d+1}^{j}.\] Hence $I$ $2d$-interweaves $J$. Conversely, suppose that $I$ $2d$-interweaves $J$, and let $X \subseteq I \setminus J$ and $Y \subseteq J \setminus I$ witness this. We cannot have both $x_{p}, x_{q} \in [s_{t}, s^{i}_{t}]$ for $p \neq q$, since this implies that $y_{r} \in [s_{t}, s^{i}_{t}]$ for $p \leqslant r < q$. Furthermore, we cannot have both $x_{p} \in [s_{t}, s_{t}^{i}]$ and $y_{p} \in [s_{t}, s_{t}^{j}]$, since we must have either $[s_{t}, s_{t}^{i}] \subseteq [s_{t}, s_{t}^{j}]$ or $[s_{t}, s_{t}^{j}] \subseteq [s_{t}, s_{t}^{i}]$. By the pigeonhole principle and the fact that $x_{0} < y_{0}$, we deduce that $x_{r} \in [s_{2r}, s_{2r}^{i}]$ and $y_{r} \in [s_{2r + 1}, s_{2r + 1}^{j}]$ for all $r$. But this implies that $s^{j}_{2r} < s^{i}_{2r}$ and $s^{i}_{2r + 1} < s^{j}_{2r + 1}$ for all $r$. \end{proof} It is now useful for us to obtain an explicit map for the bijection from Lemma~\ref{lem-card-flip}. This allows us to construct the sequence of exchanges which replaces $\mathcal{I}_{l}(S, n)$ with $\mathcal{I}_{u}(S, n)$. 
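Before giving the construction, it may help to see Lemma~\ref{lem:interweaving_criterion_for_flips} in a small case. Take $n = 8$ and $d = 1$, with $(s_{0}, s_{1}, s_{2}, s_{3}) = (1, 3, 5, 7)$, and consider $I = 1256 = [1, 2] \cup [5, 6]$ and $J = 1357$, where, allowing empty intervals as usual, $(s^{i}_{0}, s^{i}_{1}, s^{i}_{2}, s^{i}_{3}) = (2, 2, 6, 6)$ and $(s^{j}_{0}, s^{j}_{1}, s^{j}_{2}, s^{j}_{3}) = (1, 3, 5, 7)$. Then
\[s^{j}_{0} = 1 < 2 = s^{i}_{0}, \qquad s^{i}_{1} = 2 < 3 = s^{j}_{1}, \qquad s^{j}_{2} = 5 < 6 = s^{i}_{2}, \qquad s^{i}_{3} = 6 < 7 = s^{j}_{3},\]
so the lemma gives that $I$ $2$-interweaves $J$; indeed, this is witnessed by $\{2, 6\} \subseteq I \setminus J$ and $\{3, 7\} \subseteq J \setminus I$, since $2 < 3 < 6 < 7$.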
\begin{construction}\label{constr:bij} Given $S \in \binom{[n]}{2d + 2}$, we define \begin{align*} \mathcal{I}(S, n) &= \mathcal{I}_{l}(S, n) \cup \mathcal{I}_{u}(S, n), \\ \mathcal{I}'(S, n) &= \mathcal{I}'_{l}(S, n) \cup \mathcal{I}'_{u}(S, n). \end{align*} In order to get a convenient parametrisation of these sets, we define a map \begin{align*} \phi \colon \prod_{i \in \mathbb{Z}/(2d + 2)\mathbb{Z}} [0, s_{i + 1} - s_{i}] &\rightarrow 2^{[n]} \\ (n_{0}, n_{1}, \dots, n_{2d + 1}) &\mapsto \bigcup_{i \in \mathbb{Z}/(2d + 2)\mathbb{Z}} [s_{i}, s_{i} + n_{i} - 1]. \end{align*} We abbreviate $\mathbf{n} = (n_{0}, n_{1}, \dots, n_{2d + 1})$. Then \begin{itemize} \item $\phi(\mathbf{n}) \in \mathcal{I}'_{l}(S,n)$ if and only if $n_{2i - 1} < s_{2i} - s_{2i - 1}$ and $n_{2i} > 0$ for all $i \in \mathbb{Z}/(d+1)\mathbb{Z}$; \item $\phi(\mathbf{n}) \in \mathcal{I}'_{u}(S,n)$ if and only if $n_{2i} < s_{2i+1} - s_{2i}$ and $n_{2i+1} > 0$ for all $i \in \mathbb{Z}/(d+1)\mathbb{Z}$; \item $\phi(\mathbf{n}) \in \mathcal{I}_{l}(S,n)$ if and only if $n_{2i-1} < s_{2i} - s_{2i-1}$ and $n_{2i} > 0$ for all $i \in \mathbb{Z}/(d+1)\mathbb{Z}$, and there exists a $j \in \mathbb{Z}/(d+1)\mathbb{Z}$ such that either $n_{2j+1} = 0$ or $n_{2j} = s_{2j+1} - s_{2j}$; \item $\phi(\mathbf{n}) \in \mathcal{I}_{u}(S,n)$ if and only if $n_{2i} < s_{2i+1} - s_{2i}$ and $n_{2i+1} > 0$ for all $i \in \mathbb{Z}/(d+1)\mathbb{Z}$, and there exists a $j \in \mathbb{Z}/(d+1)\mathbb{Z}$ such that either $n_{2j} = 0$, or $n_{2j-1} = s_{2j} - s_{2j-1}$. \end{itemize} We then obtain an explicit bijection by defining a map \[\psi \colon \mathcal{I}_{l}(S,n) \rightarrow \mathcal{I}_{u}(S,n)\] as follows. Let $I \in \mathcal{I}_{l}(S,n)$ such that $I = \phi(\mathbf{n})$ and let $\mathbf{t} = (-1, 1, -1, 1, \dots, -1, 1)$. 
Further, define \[\lambda_{I} = \max\left\lbrace\lambda \in \mathbb{Z}_{>0} \mathrel{\Big|} \mathbf{n} + \lambda\mathbf{t} \in \prod_{i \in \mathbb{Z}/(2d + 2)\mathbb{Z}} [0, s_{i + 1} - s_{i}]\right\rbrace.\] By construction, \[\phi(\mathbf{n} + \lambda_{I}\mathbf{t}) \in \mathcal{I}_{u}(S, n),\] since we must either have some $j \in \mathbb{Z}/(d+1)\mathbb{Z}$ such that $n_{2j} - \lambda_{I} = 0$, or some $j \in \mathbb{Z}/(d+1)\mathbb{Z}$ such that $n_{2j-1} + \lambda_{I} = s_{2j} - s_{2j-1}$, otherwise $\lambda_{I}$ would not be maximal. Therefore define \[\psi(I) = \phi(\mathbf{n} + \lambda_{I}\mathbf{t}).\] It can be seen that the map $\psi$ is a bijection because one may define its inverse as follows. Let $J \in \mathcal{I}_{u}(S,n)$ such that $J = \phi(\mathbf{n})$. Then let \[\mu_{J} = \max\left\lbrace\mu \in \mathbb{Z}_{>0} \mathrel{\Big|} \mathbf{n} - \mu\mathbf{t} \in \prod_{i \in \mathbb{Z}/(2d + 2)\mathbb{Z}} [0, s_{i + 1} - s_{i}]\right\rbrace.\] By construction, \[\phi(\mathbf{n} - \mu_{J}\mathbf{t}) \in \mathcal{I}_{l}(S, n),\] since we must either have some $j \in \mathbb{Z}/(d+1)\mathbb{Z}$ such that $n_{2j+1} - \mu_{J} = 0$, or some $j \in \mathbb{Z}/(d+1)\mathbb{Z}$ such that $n_{2j} + \mu_{J} = s_{2j+1} - s_{2j}$. It is then clear that \[\psi^{-1}(J) = \phi(\mathbf{n} - \mu_{J}\mathbf{t}).\] \end{construction} \begin{theorem}\label{thm-even-quot} Given triangulations $\mathcal{T}, \mathcal{T}'$ of $C(n,2d)$ such that $\mathcal{T} \lessdot \mathcal{T}'$, there exist cubillages $\mathcal{Q}_{0}, \dots, \mathcal{Q}_{r}$ of $Z(n,2d+1)$ such that $\mathcal{Q}_{0} = \mathcal{Q}_{\mathcal{T}}$, $\mathcal{Q}_{r} = \mathcal{Q}_{\mathcal{T}'}$ and \[\mathcal{Q}_{0} \lessdot \mathcal{Q}_{1} \lessdot \dots \lessdot \mathcal{Q}_{r},\] so that $\mathcal{Q}_{\mathcal{T}} \leqslant \mathcal{Q}_{\mathcal{T}'}$.
\end{theorem} \begin{proof} Suppose that the increasing flip of $\mathcal{T}$ which gives $\mathcal{T}'$ is induced by the $(2d+1)$-face $|S|$ of $C(n, n - 1)$. Then $\mathrm{ISp}(\mathcal{Q}_{\mathcal{T}}) \setminus \mathrm{ISp}(\mathcal{Q}_{\mathcal{T}'}) = \mathcal{I}_{l}(S,n)$ and $\mathrm{ISp}(\mathcal{Q}_{\mathcal{T}'}) \setminus \mathrm{ISp}(\mathcal{Q}_{\mathcal{T}}) = \mathcal{I}_{u}(S,n)$. Let $\mathcal{R} = \mathrm{ISp}(\mathcal{Q}_{\mathcal{T}}) \setminus \mathcal{I}_{l}(S,n) = \mathrm{ISp}(\mathcal{Q}_{\mathcal{T}'}) \setminus \mathcal{I}_{u}(S,n)$. Hence we must find a sequence of flips starting at $\mathcal{Q}_{\mathcal{T}}$ which gradually replaces $\mathcal{I}_{l}(S, n)$ with $\mathcal{I}_{u}(S, n)$. The flips of cubillages we wish to perform are as follows. Given $\phi(\mathbf{n}) \in \mathcal{I}_{l}(S, n)$, we make the sequence of exchanges \[\phi(\mathbf{n}) \leadsto \phi(\mathbf{n} + \mathbf{t}) \leadsto \dots \leadsto \phi(\mathbf{n} + (\lambda_{\phi(\mathbf{n})} - 1)\mathbf{t}) \leadsto \phi(\mathbf{n} + \lambda_{\phi(\mathbf{n})}\mathbf{t}),\] where $\phi(\mathbf{n}) \leadsto \phi(\mathbf{n} + \mathbf{t})$ means that we remove $\phi(\mathbf{n})$ and replace it with $\phi(\mathbf{n} + \mathbf{t})$. Hence the set of exchange pairs in our sequence of flips from $\mathcal{Q}_{\mathcal{T}}$ to $\mathcal{Q}_{\mathcal{T}'}$ is \[\{(\phi(\mathbf{n} + r\mathbf{t}), \phi(\mathbf{n} + (r + 1)\mathbf{t})) \mid \phi(\mathbf{n}) \in \mathcal{I}_{l}(S, n), ~ 0 \leqslant r < \lambda_{\phi(\mathbf{n})}\}.\] We must show that there is an order in which we can make these exchanges such that after each exchange we still have a $2d$-separated collection. Here each exchange gives an increasing flip by Theorem~\ref{thm-comb-flips}. Note further that $\phi(\mathbf{n} + r\mathbf{t})$ and $\phi(\mathbf{n} + (r + 1)\mathbf{t})$ are tightly $2d$-interweaving, as we know must be the case from Theorem~\ref{thm-comb-flips}.
Our exchanges give a bijection \begin{align*} \mathcal{I}'(S, n) \setminus \mathcal{I}_{u}(S, n) &\rightarrow \mathcal{I}'(S, n) \setminus \mathcal{I}_{l}(S, n) \\ \phi(\mathbf{n}) &\mapsto \phi(\mathbf{n} + \mathbf{t}). \end{align*} Hence, we have one exchange per element of $\mathcal{I}'(S, n) \setminus \mathcal{I}_{u}(S, n)$. By Construction~\ref{constr:bij}, we have that $\phi$ is a bijection between $[1, s_{1} - s_{0}] \times [0, s_{2} - s_{1} - 1] \times \dots \times [1, s_{2d+1} - s_{2d}] \times [0, s_{0} - s_{2d+1} - 1 + n]$ and $\mathcal{I}'(S, n) \setminus \mathcal{I}_{u}(S, n)$. The set $[1, s_{1} - s_{0}] \times [0, s_{2} - s_{1} - 1] \times \dots \times [1, s_{2d+1} - s_{2d}] \times [0, s_{0} - s_{2d+1} - 1 + n]$ is a lattice under the order given by \[(n_{0}, n_{1}, \dots, n_{2d+1}) \leqslant (n'_{0}, n'_{1}, \dots, n'_{2d+1})\] if and only if for all $j$ \[n'_{2j} \leqslant n_{2j} \text{ and } n_{2j+1} \leqslant n'_{2j+1},\] since this is just the usual product order, but reversed on coordinates with even index. We claim that any linear extension $\mathbf{n}^{1} < \dots < \mathbf{n}^{r}$ of this lattice gives an order on $\mathcal{I}'(S, n)\setminus \mathcal{I}_{u}(S, n)$ such that if $\mathcal{C}_{0} := \mathrm{Sp}(\mathcal{Q}_{\mathcal{T}})$ and $\mathcal{C}_{i} := (\mathcal{C}_{i - 1} \setminus \{\phi(\mathbf{n}^{i})\}) \cup \{\phi(\mathbf{n}^{i} + \mathbf{t})\}$, then $\mathcal{C}_{i}$ is $2d$-separated for all $i$. Note first that we always must have $\phi(\mathbf{n}^{i}) \in \mathcal{C}_{i - 1}$. This is because either $\phi(\mathbf{n}^{i}) \in \mathcal{I}_{l}(S, n)$ or $\phi(\mathbf{n}^{i} - \mathbf{t}) \in \mathcal{I}'(S, n) \setminus \mathcal{I}_{u}(S, n)$. Hence, either $\phi(\mathbf{n}^{i}) \in \mathcal{C}_{0}$, or $\phi(\mathbf{n}^{i})$ is the result of an earlier exchange, since $\mathbf{n}^{i} - \mathbf{t} < \mathbf{n}^{i}$ in our order. Now suppose that $\mathcal{C}_{i}$ is not $2d$-separated for some $i$. 
We may choose the minimal $i$ for which this is the case. We first show that no element of $\mathcal{I}'(S, n)$ is $2d$-interweaving with any element of $\mathcal{R}$. Suppose, on the contrary, that there exist $I \in \mathcal{I}'(S, n)$ and $J \in \mathcal{R}$ such that $I$ and $J$ are $2d$-interweaving. Then, by Lemma~\ref{lem-int-endpts}, we have $X \subseteq \widehat{I}$ and $Y \subseteq \widehat{J}$ such that $\# X = \# Y = d + 1$ and $X$ and $Y$ are $2d$-interweaving. We have that $X \subseteq \widehat{I} \subseteq S$, and since $\# X = d + 1$, we must have either $X \not\supseteq S_{u}$, or $X \not\supseteq S_{l}$. If $X \not\supseteq S_{u}$, then $X \subseteq F$ for a $2d$-simplex $|F|$ of $\mathcal{T}$, by Gale's Evenness Criterion. This gives a contradiction, since $|F|$ and $|\widehat{J}|$ are both simplices of $\mathcal{T}$ and $(X, Y)$ is a circuit. One can derive a similar contradiction using $\mathcal{T}'$ when $X \not\supseteq S_{l}$. Therefore, if $\mathcal{C}_{i}$ is not $2d$-separated, it must be because $\phi(\mathbf{n}^{i} + \mathbf{t})$ is $2d$-interweaving with an element $I \in \mathcal{I}(S, n) \cap \mathcal{C}_{i}$. By Lemma~\ref{lem:interweaving_criterion_for_flips}, we must have \[I = [s_{0}, s'_{0}] \cup [s_{1}, s'_{1}] \cup \dots \cup [s_{2d+1}, s'_{2d+1}] \in \mathcal{C}_{i} \setminus \{\phi(\mathbf{n}^{i} + \mathbf{t})\} = \mathcal{C}_{i - 1} \setminus \{\phi(\mathbf{n}^{i})\}\] such that either $s_{2j} + (n_{2j}^{i} - 1) - 1 < s'_{2j}$ and $s'_{2j+1} < s_{2j+1} + (n_{2j+1}^{i} + 1) - 1$ for all $j$, or $s'_{2j} < s_{2j} + (n_{2j}^{i} - 1) - 1$ and $s_{2j+1} + (n_{2j+1}^{i} + 1) - 1 < s'_{2j+1}$ for all $j$. In the latter case, we also have that $s'_{2j} < s_{2j} + n_{2j}^{i} - 1$ and $s_{2j+1} + n_{2j+1}^{i} - 1 < s'_{2j+1}$, so that $\phi(\mathbf{n}^{i})$ also $2d$-interweaves $I$, which means that $\mathcal{C}_{i - 1}$ is not $2d$-separated. This contradicts $i$ being the minimal index such that this was the case. 
In the former case, we have that $I$ precedes $\phi(\mathbf{n}^{i})$ in our chosen order on $\mathcal{I}'(S, n)\setminus \mathcal{I}_{u}(S, n)$. This means that $I$ must have already been exchanged, which is also a contradiction. Therefore, we have cubillages $\mathcal{Q}_{0}, \dots, \mathcal{Q}_{r}$ such that $\mathcal{C}_{i} = \mathrm{Sp}(\mathcal{Q}_{i})$ for each $i$. By Theorem~\ref{thm-comb-flips}, we have \[\mathcal{Q}_{0} \lessdot \mathcal{Q}_{1} \lessdot \dots \lessdot \mathcal{Q}_{r}.\] By construction, we have that $\mathcal{Q}_{0} = \mathcal{Q}_{\mathcal{T}}$ and $\mathcal{Q}_{r} = \mathcal{Q}_{\mathcal{T}'}$. \end{proof} \begin{example}\label{ex:even_quot} We give an example of the construction used to prove Theorem~\ref{thm-even-quot}. \begin{enumerate}[wide] \item Consider the triangulation $\mathcal{T}$ of the heptagon $C(7, 2)$ given by $\mathring{e}(\mathcal{T}) = \{13, 16, 35, 36\}$. We perform the increasing flip on this triangulation induced by the simplex $|1236|$, thereby obtaining the triangulation $\mathcal{T}'$ of $C(7, 2)$ with $\mathring{e}(\mathcal{T}') = \{16, 26, 35, 36\}$. We have \begin{align*} \mathrm{ISp}(\mathcal{Q}_{\mathcal{T}}) = \{&13, 16, 35, 36,\\ &126, 134, 136, 346, 356, 367,\\ &1236, 1345, 1346, 1367, 3467, 3567, \\ &12346, 13456, 13467, 13567\} \end{align*} and \begin{align*} \mathrm{ISp}(\mathcal{Q}_{\mathcal{T}'}) = \{&16, 26, 35, 36\\ &126, 236, 267, 346, 356, 367,\\ &1236, 1367, 2346, 2367, 3467, 3567,\\ &12346, 13467, 13567, 23467\}. 
\end{align*} Moreover, \[\mathrm{ISp}(\mathcal{Q}_{\mathcal{T}})\setminus \mathrm{ISp}(\mathcal{Q}_{\mathcal{T}'}) = \mathcal{I}_{l}(1236, 7) = \{13, 134, 136, 1345, 1346, 13456\}\] and \[\mathrm{ISp}(\mathcal{Q}_{\mathcal{T}'})\setminus \mathrm{ISp}(\mathcal{Q}_{\mathcal{T}}) = \mathcal{I}_{u}(1236, 7) = \{26, 236, 267, 2346, 2367, 23467\}.\] We illustrate how we can gradually replace elements of $\mathcal{I}_{l}(1236, 7)$ in $\mathrm{ISp}(\mathcal{Q}_{\mathcal{T}})$ with the elements of $\mathcal{I}_{u}(1236, 7)$, whilst ensuring that the collection remains $2$-separated. The coordinate parameterisation of $\mathcal{I}'(1236, 7)$ by $\phi$ gives \begin{align*} \phi(1, 0, 1, 0) &= 13, \\ \phi(1, 0, 2, 0) &= 134, \\ \phi(1, 0, 1, 1) &= 136, \\ \phi(1, 0, 3, 0) &= 1345, \\ \phi(1, 0, 2, 1) &= 1346, \\ \phi(1, 0, 3, 1) &= 13456, \\ \phi(0, 1, 0, 1) &= 26, \\ \phi(0, 1, 1, 1) &= 236, \\ \phi(0, 1, 0, 2) &= 267, \\ \phi(0, 1, 2, 1) &= 2346, \\ \phi(0, 1, 1, 2) &= 2367, \\ \phi(0, 1, 2, 2) &= 23467. \end{align*} The bijection $\psi\colon \mathcal{I}_{l}(1236, 7) \rightarrow \mathcal{I}_{u}(1236, 7)$ in this case gives \[ \begin{tabular}{rcccccl} 13 & = & $\phi(1, 0, 1, 0)$ & $\mapsto$ & $\phi(0, 1, 0, 1)$ & = & 26,\\ 134 & = & $\phi(1, 0, 2, 0)$ & $\mapsto$ & $\phi(0, 1, 1, 1)$ & = & 236, \\ 136 & = & $\phi(1, 0, 1, 1)$ & $\mapsto$ & $\phi(0, 1, 0, 2)$ & = & 267, \\ 1345 & = & $\phi(1, 0, 3, 0)$ & $\mapsto$ & $\phi(0, 1, 2, 1)$ & = & 2346, \\ 1346 & = & $\phi(1, 0, 2, 1)$ & $\mapsto$ & $\phi(0, 1, 1, 2)$ & = & 2367, \\ 13456 & = & $\phi(1, 0, 3, 1)$ & $\mapsto$ & $\phi(0, 1, 2, 2)$ & = & 23467. \\ \end{tabular} \] Note that in this example, we have that $\mathcal{I}'_{l}(1236, 7) = \mathcal{I}_{l}(1236, 7)$ and $\mathcal{I}'_{u}(1236, 7) = \mathcal{I}_{u}(1236, 7)$, since we cannot have $\widehat{I} = 1236$ for any subset $I$. 
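To illustrate where the values $\lambda_{I}$ from Construction~\ref{constr:bij} come from in this table, consider $I = 13 = \phi(1, 0, 1, 0)$. The relevant box of coordinates is
\[\prod_{i \in \mathbb{Z}/4\mathbb{Z}}[0, s_{i + 1} - s_{i}] = [0, 1] \times [0, 1] \times [0, 3] \times [0, 2],\]
using the convention that $s_{0} - s_{3} = 1 - 6 + 7 = 2$, and $\mathbf{n} + \lambda\mathbf{t} = (1 - \lambda, \lambda, 1 - \lambda, \lambda)$ lies in this box if and only if $\lambda \leqslant 1$. Hence $\lambda_{13} = 1$ and $\psi(13) = \phi(0, 1, 0, 1) = 26$, as in the first line of the table above.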
Thus we consider the lattice on $\mathcal{I}'(1236, 7)\setminus \mathcal{I}_{u}(1236, 7) = \mathcal{I}_{l}(1236, 7)$ given by \[ \begin{tikzcd} & (1, 0, 1, 1) && \\ (1, 0, 1, 0) \ar[ur] && (1, 0, 2, 1) \ar[ul] & \\ & (1, 0, 2, 0) \ar[ul] \ar[ur] && (1, 0, 3, 1) \ar[ul] \\ && (1, 0, 3, 0) \ar[ul] \ar[ur], \end{tikzcd} \] which is \[ \begin{tikzcd} & 136 && \\ 13 \ar[ur] && 1346 \ar[ul] & \\ & 134 \ar[ul] \ar[ur] && 13456 \ar[ul] \\ && 1345. \ar[ul] \ar[ur] \end{tikzcd} \] Note here that we place the minimal element of the lattice at the bottom. Therefore, by Theorem~\ref{thm-even-quot}, we may perform the exchanges replacing $\phi(\mathbf{n})$ by $\phi(\mathbf{n} + \mathbf{t})$ in an order given by any linear extension of \[ \begin{tikzcd} & 136 \leadsto 267 && \\ 13 \leadsto 26 \ar[ur] && 1346 \leadsto 2367 \ar[ul] & \\ & 134 \leadsto 236 \ar[ul] \ar[ur] && 13456 \leadsto 23467 \ar[ul] \\ && 1345 \leadsto 2346. \ar[ul] \ar[ur] \end{tikzcd} \] Note here that we first make the exchange at the bottom of the lattice, and then move up. \item We now give an example where we do not have $\mathcal{I}(S, n) = \mathcal{I}'(S, n)$. This example is somewhat larger than the previous example, so we do not go through it at the same level of detail. Indeed, we do not consider full triangulations, but only the set $\mathcal{I}_{l}(1357, 8)$, which we wish to replace with the set $\mathcal{I}_{u}(1357, 8)$. Here we have $\mathcal{I}'_{l}(1357, 8) = \mathcal{I}_{l}(1357, 8) \cup \{1357\}$ and $\mathcal{I}'_{u}(1357, 8) = \mathcal{I}_{u}(1357, 8) \cup \{1357\}$. The sequence of exchanges from $\mathcal{I}_{l}(1357, 8)$ to $\mathcal{I}_{u}(1357, 8)$ is given by the bijection $\phi(\mathbf{n}) \mapsto \phi(\mathbf{n} + \mathbf{t})$ from $\mathcal{I}'(1357, 8)\setminus\mathcal{I}_{u}(1357, 8)$ to $\mathcal{I}'(1357, 8)\setminus\mathcal{I}_{l}(1357, 8)$. Any sequence of exchanges done in the order of any linear extension of the following lattice will preserve 2-separatedness.
One can check that this is the lattice from the proof of Theorem~\ref{thm-even-quot}. \[ \adjustbox{scale=0.9, center}{ \begin{tikzcd} && 1357 \leadsto 3478 &&& \\ & 135 \leadsto 347 \ar[ur] & 157 \leadsto 378 \ar[u] & 13567 \leadsto 34578 \ar[ul] & 12357 \leadsto 13478 \ar[ull] & \\ 15 \leadsto 37 \ar[ur] \ar[urr] & 1356 \leadsto 3457 \ar[u] \ar[urr] & 1567 \leadsto 3578 \ar[u] \ar[ur] & 1235 \leadsto 1347 \ar[ull] \ar[ur] & 1257 \leadsto 1378 \ar[ull] \ar[u] & 123567 \leadsto 134578 \ar[ull] \ar[ul] \\ & 156 \leadsto 357 \ar[ul] \ar[u] \ar[ur] & 12356 \leadsto 13457 \ar[ul] \ar[ur] \ar[urrr] & 125 \leadsto 137 \ar[ulll] \ar[u] \ar[ur] & 12567 \leadsto 13578 \ar[ull] \ar[u] \ar[ur] & \\ &&& 1256 \leadsto 1357 \ar[ull] \ar[ul] \ar[u] \ar[ur] && \end{tikzcd} } \] Note that here, since $1357 \in \mathcal{I}'(1357, 8)\setminus\mathcal{I}_{l}(1357, 8)$, but $1357 \notin \mathcal{I}_{u}(1357, 8)$, we have that $1256 \leadsto 1357 \leadsto 3478$. That is, $1357$ is only an intermediate subset in the sequence of exchanges from $\mathcal{I}_{l}(1357, 8)$ to $\mathcal{I}_{u}(1357, 8)$. \end{enumerate} \end{example} We now show the result for odd dimensions. \begin{theorem}\label{thm-odd-quot} Given triangulations $\mathcal{T}, \mathcal{T}'$ of $C(n, 2d + 1)$ such that $\mathcal{T} \lessdot \mathcal{T}'$, there exist cubillages $\mathcal{Q}_{0}, \dots, \mathcal{Q}_{r}$ of $Z(n, 2d + 2)$ such that $\mathcal{Q}_{0} = \mathcal{Q}_{\mathcal{T}}$, $\mathcal{Q}_{r} = \mathcal{Q}_{\mathcal{T}'}$ and \[\mathcal{Q}_{0} \lessdot \mathcal{Q}_{1} \lessdot \dots \lessdot \mathcal{Q}_{r},\] so that $\mathcal{Q}_{\mathcal{T}} \leqslant \mathcal{Q}_{\mathcal{T}'}$. \end{theorem} \begin{proof} We start, as in the proof of Theorem~\ref{thm-g-odd}, by considering the triangulations $\hat{\mathcal{T}}, \hat{\mathcal{T}'}$ of $C(n+1,2d+2)$. By \cite[Proposition 5.14(i)]{rambau}, we have that $\hat{\mathcal{T}'} < \hat{\mathcal{T}}$. 
By Theorem~\ref{thm-even-quot}, there exist cubillages $\mathcal{Q}'_{s} \lessdot \dots \lessdot \mathcal{Q}'_{0}$ of $Z(n + 1, 2d + 3)$ such that $\mathcal{Q}'_{s} = \mathcal{Q}_{\hat{\mathcal{T}'}}$ and $\mathcal{Q}'_{0} = \mathcal{Q}_{\hat{\mathcal{T}}}$. As in the proof of \cite[Lemma 5.2]{dkk}, we have that the $(n+1)$-contraction of $\mathcal{Q}'_{i}$ gives a membrane $\mathcal{M}_{i}$, which is a cubillage of $Z(n, 2d + 2)$. As in the proof of Theorem~\ref{thm-g-odd}, we have that $\mathcal{M}_{s} = \mathcal{Q}_{\mathcal{T}'}$ and $\mathcal{M}_{0} = \mathcal{Q}_{\mathcal{T}}$. We claim that for each $i$ we either have $\mathcal{M}_{i} = \mathcal{M}_{i + 1}$ or $\mathcal{M}_{i} \lessdot \mathcal{M}_{i + 1}$. Consider the increasing flip which takes $\mathcal{Q}'_{i + 1}$ to $\mathcal{Q}'_{i}$. Suppose this increasing flip is induced by a $(2d + 4)$-face $\Gamma$ of $Z(n + 1, n + 1)$ which has $A$ as its set of generating vectors. If $n + 1 \notin A$, then the increasing flip does not affect the $(n + 1)$-pie, so that $\mathcal{M}_{i} = \mathcal{M}_{i + 1}$. Hence, suppose instead that $n + 1 \in A$. Let the lower facets of $\pi_{n + 1, 2d + 4}(\Gamma)$ consist of the cubes $\pi_{n + 1, 2d + 4}(\Delta_{j})$, where $\Delta_{j}$ is generated by $A \setminus \{a_{j}\}$, noting that we must have $a_{2d + 3} = n + 1$. Similarly, let the upper facets of $\pi_{n + 1, 2d + 4}(\Gamma)$ consist of the cubes $\pi_{n + 1, 2d + 4}(\Delta'_{j})$, where $\Delta'_{j}$ is generated by $A \setminus \{a_{j}\}$. Then, it is well-known that for $j < k$ the cubes $\pi_{n + 1, 2d + 4}(\Delta_{j})$ and $\pi_{n + 1, 2d + 4}(\Delta_{k})$ intersect in an upper facet of $\pi_{n + 1, 2d + 4}(\Delta_{k})$ and a lower facet of $\pi_{n + 1, 2d + 4}(\Delta_{j})$, while the cubes $\pi_{n + 1, 2d + 4}(\Delta'_{j})$ and $\pi_{n + 1, 2d + 4}(\Delta'_{k})$ intersect in an upper facet of $\pi_{n + 1, 2d + 4}(\Delta'_{j})$ and a lower facet of $\pi_{n + 1, 2d + 4}(\Delta'_{k})$. 
This is because the increasing flip corresponds to inverting the packet of $A$: the cubes $\Delta_{j}$ and $\Delta'_{j}$ correspond to the sets $A \setminus \{a_{j}\}$; these must be ordered lexicographically for $\Delta_{j}$ and reverse-lexicographically for $\Delta'_{j}$. Contracting the $(n + 1)$-pie of $\mathcal{Q}'_{i + 1}$ sends the cubes $\Delta_{j}$ for $j < 2d + 3$ to their facet generated by $A\setminus \{a_{j}, n + 1\}$, which is precisely the intersection $\Delta_{j} \cap \Delta_{2d + 3}$. By the above paragraph, this projects to an upper facet of $\pi_{n + 1, 2d + 4}(\Delta_{2d + 3})$. Hence the part of $\mathcal{M}_{i + 1}$ which lies within $\Gamma/(n + 1)$ consists of the upper facets of $\pi_{n + 1, 2d + 4}(\Delta_{2d + 3}/(n + 1))$. Here we use $\Gamma/(n + 1)$ to denote the image of $\Gamma$ under the $(n + 1)$-contraction, and so forth. Similarly, we have that the part of $\mathcal{M}_{i}$ which lies within $\Gamma/(n + 1)$ consists of the lower facets of $\pi_{n + 1, 2d + 4}(\Delta'_{2d + 3}/(n + 1))$. We then have that $\Gamma/(n + 1) = \Delta_{2d + 3}/(n + 1) = \Delta'_{2d + 3}/(n + 1)$, and so $\mathcal{M}_{i} \lessdot \mathcal{M}_{i + 1}$. This is because $\mathcal{M}_{i}$ and $\mathcal{M}_{i + 1}$ only differ within $\Gamma/(n + 1)$, since $\mathcal{Q}'_{i + 1}$ and $\mathcal{Q}'_{i}$ only differ within $\Gamma$. Moreover, $\pi_{n, 2d + 3}(\mathcal{M}_{i + 1})$ contains the upper facets of $\pi_{n, 2d + 3}(\Gamma/(n + 1))$, whereas $\pi_{n, 2d + 3}(\mathcal{M}_{i})$ contains the lower facets of $\pi_{n, 2d + 3}(\Gamma/(n + 1))$. This argument is illustrated in Figure~\ref{fig:arg_ill}; compare \cite[Figure 7]{dkk-survey}. This gives a chain of cubillages $\mathcal{Q}_{\mathcal{T}} = \mathcal{M}_{0} = \mathcal{Q}_{0} \lessdot \dots \lessdot \mathcal{Q}_{r} = \mathcal{M}_{s} = \mathcal{Q}_{\mathcal{T}'}$ by applying the result of the above paragraph to the chain $\mathcal{Q}'_{s} \lessdot \dots \lessdot \mathcal{Q}'_{0}$.
Here the cubillages $\mathcal{Q}_{0}, \dots, \mathcal{Q}_{r}$ are the cubillages $\mathcal{M}_{0}, \dots, \mathcal{M}_{s}$ with the duplicates removed, corresponding to the cases above where $\mathcal{M}_{i} = \mathcal{M}_{i + 1}$. \end{proof} \begin{figure} \caption{An illustration of the argument of Theorem~\ref{thm-odd-quot}.}\label{fig:arg_ill} \[ \scalebox{0.8}{ \begin{tikzpicture} \begin{scope}[xscale=0.7] \coordinate(0) at (0,0); \node at (0)[left = 1mm of 0]{$\emptyset$}; \coordinate(1) at (2,-2); \node at (1)[below left = 1mm of 1]{1}; \coordinate(12) at (4,-3); \node at (12)[below = 1mm of 12]{12}; \coordinate(123) at (6,-2); \node at (123)[below right = 1mm of 123]{123}; \coordinate(1234) at (8,0); \node at (1234)[right = 1mm of 1234]{1234}; \coordinate(234) at (6,2); \node at (234)[above right = 1mm of 234]{234}; \coordinate(34) at (4,3); \node at (34)[above = 1mm of 34]{34}; \coordinate(4) at (2,2); \node at (4)[above left = 1mm of 4]{4}; \draw (0) -- (1) -- (12) -- (123) -- (1234) -- (234) -- (34) -- (4) -- (0); \coordinate(3) at (2,1); \coordinate(13) at (4,-1); \coordinate(23) at (4,0); \draw[fill=red!30,draw=none] (0) -- (4) -- (34) -- (3) -- (0); \draw[fill=red!30,draw=none] (3) -- (34) -- (234) -- (23) -- (3); \draw[fill=red!30,draw=none] (23) -- (234) -- (1234) -- (123) -- (23); \draw (0) -- (3); \draw (3) -- (34); \draw (3) -- (23); \draw (3) -- (13); \draw (1) -- (13); \draw (13) -- (123); \draw (23) -- (123); \draw (23) -- (234); \node at (3) [below = 1mm of 3]{3}; \node at (13) [left = 1mm of 13]{13}; \node at (23) [below = 1mm of 23]{23}; \node at (-1.5,0) {\huge $\mathcal{Q}_{i+1}$}; \draw[->,ultra thick] (9.75, 0) -- (10.75, 0); \coordinate(0) at (12,0); \node at (0)[left = 1mm of 0]{$\emptyset$}; \coordinate(1) at (14,-2); \node at (1)[below left = 1mm of 1]{1}; \coordinate(12) at (16,-3); \node at (12)[below = 1mm of 12]{12}; \coordinate(123) at (18,-2); \node at (123)[below right = 1mm of 123]{123}; \coordinate(3) at (14,1); 
\coordinate(13) at (16,-1); \coordinate(23) at (16,0); \draw (0) -- (1) -- (12) -- (123) -- (23) -- (3) -- (0); \draw (3) -- (13); \draw (1) -- (13); \draw (13) -- (123); \node at (3) [above left = 1mm of 3]{3}; \node at (13) [left = 1mm of 13]{13}; \node at (23) [right = 1mm of 23]{23}; \draw[red,ultra thick] (0) -- (3) -- (23) -- (123); \node at (18,0) {\huge \color{red} $\mathcal{M}_{i+1}$}; \end{scope} \end{tikzpicture} } \] \[ \scalebox{0.8}{ \begin{tikzpicture} \begin{scope}[xscale=0.7] \coordinate(0) at (0,0); \node at (0)[left = 1mm of 0]{$\emptyset$}; \coordinate(1) at (2,-2); \node at (1)[below left = 1mm of 1]{1}; \coordinate(12) at (4,-3); \node at (12)[below = 1mm of 12]{12}; \coordinate(123) at (6,-2); \node at (123)[below right = 1mm of 123]{123}; \coordinate(1234) at (8,0); \node at (1234)[right = 1mm of 1234]{1234}; \coordinate(234) at (6,2); \node at (234)[above right = 1mm of 234]{234}; \coordinate(34) at (4,3); \node at (34)[above = 1mm of 34]{34}; \coordinate(4) at (2,2); \node at (4)[above left = 1mm of 4]{4}; \draw (0) -- (1) -- (12) -- (123) -- (1234) -- (234) -- (34) -- (4) -- (0); \coordinate(3) at (2,1); \coordinate(13) at (4,-1); \coordinate(134) at (6,1); \draw[fill=red!30,draw=none] (0) -- (4) -- (34) -- (3) -- (0); \draw[fill=red!30,draw=none] (3) -- (34) -- (134) -- (13) -- (3); \draw[fill=red!30,draw=none] (13) -- (134) -- (1234) -- (123) -- (13); \draw (0) -- (3); \draw (3) -- (34); \draw (13) -- (134); \draw (34) -- (134); \draw (3) -- (13); \draw (1) -- (13); \draw (13) -- (123); \draw (134) -- (1234); \node at (3) [below = 1mm of 3]{3}; \node at (13) [left = 1mm of 13]{13}; \node at (134) [above = 1mm of 134]{134}; \node at (-1.5,0) {\huge $\mathcal{Q}_{i}$}; \draw[->,ultra thick] (9.75, 0) -- (10.75, 0); \coordinate(0) at (12,0); \node at (0)[left = 1mm of 0]{$\emptyset$}; \coordinate(1) at (14,-2); \node at (1)[below left = 1mm of 1]{1}; \coordinate(12) at (16,-3); \node at (12)[below = 1mm of 12]{12}; \coordinate(123) at 
(18,-2); \node at (123)[below right = 1mm of 123]{123}; \coordinate(3) at (14,1); \coordinate(13) at (16,-1); \coordinate(23) at (16,0); \draw (0) -- (1) -- (12) -- (123) -- (23) -- (3) -- (0); \draw (3) -- (13); \draw (1) -- (13); \draw (13) -- (123); \node at (3) [above left = 1mm of 3]{3}; \node at (13) [left = 1mm of 13]{13}; \node at (23) [right = 1mm of 23]{23}; \draw[red,ultra thick] (0) -- (3) -- (13) -- (123); \node at (18,0) {\huge \color{red} $\mathcal{M}_{i}$}; \end{scope} \end{tikzpicture} } \] \end{figure} By putting together Theorem~\ref{thm-g-even}, Theorem~\ref{thm-g-odd}, Theorem~\ref{thm-even-quot}, and Theorem~\ref{thm-odd-quot}, this finally establishes Theorem~\ref{thm:quot}, and hence also Corollary~\ref{cor:t=st}. \printbibliography \end{document}
https://arxiv.org/abs/2012.10371
The first higher Stasheff-Tamari orders are quotients of the higher Bruhat orders
We prove the conjecture that the higher Tamari orders of Dimakis and Müller-Hoissen coincide with the first higher Stasheff--Tamari orders. To this end, we show that the higher Tamari orders may be conceived as the image of an order-preserving map from the higher Bruhat orders to the first higher Stasheff--Tamari orders. This map is defined by taking the first cross-section of a cubillage of a cyclic zonotope. We provide a new proof that this map is surjective and show further that the map is full, which entails the aforementioned conjecture. We explain how order-preserving maps which are surjective and full correspond to quotients of posets. Our results connect the first higher Stasheff--Tamari orders with the literature on the role of the higher Tamari orders in integrable systems.
https://arxiv.org/abs/1709.07656
Antisymmetry of solutions for some weighted elliptic problems
This article concerns the antisymmetry, uniqueness, and monotonicity properties of solutions to some elliptic functionals involving weights and a double well potential. In the one-dimensional case, we introduce the continuous odd rearrangement of an increasing function and we show that it decreases the energy functional when the weights satisfy a certain convexity-type hypothesis. This leads to the antisymmetry or oddness of increasing solutions (and not only of minimizers). We also prove a uniqueness result (which leads to antisymmetry) where a convexity-type condition by Berestycki and Nirenberg on the weights is improved to a monotonicity condition. In addition, we provide a large class of problems where antisymmetry does not hold. Finally, some rather partial extensions to higher dimensions are also given.
\section{Introduction}\label{section1} Symmetry properties of solutions to nonlinear elliptic problems have been extensively studied in the literature. For Dirichlet problems with zero boundary conditions, the Steiner and Schwarz symmetrizations (see~\cite{Kawohl,PolyaSzego}) and the moving planes method~\cite{Alexandrov,GNN} have been successfully applied to derive symmetry, with respect to a hyperplane, of minimizers or of positive solutions to many nonlinear problems. For sign-changing solutions, for instance still with zero Dirichlet boundary conditions, it is well known that the symmetry with respect to a hyperplane may fail. For this, simply consider the Dirichlet eigenfunctions of the Laplacian in an interval or a ball. Instead, for some of them, what holds is antisymmetry, as defined next. A natural question that we address here is whether solutions are {\it antisymmetric} or {\it odd} with respect to a hyperplane (and also with respect to certain cones, as we will see later) whenever the problem is invariant under the odd reflection of the solution. Aside from being interesting in its own right (and for a possible answer to an open problem presented below), symmetry and antisymmetry of solutions of PDEs are important in physics and other fields of mathematics. For instance, in quantum mechanics, a system of identical bosons (respectively, fermions) is described by a multiparticle wavefunction which is symmetric (respectively, antisymmetric) under the interchange of pairs of particles. For nonzero Dirichlet boundary data, Berestycki and Nirenberg~\cite{BeresNiren} used the maximum principle, together with different versions of their sliding method, to give some sufficient conditions that guarantee that solutions are unique and antisymmetric with respect to a hyperplane passing through the origin.
In \cite{WeiWinter:2005}, Wei and Winter showed that two-peak nodal solutions to $\varepsilon \Delta u - u + |u|^{p-2} u = 0$ in a ball, with zero Dirichlet boundary conditions, are antisymmetric (with respect to a hyperplane through the origin) when $\varepsilon$ is small. Extremals of the ratio $\|\nabla u\|_2/ \|u \|_p$ for functions of average zero in a ball (Neumann boundary conditions) have been considered by Gir\~ao and Weth in~\cite{GiraoWeth}, who showed that they are antisymmetric (with respect to a well-chosen hyperplane through the origin) for $p$ close to $2$, while this is no longer the case for large $p$. In~\cite{GT}, Grumiau and Troestler proved that, for $p$ close to $2$, the least energy nodal solution of $\Delta u + |u|^{p-2} u = 0$ with zero Dirichlet boundary condition in a ball or an annulus is unique (up to rotation and multiplicative constant $\pm 1$) and antisymmetric with respect to a hyperplane passing through the origin. Our main motivation to study the antisymmetry of solutions is driven by a conjecture posed by De Giorgi~\cite{DeGiorgi} in 1978. The following is one of its natural formulations: {\it Let $u \in C^2(\mathbb R^N)$ be a bounded function which is, on each bounded domain $\Omega \subset \mathbb R^N$, a minimizer $($under perturbations with compact support in $\Omega$$)$ of the Allen-Cahn functional \begin{equation}\label{allencahn} E (u, {\Omega}) = \int_{\Omega} \left\{ \frac{1}{2}|\nabla u|^2 + G(u) \right\} dx , \end{equation} where $G (u) = (1 - u^2)^2/4$. Is it true that the level sets of $u$ are hyperplanes, at least if $N\leq 7$}? Throughout the paper, by minimizer we always mean ``absolute minimizer''. After the first results in dimensions 2 and 3 in \cite{GhoussoubGui,AmbrosioCabre,AAC}, a breakthrough came with the work by Savin~\cite{Savin}, who showed that the above conjecture is indeed true up to dimension $N\leq 7$.
Later, for $N = 9$, del Pino, Kowalczyk, and Wei~\cite{DelPinoKowalczykWei} constructed a solution~$u$ that is monotone in the direction $x_9$, has limit $\pm 1$ as $x_9 \to \pm \infty$, and has level sets which are not hyperplanes. A result from~\cite{AAC} guarantees that such a monotone solution is in fact a minimizer of the functional $E$, providing a counter-example to the above conjecture in dimensions $N \geq 9$. More recently, Liu, Wang, and Wei~\cite{LWW} have shown the existence of a minimizer when $N=8$ with level sets that are not hyperplanes. In dimension 8, however, an important question that we describe next remains open. The conjecture of De Giorgi was motivated by a classical result on minimal surfaces. While every minimizing minimal surface in all of $\mathbb{R}^N$ must be a hyperplane if $N\leq 7$, Bombieri, De Giorgi, and Giusti \cite{BdGG} established that \textit{the Simons cone} $$ \mathcal{C}:=\{(x^1,x^2)\in\mathbb{R}^4\times\mathbb{R}^4:|x^1|=|x^2|\} $$ is a minimizing minimal surface in $\mathbb{R}^8$ different from a hyperplane. Therefore, in dimension 8, the canonical counter-example to the conjecture of De Giorgi should be given by the so-called \textit{saddle-shaped solution} $u$ to the Euler-Lagrange equation of \eqref{allencahn}, \textit{i.e.}, $-\Delta u=u-u^3$. Namely, a solution $u=u(x^1,x^2)$, where $(x^1,x^2) \in \mathbb R^{m} \times \mathbb R^{m}$, in even dimension $N=2m$ which is radially symmetric in the first $m$ variables and also in the last $m$ variables (\textit{i.e.}, $u=u(|x^1|,|x^2|)$) and antisymmetric under the reflection $\sigma (x^1,x^2) = (x^2,x^1)$ (\textit{i.e.}, $u(|x^2|,|x^1|)=-u(|x^1|,|x^2|)$). In particular, its zero level set is the Simons cone $\mathcal{C}$ above and $u$ is odd with respect to $\mathcal{C}$.
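That $\mathcal{C}$ is at least a minimal surface (zero mean curvature away from the origin) can be checked symbolically; the much deeper fact that it is minimizing in $\mathbb{R}^8$ is the result of \cite{BdGG}. A sketch of ours using sympy, writing $\mathcal{C}$ as the zero set of $F=|x^1|^2-|x^2|^2$ and using that the mean curvature of a level set is, up to a positive factor, $\operatorname{div}(\nabla F/|\nabla F|)$:

```python
import sympy as sp

# R^8 = R^4 x R^4; the Simons cone C is the zero set of F = |x^1|^2 - |x^2|^2.
X = sp.symbols('x1:9', real=True)
F = sum(x**2 for x in X[:4]) - sum(x**2 for x in X[4:])

grad = [sp.diff(F, x) for x in X]
norm = sp.sqrt(sum(g**2 for g in grad))

# Mean curvature (up to a positive factor) of the level set {F = 0}:
# H = div(grad F / |grad F|).  A short computation gives H = -8 F / |grad F|^3,
# so H vanishes identically on the cone, away from the origin.
H = sum(sp.diff(g / norm, x) for g, x in zip(grad, X))
assert sp.simplify(H * norm**3 + 8 * F) == 0

# Sample point on the cone: x^1 = (1,0,0,0), x^2 = (1,0,0,0).
pt = dict(zip(X, (1, 0, 0, 0, 1, 0, 0, 0)))
assert sp.simplify(H.subs(pt)) == 0
print("the Simons cone has zero mean curvature on {F = 0}")
```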
While the existence of such an antisymmetric solution in dimension $N=2m$ is easy to establish (\cite{CT1,CT2}), its uniqueness is a more delicate issue and has been established more recently by the first author \cite{Cabre}. The remaining open problem is the following: \vspace{2mm} \textbf{Open question 1.} {\it Is the saddle-shaped solution a minimizer of $- \Delta u = u - u^3$ in dimensions $N=2m \geq 8$}? \vspace{2mm} The saddle-shaped solution in $\mathbb{R}^{2m}=\mathbb{R}^m\times\mathbb{R}^m$ is a function of the two radial variables $s=|x^1|$ and $t=|x^2|$, $u=u(s,t)$. In these variables the energy functional (up to a multiplicative constant) reads \begin{equation}\label{omegaR} E (u,\Omega_R) = \int_{\Omega_R} \Big\{ \frac{1}{2} |\nabla u|^2 + G(u) \Big\} s^{m-1} t^{m-1} ds\,dt. \end{equation} This functional is invariant under odd reflection in the diagonal $\{s=t\}$, which is the Simons cone $\mathcal{C}$. Here, for instance, we may take $\Omega_R:=\{s>0,\ t>0,\ s^2+t^2<R^2\}$ to be a quarter of a ball in the plane. The saddle-shaped solution is antisymmetric or odd with respect to $\{s=t\}$. The following open problem will be connected with Open question~1. \vspace{2mm} \textbf{Open question 2.} {\it Are minimizers of \eqref{omegaR} $($for all, or at least for some, Dirichlet boundary conditions on $\{s>0,\ t>0,\ s^2+t^2=R^2\}$ which are antisymmetric with respect to $\{s=t\})$ also antisymmetric when $2m\geq 8$ and $R$ is large enough}? \vspace{2mm} A positive answer to Open question 2 leads to the corresponding positive answer to Open question 1. Indeed, if antisymmetry of minimizers holds for the problem in $\Omega_R$ then, by letting $R\rightarrow\infty$, one obtains an antisymmetric solution in all of $\mathbb{R}^{2m}$ which is a minimizer (being a limit of minimizers in $\mathbb{R}^{2m}$). In particular, this solution being a minimizer, one easily shows that it is not identically zero (see~\cite{CT1,CT2}).
Thus, by the uniqueness result of \cite{Cabre}, it is the saddle-shaped solution. Therefore, Open question 2 has a negative answer in dimensions 2, 4, and 6, since in these dimensions the saddle-shaped solution is known not to be a minimizer (for instance by the results of Cabr\'e and Terra~\cite{CT1,CT2} on instability of the saddle solution, or by Savin's \cite{Savin} result). Note the presence of the weight $s^{m-1} t^{m-1}$ in the energy functional above. Alternatively, considering coordinates $y=(s+t)/\sqrt{2}$ and $z=(s-t)/\sqrt{2}$, we would be concerned with oddness in the variable $z$ in the presence of the weighted measure $$ 2^{m-1}s^{m-1}t^{m-1}\,ds\,dt=(y^2-z^2)^{m-1}\,dy\,dz, $$ which is even in $z$, where $z\in [-y,y]$. Note that this weight (as a function of $z$) is not increasing in $[0,y]$ ---while being increasing is the condition that leads to oddness (at least in dimension 1) in one of our results, Theorem~\ref{thm:Brahms2}. Other questions regarding the weighted measure $s^{m-1}t^{m-1}\,ds\,dt$ (or, more generally, $x_1^{A_1}\cdots x_N^{A_N}\,dx_1\cdots dx_N$ coming from multiple radial symmetries) have been recently studied in \cite{CR-01,CR-02}. They concern sharp weighted isoperimetric and Sobolev inequalities and originated from the study of extremal solutions in explosion (or Gelfand type) problems. With this motivation in mind, we are led to understand the antisymmetry of critical points of functionals involving weights. Our paper presents alternative ways of proving antisymmetry of minimizers and provides several new uniqueness results for variational problems with weights. Our main results apply to one-dimensional problems. Some partial answers in the higher-dimensional case ---which, however, do not allow us to solve the motivating open questions above--- are presented later in this section.
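The change of measure quoted above reduces to the identity $y^2-z^2=2st$, since $(s+t)^2-(s-t)^2=4st$. A quick symbolic check (a sketch of ours, taking $m=4$, \textit{i.e.}, the case $N=8$):

```python
import sympy as sp

s, t = sp.symbols('s t', positive=True)
y = (s + t) / sp.sqrt(2)
z = (s - t) / sp.sqrt(2)

# y^2 - z^2 = 2st, hence (y^2 - z^2)^(m-1) = 2^(m-1) s^(m-1) t^(m-1):
assert sp.expand(y**2 - z**2 - 2*s*t) == 0

m = 4  # the case N = 2m = 8 of the Simons cone
assert sp.expand((y**2 - z**2)**(m - 1)
                 - 2**(m - 1) * s**(m - 1) * t**(m - 1)) == 0
print("weighted measure identity verified for m =", m)
```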
\subsection{One-dimensional case} In the one-dimensional case, given functions $a$ and $b$ defined on an interval $[-L, L]$ and a function $G$ satisfying \begin{equation} \label{eq:A} \left. \begin{array}{c} a,b:[-L,L] \to \mathbb{R} \textrm{ are positive and even }C^1([-L,L]) \textrm{ functions}, \vspace{3mm}\\ G: \mathbb R \to \mathbb R \textrm{ is a nonnegative and even }C^1(\mathbb{R}) \textrm{ function}, \end{array} \right\} \end{equation} we consider the energy functional \begin{equation}\label{functional} \mathcal{E} (u, (-L,L)) := \int_{-L}^L \left\{\frac{1}{2}(u')^2a(x)+G(u)b(x)\right\} dx \end{equation} in $$ H^1_m ((-L,L)) := \left\{ u \in H^1 ((-L,L)) \, \colon \, u(-L) = -m,\ u(L)=m \right\}, $$ where $m\geq 0$ is given. Critical points of this functional are solutions of the associated Euler-Lagrange equation \begin{equation}\label{E-L} \left\{ \begin{array}{l} -\left(a u'\right)' = b f(u) \qquad \hbox{ in } (-L,L), \\ u(L)=-u(-L)= m, \end{array} \right. \end{equation} where $$ f=-G' $$ is an odd nonlinearity. Note that $G$ is defined up to an additive constant and, therefore, the hypothesis ``$G$ is nonnegative'' in \eqref{eq:A} can be replaced by ``$G$ is bounded from below''. We define the \textit{flipped} $u_\star$ of a continuous function $u$ in $[-L,L]$ as \begin{equation}\label{def:flipped} u_\star(x):=-u(-x) \quad \textrm{for }x\in[-L,L]. \end{equation} Note that if $u$ is a solution of \eqref{E-L}, its flipped is also a solution under assumption \eqref{eq:A} (see~Figure~\ref{fig1}). In addition, $u$ is antisymmetric or odd if and only if $u=u_\star$. Note also that $\mathcal{E}(u,(-L,L))=\mathcal{E}(u_\star,(-L,L))$ under assumption \eqref{eq:A}. After an appropriate change of variables $y=\gamma(x)$ (see \eqref{eq:ChangeVariable} in Section~\ref{subsec2:2}), one can always reduce the problem either to the case $a\equiv b$ or to the case $b\equiv 1$ ---something that sometimes will be useful.
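That \eqref{E-L} is indeed the Euler-Lagrange equation of \eqref{functional} can be checked symbolically; a sketch of ours for the model potential $G(u)=(1-u^2)^2/4$, for which $f(u)=-G'(u)=u-u^3$, and generic positive weights:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.symbols('x')
u = sp.Function('u')
a = sp.Function('a', positive=True)   # generic positive weights
b = sp.Function('b', positive=True)

G = (1 - u(x)**2)**2 / 4                            # double well potential
lagr = a(x) * sp.diff(u(x), x)**2 / 2 + G * b(x)    # integrand of E(u, (-L, L))

eq, = euler_equations(lagr, u(x), x)
residual = eq.lhs - eq.rhs

# Expected equation: -(a u')' = b f(u) with f(u) = u - u^3, i.e.
# (a u')' + b (u - u^3) = 0, up to an overall sign convention.
expected = sp.diff(a(x) * sp.diff(u(x), x), x) + b(x) * (u(x) - u(x)**3)
assert (sp.simplify(residual - expected) == 0
        or sp.simplify(residual + expected) == 0)
print("Euler-Lagrange equation of the functional matches -(a u')' = b f(u)")
```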
When $a\equiv b$, the equation in \eqref{E-L} reads \begin{equation}\label{nondiv} -u''-({\rm log}\ a)'u'=f(u)\quad \textrm{in }(-L,L). \end{equation} For this last equation, Berestycki and Nirenberg~\cite{BeresNiren} used several versions of their sliding method to prove uniqueness and antisymmetry results. In the one-dimensional case, one of their results states the following. It requires the first-order coefficient $({\rm log}\ a)'$ in \eqref{nondiv} to be nondecreasing. \begin{Theorem} [Berestycki-Nirenberg~\cite{BeresNiren}, Theorem~4.1 and Corollary~4.3] \label{thm:BerestyckiNirenberg} Let us assume that \eqref{eq:A} holds, $a \equiv b$, and that $f$ is locally Lipschitz. Let $L$ and $m$ be positive numbers. If $$ {\rm log}\,a \textrm{ is convex} $$ then there exists at most one solution to~\eqref{E-L} satisfying \begin{equation}\label{apriori} -m \leq u \leq m \quad\textrm{in }[-L,L], \end{equation} and this solution, if it exists, is odd and increasing. \end{Theorem} In higher dimensions, an analogous result was also proved in the same paper \cite{BeresNiren}. In fact, when the domain $\Omega$ is a cylinder $(-L,L)\times\omega$, with $\omega\subset\mathbb{R}^{N-1}$, they proved monotonicity in the $x_1$ variable, as well as uniqueness and antisymmetry of solutions of $-\Delta u=h(x,u,\nabla u)$ under suitable symmetry and monotonicity assumptions on the boundary data and on $h(x,q,p)$. The main ingredient in their proof of Theorem \ref{thm:BerestyckiNirenberg} is a parabolic version of the sliding method. They compare translations of the solution with the solution itself and then apply the maximum principle to obtain monotonicity and uniqueness of solutions (see the proof of our Proposition~\ref{prop:BerNir} for this kind of argument).
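Theorem~\ref{thm:BerestyckiNirenberg} can be illustrated numerically. The following sketch (ours; the concrete choices, the log-convex weight $a(x)=\cosh x$, so that $({\rm log}\,a)'=\tanh$, the model nonlinearity $f(u)=u-u^3$, and $L=2$, $m=1$, are not taken from the text) solves \eqref{nondiv} with scipy and checks that the computed solution is odd and increasing:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Equation \eqref{nondiv}: -u'' - (log a)' u' = f(u) with a(x) = cosh(x)
# (log-convex), f(u) = u - u^3, and boundary data u(+-L) = +-m.
L, m = 2.0, 1.0

def ode(x, y):
    u, up = y
    return np.vstack([up, -np.tanh(x) * up - (u - u**3)])

def bc(ya, yb):
    return np.array([ya[0] + m, yb[0] - m])

x = np.linspace(-L, L, 101)
guess = np.vstack([np.tanh(x), 1.0 / np.cosh(x)**2])  # odd initial guess
sol = solve_bvp(ode, bc, x, guess, tol=1e-6)

xs = np.linspace(-L, L, 401)
u = sol.sol(xs)[0]
odd_defect = np.max(np.abs(u + u[::-1]))   # should vanish if u(x) = -u(-x)
print("converged:", sol.status == 0, "odd defect:", odd_defect,
      "increasing:", bool(np.all(np.diff(u) > 0)))
```

The odd defect comes out at roughly the solver tolerance, in line with the uniqueness and oddness asserted by the theorem.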
In~\cite{BeresNiren}, it is also observed that, by the maximum principle, the \textit{a priori} bound \eqref{apriori} in the above theorem is automatically satisfied by every solution $u$ of \eqref{E-L} (with $a\equiv b$) if, for instance, one assumes $$ G'\geq0 \quad \textrm{in } (m,\infty)=(u(L),\infty). $$ This is the same as assuming $f\leq0$ in $(m,\infty)$ ---recall that here we assume $G$ to be even and hence the nonnegativeness of $f$ also in $(-\infty,-m)$ follows. Our results will complement, in several ways and in the one-dimensional case, the above statement of Berestycki and Nirenberg. Theorem~\ref{thm:BerestyckiNirenberg} assumes log-convexity of the weight $a$ but only \eqref{eq:A} for the potential $G$ (\textit{i.e.}, that $G$ is even). If, instead, one assumes only \eqref{eq:A} on the weight $a$ (\textit{i.e.}, that $a$ is even) but also that $G$ is convex, then we also have uniqueness of the solution. This is clear since the energy functional $\mathcal{E}$ will be convex in this case. Our first result improves Theorem~\ref{thm:BerestyckiNirenberg} by replacing the assumption on log-convexity of $a$ by only the monotonicity of $a$, at the price of assuming also monotonicity of $G$ in $(0,m)$. In addition, we do not require the \textit{a priori} assumption \eqref{apriori} on the solution. More precisely, we establish the following. \begin{Theorem} \label{thm:Brahms2} Let $L$ and $m$ be positive numbers. Assume that \eqref{eq:A} holds, $a \equiv b$, and that $f=-G'$ is locally Lipschitz. Suppose further that \begin{equation}\label{hyp:thm:Brahms2} a' \geq 0 \hbox{ in } (0,L), \quad G \geq G(m) \hbox{ in } (0, \infty), \quad \textrm{and}\quad G' \leq 0 \hbox{ in } (0, m). \end{equation} Then, problem~\eqref{E-L} admits a unique solution, which is therefore odd. Furthermore, this solution is increasing. \end{Theorem} Note that $G'\leq 0$ in $(0,m)$ is simply the hypothesis $f\geq 0$ in $(0,m)$ on the nonlinearity.
It holds for instance in our model case $f(u)=u-u^3$, $G(u)=(1-u^2)^2/4$, and $m=1$. The other hypothesis, $G\geq G(m)$ in $(0,\infty)$, is also satisfied in this case. We will prove Theorem~\ref{thm:Brahms2} for general weights $a$ and $b$ (not necessarily equal). In this general case the first assumption in \eqref{hyp:thm:Brahms2} becomes \begin{equation}\label{ab:inc} (ab)'\geq 0 \quad\textrm{in }(0,L) \end{equation} (in Section \ref{subsec2:2} we explain how one can reduce the problem either to the case $a\equiv b$ or to $b\equiv 1$). Our proof uses the Hamiltonian function \begin{equation}\label{L:intro} \mathcal{H}(x,q,p):=\frac{1}{2}(a(x)p)^2-a(x)b(x)G(q), \quad (x,q,p)\in(-L,L)\times \mathbb{R}^2. \end{equation} We use it first to prove that any solution $u$ of \eqref{E-L} is increasing if both~\eqref{ab:inc} and the second assumption in \eqref{hyp:thm:Brahms2} hold. Then, we are able to prove uniqueness, and hence antisymmetry, of solutions $u$ using the identity $\frac{d}{dx}\mathcal{H}(x,u,u')=-(ab)'G(u)$. Our second main contribution consists in deriving antisymmetry of solutions by using a new continuous odd rearrangement. Here we will need the log convexity assumption, as in the Berestycki-Nirenberg result. Making a change of variables (see Section \ref{subsec2:2}) we can assume $b\equiv 1$. In this case, given an \textit{increasing} function $v \in H^1_m ((-L,L))$, let us call $\rho=\rho(\lambda)$ its inverse function, \textit{i.e.}, $v(\rho(\lambda))=\lambda$ for all $\lambda \in[-m,m]$ (see Figure \ref{fig1}). Recall that functions in $H^1((-L,L))$ are continuous, and thus, the hypothesis of being increasing is justified. 
We define (see Definition~\ref{def:rear} below) the \textit{continuous odd rearrangement} $\{v^t\}$ of $v$, with $0\leq t\leq 1$, as the inverse function $v^t(x)=\lambda$ of $$ \begin{array}{lcl} \rho^t(\lambda)&:=&t\rho(\lambda)+(1-t)(-\rho(-\lambda)) \\ &=&t\rho(\lambda)+(1-t)\rho_\star(\lambda)\qquad\textrm{ for all } \lambda\in [-m,m]. \end{array} $$ Note that $\rho_\star(\lambda)=-\rho(-\lambda)$, the flipped of $\rho$, is the inverse function of the flipped $v_\star$ of $v$. For $t=1$ and $t=0$, $v^t$ coincides respectively with $v$ and its flipped $v_\star$: $v^1=v$ and $v^0=v_\star$. Moreover, for $t=1/2$, $v^{1/2}$ is always an odd function. We call it the \textit{odd rearrangement} of $v$. \begin{figure}[ht] \begin{center} \psset{xunit=3cm,yunit=3cm} \begin{pspicture}(0,1.2) \psaxes[linestyle=dashed,axesstyle=frame,ticks=none]{->}(0,0)(-1,-1)(1,1) \pscurve[linecolor=blue,linewidth=0.5pt](-1,-1)(-0.5,0)(0,0.5)(1,1) \pscurve[linecolor=red,linewidth=0.5pt](-1,-1)(0,-0.5)(0.5,0)(1,1) \psline[linewidth=0.5pt]{->}(-1.1,0)(1.1,0) \psline[linewidth=0.5pt]{->}(0,-1.1)(0,1.1) \rput(0.4,0.85){$v$} \rput(-0.4,-0.85){$v_\star$} \rput(1.15,0){$x$} \rput(0,1.15){$\lambda$} \psline[linecolor=blue,linestyle=dashed,linewidth=0.5pt]{-*}(-0.2,0)(-0.2,0.34) \psline[linecolor=blue,linestyle=dashed,linewidth=0.5pt](-0.2,0.34)(0,0.34) \rput(-0.28,-0.10){$x=\rho(\lambda)$} \rput(0.34,0.34){$v(x)=\lambda$} \psline[linecolor=red,linestyle=dashed,linewidth=0.5pt]{-*}(0.2,0)(0.2,-0.34) \psline[linecolor=red,linestyle=dashed,linewidth=0.5pt](0,-0.34)(0.2,-0.34) \rput(-0.34,-0.34){$v_\star(\bar x)=\bar \lambda$} \rput(0.28,0.1){$\bar x=\rho_\star(\bar \lambda)$} \end{pspicture} \end{center} \vspace{2.7cm} \caption{A function $v$ and its flipped $v_\star$.}\label{fig1} \end{figure} One property of the continuous odd rearrangement is that $v$ and $v^t$ are equidistributed with respect to the weight $b\equiv1$, \textit{i.e.}, $$ \left| \{ - \lambda < v^t < \lambda \} \right| = \left| \{ - 
\lambda < v < \lambda \} \right| = \rho(\lambda)-\rho(-\lambda) $$ for all $t\in[0,1]$ and $\lambda\in[0,m]$. In particular, the integral $\int_{-L}^L G(v)\ dx$ is preserved under this rearrangement for every \textit{even} continuous function $G$. For a general positive weight $b$ we define the continuous odd rearrangement $\{v^t\}$ of $v$ with respect to $b$, with $0\leq t\leq 1$, as the inverse function $v^t(x)=\lambda$ of $$ \rho^t(\lambda):=B^{-1}\Big(t B(\rho(\lambda))+(1-t)B(\rho_\star(\lambda))\Big) \quad\textrm{ for all }\lambda\in [-m,m], $$ where $B(x):=\int_0^x b(y)\, dy$. In this case, $v$ and $v^t$ are also equidistributed with respect to the weight $b$, and therefore, the integral $\int_{-L}^L G(v) b(x)\, dx$ is preserved under this rearrangement when $G$ is continuous and even. Our main result states that continuous odd rearrangement decreases the kinetic energy under the hypothesis that $a\equiv b$ is log-convex. We assume that the given function $v$ is increasing, as in the previous definition. \begin{Theorem} \label{thm:OddRearrangement} For $L>0$, $m>0$, let $a$, $b \in C^0([-L,L])$ and $G\in C^0(\mathbb{R})$ be even functions and $v \in C^1_m ([-L,L]):=\left\{ w \in C^1 ([-L,L]) \, \colon \, w(L) = - w(-L)=m \right\}$. Assume that $v$ is increasing. Then, the continuous odd rearrangement $v^t$ of $v$ with respect to $b$ satisfies: \begin{enumerate} \item[$($a$)$] If $b>0$ then \begin{equation}\label{equidistributed} \int_{-L}^L G(v^t) b(x)\, dx = \int_{-L}^L G(v)b(x) \, dx\quad \textrm{for all }0\leq t\leq 1. \end{equation} \item[$($b$)$] If $a\equiv b>0$ and ${\rm log}\, a$ is convex, then the following assertions hold: \item[$($b$.1)$] For all $0\leq t\leq 1$, \begin{equation}\label{eq:PolyaSzego} \int_{-L}^L\left( \frac{ d v^t }{dx} \right)^2 \! \! a(x) \, dx \leq \int_{-L}^L \left( \frac{ d v }{dx} \right)^2 \! \! a(x) \, dx. 
\end{equation}
\item[$($b$.2)$] Equality in \eqref{eq:PolyaSzego} holds for some $t\in(0,1)$ if and only if $v^t=v$ for all $t\in[0,1]$. In that case, $v$ must be odd.
\item[$($b$.3)$] The function $t\longmapsto\mathcal{E}(v^t,(-L,L))$ is convex in $[0,1]$.
\end{enumerate}
\end{Theorem}
Part $(a)$ will follow from the definition of continuous odd rearrangement. To prove parts $(b.1)$ and $(b.2)$, we use the coarea formula to obtain
$$
\int_{-L}^L \left( \frac{d v^t}{dx } \right)^2 \! a(x) \, dx = \int_0^m \left\{\frac{a(\rho^t(\lambda))}{(\rho^t)'(\lambda)} + \frac{a(\rho^t(-\lambda))}{(\rho^t)'(-\lambda)}\right\}\ d\lambda,
$$
and then we compare the integrand in the second integral for $t\in(0,1)$ and for $t=1$ using the log-convexity of $a$. Finally, part $(b.3)$ will follow from $(a)$ and from differentiating twice the function $\mathcal{E}(v^t,(-L,L))$ and using that $a$ is log-convex, after regularizing $a$. Theorem~\ref{thm:OddRearrangement} allows us to prove the following extension (in the one-dimensional case and once we know that the solution is increasing) of Berestycki and Nirenberg's result on antisymmetry (Theorem~\ref{thm:BerestyckiNirenberg}). Our proof uses a completely different technique (rearrangement) from theirs (the sliding method). Under the same hypothesis on the weight $a$, our method leads to antisymmetry for increasing solutions not only for locally Lipschitz nonlinearities $f=-G'$ but also for discontinuous ones, since we only require $G$ to be locally Lipschitz.
\begin{Theorem}\label{thm:Brahms1}
Assume that $a\equiv b \in C^0([-L,L])$ and $G\in C^{0,1}_{\rm loc}(\mathbb{R})$ are even functions and that $a>0$ in $[-L,L]$. Note that $f=-G'$ could be discontinuous. Let $m >0$ and let $u \in H^1_m ((-L,L))$ be an increasing critical point of the functional $\mathcal{E} (\cdot, (-L,L))$. If $a$ is {\rm log}-convex, then $u$ is odd.
\end{Theorem}
Note that we assume that the critical point $u$ is increasing.
For a {\rm log}-convex weight $a\equiv b\in C^1([-L,L])$ and $G\in C^{1,1}_{\rm loc}(\mathbb{R})$, this automatically holds if either $-m=u(-L)\leq u\leq u(L)=m$ in $(-L,L)$ or if $G(-m)=G(m)\leq G$ in $\mathbb{R}$. That the first assumption suffices is a consequence of Theorem~\ref{thm:BerestyckiNirenberg}, while the sufficiency of the second one ---without requiring any \textit{a priori} estimate on the solution--- follows from Theorem~\ref{thm:Increasing}~(i) below. Note that the Allen-Cahn potential $G (u) = (1 - u^2)^2/4$ with $m=1$ satisfies this last assumption.
\begin{Remark}\label{rem:OddRearrangement}
We will prove the statements in Theorems~\ref{thm:OddRearrangement} and \ref{thm:Brahms1}, as well as the one stated in Theorem~\ref{thm:BerestyckiNirenberg}, for general weights $a$ and $b$ (not necessarily equal) ---see the beginning of the proof of Theorem~\ref{thm:OddRearrangement} in Section~\ref{subsec2:2}. In particular, they hold assuming that
\begin{equation}\label{eq:BerNiren}
\frac{\big( \sqrt{a b } \, \big)'}{b}\quad \textrm{is nondecreasing in }(-L,L)
\end{equation}
instead of the log-convexity of $a\equiv b$ (assuming $a,b\in C^1([-L,L])$). For this, see \eqref{newexp} in the beginning of Section~\ref{subsec2:2}. Note that assumption \eqref{eq:BerNiren} for $b\equiv1$ is equivalent to requiring that $\sqrt{a}$ is a convex function.
\end{Remark}
Property~\eqref{eq:PolyaSzego} was first proved for the Steiner or Schwarz symmetrization of a function with zero Dirichlet boundary data and $a \equiv 1$ by P\'olya-Szeg\"o \cite{PolyaSzego} (see also \cite{Kawohl}). Later, their well-known result was studied for non-constant weights $a$ by several authors; see \cite{BBMP1999, BBMP2008, Brock1999, Brock2000, EspositoTrombetta} among others.
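Property \eqref{eq:PolyaSzego} can be illustrated numerically. The following sketch (our own illustration, not part of the proofs) discretizes the $\lambda$-side expression of the weighted Dirichlet energy, $\int_{-m}^{m} a(\rho^t(\lambda))/(\rho^t)'(\lambda)\,d\lambda$, for $b\equiv1$ and the weight $a(x)=\cosh x$, whose square root is convex (the case covered by Remark~\ref{rem:OddRearrangement}); the inverse $\rho$ of a non-odd increasing $v$ is chosen arbitrarily.

```python
import numpy as np

# Illustrative weight: b = 1 and a(x) = cosh(x), with sqrt(a) convex.
a = np.cosh

m = 1.0
lam = np.linspace(-m, m, 20001)

# rho = inverse of a non-odd increasing v with v(+-1) = +-1 (rho(0) = 0.3 != 0).
rho = lam + 0.3 * (1.0 - lam**2)        # rho' = 1 - 0.6*lam > 0 on [-1, 1]
rho_star = -rho[::-1]                    # flipped: rho_*(lam) = -rho(-lam)

def trapz(f, x):
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def dirichlet(t):
    """Discretized int_{-m}^{m} a(rho^t(lam)) / (rho^t)'(lam) dlam, the
    lambda-side form of the weighted Dirichlet energy of v^t."""
    rt = t * rho + (1.0 - t) * rho_star
    return trapz(a(rt) / np.gradient(rt, lam), lam)

E = {t: dirichlet(t) for t in (0.0, 0.25, 0.5, 0.75, 1.0)}
```

Here $E[1]$ is the energy of $v$ itself; one observes $E[0]=E[1]$ (since $a$ is even), $E[t]\le E[1]$ for all $t$, and a convex profile in $t$ with minimum at the odd rearrangement $t=1/2$, consistent with parts $(b.1)$--$(b.3)$.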
In the higher dimensional case, Esposito and Trombetti~\cite{EspositoTrombetta} proved that the functional
\begin{equation}\label{Trombetti:fct}
\int_{\mathbb{R}^N} \left\{\frac{1}{2}\tilde{a}(x')|\nabla_{x'} u|^2 +\frac{1}{2}a(x_N)u_{x_N}^2+\tilde{b}(x')G(u)\right\}\,dx,
\end{equation}
where $x=(x',x_N)\in\mathbb{R}^{N-1}\times\mathbb{R}$, is decreased under Steiner symmetrization of functions $u$ with compact support when $\sqrt a$ is strictly convex. Moreover, they proved that minimizers are Steiner symmetric under this assumption. Note that in the 1-dimensional case ($N=1$) this assumption coincides with the one in Remark~\ref{rem:OddRearrangement}. Hence, our Theorem~\ref{thm:OddRearrangement} shows that the same properties hold for our continuous odd rearrangement under the same assumption as theirs. Theorem~\ref{thm:OddRearrangement} may be easily extended to the $N$-dimensional case (though we do not write the details in this paper). The result asserts that the functional \eqref{Trombetti:fct} is decreased under the continuous odd rearrangement with respect to the $x_N$ variable whenever $\sqrt{a(x_N)}$ is convex in $x_N$, $\tilde{b}\equiv 1$, and $\tilde{a}$ is nonnegative. For another rearrangement, the monotone decreasing one, Landes~\cite{Landes} shows that~\eqref{eq:PolyaSzego} holds whenever $a$ is nonnegative and nondecreasing (this result does not require any convexity assumption on $a$). When $a\equiv b$, the log-convexity assumption in Theorem~\ref{thm:OddRearrangement} also appears in a different (but related) context. In \cite{RCBM}, Rosales, Ca\~nete, Bayle, and Morgan study the subsets $E$ of $\mathbb{R}$ (with given weighted volume $\int_Ea $) which minimize the weighted perimeter $\int_{\partial E}a$. In Corollary~4.12 of \cite{RCBM} they show that if $a$ is an even and strictly log-convex weight, then intervals centered at $0$ are the unique minimizers.
In Section~\ref{section1:2} we will mention another result of \cite{RCBM} in higher dimensions which is related to one of our results in dimensions $N\geq 2$. The paper \cite{RCBM} motivated the so-called ``log-convex density conjecture'', first stated by Kenneth Brakke, as follows. In $\mathbb{R}^N$, with a smooth, radial, log-convex density, balls around the origin provide isoperimetric regions of any given volume. The conjecture was recently proved by Gregory R. Chambers~\cite{Chambers}. Odd symmetry of solutions may not hold without the previous monotonicity or convexity-type assumptions on the weights. In fact, for some weights and for the potential $G(u)=(1-u^2)^2/4$, we will prove the existence of non-odd minimizers which are increasing from $-1$ to $1$. Indeed, by considering the space of antisymmetric functions
$$
H^{as}_m ((-L,L)) := \{ u \in H^1_m ((-L,L)) \, \colon \, u(x) = - u (-x) \} \,,
$$
we will provide sufficient conditions on the weights $a$ and $b$ for which
\begin{equation}\label{minleqmin}
\min_{u \in H^{as}_m ((-L,L))} \mathcal{E} (u, (-L,L)) > \min_{u \in H^{1}_m ((-L,L))} \mathcal{E} (u, (-L,L))
\end{equation}
when $L$ is large enough. Note that if this holds, then minimizers in $H^{1}_m ((-L,L))$ are not antisymmetric. The following conditions on the weights $a$ and $b$ guarantee \eqref{minleqmin} and therefore non-oddness of minimizers.
\begin{Proposition}\label{cor:characteriation}
Assume that $a$, $b$, and $G$ are even $C^0(\mathbb{R})$ functions and that $a,b>0$. Let $m>0$ and suppose that $G(s) \geq G(m)$ for all $s \in \mathbb R$ and $G(s)>G(m)$ for all $s\in(-m,m)$. If there exists a sequence of bounded intervals $J_n \subset \mathbb R$ satisfying
\begin{equation}\label{eq:Characterization}
\int_{J_n}\frac{1}{a}\rightarrow +\infty \quad\textrm{and}\quad \int_{J_n} b \rightarrow 0\, ,
\end{equation}
then there exists $L_0>0$ such that $\mathcal{E} (\cdot,I)$ has no odd minimizers on any interval $I:=(-L,L)$ with $L>L_0$.
\end{Proposition}
Note that this result applies to the Allen-Cahn potential and boundary values~$\pm m=\pm1$. In order to prove Proposition~\ref{cor:characteriation}, we can assume without loss of generality that $G(m)=0$ by replacing $G$ by $G-G(m)$ if necessary. We will first see that the infimum of the functional $\mathcal{E} (\cdot,I)$, in the class of odd functions $H^{as}_m (I)$, is bounded from below by a positive constant which is independent of the interval. Next, in Proposition~\ref{infimo=0} we prove that condition~\eqref{eq:Characterization} is equivalent to the fact that
$$
\min_{u \in H^1_m (I) } \mathcal{E} (u, I) \to 0 \qquad \hbox{ as } L \to \infty.
$$
\begin{Remark}\label{notunique}
Under the assumptions of Proposition~\ref{cor:characteriation}, we deduce that on any interval $I:= (-L,L)$ with $L > L_0$ the functional $\mathcal{E} (\cdot, I)$ admits at least three critical points:
\begin{enumerate}
\item[(i)] Two minimizers: $u$ and its flipped $u_\star(x)=-u(-x)$ (which are different since $u$ is not antisymmetric).
\item[(ii)] A critical point $u_{as}$ which is antisymmetric. It is obtained by minimizing the functional $\mathcal{E} (\cdot, I)$ in the space $H^{as}_m (I)$.
\end{enumerate}
\end{Remark}
\begin{Example}\label{ex:nonodd}
Let us exhibit a simple class of weights for which minimizers in large enough intervals are not odd. Assume that $0<a,b\in C^0(\mathbb{R})$ and that $0\leq G\in C^1(\mathbb{R})$ are even functions, $a\in L^\infty (\mathbb{R})$, and $\lim_{x\to+\infty}b(x)=0$. Under these assumptions, take any sequence $x_n\to+\infty$ such that $b(x)\leq 1/n^2$ for all $x>x_n$. Then, we have
$$
\int_{x_n}^{x_n+n} \frac{1}{a} \, \geq \, \frac{n}{\Vert a\Vert_\infty}\to +\infty \quad \hbox{ and } \quad \int_{x_n}^{x_n+n} b \, \leq \, n\frac{1}{n^2}\to 0.
$$
Thus, condition~\eqref{eq:Characterization} is satisfied.
Therefore, by Proposition~\ref{cor:characteriation}, there exists $L_0>0$ such that $\mathcal{E}$ has no odd minimizers on $(-L,L)$ whenever $L>L_0$. \end{Example} A related, but different, question is the existence of non-odd minimizers in $H^1_m((-L,L))$ which are not increasing. For $G(u) = (1-u^2)^2/4$ this cannot happen if $u(\pm L)=\pm 1$ (in this case any minimizer is increasing), but it may occur if $u(\pm L)=\pm\varepsilon$ with $\varepsilon$ small and $L$ large. In Proposition~\ref{prop:NonOdd} below we prove the existence of such non-odd minimizers of $\mathcal{E}$ for a large family of weights, which includes the unweighted case $a\equiv b\equiv 1$. This result will be proved using a perturbation argument from the case $m=0$ (see Figure \ref{fig2}). \begin{figure}[ht] \begin{center} \psset{xunit=6cm,yunit=3cm} \begin{pspicture}(0,1.2) \psaxes[linestyle=dashed,axesstyle=frame,ticks=none]{->}(0,0)(-1,-1)(1,1) \psline[linewidth=0.5pt]{->}(-1.1,0)(1.1,0) \psline[linewidth=0.5pt]{->}(0,-1.1)(0,1.1) \psline[linecolor=blue,linestyle=dashed,linewidth=0.5pt](-1,-0.2)(0,-0.2) \psline[linecolor=blue,linestyle=dashed,linewidth=0.5pt](0,0.2)(1,0.2) \pscurve[linecolor=red,linewidth=0.5pt]{*-*}(-1,0)(0,0.8)(1,0) \pscurve[linecolor=blue,linewidth=0.5pt]{*-*}(-1,-0.2)(0,0.6)(1,0.2) \rput(-0.58,0.55){$u_0$} \rput(-0.4,0.25){$u_\varepsilon$} \rput(1.15,0){$x$} \rput(0,1.15){$\lambda$} \rput[linecolor=blue](0.01,-0.2){$-\varepsilon$} \rput[linecolor=blue](-0.05,0.2){$\varepsilon$} \rput(1.05,-0.1){$L_0$} \rput(-1.08,-0.1){$-L_0$} \end{pspicture} \end{center} \vspace{2.5cm} \caption{Minimizers for $m=0$ and $m=\varepsilon$: $u_0$ and $u_\varepsilon$} \label{fig2} \end{figure} Another contribution of our paper is to provide conditions on the weights $a$, $b$, and on the potential $G$ which guarantee the monotonicity of solutions for the one-dimensional problem~\eqref{E-L}, without relying on the {\it a priori} bound~\eqref{apriori} used in~\cite{BeresNiren}. 
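Returning for a moment to Example~\ref{ex:nonodd}: condition \eqref{eq:Characterization} can be checked in closed form for a concrete pair of weights (our own illustrative choice, not from the example itself): $a\equiv1$, which is bounded, $b(x)=1/(1+x^2)\to0$, and intervals $J_n=(n,2n)$.

```python
import math

# Hypothetical weights: a = 1 (bounded), b(x) = 1/(1+x^2) -> 0 at infinity,
# with intervals J_n = (n, 2n).
def int_inv_a(x0, x1):
    # integral of 1/a over (x0, x1), with a = 1
    return x1 - x0

def int_b(x0, x1):
    # integral of 1/(1+x^2) over (x0, x1), in closed form
    return math.atan(x1) - math.atan(x0)

vals = [(int_inv_a(n, 2 * n), int_b(n, 2 * n)) for n in (1, 10, 100, 1000)]
```

The first integral equals $n$ and diverges, while the second behaves like $1/(2n)$ and vanishes, so both limits in \eqref{eq:Characterization} hold for this choice.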
Recall that in Theorem~\ref{thm:Brahms2} we already gave conditions to guarantee uniqueness and monotonicity of solutions. The following result guarantees monotonicity under more general conditions on $a$, $b$, and $G$. Here $a$, $b$, and $G$ are not assumed to be even. In the even case, we would take $x_0=0$ in the following condition \eqref{eq:Muffin}.
\begin{Theorem} \label{thm:Increasing}
Let $a,b \in C^1 ([-L,L])$ be such that $a,b>0$, and let $G\in C^{1,1}_{\rm loc}(\mathbb{R})$. Assume that there exists $x_0 \in [-L,L]$ such that
\begin{equation} \label{eq:Muffin}
(ab)' \leq 0 \, \hbox{ in } (-L,x_0] \quad \hbox { and } \quad (ab)' \geq 0 \, \hbox{ in } [x_0,L).
\end{equation}
Then, any solution to \eqref{E-L} with $m>0$ is increasing if either
\begin{enumerate}
\item[{\rm (i)}] $G\geq G(-m)=G(m)$ in $\mathbb{R}$;
\end{enumerate}
or
\begin{enumerate}
\item[{\rm (ii)}] For some $M\in(0,m]$ the function $G$ satisfies
\begin{equation} \label{eq:Double:bis}
\begin{array}{l}
G\geq G(-M)=G(M)\textrm{ in }[-M,M], \quad G' \leq 0 \textrm{ in } (-\infty,- M), \\
\textrm{and}\quad G' \geq 0 \textrm{ in } (M,+\infty).
\end{array}
\end{equation}
\end{enumerate}
\end{Theorem}
As an example, note that the Allen-Cahn potential $G (u) = (1 - u^2)^2/4$ with $M=1$ satisfies assumption \eqref{eq:Double:bis}. In this particular case, Theorem~\ref{thm:Increasing}~(ii) establishes that any solution is increasing if $m\geq 1$. Instead, as we said before, in the case where $m=\varepsilon<1$ solutions which are not increasing do exist (see Figure~\ref{fig2} and Proposition~\ref{prop:NonOdd}). Note that, when $a$ and $b$ are even, the monotonicity condition \eqref{eq:Muffin} is weaker than the convexity-type assumption \eqref{eq:BerNiren} (see Remark~\ref{rem:MonAme}) and that we do not assume any {\it a priori} bound on the solution.
As a consequence, if the weights $a$ and $b$ satisfy \eqref{eq:BerNiren} then any solution $u$ to \eqref{E-L} is increasing under assumption (i) or (ii) of Theorem~\ref{thm:Increasing}, and hence, the \textit{a priori} estimate \eqref{apriori} automatically holds. To prove Theorem~\ref{thm:Increasing} we use the ``Hamiltonian'' $\mathcal{H}$ defined in \eqref{L:intro} and a phase plane type analysis.
\subsection{The higher dimensional case}\label{section1:2}
In the remainder of the Introduction, we consider the extension of the functional~\eqref{functional} to an $N$-dimensional domain. More specifically, given a bounded domain $\Omega \subset \mathbb R^N$, a $C^1$-map $A: \overline{\Omega} \to S_N (\mathbb R)$ taking values in the set of symmetric matrices and assumed to be uniformly coercive, a function $0<b \in C^1 (\overline{\Omega}, \mathbb R)$, and a potential $G\in C^2(\mathbb{R})$ satisfying
\begin{equation}\label{newG}
\begin{array}{l}
\textrm{there exists }M>0\textrm{ such that } G'(s)\leq 0\textrm{ for all }s< -M \\
\textrm{and }G'(s)\geq 0\textrm{ for all }s>M,
\end{array}
\end{equation}
we consider the functional
\begin{equation}\label{eq:NDimFunctional}
\mathcal{E} (u, \Omega) := \int_{\Omega} \left\{ \frac{1}{2} \langle A(x) \nabla u, \nabla u \rangle + b(x) G(u) \right\} dx , \quad u \in H^{1}_{\varphi} (\Omega) ,
\end{equation}
where $\varphi \in (H^1\cap L^\infty)(\Omega)$ and
$$
H^{1}_{\varphi} (\Omega) := \left\{ u \in H^1 (\Omega) \, \colon \, u - \varphi \in H^1_0 (\Omega ) \right\} .
$$
The Euler-Lagrange equation associated to \eqref{eq:NDimFunctional} is given by
\begin{equation} \label{eq:Giessen}
- \hbox{div} (A(x) \nabla u) + b(x) G'(u) = 0, \quad u \in H^1_{\varphi} (\Omega).
\end{equation}
Recall that under quite restrictive assumptions on $A$ and $b$, a result for the odd rearrangement in the $x_N$-variable (which leads to antisymmetry) has been mentioned in \eqref{Trombetti:fct} and the comments after it.
This result requires assumptions on how $A$ and $b$ depend on the variables $x=(x',x_N)$. In the following (and without the previous restrictive assumptions), we will prove several uniqueness results. Setting $$ \lambda_1 (A,b, \Omega) := \inf\left\{ \frac{\int_{\Omega} \langle A(x) \nabla \xi, \nabla \xi \rangle} {\int_{\Omega} b(x) \xi^2} \, \colon \, \xi \in H^1_{0} (\Omega), \, \xi \not \equiv 0 \right\} , $$ under the assumption that \begin{equation} \label{SturmUndDrang} \lambda_1 (A,b,\Omega) \geq - G''(0) > - G''(s) \quad \textrm{for all } s \not = 0, \end{equation} simple arguments show that the functional $\mathcal{E} (\cdot, \Omega)$ has a unique critical point in $H^1_{\varphi} (\Omega)$ (see Proposition~\ref{Proposition:Marc1}). For the double-well potential $G(s) = \frac{1}{4}(1-s^2)^2$, namely the type of nonlinearity arising in the De Giorgi conjecture discussed above, the condition $-G''(0) > -G''(s)$ for all $s\neq 0$ (as well as \eqref{newG}) is clearly satisfied. It is also easy to verify that $ \lambda_1 (A,b,\Omega) \geq - G''(0) $ holds for small domains $\Omega$. Therefore it is of interest to find a class of weights for which this lower bound is independent of the domain. A typical situation for which this holds is provided by the weights $A(x) = e^{\alpha |x|^2} {\rm Id}$, $b(x) = e^{\alpha|x|^2}$ with $\alpha$ large enough. For a more general class of weights, we are able to give an explicit lower bound on $ \lambda_1 (A,b,\Omega)$ depending on $(A,b)$, which in the simplest case $A \equiv a\, {\rm Id}$ and $a \equiv b$ leads to the following uniqueness result: \begin{Theorem} \label{thm:ndim} Let $a\in C^2(\overline{\Omega})$, $G \in C^2 (\mathbb R)$ satisfying \eqref{newG}, and $\varphi\in (H^1\cap L^\infty)(\Omega)$. Assume $A \equiv a \, {\rm Id}$, $a\equiv b>0$ in $\overline\Omega$, $$ \frac{\Delta \sqrt{a}}{\sqrt{a}} \geq - G''(0) \hbox{ in } \Omega, \quad \hbox{ and } \quad -G''(0) > -G''(s) \textrm{ for all } s \neq 0. 
$$
Then $\mathcal{E} (\cdot, \Omega)$ admits a unique critical point in $H^1_\varphi(\Omega)$.
\end{Theorem}
As a consequence, we obtain the following result. Let $\sigma:\mathbb R^N \to \mathbb R^N$ be a reflection with respect to a hyperplane. If
\begin{equation} \label{eq:Flughafen}
G(s) = G(-s)\textrm{ for all }s\in\mathbb{R}, \quad A \circ \sigma = A, \quad b \circ \sigma = b , \quad \varphi \circ \sigma = - \varphi,
\end{equation}
and $\sigma$ leaves $\Omega$ invariant (\textit{i.e.}, $\sigma(\Omega) = \Omega$), then the critical points of $\mathcal{E} (\cdot, \Omega)$ in $H^1_\varphi(\Omega)$ inherit this same invariance:
\begin{Corollary} \label{cor:OddHigher}
Assume that $\sigma(\Omega)=\Omega$ and that condition \eqref{eq:Flughafen} holds for some reflection $\sigma:\mathbb R^N \to \mathbb R^N$ with respect to a hyperplane. Then, under the hypotheses of Theorem~{\rm \ref{thm:ndim}}, for every $\varphi\in (H^1\cap L^\infty)(\Omega)$ the functional $\mathcal{E} (\cdot, \Omega)$ in \eqref{eq:NDimFunctional} admits a unique critical point $u$ in $H^1_\varphi(\Omega)$. In particular, $u$ is antisymmetric, in the sense that $u \circ \sigma = - u$.
\end{Corollary}
\begin{Remark}
The conclusions on uniqueness and antisymmetry of Theorem~\ref{thm:ndim} and Corollary~\ref{cor:OddHigher} hold in the following two cases, in which we assume $A\equiv a\,{\rm Id}$ and $G(u)=(1-u^2)^2/4$.
\begin{enumerate}
\item[{\rm (i)}] $a(x)=b(x)=e^{\alpha|x|^2}$ for all $x\in\Omega$ and $2\alpha N\geq 1$ (see Example~\ref{ex6:2}~(ii)).
\item[{\rm (ii)}] $a(x)=|x|^{\beta+2}$, $b(x)=|x|^\beta$ for all $x\in\Omega$, and $\beta > -N$ (see Remark~\ref{rem6:7}; note that here $a\not\equiv b$).
\end{enumerate}
The log-convex weight $e^{\alpha|x|^2}$, $\alpha>0$, also appears in the paper \cite{RCBM}. There, in Theorem 5.2, it is proved that balls in $\mathbb{R}^N$ centered at the origin are the unique minimizers of weighted perimeter for a given weighted volume.
For this, the authors use Steiner symmetrization among other tools.
\end{Remark}
Note that the saddle-shaped solution (mentioned above in the De Giorgi conjecture in $\mathbb{R}^{2m}$) is a function $u=u(s,t)$ of two radial variables and it is a critical point of the functional
$$
\int_{(0,L)^2} \Big\{ \frac{1}{2} |\nabla u|^2 + G(u) \Big\} s^{m-1} t^{m-1} ds dt,
$$
namely a functional of the type \eqref{eq:NDimFunctional} with $A(x) = (st)^{m-1} {\rm Id}$ and $b(x) = (st)^{m-1}$. For these weights, our results show that the minimizers are antisymmetric if $L$ is small enough, whereas our uniqueness result cannot be applied for large $L$ (see Section~\ref{sec:ndim}).
\subsection{Plan of the paper}
We have organized our paper as follows. The continuous odd rearrangement and its main properties, stated in Theorems~\ref{thm:OddRearrangement}, \ref{thm:Brahms1}, and Remark~\ref{rem:OddRearrangement}, are contained in Section~\ref{sec:Rearrangement}. In Section~\ref{sec:Monotonicity}, we discuss the monotonicity of solutions for the one-dimensional problem, and prove Theorem~\ref{thm:Increasing}. In Section~\ref{section4}, we prove our uniqueness result stated in Theorem~\ref{thm:Brahms2}, as well as a more general result (Corollary~\ref{thm:Uniqueness}). Section~\ref{section5} is devoted to giving conditions on the weights under which minimizers in large intervals are not odd functions (we prove in particular Proposition~\ref{cor:characteriation}). Finally, in Section~\ref{sec:ndim} we give some uniqueness results in higher dimensions and prove Theorem~\ref{thm:ndim}.
\section{Antisymmetry of critical points: continuous odd rearrangement}
\label{sec:Rearrangement}
In this section we collect general properties of minimizers and we show how antisymmetry of critical points can be obtained by using our new continuous rearrangement.
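Before turning to the proofs, the rearrangement defined in the Introduction and the equidistribution property $(a)$ of Theorem~\ref{thm:OddRearrangement} can be sketched numerically (our own illustration, not part of the arguments): for $b\equiv1$, $\int_{-L}^L G(v^t)\,dx=\int_{-m}^m G(\lambda)(\rho^t)'(\lambda)\,d\lambda$ is independent of $t$ for even $G$, and $\rho^{1/2}$ is odd.

```python
import numpy as np

# Sketch for b = 1 and the even Allen-Cahn potential G (illustrative choice).
m = 1.0
lam = np.linspace(-m, m, 20001)
G = lambda s: (1.0 - s**2)**2 / 4.0

rho = lam + 0.3 * (1.0 - lam**2)        # inverse of a non-odd increasing v
rho_star = -rho[::-1]                    # flipped: rho_*(lam) = -rho(-lam)

def trapz(f, x):
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def potential_energy(t):
    # int G(v^t) dx written on the lambda side: int G(lam) (rho^t)'(lam) dlam
    rtp = t * np.gradient(rho, lam) + (1.0 - t) * np.gradient(rho_star, lam)
    return trapz(G(lam) * rtp, lam)

vals = [potential_energy(t) for t in (0.0, 0.3, 0.5, 1.0)]
```

All values of `vals` coincide, reflecting property $(a)$; moreover $\rho^{1/2}=\tfrac12(\rho+\rho_\star)$ is odd, as noted after the definition of the odd rearrangement.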
\subsection{General properties of minimizers}
We start by giving some qualitative properties of minimizers of $\mathcal{E} (\cdot, I)$ that can be obtained without any monotonicity assumption on the weights $a$ and $b$. Here, and in the rest of the paper,
$$
I:=(-L,L).
$$
\begin{Lemma}\label{lem:OderedMin}
Let $L>0$, $m\geq 0$, $G\in C^{1,1}_{\rm loc}(\mathbb{R})$, and assume that $a$, $b$, and $G$ satisfy \eqref{eq:A}. If $u_1$ and $u_2$ are two minimizers of $\mathcal{E} (\cdot,I)$ in $H^1_m(I)$, then the following alternative holds:
\begin{equation*} \label{eq:OderedMin}
\textrm{either }u_1 > u_2 \hbox { in } I, \quad \, \textrm{ or } u_1 < u_2 \hbox{ in } I, \quad \, \textrm{ or } u_1 \equiv u_2 \hbox{ in } I.
\end{equation*}
\end{Lemma}
Before commenting on the proof of the lemma, let us start with some generalities that will be used at several points of the paper. First, a minimizer $u$ as in the lemma will be a $C^2$ function satisfying \eqref{E-L} pointwise. Indeed, $u$ being in $H^1_m(I)$ tells us that $u$ is continuous in $[-L,L]$. Thus $bf(u)$ is also continuous and, by the weak sense of \eqref{E-L}, $au'$ will be $C^1$. Since $a>0$ is $C^1$, we conclude that $u\in C^2((-L,L))$. Second, under the hypotheses of the lemma (in particular, $G\in C^{1,1}_{\rm loc}(\mathbb{R})$), the initial value problem for the ODE $-(au')'=bf(u)$ in \eqref{E-L} enjoys uniqueness. More precisely, if two solutions $u_1$ and $u_2$ of the ODE satisfy $u_1(x_0)=u_2(x_0)$ and $u_1'(x_0)=u_2'(x_0)$ for some $x_0\in (-L,L)$, then they agree. This is a consequence of the classical uniqueness theorem for ODEs, which in our case requires $a$, $a'$, and $b$ to be bounded and continuous, and $f=-G'$ to be Lipschitz continuous.
\begin{proof}[Proof of Lemma~{\rm \ref{lem:OderedMin}}]
We use a well-known cutting and energy argument. One considers the function $v:=\min(u_1,u_2)$, which satisfies $v\leq u_1$.
Using that both $u_1$ and $u_2$ are minimizers, one easily shows that $v$ has the same energy as $u_1$, and thus $v$ is also a minimizer (see the details in~\cite[Lemma 3.1]{JerisonMonneau}, for instance). Now, since $v$ and $u_1$ are both solutions of the equation and $v\leq u_1$, the strong maximum principle leads to the alternative of the lemma. Alternatively, the same conclusion can be deduced from the uniqueness theorem for the initial value problem for the ODE (discussed above).
\end{proof}
The following proposition collects other important properties of minimizers.
\begin{Proposition} \label{prop:Schubert}
Let $L>0$, $m\geq 0$, $G\in C^{1,1}_{\rm loc}(\mathbb{R})$, and assume that $a$, $b$, and $G$ satisfy \eqref{eq:A}. If $u$ is a minimizer of $\mathcal{E} (\cdot,I)$ in $H^1_m(I)$, then the following hold:
\begin{enumerate}
\item[{\rm (i)}] For $m=0$, we have either $u \equiv 0$ or $|u|>0$ in $I$. Moreover, $u$ is even;
\item[{\rm (ii)}] $u$ is odd if and only if $u(0) = 0$;
\item[{\rm (iii)}] For $m >0$, $u$ has exactly one zero and $u'(0) >0$;
\item[{\rm (iv)}] If $m>0$ and $ -m \leq u \leq m$, then $u$ is increasing.
\end{enumerate}
\end{Proposition}
\begin{proof}
{\bf (i)} If $m=0$, the assumption that $G$ is even gives that $|u| \in H^1_0 (I)$ is also a minimizer. A classical argument based on the strong maximum principle immediately yields the alternative $u \equiv 0$ or $|u| >0$ in $I$. Consider $v(x) := u(-x)$, which is also a solution of \eqref{E-L} since $m=0$. We easily check that $\mathcal{E} (u,I) = \mathcal{E} (v,I)$, which shows that $v$ is also a minimizer. Since $u(0) = v(0)$, we must have $u \equiv v$ by Lemma~\ref{lem:OderedMin}.
{\bf (ii)} Assume $u(0)=0$. Without loss of generality, we may assume that
$$
\int_0^L \left\{ \frac{u'^2}{2} a(x) + G(u) b(x) \right\} dx \geq \int_{-L}^0 \left\{\frac{u'^2}{2} a(x) + G(u) b(x) \right\} dx.
$$
By defining
$$
{\tilde u} (x) := \left\{ \begin{array}{ll} u (x) &\hbox{ for } x \in (-L,0) , \\ -u(-x) &\hbox{ for } x \in (0,L), \end{array} \right.
$$
we easily see that ${\tilde u}\in H^1_m(I)$ and $\mathcal{E} (u, I) \geq \mathcal{E} ({\tilde u}, I)$. Hence ${\tilde u}$ is an odd minimizer satisfying ${\tilde u}(0) = u(0)$. By the alternative of Lemma~\ref{lem:OderedMin} we deduce that ${\tilde u} \equiv u$. This shows that $u$ is odd. Note that this argument also works in some higher dimensional cases, under the assumption that $u$ vanishes on a hyperplane and under appropriate symmetry assumptions on the domain, the boundary condition, and the weights. In dimension one there is another proof that gives the same statement not only for minimizers but also for solutions of \eqref{E-L}. Indeed, let $u$ be a solution of \eqref{E-L} such that $u(0)=0$ and let $u_\star$ be its flipped (as in \eqref{def:flipped}). Since $u$ and $u_\star$ are solutions of \eqref{E-L} satisfying $u(0)=u_\star(0)$ and $u'(0)=u_\star'(0)$, we conclude that $u=u_\star$ by uniqueness for the Cauchy problem. Therefore, $u$ is odd.
{\bf (iii)} Let $m>0$. Assume by contradiction that there exists a nonempty interval $(x_1,x_2) \subset I$ such that $u(x_1) = u(x_2) = 0$ and $u>0$ in $(x_1,x_2)$. Let $\tilde{u}:=-u$ in $(x_1,x_2)$ and $\tilde{u}:=u$ in $\overline{I}\setminus(x_1,x_2)$ and note that $\mathcal{E} (u, I) = \mathcal{E} ({\tilde u}, I)$ by \eqref{eq:A}. Using the alternative of Lemma~\ref{lem:OderedMin} we have a contradiction, since we would have two minimizers $u$ and $\tilde{u}$ satisfying $u\equiv \tilde{u}$ in $\overline{I}\setminus(x_1,x_2)$. Consider $v(x) := u(-x)$. We claim that $u>v$ in $(0,L)$. Note that $u(0)=v(0)$ and $u(L)=m>-m=v(L)$. Indeed, assume first that $u<v$ in an open interval $J\subset(0,L)$ and $u=v$ on $\partial J$. Replacing $u$ by $v$ if necessary, we may assume that $\mathcal{E}(v,J)\leq \mathcal{E}(u,J)$.
Therefore, defining $\bar{u}=u$ in $I\setminus J$ and $\bar{u}=v$ in $\bar{J}$, we obtain that $\bar{u}$ is a minimizer of $\mathcal{E}(\cdot,I)$ in $H^1_m(I)$ different from $u$. This contradicts the alternative of Lemma~\ref{lem:OderedMin} and proves that $u\geq v$ in $(0,L)$. However, if there existed $x_0\in(0,L)$ such that $u(x_0)=v(x_0)$, we would have $u'(x_0)=v'(x_0)$ (since $u\geq v$ in $(0,L)$). This contradicts the uniqueness of the Cauchy problem, since $u$ and $v$ are solutions of equation \eqref{E-L} satisfying $u(x_0)=v(x_0)$ and $u'(x_0)=v'(x_0)$ but $u(L) \not = v(L)$, and proves the claim. As a consequence, we obtain that $u'(0) \geq 0$, and in fact, $u'(0) > 0$ (otherwise we would have $u \equiv v$, again by uniqueness of the Cauchy problem, which cannot hold since $u(L) \not = v(L)$).
{\bf (iv)} We first claim that $u$ is nondecreasing. Assume on the contrary that the minimizer admits two local extrema. Let $x_1\in(-L,L)$ be the smallest local maximum and let $s_1\in(-m,m]$ be its critical value. Let $\bar{x}$ be the smallest solution to $u(x)=s_1$ in $(x_1,L]$ and $s_2=\min_{[x_1,\bar{x}]}u$. Let $\bar{s}\in[s_2,s_1]$ be such that $G(\bar{s}) = \min_{s \in [s_2,s_1]} G(s)$, $\bar{x}_1=\sup\{\tau\in(-L,x_1):u(\tau)=\bar{s}\}$, and $\bar{x}_2=\sup\{\tau\in(-L,\bar{x}):u(\tau)=\bar{s}\}$ (see Figure~\ref{fig3}).
\begin{figure}[ht]
\begin{center}
\psset{xunit=6cm,yunit=3cm}
\begin{pspicture}(0,1.1)
\psline[linewidth=0.5pt]{-}(-1.1,-1)(0.9,-1)
\psline[linewidth=0.5pt]{-}(-1,-1.1)(-1,1)
\psline[linewidth=0.5pt]{-}(0.8,-1.1)(0.8,1)
\pscurve[linecolor=red,linewidth=0.5pt]{-}(-1,-0.8)(-0.4,0.4)(0.3,-0.5)(0.8,0.8)
\psline[linecolor=blue,linestyle=dashed,linewidth=0.5pt]{-*}(-0.37,-1)(-0.37,0.4)
\rput(-0.4,-1.1){$x_1$}
\psline[linecolor=blue,linestyle=dashed,linewidth=0.5pt]{-}(-1,0.4)(0.71,0.4)
\psline[linecolor=blue,linestyle=dashed,linewidth=0.5pt]{-}(0.71,0.4)(0.71,-1)
\rput(-1.1,0.4){$s_1$}
\rput(0.71,-1.1){$\bar{x}$}
\psline[linecolor=blue,linestyle=dashed,linewidth=0.5pt]{-*}(-1,-0.5)(0.27,-0.5)
\rput(-1.1,-0.5){$s_2$}
\psline[linecolor=blue,linestyle=dashed,linewidth=0.5pt]{-}(-1,0.0)(0.59,0.0)
\psline[linecolor=blue,linestyle=dashed,linewidth=0.5pt]{-}(-0.72,-1.0)(-0.72,0.0)
\psline[linecolor=blue,linestyle=dashed,linewidth=0.5pt]{-}(0.59,-1.0)(0.59,0.0)
\rput(-1.1,-0.0){$\bar{s}$}
\rput(-0.72,-1.1){$\bar{x}_1$}
\rput(0.59,-1.1){$\bar{x}_2$}
\rput(0.7,0.7){$u$}
\end{pspicture}
\end{center}
\vspace{2.7cm}
\caption{Graph of $u$}
\label{fig3}
\end{figure}
Defining
$$
\tilde u (x) := \left\{ \begin{array}{cl} \bar{s} &\hbox{ if } x \in (\bar{x}_1,\bar{x}_2), \\ u(x) &\hbox{ otherwise}, \end{array} \right.
$$
we easily see that $\tilde{u}\in H^1_m(I)$ and $\mathcal{E} (\tilde u, I) \leq \mathcal{E} (u, I) $. Therefore, $\tilde{u}$ is a minimizer which is constant in $[\bar{x}_1,\bar{x}_2]$. This is a contradiction: since $\tilde{u}$ is a minimizer, it solves \eqref{E-L}, and being constant in $[\bar{x}_1,\bar{x}_2]$, uniqueness for the Cauchy problem would force $\tilde{u}\equiv\bar{s}$ in $I$, which is incompatible with $\tilde{u}(\pm L)=\pm m$ and $m>0$. This proves the claim. Hence, $u$ is a nondecreasing minimizer and by the strong maximum principle it must be increasing.
\end{proof}
\subsection{Continuous odd rearrangement}\label{subsec2:2}
In this subsection we prove Theorems \ref{thm:OddRearrangement} and \ref{thm:Brahms1}. As we said in Remark~\ref{rem:OddRearrangement}, both results will in fact be proved for functionals in which the weights $a$ and $b$ are not necessarily equal.
Given an increasing and odd diffeomorphism $\gamma: (-L,L) \to (-{\tilde L}, {\tilde L})$ of class $C^1$ and making the change of variables $x=\gamma^{-1}(y)$, we obtain
$$
\mathcal{E}(u,(-L,L))=\int_{-L}^L\left\{\frac{1}{2}(u')^2a(x)+G(u)b(x)\right\}dx =\int_{\gamma(-L)}^{\gamma(L)}\left\{\frac{1}{2}(\tilde{u}')^2\tilde{a}(y)+G(\tilde{u})\tilde{b}(y)\right\}dy
$$
where
$$
{\tilde u} := u \circ \gamma^{-1}, \qquad {\tilde a} := (a \gamma') \circ \gamma^{-1}, \qquad\textrm{and}\qquad {\tilde b} := \frac{b}{\gamma'} \circ \gamma^{-1} \,.
$$
Similarly, a straightforward computation shows that, when $G\in C^{0,1}_{\rm loc}(\mathbb{R})$, problem~\eqref{E-L} is equivalent to the following problem
\begin{equation}\label{eq:E-LBis}
\left\{ \begin{array}{l} - ( \tilde{a} {\tilde u} ')'={\tilde b} \, f({\tilde u}) \quad \hbox{ in } (-\tilde{L},\tilde{L}),\\ {\tilde u} (\tilde{L}) \, = \, - {\tilde u} (- \tilde{L})= m, \end{array} \right.
\end{equation}
where $\tilde L=\gamma(L)$. In particular, the diffeomorphism
\begin{equation} \label{eq:ChangeVariable}
\gamma_1 (x) := \int_0^x \sqrt{\frac{b}{a} } \qquad (\hbox{respectively,} \, \, \, \gamma_2 (x) := \int_0^x b)
\end{equation}
allows us to rewrite the functional $\mathcal{E}(u,(-L,L))$ (or problem~\eqref{E-L}) as
$$
\mathcal{E}(\tilde{u},(-\gamma(L),\gamma(L))) = \int_{-\gamma(L)}^{\gamma(L)}\left\{\frac{1}{2}(\tilde{u}')^2\tilde{a}(y)+G(\tilde{u})\tilde{b}(y)\right\}dy
$$
(or~\eqref{eq:E-LBis}) with weights $({\tilde a}, {\tilde b})$ satisfying ${\tilde a} \equiv {\tilde b}=\sqrt{ab}\circ\gamma_1^{-1}$ (respectively, $\tilde{a}=(ab)\circ\gamma_2^{-1}$ and ${\tilde b} \equiv 1$). Note that
\begin{equation}\label{newexp}
\frac{(\sqrt{ab})'}{b}\circ \gamma_1^{-1} = \frac{(\sqrt{ab})'\circ\gamma_1^{-1}}{\sqrt{ab}\circ\gamma_1^{-1}}(\gamma_1^{-1})' = \frac{\tilde{a}'}{\tilde{a}} = ({\rm log}\, \tilde{a})'
\end{equation}
and
\begin{equation}\label{newexp2}
\frac{(\sqrt{ab})'}{b}\circ\gamma_2^{-1} = (\sqrt{ab}\circ\gamma_2^{-1})' = (\sqrt{\tilde{a}})'.
\end{equation} This shows that assumption \eqref{eq:BerNiren} is equivalent to the log-convexity of $\tilde{a}$ when ${\tilde a} \equiv {\tilde b}=\sqrt{ab}\circ\gamma_1^{-1}$ and to the convexity of $\sqrt{\tilde{a}}$ when $\tilde{a}=(ab)\circ\gamma_2^{-1}$ and ${\tilde b} \equiv 1$. We now define the continuous odd rearrangement of an increasing function (with respect to the weight $b\equiv1$). \begin{Definition}\label{def:rear} Let $v \in H^1_m (I)$ be an increasing function. Let us denote the inverse of $v$ by $\rho$, \textit{i.e.}, $$ v(x)=\lambda\qquad\textrm{if and only if}\qquad \rho(\lambda)=x. $$ Let $t\in[0,1]$ and define the family of functions $$ \rho^t(\lambda):=t\rho(\lambda)+(1-t)\rho_\star(\lambda)\quad\textrm{ for all } \lambda\in [-m,m], $$ where $\rho_\star(\lambda)=-\rho(-\lambda)$ denotes the flipped function of $\rho$. It is clear that $\rho^t$ is an increasing function for all $t\in[0,1]$. We define the \textit{continuous odd rearrangement} $\{v^t\}_{0\leq t\leq1}$ of $v$ as the family of inverse functions of $\{\rho^t\}_{0\leq t\leq1}$. \end{Definition} \begin{Remark}\label{rmk:lll} Note that $\rho^t$ (or $v^t$) will be an odd function if and only if $(2t-1)(\rho(\lambda)+\rho(-\lambda))=0$ for all $\lambda\in[-m,m]$. In particular, the continuous rearrangement $v^t$ is an odd function if either $\rho$ is odd (\textit{i.e.}, $v$ is odd) or $t=1/2$. We call $v^{1/2}$ the \textit{odd rearrangement} of $v$. \end{Remark} For a positive even weight $a$ such that $\sqrt{a}$ is convex (when $b\equiv1$), and a general even nonlinearity $G$, we can prove that continuous odd rearrangements and Schwarz rearrangements share similar properties. We start by proving that the weighted Dirichlet integral decreases under the odd rearrangement $v^{1/2}$ when $\sqrt{a}$ is convex. This is the key fact that later leads to the oddness of minimizers.
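To illustrate Definition~\ref{def:rear} with a concrete example (an illustration that we add here), take $m=L=1$ and let $v$ be the increasing function whose inverse is
$$
\rho(\lambda)=\lambda+c(\lambda^2-1),\qquad \lambda\in[-1,1],\quad 0<c<\tfrac12,
$$
so that $\rho(\pm1)=\pm1$ and $\rho'(\lambda)=1+2c\lambda>0$. Then $\rho_\star(\lambda)=-\rho(-\lambda)=\lambda-c(\lambda^2-1)$ and
$$
\rho^t(\lambda)=t\rho(\lambda)+(1-t)\rho_\star(\lambda)=\lambda+(2t-1)c(\lambda^2-1).
$$
In particular $\rho^{1/2}(\lambda)=\lambda$, so the odd rearrangement of $v$ is $v^{1/2}(x)=x$, while $v$ itself is not odd when $c\neq0$.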
However, since we also prove oddness of critical points (not necessarily minimizers), we need to use the whole family $v^t$, $t\in[0,1]$, in the continuous odd rearrangement. The next result states that all the functions $v^t$ have smaller weighted Dirichlet energy than $v$ when $\sqrt{a}$ is convex. \begin{Lemma}\label{Lem:Dirichlet} Let $a \in C^0([-L,L])$ be an even positive function such that $\sqrt a$ is convex. If $v \in C^1 ([-L,L])$ is increasing and $v(L)=-v(-L)=m>0$, then it holds either that \begin{equation}\label{claim1} h(t):=\int_{-L}^L \left( \frac{d v^t}{dx } \right)^2 \! a(x) \, dx \, < \, \int_{-L}^L \left( \frac{d v}{dx } \right)^2 \! a(x) \, dx \quad \textrm{for all }t\in(0,1), \end{equation} or that $v^t=v$ for all $t\in[0,1]$. \end{Lemma} Valenti~\cite{NOI} proved that $h(1/2) \leq h(1)$ holds even for $H^1_m$ functions. His proof uses a reflection argument to convert the odd symmetry property into even symmetry. He then uses the Schwarz decreasing symmetrization ---which applies to $H^1$ functions (see \cite{LL}). Next we provide a proof different from the one given in \cite{NOI}. In fact, our proof applies to all $v^t$, $t\in(0,1)$, assuming that $v\in C^1$. A possible way to prove our lemma for functions $v\in H^1_m$ (which we do not pursue here) would be to extend Coron's result~\cite{Coron} on the continuity of the Schwarz rearrangement in dimension $1$ in the $W^{1,p}$-norm to all $v^t$, $t\in(0,1)$. This surely works for $t=1/2$, by the results of \cite{NOI}. Note that Coron's result is a delicate one and, in fact, the continuity of the Schwarz rearrangement in the $H^1$-norm does not hold in higher dimensions $N\geq 2$ (see \cite{Alm-Lieb}). We now establish Lemma~{\rm \ref{Lem:Dirichlet}}. An alternative proof is given below (see Remark~\ref{Rmk:26}) as a consequence of Lemma~\ref{Lem:Convex_energy}. That proof is shorter but requires the use of the whole family $v^t$, $t\in(0,1)$, even to establish Lemma~\ref{Lem:Dirichlet} for $t=1/2$.
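Before giving the proof, the following explicit computation (an illustrative example that we add here, with $a\equiv1$ and $m=L=1$) may clarify the statement. Let $v$ be the increasing function whose inverse is $\rho(\lambda)=\lambda+c(\lambda^2-1)$ with $0<c<1/2$, so that $\rho(\pm1)=\pm1$ and $\rho'>0$. Then $\rho^t(\lambda)=\lambda+(2t-1)c(\lambda^2-1)$, $(\rho^t)'(\lambda)=1+2(2t-1)c\lambda$, and, setting $\beta:=2(2t-1)c\in(-1,1)$,
$$
h(t)=\int_{-1}^1\left(\frac{d v^t}{dx}\right)^2 dx
=\int_{-1}^{1}\frac{d\lambda}{(\rho^t)'(\lambda)}
=\int_{-1}^1\frac{d\lambda}{1+\beta\lambda}
=\frac{1}{\beta}\,\log\frac{1+\beta}{1-\beta}
=2\sum_{k\geq0}\frac{\beta^{2k}}{2k+1},
$$
with the convention that $h(t)=2$ when $\beta=0$. Since $\beta$ is affine in $t$ and the last series is even and strictly convex in $\beta$, the function $h$ is convex, attains its minimum at $t=1/2$ (where $v^{1/2}(x)=x$), and satisfies $h(t)<h(1)$ for all $t\in(0,1)$, in agreement with Lemma~\ref{Lem:Dirichlet} and with Lemma~\ref{Lem:Convex_energy} below.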
\begin{proof}[Proof of Lemma~{\rm \ref{Lem:Dirichlet}}] We first establish that \eqref{claim1} with $<$ replaced by $\leq$ holds. For this, by definition of the continuous odd rearrangement we have \begin{equation}\label{integ1} \int_{-L}^L \left( \frac{d v^t}{dx } \right)^2 \! a(x) \, dx = \int_0^m \left\{\frac{a(\rho^t(\lambda))}{(\rho^t)'(\lambda)} + \frac{a(\rho^t(-\lambda))}{(\rho^t)'(-\lambda)}\right\}\ d\lambda. \end{equation} We have to compare this quantity with $$ \int_{-L}^L \left( \frac{d v}{dx } \right)^2 \! a(x) \, dx =\int_0^m \left\{\frac{a(\rho^1(\lambda))}{(\rho^1)'(\lambda)}+ \frac{a(\rho^0(\lambda))}{(\rho^0)'(\lambda)}\right\}\ d\lambda. $$ We will do this pointwise. Since $\sqrt{a}$ is convex, the integrand in the second integral of \eqref{integ1} is less than or equal to $$ \frac{\left(t\sqrt{a(\rho^1)} + (1-t)\sqrt{a(\rho^0)}\right)^2}{t(\rho^1)' + (1-t)(\rho^0)'} + \frac{\left(t\sqrt{a(\rho^0)} + (1-t)\sqrt{a(\rho^1)}\right)^2}{t(\rho^0)' + (1-t)(\rho^1)'}. $$ Therefore, it is sufficient to prove that \begin{equation} \begin{split}\label{conv} \frac{\left(t\sqrt{a(\rho^1)} + (1-t)\sqrt{a(\rho^0)}\right)^2}{t(\rho^1)' + (1-t)(\rho^0)'} +& \frac{\left(t\sqrt{a(\rho^0)} + (1-t)\sqrt{a(\rho^1)}\right)^2}{t(\rho^0)' + (1-t)(\rho^1)'} \\ &\hspace{-2cm} \leq \frac{a(\rho^1)}{(\rho^1)'} + \frac{a(\rho^0)}{(\rho^0)'}. \end{split} \end{equation} There are two ways to proceed now. First, let $E:=\{\lambda \in [-m,m] : (\rho^0)'(\lambda)=+\infty\textrm{ and } (\rho^1)'(\lambda)=+\infty\}$. It is clear that \eqref{conv} holds in $E$. In the set $[-m,m]\setminus E$, a simple, but arduous, computation shows that the previous inequality is equivalent to \begin{equation}\label{desigualdad} \frac{t(1-t)((\rho^0)'+(\rho^1)')}{(\rho^0)'(\rho^1)'} \left( (\rho^0)'\sqrt{a(\rho^1)}-(\rho^1)'\sqrt{a(\rho^0)} \right)^2\geq0 \end{equation} which clearly holds for all $t\in[0,1]$ (since $\rho^t$ is an increasing function for all $t\in[0,1]$).
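As a quick sanity check of \eqref{conv} (an illustration that we add here), take $t=1/2$ and $a\equiv1$, so that $a(\rho^0)=a(\rho^1)=1$. Then \eqref{conv} reduces to
$$
\frac{4}{(\rho^0)'+(\rho^1)'}\ \leq\ \frac{1}{(\rho^0)'}+\frac{1}{(\rho^1)'},
$$
that is, the harmonic--arithmetic mean inequality for the positive numbers $(\rho^0)'$ and $(\rho^1)'$, with equality if and only if $(\rho^0)'=(\rho^1)'$. This is consistent with \eqref{desigualdad}, whose left-hand side reduces in this case to
$$
\frac{\big((\rho^0)'+(\rho^1)'\big)\big((\rho^0)'-(\rho^1)'\big)^2}{4(\rho^0)'(\rho^1)'}\ \geq\ 0.
$$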
A second proof is the following. By symmetry one sees that \eqref{conv} follows if we prove that \begin{equation}\label{convpar} \frac{\left(t\sqrt{a(\rho^1)}+(1-t)\sqrt{a(\rho^0)}\right)^2}{t(\rho^1)'+(1-t)(\rho^0)'} \leq t\frac{a(\rho^1)}{(\rho^1)'}+(1-t)\frac{a(\rho^0)}{(\rho^0)'} \end{equation} together with the analogous inequality obtained by interchanging the roles of $\rho^1$ and $\rho^0$, and then add the two. Finally, \eqref{convpar} is easily seen to be true using that the function $H(A,P)=A^2/P$ is convex on $\mathbb{R}_+\times\mathbb{R}_+$ (its Hessian, $\frac{2}{P^3}\begin{pmatrix} P^2 & -AP \\ -AP & A^2 \end{pmatrix}$, is positive semidefinite) and taking $A_i=\sqrt{a(\rho^i)}$ and $P_i=(\rho^i)'$. It remains to prove that $h(t)=h(1)$ for some $t\in (0,1)$ if and only if $v^t=v$ for all $t\in[0,1]$. Assuming that this equality holds, using any of the previous approaches (and expanding \eqref{convpar} in the second approach), we see that all the previous inequalities become equalities. In particular, for our $t\in(0,1)$, inequality \eqref{desigualdad} (or \eqref{convpar}) becomes an equality, which means $$ \frac{(\rho^1)'}{(\rho^0)'}=\sqrt{\frac{a(\rho^1)}{a(\rho^0)}}\quad \textrm{for every }\lambda \in [-m,m]. $$ \noindent This is equivalent to the fact that the derivative of the function $$ \Psi(\lambda):=\int_{-\rho(-\lambda)}^{\rho(\lambda)}\frac{ds}{\sqrt{a(s)}} $$ vanishes in $[-m,m]$. Therefore $\Psi(\lambda)=\Psi(m)=0$ for every $\lambda\in [-m,m]$. It follows immediately that $\rho(\lambda)=-\rho(-\lambda)$. Hence $\rho$ is odd, and so is its inverse $v$. This automatically leads to $v^t=v$ for all $t\in[0,1]$. \end{proof} The following result will be useful to prove Theorem~\ref{thm:Brahms1}, that is, that critical points of $\mathcal{E}$ are odd. \begin{Lemma}\label{Lem:Convex_energy} Let $a \in C^0([-L,L])$ be an even positive function such that $\sqrt a$ is convex. If $v \in C^1 ([-L,L])$ is increasing and $v(L)=-v(-L)=m>0$, then \begin{equation}\label{def:ht} h(t):=\frac{1}{2}\int_{-L}^L \left( \frac{d v^t}{dx } \right)^2 \!
a(x) \, dx,\quad t\in[0,1], \end{equation} is a convex function. \end{Lemma} \begin{proof} Note that there is a sequence of even positive functions $a_n\in C^2((-L,L))$ such that $\sqrt{a_n}$ is convex and $a_n$ tends to $a$ in $L^\infty((-L,L))$. Consider now the function $h_n$ defined by \eqref{def:ht} with $a$ replaced by $a_n$. If we show that $h_n$ is convex then, taking into account \eqref{integ1} and letting $n\to\infty$, we will deduce that $h$ is convex. Therefore, in order to prove the lemma, we can assume that $a\in C^2((-L,L))$ without loss of generality. Let us show the existence of $a_n$ as above. Note that $\sqrt{a}$ is differentiable a.e. since it is convex. Moreover, $(\sqrt{a})'=a'/(2\sqrt{a})$ is nondecreasing and nonnegative in $(0,L)$, and it belongs to $L^1((0,L))$. Using a standard convolution argument we see that there exists a sequence of increasing positive functions $q_n\in C^\infty((-L,L))$ such that $q_n(0)=0$ and $q_n$ tends to $a'/(2\sqrt{a})$ in $L^1((0,L))$. Defining $$ a_n(x)=\left(\sqrt{a(0)}+\int_0^x q_n(t)\,dt\right)^2\quad\textrm{for }x\in[0,L] $$ and $a_n(x)=a_n(-x)$ for $x\in[-L,0]$, we obtain the desired sequence. Thus, we may assume $a\in C^2((-L,L))$. Noting that \begin{equation}\label{h(t)} h(t)= \frac{1}{2}\int_{-L}^L \left( \frac{d v^t}{dx } \right)^2 \! a(x) \, dx = \frac{1}{2}\int_{-m}^m \frac{a(\rho^t(\lambda))}{(\rho^t)'(\lambda)}\ d\lambda, \end{equation} a simple computation shows that $$ h'(t) = \frac{1}{2}\int_{-m}^{m} \left\{a'(\rho^t)\frac{\rho^1-\rho^0}{(\rho^t)'} - a (\rho^t)\frac{(\rho^1)' - (\rho^0)'}{((\rho^t)')^2} \right\}\, d\lambda. $$ Moreover, since $\sqrt{a}$ is convex, and hence $2a''/a\geq (a'/a)^2$, we obtain \begin{eqnarray*} h''(t) &=& \int_{-m}^{m} \frac{a(\rho^t)((\rho^1)' - (\rho^0)')^2}{4((\rho^t)')^3} \left\{ 2\frac{a''(\rho^t)}{a(\rho^t)}\frac{( \rho^1 - \rho^0 )^2}{((\rho^1)' - (\rho^0)')^2}((\rho^t)')^2\right.\\ && \hspace{4.5cm} - \left.
4 \frac{a ' (\rho^t)}{a(\rho^t)}\frac{\rho^1-\rho^0}{(\rho^1)' - (\rho^0)'}(\rho^t)' + 4 \right\}\, d\lambda\\ &\geq& \int_{-m}^{m} \frac{a(\rho^t)((\rho^1)' - (\rho^0)')^2}{4((\rho^t)')^3} \left\{ \frac{a ' (\rho^t)}{a(\rho^t)}\frac{\rho^1-\rho^0}{(\rho^1)' - (\rho^0)'}(\rho^t)'-2 \right\}^2\, d\lambda \end{eqnarray*} which is clearly nonnegative. \end{proof} \begin{Remark}\label{Rmk:26} We claim that Lemma~\ref{Lem:Dirichlet} can be deduced from Lemma~\ref{Lem:Convex_energy}. First, by a regularization argument we can assume that $a\in C^2((-L,L))$; see the proof of Lemma~\ref{Lem:Convex_energy}. To prove the claim, note that $h(t)=h(1-t)$ for all $t\in[0,1]$ (by the evenness of $a$), and hence $h'(1/2)=0$. Since $h$ is a convex function such that $h'(1/2)=0$, it is clear that $h$ is nonincreasing in $(0,1/2)$ and nondecreasing in $(1/2,1)$. In particular, $h(1/2)\leq h(t) \leq h(0)=h(1)$ for all $t\in[0,1]$. We want to show that either $h(t)\equiv h(1)$ or that $h(t)<h(1)$ for all $t\in(0,1)$. Assume $h\not\equiv h(1)$ and that there exists $t_0\in(0,1)$ such that $h(t_0)=h(1)$. Note that $h'(t_0)=0$ since $h$ is a $C^1$ function such that $h(t) \leq h(0)=h(1)$ for all $t\in[0,1]$. Now, since $h$ is convex and $h'(t_0)=0$, we deduce that $h(t)\geq h(t_0)=h(1)$ for all $t\in [0,1]$. Since $h\leq h(1)$, this is a contradiction with $h\not\equiv h(1)$. \end{Remark} We now prove Theorem~\ref{thm:OddRearrangement} and the claims of Remark~\ref{rem:OddRearrangement}. \begin{proof}[Proof of Theorem~{\rm \ref{thm:OddRearrangement}}] Thanks to the diffeomorphism $\gamma_2$ defined in \eqref{eq:ChangeVariable}, we may assume the new weights to be $\tilde{a}$ and $\tilde{b}\equiv{1}$, with $\sqrt{\tilde{a}}$ a convex function. (a) Let us prove that an increasing function $v\in C^1([-L,L])$ satisfying $v(L)=-v(-L)=m>0$ and its continuous rearrangement $\{v^t\}_{0\leq t\leq 1}$ satisfy \eqref{equidistributed}. Indeed, on the one hand $$ \int_{-L}^L G(v)\ dx=\int_{-m}^m G(\lambda)\rho'(\lambda)\ d\lambda =\int_0^m G(\lambda)(\rho'(\lambda)+\rho'(-\lambda))\ d\lambda.
$$ On the other hand, \begin{eqnarray*} \int_{-L}^L G(v^t)\ dx &=& \int_{-m}^m G(\lambda)(\rho^t)'(\lambda)\ d\lambda\\ &=& t\int_{-m}^m G(\lambda)\rho'(\lambda)\ d\lambda + (1-t)\int_{-m}^m G(\lambda)\rho'(-\lambda)\ d\lambda\\ &=& \int_0^m G(\lambda)(\rho'(\lambda)+\rho'(-\lambda))\ d\lambda. \end{eqnarray*} (b) By Lemma~\ref{Lem:Dirichlet} we only need to prove that $t\longmapsto\mathcal{E}(v^t,(-L,L))$ is a convex function, \textit{i.e.}, part (b.3). This follows directly from Lemma~\ref{Lem:Convex_energy} since by part (a) we have $$ \mathcal{E} (v^{t}, (-L,L))=h(t)+\int_{-L}^L G(v^t)\ dx=h(t)+\int_{-L}^L G(v)\ dx $$ for all $t\in[0,1]$. \end{proof} With the above rearrangement we can now prove Theorem~\ref{thm:Brahms1}, \textit{i.e.}, that increasing critical points of $\mathcal{E} (\cdot, (-L,L))$ in $H^1_m((-L,L))$ are odd under assumption~\eqref{eq:BerNiren}. As mentioned in the Introduction, this argument applies to every locally Lipschitz $G\in C^{0,1}_{\rm loc}(\mathbb{R})$, not only to $G\in C^{1,1}_{\rm loc}(\mathbb{R})$ as in Theorem~\ref{thm:BerestyckiNirenberg}. \begin{proof}[Proof of Theorem~{\rm \ref{thm:Brahms1}}] Let $u\in H^1_m((-L,L))$ be an increasing critical point of $\mathcal{E}(\cdot,(-L,L))$. Using the change of variable $\gamma_2$ given in~\eqref{eq:ChangeVariable}, we can work with the equivalent problem~\eqref{eq:E-LBis} with weights $\tilde{a}=(a b) \circ \gamma_2^{-1}$ and $\tilde b \equiv 1$, whose associated functional is given by $$ \tilde{\mathcal{E}} (v, (-{\tilde L},{\tilde L})) :=\int_{-\tilde{L}}^{\tilde{L}} \left\{\frac{1}{2} v'(y)^2\tilde{a}(y)+G(v)\right\} dy . $$ We note that $\tilde{a}$ is even and that, by~\eqref{eq:BerNiren}, we have that $\sqrt {\tilde a}$ is convex. 
Furthermore, critical points of $\tilde{\mathcal{E}} (\cdot, {\tilde I})$ and $\mathcal{E} (\cdot, I)$ are related by ${\tilde u} = u \circ \gamma_2^{-1} $ and, since $\gamma_2$ is increasing, we deduce that the critical point ${\tilde u}$ is also increasing. We also note that $\tilde{u}\in C^1([-\tilde L,\tilde L])$. In fact, since $\tilde{u}\in H^1_m((-\tilde L,\tilde L))$ (in particular, $\tilde{u}\in L^\infty((-\tilde L,\tilde L))$) and $G\in C^{0,1}_{\rm loc}(\mathbb{R})$, we have $-(\tilde{a}\tilde{u}')' = f(\tilde{u}) \in L^\infty((-\tilde L,\tilde L))$. Therefore, $\tilde{a}\tilde{u}'\in H^1((-\tilde L,\tilde L))$, and hence $\tilde{u}'\in C^0([-\tilde L,\tilde L])$. Let $\{\tilde{u}^t\}_{0\leq t\leq 1}$ be the continuous odd rearrangement of $\tilde{u}$ and let $h$ be defined as in \eqref{claim1} (with $v^t$ and $a$ replaced by $\tilde{u}^t$ and $\tilde{a}$, respectively). By Lemma~\ref{Lem:Dirichlet} it holds either that $h(t)<h(1)$ for all $t\in(0,1)$ or that $\tilde{u}^t=\tilde{u}$ for all $t\in[0,1]$. Assume that $h(t)<h(1)$ for all $t\in(0,1)$. We claim that, since $\tilde{u}$ is a solution of the associated Euler-Lagrange equation, it holds that $h'(1)=0$. Indeed, noting that $(\tilde{u}^t)'>0$ since $\tilde{u}$ is increasing, we have that $\tilde{\rho}^t$ (the inverse of $\tilde{u}^t$) tends to $\tilde{\rho}=\tilde{\rho}^1$ in $C^1$ as $t$ goes to $1$. It follows that $\tilde{u}^t$ also tends to $\tilde{u}^1=\tilde{u}$ in $C^1$ as $t$ goes to $1$. As a consequence, since $\tilde{u}$ is a solution of the Euler-Lagrange equation and the potential energy is constant in $t$, we deduce that $h'(1)=0$. We now obtain a contradiction with the convexity of $h$ (given by Lemma~\ref{Lem:Convex_energy}) noting that $$ h'(1)\geq \frac{h(1)-h(t)}{1-t} >0 \quad \textrm{for all }t\in(0,1). $$ Therefore $\tilde{u}=\tilde{u}^t$ for all $t\in[0,1]$. In particular, $\tilde{u}$ is an odd function, and so is $u={\tilde u}\circ \gamma_2$.
\end{proof} \begin{Remark} In order to derive the main property~\eqref{eq:PolyaSzego}, our odd rearrangement has been defined on the subset of {\it increasing} functions in $H^1_m (I)$. One may wonder if there exists a more general map $ R: H^1_m (I) \to H^{as}_m (I), $ where $H^{as}_m (I)$ is the subspace of $H^1_m (I)$ formed by antisymmetric functions, satisfying \begin{equation} \label{eq:Volare} R_{|H_m^{as}(I)} = {\rm Id}, \quad\textrm{and}\quad \mathcal{E} (u, I) \geq \mathcal{E} (R(u) , I). \end{equation} However, even under the assumption~\eqref{eq:BerNiren}, it is in general impossible to find such a map defined in the entire functional space $H^1_m (I)$. Indeed, in Section~\ref{section5} (see Proposition~\ref{prop:NonOdd}) we will see that, when $a \equiv b \equiv 1$, for a large interval $I$ and small enough $m$ the minimizers of $\mathcal{E} (\cdot, I)$ in $H^1_m(I)$ cannot be odd (nor nondecreasing). Hence in this case such a map $R$ cannot exist, since \eqref{eq:Volare} would imply that $R(u) $ is an odd minimizer. \end{Remark} \section{Monotonicity of solutions. Proof of Theorem~\ref{thm:Increasing}}\label{sec:Monotonicity} As stated in Theorem~\ref{thm:BerestyckiNirenberg}, Berestycki and Nirenberg established the uniqueness and monotonicity of solutions of \eqref{E-L} under the assumptions that the weight $a\equiv b\in C^1$ and the potential $G\in C^{1,1}_{\rm loc}$ are even functions, that $a$ is log-convex, and that the \textit{a priori} estimate $|u|\leq m$ holds. The goal of this section is to prove Theorem~\ref{thm:Increasing}, providing weaker conditions on the weights $a$ and $b$ (in particular, no convexity assumption on them) to ensure the monotonicity of solutions of the one-dimensional problem~\eqref{E-L}, at the price of assuming some structural conditions on the potential $G$.
In the proof of Theorem~\ref{thm:Increasing} we use the function \begin{equation} \label{eq:Lyapunov} \mathcal{H} (x, q,p) := \frac{1}{2} \big( a(x) p \big)^2 - a(x) b(x) G(q) , \end{equation} defined in $I \times \mathbb R^2$ (\textit{i.e.}, in the extended phase space). Given a solution $u$ to \eqref{E-L}, we easily see that \begin{equation} \label{eq:LaVita} \frac{d}{dx} \mathcal{H} (\cdot, u, u') = - (ab)' G(u) . \end{equation} Indeed, multiplying equation \eqref{E-L} by $au'$, we obtain $$ - \frac{1}{2} \big[(au')^2 \big]' + ab \big[ G(u) \big]' = 0 , $$ which is equivalent to $$ \left\{ - \frac{1}{2}(au')^2 + ab G(u) \right\}' - (a b)' G(u) = 0 , $$ and this last relation is exactly~\eqref{eq:LaVita}. In the special case where $a$ and $b$ are constant, the function $\mathcal{H}$ is the Hamiltonian associated to the ODE in~\eqref{E-L}. \begin{proof}[Proof of Theorem~{\rm \ref{thm:Increasing}}] {\bf (i)} Assume \eqref{eq:Muffin}. Let us show that any solution of \eqref{E-L} is increasing assuming $G\geq G(-m)=G(m)$ in $\mathbb{R}$. By adding a constant to $G$, we may assume that $G\geq G(-m)=G(m)=0$ in $\mathbb{R}$. Given a solution $u$ of~\eqref{E-L}, let us consider in the extended phase space the associated `trajectory' $\varphi(x) := (x, u(x), u'(x))$, $x \in (-L,L)$. Equality~\eqref{eq:LaVita} together with the assumptions \eqref{eq:Muffin} and $G \geq 0$ yields \begin{equation} \label{eq:Volem} \frac{d}{dx} (\mathcal{H} \circ \varphi) (x) \left\{ \begin{array}{rl} \geq 0 &\hbox{ for } x\in(-L,x_0], \vspace{1mm} \\ \leq 0 &\hbox{ for } x\in[x_0,L). \end{array} \right. \end{equation} Hence $\mathcal{H} \circ \varphi$ is nondecreasing in $(-L,x_0)$ and nonincreasing in $(x_0,L)$.
It follows that \begin{eqnarray*} \mathcal{H}(\varphi(x)) &\geq& \min\left\{ \mathcal{H}(\varphi(-L)), \mathcal{H}(\varphi(L))\right\} \\ &=& \frac{1}{2} \min\left\{ (au')^2 (-L), (au')^2 (L) \right\} \\ &>& 0, \end{eqnarray*} where we have used $G(\pm m) = 0$, $a>0$, and $u' (\pm L) \not = 0$ (which follows from the uniqueness to the Cauchy problem and the fact that $G'(\pm m)=0$). We conclude $$ \frac{1}{2}(a u')^2> a b G(u)\geq 0 \quad \hbox{ in } [-L,L]. $$ In particular, $u'(x)> 0$ for all $x\in [-L,L]$. {\bf (ii)} Now we prove that any solution $u$ of \eqref{E-L} is increasing if \eqref{eq:Double:bis} holds. By replacing $G$ by $G-G(M)$ in the equation \eqref{E-L}, we can assume without loss of generality that $G\geq 0$ in $[-M,M]$. Since $$ G'(s) \leq 0 \, \textrm{ for all } s\leq -M \quad\textrm{and}\quad G'(s) \geq 0 \, \textrm{ for all } s \geq M $$ for some constant $M$, it is easy to prove using the maximum principle that any solution of \eqref{E-L} satisfies \begin{equation} \label{eq:SoonOrLater} |u| \leq \max \{ M, m \}. \end{equation} In particular, since $m \geq M$, the {\it a priori} bound~\eqref{apriori}, $|u|\leq m$, holds. Note that \eqref{eq:Volem} also holds now, since $G\geq 0$ in $\mathbb{R}$. Moreover, by assumption \eqref{eq:Double:bis} and the maximum principle any solution $u$ to \eqref{E-L}, with $m \geq M >0$, satisfies $-m\leq u\leq m$ (see \eqref{eq:SoonOrLater}). Assume by contradiction that $u$ admits a local maximum $x_1\in(-L,L)$ and a local minimum $x_2\in(-L,L)$ satisfying $x_1<x_2$ and $\alpha_1:=u(x_1)>u(x_2)=:\alpha_2$. First, we claim that $\alpha_1\in[-m,M]$ and $\alpha_2\in[-M,m]$. Indeed, for instance, if $\alpha_2\in[-m,-M)$ then, by equation \eqref{E-L} and condition \eqref{eq:Double:bis}, we get $a(x_2)u''(x_2)=b(x_2)G'(u(x_2)) \leq 0$. Therefore, using that $x_2$ is a local minimum, we have $u''(x_2)\geq 0$, and hence, $G'(u(x_2))=-f(u(x_2))=0$. 
We obtain a contradiction by uniqueness for the Cauchy problem $$ \left\{ \begin{array}{rcl} -(aw')'&=&bf(w),\\ w(x_2)&=&u(x_2),\\ w'(x_2)&=&0, \end{array} \right. $$ by noting that $u$ and $w\equiv u(x_2)$ are two different solutions. Thus, we have $\alpha_2\in[-M,m]$. Analogously, we obtain $\alpha_1\in[-m,M]$, proving the claim. Therefore, $\alpha_1\in(-M,M]$ and $\alpha_2\in[-M,M)$ (remember that $\alpha_2<\alpha_1$). Finally, choose $\bar{x}_1 < x_1 < x_2 < \bar{x}_2$ such that $u(\bar{x}_1)=-M$ and $u(\bar{x}_2)=M$. Since $\mathcal{H}(\varphi(\bar{x}_i))> 0$ (note that $G'(\pm M)=0$ since $G\geq G(\pm M)=0$ in $\mathbb{R}$, and thus $u'(\bar{x}_i)\neq 0$ by uniqueness) and $\mathcal{H}(\varphi(x_i))\leq 0$ for $i=1,2$, we obtain a contradiction with \eqref{eq:Volem}. This proves that $u$ is increasing. \end{proof} \begin{Remark} \label{rem:MonAme} Let us emphasize that~\eqref{eq:BerNiren} is more restrictive than condition \eqref{eq:Muffin} in Theorem~\ref{thm:Increasing}. Indeed, assume that $a$ and $b$ satisfy~\eqref{eq:BerNiren}. Then there are three possible cases: \begin{enumerate} \item[(i)] $(ab)'>0$ in $(-L,L)$. Then we can take $x_0=-L$ in \eqref{eq:Muffin}. \item[(ii)] $(ab)'<0$ in $(-L,L)$. Then we can take $x_0=L$ in \eqref{eq:Muffin}. \item[(iii)] There exists $x_0\in (-L,L)$ such that $(ab)'(x_0)=0$. Since $$ (ab)' = 2 \sqrt{ab} \, (\sqrt{ab}\, )' = 2 \sqrt{ab}\, b \, \frac{(\sqrt{ab}\, )'}{b} $$ and $(\sqrt{ab}\, )'/b$ is nondecreasing in $(-L,L)$, it follows that $(ab)'\leq 0$ in $(-L,x_0)$ and $(ab)'\geq 0$ in $(x_0,L)$. \end{enumerate} As a consequence, if assumption \eqref{eq:BerNiren} holds then \eqref{eq:Muffin} also holds. \end{Remark} If, in addition, we assume $f=-G'$ to be concave in $(0,m)$, we obtain the following comparison result between the derivatives of an increasing minimizer $u$ and its flipped function $u_\star$. We include it here even though we will not use it in the rest of the paper.
\begin{Proposition} Assume \eqref{eq:A}, $G\in C^{1,1}_{\rm loc}(\mathbb{R})$, and $a \equiv b\in C^2$. Let $u\in H^1_m(I)$ be an increasing minimizer of $\mathcal{E} (\cdot, I)$ and $u_\star$ its flipped function. Assume $u(0)>0$. If $f=-G'\in C^1$ is concave in $(0,m)$, then $u_\star'(x)>u'(x)$ for all $x\in(0,L]$. \end{Proposition} \begin{proof} Let $u$ be an increasing minimizer and let $u_\star(x)=-u(-x)$, $x\in[-L,L]$. Since $u(0)>0$, by minimality we see that $u_\star<u$ in $(-L,L)$ (see Lemma~\ref{lem:OderedMin}). Let $L_u$ be the linear operator defined by $L_u\varphi:=\varphi''+(\log a)'\varphi'+(f'(u)+(\log a)'')\varphi$ and note that $L_u u'=L_{u_\star}u_\star'=0$ in $(-L,L)$. This can be easily obtained by differentiating \eqref{E-L} and using that $u$ and $u_\star$ are solutions of this equation. In particular, using the assumption that $f$ is concave in $(0,m)$ and odd, and noting that $|u_\star|<u$ in $(0,L)$ since $u_\star <u$ and $u$ is increasing, we obtain $$ L_u(u_\star'-u')=L_u u_\star'=(f'(u)-f'(|u_\star|))u_\star' \leq 0\quad \textrm{in }(0,L). $$ Moreover, noting that $L_u u'=0$ and $u'>0$ in $[0,L)$, we obtain that the first Dirichlet eigenvalue satisfies $\lambda_1(L_u,(0,L))>0$ (see Corollary~2.4 and Theorem~1.1 in \cite{BNV}). Since $u_\star\leq u$ with equality at $x=L$, we deduce $(u_\star'-u')(L)\geq 0$. Hence, since $\lambda_1(L_u,(0,L))>0$, by \cite{BNV} we can apply the maximum principle (and then the strong maximum principle) to $$ \left\{ \begin{array}{l} L_u(u_\star'-u')\leq 0\quad\textrm{in }(0,L),\\ (u_\star'-u')(0)=0,\quad (u_\star'-u')(L)\geq 0, \end{array} \right. $$ to obtain $u_\star'>u'$ in $(0,L)$. Finally, the fact that $u_\star'(L)>u'(L)$ easily follows by contradiction using the uniqueness for the Cauchy problem $$ \left\{ \begin{array}{l} -(aw')'=bf(w), \\ w(L)=u(L),\ w'(L)=u'(L). \end{array} \right.
$$ \end{proof} \section{Uniqueness in dimension one}\label{section4} In this section we give sufficient conditions on the weights $a$ and $b$, and on the potential $G$, to guarantee uniqueness of the solution to~\eqref{E-L}. We start by proving the following result. When $a\equiv b$ this is exactly Theorem~\ref{thm:Brahms2}. \begin{Proposition} \label{prop:Salva} Assume that~\eqref{eq:A} holds and $G\in C^{1,1}_{\rm loc}(\mathbb{R})$. Let $L$ and $m$ be positive numbers. Suppose further that $$ (ab)' \geq 0 \quad \textrm{in } (0,L), \quad G \geq G(m) \quad \textrm{in } (0,\infty), \quad\textrm{and}\quad G' \leq 0 \quad \textrm{in } (0,m). $$ Then, problem \eqref{E-L} admits a unique solution, which is therefore odd. Furthermore, this solution is increasing. \end{Proposition} \begin{proof} The existence of a minimizer, and thus of a solution, is standard. Indeed, since $a>0$ and $G\geq 0$ in $\mathbb{R}$, for a minimizing sequence we will have $\int_I |u_{k}'|^2 \leq C$ for some constant $C$ independent of $k$. Now, let $z_k \in I$ be a zero of $u_{k}$. We have $ |u_{k} (x) | \leq \left|\int_{z_k}^x |u'_{k}| \right| \leq (2 L)^{1/2} \left( \int_I |u_{k}'|^2 \right)^{1/2} \leq C $ for any $x\in I$. It follows that $u_{k}$ converges (up to a subsequence) weakly in $H^1 (I)$ and strongly in $C^0 (\overline{I})$ to some $u\in H^1_m(I)$, which will be a minimizer (and hence a solution). Next, let us show uniqueness of the solution. Let $u$ be a solution of \eqref{E-L} and $u_\star(x)=-u(-x)$ its flipped function. By Theorem~\ref{thm:Increasing}~(i), used with $x_0=0$, both $u$ and $u_\star$ are increasing solutions of \eqref{E-L} and, replacing $u$ with $u_\star$ if necessary, we may assume that $u(0)\geq u_\star(0)$. First, we claim that $u(0) = 0$. Indeed, assume by contradiction that $u(0) > 0$ and set $$ L_0:=\min\left\{ x\in (0,L]: u(x)=u_\star(x)\right\}.
$$ Note that $L_0 >0$ and that $u$ and $u_\star$ solve $$ \left\{ \begin{array}{l} -(au')'=b f(u)\qquad \textrm{in }(-L_0,L_0),\\ u(L_0)=-u( - L_0). \end{array} \right. $$ Moreover, since $u_\star<u$ in $(0,L_0)$ and both $u$ and $u_\star$ are increasing, we have $u'(L_0)^2<u_\star'(L_0)^2$ (by uniqueness for the Cauchy problem, or by Hopf's lemma). Integrating~\eqref{eq:LaVita} in $(-L_0,L_0)$ we obtain $$ \frac{a(L_0)^2}{2}\left(u'(L_0)^2-u_\star'(L_0)^2\right) +\int_{0}^{L_0}(a b )'\Big(G(u )-G(u_\star)\Big)\ dx=0. $$ Finally, using that $G$ is nonincreasing in $(0,m)$ we deduce that the integrand of the previous integral is nonpositive, obtaining a contradiction. Hence $u(0)=0$, proving the claim. Now assume that problem~\eqref{E-L} admits two solutions $u_1$ and $u_2$ with $u_2 - u_1 \not \equiv 0$. We know by the previous argument that $u_1(0) = u_2 (0) = 0$. Let $L_1 \in (0,L]$ be the first positive zero of the function $u_2 - u_1$. We can assume, without loss of generality, that $$ (u_2 - u_1) (0) = (u_2 - u_1) (L_1) = 0 \quad \hbox{ and }\quad u_2 -u_1 > 0 \hbox{ in } (0,L_1). $$ The Hopf Lemma leads to \begin{equation} \label{eq:Tarragona1} u_2'(0) > u_1'(0) > 0 \quad \hbox{ and } \quad 0 < u_2'(L_1) < u_1'(L_1), \end{equation} since we have proved that every solution is increasing. Subtracting identity \eqref{eq:LaVita} for $u_1$ and $u_2$ and integrating in $(0, L_1)$ we get \begin{eqnarray} & &\hspace{-2cm} \frac{a(L_1)^2}{2} \big(u_2'(L_1)^2 - u_1'(L_1)^2 \big) - \frac{a(0)^2}{2} \big( u_2'(0)^2 - u_1'(0)^2 \big) \nonumber \\ &+& \int_{0}^{L_1}(a b )'\Big\{ G(u_2 )-G(u_1) \Big\} \, dx=0 . \label{eq:Tarragona2} \end{eqnarray} Using~\eqref{eq:Tarragona1} and the fact that $G$ is nonincreasing in $(0,m)$, we reach a contradiction as before. Therefore, $u_1 \equiv u_2$. In particular, since $u$ and $u_\star(x)=-u(-x)$ are solutions of \eqref{E-L} we obtain that $u=u_\star$, \textit{i.e.}, $u$ is odd. 
\end{proof} The following result was established by Berestycki and Nirenberg in~\cite{BeresNiren}. We give here an alternative proof (which, however, also uses their sliding method). Note that here $a$, $b$, and $G$ need not be even. \begin{Proposition}[\cite{BeresNiren}] \label{prop:BerNir} Assume $m>0$, $a,b\in C^1([-L,L])$ such that $a,b>0$, and $G\in C^{1,1}_{\rm loc}(\mathbb{R})$. If $(\sqrt{ab}\,)'/b$ is nondecreasing in $(-L,L)$, then problem \eqref{E-L} admits at most one increasing solution. \end{Proposition} \begin{proof} Let $u$ be a solution of \eqref{E-L} and $\tilde{u}=u\circ\gamma_1^{-1}$, where $\gamma_1$ is the diffeomorphism defined in \eqref{eq:ChangeVariable}. Under this change of variables, the monotonicity of solutions is preserved and the condition~\eqref{eq:BerNiren}, for the new weights $\tilde{a}=\tilde{b}=\sqrt{ab}\circ\gamma_1^{-1}$, turns out to be equivalent to the log-convexity of $\tilde{a}$, \textit{i.e.}, $\tilde{a}'/\tilde{a}$ is nondecreasing in $(-\tilde L,\tilde L)$, where $\tilde L=\gamma_1(L)$. Thus, relabeling $\tilde L$ as $L$, without loss of generality we may prove our statement for weights $a \equiv b \in C^1 ([-L,L])$ satisfying that $$ \frac{a'}{a} \textrm{ is nondecreasing in }(-L,L). $$ Let $u$ and $v$ be two increasing solutions to problem~\eqref{E-L}, and assume by contradiction that \begin{equation} \label{eq:clock} u>v \quad \hbox{ in } (L-\varepsilon, L) \end{equation} for some $\varepsilon >0$. Consider the family of functions $(u_{\tau})_{\tau \in [0,2L)}$ defined as $$ u_{\tau} : I_{\tau} \to \mathbb R, \qquad x \mapsto u(x - \tau) $$ on the interval $I_{\tau} := (-L + \tau, L)$.
Using the assumption that $a'/a$ is nondecreasing and $u' \geq 0$, we immediately see that $$ \frac{a'}{a} (x) u'_{\tau} (x) \geq \frac{a'}{a} (x- \tau) u'_{\tau} (x) \quad \textrm{for all }x \in I_{\tau}, $$ and therefore $$ - u_{\tau}''(x) - \frac{a'}{a} (x) u'_{\tau} (x) + G'(u_{\tau}) \, \leq \, 0 \quad \hbox{ in } I_{\tau}, $$ \textit{i.e.}, $u_{\tau}$ is a subsolution in $I_\tau$ of the ODE in \eqref{E-L}. Define $$ T:= \left\{ \tau \in [0,2L) \, \colon \, v - u_{\tau} >0 \textrm{ in } I_\tau \right\}, \quad \tau_0 := \inf T , $$ and note that: \begin{enumerate} \item[(i)] $T \not = \emptyset$. Indeed, since $u(-L) = -m$ and $v(L) =m$, we deduce that values $\tau$ close to $2L$ belong to the set $T$. Thus, $\tau_0$ is well defined, and by~\eqref{eq:clock} we have $\tau_0 > 0$. \item[(ii)] $v - u_{\tau_0} \geq 0$ in $I_{\tau_0}$ and $(v - u_{\tau_0}) (x_{\tau_0}) = 0$ for some $x_{\tau_0} \in \overline{I}_{\tau_0}$. However, since $u$ is increasing and $\tau_0 > 0$, on the boundary of $I_{\tau_0}$ we have $$ u_{\tau_0} (L) = u(L - \tau_0) < m = v(L) $$ $$ u_{\tau_0} (-L + \tau_0) = u(-L) = - m < v(-L + \tau_0) \,. $$ Therefore $x_{\tau_0} \not \in \partial I_{\tau_0}$. Finally, applying the strong maximum principle on the interval $I_{\tau_0}$ (recall that $v$ is a solution and $u_{\tau_0}$ a subsolution of the nonlinear problem), we derive a contradiction. \end{enumerate} \end{proof} \begin{Remark} \label{cor:Newark} Let $a,b \in C^1([-L,L])$ be positive functions satisfying~\eqref{eq:BerNiren}, and $G\in C^{1,1}_{\rm loc}(\mathbb{R})$ be such that~\eqref{eq:Double:bis} holds for some $M\in(0,m]$. Then Theorem~\ref{thm:Increasing}~(ii) and Proposition~\ref{prop:BerNir} show that the functional $\mathcal{E} (\cdot, (-L,L))$ admits a unique critical point in $H^1_m((-L,L))$ for any $m \geq M>0$, which is increasing (a result already stated in~\cite{BeresNiren}). \end{Remark} In the following corollary, $a$, $b$, and $G$ need not be even. 
\begin{Corollary} \label{thm:Uniqueness} Let $m>0$, $a,b \in C^1 ([-L,L])$ with $a,b>0$, and $G\in C^{1,1}_{\rm loc}(\mathbb{R})$. If $$ G(s) \geq G(-m)=G(m) \quad \textrm{for all } s \in \mathbb R $$ and $\big( \sqrt{a b } \, \big)'/b$ is nondecreasing in $(-L,L)$, then the functional $\mathcal{E} (\cdot, (-L,L))$ admits a unique critical point in $H^1_m((-L,L))$. \end{Corollary} \begin{proof} The existence part is easily established, as in the beginning of the proof of Proposition~\ref{prop:Salva}. Next, by Remark~\ref{rem:MonAme}, there exists $x_0\in[-L,L]$ such that \eqref{eq:Muffin} holds. Therefore, by Theorem~\ref{thm:Increasing}~(i) any solution of \eqref{E-L} is increasing. We conclude by applying the uniqueness result for increasing solutions established in Proposition~\ref{prop:BerNir}. \end{proof} \section{Non-increasing and non-odd minimizers}\label{section5} In this section we give conditions on the weights $a$ and $b$ for which the minimizers of $\mathcal{E} (\cdot,(-L,L))$ in $H^1_m((-L,L))$ are either not increasing or non-odd. Throughout this section we shall assume \begin{equation} \label{eq:LicensePlate} \left. \begin{array}{c} a,b \in C^0(\mathbb R), \quad a, b \hbox{ even, } \quad a , b >0, \vspace{3mm} \\ G \in C^0(\mathbb R), \quad G \hbox{ even, } \vspace{3mm} \\ G(s) \geq G(M) = 0 \hbox{ in } \mathbb R , \quad G(s) > G(M) = 0 \hbox{ in } [0,M) \end{array} \right\} \end{equation} for some $M >0$. To estimate the energy value of a minimizer of $\mathcal{E}$, we will need the following preliminary results. \begin{Lemma}\label{Lemma3:5} Let $a\in C^0([\alpha, \beta])$ be a positive function and $m_1,m_2 \in \mathbb{R}$. Then $$ \min\left\{\int_\alpha^\beta a v'^2 : v\in C^1([\alpha, \beta]), v(\alpha)=m_1, v(\beta)=m_2\right\} =\frac{(m_2-m_1)^2}{{\int_\alpha^\beta 1/a}}, $$ and the minimum is achieved by \begin{equation} \label{eq:Hunan} u(x)=(m_2-m_1) \frac{\int_\alpha^x 1/a }{\int_\alpha^\beta 1/a }+m_1 .
\end{equation} \end{Lemma} \begin{proof} By the Cauchy-Schwarz inequality $$ \vert m_2-m_1 \vert =\left\vert\int_\alpha^\beta v'\right\vert \leq \int_\alpha^\beta \sqrt{a} \vert v'\vert \frac{1}{\sqrt{a}} \leq \left(\int_\alpha^\beta a v'^2\right)^{1/2}\left(\int_\alpha^\beta \frac{1}{a}\right)^{1/2}. $$ On the other hand, the minimization problem admits a unique solution $u$ which solves $$ \left\{ \begin{array}{l} (a u')' = 0 \textrm{ in }(\alpha,\beta) \\ u(\alpha) = m_1 ,\ u(\beta) = m_2 . \end{array} \right. $$ We readily deduce that the solution of this Dirichlet problem is given by~\eqref{eq:Hunan}, and a straightforward computation gives $$ \int_\alpha^\beta a u'^2 = \frac{(m_2-m_1)^2}{\int_\alpha^\beta 1/a} . $$ \end{proof} \begin{Proposition} \label{prop:XiMen} Assume that \eqref{eq:LicensePlate} holds, that $m\geq 0$, and let $ t \in [0,L)$. Then, \begin{equation} \label{eq:XiMen} \inf_{v \in H^1_m ((-L,L))} \mathcal{E} (v, (-L,L)) \, \leq \, \frac{M^2 + m^2 }{\int_{t}^L 1/a} + 2 \, G_1 \int_{t}^L b \end{equation} where $G_1 := \sup_{s \in (-m, \overline{M})} G(s) $ and $\overline{M} = \max\{m,M\}$. \end{Proposition} \begin{proof} Let $u_1$, respectively $u_2$, be the solution to the minimizing problem $$ \min\left\{\int_{-L}^{-t} a u'^2 : u\in C^1([-L, - t]), u(-L )= - m, \, u(-t)= M\right\}, $$ respectively, $$ \min\left\{\int_t^L a u'^2 : u\in C^1([t, L]), u(t)= M, \, u(L)=m\right\}. $$ Consider the test function $v \in H^1_m ((-L,L))$ defined by $$ v (x)=\left\{ \begin{array}{ll} u_1 &\mbox{ if } -L < x < - t , \vspace{1mm} \\ M &\mbox{ if } - t \leq x \leq t, \vspace{1mm} \\ u_2 &\mbox{ if } t < x < L. \end{array} \right.
$$ Since $G$ is even, $G(\pm M) = 0$, and the weights $a$ and $b$ are also even, Lemma~\ref{Lemma3:5} gives \begin{eqnarray*} \mathcal{E} (v, (-L,L)) &=& \int_{-L}^{-t}\left\{ \frac{1}{2} a v'^2+ b G(v) \right\}+\int_t^L\left\{ \frac{1}{2} a v'^2+ b G(v) \right\} \\ &=& \frac {(M +m)^2}{2 \int_{t}^L 1/a}+ \int_{-L}^{- t} b G(u_1) + \frac {(m - M)^2}{2 \int_{t}^L 1/a} + \int_{t}^{L} b G(u_2) \\ &\leq& \frac {M^2 + m^2}{\int_{t}^L 1/a} + 2 \sup_{s \in (-m, \overline{M})} G(s) \int_{t}^L b , \end{eqnarray*} where we have used that both $u_1$ and $u_2$ are monotone functions, as follows from \eqref{eq:Hunan}. \end{proof} \subsection{Boundary perturbation of non-odd minimizers} Recall the notation $I=(-L,L)$. We first show that the property for a minimizer of $\mathcal{E} (\cdot, I)$ in $H^1_m(I)$ to be non-odd is preserved under small perturbation of boundary data. \begin{Proposition} \label{prop:CiPortera} Assume that \eqref{eq:LicensePlate} holds. Let $( u_{m_k})_{k=1}^{\infty}$ be a sequence of minimizers of $\mathcal{E} (\cdot, I)$ in $H^1_{m_k}(I)$ with $0\leq m_k \rightarrow m$. Then, up to a subsequence, we have $$ u_{m_k} \to u_m \hbox{ in } H^1(I) $$ and $u_m$ is a minimizer of $\mathcal{E} (\cdot,I)$ in $H^1_m(I)$. In particular $u_{m_k}\to u_m$ in $C^0(\overline{I})$. \end{Proposition} \begin{proof} Since $a>0$ and $G\geq 0$ in $\mathbb{R}$, the upper bound \eqref{eq:XiMen} used with $t=0$ gives $$ \int_I |u_{m_k}'|^2 \leq C $$ for some constant $C$ independent of $m_k$. Moreover, for each $u_{m_k}\in H^1_{m_k}(I)$, let $z_k \in \overline{I}$ be a zero of $u_{m_k}$. The fundamental theorem of calculus yields $$ |u_{m_k} (x) | \leq \left|\int_{z_k}^x |u'_{m_k}| \right| \leq (2 L)^{1/2} \left( \int_I |u_{m_k}'|^2 \right)^{1/2} \leq C $$ for any $x\in I$. It follows that $u_{m_k}$ converges (up to a subsequence) weakly in $H^1 (I)$ and strongly in $C^0 (\overline{I})$ to some $u_m\in H^1_m(I)$. 
Let us now prove that $u_m$ is a minimizer of $\mathcal{E} (\cdot,I)$ in $H^1_m(I)$. Indeed, take an arbitrary function $u\in H^1_m(I)$ and consider the sequence $v_k := u + (m_k - m) \frac{x}{L}$ in $H^1_{m_k} (I)$. Note that $v_k\rightarrow u$ in $H^1(I)$ and $\mathcal{E} (u_{m_k},I)\leq \mathcal{E} (v_k,I)$. Using that $\mathcal{E} (\cdot,I)$ is weakly lower semicontinuous, we conclude that \begin{equation}\label{*****} \mathcal{E} (u_m,I) \leq \limsup \mathcal{E} (u_{m_k},I) \leq \limsup \mathcal{E} (v_k,I) = \mathcal{E} (u,I), \end{equation} proving that $u_m$ is a minimizer. Finally, using \eqref{*****} with $u=u_m$ we deduce that $\limsup \mathcal{E} (u_{m_k},I) =\mathcal{E} (u_m,I)$. Therefore, since $u_{m_k}$ converges weakly to $u_m$, we have that in fact $u_{m_k}\rightarrow u_m$ in $H^1(I)$. \end{proof} We can now show that, under boundary perturbation, the property of minimizers being non-odd is preserved. \begin{Proposition} \label{prop:LaPazzia} Let \eqref{eq:LicensePlate} be satisfied, $G\in C^{1,1}_{\rm loc}(\mathbb{R})$, and assume that for some $m_0 \geq 0$, all minimizers of $\mathcal{E} (\cdot, I)$ in $H^1_{m_0}(I)$ are non-odd. Then, there exists $\varepsilon >0$ such that the functional $\mathcal{E}(\cdot, I)$ admits non-odd minimizers in $H^1_m(I)$ for each $m \in (m_0 - \varepsilon, m_0 + \varepsilon) \cap [0, \infty)$. \end{Proposition} \begin{proof} Consider a sequence of minimizers $u_{m_k}\in H^1_{m_k}(I)$ of $\mathcal{E} (\cdot,I)$ with $m_k \to m_0$ and $m_k \geq 0$. By Proposition~\ref{prop:CiPortera}, up to a subsequence, the sequence $u_{m_k}$ converges strongly in $C^0(\overline{I})$ to a minimizer $u_{m_0}\in H^1_{m_0}(I)$ of $\mathcal{E} (\cdot,I)$. Since $u_{m_0} (0) \not = 0$ ($u_{m_0}$ is not odd; recall Proposition~\ref{prop:Schubert}~(ii)), we deduce that $u_{m_k} (0) \neq 0$ for all $m_k$ close enough to $m_0$. Thus $u_{m_k}$ is not odd.
\end{proof} \subsection{Non-odd minimizer} Note that the results of the previous subsection apply with $m=0$. This allows us to give sufficient conditions on $a$ and $b$ guaranteeing that minimizers for small odd boundary data are non-odd. \begin{Proposition} \label{prop:NonOdd} Assume \eqref{eq:LicensePlate}, $G\in C^{1,1}_{\rm loc}(\mathbb{R})$, and $G(s)\leq G(0)$ for all $s\in(0,M)$. If \begin{equation} \label{eq:KindMuckenhoupt} \sup_{t \in (0, L)} \left( \int_{t}^{L} \frac{1}{a} \right) \left( \int_{0}^t b \right) > \frac{M^2}{2 G(0)}, \end{equation} then the following assertions hold: \begin{enumerate} \item[{\rm (i)}] If $m=0$ then the minimizers of $\mathcal{E} (\cdot, I)$ in $H^1_0(I)$ are not identically zero. \item[{\rm (ii)}] There exists $\varepsilon >0$ such that, for each $m\in[0,\varepsilon)$, the functional $\mathcal{E} (\cdot, I)$ admits minimizers in $H^1_m(I)$ which are non-odd and not increasing. \end{enumerate} \end{Proposition} Note that (ii) applies to the unweighted case $a\equiv b\equiv 1$ whenever $I=(-L,L)$ is large enough and $G$ satisfies \eqref{eq:LicensePlate} for some $M$. \begin{proof}[Proof of Proposition~{\rm \ref{prop:NonOdd}}] {\bf (i)} Assume $m=0$. Let us show that under condition~\eqref{eq:KindMuckenhoupt} we have \begin{equation} \label{eq:UnaTonteriaMas} \inf_{v \in H^1_0 (I)} \mathcal{E} (v, I) < \mathcal{E} (0, I) . \end{equation} Indeed, by applying Proposition~\ref{prop:XiMen} with $m=0$ (in this case $G_1=\sup_{s\in(0,M)}G(s)\leq G(0)$ by assumption), we see that inequality~\eqref{eq:UnaTonteriaMas} holds if \begin{equation} \label{eq:DosTonteriaMas} \frac{M^2}{\int_{t}^L 1/a} + 2 \, G (0) \int_{t}^L b \, < \, 2 G(0) \int_0^L b=\mathcal{E}(0,I) \end{equation} for some $t \in (0,L)$. We obtain the conclusion by noting that inequality~\eqref{eq:DosTonteriaMas} is equivalent to $$ \left( \int_{t}^{L} \frac{1}{a} \right) \left( \int_{0}^t b \right) > \frac{M^2}{2 G(0)} . $$ {\bf (ii)} Let $u_0\in H^1_0(I)$ be a minimizer of $\mathcal{E} (\cdot, I)$.
Since $u_0 \not \equiv 0$ by part (i), we can assume $u_0> 0$ (see Proposition~\ref{prop:Schubert}~(i)), and in that case $u_0'(L) < 0$. In particular, $u_0$ is non-odd and not nondecreasing. We conclude by applying Propositions~\ref{prop:CiPortera} and \ref{prop:LaPazzia} (see Figure~\ref{fig2}). \end{proof} \begin{Example} Condition \eqref{eq:KindMuckenhoupt} holds true for $L$ large enough for any positive and even function $a: \mathbb R \to \mathbb R$ satisfying, for instance, $$ \int_0^{+\infty} \frac{1}{a} = +\infty, $$ independently of the weight $b$. In particular, it holds for $a \equiv 1$. In this case, by Proposition~\ref{prop:NonOdd}, there exist $L_0>0$ and $\varepsilon > 0$ such that the minimizers of $\mathcal{E}$ in $H^1_m((-L,L))$ are non-odd and not increasing whenever $ L > L_0$ and $m \in [0, \varepsilon)$. \end{Example} The above result gives a class of weights for which minimizers are not odd on large intervals with small boundary values. To obtain similar results for ``large'' boundary data (such as $u(\pm L)=\pm M$; recall that we are assuming \eqref{eq:LicensePlate} and that $M=1$ in the Allen-Cahn nonlinearity), we will look for conditions on the weights $a$ and $b$ to ensure \begin{equation} \label{eq:Hubei} \inf_{u \in H^{as}_m (I)} \mathcal{E} (u, I) \, > \, \inf_{u \in H^{1}_m (I)} \mathcal{E} (u, I) . \end{equation} Recall that $H^{as}_m(I)$ is formed by those functions in $H^1_m(I)$ which are odd. Note that Proposition~\ref{prop:XiMen} gives an upper bound for the right-hand side of~\eqref{eq:Hubei}. The following proposition now gives a lower bound on $\min_{u \in H^{as}_m (I)} \mathcal{E} (u, I)$. \begin{Proposition}\label{acotadas} Assume that \eqref{eq:LicensePlate} holds and $m>0$.
Then, there exists a positive constant $C^{as}$ depending only on $a$, $b$, $G$, $M$, and $m$ $($but independent of $L$$)$ such that \begin{equation} \label{eq:BellaCiao} \mathcal{E} (u, I) \geq C^{as} >0 \quad \textrm{for all } u \in H^{as}_m (I), \text{ where } I=(-L,L) . \end{equation} Moreover, the constant $C^{as}$ can be chosen as \begin{equation}\label{Cas} C^{as} := \inf_{t >0} \Big\{ \frac{m_0^2}{\int_0^t 1/a} + 2 G_0 \int_0^t b \Big\} , \end{equation} where \begin{equation}\label{m0_and_G0} m_0 := \frac{1}{2}\min\{m,M\} \quad \hbox{ and } \quad G_0 := \inf_{s \in (0, m_0)} G(s) . \end{equation} \end{Proposition} \begin{proof} Let $u \in H^{as}_m (I)$ be arbitrary; it suffices to bound $\mathcal{E}(u,I)$ from below by $C^{as}$. Since $u(0) =0$ and $u(L)=m>m_0$, we can choose $\beta \in (0,L)$ such that $$ |u(\beta)|= m_0 \quad\textrm{and}\quad |u(x)| < m_0 \quad \textrm{for all } x \in (0, \beta) \,. $$ Recalling that $G_0 = \inf_{s \in (0, m_0)} G(s)$, we note that $G(u(x))\geq G_0>0$ for all $x\in[0,\beta]$. This inequality, together with the antisymmetry of $u$, the evenness of $a$, $b$, and $G$, and Lemma~\ref{Lemma3:5} (with boundary conditions $u(0) =0$, $u(\beta) = m_0$ or $u(\beta) = -m_0$), yields \begin{eqnarray} \mathcal{E} (u, I) &\geq& \int_{-\beta}^\beta \Big\{ \frac{1}{2} a u'^2+ b G(u) \Big\} = \int_0^\beta \Big\{a u'^2+ 2 b G(u) \Big\} \nonumber \\ &\geq& \frac {m_0^2}{\int_0^\beta 1/a} + 2 G_0 \int_0^\beta b. \label{key} \end{eqnarray} Define this last expression as a function of $t>0$, namely $$ \Psi(t):= \frac {m_0^2}{\int_0^t 1/a} + 2 G_0 \int_0^t b. $$ Clearly $\lim_{t\to 0}\Psi(t)=+\infty$ and, since $G_0>0$ by the last assumption in \eqref{eq:LicensePlate} and the fact that $m_0<M$, $\lim_{t \to +\infty}\Psi(t)\in(0,+\infty]$. Since $\Psi$ is positive and continuous in $(0,\infty)$, it holds that $C^{as}=\inf_{t>0}\Psi(t)>0$ and $C^{as}$ depends only on $a$, $b$, $m_0$, and $G_0$. This combined with \eqref{key} proves the result.
\end{proof} Propositions~\ref{prop:XiMen} and \ref{acotadas} yield immediately the following result. \begin{Corollary} Assume that \eqref{eq:LicensePlate} holds, $m>0$, and that $$ \inf_{t \in (0,L)} \left\{ \frac{M^2 + m^2 }{\int_{t}^L 1/a} + 2 \, G_1 \int_{t}^L b \right\} \, < \, C^{as}, $$ where $G_1 := \sup_{s \in (-m, \overline{M})} G(s)$, $\overline{M} = \max\{m,M\}$, and $C^{as}$ is defined by \eqref{Cas}-\eqref{m0_and_G0}. Then $$ \inf_{u \in H^{as}_m (I)} \mathcal{E} (u, I) > \inf_{u \in H^{1}_m (I)} \mathcal{E} (u, I). $$ In particular, the minimizers of $\mathcal{E} (\cdot, I)$ in $H^1_m(I)$ are not odd. \end{Corollary} Next, by setting $$ \Phi_m (L) := \inf_{u \in H^{1}_m (I)} \mathcal{E} (u, I),\quad L>0\,, $$ we characterize the weights $a$ and $b$ for which $\liminf_{L\rightarrow+\infty}\Phi_m (L)=0$. This, jointly with Proposition~\ref{acotadas}, will provide a first class of weights that guarantee~\eqref{eq:Hubei}. That is, a class of weights for which the minimizers of $\mathcal{E}(\cdot,I)$ in $H^1_m(I)$ are not odd. \begin{Proposition}\label{infimo=0} Assume that \eqref{eq:LicensePlate} holds and $m>0$. The following assertions are equivalent: \begin{enumerate} \item[{\rm (i)}] $\liminf_{L\rightarrow+\infty}\Phi_m (L)=0$; \item[{\rm (ii)}] There exists a sequence of bounded intervals $J_n=[\alpha_n,\beta_n] \subset \mathbb{R}$ $(\alpha_n < \beta_n)$ satisfying \begin{equation} \label{eq:Salva} \int_{J_n}\frac{1}{a}\rightarrow +\infty \quad \hbox{ and } \quad \int_{J_n} b\rightarrow 0. \end{equation} \end{enumerate} \end{Proposition} \begin{proof} (i) $\Rightarrow$ (ii) Set $I_n := (-L_n, L_n)$ with $L_n\to\infty$. Let $u_n\in H^1_m(I_n)$ be a minimizer of $\mathcal{E} (\cdot, I_n)$ and assume that $\Phi_m (L_n)= \mathcal{E} (u_n, I_n)\to 0$. Let $G_0$ and $m_0$ be given in \eqref{m0_and_G0}. 
Consider an interval $J_n :=[\alpha_n, \beta_n] \subset I_n$ such that $$ u_n ( \alpha_n)=0,\quad u_n ( \beta_n)=m_0, \quad \hbox{ and } \quad |u_n(x)| \leq m_0 \quad \textrm{for all }x \in (\alpha_n, \beta_n). $$ Since $G(u_n(x)) \geq G_0 >0$ for all $x \in (\alpha_n, \beta_n)$ by \eqref{eq:LicensePlate}, applying Lemma~\ref{Lemma3:5} on the interval $ (\alpha_n, \beta_n)$, we conclude $$ \mathcal{E} (u_n, I_n) \geq \int_{J_n} \left\{ \frac{1}{2} a u_n'^2+ b G(u_n)\right\} \geq \frac {m_0^2}{2\int_{J_n} 1/a}+ G_0 \int_{J_n} b. $$ This last inequality proves the assertion. (ii) $\Rightarrow$ (i) We claim that we can assume, without loss of generality, that $J_n=[\alpha_{n}, \beta_{n}] \subset [0,\infty)$ instead of $J_n\subset\mathbb{R}$. Indeed, note that we can suppose that $\beta_n>0$, replacing $[\alpha_n,\beta_n]$ with $[-\beta_n,-\alpha_n]$ if necessary (recall that $a$ and $b$ are even). Moreover, if $\alpha_n\leq0$ then $$ 0\leq \int_{\alpha_n}^0 b \leq \int_{J_n} b \rightarrow 0 \quad \textrm{and}\quad 0\leq \int_0^{\beta_n} b \leq \int_{J_n} b \rightarrow 0 $$ by \eqref{eq:Salva}. This proves that $\alpha_n$ and $\beta_n$ tend to zero as $n$ goes to infinity, contradicting \eqref{eq:Salva}: \begin{equation}\label{new:a} \int_{J_n}\frac{1}{a}=\int_{\alpha_n}^{\beta_n}\frac{1}{a}\rightarrow +\infty. \end{equation} Therefore, we can assume, up to a subsequence, that $\alpha_n\geq 0$, proving the claim. Let $J_n=[\alpha_{n}, \beta_{n}] \subset [0,\infty)$ be a sequence of bounded intervals satisfying~\eqref{eq:Salva}. Inequality~\eqref{eq:XiMen} applied with $t= \alpha_n$ and $L=\beta_n$ gives $$ 0 \leq \Phi_m (\beta_n) \, \leq \, \frac {M^2+m^2}{\int_{J_n} 1/a}+ 2G_1\int_{J_n} b, $$ where $G_1=\sup_{s\in(-m,\overline{M})} G(s)$ and $\overline{M} = \max\{m,M\}$. We conclude the proof by noting that the right-hand side of the previous inequality tends to zero by \eqref{eq:Salva} and that $\beta_n\rightarrow+\infty$ by \eqref{new:a}.
\end{proof} Now, as stated in Proposition \ref{cor:characteriation}, we are able to exhibit a class of weights $a$, $b$, for which the minimizers of $\mathcal{E} (\cdot, I)$ in $H^1_m(I)$ are not odd when the domain is large. \begin{proof}[Proof of Proposition {\rm\ref{cor:characteriation}}] By the hypothesis of the proposition, \eqref{eq:LicensePlate} is satisfied taking $M:=m$, after replacing $G$ by $G-G(M)$. Now, on the one hand, by Proposition \ref{acotadas} there exists a constant $C^{as}>0$ (independent of the interval $I$) such that $\mathcal{E} (u, I)\geq C^{as}$ for all $u \in H_m^{as}(I)$. On the other hand, note that $\Phi_m$ is a nonincreasing function, \textit{i.e.}, $\Phi_m(L_2)\leq \Phi_m(L_1)$ for all $L_1<L_2$. This follows by noting that given $u\in H^1_m((-L_1,L_1))$ we can extend it to $\tilde{u}\in H^1_m((-L_2,L_2))$: $$ \tilde{u}:= \left\{ \begin{array}{ccc} -m&\textrm{in}&(-L_2,-L_1),\\ u&\textrm{in}&(-L_1,L_1),\\ m&\textrm{in}&(L_1,L_2), \end{array} \right. $$ and $\mathcal{E}(u,(-L_1,L_1))=\mathcal{E}(\tilde{u},(-L_2,L_2))$. Therefore, by Proposition~\ref{infimo=0} we can take $L_0>0$ such that $\Phi_m(L)<C^{as}$ for all $L\geq L_0$. As a consequence, minimizers of $\mathcal{E} (\cdot,I)$ in $H^1_m(I)$ cannot be odd for $L\geq L_0$. \end{proof} \begin{Remark} Note that if $ab$ is increasing and positive in $[0,\infty)$, then condition~\eqref{eq:Salva} cannot hold and therefore we cannot use Proposition~\ref{cor:characteriation} to obtain non-oddness of minimizers on large intervals. Indeed, the previous assertion on condition ~\eqref{eq:Salva} follows from $$ \int_{J_n} b \, = \, \int_{J_n} \frac{ab}{a} \, \geq \, (ab) (0) \int_{J_n} \frac{1}{a}. $$ This is consistent with the result of Theorem~\ref{thm:Brahms2} where we proved that problem \eqref{E-L} admits a unique solution, which is therefore odd, under the assumption that $a\equiv b$ is nondecreasing in $(0,L)$. 
\end{Remark} Finally, we point out that we could give a precise quantitative result for the minimal value of $L$ (in terms of lower and upper bounds on $a$ and $b$, and of the nonlinearity $G$) guaranteeing non-oddness of minimizers. \section{Uniqueness results in higher dimensions} \label{sec:ndim} In this section we consider a bounded domain $\Omega\subset\mathbb{R}^N$, a reflection with respect to a hyperplane, $\sigma:\mathbb R^N \to \mathbb R^N$, that leaves $\Omega$ invariant, and \begin{equation}\label{6.0} A \in C^2 (\overline{\Omega}, S_N(\mathbb R) ), \, \, \, 0 < b \in C^1 (\overline{\Omega}, \mathbb R), \, \, \, \varphi\in (H^1\cap L^\infty)(\Omega), \end{equation} where $S_N (\mathbb R)$ stands for the set of $N \times N$ symmetric matrices with real coefficients. We will assume that \begin{equation} \label{eq:UnifCoercivity} \langle A(x) \xi, \xi \rangle \geq c_0 |\xi|^2 \quad \textrm{for all }x\in\Omega\textrm{ and } \xi \in \mathbb R^N, \end{equation} for some positive constant $c_0$. The potential $G\in C^2 (\mathbb R)$ will satisfy \begin{equation}\label{newG:bis} \begin{array}{l} \textrm{there exists }M>0\textrm{ such that } G'(s)\leq 0\textrm{ for all }s< -M \\ \textrm{and }G'(s)\geq 0\textrm{ for all }s>M . \end{array} \end{equation} When discussing the antisymmetry property of the solution, we shall also assume \begin{equation} \label{eq:Flughafen:bis} G(s) = G(-s)\textrm{ in }\mathbb{R}, \quad A \circ \sigma = A, \quad b \circ \sigma = b , \quad \varphi \circ \sigma = - \varphi. \end{equation} With these assumptions, we address the question of uniqueness of critical points for the functional \begin{equation}\label{eq:NDimFunctional:bis} \mathcal{E} (u, \Omega) := \int_{\Omega} \left\{ \frac{1}{2} \langle A(x) \nabla u, \nabla u \rangle + b(x) G(u) \right\} dx,\quad u\in H^1_\varphi(\Omega), \end{equation} and also of their antisymmetry property.
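To fix ideas, we record a simple instance of this symmetric setting (given only for illustration; it is not used in the sequel).

\begin{Example}
Let $\Omega = B_1 \subset \mathbb{R}^N$ be the unit ball and let $\sigma(x_1,x') := (-x_1,x')$ be the reflection across the hyperplane $\{x_1=0\}$, which leaves $\Omega$ invariant. Taking, for instance,
$$
A(x) = e^{x_1^2}\, {\rm Id}, \qquad b(x) = e^{x_1^2}, \qquad \varphi(x) = m\, x_1, \qquad G(s) = \frac{(1-s^2)^2}{4},
$$
conditions \eqref{6.0}, \eqref{eq:UnifCoercivity} (with $c_0=1$), and \eqref{newG:bis} (with $M=1$, since $G'(s)=s(s^2-1)$) are satisfied, and the symmetry assumptions \eqref{eq:Flughafen:bis} hold as well: $G$ is even, $A\circ\sigma = A$ and $b\circ\sigma = b$ since $x_1^2$ is even in $x_1$, and $\varphi\circ\sigma = -\varphi$.
\end{Example}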
We work with the functional spaces $$ H^{1}_{\varphi} (\Omega) := \{ u \in H^1 (\Omega) \, \colon \, u - \varphi \in H^1_0 (\Omega) \} $$ and $$ H^{as}_{\varphi} (\Omega) := \{ u \in H^1_{\varphi} (\Omega) \, \colon \, u \circ \sigma = - u\} . $$ Let us first emphasize that critical points $u$ are weak solutions to the Euler-Lagrange equation \begin{equation}\label{new116} - \hbox{div} (A(x) \nabla u) + b(x) G'(u) = 0, \quad u \in H^1_{\varphi} (\Omega). \end{equation} As a consequence, if \eqref{newG:bis} holds then \begin{equation} \label{linf} |u|\leq \max\{M,\|\varphi\|_{L^\infty(\partial\Omega)}\} \qquad \text{ in } \Omega \end{equation} by the maximum principle. Therefore, since $u\in L^\infty(\Omega)$, we have $G'(u)\in L^\infty(\Omega)$, and hence problem \eqref{new116} can be understood in the distributional sense. The following existence result states, in particular, that $\mathcal{E}$ always admits an antisymmetric critical point under assumption \eqref{eq:Flughafen:bis}. \begin{Proposition} \label{prop:ExistenceMin} Assume \eqref{6.0}, \eqref{eq:UnifCoercivity}, and \eqref{newG:bis}. Then $\mathcal{E}$ admits a minimizer in $H^{1}_{\varphi}(\Omega)$ and, when assuming \eqref{eq:Flughafen:bis}, so does its restriction $\mathcal{E}|_{H^{as}_{\varphi} (\Omega)}$. Moreover, both minimizers are critical points of $\mathcal{E}$ in $H^1_{\varphi}(\Omega)$. \end{Proposition} \begin{proof} The fact that $\mathcal{E}$ and the restriction $\mathcal{E}|_{H^{as}_{\varphi} (\Omega)}$ admit a minimizer $u_0 \in H^{1}_{\varphi} (\Omega)$ and $u^{as} \in H^{as}_{\varphi} (\Omega)$, respectively, follows by applying standard results of the calculus of variations (note that $G$ is bounded from below by \eqref{newG:bis} and that $\mathcal{E}(\varphi,\Omega)<+\infty$).
To show that $u^{as}\in H^{as}_{\varphi} (\Omega)$ is a critical point of $\mathcal{E} (\cdot, \Omega)$, write any $\xi \in C_0^\infty(\Omega)$ as $\xi = \xi^{s} + \xi^{as}$, where $\xi^{s} := (\xi + \xi \circ \sigma)/2$ is symmetric and $\xi^{as} := (\xi - \xi \circ \sigma)/2$ is antisymmetric. Since $u^{as} + t \xi^{as}$ remains in $H^{as}_{\varphi} (\Omega)$ for every $t$, the minimality of $u^{as}$ in this class gives $D \mathcal{E} (u^{as})\xi^{as} = 0$ in the weak sense. Furthermore, due to the symmetry assumptions on $A$, $b$, and $G$, we readily see that the functions $$ x \mapsto \langle A( x ) \nabla u^{as} (x), \nabla \xi^{s} (x) \rangle \quad \textrm{and} \quad x \mapsto b (x) G'(u^{as} (x) ) \xi^{s} (x) $$ are antisymmetric. Therefore, since $\Omega$ is $\sigma$-invariant, $D \mathcal{E} (u^{as})\xi^{s} = 0$. We conclude $D \mathcal{E} (u^{as})\xi = 0$ for any $\xi \in C_0^{\infty} (\Omega)$. \end{proof} Our next proposition establishes uniqueness of critical points of $\mathcal{E}$ when the second variation of $\mathcal{E}$ at $u\equiv0$ is nonnegative and, in addition, $-G''(0) > -G''(s)$ for all $s\neq0$. Let us note that this last condition on $G$ is satisfied by the double-well potential $G(s) = (1-s^2)^2/4$. Before stating our result, let us recall that the second variation of the energy at $u\equiv 0$ is nonnegative whenever $$ D^2\mathcal{E}(0)(\xi,\xi) :=\int_{\Omega} \{ \langle A(x) \nabla \xi , \nabla \xi \rangle + b(x) G''(0) \xi^2 \}\,dx \, \geq \, 0 \quad \textrm{for all }\xi \in H^1_0 (\Omega). $$ If in addition $u\equiv0$ is a solution of \eqref{new116}, then we say that $u\equiv0$ is a semi-stable solution. Instead, if $u\equiv0$ is a solution of \eqref{new116} and $D^2\mathcal{E}(0)$ is not nonnegative definite, then we say that $u\equiv0$ is an unstable solution. \begin{Remark}\label{RemD^2} By considering the eigenvalue $$ \lambda_1 (A,b, \Omega) := \inf\left\{ \frac{\int_{\Omega} \langle A(x) \nabla \xi, \nabla \xi \rangle\,dx} {\int_{\Omega} b(x) \xi^2\,dx} \, \colon \, \xi \in H^1_{0} (\Omega), \, \xi \not \equiv 0 \right\} , $$ we easily see that $D^2\mathcal{E}(0) \geq 0$ if and only if $\lambda_1(A,b, \Omega) \geq - G''(0)$.
\end{Remark} We establish the following antisymmetry result and a kind of converse to it. \begin{Proposition}\label{Proposition:Marc1} Assume \eqref{6.0}, \eqref{eq:UnifCoercivity}, and \eqref{newG:bis}. The following assertions hold: \begin{enumerate} \item[{\rm (i)}] Assume that $D^2\mathcal{E}(0) \geq 0$ and that $-G''(0) > -G''(s)$ for all $s\neq 0$. Then, for every $\varphi\in (H^1\cap L^\infty)(\Omega)$, $\mathcal{E} (\cdot, \Omega)$ admits a unique critical point in $H^{1}_\varphi (\Omega)$. In addition, under condition~\eqref{eq:Flughafen:bis} it is antisymmetric. \item[{\rm (ii)}] Assume \eqref{eq:Flughafen:bis} and that $D^2\mathcal{E}(0)(\xi,\xi) < 0$ for some $\xi\in H^1_0(\Omega)$. Then, for some boundary values $\varphi\in C^1(\overline{\Omega})$, the minimizers of $\mathcal{E} (\cdot, \Omega)$ in $H^{1}_\varphi(\Omega)$ are not antisymmetric. \end{enumerate} \end{Proposition} \begin{proof} {\bf (i)} Let $u_1, u_2 \in H^1_{\varphi} (\Omega)$ be two critical points of $\mathcal{E} (\cdot, \Omega)$ and recall that $u_1,u_2\in L^\infty(\Omega)$ by \eqref{linf}. Define $u^t:=tu_1+(1-t)u_2$, $t\in[0,1]$. Since $G''(s)\geq G''(0)$ for all $s$, we have \begin{eqnarray}\label{second:der} \frac{d^2}{dt^2}\mathcal{E}(u^t,\Omega) &=& \int_\Omega \left\{\langle A(x)\nabla (u_1-u_2),\nabla (u_1-u_2)\rangle+b(x)G''(u^t)(u_1-u_2)^2\right\}\,dx \nonumber\\ &\geq& \int_\Omega \left\{\langle A(x)\nabla (u_1-u_2),\nabla (u_1-u_2)\rangle+b(x)G''(0)(u_1-u_2)^2\right\}\,dx \nonumber\\ &\geq& 0, \end{eqnarray} where in the last inequality we have used that $u_1-u_2\in H^1_0(\Omega)$ and that $D^2\mathcal{E}(0) \geq 0$. Therefore, $h(t):=\mathcal{E}(u^t,\Omega)$ is a convex function. Moreover, since $u_1$ and $u_2$ are critical points, we have that $h'(0)=h'(1)=0$. It follows that $h$ is constant in $[0,1]$. As a consequence, the left hand side of \eqref{second:der} is zero and, thus, all the inequalities in \eqref{second:der} become equalities. 
Hence, since $-G''(0) > -G''(s)$ for all $s\neq 0$, we obtain that $u^t=tu_1+(1-t)u_2=0$ (\textit{i.e.}, $u_1=t^{-1}(t-1)u_2$) in $\{x\in\Omega: u_1(x)\neq u_2(x)\}$. Since this must hold for all $t\in(0,1)$, we deduce that $\{x\in\Omega: u_1(x)\neq u_2(x)\}=\varnothing$. Thus, $\mathcal{E} (\cdot, \Omega)$ admits a unique critical point $u$. Under the additional condition~\eqref{eq:Flughafen:bis}, we have that $- u \circ \sigma$ is also a critical point. By uniqueness we must have $- u \circ \sigma = u$. Thus, $u$ is antisymmetric. {\bf (ii)} Assume now that $D^2\mathcal{E}(0)(\xi,\xi) < 0$ for some $\xi\in H^1_0(\Omega)$. Let $\varphi_n\in C^1(\overline{\Omega})$ be any sequence converging to zero in $C^1(\overline{\Omega})$ and let $u_n\in H^1(\Omega)$ be a minimizer of $\mathcal{E}(\cdot,\Omega)$ in $H^1_{\varphi_n}(\Omega)$. Let us prove that, for $n$ large enough, $u_n$ is not antisymmetric. We claim that $u_n\rightharpoonup u_0$ in $H^1(\Omega)$ (up to a subsequence) and that $u_0$ is a minimizer of $\mathcal{E}(\cdot,\Omega)$ in $H^1_0(\Omega)$. Indeed, since $u_n$ is a minimizer we have $\mathcal{E}(u_n,\Omega)\leq \mathcal{E}(\varphi_n,\Omega)\leq C$ for some constant $C$ independent of~$n$. In particular, $\int_\Omega|\nabla u_n|^2\,dx\leq C$ and therefore $$ \int_\Omega|\nabla (u_n-\varphi_n)|^2\, dx\leq C. $$ As a consequence, since $(u_n-\varphi_n)|_{\partial\Omega}\equiv 0$ and thus $u_n-\varphi_n\in H^1_0(\Omega)$, a subsequence of $u_n-\varphi_n$ converges weakly. Thus, up to a subsequence, $u_n\rightharpoonup u_0$ in $H^1(\Omega)$ for some $u_0\in H^1(\Omega)$. Let $u\in H^1_0(\Omega)$. Note that $u+\varphi_n\in H^1_{\varphi_n}(\Omega)$. By minimality of $u_n$ we have $\mathcal{E}(u_n,\Omega)\leq \mathcal{E}(u+\varphi_n,\Omega)$. Using the semicontinuity of the $H^1$-norm and Fatou's lemma, we obtain (taking the $\liminf$) that $\mathcal{E}(u_0,\Omega)\leq \mathcal{E}(u,\Omega)$. That is, $u_0$ is a minimizer in $H^1_0(\Omega)$. 
Finally, assume by contradiction that, for $n$ large enough, $u_n$ is antisymmetric (with respect to $\sigma$). Then, by the $\sigma$-invariance of $\Omega$, \begin{equation}\label{lalalala} 0=\int_\Omega u_n\,dx\longrightarrow \int_\Omega u_0\, dx\quad\textrm{as }n\rightarrow+\infty. \end{equation} Since $D^2\mathcal{E}(0)(\xi,\xi) < 0$ for some $\xi\in H^1_0(\Omega)$, any minimizer $u_0$ of $\mathcal{E} (\cdot, \Omega)$ in $H^1_0(\Omega)$ cannot be identically zero (otherwise $u\equiv 0$ would be a minimizer, hence a semi-stable solution, contradicting the instability hypothesis). In addition, $u_0$ has constant sign in $\Omega$ (since its absolute value also minimizes; one uses here an argument as in Proposition~\ref{prop:Schubert}). This and \eqref{lalalala} give a contradiction. Hence, for $n$ large enough, $u_n$ is a minimizer of $\mathcal{E}(\cdot,\Omega)$ in $H^1_{\varphi_n}(\Omega)$ which is not antisymmetric. \end{proof} As we said before, $D^2\mathcal{E}(0)\geq 0$ is equivalent to $\lambda_1(A,b,\Omega)\geq -G''(0)$ (see Remark~\ref{RemD^2}). Therefore, the first part of the previous proposition can be reformulated as follows. \begin{Corollary}\label{Cor:unique_crt} Assume \eqref{6.0}, \eqref{eq:UnifCoercivity}, and \eqref{newG:bis}. If $\lambda_1 (A,b,\Omega) \geq - G''(0) > - G''(s)$ for all $s\neq 0$, then $\mathcal{E} (\cdot, \Omega)$ admits a unique critical point in $H^1_{\varphi}(\Omega)$ for every $\varphi\in (H^1\cap L^\infty)(\Omega)$. In addition, under assumption~\eqref{eq:Flughafen:bis} it is antisymmetric. \end{Corollary} The following result gives a lower bound for the eigenvalue $\lambda_1(A,b,\Omega)$ and will be useful in order to apply Corollary~\ref{Cor:unique_crt}. \begin{Proposition} \label{prop:Elizabeth} Assume \eqref{6.0} and that $A(x)$ is a nonnegative symmetric matrix for all $x\in\overline\Omega$.
The following inequality holds: \begin{equation}\label{eq:Anna} \lambda_1 (A,b ,\Omega) \, \geq \, \frac{1}{4} \inf_{\Omega} \left\{ 2 \, {\rm div} \left( A (x) \frac{\nabla b}{b^2} \right) + \Big\langle A (x) \nabla b , \frac{\nabla b}{b^3} \Big\rangle \right\} . \end{equation} In particular, if $A \equiv a\, {\rm Id}$ and $a \equiv b$, then \begin{equation} \label{eq:AnnaRoja} \lambda_1 (A,b ,\Omega) \geq \inf_{\Omega}\frac{\Delta \sqrt a}{\sqrt a} . \end{equation} \end{Proposition} \begin{proof} Noting that the map $$ \xi \longmapsto \eta:=b^{1/2} \xi , $$ is a bijection from $H^1_0(\Omega)$ to itself, we obviously have $$ \lambda_1 (A,b,\Omega) = \inf_{\eta \in H^1_0 (\Omega) \setminus \{ 0 \}} \frac{\int_{\Omega} \big\langle A (x) \nabla (\eta b^{-1/2}), \nabla (\eta b^{-1/2}) \big\rangle\,dx } {\int_{\Omega} \eta^2\,dx}. $$ For $\eta \in H^1_0(\Omega) \setminus \{ 0 \}$, using the fact that $\langle A\nabla\eta,\nabla\eta\rangle\geq 0$, we get \begin{eqnarray} \big\langle A (x) \nabla (\eta b^{-1/2}), \nabla (\eta b^{-1/2}) \big\rangle &=& \frac{1}{b} \Big\langle A (x) \big( \nabla \eta - \frac{\eta}{2} \frac{\nabla b}{b} \big) , \nabla \eta - \frac{\eta}{2} \frac{\nabla b}{b} \Big\rangle \nonumber\\ &\geq& - \eta \Big\langle A (x) \frac{\nabla b}{b^2} , \nabla \eta \Big\rangle + \frac{\eta^2}{4 b} \Big\langle A (x) \frac{\nabla b}{b} , \frac{\nabla b}{b} \Big\rangle \nonumber \\ &=& - \Big\langle A(x) \frac{\nabla b}{b^2} , \nabla \big(\frac{\eta^2}{2} \big) \Big\rangle + \frac{\eta^2}{4 } \Big\langle A (x) \nabla b , \frac{\nabla b}{b^3} \Big\rangle . 
\nonumber \end{eqnarray} Integrating and applying the divergence theorem, we get \begin{eqnarray*} & & \hspace{-2cm} \int_{\Omega} \big\langle A (x) \nabla(\eta b^{-1/2}), \nabla(\eta b^{-1/2}) \big\rangle\,dx \\ & \geq & \frac{1}{4} \int_{\Omega} \left\{ 2 \hbox{div} \left( A (x) \frac{\nabla b}{b^2} \right) + \Big\langle A (x) \nabla b , \frac{\nabla b}{b^3} \Big\rangle \right\} \eta^2\,dx \end{eqnarray*} and therefore $$ \lambda_1 (A,b, \Omega) \, \geq \, \frac{1}{4} \inf_{\Omega} \left\{ 2 \hbox{div} \left( A (x) \frac{\nabla b}{b^2} \right) + \Big\langle A (x) \nabla b , \frac{\nabla b}{b^3} \Big\rangle \right\} , $$ which is inequality~\eqref{eq:Anna}. In the case $A \equiv a\, {\rm Id}$ and $a \equiv b$, inequality~\eqref{eq:Anna} turns out to be equivalent to $$ \lambda_1 (a\, {\rm Id},a,\Omega) \geq \frac{1}{4}\inf_{\Omega} \left\{ 2 \Delta (\log a ) + |\nabla (\log a)|^2\right\} = \inf_{\Omega}\frac{\Delta\sqrt{a}}{\sqrt{a}}. $$ \end{proof} Theorem~\ref{thm:ndim} and Corollary~\ref{cor:OddHigher} follow as an immediate consequence of Corollary~\ref{Cor:unique_crt} and Proposition~\ref{prop:Elizabeth}. \begin{proof}[Proof of Theorem~{\rm \ref{thm:ndim}} and Corollary~{\rm \ref{cor:OddHigher}}] Assume $A \equiv a\, {\rm Id}$ and $a \equiv b$. By Proposition~\ref{prop:Elizabeth} and the assumptions of the theorem, we have $$ \lambda_1 (A,b ,\Omega) \geq \inf_\Omega\frac{\Delta \sqrt a}{\sqrt a}\geq -G''(0) > -G''(s) \quad \textrm{for all }s\neq 0. $$ The results now follow from Corollary~\ref{Cor:unique_crt}. \end{proof} Let us give some examples of weights $A$ and $b$ for which $D^2\mathcal{E}(0)\geq 0$ (\textit{i.e.,} for which $\lambda_1(A,b, \Omega) \geq - G''(0)$). We know that in this case, if in addition $-G''(0) > -G''(s)$ for all $s\neq 0$, then $\mathcal{E}$ admits a unique critical point in $H^1_\varphi(\Omega)$ which, furthermore, is antisymmetric under assumption \eqref{eq:Flughafen:bis}. 
\begin{Example}\label{ex6:2} \begin{enumerate} \item[(i)] The condition $\lambda_1 (A,b,\Omega) \geq - G''(0)$ is always satisfied for small domains $\Omega$. Indeed, by setting $\varepsilon\Omega := \{ \varepsilon x \, \colon \, x \in \Omega \}$, and using the assumption~\eqref{eq:UnifCoercivity} together with $b \in L^{\infty} (\Omega)$, we get $\lambda_1 (A,b, \varepsilon\Omega) \geq c_0\|b\|_\infty^{-1} \varepsilon^{-2} \lambda_1 ({\rm Id}, 1, \Omega)$. Hence, we have $\lambda_1 (A,b,\varepsilon\Omega) \geq - G''(0)$ for $\varepsilon$ small. \item[(ii)] Consider the weights $$ A(x) = e^{\alpha |x|^2} {\rm Id} \quad\textrm{and}\quad b(x) = e^{\alpha |x|^2} \quad\textrm{for all }x\in\Omega\subset\mathbb{R}^N, $$ with $2 \alpha N \geq -G''(0)$. Then we easily check that the function $\psi(x)=e^{-\alpha |x|^2}> 0$ is a positive supersolution of the linearized problem at $u\equiv0$: $$ - \hbox{div } \Big( A(x) \nabla \psi \Big) + b(x) G''(0) \psi \geq 0 \quad \textrm{in }\Omega. $$ It is then standard to conclude that $D^2\mathcal{E}(0)\geq 0$ (multiply the previous inequality by $\xi^2/\psi$ with $\xi\in H^1_0(\Omega)$, integrate by parts, and use Cauchy-Schwarz). \item[(iii)] Note that, for $\alpha>0$, the previous weight $e^{\alpha|x|^2}$ is log-convex. More generally, assume now $$ \langle A(x) \xi, \xi \rangle \geq e^{\alpha|x|^2} |\xi|^2 , \quad b(x) \leq e^{\alpha|x|^2} , \quad \textrm{and} \quad 2 \alpha N \geq -G''(0) . $$ Then $$ \frac{\int_{\Omega} \langle A(x) \nabla \xi, \nabla \xi \rangle\,dx}{\int_{\Omega} b(x) \xi^2\,dx} \geq \frac{\int_{\Omega} e^{\alpha|x|^2} |\nabla \xi|^2\,dx}{\int_{\Omega} e^{\alpha|x|^2} \xi^2\,dx} \,. $$ By the argument in (ii), this provides examples of weights (which are not necessarily log-convex) for which critical points are antisymmetric on any domain $\Omega$ (even in dimension $N=1$). \item[(iv)] Assume $|A|$, $b \in L^{\infty} (\mathbb R^N)$ and $\inf_{\mathbb R^N} b >0$. 
Then, we easily find a constant $C>0$ such that $\lambda_1 (A, b, \Omega) \leq C \lambda_1 (\textrm{Id},1,\Omega)$. Therefore, if $G''(0)<0$, for large domains $\Omega$ it holds that $\lambda_1(A,b,\Omega)<-G''(0)$ (\textit{i.e.}, $D^2\mathcal{E}(0)(\xi,\xi)<0$ for some $\xi\in H^1_0(\Omega)$). Hence, for some $\varphi\in C^1(\overline{\Omega})$ there are minimizers of $\mathcal{E} (\cdot, \Omega)$ in $H^{1}_\varphi (\Omega)$ which are not antisymmetric (by Proposition~\ref{Proposition:Marc1}~(ii)). \end{enumerate} \end{Example} \begin{Remark}\label{rem6:7} The uniform coercivity condition~\eqref{eq:UnifCoercivity} is used to guarantee the existence of minimizers in Proposition~\ref{prop:ExistenceMin}. It can be relaxed, and one can consider weights $A(x)$ that either vanish at some point, or that are not uniformly coercive. Let us briefly make two observations in this direction. Let $\Omega \subset\mathbb{R}^N$ be a bounded smooth domain that is star-shaped with respect to the origin. Assume $N\geq 2$ and that there exist $\alpha>0$ and $\beta\in\mathbb{R}$ such that $$ A(\tau x) = \tau^{\alpha} A(x), \qquad b(\tau x) = \tau^{\beta} b(x) \quad \textrm{for all } \tau > 0\textrm{ and }x\in\Omega, $$ with $A$ and $b$ continuous in $\overline{\Omega}$ and positive in $\Omega$. Then a simple scaling argument shows that $$ \lambda_1(A, b, \tau \Omega ) = \tau^{\alpha - \beta - 2} \lambda_1 (A,b, \Omega). $$ By the weighted Hardy inequality in $H^1(\Omega;|x|^\alpha\,dx)$ we have \begin{equation}\label{lambda_positive} \lambda_1(A, b, \Omega ) > 0\quad\textrm{when }\alpha-\beta\leq 2. \end{equation} This can be proved by integrating in spherical coordinates and using, on every ray, the typical argument that gives the classical Hardy inequality (through integration by parts and the Cauchy-Schwarz inequality). Now, if $G''(0)\geq 0$ then $D^2\mathcal{E}(0)\geq 0$ independently of the domain.
In addition, if $G''(0)<0$ and $\alpha - \beta < 2$ then we see by scaling that $\lambda_1(A, b, \Omega )$ is large for small domains $\Omega$. In particular, $D^2\mathcal{E}(0) \geq 0$ in this case. By contrast, when $\alpha - \beta =2$, $\lambda_1(A,b, \Omega)$ is invariant under dilations of the domain. An important example of the previous situation is the weight $A(s,t) = (st)^{m-1}{\rm Id}$, $b(s,t) = (st)^{m-1}$ in $\Omega = (0,L)^{2}\subset\mathbb{R}^{2}$ related to the De Giorgi conjecture, as discussed in the Introduction. In this case $\alpha-\beta=0<2$, and thus we have uniqueness and antisymmetry in small domains. Another relevant case of homogeneous weights is given by radial weights. If $A(x) = |x|^{\alpha} {\rm Id}$ and $b(x) = |x|^{\beta}$, with $\alpha>2-N$ and $\beta>-N$, then the value of $\lambda_1(A,b, \Omega)$ is known by the weighted Hardy inequality in $H^1(\Omega;|x|^\alpha\,dx)$ in the critical case $\alpha - \beta = 2$ (recall that $0\in\Omega$ since $\Omega$ is assumed to be star-shaped): $$ \lambda_1 (A,b,\Omega) = \frac{(N-2+\alpha)^2}{4}. $$ Hence, for this class of weights, if \begin{equation}\label{cond_uniq} \frac{(N-2+\alpha)^2}{4} \geq - G''(0) > - G''(s)\quad \textrm{for all }s\neq 0 \end{equation} holds, then $D^2\mathcal{E}(0)\geq0$ and $\mathcal{E}(\cdot,\Omega)$ admits a unique critical point in $H^1(\Omega;|x|^\alpha\,dx)$. The proof is the same as that of Proposition~\ref{Proposition:Marc1} with $H^1_\varphi(\Omega)$ replaced by the previous weighted Sobolev space. Note that \eqref{cond_uniq} is independent of the domain $\Omega$, and that the first inequality is satisfied for large dimensions $N$ for any $\alpha>0$. \end{Remark} Let us conclude with some remarks in dimension $N=1$. In this case, we can characterize the weights $a,b$ defined on $\mathbb R$ for which $\lambda_1 (a,b,I) \geq C$ for some positive constant $C$ independent of the interval $I$.
This characterization is similar to Muckenhoupt's condition available for Hardy-type inequalities with weights (see~\cite{Muckenhoupt}). More specifically, referring to Opic and Kufner~\cite[p.~93]{OpicKufner}, given even weights $a,b$, define on each interval $I=(-L,L)$ the constant \begin{equation}\label{M(a,b,I)} M (a,b,I) := \sup_{\alpha, \beta \in I} \left\{ \left( \int_{\alpha}^{\beta} b \right) \left( \int_{\max\{ |\alpha|, |\beta| \} }^{L} \! \!a^{-1} \right) \right\} \,. \end{equation} Then, \begin{equation}\label{opic} \frac{1}{16 M (a,b,I)} \, \leq \, \lambda_1 (a,b,I) \, \leq \, \frac{4}{M (a,b,I)}. \end{equation} In particular $$ \lim_{L \to \infty} \lambda_1(a,b, (-L,L))=0 \quad \textrm{if and only if} \quad\lim_{L \to \infty} M(a,b,(-L,L)) = \infty \,. $$ Note that the quantity appearing in \eqref{M(a,b,I)} already appeared in \eqref{eq:KindMuckenhoupt}. If $a^{-1} , b \in L^1 (\mathbb R)$, we immediately deduce, from Corollary~\ref{Cor:unique_crt}, \eqref{M(a,b,I)}, and \eqref{opic}, the following result. \begin{Proposition} Assume $a^{-1} , b \in L^1 (\mathbb R)$ and that \eqref{eq:A} and \eqref{newG:bis} hold. If $$ \frac{1}{ 16 \| a^{-1} \|_{L^1 (\mathbb R)} \| b \|_{L^1 (\mathbb R)} } \geq - G'' (0) > - G''(s) \quad \textrm{for all }s\neq 0, $$ then the functional $\mathcal{E} (\cdot, (-L,L))$ admits a unique critical point in $H^1_m((-L,L))$ for any $m >0$. \end{Proposition}
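To make the deduction behind the last Proposition explicit, here is the one-line estimate; it simply combines \eqref{M(a,b,I)} with the lower bound in \eqref{opic}, using $b\geq 0$ and $a^{-1}\geq 0$.

```latex
M(a,b,I)
 \leq \Big( \int_{\mathbb R} b \Big) \Big( \int_{\mathbb R} a^{-1} \Big)
 = \| b \|_{L^1 (\mathbb R)} \, \| a^{-1} \|_{L^1 (\mathbb R)} ,
\qquad \text{whence} \qquad
\lambda_1 (a,b,I)
 \geq \frac{1}{16 \, \| a^{-1} \|_{L^1 (\mathbb R)} \, \| b \|_{L^1 (\mathbb R)}}
\quad \text{for every } I = (-L,L) .
```

Thus the hypothesis of the Proposition guarantees $\lambda_1(a,b,(-L,L))\geq -G''(0)$ uniformly in $L$, and Corollary~\ref{Cor:unique_crt} applies on every interval.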
{ "timestamp": "2017-09-25T02:06:25", "yymm": "1709", "arxiv_id": "1709.07656", "language": "en", "url": "https://arxiv.org/abs/1709.07656", "abstract": "This article concerns the antisymmetry, uniqueness, and monotonicity properties of solutions to some elliptic functionals involving weights and a double well potential. In the one-dimensional case, we introduce the continuous odd rearrangement of an increasing function and we show that it decreases the energy functional when the weights satisfy a certain convexity-type hypothesis. This leads to the antisymmetry or oddness of increasing solutions (and not only of minimizers). We also prove a uniqueness result (which leads to antisymmetry) where a convexity-type condition by Berestycki and Nirenberg on the weights is improved to a monotonicity condition. In addition, we provide with a large class of problems where antisymmetry does not hold. Finally, some rather partial extensions in higher dimensions are also given.", "subjects": "Analysis of PDEs (math.AP)", "title": "Antisymmetry of solutions for some weighted elliptic problems" }
https://arxiv.org/abs/0705.1220
A simple solution to Ulam's liar game with one lie
Ulam asked for the maximum number of questions required to determine an integer between one and one million by asking questions whose answer is `Yes' or `No' and where one untruthful answer is allowed. Pelc showed that the number of questions required is 25. Here we give a simple proof of this result.
\subsection*{Introduction}\label{intro} We consider the following game between a questioner and a responder, first proposed by Ulam~\cite{ulam}. (A variation of this game was independently proposed by R\'enyi, see~\cite{pelc1}.) The responder thinks of an integer $x \in \{1,\dots,n\}$ and the questioner must determine $x$ by asking questions whose answer is `Yes' or `No'. The responder is allowed to lie at most $k$ times during the game. Let $q_k(n)$ be the maximum number of questions needed by the questioner, under an optimal strategy, to determine $x$ under these rules. In particular, Ulam asked for the value of $q_1(10^6)$ (as this is related to the well-known `twenty questions' game). It follows from an observation of Berlekamp~\cite{berlekamp} that $q_1(10^6) \ge 25$ and Rivest et al.~\cite{rivest} as well as Spencer~\cite{spencer1} gave bounds which imply that $q_1(10^6) \le 26$. Pelc~\cite{pelc2} was then able to determine $q_1(n)$ exactly for all $n$: \begin{thm} \label{thmpelc} \cite{pelc1} For even $n \in \mathbb{N}$, $q_1(n)$ is the smallest integer $q$ which satisfies $n \le 2^q/(q+1)$. For odd $n \in \mathbb{N}$, $q_1(n)$ is the smallest integer $q$ which satisfies $n \le (2^q-q+1)/(q+1)$. \end{thm} In particular, his result shows that the lower bound of Berlekamp for $n=10^6$ was correct. Shortly afterwards, Spencer~\cite{spencer1} determined $q_k(n)$ asymptotically (i.e.~for fixed $k$ and large $n$). The values of $q_k(10^6)$ have been determined for all $k$. These and many other related results are surveyed by Hill~\cite{hill}, Pelc~\cite{pelc1} and Cicalese~\cite{cicalese}. Here, we give a simple strategy and analysis for the game with at most one lie which implies the above result of Pelc for many values of $n$. \begin{thm} If $n \le 2^\ell \le 2^q/(q+1)$ for some integer $\ell$, then the questioner has a strategy which identifies $x$ in $q$ questions if at most one lie is allowed. In particular, $q_1(n) \le q$. 
\label{genstrat}\end{thm} Below, we will give a self-contained argument (Proposition~\ref{lowerprop}) which shows that if $n$ also satisfies $n>2^{q-1}/q$, then the strategy in Theorem~\ref{genstrat} is optimal. This implies that the bound in Theorem~\ref{genstrat} is optimal if $n=2^\ell$ for some $\ell \in \mathbb{N}$. More generally, Theorem~\ref{thmpelc} implies that for even $n$, Theorem~\ref{genstrat} gives the correct bound if and only if we can find a binary power $2^\ell$ with $n \le 2^\ell \le 2^q/(q+1)$, where $q$ is the smallest integer with $n \le 2^q/(q+1)$. (Similarly, one can read off a more complicated condition for odd $n$ as well.) In particular, if $n=10^6$, we obtain $q_1(10^6)=25$. To check this, note that for $q=25$ and $\ell=20$, we have $$ \lfloor 2^{q-1} /q \rfloor =671088 < n \le 1048576=2^{\ell} < 1290555= \lfloor 2^q/(q+1) \rfloor. $$ If we compare the bounds from Theorems~\ref{thmpelc} and~\ref{genstrat}, then one can check that the smallest value where the latter gives a worse bound is $n=17$, where Theorem~\ref{genstrat} requires $9$ questions whereas $q_1(17)=8$. The smaller values are $q_1(2) = 3$, $q_1(3) =q_1(4)= 5$, $q_1(5)=\dots= q_1(8)=6$ and $q_1(9)=\dots=q_1(16)=7$. More generally, it is easy to see that for any $n$ the strategy in Theorem~\ref{genstrat} uses at most two more questions than an optimal strategy. Indeed, given $n$, let $\ell$ and $q$ be the smallest integers satisfying $n \le 2^\ell \le 2^q/(q+1)$. So Theorem~\ref{genstrat} implies that $q$ questions suffice. Proposition~\ref{lowerprop} implies that if $n>2^{q-3}/(q-2)$, then any successful strategy needs at least $q-2$ questions in the worst case. To see that $n>2^{q-3}/(q-2)$, suppose that this is not the case. Then by assumption on $\ell$ we have $2^{\ell-1} <n \le 2^{q-3}/(q-2)$. So if $q \ge 4$ (which we may assume in view of the above discussion of small values), we have $2^\ell < 2^{q-2}/(q-2) \le 2^{q-1}/q$. This contradicts the choice of $q$.
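The numerical claims in the preceding paragraph can be checked mechanically. The following sketch (the function name \texttt{q1} is ours, not from the paper) evaluates Pelc's formula from Theorem~\ref{thmpelc} using exact integer arithmetic:

```python
def q1(n):
    """q_1(n) via Pelc's theorem: the smallest q with n <= 2^q/(q+1)
    for even n, and n <= (2^q - q + 1)/(q+1) for odd n."""
    q = 1
    while True:
        # compare with integers to avoid floating-point rounding
        if n % 2 == 0:
            ok = n * (q + 1) <= 2**q
        else:
            ok = n * (q + 1) <= 2**q - q + 1
        if ok:
            return q
        q += 1

# the small values listed above, and Ulam's original question
assert [q1(n) for n in (2, 3, 4, 5, 8, 9, 16, 17)] == [3, 5, 5, 6, 6, 7, 7, 8]
assert q1(10**6) == 25
```

The assertions reproduce the values $q_1(2)=3$, $q_1(3)=q_1(4)=5$, $q_1(17)=8$ and $q_1(10^6)=25$ quoted in the text.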
Our proof of Theorem~\ref{genstrat} uses ideas from Cicalese~\cite{cicalese} and Spencer~\cite{spencer}. It gives a flavour of some techniques which are typical of the area. Elsholtz (personal communication) has obtained another short proof for the case $n=10^6$. Throughout, all logarithms are binary. \medskip From now on, we consider only the game in which at most one lie is allowed. For the purposes of the analysis, it is convenient to allow the responder to play an adversarial strategy, i.e.~the responder does not have to think of the integer $x$ in advance (but does answer the questions so that there always is at least one integer $x$ which fits all but at most one of the previous answers). The questioner has then determined $x$ as soon as there is exactly one integer which fits all but at most one of the previous answers. We analyze the game by associating a sequence of states $(a,b)$ with the game. The state is updated after each answer. Here $a$ is always the number of integers which fit all previous answers and $b$ is the number of integers which fit all but exactly one answer. So initially, $a=n$ and $b=0$. The questioner has won as soon as $a+b \le 1$. If there are $j$ questions remaining in the game and the state is $(a,b)$, then we associate a weight $w_j(a,b):=(j+1)a+b$ with this state. Also, we call the integers which fit all but exactly one answer \emph{pennies} (note that each of these contributes exactly one to the weight of the state). For completeness, we now give a proof of the lower bound mentioned in the introduction. As mentioned above, the fact is due to Berlekamp~\cite{berlekamp}, see also~\cite{cicalese,pelc2,rivest} for the argument. The proof has a very elegant probabilistic formulation which generalizes more easily to the case of $k \ge 1$ lies (see Spencer~\cite{spencer}). \begin{prop} \label{lowerprop} If $n>2^{q-1}/q$, then the questioner does not have a strategy which determines $x$ with $q-1$ questions.
\end{prop} \removelastskip\penalty55\medskip\noindent{\bf Proof. } Note that our assumption implies that the initial weight satisfies $w_{q-1}(n,0)>2^{q-1}$. It is easy to check that before each answer, the sum of the weights of the two possible new states $(a_{yes},b_{yes})$ and $(a_{no},b_{no})$ is equal to the weight of the current state $(a,b)$, i.e. \begin{equation} \label{sum} w_j(a,b)=w_{j-1}(a_{yes},b_{yes})+w_{j-1}(a_{no},b_{no}). \end{equation} To see this, observe that $a=a_{yes}+a_{no}$ and $a+b=b_{yes}+b_{no}$ and substitute this into the definition of the weight functions. Identity~(\ref{sum}) implies that the responder can always ensure that the new state $(a',b')$ (with $j$ questions remaining) satisfies \begin{equation} \label{half} w_j(a',b') \ge w_{j+1}(a,b)/2 \ge w_{q-1}(n,0)2^{-(q-1-j)}>2^j. \end{equation} Thus the responder can ensure that the final state has weight greater than one. We also claim that this game never goes into state $(1,0)$. (Together, this implies that the final state consists of more than one penny, which means that the responder wins). To prove the claim, suppose that we are in state $(1,0)$ with $j-1$ questions remaining. Then the previous state must have been $(1,t)$ for some $t>0$. Note that~(\ref{half}) implies that $w_j(1,t)> 2^j$. On the other hand, the assumption on the strategy of the responder implies that $w_{j-1}(1,0) \ge w_{j-1}(0,t)$. Combined with~(\ref{sum}), this means that $w_j(1,t)= w_{j-1}(1,0) + w_{j-1}(0,t) \le 2w_{j-1}(1,0)=2j$. But $2^j<2j$ has no solution for $j \ge 1$, and so we have a contradiction. \noproof\bigskip \subsection*{Proof of Theorem~\ref{genstrat}} Note that the weight of the initial state is $w_q(n,0) = n(q+1) \leq 2^q.$ By making $n$ larger if necessary, we may assume that $\log n= \ell$ for some $\ell \in \mathbb{N}$. So $\ell \leq q - \log (q+1)$.
Since $\ell \in \mathbb{N}$, this implies \begin{align}\ell \leq q - \lceil\log (q+1)\rceil. \label{lv}\end{align} Consider each of the integers $1,\dots,n=2^\ell$ in its binary form, i.e.~we have $2^\ell$ strings of length $\ell$. The questioner performs a binary search on these numbers by asking questions of the form `Is the value of $x$ in position $i$ a 1?'. The binary search on the search space $\{1,\dots,n\}$ uses exactly $\ell$ questions and as a result we obtain $\ell+1$ possible binary numbers for $x$. There is exactly one integer which satisfies all the answers. There are also $\ell$ integers which satisfy all but one answer. Therefore, after the binary search has been performed we are in state $(1,\ell)$. Moreover, $w_{q-\ell}(1,\ell) = 1\cdot (q-\ell+1) + \ell\cdot 1 = q+1.$ Let $p:=\lceil\log (q+1)\rceil$. By (\ref{lv}) we have $p\leq q-\ell$, and by enlarging $\ell$ (and hence $n$) if necessary, we may assume that $p=q-\ell$; so it suffices to identify $x$ within $p$ questions. Note that the weight of the state satisfies $2^{p-1}< w_{p}(1,\ell) = q+1 \leq 2^p$. Suppose that $q+1$ is not a power of 2. It is easy to see that we can add pennies to the state until the total weight is equal to $2^p$, as the addition of pennies will only make the game harder for the questioner. Suppose that we now have $r$ pennies in total, so we obtain the new state $P^* = (1,r)$, with $r \geq \ell$, where the weight of $P^*$ equals $2^p$. Thus \begin{align}p + 1+r = w_p(1,r) = 2^p. \label{**}\end{align} We now have two cases to consider: \vspace{0.2cm} \noindent\textbf{Case One:} If $ r< p+1$, then~(\ref{**}) implies that $p+1 > 2^{p-1}$, which holds if and only if $p\leq 2$. This means that we have one nonpenny and at most two pennies, and the questioner can then identify $x$ using two more questions in this case. \vspace{0.2cm} \noindent\textbf{Case Two:} Suppose $r\geq p+1$. This implies that $2^{p-1} \geq p+1$ and thus $p>2$.
We know that the total weight of this state is even and so we wish to find a set, say $A_p$, such that when a question is asked about it, regardless of the responder's reply, the weight is exactly halved. Assume that $A_p$ contains the nonpenny and $y$ pennies and that the weight of $A_p$ is equal to $2^{p-1}$. Suppose that the answer to `Is $x \in A_p$?' is `Yes'. Then the weight of the resulting state is $p + y$ (since we are left with one nonpenny of weight $p$ and $y$ pennies). If the answer is `No', the resulting state has weight $r+1-y$ (since the nonpenny has turned into a penny and the $y$ pennies have been excluded). Thus we wish to solve $r+1- y = p +y$, which gives \begin{equation} \label{y} y = \frac{1}{2}(r+1-p). \end{equation} Note also that (\ref{**}) implies $r+1-p$ is even and so $y$ is an integer. Moreover, the condition $r \geq p+1$ implies that $ y \geq 1$. So suppose that the questioner chooses $A_p$ as above and asks `Is $x\in A_p$?'. If the responder replies `Yes', we obtain a position $P'$, which consists of one nonpenny and $y$ pennies, i.e.~$P' =(1,y)$, which has weight $2^{p-1}$. If $p-1 = 2$, then by Case One, the questioner can easily identify $x$. If $p-1 > 2$, we redefine $r$ such that $r :=y$ and then calculate the new value of $y$ by (\ref{y}), to obtain a new set $A_{p-1}$. The questioner continues inductively with $A_{p-1}$ instead of $A_p$, so the next question will be `Is $x\in A_{p-1}$?'. If the responder replies `No' to the original question `Is $x\in A_p$?' then we obtain a position $P'$ which consists only of pennies, i.e.~$P'=(0,r-y+1)$. Again, this has weight $2^{p-1}$. Since we have $p-1$ questions remaining we perform a binary search on the $r-y+1=2^{p-1}$ pennies remaining and after $p-1$ questions we will have identified $x$. Note that eventually, the answer to the question `Is $x\in A_i$?'
must be either `No' or it is `Yes' and we have $i-1=2$ as well as a new weight of $2^{i-1}$ (in which case there are 2 questions and at most one nonpenny and two pennies remaining). By the above arguments, the questioner can find the integer $x$ in the required total number $q$ of questions in both cases, which completes the proof of the theorem.\noproof\bigskip In case $n=10^6$, the above strategy would mean that after $20$ questions, we would be in state $(1,20)$ and have weight $w_5(1,20)=26$. Our aim is to find $x$ within $5$ more questions. We add $6$ pennies to obtain the state $(1,r)$ with $r=26$ and weight $2^p$, where $p=5$. Thus~(\ref{y}) gives $y=11$. So $A_5$ consists of the nonpenny and $11$ pennies. If the answer is `Yes', then $A_4$ consists of the nonpenny and $4$ pennies. If the answer is `No', we have 16 pennies left and can find $x$ after $4$ more questions by using binary search. \enlargethispage{1cm}
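The bookkeeping in the example above can be replayed mechanically. The following sketch (variable names are ours) follows the `Yes' branch of the strategy for $n=10^6$ and checks at each step that both answers halve the weight $w_p(1,r)=p+1+r$:

```python
q, l = 25, 20            # 25 questions in total, 20 used by the binary search
p = q - l                # 5 questions remain; state (1, 20) has weight 26
r = 2**p - p - 1         # pad with pennies so that p + 1 + r = 2**p
assert r == 26

trace = []
while r >= p + 1:        # Case Two: a weight-halving set A_p exists
    y = (r + 1 - p) // 2                        # pennies placed in A_p
    assert p + y == r - y + 1 == 2 ** (p - 1)   # both answers halve the weight
    trace.append((p, r, y))
    r, p = y, p - 1                             # follow the `Yes' answer

print(trace)             # [(5, 26, 11), (4, 11, 4), (3, 4, 1)]
print(p, r)              # 2 1 : Case One, one nonpenny and one penny left
```

The trace reproduces the numbers from the text: with weight $32=2^5$ one takes $y=11$, then $A_4$ contains $4$ pennies, and a `No' answer at the first step would leave $26-11+1=16$ pennies for a binary search.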
{ "timestamp": "2007-05-09T09:42:47", "yymm": "0705", "arxiv_id": "0705.1220", "language": "en", "url": "https://arxiv.org/abs/0705.1220", "abstract": "Ulam asked for the maximum number of questions required to determine an integer between one and one million by asking questions whose answer is `Yes' or `No' and where one untruthful answer is allowed. Pelc showed that the number of questions required is 25. Here we give a simple proof of this result.", "subjects": "Combinatorics (math.CO)", "title": "A simple solution to Ulam's liar game with one lie" }
https://arxiv.org/abs/2302.11094
Locally biHölder continuous mappings and their induced embeddings between Besov spaces
In this paper, we introduce a class of homeomorphisms between metric spaces, which are locally biHölder continuous mappings. Then an embedding result between Besov spaces induced by locally biHölder continuous mappings between Ahlfors regular spaces is established, which extends the corresponding result of Björn-Björn-Gill-Shanmugalingam (J. Reine Angew. Math. 725: 63-114, 2017). Furthermore, an example is constructed to show that our embedding result is more general. We also introduce a geometric condition, named as uniform boundedness, to characterize when a quasisymmetric mapping between uniformly perfect spaces is locally biHölder continuous.
\section{Introduction} For nearly three decades, the analysis on metric measure spaces has been under active study, e.g., \cite{BB11,BBS,BBS03,H03,HP,H,HK,HKST}. Given a metric measure space $(Z, d_Z, \nu_Z)$, many function spaces defined on this space have been well established, e.g., Sobolev spaces, Besov spaces and Triebel-Lizorkin spaces (see \cite{N00,H96,GKS10,BP03,KYZ,KYZ10,GKZ13} and the references therein). Given a homeomorphism $f$ between metric spaces $(Z,d_Z)$ and $(W,d_W)$, one natural question is what kind of correspondence between certain function spaces on $(Z,d_Z, \nu_Z)$ and $(W,d_W, \nu_W)$ can be induced by $f$. The question has been extensively studied for many function spaces when $f$ is a quasiconformal or quasisymmetric mapping, including Sobolev spaces, Besov spaces, Triebel-Lizorkin spaces and other related function spaces (see \cite{BBGS,BoSi, HKM92,Rie,KYZ,HenK,Vo,KXZZ} and the references therein). In a very recent work by Bj\"{o}rn-Bj\"{o}rn-Gill-Shanmugalingam \cite{BBGS}, the question was studied when the homeomorphism $f$ is a biH\"{o}lder continuous mapping and the underlying spaces are bounded Ahlfors regular spaces $(Z,d_Z, \nu_Z)$ and $(W,d_W, \nu_W)$. It was shown that $f$ induces an embedding between Besov spaces $B^{s}_{p, p}(W)$ and $B^{ s^\prime}_{p, p}(Z)$ for suitable $s$ and $s^\prime$, via composition; see \cite[Proposition 7.2]{BBGS} for the details. Recall that for $\theta_1>0$ and $\theta_2>0$, a homeomorphism $f: (Z, d_Z)\to (W, d_W)$ is called $(\theta_1, \theta_2)$-{\it biH\"{o}lder continuous} if there exists a constant $C\geq1$ such that for all $x, y\in Z$, \begin{equation}\label{defn-biholder-intro} C^{-1}d_Z(x, y)^{\theta_1}\leq d_W(f(x), f(y))\leq Cd_Z(x, y)^{\theta_2}. \end{equation} In particular, if $\theta_1=\theta_2$, then $f$ is called a {\it snowflake mapping}.
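A standard illustration (not taken from \cite{BBGS}; included here for orientation): for $0<\varepsilon<1$, the snowflaked space $(Z, d_Z^{\varepsilon})$ is again a metric space, and the identity map $f={\operatorname{id}}\colon (Z, d_Z)\to (W, d_W):=(Z, d_Z^{\varepsilon})$ satisfies

```latex
d_W\big( f(x), f(y) \big) = d_Z(x, y)^{\varepsilon}
\qquad \text{for all } x, y \in Z ,
```

so \eqref{defn-biholder-intro} holds with $C=1$ and $\theta_1=\theta_2=\varepsilon$; that is, $f$ is a snowflake mapping.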
It is interesting to ask what can remain of the conclusion of \cite[Proposition 7.2]{BBGS} if the assumption that the underlying metric spaces $(Z, d_Z)$ and $(W, d_W)$ are bounded is removed. As the first purpose of this paper, we consider this question. However, the assumption of boundedness on the underlying spaces plays a key role, since for a biH\"{o}lder continuous mapping $f: (Z, d_Z)\to (W, d_W)$, if $(Z, d_Z)$ is Ahlfors regular and unbounded, then $f$ must be a snowflake mapping. To avoid such a constraint, let us introduce the following class of mappings. Before the statement of the definition, we make the following conventions: $(1)$ For a subset $A$ of $(Z, d_Z)$, we use ${\operatorname{diam}} A$ to denote the diameter of $A$, that is, ${\operatorname{diam}} A=\sup\{d_Z(z_1, z_2): z_1, z_2\in A\}$. $(2)$ All metric spaces involved in this paper are assumed to contain at least two points. $(3)$ When $(Z, d_Z)$ is unbounded, we take ${\operatorname{diam}} Z=\infty$. Then for any metric space $(Z, d_Z)$, $0<{\operatorname{diam}} Z\leq \infty$. \begin{defn}\label{LbHC} Let $\theta_1>0$, $\theta_2>0$ and $0<r<2 {\operatorname{diam}} Z$. A homeomorphism $f: (Z, d_Z)\to (W, d_W)$ is called {\it locally $(\theta_1, \theta_2, r)$-biH\"{o}lder continuous} if every pair of points $x, y\in Z$ satisfies the condition \eqref{defn-biholder-intro} provided that $d_Z(x, y)<r$. Also, the constant $C$ in \eqref{defn-biholder-intro} is called a {\it locally biH\"{o}lder continuity coefficient} of $f$. \end{defn} Obviously, every biH\"{o}lder continuous mapping is locally biH\"{o}lder continuous, while the converse is not true (See Example \ref{ex} below). The following are direct consequences of the definitions. 
\begin{prop}\label{1-8-8} $(1)$ If $f$ is $(\theta_1, \theta_2)$-biH\"{o}lder continuous with a biH\"{o}lder continuity coefficient $C_1$, then the inverse $f^{-1}$ of $f$ is $(1/\theta_2, 1/\theta_1)$-biH\"{o}lder continuous with a biH\"{o}lder continuity coefficient $C_2=\max\{C_1^{1/ \theta_1},\; C_1^{1/ \theta_2}\}$. $(2)$ If $f$ is locally $(\theta_3, \theta_4, r)$-biH\"{o}lder continuous with a locally biH\"{o}lder continuity coefficient $C_3$, then the inverse $f^{-1}$ of $f$ is locally $(1/\theta_4, 1/\theta_3, C_3^{-1}r^{\theta_3})$-biH\"{o}lder continuous with a locally biH\"{o}lder continuity coefficient $C_4=\max\{C_3^{1/ \theta_3},\; C_3^{1/ \theta_4}\}$. \end{prop} The following result is our answer to the aforementioned question, which provides us with embeddings between Besov spaces induced by locally biH\"{o}lder continuous mappings. \begin{thm}\label{thm-1} Assume that $(Z, d_Z, \nu_Z)$ and $(W, d_W, \nu_W)$ are Ahlfors $Q_Z$-regular and Ahlfors $Q_W$-regular spaces with $Q_Z>0$ and $Q_W>0$, respectively, and let $\theta_1>0$, $\theta_2>0$, $s>0$, $s^\prime>0$ and $p\geq 1$ be constants such that \begin{equation}\label{s-s-relation} Q_Z\geq \theta_1 Q_W\;\;\mbox{and}\;\; s^\prime\leq \theta_2 s+\frac{\theta_2 Q_W-Q_Z}{p}. \end{equation} Suppose that $f: (Z, d_Z)\rightarrow (W, d_W)$ is a locally $(\theta_1, \theta_2, r)$-biH\"{o}lder continuous mapping with $0<r<2\,{\operatorname{diam}} Z$. Then $f$ induces a canonical bounded embedding $f_{\#}: B^{s}_{p, p}(W)\rightarrow B^{ s^\prime}_{p, p}(Z)$ via composition. \end{thm} The terminology appeared in Theorem \ref{thm-1} and in the rest of this section will be introduced in Section \ref{sec-2} unless stated otherwise. \begin{rem} Theorem \ref{thm-1} is a generalization of \cite[Proposition 7.2]{BBGS}. This is because when $(Z, d_Z)$ is bounded and $r>{\operatorname{diam}} Z$, Theorem \ref{thm-1} coincides with \cite[Proposition 7.2]{BBGS}. 
Further, Example \ref{ex} and Remark \ref{sec5}$(ii)$ below show that Theorem \ref{thm-1} is more general than \cite[Proposition 7.2]{BBGS}. \end{rem} As we know, a quasisymmetric mapping between bounded uniformly perfect spaces is locally biH\"{o}lder continuous since it follows from \cite[Theorem 3.14]{TV} or \cite[Corollary 11.5]{H} that it is biH\"{o}lder continuous. Naturally, one may ask whether there is an analog when the underlying spaces are unbounded. However, Example \ref{ex-add} below shows that not every quasisymmetric mapping between unbounded uniformly perfect spaces is locally biH\"{o}lder continuous. As the second purpose of this paper, we seek a characterization of when a quasisymmetric mapping is locally biH\"{o}lder continuous. Before the statement of our result, let us introduce the following concept. \begin{defn} For $0<r< 2 {\operatorname{diam}} Z$, a homeomorphism $f: (Z, d_Z)\to (W, d_W)$ is called $r$-{\it uniformly bounded} if there exist constants $a$ and $b$ with $0<a<b$ such that for all $x\in Z$, $$a \leq {\operatorname{diam}} f\big(B(x, r)\big)\leq b,$$ where $B(x, r)=\{y\in Z:\; d_Z(y, x)<r\}$, i.e., the open ball in $Z$ with center $x$ and radius $r$. \end{defn} The following property shows that in the definition of $r$-uniform boundedness, the exact value of the parameter $r$ is not important for quasisymmetric mappings. \begin{prop}\label{uniformly-bounded} Suppose that $(Z, d_Z)$ is a $\kappa$-uniformly perfect space with $\kappa>1$ and $f: (Z, d_Z)\to (W, d_W)$ is $\eta$-quasisymmetric. If $f$ is $r$-uniformly bounded for some $r$ with $0<r< 2 {\operatorname{diam}} Z$, then $f$ is $s$-uniformly bounded for any $s\in (0,2{\operatorname{diam}} Z)$. \end{prop} Note that quasisymmetry in a uniformly perfect space implies power quasisymmetry (see Theorem $A$ below).
Based on the uniform boundedness, we obtain the following geometric characterization for a (power) quasisymmetric mapping between unbounded uniformly perfect spaces to be locally biH\"{o}lder continuous. \begin{thm}\label{thm-2} Suppose that $(Z, d_Z)$ is $\kappa$-uniformly perfect with $\kappa>1$, and $f: (Z, d_Z)\to (W, d_W)$ is a $(\theta, \lambda)$-power quasisymmetric mapping with $\theta\geq 1$ and $\lambda\geq 1$. Then for any $r\in (0,2 {\operatorname{diam}} Z)$, the following are quantitatively equivalent: \begin{enumerate} \item[$(1)$] $f$ is locally $(\theta, 1/\theta, r)$-biH\"{o}lder continuous. \item[$(2)$] $f$ is $r$-uniformly bounded. \end{enumerate} \end{thm} Here, for two conditions, we say that Condition $\Phi$ quantitatively implies Condition $\Psi$ if Condition $\Phi$ implies Condition $\Psi$ and the data of Condition $\Psi$ depends only on that of Condition $\Phi$. If Condition $\Psi$ also quantitatively implies Condition $\Phi$, then we say that Condition $\Phi$ is equivalent to Condition $\Psi$, quantitatively. \begin{rem} In Theorem \ref{thm-2}, the assumption that $(Z, d_Z)$ is uniformly perfect cannot be removed. For example, the identity mapping of integers ${\operatorname{id}}: \mathbb Z\rightarrow \mathbb Z$ with the standard Euclidean distance is $1$-biLipschitz, and thus, it is power quasisymmetric, and locally $(\theta_1,\theta_2,r)$-biH\"{o}lder continuous for any $\theta_1>0$, $\theta_2>0$ and $0<r<1$. However, it is not $s$-uniformly bounded for any $s\in (0,1)$. \end{rem} Throughout this paper, the letter $C$ (sometimes with a subscript) denotes a positive constant that depends only on the given parameters of the spaces and may change at different occurrences. The notation $A\lesssim B$ (resp. $A \gtrsim B$) means that there is a constant $C_1\geq 1$ (resp. $C_2\geq 1$) such that $A \leq C_1 \cdot B$ (resp. $A \geq C_2 \cdot B).$ We also call $C_1$ and $C_2$ comparison coefficients of $A$ and $B$. In particular, $C_1$ (resp. 
$C_2$) is called an upper comparison coefficient (resp. a lower comparison coefficient) for $A$ and $B$. If $A\lesssim B$ and $A \gtrsim B$, then we write $A\approx B$. The paper is organized as follows. In Section \ref{sec-2}, some basic concepts and known results will be introduced. Section \ref{sec-3} will be devoted to the proof of Theorem \ref{thm-1}. In Section \ref{sec-4}, the proofs of Proposition \ref{uniformly-bounded} and Theorem \ref{thm-2} will be presented, and in Section \ref{sec-5}, two examples will be constructed. \section{Basic terminologies}\label{sec-2} In this section, we introduce some necessary notions and notations. A metric space $(Z, d_Z)$ is called $\kappa$-{\it uniformly perfect} with $\kappa> 1$ if for each $x\in Z$ and for each $r>0$, the set $B(x, r)\setminus B(x, r/\kappa)$ is nonempty whenever the set $Z\setminus B(x, r)$ is nonempty. Sometimes, $(Z, d_Z)$ is called {\it uniformly perfect} if $Z$ is $\kappa$-{\it uniformly perfect} for some $\kappa > 1$. \begin{lem}\label{1-8-4} Suppose that $(Z, d_Z)$ is $\kappa$-uniformly perfect with $\kappa>1$, and let $x\in Z$. Then for any $r\in(0,2{\operatorname{diam}} Z)$, there exists $z\in Z$ such that $$\frac{r}{\mu}\leq d_Z(x, z)<r,$$ where $\mu=\max\{8, \kappa\}$. \end{lem} \begin{pf} Let $x\in Z$. Since $0<r<2{\operatorname{diam}} Z$, the triangle inequality gives $\sup_{y\in Z}d_Z(x, y)\geq {\operatorname{diam}} Z/2>r/4$, and so there exists $y\in Z$ such that $d_Z(x, y)>r/8$. If $d_Z(y, x)< r$, by letting $z=y$, we see that the lemma is true. If $d_Z(y, x)\geq r$, then the uniform perfectness of $(Z,d_Z)$ implies that there is $y^\prime\in Z$ such that $r/\kappa\leq d_Z(x, y^\prime)<r$. By letting $z=y^\prime$, we know that the lemma holds true as well. 
\end{pf} A homeomorphism $f: (Z, d_Z)\to (W, d_W)$ is called {\it $\eta$-quasisymmetric} if there exists a self-homeomorphism $\eta$ of $[0, +\infty)$ such that for all triples of points $x, y, z\in Z$, \begin{equation}\label{eta} \frac{d_W(f(x), f(z))}{d_W(f(y), f(z))}\leq \eta\left(\frac{d_Z(x, z)}{d_Z(y, z)}\right). \end{equation} In particular, if \eqref{eta} holds with $\eta=\eta_{\lambda, \theta}$ for some constants $\theta\geq 1$ and $\lambda\geq 1$, where \begin{equation*}\label{eq-1.1} \eta_{\lambda, \theta}(t)= \left\{\begin{array}{cl} \lambda t^{\frac{1}{\theta}}& \text{for} \;\; 0<t<1, \\ \lambda t^{\theta}& \text{for} \;\; t\geq 1, \end{array}\right. \end{equation*} then $f$ is called a $(\theta, \lambda)$-{\it power quasisymmetric mapping}. Here, the notation $\eta_{\lambda, \theta}$ means that the control function $\eta$ depends only on the given parameters $\theta$ and $\lambda$. \begin{Thm}[{\cite[Theorem 11.3]{H}}]\label{Thm-A} An $\eta$-quasisymmetric mapping of a uniformly perfect space is $(\theta, \lambda)$-power quasisymmetric, quantitatively. \end{Thm} In the following, we always use the notation $(Z, d_Z, \nu_Z)$ to denote a metric space $(Z, d_Z)$ admitting a Borel regular measure $\nu_Z$. A metric measure space $(Z, d_Z, \nu_Z)$ is called \begin{enumerate} \item[$(1)$] {\it doubling} if there exists a constant $C\geq 1$ such that for all $x\in Z$ and $0<r<2{\operatorname{diam}} Z$, $$0< \nu_Z(B(x, 2r))\leq C \nu_Z\big(B(x, r)\big)<\infty.$$ \item[$(2)$] {\it $Q_Z$-Ahlfors regular} with $Q_Z>0$ if there exists a constant $C\geq 1$ such that for all $z\in Z$ and $0<r<2 {\operatorname{diam}} Z$, $$ C^{-1} r^{Q_Z}\leq \nu_Z\big(B(z, r)\big) \leq C r^{Q_Z}. $$ \end{enumerate} It is known that every Ahlfors regular space is doubling and uniformly perfect (cf. \cite[Section 11]{H}). 
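For a concrete instance of these definitions (a standard toy example, chosen by us): snowflaking the target metric of the identity map on $\mathbb R$, i.e. taking $d_W(a,b)=|a-b|^{1/2}$, produces a $(2,1)$-power quasisymmetric mapping, and the control function $\eta_{1,2}$ can be checked on random triples.

```python
# Sampled check (illustrative) that id: (R, |.|) -> (R, |.|^(1/2)) satisfies
# the quasisymmetry inequality with control function eta_{lambda=1, theta=2}.
import random

def eta(t, lam=1.0, theta=2.0):
    # eta_{lam, theta}: lam * t^(1/theta) for t < 1, lam * t^theta for t >= 1
    return lam * t ** (1.0 / theta) if t < 1 else lam * t ** theta

def dW(a, b):
    return abs(a - b) ** 0.5   # the snowflaked target metric

random.seed(1)
for _ in range(10000):
    x, y, z = (random.uniform(-10, 10) for _ in range(3))
    if x == z or y == z:
        continue
    t = abs(x - z) / abs(y - z)
    # here the left-hand ratio equals sqrt(t), which is <= eta(t) for all t > 0
    assert dW(x, z) / dW(y, z) <= eta(t) + 1e-12
```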
For given $1\leq p<\infty$, $s>0$ and a function $u: Z\to\mathbb R$, the {\it homogeneous Besov norm} on the metric measure space $(Z, d_Z, \nu_Z)$ is defined by \begin{equation}\label{eq-besov} \|u\|_{\dot B_{p, p}^{s}(Z)}=\left(\int_Z \int_Z\frac{|u(x)-u(y)|^p}{d_Z(x, y)^{sp}}\frac{d\nu_Z(x)\,d\nu_Z(y)}{\nu_Z(B(x, d_Z(x, y)))}\right)^{1/p}. \end{equation} We write the {\it homogeneous Besov space} $\dot B_{p, p}^{s}(Z)$ for the subspace of $L^p_{\text{loc}}(Z)$ consisting of all functions $u$ such that $$\|u\|_{\dot B_{p, p}^{s}(Z)}<\infty.$$ We note that, properly speaking, \eqref{eq-besov} is actually a seminorm on $L^p_{\text{loc}}(Z)$ since any constant function has Besov norm 0. We define the {\it Besov space} $B_{p, p}^{s}(Z)$ to be the normed space of all measurable functions $u\in L^{p}(Z)$ such that $$ \|u\|_{B_{p, p}^{s}(Z)}=\|u\|_{L^{p}(Z)}+\|u\|_{\dot B_{p,p}^{s}(Z)}<\infty. $$ \section{Locally biH\"{o}lder continuous mappings and their induced embeddings}\label{sec-3} The aim of this section is to prove Theorem \ref{thm-1}. Before the proof, we need some preparation, which consists of the following two auxiliary lemmas. \begin{lem}\label{lem-besov-norm} Suppose that $(Z, d_Z, \nu_Z)$ is Ahlfors $Q_Z$-regular with $Q_Z>0$. Let $n_0\in \mathbb Z$, and for each $n\in \mathbb Z$, let $t_n=C\sigma^n$, where $C>0$ and $0<\sigma<1$. For any $s>0$ and $p\geq 1$, if $u\in L^{p}(Z)$, then $$\|u\|^p_{B^s_{p, p}(Z)}\approx \|u\|^p_{L^p(Z)}+\sum_{n=n_0}^{+\infty} t_n^{-sp}\int_Z\vint_{B(x, t_n)}|u(x)-u(y)|^p d\nu_Z(y)d\nu_Z(x),$$ where the comparison coefficients depend on $n_0$. \end{lem} \begin{proof} The following estimate easily follows from arguments similar to those in the proof of \cite[Theorem 5.2]{GKS10} or \cite[Lemma 5.4]{BBGS}: \begin{eqnarray}\label{1-3-1} \|u\|^p_{\dot B^s_{p, p}(Z)} \approx \sum_{n=-\infty}^{+\infty} t_n^{-sp}\int_Z\vint_{B(x, t_n)}|u(x)-u(y)|^p d\nu_Z(y)d\nu_Z(x). \end{eqnarray} Let $n_0\in \mathbb Z$. 
Then the estimate \eqref{1-3-1} shows that to prove the estimate in the lemma, it suffices to show that \begin{equation*}\label{norm-equiv} \sum_{n=-\infty}^{n_0} t_n^{-sp}\int_Z\vint_{B(x, t_n)}|u(x)-u(y)|^p d\nu_Z(y)d\nu_Z(x)\lesssim \|u\|^p_{L^p(Z)}. \end{equation*} Note that \begin{align*} \int_Z\vint_{B(x, t_n)}|u(x)-u(y)|^p d\nu_Z(y)d\nu_Z(x)&\lesssim \int_Z\vint_{B(x, t_n)} (|u(x)|^p+|u(y)|^p)d\nu_Z(y)d\nu_Z(x)\\ &=\|u\|^p_{L^p(Z)}+\int_Z\vint_{B(x, t_n)} |u(y)|^pd\nu_Z(y)d\nu_Z(x). \end{align*} Since $(Z, d_Z, \nu_Z)$ is Ahlfors $Q_Z$-regular, we know that for any $y\in B(x, t_n)$, $$\nu_Z(B(x, t_n))\approx \nu_Z(B(y, t_n)).$$ It follows from the Fubini theorem that \begin{align*} \int_Z\vint_{B(x, t_n)} |u(y)|^pd\nu_Z(y)d\nu_Z(x)&\approx \int_Z\int_Z \frac{|u(y)|^p\chi_{B(x, t_n)}(y)}{\nu_Z(B(y, t_n))} d\nu_Z(y)d\nu_Z(x)\\ &=\int_Z |u(y)|^p d\nu_Z(y) \int_{Z}\frac{\chi_{B(x, t_n)}(y)}{\nu_Z(B(y, t_n))} d\nu_Z(x)\\ &=\|u\|^p_{L^p(Z)}. \end{align*} Therefore, $$\sum_{n=-\infty}^{n_0} t_n^{-sp}\int_Z\vint_{B(x, t_n)}|u(x)-u(y)|^p d\nu_Z(y)d\nu_Z(x)\lesssim \sum_{n=-\infty}^{n_0} t_n^{-sp} \|u\|^p_{L^p(Z)}\lesssim \|u\|^p_{L^p(Z)},$$ which is what we need, and hence, the lemma is proved. \end{proof} \begin{lem}\label{embedding-lp} Assume that $(Z, d_Z, \nu_Z)$ and $(W, d_W, \nu_W)$ are Ahlfors $Q_Z$-regular and Ahlfors $Q_W$-regular spaces with $Q_Z>0$ and $Q_W>0$, respectively. Let $\theta_1>0$, $\theta_2>0$ and $0<r<2{\operatorname{diam}} Z$. Suppose that $f: Z\rightarrow W$ is a locally $(\theta_1, \theta_2, r)$-biH\"{o}lder continuous mapping such that $Q_Z\geq \theta_1 Q_W$. Then for any $p\geq 1$, the mapping $f$ induces a bounded embedding $f_{\#}: L^p(W)\rightarrow L^p(Z)$ via composition. \end{lem} When $Z$ and $W$ are bounded and $f$ is biH\"{o}lder continuous, Lemma \ref{embedding-lp} coincides with \cite[Lemma 7.1]{BBGS}. The proof method of \cite[Lemma 7.1]{BBGS} is also applicable to Lemma \ref{embedding-lp}, and so, we omit the details here. 
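The tail bound at the end of the proof of Lemma \ref{lem-besov-norm} rests on the fact that $\sum_{n\le n_0} t_n^{-sp}$ is a convergent geometric series when $t_n=C\sigma^n$ with $0<\sigma<1$. A quick numerical illustration (the parameter values $C=1$, $\sigma=1/2$, $s=1$, $p=2$, $n_0=0$ are arbitrary choices of ours):

```python
# Illustrative check: sum_{n <= n0} t_n^(-s p) with t_n = C * sigma^n is a
# geometric series with ratio sigma^(s p) < 1, summing to C^(-sp) a^(-n0)/(1-a).
C, sigma, s, p, n0 = 1.0, 0.5, 1.0, 2.0, 0

a = sigma ** (s * p)                 # common ratio, here 1/4
partial = sum((C * sigma ** n) ** (-s * p) for n in range(n0, n0 - 60, -1))
closed_form = C ** (-s * p) * a ** (-n0) / (1 - a)

assert abs(partial - closed_form) < 1e-9    # both are 4/3 here
```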
Now, we are ready to prove Theorem \ref{thm-1}. \subsection*{Proof of Theorem \ref{thm-1}} Assume that $(Z, d_Z, \nu_Z)$ and $(W, d_W, \nu_W)$ are Ahlfors $Q_Z$-regular and Ahlfors $Q_W$-regular spaces with $Q_Z>0$ and $Q_W>0$, respectively. Suppose that $f: (Z, d_Z)\rightarrow (W, d_W)$ is a locally $(\theta_1, \theta_2, r)$-biH\"{o}lder continuous mapping with $\theta_1\geq \theta_2>0$ and $r\in (0, 2{\operatorname{diam}} Z)$. Let $u\in B^{s}_{p, p}(W)$ with $s>0$ and $p\geq 1$, and let $v=u\circ f$. Since $u\in L^p(W)$, it follows from Lemma \ref{embedding-lp} that \begin{equation}\label{eq-1-4} \|v\|_{L^p(Z)}\lesssim \|u\|_{L^p(W)}, \end{equation} which implies $v\in L^p(Z)$. For each $n\in \mathbb{Z}$, let $t_n=C\sigma^n$, where $C>0$ and $0<\sigma<1$. Clearly, there is $n_0\in \mathbb Z$ such that for all $n\geq n_0$, $$t_{n}<r.$$ Also, it follows from Lemma \ref{lem-besov-norm} that for any $s'>0$, \begin{align} \|v\|^p_{B^{s^\prime}_{p, p}(Z)} \approx \|v\|^p_{L^p(Z)}+\sum_{n=n_0}^{+\infty} I_n, \label{zw-10-16} \end{align} where $$I_n=t_n^{- s^\prime p}\int_Z\vint_{B(x, t_n)}|v(x)-v(y)|^p d\nu_Z(y)d\nu_Z(x).$$ In the following, we are going to estimate $I_n$. For this, we first estimate the integral $$i_n=\vint_{B(x, t_n)}|v(x)-v(y)|^p d\nu_Z(y).$$ Since for any $n\geq n_0$, $t_n<r$, we infer from the local biH\"{o}lder continuity of $f$ that there is $C_0\geq 1$ such that $$f(B(x, t_n))\subset B(f(x), C_0t_n^{\theta_2}).$$ Then the Ahlfors regularity of $(Z, d_Z, \nu_Z)$ gives \begin{align}\label{1-5-2} i_n &\approx \frac{1}{t_n^{Q_Z}} \int_{Z}|v(x)-u\circ f(y)|^p \chi_{B(x, t_n)}(y) d\nu_Z(y) \\ \nonumber &\leq \frac{1}{t_n^{Q_Z}} \int_{Z}|v(x)-u\circ f(y)|^p \chi_{B(f(x), C_0t_n^{\theta_2})}(f(y)) d\nu_Z(y). \end{align} As $v\in L^p(Z)$, we know that $v(x)$ is finite for almost every $x\in Z$. 
Since \begin{align*} \int_W |v(x)-u(y^\prime)|^p\chi_{B(f(x), C_0t_n^{\theta_2})}(y^\prime)d\nu_W(y^\prime)\lesssim \;& |v(x)|^p\nu_W(B(f(x), C_0t_n^{\theta_2}))\\ &+ \int_{B(f(x), C_0t_n^{\theta_2})}|u(y^\prime)|^pd\nu_W(y^\prime), \end{align*} we see from the Ahlfors regularity of $(W, d_W, \nu_W)$ that for almost every $x\in Z$, as a function of $y^\prime$, $|v(x)-u(y^\prime)|^p\chi_{B(f(x), C_0t_n^{\theta_2})}(y^\prime)$ belongs to $L^1(W)$. It follows from Lemma \ref{embedding-lp} that for almost every $x\in Z$, $$\int_{Z}|v(x)-u\circ f(y)|^p \chi_{B(f(x), C_0t_n^{\theta_2})}(f(y)) d\nu_Z(y)\lesssim \int_W |v(x)-u(y^\prime)|^p\chi_{B(f(x), C_0t_n^{\theta_2})}(y^\prime) d\nu_W(y^\prime),$$ and thus, we deduce from \eqref{1-5-2} that \begin{align*} i_n \lesssim t_n^{\theta_2 Q_W-Q_Z} \vint_{B(f(x), C_0t_n^{\theta_2})} |v(x)-u(y^\prime)|^p d\nu_W(y^\prime). \end{align*}This is what we want. Since $u\in B^{s}_{p, p}(W)$, it follows from Lemma \ref{lem-besov-norm} that for any $n\geq n_0$, \begin{equation*} \int_W \vint_{B(x^\prime, C_0t_n^{\theta_2})} |u(x^\prime)-u(y^\prime)|^p d\nu_W(y^\prime)d\nu_W(x^\prime)<\infty, \end{equation*} which shows that $\vint_{B(x^\prime, C_0t_n^{\theta_2})} |u(x^\prime)-u(y^\prime)|^p d\nu_W(y^\prime)$ belongs to $L^1(W)$ as a function of $x^\prime$. Again, by Lemma \ref{embedding-lp}, we obtain that \begin{align*} I_n&\lesssim t_n^{-s^\prime p+\theta_2 Q_W-Q_Z} \int_Z \vint_{B(f(x), C_0t_n^{\theta_2})} |u\circ f(x)-u(y^\prime)|^p d\nu_W(y^\prime) d\nu_Z(x)\\ &\lesssim t_n^{-s^\prime p+\theta_2 Q_W-Q_Z} \int_W \vint_{B(x^\prime, C_0t_n^{\theta_2})} |u(x^\prime)-u(y^\prime)|^p d\nu_W(y^\prime)d\nu_W(x^\prime). \end{align*} Assume that $s$ and $s^\prime$ satisfy the relation \eqref{s-s-relation}. 
Then we know that for any $n\geq n_0$, $$t_n^{-s^\prime p+\theta_2 Q_W-Q_Z}\leq r^{\theta_2(Q_W+sp)-s^\prime p-Q_Z}(t_n^{\theta_2})^{-sp}.$$ This implies that $$I_n\lesssim (t_n^{\theta_2})^{-sp} \int_W \vint_{B(x^\prime, C_0t_n^{\theta_2})} |u(x^\prime)-u(y^\prime)|^p d\nu_W(y^\prime)d\nu_W(x^\prime).$$ By substituting the estimate of $I_n$ into \eqref{zw-10-16}, we conclude from Lemma \ref{lem-besov-norm} that $$\|v\|^p_{B^{s^\prime}_{p, p}(Z)} \lesssim \|u\|^p_{B^{s}_{p, p}(W)}. $$ Let $$f_{\#}(u)=u\circ f.$$ Then we have proved that $f_{\#}: B^{s}_{p, p}(W)\rightarrow B^{ s^\prime}_{p, p}(Z)$ is a bounded embedding. \qed \section{Power quasisymmetry, local biH\"{o}lder continuity and uniform boundedness}\label{sec-4} The purpose of this section is to prove Proposition \ref{uniformly-bounded} and Theorem \ref{thm-2}. \begin{proof}[{\bf Proof of Proposition \ref{uniformly-bounded}}] It follows from the assumption of $f$ being $r$-uniformly bounded that there must exist constants $a>0$ and $b>0$ such that for all $x\in Z$, \begin{eqnarray}\label{1-7-1}a\leq {\operatorname{diam}} f\big(B(x, r)\big)\leq b.\end{eqnarray} Let $x\in Z$ and $s\in (0, 2{\operatorname{diam}} Z)$. To prove that $f$ is $s$-uniformly bounded, we only need to consider the two cases $s>r$ and $s<r$, since the case $s=r$ is trivial. 
For the first case, it follows from the fact $B(x, r)\subset B(x, s)$ and \eqref{1-7-1} that \begin{eqnarray}\label{1-7-2}{\operatorname{diam}} f(B(x, s))\geq {\operatorname{diam}} f\big(B(x, r)\big)\geq a.\end{eqnarray} If $B(x, s)\setminus B(x, r)=\emptyset$, obviously, we obtain from \eqref{1-7-1} that \begin{eqnarray}\label{1-7-3} {\operatorname{diam}} f(B(x, s))={\operatorname{diam}} f\big(B(x, r)\big)\leq b, \end{eqnarray} and if $B(x, s)\setminus B(x, r)\not=\emptyset$, it follows from the uniform perfectness of $(Z, d_Z)$ that there exists $z\in B(x, r)$ such that $$\frac{r}{\kappa}\leq d_Z(x, z)<r.$$ This indicates that for any $y\in B(x, s)$, $$\frac{d_Z(x, y)}{d_Z(x, z)}\leq \frac{\kappa s}{r}.$$ Then the $\eta$-quasisymmetry of $f$ gives $$\frac{d_W(f(x), f(y))}{d_W(f(x), f(z))}\leq \eta\left(\frac{\kappa s}{r}\right),$$ and thus, we get $$d_W(f(x), f(y))\leq \eta\left(\frac{\kappa s}{r}\right) {\operatorname{diam}} f\big(B(x, r)\big)\leq \eta\left(\frac{\kappa s}{r}\right) b.$$ This implies that \begin{eqnarray}\label{1-7-4} {\operatorname{diam}} f(B(x, s))\leq \eta\left(\frac{\kappa s}{r}\right) b.\end{eqnarray} For the remaining case, that is, $s<r$, the fact $B(x, s)\subset B(x, r)$ leads to \begin{eqnarray}\label{1-7-5} {\operatorname{diam}} f(B(x, s))\leq {\operatorname{diam}} f\big(B(x, r)\big)\leq b.\end{eqnarray} If $B(x, r)\setminus B(x, s)=\emptyset$, apparently, \begin{eqnarray} \label{1-7-6} {\operatorname{diam}} f(B(x, s))={\operatorname{diam}} f\big(B(x, r)\big)\geq a, \end{eqnarray} and if $B(x, r)\setminus B(x, s)\not=\emptyset$, then the similar reasoning as in the proof of \eqref{1-7-4} ensures that \begin{eqnarray} \label{1-7-7} {\operatorname{diam}} f(B(x, s))\geq \frac{a}{\eta\left(\frac{\kappa r}{s}\right)}. 
\end{eqnarray} Now, we conclude from \eqref{1-7-2}$-$\eqref{1-7-7} that for all $x\in Z$, $$a_1\leq {\operatorname{diam}} f(B(x, s))\leq b_1,$$ where $$a_1=\min\left\{a,\;\frac{a}{\eta\left(\frac{\kappa r}{s}\right)}\right\}\;\;\mbox{and}\;\;b_1=\max\left\{b,\;\eta\left(\frac{\kappa s}{r}\right)b\right\}.$$ This shows that $f$ is $s$-uniformly bounded. \end{proof} \begin{proof}[{\bf Proof of Theorem \ref{thm-2}}] $(1)\Rightarrow(2)$. Assume that $f$ is locally $(\theta, 1/\theta, r)$-biH\"{o}lder continuous with $\theta\geq 1$ and $0<r<2{\operatorname{diam}} Z$. Then there is $C\geq 1$ such that for any $x\in Z$ and any $y\in B(x, r)$, \begin{equation}\label{lem-10-11} C^{-1}d_Z(x, y)^{\theta} \leq d_W(f(x), f(y)) \leq Cd_Z(x, y)^{1/\theta}, \end{equation} which leads to $$ {\operatorname{diam}} f\big(B(x, r)\big)\leq 2Cr^{1/\theta}. $$ Moreover, it follows from Lemma \ref{1-8-4} that there is $z\in Z$ such that $$\frac{r}{\mu}\leq d_Z(x, z)<r,$$ where $\mu=\max\{8, \kappa\}$. Then \eqref{lem-10-11} leads to $${\operatorname{diam}} f\big(B(x, r)\big)\geq d_W(f(x), f(z))\geq \frac{r^\theta}{\mu^\theta C}.$$ These show that $f$ is $r$-uniformly bounded. $(2)\Rightarrow(1)$. Assume that $f$ is $r$-uniformly bounded with $0<r<2{\operatorname{diam}} Z$. This assumption implies that there are two constants $a>0$ and $b>0$ such that for any $\xi\in Z$, \begin{equation}\label{l1-8-1} a\leq {\operatorname{diam}} f(B(\xi, r))\leq b. \end{equation} Let $x\in Z$. By Lemma \ref{1-8-4}, we see that there is $\zeta\in Z$ such that \begin{equation}\label{1-8-7} \frac{r}{\mu}\leq d_Z(x, \zeta)<r, \end{equation} where $\mu=\max\{8, \kappa\}$. We assert that \begin{equation}\label{lemma-bdd} \frac{a}{3\lambda {\mu}^\theta} \leq d_W(f(x), f(\zeta))\leq b. \end{equation} The right-side inequality of \eqref{lemma-bdd} easily follows from \eqref{l1-8-1}. 
For the proof of the left-side inequality, let $\zeta_1\in B(x, r)$ be such that $$d_W(f(x), f(\zeta_1))\geq \frac {1}{3}{\operatorname{diam}} f\big(B(x, r)\big),$$ and then, it follows from \eqref{l1-8-1} that $$d_W(f(x), f(\zeta_1))\geq \frac a 3.$$ Since $$\frac{d_Z(x, \zeta_1)}{d_Z(x, \zeta)}\leq \mu,$$ we know from the assumption of $f$ being $(\theta, \lambda)$-power quasisymmetric with $\theta\geq 1$ and $\lambda\geq 1$ that $$\frac{d_W(f(x), f(\zeta_1))}{d_W(f(x), f(\zeta))} \leq \lambda {\mu}^{\theta}.$$ Hence $${d_W(f(x), f(\zeta))} \geq \frac{1}{\lambda {\mu}^\theta}d_W(f(x), f(\zeta_1)) \geq \frac{a}{3\lambda {\mu}^\theta},$$ which is what we need. Thus the estimates in \eqref{lemma-bdd} are proved. Let $y\in Z$ be such that $$d_Z(x, y)<r.$$ If $d_Z(x, y)\geq d_Z(x, \zeta)$, then $r/{\mu}\leq d_Z(x, y)<r$. It follows from \eqref{lemma-bdd} that \begin{eqnarray}\label{1-8-5} \frac{a}{3\lambda (r\mu)^\theta} d_Z(x, y)^{\theta} \leq \frac{a}{3\lambda {\mu}^\theta} \leq d_W(f(x), f(y))\leq b\leq \frac{b\mu^{\frac{1}{\theta}}}{r^{\frac{1}{\theta}}} d_Z(x, y)^{\frac{1}{\theta}}. \end{eqnarray} If $d_Z(x, y)<d_Z(x, \zeta)$, it follows from the assumption of $f$ being $(\theta, \lambda)$-power quasisymmetric that \begin{equation*} \lambda^{-1} \left(\frac{d_Z(x, y)}{d_Z(x, \zeta)}\right)^{\theta}\leq \frac{d_W(f(x), f(y))}{d_W(f(x), f(\zeta))}\leq \lambda\left(\frac{d_Z(x, y)}{d_Z(x, \zeta)}\right)^{1/\theta}, \end{equation*} and then, we deduce from \eqref{1-8-7} and \eqref{lemma-bdd} that \begin{eqnarray}\label{1-8-6} \frac{a}{3\lambda^2 {(r\mu)}^\theta } d_Z(x, y)^{\theta}\leq d_W(f(x), f(y))\leq \frac{\lambda b\mu^{\frac{1}{\theta}}}{r^{\frac{1}{\theta}}}d_Z(x, y)^{\frac{1}{\theta}}. \end{eqnarray} Now, we conclude from \eqref{1-8-5} and \eqref{1-8-6} that $f$ is locally $(\theta, 1/\theta, r)$-biH\"{o}lder continuous, and hence, the theorem is proved. 
\end{proof} \section{Examples}\label{sec-5} As an application of Theorem \ref{thm-2}, in this section, we construct two examples. The first example gives a quasisymmetric mapping between unbounded uniformly perfect spaces, which is not locally biH\"older continuous. In the second example, we construct a locally biH\"older continuous mapping between unbounded Ahlfors regular spaces, which is not biH\"{o}lder continuous. This example, together with Remark \ref{sec5}$(ii)$ below, also illustrates that Theorem \ref{thm-1} is more general than \cite[Proposition 7.2]{BBGS}. \begin{example}\label{ex-add} Let $f$ be the radial stretching $f(x)=|x|x$ of $(\mathbb R^2, |\cdot|)$, where $|\cdot|$ denotes the usual Euclidean metric. Then $f$ is a power quasisymmetric mapping but not locally biH\"older continuous. \end{example} \begin{proof} It follows from \cite[p. 49]{V1} or \cite[p. 309]{HKM} that $f$ is a quasiconformal mapping. It is a fundamental fact that quasiconformal self-mappings of Euclidean spaces with dimension at least two are quasisymmetric; see, for example, Gehring \cite{G1} or Heinonen-Koskela \cite{HK}. This fact implies that $f$ is a quasisymmetric mapping. Then we know from Theorem A that $f$ is power quasisymmetric. Here, we refer interested readers to \cite{HK,V1} for the definitions of quasiconformal mappings. Suppose on the contrary that $f$ is locally biH\"older continuous. By Theorem \ref{thm-2} and Proposition \ref{uniformly-bounded}, $f$ must be $1$-uniformly bounded. However, for any $(n, 0)\in \mathbb R^2$ with $n\in \mathbb N$, a direct computation gives that $${\operatorname{diam}} f\left(B\big((n, 0), 1\big)\right)\geq (n+1)^2-n^2= 2n+1,$$ which contradicts the uniform boundedness condition. We conclude that $f$ is not locally biH\"older continuous. 
\end{proof} \begin{example}\label{ex} Let $f$ be the following self-homeomorphism of $(\mathbb R^2, |\cdot|)$: \begin{equation*} f(x)=\left\{\begin{array}{cl} 0,& x=0,\\ \frac{x}{|x|}\cdot |x|^{\frac 12}, &0<|x|<1, \\ x, &|x|\geq 1. \end{array}\right. \end{equation*} Then the following statements hold. \begin{enumerate} \item[$(1)$] $f$ is power quasisymmetric. \item[$(2)$] $f$ is locally biH\"older continuous. \item[$(3)$] $f$ is not $(\theta_1, \theta_2)$-biH\"older continuous for any $\theta_1>0$ and $\theta_2>0$. \end{enumerate} \end{example} \begin{proof} $(1)$ The statement $(1)$ in the example follows from an argument similar to the one in the proof of Example \ref{ex-add}. $(2)$ To show that $f$ is locally biH\"older continuous, by Theorem \ref{thm-2}, it suffices to show that $f$ is $r$-uniformly bounded for some $r>0$. Choose $r=2$. Then it is obvious from the definition of $f$ that for any $x\in \mathbb R^2$, $$2\leq {\operatorname{diam}} f(B(x, 2))\leq 6.$$ This implies that $f$ is $2$-uniformly bounded, and hence, it is locally biH\"older continuous. $(3)$ Suppose on the contrary that $f$ is $(\theta_1, \theta_2)$-biH\"older continuous for some $\theta_1$ and $\theta_2$ with $\theta_1\geq \theta_2>0$. Then $f$ is a snowflake mapping since $(\mathbb R^2, |\cdot|)$ is unbounded. That is, there are constants $C\geq 1$ and $\theta>0$ such that for any pair of points $x$ and $y$, $$ C^{-1}|x-y|^{\theta}\leq |f(y)-f(x)|\leq C|x-y|^{\theta}. $$ However, for any $x$ with $|x|<1$, $$ |f(x)-f(0)|=|x|^{\frac12}, $$ which implies that $\theta = \frac12$; and for any $x$ with $|x|\geq1$, $$ |f(x)-f(0)|=|x|, $$ which shows that $\theta = 1$. We conclude from this contradiction that $f$ is not $(\theta_1, \theta_2)$-biH\"older continuous for any $\theta_1>0$ and $\theta_2>0$. 
\end{proof} \begin{rem}\label{sec5} $(i)$ Following the arguments in the proofs of statements $(2)$ and $(3)$ in Example \ref{ex}, it is not difficult to show that the mapping $f$ in Example \ref{ex} is locally $(1, \frac12, 2)$-biH\"older continuous. We omit the detailed computations here. $(ii)$ It is known that $\mathbb R^2$ is $2$-Ahlfors regular. Let $s>0$, $s^\prime>0$ and $p\geq 1$ be parameters such that $(s-2s^\prime)p\geq 2$. Then we see that all assumptions in Theorem \ref{thm-1} are satisfied. Therefore, we infer from Theorem \ref{thm-1} that $f$ induces a canonical bounded embedding $f_{\#}: B^{s}_{p, p}(\mathbb R^2)\rightarrow B^{ s^\prime}_{p, p}(\mathbb R^2)$ via composition. \end{rem} \subsection*{Acknowledgments} The second author (X. Wang) was partly supported by NNSF of China under the number 12071121, and the third author (Z. Wang) was partly supported by NNSF of China under the number 12101226. \vspace*{5mm}
https://arxiv.org/abs/1602.01591
A Partial Proof of a Conjecture of Dris
Euler showed that if an odd perfect number $N$ exists, it must consist of two parts $N=q^k n^2$, with $q$ prime, $q \equiv k \equiv 1 \pmod{4}$, and gcd$(q,n)=1$. Dris conjectured that $q^k < n$. We first show that $q<n$ for all odd perfect numbers. Afterwards, we show $q^k < n$ holds in many cases.
\section{Introduction} We define \sig{N} to be the sum of the positive divisors of $N$ and note the following properties of $\sigma$, which we will use freely: \begin{enumerate} \item $\sigma(p^b)= 1+p+p^2+\cdots+p^b$ for prime powers. \item $\sigma(ab) = \sigma(a)\sigma(b)$, whenever gcd$(a,b)=1$. \item $(\frac{p-1}{p}) \sigma(p^{2b}) < p^{2b}$. (Note that $(\frac{2}{3}) \sigma(p^{2b}) < p^{2b}$ works for any odd prime $p$.) \end{enumerate} We say $N$ is perfect when $\sigma(N) = 2N$. Euler showed that if an odd perfect number $N$ exists, then its factorization consists of two parts: a special prime $q$ appearing an odd number of times, say $k$, such that $q \equiv k \equiv 1 \pmod{4}$, and the rest of the primes in the factorization, which appear an even number of times and which we represent as $n^2$. It is understood that gcd$(q,n)=1$. When written as $N=q^k n^2$ we say $N$ is written in Eulerian form. The condition gcd$(q,n)=1$ implies $n \neq q$. Thus, it is interesting to determine conditions that force, and consequences that follow from, $n<q$ and $q<n$. A well-known result of Nielsen \cite{nielsen2006} states that $N$ must consist of at least 9 distinct odd primes, i.e. $n$ must have at least 8 distinct prime factors. At first glance, it would seem reasonable to guess that $q<n$. However, this guess is called into question by Descartes' famous ``spoof'' odd perfect number: \begin{center} $N=3^2 7^2 11^2 13^2 22021$ \end{center} where, if one pretends for a moment that 22021 is prime, so that $\sigma(22021) = 22022$, then $\sigma(N)=2N$. For this example, $q=22021$ and $q>n$. So it seems plausible to ask: if an odd perfect number exists, is it necessary that the special prime dominate the rest of the factors? Our initial intuitions turn out to be correct. Dris proved in \cite{dris2012} that $k > 1 \Longrightarrow q < n$. Acquaah and Konyagin \cite{Acqu2012} later showed $k=1 \Longrightarrow q < (3N)^{1/3}$, from which it is immediate that $q < \sqrt{3} n$. 
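The $\sigma$ facts and the Descartes spoof above are easy to verify mechanically; the following sketch (ours, purely illustrative) does so.

```python
# Verify the listed properties of sigma on small cases, and Descartes' spoof:
# pretending 22021 is prime (so "sigma(22021)" = 22022) makes sigma(N) = 2N.

def sigma_prime_power(p, b):
    # property 1: sigma(p^b) = 1 + p + p^2 + ... + p^b
    return sum(p**i for i in range(b + 1))

N = 3**2 * 7**2 * 11**2 * 13**2 * 22021
spoof_sigma = (sigma_prime_power(3, 2) * sigma_prime_power(7, 2)
               * sigma_prime_power(11, 2) * sigma_prime_power(13, 2)
               * 22022)              # property 2 plus the "pretend" value

assert spoof_sigma == 2 * N          # the spoof perfect-number identity
assert 22021 == 19**2 * 61           # 22021 is not actually prime
assert 22021 > 3 * 7 * 11 * 13       # here q = 22021 exceeds n = 3003

# property 3: ((p-1)/p) * sigma(p^(2b)) < p^(2b), equivalently
# (p-1) * sigma(p^(2b)) = p^(2b+1) - 1 < p * p^(2b)
for p in (3, 5, 7, 11):
    for b in (1, 2, 3):
        assert (p - 1) * sigma_prime_power(p, 2 * b) < p ** (2 * b + 1)
```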
(Acquaah and Konyagin's proof is a modification of an argument of Luca and Pomerance \cite{luca2010}.) In \cite{dris2015B}, Dagal and Dris utilize Acquaah and Konyagin's results to show $q<n$ so long as $3 \nmid N$. In Section 2, we utilize Nielsen's result to make a simple adjustment to Acquaah and Konyagin's argument to conclude $k=1 \Longrightarrow q < n$, which allows us to conclude $q<n$ (and mildly stronger results) for all odd perfect numbers. Dris conjectured in \cite{dris2008} that $q^k < n$. In Section 3, we endeavor to prove this conjecture by adjusting Acquaah and Konyagin's argument even further. We start with a proof of the simplest case and show the argument can be massaged to conclude $k > 1 \Longrightarrow q^k < n$. However, limitations in the method prevent a complete proof without additional assumptions in the second and third cases. \section{Proof of $q<n$} \begin{theorem} \label{premain} Let $N = q^k n^2$ be an odd perfect number written in Eulerian form. Then $q<n$. \end{theorem} \begin{proof} As mentioned above, the case $k>1$ has been established, so we assume $k=1$. Rewrite $N$ in full as \begin{center} $N = q \, p^{2b} \, r_{1}^{2\beta_{1}} r_{2}^{2\beta_{2}} \cdots r_{j}^{2\beta_{j}}$, \end{center} \noindent where $p$ is the unique prime for which $q|\sigma(p^{2b})$, and the $r_i$, $1 \leq i \leq j$, represent the rest of the primes dividing $N$. (Such a $p$ exists and is unique because $q \nmid \sigma(q)=q+1$, while $q$ divides $2N=\sigma(q)\sigma(n^2)$ exactly once.) When convenient, we will truncate $N$ as \begin{center} $N = q \ p^{2b} \ r_{1}^{2\beta_{1}} \ w^{2}$. \end{center} \noindent Let $c_i \geq 0$ be the integer whereby $p^{c_i} || \sigma(r_{i}^{2\beta_{i}})$ for $1 \leq i \leq j$, where we give ``$||$'' its standard meaning that $p^{c_i} | \sigma(r_{i}^{2\beta_{i}})$, but $p^{c_i+1} \nmid \sigma(r_{i}^{2\beta_{i}})$. It is possible that $p^{c_i} = \sigma(r_{i}^{2\beta_{i}})$ for any particular $i$, but since we know $n$ has at least eight components, at least one of the $\sigma(r_{i}^{2\beta_{i}})$ has to have factors other than $p$. 
Thus, we may relabel subscripts and assume: \begin{center} $p^{c_1} r_{2} | \sigma(r_{1}^{2\beta_{1}})$ \end{center} \begin{center} \textbf{Case 1: $p \nmid \sigma(q)$} \end{center} \EQ{2N = \sigma(N) = \sigma(q) \sigma(p^{2b}) \sigma(r_{1}^{2\beta_{1}}) \sigma(w^2)} \noindent Observe that $p \nmid \sigma(q)$ implies $p^{2b-c_1} || \sigma(w^2)$. Thus, \EQ{2N > (q+1) \ q \ (p^{c_1} r_{2}) \ (p^{2b-c_1})} \noindent We now utilize the fact that $p^{2b} > \frac{2}{3} \sigma(p^{2b})$ and that $r_2$, being an odd prime, satisfies $r_2 \geq 3$. \EQ{2N > q^2 \ (3) \ \frac{2}{3} \sigma(p^{2b})} \EQ{2N > q^2 \ (3) \ \frac{2}{3} \ q} \EQ{N > q^3} \noindent from which $q < n$ easily follows. \begin{center} \textbf{Case 2: $p|\sigma(q)$} \end{center} \noindent Let $p^{c_q} || \sigma(q)$. Let $u=\sigma(p^{2b})/q$. Since \EQ{\sigma(p^{2b}) \equiv 1 \pmod{p}, \ \ \ \ q \equiv -1 \pmod{p}} \noindent (the latter because $p\,|\,\sigma(q)=q+1$), we know $u \equiv -1 \pmod{p}$. Since $u$ is odd, we know $u \neq p-1$, and thus $u \geq 2p-1$. \noindent By construction, we have $p^{2b-c_q-c_1} || \sigma(w^2)$, which implies \EQ{\sigma(w^2) \geq p^{2b-c_q-c_1}} \noindent Observe now, \EQ{p^{2b+1}-1 = (p-1)\sigma(p^{2b}) = (p-1)uq = (p-1)u\sigma(q)-(p-1)u} \noindent Therefore, $(p-1)u \equiv 1 \pmod{p^{c_q}}$, which implies $(p-1)u > p^{c_q}$. \noindent Combining the last two inequalities yields \EQ{\sigma(w^2)(p-1)u > p^{2b-c_1} \ \ \ \Longrightarrow \ \ \ \sigma(w^2)u > \frac{p^{2b-c_1}}{p-1}} \noindent This should be all we need: \EQ{2N = \sigma(N) = \sigma(q) \sigma(p^{2b}) \sigma(r_{1}^{2\beta_{1}}) \sigma(w^2)} \EQ{2N > (q+1) \ uq \ (p^{c_1} r_{2}) \ \sigma(w^2)} \EQ{2N > q^2 \frac{p^{2b-c_1}}{p-1} \ p^{c_1} r_{2}} \EQ{2N > q^2 \ r_{2} \ \frac{p^{2b}}{p-1}} \noindent Again, we utilize $p^{2b} > \frac{2}{3} \sigma(p^{2b})$. \EQ{2N > q^2 \ r_{2} \ \frac{2 \sigma(p^{2b})}{3(p-1)}} \EQ{2N > q^2 \ r_{2} \ \frac{2uq}{3(p-1)}} \noindent Recall that $u \geq 2p-1$ and, again, that $r_2 \geq 3$. 
\EQ{2N > q^3 \ 3 \ (\frac{2}{3}) \ \frac{2p-1}{(p-1)}} \EQ{N > 2q^3} \noindent And again, we get $q < n$. \end{proof} There is nothing special about this method of proof and the result of $q < n$ compared to Acquaah and Konyagin's estimate $q < \sqrt{3} n$. If one is prepared to do the bookkeeping to account for the additional factors $r_3$, $r_4$, $r_5$, etc., one can estimate them as $r_3 \geq 5$, $r_4 \geq 11$, $r_5 \geq 13$, etc. to get $q < \frac{n}{\sqrt{5\cdot 11\cdot 13 \cdots}}$. \section{Partial Proof of Dris's Conjecture} Assume $N=q^k n^2$ is an odd perfect number written in Eulerian form. The case $k=1$ is proven in Theorem \ref{premain}, so we assume $k>1$; since $k \equiv 1 \pmod{4}$, this means $k \geq 5$. Our goal is to prove $q^k < n$ with as few assumptions as possible. Rewrite $N$ in full as \EQ{N = q^k p_1^{2b_1} \ldots p_s^{2b_s} r^{2\beta} w^{2}} \noindent where the $p_i$ are the primes for which $q^{t_i}||\sigma(p_i^{2b_i})$ for some integer $t_i \geq 1$, for $1 \leq i \leq s$. It is convenient to name another prime $r$ separate from the $p_i$'s and allow $w^2$ to represent the rest of the primes dividing $N$. We do not assume a priori that $r$ and $w$ necessarily exist, and in such cases we simply take one or both to be one. Let $c_i \geq 0$ be the integer whereby $p_i^{c_i} || \sigma(p_1^{2b_1} \cdots p_s^{2b_s})$ for $1 \leq i \leq s$. \begin{center} \textbf{Case 1: $p_i \nmid \sigma(q^k)$ for each $i$} \end{center} Because $k$ is odd, $\sigma(q^k) = (1+q)(1+q^2+q^4+\cdots+q^{k-1})$. It is straightforward to show $(1+q^2+q^4+\cdots+q^{k-1})$ is coprime to its formal derivative, which makes the polynomial separable, and as such it has no repeated roots modulo any prime. Thus any prime dividing $(1+q^2+q^4+\cdots+q^{k-1})$ divides it at most once. Let $r$ be a prime dividing $\sigma(q^k)$ such that $r || \sigma(q^k)$. By assumption $r$ is not any of the $p_i$'s. Also note that we may assume $r \geq 7$. If $r = 1+q^2+q^4+\cdots+q^{k-1}$, then clearly $r \geq 7$. 
Otherwise, we may assume $r \equiv 1 \pmod{\frac{k+1}{2}}$, by virtue of the fact that $r$ divides the value of a cyclotomic polynomial at $q$. For the smallest exponent, $k=5$, we have $\frac{k+1}{2} = 3$, and the smallest prime $\equiv 1 \pmod{3}$ is $7$. \EQ{2N = \sig{N} = \{\sig{q^k}\} \{\sig{p_1^{2b_1} \ldots p_s^{2b_s}}\} \{\sig{r^{2\beta} w^{2}}\}} \EQ{2N > \{q^k\} \ \ \{q^k p_1^{c_1} \ldots p_s^{c_s}\} \ \ \{p_1^{2b_1-c_1} \ldots p_s^{2b_s-c_s}\} \ \ \{r\}} \noindent Note that the quantity in each brace on the right-hand side is less than or equal to the quantity in the respective brace in the previous line, with the exception of $\{r\}$. Since $r$ divides $N$ (as $r$ is odd and $r \,|\, \sig{q^k} \,|\, 2N$) an even number of times, but can only divide \sig{q^k} once, we know $r$ must divide either \sig{p_1^{2b_1} \ldots p_s^{2b_s}} or \sig{r^{2\beta} w^{2}}, in addition to our previous assumptions. \EQ{2N > q^{2k} (r) p_1^{2b_1} \ldots p_s^{2b_s}} \noindent We utilize the fact that $(\frac{p-1}{p}) \sigma(p^{2b}) < p^{2b}$ and that $r \geq 7$. \EQ{2N > q^{2k} (7) \ \bprod{i=1}{s} (\frac{p_i - 1}{p_i}) \ \sig{p_1^{2b_1} \ldots p_s^{2b_s}}} \EQ{2N > q^{2k} (7) \ \bprod{i=1}{s} (\frac{p_i - 1}{p_i}) \ q^k p_1^{c_1} \ldots p_s^{c_s}} \EQ{2N > q^{3k} (7) \ \bprod{i=1}{s} (1 - \frac{1}{p_i})} \noindent Next we use the well-known result that if $0 < \theta_i < 1$ for $i=1, \ldots, s$, then \begin{center} $\bprod{i=1}{s} (1-\theta_i) \geq 1 - \bsum{i=1}{s} \theta_i$. \end{center} \EQ{2N > q^{3k} (7) \ (1 - \bsum{i=1}{s} \frac{1}{p_i})} \noindent Cohen, while not the first to prove $\bsum{p|N}{} \frac{1}{p} < \ln 2$, gives a simple proof of this fact in \cite{cohen1978}. \EQ{2N > q^{3k} (7) \ (1 - \ln 2)} \noindent Since $7(1 - \ln 2) > 2.14$, we have \EQ{q^{3k} < N = q^k n^2 \Longrightarrow q^k < n} \noindent as required. \begin{center} \textbf{Case 2: $s=1$ and $p_1 |\sigma(q^k)$} \end{center} In Section 2, Case 2, the result relied on being able to find the two inequalities $u \geq 2p-1$ and $(p-1)u > p^{c_q}$. 
The former depended on $p$ being unique, and the latter on $k=1$, which made $\sig{q^k} = q+1$. To give a full proof of Dris's conjecture using this methodology, these two obstacles will have to be overcome. In this case, with $s=1$, we get that $p_1$ is unique. We proceed as before: let $c_{1q} \geq 0$ be the integer for which $p_{1}^{c_{1q}} || \sigma(q^k)$ and let $u = \frac{\sig{p_1^{2b_1}}}{q^k}$. We may again conclude $u \equiv -1 \pmod {p_1}$ and $u \geq 2p_1-1$; however, \EQ{p_1^{2b_1+1}-1 = (p_1-1)\sigma(p_1^{2b_1}) = (p_1-1)uq^k = (p_1-1)u\sig{q^k}-(p_1-1)u\sig{q^{k-1}}} \noindent allows us to, at best, conclude $(p_1-1)u\sig{q^{k-1}} \equiv 1 \pmod{p_1^{c_{1q}}}$, which seems to be unhelpful. We push on: let $v = \frac{\sig{w^2}}{p_1^{2b_1-c_{1q}}}$. \EQ{2N = \sig{N} = \sig{q^k} \sig{p_1^{2b_1}} \sig{w^2}} \EQ{2N > q^k \ uq^k \ v p_1^{2b_1-c_{1q}}} \EQ{2N > q^{2k} \ uv \ p_1^{2b_1} p_1^{-c_{1q}}} \EQ{2N > q^{2k} \ uv \ \frac{p_1-1}{p_1} \sig{p_1^{2b_1}} p_1^{-c_{1q}}} \EQ{2N > q^{2k} \ uv \ \frac{p_1-1}{p_1} \ uq^k p_1^{-c_{1q}}} \EQ{2N > q^{3k} \ u^2 v \ \frac{p_1-1}{p_1} \ p_1^{-c_{1q}}} \EQ{2N > q^{3k} (2p_1-1)^2 v \ \frac{p_1-1}{p_1} \ p_1^{-c_{1q}}} \EQ{N > q^{3k} (2p_1^2 - 4p_1 + \frac{5}{2} - \frac{1}{2p_1})v \ p_1^{-c_{1q}}} \noindent We see Dris's conjecture follows immediately whenever $c_{1q} \leq 2$ or, with more knowledge about $N$, whenever $vp_1^2 > p_1^{c_{1q}}$. By Nielsen's result, $w$ must have at least 7 components, which makes the latter inequality seem quite likely. Since these are amongst the first theorems relating components of an odd perfect number, more research is clearly needed. \begin{center} \textbf{Case 3: $s>1$ and $p_i |\sigma(q^k)$ for at least one $i$} \end{center} Let $c_{iq} \geq 0$ be the integer for which $p_{i}^{c_{iq}} || \sigma(q^k)$ for $1 \leq i \leq s$.
We begin as before, \EQ{2N = \sig{N} = \sig{q^k} \sig{p_1^{2b_1} \ldots p_s^{2b_s}} \sig{w^{2}}} \EQ{2N > q^k \ q^k p_1^{c_1} \ldots p_s^{c_s} \ p_1^{2b_1-c_1-c_{1q}} \ldots p_s^{2b_s-c_s-c_{sq}}} \EQ{2N > q^{2k} \ p_1^{2b_1} \ldots p_s^{2b_s} \ p_1^{-c_{1q}} \ldots p_s^{-c_{sq}}} \EQ{2N > q^{2k} \ \bprod{i=1}{s} (\frac{p_i-1}{p_i}) \sig{p_1^{2b_1} \ldots p_s^{2b_s}} \ p_1^{-c_{1q}} \ldots p_s^{-c_{sq}}} \EQ{2N > q^{2k} \ \bprod{i=1}{s} (\frac{p_i-1}{p_i}) \ q^k p_1^{c_1} \ldots p_s^{c_s} \ p_1^{-c_{1q}} \ldots p_s^{-c_{sq}}} \EQ{2N > q^{3k} \ \bprod{i=1}{s} (\frac{p_i-1}{p_i}) \ p_1^{c_1-c_{1q}} \ldots p_s^{c_s-c_{sq}}} \noindent Using $\bprod{i=1}{s} (\frac{p_i-1}{p_i}) > 1-\ln 2$, we now see that the result $q^k < n$ follows whenever \EQ{p_1^{c_1} \ldots p_s^{c_s} > \frac{2}{1-\ln 2} p_1^{c_{1q}} \ldots p_s^{c_{sq}}} \noindent We recap our results thus far in the following theorem. \begin{theorem} \label{main} Let $N = q^k n^2$ be an odd perfect number written in Eulerian form. Then $q<n$. Write $N = q^k p_1^{2b_1} \ldots p_s^{2b_s} w^2$, where $q|\sig{p_i^{2b_i}}$ for $1 \leq i \leq s$. Let $c_i$, $c_{iq} \geq 0$ be the integers for which $p_i^{c_i} || \sig{p_1^{2b_1} \ldots p_s^{2b_s}}$ and $p_i^{c_{iq}} || \sig{q^k}$ for $1 \leq i \leq s$. If $k>1$ and one of the following holds: \begin{enumerate} \item $p_i \nmid \sig{q^k}$ for $1 \leq i \leq s$; \item $s=1$, $p_1 |\sigma(q^k)$, and $c_{1q} \leq 2$; \item $s>1$, $p_i | \sig{q^k}$ for at least one $i$, and \begin{center} $p_1^{c_1} \ldots p_s^{c_s} \geq 7 p_1^{c_{1q}} \ldots p_s^{c_{sq}}$ \end{center} \end{enumerate} then $q^k < n$. \end{theorem} \section{Further Considerations} The condition $p_1^{c_1} \ldots p_s^{c_s} \geq 7 p_1^{c_{1q}} \ldots p_s^{c_{sq}}$ in Theorem \ref{main} seems to suggest that Dris's conjecture holds if \begin{center} $\bprod{i \neq j}{} \gcd(\sig{p_i^{2b_i}p_j^{2b_j}},p_i^{2b_i}p_j^{2b_j}) > \bprod{i=1}{s} \gcd(\sig{q^k},p_i^{2b_i})$.
\end{center} Again, at first glance it seems nothing can be said about this situation, but Theorem 2 of Dandapat, Hunsucker, and Pomerance \cite{dandapat1975} implies that for each fixed $i$ there is a $j \neq i$ for which \begin{center} $\gcd(\sig{p_i^{2b_i}p_j^{2b_j}},p_i^{2b_i}p_j^{2b_j})>1$, \end{center} and this holds for most non-special components of $n$, not just for the restricted $p_i$'s as we have defined them. The next obvious question may be the following: for an odd perfect number $N = q^k p^{2b} w^2$, where $q$ is the special prime, $p$ is any other prime dividing $N$, and $w^2$ encompasses the rest of the components of $N$, can we show, in the same way we showed $q<n$, that $p<w$, $p^b < w$, or even $p^{2b} < w$?
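As a closing sanity check, two elementary facts used repeatedly above---the factorization of $\sig{q^k}$ for odd $k$, the bound $7(1-\ln 2) > 2.14$, and the Weierstrass product inequality---are easy to confirm with a few lines of code. The following Python snippet is an illustration only and plays no role in the proofs.

```python
import math

# Check 1: for odd k, sigma(q^k) = (1+q) * (1 + q^2 + q^4 + ... + q^(k-1)).
def sigma_prime_power(q, k):
    """sigma(q^k) = 1 + q + q^2 + ... + q^k."""
    return sum(q**j for j in range(k + 1))

for q in (5, 13, 17):          # primes q = 1 (mod 4), as in the Eulerian form
    for k in (1, 5, 9):        # odd exponents
        factored = (1 + q) * sum(q**(2 * j) for j in range((k + 1) // 2))
        assert sigma_prime_power(q, k) == factored

# Check 2: the numerical constant used at the end of Case 1.
assert 7 * (1 - math.log(2)) > 2.14

# Check 3: prod(1 - t_i) >= 1 - sum(t_i) for a few reciprocals of primes.
thetas = [1/3, 1/5, 1/7, 1/11]
assert math.prod(1 - t for t in thetas) >= 1 - sum(thetas)
print("all checks passed")
```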
https://arxiv.org/abs/1602.01591
A Partial Proof of a Conjecture of Dris
Euler showed that if an odd perfect number $N$ exists, it must consist of two parts $N=q^k n^2$, with $q$ prime, $q \equiv k \equiv 1 \pmod{4}$, and $\gcd(q,n)=1$. Dris conjectured that $q^k < n$. We first show that $q<n$ for all odd perfect numbers. Afterwards, we show $q^k < n$ holds in many cases.
https://arxiv.org/abs/1910.09529
Adaptive Gradient Descent without Descent
We present a strikingly simple proof that two rules are sufficient to automate gradient descent: 1) don't increase the stepsize too fast and 2) don't overstep the local curvature. No need for functional values, no line search, no information about the function except for the gradients. By following these rules, you get a method adaptive to the local geometry, with convergence guarantees depending only on the smoothness in a neighborhood of a solution. Given that the problem is convex, our method converges even if the global smoothness constant is infinity. As an illustration, it can minimize an arbitrary twice continuously differentiable convex function. We examine its performance on a range of convex and nonconvex problems, including logistic regression and matrix factorization.
\section{Introduction} Since the early days of optimization it has been evident that there is a need for algorithms that are as independent from the user as possible. First-order methods have proven to be versatile and efficient in a wide range of applications, but one drawback has been present all that time: the stepsize. Despite certain success stories, line search procedures and adaptive online methods have not removed the need to manually tune the optimization parameters. Even in smooth convex optimization, which is often believed to be much simpler than the nonconvex counterpart, robust rules for stepsize selection have been elusive. The purpose of this work is to remedy this deficiency. The problem formulation that we consider is the basic unconstrained optimization problem \begin{equation} \label{main} \min_{x\in \mathbb R^d}\ f(x), \end{equation} where $f\colon \mathbb R^d \to \mathbb R$ is a differentiable function. Throughout the paper we assume that \eqref{main} has a solution and we denote its optimal value by $f_*$. The simplest and best known approach to this problem is the gradient descent method (GD), whose origin can be traced back to Cauchy~\cite{cauchy1847methode,lemarechal2012cauchy}. Although it is probably the oldest optimization method, it continues to play a central role in modern algorithmic theory and applications. Its definition can be written in a mere one line, \begin{equation} \label{eq:grad} x^{k+1} = x^k - \lambda \nabla f(x^k), \qquad k\geq 0, \end{equation} where $x^0\in \mathbb R^d$ is arbitrary and $\lambda >0$. Under the assumptions that $f$ is convex and $L$-smooth (equivalently, that $\nabla f$ is $L$-Lipschitz), that is, \begin{equation}\label{Lipschitz} \n{\nabla f(x)-\nabla f(y)}\leq L \n{x-y}, \quad \forall x,y, \end{equation} one can show that GD with $\lambda \in (0, \frac 2 L)$ converges to an optimal solution~\cite{polyak1963gradient}.
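For reference, the iteration \eqref{eq:grad} is a one-line update in code as well. A minimal sketch (Python; the diagonal quadratic test function is an assumption of the illustration):

```python
import numpy as np

# Plain gradient descent x^{k+1} = x^k - lam * grad f(x^k) on the convex
# quadratic f(x) = 0.5 * x^T A x, whose smoothness constant L is the
# largest eigenvalue of A.
A = np.diag([1.0, 0.1])          # assumed test problem, L = 1
grad = lambda x: A @ x

lam = 1.0                        # stepsize 1/L, inside the allowed range (0, 2/L)
x = np.array([1.0, 1.0])
for _ in range(200):
    x = x - lam * grad(x)

assert np.linalg.norm(x) < 1e-3  # iterates approach the minimizer x* = 0
print(x)
```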
Moreover, with $\lambda = \frac 1 L $ the convergence rate~\cite{drori2014performance} is \begin{equation} \label{eq:grad_rate} f(x^k)-f_*\leq \frac{L\n{x^0-x^*}^2}{2(2k+1)}, \end{equation} where $x^*$ is any solution of \eqref{main}. Note that this bound is not improvable~\cite{drori2014performance}. We identify four important challenges that limit the applications of gradient descent even in the convex case: \begin{enumerate} \itemsep0em \item GD is not general: many functions do not satisfy \eqref{Lipschitz} globally. \item GD is not a free lunch: one needs to guess $\lambda$, potentially trying many values before a success. \item GD is not robust: failing to provide $\lambda < \frac{2}{L}$ may lead to divergence. \item GD is slow: even if $L$ is finite, it might be arbitrarily larger than the local smoothness. \end{enumerate} \subsection{Related work} Certain ways to address some of the issues above already exist in the literature. They include line search, the adaptive Polyak stepsize, mirror descent, dual preconditioning, and stepsize estimation for subgradient methods. We discuss them one by one below, in a process reminiscent of cutting off the Hydra's heads: if one issue is fixed, two others take its place. The most practical and generic solution to the aforementioned issues is known as line search (or backtracking). This direction of research started from the seminal works~\cite{goldstein1962cauchy} and~\cite{armijo1966} and continues to attract attention, see \cite{bello2016convergence,salzo2017variable} and references therein. In general, at each iteration the line search executes another subroutine with additional evaluations of $\nabla f$ and/or $f$ until some condition is met. Obviously, this makes each iteration more expensive. At the same time, the famous Polyak stepsize~\cite{polyak1969minimization} stands out as a very fast alternative to gradient descent.
Furthermore, it does not depend on the global smoothness constant and uses the current gradient to estimate the geometry. The formula might look deceptively simple, $\lambda_k = \frac{f(x^k)-f_*}{\n{\nabla f(x^k)}^2}$, but there is a catch: it is rarely possible to know $f_*$. This method, again, requires the user to guess $f_*$. What is more, with $\lambda$ it was fine to underestimate it by a factor of 10, but the guess for $f_*$ must be tight; otherwise it has to be reestimated later~\cite{hazan2019revisiting}. Seemingly no issue is present in the Barzilai-Borwein stepsize. Motivated by quasi-Newton schemes, \cite{barzilai1988two} suggested using the steps \[\lambda_k = \frac{\lr{x^k-x^{k-1},\nabla f(x^k)-\nabla f(x^{k-1})}}{\n{\nabla f(x^k)-\nabla f(x^{k-1})}^2}.\] Alas, the convergence results regarding this choice of $\lambda_k$ are very limited, and the only known case where it provably works is that of quadratic problems~\cite{raydan1993barzilai, dai2002r}. In general it may not work even for smooth strongly convex functions; see the counterexample in~\cite{burdakov2019stabilized}. Other more interesting ways to deal with non-Lipschitzness of $\nabla f$ use the problem structure. The first method, proposed in~\cite{birnbaum2011distributed} and further developed in~\cite{Bauschke2016}, shows that the mirror descent method \cite{nemirovsky1983problem}, which is another extension of GD, can be used with a fixed stepsize whenever $f$ satisfies a certain generalization of~\eqref{Lipschitz}. In addition, \cite{maddison2019dual} proposed the dual preconditioning method---another refined version of GD. Similarly to the former technique, it also goes beyond the standard smoothness assumption on $f$, but in a different way. Unfortunately, these two simple and elegant approaches cannot resolve all issues yet. First, not many functions fulfill the respective generalized conditions.
And secondly, both methods still get us back to the problem of not knowing the allowed range of stepsizes. A whole branch of optimization considers adaptive extensions of GD that deal with functions whose (sub)gradients are bounded. Probably the earliest work in that direction was written by \cite{shor1962application}. He showed that the method \begin{align*} x^{k+1} = x^k - \lambda_k \frac{g^k}{\|g^k\|}, \end{align*} where $g^k\in \partial f(x^k)$ is a subgradient, converges for properly chosen sequences $(\lambda_k)$; see, e.g.,\ Section 3.2.3 in~\cite{Nesterov2013}. Moreover, $\lambda_k$ requires no knowledge about the function whatsoever. Similar methods that work in the online setting, such as Adagrad~\cite{duchi2011adaptive,mcmahan2010adaptive}, received a lot of attention in recent years and remain an active topic of research~\cite{ward19a}. Methods similar to Adagrad---Adam~\cite{adam, adam2}, RMSprop~\cite{Tieleman2012} and Adadelta~\cite{zeiler2012adadelta}---remain state-of-the-art for training neural networks. The corresponding objective is usually neither smooth nor convex, and the theory often assumes Lipschitzness of the function rather than of the gradients. Therefore, this direction of research is mostly orthogonal to ours, although we do compare with some of these methods in our neural networks experiment. We also note that without momentum Adam and RMSprop reduce to signSGD~\cite{bernstein2018signsgd}, which is known to be non-convergent for arbitrary stepsizes on a simple quadratic problem~\cite{karimireddy2019error}. Closely related to ours is the recent work~\cite{Malitsky2019}, which proposed an adaptive golden ratio algorithm for monotone variational inequalities. As it solves a more general problem, it does not exploit the structure of~\eqref{main} and, as most variational inequality methods, has a more conservative update. Although the method estimates the smoothness, it still requires an upper bound on the stepsize as input.
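To make the Polyak-stepsize discussion above concrete, here is a minimal sketch in Python. The quadratic test function and the exactly known optimal value $f_* = 0$ are assumptions of the illustration; needing $f_*$ as input is precisely the catch discussed above.

```python
import numpy as np

# Polyak's stepsize lam_k = (f(x^k) - f_*) / ||grad f(x^k)||^2.
# It adapts to the geometry, but requires the optimal value f_*;
# here f_* = 0 is known because the test function is a quadratic.
A = np.diag([1.0, 0.1])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
f_star = 0.0

x = np.array([1.0, 1.0])
for _ in range(500):
    g = grad(x)
    if np.linalg.norm(g) == 0:   # already at the minimizer
        break
    x = x - (f(x) - f_star) / (g @ g) * g

assert f(x) - f_star < 1e-8
print(f(x))
```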
\paragraph{Contribution.} We propose a new version of GD that at no cost resolves all the aforementioned issues. The idea is simple, and it is surprising that it has not yet been discovered. In each iteration we choose $\lambda_k$ as a certain approximation of the inverse local Lipschitz constant. With such a choice, we prove that convexity and local smoothness of $f$ are sufficient for convergence of the iterates, with the complexity $\mathcal{O}(1/k)$ for $f(x^k)-f_*$ in the worst case. \paragraph{Discussion.} Let us now briefly discuss why we believe that proofs based on monotonicity and global smoothness lead to slower methods. Gradient descent is by far not a recent method, so optimal rates of convergence have been obtained for it. However, we argue that adaptive methods require rethinking the optimality of the stepsizes. Take as an example a simple quadratic problem, $f(x, y)=\frac{1}{2}x^2+\frac{\delta}{2}y^2$, where $\delta \ll 1$. Clearly, the smoothness constant of this problem is equal to $L=1$ and the strong convexity one is $\mu=\delta$. If we run GD from an arbitrary point $(x^0, y^0)$ with the ``optimal'' stepsize $\lambda = \frac{1}{L} =1$, then one iteration of GD gives us $(x^1, y^1) = (0, (1-\delta) y^0)$, and similarly $(x^k, y^k) = (0, (1-\delta)^{k}y^0)$. Evidently, for $\delta$ small enough it will take a long time to converge to the solution $(0,0)$. Instead, GD would converge in two iterations if it adjusted its stepsize after the first iteration to $\lambda = \frac{1}{\delta}$. Nevertheless, all existing analyses of gradient descent with $L$-smooth $f$ use stepsizes bounded by $2/L$. Besides, the analysis in function values gives \[f(x^{k+1})\le f(x^k) - \lambda\Bigl(1 - \frac{\lambda L}{2}\Bigr)\|\nabla f(x^k)\|^2,\] from which $1/L$ can be seen as the ``optimal'' stepsize.
Alternatively, we can assume that $f$ is $\mu$-strongly convex, and the analysis in norms gives \begin{align*} \|x^{k+1}-x^*\|^2\le \Bigl(1 - 2\frac{\lambda L \mu}{L + \mu} \Bigr)\|x^k-x^*\|^2 - \lambda\Bigl(\frac{2}{L+\mu} - \lambda\Bigr)\|\nabla f(x^k) \|^2, \end{align*} whence the ``optimal'' step is $\frac{2}{L+\mu}$. Finally, line search procedures use some certain type of monotonicity, for instance ensuring that $f(x^{k+1})\le f(x^k) - c\|\nabla f(x^k)\|^2$ for some $c>0$. We break with this tradition and merely ask for convergence in the end. \section{Main part} \subsection{Local smoothness of $f$}\label{subs:main} Recall that a mapping is \emph{locally Lipschitz} if it is Lipschitz over any compact set of its domain. A function $f$ with (locally) Lipschitz gradient $\nabla f$ is called (locally) smooth. It is natural to ask whether some interesting functions are smooth locally, but not globally. It turns out there is no shortage of examples, most prominently among highly nonlinear functions. In $\mathbb R$, they include $x\mapsto \exp(x)$, $\log(x)$, $\tan(x)$, $x^p$, for $p > 2$, etc. More generally, they include any twice differentiable $f$, since $\nabla^2 f(x)$, as a continuous mapping, is bounded over any bounded set $\mathcal{C}$. In this case, we have that $\nabla f$ is Lipschitz on $\mathcal{C}$, due to the mean value inequality \[\n{\nabla f(x) - \nabla f(y)}\leq \max_{z\in \mathcal{C}}\n{\nabla^2 f(z)}\n{x-y},\quad \forall x,y\in \mathcal{C}.\] Algorithm~\ref{alg:main} that we propose is just a slight modification of GD. The quick explanation why local Lipschitzness of $\nabla f$ does not cause us any problems, unlike most other methods, lies in the way we prove its convergence. 
Whenever the stepsize $\lambda_k$ satisfies the two inequalities\footnote{It can be shown that instead of the second condition it is enough to ask for $\lambda_k^2\le \frac{\n{x^{k}-x^{k-1}}^2}{[3\n{\nabla f(x^{k})}^2 - 4\<\nabla f(x^k), \nabla f(x^{k-1})>]_+}$, where $[a]_+\stackrel{\mathrm{def}}{=} \max \{0, a\}$, but we prefer the option written in the main text for its simplicity.} \begin{align*} \begin{cases} \lambda_k^2 & \leq (1+\th_{k-1})\lambda_{k-1}^2,\\ \lambda_k & \leq \frac{\n{x^{k}-x^{k-1}}}{2\n{\nabla f(x^{k})-\nabla f(x^{k-1})}}, \end{cases} \end{align*} independently of the properties of $f$ (apart from convexity), we can show that the iterates $(x^k)$ remain bounded. Here and everywhere else we use the convention $1/0=+\infty$, so if $\nabla f(x^k) -\nabla f(x^{k-1})=0$, the second inequality can be ignored. In the first iteration it might happen that $\lambda_1 = \min\{+\infty\}$; in this case we suppose that any choice of $\lambda_1>0$ is possible. \begin{algorithm}[t] \caption{Adaptive gradient descent} \label{alg:main} \begin{algorithmic}[1] \STATE \textbf{Input:} $x^0 \in \mathbb R^d$, $\lambda_0>0$, $\th_0=+\infty$\\ \STATE $x^1= x^0-\lambda_0\nabla f(x^0)$ \FOR{$k = 1,2,\dots$} \STATE $\lambda_k = \min\Bigl\{ \sqrt{1+\th_{k-1}}\lambda_{k-1},\frac{\n{x^{k}-x^{k-1}}}{2\n{\nabla f(x^{k})-\nabla f(x^{k-1})}}\Bigr\}$ \STATE $x^{k+1} = x^k - \lambda_k \nabla f(x^k)$ \STATE $\th_k = \frac{\lambda_k}{\lambda_{k-1}}$ \ENDFOR \end{algorithmic} \end{algorithm} Although Algorithm~\ref{alg:main} needs $x^0$ and $\lambda_0$ as input, this is not an issue, as one can simply fix $x^0=0$ and $\lambda_0=10^{-10}$. Equipped with a tiny $\lambda_0$, we ensure that $x^1$ will be close enough to $x^0$ and will likely give a good estimate for $\lambda_1$. Otherwise, this has no influence on further steps. \subsection{Analysis without descent} It is now time to show our main contribution, the new analysis technique.
The tools that we are going to use are the well-known Cauchy-Schwarz and convexity inequalities. In addition, our methods are related to potential functions~\cite{taylor19a}, which is a powerful tool for producing tight bounds for GD. Another divergence from the common practice is that our main lemma includes not only $x^{k+1}$ and $x^k$, but also $x^{k-1}$. This can be seen as a two-step analysis, while the majority of optimization methods have one-step bounds. However, as we want to adapt to the local geometry of our objective, it is rather natural to have two terms to capture the change in the gradients. Now, it is time to derive a characteristic inequality for a specific Lyapunov energy. \begin{lemma}\label{lemma:energy} Let $f\colon \mathbb R^d\to \mathbb R$ be convex and differentiable and let $x^*$ be any solution of \eqref{main}. Then for $(x^k)$ generated by Algorithm~\ref{alg:main} it holds that \begin{multline} \label{eq:lemma_ineq} \n{x^{k+1}-x^*}^2+ \frac 1 2 \n{x^{k+1}-x^k}^2 + 2\lambda_{k}(1+\th_{k}) (f(x^k)-f_*) \\ \leq \n{x^k-x^*}^2 + \frac 1 2 \n{x^k-x^{k-1}}^2 + 2\lambda_k \th_k (f(x^{k-1})-f_*). \end{multline} \end{lemma} \begin{proof} Let $k\geq 1$. We start from the standard way of analyzing GD: \begin{align*} \|x^{k+1}- x^*\|^2 &= \|x^k - x^*\|^2 + 2\<x^{k+1} - x^{k}, x^k-x^*>+ \|x^{k+1} - x^{k}\|^2\\ &= \|x^k - x^*\|^2 + 2\lambda_k \<\nabla f(x^k), x^* - x^k> + \|x^{k+1} - x^{k}\|^2. \end{align*} As usual, we bound the scalar product by convexity of $f$: \begin{align}\label{eq:conv} 2\lambda_k \<\nabla f(x^k), x^* - x^k> \le 2\lambda_k (f_* - f(x^k)), \end{align} which gives us \begin{align}\label{eq:norms} \|x^{k+1}- x^*\|^2 \le \|x^k - x^*\|^2 - 2\lambda_k(f(x^k)-f_*) + \|x^{k+1} - x^{k}\|^2. \end{align} These two steps have been repeated thousands of times, but now we continue in a completely different manner. We have precisely one ``bad'' term in~\eqref{eq:norms}, which is $\n{x^{k+1}-x^k}^2$.
We will bound it using the difference of gradients: \begin{align}\label{dif_x} \|x^{k+1} -x^k\|^2 & = 2 \n{x^{k+1}-x^k}^2 - \n{x^{k+1}-x^k}^2 = -2\lambda_k \lr{\nabla f(x^k), x^{k+1}-x^k}- \n{x^{k+1}-x^k}^2\notag \\ &= 2\lambda_k \lr{\nabla f(x^k)-\nabla f(x^{k-1}), x^{k}-x^{k+1}}\notag\\ & \qquad \qquad + 2\lambda_k\lr{\nabla f(x^{k-1}), x^k-x^{k+1}} - \n{x^{k+1}-x^k}^2. \end{align} Let us estimate the first two terms in the right-hand side above. First, definition of $\lambda_k$, followed by Cauchy-Schwarz and Young's inequalities, yields \begin{align}\label{cs} 2\lambda_k \lr{\nabla f(x^k) -\nabla f(x^{k-1}), x^k - x^{k+1}} & \leq 2\lambda_k \n{\nabla f(x^k) -\nabla f(x^{k-1})} \n{x^k - x^{k+1}} \notag \\ &\leq \n{x^k -x^{k-1}} \n{x^k - x^{k+1}} \notag \\ &\leq \frac 1 2 \n{x^{k}-x^{k-1}}^2 + \frac{1}{2}\n{x^{k+1}-x^k}^2. \end{align} Secondly, by convexity of $f$, \begin{align} \label{eq:terrible_simple} 2\lambda_k\lr{\nabla f(x^{k-1}), x^k-x^{k+1}} &= \frac{2\lambda_k}{\lambda_{k-1}}\lr{x^{k-1} - x^{k}, x^{k}-x^{k+1}}\notag = 2\lambda_k\th_k \lr{x^{k-1}-x^{k}, \nabla f(x^k)} \notag \\ & \leq 2\lambda_k\th_k (f(x^{k-1})-f(x^k)). \end{align} Plugging~\eqref{cs} and \eqref{eq:terrible_simple} in~\eqref{dif_x}, we obtain \begin{align*} \n{x^{k+1}-x^k}^2 \leq \frac 1 2 \n{x^k-x^{k-1}}^2 - \frac 1 2 \n{x^{k+1}-x^k}^2 + 2\lambda_k\th_k(f(x^{k-1})-f(x^{k})). \end{align*} Finally, using the produced estimate for $\n{x^{k+1}-x^k}^2$ in \eqref{eq:norms}, we deduce the desired inequality~\eqref{eq:lemma_ineq}. \end{proof} The above lemma already might give a good hint why our method works. From inequality~\eqref{eq:lemma_ineq} together with condition $\lambda_k^2\leq (1+\th_{k-1})\lambda_{k-1}^2$, we obtain that the Lyapunov energy---the left-hand side of \eqref{eq:lemma_ineq}---is decreasing. This gives us boundedness of $(x^k)$, which is often the key ingredient for proving convergence. In the next theorem we formally state our result. 
\begin{theorem}\label{th:main} Suppose that $f\colon \mathbb R^d\to \mathbb R$ is convex with locally Lipschitz gradient $\nabla f$. Then $(x^k)$ generated by Algorithm~\ref{alg:main} converges to a solution of \eqref{main} and we have that \[f(\hat x^k)-f_* \leq \frac{D}{2S_k}=\mathcal{O}\Bigl(\frac{1}{k}\Bigr),\] where \begin{align*} \hat x^k &= \frac{\lambda_k(1+\th_k)x^k + \sum_{i=1}^{k-1}w_i x^i}{S_k},\\ w_i &= \lambda_i(1+\th_i)-\lambda_{i+1}\th_{i+1},\\ S_k &= \lambda_k(1+\th_k) + \sum_{i=1}^{k-1}w_i = \sum_{i=1}^k \lambda_i + \lambda_1\th_1, \end{align*} and $D$ is a constant that explicitly depends on the initial data and the solution set, see \eqref{eq:telescope}. \end{theorem} Our proof will consist of two parts. The first one is a straightforward application of Lemma~\ref{lemma:energy}, from which we derive boundedness of $(x^k)$ and complexity result. Due to its conciseness, we provide it directly after this remark. In the second part, we prove that the whole sequence $(x^k)$ converges to a solution. Surprisingly, this part is a bit more technical than expected, and thus we postpone it to the appendix. \begin{proof} \textit{(Boundedness and complexity result.)} Fix any $x^*$ from the solution set of \cref{main}. Telescoping inequality~\eqref{eq:lemma_ineq}, we deduce \begin{multline}\label{eq:telescope} \n{x^{k+1}-x^*}^2+ \frac 1 2 \n{x^{k+1}-x^k}^2 + 2\lambda_k (1+\th_k) (f(x^k)-f_*) \\+ 2\sum_{i=1}^{k-1}[\lambda_i(1+\th_i)-\lambda_{i+1}\th_{i+1}](f(x^i)-f_*) \\ \leq \, \n{x^1-x^*}^2 + \frac 1 2 \n{x^1-x^{0}}^2 + 2\lambda_1 \th_1 [f(x^{0})-f_*]\stackrel{\mathrm{def}}{=} D. \end{multline} Note that by definition of $\lambda_k$, the second line above is always nonnegative. Thus, the sequence $(x^k)$ is bounded. Since $\nabla f$ is locally Lipschitz, it is Lipschitz continuous on bounded sets. 
It means that for the set $\mathcal{C} = \clconv \{x^*, x^0, x^1,\dots\}$, which is bounded as the closed convex hull of a bounded set, there exists $L>0$ such that \[\n{\nabla f(x)-\nabla f(y)} \leq L \n{x-y} \quad \forall x,y\in \mathcal{C}.\] Clearly, $\lambda_1=\frac{\n{x^1-x^0}}{2\n{\nabla f(x^1)-\nabla f(x^0)}}\geq \frac{1}{2L}$; thus, by induction one can prove that \(\lambda_k \geq \frac{1}{2L} \), in other words, the sequence $(\lambda_k)$ is separated from zero. Now we want to apply Jensen's inequality to the sum of all terms $f(x^i)-f_*$ in the left-hand side of \eqref{eq:telescope}. Notice that the total sum of the coefficients of these terms is \[\lambda_k(1+\th_k) +\sum_{i=1}^{k-1}[\lambda_i(1+\th_i)-\lambda_{i+1}\th_{i+1}] =\sum_{i=1}^k \lambda_i + \lambda_1\th_1=S_k.\] Thus, by Jensen's inequality, \[\frac D 2 \geq \frac{\text{LHS of \eqref{eq:telescope}}}{2} \geq S_k (f(\hat x^k)-f_*),\] where $\hat x^k$ is given in the statement of the theorem. By this, the first part of the proof is complete. Convergence of $(x^k)$ to a solution is provided in the appendix. \end{proof} As we have shown that $\lambda_i\geq \frac{1}{2L}$ for all $i$, we have a theoretical upper bound $f(\hat x^k)-f_*\leq \frac{D L}{k}$. Note that in practice, however, $(\lambda_k)$ might be much larger than the pessimistic lower bound $\frac{1}{2L}$, which we observe in our experiments together with a faster convergence. \subsection{$f$ is locally strongly convex} Since one of our goals is to make optimization easy to use, we believe that a good method should have state-of-the-art guarantees in various scenarios. For strongly convex functions, this means that we want to see linear convergence, which is not covered by normalized GD or online methods. In \cref{subs:main} we have shown that Algorithm~\ref{alg:main} matches the $\mathcal{O}(1/\varepsilon)$ complexity of GD on convex problems.
Now we show that it also matches $\mathcal{O}(\frac{L}{\mu}\log\frac{1}{\varepsilon})$ complexity of GD when $f$ is locally strongly convex. Similarly to local smoothness, we call $f$ \emph{locally strongly convex} if it is strongly convex over any compact set of its domain. For proof simplicity, instead of using bound $\lambda_k \leq \sqrt{1+\th_{k-1}}\lambda_{k-1}$ as in step~4 of Algorithm~\ref{alg:main} we will use a more conservative bound $\lambda_k\leq \sqrt{1+\frac{\th_{k-1}}{2}}\lambda_{k-1}$ (otherwise the derivation would be too technical). It is clear that with such a change \Cref{th:main} still holds true, so the sequence is bounded and we can rely on local smoothness and local strong convexity. \begin{theorem}\label{th:strong} Suppose that $f\colon \mathbb R^d\to \mathbb R$ is locally strongly convex and $\nabla f$ is locally Lipschitz. Then $(x^k)$ generated by Algorithm~\ref{alg:main} (with the modification mentioned above) converges to the solution $x^*$ of \eqref{main}. The complexity to get $\|x^k - x^*\|^2\le \varepsilon$ is $\mathcal{O}(\kappa \log\frac 1 \varepsilon)$, where $\kappa = \frac{L}{\mu}$ and $L,\mu$ are the smoothness and strong convexity constants of $f$ on the set $\mathcal{C}=\clconv\{x^*, x^0, x^1, \dots\}$. \end{theorem} We want to highlight that in our rate $\kappa$ depends on the local Lipschitz and strong convexity constants $L$ and $\mu$, which is meaningful even when these properties are not satisfied globally. Similarly, if $f$ is globally smooth and strongly convex, our rate is still faster as it depends on the smaller local constants. \section{Heuristics} In this section, we describe several extensions of our method. We do not have a full theory for them, but believe that they are of interest in applications. 
\subsection{Acceleration} \begin{algorithm}[t] \caption{Adaptive accelerated gradient descent} \label{alg:accel} \begin{algorithmic}[1] \STATE \textbf{Input:} $x^0 \in \mathbb R^d$, $\lambda_0>0$, $\Lambda_0>0$, $\th_0=\Theta_0 = +\infty$\\ \STATE $y^1 = x^1= x^0-\lambda_0\nabla f(x^0)$ \FOR{$k = 1,2,\dots$} \STATE $\lambda_k = \min\Bigl\{ \sqrt{1+\frac{\th_{k-1}}{2}}\lambda_{k-1},\frac{\n{x^{k}-x^{k-1}}}{2\n{\nabla f(x^{k})-\nabla f(x^{k-1})}}\Bigr\}$ \STATE $\Lambda_k = \min\Bigl\{\sqrt{1 + \frac{\Theta_{k-1}}{2}}\Lambda_{k-1}, \frac{\| \nabla f(x^k) - \nabla f(x^{k-1}) \|}{2\|x^k - x^{k-1}\|} \Bigr\}$ \STATE $\beta_k =\frac{\sqrt{1/\lambda_k} - \sqrt{\Lambda_k}}{\sqrt{1/\lambda_k}+\sqrt{\Lambda_k}} $ \STATE $y^{k+1} = x^k - \lambda_k \nabla f(x^k)$ \STATE $x^{k+1}=y^{k+1} + \beta_k (y^{k+1}-y^k)$ \STATE $\th_k = \frac{\lambda_k}{\lambda_{k-1}}$, $\Theta_k = \frac{\Lambda_k}{\Lambda_{k-1}}$ \ENDFOR \end{algorithmic} \end{algorithm} Suppose that $f$ is $\mu$-strongly convex. One version of the accelerated gradient method proposed by Nesterov~\cite{Nesterov2013} is \begin{align*} y^{k+1} &= x^k - \frac{1}{L}\nabla f(x^k),\\ x^{k+1}&= y^{k+1} + \beta(y^{k+1} - y^{k}), \end{align*} where $\beta = \frac{\sqrt{L} - \sqrt{\mu}}{\sqrt{L}+\sqrt{\mu}}$. Adaptive gradient descent for strongly convex $f$ efficiently estimated $\frac{1}{2L}$ by \begin{align*} \lambda_k = \min\biggl\{\sqrt{1 + \frac{\th_{k-1}}{2}}\lambda_{k-1}, \frac{\|x^k - x^{k-1}\|}{2\|\nabla f(x^k) -\nabla f(x^{k-1})\|}\biggr\}. \end{align*} What about the strong convexity constant $\mu$? We know that it equals to the inverse smoothness constant of the conjugate $f^*(y) \stackrel{\mathrm{def}}{=} \sup_x\{\lr{x,y} - f(x) \}$. 
Thus, it is tempting to estimate this inverse constant just as we estimated the inverse smoothness of $f$, i.e.,\ by the formula \begin{align*} \Lambda_k = \min\biggl\{\sqrt{1 +\frac{\Theta_{k-1}}{2}} \Lambda_{k-1}, \frac{\| p^k - p^{k-1} \|}{2\|\nabla f^*(p^k) - \nabla f^*(p^{k-1})\|} \biggr\} \end{align*} where $p^k$ and $p^{k-1}$ are some elements of the dual space and $\Theta_k = \frac{\Lambda_k}{\Lambda_{k-1}}$. A natural choice then is $p^k = \nabla f(x^k)$, since it is an element of the dual space that we use. What is its value? It is well known that $\nabla f^*(\nabla f(x))=x$, so we come up with the update rule \begin{align*} \Lambda_k = \min\biggl\{\sqrt{1 + \frac{\Theta_{k-1}}{2}}\Lambda_{k-1}, \frac{\| \nabla f(x^k) - \nabla f(x^{k-1}) \|}{2\|x^k - x^{k-1}\|} \biggr\}, \end{align*} and hence we can estimate $\beta$ by $ \beta_k = \frac{\sqrt{1/\lambda_k} - \sqrt{\Lambda_k}}{\sqrt{1/\lambda_k}+\sqrt{\Lambda_k}}$. We summarize our arguments in Algorithm~\ref{alg:accel}. Unfortunately, we do not have any theoretical guarantees for it. Estimating the strong convexity parameter $\mu$ is important in practice. Most common approaches rely on the restarting technique proposed by \cite{Nesterov2013a}; see also~\cite{fercoq2017adaptive} and references therein. Unlike Algorithm~\ref{alg:accel}, these works have theoretical guarantees; however, the methods themselves are more complicated and still require tuning of other unknown parameters. \subsection{Uniting our steps with stochastic gradients} Here we would like to discuss applications of our method to the problem \begin{align*} \min_x \E{f_\xi(x)}, \end{align*} where $f_\xi$ is almost surely $L$-smooth and $\mu$-strongly convex. Assume that at each iteration we get a sample $\xi^k$ to make a stochastic gradient step, \begin{align*} x^{k+1}=x^k - \lambda_k\nabla f_{\xi^k}(x^k). \end{align*} Then, we have two ways of incorporating our stepsize into SGD.
The first is to reuse $\nabla f_{\xi^k}(x^k)$ to estimate $L_k=\frac{\n{\nabla f_{\xi^k}(x^{k})-\nabla f_{\xi^k}(x^{k-1})}}{\n{x^{k}-x^{k-1}}}$, but this would make $\lambda_k\nabla f_{\xi^k}(x^k)$ biased. Alternatively, one can use an extra sample to estimate $L_k$, but this is less intuitive since our goal is to estimate the curvature of the function used in the update. We give a full description in~\Cref{alg:stoch}. We remark that the option with a biased estimate performed much better in our experiments with neural networks. The theorem below provides convergence guarantees for both cases, but with different assumptions. \begin{algorithm}[t] \caption{Adaptive SGD} \label{alg:stoch} \begin{algorithmic}[1] \STATE \textbf{Input:} $x^0 \in \mathbb R^d$, $\lambda_0>0$, $\th_0=+\infty$, $\xi^0$, $\alpha>0$ \STATE $x^1 = x^0-\lambda_0\nabla f_{\xi^0}(x^0)$ \FOR{$k = 1,2,\dots$} \STATE Sample $\xi^k$ and optionally $\zeta^k$ \STATE Option I (biased): $L_k = \frac{\n{\nabla f_{\xi^k}(x^{k})-\nabla f_{\xi^k}(x^{k-1})}}{\n{x^{k}-x^{k-1}}}$ \STATE Option II (unbiased): $L_k = \frac{\n{\nabla f_{\zeta^k}(x^{k})-\nabla f_{\zeta^k}(x^{k-1})}}{\n{x^{k}-x^{k-1}}}$ \STATE $\lambda_k = \min\Bigl\{ \sqrt{1+\th_{k-1}}\lambda_{k-1},\frac{\alpha}{ L_k}\Bigr\}$ \STATE $x^{k+1} = x^k - \lambda_k \nabla f_{\xi^k}(x^k)$ \STATE $\th_k = \frac{\lambda_k}{\lambda_{k-1}}$ \ENDFOR \end{algorithmic} \end{algorithm} \begin{theorem} Let $f_\xi$ be $L$-smooth and $\mu$-strongly convex almost surely. Assuming $\alpha\le \frac{1}{2\kappa}$ and estimating $L_k$ with $\nabla f_{\zeta^k}$, the complexity to get $\E{\n{x^k- x^*}^2}\le \varepsilon$ is not worse than $\mathcal{O}\left(\frac{\kappa^2}{\varepsilon}\log \frac{\kappa}{\varepsilon}\right)$. Furthermore, if the model is overparameterized, i.e.,\ $\nabla f_\xi(x^*)=0$ almost surely, then one can estimate $L_k$ with $\xi^k$ and the complexity is $\mathcal{O}\left(\kappa^2 \log \frac{1}{\varepsilon}\right)$. 
\end{theorem} Note that in both cases we match the known dependency on $\varepsilon$ up to logarithmic terms, but we get an extra $\kappa$ as the price for adaptive estimation of the stepsize. Another potential application of our techniques is the estimation of decreasing stepsizes in SGD. The best known rates for SGD~\cite{stich2019unified} are obtained using $\lambda_k$ that evolves as $\mathcal{O}\left(\frac{1}{L+\mu k}\right)$. This requires estimates of both smoothness and strong convexity, which can be borrowed from the previous discussion. We leave a rigorous analysis of such schemes for future work. \section{Experiments} In the experiments\footnote{See \href{https://github.com/ymalitsky/adaptive_GD}{https://github.com/ymalitsky/adaptive\_gd}}, we compare our approach with the two most related methods: GD and Nesterov's accelerated method for convex functions~\cite{Nesterov1983a}. Additionally, we consider line search, the Polyak step, and the Barzilai-Borwein method. For neural networks we also include a comparison with SGD, SGDm and Adam. \paragraph{Logistic regression.} The logistic loss with $\ell_2$-regularization is given by $\frac{1}{n}\sum_{i=1}^n \log(1 + \exp(-b_i a_i^\top x)) + \frac{\gamma}{2}\|x\|^2$, where $n$ is the number of observations, $\gamma>0$ is a regularization parameter, and $(a_i, b_i)\in\mathbb R^{d}\times \mathbb R$, $i=1,\dots, n$, are the observations. We use `mushrooms' and `covtype' datasets to run the experiments. We choose $\gamma$ proportionally to $\frac{1}{n}$, as is often done in practice. Since we have a closed-form expression for $L=\frac{1}{4n}\|A\|^2+\gamma$, where $A=(a_1^\top, \dotsc, a_n^\top)^\top$, we used the stepsize $\frac{1}{L}$ in GD and its acceleration. The results are provided in Figure~\ref{fig:logistic}.
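As an aside, the adaptive stepsize itself takes only a few lines to implement. The following NumPy sketch runs the update of \Cref{alg:main} on a synthetic instance of this logistic objective; the data, sizes, and helper names are illustrative stand-ins, not our actual experimental setup.

```python
import numpy as np

def adgd(grad, x0, lam0=1e-9, iters=1000):
    """Adaptive GD: lam_k = min{ sqrt(1+th_{k-1}) lam_{k-1},
    ||x^k - x^{k-1}|| / (2 ||grad(x^k) - grad(x^{k-1})||) }."""
    x_prev, g_prev = x0, grad(x0)
    x, lam, th = x0 - lam0 * g_prev, lam0, np.inf
    for _ in range(iters):
        g = grad(x)
        dg = np.linalg.norm(g - g_prev)
        if dg == 0:  # iterates stalled at machine precision
            break
        lam_new = min(np.sqrt(1 + th) * lam,
                      np.linalg.norm(x - x_prev) / (2 * dg))
        th, lam = lam_new / lam, lam_new
        x_prev, g_prev = x, g
        x = x - lam * g
    return x

# synthetic l2-regularized logistic regression
rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.standard_normal((n, d))
b = np.sign(rng.standard_normal(n))
gamma = 1.0 / n

def logreg_grad(x):
    s = 1.0 / (1.0 + np.exp(b * (A @ x)))  # sigmoid(-b_i * a_i^T x)
    return -(A.T @ (b * s)) / n + gamma * x

x_sol = adgd(logreg_grad, np.zeros(d))
```

Note that no stepsize and no knowledge of $L$ enter the loop anywhere.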
\paragraph{Matrix factorization.} Given a matrix $A\in \mathbb R^{m\times n}$ and $r<\min\{m,n\}$, we want to solve $\min_{X=[U,V]} f(X)=f(U,V)=\frac 1 2 \n{UV^\top-A}^2_F$ for $U\in \mathbb R^{m\times r}$ and $V\in \mathbb R^{n\times r}$. It is a nonconvex problem, and the gradient $\nabla f$ is not globally Lipschitz. With some tuning, one can still apply GD and Nesterov's accelerated method, but---and we want to emphasize it---finding working stepsizes in practice was not trivial. The stepsizes we chose were nearly optimal: the methods did not converge when we doubled them. In contrast, our methods do not require any tuning, so even in this regard they are much more practical. For the experiments we used the MovieLens 100K dataset~\cite{harper2016movielens} with more than a million entries and several values of $r=10,\ 20,\ 30$. All algorithms were initialized at the same point, chosen randomly. The results are presented in \Cref{fig:factorization}. \paragraph{Cubic regularization.} \begin{figure*}[!t] \centering \begin{subfigure}[t]{0.32\textwidth} {\includegraphics[trim={3mm 0 3mm 0},clip,width=1\textwidth]{plots/logistic_mushrooms.pdf}} \caption{Mushrooms dataset, objective} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} {\includegraphics[trim={3mm 0 3mm 0},clip,width=1\textwidth]{plots/logistic_mushrooms_lr.pdf}} \caption{Mushrooms dataset, stepsize} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} {\includegraphics[trim={3mm 0 3mm 0},clip,width=1\textwidth]{plots/logistic_covtype.pdf}} \caption{Covtype dataset, objective} \end{subfigure} \caption{Results for the logistic regression problem.} \label{fig:logistic} \end{figure*} \begin{figure*}[t] \centering \begin{subfigure}[t]{0.32\textwidth} {\includegraphics[trim={3mm 0 3mm 0},clip,width=1\textwidth]{plots/matrix_factorization=10.pdf}} \caption{$r=10$} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} {\includegraphics[trim={3mm 0 3mm
0},clip,width=1\textwidth]{plots/matrix_factorization=20.pdf}} \caption{$r=20$} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} {\includegraphics[trim={3mm 0 3mm 0},clip,width=1\textwidth]{plots/matrix_factorization=30.pdf}} \caption{$r=30$} \end{subfigure} \caption{Results for matrix factorization. The objective is neither convex nor smooth.} \label{fig:factorization} \end{figure*} \begin{figure*}[t] \centering \begin{subfigure}[t]{0.32\textwidth} {\includegraphics[trim={3mm 0 3mm 0},clip,width=1\textwidth]{plots/cubic_reg_covtype_10.pdf}} \caption{$M=10$} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} {\includegraphics[trim={3mm 0 3mm 0},clip,width=1\textwidth]{plots/cubic_reg_covtype_20.pdf}} \caption{$M=20$} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} {\includegraphics[trim={3mm 0 3mm 0},clip,width=1\textwidth]{plots/cubic_reg_covtype_100.pdf}} \caption{$M=100$} \end{subfigure} \caption{Results for the non-smooth subproblem from cubic regularization.} \label{fig:cubic} \end{figure*} In the cubic regularization of Newton's method~\cite{nesterov2006cubic}, at each iteration we need to minimize $f(x)= g^\top x + \frac{1}{2}x^\top H x + \frac{M}{6}\|x\|^3$, where $g\in \mathbb R^d, H\in \mathbb R^{d\times d}$ and $M>0$ are given. This objective is smooth only locally due to the cubic term, which is our motivation to consider it. Here, $g$ and $H$ were the gradient and the Hessian of the logistic loss on the `covtype' dataset, evaluated at $x=0\in\mathbb R^d$. Although the values of $M=10$, $20$, $100$ led to similar results, they required different numbers of iterations, so we present the corresponding results in \Cref{fig:cubic}.
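The gradient of this subproblem is $\nabla f(x)=g+Hx+\frac{M}{2}\|x\|x$, which is Lipschitz only on bounded sets, yet the adaptive loop applies verbatim. A small self-contained sketch follows; the random positive semidefinite $H$ is an illustrative stand-in for the logistic-loss Hessian.

```python
import numpy as np

rng = np.random.default_rng(1)
d, M = 20, 10.0
g = rng.standard_normal(d)
B = rng.standard_normal((d, d))
H = B.T @ B / d  # PSD stand-in for the logistic-loss Hessian

def f(x):
    return g @ x + 0.5 * x @ (H @ x) + M / 6 * np.linalg.norm(x) ** 3

def grad_f(x):
    # gradient of (M/6)||x||^3 is (M/2)||x|| x
    return g + H @ x + 0.5 * M * np.linalg.norm(x) * x

# adaptive gradient descent, same update as before
x_prev = np.zeros(d)
gp = grad_f(x_prev)
lam, th = 1e-9, np.inf
x = x_prev - lam * gp
for _ in range(3000):
    gr = grad_f(x)
    dg = np.linalg.norm(gr - gp)
    if dg == 0:  # stalled at machine precision
        break
    lam_new = min(np.sqrt(1 + th) * lam,
                  np.linalg.norm(x - x_prev) / (2 * dg))
    th, lam = lam_new / lam, lam_new
    x_prev, gp = x, gr
    x = x - lam_new * gr
```

The stepsize automatically tracks the local curvature $\approx H + \frac{M}{2}\bigl(\|x\|I + \frac{xx^\top}{\|x\|}\bigr)$, which grows with $\|x\|$; no global $L$ exists to hand to GD here.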
\paragraph{Barzilai-Borwein, Polyak and line searches.} \begin{figure*}[t] \centering \begin{subfigure}[t]{0.32\textwidth} {\includegraphics[trim={3mm 0 3mm 0},clip,width=1\textwidth]{plots/logistic_mushrooms_extra.pdf}} \caption{Mushrooms dataset, objective} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} {\includegraphics[trim={3mm 0 3mm 0},clip,width=1\textwidth]{plots/logistic_covtype_extra.pdf}} \caption{Covtype dataset, objective} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} {\includegraphics[trim={3mm 0 3mm 0},clip,width=1\textwidth]{plots/logistic_w8a_ls.pdf}} \caption{W8a dataset, objective} \end{subfigure} \caption{Additional results for the logistic regression problem.} \label{fig:logistic_extra} \end{figure*} We started this paper with an overview of different approaches to tackle the issue of choosing a stepsize for GD. Now, we demonstrate some of those solutions. We again consider the $\ell_2$-regularized logistic regression (same setting as before) with `mushrooms', `covtype', and `w8a' datasets. In~\Cref{fig:logistic_extra} (left) we see that the Barzilai-Borwein method can indeed be very fast. However, as we said before, it lacks a theoretical basis, and \Cref{fig:logistic_extra} (middle) illustrates this quite well. Simply changing one dataset to another makes both versions of this method diverge on a strongly convex and smooth problem. Polyak's method consistently performs well (see \Cref{fig:logistic_extra} (left and middle)); however, only after it was fed the value of $f_*$ that we found by running another method. Unfortunately, for logistic regression there is no way to guess this value beforehand. Finally, line search for GD (Armijo version) and Nesterov GD (implemented as in \cite{Nesterov2013a}) eliminates the need to know the stepsize, but this comes with a higher price per iteration, as \Cref{fig:logistic_extra} (right) shows.
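For completeness, the Armijo variant follows the standard backtracking template; a generic sketch is below, where the constant $c=10^{-4}$ and the halving factor are conventional defaults rather than the exact values from our implementation.

```python
import numpy as np

def armijo_gd_step(f, grad, x, lam=1.0, c=1e-4, shrink=0.5):
    # One gradient step with Armijo backtracking: halve lam until the
    # sufficient-decrease condition f(x - lam*g) <= f(x) - c*lam*||g||^2 holds.
    # Every rejected trial costs an extra evaluation of f, which is where the
    # higher per-iteration price of line search comes from.
    fx, g = f(x), grad(x)
    while f(x - lam * g) > fx - c * lam * (g @ g):
        lam *= shrink
    return x - lam * g, lam

# example on a simple quadratic
quad = lambda x: 0.5 * float(x @ x)
x1, lam1 = armijo_gd_step(quad, lambda x: x, np.array([4.0, -2.0]))
```

In contrast, the adaptive stepsize requires no extra function evaluations at all.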
In fact, in all our experiments for logistic regression with different datasets, one iteration of Armijo line search was approximately 2 times more expensive than AdGD, while line search for Nesterov GD was 4 times more expensive. We note that these observations are consistent with the theoretical derivations in \cite{Nesterov2013a}. \paragraph{Neural networks.} \begin{figure*}[t] \centering \begin{subfigure}[t]{0.32\textwidth} {\includegraphics[trim={3mm 0 3mm 0},clip,width=1\textwidth]{plots/test_acc_resnet18.pdf}} \caption{Test accuracy} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} {\includegraphics[trim={3mm 0 3mm 0},clip,width=1\textwidth]{plots/lr_resnet18.pdf}} \caption{Stepsize} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} {\includegraphics[trim={3mm 0 3mm 0},clip,width=1\textwidth]{plots/train_loss_resnet18.pdf}} \caption{Train loss} \end{subfigure} \caption{Results for training ResNet-18 on Cifar10. Labels for AdGD correspond to how $\lambda_k$ was estimated.} \label{fig:resnet} \end{figure*} \begin{figure*}[t] \centering \begin{subfigure}[t]{0.32\textwidth} {\includegraphics[trim={3mm 0 3mm 0},clip,width=1\textwidth]{plots/test_acc_densenet121.pdf}} \caption{Test accuracy} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} {\includegraphics[trim={3mm 0 3mm 0},clip,width=1\textwidth]{plots/lr_densenet121.pdf}} \caption{Stepsize} \end{subfigure} \hfill \begin{subfigure}[t]{0.32\textwidth} {\includegraphics[trim={3mm 0 3mm 0},clip,width=1\textwidth]{plots/train_loss_densenet121.pdf}} \caption{Train loss} \end{subfigure} \caption{Results for training DenseNet-121 on Cifar10.} \label{fig:densenet} \end{figure*} We use standard ResNet-18 and DenseNet-121 architectures implemented in PyTorch~\cite{paszke2017automatic} and train them to classify images from the Cifar10 dataset~\cite{krizhevsky2009learning} with the cross-entropy loss. We use batch size 128 for all methods.
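In code, the per-step logic of \Cref{alg:stoch} (Option I) is equally short. The following NumPy sketch runs it on an interpolating least-squares problem; the synthetic data and the conservative choice $\alpha=0.1$ are for illustration only, not the setting used for the networks.

```python
import numpy as np

def adaptive_sgd(grad_batch, batches, x0, alpha=0.1, lam0=1e-9):
    # Algorithm 3, Option I (biased): the same sample xi^k is used
    # both for the step and for the curvature estimate L_k.
    it = iter(batches)
    x_prev = x0
    x = x0 - lam0 * grad_batch(x0, next(it))
    lam, th = lam0, np.inf
    for xi in it:
        g, g_prev = grad_batch(x, xi), grad_batch(x_prev, xi)
        dg = np.linalg.norm(g - g_prev)
        if dg > 0:
            # lam_k = min{ sqrt(1+th_{k-1}) lam_{k-1}, alpha / L_k }
            lam_new = min(np.sqrt(1 + th) * lam,
                          alpha * np.linalg.norm(x - x_prev) / dg)
        else:
            lam_new = lam  # iterates stalled at machine precision
        th, lam = lam_new / lam, lam_new
        x_prev, x = x, x - lam_new * g
    return x

# interpolating least squares: the gradient of every f_xi vanishes at x_true
rng = np.random.default_rng(2)
n, d, bs = 200, 5, 25
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
y = A @ x_true

def grad_batch(x, idx):
    Ab = A[idx]
    return Ab.T @ (Ab @ x - y[idx]) / len(idx)

batches = [rng.choice(n, size=bs, replace=False) for _ in range(3000)]
x_out = adaptive_sgd(grad_batch, batches, np.zeros(d))
```

Since the problem is overparameterized in the sense of the theorem above, the biased option converges here without an extra sample $\zeta^k$.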
For our method, we observed that $\frac{1}{L_k}$ works better than $\frac{1}{2L_k}$. We ran it with $\sqrt{1+\gamma\th_k}$ in the other factor, with values of $\gamma$ from $\{1, 0.1, 0.05, 0.02, 0.01\}$, and $\gamma=0.02$ performed best. For reference, we also provide the result for the theoretical estimate, as well as the value $\gamma=0.1$, in the plot with estimated stepsizes. The results are depicted in~\Cref{fig:resnet,fig:densenet} and other details are provided in~\cref{ap:exp_details}. We can see that, surprisingly, our method achieves better test accuracy than SGD despite having the same train loss. At the same time, our method is significantly slower at the early stage and the results are quite noisy for the first 75 epochs. Another observation is that the smoothness estimates are very non-uniform and $\lambda_k$ plummets once the train loss becomes small. \section{Perspectives} We briefly provide a few directions which we personally consider to be important and challenging. \begin{enumerate} \item \textbf{Nonconvex case.} A great challenge for us is to obtain theoretical guarantees for the proposed method in the nonconvex setting. We are not aware of any generic first-order method for nonconvex optimization that does not rely on the descent lemma (or its generalization); see, e.g., \cite{attouch2013convergence}. \item \textbf{Performance estimation.} In our experiments we often observed much better performance of Algorithm~\ref{alg:main} than GD or AGD. However, the theoretical rate we can show coincides with that of GD. The challenge here is to bridge this gap, and we hope that the approach pioneered by~\cite{drori2014performance} and further developed in~\cite{taylor2017smooth,kim2016optimized,taylor19a} has the potential to do that. \item \textbf{Composite minimization.} In classical first-order methods, the transition from smooth to composite minimization~\cite{Nesterov2013a} is rather straightforward.
Unfortunately, the proposed proof for \Cref{alg:main} does not seem to provide any route for generalization, and we hope there is some way of resolving this issue. \item \textbf{Stochastic optimization.} The derived bounds for the stochastic case are not satisfactory and have a suboptimal dependency on $\kappa$. However, it is not clear to us whether one can extend the techniques from the deterministic analysis to improve the rate. \item \textbf{Heuristics.} Finally, we want to have some solid ground in understanding the performance of the proposed heuristics. \end{enumerate} \paragraph{Acknowledgment.} Yura Malitsky wishes to thank Roman Cheplyaka for his interest in optimization that partly inspired the current work. Yura Malitsky was supported by the ONRG project N62909-17-1-2111 and HASLER project N16066. \newpage \printbibliography \clearpage \part*{Appendix} \section{Missing proofs} Recall that in the proof of \Cref{th:main} we only showed boundedness of the iterates and the complexity for minimizing $f(x)$. It remains to show that the sequence $(x^k)$ converges to a solution. For this, we need some variation of the Opial lemma. \begin{lemma}\label{opial-like} Let $(x^k)$ and $(a_k)$ be two sequences in $\mathbb R^d$ and $\mathbb R_+$, respectively. Suppose that $(x^k)$ is bounded, its cluster points belong to $\mathcal X \subset \mathbb R^d$, and it also holds that \begin{equation}\label{fejer} \n{x^{k+1}-x}^2 + a_{k+1}\leq \n{x^k-x}^2 + a_k, \qquad \forall x\in \mathcal X. \end{equation} Then $(x^k)$ converges to some element in $\mathcal X$. \end{lemma} \begin{proof} Let $\bar x_1$, $\bar x_2$ be any cluster points of $(x^k)$. Thus, there exist two subsequences $(x^{k_i})$ and $(x^{k_j})$ such that $x^{k_i}\to \bar x_1$ and $x^{k_j}\to \bar x_2$. Since the sequence $\n{x^k-x}^2 + a_k$ is nonnegative and, by \eqref{fejer}, nonincreasing, $\lim_{k\to \infty}(\n{x^k-x}^2 + a_k)$ exists for any $x\in \mathcal X$. Let $x=\bar x_1$.
This yields \begin{align*} \lim_{k\to \infty}(\n{x^k-\bar x_1}^2 + a_k)&=\lim_{i\to \infty}(\n{x^{k_i}-\bar x_1}^2 + a_{k_i})=\lim_{i\to \infty}a_{k_i}\\ & = \lim_{j\to \infty}(\n{x^{k_j}-\bar x_1}^2 + a_{k_j})=\n{\bar x_2-\bar x_1}^2 + \lim_{j\to \infty}a_{k_j}. \end{align*} Hence, $\lim_{i\to \infty} a_{k_i} = \lim_{j\to \infty}a_{k_j} + \n{\bar x_1-\bar x_2}^2$. Doing the same with $x=\bar x_2$ instead of $x=\bar x_1$ yields $\lim_{j\to \infty} a_{k_j} = \lim_{i\to \infty}a_{k_i} + \n{\bar x_1-\bar x_2}^2$. Summing the two identities gives $2\n{\bar x_1-\bar x_2}^2=0$, that is, $\bar x_1=\bar x_2$, which finishes the proof. \end{proof} Another statement that we need here is the following tightening of the convexity property. \begin{lemma}[Theorem 2.1.5, \cite{Nesterov2013}] \label{lemma:coco} Let $\mathcal{C}$ be a closed convex set in $\mathbb R^d$. If $f\colon \mathcal{C}\to \mathbb R $ is convex and $L$-smooth, then $\forall x,y\in \mathcal{C}$ it holds \begin{align}\label{eq:smooth_and_convex} f(x)-f(y)-\lr{\nabla f(y),x-y}\geq \frac{1}{2L}\n{\nabla f(x)-\nabla f(y)}^2. \end{align} \end{lemma} \begin{proof}[\textbf{Proof of Theorem~\ref{th:main}}](\textit{Convergence of $(x^k)$}) Note that in the first part we have already proved that $(x^k)$ is bounded and that $\nabla f$ is $L$-Lipschitz on $\mathcal{C}=\clconv\{x^*, x^0, x^1, \dots\}$. Invoking Lemma~\ref{lemma:coco}, we deduce that \begin{equation} \label{eq:better_with_Lip} \lambda_k(f(x^*) - f(x^k)) \geq \lambda_k \lr{\nabla f(x^k), x^*-x^{k}} + \frac{\lambda_k}{2L}\n{\nabla f(x^k)}^2. \end{equation} This indicates that instead of using inequality~\eqref{eq:conv} in the proof of Lemma~\ref{lemma:energy}, we could use the better estimate~\eqref{eq:better_with_Lip}. However, we want to emphasize that we did not assume that $\nabla f$ is globally Lipschitz, but rather obtained Lipschitzness on $\mathcal{C}$ as a byproduct of our analysis.
Clearly, in the end this improvement gives us an additional term $\frac{\lambda_k}{L}\n{\nabla f(x^k)}^2$ in the left-hand side of \eqref{eq:lemma_ineq}, that is \begin{multline} \label{eq:lemma_ineq_appendix} \n{x^{k+1}-x^*}^2+ \frac 1 2 \n{x^{k+1}-x^k}^2 + 2\lambda_{k}(1+\th_{k}) (f(x^k)-f_*) + \frac{\lambda_k}{L}\n{\nabla f(x^k)}^2 \\ \leq \n{x^k-x^*}^2 + \frac 1 2 \n{x^k-x^{k-1}}^2 + 2\lambda_k \th_k (f(x^{k-1})-f_*). \end{multline} Thus, telescoping \eqref{eq:lemma_ineq_appendix}, one obtains that $\sum_{i=1}^k\frac{\lambda_i}{L}\n{\nabla f(x^i)}^2\leq D$. As $\lambda_k\geq \frac{1}{2L}$, one has that $\nabla f(x^k)\to 0$. Now we may conclude that all cluster points of $(x^k)$ are solutions of~\eqref{main}. Let $\mathcal X$ be the solution set of \eqref{main} and $a_k = \frac 1 2 \n{x^k-x^{k-1}}^2 + 2\lambda_k \th_k(f(x^{k-1})-f_*)$. We want to finish the proof by applying Lemma~\ref{opial-like}. To this end, notice that inequality~\eqref{eq:lemma_ineq} yields \eqref{fejer}, since $\lambda_{k+1}\th_{k+1}\leq (1+\th_k)\lambda_k$. This completes the proof. \end{proof} \begin{proof}[\textbf{Proof of \Cref{th:strong}}]~\\ First of all, we note that using the stricter inequality $\lambda_k\leq \sqrt{1+\frac{\th_{k-1}}{2}}\lambda_{k-1}$ does not change the statement of \Cref{th:main}. Hence, $x^k\to x^*$ and there exist $\mu,L >0$ such that $f$ is $\mu$-strongly convex and $\nabla f$ is $L$-Lipschitz on $\mathcal{C}=\clconv\{x^*, x^0, x^1, \dots\}$. Secondly, due to local strong convexity, $\n{\nabla f(x^k)-\nabla f(x^{k-1})}\geq \mu \n{x^k-x^{k-1}}$, and hence $\lambda_k \leq \frac{1}{2\mu}$ for $k\geq 1$. Now we tighten some steps in the analysis to improve bound~\eqref{eq:conv}. By strong convexity, \begin{align*} \lambda_k \lr{\nabla f(x^k), x^*-x^{k}} & \leq \lambda_k(f(x^*) - f(x^k)) - \lambda_k\frac{\mu}{2}\|x^* - x^k\|^2.
\end{align*} By $L$-smoothness and the bound $\lambda_k\le \frac{1}{2\mu}$, \begin{align*} \lambda_k \lr{\nabla f(x^k), x^*-x^{k}} & \leq \lambda_k(f(x^*) - f(x^k)) - \lambda_k\frac{1}{2L}\|\nabla f(x^k)\|^2 \\ &= \lambda_k(f_* - f(x^k)) - \frac{1}{2L\lambda_k}\|x^{k+1}-x^k\|^2 \\ &\le \lambda_k(f_* - f(x^k)) - \frac{\mu}{L}\|x^{k+1}-x^k\|^2. \end{align*} Together, these two bounds give us \begin{align*} \lambda_k \lr{\nabla f(x^k), x^*-x^{k}} & \leq \lambda_k(f_* - f(x^k)) - \lambda_k\frac{\mu}{4}\|x^k - x^*\|^2 - \frac{\mu}{2L}\|x^{k+1}-x^k\|^2. \end{align*} We keep inequality~\eqref{eq:terrible_simple} and the rest of the proof as is. Then the strengthened analog of~\eqref{eq:lemma_ineq} will be \begin{align} &\n{x^{k+1}-x^*}^2+ \frac 1 2\left(1 + \frac{2\mu}{L}\right) \n{x^{k+1}-x^k}^2 + 2\lambda_k (1+\th_k) (f(x^k)-f_*) \nonumber \\ \leq \, & \left(1 - \frac{\lambda_k\mu}{2}\right)\n{x^k-x^*}^2 + \frac 1 2 \n{x^k-x^{k-1}}^2 + 2\lambda_k \th_k (f(x^{k-1})-f_*) \nonumber\\ \leq \, & \left(1 - \frac{\lambda_k\mu}{2}\right)\n{x^k-x^*}^2 + \frac 1 2 \n{x^k-x^{k-1}}^2 + 2\lambda_{k-1} \left(1+\frac{\th_{k-1}}{2}\right) (f(x^{k-1})-f_*), \label{eq:contraction} \end{align} where in the last inequality we used our new condition on $\lambda_k$. Under the new update we have contraction in every term: $1-\frac{\lambda_k\mu}{2}$ in the first, $\frac{1}{1+2\mu/L}=1-\frac{2\mu}{L+2\mu}$ in the second and $\frac{1+\th_{k-1}/2}{1+\th_{k-1}}=1-\frac{\th_{k-1}}{2(1+\th_{k-1})}$ in the last one. To further bound the last contraction, recall that $\lambda_k \in \left[\frac{1}{2L}, \frac{1}{2\mu}\right]$ for $k\ge 1$. Therefore, $\th_k = \frac{\lambda_k}{\lambda_{k-1}} \ge \frac{1}{\kappa}$ for any $k>1$, where $\kappa\stackrel{\mathrm{def}}{=} \frac{L}{\mu}$. Since the function $\th\mapsto \frac{\th}{1+\th}$ is monotonically increasing for $\th>0$, this implies $\frac{\th_{k-1}}{2(1+\th_{k-1})}\ge \frac{1}{2(\kappa+1)}$ when $k>2$.
Thus, for the full energy $\Psi^{k+1}$ (the left-hand side of~\eqref{eq:contraction}) we have \begin{align*} \Psi^{k+1} &\le \left(1 - \min\left\{\frac{\lambda_k \mu}{2}, \frac{1}{2(\kappa+1)}, \frac{2\mu}{L+2\mu}\right\}\right) \Psi^k. \end{align*} Using the simple bounds $\frac{\lambda_k\mu}{2}\geq \frac{1}{4\kappa}$, $\frac{2\mu}{L+2\mu}=\frac{2}{\kappa +2}\geq \frac{1}{4\kappa}$, and $\frac{1}{2(\kappa +1)}\geq \frac{1}{4\kappa}$, we obtain $\Psi^{k+1}\le (1 - \frac{1}{4\kappa})\Psi^k$ for $k>2$. This gives a $\mathcal{O}\left(\kappa \log \frac{1}{\varepsilon}\right)$ convergence rate. \end{proof} \section{Extensions} \subsection{More general update} One may wonder how flexible the update for $\lambda_k$ in \Cref{alg:main} is. For example, is it necessary to upper bound the stepsize with $\sqrt{1+\th_k}\lambda_{k-1}$ and put $2$ in the denominator of $\frac{\n{x^k-x^{k-1}}}{2\n{\nabla f(x^k)-\nabla f(x^{k-1})}}$? \Cref{alg:general_update}, which we present here, partially answers this question. \begin{algorithm}[t] \caption{Adaptive gradient descent (general update)} \label{alg:general_update} \begin{algorithmic}[1] \STATE \textbf{Input:} $x^0 \in \mathbb R^d$, $\lambda_0>0$, $\th_0=+\infty$, $\a\in (0,1)$, $\b = \frac{1}{2(1-\a)}$\\ \STATE $x^1= x^0-\lambda_0\nabla f(x^0)$ \FOR{$k = 1,2,\dots$} \STATE $\lambda_k = \min\Bigl\{ \sqrt{\frac{1}{\beta}+\th_{k-1}}\lambda_{k-1},\frac{\a\n{x^{k}-x^{k-1}}}{\n{\nabla f(x^{k})-\nabla f(x^{k-1})}}\Bigr\}$ \STATE $x^{k+1} = x^k - \lambda_k \nabla f(x^k)$ \STATE $\th_k = \frac{\lambda_k}{\lambda_{k-1}}$ \ENDFOR \end{algorithmic} \end{algorithm} Obviously, \Cref{alg:main} is a particular case of \Cref{alg:general_update} with $\a=\frac 12$ and $\b =1$. \begin{theorem}\label{th:gen_update} Suppose that $f\colon \mathbb R^d\to \mathbb R$ is convex with locally Lipschitz gradient $\nabla f$.
Then $(x^k)$ generated by~\Cref{alg:general_update} converges to a solution of \eqref{main} and we have that \[f(\hat x^k)-f_* \leq \frac{D}{2S_k}=\mathcal{O}\Bigl(\frac{1}{k}\Bigr),\] where \begin{align*} \hat x^k &= \frac{\lambda_k(1+\th_k\b)x^k + \sum_{i=1}^{k-1}w_i x^i}{S_k},\\ w_i &= \lambda_i(1+\th_i\b)-\lambda_{i+1}\th_{i+1}\b,\\ S_k &= \lambda_k(1+\th_k\b) + \sum_{i=1}^{k-1}w_i = \sum_{i=1}^k \lambda_i + \lambda_1\th_1\b, \end{align*} and $D$ is a constant that explicitly depends on the initial data and the solution set. \end{theorem} \begin{proof} Let $x^*$ be an arbitrary solution of \eqref{main}. We note that equations~\eqref{eq:norms} and~\eqref{dif_x} hold for any variant of GD, independently of $\lambda_k$, $\a$, $\b$. With the new rule for $\lambda_k$, from \eqref{dif_x} it follows that \begin{align*} \|x^{k+1}-x^k\|^2 &\le 2\lambda_k\th_k(f(x^{k-1})-f(x^k))-\|x^{k+1}-x^k\|^2+2\lambda_k\|\nabla f(x^k)-\nabla f(x^{k-1})\|\| x^k-x^{k+1}\| \\ &\le 2\lambda_k\th_k(f(x^{k-1})-f(x^k))-(1-\alpha)\|x^{k+1}-x^k\|^2+\alpha\|x^k-x^{k-1}\|^2, \end{align*} which, after multiplication by $\beta$ and reshuffling the terms, becomes \begin{align*} \beta(2 - \alpha)\|x^{k+1}-x^k\|^2+2\beta\lambda_k\th_k (f(x^k)-f_*) \le \alpha\beta\|x^k-x^{k-1}\|^2+ 2\beta\lambda_k\th_k (f(x^{k-1})-f_*). \end{align*} Adding \eqref{eq:norms} and the latter inequality gives us \begin{align*} &\|x^{k+1}-x^*\|^2 + 2\lambda_k(1+\th_k\beta)(f(x^k)-f_*) + (2\beta-\alpha\beta-1)\|x^{k+1}-x^k\|^2\\ &\le \|x^k-x^*\|^2+2\lambda_k\th_k\beta(f(x^{k-1})-f_*) +\alpha\beta\|x^k-x^{k-1}\|^2. \end{align*} Notice that by $\b = \frac{1}{2(1-\a)}$, we have $2\b-\a\b -1= \a\b$ and hence, \begin{align*} &\|x^{k+1}-x^*\|^2 + 2\lambda_k(1+\th_k\beta)(f(x^k)-f_*) +\a\b\|x^{k+1}-x^k\|^2\\ &\le \|x^k-x^*\|^2+2\lambda_k\th_k\beta(f(x^{k-1})-f_*) +\alpha\beta\|x^k-x^{k-1}\|^2. \end{align*} As a sanity check, we can see that with $\a=\frac{1}{2}$ and $\b =1$, the above inequality coincides with~\eqref{eq:lemma_ineq}.
Telescoping this inequality, we deduce \begin{align}\label{eq:telescope2} \n{x^{k+1}-x^*}^2&+ \a\b \n{x^{k+1}-x^k}^2 + 2\lambda_k (1+\th_k\b) (f(x^k)-f_*) \notag \\ &\qquad + 2\sum_{i=1}^{k-1}[\lambda_i(1+\th_i\b)-\lambda_{i+1}\th_{i+1}\b](f(x^i)-f_*) \notag\\ &\leq \n{x^1-x^*}^2 + \a\b \n{x^1-x^{0}}^2 + 2\lambda_1 \th_1\b [f(x^{0})-f_*]\stackrel{\mathrm{def}}{=} D. \end{align} Note that because of the way we defined the stepsize, $\lambda_i(1+\th_i\b)-\lambda_{i+1}\th_{i+1}\b\geq 0$. Thus, the sequence $(x^k)$ is bounded. Since $\nabla f$ is locally Lipschitz, it is Lipschitz continuous on bounded sets. Let $L$ be a Lipschitz constant of $\nabla f$ on the bounded set $\mathcal{C}=\clconv\{x^*,x^1,x^2,\dots\}$. If $\a \leq \frac 1 2 $, then $\frac{1}{\b}\geq 1$ and similarly to~\Cref{th:main} we may conclude that $\lambda_k\geq\frac{\a}{L}$ for all $k$. However, for the case $\a > \frac 1 2$ we cannot do this. Instead, we prove that $\lambda_k\ge \frac{2\a(1-\a)}{L}$, which suffices for our purposes. Let $m, n\in \mathbb N$ be the smallest numbers such that $\b^{-\frac{1}{2^m}}\ge 1 - \frac{1}{2\b}$ and $(1+\frac{1}{\b})^{\frac n 2}\ge \b$. We want to prove that for any $k$ it holds $\lambda_k\ge \frac{2\a(1-\a)}{L}$ and among every $m+n+1$ consecutive elements $\lambda_k,\lambda_{k+1},\dots,\lambda_{k+m+n}$ at least one is no less than $\frac{\a}{L}$. We shall prove this by induction. First, note that the second bound always satisfies $\frac{\a\n{x^k-x^{k-1}}}{\n{\nabla f(x^k)-\nabla f(x^{k-1})}}\geq \frac{\a}{L}$ for all $k$, which also implies that $\lambda_1\geq \frac{\a}{L}$. If for all $k$ we have $\lambda_k\ge \frac{\a}{L}$, then we are done. Now assume that $\lambda_{k-1}\geq \frac{\a}{L}$ and $\lambda_k<\frac{\a}{L}$ for some $k$. Choose the largest $j$ (possibly infinite) such that the second bound is not active for $\lambda_k, \dotsc, \lambda_{k+j-1}$, i.e., $\lambda_{k+i}=\sqrt{\frac{1}{\b} + \th_{k+i-1}}\lambda_{k+i-1}$ for $i<j$.
Let us prove that $\lambda_k, \dotsc, \lambda_{k+j-1}\ge \frac{2\a(1-\a)}{L}$. The definition of $j$ yields $\th_{k+i}=\sqrt{\frac{1}{\b} + \th_{k+i-1}}$ for all $i=0,\dots, j-1$. Recall that $\b > 1$, and thus, \[\th_k\geq \b^{-\frac 1 2},\quad \th_{k+1}\geq \sqrt{\frac{1}{\b} + \sqrt{\frac{1}{\b}}}\geq \b^{-\frac{1}{4}},\quad \dotsc, \quad \th_{k+i}\geq \sqrt{\frac{1}{\b} + \b^{-\frac{1}{2^i}}} \geq \b^{-\frac{1}{2^{i+1}}}\] for all $i<j$. Now it remains to notice that for any $i<j$ \[\frac{\lambda_{k+i}}{\lambda_{k-1}} = \th_{k}\th_{k+1}\dots \th_{k+i} \geq \beta^{-\frac{1}{2}-\frac{1}{4}-\dotsb -\frac{1}{2^{i+1}} }\geq \frac{1}{\beta}=2(1-\a)\] and hence $\lambda_{k+i}\geq 2(1-\a)\lambda_{k-1}\geq \frac{2\a(1-\a)}{L}$. If $j\leq m+n$, then at the $(k+j)$-th iteration the second bound is active, i.e., $\lambda_{k+j}\ge \frac{\a}{L}$, and we are done with the other claim as well. Otherwise, note that \[ \th_{k+m-1} \ge \b^{-\frac{1}{2^m}} \ge 1 - \frac{1}{2\b}, \] so $\th_{k+m}=\sqrt{\frac{1}{\b}+\th_{k+m-1}}\ge \sqrt{\frac{1}{\b}+1 - \frac{1}{2\b}}=\sqrt{1 + \frac{1}{2\b}}$ and for any $i\in [m, j-2]$ we have $\th_{k+i+1}= \sqrt{\frac{1}{\b}+\th_{k+i}}\ge \sqrt{\frac{1}{\b}+1}$. Thus, \[ \lambda_{k+m+n} = \lambda_{k-1}\Bigl(\prod_{l=k}^{k+m-1}\th_l\Bigr)\Bigl(\prod_{l=k+m}^{k+m+n}\th_l\Bigr)\ge \lambda_{k-1}\frac{1}{\b}\sqrt{1+\frac{1}{2\b}}\Bigl(1+\frac{1}{\b}\Bigr)^{\frac n 2}\ge \lambda_{k-1} \ge \frac{\a}{L}, \] so we have shown the second claim too. To conclude, in both cases $\a\leq \frac 12$ and $\a>\frac 12$, we have $S_k = \Omega(k)$. Applying Jensen's inequality to the sum of all terms $f(x^i)-f_*$ in the left-hand side of~\eqref{eq:telescope2}, we obtain \[\frac D 2 \geq \frac{\text{LHS of \eqref{eq:telescope2}}}{2} \geq S_k (f(\hat x^k)-f_*),\] where $\hat x^k$ is defined in the statement of the theorem. Finally, convergence of $(x^k)$ can be proved in a similar way as in \Cref{th:main}.
\end{proof} \subsection{$f$ is $L$-smooth} Often, it is known that $f$ is smooth and even some estimate of the Lipschitz constant $L$ of $\nabla f$ is available. In this case, we can use slightly larger steps, since the stronger inequality of \Cref{lemma:coco} holds instead of mere convexity. To take advantage of it, we present a modified version of \Cref{alg:main} in \Cref{alg:smooth}. Note that we have chosen to modify \Cref{alg:main} and not its more general variant~\Cref{alg:general_update} only for simplicity. \begin{algorithm}[t] \caption{Adaptive GD ($L$ is known)} \label{alg:smooth} \begin{algorithmic}[1] \STATE \textbf{Input:} $x^0 \in \mathbb R^d$, $\lambda_0 = \frac{1}{L}$, $\th_0=+\infty$\\ \STATE $x^1 = x^0-\lambda_0\nabla f(x^0)$ \FOR{$k = 1,2,\dots$} \STATE $L_k = \frac{\n{\nabla f(x^{k})-\nabla f(x^{k-1})}}{\n{x^{k}-x^{k-1}}}$ \STATE $\lambda_k = \min\left\{ \sqrt{1+\th_{k-1}}\lambda_{k-1},\frac{1}{\lambda_{k-1}L^2}+\frac{1}{2L_k} \right\}$ \STATE $x^{k+1} = x^k - \lambda_k \nabla f(x^k)$ \STATE $\th_k = \frac{\lambda_k}{\lambda_{k-1}}$ \ENDFOR \end{algorithmic} \end{algorithm} \begin{theorem}\label{theorem:energy-L} Let $f$ be convex and $L$-smooth. Then for $(x^k)$ generated by Algorithm~\ref{alg:smooth} inequality~\eqref{eq:lemma_ineq} holds. As a corollary, it holds for some ergodic vector $\hat x^k$ that $f(\hat x^k)-f_*=\mathcal{O}\left(\frac{1}{k}\right)$. \end{theorem} \begin{proof} Proceeding as in Lemma~\ref{lemma:energy}, we have \begin{equation} \label{eq:2_simple-L} \|x^{k+1}- x^*\|^2 = \|x^k - x^*\|^2 - 2\lambda_k \<\nabla f(x^k), x^k-x^*> + \|x^{k+1} - x^{k}\|^2. \end{equation} By convexity of $f$ and Lemma~\ref{lemma:coco}, \begin{align}\label{eq:conv2_simple-L} 2\lambda_k \lr{\nabla f(x^k), x^*-x^{k}} & \overset{\eqref{eq:smooth_and_convex}}{\leq} 2\lambda_k(f(x^*) - f(x^k)-\frac{1}{2L}\n{\nabla f(x^k)}^2) \notag \\&= 2\lambda_k(f_* - f(x^k))-\frac{1}{\lambda_k L}\n{x^{k+1}-x^k}^2.
\end{align} As in~\eqref{dif_x}, we have \begin{equation}\label{dif_x-L} \|x^{k+1} -x^k\|^2 = 2\lambda_k \lr{\nabla f(x^k)-\nabla f(x^{k-1}), x^{k}-x^{k+1}} + 2\lambda_k\lr{\nabla f(x^{k-1}), x^k-x^{k+1}} - \n{x^{k+1}-x^k}^2. \end{equation} Again, instead of using merely convexity of $f$, we combine it with \Cref{lemma:coco}. This gives \begin{align} \label{eq:terrible_simple-L} 2\lambda_k\lr{\nabla f(x^{k-1}), x^k-x^{k+1}} &= \frac{2\lambda_k}{\lambda_{k-1}}\lr{x^{k-1} - x^{k}, x^{k}-x^{k+1}}\notag \\ &= 2\lambda_k\th_k \lr{x^{k-1}-x^{k}, \nabla f(x^k)} \notag \\ & \overset{\eqref{eq:smooth_and_convex}}{\leq} 2\lambda_k\th_k (f(x^{k-1})-f(x^k)) - \frac{\lambda_k\th_k}{L}\n{\nabla f(x^k)-\nabla f(x^{k-1})}^2. \end{align} Since now we have two additional terms $\frac{1}{\lambda_kL}\n{x^{k+1}-x^k}^2$ and $\frac{\lambda_k\th_k}{L}\n{\nabla f(x^{k}) - \nabla f(x^{k-1})}^2$, we can do better than~\eqref{cs}. But first we need a simple yet somewhat tedious fact. By our choice of $\lambda_k$, in every iteration $\lambda_k\leq \frac{1}{\lambda_{k-1}L^2} + \frac{1}{2L_k}$ with $L_k= \frac{\n{\nabla f(x^k)-\nabla f(x^{k-1})}}{\n{x^k-x^{k-1}}} $. We want to show that it implies \begin{equation}\label{so_much_trouble} 2\left(\lambda_k - \frac{\sqrt{\th_k}}{L} \right)\leq \frac{1}{L_k}, \end{equation} which is equivalent to $\lambda_k-\frac{\sqrt{\lambda_k}}{\sqrt{\lambda_{k-1}}L}-\frac{1}{2L_k}\le 0$. Nonnegative solutions of the quadratic inequality $t^2 - \frac{t}{\sqrt{\lambda_{k-1}}L}-\frac{1}{2L_k}\leq 0$ are \[0\leq t\leq \frac{1}{2\sqrt{\lambda_{k-1}}L}+\frac{1}{2}\sqrt{\frac{1}{\lambda_{k-1}L^2}+\frac{2}{L_k}} = \frac{1}{2\sqrt{\lambda_{k-1}}L}\left(1 + \sqrt{1 + \frac{2\lambda_{k-1}L^2}{L_k}}\right).\] Let us prove that $\sqrt{\frac{1}{\lambda_{k-1}L^2} + \frac{1}{2L_k}}$ falls into this segment and, hence, $\sqrt{\lambda_k}$ does as well.
Using the simple inequality $4 + a\leq (1 + \sqrt{1+a})^2 $, for $a>0$, we obtain \[\frac{1}{\lambda_{k-1}L^2} + \frac{1}{2L_k} = \frac{1}{4\lambda_{k-1}L^2} \left(4 + \frac{2\lambda_{k-1}L^2}{L_k}\right)\leq \frac{1}{4{\lambda_{k-1}}L^2}\left(1 + \sqrt{1 + \frac{2\lambda_{k-1}L^2}{L_k}}\right)^2.\] This confirms that~\eqref{so_much_trouble} is true. Thus, by the Cauchy--Schwarz and Young inequalities, one has \begin{equation}\label{cs-L} \begin{aligned} & 2\lambda_k \lr{\nabla f(x^k) -\nabla f(x^{k-1}), x^k - x^{k+1}} \leq 2\lambda_k \n{\nabla f(x^k) -\nabla f(x^{k-1})} \n{x^k - x^{k+1}} \\ &= 2\left(\lambda_k - \frac{\sqrt{\th_k}}{L}\right) \n{\nabla f(x^k) -\nabla f(x^{k-1})}\n{x^k - x^{k+1}} + \frac{2\sqrt{\th_k}}{L}\n{\nabla f(x^k) -\nabla f(x^{k-1})}\n{x^k - x^{k+1}}\\ & \overset{\eqref{so_much_trouble}}{\leq} \frac{1}{L_k} \n{\nabla f(x^k) -\nabla f(x^{k-1})} \n{x^k - x^{k+1}} + \frac{\lambda_k\th_k}{L}\n{\nabla f(x^k)-\nabla f(x^{k-1})}^2 + \frac{1}{\lambda_k L}\n{x^{k+1}-x^k}^2 \\ & = \n{x^k -x^{k-1}} \n{x^k - x^{k+1}} + \frac{\lambda_k\th_k}{L}\n{\nabla f(x^k)-\nabla f(x^{k-1})}^2 + \frac{1}{\lambda_k L}\n{x^{k+1}-x^k}^2 \\ & \leq \frac 1 2 \n{x^{k}-x^{k-1}}^2 + \frac{1}{2}\n{x^{k+1}-x^k}^2 + \frac{\lambda_k\th_k}{L}\n{\nabla f(x^k)-\nabla f(x^{k-1})}^2 + \frac{1}{\lambda_k L}\n{x^{k+1}-x^k}^2. \end{aligned} \end{equation} Combining everything together, we obtain the statement of the theorem. \end{proof} \section{Stochastic analysis}\label{ap:stoch} \subsection{Different samples} Consider the following version of SGD, in which we draw two samples at each iteration, $\xi^k$ and $\zeta^k$, and compute \begin{align*} \lambda_k &= \min\left\{\sqrt{1+\th_{k-1}} \lambda_{k-1}, \frac{\alpha\|x^k - x^{k-1}\|}{\|\nabla f_{\zeta^k}(x^k) - \nabla f_{\zeta^k}(x^{k-1})\|} \right\}, \\ x^{k+1} &= x^k - \lambda_k \nabla f_{\xi^k}(x^k). 
\end{align*} As before, we assume that $\theta_0=+\infty$, so $\lambda_1 = \frac{\alpha\|x^1 - x^{0}\|}{\|\nabla f_{\zeta^1}(x^1) - \nabla f_{\zeta^1}(x^{0})\|}$. \begin{lemma} Let $f_\xi$ be $L$-smooth and $\mu$-strongly convex almost surely. Then the stepsizes $\lambda_k$ produced by the rule above satisfy \begin{align} \frac{\alpha}{L} \le \lambda_k \le \frac{\alpha}{\mu} \quad \text{a.s.} \label{eq:stochastic_la} \end{align} \end{lemma} \begin{proof} Let us start with the upper bound. Strong convexity of $f_{\zeta^k}$ implies that $\|x-y\|\le\frac{1}{\mu}\|\nabla f_{\zeta^k}(x) - \nabla f_{\zeta^k}(y)\|$ for any $x, y$. Therefore, $\lambda_k\le \min\left\{\sqrt{1+\th_{k-1}}\lambda_{k-1}, \alpha/\mu \right\}\le \alpha/\mu$ a.s. On the other hand, $L$-smoothness gives $\lambda_k\ge \min\left\{\sqrt{1+\th_{k-1}} \lambda_{k-1}, \alpha/L\right\}\ge \min\left\{ \lambda_{k-1}, \alpha/L\right\}$ a.s. Iterating this inequality, we obtain the stated lower bound. \end{proof} \begin{proposition}% Denote $\sigma^2\stackrel{\mathrm{def}}{=} \E{\|\nabla f_\xi (x^*)\|^2}$ and assume $f_\xi$ to be almost surely $L$-smooth and convex. Then it holds for any $x$ \begin{align} \E{\|\nabla f_\xi (x) \|^2} \le 4L (f(x) - f_*) + 2\sigma^2. \label{eq:sgd_variance} \end{align} \end{proposition} Another fact that we will use is the strong convexity bound, which states that for any $x,y$ \begin{align} \<\nabla f(x), x - y>\ge \frac{\mu}{2}\|x-y\|^2 + f(x) -f(y). \label{eq:grad_dist_bound} \end{align} \begin{theorem} Let $f_\xi$ be $L$-smooth and $\mu$-strongly convex almost surely. If we choose some $\alpha \le \frac{\mu}{2L}$, then \begin{align*} \E{\|x^k - x^*\|^2} \le \exp\left(-k\frac{\mu\alpha}{L}\right)C_0 + \alpha\frac{\sigma^2}{\mu^2}, \end{align*} where $C_0\stackrel{\mathrm{def}}{=} 2(1+2\lambda_0^2L^2)\|x^0-x^*\|^2 + 4\lambda_0^2\sigma^2$ and $\sigma^2\stackrel{\mathrm{def}}{=} \E{\|\nabla f_\xi(x^*)\|^2}$. 
\end{theorem} \begin{proof} Under our assumption $\alpha\le \frac{\mu}{2L}$, we have $\lambda_k\le \frac{\alpha}{\mu}\le \frac{1}{2L}$. Since $\xi^k$ is independent of $\lambda_k$ and $x^k$, we have $\E{\lambda_k \nabla f_{\xi^k}(x^k)} = \E{\lambda_k \nabla f(x^k)}$ and \begin{align*} \E{\|x^{k+1} - x^*\|^2} &= \E{\|x^k - x^*\|^2} - 2 \E{\lambda_k\<\nabla f(x^k), x^k - x^*>} + \E{\lambda_k^2}\E{\|\nabla f_{\xi^k}(x^k)\|^2} \\ &\overset{\eqref{eq:grad_dist_bound}}{\le} \E{(1-\lambda_k\mu)\|x^k - x^*\|^2} - 2\E{\lambda_k (f(x^k) - f_*)} + \E{\lambda_k^2}\E{\|\nabla f_{\xi^k}(x^k)\|^2} \\ &\overset{\eqref{eq:sgd_variance}}{\le} \E{(1-\lambda_k\mu)\|x^k - x^*\|^2} - 2 \mathbb{E}\Bigl[\lambda_k\underbrace{(1 - 2\lambda_k L)}_{\ge 0}(f(x^k) - f_*)\Bigr] + \E{\lambda_k^2}\sigma^2 \\ &\overset{\eqref{eq:stochastic_la}}{\le} \E{1-\lambda_k\mu}\E{\|x^k - x^*\|^2} + \alpha\frac{\E{\lambda_k}\sigma^2}{\mu}. \end{align*} Therefore, if we subtract $\alpha\frac{\sigma^2}{\mu^2}$ from both sides, we obtain \begin{align*} \E{\|x^{k+1} - x^*\|^2 - \alpha\frac{\sigma^2}{\mu^2}} \le \E{1 - \lambda_k\mu}\E{\|x^k - x^*\|^2 - \alpha\frac{\sigma^2}{\mu^2}}. \end{align*} If $\E{\|x^k-x^*\|^2}\le \alpha\frac{\sigma^2}{\mu^2}$ for some $k$, it follows that $\E{\|x^{t}-x^*\|^2}\le \alpha\frac{\sigma^2}{\mu^2}$ for any $t\ge k$. Otherwise, we can iterate the bound above to obtain \begin{align*} \E{\|x^{k+1} - x^*\|^2} \le \prod_{t=1}^k\E{1 - \lambda_t\mu}\|x^1 - x^*\|^2 + \alpha\frac{\sigma^2}{\mu^2}. \end{align*} By the inequality $1-x\le e^{-x}$, we have $\prod_{t=1}^k\E{1 - \lambda_t\mu}\le \exp\left(-\mu\sum_{t=1}^k\E{\lambda_t} \right)$. In addition, recall that in accordance with~\eqref{eq:stochastic_la} we have $\lambda_k \ge \frac{\alpha}{L} $. Thus, \begin{align*} \E{\|x^{k+1} - x^*\|^2} \le \exp\left(-k\alpha\frac{\mu}{L}\right)\E{\|x^1 - x^*\|^2} + \alpha\frac{\sigma^2}{\mu^2}. 
\end{align*} It remains to mention that \begin{align*} \E{\|x^1-x^*\|^2} \le 2\|x^0-x^*\|^2 + 2\lambda_0^2\E{\|\nabla f_{\xi^0}(x^0)\|^2} \overset{\eqref{eq:sgd_variance}}{\le} 2\|x^0-x^*\|^2 + 2\lambda_0^2\left(2L^2\|x^0-x^*\|^2 + 2\sigma^2\right). \end{align*} \end{proof} This gives the following corollary. \begin{corollary} Choose $\alpha = \gamma\frac{\mu}{2 L}$ with $\gamma\le 1$. Then, to achieve $\E{\|x^k - x^*\|^2}= \mathcal O(\varepsilon + \gamma\sigma^2)$ we need only $k=\mathcal O\left(\frac{L^2}{\gamma\mu^2}\log \frac{1+\lambda_0^2}{\varepsilon} \right)$ iterations. If we choose $\gamma$ proportional to $\varepsilon$, this implies $\mathcal O\left(\frac{1}{\varepsilon}\log \frac{1}{\varepsilon}\right)$ complexity. \end{corollary} \subsection{Same sample: overparameterized models} Assume additionally that the model is overparameterized, i.e.,\ $\nabla f_\xi(x^*)=0$ with probability one. In that case, we can prove that one can use the same stochastic sample to compute the stepsize and to move the iterate. The update becomes \begin{align*} \lambda_k &= \min\left\{\sqrt{1+\th_{k-1}} \lambda_{k-1}, \frac{\alpha\|x^k - x^{k-1}\|}{\|\nabla f_{\xi^k}(x^k) - \nabla f_{\xi^k}(x^{k-1})\|} \right\}, \\ x^{k+1} &= x^k - \lambda_k \nabla f_{\xi^k}(x^k). \end{align*} \begin{theorem} Let $f_\xi$ be $L$-smooth, $\mu$-strongly convex and satisfy $\nabla f_\xi(x^*)=0$ with probability one. If we choose $\alpha \le \frac{\mu}{L}$, then \begin{align*} \E{\|x^k - x^*\|^2} \le \exp\left(-k\alpha\frac{\mu}{L}\right)C_0, \end{align*} where $C_0\stackrel{\mathrm{def}}{=} 2(1+\lambda_0^2L^2)\|x^0-x^*\|^2$. \end{theorem} \begin{proof} Now $\lambda_k$ depends on $\xi^k$, so we do not have an unbiased update anymore. However, under the new assumption, $\nabla f_{\xi^k}(x^*)=0$, so we can write \begin{align*} \<\nabla f_{\xi^k}(x^k), x^k-x^*> \overset{\eqref{eq:grad_dist_bound}}{\ge} \frac{\mu}{2}\|x^k -x^*\|^2 + f_{\xi^k}(x^k) - f_{\xi^k}(x^*). 
\end{align*} In addition, $L$-smoothness and convexity of $f_{\xi^k}$ give \begin{align*} \|\nabla f_{\xi^k}(x^k)\|^2 \le 2L (f_{\xi^k}(x^k) - f_{\xi^k}(x^*)). \end{align*} Since our choice of $\alpha$ implies $\lambda_k\le \frac{1}{L}$, we conclude that \begin{align*} \|x^{k+1}-x^*\|^2 &= \|x^k - x^*\|^2 - 2\lambda_k\<\nabla f_{\xi^k}(x^k), x^k- x^*> + \lambda_k^2\|\nabla f_{\xi^k}(x^k)\|^2 \\ &\le (1 - \lambda_k\mu)\|x^k - x^*\|^2 - 2\lambda_k(1 - \lambda_k L)(f_{\xi^k}(x^k) - f_{\xi^k}(x^*)) \\ &\le (1 - \lambda_k\mu)\|x^k - x^*\|^2. \end{align*} Furthermore, as $\|\nabla f_{\xi^0}(x^0)\|=\|\nabla f_{\xi^0}(x^0)- \nabla f_{\xi^0}(x^*)\|\le L\|x^0-x^*\|$, we also get a better bound on $\E{\|x^1-x^*\|^2}$, namely \begin{align*} \E{\|x^1-x^*\|^2}\le 2\|x^0-x^*\|^2 + 2\lambda_0^2\E{\|\nabla f_{\xi^0}(x^0)\|^2} \le 2(1+\lambda_0^2L^2)\|x^0-x^*\|^2. \end{align*} \end{proof} \section{Experiments details}\label{ap:exp_details} Here we provide some omitted details of the experiments with neural networks. We took the implementation of neural networks from a publicly available repository\footnote{\href{https://github.com/kuangliu/pytorch-cifar/blob/master/models/resnet.py}{https://github.com/kuangliu/pytorch-cifar/blob/master/models/resnet.py}}. All methods were run with standard data augmentation and no weight decay. The confidence intervals for ResNet-18 are obtained from 5 different random seeds and for DenseNet-121 from 3 seeds. In our ResNet-18 experiments, we used the default parameters for Adam. For SGD, the stepsize was divided by 10 at epochs 120 and 160, where the loss plateaus. A logarithmic grid search with a factor of 2 was used to tune the initial stepsize of SGD, and the best initial value was 0.2. Tuning was done by running SGD 3 times and comparing the average of test accuracies over the runs at epoch 200. For the momentum version (SGDm), we used the standard values of momentum and initial stepsize for training residual networks, 0.9 and 0.1 respectively. 
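For concreteness, the adaptive stepsize rule analyzed in the previous sections can be sketched in a few lines of NumPy on a toy quadratic. This is our own illustration, not the training code used in the experiments: the function name `adaptive_gd` is ours, and we use the deterministic variant with the curvature-based step $\min\{\sqrt{1+\theta_{k-1}}\,\lambda_{k-1}, \frac{1}{2L_k}\}$.

```python
import numpy as np

def adaptive_gd(grad, x0, lam0=1e-6, iters=2000):
    # Sketch of the adaptive rule:
    #   L_k   = ||g_k - g_{k-1}|| / ||x_k - x_{k-1}||   (local curvature estimate)
    #   lam_k = min( sqrt(1 + th_{k-1}) * lam_{k-1},  1 / (2 * L_k) )
    # with th_k = lam_k / lam_{k-1} and th_0 = +inf.
    x_prev, g_prev = x0, grad(x0)
    x = x_prev - lam0 * g_prev
    lam_prev, th_prev = lam0, np.inf
    for _ in range(iters):
        g = grad(x)
        dx = np.linalg.norm(x - x_prev)
        dg = np.linalg.norm(g - g_prev)
        if dx == 0 or dg == 0:  # converged to machine precision
            break
        lam = min(np.sqrt(1 + th_prev) * lam_prev, dx / (2 * dg))
        x_prev, g_prev = x, g
        x = x - lam * g
        th_prev, lam_prev = lam / lam_prev, lam
    return x

# Toy problem: f(x) = 0.5 * x^T A x with condition number 10; minimizer is 0.
A = np.diag([1.0, 10.0])
x = adaptive_gd(lambda x: A @ x, np.array([1.0, 1.0]))
```

Note that no knowledge of the global constant $L$ is required; the stepsize adapts to the local curvature along the trajectory.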
We used the same parameters for DenseNet-121 without extra tuning. For our method, we used the variant of SGD $x^{k+1}=x^k - \lambda_k \nabla f_{\xi^k}(x^k)$ with $\lambda_k$ computed using $\xi^k$ as well (the biased option). We did not test stepsize rules with coefficients other than $\frac{1}{L_k}$ and $\frac{1}{2L_k}$, so it is possible that other options will perform better. Moreover, the coefficient in front of $\th_{k-1}$ might be suboptimal too. \end{document}
https://arxiv.org/abs/1910.09529
Adaptive Gradient Descent without Descent
We present a strikingly simple proof that two rules are sufficient to automate gradient descent: 1) don't increase the stepsize too fast and 2) don't overstep the local curvature. No need for functional values, no line search, no information about the function except for the gradients. By following these rules, you get a method adaptive to the local geometry, with convergence guarantees depending only on the smoothness in a neighborhood of a solution. Given that the problem is convex, our method converges even if the global smoothness constant is infinity. As an illustration, it can minimize arbitrary continuously twice-differentiable convex function. We examine its performance on a range of convex and nonconvex problems, including logistic regression and matrix factorization.
https://arxiv.org/abs/2109.09773
A note on fully commutative elements in complex reflection groups
Fully commutative elements in types $B$ and $D$ are completely characterized and counted by Stembridge. Recently, Feinberg-Kim-Lee-Oh have extended the study of fully commutative elements from Coxeter groups to the complex setting, giving an enumeration of such elements in $G(m,1,n)$. In this note, we prove a connection between fully commutative elements in $B_n$ and in $G(m,1,n)$, which allows us to characterize fully commutative elements in $G(m,1,n )$ by pattern avoidance. Further, we present a counting formula for such elements in $G(m,1,n)$.
\section{Introduction} \label{sec:introduction} Let $G$ be the group generated by a set of elements $\{s_1, s_2, \ldots\}$. Assume $g\in G$ has a reduced expression $g=s_{i_1}s_{i_2}\cdots s_{i_{\ell(g)}}$, where the length $\ell(g)$ is the minimum number of generators needed. If any other reduced expression of $g$ can be obtained from $s_{i_1}s_{i_2}\cdots s_{i_{\ell(g)}}$ by interchanges of adjacent commuting generators, i.e., commutation relations, then $g$ is \emph{fully commutative}. Consider the symmetric group $\Symm_n$ generated by simple transpositions. Its fully commutative elements are exactly the permutations that avoid the pattern $321$ \cite{BJS}, and the number of such permutations is the $n$-th Catalan number. When $G$ is a Weyl group, fully commutative elements can be characterized by root systems \cite{BP05} and Lusztig's $a$-function \cite{BF98, Shi05b}. When $G$ is a simply laced Coxeter group, fully commutative elements can also be described by root systems \cite{FS97}. When $G$ is the Coxeter group $B_n$ or $D_n$, fully commutative elements can be viewed as linear extensions of heaps \cite{S96} and can then be characterized by pattern avoidance \cite{S97}. We will discuss Stembridge's work on pattern avoidance more extensively in Section \ref{sec:background}. Recently, \cite{FKLO} generalized the study of fully commutative elements from Coxeter groups to the complex setting, proving a counting formula for these elements that agrees with \cite{S96} in $B_n$. The purpose of this note is to further explore the connection between fully commutative elements in $B_n$ and in $G(m,1,n)$. We will show that an element $g$ in $G(m,1,n)$ is fully commutative if and only if the resulting element, after every nontrivial $m$-th root of unity in $g$ is replaced with $-1$, is fully commutative in $B_n$. This implies that the pattern avoidance characterization of fully commutative elements in $B_n$ extends to $G(m,1,n)$. 
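The symmetric-group fact quoted above is easy to verify by brute force for small $n$. The following Python snippet is our own illustration (not part of the original note); it checks that $321$-avoiding permutations, written in one-line notation, are counted by the Catalan numbers:

```python
from itertools import permutations, combinations
from math import comb

def avoids_321(w):
    # w contains a 321 pattern iff some positions i < j < k have w[i] > w[j] > w[k]
    return not any(w[i] > w[j] > w[k]
                   for i, j, k in combinations(range(len(w)), 3))

def catalan(n):
    return comb(2 * n, n) // (n + 1)

for n in range(1, 7):
    count = sum(avoids_321(w) for w in permutations(range(1, n + 1)))
    assert count == catalan(n)
```

Here `avoids_321` tests the defining pattern condition directly on one-line notation, and the loop confirms the Catalan count up to $n=6$.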
We will further show that to count fully commutative elements in $G(m,1,n)$, it suffices to count fully commutative elements in $B_n$ by the number of $-1$'s in their matrix forms. The remainder of this note is as follows: in Section \ref{sec:background}, we give relevant background on Coxeter groups and complex reflection groups, and introduce Stembridge's work in $B_n$ and the result by Feinberg-Kim-Lee-Oh in $G(m,1,n)$. In Section \ref{sec:main}, we prove our main theorem, on the connection between fully commutative elements in $B_n$ and in $G(m,1,n)$. Then in Section \ref{sec:open}, we propose a few open questions concerning $G(m,m,n)$. \subsection*{Acknowledgements} The author thanks Kyu-Hwan Lee for giving a talk in Sage Days FPSAC 2019 that introduced the idea and is deeply grateful to Joel Brewster Lewis for many insightful discussions and useful comments, and Alejandro Morales and Theo Douvropoulos for helpful conversations. \section{Background} \label{sec:background} \subsection{Coxeter groups} For a thorough treatment of Coxeter groups, see \cite{BB,JH}. Let $W$ be a group with a set of generators $S\subseteq W$, subject only to relations of the form \[ (ss')^{m(s,s')} =1,\] where $m(s,s)=1$, $m(s,s')=m(s',s)\geq 2$ for $s\neq s'$ in $S$, with the convention that $m(s, s')=\infty$ when no relation occurs for a pair $s, s'$. Then $W$ is a \emph{Coxeter group}. One key example is the symmetric group $\Symm_n$, realized as the group of permutations of $\{1, 2, \ldots, n\}$. Another is the hyperoctahedral group $B_n$ of all \emph{signed permutations} of the set $\{1, 2, \ldots, n\}$, that is, permutations $w$ of the set $\{-n, \ldots, -1, 1, \ldots, n\}$ satisfying $w(-i)=-w(i)$. Every element of $B_n$ can be written as a \emph{monomial matrix} (a matrix with exactly one nonzero entry in each row and each column) whose nonzero entries are either $1$ or $-1$. The group $B_n$ has a subgroup $D_n$, whose elements are the monomial matrices with an even number of $-1$'s. 
Thus, $D_n$ is called the group of \emph{even-signed permutations}; it is also a Coxeter group, as is the dihedral group $I_2(m)$. One may notice that these examples are all \emph{finite real reflection groups}: finite groups generated by \emph{real reflections}, which are linear transformations that fix a hyperplane in a (real) Euclidean space. In fact, the finite real reflection groups are exactly the finite Coxeter groups. \subsection{Complex reflection groups} For a general reference on complex reflection groups, see \cite{LehrerTaylor}. Given a finite-dimensional complex vector space $V$, a \emph{reflection} is a linear transformation $t: V \rightarrow V$ whose fixed space $\operatorname{ker}(t-1)$ is a hyperplane (has codimension $1$), and a finite subgroup $G$ of $GL(V)$ is called a \emph{complex reflection group} if $G$ is generated by its subset of reflections. Complex reflection groups were classified by Shephard and Todd~\cite{ST54}: every complex reflection group is a direct product of irreducibles, and every irreducible is isomorphic either to a group of the form \[ G(m, p, n) \overset{\text{def}}{=\hspace{-4pt}=} \left\{\begin{array}{l} n \times n \text{ monomial matrices whose nonzero entries are}\\m\text{th} \text{ roots of unity with product a } \frac{m}{p}\text{th} \text{ root of unity}\end{array} \right\} \] for positive integers $m$, $p$, $n$ with $p \mid m$, or to one of $34$ exceptional examples. For every $m$, $p$, $n$, there is a projection map \[ f: G(m,p,n) \twoheadrightarrow G(1,1,n) = \Symm_n\] where $f(g)$ is the result of replacing every root of unity in the matrix of $g$ with $1$. The resulting permutation $f(g)$ is the \emph{underlying permutation} of $g$. We may use the shorthand $[f(g); (a_1, \ldots, a_n)]$ for an element $g\in G(m,p, n)$, where $f(g)$ is the underlying permutation of $g$ and $a_i$ is the exponent of the nonzero entry in the $i$-th column. 
We call $a_i$ the \emph{weight} of the entry and $a_1 +\ldots +a_n \pmod{m}$ the \emph{weight} of the element. For example, in $G(30, 1, 6)$, we have \begin{equation*} g=\left[\begin{smallmatrix} &\omega^{17} &&&&\\ 1&&&&&\\ &&&&\omega^{2}&\\ &&\omega^{2}&&&\\ &&&\omega^{3}&&\\ &&&&&\omega^{6} \end{smallmatrix}\right] = [214536 ; (0, 17, 2, 3, 2, 6)] = [(12)(345)(6) ; (0, 17, 2, 3, 2, 6)], \end{equation*} where $\omega=\exp(\frac{2\pi i}{m})$ denotes a fixed primitive $m$th root of unity. Let $s_1, s_2, \ldots, s_{n-1}$ be the simple transpositions, i.e., $s_j = [(j\, j+1); (0,\ldots, 0)]$, and let $s_0 = [\mathrm{id}; (1, 0, \ldots, 0)]$ be a diagonal reflection. When $p =1$, the group $G(m, 1, n)$ can be generated by the reflections $s_0, s_1, \ldots, s_{n-1}$. The simple transpositions $s_j$ have order $2$ and the diagonal reflection $s_0$ has order $m$. One recovers the infinite families of real reflection groups as the following special cases: the group $G(1, 1, n)$ is the symmetric group $\Symm_n$; $G(2, 1, n)$ is the signed permutation group $B_n$; $G(2, 2, n)$ is the even-signed permutation group $D_n$; and $G(m, m, 2)$ is the dihedral group $I_2(m)$. \subsection{Fully commutative elements in Coxeter group $B_n$} Let $s_1, s_2, \ldots, s_{n-1}$ be the simple transpositions, i.e., $s_j = [(j\, j+1); (0,\ldots, 0)]$, and let $s_0 = [\mathrm{id}; (1, 0, \ldots, 0)]$ be a diagonal reflection. 
Then the Coxeter group $B_n$ can be generated by $s_0, s_1, \ldots, s_{n-1}$ with defining relations: \begin{center} \begin{tabular}{rll} $(s_0s_1)^4= s_i^2$ & $=1$ & for $0\leq i\leq n-1$,\\ $s_i s_j$ & $= s_j s_i $ & for $i+1 < j\leq n-1$, \\ $s_{i+1}s_i s_{i+1}$ & $ = s_i s_{i+1}s_i$ & for $1\leq i \leq n-2$.\\ \end{tabular} \end{center} The shortest left coset representatives for $B_n/B_{n-1}$ are \[\{1,\quad s_{n-1}, \quad s_{n-2}s_{n-1}, \quad \ldots,\quad s_0 s_1\cdots s_{n-1}, \quad s_1s_0s_1\cdots s_{n-1}, \quad \ldots, \quad s_{n-1}\ldots s_1s_0 s_1\ldots s_{n-1}\}\] where shortest means of minimal length. For $0\leq i\leq j$, denote $[i,j]=s_i\cdot s_{i+1}\cdots s_j$ and $[-i, j]=s_i\cdot s_{i-1}\cdots s_1\cdot s_0\cdot s_1\cdots s_j$. Then we can rewrite the coset representatives as \[\{1,\quad [n-1,n-1], \quad [n-2,n-1], \quad \ldots,\quad [0,n-1], \quad [-1, n-1], \quad \ldots, \quad [-(n-1),n-1]\}.\] Using these coset representatives, Stembridge showed in \cite{S97} that every element in $B_n$ has a canonical reduced word \[ [m_1, n_1]\cdot [m_2, n_2]\cdots [m_r, n_r], \] where $n>n_1>\cdots>n_r\geq 0$ and $|m_i|\leq n_i$. Further, he proved several equivalent statements regarding full commutativity in $B_n$. In particular, he characterized fully commutative elements by pattern avoidance. Let $g\in B_n$. If $g$ avoids the pattern $(-1,-2)$, it means that $g$ does not contain $\left[\begin{smallmatrix} -1&\\ &-1\end{smallmatrix}\right]$ as a submatrix. Similarly, if $g$ avoids the pattern $(2, 1, -3)$, it means that $g$ does not contain $\left[\begin{smallmatrix} &1&\\ 1&&\\ &&-1\end{smallmatrix}\right]$ as a submatrix. \begin{theorem}[{\cite[Cor.~5.6]{S97}}]\label{thrm: type B} For $g\in B_{n}$, the following are equivalent. \begin{enumerate} \item $g$ is fully commutative. 
\item In the canonical reduced word $[m_1, n_1]\cdots [m_r, n_r]$ for $g$, we have either \begin{enumerate} \item $m_1>\cdots>m_s >m_{s+1}=\cdots=m_r = 0$ for some $s\leq r$, or \item $m_1> \cdots > m_{r-1}> -m_r >0$. \end{enumerate} \item $g$ avoids the pattern $(-1, -2)$ and all patterns $(a, b,c)$ such that $|a|>b>c$ or $-b>|a|>c$. \end{enumerate} \end{theorem} For a complete list of patterns in case $(3)$ in Theorem \ref{thrm: type B}, see Table \ref{patterns}. \begin{example} Consider $B_4$ with generating set $\{s_0, s_1, s_2, s_3\}$ where $s_0 = [\mathrm{id}; (1, 0, 0, 0)]$, $s_1 = [(12);(0,\ldots, 0)]$, $s_2 = [(23); (0, \ldots, 0)]$, and $s_3 = [(34);(0, \ldots, 0)]$. Let $g_1 = \left[\begin{smallmatrix} 1&&&\\ &&&1\\&&-1&\\ &1&&\end{smallmatrix}\right]=[(24);(0,0,1,0)]=[2,3]\cdot [-1, 2]=(s_2 s_3) \cdot ( s_1 s_0 s_1 s_2)$. Here are all of its reduced expressions: \[s_2 {\color{red} s_3} s_1 s_0 s_1 s_2, \,\quad s_2 s_1 {\color{red} s_3}s_0 s_1 s_2,\, \quad s_2 s_1 s_0 {\color{red} s_3}s_1s_2,\,\quad \text{ and } \quad s_2 s_1 s_0 s_1 {\color{red} s_3} s_2.\] Observe that any reduced expression of $g_1$ can be obtained from any other via commutation relations, so $g_1$ is fully commutative. Also, its canonical reduced word is of case $(b)$ in Theorem \ref{thrm: type B}. The element $g_2 =[(1342);(0,0,1,1)]=[-2,3]\cdot [1,2]\cdot [-1,1]=(s_2 s_1 s_0 s_1 s_2 s_3)\cdot (s_1 s_2) \cdot (s_1 s_0 s_1)$ is not fully commutative, since its canonical reduced word is of neither case $(a)$ nor case $(b)$. Observe that it has a reduced expression $s_2 s_1 s_0 s_1 s_2 s_3 {\color{red} s_2 s_1 s_2} s_0 s_1$ that is not equivalent to the first one via commutation relations. Also, $g_2 =\left[\begin{smallmatrix} &{\color{blue}1}&&\\ &&&-1\\ {\color{blue}1}&&&\\ &&{\color{blue}-1}&\end{smallmatrix}\right]$ contains the pattern $(2,1,-3)$, which is one of the patterns $(a,b,c)$ with $|a|>b>c$ that fully commutative elements in $B_n$ must avoid. 
\end{example} Interpreting canonical reduced words as certain plane partitions, Stembridge gave the following counting formula for fully commutative elements in $B_n$. \begin{prop}[{\cite[Prop.~5.9]{S97}}]\label{prop: type B} In $B_n$, there are $(n+2)C_{n} -1$ fully commutative elements, where $C_n = \frac{1}{n+1} {2n \choose n}$ is the $n$th Catalan number. \end{prop} \subsection{Fully commutative elements in $G(m,1,n)$} Let $s_1, s_2, \ldots, s_{n-1}$ be the simple transpositions, i.e., $s_j = [(j\, j+1); (0,\ldots, 0)]$, and let $s_0 = [\mathrm{id}; (1, 0, \ldots, 0)]$ be a diagonal reflection. The group $G(m,1,n)$ can be generated by $s_0, s_1, s_2, \ldots, s_{n-1}$ with defining relations: \begin{center} \begin{tabular}{rll} $s_0^m = s_i^2$ & $=1$ & for $1\leq i\leq n-1$,\\ $s_i s_j$ & $= s_j s_i $ & for $i+1 < j\leq n-1$, \\ $s_{i+1}s_i s_{i+1}$ & $ = s_i s_{i+1}s_i$ & for $1\leq i \leq n-2$,\\ $s_1 s_0 s_1 s_0 $ & $= s_0 s_1 s_0 s_1$.& \end{tabular} \end{center} \begin{example} Consider $G(3,1,3)$ with generating set $\{s_0, s_1, s_2\}$ where $s_0 =[\mathrm{id}; (1,0,0)]$, $s_1=[(12);(0,0,0)]$ and $s_2=[(23);(0,0,0)]$. The element $[\mathrm{id}; (2,1,0)]$ is not fully commutative: it has the two reduced expressions $s_1 s_0 s_1 s_0^2 $ and $s_0^2 s_1 s_0 s_1$, neither of which contains adjacent commuting generators, so one cannot be obtained from the other through commutation relations. The element $[(13); (1,1,1)]$ is fully commutative, with reduced expressions $s_0 s_1 {\color{red} s_0 s_2 }s_1 s_0=s_0 s_1 {\color{red} s_2 s_0 }s_1 s_0$. \end{example} By focusing on certain prefixes and suffixes of reduced expressions, \cite{FKLO}\footnote{When comparing the present work with \cite{FKLO}, the reader should note that we follow Stembridge's choice of generators, which are different from the ones used in \cite{FKLO}.} showed the following counting formula, which agrees with Proposition~\ref{prop: type B} when $m=2$. 
\begin{theorem}[{\cite[Cor.~4.12]{FKLO}}]\label{thrm: FKLO} For $n\geq 3$, the number of fully commutative elements in $G(m,1,n)$ is equal to \[ m(m-1)\sum\limits_{s=0}^{n-2} \frac{(n+s)! (n-s+1)}{s!(n+1)!} m^{n-2-s} + (2m-1)C_n -(m-1). \] \end{theorem} In his Ph.D. thesis, Mak presented certain coset representatives, similar to those Stembridge used in $B_n$. \begin{proposition}[{\cite[Prop.~2.2.6]{Mak}}]\label{prop: Mak} The shortest left coset representatives for $G(m,1,n)/G(m,1,n-1)$ are \begin{multline*} \{s_0^{\epsilon} s_1\cdots s_{n-1}, \quad s_1s_0^{\epsilon}s_1\cdots s_{n-1}, \quad \ldots, \quad s_{n-1}\ldots s_1s_0^{\epsilon} s_1\ldots s_{n-1}\, | \, 1\leq \epsilon\leq m-1\} \\ \cup\, \{ 1, \quad s_{n-1},\quad s_{n-2}s_{n-1}, \quad \ldots,\quad s_{n-1}\cdots s_1\}.\qquad \qquad\qquad\qquad\end{multline*} \end{proposition} For $0< i\leq j$, $k\geq 0$, and $\epsilon\in \{1, 2, \ldots, m-1\}$, let $[i^\epsilon,j]=[i,j]=s_i\cdot s_{i+1}\cdots s_j$, $[(- i)^{\epsilon}, j]=s_i\cdot s_{i-1}\cdots s_1\cdot s_0^{\epsilon}\cdot s_1\cdot s_2\cdots s_j$ and $[0^{\epsilon}, k]=s_{0}^{\epsilon} s_1 \cdots s_k$. It follows from Proposition \ref{prop: Mak} that every element of $G(m,1,n)$ has a canonical reduced word. \begin{definition}\label{canonical word} In $G(m,1,n)$, every element has a canonical reduced word \[[m_1^{a_1}, n_1]\cdot [m_2^{a_2}, n_2]\cdots [m_r^{a_r}, n_r],\] where $n>n_1>\cdots >n_r\geq 0$, $|m_i|\leq n_i$ and $1\leq a_i\leq m-1$. \end{definition} \begin{example}\label{example} Consider $G(7,1,6)$, and let $g=\left[\begin{smallmatrix} &\omega^2&&&&\\ &&1&&&\\ \omega&&&&&\\ &&&\omega^4&&\\ &&&&&\omega^6\\ &&&&\omega^5&\end{smallmatrix}\right]=[(132)(4)(56);(1, 2,0,4,5, 6)].$ Then $g$ has a canonical reduced word \[ [-4^6, 5]\cdot [-4^5, 4]\cdot [-3^4,3]\cdot [2^1,2]\cdot [0^2,1]\cdot [0^1, 0]\] with a reduced expression\footnote{Here canonical reduced word and reduced expression refer to the same expression. 
We call the one with bracket notation \emph{canonical reduced word} and the one with generators multiplied together \emph{reduced expression}.} \[ s_4 s_3 s_2 s_1 s_0^6 s_1 s_2 s_3 s_4 s_5 \cdot s_4 s_3 s_2 s_1 s_0^5 s_1 s_2 s_3 s_4 \cdot s_3 s_2 s_1 s_0^4 s_1 s_2 s_3 \cdot s_2 \cdot s_0^2 s_1 \cdot s_0^1. \] Observe that the exponent $a_i$ in every block $[m_i^{a_i}, n_i]$ where $m_i \leq 0$ corresponds to the weight of a nontrivial entry in $g$. \end{example} \section{The main theorem} \label{sec:main} Before we state our main theorem, we need a few definitions. \begin{definition} Let $g\in G(m,1,n)$. We say $g$ has a \emph{nontrivial} entry if that entry is neither $0$ nor $1$. \end{definition} \begin{definition} For positive integers $m$ and $n$, define a map \[ \pi: G(m, 1, n) \rightarrow G(2, 1, n) = B_n \] where $\pi(g)$ is the result of replacing every nontrivial entry in the matrix of $g$ with $-1$. \end{definition} \begin{example} Continuing with Example \ref{example}, the image $\pi(g)=\left[\begin{smallmatrix} &-1&&&&\\ &&1&&&\\ -1&&&&&\\ &&&-1&&\\ &&&&&-1\\ &&&&-1&\end{smallmatrix}\right]=[(132)(56);(1, 1,0,1,1, 1)]\in G(2,1,6)$ has a canonical reduced word \[ [-4, 5]\cdot [-4, 4]\cdot [-3,3]\cdot [2,2]\cdot [0,1]\cdot [0, 0]\] with a reduced expression \[ s_4 s_3 s_2 s_1 s_0 s_1 s_2 s_3 s_4 s_5 \cdot s_4 s_3 s_2 s_1 s_0 s_1 s_2 s_3 s_4 \cdot s_3 s_2 s_1 s_0 s_1 s_2 s_3 \cdot s_2 \cdot s_0 s_1 \cdot s_0. \] \end{example} \begin{theorem}\label{main theorem} Let $g\in G(m,1,n)$. Then $g$ is fully commutative if and only if $\pi(g)$ is fully commutative in $G(2,1,n)$. \end{theorem} To prove Theorem \ref{main theorem}, we need a few propositions. The first proposition describes how the map $\pi$ acts on canonical reduced words. In general, one does not have $\pi(g_1 g_2) = \pi(g_1)\pi(g_2)$ for an arbitrary pair $g_1, g_2 \in G(m,1,n)$, so $\pi$ is not a group homomorphism from $G(m,1,n)$ to $G(2,1,n)$. But it does behave nicely with respect to canonical reduced words. 
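In the shorthand notation $[f(g); (a_1,\ldots,a_n)]$, the map $\pi$ simply keeps the underlying permutation and replaces every nonzero weight (mod $m$) by $1$. A minimal Python sketch, our own illustration with a hypothetical function name `pi`:

```python
def pi(perm, weights, m):
    """Project G(m,1,n) onto B_n = G(2,1,n): keep the underlying
    permutation and send every nontrivial entry (weight not 0 mod m)
    to -1, i.e. to weight 1."""
    return perm, tuple(1 if a % m != 0 else 0 for a in weights)

# The element g = [(132)(4)(56); (1,2,0,4,5,6)] of G(7,1,6) from the
# example above, with its underlying permutation in one-line notation.
perm, weights = (3, 1, 2, 4, 6, 5), (1, 2, 0, 4, 5, 6)
print(pi(perm, weights, 7))   # ((3, 1, 2, 4, 6, 5), (1, 1, 0, 1, 1, 1))
```

The printed weight vector $(1,1,0,1,1,1)$ matches the element $\pi(g)$ computed in the example.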
\begin{proposition}\label{prop: obs} Let $g\in G(m,1,n)$ have a canonical reduced word $[m_1^{a_1}, n_1]\cdot [m_2^{a_2}, n_2]\cdots [m_r^{a_r}, n_r]$. Then $\pi(g) \in G(2,1,n)$ has a canonical reduced word \[\begin{array}{rl} \pi([m_1^{a_1}, n_1]\cdot [m_2^{a_2}, n_2]\cdots [m_r^{a_r}, n_r])&=\pi([m_1^{a_1}, n_1])\cdot \pi([m_2^{a_2}, n_2])\cdots \pi([m_r^{a_r}, n_r])\\ & = [m_1, n_1]\cdot [m_2, n_2]\cdots [m_r, n_r]. \end{array}\] \end{proposition} \begin{proof} If all the $m_i$'s in the canonical reduced word $[m_1^{a_1}, n_1]\cdot [m_2^{a_2}, n_2]\cdots [m_r^{a_r}, n_r]$ of $g$ are positive, then $\pi(g)$ and $g$ have identical canonical reduced words, and we are done. As in Example \ref{example}, if the $m_i$ of the $i$-th block $[m_i^{a_i}, n_i]$ from the left in the canonical reduced word is non-positive, then its reduced expression contains the factor $s_0^{a_i}$, which corresponds to a nontrivial entry of weight $a_i$ in $g$ (the non-positive blocks, read from the left, match the nontrivial entries of $g$ read columnwise from the right). Then its image $\pi(g)$ has a $-1$ at the same location, so the canonical reduced word of $\pi(g)$ has a block $[m_i, n_i]$ whose reduced expression contains the factor $s_0$, as desired. \end{proof} The next proposition spells out the full commutativity condition on canonical reduced words in $G(m,1,n)$. \begin{prop}\label{main prop} Let $g\in G(m,1,n)$. Then $g$ is fully commutative if and only if its canonical reduced word $[m_1^{a_1}, n_1]\cdot [m_2^{a_2}, n_2]\cdots [m_r^{a_r}, n_r]$ has either \begin{enumerate}[(a)] \item $m_1 >\cdots >m_{r-1}>-m_r>0$, or \item for some $s\leq r$, $m_1>\cdots>m_s>m_{s+1} = \cdots = m_r = 0$. \end{enumerate} \end{prop} \begin{proof} $(\Rightarrow)$ Assume that $[m_1^{a_1}, n_1]\cdot [m_2^{a_2}, n_2]\cdots [m_r^{a_r}, n_r]$ is the canonical reduced word of some fully commutative element $g\in G(m,1,n)$. By Definition \ref{canonical word}, $n>n_1 >\cdots >n_r\geq 0$ and $|m_i|\leq n_i$. \begin{enumerate} \item Case $(a)$: Let $a,b \in \{1, \ldots, m-1\}$. 
For $i>0$, $[-1^a,i]\cdot {\color{red} s_0^b} = s_1\cdot s_0^a\cdot s_1\cdots s_i\cdot{\color{red} s_0^b}= s_1\cdot s_0^a\cdot s_1\cdot {\color{red} s_0^b}\cdot [2,i]$. Note that $s_1s_0^a s_1 s_0^b=s_0^b s_1 s_0^a s_1$, but these two expressions are not equivalent via commutation relations. So a fully commutative element cannot have a reduced expression that contains $s_1s_0^a s_1 s_0^b$ or $s_0^b s_1 s_0^a s_1$. For $i>j>0$, $[-1^a,i]{\color{red} s_j}=s_1\cdot s_0^a\cdot s_1\cdots s_i\cdot {\color{red} s_j}=s_1\cdot s_0^a\cdot s_1\cdots s_{j-1}\cdot s_j\cdot s_{j+1}\cdot {\color{red} s_j}\cdot s_{j+2}\cdots s_i$. Note that here $s_j s_{j+1} s_j = s_{j+1} s_j s_{j+1}$. Since one cannot be obtained from the other via commutation relations, a fully commutative element cannot have a reduced expression that contains $s_j s_{j+1} s_j$ or $s_{j+1} s_j s_{j+1}$. Since $|m_{i+1}|\leq n_{i+1}< n_{i}$ by Definition \ref{canonical word}, $[-1^a, i]s_0^b$ or $[-1^a, i]s_j$ occurs in $[m_i^{a_i}, n_i][m_{i+1}^{a_{i+1}}, n_{i+1}]$ when $m_i<0$. Then we must have $m_1, \ldots, m_{r-1}\geq 0$. \item Case $(b)$: For $j>k\geq i\geq 0$, $[i^a,j]{\color{red} s_k}=[i^a, k-1]s_k s_{k+1} {\color{red} s_k}[k+2,j]$. A fully commutative element cannot have a reduced expression that contains $[i^a, j]s_k$, since $s_k s_{k+1} s_k = s_{k+1} s_k s_{k+1}$ when $k>0$. In $[m_i^{a_i}, n_i][m_{i+1}^{a_{i+1}}, n_{i+1}]$, $[i^a, j]s_k$ occurs when $|m_{i+1}|\geq |m_i|$. To avoid having $s_k s_{k+1} s_k = s_{k+1} s_k s_{k+1}$ $(k>0)$, we need $|m_{i+1}|<|m_{i}|$ or $m_i = m_{i+1}=0$ for $1\leq i <r$. \end{enumerate} Thus, for a fully commutative element $g\in G(m,1,n)$, the canonical reduced word is either of case $(a)$ or of case $(b)$. $(\Leftarrow)$ We prove the contrapositive statement: if $g\in G(m,1,n)$ is not fully commutative, then its canonical reduced word fits neither case $(a)$ nor case $(b)$. Suppose $g\in G(m,1,n)$ is not fully commutative. 
Then there is a reduced expression for $g$ containing at least one of the terms \[s_1 s_0^{a} s_1 s_0^{a'}, \quad s_0^{a}s_1s_0^{a'}s_1, \quad s_{i+1}s_i s_{i+1}, \, \text{ and } \,s_i s_{i+1} s_i \quad (i\geq 1, \text{ and } a,a'\in \{1, \ldots, m-1\})\] that is equivalent, via commutation relations, to the reduced expression associated with the canonical reduced word of $g$. We now show that every reduced expression containing such a term leads to a canonical reduced word that is neither case $(a)$ nor $(b)$. We discuss them by the following cases. \begin{enumerate} \item $s_{i} s_{i+1} s_{i}$: These three factors cannot be in the same block due to the structure of an individual block: the indices of the factors are either strictly increasing or strictly decreasing and then strictly increasing. The three factors also cannot lie in three distinct blocks without breaking the rule $n>n_1>\cdots>n_r\geq 0$ and $|m_i|\leq n_i$ by Definition \ref{canonical word}. Then assume these factors are in two adjacent blocks, which leads to two cases. \begin{enumerate} \item Assume $s_i s_{i+1}$ is in block $[m_1^{a_1}, n_1]$ and $s_i$ is in block $[m_2^{a_2}, n_2]$. In order to move $s_i$ in block $[m_2^{a_2}, n_2]$ next to $s_i s_{i+1}$ in block $[m_1^{a_1}, n_1]$ using only commutation relations, there should be no factor to the left of $s_i$ in block $[m_2^{a_2}, n_2]$. This means that $s_i$ is the first factor in block $[m_2^{a_2}, n_2]$. Then $m_2$ must be $i$ or $-i$ by Definition \ref{canonical word}. Similarly, the factors to the right of $s_i s_{i+1}$ in block $[m_1^{a_1}, n_1]$, if any exist, must have indices larger than $i+1$. This means that $n_1 \geq i+1$. Since $s_i s_{i+1}$ is in block $[m_1^{a_1}, n_1]$, the indices of factors to the left of $s_is_{i+1}$ can be strictly increasing to $i-1$ or strictly decreasing to 0 then strictly increasing to $i-1$. Then $m_1\leq i$ by Definition \ref{canonical word}.
Then either $m_1<0$ (while $r\geq 2$) or $|m_1|\leq |m_2|$, which is neither case $(a)$ nor $(b)$. \item Assume $s_i$ is in block $[m_1^{a_1}, n_1]$ and $s_{i+1}s_i$ is in block $[m_2^{a_2}, n_2]$. In order to move $s_i$ from block $[m_1^{a_1}, n_1]$ next to $s_{i+1} s_i$ from block $[m_2^{a_2}, n_2]$ using only commutation relations, there should be no factors to the right of $s_i$ in block $[m_1^{a_1}, n_1]$. This means that $s_i$ is the last factor in block $[m_1^{a_1}, n_1]$. Thus $n_1 =i$ by Definition \ref{canonical word}. Similarly, the indices of the factors to the left of $s_{i+1} s_i$ in block $[m_2^{a_2}, n_2]$, if any exist, must be larger than $i+1$. This means that $0>-(i+1)\geq m_2$ by Definition \ref{canonical word}. Since $|m_1|\leq n_1=i$ and $|m_2|\geq i+1$, then $|m_1|< |m_2|$, which is neither case $(a)$ nor $(b)$. \end{enumerate} \item $s_{i+1} s_{i} s_{i+1}$: As in case $(1)$, these three factors cannot lie in three distinct blocks without breaking the rule $n>n_1>\cdots>n_r\geq 0$ and $|m_i|\leq n_i$. Also, these three factors cannot be in the same block. Suppose they are in the same block. Then we must have a reduced expression containing $s_{i+1} s_i s_{i-1} \cdots s_1 s_0^{a_1} s_1 \cdots s_{i-1} s_i s_{i+1}$ by Definition \ref{canonical word}. Thus, it is impossible to obtain $s_{i+1} s_i s_{i+1}$ via commutation relations. It suffices to assume the three factors are in two adjacent blocks, which also leads to two cases. \begin{enumerate} \item Assume $s_{i+1} s_i$ is in block $[m_1^{a_1}, n_1]$ and $s_{i+1}$ is in block $[m_2^{a_2}, n_2]$. In order to move $s_{i+1}$ from block $[m_2^{a_2}, n_2]$ next to $s_{i+1} s_{i}$ from block $[m_1^{a_1}, n_1]$ using only commutation relations, there should be no factors to the left of $s_{i+1}$ in block $[m_2^{a_2}, n_2]$. This means that $s_{i+1}$ is the first factor in block $[m_2^{a_2}, n_2]$.
However, the indices of the factors to the right of $s_{i+1} s_i$ in block $[m_1^{a_1}, n_1]$ must be strictly decreasing to $0$ then strictly increasing to at least $i+1$ by Definition \ref{canonical word}. This means there is another $s_{i+1}$ to the right of $s_{i+1}s_i$ in block $[m_1^{a_1}, n_1]$. Thus, it is impossible to move $s_{i+1}$ from block $[m_2^{a_2}, n_2]$ next to $s_{i+1}s_i$ from block $[m_1^{a_1}, n_1]$ using only commutation relations. Then this case does not exist. \item Assume $s_{i+1}$ is in block $[m_1^{a_1}, n_1]$ and $s_i s_{i+1}$ is in block $[m_2^{a_2}, n_2]$. In order to move $s_{i+1}$ from block $[m_1^{a_1}, n_1]$ next to $s_i s_{i+1}$ from block $[m_2^{a_2}, n_2]$ using only commutation relations, factors to the left of $s_i s_{i+1}$ in block $[m_2^{a_2}, n_2]$, if any exist, must have indices smaller than $i$. This means $0\leq m_2 \leq i$ by Definition \ref{canonical word}. Factors to the right of $s_{i}s_{i+1}$ in block $[m_2^{a_2}, n_2]$, if any exist, must have indices larger than $i+1$. Then $n_2\geq i+1$ by Definition \ref{canonical word}. Similarly, there should be no factors to the right of $s_{i+1}$ in block $[m_1^{a_1}, n_1]$. This means that $s_{i+1}$ is the last factor in block $[m_1^{a_1}, n_1]$. Then $n_1=i+1$ by Definition \ref{canonical word}. Then we have $n_1\leq n_2$, which violates the rule $n>n_1>\cdots>n_r\geq 0$. Then this case does not exist. \end{enumerate} \item $s_1 s_0^{a} s_1 s_0^{a'}$: Since the indices of $s_1 s_0^{a} s_1 s_0^{a'}$ are neither strictly increasing nor strictly decreasing and then strictly increasing, these four factors cannot be in a single block. Below we focus on cases where the four factors come from two adjacent blocks in the canonical reduced word. A similar analysis applies to the cases when the factors are in three or in four distinct blocks. \begin{enumerate} \item Assume $s_1 s_0^a s_1$ is in block $[m_1^{a}, n_1]$ and $s_0^{a'}$ is in block $[m_2^{a'}, n_2]$.
Since the indices of $s_1 s_0^a s_1$ in block $[m_1^{a}, n_1]$ decrease and then increase, $m_1<0$ by Definition \ref{canonical word}. In order to move $s_0^{a'}$ in block $[m_2^{a'}, n_2]$ next to $s_1 s_0^a s_1$ in block $[m_1^a, n_1]$ using only commutation relations, there should be no factors to the left of $s_0^{a'}$ in block $[m_2^{a'}, n_2]$. Then $s_0^{a'}$ is the first factor in block $[m_2^{a'}, n_2]$. Then $m_2=0$ by Definition \ref{canonical word}. Similarly, factors to the right of $s_1 s_0^a s_1$ in block $[m_1^a, n_1]$ must have indices larger than $1$. Then $n_1\geq 1$ by Definition \ref{canonical word}. Moreover, the indices of factors to the left of $s_1 s_0^a s_1$ in block $[m_1^a, n_1]$ must be strictly decreasing to $1$. Then $0>-1\geq m_1$ by Definition \ref{canonical word}. Then we have $m_1<m_2$, which is neither case $(a)$ nor $(b)$. \item Assume $s_1 s_0^a$ is in block $[m_1^{a}, n_1]$ and $s_1s_0^{a'}$ is in block $[m_2^{a'}, n_2]$. Since the indices of both $s_1 s_0^a$ and $s_1 s_0^{a'}$ strictly decrease, $m_1 <0$ and $m_2<0$ by Definition \ref{canonical word}. By the same definition, factors to the right of $s_1s_0^a$ in block $[m_1^a, n_1]$ must have indices strictly increasing from $1$. Then there is an $s_1$ to the right of $s_1 s_0^a$ in block $[m_1^a, n_1]$. Thus, it is impossible to move $s_1 s_0^{a'}$ in block $[m_2^{a'}, n_2]$ next to $s_1 s_0^a$ in block $[m_1^a, n_1]$ using only commutation relations. Then this case does not exist. \item Assume $s_1$ is in block $[m_1^{a_1}, n_1]$ and $s_0^{a} s_1 s_0^{a'}$ is in block $[m_2^{a_2}, n_2]$. This is impossible since the indices of $s_0^{a} s_1 s_0^{a'}$ are neither strictly increasing nor strictly decreasing and then strictly increasing. Then this case does not exist. \end{enumerate} \item $s_0^{a} s_1 s_0^{a'} s_1$: As in Case $(3)$, we discuss cases where the four factors are in two adjacent blocks in the canonical reduced word.
\begin{enumerate} \item Assume $s_0^{a} s_1 s_0^{a'}$ is in block $[m_1^{a_1}, n_1]$ and $s_1$ is in block $[m_2^{a_2}, n_2]$. This is impossible since the indices of $s_0^{a} s_1 s_0^{a'}$ are neither strictly increasing nor strictly decreasing and then strictly increasing. Then this case does not exist. \item Assume $s_0^{a} s_1$ is in block $[m_1^{a}, n_1]$ and $s_0^{a'} s_1$ is in block $[m_2^{a'}, n_2]$. In order to move $s_0^{a'} s_1$ in block $[m_2^{a'}, n_2]$ next to $s_0^a s_1$ in block $[m_1^a, n_1]$ using only commutation relations, there should be no factors to the left of $s_0^{a'}s_1$ in block $[m_2^{a'}, n_2]$. Then $s_0^{a'} s_1$ are the first two factors in block $[m_2^{a'}, n_2]$. Then $m_2 =0$ by Definition \ref{canonical word}. And factors to the right of $s_0^{a'}s_1$ in block $[m_2^{a'}, n_2]$, if any exist, must have indices larger than $1$. Then $n_2\geq 1$ by Definition \ref{canonical word}. Similarly, there should be no factors to the right of $s_0^{a} s_1$ in block $[m_1^a, n_1]$, since $s_1$ and $s_2$ do not commute with each other. Then $n_1=1$ by Definition \ref{canonical word}. Then we have $n_1\leq n_2$, which violates the rule $n>n_1>\cdots>n_r\geq 0$. Then this case does not exist. \item Assume $s_0^{a}$ is in block $[m_1^a, n_1]$ and $s_1 s_0^{a'} s_1$ is in block $[m_2^{a'}, n_2]$. In order to move $s_0^a$ in block $[m_1^a, n_1]$ next to $s_1 s_0^{a'} s_1$ in block $[m_2^{a'}, n_2]$ using only commutation relations, there should be no factors to the right of $s_0^a$ in block $[m_1^a, n_1]$. Then $s_0^a$ is the only factor in block $[m_1^a, n_1]$. Then $m_1 = n_1 =0$ by Definition \ref{canonical word}. Since $s_0$ commutes with every generator other than $s_1$, there is no additional restriction on factors on either side of $s_1 s_0^{a'} s_1$ in block $[m_2^{a'}, n_2]$. Then $n_2\geq 1$. Then we have $n_1< n_2$, which violates the rule $n>n_1>\cdots>n_r\geq 0$. Then this case does not exist.
\end{enumerate} \end{enumerate} Therefore, if $g$ is not fully commutative, then its canonical reduced word cannot be case $(a)$ or $(b)$. $\qedhere$ \end{proof} \begin{proof}[Proof of Theorem~\ref{main theorem}] By Proposition \ref{main prop}, $g$ is fully commutative in $G(m,1,n)$ if and only if its canonical reduced word is of case $(a)$ or $(b)$. By Proposition \ref{prop: obs}, this happens if and only if the canonical reduced word of $\pi(g)$ in $G(2,1,n)$ is also of case $(a)$ or $(b)$, which occurs if and only if $\pi(g)$ is fully commutative in $G(2,1,n)$ by Theorem \ref{thrm: type B}. \end{proof} Thus, the pattern avoidance of fully commutative elements in $G(2,1,n)$ naturally extends to fully commutative elements in $G(m,1,n)$, and we list them in Table \ref{patterns}. \begin{table}[!htp] \centering \scalebox{0.7}{ \begin{tabular}{|c|c|l|} \hline Patterns & In $G(2,1,n)$ & \text{ In $G(m,1,n)$}\\ \hline &&\\ $(-1, -2)$ & $\left[\begin{smallmatrix} -1&\\ &-1\end{smallmatrix}\right]$ & $ \left[\begin{smallmatrix} \omega^{a} &\\ &\omega^{b}\end{smallmatrix}\right] $\text{ $(a, b \neq 0)$}\\ &&\\ $(\pm 3,2,\pm 1)$ & $\left[\begin{smallmatrix} && \pm 1\\ &1&\\ \pm 1&& \end{smallmatrix}\right] $ &$\left[\begin{smallmatrix} && \omega^a\\ &1&\\ \omega^b&& \end{smallmatrix}\right]$\\ &&\\ $(\pm 3,\pm 1,-2)$ & $ \left[\begin{smallmatrix} && \pm 1\\ \pm 1&&\\ &-1& \end{smallmatrix}\right]$ & $ \left[\begin{smallmatrix} && \omega^a\\ \omega^b&&\\ &\omega^c& \end{smallmatrix}\right]$ \text{ $(c \neq 0)$} \\ &&\\ $(\pm 2,\pm 1,-3)$ & $\left[\begin{smallmatrix} & \pm 1&\\ \pm 1&&\\& &-1 \end{smallmatrix}\right]$ &$\left[\begin{smallmatrix} & \omega^a&\\ \omega^b&&\\ &&\omega^c \end{smallmatrix}\right]$ \text{ $(c \neq 0)$}\\ &&\\ $(\pm 2,-3,\pm 1)$ & $\left[\begin{smallmatrix} & \pm 1&\\ &&- 1\\ \pm 1&& \end{smallmatrix}\right]$ &$ \left[\begin{smallmatrix}& \omega^a&\\ &&\omega^b\\ \omega^c&& \end{smallmatrix}\right]$ \text{ $(b \neq 0)$}\\ &&\\ $(\pm 1,-3,-2)$ & 
$\left[\begin{smallmatrix} \pm 1&&\\ &&- 1\\ &-1&\end{smallmatrix}\right]$ &$ \left[\begin{smallmatrix} \omega^a&&\\ &&\omega^b\\ &\omega^c& \end{smallmatrix}\right]$ \text{$(b,c \neq 0)$}\\ &&\\ \hline \end{tabular}} \vspace{0.1cm} \caption{Pattern avoidance of fully commutative elements in $G(m,1,n)$}\label{patterns} \end{table} \begin{corollary}\label{cor: coeff} In $G(m,1,n)$, the number of fully commutative elements is \[\sum\limits_{k=0}^{n} \alpha_{n,k} (m-1)^{k},\] where $\alpha_{n,k}$ is the number of fully commutative elements with $k$ $-1$'s in $G(2,1,n)$. \end{corollary} \begin{proof} Assume $g$ has $k$ nontrivial entries, then its image $\pi(g)\in G(2,1,n)$ has $k$ $-1$'s. Then the canonical reduced word of $\pi(g)$ has $k$ blocks $[m_i, n_i]$ where $m_i \leq 0$. This means that there are $k$ $s_0$'s in the reduced expression of $\pi(g)$. Then every $s_0$ in the reduced expression of $\pi(g)$ has $m-1$ distinct pre-images, i.e., $s_0$, $(s_0)^2$, $\ldots$, $(s_0)^{m-1}$, in the reduced expression of $g$. Therefore, $\pi(g)$ has $(m-1)^k$ distinct pre-images. By Theorem \ref{main theorem}, these pre-images are fully commutative in $G(m,1,n)$. Again by Theorem \ref{main theorem}, summing over all possible values of $k$ gives the total number of fully commutative elements in $G(m,1,n)$. \end{proof} \begin{corollary} In $G(2,1,n)$, there are \begin{enumerate} \item $C_{n+1} -1$ fully commutative elements with one $-1$, and \item ${2n \choose n+k} - {2n \choose n+k+1}$ fully commutative elements with $k$ $-1$'s ($k\neq 1$). 
\end{enumerate} \end{corollary} \begin{proof} By Corollary \ref{cor: coeff}, it suffices to show that in $G(m,1,n)$, the number of fully commutative elements is \begin{equation}\label{equ: formula} \sum\limits_{k= 0}^{n}\left( {2n \choose n+k} - {2n \choose n+k+1}\right) (m-1)^k + (C_{n} - 1)(m-1).\end{equation} When $n=2$, fully commutative elements with two nontrivial entries have the form $\begin{bmatrix} & \star\\ \star&\end{bmatrix}$, those with one nontrivial entry have one of the four forms $\begin{bmatrix} \star &\\ &1\end{bmatrix}$, $\begin{bmatrix} 1&\\ &\star\end{bmatrix}$, $\begin{bmatrix} &1\\ \star&\end{bmatrix}$ and $\begin{bmatrix} &\star\\ 1&\end{bmatrix}$, and those with no nontrivial entry are exactly the elements that are fully commutative in $\Symm_2$, of which there are $C_2=2$. Since each $\star$ has $m-1$ choices, the total number of fully commutative elements in $G(m,1,2)$ is \[ 1\cdot (m-1)^2 + 4\cdot (m-1)^1 + C_2\cdot (m-1)^0 = m^2 +2m -1, \] which is $(\ref{equ: formula})$ evaluated at $n=2$. When $n\geq 3$, we show that $(\ref{equ: formula})$ agrees with Theorem~\ref{thrm: FKLO}, i.e., for positive integers $m$ and $n$, the following equality is true: {{\footnotesize \[m(m-1)\sum\limits_{s=0}^{n-2} \frac{(n+s)! (n-s+1)}{s!(n+1)!} m^{n-2-s} +(2m-1)C_n -(m-1) =\sum\limits_{k= 0}^{n} \left({2n \choose n+k} - {2n \choose n+k+1} \right)(m-1)^k + (C_{n} - 1)(m-1).\] }} Removing identical terms on both sides, we need to show \[m(m-1)\sum\limits_{s=0}^{n-2} \frac{(n+s)! (n-s+1)}{s!(n+1)!} m^{n-2-s} +mC_n =\sum\limits_{k= 0}^{n} \left( {2n \choose n+k} - {2n \choose n+k+1}\right) (m-1)^k .\] It suffices to show that $[m^j]\mathrm{LHS} = [m^j]\mathrm{RHS}$ for $j=0,1,2, \ldots, n$, where $[m^j]$ denotes the coefficient of the term $m^j$. We do this in three cases.
$(1)$ When $j=0$, $[m^0]\mathrm{LHS} =0$ and $ [m^0]\mathrm{RHS} =\sum\limits_{k=0}^{n}\left({2n \choose n+k} - {2n \choose n+k+1}\right)(-1)^k.$ Let $S = \sum\limits_{k=0}^{n} (-1)^k {2n \choose n+k}$. Then \begin{equation*} \begin{split} 2S & = \left[(-1)^n {2n \choose 0} + (-1)^{n-1} {2n \choose 1} + \cdots + {2n \choose n}\right]+\left[ {2n \choose n} + (-1) {2n \choose n+1} + \cdots + (-1)^{n} {2n \choose 2n}\right]\\ & = (-1)^n \left[ {2n \choose 0} - {2n \choose 1} +\cdots +{2n \choose 2n}\right] + {2n \choose n}= {2n \choose n}.\\ \end{split} \end{equation*} Similarly, let $T = \sum\limits_{k=0}^{n} (-1)^k {2n \choose n+k+1} $. Then \begin{equation*} \begin{split} 2T & = \left[(-1)^{n-1} {2n \choose 0} + (-1)^{n-2} {2n \choose 1} + \cdots + {2n \choose n-1}\right]+ \left[ {2n \choose n+1} -{2n \choose n+2} + \cdots + (-1)^{n-1} {2n \choose 2n}\right]\\ & = (-1)^{n-1} \left[ {2n \choose 0} - {2n \choose 1} +\cdots +{2n \choose 2n}\right] + {2n \choose n}= {2n \choose n}.\\ \end{split} \end{equation*} Since $ S=T=\frac{1}{2} {2n \choose n}$, then $[m^0]\mathrm{RHS} = S-T =0=[m^0]\mathrm{LHS}$. $(2)$ When $j=1$, $[m^1]\mathrm{LHS} = -\frac{(2n-2)! 3}{(n-2)!(n+1)!}+C_n= \frac{1}{n-1}{{2n-2} \choose n}$ and \begin{equation*} \begin{split} [m^1]\mathrm{RHS}& = \sum\limits_{k=0}^{n}\left({2n \choose n+k} - {2n \choose n+k+1}\right)k(-1)^{k-1} \\ & =\sum\limits_{k=0}^n (n+k-n){2n \choose n+k}(-1)^{k-1} +\sum\limits_{k=0}^n (n+k+1-n-1){2n \choose n+k+1}(-1)^{k}\\ & = 2n \sum\limits_{k=0}^{n}(-1)^{k-1}{2n-1 \choose n+k-1} +n \sum\limits_{k=0}^{n}(-1)^{k}{2n \choose n+k}\\ &\qquad +2n \sum\limits_{k=0}^{n} (-1)^k {2n-1 \choose n+k} -(n+1)\sum\limits_{k=0}^{n} (-1)^k {2n \choose n+k+1}.\\ \end{split} \end{equation*} From part $(1)$, we have $\sum\limits_{k=0}^{n} (-1)^k {2n \choose n+k} = \sum\limits_{k=0}^{n} (-1)^k {2n \choose n+k+1}=\frac{1}{2}{2n \choose n}.$ Also, it is known that $\sum\limits_{i=0}^{N} (-1)^i {n \choose i}=(-1)^N {n-1 \choose N}$.
Then \begin{equation*} \begin{split} [m^1]\mathrm{RHS}& = 2n (-1)^{n-1}\left[(-1)^{n-1}{2n-2 \choose n-1}\right] + \frac{n}{2}{2n \choose n}+ 2n (-1)^{n-1}\left[ (-1)^n {2n-2 \choose n}\right] -\frac{n+1}{2}{2n \choose n}\\ & = 2n \left[ {2n-2 \choose n-1} - {2n-2 \choose n} \right]-\frac{1}{2}{2n \choose n}\\ & = \frac{2n}{n-1}{2n-2 \choose n} - \frac{2n-1}{n-1}{2n -2 \choose n}\\ & = \frac{1}{n-1} {2n-2 \choose n}\\ &=[m^1]\mathrm{LHS} . \end{split} \end{equation*} $(3)$ When $j\geq 2$, $[m^j]\mathrm{LHS} =\frac{(2n-j)!(j+1)}{(n-j)!(n+1)!} - \frac{(2n-j-1)!(j+2)}{(n-j-1)!(n+1)!} = \frac{j+1}{n+1}{2n-j \choose n} - \frac{j+2}{n+1}{2n-j-1 \choose n}= \frac{j+1}{n+1}{2n-j-1 \choose n-1} - \frac{1}{n+1} {2n-j-1 \choose n}$. Since ${k\choose j} = (-1)^{k-j} {-j-1 \choose k-j} $, then \begin{equation*} \begin{split} [m^j]\mathrm{RHS} &= \sum\limits_{k=0}^{n}\left({2n \choose n+k} - {2n \choose n+k+1}\right){k \choose j} (-1)^{k-j}\\ & = \sum\limits_{k=j}^{n}\left({2n \choose n-k} - {2n \choose n-k-1}\right)\left[ (-1)^{k-j} {-j-1 \choose k-j} \right] (-1)^{k-j}\\ & = \sum\limits_{k=j}^{n}\left[ {2n \choose n-k} {-j-1 \choose k-j} - {2n \choose n-k-1}{-j-1 \choose k-j}\right].\\ \end{split} \end{equation*} Recall the Chu-Vandermonde identity $\sum\limits_{i=0}^{c} {a \choose i}{b \choose c-i} = {a+b \choose c}$ for general complex-valued $a$ and $b$ and any non-negative integer $c$. Then letting $c = n-j$, $i = k-j $, $a=-j-1$ and $b=2n$ gives \[\sum\limits_{k=j}^{n}{2n \choose n-k}{-j-1 \choose k-j} = {2n-j-1 \choose n-j}\] and letting $c=n-j-1$, $i = k-j$, $a=-j-1$ and $b=2n$ gives \[\sum\limits_{k=j}^{n}{2n \choose n-k-1}{-j-1 \choose k-j} = {2n \choose n-n-1}{-j-1 \choose n-j} + \sum\limits_{k=j}^{n-1}{2n\choose n-k-1}{-j-1 \choose k-j}=0 + {2n-j-1 \choose n-j-1}.\] Then \begin{equation*} \begin{split} [m^j]\mathrm{RHS} ={2n-j-1 \choose n-j} - {2n-j-1 \choose n-j-1}= {2n-j-1 \choose n-1} - {2n-j-1 \choose n} . 
\end{split} \end{equation*} Therefore, \begin{multline*} [m^j]\mathrm{LHS}-[m^j]\mathrm{RHS} =\left[\frac{j+1}{n+1}{2n-j-1 \choose n-1} - \frac{1}{n+1} {2n-j-1 \choose n}\right] \\- \left[{2n-j-1 \choose n-1} - {2n-j-1 \choose n} \right] =0.\qedhere \end{multline*} \end{proof} \begin{remark} Observe that the leading coefficient of $(\ref{equ: formula})$ is $1$. This means that the fully commutative elements in $G(m,1,n)$ with $n$ nontrivial entries have canonical reduced words $[0^{a_1}, n-1][0^{a_2}, n-2]\cdots [0^{a_{n-1}}, 1][0^{a_{n}}, 0]$ with the same underlying permutation, namely the reverse identity matrix. One can also see this directly from Proposition \ref{main prop}. In order to create elements with $n$ nontrivial entries, the $m_i$'s in the canonical reduced words $[m_1^{a_1}, n_1]\cdots [m_n^{a_n}, n_n]$ must be non-positive. Since these elements are fully commutative, their canonical reduced words are of case $(b)$ in Proposition \ref{main prop}, i.e., all $m_i$'s are $0$. Since $n>n_1>n_2>\cdots >n_n\geq 0$, the desired elements have canonical reduced words of the form \[[0^{a_1}, n-1][0^{a_2}, n-2]\cdots [0^{a_{n-1}}, 1][0^{a_{n}}, 0].\] \end{remark} \section{Open question: what about $G(m,m,n)$?} \label{sec:open} Fully commutative elements in $G(m,m,n)$ are harder to enumerate and to characterize. The recent work by Feinberg-Kim-Lee-Oh studies elements in $G(m,m,n)$ that are fully commutative in $G(m,1,n)$. Note that such elements are not necessarily fully commutative in $G(m,m,n)$. So their counting formula does not recover Stembridge's result in $D_n$. In this section, we propose a few open questions regarding fully commutative elements in $G(m,m,n)$. Let $s_1, s_2, \ldots, s_{n-1}$ be simple transpositions, i.e., $s_i =[(i\, i+1); (0, \ldots, 0)]$, and $s_{\bar{1}}= [(12); (-1,1,0,\ldots, 0)]$.
For $m\geq 2$ and $n\geq 3$, the group $G(m,m,n)$ can be generated by $s_{\bar{1}}, s_1, s_2, \ldots, s_{n-1}$ with defining relations \cite{BMR}: \begin{center} \begin{tabular}{rll} $(s_1s_{\bar{1}})^m= s_{\bar{1}}^2 = s_i^2 $ &$=1$ & for $1\leq i\leq n-1$,\\ $s_i s_j$ & $= s_j s_i $ & for $i+1 < j\leq n-1$, \\ $s_i s_{\bar{1}}$ & $ = s_{\bar{1}} s_i$& for $1< i \leq n-1$,\\ $s_{i+1}s_i s_{i+1}$ &$ = s_i s_{i+1}s_i$ & for $1\leq i \leq n-2$,\\ $s_{\bar{1}} s_2 s_{\bar{1}}$ & $= s_2 s_{\bar{1}} s_2$,&\\ $(s_{\bar{1}} s_1 s_2)^2 $ & $ = (s_2 s_{\bar{1}} s_1)^2 $.& \end{tabular} \end{center} We refer to this set of generators $S^c=\{s_{\bar{1}}, s_1, s_2, \ldots, s_{n-1} \}$ as the \emph{classical generating set}. Stembridge characterized fully commutative elements by pattern avoidance in $D_n=G(2,2,n)$ with the classical generating set. An equivalent result was also obtained by Fan \cite[$\S 7$]{Fan}. \begin{theorem}[{\cite[Thrm.~10.1]{S97}}]\label{thrm: type D} For $g\in D_{n}$, the following are equivalent. \begin{enumerate} \item $g$ is fully commutative. \item $g$ avoids all patterns $(a, b,c)$ such that $|a|>b>c$ or $-b>|a|>c$. \end{enumerate} \end{theorem} There is also a counting formula for fully commutative elements in $D_n$, with the classical generating set, obtained independently by Stembridge \cite{S97} and Fan \cite{Fan}. \begin{proposition}[{\cite[Prop.~10.4]{S97}}] In $D_n$, there are $\frac{n+3}{2}C_n -1$ fully commutative elements, where $C_n=\frac{1}{n+1}{2n \choose n}$ is the $n$th Catalan number. \end{proposition} \subsection{Enumeration} We consider counting fully commutative elements in $G(m,m,n)$ with the classical generating set. \begin{prop}\label{prop: mm2} When $n=2$, there are \begin{enumerate} \item $4$ fully commutative elements in $G(2,2,2)$, i.e., every element is fully commutative in $G(2,2,2)$. \item $2m-1$ fully commutative elements in $G(m,m,2)$ when $m\geq 3$. \end{enumerate} \end{prop} \begin{proof} \begin{enumerate} \item Clear.
\item We know that $G(m,m,2)$ is generated by $s_1 = [(12); (0,0)]$ and $s_{\bar{1}} = [(12); (-1,1)]$. There is exactly one element that is not fully commutative: $[\mathrm{id}; (d, d)]$ if $m=2d$, or $[(12); (d, -d)]$ if $m=2d+1$, with reduced expressions $s_1 s_{\bar{1}} s_1 s_{\bar{1}} \cdots = s_{\bar{1}} s_1 s_{\bar{1}} s_1 \cdots$. Since there are $2m$ elements in $G(m, m,2)$, then $2m-1$ of them are fully commutative. \qedhere \end{enumerate} \end{proof} \begin{proposition} In $G(m,m,3)$, an element is fully commutative if and only if it has a unique reduced expression. \end{proposition} \begin{proof} When $n=3$, $G(m,m,3)$ is generated by $s_1=[(12); (0, 0, 0)]$, $s_2=[(23); (0, 0, 0)]$ and $s_{\bar{1}} = [(12); (-1,1,0)]$. Since no two of the generators commute, the result follows. \end{proof} The present mapping method, from $G(m,1,n)$ to $G(2,1,n)$, is not very useful in counting fully commutative elements in $G(m,m,n)$. At least one obstacle comes from mapping an element in $G(m,m,n)$ to an element in $G(2,2,n)$, i.e., replacing every nontrivial entry with $-1$. Consider $[(123); (1, 2,0)]$ in $G(3,3,3)$, which is fully commutative with unique reduced expression $s_1 s_{\bar{1}} s_2 s_1$. The image $[(123);(1,1,0)]$ is not fully commutative in $G(2,2,3)$, since it has reduced expressions $s_1 s_{\bar{1}} s_2 s_1=s_{\bar{1}} s_1 s_2 s_1=s_{\bar{1}} s_2 s_1 s_2$. Another challenge concerns elements such as $[(23);(1,1,1)]\in G(3,3,3)$, which is fully commutative with unique reduced expression $s_1 s_{\bar{1}} s_2 s_1s_{\bar{1}}$. However, the image $[(23);(1,1,1)]$ does not exist in $G(2,2,3)$. In Table \ref{classical}, we enumerate fully commutative elements in $G(m,m,3)$ for small values of $m$ and list them by \emph{reflection length} $\ell$, i.e., the minimum number of generators needed in their reduced expressions.
\begin{table}[!htbp] \centering \scalebox{0.5}{ \begin{tabular}{|cccccccccc|} \hline $\ell$ & $m=2$&$m=3$ & $m=4$ & $m=5$ & $m=6$ & $m=7$ & $m=8$ & $m=9$ & $m=10$ \\ \hline 0 &1& 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \hline 1 & 3&3 & 3 & 3 & 3 & 3 & 3 & 3 & 3 \\ \hline 2 & 5&6 & 6 & 6 & 6 & 6 & 6 & 6 & 6 \\ \hline 3 & 4&6 & 8 & 8 & 8 & 8 & 8 & 8 & 8 \\ \hline 4 &1& 6 & 10 & 12 & 12 & 12 & 12 & 12 & 12 \\ \hline 5 && 6 & 12 & 16 & 18 & 18 & 18 & 18 & 18 \\ \hline 6 && & 10 & 16 & 20 & 22 & 22 & 22 & 22 \\ \hline 7 && & & 16 & 22 & 26 & 28 & 28 & 28 \\ \hline 8 && & & 2 & 18 & 24 & 28 & 30 & 30 \\ \hline 9 && & & & 4 & 26 & 32 & 36 & 38 \\ \hline 10 && & & & & 10 & 26 & 32 & 36 \\ \hline 11 && & & & & & 14 & 36 & 42 \\ \hline 12 && & & & & & & 18 & 34 \\ \hline 13 && & & & & & & 4 & 24 \\ \hline 14 & && & & & & & & 8 \\ \hline \hline total &14 &28& 50&80&112 &156 & 198&254 &310 \\ \hline \end{tabular} } \vspace{0.1cm} \caption{f.c. elements in $G(m,m,3)$ with classical generating set} \label{classical} \end{table} \vspace{-0.3in} \subsection{Pattern avoidance} We now consider characterizing fully commutative elements with the classical generating set by pattern avoidance in $G(m,m,n)$. Theorem \ref{thrm: type D} implies that a fully commutative element in $G(2,2,n+1)$ does not contain a non fully commutative element in $G(2, 2, n)$, as a submatrix. This behavior does not extend to $m>2$. For example, $[(12); (d, -d)]$ is not fully commutative in $G(2d+1, 2d+1, 2)$, by Proposition \ref{prop: mm2}, but $[(23); (0, d, -d)]$ is fully commutative in $G(2d+1, 2d+1, 3)$ $(d\geq 2)$, with unique reduced expression $s_{\bar{1}} s_2 (s_1 s_{\bar{1}})^{d-1} s_1 s_2 s_{\bar{1}}$. Furthermore, one can find this kind of fully commutative elements in both $G(3,3,4)$ and $G(4,4,4)$, as listed in Table \ref{strange}. In fact, Table \ref{strange} lists all such elements in both groups. 
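To see concretely why $[(12); (d, -d)]$ is not fully commutative in $G(2d+1, 2d+1, 2)$, note that the relations $(s_1 s_{\bar{1}})^{2d+1} = s_1^2 = s_{\bar{1}}^2 = 1$ force the two alternating words of length $2d+1$ to represent the same element:
\[
\underbrace{s_1 s_{\bar{1}} s_1 \cdots s_1}_{2d+1} \;=\; \underbrace{s_{\bar{1}} s_1 s_{\bar{1}} \cdots s_{\bar{1}}}_{2d+1},
\]
and these two reduced expressions are not related by commutation relations, since $s_1$ and $s_{\bar{1}}$ do not commute. In contrast, the longest alternating run of $s_1$ and $s_{\bar{1}}$ in the word $s_{\bar{1}} s_2 (s_1 s_{\bar{1}})^{d-1} s_1 s_2 s_{\bar{1}}$ has length $2d-1 < 2d+1$, which is consistent with the uniqueness of that reduced expression.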
Current evidence suggests that the number of such strange fully commutative elements in $G(m,m,n+1)$ is very small. \begin{center} \begin{table}[!htbp] \scalebox{0.65}{ \begin{tabular}{|lll|l|} \hline \text{ $G(3,3,4)$: fully commutative } && & \text{ $G(3,3,3)$: non fully commutative }\\ $\left[ \begin{smallmatrix} &1&&\\ &&&\omega^2 \\ &&\omega^2 & \\ \omega^2&&&\end{smallmatrix}\right]$ & & $\left[ \begin{smallmatrix} 1&&&\\ &&&\omega^2\\ &&\omega^2&\\ & \omega^2 &&\end{smallmatrix}\right]$ & $\left[ \begin{smallmatrix} &&\omega^2\\ &\omega^2&\\ \omega^2&&\end{smallmatrix}\right]$\\ $s_{\bar{1}} s_1 s_2 s_{\bar{1}} s_3 s_1 s_2 s_{\bar{1}}$ & & $s_{\bar{1}} s_1 s_2 s_{\bar{1}} s_3 s_1 s_2 s_{\bar{1}} s_1$ & $s_{\bar{1}} s_1 s_2 s_{\bar{1}} s_2= s_{\bar{1}} s_1 s_{\bar{1}} s_2 s_{\bar{1}}$\\ &&&\\ $\left[ \begin{smallmatrix} &&&\omega\\ 1&&&\\ &&\omega&\\ &\omega&&\end{smallmatrix}\right]$ & & $\left[ \begin{smallmatrix} 1&&&\\ &&&\omega\\ &&\omega&\\ &\omega&&\end{smallmatrix}\right]$ & $\left[ \begin{smallmatrix} &&\omega\\ &\omega&\\ \omega&&\end{smallmatrix}\right]$\\ $s_{\bar{1}} s_2 s_1 s_{\bar{1}} s_3 s_2 s_1 s_{\bar{1}}$ & & $ s_1 s_{\bar{1}} s_2 s_1 s_{\bar{1}} s_3 s_2 s_1 s_{\bar{1}}$& $s_{\bar{1}} s_2 s_1 s_{\bar{1}} s_1=s_{\bar{1}} s_2 s_{\bar{1}} s_1 s_{\bar{1}}$\\ \hline \text{ $G(4,4,4)$: fully commutative } && & \text{ $G(4,4,3)$: non fully commutative }\\ $\left[\begin{smallmatrix} &&&\omega^2 \\ 1&&&\\ &&\omega&\\ &\omega&&\end{smallmatrix}\right]$ &&& $\left[\begin{smallmatrix} &&\omega^2 \\ &\omega&\\ \omega&&\end{smallmatrix}\right]$\\ $s_{\bar{1}} s_1 s_{\bar{1}} s_2 s_1 s_{\bar{1}} s_3 s_2 s_1 s_{\bar{1}}$ &&& $s_1 s_{\bar{1}} s_2 s_1 s_{\bar{1}} s_1 s_2=s_{\bar{1}} s_1 s_{\bar{1}} s_2 s_1 s_{\bar{1}}s_1$\\ &&&\\ $\left[\begin{smallmatrix} &1&&\\ &&&\omega^3\\ &&\omega^3 &\\ \omega^2&&&\end{smallmatrix}\right]$ &&& $\left[\begin{smallmatrix} &&\omega^3\\ &\omega^3&\\ \omega^2&&\end{smallmatrix}\right]$\\ $s_{\bar{1}} s_1 s_2 s_{\bar{1}} s_3 
s_1 s_2 s_{\bar{1}} s_1 s_{\bar{1}}$ &&& $s_{1} s_{\bar{1}} s_1 s_2 s_{\bar{1}} s_1 s_{\bar{1}}=s_2 s_1 s_{\bar{1}} s_1 s_2 s_{\bar{1}} s_1$\\ \hline \end{tabular}} \vspace{0.1cm} \caption{f.c. elements that contain a non f.c. element} \label{strange} \end{table} \end{center} Comparing Theorem \ref{thrm: type D} with Theorem \ref{thrm: type B}, one notices that the pattern avoidance in $G(2,2,n)$ is also part of the pattern avoidance in $G(2,1,n)$ $(n\geq 3)$. This means that a fully commutative element in $G(2,2,n)$ is also fully commutative in $G(2,1,n)$. This is not true in $G(m,m,n)$ when $m>2$. One can find elements that are fully commutative in $G(3,3,3)$ but not fully commutative in $G(3,1,3)$, such as $[\mathrm{id}; (1,2,0)]$ which is fully commutative in $G(3,3,3)$ with unique reduced expression $s_{\bar{1}}s_1$, but it is not fully commutative in $G(3,1,3)$ with reduced expressions $s_1 s_0^2 s_1 s_0$ and $s_0 s_1 s_0^2 s_1$. Since Stembridge's result in $D_n=G(2,2,n)$ cannot be directly extended to $G(m,m,n)$, full commutativity in $G(m,m,n)$ with the classical generating set remains to be studied further. \subsection{Other generating sets} There are many possible choices of generating set for $G(m,m,n)$, none of which are Coxeter-like in the same way as in $G(m,1,n)$ (see the discussion preceding Lemma 4.2 in \cite{ChaDou} and \cite[$\S 3.7.2$]{Wil19}). Likely because of this phenomenon, full commutativity in $G(m,m,n)$ with the classical generating set is more challenging to investigate. In this section, we look at two other generating sets for $G(m,m,n)$. \subsubsection{Affine generating set} Let $s_1$, $s_2$, \ldots, $s_{n-1}$ be simple transpositions, i.e., $s_i = [(i\, i+1); (0,\ldots, 0)]$, and $\widetilde{s}_n = [(1\, n); (-1, 0, \ldots, 0, 1)]$. 
Then $G(m,m,n)$ can be generated by $s_1$, $s_2$, $\ldots$, $s_{n-1}$, $\widetilde{s}_n$ with defining relations \cite{Shi-generic}: \begin{center} \begin{tabular}{rll} $(\widetilde{s}_n (s_1 s_2\cdots s_{n-1} \cdots s_2 s_1))^m=(\widetilde{s}_n)^2 = s_i^2 $ &$=1$ & for $1\leq i\leq n-1$,\\ $s_i s_j$ & $= s_j s_i $ & for $i+1 < j\leq n-1$, \\ $s_i \widetilde{s}_n$ & $ = \widetilde{s}_n s_i$& for $1<i < n-1$,\\ $s_{i+1}s_i s_{i+1}$ &$ = s_i s_{i+1}s_i$ & for $1\leq i \leq n-2$,\\ $s_i \widetilde{s}_n s_i$ & $= \widetilde{s}_n s_i \widetilde{s}_n$ & for $i=1,\, n-1$.\\ \end{tabular} \end{center} This generating set for $G(m,m,n)$ is the projection of the Coxeter generating set for the affine symmetric group $\widetilde{A}_n$. We refer to this set of generators $\widetilde{S}= \{ s_1, \ldots, s_{n-1}, \widetilde{s}_n\}$ as the \emph{affine generating set} of $G(m,m,n)$. Tables \ref{enumeration} and \ref{affine in 4} show the current enumerative evidence of fully commutative elements in $G(m,m,3)$ and $G(m,m,4)$. \begin{table}[!htbp] \parbox{.45\linewidth}{ \centering \scalebox{0.9}{ \begin{tabular}{|cccccc|} \hline$\ell$ & $m=2$& $m=3$ & $m=4$ & $m=5$ & $m=6$ \\ \hline 0 &1& 1 & 1 & 1 & 1 \\ \hline 1 &3&3 & 3 & 3 & 3 \\ \hline 2 &6&6 & 6 & 6 & 6 \\ \hline 3 & 6& 6 & 6 & 6 & 6 \\ \hline 4 && 6 & 6 & 6 & 6 \\ \hline 5 && 6 & 6 & 6 & 6 \\ \hline 6 && & 6 & 6 & 6 \\ \hline 7 && & 6 & 6 & 6 \\ \hline 8 && & & 6 & 6 \\ \hline 9 && & & 6 & 6 \\ \hline 10 &&& & & 6 \\ \hline 11 && & & & 6 \\ \hline \hline total &16&28 &40 & 52& 64 \\ \hline \end{tabular}} \vspace{0.1cm} \captionsetup{format=myformat} \caption{f.c. 
elements in $G(m,m,3)$\newline with affine generating set} \label{enumeration} } \hfill \parbox{.45\linewidth}{ \centering \scalebox{0.7}{ \begin{tabular}{|ccccc|} \hline$\ell$ & $m=2$ & $m=3$ & $m=4$ & $m=5$ \\ \hline 0 & 1 & 1 & 1 & 1 \\ \hline 1 & 4 & 4 & 4 & 4 \\ \hline 2 & 10 & 10 & 10 & 10 \\ \hline 3 & 16 & 16 & 16 & 16 \\ \hline 4 & 18 & 18 & 18 & 18 \\ \hline 5 & 16 & 16 & 16 & 16 \\ \hline 6 & 10 & 18 & 18 & 18 \\ \hline 7 & &16 & 16 & 16 \\ \hline 8 && 18 & 18 & 18 \\ \hline 9 && 8 & 16 & 16 \\ \hline 10 & &10 & 18 & 18 \\ \hline 11 & && 16 & 16 \\ \hline 12 & && 10 & 18 \\ \hline 13 && & 8 & 16 \\ \hline 14 & & &10 & 18 \\ \hline 15 &&& & 8 \\ \hline 16 &&& & 10 \\ \hline 17 & &&& 8 \\ \hline 18 &&& & 10 \\ \hline \hline total &75&135&195 &255 \\ \hline \end{tabular}} \vspace{0.1cm} \captionsetup{format=myformat} \caption{f.c. elements in $G(m,m,4)$\newline with affine generating set} \label{affine in 4} } \end{table} Preliminary data suggests the following conjectures. \begin{conjecture} Let $a_m$ denote the number of fully commutative elements in $G(m,m,n)$ $(n\geq 3)$ with the affine generating set. Then $a_{m+1}=a_{m} +k$, where $k$ is a positive integer depending only on $n$. In particular, $k=12$ when $n=3$ and $k=60$ when $n=4$. \end{conjecture} \begin{conjecture} Consider the group $G(m,m,3)$ with the affine generating set. For every $\ell$ with $1<\ell\leq 2m-1$, the number of fully commutative elements of reflection length $\ell$ is $6$. \end{conjecture} \subsubsection{Star generating set} Our motivation for this next generating set comes from star transpositions in $\Symm_n$. Besides the set of Coxeter generators $\{(i\, i+1): 1\leq i\leq n-1\}$, the symmetric group $\Symm_n$ can also be generated by the set $S=\{ (1\, i): 2\leq i \leq n\}$. Observe that the corresponding labelled graph $(V, E)$, where the vertex set is $V=\{1, \ldots, n\}$ and the edge set is $E=\{e_{ij}: (i\, j)\in S\}$, is star-shaped. We therefore call the elements of $S$ \emph{star transpositions}.
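As a quick computational sanity check (a sketch in plain Python; the helper names are ours, not from the literature), one can verify by breadth-first search that the star transpositions indeed generate the full symmetric group:

```python
from itertools import permutations

def star_transposition(n, i):
    # The transposition (1 i) on {1,...,n}, written 0-indexed as a tuple of images.
    p = list(range(n))
    p[0], p[i - 1] = p[i - 1], p[0]
    return tuple(p)

def generated_by_stars(n):
    # Breadth-first closure of the star transpositions under composition.
    gens = [star_transposition(n, i) for i in range(2, n + 1)]
    identity = tuple(range(n))
    seen = {identity}
    frontier = [identity]
    while frontier:
        next_frontier = []
        for w in frontier:
            for g in gens:
                wg = tuple(w[g[j]] for j in range(n))  # w composed with g
                if wg not in seen:
                    seen.add(wg)
                    next_frontier.append(wg)
        frontier = next_frontier
    return seen

# The star transpositions generate all n! permutations.
for n in range(2, 6):
    assert len(generated_by_stars(n)) == len(set(permutations(range(n))))
```

The same breadth-first traversal of the Cayley graph is also a convenient way to enumerate elements by word length in this generating set.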
Pak counted in \cite{Pak98} the number of reduced expressions of a permutation $\pi\in\Symm_n$ that fixes $1$ and has $m$ cycles of length $k\geq 2$. This result was generalized to any permutation in $\Symm_n$ by Irving and Rattan. They showed in \cite{IR09} that the number of minimal star factorizations of $\pi \in \Symm_n$ with cycles of lengths $\ell_1$, $\ldots$, $\ell_m$ including exactly $k$ fixed points not equal to $1$ is \begin{equation} \label{equ: star} \frac{(n+m-2(k+1))!}{(n-k)!} \ell_1 \cdots \ell_m. \end{equation} For small values of $n$, we enumerate fully commutative elements by reflection length $\ell$ in $\Symm_n$ and list them in Table \ref{star in symmetric}. Observe that at $\ell=t>0$, the number of fully commutative elements in $\Symm_n$ appears to be $ (n-1)\cdot (n-2)\cdots (n-t)$. \begin{table}[!htbp] \centering \scalebox{1.0}{ \begin{tabular}{|ccccc|} \hline$\ell$ & $n=3$ & $n=4$ & $n=5$ & $n=6$ \\ \hline 0 & 1 & 1 & 1 & 1 \\ \hline 1 & 2 & 3 & 4 & 5 \\ \hline 2 & 2 & 6 & 12 & 20 \\ \hline 3 & & 6 & 24 & 60 \\ \hline 4 & & & 24 & 120 \\ \hline 5 & & & & 120 \\ \hline \hline total &5 &16 &65 &326 \\ \hline \end{tabular}} \vspace{0.1cm} \caption{f.c. elements in $\Symm_n$ with star generating set}\label{star in symmetric} \end{table} \begin{proposition} Consider the symmetric group $\Symm_n$ with the generating set $\{ (1\, i): 2\leq i \leq n\}$. The number of fully commutative elements is $1+\sum\limits_{t=1}^{n-1} \prod\limits_{j=1}^{t} (n-j).$ Furthermore, at reflection length $\ell=t>0$, fully commutative elements are the $(t+1)$-cycles that move $1$. \end{proposition} \begin{proof} Let $\pi \in \Symm_n$ be a permutation with cycles of cycle lengths $\ell_1$, $\ldots$, $\ell_m$ including exactly $k$ fixed points not equal to $1$. Assume that $\pi$ is fully commutative. Since no two star transpositions commute with each other, full commutativity means unique reduced expression. 
Thus, we are looking for $\pi\in \Symm_n$ for which \eqref{equ: star} equals $1$. When $\pi=\mathrm{id}$, it is trivially fully commutative. In this case, $k=n-1$, $m=n$, $\ell_1=\ldots=\ell_m=1$ and $\frac{(n+m-2(k+1))!}{(n-k)!}=1$. When $\pi\neq\mathrm{id}$, we have $\ell_1\cdots\ell_m >1$. In order for \eqref{equ: star} to equal $1$, we therefore need $\frac{(n+m-2(k+1))!}{(n-k)!}=\frac{(n-k + (m-k-2))!}{(n-k)!}<1$. This means $m-k-2<0$, i.e.\ $k>m-2$. Since $k$ is the number of fixed points not equal to $1$ and $m$ is the number of cycles in $\pi$, we have $k<m$. This forces $k=m-1$. Since $\pi\neq\mathrm{id}$ has $m$ cycles, of which $k=m-1$ are fixed points not equal to $1$, exactly one cycle in $\pi$ has cycle length bigger than $1$. Since $\frac{(n+m-2(k+1))!}{(n-k)!}=\frac{1}{n-m+1}$, the cycle length of that one cycle is $n-m+1$. Moreover, as the other $m-1$ cycles are fixed points different from $1$, this cycle must contain, and hence move, $1$. This implies that $\pi$ is a cycle of the form $(1\, i_1\, i_2\, \ldots\, i_{n-m})$ in $\Symm_n$ where $k=m-1$ and $\ell_1\cdots\ell_m=n-m+1$. Plugging these values into \eqref{equ: star} gives $1$. Since there are $(n-1)(n-2)\cdots (n-(n-m))$ such cycles for every value of $m$ in $\{ 1, \ldots, n-1\}$, the result follows. \end{proof} Now let $s_i = [(1\, i+1); (0,\ldots, 0)]$ and $s_{\bar{1}} = [(1\, 2); (-1,1,0,\ldots, 0)]$. The group $G(m,m,n)$ can be generated by the elements of the set $S^* = \{s_{\bar{1}}, s_1, s_2, \ldots, s_{n-1}\}$. We call $S^*$ the \emph{star generating set} of $G(m,m,n)$. When $n=3$, the star generating set $S^*$ satisfies the exact same presentation as the classical generating set $S^c$. It follows that the enumeration of fully commutative elements by reflection length is the same for both sets. But the actual fully commutative elements are different, as the generators in $S^*$ and in $S^c$ are not identical. Notably, the element $[(13);(0,0,0)]$ is not fully commutative in $G(m,m,3)$ with $S^c$, because it has the pattern $321$ that fully commutative elements avoid.
But in $G(m,m,3)$ with $S^*$, this element is fully commutative, since it is one of the generators. In general, since no two generators in the star generating set commute, full commutativity is equivalent to having a unique reduced expression. In Table \ref{star}, we list the number of fully commutative elements in $G(m,m,4)$ for small values of $m$, by reflection length $\ell$. \begin{table}[!htbp] \centering \scalebox{0.7}{ \begin{tabular}{|cccccc|} \hline $\ell$ & $m=2$ & $m=3$ & $m=4$ & $m=5$ & $m=6$ \\ \hline 0 & 1 & 1 & 1 & 1 & 1 \\ \hline 1 & 4 & 4 & 4 & 4 & 4 \\ \hline 2 & 11 & 12 & 12 & 12 & 12 \\ \hline 3 & 20 & 24 & 26 & 26 & 26 \\ \hline 4 & 20 & 36 & 44 & 46 & 46 \\ \hline 5 & 8 & 44 & 68 & 76 & 78 \\ \hline 6 & 8 & 48 & 92 & 116 & 124 \\ \hline 7 & & 20 & 96 & 152 & 176 \\ \hline 8 & & & 68 & 176 & 232 \\ \hline 9 & & & 28 & 124 & 232 \\ \hline 10 & & & & 60 & 220 \\ \hline 11 & & & & 24 & 128 \\ \hline 12 & & & & & 40 \\ \hline 13 & & & & &20 \\ \hline \hline total &72 &189 &439 & 817&1339 \\ \hline \end{tabular} } \vspace{0.1cm} \caption{f.c. elements in $G(m,m,4)$ with star generating set} \label{star} \end{table}
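The closed formula for $\Symm_n$ in the proposition above, and the totals $5, 16, 65, 326$ in Table \ref{star in symmetric}, can be double-checked by brute force. The sketch below (plain Python; the function names are ours) counts the elements of $\Symm_n$ with a unique reduced word over the star transpositions, by tracking the number of geodesics to each element in a breadth-first search of the Cayley graph; since no two star transpositions commute, these are exactly the fully commutative elements.

```python
from math import prod

def fully_commutative_count(n):
    # Count elements of Sym(n) having a unique reduced word in the star
    # transpositions (1 i), 2 <= i <= n, via geodesic counting in the
    # Cayley graph.  Permutations are 0-indexed tuples of images.
    gens = []
    for i in range(2, n + 1):
        p = list(range(n))
        p[0], p[i - 1] = p[i - 1], p[0]
        gens.append(tuple(p))
    identity = tuple(range(n))
    dist = {identity: 0}
    ways = {identity: 1}   # number of shortest words reaching each element
    frontier, d = [identity], 0
    while frontier:
        next_frontier = []
        for w in frontier:
            for g in gens:
                wg = tuple(w[g[j]] for j in range(n))
                if wg not in dist:
                    dist[wg] = d + 1
                    ways[wg] = ways[w]
                    next_frontier.append(wg)
                elif dist[wg] == d + 1:
                    ways[wg] += ways[w]
        frontier, d = next_frontier, d + 1
    return sum(1 for w in ways if ways[w] == 1)

def closed_formula(n):
    # 1 + sum_{t=1}^{n-1} (n-1)(n-2)...(n-t), as in the proposition.
    return 1 + sum(prod(n - j for j in range(1, t + 1)) for t in range(1, n))

for n in range(3, 7):
    assert fully_commutative_count(n) == closed_formula(n)
```

For $n=3,\dots,6$ both functions return $5$, $16$, $65$ and $326$, in agreement with the table.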
{ "timestamp": "2021-09-22T02:00:51", "yymm": "2109", "arxiv_id": "2109.09773", "language": "en", "url": "https://arxiv.org/abs/2109.09773", "abstract": "Fully commutative elements in types $B$ and $D$ are completely characterized and counted by Stembridge. Recently, Feinberg-Kim-Lee-Oh have extended the study of fully commutative elements from Coxeter groups to the complex setting, giving an enumeration of such elements in $G(m,1,n)$. In this note, we prove a connection between fully commutative elements in $B_n$ and in $G(m,1,n)$, which allows us to characterize fully commutative elements in $G(m,1,n )$ by pattern avoidance. Further, we present a counting formula for such elements in $G(m,1,n)$.", "subjects": "Combinatorics (math.CO)", "title": "A note on fully commutative elements in complex reflection groups", "lm_name": "Qwen/Qwen-72B", "lm_label": "1. YES\n2. YES", "lm_q1_score": 0.9852713839878682, "lm_q2_score": 0.7185943985973772, "lm_q1q2_score": 0.7080104976319676 }
https://arxiv.org/abs/2204.03010
Poset Ramsey number $R(P,Q_n)$. I. Complete multipartite posets
A poset $(P',\le_{P'})$ contains a copy of some other poset $(P,\le_P)$ if there is an injection $f\colon P'\to P$ where for every $X,Y\in P$, $X\le_P Y$ if and only if $f(X)\le_{P'} f(Y)$. For any posets $P$ and $Q$, the poset Ramsey number $R(P,Q)$ is the smallest integer $N$ such that any blue/red coloring of a Boolean lattice of dimension $N$ contains either a copy of $P$ with all elements blue or a copy of $Q$ with all elements red. We denote by $K_{t_1,\dots,t_\ell}$ a complete $\ell$-partite poset, i.e.\ a poset consisting of $\ell$ pairwise disjoint sets $A^i$ of size $t_i$, $1\le i\le \ell$, such that for any $i,j\in\{1,\dots,\ell\}$ and any two $X\in A^{i}$ and $Y\in A^{j}$, $X<Y$ if and only if $i<j$. In this paper we show that $R(K_{t_1,\dots,t_\ell},Q_n)\le n+\frac{(2+o_n(1))\ell n}{\log n}$.
\section{Introduction} Ramsey theory is a field of combinatorics that asks whether in any coloring of the elements in a discrete host structure we find a particular monochromatic substructure. This question offers a lot of variations depending on the chosen sub- and host structure. While originating from a result of Ramsey \cite{R} on uniform hypergraphs from 1930, the most well-known setting considers monochromatic subgraphs in edge-colorings of complete graphs. In contrast, this paper considers a Ramsey-type problem using partially ordered sets, or \textit{posets} for short, as the host structure. A \textit{poset} is a set $P$ which is equipped with a relation $\le_P$ on the elements of $P$ that is transitive, reflexive, and antisymmetric. Whenever it is clear from the context we refer to such a poset $(P,\le_P)$ just as $P$. Given a non-empty set $\ensuremath{\mathcal{X}}$, the poset consisting of all subsets of $\ensuremath{\mathcal{X}}$ equipped with the inclusion relation $\subseteq$ is the \textit{Boolean lattice} $\ensuremath{\mathcal{Q}}(\ensuremath{\mathcal{X}})$ of \textit{dimension} $|\ensuremath{\mathcal{X}}|$. We use $Q_n$ to denote a Boolean lattice with an arbitrary $n$-element ground set. \\ We say that a poset $P_1$ is an \textit{induced subposet} of another poset $P_2$ if $P_1\subseteq P_2$ and for every two $X,Y\in P_1$, $$X \leq_{P_1} Y\text{ if and only if }X \leq_{P_2} Y.$$ A \textit{copy} of $P_1$ in $P_2$ is an induced subposet $P'$ of $P_2$ which is isomorphic to $P_1$.\\ Here we consider color assignments of the elements of a poset $P$ using the colors \textit{blue} and \textit{red}, i.e.\ mappings $c\colon P \rightarrow \{\text{blue}, \text{red}\}$, which we refer to as a \textit{blue/red coloring} of $P$. A poset is colored \textit{monochromatically} if all its elements have the same color. If a poset is colored monochromatically in blue [red], we say that it is a \textit{blue} [\textit{red}] \textit{poset}. 
The elements of a poset $P$ are usually referred to as \textit{vertices}. \\ Axenovich and Walzer \cite{AW} were the first to consider the following Ramsey variant on posets. For posets $P$ and $Q$, the \textit{poset Ramsey number} of $P$ versus $Q$ is given by \begin{multline*} R(P,Q)=\min\{N\in\ensuremath{\mathbb{N}} \colon \text{ every blue/red coloring of $Q_N$ contains either}\\ \text{a blue copy of $P$ or a red copy of $Q$}\}. \end{multline*} As a central focus of research in this area, bounds on the poset Ramsey number $R(Q_n,Q_n)$ were considered and gradually improved, with the best currently known bounds being\linebreak $2n+1 \leq R(Q_n, Q_n) \leq n^2 -n+2$; see, in chronological order, Walzer \cite{W}, Axenovich and Walzer \cite{AW}, Cox and Stolee \cite{CS}, Lu and Thompson \cite{LT}, and Bohman and Peng \cite{BP}. The related off-diagonal setting $R(Q_m,Q_n)$, $m<n$, has also received considerable attention in recent years. When both $m$ and $n$ are large, the best known upper bound is due to Lu and Thompson \cite{LT}, yielding together with a trivial lower bound that $m+n\le R(Q_m,Q_n)\le \big(m-2+o(1)\big)n+m$. When $m$ is fixed and $n$ is large, an exact result is only known in the trivial case $m=1$, where $R(Q_1, Q_n)=n+1$. For $m=2$, after earlier estimates by Axenovich and Walzer \cite{AW} as well as Lu and Thompson \cite{LT}, the best known upper bound is due to Gr\'osz, Methuku, and Tompkins \cite{GMT}, which is complemented by a lower bound shown recently by Axenovich and the present author \cite{QnV}: $$n \left(1 + \frac{1}{15 \log n}\right) \le R(Q_2,Q_n) \le n\left(1 + \frac{2+o(1)}{\log n}\right).$$ In this paper we generalize the upper bound of Gr\'osz, Methuku and Tompkins \cite{GMT} on $R(Q_2,Q_n)$ to a broader class of posets, namely we discuss the poset Ramsey number of a \textit{complete multipartite poset} versus the Boolean lattice $Q_n$.
A \textit{complete $\ell$-partite poset} $K_{t_1,\dots,t_\ell}$ is a poset on $\sum_{i=1}^\ell t_i$ vertices obtained as follows. Consider $\ell$ pairwise disjoint \textit{layers} $A^1,\dots,A^\ell$ of vertices, where layer $A^i$ consists of $t_i$ distinct vertices. Now for any two indices $i,j\in\{1,\dots,\ell\}$ and any vertices $X\in A^i$, $Y\in A^j$, let $X<Y$ if and only if $i<j$. Such a poset can be seen as a complete blow-up of a chain. Note that $Q_2=K_{1,2,1}$. \begin{figure}[H] \centering \includegraphics[scale=0.6]{figs/K342} \caption{Hasse diagram of the complete $3$-partite poset $K_{3,4,2}$} \end{figure} \begin{theorem}\label{thm_multipartite} For $n\in\ensuremath{\mathbb{N}}$, let $\ell\in\ensuremath{\mathbb{N}}$ be an integer such that $\ell=o(\log n)$ and for $i\in\{1,\dots,\ell\}$, let $t_i\in\ensuremath{\mathbb{N}}$ be integers with $\sup_i t_i =n^{o(1)}$. Then $$R(K_{t_1,\dots,t_\ell},Q_n)\le n\left(1+\frac{2+o(1)}{\log n}\right)^\ell\le n+\frac{\big(2+o(1)\big)\ell n}{\log n}.$$ \end{theorem} Here and throughout this paper, asymptotic notation always refers to the parameter $n$, i.e.\ $f(n)=o(g(n))$ if and only if $\frac{f(n)}{g(n)}\to 0$ as $n\to\infty$. For parameters as above, this theorem implies that $R(K_{t_1,\dots,t_\ell},Q_n)=n+o(n)$. If $\ell$ is fixed, we even obtain a bound that is asymptotically tight in the first and second summand. We say that a complete $\ell$-partite poset $K=K_{t_1,\dots,t_\ell}$ is \textit{non-trivial} if it is neither a chain nor an antichain, i.e.\ if $\ell\ge 2$ and $t_i\ge 2$ for some $i\in\{1,\dots,\ell\}$. Observe that such a non-trivial $K$ contains either a copy of $K_{1,2}$ or $K_{2,1}$, so Theorem 2 of \cite{QnV} yields $R(K,Q_n)\ge n + \frac{n}{15 \log n}$. Thus for non-trivial~$K$, $R(K,Q_n)=n+\Theta\left(\frac{n}{\log n}\right)$. For trivial $K$, it is known that $R(K,Q_n)=n+\Theta(1)$.
In detail, if $K$ is a chain on $\ell$ vertices, then $R(K,Q_n)= n+\ell-1$, where the upper bound is a consequence of Lemma \ref{chain_lem} stated later on and the lower bound is easy to see using a layered coloring of the host lattice. If $K$ is an antichain on $t$ vertices, then a trivial lower bound, Lemma 3 in Axenovich and Walzer's \cite{AW}, and Sperner's Theorem imply $n\le R(K,Q_n)\le n+\alpha(t)$ where $\alpha(t)$ is the smallest integer such that $\binom{\alpha(t)}{\lfloor\alpha(t)/2\rfloor}\ge t$. \\ We shall first consider a special complete multipartite poset that we call a \textit{spindle}. Given $r\ge 0$, $s\ge1$ and $t\ge 0$, an \textit{$(r,s,t)$-spindle} $S_{r,s,t}$ is defined as the complete multipartite poset $K_{t'_1,\dots,t'_{r+1+t}}$ where $t'_1,\dots,t'_r=1$ and $t'_{r+1}=s$ and $t'_{r+2},\dots,t'_{r+1+t}=1$. In other words this poset on $r+s+t$ vertices is constructed using an antichain $A$ of size $s$ and two chains $C_r,C_t$ on $r$ and $t$ vertices, respectively, combined such that every vertex of $A$ is larger than every vertex from $C_r$ but smaller than every vertex from $C_t$. \begin{figure}[H] \centering \includegraphics[scale=0.6]{figs/spindle} \caption{Hasse diagram of the spindle $S_{2,5,3}$} \end{figure} \begin{theorem}\label{thm_spindle} Let $r,s,t$ be non-negative integers with $r+t=o(\sqrt{\log n})$ and $1\le s=n^{o(1)}$ for $n\in\ensuremath{\mathbb{N}}$. Then $$R(S_{r,s,t},Q_n)\le n+\frac{\big(1+o(1)\big)(r+t)n}{\log n}.$$ \end{theorem} The spindle $S_{1,s,1}$ is known in the literature as an \textit{$s$-diamond} $D_s$, while the poset $S_{1,s,0}$ is usually referred to as an \textit{$s$-fork} $V_s$. \begin{corollary}\label{cor_fork}\label{cor_diamond} Let $s\in\ensuremath{\mathbb{N}}$ with $s=n^{o(1)}$ for $n\in\ensuremath{\mathbb{N}}$. 
Then $$R(D_s,Q_n)\le n+\frac{\big(2+o(1)\big)n}{\log n}\qquad\text{ and }\qquad R(V_s,Q_n)\le n+\frac{\big(1+o(1)\big)n}{\log n}.$$ \end{corollary} For a positive integer $n\in\ensuremath{\mathbb{N}}$, we use $[n]$ to denote the set $\{1,\dots,n\}$; additionally, let $[0]=\varnothing$. Here `$\log$' always refers to the logarithm with base $2$. We omit floors and ceilings where appropriate. The structure of the paper is as follows. First, in Section \ref{sec_prelim} we introduce some notation and two preliminary lemmas. In Section \ref{sec_multipartite} we show the bound for spindles and afterwards its generalization to arbitrary complete multipartite posets. \section{Preliminaries}\label{sec_prelim} \subsection{Red $Q_n$ versus blue chain}\label{sec_chain} Let $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ be disjoint sets. Then the vertices of the Boolean lattice $\ensuremath{\mathcal{Q}}(\ensuremath{\mathcal{X}}\cup\ensuremath{\mathcal{Y}})$, i.e.\ the subsets of $\ensuremath{\mathcal{X}}\cup\ensuremath{\mathcal{Y}}$, can be partitioned with respect to $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ in the following manner. Every $Z\subseteq\ensuremath{\mathcal{X}}\cup\ensuremath{\mathcal{Y}}$ has an $\ensuremath{\mathcal{X}}$\textit{-part} $X_Z=Z\cap \ensuremath{\mathcal{X}}$ and a $\ensuremath{\mathcal{Y}}$\textit{-part} $Y_Z=Z\cap\ensuremath{\mathcal{Y}}$. In this setting, we refer to $Z$ alternatively as the pair $(X_Z,Y_Z)$. Conversely, for all $X\subseteq\ensuremath{\mathcal{X}}$, $Y\subseteq\ensuremath{\mathcal{Y}}$, the pair $(X,Y)$ corresponds uniquely to the vertex $X\cup Y\in \ensuremath{\mathcal{Q}}(\ensuremath{\mathcal{X}}\cup\ensuremath{\mathcal{Y}})$.
One can think of such pairs as elements of the Cartesian product $2^\ensuremath{\mathcal{X}} \times 2^\ensuremath{\mathcal{Y}}$ which has a canonical bijection to $2^{\ensuremath{\mathcal{X}}\cup\ensuremath{\mathcal{Y}}}=\ensuremath{\mathcal{Q}}(\ensuremath{\mathcal{X}}\cup\ensuremath{\mathcal{Y}})$. Observe that for $X_i\subseteq\ensuremath{\mathcal{X}}, Y_i\subseteq\ensuremath{\mathcal{Y}}$, $i\in[2]$, we have $(X_1,Y_1)\subseteq (X_2,Y_2)$ if and only if $X_1\subseteq X_2$ and $Y_1\subseteq Y_2$. \\ \noindent We shall need the following lemma. \begin{lemma}\label{chain_lem} Let $\ensuremath{\mathcal{X}}$, $\ensuremath{\mathcal{Y}}$ be disjoint sets with $|\ensuremath{\mathcal{X}}|=n$ and $|\ensuremath{\mathcal{Y}}|=k$, for some $n,k\in\ensuremath{\mathbb{N}}$. Let $\ensuremath{\mathcal{Q}}=\ensuremath{\mathcal{Q}}(\ensuremath{\mathcal{X}}\cup\ensuremath{\mathcal{Y}})$ be a blue/red colored Boolean lattice. Fix some linear ordering $\pi=(y_1,\dots,y_k)$ of $\ensuremath{\mathcal{Y}}$ and define $Y(0), \ldots, Y(k)$ by $Y(0)=\varnothing$ and $Y(i)=\{y_1,\dots,y_i\}$ for $i\in[k]$. Then there exists at least one of the following in $\ensuremath{\mathcal{Q}}$: \renewcommand{\labelenumi}{(\alph{enumi})} \begin{enumerate} \item a red copy of $Q_n$, or \item a blue chain of length $k+1$ of the form $(X_0,Y(0)),\dots,(X_{k},Y(k))$ where $X_0 \subseteq X_1 \subseteq\dots \subseteq X_k\subseteq \ensuremath{\mathcal{X}}$. \end{enumerate} \end{lemma} Note that a version of this lemma was used implicitly in a paper of Gr\'osz, Methuku and Tompkins \cite{GMT}. It was stated explicitly and reproved by Axenovich and the author, see Lemma 8 in \cite{QnV}. \subsection{Gluing two posets} By identifying vertices of two posets, they can be ``glued together'' creating a new poset. We will later construct complete multipartite posets by gluing spindles on top of each other using the following definition. 
Given a poset $P_1$ with a unique maximal vertex $Z_1$ and a poset $P_2$ disjoint from $P_1$ with a unique minimal vertex $Z_2$, let $P_1\ensuremath{\!\!\between\!\!} P_2$ be the poset obtained by identifying $Z_1$ and $Z_2$. Formally speaking, $P_1\ensuremath{\!\!\between\!\!} P_2$ is the poset $(P_1\setminus\{Z_1\}) \cup (P_2\setminus \{Z_2\}) \cup \{Z\}$ for some $Z\notin P_1\cup P_2$, where for any two $X,Y\in P_1\ensuremath{\!\!\between\!\!} P_2$, $X<_{P_1\!\between\! P_2}Y$ if and only if one of the following five cases holds: $X,Y\in P_1$ and $X<_{P_1}Y$; $X,Y\in P_2$ and $X<_{P_2} Y$; $X\in P_1$ and $Y \in P_2$; $X\in P_1$ and $Y=Z$; or $X=Z$ and $Y\in P_2$. \begin{figure}[H] \centering \includegraphics[scale=0.55]{figs/gluing} \caption{Creating $P_1\ensuremath{\!\!\between\!\!} P_2$ from $P_1$ and $P_2$} \end{figure} \begin{lemma}\label{lem_gluing} Let $P_1$ be a poset with a unique maximal vertex and let $P_2$ be a poset with a unique minimal vertex. Then $R(P_1\ensuremath{\!\!\between\!\!} P_2,Q_n)\le R(P_1,Q_{R(P_2,Q_n)})$. \end{lemma} \begin{proof} Let $N=R(P_1,Q_{R(P_2,Q_n)})$. Consider a blue/red colored Boolean lattice $\ensuremath{\mathcal{Q}}$ of dimension $N$ which contains no blue copy of $P_1\ensuremath{\!\!\between\!\!} P_2$. We shall prove that there exists a red copy of $Q_n$ in this coloring. We say that a blue vertex $X$ in $\ensuremath{\mathcal{Q}}$ is \textit{$P_1$-clear} if there is no blue copy of $P_1$ in $\ensuremath{\mathcal{Q}}$ containing $X$ as its maximal vertex. Similarly, a blue vertex $X$ is \textit{$P_2$-clear} if there is no blue copy of $P_2$ in $\ensuremath{\mathcal{Q}}$ with minimal vertex $X$. Observe that every blue vertex is $P_1$-clear or $P_2$-clear (or both), since there is no blue copy of $P_1\ensuremath{\!\!\between\!\!} P_2$. \\ We introduce an auxiliary coloring of $\ensuremath{\mathcal{Q}}$ using colors green and yellow. Color all blue vertices which are $P_1$-clear in green and all other vertices in yellow.
Then this coloring does not contain a monochromatic green copy of $P_1$, since otherwise the maximal vertex of such a copy is not $P_1$-clear. Recall that $N=R(P_1,Q_{R(P_2,Q_n)})$, thus $\ensuremath{\mathcal{Q}}$ contains a monochromatic yellow copy of $Q_{R(P_2,Q_n)}$, which we refer to as $\ensuremath{\mathcal{Q}}'$. \\ Consider the original blue/red coloring of $\ensuremath{\mathcal{Q}}'$. Every blue vertex of $\ensuremath{\mathcal{Q}}'$ is yellow in the auxiliary coloring, i.e.\ not $P_1$-clear. Thus every blue vertex of $\ensuremath{\mathcal{Q}}'$ is $P_2$-clear. This coloring of $\ensuremath{\mathcal{Q}}'$ does not contain a blue copy of $P_2$, since otherwise the minimal vertex of such a copy is not $P_2$-clear. Note that the Boolean lattice $\ensuremath{\mathcal{Q}}'$ has dimension $R(P_2,Q_n)$, thus there exists a monochromatic red copy of $Q_n$ in $\ensuremath{\mathcal{Q}}'$, hence also in $\ensuremath{\mathcal{Q}}$. \end{proof} \begin{corollary}\label{cor_gluing} Let $P_1$ be a poset with a unique maximal vertex and let $P_2$ be a poset with a unique minimal vertex. Suppose that there are functions $f_1,f_2\colon \ensuremath{\mathbb{N}}\to \ensuremath{\mathbb{R}}$ with $R(P_1,Q_n)\le f_1(n)n$ and $R(P_2,Q_n)\le f_2(n)n$ for any $n\in\ensuremath{\mathbb{N}}$ and such that $f_1$ is monotonically non-increasing. Then for every $n\in\ensuremath{\mathbb{N}}$, $$R(P_1\ensuremath{\!\!\between\!\!} P_2,Q_n)\le f_1(n)f_2(n)n.$$ \end{corollary} \begin{proof} For an arbitrary $n\in\ensuremath{\mathbb{N}}$, let $n'= f_2(n)n$. Note that for any poset $P$, $R(P,Q_n)\ge n$, so $n'\ge n$. 
Hence $f_1(n')\le f_1(n)$, and Lemma \ref{lem_gluing} provides $$R(P_1\ensuremath{\!\!\between\!\!} P_2,Q_n)\le R(P_1,Q_{n'})\le f_1(n')n'\le f_1(n)f_2(n)n.$$ \end{proof} \section{Proofs of Theorem \ref{thm_spindle} and Theorem \ref{thm_multipartite}} \label{sec_multipartite} \begin{proof}[Proof of Theorem \ref{thm_spindle}] Let $\epsilon=\frac{\log s}{\log n}$, so $s=n^{\epsilon}$ and $\epsilon=o(1)$. We can suppose that $n$ is large\linebreak and hence $\epsilon<1$. Then let $c=\frac{r+t+\delta}{1-\epsilon}$ where $\delta=\frac{2(r+t+1)}{\log n}(\log\log n +r+t)$. Since $r+t=o(\sqrt{\log n})$, $\delta=o(1)$. Let $k=\frac{cn}{\log n}$. We show for sufficiently large $n$ that $R(S_{r,s,t},Q_n)\le n+k$. If $s=1$, $S_{r,s,t}$ is a chain and $R(S_{r,s,t},Q_n)\le n+r+t\le n+k$ by Lemma \ref{chain_lem}, so suppose $s\ge2$.\\ \noindent \textit{Claim:} For sufficiently large $n$, $k!>2^{(r+t)(n+k)}\cdot (s-1)^{k+1}$.\\ Note that $k!>\left(\frac{k}{e}\right)^k=2^{k(\log k -\log e)}$ and $(s-1)^{k+1}=2^{(k+1)\log (s-1)}$. Thus we shall prove that $k(\log k -\log e)>(r+t+\log(s-1))k+\log (s-1)+(r+t)n$. Using that $k=\frac{cn}{\log n}$ and $s-1\le n^{\epsilon}$, we obtain \begin{align*} &k\big(\log k-\log (s-1)\big)-k\big(r+t+\log e\big)-\log (s-1)-\big(r+t\big)n\\ &\ge \frac{cn}{\log n} \big(\log c + \log n -\log\log n -\epsilon \log n\big)-\frac{cn }{\log n}\big(r+t+\log e\big)-\epsilon \log n -\big(r+t\big)n\\ &\ge cn\big(1-\epsilon \big)-\big(r+t\big)n-\frac{cn }{\log n}\big(\log\log n +r+t+\log e\big)-\epsilon \log n \\ &> \delta n- \frac{2(r+t+1)n}{\log n}\big(\log\log n +r+t\big)=0, \end{align*} where the last inequality holds for sufficiently large $n$.\qed \\ Let $\ensuremath{\mathcal{X}}$ and $\ensuremath{\mathcal{Y}}$ be disjoint sets with $|\ensuremath{\mathcal{X}}|=n$ and $|\ensuremath{\mathcal{Y}}|=k$. We consider a blue/red coloring of $\ensuremath{\mathcal{Q}}=\ensuremath{\mathcal{Q}}(\ensuremath{\mathcal{X}}\cup\ensuremath{\mathcal{Y}})$ with no red copy of $Q_n$.
We shall show that there is a monochromatic blue copy of $S_{r,s,t}$ in $\ensuremath{\mathcal{Q}}$. For every linear ordering $\pi=(y_1^\pi,\dots,y_k^\pi)$ of $\ensuremath{\mathcal{Y}}$, Lemma \ref{chain_lem} provides a blue chain $C^\pi$ of the form $Z^\pi_0=(X^\pi_0,\varnothing), Z^\pi_1=(X^\pi_1,\{y_1^{\pi}\}),\dots,Z^\pi_k=(X^\pi_k,\ensuremath{\mathcal{Y}})$, where $X^\pi_i\subseteq \ensuremath{\mathcal{X}}$. \\ For every ordering $\pi$ of $\ensuremath{\mathcal{Y}}$, we consider the $r$ smallest vertices $Z^{\pi}_0,\dots, Z^{\pi}_{r-1}$ and the $t$ largest vertices $Z^{\pi}_{k-t+1},\dots, Z^{\pi}_{k}$ of its corresponding chain $C^{\pi}$, so let $I=\{0,\dots,r-1\}\cup\{k-t+1,\dots,k\}$. Each $Z^{\pi}_{i}$ is a vertex of $\ensuremath{\mathcal{Q}}$, so one of the $2^{n+k}$ distinct subsets of $\ensuremath{\mathcal{X}}\cup\ensuremath{\mathcal{Y}}$. Thus for a fixed $\pi$, there are at most $\left(2^{n+k}\right)^{r+t}$ distinct combinations of the $Z^{\pi}_{i}$, $i\in I$. Recall that $k!>2^{(r+t)(n+k)}\cdot (s-1)^{k+1}$. By the pigeonhole principle, we find a collection $\pi_1,\dots,\pi_m$ of $m= (s-1)^{k+1}+1$ distinct linear orderings of $\ensuremath{\mathcal{Y}}$ such that for all $j\in[m]$ and $i\in I$, $Z^{\pi_{j}}_{i}=Z_i$ for some $Z_i\subseteq\ensuremath{\mathcal{X}}\cup\ensuremath{\mathcal{Y}}$ independent of $j$. In other words, we find many chains with the same $r$ smallest vertices $Z_i$, $i\in \{0,\dots,r-1\}$, and the same $t$ largest vertices $Z_i$, $i\in \{k-t+1,\dots,k\}$. Let $\ensuremath{\mathcal{P}}$ be the poset induced in $\ensuremath{\mathcal{Q}}$ by the chains $C^{\pi_j}$, $j\in[m]$. \\ If there is an antichain $A$ of size $s$ in $\ensuremath{\mathcal{P}}$, then none of the vertices $Z_i$, $i\in I$, is in $A$, because they are contained in every chain $C^{\pi_j}$ and therefore comparable to all other vertices in $\ensuremath{\mathcal{P}}$. Now $A$ together with the vertices $Z_i$, $i\in I$, forms a copy of $S_{r,s,t}$ in $\ensuremath{\mathcal{P}}$.
Recall that all vertices in every $C^{\pi_j}$ are blue, i.e.\ $\ensuremath{\mathcal{P}}$ is monochromatic blue. Thus we obtain a blue copy of $S_{r,s,t}$ in $\ensuremath{\mathcal{Q}}$, so we are done. From now on, suppose that there is no antichain of size $s$ in~$\ensuremath{\mathcal{P}}$. By Dilworth's Theorem we obtain $s-1$ chains $\ensuremath{\mathcal{C}}_1,\dots,\ensuremath{\mathcal{C}}_{s-1}$ which cover all vertices of $\ensuremath{\mathcal{P}}$, i.e.\ all vertices of the $C^{\pi_j}$'s. Note that the chains $\ensuremath{\mathcal{C}}_i$ might consist of significantly more vertices than the $(k+1)$-element chains $C^{\pi_j}$. \\ Now we consider the restriction to $\ensuremath{\mathcal{Y}}$ of each vertex in $\ensuremath{\mathcal{P}}$, i.e.\ the sets $Z^{\pi}_{i}\cap\ensuremath{\mathcal{Y}}$, in order to apply the pigeonhole principle once again. Assume for a contradiction that for some $i\in[s-1]$ there are $Z,Z'\in\ensuremath{\mathcal{C}}_i$ with $|Z\cap\ensuremath{\mathcal{Y}}|=|Z'\cap\ensuremath{\mathcal{Y}}|$ but $Z\cap\ensuremath{\mathcal{Y}}\neq Z'\cap\ensuremath{\mathcal{Y}}$. This implies that $Z\cap\ensuremath{\mathcal{Y}}\nsubseteq Z'\cap\ensuremath{\mathcal{Y}}$ and $Z\cap\ensuremath{\mathcal{Y}}\nsupseteq Z'\cap\ensuremath{\mathcal{Y}}$, so $Z$ and $Z'$ are incomparable, a contradiction as they are both contained in the chain $\ensuremath{\mathcal{C}}_i$. Consequently, there is only at most one $\ell$-element set $Y_i^\ell\subseteq \ensuremath{\mathcal{Y}}$, $\ell\in\{0,\dots,k\}$, for which there exists a $Z\in\ensuremath{\mathcal{C}}_i$ with $Z\cap\ensuremath{\mathcal{Y}}=Y_i^\ell$. \\ Note that for all $j\in[m]$ and for all $\ell\in\{0,\dots,k\}$, $|Z^{\pi_j}_\ell\cap \ensuremath{\mathcal{Y}}|=\ell$, i.e.\ $Z^{\pi_j}_\ell\cap \ensuremath{\mathcal{Y}}=Y_i^\ell$ for some $i\in[s-1]$. 
In other words, for fixed $j$, each of the $k+1$ sets $Z^{\pi_j}_\ell\cap \ensuremath{\mathcal{Y}}$, $\ell\in\{0,\dots,k\}$, is equal to one of at most $s-1$ $Y_i^\ell$'s. Recall that we have chosen $m=(s-1)^{k+1}+1$ distinct linear orderings $\pi_j$ of $\ensuremath{\mathcal{Y}}$. Using the pigeonhole principle we find two indices $j_1,j_2$ such that $Z^{\pi_{j_1}}_\ell\cap \ensuremath{\mathcal{Y}}=Z^{\pi_{j_2}}_\ell\cap \ensuremath{\mathcal{Y}}$ for all $\ell\in\{0,\dots,k\}$. This implies that $y^{\pi_{j_1}}_\ell=y^{\pi_{j_2}}_\ell$, i.e.\ $\pi_{j_1}$ and $\pi_{j_2}$ are equal. But this contradicts the fact that all orderings $\pi_j$ are distinct. \end{proof} Now we extend Theorem \ref{thm_spindle} to general complete multipartite posets using Corollary \ref{cor_gluing}. \begin{proof}[Proof of Theorem \ref{thm_multipartite}] Let $t=\sup_i t_i$. Then Theorem \ref{thm_spindle} shows the existence of a function $\epsilon(n)=o(1)$ with $R(K_{1,t,1},Q_n)\le n\left(1 + \frac{2+\epsilon(n)}{\log n}\right)$. We can suppose that $\epsilon$ is monotonically non-increasing by replacing $\epsilon(n)$ with $\sup_{N\ge n} \max\{\epsilon(N),0\}$ where necessary. In order to prove the theorem, we show a stronger statement using the auxiliary $(2\ell+1)$-partite poset $P=K_{1,t,1,t,\dots,1,t,1}$. Note that $K_{t_1,\dots,t_\ell}$ is an induced subposet of $P$, thus $R(K_{t_1,\dots,t_\ell},Q_n)\le R(P,Q_n)$. In the following we verify that $$R(P,Q_n)\le n\left(1+\frac{2+\epsilon(n)}{\log n}\right)^\ell.$$ We use induction on $\ell$. If $\ell=1$, then $P=K_{1,t,1}$, so $R(P,Q_n)\le n\left(1 + \frac{2+\epsilon(n)}{\log n}\right)$. If $\ell\ge 2$, we ``deconstruct'' the poset into two parts. Consider $P_1=K_{1,t,1}$ and the complete $(2\ell-1)$-partite poset $P_2=K_{1,t,1,t,\dots,1,t,1}$. Then $P_1$ has a unique maximal vertex and $P_2$ has a unique minimal vertex. Observe that $P_1\ensuremath{\!\!\between\!\!} P_2=P$.
The base case and the induction hypothesis give $$R(P_1,Q_n)\le n\left(1 + \frac{2+\epsilon(n)}{\log n}\right)\text{ and }R(P_2,Q_n)\le n\left(1+\frac{2+\epsilon(n)}{\log n}\right)^{\ell-1}.$$ Now Corollary \ref{cor_gluing} provides the required bound. \end{proof} \section{Concluding remarks} In this paper we considered $R(K,Q_n)$ where $K$ is a complete multipartite poset. Although the presented bounds hold if the parameters of $K$ depend on $n$, the original motivation for these results concerned the case where $K$ is fixed, i.e.\ independent of $n$: After $R(Q_2,Q_n)$ was bounded asymptotically sharply by Gr\'osz, Methuku and Tompkins \cite{GMT} and Axenovich and the present author \cite{QnV}, the examination of $R(Q_3,Q_n)$ is an obvious follow-up question. The best known upper bound is due to Lu and Thompson \cite{LT}, while the best known lower bound can be deduced from a bound on $R(K_{1,2},Q_n)$ in \cite{QnV}, $$n+\tfrac{n}{15\log n}\le R(K_{1,2},Q_n)\le R(Q_3,Q_n)\le \tfrac{37}{16}n+\tfrac{39}{16}.$$ In order to find better bounds and answer the question whether or not $R(Q_3,Q_n)=n+o(n)$, the consideration of $R(P,Q_n)$ for small posets $P$ might prove helpful as building blocks for Boolean lattices. For example, $Q_3$ can be partitioned into a copy of $K_{1,3}$ and a copy of $K_{3,1}$ which interact in a proper way. Both of these posets are complete $2$-partite posets with, as shown here, Ramsey numbers bounded by $$R(K_{1,3},Q_n)=R(K_{3,1},Q_n)=n+\Theta\left(\frac{n}{\log n}\right).$$ However, it remains open how to use our estimate to tighten the bounds on $R(Q_3,Q_n)$. \\ \noindent \textbf{Acknowledgments:}~\quad The author would like to thank Maria Axenovich for helpful\linebreak discussions and comments on the manuscript.
https://arxiv.org/abs/0912.0573
Enumeration by kernel positions for strongly Bernoulli type truncation games on words
We find the winning strategy for a class of truncation games played on words. As a consequence of the present author's recent results on some of these games we obtain new formulas for Bernoulli numbers and polynomials of the second kind and a new combinatorial model for the number of connected permutations of given rank. For connected permutations, the decomposition used to find the winning strategy is shown to be bijectively equivalent to King's decomposition, used to recursively generate a transposition Gray code of the connected permutations.
\section*{Introduction} In a recent paper~\cite{Hetyei-EKP} the present author introduced a class of progressively finite games played on ranked posets, where each move of the winning strategy is unique and the positions satisfy the following uniformity criterion: each position of a given rank may be reached from the same number of positions of a given higher rank in a single move. As a consequence, the kernel positions of a given rank may be counted by subtracting from the number of all positions the appropriate multiples of the kernel positions of lower ranks. The main example in~\cite{Hetyei-EKP} is the {\em original Bernoulli game}, a truncation game played on pairs of words of the same length, for which the number of kernel positions of rank $n$ is a signed factorial multiple of the Bernoulli number of the second kind $b_n$. Similarly to this game, most examples mentioned in~\cite{Hetyei-EKP} are also truncation games played on words, where the partial order is defined by taking initial segments and the rank is determined by the length of the words involved. In this paper we consider a class of {\em strongly Bernoulli type truncation games} played on words, for which we do not require the uniformity condition on the rank to be satisfied. We show that for such games, the winning strategy may be found by decomposing each kernel position as a concatenation of {\em elementary kernel factors}. This decomposition is unique. All truncation games considered in~\cite{Hetyei-EKP} (including the ones played on pairs or triplets of words) are isomorphic to a strongly Bernoulli type truncation game. For most of these examples, the elementary kernel factors of a given type are also easy to enumerate. Thus we may obtain explicit summation formulas and non-alternating recurrences for numbers which were expressed in~\cite{Hetyei-EKP} as coefficients in a generating function or by alternating recurrences. 
The explicit summation formulas are obtained by considering the entire unique decomposition of each kernel position; the non-alternating recurrence is obtained by considering the removal of the last elementary kernel factor only. Thus we find some new identities for the Bernoulli polynomials and numbers of the second kind, and shed new light on King's~\cite{King} decomposition of ``indecomposable'' permutations. The paper is structured as follows. After the Preliminaries, the main unique decomposition theorem is stated in Section~\ref{s_Btg}. In the subsequent sections we consider games to which this result is applicable: we show they are isomorphic to strongly Bernoulli type truncation games, we find formulas expressing their elementary kernel factors of a given type, and use these formulas to express the number of kernel positions as an explicit sum and by a non-alternating recurrence. Most detail is given for the original Bernoulli game in Section~\ref{s_ob2}; omitted details in other sections are replaced by references to the appropriate part of this section. As a consequence of our analysis of the original Bernoulli game, we obtain an explicit summation formula for the Bernoulli numbers of the second kind, expressing them as a sum of entries of the same sign. We also obtain a non-alternating recurrence for their absolute values. In Section~\ref{s_MR} we consider a restriction of the original Bernoulli game to a set of positions, where the kernel positions are identifiable with the {\em connected} or {\em indecomposable} permutations forming an algebra basis of the Malvenuto-Reutenauer Hopf algebra~\cite{Malvenuto-Reutenauer}. For these the recurrence obtained by the removal of the last elementary kernel factor is numerically identical to the recurrence that may be found in King's~\cite{King} recursive construction of a transposition Gray code for the connected permutations.
We show that this is not a coincidence: there is a bijection on the set of permutations, modulo which King's recursive step corresponds to the removal of the last elementary kernel factor in the associated {\em place-based non-inversion tables} (a variant of the usual inversion tables). Our result inspires another systematic algorithm to list all connected permutations of a given order, and a new combinatorial model for the numbers of connected permutations of order $n$, in which this number arises as the total weight of all permutations of order $n-2$, such that the highest weight is associated to the permutations having the most {\em strong fixed points} (being thus the ``least connected''). Section~\ref{s_pb2} contains the consequences of our main result for Bernoulli polynomials of the second kind. Here we observe that we obtain the coefficients of these polynomials when we expand them in the basis $\{\binom{x+1}{n}\::\: n\geq 0\}$, and obtain a new formula for the Bernoulli numbers of the second kind. Finally, in Section~\ref{s_fB} we consider the {\em flat Bernoulli game}, whose kernel positions have the generating function $t/((1-t)(1-\ln(1-t)))$, and conclude the section with an intriguing conjecture that for a long random initial word a novice player could not decrease the chance of winning below $50\%$ by simply removing the last letter in the first move. \section{Preliminaries} \subsection{Progressively finite games} A progressively finite two-player game is a game whose positions may be represented by the vertices of a directed graph that contains no directed cycle or infinite path; the edges represent the valid moves. Thus the game always ends after a finite number of moves. The players take alternate turns to move along a directed edge to a next position, until one of them reaches a {\em winning position} with no edge going out: the player who moves into this position is declared the winner, as the next player is unable to move.
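For a game graph that is small enough to store explicitly, this description can be turned into a direct computation: a position is a kernel position exactly when every valid move from it (possibly none) leads to a non-kernel position. A minimal Python sketch; the subtraction game used to exercise it is only an illustrative stand-in, not one of the games studied in this paper.

```python
def kernel_positions(moves):
    """moves: dict mapping each position to the list of positions reachable
    in one move; the graph is assumed acyclic (a progressively finite game).
    A position is a kernel position iff all of its moves (possibly none)
    lead to non-kernel positions."""
    memo = {}

    def is_kernel(p):
        if p not in memo:
            memo[p] = all(not is_kernel(q) for q in moves[p])
        return memo[p]

    return {p for p in moves if is_kernel(p)}

# Illustrative stand-in: from n one may remove 1 or 2; the kernel positions
# are the multiples of 3, as in the usual subtraction-game analysis.
moves = {n: [m for m in (n - 1, n - 2) if m >= 0] for n in range(7)}
```

Running `kernel_positions(moves)` on this stand-in returns `{0, 3, 6}`.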
The winning strategy for a progressively finite game may be found by calculating the {\em Grundy number} (or Sprague-Grundy number) of each position; the method is well known, and a sample reference is~\cite[Chapter 11]{Tucker}. The positions with Grundy number zero are called {\em kernel positions}. A player has a winning strategy exactly when he or she is allowed to start from a non-kernel position. All games considered in this paper are progressively finite. \subsection{The original Bernoulli game and its generalizations} \label{s_b2} In~\cite{Hetyei-EKP} the present author introduced the {\em original Bernoulli game} as the following progressively finite two-player game. The positions of rank $n>0$ in the game are all pairs of words $(u_1\cdots u_n,v_1\cdots v_n)$ such that \begin{itemize} \item[(i)] the letters $u_1,\ldots,u_n$ and $v_1,\ldots,v_n$ are positive integers; \item[(ii)] for each $i\geq 1$ we have $1\leq u_i, v_i\leq i$. \end{itemize} A valid move consists of replacing the pair $(u_1\cdots u_n,v_1\cdots v_n)$ with $(u_1\cdots u_m,v_1\cdots v_m)$ for some $m\geq 1$ satisfying $u_{m+1}\leq v_j$ for $j=m+1,\ldots, n$. The name of the game refers to the following fact~\cite[Theorem 2.2]{Hetyei-EKP}. \begin{theorem} \label{T_b2} For $n\geq 1$, the number $\kappa_n$ of kernel positions of rank $n$ in the original Bernoulli game is given by $$ \kappa_n=(-1)^{n-1} (n+1)! b_n, $$ where $b_n$ is the $n$-th Bernoulli number of the second kind. \end{theorem} Here the Bernoulli number of the second kind $b_n$ is obtained by substituting zero into the Bernoulli polynomial of the second kind $b_n(x)$, given by the generating function \begin{equation} \label{E_b2} \sum_{n=0}^{\infty} \frac{b_n(x)}{n!} t^n=\frac{t(1+t)^x}{\ln(1+t)}, \end{equation} see Roman~\cite[p.\ 116]{Roman}. Note that~\cite[p. 114]{Roman} Jordan's~\cite[p.\ 279]{Jordan} earlier definition of the Bernoulli polynomial of the second kind $\phi_n(x)$ is obtained by dividing $b_n(x)$ by $n!$.
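Both $b_n$ and the kernel counts of Theorem~\ref{T_b2} are easy to generate from the case $x=0$ of (\ref{E_b2}) by inverting the power series $\ln(1+t)/t$ with exact rational arithmetic. A short Python sketch (the function name is ours):

```python
from fractions import Fraction
from math import factorial

def bernoulli2(N):
    """Coefficients c[n] = b_n/n! of t/ln(1+t) = sum_{n>=0} c[n] t^n."""
    # ln(1+t)/t = sum_{k>=0} (-1)^k t^k/(k+1); take the series reciprocal.
    a = [Fraction((-1) ** k, k + 1) for k in range(N + 1)]
    c = [Fraction(1)]                  # reciprocal starts with 1/a[0] = 1
    for n in range(1, N + 1):
        c.append(-sum(a[k] * c[n - k] for k in range(1, n + 1)))
    return c

c = bernoulli2(4)
b = [c[n] * factorial(n) for n in range(5)]
# Theorem T_b2: kappa_n = (-1)^(n-1) (n+1)! b_n
kappa = [(-1) ** (n - 1) * factorial(n + 1) * b[n] for n in range(1, 5)]
```

This reproduces $b_1,\ldots,b_4=\tfrac12,-\tfrac16,\tfrac14,-\tfrac{19}{30}$ (so $b_4/4!=-19/720$, matching Jordan's table cited below) and the kernel counts $\kappa_1,\ldots,\kappa_4=1,1,6,76$.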
The proof of Theorem~\ref{T_b2} depends on a few simple observations which were generalized in~\cite{Hetyei-EKP} to a class of {\em Bernoulli type games on posets} (see \cite[Definition 3.1]{Hetyei-EKP}). The set of positions $P$ in these games is a partially ordered set with a unique minimum element $\widehat{0}$ and a rank function $\rho: P\rightarrow {\mathbb N}$ such that for each $n\geq 0$ the set $P_n$ of positions of rank $n$ has finitely many elements. The valid moves satisfy the following criteria: \begin{itemize} \item[(i)] Each valid move is from a position of higher rank to a position of lower rank. The set of positions reachable from a single position is a chain. \item[(ii)] If $y_1$ and $y_2$ are both reachable from $x$ in a single move and $y_1<y_2$ then $y_1$ is reachable from $y_2$ in a single move. \item[(iii)] For all $m<n$ there is a number $\gamma_{m,n}$ such that each $y$ of rank $m$ may be reached from exactly $\gamma_{m,n}$ elements of rank $n$ in a single move. \end{itemize} For such games it was shown in~\cite[Proposition 3.3]{Hetyei-EKP} that the numbers $\kappa_n$ of kernel positions of rank $n$ satisfy the recursion formula \begin{equation} \label{E_gkrec} |P_n|=\kappa_n+\sum_{m=0}^{n-1} \kappa_{m}\cdot \gamma_{m,n}. \end{equation} \section{Winning a strongly Bernoulli type truncation game} \label{s_Btg} Let $\Lambda$ be an alphabet and let us denote by $\Lambda^*$ the free monoid generated by $\Lambda$, i.e., set $$\Lambda^*:=\{v_1\cdots v_n \::\: n\geq 0, \forall i (v_i\in \Lambda)\}.$$ Note that $\Lambda^*$ contains the empty word $\varepsilon$. \begin{definition} Given a subset $M\subseteq \Lambda^*\setminus \{\varepsilon\}$, we define the {\em truncation game induced by $M$} as the game whose positions are the elements of $\Lambda^*$, and whose valid moves consist of all truncations $v_1\cdots v_n\rightarrow v_1\cdots v_i$ such that $v_{i+1}\cdots v_n\in M$.
\end{definition} Note that $\varepsilon\not\in M$ guarantees that the truncation game induced by $M$ is progressively finite; we may define the {\em rank} of each position as the length of the word. The rank decreases with each valid move. \begin{definition} Given $M\subset \Lambda^*\setminus \{\varepsilon\}$ and $P\subseteq \Lambda^*$, we say that $P$ is {\em $M$-closed} if for all $v_1\cdots v_n\in\Lambda^*\setminus\{\varepsilon\}$, $v_1\cdots v_n\in P$ and $v_{i+1}\cdots v_n\in M$ imply $v_1\cdots v_i\in P$. For an $M$-closed $P$, the {\em restriction of the truncation game induced by $M$ to $P$} is the game whose positions are the elements of $P$ and whose valid moves consist of all truncations $v_1\cdots v_n\rightarrow v_1\cdots v_i$ such that $v_{i+1}\cdots v_n\in M$ and $v_1\cdots v_n\in P$. We denote this game by $(P,M)$, and call it the {\em truncation game induced by $M$ on $P$}. \end{definition} Clearly the definition of being $M$-closed is equivalent to saying that the set $P$ is closed under making valid moves. \begin{definition} \label{D_Btm} We say that $M\subset \Lambda^*\setminus\{\varepsilon\}$ {\em induces a Bernoulli type truncation game} if for all pairs of words $\underline{u}, \underline{v}\in \Lambda^*\setminus\{\varepsilon\}$, $\underline{u}\underline{v}\in M$ and $\underline{v}\in M$ imply $\underline{u}\in M$. If $M$ is also closed under taking nonempty initial segments, i.e., $v_1\cdots v_n\in M$ implies $v_1\cdots v_m\in M$ for all $m\in\{1,\ldots,n\}$ then we say that $M$ induces a {\em strongly Bernoulli type truncation game}. If $M$ induces a (strongly) Bernoulli type truncation game, we call also $(P,M)$ a (strongly) Bernoulli type truncation game for each $M$-closed $P\subseteq \Lambda^*$. \end{definition} Every strongly Bernoulli type truncation game is also a Bernoulli type truncation game. The converse is not true: consider for example the set $M$ of all words of positive even length.
It is easy to see that the truncation game induced by $M$ is Bernoulli type, but it is not strongly Bernoulli type since $M$ is not closed under taking initial segments of odd length. \begin{remark} The definition of a Bernoulli type truncation game is {\em almost} a special case of the Bernoulli type games on posets defined in~\cite[Definition 3.1]{Hetyei-EKP}. Each $M$-closed $P\subseteq \Lambda^*$ is partially ordered by the relation $v_1\cdots v_m<v_1\cdots v_n$ for all $m<n$, the unique minimum element of this poset is $\varepsilon$, and the length function is a rank function for this partial order. For this poset and rank function, the set of valid moves satisfies conditions (i) and (ii) listed in Subsection~\ref{s_b2}. Only the ``uniformity'' condition (iii) and the finiteness of $|P_n|$ do not need to be satisfied. These conditions were used in~\cite{Hetyei-EKP} to prove equation (\ref{E_gkrec}) and count the kernel positions of rank $n$ ``externally''. In this section we will show that the kernel positions of a strongly Bernoulli type truncation game on words may be described ``internally'' in a manner that will allow their enumeration when each $|P_n|$ is finite. The question whether the results presented in this section may be generalized to all Bernoulli type truncation games remains open. All examples of Bernoulli games played on words in~\cite{Hetyei-EKP} are isomorphic to strongly Bernoulli type truncation games; we will prove this for most of them in this paper, and leave the remaining examples to the reader. Together with the results in~\cite{Hetyei-EKP}, we thus obtain two independent ways to count the same kernel positions in these games. Comparing the results in~\cite{Hetyei-EKP} with the results in the present paper yields explicit formulas for the coefficients in the Taylor expansion of certain functions. \end{remark} In the rest of the section we set $P=\Lambda^*$ and just find the winning strategy for the truncation game induced by $M$.
Only the formulas counting the kernel positions will change when we change the set $P$ in the subsequent sections; the decomposition of the kernel positions will not. First we define some {\em elementary kernel positions}, starting from which the second player may win after at most one move by the first player. \begin{definition} \label{D_ekp} The word $v_1\cdots v_n \in \Lambda^*\setminus\{\varepsilon\}$ is {\em an elementary kernel position} if it satisfies $v_1\cdots v_n\not \in M$, but for all $m<n$ we have $v_1\cdots v_m\in M$. \end{definition} In particular, for $n=1$, $v_1$ is an elementary kernel position if and only if $v_1\not\in M$. Our terminology is justified by the following remark and lemma. \begin{remark} \label{R_ekp1} A position $v_1$ is a winning position if and only if it is an elementary kernel position. Otherwise it is not a kernel position at all. \end{remark} \begin{lemma} \label{L_ekp2} For $n>1$, starting from an elementary kernel position $v_1\cdots v_n$, the first player is either unable to move, or is able to move only to a position where the second player may win in a single move. \end{lemma} \begin{proof} There is nothing to prove if the first player is unable to move. Otherwise, by $v_1\cdots v_n\not\in M$, the first player is unable to move to the empty word. Thus, after his or her move, we arrive at a position $v_1\cdots v_m$ where $1\leq m\leq n-1$. Thus $v_1\cdots v_m\in M$ holds, and the second player may now move to the empty word right away. \end{proof} Next we show that the set of kernel positions in a strongly Bernoulli type truncation game on $\Lambda^*$ is closed under the {\em concatenation} operation. \begin{proposition} \label{P_conc} Let $\underline{u}:=u_1\cdots u_m$ be a kernel position of length $m\geq 1$ in a strongly Bernoulli type truncation game induced by $M$. Then an arbitrary position $\underline{v}:=v_1\cdots v_n$ of length $n\geq 1$ is a kernel position if and only if the concatenation $\underline{u}\underline{v}$ is also a kernel position.
\end{proposition} \begin{proof} Assume first that $\underline{u}\underline{v}$ is a kernel position. We instruct the second player to play the winning strategy that exists for $\underline{v}$ as long as the length of the word truncated from $\underline{u}\underline{v}$ at the beginning of his or her move is greater than $m$. For words longer than $m$, the validity of a move is determined without regard to the letters in the first $m$ positions. By playing the winning strategy for $\underline{v}$ as long as possible, the second player is able to force the first player into a position where the first player is either unable to move, or will be the first to move to a word of length less than $m$, say $u_1\cdots u_k$. The validity of this move implies $u_{k+1}\cdots u_m v_1\cdots v_i\in M$ for some $i\geq 0$. By the strong Bernoulli property we obtain $u_{k+1}\cdots u_m\in M$, so moving from $u_1\cdots u_m$ to $u_1\cdots u_k$ is also a valid move. We may thus pretend that the first player just made the first move from $u_1\cdots u_m$ and the second player may win by following the winning strategy that exists for $\underline{u}$. For the converse, assume that $\underline{v}$ is not a kernel position. In this case we may instruct the first player to play the strategy associated to $\underline{v}$ as long as possible, forcing the second player into a position where he or she is either unable to move, or ends up making a move equivalent to a first move starting from $\underline{u}$. Now the original first player becomes the second player in this subsequent game, and is able to win. Therefore, in this case the concatenation $\underline{u}\underline{v}$ is not a kernel position either. \end{proof} Using all results in this section we obtain the following structure theorem.
\begin{theorem} \label{T_sBt} A word $\underline{v}\in \Lambda^*\setminus\{\varepsilon\}$ is a kernel position in a strongly Bernoulli type truncation game, if and only if it may be obtained by the concatenation of one or several elementary kernel positions. Such a decomposition, if it exists, is unique. \end{theorem} \begin{proof} The elementary kernel positions are kernel positions by Remark~\ref{R_ekp1} and Lemma~\ref{L_ekp2}. Repeated use of Proposition~\ref{P_conc} yields that a word obtained by concatenating several elementary kernel positions is also a kernel position. For the converse assume that $\underline{v}:=v_1\cdots v_n$ is a kernel position. We prove by induction on $n$ that this position is either an elementary kernel position or may be obtained by concatenating several elementary kernel positions. Let $m$ be the least index for which $v_1\cdots v_m\not\in M$ holds; such an $m$ exists, since otherwise the first player is able to move to $\varepsilon$ and win in the first move. It follows from the definition that $v_1\cdots v_m$ is an elementary kernel position. If $m=n$ then we are done, otherwise applying Proposition~\ref{P_conc} to $v_1\cdots v_n=(v_1\cdots v_m) \cdot (v_{m+1}\cdots v_n)$ yields that $v_{m+1}\cdots v_n$ must be a kernel position. We may apply the induction hypothesis to $v_{m+1}\cdots v_n$. The uniqueness of the decomposition may also be shown by induction on $n$. Assume that $v_1\cdots v_n$ is a kernel position and thus arises as a concatenation of one or several elementary kernel positions. Let $v_1\cdots v_m$ be the leftmost factor in this concatenation. By Definition~\ref{D_ekp}, $m$ is the least index such that $v_1\cdots v_m\not\in M$ is satisfied. This determines the leftmost factor uniquely. Now we may apply our induction hypothesis to $v_{m+1}\cdots v_n$.
\end{proof} \section{The original Bernoulli game} \label{s_ob2} When we want to apply Theorem~\ref{T_sBt} to the original Bernoulli game, we encounter two minor obstacles. The first obstacle is that the rule defining a valid move from $(u_1\cdots u_n, v_1\cdots v_n)$ makes an exception for the letters $u_1=v_1=1$, and does not allow their removal. The second obstacle is that the game is defined on pairs of words. Both problems may be easily remedied by changing the alphabet to $\Lambda={\mathbb P}\times {\mathbb P}\times {\mathbb P}={\mathbb P}^3$ where ${\mathbb P}$ is the set of positive integers. \begin{lemma} \label{L_oBiso} The original Bernoulli game is isomorphic to the strongly Bernoulli type truncation game induced by $$ M=\{(p_1,u_1,v_1)\cdots (p_n,u_n,v_n)\::\: p_1\neq 1, u_1\leq v_1, \ldots, v_n\}, $$ on the set of positions $$ P=\{(1,u_1,v_1)\cdots (n,u_n,v_n)\::\: 1\leq u_i, v_i\leq i\}\subset ({\mathbb P}^3)^*. $$ The isomorphism is given by sending each pair of words $(u_1\cdots u_n,v_1\cdots v_n)\in ({\mathbb P}^2)^*$ into the word $(1,u_1,v_1)(2,u_2,v_2)\cdots (n,u_n,v_n)\in ({\mathbb P}^3)^*$. \end{lemma} Theorem~\ref{T_sBt} provides a new way of counting the kernel positions of rank $n$ in the game $(P,M)$ defined in Lemma~\ref{L_oBiso}. Each kernel position $(1,u_1,v_1)\cdots (n,u_n,v_n)$ may be uniquely written as a concatenation of elementary kernel positions. Note that these elementary kernel positions do not need to belong to the set of valid positions $P$. However, we are able to independently describe and count all elementary kernel positions that may appear in a concatenation factorization of a valid kernel position $(1,u_1,v_1)\cdots (n,u_n,v_n)$ and contribute the segment $(i,u_i,v_i)\cdots (j,u_j,v_j)$ to it. We call such a word an {\em elementary kernel factor of type $(i,j)$} and denote the number of such factors by $\kappa(i,j)$.
Note that for $i=1$ we must have $j=1$ and $(1,1,1)$ is the only elementary kernel factor of type $(1,1)$. Thus we have $\kappa(1,1)=1$. \begin{lemma} \label{L_ekf} For $2\leq i\leq j$, a word $(i,u_i,v_i)\cdots (j,u_j,v_j)\in ({\mathbb P}^3)^*$ is an elementary kernel factor of type $(i,j)$ if and only if it satisfies the following criteria: \begin{itemize} \item[(i)] for each $k\in \{i,i+1,\ldots,j\}$ we have $1\leq u_k, v_k\leq k$; \item[(ii)] we have $u_i>v_j$; \item[(iii)] for all $k\in \{i,i+1,\ldots,j-1\}$ we have $u_i\leq v_k$. \end{itemize} \end{lemma} In fact, condition (i) states the requirement for a valid position for the letters at the positions $i,\ldots, j$, whereas conditions (ii) and (iii) reiterate the appropriately shifted variant of the definition of an elementary kernel position. A word $(1,u_1, v_1)\cdots (n,u_n,v_n)$ that arises by concatenating $(1,u_1,v_1)\cdots (i_1,u_{i_1}, v_{i_1})$, $(i_1+1,u_{i_1+1},v_{i_1+1})\cdots (i_2,u_{i_2},v_{i_2})$, and so on, $(i_k+1,u_{i_k+1},v_{i_k+1})\cdots (n,u_n,v_n)$ belongs to $P$ if and only if each factor $(i_s+1,u_{i_s+1},v_{i_s+1})\cdots (i_{s+1},u_{i_{s+1}},v_{i_{s+1}})$ (where $0\leq s\leq k$, $i_0=0$ and $i_{k+1}=n$) satisfies conditions (i) and (ii) in Lemma~\ref{L_ekf} with $i=i_s+1$ and $j=i_{s+1}$. We obtain the unique factorization as a concatenation of elementary kernel positions if and only if each factor $(i_s+1,u_{i_s+1},v_{i_s+1})\cdots (i_{s+1},u_{i_{s+1}},v_{i_{s+1}})$ also satisfies condition (iii) in Lemma~\ref{L_ekf} with $i=i_s+1$ and $j=i_{s+1}$. Using the description given in Lemma~\ref{L_ekf} it is easy to calculate the numbers $\kappa(i,j)$. \begin{lemma} \label{L_ekfc} For $2\leq i\leq j$, the number of elementary kernel factors of type $(i,j)$ is $$ \kappa(i,j)=(j-i)!^2\binom{j}{i}\binom{j}{i-2}. $$ \end{lemma} \begin{proof} There is no other restriction on $u_{i+1},\ldots,u_{j}$ than the inequality given in condition (i) of Lemma~\ref{L_ekf}.
These numbers may be chosen in $(i+1)(i+2)\cdots j=j!/i!$ ways. Let us denote the value of $u_i$ by $u$; it must satisfy $1\leq u\leq i$. However, $v_j<u_i$ may only be satisfied if $u$ is at least $2$. In that case $v_j$ may be selected in $(u-1)$ ways, and each $v_k$ (where $i\leq k\leq j-1$) may be selected in $(k+1-u)$ ways (since $u_i\leq v_k\leq k$). Thus the values of $v_i,\ldots,v_j$ may be selected in $(u-1)\cdot (i+1-u)(i+2-u)\cdots (j-u)=(u-1)\cdot (j-u)!/(i-u)!$ ways. We obtain the formula $$ \kappa(i,j)=\sum_{u=2}^{i} (u-1)\cdot \frac{j!(j-u)!}{i!(i-u)!} = (j-i)!^2\binom{j}{i} \sum_{u=2}^{i} \binom{u-1}{u-2}\cdot \binom{j-u}{i-u}. $$ Replacing the binomial coefficients with symbols $$ \left(\binom{n}{k}\right):=\binom{n+k-1}{k}, $$ counting the $k$-element multisets on an $n$-element set, we may rewrite the last sum as $$ \sum_{u=2}^{i} \left(\binom{2}{u-2}\right)\cdot \left(\binom{j-i+1}{i-u}\right)= \left( \binom{j-i+3}{i-2} \right). $$ Thus we obtain $$ \kappa(i,j)=(j-i)!^2\binom{j}{i} \left( \binom{j-i+3}{i-2} \right), $$ which is obviously equivalent to the stated equation. \end{proof} Once we have selected the lengths of the elementary kernel factors in the unique decomposition of a kernel position, we may select each kernel factor of a given type independently. Thus we obtain the following result. \begin{theorem} \label{T_b2k} For $n\geq 1$, the number $\kappa_n$ of kernel positions of rank $n$ in the original Bernoulli game is given by $$ \kappa_n=\sum_{k=0}^{n-2}\sum_{1=i_0<i_1<\cdots <i_{k+1}= n} \prod_{j=0}^k (i_{j+1}-i_j-1)!^2 \binom{i_{j+1}}{i_j+1}\binom{i_{j+1}}{i_j-1}. $$ \end{theorem} \begin{proof} Consider the isomorphic game $(P,M)$ given in Lemma~\ref{L_oBiso}.
Assuming that the elementary kernel factors cover the positions $1$ through $1$, $2=i_0+1$ through $i_1$, $i_1+1$ through $i_2$, and so on, $i_k+1$ through $i_{k+1}=n$, we obtain the formula $$ \kappa_n=\kappa(1,1)\sum_{k=0}^{n-1}\sum_{1=i_0<i_1<\cdots <i_{k+1}= n} \prod_{j=0}^k \kappa(i_j+1,i_{j+1}), $$ from which the statement follows by $\kappa(1,1)=1$ and Lemma~\ref{L_ekfc}. \end{proof} Comparing Theorem~\ref{T_b2k} with Theorem~\ref{T_b2} we obtain the following formula for the Bernoulli numbers of the second kind. \begin{corollary} \label{C_b2} For $n\geq 2$ the Bernoulli numbers of the second kind are given by \begin{equation} \label{E_b2e} b_n=(-1)^{n-1} \frac{1}{(n+1)!} \sum_{k=0}^{n-2}\sum_{1=i_0<i_1<\cdots <i_{k+1}= n} \prod_{j=0}^k (i_{j+1}-i_j-1)!^2 \binom{i_{j+1}}{i_j+1}\binom{i_{j+1}}{i_j-1}. \end{equation} \end{corollary} \begin{example} For $n=4$, Equation (\ref{E_b2e}) yields \begin{align*} b_4=\frac{-1}{5!}&\left((3-1)!^2\binom{4}{2}\binom{4}{0} +(1-1)!^2\binom{2}{2}\binom{2}{0}(3-2)!^2\binom{4}{3}\binom{4}{1}\right.\\ &+(2-1)!^2\binom{3}{2}\binom{3}{0}(3-3)!^2\binom{4}{4}\binom{4}{2}\\ &\left.+(1-1)!^2\binom{2}{2}\binom{2}{0} (2-2)!^2\binom{3}{3}\binom{3}{1} (3-3)!^2\binom{4}{4}\binom{4}{2}\right)=-\frac{19}{30}. \end{align*} Thus $b_4/4!=-19/720$, which agrees with the number tabulated by Jordan~\cite[p.\ 266]{Jordan}. \end{example} As $n$ increases, the number of terms in (\ref{E_b2e}) increases exponentially. However, we are unaware of any other explicit formula expressing the Bernoulli numbers of the second kind as a sum of terms of the same sign. Lemma~\ref{L_ekfc} may also be used to obtain a recursion formula for the number of kernel positions of rank $n$ in the original Bernoulli game. \begin{proposition} \label{P_b2krec} For $n\geq 2$, the number $\kappa_n$ of kernel positions of rank $n$ in the original Bernoulli game satisfies the recursion formula $$ \kappa_n=\sum_{i=1}^{n-1} \kappa_i (n-i-1)!^2 \binom{n}{i+1}\binom{n}{i-1}. 
$$ \end{proposition} \begin{proof} Consider again the isomorphic game $(P,M)$ given in Lemma~\ref{L_oBiso}. Assume the last elementary kernel factor is $(i+1,u_{i+1}, v_{i+1})\cdots (n,u_n,v_n)$ where $i\geq 1$. Removing it we obtain a kernel position of rank $i$. Conversely, concatenating an elementary kernel factor $(i+1,u_{i+1}, v_{i+1})\cdots (n,u_n,v_n)$ to a kernel position of rank $i$ yields a kernel position of rank $n$. Thus we have \begin{equation} \kappa_n=\sum_{i=0}^{n-1} \kappa_i \cdot \kappa(i+1,n), \end{equation} and the statement follows by Lemma~\ref{L_ekfc}. \end{proof} Comparing Proposition~\ref{P_b2krec} with Theorem~\ref{T_b2} we obtain the following recursion formula for absolute values of the Bernoulli numbers of the second kind. \begin{equation} \label{E_b2rec} |b_n|=\frac{1}{n+1}\sum_{i=1}^{n-1} |b_i| (n-i-1)! \binom{n}{i-1}\quad\mbox{holds for $n\geq 2$}. \end{equation} Equivalently, Jordan's~\cite{Jordan} Bernoulli numbers of the second kind $b_n/n!$ satisfy \begin{equation} \label{E_b2Jrec} \left|\frac{b_n}{n!}\right|=\sum_{i=1}^{n-1} \left|\frac{b_i}{i!}\right| \frac{i}{(n+1)(n-i+1)(n-i)} \quad\mbox{for $n\geq 2$}. \end{equation} \begin{remark} Since the sign of $b_n$ for $n\geq 1$ is $(-1)^{n-1}$, and substituting $x=0$ in (\ref{E_b2}) gives $$ \sum_{n\geq 0} \frac{b_n}{n!} t^n=\frac{t}{\ln(1-t)}, $$ it is easy to verify that (\ref{E_b2Jrec}) could also be derived from the following equation, satisfied by the generating function of the numbers $b_n$: $$ \frac{d}{dt}\left(t\cdot \frac{t}{\ln(1-t)}\right)+1-t = \frac{d}{dt}\left(\frac{t}{\ln(1-t)}\right)\cdot ((1-t)\ln(1-t)+t). $$ However, it seems hard to guess that this equation will yield a nice recursion formula. \end{remark} \section{Decomposing the indecomposable permutations} \label{s_MR} \begin{definition} The {\em instant Bernoulli game} is the restriction of the original Bernoulli game to the set of positions $\{ (12\cdots n,v_1\cdots v_n)\::\: n\geq 1\}$. 
\end{definition} \begin{lemma} \label{L_MRsimple} Equivalently, we may define the set of positions of the instant Bernoulli game as the set of words $v_1\cdots v_n$ satisfying $n\geq 1$ and $1\leq v_i\leq i$ for all $i$. A valid move consists of replacing $v_1\cdots v_n$ with $v_1\cdots v_m$ for some $m\geq 1$ such that $m+1\leq v_{m+1}, v_{m+2},\ldots, v_n$ holds. \end{lemma} Lemma~\ref{L_MRsimple} offers the simplest possible way to visualize the instant Bernoulli game, even if this is not a form in which the applicability of Theorem~\ref{T_sBt} could be directly seen. For that purpose we need to note that the isomorphism of games stated in Lemma~\ref{L_oBiso} may be restricted to the set of positions of the instant Bernoulli game, and we obtain the following representation. \begin{lemma} \label{L_iBiso} The instant Bernoulli game is isomorphic to the strongly Bernoulli type truncation game induced by $$ M=\{(p_1,u_1,v_1)\cdots (p_n,u_n,v_n)\::\: p_1\neq 1, u_1\leq v_1, \ldots, v_n\}, $$ on the set of positions $$ P=\{(1,1,v_1)\cdots (n,n,v_n)\::\: 1\leq v_i\leq i\}\subset ({\mathbb P}^3)^*. $$ \end{lemma} Unless otherwise noted, we will use the simplified representation stated in Lemma~\ref{L_MRsimple}. The kernel positions of the instant Bernoulli game are identifiable with the primitive elements of the Malvenuto-Reutenauer Hopf algebra, as mentioned in the concluding remarks of~\cite{Hetyei-EKP}. We call this game the instant Bernoulli game because it is a game in which one of the players wins instantly: either there is no valid move and the second player wins immediately, or the first player may select the least $m\geq 1$ satisfying $m+1\leq v_{m+1}, v_{m+2},\ldots, v_n$ and move to $v_1\cdots v_m$, thus winning instantly. The kernel positions are identical to the winning positions in this game.
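Since the kernel positions of the instant Bernoulli game are exactly the positions with no valid move, they can be enumerated for small rank directly from the representation of Lemma~\ref{L_MRsimple}. A brute-force Python sketch (the helper name is ours):

```python
from itertools import product

def instant_kernels(n):
    """Words v_1...v_n with 1 <= v_i <= i admitting no valid move, i.e.
    no m in {1,...,n-1} with m+1 <= v_{m+1}, ..., v_n (0-indexed below)."""
    kernels = []
    for v in product(*(range(1, i + 1) for i in range(1, n + 1))):
        if not any(all(v[j] >= m + 1 for j in range(m, n))
                   for m in range(1, n)):
            kernels.append(v)
    return kernels

counts = [len(instant_kernels(n)) for n in range(1, 6)]
```

For $n=1,\ldots,5$ this yields the counts $1,1,3,13,71$, the first numbers of connected permutations (sequence A003319), in agreement with the identification discussed below.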
The recursion formula (\ref{E_gkrec}) may be rewritten as $$ n!=\kappa_n+\sum_{m=1}^{n-1} \kappa_m (n-m)!, $$ (we start the summation with $\kappa_1$ since the first letter cannot be removed), and the generating function of the numbers $\kappa_n$ is easily seen to be \begin{equation} \label{E_MRgf} \sum_{n=1}^{\infty} \kappa_n t^n=1-\frac{1}{\sum_{n=0}^{\infty}n!t^n}. \end{equation} The numbers $\{\kappa_n\}_{n\geq 0}$ are listed as sequence A003319 in the On-Line Encyclopedia of Integer Sequences~\cite{OEIS}, and count the number of {\em connected} or {\em indecomposable} permutations of $\{1,2,\ldots,n\}$. A permutation $\pi\in S_n$ is {\em connected} if there is no $m<n$ such that $\pi$ takes the set $\{1,\ldots,m\}$ into itself. The kernel positions of the instant Bernoulli game are directly identifiable with the connected permutations in more than one way. One way is mentioned at the end of~\cite{Hetyei-EKP}; we may formalize that bijection using two variants of the well-known {\em inversion tables} (see, for example,~\cite[Section 5.1.1]{Knuth} or \cite[Section 1.3]{Stanley-EC1}). \begin{definition} Given a permutation $\pi\in S_n$ we define its {\em letter-based non-inversion table} as the word $v_1\cdots v_n$ where $v_j=1+|\{i<j\::\: \pi^{-1}(i)<\pi^{-1}(j)\}|$. \end{definition} For example, for $\pi=693714825$ the letter-based non-inversion table is $121351362$. This is obtained by adding $1$ to all entries in the usual definition of an inversion table~\cite[Section 1.3]{Stanley-EC1} of the permutation $\widetilde{\pi}=417396285$, defined by $\widetilde{\pi}(i)=n+1-\pi(i)$, and taking the reverse of the resulting word. In particular, for $\widetilde{\pi}=417396285$ we find the inversion table $(1,5,2,0,4,2,0,1,0)$ in~\cite[Section 1.3]{Stanley-EC1}. Our term {\em letter-based} refers to the fact that here we associate the letter $j$ to $v_j$ and not the place $j$.
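A direct transcription of this definition into Python (an added illustration, not part of the original text) reproduces the table above:

```python
def letter_based_table(perm):
    # letter-based non-inversion table:
    # v_j = 1 + #{i < j : pi^{-1}(i) < pi^{-1}(j)},
    # where perm is given in one-line notation
    pos = {value: place for place, value in enumerate(perm)}  # pi^{-1}
    n = len(perm)
    return [1 + sum(pos[i] < pos[j] for i in range(1, j))
            for j in range(1, n + 1)]

pi = (6, 9, 3, 7, 1, 4, 8, 2, 5)      # pi = 693714825
print(letter_based_table(pi))          # [1, 2, 1, 3, 5, 1, 3, 6, 2]
```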
A variant of the notion of letter-based non-inversion table is the place-based non-inversion table. \begin{definition} Given a permutation $\pi\in S_n$ we define its {\em place-based non-inversion table (PNT)} as the word $v_1\cdots v_n$ where $v_j=1+|\{i<j\::\: \pi(i)<\pi(j)\}|$. \end{definition} Obviously the PNT of a permutation $\pi$ equals the letter-based non-inversion table of $\pi^{-1}$. For example, for $\pi=583691472$ the PNT is $121351362$. We have $v_7=3=1+2$ because $\pi(7)=4$ is preceded by two letters $\pi(i)$ such that $(\pi(i),\pi(7))$ is not an inversion. Any PNT $v_1\cdots v_n$ is a word satisfying $1\leq v_i\leq i$. \begin{lemma} \label{L_MRc} A position $v_1\cdots v_n$ in the instant Bernoulli game is a kernel position if and only if it is the place-based (letter-based) non-inversion table of a connected permutation. \end{lemma} \begin{proof} We prove the place-based variant of the lemma; the letter-based version follows immediately since the set of connected permutations is closed under taking inverses. It is easy to verify that the place-based non-inversion table $v_1\cdots v_n$ of a permutation $\pi$ satisfies $m+1\leq v_{m+1},\ldots, v_n$ if and only if $\pi$ takes the set $\{1,\ldots,m\}$ into itself. Thus the first player has no valid move if and only if $\pi$ is connected. \end{proof} The study of connected permutations goes back to the work of Comtet~\cite{Comtet-n!,Comtet-AC}; for a reasonably complete list of references we refer to the entry A003319 in the On-Line Encyclopedia of Integer Sequences~\cite{OEIS}. It was shown by Poirier and Reutenauer~\cite{Poirier-Reutenauer} that the connected permutations form a free algebra basis of the Malvenuto-Reutenauer Hopf-algebra, introduced by Malvenuto and Reutenauer~\cite{Malvenuto-Reutenauer}. The same statement appears in dual form in the work of Aguiar and Sottile~\cite{Aguiar-Sottile}.
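Both the PNT and Lemma~\ref{L_MRc} are easy to check by machine. The following sketch (an added illustration, not part of the original text) verifies the example above, and confirms for all ranks $n\leq 5$ that a permutation is connected exactly when its PNT satisfies the no-valid-move condition of the instant Bernoulli game:

```python
from itertools import permutations

def pnt(perm):
    # place-based non-inversion table: v_j = 1 + #{i < j : pi(i) < pi(j)}
    return [1 + sum(perm[i] < perm[j] for i in range(j))
            for j in range(len(perm))]

def is_connected(perm):
    # no proper prefix pi(1)...pi(m) is a permutation of {1, ..., m}
    return not any(max(perm[:m]) == m for m in range(1, len(perm)))

def is_kernel(v):
    # kernel position: no m >= 1 with m+1 <= v_{m+1}, ..., v_n
    n = len(v)
    return not any(all(v[j] >= m + 1 for j in range(m, n))
                   for m in range(1, n))

assert pnt((5, 8, 3, 6, 9, 1, 4, 7, 2)) == [1, 2, 1, 3, 5, 1, 3, 6, 2]
for n in range(1, 6):
    for perm in permutations(range(1, n + 1)):
        assert is_connected(perm) == is_kernel(pnt(perm))
print("Lemma verified for n <= 5")
```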
Although the instant Bernoulli game is very simple, Theorem~\ref{T_sBt} offers a nontrivial analysis of its kernel positions, allowing us to identify a unique structure on each connected permutation. We begin by stating the following analogue of Theorem~\ref{T_b2k}. \begin{theorem} \label{T_MRk} The number $\kappa_n$ of connected permutations of rank $n$ is given by $$ \kappa_n=\sum_{k=1}^{n-1}\sum_{1=i_1<i_2<\cdots <i_{k+1}=n} \prod_{j=1}^k (i_{j+1}-i_j-1)!\cdot i_j. $$ \end{theorem} \begin{proof} By Lemma~\ref{L_MRc}, $\kappa_n$ is the number of kernel positions of rank $n$ in the instant Bernoulli game. The fact that this number is equal to the expression on the right hand side may be shown similarly to the proof of Theorem~\ref{T_b2k}. Consider the equivalent representation of the instant Bernoulli game given in Lemma~\ref{L_MRsimple}. Note that this is obtained from the representation given in Lemma~\ref{L_iBiso} by deleting the ``redundant coordinates'' $i,i$ from each letter $(i,i,v_i)$. Given an arbitrary kernel position $v_1\cdots v_n$, the first letter $v_1=1$ corresponds to an elementary kernel factor of type $(1,1)$ and we have $\kappa(1,1)=1$. For $2\leq i\leq j$, by abuse of terminology, let us call $v_i\cdots v_j$ an elementary kernel factor of type $(i,j)$ if it corresponds to an elementary kernel factor in the equivalent representation in Lemma~\ref{L_iBiso}. The elementary kernel factors of type $(i,j)$ are then exactly those words $v_i\cdots v_j$ for which $i\leq v_i,\ldots, v_{j-1}$ and $v_j<i$ hold. Thus their number is \begin{equation} \label{E_MRij} \kappa(i,j)=(j-i)!\cdot (i-1). \end{equation} The statement now follows from the obvious formula $$ \kappa_n=\kappa(1,1)\cdot \sum_{k=1}^{n-1} \sum_{1=i_1<i_2<\cdots<i_{k+1}=n} \prod_{j=1}^k \kappa(i_j+1,i_{j+1}). $$ \end{proof} In analogy to Proposition~\ref{P_b2krec}, we may also use (\ref{E_MRij}) to obtain a recursion formula for the number of connected permutations.
We end up with a formula that was first discovered by King~\cite[Theorem 4]{King}. \begin{proposition}[King] \label{P_MRrec} For $n\geq 2$, the number $\kappa_n$ of connected permutations of rank $n$ satisfies the recursion formula $$ \kappa_n=\sum_{i=1}^{n-1} \kappa_i (n-i-1)!i. $$ \end{proposition} The proof may be presented the same way as for Proposition~\ref{P_b2krec}, by removing the last elementary kernel factor of type $(i,n)$, using the informal notion of an elementary kernel factor as in the proof of Theorem~\ref{T_MRk}. King's proof is worded differently, but may be shown to yield a bijectively equivalent decomposition. \begin{lemma} The induction step presented in King's proof of Proposition~\ref{P_MRrec} is equivalent to the removal of the last elementary kernel factor in the place-based non-inversion table of $\widetilde{\sigma}(1)\widetilde{\sigma}(2)\cdots\widetilde{\sigma}(n)$. Here $\widetilde{\sigma}(i)=n+1-\sigma(n+1-i)$. \end{lemma} \begin{proof} Let $\sigma(1)\cdots \sigma(n)$ be the connected permutation considered in King's proof, and let $v_1\cdots v_n$ be the PNT of $\widetilde{\sigma}(1)\widetilde{\sigma}(2)\cdots \widetilde{\sigma}(n)$. King's proof first identifies $\sigma(1)=r$. This is equivalent to setting $v_n=n+1-r$. King then defines $\pi(1)\cdots \pi(n-1)$ as the permutation obtained by deleting $\sigma(1)$ and subtracting $1$ from all letters greater than $r$. Introducing $\widetilde{\pi}(i)=n-\pi(n-i)$, the permutation $\widetilde{\pi}(1)\cdots\widetilde{\pi}(n-1)$ is obtained from $\widetilde{\sigma}(1)\widetilde{\sigma}(2)\cdots \widetilde{\sigma}(n)$ by deleting the last letter $n+1-r$ and by decreasing all letters greater than $n+1-r$ by one. The PNT of $\widetilde{\pi}(1)\cdots\widetilde{\pi}(n-1)$ is thus $v_1\cdots v_{n-1}$. King then defines $j$ as the largest index such that $\pi(\{1,\ldots,j\})=\{1,\ldots,j\}$.
This is equivalent to finding the least $n-j$ such that $\widetilde{\pi}(\{n-j,n-j+1,\ldots,n-1\})=\{n-j,n-j+1,\ldots,n-1\}$. Using the proof of Lemma~\ref{L_MRc}, this is easily seen to be equivalent to finding the smallest $n-j$ such that $v_{n-j}=n-j$ and for all $n-j\leq k\leq n-1$ we have $v_{k}\geq n-j$. King defines $\beta(\pi)$ as the permutation obtained from $\pi$ by removing $\pi(1)\cdots \pi(j)$ and then subtracting $j$ from each element. Correspondingly, we may define $\widetilde{\beta}(\widetilde{\pi})$ as the permutation obtained from $\widetilde{\pi}$ by removing $\widetilde{\pi}(n-j)\cdots \widetilde{\pi}(n-1)$. The PNT of $\widetilde{\beta}(\widetilde{\pi})$ is then $v_1\cdots v_{n-j-1}$, representing a kernel position in the instant Bernoulli game. This is the kernel position of the least rank that is reachable from $v_1\cdots v_{n-1}$. In terms of elementary kernel factors, the removal of $v_{n}$ enables the first player to remove the rest of the last elementary kernel factor in a single valid move; we only need to show that the first player cannot move to a position $v_1\cdots v_k$ where $r\leq k\leq s$ for some elementary kernel factor $v_r\cdots v_s$. Assume by way of contradiction that such a move is possible. By definition of a valid move, we then have $k\leq v_s$, implying $r\leq v_s$, in contradiction with the definition of the elementary kernel factor $v_r\cdots v_s$. Therefore $v_1\cdots v_{n-j-1}$ is obtained from $v_1\cdots v_n$ by removing exactly the last elementary kernel factor. \end{proof} King~\cite{King} uses the removal of the last elementary kernel factor to recursively define a {\em transposition Gray code} of all connected permutations of a given rank. A transposition Gray code is a list of permutations such that subsequent elements differ by a transposition.
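King's recursion is easy to evaluate; the sketch below (an added illustration, not part of the original text) computes $\kappa_n$ from Proposition~\ref{P_MRrec} and cross-checks the values against the rewritten recursion $n!=\kappa_n+\sum_{m=1}^{n-1}\kappa_m(n-m)!$ from the beginning of this section:

```python
from math import factorial

def kappa_list(n_max):
    # King's recursion: kappa_n = sum_{i=1}^{n-1} kappa_i (n-i-1)! i for n >= 2
    kappa = {1: 1}
    for n in range(2, n_max + 1):
        kappa[n] = sum(kappa[i] * factorial(n - i - 1) * i for i in range(1, n))
    return [kappa[n] for n in range(1, n_max + 1)]

k = kappa_list(7)
print(k)  # [1, 1, 3, 13, 71, 461, 3447], sequence A003319
# cross-check: n! = kappa_n + sum_{m=1}^{n-1} kappa_m (n-m)!
assert all(factorial(n) == k[n - 1] + sum(k[m - 1] * factorial(n - m)
                                          for m in range(1, n))
           for n in range(1, 8))
```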
Using place-based non-inversion tables, not only is the last elementary kernel factor easily identifiable, but the entire unique decomposition into elementary kernel factors is transparent. This gives rise to a new way to systematically list all connected permutations. The resulting list is not a transposition Gray code, but it is fairly easy to generate. To explain the construction, consider the connected permutation $\pi=251376948$. Its place-based non-inversion table is $v_1\cdots v_9=121355748$ whose decomposition into elementary kernel factors is $1\cdot 21\cdot 3\cdot 5574\cdot 8$. For $i<j$, each elementary kernel factor of type $(i,j)$ begins with $i$; all entries in the factor are at least $i$, except for the last letter, which is less than $i$. For $i=1$, $1$ is a special elementary kernel factor; for $i>1$, a kernel factor of type $(i,i)$ is a positive integer less than $i$. \begin{definition} Given a connected permutation $\pi$, we define its {\em elevation $E(\pi)$} as the permutation whose PNT is obtained from the PNT of $\pi$ as follows: for each elementary kernel factor of type $(i,j)$, increase the last letter in the factor to $j$. \end{definition} For example, the PNT of the elevation of $251376948$ is $1\cdot 23\cdot 4\cdot 5578\cdot 9$, thus $E(\pi)$ is $123465789$. The PNT of $E(\pi)$ is written as a product of factors, such that each factor $u_i\cdots u_j$ ends with $j$, and all letters after $u_j$ are more than $j$. We may use this observation to prove that each factor $u_i\cdots u_j$ ends with a letter $u_j=j$ such that $j$ is a {\em strong fixed point} of $E(\pi)$. \begin{definition} A number $i\in\{1,\ldots,n\}$ is a strong fixed point of a permutation $\sigma$ of $\{1,\ldots,n\}$ if $\sigma(i)=i$ and $\sigma(\{1,\ldots,i\})=\{1,\ldots,i\}$. We denote the set of strong fixed points of $\sigma$ by $\mbox{SF}(\sigma)$.
\end{definition} \begin{remark} The definition of a strong fixed point may be found in Stanley's book~\cite[Ch.\ 1, Exercise 32b]{Stanley-EC1}, where it is stated that the number $g(n)$ of permutations of rank $n$ with no strong fixed points has the generating function $$ \sum_{n\geq 0} g(n) t^n=\frac{\sum_{n\geq 0} n! t^n}{1+t \sum_{n\geq 0} n! t^n}. $$ \end{remark} \begin{lemma} \label{L_ifp} Let $v_1\cdots v_n$ be the PNT of a permutation $\sigma$. Then $j$ is a strong fixed point of $\sigma$ if and only if $v_j=j$ and for all $k>j$ we have $v_k>j$. \end{lemma} In fact, the condition $\forall k\: (k>j\implies v_k>j)$ is easily seen to be equivalent to $\sigma(\{1,\ldots,j\})=\{1,\ldots,j\}$. Assuming this is satisfied, $j$ is a fixed point of $\sigma$ if and only if $v_j=j$. As a consequence of Lemma~\ref{L_ifp}, the last letters of the elementary kernel factors of the PNT of $\pi$ mark strong fixed points of $E(\pi)$. The converse is not necessarily true: in our example $7$ is a strong fixed point of $E(\pi)$; however, no elementary kernel factor of the PNT of $\pi$ ends with $v_7$. On the other hand, $v_1$ is always a special elementary kernel factor by itself and the last elementary kernel factor must end at $v_n$, thus $1$ and $n$ must always be strong fixed points of $E(\pi)$. The numbers $1$ and $n$ are also special in the sense that $i\in\{1,n\}$ is a strong fixed point if and only if it is a fixed point. \begin{theorem} \label{T_epi} Let $\sigma\in S_n$ be a permutation satisfying $\sigma(1)=1$ and $\sigma(n)=n$ and let the strong fixed points of $\sigma$ be $1=i_0<i_1<\cdots<i_{k+1}=n$. Then there are exactly $(i_1+1)\cdots (i_k+1)$ connected permutations $\pi$ whose elevation is $\sigma$. \end{theorem} \begin{proof} Assume $E(\pi)=\sigma$ and the PNT of $\pi$ is the product of elementary factors of type $(1,1)$, $(j_0+1,j_1)$, $(j_1+1,j_2)$, \ldots, $(j_l+1,j_{l+1})$, where $1=j_0<j_1<\cdots<j_{l+1}=n$.
As we have seen above, $\{j_1,\ldots,j_l\}$ must be a subset of $\{i_1,\ldots,i_k\}$. This condition is also sufficient since we may decompose the PNT of $\sigma$ as $u_1\cdot (u_{j_0+1}\cdots u_{j_1})\cdots (u_{j_l+1}\cdots u_{j_{l+1}})$, and decrease the value of each $u_{j_t}=j_t$ (where $t=1,2,\ldots,l+1$) independently to any number that is at most $j_{t-1}$. Note that each $u_{j_t+1}=j_t+1$, and the required inequalities for all other $u_j$s are automatically satisfied as a consequence of having selected the $j_t$s from among the strong fixed points. Thus we obtain the PNT of a connected permutation, whose kernel factors are of type $(1,1)$, $(j_0+1,j_1)$, $(j_1+1,j_2)$, \ldots, $(j_l+1,j_{l+1})$. Therefore the number of permutations $\pi$ satisfying $E(\pi)=\sigma$ is $$ \sum_{l=0}^k \sum_{\{j_1,\ldots,j_l\}\subseteq \{i_1,\ldots,i_k\}} j_1\cdots j_l=(i_1+1)\cdots (i_k+1). $$ \end{proof} The proof of Theorem~\ref{T_epi} suggests a straightforward way to list the PNTs of all connected permutations of rank $n$: \begin{itemize} \item[(1)] List all words $u_1\cdots u_n$ satisfying $u_1=1$, $u_n=n$, $1\leq u_i\leq i$ for all $i$, and $u_i\geq 2$ for $1<i<n$. These are the PNTs of all permutations of rank $n$ that have $1$ and $n$ as fixed points. \item[(2)] For each $u_1\cdots u_n$, identify the places of strong fixed points by finding all $i$s such that $u_i=i$ and $u_k>i$ for all $k>i$. \item[(3)] For each $u_1\cdots u_n$ select a subset $\{j_1,\ldots,j_l\}$ of the set of strong fixed points satisfying $1<j_1<\cdots<j_l<n$, set $j_0=1$ and $j_{l+1}=n$, and decrease the value of each $u_{j_t}$ for $t=1,\ldots,l+1$ to any number in $\{1,\ldots,j_{t-1}\}$. Output these as the PNTs of connected permutations. \end{itemize} Steps $(1)$ and $(3)$ involve nothing more than listing words using some lexicographic order; step $(2)$ may be performed after reading each word once.
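The three steps are easily turned into a short program. The sketch below (an added illustration, not part of the original text) counts the outputs instead of listing them; it uses the conventions $j_0=1$ and $j_{l+1}=n$ in step $(3)$, and restricts step $(1)$ to words with $u_i\geq 2$ for $1<i<n$, so that $1$ is a fixed point of the underlying permutation, as required in Theorem~\ref{T_epi}:

```python
from itertools import product, combinations
from math import prod

def count_connected(n):
    # Step (1): words u_1...u_n with u_1 = 1, u_n = n and 2 <= u_i <= i otherwise
    ranges = [[1]] + [range(2, i + 1) for i in range(2, n)] + [[n]]
    total = 0
    for u in product(*ranges):
        # Step (2): i is a strong fixed point iff u_i = i and u_k > i for all k > i
        sf = [i for i in range(2, n)
              if u[i - 1] == i and all(u[k] > i for k in range(i, n))]
        # Step (3): with j_0 = 1 and j_{l+1} = n, lowering each u_{j_t} to
        # {1, ..., j_{t-1}} produces j_0 * j_1 * ... * j_l distinct outputs
        for l in range(len(sf) + 1):
            for js in combinations(sf, l):
                total += prod(js, start=1)
    return total

print([count_connected(n) for n in range(3, 7)])  # [3, 13, 71, 461]
```

Per word, the inner sums contribute $\prod_{i\in \mbox{\scriptsize SF}\setminus\{1,n\}}(i+1)$, in accordance with Theorem~\ref{T_epi}.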
As a consequence of Theorem~\ref{T_epi} we obtain the following formula for the number of connected permutations of rank $n\geq 2$: $$ \kappa_n=\sum_{\substack{\sigma\in S_n\\\sigma(1)=1, \sigma(n)=n}} \prod_{i\in \mbox{SF}(\sigma)\setminus\{1,n\}} (i+1). $$ After removing the redundant letters $\sigma(1)=1$ and $\sigma(n)=n$ and decreasing all remaining letters by $1$, we obtain that \begin{equation} \label{E_IF} \kappa_n=\sum_{\sigma\in S_{n-2}} \prod_{i\in \mbox{SF}(\sigma)} (i+2)\quad\mbox{holds for $n\geq 2$}. \end{equation} Equation (\ref{E_IF}) offers a new combinatorial model for the numbers counting the connected permutations of rank $n\geq 2$: it is the total weight of all permutations of rank $n-2$, using a weighting which assigns the most value to those permutations which have the most strong fixed points and are thus in a sense the farthest from being connected. \section{The polynomial Bernoulli game of the second kind, indexed by $x$} \label{s_pb2} This game is defined in~\cite{Hetyei-EKP} on triplets of words $(u_1\cdots u_n, v_1\cdots v_n, w_1\cdots w_n)$ for $n\geq 0$ such that $1\leq u_i\leq i$, $1\leq v_i\leq i+1$ and $1\leq w_i\leq x$ hold for $i\geq 1$, furthermore we require $w_i\leq w_{i+1}$ for all $i\leq n-1$. A valid move consists of replacing $(u_1\cdots u_n,v_1\cdots v_n, w_1\cdots w_n)$ with $(u_1\cdots u_m,v_1\cdots v_m, w_1\cdots w_m)$ for some $m\geq 0$ satisfying $w_{m+1}=w_{m+2}=\cdots=w_n=x$ and $u_{m+1}< v_j$ for $j=m+1,\ldots, n$. Theorem~\ref{T_sBt} is applicable to this game, because of the following isomorphism. \begin{lemma} \label{L_rpb2} Let $\Lambda={\mathbb P}\times {\mathbb P}\times \{1,\ldots,x\}$ where $x\in {\mathbb P}$. 
The polynomial Bernoulli game, indexed by $x$, is isomorphic to the strongly Bernoulli type truncation game, induced by $$ M:=\{(u_1,v_1,x)\cdots (u_n,v_n,x)\::\: u_1< v_1, \ldots, v_n\} $$ on the set of positions $$ P:=\{(u_1,v_1,w_1)\cdots(u_n,v_n,w_n)\::\: 1\leq u_i\leq i, 1\leq v_i\leq i+1, w_1\leq\cdots\leq w_n\}. $$ This isomorphism is given by sending each triplet $(u_1\cdots u_n, v_1\cdots v_n, w_1\cdots w_n)\in {\mathbb P}^*\times {\mathbb P}^* \times \{1,\ldots,x\}^*$ into $(u_1,v_1,w_1)\cdots (u_n,v_n,w_n)\in ({\mathbb P}\times {\mathbb P}\times \{1,\ldots,x\})^*$. \end{lemma} \begin{theorem} \label{T_b2p} The number $\kappa_n$ of kernel positions of rank $n$ in the polynomial Bernoulli game of the second kind, indexed by $x$, is \begin{align*} \kappa_n=&\sum_{m=0}^{n-1} \binom{x+m-2}{m} m!(m+1)! \sum_{k=0}^{n-m-1}\sum_{m=i_0<i_1<\cdots <i_{k+1}=n} \prod_{j=0}^k (i_{j+1}-i_j-1)!^2 \binom{i_{j+1}}{i_j+1}\binom{i_{j+1}+1}{i_j}\\ &+\binom{x+n-2}{n} n!(n+1)!. \end{align*} \end{theorem} \begin{proof} Consider the isomorphic game $(P,M)$, given in Lemma~\ref{L_rpb2}. Since in a valid move all truncated letters $(u_j,v_j,w_j)$ satisfy $w_j=x$, we have to distinguish two types of elementary kernel factors: those which contain a letter $(u_i,v_i,w_i)$ with $w_i<x$ and those which do not. If the elementary kernel factor contains a $(u_i,v_i,w_i)$ with $w_i<x$, it must consist of the single letter $(u_i,v_i,w_i)$. We call such a factor an {\em elementary kernel factor of type $(i;w_i)$}. Clearly, their number is \begin{equation} \label{E_iw} \kappa(i;w_i)=i(i+1), \end{equation} since $u_i\in \{1,\ldots,i\}$ and $v_i\in \{1,\ldots, i+1\}$ may be selected independently. The elementary kernel factors containing only $x$ in the $w$-component of their letters are similar to the ones considered in Lemma~\ref{L_ekf}. We call an elementary kernel factor of the form $(u_i,v_i,x)\cdots (u_j, v_j, x)$ an {\em elementary kernel factor of type $(i,j;x)$}.
A calculation completely analogous to the one in Lemma~\ref{L_ekfc} shows that their number is \begin{equation} \label{E_ijx} \kappa(i,j;x)=\sum_{u=1}^i u\frac{(j-u)!j!}{(i-u)!i!}=(j-i)!^2\binom{j}{i}\binom{j+1}{i-1}. \end{equation} Because of $w_1\leq \cdots \leq w_n$, the factors of type $(i;w_i)$ must precede the factors of type $(i,j;x)$. Thus we obtain \begin{align*} \kappa_n=& \sum_{m=0}^{n-1} \sum_{1\leq w_1\leq \cdots \leq w_m\leq x-1} \prod_{i=1}^m \kappa(i;w_i) \sum_{k=0}^{n-m-1}\sum_{m=i_0<i_1<\cdots <i_{k+1}=n} \prod_{j=0}^k \kappa(i_j+1,i_{j+1};x)\\ &+\sum_{1\leq w_1\leq \cdots \leq w_n\leq x-1} \prod_{i=1}^n \kappa(i;w_i). \end{align*} The statement now follows from (\ref{E_iw}), (\ref{E_ijx}), from $\prod_{i=1}^m i(i+1)=m!(m+1)!$, and from the fact that the number of words $w_1\cdots w_m$ satisfying $1\leq w_1\leq \cdots \leq w_m\leq x-1$ is $$ \left(\binom{x-1}{m}\right)=\binom{x+m-2}{m}. $$ \end{proof} We already know~\cite[Theorem 4.2]{Hetyei-EKP} that we also have $$\kappa_n=(-1)^n(n+1)!b_n(-x)$$ for all positive integers $x$. Since two polynomial functions are equal if they agree for infinitely many substitutions, we obtain a valid expansion of the polynomial $(-1)^n(n+1)!b_n(-x)$. Substituting $-x$ into $x$ and rearranging yields the expansion of $b_n(x)$ in the basis $\{\binom{x+1}{n}\::\: n\geq 0\}$. \begin{corollary} \label{C_b2p} Introducing $c_{n,n}=n!$ and $$ c_{n,m}=\frac{(-1)^{n-m}m!(m+1)!}{(n+1)!} \sum_{k=0}^{n-m-1}\sum_{m=i_0<i_1<\cdots <i_{k+1}=n} \prod_{j=0}^k (i_{j+1}-i_j-1)!^2 \binom{i_{j+1}}{i_j+1}\binom{i_{j+1}+1}{i_j} $$ for $0\leq m<n$, we have $$ b_n(x)=\sum_{m=0}^{n} c_{n,m}\binom{x+1}{m}. $$ \end{corollary} \begin{example} For $n=2$, Corollary~\ref{C_b2p} gives \begin{align*} b_2(x)=& \frac{0!1!}{3!}\left(1!^2\binom{2}{1}\binom{3}{0} +0!^2\binom{1}{1}\binom{2}{0}\cdot 0!^2\binom{2}{2}\binom{3}{1}\right) -\frac{1!2!}{3!}\binom{x+1}{1} 0!^2\binom{2}{2}\binom{3}{1}+\binom{x+1}{2}2!\\ =&\frac{5}{6}-(x+1)+(x+1)x=x^2-\frac{1}{6}.
\end{align*} Thus $b_2(x)/2!=x^2/2-1/12$, which agrees with the formula given in~\cite[\S 92]{Jordan}. \end{example} We may also obtain a new formula for the Bernoulli numbers of the second kind by substituting $x=0$ into Corollary~\ref{C_b2p}. We obtain $b_n=c_{n,0}+c_{n,1}$, i.e., \begin{equation} \begin{aligned} b_n=&\frac{(-1)^{n}}{(n+1)!} \sum_{k=0}^{n-1}\sum_{0=i_0<i_1<\cdots <i_{k+1}=n} \prod_{j=0}^k (i_{j+1}-i_j-1)!^2 \binom{i_{j+1}}{i_j+1}\binom{i_{j+1}+1}{i_j}\\ &+ \frac{(-1)^{n-1}\cdot 2}{(n+1)!} \sum_{k=0}^{n-2}\sum_{1=i_0<i_1<\cdots <i_{k+1}=n} \prod_{j=0}^k (i_{j+1}-i_j-1)!^2 \binom{i_{j+1}}{i_j+1}\binom{i_{j+1}+1}{i_j} \end{aligned} \end{equation} for $n\geq 2$. \section{The flat Bernoulli game} \label{s_fB} This game is defined in~\cite{Hetyei-EKP} on words $u_1\cdots u_n$ for $n\geq 0$ such that each $u_i\in{\mathbb P}$ satisfies $1\leq u_i\leq i$. A valid move consists of replacing $u_1\cdots u_n$ with $u_1\cdots u_m$ if $m\geq 1$ and $u_{m+1}<u_j$ holds for all $j>m+1$. In analogy to Lemma~\ref{L_oBiso}, we have the following result. \begin{lemma} \label{L_fBiso} The flat Bernoulli game is isomorphic to the strongly Bernoulli type truncation game induced by $$ M=\{(p_1,u_1)\cdots (p_n,u_n)\::\: p_1\neq 1, u_1<u_2,\ldots, u_n\}, $$ on the set of positions $$ P=\{(1,u_1)\cdots (n,u_n)\::\: 1\leq u_i\leq i\}\subset ({\mathbb P}^2)^*. $$ The isomorphism is given by sending each word $u_1\cdots u_n\in {\mathbb P}^*$ into the word $(1,u_1)(2,u_2)\cdots (n,u_n)\in ({\mathbb P}^2)^*$. \end{lemma} \begin{theorem} \label{T_fB} For $n\geq 2$, the number $\kappa_n$ of kernel positions of rank $n$ in the flat Bernoulli game is $$ \kappa_n=\sum_{k=0}^{\lfloor(n-3)/2\rfloor} \sum_{\substack{1=i_0<i_1<\cdots<i_{k+1}=n\\ i_{j+1}-i_j\geq 2}} \prod_{j=0}^k (i_{j+1}-i_j-2)!\binom{i_{j+1}}{i_j}. $$ \end{theorem} \begin{proof} Consider the isomorphic representation given in Lemma~\ref{L_fBiso}.
Note first that, in any kernel position $(1,u_1)\cdots (n,u_n)$, the first letter $(1,u_1)=(1,1)$ is an elementary kernel factor of type $(1,1)$ and we have $\kappa(1,1)=1$. For $2\leq i< j$, let $\kappa(i,j)$ be the number of elementary kernel factors $(i,u_i)\cdots (j,u_j)$ of type $(i,j)$. A calculation completely analogous to the one in Lemma~\ref{L_ekfc} shows \begin{equation} \label{E_ijf} \kappa(i,j)=\sum_{u=1}^i u \frac{(j-1-u)!}{(j-1-i)!}=(j-1-i)!\binom{j}{i-1}. \end{equation} Note that for $i\geq 2$ there is no elementary kernel factor of type $(i,i)$ since removing the last letter only is always a valid move, provided at least one letter is left. The statement now follows from Equation (\ref{E_ijf}) and the obvious formula $$ \kappa_n=\kappa(1,1)\cdot \sum_{k=0}^{\lfloor(n-3)/2\rfloor} \sum_{\substack{1=i_0<i_1<\cdots<i_{k+1}=n\\ i_{j+1}-i_j\geq 2}} \prod_{j=0}^k \kappa(i_j+1,i_{j+1}). $$ \end{proof} Introducing $m_j:=i_{j}-i_{j-1}$ for $j\geq 1$ and shifting the index $k$ by $1$, we may rewrite the equation in Theorem~\ref{T_fB} as \begin{equation} \label{E_fB} \kappa_n=n\cdot \sum_{k=1}^{\lfloor(n-1)/2\rfloor} \sum_{\substack{m_1+\cdots + m_{k}=n-1\\ m_1,\ldots, m_k\geq 2}} \binom{n-1}{m_1, \ldots, m_k} (m_1-2)!\cdots (m_k-2)!. \end{equation} A more direct proof of this equation follows from Corollary~\ref{C_fBperm} below. \begin{example} For $n=5$, (\ref{E_fB}) yields $$ \kappa_5=5\left(\binom{4}{4} 2!+\binom{4}{2,2}0!0!\right)=40. $$ Thus $\kappa_5/5!=1/3$, which agrees with the number given in~\cite[Table 1]{Hetyei-EKP}. \end{example} We already know~\cite[Proposition 7.3]{Hetyei-EKP} that the exponential generating function of the numbers $\kappa_n$ is \begin{equation} \label{E_fBgen} \sum_{n=1}^{\infty}\frac{\kappa_n}{n!} t^n =\frac{t}{(1-t)(1-\ln(1-t))}. \end{equation} Just like in Section~\ref{s_MR}, we may use place-based non-inversion tables to find a permutation enumeration model for the numbers $\kappa_n$.
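The kernel positions of the flat Bernoulli game can also be found by a direct game-tree search: a position belongs to the kernel exactly when every valid move leads outside the kernel. The brute-force sketch below (an added illustration, not part of the original text) reproduces $\kappa_5=40$ from the example above:

```python
from itertools import product
from functools import lru_cache

def moves(u):
    # truncate u_1...u_n to u_1...u_m (m >= 1),
    # provided u_{m+1} < u_j holds for all j > m+1
    n = len(u)
    for m in range(1, n):
        if all(u[m] < u[j] for j in range(m + 1, n)):
            yield u[:m]

@lru_cache(maxsize=None)
def in_kernel(u):
    # kernel (P-)position: every valid move leads to a non-kernel position
    return all(not in_kernel(v) for v in moves(u))

def kappa(n):
    words = product(*(range(1, i + 1) for i in range(1, n + 1)))
    return sum(in_kernel(u) for u in words)

print([kappa(n) for n in range(1, 7)])  # [1, 0, 3, 4, 40, 156]
```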
\begin{lemma} \label{L_PNT<} Let $u_1\cdots u_n$ be the PNT of a permutation $\pi\in S_n$. Then, for all $i<j$, $\pi(i)<\pi(j)$ implies $u_i<u_j$. The following partial converse is also true: $u_i<u_{i+1},\ldots, u_j$ implies $\pi(i)<\pi(i+1),\ldots, \pi(j)$. \end{lemma} \begin{proof} If $\pi(i)<\pi(j)$ then the set $\{k<i\::\: \pi(k)<\pi(i)\}$ is a proper subset of $\{k<j\::\: \pi(k)<\pi(j)\}$ (the index $i$ belongs only to the second subset). Thus $u_i<u_j$. The converse may be shown by induction on $j-i$. For $j=i+1$, $\pi(i)>\pi(i+1)$ implies that the set $\{k<i+1\::\: \pi(k)<\pi(i+1)\}$ is a subset of $\{k<i\::\: \pi(k)<\pi(i)\}$, thus $u_i\geq u_{i+1}$. Therefore $u_i<u_{i+1}$ implies $\pi(i)<\pi(i+1)$. Assume now that $u_i<u_{i+1},\ldots, u_j$ holds and that we have already shown $\pi(i)<\pi(i+1),\ldots, \pi(j-1)$. Assume, by way of contradiction, that $\pi(i)>\pi(j)$ holds. Then there is no $k$ satisfying $i<k<j$ and $\pi(k)<\pi(j)$, thus $\{k<j\::\: \pi(k)<\pi(j)\}$ is a subset of $\{k<i\::\: \pi(k)<\pi(i)\}$, implying $u_i\geq u_j$, a contradiction. Therefore we obtain $\pi(i)<\pi(j)$. \end{proof} \begin{corollary} Let $u_1\cdots u_n$ be the PNT of a permutation $\pi\in S_n$. Then $u_i\cdots u_j$ satisfies $u_i<u_{i+1},\ldots, u_{j-1}$ and $u_i\geq u_j$ if and only if $\pi(j)<\pi(i)< \pi(i+1),\ldots,\pi(j-1)$ holds. \end{corollary} \begin{corollary} \label{C_fBperm} Let $u_1\cdots u_n$ be the PNT of a permutation $\pi\in S_n$. Then $u_1\cdots u_n$ is a kernel position in the flat Bernoulli game if and only if there exists a set of indices $1=i_0<i_1<\cdots<i_{k+1}=n$ such that for each $j\in\{0,\ldots,k\}$ we have $\pi(i_{j+1})<\pi(i_j+1)<\pi(i_j+2), \pi(i_j+3),\ldots,\pi(i_{j+1}-1)$. \end{corollary} Equation (\ref{E_fB}) also follows from Corollary~\ref{C_fBperm}. In fact, there are $n$ ways to select $\pi(1)$.
Then, introducing $m_j:=i_{j}-i_{j-1}$ for $j\geq 1$, we have $\binom{n-1}{m_1,\ldots,m_k}$ ways to select the partitioning $$ \{1,\ldots,n\}\setminus \{\pi(1)\}=\biguplus_{j=0}^k \pi\left(\{i_j+1,\ldots,i_{j+1}\}\right) $$ and, for each $j$ there are $(i_{j+1}-i_j-2)!=(m_{j+1}-2)!$ ways to select the partial permutation $\pi(i_j+1)\cdots\pi(i_{j+1})$. Both Equation (\ref{E_fB}) and Corollary~\ref{C_fBperm} suggest looking at the numbers \begin{equation} \label{E_K} K_n=\kappa_{n+1}/(n+1)= \sum_{k=0}^{\lfloor n/2\rfloor} \sum_{\substack{m_1+\cdots + m_{k}=n\\ m_1,\ldots, m_k\geq 2}} \binom{n}{m_1, \ldots, m_k} (m_1-2)!\cdots (m_k-2)!\quad\mbox{for $n\geq 0$}. \end{equation} It is easy to check the following statement. \begin{proposition} $K_n$ is the number of kernel positions of rank $n$ in the {\em exception-free} variant of the flat Bernoulli game, where removing the entire word if $u_1<u_2,\ldots, u_n$ is also a valid move, and the empty word is a valid position. \end{proposition} Corollary~\ref{C_fBperm} may be rephrased as follows. \begin{corollary} \label{C_fBpermK} $K_n$ is the number of those permutations $\pi\in S_n$ for which there exists a set of indices $0=i_0<i_1<\cdots<i_{k+1}=n$ such that for each $j\in\{0,\ldots,k\}$ we have $\pi(i_{j+1})<\pi(i_j+1)<\pi(i_j+2), \pi(i_j+3),\ldots,\pi(i_{j+1}-1)$. \end{corollary} The generating function of the numbers $K_n$ is \begin{equation} \label{E_Kgen} \sum_{n=0}^{\infty} \frac{K_n}{n!} t^n =\frac{1}{(1-t)(1-\ln(1-t))}. \end{equation} This formula may be derived not only from $K_n=\kappa_{n+1}/(n+1)$ and (\ref{E_fBgen}), but also from Corollary~\ref{C_fBpermK} and the compositional formula for exponential generating functions~\cite[Thm.\ 5.5.4]{Stanley-EC2}. We only need to observe that $$ \frac{1}{(1-t)(1-\ln(1-t))}=\frac{1}{1-t}\circ \left(t+(1-t)\ln(1-t)\right), $$ where $$ \frac{1}{1-t}=\sum_{n=0}^{\infty} \frac{n!
t^n}{n!} $$ is the exponential generating function of linear orders, whereas $$ t+(1-t)\ln(1-t)=-t\ln(1-t)-\left(-\ln(1-t)-t\right) =\sum_{n=1}^{\infty}\frac{t^{n+1}}{n}-\sum_{n=2}^{\infty}\frac{t^{n}}{n} =\sum_{n=2}^{\infty}\frac{(n-2)! t^{n}}{n!} $$ is the exponential generating function of linear orders of $\{1,\ldots,n\}$, listing $1$ last and $2$ first. By taking the antiderivative on both sides of (\ref{E_Kgen}) we obtain $$ \sum_{n=0}^\infty \frac{K_n}{(n+1)!} t^{n+1}=\int \frac{1}{(1-t)(1-\ln(1-t))}\ dt=\ln(1-\ln(1-t))+K_{-1}. $$ Introducing $K_{-1}:=0$, the numbers $K_{-1}, K_0,K_1,\ldots$ are listed as sequence A089064 in the On-Line Encyclopedia of Integer Sequences~\cite{OEIS}. There we may also find the formula \begin{equation} \label{E_st} K_n=(-1)^{n}\sum_{k=1}^{n+1} s(n+1,k)\cdot (k-1)! \end{equation} expressing them in terms of the Stirling numbers of the first kind. Using the well-known formulas $$ \sum_{k=1}^{n+1} s(n+1,k) x^k=x(x-1)\cdots (x-n) \quad \mbox{and} \quad n!=\int_0^{\infty} x^n e^{-x}\ dx, $$ Equation (\ref{E_st}) is equivalent to \begin{equation} \label{E_Kint} K_n=(-1)^{n}\int_0^{\infty} (x-1)\cdots(x-n) e^{-x}\ dx. \end{equation} This formula may be directly verified by substituting it into the left hand side of (\ref{E_Kgen}) and obtaining $$ \int_{0}^{\infty} e^{-x} \sum_{n=0}^{\infty}\binom{x-1}{n}(-t)^n\ dx =\int_{0}^{\infty} e^{-x} (1-t)^{x-1}\ dx =\frac{1}{(1-t)(1-\ln(1-t))}. $$ We conclude this section with an intriguing conjecture. By inspection of (\ref{E_fBgen}) and (\ref{E_Kgen}) we obtain the following formula. \begin{lemma} For $n\geq 1$, $$a_{n}:=(-1)^{n}\frac{(\kappa_{n+1}-(n+1)\cdot \kappa_{n})}{n+1} =(-1)^{n}\left(K_n-n\cdot K_{n-1}\right) $$ is the coefficient of $t^{n}/n!$ in $1/(1-\ln(1+t))$. \end{lemma} The numbers $a_0,a_1,\ldots $ are listed as sequence A006252 in the On-Line Encyclopedia of Integer Sequences~\cite{OEIS}.
The first $11$ entries are positive, then $a_{12}=-519312$ is negative, and the subsequent entries seem to have alternating signs. The conjecture that this alternation continues indefinitely may be rephrased as follows. \begin{conjecture} \label{C_novice} For $n\geq 12$ we have $n\cdot\kappa_{n-1}> \kappa_n$. Equivalently, $n\cdot K_{n-1}>K_n$ holds for $n\geq 11$. \end{conjecture} We may call Conjecture~\ref{C_novice} the {\em novice's chance}. Imagine that the first player asks a novice friend to replace him or her for just the first move in a flat Bernoulli game starting from a random position of rank $n\geq 12$. If Conjecture~\ref{C_novice} is correct then the novice could simply remove the last letter, because the number of nonkernel positions in which this is the first move of the first player's winning strategy still exceeds the number of all kernel positions. We should note that for the original Bernoulli game a novice has no such chance. In that game the removal of a single letter at the end of both words is not always a valid move, but we could advise our novice to remove the last letters at the end of both words if this is a valid move and make a random valid move otherwise. Our novice would have a chance if $$ \kappa_{n-1}\cdot \left(n(n+1)-\binom{n+1}{2}\right) =\kappa_{n-1}\cdot \binom{n+1}{2}\geq \kappa_n $$ were true for all large $n$. However, it is known~\cite[\S 93]{Jordan} that we have \begin{equation} \label{E_b2bound} \frac{n-2}{n}\left|\frac{b_{n-1}}{(n-1)!}\right| <\left|\frac{b_{n}}{n!}\right| < \frac{n-1}{n}\left|\frac{b_{n-1}}{(n-1)!}\right|,\quad\mbox{implying} \end{equation} $$ (n-2)(n+1)\kappa_{n-1}<\kappa_n<(n-1)(n+1)\kappa_{n-1}. $$ On the page of A006252 in~\cite{OEIS} we find that the coefficient of $t^{n}/n!$ in $1/(1-\ln(1+t))$ is \begin{equation} \label{E_novicest} \frac{(-1)^{n}(\kappa_{n+1}-(n+1)\cdot \kappa_{n})}{n+1} =(-1)^{n}\left(K_n-n\cdot K_{n-1}\right)=\sum_{k=0}^{n} s(n,k) k!
\end{equation} Equivalently, \begin{equation} \label{E_noviceint} \frac{(-1)^{n}(\kappa_{n+1}-(n+1)\cdot \kappa_{n})}{n+1}= (-1)^{n}\left(K_n-n\cdot K_{n-1}\right)=\int_0^{\infty} x(x-1)\cdots (x-n+1) e^{-x}\ dx. \end{equation} Equations (\ref{E_novicest}) and (\ref{E_noviceint}) may be verified the same way as the analogous formulas (\ref{E_st}) and (\ref{E_Kint}). Therefore we may rewrite Conjecture~\ref{C_novice} as follows: \begin{equation} \label{E_novice} (-1)^{n+1}\int_0^{\infty} x(x-1)\cdots(x-n+1) e^{-x}\ dx >0\quad\mbox{holds for $n\geq 11$.} \end{equation} This form indicates the complication that arises compared to the original Bernoulli game. To prove (\ref{E_b2bound}), Jordan~\cite[\S 93]{Jordan} uses the formula $$ \frac{b_n}{n!}=\int_0^1 \binom{x}{n} \ dx $$ and is able to use the mean value theorem to compare $b_n/n!$ with $b_{n+1}/(n+1)!$, because the function $\binom{x}{n}$ does not change sign on the interval $(0,1)$. Proving Equation (\ref{E_novice}) is equivalent to a similar estimate of the change of the integral $(-1)^n\int_0^{\infty} (x-1)\cdots(x-n) e^{-x}\ dx$ as we increase $n$; however, this integrand changes sign several times on the interval $(0,\infty)$. \section{Concluding remarks} \label{s_c} Conjecture~\ref{C_novice}, if true, would be an intriguing example of a sequence ``finding its correct sign pattern'' after a relatively long ``exceptional initial segment''. Many such examples seem to exist in analysis, and it is perhaps time for combinatorialists to start developing a method of proving some of them. Some of the most interesting questions arising in connection with this paper seem to be related to the instant Bernoulli game, presented in Section~\ref{s_MR}. The fact that our decomposition into elementary kernel factors is bijectively equivalent to King's~\cite{King} construction raises the suspicion that this decomposition may also have an algebraic importance beyond the combinatorial one.
This suspicion is underscored by the fact that the correspondence between our decomposition and King's is via some modified inversion table, whereas Aguiar and Sottile~\cite{Aguiar-Sottile} highlight the importance of the weak order to the structure of the Malvenuto-Reutenauer Hopf algebra, first pointed out by Loday and Ronco~\cite{Loday-Ronco}. The weak order is based on comparing the sets of inversions of two permutations. Depending on the way we choose the basis of the self-dual Malvenuto-Reutenauer Hopf algebra, expressing one of the product and coproduct seems easy in terms of place-based non-inversion tables, whereas the other seems very difficult. If we choose the representation considered by Poirier and Reutenauer~\cite{Poirier-Reutenauer} where connected permutations form the free algebra basis, then the product of two permutations is easily expressed in terms of PNTs; thus the elementary kernel factor decomposition might indicate the presence of a larger algebra ``looming on the horizon'' in which the multiplicative indecomposables of the Malvenuto-Reutenauer Hopf algebra become decomposable. We should also mention that the decomposition that is equivalent to the removal of the last elementary kernel factor is only the first phase in King's construction~\cite{King}; a lot of hard work is done afterwards to find the transposition Gray code, while recursing on these reduction steps. Our presentation allows us to better visualize King's entire ``rough'' decomposition ``at once'' and thus may be suitable to attack the open question of finding an adjacent transposition Gray code. Finally, the degenerate Bernoulli game indexed with $(p,q)$~\cite[\S 6]{Hetyei-EKP} can also be shown to be isomorphic to a strongly Bernoulli type truncation game. For this game, the number of kernel positions of rank $n$ is $(-q)^n (n+1)!\beta_n(p/q,0)$~\cite[Thm.\ 6.2]{Hetyei-EKP}, where $\beta_n(p/q,0)$ is a degenerate Bernoulli number.
We leave the detailed analysis of this game to a future occasion. \section*{Acknowledgements} This work was completed while the author was on reassignment of duties sponsored by the University of North Carolina at Charlotte. The author wishes to thank two anonymous referees for helping substantially improve both the presentation and the contents of this paper and Christian Krattenthaler for remembering the exercise in Stanley's book~\cite[Ch.\ 1, Exercise 32b]{Stanley-EC1} on strong fixed points.
https://arxiv.org/abs/1507.04465
Permutations fixing a k-set
Let $i(n,k)$ be the proportion of permutations $\pi\in\mathcal{S}_n$ having an invariant set of size $k$. In this note we adapt arguments of the second author to prove that $i(n,k) \asymp k^{-\delta} (1+\log k)^{-3/2}$ uniformly for $1\leq k\leq n/2$, where $\delta = 1 - \frac{1 + \log \log 2}{\log 2}$. As an application we show that the proportion of $\pi\in\mathcal{S}_n$ contained in a transitive subgroup not containing $\mathcal{A}_n$ is at least $n^{-\delta+o(1)}$ if $n$ is even.
\section{Introduction and notation} Let $k,n$ be integers with $1\le k\le n/2$ and select a permutation $\pi \in \cS_n$, that is to say a permutation of $\{1,\dots, n\}$, at random. What is $i(n,k)$, the probability that $\pi$ fixes some set of size $k$? Equivalently, what is the probability that the cycle decomposition of $\pi$ contains disjoint cycles with lengths summing to $k$? Somewhat surprisingly, $i(n,k)$ has only recently been at all well understood in the published literature. The lower bound $\lim_{n\to\infty} i(n,k) \gg \log k/k$ is contained in a paper of Diaconis, Fulman and Guralnick \cite{dfg08}, while the upper bound $i(n,k) \ll k^{-1/100}$ may be found in work of {\L}uczak and Pyber \cite{lp93}. (These authors did not make any special effort to optimise the constant 1/100, but their method does not lead to a sharp bound.) Here and throughout $X\ll Y$ means $X\leq CY$ for some constant $C>0$. The notation $X\order Y$ will be used to mean $X\ll Y$ and $X\gg Y$. In the limit as $n \rightarrow \infty$ with $k$ fixed, a much better bound was very recently obtained by Pemantle, Peres, and Rivin \cite[Theorem 1.7]{PPR}. They prove that $\lim_{n \rightarrow \infty} i(n,k) = k^{-\delta + o(1)}$, where $$\delta = 1 - \frac{1 + \log \log 2}{\log 2}\approx 0.08607.$$ They also note a connection between the problem of estimating $i(n,k)$ and a certain number-theoretic problem, an analogy that will also be key to our work. The same connection has also been observed by Diaconis and Soundararajan~\cite[page 14]{sound-ams}. Let us explain the connection with number theory. There is a well known analogy (see, for example, \cite{ABT}) between the cycle decomposition of a random permutation and the prime factorisation of a random integer.
Specifically, if $\pi$ is a random permutation with cycles of lengths $a_1 \le a_2 \le \dots$, and if $n$ is a random integer with prime factors $p_1 < p_2 < \dots$ then one expects both sequences $\log a_1, \log a_2, \dots$ and $\log \log p_1 , \log \log p_2 , \dots$ to behave roughly like Poisson processes with intensity $1$. (Of course, this does not make sense if taken too literally, since the $a_i$ are all integers, and the $p_i$ are all primes, plus we have not specified exactly what we mean by either a ``random permutation'' or a ``random integer''.) The condition that $a_{i_1} + \dots + a_{i_m} = k$ (that is, that a particular set of cycle lengths sum to $k$) is, because the $a_i$ are all integers, equivalent to $k \leq a_{i_1} + \dots + a_{i_m} < k+1$. Pursuing the analogy between cycles and primes, we may equate this with the condition $k \leq \log p_{i_1} + \dots + \log p_{i_m} \leq k+1$, or in other words $e^k \leq p_{i_1} \cdots p_{i_m} \leq e^{k+1}$. This then suggests that we might compare $i(n,k)$ with $\tilde i(n,k)$, the probability that a random very large integer (selected uniformly from $[e^n, e^{n+1})$, say) has a divisor in the range $[e^k, e^{k+1})$. This last problem has a long history, originating as a problem of Besicovitch~\cite{Bes} in 1934, and was solved (up to a constant factor) by the second author~\cite{ford,ford-2}. In those papers it was shown that $\tilde i(n,k) \asymp k^{-\delta} (1 + \log k)^{-3/2}$ uniformly for $k\le n/2$, where $\delta$ is the constant mentioned above. In this paper we use the same method to prove the same rate of decay for $i(n,k)$. \begin{theorem}\label{mainthm} $i(n,k) \asymp k^{-\delta} (1+\log k)^{-3/2}$ uniformly for $1\le k\le n/2$. \end{theorem} Since $i(n,n-k)=i(n,k)$, Theorem \ref{mainthm} establishes the order of $i(n,k)$ for all $n,k$. Theorem~\ref{mainthm} has implications for a conjecture of Cameron related to random generation of the symmetric group. 
Cameron conjectured that the proportion of $\pi\in\cS_n$ contained in a transitive subgroup not containing $\mathcal{A}_n$ tends to zero: this was proved by {\L}uczak and Pyber~\cite{lp93} using their bound $i(n,k)\ll k^{-1/100}$. Cameron further guessed that this proportion might decay as fast as $n^{-1/2+o(1)}$ (see~\cite[Section 5]{lp93}). However Theorem~\ref{mainthm} has the following corollary. \begin{corollary}\label{trans} The proportion of $\pi\in\cS_n$ contained in a transitive subgroup not containing $\mathcal{A}_n$ is $\gg n^{-\delta}(\log n)^{-3/2}$, provided that $n$ is even and greater than $2$. \end{corollary} \begin{proof} By Theorem~\ref{mainthm} the proportion of $\pi\in\cS_n$ fixing a set $B_1$ of size $n/2$ is $\order n^{-\delta}(\log n)^{-3/2}$. Such a permutation $\pi$ must also fix the set $B_2=\{1,\dots,n\}\setminus B_1$, and thus preserve the partition $\{B_1,B_2\}$ of $\{1,\dots,n\}$. Since $|B_1|=|B_2|$, the set of all $\tau$ preserving this partition is a transitive subgroup not containing $\mathcal{A}_n$. \end{proof} We believe that a matching upper bound $O(n^{-\delta}(\log n)^{-3/2})$ holds in Corollary~\ref{trans}, and that for odd $n$ there is an upper bound of the form $O(n^{-\delta'})$ for some $\delta'>\delta$. We intend to return to this problem in a subsequent paper. Whether or not a permutation $\pi$ has a fixed set of size $k$ depends only on the vector $\mathbf{c}=(c_1(\pi),c_2(\pi),\ldots,c_k(\pi))$ listing the number of cycles of length $1,2,\ldots,k$, respectively, in $\pi$. Crucial to our argument is the well known fact (see, e.g., \cite{ABT}) that for \emph{fixed} $k$, $\mathbf{c}$ has limiting distribution (as $n\to\infty$) equal to $\mathbf{X}_k=(X_1,X_2,\dots,X_k)$, where the $X_i$ are independent and $X_i$ has Poisson distribution with parameter $1/i$ (for short, $X_i \deq \Pois(1/i)$). A simple corollary is that the limit $i(\infty,k) = \lim_{n \rightarrow \infty} i(n,k)$ exists for every $k$. 
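Since only the cycle type matters, $i(n,k)$ can be computed exactly for small $n$ by exhaustive enumeration. The following quick sanity check (ours, not from the paper; Python, encoding the achievable subset sums of the cycle lengths as a bitmask, and using the standard derangement count $D_8=14833$) confirms the symmetry $i(n,k)=i(n,n-k)$ and the value of $i(8,1)$:

```python
from itertools import permutations

def cycle_lengths(p):
    """Cycle type of a permutation p of {0,...,n-1} given in one-line form."""
    seen, lens = [False] * len(p), []
    for i in range(len(p)):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = p[j]
                length += 1
            lens.append(length)
    return lens

def fixed_set_counts(n):
    """counts[k] = number of pi in S_n with an invariant set of size k."""
    counts = [0] * (n + 1)
    for p in permutations(range(n)):
        sums = 1  # bitmask of achievable subset sums of the cycle lengths
        for length in cycle_lengths(p):
            sums |= sums << length
        for k in range(n + 1):
            if sums >> k & 1:
                counts[k] += 1
    return counts

n = 8
counts = fixed_set_counts(n)
assert counts[1] == 40320 - 14833   # i(8,1) = 1 - D_8/8!, with D_8 = 14833
assert counts == counts[::-1]       # the symmetry i(n,k) = i(n,n-k)
```

Here $i(8,1)=25487/40320=0.6321\ldots$, already in line with the limiting value $1-1/e$ computed below.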
Define, for any finite list $\mathbf{c}=(c_1,c_2,\ldots,c_k)$ of non-negative integers, the quantity \begin{equation}\label{Lc} \mathscr{L}(\mathbf{c}) = \big\{ m_1 + 2m_2 + \dots +km_k: 0 \leq m_j \leq c_j \; \; \mbox{for $j =1,2, \dots, k$}\big\}. \end{equation} We immediately obtain that \begin{equation}\label{fund-inf} i(\infty,k) = \mathbb{P}(k \in \mathscr{L}(\mathbf{X}_k)). \end{equation} This makes it easy to compute $i(\infty,k)$ for small values of $k$. For example we have the extremely well known result (derangements) that \[ i(\infty,1) = \mathbb{P}(X_1 \geq 1) = 1 - \frac{1}{e} \approx 0.6321,\] and the less well known fact that \[ i(\infty,2) = 1 - \mathbb{P}(X_1 = X_2 = 0) - \mathbb{P}(X_1 = 1, X_2 = 0) = 1- 2e^{-3/2} \approx 0.5537.\] When $k$ is allowed to grow with $n$, the vector $\mathbf{c}$ is still close to being distributed as $\mathbf{X}_k$, the total variation distance between the two distributions decaying rapidly as $n/k\to\infty$ \cite{AT92}. This fact is, however, not strong enough for our application. We must establish an approximate analog of \eqref{fund-inf}, showing that $i(n,k)$ has about the same order as $\mathbb{P}(k \in \mathscr{L}(\mathbf{X}_k))$, uniformly in $k\le n/2$. Instead of directly estimating the probability of a single number lying in $\mathscr{L}(\mathbf{X}_k)$, however, we apply a local-to-global principle used in \cite{ford,ford-2} to reduce the problem to studying the \emph{size} of $\mathscr{L}(\mathbf{X}_k)$. We expect a positive proportion of the elements of $\mathscr{L}(\mathbf{X}_k)$ to lie in the range $[\frac{1}{10}k, 10k]$ (say). The reason for this is that we expect to find $\sim 1$ index $j$ for which $X_j > 0$ in any interval $[e^i, e^{i+1}]$.
In particular, it is fairly likely that there is some such $j$ with $j > k/10$, in which case at least half of the sums $m_1 + 2m_2 + \dots + km_k$ will be $\geq k/10$ (those with $m_j > 0$), yet at the same time it is reasonably likely that \emph{all} elements of $\mathscr{L}(\mathbf{X}_k)$ are $< 10k$. Assuming this heuristic is reasonable, we might expect that \begin{equation}\label{heur} i(n,k) \order \mathbb{P}(k \in \mathscr{L}(\mathbf{X}_k)) \asymp \frac{1}{k}\mathbb{E}|\mathscr{L}(\mathbf{X}_k)|. \end{equation} In Section \ref{first-red}, we will show that \eqref{heur} does indeed hold. The main result of that section is the following. \begin{proposition}\label{first-reduction}$i(n,k) \order \frac{1}{k} \mathbb{E} |\mathscr{L}(\mathbf{X}_k)|$ uniformly for $1 \le k\le n/2$. \end{proposition} Our main theorem follows immediately from this and the next proposition, whose proof occupies Sections \ref{lower} (lower bound) and \ref{upper} (upper bound). Note that in these propositions we operate with the sequence $\mathbf{X}_k = (X_1,X_2,\dots,X_k)$ of genuinely independent random variables, which is independent of $n$. \begin{proposition}\label{average-result} $\mathbb{E} |\mathscr{L}(\mathbf{X}_k)| \asymp k^{1-\delta}(1+\log k)^{-3/2}$. \end{proposition} To briefly explain the origin of the exponent $\delta$, we first observe the simple inequalities \begin{equation}\label{sLup1} |\mathscr{L}(\mathbf{X}_k)| \le \min \( 2^{X_1+\cdots + X_k}, 1+X_1+2X_2+\cdots+kX_k \). \end{equation} Assume this is close to being sharp with reasonably high probability, and condition on $Y=X_1+\cdots+X_k$, the number of cycles of length at most $k$ in a random permutation. Following our earlier heuristic, the second term on the right side of \eqref{sLup1} is $\asymp k$ most of the time, and so there is a change of behaviour around $Y=\frac{\log k}{\log 2}+O(1)$. 
Since $Y$ is Poisson with parameter $\log k+ O(1)$, a short calculation reveals that $\mathbb{E} \min (2^Y,k) \order k^{1-\delta} (\log k)^{-1/2}$. We err in the logarithmic term due to the fact that \eqref{sLup1} is only sharp with probability about $1/\log k$, a fact that is related to order statistics \cite[Sec.~4]{ford-2}. Let us finally mention two open questions. \begin{question} Is there some constant $C$ such that $i(\infty,k) \sim C k^{-\delta} (\log k)^{-3/2}$? \end{question} It would be surprising if this were not the case. \begin{question} Is $i(\infty,k)$ monotonically decreasing in $k$? \end{question} Data collected by Britnell and Wildon~\cite{britnell-wildon} shows that this is so at least as far as $i(\infty,30)$, and of course a positive answer is plausible just from the fact that $i(\infty,k) \to 0$. \section{A permutation sieve}\label{sec-perm-sieve} As mentioned in the introduction, the asymptotic distribution (as $n \rightarrow \infty$ with $k$ fixed) of the cycle lengths $(c_1(\pi), \dots, c_k(\pi))$ of a random $\pi \in \cS_n$ is that of $\mathbf{X}_k = (X_1,\dots, X_k)$, where the $X_i$ are independent with $X_i \deq \Pois(1/i)$. In the nonasymptotic regime, where $n$ may be as small as $2k$, this property is lost. We do, however, have the following substitute which will suffice for this paper. \begin{proposition}\label{sieve2} Let $1\le m<n$ and $c_1,\ldots,c_m$ be non-negative integers satisfying $$c_1+2c_2+\cdots +mc_m\le n-m-1.$$ Suppose that $\pi \in \cS_n$ is chosen uniformly at random. Then \[ \frac{1}{(2m+2) \prod_{i=1}^m c_i! i^{c_i}} \leq \mathbb{P}(c_1(\pi) = c_1,\dots, c_m(\pi) = c_m) \leq \frac{1}{(m+1) \prod_{i=1}^m c_i! i^{c_i}}.\] \end{proposition} We will prove this shortly, but first let us fix some notation. 
As every permutation $\pi \in \cS_n$ factors uniquely as a product of disjoint cycles, in keeping with the analogy with integers we say that any product of these cycles, including the empty product, is a \emph{factor} or \emph{divisor} of $\pi$. The sets induced by these factors are precisely the invariant sets of $\pi$. We make the following further definitions: \begin{itemize} \item $\mathcal{C}_{k,n}$ is the set of cycles of length $k$ in $\cS_n$; \item $|\sigma|$ is the length of any factor $\sigma$ (of some permutation in $\cS_n$); \item $\tau|\pi$ means that $\tau$ is a divisor of $\pi$. \end{itemize} The following lemma is a slight generalization of the well known formula of Cauchy. \begin{lemma}\label{Cauchy} Let $1\le m\le n$, and let $c_1,\ldots,c_m$ be non-negative integers with $t=c_1+2c_2+\cdots+mc_m\le n$. Then the number of ways of choosing $c_1+\cdots+c_m$ disjoint cycles consisting of $c_i$ cycles in $\cC_{i,n}$ for $1\le i\le m$ is \[ \frac{n!}{(n-t)!} \prod_{j=1}^m \frac{1}{c_j! j^{c_j}}. \] \end{lemma} \begin{proof} First count the number of ways of choosing the subsets that make up the cycles, and then multiply by the number of ways to arrange the elements of these subsets into cycles. The result is \[ \binom{n}{\underbrace{1 \cdots 1}_{c_1} \underbrace{2 \cdots 2}_{c_2} \cdots \underbrace{m\cdots m}_{c_m}} \frac{1}{c_1! \cdots c_m!} \, \times \, \prod_{j=1}^m (j-1)!^{c_j}, \] which simplifies to the claimed expression. \end{proof} Our next lemma is an analogue for permutations of a basic lemma from sieve theory. \begin{lemma}\label{sieve} Suppose that $m,n$ are integers with $1 \leq m \leq n$. Let $\pi \in \cS_n$ be chosen uniformly at random.
Then \[ \frac{1}{2m} \leq \mathbb{P}(\mbox{$\pi$ has no cycle of length $< m$}) \leq \frac{1}{m}.\] \end{lemma} \emph{Remarks.} Both upper and lower bounds are best possible, since trivially the probability in question is exactly $1/n$ when $n/2<m\le n$ (if a permutation has no cycle of length $< m$, with $m$ in this range, then it must be an $n$-cycle). In fact, it is not difficult to prove an asymptotic formula $ \sim \omega(n/m)/m$ ($n\to\infty$, $m\to\infty$, $m\le n$) for the probability in question, where $\omega$ is Buchstab's function and $\omega(u)\to e^{-\gamma}$ as $u\to\infty$ \cite[Theorem 2.2]{granville}. \begin{proof} (See the proof of \cite[Theorem 2.2]{granville}). We phrase the proof combinatorially rather than probabilistically; thus let $c(n,m)$ be the number of permutations of $\cS_n$ that have no cycles of length $<m$. We proceed by induction on $n$, the result being trivial when $n=1$. Let $\sum^*$ denote a sum over permutations with no cycle of length $<m$. Using the fact that the sum of lengths of cycles in a permutation in $\cS_n$ is $n$, we get \begin{align*} n c(n,m) &= {\sum_{\pi \in \cS_n}}^* n = {\sum_{\pi \in \cS_n}}^* \ssum{\sigma|\pi \\ \sigma\text{ a cycle}} |\sigma| = \sum_{k\ge m} k \sum_{\sigma \in \cC_{k,n}} {\ssum{\pi \in \cS_n \\ \sigma | \pi}}^* 1 \\ &= \sum_{m\le k\le n-m} k \sum_{\sigma \in \cC_{k,n}} c(n-k,m) + \sum_{\sigma\in \cC_{n,n}} n \\ &= n! + \sum_{m\le k\le n-m} \frac{n!}{(n-k)!} c(n-k,m). \end{align*} If $\frac{n}{2} <m \le n$, then $c(n,m)=\frac{n!}{n}$ and the result follows. Otherwise, by the induction hypothesis, \[ n c(n,m) \le n! + \sum_{m\le k\le n-m} \frac{n!}{m} = n! \( 1 + \frac{n-2m+1}{m} \) \le \frac{n! \cdot n}{m} \] and \[ n c(n,m) \ge n! + \sum_{m\le k\le n-m} \frac{n!}{2m} = n! \( 1 + \frac{n-2m+1}{2m} \) \ge \frac{n! \cdot n}{2m}. \qedhere \] \end{proof} It is now a simple matter to establish Proposition \ref{sieve2}. 
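Before doing so, we note that Lemma~\ref{sieve} is easy to confirm for small $n$ by exhaustive enumeration. The following check (ours, independent of the argument above; Python) verifies both bounds for every $m$ when $n=8$:

```python
from collections import Counter
from itertools import permutations
from math import factorial

def min_cycle_length(p):
    """Length of the shortest cycle of the permutation p in one-line form."""
    seen, best = [False] * len(p), len(p)
    for i in range(len(p)):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:
                seen[j] = True
                j = p[j]
                length += 1
            best = min(best, length)
    return best

n = 8
dist = Counter(min_cycle_length(p) for p in permutations(range(n)))
for m in range(1, n + 1):
    c_nm = sum(v for mc, v in dist.items() if mc >= m)  # c(n,m)
    # the lemma: n!/(2m) <= c(n,m) <= n!/m
    assert factorial(n) / (2 * m) <= c_nm <= factorial(n) / m
```

For instance $c(8,2)=D_8=14833$ (no fixed points) and $c(8,8)=7!=5040$ (the $8$-cycles), both comfortably inside the stated bounds.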
\begin{proof}[Proof of Proposition \ref{sieve2}] Let $t=c_1+2c_2+\cdots+mc_m$. For each choice of the $c_1+\cdots+c_m$ disjoint cycles consisting of $c_j$ cycles from $\cC_{j,n}$ ($1\le j\le m$), there are $c(n-t,m+1)$ permutations $\pi\in \cS_n$ containing these cycles as factors and no other cycles of length at most $m$, where $c(n-t, m+1)$ is the number of permutations on $n-t$ letters with no cycle of length $< m+1$, as in the proof of Lemma \ref{sieve}. Applying Lemmas \ref{Cauchy} and \ref{sieve} completes the proof. \end{proof} \section{The local-to-global principle}\label{first-red} As in the introduction, let $X_1, X_2, \dots$ be independent random variables with distribution $X_j \deq \Pois(1/j)$. We record here that \begin{equation}\label{ELXk} \mathbb{E} |\mathscr{L}(\mathbf{X}_k)| = \sum_{c_1,\ldots,c_k\ge 0} |\mathscr{L}(\mathbf{c})| \mathbb{P} (X_1=c_1)\cdots \mathbb{P} (X_k=c_k) = e^{-h_k} \sum_{c_1,\ldots,c_k\ge 0} \frac{|\mathscr{L}(\mathbf{c})|}{\prod_{i=1}^k c_i! i^{c_i}}, \end{equation} where $h_k=1+\frac12 + \cdots + \frac{1}{k}$. We also record the inequalities \begin{equation}\label{hk} \log(k+1) \le h_k \le 1 + \log k, \qquad (k\geq 1) \end{equation} which may be proved, for example, by summing the obvious inequalities $\frac{1}{n+1} \le \int_n^{n+1} dt/t \le \frac{1}{n}$. \begin{lemma}\label{Lclem} Let $k\in \mathbb{N}$, $c_1,\ldots,c_k\ge 0$, $I\subset [k]$ and $c_i'=c_i$ for $i\not\in I$, $c'_i=0$ for $i\in I$. Then $$|\mathscr{L}(\mathbf{c})| \le |\mathscr{L}(\mathbf{c}')| \prod_{i\in I} (c_i+1).$$ \end{lemma} \begin{proof} Clearly, $\mathscr{L}(\mathbf{c})$ is the union of $\prod_{i\in I} (c_i+1)$ translates of $\mathscr{L}(\mathbf{c}')$. \end{proof} \begin{lemma}\label{lem555} Suppose that $\ell' \leq \ell$.
Then \[ \frac{1}{\ell}\mathbb{E} |\mathscr{L}(\mathbf{X}_{\ell})| \le \frac{1}{\ell'} \mathbb{E} |\mathscr{L}(\mathbf{X}_{\ell'})|.\] \end{lemma} \begin{proof} By Lemma \ref{Lclem}, $|\mathscr{L}(\mathbf{X}_{\ell})| \leq (1+X_{\ell' + 1})\cdots(1+X_{\ell})|\mathscr{L}(\mathbf{X}_{\ell'})|.$ Thus by independence, \[ \mathbb{E}|\mathscr{L}(\mathbf{X}_{\ell})| \le \bigg( \prod_{i = \ell'+1}^{\ell} \mathbb{E} (1+X_i) \bigg) \mathbb{E} |\mathscr{L}(\mathbf{X}_{\ell'})| = \frac{\ell+1}{\ell'+1} \mathbb{E} |\mathscr{L}(\mathbf{X}_{\ell'})| \leq \frac{\ell}{\ell'}\mathbb{E}|\mathscr{L}(\mathbf{X}_{\ell'})|. \qedhere \] \end{proof} We also need to compute the mixed moments of $|\mathscr{L}(\mathbf{X}_k)|$ with powers of some $X_j$. Recall that the $m$th moment $\mathbb{E} X^m$, if $X \deq \Pois(1)$, is the $m$th Bell number $B_m$. The sequence of Bell numbers starts $1,2,5,15, 52, 203,\dots$. \begin{lemma}\label{lem-sum-gen} Suppose that $j_1,\dots, j_h \leq k$ are distinct integers and that $a_1,\dots, a_h$ are positive integers. Then \[ \mathbb{E} |\mathscr{L}(\mathbf{X}_k)| X_{j_1}^{a_1} \cdots X_{j_h}^{a_h} \leq \frac{C_{a_1,\dots, a_h}}{j_1 \dots j_h} \mathbb{E} |\mathscr{L}(\mathbf{X}_k)|.\] We may take $C_{a_1,\dots, a_h} = \prod_{i = 1}^h (B_{a_i}+B_{a_i+1})$. In particular we may take $C_1 = 3$. \end{lemma} \begin{proof} Define $\mathbf{X}'_k$ by putting $X'_{j_1}=\cdots=X'_{j_h}=0$ and $X'_j=X_j$ for all other $j$. By Lemma~\ref{Lclem}, we have \[ |\mathscr{L}(\mathbf{X}_k)| \leq |\mathscr{L}(\mathbf{X}'_k)|(1+X_{j_1})\cdots(1+X_{j_h}).\] Thus by independence \begin{equation}\label{eqmomentbound} \mathbb{E} |\mathscr{L}(\mathbf{X}_k)|X_{j_1}^{a_1}\cdots X_{j_h}^{a_h} \leq \mathbb{E}|\mathscr{L}(\mathbf{X}'_k)| \prod_{i=1}^h (\mathbb{E} X_{j_i}^{a_i} + \mathbb{E} X_{j_i}^{a_i+1}).
\end{equation} For $X \deq \Pois(\lambda)$ we have $\mathbb{E} X^m = \phi_m(\lambda)$, where $\phi_m(\lambda)$ is the $m$-th Touchard (or Bell) polynomial, a polynomial with positive coefficients and zero constant coefficient. If $\lambda \le 1$, it follows that $\mathbb{E} X^m \leq \lambda B_m$ for $m \geq 1$. The result follows immediately from this, \eqref{eqmomentbound}, and the observation that $ \mathbb{E}|\mathscr{L}(\mathbf{X}'_k)| \le \mathbb{E}|\mathscr{L}(\mathbf{X}_k)|$. \end{proof} We turn now to the proof of Proposition~\ref{first-reduction}. In what follows write $S(\mathbf{X}_\ell) = X_1 + 2X_2 + \dots + \ell X_{\ell} = \max \mathscr{L}(\mathbf{X}_\ell)$. We will treat the lower bound and upper bound in Proposition \ref{first-reduction} separately, the former being somewhat more straightforward than the latter. \begin{proof}[Proof of Proposition \ref{first-reduction} (Lower bound)] If $k<40$ then $i(n,k)\order i(\infty,k)\order 1\order \frac{1}{k}\mathbb{E}|\mathscr{L}(\mathbf{X}_k)|$, so we may assume $k\geq 40$. Let $r=\fl{k/20}$ (so $r\ge 2$), and consider the permutations $\pi=\a \sigma_1 \sigma_2 \beta \in \cS_n$, where $\sigma_1$ and $\sigma_2$ are cycles, $|\a| \le 4r < |\sigma_1| < |\sigma_2| < 16r$, all cycles in $\a$ have length $\le r$, all cycles in $\beta$ have length at least $16r$, and $\a \sigma_1 \sigma_2$ has a fixed set of size $k$. Because of the size restrictions on $\alpha,\sigma_1,\sigma_2$, if $\a$ is of type $\mathbf{c}=(c_1,\ldots,c_r)$, with $c_i$ cycles of length $i$ for $1\le i\le r$, then the last condition is equivalent to $k-|\sigma_1|- |\sigma_2| \in \mathscr{L}(\mathbf{c})$. In particular $|\sigma_1| +|\sigma_2|\le k$, and hence $n-|\a| - |\sigma_1|- |\sigma_2| \ge \frac45 k \ge 16r$. Fix $\mathbf{c}$ and $\ell_1, \ell_2$ with $4r<\ell_1<\ell_2<16r$ such that $k-\ell_1-\ell_2 \in \mathscr{L}(\mathbf{c})$. 
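The Poisson moment facts used in the proof of Lemma~\ref{lem-sum-gen} can be verified exactly via the expansion of the Touchard polynomials in Stirling numbers of the second kind, $\phi_m(\lambda)=\sum_j S(m,j)\lambda^j$. A short check of ours (Python, exact arithmetic):

```python
from fractions import Fraction

M = 8
# Stirling numbers of the second kind: S(m,j) = j*S(m-1,j) + S(m-1,j-1)
S2 = [[0] * (M + 1) for _ in range(M + 1)]
S2[0][0] = 1
for m in range(1, M + 1):
    for j in range(1, m + 1):
        S2[m][j] = j * S2[m - 1][j] + S2[m - 1][j - 1]

def poisson_moment(m, lam):
    """E X^m for X ~ Pois(lam), via the Touchard polynomial phi_m(lam)."""
    return sum(S2[m][j] * lam ** j for j in range(m + 1))

bell = [poisson_moment(m, 1) for m in range(M + 1)]
assert bell[1:7] == [1, 2, 5, 15, 52, 203]   # the sequence quoted in the text

# phi_m has no constant term, so E X^m <= lam * B_m whenever lam <= 1,
# which is the bound used in the proof with lam = 1/j
for j in range(1, 25):
    lam = Fraction(1, j)
    for m in range(1, M + 1):
        assert poisson_moment(m, lam) <= lam * bell[m]

assert bell[1] + bell[2] == 3   # the constant C_1 = 3 of Lemma lem-sum-gen
```

The same table also yields the constants $C_{1,1,1}=27$, $C_{1,2}=21$ and $C_3=20$ that appear in the case analysis later.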
By Proposition \ref{sieve2}, the probability that a random $\pi \in \cS_n$ has $c_i$ cycles of length $i$ ($1\le i\le r$), one cycle each of length $\ell_1, \ell_2$ and no other cycles of length $<16r$ is at least \[ \frac{1}{32r \ell_1 \ell_2 \prod_{i=1}^r c_i! i^{c_i}} \ge \frac{1}{2^{13} r^3 \prod_{i=1}^r c_i! i^{c_i}}. \] For any $\ell_1$ satisfying $4r+1 \le \ell_1 \le 8r-1$, there are $|\mathscr{L}(\mathbf{c})|$ admissible values of $\ell_2>\ell_1$ for which $k-\ell_1-\ell_2 \in \mathscr{L}(\mathbf{c})$, since $\max \mathscr{L}(\mathbf{c}) \le 4r \le k/5$. We conclude that \[ i(n,k) \ge \frac{4r-1}{2^{13} r^3} \ssum{c_1,\cdots,c_{r}\ge 0 \\ S(\mathbf{c}) \le 4r} \frac{|\mathscr{L}(\mathbf{c})|}{\prod_{i=1}^r c_i! i^{c_i}}. \] As in \eqref{ELXk}, the sum above equals $e^{h_r}\mathbb{E} |\mathscr{L}(\mathbf{X}_r)| 1_{S(\mathbf{X}_r)\le 4r}$. Hence, by \eqref{hk}, we see that \[ i(n,k) \ge \frac{1}{2^{11} r} \mathbb{E} |\mathscr{L}(\mathbf{X}_r)| 1_{S(\mathbf{X}_r)\le 4r}. \] To estimate this, we use the inequality \[ 1_{S(\mathbf{X}_r) \leq 4r} \geq 1 - \frac{S(\mathbf{X}_r)}{4r}.\] By Lemma \ref{lem-sum-gen} we have \[ \mathbb{E} |\mathscr{L}(\mathbf{X}_r)| S(\mathbf{X}_r) = \sum_{j = 1}^{r} \mathbb{E} |\mathscr{L}(\mathbf{X}_r)| j X_j \leq 3r \mathbb{E} |\mathscr{L}(\mathbf{X}_r)|.\] It follows that \[ i(n,k) \geq \frac{1}{2^{13} r} \mathbb{E} |\mathscr{L}(\mathbf{X}_r)|.\] Finally, the lower bound in Proposition \ref{first-reduction} is a consequence of this and Lemma \ref{lem555}\footnote{Strictly for the purposes of proving our main theorem, this appeal to Lemma \ref{lem555} is unnecessary. However, that lemma is straightforward and it is more aesthetically pleasing to have $\mathbb{E}|\mathscr{L}(\mathbf{X}_k)|$ in the lower bound for $i(n,k)$ rather than $\mathbb{E} |\mathscr{L}(\mathbf{X}_r)|$.}. 
\end{proof} \begin{proof}[Proof of Proposition \ref{first-reduction} (Upper bound)] Temporarily impose a total ordering on the set of all cycles $\bigcup_{k=1}^n \cC_{k,n}$, first ordering them by length, then imposing an arbitrary ordering of the cycles of a given length. Let $\pi\in \cS_n$ have an invariant set of size $k$. Let $k_1=k$ and $k_2=n-k$. Then $\pi=\pi_1 \pi_2$, where $\pi_j$ is a product of cycles which, all together, have total length $k_j$, for $j=1,2$. For some $j\in\{1,2\}$, the largest cycle in $\pi$, with respect to our total ordering, lies in $\pi_{3-j}$. Let $\sigma$ be the largest cycle in $\pi_j$, and note that $|\sigma|\leq\min(k_1,k_2)=k$. Write $\pi=\a \sigma \beta$, where $\a$ is the product of all cycles dividing $\pi$ which are smaller than $\sigma$ and $\beta$ is the product of all cycles which are larger than $\sigma$. In particular $|\beta| \geq |\sigma|$ since $\beta$ contains the largest cycle in $\pi$ as a factor, and thus $|\sigma| \le |\beta| = n - |\sigma| - |\a|$. By definition of $\sigma$ and $\alpha$, $\a \sigma$ has a divisor of size $k_j$. Suppose $|\sigma|=\ell$ and $\mathbf{c}=(c_1,c_2,\ldots,c_\ell)$ represents how many cycles $\a$ has of length $1,2,\ldots,\ell$, respectively. Then $k_j-\ell \in \mathscr{L}(\mathbf{c})$. For $\ell$ and $\mathbf{c}$ satisfying this last condition, the number of possible pairs $\a,\sigma$ is at most (by Lemma \ref{Cauchy}) \[ \frac{n!}{(n-|\a|-|\sigma|)!} \prod_{i<\ell} \frac{1}{c_i! i^{c_i}} \, \times \frac{1}{(c_\ell+1)! \ell^{c_{\ell}+1}} \le \frac{n!}{\ell (n-|\a|-|\sigma|)!} \prod_{i \le \ell} \frac{1}{c_i! i^{c_i}}. \] Given $\a$ and $\sigma$, since $|\sigma|\leq n-|\a|-|\sigma|$, Lemma~\ref{sieve} implies that the number of choices for $\beta$ is at most $(n-|\a|-|\sigma|)!/|\sigma|$. Thus \[ i(n,k) \le \sum_{j=1}^2 \sum_{\ell=1}^k \frac{1}{\ell^2} \ssum{c_1,\ldots,c_\ell \ge 0 \\ k_j-\ell \in \mathscr{L}(\mathbf{c})} \prod_{i \le \ell} \frac{1}{c_i!
i^{c_i}} = \sum_{j=1}^2 \sum_{c_1,\ldots,c_k \ge 0} \; \prod_{i \le k} \frac{1}{c_i! i^{c_i}} \ssum{m(\mathbf{c}) \le \ell \le k \\ k_j-\ell \in \mathscr{L}(\mathbf{c})} \frac{1}{\ell^2}, \] where $m(\mathbf{c})=\max\big(\{i:c_i>0\}\cup\{1\}\big)$. With $\mathbf{c}$ fixed, note that $\ell \ge \max(m(\mathbf{c}),k_j-S(\mathbf{c}))$. Also, the number of $\ell$ such that $k_j-\ell\in \mathscr{L}(\mathbf{c})$ is at most $|\mathscr{L}(\mathbf{c})|$. Thus, the innermost sum on the right side above is at most \[ \frac{|\mathscr{L}(\mathbf{c})|}{\max(m(\mathbf{c}),k_j-S(\mathbf{c}))^2}. \] Arguing as in \eqref{ELXk} and using \eqref{hk}, we thus see that \begin{equation}\label{upper-1} i(n,k) \le 2ek \, \mathbb{E} \, \frac{|\mathscr{L}(\mathbf{X}_k)|}{\max(m(\mathbf{X}_k),k-S(\mathbf{X}_k))^2}. \end{equation} To bound this we use the inequality \[ \frac{1}{\max(m,k-S)^2} \le \frac{4}{k^2} \(1 + \frac{S^2}{m^2}\), \] which can be checked in the cases $S\geq k/2$ and $S\leq k/2$ separately. It follows from this and \eqref{upper-1} that \begin{equation}\label{upper-1a} i(n,k) \leq 8e \frac1k \mathbb{E} |\mathscr{L}(\mathbf{X}_k)| + 8e \frac1k \mathbb{E}\frac{|\mathscr{L}(\mathbf{X}_k)|S(\mathbf{X}_k)^2}{m(\mathbf{X}_k)^2} . \end{equation} The first of these two terms is what we want, but the second requires a keener analysis. By conditioning on $m=m(\mathbf{X}_k)$ we have \begin{align*} \mathbb{E} \frac{|\mathscr{L}(\mathbf{X}_k)| S(\mathbf{X}_k)^2}{m(\mathbf{X}_k)^2} &= \sum_{m=1}^k \frac{1}{m^2} \ssum{c_1,\dots,c_m\ge 0 \\ c_m\ge 1} |\mathscr{L}(\mathbf{c})| S(\mathbf{c})^2 \mathbb{P} (\mathbf{X}_m=\mathbf{c}) \mathbb{P} (X_{m+1}=\cdots=X_{k}=0) \\ &= \sum_{m=1}^{k} \frac{1}{m^2} \mathbb{E} \, \mathbf{Y}_m S(\mathbf{X}_m)^2 1_{X_m\ge 1} \exp\bigg( - \sum_{j=m+1}^k \frac{1}{j} \bigg) \\ & \leq \frac{e}k \sum_{m=1}^k \frac1m \mathbb{E} \, \mathbf{Y}_m S(\mathbf{X}_m)^2 X_m.
\end{align*} Here we have written $\mathbf{Y}_m=|\mathscr{L}(\mathbf{X}_m)|$ for brevity, and in the last step we used the crude inequality $1_{X_m\ge 1} \le X_m$. Expanding $S(\mathbf{X}_m)^2 = (X_1 + 2X_2 + \dots + mX_m)^2$ and using \eqref{upper-1a}, we arrive at \begin{equation}\label{upper-1b} i(n,k) \ll \frac1k \mathbb{E} |\mathscr{L}(\mathbf{X}_k)| + \frac1{k^2} \sum_{m=1}^k \frac{1}{m} \sum_{i,i'=1}^m i i' \mathbb{E}\mathbf{Y}_m X_i X_{i'} X_m. \end{equation} The innermost sum is estimated using Lemma \ref{lem-sum-gen}, splitting into various cases depending on the set of distinct values among $i,i',m$. \begin{description} \item[Case 1] $i, i', m$ all distinct. Then $ii'\mathbb{E}\mathbf{Y}_m X_i X_{i'} X_m \leq \frac{C_{1,1,1}}{m}\mathbb{E} \mathbf{Y}_m = \frac{27}{m} \mathbb{E} \mathbf{Y}_m$. \item[Case 2] $i = i' \neq m$. Then $ii'\mathbb{E}\mathbf{Y}_m X_i X_{i'} X_m \leq \frac{C_{1,2}i}{m} \mathbb{E} \mathbf{Y}_m \leq C_{1,2}\mathbb{E}\mathbf{Y}_m = 21\mathbb{E} \mathbf{Y}_m$. \item[Case 3] $i = i' = m$. Then $ii'\mathbb{E}\mathbf{Y}_m X_i X_{i'} X_m \leq C_3 m \mathbb{E} \mathbf{Y}_m = 20 m \mathbb{E} \mathbf{Y}_m$. \item[Case 4] $i \neq i' = m$ or $i' \neq i = m$. In both cases $ii'\mathbb{E} \mathbf{Y}_m X_i X_{i'} X_m \leq 21\mathbb{E} \mathbf{Y}_m$. \end{description} Summing over all cases, it follows that \[ \sum_{i,i'=1}^m ii'\mathbb{E}\mathbf{Y}_m X_i X_{i'} X_m \ll m \mathbb{E} \mathbf{Y}_m. \] Since clearly $\mathbb{E} \mathbf{Y}_m \le \mathbb{E} \mathbf{Y}_k$ for every $m\leq k$ the result follows from this and \eqref{upper-1b}. \end{proof} \section{The lower bound in Proposition~\ref{average-result}}\label{lower} In this section we prove the lower bound in Proposition \ref{average-result}, and hence the lower bound in our main theorem. 
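As a numerical illustration of what is being claimed, one can estimate $\mathbb{E}|\mathscr{L}(\mathbf{X}_k)|$ by direct simulation (a Monte Carlo sketch of ours; the sampler, seed and sample sizes are arbitrary choices):

```python
import math
import random

def sample_L_size(k, rng):
    """Draw X_1,...,X_k independently with X_i ~ Pois(1/i); return |L(X_k)|."""
    sums = 1  # bitmask of the achievable values m_1 + 2*m_2 + ... + k*m_k
    for i in range(1, k + 1):
        lam = 1.0 / i
        x, p, u = 0, math.exp(-lam), rng.random()  # Poisson sample by inversion
        c = p
        while u > c:
            x += 1
            p *= lam / x
            c += p
        for _ in range(x):
            sums |= sums << i
    return bin(sums).count("1")

delta = 1 - (1 + math.log(math.log(2))) / math.log(2)
assert abs(delta - 0.08607) < 1e-4

rng = random.Random(2023)
for k in (4, 16, 64):
    trials = 4000
    est = sum(sample_L_size(k, rng) for _ in range(trials)) / trials
    print(k, est, k ** (1 - delta) * (1 + math.log(k)) ** (-1.5))
```

The proposition only claims agreement up to absolute constants, so the two printed columns should track each other's growth rather than coincide.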
We begin by noting that from \eqref{ELXk} and \eqref{hk} it follows that \begin{equation}\label{eq221} \mathbb{E} |\mathscr{L}(\mathbf{X}_k)| \ge \frac{1}{ek} \sum_{c_1,\ldots,c_k\ge 0} \frac{|\mathscr{L}(\mathbf{c})|}{\prod_{i=1}^k c_i! i^{c_i}}. \end{equation} If we fix $r=c_1+\cdots+c_k$, which we may think of as the number of cycles in a random permutation, then \begin{equation}\label{eq221a} \sum_{c_1+\cdots +c_k=r} \frac{|\mathscr{L}(\mathbf{c})|}{\prod_{i=1}^k c_i! i^{c_i}} = \frac{1}{r!} \sum_{a_1,\ldots,a_r=1}^k \frac{|\mathscr{L}^*(\mathbf{a})|}{a_1\cdots a_r}, \end{equation} where \begin{equation}\label{sLstar} \mathscr{L}^*(\mathbf{a})=\Big\{\sum_{i\in I} a_i : I \subset [r] \Big\}. \end{equation} The equality is most easily seen by starting from the right side and setting $c_i=|\{j:a_j=i\}|$ for each $i$: then $\mathscr{L}(\mathbf{c}) = \mathscr{L}^*(\mathbf{a})$, $\prod_{i = 1}^k i^{c_i} = a_1 \cdots a_r$, and each $\mathbf{c} = (c_1,\dots, c_k)$ comes from $\frac{r!}{c_1! \cdots c_k!}$ different choices of $a_1,\dots, a_r$. One may think of $a_1,\ldots,a_r$ as the (unordered) cycle lengths in a random permutation, in this case conditioned so that there are $r$ total cycles. Now let $J= \fl{\frac{\log k}{\log 2}}$ and suppose that $b_1,\ldots,b_J$ are arbitrary non-negative integers with sum $r$. Consider the part of the sum in which \[ b_i = \sum_{j=2^{i-1}}^{2^i-1} c_j \;\; (i=1,2,\ldots,J), \qquad c_j=0 \; (j>2^J-1). \] Equivalently, suppose there are exactly $b_i$ of the $a_j$ in each interval $[2^{i-1},2^i-1]$. Writing $\curly D(\mathbf{b}) = \prod_{i=1}^J \{ 2^{i-1}, \ldots, 2^{i}-1 \}^{b_i}$, we have \begin{equation}\label{eq222} \frac{1}{r!} \sum_{a_1,\ldots,a_r=1}^{2^J-1} \frac{|\mathscr{L}^*(\mathbf{a})|}{a_1\cdots a_r} = \sum_{b_1,\dots, b_J} \frac{1}{b_1! \cdots b_J!} \sum_{\d \in \curly D(\mathbf{b})} \frac{|\mathscr{L}^*(\d)|}{d_1\cdots d_r}.\end{equation} To see this, fix $b_1, \ldots, b_J$ and observe that there are $\frac{r!}{b_1!
\cdots b_J!}$ ways to choose which $b_i$ of the variables $a_1,\ldots,a_r$ lie in $[2^{i-1},2^{i}-1]$ for $1\le i\le J$. Combining \eqref{eq221}, \eqref{eq221a} and \eqref{eq222} gives \[ \mathbb{E} |\mathscr{L}(\mathbf{X}_k)| \gg \frac{1}{k} \sum_r \sum_{b_1+ \dots + b_J = r} \frac{1}{b_1! \cdots b_J!} \sum_{\d \in \curly D(\mathbf{b})} \frac{|\mathscr{L}^*(\d)|}{d_1 \cdots d_r}. \] Thus in particular one has \begin{equation}\label{eq223} \mathbb{E} |\mathscr{L}(\mathbf{X}_k)| \gg \frac{1}{k} \sum_{b_1+ \dots + b_J = J} \frac{1}{b_1! \cdots b_J!} \sum_{\d \in \curly D(\mathbf{b})} \frac{|\mathscr{L}^*(\d)|}{d_1 \cdots d_J}. \end{equation} (This may seem wasteful at first sight, but a more careful -- though here unnecessary -- analysis would reveal that the main contribution comes from $r = J + O(1)$, so in fact very little is lost.) In the light of this, the motivation for proving the following lemma is clear. \begin{lemma}\label{sumd} For any $\mathbf{b}=(b_1,\ldots,b_J)$ with $b_1 + \dots + b_J = J$ we have \[ \sum_{\d \in \curly D(\mathbf{b})} \frac{|\mathscr{L}^*(\d)|}{d_1\cdots d_J} \gg \frac{(2\log 2)^{J}}{\sum_{i=1}^J 2^{b_1+\cdots+b_i-i}}. \] \end{lemma} \begin{proof} Given $\ell \in \mathbb{N}$, let $R(\d, \ell)$ be the number of $I \subset [J]$ with $\ell = \sum_{i \in I} d_i$. (One should think of $R(\d,\ell)$ as the number of sets of cycles with lengths summing to precisely $\ell$ in a random permutation.) Then $\sum_{\ell} R(\d,\ell) = 2^{J}.$ Also, define $\lambda_i=\sum_{j=2^{i-1}}^{2^i-1} 1/j$ for $1\le i\le J$ (thus $\lambda_i \approx \log 2$).
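The approximation $\lambda_i\approx\log 2$ can be made precise: $\lambda_i$ decreases monotonically to $\log 2$, so in particular $\lambda_i\ge\log 2$ for every $i$, a fact used again below. A quick numerical confirmation (not part of the proof):

```python
import math

def lam(i):
    """lambda_i: sum of 1/j over the dyadic block 2^(i-1) <= j <= 2^i - 1."""
    return sum(1 / j for j in range(2 ** (i - 1), 2 ** i))

vals = [lam(i) for i in range(1, 13)]
# lambda_1 = 1 and the sequence decreases towards log 2 = 0.6931...
```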
By Cauchy-Schwarz, \begin{equation}\label{CauchyD} \begin{split} 2^{2J} \prod_{j=1}^J \lambda_j^{2b_j} &= \bigg( \sum_{\d\in\curly D(\mathbf{b})} \frac{1}{d_1\cdots d_J} \sum_\ell R(\d,\ell) \bigg)^2 \\ &=\bigg( \sum_{\d\in\curly D(\mathbf{b})} \frac{1}{d_1\cdots d_J} \sum_{\ell\in \mathscr{L}^*(\d)} R(\d,\ell) \bigg)^2 \\ &\le \bigg( \sum_{\d\in\curly D(\mathbf{b}), \ell} \frac{R(\d,\ell)^2}{d_1\cdots d_J} \bigg) \bigg( \sum_{\d\in\curly D(\mathbf{b})} \frac{|\mathscr{L}^*(\d)|}{d_1\cdots d_J} \bigg). \end{split} \end{equation} Our next aim is to establish an upper bound for the first sum on the right side. We have \begin{equation}\label{RS1} \sum_{\d\in\curly D(\mathbf{b}), \ell} \frac{R(\d,\ell)^2}{d_1\cdots d_J} = \sum_{I_1, I_2 \subset [J]} S(I_1,I_2), \end{equation} where \[ S(I_1,I_2) = \sum_{\substack{\d \in \curly D(\mathbf{b}) \\ \sum_{i\in I_1} d_i = \sum_{i\in I_2} d_i. } } \frac{1}{d_1 \cdots d_J}\] If $I_1=I_2$, then evidently $S(I_1,I_2) = \lambda_1^{b_1} \cdots \lambda_J^{b_J}$. If $I_1$ and $I_2$ are distinct, let $j=\max(I_1\triangle I_2)$ be the largest coordinate at which $I_1$ and $I_2$ differ. With all of the quantities $d_i$ fixed except for $d_j$, we see that $d_j$ is uniquely determined by the relation $\sum_{i\in I_1} d_i = \sum_{i\in I_2} d_i$. If we define $e(j)\in [J]$ uniquely by \[ b_1 + \cdots + b_{e(j)-1}+1 \le j \le b_1 + \cdots + b_{e(j)}, \] then $d_j \ge 2^{e(j)-1}$, regardless of the choice of $d_1,\dots, d_{j-1}, d_{j+1},\dots, d_J$ and thus \[ S(I_1,I_2) \leq \prod_{\substack{i = 1 \\ i \neq j}}^J(\sum_{d_i} \frac{1}{d_i}) \cdot \frac{1}{2^{e(j) - 1}} = \frac{\lambda_1^{b_1} \cdots \lambda_J^{b_J} \lambda_{e(j)}^{-1}}{2^{e(j)-1}} \ll \frac{\lambda_1^{b_1} \cdots \lambda_J^{b_J}}{2^{e(j)}}. \] (Here, the sums over $d_i$ are over the appropriate dyadic intervals required so that $\d \in \curly D(\mathbf{b})$.) 
Here we used the fact that $\lambda_i \asymp 1$; in fact one may note that $\lambda_{i} \geq \lambda_{i+1}$ for all $i$ (since $\frac{1}{n} \geq \frac{1}{2n} + \frac{1}{2n+1}$) and that $\lim_{i \rightarrow \infty} \lambda_i = \log 2$, so in fact $\lambda_i \geq \log 2$ for all $i$. Since the number of pairs of subsets $I_1,I_2\subset[J]$ with $\max(I_1\triangle I_2)=j$ is exactly $2^{J+j-1}$, we get from this and \eqref{RS1} that \begin{align*} \prod_{j=1}^J \lambda_j^{-b_j} \sum_{\d\in\curly D(\mathbf{b}), \ell} \frac{R(\d,\ell)^2}{d_1\cdots d_J} &\ll 2^J + 2^J \sum_{j=1}^J 2^{j-e(j)} \ = 2^J + 2^J \sum_{i=1}^J 2^{-i} \sum_{j:e(j)=i} 2^j\\ &\ll 2^J + 2^J \sum_{i=1}^J 2^{b_1+\cdots+b_i-i}\\ &\ll 2^J \sum_{i=1}^J 2^{b_1+\cdots+b_i-i}. \end{align*} Comparing with \eqref{CauchyD}, and using again that $\lambda_i \geq \log 2$, completes the proof. \end{proof} Combining Lemma \ref{sumd} and \eqref{eq223}, we obtain \begin{equation}\label{eqbeforecyclelemma} \mathbb{E} |\mathscr{L}(\mathbf{X}_k)| \gg \frac{(2\log 2)^J}{k} \sum_{b_1+\cdots+b_J=J} \frac{1}{b_1! \cdots b_J! \sum_{i=1}^J 2^{b_1+\cdots+ b_i-i}}. \end{equation} Somewhat surprisingly, the right hand side here can be evaluated explicitly using the ``cycle lemma'', as in~\cite{ford-2}. The key trick is to add an additional averaging over the $J$ cyclic permutations of $b_1,\ldots,b_J$ to the inner summation. \begin{lemma}\label{cyclelemma} Let $x_1,\dots,x_J$ be positive reals such that $x_1\cdots x_J = 1$. Then the average of $\big(\sum_{i=1}^J x_1\cdots x_i\big)^{-1}$ over cyclic permutations of $x_1,\dots,x_J$ is exactly $1/J$.
\end{lemma} \begin{proof} Reading indices modulo $J$, we have \[ \sum_{t=1}^J \frac{1}{\sum_{i=1}^J x_{t+1}\cdots x_{t+i}} = \sum_{t=1}^J \frac{x_1\cdots x_t}{\sum_{i=1}^J x_1\cdots x_{t+i}} = 1.\qedhere \] \end{proof} Applying the cycle lemma with $x_i = 2^{b_i - 1}$ gives (noting that cyclic permutation of the variables is a 1-1 map on the set of $(b_1,\dots, b_J)$ with $b_1 + \dots + b_J = J$) that \[ \sum_{b_1+\cdots+b_J=J} \frac{1}{b_1! \cdots b_J! \sum_{i=1}^J 2^{b_1+\cdots+ b_i-i}} = \frac{1}{J} \sum_{b_1+\cdots+b_J=J} \frac{1}{b_1!\cdots b_J!} = \frac{1}{J} \cdot \frac{J^J}{J!},\] the second equality being a consequence of the multinomial theorem. Substituting into \eqref{eqbeforecyclelemma}, and recalling that $J=\frac{\log k}{\log 2} + O(1)$, the lower bound in Proposition~\ref{average-result} now follows from Stirling's formula. \section{The upper bound in Proposition~\ref{average-result}}\label{upper} In this section we turn to the upper bound in Proposition \ref{average-result}, that is to say the bound \[ \mathbb{E} |\mathscr{L}(\mathbf{X}_k)| \ll k^{\alpha} (\log k)^{-3/2}.\] As with the lower bound, we condition on the number of cycles of length at most $k$ in a random permutation. Recall from \eqref{sLstar} the definition of $\mathscr{L}^*(\mathbf{a})$: \[ \mathscr{L}^*(\mathbf{a}) = \Big\{\sum_{i\in I} a_i : I \subset [r] \Big\}.\] From \eqref{ELXk}, \eqref{hk} and \eqref{eq221a} we have \begin{equation}\label{upper-start} \mathbb{E} |\mathscr{L}(\mathbf{X}_k)| \le \frac{1}{k} \sum_r \frac{1}{r!} \sum_{a_1,\ldots,a_r=1}^k \frac{|\mathscr{L}^*(\mathbf{a})|}{a_1\cdots a_r}. \end{equation} The most common way for $|\mathscr{L}^*(\mathbf{a})|$ to be small is for many of the $a_i$ to be small. To capture this, let $\tilde{a}_1, \tilde{a}_2,\ldots$ be the increasing rearrangement of the sequence $\mathbf{a}$, so that $\tilde{a}_1 \le \tilde{a}_2 \le \cdots$.
For any $j$ satisfying $0\le j\le r$, we have \[ \mathscr{L}^*(\mathbf{a}) \subset \left\{ m + \sum_{i\in I} \tilde{a}_i : 0\le m\le \sum_{i=1}^j \tilde{a}_i, \, I \subset \{j+1, \ldots, r\} \right\}, \] from which it follows immediately that \[ |\mathscr{L}^*(\mathbf{a})| \le G(\mathbf{a}), \] where \begin{equation}\label{LG} G(\mathbf{a}) = \min_{0\le j\le r} 2^{r-j} \( \tilde{a}_1 + \cdots + \tilde{a}_j + 1 \). \end{equation} It is reasonable to expect that \begin{equation}\label{up-approx} \sum_{a_1,\ldots,a_r=1}^k \frac{G(\mathbf{a})}{a_1\cdots a_r} \sim \int^k_1\!\! \cdots \!\!\int^k_1 \frac{G(\mathbf{t})}{t_1\cdots t_r} d\mathbf{t} = (\log k)^r \int^1_0 \!\!\cdots\!\! \int^1_0 G(e^{\xi_1\log k},\ldots,e^{\xi_r\log k}) d\boldsymbol{\xi}, \end{equation} where we have enlarged the domain of $G$ to include $r$-tuples of positive real numbers. However, $G$ is not an especially regular function and so \eqref{up-approx} is perhaps too much to hope for. The function $G$ is, however, increasing in every coordinate and we may exploit this to prove an approximate version of \eqref{up-approx}. \begin{lemma}\label{lemma4.1} For any $r\ge 1$, we have \[ \sum_{a_1,\ldots,a_r=1}^k \frac{|\mathscr{L}^*(\mathbf{a})|}{a_1\cdots a_r} \ll (2h_k)^r r! \int_{\Omega_r} \min_{0\le j\le r} 2^{-j} (k^{\xi_1} + \cdots + k^{\xi_j} + 1 ) d\boldsymbol{\xi}, \] where $\Omega_r = \{ (\xi_1, \dots, \xi_r) : 0 \leq \xi_1 \leq \xi_2 \leq \dots \leq \xi_r \leq 1\}$. \end{lemma} \begin{proof} Motivated by the fact that $1/a = \int_{\exp(h_{a-1})}^{\exp(h_a)} dt/t$, define the product sets \[ R(\mathbf{a}) = \prod_{i=1}^r \left[ \exp\( h_{a_i-1} \), \exp\( h_{a_i} \) \right]. \] By \eqref{LG}, we have \[ \sum_{a_1,\ldots,a_r=1}^k \frac{|\mathscr{L}^*(\mathbf{a})|}{a_1\cdots a_r} \le \sum_{a_1,\ldots,a_r=1}^k \frac{G(\mathbf{a})}{a_1\cdots a_r} = \sum_{a_1,\ldots,a_r=1}^k G(\mathbf{a}) \int_{R(\mathbf{a})} \frac{d \mathbf{t}}{t_1\cdots t_r}. \] Consider some $\mathbf{t} \in R(\mathbf{a})$. 
Writing $\tilde t_1 \leq \tilde t_2 \leq \dots \leq \tilde t_r$ for the non-decreasing rearrangement of $\mathbf{t}$, we have \[ \exp\( h_{\tilde{a}_i-1} \) \le \tilde{t_i} \le \exp\( h_{\tilde{a}_i} \) \quad \mbox{for $1\le i\le r$}. \] From \eqref{hk} we see that $\tilde{t}_i \ge \tilde{a}_i$ for all $i$. Hence \[ G(\mathbf{a}) \le \min_{0\le j\le r} 2^{r-j} (\tilde{t_1} + \cdots + \tilde{t_j}+1) = G(\mathbf{t}) \quad \mbox{for all $\mathbf{t}\in R(\mathbf{a})$}. \] This yields \[ \sum_{a_1,\ldots,a_r=1}^k G(\mathbf{a}) \int_{R(\mathbf{a})} \frac{d \mathbf{t}}{t_1\cdots t_r} \le \sum_{a_1,\ldots,a_r=1}^k \int_{R(\mathbf{a})} \frac{G(\mathbf{t})}{t_1\cdots t_r} d \mathbf{t} = \int_1^{\exp(h_k)}\cdots \int_1^{\exp(h_k)} \frac{G(\mathbf{t})}{t_1\cdots t_r} d \mathbf{t}. \] The integrand on the right is symmetric in $t_1,\ldots,t_r$. Making the change of variables $t_i=e^{\xi_i h_k}$ yields \[ \sum_{a_1,\ldots,a_r=1}^k \frac{|\mathscr{L}^*(\mathbf{a})|}{a_1\cdots a_r} \le (2h_k)^r r! \int_{\Omega_r} \min_{0\le j\le r} 2^{-j}\( e^{\xi_1 h_k} + \cdots + e^{\xi_j h_k} + 1 \) d \boldsymbol{\xi}. \] The lemma follows from the upper bound in \eqref{hk}, namely $h_k\le 1+\log k$. \end{proof} With Lemma \ref{lemma4.1} established, we may conclude the proof of the upper bound in Proposition \ref{average-result} by quoting \cite[Lemma 3.6]{ford-2}. Indeed, in the notation of that paper \[ \int_{\Omega_r} \min_{0\le j\le r} 2^{-j}\( k^{\xi_1} + \cdots + k^{\xi_j} + 1 \) d \boldsymbol{\xi} = U_r(\log_2 k),\] and thus by \eqref{upper-start} and Lemma \ref{lemma4.1} we have \begin{equation}\label{lkx} \mathbb{E} |\mathscr{L}(\mathbf{X}_k)| \ll \frac{1}{k} \sum_r (2 h_k)^r U_r(\log_2 k).\end{equation} Now \cite[Lemma 3.6]{ford-2} provides the bound \[ U_r(\log_2 k) \ll \frac{1 + |\log_2 k - r|^2}{(r+1)! (2^{r - \log_2 k} + 1)},\] uniformly for $0 \leq r \leq 10 \log_2 k$.
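The elementary harmonic-sum bounds \eqref{hk} are invoked repeatedly in this argument; assuming $h_k$ denotes $\sum_{j\le k}1/j$ (its precise definition is given earlier in the paper), the standard inequalities $\log(k+1)\le h_k\le 1+\log k$ can be checked directly in a short script:

```python
import math

def h(k):
    """Harmonic sum h_k = 1 + 1/2 + ... + 1/k."""
    return sum(1 / j for j in range(1, k + 1))

# log(k+1) <= h_k <= 1 + log k, by comparing the sum with the integral of 1/t
ok = all(math.log(k + 1) <= h(k) <= 1 + math.log(k) for k in range(1, 3000))
```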
Set \[ r_* = \lfloor \log_2 k\rfloor.\] In what follows, we will use the observation that $a^n/(n+1)!$ is increasing for $n \leq a - 2$ and decreasing thereafter. If $r = r_*+ m$ with $m \leq 9 \log_2 k$, $m \in \mathbb{Z}_{\geq 0}$, then we have \begin{align*} (2h_k)^r U_r(\log_2 k) & \ll \frac{(\frac{4}{3}h_k)^r}{(r+1)!} \cdot \pfrac{3}{2}^r \cdot \frac{1 + m^2}{2^m} \\ & \ll \frac{(\frac{4}{3}h_k)^{r_*}}{(r_*+1)!} \cdot \pfrac{3}{2}^{r_*} \cdot \frac{1 + m^2}{(\frac{4}{3})^m} \\ & \ll k^{1 + \frac{1 + \log\log 2}{\log 2}} (\log k)^{-3/2} \cdot \frac{1 + m^2}{(\frac{4}{3})^m}.\end{align*} In the first step we used the observation (and the fact that $\frac{4}{3} < \frac{1}{\log 2}$), and in the second step we used Stirling's formula and \eqref{hk}. Summed over $m$, this is of course rapidly convergent and shows that the contribution to \eqref{lkx} from this range of $r$ is acceptable. Next suppose that $r = r_* - m$, $m \in \mathbb{N}$. Then we have \begin{align*} (2h_k)^r U_r(\log_2 k) & \ll \frac{(\frac{3}{2}h_k)^r}{(r+1)!} \cdot \pfrac{4}{3}^r \cdot (1 + m^2) \\ & \ll \frac{(\frac{3}{2}h_{k})^{r_*}}{(r_*+1)!} \cdot \pfrac{4}{3}^r \cdot (1 + m^2) \\ & \ll k^{1 + \frac{1 + \log\log 2}{\log 2}} (\log k)^{-3/2} \cdot \frac{1 + m^2}{(\frac{4}{3})^m}. \end{align*} Here, we used the observation (and the fact that $\frac{3}{2} > \frac{1}{\log 2}$) and a second application of Stirling's formula. Summed over $m$, this is once again rapidly convergent and the contribution to \eqref{lkx} from this range of $r$ is acceptable. There remains the range $r > 10 \log_2 k$. Here, we use the trivial bound $U_r(\log_2 k) \leq 1/r!$ and thus \[ \sum_{r > 10 \log_2 k} (2h_k)^r U_r(\log_2 k) \ll \sum_{r > 10 \log_2 k} \frac{(2h_k)^r}{r!} \ll k^{-10}, \] which is obviously minuscule in comparison to the other terms. 
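The observation that $a^n/(n+1)!$ increases for $n\le a-2$ and decreases thereafter, used in both ranges of $r$ above, follows from the ratio of consecutive terms, $\frac{a^{n+1}/(n+2)!}{a^n/(n+1)!}=\frac{a}{n+2}$, which is at least $1$ exactly when $n\le a-2$. A quick numerical illustration (the value $a=7.3$ below is an arbitrary test choice):

```python
from math import factorial

def term(a, n):
    """a^n / (n+1)!"""
    return a ** n / factorial(n + 1)

a = 7.3  # arbitrary non-integer test value; the peak occurs around n = a - 2
seq = [term(a, n) for n in range(20)]
peak = max(range(20), key=lambda n: seq[n])
```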
\emph{Remarks.} It is obvious from this analysis and the lower bound in our main theorem that a proportion $\geq 1 - \varepsilon$ of all permutations fixing some set of size $k$ have $\log_2 k + O(\log (1/\varepsilon))$ cycles of length at most $k$. It is most probably also true that for a proportion $\geq 1-\varepsilon$ of all permutations fixing some set of size $k$ we have $\log \tilde a_j \geq j \log 2 - O_{\varepsilon}(1)$ for $j\le \log_2 k - O_\varepsilon(1)$, where the $\tilde a_j$ are the (ordered) cycle lengths of the permutation. To establish this would require opening up some of the arguments used to bound the quantities $U_k$ in \cite{ford-2}. We plan to return to this and other issues in a future paper. \emph{Acknowledgments}. BG is supported by ERC Starting Grant number 279438, \emph{Approximate algebraic structure and applications}, and a Simons Investigator Award. KF is supported by National Science Foundation grants DMS-1201442 and DMS-1501982. \bibliographystyle{alpha}
https://arxiv.org/abs/1507.04465
Permutations fixing a k-set
Let $i(n,k)$ be the proportion of permutations $\pi\in\mathcal{S}_n$ having an invariant set of size $k$. In this note we adapt arguments of the second author to prove that $i(n,k) \asymp k^{-\delta} (1+\log k)^{-3/2}$ uniformly for $1\leq k\leq n/2$, where $\delta = 1 - \frac{1 + \log \log 2}{\log 2}$. As an application we show that the proportion of $\pi\in\mathcal{S}_n$ contained in a transitive subgroup not containing $\mathcal{A}_n$ is at least $n^{-\delta+o(1)}$ if $n$ is even.
https://arxiv.org/abs/1910.12568
Gradient versus proper gradient homotopies
We compare the sets of homotopy classes of gradient and proper gradient vector fields in the plane. Namely, we show that the gradient and proper gradient homotopy classifications are essentially different. We provide a complete description of the sets of homotopy classes of gradient maps from $\mathbb{R}^n$ to $\mathbb{R}^n$ and proper gradient maps from $\mathbb{R}^2$ to $\mathbb{R}^2$ with Brouwer degree greater than or equal to zero.
\section*{Introduction}\label{sec:intro} The search for new homotopy invariants in the class of gradient maps has a long history. In 1985, to obtain new bifurcation results, E.~N.~Dancer \cite{D} introduced a new topological invariant for $S^1$-equivariant gradient maps. In turn, A.~Parusi{\'n}ski \cite{P} showed that if two gradient vector fields on the unit disc $D^n$ nonvanishing on $S^{n-1}$ are homotopic, i.e., have the same Brouwer degree, then they are also gradient homotopic. Similarly, the authors of this paper proved in \cite{BP1,BP2} that there is no better invariant than the Brouwer degree for gradient and proper gradient otopies in $\R^n$. Recall that otopy was introduced by J.~C.~Becker and D.~H.~Gottlieb \cite{BG} as a very useful generalization of the concept of homotopy. However, quite surprisingly, M.~Starostka \cite{S} showed that for $n\ge2$ there exist proper gradient vector fields in $\R^n$ which are homotopic but not proper gradient homotopic. Roughly speaking, he proved that the identity and the minus identity on the plane are not proper gradient homotopic and then generalized this result to $\R^n$. Since his reasoning is nice and elegant, we will present it briefly here. First recall that the linear source and sink in the plane are isolated invariant sets with different homological Conley indices. Namely, since $(D_r(0),\partial D_r(0))$ and $(D_r(0),\emptyset)$, where $D_r(0)$ denotes the $r$-disc at the origin, are index pairs for the source and sink respectively, the corresponding homological Conley indices are (see Figure~\ref{fig:conley}) \[ \text{CH}_*(D_r(0),\eta_{\id})=H_*(S^2,\text{pt}) \,\,\text{ and }\,\, \text{CH}_*(D_r(0),\eta_{-\id})=H_*(S^0,\text{pt}), \] where $\eta_f$ denotes the flow generated by the vector field $f$. Suppose now that there is a proper gradient homotopy connecting $\id$ to $-\id$.
Such a homotopy determines a continuation between the gradient flows $\eta_{\id}$ and $\eta_{-\id}$ for which a sufficiently large disc $D_r(0)$ is a common isolating neighbourhood for all parameter values of the continuation (this is true for proper gradient vector fields and homotopies). Thus, by the continuation property of the Conley index, $\text{CH}_*(D_r(0),\eta_{\id})=\text{CH}_*(D_r(0),\eta_{-\id})$, a contradiction. To summarize, if we restrict ourselves to proper gradient vector fields and homotopies, then the Conley index is a better invariant than the Brouwer degree. \begin{figure}[ht] \centering \includegraphics[scale=1.4,trim= 62mm 218mm 70mm 45mm]{conley} \caption{Conley indices of a source and sink} \label{fig:conley} \end{figure} In this paper we strengthen and complement Starostka's result. We present a comparison of two homotopy classifications of gradient vector fields in the plane: gradient and proper gradient. Namely, we show that the set of homotopy classes of gradient vector fields in $\R^n$ having the same Brouwer degree is a singleton (a Parusi{\'n}ski-type theorem). On the other hand, the set of homotopy classes of proper gradient vector fields in $\R^2$ with the same Brouwer degree is empty if the degree is greater than $1$, has exactly two elements if the degree is equal to $1$ and has one element if the degree is equal to $0$. What is still lacking is a description of this set for the degree less than $0$. It would also be desirable to provide the proper gradient homotopy classification for the general case of $\R^n$. The organization of the paper is as follows. Section~\ref{sec:prel} contains some preliminaries. Our four main theorems are stated in Section~\ref{sec:main}. These results are proved subsequently in Sections~\ref{sec:proof1}--\ref{sec:proof4}. Finally, Appendix~\ref{sec:appA} presents a series of technical results needed in previous sections.
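The degree computations behind this discussion are easy to reproduce: on the plane $\deg(\id)=\deg(-\id)=1$, so the Brouwer degree cannot distinguish the source from the sink, while a saddle-type gradient field such as $\nabla\big(\tfrac{x^2-y^2}{2}\big)$ has degree $-1$. A minimal numerical sketch (not taken from the paper) computes the degree as the winding number of the field along a circle around the origin:

```python
import math

def winding_number(f, radius=1.0, steps=20000):
    """Winding number of the planar vector field f along the circle of the
    given radius around 0, obtained by accumulating angle increments of f."""
    total, prev = 0.0, None
    for k in range(steps + 1):
        t = 2 * math.pi * k / steps
        fx, fy = f((radius * math.cos(t), radius * math.sin(t)))
        ang = math.atan2(fy, fx)
        if prev is not None:
            d = ang - prev
            # unwrap the jump across the branch cut of atan2
            if d > math.pi:
                d -= 2 * math.pi
            elif d < -math.pi:
                d += 2 * math.pi
            total += d
        prev = ang
    return round(total / (2 * math.pi))

identity = lambda p: p                     # gradient of |x|^2/2, a source
minus_identity = lambda p: (-p[0], -p[1])  # gradient of -|x|^2/2, a sink
saddle = lambda p: (p[0], -p[1])           # gradient of (x^2 - y^2)/2
```

Here `winding_number(identity)` and `winding_number(minus_identity)` both return $1$, while `winding_number(saddle)` returns $-1$: homotopy classification by degree cannot separate $\id$ from $-\id$, whereas the Conley index argument above does.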
\section{Preliminaries} \label{sec:prel} In what follows, a map always denotes a continuous function and $\deg$ denotes the classical Brouwer degree. \subsection{Gradient and proper gradient maps} Recall that a map $f$ is called \emph{gradient} if there is a~$C^1$~function $\varphi\colon\R^n\to\R$ such that $f=\nabla\varphi$ and is called \emph{proper} if preimages of compact sets under $f$ are compact. Let $I:=[0,1]$. We write $f\in\V(\R^n)$ ($f\in\Prop(\R^n)$) if \begin{enumerate} \item $f$ is gradient, \item $f^{-1}(0)$ is compact ($f$ is proper). \end{enumerate} Moreover, let $\Vk:=\{f\in\V(\R^n)\mid\deg f=k\}$ and $\Pk:=\{f\in\Prop(\R^n)\mid\deg f=k\}$. \subsection{Gradient and proper gradient homotopies} Apart from maps we consider two classes of homotopies: gradient and proper gradient. Namely, a map $h\colon I\times\R^n\to\R^n$ is called a \emph{(proper) gradient homotopy} if \begin{enumerate} \item $h_t(\cdot):=h(t,\cdot)$ is gradient for each $t\in I$, \item $h^{-1}(0)$ is compact ($h$ is proper). \end{enumerate} If $h$ is a (proper) gradient homotopy, we say that $h_0$ and $h_1$ are \emph{(proper) gradient homotopic}. The relation of being (proper) gradient homotopic is an equivalence relation in $\Vk$ ($\Pk$). The sets of homotopy classes of the respective relations will be denoted by $\Vsq{k}{n}$ and $\Psq{k}{n}$. \subsection{Hessian maps} Let us consider a $C^2$ function $\varphi\colon\R^2\to\R$. Assume that $p\in\R^2$ is a nondegenerate critical point of $\varphi$. Let $\Hess_p\!\varphi$ denote the Hessian of $\varphi$ at $p$. In that situation, the Hessian is a nondegenerate symmetric bilinear form and, in consequence, its matrix is invertible and symmetric. Let us make two simple observations. \begin{lem}\label{lem:symm} Any two elements of the space of invertible symmetric matrices are in the same path-connected component if and only if they have the same signature.
\end{lem} \begin{cor}\label{cor:symm} Let $\varphi\colon\R^2\to\R$ be a $C^2$ function and $p$ a nondegenerate critical point of $\varphi$. Then the Hessian map $\Hess_p\!\varphi\colon\R^2\to\R^2$ is proper gradient homotopic to $\id_{\R^2}$ if $\Hess_p\!\varphi$ is positive definite, and to $-\id_{\R^2}$ if $\Hess_p\!\varphi$ is negative definite. \end{cor} \subsection{Local flows} Let $\Omega$ be an open subset of $\R^2$. A map $\eta\colon A\to\Omega$ is called a \emph{local flow} on $\Omega$ if: \begin{itemize} \item $A$ is an open neighbourhood of $\{0\}\times\Omega$ in $\R\times\Omega$, \item for each $x\in\Omega$ there are $\alpha_x, \omega_x\in\R\cup\{\pm\infty\}$ such that $(\alpha_x,\omega_x)=\{t\in\R\mid (t,x)\in A\}$, \item $\eta(0,x)=x$ and $\eta(s,\eta(t,x))=\eta(s+t,x)$ for all $x\in\Omega$ and $s,t\in(\alpha_x,\omega_x)$ such that $s+t\in(\alpha_x,\omega_x)$. \end{itemize} Assume that $f\colon\Omega\to\R^2$ is a $C^1$ vector field. It is well-known that if $t\mapsto\eta(t,x_0)$ is a solution of the initial value problem \[ \dot{x}=f(x),\quad x(0)=x_0 \] and $(\alpha_{x_0},\omega_{x_0})$ is the maximal interval of existence of this solution, then the map $\eta$ is a local flow on $\Omega$. \begin{prop}[{\cite[Ch. 6]{H}}]\label{prop:generic} Any element of $\Prop(\R^2)$ is proper gradient homotopic to a generic map. \end{prop} \subsection{Notation} Let us denote by $B_r(p)$ ($D_r(p)$) the open (closed) $r$-ball in $\R^n$ around~$p$. \section{Main results} \label{sec:main} Let us formulate the main results of our paper. \begin{thm}\label{thm:one} $\Vsq{k}{n}$ is a singleton for each $k\in\Z$ and $n\ge2$. \end{thm} \begin{thm}\label{thm:twoone} $\Psq{k}{2}$ is empty for $k>1$. \end{thm} \begin{thm}\label{thm:twotwo} $\Psq{0}{2}$ is a singleton. \end{thm} \begin{thm}\label{thm:twothree} $\Psq{1}{2}$ has at most two elements. \end{thm} Combining Theorem \ref{thm:twothree} with the theorem of M. Starostka (see \cite[Main Theorem]{S}) immediately gives the following result.
\begin{cor} $\Psq{1}{2}$ has exactly two elements: the class of the identity and the minus identity. \end{cor} We close this section with the following conjecture and open problem. \begin{conj} $\Psq{k}{2}$ is a singleton for $k<0$. \end{conj} \begin{quest} Give the description of the set $\Psq{k}{n}$ for any $k\in\Z$ and $n>2$. \end{quest} \section{Proof of Theorem \texorpdfstring{\ref{thm:one}}{2.1}} \label{sec:proof1} A~slight modification of the reasoning presented in the proof of Lemma~4 in~\cite{P} shows that the sets $\Vsq{k}{n}$ are nonempty. Now we prove that $\Vsq{k}{n}$ consists of only one element. Let $\nabla\varphi,\nabla\psi\in\V_k(\R^n)$. There is $r>0$ such that $(\nabla\varphi)^{-1}(0)\cup(\nabla\psi)^{-1}(0)\Subset B_r(0)$. By the Parusi\'nski theorem (\cite[Thm 1]{P}), there is a $C^1$ function $\zeta\colon I\times D_r(0)\to\R$ such that \begin{itemize} \item $\nabla_x\zeta(t,x)\neq0$ for all $t\in I$ and $x\in\partial D_r(0)$, \item $\nabla\zeta_0=\nabla\big(\restrictionmap{\varphi}{D_r(0)}\big)$, \item $\nabla\zeta_1=\nabla\big(\restrictionmap{\psi}{D_r(0)}\big)$. \end{itemize} Assume that $\theta$ is a diffeotopy from Lemma~\ref{lem:proof1a}. Let us define three homotopies $h^i\colon I\times\R^n\to\R^n$ ($i=1,2,3$) by the formulas \begin{align*} h^1(t,x)&=\nabla_x\big(\varphi(\theta(t,x))\big),\\ h^2(t,x)&=\nabla_x\big(\psi(\theta(t,x))\big),\\ h^3(t,x)&=\nabla_x\big(\zeta(t,\theta_1(x))\big). \end{align*} By Lemma~\ref{lem:proof1b}, $h^1$ and $h^2$ are gradient homotopies and by Lemma~\ref{lem:proof1c}, $h^3$ is a gradient homotopy. Thus we obtain the following sequence of gradient homotopy relations \[ \nabla\varphi=h_0^1\sim h_1^1=h_0^3\sim h_1^3=h_1^2\sim h_0^2=\nabla\psi, \] which completes the proof.\qed \section{Proof of Theorem \texorpdfstring{\ref{thm:twothree}}{2.4}} \label{sec:proof2} The following two propositions are crucial for the proof of Theorem~\ref{thm:twothree}. Assume that $f=\nabla\varphi$ is generic.
\begin{prop}\label{prop:point} Let $f^{-1}(0)=\{p\}$. If $p$ is a source then $f\sim\id_{\R^2}$. If $p$ is a sink then $f\sim-\id_{\R^2}$. \end{prop} \begin{proof} Assume that $p$ is a source. By Corollary~\ref{cor:proof2}, $f$ is proper gradient homotopic to the Hessian map $\Hess_p\!\varphi\colon\R^2\to\R^2$ and by Corollary~\ref{cor:symm}, $\Hess_p\!\varphi$ is proper gradient homotopic to $\id_{\R^2}$. The same reasoning applies to the case of a sink. \end{proof} Let $A_f^-$ ($A_f^+$) denote the set of sources (sinks) of $f$, $A_f=A_f^-\cup A_f^+$ and $B_f$ the set of saddles. \begin{prop}\label{prop:cancel} If $A_f$ and $B_f$ are nonempty then there is a generic map $f'$ such that $f\sim f'$, $\abs{A_{f'}}<\abs{A_{f}}$ and $\abs{B_{f'}}<\abs{B_{f}}$. \end{prop} The proof of Proposition \ref{prop:cancel} will be preceded by a series of lemmas. Let us start with the following notation. Assume that $x\in A_f^-$ and $y\in A_f^+\cup B_f\cup\{\infty\}$. Set \begin{align*} E_x^-&=\{z\in\R^2\mid\omega^-(z)=x\},\\ E_y^+&=\{z\in\R^2\mid\omega^+(z)=y\}. \end{align*} From now on, $\eta(t,z)=\eta^t(z)$ denotes the local flow generated by $f$. Observe that for $z\in E_x^-$ the vector \[ v_z=\lim_{t\to-\infty}\frac{f(\eta^t(z))}{\abs{f(\eta^t(z))}} \] is well-defined. Finally, let us denote by $V_x^y$ the set $\{v_z\mid z\in E_x^-\cap E_y^+\}$. The following lemma describes properties of sets $V_x^y$. \begin{lem}\label{lem:open} Assume that $y\in A_f^+\cup\{\infty\}$. Then \begin{enumerate} \item $V_x^y$ is an open subset of $S^1$, \item if $V_x^y=S^1$ then $x\in A_f^-$ is the only stationary point of $f$ and $y=\infty$. \end{enumerate} \end{lem} \begin{proof} Note that there is a neighbourhood $U^-$ of $x$ such that $U^-$ is the unit ball and $f=\id$ on $U^-$ in some coordinate system. Now we can identify the set of directions $\{v_z\in S^1\}$ with $\partial U^-$.\vspace{1mm} \noindent\emph{Ad (1).} Assume that $y\in A_f^+$. 
Analogously to the case of $x$, there is a neighbourhood $U^+$ of $y$ such that $U^+$ is a ball and $f=-\id$ on $U^+$. Let $z_0\in V_x^y\subset S^1$. There is $T\in\R$ such that $\eta^T(z_0)\in U^+$. Therefore we can choose a neighbourhood $W$ of $z_0$ in $S^1$ such that $\eta^T(z)\in U^+$ for all $z\in W$. Hence $W\subset V_x^y$, and in consequence, $V_x^y$ is open. Now let $y=\infty$. Since $f$ is proper, there is $\rho>0$ such that, for $z\not\in D_\rho(0)$, $\lim_{t\to-\infty}\eta^t(z)=x$ implies $\lim_{t\to\infty}\eta^t(z)=\infty$ (see \cite[Prop~2.4]{S}). Let $z_0\in V_x^\infty\subset S^1=\partial U^-$. Then there is $T>0$ such that $\eta^T(z_0)\not\in D_\rho(0)$. Therefore there exists a neighbourhood $W$ of $z_0$ in $S^1$ such that $\eta^T(z)\not\in D_\rho(0)$ for all $z\in W$, which proves that $V_x^\infty$ is open.\vspace{1mm} \noindent\emph{Ad (2).} Without loss of generality we can assume that $\varphi(x)=0$ and $\varphi(z)=1$ for $z\in\partial U^-$. Set \[ \beta_z:=\sup{\{\varphi(\eta^t(z))\mid t\in[0,\omega_z)\}} \] for $z\in\partial U^-$ and $\beta:=\inf{\{\beta_z\mid z\in\partial U^-\}}$. Since $\varphi$ increases on trajectories of $\eta$, for $z\in\partial U^-$ and $\alpha\in(0,\beta)$ there is a unique $t(z,\alpha)\in(-\infty,\omega_z)$ such that $\varphi(\eta^{t(z,\alpha)}(z))=\alpha$. Write \[ S(\alpha)=\{\eta^{t(z,\alpha)}(z)\mid z\in\partial U^-\}. \] Note that $S(\alpha)$ is homeomorphic to $S^1$ for $\alpha\in(0,\beta)$ and $S(1)=\partial U^-$. We begin by proving that $y=\infty$. On the contrary, suppose that $y$ is a point. Note that $y$ cannot be a saddle, because for a saddle there are only two directions of approach along the flow. Hence $y$ is a sink and $\varphi(y)=\beta$. Once again, let $U^+$ be a disc neighbourhood of $y$ such that $f=-\id$ on $U^+$. By compactness of $\partial U^-$, as $\alpha$ approaches $\beta$, the level sets $S(\alpha)$ get closer to $y$.
From this observation it follows that there is an embedding $\gamma\colon S^2\to\R^2$, which maps the poles to $x$ and $y$, a contradiction. This forces $y=\infty$. It remains to prove that $A_f\cup B_f=\{x\}$. There is no loss of generality in assuming that $x=0$. Since $\nabla\varphi$ is proper, we have $\beta=\infty$. Therefore $S(\alpha)$ is defined for every $\alpha>0$. We show that any $z\neq0$ belongs to a level set $S(\alpha)$ for some $\alpha>0$ and, in consequence, is not a critical point of $\varphi$. For $z\in U^-$ it is obvious. Let $z\not\in U^-$ and $\alpha_0=\max{\{\varphi(w)\mid\abs{w}\le\abs{z}\}}$. Choose arbitrary $\alpha_1>\alpha_0$. Observe that $\Ind(S(1/2),z)=0$ and $\Ind(S(\alpha_1),z)=\Ind(S(\alpha_1),0)=1$, where $\Ind$ denotes the winding number. Suppose, contrary to our claim, that $z\not\in S(\alpha)$ for $1/2<\alpha<\alpha_1$. Then $\Ind(S(1/2),z)=\Ind(S(\alpha_1),z)$, a~contradiction. This completes the proof. \end{proof} The next four lemmas are of utmost importance for the proof of Proposition~\ref{prop:cancel}. \begin{lem}\label{lem:key} Let $x\in A_f^-$ and assume that $B_f$ is nonempty. Then there is $y\in B_f$ such that $x$ and $y$ are connected by a trajectory of $\eta$. \end{lem} \begin{proof} Write $V_x^Y:=\cup_{y\in Y}V_x^y$ for $Y\subset A_f^+\cup B_f\cup\{\infty\}$. Since $f$ is generic, we have \[ V_x^\infty\cup V_x^{A_f^+}\cup V_x^{B_f}=S^1. \] By Lemma~\ref{lem:open}, $V_x^\infty\cup V_x^{A_f^+}$ is an open proper subset of $S^1$. Hence $V_x^{B_f}\neq\emptyset$, which is our claim. \end{proof} Assume that $x\in A_f^-$. Write \begin{align*} A_f^+(x)&:=\{y\in A_f^+\mid V_x^y\neq\emptyset\},\\ B_f(x)&:=\{y\in B_f\mid V_x^y\neq\emptyset\}. \end{align*} \begin{lem}\label{lem:height} If $A_f^+(x)\neq\emptyset$ and $B_f(x)\neq\emptyset$ then \[ \min{\{\varphi(y)\mid y\in A_f^+(x)\}}\ge \min{\{\varphi(y)\mid y\in B_f(x)\}}.
\] \end{lem} \begin{proof} Let $y_1\in A_f^+(x)$ be such that $\varphi(y_1)=\min{\{\varphi(y)\mid y\in A_f^+(x)\}}$. By the above and Lemma~\ref{lem:open}(2), $\emptyset\neq V_x^{y_1}\neq S^1$. Let $C$ denote a connected component of $V_x^{y_1}$ and let $a$ be one of the ends of the arc $C$. By Lemmas \ref{lem:open} and \ref{lem:key}, there is $y_0\in B_f(x)$ such that $a\in V_x^{y_0}$. Suppose that $\varphi(y_0)>\varphi(y_1)$. Then there are disjoint neighbourhoods $U_0$ of $y_0$ and $U_1$ of $y_1$ such that for all $z\in U_0$ and $w\in U_1$ we have $\varphi(z)>\varphi(w)$ and, moreover, no trajectory leaves $U_1$. Observe that if $a'\in C$ is close enough to $a$ then there is $t_0$ such that $\eta^{t_0}(a')\in U_0$. Moreover, since $a'\in C$, there is $t_1$ such that $\eta^{t_1}(a')\in U_1$. Note that $t_1>t_0$: indeed, for $t\ge t_1$ we have $\eta^{t}(a')\in U_1$, while $U_0$ and $U_1$ are disjoint. Hence \[ \varphi\big(\eta^{t_0}(a')\big)< \varphi\big(\eta^{t_1}(a')\big), \] a contradiction. This gives our assertion. \end{proof} The following result, which can be found in \cite[Sec. 1]{L}, is devoted to the question of cancelling a pair of critical points. \begin{lem}\label{lem:cancel} Let us consider a $C^2$ function $\varphi\colon\R^2\to\R$ such that $\nabla\varphi$ is generic and its local flow $\eta$. Let $p$ and $q$ be two critical points of $\varphi$ satisfying the following conditions: \begin{itemize} \item $W^u(p)$ and $W^s(q)$ intersect transversely and the intersection consists of one orbit $l$ of $\eta$, \item for some $\epsilon>0$, each orbit of $\eta$ in $W^u(p)$ distinct from $l$ crosses the level set $\varphi^{-1}\big(\varphi(q)+\epsilon\big)$. \end{itemize} Let $U$ denote an open neighbourhood of the closure of $W^u(p)\cap \{\varphi\leq\varphi(q)+\epsilon\}$ such that the only critical points in $\cl U$ are $p$ and $q$.
Then there is a path of smooth functions $\left\{\varphi_t\right\}_{t\in I}$ such that: \begin{itemize} \item $\varphi_0=\varphi$, \item for every $t\in I$, $\varphi_t$ coincides with $\varphi$ on $\R^2\setminus U$, \item the function $\varphi_t\vert U$ has two nondegenerate critical points when $0\leq t<1/2$; it has one degenerate critical point when $t=1/2$, and it has no critical points when $1/2<t\leq 1$. \end{itemize} \end{lem} \begin{lem}\label{lem:twoorbits} Assume that $f=\nabla\varphi$ is generic. Let $x\in A_f^-$ and $y\in B_f$. If there are two trajectories of $\eta$ connecting $x$ to $y$ then there is a generic $f'$ otopic to $f$ such that $\abs{A_{f'}}<\abs{A_{f}}$ and $\abs{B_{f'}}<\abs{B_{f}}$. \end{lem} \begin{proof} Let us denote by $\Gamma_1$ and $\Gamma_2$ the two trajectories of $\eta$ (smooth curves) connecting $x$ to $y$. Notice that these trajectories form a straight angle at $y$ (see Figure~\ref{fig:saddle}). Let $G$ stand for the domain bounded by $\Gamma_1$ and $\Gamma_2$. Since $x$ is a source, we can choose points $x_1\in\Gamma_1$, $x_2\in\Gamma_2$ and a level subset $\Pi_x\subset G$ of $\varphi$ connecting them close enough to $x$. Similarly, since $y$ is a saddle, we can choose $y_1\in\Gamma_1$, $y_2\in\Gamma_2$ such that $\varphi(y_1)=\varphi(y_2)$ and a smooth curve $\Pi_y\subset G$ perpendicular to $\Gamma_i$ at $y_i$ for $i=1,2$. Let us denote by $G_1$ the domain bounded by $\Pi_x$, $\Pi_y$ and the trajectories connecting $x_i$ to $y_i$. \begin{figure}[ht] \centering \includegraphics[scale=0.7,trim= 70mm 187mm 60mm 45mm]{saddle} \caption{Domains $G_1$ and $G_2$} \label{fig:saddle} \end{figure} Let us extend $\Pi_y$ to a slightly longer smooth curve $\Pi'_y=\arc y'_1y'_2$ such that the segments $\arc y_iy'_i$ are contained in level sets of $\varphi$. In a neighbourhood of the saddle $y$, choose points $y''_i$ for $i=1,2$ on the trajectory starting from $y'_i$ such that $\varphi(y''_1)=\varphi(y''_2)$.
Points $y''_1$ and $y''_2$ are connected by a short segment of a level set of $\varphi$. Let us denote by $G_2$ the domain bounded by $\Pi'_y$, the trajectories connecting $y'_i$ to $y''_i$ and this short segment of a level set connecting $y''_1$ and $y''_2$. Since $f$ is defined and nonvanishing on $\partial(G_1\cup G_2)$, we can extend $\restrictionmap{f}{\partial(G_1\cup G_2)}$ to a nonvanishing smooth vector field $\wt{f}\colon\partial G_1\cup\partial G_2\to\R^2$ such that $\wt{f}$ is perpendicular to the curve $\Pi_y$. It is easy to check that $\wt{f}$ satisfies the assumptions of Lemma~\ref{lem:field} for both $G_1$ and $G_2$. As a conclusion we obtain that $\wt{f}$ can be extended to a nonvanishing gradient vector field $\wh{f}\colon G_1\cup G_2\to\R^2$. Finally, define $f'\colon\R^2\to\R^2$ by the formula \[ f'(z)=\begin{cases} f(z)&\text{if $z\not\in G_1\cup G_2$},\\ \wh{f}(z)&\text{if $z\in G_1\cup G_2$}. \end{cases} \] The map $f'$ satisfies the assertion of Lemma~\ref{lem:twoorbits}. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:cancel}] Without loss of generality we can assume that $A^-_{f}\neq\emptyset$ (in the case $A^+_{f}\neq\emptyset$ the proof is analogous). Let $x\in A^-_{f}$. By Lemma~\ref{lem:key}, $B_f(x)\neq\emptyset$. Moreover, by Lemma~\ref{lem:height}, there is $y\in B_f(x)$ which realizes the minimum of $F=\{\varphi(z)\mid z\in A^+_{f}(x)\cup B_f(x)\}$. Applying Lemma~\ref{lem:bump} we can assume that $y$ is the only minimum in~$F$. There are two possibilities: either one or two trajectories connect $x$ to $y$. In the second case it is enough to apply Lemma~\ref{lem:twoorbits} to obtain at once the desired conclusion. Now let us consider the first case. Observe that all assumptions of Lemma~\ref{lem:cancel} are satisfied. In consequence, a proper gradient homotopy allows us to cancel both critical points $x$ and $y$, which is our assertion.
\end{proof} Theorem~\ref{thm:twothree} is now a consequence of Propositions~\ref{prop:generic}, \ref{prop:point} and~\ref{prop:cancel}. \begin{proof}[Proof of Theorem \ref{thm:twothree}] Let $f\in\Prop_1(\R^2)$. By Proposition~\ref{prop:generic}, without loss of generality we can assume that $f$ is generic. Moreover, $\deg f=\abs{A_{f}}-\abs{B_{f}}=1$. By~Proposition~\ref{prop:cancel}, there is a generic $f'$ such that $f\sim f'$, $f'^{-1}(0)=\{p\}$ and $p$ is a source or sink. Hence, by Proposition~\ref{prop:point}, $f\sim\id_{\R^2}$ or $f\sim-\id_{\R^2}$. \end{proof} \section{Proof of Theorem \texorpdfstring{\ref{thm:twotwo}}{2.3}} \label{sec:proof3} The main result of this section is the following lemma. \begin{lem}\label{lem:nozero} If $f,f'\in\Prop(\R^2)$ have no zeroes then $f\sim f'$. \end{lem} \begin{proof} Let $f=\nabla\varphi$. For $t\in[1/2,1]$ write $f_t(x):=f((2-2t)x)$. Set $c:=\min\{\abs{f(x)}\mid x\in\R^2\}$. Observe that $c>0$ and for each $t\in[1/2,1]$ \begin{itemize} \item $f_t$ are gradient maps, \item $\min{\{\abs{f_t(x)}\mid x\in\R^2\}}\ge c$. \end{itemize} Next for $t\in[0,1/2]$ put $\xi_t(x):=(1+t\abs{x})x$ and \[ \Xi_t(f)(x):=\nabla\big(\varphi(\xi_t(x))\big)= D\xi_t^T(x)\nabla\varphi(\xi_t(x))= D\xi_t^T(x)f(\xi_t(x)). \] Let us check the following inequalities: \begin{enumerate} \item $\abs{\xi_t(x)}\ge\abs{x}$, \item $\abs{D\xi_t^T(x)(v)}\ge(1+t\abs{x})\abs{v}$, \item $\abs{\Xi_t(f)(x)}\ge\abs{f(\xi_t(x))}$, \item $\big\lvert\Xi_\frac12(f_t)(x)\big\rvert\ge\big(1+\frac12\abs{x}\big)c$ for $t\in[1/2,1]$. \end{enumerate} The first one is obvious. The second follows from the fact that for a~given $x$ the matrix $D\xi_t(x)$ is diagonal in some basis with the elements $(1+2t\abs{x})$ and $(1+t\abs{x})$ on the diagonal. The third and fourth follow immediately from the second. Finally, define a homotopy \[ h_t(x)=\begin{cases} \Xi_t(f)(x)& \text{if $t\in[0,1/2]$},\\ \Xi_\frac12(f_t)(x)& \text{if $t\in[1/2,1]$}.
\end{cases} \] The homotopy $h_t$ is obviously gradient. Moreover, it is proper. Namely, the first part of the homotopy is proper by (1) and (3) and the properness of $f$, and the second part is proper by (4). Observe that the homotopy $h_t$ connects $f$ to $\Xi_\frac12(f(0))$, where $f(0)$ denotes a constant vector field on $\R^2$. What is left is to show that \[ \Xi_\frac12(f(0))\sim\Xi_\frac12(f'(0)). \] Note that there is a homotopy $g_t$ between $f(0)$ and $f'(0)$ consisting of nonzero constant vector fields. It is immediate that the homotopy $\Xi_\frac12(g_t)$ is a proper gradient homotopy, which completes the proof. \end{proof} \begin{rem} The last lemma is true for $\Prop(\R^n)$ ($n\ge2$) with the same proof. \end{rem} \begin{proof}[Proof of Theorem \ref{thm:twotwo}] Let $f,f'\in\Prop_0(\R^2)$. By Proposition~\ref{prop:cancel}, we can assume that $f$ and $f'$ have no zeroes. Lemma~\ref{lem:nozero} now shows that $f\sim f'$. \end{proof} \section{Proof of Theorem \texorpdfstring{\ref{thm:twoone}}{2.2}} \label{sec:proof4} Let $f\in\Prop_k(\R^2)$ ($k>1$) be generic. By Proposition~\ref{prop:cancel}, we can assume that $f$ has no saddles. Observe that to complete the proof it is enough to show that if $A^-_f\neq\emptyset$ then $\abs{A^-_f}=\abs{A_f}=1$, which contradicts our assumption $k>1$ (the case $A^+_f\neq\emptyset$ is analogous). Let $x\in A^-_f$. Since $\cup_{y\in A^+_f\cup\{\infty\}}V_x^y=S^1$ and, by Lemma~\ref{lem:open}(1), the sets $V_x^y$ are open and pairwise disjoint, we have $V_x^y=S^1$ for some $y\in A^+_f\cup\{\infty\}$. Hence, by Lemma~\ref{lem:open}(2), $y=\infty$ and $x$ is the only stationary point of $f$, i.e.,\ $\abs{A^-_f}=\abs{A_f}=1$.\qed
https://arxiv.org/abs/1910.12568
Gradient versus proper gradient homotopies
We compare the sets of homotopy classes of gradient and proper gradient vector fields in the plane. Namely, we show that gradient and proper gradient homotopy classifications are essentially different. We provide a complete description of the sets of homotopy classes of gradient maps from $\mathbb{R}^n$ to $\mathbb{R}^n$ and proper gradient maps from $\mathbb{R}^2$ to $\mathbb{R}^2$ with the Brouwer degree greater or equal to zero.
https://arxiv.org/abs/2207.08641
There are infinitely many (-1,1)-Carmichael numbers
We prove that there exist infinitely many (-1,1)-Carmichael numbers, that is, square-free, composite integers n such that p+1 divides n-1 for each prime p dividing n.
\section{Introduction} The well-known Fermat's little theorem states that if $p$ is a prime, then $p$ divides $a^p-a$ for every integer $a$. Carmichael numbers are composite numbers which share this property, i.e. a positive composite integer $n$ is a Carmichael number if $n$ divides $a^n-a$ for every integer $a$. In 1899, Korselt \cite{korselt1899probleme} gave a criterion for Carmichael numbers: $n$ is a Carmichael number if and only if $n$ is squarefree and $p-1\,|\,n-1$ for each prime $p$ dividing $n$. We now define a family of numbers satisfying conditions analogous to Korselt's criterion. \begin{definition} \textit{A positive squarefree composite number $n$ is an $(a,b)$-Carmichael number if $n\neq b$ and $p-a\,|\,n-b$ for each prime $p$ dividing $n$.} \end{definition} The squarefreeness condition eliminates some trivial cases. For example, cubes of primes would otherwise be $(-1,-1)$-Carmichael numbers, since $p^3+1=(p+1)(p^2-p+1)$. Under this definition, $(1,1)$-Carmichael numbers are the usual Carmichael numbers; $(-1,-1)$-Carmichael numbers are known as Lucas-Carmichael numbers; $(k,k)$-Carmichael numbers are known as $k$-Korselt numbers. These numbers play an important role in various primality tests, so we are interested in their properties. Alford, Granville and Pomerance \cite{alford1994there} proved that there are infinitely many Carmichael numbers and stated that ~ \textit{One can modify our proof to show that for any fixed non-zero integer $a$, there are infinitely many squarefree, composite integers $n$ such that $p-a$ divides $n-1$ for all primes $p$ dividing $n$.} ~ In our notation this is equivalent to proving that there are infinitely many $(a,1)$-Carmichael numbers for non-zero $a$. However, no result on the infinitude of $(a,1)$-Carmichael numbers is known except for $a=1$. In 2018, Wright \cite{wright2018there} proved that there are infinitely many $(-1,-1)$-Carmichael numbers, or Lucas-Carmichael numbers.
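The definitions above are easy to exercise by direct computation. The following sketch (plain Python; the function names are ours, and it is only a sanity check for small $a$, $b$, not part of the paper) tests the definition by trial-division factoring:

```python
def factor(n):
    """Prime factorization of n >= 2 by trial division: list of (prime, exponent)."""
    fs, d = [], 2
    while d * d <= n:
        if n % d == 0:
            e = 0
            while n % d == 0:
                n, e = n // d, e + 1
            fs.append((d, e))
        d += 1
    if n > 1:
        fs.append((n, 1))
    return fs

def is_ab_carmichael(n, a, b):
    """n squarefree, composite, n != b, and (p - a) | (n - b)
    for every prime p dividing n (assumes a is never a prime factor of n)."""
    if n < 4 or n == b:
        return False
    fs = factor(n)
    if len(fs) < 2 or any(e > 1 for _, e in fs):  # must be composite and squarefree
        return False
    return all((n - b) % (p - a) == 0 for p, _ in fs)

print(is_ab_carmichael(561, 1, 1),    # 561 = 3*11*17, the smallest Carmichael number
      is_ab_carmichael(399, -1, -1),  # 399 = 3*7*19, a Lucas-Carmichael number
      is_ab_carmichael(27, -1, -1))   # 3^3 is excluded by squarefreeness
```

Note that $27=3^3$ satisfies $p+1\,|\,n+1$ but is rejected, exactly as the squarefreeness remark above predicts.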
In this paper, we prove that there are infinitely many $(-1,1)$-Carmichael numbers, that is, squarefree, composite integers $n$ such that $p+1\,|\,n-1$ for each prime $p$ dividing $n$. The smallest $(-1,1)$-Carmichael number is $385=5\times7\times11$. More examples can be found in OEIS/A225711 \cite{sloane2007line}. The method we use is a modification of that of \cite{alford1994there}. \begin{definition} \textit{If $a$, $b$ are integers, we define $C(x;a,b)$ as the number of $(a,b)$-Carmichael numbers not exceeding $x$.} \end{definition} We will write $C(x)$ in place of $C(x;-1,1)$ in this paper for convenience. In particular, we prove that $C(x)>x^{0.29}$ for all sufficiently large values of $x$. We believe that there is a gap between $x^{0.29}$ and the true size of $C(x)$, but this bound is enough to prove the infinitude of $(-1,1)$-Carmichael numbers. Let $\pi(x)$ be the number of primes $p\leq x$. For a fixed non-zero integer $a$, let $\pi_a(x,y)$ be the number of primes $a<p\leq x$ for which $p-a$ is free of prime factors exceeding $y$. From \cite{friedlander1989shifted} we have \begin{equation} \label{shifted primes without large factors} \pi_a(x,x^{1-E})\geq\gamma(E)\frac{x}{\log x} \end{equation} ~ \noindent for any $x\geq x_1$ and every non-zero integer $a$; here $E\in(0,1-(2\sqrt e)^{-1})$, $\gamma(E)$ is a constant depending on $E$, and $x_1$ depends on $a$ and $E$. Define $\pi(x;d,a)$ to be the number of primes up to $x$ that belong to the arithmetic progression $a$ mod $d$. Let $\mathcal{B}$ denote the set of numbers $B$ in the range $0<B<1$ for which there is a number $x_2(B)$ and a positive integer $D_B$ such that if $x\geq x_2(B)$, $(a,d)=1$ and $1\leq d\leq\min\{x^B,y/x^{1-B}\}$ then \begin{equation} \label{primes in arithmetic progression} \pi(y;d,a)\geq\frac{\pi(y)}{2\varphi(d)} \end{equation} ~ \noindent whenever $d$ is not divisible by any member of $\mathcal{D}_B(x)$, a set of at most $D_B$ integers, each of which exceeds $\log x$.
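As a rough numerical illustration of \eqref{shifted primes without large factors} (our own ad-hoc experiment with the small parameters $x=10^4$ and $E=1/2<1-(2\sqrt e)^{-1}$; it plays no role in the proofs), one can count the primes $p\leq x$ for which $p+1$ is $x^{1-E}$-smooth, i.e. the case $a=-1$:

```python
def primes_upto(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(n + 1) if sieve[i]]

def largest_prime_factor(n):
    """Largest prime factor of n >= 2 by trial division."""
    lpf, d = 1, 2
    while d * d <= n:
        while n % d == 0:
            lpf, n = d, n // d
        d += 1
    return n if n > 1 else lpf

x, E = 10_000, 0.5           # ad-hoc small parameters, chosen by us
y = int(x ** (1 - E))        # smoothness bound y = x^(1-E) = 100
primes = primes_upto(x)
smooth = [p for p in primes if largest_prime_factor(p + 1) <= y]
print(f"pi(x) = {len(primes)}, of which {len(smooth)} have p+1 {y}-smooth")
```

Even at this tiny scale a positive proportion of the primes survives, which is the qualitative content of the lower bound $\gamma(E)\,x/\log x$.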
We have $(0,5/12)\subset\mathcal{B}$ (see \cite{alford1994there}). The main theorem of this paper depends intimately on the set $\mathcal{B}$. \begin{theorem} \label{main theorem} For each $B\in\mathcal{B}$ and $E\in(0,1-(2\sqrt e)^{-1})$, there is a number $x_0=x_0(E,B)$ such that $C(x)\geq x^{EB}$ for all $x\geq x_0$. \end{theorem} Since $(0,5/12)\subset\mathcal{B}$, we have $C(x)>x^{\beta-\varepsilon}$ for any $\varepsilon>0$ and all sufficiently large $x$, where \begin{equation} \notag \beta=\bracket{1-\frac{1}{2\sqrt e}}\times\frac{5}{12}=0.290306\cdots \end{equation} ~ \noindent As mentioned above, we thus have $C(x)>x^{0.29}$ for all sufficiently large values of $x$. ~ \section{Subsequence products representing the identity in a group} For a finite group $G$, let $n(G)$ denote the length of the longest sequence of (not necessarily distinct) elements of $G$ for which no non-empty subsequence has product the identity. Baker and Schmidt \cite{baker1980diophantine} gave an upper bound for $n(G)$ for arbitrary finite abelian groups: \begin{theorem} \label{n(G)} If $G$ is a finite abelian group and $m$ is the maximal order of an element in $G$, then $n(G)<m(1+\log(|G|/m))$. \end{theorem} The idea is to construct an integer $L$ for which there are a very large number of primes $p$ such that $p+1$ divides $L$. If the product of two or more of these primes, say $C=p_1\cdots p_k$, is congruent to $1$ mod $L$, then $C$ is a $(-1,1)$-Carmichael number by definition. If we view these primes $p$ as elements of the group $G=(\mathbb{Z}/L\mathbb{Z})^*$, then the condition becomes that $C$ equals the identity of $G$. Though different primes $p$ may map to the same element of $G$, we view them as different elements of $G$ by labelling them with different numbers. The next result allows us to construct many such products \cite{alford1994there}. \begin{theorem} \label{many subsequences} Let $G$ be a finite abelian group and let $r>t>n=n(G)$ be integers.
Then any sequence of $r$ elements of $G$ contains at least $\left.\binom{r}{t}\right/\binom{r}{n}$ distinct subsequences of length at most $t$ and at least $t-n$, whose product is the identity. \end{theorem} Let $R$ be a sequence of $r$ elements of $G$. Though $R$ may contain equal elements of $G$, we label them with different numbers and view them as distinct elements. This step is necessary since the original proof may construct identical subsequences, but we view subsequences as different if their elements carry different labels. For example, let $G=(\mathbb{Z}/5\mathbb{Z})^*$. We view $\{19,29\}$ and $\{19,59\}$ as different subsequences of $G$, although both reduce to $\{4,4\}$ in the reduced residue system mod $5$. ~ \section{Infinitude of (-1,1)-Carmichael numbers} First we prove a theorem resembling Theorem 3.1 of \cite{alford1994there}, whose proof our argument follows closely. \begin{theorem} \label{Prachar} Suppose that $B\in\mathcal{B}$ and $a$ is a fixed non-zero integer. There exists a number $x_3(B)$ such that if $x\geq x_3(B)$ and $L$ is a squarefree integer not divisible by any prime exceeding $x^{(1-B)/2}$ and for which $\sum_{\mathrm{prime}\ q|L}1/q\leq(1-B)/32$, then there is a positive integer $k\leq x^{1-B}+|a|$ with $(k,aL)=1$, such that \begin{equation} \notag \#\{ d|L:dk+a\leq x,\ dk+a\text{ is prime} \}\geq\frac{2^{-D_B-\omega(a)-3}}{\log x}\#\{ d|L:1\leq d\leq x^B \}, \end{equation} ~ \noindent where $\omega(n)$ is the number of distinct prime factors of $|n|$. \end{theorem} \begin{proof} Let $x_3(B)=\max\{x_2(B),17^{(1-B)^{-1}},|a|^{(1-B)^{-1}}\}$. For each $d\in\mathcal{D}_B(x)$ which divides $L$, we divide some prime factor of $d$ out from $L$. Furthermore, we divide all prime factors of $(a,L)$ out from $L$, so as to obtain a number $L'$ which is not divisible by any number in $\mathcal{D}_B(x)$ and such that $(a,L')=1$.
Thus $\omega(L')\geq\omega(L)-D_B-\omega(a)$, and \begin{equation} \label{L' and L} \#\{ d|L':1\leq d\leq y \}\geq2^{-D_B-\omega(a)}\#\{ d|L:1\leq d\leq y \} \end{equation} ~ \noindent for any $y\geq1$. To see this, think of a divisor $d'$ of $L'$ as corresponding to a divisor $d$ of $L$ if and only if $d'$ divides $d$ and $d/d'$ divides $L/L'$. So if $d\leq y$ then the corresponding $d'$ is at most $y$. Moreover, for any divisor $d'$ of $L'$, the number of divisors $d$ of $L$ which correspond to $d'$ is at most the number of divisors of $L/L'$, which is at most $2^{D_B+\omega(a)}$. From \eqref{primes in arithmetic progression} we see that, for each divisor $d$ of $L'$ with $1\leq d\leq x^B$, we have \begin{equation} \label{lower bound} \pi(dx^{1-B};d,a)\geq\frac{\pi(dx^{1-B})}{2\varphi(d)}\geq\frac{dx^{1-B}}{2\varphi(d)\log(dx^{1-B})}\geq\frac{dx^{1-B}}{2\varphi(d)\log x}, \end{equation} ~ \noindent since $\pi(y)\geq y/\log y$ for all $y\geq17$ (see \cite{rosser1962approximate}). Furthermore, since any prime factor $q$ of $L$ is at most $x^{(1-B)/2}$, we can use Montgomery and Vaughan's explicit version of the Brun-Titchmarsh theorem \cite{montgomery1973large}, to get \begin{equation} \notag \pi(dx^{1-B};dq,a)\leq\frac{2dx^{1-B}}{\varphi(dq)\log(x^{1-B}/q)}\leq\frac{4}{\varphi(q)(1-B)}\frac{dx^{1-B}}{\varphi(d)\log x}\leq\frac{8}{q(1-B)}\frac{dx^{1-B}}{\varphi(d)\log x}. \end{equation} ~ Therefore, by \eqref{lower bound}, the number of primes $p\leq dx^{1-B}$ with $\modulo{p}{a}{d}$ and $((p-a)/d,L)=1$ is at least \begin{equation} \notag \begin{aligned} &\ \ \ \ \pi(dx^{1-B};d,a)-\sum_{\text{prime }q|L}\pi(dx^{1-B};dq,a)\\ &\geq\bracket{\frac{1}{2}-\frac{8}{1-B}\sum_{\text{prime }q|L}\frac{1}{q}}\frac{dx^{1-B}}{\varphi(d)\log x}\\ &\geq\frac{x^{1-B}}{4\log x}. 
\end{aligned} \end{equation} ~ \noindent Thus we have at least \begin{equation} \notag \frac{x^{1-B}}{4\log x}\#\{d|L':1\leq d\leq x^B\} \end{equation} ~ \noindent pairs $(p,d)$ where $p\leq dx^{1-B}$ is a prime, $\modulo{p}{a}{d}$, $((p-a)/d,L)=1$, $d|L'$ and $1\leq d\leq x^B$. Each such pair $(p,d)$ corresponds to an integer $(p-a)/d$ that is coprime to $L$ and \begin{equation} \notag \frac{p-a}{d}\leq x^{1-B}+\frac{|a|}{d}\leq x^{1-B}+|a|. \end{equation} ~ \noindent Since $x^{1-B}\geq|a|$, there is at least one integer $k\leq x^{1-B}+|a|$ with $(k,L)=1$ such that $k$ has at least \begin{equation} \notag \frac{x^{1-B}}{x^{1-B}+|a|}\frac{1}{4\log x}\#\{d|L':1\leq d\leq x^B\}\geq\frac{1}{8\log x}\#\{d|L':1\leq d\leq x^B\} \end{equation} ~ \noindent representations as $(p-a)/d$ with $(p,d)$ as above. Moreover, we have $(a,d)=1$ since $(a,L')=1$, so \begin{equation} \notag \bracket{\frac{p-a}{d},a}=(p-a,a)=(p,a). \end{equation} ~ \noindent If $(p,a)=p$, then $a=p$ since $p\geq a$, contradicting the condition $((p-a)/d,L)=1$. Thus for this integer $k$ we have $(k,a)=1$ and \begin{equation} \notag \begin{aligned} &\ \ \ \ \#\{ d|L:dk+a\leq x,\ dk+a\text{ is prime} \}\\ &\geq\frac{1}{8\log x}\#\{d|L':1\leq d\leq x^B\}\\ &\geq\frac{2^{-D_B-\omega(a)-3}}{\log x}\#\{ d|L:1\leq d\leq x^B \}, \end{aligned} \end{equation} ~ \noindent where we use \eqref{L' and L} in the last inequality. \end{proof} \noindent Now we recall the main theorem of this paper. ~ \noindent \textbf{Theorem 1.1.} \textit{For each $B\in\mathcal{B}$ and $E\in(0,1-(2\sqrt e)^{-1})$, there is a number $x_0=x_0(E,B)$ such that $C(x)\geq x^{EB}$ for all $x\geq x_0$.} \begin{proof} Let $0<\varepsilon<EB$ be a fixed number, $\theta=(1-E)^{-1}$ and let $y\geq2$ be a parameter. Denote by $\mathcal{Q}$ the set of primes $q\in(y^\theta/\log y,y^\theta]$ for which $q-1$ is free of prime factors exceeding $y$.
By \eqref{shifted primes without large factors}, \begin{equation} \label{calQ} \begin{aligned} |\mathcal{Q}|&=\pi_1(y^\theta,y)-\pi_1(y^\theta/\log y,y)\\ &\geq\gamma(E)\frac{y^\theta}{\log y^\theta}-\frac{2y^\theta}{(\log y)\log(y^\theta/\log y)}\\ &\geq\frac{\gamma(E)}{2}\frac{y^\theta}{\log y^\theta} \end{aligned} \end{equation} ~ \noindent for all sufficiently large $y$. Let $L$ be the product of the primes $q\in\mathcal{Q}$; then \begin{equation} \label{log(L)} \log L\leq|\mathcal{Q}|\log y^\theta\leq\pi(y^\theta)\log y^\theta\leq2y^\theta \end{equation} ~ \noindent for all sufficiently large $y$. Let $\lambda(n)$ denote the Carmichael lambda function, the largest order of an element in $(\mathbb{Z}/n\mathbb{Z})^*$. Note that $\lambda(L)$ is the least common multiple of the numbers $q-1$ for those primes $q$ that divide $L$. Since each such $q-1$ is free of prime factors exceeding $y$, we know that if the prime power $p^s$ divides $\lambda(L)$ then $p\leq y$ and $p^s\leq y^\theta$. Thus if we let $p^{s_p}$ be the largest power of $p$ with $p^{s_p}\leq y^\theta$, then \begin{equation} \label{lambda(L)} \lambda(L)\leq\prod_{p\leq y}p^{s_p}\leq\prod_{p\leq y}y^\theta=y^{\theta\pi(y)}\leq e^{2\theta y} \end{equation} ~ \noindent for all sufficiently large $y$. Let $\delta=\varepsilon\theta/(4B)$ and let $x=e^{y^{1+\delta}}$. By Theorem 5 of \cite{rosser1962approximate} we have \begin{equation} \notag \begin{aligned} \sum_{\text{prime }q|L}\frac{1}{q}&\leq\sum_{y^\theta/\log y<q\leq y^\theta}\frac{1}{q}\\ &\leq\log\log y^\theta+\frac{1}{2(\log y^\theta)^2}-\log\log(y^\theta/\log y)+\frac{1}{2(\log(y^\theta/\log y))^2}\\ &=\log\frac{\theta\log y}{\theta\log y-\log\log y}+\frac{1}{2(\log y^\theta)^2}+\frac{1}{2(\log(y^\theta/\log y))^2}\\ &\leq\frac{1-B}{32} \end{aligned} \end{equation} ~ \noindent for all sufficiently large $y$. Then we can apply Theorem~\ref{Prachar} with $B$, $x$, $L$ and $a=-1$.
Thus for all sufficiently large $y$, there is an integer $k$ coprime to $L$ for which the set $\mathcal{P}$ of primes $p\leq x$ with $p=dk-1$ for some divisor $d$ of $L$ satisfies \begin{equation} \label{calP} |\mathcal{P}|\geq\frac{2^{-D_B-3}}{\log x}\#\{d|L:1\leq d\leq x^B\}. \end{equation} ~ \noindent Let $G=(\mathbb{Z}/L\mathbb{Z})^*\times (\mathbb{Z}/2\mathbb{Z})^+$. Since $\lambda(n)$ is even for all $n\in\mathbb{Z}_{\geq3}$, we conclude from Theorem \ref{n(G)}, \eqref{log(L)} and \eqref{lambda(L)} that \begin{equation} \label{n(G) upper bound} n(G)<\lambda(L)\bracket{1+\log\frac{2\varphi(L)}{\lambda(L)}}\leq\lambda(L)(1+\log2+\log L)\leq e^{3\theta y} \end{equation} ~ \noindent for all sufficiently large $y$. The product of any \begin{equation} \notag u:=\floor{\frac{\log x^B}{\log y^\theta}}=\floor{\frac{B\log x}{\theta\log y}} \end{equation} ~ \noindent distinct prime factors of $L$ is a divisor $d$ of $L$ with $d\leq x^B$. We deduce from \eqref{calQ} that \begin{equation} \notag \#\{d|L:1\leq d\leq x^B\}\geq\binom{\omega(L)}{u}\geq\bracket{\frac{\omega(L)}{u}}^u\geq\bracket{\frac{\gamma(E)y^\theta}{2B\log x}}^u=\bracket{\frac{\gamma(E)}{2B}y^{\theta-1-\delta}}^u. \end{equation} ~ \noindent Thus, by $\eqref{calP}$ and the identity $(\theta-1-\delta)B/\theta=EB-\varepsilon/4$, we have \begin{equation} \label{calP2} |\mathcal{P}|\geq\frac{2^{-D_B-3}}{\log x}\bracket{\frac{\gamma(E)}{2B}y^{\theta-1-\delta}}^{\floor{\frac{B\log x}{\theta\log y}}} \end{equation} ~ \noindent for all sufficiently large $y$. Now take $\mathcal{P}'=\mathcal{P}\backslash\mathcal{Q}$. Since $|\mathcal{Q}|\leq y^\theta$, we have by \eqref{calP2} that \begin{equation} \label{calP'} |\mathcal{P}'|\geq x^{EB-\varepsilon/2} \end{equation} ~ \noindent for all sufficiently large $y$. We view each element $p\in\mathcal{P}'$ as the element $(\overline{p},-1)\in G$, where $\overline{p}$ denotes the residue of $p$ in $(\mathbb{Z}/L\mathbb{Z})^*$.
As mentioned above, if $\overline{p_i}=\overline{p_j}$ but $p_i\neq p_j$, then we view $(\overline{p_i},-1)$ and $(\overline{p_j},-1)$ as two different elements of $G$. If $\mathcal{S}$ is a subsequence of $G$ that contains more than one element and if \begin{equation} \notag \Pi(\mathcal{S}):=\prod_{g\in\mathcal{S}}g=1_G, \end{equation} ~ \noindent then we can construct a $(-1,1)$-Carmichael number. Firstly, $|\mathcal{S}|$ must be even, since the second component of each element is $-1$. Moreover, the product of the first components of all elements is $1$ mod $L$, say \begin{equation} \notag \Pi_1(\mathcal{S}):=\prod_{p}\modulo{p}{1}{L}, \end{equation} ~ \noindent where the product is over all preimages of the first components of elements in $\mathcal{S}$. Furthermore, each member of $\mathcal{P}$ is $-1$ mod $k$, so that $\modulo{\Pi_1(\mathcal{S})}{1}{k}$, since $|\mathcal{S}|$ is even. Thus $\modulo{\Pi_1(\mathcal{S})}{1}{kL}$ since $(k,L)=1$. If $p\in\mathcal{P}'$ then $p\in\mathcal{P}$, so that $p+1=kd\,|\,kL\,|\,\Pi_1(\mathcal{S})-1$. Thus $\Pi_1(\mathcal{S})$ is a $(-1,1)$-Carmichael number. Let $t=e^{y^{1+\delta/2}}$. Evidently $t>n(G)$ for all sufficiently large $y$. Then by Theorem \ref{many subsequences}, we see that the number of $(-1,1)$-Carmichael numbers of the form $\Pi_1(\mathcal{S})$, where $\floor{t}-n(G)\leq|\mathcal{S}|\leq\floor{t}$, is at least \begin{equation} \notag \bigdivide{\binom{|\mathcal{P}'|}{\floor{t}}}\binom{|\mathcal{P}'|}{n(G)}\geq\bigdivide{{\bracket{\frac{|\mathcal{P}'|}{\floor{t}}}^{\floor{t}}}}|\mathcal{P}'|^{n(G)}\geq\bracket{x^{EB-\varepsilon/2}}^{\floor{t}-n(G)}\floor{t}^{-\floor{t}}\geq x^{t(EB-\varepsilon)} \end{equation} ~ \noindent for all sufficiently large $y$, using $\eqref{n(G) upper bound}$ and $\eqref{calP'}$. Since each $(-1,1)$-Carmichael number we construct satisfies $\Pi_1(\mathcal{S})\leq x^t$, we have $C(X)\geq X^{EB-\varepsilon}$ for all sufficiently large $y$, where $X:=x^t$.
Moreover, since $y$ and $X$ can be uniquely determined by each other, we derive $C(x)\geq x^{EB-\varepsilon}$ for each $B\in\mathcal{B}$, $E\in(0,1-(2\sqrt e)^{-1})$, $0<\varepsilon<EB$ and all sufficiently large $x$. Since $E$ lies in an open interval, we may choose $E'\in(0,1-(2\sqrt e)^{-1})$ with $E'>E$ and let $\varepsilon=(E'-E)B$; then we have $C(x)\geq x^{E'B-(E'-E)B}=x^{EB}$ for sufficiently large $x$, which completes the proof. \end{proof} Unfortunately, our proof is not applicable to the remaining cases $a\neq\pm1$. In fact, we can only handle the case where $a$ has a small order mod $k$ (see Theorem \ref{Prachar}). Since the properties of $k$ are unknown, $k$ may ruin the estimates in some arguments. Nevertheless, we believe that for all non-zero integers $a$, $b$, there are infinitely many $(a,b)$-Carmichael numbers. \section*{Acknowledgement} The ideas came to us after seeing the papers \cite{alford1994there} and \cite{wright2018there}. The method we used in this paper is a simple modification of the method in \cite{alford1994there}. ~
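The mechanism behind the proof, collecting primes $p$ with $p+1$ dividing a common modulus and then extracting a subset whose product is $1$ modulo that modulus, can be run in miniature. In the following sketch (our illustration only; the toy modulus $24$ stands in for $kL$) the unique product found is the smallest $(-1,1)$-Carmichael number $385$:

```python
from itertools import combinations
from math import prod

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

M = 24  # toy stand-in for the modulus k*L in the proof
# Primes p with p + 1 | M: if a product n of two or more distinct such primes
# satisfies n ≡ 1 (mod M), then p + 1 | M | n - 1 for each p | n,
# so n is a (-1,1)-Carmichael number.
P = [p for p in range(2, M) if is_prime(p) and M % (p + 1) == 0]
found = sorted(
    prod(c)
    for r in range(2, len(P) + 1)
    for c in combinations(P, r)
    if prod(c) % M == 1
)
print(P, found)  # P = [2, 3, 5, 7, 11, 23]; found = [385]
```

Indeed $385=5\times7\times11\equiv1\pmod{24}$ while $6$, $8$ and $12$ all divide $24$, hence $384$.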